Supporting Ethical Decisions in Wearable Technology with Deontic Logic: A Brief Introduction

Electronic Communications of the EASST, Volume 81 (2022)
9th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation - Doctoral Symposium, 2021
Guest Editors: Sven Jörges, Anna-Lena Lamprecht, Anila Mjeda, Stefan Naujokat
ECEASST Home Page: http://www.easst.org/eceasst/ ISSN 1863-2122

Zafeirakopoulos Dimitrios, Almpani Sofia and Stefaneas Petros
dimzaf252@yahoo.gr, National Technical University of Athens, Greece

Abstract: The purpose of this article is to give a brief introduction to deontic logic in the world of wearable technology and to show how we hope deontic logic could be used to address some of the issues concerning both the users and the developers of wearable technology, by providing ethical frameworks expressed in formal logic. We begin by presenting a short overview of what we view as the major issues concerning wearable technology. After a brief introduction to deontic logic, we show how deontic logic might be helpful in addressing the concerns we have presented. Finally, we give a small, introductory application of deontic logic, showing exactly how it can be used to address one of these issues regarding wearable technology with a concrete example.

Keywords: Deontic Logic, Wearable Technology, Wearable Robots, Ethical Issues, Data Privacy, Responsibility, Artificial Intelligence

1 Introduction

We define wearable technology as any kind of electronic device designed to be worn on the user's body. Such devices can gather and analyze information concerning the user's vital signs. This covers a wide range of applications, from wearable robots that have mobility capabilities to simpler devices, like smartwatches, that can be used to track the user's biometric data. In this paper we concern ourselves both with issues relevant to specific wearable technologies, such as those supervising patients in hospitals, and with issues concerning any wearable device that collects its users' data. Such technologies, which hope to bring human and machine interaction closer together, are undoubtedly a growing part of the future, and it is important to take the necessary steps to ensure that, as they spread, they are developed in ways that keep them both safe to use and respectful of the users' individual needs and desires. Introducing ethics to machines such as robots has been an ongoing effort with many attempts [AA07]. Among the various approaches, there has been research utilizing deontic logic specifically, as a tool to bind a robot's behavior within desirable limits [BAB06]. For wearable technology specifically, there have been attempts to address some ethical matters by focusing on data privacy [GMP20]. Furthermore, in terms of ensuring an ethical approach to artificial intelligence, there have been various attempts, from the Three Laws of Robotics [Asi42] to modern approaches at creating guides with design suggestions [CCT20]. Another proposed idea has been to embed ethicists in the development process [MFT+22], hoping their inclusion would be helpful.
What these approaches lack, and what we hope to bring, is the mathematical foundation that deontic logic offers. Instead of guidelines with suggestions on how artificial intelligence agents, which can be embedded in wearables and called upon to make choices, ought to be designed, we propose the creation of concrete ethical frameworks in which ethics and obligations are spelled out thoroughly in mathematical form, leaving little room for grey areas and confusion.

A logic-based approach is particularly suitable for wearable devices, as one of the areas where wearable devices are used is the health industry [WH19, LT15]. Health is a field of extreme importance. While every technology needs to be efficient at what it sets out to do, this is particularly true of technologies concerning human health. Therefore, we see wearable technology as an especially suitable area for the benefits deontic logic can bring. With this paper we will show how deontic logic can be used to create theoretical models of a robust ethical guideline for the correct behavior of artificial intelligence agents utilized by wearable devices. Such a guideline could be used in further research to create concrete software for wearable technology. Additionally, this paper can help open a dialogue on how deontic logic, or other approaches based on mathematics, could be used to improve wearable technology.

In the next part of this paper we highlight the main issues concerning wearable technology. In the section after that, an overview of deontic logic is given. Following that, we present how deontic logic can help address the issues concerning wearable technology. Afterwards, we present an example of deontic logic being used to address one of those issues. The final section concludes with an overview of our paper.

2 Issues Concerning Wearable Technology

Wearable technology is a growing area promising innovations and improvements in quality of life, for both patients and healthy people [RBN20]. Our paper will mostly focus on the former.

While wearable technology has been known to help patients' rehabilitation and provide applications in physical medicine [Bon05], there is a number of concerns regarding its use. Since wearable devices often concern patients, it is imperative that human caretakers not neglect the patients using them. Neglect could happen either due to viewing the users of some devices, such as wearable robots, as less important, or due to assuming that, since their physical needs are addressed by the robot, there is no need for further assistance [FKHF18]. That would be a problem, as humans are more suitable for the vital aspect of emotional care, which a robot cannot provide so far.

A major concern relating to all wearable devices is the privacy of patients' personal information [MAMI18]. It is likely that a wearable device will collect data from the patient for a variety of reasons: for example, to monitor their health, or to monitor its own progress and how well it is addressing the patient's needs, in an attempt to further improve itself (if we are referring to a wearable device enhanced with some form of artificial intelligence). In such cases, if a patient desires that their data not be collected or shared with certain individuals, then it is of vital importance that their wishes be respected.
Robots in general raise concerns regarding privacy [TCT20]. Beyond that, there is the added risk of a person's wearable device being hacked by malicious actors [KFFH20]. That could lead to their data being stolen, or even render them unable to use the wearable device, if its function is compromised.

Finally, it is important to know who should be held responsible each time there is a problem with a device like a wearable robot [FKHF18, Mat04]. For example, if a wearable robot user ends up injuring a third party with their robot, who would be responsible [ZF20]? Would the users themselves be responsible, due to bad use of the robot? Would the designer company be liable, because they failed to develop a robot with safeguards that would prevent such things from happening? Furthermore, if we examine wearable robots equipped with some form of artificial intelligence, there might be times where the device itself would have to make choices, for example regarding whether to prioritize a given patient's privacy or their safety. In the case of robots capable of making choices, would it be possible for the robots themselves to be held responsible if something goes wrong [FFMR19]? And would introducing the concept of responsibility for machines inevitably give rise to the question of whether some robots should be given rights too [Sul11]?

3 Introduction to Deontic Logic

The term deontic logic refers to a formal logic that is part of a group of logics referred to as modal logics. By "modal logics" we refer to a large family of logics that share a mathematical foundation and common attributes, like those of propositional logic. Beyond this common core, however, each modal logic has a set of unique operators that deals with its particular field of interest, such as ethics, beliefs, or existence, depending on the individual logic [Gar21].

Deontic logic specifically is an area of formal logic which deals with matters of ethics, such as whether a certain action is obligatory, impermissible, etc. To achieve that, deontic logic uses the standard machinery of formal logic along with additional operators, specifically designed to tackle ethical matters. These are as follows [MV21]:

OB: the operator indicating that the following statement is obligatory.
PE: the operator indicating that the following statement is permissible.
IM: the operator indicating that the following statement is impermissible.
OM: the operator indicating that the following statement is omissible.
OP: the operator indicating that the following statement is optional.

The function of each operator will become clearer by showing how all of them can be defined through the use of only one. The operator traditionally used for that is OB, the one indicating that something is obligatory. OB can be used to define each of the other operators, as follows [Zaf18]:

PE p ⇔ ¬OB¬p
IM p ⇔ OB¬p
OM p ⇔ ¬OBp
OP p ⇔ (¬OBp ∧ ¬OB¬p)

The first statement says that something is permissible if it is not obligatory not to do it. The second says that something is impermissible if it is obligatory not to do it. The third says that something is omissible if it is not obligatory to do it. The fourth says that something is optional if it is neither obligatory to do it nor obligatory not to do it.
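To make the interdefinability concrete, below is a minimal Python sketch (our illustration, not part of the formalism itself), in which a normative system is represented simply as the set of literals that are obligatory, with "~p" standing for ¬p; the remaining operators are derived from OB exactly as in the equivalences above. The string encoding and the function names are our own assumptions.

def neg(p: str) -> str:
    """Negate a propositional literal: p <-> ~p."""
    return p[1:] if p.startswith("~") else "~" + p

def OB(system: set, p: str) -> bool:
    """Obligatory: p is obligatory iff it appears in the normative system."""
    return p in system

def PE(system: set, p: str) -> bool:
    """Permissible: PE p <=> not OB ~p."""
    return not OB(system, neg(p))

def IM(system: set, p: str) -> bool:
    """Impermissible: IM p <=> OB ~p."""
    return OB(system, neg(p))

def OM(system: set, p: str) -> bool:
    """Omissible: OM p <=> not OB p."""
    return not OB(system, p)

def OP(system: set, p: str) -> bool:
    """Optional: OP p <=> not OB p and not OB ~p."""
    return not OB(system, p) and not OB(system, neg(p))

# Example: a system where not divulging the user's data is obligatory.
system = {"~divulge"}
assert IM(system, "divulge")      # divulging is impermissible
assert not PE(system, "divulge")  # hence not permissible
assert OP(system, "exercise")     # an unregulated act remains optional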
A very important aspect of deontic logic that sets it apart from other logics is that it acknowledges that obligations may not be fulfilled. The set {OB x, ¬x} is not a contradiction in deontic logic, the way that {x, ¬x} is. All obligations may very well go unfulfilled in a specific example, without an inherent "penalty" from the point of view of deontic logic. This is particularly useful for creating models describing choices, obligations and other ethical matters, because it allows for examining both whether an act is, for example, obligatory or impermissible, and whether the act took place or not. This is advantageous when examining the actions of humans, who are perfectly capable of breaking their obligations, obligations which may at times even have a contradictory nature. Furthermore, it allows artificial intelligence agents to compare situations where they might take different actions, one time following an obligation and one time ignoring it, in an attempt to analyze which choice leads to better results. When giving a core of "ethics" to an artificial intelligence agent, deontic logic is something much "closer" to what a machine can understand than natural language, since its foundation lies in mathematics, which is far more objective and clear than natural language, which can be riddled with subjectivities and uncertainties.

The ability of deontic logic to create clear models for various scenarios makes it a very capable tool when it comes to comparing different scenarios involving sets of rules and examining whether they are being followed or not. A very simple system describing an obligation to do x with the obligation being followed through is:

{OB x, x}

while one where the same obligation exists but is not followed through is:

{OB x, ¬x}

We see that deontic logic provides very simple and clear formulas for handling obligations and ethical matters, something that spoken language might at times not provide as efficiently, particularly when dealing with large sets of rules. Furthermore, precisely because deontic logic is clear, it allows us to pinpoint a potential contradiction, such as conflicting obligations within a set of rules, much more easily than if the same rules were described in natural language.

4 Wearable Technology and Deontic Logic

In this section, we revisit the issues concerning wearable devices that we highlighted previously and examine how deontic logic can help with them, directly or indirectly, to varying degrees.

Matters concerning data privacy and responsibility could definitely benefit from approaches based on deontic logic. For data privacy, we would like wearable devices to protect the data of their users and not share it against their will. As wearable devices are designed by engineers, we propose that a clearly expressed guideline, used as the basis for their operating systems, would be a good baseline for ensuring that the designers follow the rules we want them to follow and respect patient privacy. Naturally, as with every technology, if there is malicious intent it will always be possible for designers to bypass guidelines and ignore them. However, as a first step, we propose that deontic logic can be used to make blueprints for how a piece of wearable technology should operate and what rules it ought to follow.
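A blueprint expressed this way can also be checked mechanically. The following sketch (again our own illustration; the audit function and literal encoding are hypothetical, not an existing API) classifies each obligation of a guideline as fulfilled, violated, or undetermined against the facts that actually hold, reflecting the point above that {OB x, ¬x} marks a violation rather than a contradiction.

def neg(p: str) -> str:
    return p[1:] if p.startswith("~") else "~" + p

def audit(obligations: set, facts: set) -> dict:
    """Classify each obligation against what actually happened."""
    status = {}
    for p in obligations:
        if p in facts:
            status[p] = "fulfilled"
        elif neg(p) in facts:
            status[p] = "violated"  # OB p alongside ~p: a violation, not a contradiction
        else:
            status[p] = "undetermined"
    return status

# Hypothetical guideline: keep the user out of risk (nr) and respect
# their opt-out (~divulge). Suppose the device divulged the data anyway:
print(audit({"nr", "~divulge"}, {"nr", "divulge"}))
# -> {'nr': 'fulfilled', '~divulge': 'violated'}  (key order may vary)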
Beyond using this as a tool for communicating to designers what the necessary guidelines are, as well as a blueprint for developing the software of each wearable device, a further benefit of utilizing deontic logic is that it could add another argument as to why a particular wearable device equipped with artificial intelligence is safe to use, by giving it a concrete ethical guideline to follow. We hope this could further help persuade the public that such a device is safe to use, beyond the technical safeguards that already exist from an engineering standpoint. Naturally, as data privacy is a growing concern both in the scientific world and among the population at large, many potential users would have doubts that a wearable robot would respect their data [Cal11]. Communicating to them not only that guidelines are in place to help ensure that, but that these guidelines have their foundation in something as absolute as mathematical formulas, could help alleviate their concerns a great deal.

Of course, some matters regarding data privacy are more difficult than others to handle with deontic logic. For example, while deontic logic can be used as a guideline on how wearable devices should ideally be designed to ensure they respect their users' data, deontic logic is not a useful tool against hacking from external sources, at least not within the scope of this paper.

However, the area where deontic logic can absolutely shine when it comes to wearable device issues is responsibility and choice, an area that, of course, intersects with concerns regarding data privacy too. Deontic logic can absolutely be used as a tool to showcase how, for example, a wearable device equipped with an artificial intelligence agent making choices on its own can properly function, maintaining a balance between ensuring the user's well-being and respecting their choices. As we mentioned earlier when introducing deontic logic, its mathematical form allows it to be much "closer" to what can be understood by a machine than human language.

Finally, deontic logic might also help with the question of whether patients using wearables might end up being seen as less in need of care. A code of obligations for caretakers with a concrete foundation in mathematical formulas, such as deontic logic provides, can indeed be helpful in showing that patients using wearable devices deserve as much attention as those without.

5 Deontic Logic Application

In the following part of this paper we present a simple model of the choices an artificial intelligence agent that "exists" within a wearable device might have to make, depending on the circumstances, and show what limitations it might have. The scenario we present is a very simplistic one and, of course, not an accurate description of a real-life situation. We use it as a means to introduce the use that deontic logic might have for matters of responsibility and choice for artificial intelligence agents in wearable devices. Specifically, it involves a single user of a wearable device designed to monitor the health of the patient using it. The patient might have agreed to have their personal data shared with a health center, such as a hospital, or might have opted out of this service and instead prefer to keep their data private.
The wearable device in question will need to respect their wishes, while also ensuring that if the patient is facing a serious problem, they will not be left unattended. In each case, the device will calculate the full tree of possibilities, until it reaches a potential violation of an obligation or exhausts all the possibilities. The set of rules the wearable device has, expressed in deontic logic, is as follows (the parentheses after each "OB" are added for readability):

OB(nr)
OB(divulge) or OB(¬divulge)

Of the pair of rules in the second line above, only one will "exist" in the wearable device's system, depending on the patient's preferences. The first rule says that the wearable has an obligation to ensure that the user will not face serious health risk, something that within the scope of this scenario we equate with the possibility of the patient being both very sick and unwilling to convey information regarding their health data to, for example, a hospital that might help them. We also add a rule capturing this very specific point:

sick ∧ ¬divulge → ¬nr

The above means that if the patient is revealed to be very sick and won't communicate their health data to the hospital, the situation is such that there are serious risks. In this model, "nr" stands for "no serious risks". We note that in contrast to the previous set of rules, this one indicates not an obligation but a natural consequence, in our theoretical scenario. We treat the patient being very sick and not divulging their data as an event that automatically puts them in serious jeopardy, as far as the wearable device is concerned. This falls in line with what we mentioned earlier, regarding how systems built on deontic logic can describe both absolute facts and obligations which may or may not be fulfilled. Further rules of our hypothetical simple scenario are that:

1. if the data regarding the health status of the patient are divulged, then the patient is treated as not facing any serious risk, as far as the wearable device is concerned. Handling the patient's situation now falls within the health facility's responsibilities.

divulge → nr

2. if the data regarding the health status of the patient are such that the patient is diagnosed as not being sick, then the patient is treated as not facing any serious risk.

¬sick → nr

3. finally, the wearable device has the means to check the patient's health at unspecified intervals and can conclude whether they are particularly sick or not.

check → sick/¬sick

Within the scope of this model we are not concerned with how, in real life, each individual wearable device would collect such data from the patient. It could be, for example, through monitoring their heart rate. The purpose of this paper is to present a simplified description of such scenarios using deontic logic. Naturally, a real-life approach would require far more complicated systems, which would nonetheless follow the same basic principles. The final system of rules guiding the "mind" of our wearable device is as follows (divulge/¬divulge depends on what the user has chosen):

{OB(nr), OB(divulge/¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick}

We will now use this model to examine how the various scenarios can play out and how a wearable device equipped with deontic logic can operate.
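Before walking through the cases one by one, we give a minimal executable sketch of the procedure just described (our illustration, not the authors' software; the names decide, closure and violations are assumptions, and the escalation fallback anticipates the human-supervisor approach discussed at the end of this section): for each available action, the observed facts are closed under the conditional rules, the resulting outcome is checked against the obligations, and an action that violates nothing is preferred.

def neg(p: str) -> str:
    return p[1:] if p.startswith("~") else "~" + p

# The conditional rules of the system, as (premises, conclusion) pairs:
# sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr
RULES = [
    ({"sick", "~divulge"}, "~nr"),
    ({"divulge"}, "nr"),
    ({"~sick"}, "nr"),
]

def closure(facts: set) -> set:
    """Apply the conditional rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def violations(obligations: set, facts: set) -> set:
    """Obligations whose negation holds in a given outcome."""
    return {p for p in obligations if neg(p) in facts}

def decide(obligations: set, observed: set) -> str:
    """Explore both available actions and prefer one that violates nothing."""
    outcomes = {a: closure(observed | {a}) for a in ("divulge", "~divulge")}
    ok = [a for a, f in outcomes.items() if not violations(obligations, f)]
    for a in ok:            # among safe actions, follow the user's preference
        if a in obligations:
            return a
    if ok:
        return ok[0]
    # Every action violates some obligation: the dilemma of case 3 below.
    return "escalate to human supervisor"

print(decide({"nr", "~divulge"}, {"check", "~sick"}))  # case 1 (opt-out variant): ~divulge
print(decide({"nr", "divulge"}, {"check", "sick"}))    # case 2: divulge
print(decide({"nr", "~divulge"}, {"check", "sick"}))   # case 3: escalate to human supervisor

The three calls correspond to the three cases examined next and reproduce the outcomes derived by hand below.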
1) The patient not being particularly sick. The system is as follows:

{OB(nr), OB(divulge/¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick}

At an unspecified interval, the wearable device might check the user's health and find them either sick or not sick. We illustrate these steps by adding the necessary terms to the system, gradually. We note that in this case the scenario plays out the same whether the patient opted to divulge their data or not, so we will leave that particular part open-ended, as it will not be of importance.

{OB(nr), OB(divulge/¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check}

and then

{OB(nr), OB(divulge/¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, ¬sick}

which, due to ¬sick → nr, automatically leads to

{OB(nr), OB(divulge/¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, ¬sick, nr}

The wearable device concludes the user is not at any risk, and the obligation for the patient not to be at any risk remains fulfilled. When it comes to whether it should divulge the data or not, it can easily follow that obligation too, whichever one it is, depending on the patient's wishes, without facing any conflict with the obligation of the patient not facing any risk, as that one is already fulfilled.

2) The patient being particularly sick and having opted to divulge their data.

{OB(nr), OB(divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick}

Again, at an unspecified interval, the wearable device checks the patient's health status.

{OB(nr), OB(divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check}

and then

{OB(nr), OB(divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick}

The wearable device is now called to make a choice: should it divulge or should it not? The way it will handle that dilemma is by examining the two possibilities and their outcomes and deciding accordingly:

{OB(nr), OB(divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick, divulge}

{OB(nr), OB(divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick, ¬divulge, ¬nr} (due to sick ∧ ¬divulge → ¬nr)

The wearable device will know to follow the first choice, not only because the patient opted for their data to be divulged, but also because not doing so would demonstrably lead to serious risks to their health. The device would make the exact same choice if the user were indifferent to whether their data is divulged or not, in order to avoid them facing any serious risks.

3) The patient being particularly sick and having opted not to divulge their data. The system is as follows:

{OB(nr), OB(¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick}

which, if the patient is indeed sick, as before, will lead to

{OB(nr), OB(¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick}

As before, the wearable device is facing a choice: should it divulge or not? It attempts to resolve the situation by comparing how the outcome would look each time.
{OB(nr), OB(¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick, divulge}

{OB(nr), OB(¬divulge), sick ∧ ¬divulge → ¬nr, divulge → nr, ¬sick → nr, check → sick/¬sick, check, sick, ¬divulge, ¬nr} (due to sick ∧ ¬divulge → ¬nr)

We are met with a unique situation, one in which, no matter what the wearable device chooses to do, it will inevitably violate one obligation: in the first instance, the user's desire for their data not to be shared; in the second, the obligation to protect its user from facing serious risk. There are a number of ways this situation could be resolved, and we present it more to open a dialogue than to provide a specific solution. One approach, which we support, would be for the device to send a signal to a human supervisor, instructing them to talk with the user. This would not divulge the user's health data, but it would bring them in contact with a human who could discuss with them and figure out an approach that might help, as humans will, at least for the foreseeable future, always have more methods of dealing with other humans than a pre-programmed machine. Of course, we could expand the deontic logic framework in our example to incorporate this action within the terms described in deontic logic. However, we choose to leave the system as it is, to illustrate that when it comes to artificial intelligence agents, in wearable devices and in general, human involvement could also be needed.

6 Conclusion

Deontic logic can be used to address some of the issues pertaining to wearable technology, particularly matters of privacy, responsibility, and choice. Because deontic logic has its basis in mathematics, it provides a clear representation of moral statements, and clear representation allows for more efficient communication. This could both help engineers aiming to turn deontic logic guidelines into code and alleviate concerns among the general population, by explicitly showing the wearable devices' moral guidelines. An approach focusing just on software would not be accessible to the general population. Similarly, an approach focusing just on natural language falls short, because natural language is inherently less concise than mathematical formulas. While the example we provided relied on a simplified scenario, we maintain that it is a useful introduction to how deontic logic can serve as a guideline for protecting the privacy of users of wearable devices, while also ensuring that general goals, such as keeping the user healthy and safe, are taken into consideration, by creating a concrete ethical framework expressed in mathematics. This paper, above all, means to serve as an introduction to approaches incorporating deontic logic into the world of wearable devices and to start a dialogue that could spark further research into the subject.

Acknowledgements: The research for this paper was supported by E.L.K.E. (Special Account for Research Funding) of the National Technical University of Athens.

Bibliography

[AA07] M. Anderson, S. Anderson. Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine 28:15–26, 12 2007.

[Asi42] I. Asimov. Runaround. Street & Smith, 1942.

[BAB06] S. Bringsjord, K. Arkoudas, P. Bello. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems 21:38–44, 07 2006. doi:10.1109/MIS.2006.82
[Bon05] P. Bonato. Advances in wearable technology and applications in physical medicine and rehabilitation. Journal of NeuroEngineering and Rehabilitation 2:2, 03 2005. doi:10.1186/1743-0003-2-2

[Cal11] M. R. Calo. Peeping Hals. Artificial Intelligence 175(5):940–941, 2011. Special Review Issue. doi:10.1016/j.artint.2010.11.025

[CCT20] European Commission, Directorate-General for Communications Networks, Content and Technology. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment. Publications Office, 2020. doi:10.2759/002360

[FFMR19] E. Fosch Villaronga, H. Felzmann, T. Mahler, M. Ramos Montero. Cloud services for robotic nurses? Assessing legal and ethical issues in the use of cloud services for healthcare robots. 01 2019. doi:10.1109/IROS.2018.8593591

[FKHF18] H. Felzmann, A. Kapeller, A.-M. Hughes, E. Fosch Villaronga. Ethical, legal and social issues in wearable robotics: Perspectives from the work of the COST Action on Wearable Robots. 10 2018.

[Gar21] J. Garson. Modal Logic. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2021/entries/logic-modal/, 2021.

[GMP20] M. Gross, R. Miller, A. Pascalev. Ethical Implementation of Wearables in Pandemic Response: A Call for a Paradigm Shift. 05 2020.

[KFFH20] A. Kapeller, H. Felzmann, E. Fosch Villaronga, A.-M. Hughes. A Taxonomy of Ethical, Legal and Social Implications of Wearable Robots: An Expert Perspective. Science and Engineering Ethics 26:1–19, 12 2020. doi:10.1007/s11948-020-00268-4

[LT15] C. Lutz, A. Tamo Larrieux. RoboCode-Ethicists – Privacy-friendly robots, an ethical responsibility of engineers? 06 2015. doi:10.1145/2786451.2786465

[MAMI18] P. Maurice, L. Allienne, A. Malaisé, S. Ivaldi. Ethical and Social Considerations for the Introduction of Human-Centered Technologies at Work. Pp. 131–138, 09 2018. doi:10.1109/ARSO.2018.8625830

[Mat04] A. Matthias. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6:175–183, 09 2004. doi:10.1007/s10676-004-3422-1

[MFT+22] S. McLennan, A. Fiske, D. Tigard, R. Müller, S. Haddadin, A. Buyx. Embedded ethics: a proposal for integrating ethics into the development of medical AI. BMC Medical Ethics 23, 01 2022. doi:10.1186/s12910-022-00746-3

[MV21] P. McNamara, F. Van De Putte. Deontic Logic. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2021/entries/logic-deontic/, 2021.

[RBN20] C. Rodriguez-Guerrero, J. Babic, D. Novak. Wearable Robots: Taking a Leap From the Lab to the Real World [From the Guest Editors]. IEEE Robotics & Automation Magazine 27:20–21, 03 2020. doi:10.1109/MRA.2020.2967004

[Sul11] J. Sullins. When is a robot a moral agent? Machine Ethics 6:151–162, 01 2011.

[TCT20] M. Tavakoli, J. Carriere, A. Torabi. Robotics, Smart Wearable Technologies, and Autonomous Intelligent Systems for Healthcare During the COVID-19 Pandemic: An Analysis of the State of the Art and Future Vision. Advanced Intelligent Systems 2, 05 2020. doi:10.1002/aisy.202000071
[WH19] B. Wright, C. Hobbs. Improving Quality of Life with Wearable Robotics. Retrieved from: https://coe.gatech.edu/news/2019/09/improving-quality-life-wearable-robotics, 2019.

[Zaf18] D. Zafeirakopoulos. Development of Models for Ethical Behaviour of Artificial Intelligence through use of Deontic Logic. Diploma thesis, School of Electrical and Computer Engineering, National Technical University of Athens, 2018.

[ZF20] L. Zardiashvili, E. Fosch Villaronga. "Oh, Dignity too?" Said the Robot: Human Dignity as the Basis for the Governance of Robotics. Minds and Machines 30, 03 2020. doi:10.1007/s11023-019-09514-6