Article

Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design


Abstract

In this article the authors explore the various ways in which robot behaviour is regulated. A distinction is drawn between imposing regulations on robots, imposing regulation by robots, and imposing regulation in robots. Two angles are examined in depth: regulation that aims at influencing human behaviour and regulation whose scope is the behaviour of robots themselves. The artificial agency of robots requires designers and regulators to consider how to regulate robot behaviour in a way that renders it compliant with legal norms; regulation by design offers a means for this. Returning to Asimov's three laws of robotics, which have been widely neglected by hands-on roboticists, the idea of artificial agency is explored through the example of automated cars. Whilst practical issues such as spatial locomotion, obstacle avoidance and automatic learning are important, robots also have to observe social and legal norms. For example, social robots in hospitals are expected to observe social rules, and robotic dust cleaners scouring the streets for waste, as well as automated cars, will have to observe traffic regulations.


... Introducing Autonomous Vehicles (AVs) that can automatically make driving decisions according to traffic rules can address traffic violations [1][2][3][4]. However, it is challenging for AVs to adhere to traffic rules whilst making driving decisions. ...
... All of this data is gathered via the CARRS-Q advanced driving simulator. Moreover, ontologies are highly flexible and can be expanded effortlessly by incorporating new ideas, depending on the requirements, making them extremely adaptable. To create a road map in the simulator, we acquire road-related data from the QLD Transport and Main Roads website for Queensland, Australia. ...
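The cited work's point that an ontology can be expanded simply by adding new concepts can be illustrated with a toy sketch. This is not the authors' ontology: the class names, the dictionary encoding, and the property-inheritance walk are all illustrative assumptions.

```python
# Toy sketch of why an ontology is easy to extend: a new concept slots in
# as a subclass without touching the existing ones. All names are illustrative.

ontology = {
    "Road":     {"subclass_of": None,   "properties": ["speed_limit"]},
    "Motorway": {"subclass_of": "Road", "properties": ["min_speed"]},
}

def add_concept(onto, name, parent, props):
    """Extend the ontology with a new concept under an existing parent."""
    onto[name] = {"subclass_of": parent, "properties": props}

def inherited_properties(onto, name):
    """Collect properties along the subclass chain, most general first."""
    props = []
    while name is not None:
        props = onto[name]["properties"] + props
        name = onto[name]["subclass_of"]
    return props

# A new road type is added without modifying "Road" or "Motorway".
add_concept(ontology, "SchoolZone", "Road", ["active_hours"])
print(inherited_properties(ontology, "SchoolZone"))  # ['speed_limit', 'active_hours']
```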
Article
Full-text available
Improving the safety of autonomous vehicles (AVs) by making driving decisions in accordance with traffic rules is a complex task. Traffic rules are often expressed in a way that allows for interpretation and exceptions, making it difficult for AVs to follow them. This paper proposes a novel methodology for driving decision making in AVs based on defeasible deontic logic (DDL). We use DDL to formalize traffic rules and facilitate automated reasoning, allowing for the effective handling of rule exceptions and the resolution of vague terms in rules. To supplement the information provided by traffic rules, we incorporate an ontology for AV driving behaviour and environment information. By applying automated reasoning to formalized traffic rules and ontology-based AV driving information, our methodology enables AVs to make driving decisions in accordance with traffic rules. We present a case study focussing on the overtaking traffic rule to illustrate the usefulness of our methodology. Our evaluation demonstrates the effectiveness of the proposed driving decision-making methodology, highlighting its potential to improve the safety of AVs on the road.
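The defeasible deontic logic used in the paper is a full formalism; as a rough intuition for the rule-exception pattern the abstract describes, where a general permission is defeated by a more specific prohibition, here is a minimal sketch. The rule names, facts, and conflict-resolution strategy are illustrative assumptions, not the authors' actual DDL encoding.

```python
# A minimal sketch of defeasible rule handling for an overtaking scenario.
# Not the paper's formalisation: rule names, facts, and the crude
# "more specific rule defeats the general one" strategy are assumptions.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    condition: callable          # predicate over a set of facts
    conclusion: str              # e.g. "permitted(overtake)"
    defeats: set = field(default_factory=set)  # names of rules this one overrides

def conclude(rules, facts):
    """Fire all applicable rules, then drop conclusions of rules that are
    defeated by another applicable rule (a crude defeasible resolution)."""
    applicable = [r for r in rules if r.condition(facts)]
    defeated = {name for r in applicable for name in r.defeats}
    return {r.conclusion for r in applicable if r.name not in defeated}

rules = [
    # General rule: overtaking is permitted when the lane is clear.
    Rule("overtake_ok", lambda f: "lane_clear" in f, "permitted(overtake)"),
    # Exception: a solid centre line defeats the general permission.
    Rule("solid_line", lambda f: "solid_line" in f,
         "forbidden(overtake)", defeats={"overtake_ok"}),
]

print(conclude(rules, {"lane_clear"}))                # {'permitted(overtake)'}
print(conclude(rules, {"lane_clear", "solid_line"}))  # {'forbidden(overtake)'}
```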
... Automated Vehicles (AVs) are designed and programmed to follow traffic rules; it is therefore suggested that AVs would be the solution to traffic rule violations (Khorasani et al. 2013; Leenes and Lucivero 2015). AVs have the potential to reduce crashes, reduce traffic congestion, and improve driving efficiency (Ryerson et al. 2019). ...
... Also, an AV may not be able to follow the rules related to exceptions without additional interpretation. It is also unclear whether the AV can act appropriately in traffic scenarios where traffic rules conflict (Prakken 2017) and where traffic rules are implicitly defined (Leenes and Lucivero 2015). Currently, for some rules and situations, a human is more reliable in making the right decision than an AV (Schwarting et al. 2018). ...
Article
Full-text available
Automated Vehicles (AVs) are designed and programmed to follow traffic rules. However, there is no separate and comprehensive regulatory framework dedicated to AVs. The current Queensland traffic rules were designed for humans. These rules often contain open texture expressions, exceptions, and potential conflicts (conflict arises when exceptions cannot be handled in rules), which makes it hard for AVs to follow. This paper presents an automatic compliance checking framework to assess AVs behaviour against current traffic rules by addressing these issues. Specifically, it proposes a framework to determine which traffic rules and open texture expressions need some additional interpretation. Essentially this enables AVs to have a suitable and executable formalization of the traffic rules. Defeasible Deontic Logic (DDL) is used to formalize traffic rules and reasoning with AV information (behaviour and environment). The representation of rules in DDL helps effectively in handling and resolving exceptions, potential conflicts, and open textures in rules. 40 experiments were conducted on eight realistic traffic scenarios to evaluate the framework. The evaluation was undertaken both quantitatively and qualitatively. The evaluation result shows that the proposed framework is a promising system for checking Automated Vehicle interpretation and compliance with current traffic rules.
... In the legal literature and in legislative practice, both domestic and foreign, issues related to the legal regulation of unmanned vehicles are presented in a fairly wide range. In the initial phase of the introduction of this concept, most of the research raised such issues as liability for accidents, regulation of the design and production of autonomous vehicles, regulation of the (experimental or regular) use of unmanned vehicles, issues of data protection and privacy, as well as the question of whether the law should be changed to accommodate autonomous vehicles (Anderson et al., 2016; Leenes et al., 2014; Vellinga, 2017). However, in recent years, given the growing interest in this issue, the range of legal issues in this context has expanded significantly. ...
Article
Full-text available
Nowadays, the problems of legal regulation of unmanned vehicles take place not only in domestic, but also in foreign legislation. The reason for this situation is connected both with the high pace of development of the IT-sphere and digital technologies, and with the lack of a unified approach to solving legal issues in the field of regulation of the legal relations under consideration. The most debatable are attempts to determine the subject of responsibility for the harm caused with the participation of autonomous vehicles, as well as the distribution of this responsibility. However, there is no doubt that there is a need for legal fixing of certain issues that will allow the most correct regulation of legal relations with the participation of autonomous vehicles in various legal branches from the legal point of view.
... Framing those definitions is essential for laws to establish roles, rights, and responsibilities for those involved in or impacted by those behaviors, and failing to adhere to them can carry unfavorable consequences [35]. A particularly intriguing field is that of robotics, which does not enjoy clear definitions in the law, only in standards, which we will expand on later [36][50][15]. On the contrary, the law thinks of 'products,' 'toys,' 'medical devices,' 'machinery,' and, recently, 'artificial intelligence,' and establishes distinct safeguards for each. ...
Conference Paper
Full-text available
Over the past two decades, socially assistive (SARs) or interactive (SIRs) robots have been developed rapidly due to their beneficial uses in elderly care, rehabilitation, and education. Given their multiple embodiments and contexts of use, however, defining what these robots are remains a difficult task, which further challenges understanding which legal safeguards developers need to follow to ensure a safe human-robot interaction (HRI). Establishing legislation that adequately frames the issues is complex if these concepts remain confusing. Despite pioneering efforts to characterize what these robots are in international standards and the literature, there is currently no consensus on which legal category they fall under, and, therefore, related problems are covered unevenly in different pieces of legislation. Following a systematic review, we analyzed 1,359 works (out of 3,446) to clarify definitions, categories, and functionalities of SARs and SIRs to establish a baseline for understanding and regulating these robots. The first results show that the ISO 13482:2014 definition of Mobile Service Robots (MSRs), the formal name for SARs, is incompatible with the current literature. Moreover, more consensus on what qualifies as assistive or interactive under this technology is needed to determine the regulation and safeguards to mitigate the issues these robots entail for users.
... There are two points of view here. On the one hand, it is necessary to affirm that laws can be applied to robots: to extend the content of the existing legal system so that it covers artificial intelligence by modifying and improving it, to transform and strengthen the traditional system of legal regulation, and to establish a strict supervision and control policy so as to form an institutional, law-based AI governance system (Leenes & Lucivero, 2014). The affirmative view holds, firstly, that artificial intelligence is similar to natural human beings and, once it develops to a certain extent, can generate an autonomous will to control its behaviour, which means that artificial intelligence would already have the capacity to assume criminal responsibility (Ellamey & Elwakad, 2023). ...
Article
With the emergence of artificial intelligence, new situations, problems, and challenges in the field of network security keep arising, affecting the global economic, development, and security order and placing higher demands on network security assurance. Cases of new cybercrimes committed with fake audio and video, such as AI face-swapping and AI voice-cloning, now occur from time to time. At the individual level, artificial intelligence lowers the threshold for personal cybercrime; at the social level, artificial intelligence forges false information that misleads the public and poses a threat to people's property. This article aims to analyse the function of artificial intelligence and discuss the impact of emerging technologies on social development. By comparing domestic and foreign research on artificial intelligence and using a qualitative methodology based on case studies and literature, the article presents views and ideas regarding laws on artificial intelligence. It finds that maintaining network security is the shared responsibility of the whole of society and requires the joint participation of governments, social organisations, and netizens. It therefore suggests, first, that other security risks artificial intelligence may cause in the future should be addressed on the basis of risk prevention, with laws and regulations actively used to regulate and guide; secondly, that personal information should be protected and the cybersecurity environment standardised by reference to the law; and finally, that online publicity and education should be carried out to improve network security awareness.
... The 'Rules of Acquisition' among the Ferengi set out the core libertarian capitalist nature of their society and its misogyny (Grech and Grech 2015, 35, 38-40), while the 'prime directive', a rule incessantly broken in the narratives, is theoretically core to Starfleet (Peltz 2003, 635). Isaac Asimov's Three Laws of Robotics and the Zeroth Law form not only a core source of law for Asimov's worlds but also seep beyond into other fiction, as well as into debates on robotics, artificial intelligence and ethics and their regulation (Kaminka et al. 2017, 343; Leenes and Lucivero 2014, 193). The Broken Earth trilogy raises our consciousness of the 'power and historical patterns of oppression, exploitation, and marginalization' manifest in laws implicitly written in stone (Sarat et al. 2010, 7). ...
Article
Full-text available
In N.K. Jemisin’s Broken Earth trilogy, core laws are written on stone. But the tablets are incomplete, open to interpretation and their authorship uncertain. Nonetheless, Stone Law forms the basis of the governance system. Ultimately, the narrative reveals that the Stone Laws are recent in origin and an instrument of subjugation whose claims to common sense belie its harms. This article considers immutability in law and the ways in which particular laws become as if written in stone. Constitutional law and jus cogens are two examples of immutable worldbuilding laws represented as inevitable, absolute, unyielding and perpetual. Debates in law and humanities on genre, performance, interpretation and the concerns of a particular era are often reflected and refracted through both the laws and the literature of an era. In particular, the practice of worldbuilding is used to demonstrate the wariness necessary when laws are represented as immutable.
... Robotics and AI scholars have examined concepts of robot regulation within the existing legal parameters of a given jurisdiction, which largely encompasses liability when things go awry with AI-enabled systems, in addition to the rights and legal status of the robots (Leenes & Lucivero, 2014; Pagallo, 2013). There is presently little consensus on how best to regulate ATs in the global market (Abbott, 2020), and more research on the impacts of government and policies has been called for in the hospitality literature (Mohammed et al., 2015). ...
Article
This conceptual paper examines the interplay between Porter’s Diamond, the role of government, and varying political ideologies on automated technology regulation in the global hospitality industry. The way in which these factors influence a global organization’s ability to achieve competitive advantage through the use of technology are examined. Specifically, mercantilist, liberal, social democratic, and communist ideologies are explored in relation to how they support or dissuade regulation, and their respective and collective impacts on competition. Additionally, the sources of government regulation, including global, bloc, country-level, and sub-country levels are discussed in relation to automated technology regulations. Ultimately, this study offers suggestions for competition as a result of existing and potential automated technology regulations for the hospitality industry, and suggests areas of study and questions for further consideration.
... According to his idea, humanoid robots would behave like domestic helpers and require a set of programming guidelines to keep them safe [70]. Although their applicability is questioned, these laws, which were outlined in a work of fiction, are nevertheless used as guidelines for the building of robots [71]. This brings us to the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) of the United Nations Educational, Scientific, and Cultural Organization [72]. ...
Article
Full-text available
Many businesses have been significantly influenced by the rapid emergence of Artificial Intelligence (AI), a branch of Computer Science focused on developing intelligent machines. This study examines the current state of machine learning (ML), a core aspect of AI, as well as its global impact and ethical considerations. It explores how the availability of data and advancements in computational models have expanded the capacity of AI systems to handle complex tasks. It also investigates the ethical issues surrounding AI technology and the proposed solutions. This discussion addresses the various ways AI is evolving and how it affects industries such as manufacturing, healthcare, transportation, and finance, highlighting both potential benefits and drawbacks. The study aims to illuminate the intricate relationship between machine learning and society by examining existing literature, including both well-established and newly released studies. It outlines the key insights drawn from the literature review, covering machine learning's trends, advancements, ethical considerations, and societal impact. Finally, the study emphasizes the connection between machine learning and society, summarizes the key findings, and suggests areas for further research.
... Murray and Scott have analysed the regulatory modalities that stem from the functionalist view in a framework comprising four categories of control -hierarchical (e.g., law), community-based (e.g., community norms), competition-based (e.g., markets), and design-based (e.g., code) -and three forms of control -standard setting, information gathering, and behaviour modification (Murray & Scott, 2002). These regulative modalities operate interrelatedly (Leenes & Lucivero, 2014). Design can be incorporated in the process of regulation by law, for instance, by outlining designbased requirements for organisations and designers, as well as after the implementation of regulation by law, for example, in developing a new technology product that modifies the behaviour of users by design. ...
Article
Full-text available
Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, methods, and technologies. Building on that structure, we distinguish among three types of RBD practices: compliance by design, value creation by design, and optimisation by design. We then explore the challenges and limitations of RBD practices, which stem from risks associated with compliance by design, contextual limitations, or methodological uncertainty. Finally, we examine the governance implications of RBD and outline possible future directions of the research field and its practices.
... regulating the behaviour of an artificial intelligence product by establishing (through norms of technical regulation and standardisation) certain standards for the architecture and logistics of the AI product [16]. ...
Article
Full-text available
In this article the author analyses the legal regulation of artificial intelligence and its introduction into various branches and spheres. The normative legal acts governing the application of artificial intelligence in existing legal systems are examined. The author also reviews the opinions and viewpoints of scholars and practitioners regarding the legal regulation of artificial intelligence technologies. In conclusion, the author puts forward substantiated proposals on the need to adopt specialised normative legal acts concerning artificial intelligence technologies. Particular attention is paid to the precautionary principle in the development of artificial intelligence systems and to the need for certification of technologies and products in this field.
... However, experts rightly note that fully identifying AI with a source of increased danger is inappropriate, and that responsibility for potential harm should be borne by the person who programmed it or the person responsible for its operation [15]. In our view, these two approaches are not mutually exclusive: the presumption of the owner's liability should not deprive the latter of the right to prove that the harm occurred due to faulty programming or faulty operation, with a corresponding redistribution of liability. ...
... regulating the behaviour of artificial intelligence by establishing (through norms of technical regulation and standardisation) certain standards for the architecture of artificial intelligence [8]. ...
Article
Full-text available
The modern development of artificial intelligence technologies poses challenges for the legal environment, requiring the adaptation of legislation to new realities. Our article is devoted to analysing the current state and prospects of the legal regulation of innovative technologies in the field of…
... [65, pp.589-590]). This notion of "code as law" has influenced much thought on how governance plays out in primarily digital environments, for example on digital platforms [83] and for digitally mediated property [84], but also in the governance of AI, for example in Japan [85], as well as in robotics [86]. This technologically designed side of norms may not be explicitly intended to be normative, not in the sense that formal law is explicitly intended to be normative, but it may, just as well, be. ...
Article
Full-text available
While recent progress has been made in several fields of data-intense AI research, many applications have been shown to be prone to unintendedly reproducing social biases, sexism and stereotyping, including but not limited to gender. As more of these design-based, algorithmic or machine-learning methodologies, here called adaptive technologies, become embedded in robotics, we see a need for a developed understanding of what role social norms play in social robotics, particularly with regard to fairness. To this end, we (i) propose a framework for socio-legal robotics, drawn primarily from Sociology of Law and Gender Studies. This is then (ii) related to already established notions of acceptability and personalisation in social robotics, with a particular focus on (iii) the interplay between adaptive technologies and social norms. In theorising this interplay for social robotics, we look not only at the current status of social robots, but also draw on identified AI methods that can be seen to influence robotics in the near future. This theoretical framework, we argue, can help us point to concerns of relevance for questions of fairness in human–robot interaction.
... A good external environment can facilitate trust building. Under the constraints of relevant laws, policies and ethics, the safety of users (doctors and patients) in using machines can be safeguarded and their trust in using them enhanced [125,126]. ...
Article
Full-text available
As human‐machine interaction (HMI) in healthcare continues to evolve, the issue of trust in HMI in healthcare has been raised and explored. It is critical for the development and safety of healthcare that humans have proper trust in medical machines. Intelligent machines applying machine learning (ML) technologies continue to penetrate deeper into the medical environment, which also places higher demands on intelligent healthcare. In order for machines to play their role in HMI in healthcare more effectively and for human‐machine cooperation to be more harmonious, good human‐machine trust (HMT) in healthcare needs to be built. This article provides a systematic overview of the prominent research on ML and HMT in healthcare. In addition, this study explores and analyses ML and three important factors that influence HMT in healthcare, and then proposes an HMT model in healthcare. Finally, general trends are summarised and issues to address in future research on HMT in healthcare are identified.
... Such discussions are increasingly prominent in recent years, as many proposals for rules and regulations to govern robotic and artificial intelligence (AI) technologies are currently in development [DG IPOL et al. 2016]. In close connection with this, the potential implications of the increasing prevalence of robots are a growing topic of inquiry in fields like ethics, legal studies, and governance studies [Boden et al. 2017;Leenes and Lucivero 2014;Nagenborg et al. 2008]. Beyond this, these types of interpretations of the trust-building process are generally important for the development of procedures that can help to mitigate the effects of emerging technologies on society. ...
Chapter
As artificial agents are introduced into diverse workplaces, the basic configurations underlying the organization of work undergo a fundamental change. This implies that the work we do is subject to alteration, along with who does the work, which opens up new social challenges. Questions regarding the extent to which these agents are accepted in work settings, as well as the consequences for human agents of collaborating with artificial agents, indicate the need to better understand the mechanisms that underpin a collaborative sociotechnical system. This book chapter discusses how the interplay between humans and artificial agents enables human–robot collaboration as a new way of working. We first focus on the agents and their interactive processes in the system to analyze how agency is ascribed to nonhuman entities. Thereafter, the results of two experiments are presented to reflect on the impact on humans of attributing agency to an artificial agent. This study provides recommendations for the design of artificial agents and for organizational strategies in terms of which social practices and changes in the working context are required to enable successful collaborations.
... It is important to bring both human (programmers, designers, or users) and robot behaviour into regulation so that it can be controlled by law and code. Leenes et al. [28] distinguished code as law into the four categories mentioned below. ...
Article
Full-text available
Every year, especially in urban areas, the population density rises quickly. The effects of catastrophes (i.e., war, earthquake, fire, tsunami) on people are therefore significant and grave. Assisting the impacted people will soon involve human-robot Search and Rescue (SAR) operations. Therefore, it is crucial to connect contemporary technology (i.e., robots and cognitive approaches) to SAR to save human lives. However, these operations also call for careful consideration of several factors, including safety, severity, and resources. Hence, ethical issues with technologies in SAR must be taken into consideration at the development stage. In this study, the most relevant ethical and design issues that arise when using robotic and cognitive technology in SAR are discussed with a focus on the response phase. Among the vast variety of SAR robots that are available nowadays, snake robots have shown huge potential; as they could be fitted with sensors and used for transporting tools to hazardous or confined areas that other robots and humans are unable to access. With this perspective, particular emphasis has been put on snake robotics in this study by considering ethical and design issues. This endeavour will contribute to providing a broader knowledge of ethical and technological factors that must be taken into account throughout the design and development of snake robots.
... As a result, there have been arguments for different approaches to AI governance. Some of these approaches include the independent audit governance approach, governance of AI systems in their design, and Adaptive governance [1,7,54,55]. Governance by independent audit calls for an AAA audit style approach to governance [7]. ...
Article
Background: The continuous development of artificial intelligence (AI) and increasing rate of adoption by software startups calls for governance measures to be implemented at the design and development stages to help mitigate AI governance concerns. Most AI ethical design and development tools mainly rely on AI ethics principles as the primary governance and regulatory instrument for developing ethical AI that inform AI governance. However, AI ethics principles have been identified as insufficient for AI governance due to lack of information robustness, requiring the need for additional governance measures. Adaptive governance has been proposed to combine established governance practices with AI ethics principles for improved information and subsequent AI governance. Our study explores adaptive governance as a means to improve information robustness of AI ethical design and development tools. We combine information governance practices with AI ethics principles using ECCOLA, a tool for ethical AI software development at the early developmental stages. Aim: How can ECCOLA improve its robustness by adapting it with GARP® IG practices? Methods: We use ECCOLA as a case study and critically analyze its AI ethics principles with information governance practices of the Generally Accepted Recordkeeping principles (GARP®). Results: We found that ECCOLA’s robustness can be improved by adapting it with Information governance practices of retention and disposal. Conclusions: We propose an extension of ECCOLA by a new governance theme and card, # 21.
... As a result, the code is a regulatory block: a set of architectural or behavioral principles implanted in a system that forbids departure. Many contend that self-driving cars, such as the Google car, should adhere to traffic restrictions pre-programmed into the car's software [20]. ...
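The "pre-programmed restriction" idea in the passage above can be made concrete with a deliberately tiny sketch: a speed bound enforced in the control code itself, so that no planner request can violate it. The function name and the limit values are assumptions for illustration, not any vendor's actual implementation.

```python
# Sketch of a traffic restriction encoded directly in vehicle control
# software ("code as law"): the rule is enforced by the architecture,
# so departure from it is impossible. Names and values are illustrative.

def clamp_target_speed(requested_kmh: float, posted_limit_kmh: float) -> float:
    """Never let the planner command a speed above the posted limit."""
    return min(requested_kmh, posted_limit_kmh)

assert clamp_target_speed(62.0, 50.0) == 50.0  # rule binds: request is capped
assert clamp_target_speed(40.0, 50.0) == 40.0  # rule not binding: request passes
```

The point of the sketch is that the norm is not an advisory the system may weigh; it is a structural constraint that the surrounding code cannot bypass.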
Chapter
Because of its inherent potential, robotics has grown increasingly popular in many workplaces over time. Unless malfunctioning, a robot can perform its assigned jobs non-stop, perfectly, and quickly. It can perform in extreme conditions, such as deactivating explosives, exploring mines, finding sunken shipwrecks, and rescuing survivors. These large-scale uses of robotics inescapably cause tremendous ethical, social, and legal challenges in the contemporary world, which need to be redressed. The main focus of this article is to analyze those challenges encompassing AI and robotics and to shed light on prospective solutions. This paper argues that the challenges in the field cannot be solved by a single effort; rather, integrated action is needed from all stakeholders. Hence, a joint action plan, accelerated by national and international collaboration and cooperation and led by the United Nations, might be the preferred alternative.
Keywords: Emergence of robotics · Challenges · Prospects · Regulations · Suggestions
... Generally, legal conformity presents challenges related to the interpretation of laws, which can be vague, admit exceptions, or be internally incoherent; resolving any of these issues may demand some degree of common-sense reasoning (Prakken, 2017). Additionally, adherence to traffic laws brings the need to embed the relatively abstract norms used in laws to map concrete behavior into an AV and, more broadly, into an autonomous system (Leenes and Lucivero, 2014). Some authors have already attempted to implement portions of various traffic codes (related to circulation and behavior) in an autonomous vehicle, such as Rizaldi et al., 2017 (German legislation). ...
Thesis
Full-text available
From a societal point of view, autonomous vehicles (AVs) can bring many benefits, such as an increase in road safety and a decrease in congestion. However, due to the highly dynamic environments these vehicles can encounter, technical concerns about their feasibility and ethical concerns about the deliberation, explainability, and consequences of their actions seem to dampen the enthusiasm and optimism that once ruled the automotive sector and public opinion. This thesis addresses the core questions surrounding autonomous vehicles today: how they should account for the road users sharing their environment in their decision making, and how they should deliberate in dilemma situations. Firstly, a decision-making process for the autonomous vehicle under general situations is proposed; then, as a particular situation, deliberation under certain collision, with other vehicles, pedestrians, or static objects, is treated. Then, to relax the assumption that the behavior of the other road users does not change during the execution of a policy used to implement the decision making, the intent of each of them is estimated by comparing the prediction made by a Kalman filter with the real-time observation. To account for the interaction between road users, an incomplete game model is proposed, using the intent probabilities calculated beforehand, finally producing a coherent estimate of the evolution of the other road users.
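The thesis's intent-estimation step, comparing a Kalman filter's prediction with the real-time observation, can be sketched in simplified one-dimensional form. This is a hedged illustration of the general technique (an innovation test), not the thesis's actual model; the threshold factor `k` and the noise scale `sigma` are assumed parameters:

```python
def predict_position(x: float, v: float, dt: float) -> float:
    """Constant-velocity prediction of the next lateral position (the filter's prior)."""
    return x + v * dt

def intent_changed(observed: float, predicted: float, sigma: float, k: float = 3.0) -> bool:
    """Flag a likely intent change when the innovation (residual) exceeds k sigma."""
    return abs(observed - predicted) > k * sigma

# A vehicle assumed to hold its lane (lateral position 0, lateral velocity 0):
pred = predict_position(0.0, 0.0, 0.1)
print(intent_changed(1.2, pred, sigma=0.2))  # large residual: a lane change is likely
print(intent_changed(0.1, pred, sigma=0.2))  # within noise: the assumed intent holds
```

In the full system, such a flag would shift probability mass between intent hypotheses, which the game model then consumes.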
... The law is often aligned with the encoding of rules in design [34], which has given rise to the idea that "code is law" or that it is possible to effect regulation through programming [39] and the "architecture of the internet" [48]. This is not the only possible relationship, however. ...
Preprint
We consider a series of legal provocations emerging from the proposed European Union AI Act 2021 (AIA) and how they open up new possibilities for HCI in the design and development of trustworthy autonomous systems. The AIA continues the "by design" trend seen in recent EU regulation of emerging technologies. The AIA targets AI developments that pose risks to society and to citizens' fundamental rights, introducing mandatory design and development requirements for high-risk AI systems (HRAIS). These requirements regulate different stages of the AI development cycle, including ensuring data quality and governance strategies, mandating testing of systems, ensuring appropriate risk management, designing for human oversight, and creating technical documentation. These requirements open up new opportunities for HCI that reach beyond established concerns with the ethics and explainability of AI and situate AI development in human-centered processes and methods of design to enable compliance with regulation and foster societal trust in AI.
... Obviously, the shortage of technology law and legal technology study programs and the underdevelopment of legal entrepreneurship ecosystems, in the context of global technological developments, are a practical problem. This problem has also attracted the attention of researchers from various fields (Tegmark, 2015; Leenes & Lucivero, 2014). The integration of law and technologies is a new interdisciplinary research field. ...
Conference Paper
Full-text available
The usage of rubrics is nowadays a developing trend in the world of Higher Education. One can think of two main reasons for this. First, even if there is no definitive proof, rubrics seem to be adequate for supporting the learning of complex skills, in particular for formative assessment. Rubrics are thus finding a natural place in HE institutions in the context of the 21st century, where digital education skills become more and more important and need to be well defined and assessed. Secondly, rubrics are based on very simple principles, and this simplicity may contribute to the trend noted. However, our experience of designing rubrics for defining and assessing students' digital education skills revealed to us that the design of rubrics needs not only its basic principles but also additional rules in order to produce a rubric that can be used as an efficient assessment tool. In this perspective, we decided to compile and explain in this article the rules that we applied during our rubric design work. Some rules were found in the literature; others were elaborated as our work progressed. With this compilation, we want to bring the reader concrete guiding elements and steps for the design of rubrics. A general rule seems to emerge from our work: a rubric maker should always try to distinguish between all the aspects of the competences needed to perform a task and all the aspects of all the different levels that can be seen in the competences of a person performing the task.
... 212 Regulation of technology through code is indeed the quintessence of Asimov's Three Laws of Robotics. 213 Proposing laws of roboticists rather than laws of robotics therefore turns the logic around. 214 The ghost in the machine is inherently human. ...
Chapter
Full-text available
This paper discusses the bidimensional definition of regulation by design, i.e., "any alteration of human or technological behaviour through algorithmic code or data" (p. 448). The author argues that cyberspace made possible the advent of the digital society. If code is the new regulation, coders are the new regulators. Questions nevertheless remain as to how law can ensure appropriate regulation-by-design mechanisms in the AI age. This conceptual paper identifies these questions and proposes two solutions, one procedural and one substantive, that would render by-design regulation through AI more fundamental-rights-proof.
... regulation of the development and production of AI through the adoption of special legislation in this area; regulation of the behaviour of users of AI technology through the application of existing legislative instruments; regulation of AI behaviour through the establishment (by norms of technical regulation and standardization) of certain standards for AI architecture [3]. A revision of the concept of legal personality is also necessary, which is in itself a rather contentious and difficult task. ...
Article
Full-text available
This article analyses approaches to the legal regulation of artificial intelligence. Questions of regulating the field of AI and its impact on the realization and protection of human rights are being actively developed, mainly by scholars in European countries, and are somewhat less covered in the research of scholars from Ukraine and other countries of the former Soviet Union. Given the rapid development of AI technologies, we may assume that this topic will soon occupy a central place in legal research. Proceeding from the place of AI technologies in the legal system, the article offers a general analysis of approaches to the legal regulation of this technology. The analysis examines the structure of legal regulation using the work of scholars and the conclusions of the Council of Europe as examples, and presents several alternative approaches to defining the subject-object nature of the concept of "artificial intelligence". Some scholars propose to regulate artificial intelligence as an object of legal relations, at the core of which lies a technology created and controlled by humans. Other scholars propose to endow artificial intelligence with legal capacity as a subject, viewing it as something autonomous that can independently bear responsibility for its actions. The article analyses the grounds for and possible consequences of implementing these two approaches in legal systems. It also addresses the rights and obligations of developers, owners, and persons who use robots in their activities. The article partly covers a hybrid model of legal relations, in which part of social relations is realized without human participation. The positive and negative consequences of applying the approaches proposed by scholars are also noted. The author emphasizes the low level of development of legal approaches and the absence of a unified approach that could be applied in practice.
... These incorporate the concept of reversibility and the inclusion of emergency and protection services. Meanwhile, the General Data Protection Regulation (GDPR) [57], initiated by the European Union (EU), involves: (a) rules on automated decision-making processes; (b) the right to be forgotten, which means that results should be removed if outdated, irrelevant, or inadequate [33]; and (c) data protection at the design stage [58]. ...
Article
Full-text available
Recent years have seen a rapid development of the Internet of Things (IoT) and the growth of autonomous robotic applications that use network communications. Accordingly, the increasing advancement of intelligent devices with wireless sensors (i.e., autonomous robotic platforms) operating in challenging environments makes robots a tangible reality in the near future. Unfortunately, as a result of technical development, security problems emerge, especially when considering human–robot collaboration. Two abnormalities often compromise the basic security of collaborative robotic fleets: (a) information faults and (b) system failures. This paper describes the methodology of a control framework design for secure robotic systems aided by the Internet of Things. The suggested concept represents a control system structure using blocks as the components. The structure is designed for robots expected to interact with humans safely and to act connected by communication channels. The properties of the components and the relations between them are briefly described. The novelty of the proposed concept concerns the security mechanisms. The paper also categorizes two different modes of network attacks, summarizing their causal effects on human–robot collaboration systems. The issue of standardization is also raised; in particular, the work of the National Institute of Standards and Technology (NIST) and the European Parliament (EP) on security templates for communication channels is commented on.
Article
Full-text available
Quadruped robots have emerged as a prominent field of research due to their exceptional mobility and adaptability in complex terrains. This paper presents an overview of quadruped robots, encompassing their design principles, control mechanisms, perception systems, and applications across various industries. We review the historical evolution and technological milestones that have shaped quadruped robotics. To understand their impact on performance and functionality, key aspects of mechanical design are analyzed, including leg configurations, actuation systems, and material selection. Control strategies for locomotion, balance, and navigation are all examined, highlighting the integration of artificial intelligence and machine learning to enhance adaptability and autonomy. This review also explores perception and sensing technologies that enable environmental interaction and decision-making capabilities. Furthermore, we systematically examine the diverse applications of quadruped robots in sectors including the military, search and rescue, industrial inspection, agriculture, and entertainment. Finally, we address challenges and limitations, including technical hurdles, ethical considerations, and regulatory issues, and propose future research directions to advance the field. By structuring this review as a systematic study, we ensure clarity and a comprehensive understanding of the domain, making it a valuable resource for researchers and engineers in quadruped robotics.
Article
Full-text available
Artificial intelligence (AI) is revolutionizing how humans conduct transactions, make decisions, and engage in social settings, with applications in important sectors such as healthcare, banking, and criminal justice. While AI systems offer considerable efficiency, they also pose significant threats and ethical issues, such as accountability gaps and algorithmic biases. The challenge for AI-driven governments is to establish governance structures that effectively manage the evolving threats posed by increasingly complex and autonomous AI systems. This paper seeks to establish a theoretical framework for analyzing AI governance by examining four critical issues: bias, transparency, economic influence, and the need for adaptive regulatory approaches. These issues are selected based on their relevance to the societal and ethical implications of AI, their impact on public trust, and their influence on fairness and equitable outcomes. Through this lens, the paper evaluates current regulatory models and self-regulation initiatives, analyzing their capacity to balance effective oversight with the need for innovation. Drawing on governance theories and socio-technical systems, the paper explores how these frameworks can address AI's risks while enhancing its positive contributions. In addition, the paper identifies key gaps in existing governance approaches and calls for further research and policy development. Ultimately, this work aims to advance the understanding of AI governance by offering actionable insights that inform the creation of models capable of protecting public interests while enabling technological progress.
Chapter
The swift progression of artificial intelligence (AI) technology offers unparalleled prospects as well as noteworthy ethical dilemmas. A strong ethical foundation is becoming more and more necessary as AI systems are incorporated into more areas of society, such as healthcare, banking, law enforcement, and everyday consumer products. It appears that current developments in AI come with significant challenges. This chapter emphasizes the significance of ethical frameworks in AI-enhanced contexts such as data-driven decision making, automation, adaptability and interactivity, intelligent assistance, and increased efficiency. This chapter explores the challenges associated with AI in detail. By striking a balance between scientific innovation and ethical ideals, we can harness the revolutionary potential of AI while minimizing its risks. AI has many potential applications in a wide range of industries, but as it has quickly gained traction, it has also brought up significant ethical issues that need to be carefully considered and addressed right away.
Article
There is no doubt that technological development has become the driving force behind the mechanisms of globalization, directly affecting humanity's daily life, both materially and morally, in various fields. The United Arab Emirates has therefore shown great interest in following every new development in this respect, becoming one of the countries most committed to integration into the digital age. In October 2017, the UAE surprised the world by launching its Artificial Intelligence Strategy, prompting other countries to follow closely, as this strategy was the first of its kind in the region and the world. Through it, the state seeks to achieve many distinctive goals, including relying on artificial intelligence for services at a rate of 100% by 2031. The strategy targets most of the state's vital sectors, including transport, by reducing accidents and operating costs. AI technology plays an important role in public transport, notably in self-driving cars, where the vehicle takes over the driving tasks without a driver, whose role is limited to specifying the destination, after which the vehicle drives itself. Legal liability is one of the challenges facing self-driving cars at their final level of autonomy, without human intervention. In this study, we attempt to address the regulatory and legal challenges arising from the uses of artificial intelligence in the field of transport.
Chapter
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
Article
Artificial intelligence, along with a number of other products of the technological revolution, has served as a prerequisite for the formation of a specific area of social relations, prompting the search for adequate forms and methods of legal regulation. The use of legal resources in creating the necessary regulators has led to the accumulation of relevant practical experience and the emergence of a disciplinary ontology accumulating doctrinal knowledge about the legal existence of AI. The central place in it is occupied by the issue of the legal identification of AI, which has both theoretical and practical significance. Its development involves a complex of fundamental, programmatic, and design questions. The article presents the solutions forming this complex in the context of the development of legal scholarship and the practice of legal regulation in the field of AI creation and use, as well as a rational view of how the law mediates these social relations in their current form; the author characterizes the content and dynamics of this view and analyzes the accumulated law-making experience and the practice of legal experiments. The article also sets out forecasts for the further development of this segment of the legal sphere and the tools used to order it, and defines the tasks of legal doctrine for the future. The article was prepared on the basis of a scientific report presented by the author at a meeting of the Presidium of the Russian Academy of Sciences on March 12, 2024.
Chapter
Apart from being a buzzword, AI is a rapidly developing, complex, interdisciplinary research area. Seen as a technology capable of learning from experience and acting autonomously without human intervention, AI may be one of the most disruptive and transformative technologies developed by humanity, if not the most. To date, AI applications can be found in a vast spectrum of available technologies, from consumer appliances to fully autonomous vehicles. The impact of AI can be felt in various areas, such as financial markets, state administration, healthcare, transportation, and physical and digital infrastructure. The global economy at large is benefiting vastly from advancements in AI research and development. According to some estimates, AI could potentially create 3.5 to 5.8 trillion US dollars (USD) in annual value in the global economy. Moreover, considering the significant investments made by leading tech corporations worldwide in the research and development of AI, companies' total AI absorption level is estimated to reach about 50 per cent by the year 2030.
Preprint
Full-text available
This is a long paper, an essay, on ambiguity, pragmatics, legal ecosystems, and the expressive function of law. It is divided into two parts and fifteen sections. The first part (Pragmatics) addresses ambiguity from the perspective of linguistic and cognitive pragmatics in the legal field. The second part (Computing) deals with this issue from the point of view of human-centered design and artificial intelligence, specifically focusing on the notion and modelling of rules and what it means to comply with the rules. This is necessary for the scaffolding of smart legal ecosystems (SLE). I will develop this subject with the example of the architecture, information flows, and smart ecosystem of OPTIMAI, an EU project of Industry 4.0 for zero-defect manufacturing (Optimizing Manufacturing Processes through Artificial Intelligence and Virtualization).
Chapter
This chapter investigates the growing role of robotics in international politics, focusing on security, diplomacy, and global governance. We explore how advances in robotics, particularly autonomous systems, shape military strategies, and alter global security dynamics. The advent of robotic warfare and the possibility of unmanned conflict raises new ethical, legal, and strategic issues that necessitate comprehensive international norms and regulations. The chapter discusses the influence of robotics on diplomacy. Furthermore, it addresses the broader implications of robotics on global governance, including issues of technical inequality and its capacity to reconfigure power relations. The chapter concludes by emphasizing the importance of international cooperation in developing policies and ethical guidelines to navigate the emerging robotics landscape in international politics. Keywords: Robotics; Politics; Conflict; Autonomous weapons; Robot tax; Robot norms; Robot regulations
Article
Full-text available
The relevance of this article stems from the value of applying adaptive testing in education on the basis of electronic learning tools, in particular Google Forms, for the accurate measurement of learners' knowledge and skills. Aim: to present the general concept of adaptive testing of learners in the context of electronic learning tools. Methods: a literature review examined in detail the scholarly work of foreign and Ukrainian researchers, articles, books, and other sources of information related to the object of study, in order to establish the current state of the problem, identify unresolved questions, and determine directions for further research; case study analysis was used to examine a specific case, or series of cases, in the context of the study; conclusions were then drawn. Results: the article highlights the value of adaptive testing in the educational process, which adjusts to the needs of each learner, and discusses its advantages and disadvantages. Varieties of adaptive testing are identified: linear, computerized, and combined. The significance of adaptive testing using artificial intelligence is discussed. Requirements for preparing adaptive tests are considered, in particular the importance of assessment criteria and difficulty parameters. The importance of feedback from students and the need to review tests to maintain their relevance and validity are emphasized. General safety rules for working with Google Forms and the importance of automatic grading of students' answers are characterized. The process of analysing students' answers and displaying test results is described, along with the possibilities of integrating Google Forms with educational platforms. The advantages and limitations of using Google Forms for adaptive testing in general secondary education institutions are outlined. Conclusions: adaptive testing is an important tool for in-depth analysis and for setting precise difficulty parameters for test questions. The need for student feedback to continuously improve the testing process is emphasized, as is the importance of regularly reviewing and updating tests to keep them relevant. Attention is drawn to differentiating questions by difficulty level and to the relevance of the learning context and overall educational goals.
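The core loop of adaptive testing described above, selecting each next question to match the current ability estimate and updating that estimate from the answer, can be sketched as follows. This is a generic illustration with assumed function names and an assumed fixed step size, not the tooling discussed in the article:

```python
def next_item(ability: float, difficulties: list[float]) -> float:
    """Pick the remaining item whose difficulty is closest to the ability estimate."""
    return min(difficulties, key=lambda d: abs(d - ability))

def update_ability(ability: float, correct: bool, step: float = 0.5) -> float:
    """Raise the estimate after a correct answer, lower it after an incorrect one."""
    return ability + step if correct else ability - step

# One step of the loop: a mid-ability learner is served the medium-difficulty item.
item = next_item(0.0, [-1.0, 0.2, 1.5])
ability = update_ability(0.0, correct=True)
```

Real adaptive engines replace the fixed step with a psychometric model (e.g. item response theory), but the select-answer-update cycle is the same.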
Article
Full-text available
This article aims to analyze the debates on taxation and artificial intelligence, identifying the problems associated with legal personality and liability. It argues that, despite the efforts of the past decade, no solid legal basis has been articulated that responds to the demands of the fourth industrial revolution and its externalities. An analytical-deductive examination shows that it is currently not possible to introduce this type of tax to guarantee the effective taxation of the final beneficiaries of income derived from the use of artificial intelligence.
Article
Full-text available
Once autonomous systems become more widespread, product liability is expected to be called on to help compensate damages. The defectiveness of autonomous systems, as a condition of product liability, is difficult to assess. This difficulty becomes more evident in the case of design defects, because the consumer expectations test adopted by the EU Product Liability Directive has been criticized for its vagueness, yet the new Proposal for a Product Liability Directive (COM(2022) 495 final) preserves it. This paper defends the view that the consumer expectations test is still fit for purpose and able to accommodate the challenges posed by the deployment of autonomous systems. The paper first attempts to bring together the cases where there are clear safety expectations regarding autonomous systems, so that the defectiveness assessment is relatively easy. Where safety expectations are unclear, it is difficult to measure and decide whether an autonomous system is defective or not. Hence, the paper's second aim is to contribute to the concretization of the test in cases where it is difficult to know what to expect from autonomous systems. The public is entitled to expect the producers of autonomous systems to eliminate the harmful consequences of such products. Since the distinguishing feature of such products is their autonomy, producers are expected to consider the safety implications of that autonomy and eliminate its harmful consequences. From the liability perspective, there are two main implications of autonomy that are explored below: human–machine interaction and the testing of autonomous systems. Accordingly, the difficulties arising from these two main implications should shape the scope and extent of the public's expectations and hence the producers' duties.
Chapter
Although American scholars sometimes consider European legal scholarship old-fashioned and inward-looking, and Europeans often perceive American legal scholarship as amateur social science, both traditions share a joint challenge. If legal scholarship becomes too far removed from practice, legal scholars will ultimately make themselves superfluous. If legal scholars, on the other hand, cannot explain to other disciplines what is academic about their research, which methodologies are typical, and what separates proper research from mediocre or poor research, they will probably end up in a similar situation. Therefore we need a debate on what unites legal academics on both sides of the Atlantic. Should legal scholarship aspire to the status of a science and gradually adopt more and more of the methods, (quality) standards, and practices of other (social) sciences? What sort of methods do we need to study law in its social context, and how should legal scholarship deal with the challenges posed by globalization?
Chapter
Drones have become enormously useful in practice for crime prevention and criminal investigation. The use of drones contributes to the collection of a wide range of information and data that can convey crucial knowledge for criminal proceedings. However, considering the highly intrusive potential of such tools on people's private lives, serious concerns arise from the perspective of fundamental rights. The purpose of this study is to analyse the potential of drones in a modern criminal justice system while examining whether and how their usage complies with the rights to privacy and data protection.
Thesis
Full-text available
The development of automotive technologies and transportation systems brings innovations such as autonomous vehicles and a more developed urban environment. These innovations pose serious challenges for professionals in the transport sector. An important area of future research will be how the rules and tools currently used in transport change, and how the relationship between drivers, vehicles, and the environment evolves. Traffic signalling systems are of great importance in traffic today and are likely to persist in some form in the future. All road users are informed about their surroundings on the basis of posted signs. This is why it is important, from automated vehicles to pedestrians, to perceive and interpret the environment correctly and react to its changes. The number of vehicles with various driving assistance functions has been growing rapidly in recent years. The most important goal is to introduce fully autonomous road transport in the next 20-30 years, but many challenges remain to be overcome. One of the problems is that achieving a fully autonomous future will depend not only on the developments of vehicle manufacturers but will also require significant changes in legislation and infrastructure. So a fully autonomous future is still a long way off; however, more and more features that facilitate it are appearing in vehicles. It should be noted, though, that although such vehicles are already available today, these systems are still in the development stage, which often means they may still malfunction.
Article
Full-text available
Introduction: questions of protecting rights to digital content created with artificial intelligence and neural network technologies become increasingly relevant as these technologies develop and their use expands across various spheres of social life. Questions of protecting the rights and legitimate interests of developers have come to the fore in intellectual property law. Intelligent systems create not only content that is legally protectable, but also other data, relations over which are likewise subject to protection. Accordingly, standardizing the requirements for the procedures and means of storing the big data used in developing, testing, and operating artificial intelligence systems takes on particular importance. Aim: to form a view of the directions of legal regulation and the prospects for applying artificial intelligence technology from a legal standpoint, based on an analysis of Russian and foreign scholarly concepts. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; specialized methods: legal-dogmatic analysis and interpretation of legal norms. Results: an analysis of the practice of applying artificial intelligence systems showed that "intelligent algorithms" today cover diverse technologies that are based on or connected with intelligent systems but do not always fall under the concept of classical artificial intelligence. Strictly speaking, classical artificial intelligence is only one of the technologies of intelligent systems. Results created by autonomous artificial intelligence exhibit the characteristics of works. At the same time, questions of a public-law character remain to be resolved: obtaining consent to data processing from data subjects; determining the legal personality of the persons concerned; and establishing legal liability for the bad-faith use of data needed for decision-making. Standardization and blockchain technology can help resolve them. Conclusions: given the identified and constantly changing range of high technologies falling under the definition of artificial intelligence, questions of different orders arise, which fall into groups: some questions of legal regulation in this sphere have already been resolved and have lost their relevance for cutting-edge legal scholarship (the legal personality of AI technology); other questions can be settled with existing legal mechanisms (the analysis of personal data and other information when applying computational intelligence for decision-making); and some questions require new approaches from legal scholarship (developing a sui generis legal regime for the results produced by AI technology where an original result is obtained).
Article
Full-text available
This article addresses theoretical and applied questions connected with one of the principal guarantees of an advocate's professional activity, namely ensuring the confidentiality of communication with the client. The study reviews various scholarly approaches, as well as the legislator's positions, on the practical safeguarding of a suspect's (accused's) right to confidential communication with defence counsel, above all in cases where such a person is detained. International standards and the case law of the European Court of Human Rights are also analysed with regard to the prohibition on interference with private communication between an advocate and the client. The article argues that unquestionable trust in the advocate's professional activity, the quintessence of advocacy, is possible only where the principle of confidentiality is strictly observed, secured, among other things, by the prohibition on interfering in private advocate-client communication. To achieve this aim, the author applied methods characteristic of legal science. The study used the dialectical method of cognition of legal reality, which made it possible to analyse the essence of the guarantee prohibiting interference in private advocate-client communication, while the systemic-structural method helped determine the overall structure of the work and ensured the proper treatment of the research tasks. On the basis of the study, the author concludes that Ukrainian legislation pays considerable attention to the guarantee of confidentiality of advocate-client communication, which on the whole corresponds to international principles in this field and is aimed at creating proper conditions for observing the principle of confidentiality and legal professional privilege as necessary conditions for the practice of advocacy.
Chapter
Full-text available
Technology affects behaviour. Speed bumps, for instance, provide an effective way to enforce speed limits imposed by the legislator. In cases such as these, technology is instrumental to the enforcement of legal norms. This kind of regulation by technology, techno-regulation, or ‘code as code’ has become part of the contemporary regulator’s toolbox. The idea underlying this kind of influencing behaviour by means of technology is relatively straightforward. Norms can be transformed into computer code or architecture in a way that affords certain actions or functions and inhibits others. What is less clear is what the boundaries of techno-regulation are. In this paper we analyse how technology affects human behaviour and we present a typology of techno-effects in order to provide a clear boundary of techno-regulation vis-à-vis other normative and functional aspects of technology. We survey topics such as nudging, affordance, scripts embedded in technological designs, and anthropomorphization. The paper draws from legal philosophy, STS, human computer interaction and regulation theory.
Book
Full-text available
Every day, we make decisions on topics ranging from personal investments to schools for our children to the meals we eat to the causes we champion. Unfortunately, we often choose poorly. The reason, the authors explain, is that, being human, we all are susceptible to various biases that can lead us to blunder. Our mistakes make us poorer and less healthy; we often make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself. Thaler and Sunstein invite us to enter an alternative world, one that takes our humanness as a given. They show that by knowing how people think, we can design choice environments that make it easier for people to choose what is best for themselves, their families, and their society. Using colorful examples from the most important aspects of life, Thaler and Sunstein demonstrate how thoughtful "choice architecture" can be established to nudge us in beneficial directions without restricting freedom of choice. Nudge offers a unique new take, from neither the left nor the right, on many hot-button issues, for individuals and governments alike. This is one of the most engaging and provocative books to come along in many years. © 2008 by Richard H. Thaler and Cass R. Sunstein. All rights reserved.
Article
Full-text available
What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners – since we know that people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contacting each other? The contributors to this book discuss the possibility and desirability of some form of long-term computer Companions, which now seem certain to arrive in the coming years. It is a good moment to consider, from a set of wide interdisciplinary perspectives, both how we shall construct them technically as well as their personal, philosophical and social consequences. By Companions we mean conversationalists or confidants – not robots – but rather computer software agents whose function will be to get to know their owners over a long period. Those owners may well be elderly or lonely, and the contributions in the book focus not only on assistance via the internet (contacts, travel, doctors etc.) but also on providing company and Companionship, by offering aspects of real personalization.
Article
Full-text available
Artifacts are generally constructed on purpose and have intended and unintended effects on the conduct of people. As such, architecture can be used in regulating society, as speed ramps convincingly show. But is this de facto regulating of behaviour by means of technology regulating society in a legal sense, or is it merely disciplining society? Individuals can decide not to comply with legislation but are generally forced to observe the norms imposed upon them by techno-regulation. Many prominent examples of techno-regulation can be found in the context of ICT, for instance DRM, content filtering, and privacy enhancing technologies. Users in these contexts are typically bound by the norms embedded in the technology, without these norms being very transparent. Furthermore, techno-regulation in the ICT context is most prominently driven by industry, not government. The combination of the obscurity of the norms embedded in the technology, the strict enforcement of these norms and the process of their enactment raises many questions regarding the legal status and legal effects of techno-regulation. This paper explores the different forms of techno-regulation instituted by both public and private regulators in more detail and tries to answer the question of how techno-regulation by public and private regulators should be understood from a legal point of view. The paper argues that state-authored techno-regulation has to be seen as supplemental to regular regulation because legitimacy requires the norms to be transparent and the regulator to be accountable for the norms. With regard to non-state-authored techno-regulation, the picture is more diffuse. Some instances of techno-regulation have a clear legal status and the legal effects of transgressing the techno-norms are clear as well. In other cases, the legal status of the norms is unclear, yet their regulative effect is real.
Article
Full-text available
Personification of non-humans is best understood as a strategy of dealing with the uncertainty about the identity of the other, which moves the attribution scheme from causation to double contingency and opens the space for presupposing the others' self-referentiality. But there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems, as Luhmann argues. Personifying other non-humans is a social reality today and a political necessity for the future. The admission of actors does not take place, as Latour suggests, into one and only one collective. Rather, the properties of new actors differ extremely according to the multiplicity of different sites of the political ecology.
Article
Full-text available
The concept of Artificial Agents (AA) and the separation of the concerns of morality and responsibility of AA were discussed. The method of abstraction (MoA) was used as a vital component for analyzing the level of abstraction (LoA) at which an agent is considered to act. The approach facilitated the discussion of the morality of agents both in cyberspace and in the biosphere, where systems like organizations can play the role of moral agents. It was found that computer ethics has important scope for a concept of moral agent that does not necessarily exhibit free will, mental states or responsibility.
Article
Full-text available
Requirements engineering has been recognized as a fundamental phase of the software engineering process. Nevertheless, the elicitation and analysis of requirements are often left aside in favor of architecture-driven software development. This tendency, however, can lead to issues that may affect the success of a project. This paper presents our experience gained in the elicitation and analysis of requirements in a large-scale security-oriented European research project, which was originally conceived as an architecture-driven project. In particular, we illustrate the challenges that can be faced in large-scale research projects and consider the applicability of existing best practices and off-the-shelf methodologies with respect to the needs of such projects. We then discuss how those practices and methods can be integrated into the requirements engineering process and possibly improved to address the identified challenges. Finally, we summarize the lessons learned from our experience and the benefits that a proper requirements analysis can bring to a project.
Article
Full-text available
As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a “legal person.” In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of “human,” “person,” and “property.” Once it is understood that person in the legal sense can apply to a non-biological entity such as a corporation, the inquiry then goes on to examine the folk psychology view of intentionality and the concept of autonomy. The conclusion reached is that these two attributes can support the view that a non-biological machine, at least in theory, can be viewed as a legal person.
Article
Full-text available
This paper proposes a systematic treatment of NFRs in descriptions of patterns and when applying patterns during design. The approach organizes, analyzes and refines non-functional requirements, and provides guidance and reasoning support when applying patterns during the design of a software system. Three design patterns taken from the literature are used to illustrate this approach. Requirements Engineering is now widely recognized as a crucial part of software engineering, and has established itself as a distinct research area. Equally important is how requirements drive the rest of software development. In particular, during the design phase, many of the quality aspects of a system are determined. System qualities are often expressed as non-functional requirements, also called quality attributes, e.g. [1,2]. These are requirements such as reliability, usability, maintainability, cost, and development time, and are crucial for system success.
Article
There are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies between amoral and fully autonomous moral agents. Thus, robots might move gradually along this continuum as they acquire greater capabilities and ethical sophistication. It also argues that many of the issues regarding the distribution of responsibility in complex socio-technical systems might best be addressed by looking to legal theory, rather than moral theory. This is because our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.
Book
This book explores how the design, construction, and use of robotics technology may affect today’s legal systems and, more particularly, matters of responsibility and agency in criminal law, contractual obligations, and torts. By distinguishing between the behaviour of robots as tools of human interaction, and robots as proper agents in the legal arena, jurists will have to address a new generation of “hard cases.” General disagreement may concern immunity in criminal law (e.g., the employment of robot soldiers in battle), personal accountability for certain robots in contracts (e.g., robo-traders), much as clauses of strict liability and negligence-based responsibility in extra-contractual obligations (e.g., service robots in tort law). Since robots are here to stay, the aim of the law should be to wisely govern our mutual relationships.
Book
This book presents the results of an assessment of the state of robotics in Japan, South Korea, Western Europe and Australia and a comparison of robotics R&D programs in these countries with those in the United States. The comparisons include areas like robotic vehicles, space robotics, service robots, humanoid robots, networked robots, and robots for biological and medical applications, and based on criteria such as quality, scope, funding and commercialization. This important study identifies a number of areas where the traditional lead of the United States is being overtaken by developments in other countries.
Article
Automation may be assumed to have a beneficial impact on traffic flow efficiency. However, the relationship between automation and traffic flow efficiency is complex, because the behavior of road users influences this efficiency as well. This paper reviews what is known about the influence of automation on traffic flow efficiency and the behavior of road users, formulates a theoretical framework, and identifies future research needs. It is concluded that automation can be assumed to influence both traffic flow efficiency and the behavior of road users. Existing research has shortcomings, and in this context directions are formulated for future scientific research on automation in relation to traffic flow efficiency and human behavior.
Book
This book encapsulates around a decade's collaborative research between Samir Chopra (City University of New York Philosophy Department) and Laurence White (lawyer and policymaker). The book deals with issues relating to contract law, agency law, knowledge attribution to artificial agents and their principals, tort liability of and for artificial agents, and personhood for artificial agents. The book takes a comparative approach, drawing on a wide range of sources in US, EU and Australian law.
Article
Can computers change what you think and do? Can they motivate you to stop smoking, persuade you to buy insurance, or convince you to join the Army? "Yes, they can," says Dr. B.J. Fogg, director of the Persuasive Technology Lab at Stanford University. Fogg has coined the phrase "captology" (an acronym for computers as persuasive technologies) to capture the domain of research, design, and applications of persuasive computers. In this thought-provoking book, based on nine years of research in captology, Dr. Fogg reveals how Web sites, software applications, and mobile devices can be used to change people's attitudes and behavior. Technology designers, marketers, researchers, consumers, anyone who wants to leverage or simply understand the persuasive power of interactive technology, will appreciate the compelling insights and illuminating examples found inside. Persuasive technology can be controversial, and it should be. Who will wield this power of digital influence? And to what end? Now is the time to survey the issues and explore the principles of persuasive technology, and B.J. Fogg has written this book to be your guide.
Article
In this paper I elaborate on previous work about the implications of the shift from the era of the script and the printing press to the era of the digital and of autonomic computing. I will argue that some of the crucial protections provided by modern law, notably privacy, non-discrimination and due process, are an affordance of the socio-technical infrastructure of the printing press. Referring to Ricoeur I will discuss the fourfold distantiation inherent in the script, reinforced by the printing press, that has evoked a need for interpretation, which in turn generated space and time for the contestation of dominant frames of interpretation. The shift from the script and the printing press to the digital era provokes an epistemic shift that magnifies the virtualisations (Pierre Levy) already enabled by the printing press, but paradoxically ends up collapsing them. This collapse of distance entails the emergence of a new sense of time-space, often called 'real time', which conflates distance in time and space to a new kind of synchronisation or parallel processing that differs substantially from the linear-sequential reasoning characteristic of the bookish mind. I will argue that, as a result, the interpretation that is inherent in any type of communication becomes less visible or even invisible, which may favour the customised frames of interpretation supplied by the digital autonomic environment. The question I then seek to raise is how constitutional democracy can sustain legal protections based on the technologies of the script and the printing press, in the face of an epistemic shift towards a digital age that collapses the distance that seems preconditional for the contestation of autonomic decision-taking.
Article
Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) research attempts to develop such models. But even as cognitive science has displaced behavioralism as the dominant paradigm for investigating the human mind, fundamental questions about the very possibility of artificial intelligence continue to be debated. This Essay explores those questions through a series of thought experiments that transform the theoretical question whether artificial intelligence is possible into legal questions such as, "Could an artificial intelligence serve as a trustee?" What is the relevance of these legal thought experiments for the debate over the possibility of artificial intelligence? A preliminary answer to this question has two parts. First, putting the AI debate in a concrete legal context acts as a pragmatic Occam's razor. By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Philosophical claims that no program running on a digital computer could really be intelligent are put into a context that requires us to take a hard look at just what practical importance the missing reality could have for the way we speak and conduct our affairs. In other words, the legal context provides a way to ask for the "cash value" of the arguments. 
The hypothesis developed in this Essay is that only some of the claims made in the debate over the possibility of AI do make a pragmatic difference, and it is pragmatic differences that ought to be decisive. Second, and more controversially, we can view the legal system as a repository of knowledge: a formal accumulation of practical judgments. The law embodies core insights about the way the world works and how we evaluate it. Moreover, in common-law systems judges strive to decide particular cases in a way that best fits the legal landscape: the prior cases, the statutory law, and the constitution. Hence, transforming the abstract debate over the possibility of AI into an imagined hard case forces us to check our intuitions and arguments against the assumptions that underlie social decisions made in many other contexts. By using a thought experiment that explicitly focuses on wide coherence, we increase the chance that the positions we eventually adopt will be in reflective equilibrium with our views about related matters. In addition, the law embodies practical knowledge in a form that is subject to public examination and discussion. Legal materials are published and subject to widespread public scrutiny and discussion. Some of the insights gleaned in the law may clarify our approach to the artificial intelligence debate.
Article
Technology influences people's behaviour in significant ways. Increasingly, norms are intentionally built into technology to steer behaviour, for example, by making it impossible to copy a DVD or by filtering websites with undesirable material. These norms are generally not developed and designed according to the accepted procedures for law-making, but are made by a variety of private and public actors and translated into "code". Since normative technology can influence people's behaviour as significantly as laws, this development raises questions about the acceptability of normative technology. In order to be able to assess the acceptability of normative technology in light of democratic and constitutional values, in this essay, a systematic set of criteria is proposed that can be applied to "code as law".
Article
Smart regulators know that traditional command and control interventions, however tempting to politicians, are not always an effective or efficient form of response; they know that the criminal law tends to do better at defining crime into existence rather than defining it out; they know that private law remedies are of limited impact; and they know that public law control exercised by agency licensing or negotiation is open to the twin charges of being too soft or being too tough. Even smarter regulators know that they can sometimes achieve the desired regulatory effect by relying vicariously on non-governmental pressure (whether in the form of self-regulation or co-regulation by or with business or the professions, pressure exerted by consumers, the activities of pressure groups, and so on) or by relying on market mechanisms; in addition, they know that careful consideration needs to be given to selecting the optimal mix of various regulatory instruments.
Article
This paper reviews the case for libertarian paternalism presented by Thaler and Sunstein in Nudge. Thaler and Sunstein argue that individuals' preferences are often incoherent, making paternalism unavoidable; however, paternalistic interventions should 'nudge' individuals without restricting their choices, and should nudge them towards what they would have chosen had they not been subject to specific limitations of rationality. I argue that the latter criterion provides inadequate guidance to nudgers. It is inescapably normative, and so allows nudgers' conceptions of well-being to override those of nudgees. Even if nudgees' rationality were unbounded, their revealed preferences might still be incoherent.
Article
The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways. Reversals of preference are demonstrated in choices regarding monetary outcomes, both hypothetical and real, and in questions pertaining to the loss of human lives. The effects of frames on preferences are compared to the effects of perspectives on perceptual appearance. The dependence of preferences on the formulation of decision problems is a significant concern for the theory of rational choice.
Framing Techno-Regulation: An Exploration of State and Non-State Regulation by Technology
  • Roger Brownsword
Roger Brownsword, 'What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity' in Roger Brownsword (ed), Global Governance and the Quest for Justice (Hart Publishing, 2004) ch 13, 203; Ronald Leenes, 'Framing Techno-Regulation: An Exploration of State and Non-State Regulation by Technology' (2011) 5 Legisprudence 143.
  • Ibid
Ibid, 24. This is apparent from the two ways we framed the obligation of the urban robot to stop at the red traffic light.
Code, Control, and Choice
  • Brownsword
Brownsword, 'Code, Control, and Choice' (n 42) 17.
On Non-Functional Requirements; From Non-Functional Requirements to Design through Patterns
  • Martin Glinz
Martin Glinz, 'On Non-Functional Requirements' [2007] 15th IEEE International Requirements Engineering Conference 21; Daniel Gross and Eric Yu, 'From Non-Functional Requirements to Design through Patterns' (2001) 6 Requirements Engineering 18.
  • See Fogg
43 See Fogg (n 39) 5. 44 Ibid, 41. 45 See eg www.euroncap.com/rewards/technologies/lane.aspx for an overview of status and limitations. 46 See Thaler and Sunstein (n 40) 3. 47 Ibid.
  • Glinz
Glinz (n 96) 25 shares these under the heading of 'specific quality requirements', which are requirements that pertain to a quality concern other than the quality of meeting the functional requirements.
Legal Personhood for Artificial Intelligences' (1992) 70 North Carolina Law Review 1231
  • Lawrence B Solum
Lawrence B Solum, 'Legal Personhood for Artificial Intelligences' (1992) 70 North Carolina Law Review 1231.
Extending Legal Rights to Social Robots
  • Gunther Teubner
eg Gunther Teubner, 'Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law' (2007) 33 Journal of Law and Society 497; K Darling, 'Extending Legal Rights to Social Robots', 'We Robot' Conference, University of Miami, April 2012, http://ssrn.com/abstract=2044797.
What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity
  • Roger Brownsword
Roger Brownsword, 'What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity' in Roger Brownsword (ed), Global Governance and the Quest for Justice (Hart Publishing, 2004) ch 13, 203.
Engineering Privacy by Design
  • Seda Gürses
  • Carmen Troncoso
  • Claudia Diaz
Seda Gürses, Carmen Troncoso and Claudia Diaz, 'Engineering Privacy by Design', International Conference on Privacy and Data Protection, Brussels, January 2011, https://www.cosic.esat.kuleuven.be/publications/article-1542.pdf. 99 Both as researchers and as evaluators.
Implementation of 3D Services for “Ageing Well” Applications: Robot-Era Project, ForItAAL: Ambient Assisted Living IV Forum Italiano
  • F Cavallo
  • M Aquilano
  • M C Carrozza
  • P Dario