Article

Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma

Authors:
Martin Cunneen, Martin Mullins, Finbarr Murphy, and Seán Gaines

Abstract

The question of the capacity of artificial intelligence to make moral decisions has been a key focus of investigation in robotics for decades. This question has now become pertinent to automated vehicle technologies, as a question of understanding the capacity of artificial driving intelligence to respond to unavoidable road traffic accidents. Artificial driving intelligence will make a calculated decision that could equate to deciding who lives and who dies. In calculating such important decisions, does the driving intelligence require moral intelligence and a capacity to make informed moral decisions? Artificial driving intelligence will be determined by, at the very least, state laws, driving codes, and codes of conduct relating to driving behaviour and safety. Does it also need to be informed by ethical theories, human values, and human rights frameworks? If so, how can this be achieved, and how can we ensure there are no moral biases in the moral decision-making algorithms? The question of moral capacity is complex and has become the ethical focal point of this technology. Research has centred on applying Philippa Foot's famous trolley dilemma. We claim that before applications attempt to focus on moral theories, there is a necessary precedent: utilising the trolley dilemma as an ontological experiment. The trolley dilemma is succinct in identifying important ontological differences between human driving intelligence and artificial driving intelligence. In this paper, we argue that when the trolley dilemma is focused on ontology, it has the potential to become an important elucidatory tool. It can act as a prism through which one can perceive different ontological aspects of driving intelligence and assess response decisions to unavoidable road traffic accidents. The identification of the ontological differences is integral to understanding the underlying variances that support human and artificial driving decisions. Ontologically differentiating between these two contexts allows for a more complete interrogation of the moral decision-making capacity of the artificial driving intelligence.
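To make concrete what the abstract calls a "calculated decision", here is a minimal sketch (ours, not the authors') of how an artificial driving intelligence might reduce an unavoidable-accident response to numeric cost minimisation over machine-perceived object classes. Every class label, harm weight, and probability below is an invented assumption; the point is that this taxonomy and weighting just are the machine's ontology, which is where the paper locates the risk of moral bias.

```python
# Toy model: a "calculated decision" as cost minimisation over
# machine-perceived classes. All classes, weights, and probabilities
# are hypothetical assumptions for illustration.

HARM_WEIGHTS = {"pedestrian": 1.0, "cyclist": 0.9, "vehicle_occupant": 0.8}

def trajectory_cost(impacts):
    """impacts: list of (perceived_class, probability_of_harm) pairs."""
    return sum(HARM_WEIGHTS[cls] * p for cls, p in impacts)

options = {
    "stay_in_lane": [("pedestrian", 0.9)],
    "swerve_left": [("cyclist", 0.5), ("vehicle_occupant", 0.2)],
}

for name, impacts in options.items():
    print(f"{name}: cost = {trajectory_cost(impacts):.2f}")
print("chosen:", min(options, key=lambda o: trajectory_cost(options[o])))
```

A human driver never computes anything like this; the gap between the two decision processes is the ontological difference the paper asks the trolley dilemma to expose.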


... Human development has forced legal development: through social development, such as the creation of family and social systems in primitive times, and through technological development, such as new inventions and disruptions in the present day [5,6]. Moreover, whenever legislative development is slow to respond, judicial pronouncements fill the gap. ...
... Technologies like artificial intelligence applied to vehicles have paved the way for the development of autonomous vehicles. Martin Cunneen, Martin Mullins, Finbarr Murphy, and Seán Gaines have suggested that applying artificial intelligence to vehicles is not enough; the driving intelligence must also be informed by human moral values [6]. Tripat Gill's research suggested that autonomous vehicles could succeed if prevailing moral norms change and governments promote increased self-interest among consumers of vehicles [11]. ...
... As discussed, the technology is exposed to cyberattack vectors, but research has been carried out to make the technology safe for customers. Technologies like face detection [6,20], driverless braking systems for pedestrians [7], controlled artificial intelligence aspects [8], and ethical guidelines [10] are being integrated into AVT. There are certain legal and ethical challenges related to this integration, as highlighted by Vivek K. Singh, Elisabeth Andre, Susanne Boll, Mireille Hildebrandt, and David A. Shamma [29]. ...
Chapter
Full-text available
Automation has touched all segments of the vehicle manufacturing industry, leading to an era in which our transportation system is getting smarter day by day. The automated driving system (ADS) is one part of smart transportation. Automated driving vehicles are now on a fast track of development and the related technologies are evolving, which has forced the vehicle manufacturing industry to prepare for a broad range of cybercrimes involving automated driving vehicles and the legal issues associated with them. Vehicles have different levels of automation, and the automation level determines the applicable rules and regulations. This chapter focuses on a critical analysis of crimes associated with automated driving vehicles, prioritising the interest of the occupant of the vehicle in situations with conflicts of interest. The researchers also present a perspective on legal issues related to the development, testing, and implementation of automated, autonomous, and connected vehicles in India.
... One example of artificial intelligence that has entered our lives is "smart" video cameras capable of recognizing documents [1] and identifying a person [2]. Visual navigation systems have found application both in unmanned vehicle control [3] and in algorithms for the movement of humanoid robots [4]. ...
... One use case for AI is unmanned vehicle control. Errors in unmanned control technology can lead to tragic consequences [3], as well as serious economic and environmental damage. The first serious road traffic accident (RTA) involving a self-driving hybrid crossover from Google, in which three employees were injured, occurred in 2015. ...
Article
Full-text available
Artificial intelligence technologies are being implemented in various fields, replacing the human mind with the help of specially designed algorithms. These systems are able to learn in the course of their operation, freeing us from routine work and saving time and material resources. The article presents the results of research on trust in breakthrough digital technologies as an important condition for their use, including in social life. The study revealed a high demand for "smart" technologies alongside an insufficient level of knowledge in this area and a lack of interest in professional development. The article identifies the factors causing a negative attitude towards innovation. Under the current conditions of the pandemic, the need for solutions using artificial intelligence and machine learning technologies has been found to be increasing, including in ensuring information security.
... Consumer behaviour research has found not only that redress is an important option but that the perceived likelihood of success determines whether dissatisfied consumers consider asking for redress and allow companies a "second chance" (Cullet 2004). This finding is particularly relevant considering the recent discussion around distrust in online intermediaries (Blodgett et al. 1995) and dissatisfied users of online platforms (Cunneen et al. 2018). The importance of redress is, for instance, shown by people who want more agency in their AI-driven environment, such as gig economy workers striving for more transparency and control over their data and how they are steered by algorithms (Booth 2020). ...
Article
Full-text available
Recently, scholars across disciplines have raised ethical, legal and social concerns about the notion of human intervention, control, and oversight over Artificial Intelligence (AI) systems. This observation becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. We apply Nicholas Garnham's conceptual perspective on mediation to users who are challenged both individually and societally when interacting with AI-enabled systems. One way to increase user agency is to provide mechanisms to contest faulty or flawed AI systems and their decisions, and to request redress. Currently, however, users structurally lack such mechanisms, which increases risks for vulnerable communities, for instance patients interacting with AI healthcare chatbots. To empower users in AI-mediated communication processes, this article introduces the concept of active human agency. We link our concept to examples of contestability and redress mechanisms and explain why these are necessary to strengthen active human agency. We argue that AI policy should introduce rights for users to swiftly contest or rectify an AI-enabled decision. This right would empower individual autonomy and strengthen fundamental rights in the digital age. We conclude by identifying routes for future theoretical and empirical research on active human agency in times of ubiquitous AI.
... The error of reasoning arises from the implication that since people say they would act in this way (a descriptive claim), it follows that the machine ought to act in this way (a normative claim). See, for example, (Allen et al., 2011; Wallach and Allen, 2009; Saptawijaya, 2011, 2015; Berreby et al., 2015; Danielson, 2015; Lin, 2015; Malle et al., 2015; Pereira, 2015, 2016; Bentzen, 2016; Bhargava and Kim, 2017; Casey, 2017; Cointe et al., 2017; Greene, 2017; Lindner et al., 2017; Santoni de Sio, 2017; Welsh, 2017; Wintersberger et al., 2017; Bjørgen et al., 2018; Grinbaum, 2018; Misselhorn, 2018; Pardo, 2018; Sommaggio and Marchiori, 2018; Baum et al., 2019; Cunneen et al., 2019; Krylov et al., 2019; Sans and Casacuberta, 2019; Wright, 2019; Agrawal et al., 2020; Awad et al., 2020; Banks, 2021; Bauer, 2020; Etienne, 2020; Gordon, 2020; Harris, 2020; Lindner et al., 2020; Nallur, 2020). ...
Preprint
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to facial recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI. We conclude by highlighting a number of possible ways forward for the field as a whole, and we advocate for different approaches towards more value-aligned AI research.
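A small worked example (ours) makes the paper's relativity point concrete: any single "ethicality" benchmark must fix a value weighting, and different weightings can reverse the ranking of the same systems. All systems, criteria, and weights below are invented.

```python
# Rank reversal under different value weightings: the same two systems,
# scored on the same two dimensions, swap places depending on whose
# values define the aggregate. All numbers are hypothetical.

scores = {
    "system_A": {"privacy": 0.90, "accuracy": 0.40},
    "system_B": {"privacy": 0.30, "accuracy": 0.95},
}
weightings = {
    "privacy_first": {"privacy": 0.8, "accuracy": 0.2},
    "utility_first": {"privacy": 0.2, "accuracy": 0.8},
}

for who, w in weightings.items():
    ranked = sorted(scores, key=lambda s: -sum(w[k] * scores[s][k] for k in w))
    print(f"{who} ranks: {ranked}")
# privacy_first prefers system_A, utility_first prefers system_B:
# a single benchmark would have to silently privilege one weighting.
```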
... Then they evaluate this approach by comparing their results with data from the Moral Machine Experiment. Cunneen et al. [31] suggest that the use of trolley-style problems as an elucidatory tool is a necessary precedent (i.e., is necessarily prior) to focusing AI applications on moral theories. And Sütfeld et al. [94] suggest that models of ethics (specifically for autonomous vehicles) should aim to match human decisions made in the same context. ...
Article
Full-text available
Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability. As a result, research in machine ethics has proliferated in recent years. This work has included using moral dilemmas as validation mechanisms for implementing decision-making algorithms in ethically-loaded situations. Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) this has potentially catastrophic consequences; however, (4) there are uses of moral dilemmas in machine ethics that are appropriate and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
... According to Cunneen et al. [16,17,23,24], the deployment of an emerging technology creates many complex challenges for governance regimes. Governance risk is exacerbated by a lack of clarity about what the best forms of governance are for AI applications, such as automated bus lane enforcement. ...
Article
Full-text available
There is an explosion of camera surveillance in our cities today. As a result, the risks of privacy infringement and erosion are growing, as is the need for ethical solutions to minimise those risks. This research aims to frame the challenges and ethics of using data surveillance technologies in a qualitative social context. A use case is presented which examines the ethical data required to automatically enforce bus lanes using camera surveillance, and ways of minimising the risks of privacy infringement and erosion in that scenario are proposed. We seek to illustrate the challenge of using these technologies in positive, socially responsible ways. Doing so requires a better understanding of the use case, covering not just the present but also the downstream risks and the downstream ethical questions. There is a gap in the literature in this respect, as well as a gap in how researchers understand and respond to it. A literature review and a detailed risk analysis of automated bus lane enforcement are conducted. Based on these, an ethical design framework is proposed and applied to the use case. Several potential solutions are created and described. The final chosen solution may also be broadly applicable to other use cases. We show how it is possible to provide an ethical AI solution for detecting infringements that incorporates privacy-by-design principles while being fair to potential transgressors. By introducing positive, pragmatic and adaptable methods to support and uphold privacy, we support access to innovation that can help us mitigate current and emerging risks.
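As one way to picture the kind of solution the abstract describes, the following sketch (ours, not the system proposed in the article) shows a privacy-by-design enforcement loop in which identifying data is pseudonymised at capture and retained in the clear only for confirmed infringements. The detection steps are stubs; a real deployment would use ANPR and lane-detection models.

```python
# Privacy-by-design sketch: retain plate text only on confirmed
# infringement; otherwise log a salted pseudonym. The Frame fields are
# hypothetical stand-ins for real detector outputs.
import hashlib
from dataclasses import dataclass

@dataclass
class Frame:
    plate_text: str    # what an ANPR model would read (stub)
    in_bus_lane: bool  # what lane detection would decide (stub)
    authorised: bool   # e.g. buses and licensed taxis are permitted

def pseudonymise(plate: str, salt: str = "rotating-salt") -> str:
    """One-way, salted identifier for aggregate statistics only."""
    return hashlib.sha256((salt + plate).encode()).hexdigest()[:12]

def process(frames):
    evidence, stats_log = [], []
    for f in frames:
        if f.in_bus_lane and not f.authorised:
            evidence.append(f.plate_text)                  # kept for enforcement
        else:
            stats_log.append(pseudonymise(f.plate_text))   # no identity kept
    return evidence, stats_log

print(process([Frame("ABC123", True, False), Frame("BUS001", True, True)]))
```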
... This is already emerging in the cases of health and motor insurance by reference to a form of relational ontology or relationality between commercial data use and citizen data use. This requirement to understand the relationality has been implicit in the structuring of work within the expert group, with different sub-groups tackling distinct areas or business lines of insurance. To give an example, there are differences between the relations that exist in the area of health insurance and those in domestic home insurance lines of business. ...
Article
Full-text available
The European Union (EU) has a strong reputation and track record for the development of guidelines for the ethical use of artificial intelligence (AI) generally. In this paper, we discuss the development of an AI and ethical framework by the European Insurance and Occupational Pensions Authority (EIOPA), for the European insurance market. EIOPA's earlier report on big data analytics (EIOPA, 2019) provided a foundation to analyze the complex range of issues associated with AI being deployed in insurance, such as behavioral insurance, parametric products, novel pricing and risk assessment algorithms, e-service, and claims management. The paper presents an overview of AI in insurance applications throughout the insurance value chain. A general discussion of ethics, AI, and insurance is provided, and a new hierarchical model is presented that describes insurance as a complex system that can be analyzed by taking a layered, multi-level approach that maps ethical issues directly to specific level(s).
... Third, given that increasing inputs in most cases implies various trade-offs or risks thereof, the question is what trade-offs are justified for reducing that decisional uncertainty? Thus, the ethics of machine decisions is a moving target in so far as all three aspects of the problem involve the question of how the machine ought to be constituted. See, e.g., Awad et al. (2018), Borenstein et al. (2019), Casey (2017), Cunneen et al. (2018), Goodall (2014, 2016), Hern (2016), Himmelreich (2018), JafariNaimi (2018), Keeling (2020a), Lin (2013, 2015), Lundgren (2020a), Mirnig and Meschtscherjakov (2019), Nyholm and Smids (2016), Santoni de Sio (2017), and Wolkenstein (2018); see Nyholm (2018) for an overview. ...
Article
Full-text available
This article is about the role of factual uncertainty in moral decision-making as it concerns the ethics of machine decision-making (i.e., decisions by AI systems, such as autonomous vehicles, autonomous robots, or decision support systems). The view defended here is that factual uncertainties require a normative evaluation, and that the ethics of machine decisions faces a triple-edged problem concerning what a machine ought to do given its technical constraints, what decisional uncertainty is acceptable, and what trade-offs are acceptable to decrease the decisional uncertainty.
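The triple-edged problem lends itself to a small worked example (ours, with invented figures): gathering more input lowers the probability of a wrong decision but carries its own cost, so deciding what residual uncertainty is acceptable is itself a normative choice.

```python
# Toy trade-off: residual error probability vs. the cost of reducing it.
# HARM and all option figures are stipulated assumptions.

HARM = 100.0  # stipulated cost of a wrong decision

# option: (residual probability of a wrong decision, cost of extra sensing)
options = {
    "decide_now": (0.10, 0.0),
    "one_more_scan": (0.04, 1.0),
    "full_verification": (0.01, 5.0),
}

def expected_cost(p_error: float, gathering_cost: float) -> float:
    return p_error * HARM + gathering_cost

for name, (p, c) in options.items():
    print(f"{name}: expected cost = {expected_cost(p, c):.1f}")
# decide_now = 10.0, one_more_scan = 5.0, full_verification = 6.0:
# under these numbers the middle option wins, but only because we
# stipulated HARM; pricing the harm is the normative step.
```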
... These findings align with the results posited by Alogaili and Mannering (2022), who analysed the severity of vehicle-pedestrian accidents. Even though many accidents in these conditions might be unavoidable road traffic accidents (Cunneen et al., 2019), this paper promotes further research and development of ADAS in adverse lighting conditions to help reduce the frequency and severity of these critical accidents. As Alogaili and Mannering (2022) suggested, these might involve policies and technologies that minimise the perception differences between daylight and dark environments. ...
Article
Full-text available
Advanced Driver Assistance Systems (ADAS) have introduced several benefits in the vehicular industry, and their proliferation presents potential opportunities to decrease road accidents. The reasons are mainly attributed to the enhanced perception of the driving environment and reduced human errors. However, as environmental and infrastructural conditions influence the performance of ADAS, the estimation of accident reductions varies across geographical regions. This study presents an interdisciplinary methodology that integrates the literature on advanced driving technologies and road safety to quantify the expected impact of ADAS on accident reduction across combinations of road types, lighting, and weather conditions. The paper investigates the safety effectiveness of ADAS and the distribution of frequency and severity of road accidents across 18 driving contexts and eight accident types. Using road safety reports from the United Kingdom (UK), it is found that a high concentration of accidents (77%) occurs within a small subset of contextual conditions (4 out of 18) and that the most severe accidents happen in dark conditions on rural roads or motorways. The results of the safety effectiveness analysis show that a full deployment of the six most common ADAS would reduce the road accident frequency in the UK by 23.8%, representing an annual decrease of 18,925 accidents. The results also show that the most frequent accident contexts, urban-clear-daylight and rural-clear-daylight, can be reduced by 29%, avoiding 7,020 and 3,472 accidents, respectively. Automatic Emergency Braking (AEB) is the most impactful technology, reducing three out of the four most frequent accident categories – intersection (by 28%), rear-end (by 27.7%), and pedestrian accidents (by 28.4%). This study helps prioritise resources in ADAS research and development focusing on the most relevant contexts to reduce the frequency and severity of road accidents. Furthermore, the identified contextual accident hotspots can assist road safety stakeholders in risk mitigation programs.
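The study's context-level logic can be restated as a short calculation: accidents avoided in a context are the product of that context's accident frequency and the combined ADAS effectiveness there. The sketch below (ours) back-calculates illustrative frequencies from the two figures quoted in the abstract (29% effectiveness; 7,020 and 3,472 avoided accidents); the frequencies are therefore assumptions, not the paper's data.

```python
# Context-level safety-effectiveness arithmetic, with frequencies
# back-calculated from the abstract's figures for illustration only.

contexts = {
    # context: (annual accident frequency, combined ADAS effectiveness)
    "urban-clear-daylight": (24_207, 0.29),  # 24_207 * 0.29 ~ 7,020
    "rural-clear-daylight": (11_972, 0.29),  # 11_972 * 0.29 ~ 3,472
}

def avoided(freq: float, effectiveness: float) -> float:
    """Expected annual accidents avoided in one driving context."""
    return freq * effectiveness

total = 0.0
for name, (freq, eff) in contexts.items():
    a = avoided(freq, eff)
    total += a
    print(f"{name}: {a:,.0f} accidents avoided")
print(f"total across these two contexts: {total:,.0f}")
```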
... This forewarning is becoming increasingly relevant and critical nowadays. For instance, in relation to AI-based recommendations and decisions made by automated vehicle technologies responding to unavoidable road traffic accidents (Cunneen et al., 2019). Concerns have emerged about the use of AI-based recommendations related to the accuracy of medical diagnosis and prognosis (Jain et al., 2020; Thrall et al., 2021), how inaccurate AI-based healthcare recommendations may adversely impact levels of trust between physicians and patients (Hoeren & Niehoff, 2018), as well as new technology acceptance levels among users (Fan et al., 2018). ...
Article
Full-text available
Recommender systems, one realm of AI, have attracted significant research attention due to concerns about their devastating effects on society's most vulnerable and marginalised communities. Both the media and the academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet there is limited knowledge about the extent to which individuals might question AI-based recommendations when these are perceived as biased. To address this gap in knowledge, we investigate the effects of espoused national cultural values on AI questionability, by examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to the current academic discourse about the need to hold AI accountable.
... This framing by Mobileye's founder, Professor Amnon Shashua, is an important example of the commercial framings of AVs and the claim of AV decisional superiority. The second framing rehearses the claim that AVs will be significantly limited in terms of driving decisionality and will be unable to achieve full autonomous driving (Cunneen et al. 2019a, 2019b). This view is endorsed by Professor Luciano Floridi, a prominent AI researcher; in the above quote he is dismissing the possibility of superior AV decisionality. ...
Article
Full-text available
This article aims to introduce a degree of technological and ethical realism to the framing of autonomous vehicle perception and decisionality. The objective is to move the socioethical dialog surrounding autonomous vehicle decisionality from the dominance of "trolley framings" to more pressing ethical issues. The article argues that more realistic ethical framings of autonomous vehicle technologies should focus on the matters of HMI, machine perception, classification, and data privacy, which are some distance from the decisionality framing premise of the MIT Moral Machine experiment. To support this claim the article appeals to state-of-the-art and emerging technologies concerning autonomous vehicle perception and decisionality as a means to inform and frame ethical contexts. This is further supported by considering a context-specific ethical framing for each time phase we anticipate for emerging autonomous vehicle technology.
... One possible way to navigate what EIOPA posits as digital ethics is to consider insurance products and services, contextualised in specific product clusters, as socio-technological relations. This is already emerging in the cases of health and motor insurance, by reference to a form of relational ontology or relationality between commercial data use and citizen data use (Cunneen, Mullins, Murphy, & Gaines, 2019). This requirement to understand the relationality has been implicit in the structuring of work within the expert group, with different sub-groups tackling distinct areas or business lines of insurance. ...
Article
Drawing on teleology, this study aims to conceptualize destination smartness from a tourist perspective by identifying what intelligences a “smart” destination has executed. Thematic analysis of 25 interviews with experienced “smart tourists” unveiled a hierarchical framework of destination smartness, visualizing the components of destination smartness as seen by tourists. Eight identified intelligences were then situated within a 2 (crystalized development path–fluid development path) × 2 (task-oriented focus–interaction-oriented focus) × 2 (active service provision–passive service provision) plane. This study also lays a theoretical foundation for future studies and provides practical implications for the development of smart tourism.
Article
Background: Artificial intelligence (AI) represents the epitome of scientific advancement and the future of technology. The use of algorithmic decision making has expanded into our reality and touched the lives of millions of people around the world. Yet the proper development, implementation, and evaluation of AI technologies requires considering their ethical implications for humanity and society at large. The objective of this study is to synthesize evidence on the ethical considerations of developing, implementing, and evaluating AI technologies. Methods: We reviewed the literature by searching Medline, Embase, PsychINFO, and 2 other databases for analytical and experimental publications on ethics and AI. We used a mixed-methods data analysis plan to quantitatively and qualitatively synthesize evidence and derive themes on the subject matter by utilizing verbatim word processing and heat mapping. Results: Of the 1504 records that were captured by our search, we included n=50 publications for analysis (33 conceptual analyses and 17 experiments). Our findings highlight five ethical themes pertaining to AI technologies: transparency and trust; privacy and safety; morality and fairness (equity); accountability and responsibility; and stakeholder autonomy. Conclusion: Ensuring the ethical development, implementation, and evaluation of AI technologies requires gaining the trust of end users by mitigating opacity, maintaining accountability, protecting privacy, and respecting autonomy. Future research should further explore these ethical prerequisites in different contexts and with different applications of AI.
Article
Traffic accident forecasting is a major priority for governmental traffic organizations around the world seeking to reduce losses of life, property, and economic value. The increasing amounts of traffic accident data have been used to train machine learning predictors, although this is a challenging task due to the relative rarity of accidents, the inter-dependencies of traffic accidents in both time and space, and their high dependency on human behavior. Recently, deep learning techniques have shown significant prediction improvements over traditional models, but some difficulties and open questions remain around their applicability, accuracy, and ability to provide practical information. This paper proposes a new spatio-temporal deep learning framework based on a latent model for simultaneously predicting the number of traffic accidents in each neighborhood in Madrid, Spain, over varying training and prediction time horizons.
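The paper's actual model is a deep latent spatio-temporal network, which is not reproduced here. As a minimal, hedged illustration of the prediction task it addresses, the baseline below forecasts next-week accident counts per neighbourhood from an exponentially weighted history with a crude spatial prior; all data are synthetic.

```python
# Naive spatio-temporal baseline for the forecasting task (not the
# paper's deep latent model). All counts are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neighbourhoods, n_weeks = 21, 104  # e.g. Madrid districts, two years
counts = rng.poisson(lam=3.0, size=(n_weeks, n_neighbourhoods))

def forecast_next_week(history: np.ndarray, halflife: float = 8.0) -> np.ndarray:
    """Exponentially weighted temporal mean per neighbourhood, blended
    with a global mean as a crude stand-in for spatial dependence."""
    t = np.arange(history.shape[0])
    w = 0.5 ** ((t[-1] - t) / halflife)  # recent weeks weigh more
    temporal = (w[:, None] * history).sum(axis=0) / w.sum()
    spatial = temporal.mean()              # global spatial prior
    return 0.8 * temporal + 0.2 * spatial  # blend weights are arbitrary

print(np.round(forecast_next_week(counts), 2))
```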
Article
Full-text available
With the advent of autonomous vehicles, society will need to confront a new set of risks which, for the first time, includes the ability of socially embedded forms of artificial intelligence to make complex risk mitigation decisions: decisions that will ultimately engender tangible life and death consequences. Since AI decisionality is inherently different from human decision-making processes, questions are raised regarding how AI weighs decisions, how we are to mediate these decisions, and what such decisions mean in relation to others. Society, policy, and end-users therefore need to fully understand such differences. While AI decisions can be contextualised to specific meanings, significant challenges remain in terms of the technology of AI decisionality, the conceptualisation of AI decisions, and the extent to which various actors understand them. This is particularly acute in terms of analysing the benefits and risks of AI decisions. Due to their potential safety benefits, autonomous vehicles are often presented as significant risk mitigation technologies. There is also a need to understand the potential new risks which autonomous vehicle driving decisions may present. Such new risks are framed as decisional limitations, in that artificial driving intelligence will lack certain decisional capacities. This is most evident in the inability to annotate and categorise the driving environment in terms of human values and moral understanding. In both cases there is a need to scrutinise how autonomous vehicle decisional capacity is conceptually framed and how this, in turn, impacts a wider grasp of the technology in terms of risks and benefits. This paper interrogates the significant shortcomings in the current framing of the debate, both in terms of safety discussions and in consideration of AI as a moral actor, and offers a number of ways forward.
Article
Full-text available
With respect to questions of fact, people use heuristics – mental short-cuts, or rules of thumb, that generally work well, but that also lead to systematic errors. People use moral heuristics too – moral short-cuts, or rules of thumb, that lead to mistaken and even absurd moral judgments. These judgments are highly relevant not only to morality, but to law and politics as well. Examples are given from a number of domains, including risk regulation, punishment, reproduction and sexuality, and the act/omission distinction. In all of these contexts, rapid, intuitive judgments make a great deal of sense, but sometimes produce moral mistakes that are replicated in law and policy. One implication is that moral assessments ought not to be made by appealing to intuitions about exotic cases and problems; those intuitions are particularly unlikely to be reliable. Another implication is that some deeply held moral judgments are unsound if they are products of moral heuristics. The idea of error-prone heuristics is especially controversial in the moral domain, where agreement on the correct answer may be hard to elicit; but in many contexts, heuristics are at work and they do real damage. Moral framing effects, including those in the context of obligations to future generations, are also discussed.
Article
Full-text available
This paper explores how the phenomenology of using self-driving cars influences conditions for exercising and ascribing responsibility. First, a working account of responsibility is presented, which identifies two classic Aristotelian conditions for responsibility and adds a relational one, and which makes a distinction between responsibility for (what one does) and responsibility to (others). Then, this account is applied to a phenomenological analysis of what happens when we use a self-driving car and participate in traffic. It is argued that self-driving cars threaten the exercise and ascription of responsibility in several ways. These include the replacement of human agency by machine agency, but also the user's changing epistemic relation to the environment and others, which can be described in terms of (dis)engagement. It is concluded that the discussion about the ethics of self-driving cars and related problems of responsibility should be restricted neither to general responsibilities related to the use of self-driving cars and its objective risks, nor to questions regarding the behavior, intelligence, autonomy, and ethical "thinking" of the car in response to the objective features of traffic situations (e.g. various scenarios). Rather, it should also reflect on the shifting experience of the user: how the new technology reshapes the subjectivity of the user and the moral consequences this has.
Article
Full-text available
As automated vehicles receive more attention from the media, there has been an equivalent increase in the coverage of the ethical choices a vehicle may be forced to make in certain crash situations with no clear safe outcome. Much of this coverage has focused on a philosophical thought experiment known as the “trolley problem,” and substituting an automated vehicle for the trolley and the car’s software for the bystander. While this is a stark and straightforward example of ethical decision making for an automated vehicle, it risks marginalizing the entire field if it is to become the only ethical problem in the public’s mind. In this chapter, I discuss the shortcomings of the trolley problem, and introduce more nuanced examples that involve crash risk and uncertainty. Risk management is introduced as an alternative approach, and its ethical dimensions are discussed.
Conference Paper
Full-text available
Automated vehicle (AV) as a social agent in a dynamic traffic environment mixed with other road users, will encounter risk situations compelling it to make decisions in complex dilemmas. This paper presents the AVEthics (Ethics policy for Automated Vehicles) project. AVEthics aims to provide a framework for an ethics policy for the artificial intelligence of an AV in order to regulate its interactions with other road users. First, we will specify the kind of (artificial) ethics that can be applied to AV, including its moral principles, values and weighing rules with respect to human ethics and ontology. Second, we will implement this artificial ethics by means of a serious game in order to test interactions in dilemma situations. Third, we will evaluate the acceptability of the ethics principles proposed for an AV applied to simulated use cases. The outcomes of the project are expected to improve the operational safety design of an AV and render it acceptable for the end-user.
Article
Full-text available
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Article
Full-text available
Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.
Chapter
Full-text available
If motor vehicles are to be truly autonomous and able to operate responsibly on our roads, they will need to replicate – or do better than – the human decision-making process. But some decisions are more than just a mechanical application of traffic laws and plotting a safe path. They seem to require a sense of ethics, and this is a notoriously difficult capability to reduce into algorithms for a computer to follow.
Chapter
Full-text available
As agents moving through an environment that includes a range of other road users – from pedestrians and cyclists to other human or automated drivers – automated vehicles continuously interact with the humans around them. The nature of these interactions is a result of the programming in the vehicle and the priorities placed there by the programmers. Just as human drivers display a range of driving styles and preferences, automated vehicles represent a broad canvas on which the designers can craft the response to different driving scenarios.
Article
Full-text available
I start with the premise that any social robot must have moral competence. I offer a framework for what moral competence is and sketch the prospects for it to be developed in artificial agents. After considering three proposals for requirements of 'moral agency' I propose instead to examine moral competence as a broader set of capacities. I posit that human moral competence consists of five components and that a social robot should ideally instantiate all of them: (1) A system of norms; (2) a moral vocabulary; (3) moral cognition and affect; (4) moral decision making and action; and (5) moral communication.
Article
Full-text available
In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driver style analysis systems, the application of these systems, and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilising the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines offer promising capabilities for unique driver identification if model complexity can be reduced.
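To make the review's conclusion concrete, here is a minimal sketch, assuming scikit-learn, of Support Vector Machine driver identification on synthetic driving-style features (stand-ins for statistics such as mean speed, acceleration variance, or braking intensity that a real system would extract from telematics data).

```python
# SVM driver identification on synthetic "driving style" features.
# The feature generation is an assumption for illustration; only the
# classifier pipeline reflects the technique named in the review.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_drivers, trips_per_driver, n_features = 5, 40, 6
# Give each simulated driver a distinct style offset in feature space.
styles = rng.normal(0.0, 2.0, size=(n_drivers, n_features))
X = np.vstack([s + rng.normal(0.0, 1.0, size=(trips_per_driver, n_features))
               for s in styles])
y = np.repeat(np.arange(n_drivers), trips_per_driver)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"driver identification accuracy: {clf.score(X_te, y_te):.2f}")
```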
Article
Full-text available
Automated vehicles have received much attention recently, particularly the Defense Advanced Research Projects Agency Urban Challenge vehicles, Google's self-driving cars, and various others from auto manufacturers. These vehicles have the potential to reduce crashes and improve roadway efficiency significantly by automating the responsibilities of the driver. Still, automated vehicles are expected to crash occasionally, even when all sensors, vehicle control components, and algorithms function perfectly. If a human driver is unable to take control in time, a computer will be responsible for precrash behavior. Unlike other automated vehicles, such as aircraft, in which every collision is catastrophic, and unlike guided track systems, which can avoid collisions only in one dimension, automated roadway vehicles can predict various crash trajectory alternatives and select a path with the lowest damage or likelihood of collision. In some situations, the preferred path may be ambiguous. The study reported here investigated automated vehicle crashing and concluded the following: (a) automated vehicles would almost certainly crash, (b) an automated vehicle's decisions that preceded certain crashes had a moral component, and (c) there was no obvious way to encode complex human morals effectively in software. The paper presents a three-phase approach to develop ethical crashing algorithms; the approach consists of a rational approach, an artificial intelligence approach, and a natural language requirement. The phases are theoretical and should be implemented as the technology becomes available.
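The paper's "rational approach" phase suggests an ordered decision rule; the sketch below (ours, under invented figures) shows one such ordering: filter candidate crash trajectories by a hard constraint such as traffic law, then pick the lowest expected damage among the remainder, falling back to the full set only if no lawful option exists.

```python
# Rule-then-cost trajectory selection: lawfulness as a hard filter,
# expected damage as the tie-breaker. All candidates and damage
# estimates are hypothetical.

candidates = [
    # (name, violates_law, expected damage estimate on a 0-1 scale)
    ("brake_straight", False, 0.60),
    ("swerve_shoulder", False, 0.25),
    ("cross_double_line", True, 0.10),  # lowest damage but unlawful
]

lawful = [c for c in candidates if not c[1]]
pool = lawful if lawful else candidates  # fall back if nothing lawful remains
choice = min(pool, key=lambda c: c[2])
print("selected:", choice[0])  # swerve_shoulder under these assumptions
```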
Article
Full-text available
A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement of autonomous vehicles. In particular, Patrick Lin's concern that any security gain derived from the introduction of autonomous cars would constitute a trade-off in human lives will be addressed. The second question is whether it would be morally permissible to impose liability on the user based on a duty to pay attention to the road and traffic and to intervene when necessary to avoid accidents. Doubts about the moral legitimacy of such a scheme are based on the notion that it is a form of defamation if a person is held to blame for causing the death of another by his inattention if he never had a real chance to intervene. Therefore, the legitimacy of such an approach would depend on the user having an actual chance to do so. The last option discussed in this paper is a system in which a person using an autonomous vehicle has no duty (and possibly no way) of interfering, but is still held (financially, not criminally) responsible for possible accidents. Two ways of doing so are discussed, but only one is judged morally feasible.
Article
Full-text available
Introduction (Bart Selman): AI textbooks and papers often discuss the big questions, such as "how to reason with uncertainty", "how to reason efficiently", or "how to improve performance through learning". It is more difficult, however, to find descriptions of concrete problems or challenges that are still ambitious and interesting, yet not so open-ended. The goal of this panel is to formulate a set of such challenge problems for the field. Each panelist was asked to formulate one or more challenges. The emphasis is on problems for which there is a good chance that they will be resolved within the next five to ten years. A good example of the potential benefit of a concrete AI challenge problem is the recent success of Deep Blue. Deep Blue is the result of a research effort focused on a single problem: develop a program to defeat the world chess champion. Although Deep Blue has not yet quite achieved this goal, it played a remarkably strong game against Kasparov in their recent match.
Book
This book takes a look at fully automated, autonomous vehicles and discusses many open questions: How can autonomous vehicles be integrated into the current transportation system with diverse users and human drivers? Where do automated vehicles fall under current legal frameworks? What risks are associated with automation and how will society respond to these risks? How will the marketplace react to automated vehicles and what changes may be necessary for companies? Experts from Germany and the United States define key societal, engineering, and mobility issues related to the automation of vehicles. They discuss the decisions programmers of automated vehicles must make to enable vehicles to perceive their environment, interact with other road users, and choose actions that may have ethical consequences. The authors further identify expectations and concerns that will form the basis for individual and societal acceptance of autonomous driving. While the safety benefits of such vehicles are tremendous, the authors demonstrate that these benefits will only be achieved if vehicles have an appropriate safety concept at the heart of their design. Realizing the potential of automated vehicles to reorganize traffic and transform mobility of people and goods requires similar care in the design of vehicles and networks. By covering all of these topics, the book aims to provide a current, comprehensive, and scientifically sound treatment of the emerging field of "autonomous driving".
Article
This paper presents a knowledge synthesis of ethical questions for the application of rational ethics theories to human factors in vehicle automation. First, a brief summary of ethical concerns related to transportation automation and human factors is presented. A series of theoretical questions are then posed for different levels of vehicle automation. Particular concerns relating to the Principle of Utility and the Principle of Respect for Persons are highlighted for low levels of automation, high levels of automation, and full automation through the use of theoretical scenarios. Although some recommendations are drawn from these scenarios, the primary purpose of this paper is to serve as a starting point to encourage discussion and collaboration between human factors professionals, engineers, policymakers, transportation officials, software programmers, manufacturers, and the driving public regarding realistic goals for automated vehicle implementation.
Book
For the past hundred years, innovation within the automotive sector has created safer, cleaner, and more affordable vehicles, but progress has been incremental. The industry now appears close to substantial change, engendered by autonomous, or "self-driving," vehicle technologies. This technology offers the possibility of significant benefits to social welfare — saving lives; reducing crashes, congestion, fuel consumption, and pollution; increasing mobility for the disabled; and ultimately improving land use. This report is intended as a guide for state and federal policymakers on the many issues that this technology raises. After surveying the advantages and disadvantages of the technology, RAND researchers determined that the benefits of the technology likely outweigh the disadvantages. However, many of the benefits will accrue to parties other than the technology's purchasers. These positive externalities may justify some form of subsidy. The report also explores policy issues, communications, regulation and standards, and liability issues raised by the technology; and concludes with some tentative guidance for policymakers, guided largely by the principle that the technology should be allowed and perhaps encouraged when it is superior to an average human driver.
Article
The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue that to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.
Book
This book encapsulates around a decade's collaborative research between Samir Chopra (City University of New York Philosophy Department) and Laurence White (lawyer and policymaker). The book deals with issues relating to contract law, agency law, knowledge attribution to artificial agents and their principals, tort liability of and for artificial agents, and personhood for artificial agents. The book takes a comparative approach, drawing on a wide range of sources in US, EU and Australian law.
Article
As robots become more autonomous — capable of acting in complex ways, independent of direct human interaction — their actions will challenge traditional notions of responsibility. How, for example, do we sort out responsibility when a self-driving car swerves this way or that in a situation where all possible outcomes lead to harm? This paper explores the question of responsibility from both philosophical and legal perspectives, by examining the relationship between designers, semi-autonomous robots and users. Borrowing concepts from the philosophy of technology, bioethics and law, I argue that in certain use contexts we can reasonably describe a robot as acting as a moral proxy on behalf of a person. In those cases I argue it is important to instantiate the proxy relationship in a morally justifiable way. I examine two questions that are helpful in determining how to appropriately instantiate proxy relationships with semi-autonomous robots, and that we can also ask when attempting to sort out responsibility: 1) On whose behalf was the robot acting?; and 2) On whose behalf ought the robot to have been acting? Focusing on proxy relationships allows us to shift our focus away from a strictly causal model of responsibility and focus also on a proxy model informed by an ethical analysis of the nature of the designer-artefact-user relationship. By doing so I argue that we gain some traction on problems of responsibility with semi-autonomous robots. I examine two cases to demonstrate how a shift towards a proxy model of responsibility, and away from a strictly causal model of responsibility, helps to manage risks and provides a more accurate accounting of responsibility in some use contexts. I offer some suggestions as to how we might decide whom a robot ought legitimately to be acting on behalf of, while offering some thoughts on what legal and ethical implications my argument carries for designers and users.
Article
Autonomous vehicles are complex systems with many interacting hardware and software components operating in an uncertain and dynamic environment. Organizational principles and procedures are described which help assure reliable and intelligent actions on the part of the vehicle. This includes both high-level system models, as well as process level monitoring and testing to verify and validate the system components on the fly. We propose a high-level model based on a probabilistic characterization of the inputs and outputs (or other observable elements) of the modules, and for individual components, we propose to exploit Instrumented Logical Sensors. These methodologies are to be demonstrated in the context of the autonomous vehicle.
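As an illustration of the process-level monitoring idea, the sketch below (ours) characterises a module's observable output with a simple Gaussian model fitted on nominal data and flags runtime values that fall outside an agreed z-score band; the model form and threshold are assumptions, not the chapter's method.

```python
# On-the-fly output monitoring against a probabilistic characterisation.
# The Gaussian model and 3-sigma threshold are illustrative assumptions.
import statistics

class OutputMonitor:
    def __init__(self, calibration, z_limit: float = 3.0):
        self.mean = statistics.fmean(calibration)
        self.std = statistics.stdev(calibration)
        self.z_limit = z_limit

    def check(self, value: float) -> bool:
        """True if the module output is consistent with its model."""
        return abs(value - self.mean) / self.std <= self.z_limit

# Calibrate on nominal range-sensor readings, then validate on the fly.
monitor = OutputMonitor([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])
print(monitor.check(10.3))  # True: within expected variation
print(monitor.check(25.0))  # False: flag the module for fault handling
```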
Book
The human-built environment is increasingly being populated by artificial agents that, through artificial intelligence (AI), are capable of acting autonomously. The software controlling these autonomous systems is, to date, "ethically blind" in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The title Moral Machines: Teaching Robots Right from Wrong refers to the need for these increasingly autonomous systems (robots and software bots) to become capable of factoring ethical and moral considerations into their decision making. The new field of inquiry directed at the development of artificial moral agents is referred to by a number of names, including machine morality, machine ethics, roboethics, or artificial morality. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems.
Article
I will begin by stating three theses which I present in this paper. The first is that it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology, in which we are conspicuously lacking. The second is that the concepts of obligation, and duty—moral obligation and moral duty, that is to say—and of what is morally right and wrong, and of the moral sense of "ought," ought to be jettisoned if this is psychologically possible; because they are survivals, or derivatives from survivals, from an earlier conception of ethics which no longer generally survives, and are only harmful without it. My third thesis is that the differences between the well-known English writers on moral philosophy from Sidgwick to the present day are of little importance.
Conference Paper
Road traffic accidents are a social and public challenge. Various spatial concentration detection methods have been proposed to discover the concentration patterns of traffic accidents. However, current methods treat each traffic accident location as a point without consideration of the severity level, and the final traffic accident risk map for the whole study area ignores the users’ requirements. In this paper, we propose an ontology-based traffic accident risk mapping framework. In the framework, the ontology represents the domain knowledge related to the traffic accidents and supports the data retrieval based on users’ requirements. A new spatial clustering method that takes into account the numbers and severity levels of accidents is proposed for risk mapping. To demonstrate the framework, a system prototype has been implemented. A case study in the city of Calgary is also discussed.
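The framework's weighting of severity can be shown with a short sketch (ours): instead of counting accident points, each point contributes to the risk surface in proportion to its severity via a kernel. Coordinates, severity codes, and bandwidth below are invented.

```python
# Severity-weighted kernel risk surface on a small grid. All data are
# synthetic; a real system would retrieve points via the ontology layer.
import numpy as np

# (x, y, severity): e.g. severity 1 = minor, 3 = fatal
accidents = np.array([[2.0, 3.0, 1.0], [2.2, 2.9, 3.0], [7.0, 8.0, 1.0]])

def risk_surface(points: np.ndarray, size: int = 10, bandwidth: float = 1.0):
    xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    grid = np.zeros((size, size))
    for x, y, sev in points:
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        grid += sev * np.exp(-d2 / (2 * bandwidth**2))  # severity-weighted kernel
    return grid

risk = risk_surface(accidents)
print("highest-risk cell:", np.unravel_index(np.argmax(risk), risk.shape))
# lands near (2, 3), pulled by the severe accident rather than raw counts
```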
Conference Paper
AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human-level intelligence and the ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not.
Rationale of Reward, Book 3, Chapter 1
  • J. Bentham
The ethics of saving lives with autonomous cars are far murkier than you think
  • P. Lin
The momentous advance in artificial intelligence demands a new set of ethics
  • J. Millar
The moral algorithm: how to set the moral compass for autonomous vehicles; moral decisions by autonomous vehicles and the need for regulation
  • S. Young
Review of accident causation models used in road accident research of the EC FP7 project DaCoTA
  • T. Hermitte
The myth of morality
  • R. Joyce
Driving automation & changed driver's task - Effect of driver-interfaces on intervention
  • A. P. van den Beukel
  • M. C. van der Voort
Meaning as use in the digital turn
  • A. Biletzki
Methodische Probleme der volkswirtschaftlichen Bewertung von Verkehrsunfällen [Methodological problems in the economic evaluation of traffic accidents]
  • U. van Suntum