Science and Engineering Ethics (2021) 27:72
https://doi.org/10.1007/s11948-021-00350-5
ORIGINAL RESEARCH/SCHOLARSHIP
Correctness andCompleteness ofProgramming
Instructions forTraffic Circulation
DanielaGlavaničová1 · MatteoPascucci2
Received: 12 November 2020 / Accepted: 28 October 2021 / Published online: 22 November 2021
© The Author(s), under exclusive licence to Springer Nature B.V. 2021
Abstract
In the present article we exploit the logical notions of correctness and completeness to provide an analysis of some fundamental problems that can be encountered by a software developer when transforming norms for traffic circulation into programming instructions. Relying on this analysis, we then introduce a question and answer procedure that can be helpful, in case of an accident, to clarify which components of an existing framework should be revised and to what extent software developers can be held responsible.
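To make the two logical notions concrete, here is a minimal sketch, invented for illustration and not taken from the article: a toy traffic norm (light colours and an emergency-vehicle exemption are assumed) and a developer's encoding of it. The encoding is incorrect when it issues a verdict the norm contradicts, and incomplete when the norm issues a verdict the program does not cover.

```python
# Hypothetical toy example (not from the article): a traffic norm and a
# developer's encoding of it, compared for correctness and completeness.

def norm_requires_stop(light, emergency):
    """The norm's verdict: True = must stop, False = may proceed,
    None = the norm is silent on this case."""
    if light == "red":
        return not emergency        # emergency vehicles are exempt
    if light == "amber":
        return True                 # the norm also regulates amber
    if light == "green":
        return False
    return None

def encoded_requires_stop(light, emergency):
    """The program's verdict; it ignores the exemption and amber lights."""
    if light == "red":
        return True
    if light == "green":
        return False
    return None                     # case not covered by the program

cases = [(l, e) for l in ("red", "amber", "green") for e in (True, False)]

# Correctness: the program never issues a verdict that contradicts the norm.
incorrect = [c for c in cases
             if encoded_requires_stop(*c) is not None
             and norm_requires_stop(*c) is not None
             and encoded_requires_stop(*c) != norm_requires_stop(*c)]

# Completeness: the program covers every case the norm regulates.
incomplete = [c for c in cases
              if norm_requires_stop(*c) is not None
              and encoded_requires_stop(*c) is None]

print("correctness violations:", incorrect)   # [('red', True)]
print("completeness gaps:", incomplete)       # [('amber', True), ('amber', False)]
```

On this toy framework the encoding is incorrect for emergency vehicles at a red light and incomplete for amber lights; these are the kinds of defects that the article's question and answer procedure is designed to help localize after an accident.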
Keywords Autonomous vehicles · Encoding rules · Framework revision · Question and answer procedure · Responsibility
Introduction
The possible large-scale release of autonomous vehicles (hereafter, AVs) in the coming years is currently the object of an intense debate among different communities. One of the main issues is understanding how our existing normative systems and infrastructures for traffic circulation have to change in order to accommodate AVs (see, for instance, Douma and Palodichuk 2012). This change will hopefully be achieved through a gradual process involving vehicles with increasing levels of automation, in which a series of tests serves to detect and correct defects in existing frameworks. For instance, in the United States, the National Highway Traffic Safety Administration currently (June
* Matteo Pascucci
matteopascucci.academia@gmail.com
Daniela Glavaničová
daniela.glavanicova@gmail.com
1 Department of Logic and Methodology of Sciences, Faculty of Arts, Comenius University in Bratislava, Gondova 2, 81102 Bratislava, Slovak Republic
2 Department of Analytic Philosophy, Institute of Philosophy, Slovak Academy of Sciences, Klemensova 19, 81364 Bratislava, Slovak Republic
... This particular literary use of neural networks, in which authorship is assigned to the network itself, is quite new; up to now, neural networks have largely been used to manage social media, internet shopping, customer service, and research. The literature on such technologies therefore often focuses on ethical issues regarding safety, responsibility, and trust in the context of social media (see Bezuidenhout & Ratti, 2021; Boem & Galletti, 2021; Ratti & Stapleford, 2021; Glavaničová & Pascucci, 2021). ...
Article
Full-text available
In this paper, I explore Derrida’s concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger’s view of machine creation and then present Derrida’s criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida’s criticism is based. The thesis defended in the paper is that Derrida’s account of iterability provides a helpful framework for understanding the phenomenon of machine learning–generated literature. His account of textuality highlights the incalculability and mechanical elements characteristic of all texts, including machine-generated texts. By applying Derrida’s concept to the phenomenon of machine creation, we can deconstruct the distinction between human and non-human creation. As I propose in the conclusion to this paper, this provides a basis on which to consider potential positive uses of machine learning.
Conference Paper
Full-text available
Logicians participating in this conference stand united for peace. Logic4Peace invited contributions in any area of logic, including: • philosophical logic, philosophy of logic and history of logic; • mathematical and computational logic; • applied logic and logical structures used in science and the humanities. All registration fees and donations were spent on two specific causes: to help our colleagues in Ukraine in this time of war, who are either displaced or have lost their homes, and to support the charitable fund 'Voices of children' which provides humanitarian aid and assists with the ongoing evacuation processes. https://events.illc.uva.nl/Logic4Peace
Conference Paper
Full-text available
This is not to say that formal logic is useless in the field of law. However, formal logic is neither sufficient nor necessary to reason and argue like a jurist. At the same time, in order to be a successful jurist, one must grasp and use a special legal logic, which belongs to the domain of contemporary informal logic. The idea of a material logic in general, and of a nontrivial legal logic in particular, looks like a manifestation of the broad contemporary movement to make logic, so to speak, less transcendental and more empirical.
Article
Full-text available
Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with self‐driving cars. Next, worries about responsibility gaps and retribution gaps from the philosophical literature are introduced. This leads to a discussion of whether self‐driving cars are a form of agents that act independently of human agents. It is suggested that it is better to analyze their apparent agency in terms of human–robot collaborations, within which humans play the most important roles. The next topic is the idea that the safety potential of self‐driving cars might create a duty to either switch to self‐driving cars or seek means of making conventional cars safer. Lastly, there is a short discussion of ethical issues related to safe human–robot coordination within mixed traffic featuring both self‐driving cars and conventional cars.
Article
Full-text available
Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on lay‐people's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios. The aim of the article is to provide an overview of the existing literature on these topics and to assess how far the discussion has gotten so far.
Article
Full-text available
Despite numerous ethical examinations of automated vehicles, philosophers have neglected to address how these technologies will affect vulnerable people. To address this lacuna, researchers must analyze how driverless cars could hinder or help social justice. In addition to thinking through these aspects, scholars must also pay attention to the extensive moral dimensions of automated vehicles, including how they will affect the public, nonhumans, future generations, and culturally significant artifacts. If planners and engineers undertake this task, then they will have to prioritize their efforts to avoid additional harm. The author shows how employing an approach called a “complex moral assessment” can help professionals implement these technologies into existing mobility systems in a just and moral fashion.
Article
Full-text available
Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that, they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
Article
In emergency situations, autonomous vehicles will be forced to operate at their friction limits in order to avoid collisions. In these scenarios, coordinating the planning of the vehicle's path and speed gives the vehicle the best chance of avoiding an obstacle. Fast reaction time is also important in an emergency, but approaches to the trajectory planning problem based on nonlinear optimization are computationally expensive. This paper presents a new scheme that simultaneously modifies the desired path and speed profile for a vehicle in response to the appearance of an obstacle, significant tracking error, or other environmental change. By formulating the trajectory optimization problem as a quadratically constrained quadratic program, solution times of less than 20 milliseconds are possible even with a 10 second planning horizon. A simplified point mass model is used to describe the vehicle's motion, but the incorporation of longitudinal weight transfer and road topography means that the vehicle's acceleration limits are modeled more accurately than in comparable approaches. Experimental data from an autonomous vehicle in two scenarios demonstrate how the trajectory planner enables the vehicle to avoid an obstacle even when the obstacle appears suddenly and the vehicle is already operating near the friction limits.
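As a rough illustration of this kind of formulation, the sketch below casts speed-profile planning over a fixed path as a quadratically constrained quadratic program using the cvxpy modeling library. It is a reconstruction under simplifying assumptions (point-mass model, known curvature, a single friction-circle constraint), not the paper's implementation, and all parameter values are invented.

```python
# A minimal QCQP sketch, assuming a point-mass model on a fixed path with
# known curvature; not the paper's code. Using squared speed as the decision
# variable keeps the dynamics affine and the friction circle quadratic.
import cvxpy as cp
import numpy as np

N = 50                        # discretization points along the path
ds = 1.0                      # arc-length step between points [m]
kappa = 0.02 * np.ones(N)     # path curvature [1/m] (assumed known)
a_max = 0.9 * 9.81            # friction budget mu * g [m/s^2]

v2 = cp.Variable(N, nonneg=True)        # squared speed at each point

# Longitudinal acceleration from squared-speed differences: a = d(v^2)/ds / 2.
a_long = (v2[1:] - v2[:-1]) / (2 * ds)
# Lateral acceleration a_lat = kappa * v^2 (affine in v2).
a_lat = cp.multiply(kappa, v2)

constraints = [
    v2[0] == 15.0 ** 2,   # current speed: 15 m/s
    # Friction circle: a_long^2 + a_lat^2 <= a_max^2 -- the quadratic
    # constraints that make this a QCQP.
    cp.square(a_long) + cp.square(a_lat[:-1]) <= a_max ** 2,
]

# Track a desired speed profile (here a constant 20 m/s) as closely as
# possible, measured in squared-speed coordinates.
objective = cp.Minimize(cp.sum_squares(v2 - 20.0 ** 2))

prob = cp.Problem(objective, constraints)
prob.solve()
print("status:", prob.status, "| first speeds [m/s]:",
      np.round(np.sqrt(v2.value[:5]), 2))
```

Because the resulting problem is convex, it can be re-solved at a high rate as obstacles appear, which is consistent with the fast replanning the abstract reports.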
Chapter
Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and numerous other sectors. While many of these changes are beneficial, some will inevitably lead to harm. Who should be liable when a robot causes harm? This chapter addresses how the law can and should account for robot liability, including robots that exist today and that could potentially be built in the future. Current and near-future robots pose no significant challenge: existing law or minor variations therein can readily handle them. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable, as well as if future robots can cause major global catastrophe.
Chapter
In this chapter, we give a brief overview of the traditional notion of responsibility and introduce a concept of distributed responsibility within a responsibility network of engineers, driver, and autonomous driving system. In order to evaluate this concept, we explore the notion of man-machine hybrid systems with regard to self-driving cars and conclude that the unit comprising the car and the operator/driver consists of such a hybrid system that can assume a shared responsibility different from the responsibility of other actors in the responsibility network. Discussing certain moral dilemma situations that are structured much like trolley cases, we deduce that as long as there is something like a driver in autonomous cars as part of the hybrid system, she will have to bear the responsibility for making the morally relevant decisions that are not covered by traffic rules.
Chapter
As society embarks on the robotics revolution, lawmakers will need to enact laws that directly address robotic interactions with humans. They will have to consider who or what is responsible for any harm caused by robots and how to properly compensate the injured parties. If they do not, robot producers may incur unexpected and excessive costs, which would disincentivize investment. Or if victims are not adequately compensated, such producers may face a backlash from injured parties. This chapter examines these issues in the realm of autonomous vehicles, and it recommends that manufacturers of autonomous vehicles be treated as the drivers of their vehicles for purposes of assigning civil liability for harm caused by the vehicles’ autonomous mode.
Article
Both humans and the sensors on an autonomous vehicle have limited sensing capabilities. When these limitations coincide with scenarios involving vulnerable road users, it becomes important to account for these limitations in the motion planner. For the scenario of an occluded pedestrian crosswalk, the speed of the approaching vehicle should be a function of the amount of uncertainty on the roadway. In this work, the longitudinal controller is formulated as a partially observable Markov decision process and dynamic programming is used to compute the control policy. The control policy scales the speed profile to be used by a model predictive steering controller.
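To give a flavour of this kind of scheme, here is a deliberately simplified sketch, constructed for illustration rather than taken from the paper: the belief that a pedestrian is hidden behind the occlusion is discretized, and backward dynamic programming picks a speed command for each (distance, belief) state. A full POMDP controller would additionally update the belief from sensor observations; here the belief is held fixed, and the risk model is invented.

```python
# A rough sketch (invented assumptions, not the paper's formulation):
# dynamic programming over a discretized belief about an occluded
# pedestrian, yielding a speed command per (distance, belief) state.
import numpy as np

dists = np.arange(0, 51, 5)          # distance to crosswalk [m]
beliefs = np.linspace(0.0, 1.0, 11)  # P(pedestrian present)
speeds = np.array([2.0, 5.0, 10.0])  # candidate speed commands [m/s]

TIME_COST = 1.0       # cost per second of travel
COLLISION_COST = 1e4  # penalty weighted by collision probability

V = np.zeros((len(dists), len(beliefs)))       # terminal values at d = 0
policy = np.zeros((len(dists), len(beliefs)))

# Backward induction from the crosswalk outward.
for i in range(1, len(dists)):
    step = dists[i] - dists[i - 1]
    for j, p in enumerate(beliefs):
        costs = []
        for v in speeds:
            travel = TIME_COST * step / v
            # Crude risk model: collision probability grows with speed
            # and with the belief that a pedestrian is present.
            risk = p * min(1.0, (v / speeds[-1]) ** 2) * (step / dists[i])
            costs.append(travel + COLLISION_COST * risk + V[i - 1, j])
        k = int(np.argmin(costs))
        V[i, j] = costs[k]
        policy[i, j] = speeds[k]

# Higher belief in a hidden pedestrian yields a lower commanded speed.
print("speed at 25 m, p=0.0:", policy[dists.tolist().index(25), 0])   # 10.0
print("speed at 25 m, p=1.0:", policy[dists.tolist().index(25), -1])  # 2.0
```

The shape of the resulting policy matches the abstract's point: the commanded speed scales down as uncertainty about the occluded crosswalk grows.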