Chapter

Motor Liability Insurance in a World with Autonomous Vehicles

Article
Based on a data set of 91 papers and 22 industry studies, we analyse the impact of artificial intelligence on the insurance sector using Porter’s (1985) value chain and Berliner’s (1982) insurability criteria. Additionally, we present future research directions, from both the academic and practitioner points of view. The results illustrate that both cost efficiencies and new revenue streams can be realised, as the insurance business model will shift from loss compensation to loss prediction and prevention. Moreover, we identify two possible developments with respect to the insurability of risks. The first is that the application of artificial intelligence by insurance companies might allow for a more accurate prediction of loss probabilities, thus reducing one of the industry’s most inherent problems, namely asymmetric information. The second development is that artificial intelligence might change the risk landscape significantly by transforming some risks from low-severity/high-frequency to high-severity/low-frequency. This requires insurance companies to rethink traditional insurance coverage and design adequate insurance products.
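The second development described above, risk shifting from low-severity/high-frequency to high-severity/low-frequency, can be made concrete with a small numerical sketch. All figures below are hypothetical, chosen only to illustrate the point; aggregate loss is modelled as compound Poisson with fixed claim severity, for which the mean is λs and the standard deviation is √λ·s.

```python
# Illustrative sketch (hypothetical figures): how the same expected loss becomes
# far harder to insure when risk shifts from high-frequency/low-severity to
# low-frequency/high-severity.

import math

def aggregate_loss_stats(lam, severity):
    """Mean and standard deviation of a compound Poisson aggregate loss
    with claim frequency `lam` and fixed claim severity `severity`."""
    mean = lam * severity
    std = math.sqrt(lam) * severity
    return mean, std

# Traditional motor book: many small claims per year.
mean_hf, std_hf = aggregate_loss_stats(lam=100.0, severity=2_000.0)

# AV-era book: rare but catastrophic events (e.g. a fleet-wide software failure).
mean_lf, std_lf = aggregate_loss_stats(lam=0.1, severity=2_000_000.0)

print(f"high-frequency/low-severity: mean={mean_hf:,.0f}, std={std_hf:,.0f}")
print(f"low-frequency/high-severity: mean={mean_lf:,.0f}, std={std_lf:,.0f}")
# Both books carry the same expected loss, but the rare/severe book's
# standard deviation is larger by a factor of sqrt(100 / 0.1), roughly 32x.
```

Equal expected losses thus say little about insurability: the rare/severe book demands far more capital per unit of premium, which is exactly why the article argues traditional coverage designs need rethinking.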
Article
In recent years, many sectors have experienced significant progress in automation, associated with the growing advances in artificial intelligence and machine learning. There are already automated robotic weapons, which are able to evaluate and engage with targets on their own, and there are already autonomous vehicles that do not need a human driver. It is argued that the use of increasingly autonomous systems (AS) should be guided by the policy of human control, according to which humans should execute a certain significant level of judgment over AS. While in the military sector there is a fear that AS could mean that humans lose control over life and death decisions, in the transportation domain, on the contrary, there is a strongly held view that autonomy could bring significant operational benefits by removing the need for a human driver. This article explores the notion of human control in the United States in the two domains of defense and transportation. The operationalization of emerging policies of human control results in the typology of direct and indirect human controls exercised over the use of AS. The typology helps to steer the debate away from the linguistic complexities of the term “autonomy.” It identifies instead where human factors are undergoing important changes and ultimately informs about more detailed rules and standards formulation, which differ across domains, applications, and sectors.
Article
The benefits of autonomous vehicles (AVs) are widely acknowledged, but there are concerns about the extent of these benefits and AV risks and unintended consequences. In this article, we first examine AVs and different categories of the technological risks associated with them. We then explore strategies that can be adopted to address these risks, and examine emerging responses by governments for addressing AV risks. Our analyses reveal that, thus far, governments have in most instances avoided stringent measures in order to promote AV developments, and the majority of responses are non-binding and focus on creating councils or working groups to better explore AV implications. The US has been active in introducing legislation to address issues related to privacy and cybersecurity. The UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues, but have yet to implement specific strategies. To address privacy and cybersecurity risks, strategies ranging from the introduction or amendment of non-AV-specific legislation to the creation of working groups have been adopted. Much less attention has been paid to issues such as environmental and employment risks, although a few governments have begun programmes to retrain workers who might be negatively affected.
Article
This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand.
Article
A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement of autonomous vehicles. In particular, Patrick Lin's concern that any security gain derived from the introduction of autonomous cars would constitute a trade-off in human lives will be addressed. The second question is whether it would be morally permissible to impose liability on the user based on a duty to pay attention to the road and traffic and to intervene when necessary to avoid accidents. Doubts about the moral legitimacy of such a scheme are based on the notion that it is a form of defamation if a person is held to blame for causing the death of another by his inattention if he never had a real chance to intervene. Therefore, the legitimacy of such an approach would depend on the user having an actual chance to do so. The last option discussed in this paper is a system in which a person using an autonomous vehicle has no duty (and possibly no way) of interfering, but is still held (financially, not criminally) responsible for possible accidents. Two ways of doing so are discussed, but only one is judged morally feasible.
Chapter
2018 witnessed several key developments which have shaped the digital single market. Whereas some issues are now in the final stages of the legislative process, other key topics are in their infancy and therefore require in-depth discussion as to how EU law should react to the challenges and needs of the digital economy. This volume focuses on an issue central to the digital single market: the ‘Liability for Artificial Intelligence and the Internet of Things’. European legislators face the challenge of deciding between adapting existing product liability rules or the creation of a new concept of objective liability for autonomous systems. The 2018 Münster colloquium provided a forum for intense discussion of these questions between renowned experts on digital law and representatives from EU institutions and industry. With contributions by Cristina Amato, Georg Borges, Jean-Sébastien Borghetti, Giovanni Comandé, Ernst Karner, Bernhard Koch, Sebastian Lohsse, Eva Lux, Miquel Martín-Casals, Reiner Schulze, Gerald Spindler, Dirk Staudenmayer, Gerhard Wagner, Herbert Zech
Article
Driver error currently causes the vast majority of motor vehicle crashes. By eliminating the human driver, autonomous vehicles will prevent thousands of fatalities and serious bodily injuries, which makes a compelling safety case for policies that foster the rapid development of this technology. Major technological advances have occurred over the past decade, but there is widespread concern that the rate of development is hampered by uncertainty about manufacturer liabilities for a crash. Apparent variations in the requirements of state tort law across the country make it difficult for manufacturers to assess their liability exposure in the national market. The patchwork of state laws and the resultant uncertainty have prompted calls for the federal safety regulation of autonomous vehicles. The uncertainty seems to be the inevitable result of trying to predict how tort rules governing old technologies will apply to the new technology of automated driving. As I will attempt to demonstrate, however, well-established tort doctrines widely adopted by most states, if supplemented by two new federal safety regulations, would provide a comprehensive regulatory approach that would largely dissipate the costly legal uncertainty now looming over this emerging technology.
Article
Hidden algorithms drive decisions at major Silicon Valley and Wall Street firms. Thanks to automation, those firms can approve credit, rank websites, and make myriad other decisions instantaneously. But what are the costs of their methods? And what exactly are they doing with their digital profiles of us? Leaks, whistleblowers, and legal disputes have shed new light on corporate surveillance and the automated judgments it enables. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy. Even after billions of dollars of fines have been levied, underfunded regulators may have only scratched the surface of troublingly monopolistic and exploitative practices. Drawing on the work of social scientists, attorneys, and technologists, The Black Box Society offers a bold new account of the political economy of big data. Data-driven corporations play an ever larger role in determining opportunity and risk. But they depend on automated judgments that may be wrong, biased, or destructive. Their black boxes endanger all of us. Faulty data, invalid assumptions, and defective models can’t be corrected when they are hidden. Frank Pasquale exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. Demanding transparency is only the first step. An intelligible society would assure that key decisions of its most important firms are fair, nondiscriminatory, and open to criticism. Silicon Valley and Wall Street need to accept as much accountability as they impose on others. In this interview with Lawrence Joseph, Frank Pasquale describes the aims and methods of the book.
Article
Autonomous vehicles have the potential for a variety of societal benefits. Individual mobility can be expanded to parties including the physically challenged, the elderly and the young. However, this article will consider two associated aspects of autonomous driving namely, privacy implications and issues of liability. Despite the many advantages of autonomous or connected vehicles, the downside in respect of privacy is that the ability to move about in relative anonymity will be lost. A secret rendezvous with a lover will be a thing of the past because the data bank associated with such vehicles will include information regarding exactly who is riding, where the passengers were picked up and dropped off, at what time and what route was taken. This information is a legitimate (and potentially very valuable!) business asset of the companies that own and operate autonomous vehicle fleets, who rely on such data to analyse how many vehicles are needed, in which locations and when they should be charged or re-fuelled, but the consequences on privacy (and the susceptibility of cyberattack) are tangible. Similarly, whilst another advantage of autonomous driving is that traffic accidents may be virtually eliminated, some people will nevertheless die or be injured in accidents involving autonomous vehicles. Therefore, in autonomous driving, a key question is that of liability and, specifically, where liability should reside in the event of such accident. This article considers how best to exploit autonomous vehicle innovation whilst, at the same time, securing the type of regulation appropriate to deal with the issues raised above.
Article
While self-driving cars may seem like something that can exist only in a futuristic movie, the technology is developing rapidly, and many states already allow test runs of self-driving cars on state roads. Many car companies have announced that they will make self-driving cars available as early as 2020. However, several manufacturers of self-driving car technology predict that personal ownership of vehicles will be replaced by a car-sharing system, where companies own the self-driving cars and rent them to consumers who pay per use. With more widespread introduction of this technology come many questions about how to assess liability for accidents involving self-driving cars, and how insurance should be structured to pay for those accidents. This Note discusses the potential parties who could be held liable: drivers, car-sharing companies, and manufacturers. It then suggests the elimination of liability for any accidents involving self-driving cars, and recommends the creation of a National Insurance Fund to pay for all damages resulting from those accidents.
Article
Experimental self-driving cars are being tested on public roads, and will at some point be commercially sold or made otherwise available to the public. A self-driving car and its digital control systems take over control tasks previously performed by the human driver. This places high demands on this control system which has to perform the highly complex task of driving the car through traffic. When this system does not perform its task adequately and damage ensues the failure of the control system may be used as a stepping stone to claim liability of the manufacturer of the car or the control system. Uncertainties about the application of (product) liability law may slow down the uptake of self-driving cars more than is warranted on the basis of technical progress. This article examines how the decision about the timing of a market introduction can be approached and how possible chilling effects of liability law can be redressed with an adequate system of obligatory insurance.
Article
The general rules on liability and compensation in Sweden are to be found in the Tort Liability Act (Skadeståndslagen, Law No 1972:207, amended by 1975:404 and 1995:1190).[1] This law is based on the principle of liability for negligence, the culpa rule.[2] Historically this has been the predominant rule in Swedish tort law. However, already in the 19th century, principles of strict liability were developed in the legislation alongside the culpa rule. There are now some important laws based on strict liability, most prominent perhaps the Traffic Damage Act (Law No 1975:1410).[3] During the latest decades, voluntary agreements have been arranged on compensation to the victim without regard to fault. Among these collectively bargained agreements may be mentioned the Security Insurance for Occupational Injuries as well as the Pharmaceutical Insurance. The Patient Insurance was from the beginning also a voluntary agreement but is now part of the legislation. A characteristic of all these legal or agreed compensation rules is that they generally guarantee the victim full compensation, according to the principles of tort law. Indemnities are paid in accordance with the general principles of this law, i.e. the guiding rules in the Tort Liability Act, chapter 5. When it comes
[1] The reform in 1975 was based on a report by the Damages Commission, SOU 1973:51 Skadestånd V; see Government Bill 1975:12.
[2] See Jan Hellner, "The Swedish Alternative in an International Perspective", in Oldertz/Tidefelt (eds.), Compensation for Personal Injury in Sweden and Other Countries, Stockholm 1988, p. 17; also published in The American Journal of Comparative Law, vol. XXXIV (1984), p. 613.
[3] On the development and principles of the different types of compensation schemes known as "the Swedish model", see the articles by Jan Hellner, Carl Oldertz and Erland Strömbäck in Oldertz/Tidefelt, op. cit., pp. 17, 51 and 41 respectively.
Article
Legal ambiguity refers to an unknown outcome regarding the requirements of a legal rule or body of law, as applied to a set of known facts, for which the probability cannot be confidently or reliably defined and must be estimated by decision makers. The legal ambiguity generated by the tort system became significantly more pronounced over the course of the twentieth century, making the market for liability insurance increasingly volatile. Without reliable estimates of the relevant probabilities (the likelihood of a policyholder incurring tort liabilities and the amount thereof), insurers must use subjective estimates of risk that are prone to forecasting errors with the resultant swings in profits and losses. Legal ambiguity increases the cost of capital for insurers (and therefore premiums) and creates an expectations-driven pricing structure that is prone to cyclical volatility, including periods of substantial underwriting losses that disrupt the supply of liability insurance. Due to these market disruptions, liability insurers have supported tort reform measures that reduce the unpredictability of the liability costs covered by the insurance policy, making it easier for them to set premiums. The movement has been successful, and the vast majority of states by now have legislatively curbed tort liability, with common reforms involving damage caps and the elimination of joint and several liability. Although different in substance, the varied reforms share the trait of significantly reducing systemic legal ambiguity, which in turn makes it easier for liability insurers to forecast their expected liabilities. Tort reform has become biased towards reductions of ambiguity that enhance the predictability of liability insurance, regardless of whether the reforms address the problem of ambiguity in a fair or just manner. 
Each of these factors has become increasingly important over the course of the twentieth century, producing an evolutionary path for the tort system that is now shaped by the interplay between legal ambiguity, liability insurance, and legislative tort reform.
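The mechanism the article describes, insurers pricing off subjective estimates when legal outcomes are ambiguous, can be sketched numerically. All parameters below are hypothetical: each insurer's estimate of expected liability cost is drawn around the true value with a scatter that stands in for legal ambiguity, and premiums are quoted with a flat loading.

```python
# Illustrative sketch (hypothetical parameters, not from the article): legal
# ambiguity forces insurers to replace a known loss distribution with
# subjective estimates, producing scattered quotes and cyclical volatility.

import random
import statistics

def quoted_premiums(true_expected_loss, ambiguity_sd, loading=0.10,
                    n_insurers=50, seed=7):
    """Premiums quoted by insurers whose subjective loss estimates scatter
    around the true expected loss with standard deviation `ambiguity_sd`."""
    rng = random.Random(seed)
    quotes = []
    for _ in range(n_insurers):
        estimate = max(rng.gauss(true_expected_loss, ambiguity_sd), 0.0)
        quotes.append(estimate * (1.0 + loading))
    return quotes

low = quoted_premiums(true_expected_loss=1_000.0, ambiguity_sd=50.0)
high = quoted_premiums(true_expected_loss=1_000.0, ambiguity_sd=400.0)

print(f"low ambiguity : quote spread (std) = {statistics.stdev(low):7.1f}")
print(f"high ambiguity: quote spread (std) = {statistics.stdev(high):7.1f}")
# Same underlying risk; under high ambiguity the quotes scatter far more, and
# insurers who estimated low face underwriting losses in the next cycle.
```

On this reading, tort reforms that cap damages shrink `ambiguity_sd` rather than the true expected loss, which is exactly why the article argues insurers favour predictability-enhancing reforms regardless of their fairness.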
Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines: the simple economics of artificial intelligence. Boston, MA: Harvard Business Review Press.
Colonna, K. (2012). "Autonomous cars and tort liability: why the market will 'drive' autonomous cars out of the marketplace", Case Western Reserve Journal of Law, Technology & the Internet, 4, 81.
Engelhard, E. F. D., & de Bruin, R. W. (2017). "European added value assessment: EU common approach on the liability rules and insurance related to connected and autonomous vehicles: accompanying the European Parliament's legislative own-initiative report", European Added Value Unit, Brussels, 38-131.
Marchant, G. E., & Lindor, A. R. (2012). "The coming collision between autonomous vehicles and the liability system", Santa Clara Law Review, 52, 1321.
Patti, F. P. (2019). "The European road to autonomous vehicles", Fordham International Law Journal, 43, 125.
Webb, K. C. (2016). "Products liability and autonomous vehicles: who's driving whom", Richmond Journal of Law & Technology, 23(4), 1-52.