This is the preprint manuscript of the chapter published by Springer Nature Switzerland AG in L. Strous et al. (Eds.): Unimagined Futures, IFIP AICT 555, pp. 38–54, 2020. https://doi.org/10.1007/978-3-030-64246-4_4
© IFIP International Federation for Information Processing 2020
The Laws and Regulation of AI and Autonomous Systems
Anthony Wong1,2,3
1Technology Lawyer, AGW Legal & Advisory, Sydney, Australia
2Vice-President, IFIP
3Past President, Australian Computer Society (ACS)
anthonywong@agwconsult.com
Abstract. Our regulatory systems have attempted to keep abreast of new technologies by recalibrating and adapting our regulatory frameworks to provide for new opportunities and risks, to confer rights and duties, safety and liability frameworks, and to ensure legal certainty for businesses. These adaptations have been reactive and sometimes piecemeal, often with artificial delineation on rights and responsibilities and with unintended flow-on consequences. Previously, technologies have been deployed more like tools, but as autonomy and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools. There is now a significant difference, because machine learning AI systems have the ability to learn, adapt their performances and ‘make decisions’ from data and ‘life experiences’. This chapter provides brief insights on some of the topical developments in our regulatory systems and the current debates on some of the risks and challenges from the use and actions of AI, autonomous and intelligent systems [1].
Keywords: AI, Robots, Automation, Regulation, Law, Job Transition, Employment, Data Ownership, Data Portability, Access, Control, Intellectual Property, Legal Personhood, Liability, Transparency, Explainability, Data Protection, Privacy.
1 Introduction
The base tenets of our regulatory systems were created long before the advances and
confluence of new technologies including AI (artificial intelligence), IoT (Internet of
Things), blockchain, cloud and others. With the rise of these new technologies we have
taken many initiatives to address their consequences by recalibrating and adapting our
regulatory frameworks to provide for new opportunities and risks, to confer rights and
duties, safety and liability frameworks, and ensure legal certainty for business.
Sector-specific regulation has also been adopted and adapted to address market failures and risks in critical and regulated domains. These changes have often been reactive and piecemeal, with artificial delineation of rights and responsibilities. There have been many unintended consequences. More recently we have begun to learn from past mishaps, and these regulatory adaptations are now more likely to be drafted in a technologically neutral way, avoiding strict technical definitions, especially when the field is still evolving rapidly.
AI and algorithmic decision-making will over time bring significant benefits to many areas of human endeavour. AI systems imbued with increasingly complex mathematical and data modelling, and machine learning algorithms, are proliferating and being integrated in virtually every sector of the economy and society, to support and in many cases undertake more autonomous decisions and actions.
How much autonomy should AI and robots have to make decisions on our behalf
and about us in our life, work and play? How do we ensure they can be trusted, and that
they are transparent, reliable, accountable and well designed?
Previously, technologies have often been deployed more like tools, such as a pen or paintbrush, but as autonomy and self-learning capabilities increase, robots and intelligent AI systems feel less and less like machines or tools. AI will equip robots and systems with the ability to learn using machine-learning algorithms. They will have the ability to interact and work alongside us or to augment our work. They will increasingly be able to take over functions and roles and, perhaps more significantly, to make decisions.
When I reviewed AI ethical frameworks in 2019, there were more than 70 in existence. The number continues to grow. In 2019, jurisdictions including Australia [2] and the EU [3] published their frameworks, adding to a list of contributors that includes the OECD Principles on Artificial Intelligence [4], the World Economic Forum AI Governance: A Holistic Approach to Implement Ethics into AI [5] and the Singapore Model AI Governance Framework [6]. The debates have matured significantly since then, moving beyond ethical principles to more detailed guidelines on how such principles can be operationalised in design and implementation to minimise risks and negative outcomes. But the challenge has always been putting principles into practice.
Emerging technologies are rapidly transforming the regulatory landscape. They are providing timely opportunities for fresh approaches in the redesign of our regulatory systems to keep pace with technological changes, now and into the future. AI is currently advancing more rapidly than the process of regulatory recalibration. Unlike the past, there is now a significant difference we must take into consideration: machine learning AI systems that have the ability to learn, adapt their performances and ‘make decisions’ from data and life experiences.
This chapter provides brief insights on some of the topical developments in our regulatory systems and the current debates to address some of the challenges and risks from the use and actions of AI, autonomous and intelligent systems [1].
2 Automation, Jobs and Employment Law Implications
Over the past few years we have been inundated with predictions that robots and automation will devastate the workplace, replacing many job functions within the next 10 to 15 years. We have already seen huge shifts in manufacturing, mining, agriculture, administration and logistics, where a wide range of manual and repetitive tasks have been automated. More recently, cognitive tasks and data analyses are increasingly being performed by AI and machines.
Historically, new technologies have always affected the structure of the labour market, leading to a significant impact on employment, especially lower skilled and manual jobs. But autonomous and intelligent technologies are now outperforming humans in many tasks, and their pace and spread are radically challenging the base tenets of our labour markets and laws. These developments have raised many questions.
Where are the policies, strategies and regulatory frameworks to transition workers
in the jobs that will be the most transformed, or those that will disappear altogether due
to automation, robotics and AI?
Our current labour and employment laws, governing matters such as sick leave, hours of work, tax, minimum wage and overtime pay, were not designed for robots. What is the legal relationship of robots to human employees in the workplace? In relation to workplace safety, what liabilities should apply if a robot harms a human co-worker? Would the employer of the robot be vicariously liable? What is the performance management and control plan for work previously undertaken by human employees working under a collective bargaining agreement, now performed or co-performed with AI or robots? How would data protection and privacy regulations apply to personal information collected and consumed by robots? Who would be responsible for cyber security and the criminal use of robots or AI?
Is there statutory protection and job security for humans displaced by automation and robots? Should we tax robot owners to pay for training for workers who are displaced by automation, or should there be a universal minimum basic income for people displaced? Should we have social plans, such as exist in Germany and France, if restructuring through automation disadvantages employees?
There are many divergent views on all these questions, and all are being hotly debated. Governments, policy makers, institutions and employers all have important roles to play in the development of digital skills, in the monitoring of long-term job trends, and in the creation of policies to help workers and organisations adapt to an automated future. If these issues are not addressed early and proactively, they may worsen the digital divide and increase inequalities between countries and people.
ICT professionals are also being impacted as smart algorithms and other autonomous
technologies supplement software programming, data analysis and technical support
roles. With AI and machine learning developing at an exponential rate, what does the
future look like?
2.1 Case study: the line between human and robo-advisers in financial services
FinTech (financial technology) start-ups are emerging to challenge the roles of banks and traditional financial institutions. FinTechs are rapidly transforming and disrupting the marketplace by providing ‘robo-advice’ using highly sophisticated algorithms operating on mobile and web-based environments. The technology is called robotic process automation (RPA) and is becoming widespread in business, and particularly in financial institutions. Robo-advice or automated advice is the provision of automated financial product advice using algorithms and technology and without the direct involvement of a human adviser [7].
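To make concrete what ‘automated financial product advice using algorithms’ can look like in its simplest form, the following is a deliberately simplified Python sketch. The scoring rules, thresholds and model portfolios are entirely hypothetical and for illustration only; real robo-advisers encode far more complex product rules, disclosures and suitability checks.

```python
# Hypothetical, minimal robo-advice sketch: a risk questionnaire score
# is mapped to a model portfolio. All rules and numbers are invented.

def risk_score(age: int, horizon_years: int, loss_tolerance: int) -> int:
    """Toy scoring: younger clients with longer horizons and higher
    self-reported loss tolerance (0-3) receive a higher score."""
    score = 3 if age < 40 else 1
    score += 4 if horizon_years >= 10 else 2
    score += min(loss_tolerance, 3)
    return score  # ranges from 3 to 10

def recommend(score: int) -> dict:
    """Map the risk score to a hypothetical asset allocation."""
    if score >= 8:
        return {"equities": 0.80, "bonds": 0.15, "cash": 0.05}
    if score >= 5:
        return {"equities": 0.60, "bonds": 0.30, "cash": 0.10}
    return {"equities": 0.30, "bonds": 0.50, "cash": 0.20}

print(recommend(risk_score(age=35, horizon_years=15, loss_tolerance=2)))
```

Even in this toy form, the regulatory questions below become visible: human actors chose every threshold and allocation, and someone must verify that the rules accurately portray the products on offer.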
Robo-advice and AI capabilities have the potential to increase competition and lower prices for consumers in the financial advice and financial services industries by radically reshaping the customer experience. These systems are designed, modelled and programmed by human actors. Often they operate behind the scenes 24/7, assisting the people who interact with consumers. There are considerable challenges and risks involved in writing algorithms that accurately portray the full offerings and complexity of financial products.
In 2017, after a number of scandals, Australia introduced professional standards legislation for human financial advisers [8]. These regulations set higher competence and ethical standards, including requirements for relevant first or higher degrees, continuing professional development and compliance with a code of ethics. The initiatives were introduced into a profession already under pressure from the robo environment.
Because robo-advice is designed, modelled and programmed by human actors, should these requirements also apply to robo-advice? Should regulators also hold ICT developers and providers of robots and autonomous systems to the same standards demanded from human financial advisers? What should be the background, skills and competencies of these designers and ICT developers?
Depending on the size and governance framework of an organisation, various players and actors could be involved in a collaborative venture in the development, deployment and lifecycle of AI systems. These might include the developer, the product manager, senior management, the service provider, the distributor and the person who uses the AI or autonomous system. Their domain expertise could be in computer science, or mathematics or statistics, or they might be an interdisciplinary group composed of financial advisers, economists, social scientists or lawyers.
In 2016 the Australian regulator laid down sectoral guidelines [9] for monitoring and testing algorithms deployed in robo-advice. The regulatory guidance requires businesses offering robo-advice to have people within the business who understand the rationale, risk and rules used by the algorithms and have the skills to review the resulting robo-advice. What should be the competencies and skills of the humans undertaking the role?
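A minimal sketch of what such testing might look like, continuing the hypothetical robo-adviser sketch above (the checks and tolerances are illustrative only, not the regulator's prescribed tests):

```python
# Automated checks over the hypothetical recommend() function above,
# of the kind a human reviewer who understands the rationale and rules
# of the algorithm could design, run and sign off on.

def test_allocations_are_complete():
    # Every reachable score should yield a fully invested portfolio.
    for score in range(3, 11):
        allocation = recommend(score)
        assert abs(sum(allocation.values()) - 1.0) < 1e-9

def test_conservative_clients_hold_less_equity():
    assert recommend(3)["equities"] < recommend(9)["equities"]

test_allocations_are_complete()
test_conservative_clients_hold_less_equity()
print("all checks passed")
```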
The EU General Data Protection Regulation (GDPR) [10] went further by placing an explicit onus on the algorithmic provider to provide “meaningful information about the logic involved” [11]. In addition, the GDPR provides an individual with explicit rights, including the rights to obtain human intervention, to express their point of view and to contest a decision made solely by automated systems [12] that has a legal or similarly significant impact. The GDPR applies only when AI uses personal data within the scope of the legislation.
Revealing the logic behind an algorithm may, however, risk disclosing commercially sensitive information and trade secrets about the AI model and how the system works.
The deployment of robo-advice raises many new, interesting and challenging questions for regulators accustomed only to assessing and regulating human players and actors.
3 Do robots and AI dream of owning Intellectual Property?
AI and machine-learning systems have already developed to the point where they can write music, generate automated reports, create art or even display human traits such as curiosity, conducting experiments to self-learn and develop [13]. Humans excel in creativity, imagination, problem solving, collaboration, management and leadership, capabilities which, at least for now, remain very far off for AI and automation.
Will AI eventually outpace human capability and creativity? This may happen, but there is no consensus on when. Whatever the case, we are seeing more examples of original works created not by humans, but by autonomous AI. Businesses are increasingly investing in new AI and robotics technologies, and in research and innovation to enhance competitiveness.
AI has introduced extra dimensions to the complexity of intellectual property (IP). Investors should tread with caution while questions remain about the ownership of works generated or supplemented by AI. Who owns intangible outputs which could be perceived as IP when they are generated by a robot or AI? Who owns the IP: the manufacturer, the developer or the programmer? Could ownership fall to the user who provided the data for the robot to create the output? Or alternatively, could the robot own its creations?
But what happens when inventions, source code, objects or other assets are created autonomously under the direction of non-human entities, as will increasingly be the case in the future? The distinction between human-generated works and AI-generated works is emerging as a controversial topic.
Our current regulatory framework generally assumes that IP is created by natural persons. The UK [14], European [15] and US [16] patent offices recently rejected patent applications in which an AI machine, ‘DABUS’, was designated as the inventor.
Commentators have long distinguished between computer-assisted [17] and computer-generated works. In many countries, including Australia, the former category has created few copyright problems, but computer-generated works with little or no human involvement pose a challenge to copyright’s subsistence. Any works created by autonomous AI and robots will face serious hurdles in securing copyright protection: they might not have sufficient human authorial contribution for copyright to subsist. Given that technological research and progress are often driven by the promise of financial rewards, this uncertainty around IP ownership could be a disincentive for commercial entities to invest in AI development.
Some jurisdictions have implemented specific provisions to protect literary, dramatic, musical or artistic work which is computer-generated [18]. Section 178 of the UK Copyright Designs and Patents Act 1988 defines computer-generated work to mean work generated by computer in circumstances such that there is no human author of the work. The author is the person who undertook the arrangements necessary for the creation of the work [19].
The Second Session of the WIPO Conversation on Intellectual Property and Artificial Intelligence has highlighted the significance of the debate, noting that the attribution of copyright to AI-generated works will go “to the heart of the social purpose for which the copyright system exists” [20].
4 Data Fuels AI But Who Owns Data?
Data is at the centre of the operation of many AI machine learning models. Industrial
and public data, as well as personal data, are important sources of input for the training
and evaluation of AI machine learning models.
The deployment of advanced intelligent algorithmic software, in conjunction with the rapidly declining cost of digital storage, is fuelling the assembly and combination of vast datasets (known as ‘Big Data’) for automated data processing and interrogation. These algorithmic programs are more cost effective and efficient than human readers and are being progressively deployed across all domains of our society. Their aims are to unlock and discover new forms of value, to connect previously unseen linkages, and to provide insights that stimulate growth and innovation in the digital economy [21].
Economies have formed around data, irrespective of whether an adequate regulatory framework has been built around it. In their relentless technological development, the AI and Big Data phenomena have overtaken the slow march of our law and have embraced and encapsulated some of the facets of our concepts of property, without giving due regard and serious thought to the implications of treating data as property. In an attempt to create order from a runaway phenomenon, should there be underlying policy reasons to accord some form of property rights in the context of Big Data, and if not, some ‘bundles of rights’? [22]
Property rights evolve and change to address the practical needs of a given epoch in our society. Those needs change as our values and norms evolve. There is abundant literature on the different senses in which the term ‘property’ has been used to encapsulate the move from the traditional notions of property, such as land and chattels, to the notion of property in intangibles, such as artistic works. We are embarking on yet another significant leap, this time regarding property or ‘property-like’ considerations in data.
It is difficult to define property with any precision, as the notions of property inevitably change to reflect their context [23]. Property law deals with rights which, if recognised under established heads of law, are claims ‘good against the world’, often described as ‘rights to exclude others’ [24].
Protecting value and proprietary rights in data involves a balancing act between many vested interests, including the interests of the purported owner, the interests of the custodian, the interests of competing third parties, and the interests of the public to access and use data. The debate on data ownership rights, and the layered complexities and issues pertaining to the granting of property rights in data, has intensified as the use and control of data assets become more and more critical to our economy and our ability to innovate. This requires a balancing of the commercial, private and public interests in data, as well as data protection and privacy concerns.
Existing laws in relation to copyright, patents, confidential information and trade secrets, and trademarks all relate to and protect rights involving information.
As observed by Nimmer, “copyright law has become a primary source of property rights in information in the 1990s” [25]. But existing copyright law is an inadequate framework for the consideration of property rights in data, because it provides owners with only a limited property right in the expression of the information [26]. Copyright law does not concern itself with the control or flow of ideas, facts or data per se. The data components contained in the copyrighted work may not be protected, no matter how valuable. Ideas and facts are generally regarded to be in the public domain [27].
The right to control the use of information may also arise under patent or other laws. Patent law protects the use of ideas or information contained in the patent, by restricting the practice of the invention for a period of time.
In Australia and elsewhere, the question of whether information can be properly characterised as property in the context of confidential information has been the subject of much academic and judicial commentary over the last half century [28]. But if the owner of confidential information places it in the public domain, accessible for Big Data mining and analysis, the inherent ‘secrecy’ may be lost. In Australia, as in the United Kingdom, there is authority which supports the proposition that information is not property [29].
AI, Big Data and our society’s dependence on the digital economy have emerged comparatively rapidly. This has heightened the debate on our ability and freedom to use and extract value from data without fear of prosecution as we try to gain insights into new discoveries, innovation and growth. Granting separate property rights to discrete collections of data (datasets) would create a substantial barrier to the evolution of Big Data and our ability to mine valuable information from these datasets.
In the world of Big Data these datasets can be created, collected and obtained (sometimes even verified) automatically, or as a by-product of another business function. Some will require the investment of time, capital and labour, while others may only require computer processing time. It will depend upon the types and forms of datasets, how they are derived, and the purpose they serve.
The different types and forms of Big Data will continue to challenge our thinking
and concepts around the question of data ownership. They will also continue to create
uncertainty about the boundaries of control and data ownership.
Rights in data come in many forms and from a variety of sources. For the most part, traditional intellectual property law has proven inadequate in providing protection [30]: these regimes do not provide adequate cover for data and information-based products. Indeed, these laws exclude most Big Data datasets (in whole or in part) from protection.
With the pervasive use of technology today, a rapidly growing percentage of our information is created automatically from the use of IoT devices, mobile and GPS devices, smart meters, systems collecting transactional data, and many other sources. Most of these sources generate factual information, so it is unlikely that they would be protected under our traditional intellectual property laws. Should rights be left to the realms of contract, confidential information, trade secrets, unfair competition laws and other mechanisms? Or should government provide the custodianship to enhance researchers’ access to Big Data?
In 1996, the European Union adopted the Database Directive [31] in recognition of the fact that copyright is inadequate to protect the investment made by database owners. The Database Directive provides for two levels of protection:
a) sui generis database protection, where a substantial investment (financial, technical or human) has been made in “obtaining, verifying, or presenting the contents of the database” [32]; and
b) copyright protection, where databases, by reason of the selection or arrangement of their contents, constitute the author’s own intellectual creation [33].
Article 1 of the directive defines a database as a “collection of independent works, data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means”.
In the USA, the tort of misappropriation allows owners some control over the use
that can be made of their databases.
4.1 Is it about Data Portability, Access and Control?
In the era of AI and machine learning models, data portability and the right to control access to data are also relevant. The right to control another’s access to information can involve several distinct bodies of law, including contract law, the law of confidential information and trade secrets, computer and cyber crime law, communications law, and various laws relating to privacy.
Recently we have seen examples of government intervention using the regulatory framework to regulate interests in data in the digital environment, without the requirement to establish ownership in the data held or restricted by an access control system associated with a function of the computer.
The Australian Consumer Data Right (CDR) regulations [34] give individuals and businesses greater control over their data, including the ability to access particular data in a usable form and to direct a business to securely transfer that data to a trusted third party. The consumer right will roll out across sectors of the economy, commencing in the banking sector from July 2020, followed by the energy and telecommunications sectors. The data regulatory framework also imposes significant additional privacy and data sharing obligations and penalties for breach.
In the EU, the Free Flow of Non-Personal Data Regulation [35] and the General Data Protection Regulation [36] allow users of data processing services to use the data gathered in different EU markets to improve their productivity and competitiveness. Both EU Regulations refer to data portability and aim to make it easier to port data from one IT environment to another, to enable switching of service providers and to foster competition.
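As a concrete, if simplified, illustration of the mechanics these portability rights point to, the sketch below exports a consumer's records as plain, machine-readable JSON that another provider could ingest. The field names and schema are hypothetical, not those prescribed by the CDR or the GDPR.

```python
# Hypothetical data-portability sketch: serialise a consumer's records
# into a structured, machine-readable format for transfer to an
# accredited third party. Schema and fields are invented for illustration.
import json

consumer_records = [
    {"date": "2020-05-01", "description": "salary", "amount": 4200.00},
    {"date": "2020-05-03", "description": "rent", "amount": -1600.00},
]

export = {
    "schema_version": "1.0",          # lets the recipient check compatibility
    "consumer_id": "anonymised-123",  # no direct identifiers in the payload
    "transactions": consumer_records,
}

print(json.dumps(export, indent=2))  # a portable artefact any system can parse
```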
5 Legal personhood for AI
Historically, our regulatory systems have granted rights and legal personhood to slaves, women, children, corporations and, more recently, to landscape and nature. Two of India’s rivers, the Ganga and the Yamuna, have been granted legal status. In New Zealand, legislation was enacted to grant legal personhood to the Whanganui river, Mount Taranaki and the Te Urewera protected area. Previously, corporations were the only non-human entities recognised by the law as legal persons.
“To be a legal person is to be the subject of rights and duties” [37]. Granting legal
personality [38] to AI and robots will entail complex legal considerations and is not a
simple case of equating them to corporations.
Who foots the bill when a robot or an intelligent AI system makes a mistake, causes
an accident or damage, or becomes corrupted? The manufacturer, the developer, the
person controlling it, or the robot itself? Or is it a matter of allocating and apportioning
risk and liability?
As autonomic and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools. Self-learning capabilities for AI have added complexity to the equation. Will granting ‘electronic rights’ to robots assist with some of these questions? Will human actors use robots to shield themselves from liability or shift any potential liabilities from the developers to the robots? Or will the spectrum, allocation and apportionment of responsibility keep step with the evolution of self-learning robots and intelligent AI systems? Regulators around the world are wrestling with these questions.
The EU is leading the way on these issues. In 2017 the European Parliament, in an unprecedented show of support, adopted a resolution on Civil Law Rules on Robotics [39] by 396 votes to 123. One of its key recommendations was to call on the European Commission to explore, analyse and consider “a specific legal status for robots … so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions” [40].
The EU resolution generated considerable debate and controversy, because it calls for sophisticated autonomous robots to be given specific legal status as electronic persons. The arguments from both sides are complex and require fundamental shifts in legal theory and reasoning.
In an open letter, experts in robotics and artificial intelligence have cautioned the European Commission that plans to grant robots legal status are inappropriate and “non-pragmatic” [41].
The European Group on Ethics in Science and New Technologies, in its Statement on Artificial Intelligence, Robotics and Autonomous Systems, advocated that the concept of legal personhood rests on the ability and willingness to take and attribute moral responsibility. Moral responsibility is here construed in the broad sense in which it may refer to several aspects of human agency, e.g. causality, accountability (obligation to provide an account), liability (obligation to compensate damages), reactive attitudes such as praise and blame (appropriateness of a range of moral emotions), and duties associated with social roles. Moral responsibility, in whatever sense, cannot be allocated or shifted to ‘autonomous’ technology [42].
In 2020, the EU Commission presented its White Paper on Artificial Intelligence – A European approach to excellence and trust for the regulation of artificial intelligence (AI) [43], and a number of other documents, including a “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics” [44], for comments. The White Paper is non-committal on the question of endowing robots with specific legal status as electronic persons. It proposes a risk-based approach to create an ‘ecosystem of trust’ as one of the key elements of a future regulatory framework for AI in Europe, so that the regulatory burden is not excessively prescriptive or disproportionate.
I concur with the conclusions reached by Bryson et al. [45] that the case for electronic personhood is weak and the negatives outweigh the benefits, at least for the foreseeable future.
As evidenced by the historical debates on the status of slaves, women, corporations and, more recently, landscape and nature, the question of granting legal personality to autonomous robots will not be resolved any time soon. There is no simple answer to the question of legal personhood, and one size will not fit all.
Should legal personhood for robots or autonomous systems eventuate in the future, any right invoked on behalf of robots, or obligation enforced against them, will require new approaches and significant recalibration of our regulatory systems. Legal personhood could potentially allow autonomous robots to own their creations, as well as being open to liability for problems or negative outcomes associated with their actions.
6 Responsibility and Liability for damages caused by AI
How should regulators manage the complexity and challenges arising from the design,
development and deployment of robots and autonomous systems? What legal and social
responsibilities should we give to algorithms shielded behind statistically data-derived
‘impartiality’? Who is liable when robots and AI get it wrong?
There is much debate as to who amongst the various players and actors across the design, development and deployment lifecycle of AI and autonomous systems should be responsible and liable to account for any damages that might be caused. Would autonomy and self-learning capabilities alter the chain of responsibility of the producer or developer when “the AI-driven or otherwise automated machine which, after consideration of certain data, has taken an autonomous decision and caused harm to a human’s life, health or property” [46]?
Or does “inserting a layer of inscrutable, unintuitive, and statistically-derived code in between a human decisionmaker and the consequences of that decision” mean that “AI disrupts our typical understanding of responsibility for choices gone wrong” [47]? And should the producer or programmer foresee the potential loss or damage even when it may be difficult to anticipate, particularly in unusual circumstances, the actions of an autonomous system? These questions will become more critical as more and more autonomous decisions are made by AI systems.
Among the more advanced regulatory developments in AI are the frameworks for trialling autonomous vehicles [48] and for drones [49].
The rapid adoption of AI and autonomous systems into more diverse areas of our lives, from business, education, healthcare and communication through to infrastructure, logistics, defence, entertainment and agriculture, means that any laws involving liability will need to consider a broad range of contexts and possibilities.
We are moving rapidly towards a world where autonomous and intelligent AI systems are connected and integrated in complex IoT environments. In this mesh, “the plurality of actors involved can make it difficult to assess where a potential damage originates and which person is liable for it. Due to the complexity of these technologies, it can be very difficult for victims to identify the liable person and prove all necessary conditions for a successful claim, as required under national law” [50]. The burden of proof in a tort fault-based liability system in some countries could significantly increase the costs of litigation.
We will need to establish specific protections for potential victims of AI-related incidents to give consumers confidence that they will have legal recourse if something goes wrong.
One of the proposals being debated is the creation of a mandatory insurance scheme to ensure that victims of incidents involving robots and intelligent AI systems have access to adequate compensation. This might be similar to the mandatory comprehensive insurance that owners need to purchase before being able to register a motor vehicle [51].
Another approach is for the creation of strict liability rules to compensate victims
for potential harm caused by AI and autonomous systems along the lines of current
product liability laws in the EU and Australia. Strict liability rules would ensure that
the victim is compensated regardless of fault. But who amongst the various players and
actors should be strictly liable?
Whether the existing mixture of fault-based and strict liability regimes is appropriate is also subject to much debate.
Introducing a robust regulatory framework with relevant input from industry, policymakers and government would create greater incentive for AI developers and manufacturers to reduce their exposure by building in additional safeguards to minimise the potential risks to humanity.
7 Transparency and Explainability of AI
Algorithms are increasingly being used to analyse information and define or predict outcomes with the aid of AI. These AI systems may be embedded in devices and systems and deployed across many industries, and increasingly in critical domains, often without the knowledge and consent of the user. Should humans be informed that they are interacting with AI, of the purposes of the AI, and of the data used for its training and evaluation?
To ensure that AI-based systems perform as intended, the quality, accuracy and relevance of data are essential. Any data bias, error or statistical distortion will be learned and amplified. In situations involving machine learning, where algorithms and decision rules are trained using data to recognise patterns and to learn to make future decisions based on these observations, regulators and consumers may not easily discern the properties of these algorithms. These algorithms can train systems to perform certain tasks at levels that may exceed human ability, and they raise many challenging questions, including calls for greater algorithmic transparency to minimise the risk of bias, discrimination, unfairness and error, and to protect consumer interests.
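The point that a model learns whatever bias its training data contains can be shown with a small synthetic experiment. The sketch below uses scikit-learn on entirely fabricated data in which ‘group B’ was historically penalised; the trained classifier reproduces that penalty in its predictions.

```python
# Synthetic demonstration: a classifier trained on biased historical
# decisions learns and reproduces the bias. All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)         # the attribute decisions *should* rest on
# Historical labels: skill matters, but group B carried a penalty (the bias).
label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), label)

# Identically skilled applicant pools, differing only by group membership:
for g in (0, 1):
    X_eval = np.column_stack([np.full(500, g), rng.normal(0, 1, 500)])
    print(f"group {g}: predicted positive rate {model.predict(X_eval).mean():.2f}")
```

The model has simply learned the pattern in the data, historical penalty included, which is why the quality and auditing of training data matter so much.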
Over the last few years legislators have started to respond to the challenge. In the EU, Article 22 of the General Data Protection Regulation (GDPR) [52] gives individuals the right not to be subject to a decision based solely on automated decision-making (no human involvement in the decision process), except in certain situations, including explicit consent and necessity for the performance of or entering into a contract. The GDPR applies only to automated decision-making involving personal data.
In the public sector, AI systems are increasingly being adopted by governments to improve and reform public service processes. In many situations, stakeholders and users of AI will expect reasons to be given for government decisions, as transparency and accountability are important elements for the proper functioning of public administration. It is currently unclear how our regulatory frameworks would adjust to providing a meaningful review by our courts of decisions undertaken by autonomous AI systems, or in what circumstances a sub-delegation by a nominated decision-maker to an autonomous AI system would be lawful. We may need to develop new principles and standards and “to identify directions for thinking about how administrative law should respond that makes sense from both a legal and a technical point of view” [53].
As machine learning evolves, AI models [54] often become even more complex, to the point where it may be difficult to articulate and understand their inner workings, even for the people who created them. This raises many questions. What types of explanation are suitable and useful to the audience? [55] How and why does the model perform the way it does? How comprehensive does the explanation need to be: is an understanding of how the algorithmic decision was reached required, or should the explanation be adapted in a manner which is useful to a non-technical audience?
In the EU, the GDPR explicitly provides a data subject with the following rights:
a) the right to be provided with, and to access, information about the automated decision-making [56];
b) the right to obtain human intervention and to contest a decision made solely by an automated decision-making algorithm [57]; and
c) the right to “meaningful information about the logic involved” in the algorithmic decision, and to the “significance” and the “envisaged consequences” of the algorithmic processing, which places an explicit onus on the algorithmic provider [58].
But how would these rights operate and be enforced in practice? With recent and more complex non-linear black-box AI models, it can be difficult to provide meaningful explanations, largely due to the statistical and probabilistic character of machine learning and the current limitations of some AI models, raising concerns about accountability, explainability, interpretability, transparency, and human control.
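By way of illustration only, one family of techniques offers model-agnostic, post-hoc explanations. The minimal sketch below (using scikit-learn and synthetic data; it is not a recipe for GDPR compliance) computes permutation importance for an otherwise opaque model, giving a crude answer to which inputs mattered, though not to how a particular decision was reached.

```python
# A minimal, post-hoc explanation sketch: permutation importance asks
# how much shuffling each input feature degrades model performance,
# a model-agnostic proxy for "which inputs mattered". Synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean:.3f}")
```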
What expertise and competencies would be required of a data subject to take advantage of these rights, or of the algorithmic provider to give effect to them?
“In addition, access to the algorithm and the data could be impossible without the cooperation of the potentially liable party. In practice, victims may thus not be able to make a liability claim. In addition, it would be unclear, how to demonstrate the fault of an AI acting autonomously, or what would be considered the fault of a person relying on the use of AI” [59].
This opacity will also make it difficult to verify whether decisions made with the involvement of AI are fair and unbiased, and whether there are possible breaches of laws, and it may hamper effective access to the traditional evidence necessary to establish a successful liability action and to claim compensation.
Should organisations consider and ensure that specific types of explanation be provided for their proposed AI system to meet the requisite needs of the audience before starting the design process? Should the design and development methodologies adopted have the flexibility to embrace new tools and explanation frameworks, ensuring ongoing improvements in transparency and explainability in parallel with advancement in the state of the art of the technology throughout the lifecycle of the AI system?
While rapid development methodologies may have been adopted by the IT industry, embedding transparency and explainability into AI system design requires more extensive planning and oversight, as well as input and knowledge from a wider mix of multi-disciplinary skills and expertise.
New tools and better explanation frameworks need to be developed to instill the desired human values and to reconcile the current tensions and trade-offs between accuracy, cost and explainability of AI models. Developing such tools and frameworks is far from trivial, warranting further research and funding.
8 Summary and Looking Beyond
This chapter raises some of the major topical regulatory issues and debates relating to job transition and employment law; data ownership, portability, access and control; legal status of AI and personhood; intellectual property ownership by AI; AI liability; transparency and meaningful AI explanation; and aspects of data protection and privacy.
In the wake of the 2020 Black Lives Matter protests, a number of technology companies have announced limitations on plans to sell facial recognition technology. There have also been renewed calls for a moratorium on certain uses of facial recognition technology that have legal or significant effects on individuals, until an appropriate legal framework has been established [60].
The need to address AI and autonomous system challenges has increased in urgency, as the potential adverse impact could be significant in specific critical domains. If these challenges are not appropriately addressed, human trust will suffer, impacting adoption and oversight and in some cases posing significant risks to humanity and societal values.
From this brief exploration, it is clear that the values and issues outlined in the chapter will benefit from much broader debate, research and consultation. There are no definitive answers to some of the questions raised; for many, it is a matter of perspective. I trust that this chapter will set you on your own journey as to what our future regulatory systems should encapsulate. Different AI applications create and pose different benefits, risks and issues. The solutions that might be adopted in the days ahead will potentially challenge our traditional beliefs and systems for years to come. We are facing a major paradigm shift which will require a significant rethink of some of our long-established legal principles, as we must now take into consideration machine learning AI systems that have the ability to learn, adapt and ‘make decisions’ from data and life experiences.
ICT professionals understand better than most the trends and trajectories of technologies and their potential impact on the economic, safety and social constructs of the workplace and society. Is it incumbent on ICT professionals and professional societies to raise these issues and ensure they are widely debated, so that appropriate and intelligent decisions can be made for the changes, risks and challenges ahead? ICT professionals are well placed to address some of the risks and challenges during the design and lifecycle of AI-enabled systems. It would be beneficial to society for ICT professionals to assist government, legislators, regulators and policy formulators with their unique understanding of the strengths and limitations of the technology and its effects.
Historically, our regulatory adaptations have been conservative and patchwork, struggling to keep pace with technological changes. Perhaps the drastic disruptions beyond the normal that COVID-19 has caused in our work, life and play will provide sufficient impetus and tenacity to re-think how our laws and regulatory systems should recalibrate with AI and autonomous systems, now and into the future.
Acknowledgments
I would like to acknowledge and express my appreciation to Graeme Philipson for his editorial
assistance. He is an ICT editor, writer and publisher, and author of ‘The Vision Splendid:
The History of Australian Computing’. www.philipson.info
References

1. This chapter is for general reference purposes only. It does not constitute legal or professional advice. It is general comment only. Before making any decision or taking any action you should consult your legal or professional advisers to ascertain how the regulatory system applies to your particular circumstances in your jurisdiction.
2. Australian AI Ethics Framework (2019). https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework, last accessed 2020/6/6
3. European Commission: Ethics guidelines for trustworthy AI (2019). https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai, last accessed 2020/6/6
4. OECD: OECD Principles on Artificial Intelligence (22 May 2019). https://www.oecd.org/going-digital/ai/principles/, last accessed 2020/6/20
5. World Economic Forum: AI Governance: A Holistic Approach to Implement Ethics into AI. https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai, last accessed 2020/6/20
6. Singapore Model AI Governance Framework. https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf, last accessed 2020/6/20
7. Definition from the Australian Securities & Investments Commission: Regulatory Guide 255 - Providing digital financial product advice to retail clients. https://asic.gov.au/regulatory-resources/find-a-document/regulatory-guides/rg-255-providing-digital-financial-product-advice-to-retail-clients/, last accessed 2020/6/6
8. Corporations Amendment (Professional Standards of Financial Advisers) Act 2017
9. Australian Securities & Investments Commission: Regulatory Guide 255 - Providing digital financial product advice to retail clients. https://asic.gov.au/regulatory-resources/find-a-document/regulatory-guides/rg-255-providing-digital-financial-product-advice-to-retail-clients/, last accessed 2020/6/6
10. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119/1) [GDPR]
11. Ibid art. 15(1)(h)
12. Ibid art. 22(3)
13. This section is based on the article: Wong, Anthony: Do robots and artificial intelligence think about copyright?. The Australian, September 5, 2017
14. UK Intellectual Property Office, refer patent decision BL O/741/19 of December 2019. https://www.ipo.gov.uk/p-challenge-decision-results/p-challenge-decision-results-bl?BL_Number=O/741/19, last accessed 2020/07/10
15. European Patent Office, refer decision of January 2020. https://www.epo.org/news-issues/news/2020/20200128.html, last accessed 2020/07/10
16. US Patent and Trademark Office, refer to decision of April 2020 on Application No. 16/524,350. https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf, last accessed 2020/07/10
17. Here the computer is used as a tool, the equivalent of the painter's brush or the writer's pen, by the author in the creation of the work
18. Similar provisions have been replicated in New Zealand, Ireland, India, Hong Kong and South Africa
19. Copyright Designs and Patents Act 1988 (UK) s 9(3)
20. World Intellectual Property Organisation (WIPO): Conversation on Intellectual Property and Artificial Intelligence, Revised Issues Paper on Intellectual Property and Artificial Intelligence, May 2020, paragraph 23. https://www.wipo.int/meetings/en/doc_details.jsp?doc_id=499504, last accessed 2020/07/20
21. In recognition of the importance of the ‘Digital Economy’, US President Obama requested a study in January 2014 to examine how the US can benefit from the data economy. The report Big Data: Seizing Opportunities, Preserving Values concluded that data can be a driver for economic growth and innovation (‘Big Data: Seizing Opportunities, Preserving Values’)
22. For an overview on data ownership, refer to Wong, Anthony: Big Data Fuels Digital Disruption and Innovation, But Who Owns Data? In: Chaikin, David, Coshott, Derwent (eds.) Digital Disruption: Impact of Business Models, Regulation & Financial Crime, ch 2. Australian Scholarly Publishing, Australia (2017)
23. Beverley-Smith, Huw: The Commercial Appropriation of Personality, p 296. Cambridge University Press (2002)
24. Merges, Robert P.: Justifying Intellectual Property, p 100. Harvard University Press (2011)
25. Nimmer, Raymond T.: Information Law, [2:8]. Thomson Reuters (May 2014)
26. The nature of the copyright in a literary, dramatic or musical work is defined in copyright legislation in the respective jurisdictions and, in Australia, under the Copyright Act 1968 (Cth) s 31
27. Samuelson, Pamela: Is Information Property?. In: Communications of the ACM (1991) 34(3), p 16
28. For an introduction to the protection of information using the law of confidential information, see Lahore, LexisNexis: Patents, Trade Marks & Related Rights (at 25 April 2016) [30,000]
29. See, eg, Federal Commissioner of Taxation v United Aircraft Corp (1943) 68 CLR 525 at 534; Moorgate Tobacco Co Ltd v Philip Morris Ltd (No 2) (1984) 156 CLR 414 at 438; Breen v Williams (1996) 186 CLR 71 at 81, 90, 111, 125; and Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd (2001) 208 CLR 199 at 271
30. See Osenga, Kristen: Information May Want to Be Free, But Information Products Do Not: Protecting and Facilitating Transactions in Information Products. Cardozo Law Review (2009) 30(5) 2099, p 2101
31. Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, OJ L 077, 27/03/1996
32. Ibid art 7
33. Ibid art 3
34. Treasury Laws Amendment (Consumer Data Right) Act 2019. https://www.accc.gov.au/focus-areas/consumer-data-right-cdr-0, last accessed 2020/6/2
35. Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November 2018 on a framework for the free flow of non-personal data in the European Union, OJ L 303, 28.11.2018
36. Article 20 of the Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
37. Smith, B.: Legal personality. Yale Law J 37(3), 283-299 (1928), p 283
38. For a discussion on the concept and expression ‘legal personality’ refer to Bryson, J. J., Diamantis, M. E., Grant, T. D.: Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25(3) (2017), p 277
39. European Parliament: European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52017IP0051, last accessed 2020/6/9
40. Ibid paragraph 59(f)
41. Refer http://www.robotics-openletter.eu/, last accessed 2020/6/9
42. European Group on Ethics in Science and New Technologies: Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, p 10. European Commission, Brussels (2018). http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf, last accessed 2020/6/9
43. European Commission: White Paper on Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 (Feb. 19, 2020). https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, last accessed 2020/6/9
44. European Commission: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 (Feb. 19, 2020). https://ec.europa.eu/info/files/commission-report-safety-and-liability-implications-ai-internet-thingsand-robotics_en, last accessed 2020/6/9
45. Bryson, J. J., Diamantis, M. E., Grant, T. D.: Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25(3) (2017), pp. 273-291
46. The World Economic Forum: White Paper on AI Governance: A Holistic Approach to Implement Ethics into AI, p 6. Geneva, Switzerland (2019). https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai, last accessed 2020/6/9
47. Selbst, Andrew D.: Negligence and AI’s Human Users. In: Public Law & Legal Theory Research Paper No. 20-01, p 1. UCLA School of Law (2018)
48. For a brief rundown of the regulatory frameworks and developments in selected countries refer to the Australian National Transport Commission 2020, Review of ‘Guidelines for trials of automated vehicles in Australia’: Discussion paper, NTC, Melbourne, pp. 16-18. https://www.ntc.gov.au/sites/default/files/assets/files/NTC%20Discussion%20Paper%20-%20Review%20of%20guidelines%20for%20trials%20of%20automated%20vehicles%20in%20Australia.pdf, last accessed 2020/6/6. For examples of Australian legislation refer to: Motor Vehicles (Trials of Automotive Technologies) Amendment Act 2016 (SA), Transport Legislation Amendment (Automated Vehicle Trials and Innovation) Act 2017 (NSW), Road Safety Amendment (Automated Vehicles) Act 2018 (Vic)
49. For the new European Union drone rules refer to: https://www.easa.europa.eu/domains/civil-drones-rpas/drones-regulatory-framework-background. For the Australian drone rules refer to: https://www.casa.gov.au/knowyourdrone/drone-rules and the Civil Aviation Safety Amendment (Remotely Piloted Aircraft and Model Aircraft–Registration and Accreditation) Regulations 2019
50. European Commission: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 (Feb. 19, 2020), p. 14, https://ec.europa.eu/info/files/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics_en, last accessed 2020/6/
51. Australian National Transport Commission 2020, Review of 'Guidelines for trials of automated vehicles in Australia': Discussion paper, NTC, Melbourne, pp. 26-27, https://www.ntc.gov.au/sites/default/files/assets/files/NTC%20Discussion%20Paper%20-%20Review%20of%20guidelines%20for%20trials%20of%20automated%20vehicles%20in%20Australia.pdf, last accessed 2020/6/6
52. General Data Protection Regulation (GDPR) art.22; Recital 71; see also Article 29 Data Protection Working Party, 2018a, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 17/EN WP251rev.01, p. 19, http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053, last accessed 2020/6/4
53. Cobbe, J.: Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making. Legal Studies, p. 3 (2019)
54. For the interpretability characteristics of various AI models, refer to ICO and Alan Turing Institute: Guidance on explaining decisions made with AI (2020), annexe 2, https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf, last accessed 2020/6/6
55. For the types of explanation that an organisation may provide, refer to ICO and Alan Turing Institute: Guidance on explaining decisions made with AI (2020), p. 20, https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf, last accessed 2020/6/6
56. General Data Protection Regulation (GDPR) art.15
57. General Data Protection Regulation (GDPR) art.22
58. General Data Protection Regulation (GDPR) arts.13-14
59. European Commission: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 (Feb. 19, 2020), p. 15, https://ec.europa.eu/info/files/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics_en, last accessed 2020/6/9
60. Australian Human Rights Commission: Discussion Paper on Human Rights and Technology (2019), p. 104, https://humanrights.gov.au/our-work/rights-and-freedoms/publications/human-rights-and-technology-discussion-paper-2019, last accessed 2020/6/20; for a US perspective, refer to Flicker, K.: The Prison of Convenience: The Need for National Regulation of Biometric Technology in Sports Venues. In: 30 Fordham Intell. Prop. Media & Ent. L.J. 985 (2020), p. 1015, https://ir.lawnet.fordham.edu/iplj/vol30/iss3/7/, last accessed 2020/6/20