arXiv:2008.06250v1 [cs.CY] 14 Aug 2020
Reasonable Machines: A Research Manifesto
Christoph Benzmüller1 [0000-0002-3392-3093] and Bertram Lomfeld2 [0000-0002-4163-8364]
1 Institute of Computer Science, Freie Universität Berlin, Berlin, Germany
2 Department of Law, Freie Universität Berlin, Berlin, Germany
c.benzmueller|bertram.lomfeld@fu-berlin.de
Abstract. Future intelligent autonomous systems (IAS) are inevitably
deciding on moral and legal questions, e.g. in self-driving cars, health care
or human-machine collaboration. As decision processes in most modern
sub-symbolic IAS are hidden, the simple political plea for transparency,
accountability and governance falls short. A sound ecosystem of trust
requires ways for IAS to autonomously justify their actions, that is,
to learn giving and taking reasons for their decisions. Building on social
reasoning models in moral psychology and legal philosophy, such
an idea of »Reasonable Machines« requires novel, hybrid reasoning
tools, ethico-legal ontologies and associated argumentation technology.
Enabling machines to communicate normatively creates trust and opens
new dimensions of AI application and human-machine interaction.
Keywords: Trustworthy and Explainable AI · Ethico-Legal Governors
· Social Reasoning Model · Pluralistic, Expressive Normative Reasoning
1 Introduction
Intelligent autonomous systems (IASs) are rapidly entering applications in in-
dustry, military, finance, governance, administration, healthcare, etc., leading to
a historical transition period with unprecedented dynamics of innovation and
change, and with unpredictable outcomes. Politics, regulatory bodies, indeed
society as a whole, are challenged not only with keeping pace with these poten-
tially disruptive developments, but also with staying ahead and wisely guiding
the transition. Fostering positive impacts, while preventing negative side effects,
is a balanced vision shared by most of the numerous ethical guidelines on
trustworthy AI from recent years, including the European Commission’s recent
White Paper on AI [6], which proposes the creation of an “ecosystem of excellence” in
combination with an “ecosystem of trust”.
We think that real “Trustworthy AI by Design” demands IASs which are
able to give and take reasons for their decisions to act. Such »Reasonable
Machines« require novel, hybrid reasoning tools, upper ethico-legal ontologies
and associated argumentation technology to be utilised in practice for assess-
ing, justifying and controlling (externally and internally) the behaviour of IASs
with respect to explicitly encoded legal and ethical regulation. We envision this
technology to be integrated with an on-demand, cloud-based workbench for plu-
ralistic, expressive regulatory reasoning. This would foster knowledge transfer
with industry, research, and educational institutions, it would enable access to
critical AI infrastructure at scale with little risk and minimal costs, and, in the
long run, it could support dynamic adjustments of regulating code for IASs in
the cloud via politically and socially legitimated processes.
Paper structure: Section 2 formulates objectives for Reasonable Machines,
and Section 3 provides models for them, building on moral psychology and legal
philosophy. Section 4 outlines modular steps for research and implementation of
Reasonable Machines; this leverages our own prior work such as the LogiKEy
methodology and framework for designing normative theories for ethical and
legal reasoning [4], which needs to be combined and extended with an upper-
level value ontology [17] and further domain-level regulatory theories for the
assessment and explanation of ethical and legal conflicts and decisions in IASs.
2 Reasonable Machines: Objectives
The need for some form of “moral machines” [22] is no science fiction scenario
at all. With the rise of autonomous systems in all fields of life including highly
complex and ethically critical applications like self-driving cars, weapon sys-
tems, healthcare assistance in triage and pandemic plans, predictive policing,
legal judgement supports or credit scoring tools, involved AI systems are in-
evitably confronted with, and deciding on, moral and legal questions. One core
problem with ethical and legal accountability or even governance of autonomous
systems is the hidden decision process (black box) in modern (sub-symbolic)
AI technologies, which hinders transparency as well as direct intervention. The
simple plea for transparency disregards technological realities or even restrains
much needed further developments.3
Inspired by moral psychology and cognitive science, we envision the solution
in the development of independent, symbolic logic based safety-harnesses in fu-
ture AI systems [9]. Such “ethico-legal governors” encapsulate and interact with
black box AI systems, and they will use symbolic AI techniques in order to search
for possible justifications, i.e. reasons, for their decisions and (intended) actions
with regard to some formally encoded ethico-legal theories defined by regulating
bodies. The symbolic justifications computed at this abstract level thus provide
3While interpreting, modeling and explaining the inner functioning of black box AI
systems is relevant also with respect to our Reasonable Machines vision, such re-
search alone cannot completely solve the trust and control challenge. Sub-symbolic
AI black box systems (e.g. neural architectures) are suffering from various issues
(including adversarial attacks and influence of bias in data) which cannot be easily
eliminated by interpreting, modeling and explaining them. Offline, forensic processes
are then required such that the whole enterprise of turning black box AI systems
into fully trustworthy AI systems becomes a challenging multi-step engineering pro-
cess, and such an approach is significantly further complicated when online learning
capabilities are additionally foreseen.
Reasonable Machines: A Research Manifesto 3
a basis for generating explanations about why a decision/action (proposed by
an AI black box system) is ethico-legally legitimate and compliant with respect
to the encoded ethico-legal regulation.
Such an approach is complementary to, and as an additional measure more
promising than, explaining the inner (mis-)functioning of the black box AI sys-
tem itself. Symbolic justifications in turn enable the development of further
means towards a meaningful and robust control and towards human-understand-
able explanation and human-machine interaction. The Reasonable Machines
idea outlines a genuine approach of trustworthiness by design proposing, in psy-
chological terminology [14], a slow, rational (i.e. symbolic) “System 2” layer in
responsible IASs to justify and control their fast, “intuitive”, but opaque (sub-
symbolic), “System 1” layer computations.
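To make the governor idea concrete, the following is a minimal, self-contained sketch, not part of the original proposal: all function names, the single rule, and the driving scenario are hypothetical illustrations. A symbolic "System 2" layer searches for a justification of the black box's proposal against an explicitly encoded regulation and overrides the proposal when no justification is found.

```python
# Minimal sketch of an "ethico-legal governor" (System 2) wrapping an opaque
# decision component (System 1). All names, rules and scenarios are hypothetical.

def black_box_policy(situation):
    """Stand-in for an opaque sub-symbolic decision component."""
    return "overtake" if situation["lane_free"] else "brake"

def rule_no_overtake_near_crossing(situation, action):
    """Explicitly encoded ethico-legal rule: returns a justification (a reason)
    for the action, or None if the action cannot be justified."""
    if action == "overtake" and situation["near_crossing"]:
        return None  # forbidden: no justification exists
    return "action complies with the crossing-safety regulation"

def govern(situation):
    """Execute the black-box proposal only if a justification is found;
    otherwise fall back to a conservative, justifiable default action."""
    proposed = black_box_policy(situation)
    reason = rule_no_overtake_near_crossing(situation, proposed)
    if reason is None:
        return "brake", "overriding unjustifiable proposal near a crossing"
    return proposed, reason
```

The point of the sketch is the separation of concerns: the governor never inspects the black box internally; it only justifies or rejects the proposed output.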
Reasonable Machines research aims at analyzing and constructing ways
in which intelligent machines could socially justify their actions at an abstract
level, i.e. give and take moral and legal reasons for their decisions to act. Reason
is based on reasons. This is true as much for artificial as for human intelligent
agents. The “practical reasonableness” of intelligent agents depends on their moral
abilities to communicate socially acceptable reasons for their behavior [11]. Thus,
the exploration of methods and tools enabling machines to generate normative
reasons (which may be independent of underlying black box architectures and
opaque algorithms) smoothes the way for more comprehensive artificial moral
agency and new dimensions of human-machine communication.
The core objectives of Reasonable Machines technology are:
– enabling argument-based explanations & justifications of IAS decisions,
– enabling ethico-legal reasoning about, and public critique of, IAS decisions,
– facilitating political and legal governance of IAS decision making,
– evolving ethico-legal agency and communicative capacity of IASs,
– enabling trustworthy human-machine interaction through normative communication,
– fostering the development of novel neuro-symbolic AI architectures.
3 Artificial Social Reasoning Model (aSRM)
The black box governance problem has an interesting parallel in human decision
making. Most current models in moral psychology consider emotional intuition
to be the (or at least one) initial driving force of human action, which is only
afterwards (or by a second, significantly slower system) rationalized with reasons
[12, 14]. Within a social framework of giving and taking reasons (e.g. moral
convention or a legal system), the initial motivation of a single human agent
can be ignored if their actions and their post-hoc reasoning comply with given
social (moral or legal) standards [16]. Communicating reasons within such a
post-hoc “Social Reasoning Model” (SRM) is not superfluous but essential, as
only such reasons guarantee the coherence of a moral or legal order in an increasingly
pluralistic world. The remaining difference is the relative independence of rational
reasoning from the motivational impulse to act. Even so, in the long run the
inner-subjective or social feedback loop with rational reasons might also change
the agents’ motivational (emotional) disposition.
This post-hoc SRM is transferable to AI decision processes as “artificial Social
Reasoning Model” (aSRM). The black box of an opaque AI system functions
like an AI intuition. Following the SRM model, transparency is not needed as
long as the system generates post-hoc reasons for its action. Moral and legal
accountability and governance could instead be enabled through symbolic or
sub-symbolic aSRMs.
A symbolic solution would try to reconstruct (or justify with an alternative
argument) the intuitive decision of the black box with deontic logical reasoning
applying moral or legal standards. A pluralistic, expressive “normative reasoning
infrastructure”, such as LogiKEy [4], should e.g. be able to support this process.
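The kind of deontic reasoning alluded to here can be illustrated with a toy possible-worlds semantics in the spirit of standard deontic logic, where O(p) ("it is obligatory that p") holds iff p is true in every ideal, norm-compliant world. The propositions and the choice of ideal worlds below are hypothetical; LogiKEy itself works with much richer logic combinations.

```python
# Toy possible-worlds semantics for deontic operators. "Ideal" worlds are
# those in which all norms are satisfied; the example content is hypothetical.

worlds = ["w1", "w2", "w3"]
ideal = {"w1", "w2"}  # worlds satisfying all norms

# Valuation: which propositions hold in which worlds.
valuation = {
    "compensate_damage": {"w1", "w2"},
    "keep_profit":       {"w2", "w3"},
}

def holds(prop, world):
    return world in valuation[prop]

def obligatory(prop):
    """O(prop): prop holds in all ideal worlds."""
    return all(holds(prop, w) for w in ideal)

def permitted(prop):
    """P(prop): prop holds in at least one ideal world."""
    return any(holds(prop, w) for w in ideal)
```

Here compensating damage comes out obligatory, while keeping the profit is merely permitted: it holds in some but not all ideal worlds.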
A sub-symbolic solution could create an independent (second) neural network
to produce reasons for the output of the (first) decision network (e.g. autonomous
driving control). Of course, the structure of this “reasoning net” process is again
hidden. Yet, if the resulting reasons coherently comply with prescribed social
and ethico-legal standards, the lack of transparency in the second black box
constitutes less of a problem.
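As a rough illustration of this sub-symbolic option, the sketch below uses a trivial lookup as a stand-in for the trained reasoning net, together with a symbolic coherence filter; the decision types, reason labels and standards are all hypothetical.

```python
# Sketch of the sub-symbolic aSRM option: a second ("reasoning") network
# produces candidate reasons for a decision, and a simple symbolic filter
# checks whether those reasons coherently comply with prescribed standards.
# The "network" here is a stand-in lookup; all names are hypothetical.

PRESCRIBED_STANDARDS = {
    "brake":    {"protect_life", "traffic_rule_compliance"},
    "overtake": {"traffic_rule_compliance", "efficiency"},
}

def reasoning_net(decision):
    """Stand-in for a trained reason-generating network."""
    produced = {
        "brake":    ["protect_life"],
        "overtake": ["efficiency", "save_time"],  # "save_time" is non-standard
    }
    return produced[decision]

def coherent(decision):
    """Accept the decision only if every produced reason is a recognised
    standard for that decision type."""
    return all(r in PRESCRIBED_STANDARDS[decision]
               for r in reasoning_net(decision))
```

The coherence check, not the opaque generator, carries the normative weight, which is exactly why the second black box is tolerable.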
Robust solutions for aSRMs could even seek to integrate and align these
two options. Moreover, in both scenarios the introduced feedback loop of giving
and taking reasons could be integrated as learning environment (self-supervised
learning) for the initial, intuitive layer of autonomous decision making, with the
eventual effect that differences at both layers may gradually dissolve.
By allowing various kinds of reasons, SRMs & aSRMs advance normative pluralism
and may integrate different (machine-)ethical traditions: deontological, consequentialist
and virtue ethics. “Reasonable pluralism” in recent moral and political
philosophy defines reasonableness by meta-level procedures like “reflective
equilibrium” and “overlapping consensus” [20] or “rational discourse” [11]. Contemporary
legal philosophy and theory has elaborated how law can act as a democratic
real-world implementation of these meta-procedures, structuring public
deliberation and argumentation over conflicting reasons [1, 15]. Constructing a
pluralist aSRM substantially widens the mostly consequentialist contemporary
approaches [5, 9] to machine ethics and moral IASs.
4 Reasonable Machines: Implementation
The implementation of Reasonable Machines requires expertise from differ-
ent areas: pluralistic normative reasoning, formal ethics and legal theory, ex-
pressive ontologies and semantic web taxonomies, human-computer interaction,
rule-based systems, automated theorem proving, argumentation technology, neu-
ral architectures and machine learning. Acknowledging the complexity of each
field, Reasonable Machines research should complement top-down construc-
tion of responsible machine architecture with bottom-up developments starting
from existing works in different domains. More concretely, we propose a modular
and stepwise implementation of our research scheme based on the following
modules:
M1: Responsible Machine Architecture. The vision of an aSRM and its
parallel to the human SRM needs to be further explored to guide and refine the
overall architectural design of Reasonable Machines based on respective system
components responsible for generating justifications, for conducting compliance
checks and for governing the action executions of an IAS.
M2: Ethico-Legal Ontologies. Ethico-legal ontologies constitute a core
ingredient to enable the computation, assessment and communication of aSRM-
based rational justifications in the envisioned ethico-legal governance compo-
nents for IASs, and they are also key for black box independent user-explanations
in form of rational arguments. We propose the development of expressive ethico-
legal upper-level ontologies to guide and connect the encoding of concrete ethico-
legal domain-level theories (regulatory codes) [13, 8]. Moreover, we propose
that the concrete regulatory codes be complemented with an abstract ethico-legal
value ontology, for example as a “discoursive grammar of justification” [17].
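To illustrate what a value ontology with context-dependent preferences might look like computationally, consider the following sketch. The value labels and legal factors are hypothetical illustrations, not taken from [17]: preferences between upper-level values are conditioned on legal factors of the case, as in legal balancing.

```python
# Sketch of an abstract ethico-legal value ontology used for balancing:
# domain-level rules are justified by preferences between upper-level values,
# and preferences may depend on legal factors of the case. Value labels and
# factors are hypothetical illustrations.

UPPER_VALUES = {"freedom", "security", "utility", "equality"}

def value_preference(factors):
    """Return the set of value pairs (a, b) meaning 'a outweighs b'
    under the given case factors."""
    prefs = set()
    if "malice" in factors:
        prefs.add(("security", "freedom"))
    if "public_interest" in factors:
        prefs.add(("utility", "freedom"))
    return prefs

def outweighs(a, b, factors):
    """Context-dependent balancing: does value a outweigh value b?"""
    assert a in UPPER_VALUES and b in UPPER_VALUES
    return (a, b) in value_preference(factors)
```

The design choice worth noting is that the preference relation is not fixed globally but is a function of case factors, which is what makes the ontology usable for case-by-case legal balancing rather than a rigid value ranking.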
M3: Symbolic Reasoning Tools. For the implementation of pluralistic,
expressive and paradox-free normative reasoning at the upper-level, the LogiKEy
framework [4] can, for example, be adapted and further advanced. LogiKEy works with
shallow semantical embeddings (SSEs) of (combinations of) non-classical logics
in classical higher-order logic (HOL). HOL thereby serves as a meta-logic, rich
enough to support the encoding of a plurality of “object logics” (e.g. conditional,
deontic or epistemic logics and combinations thereof). The embedded “object
logics” are used for the iterative, experimental encoding of normative theories.
This generic approach shall ideally be integrated with specialized solutions based
e.g. on semantic web reasoning, logic programming, answer set programming,
and with formalized argumentation for ethical [21] or legal [3] systems design.
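The SSE idea can be shown in miniature: formulas of the embedded (object) logic become host-level predicates on worlds, and the object-logic connectives become host-level combinators. LogiKEy realises this in Isabelle/HOL with HOL as the meta-logic; the following Python transcription only illustrates the pattern, with hypothetical worlds and atoms.

```python
# Miniature shallow semantic embedding: object-logic formulas are functions
# World -> bool; connectives are combinators over such functions. A Python
# illustration of the pattern, not the Isabelle/HOL machinery.

WORLDS = range(4)
IDEAL = {0, 1}  # accessibility for the deontic box, collapsed to a fixed set

def atom(extension):
    """Atomic formula: true exactly at the worlds in its extension."""
    return lambda w: w in extension

def neg(phi):
    return lambda w: not phi(w)

def conj(phi, psi):
    return lambda w: phi(w) and psi(w)

def ob(phi):
    """Deontic 'obligatory': phi holds in all ideal worlds."""
    return lambda w: all(phi(v) for v in IDEAL)

def valid(phi):
    """Validity = truth in all worlds (the grounding in the meta-logic)."""
    return all(phi(w) for w in WORLDS)

p = atom({0, 1, 2})
q = atom({0, 1})
```

Because the object logic is just a family of host-level definitions, swapping or combining object logics amounts to adding further combinators, which is the flexibility the LogiKEy methodology exploits.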
M4: Interpretable AI Systems. Sub-symbolic solutions to the SRM-based
accountability and governance challenge could develop a hidden reasoning net,
which might be trained with legal and ethical use cases. Moreover, techniques
in “explainable AI” [10] have to be assessed and, if possible, integrated with the
symbolic aSRM tools to be developed in M3 in order to provide guidance to
their computations and search processes. The more information can be obtained
about the particular information bits that trigger the decisions of the black box
systems we want to govern, the easier the corresponding reasoning tasks, i.e. the
search for justifications, should become in the associated, symbolic aSRM tool.
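One way such guidance could work is sketched below, with hypothetical attribution scores and rules: candidate rules are ranked by the attribution mass of the features they mention, so the justification search tries first the rules about the features that actually drove the black-box decision.

```python
# Sketch of explainable-AI output guiding the symbolic justification search
# (M4 informing M3). Attribution scores, feature names and rules are
# hypothetical illustrations.

attributions = {"pedestrian_detected": 0.9, "speed": 0.4, "weather": 0.1}

rules = [
    {"name": "duty_to_brake", "features": {"pedestrian_detected", "speed"}},
    {"name": "visibility_rule", "features": {"weather"}},
]

def relevance(rule):
    """Sum of attribution mass over the features a rule mentions."""
    return sum(attributions.get(f, 0.0) for f in rule["features"])

def ordered_candidates(rules):
    """Order candidate rules so the most relevant are tried first."""
    return sorted(rules, key=relevance, reverse=True)
```

This is heuristic pruning only: the attributions shape the search order, while the symbolic tool still carries out the actual justification check.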
M5: Human-Machine Communication & Interaction. The intended
aSRM-based justifications generated by the tools developed in M3 and M4 require
arguments and rational explanations that are understandable across different
AI ecosystems [19], including human users, collective decision scenarios between
machines, and independent verification tools. Here, the development of respective
techniques could build on argumentation theory in combination with recent
advances towards a computational hermeneutics [7]. An overarching objective
of Reasonable Machines is to contribute to trustful and fruitful interaction
between humans and IASs.
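A simple illustration of rendering a formal justification as a human-understandable argument: here a justification is taken to be a rule, the facts it consumed and the action it licensed, and a template turns this into prose. The structure and template are hypothetical, not a fixed interface of the envisioned tools.

```python
# Sketch of M5: turning a formal justification into a human-readable argument.
# The justification record and the template are hypothetical illustrations.

def explain(justification):
    """Render a rule-based justification as a short natural-language argument."""
    facts = " and ".join(justification["facts"])
    return (f"The action '{justification['action']}' is permissible because "
            f"{facts}, and rule {justification['rule']} states that under "
            f"these conditions the action complies with the encoded regulation.")

j = {"action": "brake",
     "facts": ["a pedestrian was detected", "the vehicle exceeded walking pace"],
     "rule": "R-duty-to-brake"}
```

Richer renderings would draw on argumentation theory, e.g. making explicit which counter-arguments were defeated, rather than a flat template.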
M6: Cloud-based Reasoning Workbench. To facilitate access to the
proposed knowledge representation and reasoning solutions, and also to host the
ethico-legal theories, a cloud-based reasoning workbench should be implemented.
This workbench would (i) integrate the bottom-up constructed components and
tools from M2-M5 and (ii) implement instances of the top-down governance
architecture(s) developed in M1 based on (i). This cloud-based solution could
be developed in combination with, or as an alternative to, more independent
solutions based e.g. on agent-based development frameworks [23].
M7: Use Cases and Empirical Studies. The overall system framework
needs to be adequately prepared to support changing use cases and empirical
studies. Concrete use cases with high ethical and legal potential must be defined
and employed to guide the research and development work, for example the
representative issue of self-driving cars [5]. Empirical studies should support and
inform the constructive development process. For testing the ethico-legal value
ontology in M2, for example, we could try to demonstrate that it can make sense
of the rich MIT Moral Machine experiment data [2]. As the architecture
evolves, it would also be highly valuable to design a genuine aSRM experiment.
5 Conclusion
The Reasonable Machines vision and research requires the integration of
heterogeneous and interdisciplinary expertise to be fruitfully implemented. The
cloud-based framework we envision would ideally be widely available and reusable,
and it could become part of related, bigger initiatives towards the sharing of crit-
ical AI infrastructure (such as the claire-ai.org vision towards a CERN for
AI). The implementation of the depicted program requires substantial resources
and investment in foundational AI research and in practical system development,
but it reflects the urgent and timely need for the development of trustworthy AI
technology.
The possible reach of the Reasonable Machines idea extends far beyond
an ecosystem of trust. Enabling machines to give normative reasons for their
decisions and actions means rendering them capable of communicative action [11],
or at least of engaging in the constitutive communication of social systems [18]. The
capacity to give and take reasons is a crucial step towards fully autonomous
normative (moral and legal) agency. Moreover, our research, in the long run,
paves the way for interesting further studies and experiments on integrated
neuro-symbolic AI architectures and on the emergence of patterns of self-reflection in
intelligent autonomous machines.
Acknowledgement: We thank David Fuenmayor and the anonymous reviewers
for their helpful comments on this work.
References
1. Alexy, R.: Theorie der juristischen Argumentation. Suhrkamp, Frankfurt/M (1978)
2. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F.,
Rahwan, I.: The Moral Machine experiment. Nature 563(7729), 59–64 (2018)
3. Benzmüller, C., Fuenmayor, D., Lomfeld, B.: Encoding Legal Balancing: Automat-
ing an Abstract Ethico-Legal Value Ontology in Preference Logic. In: MLR 2020,
Preprint: https://arxiv.org/abs/2006.12789 (2020)
4. Benzmüller, C., Parent, X., van der Torre, L.: Designing Normative Theories for
Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Sup-
port. Artificial Intelligence 287(103348) (2020)
5. Bonnefon, J.-F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehi-
cles. Science 352(6293), 1573–1576 (2016)
6. European Commission, On Artificial Intelligence – A European approach to excel-
lence and trust. European Commission White Paper, COM(2020) 65 final (2020)
7. Fuenmayor, D., Benzmüller, C.: A Computational-Hermeneutic Approach for Con-
ceptual Explicitation. In: Nepomuceno, A., et.al., (eds.) Model-Based Reasoning
in Science and Technology. Inferential Models for Logic, Language, Cognition and
Computation, SAPERE, pp. 441–469. Springer, Cham (2019)
8. Fuenmayor, D., Benzmüller, C.: Harnessing Higher-Order (Meta-)Logic to Rep-
resent and Reason with Complex Ethical Theories. In: PRICAI 2019: Trends in
Artificial Intelligence, LNAI, vol. 11670, pp. 418–432. Springer, Heidelberg (2019)
9. Greene, J., Rossi, F., Tasioulas, J., Venable, K.B., Williams, B.C.: Embedding
Ethical Principles in Collective Decision Support Systems. In: Schuurmans, D.,
Wellman, M.P. (eds.) Proceedings of the Thirtieth AAAI Conference on Artificial
Intelligence, pp. 4147–4151. AAAI Press (2016)
10. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A
Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 51(5)
(2018)
11. Habermas, J.: Theorie des kommunikativen Handelns. Frankf./M: Suhrkamp (1981)
12. Haidt, J.: The emotional dog and its rational tail: a social intuitionist approach to
moral judgment. Psychol. Rev. 108(4), 814–34 (2001)
13. Hoekstra, R., Breuker, J., Bello, M.D., Boer, A.: LKIF Core: Principled Ontology
Development for the Legal Domain. In: Breuker, J., et.al., (eds.) Law, Ontologies
and the Semantic Web - Channelling the Legal Information Flood, Frontiers in
Artificial Intelligence and Applications, pp. 21–52. IOS Press (2009)
14. Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus and Giroux (2013)
15. Lomfeld, B.: Die Gründe des Vertrages: Eine Diskurstheorie der Vertragsrechte.
Mohr Siebeck, Tübingen (2015)
16. Lomfeld, B.: Emotio Iuris. Skizzen zu einer psychologisch aufgeklärten Methoden-
lehre des Rechts. In: Köhler, Müller-Mall, Schmidt, Schnädelbach, (eds.) Recht
Fühlen, pp. 19–32. Fink, München (2017)
17. Lomfeld, B.: Grammatik der Rechtfertigung: Eine kritische Rekonstruktion der
Rechts(fort)bildung. Kritische Justiz 52(4) (2019)
18. Luhmann, N.: Soziale Systeme: Grundlage einer allgemeinen Theorie. Frankfurt/M:
Suhrkamp (1984)
19. Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal,
C., Crandall, J.W., Christakis, N.A., Couzin, I.D., Jackson, M.O., Jennings, N.R.,
Kamar, E., Kloumann, I.M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A.,
Parkes, D.C., Pentland, A., Roberts, M.E., Shariff, A., Tenenbaum, J.B., Wellman,
M.: Machine behaviour. Nature 568(7753), 477–486 (2019)
20. Rawls, J.: Justice as Fairness: A Restatement. Harvard University Press, Cam-
bridge/MA (2001)
21. Verheij, B.: Formalizing value-guided argumentation for ethical systems design.
Artif. Intell. Law 24(4), 387–407 (2016)
22. Wallach, W., Allen, C.: Moral machines: Teaching robots right from wrong. Oxford
University Press (2008)
23. Wisniewski, M., Steen, A., Benzmüller, C.: LeoPARD - A Generic Platform for the
Implementation of Higher-Order Reasoners. In: Intelligent Computer Mathematics
- CICM 2015, LNCS, vol. 9150, pp. 325–330. Springer, Heidelberg (2015)
M1: Responsible Machine Architecture (ethico-legal governance of intelligent autonomous agents)
M2: Ethico-Legal Ontologies (ethico-legal upper-level ontology; value ontology, i.e. a moral “grammar”; ethico-legal regulation as code)
M3: Symbolic Reasoning Tools (pluralistic, rule-based normative reasoning; integrated with and guided by M4)
M4: Interpretable AI Systems (ethico-legal reasoning net; interpretable AI to inform M3)
M5: Human-Machine Communication & Interaction (human-understandable rational arguments; human-centered interaction)
M6: Cloud-based Reasoning Workbench (access at scale with little risk and minimal costs)
M7: Use Cases and Empirical Studies (grand vision, top-down, & module-specific, bottom-up)
Fig. 1. Modular structure of Reasonable Machines research.
... Moreover, we can have rules conditioned on more concrete legal factors. 12 As a didactic example, the legal rule R4 states that the Ownership (say, the plaintiff's) of the land on which the appropriation took place, together with the fact that the opposing party (defendant) acted out of Malice implies a value preference of RELIance and RESPonsibility over STABility. This last rule has indeed been chosen to reflect the famous common law precedent of Keeble vs. Hickeringill [16,2]. ...
... Supporting interactive and automated value-oriented legal argumentation on the computer is a nontrivial challenge which we address, for reasons as defended e.g. by Bench-Capon [4], with symbolic AI techniques and formal methods. Motivated by recent pleas for explainable and trustworthy AI, our primary goal is to work towards the development of ethico-legal governors for future generations of intelligent system, or more generally, towards some form of (legally and ethically) reasonable machines [12], capable of exchanging rational justifications for the actions they take. While building up a capacity to engage in value-oriented legal argumentation is just one of a multitude of challenges this vision is faced with, it would clearly constitute an important stepping stone. ...
... One of them has been suggestively called "default logic". We refer to[25] for a discussion.12 The introduction of legal factors is an established practice in the implementation of case-based legal systems (cf.[3] for an overview). ...
Conference Paper
Full-text available
Literature in AI & Law contemplates argumentation in legal cases as an instance of theory construction. The task of a lawyer in a legal case is to construct a theory containing: (a) relevant generic facts about the world, (b) relevant legal rules such as precedents and statutes, and (c) contingent facts describing or interpreting the situation at hand. Lawyers then elaborate convincing arguments starting from these facts and rules, deriving into a positive decision in favour of their client, often employing sophisticated argumentation techniques involving such notions as burden of proof, stare decisis, legal balancing, etc. In this paper we exemplarily show how to harness Isabelle/HOL to model lawyer’s argumentation using value-oriented legal balancing, while drawing upon shallow embeddings of combinations of expressive modal logics in HOL. We highlight the essential role of model finders (Nitpick) and ‘hammers’ (Sledgehammer) in assisting the task of legal theory construction and share some thoughts on the practicability of extending the catalogue of ITP applications towards legal informatics.
... Isabelle/HOL), that enables and supports such improvements without requiring expensive technical adjustments to the underlying base reasoning technology. As a broader application scenario, we are currently proposing that ethico-legal value-oriented theories and ontologies should constitute a core ingredient to enable the computation, assessment and communication of rational justifications and explanations in the future ethico-legal governance of AI (Benzmüller and Lomfeld 2020). Thus, a sound and trustworthy implementation of any legally accountable 'moral machine' requires the development of formal theories and ontologies for the legal domain to guide and interconnect the encoding of concrete regulatory codes and legal cases. ...
Preprint
Full-text available
The logico-pluralist LOGIKEY knowledge engineering methodology and framework is applied to the modelling of a theory of legal balancing in which legal knowledge (cases and laws) is encoded by utilising context-dependent value preferences. The theory obtained is then used to formalise, automatically evaluate, and reconstruct illustrative property law cases (involving appropriation of wild animals) within the Isabelle/HOL proof assistant system, illustrating how LOGIKEY can harness interactive and automated theorem proving technology to provide a testbed for the development and formal verification of legal domain-specific languages and theories. Modelling value-oriented legal reasoning in that framework, we establish novel bridges between latest research in knowledge representation and reasoning in non-classical logics, automated theorem proving, and applications in legal reasoning.
... Supporting interactive and automated value-oriented legal argumentation on the computer is a nontrivial challenge which we address, for reasons as defended e.g. by Bench-Capon [4], with symbolic AI techniques and formal methods. Motivated by recent pleas for explainable and trutsworthy AI, our primary goal is to work towards the development of ethico-legal governors for future generations of intelligent system, or more generally, towards some form of (legally and ethically) reasonable machines [13], capable of exchanging rational justifications for the actions they take. While building up a capacity to engage in value-oriented legal argumentation is just one of a multitude of challenges this vision is faced with, it would clearly constitute an important stepping stone. ...
Preprint
Full-text available
Literature in AI & Law contemplates argumentation in legal cases as an instance of theory construction. The task of a lawyer in a legal case is to construct a theory containing: (a) relevant generic facts about the world, (b) relevant legal rules such as precedents and statutes, and (c) contingent facts describing or interpreting the situation at hand. Lawyers then elaborate convincing arguments starting from these facts and rules, deriving into a positive decision in favour of their client, often employing sophisticated argumentation techniques involving such notions as burden of proof, stare decisis, legal balancing, etc. In this paper we exemplarily show how to harness Isabelle/HOL to model lawyer's argumentation using value-oriented legal balancing, while drawing upon shallow embeddings of combinations of expressive modal logics in HOL. We highlight the essential role of model finders (Nitpick) and 'hammers' (Sledgehammer) in assisting the task of legal theory construction and share some thoughts on the practicability of extending the catalogue of ITP applications towards legal informatics. 2012 ACM Subject Classification Keywords and phrases Isabelle/HOL, shallow embedding, preference logic, legal reasoning Acknowledgements We thank Bertram Lomfeld for encouraging us to take up this formalisation challenge.
Preprint
Full-text available
To appear in: VerantwortungKI – Künstliche Intelligenz und gesellschaftliche Folgen (Schriftenreihe der Berlin-Brandenburgische Akademie der Wissenschaften), Interdisziplinäre Arbeitsgruppe Verantwortung: Maschinelles Lernen und Künstliche Intelligenz der Berlin-Brandenburgischen Akademie der Wissenschaften, volume 3, 2020.
Article
Full-text available
A framework and methodology—termed LogiKEy—for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classic higher-order logic (HOL). This meta-logical approach enables the provision of powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL are assisting the LogiKEy designer of ethical intelligent agents to flexibly experiment with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples—all at the same time. Continuous improvements of these off-the-shelf provers, without further ado, leverage the reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology has been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
Chapter
We present a computer-supported approach for the logical analysis and conceptual explicitation of argumentative discourse. Computational hermeneutics harnesses recent progress in automated reasoning for higher-order logics and aims at formalizing natural-language argumentative discourse using flexible combinations of expressive non-classical logics. In doing so, it allows us to render explicit the tacit conceptualizations implicit in argumentative discursive practices. Our approach operates on networks of structured arguments and is iterative and two-layered. At one layer we search for logically correct formalizations for each of the individual arguments. At the next layer we select among those correct formalizations the ones which honor the argument’s dialectic role, i.e. attacking or supporting other arguments as intended. We operate at these two layers in parallel and continuously rate sentences’ formalizations by using, primarily, inferential adequacy criteria. An interpretive, logical theory will thus gradually evolve. This theory is composed of meaning postulates serving as explications for concepts playing a role in the analyzed arguments. Such a recursive, iterative approach to interpretation does justice to the inherent circularity of understanding: the whole is understood compositionally on the basis of its parts, while each part is understood only in the context of the whole (hermeneutic circle). We summarily discuss previous work on exemplary applications of human-in-the-loop computational hermeneutics in metaphysical discourse. We also discuss some of the main challenges involved in fully automating our approach. By sketching some design ideas and reviewing relevant technologies, we argue for the technological feasibility of a highly automated computational hermeneutics.
Chapter
The computer-mechanization of an ambitious explicit ethical theory, Gewirth’s Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories exhibiting complex logical features like alethic and deontic modalities, indexicals, higher-order quantification, among others. Harnessing the high expressive power of Church’s type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.
Article
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour.
Article
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
Article
The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions, so hybrid collective decision-making systems will be in great demand. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to where they will act, but always aligned with humans'), as well as safety constraints. In fact, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. These principles would also make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.
Article
Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. The model is a social model in that it deemphasizes the private reasoning done by individuals and emphasizes instead the importance of social and cultural influences. The model is an intuitionist model in that it states that moral judgment is generally the result of quick, automatic evaluations (intuitions). The model is more consistent than rationalist models with recent findings in social, cultural, evolutionary, and biological psychology, as well as in anthropology and primatology.