Do Judges Need to Be Human? The
Implications of Technology
for Responsive Judging
Tania Sourdin and Richard Cornes
Abstract Judicial responsiveness requires judges to act from the perspective of conscious legal rationality and also with intuition, empathy and compassion. To what extent will the judicial role change in terms of responsiveness as many aspects of human activity, including aspects of the work of lawyers and judges, are not only augmented but even taken over entirely by replacement technologies? Such technologies are already reshaping the way the legal profession operates, with implications for judges by virtue of how cases are prepared and presented. In relation to courts, the judicial role is also being augmented, and modified, by technological advances, including the growth of online adjudication. There has even been speculation that the role of the judge not only could be taken online but, as computing techniques become more sophisticated, be fully automated. The role of the human judge, though, is not merely that of a data processor. To reduce judging to such a definition would be to reject not only the humanity of the judge, but also that of all those who come before them. A better understanding of the essential humanity of the judge will help ensure that technology plays a principled and appropriate role in advancing a responsive justice system. Insights from psychoanalytical thought will aid in that understanding, and in developing the code that drives future applications of artificial intelligence in judicial processes.
This Chapter also draws upon material in T. Sourdin (2018) forthcoming.
T. Sourdin (B)
University of Newcastle, Newcastle, Australia
e-mail: tania.sourdin@newcastle.edu.au
R. Cornes
University of Essex, Colchester, UK
e-mail: r.cornes@essex.ac.uk
© Springer Nature Singapore Pte Ltd. 2018
T. Sourdin and A. Zariski (eds.), The Responsive Judge, Ius Gentium: Comparative
Perspectives on Law and Justice 67, https://doi.org/10.1007/978-981-13-1023-2_4
1 Introduction
The role of a judge is multifaceted. It can incorporate activism, complex interactions
with people, dispute settlement, case management, public and specific education
activities, and social commentary, as well as the core adjudicatory functions, which might
be conducted with other judges or, less commonly in some jurisdictions, with lay
people (juries).1 These varying functions are relevant when considering how technology
will impact on the role of judges within our society. Alongside technological
developments, modern trends in judicial approaches are leading some judges
to be more “responsive” and to embrace the realization that judging requires not
only knowledge of the law and the surface facts of a case, but also the empathic
ability to understand the emotions underlying the matters which come before their
court; “emotion—not alone, but in combination with the law, logic, and reason—helps
the judges get it right” (Chin 2012, 1581; see also Sinai and Alberstein 2016,
esp. 225; Colby 2012, esp. 1946). The reasons for this are discussed in chapter one
of this book.
As also noted in Sourdin and Zariski’s introduction, innovations in responsive judging
raise the debate between formalist and realist approaches to the law.2 Judicial
awareness of underlying emotions per se should not be of concern to either side of the
realism/formalism debate. The rule of law, while demanding that cases be decided
according to the law, has never required judges to be blind to the non-rational motives
which may drive litigants coming before them. Indeed, the ability to comprehend,
and, where appropriate, respond to such motives, will assist the judge in the application
of the law and just resolution of disputes.
In this context, increased judicial responsiveness will require a better understand-
ing of emotional intelligence, as judges engage court participants with varying levels
of emotion and compassion, consider therapeutic justice interventions, or pursue pro-
cedural justice approaches. The work of Thibaut and Walker suggests that if people
consider that they have been treated fairly they are more likely to accept a decision
and outcome (Thibaut 1978; Thibaut and Walker 1975; Lind and Tyler 1988; Van den
Bos et al. 2014). Their work also suggests that dignity, voice and participation factors,
which are linked to a range of judicial interventions, can have a significant impact on
all court participants. The extent to which judges engage with this research, and their
respective judicial styles, varies across jurisdictions and between judges. These vari-
ations and innovations in judicial approach present one of a number of challenges to
1For a helpful discussion of this issue see Sourdin and Zariski (2013).
2See ‘Responsiveness and the jurisprudence of judging’ in Chapter “What is Responsive Judging?”
of this book. For an overview see also Leiter (2010). Our concern in this chapter is primarily with the
challenge of coding legal rules and reasoning, and how that touches on the formalism/realism debate.
A further chapter would be required to engage in detail with the question of fact finding by Judge AI.
On the essential relationship between facts and law Frank (1949, 14) noted that, “a legal rule… is a
conditional statement referring to facts”. Facts may be even more uncertain than legal rules (see also
Frank 1930, viii–xiii), and are arguably even less amenable to Judge AI. Our argument in this chapter
about the limits of Judge AI in relation to coding legal reasoning must apply with even greater force
regarding facts. From a computer science perspective see also MacCrimmon and Tillers (2002).
determining how technological developments will, and should, reshape the judicial
role, emphasising as they do the degree to which the judicial role will in significant
part be modeled according to the distinct socio-legal norms of different jurisdictions.
As with many other fields of contemporary life, one of the most significant chal-
lenges to how judges deal with and determine disputes is linked to current and future
developments in Artificial Intelligence or “AI”. AI is an umbrella term which encom-
passes many branches of science and technology and will often involve the creation
of complex algorithms to enable decisions to be made, and outcomes determined.
As such, much AI is focused on evaluation and decision-making of the type that
is often perceived to be the primary activity undertaken by judges.
AI can include machine learning, natural language processing, expert systems,
vision, speech, planning, cognitive computing and robotics (Mills 2016). Schatsky
et al. (2014, 3; emphasis in original) offer a practical definition of AI, stating that
it “is the theory and development of computer systems able to perform tasks that
normally require human intelligence.” AI is an evolving concept. It can also include
“affective processing”, a field that is linked to understanding emotion and extends
to the creation of human-like avatars (Picard 1997). However, at its simplest, AI
and other technological advances mean that computer programs and systems will
become more capable of performing tasks and functions that previously have been
undertaken by humans. Of AI in law, Ashley (2017, 3) notes that it is a “research
field… about to experience a revolution.”
What we will call “Judge AI” covers developments in the various branches of AI
specifically concerned with contributing to judicial tasks. As currently discussed in
the literature, Judge AI covers a range of possibilities from the increasing use of
technology in legal and judicial processes prior to trial, through to playing some role
in court and decision-making processes. Even before a case comes before a judge,
AI may already be having an impact on the judicial task by virtue of its impact on
the legal profession and how cases are prepared and presented to the court (Lopez
2016).3 Impacts here may even include influencing which cases get before a judge,
as predictive coding developments enable predictions to be made as to the outcome
of litigation (Schubarth 2016). Finally, once cases are before courts, Judge AI is
now playing some role in aspects of judicial decision-making, though not without
controversy. In Mexico, the Expertius system is advising judges and clerks “upon
the determination of whether the plaintiff is or is not eligible for granting him/her a
pension” (Carneiro et al. 2014, 227). In the United States, predictive coding has been
used to help determine whether recidivism is more likely in criminal matters and to
assist in making decisions about sentencing (Liptak 2017). A due process challenge
by a Wisconsin inmate to the use of one such program was rejected by the state’s
Supreme Court, even though the inmate was unable to examine the detail of the
3See also, for example, Tamburro (2012) for an analysis of computer-assisted document coding and
review, often referred to as “predictive coding” with implications for the discovery process. The
analysis of large sets of data is likely to have a “game-changing” impact. The technology collapses
the time (and costs) needed to review millions of pages of discovered material, to identify relevant
aspects without devoting massively costly person hours.
software being used against him (it being protected proprietary information).4 This
would appear to be a breach of Principle 8 of the Asilomar AI Principles (endorsed
by the likes of Stephen Hawking and Elon Musk, among many others), that “any
involvement by an autonomous system in judicial decision-making should provide
a satisfactory explanation auditable by a competent human authority” (see Future of
Life Institute, n.d.-b). The need for such scrutiny became clear in relation to Loomis,
when the investigative journalism organization ProPublica carried out an analysis
of “Compas” (the program in question) and found that it was prone to overestimate
the likelihood of recidivism by black defendants and to underestimate that of white defendants (see
Angwin et al. 2016).
In New Zealand, Alistair Knott of the University of Otago’s AI and Law project has
raised concerns about the use of a computer-based prediction model to handle claims
and profile claimants under the country’s state accident compensation scheme (the
Accident Compensation Corporation, “ACC”) (see University of Otago, n.d.). While
the ACC is a state corporation and may appear to be akin to, say, a welfare department,
it should be recalled that the introduction of no-fault compensation under the scheme,
commencing in 1974, was premised on a bargain under which the right to sue in court
in tort for personal injury was abolished. With that in mind, ACC claim handlers are
carrying out what was previously a judicial task. The program now raising questions,
rather sinisterly named the “Survival Analysis Model”, is defended by the ACC as
leading to cost efficiencies (an aspect of concern here is also the general advance of
managerialism into “the justice sector”, raising separation of powers concerns).5 On
one view the Survival Analysis Model reduces human claimants to a matter of data
processing, analysis and dry statistical prediction. Questions are now being raised as
to its fairness, humanity, and accountability (see Nine to Noon 2017; Chiang 2017).
While we acknowledge that many of the developments in Judge AI thus far have
merit, and while we will make the argument for the expansion of Judge AI alongside
human judges, our purpose in this chapter is also to sound a note of caution as to how
far AI should go in relation to the judicial function. In what follows we discuss—in
Part 1—the range of beneficial contributions AI can make to judicial processes,
from supportive developments and appropriate replacement technologies through to
constructively disruptive reforms. In Part 2 we make the case for principled limits
to the use of AI in relation to the work of judges, concluding that the role of the
judge, and certainly the developing role of the responsive judge, requires, at its heart,
a human mind. The existential core of that argument is based on insights into the
judicial mind drawn from psychoanalytical concepts.
4State v Loomis (2016). See critique in State v Loomis (2017) and Brooks (2017). The US Supreme
Court declined to take the issue up: see Loomis v Wisconsin (2017).
5We do not have space here to pursue this in detail. The essence of the critique is that executives
(governments), for possibly quite innocent reasons of managerial efficiency, can tend to view the
work of the courts as merely part of the overall justice sector, including the police and prisons, and
not as the operating of a distinct branch of the state. See discussion in Elias (2017).
2 Part 1—How AI Can Beneficially Contribute to the Work
of Courts and Judges
2.1 Technology and Dispute Resolution
As Sourdin (2015a) has noted, there are three main ways in which technology is
already reshaping the justice system. First, and at the most basic level, technology
is helping to inform, support and advise people involved in the justice system (supportive
technology). Many people now locate and obtain legal support and services
online, and the growth of online legal firms that may provide “unbundled” legal
services has been significant over the past three years (see, e.g., Lawyal Solicitors
2016).
Second, technology can replace functions and activities that were previously
carried out by humans (replacement technologies). Some web-based information
(including digital video), video-conferencing (including internet-based group video
calls), teleconferencing and email can supplement, support and replace many face-to-face,
in-court interactions. At this second level, justice is supported by technology
and, in some circumstances, this can alter the environment in which court hearings
take place (see, e.g., Soars 2016). In this regard, there has been a growing focus
on online courts and what they may provide (see Ministry of Justice 2016). These
pressures are partly a response to growing evidence of unmet legal need, including
the need for legal assistance, concerns about access to justice more generally, and
the growth in large scale online dispute resolution systems which are already being
used to support some court and tribunal systems (see, e.g., Tyler Technologies 2017;
Civil Resolution Tribunal 2018). The creation of an online court involves replacing
a physical court and litigation process with an online alternative that encourages the
resolution of a dispute but retains the stature and powers of a physical court of law
(Harvey 2016).
Chief Justice Warren of the Supreme Court of Victoria has suggested another
model where technology is supportive: the distributed courtroom (Warren 2015). A
physical courtroom remains central in this model, but participants may be represented by
life-size screens or holographic projections, enabling judges, lawyers, jury members
and parties to appear in court from any convenient location. This model is
facilitated through online videoconferencing technology, such as Skype, but still
preserves the option of a physical space for the court, and the option of physically
attending court. Essentially, such technologies allow judges to be more responsive by
enabling remote participation in court proceedings and by meeting the communication
preferences of court users.
Finally, at a third level, technology can change the way that judges work and
provide for very different forms of justice (disruptive technology), particularly where
processes change significantly (Sourdin 2015a). Technologies may enable people to
access more sophisticated online “advice” that is supported by AI or to consider
options and alternatives or engage in different ways. In contrast to the traditional
rational decision-making approaches, some of these more sophisticated technological
programs are designed to encourage the development and refinement of a number
of options (rather than producing one outcome) (see, e.g., Smartsettle One+ 2017).6
These areas of technological innovation have the capacity to be more disruptive than
previous innovations that supported a “graft and grow” approach and assumed that
adjudicative processes would not change in the context of their basic procedural
stages (Sourdin 2015a, 97).
Legal information and AI systems can already use sophisticated “branching”
and data searching technology to create elaborate decision trees that can suggest
outcomes to disputes. This may be done by systems that emulate aspects of human
intelligence, such as neural networks (see, e.g., Chaphalkar et al. 2015). Essentially, what takes place is
that the system asks the user a number of questions or uses existing data about the
user and poses questions about the dispute to enable an accurate description of the
dispute to be built. The computer then arrives at a conclusion by applying the law to
the dispute description. It does this by applying rules for specific sets of facts. Finally,
the computer can perform tasks based on the description given. New developments in
AI enable machines to learn from existing data and in effect create their own decision
trees. These processes could provide indicative decisions, or, as we will argue below,
be used as a quality control mechanism to look for, inter alia, signs of unconscious
bias. Such systems can be continuously updated and are reflexive in that machine
learning enables systems to improve and be constantly revised with new data sets.
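To make this concrete, the sketch below shows, in miniature, how a rule-based branching system of the kind described might operate. The questions, rules and outcomes are invented for illustration and are not drawn from any deployed system:

```python
# Minimal sketch of a rule-based "branching" adviser (hypothetical rules).
# A real system would encode far richer questions, rules and outcomes.

QUESTIONS = [
    ("written_contract", "Is there a written contract? (y/n) "),
    ("goods_delivered", "Were the goods delivered? (y/n) "),
    ("payment_made", "Was payment made in full? (y/n) "),
]

def build_dispute_description():
    """Ask branching questions to build a structured description of the dispute."""
    description = {}
    for key, prompt in QUESTIONS:
        description[key] = input(prompt).strip().lower() == "y"
    return description

def suggest_outcome(d):
    """Apply hand-coded rules to the dispute description (a tiny decision tree)."""
    if not d["written_contract"]:
        return "Outcome uncertain: oral contract, further evidence needed."
    if d["goods_delivered"] and not d["payment_made"]:
        return "Likely outcome: seller entitled to recover the unpaid price."
    if not d["goods_delivered"] and d["payment_made"]:
        return "Likely outcome: buyer entitled to a refund or to delivery."
    return "Likely outcome: no remedy indicated on these facts."

if __name__ == "__main__":
    print(suggest_outcome(build_dispute_description()))
```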
Similarly, developments in Online Dispute Resolution (ODR), a form of alterna-
tive dispute resolution (ADR) where parties use the internet and technology to help
resolve their dispute cheaply and efficiently, also support and enable the development
of AI by creating the structure and context within which it can flourish. In ODR, dis-
putants are not required to meet in person, as the ODR process can happen remotely
through an internet connection. AI decision-making is already being used within the
field of ODR. These systems are labelled expert systems, which are programmed by
experts in the field and integrate rule-based algorithms to assist the program to make
decisions based on information received from the parties (Legg 2016). Legg explains
that these processes “collect facts from users through interview-style questions and
produce answers based on a decision-tree analysis” (2016, 228). More sophisticated
technologies can now do more than this, and Susskind and Susskind (2015, 45) have
noted that “massive data-storage capacity and brute-force processing” do not require
the replication of a human expert’s reasoning, and represent a fundamentally different way of thinking.
In the Netherlands, an advanced ADR program called Rechtwijzer had ODR
components that were used to assist couples in the separation or divorce process.
Rechtwijzer asked questions about the parties and their relationship and provided
options based on this input information (Bickel et al. 2015). The program also
provided “information, tools, links to other websites and personal advice” which
encouraged the parties to resolve their dispute between themselves (2015, 4). If res-
olution was not reached, the final step involved Rechtwijzer providing the parties
6It has been said that collaborative platforms, such as GroupMindExpress.com, are likely to be
used more frequently in large multi-party disputes where information and participants are plentiful
(Gaitenby 2004).
with information and contact details of professional third parties, such as mediators,
legal representatives, and other dispute resolution processes. Whilst Rechtwijzer will
largely be replaced by a new system and online arrangements, its creators have noted
that the primary obstacle to the success of such ODR arrangements relates
to the incapacity of courts, lawyers and government to fully embrace these types of
innovations (see, e.g., Barendrecht 2017).
If these techniques can be used effectively within the field of ADR, then it follows
that the introduction of AI programs into the court system is also feasible. Designers
and implementers may draw on the experiences of these ADR programs to help
inform any AI judge programs, or alternatively AIs more specifically designed to
assist judicial officers. Although Rechtwijzer has been the subject of some recent
criticism, particularly in respect of security and safeguards, the primary concern
appears to relate to the lack of societal infrastructure to support a more online court-
based model (see, e.g., Barendrecht 2017). The experience of AI in ODR and ADR
does however, suggest that there is room to blend more disruptive technologies and
platforms with court processes that can support judicial work.
2.2 Technology Supporting Human Judges and Court
Processes
If technologies can support non-judicial decision-making (by for example, enabling
more accurate potential outcome identification by participants) they may play an
increasing role in some forms of judicial dispute resolution.7 In Australia, non-rule-
based branching technology has been used in a project of the Intelligent Computing
Systems Research conducted by La Trobe University and Victoria University (called
“Split-Up”). The project, led by Professor John Zeleznikow, determined that there
are 94 factors relevant for a distributive decision, and was directed at applying AI to
assist in calculating the division of property in family law proceedings (see Victoria
University, n.d.). Split-Up, a hybrid rule-based neural network system that grew out
of this research, offers advice on how property is likely to be distributed if the matter
were to be determined by a court. It has been trialed by some judges, judicial reg-
istrars and registrars of the Family Court of Australia as well as legal practitioners,
mediators and counsellors. A more advanced approach, which is oriented at support-
ing negotiation, is called FamilyWinner (Zeleznikow and Bellucci, n.d.; Zeleznikow
et al. 2007).
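By way of illustration only, a hybrid of this kind can be sketched as hand-coded rules that convert case facts into features, with a model trained on past outcomes mapping those features to a predicted division. Everything below (the features, the toy data, and the linear model standing in for Split-Up’s neural network) is hypothetical:

```python
# Toy illustration of a hybrid rule-based/learned predictor in the spirit of
# systems like Split-Up (hypothetical features and data; not the real system,
# which reportedly uses 94 factors and a neural network).
from sklearn.linear_model import LinearRegression

def rule_based_features(case):
    """Hand-coded rules turn raw case facts into normalised features."""
    return [
        case["marriage_years"] / 40.0,          # relative length of marriage
        case["wife_contribution_pct"] / 100.0,  # assessed contributions
        1.0 if case["children_with_wife"] else 0.0,
    ]

# Tiny invented training set: past cases and the percentage awarded to the wife.
past_cases = [
    {"marriage_years": 20, "wife_contribution_pct": 50, "children_with_wife": True},
    {"marriage_years": 5,  "wife_contribution_pct": 30, "children_with_wife": False},
    {"marriage_years": 30, "wife_contribution_pct": 60, "children_with_wife": True},
]
observed_splits = [60.0, 35.0, 65.0]  # invented outcomes, for illustration only

model = LinearRegression().fit(
    [rule_based_features(c) for c in past_cases], observed_splits
)

new_case = {"marriage_years": 12, "wife_contribution_pct": 45, "children_with_wife": True}
print(f"Predicted share to wife: {model.predict([rule_based_features(new_case)])[0]:.1f}%")
```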
Further examples of technology now aiding courts and tribunals can be found
in British Columbia, Northern Ireland, and possibly soon, England & Wales. In
British Columbia, Canada, the Civil Resolution Tribunal uses an online platform to
guide disputants (Civil Resolution Tribunal 2018). Online-supported negotiation and
informal dispute resolution are features of the system, together with adjudication,
7For an example of one mechanism supporting disputants, see MyLawBC (n.d.), available at: http://
mylawbc.com/info/about.php.
with most cases decided “… on evidence and arguments submitted through the
tribunal’s online tools. However, when necessary, the adjudicator will have discretion
to conduct a telephone or video hearing” (see Benyekhlef and Vermeys 2017). In
Northern Ireland, the Northern Ireland Courts and Tribunal Service now offers an
online process in respect of small claims, although final adjudication remains a face-
to-face option (Northern Ireland Courts and Tribunals Service 2011). In England
and Wales, plans to introduce Judge AI in relation to some categories of dispute
were dropped in 2017 (less controversial, but significant, aspects associated with the
introduction of online dispute resolution are proceeding) (Hyde 2017; for current
developments see Johnstone 2016).8
2.3 All Rise for Judge “Co-bot”?
The issues associated with the total replacement of human judges, which we turn
to below in Part 2, lead us to argue that the focus should continue to be on using
technological advances to support human judges in their judicial work. Judge AI
systems, we argue, should complement current human work, allowing for greater
efficiencies (Surden 2014). This approach suggests that assistant “co-bots” rather
than replacement robot judges could play a more important role in the future.
Already there is precedent for programs to predict likely outcomes based on
previous cases. In relation to the European Court of Human Rights (ECtHR), Aletras
et al. (2016) have developed a program that textually analysed that Court’s decisions
relating to breaches of human rights to discover patterns in judgments. The program
learnt these patterns and was able to predict the outcome of cases presented to it in
textual form with 79% accuracy on average (Aletras et al. 2016). This is an example
of machine learning, where the computer system was able to “analyze past data to
develop rules that are generalizable going forward” (Surden 2014, 105). Machine
learning allows computer programs to learn complex tasks through experience, rather
than through hand-crafted computer functions (see Silver et al. 2016, 489; Surden
2014, 89).
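The underlying technique can be sketched as supervised text classification: judgments are converted to weighted n-gram vectors and a linear classifier is trained on labelled outcomes. The snippet below is a minimal, hypothetical illustration (placeholder texts and labels), not the actual Aletras et al. pipeline, which trained on far larger labelled corpora of ECtHR judgments:

```python
# Minimal sketch of outcome prediction as supervised text classification.
# The corpus and labels below are placeholders (1 = violation found, 0 = none);
# a real study would train on thousands of labelled judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

judgment_texts = [
    "applicant detained for years without judicial review ...",
    "no interference with the applicant's family life was shown ...",
    "criminal trial delayed for nine years without justification ...",
    "the complaint is manifestly ill-founded ...",
]
outcomes = [1, 0, 1, 0]

# Represent each judgment as a weighted bag of words and word pairs (n-grams).
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(judgment_texts)

# A linear classifier learns which n-grams are associated with each outcome.
classifier = LinearSVC().fit(features, outcomes)

new_case = "the applicant was held without review for a prolonged period ..."
prediction = classifier.predict(vectorizer.transform([new_case]))[0]
print("Predicted outcome:", "violation" if prediction == 1 else "no violation")
```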
We noted above though that the distinct norms of different jurisdictions present a
challenge to Judge AI development. Of note here in relation to Aletras and colleagues’
work is the fact that the form of ECtHR judgments (following the decision template
outlined) are heavily influenced by the distinct legal reasoning style of the civil law
jurisdictions covered by the Court. The ECtHR’s relatively standard form of judgment
contrasts with the rather more open textured and at times idiosyncratic styles of
judgments often found in common law courts. It may well be that one disruptive
effect of use of such software in common law jurisdictions could be a change in
judgment writing styles, as judges adapt to better work with such technology.
AI programs that can not only predict outcomes, but also produce a suggested
reasoned decision based on the information input, could be used to assist human
8The reforms fell because of the snap election called in 2017.
judges in judgment preparation. These systems could produce a draft judgment based
on the system’s determined outcome (Sourdin 2015a, 102). A human judge could
then use this draft (much as many judges, especially in appeal courts, make use of
drafts from legal assistants) to produce their own reasons for judgment. This use of AI
would allow for human oversight of the computer program and enable discretionary
or social considerations to be taken into account by the human judge that may be
beyond the capacity, or authority, of the computer program. Such developments
are not without significant risk and the capacity for AI decisions to be appealed
or reviewed by human decision-makers is often cited as a necessary component of
any automated decision-making system (Perry 2017). Evaluations of Rechtwijzer,
the ADR program outlined above, similarly found that although participants were
satisfied with their experiences (Bickel et al. 2015), a majority still felt the need to
have a third party check over the agreement made through the system (Bickel et al.
2015).
2.4 Judge Co-bot: Human Unconscious Bias and Quality
Control?
While the value of increased judicial diversity is now widely accepted, and with that,
the reality that different judges will legitimately reach different decisions applying
the same law to the same facts, unconscious biases affecting a human judge’s decision
(for example, racism or sexism) are clearly a concern. In a decision concerning the
alleged bias of a judicial officer the psychoanalytically informed Judge Jerome Frank
said:
Every judge … unavoidably has many idiosyncratic “leanings of the mind,” uniquely
personal prejudices, which may interfere with his fairness at trial. He may be stimulated by
unconscious sympathies for, or antipathies to, some of the witnesses, lawyers or parties in a
case before him (In re J.P. Linahan 1943, 652).
As those in the access to justice movement have noted, the outcome of court adjudica-
tion can clearly be influenced by many factors, including the quality of representation,
the resources available to the litigant and the quality of the decision-making and sur-
rounding rights-based framework.9 In addition, adjudicative decision-making can be
influenced by a range of factors that affect substantive justice (Sourdin 2012).
As Sourdin (2016) has noted, these include a range of influences on the decision-maker,
such as:
• when and what a person has eaten (Tierney 2011)10;
• the time of day (see Tierney 2011);
• how many other decisions a person has made that day (decision fatigue) (Tierney 2011);
• personal values (Chisholm 2009; see also Quintanilla 2012);
• unconscious assumptions (Mason 2001, 680);
• reliance on intuition (Kirby 1999b, 4);
• the attractiveness of the individuals involved (Agthe et al. 2011)11;
• emotion (Bennett and Broe 2007, 84–86).
9For further discussion, see Sourdin (2015b).
10Tierney (2011) refers to a study of parole board decision-making reported in Danziger et al. (2011).
Judge Frank thought that “the conscientious judge will, as far as possible, make
himself aware of his biases of this character, and, by that very self-knowledge, nullify
their effect” (In re J.P. Linahan 1943, 652). The extent to which these unconscious
factors influence judges is unknown, but even if judges become aware
of these factors, they are likely to underestimate their impact (Wilson and Gilbert
2008). This is partly because we are more likely to exaggerate information about
our own personal qualities that we perceive as positive and less likely to accept
information that raises any doubts about our positive characteristics.12
Sourdin (2012) has suggested previously that AI could play a role in address-
ing unconscious judicial bias. A premise of our note of caution about Judge AI
in Part 2 below is that it does not allow for the beneficial contributions from the
human judge’s unconscious. It might therefore be argued that a Judge AI—“Judge
Co-Bot”—lacking an unconscious, would not be subject to the adverse impacts of
negative unconscious bias. One potential benefit from Judge AI could therefore be
the use of algorithms to review either individual judicial decisions, or larger sample
groups, to exercise a quality control function by identifying evidence of inappropri-
ate biases in decision making (see discussion in Ashley 2017, Chap. 12). The simple
application of statistical analysis can already do this to some extent. Could judicial
co-bots using more sophisticated algorithms do so more thoroughly, or in real time,
working alongside human judges?
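As a minimal illustration of the statistical baseline such a co-bot would start from, consider a simple test of whether outcomes differ across a protected attribute in a sample of decisions. The counts are invented, and a genuine audit would have to control for case characteristics before inferring bias:

```python
# Minimal sketch of a statistical bias audit over a sample of decisions.
# Invented counts; a real audit would control for case mix and many covariates.
from scipy.stats import chi2_contingency

# Rows: defendant group A / group B; columns: custodial / non-custodial outcome.
contingency = [
    [120, 180],  # group A: 120 custodial, 180 non-custodial
    [90, 210],   # group B:  90 custodial, 210 non-custodial
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Outcome rates differ across groups: flag for human review.")
else:
    print("No statistically significant difference detected in this sample.")
```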
Care will be needed in taking the steps described above. As we have already noted
in cases such as State v Loomis, some forms of AI currently in use demonstrate
that there can be significant risks because programmers
and others may embed their own unconscious biases in computer programs without
intending to do so. That problem goes beyond the law, with algorithms in other
contexts producing unwanted results including promoting racism and inaccurate
outcomes (see Levin 2016).13 In addition, using Judge AI has the potential to reduce
the capacity of the justice process to deal with people within courts with dignity
and to respond in a human way (which may incorporate emotion and compassion).
On this last weakness of Judge AI, developments in affective computing suggest
that it may be feasible in the future to develop coded applications able to recognise
and respond appropriately to human emotion.14 While we expect such progress to
11The researchers in this area suggest that there may be a bias away from attractive same sex
individuals and a bias towards attractive other sex individuals.
12For an interesting discussion of this phenomenon, see Brooks (2011, 220).
13See also Smith (2016) regarding the use of algorithms in relation to recidivism.
14For an interesting overview on affective technology, see Wikipedia (2017), quoting Picard (1997).
continue, there will remain limits, some of them of a nature that, we suggest, will
place an ultimate limit on the application of AI in judicial systems.
3 Part 2—Limits to Judge AI
3.1 More Than Presiding in Court—the True Extent
of the Role of the Modern Judge
A threshold challenge to future developments in Judge AI is linked to inadequate
understandings of the nature and extent of the judicial role. The role is often inter-
preted as being synonymous only with judicial decision-making or adjudication at
first instance between contesting individuals, and little, if anything, more. Whilst
the question of Judge AI is certainly fraught with complex issues when considering
decision-making alone, there are other aspects of the judicial role that
cannot be readily displaced by AI. Most civil and criminal disputes settle
before getting to a full hearing. Judges play an important role in both managing and
settling cases long before an actual trial. Their curiosity, emotional understanding
of parties and their lawyers, their agile questioning and exploration of issues can be
decisive in enabling litigants to consensually determine their own outcomes, rather
than submitting to a formal judicial decision.
The evolving nature of the judicial role, and a strong rationale for caution in terms
of the development of Judge AI, also arises because of the therapeutic influence that
judges may have on disputes, through interventions in both criminal and civil con-
texts that are directed at changing parties’ future behaviors as well as determining
outcomes in respect of past activities (as discussed more generally in this book).
Such interactions require judges to be empathic and understanding and to commu-
nicate with a range of people in different and supportive ways to foster individual
transformation and acceptance.
Apart from their critical adjudicative role, judges also play an educative
role, informing litigants and lawyers about approaches to be taken and contributing
to civic education at a broader level. While judges must be cautious as to how they
express themselves (lest, if expressing themselves too forcefully they open them-
selves to apparent bias challenges) it is now accepted that they play an important role
in public debates.15 Proponents of the view that judges could be replaced by AI fail
to acknowledge the full range of what judges contribute to society beyond adjudi-
cation, including important and often unexamined issues relating to compliance and
acceptance of the rule of law.
15For a successful apparent bias challenge on the basis that a judge so clearly disliked aspects of
criminal defendants’ rights protected in the European Convention on Human Rights, that a fair trial
was not possible before an appellate court containing that judge, see: Hoekstra v. H.M. Advocate
(No.2) (2000) S.L.T. 605; discussed in MacQueen and Wortley (1998).
3.2 The Rule of Law, Judicial Authority, and the Human
Judge
A judge exercises the judicial power of the state. That may entail an authority to
deprive a person of their liberty, determine private rights against other persons, or
their rights vis-à-vis the powers of the executive or legislative branches (and behind
them, the wider populace). Access to an independent and impartial judge (hitherto
always understood to be a person trained in the law according to the usages of
their jurisdiction) is a requirement of any liberal democratic state. Judicial functions
under the rule of law entail a complex cocktail of legal rationality and legally trained
human judgment. An element of litigants’ respect for judicial judgment, and the
social legitimacy of the judiciary more broadly, must come from, we think, the fact
that it is rendered by a fellow human being. As Harvey (2016, 95) notes, “what is at
stake [in developing Judge AI] is continued confidence in and adherence to the rule
of law.”
On this point, Cornes and Henaghan (forthcoming) argue that individual and
community recognition of the authority of “the judge” is not only traceable back
to very early infancy, but also to primitive desires for the group (society) to be led.
Any project for an AI judge must contend with that uniquely human provenance.
Discussing the early contest between siblings vis-à-vis parents, Freud reasoned, “there
grows up in the troop of children a communal or group feeling, which is then further
developed at school. The first demand made by this reaction formation is for justice,
for equal treatment for all” (1921, 120). We may disagree with our parents, but, in
(mostly) functional families we accept our parents’ inter alia judicial authority over
us because of instinctive early bondings and reactions. The respected authority of
those early judicial decisions (over disputes, say, between us and siblings) is translated
later into respect for the human judge whom we may encounter in court later in life.
Questions therefore arise as to whether a computer program or automated process
could possess both the rational and emotional authority to make decisions in place
of a human judge. In the context of an automated system delivering administrative
decisions, Justice Perry (2017, 31) raises questions such as who makes the decision,
and who possesses the legal authority to make such a decision. Is it the computer pro-
grammer, the policy-maker, the human decision-maker, or the computer or automated
system itself?
Legislators have attempted to remove some of the complexities of this issue.
For example, a decision made under the Therapeutic Goods Act 1989 (Cth) by a
computer program is deemed to have been made by the Secretary.16 How such a
deeming provision would fare in litigation, and whether it would be accepted by
litigants remains uncertain.
Justice Kirby, writing in 1999, noted that the need for adjudication to be public
and open may also present difficulties with the adoption of electronic courts
and Judge AI: “[t]he right to see a judicial decision maker struggling conscientiously,
16Therapeutic Goods Act 1989 (Cth) s 7C(2).
in public, with the detail of a case is a feature of the court system which cannot be
abandoned, at least without risk to the acceptance by the people of courts as part of
their form of governance” (Kirby 1999a, 188). Without a public, open forum for the
administration of the state’s judicial powers, would the exercise of these powers be
accepted by the populace? Chief Justice Warren of the Supreme Court of Victoria in
Australia (2015) suggests that they would be, at least insofar as justice is linked to
public access: few people attend court hearings in person, and information and news
is sourced more and more from online media, including social media. It is important
to note though that AI is still unable to interact with people with compassion, emotion,
or agile or intuitive responsiveness. Also, an AI judge would not be able to meet the
need for a party in court to see a decision-maker “struggling conscientiously” with a
decision.
3.3 Variation in the Adjudicative Function Within
the Judicial Hierarchy
The adjudicative component of the judicial function, in most common law systems,
occurs at three levels, each with a distinct function. While first instance judges deal
with the raw matter of a dispute, first level appeal court judges work to correct
obvious errors at first instance, and to some degree—legal systems vary—to carry
out wider legal system supervisory roles. Finally, higher level courts (for the most
part second level appellate) have wide-ranging responsibilities which necessarily go
beyond the concerns of the individuals, organisations and government entities that
are directly involved in a court action (as for the most part, upper tier courts hear
cases of “general or public importance”).17
At the initial appellate, and certainly at the secondary appellate level, there is scope
for policy making, which requires an ability to deal with polycentric problems
and policy questions. At all three levels, decision-making requires consideration of
the context of the dispute and the legislation. Simply put, judges at all levels need to
be responsive to contextual factors that help to determine the meaning of legislation
and human activities. This is a complex task and where boundaries are overstepped
there can be a concern that judges, rather than an elected government, are “making
law.”
The objections to human judges entering this arena may be amplified with AI
involvement. If, for example, from a democratic perspective one objects to human
judges placing legislative and other material within context and in some circum-
stances reviewing legislation or executive actions, an AI judge approach must be
even more troubling. Furthermore, an AI judge may not be perceived to be indepen-
dent (it would be the creature of its programmer). Given the limits Surden (2014)
notes, it is also doubtful that it would have the capacity to question beyond the presenting
17See discussion in Le Sueur and Cornes (2000, 53–97).
material; that is, to view actions and legislation within a broader and evolving human
context.
Also, what would be the relationship between a full AI judge and the human judges
in a legal system? The idea that the AI judge would be reviewable by a human judge is
unremarkable; though even here questions would arise as to what is reviewed—the
decision only? Or also the algorithms which gave rise to the decision? In State v
Loomis the code was held to be unreviewable, yet as we have seen, the code in
question itself was found to be racially biased. In relation to lower tier courts, where
Judge AI is currently more likely to gain a foothold for at least filtering purposes,
the default availability of a responsive human judge permitted to review all aspects
of the AI input, and able to call on a complex array of communication and social
skills, remains desirable to support understanding and compliance with the law. The
concept of an AI judge authoritatively reviewing a human judge raises even more
serious ethical conundrums going to the very heart of the question of what it means
to be human.
3.4 The Challenge of Novelty
For the moment, most instances of Judge AI concern entry-level issues of dispute
settlement, or involve AI playing an assisting role. That is, AI is used to inform or
support some initial participant decision-making rather than carrying out the judi-
cial decision-making itself. Judicial decision-making arguably requires more, though,
than algorithmic sophistication, particularly where novel situations exist. As Surden
(2014) notes, machine learning techniques are only useful where previously analysed information
is similar to new information presented to the AI. Should an AI program be presented
with a novel case where no similar precedent exists, it may not be well suited to making
a prediction or coming to an outcome. These issues may arise in Judge AI where
the sample size of previous cases is not large enough for the computer program
to discover patterns and create effective generalisations (Surden 2014, 105–106). It
may also be argued that the exercise by a judge of discretion—the application of
principle to novel factual circumstances—to some extent will always require a fresh
evaluation of circumstances beyond the capability of machine learning.
AI researchers have had a number of clear successes addressing these sorts of
issues outside of the legal field. These successes do suggest that predictive analysis,
even where there are significant variations in terms of novelty, can be “learned” and
that these insights could be extended into Judge AI applications. Google’s DeepMind
researchers successfully trained an AI program, AlphaGo, to play the complex game
of Go at a higher level than the European champion by training the neural
networks of the program “directly from gameplay purely through general-purpose
supervised and reinforcement learning methods” (Silver et al. 2016, 489). There are
also many examples in the medical field with AI now increasingly being used for
diagnostic purposes and in relation to some human functions (Ramesh et al. 2004;
Tirrell 2017; Neill 2013).
Working from the current state of AI use in courts, Harvey (2016) gives
a simplified description of a hypothetical advanced application of Judge AI using the
example of algorithms already present in legal databases. These databases employ
natural language processing to assist with the sourcing of relevant material based on
search terms. The application, Harvey posits, would be able to decide cases. It would
be required to go further than existing applications, reducing returned sources to a
manageable and relevant sample, and then deploying tools to compare these sources
of law to a present case and engaging in analysis to determine the outcome (Harvey
2016, 93).
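The first stages of such an application, sourcing and ranking candidate materials by similarity to the present case, can be sketched as follows. The corpus and case text are invented, and the comparative, predictive and probability-analysis stages Harvey describes are deliberately left out:

```python
# Minimal sketch of the retrieval stage of a hypothetical Judge AI pipeline:
# rank stored sources of law by textual similarity to the present case.
# (Invented corpus; the downstream comparison and outcome-prediction stages
# Harvey envisages are not implemented here.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources_of_law = {
    "Case A": "negligence duty of care owed to neighbour ...",
    "Case B": "contract formation requires offer and acceptance ...",
    "Case C": "occupier liable for injury to lawful visitor ...",
}
present_case = "visitor injured on premises, occupier failed to warn ..."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(sources_of_law.values()) + [present_case])

# Similarity of the present case (last row) to each stored source.
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranked = sorted(zip(sources_of_law, similarities), key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: similarity {score:.2f}")
```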
Harvey (2016, 93) explains that this final step would require “the development
of the necessary algorithms that could undertake the comparative and predictive
analysis, together with a form of probability analysis to generate an outcome that
would be useful and informative.” Aside from the question of how the case facts
would be determined, as will become clear from our discussion below, however,
we are concerned that one of the premises Harvey starts from is that the material
required to be coded is “the entire corpus of legal information” (2016, 94). We will
argue that while such conscious knowledge is certainly essential to judicial decision-
making, so too is the wider influence of the judge’s life experience at work in the
judge’s unconscious and the extent to which that experience enables the judge to be
responsive from a human perspective.
3.5 Translating Law into Code
Commentators have also raised the issue of how to accurately translate the law into
codes, commands and functions that a computer program can understand.18 Com-
puter programmers and IT professionals rarely have legal qualifications or experi-
ence, nor are they policy or administrative experts. However, it is these professionals
who are tasked with translating legislation and case law into computer codes and
commands to allow an autonomous process to make decisions. These sources of
law—whilst complex on their own—also operate within the context of statutory pre-
sumptions, and discretionary judgment. Ensuring these intricacies are properly coded
into an autonomous process is challenging. Because of these challenges, commenta-
tors note that more regulatory areas of the law may be better suited to transformation
into computer code (Bathurst 2015).
Similarly, these codes will need to be constantly updated due to frequent amend-
ments, new case decisions, and complex transitional provisions (Perry 2017).
Autonomous systems will also require the capacity to apply the law from various
points in time, to ensure that cases are decided on the laws that applied at the rele-
vant time the actions occurred. These challenges can potentially be met by including
lawyers and policy-makers in the creation and updating of these computer programs;
18See, for example, Perry (2017), which provides a thorough treatment of the issues involved in
translating law into computer code.
however, as most legislators will attest, converting human behaviours into legislation
is complex and often fraught with uncertainty (Perry 2017). In relation to involving
lawyers in such work there may be issues of capacity: despite the now decades
of developments in the AI field, only in recent years have some legal academics
acknowledged that while lawyers and judges may not need to know how to write
computer code, “they will need… an ability to think about legal practice in terms of
engineering a cognitive computing process” (Ashley 2017, 36). If AI in the law is to
reach its true potential, law schools need to be considering now how to prepare their
graduates to be conversant in the language of IT professionals.
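The point-in-time requirement noted above is one of the few of these challenges that can be stated crisply in code. The sketch below (a hypothetical penalty provision with invented commencement dates) shows the basic mechanism of selecting the version of a rule in force when the relevant conduct occurred:

```python
# Minimal sketch of point-in-time rule selection (hypothetical provisions).
# Each version of a rule records when it commenced; the system must apply
# the version in force at the date of the conduct, not the current version.
from datetime import date

# Hypothetical penalty provision, amended twice since enactment.
RULE_VERSIONS = [
    (date(2010, 1, 1), {"penalty_units": 5}),
    (date(2015, 7, 1), {"penalty_units": 8}),
    (date(2021, 3, 1), {"penalty_units": 12}),
]

def applicable_version(conduct_date):
    """Return the latest version that commenced on or before the conduct date."""
    in_force = [v for start, v in RULE_VERSIONS if start <= conduct_date]
    if not in_force:
        raise ValueError("No version of the rule was in force at that date.")
    return in_force[-1]

# Conduct in 2019 attracts the 2015 version, even if the case is decided today.
print(applicable_version(date(2019, 6, 30)))  # {'penalty_units': 8}
```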
3.6 Syntax and Semantics
The use of AI in law will be confronted by the philosophical distinction between
syntax and semantics. Searle (2002) noted that computer programs possess syntax
(a formal structure of operation), but do not possess semantics (meaning behind
these operations). Digital technology processes information in the form of abstract
symbols, namely ones and zeros. The technology possesses the ability to process
and manipulate these symbols, but it does not understand the meaning behind these
processes. In other words, the machine does not currently understand the informa-
tion that it is processing. This can be contrasted with the human mind, which can
understand the information that it processes.
Therefore, while computer programs may be able to approximate human ways
of thinking, they cannot yet duplicate human ways of thinking (Searle 2002). As
the information that is required for human decision-making becomes more complex
(involving multiple complex data sources),19 humans, including judicial officers, may
benefit from some form of AI assistance. Assistance, though, is not replacement.
Of all the challenges to advancing Judge AI, this one—capturing in AI form the
semantics of judicial thinking—is the most challenging because it raises directly the
question not only of the nature of the human judge’s psyche, but that of the human
psyche generally.
3.7 To Judge Is Human—An Existential Limit
to the Development of Judge AI
We may well ask, generally (let alone in relation to Judge AI) whether there are or
should be any limits to the reach of AI.20 Harari (2015, 394) writes:
19It is argued that the information that may be considered by a judge has expanded significantly in
recent years. See, for example, Tashea (2016).
20Contrast, for example, the views of Musk and Zuckerberg, outlined in Domonoske (2017) and
Solon (2017).
Scholars in the life sciences and social sciences should ask themselves whether we miss
anything when we understand life as data processing and decision making. Is there perhaps
something in the universe that cannot be reduced to data? Suppose non-conscious algorithms
could eventually outperform conscious intelligence in all data-processing tasks – what, if
anything, would be lost by replacing conscious intelligence with superior non-conscious
algorithms?
Our answer is: humanity. In relation to the life of courts we suggest that there is
more complexity in the work of the human judge, and the lives of the litigants
before them, than can, or should, be coded for entirely in the most sophisticated of
algorithms. Harari’s thesis is that there is a new religion associated with the advance
of AI: “Dataism”. The premise of Dataism is that the universe “consists of data
flows, and the value of any phenomenon or entity is determined by its contribution
to data processing. … It collapses the barrier between animals and machines and
expects electronic algorithms to eventually decipher and outperform biochemical
algorithms” (Harari 2015, 428).21
Dataists, he argues, “are skeptical about human knowledge and wisdom, and
prefer to put their trust in Big Data and computer algorithms” (Harari 2015, 368).
High priests of the new religion, such as Mark Zuckerberg, therefore regard any
impediments to the uploading and sharing of information (whether true or not) as
heresy.22 Like Harari, Lanchester (2017) reaches for another grand analogy: he calls
Facebook and Google, “the new colonial powers.”23 Barry Lynn and Matt Stoller of
the Open Markets Institute (2017) similarly warn of the monopolistic power of these
tech companies as a threat to democracy and the rule of law. Lynn lost his job at the
New America Foundation (significantly funded by Google) after publicly endorsing
the EU’s €2.42 billion fine of Google for abuse of a dominant position.24 These are the
high-stakes contexts in which AI, and Judge AI, are being developed.
Not all entrepreneurs of the age of data support Zuckerberg and the dataists. Elon
Musk (Domonoske 2017), echoing the concern implicit in Harari’s speculations, has
warned of the existential risk general AI advances pose to humankind. As noted above
there is also the work of the Future of Life Institute in relation to AI.25 Those at the
forefront of affective computing themselves could not be termed dataists. Rosalind
Picard discussed the potential to build a computer which could replicate the human
judge—the one we outline from a psychoanalytical perspective below, of rationality
harnessed to emotion. In her seminal article she opened with the disclaimer that “I
am not proposing the business of building ‘emotional computers’” (Picard 1997, 1;
emphasis in original).
21While certainly linked to a number of laudable aims, and noting sensitivity issues in relation to,
for example, medical data, see the proposals for “Data Trusts” in the UK to better facilitate data
sharing provided by Hall and Pesenti (2017).
22For an interesting critique of modern attitudes toward knowledge sharing, see Leith (2017).
23See also Foer’s (2017) sustained critique of Big Tech; and from fiction, Eggers (2013).
24For coverage of this event, see the European Commission (2017) and Vogel (2017).
25For news see Future of Life Institute (n.d.-a).
3.8 Responding to Dataism in Relation to Judge
AI—A Psychoanalytical Model of the Legal/Judicial
Psyche
Discussions concerning the use of AI in the judicial context (theoretical and, even
more worryingly, contemporary policy proposals) have so far not pursued an in-depth
engagement with just what it is that algorithms are being touted as able to replicate:
the legal/judicial psyche (Posner calls it the “judicial mentality”), i.e., the thing which
carries out the semantic operations of legal reasoning (Posner 2010, 5). Capturing in
code the operation of the conscious, rational side of legal thinking (the “entire corpus
of legal information”, as Harvey suggests) has not only been to the fore but has been
the entire focus of attention. If that were all that needed to be coded, “judges would
be well on the road to being superseded by digitized [AI]” (Posner 2010, 5).
However, the legal reasoning of lawyers and judges (as with all human beings) is
only partly conscious; the unconscious also plays an essential role. A psychoanalyt-
ical understanding of the judicial mind suggests that the judicial function requires
at its heart the organic home of a human mind, within which contradictions, at the
heart of the judicial process—and human life—are managed. These messy, human,
contradictions and accommodations are part of the very definition of “the judge”,
enabling as they do the judge to understand both the law and the people to which
it is being applied. The Scottish judge, Lord MacMillan, writing in the late 1930s
(the decade Alan Turing was doing foundational work in computer science) made
the point that:
The judicial mind is subject to the laws of psychology like any other mind. When the judge
assumes the ermine he does not divest himself of humanity. He has sworn to do justice to
all men without fear or favour, but the impartiality which is the noble hallmark of our bench
does not imply that the judge’s mind has become a mere machine to turn out decrees; the
judge’s mind remains a human instrument working as do other minds, though no doubt on
specialised lines and often characterized by individual traits of personality, engaging or the
reverse (1937, 202).
The point at which an algorithm matches the abilities, and just as importantly the frailties, confusions, perversions, quirkiness, and uncertainties, of a human mind (enabling, for example, empathic understanding and reasoning) is the point at which we have managed to replicate that mind, not imitate or approximate it. A society capable of that may very well be beyond the need of something so basic as a judicial process; so advanced would be its sense of understanding, its mastery of thought and emotion, and its ability to avoid conflict. Indeed, in a world of such perfect comprehension,
conflict itself might become a relic.
Skepticism about the notion that a judge could be replaced entirely by a smart
machine is also linked to the rejection of strong legal formalism and a move to more
realist understandings of judicial decision-making, including acknowledging that
judges make law. As Lord Reid said in 1971:
There was a time when it was thought almost indecent to suggest that judges make law – they
only declare it. Those with a taste for fairy tales seem to have thought that in some Aladdin’s
cave there is hidden the Common Law in all its splendour and that on a judge’s appointment
there descends on him knowledge of the magic words Open Sesame. Bad decisions are given
when the judge has muddled the pass word and the wrong door opens. But we do not believe
in fairy tales any more (Reid 1972, 22).
In the context of a discussion of Judge AI, we might substitute computer code for Lord Reid's "magic words".
Rejecting legal formalism necessarily allowed for two confessions: first, that judges make law; and second, that the legally correct answer is simply the one delivered by the last judge or judges (or the majority of them in an appellate court) to hear a matter. As Reid (1972, 22) said, "the practical answer is that the law is what the judge says it is" (emphasis added); the outcomes of cases therefore turn in part on which judges hear them. There is thus legitimate variance in the results of legal and judicial reasoning, arising from different human judges understanding the same legal concepts, and applying them, in different ways. Observe, for example, instances of top courts overturning their own precedents, or cases where judges reach different conclusions as a matter passes through the appellate hierarchy (quite possibly with an overall majority of judges favouring the side which finally loses).
How can we explain such accepted judicial variance? Clearly judges are not free, as politicians are, consciously to give rein to personal or ideological convictions (under any view of the judicial role, formalist or otherwise). Yet elements of a judge's personal self clearly play a role in their decisions, and this truth is well accepted. As the British judge Sir Terence Etherton notes, whether a judge realises it or not, their decisions will be influenced by "their personal outlook based on personal experience, and their judicial philosophy" (Etherton 2010, 740).
Legal scholars (especially the realist and critical legal studies (CLS) movements),
and political scientists have proposed various theories and models to explain judicial
decision-making.26 While some of these approaches have made use of psychological
insights, none of the theorising has yet considered directly, comprehensively, how
psychoanalytical concepts, Freud’s vocabulary and grammar of the psyche, might
assist.27
26 See, for example: Baum (1998, 2008); and contributors in Klein and Mitchell (2010).
27 For some of the reasons why psychoanalytic-legal work receded after the 1960s see Weisstub et al. (2016), and in response, Sourdin and Cornes (2016). There is also a long-running debate about the value of psychoanalysis and Freud's insights per se. A good place to start for the contentions on either side is Menand (2017). It will be apparent from our discussion that we do see value in psychoanalytical concepts in assisting to better understand the work of judges.
3.9 Legal Training and Practice Builds a Separate "Legal Self" Alongside the "Personal Self"
One of Freud’s propositions was the idea that our mind (psyche) has three parts (see
Fig. 1).
The id, or in German, es (it),28 is where our most primal selves are found. It is a zone of want, urges, and desire. Were it not controlled, we, individually or as a society, would be in anarchy. Control comes from the operation of the super-ego, in German über-ich (over-I), and the ego, in German ich (I). The super-ego is what we take from our parents and society as we grow up: the learning which conditions what is regarded as appropriate or inappropriate behaviour. Finally, we have the ego. It is the aspect of our psyches from which we operate on a daily basis; the most personal, to-the-fore part of the self.
We suggest that legal training entails the development of distinct legal super-egos
and legal egos from which we learn to approach the world as a lawyer, and for some,
as a judge (see Fig. 2).
First year law students will often be told that one of their initial challenges is to
start to think about the world in a quite distinct way; that “thinking like a lawyer”
requires a very different way of looking at the world to the one they have been used to.
Learning, as Schauer (2010, 109) puts it, that "what the law requires to be done may be something other than that which a non-legal decision-maker would decide."29 The first year of legal training is the most difficult because during that year one is not only learning the law, one is also building within oneself the additional psychical apparatus of a legal super-ego and ego: one's legal self. This gives rise to the psychic growing pains of the first stage of legal study. One is training one's psyche to think from two distinct points, each aware of the other, but, when operating in "legal mode", to allow this new learnt legal self to prevail, regardless of the ordinary self's conscious reactions, even if the legal self's conclusions clash with the personal self's view of the matter.
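A toy model may help fix the idea of this twin-track structure. The sketch below, and all its names, are our own illustrative assumptions, not a claim about how minds are actually implemented:

```python
# Toy illustration (ours) of the twin-track psyche described above:
# a personal self and a learnt legal self each react to the same matter,
# each "aware" of the other, but in legal mode the legal self prevails.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Matter:
    legally_required: str   # what the law, correctly applied, demands
    felt_to_be_fair: str    # the ordinary self's untutored reaction

def decide_in_legal_mode(matter: Matter) -> Tuple[str, bool]:
    """Return the legal conclusion and whether it clashed with the personal view."""
    clash = matter.legally_required != matter.felt_to_be_fair
    # The clash is registered, not erased; on our argument, managing it is
    # part of what defines legal thinking. The legal self still prevails.
    return matter.legally_required, clash

outcome, clash = decide_in_legal_mode(Matter(
    legally_required="claim statute-barred",
    felt_to_be_fair="the claimant deserves a remedy"))
print(outcome, clash)  # -> claim statute-barred True
```

The point of the toy, of course, is what it leaves out: in a human lawyer the clash is not a boolean but an unconscious negotiation, as the following sections describe.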
3.10 The Legal Super-Ego
The legal super-ego is the repository of our legal knowledge, not just learnt legal
rules, but also the broader cultural experience of the law. Like the personal super-ego, the legal super-ego is built up over time from the thinking and views which run through generations of legal thinkers. As the United States jurist Oliver Wendell Holmes (1881, 5) put it: "The life of the law has not been logic: it has been experience….
The law embodies the story of a nation’s development through many centuries, and
it cannot be dealt with as if it contained only the axioms and corollaries of a book of
mathematics.”
28 See discussion in Freud (1923) and Frosh (2012, Chap. 7).
29 Space does not allow a wider discussion here of the nature of legal education, but for a CLS perspective see Kennedy (1982).
Fig. 1 Freud’s topographical/structural model of the mind (artwork, Carole and Richard Cornes)
During legal training, and beyond, the young lawyer will likely come across teachers who become, in a sense, the young lawyer's "legal parents." The views of such
teachers as to the workings of the law, like the law student’s domestic parents’ views
of moral life, go to building aspects of the legal super-ego. The unique culture of the
law school, reflected in its faculty and alumni, will also contribute to molding the new
Fig. 2 A topographical/structural model of the legal mind (artwork, Carole and Richard Cornes)
graduate's legal super-ego. The way the legal super-ego comes about (through training and acculturation in the legal world) means it also operates to tie the emerging lawyer's psyche into the social super-ego of the legal world. Concerning the concept
of the social super-ego, Wheelis (1959, 66; emphasis added) has argued that it:
Does not refer to a hypothetical superego of society [for our purposes here: the legal and
judicial professions] as a whole. … but only to those superego elements which are shared by
a significantly large group. When one views the superego of an individual, one sees it as the
incorporated images of parents. … The faces disappear when one views the social superego;
for there one sees only the shared elements—the ideals and faiths which that particular
society holds dear, and hence wishes to preserve and perpetuate.
Qualification through formal education does not end the young lawyer's psychical growth. They are admitted to the profession and, while the culture of jurisdictions will vary, in most common law jurisdictions admission entails an application to a court, including the filing of affidavits as to the applicant's character. The process is a sacral one, marking the emergence of a new self, and marking that self as linked, and subject, to a wider set of values. Entering legal practice, the new lawyer will then be taught by
the senior lawyers they work with. All these people will contribute, along with the
young lawyer’s formal studies, to the developing legal super-ego. The influence of
some of them will stay with the person throughout their professional life—as when
in practice, or academia, we find ourselves wondering what view these early mentors
or teachers might take on an issue we confront.
3.11 The Legal Ego
The legal ego: this is the place from which lawyers and judges learn to approach and
analyse the world in legal terms, obeying the commands of the legal super-ego. Like
the personal ego it is a place from which one speaks as one’s self, but in this case,
one’s legal self. While undergoing legal training at university, and after, the young
lawyer will also be developing their own sense of what it is to be a lawyer; what
their own legal voice will be. In building this aspect of their psyche there is perhaps
more opportunity to draw on their personal self; the natural process of psychical
development will, as it were, twin track. Their personal self will help to temper the
way they develop their legal ego, under the influence of the factors and people which
are building their legal super-ego. In another sense, the legal ego is a child not just of the external factors of law school and "legal parents", but also of the person's own super-ego and ego. Clearly this psychical structure could be expected to give rise to a range of tensions and conflicts; it is our argument that the outcome of those unconscious conflicts is one of the defining features of legal thinking.
3.12 The Place of the Pre-conscious and Unconscious as a Link Between the Legal Psyche and the Personal Psyche
Freud also developed the idea that we think from three levels of the mind: the conscious (what is present in our immediate thoughts), the pre-conscious (the things we are about to bring to consciousness as circumstances require), and the unconscious (the realm of the psyche and thought that we cannot ordinarily access), accessible only through exceptions such as our dream life, or through phenomena such as lapsus linguae, jokes, or free association.30 In The Interpretation
of Dreams, Freud said of unconscious thinking that it is a process which “could
easily be very different from the one we are aware of in ourselves in the course of
purposeful reflection accompanied by consciousness” (Freud 2008, 214).31 Ian Craib
(2001, 21)—a sociologist and psychotherapist—said of the idea of the unconscious
generally, “the suggestion that people do not necessarily know what they are doing,
that they are driven by forces beyond their consciousness or their control, that they
can be mistaken about their own motives, is a scandalous idea.”
It is certainly an idea which would have scandalised the legal formalist orthodoxy
Lord Reid (1972) rejected in The Judge as Law Maker, and it will continue to
provoke resistance from more black letter lawyers and any remaining formalists. The
implications are significant. While we might control what happens at the conscious level, we can only pretend to control what goes on below. As Freud (1991, 139) said, "unconsciousness is a regular and inevitable phase in the processes constituting
our psychical activity; every psychical act begins as an unconscious one, and it may
either remain so or go on developing into consciousness, according to whether it
meets resistance or not.”
The importance of considering the judicial unconscious (and to an extent, pre-
conscious) is this: as much as a judge honestly claims to be operating consciously
from the perspective of their legal super-ego and ego, they cannot control for the
leakage from their personal super-ego/ego which may occur beneath the membrane
of consciousness. Furthermore, in that realm there will also be influences from the
judge’s id. Judge Benjamin Cardozo (1921, 167) addressed this in explicitly psycho-
analytic terms: “deep below consciousness are other forces, the likes and dislikes, the
predilections and the prejudices, the complex of instincts and emotions and habits
and convictions which make the man, whether he be litigant or judge.”
For Schauer (2010, 114), the question, “do judges think like human beings, or
like lawyers, or like judges?… should be one of the central items on a research
agenda for the psychology of judging, but it is, surprisingly, an item that up to
now has been almost completely absent." What, then, are the functional implications, from a psychoanalytical perspective (including for any AI designer), of the inevitable influence of a judge's unconscious on how they think? We are not concerned now with conscious legal reasoning (which may well be expressible in a sophisticated algorithm). Here, we are concerned with the unconscious processes which play a part in all human judicial reasoning: how, in the unconscious mind, the personal self can affect the conclusions of the legal self.
The psychoanalytic concept of use here is that of “phantasy,” spelt within the
discipline of psychoanalysis with a “ph” rather than “f”. The novel English spelling
30 See discussion in Freud (2002, 2008).
31 For a neuro-psychoanalytical view of the unconscious see Solms (2013).
arises from a translation compromise—Freud's term in German is "Phantasie."32
Along with the interplay between legal and personal super-egos and egos discussed previously, phantasy plays a role in how judges think, not only as a way of thinking but also through the interaction of different phantasies of "justice". Freud thought
phantastic reasoning part of the “phylogenetically inherited capacity of the human
mind” (Spillius et al. 2011, 3). Isaacs (1948, 80–95) characterised phantasy as, “the
primary content of unconscious mental processes”, implicitly identifying it as of
fundamental importance: “phantasy is (in the first instance) the mental corollary,
the psychic representative, of instinct. There is no impulse, no instinctual urge or
response which is not experienced as unconscious phantasy.”
Every judge's phantasies (in Isaacs' terms, instincts) of justice, shaped by their life experiences from the moment of birth, will play a role at the unconscious level, influencing how they consciously apply legal rules. Phantastic reasoning will operate to meld the inclinations of the judge's legal super-ego and ego with those of their personal super-ego and ego, producing a synthesis which ultimately emerges to consciousness. This mental product—the conclusion on the problem before the judge—is then filtered through conscious judicial reasoning and articulated consistently with learnt legal rules. This "internal [i.e., unconscious] dialogue of reason and passion, does not taint the judicial process, but is in fact central to its vitality" (Brennan 1988, 3). United States Supreme Court Justice Brennan's references to the importance of passion in judicial decision-making echo the concept of phantastic reasoning. He goes on: "by 'passion' I mean the range of emotional and intuitive responses to a given set of facts or arguments, responses which often speed into our consciousness far ahead of the lumbering syllogisms of reason" (1988, 9). Foreshadowing the need for the Judge AI project to take care not to revert to the fairy tales of formalism, he also notes that "an appreciation for the dialogue between head and heart is precisely what was missing from the formalist conception of judging" (1988, 9).
The unconscious, and unavoidable (because innately human), reality of phantastic reasoning means that the judge will still honestly be able to say they consciously put their personal preferences to one side and simply followed the law. They cannot, however, control what happens in their unconscious, where phantasies (instincts) of justice, and phantastic and instinctual reasoning, play a role. Phantastic mixing in the
unconscious ensures that legal reasoning is tethered to a judge’s humanity; it ensures
that the “quality of mercy is not strained”, and that the application of legal rules is
seasoned by the judge’s life experience.33 Lord Kerr of the UK Supreme Court comes
close to acknowledging this: “In the course of one’s legal career, although one has to
maintain a certain professional detachment, occasions arise where you feel strongly
that a particular person’s interests requires to be vindicated” (BBC4 2011, ~17’).
The fact that personal experiences do influence the law as enunciated by judges via
the operation of phantasy raises a practical point for the legal system and any Judge
AI project: a more diverse bench (in terms of judicial backgrounds and experience;
32 For further elucidation see discussion of the concepts of "fantasy" and "phantasy" in Brenner (2003).
33 See Portia's speech on mercy in Shakespeare's The Merchant of Venice, Act IV, Sc 1, ll 2125–46.
the bench in the UK is still remarkably homogeneous) should increase confidence in the judicial process, as a wider range of concepts and phantasies of justice is given expression. Further, we might get substantively "better" law with a more diverse
bench, because the decisions which make up the common law would be based on a
richer range of experience. As Cardozo (1921, 177) argued, “The eccentricities of
judges balance one another. …[O]ut of the attrition of diverse minds there is beaten
something which has a constancy and uniformity and average value greater than its
component elements.”
Finally, as Cornes and Henaghan (forthcoming) argue, the concept of the judge
is not exemplified solely by the individuals who fulfill that role. The notion of the
independent and impartial judge is one collectively agreed to and understood via the
mechanisms of the social unconscious. That psychic system operates to co-construct
(as between the legal profession and wider society) our collective understanding of
the judge we trust in the courtroom. The implication here for the Judge AI project
is that “a judge” is not a single data processing unit to be adequately represented in
code. Judges are intimately and psychically linked to the wider social system within
which they operate.34
For any Judge AI project the problem thus arises: how to code for the influence of a similarly varied range of personal, human, and societal inputs, in addition to reflecting legal rules and principles? The problem is especially difficult because such personal inputs, emanating from the unconscious of human judges and of society, are by definition not consciously knowable and therefore not translatable into code. As
Lord Phillips has said of judgment writing: “I sometimes start writing a judgment
and I don’t know where I’m going to get to at the end of it” (BBC4 2011, ~50’).
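A final sketch may make the coding problem vivid. In the hypothetical signature below (ours, not any real system's), every parameter is consciously specifiable and hence, in principle, codable; nothing in the sketch corresponds to the unconscious inputs just described, not because we chose to omit them but because, being unknowable, they cannot be enumerated:

```python
# Hypothetical sketch (ours) of what a Judge AI's inputs can and cannot be.
# "Rules" here are callables mapping explicit case facts to a yes/no answer.

from typing import Callable, Dict

Rule = Callable[[dict], bool]

def judge_ai(rules: Dict[str, Rule], case_facts: dict) -> str:
    """Conscious legal reasoning only: explicit rules over explicit facts.

    Note what the signature cannot contain: no parameter carries the
    judge's personal super-ego/ego, phantasies of justice built up over a
    lifetime, or the social unconscious. Those influences are not
    consciously knowable, so they cannot be enumerated as inputs.
    """
    for name, rule in rules.items():
        if rule(case_facts):
            return f"outcome under {name}"
    return "no applicable rule found"

rules = {"limitation bar": lambda facts: not facts.get("filed_within_limit", True)}
print(judge_ai(rules, {"filed_within_limit": False}))
# -> outcome under limitation bar
```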
While, therefore, many aspects of the judicial task may ultimately be captured
in code, the human heart of the judicial process, being a combination of conscious
and currently unknowable unconscious thought, remains quite literally beyond the
comprehension of the most talented programmer. And, as we suggest above, the point at which technology and society are so sophisticated as to allow a complete understanding of all aspects of the human psyche is likely to be a point at which conflict itself, arising as it does in part from clashes of understanding (of ourselves and of others), is a spent force in human history.
4 Conclusion
The roles of those involved in the delivery of justice and judging are changing rapidly.
Newer, more disruptive technologies have already reshaped some aspects of the justice system and the business of litigation. We expect courts will continue to build and
34 The role of the social unconscious and confidence in "the judge" is set out in detail in Cornes and Henaghan (forthcoming).
extend online platforms and systems that support filing, referral and other activities.35
Further, we may see judicial passions (always reasoned) being monitored by trusted, dispassionate Judge Co-Bots.
However, developments thus far do not lead inevitably to technology entirely
taking over the judicial function or role. Whilst AI can contribute to some adjudicative
functions, the issues that emerge are whether this is appropriate per se, and under
what circumstances human judges should hand over aspects of their adjudicative
functions to AI. Whilst some conjecture that AI may one day completely replace human judges, such suggestions could only be credible if the automation were able to replicate a human mind. We have argued that AI does not appear likely, at any point soon, to be able to replicate the necessary and essential humanity of a human judge, and we suggest further that to do so would be undesirable.
It is important to understand that judges do far more than render judgments. They
manage cases, provide a responsive and human framework within which conflict is
contained, settle cases, and manage court systems and processes as well as playing
an important public and educative function. Further, clearly the contributions of a
judge’s unconscious mind, the exercise of discretion, and constructive judicial vari-
ance are three of the factors which make human judicial decision-making acceptable
to litigants and legitimate within society. Issues also arise about whether AI processes
possess the legal authority to make judicial decisions in place of a human judge.
Policy approaches that increasingly result in the transfer of lower value matters
or categories of disputes (such as insurance disputes) to administrative tribunals
and commissions suggest that judicial work is likely to continue to change over the
next twenty years. It is equally probable that AI will play a more prominent role in
administrative and other decision-making contexts before being used in courts. This
all means that the impact in respect of Judge AI is more likely to be significant, at
least initially, in relation to smaller civil claims as AI support spreads throughout the
administrative decision-making arena.
Drawing the boundaries of acceptable Judge AI (should it be extended for use in
larger claims than it is currently used for?) requires consideration of ethical questions
as well as questions about who produces the algorithms of Judge AI and the extent
to which discretion and oversight will be maintained within the judiciary.36 It is
unhelpful to conceive of Judge AI as disconnected from the work of human judges.
Rather, as Autor, for example, argues, while humans may not be replaced by AI, human intelligence may be supplemented by technological advances (Autor 2015). The requirement for interaction between judges and AI systems will in turn give rise
to issues about judicial appointment, workload, and retention, as well as broader
questions about how judges contribute to society and the importance of humanly
responsive judging. There is a need to better understand and explore the impact on
35 See e.g., New South Wales Department of Justice (n.d.) and Whitbourn (2015) for further detail on the new online court websites in New South Wales. The Federal Court of Australia has had an e-courtroom and expanding online lodgement services for some years; see Federal Court of Australia (n.d.).
36 Issues about robot ethics are currently the subject of some limited discussion. See Devlin (2016).
people of a human judge who deals sensitively with their concerns. Whilst Judge
AI may not be able to replicate the human judicial mind, clearly technology will
continue to influence the evolving role of the responsive human judge in legal systems
worldwide.
References
Agthe M, Spörrle M, Maner J (2011) Does being attractive always help? Positive and negative
effects of attractiveness on social decision making. Pers Soc Psychol Bull 37:1042–1054
Aletras N, Tsarapatsanis D, Preotiuc-Pietro D, Lampos V (2016) Predicting judicial decisions of
the European Court of Human Rights: a natural language processing perspective. Peer J Comput
Sci 2:e93. https://doi.org/10.7717/peerj-cs.93. Accessed 30 May 2017
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. Propublica, May 23. https://www.
propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 30 Nov
2017
Ashley KD (2017) Artificial intelligence and legal analytics—new tools for the digital age. Cam-
bridge University Press, Cambridge
Autor DH (2015) Why are there still so many jobs? The history and future of workplace automation.
J Econ Perspect 29(3):3–30
Barendrecht M (2017) Rechtwijzer: why online supported dispute resolution is hard to implement.
HIIL Innovating Justice. http://www.hiil.org/insight/rechtwijzer-why-online-supporte-dispute-re
solution-is-hard-to-implement. Accessed 13 July 2017
Bathurst T (2015) iAdvocate v Rumpole: who will survive? An analysis of advocates’ ongoing rel-
evance in the age of technology. Paper presented at 2015 Australian Bar Association Conference,
Boston, 9 July 2015
Baum L (1998) The puzzle of judicial behaviour. University Michigan Press, Ann Arbor
Baum L (2008) Judges and their audiences—perspectives on judicial behaviour. Princeton Univer-
sity Press, Princeton
BBC4 (2011) The highest court in the land: justice makers. https://www.youtube.com/watch?v=P
ZtYENfNa7k. Accessed 3 Feb 2018
Bennett H, Broe GA (2007) Judicial neurobiology, Markarian synthesis and emotion: how can the
human brain make sentencing decisions? Crim Law J 31(2):75–90
Benyekhlef K, Vermeys N (2017) ODR and the (BC) courts. Slaw. http://www.slaw.ca/2012/05/2
8/odr-and-the-bc-courts/. Accessed 19 June 2017
Bickel E, van Dijk M, Giebels E (2015) Online legal advice and conflict support: a Dutch experience.
Report, University of Twente, March 2015
Brennan WJ (1988) Reason, passion, and the progress of the law. Cardozo Law Rev 10:3–23
Brenner A (2003) Fantasy. University of Chicago. http://csmt.uchicago.edu/glossary2004/fantasy.
htm. Accessed 4 Dec 2017
Brooks D (2011) The social animal. Random House, New York
Brooks M (2017) Artificial ignorance. New Scientist, 7 October
Cardozo B (1921) The nature of the judicial process. Yale University Press, Connecticut
Carniero D, Novais P, Andrade F, Zeleznikow J, Neves J (2014) Online dispute resolution: an
artificial intelligence perspective. Artif Intell Rev 41(2):211–240
Chaphalkar NB, Iyer KC, Patil SK (2015) Prediction of outcome of construction dispute claims
using multilayer perceptron neural network model. Int J Project Manage 33(8):1827–1835
Chiang J (2017) ACC accused of using model to get people off its books. Radio New Zealand,
15 September. http://www.radionz.co.nz/news/political/339513/acc-accused-of-using-model-to-
get-people-off-its-books. Accessed 30 Nov 2017
Chin D (2012) Sentencing: a role for empathy. Univ Penn Law Rev 160(6):1561–1584
Chisholm R (2009) Values and assumptions in judicial cases. Paper presented at the National
Judicial College Conference, Judicial Reasoning—Art or Science, Canberra, 7–8 February
2009. https://njca.com.au/wp-content/uploads/2013/07/Values-and-Assumptions-in-Judicial-De
cisions-Chisholm.pdf. Accessed 30 May 2017
Civil Resolution Tribunal (2018) How the CRT works. https://civilresolutionbc.ca/how-the-crt-wo
rks/. Accessed 4 Feb 2018
Colby TB (2012) In defense of judicial empathy. Minn Law Rev 96:1944–2015
Cornes R, Henaghan M (forthcoming) Believing in the judge—understanding and defending the
role of the fair minded and informed observer in bias jurisprudence from a psychodynamic
perspective. Otago Law Rev
Craib I (2001) Psychoanalysis—a critical introduction. Polity, Cambridge
Danziger S, Levav J, Avnaim-Pesso L (2011) Extraneous factors in judicial decisions. Proc Natl
Acad Sci USA 108(17):6889–6892
Devlin H (2016) Do no harm, don’t discriminate: official guidance issued on robot ethics. The
Guardian (online), 18 September. https://www.theguardian.com/technology/2016/sep/18/officia
l-guidance-robot-ethics-british-standards-institute?CMP=share_btn_tw. Accessed 30 May 2017
Domonoske C (2017) Elon Musk warns governors: artificial intelligence poses ‘existential risk’.
The Two Way, 17 July. http://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-m
usk-warns-governors-artificial-intelligence-poses-existential-risk. Accessed 4 Dec 2017
Eggers D (2013) The circle. Penguin, UK
Elias S (2017) Managing criminal justice. Address given at Criminal Bar Association Conference,
University of Auckland Business School, Auckland, 5 August 2017. https://www.criminalbar.or
g.nz/cba_conference_2017. Accessed 30 Nov 2017
Etherton T (2010) Liberty, the archetype and diversity: a philosophy of judging. Public Law 727
European Commission (2017) Antitrust: commission fines Google e2.42 billion for abusing dom-
inance as search engine by giving illegal advantage to own comparison shopping service. Press
Release (27 June 2017). http://europa.eu/rapid/press-release_IP-17-1784_en.htm. Accessed 4
Dec 2017
Federal Court of Australia (n.d.) eCourtroom. http://www.fedcourt.gov.au/online-services/ecourtr
oom. Accessed 30 May 2017
Foer F (2017) World without mind—the existential threat of big tech. Jonathan Cape, London
Frank J (1930) Law and the modern mind. Brentano’s, New York
Frank J (1949) Courts on trial—myth and reality in American justice. Princeton University Press,
Princeton
Freud S (1921) Mass psychology and analysis of the “I”. In: Mass psychology and other writings
(2004). Penguin, London
Freud S (1923) The ego and the id. In: Standard edition of the complete psychological works of
Sigmund Freud, vol XIX. Hogarth Press, London
Freud S (1991) A note on the unconscious. In The essentials of psycho-analysis—the definitive
collection of Sigmund Freud’s writing, selected by Anna Freud. Penguin, London
Freud S (2002) The psychopathology of everyday life (Penguin Modern Classics). Penguin Classics,
London
Freud S (2008) The interpretation of dreams (Oxford World’s Classics). Oxford University Press,
Oxford
Frosh S (2012) A brief introduction to psychoanalytic theory. Palgrave, Basingstoke
Future of Life Institute (n.d.-a) Artificial intelligence news. https://futureoflife.org/ai-news/.
Accessed 4 Dec 2017
Future of Life Institute (n.d.-b) Asilomar AI principles. https://futureoflife.org/ai-principles/.
Accessed 30 Nov 2017
Gaitenby A (2004) Online dispute resolution. The Internet Encyclopaedia. https://doi.org/10.1002/
047148296X.tie129
Hall W, Pesenti J (2017) Growing the artificial intelligence industry in the UK, a report for the
UK Government. https://www.gov.uk/government/publications/growing-the-artificial-intelligen
ce-industry-in-the-uk. Accessed 4 Dec 2017
Harari YN (2015) Homo Deus: a brief history of tomorrow. Harvill Secker, London
Harvey D (2016) From Susskind to Briggs: online court approaches. J Civil Litigation Pract
5(2):84–93
Hoekstra v. H.M. Advocate (No.2) (2000) S.L.T. 605
Holmes OW (1881) The common law. http://www.general-intelligence.com/library/commonlaw.
pdf. Accessed 4 Dec 2017
Hyde J (2017) Prison and courts bill scrapped. Law Soc Gazette, 20 April. https://www.lawgazette.
co.uk/news/breaking-prisons-and-courts-bill-scrapped/5060715.article. Accessed 13 July 2017
In re JP Linahan (1943) 138 F.2d 650 (2d Cir). Justia. https://law.justia.com/cases/federal/appellate-courts/F2/138/650/1481751/. Accessed 4 Feb 2018
Isaacs S (1948) The nature and function of phantasy. Int J Psychoanal 73–93
Johnstone R (2016) HM Courts and Tribunals Service’s Susan Acland-Hood on dig-
ital courts, making big changes and her Whitehall hammock. Civil Service World,
6 October. https://www.civilserviceworld.com/articles/interview/hm-courts-and-tribunals-servic
e%E2%80%99s-susan-acland-hood-digital-courts-making-big. Accessed 4 Dec 2017
Kennedy D (1982) Legal education and the reproduction of hierarchy. J Leg Educ 32:591–615
Kirby M (1999a) The future of courts—do they have one? J Judicial Adm 8:383–391
Kirby M (1999b) Judging: reflections on the moment of decision. Aust Bar Rev 18:4–22
Klein DE, Mitchell G (eds) (2010) The psychology of judicial decision making. Oxford University
Press, Oxford
Lanchester J (2017) You are the product. Lond Rev Books 39:3–10
Lawyal Solicitors (2016) About us. https://lawyal.com.au/about-us. Accessed 31 Oct 2017
Le Sueur A, Cornes R (2000) What do the top courts do? Curr Leg Probl 53(1):53–97
Legg M (2016) The future of dispute resolution: online ADR and online courts. Australas Dispute
Resolut J 27:227–235
Leiter B (2010) Legal formalism and legal realism: what is the issue? Leg Theory 16(2):111–133
Leith S (2017) Nothing like the truth. TLS, 18 August. https://www.the-tls.co.uk/articles/public/p
ost-truth-sam-leith/. Accessed 25 Aug 2017
Levin S (2016) A beauty contest was judged by AI and the robots didn’t like dark skin. The Guardian
(online), 9 September. https://www.theguardian.com/technology/2016/sep/08/artificial-intellige
nce-beauty-contest-doesnt-like-black-people?CMP=share_btn_tw. Accessed 30 May 2017
Lind EA, Tyler TR (1988) The social psychology of procedural justice. Plenum Press, New York
Liptak A (2017) Sent to prison by a software program’s secret algorithms. New York Times (online),
May 1. https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-program
s-secret-algorithms.html?smid=tw-share&_r=0. Accessed 30 May 2017
Loomis v Wisconsin (2017) SCOTUSblog. http://www.scotusblog.com/case-files/cases/loomis-v-wisconsin/. Accessed 30 Nov 2017
Lopez I (2016) The early years begin for AI’s transformation of law. Legaltech News, 5 Octo-
ber. http://www.legaltechnews.com/id=1202769286334/The-Early-Years-Begin-for-AIs-Transf
ormation-of-Law?cmp=share_twitter&slreturn=20160912054113. Accessed 29 May 2017
Lynn B, Stoller M (2017) How to stop Google and Facebook from becoming even more powerful.
The Guardian, 2 November. https://www.theguardian.com/commentisfree/2017/nov/02/faceboo
k-google-monopoly-companies. Accessed 6 Feb 2018
MacCrimmon M, Tillers P (eds) (2002) The dynamics of judicial proof: computation, logic, and
common sense. Physica-Verlag, Heidelberg
MacMillan L (1937) Law and other things. Cambridge University Press, Cambridge
MacQueen HL, Wortley S (1998) Human rights, the judges and the new Scotland. Scots Law News,
18 October. http://www.sln.law.ed.ac.uk/1998/10/18/78-human-rights-the-judges-and-the-new-s
cotland/. Accessed 4 Dec 2017
Mason K (2001) Unconscious judicial prejudice. Aust Law J 75:676–687
Menand L (2017) The stone guest: can Sigmund Freud ever be killed? The New Yorker, 28 August,
75
Mills M (2016) Artificial intelligence in law: the state of play 2016 (Part 1). Legal Executive
Institute, 23 February. http://legalexecutiveinstitute.com/artificial-intelligence-in-law-the-state-
of-play-2016-part-1/. Accessed 29 May 2017
Ministry of Justice of the Government of the United Kingdom and Her Majesty's Courts and Tribunals Service (2016) Transforming our justice system. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/553261/joint-vision-statement.pdf. Accessed 30 May 2017
MyLawBC (n.d.) Separation, divorce & family matters. http://mylawbc.com/paths/family/.
Accessed 29 May 2017
Neill D (2013) Using artificial intelligence to improve hospital inpatient care. IEEE Intell Syst
28(2):92–95
New South Wales Department of Justice (n.d.) NSW online registry—courts and tri-
bunals. https://onlineregistry.lawlink.nsw.gov.au/content/nsw-supreme-district-local-courts-onli
ne-registry. Accessed 29 May 2017
Nine to Noon (2017) Is ACC “passing the buck” with prediction based evaluations? Radio New
Zealand, 26 September. http://www.radionz.co.nz/national/programmes/ninetonoon/audio/2018
59974/is-acc-passing-the-buck-with-prediction-based-evaluations. Accessed 30 Nov 2017
Northern Ireland Courts and Tribunals Service (2011) Small claims online: a users guide. http://
www.courtsni.gov.uk/SiteCollectionDocuments/Northern%20Ireland%20Courts%20Gallery/O
nline%20Services%20User%20Guides/Small%20Claims%20Online%20User%20Guide.pdf.
Accessed 30 May 2017
Perry M (2017) iDecide: administrative decision-making in the digital world. Aust Law J 91:29–34
Picard R (1997) Affective computing. MIT Media Lab, Massachusetts
Posner R (2010) How judges think. Harvard University Press, Cambridge, MA
Quintanilla V (2012) Different voices: the role of gender when reasoning about the letter versus
spirit of the law. Presentation at the Law and Society Conference, Honolulu, June 2012
Ramesh AN, Kambhampati C, Monson JRT, Drew PJ (2004) Artificial intelligence in medicine.
Ann R Coll Surg Engl 84:334–338
Reid L (1972) The judge as lawmaker. J Soc Public Teachers Law 12(1):22–29
Schatsky D, Muraskin C, Gurumurthy R (2014) Demystifying artificial intelligence: what business
leaders need to know about cognitive technologies. University Press, Deloitte
Schauer F (2010) Is there a psychology of judging? In: Klein DE, Mitchell G (eds) The psychology
of judicial decision-making. Oxford: Oxford University Press, pp 103–121
Schubarth C (2016) Y combinator startup uses big data to invest in civil lawsuits. Silicon Valley
Bus J, 24 August. http://www.bizjournals.com/sanjose/blog/techflash/2016/08/y-combinator-sta
rtup-uses-big-data-to-invest-in.html. Accessed 29 May 2017
Searle J (2002) Can computers think? In: Chalmers D (ed) Philosophy of mind: classical and
contemporary readings. Oxford University Press, Oxford, pp 669–675
Shakespeare W (16th Century) The merchant of Venice, Act IV, Sc 1, ll 2125–46
Silver D, Huang A, Maddison C, Guez A, Sifre L, Van Den Driessche G, Schrittwieser J (2016)
Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489
Sinai Y, Alberstein M (2016) Expanding judicial discretion: between legal and conflict considera-
tions. Harvard Negot Law Rev 21:221–277
Smartsettle One+ (2017) Smartsettle. http://www.smartsettle.com/home/products/smartsettle-one/.
Accessed 31 Oct 2017
Smith M (2016) In Wisconsin, a backlash against using data to foretell defendants' futures. New
York Times (online), 22 June. http://www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-a
gainst-using-data-to-foretell-defendants-futures.html?_r=0. Accessed 30 May 2017
Soars J (2016) Draft procedural order for use of online dispute resolution technologies in ACICA
rules arbitrations. The Australian Centre for International Commercial Arbitration. https://acica.o
rg.au/wp-content/uploads/2016/08/ACICA-online-ADR-procedural-order.pdf. Accessed 31 Oct
2017
Solms M (2013) “The unconscious” in psychoanalysis and neuropsychology. In: Akhtar S, O’Neill
MK (eds) On Freud’s “The unconscious”. Karnac Books, London, pp 101–118
Solon O (2017) Killer robots? Musk and Zuckerberg escalate row over dangers of AI. The Guardian
Australia, July 2017. https://www.theguardian.com/technology/2017/jul/25/elon-musk-mark-zu
ckerberg-artificial-intelligence-facebook-tesla. Accessed 30 Nov 2017
Sourdin T (2012) Decision making in ADR: science, sense and sensibility. Arbitrat Mediat
31(1):1–14
Sourdin T (2015a) Justice and technological innovation. J Judicial Adm 25:96–105
Sourdin T (2015b) The role of the courts in the new justice system. Yearb Arbitrat Mediat 7:95–116
Sourdin T (2016) Alternative dispute resolution, 5th edn. Thomson Reuters, Pyrmont
Sourdin T (2018, forthcoming) Judge v Robot? Artificial intelligence and judicial decision making.
U New South Wales Law J 41(4)
Sourdin T, Cornes R (2016) Implications for therapeutic judging (TJ) of a psychoanalytical per-
spective to the judicial role. Int J Law Psychiatry 48:8–14
Sourdin T, Zariski A (eds) (2013) The multi-tasking judge: comparative judicial dispute resolution.
Thomson Reuters
Spillius EB, Milton J, Garvey P, Couve C, Steiner D (2011) The new dictionary of Kleinian thought.
Taylor & Francis, East Sussex
State v Loomis (2016) 881 N.W.2d 749. Leagle. https://www.leagle.com/decision/inwico2016071
3i48. Accessed 4 Feb 2018
State v Loomis (2017) Docket for 16-6387. Supreme Court of the United States. https://www.supr
emecourt.gov/docketfiles/16-6387.htm. Accessed 4 Feb 2018
Surden H (2014) Machine learning and law. Wash Law Rev 89:87–115
Susskind D, Susskind R (2015) The future of the professions. Oxford University Press, Oxford
Tamburro M (2012) The future of predictive coding—rise of the evidentiary expert. IMS Expert-
Services. http://technology.findlaw.com/electronic-discovery/the-future-of-predictive-coding-ri
se-of-the-evidentiary-expert-.html. Accessed 30 May 2017
Tashea J (2016) New York considers “Textalyzer” bill to allow police to see if drivers were texting
behind the wheel. ABA J, 1 October. http://www.abajournal.com/magazine/article/newyork_dis
tracted_driving_textalyzer_bill/. Accessed 30 May 2017
Therapeutic Goods Act 1989 (Cth)
Thibaut J (1978) Procedural justice: a psychological analysis. Duke Law J 6:1289–1296
Thibaut J, Walker L (1975) Procedural justice: a psychological analysis. Erlbaum, New Jersey
Tierney J (2011) Do you suffer from decision fatigue? New York Times (online), 17
August. http://www.nytimes.com/2011/08/21/magazine/do-you-suffer-from-decision-fatigue.ht
ml?_r=2&pagewanted=1. Accessed 30 May 2017
Tirrell M (2017) From coding to cancer: how AI is changing medicine. CNBC, 11 May. http://www.
cnbc.com/2017/05/11/from-coding-to-cancer-how-ai-is-changing-medicine.html. Accessed 30
May 2017
Tyler Technologies (2017) Modria. https://www.tylertech.com/solutions-products/modria.
Accessed 31 Oct 2017
University of Otago (n.d.) Artificial intelligence and law in New Zealand. http://www.cs.otago.ac.
nz/research/ai/AI-Law/index.html. Accessed 30 Nov 2017
Van den Bos K, Van der Velden L, Lind A (2014) On the role of perceived procedural justice
in citizens’ reactions to government decisions and the handling of conflicts. Utrecht Law Rev
10(4):1–26
Victoria University (n.d.) Professor John Zeleznikow. http://www.vu.edu.au/contact-us/john-zelez
nikow. Accessed 30 May 2017
Vogel K (2017) Google critic ousted from think tank funded by the Tech Giant. New York Times
(online), August 2017. https://www.nytimes.com/2017/08/30/us/politics/eric-schmidt-google-ne
w-america.html. Accessed 4 Dec 2017
Warren M (2015) Embracing technology: the way forward for the courts. J Judicial Adm 24:227–235
Weisstub DN, Pitz A, Burt RA (2016) Introduction—Robert A. Burt. Int J Law Psychiatry 48:1–7
Wheelis A (1959) Psychoanalysis and identity. Psychoanal Rev 46A:65–74
Whitbourn M (2015) NSW government trials online court for civil cases in Sydney. Sydney Morning
Herald (online), 10 August. http://www.smh.com.au/nsw/nsw-government-trials-online-court-f
or-civil-cases-in-sydney-20150807-giuig2.html. Accessed 30 May 2017
Wikipedia (2017) Rosalind Picard. https://en.wikipedia.org/wiki/Rosalind_Picard. Accessed 14
July 2017
Wilson T, Gilbert D (2008) Explaining away: a model of affective adaptation. Perspect Psychol Sci
3(5):370–386
Zeleznikow J, Bellucci E (n.d.) Family Winner: integrating game theory and heuristics to provide
negotiation support. http://www.jurix.nl/pdf/j03-03.pdf. Accessed 30 May 2017
Zeleznikow J, Bellucci E, Schild UJ, Mackenzie G (2007) Bargaining in the shadow of the law—us-
ing utility functions to support legal negotiation. Paper presented at the 11th International Con-
ference on Artificial Intelligence, Stanford, California, 4–8 June 2007
Tania Sourdin is the Dean of the University of Newcastle Law School and was previously the Foundation Chair and Director of the Australian Centre for Justice Innovation (ACJI) at Monash University in Australia. Professor Sourdin has led national research projects and produced important recommendations for ADR and justice reform. In the past two decades, she has conducted qualitative and quantitative research projects into aspects of the dispute resolution and justice systems in 12 Courts and Tribunals and six external dispute resolution schemes. Other
research has focussed on justice innovation, technology, delay and systemic reforms. Professor
Sourdin is the author of a number of books (the 5th Edition of her book ‘Alternative Dispute
Resolution’ was released in February 2016), articles and papers, and has published and presented
widely on a range of topics including ADR, justice innovation, justice issues, mediation, conflict
resolution, collaborative law, artificial intelligence, technology and organisational change.
Richard Cornes is a Senior Lecturer in Public Law at the University of Essex, England. He is
also a Visiting Fellow at the University of Otago, New Zealand, Centre for Legal Issues, and an
Associate Member of Landmark Chambers, London. Dr. Cornes graduated BA/LLB(Hons) from
Auckland University (English and Law) in 1992 and the following year was admitted as a Barrister
and Solicitor of the High Court of New Zealand. After practicing at Simpson Grierson, Auckland,
and studying at the University of Melbourne (Grad Dip International Law), he moved to the UK
in 1997 where he worked on aspects of the new Labour Government’s constitutional reforms as a
Senior Research Fellow at UCL's Constitution Unit. Since 2000 he has been on the faculty at Essex Law School, where he runs the core modules in public law. His research and consultancy interests are
focused on judicial branch matters, with a specific focus on the application of psycho-social and
organisational dynamic approaches.