LAW, TECHNOLOGY AND HUMANS https://lthj.qut.edu.au/
Volume 1 (1) 2019 https://doi.org/10.5204/lthj.v1.i1.1324
This work is licensed under a Creative Commons Attribution 4.0 International Licence. As an open access journal, articles are free to use with proper attribution. ISSN: 2652-4074 (Online)
© The Author/s 2019
The Ethical AI Lawyer: What is Required of Lawyers When They Use Automated Systems?
Justine Rogersᵃ
University of New South Wales, Australia
Felicity Bellᵇ
University of New South Wales, Australia
Abstract

This article focuses on individual lawyers’ responsible use of artificial intelligence (AI) in their practice. More specifically, it examines the ways in which a lawyer’s ethical capabilities and motivations are tested by the rapid growth of automated systems, both to identify the ethical risks posed by AI tools in legal services, and to uncover what is required of lawyers when they use this technology. To do so, we use psychologist James Rest’s Four-component Model of Morality (FCM), which represents the necessary elements for lawyers to engage in professional conduct when utilising AI. We examine issues associated with automation that most seriously challenge each component in context, as well as the skills and resolve lawyers need to adhere to their ethical duties. Importantly, this approach is grounded in social psychology. That is, by looking at human ‘thinking and doing’ (i.e., lawyers’ motivations and capacity when using AI), this offers a different, complementary perspective to the typical, legislative approach in which the law is analysed for regulatory gaps.
Keywords: Lawyers; legal practice; professional ethics; Artificial Intelligence; AI
Introduction
Artificial intelligence (AI) is profoundly changing the field in which lawyers work, one that has already undergone enormous change. It is also predicted that, as in other professional domains, AI will come to exceed the performance of practitioners[1] and potentially replace them altogether. In legal services, AI may be used to create automated advice platforms, generate drafts and undertake review of documents for discovery and due diligence—perhaps foreshadowing a ‘complete overhaul’ in how these services are provided.[2] Speculation about AI’s further potential to automate elements of legal practice has ramped up in recent years,[3] as even wider applications, such as ‘predicting’ the outcomes of cases, continue to be investigated. At the same time, it is claimed that AI will make lawyers’ practice more interesting through automation of ‘grunt work’, and further offer greater commercial opportunities in new legal problems it raises.[4]
ᵃ Senior Lecturer and Deputy Director, Future of Law and Innovation in the Profession (FLIP) Research Stream, University of New South Wales (UNSW) Law.
ᵇ Research Fellow, FLIP Research Stream, UNSW Law.
This research was undertaken with the support of the Law Society of New South Wales (NSW) FLIP Research Stream at UNSW Law. The
authors would like to thank the five anonymous reviewers for their thoughtful comments and Deborah Hartstein for her research assistance.
[1] Gasser, “The Role of Professional Norms”; Cabitza, “Breeding Electric Zebras.”
[2] Alarie, “How Artificial Intelligence Will Affect the Practice of Law,” 123.
[3] Chester, “How Tech is Changing the Practice of Law.”
Legal AI may be marketed or purposed for consumer (or client) application without need for a lawyer intermediary, but many products are intended for use by lawyers and their organisations.[5]

A growing body of literature singles out AI, especially machine learning (ML), as a critical target of regulation, including in the law context.[6] Reasons for concern include its capacity for autonomous and unpredictable action, its lack of reliability (or how to be certain that a program performs correctly, without bias or error, especially in the absence of certification) and its opacity (or lack of transparency).[7] Further, regulation of automated systems is seen as especially vital where professionals, including lawyers, use AI to supplement or even replace elements of their work.[8] This is due to the increased vulnerability of consumers of professional services and the important social institutions that may be diminished (or, in some areas, possibly enhanced). In the case of law, these include universal access, the rule of law and the administration of justice. Further, and noting again its potential opportunity, AI threatens professional jurisdiction where ‘tech people’ – the software developers, knowledge engineers and entrepreneurs – can create and deliver legal services with a lesser regulatory burden than lawyers. This is important because the special regulatory demands made of professionals – for high competence and ethicality – have traditionally been in exchange for certain market protections.[9]

Bennett Moses has argued that, to avoid social harms such as those implied in the outline above, regulators wishing to influence technological design need to act at an early stage when the situation is more malleable.[10] Regulators have both ‘hard’ and ‘soft’ regulatory options,[11] which are, in turn, interdependent.[12] Nevertheless, regulators appear to be leaving new technology such as AI to general or existing regulatory regimes on several potential bases: that there are no regulatory gaps; that responding to a particular technology is not feasible until there exists the need to act on multiple technologies; or, perhaps most likely, due to a lack of information about the technology’s likely impact.[13] To this, Tranter charges the legal profession with being especially unimaginative when it comes to responding to technology; for him, most legal scholars in the field look narrowly at existing provisions to spot any gaps and avoid the task of law reform.[14] Presumably for all these and other reasons, lawyers’ use of AI remains subject to formal rules that apply generally and typically by default—that is, the legislation and ‘the law of lawyering’. This means that individual lawyers shoulder the regulatory burden when using AI in practice and in the absence of ‘express provision, or ethical guidance’.[15]

Taking seriously Tranter’s critique of legal scholarship for its focus on formal rules (with little attention paid to humans and technology[16]), and noting that the regulation of technology more likely targets the behaviour or ‘social conduct’ of individuals than the technology itself,[17] this article focuses on individual lawyers and their responsible use of AI in practice. More specifically, it examines the ways in which a lawyer’s ethical capabilities and motivations are tested by the rapid growth of automated systems, in order to identify the ethical risks posed by AI and to uncover what is being required of lawyers when they use this technology. This approach to regulatory legitimacy (in the sense of the rules’ knowability, clarity and equality)[18] and efficacy (‘fit for purpose’, adequacy) is grounded in social psychology. That is, by examining human ‘thinking and doing’ (lawyers’ motivations and capacity when handling AI), this offers a different, complementary perspective to the typical, legislative approach in which the formal law alone is analysed for regulatory gaps.[19]
[4] Semmler, “Artificial Intelligence.”
[5] For example, Armstrong, “0 to 111.”
[6] Crawford, The AI Now Report; Dawson, Australia’s Ethics Framework.
[7] Pasquale, “Restoring Transparency”; Kroll, “Accountable Algorithms.”
[8] Cabitza, “Breeding Electric Zebras.”
[9] Rogers, “Large Professional Service Firm.”
[10] Bennett Moses, “Regulating in the Face of Sociotechnical Change.”
[11] The main distinctions between soft and hard law are binding/non-binding, effective/ineffective at implementation, and delegated authority to a third party to oversee and enforce/no delegated authority: Shaffer, “Hard vs Soft Law,” 712–715.
[12] Karlsson-Vinkhuyzen, “Global Regulation,” 608.
[13] This challenge is known as the ‘Collingridge dilemma’: at an early stage of a technology’s development, regulation is problematic due to a lack of knowledge about its impacts; by the later stage at which that knowledge exists, the technology is more entrenched and regulation is therefore expensive to implement: Brownsword, “Law, Regulation, and Technology,” 21.
[14] Tranter, “Laws of Technology,” 754; Tranter, “Law and Technology Enterprise,” 43.
[15] Bartlett, “Legal Services Regulation in Australia,” 183. A number of jurisdictions have developed non-binding ethical frameworks: see Dawson, Australia’s Ethics Framework; High-level Expert Group on AI, Ethics Guidelines.
[16] Tranter, “Laws of Technology,” 755; Tranter, “Law and Technology Enterprise,” 71–73. However, it should be noted that Tranter lists exceptions within the scholarship. Commentators such as Lyria Bennett Moses and Arthur Cockfield explore a ‘general theory of law and technology’, allowing them to observe the connections between technologies and across time, facilitating a richer level of analysis: “Laws of Technology,” 754–755.
[17] Bennett Moses, “Regulating in the Face of Sociotechnical Change,” 6.
[18] Regulatory legitimacy is regulation that adheres to or is compatible with certain desired normative values. Here, ‘rule of law’ values require those who are the regulatory targets to know what the regulation is and to be clear about how that relates to and directs their conduct: Brownsword, “Law, Regulation, and Technology,” 16; Zimmerman, Western Legal Theory, 91–92, defining ‘rule of law’.
To do so, we use psychologist James Rest’s Four-component Model of Morality (FCM), recently used and extended by Hugh Breakey in the law context.[20] As detailed in this paper, the FCM represents the four requisite psychological components (or interactive elements) for ethical behaviour: awareness, judgement, motivation and action. Here, these components, with Breakey’s additions, embody the necessary elements for lawyers to engage in professional, ethical conduct when they are using AI in their work. We examine issues associated with automation that most seriously challenge each component in context, and the skills and resolve lawyers need to adhere to their ethical duties. This is a context in which there is some active regulation, such as the United States’ (US) requirement for technological competence, but in which mostly, as embodied by the Australian case, the approach is a more passive one of ‘continuity’. We take ethical duties to mean the specific, technical rules; the wider, cornerstone professional values that underpin them and apply where they are silent; and the personal regulation needed to enact them. An ‘ethical professional identity’ is one in which the core values and ideals of the profession and its codes of conduct have been internalised.[21]

Meanwhile, several studies in the parallel field of the sociology of professions have looked at how new features of legal practice, from new public management and the decline of legal aid to corporatism and billable hours, are shaping the ethics and identities of lawyers.[22] This article can be meaningfully connected to this literature, with AI being the latest in a series of dramatic changes to legal professionalism. Our paper adopts a similar analytical approach to Parker and her co-authors, who investigated the ways in which features of large commercial law firms affect lawyers’ decisions and behaviour.[23] In line with the wider research that emphasises the situational dimension of ethics, such studies reveal the ways in which new work environments are influencing ethics capacity and motivation. Thus, we also consider the context of lawyers’ use of AI, including the effect of formal professional controls, and the workplace or organisational circumstances that support moral capacity and resolve. This sheds light on risk areas and regulatory effectiveness, including where ‘soft’ interventions, such as better legal education and training, might be needed.

Therefore, this article takes a detailed look at what ethical AI practice might entail, noting that professional ethical practice already asks a lot of individuals within their personal and organisational work environments. It must be both feasible and fair for lawyers to be held individually responsible when using AI, which raises issues of equality: specifically, the apparent need for some degree of equality of regulation between lawyers and the non-lawyers using AI to mimic legal offerings.[24]

The article is structured as follows. First, we introduce and contextualise Rest’s FCM through detailed explication. Although it is used primarily as a framing device to discuss different elements of ethical practice and evaluate the regulatory demands made of lawyers, we also identify some of the criticisms made of the model. Thereafter, the main body of the article applies the FCM to the AI context, focusing on its four elements of ethical practice and how each is challenged by the presence and use of automated systems. The article concludes with reflection on what is being asked of lawyers when they use AI, how that behaviour is affected by the technology, and how issues of lawyer regulation are illuminated by an analysis of ethical process.

Before we begin, we note that ‘AI’ is an amorphous term that includes many different elements. In this article on lawyers’ ethical AI practice, we adopt an expansive definition[25] that encapsulates ML systems as well as less autonomous ‘expert systems’ or variants thereof,[26] such as decision-support tools or programs for automated drafting. Thoughts of lawyers’ engagement with AI have tended to focus, in recent years, on the use of ML. This is understandable, as it is possible for such systems to learn and act autonomously, giving rise to issues around control and responsibility.[27]
[19] Tranter, “Laws of Technology,” 755.
[20] Breakey, “Building Ethics Regimes.”
[21] Hamilton, “Assessing Professionalism,” 488, 496.
[22] Sommerlad, “Implementation of Quality Initiatives”; Alfieri, “Fall of Legal Ethics”; Campbell, “Salaried Lawyers.”
[23] Parker, “Ethical Infrastructure of Legal Practice.”
[24] While we do not develop an analysis of the tensions around non-lawyers’ engagement (often through technology) in the legal services market, we note the complex issues generated – of what it means to practise law, the meaning of professionalism and the workings of professional trust. See, for example, Remus, “Reconstructing Professionalism,” 872; Wendel, “Promise and Limitations”; Beames, “Technology-based Legal Document Generation”; Bennett, Automated Legal Advice Tools, 19; Bell, “Artificial Intelligence and Lawyer Wellbeing.”
[25] Following Tranter, who asks us to avoid piecemeal approaches: “Laws of Technology,” 754.
[26] Bennett, Automated Legal Advice Tools.
[27] Scherer, “Regulating Artificial Intelligence Systems,” 362–363.
To provide brief definitions: an ML system is one that, when trained with data, can build a model of patterns and correlations in that data and apply the model to new and not previously seen data. This allows for sophisticated statistical analysis of hundreds or even thousands of input variables. As noted, though, pre-programmed systems are also widely used for legal applications, and can assist in structuring advice or decision-making, or automating forms or documents.
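To make the contrast just drawn more concrete, the sketch below (our illustration only, with hypothetical data, labels and terms, not drawn from any particular legal AI product) sets a pre-programmed, rule-based check alongside a small ML classifier built with scikit-learn. The point is the one made in the text: the first system’s rules are fixed and legible, while the second extrapolates from patterns in its training data and applies them to text it has not previously seen.

```python
# Illustrative sketch only (hypothetical task, data and labels): contrasting a
# pre-programmed, rule-based check with a trained ML classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pre-programmed 'expert system' style check: the rules are fixed and fully visible.
RISKY_TERMS = ["unlimited liability", "indemnify", "perpetual licence"]

def rule_based_flag(clause: str) -> bool:
    """Flag a clause if it contains any hard-coded risky term."""
    return any(term in clause.lower() for term in RISKY_TERMS)

# ML system: a model is induced from labelled examples rather than written by hand.
training_clauses = [
    "The supplier shall indemnify the client against all losses.",
    "Either party may terminate on 30 days written notice.",
    "Licensee accepts unlimited liability for breach of confidentiality.",
    "Invoices are payable within 14 days of receipt.",
]
labels = [1, 0, 1, 0]  # 1 = needs review, 0 = routine (assumed labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_clauses, labels)

# Applied to new, unseen text, the model extrapolates from patterns in its training
# data; unlike the rule-based check, its reasons are not directly legible.
new_clause = "The customer shall hold the vendor harmless from any and all claims."
print("Rule-based flag:", rule_based_flag(new_clause))
print("ML prediction:", int(model.predict([new_clause])[0]))
```

With only four training examples the prediction is, of course, not meaningful; the sketch is intended only to show where the ‘knowledge’ of each kind of system resides and why the ML variant is harder for a lawyer to interrogate.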
Rest’s Four-component Model
Professor James Rest was a late twentieth-century psychologist who, with a team of US researchers, theorised on moral behaviour and development.[28] His four-component model (or FCM) identified the ‘semi-independent psychological processes’ that must occur for moral behaviour to take place.[29] According to certain writers, this model better represents how an individual ‘brings multiple systems to bear on moral situations’.[30] For our purposes, when lawyers are required by the ‘law of lawyering’ and other legislation (not to mention their own moral convictions) to use AI ethically, in fact several demands are being made of them, each of which must feature for regulation (including self-regulation) to be effective. Rest defined the components in the following way:

1. Moral sensitivity involves perception of social situations and the interpretation of the situation in terms of what actions are possible, who and what would be affected by each of the possible actions, and how the involved parties might react to possible outcomes.
2. Moral judgment involves deciding which of the possible actions is most moral. The individual weighs the choices and determines what a person ought to do in such a situation.
3. Moral motivation implies that the person gives priority to the moral value above all other values and intends to fulfil it.
4. Implementation or Action combines the ego strength with the social and psychological skills necessary to carry out the chosen action.[31]

These processes might interact and influence each other, as evidence suggests they do,[32] but Rest argued that they still have ‘distinctive functions’.[33] According to Rest and his team, the four components are ‘the major units of analysis in tracing how a particular course of action was produced in the context of a particular situation’.[34] In a recent piece on how lawyers and their firms can enhance ethics capacity, Breakey extended the FCM to add achievement (or moral competence) and review (requiring moral reflectiveness to correct a course of action or improve it in future).[35] For Rest, individuals need ‘skills and persevering character in the face of opposition’ to their ethical behaviour.[36]

The FCM was designed in a scholarly context in which theorists, according to Rest, focused too much on ethical judgement (Component II) and simply assumed that good thinking led to good behaviour.[37] Rest also viewed ethical sensitivity (Component I) as being distinct from judgement. As a corollary, training in ethics reasoning might not influence the interpretative process or how and whether ethics issues are detected in the first place.[38]

While the categorisation of motivation as separate from judgement has been subject to intense debate in the psychology field,[39] in whichever way it occurs, motivation is critical: when a person chooses whether to act on the demands of ethics in competition with other values and interests.[40] This decision to act is driven by both intrinsic and extrinsic motivations. As Breakey says of lawyers’ possible motivations, these include common morality (e.g., honesty, respect and dignity), desirable role identity (e.g., the status, dignity and honour of being a professional), excellence (successfully performing ‘professional’ activities), fair bargain (a sense that honouring professional obligations is a fair exchange for status rewards), constructed virtues (habituated responses to meet peer approval and functioning), social admiration, and avoidance of sanction or punishment.[41] The emphasis on motivation provides ‘the bridge between knowing the right thing to do and doing it’.[42]
[28] Although we note the social element of ‘ethics’ in this paper, we use ‘ethics’ and ‘morals’ interchangeably.
[29] Thoma, “How Not to Evaluate a Psychological Measure,” 242.
[30] Thoma, “How Not to Evaluate a Psychological Measure,” 243.
[31] Narvaez, “Four Components of Acting Morally,” 386.
[32] Narvaez, “Four Components of Acting Morally,” 388.
[33] Rest, “Evaluating Moral Development,” 77.
[34] Rest, “An Overview of the Psychology of Morality,” 5.
[35] Breakey, “Building Ethics Regimes,” 326.
[36] Narvaez, “Four Components of Acting Morally,” 386.
[37] Blasi, “Bridging Moral Cognition and Moral Action”; Vozzola, “The Case for the Four Component Model,” 640.
[38] Bebeau, “Four Component Model,” 283–284.
[39] Vozzola, “The Case for the Four Component Model,” 642–643; Minnameier, “Deontic and Responsibility Judgments,” 69, 73.
[40] Bebeau, “Four Component Model,” 285.
That said, the FCM has been criticised, including for its suggestion that moral behaviour occurs in a stepwise fashion as a series of distinct stages in a set order (something that Rest did not intend, as explained below). Current debate within social psychology focuses on the extent to which ‘actual conscious reasoning and deliberation’ influences moral decisions and actions,[43] with the weight of the literature strongly favouring models that emphasise the intuitive, innate and evolutionary foundations of morality.[44] Evolution has ‘etched into our brains a set of psychological foundations’ that underlie human virtues: these are adaptive mechanisms to help us rapidly solve problems, including care/harm, fairness/cheating, and liberty/oppression.[45] For these contemporary writers, scholars like Rest are too rationalist, privileging, for instance, the act of thoughtful moral deliberation (as part of Component II: judgement), even though in actuality it rarely occurs.[46] It is further argued that they have overplayed people’s individual traits and abilities, and the extent to which these can change (grow, mature) across time and experience.[47] These more recent writers argue that people are primarily influenced by their social contexts and the people around them, and that these influences typically lead to moral disengagement.[48] This trend has seen the rise of behavioural approaches to legal ethics, a scholarship that emphasises the situational effects on lawyers’ decision-making.[49]

In response, Rest and his colleagues were clear that the FCM was intended as a means of analysing moral actions, not depicting a linear sequence in real time.[50] They were also explicit about the components’ overlapping and often simultaneous nature, and demonstrated through empirical research how, for instance, concentrated effort in one component can diminish attention to another component or to a new ethics situation.[51] In addition, Rest demonstrated (and emphasised) that each component contains a combination of cognition and affect, comprising feelings of empathy and disgust, mood influences, attitudes and valuing.[52] Overall, ‘the major point … is that moral behavior is an exceedingly complex phenomenon and no single variable (empathy, prosocial orientation, stages of moral reasoning, etc.) is sufficiently comprehensive to represent the psychology of morality’.[53]

Moreover, Rest was himself often tentative about his scheme, and many scholars have tweaked the categorisations, subdividing them to allow for granularity[54] and layering them by level of abstraction,[55] adaptations he actively supported.[56] Indeed, in the law context, Hamilton accepted from the FCM research and Rest’s writing that an ethical professional identity can be developed over a lifetime and, further, that ethical behaviour is more or less likely depending on both the individual’s personality and character, and the social dynamics of their law practice.[57] To further incorporate current theory, Breakey’s recent use of the FCM, also in the legal domain, connects it (personal capacities and motivations alike) with the social context in which lawyers work.[58] As he writes, a lawyer’s organisation and wider profession may act as ‘obstacles’ to ethics capabilities and motivation or to each of the FCM components.[59] Meanwhile, institutional initiatives, by the profession as a whole (e.g., codes, guidance and training) and the workplace (e.g., policies, incentives, training and mentoring), can support moral capacity and resolve, reduce the effects of any obstacles to moral capacity or resolve, or otherwise leave in place (or even multiply) those obstacles.[60]
[41] Note that the last is not on Breakey’s list because he is interested in the ‘right’ reasons for doing so, which do not include extrinsic intents. Vozzola would argue such lists are too cognitively based when two of the most powerful moral motivations (that move us to action) are empathy and faith: “The Case for the Four Component Model,” 643; Breakey, “Building Ethics Regimes,” 333–334.
[42] Bebeau, “Four Component Model,” 285.
[43] Vozzola, “The Case for the Four Component Model,” 633.
[44] Vozzola, “The Case for the Four Component Model,” 635–636.
[45] Vozzola, “The Case for the Four Component Model,” 644, summarising Haidt’s model: Haidt, “The Emotional Dog.”
[46] Vozzola, “The Case for the Four Component Model,” 634.
[47] Vozzola, “The Case for the Four Component Model.”
[48] Vozzola, “The Case for the Four Component Model.”
[49] Robbennolt, “Behavioral Legal Ethics.”
[50] Rest, “Evaluating Moral Development,” 85.
[51] Rest, “Overview of the Psychology of Morality,” 17.
[52] Narvaez, “Four Components of Acting Morally,” 387–388.
[53] Rest, “An Overview of the Psychology of Morality,” 18.
[54] Bebeau, “Perspective on Research in Moral Education.”
[55] Bebeau, “Perspective on Research in Moral Education,” 22–23.
[56] Bebeau, “Perspective on Research in Moral Education,” 22.
[57] Hamilton, “Assessing Professionalism,” 495–496.
[58] Breakey, “Building Ethics Regimes.”
[59] Breakey, “Building Ethics Regimes.”
[60] Breakey, “Building Ethics Regimes,” 342, 349.
Noting its comprehensiveness and the decades of credible research findings that support its various elements,[61] the FCM is used in this discussion of AI lawyering because, as an essentially competence-based, motivational model, it is especially well suited to approaching how ethics (and, therefore, regulation) is practised by individuals or, more specifically in this context, lawyers using AI. As Rest saw it, the FCM offers a set of criteria for testing the merits of ethics education,[62] asking how successfully an educational approach inculcates each component.[63] In contrast, we first use it for a somewhat different purpose: to look at what we are asking of lawyers as ethical users of AI, specifically their capacity and resolve when handling it. As Hamilton argues, the foundations of an individual’s professionalism are these FCM capacities and motivation within their respective social context. In this way, we can identify the stressors testing the regulatory system. Then, turning back to Rest’s own advocated use of the model, the FCM can also shed light on ‘soft regulation’. Until any formal regulatory changes are introduced (if any are in fact needed), does a lawyer’s education and training (by law schools, training courses, workplaces and professional bodies) support each element with respect to AI and, in turn, help ensure ethical practice?

Using AI Ethically in Legal Practice

Before applying the FCM to AI, we momentarily turn to the sociological literature to consider some of the wider conditions for lawyers’ ethical practice. In the process, we emphasise, as Hamilton and Breakey have, the compatibility of the FCM with the social dimensions of ethics. These are the social factors that enable and constrain lawyers’ ethics practice. Extensive research shows how lawyers’ capacity and motivation to be ethical, including by enacting the duties of professionalism (and receiving its privileges), are under strain.[64] Four main factors (or ‘obstacles’) that have been documented are: size, specialisation and the move to in-house (e.g., fragmentation, loss of community and diminished independence); the loss of monopolies; competition and aggressive commercialism (supported by increasingly powerful clients and governmental economic reform agendas, offering real opportunities to large firms especially, but also adding to intense internal competition, elongated organisational hierarchies, delayed rewards and dramatic forms of adaptation, such as global expansion); and managerialism, or performance tracking and review focused on efficiencies (found across the profession). These obstacles reflect and reinforce the demands of the organisation, with which even entity regulation has so far failed to come to terms.[65] The workplace is now the site and source of professional norm-setting and control, wherein (as mentioned) a significant proportion of unethical behaviour is done with or at the behest of others or organisational systems.[66]

Throughout our analysis, we emphasise that the proliferation of AI products in the legal arena is occurring against a backdrop of existing stressors that are impacting traditional (albeit imperfectly enacted) ethics practice. The incursion of AI into lawyers’ work may inflame some of these existing trends and ethics risks. For example, two core motivational drivers, the construction of a desirable role identity and the meaning of excellence in professional work, have already shifted through increasing profit orientation, and may alter further. Many larger law firms have taken up AI technologies more rapidly and to a greater extent than their medium and small counterparts.[67] This extends to acquiring tech companies, developing or extending in-house IT capabilities, or entering into partnerships or agreements with larger providers of legal AI.[68] Increasing automation may also exacerbate the negative elements of a large law firm environment, the detrimental and unethical aspects of which have been documented.[69] Meanwhile, the loss of monopoly, previously thought of as primarily affecting lawyers working in legal aid, conveyancing and probate,[70] is being more widely felt, with the proliferation of technologies such as automated document drafting challenging both ‘big law’ and smaller firms, as well as sole practitioners.[71]
[61] Breakey, “Building Ethics Regimes,” 325, Component II.
[62] Breakey, “Building Ethics Regimes,” 347–349.
[63] Breakey, “Building Ethics Regimes,” 337.
[64] Welsh, “The Effects of Changes”; Moorhead, “Professional Minimalism?”; Boon, “From Public Service.” Meanwhile, Simon argues that limits of lawyers’ ethics are a result of role morality that requires and allows for lawyers to ignore their ordinary morality: The Practice of Justice. Others point to the low quality of legal ethics education. For example, Granfield, “Hard to be a Human Being and a Lawyer.”
[65] Rogers, “Large Professional Service Firm.”
[66] Parker, “Ethical Infrastructure of Legal Practice”; Le Mire, “From Scandal to Scrutiny.”
[67] Susskind, Tomorrow’s Lawyers, 184–185; Chin, State of Legal Innovation in the Australian Market.
[68] For example, Waye, “Innovation in the Australian Legal Profession,” 221.
[69] Parker, “Ethical Infrastructure of Legal Practice”; Le Mire, “From Scandal to Scrutiny”; Sommerlad, “Professionalism, Work Intensification, Sexualisation and Work–Life Balance”; Flood, “Re-landscaping of the Legal Profession.”
[70] Hanlon, Lawyers, the State and the Market, 32–37.
[71] See Barton, “Lawyer’s Monopoly.”
Another set of contextual variables concerns the lawyer–client relationship. This involves the type of client; how the relationship is characterised (as one of agency, contracts or trusts – however, in reality, all are encompassed);[72] the degree of control exerted by the lawyer; and how the client exercises choice. Generally, while the client directs the lawyer as to big-picture outcomes, the lawyer is responsible for how tasks and outcomes are achieved.[73] Yet, the salient ethical considerations vary depending on the degree of the client’s knowledge and autonomy, including by influencing what and how much information the lawyer needs to give the client to enable the client to be properly informed. Further, different types and applications of AI require and allow for varied degrees of ethical capacity from lawyers. A system that is, essentially, a pre-programmed representation of existing legal knowledge is different to one that extrapolates from an analysis of data. For example, a program that simply automates existing legal precedent documents[74] has controlled parameters, and lawyers are likely to be competent to review the system’s outputs. Notwithstanding all these factors, if using an automated system to either support or perform some elements of a legal task, the lawyer still retains professional responsibility and liability for the work they have been retained to do: an individualised responsibility.
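As a concrete illustration of what ‘controlled parameters’ means here, the following minimal sketch (ours, with hypothetical field names and wording, not taken from any real precedent or product) assembles a document from a fixed precedent template. Because every clause and every input field is pre-defined, a lawyer can review the output against parameters they fully understand.

```python
# Illustrative sketch only: a pre-programmed document-automation tool with a fixed
# precedent template and a controlled set of parameters (all wording is hypothetical).
from string import Template

PRECEDENT = Template(
    "This deed of confidentiality is made on $date between $disclosing_party "
    "('Discloser') and $receiving_party ('Recipient'). The Recipient must not "
    "disclose Confidential Information for a period of $term_years years."
)

REQUIRED_FIELDS = {"date", "disclosing_party", "receiving_party", "term_years"}

def draft(fields: dict) -> str:
    """Assemble the precedent; refuse to proceed if any controlled parameter is missing."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"Missing fields: {sorted(missing)}")
    return PRECEDENT.substitute(fields)

print(draft({
    "date": "1 July 2019",
    "disclosing_party": "Acme Pty Ltd",
    "receiving_party": "Jane Citizen",
    "term_years": 3,
}))
```

The contrast with the ML sketch given earlier is the one the text relies on: here the system’s behaviour is exhausted by rules its users can read, which is why its outputs are comparatively easy to review.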
We now apply the FCM to the lawyers’ AI context, with Breakey’s achievement (or moral competence) category merged, for brevity, with Component IV: action. We do not approach this analysis by looking at one ethical issue or case right through all the components. Rather, we use this framework to allow for a focused treatment of the implications of AI for the ethical lawyer in relation to the component that is most challenged by a certain feature or features of this technology, having accepted that each process needs to occur for the lawyer to be ethical in their use of automated systems. This means that every AI issue we examine could feature in any of the component sections, since they all require each element to have occurred for good and effective ethical handling. However, we have arranged the issues according to which component they seem to strain the most dramatically, thus building up a picture of what we are demanding of the ethical AI lawyer.

Component I: Awareness

This first component requires lawyers to identify that their own use of automated systems can have ‘morally salient features’.[75] Lawyers need to be sensitive to and curious about how the issues mentioned in the Introduction (including bias, opacity and absence of accountability)[76] intersect with their own professional responsibilities, particularly professional loyalty, competence and care, integrity and independence. This involves recognition not just that there may be general or big-picture issues with automated systems used in other areas, such as health or the criminal justice system,[77] but also that these same issues may affect the very systems that lawyers themselves use. It is also possible that there could be an ethical requirement to use AI software where it can perform a task more efficiently or effectively than a human.[78] This point appears to have been reached in relation to Technology Assisted Review (TAR) for large-scale discovery, as studies indicate that computer-assisted review of documents is both faster and more accurate than human review.[79] As explained below, the task of recognition is complicated by two additional factors: the perceived value neutrality and infallibility of AI systems;[80] and the social and organisational context in which automated systems are being used. We first look at professional competence and care.

When using AI, an issue arises as to whether the automated system (and, by extension, the lawyer) is ‘competent’ to do the work. Competence is found in the Australian Solicitors’ Conduct Rules in the requirement to ‘deliver legal services competently, diligently and as promptly as reasonably possible’.[81] To identify the possibility of ethical risk, or whether a system is indeed competent, lawyers require awareness of the shortcomings or limitations of the tools they are using. This presents a significant challenge, as such knowledge is not likely to be readily available (i.e., the product developers are unlikely to readily explain a product’s flaws[82]). Likewise, if the software’s functioning is opaque, lawyers may have no way of finding out if it is reliable.[83]
[72] Boon, Ethics and Conduct, 297.
[73] Boon, Ethics and Conduct, 298.
[74] Noting that such systems may or may not use ‘AI’.
[75] Breakey, “Building Ethics Regimes”; refer to ‘capacity to see ethical issues’: Parker, “Ethical Infrastructure of Legal Practice,” 164.
[76] Kroll, “Accountable Algorithms.”
[77] Crawford, The AI Now Report, 2.
[78] Arruda, “An Ethical Obligation”; Lacobowitz, “Happy Birthday Siri,” 416; Cass v 1410088 Ontario Inc (2018) ONSC 6959 [34].
[79] Grossman, “Quantifying Success.”
[80] Grossman, “Quantifying Success.”
[81] Legal Services Council, Australian Solicitors’ Conduct Rules 2015, r 4.1.3, which applies to NSW and Victorian lawyers, who together constitute some three-quarters of Australian legal practitioners.
[82] Products are also likely to be proprietary; therefore, their workings do not have to be disclosed: Pasquale, “Restoring Transparency”; Lehr, “Playing with the Data,” 662; Carlson, “The Need for Transparency”; Sheppard, “Machine-learning-powered Software.”
[83] Pasquale, Black Box Society; Burrell, “How the Machine ‘Thinks’.”
Even if openly accessible, lawyers may lack the technical knowledge to make sense of the explanation; in the case of complex applications (e.g., where a prediction is being generated), it may be difficult for lawyers to evaluate outputs themselves.[84] Further, there is ongoing debate regarding whether the results of some ML systems, which are able to ‘learn from’ huge datasets and apply the resulting models to new data, are, indeed, interpretable at all. In some cases, even if transparent, it will not be possible to comprehend how a system arrived at its outputs.[85] Lawyers may possess little capacity, then, to identify problems in the software’s operations and will therefore have to take its outputs at face value. The Organisation for Economic Co-operation and Development (OECD) Working Party on Competition and Regulation has referred to this as the creation of new information asymmetries, wherein those reliant upon technology are unable to assess its quality.[86]

Notwithstanding, these lawyers are still required to be ethically and legally responsible for AI. For example, in undertaking TAR, lawyers are dependent upon software providers to understand and correctly train an ML system to code documents; however, if outputs are incorrect (e.g., documents are mistakenly identified as non-privileged and disclosed), lawyers retain professional responsibility for the error. Accordingly, and bringing in now another core value, professional independence,[87] if lawyers are relying on software outputs (e.g., concerning the most important contractual clauses, most relevant precedent case or likely outcome of proposed litigation), they may not be exercising independent judgement.[88] Conversely, a system that relies on pre-programmed rules is more straightforward, though issues may still arise if the system is not accurate to begin with or is not kept up to date with legal developments. These risks should be more readily detectable by lawyers who, to jump forward to Component IV (action and achievement), will be able to follow through with the requirements of professionalism. Having said this, delegation to a pre-programmed system effectively leaves no room to move outside the program. This is akin to our reliance on social ‘scripts’ in shared interactions, which concern the ‘cognitive processes that map existing knowledge onto a template for understanding and response’.[89] As Breakey says, they are a potential obstacle to general awareness of moral issues.[90] Indeed, pre-programmed systems automate a series of steps, but if the script does not include reference to relevant ethical factors, it can hinder a person’s capacity to ‘see’ them. Moreover, the very fact of codification into a technological system may give such ‘scripts’ the appearance of precision and completeness, a potential threat to professional competence. Finally, any ethical parameters included in an automated system may have been defined by those designing the system[91] rather than the lawyers whose role the system seeks to emulate. A related problem is the typical (and comparatively narrow) demographic and educational background of these designers, known as the ‘sea of dudes’ problem.[92]

Perceptions of technology as value neutral or incapable of error might result in dulled moral sensitivity and insufficient scrutiny being applied to AI systems, leading to over-reliance and a failure to question their operations.[93] Awareness that a system is neither value-free nor error-free entails at least some understanding of AI technology and represents a new dimension of professional competence and integrity.[94] The American Bar Association’s (ABA) Model Rules of Professional Conduct specify that ‘technological competence’ is included as part of the general requirement of competence.[95] However, what is left undefined is what technological competence might precisely entail, how it is to be judged and who exercises this judgement.[96] This calls up, too, an issue of motivation (Component III): that is, how much additional education will lawyers be motivated to engage in, as part of their continuing legal education obligations,[97] to achieve the competence necessary to question and test automated systems?
[84] Hildebrandt, “Law as Computation.”
[85] Bornstein, “Is Artificial Intelligence Permanently Inscrutable?”
[86] Mancini, Protecting and Promoting Competition.
[87] Legal Services Council, Australian Solicitors’ Conduct Rules 2015, r 4.1.4 (‘avoid any compromise to … integrity and professional independence’).
[88] Medianik, “Artificially Intelligent Lawyers,” 1518.
[89] Breakey, “Building Ethics Regimes,” 330.
[90] Breakey, “Building Ethics Regimes,” 330.
[91] Crawford, The AI Now Report, 20.
[92] Myers West, Discriminating Systems; Crawford, “Artificial Intelligence’s White Guy Problem”; Walsh, “Why We Need to Start Preparing”; Walsh, 2062, 115.
[93] Crawford, The AI Now Report, 13–14.
[94] Law Society of New South Wales, Future of Law and Innovation, 41; Jenson, “Ethics, Technology, and Attorney Competence”; Mazzone, “A Techno-ethics Checklist.”
[95] This occurred when Comment 8 to Rule 1.1 (a lawyer’s duty of competence) in the Model Rules of Professional Conduct was amended in 2012 to provide that ‘a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology’: American Bar Association, “Model Rules of Professional Conduct,” r 1.1; Boyd, “Attorney’s Ethical Obligations.”
[96] Perlman, “Evolving Ethical Duty of Competence,” 25 (arguing that this is deliberately left vague to encompass change); see also Boyd, “Attorney’s Ethical Obligations.”
An additional, confounding factor is the practice context. Parker and her co-authors observed that the structure of large law firms might already complicate individual lawyers’ ability to identify ethical issues.[98] They give the example of how such firms’ diffusion of work among many practitioners may impede lawyers from ‘seeing’ an ethical issue, as they lack sight of the bigger picture.[99] In the case of automated systems, this becomes even more likely. Susskind and Susskind have argued that lawyers’ work is increasingly ‘decomposed’ or broken down into components.[100] With automated systems, a lawyer’s tasks are not only broken down, but some are excised altogether, as they are performed by a machine. This may further obscure ethical issues.

Finally, Rest explained that emotions can both highlight ethical cues and hamper our interpretations.[101] Increasing use of AI systems in legal work may provoke emotional responses in lawyers, as such systems have been widely and popularly portrayed as superior to, and as replacing, lawyers.[102] Although this may increase lawyers’ sensitivity to the problems of AI, increased emotion might (to flag some of the issues explored in the following section) also result in a sense of futility or diminished motivation. Evidently, this has implications for education and training.

Component II: Judgement

In seeking the morally ideal course of action, an individual must try to integrate the various needs and expectations at stake.[103] In relation to law, professional moral reasoning is said to include reasoning in three layers:

[First by] the application of explicit codes, rules and norms specific to the profession in the situation presented … [Second] through the application of the profession’s core values and ideals – intermediate concepts relevant to each profession like ‘confidentiality’, ‘conflicts of interest’ and ‘fiduciary duty’. At a third level, it encompasses the overarching neo-Kohlbergian idea of post-conventional thinking or reasoning about the broad, societal dimensions of an ethical problem in the context of the profession’s responsibilities.[104]

Here, we consider what existing general laws, the law of lawyering and ethical codes might be relevant for lawyers in understanding what ethics requires of them. The professional codes represent the most salient expression of lawyers’ extensive obligations to the court, client and community. Yet, as discussed, these codes include and/or are supported by wider professional values, which are enforced by disciplinary and liability mechanisms, including the lawyer’s paramount duty to the court, their fiduciary duty of loyalty to the client (summed up colloquially as no-conflict, no-profit) and their competence. They also include lawyers’ duties to themselves as professionals and to the wider legal institutions, in independence and integrity, or the exercise of professional judgement. Working out what these demand of lawyers in an AI context is at present a highly complex task, as it is unclear how these various elements might intersect in relation to specific technology. For example, the Victorian Legal Services Commissioner announced the creation of a ‘regulatory sandbox’ (a concept borrowed from financial services regulation[105]) for LawTech.[106] A regulatory sandbox allows interested parties to trial or attempt innovations without fear of regulatory sanction. This approach is being utilised by the Solicitors Regulation Authority for England and Wales, which describes the sandbox as a ‘safe space’ to test new ideas about legal service delivery.[107] These initiatives can be seen as an attempt by regulators to clarify when the rules do not apply, but uncertainty still shrouds how existing rules about the lawyer’s retainer and obligations to the client interact with the use of AI, or what demands are presently being made of lawyers. It is equally unclear whether a sandbox approach can protect lawyers from their present, individualised responsibility.
[97] For example, NSW solicitors are required to undertake 10 Continuing Professional Development units each year according to the NSW Government, Legal Profession Uniform Continuing Professional Development (Solicitors) Rules 2015, r 6. One unit must be completed in each of ‘ethics and professional responsibility’; ‘practice management and business skills’; ‘professional skills’; and ‘substantive law’: Law Society of New South Wales, “Continuing Professional Development.”
[98] Parker, “Ethical Infrastructure of Legal Practice,” 163.
[99] Parker, “Ethical Infrastructure of Legal Practice,” 164–165.
[100] Susskind, The Future of the Professions, 198.
[101] Rest, “A Psychologist Looks,” 30.
[102] For example, Markoff, “Armies of Expensive Lawyers”; Addady, “Meet Ross”; Patrice, “BakerHostetler Hires AI Lawyer.”
[103] Rest, “A Psychologist Looks,” 31.
[104] Hamilton, “Assessing Professionalism,” 495–496.
[105] Australian Securities and Investments Commission, “Fintech Regulatory Sandbox.”
[106] Derkley, “Regulatory ‘Sandbox’.”
[107] See Solicitors Regulation Authority, “SRA Innovate.”
Another example concerns the new ‘law of lawyering’ contained in the US Model Rules of Professional Conduct,[108] adopted in many US states, which sees lawyers’ duty of competence extend to staying up to date with relevant technology. Its parameters are, however, unclear. The change was suggested not to have imposed any new requirements on lawyers, but rather to represent a symbolic step in acknowledging the importance of technology.[109] Alternatively, one commentator has said that ‘lawyers who fail to keep abreast of new developments [in technology] face a heightened risk of discipline or malpractice’.[110] It would be difficult for most lawyers to attain more than a basic understanding of AI without extended study. For reasons described above, such as opacity and lack of foreseeability, even with extended study it is likely not possible to know how an automated system arrived at its output. Other than reaffirming the individualised responsibility for technology, which lawyers must take on, the further explication of the duty of competence in the Model Rules does little to assist lawyers in adhering to their ethical duties.

Arruda draws parallels with earlier technologies (e.g., noting that lawyers are now expected to use email rather than postal services) to argue that lawyers have a duty to use AI technologies.[111] However, the latter are arguably qualitatively different to ‘digital uplift’ projects, as they may be used to generate output that goes directly to lawyers’ core work. Moreover, the scope of potential errors is wider and their ramifications more serious, as explained below in relation to action and achievement.[112] Hence, the following sections demonstrate some of the value conflicts that arise due to the indeterminacy of the rules.

Regarding AI systems generally, Gasser and Schmitt have noted the recent proliferation of ethical AI principles from both tech companies and other bodies, including interest groups and non-government organisations.[113] For example, in Australia there is interest in the regulation of AI, evidenced by current inquiries being conducted by the Australian Human Rights Commission[114] and the Commonwealth Scientific and Industrial Research Organisation’s computer science arm.[115] Yet, despite this concern and a burgeoning of ethical codes, there is no legal or regulatory regime presently governing ‘AI’ specifically.[116]

Automated systems are regulated, though, in that they are subject to the same general law obligations as other products: tort law, consumer law and even criminal law may all be applicable.[117] Nonetheless, it remains difficult to attribute responsibility for AI products to designers or developers due to the likelihood of unforeseen or unintended results. If those involved in the creation of a complex AI system cannot know what its eventual outputs might be, it is difficult to demonstrate foreseeability of harm or intention to cause harm.[118] The number of people, both professionalised and not, who may be involved in engineering a piece of software adds to this complexity and potentially leaves a liability vacuum.[119] There is, as explored by Gasser and Schmitt, the possibility that the AI industry will self-regulate or self-impose some form of certification on its own products and services.[120] Yet, other authors are sceptical about the effectiveness of self-regulation in this area.[121] Gasser and Schmitt are more optimistic, but note that at present there is no ‘coherent normative structure … [but] rather a patchwork’ of norms and principles, both existing and emerging.[122] Accordingly, lawyers must be aware of the difficulties involved in attempting to attribute or share liability with those who designed or built the system, and also that AI systems may, or may not, themselves have been designed and created in a way that adheres to ethical principles or codes of conduct.
[108] American Bar Association, “Model Rules of Professional Conduct,” r 1.1, cmnt 8 (Competence). Reportedly, this has been adopted in 36 States: Ambrogi, “Tech Competence.”
[109] American Bar Association, Commission on Ethics.
[110] Macauley, “Duty of Technology Competence?” (quoting Andrew Perlman).
[111] Arruda, “An Ethical Obligation,” 456. Note that Arruda is the CEO of Ross Intelligence, an AI-powered legal research tool.
[112] For example, Sheppard has argued that even something as apparently straightforward as conducting legal research may have serious consequences if a key case is overlooked: “Machine-learning-powered Software.”
[113] Gasser, “The Role of Professional Norms,” 5.
[114] Australian Human Rights Commission, Artificial Intelligence.
[115] Dawson, Australia’s Ethics Framework.
[116] Kroll, “Accountable Algorithms,” 633.
[117] Scherer, in sentiments echoed by other authors, has commented on the shortcomings of these traditional modes of regulation when applied to AI: “Regulating Artificial Intelligence Systems,” 356; Millar, “Delegation, Relinquishment, and Responsibility,” 123 (noting that product liability is not appropriate); Karnow, “Application of Traditional Tort Theory.” A further suggestion is to attribute legal personhood to autonomous systems akin to that given to companies: Select Committee on Artificial Intelligence, AI in the UK, Chapter 8; Solum, “Legal Personhood”; Creely, “Neuroscience, Artificial Intelligence,” 2323.
[118] Millar, “Delegation, Relinquishment, and Responsibility”; Parker, “Ethical Infrastructure of Legal Practice,” 124.
[119] Scherer, “Regulating Artificial Intelligence Systems,” 370–371; see also Gasser, “The Role of Professional Norms,” 67.
[120] Gasser, “The Role of Professional Norms.”
[121] Guihot, “Nudging Robots”; Calo, “Artificial Intelligence Policy,” 408.
[122] Gasser, “The Role of Professional Norms,” 25.
Component III: Decision-making
According to Rest, ‘research (and common sense) have clearly demonstrated that what people think they ought to do for moral
reasons is not necessarily what they decide to do’.
123
Having made a judgement about what is required of him or her ethically,
a lawyer must then have the moral motivation to follow through. While some writers have noted that we can be ‘too rigid (with
an exaggerated sense of moral responsibility)’,
124
here we focus on complacencyand worse, wilfully ignoring moral urges
and sidestepping moral imperatives. Research has highlighted the various sources of values we hold that might conflict with
ethics, including career and financial goals, important relationships, religion and aesthetic valuesall of which might be
preferred over one’s ethical commitments.
125
Some of these values will be compatible with ethical action. For example, career and financial goals can support extrinsically
motivated ethical decision-making—that is, following the ‘law of lawyering’ to avoid punishment, which can result in, inter
alia, suspension or exclusion from practice. However, for Rest (and others), external motivations such as avoiding sanctions or
the desire for peer approval are not truly ethical motivations, since they are not self-determined.
126
Rather, motivation to
prioritise ethics ought to be intrinsicto develop personal integrity and character, or at least the motivation should be more
integratedto, for example, adhere to standards of excellence and find work satisfaction
127
(with which the use of AI may
better align).
128
This article posits two dimensions of motivation relevant for discussion: though there is the moral motivation to do something
about a specific ethics problem that has been detected in a certain context, first there is an overall motivation, closer to a sense
of moral responsibility, for AI. Before any engagement with the issues and the rules to which they give rise, the lawyer must
have accepted (and for these rules to be legitimate, and achieve self-regulatory efficacy, ought to have accepted) that the
technology is within their due responsibility.
129
However, lawyers may not see themselves as responsible for AI, which then
affects each component of ethical decision-making. Indeed, as discussed in relation to Component IV (action), lawyers may
then seek to deliberately excise their responsibility for AI from the scope of the retainer.
For several reasons, it is understandable that lawyers might not feel connected with, let alone accountable for, AI systems, in
the sense of this broader concept of overall motivation. First, consider the character of such systems. Certain AI systems can
act autonomously, may ‘teach themselves’ and can produce outputs without providing reasons.
130
Yet, such a system cannot be
legally liable for a decision or output, as machines are not legal persons.
131
While some commentators have called for greater
accountability for those involved in creating autonomous systems,
132
as detailed above, attribution is complex.
133
These
products are—in direct contrast to the highly regulated nature of legal practice—designed, developed, manufactured and
implemented in a largely unregulated context.
Second, consider the workplace context in which lawyers are likely using automated systems. As noted, the legitimacy of
professional accountability (i.e., an individual accountability) ultimately rests on the lawyer’s exercise of independent
judgement. Yet, lawyers are subject to greater managerialism in their work than ever before. The pursuit of organisational profit
and client demands has already reduced their personal autonomy.
134
The ensuing siloing and hierarchies within large legal
organisations may ‘degrade individual lawyers’ sense of professional autonomy and their capacity to take responsibility for
their own work’.
135
As technology likely falls under the auspices of the firm, individual lawyers may have little choice in
whether autonomous systems are used within the organisation, whether they themselves must use them, and how the technology is
chosen and scrutinised.
123
Rest, “A Psychologist Looks,” 33.
124
Minnameier, “Deontic and Responsibility Judgments,” 78.
125
Rest, “A Psychologist Looks,” 33.
126
Rest, “An Overview of the Psychology of Morality.”
127
Breakey, “Building Ethics Regimes,” 334, 339.
128
For example, McGinnis, “The Great Disruption,” 3054 (suggesting that some lawyers will be more successful with the aid of AI).
129
See Brownsword, “Law, Regulation, and Technology,” 16 for explanation of regulatory legitimacy in this context.
130
Dhar, “Future of Artificial Intelligence.”
131
Vladeck, “Machines Without Principals,” 121.
132
Kroll, “Accountable Algorithms,” 657.
133
See Citron, “Technological Due Process,” 1253; FAT/ML, “Fairness, Accountability, and Transparency”; Creely, “Neuroscience,
Artificial Intelligence”; Scherer, “Regulating Artificial Intelligence Systems,” 364–365.
134
Le Mire, “A Propitious Moment,” 1047.
135
Parker, “Learning from Reflection on Ethics Inside Law Firms,” 403.
Third, though related to the second issue, the professional model has already shifted to a more consumer-focused orientation,
away from a model in which the lawyer, as expert, was ascendant over the client. In many publicised instances, it seems that
legal AI developments are being driven by clients’ demands for efficiency and cost-effectiveness
136
and thus, ‘in some areas,
consumer protection might be experienced by lawyers as consumer demands. Within a wider competitive environment, clients,
armed with information technology, can come to the legal practitioner with their own ideas, ready to test the lawyer’s
expertise’.
137
Rather than adopting AI technology for the sake of professional excellence or improving workflow, lawyers and firms may feel
compelled or pressured to do so. A potential conflict also exists between the disciplinary rules and associated professional
values, which require lawyers to take responsibility for any shortcomings or flaws in the operation of an AI system; and the
demands of clients and/or workplace success, which may require use of AI tools for reasons of excellence and efficiency.
138
Finally, use of automated systems in law is also marketed or upheld as a means of improving access to justice, through the
creation of simpler and more accessible apps and online services.
139
Commentary here has focused on the failure of lawyers
and traditional modes of pro bono work and legal aid schemes to ensure equitable access to justice.
140
Accordingly, the use of
automated systems may also undermine lawyers’ public-service-minded motivations: it can be read both as disparaging those
working within the sector and as characterising access to justice as something that is now being primarily addressed by others
outside legal practice.
141
In motivational terms, these factors all pose a threat to lawyers’ ‘fair bargain’ motivation,
142
or their sense that it is acceptable
and legitimate
143
for them to be subjected to the demands of ethics (given the privileges they enjoy) when it comes to AI.
Indeed, as detailed here, these privileges are, in many areas of law, under strain or depreciating. Increasingly, lawyers might
feel burdened with extensive responsibilities for technology which non-lawyers create, distribute and profit from with impunity.
Moreover, the possibility of AI adding to ‘decomposition’,
144
and its ability to perform (at least in relation to discrete tasks) to
a higher standard than a professional, is confronting to professional identity and bespoke, ‘trusted advisor’ work.
145
Additional
pressures relate to AI performing more quickly and more cost-effectively than lawyers, which may affect lawyers’ sense of
self-efficacy or trust in their own abilities when AI technology can take over aspects of their work.
Noting that legal responsibility for AI predictions will continue to rest with humans, Cabitza has asked (within the context of
medicine) whether it will become professionally impossible to disregard the ‘machine’. He observes that ‘being against [a
program’s prediction] could seem a sign of obstinacy, arrogance, or presumption: after all that machine is right almost 98 times
out of 100 and no [professional] could seriously think to perform better’.
146
In these ways, AI might itself be eroding
lawyers’ identities and rewards, and, therefore, the drivers of ethical motivation—including when it is suitable for use.
Component IV: Action and Achievement
Ethics action involves ‘figuring out the sequence of concrete actions, working around impediments and unexpected difficulties,
overcoming fatigue and frustration, resisting distractions and other allurements, and keeping sight of the original goal’.
147
As
Breakey illustrates, these features of ethics action and achievement (his extension) require both personal courage and
perseverance, interpersonal skills and strategic nous.
148
After deciding to exercise supervision over and responsibility for AI,
lawyers must then follow through and do so effectively. However, in this context, lawyers’ success in terms of moral action
will depend on their technological understanding and experience. Importantly, though, there is a lack of consensus regarding
the degree of technological competence that lawyers should possess, not least because ethical competence is beset by issues
of autonomy and explainability. As we now illustrate, these issues are germane to the lawyer–client interaction, centred on the
136
Alarie, “How Artificial Intelligence Will Affect the Practice of Law,” 114.
137
Bell, “Artificial Intelligence and Lawyer Wellbeing.”
138
McGinnis, “The Great Disruption.”
139
Cabral, “Using Technology to Enhance Access to Justice.”
140
For example, Hadfield, “How to Regulate Legal Services”; Barton, Rebooting Justice.
141
Bell, “Artificial Intelligence and Lawyer Wellbeing.”
142
Breakey, “Building Ethics Regimes,” 333–334.
143
Brownsword, “Law, Regulation, and Technology,” 11–16.
144
Susskind, The Future of the Professions, 198.
145
Cabitza, “Breeding Electric Zebras.”
146
Cabitza, “Breeding Electric Zebras,” 3.
147
Rest, “A Psychologist Looks,” 34.
148
Breakey, “Building Ethics Regimes,” 326.
lawyer assisting the client to understand relevant legal issues and obtaining informed consent regarding decisions to be taken.
149
They extend to lawyers’ overarching duty to the court and the administration of justice.
The issue of autonomy was discussed in relation to motivation, but it also has a practical element: the greater a program’s
autonomy, the less control any human has over its actions and outputs. As such, a lawyer cannot have any ‘say’ in the program’s
‘ethical’ actions or implementation. Likewise, the less explainable a program is, the less insight people have into how it has
generated its answers. These attributes are linked: a high degree of autonomy tends to correspond to a low degree of
explainability. In this sense, ‘explainability’ differs from ‘transparency’, as it encapsulates the idea that while an ML system’s
workings may be made visible (e.g., through revealing source code), they may still be unintelligible even to an expert.
150
Thus,
the most sophisticated or ‘frontier’ ML systems are able to act with the greatest autonomy, but this tends to make their actions
all but indecipherable to humans.
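The distinction between transparency and explainability can be illustrated with a small, purely hypothetical sketch (not drawn from the article or any legal product): even a toy neural network whose every parameter can be printed does not, by virtue of that visibility, yield reasons for an individual output, whereas a shallow decision tree does. The sketch assumes Python and the scikit-learn library; the data are synthetic.

```python
# Illustrative sketch only: 'transparent' parameters versus an explainable decision.
# Assumes scikit-learn; the data are synthetic and the models are hypothetical stand-ins
# for the kinds of systems discussed in the text.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small neural network: every weight matrix can be inspected (a kind of transparency) ...
network = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in network.coefs_])   # the parameters are all visible
print(network.predict(X[:1]))              # ... but the prediction arrives without reasons

# A shallow decision tree: its rules can be read directly, so its outputs can be explained.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))
```

Printing the network’s weights is the coding analogue of ‘revealing source code’: the workings are visible, yet the route to any given answer remains opaque.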
If lawyers cannot themselves understand the reasons for a system’s particular outputs, it will not be possible to relay that
information to clients, heightening the challenge of informed decision-making.
151
A further issue is that ethical frameworks
tend to prioritise the production of records (e.g., a lawyer’s case files).
152
Yet ‘contemporary AI systems often fall short of
providing such records … either because it’s not technically possible to do so, or because the system was not designed with
[this] in mind’.
153
Both issues affect the lawyer–client relationship; technically, the lawyer cannot obtain the client’s informed
consent to a course of action if they are not truly ‘informed’. Even appropriate record-keeping may be difficult.
One commentator has argued that:
AI can’t be defended unless it’s possible to explain why and how the AI system or tool reached the conclusion it reached … a
law firm would need to find out which analytics and variables programmed into the technology sparked the conclusion that
particular facts about a case are relevant.
154
Yet, in the case of a sophisticated system, this may simply not be possible. Other writers have noted that, in ML,
accuracy and intelligibility often have an inverse relationship.
155
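To illustrate how this trade-off is typically observed, the following is a minimal sketch (assuming scikit-learn and a synthetic dataset, so the precise numbers say nothing about legal tools): a shallow, human-readable tree is compared against a boosted ensemble of hundreds of trees whose combined logic no reviewer could narrate.

```python
# Illustrative only: comparing an intelligible model with a less intelligible one.
# Assumes scikit-learn; the dataset is synthetic, so any accuracy gap shown here
# demonstrates the method of comparison, not a fact about legal AI systems.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A few rules that could, in principle, be read out and relayed to a client.
shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_train, y_train)

# Hundreds of interacting trees: usually more accurate, far harder to narrate.
boosted_ensemble = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

print("shallow tree accuracy:    ", shallow_tree.score(X_test, y_test))
print("boosted ensemble accuracy:", boosted_ensemble.score(X_test, y_test))
```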
Thus, the ‘best’ or most accurate systems are likely the least
intelligible. Moreover, Goodman has argued that ‘it is one thing for [AI] to assist attorneys in making better, or fairer, or more
efficient judgments, but it is a different situation where the human is simply ratifying what the computer has chosen to do’.
156
Even if the lawyer has used such software enough to ‘trust’ its outputs, this will not assist with providing an explanation.
Further, if the lawyer does not or cannot place complete confidence in the results an automated system produces, then he or she
may end up undertaking a review which nullifies any efficiency benefits of using the system in the first place.
157
This final section considers three possible avenues for moral action that lawyers may take at this point; they are neither
mutually exclusive, nor is any one a complete answer. They are: to seek the client’s informed consent to the use of the
technology; to ‘supervise’ the technology; or to seek to excise responsibility for the technology altogether, via unbundling or
limiting the scope of the lawyer’s retainer.
It is not clear that lawyers’ use of automated systems must in all circumstances be disclosed to clients. There are two elements
to this: one involves client decision-making, and the other concerns fairness in costs disclosure. In terms of decision-making,
while arguments are made that the client’s informed consent is imperative,
158
if the lawyer is (in any event) legally responsible
(to a high standard) for the advice, documents and so on provided to the client, then arguably how that advice was arrived at is
irrelevant. The information that the lawyer gives should be directed to assisting the client in determining what their best interests
149
For NSW and Victorian lawyers, informed consent is codified: Legal Services Council, Australian Solicitors’ Conduct Rules 2015, rr
7.1–7.2 (‘A solicitor must provide clear and timely advice to assist a client to understand relevant legal issues and to make informed
choices about action to be taken during the course of a matter, consistent with the terms of the engagement’ and ‘inform the client or the
instructing solicitor about the alternatives to fully contested adjudication of the case which are reasonably available to the client’).
150
Kroll, “Accountable Algorithms,” 638.
151
Millar, “Delegation, Relinquishment, and Responsibility”; Parker, “Ethical Infrastructure of Legal Practice,” 163.
152
Crawford, The AI Now Report, 19.
153
Crawford, The AI Now Report, 19.
154
Williamson, “Getting Real.”
155
For example, Rane, “The Balance.”
156
Goodman, “Impacts of Artificial Intelligence,” 160.
157
Medianik, “Artificially Intelligent Lawyers,” 1528–1529 (advocating for review that is not overly burdensome).
158
Jacobowitz, “Happy Birthday Siri,” 416–417.
are, so that the lawyer can advance them.
159
The information must therefore help the client assess the risks and drawbacks of
the proposed course of action.
160
This all might hinge on whether the lawyer’s decision to use AI relates to subject matter
(which must be subject to the client’s instructions) or to tactics and procedure (where the case law is less straightforward and,
depending on the context, advocates’ immunity may apply).
161
In some circumstances, the use of AI may not be apparent to
clients—for example, if lawyers or firms are using automated drafting software to create ‘first drafts’ of documents or when
undertaking review of documents using ML programs. The lawyer may not wish to disclose use of AI software as this may be
viewed as diminishing their time and effort and, correspondingly, as not warranting the fees being charged. That said,
lawyers’ fees are always subject to fair disclosure and reasonableness requirements. Lawyers may also want to know the
software’s output first, particularly if the output is that the lawyer, and/or the lawyer’s firm, is not best placed to act for the
client due to a poor previous success rate with similar types of cases
162
(or whatever parameters are being measured).
A lawyer’s duties of supervision may also intersect with the use of automated systems. Parallels can be drawn with the
outsourcing of legal work to third parties: the ABA Model Rules indicate that lawyers remain responsible for work outsourced
and must be competent for its consequent review.
163
Lawyers must also understand when it is appropriate to outsource work.
164
Outsourcing and the general ‘commoditisation’ of legal services are not new phenomena. Rostain has noted, for example, that:
the processes underlying the provision of legal services, once centralized in law firms, have been disaggregated and outsourced.
In litigation, for example, law firms have developed supply chains that rely on outside consultants, contract lawyers, and non-
lawyer service providers to standardize tasks that were at one time performed by associates.
165
Medianik has also suggested that the ABA rules around outsourcing could be used as a guide that informs how AI should be
managed under the professional conduct rules.
166
She suggests that lawyers should treat AI ‘like a junior associate’ and carry
out their usual supervisory role.
167
Yet, if lawyers cannot independently evaluate the functioning of the software, this is
undeniably different to supervising a junior.
168
Medianik’s proposal also relies on the use of technology that is ‘qualified
through requisite programming’,
169
but does not explain how this ‘qualification’ could be verified or standardised. This
leaves unanswered more pertinent questions concerning the design of such systems, including how lawyers can trust their
operation.
Finally, consider further the decomposition of legal work. Limited-scope representation or ‘unbundling’ is where the lawyer
performs some tasks but not others, with the scope of work clearly delineated in the retainer. It is permitted and indeed
encouraged in some jurisdictions
170
as a means of improving access to justice and reducing the cost of legal services but is
disallowed in others. In Australia, it is not clear that a retainer that limits the lawyer’s scope of work can be effective to guard
against breaches of the professional conduct rules.
171
If allowed in relation to AI, unbundling could permit the lawyer to excise
responsibility for certain tasks (which would be performed by the AI program) from the scope of the retainer.
This might work effectively for some legal work. For example, TAR requires lawyers to be involved in ‘training’ the ML
system to identify documents correctly, where the bulk of ‘review’ is then performed by the machine with checks or oversight
subsequently completed by lawyers.
172
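To make this division of labour concrete, the following is a minimal, hypothetical sketch of a predictive-coding loop, assuming scikit-learn, a handful of toy documents and a lawyer-supplied seed set; it is an illustration of the workflow described above, not a description of any vendor’s product.

```python
# Hypothetical sketch of technology-assisted review (TAR): the lawyer labels a seed set,
# the machine scores the remaining documents, and the lawyer checks the machine's calls.
# Assumes scikit-learn; the documents, labels and names are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_documents = [
    "indemnity clause in the master supply agreement",  # lawyer marked relevant
    "lunch order and car park booking for the team",    # lawyer marked not relevant
]
seed_labels = [1, 0]

unreviewed_documents = [
    "draft indemnity rider for the services agreement",
    "visitor car park access form",
]

# 'Training' the system: the lawyer's seed judgements become the model's ground truth.
vectoriser = TfidfVectorizer()
classifier = LogisticRegression().fit(vectoriser.fit_transform(seed_documents), seed_labels)

# The bulk of the 'review' is then performed by the machine, as a relevance score per document.
scores = classifier.predict_proba(vectoriser.transform(unreviewed_documents))[:, 1]

# The lawyer's residual role: oversight of the ranked output, starting with the strongest calls.
for document, score in sorted(zip(unreviewed_documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {document}")
```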
Here, the elements performed by humans and those undertaken by the automated system
can be clearly delineated. In other cases, however, there may be blurring or overlap between tasks—particularly if, say, the
lawyer relies on contract-review software to review a document and the software identifies only portions of the contract as
159
Dinerstein, “Client-centered Counseling.”
160
Dal Pont, Lawyers’ Professional Responsibility, 153, citing Samper v Hade (1889) 10 LR (NSW) 270, 273; Lysaght Bros & Co Ltd v
Falk (1905) 2 CLR 421, 439.
161
See generally Byrne, “A Death by a Thousand Cuts.”
162
McGinnis, “The Great Disruption,” 3054.
163
American Bar Association, “Model Rules of Professional Conduct,” r 1.1.
164
American Bar Association, “Model Rules of Professional Conduct,” r 5.3; Medianik, “Artificially Intelligent Lawyers,” 1501.
165
Rostain, “Robots versus Lawyers,” 565–567.
166
Medianik, “Artificially Intelligent Lawyers.”
167
Medianik, “Artificially Intelligent Lawyers.”
168
Remus, “Predictive Coding”; Whittfield, “Ethics of Artificial Intelligence in Law.”
169
Medianik, “Artificially Intelligent Lawyers.”
170
California Courts, “Limited-scope Representation.”
171
Legg, “Recognising a New Form of Legal Practice.”
172
Davey, “Predictive Coding.”
other than standard. It is unclear whether the lawyer can shape the retainer so as to not review the other parts.
173
In Australia,
courts have indicated that limiting the retainer remains subject to the lawyer independently evaluating the client’s understanding
and its business position:
174
[The solicitor’s] retainer would have extended beyond the formal or mechanical tasks of preparing the loan agreements and
mortgages. [He] could not fulfil his duty without ascertaining the extent of the risk his client wished to assume in the
transactions, evaluating the extent of the risks involved in the transactions and advising in that regard.
175
In the case of AI tools, a limited retainer may work for sophisticated clients who can evaluate the risk of not having lawyer
review. Indeed, it seems likely that it is these clients (who are large and influential) who are primarily driving the uptake of AI-
assisting tools among law firms.
176
In the case of less astute clients, however, it is not clear that a lawyer can proceed under a
limited-scope retainer, complicating the argument for AI enhancing access to justice. Returning to the issues foreshadowed
above in relation to Component II (judgement), it is equally unclear how a lawyer is able to ‘evaluate the extent of the risks
involved’ if he or she is completing only part of the work.
Conclusion
Consideration of the legal professional ethics and regulatory implications of the increasing use of AI or automated systems in
legal practice is in its early stages. This article has sought to take a different approach to this kaleidoscope of intersecting issues
by focusing on the social psychological elements that underpin the regulation of professionals. This paper analysed lawyers’
use of AI through Rest’s FCM (extended by Breakey) of the psychological components for ethical behaviour, encompassing
awareness, judgement, decision-making, and action and achievement. These elements are fundamental to regulation, which
relies upon, inter alia, its targets (lawyers) having both motivation and capacity to uphold their professional obligations. We
suggest that it is only when these features are supported that regulation will be legitimate and effective—in terms of both the
rules and related education and training. To support rule of law values, the laws that govern legal practitioners in their use of
AI must be clear, certain and adequately publicised
177
to ensure that lawyers know what is required of them and how the
disciplinary and malpractice regimes operate. Individuals can then conduct their practices with a satisfactory level of security,
supporting professional efficacy and a ‘greater good’ orientation.
178
Of course, AI is entering and contributing to a complicated context for lawyers’ professional identities and behaviour—and,
therefore, for their effective regulation. Even before the arrival of AI tools in legal practice, the task of professional regulation,
both for ensuring standards and for securing monopoly, was already more difficult than ever. The profession’s special promise of ethicality
and competence is difficult to quantify and deploy as part of the regulative bargain, both to justify to the state that the profession
deserves monopoly protection and to validate to clients that using a professional’s services is better than using those of a non-
professional. Conceptions of desirable role identity (the meaning of being a professional), of achieving high standards in one’s
work, and of the ‘fair bargain’ (i.e., professional obligations in return for professional status) are all further challenged by the
use of AI in law. As demonstrated, the environment in which lawyers work is also important. The large and global firms and
in-house corporate legal departments, where much AI use is being promoted and developed, already complicate lawyers’ ethical
practice. Lawyers may not be able to choose whether they use automated systems in their work, or have the opportunity to
understand how these machines actually (or could) function.
Against this backdrop, the combination of professional rules, the general law and the context of AI’s development and
regulation may not be sufficient to incentivise and otherwise influence responsible use of such technologies by lawyers.
Seemingly little clarification, education and regulatory guidance are being proffered to legal practitioners, which especially increases
173
It is also unclear that this is wise, given that contractual terms are generally interdependent and cannot be dissociated from the contract
as a whole.
174
Robert Bax & Associates v Cavenham Pty Ltd [2013] 1 Qd R 476, cited by Legg, “Recognising a New Form of Legal Practice.”
175
Robert Bax & Associates v Cavenham Pty Ltd [2013] 1 Qd R 476, 490 [54]. Note, it seems clear that Muir JA was distinguishing
between sophisticated and unsophisticated clients.
176
Macquarie Bank, An Industry in Transition.
177
Zimmerman, Western Legal Theory, 91‒92, defining ‘rule of law’.
178
While noting the ways in which the public interest has sometimes been deployed by the professions to maintain exclusivity, extensive
empirical behavioural ethics research has shown how ethical behaviour diminishes under conditions of stress and uncertainty. For some of
this research in the legal context, see Robbennolt, “Behavioral Legal Ethics,” 1140‒1143. Indeed, the self-regulatory (or monopoly
protections) model maintains its supporters—those who argue that getting on with being a professional (exercising independent judgement
and contributing to the advancement of professional knowledge)—is incompatible with intense insecurity, competition and status-seeking.
For a full discussion, see Rogers, “Large Professional Service Firm,” Part II.
(as demonstrated) the complexity of the stages of awareness and judgement. Ensuring that lawyers are able to adhere to the set
standards of ethics and competence requires both capacity and motivation for individual professionals and their workplaces.
This includes the necessary skill and motivation to continue to strive for a professional identity, and to subject themselves to
statutory and disciplinary regimes beyond those that apply to non-professionals.
Rest resisted framing the FCM as a presentation of the ideal person.
179
Nevertheless, some writers, including in law, now see
this reticence as a missed opportunity—that the FCM represents not just an explanation of ethical failure, but a gold standard
for the morally good person.
180
Indeed, the influx of AI into professionalised occupations such as law heightens the need for
human skills, as at present AI cannot undertake moral reasoning. The FCM helps regulators, as well as lawyers, heads of legal
practice and legal educators clarify their ‘moral ambitions as well as their images of the “successful” professional’.
181
Moreover,
it can be used to evaluate ethical education and training, as well as changes to regulation, so that lawyers do not shoulder the entire
burden of responsibility for AI alone. Right now, this responsibility is neither straightforward, nor does it encourage high
standards.
Bibliography
Primary Sources
Cass v 1410088 Ontario Inc (2018) ONSC 6959.
Robert Bax & Associates v Cavenham Pty Ltd [2013] 1 Qd R 476.
Secondary Sources
Addady, Michal. “Meet Ross, the World’s First Robot Lawyer.” Fortune, May 12, 2016.
http://fortune.com/2016/05/12/robot-lawyer/?iid=leftrail
Alarie, Benjamin, Anthony Niblett and Albert Yoon. “How Artificial Intelligence Will Affect the Practice of Law.”
University of Toronto Law Journal 68, supp 1 (2018): 106–124. https://doi.org/10.3138/utlj.2017-0052
Alfieri, Anthony Victor. “The Fall of Legal Ethics and the Rise of Risk Management.” Georgetown Law Journal 94, no 6
(2006): 1909–1955.
Ambrogi, Robert. “Tech Competence.” LawSites (blog). November 16, 2017. https://www.lawsitesblog.com/tech-
competence
American Bar Association. “Model Rules of Professional Conduct.” December 4, 2018.
https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/mod
el_rules_of_professional_conduct_table_of_contents/
–––––. Commission on Ethics 20/20: Report to the House of Delegates (American Bar Association, May 8, 2012).
Armstrong, Ben, David Bushby and Eric Chin. “0 to 111: Funding Australia’s LegalTech Market.” Alpha Creates, March 17,
2019. https://alphacreates.com/0-to-111-funding-australias-legaltech-market/
Arruda, Andrew. “An Ethical Obligation to Use Artificial Intelligence? An Examination of the Use of Artificial Intelligence
in Law and the Model Rules of Professional Responsibility.” American Journal of Trial Advocacy 40, no 3 (2017): 443–458.
Australian Human Rights Commission. Artificial Intelligence: Governance and Leadership Whitepaper (Australian Human
Rights Commission, February 1, 2019). https://www.humanrights.gov.au/our-work/rights-and-
freedoms/publications/artificial-intelligence-governance-and-leadership
Australian Securities and Investments Commission. “Fintech Regulatory Sandbox.” Last modified May 1, 2019.
https://asic.gov.au/for-business/innovation-hub/fintech-regulatory-sandbox/
Bartlett, Francesca and Linda Haller. “Legal Services Regulation in Australia: Innovative Co-regulation.” In International
Perspectives on the Regulation of Lawyers and Legal Services, edited by Andrew Boon, 161–184. Portland: Hart, 2017.
Barton, Benjamin. “The Lawyer’s Monopoly: What Goes and What Stays.” Fordham Law Review 82, no 6 (2014): 3067–3090.
Barton, Benjamin and Stephanos Bibas. Rebooting Justice: More Technology, Fewer Lawyers, and the Future of Law. New
York: Encounter Books, 2017.
Beames, Emma. “Technology-based Legal Document Generation Services and the Regulation of Legal Practice in Australia.”
Alternative Law Journal 42, no 4 (2017): 297‒303. https://doi.org/10.1177%2F1037969X17732709
Bebeau, Muriel J. “The Defining Issues Test and the Four Component Model: Contributions to Professional Education.”
Journal of Moral Education 31, no 3 (2002): 271–295. https://doi.org/10.1080/0305724022000008115
179
Rest, “An Overview of the Psychology of Morality,” 5.
180
Hamilton, “Assessing Professionalism,” 487; Curzer, “Tweaking the Four-component Model,” 105–106.
181
Rest, “A Psychologist Looks,” 34.
Bebeau, Muriel J., James Rest and Darcia Narvaez. “Beyond the Promise: A Perspective on Research in Moral Education.”
Educational Researcher 28, no 4 (1999): 18–26. https://doi.org/10.3102/0013189X028004018
Bell, Felicity, Justine Rogers and Michael Legg. “Artificial Intelligence and Lawyer Wellbeing.” In The Impact of
Technology and Innovation on the Well-being of the Legal Profession, edited by Janet Chan, Michael Legg and Prue
Vines. Cambridge: Intersentia, forthcoming.
Bennett, Judith, Tim Miller, Julian Webb, Rachelle Bosua, Adam Lodders and Scott Chamberlain. Current State of
Automated Legal Advice Tools: Discussion Paper 1 (Networked Society Institute, University of Melbourne, April 2018).
Bennett Moses, Lyria. “Regulating in the Face of Sociotechnical Change.” In The Oxford Handbook of Law, Regulation and
Technology, edited by Roger Brownsword, Eloise Scotford and Karen Yeung, 573–596. Oxford: Oxford University Press,
2017.
Blasi, Augusto. “Bridging Moral Cognition and Moral Action: A Critical Review of the Literature.” Psychological Bulletin
88, no 1 (1980): 1–45. https://doi.org/10.1037/0033-2909.88.1.1
Boon, Andrew. The Ethics and Conduct of Lawyers in England and Wales. 3rd ed. London: Hart, 2014.
–––––. “From Public Service to Service Industry: The Impact of Socialisation and Work on the Motivation and Values of
Lawyers.” International Journal of the Legal Profession 12, no 2 (2005): 229–260.
https://doi.org/10.1080/09695950500226599
Bornstein, Aaron. “Is Artificial Intelligence Permanently Inscrutable?” Nautilus, September 1, 2016.
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
Boyd, Shea. “The Attorney’s Ethical Obligations with Regard to the Technologies Employed in the Practice of Law.”
Georgetown Journal of Legal Ethics 29, no 4 (2016): 849–866.
Breakey, Hugh. “Building Ethics Regimes: Capabilities, Obstacles and Supports for Professional Ethical Decision-making.”
University of NSW Law Journal 40, no 1 (2017): 322–352.
Brownsword, Roger, Eloise Scotford and Karen Yeung. “Law, Regulation, and Technology: The Field, Frame, and Focal
Questions.” In The Oxford Handbook of Law, Regulation and Technology, edited by Roger Brownsword, Eloise Scotford
and Karen Yeung, 3–40. Oxford: Oxford University Press, 2017.
Burrell, Jenna. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data and Society
3, no 1 (2016): 1–12. https://doi.org/10.1177/2053951715622512
Byrne, Corey. “A Death by a Thousand Cuts: The Future of Advocates’ Immunity in Australia.” Journal of Judicial
Administration 28, no 2 (2018): 98–121.
Cabitza, Federico. “Breeding Electric Zebras in the Fields of Medicine.” Proceedings of the IEEE Workshop on the Human
Use of Machine Learning, Venice, Italy, December 16, 2016. https://arxiv.org/pdf/1701.04077
Cabral, James, Abhijeet Chavan, Thomas Clarke, John Greacen, Bonnie Rose Hough, Linda Rexer, Jane Ribadeneyra and
Richard Zorza. “Using Technology to Enhance Access to Justice.” Harvard Journal of Law and Technology 26, no 1
(2012): 241–324.
California Courts. “Limited-scope Representation.” 2019. http://www.courts.ca.gov/1085.htm?rdeLocaleAttr=en
Calo, Ryan. “Artificial Intelligence Policy: A Primer and Roadmap.” University of California Davis Law Review 51, no 2
(2017): 399–435.
Campbell, Iain and Sara Charlesworth. “Salaried Lawyers and Billable Hours: A New Perspective From the Sociology of
Work.” International Journal of the Legal Profession 19, no 1 (2012): 89–122.
https://doi.org/10.1080/09695958.2012.752151
Carlson, Alyssa. “The Need for Transparency in the Age of Predictive Sentencing Algorithms.” Iowa Law Review 103, no 1
(2017): 303–329.
Chester, Simon. “How Tech is Changing the Practice of Law: Watson, AI, Expert Systems, and More.” Pacific Legal
Technology Conference, Vancouver, October 2, 2015.
Chin, Eric, Graeme Grovum and Matthew Grace. State of Legal Innovation in the Australian Market (Alpha Creates, 2019).
Citron, Danielle Keats. “Technological Due Process.” Washington Law Review 85, no 6 (2008): 1249–1313.
Crawford, Kate, Meredith Whittaker, Madeleine Clare Elish, Solon Barocas, Aaron Plasek and Kadija Ferryman. The AI Now
Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-term (AI Now Institute,
September 22, 2016). https://ainowinstitute.org/AI_Now_2016_Report.html
Creely, Henry. “Neuroscience, Artificial Intelligence, CRISPR—and Dogs and Cats.” University of California Davis Law
Review 51, no 5 (2018): 2303–2330.
Curzer, Howard. “Tweaking the Four-component Model.” Journal of Moral Education 43, no 1 (2014): 104–123.
https://doi.org/10.1080/03057240.2014.888991
Dal Pont, Gino. Lawyers’ Professional Responsibility. 6th ed. Pyrmont: Lawbook Company, 2017.
Davey, Thomas and Michael Legg. “Predictive Coding: Machine Learning Disrupts Discovery.” Law Society of NSW
Journal, no 32 (2017): 82–84.
Dawson, Dane, Emma Schleiger, Joanna Horton, John McLaughlin, Cathy Robinson, George Quezada, Jane Snowcroft and
Stefan Hajkowicz. Artificial Intelligence: Australia’s Ethics Framework (Data61 CSIRO, 2019).
Derkley, Karin. “Regulatory ‘Sandbox’ to Encourage Legal Tech Tools.” Law Institute of Victoria, December 13, 2018.
https://www.liv.asn.au/Staying-Informed/LIJ/LIJ/December-2018/Regulatory--sandbox--to-encourage-legal-tech-tools
Dhar, Vasant. “The Future of Artificial Intelligence.” Big Data 4, no 1 (2016): 5–9.
https://doi.org/10.1089/big.2016.29004.vda
Dinerstein, Robert. “Client-centered Counseling: Reappraisal and Refinement.” Arizona Law Review 32, no 3 (1990): 501‒
604.
FAT/ML. “Fairness, Accountability, and Transparency in Machine Learning.” Last modified October 23, 2019.
http://www.fatml.org
Flood, John. “The Re-landscaping of the Legal Profession: Large Law Firms and Professional Re-regulation.” Current
Sociology 59, no 4 (2011): 507–529. https://doi.org/10.1177/0011392111402725
Gasser, Urs and Carolyn Schmitt. “The Role of Professional Norms in the Governance of Artificial Intelligence.” In The
Oxford Handbook of Ethics of AI, edited by Markus Dubber, Frank Pasquale and Sunit Das. Oxford University Press, in
press.
Goodman, Christine Chambers. “AI/Esq: Impacts of Artificial Intelligence in Lawyer–Client Relationships.” Oklahoma Law
Review 72, no 1 (2019): 149‒184.
Granfield, Robert and Thomas Koenig. “It’s Hard to be a Human Being and a Lawyer: Young Attorneys and the
Confrontation with Ethical Ambiguity in Legal Practice.” West Virginia Law Review 105 (2002): 495–524.
Grossman, Maura and Gordon Cormack. “Quantifying Success: Using Data Science to Measure the Accuracy of Technology-
assisted Review in Electronic Discovery.” In Data-driven Law: Data Analytics and the New Legal Services, edited by
Edward Walters, Chapter 6. Boca Raton: CRC Press, 2018.
Guihot, Michael, Anne Matthew and Nicolas Suzor. “Nudging Robots: Innovative Solutions to Regulate Artificial
Intelligence.” Vanderbilt Journal of Entertainment and Technology Law 20, no 2 (2017): 385–456.
Hadfield, Gillian and Deborah Rhode. “How to Regulate Legal Services to Promote Access, Innovation, and the Quality of
Lawyering.” Hastings Law Journal 67, no 5 (2016): 1191–1223.
Haidt, Jonathan. “The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment.”
Psychological Review 108, no 4 (2001): 814–834.
Hamilton, Neil. “Assessing Professionalism: Measuring Progress in the Formation of an Ethical Professional Identity.”
University of St. Thomas Law Journal 5, no 2 (2008): 470–511.
Hanlon, Gerard. Lawyers, the State and the Market: Professionalism Revisited. Basingstoke: Palgrave Macmillan, 1999.
High-level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy Artificial Intelligence (European
Commission, April 8, 2019).
Hildebrandt, Mireille. “Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of
Statistics.” University of Toronto Law Journal 68, supp 1 (2018): 12–35. https://doi.org/10.3138/utlj.2017-0044
Jenson, Karin, Coleman Watson and James Sherer. “Ethics, Technology, and Attorney Competence.” Unpublished
manuscript, BakerHostetler, 2015.
Karlsson-Vinkhuyzen, Sylvia. “Global Regulation Through a Diversity of Norms: Comparing Hard and Soft Law.” In
Handbook on the Politics of Regulation, edited by David Levi-Faur, 604–614. Cheltenham: Edward Elgar, 2011.
Karnow, Curtis. “The Application of Traditional Tort Theory to Embodied Machine Intelligence.” In Robot Law, edited by
Ryan Calo, Michael Froomkin and Ian Kerr, 51–77. Cheltenham: Edward Elgar, 2016.
Kroll, Joshua, Joanna Huey, Solon Barocas, Edward Felten, Joel Reidenberg, David Robinson and Harlan Yu. “Accountable
Algorithms.” University of Pennsylvania Law Review 165, no 3 (2017): 633–706.
Jacobowitz, Jan and Justin Ortiz. “Happy Birthday Siri! Dialing in Legal Ethics for Artificial Intelligence, Smartphones, and
Real Time Lawyers.” Texas A&M Journal of Property Law 4, no 5 (2018): 407–442.
Law Society of New South Wales. “Continuing Professional Development.” Last modified October 17, 2019.
https://www.lawsociety.com.au/practising-law-in-NSW/working-as-a-solicitor-in-NSW/your-practising-certificate/CPD
Law Society of New South Wales. The Future of Law and Innovation in the Profession (Law Society of NSW, 2017).
Legal Services Council. Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (NSW Government, May
23, 2015).
Legg, Michael. “Recognising a New Form of Legal Practice: Limited Scope Services.” Law Society of NSW Journal 50
(2018): 74–76.
Lehr, David and Paul Ohm. “Playing with the Data: What Legal Scholars Should Learn About Machine Learning.”
University of California Davis Law Review 51, no 2 (2017): 653–717.
Le Mire, Suzanne, Adrian Evans and Christine Parker. “From Scandal to Scrutiny: Ethical Possibilities in Large Law Firms.”
Legal Ethics 11, no 2 (2008): 131–136. https://doi.org/10.1080/1460728X.2008.11423908
Le Mire, Suzanne and Rosemary Owens. “A Propitious Moment: Workplace Bullying and Regulation of the Legal
Profession.” University of NSW Law Journal 37, no 3 (2014): 1030–1061.
Macauley, Don. “What is a Lawyer’s Duty of Technology Competence?” Smart Lawyer, February 2, 2018.
http://www.nationaljurist.com/smartlawyer/what-lawyers-duty-technology-competence
Macquarie Bank. An Industry in Transition: 2017 Legal Benchmarking Results (Macquarie Bank, 2017).
Mancini, James. Protecting and Promoting Competition in Response to “Disruptive” Innovations in Legal Services (OECD
Directorate for Financial and Enterprise Affairs Competition Committee, June 13, 2016).
Markoff, John. “Armies of Expensive Lawyers, Replaced by Cheaper Software.” New York Times, March 4, 2011.
https://www.nytimes.com/2011/03/05/science/05legal.html
Mazzone, Erik and David Ries. “A Techno-ethics Checklist: Basics for Being Safe, Not Sorry.” ABA Law Practice 35, no 2
(2009): 45.
McGinnis, John and Russell Pearce. “The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers
in the Delivery of Legal Services.” Fordham Law Review 82, no 6 (2014): 3041–3066.
Medianik, Katherine. “Artificially Intelligent Lawyers: Updating the Model Rules of Professional Conduct in Accordance
with the New Technological Era.” Cardozo Law Review 39, no 4 (2018): 1497–1530.
Millar, Jason and Ian Kerr. “Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots.” In Robot Law,
edited by Ryan Calo, Michael Froomkin and Ian Kerr, 102–130. Cheltenham: Edward Elgar, 2016.
Minnameier, Gerhard. “Deontic and Responsibility Judgments: An Inferential Analysis.” In Handbook of Moral Motivation,
edited by Karen Heinrichs, Fritz Oser and Terence Lovat, 69–82. Rotterdam: Sense Publishers, 2013.
Moorhead, Richard and Victoria Hinchly. “Professional Minimalism? The Ethical Consciousness of Commercial Lawyers.”
Journal of Law and Society 42, no 3 (2015): 387–412. https://doi.org/10.1111/j.1467-6478.2015.00716.x
Myers West, Sarah, Meredith Whittaker and Kate Crawford. Discriminating Systems: Gender, Race, and Power in AI (AI
Now Institute, April 2019).
Narvaez, Darcia and James Rest. “The Four Components of Acting Morally.” In Moral Development: An Introduction, edited
by William Kurtines and Jacob Gewirtz, 385–400. Boston: Allyn and Bacon, 1995.
New South Wales Government. Legal Profession Uniform Continuing Professional Development (Solicitors) Rules 2015
(NSW Government, May 27, 2015).
Parker, Christine, Adrian Evans, Linda Haller, Suzanne Le Mire and Reid Mortensen. “The Ethical Infrastructure of Legal
Practice in Larger Law Firms: Values, Policy and Behaviour.” University of NSW Law Journal 31, no 1 (2008): 158–188.
Parker, Christine and Lyn Aitken. “The Queensland ‘Workplace Culture Check’: Learning from Reflection on Ethics Inside
Law Firms.” Georgetown Journal of Legal Ethics 24, no 2 (2011): 399–441.
Pasquale, Frank. “Restoring Transparency to Automated Authority.” Journal on Telecommunications and High Technology
Law 9, no 1 (2011): 235–254.
–––––. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University
Press, 2015.
Patrice, Joe. “BakerHostetler Hires AI Lawyer, Ushers in the Legal Apocalypse.” Above the Law, May 12, 2016.
https://abovethelaw.com/2016/05/bakerhostetler-hires-a-i-lawyer-ushers-in-the-legal-apocalypse/
Perlman, Andrew. “The Twenty-first Century Lawyer’s Evolving Ethical Duty of Competence.” Professional Lawyer 22, no
4 (2014): 24–30.
Rane, Sharayu. “The Balance: Accuracy vs Interpretability.” Towards Data Science, December 3, 2018.
https://towardsdatascience.com/the-balance-accuracy-vs-interpretability-1b3861408062
Remus, Dana. “Reconstructing Professionalism.” Georgia Law Review 51, no 3 (2017): 807–877.
–––––. “The Uncertain Promise of Predictive Coding.” Iowa Law Review 99, no 4 (2014): 1694–1724.
Rest, James. “A Psychologist Looks at the Teaching of Ethics.” The Hastings Center Report 12, no 1 (1982): 29–36.
https://doi.org/10.2307/3560621
–––––. “Evaluating Moral Development.” In Promoting Values Development in College Students, edited by Jon Dalton, 77–90.
Columbus: National Association of Student Personnel Administrators, 1985.
Rest, James, Muriel Bebeau and Joseph Volker. “An Overview of the Psychology of Morality.” In Moral Development:
Advances in Research and Theory, edited by James Rest, 1–39. New York and London: Praeger, 1986.
Robbennolt, Jennifer and Jean Sternlight. “Behavioral Legal Ethics.” Arizona State Law Journal 45, no 3 (2013): 1107–1182.
Rogers, Justine, Dimity Kingsford Smith and John Chellew. “The Large Professional Service Firm: A New Force in the
Regulative Bargain.” University of NSW Law Journal 40, no 1 (2017): 218–261.
Rostain, Tanina. “Robots versus Lawyers: A User-centered Approach.” Georgetown Journal of Legal Ethics 30, no 3 (2017):
559–574.
Select Committee on Artificial Intelligence. AI in the UK: Ready, Willing and Able? (House of Lords: HL Paper 100, April
16, 2018).
Semmler, Sean and Zeeve Rose. “Artificial Intelligence: Application Today and Implications Tomorrow.” Duke Law and
Technology Review 16, no 1 (2017): 85–99.
Scherer, Matthew. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” Harvard
Journal of Law and Technology 29, no 2 (2016): 353–400. http://doi.org/10.2139/ssrn.2609777
Shaffer, Gregory and Mark Pollack. “Hard vs Soft Law: Alternatives, Complements, and Antagonists in International
Governance.” Minnesota Law Review 94, no 3 (2010): 706–799.
Sheppard, Brian. “Does Machine-learning-powered Software Make Good Research Decisions? Lawyers Can’t Know for
Sure.” American Bar Association Journal, November 22, 2016.
http://www.abajournal.com/legalrebels/article/does_machine-learning-
powered_software_make_good_research_decisions_lawyers
Simon, William. The Practice of Justice: A Theory of Lawyers’ Ethics. Cambridge: Harvard University Press, 1998.
Solicitors Regulation Authority. “SRA Innovate.” Last modified October 10, 2019.
https://www.sra.org.uk/solicitors/innovate/sra-innovate.page
Solum, Lawrence. “Legal Personhood for Artificial Intelligences.” North Carolina Law Review 70, no 4 (1992): 1231–1287.
Sommerlad, Hilary. “‘A Pit to Put Women in’: Professionalism, Work Intensification, Sexualisation and Work Life Balance
in the Legal Profession in England and Wales.” International Journal of the Legal Profession 23, no 1 (2016): 61–82.
https://doi.org/10.1080/09695958.2016.1140945
–––––. “The Implementation of Quality Initiatives and the New Public Management in the Legal Aid Sector in England and
Wales: Bureaucratisation, Stratification and Surveillance.” International Journal of the Legal Profession 6, no 3 (1999):
311–343. https://doi.org/10.1080/09695958.1999.9960469
Susskind, Richard. Tomorrow’s Lawyers: An Introduction to Your Future. 2nd ed. Oxford: Oxford University Press, 2017.
Susskind Richard and Daniel Susskind. The Future of the Professions. Oxford: Oxford University Press, 2015.
Thoma, Stephen, Muriel Bebeau and Darcia Narvaez. “How Not to Evaluate a Psychological Measure: Rebuttal to Criticism
of the Defining Issues Test of Moral Judgment Development by Curzer and Colleagues.” Theory and Research in
Education 14, no 2 (2016): 241–249. https://doi.org/10.1177/1477878516635365
Tranter, Kieran. “The Laws of Technology and the Technology of Law.” Griffith Law Review 20, no 4 (2011): 753–762.
https://doi.org/10.1080/10383441.2011.10854719
–––––. “The Law and Technology Enterprise: Uncovering the Template to Legal Scholarship on Technology.” Law,
Innovation and Technology 3, no 1 (2011): 31–83. https://doi.org/10.5235/175799611796399830
Vladeck, David. “Machines Without Principals: Liability Rules and Artificial Intelligence.” Washington Law Review 89, no 1
(2014): 117–150.
Vozzola, Elizabeth. “The Case for the Four Component Model vs Moral Foundations Theory: A Perspective from Moral
Psychology.” Mercer Law Review 68, no 3 (2017): 633–648.
Walsh, Toby. 2062: The World That AI Made. Carlton: La Trobe University Press, 2018.
–––––. “Why We Need to Start Preparing for Our Robotic Future Now.” UNSW Newsroom, November 7, 2017.
https://newsroom.unsw.edu.au/news/science-tech/why-we-need-start-preparing-our-robotic-future-now
Waye, Vicki, Martie-Louise Verreynne and Jane Knowler. “Innovation in the Australian Legal Profession.” International
Journal of the Legal Profession 25, no 2 (2018): 213–242. https://doi.org/10.1080/09695958.2017.1359614
Welsh, Lucy. “The Effects of Changes to Legal Aid on Lawyers’ Professional Identity and Behaviour in Summary Criminal
Cases: A Case Study.” Journal of Law and Society 44, no 4 (2017): 559–585. https://doi.org/10.1111/jols.12058
Wendel, Bradley. “The Promise and Limitations of Artificial Intelligence in the Practice of Law.” Oklahoma Law Review 72,
no 1 (2019): 21‒50.
Whittfield, Cameron. “The Ethics of Artificial Intelligence in Law.” PWC Digital Pulse, March 28, 2017.
https://www.digitalpulse.pwc.com.au/artificial-intelligence-ethics-law-panel-pwc/
Williamson, Mark. “Getting Real About Artificial Intelligence at Law Firms.” Law360, November 3, 2017.
https://www.law360.com/articles/976805/getting-real-about-artificial-intelligence-at-law-firms
Zimmerman, Augusto. Western Legal Theory: History, Concepts and Perspectives. Chatswood: LexisNexis Butterworths,
2013.
... Lawyers have an ethical responsibility to ensure competence and diligence (Nunez, 2017). This may indicate an added responsibility placed on lawyers to understand the logic used by AI and the capabilities and limitations of AI systems (Rogers & Bell, 2019;Scherer, 2015). However, it is a major challenge for lawyers to understand the intricacies of a particular AI being used as lawyers are not known to be legal technologist (Susskind, 2017). ...
... However, there are certain areas of law where AI will not be able to entirely replace human lawyers. This is especially in criminal and family law practice where human interaction is considered a vital aspect in legal work (Rogers & Bell, 2019). In addition, currently AI is not able to provide oral representation for clients in courts. ...
Article
Full-text available
The article endeavors to analyze the implications of artificial intelligence (AI) in the legal fraternity. There have been various reports on the impact and challenges of AI in the legal fraternity in recent years. AI is used to perform legal work previously completed solely by human lawyers. The rise of AI technology has caused a great deal of apprehension among members of the legal fraternity both in Malaysia and globally. AI promises to disrupt the substratum of how legal work is practiced and delivered. Nevertheless, there are implications encountered by the legal fraternity in adopting AI in legal practice such as ethical responsibility, algorithm bias, data privacy and the lack of regulations for AI. The doctrinal method was employed in conducting this study. The primary objective of this article is to evaluate the implications of AI adoption in the legal fraternity and to propose recommendations for better integration of AI in the legal industry.
... The ABA underlined that professionals should investigate the ethical impacts of AI and uphold the client's interest, which is the ultimate responsibility of a lawyer (Rogers & Bell, 2019). AI depends on big data creating privacy issues and necessities strict security measures (Tom et al., 2020). ...
Article
Full-text available
Artificial Intelligence (AI) is substituting human decision-making in every aspect of life where law stands with no exception. New technological trends offer expeditious and cost-effective AI tools yet confront challenges such as privacy invasion, bias, fairness, and hallucinations, necessitating regulatory oversight. Like other countries, the USA and Pakistan have initiated AI solutions in their legal domain. A strong regulatory oversight is indispensable for its legitimacy and efficiency. Based on their functions and ethical considerations, AI tools in the legal profession face competing opinions. With qualitative research methodology, the research aims to explore how AI is transforming and reshaping the legal regime, focused on the comparative analysis of the USA and Pakistan. The research paper critically examines the legal frameworks and impacts of AI solutions and how both countries navigate the complexities of AI-based decision-making.
... Besides, Artificial Intelligence is also used as a lawyer's tactic in dealing with the law of events when the case is unclear in terms of its jurisdictional facts. It allows the right of immunity to be exercised (Rogers & Bell, 2019). Australia has used its technological capabilities to assist its lawyer's duties in both deposition issues, Artificial Intelligence to analyze cases under consideration, and other administrative needs. ...
Article
Full-text available
Advocates are one part of the Law Enforcement Officers who have rights and obligations that must be obeyed by each party. Advocate Immunity Rights which is a right that states advocates cannot be prosecuted civilly or criminally in carrying out their professional duties in good faith for the benefit of client defense. Problems are how is the influence of legal sociology in supporting the professional duties of advocates? and how is the applicability of advocate immunity rights in accordance with the indonesian advocates law and the australian solicitor studied based on legal compliance theory? The purpose of this study is to answer the various problems of this research. Normative juridical method with a comparative study approach in Indonesia and Australia. The essential influence of legal sociology in supporting the professional duties of advocates is because advocates will be faced with diverse community cultures. Legal compliance theory of advocate immunity rights in accordance with Advocate Law Number 18 of 2003 is still often ignored by other law enforcement officials. Advocates are still often criminalized in carrying out professional duties, namely defending their clients, unlike in Australia which prioritizes immunity rights. This is a special concern for each of the law enforcement institution to respect each other, in order to create fair law enforcement
Article
Full-text available
La Inteligencia Artificial está a punto de cambiar nuestras vidas. Todos los sectores se verán implicados, incluida la enseñanza del Derecho. En este punto, uno de los métodos introducidos ha sido los Sistemas de Expertos, programas capaces de resolver los problemas jurídicos de manera interactiva a través de una lógica determinada. Aunque de escaso estudio en nuestro país, los SEJ aspiran a convertirse en una herramienta muy útil para el aprendizaje de determinadas habilidades jurídicas. Para facilitar su comprensión, el presente artículo propone resolver un caso práctico de derechos reales utilizando la lógica subyacente al SEJ con el objetivo de visibilizar sus fallas y virtudes.
Article
Full-text available
The emerging field of Artificial Intelligence (AI) has the potential to not only aid, but also transform and potentially replace human decision-making in a wide range of areas, including the legal system. The integration of computer science and law, exemplified using artificial intelligence in legal decision-making, improves the efficiency of handling cases and promotes standardization in legal procedures, while strengthening the organization of legal information. This paper expands on previous research in the field of judicial prediction and presents the first comprehensive, reliable, and applicable Machine Learning (ML) model for predicting decisions issued by the Supreme Court of the United States. This represents a notable progress in the field of predictive analytics. This work conduct a thorough and comparative analysis of prediction results for various algorithms, including Perceptron, Logistic Regression (LR), Support Vector Machines (SVMs), Naïve Bayes (NB), k-Nearest Neighbors (k-NN), Multi-Layer Perceptron (MLP), Calibrated, and Ensemble Learning. The implemented models showcase the ability to accurately predict the results of legal systems, especially by utilising Ensemble techniques. Proposed research explores the integration of different ML and Ensemble learning techniques in the field of legal studies, which is experiencing tremendous technological advancements. It discusses how this technology has the potential to significantly transform the judicial process. These capabilities can greatly enhance decision-making in complex legal situations. This manuscript envisions a future judicial system where the use of ML technology greatly improves the efficiency and fairness of delivering justice.
Article
Artificial intelligence (AI) is rapidly evolving, influencing service industries and Professional Services Firms (PSFs) embracing AI to improve margins. In this paper, we reveal how AI impacts the characteristics and marketing practices of PSFs. AI has the potential to provide considerable efficiency and organizational benefits, yet simultaneously changes service attributes, threatens the competitive advantage of deep client relationships, and directly influences the marketing practices of PSFs, including pricing strategy. Based on an extensive literature review, we present a conceptual model illustrating the changes impacting the business model, marketing practices, and client relationships of PSFs.
Article
Full-text available
The 2030 Agenda for Sustainable Development builds upon the Millennium Development Goals while at the same time reaffirming the conclusions of the leading instruments in the field of human rights and international law. The 17 integrated and indivisible sustainable development goals (SDGs) require innovation through digitalization and legal activities. Digitalization and new technologies are crucial for SDG 8, 9, and 16. SDG 16: Peace, justice, and strong institutions directly focus on law. While SDG 16 does not directly mention it, digitalization is essential in achieving its specific targets. Examples include concepts of e-government (including data protection and public access to information), e-commerce, equal access to dispute resolution mechanisms in cyberspace, and enforcement of non-discriminatory laws for sustainable development. The right to a healthy and sustainable environment encompasses economic, social, and environmental aspects that SDGs capture. To achieve these goals, the 2030 Agenda relies on international law instruments. The right to a healthy and sustainable environment is developing towards an internationally recognized human right. As environmental goals do not recognize national borders, international law plays a key role. International environmental law should facilitate a broader application of existing clean technologies through the transfer of technology and examine the development of new technologies as to its compatibility with a sustainable environment. Moreover, the human right to share in scientific advancement and enjoy its benefits embodies equal access to technology. The legal enforcement of sustainable goals in the private and governmental sectors remains one of the main concerns of climate change.
Chapter
Technology advances the quality and efficiency of legal work, so failure to use technology results in ineffective service. AI is rapidly improving, driven by advances in software, computing power, and big data. AI already affects many areas of law, including contract analysis, legal research and e-discovery, with continual progress in technology stimulating the growth of AI. AAI algorithms will be the outcome of how AAI is run, which means that AAI objectives and ideology will dictate their objectivity and functionality. Presently, AI does not appear advanced enough to process cases in the same way as judges do, so AAI is required to find ways to simulate legal thinking with AI, and also to advance the legal acceptability of mathematical reasoning. Given that judges increasingly rely on determinations made by AI systems, AI technologies that influence judges also act as tools for behavior regulation. The role of AI in the legal process is shifting, and AI is gradually engaging in activity that would be criminal for a natural person, or even an artificial person such as a corporation.
Article
From around the millennial turn, Australia was to the fore among common law countries in the liberalisation of legal practice with a range of radical reforms, such as the ownership of firms by non-lawyers and listing on the stock exchange. Albeit not peculiar to Australia, technological innovations, including remote working, digitalised platforms and artificial intelligence (AI), are also dramatically changing the way law is practised. Invariably motivated by profit maximisation, the impact of these reforms poses discomfiting questions for the underlying values of legal professionalism. This article will overview the reforms that have occurred, drawing on a small study of NewLaw firms in Australia and the UK, to illustrate how the “Uberisation” of contemporary legal practice is contributing to a new incarnation of postprofessionalism. The article will also show how the injunction to work at home in response to COVID-19 has given “Uberisation” an adrenalin shot in the arm.
Article
The idea of artificial legal intelligence stems from a previous wave of artificial intelligence, then called jurimetrics. It was based on an algorithmic understanding of law, celebrating logic as the sole ingredient for proper legal argumentation. However, as Oliver Wendell Holmes has noted, the life of the law is experience rather than merely logic. Machine learning, which determines the current wave of artificial intelligence, is built on data-driven machine experience. The resulting artificial legal intelligence may be far more successful in terms of predicting the content of positive law. In this article, I discuss the assumptions of law and the Rule of Law and confront them with those of computational systems. As a twin article to my Chorley lecture on law as information, this should inform the extent to which artificial legal intelligence provides for responsible innovation in legal decision making.
Chapter
The legal profession has undergone significant changes in the past few years. These have affected working structures and context within the profession, in turn affecting the wellbeing of individual practitioners. This book is the first to consider how these operate in practice and how they impact on the wellbeing of lawyers. This is significant because legal systems cannot operate without properly functioning lawyers. Changes considered include rapidly evolving technologies such as the internet, artificial intelligence and increasing digitisation, and innovations in legal practice. Such innovations include changes in the structures of law firms, changing requirements about whether lawyers must practice separately from other professions and changing employment practices in law firms. The Impact of Technology and Innovation on the Well-Being of the Legal Profession considers the impact of all of these developments on the legal profession. It begins with students and how their responses to questions about their attitudes to learning may provide clues as to why they and the professionals they become might be more vulnerable to depression and anxiety than the wider population. The analysis then extends to how both satisfaction and stress levels can be simultaneously high and the implications of this, considering the experiences of lawyers in private and public practice, as well as academics, and their responses to the interactions between all of these changes. Leading researchers assess the situation in Australia and the United Kingdom in these various domains, using empirical research as the foundation of the arguments put forth. Anyone who is interested in the future of the legal profession and the challenges currently faced as a consequence of the massive structural and environmental changes experienced should read this book.

MICHAEL LEGG is Professor of Law and the Director of the Law Society of New South Wales Future of Law and Innovation in the Profession (FLIP) research stream at UNSW. PRUE VINES is Professor of Law and Associate Dean (Education) and Co-Director of the Private Law Research and Policy Group at UNSW Law. JANET CHAN is Professor at UNSW Law and leader of the Data Justice research stream at the Allens Hub for Technology, Law and Innovation.
Article
This Article explores the history of AI and the advantages and potential dangers of using AI to assist with legal research, administrative functions, contract drafting, case evaluation, and litigation strategy. This Article also provides an overview of security vulnerabilities attorneys should be aware of and the precautions that they should employ when using their smartphones (in both their personal and professional lives) in order to adequately protect confidential information. Finally, this Article concludes that lawyers who fail to explore the ethical use of AI in their practices may find themselves at a professional disadvantage and in dire ethical straits. The first part of this Article defines the brave new world of AI and how it both directly and indirectly impacts the practice of law. The second part of this Article explores legal ethics considerations when selecting and using AI vendors and virtual assistants. The third part outlines technology risks and potential solutions for lawyers who seek to embrace smartphone technology while complying with legal ethics obligations. The Article concludes with an optimistic eye toward the future of the legal profession.
Article
Artificial intelligence is exerting an influence on all professions and industries. We have autonomous vehicles, instantaneous translation among the world's leading languages, and search engines that rapidly locate information anywhere on the web in a way that is tailored to a user's interests and past search history. Law is not immune from disruption by new technology. Software tools are beginning to affect various aspects of lawyers' work, including those tasks that historically relied upon expert human judgment, such as predicting court outcomes. These new software tools present new challenges and new opportunities. In the short run, we can expect greater legal transparency, more efficient dispute resolution, improved access to justice, and new challenges to the traditional organization of private law firms delivering legal services on a billable hour basis through a leveraged partner-associate model. With new technology, lawyers will be empowered to work more efficiently, deepen and broaden their areas of expertise, and provide more value to clients. These developments will predictably transform both how lawyers do legal work and resolve disputes on behalf of their clients. In the longer term, it is difficult to predict what the impact of artificially intelligent tools will be, as lawyers incorporate them into their practice and expand their range of services on behalf of clients.
Article
Criminal law scholars devote substantial research to sociological and behavioral studies to determine characteristics common among reoffenders. This research aligns with a massive effort to reform the criminal justice system by reducing recidivism as a means to cure high crime rates and overcrowded prisons. Many scholars believe that by focusing resources on the criminal population that will likely commit future crimes, overall crime rates will decrease. The effort to reduce recidivism has led to the creation of objective risk assessment tools. These are essentially algorithms that purport to predict the likelihood that an individual will commit crime in the future. While these predictive algorithms were first implemented to determine parole conditions, they have become increasingly popular among courts and are now routinely used in all phases of a criminal proceeding. As the demand for predictive risk assessment formulas increases, many state governments now look to private companies to develop these methods. However, the move towards privatization raises issues of transparency, as companies are able to maintain the secrecy of their algorithms by claiming trade secret protection. As a result, defendants are unable to ensure the accuracy of the risk score results. This Note argues that private companies who benefit by providing a public service should be held to the same transparency requirements as public agencies, and freedom of information disclosure requirements should be extended to include proprietary predictive algorithms to achieve this result.
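To make concrete what such a risk assessment tool might look like computationally, here is a deliberately toy, hypothetical sketch: a weighted sum of defendant features passed through a logistic function. The feature names and weights are invented for illustration only and do not reflect any real product; the point is that when the weights and features are hidden as trade secrets, a defendant cannot scrutinise how the score was produced.

# Hypothetical toy "risk score": invented features and weights, for illustration
# only. Real proprietary tools keep such parameters secret, which is the
# transparency problem the Note raises.
import math

def risk_score(features, weights, bias=0.0):
    """Return a probability-like score in [0, 1] from a feature/weight dot product."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"prior_convictions": 0.8, "age_at_first_offence": -0.05, "employed": -0.6}
defendant = {"prior_convictions": 3, "age_at_first_offence": 19, "employed": 0}
print(f"risk score: {risk_score(defendant, weights):.2f}")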