Perspectives on the current and imagined role of artificial intelligence and technology in
corporate governance practice and regulation
Natania Locke and Helen Bird*
I INTRODUCTION
Corporate governance in Australia’s four major banking institutions is under the spotlight
thanks to the 2018 Royal Commission into Misconduct in the Banking, Superannuation and
Financial Services Industry in Australia,[1] the 2018 Prudential Inquiry into the Commonwealth
Bank of Australia[2] and the 2019 commencement of civil penalty proceedings and an associated
prudential inquiry against Westpac Banking Corporation relating to a money laundering
scandal.[3]
These inquiries revealed, and continue to reveal, all too clearly that company size complicates
but does not excuse board transparency and accountability for governance problems and
difficulties. While much attention has rightly been given to the judgment, shrewdness, acumen
and experience of banking executive and non-executive directors during the period of review,
the resulting analysis tends to underplay the realities of the modern banking boardroom beset
by huge volumes of data that directors must digest to perform their governance functions, data
that is only increasing as their industries rapidly change and respond to technological
disruption.[4]
For another revolution, apart from regulatory review, is going on in boardrooms
across the globe. It is the incorporation of technology and artificial intelligence (AI), and the
creation and curation of data that could assist in the use of such AI systems, into the practice
of corporate governance and strategy. This is born of the realisation that businesses, like
Australia’s major banks, are too complex for their boards and executives to make good
decisions without the aid of intelligent systems.
This article investigates how AI and related technology already augment global corporate
governance practices and the potential for further augmentation as AI continues to expand its
capacity to provide cognitive insights and cognitive engagement in corporate boardrooms. It
* Dr Natania Locke, Senior Lecturer, Swinburne Law School and Visiting Professor, University of Johannesburg. Ms Helen Bird, Course Director of the Master of Corporate Governance and Research Fellow, Swinburne Law School. An earlier version of this article was presented at the Comparative Corporate Governance Conference, hosted by Singapore Management University and the University of Adelaide, 24–25 January 2019, Singapore.
[1] The Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry was established on 14 December 2017 by the Governor-General of the Commonwealth of Australia, His Excellency General the Honourable Sir Peter Cosgrove AK MC (Retd). The Final Report of the Commission was presented to the Governor-General on 1 February 2019, <https://treasury.gov.au/publication/p2019-fsrc-final-report/>.
[2] Australian Prudential Regulation Authority, Prudential Inquiry into the Commonwealth Bank of Australia – Final Report (20 April 2018) <https://www.apra.gov.au/sites/default/files/CBA-Prudential-Inquiry_Final-Report_30042018.pdf>.
[3] Pleadings for AUSTRAC v Westpac Banking Corporation, issued in November 2019, are available at <https://www.austrac.gov.au/lists-enforcement-actions-taken>. On 17 December 2019, APRA announced an investigation into possible breaches of the Banking Act and increased capital requirements for Westpac: see APRA, ‘APRA Launches Westpac Investigation and Increases Capital Requirement Add-ons to $1 Billion’, Media Release, 17 December 2019, <https://www.apra.gov.au/news-and-publications/apra-launches-westpac-investigation-and-increases-capital-requirement-add-ons>.
[4] Barry Libert, Megan Beck and Mark Bonchek, ‘AI in the Boardroom: The Next Realm of Corporate Governance’ (October 2017) MIT Sloan Management Review <https://sloanreview.mit.edu/article/ai-in-the-boardroom-the-next-realm-of-corporate-governance/>; David Lancefield and Carlo Gagliardi, ‘Reimagining the Boardroom for an Age of Virtual Reality and AI’ Harvard Business Review (April 2015) <https://hbr.org/2015/04/reimagining-the-boardroom-for-an-age-of-virtual-reality-and-ai>.
is a scoping study to determine the current uses of these technologies.[5] There is already a
baseline of technology that boards have adopted to help manage their corporate governance
responsibilities, one that assists in preparing and distributing reports, streamlining meeting
preparation and scheduling meetings. Helpful as this is, it does not address the bigger
governance challenges that boards are facing, such as engaging in meaningful strategic and
proactive risk oversight and gaining real-time insights into company operations. Artificial
intelligence for strategic and operational decision-making enables boards to make better
decisions and is likely to become an essential competitive advantage in its own right. The
article imagines how these developments might play out, highlights the challenges involved
and suggests their impact on the traditional norms and understandings of contemporary
corporate governance.
All of the available technology set out below may be used in the boardroom to augment the
natural capability of directors to fulfil their duties. We do not argue here that AI will supplant
the role of directors in corporations any time soon, but rather that the insight offered by AI will
inevitably improve the analytical accuracy and resulting efficiency of the governance task. The
combined intellectual capability of a group of actors is referred to as ‘collaborative
intelligence’.[6]
It was predicted as early as 1993 that so-called ‘intelligence amplification’ might offer a
different road to super-human intelligence than reliance on the development of artificial
intelligence alone.[7] The premise is that any human cognitive behaviour is already amplified or
augmented through the use of even basic technology,[8] but that the rapid development of
technology would elevate this amplification to an extent never previously possible. We assume
that the collaborative intelligence of the human actors on the board of directors and the AI
systems acting in support and augmentation of those actors far surpasses the intelligence of the
board in the absence of those systems.
However, we acknowledge from the outset that the availability of this intelligence and
technology does not necessarily mean it will lead to better corporate governance. AI and
technology assist, but do not supplant, human actors performing governance functions;
aspects of governance decision-making, such as ethics and culture, remain outside AI’s sphere
[5] Others describe it as ‘a thought experiment’ on corporate management and AI: see Martin Petrin, ‘Corporate Management in the Age of AI’, UCL Working Paper Series (No. 3/2019), <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3346722>.
[6] See in general H James Wilson and Paul R Daugherty, ‘Collaborative Intelligence: Humans and AI are Joining Forces’, Harvard Business Review (online), July-August 2018, <https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces>. The study of the collaborative or collective intelligence of human actors forms a distinct field of study in social psychology and falls outside the scope of the present paper. For further reading, see for instance Anita Williams Woolley, Christopher F Chabris, Alex Pentland, Nada Hashmi and Thomas W Malone, ‘Evidence for a Collective Intelligence Factor in the Performance of Human Groups’ (2010) 330 Science 686; Michael A Woodley and Edward Bell, ‘Is Collective Intelligence (Mostly) the General Factor of Personality? A Comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010)’ (2011) 39 Intelligence 79; Anita Williams Woolley, Ishani Aggarwal and Thomas W Malone, ‘Collective Intelligence and Group Performance’ (2015) 24 Current Directions in Psychological Science 420; Timothy C Bates and Shivani Gupta, ‘Smart Groups of Smart People: Evidence of IQ as the Origin of Collective Intelligence in the Performance of Human Groups’ (2017) 60 Intelligence 46.
[7] Vernor Vinge, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’ in NASA Lewis Research Center, Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace (NASA Conference Publication 10129, 1993) 11, 17, available at <https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855.pdf>.
[8] Vinge (n 7) 17 uses the example of a PhD student and his desktop computer, which is so much more intellectually powerful than the PhD student alone. Link the desktop to the internet and the amplification is immediately extended many times over.
of influence with the result that human actors remain accountable for governance outcomes in
the final instance.
The article is divided into six parts, of which the introduction and conclusion form Parts I and
VI. Part II describes the AI framework adopted and outlines its key concepts. Part III
investigates current applications of AI and supporting technology in the corporate
governance space with reference to the AI framework set out in Part II. Part IV speculates
about the future advances that AI might bring in the corporate governance sphere, some of
which are already being explored in current systems. Part V considers the challenges that
current and foreseen applications of AI in the boardroom hold for directors in the fulfilment of
their duties and corporate governance practices generally and explores how these challenges
might influence future corporate governance theory and practice.
II AI FRAMEWORK FOR ANALYSING CORPORATE GOVERNANCE
An examination of artificial intelligence (‘AI’) applications in corporate governance invites a
preliminary inquiry about the nature of AI and the systems and technologies that underpin it.
This is a task easier said than done. Definitions are problematic, characterised by the loose
application of terms associated with AI (machine learning,[9] neural networks,[10] deep
learning,[11] natural language processing (NLP),[12] rule-based expert systems,[13]
robotics[14]) as synonyms, when their actual functionality is far more narrowly defined.
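To make the distinction concrete, the following minimal Python sketch (our own illustration, not drawn from any system discussed in this article) contrasts a rule-based expert system, whose logic is hand-coded by a human expert, with statistical machine learning in miniature, whose decision threshold is fitted to data. All function names and figures are hypothetical.

```python
# Illustrative only: the terms often conflated in practice differ sharply in
# how they derive their logic. A rule-based expert system applies hand-coded
# rules; statistical machine learning fits its rule to historical data.

def expert_system_flag(transaction_amount):
    # Rule elicited from a human domain expert and hand-coded.
    return transaction_amount > 10_000

def learn_threshold(labelled_amounts):
    # "Statistical machine learning" in miniature: derive a decision
    # threshold from labelled data instead of hand-coding it.
    flagged = [a for a, label in labelled_amounts if label]
    cleared = [a for a, label in labelled_amounts if not label]
    return (min(flagged) + max(cleared)) / 2

history = [(500, False), (2_000, False), (15_000, True), (40_000, True)]
threshold = learn_threshold(history)  # midpoint between the two classes
```

The expert system's rule never changes unless a human rewrites it; the learned threshold moves as the training data changes, which is the "brittleness" versus "learning" contrast the footnoted definitions describe.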
Technology supported by AI tends to be analysed in silos, without any attempt to systemise
the underlying contribution made by AI in each case. While there are a variety of explanatory
frameworks that might be referenced for this purpose,[15] we adopt the systemisation articulated
[9] Thomas H Davenport, The AI Advantage: How to Put the Artificial Intelligence Revolution to Work (MIT Press, 2018) ch 1 defines ‘statistical machine learning’ as ‘a technique for automatically fitting models to data and to “learn” by training models with data’. See further Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) 8–18.
[10] This is a form of machine learning that models the functioning of the human brain. It conceptualises problems as a set of inputs and outputs and uses artificial ‘neurons’ to assign a weight to the inputs that would lead to particular outputs. See Davenport (n 9) and Bostrom (n 9) 7–8.
[11] Deep learning is a more complex form of machine learning using neural networks. It uses the backpropagation algorithm to make it possible to train multi-layered neural networks. These layers lie in between the input and output layers, which makes the resulting models very difficult for human actors to interpret. This technique is increasingly used to enable the system to predict or classify outputs. See Davenport (n 9) and Bostrom (n 9) 8.
[12] This refers to the application of AI in speech recognition, text analysis, translation, the generation of text or speech and other goals related to language: Davenport (n 9). Understanding language is a task so difficult that many are of the opinion it would only be completely mastered by AI at the time when AI has general human capability. See Bostrom (n 9) 14.
[13] Bostrom (n 9) 7 defines rule-based expert systems as ‘rule-based programs that [make] simple inferences from a knowledge base of facts, which had been elicited from human domain experts and painstakingly hand-coded in formal language’. Expert systems were the earliest form of AI. These systems were hard to maintain and characterised as ‘brittle’ – even small errors in assumptions could render them useless.
[14] Patrick Lin, Keith Abney and George Bekey, ‘Robot Ethics: Mapping the Issues for a Mechanised World’ (2011) 175 Artificial Intelligence 943 define a robot as ‘an engineered machine that senses, thinks and acts’, adding that it must have the capability to do so autonomously.
[15] For instance, Ajay Agrawal, Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, 2018) offer a systemisation from an economics perspective where the prediction capability of AI is the driver of all other layers of influence. For their purposes, ‘prediction’ is ‘the process of filling in missing information. Prediction takes information you have, often called “data”, and uses it to generate information you do not have’ (ch 3).
by leading US business AI authors, Davenport and Ronanki, to describe the current and
potential roles that AI plays and is likely to play in the boardroom in the near future.[16]
Davenport and Ronanki describe AI according to its business capabilities rather than the
technology underlying it. They identify three specific classes of AI in a business context:
process automation, cognitive insight and cognitive engagement. Each class reflects the
evolution of cognitive capacity in AI, from the lowest in process automation to the highest in
cognitive engagement. However, AI-enabled systems may show elements of all three classes
in various degrees. Process automation refers to those systems where AI is used to automate
digital and physical tasks, very often administrative or financial tasks.[17] Robotic process
automation technology can access various IT systems much like a human actor would do. This
is the area where many businesses have started incorporating AI technology, since it is
relatively easy to do and often leads to immediate bottom-line benefit in that the automated
tasks are often those already outsourced. Second-stage AI, cognitive insight, is offered by those
systems that gain predictive ability through big data analysis. They detect patterns in data and
interpret their meaning through machine learning.[18] This is the second most used application
of AI technology. Third-stage AI, cognitive engagement, refers to systems that use natural
language processing to interact directly with customers or employees of an organisation.[19]
Examples include chatbots answering basic call centre queries from customers or employees
and cognitive assistants such as Siri, Alexa and Echo that anticipate instructions and provide
answers to questions.[20] The engagement may be further aided by augmented reality
visualisations of business activities that create the impression of being inside an activity in real
time. While all three classes of AI are embedded in current corporate governance technologies,
their impact is uneven. As will be seen, process automation has had the most immediate impact
on governance because it facilitates the production and distribution of more reliable data for
the purposes of internal auditing and risk management and, in turn, governance oversight of
those functions.
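Davenport and Ronanki's three classes can be restated schematically. The sketch below is our own simplification of their taxonomy, not code from their work; it simply reports the highest class whose capability a given system exhibits, reflecting the point that a system may show elements of all three classes in various degrees.

```python
# Our simplified restatement of Davenport and Ronanki's three classes of
# business AI; the classification logic is illustrative, not theirs.

def classify_ai_system(automates_tasks=False, detects_patterns=False,
                       interacts_in_language=False):
    # A system may exhibit several capabilities; report the highest class.
    if interacts_in_language:
        return "cognitive engagement"
    if detects_patterns:
        return "cognitive insight"
    if automates_tasks:
        return "process automation"
    return "not AI-enabled"

classify_ai_system(automates_tasks=True)          # e.g. back-office RPA
classify_ai_system(detects_patterns=True)         # e.g. big-data analytics
classify_ai_system(interacts_in_language=True)    # e.g. a chatbot or Siri
```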
III CURRENT APPLICATIONS OF AI IN CORPORATE GOVERNANCE
A. Introduction
Current boardroom technologies focus on the production and distribution of information that
boards need to assist in their supervisory and strategic roles.[21] Many of these technologies do
not offer the advantages of AI systems themselves but instead generate the data which is the
vital lifeblood that AI needs. Information systems that generate and curate data and AI systems
must therefore be discussed simultaneously. Of these, board portals and risk reporting systems
are the most widely used, tied in with systems that assist in legal compliance and internal
auditing. Each of these technologies is at present an application of Davenport and Ronanki’s
class one AI, process automation. While there has been progress in using cognitive insights in
[16] Thomas H Davenport and Rajeev Ronanki, ‘Artificial Intelligence for the Real World’, Harvard Business Review (online), January-February 2018, <https://hbr.org/2018/01/artificial-intelligence-for-the-real-world>. See further Davenport (n 9).
[17] Davenport and Ronanki (n 16).
[18] Ibid. This is the form that algorithmic trading takes.
[19] Ibid.
[20] Tom Davenport and Rajeev Ronanki, ‘The Rise of Cognitive Agents’ (2018) <https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/rise-of-cognitive-agents-artificial-intelligence-applications.html>.
[21] Financial Reporting Council (UK), Digital Future of Corporate Reporting (December 2017) 8 <https://www.frc.org.uk/getattachment/9279091c-a4e9-4389-bdd6-d8dc5563b14a/DigFutureXBRLDec.pdf>.
the boardroom, they are not yet widely in use. However, the future potential of cognitive
insights and cognitive engagement technologies in the governance of corporations is immense.
B. Board portals
1. Baseline technology & AI extensions
Board portals are critical to all classes of AI. They are both the chief mode by which digital
information is distributed to boards and the platform by which boards are accessing the rapidly
expanding range of cognitive technologies.[22] Originally developed as a means of automating
the secure preparation and distribution to directors of the board pack or board book, the advent
of board portals kick-started the shift from paper-based board preparation to an all-digital
environment.[23] Along with video and web-based conferencing, portals also worked to remove
the constraints against remote participation in meetings by directors.[24]
Algorithms help collect, collate, format, distribute and regularly update papers and reports from
officers such as the CEO, CFO, COO etc and various divisions of a corporation that form part
of the board pack periodically required across the board reporting cycle so that boards can
analyse the state of the business at regular intervals. This type of handling and management is
considered to be perfectly suited to AI optimisation because it can be configured to the needs
of the user or specific company and yet leveraged individually by board members, who can
access the server or cloud on which it is stored.[25] The data must be regularly updated, of
course, but the
underlying algorithms have generally low levels of variation, facilitating time and cost
efficiencies.[26] Critical to these transformations has been the ability of portal technology to
provide secure access to, and control of, regularly updated, sensitive information for board
members. This extended to automating the archiving and digital shredding of documents.[27]
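The collation task described above is, in essence, class one process automation. The following hypothetical sketch shows the shape of such a pipeline; every name is illustrative, and no real board portal exposes this API.

```python
# Hypothetical sketch of the process-automation layer described above:
# collecting divisional reports and collating them into a single board pack.
# All class and function names are our own illustrations.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class BoardPack:
    meeting_date: date
    sections: dict = field(default_factory=dict)

def collate_pack(meeting_date, divisional_reports):
    # Collect reports from officers (CEO, CFO, etc.) and divisions into one
    # pack, in a stable sorted order so each reporting cycle looks alike.
    pack = BoardPack(meeting_date)
    for officer in sorted(divisional_reports):
        pack.sections[officer] = divisional_reports[officer]
    return pack

reports = {"CFO": "Q3 financials", "CEO": "Strategy update", "Risk": "Heat map"}
pack = collate_pack(date(2019, 1, 24), reports)
# sections appear in sorted order: CEO, CFO, Risk
```

The point of the sketch is that the underlying logic has low variation from cycle to cycle, which is why the text describes this handling as perfectly suited to automation.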
More recent evolutions have extended portal functionality beyond that of a digital data
depository to include AI class 2, or cognitive insight, applications: note-taking and data review
functionality, interactivity between board members and immediacy of the data available to
directors. The effect has been to turn portals into AI platforms for board collaboration, data
analysis and interaction, thereby enhancing board effectiveness.[28]
Depending on the portal provider, tools may include scheduling, alerts, intra-board messaging,
surveys, e-signatures, voting capabilities, multi-language capability, reading rooms, document
libraries, search functionality and secure support for the use of portals in multiple environments
accessed through a range of devices including iPads and tablets, web-based tools and
smartphones.[29]
[22] Raphael Goldsworthy, Tech in the Boardroom: Beyond the Board Portal (9 May 2016) <https://betterboards.net/governance/tech-boardroom-board-portal/>.
[23] Corporate Secretary, Trends in Board Portal Adoption – Special Edition (1 June 2017) 4 <http://www.corporatesecretary.com/node/30644>.
[24] John Cormican and Luke Phillips, ‘Board Portals – Evolution in Board Communication’ Keeping Good Companies (May 2011) 207.
[25] David Gardner, ‘Is General Counsel Ready for AI Adoption?’ (11 July 2018) <https://ct.wolterskluwer.com/resource-center/articles/is-general-counsel-ready-for-ai-adoption>.
[26] Shared Services & Outsourcing Network and SSON Analytics, Global Intelligent Automation Market Report (H1 2017) (2017) <https://www.sharedservicesweek.com/downloads/global-intelligent-automation-market-report-h1-2017-part-1> 10. The technology that underlies these systems is typically rules-based and training is supervised.
[27] Corporate Secretary (n 23).
[28] Corporate Secretary (n 23); Steve Cocheo, ‘Should You “Plug In” Your Board of Directors?’ (2007) 99 American Bankers Association Banking Journal 40; Cormican and Phillips (n 24) 212. See also the discussion below in Part III.E on how the Internet of Things is assisting in this task.
[29] Corporate Secretary (n 23) 4.
2. Factors affecting adoption and wide use
Despite the absence of a published academic study of the adoption of board portals in various
jurisdictions, privately commissioned surveys provide some insight into the adoption of portal
technology. First, a global survey of 400 governance professionals undertaken by Forrester
Consulting and commissioned by board portal provider, Diligent, revealed that portal adoption
is high across the world, with the highest rate of adoption in the Asia Pacific,[30] followed by
Europe and, surprisingly, the lowest rate of adoption in North America.[31] Secondly, a similar
study by Azoth Analytics[32] determined that the global leaders in board portal production
include Diligent Corporation,[33] Nasdaq Boardvantage,[34] ComputerShare,[35]
Directorpoint,[36] BoardPaq,[37] Convene[38] and BoardEffect.[39] Of these, Diligent is the
market leader, with its website claiming to provide board portals to more than 145,000
corporate officers globally.[40]
Thirdly, based on a recent survey by Corporate Secretary of 300 US executives with
responsibility for preparing board materials for their organisations, board portals are being
adopted by organisations of all sizes, from both the public and private sectors and profit and
non-profit enterprises, with approximately 32% of such enterprises receiving digital-only board
packs.[41]
Implicit in each of these findings is the assumption that there were no barriers to full
digital adoption in the organisations surveyed such that board buy-in to the adoption of
technology was the same across the survey cohorts.
The foregoing discussion suggests that portals are a baseline technology that has been widely
adopted to help manage corporate governance responsibilities. However, despite their
potential, there is a long way to go before boards view portals as more than digital board
meeting packs.[42] This is a function of two issues. The first is the slow integration of AI into
corporate governance due to significant variations in the technology features of portal
products. For example, not all portals presently offer cognitive insight or engagement
technology such as chat functionality or virtual dealing rooms for board interactions and
cognitive engagement.[43] Secondly, even where such functionality exists, boards are cautious
about using it, being more comfortable with familiar technology practices such as their own
personal email in preference to board portal enabled closed-loop messaging for directors.[44]
This must change; the question is when. An issue likely to further galvanise digital governance
is cyber security,
[30] Note that 7% of respondent companies of the total survey were based in Australia, but Australian companies constituted only slightly over 20% of the Asia Pacific contingent. See Forrester Consulting, ‘Directors’ Digital Divide: Boardroom Practices Aren’t Keeping Pace with Technology’ (October 2018) <https://diligent.com/wp-content/uploads/sites/5/2018/11/UK-Diligent-Global-Report-Forrester-Directors-Digital-Divide-Boardroom-Practices.pdf> 10.
[31] Forrester Consulting (n 30) 7.
[32] Azoth Analytics, Global Board Portal Market: Insights, Analysis and Forecast to 2023 – By Value, By Number of Users, By Model, Penetration, End-User Industry, Company Analysis (2018) <https://www.researchandmarkets.com/reports/4585755/global-board-portal-market-2018-edition#pos>.
[33] <https://diligent.com/au/>.
[34] <https://nq.nasdaq.com/Boardvantage-Reimagining-Governance?channel=PPC&source=Google&ppc-campaignid=1533189567&gclid=EAIaIQobChMIpO7qq4rn3gIVlxOPCh2g7w0REAAYASAAEgKnavD_BwE>.
[35] <https://www.computershare.com/au/business/corporate-governance/stay-connected-to-the-boardroom>.
[36] <http://directorpoint.com/>.
[37] <https://www.boardpac.co/?gclid=EAIaIQobChMIgOy-h4vn3gIVS4aPCh30zw-tEAAYAiAAEgLjsfD_BwE>.
[38] <https://www.azeusconvene.com/>.
[39] <https://boardeffect.com>.
[40] <https://diligent.com/au/>.
[41] Corporate Secretary (n 23) 2 and 6–7.
[42] Forrester Consulting (n 30) 5.
[43] Ibid 2. See also the discussion in Part III.E below.
[44] Forrester Consulting (n 30) 2.
specifically the vulnerability of boards and their organisations to cyber-attack and the
repercussions of sensitive materials or matters being exposed to the public.[45] AI applications
will gain greater traction when both their cognitive and security benefits beyond the boardroom
are better understood.
C. Risk Reporting and Internal Auditing Systems
1. Outline
AI technologies that model and stress test risk management practices and internal audit
processes and controls within organisations are an emergent field gaining traction in the
financial services sector, particularly in banking organisations.[46] As part of an AI wave that
first focussed on customer-facing ‘journeys’ and the ‘front of house’ operations that support
them,[47] the transformation of ‘back of house’ operations such as risk and internal audit has
been more cautious and uneven, tempered by concerns as to the untried and untested nature of
AI in helping firms to satisfy their legal obligations,[48] despite the obvious benefits of
automating monotonous tasks and reallocating the budgeted time involved in those tasks to
more strategic concerns.[49] However, the rapid growth of digital customer-facing activities is
forcing the issue, with industry consultants arguing that risk and audit functions within
financial institutions must be digitalised or they will create bottlenecks for front of house
operations.[50] In this section, we examine these trends and then discuss how AI is helping
address governance challenges that boards are facing in the risk and audit responsibility fields.
2. Risk Management and Risk Oversight
AI already provides several building blocks that make possible the digitalisation of a firm’s
risk function. These include data management; process and workflow automation; advanced
analytics and decision automation; and smart visualisations.[51] Ever expanding commercial
versions of these technologies are available in multiple forms online from the big four
accounting firms,[52] the big three management consultants,[53] legal compliance and risk
management consultants[54] and governance and entity management consultants.[55] These
products still predominantly depend on or draw from the first class of AI, namely process
automation, and best assist the function of risk management rather than risk governance. AI-
enabled advanced analytics arguably illustrate the expansion of digital risk technology into
Davenport and Ronanki’s second AI class, cognitive insight, because, as will be seen, they
make possible more precise and in-depth analytics than were previously possible.[56] Decision
[45] Ibid 3.
[46] See e.g. Deloitte EMEA Centre for Regulatory Strategy, AI and Risk Management – Innovating with Confidence (April 2018) 1–2 <https://www2.deloitte.com/au/en/pages/financial-services/articles/ai-risk-management.html>; Jeanne Boillet, EY, ‘Why AI is Both a Risk and a Way to Manage Risk’ (1 April 2018) <https://www.ey.com/en_gl/assurance/why-ai-is-both-a-risk-and-a-way-to-manage-risk>; McKinsey & Company, ‘The Future of Risk Management in the Digital Era’ (October 2017) <https://www.mckinsey.com/business-functions/risk/our-insights/the-future-of-risk-management-in-the-digital-era>.
[47] E.g. online marketing, customer onboarding and servicing. See McKinsey & Company (n 46) 8.
[48] Deloitte EMEA Centre for Regulatory Strategy (n 46) 1.
[49] Pascal Bizarro and Margaret Dorian, ‘Artificial Intelligence: The Future of Auditing’ (October 2017) Internal Auditing 21.
[50] McKinsey & Company (n 46) 17–21; Deloitte EMEA Centre for Regulatory Strategy (n 46) 1–2.
[51] McKinsey & Company (n 46) 12–14.
[52] Deloitte, EY, KPMG and PwC.
[53] McKinsey, Boston Consulting Group and Bain & Company.
[54] E.g. SAI Global and LexisNexis.
[55] E.g. Diligent and Nasdaq.
[56] Davenport and Ronanki (n 16).
automation, aided by smart visualisations and the use of augmented reality in the performance
of a firm’s risk function, are still emergent technologies but would seem to fall within
Davenport and Ronanki’s third AI class of cognitive engagement.
3. Benefits of AI in the Risk and Audit Space
AI enables the faster capture and use of vast amounts of structured and unstructured data
(emails, text, social media posts, clickstreams, chat transcripts). AI algorithms permit linkage
analysis, pattern recognition and NLP,[57] which in turn make possible more timely and precise
profiling of customer attributes and risks. This is supported by a wider variety and greater
quality of available data, so that a firm’s risk function need no longer rely on traditional risk
data nor, in the case of financial service firms, burden customers with requests to supply data
that can be captured by other means. An example of a growing source of real-time data on
banking customers is the Internet of Things,[58] which constructs data networks using built-in
sensors on digital machines and appliances (fixed, mobile, wearable) to gather and combine
vast sets of individual data from those machines and facilitates big data analysis.[59]
Process and workflow automation facilitates streamlining, standardisation and the efficient
execution of routine tasks such as data entry and collection. Using robotic process automation
(RPA), NLP and optical character recognition, tasks can in turn be combined into integrated
sequences or smart workflows. The use of RPA and NLP is still in its very early stages but is
rapidly catching on in the financial services sector.[60] Blockchain is also expected to add
further potential for process automation in a few years.[61] It is claimed that automated
workflows offer the potential for increased precision in risk modelling, improved regulatory
compliance and significant cost reductions, depending on a firm’s risk types.[62]
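A smart workflow of the kind described can be pictured as a chain of automated steps. The sketch below is purely illustrative: the capture and extraction functions stand in for OCR and NLP components respectively, and the escalation threshold is invented for the example.

```python
# A minimal, illustrative "smart workflow": document capture, field
# extraction and validation chained into one automated sequence, as the text
# describes for RPA combined with OCR and NLP. Not a real product's API.

def capture(document):
    # Stand-in for an OCR capture step.
    return document.strip()

def extract_amount(text):
    # Stand-in for an NLP/field-extraction step: pull the numeric amount.
    return float("".join(ch for ch in text if ch.isdigit() or ch == "."))

def validate(amount, limit=10_000):
    # Hypothetical business rule: amounts above the limit need human review.
    return "escalate" if amount > limit else "auto-approve"

def smart_workflow(document):
    # The individual tasks combined into one integrated sequence.
    return validate(extract_amount(capture(document)))

smart_workflow("  Invoice total: 12500.00  ")  # -> "escalate"
```

Each step on its own is routine process automation; the claimed value lies in sequencing them so no human touches the ordinary case.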
Advanced analytics based on AI algorithms[63] are able to identify more complex patterns than
is possible by human intervention, particularly in the context of identifying fraud and money
laundering in the financial services context.[64] Semantic analysis, a branch of NLP, can be
applied to news to extract market sentiment data on business entities. Further, AI analytics are
rapidly advancing, with new applications arriving constantly. Decision automation is also
possible through the use of models and algorithms. For example, advanced analytics are able
to make risk predictions, suggest optimal actions and extract relevant insights. It has been
suggested that these capabilities can increase the accuracy of risk management in many
cases.[65] At least in the financial services context, these capabilities are not, as yet, being used
to automate actual risk decisions, but rather to improve the accuracy and precision of existing
risk models.
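The pattern detection described above can be illustrated, in deliberately simplified form, by flagging transactions that sit far from the statistical norm. Real fraud and AML analytics are far more sophisticated than this toy stand-in, and the cutoff value is our own assumption.

```python
# Toy illustration of the pattern detection described above: flag
# transactions whose distance from the mean exceeds a z-score cutoff.
# Real AML analytics use far richer models; this only shows the idea.

from statistics import mean, stdev

def flag_outliers(amounts, z_cutoff=2.0):
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_cutoff]

txns = [120, 95, 130, 110, 105, 5_000]
flag_outliers(txns)  # the 5_000 transaction stands out
```

The same logic, applied at scale across linked accounts and unstructured data, is what allows these systems to surface patterns that human reviewers would miss.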
AI smart visualisation technology allows users to access AI-generated insights and data in a
more intuitive and customer-friendly way. Key technologies include dashboards and portals,
57
Deepak Amirtha Raj, ‘Spotlight on the Remarkable Potential of AI in KYC’ (13 June 2017)
<https://medium.com/all-technology-feeds/spotlight-on-the-remarkable-potential-of-ai-in-kyc-
7441bf7eec38>.
58
‘Internet of Things’ refers to a global network of machines and devices that are enabled to gather information
from their environment or users and can then communicate and interact with each other. See In Lee and
Kyoochun Lee, ‘The Internet of Things (IoT): Applications, Investments, and Challenges for Enterprises’
(2015) 58 Business Horizons 431.
59
See, eg, Siemens’ MindSphere,
<https://www.siemens.com/global/en/home/products/software/mindsphere.html>.
60
McKinsey & Company (n 44) 46.
61
Ibid 46.
62
McKinsey & Company (n 44) 12.
63
Ibid 47.
64
Raj (n 57).
65
Rajdeep Dash, Andreas Kremer, Luis Nario and Derek Waldron, ‘Risk analytics enters its prime’ (June 2017)
<https://www.mckinsey.com/business-functions/risk/our-insights/risk-analytics-enters-its-prime>.
augmented reality and cognitive agents, such as Siri and Alexa.
66
Presently these technologies
are customer focussed, helping them in the banking context to better understand their spending
habits and financial capacity. However, the same technologies can also be used to improve,
streamline and fasten decision-making. Critical to this capacity is the ability of AI to perform
ongoing multi-dimensional analysis on consolidated data.
67
4. Impact on C-Suite and Boardroom
AI automated risk management has the potential to improve the functioning of risk and risk-
related executives, boards and board risk committees in two important ways.
68
First, for risk
management and the C-Suite, AI automates away routine risk functions and reduces the number
of manually handled exceptions. This leaves the C-Suite and the risk executives in particular
to focus on strategic and high value decisions, using AI driven advanced analytics, as described
above, to generate insights such as more complex correlations and trend analyses than were
previously available, and in turn to optimise risk and other management decisions. When added
to improved connectivity, executives will be able to detect emergent risks immediately and set
cross-risk mitigation strategies and limits more dynamically than before.
Secondly, risk reports and strategic advice on risk-oriented business decisions will be generated
automatically and/or be sent on demand to the C-Suite and/or board of directors, for risk
oversight and review. As noted earlier, strategic advice can take the form of risk predictions
and suggested optimal actions, which the board or board risk committee can use as a decision-
making point of reference, a valuable cognitive insight. Risk reports and advice need no longer
be static, nor necessarily require a risk register, although boards still need to have an
appreciation of their risk appetite and an understanding of the firm’s digital risk frameworks.
69
Boards will increasingly be presented with intuitive visuals that provide summary information
with appropriate levels of detail (as to market, portfolios and products) but which enable the
reader to drill down into the headline figures as required and run further analysis virtually using
real time data. Instead of focussing on the oversight of risks ex post facto it will increasingly
be possible to oversee risk more strategically, by profiling emerging risks ex ante using AI’s
precise, preventative control mechanisms and reporting features.
70
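The drill-down reporting described above amounts, at its simplest, to retaining the detail records behind every headline figure. A minimal sketch, with invented portfolio names and amounts:

```python
from collections import defaultdict

# Toy exposure records: (portfolio, product, exposure). Figures are invented.
exposures = [
    ("retail", "mortgages", 5_200),
    ("retail", "cards", 800),
    ("institutional", "loans", 3_100),
    ("institutional", "derivatives", 1_400),
]

def headline(records):
    # Summary view presented to the board.
    totals = defaultdict(float)
    for portfolio, _, amount in records:
        totals[portfolio] += amount
    return dict(totals)

def drill_down(records, portfolio):
    # Detail behind one headline figure, produced on demand.
    return [(prod, amt) for p, prod, amt in records if p == portfolio]

print(headline(exposures))              # {'retail': 6000.0, 'institutional': 4500.0}
print(drill_down(exposures, "retail"))  # [('mortgages', 5200), ('cards', 800)]
```

The design point is that the summary and the detail are two views over the same live data set, so a director can move from headline to product level without waiting for a new report to be prepared.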
The resulting governance
outcomes depend on the persuasive but as yet untested assertion that improved governance
capacity resulting from improved analytics will lead to improved governance per se, better
performance and improved business value.
71
These arguments may yet be shown to be
particularly strong in board governance in a crisis, when governance would clearly benefit from
access to real-time and precise reporting.
5. Internal Audit Systems
As technology continues to change the working landscape of the business world, AI impacts
on the nature of the workloads and deliverables requiring core assurance. Adopting digital
risk-based technology, for example, raises security, data quality and methodology, regulatory
and supervisory challenges, both for a firm’s audit teams and for audit governance. Keeping a
check on those challenges across an organisation as a whole is the responsibility of internal
audit.
72
It involves engagement in ongoing audits of a firm’s systems and operations across all
66
McKinsey & Company (n 44) 50-52.
67
Ibid 52. Applications such as Tableau, Alteryx and Qlikview are mentioned in this regard.
68
Ibid 27-29.
69
McKinsey & Company (n 44) 27-29.
70
Ibid 28.
71
Forrester (n 28) 5-8.
72
The Institute of Internal Auditors Australia, ‘What is an Internal Audit?’ <https://www.iia.org.au/about-iia-
australia/WhatIsInternalAudit.aspx>.
areas of the firm to identify how well risks are managed, whether the right processes, controls
and authorisation procedures are in place and in turn, whether those processes and controls are
being implemented and/or followed.
73
Audit results are translated into reports providing advice
and assurance regarding the organisation’s risk management, governance and controls to
boards, board committees and senior management and facilitating co-ordination between the
first and second lines of defence involved in the company’s risk management framework.
74
The use of computer assisted audit software in conducting internal audits is already an
established practice that has brought cost and time savings to a traditionally manually intensive
exercise hindered by delays in the availability of data.
75
However, the adoption of AI to define and
automatically pilot an internal audit end to end is less well-established.
76
The human factor,
encapsulating knowledge, creativity and professional judgment, is still regarded as an
important control/authorisation procedure for ensuring audit outcomes are reliable.
77
Implementation of AI tools to transform audit service delivery is the focus of the Big Four
accounting firms.
78
Critical audit tasks that lend themselves to automation via AI include
analytical review procedures, classification, internal controls evaluation, risk assessment and
going concern decisions.
79
AI already outperforms humans at streamlining and automating
data acquisition for audits and converting that data into report formats.
80
Human error is
reduced and time can be allocated to higher order tasks including the leveraging of digital audit
data to identify irregularities and improve weaknesses.
81
These aspects fall under Davenport
and Ronanki’s process automation category.
The advanced analytics within AI technologies can also compare, contrast and summarise huge
data collections and provide greater support for audit findings through reliance on accurate and
real time data rather than audit sampling.
82
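Whole-population testing of this kind can be sketched as a rule sweep over every journal entry rather than a sample. The red-flag rules and the entries below are invented for illustration only.

```python
from datetime import date

# Toy journal entries: (id, amount, posting date). Invented for illustration.
journal = [
    (1, 1432.50, date(2024, 3, 4)),    # Monday
    (2, 50_000.00, date(2024, 3, 9)),  # Saturday, large round amount
    (3, 287.10, date(2024, 3, 6)),
]

def flag_irregular(entries):
    """Sweep the full population (no sampling) and flag entries matching
    simple red-flag rules: weekend postings and large round amounts."""
    flagged = []
    for entry_id, amount, posted in entries:
        reasons = []
        if posted.weekday() >= 5:  # Saturday or Sunday
            reasons.append("weekend posting")
        if amount >= 10_000 and amount == round(amount, -3):
            reasons.append("large round amount")
        if reasons:
            flagged.append((entry_id, reasons))
    return flagged

print(flag_irregular(journal))
# [(2, ['weekend posting', 'large round amount'])]
```

Because the sweep is cheap, it can run over every entry in the ledger, which is what allows audit findings to rest on the full population rather than on a sample.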
Furthermore, the machine learning capability of the
73
The Institute of Internal Auditors Global, Global Perspectives and Insights – The IIA’s Artificial Intelligence
Auditing Framework (September 2017) 2, <https://na.theiia.org/periodicals/Pages/Global-Perspectives-and-
Insights.aspx>.
74
Paul Holland, Shamus Rae and Paul Taylor, ‘Why AI must be included in audits’ (June 2018)
<https://assets.kpmg.com/content/dam/kpmg/uk/pdf/2018/06/why-ai-must-be-included-in-audits.PDF>.
75
He Li, Jun Dai, Tatiana Gershberg and Miklos Vasarhelyi, ‘Understanding Usage and Value of Audit
Analytics for Internal Auditors: An Organisational Approach’ (2018) 28 International Journal of Accounting
Information Systems 59; Nurmazilah Mazhan and Andy Lymer, ‘Examining the Adoption of Computer-
assisted Audit Tools and Techniques’ (2014) 29 Managerial Auditing Journal 327; Muhammad
Razi and Haider Madani, ‘An Analysis of Attributes that Impact Adoption of Audit
Software’ (2013) 21 International Journal of Accounting & Information Management 170; Aidi Ahmi and
Simon Kent, ‘The Utilisation of Generalized Audit Software (GAS) by External Auditors’ (2012) 28 Managerial
Auditing Journal 88; Fatima Alali and Fang Pan ‘Use of Audit Software: Review and Survey’ (2011) 26
Internal Auditing 29.
76
Holland, Rae and Taylor (n 74).
77
Bizarro and Dorian (n 47) 21.
78
Ibid 24-35. Deloitte is considered the current market leader. The firm leverages a range of AI technologies
such as Argus, which learns from human interaction and machine techniques to extract key accounting data
from electronic documentation including sales; leasing; derivative contracts; employment agreements;
invoices; client meeting minutes; legal letters and financial statements. See Deloitte, ‘Advancing audit quality
through smarter audits’ (2018) <https://www2.deloitte.com/us/en/pages/audit/articles/smarter-audits.html>.
79
Syed Moudud-Ul-Huq, ‘The Role of Artificial Intelligence in the Development of Accounting Systems: A
Review’ (2014) 13 Journal of Accounting Research & Audit Practices 7.
80
Ibid.
81
Bill Brennan, Mike Baccala and Mike Flynn, ‘Artificial Intelligence Comes to Financial Statement
Audits’, CFO Magazine (February 2017) <http://ww2.cfo.com/auditing/2017/02/artificial-intelligence-
audits/>.
82
Michael P Cangemi and Patrick Taylor, ‘Harnessing Artificial Intelligence to Deliver Real-Time Intelligence
and Business Process Improvements’ EDPACS (online), 27 April 2018, 3
<https://www.tandfonline.com/doi/abs/10.1080/07366981.2018.1444007>.
systems enable them to benchmark across different organisations to, for instance, pick up on
the characteristics of extraordinary transactions. This capability is enhanced when they are run
on the cloud.
83
These capabilities extend the AI functionality into cognitive insight. The major
advantages, however, remain cost and efficiency. The risks lie in ensuring that AI has been
effectively tested so that results are accurate and reflect the objectives of internal audits.
Fully AI-automated internal audits are foreseeable but are not yet here. They have
the potential to improve the functionality of the audit process, risk management at all levels of
an organisation and risk governance by the board. As with risk governance, AI driven audit
reports make audit governance by the board or board committee more efficient and create new
opportunities for strategic oversight of business activities in place of monotonous tasks.
However, as with all technologies, what remains unclear is the extent of tailoring, integrating,
testing and monitoring required to ensure that the benefits outweigh the resources devoted to
their use.
D Legal compliance systems
Integral to risk and audit governance is a firm’s commitment to operating its business in a
legally and ethically responsible manner, in the long-term interests of its shareholders.
84
Legal compliance issues are increasingly sophisticated, requiring not only that companies meet
the requirements of relevant laws but also that they identify and self-report possible breaches
of those laws to the appointed regulator in a timely way.
85
The sheer volume of business
transactions, regulatory requirements and breach reporting obligations has inevitably led to
big data, AI (in this compliance context known as RegTech
86
) and blockchain-based solutions being developed. It
has caused regulators globally to create initiatives such as regulatory sandboxes to engage with
and respond to the emerging solutions.
87
The focus here is however confined to the issue of
how AI facilitates and reports on legal compliance to company boards and in turn how boards
are able to make better business decisions because of that AI functionality.
AI legal compliance techniques have been used by financial service firms in areas such as credit
card fraud detection and the analysis of data sets for Anti-Money Laundering (AML) purposes. AI adoption in
these fields has focused on automating costly manual work, leaving key decisions to human
beings or rules-based systems. AI is proving particularly effective because fraud and money
laundering detection typically involve large numbers of documents and repetitive processes
that suit process automation. Whilst automation covers as much as 90% of processes in some
firms, it is no longer the only digital answer, and alternatives such as blockchain may
increasingly become more financially justified.
88
83
Ibid.
84
Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (Interim
Report, September 2018) 191. See further Royal Commission into Misconduct in the Banking, Superannuation
and Financial Services Industry (Final Report, February 2019) vol 1, 12: ‘The conduct identified and criticised
in the Commission’s Interim Report and in this Report has been of a nature and extent that shows that the law
has not been obeyed, and has not been enforced effectively. It also points to deficiencies of culture, governance
and risk management within entities. Too often, entities have paid too little attention to issues of regulatory,
compliance and conduct risks.’
85
See for example, the breach reporting duty of Australian Financial Service Licence holders in Corporations
Act 2001 (Cth) s 912D (‘Corporations Act’).
86
Chartis Research, Demystifying AI for Risk and Compliance (2018) 2 <https://www.ibm.com/blogs/insights-
on-business/banking/demystifying-ai-risk-compliance/>.
87
Global Financial Innovation Network (GFIN), Consultation document (August 2018) 4
<https://www.fca.org.uk/publication/consultation/gfin-consultation-document.pdf>.
88
Ibid 6.
AI builds on process automation protocols to add a layer of cognitive insight, using text
analytics to identify relevant content, negative news and case notes in structured and
unstructured data such as emails, posts and chat room conversations.
makes possible network analytics, of the kind described in relation to risk management
earlier,
89
that facilitate the identification of connections between individuals in order to
evaluate risky parties and networks.
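Network analytics of this kind reduces, in sketch form, to reachability over a graph of observed relationships. The parties below are fictitious:

```python
from collections import deque

# Fictitious relationship graph: edges between transacting parties.
edges = [("A", "B"), ("B", "C"), ("D", "E")]
flagged = {"C"}  # party already identified as high risk

def connected_to_flagged(edges, flagged):
    """Breadth-first search outward from flagged parties, returning every
    party linked to one by any chain of observed relationships."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = set(flagged), deque(flagged)
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen - flagged

print(sorted(connected_to_flagged(edges, flagged)))  # ['A', 'B']
```

Real systems weight the edges by transaction volume, shared addresses and the like, but the governing idea is the same: risk propagates along connections, so the network itself is the unit of analysis.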
AI can also be used to facilitate legal compliance. NLP and RPA are the AI processes most
commonly used for compliance. An example is the automatic generation of written reports or
notifications to a regulator regarding cash deposits required by money laundering laws.
Scenario comparisons and stress testing can be tailored to banking regulation requirements in
different jurisdictions. Finally, in something of a ‘holy grail’ for AI,
90
AI can establish and
automate a system of internal notifications of legal changes throughout an organisation, a
significant issue for financial institutions which have to integrate content from thousands of
regulatory publications each month.
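The regulator-notification example can be sketched as a rule that drafts a report whenever a cash deposit meets a statutory threshold. The AUD 10,000 figure mirrors the Australian threshold transaction rules, but the report wording and customer records below are invented.

```python
THRESHOLD = 10_000  # AUD; illustrative reporting trigger

def draft_reports(deposits):
    """Generate draft regulator notifications for cash deposits at or
    above the reporting threshold. The wording is illustrative only."""
    return [
        f"Threshold transaction report: customer {cust}, "
        f"cash deposit of AUD {amount:,.2f} on {day}"
        for cust, amount, day in deposits
        if amount >= THRESHOLD
    ]

deposits = [
    ("C-1001", 9_900.00, "2024-05-01"),
    ("C-1002", 12_500.00, "2024-05-01"),
]
for report in draft_reports(deposits):
    print(report)
```

The value of automating even so simple a rule is completeness: every qualifying transaction generates a draft notification, so reporting failures become a review problem rather than a detection problem.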
Data integrity remains the biggest challenge for AI-based legal compliance solutions.
Advanced analytics through AI depend on data quality and reliability. We return to this issue
below for further consideration.
91
AI requires proper data governance procedures and the
validation or assurance processes previously discussed regarding internal audits.
92
This is
particularly the case when converting documents (essentially unstructured) into structured
formats, while preserving the information they contain. The goal is to obtain segmented,
cleaned and structured data sets that can then feed into companies’ compliance and control
activities.
Boards receive reports on these issues as part of the digital pack that is the primary
functionality of board portals. However, in the case of large corporations, criticism by boards
may concern not so much the accuracy and precision of compliance data as its
relevance. In the Prudential Inquiry Report into the Commonwealth Bank of Australia (CBA),
one of the complaints concerned the failure of the CBA board to be alarmed by and follow up on
breaches of money laundering laws despite warnings in internal audit reports to the relevant
board sub-committee. The failure to take up that concern was considered to be a cultural rather
than a data accuracy question, the implication being that no AI procedure would have been
sufficient to ensure the proper governance of this issue.
93
This illustrates that the fundamentals of good
governance practice remain important regardless of the benefits that more reliable information
brings.
E Cognitive insight and the Internet of Things
The formal use of cognitive insight in the boardroom was first reported in 2014 when Deep
Knowledge Ventures, a Hong Kong-based venture capital firm specialising in biotechnology
investments in aged care, announced the ‘appointment’ of VITAL
94
to its board.
95
On close
89
See Part III.C.2 above.
90
Chartis (n 86) 7.
91
See par V.
92
See par III.C. 3.
93
APRA (n 2).
94
The acronym refers to ‘Validating Investment Tool for Advancing Life Sciences’.
95
Nicky Burridge, ‘Artificial Intelligence Gets a Seat in the Boardroom’, Nikkei Asian Review (online), (10 May
2017), <https://asia.nikkei.com/Business/Artificial-intelligence-gets-a-seat-in-the-boardroom>; Rob Wile, ‘A
Venture Capital Firm Just Named an Algorithm to its Board of Directors – Here’s what it Actually Does’,
Business Insider Australia (online), (14 May 2014), <https://www.businessinsider.com.au/vital-named-to-
board-2014-5?r=US&IR=T>.
inspection, however, it becomes clear that these first-comers are generally more aptly described
as robo-advisors and have been narrowly employed in business areas where decisions are
focussed on investments. In the case of Deep Knowledge Ventures, the high failure rate of
investments led the company to invest in data analytics of criteria that indicate success or
failure of investments in biotechnology. AI allowed for the identification of patterns that are
claimed not to have been recognisable by human actors before. VITAL is, for instance, fed
with data about prospective investments’ financing, clinical trials, intellectual property and
previous funding rounds.
96
In 2017, the CEO of Deep Knowledge Ventures credited the use of VITAL with saving the firm
from financial collapse.
97
VITAL showed that the focus of the firm’s investments ought to be
on longevity or prevention of diseases rather than on treatment. The limitations of the system
were, however, acknowledged. While being hailed as the first AI member of a board, VITAL
was actually a tool used by the board to augment their own decision-making capability. It did
not have an independent vote at meetings and it certainly did not participate in meetings in the
manner one would expect from a human actor. Instead, the board agreed that it would not take
positive investment decisions without the corroboration of VITAL.
IBM is driving a new move in artificial intelligence in the boardroom, devoting an entire stream
of research to the potential of Watson to augment corporate governance.
98
IBM Watson
first came to the attention of the general public when it won the American game show
Jeopardy! in 2011.
99
Developed specifically with beating this game in mind, it showcased the
ability to interpret natural language even when faced with apparent ambiguity. The technology
that underlies Watson takes information from a variety of sources to compare possible answers
to the question posed. It evaluates which of these answers is supported by the best evidence
and then offers an answer.
100
Importantly, the sources of information are unstructured, meaning
that they are not neatly organised in datasets but come from natural language material. IBM
Watson can interpret these sources to aggregate knowledge sets and can continually update its
knowledge sets in this fashion.
After success on Jeopardy! the researchers working on Watson immediately announced the
extension of their research into other fields of application, the first being its potential use in
medical diagnostics.
101
Two main differences between this application field and that employed
for Jeopardy! were identified.
102
The first is that the system would not offer only a best answer
but would rather offer a range of hypotheses that would assist a health care practitioner in
arriving at the best diagnosis and treatment plan.
103
The final decision would remain with the
96
Wile (n 95).
97
Burridge (n 95).
98
See Tom Simonite, ‘A Room Where Executives Go to Get Help from IBM’s Watson’, MIT Technology Review
(online), 4 August 2014, <https://www.technologyreview.com/s/529606/a-room-where-executives-go-to-get-
help-from-ibms-watson/>.
99
See Adam Gabbatt, ‘IBM Computer Watson Wins Jeopardy Clash’, The Guardian (online), 17 February 2011,
<https://www.theguardian.com/technology/2011/feb/17/ibm-computer-watson-wins-jeopardy>; John
Markoff, ‘Computer Wins on ‘Jeopardy!’: Trivial It’s Not’, The New York Times (online), 16 February 2011,
<https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html>. Watson competed against Ken
Jennings and Brad Rutter, two of the most successful human contestants in history.
100
David Ferrucci et al, ‘Watson: Beyond Jeopardy!’ (2013) 199-200 Artificial Intelligence 93 at 94—95.
101
Markoff (n 99); David Ferrucci et al (n 100) 93.
102
David Ferrucci et al (n 100) 95.
103
David Ferrucci et al (n 100) 95—96, 104. The system is described as a ‘massive abduction machine’, meaning
that it would come up with an initial answer based on the information (symptoms) provided, followed by
eliminating some answers less supported by evidence.
doctor. Secondly, the system would actively encourage the completion of missing information
needed to improve its diagnostic result. In other words, it would encourage the doctor to ask
certain relevant questions or to run additional diagnostic tests. It immediately becomes
apparent that a similar boardroom capability could be of tremendous use – offer a range of
possible decisions or actions to a defined problem or encourage further investigation before
suggestions are offered by the intelligent system.
The major benefit that Watson’s cognitive computing solutions offered when compared to
other pre-existing systems was that Watson would be more user-friendly. Input of data could
happen in real time through the use of its natural language capability – it simply listens in on
the conversation between the patient and the health care practitioner.
104
The latter technology
is labelled ‘Voice-Enabled Cognitive Rooms’, which means that the physical space is
connected by speakers to the IBM Watson Internet of Things Platform.
105
Persons interact with
the system by verbal commands to change features of the physical space, such as room temperature or
lighting, or to request information from the system. The same technology underlies the
governance solution of Ricoh discussed below.
Other explored applications of IBM Watson include the identification of potential research
areas in medical research,
106
the repurposing of existing medication,
107
improving image-based
diagnosis,
108
enhancement of legal research,
109
due diligence in mergers and acquisitions,
110
the hospitality industry
111
and many more. It is predicted that all information-intensive domains
will in future benefit from some form of AI application.
112
The IBM Watson governance research resulted in a first product iteration towards the
end of 2017 with the release of the Ricoh Cognitive Whiteboard.
113
This technology combines
the process automation and cognitive insight capability of AI with the Internet of Things, which
enables data collection and curation. Apart from performing the tasks usually performed by
digital whiteboards, the Ricoh Whiteboard is essentially a scribe, which records what is said at
meetings in real time. Additionally, it can translate the discussion into the participants’ language
104
David Ferrucci et al (n 100) 97—98.
105
Chris Preimesberger, ‘If These Walls Could Speak: IBM, Harman Powering Cognitive Rooms’, eWeek
(online), 19 April 2017, <http://www.eweek.com/innovation/if-these-walls-could-speak-ibm-harman-
powering-cognitive-rooms>.
106
Ying Chen, Elenee Argentinis and Griff Weber, ‘IBM Watson: How Cognitive Computing Can be Applied to
Big Data Challenges in Life Sciences Research’ (2016) 38(4) Clinical Therapeutics 688, 697.
107
Chen, Argentinis and Weber (n 106) 697—698.
108
Davenport (n 7) ch 2.
109
Anthony Sills, ‘ROSS and Watson Tackle the Law’, IBM Blogs (online), 14 January 2016,
<https://www.ibm.com/blogs/watson/2016/01/ross-and-watson-tackle-the-law/>; Jen Clarke, ‘Spotlight on
“LawTech”: How Machine Learning is Disrupting the Legal Sector’, IBM Blogs (online), 11 December 2017,
< https://www.ibm.com/blogs/internet-of-things/iot-spotlight-on-lawtech/>. ROSS Intelligence uses a
technique called ‘technology aided review’ to extract relevant data from unstructured data sets like legal
documents. In other words, it can aid legal research by quickly going through mountains of legal precedent to
come up with relevant authority. According to Clarke it can process over a billion text documents in a second.
See further Paul Lippe, Daniel Martin Katz and Dan Jackson, ‘Legal by Design: A New Paradigm for Handling
Complexity in Banking Regulation and Elsewhere in Law’ (2015) 93 Oregon Law Review 833, 849 – 850.
110
See for instance Rage-AI, <http://www.rageframeworks.com>; eBrevia, <https://ebrevia.com/diligence-
accelerator>; Seal, <https://www.seal-software.com/role-head-ma>; Kira, <https://kirasystems.com>.
111
Preimesberger (n 105).
112
Davenport and Ronanki (n 14).
113
D Craig MacCormack, ‘Alexa and Watson are Taking Over Your Conference Room’, Commercial Integrator
(online), 30 June 2017, <https://www.commercialintegrator.com/blogs/alexa-watson-control-conference-
room/>.
of choice,
114
create action items and perform other tasks previously left to a human assistant.
It can send a full transcript of the meeting to participants afterwards.
Others have warned that IBM’s claimed successes are overplayed and that the natural language
capability of the system often presents real difficulty in transposing it into different business
sectors.
115
For instance, investment analysts at a major investment bank wanted to interpret
their organisation’s combined investment reports in order to make recommendations to clients.
However, each analyst uses vastly different language and structuring in his or her reports. This
could not be solved by current technology, despite some of the claims made in the media by IBM.
Instead, the firm had to appoint an outsourced service to systemise the reports so that they could
be mined for information.
116
As in the case of AI-driven legal compliance systems and other
applications of AI, the quality of the data available for analytics remains important.
IV POTENTIAL FOR FUTURE DEVELOPMENT
It is clear that the potential of AI business systems has not yet reached maturity and that many
benefits will only follow with technological advancement.
117
Not wanting to limit the
discussion to current AI, in this section we attempt to answer a hypothetical question: what is the
potential future development of the above technologies and how may they advance governance?
We have limited this discussion to capabilities that are already mentioned in the context of
existing technology but which have not yet been developed.
Cognitive computing systems
118
may in future become super-reporters of information. If the
system is integrated with other information sources in the organisation it has the potential to
become a deep pool of information that may be accessed by the board to assist in decision-
making.
119
Cognitive computing systems not only have the capacity to access large amounts of
data but also to analyse it and to come up with answers or potential answers. It is therefore
possible that a future board could rely on an intelligent system for information about the affairs
of the business rather than on the executive reports on which it currently relies – data
presented on the surface using smart visualisations but which enables directors to home in on
and drill down into more detailed information if required. This might not sit well with the
executive, which has always maintained a firewall between the day-to-day business of a
corporation and the information it shares with the board.
120
Certainly, recent Australian director
sentiment on board overreach into management has been very critical,
suggesting that AI overreach in the same field is likely to meet an equally frosty reception.
121
Moreover, such systems could be enabled to pool information about governance behaviour
across many users (companies) so that patterns in governance behaviour might be traced, or
feedback could be given about benchmarking of certain governance issues across industries.
For instance, the system could be asked to report on the trends in executive remuneration across
114
At the time of release, it could translate into nine languages.
115
See, for instance, Davenport (n 7) ch 2.
116
Ibid.
117
Davenport (n 7) ch 3 and ch 4. The author lists some issues in ch 4 that work against the wide-scale adoption
of AI in business models at present, to which must be added as an overriding factor the cost of implementation.
118
‘Cognitive computing systems’ is used here in reference to technology that evidences all or any of the traits
according to Davenport and Ronanki’s classification set out above in par II.
119
See above n 109 for references on technology aided review.
120
Lancefield and Gagliardi (n 3): ‘[The executive] may worry that more information in board members’ hands
could lead to poorer decisions if it’s not being examined with the right set of lenses.’
121
Simon Evans, Patrick Durkin and Joanna Mather, ‘“It’s so soul-destroying”: Business backs David Murray’,
Australian Financial Review (12 August 2018) <https://www.afr.com/business/banking-and-finance/its-so-
souldestroying-business-backs-david-murray-20180801-h13ejn>.
a sector or the number of women in senior executive roles in comparable corporations. It could
then cross-reference this with the contributions of the various actors and their general
performance as directors. The concern this raises is the escalating degrees of transparency of
board processes, not all of which might be welcomed by boards. For example, it is foreseeable
that a data platform based on the Internet of Things as previously discussed could develop
benchmarking data on whether directors open and read their board reports and if so, how often
and for what periods of time.
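Cross-company benchmarking of the kind imagined here is, at bottom, aggregation over pooled governance data. A sketch with fabricated company names and figures:

```python
import statistics

# Fabricated governance data pooled across companies in one sector.
boards = {
    "Alpha Ltd": {"women_senior_exec": 4, "senior_exec": 10},
    "Beta Ltd": {"women_senior_exec": 2, "senior_exec": 8},
    "Gamma Ltd": {"women_senior_exec": 5, "senior_exec": 12},
}

def benchmark(boards):
    """Return each company's share of women in senior executive roles
    alongside the sector median, the kind of comparison a pooled
    governance system could report on demand."""
    shares = {
        name: data["women_senior_exec"] / data["senior_exec"]
        for name, data in boards.items()
    }
    return shares, statistics.median(shares.values())

shares, sector_median = benchmark(boards)
print(round(shares["Beta Ltd"], 2), round(sector_median, 2))  # 0.25 0.4
```

The governance question raised in the text is not the arithmetic, which is trivial, but whether companies would consent to their data entering the pool at all.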
The system could potentially log the contributions of the various participants and flag those
that have not yet delivered input, thereby countering group think and unconscious bias, a
primary concern in improving board performance.
122
It could in future address the board to ask
for the input of a particular director who has not yet contributed but who, according to data
already captured by the system, has the expertise to make a valuable contribution. It could ask the
chair to provide an opportunity for minorities represented on the board to address the board
when their opinion has not been heard for a while. It could also serve as a fact checker to flag
mistaken assertions. All of these capabilities have been mentioned as prospective abilities of
the Ricoh Cognitive Whiteboard.
123
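Logging contributions and flagging silent participants, as envisaged for such systems, is straightforward to sketch; the attendees and transcript below are invented.

```python
from collections import Counter

def silent_participants(attendees, transcript):
    """Count speaking turns per attendee from a (speaker, utterance)
    transcript and flag anyone who has not yet contributed."""
    turns = Counter(speaker for speaker, _ in transcript)
    return [name for name in attendees if turns[name] == 0]

attendees = ["Chair", "Director A", "Director B", "Director C"]
transcript = [
    ("Chair", "Item 3: the risk report."),
    ("Director A", "The credit exposure trend concerns me."),
    ("Chair", "Noted."),
    ("Director B", "I agree with Director A."),
]
print(silent_participants(attendees, transcript))  # ['Director C']
```

The hard part in practice is producing the transcript reliably through speech recognition and speaker identification; once it exists, countering silence and group think is a matter of simple counting.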
Davenport recommends that companies design new systems from the outset with proper
consideration of the extent to which AI will be involved in decision-making compared to
human actors, as well as the strengths and weaknesses of each that the other seeks to
augment.
124
In the governance context, the final responsibility for decision-making must always
remain with the members of the board.
125
It is therefore their capability that must be augmented
by the use of cognitive systems. For instance, there are current examples of consulting firms
developing, with the use of machine learning, tools to determine the strategic orientation of
companies and to make appropriate recommendations from those determinations.
126
The same
can be, and is being, done in terms of business models. Similar tools could in future play a key
role in augmenting the ability of boards to plan strategically. Strategic planning needs to
include express consideration of the adoption of AI systems in the corporation, both in
governance practices, as outlined above, and in operational and customer-facing interfaces.
Strategic planning and the monitoring of corporate culture go hand in hand. The board’s
influence over the culture of an organisation may be improved through the use of organisational
network analysis enhanced by machine learning capability.
127
Instead of relying on staff
surveys to determine attitudes, this type of analysis mines the internal emails or telephone calls
of staff to determine who is communicating with whom.
128
It then picks up on how words
are written or spoken, rather than on the content of the messages, to determine the trust and
likeability factor between the actors. The outcome is that it becomes possible to tell who the
influencers are in organisations – those key persons who have the ability inside the organisation
122
Simonite (n 96).
123
See above n 111.
124
Davenport (n 7) ch 3.
125
This is in terms of the general law duty not to fetter discretions as well as the general law and statutory duty
(Corporations Act, s 180(1)) to act with care and diligence. See in general Robert P Austin and Ian M Ramsay,
Ford, Austin and Ramsay’s Principles of Corporations Law (LexisNexis Butterworths, 2015) [8.290]–[8.305].
It is widely acknowledged that AI systems at present do not have ‘common sense’. For instance, in
diagnostic medicine they tend to be biased in favour of diagnosing rare diseases when a more
commonly found malady is much more likely. See Davenport (n 4) ch 4.
126
Davenport (n 7) ch 4 describes the work of Boston Consulting Group in this regard.
127
Harry Toukalas, Tim Boyle and Ian Laughlin, ‘Turn Culture into Competitive Advantage with AI’, (2017) 70
Governance Directions 72.
128
Ibid, 74.
to change the attitudes of those around them.
129
The focus of the implementation of new ideas
or values can then be directed to these individuals, rather than to the organisation as a whole.
This can be done continuously, which enhances reliability, since influencers change over time.
This tool also has predictive capability since it may predict when staff will fail to comply with
certain directives.
130
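The metadata-only approach described above can be sketched in a few lines of Python. This is a toy illustration only: the names, message counts and the simple centrality measure are invented for this example, and commercial tools of the kind cited draw on far richer signals. The point is that influencers can be inferred from who communicates with whom and how often, without reading message content.

```python
from collections import defaultdict

# Toy organisational network analysis: infer likely influencers from
# communication metadata (who contacts whom, how often), not from the
# content of messages. All names and counts below are invented.

comms = [  # (sender, recipient, message_count)
    ("ana", "ben", 40), ("ana", "cal", 35), ("ben", "cal", 5),
    ("dia", "ana", 30), ("dia", "ben", 25), ("eve", "ana", 20),
]

def influence_scores(edges):
    """Weighted degree centrality: total message traffic touching each person."""
    totals = defaultdict(int)
    for sender, recipient, n in edges:
        totals[sender] += n
        totals[recipient] += n
    return dict(totals)

scores = influence_scores(comms)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # 'ana' carries the most traffic in this toy network
```

On this approach, new directives could be seeded through the highest-ranked individuals rather than broadcast to the whole organisation, and the ranking recomputed continuously as influencers change over time.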
V CHALLENGES AND IMPLICATIONS FOR GOVERNANCE THEORY AND
PRACTICE
Technological advances in AI stand to bring real benefits to corporate governance, but
governance will not be business as usual. Some of the challenges that AI systems will present
in the boardroom are similar to those presented by technology in general. Other challenges arise
more specifically from the nature of the technology and its current limitations.
A Opaqueness of decision process
AI systems using machine learning use the presence or absence of a pre-acquired set of
reference points to decide whether an outcome is probable or not. While the reference points
will be known to human actors if the system was built via supervised learning, these reference
points are not known when the system trains through unsupervised learning. In supervised
machine learning, a set of training data with a labelled output is used to train the system. The
instances or examples that would lead to a particular outcome are well known. The analysis is
similar to statistical regression analysis incorporating scoring, which is the likelihood of the
outcome given the presence of a particular set of variables.
131
The training data is known from
the outset in the programming process. In unsupervised machine learning, the data analysed is
not labelled and the result is not known. Instead the system seeks to find patterns in the data
over millions of iterations to predict outcomes. This process may be bolstered by a process
called ‘reinforcement learning’, which means that the system has a defined goal and each time
it moves closer to that goal acknowledgement is fed back into the system to indicate that
progress has been made.
132
Essentially, the system trains itself through trial and error until it
has established the necessary patterns and will continue doing so. It becomes smarter over time.
In this model the data is not labelled, which means that the conclusion of the AI system may
not be traceable to particular reference points.
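The distinction can be sketched in a few lines of Python. This is a toy illustration only: the feature names, weights and figures are invented, and real systems learn their parameters from data rather than having them set by hand. In the supervised sketch every reference point and its weight is explicit and inspectable; in the unsupervised sketch the reference points (here, cluster centres) emerge from the data itself.

```python
import math

# Toy sketch of the supervised/unsupervised distinction discussed above.
# All feature names, weights and figures are invented for illustration.

def score(features, weights, bias):
    """Supervised scoring (cf. logistic regression): the likelihood of the
    labelled outcome given a known set of reference points (features)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical acquisition screen with three explicit reference points:
# target debt ratio, revenue growth, and sector overlap (0 or 1).
p = score([0.6, 0.3, 1.0], weights=[-2.0, 1.5, 0.8], bias=0.1)
print(round(p, 3))  # 0.537; every input's contribution is inspectable

def cluster_1d(values, c1, c2, iterations=10):
    """Unsupervised grouping: split unlabelled values around two centroids.
    The 'reference points' (centroids) emerge from the data itself."""
    for _ in range(iterations):
        a = [v for v in values if abs(v - c1) <= abs(v - c2)]
        b = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(a) / len(a) if a else c1
        c2 = sum(b) / len(b) if b else c2
    return c1, c2

# Unlabelled deal sizes fall into two groups no one specified in advance.
print(cluster_1d([1.0, 1.2, 0.9, 8.0, 8.5, 7.8], 0.0, 10.0))
```

The practical point is the one made in the text: in the supervised case the reference points and their weights are open to inspection, whereas in the unsupervised case only the end result is observable.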
In both cases there is a measure of opaqueness in the steps that the system took to reach a
recommendation or prediction. In the case of supervised learning, one would know that specific
reference points were used to reach the conclusion. However, the relative weight assigned to
each of the reference points in reaching the conclusion may not be clear. This makes the testing
of the reliability of the system very difficult.
133
In an unsupervised learning system, the system
aggregates its reference points by virtue of learning from previously analysed data. This implies
that human actors may not be able to ultimately determine which reference points were used
by the system in reaching its conclusion or decision.
129
Ibid, 75 – 76.
130
Ibid, 75.
131
See in general, Davenport (n 7) ch 1. Supervised learning is still most commonly used.
132
Ibid.
133
See in this regard Joshua A Kroll, Joanna Huey, Solon Barocas, Edward W Felten, Joel R Reidenberg, David
G Robinson and Harlan Yu, ‘Accountable Algorithms’ (2018) 165 University of Pennsylvania Law Review
633. The authors consider different technological mechanisms to address concerns about reliability of systems
but ultimately conclude that none of the current mechanisms individually present a complete solution to this
problem.
If testing is difficult, the human actors who need to rely on the system cannot, through their own
diligence, tell whether they may safely rely on the recommendation or prediction. The result
may be that there is either an under-reliance on the system or an over-reliance. This may be
best illustrated through the use of an example.
Suppose that a corporation’s board resolves that it would not make decisions about acquisitions
without the corroboration of an AI system. At some point, there is a proposal before the board
to acquire a particular company and the proposal is supported both by management and all of
the members of the board. If the AI system does not corroborate their decision, will the board
go ahead with the acquisition despite the dissent, and thereby under-rely, or will it supplant
its own opinion with that of the AI system? Current governance theory holds that each
director must exercise independent judgment and cannot blindly rely on experts in the exercise
of his or her duties as a director.
134
This would extend to reliance on expert systems built on
AI technology. Wary of these obligations, directors may be reluctant to rely on AI predictions
or conclusions, leading to under-reliance with the attendant loss of opportunity.
By the same token, if the board of directors in our example used the system for a period of time
and it proves to be reliable, how much time will it take before the board’s reliance on the AI
system switches from corroboration to initiation? To put it differently, how long will it take
before the board relies on AI to suggest the target for acquisition, to be corroborated by the
board? And when this point is reached, how much dissent from the board will it take to override
the decision or opinion of the AI system? There is a very real risk that directors may reach a
point where they over-rely on the system because it is impossible to show that the system is
incorrect in its finding. This risk will be increased once the intellectual capability of the AI
system equals and then surpasses that of the human actors, a capability that is predicted to
eventuate and evolve anywhere from 2029 onwards.
135
B Over-dependency
Apart from the risk of over or under-reliance, a further risk is that dependency on AI may lead
to an overall decrease in skill and expertise in governance as more of the functions are left to
AI to perform.
136
Over-dependency is a risk attached to the use of all forms of technology, not
just AI, but it is exacerbated here because AI could eventually make decisions on behalf
of human actors.
137
This may in turn lead to the temptation to defer to the decision of AI as a
rule. But what happens when the technology fails or becomes unavailable? Moreover, without
deep knowledge of and skill in proper governance, how will human actors remain capable of
judging the merit and reliability of decisions or recommendations emanating from AI?
134
ASIC v Healy (2011) 83 ACSR 484. See further Austin and Ramsay (n 123) [8.305] and the discussion in par
V below.
135
Ray Kurzweil, the current Director of Engineering at Google, has maintained for some time that AI will
achieve human intelligence by 2029. Recently, Google has publicly predicted that the ability to surpass human
intelligence should eventuate between 2029 and 2045. See Sean Martin, ‘AI Warning: Robots will be Smarter
than Humans by 2045, Google Boss Says’, Express (17 October 2017) (online),
<https://www.express.co.uk/news/science/867565/google-artificial-intelligence-ray-kurzweil-AI-singularity>.
A general distinction is drawn between artificial general intelligence and technological singularity. Artificial
general intelligence refers to the ability of machines to match humans in that they ‘[possess] common sense
and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a
wide range of natural and abstract domains’ (Bostrom, n 7, 3). ‘Technological singularity’ refers to the point
when machines exceed human cognitive ability. The term was first coined by Vernor Vinge (n 5) 11.
136
Lin, Abney and Bekey (n 12) 947.
137
We only consider technology outside of the human body here and not robotic augmentation that forms part of
the human body. See in this regard, Lin, Abney and Bekey (n 12) 946; Vinge (n 5) 12.
Dependency on AI must therefore be carefully monitored, as must the continued development
of sound governance knowledge and practices on the part of the human actors who use it.
C The business judgment rule
The advances in technology set out above have their most immediate effect on the duty of directors
to act with care and diligence.
138
The availability of enhanced information about the
corporation and the enhanced reliability of checks in the form of internal audit imply that the
duty of care and diligence will be elevated. In essence, the combination of these information
sources and the skills that the directors bring to the boardroom should lead to an increased
standard of care. At the same time, especially when AI capability comes into its own, there
may be the temptation to rely on the support of the technology to the exclusion of independent
judgment.
139
This brings the applicability of the business judgment rule into play.
A director’s business judgment will comply with her duty of care and diligence if she informed
herself of the subject matter of the judgment to the extent that she reasonably believes to be
appropriate
140
and if she rationally believes that the judgment is in the best interests of the
corporation.
141
A director’s reasonable reliance on the information or advice provided by others
will also need to be reassessed.
142
A director may rely on the information and advice provided
by others if the reliance was made in good faith and after making an independent assessment
of the information or advice, having regard to the director's knowledge of the corporation and
the complexity of the structure and operations of the corporation.
143
It is hoped that technology
will reduce the difficulty that complexity in structure and operations may pose, but the
requirement of an independent assessment of the information or advice offered by the system remains in place.
This might prove difficult, especially if the system is one that derives its functionality through
unsupervised machine learning.
144
Moreover, the reasonable reliance provision is currently
couched in terms that foresee the information or advice emanating only from specified human
actors.
145
Increasingly the creation of information and advice will become autonomous from
human actors, which might mean that the content of the reasonable reliance provision could
become unnecessarily restrictive and in need of amendment.
It is much harder to overcome the challenge of applying independent judgment in a decision,
or independent assessment of information or advice more generally, when the reasoning of an
AI system leading to the specific recommendation or prediction is not open to determination or
comprehension. At the same time, it raises the prospect that directors may find
themselves between a rock and a hard place: if they refuse to use predictive or augmenting AI
capability in the boardroom on the basis that they cannot exercise independent judgment over
138
Corporations Act (n 123) s 180.
139
See par V above.
140
Corporations Act (n 123) s 180(2)(c).
141
Corporations Act (n 123) s 180(2)(d).
142
Corporations Act (n 123) s 189.
143
Corporations Act (n 123) s 189(b).
144
See the discussion par V above.
145
Corporations Act (n 123) s 189(a): ‘a director relies on information, or professional or expert advice, given or
prepared by: i) an employee of the corporation whom the director believes on reasonable grounds to be reliable
and competent in relation to the matters concerned; or (ii) a professional adviser or expert in relation to matters
that the director believes on reasonable grounds to be within the person's professional or expert competence;
or (iii) another director or officer in relation to matters within the director's or officer's authority; or (iv) a
committee of directors on which the director did not serve in relation to matters within the committee's
authority [emphasis added]’.
its functions, they may act in conflict with the duty to act in the best interests of the
corporation.
146
D The board/management divide
Increased use of automated processes, and the potential improvements it may bring in terms of
more reliable internal auditing, better risk flagging and speedier, more direct access to
information about organisational conduct, brings with it a challenge to the board/management
relationship. The classical and well-entrenched informational divide between management and
the board may become muddled as directors and board committees become able to directly
access the information that they may want about certain managerial actions and operations. This
introduces the risk, and the fear from management, that the board may usurp functions typically
left to management instead of focusing on the strategic and supervisory role left to the board.
Ultimately, it raises the question of the proper role of the board. We believe that the time
and remuneration constraints attached to board appointments will keep interference in
management in check. Fears about meddling in management will probably not be realised.
E Data considerations
Some of the benefits of benchmarking may be hampered by a lack of generalisability of systems
across many corporations – the unique business attributes (read: data) of corporations might
mean that most solutions will not be off the shelf but rather developed within a specific
organisation. Bespoke systems are costlier. Moreover, the usefulness of data gathered, for
instance, via the Internet of Things through AI systems across many organisations could prove
limited if the variance between these organisations is too great.
A key obstacle to the realisation of AI advances is the fact that data in organisations has not
kept pace with the development of AI, machine learning or even automation. Most data is
unstructured, and systemising it would require extensive resources. Furthermore, with the
emphasis in the recent past having been on optimising the storage of data to increase computing
power, much of the valuable data that could have informed the development of AI
capability in organisations has often been intentionally destroyed.
147
Cyber security concerns may
further dampen organisations’ enthusiasm to keep data for longer periods, which may also
weaken the pool of available data.
Tied in with the previous point is the push for greater digitisation of information inside
corporations.
148
AI systems depend on reliable data in order to function, and so it stands to reason
that the creation and curation of data to enable the functioning of AI systems must become
institutionalised for the full benefits of AI to eventuate. We predict that this digitisation effort
will increasingly extend to the discussions at board and board committee meetings. The days
of the redacted board minutes may be numbered.
149
The technology already exists to create and
146
Corporations Act (n 123) s 181(1)(a).
147
SSON (n 24) 18.
148
See Lippe, Katz and Jackson (n 107) 844 where they make a similar point with regards to operations (contracts)
inside organisations.
149
Corporations Act (n 123) s 251A requires proceedings and resolutions of directors’ meetings to be included in
minutes. Catherine Livingstone recently claimed in proceedings before the Financial Services Royal
Commission that she challenged the board over its downplaying of the severity of AUSTRAC’s investigation
into allegations of the Commonwealth Bank of Australia’s breaches of anti-money laundering regulation. She
also acknowledged that this challenge was not recorded in the minutes. The issue of the AUSTRAC investigation
was listed in the agenda, but the minutes only recorded that the matter was discussed. See Sarah Danckert and
Clancy Yeates, ‘CBA Accused of Board Minutes Criminal Breach’ The Sydney Morning Herald (21
November 2018) (online), <https://www.smh.com.au/business/banking-and-finance/cba-accused-of-board-minutes-breach-20181121-p50hcp.html>.
store full transcripts of proceedings with minimal interruption or effort.
150
It will become
increasingly difficult for directors to justify the absence of such data when it may be useful for
the improvement of the governance of the corporation. The creation of the data for purposes of
AI analysis and governance improvement stands separately from the rights or interests of third
parties to gain access to the information.
151
The creation of the data does not necessarily mean
that access should be granted to a larger group of persons.
The digitisation of board proceedings does, however, hold confidentiality implications.
152
This
will be especially true if the data is stored in the cloud or if the AI analysis is done via
off-the-shelf products, with the customisation done with the assistance of proprietors who also serve
competitors or whose interests conflict with those of the corporation. These concerns have
already received some attention in other areas.
153
Moreover, dependency on the cloud may
have consequences across whole business sectors if it should fail, which holds some risk of its
own.
There will inevitably be some push-back from directors against full transcripts of meetings,
on the basis that they must be able to speak their minds openly and without restriction at
such meetings. Furthermore, there will be the unspoken fear that their contributions, or lack
thereof, at meetings may be held against them in future when they are taken to task for breach
of duty.
154
It must further be remembered that much of the work of boards happens outside of
the boardroom, in private discussions and groundwork that precede the actual board meetings.
These discussions will typically not be captured, meaning that the data captured at the actual
board meeting may not be as rich in information as hoped for.
155
This might be alleviated by
an awareness amongst directors of the importance of recording outside discussions formally
inside the board meeting. More generally, directors need to become aware of the importance
of data generation and reliability inside their organisations.
150
The Ricoh Cognitive Whiteboard (above, par III. D) provides this function, amongst others.
151
Members do not generally have the right to inspect the minutes of board meetings, but may be granted limited
access if the member can show that the inspection is necessary for a specific dispute in which the member
has a special interest. See generally, Austin and Ramsay (n 123) [11.390].
152
A full discussion of this area falls outside the scope of this paper. For more on data governance, see John
Ladley, Data Governance: How to Design, Deploy, and Sustain an Effective Data Governance Program
(Morgan Kaufmann, 2012); David Plotkin, Data Stewardship: An Actionable Guide to Effective Data
Management and Data Governance (Elsevier, 2014).
153
See for instance Maziar Peihani, ‘Financial Regulation and Disruptive Technologies: The Case of Cloud
Computing in Singapore’ (2017) Singapore Journal of Legal Studies 77, considering the regulation of the use
of cloud computing and other substantial outsourced services for the financial services industry in Singapore.
See further Basel Committee on Banking Supervision, Sound Practices: Implications of Fintech Developments
for Banks and Bank Supervisors (2017) 36 – 38, where recommendations are made for best practice where
outsourced services are extensively relied upon by financial institutions.
154
See Ellie Chapple and Elisabeth Sinnewe, ‘So What’s a Secretary to Do? Banking Royal Commission Raises
Questions About What’s in Minutes’ The Conversation (29 November 2018) (online)
<http://theconversation.com/so-whats-a-secretary-to-do-banking-royal-commission-raises-questions-about-
whats-in-minutes-107509>: ‘Company secretaries are already acutely aware that every set of minutes of every
board meeting might one day end up as evidence.’ See further Australian Securities and Investments
Commission v Hellicar (2012) 286 ALR 501 at [138] where the resolution adopting a misleading ASX
announcement in the minutes of a board meeting was held to have been a ‘contemporaneous record of
proceedings at the meeting’. See further Austin and Ramsay (n 123) [7.570].
155
The study by Forrester Consulting (n 28) 2 found that half of governance professionals (board members and
C-level executives) surveyed use their personal emails for sensitive internal board communications despite the
availability of board portals that offer secure chat rooms.
VI CONCLUSION
Corporate boardrooms globally are taking nascent steps towards their own digital
transformation. Adopting Davenport and Ronanki’s framework for AI systems, our
investigation shows that most of the current AI applications that aid governance fall
within the process automation classification (board portals, risk and auditing systems, legal
compliance), with some inroads having been made in cognitive insight (risk management,
internal audit, legal compliance). Systems that exercise cognitive engagement have not yet
come into their own but may yet add value in the boardroom as the technology matures (smart
visualisations with recommended actions, robo-advisors, capabilities enabled by the Internet
of Things). All of these technologies increase the collaborative intelligence in the boardroom,
as directors gain quicker and more reliable access to the information they need for efficient
decision-making. This frees up time that may be better spent on strategic planning and oversight.
Our central conclusion is therefore that AI, and the technology that supports it by way of data
collection and management, is poised to lend real benefit to governance practice. The
consequential gaps in the current statutory formulation of the business judgment rule could be
easily remedied by legislative amendment or even by means of expansive judicial
interpretation.
However, recent events brought to light by the Royal Commission show that despite the
availability of improved information, as facilitated by the technology discussed, governance
failures still abound. For instance, the benefits of greater risk reporting and internal audit
facilitated by the automated systems discussed above could have led to swifter and more
decisive action at CBA in the case of its breaches of anti-money laundering legislation.
156
Yet despite these issues being flagged to the board, the focus was mostly on financial risk
rather than on non-financial risk arising from non-compliance with the law.
The message seems clear: technology can only bring us so far. The rest of the way is determined
by the exercise of sound governance principles by company boards and the consistent nurturing
of a culture of compliance and ethical behaviour in their firms.
156
See Royal Commission (‘Final Report’ n 82) 396.