Article

AI-Supported Adjudicators: Should Artificial Intelligence Have a Role in Tribunal Adjudication?

... Studies addressing the Technical dimension show that the outputs proposed by AI systems can be relevant sources of information for judicial decision-making in courts in different countries, such as Japan (Hayashi and Wakabayashi, 2017), the USA (Angwin et al., 2016), Australia (Finck, 2020), Mexico (Caceres, 2008), Canada (Beatson, 2018), China (Zhong et al., 2018) and India (Chanda, 2018), as well as on the European continent (Aletras et al., 2016). ...
... Another point of convergence in the literature concerns the technical dimension described in the framework, given that the processing of the proposed system depends directly on the types and levels of AI being applied. That is, much is discussed in the literature about what is in use in courts in terms of AI (Agrawal, Gans and Goldfarb, 2019; Angwin et al., 2016; Beatson, 2018; Hayashi and Wakabayashi, 2017); however, little is discussed about those most affected by these tools: the users (judges and IT professionals) and the clients, which in the case discussed here means the citizens themselves who turn to the judiciary to resolve conflicts and secure their rights. ...
Conference Paper
The use of AI (Artificial Intelligence) in judicial decisions is already a reality in several courts around the world; in Brazilian courts, however, few AI systems are fully operational. This study aimed to present the AI projects under way in Brazilian courts and to propose a framework for analysing AI applied to judicial decision-making in courts. The results showed that of the 91 Brazilian courts of justice only 18 have an AI project, and most involve weak AI with specific applications to non-autonomous judicial decision-making, that is, the final word still belongs to the judge. The framework for analysing the phenomenon will serve as the basis for a research agenda of empirical studies to be conducted with judges as well as IT analysts in the courts.
... A single device could bypass mandatory training processes, performance monitoring and the securing of personnel benefits and instead deliver prompt adjudication for a vast number of cases, limited only by computing power and energy resources, eventually lowering the costs involved. Since the same AI adjudicator could be deployed to resolve many disputes, with a single program capable of handling a large caseload, it would afford a degree of uniformity that would otherwise be unattainable (Beatson, 2018). In this context, AI adjudication could abate, and to a certain extent eliminate, the arbitrariness or bias inherent in human judges (Re & Solow-Niederman, 2019). ...
Article
In Canada there is no statute directly regulating the administration's use of artificial intelligence and algorithms. This gap is filled by the administration's general regulatory instruments. The Canadian federal government publishes public service and digital policies on matters such as the delivery of public services, information technology and cybersecurity. The Directive on Automated Decision-Making provides for an Algorithmic Impact Assessment intended to head off the problems that algorithm-based administrative decision-making may create for principles of procedural fairness such as equality, impartiality, transparency, the principle of participation and the duty to give reasons. Taking this Canadian regulation as its starting point, this study examines the administration's use of artificial intelligence in decision-making processes in the age of algorithms.
Thesis
The possibility of implementing robotic judgement to solve legal disputes has been an object of study in the field of Law and Natural Language Processing (NLP) for decades, which has allowed the development of Machine Learning (ML) models that often revolved around the prediction of case outcomes. Rather than mere outcome prediction, algorithms trained on judicial data to support contemporary judges arguably have the potential to improve the efficiency and quality of legal decision-making, therewith providing more access to a higher standard of justice. The scholarly legal field often argues that algorithms remain unable to support judicial decision-making for reasons of input bias, opaqueness and the lack of a reasoned explanation. With the implementation of the General Data Protection Regulation's Article 22, stipulating the right to an explanation and a general prohibition of automated decision-making, this discussion has arguably gotten more complicated. This thesis addresses the contemporary status of the legal framework for algorithmic transparency, and its requirements for explainability. This research therewith evaluates the possibilities of applying a working definition of explainability to recently developed technical capabilities of ML and NLP, whilst classifying the analysed models based on their complexity. Through synthesizing academic literature, evaluating model performance measurements and based on expert advice, this thesis's main purpose shall be to demonstrate whether a future with robotic judgement is possible with the introduction of explainable judicial decision-supporting algorithms.
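The thesis abstract above centres on whether a decision-supporting model can offer a reasoned explanation. As a purely illustrative sketch, not a model from the thesis, the snippet below trains a small, inherently interpretable classifier on invented case summaries and reads off the most influential terms per outcome, roughly the feature-level baseline the explainability debate starts from; the texts, labels and library choice (scikit-learn) are all assumptions.

```python
# Illustrative only: a minimal, inherently interpretable outcome classifier.
# The case texts and labels below are invented for this sketch.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

cases = [
    "tenant withheld rent after landlord failed to repair heating",
    "employee dismissed without notice after ten years of service",
    "landlord gave proper notice and tenant breached the lease terms",
    "employer documented repeated misconduct before the dismissal",
]
outcomes = [1, 1, 0, 0]  # 1 = claimant succeeds, 0 = claimant fails (hypothetical)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cases)
model = LogisticRegression().fit(X, outcomes)

# A feature-level "explanation": which terms push the predicted outcome either way.
terms = np.array(vectorizer.get_feature_names_out())
weights = model.coef_[0]
order = np.argsort(weights)
print("terms pulling towards the claimant:", terms[order[-3:]].tolist())
print("terms pulling towards the respondent:", terms[order[:3]].tolist())
```

Even this transparent baseline illustrates the gap the thesis probes: term weights are not legal reasons, and an Article 22-style "right to an explanation" arguably demands more than a list of influential features.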
Thesis
Artificial intelligence (AI), although a technique that dates back to the 1950s, is today deployed in the name of modernisation across several sectors, including the legal sector. Criminal law illustrates this point perfectly. AI tools for the prevention and punishment of criminal offences are presented as the weapons of the 21st century, built to answer the dangers of the 21st century, namely the terrorist acts Europe has suffered in recent years. The danger lies in believing that these tools, whose drawbacks we do not fully know, are the ultimate solution to problems that already exist in our society. That is the mindset of technological solutionism. This study is neither techno-pessimistic nor techno-optimistic. It does, however, aim to recall that the intrusiveness of personal data processing is usually legitimised by the criminal-law purpose of the processing, and that this intrusiveness is amplified by the use of algorithms and AI tools that are not transparent and for which a zero risk of bias does not exist.
Article
One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.
Article
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is a key to determining which of a variety of technical and non-technical solutions could help to prevent harm.
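The third form of opacity described above, the one that "arises from the characteristics of machine learning algorithms and the scale required to apply them usefully", can be made concrete with a small hypothetical sketch: even with complete access to the code and the trained model, the learned object is a mass of numeric thresholds with no human-readable rationale. The dataset and model below are assumptions chosen only for illustration.

```python
# Illustrative only: even a modest ensemble accumulates thousands of learned
# decision thresholds that no reader can inspect line by line.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

total_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"trees: {len(model.estimators_)}, decision nodes: {total_nodes}")
# Code and parameters are fully available here, yet the basis of any single
# prediction stays opaque in the third sense distinguished above.
```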
Article
That emotion should play a role in legal decision-making has been seen as inimical to the rule of law. Recent neuroscience research, however, has demonstrated that emotion plays a key role in legal decision-making, in particular the criminal law where personal, social, and moral circumstances are considered. The High Court recently considered judicial decision-making in Markarian v The Queen, particularly as it relates to sentencing, where the majority putatively upheld the "instinctive synthesis" approach. Labels aside, this article will evaluate the decision-making processes proposed by the judges, and potential alternative approaches, in the light of what is possible neurobiologically. This will include an analysis of which of the approaches to sentencing are most consistent with rational decision-making, together with an assessment of the role of emotion. The article will conclude that, in Markarian, the High Court in fact unanimously rejected the earlier form of Williscroft "instinctive synthesis", which was the sentencing method most likely to allow unregulated emotion to bias decisions. The court had proposed an alternative form of decision-making, Markarian synthesis, which allowed an essential role for emotion, but included the safeguard of processes more typically associated with reason and deliberation. In this, the court endorsed a form of decision-making which was consistent, neurobiologically, with the highest likelihood of arriving at rational, well informed, yet humane decisions.
Article
Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more. The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness. We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities-subtler and more flexible than total transparency-to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also-in certain cases-the governance of decisionmaking in general. The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the "brain" of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward. The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society. Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which the decisions were reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department's diversity visa lottery. 
In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how automated decisionmaking may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly, in Part IV, we propose an agenda to further synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.
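One building block behind the "procedural regularity" idea in Part II can be sketched with a simple cryptographic commitment: the decision rule is hashed together with a secret nonce and the digest is published before any cases are decided, so the rule can later be revealed and checked against the commitment. This toy, standard-library sketch is offered under assumptions and is far weaker than the techniques the Article surveys (which extend to zero-knowledge proofs and verification of the software itself); the rule contents and workflow below are hypothetical.

```python
# Toy sketch of procedural regularity via a hash commitment (standard library only).
# The rule, field names and workflow are hypothetical.
import hashlib
import json
import secrets

def commit(rule_source: str):
    """Publish the digest now; keep the nonce secret until audit time."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + rule_source).encode()).hexdigest()
    return digest, nonce

def verify(rule_source: str, nonce: str, digest: str) -> bool:
    """An auditor checks the revealed rule against the earlier commitment."""
    return hashlib.sha256((nonce + rule_source).encode()).hexdigest() == digest

# A fully specified (hypothetical) decision rule, committed before any case is decided.
rule = json.dumps({"criterion": "random_draw", "seed_source": "published_lottery_seed"})
published_digest, secret_nonce = commit(rule)

# ... decisions are made under the committed rule ...

# Later the agency reveals (rule, nonce); anyone can confirm the rule never changed.
assert verify(rule, secret_nonce, published_digest)
print("committed rule verified against digest", published_digest[:16] + "...")
```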
Article
Artificial intelligence technology (or AI) has developed rapidly during the past decade, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. A growing chorus of commentators, scientists, and entrepreneurs has expressed alarm regarding the increasing role that autonomous machines are playing in society, with some suggesting that government regulation may be necessary to reduce the public risks that AI will pose. Unfortunately, the unique features of AI and the manner in which AI can be developed present both practical and conceptual challenges for the legal system. These challenges must be confronted if the legal system is to positively impact the development of AI and ensure that aggrieved parties receive compensation when AI systems cause harm. This article will explore the public risks associated with AI and the competencies of government institutions in managing those risks. It concludes with a proposal for an indirect form of AI regulation based on differential tort liability.
Article
This Article critiques, on legal and empirical grounds, the growing trend of basing criminal sentences on actuarial recidivism risk prediction instruments that include demographic and socioeconomic variables. I argue that this practice violates the Equal Protection Clause and is bad policy: an explicit embrace of otherwise-condemned discrimination, sanitized by scientific language. To demonstrate that this practice raises serious constitutional concerns, I comprehensively review the relevant case law, much of which has been ignored by existing literature. To demonstrate that the policy is not justified by countervailing state interests, I review the empirical evidence underlying the instruments. I show that they provide wildly imprecise individual risk predictions, that there is no compelling evidence that they outperform judges' informal predictions, that less discriminatory alternatives would likely perform as well, and that the instruments do not even address the right question: the effect of a given sentencing decision on recidivism risk. Finally, I also present new empirical evidence, based on a randomized experiment using fictional cases, suggesting that these instruments should not be expected merely to substitute actuarial predictions for less scientific risk assessments but instead to increase the weight given to recidivism risk versus other sentencing considerations.
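To make concrete what is being critiqued, the snippet below sketches a toy point-based instrument of the general kind described, in which demographic and socioeconomic items feed directly into the risk category; the factors, weights and cut-offs are invented for illustration and do not reproduce any real instrument.

```python
# Toy actuarial-style scoring instrument; every weight and cut-off is invented.
# It only illustrates how demographic and socioeconomic items enter the score.
POINTS = {
    "prior_convictions_3_plus": 3,
    "age_under_25": 2,       # demographic item
    "unemployed": 2,         # socioeconomic item
    "unstable_housing": 1,   # socioeconomic item
}

def risk_category(attributes: set) -> str:
    score = sum(pts for item, pts in POINTS.items() if item in attributes)
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

# Identical criminal history, different personal circumstances, different category.
print(risk_category({"prior_convictions_3_plus"}))                  # medium
print(risk_category({"prior_convictions_3_plus", "age_under_25",
                     "unemployed", "unstable_housing"}))             # high
```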
Article
Paper presented at the Canadian Bar Association’s 12th Annual National Administrative Law and Labour & Employment Law Conference, November 25th 2011.
Article
A central tenet for human-computer interaction and decision support system design is the emphasis on the human-centered approach. Human cognitive strengths and weaknesses, the variability of human response, and sometimes unpredictable decision making behavior by both human and computers require that engineers and designers not only understand the physical limitations of a system, but also attempt to predict how human-computer interactions could introduce potentially dangerous and life threatening situations. However, one element that is often overlooked in design of these systems is the ethical and social impact that interactions can have on the individual operator, a team, and society at large. Engineers who design decision support systems (DSSs) and computer interfaces have a number of additional ethical responsibilities beyond those of engineers who only interact with the mechanical or physical world. When the human element is introduced into decision and control processes, entirely new layers of design, social, and ethical issues (to include moral responsibility) emerge, but are not always recognized as such. Ethical and social impact issues can arise during all phases of design, and identifying and addressing these issues as early as possible can help engineers to both analyze a domain more comprehensively as well as suggest specific design guidance. The interaction between cognitive limitations, system capabilities, and ethical and social impact cannot be easily quantified using formulas and mathematical models. Often what may seem to be a straightforward design decision can carry with it ethical implications that may go unnoticed. If design of a DSS is faulty or fails to take into account a critical social impact factor, the results could not only be expensive in terms of later redesigns and lost productivity, but possibly also the loss of life. One such design consideration is the use of automation in a decision support system. For example, in the 2004 U.S. war with Iraq, the U.S. Army's highly automated Patriot missile system engaged in fratricide, shooting down a British Tornado and an American F/A-18, killing three aircrew. Humans were theoretically "in control," but the displays were confusing and often incorrect, and operators, who only were given ten
Article
A Google search for a person's name, such as "Trevon Jones", may yield a personalized ad for public records about Trevon that may be neutral, such as "Looking for Trevon Jones?", or may be suggestive of an arrest record, such as "Trevon Jones, Arrested?". This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites. First names, assigned at birth to more black or white babies, are found predictive of race (88% black, 96% white), and those assigned primarily to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while those assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word "arrest" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. On the more ad trafficked website, a black-identifying name was 25% more likely to get an ad suggestive of an arrest record. A few names did not follow these patterns. All ads return results for actual individuals and ads appear regardless of whether the name has an arrest record in the company's database. The company maintains Google received the same ad text for groups of last names (not first names), raising questions as to whether Google's technology exposes racial bias.
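The abstract reports proportions (for example, arrest-suggestive ads on 81 to 86 percent of searches for black-identifying names versus 23 to 29 percent for white-identifying names on one site). As a back-of-the-envelope illustration only, the sketch below runs a standard two-proportion z-test on hypothetical counts consistent with those figures; the per-group sample sizes are assumptions, not the study's actual cell counts.

```python
# Back-of-the-envelope two-proportion z-test; the counts below are assumed,
# not the study's actual cell counts.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 81% of 500 searches on black-identifying names versus 29% of 500
# searches on white-identifying names returning arrest-suggestive ad copy.
z, p = two_proportion_z(405, 500, 145, 500)
print(f"z = {z:.1f}, two-sided p = {p:.2g}")  # a gap this large is far beyond chance
```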
Conference Paper
Decision making can be regarded as the outcome of a cognitive process leading to the selection of a course of action among several alternatives. With the latest advances in decision making systems, it is important to study which methods provide better results and in which fields and contexts. By creating a hybrid approach to decision making we can outline the performance of, and deviations between, different methods, but more importantly we establish the foundation for building a hybrid decision making system. Through the analysis of decision making methods, we are able to build a hybrid decision making system, which is a composition and cooperation of different methods and uses the optimal methods for specific contexts. In this paper the author presents an approach to creating and evaluating hybrid decision making systems.
Article
Today, computer systems terminate Medicaid benefits, remove voters from the rolls, exclude travelers from flying on commercial airlines, label (and often mislabel) individuals as dead-beat parents, and flag people as possible terrorists from their email and telephone records. But when an automated system rules against an individual, that person often has no way of knowing if a defective algorithm, erroneous facts, or some combination of the two produced the decision. Research showing strong psychological tendencies to defer to automated systems suggests that a hearing officer's check on computer decisions will have limited value. At the same time, automation impairs participatory rulemaking, the traditional stand-in for individualized due process. Computer programmers routinely alter policy when translating it from human language into computer code. An automated system's opacity compounds this problem by preventing individuals and courts from ascertaining the degree to which the code departs from established rules. Programmers thus are delegated vast and effectively unreviewable discretion in formulating policy. Professor Citron will be talking about a concept of technological due process that can vindicate the norms underlying last century's procedural protections. A carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. Her proposal provides a framework of mechanisms capable of enhancing the accuracy of rules embedded in automated decision-making systems.
Bob Tarantino, "Should justice be delivered by AI?" (April 2018), Policy Options, online: <http://policyoptions.irpp.org/magazines/april-2018/should-justice-be-delivered-by-ai/>.
Paul Daly, "Judicial Review and Administrative Decision-Making" (October 2013), online: <http://www.administrativelawmatters.com/blog/2013/10/28/judicial-reviewand-administrative-decision-making/>.
See for example Craig E. Jones, "The troubling new science of legal persuasion: heuristics and biases in judicial decision-making" (2013) 41
Adjudicative Tribunals Accountability, Governance and Appointments Act, 2009, SO 2009, c 33, Sch 5, s 14(1). The government may make exceptions to this. See s 2 of Appointment to Adjudicative Tribunals, O Reg 88/11, where the requirement for a competition is waived in a number of circumstances, including re-appointments and cross-appointments.
Ryan v. Canada (Attorney General), 2005 FC 65, 2005 CarswellNat 156, 2005 CarswellNat 1995, [2005] F.C.J. No. 110 (F.C.) at para. 14. See also R E Hawkins, "Reputational Review I: Expertise, Bias and Delay" (1998) 21:1 Dal LJ 5 at 9-26.
Torin Monahan, "Algorithmic Fetishism" (2018) 16:1 Surveillance & Society. See also Suzanne L. Thomas, Dawn Nafus, and Jamie Sherman, "Algorithms as fetish: Faith and possibility in algorithmic work" (2018) 5:1 Big Data & Society.
Mary Liston, "Administering the Canadian Rule of Law" in Colleen M. Flood and Lorne Sossin (eds), Administrative Law in Context, 3rd ed. (Toronto: Emond Montgomery, 2017) at 2:4. In a leading rule of law case, Roncarelli c. Duplessis, 1959 CarswellQue 37, [1959] S.C.R. 121 (S.C.C.) (judgments of Rand J. and Cartwright J.), an issue was whether a decision supposed to be taken by the commissioner was made instead by the Quebec premier.
Kate Glover, "The Principles and Practices of Procedural Fairness" in Colleen M. Flood and Lorne Sossin (eds), Administrative Law in Context, 3rd ed. (Toronto: Emond Montgomery, 2017).
Baker v. Canada (Minister of Citizenship & Immigration), supra, note 102, para. 46.
See Roberts v. R., 2003 SCC 45, 2003 CarswellNat 2822, 2003 CarswellNat 2823, (sub nom. Wewayakum Indian Band v. Canada) [2003] 2 S.C.R. 259 (S.C.C.); Newfoundland Telephone Co. v. Newfoundland (Board of Commissioners of Public Utilities), 1992
See Geza v. Canada (Minister of Citizenship & Immigration) (2006), 2006 CAF 124, 2006 FCA 124, 2006 CarswellNat 706, 2006 CarswellNat 2310, [2006] 4 F.C.R. 377, 41 Admin.
Tyler Woods, "'Mathwashing', Facebook and the zeitgeist of data worship" (8 June 2016), online: <https://technical.ly/brooklyn/2016/06/08/fred-benenson-mathwashing-facebook-data-worship/>.
Liston, supra, note 97 (among them Mary Liston names "public inquiries, royal commissions, task forces, department investigations...and ombudsmen").
Ontario Government, "Digital Government: Making government work better for people in the digital age", online: <https://www.ontario.ca/page/digital-government>.
"AI-augmented government" (April 2017), Deloitte Insights, online: <https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/artificial-intelligencegovernment.html>.
Carolyn Gruske, "Companies move forward in Ontario AI Legal Challenge", The Lawyer's Daily (4 January 2018), online: <https://www.thelawyersdaily.ca/articles/5581/companies-move-forward-in-ontario-ai-legal-challenge>.
"Predictive justice: when algorithms pervade the law" (June 2017), Paris Innovation Review, online: <http://parisinnovationreview.com/articles-en/predictive-justicewhen-algorithms-pervade-the-law>.
Springer (this is also called the "interpretability problem" in the computer science literature).
Norton Rose Fulbright, "AI Summit" (15 November 2017) (a certain level of technical understanding is probably a baseline for accountability; Canadian lawyer Jane Caskey calls for "white boxing" solutions that dissect and deconstruct algorithms).
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks" (23 May 2016), ProPublica, online: <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>.
Ewert v. Canada, 2018 SCC 30, 2018 CarswellNat 2804, 2018 CarswellNat 2805 (S.C.C.).
Jack Clark, "Artificial Intelligence Has a 'Sea of Dudes' Problem" (23 June 2016), Bloomberg Technology, online: <https://www.bloomberg.com/news/articles/2016-06-23/artificial-intelligence-has-a-sea-of-dudes-problem>.
Government of Canada, "Canada's Innovation And Skills Plan", online: <www.budget.gc.ca/2017/docs/themes/Innovation_en.pdf>; Government of Canada, "Innovation superclusters initiative (ISI): Program guide", online: <https://www.ic.gc.ca/eic/site/093.nsf/eng/00003.html>; Treasury Board of Canada Secretariat, News Release, "Government of Canada review will ensure that innovation and clean technology programs deliver results for Canada's innovators" (6 September 2017), online: <https://www.canada.ca/en/treasury-board-secretariat/news/2017/09/government_of_canadar-eviewwillensurethatinnovationandcleantechno.html>.
Norton Rose Fulbright, "Artificial Intelligence summit 2017" (15 November 2017), online: <http://www.nortonrosefulbright.com/knowledge/events/157194/artificial-intelligence-summit-2017>.