Data-Driven Decision-Making and The 'Rule of Law'
Emre Bayamlıoğlu & Ronald Leenes
Tilburg University, TILT
TILT Law & Technology Working Paper
Paper presented in TILTING 2017: TILTING PERSPECTIVES- May 2017
Tilburg /Netherlands
Data-Driven Decision-Making and The 'Rule of Law'
Emre Bayamlıoğluψ & Ronald Leenesπ
The paper intends to identify certain “rule of law” implications of Big Data analysis
from a techno-regulatory perspective, namely: (i) the collapse of the normative
enterprise, (ii) the erosion of the moral enterprise, and (iii) the replacement of the
causative basis with correlative calculations. Although these implications are not
specific to the Big Data space but are of a general nature regarding techno-regulation,
each of them becomes aggravated and extends into deeper dimensions when
techno-regulation is implemented through data-driven systems.
This is a longer version of the paper: Emre Bayamlıoğlu and Ronald Leenes, ‘The “rule of law” implications of data-
driven decision-making: a techno-regulatory perspective’, LAW, INNOVATION AND TECHNOLOGY, 2018, VOL.
10, NO. 2.
ψ Researcher, Tilburg Institute for Law, Technology, and Society (TILT)
π Professor of Techno-Regulation, Tilburg Institute for Law, Technology, and Society (TILT)
1. Introduction
“Narcissus' mirror symbolizes all technologies, reflecting man or some aspect of his
capacities directly in an external form ... [t]he myth illustrates the fact that even the
surface of a pool, a natural phenomenon, can become a technology, an extension of man.”1
The time since the era of industrialization has witnessed an influx of novel artefacts,
objects and, more recently, automated systems that have come to play a profound role in what we
do, how we perceive and interpret the world, how we make our choices, and under what
conditions.2 Whereas the industrial “revolution” was based on the modelling of machines for
specific mechanical tasks, the new era of the “computational turn” is characterized by its
modelling of processes: from the manufacturing of goods and the optimization of supply chains
to the simulation of real-life scenarios and even the “maddening randomness of humans”,3
extending the physical assembly line of Henry Ford to a virtual network of people, objects and spaces.
As a novel way to capitalize on economic and institutional power, algorithmic solutions
coupled with high-volume, high-velocity and high-variety (3Vs) data (a.k.a. Big Data) have a
profound effect on the allocation of resources, owing to their capacity to control and manage
social, economic and even political processes, dynamics and relations.4 We see the emergence of
“algorithmic authority” as the legitimate power of the “code” to direct human action and also
to impact which information is considered true.
Against this background, despite the increasing number of studies and reports from academic,
governmental and business circles which focus on privacy intrusion, data protection, and
several other aspects of data-driven practices in general, and of algorithms in particular,
both the enabling and the restricting role of data-driven solutions as techno-regulatory orders
have remained mostly unanalysed.5 In parallel, while studies on techno-regulation
frequently analyse and characterize technology for its normativity,6 research theorizing
the regulatory relevance and affordances of Big Data analytics, both as a normative order in
itself and as a component of other techno-regulatory systems, has been few and far
between.7 As the world of data has become the test bed for social sciences,
1 Michael Shallis, “The Silicon Idol”, in John Zerzan and Alice Carnes (eds), Questioning Technology: Tool, Toy
or Tyrant, New Society Publishers, 1991, 27-28.
2 Peter-Paul Verbeek, What Things Do: Philosophical Reflections on Technology, Agency, and Design (trans.
Robert P. Crease), The Pennsylvania State University Press, 2005 (originally published as De daadkracht der
dingen: Over techniek, filosofie en vormgeving by Boom Publishers, Amsterdam 2000).
3 Stephen Baker, The Numerati (2009), 29.
4 Michael Latzer et al., “The economics of algorithmic selection on the Internet”. For more on Big Data and
media/information economics, see Argenton, C. and J. Prüfer (2012), ‘Search Engine Competition with
Network Externalities’, Journal of Competition Law & Economics, 8 (1), 73-105.
5 For a recent remarkable work, see Timothy D. Robinson, “A Normative Evaluation of Algorithmic Law”, 23
Auckland U. L. Rev. 293 (2017).
6 Lessig’s Code as Law and the descendant literature. The roots of this line of thinking may be traced back to
Bruno Latour’s “actor-network theory”
7 M Hildebrandt, ‘Law at a Crossroads: Losing the Thread or Regaining Control? The Collapse of Distance in
economic innovation and state administration, the need for research explaining and framing
the regulatory dimension of data-driven practices is ever more pressing.8,9
Following from the above, this article is based on the premise that data-driven automated DM
processes, governed by complex algorithms, are either embodiments of existing normative
orders, or themselves enact ad hoc regulatory orders with or without a legal basis: consider,
respectively, the case of credit rating, where algorithms decide who is eligible for a loan, and
the call service running data analysis to estimate the educational level of callers in order
to match them with the best-fitting operator. As these examples show, in terms of regulatory
constraints and capacities, data-driven DM systems go well beyond existing contractual and
statutory norms. On this basis, the paper aims to identify certain “rule of law” implications of
data-driven automated decisions (Big Data analytics) from a techno-regulatory perspective.10 The
main idea is that automated decision-making (DM) shares a common aim with the instrumental
side of law, namely the control and/or steering of institutional practices and individual
behavior within society. Combined with other regulatory features of ICTs, Big Data ushers in a
new prospect of techno-regulatory settings capable of achieving goals common with
human-governed normative systems.
Following this introduction, Parts II and III set the scene by providing an account of predictive
analytics and data mining as a regulatory tool within the framework of modalities laid out by
Lessig: (1) law, (2) market, (3) architecture, and (4) social norms.11 Data-driven DM is framed as
a part of the wider concept of techno-regulation and thus treated as both an instantiation/articulation
and an essential component of techno-regulatory settings. Having established data-driven DM as a
process with regulatory effects, based on the interpretation of datafied human experience,
Part III further develops the idea that, when complemented and reinforced by data analysis
capabilities, adaptive/cognitive systems may overcome the rigidity of earlier pre-set
Real Time Computing’ in Goodwin, Koops and Leenes (n 3) 165; Mireille Hildebrandt and Bert- Jaap Koops,
‘The Challenges of Ambient Law and Legal Protection in the Profiling Era’ (2010) 73 Modern Law Review
8 For more on the normativity of technology, see W.N. Houkes, “Rules, Plans and the Normativity of
Technological Knowledge” in M.J. de Vries et al. (eds.), Norms in Technology, Springer Science+Business
Media Dordrecht 2013.
9 “Normativity within the code, or the technical design, may be communicated in various methodologies which
involve persuasion, nudging and affording as intentional but subtle ways. Van den Berg and Leenes treat them
outside the concept of techno-regulation for their intransparent and concealed character.” Bibi van den Berg and
Ronald Leenes, “Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects”. For more on the concept
of “nudging”, see Thaler, Richard H., and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth,
and Happiness (Yale University Press, 2008); Klaus Mathis and Avishalom Tor (eds.), Nudging: Possibilities,
Limitations and Applications in European Law and Economics (Springer International, 2016).
10 At this point a clarification of terms would be useful: automated and data-driven are two
different concepts. In the literal sense, an alarm clock set to ring at 07:00 AM every day is perfectly
automated but not data-driven. On the other hand, a refrigerator with a thermostat is both data-driven
and automated. The question arises whether there could be systems that are data-driven but not
automated. The answer, even if less and less so, is affirmative. Early judicial aids for sentencing could
be regarded as data-driven or statistics-based, but still not automated, in that a human judge made the
final decision.
11 Lawrence Lessig, Code and Other Laws of Cyberspace. Or as C. Scott and A. Murray put it in ‘Controlling
the New Media: Hybrid Responses to New Forms of Power’ (2002) 65 MLR 491: (1) hierarchical control, (2)
competition-based control, (3) community-based control, and (4) design-based control. Also see, Egbert
Dommering, “Regulating Technology: Code Is Not Law”, in Information Technology and Law Series: Coding
Regulation, 2006, 116.
architectures used to implement norms (expert systems).12 By the use of machine learning (ML)
algorithms, automated DM systems may “interpret” normative propositions and principles
through a feedback mechanism based on the data received from the environment. Especially
in embedded environments such as the Internet of Things (IoT),13,14 data analysis enables
systems to foresee or anticipate the full set of scenarios and future events, bringing them
closer to what are called Complex Adaptive Systems (CAS) or Multi-Agent Systems (MAS).15 In
sum, this part is an initial attempt at mapping the implementation of data-driven DM
in several regulatory spaces. Next, Part IV identifies some problematic dynamics and features
inherent in data mining as the main challenges and concerns which also give rise to the “rule
of law implications” to be analysed in the final Part. Accordingly, certain epistemological flaws,
informational asymmetries, and the bias inherent in algorithmic DM are elucidated as the main
factors behind the harmful consequences specific to data-driven DM as an implementation
of norms by and through architecture/software.
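The contrast between a pre-set expert system and an adaptive, feedback-driven rule can be caricatured in a few lines of code. This is a deliberately simple sketch of our own; the credit-rating scenario echoes the example above, but the thresholds, update rule and figures are entirely invented:

```python
# A fixed, pre-set rule (expert-system style): the norm is hard-coded in advance.
def preset_rule(income):
    return income >= 30000  # threshold chosen once by the designer


# A data-driven rule: the operative threshold drifts with observed outcomes.
class AdaptiveRule:
    def __init__(self, threshold=30000.0, rate=0.1):
        self.threshold = threshold
        self.rate = rate

    def decide(self, income):
        return income >= self.threshold

    def feedback(self, income, defaulted):
        # Tighten the threshold after an observed default, relax it after
        # repayment: the rule applied to the next applicant is no longer the
        # one originally enacted, but an artefact of prior data.
        if defaulted:
            self.threshold += self.rate * income
        else:
            self.threshold -= self.rate * 1000


rule = AdaptiveRule()
before = rule.decide(31000)            # eligible under the initial threshold
rule.feedback(31000, defaulted=True)   # one observed default ...
after = rule.decide(31000)             # ... and the same income may be refused
```

The point of the sketch is not the arithmetic but the shift it illustrates: the norm applied at any moment is a moving product of the data stream, which is precisely what distinguishes adaptive systems from rigid pre-set architectures.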
Before the conclusion, Part V conceptualizes three “rule of law” implications from the above
analysis, namely: (i) the collapse of the normative enterprise, (ii) the erosion of the moral
enterprise, and (iii) the replacement of the causative basis with correlative calculations. Although
these implications are not solely specific to the Big Data space but are of a general nature regarding
techno-regulation, each of them becomes aggravated and extends into
deeper dimensions when techno-regulation is implemented through data-driven systems. In
line with this, the informational asymmetries and the discriminatory capacity of data-driven
practices have been the main source of concern among scholars, in that the “rule of law”
risks being exchanged for the “rule of technology”, accompanied by Kafkaesque, Huxleyan and
Orwellian discourses of dystopia.16
12 “I assume that expert systems will be sufficiently sophisticated to be able to track old-fashioned law.
However, there seem to be two sources of serious difficulty: one is that we are not able to foresee or anticipate
the full set of scenarios; and the other is that, over time, we change our minds about how the rule should be
applied. Yet, these difficulties do not look insuperable. In response to the former, the obvious move is to equip
the system with a default rule.” Roger Brownsword, “So What Does the World Need Now?” in Roger
Brownsword, Karen Yeung (eds), Regulating Technologies, 44. Also, see Koops, B.J. (2011), ‘The
(In)flexibility of Techno-Regulation and the Case of Purpose-Binding’, Legisprudence (2), 171-194; Vincent
C. Müller (ed.), Philosophy and Theory of Artificial Intelligence, Springer (2013).
13 “The virtual and the physical were imagined as separate realms: cyberspace and meat space, as William
Gibson’s insouciantly in-your-face formulation put it. ... Networked intelligence is being embedded
everywhere, in every kind of physical system, both natural and artificial. Routinely, events in cyberspace are
being reflected in physical space, and vice versa. Electronic commerce is not, as it turns out, the replacement
of bricks and mortar by servers and telecommunications, but the sophisticated integration of digital networks
with physical supply chains.” William J. Mitchell, Me++: The Cyborg Self and the Networked City. Also, see
Yuanshan Lee, “Applications of Sensing Technologies for the Insurance Industry” in Florian Michahelles (ed.),
Business Aspects of the Internet of Things, 89 (2008).
14 “Autonomous computing environments constitute a further progression within the process of increasing
conflation of social and biological domains, as it results in the multiplication of forms of embodiment and
locations of human subjectivity as the condition of human agency: the layers between the biological, virtual and
real become increasingly permeable and enmeshed.” Hyo Yoon Kang, “Autonomic computing, genomic data and
human agency: The case for embodiment” in M Hildebrandt and A Rouvroy (eds.), The Philosophy of Law Meets
the Philosophy of Technology: Autonomic Computing and Transformations of Human Agency, Routledge (2011).
15 Mireille Hildebrandt, “A Vision of Ambient Law” in Roger Brownsword, Karen Yeung (eds.), Regulating
Technologies, 175-193.
16 “When the boundary between the human and the technological is blurred, we also appear to have to give up
that which makes us most human: our autonomy, the freedom to organize our lives as we see fit. After all,
2. Techno-regulation revisited
Left to itself, cyberspace will become a perfect tool of control.17
As the world we live in becomes densely populated with coded objects,18 it seems almost
“axiomatic” that the environment and its artefacts possess certain governance mechanisms
which steer behaviour at both the individual and the institutional level, facilitating or imposing
some forms of use and conduct while inhibiting others. As Leenes notes, as early as 1977
Langdon Winner made the point that technology was legislation in a true sense, in that modern
technics prescribed the conditions of human existence far more extensively than politics in the
conventional sense did.19
When regulation is taken in the broadest sense to mean the intentional influencing of behaviour
according to set standards or goals in order to produce certain identified outcomes, brought
into effect either by code,20 laws, self-regulation, or various private schemes,21 it
becomes clear that, from a functional standpoint, both technology and law may act as
regulatory mechanisms which seek to subject human conduct to the governance of certain
rules.22 A common example illustrates that speed regulation on the roads may be effectuated
without this autonomy we are but slaves to technology. A world in which people are directed by devices which
do their work invisibly, whether in the environment or from within the body, perfectly embodies the Brave New
World dystopia that is so widely feared.” Brownsword, “So What Does the World Need Now?”
17 Lawrence Lessig, Code and Other Laws of Cyberspace v.2.0, 2006, 6
18 “At four levels of activity: coded objects, coded infrastructures, coded processes, and coded assemblages.”
Kitchin, R., and Dodge, M., Code/Space: Software and Everyday Life (MIT Press, 2011) 5, 54-59.
19 L Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (The MIT
Press, Cambridge MA 1977) 323-325, in Leenes, “Framing Techno-Regulation”, Legisprudence, Vol. 5, No. 2,
144. Also see Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121-136.
20 “Code is an expression of how computation both captures the world within a system of thought (as algorithms
and structures of capta) and a set of instructions that tell digital hardware and communication networks how
to act in the world.” Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (Cambridge,
Mass.: MIT Press, 2011), 43.
21 Julia Black, ‘Critical Reflections on Regulation’ (2002) 27 Australian Journal of Legal Philosophy 1;
Ronald Leenes, “Framing Techno-Regulation”, 147-148; Brown, I. and C. Marsden, ‘Regulating Code. Good
Governance and Better Regulation in the Information Age’, Cambridge, MA; London: MIT Press (2013). For
more on “regulation” see, Roger Brownsword and Morag Goodwin, Law in Context: Law and the
Technologies of the Twenty-First Century. Text and Materials (Cambridge University Press, 2012); Kooiman
(ed), Modern Governance (London: Sage, 1993); C. Hood, The Tools of Government (London: Macmillan,
1983). For the ranging scope and different definitions of regulation, see Lyria Bennett Moses “How to Think
about Law, Regulation and Technology: Problems with ‘Technology’ as a Regulatory Target” (2013) 5(1)
Law, Innovation and Technology 1-20. For a taxonomy of regulatory strategies as (1) command and control,
(2) self-regulation, (3) incentives, (4) market-harnessing controls, (5) disclosure, (6) direct action, (7) rights
and liabilities laws, and (8) public compensation see, Robert Baldwin et. al. Understanding Regulation -
Theory, Strategy, and Practice, (Oxford University Press, 2012). And for a conceptual framework about the
notions of rules, norms and principles see, Paul Boghossian Rules, “Norms and Principles: A Conceptual
Framework”, in Michał Araszkiewicz, Paweł Banaś, Tomasz Gizbert-Studnicki, Krzysztof Płeszka (eds.)-
Problems of Normativity, Rules and Rule-Following, Springer International Publishing. Lastly, for an early
conceptualisation of techno-regulation under the term “dense regulation”, see Aernout Schmidt, “Dense
Regulation and the Rule of Law: Institutional Perspectives” in Aernout Schmidt et al., Fighting the War on
Music-File Sharing, 2007 (T.M.C. Asser Press).
22 H Kelsen, ‘The Law as a Specific Social Technique’ 9 University of Chicago Law Review (1941-1942) 75-
97, 79.
through physical means such as speed bumps, irrespective of whether there also exist legal
norms prohibiting and sanctioning speeding.23 When defined so extensionally, “regulation” is
conceptually closer to its usage in biology, systems theory and cybernetics, encompassing
almost any control apparatus or procedure.24
Accordingly, techno-regulation is one of the four modalities laid out by Lessig (law, market,
architecture/code, social norms) which aim to steer human behaviour and institutional
practices through technical means, in particular by way of software/code.25 Murray and
Scott further elaborate this analysis of regulatory modalities in light of control theory: (1)
hierarchical control refers to law or normative systems; as a richer conception, the label
“hierarchy” looks to the form of control rather than its source; (2) competition-based
control is the regulative force of markets through supply and demand, including other economic
tools such as rivalry and exclusion. Apparently, this is an indirect type of regulation, though
with direct physical effects such as the diminishing of one’s material means for subsistence or
self-realisation; (3) community-based control refers to, but is not limited to, social norms and
conventions; and (4) design-based control describes the normativity in design and artefacts,
including techno-regulatory settings.26 Design-based regulation should not be seen as confined
to cyberspace or computerised devices; design-based features have always been fundamental to
the way societies, states and institutions are governed. More importantly, Murray and Scott also
argue that, from the perspective of control theory, the appropriate analysis of regulation involves
not only a four-way classification of different bases of control, as Lessig suggests, but also
requires a fine-grained analysis of the three elements necessary to generate a control system:
standard-setting, information gathering, and behaviour modification.27 Lessig’s four regulatory
modes often overlap and converge as they reinforce, compete and interact with each other in an
array of ways. In the implementation space, we usually see an amalgamation and fusion of
different modalities; that is, we have traffic signs, speed bumps and surveillance cameras to
deter drivers from speeding around schools, a highly undesirable behaviour also sanctioned by
social and moral
23 However, “[p]erformance or goal-based regulations that identify specific outcomes, leaving the means up to
the regulated party, are ineffective when ‘desired performance is difficult to identify in advance or assess
contemporaneously’; the focus shifts from punishment to prevention.” See Meg Leta Jones, “The Ironies of
Automation Law: Tying Policy Knots with Fair Automation Practices Principles”, 18 Vand. J. Ent. & Tech. L.
77 (2015-2016), referring to Kenneth A. Bamberger, Regulation as Delegation: Private Firms, Decisionmaking,
and Accountability in the Administrative State, 56 DUKE L.J. 377, 386-87 (2006).
24 C Hood, et al, The Government of Risk: Understanding Risk Regulation Regimes (Oxford University Press,
Oxford 2001).
25 L Lessig, Code as Law. Also, see J R. Reidenberg, ‘Lex Informatica: The Formulation of Information Policy
Rules through Technology’ 76 Tex. L. Rev. 553 (1997-1998) ; R Brownsword, ‘Lost in Translation: Legality,
Regulatory Margins, and Technological Management’ Berkeley Technology Law Journal, Vol. 26, No. 3, 2011
; M J Madison, ‘Law As Design: Objects, Concepts, And Digital Things’ Case Western Law Review, Vol. 56,
No. 2, 2005 ; E J Koops et al. (eds), Dimensions Of Technology Regulation, Nijmegen: Wolf Legal Publishers
(WLP) (2010); B.J. Koops, “Criteria for Normative Technology: The Acceptability of Code as Law in Light of
Democratic and Constitutional Values” in R. Brownsword and K. Yeung (eds.), Regulating Technologies:
Legal Futures, Regulatory Frames and Technological Fixes (Hart Publishing, Oxford 2008) 157-174
26 “The potential for controls to be built into architecture has long been recognised, as exemplified by Jeremy
Bentham’s design for a prison in the form of a panopticon (within which the architecture permitted the guards
to monitor all the prisoners).” A. Murray and C. Scott, ‘Controlling the New Media: Hybrid Responses to New
Forms of Power’ (2002) 65 MLR 491, 500. Also, see Andrew D Murray, ‘Conceptualising the Post-
Regulatory (Cyber)state’ in Roger Brownsword, Karen Yeung (eds.), Regulating Technologies, 292.
27 C Scott and A Murray, 504
expectations. Accordingly, any regulatory scheme may rely on one or more of these
modalities; that is, each modality may serve a different function in order to produce the
desired behavioural outcome. For instance, we do not settle for legal rules against trespassing,
but also secure our property with fences and locks. Techno-regulation, then, refers to the
intentional influencing of individuals’ behaviour by embedding norms into technological
systems and devices.28 Depending on the context, such regulatory models may
interchangeably be referred to as “regulation by technology”, “technological normativity”,
“regulative software”, “law as design”, “design-based regulation” or “algorithmic regulation”.
Except where the context necessitates otherwise, we generally prefer the term techno-
regulatory systems/settings throughout this paper.
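Murray and Scott's three elements of a control system, standard-setting, information gathering and behaviour modification, can be caricatured in a few lines of code. This is an illustrative sketch of our own, reusing the speed-enforcement example from the text; all plates, limits and fine amounts are invented:

```python
# Standard-setting: the norm is fixed in advance.
SPEED_LIMIT = 30  # km/h around the school

# Information gathering: a speed camera observing passing cars.
def gather(readings):
    return [r for r in readings if r["speed"] > SPEED_LIMIT]

# Behaviour modification: a sanction applied to detected violations.
def sanction(violations):
    return [{"plate": v["plate"], "fine": 100 + 5 * (v["speed"] - SPEED_LIMIT)}
            for v in violations]

readings = [{"plate": "AB-12", "speed": 28},
            {"plate": "CD-34", "speed": 42}]
fines = sanction(gather(readings))  # only the speeding car is fined
```

The sketch also makes Lessig's point about converging modalities concrete: the standard comes from law, the gathering from architecture (the camera), and the modification may operate through the market (a fine) or through design (a speed bump).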
Regulation by technology in the spatial realm brings to the fore the notion of ubiquitous
computing and its current articulations, “ambient intelligence” and the IoT, where speed
monitoring, CCTV cameras, smart buildings, RFID and face recognition software, together with
wearable devices, make up the pioneering technologies. The deployment of techno-regulatory
tools directly targeting cognitive and/or physical properties of human beings is a near-future
scenario in which the desired course of conduct will be wired into brains, whether by way of
genetic manipulation, the administering of drugs, or other means that might be used to alter
the neurological setup.29
Techno-regulatory settings may focus on products/services, places or persons, covering a
plethora of practices and designs. Today, we commonly experience techno-regulatory
applications in products and services such as speed limiters in cars, internet filtering, Digital
Rights Management systems, speed bumps, personalised information services, etc.
Nevertheless, it does not follow that every technical design is regulative. For instance, seat
belts and air bags are mere safety equipment and do not primarily, or at least in principle, aim
to limit or influence behaviour in a legally significant way.30 And although intentionality is an
essential element of the concept of regulation, technological settings may have unintended
and/or subtle consequences which bring about regulatory effects, either by eliminating or by
supporting certain ways of conduct, such as the ‘script’ embodied by technological settings,
which is not merely a set of instructions but rather a built-in set of self-imposing prescriptions.
Hence, in many cases, it might not be transparent whether a specific outcome is intentional or
emerges as a spin-off resulting from some design choice. Moreover, the intention behind any
technology does not necessarily determine its normative impact; the effect is rather dependent
on the affordances of the technology and the way that humans engage and interact with it.31
Accordingly, it is often the case that technologies are adapted to better conform to
pre-existing technological frames or to
28 Van den Berg and Leenes draw attention to other, less ‘legal’ forms of influencing behaviour,
such as persuasion or nudging. Bibi van den Berg, Ronald Leenes, “Abort, retry, fail: scoping techno-regulation
and other techno-effects”, in Mireille Hildebrandt & Jeanne Gaakeer (eds.), Human Law and Computer Law:
Comparative Perspectives, Dordrecht: Springer (2012). They argue that “persuasion, nudging and affording are
more subtle, yet clearly intentional, forms of affecting human behaviour, through the use of technologies,
which are overlooked in the current debate on techno-regulation.” 74
29 D L Burk, ‘Lex genetica: The law and ethics of programming biological code’ Ethics and Information
Technology 4: 109121, 2002.
30 Karen Yeung “Towards an Understanding of Regulation by Design” in Roger Brownsword, Karen Yeung
(eds.) Regulating Technologies, 86
31 M Hildebrandt, Technology and the End(s) of Law, 453
reinforce existing social and political power relations. For example, in the debates on
total face covering in public places, strong opposition is often based on the
grounds of identification difficulties that eventually give rise to security risks.32 However, as
alternative identification technologies such as fingerprint or retina scanning become widely
and inexpensively available, opponents of this practice lose ground against its supporters.
New technology evidently provides a flexible tool for regulators in that they are no longer
constrained to identification through one’s face and may therefore assume a more permissive
attitude towards these demands, which may or may not have other adverse repercussions.
One imminent consequence of identification through other means would be the loss of
transparency for other members of the public sharing the same physical space. A person
visible only by the eyes is not really identified but rather verified, through a technical setting
that is not open to challenge by members of the public. Though the end results, the elimination
of a security risk, may seem equivalent from a functional perspective, the way the technology is
deployed in this example would have far-reaching consequences, turning out to be indirectly
supportive of a regressive practice that secludes and isolates women in and from public
spaces.33
As the example underlines, technology is not, and never is, neutral,34,35 yet in the eyes of many,
technology and politics are separated in that politics is supposedly based on values, while
technology is based on scientific knowledge and objective facts.36 An apparent result of such
dualism is the lack of democratic control over much techno-regulation in the private sector, for
numerous technology and service providers shape both the ontology and the epistemology
of our world without any meaningful oversight. Techno-regulation must therefore be situated in
a wider framework encapsulating the mutual entanglements between culture, politics and
technology. As Don Ihde has put it: “technological form of life is part and parcel of culture, just
as culture in the human sense inevitably implies technologies.”37 Or, as Feenberg writes, "Technology
32 The Dutch Parliament (2nd Chamber) passed a bill which bans face-covering outfits such as the niqab, burka,
ski masks and helmets in public transport, governmental offices, and educational and care institutions. The law
is still pending before the 1st Chamber (Partial Prohibition of Face-Covering Clothing Act, Wet gedeeltelijk
verbod gezichtsbedekkende kleding, Kamerstukken II 2015/16, 34 349, nrs. 1-13).
33 “…[t]he means that we use to achieve our social goals reflect value judgments about the appropriate
relationship between means and ends… [W]hat has been overlooked is that the application of a new
technology can change the preconditions of the application, and that means also the very purposes and ends of
the application.” Gernot Böhme, Invasive Technification, Critical Essays in the Philosophy of Technology,
2012 (Trans. Cameron Shingleton- Originally published in German as Invasive Technisierung:
Technikphilosophie und Technikkritik, 2005)
34 Hildebrandt 2008, 451; Koops 2008, 157; Winner (1980).
35 Marx also highlighted how technology was a fundamentally social relation: not just in the way it extended
our capacity to work, but also as a means for obtaining and maintaining power. Our relationship with
technology is therefore never neutral, for as a social relation it always reproduces and reinforces pre-existing
inequalities within those relations. MR McGuire, Technology, Crime and Justice: The Question Concerning
Technomia (Routledge, 2012).
36 A Feenberg, “Critical Theory of Technology” in JKB Olsen et al. (eds.), A Companion to the Philosophy of
Technology (Blackwell Publishing, 2009), 149. Also see, M. Bunge, Evaluating Philosophies, Springer
Science+Business Media Dordrecht 2012, 5.
37 Don Ihde, Technology and the Lifeworld: From Garden to Earth (Bloomington and Indianapolis: Indiana
University Press 1993), 20 (“Many technologies, instruments in particular, occupied a mediating position in the
interrelation between humans and their lifeworld.”). Also, Don Ihde, Experimental Phenomenology:
Multistabilities, 2nd edition (Albany: State University of New York Press, 2012), xiv.
should be brought into the public sphere where it increasingly belongs."38
3. The Lure of Big Data as a regulatory tool
“Whilst we are dead to the world at night, networks of machines silently and repetitively
exchange data. They monitor, control and assess the world using electronic sensors, updating
lists and databases, calculating and recalculating their models to produce reports, predictions
and warnings. In the swirling constellations of data, they oversee and stabilise the everyday
lives of individuals, groups and organisations, and remain alert for criminal patterns, abnormal
behaviour, and outliers in programmed statistical models.”39
3.1. Big data as a method of empirical inquiry
Today, it is a common observation that in every realm of life vast amounts of raw data
compiled from various sources (i.e., communication networks, the energy grid, and
transportation and financial systems40) are put to use to obtain actionable information
for purposes such as detecting fraudulent transactions, calculating creditworthiness,
organizing Facebook newsfeeds and so on. Apparently, the society we live in is heavily
dependent on databases and analytic tools to carry out processes of various kinds and
scale.41 Although data-driven practices, as governance strategies aiming at efficiency,
have long made their way into our lives through statistics and actuarial methods since the 19th
century,42 what is happening now is the intense and exponential expansion of data-driven
practices by means of the analytic tools and methodologies conceptualized under the term
“big data”. Big data analytics has its origins in the 'empirical turn' witnessed in the realm of
computing tools and the practices used for decision-making, that is, the use of statistics and
machine learning for predictive purposes.
Despite the lack of a coherent understanding, big data is frequently defined with reference to
clusters of readily available and widely linkable data whose volume, variety and velocity go
beyond the capacity of conventional analysis and processing techniques.43 The term “big
38 Feenberg (2009).
39 DM Berry, The Philosophy of Software Code and Mediation in the Digital Age, (Palgrave Macmillan 2011),
40 “It encompasses structured databases of all types in addition to unstructured transaction and interaction
data from communication networks, data from cloud computing, and the rapidly growing ‘internet of things’
from smart devices to sensors, and cameras.” Burkhardt Wolf, Big data, small freedom? Informational
surveillance and the political RP 191 (May/Jun 2015)/ Commentary, Data & Surveillance.
41 F Pasquale, The Black Box Society - The Secret Algorithms That Control Money and Information (Harvard
University Press, 2015)
42 “In any case, in order to invent the concept of ‘society’, statistical objects and correlations had to be reified
as ‘collective things’. This realistic notion of virtual macrosocial objects led to two important inceptions:
sociology was founded as a new science that focuses solely on this half real, half imaginary object named
‘society’. And, as a showcase of statistical rule, public insurance was founded on a large scale, especially in
Germany between 1881 and 1889, when the state introduced obligatory health, accident and old-age insurance
on the basis of extensive statistical data.” Burkhardt Wolf, Big data, small freedom? Informational surveillance
and the political, RP 191 (May/Jun 2015)/ Commentary, Data & Surveillance. Also, see Alain Desrosieres, The
Politics of Large Numbers: A History of Statistical Reasoning (Trans. Camille Naish. Originally published as La
politique des grands nombres: Histoire de la raison statistique), Editions La Decouverte, Paris, 1993; Harcourt,
(2007) Against prediction: profiling, policing, and punishing in an actuarial age, Chicago: University of Chicago
43 The industry today uses this 3Vs definition as a standard to classify Big Data see, Krish Krishnan, Data
data”, which is a buzz-word,44 may be a misleading term, reflecting a non-exhaustive and
static approach to the problem at hand. Moreover, this area of interdisciplinary research still
does not bear a universally accepted name and it is common that, at times, terms such as
machine learning, neural networks, data mining, big data, cognitive systems, or genetic
algorithms are used interchangeably.45 Irrespective of the techniques, tasks, algorithms,
programming tools and platforms; the common element in data mining and predictive
analytics is the deployment of a functional approach, a learning algorithm, in order to extract
signal from noise in large bodies of data so that those signals can serve as abstractions for
classifying certain data representative of persons, events or processes.46
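The core operation described above, learning an abstraction from noisy observations and then using it to classify new data points, can be sketched in a few lines of Python. The two risk classes, the centroid representation and all numbers below are invented for illustration only; real analytics pipelines use far richer models.

```python
import random
import statistics

random.seed(0)

def make_points(centre, n=100, noise=1.0):
    """Generate n noisy 2-D observations around a true (hidden) centre."""
    return [(centre[0] + random.gauss(0, noise),
             centre[1] + random.gauss(0, noise)) for _ in range(n)]

def fit_centroids(labelled):
    """Learn one centroid per class label -- the 'signal' behind the noise."""
    return {label: (statistics.mean(p[0] for p in points),
                    statistics.mean(p[1] for p in points))
            for label, points in labelled.items()}

def classify(centroids, point):
    """Assign a new observation to the class with the nearest centroid."""
    return min(centroids,
               key=lambda c: (centroids[c][0] - point[0]) ** 2
                           + (centroids[c][1] - point[1]) ** 2)

# Two classes of "persons, events or processes", observed with noise.
data = {"low_risk": make_points((0, 0)), "high_risk": make_points((5, 5))}
model = fit_centroids(data)
print(classify(model, (4.8, 5.1)))   # a new case near the high-risk cluster
```

The learned centroids are the abstraction: individual noisy observations are discarded, and all future cases are judged against the compact summary alone.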
In comparison to earlier data practices, “big data” is not collected and processed in
batches of manageable size, but is frequently received as constant data streams which are most
useful when processed without delay. The exponential increase in computational power,
distributed processing, and the advances in algorithmic accuracy enable the efficient analysis
of high-velocity, voluminous and diverse data generated by numerous applications, servers
and other physical, virtual and cloud devices. The phenomenon of big data primarily thrives
on the long-known practices of data storage and data analysis, now implemented through
the novel technologies of distributed computing and database management (e.g. NoSQL,
Hadoop, HDFS, R and relational databases).47
The real promise of big data lies in its ability to access, sort and reuse huge streams of data
for the purpose of gaining critical insights from the repeated or unique patterns that are only
visible through sophisticated algorithms.48 Big data not only refers to the sheer number of
bytes, but also to the innovative techniques to collect, manage and manipulate data at an
unprecedented scale. Therefore, for the purposes of our inquiry, we approach “big data” as
a method49 of empirical inquiry, performed on informational sources to extract new insights
Warehousing in the Age of Big Data, Morgan Kaufmann (2013), 5. For a survey on different definitions of Big
Data see, Pompeu Casanovas, Regulation of Big Data: Perspectives on Strategy, Policy, Law, and Privacy.
44 Evgeny Morozov, Your Social Networking Credit Score (30 January 2013) Slate. For
further epistemological flaws of the term Big Data, see Luciano Floridi, Big Data and Information Quality in
Luciano Floridi, Phyllis Illari (eds.), The Philosophy of Information Quality, Springer International Publishing
(2014), 303.
45 Presumably with a view to overcoming the terminological ambiguity and cacophony, Kaplan coins the
term synthetic intellects to emphasize their cognitive dimension. Jerry Kaplan Humans Need Not Apply: A
Guide to Wealth and Work in the Age of Artificial Intelligence (USA: Yale University Press, 2015), 5; Paul
Ohm calls Big Data “the trendy moniker for powerful new forms of data analytics” in “The Underwhelming
Benefits Of Big Data” University of Pennsylvania Law Review Online Vol. 161: 339; JS Ward, A Barker
Undefined By Data: A Survey of Big Data Definitions (2013).
46 Jerry Kaplan Humans Need Not Apply, 25, fn.8.
47 “Distributed computing creates complex interconnected systems maintaining many sub-systems as an
amalgamation of various computational tools. Those sub-systems, each performing some rudimentary task in a
limited domain, are further combined to communicate with relational databases to reveal patterns, and acting
in parallel, they constitute flexible, robust, and pervasive multi-agent adaptive systems acting/operating in smart
environments a.k.a. Internet of Things (IoT).” Norberto Nuno Gomes de Andrade, “Future Trends on the
Regulation of Personal Identity and Legal Personality in the context of Ambient Intelligence Environments: The
Right to Multiple Identities and the Rise of the AIvatars’”.
48 “If I were giving a talk on ‘what is mathematics?’ I would have already answered you. Mathematics is looking
for patterns.” Richard Feynman, What Is Science? in The Pleasure of Finding Things Out, Ed. Jeffry Robbins,
175. Also, see Krish Krishnan, Data Warehousing in the Age of Big Data, (Morgan Kaufmann, 2013).
49 “A method for solving a problem (a task) describes an effective path that leads to the problem solution. This
description must consist of a sequence of instructions that everybody can perform (even people who are not
out of raw data.50 Through computational operations for abstraction, correlation,
classification, pattern recognition, profiling, modelling, and visualization, data analytics
generates information to control processes, in ways that radically transform markets,
societies and institutions.51 Big data represents a radical expansion and transformation of our
forms of observation, perception and knowledge acquisition, as well as our modes of
production (economy) and interaction.52 We, as individuals and our social relations, are
constantly being reconstructed in a parallel world composed of numbers, vectors and
algorithms in a mathematical modelling of humanity which expands to become as complex
as the humans it simulates.53
Conceptualizing big data as a methodology, rather than as a computational
source/tool/instrument defined with reference to size and speed, provides a framework
which enables the analysis of the regulatory aspects of data-driven methodologies, and the
ensuing rule of law implications that will be elaborated in the sections that follow.
3.2 Big Data and decision-making
The mystery of the decision and the mystery of the hierarchy respectively support each
other. Both exhibit an unspeakable (dare one say, religious) element, which makes them
into what they are.54
The contagious lure of vastly available data, personal or otherwise, increasingly attracts
institutions of various forms and levels to transfer all or part of their decision-making processes
to adaptive data-driven systems. Every day, data wizards come up with new metrics and
novel ways to model us, and the world around us, mathematically. This cartography of human
lives, which serves as the epistemic base for many decision-making processes, abstracts
people from contexts, reducing or eliminating variation, difference, conflict, and noise which
could impede action or introduce moral ambiguity, and further normalizing the subjugation of
those marked as “other”.55
By way of data mining, all kinds of human activities and decisions are increasingly steered
mathematicians). Juraj Hromkovic, Algorithmic Adventures From Knowledge to Magic, Springer-Verlag
Berlin Heidelberg (2009), 21.
50 Michael Mattioli, Disclosing Big Data, Minnesota Law Review (99), 2014, 538; Viktor Mayer-Schönberger,
Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and Think, Houghton
Mifflin Harcourt Publishing Company (2013)
51 KEC Levy, Relational Big Data, 66 Stan. L. Rev. Online 73, 73 n.3 (Sept. 3, 2013); Viktor
Mayer-Schönberger, Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and
Think, Houghton Mifflin Harcourt Publishing Company (2013)
52 Clough and Halley, Affective Turn, 3.
53 Stephen Baker, The Numerati, 2009, 13
54 Niklas Luhmann, 'Die Paradoxie des Entscheidens (The paradox of Decision)’ Verwaltungsarchiv (1993) 84:
287-310, 287 taken from Gunther Teubner, “Economics of Gift - Positivity of Justice:The Mutual Paranoia of
Jacques Derrida and Niklas Luhmann” Theory, Culture and Society 18, 2001, 29-47
55 Tyler Wall and Torin Monahan Surveillance and violence from afar: The politics of drones and liminal
security-scapes Theoretical Criminology 15(3) 239–254; Grégoire Chamayou, A Theory of the Drone, (The
New Press, 2015)
and regulated through the predictive capacities of machine learning (ML) algorithms. ML is
part of the research effort aiming to develop systems applicable to complex tasks such
as perception (vision, audition), reasoning, control, and other artificially intelligent behaviours.
AI research over the last ten years has concentrated on devising algorithms that can
learn highly complex functions, with minimal domain knowledge and the least human
intervention.56
As we amass more data from an expanding array of sensors that monitor multiple layers of
both the physical and the online world, we also increasingly delegate more power57 to
machines to decide where and how we live, what we consume, how we communicate, are
entertained, healed, and so on. Vast amounts of raw data compiled from various sources are
analysed and integrated to potentially replace human decision-makers in conventional legal
procedures, e.g. the termination of one’s social benefits, the extent of healthcare one
receives, exclusion from commercial flights, detection of fraudulent transactions, calculation
of creditworthiness and so on. Combined with other regulatory features of the ICTs, the
enhanced capacity and affordances of high-volume, high-velocity and high-variety data analysis usher
in a new prospect of techno-regulatory settings.58
Data analytics and data-driven predictive models may contribute to and complement the
objectives of techno-regulation in many ways, not only directly but also indirectly. For
instance, data-driven cues and decisions may act as part of a larger technical system
executing norms in an automated fashion, such as modern traffic signalling and management.
Data-driven decisions about urban traffic control may affect property prices or commuting
times and thus may have a very direct effect on the lives of individuals although no profiling or
similar activity takes place.59 Moreover, big data analytics may influence behaviour merely
through communicating the probability of a future event (e.g., I can make person X wear a
raincoat either by eliminating all other clothing but leaving the raincoat, or by simply informing
X of the probability of rain, on the assumption that X is a rational agent willing to avoid rain).60
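The raincoat example can be made concrete with a toy expected-utility calculation. The cost figures below are invented assumptions; the point is only that communicating a probability, without constraining any option, already steers a rational agent's choice.

```python
# A rational agent X, told only the probability of rain, chooses the
# action with the lower expected cost. Costs are illustrative assumptions.

def expected_cost(action, p_rain, cost_wet=10.0, cost_carry=1.0):
    """Expected cost of an action given the communicated chance of rain."""
    if action == "raincoat":
        return cost_carry            # always pay the small inconvenience
    return p_rain * cost_wet         # no coat: pay dearly if it rains

def choose(p_rain):
    """Pick the cheaper action -- behaviour steered by mere information."""
    return min(("raincoat", "no_coat"),
               key=lambda a: expected_cost(a, p_rain))

print(choose(0.8))    # high communicated chance of rain
print(choose(0.05))   # low communicated chance of rain
```

No option is removed from the agent's architecture; changing only the communicated probability flips the chosen action, which is the indirect regulative effect described above.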
Since techno-regulation is defined as the effectuation of norms through technical means at
various levels such as rule-making, implementation, monitoring and enforcement in a
normative system, the intrinsic regulatory capacity of data-driven automated DM, whether
56 “Computational inquiry into human nature originated in the years after World War II. […] A
servomechanism, for example, could aim a gun by continually sensing the target's location and pushing the
gun in the direction needed to intercept it. Technologically sophisticated psychologists such as George Miller
observed that this feedback cycle could be described in human-like terms as pursuing a purpose based on
awareness of its environment and anticipation of the future.” Philip E. Agre, Computation and Human
Experience Learning in Doing Social, Cognitive and Computational Perspectives, 1997. See also Steven
Whitehead, and Dana H. Ballard, "Learning to Perceive and Act by Trial and Error," Machine Learning 7(1),
1991, 7-35. Also, see Yoshua Bengio, “Scaling Learning Algorithms towards AI.”
57 For a multitude of purported reasons, because it may be cheaper, faster, more neutral or simply because it
can be done.
58 “Techno-regulatory models may be defined as a set of values, patterns, rules, norms and principles
conceptually expressed, represented and implemented in a certain technological framework by means of an
artificial language. They may take the form of constraints and conditions for agency and the performance
and/or execution of rules in the broadest sense”. Pompeu Casanovas et. al., The Role of Pragmatics in the Web
of Data
59 Steve Lohr, Data-ism.
60 Mireille Hildebrandt, “Law as Information in the Era of Data-Driven Agency” 21.
based on profiling or not, is evident. We see the regulative force of data analytics in almost
every context where the operation or conduct of a certain activity is, either fully or partially,
automated or controlled by algorithmic decision-making systems.61 The predictive and the
pre-emptive nature of big data analytics amplifies both the direct and the indirect regulative impact of
the ICTs.62 This regulatory capacity may target behaviour in diverse ways, from online
advertising to manipulation through online content filtering. In sum, data-driven decision-
making maintains certain normativity with regulatory effects as it interprets the datafied
human experience and acts upon it to steer human conduct.63
3.3. The missing piece of the jigsaw
The idea that computers could do “smart” things with data may be traced back to the
early years of computation, when it was emerging as a scientific discipline. In 1959, Richard
Feynman suggested:64
Everybody who has analysed the logical theory of computers has come to the conclusion that
the possibilities of computers are very interestingif they could be made to be more
complicated by several orders of magnitude. If they had millions of times as many elements,
they could make judgments. […] They could select the method of analysis which, from their
experience, is better than the one that we would give to them. And in many other ways, they
would have new qualitative features.
Rule-based expert systems implementing state-of-the-art domain knowledge in the form of
production rules (if-then rules) to give expert-like advice or make decisions have been in
operation since the 1970s.65 These systems codify knowledge in a static way. Today’s data-
driven DM systems differ from these earlier rule-based applications by their adaptive
capacities and affordances which were long ago anticipated by Feynman in the passage
above. Data-driven systems complement, amplify and transform techno-regulatory settings
and modalities so that, unlike former (rule-based) expert systems, they are no longer stand-
alone edifices but an integrated part of information systems. As will be elaborated below,
when reinforced by data analytics capabilities, rule-based systems may mitigate the rigidity
of pre-set architectures, implementing norms by way of incorporating new knowledge
through (machine) learning and feedback mechanisms. Backed by data analysis, techno-
regulatory settings then may possess the robustness to adapt to changing environments,
altering interests or to the dynamic uncertainty and indeterminacy of human language. In that
61 “…[m]easurement operations use ‘technologies of persuasion.’ They disguise their interventional character
and appear as “a way of making decisions without seeming to decide.” Karoline Krenn, “Markets and
Classifications - Constructing Market Orders in the Digital Age: An Introduction” in: Historical Social
Research 42 (2017), 1, 7-22, 15.
62 Ian Kerr & Jessica Earle, “Prediction, Preemption, Presumption, How Big Data Threatens Big Picture
Privacy”, 66 STAN. L. REV. ONLINE 65 September 3, 2013.
63 In that sense, data-driven DM systems may be regarded as a regulatory technology legally recognised and
regulated primarily through Data Protection Law. See, Bert-Jaap Koops, ‘Criteria for Normative Technology:
An essay on the acceptability of ‘code as law’ in light of democratic and constitutional values’ in R
Brownsword & K Yeung (eds) Regulating Technologies, Oxford: Hart Publishing, 2008, 157-174.
64 Richard Feynman, “There's Plenty of Room at the Bottom”, in Jeffry Robbins (ed.), The Pleasure of Finding
Things Out, 117-141
65 EA Feigenbaum, “The Art of Artificial Intelligence: I. Themes and Case Studies of Knowledge Engineering.
Technical Report”. UMI Order Number: CS-TR-77-621, Stanford University, 1977; Stranieri, A. and
Zeleznikow, J. Knowledge discovery from legal databases. (Dordrecht: Springer, 2010).
sense, data analytics may be seen as the missing piece of the jigsaw puzzle in AI research
with regard to rule-implementing and rule-executing systems.
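The static, rule-based approach discussed in this section can be illustrated with a minimal sketch of a production-rule system applied by forward chaining. The benefits domain, the rule contents and all names below are hypothetical; the point is that the knowledge is fixed at design time and changes only when a human rewrites the rules.

```python
# A toy static expert system: production (if-then) rules fired by
# forward chaining until no new conclusion can be derived.

RULES = [
    # (condition over the known facts, fact to assert if it holds)
    (lambda f: f.get("income", 0) < 20000, "low_income"),
    (lambda f: "low_income" in f["derived"] and f.get("resident"),
     "eligible_for_benefit"),
]

def forward_chain(facts):
    """Repeatedly fire rules until the set of derived facts is stable."""
    facts = dict(facts, derived=[])
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts["derived"] and condition(facts):
                facts["derived"].append(conclusion)
                changed = True
    return facts["derived"]

print(forward_chain({"income": 15000, "resident": True}))
```

The derivation chain doubles as an explanation of the outcome, but the rule base itself never adapts; a data-driven system would instead revise its model from new cases and feedback, which is precisely the capability described above.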
Former expert systems were designed to perform complex tasks near the level of a human
specialist. For instance, in the case of legal knowledge-based systems (LKBS), a relatively
successful rule-based application, developers basically implemented (or rather represented)
'the law' in executable form, allowing the system to reach correct legal decisions and to
explain its reasoning process, or legally justify its conclusions.66 The developers of
such systems aimed at faithfully representing the authoritative legal source in the domain of
application as well as the anticipated kinds of cases relevant to the domain (and rule-based
representations of existing case law). The construction of these LKBS was, and still is, a time-
consuming and laborious task because it requires experts to make the transformation from
legal source to formal, machine-executable, representation.67 Next to requiring significant
effort to represent legal rules, there are also fundamental problems due to the intentional
open-texturedness and vagueness of the human language through which the law is
expressed, in addition to its being highly context dependent,68 meaning that the fringes of what
such a regulatory mode can appropriately handle are easily reached. The critiques of early rule-
based systems made the point that legal terms were too uncertain, almost only of ritualistic
significance, and that therefore knowledge of law may be acquired better through observation
and other scientific methods such as empirical study, hermeneutics, etc.69
The decisions produced by these LKBS are based on actually applying the knowledge
embedded in the statutory provisions to the case at hand, and the systems are able to justify the
decisions reached as well. But this approach seems less appropriate for handling one of the
other legal sources, namely case law. Since many, if not all, domains in which legal
decisions are taken are characterised by a combination of ‘positive’ law and ‘case’ law70, the rule-
based LKBS approach is limited due to the difficulty of dealing with fundamental
66 “Not necessarily through mimicking the actual reasoning process, but by, for instance, implementing the
underlying (complex) legal rules and executing those.” Trevor Bench-Capon, “Exploiting isomorphism:
development of a KBS to support British coal insurance claims.” Proceedings of the 3rd International
Conference on Artificial Intelligence and Law, New York, 1991, 62-68; J.S. Svensson “Legal expert systems in
general assistance: from fearing computers to fearing accountants”(2002) 7 (2/3) Journal of Information Polity
pp. 143-154. Also on the failures of LKBS, see Leith P., “The rise and fall of the legal expert system”, in
European Journal of Law and Technology, Vol 1, Issue 1, 2010.
67 “Decades ago, the main focus of artificial intelligence research was to develop knowledge rules and
relationships to make so-called expert systems. But those systems proved extremely difficult to build. So
knowledge systems gave way to the data-driven path: mine vast amounts of data to make predictions, based on
statistical probabilities and patterns.” Steve Lohr, Data-ism. See also, Jerry Kaplan Humans Need Not Apply:
A Guide to Wealth and Work in the Age of Artificial Intelligence (Yale University Press: USA, 2015), 29.
68 Lyria Bennett Moses and Janet Chan, “Using Big Data for Legal and Law Enforcement Decisions”, 2014,
69 Lee Loevinger, 'Jurimetrics: The Next Step Forward' (1949) 33 Minnesota Law Review 455, cited in Lyria
Bennett Moses and Janet Chan, Using Big Data for Legal and Law Enforcement Decisions, 647. For further
shortcomings of (legal) expert systems in the application of rules, see Abdul Paliwala (2016) Rediscovering
artificial intelligence and law: an inadequate jurisprudence? International Review of Law, Computers &
Technology, 30:3; Leith, Philip. 2010. “The Rise and Fall of the Legal Expert System” in A History of Legal
Informatics, edited by Abdul Paliwala, 179203.
70 We put positive law and case law in quotes to signify that both sources are not limited to material produced
by the legislative and judicial branches of government, but rather that we mean authoritative rules that are
adjudicated (or enforced) by some agency that has the authority to do so.
characteristics of legal norms (open-texture, vagueness) and its inherent difficulty in coping
with the dynamics of the domain it purports to govern.71
In the heyday of AI and Law research (the mid 90s to early 00s), sufficiently large machine-
readable data samples to automatically generate decision models based on case law were
lacking, as were the necessary tools, computational power and storage capacity. This
seems to be changing, and modern ML techniques may have come to the stage where indeed
decision models may incrementally and dynamically be derived from case law.72 In the last 20
years, ML algorithms have enabled the automation of sophisticated tasks involving physical,
social and economic processes which were formerly believed to be confined to human
cognitive faculties.73 Such capacity of machine learning to adapt to changing or uncertain
domains was brilliantly foreseen by Alan Turing as he noted the dynamic nature which is
common to ‘law’ and ‘machine learning’74:
“The idea of a learning machine may appear paradoxical to some readers. How can the
rules of operation of the machine change? They should describe completely how the machine
will react whatever its history might be, whatever changes it might undergo. The rules are
thus quite time-invariant. This is quite true. The explanation of the paradox is that the rules
which get changed in the learning process are of a rather less pretentious kind, claiming
only an ephemeral validity. The reader may draw a parallel with the Constitution of the
United States.”
Owing to the advances in the fields of data analytics, the semantic web and Natural Language
Processing (NLP), data-driven DM systems may now assign meaning to vague terms, and
“interpret” normative standards and principles to ‘manage’ the uncertainties of the human
language by deriving knowledge from a large corpus including case law.75 Through a
feedback mechanism based on the data received from the environment, data analytics in a
techno-regulatory setting provides the necessary flexibility and adaptive capacity.76 Modern
71 See Ronald Leenes, Hercules of Karneades, “Hard cases in recht en rechtsinformatica”, Universiteit Twente
1999 (in Dutch); P Leith, “The rise and fall of the legal expert system”.
72 See K Ashley, Artificial Intelligence and Legal Analytics - New Tools for Law Practice in the Digital Age,
Cambridge: Cambridge University Press, 2017.
73 R. Corrigan, Digital Decision Making, Back to the Future; Dan Saffer, Why We Need to Tame Our
Algorithms Like Dogs, WIRED (June 20, 2014)
(comparing the evolution of some wild wolves into human companionsthat is, dogswith the possible
future evolution of algorithms and of the relationship between humans and algorithms).
74 Alan Turing, ‘Computing Machinery And Intelligence’ in Edward Feigenbaum and Julian Feldman (Eds).
Computers and Thought (New York: McGraw-Hill) 1963, 11-35, 34.
75 See Ashley, Artificial Intelligence and Legal Analytics.
76 “The discussion on the future of AI seems to open three different directions. The first is AI that continues,
based on technical and formal successes, while re-claiming the original dream of a universal intelligence
(sometimes under the heading of ‘artificial general intelligence’). This direction is connected to the now
acceptable notion of the ‘singular’ event of machines surpassing human intelligence. The second direction is
defined by its rejection of the classical image, especially its rejection of representation (as in Brooks’ ‘new
AI’), its stress of embodiment of agents and on the ‘emergence’ of properties, especially due to the interaction
of agents with their environment. A third direction is to take on new developments elsewhere. One approach is
to start with neuroscience; this typically focuses on dynamical systems and tries to model more fundamental
processes in the cognitive system than classical cognitive science did. Other approaches of more general
‘systems’ subvert the notion of the ‘agent’ and locate intelligence in wider systems.” Vincent C. Müller,
Introductory Note: Philosophy and Theory of Artificial Intelligence in VC Müller (ed.), Philosophy and Theory
of Artificial Intelligence, Springer (2013), viii
techniques could hence potentially overcome the static (and limited) nature of the classical
rule-based LKBS.
The resulting systems could take the form of a combination of classical, including
handcrafted, rule-based representations augmented with knowledge derived by ML. The
latter could take any form suitable for the purposes, ranging from rules to frame-based
representations to neural networks. In any case, these systems are capable of dynamically
adapting to their environment owing to data-driven knowledge bases that are so vast and
complex that they may not be directly intelligible, and their relevance may not be
understood, e.g., due to the opaque nature of neural networks. The output inferred by these
hybrid (or ML-only) systems will correspond with the intended legal conclusions
regarding the presented cases as long as the system has a correct model of the normative
system it purports to materialize.
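A minimal sketch of such a hybrid system might look as follows. The fraud rule, the score feature and the trivially "learned" threshold are all invented for illustration; real systems would derive far richer models from case data.

```python
# A hybrid decision system: a handcrafted rule decides clear-cut cases,
# and a decision boundary derived from past decided cases handles the rest.

def learn_threshold(decided_cases):
    """Derive a boundary from prior outcomes (a stand-in for the ML part)."""
    granted = [score for score, outcome in decided_cases if outcome == "grant"]
    denied = [score for score, outcome in decided_cases if outcome == "deny"]
    return (min(granted) + max(denied)) / 2

def decide(case, threshold):
    """Explicit, human-authored rule first; learned model where it is silent."""
    if case.get("fraud_flag"):
        return "deny"                     # handcrafted normative rule
    return "grant" if case["score"] >= threshold else "deny"

history = [(80, "grant"), (75, "grant"), (40, "deny"), (55, "deny")]
t = learn_threshold(history)              # (75 + 55) / 2 = 65.0
print(decide({"score": 70}, t))           # the learned part decides
print(decide({"score": 90, "fraud_flag": True}, t))  # the rule overrides
```

The handcrafted rule preserves the legislator's explicit intent, while the learned threshold only mirrors regularities in past decisions, which is exactly where the 'ought' versus 'is' concern raised in this section arises.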
The integration or supplanting of rule-based LKBS by decision models based on data
analytics not only advances and expands techno-regulation but also opens a new dimension
for its integration with normative systems. Automated DM, when coupled with data
analytics, acquires the necessary adaptive capability to diffuse into more general domains
controlling and regulating real-life events that are of relevance to law and to the legal system.
We have to note, though, what this means from a normative point of view. Arguably,
rule-based representations of legal sources purport to represent the normative intent of the
legislator, which may be more or less successful, but in any case, care has been taken to honour
the normative value of the sources. In the case of machine learning on decided cases, we are
in a grey zone where the regularities observed originate both from the normative intent
and from extraneous factors. The cases, certainly prior to automated DM, are supposed to represent
the normative intent of the legislator (hence authoritative legal decisions by courts etc.), but
what the ML does is to interpret decisions embodying this normative intent. In comparison to the
representation of legal statutes there is an extra step involved. When the machine starts
deciding cases, there is the risk of deriving 'ought' from 'is', the less its decisions are based on
‘authoritative interpretations’ of the normative intent.77
3.4. Implementation of Data-driven DM in Legal contexts
Data-driven DM practices have diverse areas of application, aiming for control,
compliance, manipulation or prediction, implemented for various regulatory tasks such as
the detection or discovery of wrongdoing and the evaluation of financial status; relevance, risk and
credit scoring; content filtering; sentiment analysis; performance testing and other
optimization problems such as traffic management or the personalisation of default rules in online
contracts. In connection with these implementation fields (tasks), we see several applications
of data analytics which relate and encompass various legal domains and normative regimes.78
Data-driven automated DM processes, governed by algorithms of varying degrees of
complexity, are either the embodiment of existing normative orders, or they themselves enact
ad hoc regulatory orders with or without legal basis, as in the case of online advertising
77 This issue will further be elaborated in section 5.3 below.
78 From an economic perspective and at the enterprise/institution level, objectives pursued by the use of Big
Data are: Cost Reduction, Time Reduction/saving, Developing New Offerings, Supporting Internal Business
Decisions. Thomas H. Davenport, Big data @ work, 2014, 60-67.
where algorithms decide who is worthy of receiving a discount, or the call service using
sentiment analysis to decide which caller is more tolerant of being kept waiting.79
Although such trivial practices may seem irrelevant from the legal perspective, a second
thought reveals several repercussions with regard to consumer rights and human dignity in
general. In addition, data-driven DM systems may further alter the contractual balance and
manipulate the individual choices for the purpose of profit maximization.
Automated DM is not limited to cases of profiling of individuals.80 Data-driven decision-
making may affect lives and social environments in an array of ways, not necessarily involving
decisions directly about the individuals. For instance, a simple ML application to recognize
congestion on visual data (e.g., from a traffic surveillance camera) may give rise to biased
decisions with regard to traffic flow, depending on the data and the way of processing. One
other dimension is that nothing comes for free: the efficiency gains or other benefits
derived from data analysis also have trade-off effects in other domains or for other
individuals. Cutting costs through data analysis could mean certain economic and material
diversions, and shifts of interest among employees, students, citizens or consumers. For
instance, reducing the cost of handling customer complaints through a techno-regulatory
application (e.g. automated classification and diverting of complaints to the relevant
departments) may give rise to a significant change in a company’s way of communicating
with the public. Moreover, such systems, though not necessarily intentionally, run the risk
of favouring certain types of complainants over others without any just cause. Or, a bank
which decides to use predictive analytics to prevent customer churn may act pre-emptively,
for instance by offering advantageous services to customers regarded as more likely to
move to another bank. This may amount to a discriminatory result, in that many of us would
not consider risk of churn a legitimate basis for the bank to differentiate between
service receivers.
4. Data-driven DM concerns, challenges and potential
As the emergence of “algorithmic authority” legitimises the power of the “code” to
direct human action and also to determine which information is considered true, we may
79 “… [a]n algorithm could be used to determine particular qualities of the person calling in: based upon
speech patterns, the particular words they used, and even details as seemingly trivial as whether they said
“um” or “err”and then utilize these insights to put them through to the agent best suited for dealing with
their emotional needs.” Luke Dormehl, The Formula: How Algorithms Solve all Our Problems and Create
80 Unlike many former studies, the legal framework provided in this paper is not limited to profiling-based
applications, but intends to encapsulate any type of automated DM which has a regulatory relevance. The
below-explained “rule of law” implications come into being irrespective of whether the data analysis involves
personal profiling. Moreover, profiling may also be treated differently depending on the intensity and the
type of data used. As Ihde puts it: “Although the customer churn example may be said to be involving profiling
it is a type of low-level profiling relying upon already set interests and patterns. It is also regarded as
conservative profiling which is not severely different than old school statistics.” Don Ihde, “Smart?
Amsterdam urinals and autonomic computing” in M Hildebrandt and A Rouvroy (eds.), The Philosophy of Law
Meets the Philosophy of Technology: Autonomic Computing and Transformations of Human Agency,
Routledge (2011) 12-27.
identify certain mutually reinforcing dynamics and features of algorithmic DM systems which
raise concerns as to fairness/non-discrimination, privacy/invasiveness, and the notion of the
“autonomous self” and dignity. Systems that are regarded as autonomic are distinguished
by their adaptive capacity which aims to exclude human interaction. In that sense, autonomic
systems, if left to their own devices and without explaining what they do, can be seen as
extreme examples of (in)transparency and invasiveness. The lure of Big Data, as a new way
to capitalize on economic and institutional power, heavily relies on its discriminatory and invasive
power in terms of exploiting the informational vulnerabilities of data subjects.
In this Part, before moving on to the “rule of law implications of data-driven DM”, we identify
three types of concerns/challenges that are inherent in data-driven practices, namely:
informational asymmetries, epistemological flaws, and biases in machine learning (see Figure
1). These mutually reinforcing and inextricably intertwined dynamics/traits/features may be
regarded as the root of certain consequences which materialise as unfair, discriminatory
results and invasiveness impinging on privacy, further raising concerns from the point of view of
human autonomy as a higher value of the European order since the Enlightenment.81
With regard to possible risks, harms and undesirable consequences of data-driven DM, and
of techno-regulation in general, an already rich and intense literature exists both in “Law and
Technology” studies and in the wider approach which may be categorised as STS. Yet, we are
still far from a proper treatment of these phenomena which would provide a theoretical
framework for a legal reading of these technologies together with the ensuing consequences.
The rough taxonomy in Figure 1 is an attempt to identify certain characteristics of data-driven
practices that eventually give rise to harms. Although it is not possible to fully develop each
and every item included, the intended “mapping” attempts to systemize and theorize
potentially problematic dynamics and properties inherent to data mining, independent of the
possible legally addressable harms to which they may give rise. The reason we strictly keep harms
and their possible causes distinct is that the primary task of any legal analysis is to
establish the connections between possible causes and consequences; the legal
framing of the observed consequences comes only second.
81 “Where a novel technology alters the environment in some way, courts sometimes legitimize that alteration
by refusing to recognize harm and instead characterizing avoidance of the technology as self-imposed harm.”
“The Autonomy of Technology: Do Courts Control Technology or Do They Just Legitimize Its Social
Acceptance?” Also, see Marco Nørskov, “Human-Robot Interaction and Human Self-Realization:
Reflections on the Epistemology of Discrimination”, 319.
Figure 1: Data-driven DM concerns
4.1 Informational asymmetries
Many of the automated decision-making processes may be characterised by an
information asymmetry (cognitive deficit) between the system and the affected individual (and
sometimes even those controlling or regulating them). In a data-driven model, complex
interactions of embedded rules and algorithms, augmented through the knowledge obtained
by means of machine learning, render the process of decision-making opaque. These
informational asymmetries create obscurities as to the content of the rules, the identity of the
rule-makers, and also as to the way the rules are implemented and executed. The opaqueness
can be intentional, since providing insight into the algorithms may impair the competitive position
of the owner of the system. Another frequently mentioned reason not to reveal the algorithms
is that this may induce 'gaming' behaviour among those affected by the decision; knowing
which factors influence whether one gets singled out at the airport in view of the anti-terrorism
measures may render the system useless because the suspects will adapt their behaviour.82
Apart from all this, secrecy may also be mandated by law.83 There are also more mundane
reasons why the decision-making process is kept in the dark, for instance because the
system's algorithm itself is unintelligible, or because the algorithms and snippets of code conjuring
82 This transparency-gaming effect is known as Goodhart’s law “when a measure becomes a target, it ceases to
be a good measure” Tal Zarsky, “Transparent Predictions”, University of Illinois Law Review, Vol. 2013, No.
4, 2013.
83 As Zarsky mentions: "Every disclosure law has a law-enforcement exemption clause" ibid.
up the decision calculus are actually scattered across multiple systems, making it very difficult
to construct a clear justification of the decision.84
Apart from the (in)transparencies, the adaptive and pre-emptive capacity of big data analysis
is another type of asymmetry which brings up discussions of free will and autonomy. Pre-
emptive models pose a different type of information asymmetry due to their context awareness,
resulting in a “mental invisibility” on the part of the individuals. Especially in cases of
Ambient Intelligence (AmI) environments, the system senses, analyses and models
individuals by anticipating their state of mind and possible behaviour in order to pre-act in a
way that is deemed appropriate without conscious mediation.85
Such asymmetries and obscurities create further levels of regulatory difficulty for the public
agencies, especially where the techno-regulatory setting is controlled and exploited by private
parties.86 Irrespective of their ontological, legal, material or causal nature, the commonality
among the informational asymmetries is that they somehow cause certain cognitive deficits
on the part of the regulatee. Such vulnerability makes individuals prone to abuse and
incapable of objecting to discriminatory or invasive results. In all cases, the affected
individual has an information disadvantage compared to the techno-regulatory assemblage
that decides about her/him, making it difficult for the individual to contest the results
produced by the system.87
4.2 Epistemological flaws
Nothing is true; everything is permitted.88
As seen in Figure 1, the second major source of potential harms regarding automated
decisions is the epistemological background. Machine learning is a problem-solving
approach which implements algorithmic learning theory as a framework of computational
strategies for discovering “truth” in empirical questions.89 Data mining employs quantitative
and inductive methods (equations and algorithms), along with the statistical testing to process
data resources with a view to identify reliable patterns, trends, and associations among
variables that describe and/or anticipate a particular process or event.90 Although it is
paramount to human cognition and conduct to understand the reason (mechanism) behind
the associations one encounters in the real world,91 most of the big data practices focus on
84 Jenna Burrell (2016), “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”,
Big Data & Society, 1-12.
85 Simon Elias Bibri, The human face of ambient intelligence: cognitive, emotional, affective, behavioral and
conversational aspects. (Paris: Atlantis Press, 2015), 10, 37, 172.
86 Facebook’s system allows advertisers to exclude black, Hispanic, and other “ethnic affinities” from seeing
ads. Julia Angwin and Terry Parris Jr.Facebook Lets Advertisers Exclude Users by Race” ProPublica, Oct.
28, 2016.
87 These asymmetries will be elucidated in relation to the normative dimension of Law in section 5.1.
88 For the origin of the phrase, see
89 ML is, in the meantime, a subject of computational philosophy which extends to the mathematical
investigation of the systematic connections between the notions of scientific method, truth, causality and
90 Stephan Kudyba, Big Data, Mining, and Analytics, 29
91 Rob Kitchin, Big Data, new epistemologies and paradigm shifts, Big Data & Society, AprilJune 2014: 1
12, 4.
the potential exploitative and invasive uses for “valuable” insights, rather than the nature and
the quality of the knowledge itself.92 Neglecting experience and intuition, decision-making
becomes increasingly based on finding correlative patterns; as such, big data thrives on
the idea of “correlation supremacy”.
Since data itself is not capable of verifying the assumptions and the perspective underlying a
certain inference of causation, letting data speak for itself is problematic in many ways. As
will be elaborated in section 5.2, algorithms in machine learning are not immune from the
general shortcomings of causal inference in large data sets. Data mining reveals
correlation, not causality, which could be “spurious”, and this brings in the question of the
ethical justifiability of acting upon them.93 A further perspective which transpires through this
analysis is that, causality is not an objective quality of data, but rather a narrative constructed
through a certain perspective, as theorised and implemented in a model. In order to establish
a causative link, patterns need models with an encompassing narrative since “it is one thing
to establish significant correlations, and still another to make the leap from correlations to
causal attributes.”94 As an inductive method, progressing from particular cases (sample
data), machine learning accumulates a set of discovered dependencies, correlations or
relationships that are referred to as a “model”. Although a model in the abstract may be robust
and consistent, and thus experiments prove to be valid, the interpretation of the outcome may
be laden with epistemological flaws favouring certain values, persons, or processes, bringing
us to a domain which is more political than scientific.95 Systems and artefacts
we deploy to achieve social goals articulate and shape our values as to the relationship
between the means and ends.96
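The gap between correlation and causation discussed above can be made concrete with a toy illustration. In the hypothetical Python sketch below, two series share nothing but a common upward trend (a confounder such as time or season), yet exhibit a strong Pearson correlation; the variable names and figures are invented purely for this example.

```python
import numpy as np

# Two hypothetical series driven only by a shared upward trend (a
# confounder such as time or season), plus independent noise:
# neither series causes the other.
rng = np.random.default_rng(seed=0)
t = np.arange(100)                              # shared confounder (time)
ice_cream_sales = t + rng.normal(0, 5, size=100)
drownings = 0.5 * t + rng.normal(0, 5, size=100)

# Pearson correlation between the two series
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Pearson r = {r:.2f}")                   # strongly correlated, yet spurious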
Though technologies embody our view of the world and our relationship with it, the data itself
is silent regarding which correlations are to be preferred as corresponding with “reality” and thus
being “true”.97 Just like fables, models also deliver a value-judgment: a conviction about
92 David Chandler, “A World without Causation: Big Data and the Coming of Age of Posthumanism”,
Millennium: Journal of International Studies 1–19 , 2015, 2 ; Thomas W. Simpson, Evaluating Google As An
Epistemic Tool, in Harry Halpin and Alexandre Monnin (eds.) Philosophical Engineering Toward a
Philosophy of the Web, (Wiley Blackwell, 2014) 97-116.
93 “Episcopalian dog owners who drive more than forty miles to work and recently moved to the suburbs may
have an extraordinarily high rate of bladder cancer, but so what? The correlation is probably spurious.
Nothing about dog ownership, being Episcopalian, or recently moving to the suburbs would seem to cause
bladder cancer. The challenge is to sort through all of the correlations and decide which have a causal basis.”
Scott E. Page The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and
Societies, 85
94 David Bollier, “The Promise and Peril of Big Data”, 16.
95 This is simply because we are in a “constitutive entanglement” with those systems where “it is not only us that
make them, they also make us” see, Introna and Hayes 2011, 108.
96 “[O]ne can easily argue that, rather than pure and abstract rational argumentation, political choices are
constrained and often shaped by the technological form of life available to a polity at a given time.” Wagner,
B. ‘Algorithmic regulation and the global default: Shifting norms in Internet technology’ Etikk i praksis. Nord
J Appl Ethics (2016), 513, 6. Also, see Gernot Böhme Invasive Technification, Critical Essays in the
Philosophy of Technology, 2012 (Translated by Cameron Shingleton- Originally published in German as
Invasive Technisierung:Technikphilosophie une Technikkritik, 2005)
97 “We live in an age of such chronic decisionism: one in which legality as a mode of legitimation is displaced
by performance. Hegemony and the symbolic are effective through meaning. Communications work
through performativity. Legitimation is no longer separate from what it is meant to legitimate, it becomes
automatic.” Scott Lash, Power after Hegemony.
what is important and about how the world ought to be.98 As Bollier puts it: “the specific
methodologies for interpreting the data are open to all sorts of philosophical debate.”99
Overall, the ability to represent relationships between people as a graph of correlations does
not mean that equivalent information is conveyed.100 A model in Big Data is, in short,
“opinions embedded in mathematics.”101
4.3 Bias and discrimination
It is commonplace nowadays that two persons shopping at the same online retailer
may be offered significantly different discounts or product and service packages, depending
on their recorded profiles based on the demographic, behavioural, transactional, and
associational data previously accrued and aggregated about them. Behind almost every
online service or point of sale, there is a piece of computer code calculating how
much more we could be charged on the basis of location, connected device (Mac or PC), or
even the demand flexibility of the potential customer.102 It is a well-known fact that a significant part
of Big Data practices focuses on identifying economic vulnerabilities among the populace,
possibly among deprived groups. Even the names given by the industry to certain consumer
categories, though adopted merely for ease of reference, such as “Ethnic Second-City
Strugglers”, “Retiring on Empty”, “Tough Start: Young Single Parents”, “Established Elite”,
“Power Couples”, “American Royalty” and “Just Sailing Along”, suffice to illustrate the
obnoxious nature of consumer profiling in the retail and financial business.103 The common
point of these categories is that they all signify the parts of society who are, for
example, inclined to accept higher interest rates due to their precarious financial position,
or who are wealthy enough not to care about the price of the food they buy.
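A decision rule of the kind described above need not be sophisticated. The sketch below is a purely hypothetical illustration of device-based price steering; the device labels, markup figures and function names are invented for the example and correspond to no real retailer's logic.

```python
# Hypothetical price-steering rule: adjust a base price by signals such
# as the visitor's device class (all categories and figures invented).
DEVICE_MARKUP = {
    "mac": 1.10,     # assumed: Mac users steered to higher prices
    "pc": 1.00,
    "mobile": 1.05,
}

def quoted_price(base_price: float, device: str) -> float:
    """Return the price shown to this particular visitor."""
    return round(base_price * DEVICE_MARKUP.get(device, 1.00), 2)

print(quoted_price(100.0, "mac"))   # 110.0
print(quoted_price(100.0, "pc"))    # 100.0
```

The point is that a few lines of code suffice to turn an observable trait into a price differential, invisibly to the customer who never sees the other quote.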
Although it is true that algorithmic systems have the potential to make precise calculations, it
is also a long-known fallacy that algorithms will select or classify more ‘objectively’,
remedying existing inadequacies or inequalities in tasks based on the
categorization of individuals. So, the third concern related to data-driven decision making is
that of the undesired biases which may end up influencing the decisions, with discrimination
as being the most feared consequence. Intuitively, bias and discrimination are closely related
concepts, the former being more abstract and often inclusive of the latter which is, more in
the legal sense, a failure to treat all persons equally when no reasonable ground of distinction exists.
Bias refers to an inclination or outlook to present or hold a partial perspective, including the
98 Shallis, Silicon Idol
99 David Bollier, The Promise and Peril of Big Data, (2010, p. 13)
100 danah boyd and Kate Crawford, Critical Questions for Big Data
101 Cathy O'Neil, Weapons of Math Destruction
102 Dana Mattioli and On Orbitz, “Mac Users Steered to Pricier Hotels”, WSJ, August 23, 2012.
103 United States Senate, Office of Oversight and Investigations Majority Staff, “A Review of the Data Broker
Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes,” STAFF REPORT FOR
104 Brent Daniel Mittelstadt, et al., ‘The ethics of algorithms: Mapping the debate’ Big Data & Society, July
December 2016, 9.
refusal or the failure to consider other possible aspects. Within the context of data-driven
algorithmic decisions, bias denotes any tendency or interest of a data processing system to
act in a certain way or to yield certain results. When seen from this perspective, every
algorithm which somehow aims for sorting, prediction or grouping will eventually prioritize
certain criteria and establish some kind of ranking.105 Bias is a polymorphic and contextual
concept with many facets and dimensions and thus we cannot per se conclude that all bias
is harmful and must be repelled categorically. A systemic approach to bias in ML and its
potential harms, with a view to providing a comprehensive legal framework, requires the
treatment of bias and discrimination as distinct concepts: the former is an inherent
characteristic of ML, while the latter is a difference in treatment on a basis other than individual
merit.106 Discrimination or unfair treatment as a harmful consequence (the result of an act or
decision) is rather a value-laden concept partially addressed by law. Keeping bias apart from
its possible harmful consequences, such as unfair treatment and discrimination, enables us
to distinguish what is legally and technically addressable from what remains beyond the legal
realm as a political discussion.
Separated from discrimination, bias in data-driven DM systems, initially a computational
and data-originated problem, albeit with strongly intertwined economic, political and social roots
and underpinnings, may be studied under a tripartite categorisation: input bias, process
bias and output bias.107 Nevertheless, it goes without saying that such a categorisation may not
be taken as establishing distinct compartments of analysis; in almost every case, bias is
a fusion and/or combination of these in a complex and intertwined way, often a question of
how the problem at hand is approached. Every stage of big data analysis has a direct or
indirect bearing on the final interpretation. Big data analysis is a holistic process whose different
stages or components cannot be analysed in isolation but rather require systemic
conceptualisation. Bias pre-existing in the data, and further created through the data collection,
preparation and analysis stages, may or may not translate into discriminatory results at
the final interpretation/decision stage. Considering the level of human intervention (e.g.,
defining features, pre-classifying training data, and adjusting thresholds and parameters)
together with the commercial logic behind these systems, it would not be wrong to assume
that there is often embedded bias and human judgment in big data analysis.108
105 In a cosmological sense, bias is the source of life and the driver of evolution on earth. Any becoming would
start with a tendency, a propensity to move towards light or heat etc., which would cause an anti-entropic
process (giving some order to the dissipating energy in the form of a living organism) for a more favoured
existence. Scott J. Muller, Asymmetry: The Foundation of Information (Springer-Verlag Berlin Heidelberg,
2007). For an extraordinary work on entropy and bias, see Robert Biel, Entropy of Capitalism (Brill
Publishers, the Netherlands, 2012).
106 Larry Alexander, “What Makes Wrongful Discrimination Wrong?”, 141 U. PA. L. REV. 149, 151 (1992);
Also see, Faisal Kamiran and Toon Calders, Classifying without discriminating, in IEEE International
Conference On Computer, Control & Communication (2009)
107 This partly overlaps with the approach developed by Friedman and Nissenbaum which argues that bias can
arise from (1) pre-existing social values found in the ‘‘social institutions, practices and attitudes’’ from which
the technology emerges, (2) technical constraints and (3) emergent aspects of a context of use. Also, see
Barocas S (2014) “Data mining and the discourse on discrimination” ; Barocas S and
Selbst AD (2015) “Big data’s disparate impact”, abstract=2477899 (accessed 16
October 2015); Alex Rosenblat, Tamara Kneese, and danah boyd, “Algorithmic Accountability”
108 See Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”, Big
Input bias
Speaking of data collection, data does not exist in isolation in a value-free and
neutral vacuum, but first needs to be contemplated within a context and through a certain
interpretation of the world/environment. Items need to be identified as “data” in the
seamlessness of phenomena, the undifferentiated blur.109 Data capture and collection is the
initial stage where each inclusion or exclusion valorises a certain point of view and silences
another.110 Part of the problem with pre-existing bias is rooted in the social and political
contradictions inherent to our current techno-financial and primarily accumulative political
system. This is where pre-existing bias is introduced into the system.
Following data collection, data preparation and transformation is primarily a
structuration and categorization in which the data formats, data structures and the operations
carried out significantly limit or decisively determine the data mining capabilities of the coming
stages.111 Data preparation is never neutral but a highly interpretative part of data analysis;
as Bollier puts it, any interpretation is necessarily biased by subjective filtering.112
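The interpretative character of data preparation can be illustrated with a toy example. In the hypothetical sketch below, a seemingly neutral cleaning step (discarding records with a missing income field) disproportionately removes one group from the dataset before any analysis has begun; the records and group labels are invented.

```python
# Hypothetical records: group "B" more often lacks a recorded income
# (e.g. informal employment), so a "neutral" cleaning rule skews the data.
records = [
    {"group": "A", "income": 30000},
    {"group": "A", "income": 45000},
    {"group": "A", "income": 52000},
    {"group": "B", "income": 28000},
    {"group": "B", "income": None},
    {"group": "B", "income": None},
]

def share(rows, group):
    """Fraction of rows belonging to the given group."""
    return sum(r["group"] == group for r in rows) / len(rows)

# A common, apparently innocent preparation step: drop incomplete rows.
cleaned = [r for r in records if r["income"] is not None]

print(share(records, "B"))   # 0.5  before cleaning
print(share(cleaned, "B"))   # 0.25 after cleaning
```

Everything downstream now operates on a dataset in which group B is half as visible as it was in the raw records, without any explicit decision to exclude it.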
The scope of this paper does not extend to a thorough analysis of all pre-processing
operations. Nevertheless, it is important to note that some of the data transformation
operations overlap with the data mining tasks explained below. Put in other words, pre-
processing itself consists of many composite data mining functionalities. Considering the new
data analysis models and technologies, such as sand-boxes and data-cubes, designed
to overcome latency problems in real-time data analysis, the data pre-processing and analytics
stages increasingly blend into each other and become conflated and indistinguishable.113
Bias in analysis
As to bias in the data processing (analysis) stage, machine learning applications are
of particular importance as they make up the crux of Big Data analysis where discriminative
and consequently harmful practices originate and/or become concealed. Machine learning,
as an algorithmic approach, is a general model of inductive learning in observational
environments through Turing-computational means especially used for the analysis of large
datasets.114 It invites and provokes a type of empirical query where the answer sought is not
Data & SocietyJanuaryJune 2016: 112.
109 Every knowledge domain, institution and discipline has its own approaches and standards with regard to
contemplation of data. To decide what is to be designated as data is an interpretive function that may generate
bias. Lisa Gitelman and Virginia Jackson, “Raw Data” Is an Oxymoron, 3.
110 Geoffrey Bowker, Sorting Things Out, Classification And Its Consequences, 5
111 Custers, Toon Calders, Bart Schermer, and Tal Zarsky (Eds.), Discrimination and Privacy in the
Information Society, Data Mining and Profiling in Large Databases, (Springer-Verlag Berlin Heidelberg
2013), 8 ; danah boyd & Kate Crawford (2012) “Critical Questions For Big Data”, Information,
Communication & Society, 15:5, 662-679.
112Bollier, D. (2010) ‘The promise and peril of big data’; Josep Domingo-Ferrer, et al. “Database Privacy” in
Sherali Zeadally ; Mohamad Badra (Eds) Privacy in a Digital, Networked World: Technologies, Implications
and Solutions (Heidelberg New York Dordrecht London :Springer) 23.
113 For more, see Mehmed Kantardzic, Data Mining, 8
114In the broader domain of algorithms implemented in various areas of concern (such as search engines or
credit scoring) machine learning algorithms may play either a central or a peripheral role and it is not always
easy to tell which is the case. For example, a search engine request is algorithmically driven, however, search
necessarily known in advance.115 Rather than being an algorithm itself, machine learning is a
collection of inductive and quantitative techniques/algorithms used for data mining tasks,
e.g., classification, clustering, and prediction in general. It consists of subcategories such as
supervised, unsupervised, semi-supervised, and active learning.116 Learning style algorithms
may yield insights as to whether a tweet or comment is positive or negative, or as to the
“topic” in a particular corpus of text. Machine learning algorithms are efficient generalizers
and predictors, that is, they perform computational tasks by drawing generalizations through
inductive reasoning based on sample data. The problem of learning from a given set of
samples consists of two stages: first, learning or selecting unknown dependencies or
correlations from a sample dataset; and then using these discovered correlations to predict
new outputs for future input values of the system.117 Such a learning process may be biased
in an array of ways, depending on the purpose for which the task is deployed and the
algorithmic approach adopted.
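The two stages just described (learning dependencies from sample data, then predicting outputs for new inputs) can be reduced to a minimal sketch. The example below is a toy 1-nearest-neighbour classifier in Python; the data points and the risk labels are invented purely for illustration.

```python
# Stage 1: "learning" here amounts to storing labelled samples.
# Each sample is ((feature_1, feature_2), label); all values invented.
samples = [((1.0, 1.0), "low risk"),
           ((1.2, 0.8), "low risk"),
           ((5.0, 5.5), "high risk"),
           ((6.0, 4.8), "high risk")]

def predict(x):
    """Stage 2: label a new input after its nearest training sample (1-NN)."""
    def sq_dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, x))
    nearest = min(samples, key=lambda s: sq_dist(s[0]))
    return nearest[1]

print(predict((1.1, 0.9)))   # low risk
print(predict((5.5, 5.0)))   # high risk
```

Even in this trivial model, every prediction is fully determined by which samples happened to be collected, which is exactly where the biases discussed next enter.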
Bias in supervised systems may be carried over from the training data, which is labelled and
contextualized by humans for tasks such as sentiment mining, outlier analysis, etc.
So, an important source of bias in supervised learning, and particularly in the case of classification-
type data mining tasks, is classifiers trained on data with biased features, which may
yield discriminatory results.118 Accordingly, seemingly neutral systems free of technical bias
will generate biased and discriminative outputs when trained or operated on data
contaminated by the pervasive discrimination embedded in our social, political and economic
structures and practices.119 Certain groups (protected classes) might be represented in
disproportionate amounts in the training data, creating a potential for bias and harmful
outcomes in many respects. A related concern is the systemic omission of those living at
the margins of Big Data, that is, those who are less “datafied” due to poverty, geography or
lifestyle and are also less involved in the formal economy.120 Where the sample data introduced
engine algorithms are not at their core ‘machine learning’ algorithms. Search engines employ machine
learning algorithms for particular purposes, such as detecting ads or blatant search ranking manipulation and
prioritizing search results based on the user’s location”. Jenna Burrell, ‘How the machine ‘thinks’:
Understanding opacity in machine learning algorithms’ Big Data & SocietyJanuaryJune 2016: 112.
115 Valentina S. Harizanov et al., “Introduction To The Philosophy And Mathematics Of Algorithmic Learning
Theory” in M. Friend, N.B. Goethe and V.S. Harizanov (eds.), Induction, Algorithmic Learning Theory, and
Philosophy, (Springer, 2007), 2
116 This categorization is based on the way the data is represented; and other categorizations can be made based
on the goals or learning strategies. Mehmed Kantardzic, Data Mining, 89
117 Mehmed Kantardzic, Data Mining, 88, 89
118 Most of the times, in order to efficiently label sample data, manually labelled examples are used. However,
if there exist no pre-labelled examples, system designers devise a way for labelling data themselves, and this
may turn out to be a source of inaccuracy and flaw in data mining. For more general works on this problem,
see Dan McQuillan, ‘Algorithmic paranoia and the convivial alternative’ (2016) Big Data & Society 1–12.;
David J. Hand, “Classifier Technology and the Illusion of Progress”, 21 Statistical Sci. 1 (2006); Nicholas
Diakopoulos ‘Algorithmic Accountability : Reporting On The Investigation of Black Boxes’ Tow Center for
Digital Journalism -Report (2013); Richard Y. Wang & Diane M. Strong, Beyond Accuracy: What Data
Quality Means to Data Consumers, 12 J. Management Info. SYS. 5 (1996); Luciano Floridi, Information
Quality, 26 PHIL. & TECH. 1 (2013) ; Larry P. English, Information Quality Applied: Best Practices For
Improving Business Information, Processes And Systems (2009). See Barocas, fn.39
119 Anupam Chander,The Racist Algorithm?”, 13 and see footnote 50,
Also see, Faisal Kamiran and Toon Calders, Classifying without discriminating, in IEEE International
Conference On Computer, Control & Communication (2009)
120 Jonas Lerman, “Big Data and Its Exclusions”, 66 STAN. L. REV. ONLINE 55 (2013); S Barocas, “Data
Mining And The Discourse On Discrimination”; Crawford, K., 2013. “The Hidden Biases in Big Data”,
for training fails to correspond with the probability distribution of the entire population, the
misrepresentation and the resulting bias embedded in the system become obfuscated and
mostly legitimized through the seeming infallibility of data mining.121
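The effect of an unrepresentative sample can be shown with a deliberately crude sketch. Below, a hypothetical majority-vote "model" trained on a skewed sample learns to predict the over-represented outcome for everyone, so the under-represented group is systematically misclassified; the groups, labels and proportions are invented.

```python
from collections import Counter

# Hypothetical, skewed training sample: the "urban" group is heavily
# over-represented, so a naive majority-vote model simply learns to
# return the dominant outcome for every input.
training = [("urban", "approve")] * 9 + [("rural", "reject")] * 1

majority_label = Counter(label for _, label in training).most_common(1)[0][0]

def classify(group):
    # The crude model ignores the applicant's features entirely and
    # reproduces the majority label learned from the skewed sample.
    return majority_label

print(classify("urban"))   # approve
print(classify("rural"))   # approve: the under-represented group never fits
```

Real classifiers are subtler, but the mechanism is the same: a group that barely appears in the sample cannot meaningfully shape the model that will later decide about it.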
Other than the sampling of data, in supervised machine learning the selection of target
variables, class labels and features for classification tasks is another important source of bias.
Classification, a widely used data mining task, predicts a target variable that is either binary
(e.g., pass/fail) or categorical (e.g., membership to a consumer group) by way of induction
through a set of input variables.122 Classification may remain impartial when detecting well-defined
concepts such as a specific form of fraud or spam mail, but is less successful with regard to
more open-ended concepts such as creditworthiness or suitability for a job position.123
Other than concepts defined by attributes, target variables may also be numerical values such
as sales quotas or production times.124 Devising target variables is the step at which bias may
become nested in the system, since it is a crucial process for decisions such as whether one is
worthy of a loan or of benefits. Having said that, the overall
performance of the data mining process may also be enhanced through explicit and clear
target definition, though this is easier said than done. Classification further requires the selection
of features to differentiate between the classes. For instance, in a system aimed at recognising
and classifying vehicles on a motorway, wheels, engine and sound are the properties to
distinguish between bicycles, motorbikes and cars.125 Choosing a feature that
corresponds to or exceeds the concept itself yields no information. For example,
mobility is not a feature capable of distinguishing vehicles from one another. Similarly, if
we take only the number of wheels, this will provide low mutual information in distinguishing
bicycles, motorbikes and cars on a motorway, given that two of the three (bicycles and
motorbikes) generally have two wheels. Feature selection is thus the making of a representative
reduction of the real-world phenomena to be analysed. Since it is not possible to fully
encompass the entire complexity of physical phenomena and the real world in general, if
the selected features are not informative, this will directly impair the accuracy of the
analysis.126 Such bias, stemming from the inherent inadequacy of the representation, is not
always easy to detect, for it may not be an intentional choice of the system designers.
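The mutual-information point above can be made concrete with a minimal sketch. The feature values are the toy ones suggested by the vehicle example; the entropy-based estimator is a standard textbook construction, not drawn from any source cited here.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def mutual_information(feature, labels):
    """I(F; C) = H(C) - H(C|F), estimated from simple co-occurrence counts."""
    n = len(labels)
    h_cond = 0.0
    for value in set(feature):
        subset = [c for f, c in zip(feature, labels) if f == value]
        h_cond += len(subset) / n * entropy(subset)
    return entropy(labels) - h_cond

# One observation per vehicle class, with the toy features from the text.
classes  = ["bicycle", "motorbike", "car"]
mobility = [True, True, True]          # shared by all vehicles: uninformative
wheels   = [2, 2, 4]                   # separates cars, not the two-wheelers
combined = list(zip(wheels, [False, True, True]))  # wheels plus engine

print(mutual_information(mobility, classes))   # 0.0 bits
print(mutual_information(wheels, classes))     # ~0.92 bits
print(mutual_information(combined, classes))   # ~1.58 bits = H(class)
```

The shared feature carries no information, a single feature carries some, and only the combination recovers the full entropy of the class variable: uninformative feature choices cap what the analysis can ever distinguish.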
Unsupervised learning is generally a descriptive technique used to identify underlying patterns
in a dataset through data mining tasks such as clustering or its variantse.g., collaborative
121 Larry M Bartels, “Economic Inequality and Political Representation”, in THE UNSUSTAINABLE
AMERICAN STATE 167, Lawrence Jacobs & Desmond King, ed. 2009 ; Barocas, Solon and Selbst, Andrew
D., Big Data's Disparate Impact, 15
122 Vijay Kotu, 8
123 For instance, in a ML system designed to identify types of consumers, each defined type is a class label,
e.g. “single mom”.
124 Barocas, Solon and Selbst, Andrew D., Big Data's Disparate Impact, 10 and fn.23.
125 D. Haussler, M. Kearns, and R. Schapire, ‘Bounds on the Sample Complexity of Bayesian Learning Using
Information Theory and the VC Dimension’, Machine Learning: Special Issue on Computational Learning
Theory, 14/1 (January 1994) in Shroff, footnote 44.
126 Toon Calders & Indrė Žliobaitė, “Why Unbiased Computational Processes Can Lead to Discriminative
Decision Procedures” in Discrimination And Privacy In The Information Society; Barocas and Selbst, 16;
Andrea Romei and Salvatore Ruggieri, “Discrimination Data Analysis:A Multi-disciplinary Bibliography” ;
S. Hajian and J. Domingo, "A Methodology for Direct and Indirect Discrimination prevention in data mining"
filtering or association analysis. The process is regarded as unsupervised because the input
samples are not labelled; based on the metrics defined, the learning agent constructs the
model on its own.127 Unsupervised learning methods are employed to discover previously
unknown features and the associations between them, eventually resulting in a certain but
limited representation of the observed phenomena.128 ‘Topic discovery’ is an example of
unsupervised learning where the system gains insights into the semantics/meaning of certain
text or corpora without any pre-existing knowledge provided by outside human intervention.129
Clustering, as a basic technique of unsupervised learning, is the grouping of objects and
items that are similar to each other: clustering algorithms bring together the items that are
closest in a dataset. Naturally, a clustering algorithm will shape the given data in a manner
reflecting embedded biases and inequalities, giving rise to a subtle type of bias that is
difficult to identify, let alone remedy, because of the way the technology engages with and
extenuates it. Clustering-type tasks therefore have the potential to foster and augment
pre-existing bias, either by shifting benefits or by misplacing burdens.130 For example, the grouping
of one’s Facebook friends through clustering may create a group such as “church friends”
even though the user has never contemplated such a grouping of her/his friends.131
Obviously, discovering unknown traits in demographic data may give rise to many disclosures
beyond the designers’ anticipation, and clustering analysis conducted on demographic data
carries a real risk of revealing undesirable results.
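As a rough sketch of how clustering surfaces groupings the user never defined, the toy example below runs plain Lloyd's k-means over invented "interaction features" for eight hypothetical contacts. The names, feature values, choice of k = 2 and seed centroids are all assumptions made for illustration.

```python
# Invented "interaction features" per contact: (messages/week, shared events).
friends = {
    "ana": (12.0, 0.0), "ben": (9.0, 1.0), "cem": (11.0, 0.0), "dora": (10.0, 1.0),
    "emre": (1.0, 6.0), "filiz": (2.0, 7.0), "gul": (0.0, 5.0), "hakan": (1.0, 6.0),
}

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centroids, iters=10):
    """Plain Lloyd's algorithm; assumes no cluster ever becomes empty."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*c)) for c in clusters]
    return centroids

# Seed the two centroids with two observations from the data.
centroids = kmeans(list(friends.values()), [(12.0, 0.0), (1.0, 6.0)])

groups = {}
for name, p in friends.items():
    nearest = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
    groups.setdefault(nearest, []).append(name)

# The algorithm recovers a grouping (say, an "event-going circle") that the
# user never defined, purely from regularities embedded in the data.
print(groups)
```

The split the algorithm produces reflects whatever regularities the data happens to contain, whether or not the user would endorse, or even recognise, the resulting categories.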
Output bias
Lastly, in many cases bias and possible discrimination are not explicitly inscribed in
the code or other observable components of the system, but are rather an outcome of the overall
process leading to a decision based on certain assumptions and the consequent
interpretations. The decision in a machine learning context, manifesting itself as an
interpretation of the result, is the final stage where bias cultivated throughout the several
processes of data mining becomes translated into the outcome. Output bias could be due
to an accumulation of flaws and biases, both intentional and unintentional, from
earlier stages, or it may arise independently throughout the analytic operations as an emergent
property of the system. Such systemic bias is not necessarily a consequence of inaccurate
data, technical constraints, imprecise calculations or their combination. On the contrary, at
times the problem with systemic bias may well lie in overly precise predictions and perfect-fit
models, accompanied by assumptions that encapsulate a commercial logic of profit seeking.
Statistically sound but nonetheless poorly representative and non-universal
127 A third group, semi-supervised learning is a hybrid approach where some contextual or background
knowledge introduced before the algorithms are let loose. Florin Gorunescu, Data Mining
128In case of clustering and other unsupervised data mining tasks, there exist no supervisory tagging and no
pre-set target variables to predict. Hence rather than being predictive, unsupervised machine learning is used
for descriptive tasks which aims to construct some structure of the observed phenomena out of the data.” Vijay
Kotu, 15
129 Gautam Shroff, The Intelligent Web, Search, smart algorithms, and big data, (USA:Oxford University
Press, 2014), 85
130 M. Eduardo, et al., “Avoiding Bias in Text Clustering Using Constrained K-means and May-Not-Links”
131 Motahhare Eslami, et al., “Friend Grouping Algorithms for Online Social Networks: preference, bias, and
generalisations at the data analytics stage may severely bias the process.132
Although it may seem to be the final stage, interpretation in fact starts with the design of the Big
Data project and is completed with the decision based on the extracted information. Accordingly,
bias can never be explained solely with reference to a single source or process, but usually
emerges as an amalgamation of various causes. Since Big Data analysis involves composite,
highly recursive and complex algorithmic processes, understanding how bias is translated into
the system requires a systemic approach to data-driven decisions.
In sum, it is neither human agency nor the machine but a systemic symbiosis of both that
makes up the decision calculus of big data systems. Computer code embodying algorithms
does not make decisions but simply automates the logic crafted by humans, or by human-designed
agents.133 Hence it is not exactly the “algorithm” which is biased and obscure but
rather the complexity arising out of the interaction of many algorithmic processes.134 The
interpretation of data analysis for the purpose of prediction may only be understood when
studied as a totality of the processes, concepts and assumptions that underlie any specific
decision.
Left to their own profit-seeking motives, learning-based algorithms running on demographic
data and deployed for economic purposes will eventually develop
decision rules acting to the detriment of disadvantaged groups, or even detect unknown
patterns of vulnerability.135
4.4 Conclusion
The above attempt to systemise the concerns, challenges and potential harms
emanating from data-driven decision-making is far from comprehensive, and the true
interrelation among these dynamics presents a much more complex and perplexing taxonomy.
Intended as a “helicopter view”, our systemisation also suffers from generalisation and,
to some extent, a lack of precision. The intertwined nature of these dynamics, formulated as
concerns and challenges and explained in a hybrid terminology compiled from many disciplines
and different types of technical writing, makes them equally elusive and shifting, constantly
co-opting and overlapping, and thus renders a clear-cut, precise picture almost impossible.136
Accordingly, it is to be noted that the above construction in Figure 1 is far from a one-to-one
and airtight taxonomy, and it is doubtful whether one can be made. This is because
the terms borrowed from computer science and statistics, such as data mining,
machine learning, neural networks and the like, originate from the practical application of
mathematics to specific problems, and inevitably lack the necessary consistency, generality
132 Barocas and Selbst, ‘Big Data's Disparate Impact’, fn. 57
133 Sonja B. Starr, “Evidence-Based Sentencing and the Scientific Rationalization of Discrimination”, 66
STAN. L. REV. 803, 806 (2014).
134 Andrew Goffey, “Algorithm” in Matthew Fuller (ed.), Software Studies A Lexicon, Cambridge,
Massachusetts London, England:The MIT Press, 2008)15-36, 19..
135 Goran Bolin and Jonas Andersson Schwarz, “Heuristics of the algorithm: Big Data, user interpretation and
institutional translation”.
136 For instance, although treated separately in Figure 1, “unpredictability” and “uncertainty” are not easy to
distinguish in every case.
and the rigour to fit perfectly into, or to interact smoothly with, legal, sociological or
philosophical concepts and narratives. Therefore, the above systemisation cannot be taken
as a blueprint or some sort of “one size fits all” template for the investigation of algorithmic
decision-making, but rather as a conceptual map offering some pointers for a query aiming
to derive legally meaningful results. The development of such taxonomies is an iterative
process which will crystallise in time as more systems are designed and then investigated
from this perspective.
Although it has become a truism that data analytics yields discriminatory and unfair results,
we still lack a comprehensive literature or an overarching theorisation which identifies
or evaluates these potential harms and their causes from the view of the relevant legal domains.
Seen from this perspective, the rule of law implications elaborated in the coming Part may be
regarded as a further and special refinement of the concerns inherent in data-driven practices
from a procedural perspective: a separate (meta-level) categorisation of potential harms
focussed on data-driven systems as regulatory processes.137
5. The Rule of Law Implications
5.1 Rule of law as effective capability to contest decisions
Logically, every legal system has a claim to legitimacy in the sense that the source of
authority relies on a moral right to rule.138 In modern democratic systems, the principle of the Rule
of Law, as an essential pillar of this moral dimension, requires that rules are publicly declared
with prospective application (punishments or consequences tied to a given prohibition or
exigency), and possess the characteristics of generality (usually meaning consistency and
comprehensibility), equality, and certainty (that is, certainty of application for a given
situation).139 As the protection of rights, the prevention of arbitrariness and the holding of the
state responsible for unlawful acts are only possible in an intelligible, reliable and predictable order,
universality and relatively constant application over time in a prospective and non-contradictory
way may be regarded as the main constituents of the notion of the Rule of Law.140
Rights are of little use if their limits and proper scope are not known in advance by citizens.
The most important safeguard against arbitrariness is an accessible and understandable
normative regime.
An important procedural dimension of the rule of law, which is of particular concern from the data-
driven perspective, is the effective capability to contest decisions.141 This primarily requires
137 In a techno-regulatory setting, law operates at a higher level of order as “meta-technology” so that, we
witness the emergence of legal norms that no longer command human conduct but regulate the design of the
systems that limit, shape and thus, govern human conduct. Ugo Pagallo, The Laws of Robots Crimes,
Contracts, and Torts, 10-11. The issue of level of abstraction and control order is also elucidated by Turchin
under the concept of “meta-system transition” in relation to systems theory and cybernetics. Valentin F.
Turchin (Trans. Brand Frentz) The Phenomenon of Science -A cybernetic approach to human evolution,
(Columbia University Press: USA, 1977), 55
138 “Or as Thomas Hobbes might have put it, how is authority now authorized?” Bauman, Zygmunt et al.
(2014) After Snowden: Rethinking the Impact of Surveillance. International Political Sociology, 121-144, 37.
139 Brian Tamanaha, On the Rule of Law, Cambridge: Cambridge University Press, 2004.
140 Jeremy Waldron, “The rule of law in contemporary liberal theory”, Ratio Juris, v. 2, n. 1, p. 84, 1989;
Hans-Wolfgang Arndt, “Das Rechtsstaatsprinzip”, JuS 27, pages L41-L44, 1987.
141 Speaking of natural overlaps between the substantive and procedural aspects of the rule of law, Waldron
that one must be aware of the existence of a decision-making process, and also foresee and
understand its consequences.142 Law’s capacity to allow subjects to contest judicial and
administrative decisions, including the validity of the rule itself, provides a meta-level procedural
safeguard in that “the addressees and the ‘addressants’ of legal norms coincide”, a form of
self-regulation where the law-maker is bound by the rules of its own creation.143
The emergence of positive law, an outcome of enlightened modernity rooted in the advances
of capitalistic society towards industrialism, was the keystone in the evolution of a social
and political order which developed instruments within the realm of constitutional and
administrative law to “regulate the regulator”.144 Historically, the concept of the rule of law reflects
the struggle to limit law as an instrument of power.145 In the ideal liberal state, no
executive, legislator, judge or citizen could exercise or enjoy arbitrary power so as to act
against the public welfare or common good: “the empire of laws and not of men”.146
The rule of law, as an essential constituent of the ideal of democracy, is predicated upon a
two-pronged transparency principle: first, rule-making as a process should be open to the people
through political representation; and second, the enforcement process should allow
contestation through procedural safeguards. “Good law” ought to be predictable, and also
accountable, in that subjects must know in advance who is responsible for the enactment and
administration of the norms. In techno-regulatory settings, however, the three phases of the legal
process (direction or rule-making, detection, and correction) collapse on top of each other
and become an opaque inner process embedded in the systems.
Against this backdrop, below we will conceptualise three potential harms which undermine
the rule of law as a procedural safeguard to discern, foresee, understand and contest
decisions, namely (i) the replacing of the causative basis with correlative calculations, (ii) the
collapse of the normative enterprise and (iii) the erosion of the moral enterprise.147 Although these
implications are not completely specific to the Big Data space, but rather of a general nature
mentions that hearing by an impartial tribunal acting on the basis of the evidence and arguments presented,
right to hear reasons from the tribunal when it reaches its decision, and some right of appeal to a higher
tribunal as procedural characteristics are equally indispensable. Jeremy Waldron, “The Rule of Law and the
Importance of Procedure”, in James E Fleming, Getting to the rule of law (New York: New York University
Press, 2011), 7.
142 M. Hildebrandt, Profile transparency by design? Re-enabling double contingency in M. Hildebrandt, K. de
Vries (eds.), Privacy, Due Process and the Computational Turn, Routlegde 2013.
143 Mireille Hildebrandt. Smart Technologies and the End(s) of Law. (Cheltenham: Edward Elgar, 2015) 10
144 Mireille Hildebrandt, ‘Law as Information in the Era of Data-Driven Agency’, 22. Also, see Fiss, Owen
M., "The Autonomy of Law" (2001). Faculty Scholarship Series. Paper 1316.
145 “The rule of law is not itself a legal rule or a rule system, but a political and cultural ideal that emerges
over time and provides essential support for the proper functioning of law.” Brian Z. Tamanaha, A Realistic
Theory of Law, 31 ; Brian Z. Tamanaha, Law as a Means to an End: Threat to the Rule of Law (New York:
Cambridge University Press 2006) . Joseph Raz, “The Rule of Law and its Virtue,” in The Authority of Law
(Oxford: Clarendon Press 1979)
146The definition: ‘rule of law’ is the English translation of the Latin phrase ‘imperium legum’, more literally
“the empire of laws and not of men” Mortimer N.S. Sellers, ‘What Is the Rule of Lawand Why Is It So
Important?’ in J.R. Silkenat et al. (eds.), The Legal Doctrines of the Rule of Law and the Legal State
(Rechtsstaat), Springer International Publishing: Switzerland 2014, 1-13, 4.
147 This trilogy has been briefly visited in Ugo Pagallo, Emre Bayamlıoğlu et. al., “New technologies and law:
global insights on the legal impacts of technology, law as meta-technology and techno regulation” New-
Technologies-and-Law-Research-Group-Paper, 4th LSGL Academic Conference, Mexico 2017.
regarding techno-regulation, each of them is aggravated and extends into deeper dimensions
when techno-regulation is implemented through data-driven systems. The informational
asymmetries and the flawed epistemology of data-driven inferences, together with the bias
inherent in machine learning, bring about the concern of the “rule of law” being exchanged for
the “rule of technology”, accompanied by Kafkaesque, Huxleyan and Orwellian discourses.148
5.2 Challenge to law as a normative enterprise
Rules, principles, standards and, in general, “norms” provide uniformity, predictability
and social coordination, for they inform individuals about how to conduct themselves and explain
the legal course of events in situations addressed by the law.149 Law, hence, is a normative
enterprise in which the regulator consciously creates and maintains the norms that regulate
conduct in society, and the judiciary consciously decides what to do when frictions or disputes
arise in view of the existing norms.
Any regulator, whether in the realm of the law or within corporate policies, will weigh
various interests and decide what the norm should be in a particular constellation of facts. The
norm is usually written down, allowing the regulatees to take note of it and act accordingly.
Regulatees are supposed to adhere to the norms and, if they transgress a norm, face the
consequences. The buck does not stop here; otherwise, enforcing the norms through
technology would potentially fully realise the ideal sketched by the law. Statutory norms
represent the solidification of a political debate at a particular moment, taking into account
only the foreseeable facts, interests and effects. Changing knowledge, opinions, interests and
so on may require reopening the debate, and hence the contestation of norms is an essential
mechanism for having law and society mutually adapt and develop. Courts will decide how
to cope with new arguments and new situations, and how to ensure that their verdicts are
enforceable and themselves comprise law.
As explained in the earlier parts of the paper, there is some implicit normativity in every
decision. Any decision-making system has a normative basis, which may be seen as the totality
of the decisional criteria, assumptions and legitimations embedded in the system, specifying
its behaviour.150 However, techno-regulatory settings based on data-driven correlations and
inferences pose a challenge to law as a normative enterprise in that there exist no clear
norms in the conventional sense to provide a mapping between the facts and the legal
effects.151 Below we take a closer look at how certain normative opacities and flaws in
probabilistic reasoning seriously impair the rule of law as a remedy for the contestation of
decisions.152
148 Brownsword, “So What Does the World Need Now?”. More on the implications of ML that may disrupt the
concept and Rule of Law, see Mireille Hildebrandt, Law As Computation in the Era of Artificial Legal
Intelligence. Speaking Law to the Power of Statistics (June 7, 2017), University of Toronto Law Journal,
forthcoming. Available at SSRN:
149 Brian Z. Tamanaha, A Realistic Theory of Law, Cambridge University Press (2017), 121.
150 Vries MJ, Hansson SO and Meijers (eds.), Norms in Technology (Springer Netherlands 2013)
151 “As well, the specified variables could be the result of still other forces to which we should pay attention: a
statistical model might gain accuracy by including the race, sex, age, and income of the parties, lawyers, and
judges participating in a case without revealing precisely why or how these attributes influence decision-
making. Useful variables will not necessarily map out decision dynamics.” Adam Samaha, "Judicial
Transparency in an Age of Prediction" (University of Chicago Public Law & Legal Theory Working Paper No.
216, 2008), 9.
Computational complexity
Of the normative opacities engendered by data-driven decisions, the first and
foremost problem creating opaqueness is computational complexity.153 Algorithms are
unintelligible in the sense that the recipient of the output (e.g., a classification decision) rarely
has any concrete idea of how or why a particular classification has been arrived at from the
input in hand.154 The self-adjusting and adaptive capacity of data-driven systems renders
them intractable and unintelligible to human cognition.155 For instance, despite being a
well-specified mechanised process, it has proven extremely difficult to produce a complete
“technical recipe” or purely mechanistic explanation of how online advertisements are
personalised. The result may seem like a messy and abstract set of interrelations among several
pieces of code (e.g., neural networks) exhibiting complexity as unexpected and unplanned
behaviour.
Although the human mind determines the design and the modes of deployment of algorithms,
the nature and extent of this intervention has little bearing on the interpretability of the results.
Opacity in machine learning algorithms is a product of the high dimensionality of data,
complex code and the constantly reconfigured logic of the decision-making.
The blurring of the legislative intent
A further type of normative opaqueness is due to the difficulties in discerning the
intention of the rule-maker. In a data-driven setting, the programmer determines a system’s
responses but the user sees only the results of the software’s decisions; hence, we cannot
be sure that the normative impact is solely determined by the legislative intent. Observation,
and even analysis, of the output does not allow individuals to discern which
part of the normativity (as could be inferred from the output) is intentional and which part is
merely a spin-off in the form of unforeseen or secondary effects. These unintended and/or subtle
consequences are not the only hindrance: the normativity contained in a system is also
dependent on the affordances of the technology and the way that humans engage and
interact with it. Therefore, the outcome in a data-driven setting may not be regarded as fully
152 Matthias Leese, ‘The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory
safeguards in the European Union’ (2014) 45 (5) Security Dialogue 498. Also, see Kroll, ‘Accountable
153Algorithms are complex in at least two ways: technically and contextually.” Anton Vedder and Laurens
Naudts, ‘Accountability for the Use of Algorithms in a Big Data Environment’ (2017) 31 International Review
of Law, Computer & Technology, 206-224
154A man who abstracts a pattern from a complex of stimuli has essentially classified the possible inputs. But
very often the basis of classification is unknown, even to himself; it is too complex to be specified explicitly.”
Oliver G. Selfridge and Ulric Neisser, "Pattern Recognition by Machine," in Edward A. Feigenbaum & Julian
Feldman eds. Computers and Thought, (McGraw-Hill Book Company, 1963) 238.
155 Jenna Burrell, How the machine ‘thinks’: Understanding opacity in machine learning algorithms, Big Data
& Society, 1-12 (2016) ; Antoinette Rouvroy, ‘The end(s) of critique : data-behaviourism vs. due-process’ ;
Valeria Ferraris, Francesca Bosco, et al. Working Paper “Defining Profiling”, 2013 ; Ronald Leenes and Paul
de Hert (eds.), Reforming European Data Protection Law, Springer Netherlands (2015) ; Nicholas
Diakopoulos ‘Algorithmic Accountability: Reporting On The Investigation of Black Boxes’ ; Ian Walden, Jon
Crowcroft, Jean Bacon, ‘Responsibility & Machine Learning: Part of a Process, (October 27, 2016)
reflecting the intent of the competent body to enact rules. What further complicates the
problem is that, in many cases, what is seemingly a spin-off, an unintended consequence,
may have subtle effects that are desirable to the system controllers. In such cases,
unintended but somehow desirable consequences become confounded with the
original goals of the system, and may even become goals themselves.156
Dynamic rule-making
The final and equally insurmountable difficulty in terms of normative opaqueness is
the dynamic rule-making (probabilistic reasoning) that underlies data-driven decisions. As
seen, the normativity embedded in data-driven models is transparent neither to those who
are subjected to such techno-regulatory processes nor to those in charge of legal scrutiny.
In addition, where data-driven decisions are based on probabilistic calculations, these
norms are not stable; rather, they are the objects of persistent and ongoing
reconfiguration.157 This malleability and adaptive capacity of data-driven systems makes them
particularly attractive as a regulatory tool.
In adaptive systems, rule-making (normative statements) is often based on dynamic
correlation patterns where the decisional rule itself emerges autonomously from the streaming
data.158 As such, dynamic rule-making based on probabilistic reasoning appears as an
impediment to challenging automated decisions, in that what is regarded as the “norm”
is no longer predetermined but constantly adjusted.159 Akin to transparency problems,
inferences derived from such fluid hypotheses160 make any challenge to the normative credentials
of the system hard to formulate, for the decisional criterion remains vague and cannot be
pinned down with sufficient precision.161 Probabilistic reasoning about normative issues poses
156 Gary T. Marx, Windows Into the Soul, Surveillance and Society in an Age of High Technology, University
of Chicago Press (2016), 62
157The algorithm modifies its behavioural structure during operation. Machine learning algorithms have self-
regulative capacities. Machine learning is adept at creating and modifying rules to classify or cluster large
datasets. The algorithm modifies its behavioural structure during operation (Markowetz et al., 2014). This
alteration of how the algorithm classifies new inputs is how it learns (Burrell, 2016: 5). Training produces a
structure (e.g. classes, clusters, ranks, weights, etc.) to classify new inputs or predict unknown variables. Once
trained, new data can be processed and categorised automatically without operator intervention (Leese, 2014).
The rationale of the algorithm is obscured, lending to the portrayal of machine learning algorithms as ‘black
boxes’.” Brent Daniel Mittelstadt, et al., The ethics of algorithms: Mapping the debate, Big Data & Society,
JulyDecember 2016. Also, see Wagner, B. Algorithmic regulation and the global default: Shifting norms in
Internet technology, Etikk i praksis. Nord J Appl Ethics (2016), 513.
158 Many of the practical applications of automated decision-making exhibit the properties of “artificial
adaptive systems”systems within systemscomprising of: intelligent classification systems, visual
clustering systems, prediction systems, prototype generation systems, and systems creating network
connections between objects. Massimo Buscema and William J. Tastle (eds), ‘Intelligent Data Mining in Law
Enforcement Analytics_ New Neural Networks Applied to Real Problems’ (Springer Netherlands 2013) 14.
159 “In contrast to human-made rules, these rules for decisionmaking are induced from historical examples—
they are, quite literally, rules learned by example.” Kroll et. al. “Accountable Algorithms” (2017) University
of Pennsylvania Law Review, 679. Also see Matthias Leese, ‘The new profiling: Algorithms, black boxes,
and the failure of anti-discriminatory safeguards in the European Union’ (2014) 45 (5) Security Dialogue 501
160 Ton Jörg, New Thinking in Complexity for the Social Sciences and Humanities: A Generative,
Transdisciplinary Approach, Springer Netherlands (2011)
161 “…[o]wing to the dynamic nature of algorithms, reverse engineering can provide only momentary
snapshots of data-driven profiling practices that might not be relevant any longer at the point of discovery.”
Matthias Leese, ‘The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards
great obstacles, since it eliminates the qualitative assessment necessary for reconnection to
the real world.162
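A deliberately simple sketch can make this instability concrete. The "adaptive threshold" below is an invented toy, not any system discussed in the cited literature: its decisional criterion is re-estimated from the incoming stream, so the operative norm is never fixed and an identical input is judged differently at different moments.

```python
class AdaptiveThreshold:
    """Toy 'dynamic rule': the cutoff is re-estimated from the stream."""
    def __init__(self):
        self.scores = []

    def update(self, batch):
        self.scores.extend(batch)

    def threshold(self):
        # The rule of the moment: mean of everything seen so far.
        return sum(self.scores) / len(self.scores)

    def decide(self, score):
        return "flag" if score > self.threshold() else "pass"

model = AdaptiveThreshold()
model.update([10, 12, 11, 9])       # early traffic; threshold = 10.5
early = model.decide(14)            # 14 > 10.5 -> "flag"

model.update([20, 22, 19, 21])      # the stream drifts; threshold = 15.5
late = model.decide(14)             # the same input now -> "pass"

print(early, late)  # the decisional criterion has silently moved
```

A subject who wished to contest the "flag" decision could not point to a stable rule: by the time the challenge is formulated, the threshold that produced it no longer exists.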
As inferential statistics and machine learning techniques produce probable yet uncertain
knowledge, when statistics instead of reason de facto enters the realm of norm setting,
law loses its normative basis, at least to the extent that we associate normativity with human
action. Rights come to depend upon how distant, or not, individuals are from given targets or features.163
As complex and fluid systems with countless decision-making rules and lines of code
in operation, data-driven models inhibit holistic oversight of decision-making pathways and
outcomes.
5.3 Challenge to law as a causative enterprise
Given that part of our knowledge “we obtain direct; and part by argument”,164 in order
to understand and legally define data-driven decision-making, rather than dissecting data
mining stages and analysing the algorithms in the wild, we need to develop a holistic view
which treats data analytics in a framework bringing together perspectives, heuristics, theories
and models.165 We need a closer working partnership between the data and the animating
ideas of cause and effect, the “why” of things.166 Big data is not only about how much we
know, but also about how much we can learn, and by what means.167
Ever since Wittgenstein's and Heidegger's different but converging versions of the
Linguistic Turn, many of us have become convinced that it is impossible to grasp
any segment of reality independently of the filter of some interpretive framework
(be it a language game, a tradition, a paradigm, a conceptual scheme, a
vocabulary) and that the plurality of existing interpretive frameworks cannot be
reduced to unity without some significant loss of meaning.168
Against this backdrop, the part below develops a further look into the epistemological
flaws inherent to data-driven decision-making, which pose a challenge to the rule of law by
impairing the causal basis of law and adjudication. First, the spurious nature of data analysis
will be explained, to illustrate the point that data itself is not capable of justifying the
assumptions and the perspective underlying a certain inference of causation. And following
in the European Union’ (2014).
162 “[w]hile reasoning about the facts can (at least in principle) still be regarded as probabilistic, reasoning
about normative issues clearly is of a different nature. Moreover, even in matters of evidence reliable numbers
are usually not available so that the reasoning has to be qualitative.” Henry Prakken, ‘Logics of
Argumentation and the Law’ in H. Patrick Glenn, Lionel D. Smith (eds.), Law and the New Logics (Cambridge
University Press 2017) 3-32, 4.
163 Zygmunt Bauman et al., ‘After Snowden: Rethinking the Impact of Surveillance’ (2014).
164 “The Theory of Probability is concerned with that part which we obtain by argument, and it treats of the
different degrees in which the results so obtained are conclusive or inconclusive.” John Maynard Keynes, A
Treatise on Probability.
165 Scott E. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and
Societies, 85.
166 Steve Lohr, Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost
Everything Else.
167 Dani Rodrik, Economics Rules.
168 Alessandro Ferrara, The force of the example: explorations in the paradigm of judgment Columbia
University Press, 2008 ( emphasis added), 17.
from that, we further develop the view that causality in Big Data is a question of model-
building, which is itself a judgmental and value-laden theorisation that may also be read as a
“narrative”.
Numbers only speak for themselves: the spurious nature of correlations
in data mining
Although it is paramount to human cognition and conduct to understand the reason
behind the associations one encounters in the real world, most big data practices focus
on the potential exploitative and invasive uses of “valuable” insights, rather than the nature
and the quality of the knowledge itself.169 Since we only hear what we understand, wherever a
consequence or judgment is justified by the rhetoric that “facts never lie”, a closer look
becomes necessary to figure out whether the data objectively reveals what it purports to.
Data mining, and in particular machine learning, employs quantitative methods and statistical
testing to process data resources and identify reliable patterns, trends, and associations among
variables that describe and/or anticipate a particular process.170 As big data aims to discover
what is known to be unknown, with a view to constructing meaning out of seemingly random data
patterns,171 it engenders a shift in the centre of gravity of knowledge; that is, decision-making
increasingly becomes a process based on finding correlations and patterns, rather than on
experience and intuition. Allegedly, what researchers do is to program distributed
computers/servers so that they search over the space of correlations to discover aggregate
types which somehow differ.172 As a novel method of empirical inquiry, instead of starting with
a question, Big Data reverses the process by first running the algorithms to look for patterns,
and then retrospectively constructing hypotheses that the data has already “proven”.173 The
seeming strength and comprehensiveness of this methodology rely on the immense magnitude
of the datasets providing an oligoptic view of full resolution, and on the belief that “with enough
data, the numbers speak for themselves”.174 As such, rather than creating knowledge, Big Data
arises as a system of knowledge which itself transforms the objects of knowledge.175 Emerging as a
cartography of human lives where the map precedes the territory,176 it becomes the epistemic
169 David Chandler, “A World without Causation: Big Data and the Coming of Age of Posthumanism”,
Millennium: Journal of International Studies 2015, 119, 2 ; Rob Kitchin, Big Data, new epistemologies and
paradigm shifts’, Big Data & Society, April-June 2014: 1-12, 4.
170 Stephan Kudyba, Big Data, Mining, and Analytics, 29.
171 “A pattern is a discernible regularity in a domain that keeps reoccurring in a predictable way and that may
or may not be human made.” Tanel Kerikmäe and Addi Rull (eds.), The Future of Law and eTechnologies.
172 Erez Aiden and Jean-Baptiste Michel, Uncharted - Big Data as a Lens on Human Culture, Penguin (2013)
173 Michael Mattioli, ‘Disclosing Big Data’ Minnesota Law Review (99), 2014, 541 ; Chris Anderson, “The
End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, Wired (23 June 2008), available at:
174 Allegedly, this emerging type of data analysis is unlike the abstract and reductionist constructions of
old-school statistics, but is increasingly “populational”, claiming to observe society in its entirety (N = ALL).
Russell Walker, From Big Data to Big Profits: Success with Data and Analytics, 2015, 17. Also, see D
Chandler, A World without Causation: Big Data and the Coming of Age of Posthumanism, Millennium:
Journal of International Studies 2015, 119, footnote 10. And for an excellent examination of the birth of
statistical analysis see, Ian Hacking, The Taming of Chance (Cambridge: Cambridge University Press, 1990).
175 By many, such methodology is seen as a kind of social “alchemy” between science and pseudoscience. See,
Hubert Dreyfus, Alchemy and Artificial Intelligence (1965). Also, see Paul R. Thagard Computational
Philosophy of Science, The MIT Press (1988)
176 Jorge Luis Borges’s one-paragraph short story “On Exactitude in Science” describes a
base through which unknown regularities are discovered by seeking insights ‘born from the
data’.177
Although it may seem to be the latest development of reason, there are evident restrictions
and limitations to the methodology of extracting knowledge from patterns and correlations
identified in immensely large datasets. A correlation is a link between two variables taking
quantifiable values, such that they increase, decrease or somehow alter in synchrony.178 Some
correlations are straightforward, almost axiomatic observations: e.g., demand for flu
medicine increases in winter, and more traffic accidents take place during rain. And some
may be more subtle and sinister, like overweight persons making more spelling mistakes, while
some are simply valuable, such as the knowledge that a US citizen is more likely to register to
vote after being informed that a close friend had registered. However, a correlation does not
necessarily amount to causation, for it does not inform us about the nature of the discovered
relation. The correlation between independent and dependent variables in the analysis may be
spurious. As the well-known story tells, there may not be a causal relation between diapers
and beer: it may be equally plausible to argue that people buying diapers have kids
and therefore consume beer at home, rather than going out with friends. In such cases,
although the supposed cause and effect are related, they may in fact both depend on
a third factor. Apparently, “algorithms may be good at predicting outcomes, but predictors
are not causes.”179
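The diapers-and-beer point can be made concrete with a small simulation (a sketch with invented numbers, not data from any real retail study): a hidden third factor drives both observables, so they correlate strongly although neither causes the other. The `corr` helper is our own minimal Pearson coefficient.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical confounder: having small children at home.
has_kids = [random.random() < 0.3 for _ in range(10_000)]

# Both purchases depend only on the confounder, never on each other.
diapers = [random.gauss(5, 1) if k else random.gauss(0, 1) for k in has_kids]
beer = [random.gauss(3, 1) if k else random.gauss(1, 1) for k in has_kids]

print(round(corr(diapers, beer), 2))  # strong correlation, zero direct causation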
The meaning constructed through repeated observations over time and/or space does not
necessarily explain, but may undeniably rationalize, what would otherwise be regarded as
coincidental or unpredictable.180 The basic premise behind data analytics is the observation
of correlations along the chosen parameters, e.g. time, events, and operations. The
correlation is trusted to extend into future events by maintaining the distance or
the relation between the chosen observables. However, a correlation may be a weak
epistemological basis for prediction, and thus the so-called “truth” offered by Big Data may
turn out to be nothing more than a discursive self-intoxication.181
Certain mathematical theories concerning large datasets and randomness in general,
mythical empire in the distant past in which cartographers took their craft very seriously and strived for
perfection. In their quest to capture as much detail as possible, they drew ever-bigger maps. The map of a
province expanded to the size of a city; a map of the empire occupied a whole province. In time, even this level
of detail became insufficient and the cartographers’ guild drew a map of the empire on a 1:1 scale the size of
the empire itself. But future generations, less enamored by the art of cartography and more interested in help
with navigation, would find no use for these maps. They discarded them and left them to rot in the desert.
From Dani Rodrik, Economics Rules.
177 Baudrillard on the above Borges story: “The territory no longer precedes the map, nor survives it.
Henceforth, it is the map that precedes the territory ... it is the map that engenders the territory and if we were
to revive the fable today, it would be the territory whose shreds are slowly rotting across the map. ... The
desert of the real itself.” in Simulations and Simulacra, see Simon Malpas, ‘Postmodernism’, in Rosi Braidotti
(ed.) The history of continental philosophy, volume 7: After poststructuralism: transitions and transformations,
178 Michael Mattioli, ‘Disclosing Big Data’ Minnesota Law Review (99), 2014, 541. Also see, Viktor Mayer-
Schönberger and Kenneth Cukier, ch.1.
179 Ziad Obermeyer and Ezekiel J. Emanuel, ‘Predicting the Future - Big Data, Machine Learning, and
Clinical Medicine’.
180 A. Jacobs, “The Pathologies of Big Data” (2009) 52 Communications of the ACM 36-44.
181 Grégoire Chamayou, A Theory of the Drone
demonstrate that the past cannot be relied upon to predict the future.182 According to the
recurrence theorem in mathematics (ergodic theory), “in any deterministic system, including
chaotic systems, the future, sooner or later, will be analogous to the past (will somehow iterate).”183
Accordingly, as we deepen and prolong data analysis for real-world queries, we will see nothing
but a recurring past.184 In other words, as we try to make more far-reaching predictions based on
more exhaustive datasets, the process fails to foresee the relative unpredictability of the
world. This is due to the fact that every large enough dataset eventually presents some
regularity, not necessarily implying any predictive result. Accordingly, it may be concluded
that certain correlations appear merely because of the size of the data.185 In large enough
datasets, even if data is selected arbitrarily, certain patterns will occur when the analysis extends
long enough. With so many possible dimensions, it becomes incredibly likely that some
constructed type correlates with the outcome.186
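Calude and Longo’s claim is easy to reproduce numerically. The sketch below uses purely synthetic data and a hand-rolled `corr` helper (both our own assumptions, standard library only): an “outcome” for a small population is drawn at random alongside thousands of equally random “features”, yet some feature will almost certainly look strongly correlated with the outcome, by sheer dimensionality.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n_people, n_features = 50, 5_000  # many dimensions, few observations

# Everything is pure noise: no feature has any relation to the outcome.
outcome = [random.gauss(0, 1) for _ in range(n_people)]
features = [[random.gauss(0, 1) for _ in range(n_people)]
            for _ in range(n_features)]

best = max(abs(corr(f, outcome)) for f in features)
print(round(best, 2))  # some feature correlates strongly by pure accident
```

A data miner shown only the winning feature would report a striking “pattern”; by construction, it predicts nothing.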
Causality in Big Data: A question of model-building
“…while technology provides causes for action, law provides reasons for action.”187
Without doubt, certain correlations are useful observations owing to their practical
relevance. However, as the data itself is not capable of justifying the assumptions and the
perspective underlying a certain inference of causation, correlations have no causative
explanatory link unless narrated through a theory and implemented as a model based on that
theory. This unfolds the further epistemological problem that causality in data-driven
practices, even where it is “properly” established, is a question of model-building which is
itself a judgmental and value-laden theorisation that may also be read as a “narrative”.
In any knowledge query, the theory/model, and the heuristics and precepts it embodies, is
important as the only consistent way to make sense of the world as well as to extrapolate
beyond the inherent constraints of the observed domain.188 However, while determining
182 Cristian S. Calude and Giuseppe Longo, “The Deluge of Spurious Correlations in Big Data” Foundations of
Science, March 2016, 6
183 “… most of the times Big Data analysis is premised on the assumption that future is the sum of all possible
interactions of ‘free will,’ both on an individual as well as on an international scale.” Jonathan S.
Lockwood, The Lockwood Analytical Method for Prediction (LAMP): A Method for Predictive Intelligence
Analysis (USA: Bloomsbury Academic 2013), 3. Also see
Cristian S. Calude and Giuseppe Longo, “The Deluge of Spurious Correlations in Big Data”, 7.
184 Cristian S. Calude and Giuseppe Longo, “The Deluge of Spurious Correlations in Big Data”
185 The recurrence of correlation is a rather natural phenomenon which applies to many systems such as
seasonal cycles and their observable consequences. Cristian S. Calude and Giuseppe Longo, “The Deluge of
Spurious Correlations in Big Data”, 9; Edward R. Dewey and Edwin F. Dakin, Cycles: The Science of Prediction.
186 “Note that it is exactly the size of the data that allows our result: the more data, the more arbitrary,
meaningless and useless (for future action) correlations will be found in them.” Cristian S. Calude and
Giuseppe Longo, “The Deluge of Spurious Correlations in Big Data”, 6. Also, see Scott E. Page, The Difference:
How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, 85
187 Mireille Hildebrandt, “Technology and the end of law” in Claes, W. Devroe, and B. Keirsbilck (eds.)
Facing the limits of the law, (Berlin/Heidelberg: Springer, 2009) 443464.
188 “Theory is crucial. Serendipity may occasionally yield insight, but is unlikely to be a frequent visitor.
Without theory we make endless forays into uncharted badlands. With theory we can separate fundamental
characteristics from fascinating idiosyncrasies and incidental features. Theory supplies landmarks and
causal links, we often overlook the underlying theory and the relevant perspective, such as our
knowledge of the world or precepts about the behavioural cues and motives of others.189 For
example, take the rule of thumb that ice cream sales increase when summer comes.
Although such an inference may seem very simple, there is no reason for an alien landing on
earth from outer space not to conclude that the weather got warmer because people ate ice
cream. Put differently, nothing comes out of the data about the true cause of
the relation between the weather and the ice cream sales that could justify our perspective.
Apparently, we take for granted the underlying heuristics, or the common sense we employ,
that is, that what people eat has no bearing on the weather, and ice cream does not grow(!) in
summer. While correlations appearing in large datasets may possess insightful value from a
certain perspective, under a different theory and hypothesis they may simply be spurious,
mere coincidences.
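The alien-and-ice-cream point has a simple formal counterpart: the correlation coefficient is symmetric in its arguments, so the data alone cannot distinguish “warm weather drives sales” from the reverse. A minimal sketch with invented numbers and our own `corr` helper:

```python
import random

random.seed(2)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: sales are generated FROM temperature...
temperature = [random.gauss(20, 5) for _ in range(1_000)]
sales = [2 * t + random.gauss(0, 5) for t in temperature]

# ...but the statistic is blind to that direction.
print(corr(temperature, sales) == corr(sales, temperature))  # True
```

The causal arrow exists only in the line that generated the data, i.e. in the modeller’s theory; nothing in the computed statistic records it.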
As seen, in the realm of big data, causality is a question of model-building and thus is not
discovered but rather manufactured. Deciding which correlations have a causal basis, upon
which we may construct a theory and accordingly implement the necessary model, has no
objective or neutral criteria; therefore, patterns in Big Data are in fact a value-laden
narrative.190 The idea of discovering informational regularities (patterns) in a seemingly random
and comprehensive pile of data may, at first glance, seem in contrast with the principle of
“narrative and understanding” as the main modes of knowledge acquisition since the era of
enlightenment. However, treating “patterns” and “narratives” in such a dichotomy, as
opposing modes of knowledge acquisition, may be illusory and misguided. Since patterns
are abstract representations of physical, social or economic phenomena, be it weather
forecasts, stock fluctuations or criminal behaviour, they implicitly require a narrative in order
to be understood.191 Similarly, Rodrik sees an analogy between fables and models, implying
that models are also narratives, each describing its own tale of reality.192
In order to make a prediction or to infer a causal relationship, we first need to interpret the
guideposts, and we begin to know what to observe and where to act.” David Bollier, “The Promise and Peril of
Big Data”. Also, see John H. Holland, Hidden Order: How Adaptation Builds Complexity (Helix Books, 1995).
189 A perspective is about how we decide to handle the problem and accordingly, under which assumptions we
construct our model to eventually determine which parameters to consider and in what weight. Perspectives,
based on certain assumptions, embed knowledge about certain events and consequences which we believe to be
causal. As seen, we cannot speak of one perspective but more of a fusion of many perspectives each describing
reality through a different narrative. Scott E. Page, The Difference, 85.
190 “All causal inference presupposes some causal background assumptions, but do all such assumptions
concern causal mechanisms? It should also be recognized that mechanisms are not a magic wand for causal
inference in the social sciences. The problem in many cases is not the absence of a possible mechanism, but
insufficient evidence to discriminate between competing mechanistic hypotheses. Similarly, lazy mechanism-
based storytelling is a constant threat: having a good story is no substitute for real statistical evidence. It is not
rare for a good story about a (possible) mechanism to make people forget how important it is to test whether
such a mechanism really is in place and whether it can really account for the intended explanandum.” Stavros
Ioannidis and Stathis Psillos ‘Mechanisms, Counterfactuals, and Laws’ in Stuart Glennan and Phyllis Illari
(eds.), The Routledge Handbook of Mechanisms and Mechanical Philosophy (Routledge 2018). Also see Louise
Amoore, The Politics of Possibility (Duke University Press Books 2013) 44.
191 “… it can be argued that code itself consists of a narrative form that allows databases, collections and
archives to function at all.” David M. Berry, The Philosophy of Software - Code and Mediation in the Digital
Age, (Palgrave Macmillan 2011), 26. Also, see Cathy O'Neil, Weapons of Math Destruction,
192 Dani Rodrik, Economics Rules
situation based on the perspective that underlies our narrative.193 Otherwise, numbers only
speak for themselves and every inference might seem equally valid. For an illustrative
example, imagine that Alice has just learnt from a trustworthy source that her husband Bob is
having an affair with a woman from their weekend party group, but she has no idea who
the secret lover might be. Alice also knows that Bob is aware he is under suspicion and is
acting accordingly. As she tries to discover Bob’s secret lover by observing him at the
weekend party, she singles out Catty as the only woman from whom Bob stayed distant while
being flirtatious with the others in the usual way. It seems that the cunning husband Bob has
failed in his attempt to deceive Alice by not showing interest in Catty. Apparently, Alice has a
model or, put differently, a certain contemplation of Bob’s thinking/mindset and his possible
behaviour under suspicion. Her model is based on the assumption that Bob is acting in deceit,
and when analysed from this perspective, his conduct reveals the identity of his secret lover,
Catty. In other words, the interpretation which results in the singling out of Catty is based
on the perspective that Bob’s conduct contained a certain type of intentional “cunning”
behaviour. Obviously, however, Bob is not very cunning at all, at least not as much as Alice,
since he could not figure out that he was giving himself away. In any case, it will not be wrong to
assume that even if Bob had been more experienced in his cunning game and accordingly
had shown no sign of difference in his treatment of other women, Alice (assuming that she
is sure of the affair) would only need to observe him over a longer period and in several social
gatherings. Indeed, in a longer time series and with a multitude of data points, Alice, with the
aid of machine learning algorithms, might reach a probabilistic conclusion with regard to
Bob’s conduct with each woman in the group. Of course, there could theoretically be
husbands even more cunning, anticipating this possibility and employing further strategies of
obfuscation, requiring more complex models for analysis. The noteworthy point here is that,
depending on the model constructed by Alice about Bob’s behaviour, any conduct of Bob
may result in the failure to conceal the identity of his secret lover. It is not that the same data is
interpreted differently, but rather that different data are interpreted to reach a model-consistent
conclusion. As we may contemplate several levels of sophistication for Bob’s cunning game,
there is an inevitable indeterminacy which haunts our interpretation.194 Alice can never reach
a conclusion beyond reasonable doubt, other than some probabilistic confidence score
based on correlations whose causal reliability is unknown in the absence of additional
extraneous and independent knowledge, e.g. the knowledge of Bob’s strategy of deception.195
Theorizing, and thus narrating, a computational model through a perspective inevitably
discards a certain part of the information about the world around us, and by doing so it enables
us to reach a digitized representation of the problem space which can be manipulated by
means of algorithms (recursive functions).196 In order to assess causal value, we need to know
the range of alternatives from which a certain interpretation is derived, together with the
principles and factors which generate that range of options. Determining which of the
193 David M. Berry, The Philosophy of Software - Code and Mediation in the Digital Age, (Palgrave Macmillan
2011), 26 (“it can be argued that code itself consists of a narrative form that allows databases, collections and
archives to function at all.”). See also Cathy O'Neil, Weapons of Math Destruction.
194 Ernest Gellner; Jose Brunner, The Psychoanalytic Movement: The Cunning of Unreason, 131-2.
195 id. 134
196 David M. Berry, The Philosophy of Software - Code and Mediation in the Digital Age, (Palgrave Macmillan
interpretations are compatible with causality depends on the assumptions and the
underlying perspective.
Correlations may be relied upon as possessing a causative link only where the contemplated
model uses direct, pertinent input representing physical reality, such as shots on
goal, free-kicks, crosses and tackles in a football game, rather than indirect proxies as
substitutes to simulate processes such as human lives and behaviours.197 In the legal sense,
causality proves facts, single and more specific instances, but not probabilistic concepts such
as the risk of malintent or the propensity to default on a payment. What we want to render related,
the two ends of causality (“facts” and “consequences”), become too loose to tie in and to be
handled by law. When risks appear as an accumulation of probabilities which further become
the basis of certain decisions, although the process may seem to a certain extent rule-
adherent, the norms/law and the input they act upon may no longer be causatively related
to the facts in the conventional sense. What is established through the algorithms is not truth
or fact in the legal sense but a probability score obtained through the statistical analysis of
seemingly irrelevant traits. The domain and the problem space become so abstract and complex,
in ways the law is not designed to deal with, that a review of the applicable rules and their
relevance becomes impossible. Law cannot work through abstract and elusive concepts, or even
undefined risks, such as a Bayesian probability about one’s being a terrorist, propensity to lie, or
inclination to speeding. Although in modern legal systems the evaluation of certain aspects
and elements is to some extent probabilistic (e.g. beyond reasonable doubt)198, machine
learning relies upon Bayesian “confidence” scores which make the probability far too
conditional; that is, it assumes that two or more elements are in fact correlated without “real”
causal knowledge of their relation.
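The worry about Bayesian “confidence” scores can be illustrated with a textbook base-rate calculation (the numbers here are invented purely for illustration): even a classifier that is right 99% of the time about a rare trait is almost always wrong about the individuals it actually flags.

```python
# Invented numbers for illustration only.
prior = 1 / 10_000           # P(trait): one person in ten thousand
sensitivity = 0.99           # P(flagged | trait)
false_positive_rate = 0.01   # P(flagged | no trait)

# Bayes' theorem: P(trait | flagged)
p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_flagged

print(round(posterior, 3))  # ≈ 0.01: a "99% accurate" flag is ~99% wrong
```

The score attached to any single flagged individual is dominated by the rarity of the trait, not by anything known about that individual, which is precisely why such a figure cannot stand in for legally cognisable fact.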
Apart from the extreme limitation of one’s life choices, an epistemology establishing a so-called
causation between such abstract consequences and a multitude of data points through
aggregation and computational recursive analysis of data (the insights of which may not be
understood through direct human cognition) is the demise of law as a causative enterprise.
As will be explained in the coming parts, such a break in the causation chain is also a serious
blow to human autonomy, since individuals can no longer contest the result through
argumentation. It also signals the end of the era of enlightenment, for human intellect ceases
to be the measure of everything where a systemic and axiomatic rationality takes over.
197 “[i]n a 1947 article, “Measurement Without Theory,” by Tjalling Koopmans, a Dutch-American economist
who later won a Nobel Prize. The Koopmans article was a critique of the hard-line “empiricist” approach to the
study of business cycles back then.” Steve Lohr, Data-ism. “[Therefore] lack of realism [in a model] is not a
good criticism on its own. To use an example from Milton Friedman again, a model that included the eye colour
of the businesspeople competing against each other would be more realistic, but it would not be a better one”
Dani Rodrik Economics Rules
198 An early US case, People v. Risley (214 N.Y. 75 (1915)), illustrates the difficulty with probabilistic
evidence in that it relates only to future events, not to the past. The court decided that establishing that the
chance of the same combination of defective letters appearing in a typewriter other than the one belonging to
the defendant was one in four billion did not mean that the defendant was guilty with a one-in-four-billion
error margin. The Court held that “[t]he fact to be established in this case was not the probability of a future
event, but whether an occurrence asserted by the People to have happened had actually taken place.” On the
US judgments, see Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law (Springer
Science+Business Media 2009) 3, 11-15. Moreover, statistical reasoning may not be able to properly evaluate
evidence which is different in nature or derived from different epistemologies; e.g., speaking of a crime, a
motive and a potential alibi are facts which may be represented with different data types and ontologies.
The collapse of the causative link may also be seen as a big leap towards the dehumanisation
of the social, economic and political texture of our lives.
5.3 Demise of law as a moral enterprise
Impairment of human autonomy
Data-driven models implementing rules or legal frameworks impair the rule of law by
undermining the moral basis of the legal system on many fronts. The arguments within this
context primarily relate to the notions of human autonomy and dignity as the higher
principles of the European legal and political order since the Enlightenment.
Where technology is used to steer human conduct with a view to ensuring compliance or
the implementation of certain norms, not only does the normative character of law suffer
erosion, but so do human autonomy and the moral grounds that the very norms are predicated
upon. Especially where an ex-ante regulatory approach is taken (leaving no room for breach,
or choice as to the way of compliance), our thinking of law departs from “should/should not” to
“can/cannot”, meaning that what is not legal cannot be done either.199 Techno-regulation may
leave room for dissent, but can also take away any freedom to deviate from the embedded
norm.200 Compare, for instance, the London Underground ticket barriers with those in Paris
or Brussels. In the former, passing the barriers without a valid ticket is not impossible;
people can jump over the barrier, but doing so is a flagrant transgression of the norm. Jumping
over the barrier dramatises the choice between morality and deviance.201 Entering the metro
in Paris and certain stations in Brussels without a valid ticket is made impossible by means
of a tall tourniquet. The difference may seem trivial, but taking away personal choice by
rendering certain behaviour impossible may lead to a weakening of self-control and may have
a de-moralising effect.202 Brownsword argues that human dignity implies that people should
be able to choose the right actions, which implies also being able to choose the wrong
actions. In a moral community, people act in line with the norms not only because they
have a moral obligation (or must, according to the techno-norms), but especially because they
subscribe to the norm: "Ideal-typically, the fully moral action will involve an agent doing the
right things (as a matter of act morality) for the right reasons (as a matter of agent morality)."203
Such erosion of human autonomy is aggravated in the case of data-driven DM models, where the
norms are not stable but rather are the objects of persistent and ongoing change and
199 While ex-post methodologies discourage non-compliance or improve the chances of detection, without
eliminating individual choice, the ex-ante approach overrides the individual as an intentional agent and
automatically imposes the desired state or pre-empts certain behaviour. Ian Kerr and Jessica Earle,
‘Prediction, Preemption, Presumption: How Big Data Threatens Big Picture Privacy’ 66 STAN. L. REV.
ONLINE 65 (September 3, 2013).
200 Ronald Leenes “Framing techno-regulation: an exploration of state and non-state regulation by technology”,
Legisprudence, Vol. 5, No. 2; K. Yeung, ‘Can we Employ Design-Based Regulation While Avoiding Brave
New World?’ (2011) 3(1) Law, Innovation and Technology 1, 2.
201 K. Yeung, “Towards an Understanding of Regulation by Design”, in R. Brownsword and K. Yeung (eds.),
Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Hart Publishing,
Oxford 2008) 98.
202 D.J. Smith, “Changing Situations and Changing People” in A. von Hirsch, D. Garland and A. Wakefield
(eds.), Ethical and Social Perspectives on Situational Crime Prevention (Hart Publishing, Oxford 2000)
203 R. Brownsword, “Code, Control, and Choice: Why East is East and West is West” (2005) 25(1) Legal
Studies 1-21, 17.
reconfiguration, making moral anchoring less possible. This malleability and “fluid” nature
of data-driven systems make them particularly attractive as a regulatory tool, but very
unattractive from the perspective of agent morality, eliminating the opportunities to act in a
moral way by one’s own will and thus undermining the conditions required for a flourishing
moral community. As explained above, although data-driven DM may cure the giddiness of
rule-based systems and may ensure “efficient” rule compliance and execution, such positive
gains are achieved at the expense of severe damage to individual autonomy. The adaptive and
pre-emptive capacity of data-driven systems deprives individuals of the ability to reason with
the rules.
Human beings unconsciously adapt to the complexity of this regulatory mode by
giving up on the attempt to understand it, and instead adopting behaviour which turns out
to be compliant, though based not necessarily on reasoning or rational choice but on mere
conformity.204 Moreover, the legitimacy of techno-regulatory systems may depend not only on
the scope of individual choice they permit, but also on the proportionality of their harmful
consequences when compared with efficiency gains.
When smart environments start meddling with us of their own accord, secretly persuading
us to change our behaviour, tracking and monitoring our actions on the internet, and
registering where we find ourselves at what times, it feels as if we are losing our grip on
what happens to us. Our boundaries appear to evaporate: externally, in our environments,
and internally, within our own bodies, it seems that technologies are running the show.205
The demise of adjudication, argumentation and contestation
The application of Data Science techniques in the legal domain has been described
as an important factor that may change how legal services operate as well as the way the
judiciary functions.206 The core idea here is that data-driven legal analytics trained on data
extracted from 'legal sources' such as case law and even doctrinal research will allow the
construction of systems that predict legal effects and consequences with high precision,
and hence render lawyers less relevant and the process of adjudication almost idle. Some
even believe that a “legal singularity” is near because the "…accumulation of massively more
data and dramatically improved methods of inference make legal uncertainty obsolete".207
Whatever one may think of the feasibility of this, it may be the case that the application of data
analytics to existing case law may produce a model that is able to accurately predict the
outcome of every case that falls within the boundaries of the training set.208 Indeed, the
204 “The invisible inferences of personalized risks and preference profiles will increasingly afford seamless,
unobtrusive and subliminal adaptations of the environment to cater to a person’s inferred preferences and to
target, include or exclude her on the basis of inferred risks.” Hildebrandt, M. 2015. Smart Technologies and
the End(s) of Law. Cheltenham: Edward Elgar, 9.
205 Peter-Paul Verbeek, “Subject to Technology: On Autonomic Computing and Human Autonomy” (2011) 3(1) Law, Innovation and Technology 27.
206 See, for instance, Richard and Daniel Susskind, The Future of the Professions (Oxford University Press,
2015), Daniel Martin Katz, ‘Quantitative Legal Prediction or How I Learned to Stop Worrying and Start
Preparing for the Data Driven Future of the Legal Services Industry’, Emory Law Journal 62 (2013).
207 Alarie, Benjamin, “The Path of the Law: Toward Legal Singularity” (May 27, 2016). Available at SSRN:
208 This is a fundamental problem in AI and Law, known as the frame problem. Within the boundaries of the
knowledge of the system, its performance may be good, but the system will not be able to handle cases outside
performance of systems trained on a set of cases may be good in the sense of accurately
predicting the outcome of a case relative to its body of knowledge (the training set). The
outcomes of cases not covered by the training set are speculative and it is unknown whether
these outcomes are 'legally correct'.209
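The closed-world limitation described above can be illustrated with a deliberately simplified sketch (the features, outcomes and nearest-neighbour method are invented for illustration, not drawn from any real legal-analytics system): a predictor trained on a handful of decided cases returns an equally confident answer for a case far outside anything it has seen, with no signal that the case falls outside its frame of knowledge.

```python
# Minimal sketch (hypothetical data) of the closed-world assumption:
# the predictor answers every query, however far outside its training
# set, and gives no indication that it is extrapolating.

def predict(training_cases, new_case):
    """Return the outcome of the nearest known case (1-nearest-neighbour)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, outcome = min(training_cases, key=lambda c: distance(c[0], new_case))
    return outcome

# Invented (features, outcome) pairs standing in for decided cases.
training = [
    ((1.0, 0.0), "claim upheld"),
    ((0.9, 0.1), "claim upheld"),
    ((0.0, 1.0), "claim dismissed"),
]

# A case close to the training data: the answer is at least plausible.
print(predict(training, (0.95, 0.05)))  # -> claim upheld

# A case far outside anything observed: the system answers just as
# confidently, although the prediction has no warrant here.
print(predict(training, (50.0, -7.0)))  # -> claim upheld
```

Nothing in the system distinguishes the second answer from the first; detecting that a case falls outside the frame would require knowledge the system, by construction, does not have.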
In other words, the model can retrospectively predict the outcome of legal disputes only
within a very limited understanding of what the law is about (reducing unpredictability by
offering legal certainty). This may seem unproblematic and even laudable, as it may help the
under-privileged to access legal advice and facilitate extra-judicial settlement of disputes.
However, as Hildebrandt and others have rightly pointed out, "law must be understood as a
coherent web of speech acts that inform the consequences of our actions, itself informed by
the triple tenets of legal certainty, justice and instrumentality that hold together jurisdiction
(the force of law), community (even if between strangers) and instrumentality (the policy
objectives of the democratic legislator)".210 A perfect simulation (the magical algorithm)
may render the law fully predictable, but it will still lack the necessary transparency and moral
accountability in the sense of being scrutinable, engageable, and consequently rule of law
compliant.211 For being an affront to man’s dignity as a responsible agent, replacing the
adjudication process with predictable outcomes is a significant impairment to the rule of law, for
it undermines the internal morality of the legal system: “a procedural version of natural law”.212
'Mathematical simulation of legal judgement', or of a decision in general, should not be mistaken
for the legal judgment itself.213 Where decisions are not contestable through argumentation,
there exists no authority to morally defend and justify the decision. Even if we knew that the
analytics provides the best possible solution and accurately predicts the outcome of every
possible dispute in advance, we would still need to render such a decision intelligible so that it
is transparent enough to be contested. Although such a magical algorithm appears to relieve
us from the burden of arguing cases before the courts, this does not in fact suppress the
need for argumentation as a moral justification process. Delivery of an explanation to
these boundaries, nor will it generally be able to detect that a case actually falls outside its frame of
knowledge/reference. It operates on a closed world assumption. Law, however, is a dynamic open system,
rendering potentially any case outside the system's perimeters. See Leenes 1998.
209 The system can thus handle 'clear cases' as they are called in legal theory (see Dworkin), not 'hard cases',
which can be taken to mean here cases that fall outside the frame of the system, or cases that are made to fall
outside the frame by contestation. Nor does it notice that a hard case has been presented to it. As a result of
contestation, any case, including seemingly clear cases (or cases that are treated as clear by the system), may be
turned into hard ones, for which the system may produce the wrong result. Moreover, even a perfect system
(the magical algorithm, the point of legal singularity) will have diminishing returns, as the confidence of the
system will be impaired by the decreased number of new cases to observe due to the decreased need for
adjudication. However, if seen from the perspective of cybernetics, this positive feedback may be offset in that
the system’s loss of reliability over time will result in more disputes taken to court, eventually pushing the
system back to perfection with the introduction of fresh data. Accordingly, instead of replacing the judiciary,
predictive analytics may be used as a tool to monitor and audit actual court decisions. “Confidence in these
models increases by testing alternatives that would disrupt conventional wisdom.” Adam Samaha, “Judicial
Transparency in an Age of Prediction”, 13 (italics added).
210 Hildebrandt 2017
211 Adam Samaha, "Judicial Transparency in an Age of Prediction" (University of Chicago Public Law & Legal
Theory Working Paper No. 216, 2008).
212 Lon L. Fuller, The Morality of Law (New Haven: Yale University Press, 1969), 162.
213 Hildebrandt 2017
substantiate any decision is crucial in obtaining the necessary acceptance and endorsement
from the individuals who are subject to the system. Adjudication not only provides redress
but also has a connotation of morality, through explanations that render the outcome
normatively acceptable. The idea of predictive judgment, which eliminates the need for an
adjudicatory process, discards this moral signalling function of law as to the legally compliant
course of conduct.
6. Conclusion: Conflicts to paradoxes
If there were to be a Grundnorm in the autopoietic legal system, that would be the
The pervasion of data-driven systems is indicative of our current and future dependence
on technologies incorporating, articulating and amplifying computational and calculative
rationalities,215 linking ends to means in novel and humanly unintelligible ways.216 As the Big
Data eco-system uses symbolic sets of discrete data to represent reality in a form which may
be resampled, transformed, and filtered endlessly, it opens the way for novel explorations of
the interaction between the worlds of “knowledge” and “power”, and of “description” and
“decision”.217
Counting, calculating, accounting and eventually computing, a hectic obsession that began
with domestication and civilization, has reached the point where we turn blind to almost
anything that falls beyond or outside of our measuring capacity.218 Owing to the all-
encompassing integration required by modern systems, the social complexity we live in
dictates a paradigm where knowledge is limited without measurement.219 Such “neutral” and
214 Andreas Philippopoulos-Mihalopoulos, Niklas Luhmann: Law, Justice, Society (Routledge, 2010).
215 Berry notes that this rationality is in many cases also a privatized one. David M. Berry, Critical
Theory and the Digital (Bloomsbury Academic, 2014), 38.
216 “Today we live in a world of technical beings, whose function and operation are becoming increasingly
interconnected and critical to supporting the lifeworld that we inhabit… Without these technologies in place
our postmodern financialized economies would doubtlessly collapse resulting in a crisis of immense
proportions.” David M. Berry, Critical Theory and the Digital (Bloomsbury Academic, 2014), 37. Also see
J. Weizenbaum, Computer Power and Human Reason: From Judgement to Calculation (London: Penguin
Books, 1976), 236; Jonathan Roberge and Robert Seyfert, “What are algorithmic cultures?” in Robert Seyfert
and Jonathan Roberge (eds.), Algorithmic Cultures: Essays on Meaning, Performance and New Technologies (2016);
Peter K. Manning, The Technology of Policing: Crime Mapping, Information Technology, and the Rationality of
Crime Control (New York: New York University Press, 2011).
217 “A description can thus be assimilated into a story told by one person or by a group of people, a story
sufficiently stable and objectified to be used again in different circumstances, and especially to support
decisions, for oneself or others.” Alain Desrosières, The Politics of Large Numbers: A
History of Statistical Reasoning (trans. Camille Naish).
218 Frank George, Machine Takeover: The Growing Threat to Human Freedom in a Computer Controlled
Society (1977), 6.
219Thinking of measures and statistical patterns as explanatory per se became widely popular already in the
19th century. It is strongly connected to intellectuals such as Broussais, Condorcet, Quetelet and Comte who
advanced the project of empirical moral science, which likewise gave birth to sociology.” Karoline Krenn,
“Markets and Classifications - Constructing Market Orders in the Digital Age: An Introduction” in: Historical
Social Research 42 (2017), 1, pp. 7-22. DOI: Also, see John
Zerzan, Why hope?: the stand against civilization. (Port Townsend, WA: Feral House, 2015) ; John M.
Henshaw, Does Measurement Measure Up? How Numbers Reveal and Conceal the Truth, (Baltimore: The
prevailing understanding of data analytics and technology is rooted in the political philosophy
of contemporary modern societies, which relies on a distinction between politics and technology:
while the former is allegedly based on values, the latter embodies scientific knowledge.220
* * *
The problem with the emerging data-driven epistemology is that the kind of knowing it
suggests is not what we aim for or desire, but simply what the technology allows us.221 Or as
Berry puts it: “subtractive methods of understanding reality (episteme) produce new
knowledges and methods for the control of reality (techne).”222
Mathematical thinking behind Big Data is coercive in the sense that it is totalizing, and
ambiguity is anathema, leaving no room for moral hesitation.223 The calculative rationalities
imply a relationship which is precisely that of power, in that “everything in human life that does
not lend itself to mathematical treatment must be excluded.”224,225 This is the point of absolute
falsehood and absolute truth, a state of freedom from all contradictions: “[i]n any
purely logical system there was no room for a single inconsistency. If one could ever arrive at
‘2 + 2 = 5’ then it would follow that ‘4 = 5’, and ‘0 = 1’, so that any number was equal to 0, and
so that every proposition whatever was equivalent to ‘0 = 0’ and therefore true.”226 Apparently,
a claim to reality reached in this manner227 leaves no meaningful distinction between true and
false, overriding Hegelian dialectics for the sake of consistency.228
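The arithmetic in the quotation above is an instance of the classical principle of explosion: from a single inconsistency, every equation, and with it every proposition of the system, becomes derivable. A minimal sketch of the chain:

```latex
\begin{align*}
2+2=5 \ \text{and}\ 2+2=4 &\implies 4=5 && \text{(transitivity of equality)}\\
&\implies 0=1 && \text{(subtract 4 from both sides)}\\
&\implies 0=n && \text{(multiply both sides by } n\text{)}\\
&\implies m=n && \text{(for any } m,n\text{)}
\end{align*}
```

so every equation collapses into $0=0$, and every proposition whatever counts as true.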
Data-driven algorithmic processes increasingly crystallize and re-embody norms and values
within the form of an instrumentalized rationality. The data-driven instrumental reason converts
each dilemma, conflict or antagonism, however material and fundamental, into a mere
paradox229 that can then be unravelled by the application of logic, substituting all conflicting
Johns Hopkins University Press, 2006).
220 Andrew Feenberg, “Critical Theory of Technology” in J. K. B. Olsen et al. (eds.), A Companion to the
Philosophy of Technology (Blackwell Publishing, 2009), 149. Also, see Max Horkheimer,