"Big Nudging" - Ill-Designed for Problem Solving
He who has large amounts of data can manipulate people in subtle ways. But even benevolent decision-
makers may do more wrong than right, says Dirk Helbing.
Proponents of Nudging argue that people do not make optimal decisions and that it is therefore necessary to help them. This school of thought is known as paternalism. However, Nudging does not take the path of informing and persuading people. Instead, it exploits psychological weaknesses in order to steer us towards certain behaviours; in other words, we are tricked. The scientific approach underlying Nudging is called "behaviorism", which has long been out of date.
Decades ago, Burrhus Frederic Skinner conditioned rats, pigeons and dogs by rewards and punishments (for
example, by feeding them or applying painful electric shocks). Today, there are attempts to condition people in similar ways. Instead of a Skinner box, we live in a "filter bubble": our thinking is steered with personalized information. With personalized prices, we may even be punished or rewarded, for example, for
(un)desired clicks on the Internet. The combination of Nudging with Big Data has therefore led to a new
form of Nudging that we may call "Big Nudging". The increasing amount of personal information about us,
which is often collected without our consent, reveals what we think, how we feel and how we can be
manipulated. This insider information is exploited to manipulate us into making choices that we would otherwise not make: to buy overpriced products or products we do not need, or perhaps to give our vote to a certain political party.
However, Big Nudging is not suited to solving many of our problems. This is particularly true for the complexity-related challenges of our world. Although some 90 countries already use Nudging, it has not reduced our societal problems - on the contrary. Global warming is progressing. World peace is fragile, and terrorism is on the rise. Cybercrime is exploding, and the economic and debt crises remain unsolved in many countries. There is also no solution to the inefficiency of financial markets, as Nudging guru Richard Thaler recently admitted. In his view, if the state were to control financial markets, this would rather aggravate the problem.
But why, then, should one try to control society, which is even more complex than a financial market, in a top-down way? Society is not a machine, and complex systems cannot be steered like a car. This can be understood by considering another complex system: our bodies. To cure diseases, one needs to take the right medicine at the right time in the right dose. Many treatments also have serious side effects and interactions. The same, of course, is to be expected for social interventions by Big Nudging. Often it is not clear in advance what would be good or bad for society. Around 60 percent of scientific results in psychology are not reproducible. Big Nudging is therefore likely to cause more harm than good.
Furthermore, there is no measure that is good for all people. For example, in recent decades, we have seen
food advisories changing all the time. Many people also suffer from food intolerances, which can even be
fatal. Mass screenings for certain kinds of cancer and other diseases are now being viewed quite critically,
because the side effects of wrong diagnoses often outweigh the benefits. Therefore, if one decided to use Big
Nudging, a solid scientific basis, transparency, ethical evaluation and democratic control would be crucial. The measures taken would have to guarantee statistically significant improvements, and the side
effects would have to be acceptable. Users should be made aware of them (in analogy to a medical leaflet),
and the treated persons would have to have the last word.
In addition, applying one and the same measure to the entire population would not be good. But far too little
is known to take appropriate individual measures. Not only is it important for society to apply different
treatments in order to maintain diversity, but correlations (regarding what measure to take in what particular
context) matter as well. For the functioning of society it is essential that people take on different roles that fit the respective situations they are in. Big Nudging is far from being able to deliver this.
Current Big-Data-based personalization tends to create new problems such as discrimination. For instance, if we make health insurance rates dependent on certain diets, then Jews, Muslims and Christians, women and men will have to pay different rates. Thus, a host of new problems arises.
Richard Thaler therefore never tires of emphasizing that Nudging should only be used in beneficial ways. As a prime example of how to use Nudging, he mentions a GPS-based route guidance system. This, however, is turned on and off by the user. The user also specifies the respective goal. The digital assistant then offers several alternatives, between which the user can freely choose. After that, the digital assistant supports the user as well as it can in reaching the goal and in making better decisions. This would certainly
be the right approach to improve people's behaviours, but today the spirit of Big Nudging is quite different
from this.
Digital Self-Determination by Means of a “Right to a Copy”
by Ernst Hafen
Europe must guarantee citizens a right to a digital copy of all data about them (Right to a Copy), says Ernst
Hafen. A first step towards data democracy would be to establish cooperative banks for personal data that
are owned by the citizens rather than by corporate shareholders.
Medicine can profit from health data. However, access to personal data must be controlled by the persons (the data subjects) themselves. The “Right to a Copy” forms the basis for such control.
In Europe, we like to point out that we live in free, democratic societies. We have almost unconsciously
become dependent on multinational data firms, however, whose free services we pay for with our own data.
Personal data — which is now sometimes referred to as a “new asset class” or the oil of the 21st Century —
is greatly sought after. However, thus far nobody has managed to extract the maximum use from personal
data because it lies in many different data sets. Google and Facebook may know more about our health than
our doctor, but even these firms cannot collate all of our data, because they rightly do not have access to our
patient files, shopping receipts, or information about our genomic make-up. In contrast to other assets, data
can be copied with almost no associated cost. Every person should have the right to obtain a copy of all their
personal data. In this way, they can control the use and aggregation of their data and decide themselves
whether to give access to friends, another doctor, or the scientific community.
The emergence of mobile health sensors and apps means that patients can contribute significant medical
insights. By recording health information on their smartphones, such as medical indicators and the side effects of medications, they supply important data which make it possible to observe how treatments are
applied, evaluate health technologies, and conduct evidence-based medicine in general. It is also a moral
obligation to give citizens access to copies of their data and allow them to take part in medical research,
because it will save lives and make health care more affordable.
European countries should secure the digital self-determination of their citizens by enshrining the
“Right to a Copy” in their constitutions, as has been proposed in Switzerland. In this way, citizens can use
their data to play an active role in the global data economy. If they can store copies of their data in non-
profit, citizen-controlled, cooperative institutions, a large portion of the economic value of personal data
could be returned to society. The cooperative institutions would act as trustees in managing the data of their
members. This would result in the democratization of the market for personal data and the end of digital
dependence.
Democratic Digital Society
Citizens must be allowed to actively participate
In order to deal with future technology in a responsible way, it is necessary that each one of us can
participate in the decision-making process, argues Bruno S. Frey from the University of Basel.
How can responsible innovation be promoted effectively? Appeals to the public have little, if any, effect if
the institutions or rules shaping human interactions are not designed to incentivize and enable people to meet
these requests.
Several types of institutions should be considered. Most importantly, society must be decentralized,
following the principle of subsidiarity. Three dimensions matter.
Spatial decentralization consists in vibrant federalism. The provinces, regions and communes must
be given sufficient autonomy. To a large extent, they must be able to set their own tax rates and
govern their own public expenditure.
Functional decentralization according to area of public expenditure (for example education, health, environment, water provision, traffic, culture, etc.) is also desirable. This concept has been developed
through the proposal of FOCJ, or “Functional, Overlapping and Competing Jurisdictions”.
Political decentralization relates to the division of power between the executive (government), the legislature (parliament) and the courts. Public media and academia should serve as additional pillars.
These types of decentralization will continue to be of major importance in the digital society of the future.
In addition, citizens must have the opportunity to directly participate in decision-making on particular issues
by means of popular referenda. In the discourse prior to such a referendum, all relevant arguments should be
brought forward and stated in an organized fashion. The various proposals about how to solve a particular
problem should be compared and narrowed down to those which seem most promising, and integrated as far as possible during a mediation process. Finally, a referendum needs to take place, which serves to
identify the most viable solution for the local conditions (viable in the sense that it enjoys a diverse range of
support in the electorate).
Nowadays, on-line deliberation tools can efficiently support such processes. This makes it possible to
consider a larger and more diverse range of ideas and knowledge, harnessing “collective intelligence” to
produce better policy proposals.
Another way to implement the ten proposals would be to create new, unorthodox institutions. For example, it
could be made compulsory for every official body to take on an “advocatus diaboli”. This lateral thinker
would be tasked with developing counter-arguments and alternatives to each proposal. This would reduce the
tendency to think along the lines of “political correctness” and ensure that unconventional approaches to the problem are also considered.
Another unorthodox measure would be to choose among the alternatives considered reasonable during the
discourse process using random decision-making mechanisms. Such an approach increases the chance that
unconventional and generally disregarded proposals and ideas would be integrated into the digital society of
the future.
Bruno S. Frey
Bruno Frey (born 1941) is an academic economist and Permanent Visiting Professor at the University of Basel
where he directs the Center for Research in Economics and Well-Being (CREW). He is also Research
Director of the Center for Research in Economics, Management and the Arts (CREMA) in Zurich.
Democratic Technologies and Responsible Innovation
When technology determines how we see the world, there is a threat of misuse and deception. Thus,
innovation must reflect our values, argues Jeroen van den Hoven.
Germany was recently rocked by an industrial scandal of global proportions. The revelations led to the
resignation of the CEO of one of the largest car manufacturers, a grave loss of consumer confidence, a
dramatic slump in share price and economic damage for the entire car industry. There was even talk of
severe damage to the “Made in Germany” brand. The compensation payments will be in the range of billions
of euros.
The background to the scandal was that VW and other car manufacturers had used manipulative
software which could detect the conditions under which the environmental compliance of a vehicle was
tested. The software algorithm altered the behavior of the engine so that it emitted fewer pollutant exhaust
fumes under test conditions than in normal circumstances. In this way, it cheated the test procedure: the full reduction of emissions occurred only during the tests, not in normal use.
Similarly, algorithms, computer code, software, models and data will increasingly determine what we see in
the digital society, and what our choices are with regard to health insurance, finance and politics. This brings
new risks for the economy and society. In particular, there is a danger of deception.
Thus, it is important to understand that our values are embodied in the things we create. Otherwise, the
technological design of the future will determine the shape of our society (“code is law”). If these values are
self-serving, discriminatory or contrary to the ideals of freedom and personal privacy, this will damage our
society. Thus, in the 21st Century we must urgently address the question of how we can implement ethical
standards technologically. The challenge calls for us to “design for value”.
If we lack the motivation to develop the technological tools, science and institutions necessary to align the
digital world with our shared values, the future looks very bleak. Thankfully, the European Union has
invested in an extensive research and development program for responsible innovation. Furthermore, the EU
countries which passed the Lund and Rome Declarations emphasized that innovation needs to be carried out
responsibly. Among other things, this means that innovation should be directed at developing intelligent
solutions to societal problems, which can harmonize values such as efficiency, security and sustainability.
Genuine innovation does not involve deceiving people into believing that their cars are sustainable and
efficient. Genuine innovation means creating technologies that can actually satisfy these requirements.
Digital Risk Literacy
Technology needs users who can control it
Rather than letting intelligent technology diminish our brainpower, we should learn to better control it, says
Gerd Gigerenzer – beginning in childhood.
The digital revolution provides an impressive array of possibilities: thousands of apps, the Internet of Things,
and almost permanent connectivity to the world. But in the excitement, one thing is easily forgotten:
innovative technology needs competent users who can control it rather than be controlled by it.
Three examples:
One of my doctoral students sits at his computer and appears to be engrossed in writing his dissertation. At
the same time his e-mail inbox is open, all day long. He is in fact waiting to be interrupted. It's easy to
recognize how many interruptions he had in the course of the day by looking at the flow of his writing.
An American student writes text messages while driving:
"When a text comes in, I just have to look, no matter what. Fortunately, my phone shows me the text as a
pop up at first… so I don't have to do too much looking while I'm driving." If, at a speed of 50 miles per
hour, she takes only 2 seconds to glance at her cell phone, she's just driven 48 yards "blind". That young
woman is risking a car accident. Her smartphone has taken control of her behavior – as is the case for the 20
to 30 percent of Germans who also text while driving.
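A quick back-of-the-envelope check of that figure, sketched here in Python using standard unit conversions (1 mile = 1,760 yards, 1 hour = 3,600 seconds), confirms the order of magnitude:

```python
# Rough check of the "driving blind" distance: a 2-second glance at 50 miles per hour.
speed_mph = 50
glance_seconds = 2
yards_per_second = speed_mph * 1760 / 3600    # 1 mile = 1760 yards, 1 hour = 3600 s; ~24.4 yd/s
blind_distance_yards = yards_per_second * glance_seconds
print(round(blind_distance_yards, 1))         # ~48.9 yards driven "blind", roughly the 48 yards quoted above
```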
During the parliamentary elections in India in 2014, the largest democratic election in the world with over
800 million potential voters, there were three main candidates: N. Modi, A. Kejriwal, and R. Gandhi. In a
study, undecided voters could find out more information about these candidates using an Internet search
engine. However, the participants did not know that the search results had been manipulated: for one group,
more positive items about Modi popped up on the first page and negative ones later on. The other groups
experienced the same for the other candidates. This and similar manipulative procedures are common
practice on the Internet. It is estimated that for candidates who appear on the first page thanks to such
manipulation, the number of votes they receive from undecided voters increases by 20 percentage points.
In each of these cases, human behavior is controlled by digital technology. Losing control is nothing new,
but the digital revolution has increased the possibility of that happening.
What can we do? There are three competing visions. One is techno-paternalism, which replaces (flawed)
human judgment with algorithms. The distracted doctoral student could continue reading his emails and use
thesis-writing software; all he would need to do is input key information on the topic. Such algorithms would
solve the annoying problem of plagiarism scandals by making them an everyday occurrence.
Although such thesis-writing software is still in the domain of science fiction, human judgment is already being replaced by computer
programs in many areas. The BabyConnect app, for instance, tracks the daily development of infants –
height, weight, number of times it was nursed, how often its diapers were changed, and much more – while
newer apps compare the baby with other users' children in a real-time database. For parents, their baby
becomes a data vector, and normal discrepancies often cause unnecessary concern.
The second vision is known as "nudging". Rather than letting the algorithm do all the work, people are
steered in a particular direction, often without being aware of it. The experiment on the elections in India is
an example of that. We know that the first page of Google search results receives about 90% of all clicks,
and half of these are the first two results. This knowledge about human behavior is taken advantage of by
manipulating the order of results so that the positive ones about a particular candidate or a particular
commercial product appear on the first page. In countries such as Germany, where web searches are
dominated by one search engine (Google), this leads to endless possibilities to sway voters. Like techno-
paternalism, nudging takes over the helm.
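As a rough illustration of why ordering matters so much, here is a toy calculation in Python. It uses only the two figures quoted above (about 90% of clicks stay on the first page, and roughly half of those go to the first two results); how the remaining clicks are spread over other positions is an assumption made purely for illustration.

```python
# Toy model of positional click bias. Quoted figures: ~90% of clicks on page one,
# about half of those on the first two results. The spread of the remaining clicks
# (8 further page-one links, ~50 links on later pages) is assumed for illustration only.
page_one_share = 0.90
top_two_share = 0.5 * page_one_share                 # ~45% of all clicks
rest_of_page_one_share = page_one_share - top_two_share

share_per_result = {
    "top two results": top_two_share / 2,
    "rest of page one": rest_of_page_one_share / 8,
    "later pages": (1 - page_one_share) / 50,
}

searches = 1_000_000
for position, share in share_per_result.items():
    print(f"{position}: ~{int(searches * share):,} expected clicks per result")

# Moving a flattering article from a later page into the top two raises its expected exposure
# from roughly 2,000 to roughly 225,000 clicks, without changing the article itself.
```

Whatever the exact spread, it is this steep asymmetry between positions that such nudging exploits.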
But there is a third possibility. My vision is risk literacy, where people are equipped with the competencies
to control media rather than be controlled by it. In general, risk literacy concerns informed ways of dealing
with risk-related areas such as health, money, and modern technologies. Digital risk literacy means being
able to take advantage of digital technologies without becoming dependent on or manipulated by them. That
is not as hard as it sounds. My doctoral student has since learned to check his email only three times a day, in the morning, at noon, and in the evening, so that he can work on his dissertation without constant
interruption.
Learning digital self-control needs to begin in childhood, at school and also through the example set by parents.
Some paternalists may scoff at the idea, stating that humans lack the intelligence and self-discipline to ever
become risk literate. But centuries ago the same was said about learning to read and write – which a majority
of people in industrial countries can now do. In the same way, people can learn to deal with risks more
sensibly. To achieve this, we need to radically rethink strategies and invest in people rather than replace or
manipulate them with intelligent technologies. In the 21st century, we need less paternalism and nudging and
more informed, critical, and risk-savvy citizens. It's time to snatch away the remote control from technology
and take our lives into our own hands.
Ethics: Big data for the common good and for humanity
The power of data can be used for good and bad purposes. Roberto Zicari and Andrej Zwitter have
formulated five principles of Big Data Ethics.
by Andrej Zwitter and Roberto Zicari
In recent times there has been a growing number of voices — from tech visionaries like Elon Musk (Tesla
Motors), to Bill Gates (Microsoft) and Steve Wozniak (Apple) — warning of the dangers of artificial
intelligence (AI). A petition against automated weapon systems was signed by 200,000 people and an open
letter recently published by MIT calls for a new, inclusive approach to the coming digital society.
We must realize that big data, like any other tool, can be used for good and bad purposes. In this sense, the
decision by the European Court of Justice against the Safe Harbour Agreement on human rights grounds is
understandable.
States, international organizations and private actors now employ big data in a variety of spheres. It is
important that all those who profit from big data are aware of their moral responsibility. For this reason, the
Data for Humanity Initiative was established, with the goal of disseminating an ethical code of conduct for
big data use. This initiative advances five fundamental ethical principles for big data users:
1. “Do no harm”. The digital footprint that everyone now leaves behind exposes individuals, social
groups and society as a whole to a certain degree of transparency and vulnerability. Those who
have access to the insights afforded by big data must not harm third parties.
2. Ensure that data is used in such a way that the results will foster the peaceful coexistence of
humanity. The selection of content and access to data influences the world view of a society.
Peaceful coexistence is only possible if data scientists are aware of their responsibility to provide
even and unbiased access to data.
3. Use data to help people in need. In addition to being economically beneficial, innovation in the
sphere of big data could also create additional social value. In the age of global connectivity, it is
now possible to create innovative big data tools which could help to support people in need.
4. Use data to protect nature and reduce pollution of the environment. One of the biggest
achievements of big data analysis is the development of efficient processes and synergy effects.
Big data can only offer a sustainable economic and social future if such methods are also used to
create and maintain a healthy and stable natural environment.
5. Use data to eliminate discrimination and intolerance and to create a fair system of social
coexistence. Social media has created a strengthened social network. This can only lead to long-
term global stability if it is built on the principles of fairness, equality and justice.
To conclude, we would also like to draw attention to the interesting new possibilities afforded by big data that
could lead to a better future: "As more data become less costly and technology breaks barriers to acquisition
and analysis, the opportunity to deliver actionable information for civic purposes grows. This might be
termed the 'common good' challenge for big data." (Jake Porway, DataKind). In the end, it is important to
understand the turn to big data as an opportunity to do good and as a hope for a better future.
Measuring, Analyzing, Optimizing: When Intelligent Machines Take over Societal Control
In the digital age, machines steer everyday life to a considerable extent already. We should, therefore, think
twice before we share our personal data, says expert Yvonne Hofstetter.
For Norbert Wiener (1894-1964), the digital era would have been a paradise. “Cybernetics is the science of
information and control, regardless of whether a machine or a living organism is being controlled”, the
founder of cybernetics once said in Hanover, Germany in 1960.
Cybernetics, a science which claims ubiquitous importance, makes a strong promise: “Everything is controllable.” During the 20th century, both the US armed forces and the Soviet Union applied cybernetics to control the arms race. NATO deployed so-called C3I systems (Command, Control, Communication and Information), a term for military infrastructure that linguistically leans on Wiener’s book Cybernetics: Or Control and Communication in the Animal and the Machine, published in 1948. Control refers to the control of machines as well as of individuals or entire societal systems such as the military alliances NATO and the Warsaw Pact. Its basic requirements are integration, data collection and communication. Connecting people and things to the Internet of Everything is a perfect way to obtain the required data as input for cybernetic control strategies.
With cybernetics, a new scientific concept was proposed: the closed feedback loop. Feedback – such as the
likes we give or the online comments we make – is another major concept related to digitization. Does this
mean that digitization is the most perfect implementation of cybernetics? When we use smart devices, we
create an endless data stream disclosing our intentions, geolocation or social environment. While we
communicate more thoughtlessly than ever online, in the background, an artificial intelligence (AI)
ecosystem is evolving. Today, AI is the sole technology able to profile us and draw conclusions about our
future behavior.
An automated control strategy, usually a learning machine, analyses our current state and computes a
stimulus that should draw us closer to a more desirable “optimal” state. Increasingly, such controllers govern
our daily lives. These digital assistants help us to make decisions amid a vast ocean of options and
intimidating uncertainty. Even Google Search is a control strategy. When typing a keyword, a user reveals
his intentions. The Google search engine, in turn, presents not only a list of the best hits, but also a list of
links sorted according to their (financial) value to the company, rather than to the user. By listing corporate
offerings at the very top of the search results, Google controls the user’s next clicks. That is a misuse of
Google’s monopoly, the European Union argues.
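To make the closed-loop idea more concrete, here is a minimal sketch in Python, not taken from Hofstetter's text: a controller observes a user state, compares it with a target ("optimal") state, and emits a stimulus intended to pull the observed state closer to the target. The names and the simple proportional update rule are illustrative assumptions.

```python
# Minimal sketch of a cybernetic closed feedback loop (illustrative assumptions only):
# observe the current state, compare it with a target state, emit a corrective stimulus.

def run_control_loop(observed_state: float, target_state: float,
                     gain: float = 0.5, steps: int = 10) -> float:
    """Simple proportional controller: the stimulus is proportional to the deviation."""
    for step in range(steps):
        deviation = target_state - observed_state    # feedback signal
        stimulus = gain * deviation                  # the computed "nudge"
        observed_state += stimulus                   # the user responds to the stimulus
        print(f"step {step}: state = {observed_state:.3f}")
    return observed_state

# Example: a hypothetical "engagement score" is steered from 0.2 towards a target of 0.8.
run_control_loop(observed_state=0.2, target_state=0.8)
```

In this toy picture, the controller's grip depends entirely on the user continuing to respond to the stimulus.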
But is there any way out? Yes, if we disconnect from the cybernetic loop and simply stop responding to the
digital stimulus. Cybernetics will fail if the controllable counterpart steps out of the loop. We should remain
discreet and frugal with our data, even if it is difficult. However, as digitization further escalates, soon there
may be no more choices left. Hence, we are called on to fight once again for our freedom in the digital era,
particularly against the rise of intelligent machines.