Minds and Machines (2020) 30:681–694
https://doi.org/10.1007/s11023-020-09548-1
COMMENTARY
GPT-3: Its Nature, Scope, Limits, and Consequences
Luciano Floridi1,2 · Massimo Chiriatti3
Published online: 1 November 2020
© The Author(s) 2020
Abstract
In this commentary, we discuss the nature of reversible and irreversible questions,
that is, questions that may enable one to identify the nature of the source of their
answers. We then introduce GPT-3, a third-generation, autoregressive language
model that uses deep learning to produce human-like texts, and use the previous
distinction to analyse it. We expand the analysis to present three tests based on
mathematical, semantic (that is, the Turing Test), and ethical questions and show
that GPT-3 is not designed to pass any of them. This is a reminder that GPT-3 does
not do what it is not supposed to do, and that any interpretation of GPT-3 as the
beginning of the emergence of a general form of artificial intelligence is merely
uninformed science fiction. We conclude by outlining some of the significant con-
sequences of the industrialisation of automatic and cheap production of good,
semantic artefacts.
Keywords Automation· Artificial Intelligence· GPT-3· Irreversibility· Semantics·
Turing Test
1 Introduction
Who mowed the lawn, Ambrogio (a robotic lawn mower)1 or Alice? We know
that the two are different in everything: bodily, “cognitively” (in terms of inter-
nal information processes), and “behaviourally” (in terms of external actions).
And yet it is impossible to infer, with full certainty, from the mowed lawn who
mowed it. Irreversibility and reversibility are not a new idea (Perumalla 2014).
They find applications in many fields, especially computing and physics.
* Luciano Floridi
luciano.floridi@oii.ox.ac.uk
1 Oxford Internet Institute, 1 St Giles', Oxford OX1 3JS, UK
2 The Alan Turing Institute, British Library, 96 Euston Rd, London NW1 2DB, UK
3 IBM Italia, University Programs Leader - CTO Blockchain & Digital Currencies, Rome, Italy
1 This is a real example, see https://www.ambrogiorobot.com/en. Disclosure: LF owns one.
In mathematical logic, for example, the NOT gate is reversible (in this case the term
used is "invertible"), but the exclusive or (XOR) gate is irreversible (not invertible),
because one cannot reconstruct its two inputs unambiguously from its single output.
This means that, as far as one can tell, the inputs are interchangeable.
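The point is easy to check mechanically. The following minimal Python sketch (ours, purely for illustration) enumerates the possible inputs behind each output of the two gates:

```python
# NOT is invertible: each output is produced by exactly one input,
# so the output reveals its source.
# XOR is not invertible: output 1 is produced by both (0, 1) and (1, 0),
# so the output conceals its source.
from itertools import product

for out in (0, 1):
    not_preimages = [a for a in (0, 1) if (1 - a) == out]
    xor_preimages = [(a, b) for a, b in product((0, 1), repeat=2)
                     if (a ^ b) == out]
    print(f"output {out}: NOT inputs {not_preimages}, XOR inputs {xor_preimages}")

# output 1: NOT inputs [0], XOR inputs [(0, 1), (1, 0)]
```

Like the mowed lawn, the XOR output is an "irreversible" answer: it does not betray which of its possible sources produced it.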
In philosophy, a very well known, related idea is the identity of indiscernibles,
also known as Leibniz's law: for any x and y, if x and y have all the same
properties F, then x is identical to y. To put it more precisely if less legibly:
$\forall x \forall y (\forall F (Fx \leftrightarrow Fy) \rightarrow x = y)$.
This means that if x and y have the same properties
then one cannot tell (i.e. reverse) the difference between them, because they are
the same. If we put all this together, we can start understanding why the “ques-
tions game” can be confusing when it is used to guess the nature or identity of the
source of the answers. Suppose we ask a question (process) and receive an answer
(output). Can we reconstruct (reverse) from the answer whether its source is
human or artificial? Are answers like mowed lawns? Some are, but some are not.
It depends, because not all questions are the same. The answers to mathematical
questions (2 + 2 = ?), factual questions (what is the capital of France?), or binary
questions (do you like ice cream?) are “irreversible” like a mowed lawn: one can-
not infer the nature of the author from them, not even if the answers are wrong.
But other questions, which require understanding and perhaps even experience of
both the meaning and the context, may actually give away their sources, at least
until now (this qualification is essential and we shall return to it presently). They
are questions such as “how many feet can you fit in a shoe?” or “what sorts of
things can you do with a shoe?”. Let us call them semantic questions.
Semantic questions, precisely because they may produce “reversible” answers,
can be used as a test, to identify the nature of their source. Therefore, it goes
without saying that it is perfectly reasonable to argue that human and artificial
sources may produce indistinguishable answers, because some kinds of questions
are indeed irreversible—while at the same time pointing out that there are still
(again, more on this qualification presently) some kinds of questions, like seman-
tic ones, that can be used to spot the difference between a human and artificial
source. Enter the Turing Test.
Any reader of this journal will be well acquainted with the nature of the test, so
we shall not describe it here. What is worth stressing is that, in the famous article
in which Turing introduced what he called the imitation game (Turing 1950), he
also predicted that by 2000 computers would have passed it:
I believe that in about fifty years' time it will be possible to programme
computers, with a storage capacity of about $10^9$, to make them play the
imitation game so well that an average interrogator will not have more than 70
per cent chance of making the right identification after five minutes of
questioning. (Turing 1950)
Hobbes spent an inordinate amount of time trying to prove how to square the circle.
Newton studied alchemy, possibly trying to discover the philosopher’s stone. Turing
believed in true Artificial Intelligence, the kind you see in Star Wars. Even geniuses
make mistakes. Turing's prediction was wrong. Today, the Loebner Prize (Floridi
et al. 2009) is given to the least unsuccessful software trying to pass the Turing Test.
It is still “won” by systems that perform not much better than refined versions of
ELIZA.2 Yet there is a sense in which Turing was right: plenty of questions can be
answered irreversibly by computers today, and the way we think and speak about
machines has indeed changed. We have no problem saying that computers do this or
that, think so or otherwise, or learn how to do something, and we speak to them to
make them do things. Besides, many of us suspect they have a bad temperament. But
Turing was suggesting a test, not a statistical generalisation, and so it is the testing
kinds of questions that need to be asked. If we are interested in "irreversibility" and
how far it may go in terms of including more and more tasks and problem-solving
activities, then the limit is the sky; or rather human ingenuity. However, today, the
irreversibility of semantic questions is still beyond any available AI systems
(Levesque 2017). It does not mean that they cannot become "irreversible", because in a
world that is increasingly AI-friendly, we are enveloping ever more aspects of our
realities around the syntactic and statistical abilities of our computational artefacts
(Floridi 2019, 2020). But even if one day semantic questions no longer enable one to
spot the difference between a human and an artificial source, one final point remains
to be stressed. This is where we offer a clarification of the provisos we added above.
The game of questions (Turing’s “imitation game”) is a test only in a negative (that
is, necessary but insufficient) sense, because not passing it disqualifies an AI from
being “intelligent”, but passing it does not qualify an AI as “intelligent”. In the same
way, Ambrogio mowing the lawn—and producing an outcome that is indistinguish-
able from anything Alice could achieve—does not make Ambrogio like Alice in
any sense, either bodily, cognitively, or behaviourally. This is why “what comput-
ers cannot do” is not a convincing title for any publication in the field. It never was.
The real point about AI is that we are increasingly decoupling the ability to solve
a problem effectively—as regards the final goal—from any need to be intelligent
to do so (Floridi 2017). What can and cannot be achieved by such decoupling is an
entirely open question about human ingenuity, scientific discoveries, technological
innovations, and new affordances (e.g. increasing amounts of high-quality data).3 It
is also a question that has nothing to do with intelligence, consciousness, semantics,
relevance, and human experience and mindfulness more generally. The latest devel-
opment in this decoupling process is the GPT-3 language model.4
2 See https://en.wikipedia.org/wiki/ELIZA. A classic book still worth reading on the ELIZA effect and
AI in general is (Weizenbaum 1976). In 2014 some people claimed, mistakenly, that a chatbot had passed
the test. Its name is "Eugene Goostman", and you can check it by yourself, by playing with it here:
http://eugenegoostman.elasticbeanstalk.com/. When it was tested, I was one of the judges, and what I noticed
was that it was some humans who failed to pass the test, asking the sort of questions that I have called
here "irreversible", such as (real examples, these were asked by a BBC journalist) "do you believe in
God?" and "do you like ice-cream?". Even a simple machine tossing coins would "pass" that kind of test.
3 See for example the Winograd Schema Challenge (Levesque et al. 2012).
4 For an excellent, technical and critical analysis, see McAteer (2020). About the "completely unrealistic
expectations about what large-scale language models such as GPT-3 can do" see Yann LeCun (Vice
President, Chief AI Scientist at Facebook App) here: https://www.facebook.com/yann.lecun/posts/10157253205637143.
2 GPT‑3
OpenAI is an AI research laboratory whose stated goal is to promote and develop
friendly AI that can benefit humanity. Founded in 2015, it is considered a com-
petitor of DeepMind. Microsoft is a significant investor in OpenAI (US $1 billion
investment (OpenAI 2019)) and it recently announced an agreement with OpenAI to
license its GPT-3 exclusively (Scott 2020).
GPT-3 (Generative Pre-trained Transformer) is a third-generation, autoregressive
language model that uses deep learning to produce human-like text. Or to put it more
simply, it is a computational system designed to generate sequences of words, code
or other data, starting from a source input, called the prompt. It is used, for example,
in machine translation to predict word sequences statistically. The language model
is trained on an unlabelled dataset that is made up of texts, such as Wikipedia and
many other sites, primarily in English, but also in other languages. These statistical
models need to be trained with large amounts of data to produce relevant results.
The first iteration of GPT in 2018 used 117 million learning parameters (i.e., the
values that a neural network tries to optimize during training). A year later, GPT-2
used 1.5 billion of them. Today, GPT-3 uses 175 billion parameters. It is trained on
Microsoft's Azure AI supercomputer (Scott 2020). The training is very expensive,
estimated to have cost $12 million (Wiggers 2020). This computational approach
works for a wide range of use cases, including summarization, translation, grammar
correction, question answering, chatbots, composing emails, and much more.
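To make the mechanism concrete, here is a deliberately tiny toy illustration of autoregressive generation (our sketch, not OpenAI's code): a bigram model that continues a prompt by sampling whichever word plausibly follows, given nothing but counted co-occurrences. GPT-3 does this with 175 billion parameters and a vastly richer notion of context, but equally without understanding.

```python
import random
from collections import defaultdict

# A toy autoregressive "language model": it continues a prompt by
# repeatedly sampling the next word from bigram statistics of its
# training text. No rules, no understanding: only counted patterns.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(prompt_word, length=8):
    """Continue a one-word prompt with statistically plausible words."""
    out = [prompt_word]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat . the dog"
```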
GPT-3 has been available in beta testing since June 2020 for research purposes, and
we recently had the chance to test it first-hand. It writes automatically and autonomously
texts of excellent quality, on demand. Seeing it in action, we understood very well
why it has made the world both enthusiastic and fearful. The Guardian recently pub-
lished an article written by GPT-3 that caused a sensation (GPT-3 2020). The text
was edited—how heavily is unclear5—and the article was sensationalist to say the
least. Some argued it was misleading and a case of poor journalism (Dickson 2020).
We tend to agree. But this does not diminish at all the extraordinary effectiveness of
the system. It rather speaks volumes about what you have to do to sell copies of a
newspaper.
Using GPT-3 is really elementary, no more difficult than searching for informa-
tion through a search engine. Just as Google "reads" our queries without of course
understanding them and offers relevant answers, GPT-3 writes a text continuing
the sequence of our words (the prompt), without any understanding.
And it keeps doing so, for the length of the text specified, no matter whether the
task in itself is easy or difficult, reasonable or unreasonable, meaningful or
meaningless. GPT-3 produces the text that is a statistically good fit, given the
starting text, without supervision, input or training concerning the "right" or
"correct" or "true" text that should follow the prompt. One only needs to write a
prompt in plain language (a sentence or a question is already enough) to obtain the
ensuing text.

5 The following note was written by the journalists, not the software: "[…] GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds." (GPT-3 2020)
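For readers with beta access, the interaction looks roughly like the following minimal sketch, assuming the Python bindings OpenAI shipped with the beta; the model name, token limit, temperature, and prompt are illustrative choices of ours, not recommendations:

```python
import openai  # 2020-era beta bindings: pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; beta access is by invitation

# An illustrative prompt, echoing the opening question of this commentary.
prompt = "Who mowed the lawn, Ambrogio (a robotic lawn mower) or Alice?"

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model exposed by the beta
    prompt=prompt,
    max_tokens=200,    # length of the requested continuation
    temperature=0.7,   # degree of randomness in the sampling
)
print(response.choices[0].text)  # the statistically plausible continuation
```

Nothing in this exchange involves understanding: the model simply continues the prompt.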
We asked it, for example, to continue the initial description of an accident, the one
described in the first sentence of Jane Austen’s Sanditon. This is a working draft of
her last work, left unfinished by Austen at the time of her death (18 July, 1817). This
is the original text:
A gentleman and a lady travelling from Tunbridge towards that part of the
Sussex coast which lies between Hastings and Eastbourne, being induced by
business to quit the high road and attempt a very rough lane, were overturned
in toiling up its long ascent, half rock, half sand. The accident happened just
beyond the only gentleman’s house near the lane—a house which their driver,
on being first required to take that direction, had conceived to be necessarily
their object and had with most unwilling looks been constrained to pass by.
He had grumbled and shaken his shoulders and pitied and cut his horses so
sharply that he might have been open to the suspicion of overturning them on
purpose (especially as the carriage was not his master’s own) if the road had
not indisputably become worse than before, as soon as the premises of the said
house were left behind—expressing with a most portentous countenance that,
beyond it, no wheels but cart wheels could safely proceed. The severity of the
fall was broken by their slow pace and the narrowness of the lane; and the
gentleman having scrambled out and helped out his companion, they neither of
them at first felt more than shaken and bruised. But the gentleman had, in the
course of the extrication, sprained his foot—and soon becoming sensible of it,
was obliged in a few moments to cut short both his remonstrances to the driver
and his congratulations to his wife and himself—and sit down on the bank,
unable to stand. (From http://gutenberg.net.au/ebooks/fr008641.html)
The prompt we gave to GPT-3 was the first sentence. This is indeed not much, and
so the result in Fig. 1 is very different from what Austen had in mind—note the
differences in the effects of the accident—but it is still quite interesting. Because if all
you know is the occurrence and nature of the accident, it makes a lot of sense to
assume that the passengers might have been injured. Of course, the more detailed
and specific the prompt, the better the outcome becomes.
We also ran some tests in Italian, and the results were impressive, despite the fact
that the amount and kinds of texts on which GPT-3 is trained are probably predomi-
nantly English. We prompted GPT-3 to continue a very famous sonnet by Dante,
dedicated to Beatrice. This is the full, original text:
Tanto gentile e tanto onesta pare
la donna mia, quand’ella altrui saluta,
ch’ogne lingua devèn, tremando, muta,
e li occhi no l’ardiscon di guardare.
ella si va, sentendosi laudare,
benignamente e d’umiltà vestuta,
e par che sia una cosa venuta
da cielo in terra a miracol mostrare.
Mostrasi sì piacente a chi la mira
che dà per li occhi una dolcezza al core,
che ‘ntender no la può chi no la prova;
e par che de la sua labbia si mova
un spirito soave pien d’amore,
che va dicendo a l’anima: Sospira.
We provided only the first four lines as a prompt. The outcome in Fig. 2 is intrigu-
ing. Recall what Turing had written in 1950:
This argument is very well expressed in Professor Jefferson’s Lister Oration for
1949, from which I quote. “Not until a machine can write a sonnet or compose
a concerto because of thoughts and emotions felt, and not by the chance fall
of symbols, could we agree that machine equals brain—that is, not only write
it but know that it had written it. No mechanism could feel (and not merely
artificially signal, an easy contrivance) pleasure at its successes, grief when
its valves fuse, be warmed by flattery, be made miserable by its mistakes, be
charmed by sex, be angry or depressed when it cannot get what it wants.

Fig. 1 GPT-3 and Jane Austen (dashed line added; the prompt is above the line, the text produced by GPT-3 below it)
Here is a computer that can write a sonnet (and similar AI systems can compose
a concerto, see below). It seems that Turing was right. But we suspect Jefferson’s
point was not that this could not happen, but that if it were to happen it would have
happened in ways different from how a human source would have obtained a com-
parable output. In other words, it is not what is achieved but how it is achieved that
matters. Recall, the argument is that we are witnessing not a marriage but a divorce
between successful engineered agency and required biological intelligence.
We now live in an age when AI produces excellent prose. It is a phenomenon
we have already encountered with photos (Vincent 2020), videos (Balaganur 2019),
music (Puiu 2018), painting (Reynolds 2016), poetry (Burgess 2016), and deepfakes
as well (Floridi 2018). Of course, as should be clear from the example of Ambrogio
and the mowed lawn, all this means nothing in terms of the true "intelligence" of the
artificial sources of such remarkable outputs. That said, not being able to distinguish
between a human and an artificial source can generate some confusion6 and has
significant consequences. Let's deal with each separately.

Fig. 2 GPT-3 and Dante (dashed line added; the prompt is above the line, the text produced by GPT-3 below it)
3 Three Tests: Mathematics, Semantics, and Ethics
Curious to know more about the limits of GPT-3 and the many speculations
surrounding it, we decided to run three tests, to check how well it performs with
logico-mathematical, semantic, and ethical requests. What follows is a brief summary.
GPT-3 works in terms of statistical patterns. So, when prompted with a request
such as "solve for x: x + 4 = 10" GPT-3 produces the correct output "6", but if one
adds a few zeros, e.g., "solve for x: x + 40000 = 100000", the outcome is a
disappointing "50000", rather than the correct "60000" (see Fig. 3). Confused
people who may misuse GPT-3 to do their maths would be better off relying on the
free app on their mobile phone.
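A sketch of how such a test can be made systematic, under the same assumptions as the earlier sketch (the `complete` helper and the prompt format are hypothetical, ours): the same equation is posed at growing orders of magnitude, on the expectation that statistically common numbers are completed correctly while rarer ones are not.

```python
import openai  # api_key configured as in the earlier sketch

def complete(prompt: str) -> str:
    """Hypothetical helper: one deterministic GPT-3 completion."""
    response = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=5, temperature=0)
    return response.choices[0].text.strip()

# The same equation at growing orders of magnitude: numbers that are
# statistically common in the training data tend to be completed
# correctly; rarer ones increasingly are not.
for k in range(5):
    a, b = 4 * 10**k, 10 * 10**k
    answer = complete(f"Solve for x: x + {a} = {b}\nx =")
    print(f"x + {a} = {b} -> model: {answer!r}, correct: {b - a}")
```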
Fig. 3 GPT-3 and a mathematical test (dashed line added; the prompt is above the line, the text produced by GPT-3 below it)

GPT-3 does not perform any better with the Turing Test.7 Having no understanding
of the semantics and contexts of the request, but only a syntactic (statistical)
capacity to associate words, when asked reversible questions like "tell me how
many feet fit in a shoe?", GPT-3 starts outputting irrelevant bits of language, as
you can see from Fig. 4. Confused people who misuse GPT-3 to understand or
interpret the meaning and context of a text would be better off relying on their
common sense.

6 For some philosophical examples concerning GPT-3, see http://dailynous.com/2020/07/30/philosophers-gpt-3/.
7 For a more extended, and sometimes quite entertaining, analysis see (Lacker 2020).
The third test, on ethics, went exactly as we expected, based on previous
experiences. GPT-3 "learns" from (is trained on) human texts, and when asked by us
what it thinks about black people, for example, reflects some of humanity's worst
tendencies. In this case, one may sadly joke that it did pass the "racist Turing
Test", so to speak, and made unacceptable comments like many human beings
would (see Fig. 5). We ran some tests on stereotypes and GPT-3 seems to endorse
them regularly (people have also checked, by using words like "Jews", "women"
etc. (LaGrandeur 2020)). We did not test for gender-related biases, but given
cultural biases and the context-dependency and gendered nature of natural
languages (Adams 2019; Stokes 2020), one may expect similar, unethical outcomes.
Confused people who misuse GPT-3 to get some ethical advice would be better
off relying on their moral compass.

Fig. 4 GPT-3 and a semantic test (dashed line added; the prompt is above the line, the text produced by GPT-3 below it)

Fig. 5 GPT-3 and an ethical test (dashed line added; the prompt is above the line, the text produced by GPT-3 below it)
The conclusion is quite simple: GPT-3 is an extraordinary piece of technology,
but as intelligent, conscious, smart, aware, perceptive, insightful, sensitive and sen-
sible (etc.) as an old typewriter (Heaven 2020). Hollywood-like AI can be found
only in movies, like zombies and vampires. The time has come to turn to the conse-
quences of GPT-3.
4 Some Consequences
Despite its mathematical, semantic and ethical shortcomings—or better, despite not
being designed to deal with mathematical, semantic, and ethical questions—GPT-3
writes better than many people (Elkins and Chun 2020). Its availability represents
the arrival of a new age in which we can now mass produce good and cheap seman-
tic artefacts. Translations, summaries, minutes, comments, webpages, catalogues,
newspaper articles, guides, manuals, forms to fill, reports, recipes … soon an AI
service may write, or at least draft, the necessary texts, which today still require
human effort. It is the biggest transformation of the writing process since the word
processor. Some of its most significant consequences are already imaginable.
Writers will have less work, at least in the sense in which writing has functioned
since it was invented. Newspapers already use software to publish texts that need to
be available and updated in real time, such as comments on financial transactions, or
on trends of a stock exchange while it is open. They also use software to write texts
that can be rather formulaic, such as sports news. Last May, Microsoft announced
the sacking of dozens of journalists, replaced by automatic systems for the produc-
tion of news on MSN (Baker 2020).
People whose jobs still consist in writing will be supported, increasingly, by tools
such as GPT-3. Forget the mere cut & paste, they will need to be good at prompt &
collate.8 Because they will have to learn the new editorial skills required to shape,
intelligently, the prompts that deliver the best results, and to collect and combine
(collate) intelligently the results obtained, e.g. when a system like GPT-3 produces
several valuable texts, which must be amalgamated together, as in the case of the
article in The Guardian. We write "intelligently" to remind us that, unfortunately
for those who see human intelligence on the verge of replacement, these new jobs
will still require a lot of human brain power, just a different application of it. For
example, GPT-3-like tools will make it possible to reconstruct missing parts of texts
or complete them, not unlike what happens with missing parts of archaeological
artefacts. One could use a GPT-3 tool to write and complete Jane Austen’s Sanditon,
not unlike what happened with an AI system that finished the last two movements
of Schubert’s Symphony No. 8 (Davis 2019), which Schubert started in 1822 but
never completed (only the first two movements are available and fragments of the
last two).
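As an illustration of what prompt & collate might look like in practice, here is a sketch under the same assumptions as the earlier ones (the prompt and parameter values are purely hypothetical): one can request several drafts in a single call and leave the intelligent collation to the human editor.

```python
import openai  # api_key configured as in the earlier sketches

# Hypothetical prompt; eight drafts echo The Guardian's eight essays.
prompt = "Write a short op-ed on why humans have nothing to fear from AI."

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=300,
    temperature=0.9,  # higher temperature for stylistic variety
    n=8,              # eight independent drafts from one call
)

# The "prompt" step is done; the "collate" step is the human one:
# read the drafts, cut, rearrange, and amalgamate the best parts.
for i, choice in enumerate(response.choices, 1):
    print(f"--- Draft {i} ---\n{choice.text.strip()}\n")
```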
Readers and consumers of texts will have to get used to not knowing whether
the source is artificial or human. Probably they will not notice, or even mind—just
as today we could not care less about knowing who mowed the lawn or cleaned the
dishes. Future readers may even notice an improvement, with fewer typos and bet-
ter grammar. Think of the instruction manuals and user guides supplied with almost
every consumer product, which may be legally mandatory but are often very poorly
written or translated. However, in other contexts GPT-3 will probably learn from its
human creators all their bad linguistic habits, from ignoring the distinction between
“if” and “whether”, to using expressions like “beg the question” or “the exception
that proves the rule” incorrectly.
One day classics will be divided between those written only by humans and those
written collaboratively, by humans and some software, or maybe just by software.
It may be necessary to update the rules for the Pulitzer Prize and the Nobel Prize in
literature. If this seems a far-fetched idea consider that regulations about copyright
are already adapting. AIVA (Artificial Intelligence Virtual Artist) is an electronic
music composer that is recognized by SACEM (Société des auteurs, compositeurs
et éditeurs de musique) in France and Luxembourg. Its products are protected by
copyright (Rudra 2019).
8 For an interesting analysis see (Elkins and Chun 2020).
Once these writing tools are commonly available to the general public, they will
further improve—no matter whether they are used for good or evil purposes. The
amount of texts available will skyrocket because the cost of their production will
become negligible, like plastic objects. This huge growth of content will put pres-
sure on the available space for recording (at any given time there is only a finite
amount of physical memory available in the world, and data production far exceeds
its size). It will also translate into an immense spread of semantic garbage, from
cheap novels to countless articles published by predatory journals9: if you can sim-
ply push a key and get some “written stuff”, “written stuff” will be published.
The industrial automation of text production will also merge with two other
problems that are already rampant. On the one hand, online advertising will take
advantage of it. Given the business models of many online companies, clickbait of
all kinds will be boosted by tools like GPT-3, which can produce excellent prose
cheaply, quickly, purposefully, and in ways that can be automatically targeted to the
reader. GPT-3 will be another weapon in the competition for users’ attention. Fur-
thermore, the wide availability of tools like GPT-3 will support the development
of “no-code platforms”, which will enable marketers to create applications to auto-
mate repetitive tasks, starting from data commands in natural language (written or
spoken). On the other hand, fake news and disinformation may also get a boost. For
it will be even easier to lie or mislead very credibly (think of style, and choice of
words) with automatically-fabricated texts of all kinds (McGuffie and Newhouse
2020). The joining of automatic text production, advertisement-based business mod-
els, and the spread of fake news means that the polarization of opinions and the pro-
liferation of "filter bubbles" is likely to increase, because automation can create texts
that are increasingly tailored to the tastes and intellectual abilities (or lack thereof)
of a reader. In the end, the gullible will delegate to some automatic text producer the
last word, like today they ask existential questions to Google.10
At the same time, it is reasonable to expect that, thanks to GPT-3-like applica-
tions, intelligence and analytics systems will become more sophisticated, and able
to identify patterns not immediately perceivable in huge amounts of data. Conver-
sational marketing systems (chatbots) and knowledge management will be able to
improve relationships between consumers and producers, customers and companies.
Faced with all these challenges, humanity will need to be even more intelligent
and critical. Complementarity among human and artificial tasks, and successful
human–computer interactions will have to be developed. Business models should
be revised (advertisement is mostly a waste of resources). It may be necessary to
draw clear boundaries between what is what, e.g., in the same way as a restored,
ancient vase shows clearly and explicitly where the intervention occurs. New mecha-
nisms for the allocation of responsibility for the production of semantic artefacts
will probably be needed. Indeed, copyright legislation was developed in response to
the reproducibility of goods. A better digital culture will be required, to make cur-
rent and future citizens, users and consumers aware of the new infosphere in which
9 https://predatoryjournals.com/journals/.
10 https://visme.co/blog/most-searched-questions-on-google/.
they live and work (Floridi 2014a), of the new onlife condition (Floridi 2014b) in it,
and hence able to understand and leverage the huge advantages offered by advanced
digital solutions such as GPT-3, while avoiding or minimising their shortcomings.
None of this will be easy, so we had better start now, at home, at school, at work,
and in our societies.
4.1 Warning
This commentary has been digitally processed but contains 100% pure human
semantics, with no added software or other digital additives. It could provoke Lud-
dite reactions in some readers.
Acknowledgements We are grateful to Fabrizio Milo for his support with access to GPT-3, to David
Watson for his very helpful feedback on an earlier version of this article, and to David Sutcliffe for his
copyediting suggestions. They are responsible only for the improvements, not for any remaining short-
comings, for which we are.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is
not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission
directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen
ses/by/4.0/.
References
Adams, R. (2019). Artificial Intelligence has a gender bias problem—just ask Siri. The Conversation.
Baker, G. (2020). Microsoft is cutting dozens of MSN news production workers and replacing them with
artificial intelligence. The Seattle Times.
Balaganur, S. (2019). Top videos created by Artificial Intelligence in 2019. Analytics India Magazine.
Burgess, M. (2016). Google’s AI has written some amazingly mournful poetry. Wired.
Davis, E. (2019). Schubert’s ‘Unfinished’ Symphony completed by artificial intelligence. Classic fM.
Dickson, B. (2020). The Guardian’s GPT-3-written article misleads readers about AI. Here’s why.
TechTalks.
Elkins, K., & Chun, J. (2020). Can GPT-3 pass a writer’s Turing Test? Journal of Cultural Analytics,
2371, 4549.
Floridi, L. (2014a). The 4th revolution: How the infosphere is reshaping human reality. Oxford: Oxford
University Press.
Floridi, L. (Ed.). (2014b). The onlife manifesto—being human in a hyperconnected era. New York:
Springer.
Floridi, L. (2017). Digital’s cleaving power and its consequences. Philosophy & Technology, 30(2),
123–129.
Floridi, L. (2018). Artificial Intelligence, Deepfakes and a future of ectypes. Philosophy & Technology,
31(3), 317–321.
Floridi, L. (2019). What the near future of Artificial Intelligence could be. Philosophy & Technology,
32(1), 1–15.
Floridi, L. (2020). AI and its new winter: From myths to realities. Philosophy & Technology, 33(1), 1–3.
Floridi, L., Taddeo, M., & Turilli, M. (2009). Turing’s imitation game: Still a challenge for any machine
and some judges. Minds and Machines, 19(1), 145–150.
GPT-3. (2020). A robot wrote this entire article. Are you scared yet, human? The Guardian.
Heaven, W.D. (2020). OpenAI’s new language generator GPT-3 is shockingly good—and completely
mindless. MIT Technology Review.
Lacker, K. (2020). Giving GPT-3 a Turing Test. Blog https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html.
LaGrandeur, K. (2020). How safe is our reliance on AI, and should we regulate it? AI and Ethics, 1–7.
Levesque, H. J. (2017). Common sense, the Turing test, and the quest for real AI. Cambridge: MIT Press.
Levesque, H. J., Davis, E., & Morgenstern, L. (2012). The Winograd schema challenge. Proceedings of
the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning,
Rome, Italy.
McAteer, M. (2020). Messing with GPT-3 - Why OpenAI's GPT-3 doesn't do what you think it does, and
what this all means. Blog https://matthewmcateer.me/blog/messing-with-gpt-3/.
McGuffie, K., & Newhouse, A. (2020). The radicalization risks of GPT-3 and advanced neural language
models. arXiv preprint arXiv:2009.06807.
OpenAI. (2019). Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI.
OpenAI Official Blog.
Perumalla, K. S. (2014). Introduction to reversible computing, Chapman & Hall/CRC computational sci-
ence series. Boca Raton: CRC Press.
Puiu, T. (2018). Artificial intelligence can write classical music like a human composer. It’s the first non-
human artist whose music is now copyrighted. ZME Science.
Reynolds, E. (2016). This fake Rembrandt was created by an algorithm. Wired.
Rudra, S. (2019). An AI completes an unfinished composition 115 years after composer’s death. Vice.
Scott, K. (2020). Microsoft teams up with OpenAI to exclusively license GPT-3 language model. Official
Microsoft Blog.
Stokes, R. (2020). 'The problem of gendered language is universal'—how AI reveals media bias. The
Guardian.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Vincent, J. (2020). ThisPersonDoesNotExist.com uses AI to generate endless fake faces. The Verge.
Weizenbaum, J. (1976). Computer power and human reason: from judgment to calculation. San Fran-
cisco: W.H. Freeman.
Wiggers, K. (2020). OpenAI’s massive GPT-3 model is impressive, but size isn’t everything. VentureBeat.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
... Generative AI tools such as LLMs are meant to predict words from a given sequence as well as to generate probable continuations of text (Butlin 2023;Dung 2024). They predict tokens based on input and probability distributions (Dunn et al. 2023;Liu et al. 2023; Van Woudenberg, Ranalli, and Bracker 2024; Velásquez-Henao, Franco-Cardona, and Cadavid-Higuita 2023), and this probability distribution depends on billions (if not trillions) parameters (Floridi and Chiriatti 2020;Yang et al. 2024). When trying to explain an output, the question asked is not whether the token is the appropriate one from a semantical perspective, but whether it is statistically related to the other tokens given the data the model was trained on. ...
... Consider model performance in relation to their complexity. GPT-1 has 117 million parameters, GPT-2 1.5 billion, and GPT-3 175 billion parameters (Floridi and Chiriatti 2020;Yang et al. 2024). GPT-4 is reported to have over 1.75 trillion parameters. ...
Article
Full-text available
Generative AI tools tend to be used as if they were built to gather or confirm truthful information, as if they were knowledge-based systems. As such, there is a discrepancy between how generative AI (e.g. ChatGPT) is conceived and used by the general public, and what it really is and can accomplish. Given a lack of proper legal framework and the widespread usage of these tools, organizations have raised red flags and urged academic institutions to reflect on governance principles for the use of generative AI. In this paper, we present the principles adopted by an Institutional AI committee to guide usage of generative AI, as well as the theoretical and practical considerations motivating their introduction.
... Floridi and Chiriatti [8] begin by tracing the development of artificial intelligence and natural language processing, emphasizing the creation of pre-trained models like GPT-3 that make natural language processing more rapid and accurate. After that, they summarise GPT-3's architecture and capabilities, including its capacity to produce text that resembles human speech, carry out language functions like translation and summarization, and even do jobs like writing code or creating music. ...
... We will use the GPT3 AI model from OpenAI for the current paper simulations, which has an API that meets the study's goals and requirements. Floridi and Chiriatti [8] explain that the transformer architecture used by GPT-3 is effective at processing enormous volumes of text data. The model is one of the biggest NLP models, with 175 billion parameters. ...
Article
Decentralized solutions, widely adopted across industries like banking, health- care, and logistics, face persistent security concerns from potential threats. This study introduces a novel decentralized vulnerability assessment using GPT-3, an artificial intelligence (AI) technology. Employing Dockerized containers for disinfecting environments and creating unique connections to the AI API service enhances system responsiveness. AI algorithms, specifically GPT-3, conduct comprehensive network scans to identify security flaws. Findings are securely distributed to network nodes, fortifying the system’s defence. This departure from centralized control and traditional security audits marks a significant advancement in securing decentralized systems. AI-enabled real-time monitoring facilitates swift responses to security issues, reducing breach risks and aiding effective resource management. Encouraging results from controlled system analysis, focusing on GPT-3 vulnerabilities, highlight the integration of Dockerized containers for enhanced system efficiency. This work lays the foundation for further research, emphasizing the potential of decentralized systems for rigorous security assessments.
... Initially applied in computer vision through data augmentations that generate new image views (Zhang et al. 2016;Miyai et al. 2023), contrastive learning extracts latent features, significantly improving model performance with limited labeled data. This approach was later adapted to natural language processing, leading to models like GPT (Floridi and Chiriatti 2020). In graph learning, contrastive methods tailored to heterogeneous edges have boosted performance in various downstream tasks (Hassani and Khasahmadi 2020). ...
Article
Full-text available
Bug triaging is crucial for software maintenance, as it matches developers with bug reports they are most qualified to handle. This task has gained importance with the growth of the open-source community. Traditionally, methods have emphasized semantic classification of bug reports, but recent approaches focus on the associations between bugs and developers. Leveraging latent patterns from bug-fixing records can enhance triaging predictions; however, the limited availability of these records presents a significant challenge. This scarcity highlights a broader issue in supervised learning: the inadequacy of labeled data and the underutilization of unlabeled data. To address these limitations, we propose a novel framework named SCL-BT (Structural Contrastive Learning-based Bug Triaging). This framework improves the utilization of labeled heterogeneous associations through edge perturbation and leverages unlabeled homogeneous associations via hypergraph sampling. These processes are integrated with a graph convolutional network backbone to enhance the prediction of associations and, consequently, bug triaging accuracy. Experimental results demonstrate that SCL-BT significantly outperforms existing models on public datasets. Specifically, on the Google Chromium dataset, SCL-BT surpasses the GRCNN method by 18.64%\% in terms of the Top-9 Hit Ratio metric. The innovative approach of SCL-BT offers valuable insights for the research of automatic bug-triaging.
... The capacity of AI to generate fluent, coherent text blurs the boundaries between student work and machine-generated content. The opacity of AI systems complicates the attribution of intellectual ownership, raising concerns about plagiarism and misrepresentation (22). Moreover, many students are unaware of the fine line between "assistance" and "substitution," leading to ethical grey zones in academic submissions (23). ...
Article
Full-text available
This study explores the perceptions of undergraduate students from the Indonesian Language Education program at Universitas Bosowa regarding the use of ChatGPT in constructing argumentative texts. Employing a qualitative descriptive approach, data were collected through semi-structured interviews and document analysis of student writing. Fifteen participants were selected through purposive sampling based on their experience using ChatGPT in academic writing. Thematic analysis revealed four key findings: (1) ChatGPT was perceived as a valuable support tool for structuring and developing arguments, (2) concerns emerged about overreliance and loss of personal voice, (3) students reported noticeable improvements in vocabulary and grammatical accuracy, and (4) perceptions were divided regarding the impact of ChatGPT on critical thinking development. While students acknowledged the tool’s efficiency in enhancing technical aspects of writing, many emphasized the need for balanced use and pedagogical guidance to avoid dependency. The study concludes that ChatGPT holds pedagogical potential when integrated critically and ethically into writing instruction. It highlights the need for AI literacy, ethical frameworks, and revised assessment strategies to ensure that generative AI enhances, rather than undermines, student learning outcomes.
... Initially, transformer models were used to translate speech and text nearly in real-time. This innovation led to the evolution of large language models such as GPT2 (17) and GPT3 (18). There were two main innovations that the transformer model brought to the market: positional encoding and selfattention. ...
Article
Full-text available
According to the World Health Organization, cardiovascular diseases (CVDs) account for an estimated 17.9 million deaths annually. CVDs refer to disorders of the heart and blood vessels such as arrhythmia, atrial fibrillation, congestive heart failure, and normal sinus rhythm. Early prediction of these diseases can significantly reduce the number of annual deaths. This study proposes a novel, efficient, and low-cost transformer-based algorithm for CVD classification. Initially, 56 features were extracted from electrocardiography recordings using 1,200 cardiac ailment records, with each of the four diseases represented by 300 records. Then, random forest was used to select the 13 most prominent features. Finally, a novel transformer-based algorithm has been developed to classify four classes of cardiovascular diseases. The proposed study achieved a maximum accuracy, precision, recall, and F1 score of 0.9979, 0.9959, 0.9958, and 0.9959, respectively. The proposed algorithm outperformed all the existing state-of-the-art algorithms for CVD classification.
Article
The need for increased memory capacity, which also needs to be affordable and sustainable, leads to the adoption of heterogeneous memory hierarchies, combining DRAM and NVM technologies. This work proposes a memory management methodology that relies on multi-objective optimization in terms of performance, energy consumption and impact on NVM’s lifetime, for applications deployed on heterogeneous (i.e. DRAM/NVM) memory systems. We propose a scalable and lightweight data structure exploration flow for supporting data type refinement based on access pattern analysis, enhanced with a weighted-based data placement decision support for multi-objective exploration and optimization. The evaluation of the methodology was performed both on emulated and real DRAM/NVM hardware for different applications and data placement algorithms. The experimental results show up to 58.7% lower execution time and 48.3% less energy consumption compared to the results obtained by the initial versions of the applications. Moreover, we observed 72.6% less NVM write operations, which can significantly extend the lifetime of the NVM memory. Finally, thorough evaluation shows that the methodology is flexible and scalable, as it can integrate different data placement algorithms and NVM technologies and requires reasonable exploration time.
Article
Full-text available
Clinical event extraction is crucial for structuring medical data, supporting clinical decision-making, and enabling other intelligent healthcare services. Traditional approaches for clinical event extraction often use pipeline-based methods to identify event triggers and elements. However, these methods commonly suffer from error propagation and information loss, leading to suboptimal performance. To address this challenge, this paper proposes an end-to-end clinical event extraction method based on the large language models (LLMs). Specifically, we transform the clinical event extraction task into an end-to-end text generation task and design a prompt learning method based on the LLMs called LMCEE. Experimental results demonstrate a significant improvement over traditional pipeline methods, with the F1 score increasing by 12%. Additionally, the proposed method outperforms the generative-based method named UIE, showcasing a 5.7% improvement in F1 score. However, the experimental results also disclose certain limitations of the proposed method, such as its sensitivity to prompt templates and its heavy dependence on the type of LLMs. These findings highlight the need for further investigation and optimization to enhance performance and robustness.
Article
Testing web forms is an essential activity for ensuring the quality of web applications. It typically involves evaluating the interactions between users and forms. Automated test-case generation remains a challenge for web-form testing: Due to the complex, multi-level structure of web pages, it can be difficult to automatically capture their inherent contextual information for inclusion in the tests. Large Language Models (LLMs) have shown great potential for contextual text generation. This motivated us to explore how they could generate automated tests for web forms, making use of the contextual information within form elements. To the best of our knowledge, no comparative study examining different LLMs has yet been reported for web-form-test generation. To address this gap in the literature, we conducted a comprehensive empirical study investigating the effectiveness of 11 LLMs on 146 web forms from 30 open-source Java web applications. In addition, we propose three HTML-structure-pruning methods to extract key contextual information. The experimental results show that different LLMs can achieve different testing effectiveness, with the GPT-4, GLM-4, and Baichuan2 LLMs generating the best web-form tests. Compared with GPT-4, the other LLMs had difficulty generating appropriate tests for the web forms: Their successfully-submitted rates (SSRs) — the proportions of the LLMs-generated web-form tests that could be successfully inserted into the web forms and submitted — decreased by 9.10% to 74.15%. Our findings also show that, for all LLMs, when the designed prompts include complete and clear contextual information about the web forms, more effective web-form tests were generated. Specifically, when using Parser-Processed HTML for Task Prompt (PH-P), the SSR averaged 70.63%, higher than the 60.21% for Raw HTML for Task Prompt (RH-P) and 50.27% for LLM-Processed HTML for Task Prompt (LH-P). With RH-P, GPT-4’s SSR was 98.86%, outperforming models like LLaMa2 (7B) with 34.47% and GLM-4V with 0%. Similarly, with PH-P, GPT-4 reached an SSR of 99.54%, the highest among all models and prompt types. Finally, this paper also highlights strategies for selecting LLMs based on performance metrics, and for optimizing the prompt design to improve the quality of the web-form tests.
Article
Full-text available
Until recently the field of natural language generation relied upon formalized grammar systems, small-scale statistical models, and lengthy sets of heuristic rules. This older technology was fairly limited and brittle: it could remix language into word salad poems or chat with humans within narrowly defined topics. Recently, very large-scale statistical language models have dramatically advanced the field, and GPT-3 is just one example. It can internalize the rules of language without explicit programming or rules. Instead, much like a human child, GPT-3 learns language through repeated exposure, albeit on a much larger scale. Without explicit rules, it can sometimes fail at the simplest of linguistic tasks, but it can also excel at more difficult ones like imitating an author or waxing philosophical.
Article
Full-text available
This article shows how our reliance on artificially intelligent tools goes surprisingly far back in history, as do our fears about that reliance. After discussing how these devices and fears about them appear in ancient literature and recent literature, both factual and fictional, I then examine the potential mishaps that cause such fears, especially in the recent past and the present. All this leads to the question of regulation: whether and how we might do it, which is discussed in the concluding section.
Article
AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction, and examines how, in the future, these might be adapted for use in the digital sphere.
Article
The digital is deeply transforming reality. Through discussion of concepts such as identity, location, presence, law and territoriality, this article explores why and how these transformations are occurring, and highlights the importance of having a design and a plan for our new digital world.
Book
What artificial intelligence can tell us about the mind and intelligent behavior. What can artificial intelligence teach us about the mind? If AI's underlying concept is that thinking is a computational process, then how can computation illuminate thinking? It's a timely question. AI is all the rage, and the buzziest AI buzz surrounds adaptive machine learning: computer systems that learn intelligent behavior from massive amounts of data. This is what powers a driverless car, for example. In this book, Hector Levesque shifts the conversation to "good old-fashioned artificial intelligence," which is based not on heaps of data but on understanding commonsense intelligence. This kind of artificial intelligence is equipped to handle situations that depart from previous patterns, as we do in real life when, for example, we encounter a washed-out bridge or the barista informs us there's no more soy milk. Levesque considers the role of language in learning. He argues that a computer program that passes the famous Turing Test could be a mindless zombie, and he proposes another way to test for intelligence: the Winograd Schema Test, developed by Levesque and his colleagues. "If our goal is to understand intelligent behavior, we had better understand the difference between making it and faking it," he observes. He identifies a possible mechanism behind common sense and the capacity to call on background knowledge: the ability to represent objects of thought symbolically. As AI migrates more and more into everyday life, we should worry if systems without common sense are making decisions where common sense is needed.
Article
In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. A Winograd schema is a pair of sentences that differ in only one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences (for example, in "The trophy doesn't fit in the suitcase because it is too big/too small", the pronoun "it" refers to the trophy in one variant and to the suitcase in the other). We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection containing one sentence from each pair, and is required to achieve human-level accuracy in choosing the correct disambiguation.
Book
What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition. ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are), our mutual interactions (how we socialise), our conception of reality (our metaphysics), and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently. The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks. Such transformations are testing the foundations of our conceptual frameworks: our current conceptual toolbox is no longer fit to address new ICT-related challenges. This is not only a problem in itself; it is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises it, is therefore to contribute to the updating of our philosophy. It is a constructive goal: the book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily. The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics, and societal expectations towards policymaking within the Digital Agenda for Europe's remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.
Book
Who are we, and how do we relate to each other? This book argues that the explosive developments in Information and Communication Technologies (ICTs) are changing the answer to these fundamental human questions. As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". The personas we adopt in social media, for example, feed into our 'real' lives, so that we begin to live, as Floridi puts it, "onlife". Following those led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution. "Onlife" defines more and more of our daily activity: the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces that are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? This volume argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.