From the Philosophy of AI to the Philosophy of Information

Author: Luciano Floridi

Abstract

Computational and information-theoretic research in philosophy has become increasingly fertile and pervasive, giving rise to a wealth of interesting results. Consequently, a new and vitally important field has emerged, the philosophy of information (PI). This paper introduces PI as the philosophical field concerned with (i) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, and with (ii) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. It is argued that PI is a mature discipline for three reasons: it represents an autonomous field of research; it provides an innovative approach to both traditional and new philosophical topics; and it can stand beside other branches of philosophy, offering a systematic treatment of the conceptual foundations of the world of information and the information society.
Preprint
This paper has been accepted for publication in
The Philosophers’ Magazine
http://www.philosophers.co.uk/index.htm
Digitally signed by Luciano Floridi (Oxford University, IEG), 2004.07.30.
From the Philosophy of AI to the Philosophy of Information
Luciano Floridi
Summertime, and a bottle of juice lies half-empty on the grass. Attracted by the smell,
wasps get inside it but cannot get out of it and eventually drown. Their behaviour is
stupid in many ways: they try to fly through the very surface on which they walk; they
keep hitting the glass, until they are exhausted; they see other corpses inside the bottle
and yet fail to draw any conclusion; they cannot tell each other about the danger, despite
their communication abilities; even if they escape the danger, they do not register it and
will come back inside the bottle; they cannot use any means to help the other wasps. If
you did not know better, you would think Vespula vulgaris to be some kind of robot.
Descartes would certainly agree with you.
As a family of insects, wasps got lucky. Had nature produced juice-bottle-
flowers, they would have long been extinct. Wasps and their environment have been
tuned to each other by natural selection. To us, they are a reminder that fatally stupid
behaviour comes in a bewildering variety of forms. Unfortunately, so does intelligence.
Common sense, experience, learning and rational abilities, communication
skills, memory, planning capacities: these are only some of the essential ingredients that
can make a behaviour (hence the agent so behaving) intelligent. If you think about it, they
are all ways of handling information (mind, not symbols or uninterpreted data, but
information in the semantic sense of the word, more on this presently). So, could it be
that stupid or intelligent behaviour is a function of some hidden information processes?
The question is “too meaningless to deserve discussion”, to quote Turing, but it does
point in the right direction: information is the key.
Suppose the necessary information processing is already in place: although
intelligent behaviour cannot be defined in terms of necessary and sufficient conditions,
it could still be tested contextually and comparatively, as Turing rightly understood.
After all, we do have a sample of intelligent agents, and that’s us, modestly.
Suppose, instead, that the necessary information processing is not yet in place:
could it be engineered? If it could, it might be Turing-tested. Yet whether it can is still
anybody’s guess, or rather faith, despite half a century of research in Artificial
Intelligence (AI). One thing, however, seems to be clear: talking of information
processing helps to explain why our current AI – or, better, AIB (Artificial Intelligent
Behaviour) – systems are overall more stupid than the wasps in the bottle. Our present
technology is actually incapable of processing any kind of information, being
impervious to semantics. IT is as misnamed as “smart weapons”. If you find this
puzzling, consider the following example.
Wasps can navigate very successfully. They can find their way in the garden,
avoid obstacles, collect food, fight or flee other animals, and so forth. This is already far
more than any current AIB system can achieve. A recent confirmation came last March,
when 13 vehicles took part in the Grand Challenge
(http://www.darpa.mil/grandchallenge/), a race sponsored by DARPA (the American
Defence Advanced Research Projects Agency). The rough and unpredictable course
consisted of 142 miles through the Mojave Desert between California and Nevada.
Each vehicle had to navigate unmanned, unaided, un-pre-programmed and without any
remote-control, by relying only on its AIB. The prize was $1m to the team whose
vehicle was first to cross the finishing line within ten hours. Eventually, the best
performance was offered by Sandstorm, a machine built by a team from Carnegie
Mellon University (http://www.redteamracing.org/). Sandstorm managed to navigate
7.4 miles before its tyres caught fire.
This is how bad the situation is with state-of-the-art AIB systems. Despite its
apparent simplicity, navigation seems to require some sort of intelligence. Nobody really knows
how to achieve the same result by means of advanced sensors and computational
capabilities.
Sometimes one may forget that the most successful AIB systems are those lucky
enough to have their environments shaped around their limits, like a robomower
(http://www.friendlyrobotics.co.uk/), not vice versa. Put artificial agents in their digital
soup, the Internet, and you will find them happily buzzing. The real difficulty is to cope
with the unpredictable world out there, which may also be full of other collaborative or
competing agents. This is known as the frame problem: how a situated agent can
represent a changing environment and interact with it successfully. Nobody has much of
a clue, so human intervention is constantly required, as with the robots on Mars
(http://marsrovers.jpl.nasa.gov/mission/spacecraft_rover_brains.html). Our most
successful artificial agents operating in the wild are those to which we are related as
homunculi to their bodies.
Consider now the explanation of AI failure, namely the lack of information
processing capacities. Our current computers, of any architecture, generation and
physical making, analogue or digital, Newtonian or quantum, sequential, distributed or
parallel, with any number of processors, any amount of RAM, any size of memory,
whether embodied, situated, simulated or just theoretical, never deal with information,
only with data. No philosophical hair-splitting here. Data are mere (patterns of physical)
differences and identities. They are uninterpreted and tend to stay so, no matter how
much they are crunched or kneaded. Nowadays, we think of data in Boolean terms –
ones vs. zeros, ups vs. downs in the spin of an electron, high vs. low voltage – but of
course artificial devices can detect and record analogue data equally well. The point is
not the binary nature of the vocabulary, but the fact that strings of data can be more or
less well-formed according to some rules, and that a computer can then handle the latter
rather successfully. So, whenever the behaviour in question is reducible to a matter of
transducing, encoding, decoding or modifying uninterpreted data according to some
syntax, computers are likely to be successful. This is why they are often and rightly
described as purely syntactic machines.
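The point can be made concrete with a toy sketch of my own (the vocabulary and rules below are illustrative, not anything the essay commits to): a routine that validates and transforms strings of data entirely by their shape, with no access to what, if anything, the data mean.

```python
# Toy illustration of a "purely syntactic" machine. It checks whether a
# string of data is well-formed (here: a non-empty run of 0s and 1s) and
# transforms it (here: symbol-for-symbol negation). At no point does the
# code know, or need to know, what the symbols stand for.

def well_formed(data: str) -> bool:
    """Syntactic check: every symbol belongs to the vocabulary {0, 1}."""
    return len(data) > 0 and all(ch in "01" for ch in data)

def negate(data: str) -> str:
    """Syntactic transformation: swap each symbol for the other one."""
    assert well_formed(data)
    return "".join("1" if ch == "0" else "0" for ch in data)

print(negate("0110"))  # -> 1001, whatever "0110" may have meant
```

Whether the symbols encode temperatures, votes or nothing at all makes no difference to the routine; that indifference is precisely what "purely syntactic" marks.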
Of course, “purely syntactic” is a comparative abstraction, like “virtually fat
free”. It means that traces of information are negligible, not that they are completely
absent. Computers are indeed capable of (responding to) elementary discrimination.
They can detect identities as equalities and differences as simple lacks of identities
between the relata (but not in terms of appreciation of the peculiar and rich features of
the entities involved). Admittedly, this is already a proto-semantic act. So, to call a
computer a syntactic machine is to stress that discrimination is a process far too poor to
generate anything resembling semanticisation. It only suffices to guarantee an efficient
manipulation of syntactically-friendly data. Given that it is also the only vaguely proto-
semantic act that (present) computers are able to perform as “cognitive systems”, the
Grand Challenge looks more like a Mission Impossible.
Problems become immediately insurmountable when their solutions require the
successful manipulation of information, that is, of well-formed data that are also
meaningful. Semantics is the snag. How do data acquire their meaning? Solving what is
known as the symbol grounding problem in a way that could be effectively engineered
would be a crucial step towards solving the frame problem. Unfortunately, once again
we still lack a clear understanding of how precisely the symbol grounding problem is
solved in animals, including primates like us, let alone having a blueprint of a
physically implementable approach.
What we do know is that processing information is exactly what intelligent
agents like us are good at. So much so that fully and normally developed human beings
seem cocooned in their own informational space. Strictly speaking, we do not
consciously cognise pure meaningless data. The genuine perception of completely
uninterpreted data might be possible, perhaps under very special circumstances, but it is
not the norm, and cannot be part of a continuously sustainable, conscious experience, at
least because we never perceive pure data in isolation but always in a semantic context,
which inevitably forces some meaning onto them. We are so used to dealing with rich
semantic contents that we mistake dramatically impoverished or variously interpretable
information for something completely devoid of any semantic content. Yet what goes
under the name of “raw data” are data that might lack a specific and relevant
interpretation, not any interpretation.
To sum up, data, as (interpretable but still) uninterpreted (patterns of physical)
differences and identities, represent the semantic upper-limit of current and foreseeable
AIB systems. They also are the semantic lower-limit of natural intelligent behaviour
(NIB) systems, which normally deal with (semantic) information. Ingenious layers of
interfaces exploit this threshold and make possible human-computer interaction.
The suggestion concerning human informational-cocooning and machines’ data-
entrapment becomes less controversial once it is carefully distinguished from five theses
that it does not deny. One may argue that:
1) young NIB systems, for example Wittgenstein’s young Augustine, seem to go
through a formative process in which, at some stage, they experience only data, not
information. There is a stage in the history of a human being at which we are
information virgins;
2) adult NIBs, for example Turing’s clerk, the adult John Searle or a medieval copyist,
could behave or be used as if they were perceiving only data, not information. One
could behave like a child – or an Intel processor, or a Turing Machine – if one is placed
in a Chinese Room or, more realistically, while copying a Greek manuscript without
knowing even the alphabet of the language but just the physical shape of the letters;
3) cognitively, psychologically or mentally impaired NIBs, including the old Nietzsche,
might also act like children, and fail to experience information (like “this is a horse”)
when exposed to data;
4) there is certainly a neurochemical level at which NIBs process data, not yet
information;
5) NIBs’ semantic constraints might be comparable to, or even causally connected with,
AIBs’ syntactic constraints, at some adequate level of abstraction.
These five theses are perfectly fine and consistent with the point made above,
which is that (current) AIBs’ achievements are constrained by syntactical resources,
whereas NIBs’ achievements are constrained by semantic ones.
There is a semantic threshold between us and our machines and we do not know
how to make the latter overcome it. Indeed, we know very little about how we ourselves
build the cohesive and successful informational narratives that we inhabit. If this is true,
then artificial and human agents belong to different worlds and one may expect them
not only to have different skills but also to make different sorts of mistakes. Some
evidence in this respect is provided by the Wason Selection Task.
Imagine a pack of cards where each card has a letter written on one side and a
number written on the other side. You are shown the following four cards: [E], [T], [4],
[7]. Suppose, in addition, that you are told that if a card has a vowel on one side, then it
has an even number on the other side. Which card or cards – as few as possible – would
you turn over, in order to check whether the rule holds?
While you think about it, it may be consoling to know that only about 5% of the
educated population gives the correct answer, which is [E] and [7]. However, most
people have no problems with a semantic version of the same exercise, in which the rule
is “if you borrow my car, then you have to fill up the tank” and the cards say: [borrowed
the car], [did not borrow the car], [tank full], [tank empty].
In both cases, a computer obtains the correct answer by treating each problem
syntactically. The test reminds us that intelligent behaviour relies on semantic
understanding more than on syntactical manipulation and that, while both can easily
achieve the same goals efficiently and successfully, semantically- and syntactically-
based agents are prone to different sorts of mistakes.
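The computer's syntactic route through the task can be sketched as follows (the card encoding and helper names are my own illustration): treat the rule as material implication and turn over exactly those cards whose hidden side could falsify it.

```python
# The Wason selection task treated syntactically. A card shows either a
# letter or a number; the rule "if a vowel on one side, then an even
# number on the other" is material implication, so a card must be turned
# over exactly when its hidden side could falsify the rule.

def is_vowel(x):
    return isinstance(x, str) and x in "AEIOU"

def must_turn(visible):
    """True iff some hidden face would falsify 'if vowel then even'."""
    if is_vowel(visible):
        return True          # e.g. [E]: an odd number behind falsifies
    if isinstance(visible, int) and visible % 2 != 0:
        return True          # e.g. [7]: a vowel behind falsifies
    return False             # [T] and [4] can never falsify the rule

cards = ["E", "T", 4, 7]
print([c for c in cards if must_turn(c)])  # -> ['E', 7]
```

Run on [E], [T], [4], [7], the check selects E and 7, the answer only about 5% of people give; the machine never needs to know what a vowel, or a borrowed car, means.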
All this should be fairly trivial, yet it is still common to find people comparing
human and artificial chess players. In 1965, the Russian mathematician Alexander
Kronrod remarked that chess was the fruit fly of artificial intelligence. This might still be
an acceptable point of view, had AI tried to win chess tournaments by building
computers that (learn how to) play chess the human way. But it hasn’t, and as a result
the chess-fly has caused some conceptual confusion.
Playing chess well requires quite a lot of intelligence if the player is human, but
none whatsoever if the game is played computationally. When the IBM computer Deep Blue
won against the world chess champion Garry Kasparov in 1997, it was a sort of Pyrrhic
victory for classic AI (http://www.sciencemag.org/cgi/content/full/276/5318/1518).
Deep Blue is only a marvellous syntactical engine, with a great memory but virtually
zero AIB.
John McCarthy, one of the fathers of AI, immediately recognised that Deep Blue
said more about the nature of chess than about intelligent behaviour. He rightly
complained about the betrayal of the original idea
(http://www.sciencemag.org/cgi/content/full/276/5318/1518), but he drew the wrong
lesson. Contrary to his suggestion, AI should not try to simulate human intelligent
behaviour. This is the glass we should stop hitting. AI should try to emulate its results.
Emulation is not to be confused with some form of functionalism, whereby the
same function – lawn-mowing, dish-washing, chess-playing – is implemented by
different physical systems. Emulation is rather connected to “outcomism”: agents
emulating each other can achieve the same result by radically different strategies and
processes. The end underdetermines the means.
Outcomism is technologically fascinating and rather successful, witness the
spread of IT in our society. Unfortunately, it is eyes-crossingly dull when it comes to
its philosophical implications, which can be summarised in two words: “big deal”. So
should this be the end of our interest in the philosophy of AI? Not at all.
The failure of mimetic AI (AIB must simulate NIB) has been conceptually very
fertile. By showing that what matters is information, the philosophy of AI has ushered
in a new paradigm, the philosophy of information.
Elsewhere I have defined PI as the philosophical field concerned with (a) the
critical investigation of the conceptual nature and basic principles of information,
including its dynamics, utilisation and sciences, and (b) the elaboration and application
of information-theoretic and computational methodologies to philosophical problems. It
would be impossible to analyse here critically and in detail the nature of PI, but a
reference to the Blackwell Guide to the Philosophy of Computing and Information
(http://www.blackwellpublishing.com/pci/) and three schematic points may suffice to
give a general idea.
First, by trying to circumvent the semantic threshold and squeeze some
information processing out of mechanics and syntax, AI has opened up a large and very
rich variety of research areas, which are conceptually challenging per se and very
interesting for their potential implications and applications in philosophy. Part of this
innovation goes under the name of New AI. Consider, for example, situated robotics,
neural networks, multi-agent systems, Bayesian systems, machine-learning, cellular
automata, artificial life systems, epistemic logic and non-monotonic reasoning.
Philosophical issues no longer look the same once you have been exposed to any of
these fields.
Second, ironically, artificial simulations have failed to reproduce NIB, but have
made available environments where philosophical theories can be simulated and tested
“in silico”. This is true not only for logic-based problems, as one may expect, but also
for ethical, linguistic and epistemological issues, for example, which can be modelled
through digital simulations.
Third, by realising that it is not so much the process (computing) that has
revolutionised our society and our conceptual schemes, as the broader phenomenon of
information, the philosophy of AI has helped to call attention to new conceptual
problems that require our philosophical attention. Computer ethics is a good example.
So the Philosophy of AI has not lived in vain. Obviously there is plenty of
exciting work that lies ahead. All we need to do is to replace an I for an I.
... Bilgi kirliliği, bilgilerin doğru ve güvenilir olmadığını ifade eden genel bir terimdir. Floridi (2011), bilgi kirliliğini "Bilgi ortamının, doğruluğu, güvenilirliği ve kalitesi düşük bilgilerle dolması durumu." olarak tanımlamaktadır. ...
... Bu tür bilgi kirliliği, bir olay veya durumu tam olarak anlamayı zorlaştırır ve yanıltıcı olabilir. Floridi (2011), eksik bilgiyi "bilginin bütünlüğünü bozan ve tam anlamayı engelleyen bilgiler" olarak tanımlar. Bilgi kirliliği, sanal örgütlerde ve genel olarak bilgi toplumunda ciddi sorunlar yaratır. ...
... Yanıltıcı finansal bilgiler, yatırım kararlarında hatalara ve mali kayıplara yol açabilir. Özellikle sahte yatırım fırsatları veya yanlış ekonomik veriler, bireylerin finansal sağlığını riske atabilir (Floridi, 2011). Aynı zamanda bilgi kirliliği, bireylerin eğitim ve bilgi edinme süreçlerini de zedeler. ...
Article
Full-text available
Bu çalışma, sanal örgütlerde bilgi kirliliğinin çölyak hastaları üzerindeki etkilerini ve bu kirliliğin nasıl yönetilebileceğini incelemektedir. Sanal örgütler, coğrafi olarak dağılmış bireylerin dijital iletişim araçları aracılığıyla koordinasyon sağladığı esnek ve dinamik yapılar olarak tanımlanır. Bilgi kirliliği ise doğru ve güvenilir bilgilerin erişimini zorlaştıran ve yanlış, yanıltıcı veya eksik bilgilerin yayılmasından kaynaklanan bir sorundur. Çalışmada, çölyak hastalarının bilgiye erişimlerinde internet ve sosyal medya platformlarının öncelikli kaynaklar olduğu, ancak bu platformlardaki bilgilerin doğruluğu konusunda yaşanan tereddütlerin hastaların doğru bilgiye ulaşmalarını zorlaştırdığı tespit edilmiştir. Nitel analiz yöntemleri kullanılarak, bilgi eksikliği, bilginin doğruluğu, bilginin benzerliği, bilginin tutarsızlığı ve bilgi kaynaklarının güvenilirliği gibi temalar üzerinde durulmuştur. Araştırma sonuçlarına göre bilgi kirliliği, çölyak hastalarının doğru bilgiye erişimini zorlaştırmakta ve buna bağlı olarak sağlıklarını olumsuz yönde etkilemektedir. Özellikle sosyal medya platformlarında yayılan yanıltıcı ve eksik bilgiler, hastaların yanlış tedavi yöntemlerine başvurmasına veya zararlı diyet uygulamalarına yönelmesine neden olabilmektedir. Bilgi kirliliğini önlemek ve çölyak hastalarının doğru bilgiye erişimini sağlamak için çeşitli stratejiler önerilmektedir. Bunlar arasında bilgi doğrulama protokollerinin geliştirilmesi, dijital okuryazarlık eğitimlerinin verilmesi, güvenilir bilgi kaynaklarının teşvik edilmesi, yapay zekâ ve veri analitiği kullanımı ve sağlık profesyonelleri ile iş birliği yapılması bulunmaktadır. Bilgi doğrulama protokolleri, internette yayılan bilgilerin doğruluğunu sistematik olarak değerlendirme sürecini içerirken, dijital okuryazarlık eğitimleri hastaların bilgi kirliliğini fark etmelerini ve doğru bilgiyi ayırt edebilmelerini sağlar. 
Ayrıca, güvenilir bilgi kaynaklarının teşvik edilmesi ve sağlık profesyonellerinin doğru bilgilerle hastaları bilgilendirmesi, bilgi kirliliğinin azaltılmasında önemli bir rol oynar. Yapay zekâ ve veri analitiği ise yanıltıcı bilgileri tespit etmede ve doğru bilgiye hızlı erişim sağlamada etkili araçlar sunar. Bu stratejilerin uygulanması, çölyak hastalarının sağlıklı ve bilinçli kararlar almasına yardımcı olacak ve sanal örgütlerde bilgi kirliliğinin etkilerini minimize edecektir.
... This depicts Rescher's (2009) notion of generative unpredictability, where both the human and digital components may encounter epistemic voids (Sullivan et al., 2023b). For example, while the oracle might process vast datasets and provide recommendations, it may not fully grasp the nuanced human values or the unpredictable nature of political and social responses to its advice (Floridi, 2013). This cognitive gap highlights the limitations in both human and digital cognition when confronting complex global challenges. ...
... For instance, quantum computing challenges traditional notions of binary logic, operating in a state of superposition that challenges the conventional understanding of computation. Additionally, digital representations of identity and reality, such as those created through deepfakes or virtual environments, blur the lines between what is real and what is artificially constructed, leading to epistemic uncertainty about authenticity in the digital first age (Floridi, 2013). ...
Conference Paper
Full-text available
Digital systems rely on known data, algorithms, and predefined parameters, are often precise in their calculations. As a consequence, the digital-first paradigm is approaching fast. However, this paper highlights the unknowabilities emerging from the interplay between digital technologies and human agents. By introducing the concept of symbiotic unknowability through an imaginary narrative featuring a future Digital Oracle, it critiques the current digitalization trajectory reliant on known data. The paper argues for acknowledging the limitations of digital systems to deal with the unknown and complex realities. It emphasizes the necessity of balancing reliance on data and human judgment by highlighting the interaction between human judgment and digital algorithms introducing unique unknowabilities, that neither can foresee independently. This highlights the importance of recognizing and addressing these emergent unknowns in the evolving digital landscape.
... Despite the non-intuitive nature of applying the mathematical axioms of quantum mechanics to macro-level processes, the application of quantum formalisms to model human cognitive behaviors is gaining ascendency and broadening its application to new areas outside its original purpose [41][42][43][44][45][46][47][48][49][50][51]. For example, the concept of measurement/observation shares similarities to social anthropology by recognizing that observation in both fields changes the system [52]. In spite of these reasons, quantum formalisms require further explanation to clarify their benefits in modeling human behavior. ...
Article
Full-text available
Artificial intelligence is set to incorporate additional decision space that has traditionally been the purview of humans. However, AI systems that support decision making also entail the rationalization of AI outputs by humans. Yet, incongruencies between AI and human rationalization processes may introduce uncertainties in human decision making, which require new conceptualizations to improve the predictability of these interactions. The application of quantum probability theory (QPT) to human cognition is on the ascent and warrants potential consideration to human-AI decision making to improve these outcomes. This perspective paper explores how QPT may be applied to human-AI interactions and contributes by integrating these concepts into human-in-the-loop decision making. To capture this and offer a more comprehensive conceptualization, we use human-in-the-loop constructs to explicate how recent applications of QPT can ameliorate the models of interaction by providing a novel way to capture these behaviors. Followed by a summary of the challenges posed by human-in-the-loop systems, we discuss newer theories that advance models of the cognitive system by using quantum probability formalisms. We conclude by outlining areas of promising future research in human-AI decision making in which the proposed methods may apply.
... Furthermore, this paper also aims to discuss some of this concept's interesting properties and present possible avenues for further developing this interesting proposal. It is well known that there is enormous interest in the philosophical aspects of the concept of information (e.g., Adriaans and Benthem, 2008;Burgin, 2010;Floridi, 2011;Dodig-Crnkovic and Burgin, 2019), but we feel that Krzanowski's proposal is particularly attractive with some great research potential, especially for studying physical reality. It would therefore be helpful, in our opinion, to position this concept within the vast array of philosophical problems related to the notion of information, as well as introduce a few distinctions to allow us to order the discourse. ...
Article
Full-text available
As one may have noticed, the title of this paper is somewhat provocative. We found Roman Krzanowski’s (2020a,b,c; 2022) proposed approach to the problem of information very intriguing. Our aim here is to highlight some advantages when it comes to answering some fundamental questions in the philosophy of physics and metaphysics, as well as the philosophy of information and computer science. This issue is of great importance, so we propose that the introduction of some subtle distinctions between ontological and epistemological information can be regarded as being analogous to G.F.R. Ellis’s analyses of the passage of time in his concept of the Crystallizing Block Universe (Ellis and Goswami, 2012). This analogy could be useful when further studying the relations between different types of information. We also suggest some subjects for further study, ones where Krzanowski’s proposal could serve as a very solid foundation for examining traditional metaphysical issues by combining classical philosophical doctrines with the new approach.
... Recently, a great deal of work in philosophy and the social sciences has sought to define or delineate various sorts of misleading content, including misinformation, disinformation, malinformation, and fake news (Fallis, 2016;Weatherall and O'Connor, 2024). A typical claim, especially earlier in this literature, was to define terms like misinformation and disinformation as involving false or inaccurate content (Floridi, 1996(Floridi, , 2011Fetzer, 2004). But increasingly it is recognized that much content is true or accurate, but nonetheless misleading (Fallis, 2015;Wardle and Derakhshan, 2017). ...
Article
There are myriad techniques industry actors use to shape the public understanding of science. While a naive view might assume these techniques typically involve fraud or outright deception, the truth is more nuanced. This paper analyzes industrial distraction, a common technique where industry actors fund and share research that is accurate, often high quality, but nonetheless misleading on important matters of fact. This involves reshaping causal understanding of phenomena with distracting information. Using case studies and causal models, we illustrate how this impacts belief and decision making even for rational learners, informing science policy and debates about misleading content.
... A person finds themself in a world of numerous interactions, where their personality is identified and functions. This new type of personality formation occurs in the context of a new phenomenon of personal online identification (personal identity online) (Floridi, 2011;Rodogno, 2011). This latest extension of the personality, associated with hyperconnectivity, provided every inhabitant of our planet with the possibility of unlimited access to other inhabitants (e.g., via social networks), to any place on Earth (e.g., via Google Maps), and to any piece of knowledge (for example, using Wikipedia) (Serres, 2012). ...
Preprint
Full-text available
This article explores the evolution of constructionism as an educational framework, tracing its relevance and transformation across three pivotal eras: the advent of personal computing, the networked society, and the current era of generative AI. Rooted in Seymour Papert constructionist philosophy, this study examines how constructionist principles align with the expanding role of digital technology in personal and collective learning. We discuss the transformation of educational environments from hierarchical instructionism to constructionist models that emphasize learner autonomy and interactive, creative engagement. Central to this analysis is the concept of an expanded personality, wherein digital tools and AI integration fundamentally reshape individual self-perception and social interactions. By integrating constructionism into the paradigm of smart education, we propose it as a foundational approach to personalized and democratized learning. Our findings underscore constructionism enduring relevance in navigating the complexities of technology-driven education, providing insights for educators and policymakers seeking to harness digital innovations to foster adaptive, student-centered learning experiences.
... Despite the non-intuitive nature of applying the mathematical axioms of quantum mechanics to macro-level processes, the application of quantum formalisms to model human cognitive behaviors is gaining ascendency and broadening its application to new areas outside its original purpose [38][39][40][41][42][43][44][45][46][47]. For example, the concept of measurement/observation shares similarities to social anthropology by recognizing that observation in both fields changes the system [49]. In spite of these reasons, the application of quantum formalisms can still require additional explanation to understand its benefits for modeling human behaviors. ...
Preprint
Full-text available
Artificial Intelligence (AI) is set to incorporate an expanded decision space that has traditionally been the purview of humans. However, AI systems that support decision-making also entail human rationalization of AI outputs. Incongruencies between AI and human rationalization processes may introduce uncertainties into human decision-making, necessitating new conceptualizations to improve the predictability of these interactions. The application of quantum probability theory (QPT) to human cognition is on the ascent and warrants consideration in human-AI decision-making to improve outcomes.
... The phrase "More human than human is our motto" (Blade Runner, 1982), spoken by the fictional character Eldon Tyrell, resonates as a prophetic declaration within the reflection on the convergence between humans and intelligent algorithms, inviting an inquiry into the philosophical and sociological implications of this symbiosis, underscoring the intent to overcome the natural limitations of the human being through the integration of advanced algorithmic systems, and outlining a future vision in which the distinction between natural and artificial becomes increasingly blurred (Robinson 2020). This is science fiction that becomes a counter-space in which to investigate the reactions, visions, and adaptive capacities of the human being toward these technologies, yet it does not stray far from reality, since from the inforg (Floridi 2013) to the quantified self (Lupton 2016) we have arrived at the constitution of further technoscientific products that have acquired the status of social subjects insofar as they are ethically agentive and relationally active (Grassi 2020): generative entities, (GAN-based) algorithms that have become an active part of a social architectural system in which they coexist with the person and that, above all, cross one of the boundaries of the embryonic process, since they are capable of generating other "products" (Dall.e and Midjourney) and other "similars", up to the creation of a shared and meaningful symbolic apparatus, an aspect until now entirely the competence of the person. ...
Chapter
The essay explores the evolution of social identities in the era of advanced technologies, reflecting on how the boundaries of corporeality, long treated as the ultimate limit of the human being, have become increasingly malleable and intertwined with technoscientific innovations. From a historical-theoretical perspective, it analyses the growing presence of algo-agents in relational dynamics, recognising them as social actors and proposing a categorisation of the artificial subjects with whom social space is shared. The history of the body narrates how the definition of its nature undergoes a constant permeation of its own confines, limits, and frontiers, surpassing any archaic form of dualism to weave a reticular structure that simultaneously encompasses the concepts of organic, artificial, natural, prosthetic, human, and machinic; it can thus be summarised as the narrative of the technosciences that make the body a complex field of testimonies and relations. In the constant confrontation with the universe of artificial devices, the identity of the individual has been, and continues to be, articulated in its structure as a naturally technological entity, mediated by the gaze of the artifices adopted and by their introduction into the everyday existence of the person and their manifestation. This process is not merely an act of self-representation but is deeply rooted in the social and cultural dynamics that shape the perception of self and otherness. Through this constellation of non-
Article
In this article, we present and critically analyse the technicist approach in media theory. We address one of its most radical proposals, namely: the elimination of the dimension of meaning and of human agential instances in favour of the dimension of information and of automated systems for processing signals and data. In our analysis, we consider the difficulties of the eliminationist gesture and offer an alternative approach that attends to the technical-material aspects without eliminating the dimension of meaning. It is an approach based on Derridean reflections on the trace. We examine the mediation of the trace with a view to three decisive aspects: the formation of logical relations and formal languages, the functioning of information systems, and the production of significations.
Article
Full-text available
Large Language Models (LLMs) raise challenges that can be examined from a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. Regarding LLMs, the main pros concern technological innovation, economic development, and the achievement of social goals and values. The disadvantages mainly concern cases of risks and harms generated by means of LLMs. The epistemological approach examines how LLMs produce outputs, information, knowledge, and a representation of reality in ways that differ from those followed by human beings. To face the impact of LLMs, our paper contends that the epistemological approach should be examined as a priority: identifying the risks and opportunities of LLMs also depends on considering how this form of artificial intelligence works from an epistemological point of view. To this end, our analysis compares the epistemology of LLMs with that of law, in order to highlight at least five issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence; and (v) relation to truth and accuracy. The epistemological analysis of these issues, preliminary to the normative one, lays the foundations to better frame the challenges and opportunities arising from the use of LLMs.
Article
Full-text available
Identifying and utilizing information is central to reproductive success. We study a scenario where a multicellular colony has to trade off between the utility of strategies for investment in persistence or progeny and the (Shannon-type) relevant information necessary to realize these strategies. We develop a general approach to treat such problems, which involve iterated games where utility is determined by iterated play of a strategy and where, in turn, informational processing constraints limit the possible strategies.