Chapter

Digital Metaphysics

Abstract

How can a reality whose subsistence is only digital and whose existence is only online have ontological consistency? Yet experience shows how the pervasiveness of communication in human life today, and the new possibilities of data analysis opened up by AI, raise real and crucial ethical questions, in the etymological sense of crux. A crux, in fact, is tied to a “judgement” through the Greek term krisis, from which the words “crisis” and “critical” descend: it marks an interpretative passage in which the attribution of meaning is difficult, if not impossible. This concerns not only philological investigation but every “critical” reading of reality. These ethical challenges call into question metaphysics, anthropology and even theology at the same time, by asking what reality can be recognized in virtual relations and, consequently, what epistemological criteria are required for a conscious handling of the big data through which AI and social media actually work and make a profit from human relationships, which are the real product at stake. But to whom does a relationship belong? Who can own symbols and language? The point of access to such questions is the symbolic, anthropological and metaphysical bearing of the relationships themselves.

Article
API-based research is an approach to computational social science and digital sociology based on the extraction of records from the datasets made available by online platforms through their application programming interfaces (APIs). This type of research has allowed the collection of detailed information on large populations, thereby effectively challenging the distinction between qualitative and quantitative methods. Lately, access to this source of data has been significantly reduced after a series of scandals connected to the use of personal information collected through online platforms. The closure of APIs, however, can have positive effects if it encourages researchers to reduce their dependence on mainstream platforms and to explore new sources and ways of collecting records on online interactions that are closer to digital fieldwork.
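As a purely illustrative sketch of what this kind of record extraction can look like in practice, the following Python fragment pages through a query endpoint and accumulates the returned items. The endpoint URL, query parameters and response fields are hypothetical placeholders rather than any specific platform's API; real platform APIs normally also require authentication and rate-limit handling.

  import json
  import urllib.parse
  import urllib.request

  # Hypothetical endpoint used only for illustration; not a real platform API.
  BASE_URL = "https://api.example-platform.org/v1/posts"

  def fetch_records(query, max_pages=3):
      """Collect records matching a query, following simple page-based pagination."""
      records = []
      for page in range(1, max_pages + 1):
          url = f"{BASE_URL}?q={urllib.parse.quote(query)}&page={page}"
          with urllib.request.urlopen(url) as resp:
              payload = json.load(resp)  # assumes a JSON body with an "items" list
          items = payload.get("items", [])
          if not items:
              break  # stop when the platform returns no further records
          records.extend(items)
      return records

  if __name__ == "__main__":
      # Print a short preview of each collected record.
      for record in fetch_records("online interaction"):
          print(record.get("id"), str(record.get("text", ""))[:80])

In a research setting the collected records would be stored and analysed rather than printed, and the same loop would be wrapped with credential handling and compliance with the platform's terms of service.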
Book
Philosophy written in English is overwhelmingly analytic philosophy, and the techniques and predilections of analytic philosophy are not only unhistorical but anti-historical, and hostile to textual commentary. Analytic philosophy usually aspires to a very high degree of clarity and precision of formulation and argument, and it often seeks to be informed by, and consistent with, current natural science. In an earlier era, analytic philosophy aimed at agreement with ordinary linguistic intuitions or common sense beliefs, or both. All of these aspects of the subject sit uneasily with the use of historical texts for philosophical illumination. How, then, can substantial history of philosophy find a place in analytic philosophy? If history of philosophy includes the respectful, intelligent use of writings from the past to address problems that are being debated in the current philosophical journals, then history of philosophy may well belong to analytic philosophy. But if history of philosophy is more than this, if it is concerned with interpreting and reinterpreting a certain canon, or perhaps making a case for extending this canon, its connection with analytic philosophy is less clear. More obscure still is the connection between analytic philosophy and a kind of history of philosophy that is unapologetically antiquarian. This is the kind of history of philosophy that emphasises the status of a philosophical text as one document among others from a faraway intellectual world, and that tries to acquaint us with that world in order to produce understanding of the document. In this book, ten distinguished historians of philosophy, mostly trained in the analytic tradition, explore the tensions between, and the possibilities of reconciling, analytic philosophy and history of philosophy.
Chapter
Alan Turing was one of the most influential thinkers of the 20th century. In 1935, aged 22, he developed the mathematical theory upon which all subsequent stored-program digital computers are modeled. At the outbreak of hostilities with Germany in September 1939, he joined the Government codebreaking team at Bletchley Park, Buckinghamshire, and played a crucial role in deciphering Enigma, the code used by the German armed forces to protect their radio communications. Turing's work on the version of Enigma used by the German navy was vital to the battle for supremacy in the North Atlantic. He also contributed to the attack on the cyphers known as "Fish," which were used by the German High Command for the encryption of signals during the latter part of the war. His contribution helped to shorten the war in Europe by an estimated two years. After the war, his theoretical work led to the development of Britain's first computers at the National Physical Laboratory and the Royal Society Computing Machine Laboratory at Manchester University. Turing was also a founding father of modern cognitive science, theorizing that the cortex at birth is an "unorganized machine" which through "training" becomes organized "into a universal machine or something like it." He went on to develop the use of computers to model biological growth, launching the discipline now referred to as Artificial Life. The papers in this book are the key works for understanding Turing's phenomenal contribution across all these fields. The collection includes Turing's declassified wartime "Treatise on the Enigma"; letters from Turing to Churchill and to codebreakers; lectures, papers, and broadcasts which opened up the concept of AI and its implications; and the paper which formed the genesis of the investigation of Artificial Life.
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used, it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Article
In recent decades, the question of whether a machine can think has been given a different interpretation entirely. The question that has been posed in its place is: could a machine think just by virtue of implementing a computer program? Is the program by itself constitutive of thinking? This is a completely different question because it is not about the physical, causal properties of actual or possible physical systems but rather about the abstract, computational properties of formal computer programs that can be implemented in any sort of substance at all, provided only that the substance is able to carry the program. A fair number of researchers in artificial intelligence (AI) believe the answer to the second question is yes. They believe, furthermore, that they have a scientific test for determining success or failure: the Turing test devised by Alan M. Turing, the founding father of artificial intelligence. The Turing test, as currently understood, is simply this: if a computer can perform in such a way that an expert cannot distinguish its performance from that of a human who has a certain cognitive ability - say, the ability to do addition or to understand Chinese - then the computer also has that ability. So the goal is to design programs that will simulate human cognition in such a way as to pass the Turing test. What is more, such a program would not merely be a model of the mind; it would literally be a mind, in the same sense that a human mind is a mind. By no means does every worker in artificial intelligence accept so extreme a view. A more cautious approach is to think of computer models as being useful in studying the mind in the same way that they are useful in studying the weather, economics or molecular biology. To distinguish these two approaches, the authors call the first strong AI and the second weak AI. It is important to see just how bold an approach strong AI is.
K. G. Souza, Mapping out moral dilemmas of free expression on social media: A closer look at Facebook, Twitter and YouTube (doctoral thesis).
C. Campo, Gli imperdonabili. Adelphi.
J. Derrida, Dissemination. The Athlone Press.