Book

The Digital Mind: How Science Is Redefining Humanity

Authors: Arlindo Oliveira

Abstract

What do computers, cells, and brains have in common? Computers are electronic devices designed by humans; cells are biological entities crafted by evolution; brains are the containers and creators of our minds. But all are, in one way or another, information-processing devices. The power of the human brain is, so far, unequaled by any existing machine or known living being. Over eons of evolution, the brain has enabled us to develop tools and technology to make our lives easier. Our brains have even allowed us to develop computers that are almost as powerful as the human brain itself. In this book, Arlindo Oliveira describes how advances in science and technology could enable us to create digital minds. Exponential growth is a pattern built deep into the scheme of life, but technological change now promises to outstrip even evolutionary change. Oliveira describes technological and scientific advances that range from the discovery of the laws that govern the behavior of electromagnetic fields to the development of computers. He calls natural selection the ultimate algorithm, discusses genetics and the evolution of the central nervous system, and describes the role that computer imaging has played in understanding and modeling the brain. Having considered the behavior of the unique system that creates a mind, he turns to an unavoidable question: Is the human brain the only system that can host a mind? If digital minds come into existence -- and, Oliveira says, it is difficult to argue that they will not -- what are the social, legal, and ethical implications? Will digital minds be our partners, or our rivals? © 2017 Massachusetts Institute of Technology. All rights reserved.
... Notwithstanding, there are some applications (e.g., image recognition) in which ANNs are not even close to human performance (in terms of processing speed and accuracy), since only a small fraction of the functioning of our neural circuits is known [220] and it is not yet possible to implement that behaviour in an ANN. (In fact, the brains of Drosophila and of the nematode are the only two brains that are known in detail, and mapping them took over 100 years of research [221,222].) ...
... Like biological neurons, the neuron units in an ANN fire only when the amount of excitation is high; otherwise, they remain silent [221]. ...
... Nevertheless, the work of McCulloch, Pitts, Rosenblatt, Ivakhnenko, and Lapa comprises the most relevant approaches that leveraged the development of present-day ANNs [221]. ...
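The threshold firing described in the excerpts above corresponds to the classic McCulloch-Pitts unit. The following sketch is a minimal illustration of that behaviour, not code from the cited thesis; the weights, threshold, and example inputs are hypothetical.

```python
import numpy as np

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (output 1) only when the weighted excitation reaches
    the threshold; otherwise remain silent (output 0)."""
    excitation = np.dot(inputs, weights)
    return 1 if excitation >= threshold else 0

# Hypothetical example: with unit weights and threshold 2, the unit
# computes a logical AND of its two binary inputs.
print(mcculloch_pitts_unit(np.array([1, 1]), np.array([1.0, 1.0]), 2.0))  # fires: 1
print(mcculloch_pitts_unit(np.array([1, 0]), np.array([1.0, 1.0]), 2.0))  # silent: 0
```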
Thesis
Full-text available
Optimisation is a branch of mathematics developed to find the optimal solutions, among all the possible ones, for a given problem. Optimisation techniques are currently employed in engineering, computing, and industrial problems. Optimisation is therefore a very active research area, leading to the publication of a large number of methods for solving specific problems to optimality. This dissertation focuses on the adaptation of two nature-inspired algorithms that, based on optimisation techniques, are able to compute approximations of the zeros of polynomials and the roots of non-linear equations and of systems of non-linear equations. Although many iterative methods for finding all the roots of a given function already exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results due to accumulating rounding errors; (b) good initial approximations to the roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives, which, besides being computationally intensive, is not always possible. These drawbacks motivated the use of Particle Swarm Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are known, respectively, for their ability to explore high-dimensional spaces (not requiring good initial approximations) and for their capability to model complex problems. Moreover, neither method needs repeated deflations or derivative information. The algorithms are described throughout this document and tested on a suite of hard numerical problems in science and engineering. The results were compared with several results available in the literature and with the well-known Durand–Kerner method, showing that both algorithms are effective at solving the numerical problems considered.
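For reference, the Durand–Kerner method used as the baseline above is a derivative-free, deflation-free iteration that refines approximations to all roots of a polynomial simultaneously: z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j). The sketch below is a minimal, generic implementation of that textbook method, not code from the thesis; the example polynomial and iteration count are hypothetical.

```python
import numpy as np

def durand_kerner(coeffs, iterations=100):
    """Approximate all roots of the polynomial with coefficients
    `coeffs` (highest degree first) by updating every root estimate
    simultaneously; no derivatives or deflation required."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]  # normalise to a monic polynomial
    n = len(c) - 1
    # Common choice of starting values: powers of a complex number
    # that is neither real nor on the unit circle.
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iterations):
        for i in range(n):
            numerator = np.polyval(c, z[i])
            denominator = np.prod(z[i] - np.delete(z, i))
            z[i] -= numerator / denominator
    return z

# Hypothetical example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, and 3.
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```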
... That is why the Internet stands as the great protagonist of the last twenty-five years, having gone from 16 million global users in 1995 to 4.1 billion in 2019 (International Telecommunication Union), a figure corresponding to 55 percent of the world's population. The growth in the number of users, as well as of contributors to the creation of content, is largely what has increased its usefulness, and it is what drove, in the middle of the last decade, the leap from Web 1.0, a rigid web of 'the few', to what is known today as Web 2.0, a more 'social' and 'open' web that has given rise to the development of millions of applications and to the generation of data in enormous quantities (Oliveira, 2017). Broadly speaking, Web 2.0 is responsible for the changes in how we generate and access information, as well as in how we communicate, shop, consume leisure, protest, and access services, etc. (Oliveira, 2017). ...
... Citing the latest report on ICT use carried out by the INE in 2019, 91.4% of Spanish households have an Internet connection in some access modality. ...
Article
Full-text available
This paper is intended as an approach to how digitalisation may modify social intervention in light of recent developments in artificial intelligence. The research focuses on the framework of social services, and in particular on the figure of Social Work; however, the findings are generalisable to any discipline operating in social intervention (Social Psychology, Social Education, Pedagogy, Sociocultural Animation, etc.). We move through the theoretical perspectives and scientific worldviews that analyse the technology-society relationship, with the aim of framing the present study and providing interpretive frameworks for the social perception of technology. And we analyse the real impact digitalisation is having in the field of Social Work, using mixed methodologies that combine quantitative techniques (surveys of professionals) with qualitative techniques (interviews with key informants from the technology sector and from the social intervention disciplines). In short, the research questions motivating this work are exploratory in nature: to assess the real penetration of information technologies; how they have been received by professionals and citizens; and how professionals perceive the impact on their professional practice. A snapshot of the present and a forecast of the possible future scenarios taking shape in social intervention.
... Artificial General Intelligence can be seen as an intermediate stage between what we have now, a kind of Artificial Specialized Intelligence that performs very well in restricted domains, and a conceivable future Super-intelligence that might endow artificial systems with the capability to exceed human performance in many, if not all, relevant domains, possibly including leadership. Some authors (Oliveira, 2017) are now posing the following question: Is the human brain the only system that can host a mind? If digital minds come into existence, and the cited author states that it is difficult to argue that they will not, we have to face all the legal and ethical implications of such a possibility. ...
... Even if we admit, as I could, that some simple type of "consciousness" might emerge from very complex interactions of more primitive forms of intelligence included in AI-based systems, we cannot be sure that such complexity will be reached with current "in silico" hardware systems. Moreover, the possibility of either downloading a mind or making it evolve from a simpler digital mind, and here I agree with the ideas expressed in "The Digital Mind" (Oliveira, 2017), would require a reverse-engineering capability of the brain that does not yet exist or, for the latter alternative, a kind of real body, full of sophisticated sensors, which is not available today. In short, to replicate "in silico" what exists "in vivo" in the biological brain seems, for now, to be out of our grasp as far as we can foresee on scientific grounds. ...
Article
Full-text available
When planting our human print in a new technology-driven world we should ask, remembering Neil Armstrong in 1969: "after many small steps for AI researchers, will the result be a giant leap into the unknown for mankind?" An "Artificial Intelligence-first" world is being preached all over the media by many responsible players in the economic and scientific communities. This letter states our belief in the potential of AI, including its major and decisive role in computer science and engineering, while warning against the current hyping of its near future. Although quite excited by several recent interesting revelations about the future of AI, we argue here in favor of a more cautious interpretation of the potential reach of current and future AI-based systems. We also include some personal perspectives on simple remedies for preventing recognized possible dangers, and advocate a set of practices and principles that may prevent the development of AI-based systems prone to misuse. Accountable "data curators", appropriate Software Engineering specification methods, the inclusion, when needed, of the "human in the loop", and software agents with emotion-like states may be important factors leading to more secure AI-based systems. Moreover, inseminating ART into Artificial Intelligence, ART standing for Accountability, Responsibility, and Transparency, also becomes mandatory for trustworthy AI-based systems. This letter is an abbreviation of a more substantial article to be published in the IJCA journal.
Thesis
Full-text available
Since the beginning of the 21st century, a universal concern has made itself present across the board: what will become of us, humans, as the most intimate and inner realms of body and mind undergo technicisation? A collection of inter-generational narratives was gathered to address this question, which encompasses the anxieties of contemporary life. The memories and trajectories of members of four upper-class families living in the city of Oporto since the second half of the 19th century were chosen to guide us through the individual as well as social limitations, preoccupations, strategies, and anticipations of daily household life and personal intimacy. These have been analyzed in terms of the technological interference on individuals, and of the objects correspondingly inducing that very interference, in the story of each narrator. Chronological accounts follow, presenting costly novelties and low-cost technologies for household and personal use. This work is a contribution to the debate on the human in the present and in the future, given that the amplification of skills and competences afforded by technology necessarily confers on it a mesmerizingly irrefusable character, for now.
... We can also approach this process differently by focusing on more indirect and subtler forms of technological determination. For example, many scientists and philosophers point out (Oliveira 2017, Perez 2018) that neural networks not only revolutionise the information sciences and the software industry, but also undermine our social, political, and existential categories. There is, of course, nothing exceptional in the fact that a technological device treated metaphorically serves as an epistemological tool to discover new 'truths' about ourselves (Draaisma 2000). ...
... One of the most pertinent questions for the 21st century will be how these increasingly intelligent and invasive technologies will affect our minds. Many think digital technologies are fundamentally shaping how we think, process information, and engage in social relationships [1][2][3][4][5][6][7][8][9]. It is important to develop research methods that inform public debate on how to deal with the innovations that Silicon Valley provides for us. ...
Article
Full-text available
As digital devices, such as smartphones, become ever more absorbed into the daily lives of adolescents, a major assumption is that they start taking over basic functions of the human mind. A main focus of current debate and research is therefore on investigating adolescents' use of digital technologies. However, the lack of an instrument measuring the degree to which adolescents offload cognitive and social functions to technology hinders debate and research. This paper tests the reliability and validity of the Extended Mind Questionnaire (XMQ), which measures the degree to which digital technology is used to offload cognitive and social functions. In a first study on young adults (n = 63), we constructed a 12-item scale, which proved to be highly reliable. A large-scale study on teenagers (n = 947) demonstrated the high structural validity, reliability, and construct and criterion validity of the XMQ. In sum, these studies provide evidence that the XMQ is psychometrically sound and valid, and can be useful in future research on the consequences of digital technology in the daily lives of adolescents.
... In the classical theory of systems, emergence is defined as the non-reducibility of the whole to its parts. Definitions of complexity are very diverse (Oliveira 2017). Complexity, however, always relates to some degree of difficulty in converting the actual object into a formal description of it. ...
Article
Full-text available
The term “Second Machine Age” was used by Erik Brynjolfsson and Andrew McAfee in their book of the same name to indicate the impact of AI technology on people, society, and the economy. The term seeks to analyse the age we actually live in and its hidden patterns: which jobs and fields of study have prospects, and which do not. It concerns the second industrial revolution, which is going on right now and is changing the world no less radically than the first one, driven by the steam locomotive. The exponential growth of digital technologies, the digitization of everything, and recombinant innovation are the driving engine and fuel of the Second Machine Age. However, the ethical issues raised by this change remain unaddressed. A great many scientists and philosophers are currently working on artificial intelligence and asking many questions. The most important are whether machines can think, whether we will grant them copyright, which animals have never been granted, and whether AI can have its own ethics. The study focuses on these issues and uses concrete examples to show our unpreparedness for these topics.
Preprint
Full-text available
We consider the terminology used to describe artificial entities and how this terminology may affect the moral consideration of artificial entities. Different combinations of terms variously emphasize the entity's role, material features, psychological features, and different research perspectives. The ideal term may vary across context, but we favor “artificial sentience” in general, in part because “artificial” is more common in relevant contexts than its near-synonyms, such as “synthetic” and “digital,” and to emphasize the sentient artificial entities who deserve moral consideration. The terms used to define and refer to these entities often take a human perspective by focusing on the benefits and drawbacks to humans. Evaluating the benefits and drawbacks of the terminology to the moral consideration of artificial entities may help to clarify emerging research, improve its impact, and align the interests of sentient artificial entities with the study of artificial intelligence (AI), especially research on AI ethics.
Article
Full-text available
Today, the automatic refusal of an online credit application emerges from autonomous data processing using Artificial Intelligence techniques. Despite the undoubted advantages involved, the solvency profiling of a prospective beneficiary may imply the exclusion of an end-user from the granting of credit. It likewise affects their personal sphere, since it also has an impact on the quality of life of data subjects. In the current state of the artis legis, the indeterminacy of the information that can be used by banking institutions will only be resolved in the presence of specific norms that regulate how the collected data can be used and, above all, that provide a sustained conclusion regarding the relevance of the inferred knowledge. Concerning prediction models and inferential Big Data analytics in particular, some limitations result from algorithmic opacity and therefore have repercussions on the effectiveness of the transparency regime established by the GDPR. In this essay, we begin a line of research leading to a readjustment of European public policies concerning personal data protection and the promotion of a functioning Digital Single Market, one that should start by focusing more on enhancing end-to-end digital literacy.
Conference Paper
Full-text available
Museums promote cultural experiences through their exhibits and the stories behind them. Nevertheless, museums are not always designed to engage and interest young audiences, especially the “net generation”. According to the Falk model of the visitor experience, visitors use their visit to develop and change their sense of identity and their view of the museum, along with, in a small but significant way, how society understands that identity and other museums. Following this model, we regard our target group, teenagers, as experience seekers, since this type of visitor is usually motivated to collect an experience. To verify whether this hypothesis holds, we ran a series of focus groups with a total of 130 teenagers (15-17 years old) to gather their thoughts about museums and about what they would add to a museum to make their visit more enjoyable. From the notes gathered in these focus groups, we then validated our assumption that teenagers aged 15-17 can be characterized as experience seekers with regard to a first visit to an interactive museum.
Conference Paper
Full-text available
This paper describes a user-driven innovation study conducted with teenagers from Madeira Island to probe their desires for technology-aided experiences inside a natural history museum. After gathering the results of sessions with 43 teens (15-17 years old), these results were shown to 17 students of a museum curatorship course at the local university (average age of 26), enrolled in the Master in Cultural Management, who were asked to design an experience targeting the teenage audience's desires and preferences. Subsequently, the results of the two groups were compared in order to assess whether the curators of tomorrow are prepared to design meaningful experiences for the teens of today, who will be the future adult audience.