Figure 3 | Cover of The New Yorker with a cartoon by Saul Steinberg illustrating the diverse train of thought of a person viewing a cubist painting by Georges Braque.
Source publication
This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning s...
Context in source publication
Context 1
... It is also important to stress that acquiring understanding does not merely entail local object detection and recognition but also holding several distinct concepts in mind at once, along with each of their attendant associations, while forming a global conception of their interrelations and overall significance. These distinct concepts can be highly diverse, as is illustrated in the cartoon by Saul Steinberg that appeared on the cover of The New Yorker in 1969 showing the train of thought of a person viewing a cubist painting by Georges Braque (Figure 3). And they are not necessarily logically consistent. ...
Citations
... This is why DeepMind cannot take that additional step towards machine understanding. The failure to develop self-referentiality in AI agents is a major stumbling block for machine understanding (Pepperell, 2022). A major problem with existing AI is that it can be autonomous without understanding. ...
The potential of conscious artificial intelligence (AI), with its functional systems that surpass automation and rely on elements of understanding, is a beacon of hope in the AI revolution. The shift from automation to conscious AI, in which automation is replaced with machine understanding, offers a future where AI can comprehend without needing to experience, thereby revolutionizing the field of AI. In this context, the proposed Dynamic Organicity Theory of consciousness (DOT) stands out as a promising and novel approach for building artificial consciousness that is more like the brain, with physiological nonlocality and diachronicity of self-referential causal closure. However, deep learning algorithms utilize "black box" techniques such as "dirty hooks" to make the algorithms operational by discovering arbitrary functions from a trained set of dirty data rather than prioritizing models of consciousness that accurately represent intentionality as intentions-in-action. The limitations of the "black box" approach in deep learning algorithms present a significant challenge, as quantum information biology, or intrinsic information, is associated with subjective physicalism and cannot be predicted with Turing computation. This paper suggests that deep learning algorithms effectively decode labeled datasets but not dirty data, owing to unlearnable noise, and that encoding intrinsic information is beyond the capabilities of deep learning. New models based on DOT are necessary to decode intrinsic information by understanding meaning and reducing uncertainty. The process of "encoding" entails functional interactions as evolving informational holons, forming informational channels in the functionality space of time consciousness. The "quantum of information" functionality is the motivity of (negentropic) action as change in functionality through thermodynamic constraints that reduce informational redundancy (also referred to as intentionality) in informational pathways. It denotes a measure of epistemic subjectivity towards machine understanding beyond the capabilities of deep learning.
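The abstract's claim that deep learning "effectively decodes labeled datasets but not dirty data" because of unlearnable noise can be made concrete with a toy experiment. Below is a minimal, self-contained sketch, not taken from the cited paper: a 1-nearest-neighbour learner memorises its training set, so randomly flipped ("dirty") labels put a hard ceiling on what it can decode. All dataset sizes and noise rates are illustrative assumptions.

```python
# Minimal sketch (not from the cited paper): random label noise in "dirty"
# training data caps what a memorising learner can decode.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, flip_fraction):
    """Linearly separable 2-D points; a fraction of labels is randomly flipped."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # clean ground-truth labels
    flip = rng.random(n) < flip_fraction      # the "unlearnable" noise
    y[flip] = 1 - y[flip]
    return X, y

def predict_1nn(X_train, y_train, X):
    """1-nearest-neighbour: memorises the training set, noise included."""
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]

X_test, y_test = make_data(2000, flip_fraction=0.0)   # clean evaluation set
for noise in (0.0, 0.2, 0.4):
    X_tr, y_tr = make_data(500, flip_fraction=noise)
    acc = (predict_1nn(X_tr, y_tr, X_test) == y_test).mean()
    print(f"label noise {noise:.0%}: clean test accuracy {acc:.2f}")
```

Accuracy falls roughly in step with the flip rate, which is the point at issue: no amount of fitting recovers information the labels no longer carry.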
... Understanding what is said by someone, for Recanati, requires a conscious experience of what is said, an experience that he compares to conscious perception (idem). Besides Recanati's, there are other theories that require understanding to be conscious (e.g., Bourget, 2017; Pepperell, 2022; Searle, 1980). ...
... Besides emotionally-loaded communication, intuitive candidates include communication using perceptual concepts (red, high-pitched, bitter) or other feeling concepts (feeling nauseous, tired, hungry). Furthermore, some may argue that grasping any concept requires phenomenal consciousness (e.g., Bourget, 2017; Pepperell, 2022). Such possibilities constitute massive obstacles for the MBM. ...
Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1-3), I explore a way to answer this question that I call the 'mental-behavioral methodology' (§4-5). This methodology involves three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether an AI displays the relevant behaviors. If the first two steps are successfully completed, and if the AI passes the tests with human-like results, this constitutes evidence that this AI and humans can genuinely communicate. This mental-behavioral methodology has the advantage that we don't need to understand the workings of black-box algorithms, such as standard deep neural networks. This is comparable to the fact that we don't need to understand how human brains work to know that humans can genuinely communicate. This methodology also has its disadvantages, and I will discuss some of them (§6).
... The phenomenon of the human brain, by contrast, is fundamentally different, as evidenced by experiments undertaken by a group of scientists on chimpanzees to demonstrate the learning effect of artificial intelligence (22). Technology-based theory essentially treats the brain as a complicated machine, supporting the idea of artificial awareness for a particular task and the creation of similar tasks at a larger scale (23). ...
This paper examines the complexity theory of consciousness, one of many hypotheses proposed to explain how complexity emerges from basic notions. Our goal is to establish which characteristics have more fundamental implications for the emergence of biology, psychology, and technology, as opposed to those that are more peripheral in these contexts. In the examples we discuss, the complexity is quite rational and factual in connection to biological and psychological processes. The most adaptive hierarchical structures are open systems that participate in the behavior. These systems are causally successful because they work together, and their value cannot be overstated. Various biological processes are responsible for achieving the aim, while physical limits also influence the outcomes that can be attained. The underlying issue is the origin of consciousness and the biological basis of life, which are structured and variable in the principles used to study consciousness in psychology. One possible answer is to acknowledge that consciousness is an irreducible emergent characteristic of brain tissue. The structure and function of the brain have been extensively characterized over the past century, yet the level of awareness remains debatable. The level of awareness is a recurring complexity in biological, psychological, and technological fields. Our goal is to identify common characteristics that will allow us to explain the idea of consciousness.
... The concept of coevolutionary hybridization of human and machine intelligence has been proposed as a key element for the intellectualization of the world, suggesting that this hybridization could lead to solutions for problems historically inaccessible to humanity (Krinkin et al., 2022). However, there are ongoing debates about whether AI machines truly understand the data they process or merely follow syntactic rules, raising doubts about the authenticity of machine understanding (Pepperell, 2022). ...
This work explored the philosophical intersection of artificial intelligence through a systematic review addressing the epistemology and authenticity of machine understanding. The current context of rapid growth in AI raises questions about the nature of the knowledge that machines generate. Three key questions were established to guide the review, and part of the PRISMA statement was applied in the search and selection of relevant studies. The results revealed a diversity of philosophical perspectives and highlighted the complexity of evaluating the authenticity of machine understanding. The conclusion emphasized the continuing need to investigate this intersection, stressing the importance of theoretical frameworks that integrate ethics and epistemology in evaluating knowledge generated by AI.
... The failure to develop artificial experience in artificial intelligence agents is a major stumbling block for machine understanding (Pepperell, 2022). The "soft" materials suggested by Bronfman et al. (2021) have not been fully addressed. ...
Consciousness is the ability to have intentionality, which is a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness in order to build conscious artificial intelligence (AI). Developing model emulations and exploring fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels, instead of recognizing (cf. introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through the brain-machine interface of multiscale temporal processing, while hardware implementation can be done by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.
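The memristive behaviour this blueprint relies on can be illustrated independently of the proposed protonic hardware. Below is a minimal sketch of the linear ion-drift memristor model of Strukov et al. (2008), used here as a generic stand-in for the 'wetwire' protonic memristor; the device parameters are textbook-style assumptions, not values from the paper.

```python
# Minimal sketch: linear ion-drift memristor model (after Strukov et al. 2008),
# standing in for the protonic 'wetwire' device. Parameters are assumptions.
import numpy as np

R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped / fully undoped resistance
D = 10e-9                      # m: device thickness
MU_V = 1e-14                   # m^2 s^-1 V^-1: dopant mobility
DT = 1e-5                      # s: integration time step

def simulate(voltage, x0=0.1):
    """Integrate the doped fraction x in [0, 1] under a voltage waveform."""
    x = x0
    current = np.empty_like(voltage)
    resistance = np.empty_like(voltage)
    for k, v in enumerate(voltage):
        r = R_ON * x + R_OFF * (1.0 - x)   # doped and undoped regions in series
        i = v / r
        # Linear drift of the dopant front, clipped at the device boundaries.
        x = min(max(x + MU_V * R_ON / D**2 * i * DT, 0.0), 1.0)
        current[k] = i
        resistance[k] = r
    return current, resistance

t = np.arange(0.0, 0.1, DT)
current, resistance = simulate(np.sin(2 * np.pi * 10.0 * t))  # 10 Hz, 1 V drive
# Plotting voltage against current would trace the pinched hysteresis loop
# that is the defining signature of a memristor.
print(f"memristance swept between {resistance.min():.0f} and {resistance.max():.0f} ohms")
```

Under a sinusoidal drive the resistance depends on the history of charge that has passed through the device, which is the memory property the blueprint assigns to the protonic filament.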
The authors discuss a functional unit of neuronal circuits, crucial in both brain structure and artificial intelligence (AI) systems. These units recognize objects denoted by single words or verbal phrases and are responsible for object perception, memorization of image patterns, and retrieval of those patterns as imagination. Functional neuronal units, activated by the working memory system, contribute to problem-solving. The paper highlights the authors' achievements in developing a theory for these functional units, known as 'the equimerec units', which combine 'the threshold logic unit' and 'the feedback control loop'. The authors emphasize the importance of this theory, especially in the context of highly advanced language-based AI systems. Additionally, the authors note that the functional units' essence relies on backpropagation connections, causing impulse circulation in closed circuits and ultimately leading to the emergence of an electromagnetic field. This phenomenon explains the long-known existence of the human brain's endogenous electromagnetic field. The authors suggest that a similar field likely arises in AI systems. In light of Johnjoe McFadden's "Conscious Electromagnetic Information Field Theory (cemi)", the authors argue that the potential emergence of self-awareness in AI systems deserves due attention.
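The combination of a threshold logic unit with a feedback control loop, and the resulting "impulse circulation in closed circuits", can be sketched in a few lines. The wiring below is an assumption for illustration, not the authors' 'equimerec' design: a McCulloch-Pitts-style unit whose output feeds back to its own input, so a single input pulse leaves activity circulating as a one-bit memory.

```python
# Illustrative sketch (not the authors' 'equimerec' design): a threshold
# logic unit whose output is fed back to its own input, so a brief input
# pulse leaves the unit reverberating -- impulse circulation in a closed
# circuit acting as a one-bit memory.
def threshold_logic_unit(inputs, weights, threshold):
    """Classic McCulloch-Pitts unit: fire iff the weighted sum reaches threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def run(external_pulses, w_in=1.0, w_feedback=1.0, threshold=1.0, steps=10):
    """Drive the unit with timed external pulses; the feedback edge sustains activity."""
    output = 0
    trace = []
    for t in range(steps):
        ext = external_pulses.get(t, 0)
        output = threshold_logic_unit([ext, output], [w_in, w_feedback], threshold)
        trace.append(output)
    return trace

# A single pulse at t=2 switches the loop on; activity then circulates.
print(run({2: 1}))   # -> [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
```

The sustained activity in the closed loop is the minimal analogue of the circulating impulses from which the authors derive the endogenous electromagnetic field.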
This article presents a project which is an experiment in the emerging field of human-machine artistic collaboration. The author/artist investigates responses by the generative pre-trained transformer (GPT-2) to poetic and esoteric prompts and curates them with elements of digital art created by the text-to-image transformer DALL-E 2 using those same prompts; these elements are presented in the context of photographs featuring an anthropomorphic female avatar as the messenger of the content. The tripartite ‘cyborg’ thus assembled is an artificial intelligence endowed with the human attributes of language, art and visage; it is referred to throughout as Madeleine. The results of the experiments allowed the investigation of the following hypotheses. Firstly, evidence for a convergence of machine and human creativity and intelligence is provided by moderate degrees of lossy compression, error, ignorance and the lateral formulation of analogies more typical of GPT-2 than GPT-3. Secondly, the work provides new illustrations supporting research in the field of artificial intelligence that queries the definitions and boundaries of accepted categories such as cognition, intelligence, understanding and—at the limit—consciousness, suggesting that there is a paradigm shift away from questions such as “Can machines think?” to those of immediate social and political relevance such as “How can you tell a machine from a human being?” and “Can we trust machines?” Finally, appearance and the epistemic emotions of surprise, curiosity and confusion are influential in the human acceptance of machines as intelligent and trustworthy entities. The project problematises the contemporary proliferation of feminised avatars in the context of feminist critical literature and suggests that the anthropomorphic avatar might echo the social and historical position of the Delphic oracle: the Pythia, rather than a disembodied search engine such as Alexa.