Article

The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics

... Setting aside that Schrödinger himself introduced his hypothetical cat specifically to point out the absurdity of treating his linear, deterministic equation as applying universally, there is no shortage of academic literature that treats Schrödinger's Cat (and its conscious cousin, Wigner's Friend) as possible in principle, even if difficult or impossible for all practical purposes [1,2,4-16]. ...
... This lack of "which-slit" information in the form of decohering correlations with other objects in the universe means that the electron's superposition coherence was maintained, and thus the rules of quantum mechanics (and not classical probability) would apply to probability distribution calculations. 4 Because the dispersion of an object's wave function is directly proportional to Planck's constant and inversely proportional to its mass, the ability to demonstrate the wave-like behavior of electrons is in large part thanks to the electron's extremely small mass. The same method of producing superpositions, waiting for quantum uncertainty to work its magic, has been used to produce location superpositions of objects as large as C60 molecules [3]. ...
... 6 In other words, if we sent a dust particle into deep space, its location relative to other objects in the universe is so well defined, due to its correlations to those objects, that it would take over a million years for the universe to "forget" where the dust particle is to a resolution allowing for the execution of a double-slit interference experiment. 7 In this case, information in the universe would still exist to localize the dust particle to a resolution of around ...
4 Indeed, the existence of "which-slit" information, that is, the existence of a correlating fact about the passage of the electron through one slit or the other, is incompatible with the existence of a superposition at the double-slit plane.
5 The word "macroscopic" has been abused in the literature, with the phrase "Schrödinger Cat-type" state often applied to "mesoscopic" objects that are much smaller than what is visible with the naked eye. ...
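The inverse-mass dependence of wave-packet dispersion described above can be sketched numerically. The following is a minimal estimate, not taken from the cited paper: it uses the standard free-particle Gaussian spreading law, with illustrative (assumed) masses and initial widths, and ignores decoherence entirely.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def spread_time(mass_kg, sigma0_m):
    """Characteristic time for a free Gaussian wave packet of initial
    width sigma0 to widen by a factor of sqrt(2), from the standard
    spreading law sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2):
    the timescale is t = 2*m*sigma0^2 / hbar, proportional to mass."""
    return 2.0 * mass_kg * sigma0_m**2 / HBAR

# Assumed example values, for illustration only:
t_electron = spread_time(9.109e-31, 1e-10)  # electron localized to ~1 angstrom
t_c60      = spread_time(1.197e-24, 1e-9)   # C60 molecule localized to ~1 nm
t_dust     = spread_time(1e-15, 1e-6)       # ~1-micron dust grain, ~1e-15 kg
```

Even in this idealized, decoherence-free sketch the electron spreads in a tiny fraction of a second while the dust grain takes months, which is why natural dispersion is a workable route to superpositions only for very light objects.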
Preprint
Full-text available
The Schrödinger's Cat and Wigner's Friend thought experiments, which logically follow from the universality of quantum mechanics at all scales, have been repeatedly characterized as possible in principle, if perhaps difficult or impossible for all practical purposes. I show in this paper why these experiments, and interesting macroscopic superpositions in general, are actually impossible in principle. First, no macroscopic superposition can be created via the slow process of natural quantum packet dispersion because all macroscopic objects are inundated with decohering interactions that constantly localize them. Second, the SC/WF thought experiments depend on von Neumann-style amplification to achieve quickly what quantum dispersion achieves slowly. Finally, I show why such amplification cannot produce a macroscopic quantum superposition of an object relative to an external observer, no matter how well isolated the object from the observer, because: the object and observer are already well correlated to each other; and reducing their correlations to allow the object to achieve a macroscopic superposition relative to the observer is equally impossible, in principle, as creating a macroscopic superposition via the process of natural quantum dispersion.
... Penrose's main area of interest in theoretical physics is general relativity (but he made many contributions in other domains as well), while Prigogine's major playground is statistical physics. 38 In spite of that, Prigogine, like Penrose, realized very early that gravitation plays a fundamental role in integrating the Second Law with dynamics. For example, at the beginning of a 1986 talk celebrating John A. Wheeler, Prigogine stated: ...
... Throughout his long, distinguished, and still ongoing career, Penrose has been relentlessly pursuing this task through a series of extremely well-articulated and original books, articles, and talks [28,38-40,100]. Most importantly, it is probably due to Penrose that the role of gravitation in entropy has been prominently brought into the picture, an idea that does not appear to have played a very essential role in the works of Boltzmann, Reichenbach, and even Prigogine. ...
... 45 To solve this problem, Penrose has proposed two key ideas. First, he invoked the so-called Weyl Curvature Hypothesis (WCH) [28,38,39], which claims that the Weyl curvature 46 is exactly zero at the moment of the Big Bang. This is motivated by the fact that such a global boundary condition constitutes a "quick and direct method" to encode into the structure of spacetime the observed fact that the gravitational degrees of freedom at the Big Bang moment were somehow turned off. ...
Article
Full-text available
The question why natural processes tend to flow along a preferred direction has always been considered from within the perspective of the Second Law of Thermodynamics, especially its statistical formulation due to Maxwell and Boltzmann. In this article, we re-examine the subject from the perspective of a new historico-philosophical formulation based on the careful use of selected theoretical elements taken from three key modern thinkers: Hans Reichenbach, Ilya Prigogine, and Roger Penrose, who are seldom considered together in the literature. We emphasize in our analysis how the entropy concept was introduced in response to the desire to extend the applicability of the Second Law to the cosmos at large (Reichenbach and Penrose), and to examine whether intrinsic irreversibility is a fundamental universal characteristic of nature (Prigogine). While the three thinkers operate with vastly different technical proposals and belong to quite distinct intellectual backgrounds, some similarities are detected in their thinking. We philosophically examine these similarities but also bring into focus the uniqueness of each approach. Our purpose is not to provide exhaustive derivations of logical concepts identified in one thinker in terms of ideas found in the others. Instead, the main objective of this work is to stimulate historico-philosophical investigations and inquiries into the problem of the direction of time in nature by way of cross-disciplinary examinations of previous theories commonly treated in the literature as disparate domains.
... Whether or not consciousness can be copied is fertile ground for a multitude of troubling, if not fascinating, thought experiments. There's the duplication problem [1]: imagine we can teleport a traveler to another planet by creating "a precise duplicate of the traveler, together with all his memories, his intentions, his hopes, and his deepest feelings," but we then decide not to destroy the original copy. "Would his 'awareness' be in two places at once?" There's the simulation problem [2]: if conscious awareness can be uploaded onto a computer, then how do we know we aren't simulated minds in simulated universes? ...
... Notice that each of these problems is a direct consequence of the copiability or repeatability of conscious states. 1 If it turns out, for whatever reason, that con- ...
1 Throughout this paper, I'll treat copiability and repeatability of conscious states as essentially interchangeable because repeating a conscious state (or resetting it to an earlier state) is akin to copying it at a later point in spacetime. ...
Article
The possibility that consciousness is algorithmic depends on the assumption that conscious states can be copied or repeated by sufficiently duplicating their underlying physical states, leading to a variety of paradoxes including the problems of duplication, teleportation, simulation, self-location, the Boltzmann brain, and Wigner’s Friend. In an effort to further elucidate the physical nature of consciousness, I challenge this assumption by analyzing the implications of special relativity on evolutions of physical copies of a mental state, particularly the divergence of these evolutions due, for example, to quantum fluctuations. I show that the conjunction of three assumptions leads to a logical contradiction: first, that a conscious state supervenes on some sufficient underlying physical state such that instantiation of that physical state is sufficient to create that conscious state; second, that conscious states are associated with transtemporal identity; and third, that two or more physical copies of a conscious state can be instantiated nonlocally in spacetime. I then show that transtemporal identity is logically incompatible with the copiability of conscious states and offer several arguments in favor of transtemporal identity and against copiability of conscious states. Several explanatory hypotheses and implications are addressed, particularly the relationships between consciousness, locality, physical irreversibility, and quantum no-cloning.
... This proposed solution to the situation elaborated by Wigner (1983) through the "friend paradox" is very close to the solution proposed by Bass (1971), as we saw earlier. Revisiting the situation of Schrödinger's cat (1983), expanded by Penrose (1989), Goswami (1989, p. 390) states that questions about the cat's consciousness, or the discrepancy between the humans inside and outside the box, are difficulties that accompany the dualist conception of the notion of "consciousness". ...
... Another attempt to interpret quantum mechanics, specifically the causal role of consciousness in quantum measurement, is made by Henry Stapp. His proposal runs in the opposite direction to that of the causal-consciousness interpretation, which sought to use consciousness to understand quantum mechanics; Stapp (2007) seeks to use quantum mechanics to understand consciousness, a path also followed by Penrose (1994). However, as Landau (1998, p. 172) observes, "Penrose accepts that the conscious mind arises as a functioning of the physical brain [...] ...
Preprint
Full-text available
This book deals with some ontological implications of standard non-relativistic quantum mechanics, and the use of the notion of `consciousness' to solve the measurement problem.
... functional, which is enough to operationalise a concept but does not necessarily entail an understanding of its underlying mechanism (akin to The Chinese Room Argument [72,83]); and structural, which warrants a detailed understanding of how and why a concept operates. ...
... In particular, we highlighted two different mental models: functional, enough understanding to operationalise a concept; and structural, an in-depth, theoretical appreciation of underlying processes. We further argued that the former, a shallow form of understanding, aligns with The Chinese Room Argument [72,83] and the notion of simulatability [55]. We also reviewed diverse notions of explainability, interpretability, transparency, intelligibility and many others that are often used interchangeably in the literature, and argued in favour of explainability. ...
Preprint
Explainable artificial intelligence and interpretable machine learning are research fields growing in importance. Yet, the underlying concepts remain somewhat elusive and lack generally agreed definitions. While recent inspiration from social sciences has refocused the work on needs and expectations of human recipients, the field still misses a concrete conceptualisation. We take steps towards addressing this challenge by reviewing the philosophical and social foundations of human explainability, which we then translate into the technological realm. In particular, we scrutinise the notion of algorithmic black boxes and the spectrum of understanding determined by explanatory processes and explainees' background knowledge. This approach allows us to define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge - a process that engenders understanding in explainees. We then employ this conceptualisation to revisit the much disputed trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers as well as fairness and accountability engendered by explainability. We furthermore discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability, with a focus on explainees, contrastive statements and explanatory processes. Our discussion reconciles and complements current research to help better navigate open questions - rather than attempting to address any individual issue - thus laying a solid foundation for a grounded discussion and future progress of explainable artificial intelligence and interpretable machine learning. We conclude with a summary of our findings, revisiting the human-centred explanatory process needed to achieve the desired level of algorithmic transparency.
... However, the question of how to include consciousness in the overall picture of describing the world has no answer yet in modern science. Moreover, Nobel laureate R. Penrose wrote a series of books [1-4] just to demonstrate this. And as a result he concluded that understanding the essence of consciousness within rigorous science requires a "new physics". Therefore, to answer the question posed scientifically, we need, at a minimum: ...
... to call, following R. Penrose [1-4], non-computational. That is, processes whose results cannot be programmed. ...
Preprint
Full-text available
A wide range of problems concerning the relationship between consciousness and matter is discussed. Particular attention is paid to analyzing the structure and properties of consciousness within the framework of informational evolution, and to analyzing the role of the specific (non-computational) properties of consciousness in the procedure of classical and quantum measurements. In particular, the question of "cloning" consciousness (the possibility of copying its properties onto a new material carrier) is discussed in detail. We hope that the generalized complementarity principle we have formulated will open new paths for investigating the problems of consciousness within the fundamental physical picture of the world.
... These arguments attempt to show that human reasoning cannot be formalized in terms of formal systems, that is, in terms of a Turing machine (TM). The last two arguments are instead grounded in physics and are defended by Penrose [Penrose, 1989] and Siegelmann [Siegelmann, 1995]. These arguments aim to prove that certain processes carried out by the brain cannot be simulated by a TM. ...
... Unlike Bringsjord's argument, which is based on mathematical reasoning, Penrose defends the hyper-brain thesis on the basis of physical arguments [Penrose, 1989]. Here is a presentation of these arguments, followed by an explanation of the main problems they raise. ...
Thesis
My thesis project, entitled "From Logic to Physics", has the general aim of showing that the computation performed by computers is not merely a branch of logic but must also be studied within the physical sciences. More precisely, my work consists in the logical analysis of the notion of computation proposed by Alan Turing in 1936. This logical analysis is based on the mathematical model of today's computers known as the "Turing machine". The Turing machine is a model of computation that makes it possible to identify the theoretical possibilities and limits of computers independently of physical resources. The study of computers, whose principal function is to compute, can thus be carried out directly from the mathematical model of the Turing machine. One consequence of this correspondence between the computer and the Turing machine is that the limits of computers have mostly been studied independently of their physical nature. From this point of view, the Turing machine is the "measuring standard" of computation: a mathematical notion sufficient for studying the limits of what computers can compute. However, a number of research efforts in recent years suggest that the physical structure of computers plays an essential role in their operation. Such research could ground a critique of the purely mathematical study of computation. More precisely, the central object of this research is the study of "hypercomputation", a concept born in 1999 designating computation that exceeds the limits of the Turing machine. In other words, while the limits of the Turing machine are nowadays considered to be the limits of computation, hypercomputation refers to the claim that it would be possible to compute "more" than the Turing machine; and it is precisely from physics that such possibilities would come.
... The other point of great interest is the very controversial, but tempting, idea of "emergent quantumness", that is, some quantum-like behavior of systems that are difficult to believe to be quantum per se [24]. Some authors use quantumness just as a metaphor to describe cultural phenomena [25] or genotype-phenotype duality in biological evolution [26], while others suggest the relevance of true quantum phenomena in human brains [27,28]. On a more practical level, this line of thinking may be related to the much more pragmatic and solid concept of "quantum annealing" [29-34]. ...
... In this sense, our approach is essentially different from those of Refs. [27,28], where the relevance of truly quantum processes in our bodies for our mind is suggested. The other point worth emphasizing is that, in our approach, only a fully optimized neural network has this property of "quantumness". ...
Article
Full-text available
It was recently shown that the Madelung equations, that is, a hydrodynamic form of the Schrödinger equation, can be derived from a canonical ensemble of neural networks where the quantum phase was identified with the free energy of hidden variables. We consider instead a grand canonical ensemble of neural networks, by allowing an exchange of neurons with an auxiliary subsystem, to show that the free energy must also be multivalued. By imposing the multivaluedness condition on the free energy we derive the Schrödinger equation with “Planck’s constant” determined by the chemical potential of hidden variables. This shows that quantum mechanics provides a correct statistical description of the dynamics of the grand canonical ensemble of neural networks at the learning equilibrium. We also discuss implications of the results for machine learning, fundamental physics and, in a more speculative way, evolutionary biology.
... Hamilton's equations, which correspond to the familiar F = ma. Penrose (1989) gives a schematic depiction of the phase space (restricted to the energy hypersurface), in which each point corresponds to an exact microstate. ...
... It seems that the special initial condition in T M is too special and too contrived. Penrose (1989) estimates that the initial macrostate M(0) is tiny compared to the available volume on phase space. A rough calculation based on classical general relativity suggests that the PH macrostate (specified using the Weyl curvature) is only 1/10^(10^123) of the total volume in phase space. ...
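The double-exponential fraction quoted above can be reconstructed as a back-of-envelope estimate. This is a sketch of the standard reasoning, with the Bekenstein-Hawking entropy of a universe-mass black hole taken as the assumed input for the maximum entropy:

```latex
% Maximum entropy if the observable universe's matter collapsed into a
% single black hole (Bekenstein-Hawking estimate):
S_{\max}/k_B \sim 10^{123}
% Boltzmann: the number of microstates is W = e^{S/k_B}, so a macrostate
% of near-zero entropy occupies a phase-space fraction of roughly
\frac{V_{\mathrm{PH}}}{V_{\mathrm{total}}} \approx e^{-10^{123}} \approx \frac{1}{10^{10^{123}}}
```

At this double-exponential level of precision the base of the outer exponent is immaterial, which is why the figure is usually quoted as one part in 10^(10^123).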
Article
One of the most difficult problems in the foundations of physics is what gives rise to the arrow of time. Since the fundamental dynamical laws of physics are (essentially) symmetric in time, the explanation for time's arrow must come from elsewhere. A promising explanation introduces a special cosmological initial condition, now called the Past Hypothesis: the universe started in a low‐entropy state. Unfortunately, in a universe where there are many copies of us (in the distant “past” or the distant “future”), the Past Hypothesis is not enough; we also need to postulate self‐locating (de se) probabilities. However, I show that we can similarly use self‐locating probabilities to strengthen its rival—the Fluctuation Hypothesis, leading to in‐principle empirical underdetermination and radical epistemological skepticism. The underdetermination is robust in the sense that it is not resolved by the usual appeal to ‘empirical coherence’ or ‘simplicity.’ That is a serious problem for the vision of providing a completely scientific explanation of time's arrow.
... Taking sentience to be a state of matter, reductionist theories of consciousness can likewise resort to a more fundamental quantum level beyond neural activities. Most notably, Penrose (1989, 1994) has placed the origin of consciousness at the level of quantum phenomena. According to him, consciousness relates not just to the behavior of neural tissue but to its quantum-mechanical properties, that is, to states posited at the quantum level. ...
Article
Full-text available
Unlike animal behavior, behavior in plants is traditionally assumed to be completely determined either genetically or environmentally. Under this assumption, plants are usually considered to be noncognitive organisms. This view nonetheless clashes with a growing body of empirical research that shows that many sophisticated cognitive capabilities traditionally assumed to be exclusive to animals are exhibited by plants too. Yet, if plants can be considered cognitive, even in a minimal sense, can they also be considered conscious? Some authors defend that the quest for plant consciousness is worth pursuing, under the premise that sentience can play a role in facilitating plants' sophisticated behavior. The goal of this article is not to provide a positive argument for plant cognition and consciousness, but to invite a constructive, empirically informed debate about it. After reviewing the empirical literature concerning plant cognition, we introduce the reader to the emerging field of plant neurobiology. Research on plant electrical and chemical signaling can help shed light on the biological bases for plant sentience. To conclude, we shall present a series of approaches to scientifically investigate plant consciousness. In sum, we invite the reader to consider the idea that if consciousness boils down to some form of biological adaptation, we should not exclude a priori the possibility that plants have evolved their own phenomenal experience of the world. This article is categorized under: Cognitive Biology > Evolutionary Roots of Cognition Philosophy > Consciousness Neuroscience > Cognition
... The authors of [8,12,15,17,23] are among these maverick challengers. ...
Article
Full-text available
In a recent paper, Żukowski and Markiewicz showed that Wigner's Friend (and, by extension, Schrödinger's Cat) can be eliminated as physical possibilities on purely logical grounds. I validate this result and demonstrate the source of the contradiction in a simple experiment in which a scientist S attempts to measure the position of object |O⟩ = |A⟩_S + |B⟩_S by using measuring device M chosen so that |A⟩_M ≈ |A⟩_S and |B⟩_M ≈ |B⟩_S. I assume that the measurement occurs by quantum amplification without collapse, in which M can entangle with O in a way that remains reversible by S for some nonzero time period. This assumption implies that during this "reversible" time period, |A⟩_M ≠ |A⟩_S and |B⟩_M ≠ |B⟩_S, i.e., the macroscopic pointer state to which M evolves is uncorrelated to the position of O relative to S. When the scientist finally observes the measuring device, its macroscopic pointer state is uncorrelated to the object in position |A⟩_S or |B⟩_S, rendering the notion of "reversible measurement" a logical contradiction.
... In fact, there is a risk of assimilating the complexity of the human mind, such as consciousness, awareness and intuition (in many ways still unknown), to simple logical categories, denying space for creativity and innovation. Penrose, then, summarises his theory in the famous motto "the human mind is not algorithmic" (Penrose 1989), meaning that the human mind is not a Turing machine. This is like saying that intelligence cannot, by definition, be "artificial", as intelligence requires awareness, namely the consciousness that machines don't have. ...
Article
Full-text available
This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline and possible future perspectives. In this respect, I go through the different seasons of AI research (of AI and Law in particular): from the Winter of AI, a period of mistrust in AI (throughout the eighties until the early nineties), to the Summer of AI, the current period of great interest in the discipline, with lots of expectations. One of the results of the first decades of AI research is that "intelligence requires knowledge". Since its inception the Web has proved to be an extraordinary vehicle for knowledge creation and sharing, so it is no surprise that the evolution of AI has followed the evolution of the Web. I argue that a bottom-up approach, in terms of machine/deep learning and NLP to extract knowledge from raw data, combined with a top-down approach, in terms of legal knowledge representation and models for legal reasoning and argumentation, may foster the development of the Semantic Web, as well as of AI systems. Finally, I provide my insight into the potential of AI development, which takes into account technological opportunities and theoretical limits.
... AI is just an attempt to reconstruct the functions of human intelligence in parallel using mathematical and physical methods. No matter what devices we use to look at a brain, our thoughts will never be seen (Penrose, 1994). ...
Article
Full-text available
Through this article we will try to get into the way of thinking of one of the most important economic schools, namely the Austrian School. Over time, many mistakes have been made by various economic schools because the fundamentals of their thinking were not clear or the concepts used were not properly defined. Here we will try to clarify the foundation of the socio-human sciences in general and of the mentioned school in particular. Building theories or opinions on a fragile foundation will certainly give rise to a friable architecture. Among the various philosophers who have approached science, we believe that von Wright has a direct and clear approach to the type of human social thinking. From the information we have, we do not know of an approach similar to this article. The Austrian School has hitherto been understood as a methodology from the perspective of Aristotelianism. The question is whether a new perspective can make more sense. The methodology of the article is narrative argumentation. The questions we try to answer in the conclusions are those related to understanding the present moment and discerning the future moment through the fog. The conclusions will be critical regarding the use of mathematics in the socio-human sciences, and hold that the socio-human sciences must be understood only from the perspective of human motivations and intentions. Clarifying the starting point in economic thinking makes us more modest in drawing conclusions and making predictions about the future.
... John Wheeler has argued that the very validity of the laws of physics depends on the existence of consciousness. 4 In a way, the human point of view is all that counts! In astronomy/cosmology this is referred to as the Anthropic Principle (Bostrom, 2010), which in its weak form basically states that one sapient life form (humans) looks back to the past from its point of view (Penrose, 1989). ...
Article
Full-text available
Entropy always increases monotonically in a closed system, but complexity increases at first and then decreases as equilibrium is approached. Commonsense information-related definitions for entropy and complexity demonstrate that complexity behaves like the time derivative of entropy, which is proposed here as a new definition for complexity. A 20-year-old study had attempted to quantify complexity (in arbitrary units) for the entire Universe in terms of 28 milestones, breaks in historical perspective, and had concluded that complexity will soon begin decreasing. That conclusion is now corroborated by other researchers. In addition, the exponential runaway technology trend advocated by supporters of the singularity hypothesis, which was in part based on the trend of the very 28 milestones mentioned above, would have anticipated five new such milestones by now, but none have been observed. The conclusions of the 20-year-old study remain valid: we are at the maximum of complexity and we should expect the next two milestones at around 2033 and 2078. You can read a preprint here: https://osf.io/6nwf9/
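The definition proposed in this abstract, complexity as the time derivative of entropy, can be illustrated with a toy model. The sigmoidal (logistic) entropy curve below is an assumption made only for illustration, not the cited study's method: it captures the common picture of a closed system whose coarse-grained entropy grows slowly at first, fastest in mid-history, and then saturates at equilibrium.

```python
import math

def entropy(t, s_max=1.0, t_mid=5.0, rate=1.0):
    """Toy logistic entropy growth toward the equilibrium value s_max
    (an assumed functional form, chosen for illustration)."""
    return s_max / (1.0 + math.exp(-rate * (t - t_mid)))

def complexity(t, dt=1e-4):
    """Complexity as the time derivative of entropy (the abstract's
    proposed definition), estimated by a central finite difference."""
    return (entropy(t + dt) - entropy(t - dt)) / (2.0 * dt)

ts = [i * 0.1 for i in range(101)]  # t in [0, 10]
s = [entropy(t) for t in ts]
c = [complexity(t) for t in ts]

# Entropy rises monotonically, while complexity peaks in mid-history
# (at t_mid for this symmetric toy curve) and then declines toward zero
# as equilibrium is approached.
peak = ts[c.index(max(c))]
```

With this definition, "maximum complexity" is simply the inflection point of the entropy curve, which matches the abstract's claim that complexity rises and then falls while entropy only rises.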
... Behind all this astonishment lie very different conceptions and attitudes. Wigner, for example, seems to want to defend an antirealist position about mathematics and its objects (Wigner [1960], p. 2; for a finer analysis of how Wigner frames the problem of the applicability of mathematics and suggests resolving it, see Islami [2017]). Conversely, other physicists, such as Paul Davies and Roger Penrose, adopt a position found among many philosophers in the form of the so-called indispensability argument (to which we will return later, and which is also treated in detail in chapter 7 of the present volume): they think that the success of mathematics in the empirical sciences gives reasons to defend a form of mathematical realism (Davies [1992], p. 140-160; Penrose [1989], p. 556-557). ...
... PENROSE, R., The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, New York, Oxford University Press, 1989. 16 HAMEROFF, S. and PENROSE, R., "Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness", Mathematics and Computers in Simulation, Vol. 40, 3-4, 1996. ...
Article
Full-text available
ABSTRACT. Humanistic incompatibilism holds that free will is a concept incompatible with the deterministic and indeterministic truths of science, but raises human dignity as a limit to any development of science and justice. This article makes a historical and conceptual journey through determinism, indeterminism, compatibilism and incompatibilism. It then introduces the problem that neuroscience creates for the theory of crime and criminal culpability. Next, it presents some criticisms of humanistic compatibilism and posits humanistic incompatibilism as an abolitionist counterproposal to criminal law. The article ends with the formulation of five basic postulates of a new abolitionist current, penal neuroabolitionism, and presents the conclusions. KEYWORDS. Free will; Compatibilism; Incompatibilism; Penal abolitionism; Penal neuroabolitionism; Culpability.
... This was later explained, by Carter and Dicke, by the fact that this epoch coincided with the lifetime of what are called main-sequence stars, such as the Sun. At any other epoch, so the argument ran, there would be no intelligent life around in order to measure the physical constants in question. So the coincidence had to hold, simply because there would be intelligent life around only at the particular time that the coincidence did hold!" (The Emperor's New Mind (Penrose, 1989), Chapter 10) [...]
Research
Full-text available
Abstract We propose a dual-aspect framework for consciousness, an extended version of the dual-aspect monism metaphysics (IDAM) framework, based on two robust and reproducible sources of scientific empirical data: (i) data from the first-person perspective (1pp), such as our subjective experiences, and (ii) data from the third-person perspective (3pp), such as their respective neural bases. In this article, the term 'consciousness' is defined as the mental aspect of a state of a brain system or brain process, which has two sub-aspects: conscious experience and conscious function from the first-person perspective (1pp); the terms 'mental' and 'physical' are used in the sense of the IDAM framework (not dualism). The IDAM framework has five components: (I) the dual-aspect monism framework, where (a) each entity-state has inseparable physical and mental aspects, (b) the potentiality of primary irreducible subjective experiences (SEs) co-exists with its inseparable physical aspect in Nature, (c) SEs are the excitations of Universal Potential Consciousness (UPC), the mental aspect of the unmanifested state of the primal entity (Brahman), in analogy to the ripples of an ocean, and (d) its inseparable physical aspect is the ubiquitous physical quantum vacuum field (both the radiation reaction and the stochastic zero-point radiation field (ZPF)) from the very beginning (de la Peña, Cetto, & Valdes-Hernandez, 2015, p. 196). (II) Dual-mode (conjugate matching between a stimulus-dependent feed-forward-signals-related mode and a cognitive feedback-signals-related mode, and then the selection of a specific subjective experience by the self); and (III) the degree of manifestation of aspects depends on the state of an entity. The mental aspect is from the 1pp and the physical aspect is from the objective third-person perspective (3pp).
(IV) The fourth component is the segregation and integration of dual-aspect information, and (V) the fifth component is the set of necessary conditions of consciousness, which are developed here. The necessary conditions for access (reportable) consciousness are the formation of neural networks, wakefulness, reentry, attention, information integration, working memory, stimulus contrast at or above a threshold, and potential experiences embedded in the neural network. Attention is not necessary for phenomenal (non-reportable) consciousness. This framework is parsimonious and has the fewest problems compared to all other frameworks, and it addresses the objections raised in Biological Naturalism by traditional views (dualism and materialism). The IDAM framework (a) is consistent with psychophysical, biological, and physical laws; (b) attempts to address the 'hard' problem of consciousness (how to explain subjective experiences); and (c) can be tested scientifically: if the doctrine of inseparability between the 1pp-mental and 3pp-physical aspects of a conscious brain-mind state is somehow rejected, then the IDAM framework needs major modification. We have followed the least problematic 'bottom-up' approach, which starts from the two robust and reproducible sources of empirical data and then extrapolates carefully backward in time. This process eventually entails that the manifestation of an entity proceeds from its potentiality in the primal entity (Brahman) to its realization through the process of co-evolution. This approach concludes that the degree of manifestation of the unmanifested state of Brahman is highest in us, presumably in the Nirvikalpa Samādhi state, which entails that 'God' is inside us, because at this state we attain 'godly' virtues such as compassion, humility, bliss/Ānanda, love for all, inner light perception, and the unification of subject and objects.
... For the reasons laid out in the previous subsection, conventionalism avoids PBE, and as such avoids both Price's conclusion that the Past State requires a special explanation, and Maudlin's conclusion that time direction realism offers an explanatory advantage. [29] A phase space is an abstract mathematical space in which each point represents a complete kinematically possible microstate of the system in question, i.e. for a classical Hamiltonian picture, the position and momentum values of each particle. [30] For instance, using Bekenstein-Hawking entropy rather than Boltzmannian entropy, Penrose (1989) calculates the probability of the universe having been in a state of sufficiently low entropy as 1 in 10^(10^123). [31] For instance, he takes Boltzmann's picture to be incomplete, since it does not entail the parallelism of entropy increase (see Reichenbach, 1956, p. 137). ...
Article
Full-text available
In what sense is the direction of time a matter of convention? In 'The Direction of Time', Hans Reichenbach makes brief reference to parallels between his views about the status of time’s direction and his conventionalism about geometry. In this article, I: (1) provide a conventionalist account of time direction motivated by a number of Reichenbach’s claims in the book; (2) show how forwards and backwards time can give equivalent descriptions of the world despite the former being the ‘natural’ direction of time; and (3) argue that this offers an important middle-ground position between existing realist and antirealist accounts of the direction of time.
... The standard solution to this puzzle was presented in Penrose (1979, 1989), and popularised in his book The Emperor's New Mind (Penrose 1990). Penrose identifies the crucial role of gravity: the highest-entropy arrangement of a box of matter is that in which all the matter has collapsed into a black hole. ...
Article
Curiously, our Universe was born in a low entropy state, with abundant free energy to power stars and life. The form that this free energy takes is usually thought to be gravitational: the Universe is almost perfectly smooth, and so can produce sources of energy as matter collapses under gravity. It has recently been argued that a more important source of low-entropy energy is nuclear: the Universe expands too fast to remain in nuclear statistical equilibrium, effectively shutting off nucleosynthesis in the first few minutes and providing leftover hydrogen as fuel for stars. Here, we fill in the astrophysical details of this scenario and seek the conditions under which a Universe will emerge from early nucleosynthesis as almost purely iron. In so doing, we identify a hitherto-overlooked character in the story of the origin of the second law: matter–antimatter asymmetry.
... The prospects for the future of life become more challenging when we consider the rapid pace of development of Artificial Intelligence and Machine Learning (Penrose 1999; Woolfson 2000). While we are not close yet, recent advances in synthetic memory management provide a reasonable theoretical base for consideration of synthetic and biosynthetic neural systems capable of matching and exceeding the human brain's capacity (Makin et al. 2020; Rai et al. 2020; Tizno et al. 2019). ...
... If collapse only happens in conscious minds, no experiment to date has actually closed the locality loophole. One may even consider moving macroscopic masses based on the measurement results, to address Penrose's suggestion [230] that collapse takes place between macroscopically distinct gravitational fields. David Hume states that the question of free will is "the most contentious question of metaphysics" [231]. ...
Preprint
Full-text available
The National Aeronautics and Space Administration's Deep Space Quantum Link mission concept enables a unique set of science experiments by establishing robust quantum optical links across extremely long baselines. Potential mission configurations include establishing a quantum link between the Lunar Gateway moon-orbiting space station and nodes on or near the Earth. In this publication, we summarize the principal experimental goals of the Deep Space Quantum Link mission. These include long-range teleportation, tests of gravitational coupling to quantum states, and advanced tests of quantum nonlocality.
... Mathematics plays a role in brain research, from Von Helmholtz's early work [3], which looked for energy-like functions and the physical and chemical foundations that describe brain dynamics, and Freud's catharsis [4], to Norbert Wiener's cybernetics for studying biological control mechanisms [5], mathematical research on the nervous system [6], and the recent use of mathematics to study consciousness [7,8]. See also the examination of the dynamical transmission and effect of smoking in society [15]. ...
Article
Full-text available
A mathematical model of HIV/AIDS and TB, including their co-infections, is formulated. We find the equilibrium points and, with the help of numerical simulation, analyze the sub-models of TB, HIV/AIDS, and their co-infections. The Caputo and Caputo-Fabrizio fractional derivative operators of order α ∈ (0, 1] are employed to obtain the system of fractional differential equations. The Laplace Adomian Decomposition Method was successfully used for solving the resulting differential equations; the Laplace transform is a powerful technique in various fields of biological science, engineering, and pure and applied mathematics. This method is employed on the developed fractional-order model to obtain numerical solutions. Finally, numerical simulations are established to investigate the influence of the system parameters on the spread of the disease and to show the effect of the fractional parameter α on the obtained solutions.
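As an aside, the Caputo derivative of order α ∈ (0, 1) used in this abstract can be checked numerically with the standard L1 finite-difference scheme (a generic sketch of mine; the paper itself uses the Laplace Adomian Decomposition Method, which is not reproduced here). For f(t) = t the closed form is D^α[t] = t^(1-α)/Γ(2-α), and the L1 scheme recovers it essentially exactly:

```python
import math

def caputo_l1(f, t, alpha, n=1000):
    """Approximate the Caputo fractional derivative of order alpha in (0,1)
    at time t using the standard L1 finite-difference scheme on n steps."""
    h = t / n
    coef = h**(-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        # L1 weights: b_k = (k+1)^(1-a) - k^(1-a), applied to backward differences
        b_k = (k + 1)**(1 - alpha) - k**(1 - alpha)
        total += b_k * (f(t - k * h) - f(t - (k + 1) * h))
    return coef * total

# Check against the closed form: D^alpha[t] = t^(1-alpha) / Gamma(2-alpha)
alpha, t = 0.5, 2.0
approx = caputo_l1(lambda x: x, t, alpha)
exact = t**(1 - alpha) / math.gamma(2 - alpha)
assert abs(approx - exact) < 1e-8
```

For linear f the backward differences are all equal to h, so the weighted sum telescopes and the scheme is exact up to floating point, which makes this a convenient sanity check.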
... Hence, this is not an argument that the mind has powers exceeding that of any formal system, as argued most famously by Lucas (1961) and Penrose (1989), but also Gödel (1995) himself; on the contrary, its reasoning capacities are explicitly bounded by what R can prove about itself. ...
Article
Full-text available
Representationalist accounts of mental content face the threat of the homunculus fallacy. In collapsing the distinction between the conscious state and the conscious subject, self-representational accounts of consciousness possess the means to deal with this objection. We analyze a particular sort of self-representational theory, built on the work of John von Neumann on self-reproduction, using tools from mathematical logic. We provide an explicit theory of the emergence of referential beliefs by means of modal fixed points, grounded in intrinsic properties yielding the subjective aspects of experience. Furthermore, we study complications introduced by allowing for the modification of such symbolic content by environmental influences.
... The big problem with the past hypothesis is the sheer improbability of the Past State. Penrose (1989) calculates the probability of the universe having been in a state of sufficiently low entropy to be 1 in 10^(10^123).¹⁰ According to the past-hypothesis-based statistical mechanical explanation of the entropy gradient, the assumption that such a low-probability state occurred is a key part of the explanation of any mundane thermodynamic regularity, such as the inevitably underwhelming final sips of my now-cold flat white. ...
Article
It is often said that the world is explained by laws of nature together with initial conditions. But does that mean initial conditions don't require further explanation? And does the explanatory role played by initial conditions entail or require that time has a preferred direction? This chapter looks at the use of the 'initialness defence' in physics: the idea that initial conditions are intrinsically special in that they don't require further explanation, unlike the state of the world at other times. Such defences commonly assume a primitive directionality of time to distinguish between initial and final conditions. Using the case study of the time-asymmetry of thermodynamics and the so-called 'past hypothesis' — the hypothesis that the early universe was in a state of very low entropy — I outline and support a deflationary account of the initialness defence that does not presuppose a basic directionality of time, and argue that there is a relevant explanatory asymmetry between initial conditions and the state of systems at other times only if certain causal conditions are satisfied. Hence, the initialness defence is available to those who reject a fundamental direction of time.
... However, the question of how to include consciousness in the general picture of the world does not yet have an answer in modern science. Moreover, the Nobel laureate R. Penrose wrote a series of books with the sole purpose of demonstrating this [1][2][3][4]. As a result, he concluded that in order to understand the essence of consciousness within the framework of rigorous science, a "new physics" is needed. ...
Preprint
Full-text available
A wide range of problems of the relationship between consciousness and matter are discussed. Particular attention is paid to the analysis of the structure and properties of consciousness in the framework of information evolution. The role of specific (non-computational) properties of consciousness in the procedure of classical and quantum measurements is analyzed. In particular, the issue of "cloning" of consciousness (the possibility of copying its properties onto a new material carrier) is discussed in detail. We hope that the generalized principle of complementarity formulated by us will open up new ways for studying the problems of consciousness within the framework of the fundamental physical picture of the world.
... Along this line of thinking, Roger Penrose suggested that a conflict emerges when one considers a balanced superposition of two separate wave packets representing two different positions of a massive object 22,23 . In his words (see Ref. 26, p. 475): "My own point of view is that as soon as a significant amount of space-time curvature is introduced, the rules of quantum linear superposition must fail". Clearly the conflict is the result of putting together the general covariance of general relativity and the quantum mechanical superposition principle. ...
Preprint
Full-text available
The broad debate on foundational issues in quantum mechanics which took place at the famous 1957 Chapel Hill conference on The Role of Gravitation in Physics is here critically analyzed, with an emphasis on Richard Feynman's contributions. One of the most debated questions at Chapel Hill was whether the gravitational field had to be quantized, and its possible role in wave function collapse. Feynman's arguments in favor of the quantization of the gravitational field, based essentially on a series of gedanken experiments, are discussed here. Then the related problem of wave function collapse, for which Feynman hints at decoherence as a possible solution, is discussed. Finally, another topic is analyzed, concerning the role of the observer in a closed Universe. In this respect, Feynman's many-worlds characterization of Everett's approach at Chapel Hill is discussed, together with later contributions of his, including a kind of Schrödinger's cat paradox, which are scattered throughout the 1962-63 Lectures on Gravitation. Philosophical implications of Feynman's ideas in relation to foundational issues are also discussed.
... It is the phenomenon whereby the universe governed by laws that do not allow consciousness is no universe at all. I would even say that all the mathematical descriptions of the universe that have been given so far must fail this criterion; it is only the phenomenon of consciousness that can satisfy it." [28] In the end, he raises the following questions: ...
... In other words, over the past 20 years a huge class of objects and systems has been discovered that cannot be diagnosed or modeled (described) within the framework of existing theorems and models (based on DSN approaches). First of all, this concerns the workings of the brain [8][9][10][11][12] and the behavior of any biosystems making up the human organism [13][14][15][16][17]. ...
Article
Currently, there is no single definition of artificial intelligence. A classification is needed of the tasks that artificial intelligence systems must solve. This paper proposes a task classification for artificial neural networks (in terms of obtaining subjectively and objectively new information). The advantages of such neural networks (non-algorithmizable problems) are shown, and a class of systems (third-type systems: biosystems) is presented that fundamentally cannot be studied by statistical methods (or by science as a whole). To study such biosystems (with unique samples), it is suggested to use artificial neural networks able to perform system synthesis (the search for order parameters). Nowadays such problems are solved by humans through heuristics, a process that cannot be modeled by existing artificial intelligence systems.
... This notion of strong determinism is introduced in (Penrose 1989). This is different from the notion of "superdeterminism" that is sometimes invoked in the context of avoiding Bell non-locality. ...
Preprint
Full-text available
What exists at the fundamental level of reality? On the standard picture, the fundamental reality contains (among other things) fundamental matter, such as particles, fields, or even the quantum state. Non-fundamental facts are explained by facts about fundamental matter, at least in part. In this paper, I introduce a non-standard picture called the "cosmic void" in which the universe is devoid of any fundamental material ontology. Facts about tables and chairs are recovered from a special kind of laws that satisfy strong determinism. All non-fundamental facts are completely explained by nomic facts. I discuss a concrete example of this picture in a strongly deterministic version of the many-worlds theory of quantum mechanics. I discuss some philosophical and scientific challenges to this view, as well as some connections to ontological nihilism.
... There is a lot of discussion in the literature about the influence of Gödel's incompleteness theorems on the philosophical question of whether the mind can be mechanized (see [Penrose, 1989], [Chalmers, 1995], [Lucas, 1996], [Lindström, 2006], [Feferman, 2009], [Shapiro, 1998], [Shapiro, 2003], [Koellner, 2016], [Koellner, 2018a], [Koellner, 2018b], [Cheng, 2020b]). The Anti-Mechanism Argument claims that the mind cannot be mechanized in the sense that the mathematical outputs of the idealized human mind outstrip the mathematical outputs of any Turing machine. ...
Article
Full-text available
We use Gödel's incompleteness theorems as a case study for investigating mathematical depth. We examine the philosophical question of what the depth of Gödel's incompleteness theorems consists in. We focus on the methodological study of the depth of Gödel's incompleteness theorems, and propose three criteria to account for the depth of the incompleteness theorems: influence, fruitfulness, and unity. Finally, we give some explanations for our account of the depth of Gödel's incompleteness theorems.
... Thus this framework provides a precise way of talking about determinism as a property of a world rather than merely of a theory, opening the door for that notion to be applied to a variety of ongoing metaphysical discussions [40]. However, Penrose's scheme does not seem to accommodate the intermediate possibility of worlds which satisfy weak global determinism but not Laplacean determinism, such as worlds which have some degree of arbitrariness at some point other than the initial state of the universe but nonetheless have no objectively chancy events. ...
Preprint
Full-text available
Physicists are increasingly beginning to take seriously the possibility of laws outside the traditional time-evolution paradigm; yet our understanding of determinism is still predicated on a forwards time-evolution picture, making it manifestly unsuited to the diverse range of research programmes in modern physics. In this article, we use a constraint-based framework to set out a generalization of determinism which does not presuppose temporal directedness, distinguishing between strong, weak and hole-free global determinism. We discuss some interesting consequences of these generalized notions of determinism, and we show that this approach sheds new light on the long-standing debate surrounding the nature of objective chance.
... "Strong AI" is the view that human-level cognition is algorithmically possible, and in fact that the human mind is algorithmic in nature (Searle, 1980). The "mind-brain" identity thesis is controversial (Penrose, 1989), and it is anyone's guess whether extreme predictions such as the Singularity (Vinge, 1983) will be realized. The computational theory of mind has some important ethical implications, e.g., will AI or other computational agents have "human" rights (du Sautoy, 2016)? ...
... Conventionally, if the system is finite, one normalizes the measure to 1 such that the size of the entire … [24] This definition of entropy is written on Boltzmann's tomb, although Boltzmann actually defined entropy in a different mathematical way, even if similar in spirit. This formula for entropy was due to Max Planck (Darrigol and Renn, 2013, p. 783) and taken up as the foundation for the neo-Boltzmannian project of statistical mechanics (Callender, 1999; Goldstein, 2001; Lebowitz, 1993a,b, 1994, 2008; Penrose, 1989). ...
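For reference, the entropy formula inscribed on Boltzmann's tomb, in the form due to Planck, is:

```latex
S = k \log W
```

where k is Boltzmann's constant and W is the number of microstates compatible with the given macrostate.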
Article
I want to combine two hitherto largely independent research projects, scientific understanding and mechanistic explanations. Understanding is not only achieved by answering why-questions, that is, by providing scientific explanations, but also by answering what-questions, that is, by providing what I call scientific descriptions. Based on this distinction, I develop three forms of understanding: understanding-what, understanding-why, and understanding-how. I argue that understanding-how is a particularly deep form of understanding, because it is based on mechanistic explanations, which answer why something happens in virtue of what it is made of. I apply the three forms of understanding to two case studies: first, to the historical development of thermodynamics and, second, to the differences between the Clausius and the Boltzmann entropy in explaining thermodynamic processes.
... [79] It is also worth noting, as a somewhat unfavorable point about eternalism, the opinion of some authors regarding a supposedly related deterministic (or, in some versions, fatalist) consequence. Authors who have defended similar arguments at some point include: Rietdijk (1966); Putnam (1967); Penrose (1989); Shanks (1994); Lockwood (2007) and many others. Here there is, evidently, every kind of concern about the existence or not of [79] For the thesis that eternalism is compatible with change, in a robust sense, and on the conditions under which this is possible, see Marques (2020). ...
Article
Full-text available
УДК 001.8 © I. B. Ptitsyna, 2022. SCIENTIFIC DEVICES AND INSTRUMENTS AS A SPECIAL KIND OF ARTIFACTS. The history of artifacts (human-made objects) is as great as the history of mankind. Among the wide variety of artifacts, tools occupied an important place from the very beginning and, somewhat later, instruments. The development of society has always been accompanied by the need to increase the capabilities of these artifacts and by their growing complexity. All artifacts of this kind are extrasomatic organs, additions to bodily and mental organs; they are tools created to increase the ability to solve certain problems.
With the development of technology, their capabilities have become so great that the question has arisen whether they exceed the capabilities of the human brain. This question is especially relevant for the kind of tools created to assist the brain: self-learning artificial-intelligence computer programs. To understand this, one needs to turn to the origins of science, back to when the foundations of methodology and the general principles for obtaining a mental product were laid. This result has a peculiarity: it is often perceived anthropomorphically, transferring the properties of the experimenter onto the result of his activity. This is especially true for complex devices and instruments. The article shows the nature of the relationship between a person and an instrument as his artificial extrasomatic organ.
Preprint
Full-text available
Once upon a time in ancient Greece, the pre-Socratic Parmenides postulated a holistic being and universe: nothing in it is separated, everything is connected. The pre-Socratic Democritus postulated the exact opposite: being and the whole universe consist of separated atoms, and the whole is the sum of the atoms. In an article published in 1948, Albert Einstein followed Democritus, postulated separation as a basic relation for classical physics, and questioned Parmenides' connection as a basic relation for quantum physics, rejecting the latter as incomplete. Apart from that, both basic relations of nature are also moments of the abstract relation aRb: R separates a and b and connects them at the same time. Quantum theory is based on the field of complex numbers; this can be proven. The fundamental Schrödinger equation is imaginary. It can be derived with the help of Brownian motions, which leads to imaginary Brownian motions and again to the Schrödinger equation. Einstein's classical ontology as well as the quantum ontology can be confirmed empirically by the double-slit experiment. Even more, with the help of the 'which-way' experiment it can be proven that quantum nature is unobservable; this is known as the Born rule. The consequence is that quantum nature is only conceivable! This quantum cognition has been explored by Diederik Aerts for decades, who found in it the same principles at work as are given in quantum theory.
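The double-slit contrast this abstract invokes, between adding amplitudes when coherence is maintained and adding probabilities when which-way information exists, can be sketched numerically (an illustrative toy of my own with equal-amplitude paths, not taken from the article):

```python
import cmath
import math

def intensity_coherent(phi, a1=1 / math.sqrt(2), a2=1 / math.sqrt(2)):
    """Superposition maintained: add amplitudes first, then apply the
    Born rule (square the modulus). Interference fringes appear."""
    return abs(a1 + a2 * cmath.exp(1j * phi)) ** 2

def intensity_decohered(phi, a1=1 / math.sqrt(2), a2=1 / math.sqrt(2)):
    """Which-way information exists: add probabilities, as in classical
    statistics. The relative phase phi drops out entirely."""
    return abs(a1) ** 2 + abs(a2) ** 2

assert abs(intensity_coherent(0) - 2.0) < 1e-12        # constructive fringe
assert abs(intensity_coherent(math.pi)) < 1e-12        # destructive fringe
assert all(abs(intensity_decohered(p) - 1.0) < 1e-12   # flat, no fringes
           for p in (0.0, 1.0, 2.0))
```

The coherent intensity oscillates as 1 + cos(phi), while the decohered one is constant: exactly the operational difference between quantum and classical probability rules discussed above.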
Thesis
Full-text available
In the mid-1980s several bio-inspired approaches to the study of artificial intelligence emerged. Starting from this context and from von Neumann cellular automata, the field of artificial life was developed with the objective of constructing artificial systems capable of presenting behaviors similar to those found in biological phenomena. This thesis recovers the history of artificial life and its relationship with artificial intelligence, presents the difficulties of its development considering Cartesian dualism, and demonstrates the possibility of a more adequate line of research based on the hypothesis of continuity between mind and matter, typical of the general semiotics of Charles Sanders Peirce. Through Peircean semiotics and using the fundamentals of biosemiotics, the semiotic transposition technique is developed: a set of diagrammatic operations to support the study of artificial life. This technique studies the semiotic processes underlying biological phenomena. Then, through isomorphism, derived from category theory, a finite automaton can be created to computationally express certain aspects of the original biological processes. Throughout the research, the learning and memory behavior of a sea slug species, Aplysia californica, was used as an auxiliary element for the formalization of semiotic transposition. Two other biological phenomena — genetic translation and the vacancy chain dynamics related to Pagurus longicarpus, a species of crab — were considered as case studies to demonstrate the general character of semiotic transposition. It is concluded that the use of semiotic theory as the basis for the study of artificial life constitutes an effective instrument for the creation of bio-inspired computational devices.
Article
Full-text available
Many physicists have thought that absolute time became otiose with the introduction of Special Relativity. William Lane Craig disagrees. Craig argues that although relativity is empirically adequate within a domain of application, relativity is literally false and should be supplanted by a Neo-Lorentzian alternative that allows for absolute time. Meanwhile, Craig and co-author James Sinclair have argued that physical cosmology supports the conclusion that physical reality began to exist at a finite time in the past. However, on their view, the beginning of physical reality requires the objective passage of absolute time, so that the beginning of physical reality stands or falls with Craig’s Neo-Lorentzian metaphysics. Here, I raise doubts about whether, given Craig’s Neo-Lorentzian metaphysics, physical cosmology could adequately support a beginning of physical reality within the finite past. Craig and Sinclair’s conception of the beginning of the universe requires a past boundary to the universe. A past boundary to the universe cannot be directly observed and so must be inferred from the observed matter-energy distribution in conjunction with auxiliary hypotheses drawn from a substantive physical theory. Craig’s brand of Neo-Lorentzianism has not been sufficiently well specified so as to infer either that there is a past boundary or that the boundary is located in the finite past. Consequently, Neo-Lorentzianism implicitly introduces a form of skepticism that removes the ability that we might otherwise have had to infer a beginning of the universe. Furthermore, in analyzing traditional big bang models, I develop criteria that Neo-Lorentzians should deploy in thinking about the direction and duration of time in cosmological models generally. For my last task, I apply the same criteria to bounce cosmologies and show that Craig and Sinclair have been wrong to interpret bounce cosmologies as including a beginning of physical reality.
Chapter
The mind-brain problem and free will, including their meaning, historical origins, and significance for philosophy, religion, and society, are all important for psychiatry. Psychiatrists and other mental health workers need to understand mental events, the limits of our understanding of them, and the consequences of those limits for how we perceive mental disorders and manage the patients who suffer from them, as well as their families.
Preprint
The relation between entropy and information has great significance for computation. Based on the strict reversibility of the laws of microphysics, Landauer (1961), Bennett (1973), Priese (1976), Fredkin and Toffoli (1982), Feynman (1985) and others envisioned a reversible computer that allows no ambiguity in the backward steps of a calculation. It is this backward capacity that makes reversible computing radically different from ordinary, irreversible computing. The proposal aims at a higher kind of computer that would deliver the output of a computation together with the original input, with no minimum energy requirement. Hence, information retrievability and energy efficiency through diminished heat dissipation are central goals of quantum computer technology.
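The "no ambiguity in backward steps" property can be illustrated with the Toffoli (controlled-controlled-NOT) gate, one of the universal reversible gates studied by Fredkin and Toffoli. The sketch below is a classical bit-level illustration, not quantum hardware: the gate is its own inverse, so every forward step can be undone exactly and the original input is always recoverable.

```python
# Minimal sketch of logical reversibility: the Toffoli gate flips the
# target bit c iff both control bits a and b are 1. Because XOR-ing twice
# with the same value is the identity, the gate is its own inverse and
# no information is lost in either direction.
def toffoli(a, b, c):
    """Return (a, b, c XOR (a AND b)) for bits a, b, c in {0, 1}."""
    return (a, b, c ^ (a & b))

# Applying the gate twice restores every possible input exactly.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

print(toffoli(1, 1, 0))  # (1, 1, 1): both controls set, so the target flips
```

An irreversible gate such as AND, by contrast, maps both (1, 0) and (0, 0) to 0; the backward step is ambiguous, which is exactly what Landauer's analysis ties to a minimum heat dissipation per erased bit.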
Article
One of the most difficult problems of our time is the problem of consciousness. There are many different theoretical concepts of consciousness, both in Russia and abroad. The situation is aggravated by the fact that researchers sometimes use so-called artificial intelligence as a research model of consciousness. Recently, criminologists have begun to use artificial intelligence, which has come to the aid of the human mind, in their practice. The authors summarize best practices and conceptualize the use of artificial intelligence in some fields of criminology. The study offers methodological and theoretical aspects of solving these problems based on trends in the use of artificial intelligence in practical criminology. The authors investigate the interaction of artificial intelligence with human consciousness. The basis of the comparative analysis is the quantum level of consciousness, that is, a quantum theoretical concept based on the ideas of mathematicians and modern physicists. This paradigm of the exact sciences is fundamentally different from the humanitarian paradigm based on philosophy and psychology. The proposed approach makes it possible to understand the physical meaning and the mechanisms of consciousness and artificial intelligence at the quantum level. The authors consider it important to emphasize that there are deep parallels between the functioning of consciousness and artificial intelligence; the well-known psychologist C. G. Jung and the Nobel laureate physicist W. Pauli discussed similar parallels between idea and matter. The article analyzes the positive and negative aspects of the joint interaction of consciousness and artificial intelligence, revealing the role of the interdisciplinary science of synergetics. Since consciousness and artificial intelligence are quantum in nature, both general and distinctive laws of the functioning of non-linear systems apply to these phenomena.
There is an uncertainty principle whose meaning is as follows: if one parameter of a system is precisely defined, then other parameters cannot be defined and remain uncertain. This boundary is movable. The authors analyze some aspects of criminological uncertainty that arise when socio-psychological distorting effects occur, effects that the psychologists Ph. Zimbardo and S. Milgram investigated in group experiments. The article substantiates the prospects for the development of artificial intelligence and the limitations of its use in criminology. The authors are pessimistic about the development and application of artificial intelligence in criminology: artificial intelligence does not have the "I" status of a personality, although it can be "subordinated" to legal laws of different levels. The use of artificial intelligence in judicial proceedings is problematic, although its functioning offers great opportunities for operational search activities, for example, anti-crime programs that recognize appearance from photos and videos and predict individual criminal behavior. This frees the expert's consciousness from routine work, so the employee can choose a creative approach and pay more attention to ethical issues, which is not available to artificial intelligence.
Preprint
Full-text available
The Great Divide in metaphysical debates about laws of nature is between Humeans who think that laws merely describe the distribution of matter and non-Humeans who think that laws govern it. The metaphysics can place demands on the proper formulations of physical theories. It is sometimes assumed that the governing view requires a fundamental / intrinsic direction of time: to govern, laws must be dynamical, producing later states of the world from earlier ones, in accord with the fundamental direction of time in the universe. In this paper, we propose a minimal primitivism about laws of nature (MinP) according to which there is no such requirement. On our view, laws govern by constraining the physical possibilities. Our view captures the essence of the governing view without taking on extraneous commitments about the direction of time or dynamic production. Moreover, as a version of primitivism, our view requires no reduction / analysis of laws in terms of universals, powers, or dispositions. Our view accommodates several potential candidates for fundamental laws, including the principle of least action, the Past Hypothesis, the Einstein equation of general relativity, and even controversial examples found in the Wheeler-Feynman theory of electrodynamics and retrocausal theories of quantum mechanics. By understanding governing as constraining, non-Humeans who accept MinP have the same freedom to contemplate a wide variety of candidate fundamental laws as Humeans do.
Article
We do not yet understand the fundamental mechanism by which the brain stores and retrieves information. A Universal Brain Code for these functions is proposed here. The basic tenet of the Code is that a memory engram is propagated and guided through the connectome by specific proteins/peptides embedded within the pre-synaptic neuronal membrane, corresponding to information provided by afferent electrical currents to the pre-synaptic neuron. The proposal is intended to provide a working approach to this central brain activity and to begin the process of investigation based on these new and unexplored ideas.
Article
Full-text available
NOTE, September, 2021: This is the corrected second eBook edition of this work. Readers are asked to use this edition. The _Critique of Impure Reason_ has now also been published in a printed edition. To reduce the otherwise high price of this scholarly/technical book of nearly 900 pages and make it more widely available beyond university libraries to individual readers, the non-profit publisher and the author have agreed to issue the printed edition at cost. The printed edition was released on September 1, 2021 and is now available through all booksellers, including Barnes & Noble, Amazon, and brick-and-mortar bookstores under the following ISBN: 978-0-578-88646-6 Commendations of this work, from the back cover of the published edition: “I admire its range of philosophical vision.” – Nicholas Rescher, Distinguished University Professor of Philosophy, University of Pittsburgh, author of more than 100 books. “Bartlett’s _Critique of Impure Reason_ is an impressive, bold, and ambitious work. Careful scholarship is balanced by original analyses that lead the reader to recognize the limits of meaning, knowledge, and conceptual possibility. The work addresses a host of traditional philosophical problems, among them the nature of space, time, causality, consciousness, the self, other minds, ontology, free will and determinism, and others. The book culminates in a fascinating and profound new understanding of relativity physics and quantum theory.” – Gerhard Preyer, Professor of Philosophy, Goethe-University, Frankfurt am Main, Germany, author of many books including _Concepts of Meaning_, _Beyond Semantics and Pragmatics_, _Intention and Practical Thought_, and _Contextualism in Philosophy_. “[This work’s] goal is of a unique and difficult species: Dr. 
Bartlett seeks to develop a formal logical calculus on the basis of transcendental philosophical arguments; in fact, he hopes that this calculus will be the formal expression of the transcendental foundation of knowledge.... I consider Dr. Bartlett’s work soundly conceived and executed with great skill.” – C. F. von Weizsäcker, philosopher and physicist, former Director, Max-Planck-Institute, Starnberg, Germany. “Bartlett has written an American “Prolegomena to All Future Metaphysics.” He aims rigorously to eliminate meaningless assertions, reach bedrock, and place philosophy on a firm foundation that will enable it, like science and mathematics, to produce lasting results that generations to come can build on. This is a great book, the fruit of a lifetime of research and reflection, and it deserves serious attention.” — Martin X. Moleski, former Professor, Canisius College, Buffalo, NY, studies of scientific method, the presuppositions of thought, and the self-referential nature of epistemology. “Bartlett has written a book on what might be called the underpinnings of philosophy. It has fascinating depth and breadth, and is all the more striking due to its unifying perspective based on the concepts of reference and self-reference.” – Don Perlis, Professor of Computer Science, University of Maryland, author of numerous publications on self-adjusting autonomous systems and philosophical issues concerning self-reference, mind, and consciousness.

The _Critique of Impure Reason: Horizons of Possibility and Meaning_ comprises a major and important contribution to the philosophy of science. Thanks to the generosity of its publisher, this massive volume of 885 pages has been published as a free open access eBook.
It inaugurates a revolutionary paradigm shift in philosophical thought by providing compelling and long-sought-for solutions to a wide range of problems that have concerned philosophers of science as well as epistemologists. The book includes a Foreword by the celebrated German physicist and philosopher of science Carl Friedrich von Weizsäcker. The principal objective of the study is to identify the unavoidable limitations of the conceptual frameworks in terms of which knowledge, reference, and meaning are possible. The book establishes a bridge between, on the one hand, a model of philosophy as a science — i.e., rigorous proof-based scientific philosophy — and, on the other, the philosophy of science. The study develops a logically compelling method that enables us both to recognize the boundaries of what is referentially forbidden — the limits beyond which reference becomes meaningless — and second, to avoid falling victims to a certain broad class of conceptual confusions which lie at the heart of major problems of philosophy of science and epistemology. With these ends in view, individual chapters are devoted to a critique of a wide range of fundamental concepts studied by philosophy of science, among them, space, time, space-time, causality, the problem of discovery or invention in general problem-solving, mathematics, and physics, the role of the observer, the perturbation theory of measurement, indeterminacy and uncertainty, complementarity, the ontological status of physical reality, and others. The study culminates in a group of chapters devoted to special and general relativity and quantum theory. In these concluding chapters the purpose is to show the extent to which both relativity physics and quantum theory bear out results that have been reached in a logically compelling manner wholly by means of the approach to conceptual analysis developed in the book. 
Based on original research and rigorous analysis combined with extensive scholarship, the _Critique of Impure Reason_ develops a logically self-validating method that at last provides provable and constructive solutions to a significant number of major philosophical problems in philosophy of science and epistemology. Bartlett is the author or editor of more than 20 books and numerous papers.