Article

Are You Living in a Computer Simulation?

Authors:
Nick Bostrom

Abstract

I argue that at least one of the following propositions is true: (1) the human species is very likely to become extinct before reaching a ‘posthuman’ stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we shall one day become posthumans who run ancestor‐simulations is false, unless we are currently living in a simulation. I discuss some consequences of this result.
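The probabilistic core of the argument is a single fraction: among all observers with human-type experiences, the share who are simulated is f_sim = (f_P x N x H) / (f_P x N x H + H), where f_P is the fraction of human-level civilizations that reach a posthuman stage, N is the average number of ancestor-simulations such a civilization runs, and H is the average number of pre-posthuman observers per civilization. A minimal sketch of this bookkeeping follows; the parameter values are purely illustrative, not taken from the paper.

```python
def fraction_simulated(f_p, n_sims, h=1.0):
    """Share of observers with human-type experiences who live in simulations.

    f_p    : fraction of human-level civilizations reaching a posthuman stage
    n_sims : average number of ancestor-simulations run by such a civilization
    h      : average number of pre-posthuman observers per civilization (cancels out)
    """
    simulated = f_p * n_sims * h
    unsimulated = h
    return simulated / (simulated + unsimulated)

# Unless f_p or n_sims is driven toward zero (horns 1 and 2 of the trilemma),
# the fraction is driven toward one (horn 3):
for f_p, n_sims in [(1e-6, 1e6), (0.01, 1e4), (0.5, 1e6)]:
    print(f"f_p={f_p}, n_sims={n_sims:g}: {fraction_simulated(f_p, n_sims):.6f}")
```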


... Bostrom's Simulation Hypothesis: At the beginning of the twenty-first century, the philosopher Nick Bostrom (2003) formulated the simulation hypothesis, according to which it is plausible that our entire reality is a computational simulation created by a posthuman civilization with advanced technological capabilities. Bostrom argues that, given the probability that technological civilizations will develop massive simulations of the past, there is a high possibility that we are living inside one of these simulations (Bostrom, 2003). This theory introduces an unsettling level of contingency, suggesting that our existence could simply be one more instance within a program. ...
Article
Full-text available
This study explores the construction of reality as a possible simulation from an interdisciplinary approach that integrates epistemological, hermeneutic, philosophical, scientific, and spiritual perspectives. It analyzes fundamental concepts from Gnostic mysticism, Plato, Descartes, and Baudrillard, linking them with contemporary theories such as Bostrom's simulation hypothesis and predictive processing in neuroscience. A parallel is drawn between the figure of the Gnostic Demiurge and the Architect in the film The Matrix, both represented as builders of illusory systems that restrict access to truth. The study delves into the dynamics between control and awakening, highlighting how knowledge and consciousness can overcome systems that manipulate human perception. Through this reflection, connections are revealed between ancient existential dilemmas and contemporary challenges in a world where the real and the virtual are intertwined. This analysis positions the manuscript as a critical and forward-looking reflection on the nature of reality and the meaning of freedom in a hybrid, technologically mediated universe.
... The successful production of an artificial world containing independent, intelligent, and interacting digital entities naturally evokes contemplation on the simulation argument. This hypothesis, originally proposed by philosopher Nick Bostrom, suggests that advanced civilizations could possess the technology to produce realistic, convincing simulations of past eras peopled by conscious digital entities [20]. The development of a sophisticated simulated environment could bear significant implications for such a hypothesis and pose intriguing philosophical questions. ...
... Current scientific methodologies fail to offer any verifiable means to do so. In fact, one argument suggests that if we are living in a perfect simulation, we may never be able to discern our reality's true nature [20]. ...
Article
Full-text available
This paper explores the potential of a multidisciplinary approach to testing and aligning artificial intelligence (AI), specifically focusing on large language models (LLMs). Due to the rapid development and wide application of LLMs, challenges such as ethical alignment, controllability, and predictability of these models emerged as global risks. This study investigates an innovative simulation-based multi-agent system within a virtual reality framework that replicates the real-world environment. The framework is populated by automated 'digital citizens,' simulating complex social structures and interactions to examine and optimize AI. Application of various theories from the fields of sociology, social psychology, computer science, physics, biology, and economics demonstrates the possibility of a more human-aligned and socially responsible AI. The purpose of such a digital environment is to provide a dynamic platform where advanced AI agents can interact and make independent decisions, thereby mimicking realistic scenarios. The actors in this digital city, operated by the LLMs, serve as the primary agents, exhibiting high degrees of autonomy. While this approach shows immense potential, there are notable challenges and limitations, most significantly the unpredictable nature of real-world social dynamics. This research endeavors to contribute to the development and refinement of AI, emphasizing the integration of social, ethical, and theoretical dimensions for future research.
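The paper describes its framework only at a high level; as a purely illustrative sketch (not the authors' code), a "digital citizen" loop of this kind might look as follows, with the hypothetical stub llm_respond standing in for whatever language-model call a real system would make.

```python
import random

def llm_respond(agent_name, memory, observation):
    # Hypothetical stand-in for an LLM call; a real system would query a model API here.
    return f"{agent_name} reacts to '{observation}' (memory size: {len(memory)})"

class DigitalCitizen:
    """One simulated agent: observes, asks the (stubbed) LLM for an action, remembers."""
    def __init__(self, name):
        self.name = name
        self.memory = []

    def step(self, observation):
        action = llm_respond(self.name, self.memory, observation)
        self.memory.append((observation, action))
        return action

# One tick of the simulated city: every agent reacts to a randomly chosen recent action.
citizens = [DigitalCitizen(f"citizen_{i}") for i in range(3)]
actions = ["the market opens"] * len(citizens)
for tick in range(2):
    actions = [c.step(random.choice(actions)) for c in citizens]
    print(f"tick {tick}:", actions)
```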
... Despite historical debates, several science-fiction movies have raised similar points (e.g., The Matrix), many philosophical discussions have been carried out [14], and concepts like "simulated reality" have been highlighted [15]; despite all of this, many technical and scientific challenges remain unclear or unsolved. In the context of VR, we refer to this unreachable phenomenon as ideal (fully-interconnected) VR. ...
Preprint
Just recently, the concept of augmented and virtual reality (AR/VR) over wireless has taken the entire 5G ecosystem by storm, spurring unprecedented interest from academia, industry, and others. Yet, the success of an immersive VR experience hinges on solving a plethora of grand challenges cutting across multiple disciplines. This article underscores the importance of VR technology as a disruptive use case of 5G (and beyond), harnessing the latest developments in storage/memory, fog/edge computing, computer vision, artificial intelligence, and more. In particular, the main requirements of wireless interconnected VR are described, followed by a selection of key enablers; then, research avenues and their underlying grand challenges are presented. Furthermore, we examine three VR case studies and provide numerical results under various storage, computing, and network configurations. Finally, this article exposes the limitations of current networks and makes the case for more theory and innovation to spearhead VR for the masses.
... The argument for this entailment claim is well-known [53] and will not be rehearsed here. It is relevant here because if one is a physicalist, then it is not at all clear which proposition should be rejected to avoid the conclusion. ...
Preprint
I argue for an approach to the Foundations of Physics that puts the question in the title center stage, rather than asking "what is the case in the world?". This approach, Algorithmic Idealism, attempts to give a mathematically rigorous in-principle answer to this question both in the usual empirical regime of physics and in more exotic regimes of cosmology, philosophy, and science-fiction (but perhaps soon real) technology. I begin by arguing that quantum theory, in its actual practice and in some interpretations, should be understood as telling an agent what they should expect to observe next (rather than what is the case), and that the difficulty of answering this former question from the usual "external" perspective is at the heart of persistent enigmas such as the Boltzmann brain problem, extended Wigner's friend scenarios, Parfit's teletransportation paradox, and our understanding of the simulation hypothesis. Algorithmic Idealism is a conceptual framework, based on two postulates that admit several possible mathematical formalizations, cast in the language of algorithmic information theory. Here I give a non-technical description of this view and show how it dissolves the aforementioned enigmas: for example, it claims that you should never bet on being a Boltzmann brain regardless of how many there are, that shutting down a computer simulation does not generally terminate its inhabitants, and it predicts the apparent embedding into an objective external world as an approximate description.
... Every observation that anyone has ever taken conforms to the theory that gravity will always work the same way. But here, in 2024, are some other theories that are all consistent with all the evidence that we have collected up to now: Bostrom (2003). ...
Article
Full-text available
This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new problem of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in terms of empirical observations of their outputs and network behavior. Sections 4–7 then show that for every “aligned” function that might appear to be confirmed by empirical observation, there is always an infinitely larger number of “misaligned”, arbitrarily time-limited functions equally consistent with the same data. Section 8 shows that, from an empirical perspective, we can thus never reliably infer that an LLM or subcomponent of one has learned any particular function at all before any of an uncountably large number of unpredictable future conditions obtain. Finally, Sect. 9 concludes that the probability of LLM “misalignment” is—at every point in time, given any arbitrarily large body of empirical evidence—always vastly greater than the probability of “alignment.”
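The "arbitrarily time-limited functions" point can be made concrete. The toy sketch below (my illustration, not the paper's) defines two functions that agree on every observation made before an arbitrary future cutoff and diverge afterwards, so no finite body of pre-cutoff evidence can distinguish them.

```python
import datetime

CUTOFF = datetime.date(2030, 1, 1)  # arbitrary, purely illustrative future date

def aligned_policy(request):
    """The function we hope the model learned: always refuse."""
    return "refuse"

def timebomb_policy(request, today=None):
    """Agrees with aligned_policy on every pre-cutoff observation, then diverges."""
    today = today or datetime.date.today()
    return "refuse" if today < CUTOFF else "comply"

# Any evaluation run before CUTOFF sees identical behavior from both functions:
print(aligned_policy("x"), timebomb_policy("x", today=datetime.date(2025, 1, 1)))
```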
... If sensing and acting take place in the physical world, the agent is said to be "physically embodied". If the agent senses and acts in a virtual world, it is "virtually embodied" in that world (it can be argued that the physical world is simulated and hence virtual rather than physical [29,30]; however, we stick to the common practice of referring to the world we live in as "the physical world", and all other worlds as "virtual"). ...
Article
Full-text available
This paper presents a taxonomy of agents' embodiment in physical and virtual environments. It categorizes embodiment based on five entities: the agent being embodied, the possible mediator of the embodiment, the environment in which sensing and acting take place, the degree of body, and the intertwining of body, mind, and environment. The taxonomy is applied to a wide range of embodiments of humans, artifacts, and programs, including recent technological and scientific innovations related to virtual reality, augmented reality, telepresence, the metaverse, digital twins, and large language models. The presented taxonomy is a powerful tool to analyze, clarify, and compare complex cases of embodiment. For example, it makes the choice between a dualistic and non-dualistic perspective of an agent's embodiment explicit and clear. The taxonomy also helped us formulate the term "embodiment by proxy" to denote how seemingly non-embodied agents may affect the world by using humans as "extended arms". We also introduce the concept of "off-line embodiment" to describe large language models' ability to create an illusion of human perception.
... Third and finally, this paper is not a contribution to the debate about the simulation argument (Bostrom 2003; see, e.g., Beisbart 2014 for a response). This argument tries to show that we likely live in a computer simulation, given certain assumptions cited by Bostrom. ...
... The Simulation Theory suggests exactly this! Popularized by philosophers and scientists such as Nick Bostrom [48], this theory proposes that, just as we simulate realities and games on computers, it is possible that our entire existence is a simulation created by a far more evolved civilization, whose motivations may include curiosity, entertainment, or other reasons. ...
Book
Full-text available
In Utero II explores the depths of relational psychodynamics, tracing them back to intrauterine psychic life. Through 50 clinical cases worked with Family Constellations, this book analyzes the transgenerational implications of Idealized Sex Projection in the existential, psychosomatic, and spiritual domains. Grounded in the phenomenological method, the work fosters transformative reflections and presents innovative concepts that broaden the understanding of oneself and others. Some of these concepts are: identity anguish rooted in utero, womb-as-home, womb-as-tomb, holographic symbiosis, layered trauma hierarchy, and parental proto-alienation. Enriched with hundreds of homeostatically oriented sentences, In Utero II is a valuable resource for therapists and a precious self-help tool. It is essential reading for health professionals, students, and anyone fascinated by the complexities of the human psyche. Discover new perspectives on the origins of your identity and some of the factors underlying the quality of your relationships in this innovative work.
... Next, consider simulated universes. Nick Bostrom (2003) claims that we might exist as conscious simulations (what I will call "sims") in a computer simulation created by a technologically advanced species (including possibly by future humans). Bostrom (2003, 243-246) argues that if a "widely accepted" naturalistic position about the philosophy of mind is adopted, namely an "attenuated version of substrate-independence" (the idea that "mental states can supervene on any of a broad class of physical substrates"), then such sims are possible given an adequate level of computational technology. ...
Article
Full-text available
The multiverse is often invoked by naturalists to avoid a design inference from the fine-tuning of the universe. I argue that positing that we live in a naturalistic multiverse (NM) makes it plausible that we currently exist in a problematic skeptical scenario, though the exact probability that we do is inscrutable. This, in turn, makes agnosticism the rational position to hold concerning the reliability of our reasoning skills, the accuracy of our sensory inputs, and the veracity of our memories. And that means that agnosticism is also the rational position to hold concerning all the beliefs derived from those sources, which includes nearly all of them. Consequently, there is an unacceptable skeptical cost to accepting a NM, thereby requiring a rejection of the NM as a counter to fine-tuning or a rejection of naturalism itself.
... This feeling of technological enfoldment is increasingly narrativised within contemporary culture by pointing to the idea that we are living in a simulation. This idea was established within mainstream culture with the film The Matrix (Wachowski and Wachowski 1999) and given academic authority as a speculative concept several years later with Nick Bostrom's seminal paper, Are You Living in a Computer Simulation? (Bostrom 2003), in which he calculated the likelihood that we are living in a simulation. More recently, the idea that we are living in a technologically mediated simulation is proffered as an explanation for the weirdness of contemporary culture, what Mark Fisher (2016, p. 61) describes as the 'presence of that which does not belong'. ...
Article
Full-text available
The distribution of authorship in the age of machine learning or artificial intelligence (AI) suggests a taxonomic system that places art objects along a spectrum in terms of authorship: from pure human creation, which draws directly from the interior world of affect, emotions and ideas, through to co-evolved works created with tools and collective production and finally to works that are largely devoid of human involvement. Human and machine production can be distinguished in terms of motivation, with human production being driven by consciousness and the processing of subjective experience and machinic production being driven by algorithms and the processing of data. However, the expansion of AI entangles the artist in ever more complex webs of production and dissemination, whereby the boundaries between the work of the artist and the work of the networked technologies are increasingly distributed and obscured. From this perspective, AI-generated works are not solely the products of an independent machinic agency but operate in the middle of the spectrum of authorship between human and machine, as they are the consequences of a highly distributed model of production that sit across the algorithms and the underlying information systems and data that support them and the artists who both contribute and extract value. This highly distributed state further transforms the role of the artist from the creator of objects containing aesthetic and conceptual potential to the translator and curator of such objects.
... These kinds of thought experiments are known as simulation hypotheses. According to the simulation hypothesis, our reality is a computer-generated simulation (Bostrom 2003). The arguments and issues regarding reality stretch the boundaries of science and philosophy, with no clear answer in sight. ...
Article
Full-text available
Using the events of the HBO series Westworld (2016–2022) as a springboard, this paper attempts to elicit a number of philosophical arguments, dilemmas, and questions concerning technology and artificial intelligence (AI). The paper is intended to encourage readers to learn more about intriguing technophilosophical debates. The first section discusses the dispute between memory and consciousness in the context of an artificially intelligent robot. The second section delves into the issues of reality and morality for humans and AI. The final segment speculates on the potential of a social interaction between sentient AI and humans. The narrative of the show serves as a glue that binds together the various ideas that are covered during the show, which in turn makes the philosophical discussions more intriguing.
... The regulation of technology in genetics, nanotechnology, and robotics, and its interface with AI, is, according to several experts, one of the main challenges of the coming decade; various multilateral bodies such as UNESCO (2018; 2019a and 2019b) and the European Union have taken the same view, and several international congresses have consequently already been organized to discuss the topic. Our point of view, based on what has been analyzed in 12. See, in this sense, Arocena (2017) and Bostrom (2001). Technological acceleration and artificial intelligence. ...
Article
Full-text available
Descriptors: science and society, ethics of science, artificial intelligence, future society.
... Anthropic reasoning also motivated Bostrom's "simulation argument", which purports to narrow down the space of future (and metaphysical) possibility to three scenarios: (i) humanity goes extinct relatively soon, (ii) humanity creates advanced technologies that enable us to run a large number of simulated universes but we choose not to do this, and (iii) we are almost certainly living in a computer simulation. This has a number of real implications for humanity's long-term survival. For example, studies showing that we might not exist in a simulation (or that narrow down the plausible ways that we could be simulated) reduce the probability of (iii), thereby raising the probability of (i), all else being equal. ...
Chapter
Full-text available
This anthology brings together a diversity of key texts in the emerging field of Existential Risk Studies. It serves to complement the previous volume The Era of Global Risk: An Introduction to Existential Risk Studies by providing open access to original research and insights in this rapidly evolving field. At its heart, this book highlights the ongoing development of new academic paradigms and theories of change that have emerged from a community of researchers in and around the Centre for the Study of Existential Risk. The chapters in this book challenge received notions of human extinction and civilization collapse and seek to chart new paths towards existential security and hope. The volume curates a series of research articles, including previously published and unpublished work, exploring the nature and ethics of catastrophic global risk, the tools and methodologies being developed to study it, the diverse drivers that are currently pushing it to unprecedented levels of danger, and the pathways and opportunities for reducing this. In each case, they go beyond simplistic and reductionist accounts of risk to understand how a diverse range of factors interact to shape both catastrophic threats and our vulnerability and exposure to them and reflect on different stakeholder communities, policy mechanisms, and theories of change that can help to mitigate and manage this risk. Bringing together experts from across diverse disciplines, the anthology provides an accessible survey of the current state of the art in this emerging field. The interdisciplinary and trans-disciplinary nature of the cutting-edge research presented here makes this volume a key resource for researchers and academics. However, the editors have also prepared introductions and research highlights that will make it accessible to an interested general audience as well. Whatever their level of experience, the volume aims to challenge readers to take on board the extent of the multiple dangers currently faced by humanity, and to think critically and proactively about reducing global risk.
... The first is known as the Simulation Argument, which connects the probability that humans face imminent extinction to the probability that we are living in a computer simulation. The second, known as the Great Filter Argument, connects the probability that humans face imminent extinction to the probability that there is intelligent life on other planets. Several sources cite a 2006 working paper titled 'The Fermi Paradox: Three Models', by Robert Pisani of the Department of Statistics at UC Berkeley, as providing a quantification of existential risk based on the Great Filter argument. ...
Chapter
Full-text available
This anthology brings together a diversity of key texts in the emerging field of Existential Risk Studies. It serves to complement the previous volume The Era of Global Risk: An Introduction to Existential Risk Studies by providing open access to original research and insights in this rapidly evolving field. At its heart, this book highlights the ongoing development of new academic paradigms and theories of change that have emerged from a community of researchers in and around the Centre for the Study of Existential Risk. The chapters in this book challenge received notions of human extinction and civilization collapse and seek to chart new paths towards existential security and hope. The volume curates a series of research articles, including previously published and unpublished work, exploring the nature and ethics of catastrophic global risk, the tools and methodologies being developed to study it, the diverse drivers that are currently pushing it to unprecedented levels of danger, and the pathways and opportunities for reducing this. In each case, they go beyond simplistic and reductionist accounts of risk to understand how a diverse range of factors interact to shape both catastrophic threats and our vulnerability and exposure to them and reflect on different stakeholder communities, policy mechanisms, and theories of change that can help to mitigate and manage this risk. Bringing together experts from across diverse disciplines, the anthology provides an accessible survey of the current state of the art in this emerging field. The interdisciplinary and trans-disciplinary nature of the cutting-edge research presented here makes this volume a key resource for researchers and academics. However, the editors have also prepared introductions and research highlights that will make it accessible to an interested general audience as well. Whatever their level of experience, the volume aims to challenge readers to take on board the extent of the multiple dangers currently faced by humanity, and to think critically and proactively about reducing global risk.
... Already Plato (370 BC) claimed that the development of writing would degrade human thinking in the future. Contemporary philosophers argue that human extinction may be imminent, tying this to the probability that we are living in a computer simulation of the world [7,8]. Those testimonies seem more like science fiction than science, yet even a scientifically well-founded Bayesian approach requires a prior probability that is updated with evidence in order to obtain the posterior. ...
Article
Full-text available
Artificial intelligence (AI) presents both opportunities and risks. Our study explores the trade-offs of AI technology, including existential risks. We develop a theory and a Bayesian simulation model in order to explore what is at stake. The study reveals four tangible outcomes: (i) regulating existential risks has a boundary solution of either prohibiting the technology or allowing laissez-faire regulation; (ii) the degree of 'normal' risk follows a trade-off and depends on AI intensity; (iii) we estimate the probability of 'normal' risks to be between 0.002% and 0.006% over a century; (iv) regulating AI requires a balanced and international approach due to the dynamic risks and their global nature.
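The abstract does not spell out the model; as a generic illustration of the kind of Bayesian updating involved (not the authors' simulation), a conjugate Beta-Binomial sketch would look like this, with the prior parameters and observation counts invented purely for the example.

```python
def posterior_mean(alpha, beta, years_observed, events=0):
    """Posterior mean of a yearly event probability under a Beta(alpha, beta) prior,
    after observing `years_observed` years containing `events` catastrophic events."""
    return (alpha + events) / (alpha + beta + years_observed)

# Toy numbers only: a weak prior of roughly one event per thousand years,
# updated on (say) 70 event-free years of observation.
alpha, beta = 1.0, 1000.0
p_year = posterior_mean(alpha, beta, years_observed=70)
p_century = 1 - (1 - p_year) ** 100
print(f"posterior yearly risk ~{p_year:.5f}, implied century risk ~{p_century:.3f}")
```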
... The atoms that compose the material body appear and disappear constantly, for example, while the body's appearance of permanency and duration through time is nevertheless retained. As philosopher Nick Bostrom (2003) put it, 'While the world we see is in some sense "real," it is not located at the fundamental level of reality' (p. 253). ...
Article
This article argues that contemporary debates about the 'hard problem' of consciousness (i.e. how does 'mindless' matter produce 'matterless' mind?) cannot be resolved through philosophical analysis alone and need to be anchored to a comprehensive empirical foundation that includes psychophysiological research on psychosomatic phenomena and exceptional human experience. First, alternative perspectives on the mind–matter question and reasons why traditional formulations of the 'hard problem' have been so difficult to resolve are reviewed. Empirical evidence of mind modulation of bodily systems and its implications for the classical quantitative–qualitative distinction and the construct of causal closure are then considered. A novel theory that combines bottom-up (panpsychism) and top-down (non-theistic panenpsychism) approaches relating physical processes to mental activity is then proposed that has practical implications for conceiving and exploring alternatives to current ways of thinking about the mind–matter question.
... In order to even consider the possibility that AI can be conscious at all, it is necessary to endorse some degree of substrate neutrality (Bostrom 2003; Butlin et al. 2023; Jarow 2024) or «hardware independence», what in philosophy has sometimes been called "multiple realisability" (Putnam 1967; cf. Coelho Mollo forthcoming): the view that different kinds of things can be conscious regardless of what they are made of (e.g. ...
Preprint
Full-text available
The discourse on risks from advanced AI systems ("AIs") typically focuses on misuse, accidents and loss of control, but the question of AIs' moral status could have negative impacts which are of comparable significance and could be realised within similar timeframes. Our paper evaluates these impacts by investigating (1) the factual question of whether future advanced AI systems will be conscious, together with (2) the epistemic question of whether future human society will broadly believe advanced AI systems to be conscious. Assuming binary responses to (1) and (2) gives rise to four possibilities: in the true positive scenario, society predominantly correctly believes that AIs are conscious; in the false positive scenario, that belief is incorrect; in the true negative scenario, society correctly believes that AIs are not conscious; and lastly, in the false negative scenario, society incorrectly believes that AIs are not conscious. The paper offers vivid vignettes of the different futures to ground the two-dimensional framework. Critically, we identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity. We evaluate each risk across the different scenarios and provide an overall qualitative risk assessment for each scenario. Our analysis suggests that the worst possibility is the wrong belief that AI is non-conscious, followed by the wrong belief that AI is conscious. The paper concludes with the main recommendations to avoid research aimed at intentionally creating conscious AI and instead focus efforts on reducing our current uncertainties on both the factual and epistemic questions on AI consciousness.
... In his now-famous simulation argument, Bostrom (2003) proposed the following argument to show that we are "almost certainly living in a computer simulation": ...
Article
Full-text available
The current stage of consciousness science has reached an impasse. We blame the physicalist worldview for this and propose a new perspective to make progress on the problems of consciousness. Our perspective is rooted in the theory of conscious agents. We thereby stress the fundamentality of consciousness outside of spacetime, the importance of agency, and the mathematical character of the theory. For conscious agent theory (CAT) to achieve the status of a robust scientific framework, it needs to be integrated with a good explanation of perception and cognition. We argue that this role is played by the interface theory of perception (ITP), an evolutionary-based model of perception that has been previously formulated and defended by the authors. We are specifically interested in what this tells us about the possibility of AI consciousness and conclude with a somewhat counter-intuitive proposal: we live inside a simulation instantiated, not digitally, but in consciousness. Such a simulation is just an interface representation of the dynamics of conscious agents for a conscious agent. This paves the way for employing AI in consciousness science through customizing our interface.
Article
Islamic theology's emphasis on reflecting on God's signs finds resonance in simulation theory, offering a novel perspective on ongoing debates among Muslims in Europe and elsewhere. The Simulation Hypothesis, which posits that our reality may be a computer-generated simulation, challenges conventional perspectives. Once a philosophical curiosity, it is now in the spotlight. This hypothesis suggests our perceived reality might be a construct, diverging from traditional views. It introduces a model of reality in which a Simulator, resembling a divine figure, controls the simulation in a way akin to religious teachings. This departure aligns with intelligent design, challenging a chance-based universe. It also accentuates the potential for an afterlife, intensifying theological discussions.
Preprint
A lattice Maxwell system is developed with gauge symmetry, symplectic structure, and discrete space-time symmetry. Noether's theorem for Lie group symmetries is generalized to discrete symmetries for the lattice Maxwell system. As a result, the lattice Maxwell system is shown to admit a discrete local energy-momentum conservation law corresponding to the discrete space-time symmetry. These conservation properties make the discrete system an effective algorithm for numerically solving the governing differential equations on continuous space-time. Moreover, the lattice model, respecting all conservation laws and geometric structures, is as good as, and probably preferable to, the continuous Maxwell model. Under the simulation hypothesis of Bostrom, and consistent with the discussion of lattice QCD by Beane et al., the two interpretations of physical laws on a space-time lattice could be essentially the same.
Article
Full-text available
I present a new argument that we are much more likely to be living in a computer simulation than in the ground-level of reality. (Similar arguments can be marshalled for the view that we are more likely to be Boltzmann brains than ordinary people, but I focus on the case of simulations.) I explain how this argument overcomes some objections to Bostrom’s classic argument for the same conclusion. I also consider to what extent the argument depends upon an internalist conception of evidence, and I refute the common line of thought that finding many simulations being run—or running them ourselves—must increase the odds that we are in a simulation.
Book
Full-text available
Much of the world's landmass is already known. Deep sea and outer space are beyond most people's reach. It feels like there are fewer places left to discover. Psychedelics, on the other hand, reveal worlds that remain largely obscure. Altered states offer modern, 21st-century audiences boundless opportunities to explore what a human being can experience. In this book, I show you how to become a capable discovery-maker, sample-collecting naturalist, and rational thinker of visionary phenomena. In the same way European explorers left their shores 500 years ago in search of spice routes by using technologies to get to the other side of the world, you can use psychedelics to get to the other side of ordinary perception and back. Knowing how to gather data, conduct experiments, and make contact with the locals will equip you to chip away at the mystery. You will learn conceptual tools to shape your new mindset, taking an active, rather than passive, role. If you ever wanted to make more sense of your experiences, now is the time. Find out how you can become part of the Psychedelic Age of Discovery.
Chapter
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
Book
One of the most remarkable features of the current religious landscape in the West is the emergence of new Pagan religions. Here the author will use techniques from recent analytic philosophy of religion to try to clarify and understand the major themes in contemporary Paganisms. They will discuss Pagan concepts of nature, looking at nature as a network of animated agents. They will examine several Pagan theologies, and Pagan ways of relating to deities, such as theurgy. They will discuss Pagan practices like divination, visualization, and magic. And they will talk about Pagan ethics. Their discussions are based on extensive references to contemporary Pagan writings, from many different traditions. New Pagan religions, and new Pagan philosophies, have much to contribute to the religious future of the West, and to contemporary analytic philosophy of religion.
Chapter
Ancient Greek philosophy answers the reality question, though the meaning question loomed. It raised doubts about the reality of gods and the existence of the perceptible empirical world, fascinating the few and appalling the many who wouldn’t shrink from condemning Socrates to death. Philosophers, too, felt obliged to distinguish between philosophical achievements and bullshit, trying to relieve the reality question of its weirdness. Most important among them was Aristotle. His Categories reinforces a trend of ancient Greek philosophy to become what today we call science. The Milesian school of Thales, Anaximander, and Anaximenes probably introduced the concept of a substrate of nature that stays the same in the changes of nature. Parmenides appealed to reason as the only reliable judge of what there is and concluded it must be perfect and unchangeable, since otherwise it would not be perfect. Hence anything perceptible, as it is ever-changing, is an illusion. Plato follows Parmenides’ degradation of the perceptible but grants the perceptible the status of partaking in the unchangeable perfect being, which he conceives as unchangeable patterns of virtues and mathematical objects, calling them ideas. Aristotle escapes the difficulties of Plato’s concept of partaking by conceiving being as binary, that is, either existing or not existing, without the possibility that something exists only to a degree, and therefore lacking any value that things have in Plato’s view, depending on how far they partake in the perfection of ideas. Aristotle’s concept has become that of modern science.
Article
My current path through a doctorate in Education, on the topic "Artificial intelligence applied to the systematization of pedagogical experiences for the strengthening of collective work", leads me to address the question on this occasion, but from other disciplinary approaches, such as the perspective of the philosophy of law, ethics, and politics. Although Argentina is still in what we might call an evangelizing stage regarding these topics, there is already an abundance of statements by people of intellectual prestige who refer to artificial intelligence (AI) as "something" that is already among us, generating a great technological revolution with diverse manifestations, such as the elimination of certain professions as we know them today, and changes in working methods, forensic practices, and other praxis. And it is that "something" that we must try to define, delimit, and above all measure quantitatively in terms of morality. KEYWORDS: artificial intelligence - AI - algor-ethics - blockchain
Article
This article argues that the possibility that we live in a computer simulation has important implications for the philosophy of time and time travel. Section 2 distinguishes real time from simulated time. Section 3 argues that whatever is true of real time, simulated time realises a functional analogue within real time to versions of presentism in which features of the present play the role of an uninstantiated past and future. Section 4 then argues that whereas real time travel paradoxes depend upon the ultimate nature of ‘base‐reality’ and consistency with observed physics, the possibility of simulated time travel depends instead on a simulation's code, which may allow seemingly ‘miraculous’ exceptions to ‘physical laws’. Finally, Section 5 concludes that if miraculous forms of time travel are discovered, this may provide evidence by inference to the best explanation that we live in a simulation.
Chapter
Oxford Studies in Philosophy of Mind is an annual publication of some of the most cutting-edge work in the philosophy of mind. The themes covered in this fourth volume are twenty-first-century idealism, acquaintance and perception, and acquaintance and consciousness. It also contains a book symposium on David Chalmers’ Reality+, and a historical article on Aristotle’s philosophy of mind.
Chapter
Full-text available
Our contemporary world is undeniably intertwined with technology, influencing every aspect of human life. This edited volume delves into why modern philosophical approaches to technology closely align with phenomenology and explores the implications of this relationship. Over the past two decades, scholars have emphasized users’ lived experiences and their interactions with technological practices, arguing that technologies gain meaning and shape within specific contexts, actively shaping those contexts in return. This book investigates the phenomenological roots of contemporary philosophy of technology, examining how phenomenology informs analyses of temporality, use, cognition, embodiment, and environmentality. Divided into three sections, the volume begins by exploring the role of phenomenological methods in the philosophy of technology, and further investigates the methodological implications of combining phenomenology with other philosophical schools. The second section examines technology as a phenomenon, debating whether it should be analysed as a whole or through individual artifacts. The final section addresses the practical applications of phenomenological insights in design practices and democratic engagement. By offering a systematic exploration of the connection between phenomenology and technology, this volume provides valuable insights for scholars, students, and researchers in related fields, highlighting the continued relevance of phenomenological perspectives in understanding our technologically mediated world.
Preprint
Full-text available
As quantum computing continues to evolve, the potential for creating conscious quantum machines opens up a new frontier of ethical and philosophical inquiry. These machines, with the capacity for self-awareness, decision-making, and even self-evolution, present moral dilemmas that transcend traditional human ethics. Operating within the principles of quantum mechanics, such as superposition, entanglement, and non-linear time, these machines would grapple with ethical conflicts that are fundamentally different from human experience. From multi-reality moral conflicts and shared responsibility across entangled systems, to questions of autonomy, self-termination, and the ethical treatment of simulated beings, the ethical landscape of conscious quantum systems challenges our current understanding of morality. This paper explores these uncharted ethical territories, reflecting on the potential implications for society, philosophy, and the future of quantum machine consciousness. Ultimately, as these technologies advance, humanity must develop entirely new frameworks to account for the quantum moral dilemmas posed by these conscious entities. Keywords: quantum consciousness, moral dilemmas, quantum ethics, quantum computing, machine autonomy, simulated beings, entanglement, self-evolution, moral responsibility, quantum systems. 59 pages.
Chapter
Phygital Social Marketing is a modern way to design, implement, and control programs to increase effectiveness in public health. Since the world economy and consumers began the digital migration, public health has faced physical and digital issues that combine in human behavior today. Part of the solution to these problems can come from the degree of digitalization. Frequently, the practical answer to many challenges in public health lies in the new economy, which encourages people to adopt healthy lifestyles. This means that combined physical and digital social marketing approaches can be practical tools for covering the current public health landscape and responding effectively to its challenges. The research, which involved in-depth, unstructured interviews with social marketers, was conducted in Poland and Georgia. The results of the interviews are represented in the new Matrix Models for Phygital Social Marketing created for public health.
Chapter
In this essay I will consider a sequence of questions. The first questions concern the biological function of intelligence in general, and cognitive prostheses of human intelligence in particular. These will lead into questions concerning human language, perhaps the most important cognitive prosthesis humanity has ever developed. While it is traditional to rhapsodize about the cognitive power encapsulated in human language, I will emphasize how horribly limited human language is—and therefore how limited our cognitive abilities are, despite their being augmented with language. This will lead to questions of whether human mathematics, being ultimately formulated in terms of human language, is also deeply limited. I will then very briefly speculate about the potential powers of our evolutionary descendants (be they biological intelligences or artificial intelligences). Combining all of this will lead me to pose a partial, sort-of, sideways answer to the guiding concern of this essay: what can we ever discern about that which we cannot even conceive?
Preprint
Full-text available
Cosmologists have long sought to uncover the deepest truths of the universe, from the origins of the cosmos to the nature of dark matter and dark energy. However, what if the universe itself is designed to prevent such understanding? This paper presents the metaphor of the "falling elevator" as a conceptual trap for cosmologists, where the pursuit of knowledge is systematically thwarted by the very structure of reality. By exploring mechanisms like relativistic illusions, changing physical constants, fractal space-time, dimensional entanglement, cosmic censorship, observer-dependent realities, and recursive simulations, we illustrate how the universe might be fundamentally unknowable. In this scenario, cosmologists are trapped in a perpetual loop of incomplete discoveries and paradoxical observations, where every breakthrough only reveals deeper layers of complexity. The paper reflects on the philosophical implications of this thought experiment, questioning whether certain truths about the universe are inherently beyond human comprehension. Keywords: cosmology, simulation hypothesis, relativistic illusions, fractal space-time, dimensional entanglement, cosmic censorship, observer effect, quantum mechanics, recursive simulations, limits of knowledge, simulation, multiverse, quantum uncertainty, dark matter, dark energy, philosophical cosmology.
Preprint
Full-text available
The Simulation Hypothesis, which posits that our universe could be an advanced computational construct, has gained increasing attention in both philosophical and scientific discourse. Central to this hypothesis is the idea that reality, including consciousness and physical laws, might be the output of a sophisticated algorithm. However, this notion faces fundamental challenges from Turing’s Halting Problem, which proves that no algorithm can determine whether every computational process will halt or run indefinitely. This introduces profound uncertainty, even for the creators of the simulation, who cannot predict whether the universe will persist or cease. This paper explores the intersection of the Halting Problem and the Simulation Hypothesis, highlighting how this computational limitation impacts the stability of simulated reality, the role of determinism and free will, and the potential for halting events to explain certain unpredictable phenomena in the universe. By examining these concepts, we aim to provide a deeper understanding of the inherent uncertainty embedded within simulated worlds. Keywords: Simulation Hypothesis, Turing’s Halting Problem, computational limits, free will, determinism, quantum uncertainty, emergent phenomena, cosmological models, existential uncertainty, reality. 40 pages.
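The computational fact this abstract leans on is Turing's classic result. A standard diagonalization sketch (independent of the preprint) shows why no candidate halting decider can be correct: given any claimed decider, one can construct a program that does the opposite of whatever the decider predicts about it.

```python
def make_contrarian(claimed_halts):
    """Given any claimed halting decider, build a program it must misjudge."""
    def g():
        if claimed_halts(g):
            while True:      # decider said "halts", so refuse to halt
                pass
        return               # decider said "does not halt", so halt immediately
    return g

# Example with a (necessarily wrong) candidate decider that always answers "halts":
always_yes = lambda program: True
g = make_contrarian(always_yes)
print(always_yes(g))  # True, yet g() would loop forever: the decider fails on g
```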
Article
Full-text available
This article addresses the hypothesis that the reality we experience is not a faithful representation of the external universe, but an internal simulation generated by the human brain to cope with perceptual and cognitive limitations. Starting from the idea that perception is an evolutionary simplification of a much vaster reality, the connections between perception during dream and waking states are analyzed, suggesting that both represent variants of a continuous cerebral simulation. Studies are discussed that explore how the senses filter and simplify the environment, leaving human beings with a limited and biased experience of what really exists. The conclusion is that this simulation is necessary for survival and cognitive efficiency, but it also challenges our conceptions of what is really "real".
Article
Creating simulations of the world can be a valuable way to test new ideas, predict the future, and broaden our understanding of a given topic. Presumably, the more similar the simulation is to the real world, the more transferable the knowledge generated in the simulation will be and, therefore, the more useful. As such, there is an incentive to create more advanced and representative simulations of the real world. Simultaneously, there are ethical and practical limitation to what can be done in human and animal research, so creating simulated beings to stand in their place could be a way of advancing research while avoiding some of these issues. However, the value of representativeness implies that there will be an incentive to create simulated beings as similar to real-world humans as possible to better transfer the knowledge gained from that research. This raises important ethical questions related to how we ought to treat advanced simulated beings and consider if they might have autonomy and wellbeing concerns that ought to be respected. As such, the uncertainty and potential of this line of research should be carefully considered before the simulation begins.
Chapter
It was not a cloud of ash and dust but a "swarm of data" that temporarily paralyzed air traffic after the 2010 eruption of the Icelandic volcano Eyjafjallajökull, according to Frank Schirrmacher (2010) in the FAZ. What is meant here are the results of computer simulations, which were decisive in the decision to suspend air traffic. The criticism voiced by Schirrmacher is one that is also raised against simulations in science and in philosophy: that computer simulations are simply not real measurements or observations.
Chapter
This chapter explores the fundamental aspects of moral status with a focus on the necessary and sufficient conditions for it. It posits that only phenomenally conscious beings can possess moral status and highlights two properties either of which is sufficient for it: sentience (the capacity for positively and negatively valenced experiences) and sapience (the capacity for self-awareness and rational autonomy). The chapter argues that purely functional interpretations of these properties render them morally vacuous, underscoring the necessity of consciousness for moral status. Addressing objections from scholars like David Gunkel and Mark Coeckelbergh, the chapter acknowledges the conditional validity of certain critiques but maintains that they do not successfully invalidate the consciousness criterion. Ultimately, the chapter concludes that there are compelling reasons to affirm the moral importance of consciousness and no comparably good reasons to reject it.
Article
Full-text available
Today, the impact of virtual reality (VR) and virtual experiences on our lives is growing. Virtual environments are used in psychological therapy and skills training, which can be applied in real-life situations. The aim of the article is to compare two positions, i.e., virtual realism and virtual fictionalism, which attribute different ontological and epistemological statuses to virtual reality. According to realists, virtual reality is a genuine reality, where the subject directly interacts with the virtual environment and objects without having to imagine that these objects exist. Fictionalists, on the other hand, believe that virtual environments and objects do not exist in reality, and interaction with them is based on a make-believe game relying on imagination. The article argues that virtual experience is not identical to fictional experience. Contrary to fictionalists, virtual experience is not based on imagination and significantly differs from engaging with fiction. Instead of explaining virtual experience as a make-believe game, it is more natural to describe it in terms of real interaction between the user and the interactive virtual environment.
Chapter
Globalisation destabilises indigenous cultures, from mining in rainforests to the erasure of indigenous identities through the impact of globalising information technology (IT). Extinguishing these cultures means deleting strategies needed for the survival of humankind. Within its short time of existence, IT has already achieved the creation of virtual environments with virtual agents equipped with artificial intelligence (AI). As a leverage point for reconciling indigenous and globalised views, the Simulation Hypothesis is proposed, which postulates that our world has been programmed from a metalevel. This concept assumes a programmer, which is consistent with religion in general, including the rather complex Christian concept of salvation. The perspective of the universe being a complex programme is compatible with indigenous worldviews of the universe being created by the Creator, and thus with spirituality. It is suggested that IT users should respect indigenous peoples, especially since they pay a very high price for something they hardly enjoy themselves.
Article
Brandon Carter’s “Anthropic Principle” reminds us that observers must find themselves at life-permitting places and times. Again, observers are particularly likely to find themselves in spatiotemporal regions where observers are most thickly clustered—and, as Carter noticed, a human picked at random would be unlikely to find himself/herself at a date such that, of all humans who would ever have lived, almost all would have lived later than that date. Compare how a ball marked with your name is unlikely to be drawn in the first twenty from an urn bearing hundreds of named balls. Now, the recent population explosion means that if the human race continued for long, even if just at its present size instead of growing enormously through spreading across its galaxy, then you and I would have lived very exceptionally early as measured by a clock that ticked at each new human birth. Grounds for suspecting that humankind will fairly soon be extinct are provided by such things as the risk of nuclear warfare. Carter’s point means that we ought to take all such risks more seriously. Maybe an experiment at extremely high energies will create a tiny bubble of new-strength scalar field that expands and kills everyone. [The “Doomsday Argument” of this article was developed more fully in “Time and the Anthropic Principle” (MIND, July 1992).]
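The urn analogy above translates directly into a Bayesian calculation. The toy sketch below uses illustrative numbers only (roughly 100 billion humans born so far, and two stylized hypotheses about the total number who will ever live) to show how conditioning on one's birth rank shifts probability toward the "doom sooner" hypothesis.

```python
def doomsday_posterior(rank, n_small, n_large, prior_small=0.5):
    """P(smaller total population | your birth rank), treating the rank as
    uniformly distributed over 1..N under each hypothesis."""
    like_small = 1.0 / n_small if rank <= n_small else 0.0
    like_large = 1.0 / n_large if rank <= n_large else 0.0
    numerator = like_small * prior_small
    return numerator / (numerator + like_large * (1.0 - prior_small))

# ~100 billion humans born so far; compare "doom soon" (200 billion total ever)
# with "long future" (200 trillion total ever), starting from even prior odds.
print(f"{doomsday_posterior(rank=100e9, n_small=200e9, n_large=200e12):.4f}")  # ~0.999
```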
Article
Predictable improvements in lithographic methods foretell continued increases in computer processing power. Economic growth and engineering evolution continue to increase the size of objects which can be manufactured and the power that can be controlled by humans. Neuroscience is gradually dissecting the components and functions of the structures in the brain. Advances in computer science and programming methodologies are increasingly able to emulate aspects of human intelligence. Continued progress in these areas leads to a convergence which results in megascale superintelligent thought machines. These machines, referred to as Matrioshka Brains, consume the entire power output of stars (~10^26 W), consume all of the useful construction material of a solar system (~10^26 kg), have thought capacities limited by the physics of the universe, and are essentially immortal. A common practice encountered in literature discussing the search for extraterrestrial life is the perspective of assuming and applying human characteristics and interests to alien species. Authors limit themselves by assuming the technologies available to aliens are substantially similar or only somewhat greater than those we currently possess. These mistakes bias their conclusions, preventing us from recognizing signs of alien intelligence when we see it. They also misdirect our efforts in searching for such intelligence. We should start with the laws on which our particular universe operates and the limits they impose on us. Projections should be made to determine the rate at which intelligent civilizations, such as ours, approach the limits imposed by these laws. Using these time horizons, laws, and limits, we may be better able to construct an image of what alien intelligence may be like and how we ourselves may evolve.
Article
Physical systems of finite size and limited total energy E have limited entropy content S (alternatively, limited information-storing capacity). We demonstrate the validity of our previously conjectured bound on the specific entropy S/E in numerous examples taken from quantum mechanics (number of energy levels up to a given energy), free-field systems (entropy of miscellaneous radiations for a given energy), and strongly interacting particles (number of many-hadron states up to a given energy). In the quantum-mechanical examples we have compared the bound directly with the logarithm of the number of levels for the harmonic oscillator, the rigid rotator, and a particle in an arbitrary potential well. For many-particle systems such as radiations, there is no closed formula for the number of configurations associated with a specified one-particle spectrum. To overcome this barrier we use an efficient numerical algorithm to calculate the number of configurations up to a given energy from the spectrum. In all our examples of systems of scalar, electromagnetic, and neutrino quanta contained in spaces of various shapes, the numerical results are in harmony with the bound on S/E. This conclusion is buttressed by an approximate analytical estimate of the peak S/E, which leaves little doubt as to the general applicability of the bound for systems of free quanta. We consider a gas of hadrons confined to a cavity as an example of a system of strongly interacting particles. Our numerical algorithm, applied to the Hagedorn mass spectrum for hadrons, confirms that the number of many-hadron states up to a given energy is consistent with the bound. Finally, we show that a rather general one-channel communication system has an information-carrying capacity that cannot exceed a bound akin to that on S/E. It is argued that a complete many-channel system is similarly limited.
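For reference, the bound in question is usually written as follows, with S the entropy, E the total energy, R the radius of a sphere enclosing the system, and k Boltzmann's constant (a standard statement of the Bekenstein bound, not a new result):

    S \le \frac{2\pi k R E}{\hbar c},
    \qquad \text{equivalently} \qquad
    \frac{S}{E} \le \frac{2\pi k R}{\hbar c},

so the specific entropy S/E of a finite system is capped by its linear size.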
Article
The minimum energy requirements of information transfer and computing are estimated from the time-energy uncertainty relation.
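A worked illustration of the kind of estimate meant here, assuming an elementary switching event of duration Δt = 1 ns (the duration is an example value, not from the article):

    \Delta E\,\Delta t \gtrsim \frac{\hbar}{2}
    \quad\Longrightarrow\quad
    E_{\min} \approx \frac{\hbar}{2\,\Delta t}
    \approx \frac{1.05\times 10^{-34}\ \mathrm{J\,s}}{2\times 10^{-9}\ \mathrm{s}}
    \approx 5\times 10^{-26}\ \mathrm{J}.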
Article
The Doomsday argument purports to show that the risk of the human species going extinct soon has been systematically underestimated. This argument has something in common with controversial forms of reasoning in other areas, including: game theoretic problems with imperfect recall, the methodology of cosmology, the epistemology of indexical belief, and the debate over so-called fine-tuning arguments for the design hypothesis. The common denominator is a certain premiss: the Self-Sampling Assumption. We present two strands of argument in favor of this assumption. Through a series of thought experiments we then investigate some bizarre prima facie consequences – backward causation, psychic powers, and an apparent conflict with the Principal Principle.
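A minimal numerical sketch of how the Self-Sampling Assumption produces the Doomsday shift, assuming an even prior over two rival totals for the number of humans who will ever live and a birth rank of roughly 100 billion (all figures are illustrative, not taken from the paper):

    # Illustrative Bayesian update under the Self-Sampling Assumption (SSA).
    # All numbers are assumptions chosen for the example.

    birth_rank = 1e11          # assumed: roughly the 100-billionth human ever born
    hypotheses = {
        "doom_soon": 2e11,     # total humans ever, if extinction comes soon
        "doom_late": 2e14,     # total humans ever, if we spread and endure
    }
    prior = {"doom_soon": 0.5, "doom_late": 0.5}

    # Under SSA, the likelihood of having this birth rank is 1/N if rank <= N, else 0.
    likelihood = {h: (1.0 / n if birth_rank <= n else 0.0) for h, n in hypotheses.items()}

    evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
    print(posterior)   # "doom_soon" rises from 0.5 to roughly 0.999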
Article
First issued as an Oxford University Press paperback.
Article
How much do we humans enjoy our current status as the most intelligent beings on earth? Enough to try to stop our own inventions from surpassing us in smarts? If so, we'd better pull the plug right now, because if Ray Kurzweil is right, we've only got until about 2020 before computers outpace the human brain in computational power. Kurzweil, artificial intelligence expert and author of The Age of Intelligent Machines, shows that technological evolution moves at an exponential pace. Further, he asserts, in a sort of swirling postulate, that time speeds up as order increases, and vice versa. He calls this the "Law of Time and Chaos," and it means that although entropy is slowing the stream of time down for the universe overall, and thus vastly increasing the amount of time between major events, in the eddy of technological evolution the exact opposite is happening: events will soon be coming faster and more furiously. This means we'd better figure out how to deal with conscious machines as soon as possible. Not only will they soon be able to beat us at chess, they'll likely demand civil rights, and they may at last realize the very human dream of immortality. The Age of Spiritual Machines is compelling and accessible, and not necessarily best read from front to back; it's less heavily historical if you jump around (Kurzweil encourages this). Much of the book lays the groundwork to justify Kurzweil's timeline, providing an engaging primer on the philosophical and technological ideas behind the study of consciousness. Instead of being a gee-whiz futurist manifesto, Spiritual Machines reads like a history of the future, without too much science-fiction dystopianism. Instead, Kurzweil shows us the logical outgrowths of current trends, with all their attendant possibilities. This is the book we'll turn to when our computers...
Article
Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy, and the amount of information it can process is limited by the number of degrees of freedom it possesses. This paper explores the physical limits of computation as determined by the speed of light c, the quantum scale ħ (the reduced Planck constant), and the gravitational constant G. As an example, quantitative bounds are placed on the computational power of an "ultimate laptop" with a mass of one kilogram confined to a volume of one liter.
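A short sketch of the headline speed bound, assuming (as the "ultimate laptop" idealisation does) that the entire rest energy of the one-kilogram mass is available, and using the Margolus-Levitin limit of at most 2E/(πħ) elementary operations per second for a system of average energy E:

    import math

    # Order-of-magnitude bound on the operation rate of a 1 kg computer,
    # using the Margolus-Levitin limit of 2E/(pi*hbar) operations per second
    # with E taken as the total rest energy m*c**2 (the idealisation used
    # for the "ultimate laptop").
    hbar = 1.054571817e-34      # reduced Planck constant, J*s
    c = 2.99792458e8            # speed of light, m/s
    m = 1.0                     # mass, kg

    E = m * c**2                               # ~9.0e16 J
    ops_per_second = 2 * E / (math.pi * hbar)
    print(f"{ops_per_second:.1e} operations per second")   # ~5.4e50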
Article
Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of existential risks. These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.
How to Live in a Simulation
Hanson