Preprint

Falsification and consciousness


Abstract

The search for a scientific theory of consciousness should result in theories that are falsifiable. However, here we show that falsification is especially problematic for theories of consciousness. We formally describe the standard experimental setup for testing these theories. Based on a theory's application to some physical system, such as the brain, testing requires comparing a theory's predicted experience (given some internal observables of the system like brain imaging data) with an inferred experience (using report or behavior). If there is a mismatch between inference and prediction, a theory is falsified. We show that if inference and prediction are independent, it follows that any minimally informative theory of consciousness is automatically falsified. This is deeply problematic since the field's reliance on report or behavior to infer conscious experiences implies such independence, so this fragility affects many contemporary theories of consciousness. Furthermore, we show that if inference and prediction are strictly dependent, it follows that a theory is unfalsifiable. This affects theories which claim consciousness to be determined by report or behavior. Finally, we explore possible ways out of this dilemma.
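The experimental setup described in the abstract can be caricatured in a few lines of code. This is a toy sketch, not the paper's formalism: the function names, the imaging variable, and the threshold are all illustrative.

```python
# Toy sketch of the testing setup from the abstract: a theory predicts an
# experience from internal observables (e.g. imaging data), an independent
# procedure infers the experience from report or behavior, and a mismatch
# between the two counts as a falsifying data point.

def predicted_experience(observables):
    # Stand-in for a theory's prediction; "v4_activity" and the 0.5
    # threshold are hypothetical, chosen only for illustration.
    return "red" if observables["v4_activity"] > 0.5 else "not-red"

def inferred_experience(report):
    # Stand-in for inference from verbal report or behavior.
    return report

def falsified(observables, report):
    return predicted_experience(observables) != inferred_experience(report)

# Prediction and inference agree here, so this trial does not falsify.
print(falsified({"v4_activity": 0.9}, "red"))  # False
```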


References
Article
Full-text available
Any theory amenable to scientific inquiry must have testable consequences. This minimal criterion is uniquely challenging for the study of consciousness, as we do not know if it is possible to confirm via observation from the outside whether or not a physical system knows what it feels like to have an inside—a challenge referred to as the “hard problem” of consciousness. To arrive at a theory of consciousness, the hard problem has motivated development of phenomenological approaches that adopt assumptions of what properties consciousness has based on first-hand experience and, from these, derive the physical processes that give rise to these properties. A leading theory adopting this approach is Integrated Information Theory (IIT), which assumes our subjective experience is a “unified whole”, subsequently yielding a requirement for physical feedback as a necessary condition for consciousness. Here, we develop a mathematical framework to assess the validity of this assumption by testing it in the context of isomorphic physical systems with and without feedback. The isomorphism allows us to isolate changes in Φ without affecting the size or functionality of the original system. Indeed, the only mathematical difference between a “conscious” system with Φ > 0 and an isomorphic “philosophical zombie” with Φ = 0 is a permutation of the binary labels used to internally represent functional states. This implies Φ is sensitive to functionally arbitrary aspects of a particular labeling scheme, with no clear justification in terms of phenomenological differences. In light of this, we argue any quantitative theory of consciousness, including IIT, should be invariant under isomorphisms if it is to avoid the existence of isomorphic philosophical zombies and the epistemological problems they pose.
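The relabeling argument can be illustrated with a toy system (my construction, not the paper's): permuting the binary labels of a system's states yields an isomorphic system whose global dynamics match the original, state for state under the relabeling.

```python
# Transition table of a 2-bit system: state -> next state.
f = {0b00: 0b01, 0b01: 0b10, 0b10: 0b11, 0b11: 0b00}

# A permutation of state labels: swap the two bits (its own inverse).
def pi(s):
    return ((s & 0b01) << 1) | ((s & 0b10) >> 1)

# The relabeled system g: apply pi to every state and its successor.
g = {pi(s): pi(f[s]) for s in f}

# Isomorphism check: g(pi(s)) == pi(f(s)) for every state s, i.e. the two
# systems have equivalent global dynamics despite different internal labels.
print(all(g[pi(s)] == pi(f[s]) for s in f))  # True
```

Any quantity that differs between `f` and `g`, such as a Φ value, is sensitive only to the labeling scheme, which is the epistemological worry the article raises.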
Article
Full-text available
Information processing in neural systems can be described and analyzed at multiple spatiotemporal scales. Generally, information at lower levels is more fine-grained but can be coarse-grained at higher levels. However, only information processed at specific scales of coarse-graining appears to be available for conscious awareness. We do not have direct experience of information available at the scale of individual neurons, which is noisy and highly stochastic. Neither do we have experience of more macro-scale interactions, such as interpersonal communications. Neurophysiological evidence suggests that conscious experiences co-vary with information encoded in coarse-grained neural states such as the firing pattern of a population of neurons. In this article, we introduce a new informational theory of consciousness: Information Closure Theory of Consciousness (ICT). We hypothesize that conscious processes are processes which form non-trivial informational closure (NTIC) with respect to the environment at certain coarse-grained scales. This hypothesis implies that conscious experience is confined, due to informational closure, from conscious processing to other coarse-grained scales. ICT proposes new quantitative definitions of both conscious content and conscious level. With these parsimonious definitions and a single hypothesis, ICT provides explanations and predictions of various phenomena associated with consciousness. The implications of ICT naturally reconcile issues in many existing theories of consciousness and provide explanations for many of our intuitions about consciousness. Most importantly, ICT demonstrates that information can be the common language between consciousness and physical reality.
Article
Full-text available
The proposal that probabilistic inference and unconscious hypothesis testing are central to information processing in the brain has been steadily gaining ground in cognitive neuroscience and associated fields. One popular version of this proposal is the new theoretical framework of predictive processing or prediction error minimization (PEM), which couples unconscious hypothesis testing with the idea of ‘active inference’ and claims to offer a unified account of perception and action (Clark 2013, 2016; Friston 2008; Hohwy 2013). Here we will consider one outstanding issue that still looms large at the core of the PEM framework: the lack of a clear criterion for distinguishing conscious states from unconscious ones. In order to fulfill the promise of becoming a unifying framework for describing and modeling cognition, PEM needs to be able to differentiate between conscious and unconscious mental states or processes. We will argue that one currently popular view, that the contents of conscious experience are determined by the ‘winning hypothesis’ (i.e. the one with the highest posterior probability, which determines the behavior of the system), falls short of fully accounting for conscious experience. It ignores the possibility that some states of a system can control that system’s behavior even though they are apparently not conscious (as evidenced by e.g. blindsight or subliminal priming). What follows from this is that the ‘winning hypothesis’ view does not provide a complete account of the difference between conscious and unconscious states in the probabilistic brain. We show how this problem (and some other related problems) for the received view can be resolved by augmenting PEM with Daniel Dennett's multiple drafts model of consciousness. This move is warranted by the similar roles that attention and internal competition play in both the PEM framework and the multiple drafts model.
Article
Full-text available
The dynamical evolution of a system of interacting elements can be predicted in terms of its elementary constituents and their interactions, or in terms of the system’s global state transitions. For this reason, systems with equivalent global dynamics are often taken to be equivalent for all relevant purposes. Nevertheless, such systems may still vary in their causal composition—the way mechanisms within the system specify causes and effects over different subsets of system elements. We demonstrate this point based on a set of small discrete dynamical systems with reversible dynamics that cycle through all their possible states. Our analysis elucidates the role of composition within the formal framework of integrated information theory. We show that the global dynamical and information-theoretic capacities of reversible systems can be maximal even though they may differ, quantitatively and qualitatively, in the information that their various subsets specify about each other (intrinsic information). This can be the case even for a system and its time-reversed equivalent. Due to differences in their causal composition, two systems with equivalent global dynamics may still differ in their capacity for autonomy, agency, and phenomenology.
Article
Full-text available
How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has been accumulated, and many theories have been proposed. Certain theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher order to lower order representations, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Information Integration Theory (IIT) and Recurrent Processing Theory (RPT), identify causal structure with consciousness. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.
Article
Full-text available
Medically induced loss of consciousness (mLOC) during anesthesia is associated with a macroscale breakdown of brain connectivity, yet the neural microcircuit correlates of mLOC remain unknown. To explore this, we applied different analytical approaches (t-SNE/watershed segmentation, affinity propagation clustering, PCA, and LZW complexity) to two-photon calcium imaging of neocortical and hippocampal microcircuit activity and local field potential (LFP) measurements across different anesthetic depths in mice, and to micro-electrode array recordings in human subjects. We find that in both cases, mLOC disrupts population activity patterns by generating (1) fewer discriminable network microstates and (2) fewer neuronal ensembles. Our results indicate that local neuronal ensemble dynamics could causally contribute to the emergence of conscious states.
Article
Full-text available
Integrated Information Theory (IIT) is a prominent theory of consciousness that has at its centre measures that quantify the extent to which a system generates more information than the sum of its parts. While several candidate measures of integrated information (“Φ”) now exist, little is known about how they compare, especially in terms of their behaviour on non-trivial network models. In this article, we provide clear and intuitive descriptions of six distinct candidate measures. We then explore the properties of each of these measures in simulation on networks consisting of eight interacting nodes, animated with Gaussian linear autoregressive dynamics. We find a striking diversity in the behaviour of these measures—no two measures show consistent agreement across all analyses. A subset of the measures appears to reflect some form of dynamical complexity, in the sense of simultaneous segregation and integration between system components. Our results help guide the operationalisation of IIT and advance the development of measures of integrated information and dynamical complexity that may have more general applicability.
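The simulation setting this abstract describes, a small network animated with Gaussian linear autoregressive dynamics, can be sketched as follows. The coupling scale, stability rescaling, and trajectory length are my choices for illustration, not the article's parameters.

```python
import numpy as np

# An 8-node network with Gaussian linear autoregressive dynamics:
# x_{t+1} = A x_t + eps, with eps ~ N(0, I).
rng = np.random.default_rng(0)
n = 8
A = rng.normal(scale=0.1, size=(n, n))       # random coupling matrix
A *= 0.9 / max(abs(np.linalg.eigvals(A)))    # rescale for stability

x = np.zeros(n)
trajectory = []
for _ in range(1000):
    x = A @ x + rng.normal(scale=1.0, size=n)
    trajectory.append(x.copy())

# Empirical covariance of the process: the basic ingredient for Gaussian
# formulations of integrated-information measures.
cov = np.cov(np.array(trajectory).T)
print(cov.shape)  # (8, 8)
```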
Article
Full-text available
The integrated information theory (IIT) is one of the most influential scientific theories of consciousness. It functions as a guiding framework for a great deal of research into the neural basis of consciousness and for attempts to develop a consciousness meter. In light of these developments, it is important to examine whether its foundations are secure. This article does just that by examining the axiomatic method that the architects of IIT appeal to. I begin by asking what exactly the axiomatic method involves, arguing that it is open to multiple interpretations. I then examine the five axioms of IIT, asking: what each axiom means, whether it is indeed axiomatic and whether it could constrain a theory of consciousness. I argue that none of the five alleged axioms is able to play the role that is required of it, either because it fails to qualify as axiomatic or because it fails to impose a substantive constraint on a theory of consciousness. The article concludes by briefly sketching an alternative methodology for the science of consciousness: the natural kind approach.
Article
Full-text available
Cosmological models that invoke a multiverse - a collection of unobservable regions of space where conditions are very different from the region around us - are controversial, on the grounds that unobservable phenomena shouldn't play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.
Article
Full-text available
There have been a number of advances in the search for the neural correlates of consciousness: the minimum neural mechanisms sufficient for any one specific conscious percept. In this Review, we describe recent findings showing that the anatomical neural correlates of consciousness are primarily localized to a posterior cortical hot zone that includes sensory areas, rather than to a fronto-parietal network involved in task monitoring and reporting. We also discuss some candidate neurophysiological markers of consciousness that have proved illusory, and measures of differentiation and integration of neural activity that offer more promising quantitative indices of consciousness.
Conference Paper
Full-text available
Neural networks represent a class of functions for the efficient identification and forecasting of dynamical systems. It has been shown that feedforward networks are able to approximate any (Borel-)measurable function on a compact domain [1,2,3]. Recurrent neural networks (RNNs) have been developed for a better understanding and analysis of open dynamical systems. Compared to feedforward networks they have several advantages which have been discussed extensively in several papers and books, e.g. [4]. Still the question often arises if RNNs are able to map every open dynamical system, which would be desirable for a broad spectrum of applications. In this paper we give a proof for the universal approximation ability of RNNs in state space model form. The proof is based on the work of Hornik, Stinchcombe, and White about feedforward neural networks [1].
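An RNN in state space model form, the form the universal-approximation proof concerns, can be written in a few lines: a hidden state evolves as s_{t+1} = tanh(A s_t + B x_t + b) with readout y_t = C s_t. Dimensions and parameter scales below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim, in_dim, out_dim = 4, 2, 1
A = rng.normal(scale=0.5, size=(state_dim, state_dim))  # state transition
B = rng.normal(scale=0.5, size=(state_dim, in_dim))     # input map
C = rng.normal(scale=0.5, size=(out_dim, state_dim))    # readout
b = np.zeros(state_dim)

def run(inputs):
    # Iterate the state space recursion over an input sequence.
    s = np.zeros(state_dim)
    outputs = []
    for x in inputs:
        s = np.tanh(A @ s + B @ x + b)
        outputs.append(C @ s)
    return outputs

ys = run([rng.normal(size=in_dim) for _ in range(10)])
print(len(ys), ys[0].shape)  # 10 (1,)
```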
Article
Full-text available
The goal of consciousness research is to reveal the neural basis of phenomenal experience. To study phenomenology, experimenters seem obliged to ask reports from the subjects to ascertain what they experience. However, we argue that the requirement of reports has biased the search for the neural correlates of consciousness over the past decades. More recent studies attempt to dissociate neural activity that gives rise to consciousness from the activity that enables the report; in particular, no-report paradigms have been utilized to study conscious experience in the full absence of any report. We discuss the advantages and disadvantages of report-based and no-report paradigms, and ask how these jointly bring us closer to understanding the true neural basis of consciousness.
Article
Full-text available
In the last decade, Giulio Tononi has developed the Integrated Information Theory (IIT) of consciousness. IIT postulates that consciousness is equal to integrated information (Phi). The goal of this paper is to show that IIT fails in its stated goal of quantifying consciousness. The paper will challenge the theoretical and empirical arguments in support of IIT. The main theoretical argument for the relevance of integrated information to consciousness is the principle of information exclusion. Yet, no justification is given to support this principle. Tononi claims there is significant empirical support for IIT, but this is called into question by the creation of a trivial theory of consciousness with equal explanatory power. After examining the theoretical and empirical evidence for IIT, arguments from philosophy of mind and epistemology will be examined. Since IIT is not a form of computational functionalism, it is vulnerable to fading/dancing qualia arguments. Finally, the limitations of the phenomenological approach to studying consciousness are examined, and it will be shown that IIT is a theory of protoconsciousness rather than a theory of consciousness.
Article
Full-text available
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
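A minimal instance of the backpropagation algorithm this abstract describes: a one-hidden-layer network trained on XOR with squared error, gradients computed layer by layer from the output back to the input. This is a toy illustration, not code from the article; the width, learning rate, and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    d_out = (out - y) * out * (1 - out)      # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(axis=0)

print(losses[-1] < losses[0])  # training reduces the squared error
```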
Article
Full-text available
We recently proposed the attention schema theory, a novel way to explain the brain basis of subjective awareness in a mechanistic and scientifically testable manner. The theory begins with attention, the process by which signals compete for the brain's limited computing resources. This internal signal competition is partly under a bottom-up influence and partly under top-down control. We propose that the top-down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the 'attention schema,' in much the same way that it constructs a schematic model of the body, the 'body schema.' The content of this internal model leads a brain to conclude that it has a subjective experience. One advantage of this theory is that it explains how awareness and attention can sometimes become dissociated; the brain's internal models are never perfect, and sometimes a model becomes dissociated from the object being modeled. A second advantage of this theory is that it explains how we can be aware of both internal and external events. The brain can apply attention to many types of information including external sensory information and internal information about emotions and cognitive states. If awareness is a model of attention, then this model should pertain to the same domains of information to which attention pertains. A third advantage of this theory is that it provides testable predictions. If awareness is the internal model of attention, used to help control attention, then without awareness, attention should still be possible but should suffer deficits in control. In this article, we review the existing literature on the relationship between attention and awareness, and suggest that at least some of the predictions of the theory are borne out by the evidence.
Article
Full-text available
Significant advances have been made in the behavioral assessment and clinical management of disorders of consciousness (DOC). In addition, functional neuroimaging paradigms are now available to help assess consciousness levels in this challenging patient population. The success of these neuroimaging approaches as diagnostic markers is, however, intrinsically linked to understanding the relationships between consciousness and the brain. In this context, a combined theoretical approach to neuroimaging studies is needed. The promise of such theoretically based markers is illustrated by recent findings that used a perturbational approach to assess the levels of consciousness. Further research on the contents of consciousness in DOC is also needed. Published in the Annual Review of Neuroscience, Volume 37 (2014).
Article
Full-text available
This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific - it is what it is by how it differs from alternative experiences; integration says that it is unified - irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as "differences that make a difference" within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true "zombies" - unconscious feed-forward systems that are functionally equivalent to conscious complexes.
Article
Full-text available
Personal motivation. The dream of creating artificial devices which reach or outperform human intelligence is an old one. It is also one of the two dreams of my youth, which have never let me go (the other is finding a physical theory of everything). What makes this challenge so interesting? A solution would have enormous implications on our society, and there are reasons to believe that the AI problem can be solved in my expected lifetime. So it’s worth sticking to it for a lifetime, even if it will take 30 years or so to reap the benefits. The AI problem. The science of Artificial Intelligence (AI) may be defined as the construction of intelligent systems and their analysis. A natural definition of a system is anything which has an input and an output stream. Intelligence is more complicated. It can have many faces like creativity, solving problems, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, surviving in an environment, language processing, knowledge, and many more. A formal definition incorporating every aspect of intelligence, however, seems difficult. Most, if not all known facets of intelligence can be formulated as goal
Article
Full-text available
Introduction: the challenge of a science of consciousness. Understanding consciousness has become the ultimate intellectual challenge of this new millennium. Even if philosophers now accept the notion that it is a "real, natural, biological phenomenon literally located in the brain" (Revonsuo, 2001), a view in harmony with the neuroscientist conception that "consciousness is entirely caused by neurobiological processes and realized in brain structures" (Changeux, 1983; Crick, 1994; Edelman, ...
Article
Full-text available
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameter-free theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXItl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXItl is of the order t·2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.
Article
Full-text available
Conscious perception and attention are difficult to study, partly because their relation to each other is not fully understood. Rather than conceiving and studying them in isolation from each other it may be useful to locate them in an independently motivated, general framework, from which a principled account of how they relate can then emerge. Accordingly, these mental phenomena are here reviewed through the prism of the increasingly influential predictive coding framework. On this framework, conscious perception can be seen as the upshot of prediction error minimization and attention as the optimization of precision expectations during such perceptual inference. This approach maps on well to a range of standard characteristics of conscious perception and attention, and can be used to interpret a range of empirical findings on their relation to each other.
Article
Full-text available
Higher-order theories of consciousness argue that conscious awareness crucially depends on higher-order mental representations that represent oneself as being in particular mental states. These theories have featured prominently in recent debates on conscious awareness. We provide new leverage on these debates by reviewing the empirical evidence in support of the higher-order view. We focus on evidence that distinguishes the higher-order view from its alternatives, such as the first-order, global workspace and recurrent visual processing theories. We defend the higher-order view against several major criticisms, such as prefrontal activity reflects attention but not awareness, and prefrontal lesion does not abolish awareness. Although the higher-order approach originated in philosophical discussions, we show that it is testable and has received substantial empirical support.
Article
Full-text available
When viewing a different stimulus with each eye, we experience the remarkable phenomenon of binocular rivalry: alternations in consciousness between the stimuli [1, 2]. According to a popular theory first proposed in 1901, neurons encoding the two stimuli engage in reciprocal inhibition [3-8] so that those processing one stimulus inhibit those processing the other, yielding consciousness of one dominant stimulus at any moment and suppressing the other. Also according to the theory, neurons encoding the dominant stimulus adapt, weakening their activity and the inhibition they can exert, whereas neurons encoding the suppressed stimulus recover from adaptation until the balance of activity reverses, triggering an alternation in consciousness. Despite its popularity, this theory has one glaring inconsistency with data: during an episode of suppression, visual sensitivity to brief probe stimuli in the dominant eye should decrease over time and should increase in the suppressed eye, yet sensitivity appears to be constant [9, 10]. Using more appropriate probe stimuli (experiment 1) in conjunction with a new method (experiment 2), we found that sensitivities in dominance and suppression do show the predicted complementary changes.
Article
Recent work in cognitive and computational neuroscience depicts the human brain as a complex, multi-layer prediction engine. This family of models has had great success in accounting for a wide variety of phenomena involving perception, action, and attention. But despite their clear promise as accounts of the neurocomputational origins of perceptual experience, they have not yet been leveraged so as to shed light on the so-called “hard problem” of consciousness—the problem of explaining why and how the world is subjectively experienced at all, and why those experiences seem just the way they do. To address this issue, I motivate and defend a picture of conscious experience as flowing from “generative entanglements” that mix predictions about the world, the body, and (crucially) our own reactive dispositions.
Article
We investigate opportunities and challenges for improving unsupervised machine learning using four common strategies with a long history in physics: divide and conquer, Occam's razor, unification, and lifelong learning. Instead of using one model to learn everything, we propose a paradigm centered around the learning and manipulation of theories, which parsimoniously predict both aspects of the future (from past observations) and the domain in which these predictions are accurate. Specifically, we propose a generalized mean loss to encourage each theory to specialize in its comparatively advantageous domain, and a differentiable description length objective to downweight bad data and “snap” learned theories into simple symbolic formulas. Theories are stored in a “theory hub,” which continuously unifies learned theories and can propose theories when encountering new environments. We test our implementation, the toy “artificial intelligence physicist” learning agent, on a suite of increasingly complex physics environments. From unsupervised observation of trajectories through worlds involving random combinations of gravity, electromagnetism, harmonic motion, and elastic bounces, our agent typically learns faster and produces mean-squared prediction errors about a billion times smaller than a standard feedforward neural net of comparable complexity, typically recovering integer and rational theory parameters exactly. Our agent successfully identifies domains with different laws of motion also for a nonlinear chaotic double pendulum in a piecewise constant force field.
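The generalized mean loss mentioned in this abstract can be sketched in a few lines. This is my reading of the idea, not the paper's code: per-theory losses are aggregated with a generalized (power) mean, and a negative exponent makes the aggregate dominated by the best-performing theory, which is what encourages each theory to specialize in its advantageous domain.

```python
import numpy as np

def generalized_mean_loss(losses, gamma):
    # Generalized mean of per-theory losses:
    # L_gamma = (mean(l_i ** gamma)) ** (1 / gamma).
    # gamma < 0 weights the smallest (best) loss most heavily.
    losses = np.asarray(losses, dtype=float)
    return np.mean(losses ** gamma) ** (1.0 / gamma)

per_theory = [0.01, 1.0, 5.0]  # losses of three candidate theories
print(generalized_mean_loss(per_theory, gamma=-1.0))  # dominated by 0.01
print(generalized_mean_loss(per_theory, gamma=1.0))   # plain arithmetic mean
```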
Book
A core philosophical project is the attempt to uncover the fundamental nature of reality, the limited set of facts upon which all other facts depend. Perhaps the most popular theory of fundamental reality in contemporary analytic philosophy is physicalism: the view that the world is fundamentally physical in nature. The first half of this book argues that physicalist views cannot account for the evident reality of conscious experience and hence that physicalism cannot be true. However, the book also tries to show that familiar arguments to this conclusion—Frank Jackson’s form of the knowledge argument and David Chalmers’ two-dimensional conceivability argument—are not wholly adequate. The second half of the book explores and defends a radical alternative to physicalism known as “Russellian monism.” Russellian monists believe that (i) physics tells us nothing about the concrete, categorical nature of material entities, and that (ii) it is this “hidden” nature of matter that explains human and animal consciousness. Throughout the second half of the book various forms of Russellian monism are surveyed, and the key challenges facing it are discussed. Ultimately the book defends a cosmopsychist form of Russellian monism, according to which all facts are grounded in facts about the conscious universe.
Article
Accumulating evidence suggests that many findings in psychological science and cognitive neuroscience may prove difficult to reproduce; statistical power in brain imaging studies is low and has not improved recently; software errors in analysis tools are common and can go undetected for many years; and, a few large-scale studies notwithstanding, open sharing of data, code, and materials remains the rare exception. At the same time, there is a renewed focus on reproducibility, transparency, and openness as essential core values in cognitive neuroscience. The emergence and rapid growth of data archives, meta-analytic tools, software pipelines, and research groups devoted to improved methodology reflect this new sensibility. We review evidence that the field has begun to embrace new open research practices and illustrate how these can begin to address problems of reproducibility, statistical power, and transparency in ways that will ultimately accelerate discovery.
Article
Visual awareness is a favorable form of consciousness to study neurobiologically. We propose that it takes two forms: a very fast form, linked to iconic memory, that may be difficult to study; and a somewhat slower one involving visual attention and short-term memory. In the slower form an attentional mechanism transiently binds together all those neurons whose activity relates to the relevant features of a single visual object. We suggest this is done by generating coherent semi-synchronous oscillations, probably in the 40-70 Hz range. These oscillations then activate a transient short-term (working) memory. We outline several lines of experimental work that might advance the understanding of the neural mechanisms involved. The neural basis of very short-term memory especially needs more experimental study.
Article
This paper considers the Cartesian theatre as a metaphor for the virtual reality models that the brain uses to make inferences about the world. This treatment derives from our attempts to understand dreaming and waking consciousness in terms of free energy minimization. The idea here is that the Cartesian theatre is not observed by an internal (homuncular) audience but furnishes a theatre in which fictive narratives and fantasies can be rehearsed and tested against sensory evidence. We suppose the brain is driven by the imperative to infer the causes of its sensory samples; in much the same way as scientists are compelled to test hypotheses about experimental data. This recapitulates Helmholtz's notion of unconscious inference and Gregory's treatment of perception as hypothesis testing. However, we take this further and consider the active sampling of the world as the gathering of confirmatory evidence for hypotheses based on our virtual reality. The ensuing picture of consciousness (or active inference) resolves a number of seemingly hard problems in consciousness research and is internally consistent with current thinking in systems neuroscience and theoretical neurobiology. In this formalism, there is a dualism that distinguishes between the (conscious) process of inference and the (material) process that entails inference. This separation is reflected by the distinction between beliefs (probability distributions over hidden world states or res cogitans) and the physical brain states (sufficient statistics or res extensa) that encode them. This formal approach allows us to appeal to simple but fundamental theorems in information theory and statistical thermodynamics that dissolve some of the mysterious aspects of consciousness.
Article
Attempts to exempt speculative theories of the Universe from experimental verification undermine science, argue George Ellis and Joe Silk.
Article
Can we make progress exploring consciousness? Or is it forever beyond human reach? In science we never know the ultimate outcome of the journey. We can only take whatever steps our current knowledge affords. This paper explores today's evidence from the viewpoint of Global Workspace (GW) theory. First, we ask what kind of evidence has the most direct bearing on the question. The answer given here is ‘contrastive analysis’ -- a set of paired comparisons between similar conscious and unconscious processes. This body of evidence is already quite large, and constrains any possible theory (Baars, 1983; 1988; 1997). Because it involves both conscious and unconscious events, it deals directly with our own subjective experience, as anyone can tell by trying the demonstrations in this article. One dramatic contrast is between the vast number of unconscious neural processes happening in any given moment, compared to the very narrow bottleneck of conscious capacity. The narrow limits of consciousness have a compensating advantage: consciousness seems to act as a gateway, creating access to essentially any part of the nervous system. Even single neurons can be controlled by way of conscious feedback. Conscious experience creates access to the mental lexicon, to autobiographical memory, and to voluntary control over automatic action routines. Daniel C. Dennett has suggested that consciousness may itself be viewed as that to which ‘we’ have access (Dennett, 1978). All these facts may be summed up by saying that consciousness creates global access. How can we understand the evidence? The best answer today is a ‘global workspace architecture’, first developed by cognitive modelling groups led by Alan Newell and Herbert A. Simon. This mental architecture can be described informally as a working theatre. Working theatres are not just ‘Cartesian’ daydreams -- they do real things, just like real theatres (Dennett & Kinsbourne, 1992; Newell, 1990).
They have a marked resemblance to other current accounts (e.g. Damasio, 1989; Gazzaniga, 1993; Shallice, 1988; Velmans, 1996). In the working theatre, focal consciousness acts as a ‘bright spot’ on the stage, directed there by the selective ‘spotlight’ of attention. The bright spot is further surrounded by a ‘fringe’ of vital but vaguely conscious events (Mangan, 1993). The entire stage of the theatre corresponds to ‘working memory’, the immediate memory system in which we talk to ourselves, visualize places and people, and plan actions. Information from the bright spot is globally distributed through the theatre, to two classes of complex unconscious processors: those in the darkened theatre ‘audience’ mainly receive information from the bright spot; while ‘behind the scenes’, unconscious contextual systems shape events in the bright spot. One example of such a context is the unconscious philosophical assumptions with which we tend to approach the topic of consciousness. Another is the right parietal map that creates a spatial context for visual scenes (Kinsbourne, 1993). Baars (1983; 1988; 1997) has developed these arguments in great detail, and aspects of this framework have now been taken up by others, such as the philosopher David Chalmers (1996). Some brain implications of the theory have been explored. Global Workspace (GW) theory provides the most useful framework to date for our rapidly accumulating body of evidence. It is consistent with our current knowledge, and can be enriched to include other aspects of human experience.
Article
This essay critically examines the extent to which binocular rivalry can provide important clues about the neural correlates of conscious visual perception. Our ideas are presented within the framework of four questions about the use of rivalry for this purpose: (i) what constitutes an adequate comparison condition for gauging rivalry's impact on awareness, (ii) how can one distinguish abolished awareness from inattention, (iii) when one obtains unequivocal evidence for a causal link between a fluctuating measure of neural activity and fluctuating perceptual states during rivalry, will it generalize to other stimulus conditions and perceptual phenomena and (iv) does such evidence necessarily indicate that this neural activity constitutes a neural correlate of consciousness? While arriving at sceptical answers to these four questions, the essay nonetheless offers some ideas about how a more nuanced utilization of binocular rivalry may still provide fundamental insights about neural dynamics, and glimpses of at least some of the ingredients comprising neural correlates of consciousness, including those involved in perceptual decision-making.
Article
Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of "perceptual presence" has motivated "sensorimotor theories" which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative "predictive processing" theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These "counterfactually-rich" generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor.
In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception.
Article
Skinner outlines a science of behavior which generates its own laws through an analysis of its own data rather than securing them by reference to a conceptual neural process. "It is toward the reduction of seemingly diverse processes to simple laws that a science of behavior naturally directs itself. At the present time I know of no simplification of behavior that can be claimed for a neurological fact. Increasingly greater simplicity is being achieved, but through a systematic treatment of behavior at its own level." The results of behavior studies set problems for neurology, and in some cases constitute the sole factual basis for neurological constructs. The system developed in the present book is objective and descriptive. Behavior is regarded as either respondent or operant. Respondent behavior is elicited by observable stimuli, and classical conditioning has utilized this type of response. In the case of operant behavior no correlated stimulus can be detected when the behavior occurs. The factual part of the book deals largely with this behavior as studied by the author in extensive researches on the feeding responses of rats. The conditioning of such responses is compared with the stimulus conditioning of Pavlov. Particular emphasis is placed on the concept of "reflex reserve," a process which is built up during conditioning and exhausted during extinction, and on the concept of reflex strength. The chapter headings are as follows: a system of behavior; scope and method; conditioning and extinction; discrimination of a stimulus; some functions of stimuli; temporal discrimination of the stimulus; the differentiation of a response; drive; drive and conditioning; other variables affecting reflex strength; behavior and the nervous system; and conclusion. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Article
Recent experimental studies and theoretical models have begun to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity. The present review focuses on the well-delimited issue of how an external or internal piece of information goes beyond nonconscious processing and gains access to conscious processing, a transition characterized by the existence of a reportable subjective experience. Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and "ignition" of a large-scale prefronto-parietal network. We compare these findings to current theoretical models of conscious processing, including the Global Neuronal Workspace (GNW) model according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal, and cingulate cortices. The clinical implications of these results for general anesthesia, coma, vegetative state, and schizophrenia are discussed.
Article
The integrated information theory (IIT) starts from phenomenology and makes use of thought experiments to claim that consciousness is integrated information. Specifically: (i) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (ii) the quality of experience is specified by the set of informational relationships generated within that complex. Integrated information (Phi) is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms (connections). Together, the set of informational relationships within a complex constitute a shape in Q that completely and univocally specifies a particular experience. Several observations concerning the neural substrate of consciousness fall naturally into place within the IIT framework. Among them are the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the distinct role of different cortical architectures in affecting the quality of experience. Equating consciousness with integrated information carries several implications for our view of nature.
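The notion of "information generated by the whole beyond its parts" can be illustrated with a deliberately simplified proxy: the KL divergence between a joint distribution over two subsystems and the product of their marginals (their mutual information). This is only a toy stand-in, not IIT's Phi, which is defined over cause-effect repertoires and a minimum-information partition; the function name and example distributions are assumptions made for the sketch.

```python
import numpy as np

def whole_minus_parts(joint):
    """KL divergence between a joint distribution over two subsystems
    and the product of its marginals, i.e. their mutual information
    in bits. A crude proxy for "information beyond the parts"; the
    actual Phi of IIT is computed over cause-effect repertoires and
    a minimum-information partition, which this sketch omits.
    """
    joint = np.asarray(joint, dtype=float)
    p1 = joint.sum(axis=1)           # marginal of subsystem 1
    p2 = joint.sum(axis=0)           # marginal of subsystem 2
    prod = np.outer(p1, p2)          # the "parts taken independently"
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / prod[mask])))

# Perfectly correlated binary subsystems: the whole specifies 1 bit
# beyond what the two parts specify independently.
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])

# Independent subsystems: the whole adds nothing beyond the parts.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(whole_minus_parts(correlated))   # 1.0 bit
print(whole_minus_parts(independent))  # 0.0 bits
```

The contrast between the two cases mirrors, in miniature, the abstract's claim that integration is measured above and beyond what the parts generate on their own.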
Article
There are two concepts of consciousness, access consciousness and phenomenal consciousness. But just as the concepts of water and H2O are different concepts of the same thing, so the two concepts of consciousness might come to the same thing in the brain. Some recent papers by Crick and Koch raise issues that suggest that these two concepts of consciousness might have different (though overlapping) neural correlates, despite Crick and Koch's implicit rejection of this idea.
Article
Demonstrating that neural activity 'represents' physical properties of the world such as the orientation of a line in the receptive field of a nerve cell is a standard procedure in neuroscience. However, not all such neural activity will be associated with the mental representations that form the contents of consciousness. In some cases, such as when patients with blindsight correctly 'guess' the location of a stimulus, neural activity is associated with physical stimulation and with appropriate behaviour, but not with awareness. To identify the neural correlates of conscious experience we need to identify patterns of neural activity that are specifically associated with awareness. Experiments aimed at making such identifications require that subjects report some aspect of their conscious experience either verbally or through some pre-arranged non-verbal report while neural activity is measured. If there is some characteristic neural signature of consciousness, then this should be distinguishable from the kinds of neural activity associated with stimulation and/or behaviour in the absence of awareness. It remains to be seen whether the neural signature of consciousness relates to the location of the neural activity, the temporal properties of the neural activity or the form of the interaction between activity in different brain regions.