Brainstorms: Philosophical Essays on Mind and Psychology
Abstract
An anniversary edition of a classic in cognitive science, with a new introduction by the author.
When Brainstorms was published in 1978, the interdisciplinary field of cognitive science was just emerging. Daniel Dennett was a young scholar who wanted to get philosophers out of their armchairs—and into conversations with psychologists, linguists, and computer scientists. This collection of seventeen essays by Dennett offers a comprehensive theory of mind, encompassing traditional issues of consciousness and free will. Using careful arguments and ingenious thought experiments, the author exposes familiar preconceptions and hobbling intuitions. The essays are grouped into four sections: “Intentional Explanation and Attributions of Mentality”; “The Nature of Theory in Psychology”; “Objects of Consciousness and the Nature of Experience”; and “Free Will and Personhood.”
This anniversary edition includes a new introduction by Dennett, “Reflections on Brainstorms after Forty Years,” in which he recalls the book's original publication by Harry and Betty Stanton of Bradford Books and considers the influence and afterlife of some of the essays. For example, “Mechanism and Responsibility” was Dennett's first articulation of his concept of the intentional stance; “Are Dreams Experiences?” anticipates the major ideas in his 1991 book Consciousness Explained; and “Where Am I?” has been variously represented in a BBC documentary, a student's Javanese shadow puppet play, and a feature-length film made in the Netherlands, Victim of the Brain.
... Inflation refers to the fact that AI investigates and empirically tests both philosophy and psychology. AI combines tendencies toward abstraction (in philosophy) and explicit particularity (in psychology) (Dennett, 2017), hence emphasizing more starkly the intrinsic tensions of modernity, e.g., the tension between mind and body (Ekbia, 2008). ...
... Furthermore, the mind must be able to represent causality, it must be active in terms of learning and engaging with components of the environment, and it must have narrative and agency abilities. In addition, the mind must be able to adjust its internal representations of the world (usually called the frame problem), it must have the ability to interpret (inner-self), and to ground its representations in real-world experience in a dynamically structured way, and the components of this whole system have to be synthesized and fused, among other requirements (see e.g., Carter, 2007; Ekbia, 2008; Dennett, 2017). ...
... This is why, since the early nineties, there has "been relatively little movement in the philosophical debate despite the terrific advances within cognitive science and other AI-related fields" (Estrada, 2014, p. 59). Therefore, due to a lack of answers and against the wishes of Dennett (2017), AI is obliged to reinvent itself as an intense and proliferating research area, which amounts to reinventing the wheel already built in philosophy and psychology, as noted by Ekbia (2008). However, we assume that the shortcomings in this context derive from the mainstream approaches to philosophy and psychology, and one can still find aid in marginalized or not fully investigated approaches. ...
By following the arguments developed by Vygotsky and employing cultural-historical activity theory (CHAT) in addition to dialectical logic, this paper attempts to investigate the interaction between psychology and artificial intelligence (AI) to confront the epistemological and methodological challenges encountered in AI research. The paper proposes that AI is facing an epistemological and methodological crisis inherited from a psychology based on dualist ontology. The roots of this crisis lie in the duality between rationalism and objectivism, or in the mind-body rupture that has governed the production of scientific thought and the proliferation of approaches. In addition, by highlighting the sociohistorical conditions of AI, this paper investigates the historical characteristics of the shift of the crisis from psychology to AI. Additionally, we examine the epistemological and methodological roots of the main challenges encountered in AI research by noting that empiricism is the dominant tendency in the field. Empiricism gives rise to methodological and practical challenges, including challenges related to the emergence of meaning, abstraction, generalization, the emergence of symbols, concept formation, functional reflection of reality, and the emergence of higher psychological functions. Furthermore, by discussing attempts to formalize dialectical logic, the paper proposes a qualitative epistemological, methodological, and formal alternative based on contradiction formation, using a preliminary algorithmic model that grasps the formation of meaning as an essential ability for the qualitative reflection of reality and the emergence of other mental functions.
... The problem of inference did not go away with the advent of the computer metaphor. For instance, the problem of perceptual inference remains acknowledged nowadays both in the theoretical and the empirical literature (e.g., Chemero, 2009; Dennett, 1978; Turvey, 2018). And this is true even for the most advanced theories and techniques in computational neuroscience and machine learning, such as representation learning or reinforcement learning (see Raja et al., 2021). ...
The brain-as-computer metaphor has anchored the professed computational nature of mind, wresting it down from the intangible logic of Platonic philosophy to a material basis for empirical science. However, as with many long-lasting metaphors in science, the computer metaphor has been explored and stretched long enough to reveal its boundaries. These boundaries highlight widening gaps in our understanding of the brain’s role in an organism’s goal-directed, intelligent behaviors and thoughts. In search of a more appropriate metaphor that reflects the potentially noncomputable functions of mind and brain, eight author groups answer the following questions: (1) What do we understand by the computer metaphor of the brain and cognition? (2) What are some of the limitations of this computer metaphor? (3) What metaphor should replace the computational metaphor? (4) What findings support alternative metaphors? Despite agreeing about feeling the strain of the strictures of computer metaphors, the authors suggest an exciting diversity of possible metaphoric options for future research into the mind and brain.
... The notion of RF is rooted in Dennett's (1978, 1987) proposal that three stances are available in the prediction of behaviour: the physical stance, the design stance and the intentional stance. He takes predicting the behaviour of a chess-playing computer as his example. ...
The term reflective function (RF) refers to the psychological processes underlying the capacity to mentalize, a concept which has been described in both the psychoanalytic (Fonagy, 1989; 1991) and cognitive psychology literatures (e.g. Morton & Frith, 1995). Reflective functioning or mentalization is the active expression of this psychological capacity intimately related to the representation of the self (Fonagy & Target, 1995; 1996; Target & Fonagy, 1996). RF involves both a self-reflective and an interpersonal component that ideally provides the individual with a well-developed capacity to distinguish inner from outer reality, pretend from ‘real’ modes of functioning, and intra-personal mental and emotional processes from interpersonal communications. Because of the inherently interpersonal origins of how the reflective capacity develops and expresses itself, this manual refers to reflective functioning, and no longer to reflective-self functioning (see Fonagy, Steele, Moran, Steele, & Higgitt, 1991a), as the latter term is too easily reduced to self-reflection, which is only part of what is intended by the concept.
... Ellis (p. 314) quotes Dennett (1978) as saying, "That of which I am conscious is that to which I have access." This entails that what I have no access to is not conscious. ...
... In the 20th century, philosopher Daniel Dennett built on the notion of personhood as intelligence and self-awareness, adding three interdependent cognitive abilities: the capacity to recognize intentional mental states in others, to use language, and to be conscious in a way that other animals were not.[5] Other thinkers attempted to define a "passing grade" for personhood. For example, Fletcher[6] put forward 15 criteria for personhood and used intelligence and intelligence quotient (IQ) scores as a dividing line between persons and nonpersons. ...
1 Background
The purpose of this concept analysis was to examine how the concept of personhood has been used in the nursing literature. The person is central to nursing, as the object of nursing work, or care, and a key element of theory. Health and illness confront conventional notions of personhood based on Western philosophy, delineating the boundaries of life and death and grappling with pathophysiological changes and alterations in capacities that challenge our understanding of what makes a person whole.
2 Methods
Rodgers’ evolutionary method was selected; it emphasizes the relationship between concepts, language, and communities of users. A literature search between 1950 and 2017 generated 760 articles; 54 were retained for analysis.
3 Results
Four themes were identified: (1) personhood and nursing ethics, emphasizing scientific advances, and establishing criteria; (2) personhood as a morally significant, relational process realized through nursing care; (3) personhood lost (or neglected); (4) interventions aimed at understanding, recognizing, and enhancing personhood. Related terms, antecedent concepts, and consequences are explored.
4 Conclusions
This preliminary view of personhood in the nursing literature demonstrated how the concept has been developed, used, and understood. Areas for future research include nursing ethics, theory, and clinical practice, as well as links with other academic disciplines.
People's choices of food and drink, the attitudes they express, and the beliefs that they state are influenced by their political and other identities. At the same time, people's everyday choices depend on the context of available options in ways that are difficult to explain in terms of the choosers’ preferences and beliefs. Such phenomena provoke various questions. Do partisans or conspiracy theorists really believe what they are saying? Given the systematic inconsistency of their choices, in what sense do consumers prefer the items they purchase? More generally, how “flat” is the mind—do we come to decision‐making and choice with pre‐existing preferences, attitudes, and beliefs, or are our explanations for our behavior mere post‐hoc narratives? Here, we argue that several apparently disparate difficulties are rooted in a failure to separate psychologically different types of preferences, attitudes, and beliefs. We distinguish between underlying, inferred, and expressed preferences. These preferences may be expressed in different coordinate spaces and hence support different types of explanatory generalizations. Choices that appear inconsistent according to one type of preference can appear consistent according to another, and whether we can say that a person “really” prefers something depends on which type of preference we mean. We extend the tripartite classification to the case of attitudes and beliefs, and suggest that attributions of attitudes and beliefs may also be ambiguous. We conclude that not all of the mental states and representations that govern our behavior are context‐dependent and constructed, although many are.
It is a popular hypothesis among researchers worldwide that if we manage to construct a lifelike intelligence that depicts most aspects of the human brain, it will be easier for us to understand our own existence. This discussion often ends up in polemic altercations between philosophers, neuroscientists, and technologists on the definition of intelligence. It has also been a subject of interest in both academic and industrial societies, with two prominent concepts emerging at its peak, often treated as one and the same: Artificial Intelligence and Natural Intelligence. While these terms are often used interchangeably, we theorize that they represent two totally distinct and often contradictory constructs. This work aims to portray the most significant divergences between Artificial Intelligence and Natural Intelligence and find out whether those can converge under the current technological advancements. We focus primarily on their accurate definitions, then their inner workings, and their potentials and limitations, enumerating in the process related sociological and ethical consequences. Finally, we show why under the current methods the probability of creating an advanced form of Artificial Intelligence is minimal.
This work seeks to contribute to a discussion that transcends the superficial duality between philia and phobia. For that very reason, scholars from the most diverse fields of study have been invited, with the aim of proposing a roadmap for understanding the social impact of artificial intelligence. Only along a path with these characteristics will it be possible for us to break the shell of simplifications and realize that the true state of the art runs deeper than the dichotomy. At the heart of artificial intelligence there truly exist technical possibilities for solving problems that have long haunted humanity, but it also poses risks that we cannot set aside. Across the sections of this book, the authors lay out these various scenarios in the fields of ecology, education, psychology, agriculture, medicine, and even philosophy and history.
The recent explosion of Large Language Models (LLMs) has provoked lively debate about “emergent” properties of the models, including intelligence, insight, creativity, and meaning. These debates are rocky for two main reasons: The emergent properties sought are not well-defined; and the grounds for their dismissal often rest on a fallacious appeal to extraneous factors, like the LLM training regime, or fallacious assumptions about processes within the model. The latter issue is a particular roadblock for LLMs because their internal processes are largely unknown – they are colossal black boxes. In this paper, I try to cut through these problems by, first, identifying one salient feature shared by systems we regard as intelligent/conscious/sentient/etc., namely, their responsiveness to environmental conditions that may not be near in space and time. They engage with subjective worlds (“s-worlds”) which may or may not conform to the actual environment. Observers can infer s-worlds from behavior alone, enabling hypotheses about perception and cognition that do not require evidence from the internal operations of the systems in question. The reconstruction of s-worlds offers a framework for comparing cognition across species, affording new leverage on the possible sentience of LLMs. Here, we examine one prominent LLM, OpenAI’s GPT-4. Inquiry into the emergence of a complex subjective world is facilitated with philosophical phenomenology and cognitive ethology, examining the pattern of errors made by GPT-4 and proposing their origin in the absence of an analogue of the human subjective awareness of time. This deficit suggests that GPT-4 ultimately lacks a capacity to construct a stable perceptual world; the temporal vacuum undermines any capacity for GPT-4 to construct a consistent, continuously updated, model of its environment. Accordingly, none of GPT-4’s statements are epistemically secure. 
Because the anthropomorphic illusion is so strong, I conclude by suggesting that GPT-4 works with its users to construct improvised works of fiction.
As guests’ norms and behaviours are rapidly evolving in the digital world, explicitly problematizing commensality, a term designating the social aspects of eating with others, is needed in hospitality contexts. This article introduces the "commensal scene" concept, applying Social presence theory to redefine commensality in dining settings amidst digital-age transformations. Challenging traditional views of physical co-presence, it explores the multi-spatial, multi-levelled dynamics of being 'with others' in both physical and digital contexts. The model allows for a deeper understanding of how digital media reshapes presence beyond mere spatial factors. It highlights the evolving nature of guest behaviours and norms, focusing on the interplay between different communication mediums and responsive social spaces. This innovative approach offers new insights into emerging commensal practices, proposing a framework for designing relevant dining experiences in a digitally-influenced world.
Following the cultural-historical activity theory guidelines, this study investigates the potential consistency between scientific methodologies and personality syndromes. While taking care not to fall into rough simplification and misleading generalization, our methodological assumption suggests a line of historical similarity worthy of being investigated deeply in future studies. The study looks into the consistency in the historical development of the methodologies representing ‘the symptoms’ of psychology as a science living through its historical crisis, on one hand, and the personality syndromes representing the ‘implicit methodologies’ of individuals, on the other. Such an approach allows one to draw more on personality syndromes, their taxonomy, and their roots, in addition to the potential predictions of their destiny. A crucial methodological consideration that allows such dependency is that science is a special form (highly abstract and generalized) of creative activity sharing a similar nature to the daily ordinary creative activity of personality. So, science might represent an early historically elaborated version of the ordinary-daily form of activity structure, which allows us to hypothesize that personality syndromes, in their own characteristics, might share the developmental tendency of the noted methodologies rooted in the subjective-objective epistemological rupture as a ground of the historical crisis.
Creativity is considered a global ability and crucial for ordinary-daily and special (e.g., scientific, aesthetic) activities. In this paper, from the position of Cultural-Historical Activity Theory (CHAT), we expand the debate about the creativity crisis and hypothesize that the noted crisis is only the tip of the iceberg represented by the crisis of the postmodern incoherent mind, reflecting the crisis of self-realization as a leading activity in the individualistic epoch. By investigating creativity as an original functionality of the mind, two key points are stressed: first, the halting of the activity system; second, the inconsistency between the objective meanings sphere and the subjective sense-making sphere. Both points represent the epistemological rupture embedded in the mainstream culture and praxis rooted in the internal contradictions of individualism and post-modernism as worldview and practices, leading the mind to close its eyes to the contradictions which are the crucial source of grasping the internal content (abstraction and generalization) of the given experience, hence, a crucial source of creativity. Thus, it is considered that not only is creativity in crisis, but so is the coherence of the mind, as an extreme result of the shattered postmodern existence.
The book is devoted to the reconstruction and analysis of the philosophy of history and philosophy of culture, an outline of which Mieczysław Porębski presented in Z. Powieść (1989). This work is sometimes classified as a postmodern “professorial novel”, while the author of A seeker of sense among the strands of history tries to demonstrate that the bricolage poetics is mainly staffage, as the distinguished art historian and theorist was motivated by the desire to reflect on the historical process and the meaning of European culture for contemporaries. The leitmotif of Z. is the wandering of the title character through the history and literature of the Old Continent, considered in the monograph, among other things, in relation to the composition, the world presented, the autothematism, the philosophical assumptions and the ideological meaning of the novel. The author analyzes the figure of the protagonist, genological issues, the problem of historiography and narrative, and the textual implication of the authorial subject. She considers the dialectic between the search for universal truths about human nature and the imperative to convey the truth of one’s own time, and furthermore presents the futuristic predictions of the author of Iconosphere. Finally, she shows the hermeneutic perspective as the best one to interpret Z., as it is revealed in the actions and utterances of the protagonist himself, who seeks the essential message of tradition and the meaning of historical experience.
Two foundational ethicists of care, Nel Noddings and Eva Feder Kittay, limit the moral community of care to humans. Noddings claims that the reciprocity required for her care ethic cannot be universally present in human relationships with non-humans. Kittay advances that her care ethic requires the cared-for’s assent, or “taking up” of the care, in response to the carer’s actions, which she claims is impossible with non-human cared-fors. But these claims can be disputed. I offer a few examples to contend that ethically meaningful reciprocity is possible in some human relationships with more-than-human entities and that some non-human cared-fors can assent to carers’ actions. Following from the work of Mary Anne Warren and others on moral personhood, “humans” and “persons” can refer to different things: biological organisms and a designation of moral status respectively. There can be persons that are not humans (e.g., legal persons like corporations and chimpanzees, and moral persons like whales and dolphins). Because the concerns of Noddings and Kittay can be addressed and there are non-human persons, I argue that we should reject the human restriction within care ethics. Humans have morally significant relationships with non-human persons and we need to open the realm of care ethics to legitimize and enhance these other relationships in our rich communities.
Sense of agency and sense of ownership are considered crucial in autonomous systems. However, drawbacks still exist regarding how to represent their causal origin and internal structure, either in formalized psychological models or in artificial systems. This paper considers that these drawbacks are based on the ontological and epistemological duality in mainstream psychology and AI. By shedding light on the cultural-historical activity theory (CHAT) and dialectical logic, and by building on and extending related work, this paper attempts to investigate how the noted duality affects investigating the self and “I”. And by differentiating between the space of meanings and the sense-making space, the paper introduces CHAT’s position of the causal emergence of agency and ownership by stressing the twofold transition theory being central to CHAT. Furthermore, a qualitative formalized model is introduced to represent the emergence of agency and ownership through the emergence of the contradictions-based meaning with potential employment in AI.
Beliefs play a central role in our lives. They lie at the heart of what makes us human, they shape the organization and functioning of our minds, they define the boundaries of our culture, and they guide our motivation and behavior. Given their central importance, researchers across a number of disciplines have studied beliefs, leading to results and literatures that do not always interact. The Cognitive Science of Belief aims to integrate these disconnected lines of research to start a broader dialogue on the nature, role, and consequences of beliefs. It tackles timeless questions, as well as applications of beliefs that speak to current social issues. This multidisciplinary approach to beliefs will benefit graduate students and researchers in cognitive science, psychology, philosophy, political science, economics, and religious studies.
There have been increasing challenges to dual-system descriptions of System-1 and System-2, critiquing them as being imprecise and fostering misconceptions. We address these issues here by way of Dennett’s appeal to use computational thinking as an analytical tool; specifically, we employ the Common Model of Cognition. Results show that the characteristics thought to be distinctive of System-1 and System-2 instead form a spectrum of cognitive properties. By grounding System-1 and System-2 in the Common Model we aim to clarify their underlying mechanisms, persisting misconceptions, and implications for metacognition.
The model of reality created by our brain is like a jigsaw puzzle consisting of many pieces connected into a unified picture. How do they combine together while keeping individual characteristics? In neuroscience and philosophy of mind, this question is called the binding problem. The article reveals various aspects of the problem and offers a solution based on the Teleological Transduction Theory.
Consciousness directs the actions of the agent for its own purposive gains. It re-organises a stimulus-response linear causality to deliver generative, creative agent action that evaluates the subsequent experience prospectively. This inversion of causality affords special properties of control that are not accounted for in integrated information theory (IIT), which is predicated on a linear, deterministic cause-effect model. IIT remains an incomplete, abstract, and disembodied theory without explanation of the psychobiology of consciousness that serves the vital agency of the organism.
In our response to a truly diverse set of commentaries, we first summarize the principal topical themes around which they cluster, then address two “outlier” positions (the problem of consciousness has been solved vs. is intractable). Next, we address ways in which commentaries by non-integrated information theory (IIT) authors engage with the specifics of our IIT critique, turning finally to the four commentaries by IIT authors.
Human beings try to interpret and read other minds. This is the process of cognitive empathizing, which can be implicit and intuitive, or explicit and deliberate. The process also qualifies as a form of complex problem-solving, where the focal problem is another person’s mental states. Hence, cognitive empathizing by digitally augmented agents will exhibit the characteristics discussed in the preceding chapter, regarding digitalized problem-solving. It follows, therefore, that augmented agents might combine human myopia and bias, with overly farsighted, artificial sampling and search of other minds. Augmented agents will then misread other minds, often viewing them as unrealistic, irrational, or deviant. This chapter examines the origins and implications of these effects, especially for interpersonal trust and cooperation.
This thesis is concerned with self-ascriptive belief. I argue that one’s lower-order belief can be fixed from the reflective level. One reasons about whether p is the case and it is on the basis of one’s endorsement of p that one comes to believe p. I argue that one’s self-ascriptive belief can also be fixed from the reflective level. One reasons about whether p is the case and it is on the basis of one’s endorsement of p that one comes to self-ascribe the belief p. I further suggest that it is possible for the reflective way of fixing lower-order belief to fail but the reflective way of fixing self-ascriptive belief to succeed. When this happens, one is in a state of believing that she believes p when in fact one does not believe p. This suggests that the state of believing that one believes p and the state of believing p are distinct states and that the state of believing that one believes p does not necessitate the state of believing p. It also raises a sceptical worry about whether one’s self-ascriptive belief amounts to knowledge. In Chapter 1, I situate my discussions in the existing literature, focusing on the constitutive view of self-ascriptive belief. In Chapter 2, I use an everyday case in which a subject self-ascribes the belief that p and is later surprised that p to motivate the possibility that there are different levels at which beliefs are fixed. In Chapter 3, I develop an account of ratiocination and argue that the conclusion of ratiocination is in the form of I ought to believe p. Hence, at the end of ratiocination, one is in a state of believing that I ought to believe p. Also in Chapter 3, I discuss how one’s belief that I ought to believe p initiates a top-down fixation of the corresponding lower-order belief, and why it is possible for the top-down fixation process of a rational subject to terminate before it fixes the lower-order belief. In Chapter 4, I discuss the transparency account of self-knowledge.
I first criticise the transparency account’s claim that a rational subject’s endorsing p necessarily leads to believing p. Someone who ratiocinates and concludes that p but does not believe p because the top-down fixation process terminates early is an example of how a rational subject can endorse p without believing p. I then draw on the transparency account to argue that from a rational subject’s first-person perspective, if she self-ascribes a belief to herself and if she endorses that p, she will self-ascribe the belief that p. If this is right, then one can self-ascribe the belief that p because one endorses p but in fact does not believe p because one’s endorsement fails to fix the lower-order belief. In Chapter 5, I return to the constitutive account, explaining why its central claim should be rejected. I also reject the incorrigibility thesis, which holds that a self-ascriptive belief that p entails the lower-order belief that p. Finally, I raise a number of puzzles concerning the epistemic status of self-ascriptive belief.
Advancements in novel neurotechnologies, such as brain computer interfaces (BCI) and neuromodulatory devices such as deep brain stimulators (DBS), will have profound implications for society and human rights. While these technologies are improving the diagnosis and treatment of mental and neurological diseases, they can also alter individual agency and estrange those using neurotechnologies from their sense of self, challenging basic notions of what it means to be human. As an international coalition of interdisciplinary scholars and practitioners, we examine these challenges and make recommendations to mitigate negative consequences that could arise from the unregulated development or application of novel neurotechnologies. We explore potential ethical challenges in four key areas: identity and agency, privacy, bias, and enhancement. To address them, we propose (1) democratic and inclusive summits to establish globally-coordinated ethical and societal guidelines for neurotechnology development and application, (2) new measures, including “Neurorights,” for data privacy, security, and consent to empower neurotechnology users’ control over their data, (3) new methods of identifying and preventing bias, and (4) the adoption of public guidelines for safe and equitable distribution of neurotechnological devices.
Bowker’s Age of Potential Memory describes a new era characterized by a culture of knowledge production that fosters and stifles certain forms of statements depending on the logics that subtend them. Through processes of ubiquitous data collection, analysis, and feedback, individuals are increasingly reduced to users; users are re-created as data doubles or data doppelgangers, post hoc, through the aggregation and analysis of their data traces. This discursive transformation of the human that will arise in relation to living alongside and through these doubles or doppelgangers is difficult to understand within the framework of extant disciplinary silos. And yet methods that connect disciplines are emerging. To realize these connections, translational work is required. This paper explores the complementarity of digital humanities (DH) and humanistic human-computer interaction (hHCI) through the lens of distant reading. I focus on distant reading—topic modelling in particular—because of its methodological popularity and relation to discourse. I argue that distant reading comprises a useful connection between these two young domains: a pivot that allows for the inter- or transdisciplinary study of the future human through the analysis of its potential sociotechnical, discursive compositions.
In this article, we present the main methodological principles of symptom networks in psychopathology. This topological approach links entities from different scales of analysis of an individual (from genetics to behavior, via cerebral connectivity). Symptom networks offer an alternative to the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the Research Domain Criteria (RDoC) without excluding them: they exceed or circumvent some limits of these classifications while contributing to their stratification and organization. Beyond the originality of its methodology, this program proposes a redefinition of mental illness that modifies the conception of psychiatry. But the future of symptom networks is still uncertain: they must meet an epistemological and methodological challenge, and at the same time convince the community of mental health researchers and clinicians of their utility and value.
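The core idea of a symptom network — symptoms as nodes, pairwise associations as weighted edges, with central symptoms hypothesized to sustain the disorder — can be sketched in a few lines. The symptom names and edge weights below are entirely illustrative, not data from the cited article.

```python
# Hypothetical symptom network: nodes are symptoms, weighted edges are
# pairwise associations that would be estimated from patient data.
edges = {
    ("insomnia", "fatigue"): 0.6,
    ("fatigue", "concentration_loss"): 0.5,
    ("insomnia", "worry"): 0.4,
    ("worry", "concentration_loss"): 0.3,
}

def strength(node):
    """Node strength: the sum of edge weights touching the node — a simple
    centrality index used to flag symptoms that may sustain the disorder."""
    return sum(w for pair, w in edges.items() if node in pair)

nodes = {n for pair in edges for n in pair}
central = max(nodes, key=strength)
print(central, round(strength(central), 2))  # fatigue is most central here
```

In the network literature this kind of centrality analysis motivates treatment targeting: intervening on a high-strength symptom is hypothesized to deactivate the network as a whole.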
© 2020 médecine/sciences – Inserm.
Trigger-action programming (TAP) is a programming model enabling users to connect services and devices by writing if-then rules. As such systems are deployed in increasingly complex scenarios, users must be able to identify programming bugs and reason about how to fix them. We first systematize the temporal paradigms through which TAP systems could express rules. We then identify ten classes of TAP programming bugs related to control flow, timing, and inaccurate user expectations. We report on a 153-participant online study where participants were assigned to a temporal paradigm and shown a series of pre-written TAP rules. Half of the rules exhibited bugs from our ten bug classes. For most of the bug classes, we found that the presence of a bug made it harder for participants to correctly predict the behavior of the rule. Our findings suggest directions for better supporting end-user programmers.
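The if-then rule model this abstract describes can be sketched as a tiny rule engine. The rule ("if motion is detected after 22:00, turn on the hallway light") and all names are illustrative, not drawn from the systems the study evaluated; note how the condition quietly encodes a temporal assumption, the kind of detail the paper's timing-related bug classes concern.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    """A trigger-action rule: when `trigger` fires and `condition` holds, run `action`."""
    trigger: str
    condition: Callable[[Dict[str, Any]], bool]
    action: Callable[[Dict[str, Any]], None]

def dispatch(event: str, state: Dict[str, Any], rules: List[Rule]) -> None:
    """Fire every rule whose trigger matches the event and whose condition holds."""
    for rule in rules:
        if rule.trigger == event and rule.condition(state):
            rule.action(state)

# "If motion is detected after 22:00, turn on the hallway light."
rules = [Rule(
    trigger="motion_detected",
    condition=lambda s: s["hour"] >= 22,
    action=lambda s: s.update(hallway_light="on"),
)]

state = {"hour": 23, "hallway_light": "off"}
dispatch("motion_detected", state, rules)
print(state["hallway_light"])  # → on
```

Even this toy version exposes a control-flow question of the sort the study probes: nothing ever turns the light off, so a user who expects "on while motion continues" semantics rather than "on once triggered" would be surprised by its behavior.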
The role of sleep and dreaming in maintaining emotional stability represents a very tangible and practical example of protoconsciousness as a mental state that supports the proper functioning of normal waking consciousness. Normal sleep has been shown to promote basic mammalian mechanisms of emotion regulation such as habituation, extinction and physiological homeostasis (Pace-Schott et al. 2009a, b; McEwen 2006). Sleep deprivation experiments suggest that sleep is also essential to cognitively based emotion regulatory functions such as accurate identification of facial emotion (van der Helm et al. 2010). Dreaming has been widely hypothesized to take part in this emotion regulatory process. For example, Rosalind Cartwright has suggested that negative affect is progressively ameliorated across dreams elicited from successive REM periods of a night in mildly depressed college students (Cartwright et al. 1998a). Similarly, she has linked a pattern of progression from negative early dreams to positive late dreams across the night with remission at 1 year in persons meeting Beck Depression Inventory criteria for depression (Cartwright et al. 1998b). The pattern of brain activation across sleep stages revealed by PET studies, which show global de-activation in NREM followed by selective re-activation of limbic structures that include core elements of the brain’s fear and reward processing networks, suggests that both positive and negative emotional extremes could be moderated during REM and that REM sleep dreaming may reflect a subjective experience of this process (Pace-Schott 2010). Indeed, Tore Nielsen and Ross Levin have suggested that these REM-activated limbic structures regulate emotion during REM sleep via extinction processes, and that, in PTSD, this process is disrupted, resulting in both nightmares and impaired daytime emotion regulation (Levin and Nielsen 2007).
Therefore, functionality in terms of emotional homeostasis has been attributed not only to the selectively activated physiology of REM itself but also to its subjective manifestation, REM sleep dreaming. Protoconsciousness theory posits “A primordial state of brain organization that is a building block for consciousness” (Hobson 2009). Hobson (2009) suggests that this primordial state of consciousness is prominent prenatally and in infancy, when it supports the developing “secondary consciousness” of later childhood and adulthood. Hobson posits further that protoconsciousness then continues throughout life, especially during REM sleep dreaming, functioning in support of waking consciousness. If consciousness can be profitably described and compared between brain states in terms of its component formal domains, as suggested in Hobson’s first lecture of the current series, then certainly the emotional domain is one in which support of waking function is ongoing and essential, given the lifelong nature of stressors and other challenges to proper functioning of the emotional domain. And, as most clearly seen during acute stress or in the disorders of emotion (affective and anxiety), dysregulation in the emotional domain has innumerable knock-on effects on all other realms of adult waking secondary consciousness, impacting higher cognitive functions such as selective attention, the ability to reason and the ability to plan prospectively. Therefore, the nightly support of waking consciousness, whether as a function of a protoconscious REM state or the physiological processes of sleep itself, represents an undeniable and essential function of sleep.
We define the problem addressed at the eighteenth Attention and Performance symposium as that of explaining how voluntary control is exerted over the organization and activation of cognitive processes in accordance with current goals, without appealing to an all-powerful but ill-defined "executive" or controlling "homunculus." We provide background to the issues and approaches represented in the seven parts of the volume and review each chapter, mentioning also some other contributions made at the symposium. We identify themes and controversies that recur through the volume: the multiplicity of control functions that must be invoked to explain performance even of simple tasks, the limits of endogenous control in interaction with exogenous influences and habits, the emergence of control through top-down "sculpting" of reflexive procedures, the debate between structural and strategic accounts of capacity limits, the roles of inhibition and working memory, and the fertile interactions between functional and neural levels of analysis. We identify important control issues omitted from the symposium. We argue that progress is at last being made in banishing, or fractionating, the control homunculus.