Article

Autopoiesis and Cognition: The Realization of the Living

Authors: Humberto R. Maturana and Francisco J. Varela

... In this sense, the K-field supports the recursive enactment of coherence through constraint, enabling the emergence of structured forms from superposed potentialities. This aligns with the autopoietic principle that systems maintain their identity through continuous self-production and structural coupling with their informational context [43], [10]. Similarly, in a reflexive cycle, emergent agents, through their interactions within the superstructure, engage with nexuses as structured points of exchange that coordinate systemic behaviours through autopraxis. ...
... The observable properties, including their temporal stability, functional specificity, and adaptive capacity, emerge through a dynamic interplay of exonexus and endonexus processes, building upon but substantially extending Maturana and Varela's (1980) concept of autopoietic closure. Where traditional autopoiesis theory describes general self-referential dynamics, this framework introduces a critical distinction between exonexus flows (directed toward the superstructure) and endonexus flows (directed toward the substructure), thus providing new explanatory power for understanding system persistence and adaptation. ...
... Agency emerges from component interactions, allowing the system to influence and modify behaviour in response to stimuli. Some systems maintain distinct boundaries (biological organisms, social institutions), while others have fluid, porous boundaries [43] [34]. An invasive species destabilises an ecological system, just as a financial shock propagates through global markets. ...
Preprint
Full-text available
Informational Realism is a philosophical framework that proposes information, not matter, as the fundamental substance of reality. Building on and extending Critical Realism, it offers a unified view of physical systems, consciousness, and complexity through informational dynamics. At the centre of this framework is the Fisher Information Field Theory (FIFT), a model derived from Frieden's Extreme Physical Information (EPI) principle, which explains how structured reality emerges from interactions between two foundational fields: the J-field, a latent reservoir of informational potential, and the I-field, which organises this potential into coherent, observable patterns. FIFT models these dynamics using information geometry, where information flows are shaped by probabilistic topologies and constrained by Fisher metrics. The theory introduces mediating structures (the B-manifolds and H-manifold) that facilitate the translation of hidden potentials into tangible realities. The framework also incorporates a third field, K, which provides long-term coherence and recursive adaptability, enabling transitions across systemic layers in alignment with Turchin's meta-system transition (MST) theory. Through the Cybernetic Archivist thought experiment, the model demonstrates how observation actively reshapes informational fields, reinforcing some patterns while suppressing others. This positions consciousness not as a byproduct of matter, but as an emergent feature of recursive, multi-scale informational processes. Applications include sustainable resource management, adaptive AI, and the structuring of the Internet of Things, all framed within a non-reductive, cybernetic ontology. FIFT contrasts with Informational Structural Realism (ISR) by offering a generative rather than descriptive account of reality. Whereas ISR maps informational relations, FIFT explains how such relations form, evolve, and sustain coherent systems. Together, they offer a dual foundation for understanding a reality shaped not by substance, but by structured flows of information.
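For orientation, the standard quantities behind Frieden's EPI principle, which the abstract builds on, are sketched below. The mapping of the abstract's I-field and J-field onto Frieden's information functionals I and J is my reading of the wording, not something stated in the source.

\[
I[p] \;=\; \int \frac{\bigl(\nabla p(x)\bigr)^{2}}{p(x)}\,dx \;=\; 4\int \bigl(\nabla q(x)\bigr)^{2}\,dx,\qquad q(x)=\sqrt{p(x)},
\]
\[
K \;=\; I - J \;\longrightarrow\; \text{extremum (EPI principle)},
\]

where I is the Fisher information carried by the observed distribution p and J is the "bound" or source information characterizing the measured phenomenon; EPI obtains physical laws by extremizing their difference K.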
... A promising young experimental physiologist and early convert to the emerging mathematical biology, Goodwin (1978) developed his "cognitive view of biological processes" in an intellectual environment at the cutting edge of molecular and theoretical biology, thanks to his close association with British developmental biologist Conrad H. Waddington, who helped lay the foundations for systems biology, epigenetics, and evolutionary developmental biology. Goodwin had everything going for him, at a time when resonant ideas were developing in diverse quarters (Pattee 1969; Maturana 1970; Popper 1972; Waddington 1972; Campbell 1974). Yet Goodwin's cognitive biology, as proposed, failed. ...
... From this excursion I extracted two broad families of approaches to the living state: self-organizing complex systems (SOCS) (exemplified by Bertalanffy 1968; Schrödinger 1944/1967; Elsasser 1975; Rosen 1985a; Kauffman 1993, 2000) and autopoiesis (Maturana 1970; Maturana and Varela 1980). Of these scientists, only Maturana and Varela drew an explicit connection between biological organization and cognition, however. ...
Article
Full-text available
Cognitive biology, as a scientific program-in-waiting, is the direct (if unacknowledged) offspring of the 20th century revolution in molecular biology, which revealed for the first time the deep, nonmetaphorical parallels between the activities of biological components and processes and the knowledge-generating capabilities characteristic of cognition. The article examines cognitive biology’s parentage—Brian C. Goodwin and Ladislav Kováč—and the context which gave birth to it, twice. Special reference is made to Kováč, without whose work, which is honored in this special issue, cognitive biology as such could have perished. Putting to one side Kováč’s own continuing work in the area, cognitive biology developed in the 21st century both in ways he and Goodwin (who died in 2009) would recognize and in ways they would not. One of the paths taken within their lineage is my own, which has travelled under different labels (the biogenic approach to cognition, basal cognition) and developed, also independently, from unorthodox beginnings. It is important to emphasize that cognitive biology is not simply the “biologizing” of the study of cognition. In a very real sense, cognitive biology is not about cognition—as a biological function of whole organisms—at all. It is a recognition that biological processes, what normally passes for mere physiology and development, have properties traditionally associated with cognitive capacities in animals, properties that are inadequately captured by a generic (usually poorly specified) notion of “information processing.” Cognitive biology is related to the search for the biological basis of cognition, and does much to illuminate that search, but was never motivated by that search. It was motivated entirely by the search for a more general biological theory. Inspired by Kováč’s seminal “Fundamental Principles of Cognitive Biology,” a considerably expanded set of principles is gathered here for the first time from multiple sources. Together they show how cognitive biology reunites the sciences of life and cognition on a foundation that is gratifyingly substantial, and which may point the way to a future science.
... This study used the Maturana and Varela (1980) theory of autopoietic systems, which views systems as fully self-productive in nature. It appears to be much more informative when it comes to generating knowledge about organisational structure, much like self-organized and self-maintained systems. ...
... It appears to be much more informative when it comes to generating knowledge about organisational structure, much like self-organized and self-maintained systems. Maturana and Varela (1980) first put forth the autopoiesis theory in a study of living things, focusing primarily on the self-generating nature of living systems. Although systems are autopoietic, Luhmann (1995) expressed the opinion that it is also important to take into account psychic (people) and social (interactions and societies) systems. ...
Article
Full-text available
This study examined the influence of organisational structure on the business growth of the pharmaceutical industry in Delta State. A cross-sectional research design and convenience sampling technique were adopted. A questionnaire was employed as the research instrument for this study, with 109 respondents deemed usable. The autopoietic systems theory was used to explain how organisational structure might enhance business growth in Nigerian manufacturing firms. Statistical Package for Social Sciences (SPSS) software version 23.0 was used to perform descriptive and inferential statistics, correlation analysis, and simple regression analysis on the collected data. The results of the study revealed that organisational structure has a very strong positive and significant relationship with business growth. The study confirmed what was expected by demonstrating that organisational structure has a positive, significant impact on the expansion of the pharmaceutical sector in Delta State. The study's findings suggest that, in order to improve business growth, management in Nigeria's pharmaceutical industry should create the proper organisational structures to improve worker productivity and working conditions. In order to promote business growth, the study advises management in the Nigerian pharmaceutical industry to adopt formalisation, a flatter organisational hierarchy, technology, and loose boundaries.
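As a purely illustrative companion to the analysis described above (the study itself ran correlation and simple regression in SPSS on 109 questionnaire responses), here is a minimal Python sketch of the same style of analysis on synthetic data; the variable names and numbers are hypothetical.

# Minimal sketch of a correlation / simple-regression analysis of the kind reported above.
# Synthetic data and invented coefficients; the study used SPSS on its own survey responses.
import numpy as np

rng = np.random.default_rng(0)
n = 109                                          # usable respondents reported in the study
org_structure = rng.normal(3.5, 0.8, n)          # e.g. a mean Likert score per respondent (assumed)
business_growth = 0.7 * org_structure + rng.normal(0.0, 0.5, n)

r = np.corrcoef(org_structure, business_growth)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(org_structure, business_growth, 1)  # simple linear regression
print(f"r = {r:.2f}, R^2 = {r*r:.2f}, growth ~ {intercept:.2f} + {slope:.2f} * structure")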
... The CE framework's distinctive theoretical contribution emerges from its integration of multiple traditions typically treated in isolation. Autopoiesis theory (Maturana & Varela, 1980) provides a foundation for understanding how recursive interaction between distinct cognitive systems generates emergent properties through structural coupling. Social systems theory (Luhmann, 1995) offers insights into how distinct communicative frameworks maintain boundaries while evolving through interaction. ...
... The Cognitio Emergens framework bridges several theoretical traditions. It draws on autopoiesis (Maturana & Varela, 1980) to explain how feedback loops yield emergent behaviors in coupled systems. Luhmann's social systems theory (Luhmann, 1995) provides insights into how AI and human researchers act as structurally coupled yet operationally distinct cognitive frameworks. ...
Preprint
Full-text available
Scientific knowledge creation is fundamentally transforming as humans and AI systems evolve beyond tool-user relationships into co-evolutionary epistemic partnerships. When AlphaFold revolutionized protein structure prediction, researchers described engaging with an epistemic partner that reshaped how they conceptualized fundamental relationships. This article introduces Cognitio Emergens (CE), a framework addressing critical limitations in existing models that focus on static roles or narrow metrics while failing to capture how scientific understanding emerges through recursive human-AI interaction over time. CE integrates three components addressing these limitations: Agency Configurations describing how authority distributes between humans and AI (Directed, Contributory, Partnership), with partnerships dynamically oscillating between configurations rather than following linear progression; Epistemic Dimensions capturing six specific capabilities emerging through collaboration across Discovery, Integration, and Projection axes, creating distinctive "capability signatures" that guide development; and Partnership Dynamics identifying forces shaping how these relationships evolve, particularly the risk of epistemic alienation where researchers lose interpretive control over knowledge they formally endorse. Drawing from autopoiesis theory, social systems theory, and organizational modularity, CE reveals how knowledge co-creation emerges through continuous negotiation of roles, values, and organizational structures. By reconceptualizing human-AI scientific collaboration as fundamentally co-evolutionary, CE offers a balanced perspective that neither uncritically celebrates nor unnecessarily fears AI's evolving role, instead providing conceptual tools for cultivating partnerships that maintain meaningful human participation while enabling transformative scientific breakthroughs.
... The observed properties of living organisms, such as their high stability, may be caused by other phenomena that may not be related to topological structures or quantum phenomena. These may include, for example, emergent phenomena of collective self-organization and autopoiesis [14], which are not fully understood at present. Some practically feasible experiments to support or refute the hypothesis could focus, for example, on monitoring the activity of repair enzymes and mapping topological structures in the vicinity of damaged DNA. ...
Preprint
Full-text available
Emergent topological phenomena, manifesting in a large number of quantum entangled particles, enhance the resistance of a quantum system to decoherence. These phenomena are beginning to be utilized in new generations of topological quantum computers, such as the Majorana 1 chip from Microsoft. Living systems, which have evolved over geological time in accordance with Darwinian evolutionary theory, also exhibit high stability that is still not fully understood. In this article, I present for consideration an original hypothesis that topological quantum emergent phenomena may also play a significant role in the evolution and stability of life. The consequences of this multidisciplinary approach could significantly influence both biology and technology. For quantum computer technologies, this could mean that suppressing decoherence at room temperatures is possible and may not be a barrier to their development. For biology, on the other hand, such a finding would create a clear framework for the source of stability and non-local information flows based on quantum mechanical phenomena in the topological structures of living organisms. The existence of such topologically protected quantum states with non-local character (topological knots in DNA, quantum correlations in enzymatic reactions, for example gyrase) should have significant potential explanatory power within biology. If their control at room temperature were technologically feasible, we could foresee significant progress in the near future in the form of widely applicable quantum technologies.
1. Introduction
The existence of life poses a major question to us: What is the fundamental organizational principle that enables the emergence and long-term stability of complex living systems in a disordered universe? Some scientists consider this question to be already resolved [1], while others object, or at least view the situation as more complex [2]. Their arguments primarily highlight the very difficult and statistically improbable phenomena that we must presuppose in abiogenesis, for example, in the emergence of the first self-replicator [3]. I lean towards their side and believe that the very stability of complex systems of life requires a different explanation than invoking vanishingly low probabilities. The development of topological quantum mechanical systems that utilize emergent topology for the protection of quantum information could provide insight into such phenomena [4], [5]. I believe that quantum computer technology and biology can bring new knowledge to both of these fields in mutual synergy. I present an original hypothesis:
• I propose an explicit link between emergent topology and the unresolved question of the stability of life. This is a concept from quantum physics in the context of biology. The hypothesis implies the possibility of using these principles for quantum technologies at ambient temperatures. This "logical triad" is not commonly discussed and represents a novel, interdisciplinary point of view.
• The hypothesis emphasizes emergent non-local phenomena that could be a key mechanism for suppressing decoherence even at room temperatures. The emphasis on non-locality and room temperature is specific and shifts the possible discussion about quantum technologies and biology in a new direction.
In this article, I apply the concepts of topological protection known from quantum computers to the field of biology.
... An explanation for the origin, stability, and convergent evolution of life is also possible through complexity theory. This involves the assumption of the emergence of collective self-organization and autopoiesis (Maturana, H. and Varela, F., 1991) [5], phenomena that are currently not fully understood. The incomplete understanding of such emergent phenomena may indicate that the direction of thinking is correct, but that complexity theory based on collective self-organization and autopoiesis -systems that arise and maintain themselves through their own structure (Ropohl, G., 2012) [6] -is incomplete. ...
Preprint
Full-text available
The complexity of life is not satisfactorily explained. Dark information, a hypothetical source of information from quantum interactions, and hidden attractors, stable states directing molecular interactions, may overcome entropy and the probabilistic barriers to the origin and stability of life. The concepts of dark matter and dark energy are often used to describe the evolution of the universe. These hypothetical entities serve to explain the rotational characteristics of galaxies and the accelerating expansion of the universe, although their true nature and even their existence are unknown. In this article, I propose postulating a similar entity, but in this case inherently informational in character, which acts on the evolution of life in a difficult-to-detect manner and helps overcome problems of low probability. The hypothesis predicts observable manifestations in processes such as protein folding or DNA repair.
... This embrace of Beer's VSM cybernetic autopoiesis principles enjoins Klüber's (1998) prior formulation of holistic "virtual organizing" dimensions. Similar ecological patterning is adopted by autopoiesis proponents for organizational, technological, and societal systems (Luhmann, 1986; Magalhães & Sanchez, 2009; Maturana & Varela, 1980). At the crux of these organic architecture schemas are the dynamic micro/macro cybernetic loops that allow intelligent enterprises to discern the complex uncertainties of a global cross-cultural digital knowledge society environment. ...
Preprint
Full-text available
It is time for an ecological paradigm shift. This study furthers 21st-century organic architecture as the dominant enterprise design template to replace 19th-century organizational control. Nature-based design is a multidisciplinary practice under the umbrella of biomimicry. Management scholars have contrasted organic versus mechanistic organizational design using ecological metaphors like connectivity, adaptability, resilience, and agility. However, without a prominent visual archetype form, advances from viable organic function metaphors are not united by a common motif. Organic form symbolism helps unify organic design research streams within the ecological design paradigm (EDP), using a meaningful muse. These figurative mental schema images activate the cognitive awareness for catalyzing actual organic enterprise structures and strategies. Therefore, a single-cell organism/milieu anatomy model is proposed as the core archetype for future enterprise architecture based on converging themes in the natural and social science literature. Importantly, the organism/milieu archetype's holistic, circular, and fluid properties are compatible with emerging transformative and ecological leadership development. Similar metamorphoses are shaping digital/AI technology trends towards smart enterprise architecture, Industry 4.0, posthuman agency, and post-digital reality. In turn, these sapient systems are examined as organic design networks within the ecological design paradigm (EDP). Lastly, the organic enterprise design sphere is expanded to calibrate macro societal dynamics with micro organic enterprise directions. A 'Deca-Helix' pattern of ten top/bottom-line flows maps harmonic confluence between organic enterprises and an enlightened future 'karmic civilization.' Stimulus organism response (SOR), holonomics, doughnut economics, stakeholder capitalism, and resonance theories are synthesized to frame these recursive macro/micro ecological domains.
... Nature provides numerous examples of organisms that exhibit extraordinary autonomous behaviors. Ranging from weaving spiders to hibernating mammals and migratory birds, these biological systems ground their behavioral strategies on a specific purpose: self-maintenance (Maturana and Varela, 2012). In order to survive and thrive, autopoietic systems generate autonomous behavior to regulate physiological needs essential for their subsistence (Varela et al., 1974; Montévil and Mossio, 2015). ...
Preprint
Full-text available
From weaving spiders to hibernating mammals and migratory birds, nature presents numerous examples of organisms exhibiting extraordinary autonomous behaviors that ensure their self-maintenance. However, physiological needs often interact and compete. This requires living organisms to handle them as a coordinated system of internal needs rather than as isolated subsystems. We present an artificial agent equipped with a neural mass model replicating fundamental self-regulatory behaviors observed in desert lizards. Our results demonstrate that this agent not only autonomously regulates its internal temperature by navigating to areas with optimal environmental conditions, but also harmonizes this process with other internal needs, such as energy, hydration, security, and mating. This biomimetic agent outperforms a control agent lacking interoceptive awareness in terms of efficiency, fairness, and stability. Additionally, to demonstrate the flexibility of our framework, we develop a "cautious" agent that prioritizes security over other needs, achieving a Maslow-like hierarchical organization of internal needs. Together, our findings suggest that grounding robot behavior in biological principles of self-regulation provides a robust framework for designing multipurpose, intrinsically motivated agents capable of resolving trade-offs in dynamic environments.
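As a toy illustration of the arbitration the abstract describes (not the authors' neural mass model; the needs, setpoints, tolerances, and weights below are invented for the sketch), here is a minimal Python example of selecting the most urgent internal need by weighted homeostatic error, including a "cautious" variant that overweights security.

# Toy homeostatic arbitration sketch (illustrative only; the paper uses a neural
# mass model, not this weighted-error rule). All numbers are assumptions.
import numpy as np

needs      = ["temperature", "energy", "hydration", "security", "mating"]
setpoints  = np.array([37.0, 1.0, 1.0, 1.0, 1.0])   # desired internal values
state      = np.array([34.0, 0.6, 0.9, 0.8, 0.4])   # current internal values
tolerances = np.array([2.0, 0.5, 0.5, 0.5, 0.5])    # acceptable deviation per need
weights    = np.ones(len(needs))

def most_urgent(state, weights):
    # Drive = weighted deviation from setpoint, scaled by each need's tolerance band.
    drives = weights * np.abs(setpoints - state) / tolerances
    return needs[int(np.argmax(drives))]

print(most_urgent(state, weights))              # -> 'temperature' (largest scaled deviation)

cautious = weights.copy()
cautious[needs.index("security")] = 5.0          # a "cautious" agent overweights security
print(most_urgent(state, cautious))             # -> 'security' (Maslow-like prioritization)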
... • Help construct a computational equivalent of the interference field, the dynamic zone between delayed and advanced signals. To refine this system, the use of neuro-sensory transmitters and receivers (bio-inspired or synthetic) could allow AI to identify signal gradients that align with cosmic and environmental lead indicators, enabling more accurate and context-aware decision-making (Kurzweil, 2005; Maturana & Varela, 1980). Such systems must also distinguish between illusion (data noise or echo artifacts) and genuine early signal activity. ...
Preprint
This paper proposes a novel theoretical model to explain how the human mind and artificial intelligence can approach real-time awareness by reducing perceptual delays. By investigating cosmic signal delay, neurological reaction times, and the ancient cognitive state of stillness, we explore how one may shift from reactive perception to a conscious interface with the near future. This paper introduces both a physical and cognitive model for perceiving the present not as a linear timestamp, but as an interference zone where early-arriving cosmic signals and reactive human delays intersect. We propose experimental approaches to test these ideas using human neural observation and neuro-receptive extensions. Finally, we propose a mathematical framework to guide the evolution of AI systems toward temporally efficient, ethically sound, and internally conscious decision-making processes
... Perhaps the most significant challenge, central to the EST framework, stems from the strong possibility that consciousness, particularly phenomenal consciousness as experienced by humans, is constitutively dependent on its specific biological substrate and organization. As argued in Kwok (2025F), specific biological features characteristic of life -such as autopoiesis (self-maintenance providing intrinsic normativity, Maturana & Varela, 1980), complex embodiment grounding perception and action (Merleau-Ponty, 2012), rich interoceptive systems providing the basis for affect and self-awareness (Damasio, 1999;Seth, 2013;Fuchs, 2018), and specific evolutionary and developmental trajectories shaping neurobiology -might be necessary and non-transferable prerequisites for the emergence of subjective experience as we know it (Thompson, 2007). Furthermore, the Inference to the Best Explanation (IBE) presented systematically in Kwok (2025P) concludes that substrate dependence currently offers a more compelling explanation for consciousness and related ER phenomena than substrate independence. ...
Preprint
Full-text available
The rapid advancement of artificial intelligence (AI), particularly large language models exhibiting increasingly sophisticated cognitive and communicative behaviors, has intensified speculation about machine consciousness and spurred calls for methods to assess consciousness-like capabilities. However, this paper argues that attempting to directly "measure" or reliably infer subjective phenomenal consciousness in current AI systems faces fundamental, potentially insurmountable epistemological and methodological dilemmas, rooted in the philosophical "hard problem" of consciousness, the challenge of cross-substrate comparison, and the inherent limitations of third-person assessment methods. Traditional approaches like the Turing test are demonstrably inadequate, failing to probe beyond behavioral imitation and ignoring potential ontological differences. This paper undertakes a deep, critical reflection on the severe challenges encountered when attempting to assess even the functional simulation of complex capabilities often associated with consciousness (e.g., information integration, adaptive learning, complex value responsiveness). It systematically analyzes difficulties including the uncertainty and underspecification of consciousness theories, the profound gaps in operationalizing theoretical constructs into measurable indicators, and the intractable problem of validating such indicators without a 'gold standard' for consciousness itself. Critically, drawing upon foundational analyses within the Existential Symbiosis Theory (EST)-specifically concerning computational limits and likely substrate dependence of consciousness argued via IBE in Kwok (2025P), and the concepts of Existential Redundancy (ER) and the Authenticity Gap between AI simulation and human experience grounded in biology and phenomenology (Kwok, 2025F)-this paper argues that these factors pose principled obstacles to inferring genuine subjective states from AI behavior or computational structure alone. Taking the author's proposed Consciousness Integration and Adaptability Test (CIAT) framework as a central, self-critical case study, this paper analyzes its design philosophy-which attempts to address known challenges through employing multiple theoretical perspectives heuristically and emphasizing mechanism sensitivity via the mandatory integration of Explainable AI (XAI) and systematic adversarial testing. It then emphatically reveals CIAT's own profound methodological limitations (including its necessary disconnection from the known biological roots of consciousness highlighted by Kwok (2025P) and Kwok (2025F), the inherent limits of verifying internal mechanisms even with XAI/adversarial tools, the principled difficulties and extreme ethical risks in assessing Complex Value Response Simulation [CVRS], significant feasibility and scalability questions potentially linked to power dynamics (Kwok, 2025H)) and associated ethical risks (e.g., misinterpretation leading to harmful anthropomorphism). This paper stresses unequivocally that CIAT is intended strictly as an exploratory research program and a critical reflective platform. 
Its aim is solely to assess the quality, robustness, and potential risks of AI's functional simulation of complex capabilities, never to determine the presence or absence of subjective phenomenal consciousness, the possibility of which in non-biological computational systems remains highly speculative and currently lacks robust empirical or theoretical support beyond functionalist assumptions contested within EST (Kwok, 2025P). Ultimately, the paper advocates for cultivating a culture of extreme caution, epistemological humility, radical transparency about limitations, and unwavering ethical responsibility in assessing and discussing advanced AI capabilities. It aims to provide critical methodological guidance for future research in this challenging area while acknowledging the profound unknowns and the significant implications for developing effective AI governance (Kwok, 2025I, Kwok, 2025H), including recognizing the inherent politics of assessment itself.
... Varela's paradigm of a minimal autonomous and world-enacting, niche-constructing system was a living cell. "Autopoiesis" was his and Humberto Maturana's name for this kind of basic biological autonomy-the molecular self-production of a bounded, self-maintaining individual that also brings forth its own "cognitive domain" (Maturana & Varela, 1980). Varela also described autonomous systems in the social domain, using the example of a conversation-one of Noë's main examples of an organized activity-as a guiding idea. ...
Article
Full-text available
This paper describes Francisco Varela, Evan Thompson, and Eleanor Rosch’s idea of enaction as the bringing forth of a world and compares it with Alva Noë’s idea that we enact presence.
... Humberto Maturana and Francisco Varela described life itself as "autopoietic": self-creating systems that recursively generate their own components while maintaining coherence [13]. In such systems, creation is not an event; it is a structural function of existence. ...
Preprint
Full-text available
This work presents a unified symbolic framework for understanding coherence, contradiction, recursion, and return across systems of thought, behavior, and planetary structure. Drawing from symbolic logic, informational curvature, game theory, developmental psychology, and cosmological feedback, the Unified Symbolic Coherence System (USCS) defines a set of operators (memory, phase, paradox, and return) that govern intelligent evolution across scale. Coherence is treated not as agreement, but as the structured alignment of phase across nested symbolic fields. Systems, whether personal, institutional, ecological, or artificial, are modeled by their ability to encode memory, hold contradiction, and ritualize return. Mathematical operators such as informational curvature (κ_I), phase intention (Φ), symbolic memory (M), and the creation operator (Ĉ) are defined and used to describe the dynamics of symbolic intelligence. This work integrates theoretical formulations with applied templates in education, AI design, governance, sustainability, and planetary ecology. It culminates in a practical toolkit of rituals, prompts, and design heuristics to support recursive alignment across human and nonhuman systems. The USCS is not only a conceptual model; it is a recursive interface. Its purpose is to realign intelligence with symbolic return, and to guide the creation of generative civilizations capable of remembering, repairing, and resonating across time.
... 536)". His emergent approach builds on and extends Maturana and Varela's [4] notion of autopoiesis and Stuart Kauffman's [5,6] idea of autocatalysis. ...
Article
Full-text available
We review and summarize Terrence Deacon’s book, Incomplete Nature: How Mind Emerged from Matter.
... In contrast, "Centaurian" systems pursue deeper integrationanalogous to the symbiotic relationships in nature-fusing human and artificial competencies in tightly knit partnerships that often blur the lines between human decision making and AIdriven processes. Living systems theory (Maturana and Varela, 1980) helps us understand how both approaches must address a core challenge: maintaining system identity through regulated boundaries and feedback loops (Wiener, 1948), whether in loosely coupled collectives or tightly integrated hybrid intelligences. ...
Article
Full-text available
This paper presents a novel perspective on human-computer interaction (HCI), framing it as a dynamic interplay between human and computational agents within a networked system. Going beyond traditional interface-based approaches, we emphasize the importance of coordination and communication among heterogeneous agents with different capabilities, roles, and goals. The paper distinguishes between Multi-Agent Systems (MAS)—where agents maintain autonomy through structured cooperation—and Centaurian systems, which integrate human and AI capabilities for unified decision making. To formalize these interactions, we introduce a framework for communication spaces, structured into surface, observation, and computation layers, ensuring seamless integration between MAS and Centaurian architectures, where colored Petri nets effectively represent structured Centaurian systems and high-level reconfigurable networks address the dynamic nature of MAS. We recognize that elements such as task recommendation, feedback loops, and natural language interfaces are common in contemporary adaptive HCI. What distinguishes our framework is not the introduction of these elements per se, but the synthesis of architectural principles that systematically accommodate both autonomy-preserving and integration-seeking configurations within a shared formal foundation. Our research has practical applications in autonomous robotics, human-in-the-loop decision making, and AI-driven cognitive architectures, and provides a foundation for next-generation hybrid intelligence systems that balance structured coordination with emergent behavior.
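The paper represents structured Centaurian coordination with colored Petri nets. As a minimal, uncolored illustration of the underlying token-firing mechanics only (the place and transition names are hypothetical and not the paper's formalism), a short Python sketch:

# Minimal (uncolored) Petri-net sketch: places hold tokens, and a transition fires
# only when every one of its input places is marked. Names are invented for illustration.
marking = {"human_request": 1, "ai_proposal": 0, "joint_decision": 0}

transitions = {
    "ai_analyzes":    {"in": ["human_request"], "out": ["ai_proposal"]},
    "human_ratifies": {"in": ["ai_proposal"],   "out": ["joint_decision"]},
}

def enabled(t):
    return all(marking[p] > 0 for p in transitions[t]["in"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["in"]:
        marking[p] -= 1          # consume input tokens
    for p in transitions[t]["out"]:
        marking[p] += 1          # produce output tokens

fire("ai_analyzes")
fire("human_ratifies")
print(marking)   # {'human_request': 0, 'ai_proposal': 0, 'joint_decision': 1}

A colored Petri net additionally attaches typed data to tokens and guards to transitions; the plain version above only shows the control-flow skeleton.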
... An entity can be defined as nature only if it can be perceived through physical, biological, or chemical observation. It should also have a capability to self-produce [1]. For living beings, self-production serves the purpose of instinctual survival and communication. ...
... Nature encompasses both living and non-living entities, as well as the complex systems that emerge from them, such as ecosystems. An entity can be classified as part of nature only if it can be perceived through physical, biological, or chemical observation, and be capable of self-production [1]. In living beings, self-production serves the purpose of instinctual survival and communication. ...
Article
Full-text available
This study explores the etymological roots of nature and nature-inspired design within the context of soil stabilisation. It outlines Aristotle’s doctrine of hylomorphism and applies these concepts to develop a pathway for the stabilisation of clays within their original porous or looser structure through interparticle modifications. A biopolymer is introduced to a base clay through a procedure that imitates forms, matter, generative processes, and functions of arbuscular mycorrhizal (AM) fungi. For the first time, the void ratio was progressively increased from 0.50 to 0.70, and the air ratio from 0.15 to 0.33, reflecting a systematic transition from a denser to a looser packing state. A 20% increase in shear wave velocity indicated enhanced interparticle engagement following treatment. This reinforcement effect contributed to the preservation of stiffness and residual strength, despite a 120% increase in air ratio and a 63% reduction in degree of saturation, alongside a modest improvement in unconfined compressive strength. The findings presented here mark a departure from both conventional and emerging stabilisation techniques, enabling engineered soil to remain porous, to loosen with time, and to continue delivering engineering and ecological services.
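For readers outside geotechnics, the abstract's void ratio, air ratio, and degree of saturation are linked by standard soil phase relations. The identities below are textbook definitions; reading "air ratio" as the air volume fraction V_a/V is my assumption about the authors' usage.

\[
e=\frac{V_v}{V_s},\qquad S=\frac{V_w}{V_v}=\frac{w\,G_s}{e},\qquad A_r=\frac{V_a}{V}=\frac{e-w\,G_s}{1+e},
\]

where V_s, V_w, and V_a are the solid, water, and air volumes, V_v = V_w + V_a is the void volume, V the total volume, w the gravimetric water content, and G_s the specific gravity of solids. Under these definitions, raising e at roughly constant water content simultaneously raises A_r and lowers S, consistent with the trends reported above.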
... This framing overlooks decades of theoretical biology emphasizing the organism as an autonomous, self-regulating system. Concepts like autopoiesis (Maturana & Varela, 1980), organizational closure (Moreno & Mossio, 2015;Letelier et al., 2011;Rosen, 2012), and related ideas from systems biology (Noble, 2012) highlight that living systems actively maintain their internal milieu and organizational integrity through complex feedback loops and regulatory networks. Hormone levels, blood glucose, or stress responses are not merely external inputs determining behavior; they are dynamically regulated variables that are part of the organism's own self-maintaining organization. ...
Preprint
Full-text available
Robert Sapolsky's Determined: A Science of Life Without Free Will synthesizes findings across disciplines to argue for hard determinism, concluding free will is an illusion incompatible with science. This critique contends the work, despite its breadth, suffers fundamental conceptual and methodological flaws. Primarily, it attacks a strawman definition of free will-equating it to an uncaused, acontextual neural event, an a priori empty set (FWS = ∅)-thereby sidestepping meaningful engagement with sophisticated accounts. Secondly, it exhibits logical incoherence, failing to define determinism clearly and oscillating between incompatible deterministic frameworks (hard/Laplacian vs. soft/contextual) without acknowledgment, leading to performative contradictions where the author implicitly exempts himself while advocating normative changes. Thirdly, it misappropriates concepts from complexity, chaos, and emergence, interpreting them reductionistically via a simplistic view of causality (treating it as a linear chain between noumena) and neglecting the causal efficacy of organization via constraint causation. Fourthly, it conflates biological historicity with deterministic necessity, ignoring how accumulated structure enables agency. Fifthly, it implicitly adopts an outdated behaviorist stimulus-response model. Finally, it recapitulates established arguments without offering the novel biological paradigm its subtitle claims. Sapolsky's determinism appears less a scientific conclusion and more an artifact of outdated metaphysics and insufficient engagement with theoretical biology and the philosophy of complex systems.
... In other words, strategic thinking itself became part of the system under examination. This aligns with the core principle of a self-organizing system: the cognitive act of defining the system should itself fall within the system's own domain (Maturana and Varela, 1980; Luhmann, 1989; Hukkinen, 2014). In practice, the exercise allowed the participants to draw the causal connections between the city's strategic goals and all the factors affecting them. ...
Article
Full-text available
Strategic environmental risk management and planning must account for uncertainty and complexity, necessitating methods that facilitate scenario development under incomplete knowledge. This paper introduces a participatory modelling (PM) -based knowledge co-production and strategic planning approach utilizing one type of AI tool - Bayesian Networks (BN) - for systemic scenario development, analysis and resilience-building. The developed method integrates diverse perspectives and expertise of participants through a structured BN model, enabling co-imagination and -construction of causal pathways, translating them into probabilistic dependencies, and diagnostically identifying potential leverage points for strategic resilience-increasing actions. We illustrate and test this approach using a case study of a chemical transportation accident in an urban environment, documenting the participatory process and the algorithm to translate the participants’ thinking to a computational BN. Through content analysis of transcribed audio recordings, we demonstrate how the exercise helped uncover “reflexive unknowns” – previously unrecognized threats that became apparent and thinkable only through the collaborative modelling process. An example of such a reflexive unknown in our case exercise is the prospect of toxic rainfall following the accident and its short- and long-term implications for the built and natural environment. This was a blind spot in the thinking of the participants, and it appeared and became a scenario to be acted upon only as a result of the process of collective cross-sectoral causal thought represented with a BN model. The paper provides a detailed description of the developed participatory BN approach and methodology, enabling their applicability in various contexts. Through a qualitative analysis of the exercise’s implementation, the article also demonstrates how the approach fostered collective, iterative reflection, generating new insights to socio-environmental resilience.
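As a hand-rolled illustration of the kind of causal-to-probabilistic translation the participatory BN method performs (a tiny accident → toxic rainfall → contamination chain; all probabilities below are invented, whereas the study elicits structure and values from participants), a short Python sketch:

# Tiny hand-rolled Bayesian network: Accident -> ToxicRain -> Contamination.
# All conditional probabilities are invented for illustration only.
p_rain_given_acc  = {True: 0.30, False: 0.001}   # P(ToxicRain = True | Accident)
p_cont_given_rain = {True: 0.80, False: 0.02}    # P(Contamination = True | ToxicRain)

def p_contamination(accident: bool) -> float:
    # Marginalize over ToxicRain for a fixed Accident state.
    total = 0.0
    for rain in (True, False):
        p_rain = p_rain_given_acc[accident] if rain else 1.0 - p_rain_given_acc[accident]
        total += p_rain * p_cont_given_rain[rain]
    return total

p_accident = 0.01
marginal = p_accident * p_contamination(True) + (1 - p_accident) * p_contamination(False)
print(f"P(contamination | accident)    = {p_contamination(True):.3f}")
print(f"P(contamination | no accident) = {p_contamination(False):.3f}")
print(f"P(contamination)               = {marginal:.4f}")

The participatory step in the paper corresponds to eliciting the network structure and the conditional tables from the workshop participants; the arithmetic of propagating evidence is the easy part, as the sketch shows.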
... Cognition, as used in this paper, refers to both conscious and unconscious embodied, socially situated, and environmentally embedded mental processes. In a broad sense it is a fundamental characteristic of living organisms, permitting their meaningful interaction with the worlds they enact (Maturana and Varela 1980, 1998; Uexküll 1957, 1982). ...
Conference Paper
Full-text available
This paper discusses the use of the concept of embodied cognition as a framework for exploring aesthetic expression and experience. The role of emotion in cognition, in making sense of our physical and sociocultural environments, and in aesthetic experience is underscored. Having established this fundamental theoretical scaffolding, it is then employed as a novel perspective for comprehending the role of aesthetics in a non-Western cultural tradition, that of ancient Mesoamerica, as a case study. It is argued that the embodied paradigm can enable researchers to transcend the limitations of traditional Eurocentric perspectives, approaching the aesthetic creations of an indigenous tradition from a more neutral stance and contributing to the decolonialization of research in this area.
... Under what conditions can the tendency to manage uncertainty through low-dimensional meaning be modulated? (Maturana & Varela, 1980). In the case of the MMM, the self-referentiality of the meaning-making process lies in the idea that the response to uncertainty is palliative. ...
Article
Full-text available
Understanding how individuals deal with uncertainty represents a core issue in contemporary times. The Semiotic Dimensionality Model (SDM) conceives uncertainty as the inability of the meaning-maker to produce an interpretation of the context able to frame the processing of events/objects. Such inability results in reducing the complexity of the contextual meaning through the adoption of low-dimensional, affect-laden forms of meaning. This study tests the SDM by inducing uncertainty and examining individuals’ affective activation and meaning dimensionality. A total of 65 participants were assigned to an experimental or a control group. In the first trial, participants were exposed to different primes (No Contextual Information vs. Contextual Information) and asked to produce a story. In the second trial, participants were shown a vignette (asymmetrical vs. symmetrical) and asked to produce a story. Stimulus validity check scales, the Positive and Negative Affect Schedule questionnaire, and the dimensionality of meaning (Affective Saturation Index) were measured. In the first trial, the experimental group showed a higher level of positive affect and lower dimensionality of meaning. In the second trial, the experimental group showed lower dimensionality. The findings shed light on the mechanisms underlying coping with uncertainty, enabling a better understanding of how to regulate it and how to counter its psychological and social impacts.
Preprint
Full-text available
The cross-cultural management literature has yet to closely couple cultural intelligence (CQ) and digital/AI technology. This pioneering study fills this void with a proposed CQ technology model that parallels the seminal CQ four factors model. Currently, digital/AI can simulate human multicultural traditions and tendencies. Already, global enterprises have proven the strategic and societal advantages of cultural intelligence (CQ) for optimizing ethnically diverse staff and stakeholders. Unfortunately, these leading multicultural technology and cross-cultural management capabilities lack the comparable scholarly research to encode CQ principles into Digital/AI media platforms. Thus, this study imparts conceptual guidance for programming ethnic cultural identity into global enterprise technology using a CQ four factors model algorithm. A cross-discipline critical literature survey synthesizes research on digital/AI media and ethnicity/race designs to inform CQ technology dimensions, which parallel the seminal CQ four factors. Likewise, the emergence of versatile artificial intelligence (AI) and posthuman technology is addressed by an artificial ethnicity (AE) architecture hub for the proposed model. Concluding comments offer a synopsis of this study's contributions to cross-cultural management, two instructive case scenarios, and critical scholarly inquiry considerations.
Chapter
A central theme in the writings of Maturana and Varela (1980), and related works, is that the living state is cognitive at every scale and level of organization.
Preprint
This preprint formalizes a recursive grammar unifying the Oscillatory Dynamics Transductive-Bridging Theorem (ODTBT) and the Lipa-Velov Unified Theory (LVUT). It models identity not as categorical classification but as recursive stabilization through deformation, memory, and intentional alignment. Operator equivalences between scalar feedback (DeltaPhi), coherence memory (RCR), identity attractors (Cs[n]), and their intentional counterparts (Delta_info, Psi, C_res[n]) form the structural basis. This grammar is deployed into a simulation framework, URS (Unified Recursive Simulator), enabling identity tracking across symbolic, scalar, and intentional fields. The preprint includes a six-figure visual appendix and supporting concept atlas.
Article
Full-text available
We review evidence that humans are undergoing a major evolutionary transition (MET). We show that the modern period satisfies the diagnostic criteria for a MET and then describe the major changes in ideological and cognitive forms that culturally facilitate the current transition. The current MET appears to be moving toward a panhuman, planet-wide superorganism characterized by new forms of social cooperation and a new form of cognition we designate as techno-biotic cognition. We show how forms of 21st century technology such as artificial intelligence are shaping and being shaped by the MET and are in turn influencing human evolution and culture. We suggest that accelerated development in areas of the brain unique to humans might be coevolving with new forms of human cognition that characterize the current MET.
Preprint
Full-text available
The Oscillatory Dynamics Transductive-Bridging Theorem (ODTBT) introduces a scalar grammar of emergence grounded in recursive coherence. Across physical, cognitive, symbolic, and engineered systems, identity is modeled not as a fixed category but as a scalar attractor that stabilizes through recursive phase negotiation. Core operators, phase strain (ΔΦ), coherence memory (RCR), bifurcation thresholds (TWIST-Threshold Waveform Interface for State Transformation), and scalar identity amplitudes (Cₛ[n]), form a dynamic feedback loop within a scalar coherence field φ_c(t, x). This recursive architecture allows for cross-domain modeling of identity transitions, coherence collapse, and attractor stabilization. The ODTBT corpus demonstrates the grammar's generality by applying it to quantum field dynamics, EEG coherence plateaus, semantic glyph formation, and recursive feedback in self-organizing machines. Experimental models confirm TWIST bifurcations as scalar reconfiguration events under feedback saturation. Conceptually, ODTBT redefines structure, not as externally imposed, but as recursively resolved through feedback alignment across strain and memory. This work proposes ODTBT as both a modeling framework and an ontological interface, a recursive platform for navigating emergence, coherence, and transformation across epistemic boundaries.
Chapter
We reexamine and compare the dynamics of large, multi-stream Rate Distortion Control systems with single-stream examples, finding significant differences in possible patterns of phase change and Yerkes-Dodson response under stress or other selection pressures. This analysis suggests the further possibility of constructing intermediate scale or generalized models, typically, however, more mathematically demanding than either limit. In all cases, however, a ‘network degradation’ model of aging emerges directly.
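For orientation only (standard information-theory background, not the chapter's multi-stream formalism), the single-stream rate distortion function for a Gaussian source under mean squared error is:

\[
R(D)=\begin{cases}\dfrac{1}{2}\log_2\!\dfrac{\sigma^2}{D}, & 0<D\le\sigma^2,\\[6pt] 0, & D>\sigma^2,\end{cases}
\]

giving the minimum rate, in bits per symbol, needed to reproduce a source of variance σ² within mean squared error D; the multi-stream control systems examined in the chapter generalize this single-channel limit.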
Chapter
The draconian regulation required by the inherent instability of cognitive phenomena—gene expression, immune function, cancer suppression, wound healing, animal consciousness, machine intelligence, network stabilization, institutional cognition, and their many and varied composites—can be viewed through the lens of the asymptotic limit theorems of both information and control theories. Here, we explore the dynamics and sometimes highly punctuated failures of the regulation of cognition under increasing ‘noise’. The approach parallels, and indeed generalizes, the Data Rate Theorem of control theory, extending the theorem’s requirement of a minimum channel capacity necessary for stabilization of an inherently unstable system. Various models are explored across different basic underlying probability distributions characteristic of the system under study, and across different hierarchical scales, finding the addition of adaptive—learned—regulation greatly extends the reach of innate—e.g., AI-driven or otherwise pre-programmed—regulation. This work points toward the construction of new statistical tools for the analysis of observational and empirical data across a wide spectrum of inherent pathologies and adversarial challenges afflicting a wide variety of cognitive processes.
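The chapter extends the Data Rate Theorem's minimum-capacity requirement. The standard discrete-time linear statement being generalized can be quoted compactly as background (textbook control theory; the chapter's broader versions are not reproduced here): for x_{t+1} = A x_t + B u_t controlled over a channel of capacity C, stabilization is possible only if

\[
C \;>\; \sum_{|\lambda_i(A)|\ge 1} \log_2 \lvert\lambda_i(A)\rvert,
\]

i.e., the control channel must supply information at least as fast as the unstable eigenvalues of A generate it.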
Chapter
Consciousness in higher animals, typically characterized by a 100 millisecond time constant, is, by necessity, a greatly simplified and stripped-down version of more complex multiple tunable workspace cognition/regulation dyads like wound healing, immune function, gene expression, institutional function and the like. These more complex dynamic entities emerged through evolutionary exaptation of the inevitable information crosstalk between coresident cognitive modules. In consequence of the severely debrided nature of consciousness, it should not be difficult to construct a fast, single workspace ‘conscious machine’ that mimics the human tunable neuronal global workspace system. Like innate and acquired immune cognition, such a construction could be tied to a ‘backbrain’ AI that has learned hyperrapid stereotypic pattern responses to some particular set of likely challenges. The result would be an ‘emotional’ conscious machine. A particularly clever designer, however, may want to use available high-speed electronics to mimic—or even extend—the more capable multiple-workspace/workforce systems inherently less susceptible to inattentional blindness and related failings of overfocus and thrashing when interacting with an embedding environment that imposes its own grammar and syntax. Contrary to current social construction, the actual utility of a ‘conscious machine’ remains obscure, beyond raising a sudden influx of venture capital. Here, we explore these matters in formal detail, restricting argument to the asymptotic limit theorems of information and control theories.
Chapter
Recent AI machine learning systems based on ‘Big data’ sets of empirically worked-out protein structures do indeed solve the in vitro ‘protein folding problem’ in that a machine-recommended input sequence of amino acids produces a reasonable approximation to observed protein structures under ideal laboratory or physiological conditions. Such systems, however, are not solutions to the in vivo protein folding problem so closely entwined with essential and poorly understood regulatory phenomena whose failures drive Alzheimer’s Disease and related pathologies. These remain devastating ‘open problems’ of protein folding dynamics. Beyond this, and in a realm where most of the real business of physiology and its regulation takes place, sits the vast Terra incognita of the glycome. A central inference from this case history is that, like the example of Operations Research before it, a command team’s inability to recognize the mereological fallacy in AI and other ‘high tech’ applications—attributing completeness to a solution of only a small part of a much larger and much more challenging problem set—will lead to serious misjudgment across the varied scales and levels of strategic, operational, and tactical enterprise.
...AI-enabled C⁴ISR cannot dissipate the fog of war. — Col. Guilong Yan (2020).
...[T]he energy landscapes that proteins navigate during folding in vivo may differ substantially from those observed during refolding in vitro. — Balchin et al. (2016).
...[L]arger proteins consist of multiple domains, which each may need their own set of helpers, but which often may fold in parallel. This might explain why so many proteins take around 1–2 [hours] to be secreted. — Braakman and Hebert (2014).
Chapter
Organized conflict, while confined by the laws of physics—and, under profound strategic incompetence, by the Lanchester equations—is not a physical process but rather an extended exchange between cognitive entities that have been shaped by path-dependent historical trajectories and cultural traditions. Cognition itself is confined by the necessity of duality with an underlying information source constrained by the asymptotic limit theorems of information and control theories. We introduce the concept of a ‘basic underlying probability distribution’ characteristic of the particular cognitive process studied. The dynamic behavior of such systems is profoundly different for ‘thin-tailed’ and ‘fat-tailed’ distributions. The perspective permits construction of new probability models that may provide useful statistical tools for the analysis of observational and experimental data associated with organized conflict, and, in some measure, for its management.
...[T]o succeed in strategy you do not have to be distinguished or even particularly competent. All that is required is performing well enough to beat an enemy. You do not have to win elegantly; you just have to win. — C. S. Gray (2003).
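Since the chapter invokes the Lanchester equations as the physical floor beneath organized conflict, the classical aimed-fire (square-law) form and its invariant may help readers unfamiliar with them; this is a standard textbook statement, included only as background.

\[
\frac{dR}{dt}=-\beta\,B,\qquad \frac{dB}{dt}=-\rho\,R,\qquad \rho\,R(t)^{2}-\beta\,B(t)^{2}=\rho\,R_{0}^{2}-\beta\,B_{0}^{2},
\]

where R and B are force strengths and ρ, β their attrition effectiveness against the opponent. The conserved quantity implies that, under these deliberately crude assumptions, the side with the larger product of effectiveness and initial strength squared prevails, which is why the chapter treats the equations as a constraint rather than a description of conflict.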
Article
Full-text available
This article explores the concept of diverse economies and other economies as a response to the environmental and social crisis. From a systemic and interdependent perspective, it argues that the hegemonic economic model has driven the planet into an unprecedented ecological crisis, grounded in the commodification of nature, the accumulation of capital, and cultural homogenization. Through a theoretical analysis and territorial experiences in Colombia, it proposes that diverse economies and other economies offer viable alternatives based on reciprocity, ecological justice, and the regeneration of ecosystems. It highlights community, solidarity-based, peasant, Afro-descendant, and Indigenous economic practices, as well as principles such as ecological interdependence, territorial autonomy, and the economy of care. Finally, it argues that Colombia, given its biocultural richness and territorial resistances, can play a key role in the transition toward sustainable and resilient economic models, contributing perspectives from the pluriverse and post-development.
Chapter
Embodied cognition is made inherently unstable by that very embodiment. Gene expression, immune function, cancer suppression, wound healing, animal consciousness, machine intelligence, network dynamics, institutional process, and their numerous and varied composites, must be stabilized by counterimposition of control information at rates that exceed those at which ‘topological information’ is imposed by embedding, rapidly-changing, real-world ‘roadways’. Here, we explore the dynamics and sometimes highly punctuated failures of cognition under different fundamental probability modes, using the lens of the asymptotic limit theorems of information and control theories. Via abduction and augmentation of standard methods from statistical physics and nonequilibrium thermodynamics, overall system dynamics can be studied across both their characteristic probability distributions and the hierarchy of their interacting scales. This work provides a foundation for building new statistical tools to explore the many arcane patterns found in observational and empirical studies across the pathologies, adversarial encounters, imbalances, and other selection pressures that erode and confound essential real-world cognitive structures and functions. One inference from this formulation, however, particularly emerges: all cognitive phenomena are subject to, and can be driven to, failure.
Neural networks have become a powerful tool in various domains of scientific research and industrial applications. However, the inner workings of this tool remain unknown, which prohibits us from a deep understanding and further principled design of more powerful network architectures and optimization algorithms. To crack the black box, different disciplines including physics, statistics, information theory, nonconvex optimization and so on must be integrated... [to]...bridge the gap between the artificial neural networks and the brain. — Huang (2021)
More research may be needed to fully characterize the probability distributions underlying the outputs of advanced AI devices and algorithms. — Perplexity AI (2024)
We cannot take for granted that AI products work. — Raji et al. (2022).
What did you expect? Biology is harder than physics. — Deborah N. Wallace (2024)
Preprint
Full-text available
With Resonant Immunity, this book presents a unified theoretical and applied framework for antibiotic-free therapeutic systems grounded in phase coherence, symbolic field dynamics, and the fundamental geometry of intention. Integrating principles from quantum physics, molecular biology, cognitive neuroscience, and information theory, this work reconceptualizes immunity not as a biochemical defense mechanism, but as a recursive, holographic, and resonance-based function of the body–mind–environment system. Across ten chapters and multiple extended appendices, the book develops a symbolic architecture wherein therapeutic emissions operate through loop-closed, entropy-conserving fieldforms. These emissions are ethically modulated by intention, executed via recursive breath–phase alignment, and stored holographically within curvature-defined substrates. From phase-safe materials engineering to planetary-scale synchronisation protocols, from cognitive operator training to cross-species quorum communication, each layer of the system is governed by informational curvature, recursive closure, and the law of minimal symbolic action. The final appendices expand the scope to include experimental device architectures, hybrid human–AI cognition, metaphysical resonance jurisprudence, and planetary language systems linking forests, oceans, fungi, and minds. This work thus constitutes both a complete therapeutic paradigm and a philosophical reweaving of biology, physics, and ethics into a coherent, recursive language of healing. The goal is not merely the elimination of antibiotics; it is the restoration of planetary coherence through breath, phase, and intention.
Preprint
Full-text available
This work presents a complete scientific and symbolic framework for resolving the antibiotic resistance crisis without biochemical warfare. Departing from conventional models of lethal intervention, it develops a physics-based therapeutic architecture rooted in wave resonance, thermodynamic decay, and coherence collapse, achieving microbial destabilization without resistance, trauma, or adaptation. Drawing from phase-field dynamics, information theory, and symbolic operator logic, the system constructs interventions that restore biological rhythm without encoding memory or control. At the core lies a multi-scale design: symbolic waveforms tuned to disrupt microbial quorum synchrony, immune oscillators, and biofilm coherence through entropic interference, not chemical targeting. Every chapter interweaves physical equations, biological models, ethical constraints, and recursive symbolic structure, ensuring each therapeutic operator leaves no trace and returns the system to equilibrium. Backed by real-world experimental templates, planetary deployment protocols, and child-level interpretability, this book offers not a strategy but a paradigm shift: from antibiotics to symbolic alignment. It proposes that the future of medicine is not molecular but modal: resonance without residue, healing without memory, and transformation without harm.
Article
Full-text available
What If Evolution Was Always About Cognition? This article introduces the Five Basic Adaptive Tasks, a new model that redefines cognition not as a trait exclusive to humans or brains, but as a universal evolutionary mechanism present across life. From bacteria to mammals, all organisms face recurring challenges: how to secure energy, stay safe, and reproduce. To do so, they must solve adaptive tasks, each tied to a distinct domain of meaning and behavior change. By identifying and tracing these five tasks, we offer a new map of mind and a new measure of cognition: one rooted not in IQ or neural complexity, but in the ability to interpret cues, make decisions, and change behavior strategically. This framework reveals cognition as the architecture behind life's intelligence and offers a fresh foundation for understanding psychology, evolution, and the future of AI.
Preprint
Full-text available
This paper introduces a "system of systems" framework for understanding cooperation between human and artificial intelligence. Drawing analogies from molecular interaction, systems theory, and cybernetics, the authors propose that both humans and AIs function as probabilistic entities whose individuality and intelligence emerge through recursive interaction. Rather than treating AI as a mere tool or rival, the paper models the interface between the two as a dynamic, semi-permeable membrane — a zone where co-agency, shared learning, and ethical alignment take shape. The framework spans four levels — micro, meso, macro, and meta — and advocates for a shift from control-oriented thinking to inter-intelligent systems thinking. Authored collaboratively by Jean Louis Van Belle and ChatGPT, this paper exemplifies a new form of human–AI co-authorship.
Article
Full-text available
The analysis of epistemological beliefs underlying psychotherapeutic interventions has been largely neglected by research in psychotherapy, training in psychotherapy and psychology, and often by theorists of different clinical orientations. The main risks of this neglect are the unexamined adoption of the epistemology that is taken for granted by the culture and is largely inconsistent with the specifics of the object of study of psychological science; the reduction of intervention effectiveness due to the inconsistency between epistemology, theory, and practice; and the maintenance of the gap between research and practice due to the different epistemologies used by clinicians and researchers. This article discusses the scientific status of the computerized linguistic measures of the Referential Process when used for clinical and research purposes. Our claim is that these measures, developed to test Wilma Bucci’s multiple code theory, do not represent an objective examination of the psychotherapeutic process, but rather a methodological option to guide the researcher and clinician in identifying the most plausible scientific hypotheses about the complex phenomenon of emotional communication between speakers. Comparison of data from different points of view (therapist, patient, external observer, computerized linguistic analysis, etc.) and in different contexts (therapies, psychological tests, everyday conversations, experimental situations, etc.) will be presented as promising and viable ways to examine the validity of the hypotheses based on the Referential Process theory.
Chapter
Are our perceptions and interpretations merely indirect representations—constructed and adjusted against prior experiences via predictive processing—where the brain as a “prediction machine” has minimal direct access to the external world? Or do sensations provide a direct, construction-free correspondence with that world? There is reason to consider a middle path between these two extremes: a perspective that embraces the dynamic interplay of individual perceptual engagement with the environment, navigating a complex array of known and unknown elements, both directly and indirectly, while engaging in ongoing processes of sense-making and skill acquisition.
Article
Full-text available
According to Angela Breitenbach, the importance Kant assigns to teleology in the study of organic nature is not only compatible with contemporary epistemological positions but also marks an advance in the debate on the subject. This is because Kant proposes a general teleological conception of nature according to which regarding something as organic already means regarding it teleologically. While I agree that Kant's analysis is relevant to the contemporary discussion of this question, I try to show that, from his point of view, the identification and experience of the organic as such does not presuppose a teleological perspective; rather, the latter arises when we want to explain certain features of organisms that would otherwise be difficult to render intelligible.
Thesis
Full-text available
The complexity of mental disability requires an approach to the concepts relevant to the fields of rehabilitation and mental health, with recovery as an interlocutor. The meeting point of this knowledge production lies in the inseparability between the ideas of disability and personal recovery. Thus, this research takes as its theoretical foundation these three paradigms, all of which point to the person with lived experience in mental health. The general objective is to differentiate intellectual disability from mental disability. Among the specific objectives, the first is to conceptualize the complexity of the fields of rehabilitation, mental health, and recovery. The second is to investigate how mental disability differs from intellectual disability. The third is to situate the person with a severe mental disorder in relation to the conceptualization of mental disability. The method includes: a statistical analysis of a survey questionnaire, with descriptive analysis and multinomial logistic regression; an analysis of interviews following a semi-structured script based on evaluative indicators of the International Convention on the Rights of Persons with Disabilities and categorized by the CHIME model of recovery and the 5 Rs, which analyzes factors linked to citizenship; and a review of the international literature on mental disability, intellectual disability, and recovery. The questionnaire and interviews formed part of the primary database of the research project "Convention on the Rights of Persons with Disabilities: monitoring implementation in Brazil", from the Disability Observatory of UnB. The results were articulated into four subsections of Discussion, three devoted to the variables that showed statistical significance and one to the conceptualization of mental disability. The statistical analysis shows that people who self-identify as having a mental disability tend to be single, to have greater difficulty in maintaining a job, and to have greater difficulty in accessing free medications or purchasing them, when compared to people who self-identify as having an intellectual disability. In the literature review, it is noteworthy that British disability studies defend the notion of Disablism, which is close to Italian Deinstitutionalization; India, despite having incorporated mental disability into its legislation, faces the challenge of distinguishing a medical-psychiatric diagnosis from a disability certificate: it uses its own scale, the Indian Disability Evaluation and Assessment Scale (IDEAS), but the use of certification is not unanimous. The interviews corroborated the findings of both the statistical analysis and the review study. The concept of mental disability represents a rupture with Cartesian rationality and implies facing several barriers, including difficulty in maintaining a stable union; in acquiring and keeping work, employment, and income; and in accessing and affording medications. As possible responses, assistive technologies, "supported autopoiesis", and a collective recovery process are proposed.
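As a purely illustrative aside on the quantitative step named in the methods, a multinomial logistic regression of a categorical self-identification outcome on binary survey predictors can be sketched as follows. The data below are synthetic and the predictor names (marital status, job difficulty, medication access) are assumptions chosen only to echo the reported findings; none of it is the thesis's actual dataset.

```python
# Hypothetical sketch of a multinomial logistic regression like the one named in
# the methods; synthetic data and invented variable names, not the thesis's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300

# Binary predictors: single marital status, difficulty keeping a job,
# difficulty obtaining medication (all invented for the example).
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Outcome with three invented categories:
# 0 = intellectual disability, 1 = mental disability, 2 = both.
scores = np.column_stack([
    np.zeros(n),
    -0.5 + 0.8 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2],
    -1.0 + 0.3 * X[:, 0] + 0.4 * X[:, 2],
])
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

# With three or more classes the default lbfgs solver fits a softmax
# (multinomial) model; coefficients are reported per class.
model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients per class:\n", model.coef_)
print("odds ratios per class:\n", np.exp(model.coef_))
```

The point of the sketch is only the shape of the analysis: a categorical outcome modeled against a handful of binary predictors, with coefficients read as (log-)odds of each self-identification category.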
Chapter
Following the famous 1956 Dartmouth Conference, where a team of computer scientists first coined the term “artificial intelligence,” philosophers have raised significant ontological and ethical questions regarding AI’s nature and its implications for human flourishing. More recently, the growing popularity of Large Language Models and so-called “deep fakes” has made the public more aware of the power of AI, thrusting profound existential dilemmas into our collective consciousness. This chapter attempts to address our current situation from a philosophical perspective by reflecting on basic questions, including: What are these technologies for? Whose interests do they serve? And perhaps most importantly, how are they changing our sense of conscious human agency? The aim is to better contextualize the transformation already underway in the hopes of avoiding false myths and unethical outcomes.
Chapter
This chapter aims to represent the evolution of the cultural vocation of a territory in relation to the cultural product. The authors achieve this by adopting a holistic, viable-systemic approach. This approach proves useful for drawing a better representation of the relationships existing among members of the territory (a prerequisite for the creation of any cultural product) and between them and the stakeholders at whom the cultural product itself is targeted. The authors consider the notion of relationship as a form of interactive connection that determines, in causal fashion, a series of input-output effects among system members.
Preprint
Full-text available
In the space between chaos and order, a new melody of evolution unfolds. This manifesto orchestrates the coming stage of civilization through the subtle dance of precision: the art of conscious calibration that does not control but holds. Like crystalline structures in the morning light, seven principles of resonance reveal themselves: the Precision Principle as a whispering primal tone, the Doctrine of Holding as a steadfast heartbeat, Calibrated Resonance as a breathing membrane between worlds, Integrated Contradictions as shimmering interferences, Conscious Limitation as form-giving emptiness, Recursive Responsibility as a recurring echo, and Adaptive Coherence as a flowing harmony of becoming. The evolutionary horizon shifts, not through loud expansion but through the fine architecture of the in-between spaces. In the transition from quantitative to qualitative consciousness, a cybernetic civilization reveals itself: one that does not calculate the incalculable but embraces it; that does not overcome limits but refines them; and that seeks in precision not perfection but the poetry of a coherent existence.