Article
PDF available

Significance of Models of Computation, from Turing Model to Natural Computation

Authors: Gordana Dodig-Crnkovic

Abstract

The increased interactivity and connectivity of computational devices, along with the spread of computational tools and computational thinking across the fields, have changed our understanding of the nature of computing. In the course of this development, computing models have been extended from the initial abstract symbol-manipulating mechanisms of stand-alone, discrete sequential machines to models of natural computing in the physical world, generally concurrent asynchronous processes capable of modelling living systems, their informational structures and dynamics, on both symbolic and sub-symbolic information-processing levels. The present account of models of computation highlights several topics of importance for the development of a new understanding of computing and its role: natural computation and the relationship between the model and its physical implementation; interactivity as fundamental for computational modelling of concurrent information-processing systems such as living organisms and their networks; and the new developments in logic needed to support this generalized framework. Computing understood as information processing is closely related to the natural sciences; it helps us recognize connections between the sciences, and provides a unified approach for the modelling and simulation of both living and non-living systems.

Keywords: Philosophy of computer science · Philosophy of computing · Theory of computation · Hypercomputing · Philosophy of information · Models of computation
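To make the contrast concrete, here is a minimal sketch (illustrative only, not from the paper) of the difference between a stand-alone, batch-style computation and an interactive, concurrent process that keeps exchanging information with its environment while it runs; all names in it are hypothetical.

```python
# Illustrative contrast (not from the paper): a stand-alone, batch-style
# computation versus an interactive, concurrent process that keeps
# exchanging information with its environment while it runs.

import queue
import threading


def batch_computation(x: int) -> int:
    """Turing-style view: all input given up front, one final output."""
    return x * x


def interactive_agent(inbox: "queue.Queue[int]", outbox: "queue.Queue[int]") -> None:
    """Interactive view: an open-ended stream of inputs and outputs.

    The agent reacts to each message as it arrives; there is no single
    'final' result, only an ongoing exchange with the environment.
    """
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: environment ends the interaction
            break
        outbox.put(msg * msg)    # respond while the interaction continues


if __name__ == "__main__":
    print(batch_computation(4))  # 16: the whole computation in one shot

    inbox: "queue.Queue[int]" = queue.Queue()
    outbox: "queue.Queue[int]" = queue.Queue()
    t = threading.Thread(target=interactive_agent, args=(inbox, outbox))
    t.start()
    for x in (1, 2, 3):          # environment feeds inputs over time
        inbox.put(x)
        print(outbox.get())      # 1, 4, 9: outputs interleaved with inputs
    inbox.put(None)
    t.join()
```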
... Syntactic mechanical symbol manipulation is replaced by information (both syntactic and semantic) processing. Compared to new computing machines, Turing machines form a proper subset of the set of information processing devices, in much the same way as Newton's theory of gravitation is a special case of Einstein's theory, or Euclidean geometry is a limiting case of non-Euclidean geometries [9] (p. 308). ...
... Theories of concurrency are partially integrating the observer into the model by permitting limited shifting of the inside-outside boundary. By this integration, theories of concurrency might bring major enhancements to the computational expressive toolbox [9] (p. 314). ...
... For, fundamentally, the TM presupposes a human as a part of a system. That human, Dodig-Crnkovic continues, "is the one who poses the questions, provides material resources and interprets the answers" [9] (p. 306). ...
Article
Full-text available
The outputs of a Turing machine are not revealed for inputs on which the machine fails to halt. Why is an observer not allowed to see the generated output symbols as the machine operates? Building on the pioneering work of Mark Burgin, we introduce an extension of the Turing machine model with a visible output tape. As a subtle refinement to Burgin’s theory, we stipulate that the outputted symbols cannot be overwritten: at step i, the content of the output tape is a prefix of the content at step j, where i<j. Our Refined Burgin Machines (RBMs) compute more functions than Turing machines, but fewer than Burgin’s simple inductive Turing machines. We argue that RBMs more closely align with both human and electronic computers than Turing machines do. Consequently, RBMs challenge the dominance of Turing machines in computer science and beyond.
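A minimal sketch of the prefix property described in this abstract, assuming a simple step-function interface: the visible output tape only grows by appending, so its content at step i is a prefix of its content at any later step j. The `step` rule below is a hypothetical stand-in, not the authors' formal RBM construction.

```python
# Sketch of a machine with a visible, append-only output tape (an
# illustrative stand-in for the Refined Burgin Machine idea described
# above, not the authors' formal definition).

from typing import Callable, Iterator, Optional


def run_with_visible_output(
    step: Callable[[int], Optional[str]],
    max_steps: int = 10,
) -> Iterator[str]:
    """Yield the visible output tape after every step.

    `step(i)` may append one symbol (or nothing) at step i; symbols are
    never overwritten, so each yielded string is a prefix of the next.
    """
    tape = ""
    for i in range(max_steps):
        symbol = step(i)
        if symbol is not None:
            tape += symbol          # append only: earlier output stays visible
        yield tape                  # an observer may read this at any time


if __name__ == "__main__":
    # Hypothetical step rule: emit '1' on even steps, nothing on odd steps.
    snapshots = list(run_with_visible_output(lambda i: "1" if i % 2 == 0 else None, 6))
    print(snapshots)                # ['1', '1', '11', '11', '111', '111']
    # Prefix property: each snapshot is a prefix of every later one.
    assert all(snapshots[j].startswith(snapshots[i])
               for i in range(len(snapshots)) for j in range(i, len(snapshots)))
```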
... Some arguments suggest that the problems of scalability, resiliency, and complexity of distributed software applications are symptoms that point to a foundational shortcoming of the computational model associated with the stored program implementation of the Turing Machine from which all current-generation computers are derived [7][8][9][10][11][12][13]. ...
... Video Service Management Subnetwork: It provides the video service, from content to video-server and client management. Cognitive Red Flag Manager: When deviations from the normal workflow occur, such as one of the video clients failing, the SWM switches it to the secondary video client as shown in Figure 6. It also communicates a red flag, which is then passed to the APM to take corrective action, in this case, restore the video client that went down and let the CNM know to make it secondary. ...
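A hedged sketch of the red-flag failover sequence described in this excerpt, with the SWM, APM, and CNM roles reduced to plain methods; the class and method names are illustrative assumptions, not the cited system's API.

```python
# Illustrative sketch of the red-flag failover sequence described above:
# on a client failure the workflow is switched to the secondary client, a
# red flag triggers corrective action (restore the failed client), and the
# restored client is re-registered as secondary. All names are
# hypothetical stand-ins for the SWM/APM/CNM roles.

from dataclasses import dataclass, field
from typing import List


@dataclass
class VideoService:
    primary: str
    secondary: str
    log: List[str] = field(default_factory=list)

    def on_client_failure(self, failed: str) -> None:
        """SWM role: keep the stream alive by switching clients."""
        if failed == self.primary:
            self.primary, self.secondary = self.secondary, failed
            self.log.append(f"switched primary to {self.primary}")
        self.raise_red_flag(failed)

    def raise_red_flag(self, failed: str) -> None:
        """APM role: take corrective action on a red flag."""
        self.log.append(f"red flag: {failed} down, restoring")
        self.register_secondary(failed)

    def register_secondary(self, restored: str) -> None:
        """CNM role: the restored client becomes the secondary."""
        self.secondary = restored
        self.log.append(f"{restored} restored as secondary")


if __name__ == "__main__":
    svc = VideoService(primary="client-A", secondary="client-B")
    svc.on_client_failure("client-A")
    print("\n".join(svc.log))
```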
Preprint
Full-text available
Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The system receives information from various senses, makes sense of what is being observed, and acts using its experience, while the observations are still in progress. We use the General Theory of Information (GTI) to implement a digital genome, specifying the operational processes that design, deploy, operate, and manage a cloud-agnostic distributed application that is independent of IaaS and PaaS infrastructure, which provides the resources required to execute the software components. The digital genome specifies the functional and non-functional requirements that define the goals and best-practice policies to evolve the system using associative memory and event-driven interaction history to maintain stability and safety while achieving the system’s objectives. We demonstrate a structural machine, cognizing oracles, and knowledge structures derived from GTI used for designing, deploying, operating, and managing a distributed video streaming application with autopoietic self-regulation that maintains structural stability and communication among distributed components with shared knowledge while maintaining expected behaviors dictated by functional requirements.
... Moreover, the advent of many virtualized and disaggregated technologies, and the rapid increase of the Internet of Things (IoT) makes end-to-end orchestration difficult to do at scale. Some arguments suggest that the problems of scalability, resiliency, and complexity of distributed software applications are symptoms that point to a foundational shortcoming of the computational model associated with the stored program implementation of the Turing Machine from which all current-generation computers are derived [7][8][9][10][11][12][13]. ...
... It also communicates a red flag, which is then passed to the APM to take corrective action, in this case, restore the video client that went down and let the CNM know to make it secondary. Event Monitor: It monitors events from the video service and user interface workflows and creates an associative memory and an event-driven interaction history with a time stamp. These provide the long-term memory for other nodes to use the information in multiple ways, including performing data analytics and gaining insights to take appropriate action. ...
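A minimal sketch of the event monitor described here, assuming a simple in-memory design: timestamped events are appended to an interaction history and indexed by subject so that other nodes can retrieve them associatively. The structure is illustrative, not the cited implementation.

```python
# Sketch of the event monitor described above: every observed event is
# appended to a time-stamped interaction history and indexed by subject,
# so other nodes can later retrieve it associatively. Illustrative only.

import time
from collections import defaultdict
from typing import Any, Dict, List, Tuple


class EventMonitor:
    def __init__(self) -> None:
        self.history: List[Tuple[float, str, Any]] = []              # long-term memory
        self.associative: Dict[str, List[int]] = defaultdict(list)   # subject -> history indices

    def record(self, subject: str, payload: Any) -> None:
        """Append an event with a timestamp and index it by subject."""
        self.history.append((time.time(), subject, payload))
        self.associative[subject].append(len(self.history) - 1)

    def recall(self, subject: str) -> List[Tuple[float, str, Any]]:
        """Associative lookup: all past events about a given subject."""
        return [self.history[i] for i in self.associative[subject]]


if __name__ == "__main__":
    monitor = EventMonitor()
    monitor.record("video-client-A", {"event": "failure"})
    monitor.record("video-client-A", {"event": "restored"})
    for ts, subject, payload in monitor.recall("video-client-A"):
        print(subject, payload)
```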
... An integration of event-driven architecture and service-oriented architecture is discussed by Papazoglou and Van Den Heuvel in [6]. However, the asynchronous and distributed nature of EDA poses several problems [7][8][9][10] that include handling failures, the dependence of an end-to-end transaction on individual component stability, etc. In this paper, we describe a new approach to designing self-regulating distributed applications with autopoietic and cognitive workflow management using the tools derived from the General Theory of Information, GTI [11][12][13][14]. ...
... Each entity is represented as a vertex with process knowledge to execute various functions based on the input received. The output is shared with other entities whose functions are impacted by the information received, using shared knowledge defined in the functional requirements of the digital assistant genome. The vertices and the edges that connect the various entities with shared knowledge constitute a knowledge network, where nodes that are wired together fire together to execute a transaction spanning the knowledge network with well-defined pre- and post-conditions. ...
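A minimal sketch of the knowledge network described in this excerpt, under the assumption that "process knowledge" can be modelled as a callable per vertex and shared knowledge as a mutable state dictionary; the names and the pre- and post-conditions are illustrative.

```python
# Sketch of the knowledge network described above: each vertex carries a
# process (its "process knowledge"), edges express shared knowledge, and a
# transaction fires connected nodes in order, checking simple pre- and
# post-conditions. Names and structure are illustrative assumptions.

from typing import Callable, Dict, List


class KnowledgeNetwork:
    def __init__(self) -> None:
        self.process: Dict[str, Callable[[dict], dict]] = {}   # vertex -> process knowledge
        self.edges: Dict[str, List[str]] = {}                  # vertex -> downstream vertices

    def add_node(self, name: str, fn: Callable[[dict], dict], downstream: List[str]) -> None:
        self.process[name] = fn
        self.edges[name] = downstream

    def fire(self, start: str, state: dict) -> dict:
        """Nodes wired together fire together: propagate shared state along edges."""
        assert "request" in state, "pre-condition: a request must be present"
        frontier = [start]
        while frontier:
            node = frontier.pop(0)
            state = self.process[node](state)      # vertex updates the shared state
            frontier.extend(self.edges[node])
        assert "response" in state, "post-condition: a response must be produced"
        return state


if __name__ == "__main__":
    net = KnowledgeNetwork()
    net.add_node("ingest", lambda s: {**s, "validated": True}, ["serve"])
    net.add_node("serve", lambda s: {**s, "response": f"ok:{s['request']}"}, [])
    print(net.fire("ingest", {"request": "play-video"}))
```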
Preprint
Full-text available
The benefits of event-driven architecture (EDA) derive from how systems and components are loosely coupled, which can facilitate independent development and deployment of systems, improved scaling and fault tolerance, and integration with external systems, especially in comparison to monolithic architectures. With the advent of new technologies such as containers and microservices, a new generation of distributed event-streaming platforms is commonly used in event-driven architectures for efficient event-driven communication. However, the asynchronous and distributed nature of EDA poses several problems, which include handling failures, the dependence of an end-to-end transaction on individual component stability, etc. In this paper, we describe a new approach to designing self-regulating distributed applications with autopoietic and cognitive workflow management. This approach is based on the new science of information processing structures derived from the General Theory of Information. Just as a genome enables self-organizing and self-regulating biological structures, a digital genome gives a specific software application with several components the ability to use distributed resources and to self-regulate the evolution of the system based on functional and non-functional requirements and best-practice policies that maintain stability, safety, and survival under non-deterministic fluctuations in the demand for resources. In addition, cognitive workflow management assures end-to-end transaction delivery.
... There are several discussions of computing models pointing to the foundational shortcomings and suggesting new computing models [4][5][6][7][8][9][10][11][12]. However, recent application of the General Theory of Information (GTI) [13,14] and the theory of structural reality [15] offer a new insight into how biological systems use information and knowledge to observe, model, and make sense of what they are observing fast enough to do something about it while they are still observing it. ...
Preprint
Full-text available
General Theory of Information (GTI) offers a groundbreaking framework for designing "mindful machines" that bridge the divide between biological intelligence and digital automation. GTI reimagines traditional computing by introducing cognitive, autopoietic (self-regulating) capabilities in digital systems, enabling them to perceive, adapt, and respond autonomously to changing conditions. Unlike conventional AI and AGI models that operate within predefined algorithmic constraints, GTI-based systems go beyond by incorporating self-awareness and a resilient digital "self," capable of storing interaction histories and learning from them. This paper explores GTI’s practical applications in systems requiring resilience, cognition, and ethical safeguards. In video streaming and medical assistance, GTI-based digital genomes enable systems to function independently, actively self-correct, and conform to policy-driven ethical standards. These mindful machines embody unique traits rarely achieved with traditional AI: robust resilience through self-corrective mechanisms, adaptive learning from accumulated experiences, and alignment with ethical principles that govern system behavior. In demonstrating how GTI can cultivate resilience, autonomy, and ethical decision-making, this paper reveals GTI’s potential to empower digital automata with human-like adaptability and values. As such, GTI emerges as a bridge to a new generation of digital systems designed not merely to execute but to understand, adapt, and ethically engage within complex real-world environments.
... The matching mechanism for models in SysML and simulation can be performed with different methods, but it is very important that the eventual linking mechanism does not consider only the naming conventions and tags of the models. Otherwise, there is a risk of matching models with incompatible modelling assumptions or executions (considering, for example, models of computation [18] or semantic gaps between simulation models [39,53]), even if the interfaces are a match. Likewise, the eventual abstraction level of the V&V should be clearly defined to avoid matching sub-models with a different scope of abstraction. ...
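A small sketch of the matching concern raised in this excerpt: linking a SysML block to a simulation model should check more than names and tags, for example the model of computation and the abstraction level. The metadata fields are illustrative assumptions, not a SysML API.

```python
# Sketch of the matching concern raised above: linking a SysML block to a
# simulation model should check more than names/tags, e.g. the model of
# computation and the abstraction level must be compatible. The metadata
# fields used here are illustrative assumptions, not a SysML API.

from dataclasses import dataclass


@dataclass
class ModelDescriptor:
    name: str
    model_of_computation: str   # e.g. "discrete-event", "continuous-time"
    abstraction_level: str      # e.g. "system", "component"


def can_link(sysml_block: ModelDescriptor, sim_model: ModelDescriptor) -> bool:
    """Reject matches that agree only on naming but differ in assumptions."""
    if sysml_block.name != sim_model.name:
        return False
    if sysml_block.model_of_computation != sim_model.model_of_computation:
        return False                      # incompatible modelling assumptions
    return sysml_block.abstraction_level == sim_model.abstraction_level


if __name__ == "__main__":
    block = ModelDescriptor("Brake", "discrete-event", "system")
    sim = ModelDescriptor("Brake", "continuous-time", "component")
    print(can_link(block, sim))   # False: same name, incompatible assumptions
```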
Conference Paper
Full-text available
In this paper we propose an extension to the MagicGrid framework to support virtual prototyping for early system-performance Validation & Verification (V&V). Model-Based Systems Engineering (MBSE) is at a maturity where V&V of system performance is expected to be automated for mature analysis. However, current practices neither adequately cover nor describe how this can be enabled in standard MBSE processes using SysML models as the baseline for the system descriptions and knowledge capture. Therefore, we propose an extension of the industrially accepted MagicGrid framework to cover virtual V&V in a tool- and process-agnostic method, supporting practitioners in developing and using models in MBSE for this purpose without a specific vendor or method lock-in. The framework extension is discussed for each new cell in the grid, and we provide guidelines and best-practice discussions for how V&V should be enabled for each cell. Specifically, we discuss simulating/analysing SysML directly or through (co-)simulation. Automotive development is used as a running use case.
Article
Full-text available
Eco-cognitive computationalism is a cognitive science perspective that views computing in context, focusing on embodied, situated, and distributed cognition. It emphasizes the role of Turing in the development of the Logical Universal Machine and the concept of machines as “domesticated ignorant entities”. This perspective explains how machines can be dynamically active in distributed physical entities, allowing data to be encoded and decoded for appropriate results. From this perspective, we can clearly see that the concept of computation evolves over time due to historical and contextual factors, and it allows for the emergence of new types of computations that exploit new substrates. Taking advantage of this eco-cognitive framework, I will also illustrate the concepts of “locked” and “unlocked” strategies in deep learning systems, indicating different inference routines for creative results. Locked abductive strategies are characterized by poor hypothetical creative cognition due to the lack of what I call eco-cognitive openness, while unlocked human cognition involves higher kinds of creative abductive reasoning. This special kind of “openness” is physically rooted in the fundamental character of the human brain as an open system constantly coupled with the environment (that is, an “open” or dissipative system): its activity is the uninterrupted attempt to achieve equilibrium with the environment in which it is embedded, and this interplay can never be switched off without producing severe damage to the brain. The brain cannot be conceived as deprived of its physical quintessence, which is its openness. In the brain, contrary to the computational case, ordering is not derived from the outside thanks to what I have called in a recent book the “computational domestication of ignorant entities”, but is the direct product of an “internal” open dynamical process of the system.
Preprint
Full-text available
This study aims to place Lorenzo Magnani's Eco-Cognitive Computationalism within the broader context of current work on information, computation, and cognition. Traditionally, cognition was believed to be exclusive to humans and a result of brain activity. However, recent studies reveal it as a fundamental characteristic of all life forms, ranging from single cells to complex multicellular organisms and their networks. Yet the literature and general understanding of cognition still largely remain human-brain-focused, leading to conceptual gaps and incoherency. This paper presents a variety of computational (information processing) approaches, including an info-computational approach to cognition, where natural structures represent information and dynamical processes on natural structures are regarded as computation, relative to an observing cognizing agent. We model cognition as a web of concurrent morphological computations, driven by processes of self-assembly, self-organisation, and autopoiesis across physical, chemical, and biological domains. We examine recent findings linking morphological computation, morphogenesis, agency, basal cognition, extended evolutionary synthesis, and active inference. We establish a connection to Magnani's Eco-Cognitive Computationalism and the idea of computational domestication of ignorant entities. Novel theoretical and applied insights question the boundaries of conventional computational models of cognition. The traditional models prioritize symbolic processing and often neglect the inherent constraints and potentialities of the physical embodiment of agents on different levels of organization. Gaining a better info-computational grasp of cognitive embodiment is crucial for the advancement of fields such as biology, evolutionary studies, artificial intelligence, robotics, medicine, and more.
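As a toy illustration (not from the paper) of "dynamical processes on natural structures regarded as computation", the sketch below runs concurrent local updates on a one-dimensional cellular automaton (rule 110); the observer reads the evolving structure as information. All details are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): concurrent local updates on a
# simple structure (a 1D cellular automaton, rule 110) as a toy instance of
# "dynamical processes on natural structures regarded as computation". The
# observer reads the evolving structure as information.

RULE_110 = {  # local update rule: (left, centre, right) -> next centre
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}


def step(cells: list) -> list:
    """Every cell updates concurrently from its local neighbourhood."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]


if __name__ == "__main__":
    cells = [0] * 31 + [1] + [0] * 31            # a single 'seed' in the structure
    for _ in range(16):                          # the observer watches the dynamics
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```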
Chapter
Mind design is the endeavor to understand mind (thinking, intellect) in terms of its design (how it is built, how it works). Unlike traditional empirical psychology, it is more oriented toward the "how" than the "what." An experiment in mind design is more likely to be an attempt to build something and make it work—as in artificial intelligence—than to observe or analyze what already exists. Mind design is psychology by reverse engineering. When Mind Design was first published in 1981, it became a classic in the then-nascent fields of cognitive science and AI. This second edition retains four landmark essays from the first, adding to them one earlier milestone (Turing's "Computing Machinery and Intelligence") and eleven more recent articles about connectionism, dynamical systems, and symbolic versus nonsymbolic models. The contributors are divided about evenly between philosophers and scientists. Yet all are "philosophical" in that they address fundamental issues and concepts; and all are "scientific" in that they are technically sophisticated and concerned with concrete empirical research. Contributors Rodney A. Brooks, Paul M. Churchland, Andy Clark, Daniel C. Dennett, Hubert L. Dreyfus, Jerry A. Fodor, Joseph Garon, John Haugeland, Marvin Minsky, Allen Newell, Zenon W. Pylyshyn, William Ramsey, Jay F. Rosenberg, David E. Rumelhart, John R. Searle, Herbert A. Simon, Paul Smolensky, Stephen Stich, A.M. Turing, Timothy van Gelder
Chapter
This is the first of two volumes of essays in commemoration of Alan Turing, whose pioneering work in the theory of artificial intelligence and computer science continues to be widely discussed today. A group of prominent academics from a wide range of disciplines focus on three questions famously raised by Turing: What, if any, are the limits on machine 'thinking'? Could a machine be genuinely intelligent? Might we ourselves be biological machines, whose thought consists essentially in nothing more than the interaction of neurons according to strictly determined rules? The discussion of these fascinating issues is accessible to non-specialists and stimulating for all readers.
Article
Designating by the expression "historical mechanism" the proposition that the mind is a machine, the author distinguishes, among the developments of the mechanist thesis over the course of the twentieth century, a narrow mechanism, which asserts that the mind is a Turing machine, and a wide mechanism, which asserts that the mind is indeed a machine, but a machine that contains the possibility of other information-processing machines not reducible to the universal Turing machine. The author shows that Turing and Church themselves could not accept the narrow version of mechanism, which is refuted by recent developments in non-conventional models of computation, such as the dynamical hypothesis in cognitive science.
Article
All approaches to high-performance computing are naturally divided into three main directions: the development of computational elements and their networks, the advancement of computational methods and procedures, and the evolution of the computed structures. In this paper, the second direction is developed in the context of the theory of super-recursive algorithms. It is demonstrated that super-recursive algorithms such as inductive Turing machines are more adequate for simulating many processes, have much more computing power, and are more efficient than recursive algorithms.
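A toy sketch of the inductive-computation idea behind this abstract, assuming a simple simulation loop: the machine keeps revising a visible output and never signals halting, and the result is whatever the output stabilizes on. The example and its names are illustrative, not Burgin's formal construction.

```python
# Illustrative sketch of the inductive-computation idea mentioned above:
# the machine keeps writing to its output and never signals halting; the
# result is whatever value the output stabilizes on. The toy set-up below
# is a stand-in, not Burgin's formal construction.

from typing import Callable, Iterator


def inductive_halting_guess(program: Callable[[], Iterator[None]], max_steps: int) -> Iterator[str]:
    """Emit a stream of guesses about whether `program` halts.

    Start with "does not halt" and switch to "halts" as soon as the
    simulation finishes; the guess changes at most once, so it stabilizes
    on the correct answer even though no halting signal is ever given.
    """
    steps = program()
    for _ in range(max_steps):
        try:
            next(steps)
            yield "does not halt"      # current (revisable) output
        except StopIteration:
            yield "halts"              # output has stabilized for good
            steps = iter(())           # nothing left to simulate


def short_program() -> Iterator[None]:
    for _ in range(3):                 # halts after three steps
        yield None


if __name__ == "__main__":
    print(list(inductive_halting_guess(short_program, 6)))
    # ['does not halt', 'does not halt', 'does not halt', 'halts', 'halts', 'halts']
```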