Article

On computable numbers, with an application to the Entscheidungsproblem

... If Turing machines were truly complete, computer science with its Turing machine model would be an exception among the sciences: it, together with its Turing machine model, would be complete. If so, by reduction techniques we could also prove the completeness of mathematics (the decision problem in mathematics [43], disproved by Gödel, Church, and Turing [24,10,40]), as well as the completeness of physics, philosophy, medicine, economics, and so on. ...
... We now know much more about EC expressiveness. The Evolutionary Turing Machine model was introduced in [12,15], where it was indicated that EC might be more expressive than Turing Machines, i.e., EC can be non-algorithmic, can evolve non-recursive functions, and, in particular, can solve the halting problem of the Universal Turing Machine [26,40]. Evolutionary automata, a generalization of Evolutionary Turing Machines, have been introduced in order to investigate the properties of EC more precisely [7,8,9,12]. ...
... It is not clear at this moment how to classify the expressiveness of Infinite Time Turing Machines and Accelerating Turing Machines: the conditions of an infinite number of steps, or of doubling the speed of each successive step, alone seem not to be sufficient to prove that those models can accept all languages over a given alphabet. Similarly, we do not have enough details on c-machines, because Turing did not provide sufficient details about them [40]. Nor can we properly classify at this moment the expressiveness of Inductive Turing Machines and Persistent Turing Machines. ...
Preprint
Evolution by natural selection, one of the most compelling themes of modern science, brought forth evolutionary algorithms and evolutionary computation, applying mechanisms of evolution in nature to various problems solved by computers. In this paper we concentrate on evolutionary automata, which constitute a model of evolutionary computation analogous to the well-known evolutionary algorithms. Evolutionary automata provide a more complete dual model of evolutionary computation, much as abstract automata (e.g., Turing machines) form a more formal and precise model than recursive algorithms and their subset, evolutionary algorithms. An evolutionary automaton is an automaton that evolves while performing evolutionary computation, possibly over an infinite number of generations. This model allows for directly modeling the evolution of evolution, and leads to the tremendous expressiveness of evolutionary automata and evolutionary computation. It also hints at the power of natural evolution, which is self-evolving through interactive feedback with the environment.
... A universal Turing machine (UTM) is a Turing machine (TM) that can simulate any TM (or Turing-universal computing system) when given a description of the latter on its tape. A UTM was first proposed by Turing himself [1]. Since then it has attracted many researchers. ...
... When the i-th counter is to be accessed, it is shifted to the corresponding position, i.e., *1^{n_0}*1^{n_1}*···@1^{n_i}*···*1^{n_{k-1}}*. If the next instruction read by the state start is H, then T_U goes to the state h^(1) and halts in the state halt(a) or halt(r). Otherwise, T_U executes the routine ca(·). ...
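To make the tape layout above concrete, the following short Python sketch is purely illustrative (it is not taken from the cited construction, and the helper names are invented): it encodes k counters as unary blocks separated by '*' and places an '@' marker in front of the counter currently being accessed.

```python
def encode_counters(values):
    """Encode counter values as unary blocks: *1^{n_0}*1^{n_1}*...*1^{n_{k-1}}*."""
    return "*" + "*".join("1" * n for n in values) + "*"

def mark_counter(tape, i):
    """Replace the separator in front of the i-th block with '@' to mark access."""
    blocks = tape.strip("*").split("*")          # the unary counter blocks
    seps = ["*"] * (len(blocks) + 1)             # separators surrounding the blocks
    seps[i] = "@"                                # the i-th counter is being accessed
    out = seps[0]
    for block, sep in zip(blocks, seps[1:]):
        out += block + sep
    return out

tape = encode_counters([3, 0, 2])                # '*111**11*'
print(mark_counter(tape, 2))                     # '*111*@11*'
```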
Article
We construct a 1-tape 98-state 10-symbol universal reversible Turing machine (URTM(98,10)) that directly simulates reversible counter machines (RCMs). The objective of this construction is not to minimize the numbers of states and tape symbols, but to give a URTM a reasonable size whose simulating processes of RCMs are easily understood. Here, we choose RCMs as the target machines of simulation, since the class of RCMs is known to be Turing universal, and their operations are very simple. Furthermore, using the framework of RCMs in the program form (rather than the quadruple form), construction of a URTM is simplified. We also created a computer simulator for the URTM(98,10), by which simulation processes of RCMs are visualized.
... Alonzo Church with his Lambda-calculus (Church (1936)) and Alan Turing with his Turing Machine (Turing (1936)). Both ways of computing were later shown to be equivalent, and they served to relate the abstract mathematical concept of computation with concrete, definable processes via the Church-Turing Thesis. ...
... It is a remarkable fact, already proven by Turing (1936), that there exist universal Turing machines. A universal TM is a Turing machine that can simulate the behavior of any other TM in the sense that, if given as input the description of a Turing machine T, it halts if and only if T halts on the blank input and, in that case, its output is identical to that of T. ...
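As an illustration of this idea of simulation-by-description, the following minimal Python sketch (an invented toy example, not Turing's construction) treats a machine's transition table as ordinary input data and runs whichever machine it is handed with one generic loop.

```python
def run_tm(delta, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine given its description `delta`.

    `delta` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right). Returns the tape
    contents if the machine reaches the state 'halt' within `max_steps`."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        sym = cells.get(head, blank)
        state, cells[head], move = delta[(state, sym)]
        head += move
    raise RuntimeError("no halt within step budget (undecidable in general)")

# Example description: a machine that flips every bit of its input, then halts.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # -> '1001'
```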
Preprint
Full-text available
The study of undecidability in problems arising from physics has experienced a renewed interest, mainly in connection with quantum information problems. The goal of this review is to survey this recent development. After a historical introduction, we first explain the necessary results about undecidability in mathematics and computer science. Then we briefly review the first results about undecidability in physics, which emerged mostly in the 80s and early 90s. Finally we focus on the most recent contributions, which we divide into two main categories: many-body systems and quantum information problems.
... The variety of cyberspace can be traced to the foundational contributions of Church [27] and Turing [28] to computer science. Among other things, they showed that finite means can produce infinite scope (variety!): a discrete alphabet, whether binary, ASCII, etc., can be composed and recomposed to produce behaviours of unbounded and undecidable complexity. ...
... In fact, the non-stationary and self-referential nature of cyberspace is a direct analogue to the setting of Turing's halting problem [28]. ...
Preprint
Full-text available
Our thesis is that operating in cyberspace is challenging because cyberspace exhibits extreme variety, high malleability, and extreme velocity. These properties make cyberspace largely inscrutable and limit one's agency in cyberspace, where agency is the ability to exert influence to transform the state or behaviour of the environment. With this thesis, we explore the nature of cyberspace and of command and control (C2), and diagnose the challenges for cyber C2, with treatment to follow in future work.
... In 1950, faced with the question "Can machines think?", the "Turing test" as an "imitation game" and the concept of "child machines" introduced the possibility that an artifact could perform intelligently (Boden 2017; Turing, 1937; 2009). ...
Article
Full-text available
The 21st century reveals a significant push for fusions between science and technology, unlike other historical stages. Human beings and their daily lives are being transformed by technosciences. In that regard, this study has a double purpose. On the one hand, it contextualizes artificial intelligence within the orbit of the fourth industrial revolution. On the other hand, it examines positions regarding the differences between machines and humans. The findings show how machines evolve and become more sophisticated. Faced with this, perspectives for, against, and in between what is human become perceptible. This panorama makes the participation of bioethics, as a deliberative and disciplinary meeting place, inevitable. Its contribution is focused, promotes dialogue, and encourages prior corrective measures for self-control.
... III. Preliminaries on Turing's theory of computability: As discussed in Section I, computable analysis is founded on Turing's theory of computability [19], [20]. The Turing machine is a model of an idealized digital computer. ...
Article
Full-text available
We investigate the feasibility of computing quantum gate-circuit emulation (QGCE) and quantum gate-circuit concatenation (QGCC) on digital hardware. QGCE serves the purpose of rewriting gate circuits composed of gates from a varying input gate set into gate circuits formed of gates from a fixed target gate set. Analogously, QGCC serves the purpose of finding an approximation to the concatenation of two arbitrary elements of a varying list of input gate circuits in terms of another element from the same list. Problems of this kind occur regularly in quantum computing and are often assumed to be an easy task for the digital computers controlling the quantum hardware. Arguably, this belief is due to analogical reasoning: the classical Boolean equivalents of QGCE and QGCC are natively computable on digital hardware. In the present paper, we present two insights in this regard: upon applying a rigorous theory of computability, QGCE and QGCC turn out to be uncomputable on digital hardware. The results remain valid when we restrict the set of feasible inputs for the relevant functions to one-parameter families of fixed gate sets. Our results underline the possibility that several ideas from quantum-computing theory may require rethinking to become feasible for practical implementation.
... It is unclear how such a system could have come to exist in the first place, since none of the processes in this network can begin unless all the other processes have already begun. This is what Hofmeyr (2021) refers to as the starting problem, analogous to Turing's (1936) halting problem. So cycles of processes possess an interesting property which is not shared by non-cyclical process-enablement structures. ...
Preprint
Full-text available
There are many perspectives through which biologists can study a particular living system. As a result, models of biological systems are often quite different from one another, both in form and size. Thus, in order for us to generate reliable knowledge of a particular system, we need to understand how the models that represent it are related. In previous work, we constructed a general model comparison framework to compare models representing any physical system. Here, we develop an alternative methodology that focuses on a fundamental feature of living systems, namely self-organisation. We employ a graph theoretic formalism which captures self-organising processes as cycles within particular kinds of graphs: process-enablement graphs. We then build the mathematical tools needed to compare biological models and their corresponding descriptions of self-organisation in a consistent and rigorous manner. We apply our formalism to a range of classical theories of life to show how they are similar and where they differ. We also investigate examples of putatively abiotic systems which nonetheless still realise primitive forms of self-organisation. While our current framework does not demarcate living systems from nonliving ones, it does allow us to better study the grey area surrounding life's edge.
... In this subsection we provide some basic facts about computable metric spaces. See [PER89,Wei00,Tur36,Wei93,BW99,BP03,Ilj09,IS18]. ...
Preprint
Full-text available
In this work, we study the computability of topological graphs, which are obtained by gluing arcs and rays together at their endpoints. We prove that every semicomputable graph in a computable metric space can be approximated, with arbitrary precision, by its computable subgraph with computable endpoints.
... Last but not least, we will introduce the relevant preliminaries from computing theory. In theoretical informatics, arguably the most well-established framework of computability emerges from the model of Turing Machines introduced in [25], [26]. The widely accepted Church-Turing thesis implicitly states that Turing Machines form an exact model of real-world digital computers, i.e., any algorithm that can be executed by real-world digital hardware can, in theory, be executed by a Turing machine. ...
Article
Full-text available
The present article analyzes aspects of the problem of remote state estimation via noisy communication channels (RSE) for their Blum-Shub-Smale (BSS) computability, motivated by an exemplary application to a formal model of virtual twinning subject to stringent integrity requirements. Computability theory provides a unique framework for the formal and mathematically rigorous analysis of algorithms and computing machines. Therefore, computability theory is essential in the domain of safety- and life-critical technology, where the formal verification of automated systems is necessary. Based on the RSE problem, we establish a simple mathematical model of virtual-twin systems that entails a formal notion of integrity (i.e., a state where the virtual entity accurately mirrors its physical counterpart). The model's notion of integrity is related to the question of whether the system under consideration is capable of computing the communication channel's zero-error capacity and corresponding zero-error codes. While this task is known to exceed the theoretical capabilities of Turing computers, we prove its formal feasibility within the model of BSS machines. As different authors have proposed BSS machines as a potential model of some forms of analog computing, this article serves as a proof-of-concept for a theoretical analog supremacy of unconventional information-processing hardware. Considering recent advances in the development of such hardware, forms of analog supremacy will likely become relevant in the future of cyber-physical systems and information technology.
... It is true that scientists sometimes propose a rigorous definition of a hitherto informal concept; but whether the proposed definition correctly and completely translates common intuition remains an open question, belonging to philosophy rather than science. So, when Gentzen calls his formal system of logical deduction a "natural deduction," when Turing claims that his machines can compute everything and not only what is ordinarily called "computable," both are developing technical concepts based on the prior analysis of informal concepts [3,4,12]: a philosophical approach, not a scientific one. Scientific theories of proof and computability can then be developed on this basis, but the philosophical debate will continue on the starting point, namely the definition of what it means to deduce naturally and what it means to compute. ...
Article
The concept of a program, unlike that of a file, is an informal one, giving rise to inaccurate or even false assumptions. In this article, we will contest two of them. First, using historical examples, we will show that there are programs that are not files. Then, more surprisingly, we will argue that any file can be seen as a program, since all criteria characterizing programs hold true for files. In particular, we will conclude that there is no technical reason to distinguish between file/program and program/interpreter pairs.
... Traditionally, abstract reasoning in humans has been considered a symbolic computation, a type of digital processing distinctly different from the analog nature of computation in ANNs [29,30]. Recently, however, studies have shown that complex computations once attributed solely to symbolic processing can be accomplished by extensively trained ANNs [31,32]. ...
Article
Full-text available
The nature of abstract reasoning is a matter of debate. Modern artificial neural network (ANN) models, like large language models, demonstrate impressive success when tested on abstract reasoning problems. However, it has been argued that their success reflects some form of memorization of similar problems (data contamination) rather than a general-purpose abstract reasoning capability. This concern is supported by evidence of brittleness, and the requirement of extensive training. In our study, we explored whether abstract reasoning can be achieved using the toolbox of ANNs, without prior training. Specifically, we studied an ANN model in which the weights of a naive network are optimized during the solution of the problem, using the problem data itself, rather than any prior knowledge. We tested this modeling approach on visual reasoning problems and found that it performs relatively well. Crucially, this success does not rely on memorization of similar problems. We further suggest an explanation of how it works. Finally, as problem solving is performed by changing the ANN weights, we explored the connection between problem solving and the accumulation of knowledge in the ANNs.
... Computing has a long history that predates the first programmable digital computer, the ENIAC in 1946 [11,29]. The first quantum revolution was brought by quantum mechanics, which transformed computation with the invention of the transistor and the integrated circuit unit (ICU) [29,31]. Today we are in the era of a second quantum revolution, which combines quantum mechanics with computer science and information theory [1,29]. ...
... The origins of computation are associated with the development of abstract formal languages for machines, with the general principle of computation developed by Alan Turing (1937). Emil Post (1943) contributed a mathematical production system, and Noam Chomsky created generative grammars (1957). ...
Conference Paper
Full-text available
The research presented in this article is part of the 'Computation for Architecture in Python' Project at LAMO-PROURB, FAU-UFRJ. This article describes the development of a tailored methodology for an advanced Python II course, building on an introductory course on the fundamentals of visual and textual programming applied to design. The course, being project-oriented, focuses on adapting computational processes-specifically generative systems-to design. The research explores computational techniques such as Cellular Automata, L-systems, Genetic Algorithms, and Shape Grammars. While these techniques are not new, they often rely on third-party plug-ins. The authors develop alternatives using self-written programming, employing 'string grammars' with conscious computational processes to create systems through combinatorial optimization with discrete and qualified components. The results highlight advancements in design cognition, elucidating methods, inputs, and outputs, an alternative to common artificial intelligence practices that often rely on superficial statistical methods.
... In 1936, Alan Turing showed that the Halting problem is undecidable ([42]). One common strategy to prove that other problems are also undecidable is to encode the Halting problem in them, and this is precisely what is done in [3], and also here. ...
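The diagonal argument behind this undecidability result can be compressed into a few lines. The Python sketch below is a standard informal rendering for illustration, not code from the cited works: it assumes, for contradiction, a hypothetical total decider `halts(program, input)` and builds a program that contradicts it on itself.

```python
# Assume, for contradiction, that a total decider `halts` exists.
# `halts(f, x)` would return True iff calling f(x) eventually terminates.
def halts(f, x):
    raise NotImplementedError("no such total decider can exist")

def paradox(f):
    # Loop forever exactly when the decider claims that f(f) halts.
    if halts(f, f):
        while True:
            pass
    return "done"

# Feeding `paradox` to itself yields the contradiction:
# - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# - if it is False, then paradox(paradox) returns immediately.
# Either way the decider is wrong, so it cannot exist (Turing, 1936).
```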
Preprint
Full-text available
The problem of determining the existence of a spectral gap in a lattice quantum spin system was previously shown to be undecidable for one [J. Bausch et al., "Undecidability of the spectral gap in one dimension", Physical Review X 10 (2020)] or more dimensions [T. S. Cubitt et al., "Undecidability of the spectral gap", Nature 528 (2015)]. In these works, families of nearest-neighbor interactions are constructed whose spectral gap depends on the outcome of a Turing machine Halting problem, therefore making it impossible for an algorithm to predict its existence. While these models are translationally invariant, they are not invariant under the other symmetries of the lattice, a property which is commonly found in physically relevant cases, posing the question of whether the spectral gap is still an undecidable problem for Hamiltonians with stronger symmetry constraints. We give a positive answer to this question, in the case of models with 4-body (plaquette) interactions on the square lattice satisfying rotation, but not reflection, symmetry: rotational symmetry is not enough to make the problem decidable.
... Here, we will provide a brief functional description. We will use the word algorithm to refer to a Turing machine [34]. The reader may think of a Turing machine as a computer program written in any standard programming language which receives some finite information called input and then proceeds to sequentially execute a finite set of instructions. ...
Article
Using tools from computable analysis, we develop a notion of effectiveness for general dynamical systems as those group actions on arbitrary spaces that contain a computable representative in their topological conjugacy class. Most natural systems one can think of are effective in this sense, including some group rotations, affine actions on the torus and finitely presented algebraic actions. We show that for finitely generated and recursively presented groups, every effective dynamical system is the topological factor of a computable action on an effectively closed subset of the Cantor space. We then apply this result to extend the simulation results available in the literature beyond zero-dimensional spaces. In particular, we show that for a large class of groups, many of these natural actions are topological factors of subshifts of finite type.
... However, it remains questionable whether we will ever succeed in fully penetrating World 3, for the aforementioned reasons of the observation problem in quantum physics, as well as the paradoxical fact that we can only use the finite measuring instruments available to us in World 1 to explore infinite states in World 3. For example, Alan Turing concluded, with his famous halting problem, which can be subsumed under the decision problem, that there are certain problems for which there can be no general solution: for instance, programming a higher-level, omniscient algorithm able to determine whether any algorithm stops for any input or continues to run indefinitely (Turing, 1936); if we like, an algorithm able to explain the realm of World 3, from which all our ideas, thoughts, and consequently our existence as we know it may emerge. Paradoxically, it seems that precisely for this reason the conscious is the unconscious, which we understand through our deep connection within and from ourselves, and it is precisely this that makes us volitional individuals, whose task is simply to live in harmony, even if only for a few brief moments throughout the day. ...
Article
Full-text available
The aim of this work is to show more consciously and tangibly how and under which conditions we can optimally develop our inner potential in today's world. The work defines a theoretical mathematical model that is intended to show the dynamics of individuation a priori on the individual and collective level, and describes the associated variables of this mathematical model in detail. The work gives insights into how and why information in today's world is transmitted through the construction of memory traces and emotionally charged stimuli. Moreover, it introduces ideas about the role of quantum physical processes in the emergence of consciousness and the intrinsic mobilization of energy. Three hypothetical courses are described: under which conditions an individual grows, under which an individual regresses over time, and what a natural course looks like. The work concludes with an ethical dilemma about what it means to be and remain a human being who can control his or her animal instincts, and what we should normatively align our human progress with throughout our lives. Furthermore, the work explores implications based on which of the two options artificial intelligences choose in the ethical dilemma. The work is intended to derive insights and new ideas for change on the individual and collective level, so as to be able to progress productively and responsibly. The work is a conglomerate of well-founded research, observations, social analyses and speculations, made with the best of conscience, and should nevertheless always be viewed with a critical eye and healthy doubt, true to the motto: "sapere aude".
... Ultimately, any assertion of computational universality relies on the Church-Turing thesis, that all computational mechanisms are expressible by a Turing machine [Sipser, 2013, Moore and Mertens, 2011]. The concept of a universal Turing machine (a Turing machine U that can simulate the execution of any other Turing machine T on any input) was developed by Alan Turing to solve the Entscheidungsproblem [Turing, 1937]. Proving computational universality of a system therefore reduces to establishing that the system can simulate the operation of a universal Turing machine. ...
Preprint
Full-text available
We show that autoregressive decoding of a transformer-based language model can realize universal computation, without external intervention or modification of the model's weights. Establishing this result requires understanding how a language model can process arbitrarily long inputs using a bounded context. For this purpose, we consider a generalization of autoregressive decoding where, given a long input, emitted tokens are appended to the end of the sequence as the context window advances. We first show that the resulting system corresponds to a classical model of computation, a Lag system, that has long been known to be computationally universal. By leveraging a new proof, we show that a universal Turing machine can be simulated by a Lag system with 2027 production rules. We then investigate whether an existing large language model can simulate the behaviour of such a universal Lag system. We give an affirmative answer by showing that a single system-prompt can be developed for gemini-1.5-pro-001 that drives the model, under deterministic (greedy) decoding, to correctly apply each of the 2027 production rules. We conclude that, by the Church-Turing thesis, prompted gemini-1.5-pro-001 with extended autoregressive (greedy) decoding is a general purpose computer.
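For readers unfamiliar with such string-rewriting models of computation, the following Python sketch shows a classical 2-tag system, a close relative of the Lag systems discussed above; it is illustrative only, is not the 2027-rule system from the paper, and the toy rules are invented.

```python
def run_2tag(productions, word, max_steps=100):
    """Run a 2-tag system: at each step, look up the production for the
    first symbol, append it to the end of the word, and delete the first
    two symbols. Halts when the word is shorter than 2 symbols or no
    production applies."""
    for _ in range(max_steps):
        if len(word) < 2 or word[0] not in productions:
            return word
        word = word[2:] + productions[word[0]]
    return word  # step budget exhausted (halting is undecidable in general)

# A toy rule set; 2-tag systems of this kind are known to be Turing
# universal for suitable choices of productions (Cocke and Minsky, 1964).
rules = {"a": "bc", "b": "a", "c": "aaa"}
print(run_2tag(rules, "aaa"))
```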
... 2 Literature review. 2.1 Brief history and trend of AI. AI describes the ability of computer systems to acquire and synthesize data in order to adapt, self-correct, and perform complex tasks in a human-like mode (Popenici and Kerr, 2017). Indeed, AI has metamorphosed from Turing's early idea of integrating intelligent thinking and reasoning into machines (Turing, 1936; Turing, 2009), along with the advancement of the technologies through which it can be accessed. ...
Article
Full-text available
Background Generative artificial intelligence (Gen-AI) has emerged as a transformative tool in research and education. However, there is a mixed perception about its use. This study assessed the use, perception, prospects, and challenges of Gen-AI use in higher education. Methods This is a prospective, cross-sectional survey of university students in the United Kingdom (UK), distributed online between January and April 2024. The demography of participants and their perception of Gen-AI and other AI tools were collected and statistically analyzed to assess the difference in perception between various subgroups. Results A total of 136 students responded to the survey, of which 59% (80) were male. The majority were aware of Gen-AI and other AI use in academia (61%), with 52% having personal experience of the tools. Grammar correction and idea generation were the two most common tasks of use, with 37% being regular users. Fifty-six percent of respondents agreed that AI gives an academic edge, with 40% holding a positive overall perception about its use in academia. Comparatively, there was a statistically significant difference in overall perception between different age ranges (I² = 27.39; p = 0.002) and levels of education (I² = 20.07; p < 0.001). Also, 83% of students believe AI use will increase in academia, with over half agreeing it should be integrated into learning. Plagiarism (33%), privacy issues (14%), and a lack of clarity from the university (13%) remain the top concerns regarding the use of Gen-AI and other AI tools in academia. Conclusion Gen-AI and other AI tools are being used and their use will continue to grow in higher education. While current use is challenging, mainly due to fear of plagiarism and a lack of clarity from the university, most users believe AI should be integrated into the university curriculum.
... In a previous study [1,2], we introduced a bio-inspired mechanism capable of sorting, copying, and reading; that is, it can perform basic information processing tasks. The existence of finite-state machines [16] and Turing machines [4,5] in biological and chemical contexts has been discussed in the literature [9][10][11][12]. In this study, we focus on this aspect of our mechanism, aiming to explicitly demonstrate its computational power. ...
Preprint
Full-text available
This paper presents the implementation of a self-replicating finite-state machine (FSM) and a self-replicating Turing Machine (TM) using bio-inspired mechanisms. Building on previous work that introduced self-replicating structures capable of sorting, copying, and reading information, this study demonstrates the computational power of these mechanisms by explicitly constructing a functioning FSM and TM. This study demonstrates the universality of the system by emulating the UTM(5,5) of Neary and Woods.
... 2) Turing Completeness: In a general sense, an environment or programming language is deemed Turing-complete if it is computationally equivalent to a Turing machine [180]. This means that a Turing-complete smart contract language or environment can execute any possible calculation within finite resources. ...
Preprint
Full-text available
Bitcoin's global success has led to the rise of blockchain, but many systems labeled as "blockchain" deviate from its core principles, adding complexity to the ecosystem. This survey addresses the need for a comprehensive review and taxonomy to clarify the differences between blockchain and blockchain-like systems. We propose a reference model with four key layers: data, consensus, execution, and application, and introduce a new taxonomy for better classification. Through a qualitative and quantitative analysis of 44 DLT solutions and 26 consensus mechanisms, we highlight key challenges and offer research directions in the field.
... At the same time, the focus of city design shifted from simply securing enclaves of activities to an open platform with a diverse mix of functions ranging from industries to housing. Alan Turing's (1936) conceptualisation of computation and algorithms marked the start of the Digital Revolution, paving the way from printing to digital content dissemination. Since the introduction of the first desktop computers in the 1960s, the widespread use of PCs in the 1980s, the rise of mobile devices in the 1990s, and the introduction of smartphones, wifi, and the internet in the 2000s, a framework of "free content creation and distribution" has spread to virtually every corner of the globe. ...
Conference Paper
Full-text available
Looking across the data-infused societies, this paper rethinks how data is being collected, analysed, and integrated into contemporary modes of public and private governance and life. Through a comparative study, the authors interrogate their understanding of what constitutes a ‘common’ based on different cultural and historical contexts and discuss the challenges in reconciling digital literacy with public purposes across geopolitical landscapes. Paralleling three cultural and historical contexts, this paper is a self-critique on how to design interventions that may reproduce, amplify, or diminish structural violence in respective public systems. In such interrogation, what becomes apparent is the increasing mediation of our day-to-day urban life and the reproduction of conflicting values across physical to digital domains, especially in how data is being extracted and its downstream uses. By questioning how social innovation can be reorganised and developing a literacy around the ‘digital’, this paper aims at opening a cross-cultural dialogue and pondering means toward inclusive and democratic practices that might show alternatives to a primarily Eurocentric episteme, and how this may be translated into urban spaces.
... Information is considered one of the central biological concepts of the 20th century (Maynard Smith 2000; Jablonka 2002; Gitt et al. 2013), and one that is critical to the adaptive process (Plotkin 1997; Dall et al. 2005; Schmidt et al. 2010). Genetic cybernetics inspired Turing's, von Neumann's, and Wiener's development of computer science (Turing 1936; von Neumann 1950; von Neumann et al. 1987; Wiener 1948). In his 1858 essay, A.R. Wallace referred to the evolutionary principle "as exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident." ...
... 14 Cantor (1967), Russell (1903), Gödel (1992), Turing (1937) 15 Wittgenstein (1953) ...
Preprint
Full-text available
Computational modeling is a critical tool for understanding consciousness, but is it enough on its own? This paper discusses the necessity for an ontological basis of consciousness, and introduces a formal framework for grounding computational descriptions into an ontological substrate. Utilizing this technique, a method is demonstrated for estimating the difference in qualitative experience between two systems. This framework has wide applicability to computational theories of consciousness.
... p_S and p_R (§5) are semimeasures, because ∑_{x_1,...,x_n} p_S(x_1, x_2, ..., x_n) < 1. The fact that the sum is less than 1 is due to the halting problem of UTMs [Turing, 1936], which means that there are some programs in the sum that never stop running. Definition C.4 (Lower semicomputability). ...
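The inequality can be made tangible with a toy computation. In the Python sketch below, a made-up miniature machine assigns weight 2^(-|q|) to each program q; summing the weights of programs that halt with an output of a fixed length gives strictly less than 1, because non-halting programs carry away the remaining mass. This is an invented illustration, not the semimeasure from the cited paper.

```python
from itertools import product

def toy_run(program):
    """A made-up toy machine: a program '0' + x halts and outputs x;
    programs beginning with '1' never halt (modelled here as None)."""
    return program[1:] if program.startswith("0") else None

def semimeasure(x, max_len=12):
    """p(x): total weight 2^(-|q|) of programs q that halt with output x."""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            q = "".join(bits)
            if toy_run(q) == x:
                total += 2.0 ** (-length)
    return total

n = 3
mass = sum(semimeasure("".join(x)) for x in product("01", repeat=n))
print(mass)  # 0.5 < 1: half the program weight is lost to non-halting programs
```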
Preprint
LLMs show remarkable emergent abilities, such as inferring concepts from presumably out-of-distribution prompts, known as in-context learning. Though this success is often attributed to the Transformer architecture, our systematic understanding is limited. In complex real-world data sets, even defining what is out-of-distribution is not obvious. To better understand the OOD behaviour of autoregressive LLMs, we focus on formal languages, which are defined by the intersection of rules. We define a new scenario of OOD compositional generalization, termed rule extrapolation. Rule extrapolation describes OOD scenarios, where the prompt violates at least one rule. We evaluate rule extrapolation in formal languages with varying complexity in linear and recurrent architectures, the Transformer, and state space models to understand the architectures' influence on rule extrapolation. We also lay the first stones of a normative theory of rule extrapolation, inspired by the Solomonoff prior in algorithmic information theory.
Article
This article argues that there is no meaningful concept of generations for media education, and that it is also not meaningful to define a media-educational concept of generations. The thesis is examined with a receptive and a constructive method. Since the thesis could not be falsified, it was shown that the conjecture that it is not meaningful to define a concept of generations for media education can be retained.
Chapter
Human development has been a continuing attempt to use new materials in ever more sophisticated ways to enhance the quality of human life. For millennia, we have always made things by taking a main material and then mixing it with small alloying additions to achieve the final required properties. But recently, there has been a revolution as we have discovered how to make much more complex mixtures, providing a bewildering number, literally trillions and trillions, of completely new materials, requiring entirely new scientific theories, and massively extending our ability to make useful products. These new materials are called multicomponent or high-entropy materials. This is the first textbook on the fundamentals of these new materials. It concentrates on the main new concepts and theories that have been developed and provides a summary of the state of play for researchers as well as for students and newcomers entering the field. It is written by the scientist who first discovered multicomponent high-entropy materials. It includes contextual chapters on the history and future potential for developing humankind as driven by the discovery of new materials, and core chapters on methods for discovering and manufacturing multicomponent high-entropy materials, their underlying thermodynamic and atomic and electronic structures, their physical, mechanical and chemical properties, and their potential applications.
Article
Full-text available
This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new problem of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in terms of empirical observations of their outputs and network behavior. Sections 4–7 then show that for every “aligned” function that might appear to be confirmed by empirical observation, there is always an infinitely larger number of “misaligned”, arbitrarily time-limited functions equally consistent with the same data. Section 8 shows that, from an empirical perspective, we can thus never reliably infer that an LLM or subcomponent of one has learned any particular function at all before any of an uncountably large number of unpredictable future conditions obtain. Finally, Sect. 9 concludes that the probability of LLM “misalignment” is—at every point in time, given any arbitrarily large body of empirical evidence—always vastly greater than the probability of “alignment.”
Chapter
Full-text available
We study the classical problem of verifying programs with respect to formal specifications given in the linear temporal logic (LTL). We first present novel sound and complete witnesses for LTL verification over imperative programs. Our witnesses are applicable to both verification (proving) and refutation (finding bugs) settings. We then consider LTL formulas in which atomic propositions can be polynomial constraints and turn our focus to polynomial arithmetic programs, i.e. programs in which every assignment and guard consists only of polynomial expressions. For this setting, we provide an efficient algorithm to automatically synthesize such LTL witnesses. Our synthesis procedure is both sound and semi-complete. Finally, we present experimental results demonstrating the effectiveness of our approach and that it can handle programs which were beyond the reach of previous state-of-the-art tools.
Article
The article discusses fundamental knowledge about systems theory and systems analysis. The methodological concept of the systems approach is considered, and a brief overview is given of the stages of development of management systems in organizational systems. Fire protection units are considered as a specific object of management whose main task is the direct response to emerging incidents. The decisive role of information management under time constraints and an intensive flow of data during emergency response is emphasized. The article focuses on justifying the importance of the selectivity of information: an excess of information flows can lead to information overload, affecting decision-making in crisis management. Keywords: systems analysis, management, fire protection, decision-making.
Article
Full-text available
This tutorial covers physical reservoir computing from a computer science perspective. It first defines what it means for a physical system to compute, rather than merely evolve under the laws of physics. It describes the underlying computational model, the Echo State Network (ESN), and also some variants designed to make physical implementation easier. It explains why the ESN model is particularly suitable for direct physical implementation. It then discusses the issues around choosing a suitable material substrate, and interfacing the inputs and outputs. It describes how to characterise a physical reservoir in terms of benchmark tasks, and task-independent measures. It covers optimising configuration parameters, exploring the space of potential configurations, and simulating the physical reservoir. It ends with a look at the future of physical reservoir computing as devices get more powerful, and are integrated into larger systems.
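Since the tutorial is organised around the Echo State Network, a minimal numerical sketch may help fix the idea: a fixed random reservoir is driven by the input, and only a linear readout is trained. The sizes, scaling, and toy task below are invented for illustration and are not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state heuristic)

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # ESN state update
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```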
Preprint
Full-text available
The concept of computation supercriticality represents a transformative threshold, akin to the critical mass in nuclear fission. It occurs when speed, information density, and architectural optimization converge to propel computational systems beyond their traditional roles, enabling emergent behaviors and exploration of previously inaccessible dimensions and realities. At this supercritical point, systems transition from predictable tools into frameworks for interacting with the ineffable—simulating higher-dimensional spaces, discovering alternate temporal flows, and even creating new forms of intelligence. This paper explores the profound implications of computation supercriticality for science, philosophy, and society. It examines how such systems might redefine our understanding of reality, uncover unknown physical laws, and raise ethical questions about control, autonomy, and the creation of novel intelligences. By drawing parallels to physical supercriticality, we present a speculative yet grounded vision of how computation might serve as a universal language and bridge to the unknown. The goal is to inspire interdisciplinary research into this paradigm-shifting concept and its potential to redefine humanity's relationship with knowledge, existence, and the cosmos. Keywords: computation supercriticality, emergent behaviors, higher-dimensional spaces, alternate realities, information density, computational architecture, dynamic simulations, artificial intelligence, multiverse modeling, computational epistemology, ethical AI, quantum systems, computational metaphysics, emergent intelligence, temporal dynamics, interdisciplinary research.
Book
Full-text available
A Formal Proof that P = NP Using the PMLL Algorithm. This book presents a formal proof that P = NP, using the PMLL algorithm to solve the SAT problem in polynomial time. The PMLL algorithm employs a novel combination of logical refinements and memory persistence, demonstrating that NP-complete problems such as SAT can be solved efficiently, without the need for exponential time complexity.
Article
Full-text available
This paper aims to analyze the introduction of Artificial Intelligence (AI) into the Brazilian judicial system and its potential to speed up the resolution of judicial proceedings. The contemporary context demands solutions that improve the efficiency and effectiveness of the Judiciary, a sector that is frequently overloaded and carries a high number of pending lawsuits. The study first addresses the fundamental concepts of Artificial Intelligence and its applications in various areas, with a focus on justice. The research highlights examples of AI tools already in use, such as automation systems for drafting decisions, case-law analysis, and case triage, which aim to optimize the activities of judges and court staff. The research then investigates the benefits of AI in the Judiciary, such as reducing case processing times, decreasing the manual workload, and the possibility of delivering more precise and better-grounded decisions. The use of algorithms for data analysis can, for example, help identify patterns and predict outcomes, making the decision-making process more agile. However, the paper also addresses the challenges and risks associated with implementing AI, including ethical questions, the need for human oversight, and the risks of algorithmic discrimination. The study emphasizes the importance of guaranteeing the transparency and accountability of AI systems so that citizens' rights are preserved. Finally, the research concludes that the introduction of Artificial Intelligence into the Judiciary can represent a significant advance in the speed of judicial case resolution, provided it is accompanied by adequate regulation and a responsible approach. Training and the promotion of a broad debate on the implications of the technology, involving jurists, technologists, and civil society, are recommended to ensure that technological innovation effectively benefits the justice system.
Book
Full-text available
A Formal Proof that P = NP Using the PMLL Algorithm Abstract This book presents a formal proof that P = NP, using the PMLL algorithm to solve the SAT problem in polynomial time. The PMLL algorithm employs a novel combination of logical refinements and memory persistence, demonstrating that NP-complete problems such as SAT can be solved efficiently, without the need for exponential time complexity.
Article
Full-text available
We investigate conditions under which a semicomputable set is computable. In particular, we study topological pairs (A,B) which have a computable type, which means that in any computable topological space, a semicomputable set S is computable if there exists a semicomputable set T such that (S,T) is homeomorphic to (A,B). It is known that (G,E) has a computable type if G is a topological graph and E is the set of all its endpoints. Furthermore, the same holds if G is a so-called chainable graph. We generalize the notion of a chainable graph and prove that the same result holds for a larger class of spaces.
Article
Full-text available
Could we take Wittgenstein's philosophy as antagonistic to, or compatible with, AI? Interpretations go in opposite directions. In this paper, I stand with the compatibilists and claim that Wittgenstein's discussion of contexts has deep connections with the early stages of AI at different levels. Furthermore, his remarks on context aid in the comprehension of the recent advancement of machine-learning-based AI, although they embed a warning against the oversimplified association of artificial and human intelligence.
Article
Full-text available
Abstract In this paper, I aim to recover the relevance of Simondon's philosophical proposal for analyzing computational technologies, highlighting the continuing validity of ideas formulated in a dissimilar technological context, while also trying to outline some divergences arising from the appearance of radical novelties in the technology itself. In this sense, the idea is to replicate Simondon's method, or gesture, for understanding technical objects: listening to what they have to say, establishing a dialogue with them, and attempting to construct a conceptual outlook that is faithful to their singularity. In the case of computers and computation, we may no longer be facing (merely) a new technical object; rather, their peculiarities make them something far more multifaceted and complex. Another objective of this inquiry will be to evaluate this possibility.
Article
Full-text available
The widespread popularity of ChatGPT and other AI chatbots has sparked debate within the scientific community, particularly regarding their impact on academic integrity among students. While several studies have examined AI's role in education, a significant gap remains concerning how AI chatbot usage affects students' perceptions of academic integrity. This study aims to address this gap through rigorous quantitative techniques to explore the dynamics of student interactions with AI chatbots and assess whether this engagement diminishes academic integrity in higher education. Using a non-experimental design, the research investigates the causal relationship between AI chatbot usage and academic integrity, focusing on eight latent variables identified in the literature. A stratified sampling technique was employed to collect a representative sample of 594 participants via a 5-point Likert scale survey from four Southern Asian countries. The dataset underwent extensive statistical analysis using Structural Equation Modeling (SEM) techniques. The findings establish significant links between motivations for using AI chatbots and a decline in academic integrity. The study identifies a behavioral link between academic integrity and pedagogical limitations, highlighting traditional classroom-based pedagogy as the most impactful factor influencing students' motivation to engage with AI chatbots. This research not only quantitatively addresses ethical concerns related to AI in academia but also offers insights into user behavior by assigning distinct weights to post-usage behavioral factors, differentiating it from earlier studies that treated these factors equally.
Chapter
The distinction between the notions of “practice” [Praxis] and “technique” [Technik] plays a distinctive role in the maturation of Wittgenstein’s philosophy 1937–1945, the time he composed Philosophical Investigations and his later remarks on the foundations of logic and mathematics. It allows him to sophisticate his idea that meaning arises “in the practice of language”, emphasizing that it is the fact that there are a variety of “techniques” for embedding symbols in forms of life and within practices that shapes our concept of rule-following. His “anthropological” turn, radicalized in his remarks on the “beginnings” of mathematics and logic, offers a deepening response to Spengler’s and Frazer’s philosophies of culture, and allows for his mature responses to Frege, Russell, Ramsey, Hilbert, and Turing, responses that involve self-criticism. Kripke linked Wittgenstein’s idea of a “practice” to social consensus and the extrusion of “privacy”; others have contrasted Praxis with theory or joined it to forms of conventionalism, incommensurability, and “primitive” normativity. But a “practice” always already involves practitioners in disputes, understandings, and alternative routes of proceeding, the mastery of differing techniques. This is of central importance for the mature Wittgenstein.
Article
Full-text available
In the article, we aim to understand the responses of living organisms, exemplified by mycelium, to external stimuli through the lens of a Turing machine with an oracle (oTM). To facilitate our exploration, we show that a variant of an oTM is a cellular automaton with an oracle, which aptly captures the intricate behaviours observed in organisms such as fungi, shedding light on their dynamic interactions with their environment. This interaction reveals forms of reflection that can be interpreted as a minimum volume of consciousness. Thus, in our study, we interpret consciousness as a mathematical phenomenon when an arithmetic function is arbitrarily modified. We call these modifications the hybridization of behaviour. oTMs are the mathematical language of this hybridization.
Article
Full-text available
The development of biologically inspired computational models has been the focus of study ever since the artificial neuron was introduced by McCulloch and Pitts in 1943. However, a scrutiny of the literature reveals that most attempts to replicate the highly efficient and complex biological visual system have been futile or have met with limited success. The recent state-of-the-art computer vision models, such as pre-trained deep neural networks and vision transformers, may not be biologically inspired per se. Nevertheless, certain aspects of biological vision are still found embedded, knowingly or unknowingly, in the architecture and functioning of these models. This paper explores several principles related to visual neuroscience and the biological visual pathway that resonate, in some manner, in the architectural design and functioning of contemporary computer vision models. The findings of this survey can provide useful insights for building futuristic bio-inspired computer vision models. The survey is conducted from a historical perspective, tracing the biological connections of computer vision models starting with the basic artificial neuron to modern technologies such as deep convolutional neural networks (CNN) and spiking neural networks (SNN). One spotlight of the survey is a discussion on biologically plausible neural networks and bio-inspired unsupervised learning mechanisms adapted for computer vision tasks in recent times.
Article
Full-text available
Quantum computers use the properties of quantum physics to perform information storage and processing operations. The operation of these computers involves concepts such as entanglement and superposition, which endow them with a great processing power that even surpasses that of the most powerful current supercomputers, while consuming significantly lower amounts of energy. The different studies analyzed in this review article suggest that quantum computing will have a deep impact in areas such as finance, logistics, transportation, space and automotive technology, materials science, energy, pharmaceutical and healthcare industry, cybersecurity, and agriculture. In digital agriculture, several applications that could be executed more efficiently in quantum computers for data processing and understanding of biological processes were identified and exemplified. These applications are grouped here into the following four areas: bioinformatics, remote sensing, climate modeling, and smart farming. This article also explores the strategic importance of mastering quantum computing, highlights some advantages in relation to classical computing, and presents a mapping of the services already available, enabling institutions to undertake strategic planning for the incorporation of quantum computing into their development processes. Finally, the challenges for the implementation of quantum computing are highlighted, along with some ongoing initiatives aimed at furthering research at the forefront of knowledge in this area applied to digital agriculture.
Article
Full-text available
The use of computational tools to solve real problems in different industries has increased significantly. Software is used to solve problems of air, sea, ground, pipeline, and aerospace transportation planning; problems related to sports science, chemistry, medicine, nursing, mechatronics, and robotics; business problems related to minimizing costs and maximizing profits; problems of managing small, medium, and large companies; and agro-industrial and environmental problems, among many others. Such real problems are solved using scientific computing, bio-inspired computing, and computational intelligence. This paper aims to present a review of the history of computational intelligence, bio-inspired computing, and scientific computing.
Presentation
Full-text available
融智学原创文集(Original Collection on Smart-System Studied) Preface Rongzhi Xue (Integrative Intelligence Science) is a novel discipline that studies the principles, methods, and examples of "reasonable division of labor, complementary advantages, high collaboration, and optimized interaction" between natural persons and computers. The formation of the Rongzhi system has undergone three stages of development (accompanied by corresponding theoretical thinking and social practice): The first stage is from 1976 (the conception of "integrating the essence of human knowledge") to 1992 (the formation of the concept of "wisdom integration" or "integrated wisdom"), marked by the proposal of the "basic law of information (hypothesis)", namely the rule of "synonymous juxtaposition and corresponding transformation". From 1980 to 1981, I introduced this concept to scholars such as Director Liu Hanyun of the Modern Laboratory of the Physics Department of Guizhou University, Lei Zhenxiao, the founder of Talent Studies, and Zhang Liang, the editor-in-chief of "Natural Information". In 1987, I published "The Role of Legal Advisors in Enterprise Bidding and Tendering" (an excellent graduation thesis). In the same year, I wrote "The Complete Set of Bidding Documents for the Talent Research Project of Guizhou Province's Seventh Five-Year Plan for Social Sciences". In 1989, relying on Shenzhen's "Library Automation Integrated System", I trialed an "all-around" consulting and training service. In 1991, I publicly presented "Exploring the Scientific System of Psychology" (a paper presented at the annual meeting of the Basic Theory Committee of the Chinese Psychological Society), "Outline of Lifelong Education", and "Exam Psychology - Principles of Learning, Reviewing, Testing, and Coping" (outline). In 1992, I publicly presented "The Combination of Socialist Public Ownership and the Company (Legal Person) System is the Breakthrough for the Construction of China's Economic Law" (a paper presented at the 8th Economic Law Academic Seminar among 13 provinces, municipalities, and autonomous regions). The second stage is from 1993 (the publication of the invention patent for "a smart communication parent-child machine") to 1999 (the completion of the conception of "a method and product for processing knowledge and information data"), marked by the establishment of the "Enterprise Intellectual Property Strategy" column. During this period, I personally invented and guided the invention of multiple new technologies that won multiple gold, silver, and bronze medals at the "China Patent Technology Exposition" and the "International Invention Exposition" (1994-1997). At the same time, I also attempted the planning and organization of a series of "Rongzhi and Financing" projects. The third stage is from 2000 (the publication of the invention patent for "a method and product for processing knowledge and information data") to 2005 (the centralized release of the "Original Collection of Rongzhi Xue"), marked by the formation of (network and computer-aided) "Rongzhi Xue" (trilogy), namely: theoretical Rongzhi Xue focusing on basic research - emphasizing the unified theoretical framework of "semantics, information, and intelligence"; engineering Rongzhi Xue focusing on indirect computing - emphasizing the indirect formalization of "knowledge and information data processing"; and applied Rongzhi Xue focusing on indirect financing - emphasizing the integrated management of "industry, academia, research, application, and computation". 
"Character-based and the foundation of Chinese information processing, cooperative productive teaching methods, and the integration of Rongzhi and financing" are three typical Rongzhi examples involving multiple industrial chains and clusters. The "Original Collection of Rongzhi Xue" is mainly compiled for readers interested in the two research directions and practical application fields of "how networks and computer systems assist users (natural persons)" and "how Rongzhi and financing complement each other". Since Rongzhi Xue is an emerging discipline intersecting with multiple disciplines such as basic linguistics, computer science, cognitive science, machine translation, computational linguistics, artificial intelligence, general informatics, knowledge management, knowledge economics, network and computer-aided teaching methods, and intellectual property law, it is also suitable for scholars and general readers in these fields. Professor Xu Tongqiang, the head of the Linguistics Advisory Group of the "15th National Outline for Humanities and Social Science Research in Ordinary Higher Education Institutions" of the Department of Chinese Language and Literature at Peking University, wrote in a letter dated September 11, 2001: "The proposal of the concepts of meaning, text, object, and intention is valuable, but how to elaborate on them? What core should be grasped? These need to be deeply deliberated. According to my understanding of the relationship between these four concepts, 'meaning' should be the structural mechanism of objectively existing things, or objective laws, whose operation does not change with human subjective will; 'intention' is the subjective understanding or interpretation of 'meaning', while 'text' and 'object' are just the external manifestations of this understanding. It is necessary to distinguish between 'meaning' and 'intention', and modern linguistics has also realized the necessity of this distinction, with its specific manifestation being the emphasis on functional research. How to theoretically elaborate on this distinction still requires the efforts of the academic community." Professor Yi Mianzhu from the Computational Linguistics Research Room of the PLA Luoyang Foreign Language Institute wrote in a letter dated September 25, 2001: "The conceptual system of collaborative intelligent agents extracted from the new paradigm of Rongzhi Xue is original and is bound to trigger a revolution in the processing of natural language semantic information." Professor Yuan Chunfa from the Tsinghua University State Key Laboratory of Intelligent Technology and Systems wrote in a letter dated January 3, 2003: "Thank you for your lecture at Tsinghua. Due to time constraints, we couldn't have a long talk. I haven't gained a clear understanding of the whole picture of your theory from just a few hours of discussion and exchange. From our conversation, I recognized that the 13 tables in your design scheme for the collaborative intelligent computing language database are very innovative. If these 13 tables for Chinese are established, the ambiguities at various levels in Chinese analysis will be relatively easy to resolve. This is a creative work. But at the same time, I also think that the construction of these 13 tables is a task that consumes a lot of manpower and resources. Because the establishment of a treebank for Chinese alone is a huge task that has not yet been completed, and it is only a part of your database. 
Therefore, I suggest starting this task only after sufficient deliberation and with adequate human and financial resources in place."

Professor Lu Chuan of the Institute of Applied Linguistics of the National Language Commission of the Ministry of Education (director of the first Computational Linguistics Professional Committee of the Chinese Information Processing Society) wrote: "The construction of these 13 tables fully demonstrates that you (the author) are able to stand at a high starting point and skillfully integrate the strengths of the various existing schools." ...

At present, the "Original Collection of Rongzhi Xue" mainly gathers articles that the author published in academic journals and at academic conferences during 2000-2005. Although the author's present understanding (in 2006) is much deeper and more systematic than in any of the past three periods (1976-1987, 1988-1992; 1993-1996, 1997-2009; 2000-2002, 2003-2005) (note: this will be gradually supplemented and updated in revised versions), the original collection has a unique academic-exchange value. For example, it preserves original achievements in science and technology and their applications at the intersections with basic linguistics, computer science, cognitive science, machine translation, computational linguistics, artificial intelligence, general informatics, knowledge management, knowledge economics, network and computer-aided teaching methods, and international intellectual property law. In particular, it records the various attempts or explorations of an emerging discipline in handling or expressing its relationships with numerous related disciplines, which can serve as a reference, an inspiration, or a warning for researchers engaged in interdisciplinary knowledge innovation. At the same time, it is also worth preserving the basic appearance of the original achievements at the time of their creation, something that cannot be seen in graduate textbooks.

Judging from my direct exchanges, experiences, and insights with the leading scholars of the fields mentioned above, the innovative points in this collection have considerable academic value, and some also have broad socioeconomic and practical value. Readers are welcome to offer their valuable comments! I hope to engage in useful scientific discussion with readers on how an emerging discipline should handle its relationships with the surrounding intersecting disciplines! I particularly hope to exchange views with readers who have practical experience with both "collaborative intelligent computing systems" and "Rongzhi and financing"!
Zou Xiaohui, Author of "Original Collection of Rongzhi Xue", March 25, 2006, Hengmei Garden, Zhuhai
Book
Full-text available
融智学原创文集 (Original Collection on Smart-System Studied)

Reflections and Critique on the "Original Collection of Works on Rongzhi (Integrative Intelligence) Studies"

After reading Zou Xiaohui's "Original Collection of Works on Rongzhi (Integrative Intelligence) Studies", I am deeply impressed by its forward-thinking and innovative academic character. The work not only presents numerous distinctive theoretical insights but also shows broad application prospects in practice. The following are my reflections and critique on this collection.

I. Academic Innovation. As an emerging discipline, the core value of Rongzhi Studies lies in exploring the principles, methods, and examples of rational division of labor, complementary advantages, high-level collaboration, and optimized interaction between natural persons and computers. Through years of theoretical reflection and social practice, Zou Xiaohui has gradually formed this distinctive disciplinary system. The collection documents the development of Rongzhi Studies from its inception to maturity, showing the author's persistent pursuit and profound insights in academia. The "basic law of information (hypothesis)" proposed in the collection, the principle of "synonymous juxtaposition, corresponding transformation", offers a fresh perspective and approach to information processing; it has theoretical significance and plays a role in practical applications. In addition, the author seeks to validate the theoretical value and practical implications of Rongzhi Studies through a series of invention patents and real-world application cases.

II. Interdisciplinary Integration. Rongzhi Studies is a highly interdisciplinary field, integrating knowledge from basic linguistics, computer science, cognitive science, machine translation, computational linguistics, artificial intelligence, general informatics, knowledge management, knowledge economics, network and computer-aided teaching methodologies, and international intellectual property law. Each article in the collection embodies this interdisciplinary integration: the author analyzes the intrinsic connections and interactions among these disciplines, providing a theoretical foundation for the construction of Rongzhi Studies.

III. Practical Applicability. Rongzhi Studies is not only a theoretical discipline but also an applied one. The examples and application cases in the collection demonstrate its potential in practice. Whether it is the "character-based foundation of Chinese information processing", the "cooperative productive teaching method", or the practical project of "Rongzhi (integrative intelligence) and financing", each reflects the particular strengths of Rongzhi Studies in solving real-world problems. These cases support the further development of Rongzhi Studies and offer valuable references for practitioners in related fields.

IV. Academic Exchange Value. The collection gathers articles published by the author in academic journals and at conferences from 2000 to 2005. These articles record the author's academic thinking and research findings at that time and provide an academic resource and a platform of exchange for later scholars. Through exchanges and feedback with several disciplinary leaders, the author has continuously refined and deepened his theoretical system; this spirit and attitude of academic exchange are worth learning from.

V. Future Prospects. Although the collection mainly compiles the author's research findings from 2000 to 2005, the author states in the preface that his present understanding is much more profound and systematic than before. This fills us with anticipation for the future development of Rongzhi Studies; I believe that in the near future it will play an important role in more fields, contributing more wisdom and strength to the progress and development of human society.

In summary, the "Original Collection of Works on Rongzhi (Integrative Intelligence) Studies" is a work of academic value and practical significance. It offers a new perspective and way of thinking and shows the potential of Rongzhi Studies in addressing practical problems. I believe that, under Zou Xiaohui's leadership, Rongzhi Studies will continue to develop and grow into a discipline that can truly benefit humanity.
Article
How the brain mentally sorts a series of items in a specific order within working memory (WM) remains largely unknown. We investigated mental sorting using high-throughput electrophysiological recordings in the frontal cortex of macaque monkeys, who memorized and sorted spatial sequences in forward or backward orders according to visual cues. We discovered that items at each ordinal rank in WM were encoded in separate rank-WM subspaces and then, depending on cues, were maintained or reordered between the subspaces, accompanied by two extra temporary subspaces in two operation steps. Furthermore, the cue activity served as an indexical signal to trigger sorting processes. Thus, we propose a complete conceptual framework, where the neural landscape transitions in frontal neural states underlie the symbolic system for mental programming of sequence WM.
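The framework described in this abstract lends itself to a small geometric illustration. The following toy sketch (Python with NumPy) is not the authors' data or analysis code; the population size, the ring encoding of items, and the explicit swap operator are illustrative assumptions. It shows how two items held in orthogonal "rank subspaces" of a simulated population can be read out by projection, and how a cue-triggered swap between the subspaces reorders them, which is the intuition behind forward versus backward sorting.

# Toy sketch of rank-specific working-memory subspaces and cue-dependent reordering.
# Illustrative assumptions only; not the published analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200          # size of the simulated population (assumption)
n_items = 6              # possible spatial items, e.g., locations on a ring (assumption)

def item_vector(item_idx):
    """Encode a spatial item as a 2-D point on a ring (cosine/sine of its angle)."""
    theta = 2 * np.pi * item_idx / n_items
    return np.array([np.cos(theta), np.sin(theta)])

# Two mutually orthogonal 2-D "rank subspaces" inside the neural state space:
# QR-factorizing a random matrix yields 4 orthonormal columns, split 2 + 2.
basis, _ = np.linalg.qr(rng.standard_normal((n_neurons, 4)))
rank1_subspace, rank2_subspace = basis[:, :2], basis[:, 2:]

def population_state(first_item, second_item):
    """Population activity holding first_item in the rank-1 subspace and
    second_item in the rank-2 subspace, plus a little noise."""
    state = rank1_subspace @ item_vector(first_item) + rank2_subspace @ item_vector(second_item)
    return state + 0.05 * rng.standard_normal(n_neurons)

def decode(state, subspace):
    """Read out which item lives in a given rank subspace by projecting onto it."""
    proj = subspace.T @ state                        # 2-D coordinates within the subspace
    angle = np.arctan2(proj[1], proj[0]) % (2 * np.pi)
    return int(round(angle / (2 * np.pi) * n_items)) % n_items

# Sample sequence: item 2 presented first, item 5 presented second.
state = population_state(first_item=2, second_item=5)

# "Forward" cue: keep the rank assignment as encoded.
forward = (decode(state, rank1_subspace), decode(state, rank2_subspace))

# "Backward" cue: reorder by swapping the contents of the two rank subspaces.
swap = rank1_subspace @ rank2_subspace.T + rank2_subspace @ rank1_subspace.T
backward_state = swap @ state
backward = (decode(backward_state, rank1_subspace), decode(backward_state, rank2_subspace))

print("forward  readout (rank1, rank2):", forward)   # expected: (2, 5)
print("backward readout (rank1, rank2):", backward)  # expected: (5, 2)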