Book

Unified Theories of Cognition

Authors: Allen Newell
... Developing expertise in problem solving constitutes a major goal of most physics courses [1,2,3,4,5]. Problem solving can be defined as any purposeful activity where one is presented with a novel situation and devises and performs a sequence of steps to achieve a set goal [6]. ...
... The problem solver must make judicious decisions to reach the goal in a reasonable amount of time. Given a problem, the range of potential solution trajectories that different people may follow to achieve the goal can be called the problem space [1]. For each problem, the problem space is very large, and depending on their expertise, people may traverse very different paths through this space, which can be visualized, by analogy, as a maze-like structure. ...
... This difference led to a different distribution of student responses, which can be classified into four of the categories used for Problem (1), as shown in Table (1). A comparison with the student responses to Problem (1) shows that the responses in Category (1) almost doubled for Problem (2). In particular, in the context of the pool ball, which had the initial linear speed v_0 = 0, 76% of the students believed that a higher frictional force will make v_f smaller when the rolling begins. ...
Preprint
Investigations related to expertise in problem solving and ability to transfer learning from one context to another are important for developing strategies to help students perform more expert-like tasks. Here we analyze written responses to a pair of non-intuitive isomorphic problems given to introductory physics students and discussions with a subset of students about them. Students were asked to explain their reasoning for their written responses. We call the paired problems isomorphic because they require the same physics principle to solve them. However, the initial conditions are different and the frictional force is responsible for increasing the linear speed of an object in one of the problems while it is responsible for decreasing the linear speed in the other problem. We categorize student responses and evaluate student performance within the context of their evolving expertise. We compare and contrast the patterns of student categorization for the two isomorphic problems. We discuss why certain incorrect responses were better than others and shed light on the evolution of students' expertise. We compare the performance of students who worked on both isomorphic problems with those who worked only on one of the problems to understand whether students recognized their underlying similarity and whether isomorphic pairs gave students additional insight in solving each problem.
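The problem-space notion in the excerpt above has a natural computational reading: states connected by legal moves form a graph, and a solution is one trajectory through that maze. A minimal sketch, where the states, goal test, and move generator are hypothetical placeholders that a concrete physics problem would supply:

    from collections import deque

    def explore_problem_space(initial_state, is_goal, successors):
        """Breadth-first traversal of a problem space.

        initial_state, is_goal, and successors are illustrative
        placeholders; states and operators would be domain-specific.
        """
        frontier = deque([[initial_state]])
        visited = {initial_state}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if is_goal(state):
                return path          # one solution trajectory through the maze
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None                  # goal unreachable from the initial state

    # toy maze: states are letters, moves given by an adjacency map
    maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(explore_problem_space("A", lambda s: s == "D", lambda s: maze[s]))
    # ['A', 'B', 'D'], one of several trajectories through the space

Breadth-first search enumerates trajectories exhaustively; an expert's traversal corresponds to a far more selective expansion of this frontier.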
... While many of the findings remain in the shape of verbal theories, a sizable part has been captured via computational formalisms (Anderson & Lebiere, 1998; Laird, Lebiere, & Rosenbloom, 2017; Newell, 1990; Ritter, Tehranchi, & Oury, 2019). Computational formal models form a solution to the problem of "magic parameters" associated with purely verbal theories and integrate proposed mechanisms into a more unified whole (Byrne, 2012; Lane & Gobet, 2012a; Newell, 1990). ...
... For example, the shortest path: a path through the gates "X", "Y", "Z", and so on. In terms of choosing a psychologically plausible computational approach, one should choose a method that satisfies multiple constraints, e.g., postdicting past psychological experimental data as well as predicting findings that have not yet been reported (Newell, 1990). Our CHREST and deep-learning models both satisfy these constraints, as they incorporate fundamental psychological mechanisms and structures and are rooted in decades of psychological research. ...
Conference Paper
Full-text available
Chunking theory is among the most established theories in cognitive psychology. However, little work has been done to connect the key ideas of chunks and chunking to the neural substrate. The current study addresses this issue by investigating the convergence of a cognitive CHREST model (the computational embodiment of chunking theory) and its neuroscience-based counterpart (based on deep learning). Both models were trained from raw data to categorise novel stimuli in the real-life domains of literature and music. Despite having vastly different mechanisms and structures, both models largely converged in their predictions of classical writers and composers, in both qualitative and quantitative terms. Moreover, the use of the same chunk/engram activation mechanism for CHREST and deep learning models demonstrated functional equivalence between cognitive chunks and neural engrams. The study addresses a historical feud between symbolic/serial and subsymbolic/parallel processing approaches to modelling cognition. The findings also further bridge the gap between cognition and its neural substrate, connect the mechanisms proposed by chunking theory to the neural network modelling approach, and make further inroads towards integrating concept formation theories into a Unified Theory of Cognition (Newell, 1990).
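The shared chunk/engram activation mechanism mentioned in the abstract can be illustrated schematically. This is not CHREST's actual discrimination-net algorithm, only a toy reduction in which a chunk's activation is the proportion of its stored pattern present in the stimulus, and classification goes to the class owning the most activated pattern; all names are invented for illustration:

    def chunk_activation(stimulus_features, chunk_pattern):
        """Activation of a stored chunk (or engram) for a stimulus:
        the fraction of the stored pattern present in the stimulus."""
        stimulus, pattern = set(stimulus_features), set(chunk_pattern)
        return len(stimulus & pattern) / len(pattern) if pattern else 0.0

    def categorise(stimulus_features, chunks_by_class):
        """Assign the class whose stored chunks are most activated."""
        return max(chunks_by_class,
                   key=lambda c: max(chunk_activation(stimulus_features, p)
                                     for p in chunks_by_class[c]))

    chunks = {"Bach": [("trill", "pedal")], "Chopin": [("rubato", "arpeggio")]}
    print(categorise(["rubato", "arpeggio", "trill"], chunks))   # Chopin

The same scoring function can be applied to symbolic feature sets or to thresholded neural embeddings, which is the sense in which the study treats chunks and engrams as functionally equivalent.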
... Establishing a unified theory of cognition has been a major goal of psychology [1,2]. While there have been previous attempts to instantiate such theories by building computational models [1, 2], we currently do not have one model that captures the human mind in its entirety. ...
... The importance of such a unified approach has already been recognized by the pioneers of our field. For example, in 1990, Newell stated that "unified theories of cognition are the only way to bring [our] wonderful, increasing fund of knowledge under intellectual control" [2]. How can we make meaningful progress toward such theories? ...
... In this task, you have to repeatedly choose between two slot machines labeled B and C. An important step towards a unified theory of cognition is to build a computational model that can predict and simulate human behavior in any domain [2,11]. The present paper takes up this challenge and introduces Centaur, the first foundation model of human cognition [12]. ...
Preprint
Full-text available
Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the human mind in its entirety. Here we introduce Centaur, a computational model that can predict and simulate human behavior in any experiment expressible in natural language. We derived Centaur by finetuning a state-of-the-art language model on a novel, large-scale data set called Psych-101. Psych-101 reaches an unprecedented scale, covering trial-by-trial data from over 60,000 participants performing over 10,000,000 choices in 160 experiments. Centaur not only captures the behavior of held-out participants better than existing cognitive models, but also generalizes to new cover stories, structural task modifications, and entirely new domains. Furthermore, we find that the model's internal representations become more aligned with human neural activity after finetuning. Taken together, Centaur is the first real candidate for a unified model of human cognition. We anticipate that it will have a disruptive impact on the cognitive sciences, challenging the existing paradigm for developing computational models.
... Instead of introspectively analyzing assumed human information processes, they began to empirically study how people process information in their thinking. As a consequence of this paradigmatic research, a large number of psychologically inspired theoretical models of the human mind were developed [15,30]. It takes just a brief step to move from modeling human thinking to designing intelligent processes. ...
... In practice, human actions should be harmonized with the actions of technical artifacts in models; however, owing to differences in the principles that human minds follow, it is essential to use paradigms that best fit mental operations, such as perception, attention, language, and thinking [15]. Such models have been developed within cognitive psychology over the years, beginning with the Turing machine [22], the test-operate-test-exit (TOTE) model [37], and physical symbol systems [17], as well as the goals-operators-methods-selection (GOMS)-like [30] and adaptive character of thought-rational (ACT-R)-like [15] architectures. Moreover, additional kinds of models have been built based on neural networks [38][39][40]. ...
... The IEC model is unique, although this does not mean that the same process could not be realized by standard models of the mind, such as the GOMS [30] or ACT-R [15] models. However, among the various models of the computational mind, the old TOTE model developed by Miller, Galanter, and Pribram [37] arguably comes the closest to IEC thinking. ...
Article
Full-text available
Human digital twins are computational models of the human actions involved in interacting with and operating technical artifacts. Such models provide a conceptual and practical tool for artificial intelligence designers when they seek to replace human work with intelligent machines. Indeed, digital twins have long served as models of technical and cyber-physical processes. Human digital twins have such models as their foundations but also include models of human actions. As a result, human digital twin models enable technology designers to model how people interact with intelligent technical artifacts. Yet, development of human digital twins is associated with certain conceptual problems. To clarify the basic idea, we constructed a human digital twin for Minsky’s M-Machine. The abstract conceptual structure of this machine and its generality allowed us to analyze the general properties of human digital twins, their design, and their use as tools in designing intelligent technologies.
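The TOTE (test-operate-test-exit) unit cited in the snippets above is compact enough to state directly in code: a feedback loop that keeps operating on a state until a test against the goal passes. A minimal sketch, with the test and operator as placeholders:

    def tote(test, operate, state, max_iters=100):
        """Test-Operate-Test-Exit (Miller, Galanter, & Pribram):
        operate on the state until the test is satisfied, then exit."""
        for _ in range(max_iters):        # guard against a test that never passes
            if test(state):               # Test: does the state match the goal?
                return state              # Exit
            state = operate(state)        # Operate: reduce the mismatch
        raise RuntimeError("goal not reached within iteration budget")

    # the classic hammering example: strike until the nail is flush
    print(tote(test=lambda depth: depth >= 10,
               operate=lambda depth: depth + 2,
               state=0))   # 10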
... While many cognitive scientists and education researchers have focused on unraveling the nature of expertise, the community is still struggling with various facets of expertise [1][2][3][4][5][6][7][8][9][10]. These facets include identifying the characteristics that predict expertise and determining how expertise develops: whether development is a gradual process or whether there are major boosts along the way as a result of certain types of exposure or scaffolding supports [11][12][13][14][15][16][17][18][19][20]. Physics has frequently been used as a domain in which the nature of expertise is investigated. ...
... The error bars refer to the standard error. [Flattened table excerpt; approximate rows pair a question number with its categories: 2D kinematics / Force (cliff); 6 (15) Mechanical energy conservation / Speed; 7 (17) Newton's second law / Newton's third law, tension / Tension only; 8 (19) Impulse-momentum theorem / Force; 9 (24) Mechanical energy conservation and momentum conservation or completely inelastic collision.] Note. These are examples of the primary and secondary categories and one commonly occurring poor/moderate category for each of the 25 questions for version II of the problem set. ...
Preprint
The ability to categorize problems based upon underlying principles, rather than surface features or contexts, is considered one of several proxy predictors of expertise in problem solving. With inspiration from the classic study by Chi, Feltovich, and Glaser, we assess the distribution of expertise among introductory physics students by asking three introductory physics classes, each with more than a hundred students, to categorize mechanics problems based upon similarity of solution. We compare their categorization with those of physics graduate students and faculty members. To evaluate the effect of problem context on students' ability to categorize, two sets of problems were developed for categorization. Some problems in one set included those available from the prior study by Chi et al. We find a large overlap between calculus-based introductory students and graduate students with regard to their categorizations that were assessed as "good." Our findings, which contrast with those of Chi et al., suggest that there is a wide distribution of expertise in mechanics among introductory and graduate students. Although the categorization task is conceptual, introductory students in the calculus-based course performed better than those in the algebra-based course. Qualitative trends in categorization are similar between the non-Chi problems and the problems taken from the Chi study, although the Chi problems used are more difficult on average.
... The semi-automatically evolved GEVL strategies produced a good fit to the human data in both studies, improving on EPAM's scores by as much as a factor of two on some of the pattern similarity conditions. These findings offer further support to the mechanisms proposed by chunking theory, connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990). ...
... To conclude, our study further integrates genetic/evolutionary aspects with cognitive models (thus bridging the "how?" and "why?" questions) and automates task-specific strategy discovery. Our findings offer further support to the mechanisms proposed by chunking theory, connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990). ...
Conference Paper
Full-text available
A fundamental issue in cognitive science concerns the interaction of the cognitive "how" operations, the genetic/memetic "why" processes, and by what means this interaction results in constrained variability and individual differences. This study proposes a single GEVL model that combines complex cognitive mechanisms with a genetic programming approach. The model evolves populations of cognitive agents, with each agent learning by chunking and incorporating LTM and STM stores, as well as attention. The model simulates two different verbal learning tasks: one investigates the effect of stimulus-response (S-R) similarity on the learning rate; the other examines how learning time is affected by changes in stimulus presentation times. GEVL's results are compared to both human data and EPAM, a different verbal learning model that utilises hand-crafted, task-specific strategies. The semi-automatically evolved GEVL strategies produced a good fit to the human data in both studies, improving on EPAM's scores by as much as a factor of two on some of the pattern similarity conditions. These findings offer further support to the mechanisms proposed by chunking theory, connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990).
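As a rough sketch of how strategies might be evolved against human data in the GEVL spirit (the actual model evolves chunking strategies inside full CHREST-style agents; every callable below is a hypothetical placeholder, and fitness would be the negated discrepancy between an agent's simulated learning curve and the human curve):

    import random

    def evolve_strategies(random_strategy, mutate, crossover, fitness,
                          pop_size=50, generations=100):
        """Generic evolutionary loop over task strategies."""
        population = [random_strategy() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]        # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)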
... The attempt to build a Unified Cognitive Architecture (Newell, 1994) that can replicate human-like intelligence must necessarily account for the routine interplay between affect and metacognitive processes. Historically, cognitive modeling research has focused predominantly on knowledge-based processing such as reasoning, vision, and AI problem-solving, with little or no computational account of the critical role of emotion and metacognition. ...
... Procedural knowledge is commonly referred to by researchers as containing "procedural representations" (Anderson, 1982; Pavese, 2019). Within ACT-R, procedural representations are computationally specified as "production rules", which are a dominant form of representation within accounts of skill (Newell, 1994; Taatgen & Lee, 2003; Anderson et al., 2019). Neurologically, production rules are associated with the 50 ms decision timing in the basal ganglia (Stocco, 2018). ...
Conference Paper
Full-text available
This paper investigates the computational mechanisms underlying a type of metacognitive monitoring known as detached mindfulness, a particularly effective therapeutic technique within cognitive psychology. While research strongly supports the capacity of detached mindfulness to reduce depression and anxiety, its cognitive and computational underpinnings remain largely unexplained. We employ a computational model of metacognitive skill to articulate the mechanisms through which a detached perception of affect reduces emotional reactivity.
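The production-rule representation discussed above (condition-action pairs matched against working memory, with roughly one rule firing per 50 ms cycle) can be sketched minimally. The conflict-resolution policy here (first match wins) is a deliberate simplification of what architectures like ACT-R actually do:

    def production_cycle(working_memory, productions, max_cycles=50):
        """Minimal recognize-act loop: each production is a
        (condition, action) pair tested against working memory."""
        for _ in range(max_cycles):
            matches = [(cond, act) for cond, act in productions
                       if cond(working_memory)]
            if not matches:
                return working_memory      # no rule applies: quiescence
            _, action = matches[0]         # crude conflict resolution: first match
            working_memory = action(working_memory)
        return working_memory

    productions = [
        (lambda wm: "goal" in wm and "answer" not in wm,
         lambda wm: {**wm, "answer": wm["goal"] * 2}),
    ]
    print(production_cycle({"goal": 21}, productions))  # {'goal': 21, 'answer': 42}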
... However, this approach, he argued, yielded a proliferation of micro-theories, limited in scope and lacking the power to coalesce into a cumulative and coherent understanding of cognition. Newell (1973, 1990) proposed a radical shift in perspective. He argued that, rather than framing research around isolated tasks and artificial dichotomies, researchers should strive to develop cognitive architectures: complex computational systems designed to model a wide array of cognitive phenomena. ...
... While early proponents of cognitive architectures were optimistic about their potential to unify cognitive science (Anderson, 1983; Newell, 1973), later work increasingly emphasized integrated or integrative models of cognition (Anderson, 2007; Eliasmith, 2013; Laird, 2012; Laird et al., 2017; Newell, 1990). As Wayne Gray, in his introduction to a volume on cognitive architectures, observes, "the emphasis on integrated models recognizes that the cognitive system is too large and complex for a single researcher or laboratory to model and that progress can only be made by developing our various parts so that they can fit together with the parts developed by other researchers in other laboratories" (Gray, 2007, p. vii). ...
Article
Full-text available
This paper examines the interplay between integrative explanatory pluralism and the quest for unified theories. We argue that when grounded in virtues associated with satisfactory explanations, integrative pluralism exhibits an inherent instability stemming from the conflict between the demand for unity and the commitment to preserving a patchwork of disparate partial explanations. A case study in cognitive science illuminates the challenges of maintaining both systematicity and depth in explanations within this framework. While this instability does not render integrative pluralism fundamentally flawed, it stresses the importance of a diachronic analysis of scientific dynamics and norms. The conclusion highlights the continued value of integrative pluralism in interdisciplinary research programs, while emphasizing its role as a temporary rather than permanent approach.
... • This knowledge or ability is embodied by an entity (the agentive), which confers on the latter the potential to repeat actions in which it participates (in terms of the PC relationship) as an agent. Here, the agentive concept covers both the notion of an intentional agent [5] (i.e., an agent driven by a goal, a representation of a desired world state) and that of a rational agent [19] (i.e., an agent that uses appropriate resources to achieve the goals it has set itself). The action (AC) is defined in DOLCE-Lite+ as an "accomplishment exemplifying the intentionality of an agent". ...
Preprint
Our ongoing work aims at defining an ontology-centered approach for building expertise models for the CommonKADS methodology. This approach (which we have named "OntoKADS") is founded on a core problem-solving ontology which distinguishes between two conceptualization levels: at an object level, a set of concepts enable us to define classes of problem-solving situations, and at a meta level, a set of meta-concepts represent modeling primitives. In this article, our presentation of OntoKADS will focus on the core ontology and, in particular, on roles - the primitive situated at the interface between domain knowledge and reasoning, whose ontological status is still much debated. We first propose a coherent, global ontological framework which enables us to account for this primitive. We then show how this novel characterization of the primitive allows the definition of new rules for the construction of expertise models.
... Consequently, I try to propose a model that reconciles a linguistic theory with a cognitive architecture. The choice of the linguistic theory naturally goes to Construction Grammar (Fillmore, 1988; Kay, 2002) and Frame Semantics (Fillmore, 1982), due to the parallel one can draw between a production rule and a construction, and the cognitive architecture is, obviously, the family of Production Systems (Newell, 1990; Anderson, 1993). Moreover, since many pragmatic models rely on topologically structured representations, I introduce the notion of context, a notion that has never been adapted to these theories, in order to organize data in "storages" structured in dissimilar ways. ...
Preprint
While great effort has gone into the development of fully integrated modular understanding systems, little research has focused on the problem of unifying existing linguistic formalisms with cognitive processing models. The Situated Constructional Interpretation Model is one of these attempts. In this model, the notion of "construction" has been adapted so as to mimic the behavior of Production Systems. The Construction Grammar approach establishes a model of the relations between linguistic forms and meaning by means of constructions. The latter can be considered as pairings from a topologically structured space to an unstructured space, in some ways a special kind of production rule.
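The parallel drawn above between constructions and production rules can be made concrete: a construction pairs a form pattern (condition) with a meaning-building operation (action). A toy sketch with one invented transitive construction; real Construction Grammar inventories are far richer and context-sensitive:

    # Each construction pairs a form pattern with a meaning-building action,
    # mirroring the condition/action structure of a production rule.
    constructions = [
        # hypothetical transitive construction: "X VERBs Y"
        (lambda tokens: len(tokens) == 3 and tokens[1].endswith("s"),
         lambda tokens: {"event": tokens[1].rstrip("s"),
                         "agent": tokens[0], "patient": tokens[2]}),
    ]

    def interpret(utterance):
        """Fire the first construction whose form pattern matches (a crude
        stand-in for the model's situated, context-sensitive selection)."""
        tokens = utterance.split()
        for pattern, build_meaning in constructions:
            if pattern(tokens):
                return build_meaning(tokens)
        return None

    print(interpret("Kim sees Lee"))
    # {'event': 'see', 'agent': 'Kim', 'patient': 'Lee'}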
... Another distinction in approaches for conceiving social robots, which is of particular relevance for addressing the SGP, reflects a divergence from the more general field of cognitive architectures (or unified theories of cognition [24]). Historically, two opposing approaches have been proposed to formalize how cognitive functions arise in an individual agent from the interaction of interconnected information processing modules in a cognitive architecture. ...
Preprint
Full-text available
This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
... However, in contrast to all other cognitive architectures, e.g. CopyCat (Hofstadter and Mitchell 1995), SOAR (Newell 1990), ACT-R (Anderson 2007), and CLARION (Sun 2002), our system integrates connectionist and symbolic systems in a manner that preserves the full power of the physical symbol system as required by Fodor and Pylyshyn; see Gary Marcus' book "The Algebraic Mind" for a detailed criticism of previous attempts at hybrid cognitive architectures within the domain of symbolic connectionism (Marcus 2001). There is no conflict in our framework with other machine learning or connectionist approaches because all the non-symbolic features of a wide range of supervised and unsupervised learning algorithms found in machine learning textbooks can be possessed inside an atom in our architecture. ...
Preprint
Full-text available
The accumulation of adaptations in an open-ended manner during lifetime learning is a holy grail in reinforcement learning, intrinsic motivation, artificial curiosity, and developmental robotics. We present a specification for a cognitive architecture that is capable of specifying an unlimited range of behaviors. We then give examples of how it can stochastically explore an interesting space of adjacent possible behaviors. There are two main novelties; the first is a proper definition of the fitness of self-generated games such that interesting games are expected to evolve. The second is a modular and evolvable behavior language that has systematicity, productivity, and compositionality, i.e. it is a physical symbol system. A part of the architecture has already been implemented on a humanoid robot.
... And again, this is a question that must be answered whether or not the representations satisfy the definition of a physical symbol system. In answering this question I have found it important to notice that both thought and evolution are intelligent, and undertake knowledge-based search as defined by Newell (Newell 1990). I propose that both open-ended thought and open-ended evolution share similar mechanisms and that a close examination of these two processes in context is helpful. ...
Preprint
Full-text available
Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparison of thought to chemistry. Both have systematicity, productivity and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
... The idea of a homogeneous set of capacities was suggested in enthusiastic tones and as a possibly great achievement of psychological research, e.g., by Allen Newell in 1990: «psychology has arrived at the possibility of unified theories of cognition - theories that gain their power by posing a single system of mechanisms that operate together to produce the full range of human cognition». Moreover, a hierarchical design presupposes that «multiple lower-level units report to a single higher-level unit, and ultimately one top-level unit oversees the whole system. The different units are organized into a pyramid» (this issue, infra, p. 98). ...
Article
Full-text available
In this introduction to the thematic issue on the future of the cognitive science(s), we examine how challenges and uncertainties surrounding the past and present of this discipline make it difficult to chart its future. We focus on two main questions. The first is whether cognitive science is a single unified field or inherently pluralistic. This question can be asked at various levels: First, with respect to the disciplines that should be included in the cognitive hexagon and their reciprocal relationships: should we speak of cognitive science or of the cognitive sciences? Second, with regard to the conceptual and methodological changes (turns or revolutions) that have taken place within the cognitive project from its inception to the present day. Third, it pertains to cognitive psychology as a discipline. Before the emergence of cognitive science, psychology was a fragmented discipline characterized by different traditions and approaches: has cognitive science been able to stem this fragmentation? Finally, we can question the unity of the cognitive architecture itself: is cognition produced by homogeneous or heterogeneous mechanisms for information processing? We show that the issue of unity is addressed by several of the papers included in this thematic issue. In the second part of this introduction, we query the role that each component discipline should play in the cognitive project and in particular which should lead the project going forward, and why. Again, we show how this issue has been tackled by several articles featured in this collection.
... Here, we suggest a simple formal operationalization of emergentist theories, grounded in cognitive architecture research in artificial intelligence, where a few general mechanisms should explain diverse and complex cognitive phenomena (Newell, 1994). To select relevant mechanisms, we start by considering minimal cognitive differences between humans and other animals. ...
Conference Paper
Full-text available
In this paper, we introduce a minimal cognitive architecture designed to explore the mechanisms underlying human language learning abilities. Our model, inspired by research in artificial intelligence, incorporates sequence memory, chunking, and schematizing as key domain-general cognitive mechanisms. It combines an emergentist approach with the generativist theory of type systems. By modifying the type system to operationalize theories on usage-based learning and emergent grammar, we build a bridge between theoretical paradigms that are usually considered incompatible. Using a minimal error-correction reinforcement learning approach, we show that our model is able to extract functional grammatical systems from limited exposure to small artificial languages. Our results challenge the need for complex predispositions for language and offer a promising path for further development in understanding cognitive prerequisites for language and the emergence of grammar during learning.
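Of the mechanisms named in the abstract, chunking is the easiest to illustrate: recurring subsequences of the input are promoted to reusable units. A frequency-based toy version (the paper's model couples this with schematizing and a type system, which this sketch omits; thresholds and names are invented):

    from collections import Counter

    def find_chunks(corpus, min_count=2, max_len=3):
        """Frequency-based chunk discovery: subsequences recurring at least
        min_count times become candidate chunks."""
        counts = Counter()
        for sequence in corpus:
            for n in range(2, max_len + 1):
                for i in range(len(sequence) - n + 1):
                    counts[tuple(sequence[i:i + n])] += 1
        return {chunk for chunk, c in counts.items() if c >= min_count}

    corpus = [["the", "dog", "runs"], ["the", "dog", "sleeps"]]
    print(find_chunks(corpus))   # {('the', 'dog')}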
... Knowledge-oriented vs. search-oriented. Every cognitive system faces what Newell calls the preparation/deliberation trade-off: either proceed with the search for and exploration of further alternatives, or exploit the knowledge already accumulated [Newell, 1990]. He observes, however, that every situation ...
Chapter
Full-text available
The recent debate on the nature of organizational competences has given great prominence to the notion of routine, at times encouraging a substantial conflation of the two concepts [Teece, 1984; Rumelt, 1984; Wernerfelt, 1984; Amit and Shoemaker, 1993; Teece, Pisano and Shuen, in press]. This chapter offers a presentation of the concept of routine and of the main debates concerning its nature, with particular attention to how the concept should be articulated to provide a more adequate representation of a firm's competences. Competences consist in a firm's capacity to deploy its assets, that is, its capacity to activate organizational processes that recombine its resources [Amit and Schoemaker, 1993: 35]. Quality, miniaturization, integration systems, and best practices are often cited as examples of a firm's competences. Routines refer to the automatic, habitual part of an organization's "know-how", that is, to those skills deposited in practices and behaviors carried out unproblematically and with little or no recourse to deliberation [Nelson and Winter, 1982].
... Unlike data-analytical models, these cognitive models provide a deeper theoretical insight into cognitive functions and how the mind works (Pinker, 2003). Newell (1990) introduced the concept of unified theories of cognition, which involves acquiring knowledge, problem solving, and perception. Cognitive architectures, such as Adaptive Control of Thought-Rational (ACT-R) (Anderson et al., 1998; Anderson & Lebiere, 2014; Ritter et al., 2018) and SOAR (Laird, 2019), have been developed to model human cognition in various cognitive tasks. ...
Conference Paper
Full-text available
The field of Artificial Intelligence (AI), particularly in the area of computer vision, has experienced significant advancements since the emergence of deep learning models trained on extensively large labeled datasets. However, reliance on human labelers raises concerns regarding bias, inconsistency, and ethical issues. This study aimed to replace human labelers with an interactive cognitive model that could address these concerns. We investigated human behavior in a two-phase image labeling task and developed a model using the VisiTor (Vision + Motor) framework within the ACT-R cognitive architecture. The study was designed around a real labeling task: identifying different crystals in optical microscopy images after various treatments for inhibiting the formation of the crystals. The outcomes from the image labeling experiment, which included both learning and testing phases, yielded meaningful observations. The observed decrease in task completion times for all participants during the learning phase suggests increased familiarity with the image features, facilitated by the reference images presented in all four consecutive example tasks. It was also discovered that the subtle distinctions between classes led to confusion in making decisions about labels. The developed interactive cognitive model was able to simulate human behavior in the same labeling task environment; while the model achieved high accuracy, it still relies on pre-defined features, which limits its application to seen data only. Our findings suggest that interactive cognitive modeling offers a promising avenue for replacing human labelers with robust, consistent, and unbiased labeled datasets.
... In the realm of ABM for complex systems, employing a robust cognitive architecture is crucial for simulating intelligent decision-making processes. A cognitive architecture, which defines structure and functionality throughout the simulation, supports the cognitive model that shapes behavior (Newell, 1990). In this study, we adopt the BDI cognitive framework, centered around Beliefs, Desires, and Intentions. ...
Preprint
Full-text available
Large-scale experimental studies on Learning Progression (LP) in middle school mathematics face challenges, such as resource limitations and ethical considerations. This study introduces a simulation-based framework for LP exploration, centered on the Multi-Agent-Based Student Cognitive Development (MAB-SCD) model. The MAB-SCD model, built using Agent-Based Modeling (ABM), integrates student learning processes and cognitive development into coherent learning trajectories. It was conceptualized around the LP construction process and key instructional activities in middle school mathematics, using the BDI cognitive framework for design and implementation. A systematic verification process was conducted to ensure its suitability for LP research. Global sensitivity analysis revealed complex parameter interactions, providing insights into model dynamics and enabling simulation optimization to more accurately represent student learning experiences. Historical data were used for parameter tuning and validation, ensuring the alignment between model outputs and real-world observations. Calibration and validation results confirmed the model’s effectiveness in reflecting students' progress and cognitive development. Additionally, the model's validity was demonstrated in a typical LP research task, showing effective integration of cognitive processes with learning trajectories. Positioned at the intersection of cognitive architecture and educational theory, these findings offer actionable insights for educators and researchers. By promoting the use of computational simulations, this study enhances the understanding of mathematics learning progressions across large student populations over extended periods.
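The BDI deliberation cycle underlying agents of this kind can be sketched abstractly; the four callables below are placeholders for a model's domain-specific belief revision, goal generation, commitment, and action logic, not the MAB-SCD implementation:

    def bdi_step(beliefs, desires, intentions, percept,
                 update_beliefs, options, filter_intentions, execute):
        """One deliberation cycle of a Belief-Desire-Intention agent."""
        beliefs = update_beliefs(beliefs, percept)    # revise beliefs from new data
        desires = options(beliefs, intentions)        # generate candidate goals
        intentions = filter_intentions(beliefs, desires, intentions)  # commit
        action = execute(intentions)                  # act on current intentions
        return beliefs, desires, intentions, action

Running this step once per simulated time slice, for every student agent, is what turns the static framework into learning trajectories.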
... The theory adopted for this study has practical problem-solving potentials, elements, and ideals. Theories of psychology or neuroscience are often adopted for projects concerning AI because the human mind is the best-known form of intelligence (Newell, 1990; Rumelhart & McClelland, 1986). On the basis of practical problem-solving demands, the theory of computation is usually considered to suffice for any other or new theory of AI and its integration into any project or endeavor (Hayes & Ford, 1995; Marr, 1982). ...
Article
Full-text available
The dire need for proper maintenance of Science Laboratory Equipment (SLE) to attain efficiency, optimal results and durability cannot be overemphasized. To that end, this study proposes the leveraging of AI for optimization and efficiency in the maintenance of SLE. The study relied on both primary and secondary data. The primary data were sourced from twenty Science Laboratory (SL) professionals, while the secondary data were sourced from repositories, databases and websites on the internet. A mixed-methods approach, together with appropriate descriptive and statistical tools, was employed. The analysis shows that the maintenance of SLE can be optimized and made efficient by leveraging AI for such purposes. Regrettably, public sector organizations are yet to significantly integrate AI into the maintenance of SLE. The study concludes that AI has the capacity to optimize and enhance efficient maintenance of SLE. It calls on stakeholders in the field of SL to make concerted efforts to significantly integrate AI into the maintenance of SLE. The government should help provide AI technologies for the concerned public sector organizations and sponsor the training of people for technical know-how in using and sustaining these cutting-edge technologies in SL.
... 2.1). In fact, Newell (1990) had speculated that low-level tasks such as object recognition would be well modeled by NNs, while higher-level reasoning and logical processing would require rule-based processing, anticipating aspects of today's neuro-symbolic architectures and the System 1 vs. System 2 distinction (Kahneman, 2011). ...
Preprint
Full-text available
Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities. Here we review recent work that has used metalearning to help overcome some of these challenges. We characterize their successes as addressing an important developmental problem: they provide machines with an incentive to improve X (where X represents the desired capability) and opportunities to practice it, through explicit optimization for X, unlike conventional approaches that hope to achieve X through generalization from related but different objectives. We review applications of this principle to four classic challenges: systematicity, catastrophic forgetting, few-shot learning, and multi-step reasoning; we also discuss related aspects of human development in natural environments.
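The stated principle, explicit optimization for a capability X rather than hoping X emerges from a different objective, reduces to an episodic training loop. A schematic sketch in which every callable is a placeholder (e.g., sample_task might emit support/query splits of a few-shot classification episode):

    def metalearn(model, sample_task, update, evaluate, episodes=10000):
        """Optimize for capability X by practicing it directly: each episode
        is one sampled opportunity to exercise X, and the update is driven
        by the loss on that episode."""
        for _ in range(episodes):
            support, query = sample_task()          # one practice opportunity
            model = update(model, support, query)   # optimize X explicitly
        return evaluate(model)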
... Another source of cognitive models is publications in the section of cognitive psychology where researchers do not study individual phenomena of the human psyche but instead synthesize integrated systems that imitate the processes of human problem solving and decision-making, for example, publications devoted to the SOAR and ACT theories of cognition [5,6]. The third source of cognitive models includes publications in the field of the cognitive sciences [7,8]. ...
... Lynch (1964) defines wayfinding from an environmental behavior perspective as the process of perceiving and organizing external environmental cues [19]. In contrast, Newell (1994) defines wayfinding from a cognitive psychology perspective as the ability to process and interpret information to create a cognitive map of the space [20]. ...
Preprint
Full-text available
In the process of wayfinding, people with color vision deficiency (CVD) find it difficult to search for and understand the relevant signage and guidance information, which reduces the efficiency of travellers' wayfinding decisions. Current research on CVD mainly focuses on digital media interfaces, and there is a lack of exploration of the signage perception abilities of CVD groups in complex environments. In this study, we selected metro travel as the type of behaviour to be studied, chose Xiamen Metro Lvcuo Station and four nodes in the surrounding urban area as the research scope, considered the two pairs of influencing factors of environment and color vision, and used 360° panoramic views and simulated CVD to implement an information search experiment, with a comparative analysis of the relevant eye-movement and visual indexes. The experimental results show that subjects with simulated CVD have a significantly weaker ability to perceive the current signage system than subjects with normal color vision (NCV), while being less easily disturbed by the colorful visual environment; changes in the outdoor light environment also significantly affect the subjects' ability to perceive the signage system. The study explores travel environment enhancement strategies that are friendly to the CVD group based on their unique visual recognition needs. It advocates analyzing the key points of urban public space design with the concept of inclusiveness, and provides a theoretical basis for the subsequent inclusive construction and optimisation of cities.
... In addition, the self-synchronization processes required in the network structure seemed to place a heavy burden on the information processing capacities of the tactical-level decision-makers. While our findings contrasted with contemporary writings on the organization of military operations (e.g., Alberts & Hayes, 2003; Atkinson & Moffat, 2005), they still make sense in light of the basic theories of information processing in organizations (e.g., Tversky & Kahneman, 1981; Simon, 1987; Newell, 1990; Morgan, 1998). A main impression from this set of experiments is that many aspects of human interaction must be managed before a network centric structure may give a full range of benefits in operations. ...
... Colom et al. [2] defined 'intelligence' in this context in terms of a "general mental ability" for reasoning, problem solving, and learning, while Allen Newell [3] asserted that 'intelligence' was the degree to which a system approximated a knowledge-level system. Many perspectives are related to the definitions of AI. ...
... Two prominent frameworks for cognitive modeling are ACT-R (Anderson 2009; Bothell 2017) and Soar (Laird 2012): these frameworks serve as robust tools for simulating human behavior across various cognitive tasks. They are referred to as cognitive architectures (CAs) (Laird 2012; Anderson 1998), reflecting a set of intertwined mechanisms to model human behavior and aiming for a unified representation of the mind (Newell 1994). CAs use task-specific knowledge to generate behavior. ...
Preprint
Resolving the dichotomy between the human-like yet constrained reasoning processes of Cognitive Architectures and the broad but often noisy inference behavior of Large Language Models (LLMs) remains a challenging but exciting pursuit for enabling reliable machine reasoning capabilities in production systems. Because Cognitive Architectures are famously developed for the purpose of modeling the internal mechanisms of human cognitive decision-making at a computational level, new investigations consider the goal of informing LLMs with the knowledge necessary for replicating such processes, e.g., guided perception, memory, goal-setting, and action. Previous approaches that use LLMs for grounded decision-making struggle with complex reasoning tasks that require slower, deliberate cognition over fast and intuitive inference -- reporting issues related to the lack of sufficient grounding, as in hallucination. To resolve these challenges, we introduce LLM-ACTR, a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making by integrating the ACT-R Cognitive Architecture with LLMs. Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations, injects this information into trainable LLM adapter layers, and fine-tunes the LLMs for downstream prediction. Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability of our approach, compared to LLM-only baselines that leverage chain-of-thought reasoning strategies.
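A bottleneck-adapter layer of the general kind described, one that conditions an LLM's hidden states on a latent vector summarizing ACT-R's decision trace, might look as follows in PyTorch. Dimensions, wiring, and names are illustrative guesses, not the LLM-ACTR implementation:

    import torch
    import torch.nn as nn

    class CognitiveAdapter(nn.Module):
        """Hypothetical adapter: fuse an ACT-R trace embedding into the
        LLM's hidden states through a trainable bottleneck."""
        def __init__(self, hidden_dim=768, actr_dim=64, bottleneck=32):
            super().__init__()
            self.down = nn.Linear(hidden_dim + actr_dim, bottleneck)
            self.up = nn.Linear(bottleneck, hidden_dim)
            self.act = nn.ReLU()

        def forward(self, hidden_states, actr_latent):
            # hidden_states: (batch, tokens, hidden_dim); actr_latent: (batch, actr_dim)
            expanded = actr_latent.unsqueeze(1).expand(-1, hidden_states.size(1), -1)
            fused = torch.cat([hidden_states, expanded], dim=-1)
            # residual connection keeps the pretrained LLM's behavior recoverable
            return hidden_states + self.up(self.act(self.down(fused)))

Only the adapter parameters need be trained, which matches the paper's strategy of injecting cognitive-process information without retraining the whole LLM.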
... (Feldman, 2007b, p. 330) As fundamentally social beings, we form complex relationships from birth and develop increasingly sophisticated ways of engaging with others. Each one of us moves through the world as a collection of neural, bodily, behavioral, and psychological systems working together on multiple different timescales, from milliseconds to years (Hari & Parkkonen, 2015; Newell, 1994), to support social life. These multilevel social systems underlie our everyday activities and serve as powerful sources of resilience and adaptation across the lifespan (Feldman, 2020, 2021), while their disruption characterizes almost every psychological disorder (Bolis et al., 2017; Kennedy & Adolphs, 2012). ...
Article
Full-text available
Human interpersonal capacities emerge from coordinated neural, biological, and behavioral activity unfolding within and between people. However, developmental research to date has allocated comparatively little focus to the dynamic processes of how social interactions emerge across these levels of analysis. Second-person neuroscience and dynamic systems approaches together offer an integrative framework for addressing these questions. This study quantified respiratory sinus arrhythmia and social behavior (∼360 observations per system) from 44 mothers and typically developing 9-month-old infants during a novel modified “still-face” (text message perturbation) task. Stochastic autoregression models indicate that the infant parasympathetic nervous system is coupled within and between people second by second and is sensitive to social context. Intraindividually, we found positive coupling between infants’ parasympathetic nervous system activity and their social behavior in the subsequent second, but only during the moments and periods of active caregiver engagement. Between people, we found a bidirectional coregulatory feedback loop: Mothers’ parasympathetic activity positively predicted that of their infant in the subsequent second, a form of synchrony that decreased during the text message perturbation and did not fully recover. Conversely, infant parasympathetic activity negatively predicted that of their mother at the subsequent second, a form of synchrony that was invariant over social context. Findings reveal unidirectional parasympathetic coupling within infants and a complementary allostatic feedback loop between mother and infant parasympathetic systems. They offer novel evidence of a dynamic, socially embedded parasympathetic system at previously undocumented timescales, contributing to both basic science and potential clinical targets to better support adaptive, multisystem social development.
... Cognitive architecture is a fascinating and complex field, devoted to unraveling the mental structures and processes that underlie human thought and behavior. Since the initial proposals of Anderson (1996) and Newell (1990), this approach has provided models that allow us to understand how people acquire, process, and use information. ...
Article
Full-text available
Cognitive architecture is fundamental to improving instructional design and academic performance in higher education, providing a framework for understanding how working memory and long-term memory interact. This study reviews recent literature on the application of these principles, focusing on cognitive load management and the personalization of learning. A qualitative approach based on a literature review was used, analyzing studies published between 2019 and 2023. The main results indicate that personalization and clarity in educational texts improve comprehension and learning, especially for students with little prior knowledge. Moreover, implementing strategies to simplify information and to use visual tools can reduce cognitive overload, thereby improving academic performance. In online learning, personalizing content and designing appropriate multimedia materials are essential to avoid information overload and to improve the effectiveness of mobile learning. The conclusions underscore the importance of adapting cognitive load to each student's needs and of using educational technologies that facilitate the visualization of complex concepts.
... Since the early days of AI, cognitive approaches to AI have aimed to emulate human cognitive processes with the goal of designing computer agents that exhibit human like decision making [Riedl and Bulitko, 2012] and (possibly superior) intelligence [Newell, 1994]. While recent advances in AI have relied on neural networks, the goal of making realistic human agents is still very relevant. ...
Preprint
Full-text available
Modelling human cognitive processes in dynamic decision-making tasks has been an endeavor in AI for a long time. Some initial works have attempted to utilize neural networks (and large language models) but often assume one common model for all humans and aim to emulate human behavior in aggregate. However, the behavior of each human is distinct, heterogeneous, and relies on specific past experiences in specific tasks. To that end, we build on a well-known model of cognition, namely Instance-Based Learning (IBL), which posits that decisions are made based on similar situations encountered in the past. We propose two new attention-based neural network models to model human decision-making in dynamic settings. We experiment with two distinct datasets gathered from human subject experiment data, one focusing on detection of phishing emails by humans and another where humans act as attackers in a cybersecurity setting and decide on an attack option. We conduct extensive experiments with our two neural network models, IBL, and GPT3.5, and demonstrate that one of our neural network models achieves the best performance in representing human decision-making. We find an interesting trend that all models predict a human's decision better if that human is better at the task. We also explore explanations of human decisions based on what our model considers important in prediction. Overall, our work yields promising results for further use of neural networks in cognitive modelling of human decision making. Our code is available at https://github.com/shshnkreddy/NCM-HDM.
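The core of Instance-Based Learning is easy to state: an option's value is a similarity-weighted blend of outcomes from remembered instances. A stripped-down sketch (ACT-R's activation noise and memory decay, which full IBL includes, are omitted; all names and the toy similarity function are illustrative):

    import math

    def ibl_value(option, situation, memory, similarity, tau=0.25):
        """Blended value of an option: similarity-weighted average of
        outcomes from past instances of that option."""
        weighted = [(math.exp(similarity(situation, s) / tau), outcome)
                    for opt, s, outcome in memory if opt == option]
        if not weighted:
            return 0.0                      # unexplored option
        norm = sum(w for w, _ in weighted)
        return sum(w * o for w, o in weighted) / norm

    def ibl_choose(options, situation, memory, similarity):
        return max(options, key=lambda o: ibl_value(o, situation, memory,
                                                    similarity))

    sim = lambda a, b: 1.0 if a == b else 0.0
    memory = [("A", "ctx1", 1.0), ("A", "ctx2", 0.0), ("B", "ctx1", 0.5)]
    print(ibl_choose(["A", "B"], "ctx1", memory, sim))   # A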
... In its adaptive purpose, given the poor temporal performance of biological organisms noted by Rosenblueth et al. (1943), the human mind needs some temporal room to process the elements provided by an ever-changing and often unpredictable environment (Newell, 1990). This makes it necessary to temporarily protect some information from forgetting, with a view to its future use, while simultaneously processing other information relevant to ongoing processing and action. ...
Article
Full-text available
The continuous flow of information in which we are immersed obliges our cognitive system to keep the relevant elements accessible for the time necessary for their processing. The present study investigated how working memory balances the resource demands of this necessary storage in the face of demanding processing. In four experiments using a complex span task, we examined the residual performance in memory and processing of individuals who performed at their best on the other component. Reciprocal dual-task costs pointed toward resource sharing between the two functions. However, whereas prioritizing processing almost abolished participants’ memory performance, more than 60% of their processing capacities were preserved while maintaining memory performance at span. We argue that this asymmetry might be adaptive in nature. Working memory might have evolved as an action-oriented system in which short-term memory capacity is structurally limited to spare the resources needed for processing the information it holds.
Chapter
This chapter attempts to explain the main concepts, definitions and developments of the field of artificial intelligence. It addresses the issues of logic, probability, perception, learning and action. The chapter examines the current “state of the art” of the artificial intelligence systems and its recent developments. Moreover, this chapter presents the artificial intelligence’s conceptual foundations and discusses the issues of machine learning, uncertainty, reasoning, learning and robotics.
Chapter
In this study, we focus on designing the interface and functionality of color analysis tools to investigate color associations. This approach advances beyond conventional analytical methods, rendering the results of color correlation studies more scientific and robust. This study aimed to (1) develop a color analysis tool for analyzing the relations between semantic color adjectives and the results of color association investigations, and (2) examine the color association analysis tool to demonstrate the spatial distribution and volumetric presence of colors linked with semantic adjectives in the CIELab color space. In a previous study, the prototype of the analysis tool was designed following the rules of user interface design and to provide a simple analysis method for understanding the color distribution of serial color chips in the CIELab color space. Three experts were invited to discuss and review the functions and analysis procedures of the color analysis tool. The interface of the analysis tool includes importing the raw data with CIELab and RGB values, and color volume calculation represented in the CIELab color space. Color analysis was performed according to a workflow that converts RGB values to CIE L*a*b* for computing the color image. In terms of academic research contributions, this study recognizes that effective color analysis and display go beyond simple color swatch tools; they rely on establishing the strength of relationships between color semantics vocabulary and the associated colors within the CIELab color space. Future work could explore the mapping of color semantics to perceived colors in people’s associations and examine its clustering effects.
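The RGB-to-CIE L*a*b* step mentioned above follows a standard, well-documented pipeline: linearize sRGB, convert to XYZ, then to L*a*b* against a D65 white point. A self-contained sketch of that conversion (the tool's exact implementation may differ):

    def rgb_to_lab(r, g, b):
        """sRGB (0-255) -> CIE L*a*b*, D65 white point."""
        # 1. linearize sRGB
        def lin(c):
            c /= 255.0
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        rl, gl, bl = lin(r), lin(g), lin(b)
        # 2. linear RGB -> XYZ (sRGB matrix, D65)
        x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
        y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
        z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
        # 3. XYZ -> L*a*b*, normalized by the D65 reference white
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 \
                else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    print(rgb_to_lab(255, 255, 255))   # approximately (100.0, 0.0, 0.0)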
Article
This paper explores AI’s role in scholarly communication from different perspectives. It begins by examining how universities contributed to the development of AI and integrated it into teaching, learning, and research. As the digital economy grows, AI is influencing many aspects of human life. Different countries and funding agencies use various approaches that affect how research grants are allocated globally. AI’s use in scholarly writing is changing traditional communication methods. Although little research exists on the impact of AI-generated writing on metrics like the h-index, this paper discusses several issues from different perspectives. It aims to provide insights to AI researchers to improve the technology for better outcomes.
Article
Full-text available
Scholars argue that artificial intelligence (AI) can generate genuine novelty and new knowledge and, in turn, that AI and computational models of cognition will replace human decision making under uncertainty. We disagree. We argue that AI’s data-based prediction is different from human theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input–output devices, using large language models as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than AI’s emphasis on information processing and data-based prediction. AI uses a probability-based approach to knowledge and is largely backward looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty. We introduce the idea of data–belief asymmetries to highlight the difference between AI and human cognition, using the example of heavier-than-air flight to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to intervene in the world and to engage in directed experimentation to generate new data. Throughout the article, we discuss the implications of our argument for understanding the origins of novelty, new knowledge, and decision making under uncertainty.
Chapter
The idea of a production-based design for computing came originally from the writings of Post (1943), who proposed a production rule model as a formal theory of computation. The main construct of this theory was a set of rewrite rules for strings. It is also closely related to the approach taken by Markov algorithms (Markov 1954) and, like them, is equivalent in power to a universal Turing machine.
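To make the construct concrete, the core of such a rewriting system fits in a few lines. The sketch below is illustrative only, with a hypothetical function and rule set rather than anything from the chapter: an ordered list of rewrite rules is applied to the leftmost match until no rule fires.

```python
# A minimal sketch of a Markov-style string-rewriting interpreter; the
# function and the unary-addition rules are illustrative examples, not
# taken from the chapter.

def rewrite(string, rules, max_steps=10_000):
    """Apply the first matching rule to the leftmost occurrence,
    restarting from the top of the rule list, until no rule fires."""
    for _ in range(max_steps):
        for pattern, replacement in rules:  # rules are tried in order
            if pattern in string:
                string = string.replace(pattern, replacement, 1)
                break                       # restart the rule scan
        else:
            return string                   # no rule matched: halt
    raise RuntimeError("step limit reached without halting")

# Unary addition: erasing '+' joins the two runs of 1s.
print(rewrite("111+11", [("+", "")]))  # -> "11111", i.e. 3 + 2 = 5
```

Universality lies in the rule set, not the interpreter: with a suitable alphabet and ordered rules, such a rewriting system can simulate any Turing machine, which is the equivalence the chapter refers to.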
Preprint
These proceedings contain abstracts and position papers for the work to be presented at the fourth Logic and Practice of Programming (LPOP) Workshop. The workshop will be held as a hybrid event in Dallas, Texas, USA, on October 13, 2024, in conjunction with the 40th International Conference on Logic Programming (ICLP). The focus of the workshop is integrating reasoning systems for trustworthy AI, in particular integrating diverse models of programming with rules and constraints.
Article
Full-text available
The current paper highlights the need for a unified approach that considers both micro- and macro-cognition in cognitive research, suggesting that integrating these perspectives can lead to a more complete interpretation of theatrical texts. The work hypothesizes that the model of the translation process folds into two constructs of cognitive efficacy (CE): a macrocognitive interface and a microcognitive interface, which together separate the conceptual and operational analysis of the translator's performance in transferring the source text (ST) and grounding the mind in the target text (TT). The methodological consequences and the practical part are applied to theatrical texts to follow this model of the translation process and to trace how it affects the scope of cognitive translation studies (CTS) as framed by Martín and de León (2021). The study concluded that macro-cognitive research deals with the overall performance and ecology of task execution within the translator's cognitive system, while micro-cognitive research focuses on invariant processes and often employs binary linguistic levels. The emphasis on internal validity and the ability to draw causal inferences is noted, as is the convenience and utility of large samples for analyzing the complex relationships among the variables of cognitive modeling, mental architecture, and knowledge sharing.
Article
Full-text available
Among the growing body of mobile-assisted language learning (MALL) studies, studies of vocabulary applications (apps) make up an unprecedented proportion. However, there has been insufficient discussion of the fundamental language learning theories and pedagogies underlying app design. This study aims to fill this gap by conceptualising and theorising the design of English vocabulary learning apps, with a focused analysis of their task features in relation to vocabulary learning strategies (VLS), language learning theories and pedagogies. Four Chinese–English vocabulary learning apps were purposively sampled. The results show that the design of the four evaluated apps incorporates principles from four distinct language learning theories: behaviourism, input-based emergentism, sociocultural theory, and information and cognitive processing theories. The four apps’ tasks and user interfaces were analysed to generate codes for task classification. These task-related codes were further classified under the categories of Schmitt’s VLS taxonomy and under pedagogical categories based on the definitions of the pedagogies, the VLS codes and the intended classroom practice. Vocabulary pedagogies were then linked with theories to shed light on the theoretical foundations of the apps’ learning tasks and VLS, as well as on the pedagogical prospects of vocabulary learning apps.
Thesis
The current research was carried out with the general purpose of applying WebQuests in teaching third-grade elementary science. In nature, the research was developmental and applied; epistemologically, it took an interpretive orientation, carried out with a qualitative approach using the research synthesis method of Sandelowski and Barroso (2007) and following the PRISMA guidelines. The field of research included all studies on WebQuests available in the form of research articles, specialized university theses, and specialized books, published between 2010 and 2022 in domestic and foreign research. The instrument was a checklist with two dimensions: a bibliographic dimension (research title, document type, researcher, and year) and a methodological dimension (research method and research community). After implementing the steps of Sandelowski and Barroso's model, 49 scientific documents were selected as the final data set of the research. The documents extracted through this refinement were then interpreted and analyzed as qualitative research information through a three-stage coding process based on the grounded-theory framework of Strauss and Corbin's (2011) model. To answer the first research question, the researcher identified eight main organizing themes: "human resources", "operational resources", "structural resources", "theoretical resources", "organizational resources", "technological resources", "metacognition resources", and "educational resources". To answer the second research question, the qualitative model of the research was arranged as a paradigm model, consisting of the sub- and main categories of the research organized around the core phenomenon, i.e. "the application of WebQuests in teaching third-grade elementary science"; finally, the conceptual model of the research was depicted according to this paradigm model.
Book
Full-text available
The time has come. Demands for deeper ideas and for integration across (sub)disciplines are historically common in the human sciences, but they have grown louder and more urgent in light of the current replication crisis in psychology and related fields. Many contributions to the ongoing debate about the future of the human sciences have emphasized the need for better foundations and transdisciplinary syntheses, together with essential methodological reform. Good-faith dialogues across the cognitive-social divide will help meet this demand directly. The content of this book aims to help elucidate these questions and to lay out research routes and points of convergence. Each chapter links to the next in a transdisciplinary fashion, so that readers may free their minds toward an understanding of the Cognitive Society.
Chapter
Full-text available
Safe and efficient traffic requires that road users interact and cooperate with each other. Especially in situations which are not explicitly regulated and in which the right of way is not clearly defined, it is of great importance that road users are able to communicate their own intentions and understand the communication and cooperation behaviour of the other involved road users. When automated vehicles enter the current traffic system, their ability to fit into the system, that is, their ability to communicate and cooperate, will determine their success. Therefore, the development of cooperatively interacting, automated vehicles requires detailed knowledge about human cooperation behaviour in traffic, which can only be obtained using appropriate methods and measures. By focusing on road narrowings and lane changing, this chapter gives an overview of how to measure cooperation between road users, considering methods for data collection, subjective and objective measures of cooperation as well as behaviour modelling, to support systematic research on cooperation in road traffic. This overview is extended by findings from studies conducted within CoInCar, including results on factors influencing human behaviour in cooperative situations, in either a manual or an automated setting, and initial findings from modelling the cognitive processes underlying cooperative driving behaviour.