Book

The Atomic Components of Thought

Authors:
... Cognitive models (Newell, 1994; Laird, 2012; Anderson et al., 1998; Liu et al., 2006; Tattegrain-Veste et al., 1996; Krajzewicz, 2005; Cacciabue et al., 2010b; Witt et al., 2019) Input information processing Sensor models (Witt et al., 2019; Wang et al., 2020a; Bellet et al., 2018; Massaro, 1979); Top-down/bottom-up simulation (Denk et al., 2020; Horrey et al., 2006; Wickens et al., 2008) Recall Limited storage capacity; Selection of actions among alternatives Inference Cognitive models (Newell, 1994; Laird, 2012; Anderson et al., 1998; Liu et al., 2006; Tattegrain-Veste et al., 1996; Krajzewicz, 2005; Cacciabue et al., 2010b; Witt et al., 2019) Physical coordination Noise models ...
Thesis
Full-text available
In recent years, the automotive sector has seen a steady increase in the introduction of new Advanced Driving Assistance Systems (ADAS). This trend toward more complex systems will become even more pronounced with regard to Highly Automated Driving (HAD). In addition to the expected benefits of ADAS and HAD (increased comfort, efficiency, and safety), it is important to eliminate risks as much as possible to ensure that the system does not introduce new critical situations or road traffic accidents. Due to the increasing interaction of systems with the driver and their environment, it is no longer sufficient to investigate the system in isolation. There is also a need to investigate how the driver and the environment interact with the new system. Furthermore, the functional scope of the systems is expanding to cover entire application domains, such as highways and in the future rural and urban areas. This results in a significant increase in the number of parameters and scenarios that require testing for approval of these new technologies. This means that the scenario space to be analyzed is constantly expanding, which poses increasing problems for safety assessments. The expected number of test kilometers required to validate HAD is too large to be cost- and time-effective through real-world testing. This is why virtual safety assessments are necessary. In this context, the present thesis investigates whether virtual safety assessments can be efficiently performed today through Monte Carlo simulations using cognitive driver behavior models. The body of the thesis consists of four articles that consider different aspects of the safety assessment. Article 1 derives the cognitive core functions that driver behavior models must implement to display the causes and mechanisms of human error. This way, driver behavior models are able to map all hazard levels of realistic traffic, including normal traffic, critical situations, and road traffic accidents. 
By mapping the interactions of road users, cognitive models thus form the basis for the virtual safety assessment of ADAS and HAD systems. Due to the lack of existing cognitive driver behavior models that implement these cognitive core functions, the Driver Reaction Model (DReaM), a new driver behavior model, was developed and continuously improved as part of this work. Article 2 outlines a calibration and validation strategy, using DReaM as an example, to investigate whether driver behavior models are suitable for safety assessments, mapping all levels of realistic traffic. Subsequently, Article 3 estimates the time required to perform Monte Carlo studies for safety assessments, again using DReaM as an example. To this end, optimistic and pessimistic estimates are generated based on the minimum number of runs (MNR) required to simulate an exemplary traffic scenario. In summary, Articles 1–3 examine the quality of driver behavior models and the time required to perform safety-related studies. This lays the foundation for determining whether efficient safety assessments are feasible. Finally, Article 4 assesses an urban Automatic Emergency Braking (AEB) system using DReaM as an example, outlining the overall virtual assessment methodology. Based on Article 4 and the findings of Articles 1–3, minimal requirements are defined for improving and standardizing the virtual safety assessment process. These requirements aim to improve the reliability of safety assessments and enhance the comparability of results across various studies and models.
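The thesis does not reproduce its MNR estimation here, but the general idea of sizing a Monte Carlo study can be sketched with the standard confidence-interval bound. This is a generic sketch under assumed values, not the method used in the thesis; the function name, the 95% z-value, and the numbers are illustrative.

```python
import math

def min_runs(sigma: float, margin: float, z: float = 1.96) -> int:
    """Minimum number of Monte Carlo runs so that the confidence interval
    on the estimated mean has half-width <= margin, given an (assumed
    known) outcome standard deviation sigma; z = 1.96 gives ~95%."""
    return math.ceil((z * sigma / margin) ** 2)

# e.g. an outcome standard deviation of 1.0 and a desired half-width of 0.1
runs = min_runs(sigma=1.0, margin=0.1)  # -> 385 runs
```

Halving the acceptable margin quadruples the required number of runs, which is why the scenario space described above becomes expensive to cover so quickly.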
... One type of computational framework that holds promise for such integration is the class of cognitive architectures (Anderson & Lebiere, 1998; A. Newell, 1990), which affords an alternative modeling approach to early theories of motor programs from the early experimental psychology literature (e.g., Henry & Rogers, 1960; Klapp, 2010; Schmidt, 1975). ...
... Newell, 1990), which affords an alternative modeling approach to early theories of motor programs from the early experimental psychology literature (e.g., Henry & Rogers, 1960; Klapp, 2010; Schmidt, 1975). Architectures specify an intelligent system's underlying infrastructure, describe how the mind represents and organizes information into larger-scale mental structures, and computationally simulate the functional processes that operate on such structures (Anderson et al., 2008; Anderson & Lebiere, 1998; Langley et al., 2009). They are useful to experimental researchers in so far as they provide a framework for further understanding how seemingly separate neurocognitive processes are brought together to yield complex behavior. ...
... It includes a symbolic production system with an associative memory, in addition to subsymbolic mechanisms that enable the architecture to process information and implement different learning strategies (Anderson et al., 2004). At the core of the architecture is a central procedural module which recognizes patterns in other module buffers and recommends cognitive and motor actions (Anderson & Lebiere, 1998). Such actions are executed in response to a particular condition as part of production rules in the procedural module (Anderson et al., 2004). ...
Article
Full-text available
Timing plays a critical role when building up motor skill. In this study, we investigated and simulated human skill learning in a simplified variant of the Space Fortress video game named Auto Orbit, which has a strong timing component. Our principal aim was to test whether a computational model designed to simulate keypress actions repeated at slow rates (>500 ms) could also simulate human learning with repeated keypress actions taking place at very fast rates (≤500 ms). The main finding was that increasing speed stress forced human participants to qualitatively switch their behavior from a cognitively controlled strategy to an inherently rhythmic motor strategy. We show how the periodic tapping motor extension of the Adaptive Control of Thought-Rational (ACT-R) architecture can replicate such rhythmic patterns of keypresses in two different computational models of human learning. The first model implements streamed motor actions across hands that are temporally decoupled, while the second model implements a coupled motor strategy in which actions from both hands are executed relative to the same periodic motor clock. Different subsets of subjects correspond to these two models. Our modeling simulations integrate previous psychological and motor control findings within a single cognitive architecture, and successfully replicate human behavioral patterns across a range of experimental measures at fast speed.
... Each chunk is associated with a scalar value, called activation, which represents the log odds of a chunk being needed in the future (Anderson & Lebiere, 1998). Similarly, each production is associated with a utility value that represents the expected future rewards associated with the execution of that production and is analogous to Q-values in Reinforcement Learning (RL: Niv, 2009). ...
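The activation quantity described in this excerpt has a standard base-level form in ACT-R, driven by the recency and frequency of a chunk's past use. A minimal sketch, with the conventional decay parameter d = 0.5 assumed (the function name is illustrative):

```python
import math

def base_level_activation(lags, d=0.5):
    """B_i = ln( sum_k t_k^(-d) ), where each t_k is the time elapsed
    since one past use of the chunk. More uses, and more recent uses,
    both raise activation -- i.e., the log odds of being needed."""
    return math.log(sum(t ** (-d) for t in lags))
```

Frequency and recency fall out directly: a chunk used at lags 1, 2, and 3 is more active than one used only once, and a single use one step ago beats a single use ten steps ago.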
... This violates established findings in neuroscience and is incompatible with the EVC model. It is also a major departure from early versions of the ACT-R architecture (e.g., Anderson & Lebiere, 1998), in which goals were associated with specific values, and values were explicitly used to rank productions on the basis of a cost−benefit analysis. This older framework was, in principle, much more compatible with the EVC theory, as it explicitly selected strategies based on a cost−benefit analysis of goal values and the time needed to achieve them. ...
... This older framework was, in principle, much more compatible with the EVC theory, as it explicitly selected strategies based on a cost−benefit analysis of goal values and the time needed to achieve them. In this older framework, productions were selected not on the basis of utility but on the basis of an estimated quantity expressed as pG − C, where p is the probability of achieving that goal, G is the goal's value, and C is the time cost needed to achieve it (Anderson & Lebiere, 1998). ...
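The pG − C scheme described in this excerpt is simple to state computationally. A minimal sketch; the production names and numeric values are invented for illustration:

```python
def expected_gain(p: float, G: float, C: float) -> float:
    """E = p*G - C: probability of achieving the goal, times the goal's
    value, minus the time cost of the production (Anderson & Lebiere, 1998)."""
    return p * G - C

def select_production(productions: dict, G: float) -> str:
    """Conflict resolution: pick the competing production with the
    highest expected gain for the current goal value G."""
    return max(productions,
               key=lambda name: expected_gain(productions[name][0], G,
                                              productions[name][1]))

# (p, C) per production -- hypothetical values
productions = {"fast-guess": (0.6, 2.0), "careful-plan": (0.9, 6.0)}
```

With a low-value goal (G = 10) the cheap, riskier production wins (0.6·10 − 2 = 4 vs. 0.9·10 − 6 = 3); raising the goal's value to G = 20 flips the choice. This sensitivity of strategy choice to goal value is exactly what the excerpt contrasts with later utility-based selection.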
Article
Motivation is the driving force that influences people's behaviors and interacts with many cognitive functions. Computationally, motivation is represented as a cost−benefit analysis that weighs efforts and rewards in order to choose the optimal actions. Shenhav and colleagues proposed an elegant theory, the Expected Value of Control (EVC), which describes the relationship between cognitive efforts, costs, and rewards. In this paper, we propose a more fine‐grained and detailed motivation framework that incorporates the principles of EVC into the ACT‐R cognitive architecture. Specifically, motivation is represented as a specific slot in the Goal buffer with a corresponding scalar value, M , that is translated into the reward value R t that is delivered when the goal is reached. This implementation is tested in two models. The first model is a high‐level model that reproduces the EVC predictions with abstract actions. The second model is an augmented version of an existing ACT‐R model of the Simon task. The motivation mechanism is shown to permit optimal effort allocation and reproduce known phenomena. Finally, the broader implications of our mechanism are discussed.
... The framework of this architecture consists of multiple modules that replicate the basic functionalities of the human brain involved in performing a cognitive task. For example, ACT-R 4.0 (Anderson & Lebiere, 1998) included modules like Declarative Memory, Procedural Memory, Perception and Action (see Fig. 1). The actions required to complete a particular task are not only executed by these specific modules but also pass through a series of stages (Conflict Resolution, Declarative Retrieval, and Production Execution) to mimic the cognitive processing stages that humans are assumed to undergo while performing a cognitive task. ...
... But evaluating an architecture designed to imitate the complex functionalities of the brain using only a few factors, such as response time and accuracy, failed to properly constrain the architecture (Borst et al., 2015). The modules and stages of the architecture until this version (Anderson & Lebiere, 1998) were evaluated solely on the basis of behavioural data. ...
Thesis
Full-text available
Cognitive models in the ACT-R architecture process information in multiple stages, mimicking the different stages that humans undergo while performing cognitive tasks. To evaluate these models' stages, the cognitive stages underlying the tasks are predicted via multivariate pattern analysis over features extracted from the brain data of multiple participants and compared with the models' stages. However, if the techniques used to extract the features do not account for inter-subject alignment issues and the nonlinear dynamics of brain data, the resulting features might not properly represent the neural activities. To investigate these two issues, two feature extraction techniques, MCCA and DMCCA, were applied to the EEG dataset of a recent cognitive study that used PCA to extract features from the brain data of 26 participants. Results from the two multivariate pattern analyses, stimuli classification and cognitive stage prediction, showed potential in both techniques for handling inter-subject alignment issues, and the classifier results showed DMCCA's ability to find nonlinear patterns. Further investigations, applying MCCA to high-density EEG data and updating the DMCCA architecture, could give better evidence supporting the applicability of the two techniques for inter-subject alignment and nonlinearity, respectively.
... In this work, we investigate the capabilities of LLMs to predict human action strategies in two sequential decision-making tasks, and compare their performance with a cognitive instance-based learning (IBL) model (Gonzalez, Lerch, and Lebiere 2003a). Based on the theory of experience-based decisions, IBL models simulate human decision-making by incorporating the ACT-R memory mechanisms (Anderson and Lebiere 2014). These models have proven effective in emulating human decisions in various tasks, including gambling choices (Gonzalez and Dutt 2011; Hertwig 2015), complex dynamic resource allocation (Somers, Oltramari, and Lebiere 2020), cybersecurity, and predicting the actions of other RL agents (Nguyen and Gonzalez 2022). ...
... Cognitive Modeling and Human Behavior. Cognitive architectures like ACT-R have demonstrated success in achieving human-level reasoning with limited training instances and in capturing cognitive biases in various decision-making tasks (Anderson and Lebiere 2014; Gonzalez, Lerch, and Lebiere 2003a; Erev et al. 2010; Thomson et al. 2015). Lebiere et al. (2013) showed that the cognitive IBL model predicts whether a person will be risky or risk-averse based on previous trial feedback. ...
Article
Large Language Models (LLMs) excel in tasks from translation to complex reasoning. For AI systems to help effectively, understanding and predicting human behavior and biases is essential. However, it remains an open question whether LLMs can achieve this goal. This paper addresses this gap by leveraging the reasoning and generative capabilities of LLMs to predict human behavior in two sequential decision-making tasks. These tasks involve balancing between exploratory and exploitative actions and handling delayed feedback, which is essential for simulating real-life decision processes. We compare the performance of LLMs with a cognitive instance-based learning (IBL) model, which imitates human experiential decision-making. Our findings indicate that LLMs excel at rapidly incorporating feedback to enhance prediction accuracy. In contrast, the IBL model better accounts for human exploratory behaviors and effectively captures loss aversion bias — the tendency to choose a sub-optimal goal with fewer step-cost penalties rather than exploring to find the optimal choice, even with limited experience. The results highlight the benefits of integrating LLMs with cognitive architectures, suggesting that this synergy could enhance the modeling and understanding of complex human decision-making patterns.
... In this work, we investigate the capabilities of LLMs, specifically open-source models, in predicting human action strategies in two sequential decision-making tasks, and compare their performance with a cognitive instance-based learning (IBL) model (Gonzalez, Lerch, and Lebiere 2003a). Grounded in the theory of decisions from experience, IBL models simulate human decision-making by incorporating mechanisms and limitations from the ACT-R cognitive architecture (Anderson and Lebiere 2014). These models have proven effective in emulating human decisions in various tasks, including gambling choices (Gonzalez and Dutt 2011; Hertwig 2015), complex dynamic resource allocation (Somers, Oltramari, and Lebiere 2020), cybersecurity, and predicting the actions of other RL agents (Nguyen and Gonzalez 2022). Our goal is to understand whether LLMs and the cognitive IBL model can predict human action strategies and capture human biases, such as loss aversion, characterized by the tendency to choose sub-optimal goals with fewer step-cost penalties rather than exploring optimal choices. ...
... Cognitive Modeling and Human Behavior. Cognitive architectures like ACT-R have demonstrated successful reasoning with limited training instances through experience (Anderson and Lebiere 2014; Gonzalez, Lerch, and Lebiere 2003a). They achieve human-level performance and capture cognitive biases in various decision-making tasks (Lebiere et al. 2013; Erev et al. 2010; Thomson et al. 2015). ...
Preprint
Full-text available
Large Language Models (LLMs) have demonstrated their capabilities across various tasks, from language translation to complex reasoning. Understanding and predicting human behavior and biases are crucial for artificial intelligence (AI) assisted systems to provide useful assistance, yet it remains an open question whether these models can achieve this. This paper addresses this gap by leveraging the reasoning and generative capabilities of the LLMs to predict human behavior in two sequential decision-making tasks. These tasks involve balancing between exploitative and exploratory actions and handling delayed feedback, both essential for simulating real-life decision processes. We compare the performance of LLMs with a cognitive instance-based learning (IBL) model, which imitates human experiential decision-making. Our findings indicate that LLMs excel at rapidly incorporating feedback to enhance prediction accuracy. In contrast, the cognitive IBL model better accounts for human exploratory behaviors and effectively captures loss aversion bias, i.e., the tendency to choose a sub-optimal goal with fewer step-cost penalties rather than exploring to find the optimal choice, even with limited experience. The results highlight the benefits of integrating LLMs with cognitive architectures, suggesting that this synergy could enhance the modeling and understanding of complex human decision-making patterns.
... The first is the schema approach, which was mainly developed by Köpcke and colleagues (Köpcke, 1988, 1993; Köpcke et al., 2021). The second kind of approach is computational and has seen the implementation of many different algorithms and architectures, from analogy to deep learning (e.g., Anderson and Lebiere, 1998; Hahn and Nakisa, 2000; Daelemans, 2002; Wulf, 2002; Daelemans et al., 2007; Rosen, 2022; Buch, 2011; McCurdy et al., 2020a; Beser, 2021; Dankers et al., 2021). ...
... Computational approaches have implemented a variety of architectures (see, for example, Anderson and Lebiere, 1998; Hahn and Nakisa, 2000; Daelemans, 2002; Wulf, 2002; Daelemans et al., 2007; Rosen, 2022; Buch, 2011; McCurdy et al., 2020a; Beser, 2021; Dankers et al., 2021). While especially the more recent deep learning models perform quite successfully in general, they have certain drawbacks. ...
Article
There is an ongoing debate on how speakers and listeners process and interpret information in a morphological system that is very complex and not very transparent. A well-known test case is the German nominal number system. In this paper we employ discriminative learning (e.g., Ramscar & Yarlett, 2007; Baayen et al., 2011, 2019) to test whether discriminative learning networks can be used to better understand the processing of German number. We analyse behavioral data obtained from a patient with primary progressive aphasia (Domahs et al., 2017), and the unimpaired system. We test a model that implements the traditional cues borrowed from the schema approach (Köpcke, 1988, 1993; Köpcke et al., 2021), and compare it to a model that uses segmental and phonotactic information only. Our results for the unimpaired system demonstrate that a model based on only biphones as cues is better able to predict the number of a given word-form than a model using structural phonological cues. We also test whether a discriminative learning model can predict the number decisions by the aphasic patient. The results demonstrate that a biphone-based discriminative model trained on the patient's responses is superior to a structure-based model in approximating the patient's behavior.
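Discriminative learning of the kind used in this line of work (Ramscar & Yarlett, 2007; Baayen et al., 2011) is error-driven. A minimal single-outcome Rescorla-Wagner sketch; the biphone cues and the learning rate are made-up values for illustration, not taken from the paper:

```python
def rw_update(weights, cues, outcome_present, lam=1.0, alpha=0.1):
    """One Rescorla-Wagner step: all currently active cues share the same
    prediction error (target minus the summed weight of active cues)."""
    v = sum(weights.get(c, 0.0) for c in cues)
    delta = alpha * ((lam if outcome_present else 0.0) - v)
    for c in cues:
        weights[c] = weights.get(c, 0.0) + delta
    return weights

# repeatedly pair two (hypothetical) biphone cues with the outcome "plural"
w = {}
for _ in range(50):
    rw_update(w, ["#b", "ba"], outcome_present=True)
support = sum(w.values())  # summed support for "plural"; approaches lam = 1.0
```

Because the error is shared, co-present cues compete for predictive value, which is the property that lets biphone cues discriminate number classes in models like the one described above.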
... Given that cognitive architectures have been developed to represent an integrated view of the cognitive capacities of the human mind (Anderson et al. 2004; Anderson and Lebiere 2014), previous research has explored how well models align with humans in tasks involving feedback delays (Walsh and Anderson 2011, 2014). In particular, TD credit assignment methods have been incorporated into cognitive architectures to emulate how humans process feedback delays in sequential decision-making tasks (Fu and Anderson 2006). ...
... In particular, Instance-Based Learning (IBL) models that rely on the theoretical principles of IBLT (Gonzalez, Lerch, and Lebiere 2003) have been used to emulate human binary choices (Gonzalez and Dutt 2011) and decisions in more complex dynamic resource allocation tasks such as the Internet of Things (Somers, Oltramari, and Lebiere 2020), cybersecurity, multistate gridworld tasks (Gonzalez 2020, 2021), and multi-agent settings that require real-time interactivity between models and humans (Nguyen, Phan, and Gonzalez 2023a). IBLT provides a single general algorithm and mathematical formulation for memory retrieval that is based on the well-known ACT-R cognitive architecture (Anderson and Lebiere 2014). It has emerged as a comprehensive theory of the cognitive process by which humans make decisions based on experience in dynamic environments (Gonzalez 2023; Gonzalez and Dutt 2011; Hertwig 2015; Nguyen, Phan, and Gonzalez 2023b). ...
Article
Temporal credit assignment is the process of distributing delayed outcomes to each action in a sequence, which is essential for learning to adapt and make decisions in dynamic environments. While computational methods in reinforcement learning, such as temporal difference (TD), have shown success in tackling this issue, it remains unclear whether these mechanisms accurately reflect how humans handle feedback delays. Furthermore, cognitive science research has not fully explored the credit assignment problem in humans and cognitive models. Our study uses a cognitive model based on Instance-Based Learning Theory (IBLT) to investigate various credit assignment mechanisms, including equal credit, exponential credit, and TD credit, using the IBL decision mechanism in a goal-seeking navigation task with feedback delays and varying levels of decision complexity. We compare the performance and process measures of the different models with human decision-making in two experiments. Our findings indicate that the human learning process cannot be fully explained by any of the mechanisms. We also observe that decision complexity affects human behavior but not model behavior. By examining the similarities and differences between human and model behavior, we summarize the challenges and opportunities for developing learning agents that emulate human decisions in dynamic environments.
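The equal- and exponential-credit mechanisms compared in this study are easy to sketch directly. The discount factor gamma and step counts below are illustrative assumptions, and TD credit is omitted for brevity:

```python
def equal_credit(outcome: float, n_steps: int) -> list:
    """Spread a delayed outcome evenly over every action in the sequence."""
    return [outcome / n_steps] * n_steps

def exponential_credit(outcome: float, n_steps: int, gamma: float = 0.8) -> list:
    """Give exponentially more credit to actions closer in time to the
    outcome; weights are normalized so the full outcome is distributed."""
    weights = [gamma ** (n_steps - 1 - t) for t in range(n_steps)]
    total = sum(weights)
    return [outcome * w / total for w in weights]
```

Both schemes distribute the same total outcome; they differ only in how sharply credit concentrates on the final actions, which is the dimension along which the study compares model and human behavior.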
... We briefly summarize the cognitive model, while details can be found in Cranford et al. (2021). The cognitive model was developed in the ACT-R cognitive architecture (Anderson & Lebiere, 1998; Anderson et al., 2004) and decisions are made according to Instance-Based Learning Theory (IBLT; Gonzalez, 2013; Gonzalez, Lerch, & Lebiere, 2003). According to IBLT, decisions in dynamic environments are made by generalizing across past experiences, learned through feedback from repeated interactions. ...
... In the simplest case, where the outcomes are numerical and the similarity function is linear, as is the case here, the process simplifies to a weighted average by the probability of retrieval. The retrieval probability is based on the activation strength of instances in memory, which is computed via standard ACT-R equations (see Anderson & Lebiere, 1998; Anderson et al., 2004). ...
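The weighted-average simplification described in this excerpt can be made concrete. A minimal sketch of retrieval probability (a Boltzmann softmax over instance activations, as in standard ACT-R) and the resulting blended value; the temperature value is an assumed parameter:

```python
import math

def retrieval_probabilities(activations, temperature=0.25):
    """P_i = exp(A_i / t) / sum_j exp(A_j / t): instances with higher
    activation are proportionally more likely to be retrieved."""
    exps = [math.exp(a / temperature) for a in activations]
    z = sum(exps)
    return [e / z for e in exps]

def blended_value(outcomes, activations, temperature=0.25):
    """Weighted average: each stored numerical outcome weighted by its
    instance's probability of retrieval."""
    probs = retrieval_probabilities(activations, temperature)
    return sum(p * x for p, x in zip(probs, outcomes))
```

With equal activations the blend is a plain mean; raising one instance's activation pulls the blended value toward that instance's outcome, which is how recency and frequency shape the model's decisions.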
... The CogSNet mechanism is inspired by the adaptive control of thought-rational (ACT-R) cognitive architecture [17], [18], [19], which has been widely used in cognitive science to model human declarative memory. ACT-R emphasizes the role of memory traces, where the strength of a memory depends on factors such as recency, frequency of activation, and context. ...
Article
Full-text available
Temporality, a crucial characteristic in the formation of social relationships, has been used to quantify the long-term time effects of networks in link prediction models, though ignoring the heterogeneity of time effects across different time scales. In this work, we propose a novel approach to link prediction in temporal networks, extending existing methods with a cognitive mechanism that captures the dynamics of the interactions. Our approach computes the weight of the edges and their change over time, similar to memory traces in the human brain, by simulating the process of forgetting and strengthening connections depending on the intensity of interactions. We utilized five ground-truth datasets, which were used to predict social ties, missing events, and potential links. We found: 1) the cognitive mechanism enables more accurate capture of the heterogeneity of the temporal effect, leading to an average precision improvement of 9% compared to baselines with competitive area under curve (AUC); 2) the local structure and synchronous agent behavior contribute differently to different types of datasets; and 3) appropriately increasing the time intervals, which may reduce the negative impact of noise when dividing time windows to calculate the behavioral synchrony of agents, is effective for link prediction tasks.
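The forgetting-and-strengthening dynamic described in the abstract can be illustrated with a generic exponential-forgetting edge weight. This is a simplified stand-in, not the exact CogSNet update rule; the decay rate lam and reinforcement fraction mu are invented values:

```python
import math

def edge_weight(event_times, lam=0.1, mu=0.3):
    """Track one edge's weight over a sorted sequence of interaction times:
    the weight decays exponentially between interactions (forgetting) and
    is pushed toward 1 by fraction mu at each interaction (strengthening)."""
    w, t_prev = 0.0, 0.0
    for t in event_times:
        w *= math.exp(-lam * (t - t_prev))  # forgetting since last event
        w += mu * (1.0 - w)                 # strengthening on interaction
        t_prev = t
    return w
```

The same number of interactions yields a stronger tie when they are frequent than when they are spread out, which is the heterogeneity of temporal effects that the cognitive mechanism is meant to capture.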
... Group practices, dictation, spelling games, and constant revision sessions help students develop orthographic memory and enhance spelling over time (Goswami, 2002). Regular use and practice of word forms allow learners to develop spelling through reinforcement, which conforms with skill acquisition theory, in which practice is an essential facet of acquiring a skill (Anderson & Lebiere, 2014). Goswami (2002) also pointed out that through daily practice, learners are highly likely to retain their spelling knowledge and, at the same time, put it into practice. ...
Article
The mental lexicon is essential for language processing, delineating the structural and conceptual relationships between words. While considerable research has focused on phonological, semantic, and morphological aspects, the orthographic component has received less attention. This review aims to comprehensively analyze the orthographic subcomponent of the mental lexicon, examining its structure and storage. We define orthography as the written form of language and emphasize its critical role in language development, particularly in reading and writing. The relationship between orthography and phonology is explored, highlighting that phonological knowledge typically precedes the acquisition of orthographic knowledge. Furthermore, we analyze empirical studies regarding orthographic representations' organization in alphabetic languages and logographic systems such as Chinese. Our findings suggest that while the orthographic component significantly contributes to language processing in alphabetic languages, its role in logographic languages remains less defined. We also discuss the implications of orthographic representation for language acquisition and advocate for further research in this area. Lastly, we recommend that educators integrate orthographic instruction and metacognitive strategies into their teaching practices to enhance spelling skills and improve literacy outcomes for learners.
... In this section, we describe ACT-R informally. For a detailed introduction to the theory, we refer to [3][4][5][30]. Adaptive Control of Thought-Rational (ACT-R) is a popular cognitive architecture that is used in many cognitive models to describe and explain human cognition. There have been applications in language learning models [29] and in improving human-computer interaction through the predictions of a cognitive model [8]. ...
Preprint
Computational psychology aims to explain human cognition through computational models of cognitive processes. The cognitive architecture ACT-R is a popular choice for developing such models. Although ACT-R has a well-defined psychological theory and has been used to explain many cognitive processes, two problems make it hard to reason formally about its cognitive models: first, ACT-R lacks a formalization of its underlying production rule system, and secondly, there are many different implementations and extensions of ACT-R whose technical artifacts complicate formal reasoning even more. This paper describes a formal operational semantics - the very abstract semantics - that abstracts from as many technical details as possible, keeping it open to extensions and different implementations of the ACT-R theory. In a second step, this semantics is refined to define some of its abstract features that are found in many implementations of ACT-R - the abstract semantics. It concentrates on the procedural core of ACT-R and is suitable for analysis of the transition system, since it still abstracts from details like timing, the sub-symbolic layer, or conflict resolution. Furthermore, a translation of ACT-R models to the programming language Constraint Handling Rules (CHR) is defined. This makes the abstract semantics an executable specification of ACT-R. CHR has been used successfully to embed other rule-based formalisms like graph transformation systems or functional programming. There are many results and tools that support formal reasoning about and analysis of CHR programs. The translation of ACT-R models to CHR is proven sound and complete w.r.t. the abstract operational semantics of ACT-R. This paves the way for analysis of ACT-R models through CHR. Therefore, to the best of our knowledge, our abstract semantics is the first formulation of ACT-R suitable for both analysis and execution.
... ACT-R (Anderson & Lebiere, 1998;Anderson et al., 2004) provides a computational implementation of the CMC informed by the rational analysis of cognition (Anderson, 1990) that assumes that our cognitive mechanisms and representations have adapted to the statistical structure of our environment. This assumption enables the development of models based on the cognitive architecture that abstracts over details of our personal environment to generate behaviors that respond to the overall regularities of our information landscape. ...
Article
Some of the required characteristics for a true machine theory of mind (MToM) include the ability to (1) reproduce the full diversity of human thought and behavior, (2) develop a personalized model of an individual with very limited data, and (3) provide an explanation for behavioral predictions grounded in the cognitive processes of the individual. We propose that a certain class of cognitive models provide an approach that is well suited to meeting those requirements. Being grounded in a mechanistic framework like a cognitive architecture such as ACT‐R naturally fulfills the third requirement by mapping behavior to cognitive mechanisms. Exploiting a modeling paradigm such as instance‐based learning accounts for the first requirement by reflecting variations in individual experience into a diversity of behavior. Mechanisms such as knowledge tracing and model tracing allow a specific run of the cognitive model to be aligned with a given individual behavior trace, fulfilling the second requirement. We illustrate these principles with a cognitive model of decision‐making in a search and rescue task in the Minecraft simulation environment. We demonstrate that cognitive models personalized to individual human players can provide the MToM capability to optimize artificial intelligence agents by diagnosing the underlying causes of observed human behavior, projecting the future effects of potential interventions, and managing the adaptive process of shaping human behavior. Examples of the inputs provided by such analytic cognitive agents include predictions of cognitive load, probability of error, estimates of player self‐efficacy, and trust calibration. Finally, we discuss implications for future research and applications to collective human–machine intelligence.
... Memory retrieval in ACT-R is competitive. The accessibility of an instance from memory depends on its activation value relative to the activation of the other instances stored in memory (25). The activation value of an instance depends on three components: base-level activation (B_i), spreading activation (W_i), and mismatch penalty (MP_i). ...
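The three activation components named in this excerpt follow the standard ACT-R activation equation: base-level activation decays as a power law over past uses, spreading activation adds contextual support, and a mismatch penalty is subtracted for partially matching retrieval cues. A minimal sketch in Python; the function names and parameter values here are illustrative, not taken from the cited study:

```python
import math

def base_level(ages, d=0.5):
    """Base-level activation B_i = ln(sum of t^-d over past uses),
    where each t is the time since a past use and d is the decay rate."""
    return math.log(sum(t ** -d for t in ages))

def activation(ages, spreading=0.0, mismatch=0.0):
    """Total activation: base level plus spreading activation from the
    current context, minus a penalty for partially matching cues."""
    return base_level(ages) + spreading - mismatch

# A chunk used 1, 10, and 100 time units ago, with a small context boost
# and a small mismatch penalty
a = activation([1.0, 10.0, 100.0], spreading=0.3, mismatch=0.1)
```

With the default decay d = 0.5, a chunk used once exactly one time unit ago has base-level activation ln(1) = 0, which makes it easy to see how the other two components shift accessibility up or down.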
Article
Full-text available
Introduction Seasonal influenza poses significant societal costs, including illness, mortality, and reduced work productivity. Vaccination remains the most effective strategy for preventing the disease, yet vaccination rates in the United States fall below 50% for adults. Understanding the factors influencing vaccination decisions is crucial for designing interventions to improve uptake. This study investigates how personal experiences and the experiences of social contacts affect individual decisions to get vaccinated against influenza. Methods A multi-year longitudinal survey study was conducted to examine the impact of personal and social network experiences on vaccination decisions. Participants' vaccination behaviors and experiences with influenza were tracked over time. To model these influences, we developed a memory-based vaccination decision model using the Adaptive Control of Thought – Rational (ACT-R) integrated cognitive architecture, which incorporates cognitive processes associated with memory and decision-making. Results The survey results demonstrated that both personal experiences with influenza and the experiences of close social contacts significantly influenced vaccination decisions. The memory-based model, built within the ACT-R framework, effectively captured these effects, providing a computational representation of how personal and social factors contribute to vaccination behaviors. Discussion The findings suggest that personal and social experiences play a critical role in shaping vaccination decisions, which can inform the development of targeted interventions to increase vaccination uptake. By incorporating cognitive processes into the model, we identified potential strategies to enhance vaccine promotion efforts, such as recalling past experiences with illness to motivate individuals to get vaccinated.
... Newell (1990) introduced the concept of unified theories of cognition, which involves acquiring knowledge, problem-solving, and perception. Cognitive architectures, such as Adaptive Control of Thoughts-Rational (ACT-R) (Anderson et al., 1998; Anderson & Lebiere, 2014; Ritter et al., 2018) and SOAR (Laird, 2019), have been developed to model human cognition in various cognitive tasks. ACT-R can interact with the environment through modules like perception (vision) and action (motor) modules. ...
Conference Paper
Full-text available
The field of Artificial Intelligence (AI), particularly in the area of computer vision, has experienced significant advancements since the emergence of deep learning models trained on extensively large labeled datasets. However, reliance on human labelers raises concerns regarding bias, inconsistency, and ethical issues. This study aimed to replace human labelers with an interactive cognitive model that could address these concerns. We investigated human behavior in a two-phase image labeling task and developed a model using the VisiTor (Vision + Motor) framework within the ACT-R cognitive architecture. The study was designed around a real labeling task: identifying different crystals in optical microscopy images after various treatments for inhibiting crystal formation. The outcomes from the image labeling experiment, which included both learning and testing phases, revealed meaningful observations. The observed decrease in task completion times for all participants during the learning phase suggests an increased familiarity with the image features, facilitated by the reference images presented in all four consecutive example tasks. It was also discovered that the subtle distinctions between classes led to confusion in making decisions about labels. The developed interactive cognitive model was able to simulate human behavior in the same labeling task environment; while the model achieved high accuracy, it still relies on pre-defined features, which limits its application to seen data only. Our findings suggest that interactive cognitive modeling offers a promising avenue for replacing human labelers with robust, consistent, and unbiased labeled datasets.
... Classical cognitivists sometimes speculate that, while the sensorimotor processes humans and other animals engage in might be accommodated by a dynamical systems account, human cognition is a distinctly different process that relies exclusively on symbolic processing (e.g., Anderson & Lebiere, 2014;Dietrich & Markman, 2003;Mahon & Caramazza, 2008). However, as more and more evidence accumulates for dynamical and embodied cognition (e.g., Chemero, 2009;Favela, 2024;Raja & Anderson, 2021;Spivey, 2007), the odds on that wager appear to be changing for the worse. ...
Article
Full-text available
About 30 years ago, the Dynamical Hypothesis instigated a variety of insights and transformations in cognitive science. One of them was the simple observation that, quite unlike trial‐based tasks in a laboratory, natural ecologically valid behaviors almost never have context‐free starting points. Instead, they produce lengthy time series data that can be recorded with dense‐sampling measures, such as heartrate, eye movements, EEG, etc. That emphasis on studying the temporal dynamics of extended behaviors may have been the trigger that led to a rethinking of what a “representation” is, and then of what a “cognitive agent” is. This most recent and perhaps most revolutionary transformation is the idea that a cognitive agent need not be a singular physiological organism. Perhaps a group of organisms, such as several people working on a joint task, can temporarily function as one cognitive agent – at least while they're working adaptively and successfully.
... Two prominent frameworks for cognitive modeling are ACT-R (Anderson 2009;Bothell 2017) and Soar (Laird 2012): these frameworks serve as robust tools for simulating human behavior across various cognitive tasks. They are referred to as Cognitive architectures (CAs) (Laird 2012;Anderson 1998), reflecting a set of intertwined mechanisms to model human behavior and aiming for a unified representation of mind (Newell 1994). CAs use task-specific knowledge to generate behavior. ...
Preprint
Resolving the dichotomy between the human-like yet constrained reasoning processes of Cognitive Architectures and the broad but often noisy inference behavior of Large Language Models (LLMs) remains a challenging but exciting pursuit, for enabling reliable machine reasoning capabilities in production systems. Because Cognitive Architectures are famously developed for the purpose of modeling the internal mechanisms of human cognitive decision-making at a computational level, new investigations consider the goal of informing LLMs with the knowledge necessary for replicating such processes, e.g., guided perception, memory, goal-setting, and action. Previous approaches that use LLMs for grounded decision-making struggle with complex reasoning tasks that require slower, deliberate cognition over fast and intuitive inference -- reporting issues related to the lack of sufficient grounding, as in hallucination. To resolve these challenges, we introduce LLM-ACTR, a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making by integrating the ACT-R Cognitive Architecture with LLMs. Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations, injects this information into trainable LLM adapter layers, and fine-tunes the LLMs for downstream prediction. Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability of our approach, compared to LLM-only baselines that leverage chain-of-thought reasoning strategies.
... Quantum science investigates structure at scales smaller than the atom. It encompasses various fields, the most important of which are quantum physics, quantum telecommunications, quantum structure (chemistry), and quantum mathematics [1][2][3]. ...
... At each point in time, different instances are considered for each option. Each instance in memory has an activation value, which represents the ease with which that information is available in memory (Anderson and Lebiere 2014). ...
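Retrieval competition among instances of this kind is commonly modeled with a Boltzmann (softmax) rule over activations, so that more active instances are retrieved more often, but not deterministically. A minimal sketch, with an illustrative noise parameter s:

```python
import math

def retrieval_probabilities(activations, s=0.25):
    """Probability of retrieving each instance: a Boltzmann (softmax)
    rule over activation values. Higher s means noisier, more uniform
    retrieval; lower s makes the most active instance dominate."""
    exps = [math.exp(a / s) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

# Three instances in memory; the most active one wins most retrievals
probs = retrieval_probabilities([1.2, 0.4, -0.3])
```

Because the rule is relative, adding a constant to every activation leaves the retrieval probabilities unchanged, which matches the excerpt's point that accessibility depends on activation relative to the other stored instances.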
Article
During the past decade, researchers of behavioral cyber security have created cognitive agents that are able to learn and make decisions in dynamic environments in ways that assimilate human decision processes. However, many of these efforts have been limited to simple detection tasks and represent basic cognitive functions rather than the whole set of cognitive capabilities required in dynamic cyber defense scenarios. Our current work aims at advancing the development of cognitive agents that learn and make dynamic defense decisions during cyber attacks by intelligent attack agents. We also aim to evaluate the capability of these cognitive models in "Turing-like" experiments, comparing the decisions and performance of these agents against human cyber defenders. In this paper, we present an initial demonstration of a cognitive model of the defender that relies on a cognitive theory of dynamic decision-making, Instance-Based Learning Theory (IBLT); we also demonstrate the execution of the same defense task by human defenders. We rely on OpenAI Gym and CybORG and adapt an existing CAGE scenario to generate a simulation experiment using an IBL defender. We also offer a new Interactive Defense Game (IDG), where human defenders can perform the same CAGE scenario simulated with the IBL model. Our results suggest that the IBL model makes decisions against two intelligent attack agents that are similar to those observed in a subsequent human experiment. We conclude with a description of the cognitive foundations required to build autonomous intelligent cyber defense agents that can collaborate with humans in autonomous cyber defense teams.
... The development of SKILL was informed by three information processing theories: the Adaptive Control of Thought (ACT-R) model of cognition (Anderson, 1993; Anderson & Lebiere, 2014), the Embedded Processes model of working memory (Cowan, 1999, 2014), and the Construction-Integration (CI) model of text comprehension and production (Kintsch & van Dijk, 1978; van Dijk & Kintsch, 1983). ACT-R proposes that the brain uses causal and temporal information extracted from semantic and syntactic cues embedded in stories to create multilevel representations or "chunks" of related information in LTM. ...
Article
Full-text available
Purpose Clinicians address a wide range of oral language skills when working with school-age students with language and literacy difficulties (LLDs). Therefore, there is a critical need for carefully designed, rigorously tested, multicomponent contextualized language interventions (CLIs) that have a high likelihood of successful implementation and measurable academic impacts. This clinical focus article summarizes the development and testing of a CLI entitled Supporting Knowledge in Language and Literacy (SKILL), which is a supplementary narrative intervention program for elementary school-age children. Our aims are to (a) review the theoretical models that form the foundation of SKILL; (b) describe the iterative process used to develop the phases, lessons, procedures, materials, and progress monitoring tool; (c) summarize recent findings of the randomized controlled trial that was conducted to test its efficacy; and (d) discuss factors that may contribute to successful implementation of multicomponent language interventions. Method A total of 357 students in Grades 1–4 with LLDs were randomized to a treatment group or to a business-as-usual control group. The treatment group received the SKILL curriculum in small groups during 30-min lessons by trained speech-language pathologists, teachers, and special educators. Results Students who received SKILL significantly outperformed those who did not on oral and written measures of storytelling and comprehension immediately after treatment and at a 5-month follow-up. Gains were similar among students with different levels of language ability (at-risk, language impaired) and language status (monolingual, bilingual) at pretest. Conclusions There is growing support for the use of multicomponent CLIs to bring about educationally relevant outcomes for students with LLDs.
The authors present this review of how SKILL was designed, manualized, and rigorously tested by a team of researchers and practitioners with the hope that this approach will serve as a springboard for the development of future multicomponent CLIs that may meaningfully improve communicative and educational outcomes for students with LLDs.
... In practice, this means that there is a limit on how informative behavioral data (e.g., choice-response time) can be for selecting between mechanisms because the latent cognitive processes of interest tend to mimic each other at the level of behavior (Hawkins et al., 2017). This issue will likely be alleviated to some extent by the development of more detailed computational models of WM that decompose observed behavior into latent component processes or integrate WM subprocesses into models of higher-level cognition, such as reinforcement learning (e.g., Collins and Frank (2012), McDougle and Collins (2021)) and evidence accumulation models of decision-making (e.g., Brown and Heathcote (2008);Forstmann et al. (2016); Ratcliff (1978)), or more general cognitive architectures, such as ACT-R (Anderson & Lebiere, 2014;Anderson et al., 1996). Evidence accumulation models, which explain choices and response time distributions in terms of latent cognitive processes, are particularly well suited to reveal whether the WM-related phenomena outlined above occur because WM subprocesses add time outside of the decision stage (longer nondecision time), interfere with the decision process itself (reduced or noisier processing rate), or induce strategic adjustments engaging top-down cognitive control (increased response caution). ...
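The evidence accumulation models mentioned in this excerpt (e.g., the diffusion model of Ratcliff, 1978) explain choices and response time distributions by accumulating noisy evidence toward a decision threshold. The following is a toy random-walk sketch of that idea, not the implementation of any of the cited models; all parameter values are illustrative:

```python
import random

def diffusion_trial(drift=0.1, threshold=1.0, noise=0.3, dt=0.01,
                    t0=0.3, rng=None):
    """Simulate one decision: accumulate noisy evidence until the upper
    (+threshold) or lower (-threshold) boundary is crossed.
    Returns (choice, response_time); t0 is nondecision time, covering
    encoding and motor execution outside the decision stage."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0, t0 + t)

# With a positive drift, most trials end at the upper boundary
rng = random.Random(42)
trials = [diffusion_trial(rng=rng) for _ in range(200)]
```

The three mechanisms listed in the excerpt map onto distinct parameters: longer nondecision time is a larger t0, a noisier processing rate is a smaller drift or larger noise, and increased response caution is a higher threshold.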
Chapter
Working memory (WM) refers to a set of processes that makes task-relevant information accessible to higher-level cognitive processes including abstract reasoning, decision-making, learning, and reading comprehension. In this chapter, we introduce the concept of WM and outline key behavioral and neural evidence for a number of critical subprocesses that support WM and which have become recent targets of cognitive neuroscience. We discuss common approaches to linking brain and behavior in WM research seeking to identify the neural basis of WM subprocesses. We draw attention to limitations of common approaches and suggest that much progress could be made by applying several of the recent methodological advances in model-based cognitive neuroscience discussed throughout this book (see Chapters “An Introduction to EEG/MEG for Model-Based Cognitive Neuroscience”, “Ultra-High Field Magnetic Resonance Imaging for Model-Based Neuroscience”, “Advancements in Joint Modeling of Neural and Behavioral Data”, “Cognitive Models as a Tool to Link Decision Behavior With EEG Signals”, and “Linking Models with Brain Measures”). Overall, the purpose of this chapter is to give a broad overview of WM as seen through the lens of model-based cognitive neuroscience and to summarize our current state of knowledge of WM subprocesses and their neural basis. We hope to outline a path forward to a more complete neurocomputational understanding of WM.
... Cognitive architectures encode dynamic models of cognition based on established theories about the structure of mind [2,18]. Empirically validated, these architectures constitute plausible cognitive theories that enable predictions about imminent human behavior. ...
Preprint
Full-text available
The level of automation in human-centered systems is steadily increasing, leading to a demand for advanced design methods for automation control at the human-machine interface. This is particularly important in safety-critical applications, where the multi-faceted interaction between the automated system and humans must be carefully analyzed to identify potential risks to the overall safety. This paper presents our vision of an approach determining an appropriate level of automation taking into account the automation's impact on the human. The approach is based on a game theoretic framework where we investigate whether the automation's controller can be synthesized as a strategy considering human behavior and thus ensuring human-adaptive control.
... In order to reach a unified theory of human memory, Matthew A. Kelly and Robert L. West suggested a theoretical framework for identifying each proposed memory model in terms of six key decisions, namely: "(1) choice of knowledge representation scheme, (2) choice of data structure, (3) choice of associative architecture, (4) choice of learning rule, (5) choice of time-variant process, and (6) choice of response decision criteria". Representation schemes for human memory models begin with LISP (List Processing), in which the cognitive architecture ACT-R (Anderson & Lebiere, 2014) and SAM (Search of Associative Memory) models are characterized as "storage and retrieval of discrete symbols" (Clark, 2001). Symbolic models are represented as "expressions of symbols and the manipulation of those symbols" (Clark, 2001), and are inspired by linguistics and logic (Locke & Phemister, 2008). ...
Article
Full-text available
There are two approaches to simulating memory and learning in artificial intelligence: the functionalistic approach and the cognitive approach. A necessary condition for the second approach is providing a model of brain activity that shows good congruence with observational facts such as mistakes and forgotten experiences. Given that human memory has a solid core that includes the components of our identity, our family and our hometown, the major and determinative events of our lives, and the countless repeated and accepted facts of our culture, the further we move toward peripheral spots, the flimsier the data becomes and the more easily it is exposed to oblivion. It was therefore essential to propose a model in which these topographical differences are clearly distinguishable. In our proposed model, we have translated this topographical situation into quantities attributed to the nodes. The result is an edge-weighted graph with mass-based values on the nodes, which demonstrates the importance of each atomic proposition, as a truth, for an intelligent being. Furthermore, it dynamically develops and modifies, and in successive phases it changes the mass of the nodes and the weight of the edges depending on inputs gathered from the environment.
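The node-mass, edge-weighted structure this abstract describes can be sketched as a toy graph in which low-mass (peripheral) nodes fade toward oblivion as decay is applied, while high-mass (core) facts persist. All class, method, and threshold names here are hypothetical, not taken from the paper:

```python
class MemoryGraph:
    """Toy edge-weighted graph with mass-based node values (hypothetical
    sketch): node mass stands for the importance of an atomic proposition."""

    def __init__(self):
        self.mass = {}     # node -> importance mass
        self.weight = {}   # (node, node) -> edge weight

    def add_fact(self, node, mass=1.0):
        """Add or reinforce an atomic proposition."""
        self.mass[node] = self.mass.get(node, 0.0) + mass

    def link(self, a, b, w=1.0):
        """Add or strengthen an undirected association between two facts."""
        key = tuple(sorted((a, b)))
        self.weight[key] = self.weight.get(key, 0.0) + w

    def decay(self, rate=0.1, floor=0.05):
        """Shrink all masses; peripheral (low-mass) nodes drop below the
        floor first and are forgotten, while the solid core survives."""
        self.mass = {n: m * (1 - rate) for n, m in self.mass.items()
                     if m * (1 - rate) > floor}

# Core facts carry high mass; peripheral facts fade first
g = MemoryGraph()
g.add_fact("my hometown", mass=5.0)
g.add_fact("a stranger's name", mass=0.06)
g.link("my hometown", "a stranger's name", w=0.2)
```

Repeated calls to decay() reproduce the topography the abstract describes: after a few phases the peripheral node is gone while the core node, though diminished in mass, remains.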
... If anything, task-switching (from think to NT conditions) should be more effortful than staying on the same task (J. R. Anderson & Lebiere, 2014). Even if it is the case that on-going suppression is less effective than switching between suppression and retrieval (as in the original TNT paradigm), this should require qualification of the Inhibition theory. ...
Article
Full-text available
Episodic memories may become suppressed, both incidentally and intentionally. Incidental suppression is a result of a competition induced by interfering items or responses. In contrast, intentional suppression is said to result from conscious attempts to suppress certain memory items, and should thus not depend on competition induced by interfering items or responses. However, intentional suppression is typically engendered using the Think/No-Think paradigm, in which participants are required to retrieve some target items and to suppress others. Therefore, rather than intentional suppression, forgetting in this paradigm may reflect incidental suppression of No-Think items induced by interference via prior retrieval of the Think items. To distinguish between these possibilities, we tested participants (n = 40) using an adjusted suppression paradigm, which did not include the Think condition (ExcludeThink paradigm), and compared it with the standard suppression paradigm (IncludeThink paradigm; n = 39), which included a Think condition. We found that suppression was not observed in the ExcludeThink paradigm, but only in the IncludeThink paradigm. These results indicate that interference via prior retrieval is necessary to induce forgetting.
... The Compact Oxford Dictionary defines the term learnability as "the degree to which knowledge or skill (in something) can be acquired through study or experience or by being taught". In the field of cognitive science, the ACT-R (Adaptive Control of Thought-Rational) theory distinguishes between declarative and procedural knowledge [2]. Procedural knowledge is acquired through practice and refers to information about how to perform a task, an action that can be directly executed. ...
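ACT-R's declarative/procedural distinction mentioned in this excerpt can be illustrated with a toy sketch: declarative knowledge as a chunk of stated facts, and procedural knowledge as an IF-THEN production rule that executes directly on it. The names and the task here are illustrative only, not ACT-R's actual API:

```python
# Declarative knowledge: a chunk of facts we can state explicitly
declarative = {"goal": "add", "a": 3, "b": 4}

def production_add(memory):
    """Procedural knowledge: an IF-THEN rule that can be directly
    executed. IF the goal is 'add', THEN compute the sum and mark
    the goal as done."""
    if memory.get("goal") == "add":      # condition side of the rule
        memory["answer"] = memory["a"] + memory["b"]
        memory["goal"] = "done"          # action side of the rule
    return memory

# Fire the production on a copy of the declarative chunk
result = production_add(dict(declarative))
```

The asymmetry matches the text: the chunk can be inspected and reported, whereas the production is only observable through what it does when its condition matches.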
Article
Full-text available
In the current era of Industry 4.0, many new technologies offer manufacturing industries ways to achieve high productivity. Augmented Reality (AR) is one of the emerging technologies that has been adopted in industries to aid users in acquiring complex skills and carrying out many complicated tasks such as product assembly and maintenance. Nevertheless, most AR applications have been developed without a clear understanding of how such technology can facilitate improved learnability in terms of knowledge reusability. This paper proposes an enhanced AR-based training system that provides multimodal, contextualized information to improve task comprehension and knowledge reusability, compared with traditional AR that presents unimodal and decontextualized information. An empirical test was carried out to assess the task performance and task learnability aspects of this enhanced AR compared to traditional AR and a paper-based document. The experiment consisted of a training phase, where participants carried out an electrical connection task on a sensor, followed by a knowledge reuse phase, where participants had to wire a second sensor using their previous training. A pre-test quiz was given before the experiment, followed by a post-test phase after the training. Post-tests consisted of one post-test given directly after the experiment (short-term retention test) and a second post-test quiz given one week later (long-term retention test) to measure information retention. The results indicated that AR-based approaches could enhance knowledge acquisition by around 18 % for traditional AR and almost 25 % for enhanced AR as compared to the paper-based approach. While all training systems performed comparably well on the short-term retention test, trainees who used the enhanced AR training system statistically outperformed those in the paper-based group on the long-term retention test.
Furthermore, there was a positive correlation between the short-term retention test score and the knowledge reusability score, also reflected in the higher knowledge reusability scores for the enhanced AR training system compared to the other two approaches. These findings are discussed in relation to Industry 5.0's human-centric core value.
... The development of a skill would pass through several stages, and in the early stages there would indeed be attention to, and conscious control over, the execution of the relevant skills. In the final stage, however, the acquired skills would require neither attention nor conscious control (Anderson & Lebiere, 1998). This idea of progressive automatization remains prevalent in the field. ...
Article
Full-text available
This work centers on the following question within the areas of psychology and philosophy of sport: when athletes are in action, is it beneficial for them to focus consciously on the execution of their skills, or is it better to execute them unconsciously? This question has generated some controversy in the area, with the philosopher Barbara Montero defending a position strongly in favor of conscious processing. To shed light on the controversy, we analyze the results of the two main lines of experimental research on the topic, one led by Sian Beilock and the other by Gabriele Wulf. In each case, we present and evaluate the methodological criticisms Montero raises against these experimental paradigms. Following a qualitative approach, we apply an interpretative methodology to analyze the arguments and experimental evidence in the area. From our analysis, we seek to extract practical advice about which focuses of attention appear beneficial (and which do not) for sports performance. Keywords: conscious focus; sports skills; experimental evidence; methodological criticism.
... We identify a first group of papers that are grounded in various general theories of cognition. The work of Salvucci (2006) commits to one of the most famous and comprehensive cognitive architectures, the Adaptive Control of Thought-Rational (ACT-R) by Anderson and Lebiere (1998), represented in a summary form in Fig. 3. The ACT-R architecture was never intended for engineering applications, least of all autonomous vehicles: the performance of models based on ACT-R is far from the real-life requirements of the automotive industry. ...
... On the other hand, Cognitive Architectures (CAs) propose hypotheses about the fixed structures that underlie the functioning of minds, whether in natural or artificial systems, and how these structures cooperate to yield intelligent behavior in complex environments (Laird, Lebiere, and Rosenbloom 2017). CAs such as ACT-R (Anderson and Lebiere 2014), SOAR (Laird 2019), CLARION (Sun 2016), and LIDA (Franklin and Patterson 2006) model various aspects of human cognition, including memory, learning, reasoning, perceptual-motor interaction, theory of mind, and AGI, among others (Kotseruba and Tsotsos 2020). CAs prioritize bounded rationality, aiming to make satisfactory decisions under limited resources, as opposed to the optimality focus of LLMs. ...
Preprint
This article explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
Chapter
Cybersecurity is a crucial component of national security strategy, receiving significant attention from countries worldwide. Cyberspace data is a typical example of big data, characterized by its diverse sources, various types, and large volume. Security incidents in cyberspace exhibit three main characteristics: large scale, evolvability, and correlativity. Addressing these characteristics so as to assess cybersecurity incidents comprehensively, accurately, and in real time is a global challenge. Drawing on the human cognitive process for understanding cybersecurity incidents, and based on a comprehensive analysis of existing cognitive models, this chapter introduces a new cognitive model in the cybersecurity field, the MDATA cognitive model. It also details the composition and operational principles of the MDATA cognitive model.
Article
Adversary emulation is commonly used to test cyber-defense performance against known threats to organizations. However, many adversary emulation methods often rely on automated planning and underplay the role of human cognition. Consequently, defenders are often underprepared for human attackers who can think creatively and adapt their strategies. In this paper, we propose the design of adversarial cognitive agents that are dynamic, adaptable, and able to learn from experience. These cognitive agents are built on the theoretical principles of Instance-Based Learning Theory (IBLT) of experiential choice in dynamic tasks, making them more challenging than strategically optimal adversaries for human defenders. Our research offers three main contributions. First, in a simulation experiment, we demonstrate how IBL attacker agents can learn from experience and become as efficient as optimal strategic algorithms against a strategic defender. In a second simulation experiment, the IBL attackers are pitted against an IBL defender, showing that the IBL attacker can be a more challenging adversary for the IBL defender, while the IBL defender can learn to counter carefully crafted optimal attack strategies. To test these observations, we conducted a third experiment, where humans played the role of defenders against both strategic and IBL attackers in an interactive task. The results confirm the predictions of the second simulation experiment: cognitive attackers are more challenging for human defenders than strategic attackers. These insights help inform future adversary emulation efforts and the training of cyber defenders.
Preprint
Theory of Mind (ToM), the ability to attribute beliefs, intentions, or mental states to others, is a crucial feature of human social interaction. In complex environments, where the human sensory system reaches its limits, behaviour is strongly driven by our beliefs about the state of the world around us. Accessing others' mental states, e.g., beliefs and intentions, allows for more effective social interactions in natural contexts. Yet, these variables are not directly observable, making understanding ToM a challenging quest of interest for different fields, including psychology, machine learning and robotics. In this paper, we contribute to this topic by showing a developmental synergy between learning to predict low-level mental states (e.g., intentions, goals) and attributing high-level ones (i.e., beliefs). Specifically, we assume that learning beliefs attribution can occur by observing one's own decision processes involving beliefs, e.g., in a partially observable environment. Using a simple feed-forward deep learning model, we show that, when learning to predict others' intentions and actions, more accurate predictions can be acquired earlier if beliefs attribution is learnt simultaneously. Furthermore, we show that the learning performance improves even when observed actors have a different embodiment than the observer and the gain is higher when observing beliefs-driven chunks of behaviour. We propose that our computational approach can inform the understanding of human social cognitive development and be relevant for the design of future adaptive social robots able to autonomously understand, assist, and learn from human interaction partners in novel natural environments and tasks.
Article
The interpretability of decision-making in autonomous driving is crucial for building virtual drivers, promoting the trustworthiness of artificial intelligence (AI) and the efficiency of human-machine interaction. However, current data-driven methods such as deep reinforcement learning (DRL) acquire driving policies directly from collected data, leaving the decision-making process opaque for safety validation. To address this issue, this paper proposes cognitive reinforcement learning, which can both simulate the human driver’s deliberation and provide interpretability of the virtual driver’s behaviors. The new method involves cognitive modeling, reinforcement learning, and reasoning path extraction. Experiments in a virtual driving environment indicate that our method can semantically interpret the virtual driver’s behaviors. The results show that the proposed cognitive reinforcement learning model combines the interpretability of cognitive models with the learning capability of reinforcement learning, providing a new approach to the construction of trustworthy virtual drivers.
Article
Introduction: Generative Artificial Intelligence has made significant impacts in many fields, including computational cognitive modeling of decision making, although these applications have not yet been theoretically related to each other. This work introduces a categorization of applications of Generative Artificial Intelligence to cognitive models of decision making.
Methods: This categorization is used to compare the existing literature and to inform the design of an ablation study evaluating our proposed model in three experimental paradigms. The experiments used for model comparison involve modeling human learning and decision making based on both visual information and natural language, in tasks that vary in realism and complexity. The comparison takes as its basis Instance-Based Learning Theory, a theory of experiential decision making from which many models have emerged and been applied to a variety of domains and applications.
Results: The best performing model in our ablation used a generative model both to create memory representations and to predict participant actions. The results of this comparison demonstrate the importance of generative models for both forming memories and predicting actions in decision-modeling research.
Discussion: We present a model that integrates generative and cognitive models, using a variety of stimuli, applications, and training methods. These results can provide guidelines for cognitive modelers and decision making researchers interested in integrating Generative AI into their methods.
Article
In driver monitoring, various data types are collected from drivers and used for interpreting, modeling, and predicting driver behavior, and for designing interactions. The aim of this contribution is to introduce manD 1.0, a multimodal dataset that can be used as a benchmark for driver monitoring in the context of automated driving. manD is short for human dimension in automated driving. manD 1.0 contains data from multiple driver monitoring sensors collected from 50 participants, gender-balanced, aged between 21 and 65 years. They drove through five different driving scenarios in a static driving simulator under controlled laboratory conditions. The automation level (SAE International, Standard J3016) ranged from SAE L0 (no automation, manual) to SAE L3 (conditional automation, temporal). To capture data reflecting various mental and physical states of the subjects, the scenarios encompassed a range of distinct driving events and conditions. manD 1.0 includes environmental data such as traffic and weather conditions, vehicle data like the SAE level and driving parameters, and driver state covering physiology, body movements, activities, gaze, and facial information, all synchronized. This dataset supports applications like data-driven modeling, prediction of driver reactions, crafting of interaction strategies, and research into motion sickness.
Chapter
The level of automation in human-centered systems is steadily increasing, leading to a demand for advanced design methods for automation control at the human-machine interface. This is particularly important in safety-critical applications, where the multi-faceted interaction between the automated system and humans must be carefully analyzed to identify potential risks to the overall safety. This paper presents our vision of an approach determining an appropriate level of automation taking into account the automation’s impact on the human. The approach is based on a game theoretic framework where we investigate whether the automation’s controller can be synthesized as a strategy considering human behavior and thus ensuring human-adaptive control.
Conference Paper
Designing learning games for the retention of declarative knowledge is a way to provide learners with a large variety of adapted training situations. Such training situations can be treated as game activities built upon questioned facts: learners face various game situations wherein interactive elements and rules are the means to read and answer specific questions about those facts. This chapter is an extended version of [18]. We propose the Roguelite as a relevant game genre for declarative knowledge training, since its core design principles address the need for varied and challenging training situations. Additionally, we propose an analysis framework to help teachers and game developers identify the key elements in designing training games. The framework includes a set of questions to consider during the preliminary design of any training game for declarative knowledge. We applied it in a specific research context, the training of multiplication tables, and, following an iterative, prototype-centered approach, we illustrate two iterations of applying the framework to guide the design and development of playable prototypes.
Article
In the tourism sector, there is a growing interest in Sharia Tourism or Halal Tourism, which appeals to Muslim travelers. The halal industry presents a promising business opportunity, especially in Indonesia, where the majority of the population is Muslim. As the country with the highest number of Muslims globally, Indonesia significantly contributes to halal tourism's growth. The prevalence of biased views toward hotels has encouraged industry stakeholders to innovate and adopt concepts that align with societal values and norms. This research focuses on identifying factors that influence customers' decisions to return to Sharia-compliant hotels. The study, which is quantitative in nature, involved distributing an online questionnaire to 247 participants who have stayed in Islamic hotels. It employs structural equation modeling (SEM) to analyze the data. Key variables include price, location, religiosity, and trust as independent factors, with satisfaction serving as a mediator. Findings indicate a positive correlation between the variables of price, location, religiosity, trust, and both satisfaction and the intention to repurchase. Satisfaction emerged as the most significant factor in encouraging customers to return, with trust playing a crucial role in enhancing satisfaction.
Article
Knowledge engineering is an important task for creating and maintaining a knowledge base for cognitive models. It involves acquiring, representing, and organizing knowledge in a form that computers can use to make decisions and solve problems. However, this process can be a bottleneck for designing and using cognitive models. Knowledge engineering is a time-consuming and resource-intensive task that requires subject matter experts to provide information about a domain. In addition, models can acquire knowledge but require significant mechanisms to organize that information into a structured format appropriate for general use. Given the knowledge engineering bottleneck, we propose a solution that relies on natural language processing to extract key entities, relationships, and attributes, automatically generating knowledge encoded as triples or chunks from unstructured text. Once generated, the knowledge can be used to create or extend a knowledge base within cognitive architectures, reducing knowledge engineering effort and the need for task-specific models.
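A toy version of the extraction step described in this abstract might look like the sketch below. The regex pattern, the small verb list, and the chunk slot names are illustrative assumptions, not the paper's method; a real pipeline would rely on an NLP parser rather than pattern matching.

```python
import re

# Only a handful of relation verbs are handled; this list is an
# illustrative assumption, not the paper's approach.
_PATTERN = re.compile(
    r"^\s*([\w-]+(?:\s[\w-]+)?)\s+(is|has|uses|contains)\s+(.+?)\.?\s*$"
)

def sentences_to_triples(text):
    """Extract (subject, relation, object) triples from simple
    declarative sentences in unstructured text."""
    triples = []
    for sentence in re.split(r"(?<=\.)\s+", text.strip()):
        m = _PATTERN.match(sentence)
        if m:
            triples.append((m.group(1), m.group(2), m.group(3)))
    return triples

def triple_to_chunk(triple, name):
    """Encode one triple as a slot-value chunk, the declarative-memory
    format used by architectures such as ACT-R."""
    subject, relation, obj = triple
    return {"name": name, "isa": "fact",
            "subject": subject, "relation": relation, "object": obj}
```

For instance, `sentences_to_triples("SOAR uses production rules.")` yields one triple that `triple_to_chunk` turns into a chunk ready to load into declarative memory.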
Preprint
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation. They possess deep language comprehension, human-like text generation capabilities, contextual awareness, and robust problem-solving skills, making them invaluable in various domains (e.g., search engines, customer support, translation). In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks. This paper explores the intersection of LLMs with security and privacy. Specifically, we investigate how LLMs positively impact security and privacy, potential risks and threats associated with their use, and inherent vulnerabilities within LLMs. Through a comprehensive literature review, the paper categorizes the papers into "The Good" (beneficial LLM applications), "The Bad" (offensive applications), and "The Ugly" (vulnerabilities of LLMs and their defenses). We have some interesting findings. For example, LLMs have proven to enhance code security (code vulnerability detection) and data privacy (data confidentiality protection), outperforming traditional methods. However, they can also be harnessed for various attacks (particularly user-level attacks) due to their human-like reasoning abilities. We have identified areas that require further research efforts. For example, research on model and parameter extraction attacks is limited and often theoretical, hindered by LLM parameter scale and confidentiality. Safe instruction tuning, a recent development, requires more exploration. We hope that our work can shed light on the LLMs' potential to both bolster and jeopardize cybersecurity.
Article
One of the early goals of artificial intelligence (AI) was to create algorithms that exhibited behavior indistinguishable from human behavior (i.e., human-like behavior). Today, AI has diverged, often aiming to excel in tasks inspired by human capabilities and to outperform humans, rather than replicating human cognition and action. In this paper, I explore the overarching question of whether computational algorithms have achieved this initial goal of AI. I focus on dynamic decision-making, approaching the question from the perspective of computational cognitive science. I present a general cognitive algorithm that intends to emulate human decision-making in dynamic environments, as defined in instance-based learning theory (IBLT). I use the cognitive steps proposed in IBLT to organize and discuss current evidence supporting some of the human-likeness of the decision-making mechanisms. I also highlight the significant gaps in research that must be addressed to improve current models and to achieve higher fidelity in computational algorithms representing human decision processes. I conclude with concrete steps toward advancing the construction of algorithms that exhibit human-like behavior, with the ultimate goal of supporting human dynamic decision-making.
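The IBLT mechanisms this abstract refers to (storing experienced instances, computing activation from recency and frequency, and choosing by blended values) can be sketched minimally as follows. This is an illustrative simplification under stated assumptions, not the author's model: the class name `IBLAgent` and the parameter values are invented for the example, and activation noise is omitted so the sketch stays deterministic.

```python
import math

class IBLAgent:
    """Minimal sketch of choice under Instance-Based Learning Theory.

    Each experienced instance is stored as (outcome, timestamp).
    Activation reflects recency and frequency of experience; retrieval
    probabilities come from a Boltzmann softmax over activations; each
    option's blended value is the retrieval-weighted mean of its stored
    outcomes.
    """

    def __init__(self, decay=0.5, tau=0.25, default_utility=10.0):
        self.d = decay                  # memory decay rate
        self.tau = tau                  # softmax temperature
        self.default = default_utility  # optimistic prior -> exploration
        self.memory = {}                # option -> [(outcome, time), ...]
        self.t = 0

    def _blended_value(self, option):
        instances = self.memory.get(option)
        if not instances:
            return self.default
        # Base-level activation of each instance decays as (t - t_i)^(-d)
        acts = [(self.t - t_i) ** (-self.d) for _, t_i in instances]
        logits = [math.log(a) / self.tau for a in acts]
        m = max(logits)
        weights = [math.exp(l - m) for l in logits]
        z = sum(weights)
        # Blended value: retrieval-probability-weighted outcomes
        return sum((w / z) * outcome
                   for w, (outcome, _) in zip(weights, instances))

    def choose(self, options):
        self.t += 1
        return max(options, key=self._blended_value)

    def observe(self, option, outcome):
        self.memory.setdefault(option, []).append((outcome, self.t))
```

After a poor outcome on one option, the optimistic default utility pulls the agent toward unexplored alternatives, reproducing the exploration-then-exploitation pattern that makes IBL models useful for emulating human dynamic decision-making.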
Article
Developing effective Multi-Agent Systems (MAS) is critical for many applications requiring collaboration and coordination with humans. Despite the rapid advance of Multi-Agent Deep Reinforcement Learning (MADRL) in cooperative MAS, one of the major challenges that remain is the simultaneous learning and interaction of independent agents in dynamic environments in the presence of stochastic rewards. State-of-the-art MADRL models struggle to perform well in Coordinated Multi-agent Object Transportation Problems (CMOTPs) wherein agents must coordinate with each other and learn from stochastic rewards. In contrast, humans often learn rapidly to adapt to nonstationary environments that require coordination among people. In this paper, motivated by the demonstrated ability of cognitive models based on Instance-Based Learning Theory (IBLT) to capture human decisions in many dynamic decision making tasks, we propose three variants of Multi-Agent IBL models (MAIBL). The idea of these MAIBL algorithms is to combine the cognitive mechanisms of IBLT and the techniques of MADRL models to deal with coordination MAS in stochastic environments from the perspective of independent learners. We demonstrate that the MAIBL models exhibit faster learning and achieve better coordination in a dynamic CMOTP task with various settings of stochastic rewards compared to current MADRL models. We discuss the benefits of integrating cognitive insights into MADRL models.
Chapter
Declarative and procedural memory systems underlie two distinct types of knowledge, labeled declarative and procedural knowledge. The distinction between these memory/knowledge systems is useful in understanding how second language (L2) learners process L2 input data and how they convert such information into intake for their evolving interlanguage.