Figure 1 - uploaded by Fernand Gobet
Top panel: examples of chunks in a chess position. Bottom panel: one of the chunks elicits a possible move (retreating the white bishop).
Citations
... On the cognitive psychology side, one of its most established theories, chunking theory, has also been embodied in computational cognitive architectures: first EPAM (Feigenbaum, 1963; Richman, Staszewski, & Simon, 1995) and now CHREST (Chunking Hierarchy REtrieval STructures) (Gobet, 1993, 2000; Gobet & Lane, 2012; Gobet & Simon, 2000). Chunking theory's key idea, the chunk, is defined as a meaningful unit of information made from elements that have strong associations with each other (e.g., several digits making up a telephone number). ...
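The telephone-number example above can be made concrete with a minimal sketch. This is illustrative only (it is not part of EPAM or CHREST): a digit string is recalled as a few familiar units rather than as ten independent items.

```python
# Minimal illustration of a "chunk": elements with strong mutual
# associations are grouped into a single unit, so a 10-digit string
# is held as three chunks rather than ten items.

def chunk(digits: str, sizes: list[int]) -> list[str]:
    """Split a digit string into chunks of the given sizes."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

phone = "4155550123"  # hypothetical number
print(chunk(phone, [3, 3, 4]))  # ['415', '555', '0123']
```

Three chunks of sizes 3, 3, and 4 stay comfortably within the classic short-term memory span, which is the point of the example.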
... CHREST is a self-organising computer model that simulates human learning processes via interacting cognitive mechanisms and structures. For CHREST, learning implies gradual growth of a network of chunks in LTM, a process influenced both by the environmental stimuli and the data that have already been stored (Gobet & Lane, 2012). CHREST's STM structure allows for additional ways to create links between chunks, such as linking chunks across visual and verbal modalities. ...
... We should also add that CHREST differs from many symbolic models (such as "expert systems") and is closer to deep learning in its focus on perception as the primary driver of intelligence. Gobet and Lane (2012) offer an in-depth introduction to chunking theory; for deep learning, see LeCun, Bengio, and Hinton (2015). ...
Chunking theory is among the most established theories in cognitive psychology. However, little work has been done to connect the key ideas of chunks and chunking to the neural substrate. The current study addresses this issue by investigating the convergence of a cognitive CHREST model (the computational embodiment of chunking theory) and its neuroscience-based counterpart (based on deep learning). Both models were trained from raw data to categorise novel stimuli in the real-life domains of literature and music. Despite having vastly different mechanisms and structures, both models largely converged in their predictions of classical writers and composers, in both qualitative and quantitative terms. Moreover, the use of the same chunk/engram activation mechanism for the CHREST and deep learning models demonstrated functional equivalence between cognitive chunks and neural engrams. The study addresses a historical feud between symbolic/serial and subsymbolic/parallel processing approaches to modelling cognition. The findings also further bridge the gap between cognition and its neural substrate, connect the mechanisms proposed by chunking theory to the neural network modelling approach, and make further inroads towards integrating concept formation theories into a Unified Theory of Cognition (Newell, 1990).
... EPAM, CHREST and related models have been applied to predict and simulate behaviour in verbal learning research (Feigenbaum, 1959). For further details of chunking theory and CHREST, refer to Gobet and Lane (2012). The interaction of GEMS and CHREST is presented in Figure 3. ...
A common goal in cognitive science involves explaining and predicting human performance in experimental settings. This study proposes a single computational scientific discovery framework, GEMS, that automatically generates multiple models for verbal learning simulations. GEMS achieves this by combining simple and complex cognitive mechanisms with genetic programming. This approach evolves populations of interpretable cognitive agents, with each agent learning by chunking and incorporating long-term memory (LTM) and short-term memory (STM) stores, as well as attention and perceptual mechanisms. The models simulate two different verbal learning tasks: the first investigates the effect of prior knowledge on the learning rate of stimulus-response (S-R) pairs, and the second examines how backward recall is affected by the similarity of the stimuli. The models produced by GEMS are compared to both human data and EPAM, a different verbal learning model that utilises hand-crafted task-specific strategies. The models automatically evolved by GEMS produced a good fit to the human data in both studies, improving on EPAM's measures of fit by almost a factor of three on some of the pattern recall conditions. These findings offer further support to the mechanisms proposed by chunking theory (Simon, 1974), connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990).
... For further details of the chunking theory and CHREST see Gobet and Lane (2012). ...
A fundamental issue in cognitive science concerns the interaction of the cognitive "how" operations with the genetic/memetic "why" processes, and by what means this interaction results in constrained variability and individual differences. This study proposes a single GEVL model that combines complex cognitive mechanisms with a genetic programming approach. The model evolves populations of cognitive agents, with each agent learning by chunking and incorporating LTM and STM stores, as well as attention. The model simulates two different verbal learning tasks: one investigates the effect of stimulus-response (S-R) similarity on the learning rate, and the other examines how learning time is affected by changes in stimulus presentation times. GEVL's results are compared to both human data and EPAM, a different verbal learning model that utilises hand-crafted task-specific strategies. The semi-automatically evolved GEVL strategies produced a good fit to the human data in both studies, improving on EPAM's scores by as much as a factor of two on some of the pattern similarity conditions. These findings offer further support to the mechanisms proposed by chunking theory, connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990).
... Skill acquisition in general (Miller, 1956; Rosenbaum et al., 2001; Gobet & Lane, 2012): skill acquisition progresses from processing and executing component task units at the bottom level to achieving Gestalt processing at the top level. This involves the grouping of information into meaningful chunks. ...
To what extent does playing a musical instrument contribute to an individual’s construction of knowledge? This paper aims to address this question by examining music performance from an embodied perspective and offering a narrative-style review of the main literature on the topic. Drawing from both older theoretical frameworks on motor learning and more recent theories on sensorimotor coupling and integration, this paper seeks to challenge and juxtapose established ideas with contemporary views inspired by recent work on embodied cognitive science. By doing so we advocate a centripetal approach to music performance, contrasting the prevalent centrifugal perspective: the sounds produced during performance not only originate from bodily action (centrifugal), but also cyclically return to it (centripetal). This perspective suggests that playing music involves a dynamic integration of both external and internal factors, transcending mere output-oriented actions and revealing music performance as a form of knowledge acquisition based on real-time sensorimotor experience.
... It was also evident that multimedia embryology instruction in the included studies incorporated chunking mechanisms into the instructional design, as information was presented in small segments (30, 32). The segmenting principle describes the delivery of small, isolated chunks of information that are eventually combined into a single information unit so that the learner can understand the isolated elements (47). This principle was shown to influence intrinsic load, as limited working memory can process only a few chunks of information at one time (48). ...
Embryology is a critical subdiscipline in medical education, focusing on human body organ development and providing a foundation for understanding developmental anatomy. However, traditional teaching methods using static 2D graphics in textbooks may hinder students' comprehension of the complex 3D embryonic growth processes. To address this, multimedia approaches, such as animations, videos, and interactive tools, have been explored for effective embryology education. This scoping review identifies five key elements of successful multimedia teaching in embryology: multimodal integrated instructional content, cognitive load-reduction strategies, cognitive engagement and physical interactivity, learner-controlled multimedia instruction, and development of tacit knowledge. These strategies promote active learning, enhance students' understanding, and foster critical thinking skills. Future research should focus on evaluating the impact of multimedia approaches on students' engagement, attitudes, and competency development. Embracing multimedia in embryology education can improve medical students' clinical understanding and support effective medical practice.
... In general, online learning techniques process single samples. Chunk-based learning covers a variant of online learning techniques formulated to process chunks of data samples (Gobet & Lane, 2012). Finally, stream learning handles non-stationary evolving environments where data samples naturally arrive in a sequential and continuous manner and target concepts may be drifting over time. ...
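The three regimes named above can be sketched with a toy example. This is illustrative only (it is not from the cited work): a model that tracks the running mean of a data stream is updated per sample (online) or per block of samples (chunk-based); stream learning would additionally handle drifting targets, which this sketch omits.

```python
# Toy incremental learner: a running mean over a data stream.
# "Online" learning updates on single samples; "chunk-based"
# learning updates on blocks (chunks) of samples.

class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x: float) -> None:
        """Online update: incorporate one sample."""
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def update_chunk(self, xs: list[float]) -> None:
        """Chunk-based update: incorporate a block of samples."""
        for x in xs:
            self.update(x)

stream = [1.0, 2.0, 3.0, 4.0]
online, chunked = RunningMean(), RunningMean()
for x in stream:                      # sample-by-sample processing
    online.update(x)
for i in range(0, len(stream), 2):    # processing in chunks of 2
    chunked.update_chunk(stream[i:i + 2])
print(online.mean, chunked.mean)  # 2.5 2.5
```

For this stationary toy stream both regimes converge to the same estimate; the practical differences (update cost, latency, and sensitivity to drift) only appear with non-stationary data.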
Retail companies are greatly interested in continuously monitoring the purchase traces of customers, to identify weak customers and take the actions necessary to improve customer satisfaction and ensure their revenues remain unaffected. In this paper, we formulate the customer churn prediction problem as a Predictive Process Monitoring (PPM) problem to be addressed under the possibly dynamic conditions of evolving retail data environments. To this aim, we propose TSUNAMI, a PPM approach to monitoring customer loyalty in the retail sector. It processes online the sale receipt stream produced by the customers of a retail business and learns a deep neural model to detect early the purchase traces of customers who will become churners. In addition, the proposed approach integrates a mechanism to detect concept drifts in customer purchase traces and adapts the deep neural model to them. Finally, to make customer purchase monitoring decisions explainable to potential stakeholders, we analyse the Shapley values of decisions, to explain which characteristics of the customer purchase traces are most relevant for disentangling churners from non-churners and how these characteristics may have changed over time. Experiments with two benchmark retail data sets explore the effectiveness of the proposed approach.
... We look at how the training schedule changes the within-chunk reaction time (the value marked by the red boundary) for AB and BC, since a sign of chunking is that reaction times for within-chunk items are typically faster than for between-chunk items (8, 12, 43); a figurative explanation of this method can be found in Fig. 6 in the appendix. We look at the within-chunk reaction times of AB and BC for all groups at the baseline and test blocks and compute the difference as the signed effect size (Cohen's d) of the baseline blocks compared to the test block, d_AB. ...
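The comparison described above can be sketched as follows. This is a minimal illustration with invented reaction-time values (the cited study's data and exact estimator are not reproduced here); it uses the standard pooled-standard-deviation form of Cohen's d.

```python
# Signed effect size (Cohen's d, pooled-SD form) comparing
# within-chunk reaction times at baseline vs. test blocks.

import statistics

def cohens_d(baseline: list[float], test: list[float]) -> float:
    """Signed Cohen's d between two samples, using the pooled SD."""
    m1, m2 = statistics.mean(baseline), statistics.mean(test)
    s1, s2 = statistics.stdev(baseline), statistics.stdev(test)
    n1, n2 = len(baseline), len(test)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                 / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical within-chunk RTs in ms for the pair AB:
baseline_rt = [420.0, 435.0, 410.0, 428.0]
test_rt = [360.0, 372.0, 355.0, 365.0]
d_ab = cohens_d(baseline_rt, test_rt)
print(round(d_ab, 2))  # positive d: RTs shrank (sped up) at test
```

A positive d_AB indicates that within-chunk responses became faster from baseline to test, the signature of chunking that the passage describes.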
When exposed to perceptual and motor sequences, people are able to gradually identify patterns within them and form a compact internal description of the sequence. One proposal of how sequences can be compressed is people's ability to form chunks. We study people's chunking behavior in a serial reaction time task. We relate chunk representation to sequence statistics and task demands, and propose a rational model of chunking that rearranges and concatenates its representation to jointly optimize for accuracy and speed. Our model predicts that participants should chunk more if chunks are indeed part of the generative model underlying a task and should, on average, learn longer chunks when optimizing for speed than when optimizing for accuracy. We test these predictions in two experiments. In the first experiment, participants learn sequences with underlying chunks. In the second experiment, participants were instructed to act either as fast or as accurately as possible. The results of both experiments confirmed our model's predictions. Taken together, these results shed new light on the benefits of chunking and pave the way for future studies on step-wise representation learning in structured domains.
... Chunking is a model of memory storage in the brain introduced by the scholar de Groot. In psychology, this model is implemented by repeatedly presenting pieces of information whose meanings are interrelated [10]. This way of delivering information can speed up recall and improve the quality of learning. ...
Children's learning ability at school affects their academic achievement, and teachers play an important role in children's learning. Teachers' inability to understand the role of the brain, and how its parts function as a whole to support learning, is one obstacle to developing optimal learning methods. The aim of this workshop was to increase teachers' knowledge and skills regarding brain development and function. The teachers received materials, discussions, and simulations on the anatomy and physiology of the brain in the learning process. The activity was evaluated with pre- and post-tests and with reports on the practice of learning methods matched to the working rhythm of the child's brain (brain-based learning). The results of this activity strengthened teachers' abilities in teaching and learning processes that stimulate brain development, improve learning ability, and prevent slowdowns in children's learning at school.
... The feeling of consciousness arises in CTM because all its processors, including especially those that are particularly responsible for consciousness (the Inner Speech, Inner Vision, Inner Sensation and Model-of-the-World processors), are privy to the same (conscious) content of STM; the gists of outer speech (what we say and hear in the world), outer vision (what we see in the world), and so on, are nearly indistinguishable from the gists of inner speech (what we say to ourselves) and inner vision (what we see in dreams); and the multi-modal gists, most importantly, are expressed in Brainish, an enormously expressive language capable of generating the illusion of sensations, actions, and feelings. See, e.g., "Chunking Mechanisms and Learning" (Gobet and Lane, 2012). (7) It gives some understanding of how a pain or pleasure experience, not just its simulation, is produced (Sec. ...
The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by the cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness. After formally defining CTM, we give a formal definition of consciousness in CTM. We later suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the range of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.
... 55 See e.g., "Chunking Mechanisms and Learning" (Gobet, 2012). 56 Inconsistencies are detected by the Model-of-the-World processors (Chapter 3 (2)), among others. ...