
Chunking mechanisms and learning

Fernand Gobet
Department of Psychology, Brunel University
Uxbridge
United Kingdom
fernand.gobet@brunel.ac.uk
Peter C. R. Lane
School of Computer Science, University of Hertfordshire
Hatfield
United Kingdom
p.c.lane@herts.ac.uk
Synonyms
Definition
A chunk is a meaningful unit of information built from smaller pieces of information, and chunking is the
process of creating a new chunk. Thus, a chunk can be seen as a collection of elements that have strong
associations with one another, but weak associations with elements belonging to other chunks. Chunks, which
can be of different sizes, are used by memory systems and more generally by the cognitive system. Within
this broad definition, two further meanings can be differentiated. First, chunking can be seen as a deliberate,
conscious process. Here, we talk about goal-oriented chunking. Second, chunking can be seen as a more
automatic and continuous process that occurs during perception. Here, we talk about perceptual chunking.
Gobet, F., & Lane, P. C. R. (2012). Chunking mechanisms and learning. In N. M. Seel (Ed.),
Encyclopedia of the sciences of learning. New York, NY: Springer.
Theoretical Background
Chunking as a mechanism was initially proposed by De Groot (1946/1978) in his study of chess experts’
<<link to Development of expertise>> perception, memory, and problem solving, to explain their ability to
recall briefly presented positions with a high level of precision. It was also a central ingredient of Miller’s
(1956) classical article about the limits on human information-processing capacity. Miller proposed that chunks
are the correct measure for the information in the human cognitive system, and that 7 ± 2 chunks can be
held in short-term memory. Chase and Simon (1973) proposed a general theory of processes underpinning
chunking. It is interesting to note that the approaches of De Groot as well as Chase and Simon emphasize
the implicit nature of chunks, which are seen as the product of automatic learning processes, sometimes called
perceptual chunking. Miller’s view emphasizes a type of strategic, goal-oriented chunking, where chunking is
essentially re-coding of the information in a more efficient way. For example, the 9-digit binary number
101000111 can be re-coded as the 3-digit decimal number 327, making it easier to process and memorize for
humans. The presence of chunks explains how humans, in spite of strict cognitive limitations in memory
capacity, attention, and learning rate, can cope efficiently with the demands of the environment. Chunking
has been established as one of the key mechanisms of human cognition, and plays an important role in
showing how internal cognitive processes are linked to the external environment.
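Miller's binary-to-decimal re-coding example can be sketched in a few lines; the code (Python, purely illustrative) simply makes the arithmetic of the example explicit:

```python
# Goal-oriented chunking as re-coding (Miller, 1956): the nine binary digits
# 101000111 are re-coded as the three decimal digits 327, so only three
# items need to be held in short-term memory instead of nine.
def recode(binary_digits: str) -> str:
    """Re-code a binary digit string as the equivalent decimal string."""
    return str(int(binary_digits, 2))

print(recode("101000111"))  # '327': three items instead of nine
```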
There is considerable empirical evidence supporting the notion of a chunk, for example in our ability to
perceive words, sentences or even paragraphs as single units, bypassing their representation as collections of
letters or phonemes; this explains, for example, how skilled readers may be insensitive to word repetition or
deletion. Particularly strong evidence is found in those studies which use information about the timing of
responses to infer the presence of chunks. The use of response times assumes that the output of elements
within a chunk will be faster than the output of elements across different chunks. This is because the
elements within a chunk belong to the same structure, as well as sharing a number of relations. There is good
empirical evidence confirming that subjects’ pauses are shorter within chunks than between chunks. For
example, timing information shows that when the alphabet is recited back, letters are grouped in clusters, and
clusters grouped in super-clusters. When trained to learn alphabets using scrambled letter orders, subjects also
recall letters in a burst of activity followed by a pause, and therefore show evidence for clusters.
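The latency-based method described above can be sketched as a simple segmentation rule: an inter-item pause longer than a threshold marks a chunk boundary. The latencies and the 2-second threshold below are illustrative assumptions, not data from the studies cited:

```python
# Infer chunk boundaries from response times: pauses longer than the
# threshold are taken as transitions between chunks.
def segment_by_latency(items, latencies, threshold=2.0):
    """Split a recalled sequence into chunks wherever the pause before an
    item exceeds the threshold. latencies[i] is the pause (in seconds)
    before items[i]; latencies[0] is ignored (start of recall)."""
    chunks, current = [], [items[0]]
    for item, pause in zip(items[1:], latencies[1:]):
        if pause > threshold:     # long pause: close the current chunk
            chunks.append(current)
            current = [item]
        else:                     # short pause: same chunk continues
            current.append(item)
    chunks.append(current)
    return chunks

# Reciting letters: long pauses mark cluster boundaries.
print(segment_by_latency(list("ABCDEFG"), [0.0, 0.4, 0.5, 2.6, 0.4, 2.8, 0.3]))
# [['A', 'B', 'C'], ['D', 'E'], ['F', 'G']]
```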
The strongest empirical evidence for chunks is based on their inference from several converging methods.
For example, studies on chess have shown that chunks identified by latencies in recall or placement of chess
pieces correlate highly with chunks identified by the number of relations shared between successively placed
pieces. By analyzing the patterns picked out by chess players within a position for various natural relations
(including proximity, color, and relations of attack or defense), it is evident that within-chunk relations are
much stronger than between-chunk relations. This pattern was found whether the subjects were asked to
place pieces on the board from memory (using timings to separate the groups), or to copy a board (using the
presence of glances between the two boards to separate the groups). Further empirical evidence for chunking
has been uncovered in a number of other areas including artificial-grammar learning, problem solving, and
animal research.
The chunking theory, developed by Chase and Simon (1973), was an important attempt to formalize the
mechanisms linked to chunking. It postulated that attention is serial and short-term memory limited to about
seven items (Miller’s magical number). When individuals acquire information about a domain with practice
and study, they acquire an increasingly larger number of chunks, which themselves tend to become larger, up
to a limit of four or five items. While learning is assumed to be slow (10 seconds per chunk), recognition of
the information stored in a chunk occurs in a matter of hundreds of milliseconds. Another important
assumption is that chunks are linked to potentially useful information. For example, in chess, the domain in which the
theory was first applied, a chunk could provide information about potentially useful moves (see Figure 1).
Chunks help in a recall task, because groups of pieces rather than individual pieces can be stored in short-
term memory. They also help in a problem-solving task, because some of the chunks, being linked to
potentially useful information, provide clues about what kind of action should be taken.
Figure 1. Top panel: examples of chunks in a chess position. Bottom panel: one of the chunks elicits a
possible move (retreating the white bishop).
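The theory's learning-rate parameter supports a back-of-the-envelope calculation. At roughly 10 seconds per chunk, acquiring the tens of thousands of chunks attributed to chess masters implies years of study; the 50,000-chunk figure below is an order-of-magnitude estimate from the expertise literature, used here purely for illustration:

```python
# Rough implication of the chunking theory's slow learning rate:
# accumulated time spent purely on chunk acquisition for an expert.
SECONDS_PER_CHUNK = 10          # learning rate assumed by the chunking theory
chunks_needed = 50_000          # illustrative order of magnitude for chess mastery
hours_of_pure_learning = chunks_needed * SECONDS_PER_CHUNK / 3600
print(round(hours_of_pure_learning))  # ~139 hours of chunk learning alone
```

Spread across realistic practice conditions, this is consistent with expertise taking many years to develop.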
There is also evidence that people, in particular experts in a domain, use higher-level representations than
chunks. For example, data from chess research indicate that sometimes the entire position, up to 32 pieces, is
handled as a single unit by grandmasters. In addition, evidence from expertise research indicates that
information can sometimes be encoded in long-term memory faster than the 10 seconds proposed by
chunking theory. Together, these results led to a revision of the chunking theory with the template theory
(Gobet & Simon, 1996). The template theory proposes that frequently used chunks become “templates”, a
type of schema. A template consists of a core, which contains constant information, and slots, where variable
information can be stored. The presence of templates considerably expands experts’ memory capability.
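The core-plus-slots structure of a template can be sketched as a small data structure. This is a minimal illustration of the idea, not the representation used in the template theory's implementations; the chess-flavored field names are our own:

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """A template in the sense of Gobet and Simon (1996): a core of
    constant information plus slots for variable information."""
    core: frozenset                              # constant information
    slots: dict = field(default_factory=dict)    # variable information, filled at encoding time

    def fill(self, slot_name, value):
        self.slots[slot_name] = value

# A frequently seen pattern becomes the core; a variable piece fills a slot.
pawn_structure = Template(core=frozenset({"Pe4", "Pd4", "Ke1"}))
pawn_structure.fill("kingside_minor_piece", "Nf3")
print(pawn_structure.slots)  # {'kingside_minor_piece': 'Nf3'}
```

Because slot filling is fast, templates let experts encode variable details of a familiar situation far more quickly than learning a new chunk would allow.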
A methodological difficulty with research on chunking has been to precisely identify the boundaries between
chunks. For example, the most direct explanation for observing a set of actions as a chunk is for the actions
to be represented internally as a single unit, i.e. a chunk, and so retrieved and output together. However, it is
also possible for a subject to plan output actions ahead, and so either break long sequences into sub-parts
(e.g., to take a breath when reciting the alphabet) or else compose short sequences into what appear as longer
ones (e.g., where a second chunk begins naturally from where the first one finished). Distinguishing between
these types is only possible with the aid of a computational model, where the precise items of information
known by the subject at a given point in time can be ascertained (Gobet et al., 2001). The advantage of using
computer models is discussed in more detail in the entry on Learning in the CHREST cognitive architecture,
a model based on the template theory.
Chunk-based theories, such as the chunking and template theories, not only provide a powerful explanation
of learning and expert behavior, but also offer useful information as to how learning occurs in the classroom
and how it could be improved (Gobet, 2005). We briefly discuss some of the implications for education
(further principles are listed in Table 1).
Table 1. Educational principles derived from chunk-based theories (after Gobet, 2005).
- Teach from the simple to the complex.
- Teach from the known to the unknown.
- The elements to be learned should be clearly identified.
- Use an ‘improving spiral,’ where you come back to the same concepts and ideas and add increasingly more complex new information.
- Focus on a limited number of types of standard problem situations, and teach the various methods in these situations thoroughly.
- Repetition is necessary: go over the same material several times, using varying points of view and a wide range of examples.
- At the beginning, do not encourage students to carry out their own analysis of well-known problem situations, as they do not yet possess the key concepts.
- Encourage students to find a balance between rote learning and understanding.
A first implication of chunk-based theories is that acquiring a new chunk has a time cost, and therefore time
at the task is essential, be it in mathematics or dancing. As documented by research into deliberate practice
<<link to Deliberate practice>>, practice must be tailored to the goal of improving performance. Chunk-
based theories give attention a central role – see for example the CHREST model – and such theories are
therefore suitable models of deliberate practice. In particular, conceptual knowledge is built on perceptual
skills, which in turn must be anchored in concrete examples. Thus, curricula should provide the means to acquire
perceptual chunks in a given domain.
There are different useful ways to direct attention and to encourage the acquisition of perceptual chunks: to
segment the curriculum into natural components, of the right size and difficulty; to present these
components with an optimal ordering and suitable feedback; and to highlight the important features of a
problem.
If perceptual chunking is an important way of storing knowledge, then a clear consequence is that transfer
will be difficult. Unfortunately for learners, this prediction is correct, both for school knowledge and more
specific skills such as sports and arts. More than 100 years of research have established that transfer is
possible from one domain to another only when the components of the skills required in each domain
overlap. Thus, it might be helpful to augment the teaching of specific knowledge with the teaching of
metaheuristics – including strategies about how to learn, how to direct one’s attention, and how to monitor
and regulate one’s limited cognitive resources.
As noted above, an important idea in Chase and Simon’s (1973) theory is that perceptual chunks can be used
as the conditions of actions, thus leading to the acquisition of productions. An important aspect of
education is to balance the acquisition of the condition and action parts of productions. Another important
aspect of education is to favor the acquisition of templates (schemata). Templates are created when the
context offers both constant and variable information. As a consequence, and as is well established in the
educational literature, it is essential to have variability during learning if templates are to be created.
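The chunk-as-condition idea can be sketched as a lookup from recognized patterns to candidate actions. The patterns and moves below are invented for illustration and are not drawn from the chess studies cited:

```python
# Productions in the spirit of Chase and Simon (1973): recognizing a
# perceptual chunk (the condition) retrieves an associated action.
productions = {
    frozenset({"Bc4", "pf7"}): "consider Bxf7+",
    frozenset({"Ke8", "Rh8"}): "watch for back-rank threats",
}

def suggest(position_features):
    """Return actions whose condition chunk is fully present in the position."""
    return [action for chunk, action in productions.items()
            if chunk <= position_features]

print(suggest(frozenset({"Bc4", "pf7", "Ke1"})))  # ['consider Bxf7+']
```

Education, on this view, must train both sides of each production: the perceptual chunks that serve as conditions and the actions they should trigger.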
Finally, chunk-based theories are fairly open to the possibility of large individual differences in people’s
cognitive abilities. In particular, while they postulate fixed parameters for short-term memory capacity and
learning rates, it is plausible that these parameters vary between individuals. In addition, differences in
knowledge will lead to individual differences in performance. A clear prediction of chunk-based theories is
that individual differences play a large role in the early stages of learning, as is typical of classroom
instruction, but tend to be less important after large amounts of knowledge have been acquired through
practice and study.
Important Scientific Research and Open Questions
Chunk-based theories have spurred vigorous research in several aspects of learning and expertise. A first
aspect is the acquisition of language, where recent research has shown that chunking plays an important role
in the development of vocabulary and syntactic structures. A second aspect is related to the neurobiological
basis of chunking. Recent results indicate that perceptual chunks are stored in the temporal lobe, and in
particular the parahippocampal gyrus and fusiform gyrus.
Other issues currently being researched include the effect of order in learning, and in particular how curricula
can be designed so that they optimize the transmission of knowledge. A possible avenue for future
research is the design of computer tutors that use chunking principles for teaching various materials,
optimizing instruction for the abilities and level of each student by providing personalized curricula,
giving judicious feedback, and teaching learning strategies.
Cross-References
→ Bounded rationality and learning
→ Decision making and learning
→ Deliberate practice
→ Development of expertise
→ Learning in the CHREST cognitive architecture
→ Schema
References
1. Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55-81.
2. De Groot, A. D. (1978). Thought and choice in chess (first Dutch edition in 1946). The Hague: Mouton Publishers.
3. Gobet, F. (2005). Chunking models of expertise: Implications for education. Applied Cognitive Psychology, 19, 183-204.
4. Gobet, F., Lane, P. C. R., Croker, S., Cheng, P. C-H., Jones, G., Oliver, I., & Pine, J. M. (2001). Chunking mechanisms in human learning. Trends in Cognitive Sciences, 5, 236-243.
5. Gobet, F., & Simon, H. A. (1996). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1-40.
6. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Definitions
Chunk: A meaningful unit of information built from smaller pieces of information.
Chunking: The creation of new chunks in long-term memory.
Short-term memory span: The largest amount of information that can be held in short-term memory at a
given time.
Chunking theory: Theory developed by Chase and Simon in 1973, explaining how experts circumvent the
limitations of cognitive processes through the acquisition of domain-specific knowledge, in particular
small meaningful units of inter-connected elements (chunks).
Template theory: Theory of expertise, developed in 1996 by Gobet and Simon, building on the chunking
theory and proposing that well-elaborated chunks lead to larger meaningful units (templates).