
Abstract

Chunks are among the best-known and most widely recognized constructs in cognitive architecture and human information processing. Nevertheless, their nature remains elusive, especially in procedural knowledge. This study examines basic features of procedural information processing and the manifestation of chunks in procedural knowledge. The participants' task was to reconstruct sequences of chess moves; chess was chosen as the experimental domain because of its complexity, well-defined rules, and standardized measure of player strength. From the results we conclude that short-term memory capacity is determined by the combination of the size and the number of procedural chunks recalled into short-term memory. We show that, on average, participants with more specialized knowledge operated faster and with larger chunks of procedural information than participants with less specialized knowledge. In procedural information processing, the level of expertise and the sorting order of the retrieved information are important factors influencing the number of procedural chunks retained in short-term memory. The capacity of short-term memory in complex situations therefore cannot be expressed as a simple concept.
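
As a rough illustration of the conclusion that capacity reflects both how many procedural chunks are held and how large each chunk is, the Python sketch below contrasts a novice-like and an expert-like encoding of the same move list under the same chunk limit. It is a toy model with made-up chunk sizes, not the authors' analysis.

```python
# Toy model (our illustration): total moves reconstructed depends jointly on
# the number of chunks held in short-term memory and the size of each chunk.

def recalled_moves(chunk_sizes, capacity_in_chunks):
    """Total moves reconstructed when STM holds at most `capacity_in_chunks` chunks."""
    return sum(chunk_sizes[:capacity_in_chunks])

novice_chunks = [1, 1, 2, 1]   # hypothetical: mostly single moves per chunk
expert_chunks = [4, 3, 5, 4]   # hypothetical: familiar multi-move sequences per chunk

for label, chunks in (("novice", novice_chunks), ("expert", expert_chunks)):
    print(label, recalled_moves(chunks, capacity_in_chunks=3))
# Same chunk limit (3 chunks), but the expert reconstructs far more moves.
```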
Article · Full-text available
We investigate experts' ability to assess the difficulty of a mental task for a human. The final aim is to find formalized measures of difficulty that could be used in automated assessment of the difficulty of a task. In experiments with tactical chess problems, the experts' estimations of difficulty are compared to the statistic-based difficulty ratings on the Chess Tempo website. In an eye tracking experiment, the subjects' solutions to chess problems and the moves that they considered are analyzed. Performance data (time and accuracy) are used as indicators of subjectively perceived difficulty. We also aim to identify the attributes of tactical positions that affect the difficulty of the problem. Understanding the connection between players' estimation of difficulty and the properties of the search trees of variations considered is essential, but not sufficient, for modeling the difficulty of tactical problems. Our findings include that (a) assessing difficulty is also very difficult for human experts, and (b) algorithms designed to estimate difficulty should interpret the complexity of a game tree in the light of knowledge-based patterns that human players are able to detect in a chess problem.
Article · Full-text available
Experts’ remarkable ability to recall meaningful domain-specific material is a classic result in cognitive psychology. Influential explanations for this ability have focused on the acquisition of high-level structures (e.g., schemata) or experts’ capability to process information holistically. However, research on chess players suggests that experts maintain some reliable memory advantage over novices when random stimuli (e.g., shuffled chess positions) are presented. This skill effect cannot be explained by theories emphasizing high-level memory structures or holistic processing of stimuli, because random material contains neither large structures nor wholes. By contrast, theories hypothesizing the presence of small memory structures—such as chunks—predict this outcome, because some chunks still occur by chance in the stimuli, even after randomization. The current meta-analysis assessed the correlation between level of expertise and recall of random material in diverse domains. The overall correlation was moderate but statistically significant (r = .41, p < .001), and the effect was observed in nearly every study. This outcome suggests that experts partly base their superiority on a vaster amount of small memory structures, in addition to high-level structures or holistic processing.
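
The abstract reports a pooled correlation of r = .41 across studies. A standard way to pool correlations in a meta-analysis is Fisher's z-transformation with sample-size weights; the sketch below shows that calculation on hypothetical (r, n) pairs, not the actual studies included in the meta-analysis.

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooling of correlations via Fisher's z, weighted by n - 3."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)          # Fisher z-transform of each study's r
        w = n - 3                  # approximate inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)    # back-transform the weighted mean z to r

# Hypothetical (r, sample size) pairs for illustration only:
print(round(pooled_correlation([(0.35, 40), (0.50, 25), (0.42, 60)]), 2))
```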
Article · Full-text available
This paper attempts to evaluate the capacity of immediate memory to cope with new situations in relation to the compressibility of information likely to allow the formation of chunks. We constructed a task in which untrained participants had to immediately recall sequences of stimuli with possible associations between them. Compressibility of information was used to measure the chunkability of each sequence on a single trial. Compressibility refers to the recoding of information into a more compact representation. Although compressibility has almost exclusively been used to study long-term memory, our theory suggests that a compression process relying on redundancies within the structure of the list materials can occur very rapidly in immediate memory. The results indicated a span of about three items when the list had no structure; the span increased linearly as structure was added. The amount of information retained in immediate memory was maximal for the most compressible sequences, particularly when information was ordered in a way that facilitated the compression process. We discuss the role of immediate memory in the rapid formation of chunks made up of new associations that did not already exist in long-term memory, and we conclude that immediate memory is the starting place for the reorganization of information.
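
To make the notion of compressibility concrete, the sketch below uses an off-the-shelf compressor (zlib) as a crude proxy for how much a sequence can be recoded into a more compact representation. The original study used its own compressibility measure, so this is only an assumption-laden illustration of the idea that structured lists are more chunkable.

```python
import random
import zlib

def compressibility(seq):
    """Rough proxy: 1 - compressed/raw length; higher values mean more exploitable structure."""
    raw = "".join(seq).encode()
    return 1 - len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
unstructured = [random.choice("ABCDEFGH") for _ in range(120)]  # little regularity
structured = list("ABCD" * 30)                                  # strongly chunkable repetition

print(round(compressibility(unstructured), 2), round(compressibility(structured), 2))
# The structured sequence compresses far better, i.e. it is easier to recode into chunks.
```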
Chapter
We investigate the question of automatic prediction of task difficulty for humans on problems that are typically solved through informed search. Our experimental domain is the game of chess. We analyse experimental data from human chess players solving tactical chess problems; the players also estimated the difficulty of these problems. We carried out an experiment with an approach that automatically estimates the difficulty of problems in this domain. The idea of this approach is to use the properties of a “meaningful search tree” to learn to estimate the difficulty of example problems. The construct of a meaningful search tree is an attempt at approximating problem solving by human experts. The learned difficulty classifier was applied to our experimental problems, and the resulting difficulty estimates matched well with the measured difficulties on the Chess Tempo website, and also with the average difficulty perceived by the players.
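
A minimal sketch of the general idea, under our own assumptions (the chapter's actual feature set and learner are not specified here): describe each problem by hypothetical properties of its tree of considered variations and fit a classifier that maps those features to a difficulty class.

```python
# Hypothetical search-tree features and labels for illustration only.
from sklearn.linear_model import LogisticRegression

# Features per problem: [tree size, max depth, branching factor, non-obvious key moves]
X = [
    [12, 3, 2.0, 0],   # small, shallow tree of variations
    [20, 4, 2.5, 1],
    [80, 6, 3.5, 2],   # large, deep tree of variations
    [150, 8, 4.0, 3],
]
y = [0, 0, 1, 1]       # 0 = easy, 1 = difficult (illustrative labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict([[60, 5, 3.0, 2]]))  # predicted difficulty class for a new problem
```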
Article
The main research question of this study is how the processing of information relates to different contextual characteristics: more specifically, how the context is associated with the efficiency of information processing (success and speed), the size of chunks, the speed of chunk processing, and the recall of a chunk. The research domain was the game of chess. The efficiency of information processing and the chunk characteristics were defined through the reconstruction of sequences of chess moves. Context variables were defined using a slightly adapted chess program; variables describing the dispersion, deviation, complexity, and positivity of information were extracted for each chess position. Overall, the results showed that higher dispersion and complexity and lower positivity of information in a context lead to less efficient information processing. The results support the assumptions of cognitive load theory about the negative effects of the burden of external factors on information processing and working memory. Our results also support the ACT-R theory, which suggests that more frequent information has a higher activation level and can therefore be retrieved more easily and quickly. The results are also congruent with the positivity effect, which proposes that it is easier to remember positive information than negative information. The findings of our study can be beneficial for the development of intelligent tutoring systems and the design of human–computer interaction systems.
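
The abstract does not define the context variables precisely, so the sketch below is only a guess at plausible operationalizations: it derives dispersion, complexity, and positivity from a hypothetical list of engine evaluations (in pawns) of a position's legal moves.

```python
# Illustrative only: these definitions are our assumptions, not the paper's.
from statistics import pstdev

def context_features(move_evals):
    """Derive three context descriptors from hypothetical engine evaluations of the legal moves."""
    best = max(move_evals)
    return {
        "dispersion": pstdev(move_evals),                        # spread of the move values
        "complexity": sum(e > best - 0.5 for e in move_evals),   # number of near-best alternatives
        "positivity": best,                                      # how favourable the position is
    }

print(context_features([1.2, 1.1, -0.3, 0.9, -1.5]))
```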
Chapter
The growing use of computers such as tablets and PCs in educational settings is enabling more students to study online courses featuring computer-aided tests. Preparing these tests imposes a large burden on teachers, who have to prepare a large number of questions because they cannot reuse the same questions many times: students can easily memorize the solutions and share them with other students, which degrades test reliability. Another burden is appropriately setting the level of question difficulty to ensure test discriminability. Using magic square puzzles as examples of mathematical questions, we developed a method for automatically preparing puzzles with appropriate levels of difficulty. We used crowdsourcing to collect answers to sample questions to evaluate their difficulty. Item response theory was used to evaluate the difficulty of the questions from crowdworkers’ answers. Deep learning was then used to build a model for predicting the difficulty of new questions.
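
Since item response theory is named explicitly, a minimal Rasch (one-parameter logistic) sketch illustrates how item difficulties can be estimated from a 0/1 answer matrix. The simulated crowdworker data and the simple joint maximum-likelihood fit are our assumptions, not the authors' pipeline.

```python
# Minimal Rasch (1PL) sketch: P(correct) = sigmoid(theta_worker - b_item).
import numpy as np

rng = np.random.default_rng(0)
true_b = np.linspace(-1.5, 1.5, 8)                  # hypothetical puzzle difficulties
true_theta = rng.normal(size=200)                   # hypothetical worker abilities
P = 1 / (1 + np.exp(-(true_theta[:, None] - true_b[None, :])))
R = (rng.random(P.shape) < P).astype(float)         # simulated 0/1 answer matrix

theta = np.zeros(R.shape[0])                        # ability estimates
b = np.zeros(R.shape[1])                            # difficulty estimates
for _ in range(500):                                # gradient ascent on the log-likelihood
    E = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    theta += 0.05 * (R - E).sum(axis=1)
    b -= 0.05 * (R - E).sum(axis=0) / R.shape[0]
b -= b.mean()                                       # anchor the scale

print(np.round(b, 2))                               # higher b = harder puzzle; tracks true_b
```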
Article
Chunking is the recoding of smaller units of information into larger, familiar units. Chunking is often assumed to help bypass the limited capacity of working memory (WM). We investigate how chunks are used in WM tasks, addressing three questions: (a) Does chunking reduce the load on WM? Across four experiments, chunking benefits were found not only for recall of the chunked but also of other, not-chunked information concurrently held in WM, supporting the assumption that chunking reduces load. (b) Is the chunking benefit independent of chunk size? The chunking benefit was independent of chunk size only if the chunks were composed of unique elements, so that each chunk could be replaced by its first element (Experiment 1), but not when several chunks consisted of overlapping sets of elements, disabling this replacement strategy (Experiments 2 and 3). The chunk-size effect is not due to differences in rehearsal duration, as it persisted when participants were required to perform articulatory suppression (Experiment 3). Hence, WM capacity is not limited to a fixed number of chunks regardless of their size. (c) Does the chunking benefit depend on the serial position of the chunk? Chunks in early list positions improved recall of other, not-chunked material, but chunks at the end of the list did not. We conclude that a chunk reduces the load on WM via retrieval of a compact chunk representation from long-term memory that replaces the representations of individual elements of the chunk. This frees up capacity for subsequently encoded material.
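
A toy sketch of the proposed mechanism (our illustration, with hypothetical chunks, not the authors' materials): a segment that matches a chunk stored in long-term memory is replaced in WM by one compact token, so material encoded after an early chunk finds more free capacity.

```python
# Hypothetical chunks known in long-term memory, mapped to a single compact token.
LTM_CHUNKS = {("F", "B", "I"): "FBI", ("C", "I", "A"): "CIA"}

def encode(items, capacity=4):
    """Encode a list into a chunk-limited store, replacing known chunks by one token."""
    store, i = [], 0
    while i < len(items):
        for chunk, token in LTM_CHUNKS.items():
            if tuple(items[i:i + len(chunk)]) == chunk:
                store.append(token)            # one compact representation for the whole chunk
                i += len(chunk)
                break
        else:
            store.append(items[i])             # an unchunked element occupies its own slot
            i += 1
    return store[:capacity]                    # WM keeps only what fits

print(encode(list("FBIXQZW")))  # ['FBI', 'X', 'Q', 'Z'] -> 6 of 7 letters retained
print(encode(list("JKLXQZW")))  # ['J', 'K', 'L', 'X']   -> only 4 letters retained
```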