Article

The magical number seven, plus or minus two: Some limits on our capacity to process information


... Working memory is a vital part of the human memory system that is used to process and temporarily store information that is required to carry out complex cognitive tasks, such as reading comprehension (e.g., Baddeley, 1998, 2000; Baddeley & Hitch, 1974; Cowan, 2017; Daneman & Carpenter, 1980; Just & Carpenter, 1992). It is assumed to have limited resources (e.g., Miller, 1956; Simon, 1974) that, in the context of reading comprehension, must be shared between the processing of newly read text information and the maintenance of relevant information from the preceding text and the readers' background knowledge (Graesser et al., 1997; Kintsch, 1998; van den Broek, 2010). As working memory constrains the cognitive resources available to the reader for information processing and storage (see Baddeley, 1998; Baddeley & Hitch, 1974; Cowan, 1988, 2017), it plays an important role in the construction of a coherent mental representation (e.g., Hannon, 2012; Kintsch, 1988; Linderholm et al., 2004). ...
... It is crucial for the detection of inconsistencies that the two are not only active but are co-activated in memory (van den Broek & Kendeou, 2008). Consequently, working memory could play a role in this co-activation (e.g., Hannon & Daneman, 2001;Singer, 2006) and, since its capacity is assumed to be limited (Miller, 1956;Simon, 1974), could serve as a bottleneck for the processing of inconsistencies. ...
... Working memory is an important part of human memory where information is processed and temporarily stored (e.g., Baddeley, 1998, 2000; Baddeley & Hitch, 1974; Cowan, 2017; Daneman & Carpenter, 1980; Just & Carpenter, 1992). Working memory has a limited capacity (e.g., Miller, 1956; Simon, 1974) that, during reading, must be divided between processing the newly read text and keeping active the relevant information from the preceding text and the reader's background knowledge (Graesser et al., 1997; Kintsch, 1998; van den Broek, 2010). In the context of validation, working memory limits the amount of information available to the validation process (e.g., Hannon & Daneman, 2001; Singer, 2006), which may interfere with the ability to detect and resolve incongruities or inaccuracies in a text during reading. ...
Book
Full-text available
Discourse allows us to exchange meaning in a way that many consider to be fundamentally human or, as Graesser, Millis and Zwaan put it so eloquently, “Discourse is fundamental. It is what makes us human, what allows us to communicate ideas, facts, and feelings across time and space.” (Graesser et al., 1997, p. 164). To comprehend discourse and, more generally, to comprehend the world around us, we continuously build mental representations in which we integrate the current input with our existing knowledge base, for example when we read a book, watch a movie or have a conversation. Building this representation is a dynamic process; the emerging representation must be monitored and updated continuously as new information is encountered (e.g., Graesser et al., 1994; Kintsch & van Dijk, 1978; Trabasso et al., 1984; van den Broek et al., 1999). An essential aspect of building such a mental representation is that comprehenders routinely monitor to what extent incoming information is both coherent and accurate, a process called validation (e.g., Isberner & Richter, 2014a; O’Brien & Cook, 2016a; Richter & Rapp, 2014; Singer, 2013, 2019; Singer et al., 1992; Singer & Doering, 2014). Validation processes function as a gatekeeper for the quality and coherence of the mental representation: Only information that is successfully validated is integrated into the mental representation. Thus, by validating incoming information readers establish coherence during comprehension and protect the emerging mental representation against inaccuracies or incongruencies (e.g., O’Brien & Cook, 2016a, 2016b; Richter & Rapp, 2014; Singer, 2013, 2019; Singer et al., 1992). The studies described in this thesis focus on validation processes in the context of reading comprehension. The rise of digital technology allows us unprecedented access to (textual) information. This provides excellent opportunities to acquire new knowledge, but also requires a much more vigilant, knowledgeable reader: Anyone can put information on the internet, therefore the texts available online vary not only in linguistic quality, but also in accuracy and trustworthiness. In light of these developments, it is important that we understand how readers validate (written) materials against various sources of information. Current theoretical frameworks propose a rudimentary cognitive architecture for validation processes, but they do not provide detailed information on when and how different sources of information, such as recently acquired knowledge (from the text) and readers’ background knowledge (from memory), exert their influence. As a result, it is unclear whether these two sources influence validation in essentially the same or in distinct ways and, hence, whether they should be distinguished in theoretical models.
... Microlearning is a training method that has the potential to realize integrated, individualized learning journeys on the job by providing interactive, sequential learning chunks, or so-called learning nuggets [5]; see Figure 1. Microlearning is specifically designed to accommodate the human brain's cognitive limitations on information processing in short-term memory [9]. Microlearning could thus enable learning at high frequency, e.g. ...
... However, the learning research community has been widely focusing on training formats for the desk-based workforce [5]. In order to understand the research gaps around integrated technology-mediated learning in industrial environments, adjacent fields like industrial virtual and augmented reality training, assistance systems, vocational training, learning factories, e-learning, as well as psychological microlearning research and learning theory need to be considered [9], [5], [13]. ...
Conference Paper
The manufacturing skills gap, demographic change, and advancing digital transformation are imposing major challenges on production systems and their workforce. These challenges require increased systematic up- and re-skilling of manufacturing employees. Traditional, off-the-job trainings may be insufficient to address changing learning needs, often requiring people to intermit their work, and struggling with low engagement, effectiveness, and scalability. This gives rise to technology-mediated learning concepts, such as microlearning, which promise to bridge the gap between lifelong learning demands and operational limitations on the shop floor. However, empirical studies on the effects of industrial microlearning remain rare. This paper addresses this gap by a) investigating a systematic, human-centric approach to conceptualizing, implementing, and evaluating microlearning, and b) assessing feasibility, acceptance, and effectiveness of on-the-job microlearning in a mixed methods study, combining workshops, interviews, questionnaires, observations, and an experimental pilot study. The study conducted with 10 technicians confirms the feasibility, acceptance, and effectiveness of microlearning for lean methods in a low-volume, high-complexity electronics plant compared to classroom training. This paper indicates a high potential for industrial microlearning as an avenue for future research.
... Secondly, at least one object has to be classified under every class, thus providing evidence for conciseness [45]. The conciseness criterion is also restricted by the human capacity for information processing, which suggests having seven plus or minus two class categories [45,118]. Thirdly, every class and class subcategory has to be unique within class categories to avoid redundancy. ...
... Third, the proposed classification scheme is extendible, suggesting that researchers may add new class categories, class subcategories or classes as technology advances. Next, the proposed classification scheme has been considered concise as the number of class categories, the number of class subcategories within each class category, and the number of classes within each class subcategory have been lower than the suggested upper limit of nine items [45,118]. Furthermore, as the results of the systematic literature review suggest, the classification scheme can identify differences between IVR applications for DRs, making it explanatory [45]. ...
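The conciseness criterion described in these excerpts lends itself to a simple automated check. Below is a minimal sketch, ours rather than the paper's; all names such as check_conciseness are hypothetical. It flags categories in a classification scheme that are empty or that exceed the nine-item upper bound of 7±2:

```python
# Conciseness check per Miller's 7±2: each category should hold at least one
# and at most nine items. Hypothetical illustration, not the paper's tooling.
MAX_ITEMS = 9  # upper bound of the 7±2 range

def check_conciseness(scheme: dict[str, list[str]]) -> list[str]:
    """Return descriptions of categories violating the conciseness criterion."""
    violations = []
    for category, members in scheme.items():
        if not members:
            violations.append(f"{category}: empty (no object classified under it)")
        elif len(members) > MAX_ITEMS:
            violations.append(f"{category}: {len(members)} items exceeds {MAX_ITEMS}")
    return violations

scheme = {"Input": ["controller", "gesture", "voice"],
          "Navigation": ["teleport", "fly", "walk"],
          "Output": []}
print(check_conciseness(scheme))  # flags 'Output' as empty
```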
Article
The development of immersive virtual reality (IVR) applications for design reviews is a major trend in the design field. While many different applications have been developed, there is little consensus on the functionalities necessary for these applications. This paper proposes a classification scheme for IVR functionalities related to design reviews (DRs), combining conceptual-to-empirical and empirical-to-conceptual strategies. The classification scheme consists of eight class categories (Input, Representation, Navigation, Manipulation, Collaboration, Edit, Creation, and Output), 22 class subcategories, and 55 classes. The classification scheme has been validated by analysing several commercial IVR applications for DRs. As part of the classification scheme development, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was utilised to review 70 articles that develop IVR applications for DRs. The results from systematic literature reviews suggest the development of solutions that integrate several class categories, are better connected to current design workflows, include various design information, support a DR planning cycle, and support distributed work. The proposed classification scheme helps to orient the future development of IVR applications for DRs and provides a framework to systematically accumulate evidence on the effect of such applications on DRs.
... Rosenzweig (2015) stated that good application design involves various fields of study working together (i.e., it is multidisciplinary) and that, with respect to human memory, there is a specific limit connected to cognitive load. One of the best-known examples of this limit is the 7±2 rule (Miller, 1956). According to this rule, the number of items a person can hold in short-term memory lies in the range of 5-9 (Miller, 1956). ...
... One of the best-known examples of this limit is the 7±2 rule (Miller, 1956). According to this rule, the number of items a person can hold in short-term memory lies in the range of 5-9 (Miller, 1956). Although not designed to test this rule, a study on human psychology was carried out by Iyengar and Lepper (2000). In this study, a scenario was created in a supermarket in which displays of 6 different jams and of 24 different jams were presented separately. ...
... The 5-point response format clearly has several practical and technical advantages over the 11-point response format, making it easier to implement dPROs necessary for pursuing evidence-based dentistry across dental disciplines [53,54]. Firstly, fully labeled scales are more reliable than partially labeled scales [55]. The current 11-point response format provides labels on only the first and last categories. ...
... Although the 11-point response format may help capture patient experiences more comprehensively, it may overestimate the precision of patients' responses. Clinically, a 5-point response format is less burdensome and time-consuming for respondents [18], considering that there are limits to respondents' capacity to process or discern a large number of response categories [55,56]. It is also easier for clinicians to administer the 5-point response format, especially when they are reading the response categories aloud to patients who might need assistance with filling out surveys, such as the elderly and those with low literacy levels [11,57]. ...
Article
Full-text available
Purpose We compared measurement properties of 5-point and 11-point response formats for the orofacial esthetic scale (OES) items to determine whether collapsing the format would degrade OES score precision. Methods Data were collected from a consecutive sample of adult dental patients from HealthPartners dental clinics in Minnesota (N = 2,078). We fitted an Item Response Theory (IRT) model to the 11-point response format and the six derived 5-point response formats. We compared all response formats using test (or scale) information, correlation between the IRT scores, Cronbach’s alpha estimates for each scaling format, correlations based on the observed scores for the seven OES items and the eighth global item, and the relationship of observed and IRT scores to an external criterion using orofacial appearance (OA) indicators from the Oral Health Impact Profile (OHIP). Results The correlations among scores based on the different response formats were uniformly high for observed (0.97–0.99) and IRT scores (0.96–0.99); as were correlations of both observed and IRT scores and the OHIP measure of OA (0.66–0.68). Cronbach’s alpha based on any of the 5-point formats (α = 0.95) was nearly the same as that based on the 11-point format (α = 0.96). The weighted total information area for five of six derived 5-point response formats was 98% of that for the 11-point response format. Conclusions Our results support the use of scores based on a 5-point response format for the OES items. The measurement properties of scores based on a 5-point response format are comparable to those of scores based on the 11-point response format.
... Its functioning occurs by means of mental operations, with short-term (working) memory being implicated. According to G. Miller (1956), no more than 5-9 pieces of information can be held in short-term memory [17]. By A. Elstein's estimate, a person is capable of simultaneously holding 3-5 hypotheses in working memory. ...
Article
Full-text available
The possibility is validated of using the psychological theory of decision-making in the development of expert approaches to evaluating legal competence and capacity to contract. Major steps in the decision-making process are identified and analyzed. Examined are the specific features of building up perceptions about the task being resolved, decision-making strategies, and the choice of alternatives. Based on the psychological theory of decision-making, approaches are suggested for evaluating the ability of a person to develop an awareness of legally valid behaviour and to regulate it in the course of exercising their civil rights and discharging their obligations.
... Broadbent [4] and Miller [5] demonstrated that people have a limited capacity to process information. The idea of people as limited-capacity information processors inspired researchers, and one can also find this notion underlying research that considers programming as situation awareness [6,7]. ...
... It is difficult to find a more successful approach to investigating how people interact with technologies than studying people as limited-capacity information-processing systems. In the mid-20th century, Broadbent [4] and Miller [5] demonstrated that people have a limited capacity to process information. Ever since, the idea of people as limited-capacity information processors has inspired researchers, and one can also find this notion underlying research that considers programming as situation awareness [6,7]. ...
Article
In interacting with technologies, people represent their actions and technical artefacts in their minds. The information in their mental representations, i.e., mental contents, explains what people do and why they do it. Therefore, mental contents and their analysis provide a good tool for analyzing several different types of issues in ergonomics. As the argumentation in such ergonomic research is grounded in the properties of mental contents, one can call this perspective on ergonomics content-based cognitive ergonomics.
... Working memory is a hypothetical place in the brain that is responsible for processing information, perceived from the physical world, to construct knowledge (Baddeley, 2007; Matlin, 2005). Working memory (WM), in contrast to long-term memory (LTM), has a limited capacity and can hold only 7±2 chunks of information (Miller, 1956). LTM has relatively more capacity to store "knowledge being represented as locations or nodes in networks, with networks connected (associated) with one another" (Shunk, 2008, p. 157). ...
... Intertwined with this is the importance of cognitive load theory in planning alternative instructional strategies during a disruption. Miller suggested limits to working memory capacity in the 1950s [25], a concept built upon by Sweller in the 1980s to describe the effect and influences of intrinsic (associated with essential aspects of task performance) and extraneous (associated with non-essential aspects of task performance) cognitive loads on learning during instructional events [26]. Germane cognitive load refers to that associated with intentional use of cognitive learning strategies [27]. ...
Article
Dealing with rapid, unanticipated disruptions to established learning environments is challenging. There are a number of situations that may require this, including natural disasters such as weather disturbances, viral pandemics, or political unrest and violence. For example, the COVID-19 pandemic presented medical educators with this challenge and enabled valuable lessons to be learnt. These can be utilized to prepare for other occurrences in which disruptions must be faced and high-quality education delivered. Focus should be placed both on successful transition of learning events to a new modality appropriate to the emerging climate and on reliably assessing the efficacy of these new educational strategies, with identification of those best suited to the new environment. We present a framework, based on local lessons learnt, by which the challenges faced during an educational disruption can be addressed, and describe methods to determine which changes are most effective and should be durable.
... However, eye-tracking studies suggest we can perceive text longer than a single word at one time (Rayner, 1998). Our working memory also allows us to remember multiple familiar words (Miller, 1956). Moreover, multi-word expressions can be stored in our mental lexicons (Arnon & Snider, 2010; Siyanova-Chanturia, Conklin, Caffarra, Kaan, & van Heuven, 2017). ...
Thesis
Full-text available
Language is often viewed as a sequence of discrete units, so most analyses of language material and studies of language cognition begin by determining the units of language. Most people intuitively determine linguistic units (e.g., characters, syllables, words, and sentences) in terms of distinct boundaries. Early scholars in linguistics who viewed the structure of language as a formal system introduced new units (e.g., morphemes, phrases, and clauses) to represent the formal structure of language in more detail. With the advancement of cognitive science, some scholars recently emphasized the influence of cognitive factors and actual language usage on language structure, which has led to more flexible language units. To know the language units that are genuinely used in our cognitive processes (i.e., cognitive units), this thesis adopts a usage-based view and attempts to get closer to the cognitive reality. This thesis studies cognitive units by investigating three main research questions. First, what chunks of language are learned as cognitive units? Next, how can people segment language input into cognitive units without overt patterns? Last, how do people process the cognitive units in their minds? These questions, together with the importance of investigating cognitive units, are introduced in Chapter 1. In Chapter 2, I introduce a toolbox, EasyEEG, which was developed for our subsequent multivariate pattern analysis (MVPA) of EEG data. The MVPA methods require no prior knowledge of the timing or location of neural activity. This advantage is critical to the studies in the next two chapters since previous studies had not provided sufficient timing or location knowledge of the neural activity of cognitive unit processing. In Chapters 3 and 4, I present the empirical (behavioral and neuroscience) findings of cognitive units. Chapter 3 suggests a conceptual model describing two stages of unitization during reading: In the early stage (detection stage), our mind can simultaneously detect all recognizable units nearby a point of gaze, regardless of their sizes; In the later stage (recognition stage), our minds prioritize larger detected units over smaller ones. Chapter 4 again confirms the detection stage when reading more complex strings, but fails to confirm or falsify the recognition stage. The above empirical findings suggest that our mind favors least effort and larger units, which inspired me to construct an unsupervised computational model, Less-is-Better (LiB), presented in Chapter 5. Based on the hypothesis that the cognitive units can minimize the cognitive effort of language users, the LiB model tries to find the units that minimize both the number of unit tokens (effort of working memory) and the number of unit types (effort of long-term memory). As a result, the model can segment any given text into sequences of units that show better computational performance over other commonly used units. The plausibility of the model-derived units as cognitive units should be tested in realistic cognitive tasks, i.e., tasks related to empirical human behavior. Therefore, in Chapter 6, I attempt to use the Less-is-Better units to predict eye fixations during reading under the hypothesis that eye fixations during reading are located around the centers of cognitive units. The successful predictions support not only the hypothesis, but also the cognitive reality of the LiB units.
In the final chapter, I summarize the findings of previous chapters, answer the three questions listed in Chapter 1, and discuss the findings’ implications and the theoretical connections to other domains. The current outcomes of this thesis are far from a complete understanding of the cognitive units of language, but at least it opens up a new path for studying language units. Future work can be centered around gathering more empirical information about cognitive units, improving the computational model of cognitive units, and using cognitive units in other language tasks to improve performance on these tasks.
... This limitation was imposed because, in the real world, humans can only perceive, process, and remember a limited amount of information. According to Miller's law (Miller 1956), this capacity is somewhere between seven plus or minus two items. Additionally, we set λ to 0.95 as we simulated some sort of global expert knowledge. ...
Preprint
Full-text available
Interactive Machine Learning (IML) shall enable intelligent systems to interactively learn from their end-users, and is quickly becoming more and more important. Although it puts the human in the loop, interactions are mostly performed via mutual explanations that miss contextual information. Furthermore, current model-agnostic IML strategies like CAIPI are limited to 'destructive' feedback, meaning they solely allow an expert to prevent a learner from using irrelevant features. In this work, we propose a novel interaction framework called Semantic Interactive Learning for the text domain. We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that (a) enables more semantic alignment between humans and machines and (b) at the same time helps to maintain statistical characteristics of the input domain when generating user-defined counterexamples based on meaningful corrections. Therefore, we introduce a technique called SemanticPush that is effective for translating conceptual corrections of humans to non-extrapolating training examples such that the learner's reasoning is pushed towards the desired behavior. In several experiments, we show that our method clearly outperforms CAIPI, a state of the art IML strategy, in terms of Predictive Performance as well as Local Explanation Quality in downstream multi-class classification tasks.
... Decision-making generally includes four elements: (1) a person must select one option from several alternatives; (2) there is some amount of information available with respect to the options; (3) the timeframe is relatively long (longer than a second); (4) the choice is associated with uncertainty (Wickens et al., 2003). Early research in cognition, originating in the 1950s with George Miller, focused on information processing theory and contributed to the notions of working memory capacity and information chunking (Miller, 1956). In 1976, Neisser developed the Perceptual Cycle Model (PCM) for decision making, which highlighted that both schemata (or mental models/templates) and available information from the world direct decision making (Neisser, 1976). ...
Thesis
Full-text available
The maritime industry is undergoing a transformation driven by digitalization and connectivity. There is speculation that in the next two decades the maritime industry will witness changes far exceeding those experienced over the past 100 years. While change is inevitable in the maritime domain, technological developments do not guarantee navigational safety, efficiency, or improved seaway traffic management. The International Maritime Organization (IMO) has adopted the Maritime Autonomous Surface Ships (MASS) concept to define autonomy on a scale from Degrees 1 through 4. Investigations into the impact of MASS on various aspects of the maritime sociotechnical system is currently ongoing by academic and industry stakeholders. However, the early adoption of MASS (Degree 1), which is classified as a crewed ship with decision support, remains largely unexplored. Decision support systems are intended to support operator decision-making and improve operator performance. In practice they can cause unintended changes throughout other elements of the maritime sociotechnical system. In the maritime industry, the human is seldom put first in technology design which paradoxically introduces human-automation challenges related to technology acceptance, use, trust, reliance, and risk. The co-existence of humans and automation, as it pertains to navigation and navigational assistance, is explored throughout this thesis. The aims of this thesis are (1) to understand how decision support will impact navigation and navigational assistance from the operator’s perspective and (2) to explore a framework to help reduce the gaps between the design and use of decision support technologies. This thesis advocates for a human-centric approach to automation design and development while exploring the broader impacts upon the maritime sociotechnical system. This work considers three different projects and four individual data collection efforts during 2017-2022. This research took place in Gothenburg, Sweden, and Warsash, UK and includes data from 65 Bridge Officers (navigators) and 16 Vessel Traffic Service (VTS) operators. Two testbeds were used to conduct the research in several full mission bridge simulators, and a virtual reality environment. A mixed methods approach, with a heavier focus on qualitative data, was adopted to understand the research problem. Methodological tools included literature reviews, observations, questionnaires, ship maneuvering data, collective interviews, think-aloud protocol, and consultation with subject matter experts. The data analysis included thematic analysis, subject matter expert consultation, and descriptive statistics. The results show that operators perceive that decision support will impact their work, but not necessarily as expected. The operators’ positive and negative perceptions are discussed within the frameworks of human-automation interaction, decision-making, and systems thinking. The results point towards gaps in work as it is intended to be done and work as it is done in the user’s context. A user-driven design framework is proposed which allows for a systematic, flexible, and iterative design process capable of testing new technologies while involving all stakeholders. These results have led to the identification of several research gaps in relation to the overall preparedness of the shipping industry to manage the evolution toward smarter ships. 
This thesis will discuss these findings and advocate for human-centered automation within the quickly evolving maritime industry.
... Information overload is a major source of cognitive complexity [7]. Studies have demonstrated that human cognition can process only seven plus or minus two chunks of information at a time before cognitive overload occurs [39]. In the context of ER modeling, information load is based on the number of entity types and attributes, the interrelations between entity types, and the degree of relationship types. ...
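The notion of information load in this excerpt can be made concrete with a toy calculation. The sketch below is our own rough formulation (the excerpt does not give an explicit formula): it tallies the elements a reader of an ER diagram must track and flags diagrams whose load exceeds the nine-chunk upper bound of 7±2. The function name and weighting are illustrative assumptions.

```python
# Rough illustration of ER-diagram information load: count entity types,
# attributes, and relationship types (weighted by degree), then compare the
# total to Miller's 7±2 upper bound. Hypothetical formulation, not from [7] or [39].
def information_load(n_entities: int, n_attributes: int,
                     n_relationships: int, max_degree: int) -> int:
    """Each element contributes one chunk; higher-degree relationships add more."""
    return n_entities + n_attributes + n_relationships * max_degree

load = information_load(n_entities=4, n_attributes=12,
                        n_relationships=3, max_degree=2)
print(load, "chunks;", "overload likely" if load > 9 else "within the 7±2 range")
```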
Article
Full-text available
Students in database courses often have difficulty learning entity–relationship (ER) modeling. According to semantic network theory, learning to construct an ER diagram for a database problem requires complex semantic transformations between the problem and the diagram. Such complex transformation may require excessive mental effort by learners, jeopardizing their learning outcomes. A concept map is a learning tool that incorporates elements of both learning theory and semantic network theory. In this study, concept maps were used to describe the semantic transformation process to increase learner understanding of ER modeling. An empirical experiment was conducted on two database courses (one concept-map-based and one conventional course) to examine the effect of using concept maps on understanding ER modeling according to cognitive load theory. The experimental results revealed that the concept-map-based teaching method was superior to the conventional teaching method because it improved mental efficiency by reducing extraneous load while increasing germane load. Moreover, concept maps can be used as a medium to facilitate communication regarding ER modeling problems between learners and instructors, thereby improving learning efficiency. The results can help educators and researchers understand the effectiveness of concept maps for ER model learning, motivate them to resolve learning difficulties, and encourage them to develop improved teaching methods by using semantic network theory.
... A key manipulation for visual search is the number of objects present in the search scene (target(s) + distractor(s) = set size) [33]. We chose 7 as our first set size; 7 is known as a magic number in data visualization, as the limit of the span of absolute judgement and immediate memory sits at about 7 points [60]. This implies that moving sufficiently beyond 7 data points will increase cognitive load; thus we chose 14 as our second set size. ...
Preprint
Full-text available
Understanding your audience is foundational to creating high-impact visualization designs. However, individual differences and cognitive abilities also influence interactions with information visualization. Differing user needs and abilities suggest that an individual's background could influence cognitive performance and interactions with visuals in a systematic way. This study builds on current research in domain-specific visualization and cognition to address whether domain and spatial visualization ability combine to affect performance on information visualization tasks. We measure spatial visualization and visual task performance among those with tertiary education and professional profiles in business, law & political science, and math & computer science. We conducted an online study with 90 participants using an established psychometric test to assess spatial visualization ability, and bar chart layouts rotated along Cartesian and polar coordinates to assess performance on spatially rotated data. Accuracy and response times varied with domain across chart types and task difficulty. We found that accuracy and time correlate with spatial visualization level, and that education in math & computer science can indicate higher spatial visualization. Additionally, we found distinct motivations can affect performance, in that higher motivation could contribute to increased levels of accuracy. Our findings indicate that discipline affects not only user needs and interactions with data visualization, but also cognitive traits. Our results can advance inclusive practices in visualization design and add to knowledge in domain-specific visual research that can empower designers across disciplines to create effective visualizations.
... The seminal contributions on information overload came from psychology and cognitive science, namely Miller's famous article "The magical number seven plus or minus two" (Miller, 1956), and two books, respectively by Schroder et al. (1967) and Simon and Newell (1971) (see also Simon, 1979). The developments of the studies on information overload have been summarized by several reviews, most of which have a disciplinary focus, while Bawden et al. (1999), Eppler and Mengis (2004), and Goetzel (2018) encompass several disciplines. ...
Preprint
Full-text available
This paper discusses the relevance of information overload for explaining environmental degradation. Our argument is that information overload and detachment from nature, caused by energy abundance, have made individuals unaware of the unsustainable effects of their choices and lifestyles.
... Short-term memory has a large capacity; however, information in this store is lost quickly and is easily replaced by similar new information (Sperling; Ling and Catling, 2012). Short-term storage has a limited capacity, set by Miller (1956) at seven items, plus or minus two items. Short-term memory is the pathway for information from sensory memory before it is eventually stored for a long time in long-term memory. Information stored in short-term memory is retained only as long as it is still needed. ...
Article
Full-text available
The purpose of this study was to determine the effect of color on short-term memory among student members of the Creative Minority UKM, Malikussaleh University. The hypothesis in this study is that color affects short-term memory: students who are given colored paper with animal names written on it have better short-term memory than those who are not given colored paper. The study was conducted on 30 Creative Minority UKM members aged 18-22 years, divided into two groups, a control group and an experimental group of 15 respondents each, using a randomized two-group, post-test-only experimental design. The independent-sample t-test results indicate that color has an effect on short-term memory in Creative Minority UKM members. The conclusion of this study is that color has an effect on short-term memory. Future researchers are expected to increase the number of respondents, control the situations and conditions during the experiment, and try researching new colors.
... Regarding the behavioral data, the results show that the average STM capacity was in the normative range (Cowan, 2001; Miller, 1956). However, there are a number of differences between the digit span task implemented in the WAIS and the current task, which limited the number of measures that could be derived. ...
... From the perspective of AI, this can be seen as a sophisticated form of multi-objective constraint-satisfaction, where objectives are retrieved/suppressed according to the demands of the situation, to reduce cognitive load and focus on current matters. Considering the limitations on human working memory [32], it may be useful to study how goals are chunked for easy recall while one handles a seemingly unrelated matter. ...
Preprint
Full-text available
Existing frameworks for situated design make it possible to model design activity while considering how agents internally see and understand the external world. Therefore, they are important for developing human-level intelligence in computational design systems. One major aspect in developing situated design agents is that of agent-environment interaction. While the contribution of such interaction to structuring design processes is acknowledged by practitioners and researchers alike, we lack evidence concerning the manners in which it unfolds in practice. Addressing this issue, we gather empirical data regarding agent-environment interaction in design, with emphasis on knowledge transfer (KT), a cognitive process by which an individual applies knowledge from one situation in another. Six participants collaborated and competed in modeling a real-world building using Lego blocks. Examining KT during the activity sheds some light on the role of concrete circumstances in shaping design processes, thus offering insights towards developing situated design agents.
... By excluding the uncultured majority, a substantial portion of the tree of life is relegated to poorly ordered, ambiguous and often synonymous names or alphanumeric codes. Most of these alphanumeric codes are of limited mnemonic value because each letter or number contributes to a limited memory or digit span [6], whereas a taxonomic name can be remembered as a single word, especially if it is meaningful or familiar. ...
Article
Full-text available
Most prokaryotes are not available as pure cultures and therefore ineligible for naming under the rules and recommendations of the International Code of Nomenclature of Prokaryotes (ICNP). Here we summarize the development of the SeqCode, a code of nomenclature under which genome sequences serve as nomenclatural types. This code enables valid publication of names of prokaryotes based upon isolate genome, metagenome-assembled genome or single-amplified genome sequences. Otherwise, it is similar to the ICNP with regard to the formation of names and rules of priority. It operates through the SeqCode Registry (https://seqco.de/), a registration portal through which names and nomenclatural types are registered, validated and linked to metadata. We describe the two paths currently available within SeqCode to register and validate names, including Candidatus names, and provide examples for both. Recommendations on minimal standards for DNA sequences are provided. Thus, the SeqCode provides a reproducible and objective framework for the nomenclature of all prokaryotes regardless of cultivability and facilitates communication across microbiological disciplines. SeqCode is the result of a community effort to unify nomenclature for uncultured and cultured prokaryotes using genome sequences.
... where α_m = 1 (but can be set to 0 if working memory is not desired), W denotes the matrix of predictive/generative synapses, M is a matrix containing (conditional) memory synapses, and Q is a random synaptic projection matrix (with each element initialized from a centered Gaussian with standard deviation σ_q). m_t is a working memory vector containing a representation of a recent history of observations; i.e., in this work, m_t ∈ R^((IH)×1) is the concatenation of a small history of randomly projected observations (this paper sets H = 7, inspired by classical work in human working memory [19]). We found that introducing and generalizing an NGC circuit to operate with a small working memory improved predictive performance and learning stability for inherently time-varying problems such as those encountered in robotics (and in our simulations). ...
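To make the working-memory construction in this excerpt concrete, here is a minimal NumPy sketch assuming observations of dimension I and a history length H = 7; the names (I, H, sigma_q, make_memory) and the zero-padding choice are our own illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

# Sketch of m_t: the concatenation of the H most recent randomly projected
# observations, with Q a fixed random projection (centered Gaussian entries).
I, H, sigma_q = 16, 7, 0.025
rng = np.random.default_rng(0)
Q = rng.normal(0.0, sigma_q, size=(I, I))  # random synaptic projection matrix

def make_memory(history):
    """Build m_t from the H most recent observations, oldest first."""
    projected = [Q @ o for o in history[-H:]]
    while len(projected) < H:          # pad with zeros early in an episode
        projected.insert(0, np.zeros(I))
    return np.concatenate(projected)   # shape: (I*H,)

observations = [rng.normal(size=I) for _ in range(10)]
m_t = make_memory(observations)
print(m_t.shape)  # (112,), i.e., (I*H,)
```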
Preprint
Full-text available
In this article, we propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC), designing an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards, embodying the principles of planning-as-inference. Concretely, we craft an adaptive agent system, which we call active predictive coding (ActPC), that balances an internally-generated epistemic signal (meant to encourage intelligent exploration) with an internally-generated instrumental signal (meant to encourage goal-seeking behavior) to ultimately learn how to control various simulated robotic systems as well as a complex robotic arm, using a realistic robotics simulator, i.e., the Surreal Robotics Suite, for the block lifting task and pick-and-place problems. Notably, our experimental results demonstrate that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
... Note that with k = 2 and s = 5, the model is holding 9 items in its memory and manipulating 7 items at a time. Miller (1956) gives 7 ± 2 as the maximum size of human short-term memory, and here we observe empirically that humans are doing no better than our model does with a 9-item memory. Additionally, the cluster size k = 2 matches the reduction ratio of the image pyramid described by Graham et al. (2000) and Pizlo et al. (2006). ...
Article
Full-text available
The objective of the traveling salesperson problem (TSP) is to find a shortest tour through all nodes in a graph. Euclidean TSPs are traditionally represented with “cities” placed on a 2D plane. When straight line obstacles are added to the plane, a tour has to visit all cities while going around obstacles. The resulting problem with obstacles remains metric, but is not Euclidean because the shortest paths are no longer straight lines. We first revise a previous version of a multiresolution graph pyramid by modifying the hierarchical clustering stage. Next, we describe two new experiments with human subjects. In the first experiment, the effect of the length of obstacles on the quality of tours produced by subjects was tested with three problem sizes. Long obstacles affect the tours to a greater degree than short obstacles, but long obstacles create obvious clusters and limit the ways in which the tours can be produced. In the second experiment we evaluated the degree to which Multidimensional Scaling (MDS) can compensate for the presence of obstacles. The results show that although MDS approximation can compensate to a large degree for the presence of obstacles, it cannot fully account for human performance. This fact suggests that mental representation of a TSP with obstacles is not Euclidean. Instead, it is likely to be based on hierarchical clustering in which pairwise distances represent the shortest paths around obstacles.
... While short-term memory lasts longer than sensory memory, it is still initially very limited, engaging approximately 5 to 9 bits of information for approximately 30 seconds or so. However, a second phase of short-term memory is working memory, which occurs when material is kept in conscious focus for a longer period of time, and can happen when we are studying, repeating or rehearsing, focusing (for a period of time) on a core issue, etc. Because short-term memory can only handle 5 to 9 bits of information, displacement occurs when it is full and a new bit of information enters. ...
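The displacement mechanism described in this excerpt behaves like a bounded FIFO buffer; the following toy sketch (ours, not the book's) shows the oldest item being silently dropped when an eighth item enters a seven-item store:

```python
from collections import deque

# A short-term store bounded at 7 items: at full capacity, appending a new
# item displaces the oldest one. Illustrative example items only.
short_term_memory = deque(maxlen=7)
for item in ["phone#", "name", "date", "price", "color", "street", "city", "zip"]:
    short_term_memory.append(item)

print(list(short_term_memory))
# ['name', 'date', 'price', 'color', 'street', 'city', 'zip'] -- 'phone#' displaced
```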
Book
Full-text available
What does it mean to be human? Increasingly, we recognize that we are infinitely complex beings with immense emotional and spiritual, physical and mental capacities. Presiding over these human systems, our brain is a fully integrated, biological, and extraordinary organ that is preeminent in the known Universe. Its time has come. This book is grounded in the Intelligent Complex Adaptive Learning System (ICALS) theory based on over a decade of researching experiential learning through the expanding lens of neuroscience. REVIEWS "Unleashing the Human Mind is the most comprehensive and enlightening book written on learning for the future. This book will expand your mind and motivate you to learn in ways you never realized are possible." -Dr. Arthur Shelley, Founder, Intelligent Answers; Author of Becoming Adaptable, KNOWledge SUCCESSion and The Organizational Zoo, Australia "With the backdrop of current global societal turmoil, the authors capture the urgency for changing course, and the research that provides a new path forward, which is found in the love of learning." -John Lewis, Ed.D., CKO, Explanation Age LLC; Author of Story Thinking, USA "Every now and then a book comes along that compels your spirit. In these times of uncertainty and even great danger for humanity, this book reminds us of what it means to be human, our infinite potential and innate ability to learn and to love." -Milton deSousa, Associate Professor, Nova School of Business and Economics, Portugal "This publication is the brilliant sublimination of a life-long accumulation of knowledge about the potential of our brain to learn, adapt and evolve to the best version of ourselves." -Johan Cools, Higher Architecture Institute of Saint-Lucas Ghent, Belgium "In our time of fast societal, technological and environmental changes and disruption, experiential learning becomes one of the few solutions to quickly adapt and survive … Such a rich and insightful book that will make you discover individual learning in a completely new way." -Dr. Vincent Ribière, Managing Director of the Institute for Knowledge and Innovation Southeast Asia (IKI-SEA), Bangkok University, Thailand "Once in a while, I am exposed to a work so profound that it literally causes a massive shift in my own thinking and beliefs … I found myself riveted to each paragraph as I embarked on a journey that vastly deepened my understanding of the learning process." -Duane Nickull, Author, Technologist, and Seeker of Higher Truth, Canada "In Unleashing, David Bennet, Alex Bennet and Robert Turner expand our collective understanding of the Human Mind by embedding timeless wisdoms, transliterations, and their collective expansive knowledge and experiences that enrich our lives from cover to cover." -Bob Beringer, CEO of EOR, Intelligence Professional, Author of Linux Clustering with CSM and GPFS, USA "Very few people have the gift to integrate such complex ideas, especially those about learning … this work can be likened to the Webb Telescope, which gives us more clarity into our mysteries. Well worth the viewing!" 
-Michael Stankosky, DSc, Author, Philosopher, Professor, Editor-Emeritus, Member of the Academy of Scholars, USA "It is the mastery of the authors of this book to open the reader’s mind and soul, thus offering the opportunity for the content of this live transmission to be discovered and interpreted in the most appropriate way by each reader … so that only the reader’s desire is needed to let him/her Self be seduced by this wealth of wisdom, generously placed at the reader’s disposal." -Dr. Florin Gaiseanu, Research Professor, Science and Technology of Information Bucharest (Romania) and Barcelona (Spain), Honor Member of NeuroQuantology (Europe) and International Journal of Neuropsychology and Behavioral Sciences (USA)
... ReLU-activated neural networks (see below), matrix decomposition methods like principal components analysis (PCA), and large multiple regression models are all computationally interpretable, whereas Deep Learning (DL) models more generally and ensemble tree methods like XGBoost are not. However, knowing how a prediction is computed from individual features does not automatically make the prediction comprehensible: it is generally still difficult to understand how a model behaves, as there is a limit to the capacity of information that humans can process simultaneously (Miller 1956). ...
Preprint
Full-text available
This manuscript addresses the simultaneous problems of predicting all-cause inpatient readmission or death after discharge, and quantifying the impact of discharge placement in preventing these adverse events. To this end, we developed an inherently interpretable multilevel Bayesian modeling framework inspired by the piecewise linearity of ReLU-activated deep neural networks. In a survival model, we explicitly adjust for confounding in quantifying local average treatment effects for discharge placement interventions. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, and then tested the model on 2012 claims. Evaluated on classification accuracy for 30-day all-cause unplanned readmissions (defined using official CMS methodology) or death, the model performed similarly to XGBoost, logistic regression (after feature engineering), and a Bayesian deep neural network trained on the same data. Tested on the 30-day classification task of predicting readmissions or death using left-out future data, the model achieved an AUROC of approximately 0.76 and an AUPRC of approximately 0.50 (relative to an overall positivity rate in the testing data of 18%), demonstrating that one need not sacrifice interpretability for accuracy. Additionally, the model had a testing AUROC of 0.78 on the classification of 90-day all-cause unplanned readmission or death. We easily peer into our inherently interpretable model, summarizing its main findings. Additionally, we demonstrate how the black-box post-hoc explainer tool SHAP generates explanations that are not supported by the fitted model, and if taken at face value do not offer enough context to make a model actionable.
... In this context, this article aims to present a new prioritization methodology for multicriteria decision-making, called the Balanced Tree Methodology (MAB), which combines Miller's Law [19] with the Pareto Principle [23]. Among the intended contributions of this work are carrying out multicriteria prioritizations with less effort, particularly for large numbers of prioritizable elements, and presenting reliable and consistent results for decision-making. ...
Article
Full-text available
Choosing an appropriate methodology is essential to generate reliable prioritizations in support of a decision-making process. The prioritization of projects, for example, is one of the main activities supporting managers' decision-making, helping to justify the selection and cutting of projects and the allocation of resources among the projects in a portfolio. This article presents a new prioritization methodology for multicriteria decisions based on the combination of Miller's Law and the Pareto Principle. The main intended contribution is the ability to carry out multicriteria prioritizations with less effort, particularly when many elements are involved, while achieving adequately consistent and reliable results.
... Seven plus or minus two law. It is known that when we classify objects, we divide them into five to nine categories; see, e.g., [9,10]. In particular, when we divide a process into stages, we divide it into five to nine stages: ...
Technical Report
Full-text available
A recent paper showed that the solar activity cycle has five clear stages, and that taking these stages into account helps to make accurate predictions of future solar activity. Similar 5-stage models have been effective in many other application areas, e.g., in psychology, where a 5-stage model provides an effective description of grief. In this paper, we provide a general geometric explanation of why 5-stage models are often effective. This result also explains other empirical facts, e.g., the seven plus or minus two law in psychology and the fact that only five space-time dimensions have found direct physical meaning.
... Beyond investigations into basic sensory processing, the frequency of brain oscillations has also been modulated in the context of working memory performance. It is well known that humans can hold 7 ± 2 items in their working/short-term memory (Miller, 1956). Later, this capacity was associated with individual gamma and theta frequencies. ...
Chapter
Current approaches to record and analyze oscillatory brain activity provide validated measures for assessing the activity’s associations with cognitive functioning. However, the vast majority of these approaches are limited to correlational evidence about such relationships. An important open question is whether brain oscillations play a direct, causal role in cognition. In recent years, transcranial brain stimulation approaches, such as repetitive transcranial magnetic stimulation (rTMS) and transcranial alternating current stimulation (tACS), have been used to modulate brain oscillations. These methods provide new pathways to assess causal relationships between brain oscillations and cognition. This chapter discusses conceptual and practical aspects of these stimulation approaches and covers the current state of research using stimulation and future directions needed to advance work in this area.
... Overall, nonverbal sound emerges as a promising training tool, suited for contexts such as sports [3], [11]. Separate sounds, being part of a single sound file or auditory model, may be easily chunked and remembered as meaningful units of information [12]. Since many key actions in sports involve a series of complex movements performed in sequence, sound files containing "instruction chunks" for various parts of these movement sequences may provide an effective tool for learning and training. ...
Conference Paper
We report on an experiment in which nine Norwegian national team rowers (one female) were tested on a rowing ergometer in a motion capture lab. After the warm-up, all participants rowed in a neutral condition for three minutes, without any instructions. Then they rowed in two conditions (three minutes each), with a counterbalanced order: (1) a coaching condition, during which they received oral instructions from a national team coach, and (2) a sound condition, during which they listened to a pre-recorded sound file that was produced to promote good rowing technique. Performance was measured in terms of distance traveled, and subjective responses were measured via a questionnaire asking participants how useful the two interventions were for rowing efficiency. The results showed no significant difference between the two conditions of main interest (the pre-recorded sound file and traditional coaching) on any measure. Our study indicates that auditory guidance can be a cost-efficient supplement to athletes’ training, even at higher levels.
... The workers classify each score into one of five bins with respect to the population: 0-20%, 20-40%, 40-60%, 60-80%, and 80-100%. We ask the workers to report in 5 quantized bins instead of directly reporting a percentile number because prior studies have shown that workers are not able to perceive fine-grained numbers accurately due to limited processing abilities (Miller 1956; Shah et al. 2016) and therefore have higher accuracy when a small number of quantized choices is given (e.g., Lietz 2010). We have confirmed this trend in a preliminary study comparing the use of 5 bins versus 10 bins. ...
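The 5-bin quantization described in this excerpt amounts to a simple mapping from percentiles to 20%-wide bins; a minimal sketch follows, where the function name to_bin is hypothetical:

```python
# Map a percentile in [0, 100] to one of five 20%-wide bins, as in the
# excerpt's reporting scheme. Toy illustration, not the authors' code.
def to_bin(percentile: float) -> str:
    bins = ["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"]
    index = min(int(percentile // 20), 4)  # 100 falls into the top bin
    return bins[index]

assert to_bin(37.5) == "20-40%"
assert to_bin(100) == "80-100%"
```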
Preprint
Many applications such as hiring and university admissions involve evaluation and selection of applicants. These tasks are fundamentally difficult, and require combining evidence from multiple different aspects (what we term "attributes"). In these applications, the number of applicants is often large, and a common practice is to assign the task to multiple evaluators in a distributed fashion. Specifically, in the often-used holistic allocation, each evaluator is assigned a subset of the applicants, and is asked to assess all relevant information for their assigned applicants. However, such an evaluation process is subject to issues such as miscalibration (evaluators see only a small fraction of the applicants and may not get a good sense of relative quality), and discrimination (evaluators are influenced by irrelevant information about the applicants). We identify that such attribute-based evaluation allows alternative allocation schemes. Specifically, we consider assigning each evaluator more applicants but fewer attributes per applicant, termed segmented allocation. We compare segmented allocation to holistic allocation on several dimensions via theoretical and experimental methods. We establish various tradeoffs between these two approaches, and identify conditions under which one approach results in more accurate evaluation than the other.
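To make the contrast between the two schemes concrete, below is a minimal sketch over an applicants-by-attributes grid. The names, sizes, and round-robin split are illustrative assumptions, not the paper's exact procedure:

```python
# Contrast holistic vs. segmented allocation over an applicants x attributes grid.
# Applicant/attribute/evaluator names and the splitting rule are illustrative.

applicants = ["A", "B", "C", "D", "E", "F"]
attributes = ["essay", "grades", "interview"]
evaluators = ["ev1", "ev2", "ev3"]

# Holistic: each evaluator assesses ALL attributes for a SUBSET of applicants.
holistic = {
    ev: [(app, attr) for app in applicants[i::len(evaluators)] for attr in attributes]
    for i, ev in enumerate(evaluators)
}

# Segmented: each evaluator assesses ONE attribute for ALL applicants.
segmented = {
    ev: [(app, attributes[i]) for app in applicants]
    for i, ev in enumerate(evaluators)
}

# Both schemes cover the same 18 (applicant, attribute) cells;
# they differ only in how the cells are partitioned across evaluators.
assert sorted(sum(holistic.values(), [])) == sorted(sum(segmented.values(), []))
```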
Chapter
Full-text available
The problem statement, the shortcoming in school practice, and thus the goals of the present research project have already been formulated. In addition, before a new system for representing quantities is developed, criteria should be formulated that such a system must fulfill. Once the criteria have been established, the following chapter engages with existing learning materials that address a similar problem. Finally, the research question of the present work is formulated.
Conference Paper
Data-driven persona generation can benefit from stakeholder inputs while offloading the complexities of high-dimensional datasets. To this end, we present Survey2Persona (S2P), an interactive web interface for real-time persona generation from survey data. The users of the web interface (the designers) can upload survey data and have the interface automatically generate personas. Researchers and practitioners can use S2P to explore different respondent types in their survey datasets in a privacy-preserving manner, making the analytical journey more productive, enjoyable, and human-centered. We make the system publicly available and argue for its novelty and value to the user modeling and human-computer interaction communities.
Thesis
Cognitive impairment is highly prevalent in patients with chronic heart failure, but little is known about the health-related quality of life (HRQL) of this special patient group. We aimed to examine whether cognitive impairment is associated with HRQL in heart failure patients and hypothesized that cognitive impairment would negatively impact HRQL. We examined the HRQL of 148 patients from the Cognition.Matters-HF study with chronic heart failure and objectively assessed cognitive impairment ranging from no deficits to severe deficits. With the exception of the self-efficacy scale of the KCCQ, cognitive impairment was not associated with lower health-related quality of life in heart failure patients. The association of self-efficacy with severity of cognitive impairment remained significant after adjustment for duration and severity of heart failure, age, and sex (p<0.001). The self-efficacy scale provides information about patients’ ability to prevent acute heart failure decompensations and could become a promising tool for detecting individuals who are unable to adhere to a proper heart failure treatment regimen and manage arising complications. These patients may benefit from enhanced care, e.g., within a nurse-led heart failure disease-management program.
Conference Paper
Full-text available
The future can be punctuated by various forms of uncertainty. Some recent studies have conceptualised this as radical uncertainty, characterised by events that cannot be assigned meaningful probabilities. Despite the perennial need to fathom and manage uncertainty, the literature lacks a comprehensive framework illustrating how people make sense of radical uncertainty, especially when rationality-based probability models can offer only a very limited outlook on the future. Addressing these pressing concerns in the extant literature, this conceptual paper depicts the process of sensemaking under uncertainty. Because prospective sensemaking remains underresearched in the sensemaking literature, the paper focuses on the prospective sensemaking of uncertainty by showing its linkages with the underexplored dimension of temporality (through the novel concept of collective mental time travel) and with narratives. The paper proposes a new comprehensive framework showing that people make sense of a radically uncertain future through narratives, and that collective mental time travel is used to construct those narratives. Keywords: Radical Uncertainty, Sensemaking, Prospective sensemaking, Collective mental time travel (MTT), Narratives
Article
Full-text available
Social media such as microblogs have become an important source of information. However, we know little about how this influences our comprehension of online information. Based on cognitive load theory, this research explores whether and how two important features of Weibo, the feedback function and information fragmentation, increase cognitive load and in turn hinder users’ information comprehension on Weibo. A 2 (feedback or no feedback) × 2 (strong-interference or weak-interference information) between-participants experiment was conducted. The results revealed that the Weibo feedback function and interference information negatively affected information comprehension by inducing increased cognitive load. These results deepen our understanding of the impact of Weibo features on online information comprehension and suggest the mechanism by which this impact occurs. The findings have implications for minimizing the potential costs of using Weibo and maximizing the adaptive development of social media.
Technical Report
Full-text available
Visual and Linguistic Factors in Literacy Acquisition: Instructional Implications for Beginning Readers in Low-Income Countries. A literature review prepared for the Global Partnership for Education, c/o World Bank.
Chapter
Full-text available
Robert Petkewitz is an editor of the magazine Ohrenkuss and uses these words to address the pathologization of Down syndrome in society. Trisomy 21 is frequently misinterpreted as an illness, and those affected are repeatedly described as persons who "suffer from Down syndrome". Linguistically, they are thereby contrasted with supposedly "healthy" people.
Chapter
Full-text available
The development of the learning material intended to close the identified gap in the range of available materials produced four iterations. In parallel with the development of the material, three helpful materializations were created. The development of the new learning aids was based on the theoretical assumptions worked out in the preceding chapters.
Article
Full-text available
Chunks are multi-word sequences that constitute an important component of the mental lexicon. In second language (L2) acquisition, chunking is essential for attaining fluency and idiomaticity. In the present study, in order to examine whether chunks provide a processing advantage over non-chunks for L2 learners at different levels of proficiency, three groups of English-speaking learners of Chinese (beginner, intermediate, and advanced) participated in an online acceptability judgment task and a familiarity rating task. Our results revealed that the participants in all three groups processed chunks faster and with fewer errors than they did non-chunks. It was also found that the observed processing advantage of chunks could not be explained by a familiarity effect alone, thus suggesting that L2 learners across the board store chunks as holistic units. The implications of chunk instruction in relation to input frequency and variability in L2 settings are also discussed.
Chapter
Full-text available
The "power of five" (Kraft der Fünf) refers to a widespread form of quantity representation that is meant to generate mental images of quantities and to contribute to the ability to operate on these images mentally. In teaching learners with simultandysgnosia, this form of representation does not unfold its full potential. It is therefore worthwhile to develop an alternative, lower-barrier variant of quantity representation and to use it in the classroom.
Chapter
Full-text available
The present research project shows that it is possible to develop teaching material that fulfills specific criteria for representing quantities and operations in a barrier-free way for people with simultandysgnosia. Formative evaluation demonstrated that students with trisomy 21 further developed their mathematical abilities while using the material. In addition, a quasi-experiment using eye tracking showed that some of these students developed mental images of the material's quantity representations.
Chapter
Full-text available
The following takes up thematic threads that emerged over the course of the research project. The methods and theories applied, the form of the teaching material, and the reception of mathildr are reflected upon.
Chapter
Full-text available
The body of research and the studies on number concept development show that people with trisomy 21 frequently exhibit mathematical learning difficulties. The question of what causes these learning difficulties remains unanswered. Given the current understanding of neurodiversity, the explanation that trisomy 21 regularly entails an impairment of all cognitive processes would fall short.
Chapter
Full-text available
Addressing the problem outlined above requires choosing a suitable research design that does justice to the project's practical orientation. The following formulates the requirements such a research design should fulfill. This is followed by a presentation of the research method Educational Design Research and a reasoned selection of the research design.