E-iED 2014 ImmersiveEducation.org
PROCEEDINGS 144
Title: Determining the Causing Factors of Errors for Multiplication Problems
Authors: B. Taraghi¹*, M. Frey¹, A. Saranti¹, M. Ebner¹, V. Müller², A. Großmann²
Affiliations:
¹ Graz University of Technology, Münzgrabenstrasse 35/I, 8010 Graz, Austria.
² UnlockYourBrain GmbH, Französische Strasse 24, 10117 Berlin, Germany.
*Correspondence to: b.taraghi@tugraz.at
Abstract: Literature in the area of psychology and education provides domain knowledge to
learning applications. This work detects the difficulty levels within a set of multiplication
problems and analyses the dataset with respect to different error types as described and
determined in several pedagogical surveys and investigations. Our research sheds light on the
impact of each error type in simple multiplication problems and on the evolution of error rates
for the different error types as the problem size increases.
One Sentence Summary: This work investigates the various error types in multiplication
problems, as well as the problem-size effect.
Main Text:
1 Introduction: Learning simple multiplications is one of the major goals of the first years of
primary school education. Math teachers find it pedagogically relevant to know which exercises
improve mathematical abilities, which errors occur repeatedly, and at which steps pupils may
require a teacher's intervention.
Applying math training applications can support the teachers in this regard and enhance basic
math education at primary schools [1]. For example, the 1x1 trainer application [2], first
developed at Graz University of Technology, assists the training process of pupils and enhances
the pedagogical intervention of teachers for learning one-digit multiplication problems at
schools. The application was used in several primary schools for training purposes. In our
previous work [3, 4] we analysed the gathered data (about 500,000 calculations) to gain insight
into the learners' answering behaviour within this application and identified difficulty levels
within the set of one-digit multiplication problems. In this work we continue our research on
another dataset, generated by the Android application UnlockYourBrain, which poses different
basic mathematical questions to learners. The focus lies first on multiplication problems. We
perform the same analysis steps as in our previous work to identify the difficulty levels. We
primarily want to shed light on the reasons for incorrect answers. Therefore, based on the error
rates derived from the first part of the analysis, we detect for each multiplication problem the
different error types known from the literature. We present the probabilities of occurrence of the
various error types in detail and explain them individually for each specific multiplication
problem.
Draft - extended version originally published in: Taraghi, B., Frey, M., Saranti, A., Ebner, M., Müller, V.
Großmann, A. (2015) Determining the Causing Factors of Errors for Multiplication Problems. In:
Immersive Education. Ebner, M., Erenli, K., Malaka, R., Pirker, J., Walsh, A. (Eds.). Communications in
Computer and Information Science 486. Springer. pp. 27-38
Section 2 describes the dataset that is used for analysis purposes. Section 3 covers the difficulty
levels of the multiplication problems, the findings and interpretations based on the difficulty
probabilities. Section 4 describes the detected error types and proceeds to the analysis of error
types in section 5.
1.1 Related Work: There are two major arithmetic models of fact retrieval that deal with errors
in simple multiplications: the modified network interference theory by Campbell [5] and the
interacting neighbours model by Verguts and Fias [6]. Both models introduce some common
error types and their causes in simple multiplication problems.
One of the most frequent error types in simple multiplication problems is the operand error. It
occurs whenever the incorrect result is the product involving a neighbouring operand instead of
the given one; e.g. 48 = 6 * 8 for the given problem 7 * 8. The survey by Campbell [5] shows
that the majority of errors fall into this category. Operand error rates differ for each
multiplication problem and are not uniformly distributed [7].
An operand intrusion error happens when at least one of the two operands matches one of the
digits of the result; e.g. answering 74 to the posed question 7 * 8. Campbell argued that reading
the operands as if they were a two-digit number causes this error. This argument is supported by
the fact that the first operand is observed in the decade digit's place and/or the second operand
appears as the unit digit of the result [7, 8].
One of the earliest findings on solving arithmetic problems is the so-called problem-size effect.
The problem size is defined as the sum of the operands [9]. Error rates increase as the problems
get larger, and response times grow correspondingly. The only exceptions are five problems
(problems involving 5 as an operand, e.g. 5 * 7) and tie problems (problems with repeated
operands, e.g. 4 * 4), which do not exhibit this effect to a large extent. These problems can be
answered faster in comparison to other problems of the same category [10].
The interacting neighbours model of Verguts and Fias [11] introduces the concept of consistency
of multiplication problems. The concept of consistency was formerly known from the language
literature [12], where it was proposed that the reaction time to pronounce a given word depends
on the consistency of the word with its neighbours with respect to pronunciation. In the context
of simple multiplications, each problem has a set of neighbouring problems: the operands used
in those problems are the neighbours of the operands (in the multiplication table) of the original
problem. Two arbitrary problems are consistent if their solutions have the same decade or unit
digit; e.g. 56 = 7 * 8 and 36 = 4 * 9 are consistent with respect to their unit digit. The authors
argue that the consistency measure explains the problem-size effect as well as the tie effect. Tie
problems have fewer neighbours and tend to be inconsistent rather than consistent; hence there
is less competition for tie problems. All five problems have consistent neighbours at distance 2
(they share 5 as unit digit). Although the neighbour distance is large, it is assumed to be the
reason for their smaller error rates. Altogether, multiplication problems that are more consistent
with their neighbours can be answered faster and with higher accuracy [13].
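The consistency relation described above can be made concrete with a small sketch. The function names and structure below are illustrative, not taken from the cited models:

```python
# Hedged sketch: two multiplication problems are "consistent" (in the sense
# of Verguts and Fias) if their products share the same decade digit or the
# same unit digit.

def digits(n):
    """Return the (decade, unit) digits of a product."""
    return n // 10, n % 10

def consistent(p1, p2):
    """p1, p2 are (a, b) operand pairs; compare the digits of their products."""
    d1, u1 = digits(p1[0] * p1[1])
    d2, u2 = digits(p2[0] * p2[1])
    return d1 == d2 or u1 == u2

# 56 = 7 * 8 and 36 = 4 * 9 share the unit digit 6:
print(consistent((7, 8), (4, 9)))  # True
```

A problem's consistency score could then be computed by counting how many of its table neighbours satisfy this relation.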
2 Dataset Description: The learning application used to provide insight for the characterisation
of learning difficulties is UnlockYourBrain. Android users are confronted with basic
mathematical questions each time they attempt to unlock their screen. For each posed question
the application provides a list of possible answers, only one of which is correct. The list has
variable length: it can vary from trial to trial between two and five possible answers, even when
the posed question is the same. The answering process evolves as follows: the learner either
attempts to answer or chooses to skip and continue unlocking the screen. In the case of an
answering attempt, either the correct answer is chosen and the application finishes, or a wrong
answer is selected. In the latter case the application indicates the mistake and repeats the
question with the remaining possible answers. The user then reattempts the question with fewer
possible answers, or chooses to skip.
The dataset was cleaned to remove noise and reduced to entities with a minimum number of
occurrences in order to ensure a high degree of confidence in the statistical results. The methods
used are described in [14]. The final dataset contained 268 questions that were posed a total of
1,191,450 times to 46,357 users.
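The minimum-occurrence reduction can be sketched as follows. The record format and the threshold are illustrative assumptions; the actual cleaning procedure is described in [14]:

```python
from collections import Counter

# Hedged sketch: keep only questions that were posed at least `min_count`
# times. MIN_COUNT is an illustrative value, not the threshold used in [14].
MIN_COUNT = 3

def filter_rare(records, min_count=MIN_COUNT):
    """records: list of (question, user, answer_type) tuples."""
    counts = Counter(q for q, _, _ in records)  # occurrences per question
    return [r for r in records if counts[r[0]] >= min_count]
```

The same idea applies to any entity (question, user) that must occur often enough to yield reliable probability estimates.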
3 Answer Types and Difficulty Levels of Multiplication Problems: A measure of the
difficulty of a question is the manner in which learners answer it. The possible answer types
form the set {R, WR, W, WWR, WW, WWWR, WWW, WWWW}, where W means "wrong" and R
"right". A question that was posed with three answering options (see [14]) can have three
answer types: R denotes that the user found the correct answer on the first attempt, WR that the
first attempt was wrong but the second right, and WW that both attempts failed. The set of
answer types forms the dimensions of the classification. Every multiplication problem lies in an
eight-dimensional feature space where the value in each dimension is the probability that the
question was answered with the corresponding answer type. By applying the K-Means
algorithm [15] in this space we classified the problems into 11 clusters; each contains problems
that were answered in a similar manner by the learners.
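The clustering step can be sketched as follows. A minimal K-Means (Lloyd's algorithm) stands in for the library implementation, and the feature values below are synthetic, purely for illustration:

```python
import numpy as np

# Hedged sketch: every multiplication problem is a point in the
# eight-dimensional answer-type space {R, WR, W, WWR, WW, WWWR, WWW, WWWW},
# each coordinate holding the probability of that answer type.

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each problem to its nearest cluster centre
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each centre to the mean of its assigned problems
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
X = rng.random((268, 8))
X /= X.sum(axis=1, keepdims=True)   # each row is a probability vector
labels = kmeans(X, k=11)            # one cluster label per problem
```

In practice a library implementation with multiple restarts (e.g. scikit-learn's KMeans) would be preferable; the sketch only illustrates the feature space and the clustering idea.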
Figure 1 depicts the computed difficulty probabilities (error rates) of all provided multiplication
problems within the dataset. A low probability indicates a rather easy problem whereas a high
probability implies a relatively difficult one.
Figure 1. Difficulty map of multiplication problems. Axes stand for the two operands A and B of
a multiplication problem A * B. Low probabilities imply lower error rates, hence rather easy
problems. High probabilities indicate relatively difficult problems.
It can be observed that the difficulty values are largely symmetric. The error rates of problems
A * B and B * A seem strongly correlated; the order of operands therefore probably does not
have a decisive influence on the error rate. One-digit multiplication problems are easier than
two-digit multiplications. Looking further into the set of one-digit multiplications (the top-left
quadratic area in figure 1, where both operands are less than ten), we obtain the same results as
in our previous research [3]: 5 and 10 problems are relatively easy to solve, whereas problems
involving the operands 6, 7, 8 and 9 are rather difficult.
Looking into two-digit problems, we observe the influence of the operands 5 and 10 on the
simplicity of the questions containing them. As in one-digit problems, the unit digits 1, 2 and 5
show the lowest error rates. The same holds for the difficult operands: especially the unit digits
6, 7 and 8 make two-digit problems extremely difficult relative to other operands. For problems
containing 5 as unit digit, combining it with a difficult operand as decade digit leads to a higher
error rate compared to the other decade operands.
The tie effect is also visible. Problems with repeated operands have lower error rates compared
with neighbouring problems, but the problem size must also be taken into account. While tie
problems are relatively easy in the one-digit range, they become more difficult for two-digit
problems. The dataset in our case contains no tie problems greater than 17 * 17. In figure 1, the
problems 11 * 11 and 12 * 12 seem easy due to their unit digits (the 1 and 2 effect); 15 * 15
shows a relatively lower rate than the other tie problems greater than 12 * 12. It can be argued
that the use of 5 as one of the operands explains this phenomenon.
4 Error Types: The complete list of analysed error types, with short explanations, can be found
in table 1. For the sample multiplication problem 56 = 7 * 8, an example is given to clarify how
to interpret each error type.
Error type            Description                                        e.g. 56 = 7 * 8
Operand errors        A neighbouring operand is taken
  Split 1             The neighbouring distance is 1                     48 = 6 * 8
  Split 2             The neighbouring distance is 2 for one operand,    40 = 5 * 8
                      or 1 for both operands
  Which operand?      Is the smaller or larger operand affected?
                      Ties were ignored.
  Which neighbours?   Are smaller or larger neighbours taken?
Operand intrusions    A digit of the result matches an operand
  First operand       Decade digit matches the first operand             74 for 7 * 8
  Second operand      Unit digit matches the second operand              68 for 7 * 8
Unit consistency      Only the unit digit is correct                     76 instead of 56
Decade consistency    Only the decade digit is correct                   51 instead of 56

Table 1. The analysed error types and their descriptions.
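The taxonomy in table 1 can be expressed as a small classifier sketch. The category names and the fact that one wrong answer may match several types at once are illustrative assumptions; the paper does not specify how overlapping categories are resolved:

```python
def error_types(a, b, answer):
    """Return the table-1 categories matching a wrong answer to a * b."""
    correct = a * b
    types = set()
    # operand error, split 1: one operand replaced by a neighbour at distance 1
    if any(answer == (a + d) * b or answer == a * (b + d) for d in (-1, 1)):
        types.add("operand split 1")
    # operand error, split 2: distance 2 for one operand, or 1 for both
    if (any(answer == (a + d) * b or answer == a * (b + d) for d in (-2, 2))
            or any(answer == (a + da) * (b + db)
                   for da in (-1, 1) for db in (-1, 1))):
        types.add("operand split 2")
    # operand intrusion: a digit of the answer matches an operand
    if answer // 10 == a:
        types.add("first operand intrusion")
    if answer % 10 == b:
        types.add("second operand intrusion")
    # consistency errors: exactly one digit of the answer is correct
    if answer % 10 == correct % 10 and answer // 10 != correct // 10:
        types.add("unit consistency")
    if answer // 10 == correct // 10 and answer % 10 != correct % 10:
        types.add("decade consistency")
    return types
```

For the sample problem 7 * 8, the wrong answer 48 is classified as an operand error with split 1 (48 = 6 * 8), matching the table's example.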
5 Results and Discussion:
5.1 Operand Errors: The majority of errors can be categorized as operand errors, and the
operand error rates differ across multiplication problems (see section 1.1). Figure 2 depicts the
probability of an operand error for each simple multiplication problem, where each square
represents a specific problem. The first operand can be read off the X-axis, the second off the
Y-axis. The color of a square indicates the probability of an operand error for the corresponding
problem: red indicates higher probabilities and blue very low ones. As can be seen, the problems
that are rather difficult (see section 3) are more affected by operand errors than the easy ones.
Figure 2. Probabilities (error rates) of an operand error for each simple multiplication problem.
Figures 2A and 2B compare the error rates of operand errors with a split of 1 and 2,
respectively. Figures 2C and 2D depict errors with decremented and incremented operands,
respectively, restricted to operand errors with a split of 1. Figures 2E and 2F compare errors
caused by the smaller and larger operand, respectively, restricted to operand errors with a split
of 1 and an incremented operand.
Figures 2A and 2B show the probabilities of an operand error with a split of 1 and 2,
respectively. Comparing the two heatmaps, it is visible that the shortest neighbour distance
(split 1) accounts for the most operand errors; e.g. for the problem 7 * 8, errors such as
48 = 6 * 8 are more probable than 40 = 5 * 8. The most difficult problems have the highest
operand error rates, while relatively easy problems involving the operands 2, 5 and 10 show the
lowest. This also holds for operand errors with split 2. The error rates are, again, not uniformly
distributed over the problems. Five problems are less affected by the effects described above,
although we observe a slightly higher error rate for five problems that involve an operand
greater than 5; it can be argued that the difficult operands account for this.
Looking further into operand errors with split 1, which account for the majority of errors, we
observe that the larger operand neighbours cause errors more frequently than the smaller ones.
In other words, learners tend to choose a value greater than the true result rather than a smaller
one; e.g. for the problem 4 * 8, errors such as 36 = 4 * 9 are more probable than 28 = 4 * 7.
Figures 2C and 2D show this finding. We emphasize that this holds across all simple
multiplications; looking through each multiplication individually, we observe some exceptions,
such as 9 * 7 and 7 * 9, where a decremented operand is more likely. Furthermore, the tie
problems seem to follow the same rule, as can be seen in figure 2D.
Considering the operand errors with split 1 and incremented operands, the next step was to
analyse which operand accounts for the error, i.e. whether the larger or the smaller operand is
incremented. Our analysis shows that the mean probabilities for the sets of larger and smaller
operands are extremely close to each other, so we cannot claim that the relative size of the
operands plays an important role. Figures 2E and 2F show this comparison for each
multiplication problem. As an example, 8 * 4 and 4 * 8 show a very high error rate, meaning
that the most probable false answer in this case was 36 = 9 * 4.
5.2 Operand Intrusions and Consistency Errors: Operand intrusion errors occur when an
operand intrudes into the result. Figure 3 depicts the error rates for the first operand A and the
second operand B, respectively. In general, the probability of an intrusion is higher for the
second operand B than for the first operand A. While no specific pattern can be found within the
set of simple multiplication problems, some operands reveal a relatively higher probability than
others. For instance, in the case of first operand intrusion, especially the operand A = 4 shows a
probability over 10% when multiplied by the difficult operands B ∈ {7, 8, 9}. Interestingly,
some operands are more often intruded into the results when multiplied by B = 9. In the case of
second operand intrusion, B = 6 reveals a probability of 12% when multiplied by the difficult
operands A ∈ {7, 8, 9}, followed by A ∈ {3, 4} multiplied by B = 8. In both cases, certain first
operands play a stronger role in operand intrusion compared with other operands.
Figure 3. Probabilities (error rates) of an operand intrusion error for each simple multiplication
problem. Figures 3A and 3B show the error rates for the first operand A and the second operand
B respectively.
Considering the decade and unit consistency errors, we could find no clear pattern in the
multiplication table. The probability of a decade consistency error is relatively higher than that
of a unit consistency error. Decade consistency errors are especially probable if both operands
are greater than 5 and unequal. This could be explained by the problem-size effect.
5.3 Problem-Size Effect: The problem size is the sum of the operands and expresses how large
a problem is. Figure 4A shows the error rate (of any type) against increasing problem size. As
the problem size increases, the error rate also tends to increase, although not monotonically. As
predicted in [5, 7, 10], tie problems can be answered faster and more accurately than other
problems, even as the problem size increases. We see here that the tie problems follow a
different course with ascending problem size: while the error rates of all other problems
increase, tie problems show a decrease. This can be claimed only up to problem size 25, since
the provided dataset is restricted. Furthermore, the error rates for tie problems have local
minima at 5 * 5, 10 * 10 and 15 * 15, which can be attributed to the 5 effect and the easy
10 problems.
We analysed each error type described in table 1 against the problem size individually. Decade
and unit consistency errors increase with ascending problem size. Figures 4B and 4C depict the
unit and decade consistency errors against problem size, respectively. All other analysed error
types do not reveal an increasing course and stay within a narrow probability interval. As an
example, the operand error with split 1 is depicted against problem size in figure 4D; its error
rate varies between 5% and 10% and even drops to about 3% at problem size 25. In sum,
considering the set of analysed error types, the problem-size effect can be attributed to the unit
and decade consistency errors.
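The aggregation behind this analysis can be sketched as follows. The per-attempt record format `(a, b, was_error)` is an assumption for illustration, not the actual data schema:

```python
from collections import defaultdict

# Hedged sketch: aggregate error rates by problem size (sum of the operands),
# keeping tie and non-tie problems separate as in figure 4.

def error_rate_by_size(attempts):
    """attempts: iterable of (a, b, was_error) per answering attempt."""
    stats = defaultdict(lambda: [0, 0])   # (size, is_tie) -> [errors, total]
    for a, b, was_error in attempts:
        key = (a + b, a == b)
        stats[key][0] += was_error
        stats[key][1] += 1
    return {k: errs / total for k, (errs, total) in stats.items()}

attempts = [(7, 8, True), (7, 8, False), (4, 4, False), (4, 4, False)]
rates = error_rate_by_size(attempts)
# rates[(15, False)] == 0.5 and rates[(8, True)] == 0.0
```

The same grouping, applied per error type, yields the curves of figures 4B-4D.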
Figure 4. Error probabilities (fraction of errors) against problem size. Problem size is calculated
as the sum of the operands A + B; the largest considered operand is 15. Ties and non-ties are
depicted separately. Figure 4A shows the overall round error probability, i.e. the fraction of
rounds in which at least one error of any type was made, against problem size. Figure 4B
depicts the unit consistency errors and 4C the decade consistency errors against problem size.
References and Notes:
1. M. Ebner, M. Schön, in Why Learning Analytics in Primary Education Matters!, C.
Karagiannidis, S. Graf, Eds. (Bulletin of the Technical Committee on Learning Technology
2013) vol. 15, issue 2, pp. 14-17.
2. M. Schön, M. Ebner, G. Kothmeier, It's Just About Learning the Multiplication Table
(Proceedings of the 2nd International Conference on Learning Analytics and Knowledge
ACM, New York, NY, USA 2012) S. Buckingham Shum, D. Gasevic, R. Ferguson (Eds.) pp.
73-81. DOI=10.1145/2330601.2330624.
3. B. Taraghi, M. Ebner, A. Saranti, M. Schön, On Using Markov Chain to Evidence the
Learning Structures and Difficulty Levels of One Digit Multiplication (proceedings of the 4th
International Conference on Learning Analytics and Knowledge, Indianapolis, USA 2014)
pp. 68-72.
4. B. Taraghi, A. Saranti, M. Ebner, M. Schön, Markov Chain and Classification of Difficulty
Levels Enhances the Learning Path in One Digit Multiplication, P. Zaphiris, A. Ioannou,
Eds. (Springer LNCS 2014), vol. 8523, pp. 322-333.
5. J. I. D. Campbell, Mechanisms of simple addition and multiplication: A modified network-
interference theory and simulation. Mathematical Cognition. 1, 121-165 (1995).
6. T. Verguts, W. Fias, Interacting neighbours: A connectionist model of retrieval in single-digit
multiplication. Memory & Cognition. 33, 1-16 (2005).
7. J. I. D. Campbell, On the relation between skilled performance of simple division and
multiplication. Journal of Experimental Psychology: Learning, Memory, & Cognition. 23,
1140-1159 (1997).
8. J. I. D. Campbell, J. M. Clark, in Cognitive number processing: An encoding-complex
perspective, J. I. D. Campbell, Ed. The Nature and Origins of Mathematical Skills
(Amsterdam: Elsevier Science 1992) pp. 457-491.
9. N. J. Zbrodoff, G. D. Logan, in What everyone finds: The problem size effect, J. I. D.
Campbell, Ed. Handbook of Mathematical Cognition (New York: Psychology Press 2004)
pp. 331-346.
10. J. A. LeFevre, G. S. Sadesky, J. Bisanz, Selection of procedures in mental addition:
Reassessing the problem size effect in adults. Journal of Experimental Psychology:
Learning, Memory & Cognition, 22, 216-230 (1996)
11. T. Verguts, W. Fias, Neighborhood effects in mental arithmetic. Psychology Science, 47,
132–140 (2005)
12. M. S. Seidenberg, J. L. McClelland, A distributed, developmental model of word recognition
and naming. Psychological Review, 96, 523-568 (1989)
13. F. Domahs, M. Delazer, H. Nuerk, What makes multiplication facts difficult: Problem-size
or neighborhood consistency? Experimental Psychology, 53, 275-282. doi:10.1027/1618-
3169.53.4.275 (2006)
14. B. Taraghi, A. Saranti, M. Ebner, V. Müller, A. Großmann, Towards a Learning-Aware
Application Guided by Hierarchical Classification of Learner Profiles. Journal of Universal
Computer Science, Special Issue on Learning Analytics, in print (2015)
15. C. Bishop, Pattern Recognition and Machine Learning (Springer Science and Business
Media, LLC, New York, USA 2006), pp. 424-430
... After analysing the most prevalent error types that are observed in one-digit multiplication, we carried out a detailed analysis [23], [22], [19]. With the use of heat maps and diagrams we depicted those misconceptions that are of higher relevance, because they occur more often, and we provided hints at probable reasons. ...
... We have considered six specific error types of the onedigit multiplication: operand, intrusion, consistency, off-by-±1 and off-by-±2, pattern, as well as confusion with addition, subtraction, and division operation errors, as described in [3] and [19]. The collection of these mistakes consist of our bug library [4] or taxonomy of misconceptions [5] which, at this point, is created by enumeration and is driven by domain knowledge. ...
Conference Paper
One-digit multiplication errors are one of the most extensively analysed mathematical problems. Research work primarily emphasises the use of statistics whereas learning analytics can go one step further and use machine learning techniques to model simple learning misconceptions. Probabilistic programming techniques ease the development of probabilistic graphical models (bayesian networks) and their use for prediction of student behaviour that can ultimately influence learning decision processes.
... Students' errors provide a unique window into the mind, as error responses may reflect the cognitive processes-such as applied strategies-activated during problem solving. This fundamental understanding has spawned decades of research, from classifications of errors (Ben-Zeev, 1998;Straatemeier, 2014), and cognitive models aimed at explaining errors (Braithwaite et al., 2017;Buwalda et al., 2016), to the diagnosis of observed errors (Taraghi et al., 2015;Taraghi et al., 2016). In this contribution to the field of errors in learning, we propose a model for the latter. ...
Article
Full-text available
In learning, errors are ubiquitous and inevitable. As these errors may signal otherwise latent cognitive processes, tutors-and students alike-can greatly benefit from the information they provide. In this paper, we introduce and evaluate the Systematic Error Tracing (SET) model that identifies the possible causes of systematically observed errors in domains where items are susceptible to most or all causes and errors can be explained by multiple causes. We apply the model to single-digit multiplication, a domain that is very suitable for the model, is well-studied, and allows us to analyze over 25,000 error responses from 335 learners. The model, derived from the Ising model popular in physics, makes use of a bigraph that links errors to causes. The error responses were taken from Math Garden, a computerized adaptive practice environment for arithmetic that is widely used in the Netherlands. We discuss and evaluate various model configurations with respect to the ranking of recommendations and calibration of probability estimates. The results show that the SET model outranks a majority vote baseline model when more than a single recommendation is considered. Finally, we contrast the SET model to similar approaches and discuss limitations and implications.
... Any false answer that does not belong to one of those six categories is assigned to the unclassified category. The description of the error types is explained in detail in [48]; a brief description follows here: ...
Chapter
Full-text available
One-digit multiplication problems is one of the major fields in learning mathematics at the level of primary school that has been studied over and over. However, the majority of related work is focusing on descriptive statistics on data from multiple surveys. The goal of our research is to gain insights into multiplication misconceptions by applying machine learning techniques. To reach this goal, we trained a probabilistic graphical model of the students’ misconceptions from data of an application for learning multiplication. The use of this model facilitates the exploration of insights into human learning competence and the personalization of tutoring according to individual learner’s knowledge states. The detection of all relevant causal factors of the erroneous students answers as well as their corresponding relative weight is a valuable insight for teachers. Furthermore, the similarity between different multiplication problems - according to the students behavior - is quantified and used for their grouping into clusters. Overall, the proposed model facilitates real-time learning insights that lead to more informed decisions.
... Since the LTI protocol ensures a connection between any two online learning environments that support the protocol, the bridge can also be used by teachers and researchers that use a different type of learning environment. Those other environments might also benefit from additional pedagogical flexibility and learning measures, or neither provide the tools for teacher-driven experimental comparisons, thus for those teachers and researchers the protocol can also come in helpful (e.g., Henrick, 2012 Ben-Zeev, 1998;Straatemeier, 2014), and cognitive models aimed at explaining errors (e.g., Buwalda, Borst, van der Maas, & Taatgen, 2016), to the identification of misconceptions from observed errors (e.g., Taraghi et al., 2015;Taraghi, Saranti, Legenstein, & Ebner, 2016). In this contribution to the field of errors in learning, we investigate a method for the latter-a new approach to detecting the latent causes of an individual student's manifest errors. ...
Thesis
Full-text available
Picture education as a long chain of interventions in a self-organizing developmental system. On the one extreme, such educational sequences can be identical for each and every student, whereas on the other extreme, each sequence may be perfectly tailored to the individual. The latter is what is meant with idiographic education. All educational programs can be seen to lie somewhere in between those extremes, and in this book, methods are explored that may help increase the tailoring of education. The book covers advances in three fundamental approaches. First, it discusses and illustrates an experimental approach: online randomized experiments, so-called A/B tests, that enable truly double-blind evidence-based educational improvements. Second, it introduces a diagnostic approach: a scalable method that helps identify students’ misconceptions. Third and finally, it introduces a theoretical approach: a formal conceptualization of intelligence that permits a novel educational, developmental, and individual perspective, and that may justify and ultimately guide the tailoring of education.
... As part of the previous frameworks, an adequate visualization has to be applied to present the feedback as simple and informative as possible to the stakeholders [32,33]. Furthermore, analytical approaches to model a learner's profile based on their answering behaviour and the analysis of different error types can lead to findings that help to enhance the whole learning process [34,35]. 3 The platform 3 ...
Article
Full-text available
According to the NMC Horizon Report (Johnson et al. in Horizon Report Europe: 2014 Schools Edition, Publications Office of the European Union, The New Media Consortium, Luxembourg, Austin, 2014 [1]), data-driven learning in combination with emerging academic areas such as learning analytics has the potential to tailor students’ education to their needs (Johnson et al. 2014 [1]). Focusing on this aim, this article presents a web-based (training) platform for German-speaking users aged 8–12.Our objective is to support primary-school pupils—especially those who struggle with the acquisition of the German orthography—with an innovative tool to improve their writing and spelling competencies. On this platform, which is free of charge, they can write and publish texts supported by a special feature, called the intelligent dictionary. It gives automatic feedback for correcting mistakes that occurred in the course of fulfilling a meaningful writing task. Consequently, pupils can focus on writing texts and are able to correct texts on their own before publishing them. Additionally, they gain deeper insights in German orthography. Exercises will be recommended for further training based on the spelling mistakes that occurred. This article covers the background to German orthography and its teaching and learning as well as details concerning the requirements for the platform and the user interface design. Further, combined with learning analytics we expect to gain deeper insight into the process of spelling acquisition which will support optimizing our exercises and providing better materials in the long run.
... The authors [40,41] show that, in addition to the above analysis, it is necessary to take into account how information is presented to the stakeholders in this process, that is, the system feedback. Finally, the study of students' behavior patterns can be very useful in order to adopt strategies to improve the learning process [42,43]. ...
Conference Paper
The emergence of new technologies such as IoT and Big Data, and the change in the behavior of society in general and the younger generation in particular, require higher education institutions to “look” at teaching differently. This statement is complemented by the prediction of the futurist Thomas Frey, who postulates that “in 14 years it will be a big deal when students learn from robot teachers over the internet”. Thus, it is necessary to urgently begin a disruption of current teaching models, in order to include in these processes the new technologies and the daily habits of the new generations. The early usage of mobile devices and the constant connection to the Internet (social networks, among others) mean that the current generation of young people, who are reaching higher education, has the most technological literacy ever. In this new context, this article presents a disruptive conceptual approach to higher education, using information gathered by IoT and based on Big Data & Cloud Computing and Learning Analytics analysis tools. This approach will, for example, allow individualized solutions taking into account the characteristics of the students, to help them customize their curriculum and overcome their limitations and difficulties throughout the learning process.
... The results of such analyses are demonstrated on Learning Analytics Dashboards for better comprehension and further recognition of ongoing activities. An example of in-depth analysis is the work on one-digit multiplication problems [19,20,21] or even beyond [22,23]. Taraghi et al. first analyzed the most prevalent error types and the statistical correlations between them in one-digit multiplication problems [24]. ...
Conference Paper
In this paper, we discuss the design, development, and implementation of a Learning Analytics (LA) dashboard in the area of Higher Education (HE). The dashboard meets the demands of the different stakeholders, maximizes the mainstreaming potential and transferability to other contexts, and is developed as Open Source. The research concentrates on developing an appropriate concept to fulfil its objectives and finding a suitable technology stack. Therefore, we determine the capabilities and functionalities of the dashboard for the different stakeholders. This is of significant importance as it identifies which data can be collected, which feedback can be given, and which functionalities are provided. A key approach in the development of the dashboard is its modularity. This leads us to a design with three modules: the data collection, the search and information processing, and the data presentation. Based on these modules, we present the steps of finding a fitting Open Source technology stack for our concept and discuss pros and cons throughout the process.
... As a part of the previous frameworks, an adequate visualization has to be applied to present the feedback as simply and informatively as possible to the stakeholders [14,15]. Furthermore, analytical approaches to model a learner's profile based on their answering behavior and the analysis of different error types can lead to findings that help to enhance the whole learning process [18,19]. ...
Conference Paper
Data-driven learning in combination with emerging academic areas such as Learning Analytics (LA) has the potential to tailor students’ education to their needs [1]. The aim of this article is to present a web-based training platform for primary school pupils who struggle with the acquisition of German orthography. Our objective is the improvement in their writing and spelling competences. The focus of this article is on the development of the platform and the details concerning the requirements and the design of the User Interface (UI). In combination with Learning Analytics, the platform is expected to provide deeper insight into the process of spelling acquisition. Furthermore, aspects of Learning Analytics will help to develop the platform, to improve the exercises and to provide better materials in the long run.
Article
Many pupils struggle with the acquisition of the German orthography. In order to address this struggle, a web-based platform for German-speaking countries is currently being developed. This platform aims to motivate pupils aged 8 to 12 to improve their writing and spelling competences. On this platform pupils can write texts in the form of blog entries concerning everyday events or special topics. Since the core of this platform consists of an intelligent dictionary focussing on different categories of misspellings, students can improve their own spelling skills by trying to correct their mistakes according to the feedback of the system. Teachers are informed about specific orthographic problems of a particular student by getting a qualitative analysis of the misspellings from this intelligent dictionary. The article focuses on the development of the intelligent dictionary, details concerning the requirements, the categorization and the used wordlist. Further, necessary information on German orthography, spelling competence in general and the platform itself is given. By implementing methods of learning analytics we expect to gain deeper insight into the process of spelling acquisition, which serves as a basis to develop better materials in the long run.
Article
This article presents a computational theory of retrieval of simple addition and multiplication facts (e.g. 9 x 6 = ?; 3 + 4 = ?). According to the network-interference model, a presented problem activates memory representations for a large number of related number facts, with strength of activation of specific facts determined by featural and magnitude similarity to the presented problem. Normative frequencies of confusion errors (e.g. 9 x 6 = 36) were used to quantify similarity factors. Problem nodes continuously receive similarity-based excitatory input during retrieval and compete by way of mutual inhibition until one node reaches a critical activation threshold and triggers a response. The model demonstrates that similarity-based interference, in addition to accounting for many features of errors, also provides accurate prediction of variability in speed of correct retrieval among problems. The model also accounts for subtle features of inter-trial error priming, as well as changes in the rates and types of errors observed as a function of elapsed retrieval time. Most educated adults can quickly and accurately produce the answers to a large number of simple arithmetic problems (e.g. 4 + 2 = ?, 6 x 6 = ?). This knowledge of basic number facts is usually characterised as "simple arithmetic", but experimental analysis has uncovered many subtle phenomena reflecting a complex system of interacting memory representations and retrieval processes. Understanding these memory processes is important both because of the status of simple mental arithmetic as a fundamental intellectual skill and because simple arithmetic provides a unique opportunity to study elementary memory processes in a highly constrained domain. The article presents a revision of the network-interference model of number-fact retrieval described by Campbell and Oliphant (1992). The model constitutes a detailed theory of the memory codes …
Article
Learner profiling is a methodology that draws a parallel from user profiling. Implicit feedback is often used in recommender systems to create and adapt user profiles. In this work the implicit feedback is based on the learner's answering behaviour in the Android application UnlockYourBrain, which poses different basic mathematical questions to the learners. We introduce an analytical approach to model the learners' profile according to the learner's answering behaviour. Furthermore, similar learner's profiles are grouped together to construct a learning behaviour cluster. The choice of hierarchical clustering as a means of classification of learners' profiles derives from the observations of learners behaviour. This in turn reflects the similarities and subtle differences of learner behaviour, which are further analysed in more detail. Building awareness about the learner's behaviour is the first and necessary step for future learning-aware applications.
Conference Paper
In this work we focus on a specific application named “1x1 trainer” that has been designed to assist children in primary school to learn one digit multiplications. We investigate the database of learners’ answers to the asked questions by applying Markov chain and classification algorithms. The analysis identifies different clusters of one digit multiplication problems in respect to their difficulty for the learners. Next we present and discuss the outcomes of our analysis considering Markov chain of different orders for each question. The results of the analysis influence the learning path for every pupil and offer a personalized recommendation proposal that optimizes the way questions are asked to each pupil individually.
Conference Paper
Understanding the behavior of learners within learning applications and analyzing the factors that may influence the learning process play a key role in designing and optimizing learning applications. In this work we focus on a specific application named “1x1 trainer” that has been designed for primary school children to learn one digit multiplications. We investigate the database of learners’ answers to the asked questions (N > 440000) by applying the Markov chains. We want to understand whether the learners’ answers to the already asked questions can affect the way they will answer the subsequent asked questions and if so, to what extent. Through our analysis we first identify the most difficult and easiest multiplications for the target learners by observing the probabilities of the different answer types. Next we try to identify influential structures in the history of learners’ answers considering the Markov chain of different orders. The results are used to identify pupils who have difficulties with multiplications very soon (after couple of steps) and to optimize the way questions are asked for each pupil individually.
Article
The ubiquitous availability of applications enables us to offer students opportunities to test and train competences in almost every situation. At Graz University of Technology two apps for testing competences in multiplication have been developed. They estimate the competence level of every user and adapt to their individual development in this domain. They collect a lot of data over a longer period, which could be used for further research. In the foreground they give feedback in a compact and clearly arranged way to the individual student and the teachers of classes. Furthermore, the analysis of the data over a longer term showed us that the process of testing and giving feedback also has a positive effect on learning. We emphasize that this quality in supporting the students could not be achieved by human teachers alone. Information Technology and Learning Analytics give them a wider radius to perceive specific behavior and extend their capacity for storing and processing all the relevant data.
Conference Paper
One of the first and most basic pieces of mathematical knowledge of school children is the multiplication table. At the age of 8 to 10 each child has to learn it by training step by step or, more scientifically, by using a behavioristic learning concept. We therefore know the pedagogical approach very well, but on the other side there is rather little knowledge about the step-by-step increase of knowledge of the school children. In this publication we present some data documenting the fluctuation in the process of acquiring the multiplication tables. We report the development of an algorithm which is able to adapt the given tasks out of a given pool to unknown pupils. For this purpose a web-based application for learning the multiplication table was developed and then tested by children. Afterwards so-called learning curves of each child were drawn and analyzed by the research team as well as by teachers, yielding interesting outcomes. Learning itself is maybe not as predictable as pedagogical experience suggests; it is a very individualized process of the learners themselves. It can be summarized that the algorithm itself as well as the learning curves are very useful for studying learning success. Therefore it can be concluded that learning analytics will become an important step for teachers and learners of tomorrow.
Article
A parallel distributed processing model of visual word recognition and pronunciation is described. The model consists of sets of orthographic and phonological units and an interlevel of hidden units. Weights on connections between units were modified during a training phase using the back-propagation learning algorithm. The model simulates many aspects of human performance, including (a) differences between words in terms of processing difficulty, (b) pronunciation of novel items, (c) differences between readers in terms of word recognition skill, (d) transitions from beginning to skilled reading, and (e) differences in performance on lexical decision and naming tasks. The model's behavior early in the learning phase corresponds to that of children acquiring word recognition skills. Training with a smaller number of hidden units produces output characteristic of many dyslexic readers. Naming is simulated without pronunciation rules, and lexical decisions are simulated without accessing word-level representations. The performance of the model is largely determined by three factors: the nature of the input, a significant fragment of written English; the learning rule, which encodes the implicit structure of the orthography in the weights on connections; and the architecture of the system, which influences the scope of what can be learned.
Article
Adults' solution times to simple addition problems typically increase with the sum of the problems (the problem size effect). Models of the solution process are based on the assumption that adults always directly retrieve answers to problems from an associative network. Accordingly, attempts to explain the problem size effect have focused either on structural explanations that relate latencies to numerical indices (e.g., the area of a tabular representation) or on explanations that are based on frequency of presentation or amount of practice. In this study, the authors have shown that the problem size effect in simple addition is mainly due to participants' selection of nonretrieval procedures on larger problems (i.e., problems with sums greater than 10). The implications of these results for extant models of addition performance are discussed. Twenty years of research on mental arithmetic has shown that problems involving larger numbers (e.g., 9 + 6) are solved more slowly than problems involving smaller numbers (e.g., 3 + 4). Surprisingly, in spite of the wealth of empirical data and the extensive theoretical development on mental arithmetic, the problem size effect has eluded satisfactory explanation (Ashcraft, 1992; McCloskey, Harley, & Sokol, 1991; Widaman & Little, 1992). The goal of the present research was to test an explanation of the problem size effect in adults that has been used to account for the arithmetic performance of children (Ashcraft, 1992; Siegler, 1987). We hypothesized that variability in the selection of procedures to solve simple addition problems has a major impact on solution latencies and may account for a substantial portion of the problem size effect.
Article
According to the encoding-complex approach (Campbell & Clark, 1988; Clark & Campbell, 1991), numerical skills are based on a variety of modality-specific representations (e.g., visuo-spatial and verbal-auditory codes), and diverse number-processing tasks (e.g., numerical comparisons, calculation, reading numbers, etc.) generally involve common, rather than independent, cognitive mechanisms. In contrast, the abstract-modular theory (e.g., McCloskey, Caramazza, & Basili, 1985) assumes that number processing is comprised of separate comprehension, calculation, and production subsystems that communicate via a single type of abstract quantity code. We review evidence supporting the specific-integrated (encoding-complex) view of number processing over the abstract-modular view, and report new experimental evidence that one aspect of number processing, retrieval of simple multiplication facts, involves non-abstract, format-specific representations and processes. We also consider implications of the encoding-complex hypothesis for the modularity of number skills.