An Affective Model of the Interplay Between Emotions and Learning
Barry Kort, Rob Reilly, Rosalind Picard
Media Laboratory, M.I.T.
{bkort, reilly, picard}@media.mit.edu
Abstract
This article proffers a novel model by which to
conceptualize the impact of emotions upon learning.
We believe there is an interplay of emotions and
learning, but this interaction is far more complex than
previous theories have articulated. Our model goes
beyond previous research studies not just in the
emotions addressed, but also in an attempt to formalize
an analytical model that describes the dynamics of
emotional states during model-based learning
experiences.
1. Introduction
Why is there no word in English for the
art of learning? Webster says that
pedagogy means the art of teaching.
What is missing is the parallel word for
learning. In schools of education,
courses on the art of teaching are simply
listed as “methods.” Everyone
understands that the methods of
importance in education are those of
teaching—these courses supply what is
thought to be needed to become a skilled
teacher. But what about methods of
learning?
- Seymour Papert, The Children’s Machine
Educators have traditionally emphasized conveying
information and facts; rarely have they modeled the
learning process. When teachers present material to the
class, it is usually in a polished form that omits the
natural steps of making mistakes (e.g., feeling
confused), recovering from them (e.g., overcoming
frustration), deconstructing what went wrong (e.g., not
becoming dispirited), and starting over again (with
hope and perhaps enthusiasm). Those of us who work in science, math, engineering, and technology (SMET) professions know that learning naturally involves
failure and a host of associated affective responses.
Yet, educators of SMET learners have rarely
illuminated these natural concomitants of the learning
experience. The unfortunate result is that when students see that they are not getting the facts right (on quizzes, exams, etc.), they tend to believe that they are either ‘not good at this,’ ‘can’t do it,’ or simply ‘stupid’ when it comes to these subjects. What we fail to teach them is that all the feelings associated with various levels of failure are normal parts of learning, and that they can actually be helpful signals for how to learn better.
Expert teachers are very adept at recognizing and
addressing the emotional state of learners and, based
upon that observation, taking some action that
positively impacts learning. But what do these expert
teachers ‘see’ and how do they decide upon a course of
action? How do students who have strayed from learning return to a productive path, such as the one that Csikszentmihalyi [1990] refers to as the “zone of flow”?
Preliminary research by Lepper and Chabay [1988]
indicates that “expert human tutors… devote at least as
much time and attention to the achievement of
affective and emotional goals in tutoring, as they do to
the achievement of the sorts of cognitive and
informational goals that dominate and characterize
traditional computer-based tutors.”
Skilled humans can assess emotional signals with
varying degrees of accuracy, and researchers are
beginning to make progress in giving computers similar abilities to recognize affective expressions. Although
computers perform as well as people only in highly
restricted domains, we believe that accurately
identifying a learner’s emotional/cognitive state is a
critical indicator of how to assist the learner in
achieving an understanding of the efficiency and
pleasure of the learning process. We also assume that
computers will, sooner rather than later, be more
capable of recognizing human behaviors that lead to
strong inferences about affective state.
Axis (valence from -1.0 to +1.0):
  Anxiety-Confidence:    Anxiety, Worry, Discomfort, Comfort, Hopeful, Confident
  Boredom-Fascination:   Ennui, Boredom, Indifference, Interest, Curiosity, Intrigue
  Frustration-Euphoria:  Frustration, Puzzlement, Confusion, Insight, Enlightenment, Epiphany
  Dispirited-Encouraged: Dispirited, Disappointed, Dissatisfied, Satisfied, Thrilled, Enthusiastic
  Terror-Enchantment:    Terror, Dread, Apprehension, Calm, Anticipatory, Excited

Figure 1 – Emotion sets possibly relevant to learning (in contrast to traditional emotion theories)
2. Affective Computing: Emotions and
Learning
The extent to which emotional upsets
can interfere with mental life is no
news to teachers. Students who are
anxious, angry, or depressed don’t
learn; people who are caught in these
states do not take in information
efficiently or deal with it well.
- Daniel Goleman, Emotional Intelligence
To accomplish our goal, which is to endow a computer with the ability to identify a learner's affective state and respond accordingly, we must redefine, and in some cases reengineer, various aspects of educational pedagogy. To this end we must rethink our perspective on what is happening in education and, based upon our hypothesis, reengineer accordingly. Some of these beliefs will be theorized, perhaps beyond a practical level, but not beyond the level needed to understand them. We need to explore the underpinnings of various educational theories and evolve or revise them. For example, we propose a model that
describes the range of various emotional states during
learning (see Figure 1). The model is inspired by
theory often used to describe complex interactions in
engineering systems, and as such is not intended to
explain how learning works, but rather is intended to
give us a framework for thinking about and posing
questions about the role of emotions in learning.
As with any metaphor, the model has limits to its
application. In this case, the model is not intended to
fully describe all aspects of the complex interaction
between emotions and learning, but rather only to
serve as a beginning for describing some of the key
phenomena that we think are all too often overlooked
in learning pedagogy. Our model goes beyond
previous research studies not just in the emotions
addressed, but also in an attempt to formalize an
analytical model that describes the dynamics of
emotional states during model-based learning
experiences, and to do so in a language that the
SMET learner can come to understand and utilize.
3. Guiding Theoretical Frameworks:
Developing an Advanced Technology
The older [learning theories] deal with
the activity that is sometimes
caricatured by the image of a white-
coated scientist watching a rat run
through a maze…newer [thinking is]
more likely to be based upon the
theories of performance of computer
programs than on the behavior of
animals… but… they are not about the
art of learning… they do not offer
advice to the rat (or to the computer)
about how to learn.
- Seymour Papert, The Children’s Machine
Before describing the model’s dynamics, we should
say something about the space of emotions it names.
Previous emotion theories have proposed that there
are from two to twenty basic or prototype emotions
(see for example, Plutchik, 1980; Leidelmeijer,
1991). The four most common emotions appearing
on the many theorists’ lists are fear, anger, sadness,
and joy. Plutchik [1980] distinguished among eight
basic emotions: fear, anger, sorrow, joy, disgust,
acceptance, anticipation, and surprise. Ekman [1992]
has focused on a set of six to eight basic
emotions that have associated facial expressions.
However, none of the existing frameworks seem to
address emotions commonly seen in SMET learning
experiences, some of which we have noted in Figure
1. Whether all of these are important, and whether
the axes shown in Figure 1 are the “right” ones
remains to be evaluated, and it will no doubt take
many investigations before a “basic emotion set for
learning” can be established. Such a set may be
culturally different and will likely vary with
developmental age as well. For example, it has been
argued that infants come into this world only
expressing interest, distress, and pleasure [Lewis,
1993] and that these three states provide sufficiently
rich initial cues to the caregiver that she or he can
scaffold the learning experience appropriately in
response. We believe that skilled observant human
tutors and mentors (teachers) react to assist students
based on a few ‘least common denominators’ of
affect as opposed to a large number of complex
factors; thus, we expect that the space of emotions
presented here might be simplified and refined
further as we tease out which states are most
important for shaping the companion’s responses.
Nonetheless, we know that the labels we attach to
human emotions are complex and can contain
mixtures of the words here, as well as many words
not shown here. The challenge, at least initially, is to see how well our model and its hypothesis can do with a very small space of possibilities, since the smaller the set, the more likely we are to have greater classification success by the computer.
Figure 2a – Proposed model relating phases of learning to emotions in Figure 1. The horizontal axis runs from Negative Affect (left) to Positive Affect (right); the vertical axis runs from Un-learning (bottom) to Constructive Learning (top). Quadrant I (positive affect, constructive learning): awe, satisfaction, curiosity. Quadrant II (negative affect, constructive learning): disappointment, puzzlement, confusion. Quadrant III (negative affect, un-learning): frustration, discarding misconceptions. Quadrant IV (positive affect, un-learning): hopefulness, fresh research.

Figure 2b – Circular and helical flow of emotion through quadrants I, II, III, and IV.
Figures 2a and 2b attempt to interweave the emotion
axes shown in Figure 1 with the cognitive dynamics
of the learning process. The horizontal axis is an
Emotion Axis. It could be one of the specific axes
from Figure 1, or it could symbolize the n-vector of
all relevant emotion axes (thus allowing multi-
dimensional combinations of emotions). The positive
valence (more pleasurable) emotions are on the right;
the negative valence (more unpleasant) emotions are
on the left. The vertical axis is what we call the
Learning Axis, and symbolizes the construction of
knowledge upward, and the discarding of
misconceptions downward. (Note: we do not see
learning as being simply a process of
constructing/deconstructing or adding/subtracting
information; this terminology is merely a projection
of one aspect of how people can think about learning.
Other aspects could be similarly included along the
Learning Axis.)
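To make this geometry concrete, the following sketch (ours, for illustration only; the axis names are taken from Figure 1, while the numeric encoding in [-1.0, +1.0] and the class structure are assumptions) represents a learner's state as one valence per emotion axis plus a Learning Axis value, and reports the Figure 2a quadrant occupied with respect to any single axis:

from dataclasses import dataclass, field
from typing import Dict

# Emotion axes taken from Figure 1; each is assigned a valence in [-1.0, +1.0].
EMOTION_AXES = [
    "anxiety-confidence",
    "boredom-fascination",
    "frustration-euphoria",
    "dispirited-encouraged",
    "terror-enchantment",
]

@dataclass
class LearnerState:
    """One instant of a learner's affective/cognitive state.

    valences: one value per emotion axis; negative = unpleasant,
              positive = pleasurable (horizontal axis of Figure 2a).
    learning: positive = constructing knowledge, negative = un-learning
              (vertical axis of Figure 2a).
    """
    valences: Dict[str, float] = field(
        default_factory=lambda: {axis: 0.0 for axis in EMOTION_AXES})
    learning: float = 0.0

    def quadrant(self, axis: str) -> str:
        """Figure 2a quadrant (I-IV) with respect to a single emotion axis."""
        v = self.valences[axis]
        if self.learning >= 0:
            return "I" if v >= 0 else "II"
        return "IV" if v >= 0 else "III"

# A learner who is still constructing knowledge, frustrated by a failed
# attempt yet fascinated by the topic (two axes, two different quadrants).
state = LearnerState(learning=0.4)
state.valences["frustration-euphoria"] = -0.6
state.valences["boredom-fascination"] = +0.7
print(state.quadrant("frustration-euphoria"))  # -> II
print(state.quadrant("boredom-fascination"))   # -> I

Encoded this way, the same learner can occupy different quadrants on different axes at the same instant, the multi-axis situation discussed below.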
The student ideally begins in quadrant I or II: they
might be curious and fascinated about a new topic of
interest (quadrant I) or they might be puzzled and
motivated to reduce confusion (quadrant II). In either
case, they are in the top half of the space, if their
focus is on constructing or testing knowledge.
Movement happens in this space as learning
proceeds. For example, when solving a puzzle in The Incredible Machine, a student gets an idea of how to implement a solution and then builds a simulation of it.
When she runs the simulation and it fails, she sees
that her idea has some part that doesn’t work – that
needs to be deconstructed. At this point it is not
uncommon for the student to move down into the
lower half of the diagram (quadrant III) where
emotions may be negative and the cognitive focus
changes to eliminating some misconception. As she
consolidates her knowledge—what works and what
does not—with awareness of a sense of making
progress, she may move to quadrant IV. Getting a
fresh idea propels the student back into the upper half
of the space, most likely quadrant I. Thus, a typical
learning experience involves a range of emotions,
moving the student around the space as they learn.
If one visualizes a version of Figures 2a and 2b for
each axis in Figure 1, then at any given instant, the
student might be in multiple quadrants with respect to
different axes. They might be in quadrant II with
respect to feeling frustrated; and simultaneously in
quadrant I with respect to interest level. It is
important to recognize that a range of emotions
occurs naturally in a real learning process, and it is
not simply the case that the positive emotions are the
good ones. We do not foresee trying to keep the
student in quadrant I, but rather to help them see that
the cyclic nature is natural in SMET learning, and
that when they land in the negative half, it is only
part of the cycle. Our aim is to help them to keep
orbiting the loop, teaching them how to propel
themselves especially after a setback.
A third axis (not shown) can be visualized as extending out of the plane of the page: the Knowledge Axis. If one visualizes the above
dynamics of moving from quadrant I to II to III to IV
as an orbit, then when this third dimension is added,
one obtains the ‘excelsior spiral that climbs the tree
of knowledge.’ In the phase plane plot, time is
parametric as the orbit is traversed in a
counterclockwise direction. In quadrant I,
anticipation and expectation are high, as the learner
builds ideas and concepts and tries them out.
Emotional mood decays over time, either from
boredom or from disappointment. In quadrant II, the
rate of construction of working knowledge
diminishes, and negative emotions emerge as
progress flags. In quadrant III, the learner discards
misconceptions and ideas that didn't pan out, as the
negative affect runs its course. In quadrant IV, the
learner recovers hopefulness and positive attitude as
the knowledge set is now cleared of unworkable and
unproductive concepts, and the cycle begins anew.
In building a complete and correct mental model
associated with a learning opportunity, the learner
may experience multiple cycles around the phase
plane until completion of the learning exercise. Each
orbit represents the time evolution of the learning
cycle. Note that the orbit doesn't close on itself, but
gradually moves up the knowledge axis.
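As a rough, purely illustrative parameterization (our assumption; the paper specifies only the counterclockwise direction of the orbit and the gradual climb, not any particular functional form), the orbit and its ascent up the Knowledge Axis can be sketched as a phase-plane helix:

import math

def learning_orbit(t, period=1.0, knowledge_rate=0.1, start_phase=math.pi / 4):
    """Idealized trajectory through the affect/learning phase plane.

    Returns (affect, learning, knowledge) at time t. Affect and learning
    trace a counterclockwise circle through quadrants I -> II -> III -> IV,
    while knowledge accumulates slowly, yielding the 'excelsior spiral'.
    """
    angle = 2 * math.pi * t / period + start_phase  # begin inside quadrant I
    affect = math.cos(angle)        # + = pleasurable, - = unpleasant
    learning = math.sin(angle)      # + = constructing, - = un-learning
    knowledge = knowledge_rate * t  # monotonic climb up the Knowledge Axis
    return affect, learning, knowledge

def quadrant(affect, learning):
    """Quadrant label as laid out in Figure 2a."""
    if learning >= 0:
        return "I" if affect >= 0 else "II"
    return "IV" if affect >= 0 else "III"

# Sample one full orbit: the learner passes through I, II, III, and IV and
# returns to I, but with more accumulated knowledge than at the start.
for step in range(9):
    a, l, k = learning_orbit(step / 8)
    print(f"t={step / 8:.2f}  quadrant {quadrant(a, l)}  knowledge={k:.3f}")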
A computerized Learning Companion, one that would track a learner through their learning journey, remain sensitive to their affective state, and respond appropriately, could, we believe, use models such as these to assess whether learning is proceeding at a healthy rate. The model could help guide it in exploring strategies for deciding when best to intervene with a hint, a word of encouragement, or an observation (typically in quadrants III and IV).
Thus, we see the computerized Learning Companion
as helping to scaffold the learning experience by
trying to keep the learner moving through this space,
e.g., not avoiding quadrant III, but helping them to
keep moving through it instead of getting stuck there.
The models may also be useful to learners in aiding their own metacognition about their learning experience, especially helping them identify and
work with naturally-occurring negative emotions in a
productive and cognitively satisfying way. And, as a
vicarious outcome, this model could be utilized by
human teachers when dealing with students.
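As one hedged sketch of how a Learning Companion might consult such a model (the observation window, the 'stuck' threshold, and the suggested response are hypothetical choices, not part of the model itself), a minimal policy could flag a learner who has lingered in quadrant III across most recent observations:

from collections import deque
from typing import Optional

class LearningCompanionMonitor:
    """Toy intervention heuristic: suggest acting when the learner appears
    stuck in quadrant III (negative affect, un-learning) for most of the
    recent observations. Window size and threshold are illustrative only."""

    def __init__(self, window: int = 10, stuck_fraction: float = 0.7):
        self.history = deque(maxlen=window)
        self.stuck_fraction = stuck_fraction

    def observe(self, quadrant: str) -> Optional[str]:
        """Record the latest quadrant estimate; return an intervention
        suggestion, or None while the learner seems to be orbiting normally."""
        self.history.append(quadrant)
        if len(self.history) < self.history.maxlen:
            return None  # not enough evidence yet
        if self.history.count("III") / len(self.history) >= self.stuck_fraction:
            return "offer a hint or a word of encouragement"
        return None

monitor = LearningCompanionMonitor()
suggestion = None
for q in ["I", "II", "III", "III", "III", "III", "III", "III", "III", "III"]:
    suggestion = monitor.observe(q) or suggestion
print(suggestion)  # -> offer a hint or a word of encouragement

The same structure could be inverted to watch for a learner idling too long in quadrant I, where boredom rather than frustration becomes the risk.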
4. References
[1] Chen, L.S., Huang, T.S., Miyasato, T., and Nakatsu, R. (1998). Multimodal human emotion/expression recognition. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, Nara, Japan. IEEE Computer Society, April 1998.
[2] Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row: New York.
[3] DeSilva, L.C., Miyasato, T., and Nakatsu, R. (1997). Facial emotion recognition using multi-modal information. In Proceedings of the IEEE International Conference on Information, Communications and Signal Processing, Singapore, pp. 397-401, September 1997.
[4] Donato, G., Bartlett, M.S., Hager, J.C., Ekman, P., and Sejnowski, T.J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 974-989, October 1999.
[5] Ekman, P. (1992). Are there basic emotions? Psychological Review, 99(3), 550-553.
[6] Goleman, D. (1995). Emotional Intelligence. Bantam Books: New York.
[7] Huang, T.S., Chen, L.S., and Tao, H. (1998). Bimodal emotion recognition by man and machine. ATR Workshop on Virtual Communication Environments, Kyoto, Japan, April 1998.
[8] Leidelmeijer, K. (1991). Emotions: An Experimental Approach. Tilburg University Press.
[9] Lepper, M.R. and Chabay, R.W. (1988). Socializing the intelligent tutor: Bringing empathy to computer tutors. In H. Mandl and A. Lesgold (Eds.), Learning Issues for Intelligent Tutoring Systems, pp. 242-257.
[10] Lewis, M. (1993). The emergence of human emotions. In M. Lewis and J. Haviland (Eds.), Handbook of Emotions, pp. 223-235. Guilford Press: New York.
[11] Papert, S. (1993). The Children’s Machine: Rethinking School in the Age of the Computer. Basic Books: New York.
[12] Picard, R.W. (1997). Affective Computing. MIT Press: Cambridge, MA.
[13] Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik and H. Kellerman (Eds.), Emotion: Theory, Research, and Experience, Vol. 1: Theories of Emotion. Academic Press.
[14] Scheirer, J., Fernandez, R., and Picard, R.W. (1999). Expression Glasses: A wearable device for facial expression recognition. In Proceedings of CHI, February 1999.