Education Issues
Measuring Motivational Characteristics of Courses: Applying Keller's Instructional Materials Motivation Survey to a Web-Based Course
David A. Cook, MD, MHPE, Thomas J. Beckman, MD, Kris G. Thomas, MD,
and Warren G. Thompson, MD
Abstract
Purpose
The Instructional Materials Motivation
Survey (IMMS) purports to assess the
motivational characteristics of
instructional materials or courses using
the Attention, Relevance, Confidence,
and Satisfaction (ARCS) model of
motivation. The IMMS has received little
use or study in medical education. The
authors sought to evaluate the validity of
IMMS scores and compare scores
between standard and adaptive Web-
based learning modules.
Method
During the 2005–2006 academic year,
124 internal medicine residents at the
Mayo School of Graduate Medical
Education (Rochester, Minnesota) were
asked to complete the IMMS for two
Web-based learning modules.
Participants were randomly assigned to
use one module that adapted to their
prior knowledge of the topic, and one
module using a nonadaptive design.
IMMS internal structure was evaluated
using Cronbach alpha and
interdimension score correlations.
Relations to other variables were
explored through correlation with global
module satisfaction and regression with
knowledge scores.
Results
Of the 124 eligible participants, 79
(64%) completed the IMMS at least
once. Cronbach alpha was ≥0.75 for
scores from all IMMS dimensions.
Interdimension score correlations ranged
0.40 to 0.80, whereas correlations
between IMMS scores and global
satisfaction ratings ranged 0.40 to 0.63
(P<.001). Knowledge scores were
associated with Attention and Relevance
subscores (P=.033 and .01,
respectively) but not with other IMMS
dimensions (P≥.07). IMMS scores were
similar between module designs (on a
five-point scale, differences ranged from
0.0 to 0.15, P≥.33).
Conclusions
These limited data generally support the
validity of IMMS scores. Adaptive and
standard Web-based instructional
designs were similarly motivating.
Cautious use and further study of the
IMMS are warranted.
Acad Med. 2009; 84:1505–1509.
Motivation is widely acknowledged as
central to meaningful learning.1–5
Although motivation is a characteristic of
the individual learner, for a teacher it is
useful to consider motivation from the
standpoint of the instructional design—
namely, the impact of course materials
and format on an individual’s motivation
to participate and learn, and specific
strategies to enhance the motivational
features of a course to better facilitate
engagement and learning. As new
technologies such as Web-based and just-
in-time learning bring personalized
instruction closer to reality, measurement
of and subsequent adaptation to
motivational needs may become
increasingly relevant.6 Yet the assessment
of motivation has received little attention
in medical education.
Building on expectancy-value theories of
motivation, Keller5,7 defined a four-
dimension model for motivation with
practical application to instructional
design—Attention, Relevance,
Confidence, and Satisfaction (ARCS):
- Attention, or interest, must be obtained and maintained.
- Relevance to learners’ goals and needs must be made clear.
- Learners must feel Confident in their ability to succeed in learning (expectancy for success).
- Learners should feel Satisfaction about their accomplishments in the learning opportunity.
Keller also developed an instrument, the
Instructional Materials Motivation
Survey (IMMS), to assess motivational
features of a course in each of the ARCS
dimensions (Keller JM. Development of
two measures of learner motivation:
Florida State University, 2006;
unpublished).
The information from such an
assessment could be used to improve a
course design generally, or to adapt a
course to an individual’s motivational
needs.
Only two studies8,9 have used the IMMS
in medical education. Both used a Korean
translation of the IMMS to compare a
Web-based course with a series of face-
to-face lectures, and both found no
significant differences in overall
motivation scores between groups.
Dimension-specific subscores were not reported. Further research using the IMMS seems warranted.
Dr. Cook is associate professor of medicine and
director, Office of Education Research, College of
Medicine, Mayo Clinic, Rochester, Minnesota.
Dr. Beckman is associate professor of medicine,
College of Medicine, Mayo Clinic, Rochester,
Minnesota.
Dr. Thomas is assistant professor of medicine,
College of Medicine, Mayo Clinic, Rochester,
Minnesota.
Dr. Thompson is associate professor of medicine,
College of Medicine, Mayo Clinic, Rochester,
Minnesota.
Correspondence should be addressed to Dr. Cook,
Division of General Internal Medicine, Mayo Clinic
College of Medicine, Baldwin 4-A, 200 First Street
SW, Rochester, MN 55905; telephone: (507) 538-
0614; fax: (507) 284-5370; e-mail:
(cook.david33@mayo.edu).
Evidence to support the validity of IMMS
scores is lacking. The validity of an
instrument’s scores can be supported by
evidence from five sources: content,
internal structure, relations to other
variables, response process, and
consequences.10–12 Keller has provided a
detailed description of the development
of IMMS questions (content evidence),
and the internal consistency reliability
(internal structure evidence) is reportedly
excellent: ≥0.88 (Keller JM.
Development of two measures of learner
motivation: Florida State University,
2006; unpublished). A comprehensive
search of Medline, PsycINFO, and ERIC
revealed only one study evaluating the
validity of IMMS scores. In that study13
of college students, factor analysis
suggested that 16 items could be
eliminated and that 11 of the remaining
20 loaded on the Attention dimension;
reliability information was not reported.
The studies using the IMMS Korean
translation reported internal consistency
reliabilities of 0.83 and 0.87 for the
entire instrument,8,9 but dimension-
specific information was not provided.
Additional evidence regarding the
validity of IMMS scores is needed.
We carried out the present study to
evaluate the validity of IMMS scores and,
as part of that evaluation, to measure
motivational differences between two
Web-based designs for teaching internal
medicine residents about ambulatory
medicine. We collected validity evidence
of internal structure (internal consistency
reliability and interdimension score
correlations) and relations to other
variables (discrimination of instructional
designs, and associations between IMMS
scores and knowledge scores, time spent
learning, and global course ratings). We
anticipated that higher motivation scores
would be associated with higher
knowledge scores (improved learning),
longer time on task (more motivated to
invest time), and higher course ratings.
Our study extends the findings of a
randomized trial comparing adaptive
and nonadaptive Web-based learning
designs.14 In the adaptive design
(described in detail below), learners were
allowed to skip a section of the Web-
based course if they correctly answered a
multiple-choice question preceding that
section. We hypothesized that the
adaptive design would be more
motivating because the faster pace and
enhanced learner control would
command more attention and engender
greater satisfaction, and information
would seem more relevant when
presented in response to incorrectly
answered questions.
Method
Participants
The eligible participants for the present
study, conducted in the 2005–2006
academic year, were the 124 internal
medicine residents at the Mayo School
of Graduate Medical Education in
Rochester, Minnesota, who were enrolled
in a previously reported14 randomized
trial. The Mayo institutional review
board deemed the study exempt, and all
participants consented to participate.
Interventions
The instructional interventions were two
Web-based learning modules, one on
asthma and the other on depression, each
developed using two instructional
designs.14 In the standard design, the
module began with a patient scenario and
a multiple-choice question. After
responding to the question, the learner
was given the correct answer, a brief
rationale, and then detailed information
(including tables and figures, as
appropriate) on the topic. After reading
the detailed information, the learner
proceeded to the next question or patient
scenario (there was a total of five or
six scenarios, each with two to five
questions). The second design, an
adaptive design, was identical to the
standard design if the learner responded
incorrectly to a given question. However,
if the learner responded correctly, then
only the correct answer and rationale
were presented; the detailed information
associated with that question could be
skipped or reviewed (accessed via a Web
link) at the learner’s discretion.
Information was identical between the
two module designs. Participants were
randomly assigned to complete the
asthma module using one design
(standard or adaptive) and to complete
the depression module using the other
design (a randomized, crossover study
design).
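The branching rule that distinguishes the two designs can be summarized in a short sketch. This is an illustration only, not the authors' implementation; the function names (present, ask) and the scenario data structure are hypothetical, and the actual modules were Web pages rather than a script.

```python
# Minimal sketch of the standard versus adaptive module logic described above.
# All names are illustrative placeholders.

def present(*content):
    print(*content)  # placeholder for rendering a page of the module

def ask(question):
    return input(question["stem"] + " ")  # placeholder for a multiple-choice form

def run_module(scenarios, design):
    """Walk a learner through a module's patient scenarios under either design."""
    assert design in ("standard", "adaptive")
    for scenario in scenarios:
        present(scenario["case"])                 # patient scenario
        for q in scenario["questions"]:           # two to five questions per scenario
            answer = ask(q)
            present(q["correct_answer"], q["rationale"])
            if design == "standard" or answer != q["correct_answer"]:
                # The standard design always shows the detailed information;
                # the adaptive design shows it only after an incorrect answer.
                present(q["detailed_information"])
            else:
                # Correct answer under the adaptive design: the detail is
                # optional, reachable via a link at the learner's discretion.
                present("Optional detail available:", q["detailed_information"])
```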
Instruments
The IMMS consists of 36 statements with
response options ranging from 1 (not
true) to 5 (very true). After correction for
negatively phrased items (i.e., reverse
scoring), higher scores indicate greater
motivation. The 12 Attention items focus
on how well a course’s content, writing
style, and organization capture and
maintain attention or help avoid
boredom. Nine Relevance items assess
how well the information links to the
learner’s prior knowledge and experience,
perceived needs, and potential future
applications. Nine Confidence items
address the material’s apparent difficulty
and how the course presentation provides
assurance that learning would be
successful. Six Satisfaction items assess
enjoyment during the course and
perceived accomplishment afterward.
The IMMS was administered
immediately after each module.
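To make the scoring concrete, a minimal sketch of reverse scoring and subscore computation follows. The dimension sizes (12, 9, 9, and 6 items) come from the description above; the specific item-to-dimension assignment and the indices of the 10 reverse-scored items belong to Keller's instrument and are shown here only as placeholders.

```python
import numpy as np

# Placeholder layout: the real item-to-dimension assignment is Keller's,
# not the consecutive blocks assumed here.
DIMENSIONS = {
    "Attention":    range(0, 12),   # 12 items
    "Relevance":    range(12, 21),  # 9 items
    "Confidence":   range(21, 30),  # 9 items
    "Satisfaction": range(30, 36),  # 6 items
}
REVERSE_SCORED = [2, 7, 11]  # placeholder indices; the instrument has 10 such items

def score_imms(responses):
    """responses: (n_respondents, 36) array of ratings from 1 (not true)
    to 5 (very true). Returns each dimension's subscore on the 1-to-5 scale."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, REVERSE_SCORED] = 6 - r[:, REVERSE_SCORED]  # reverse-score negative items
    # Dividing the item sum by the number of items (i.e., taking the item mean)
    # standardizes every dimension to range from 1 to 5, as described below
    # under Data analysis.
    return {dim: r[:, list(items)].mean(axis=1) for dim, items in DIMENSIONS.items()}
```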
Knowledge was assessed following IMMS
administration using a multiple-choice
test (Cronbach alpha = 0.68). A global
measure of satisfaction with the module
(i.e., global module satisfaction) was then
assessed using the question, “On a scale
from 1 (poor) to 6 (excellent), what is
your overall rating of this module?” Time
spent completing modules was calculated
from computer logs.14 We also wished to
determine the association between
motivation and satisfaction with the
course as a whole (i.e., overall course
rating). We measured this outcome
approximately one month after the last
module, with a question, “On a scale
from 1 to 6, with 1 being awful and 6
being excellent, what is your overall
rating of this Web-based course?”
Data analysis
Each dimension’s subscores were
standardized, as suggested by Keller,
by dividing by the number of questions
to yield a score for each dimension
ranging from 1 to 5. Internal consistency
was calculated using Cronbach alpha
(each module considered separately).
Discrimination between IMMS
dimensions was determined by
calculating the correlation (Pearson r)
between IMMS subscores within each
module. Discrimination was also
evaluated by determining the degree to
which IMMS scores distinguished the
two module designs. This was done by
comparing average IMMS scores between
the module designs using general linear
models, and by comparing the responses
of individuals between designs using the
intraclass correlation coefficient (ICC).
Convergence was evaluated by calculating
correlation (Pearson r) between IMMS
scores and global module satisfaction
scores. Finally, associations between
IMMS scores and knowledge scores
(percentage of responses correct), time
spent, and overall course ratings were
determined using mixed linear models.
Statistical analysis used SAS 9.1 (SAS
Institute, Cary, NC). All available data
were used in each analysis. All analyses
used a two-sided alpha of 0.05.
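As one concrete reading of these steps, Cronbach alpha and the interdimension correlations could be computed as below. The article used SAS 9.1; this Python sketch simply restates the standard formulas and is not the authors' code.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k) array of item responses for one IMMS dimension
    within one module. Standard formula:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def interdimension_r(subscore_a, subscore_b):
    """Pearson r between two dimension subscores within the same module."""
    return np.corrcoef(subscore_a, subscore_b)[0, 1]
```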
Results
Between March and August 2006, 79 of the
124 residents (a response rate of 64%)
completed the IMMS at least once. Twenty-
six of the 79 (33%) were women, and 29
(37%), 29 (37%), and 21 (26%) were in
their first, second, and third postgraduate
years, respectively. Of the 79 residents, 66
completed the IMMS after using standard
modules, 53 completed it after using
adaptive modules, and 40 completed it after
both modules.
Validity evidence
Internal consistency (Cronbach alpha)
was ≥0.75 for all IMMS dimensions,
suggesting adequate reliability (Table 1).
Cronbach alpha for all 36 items together
was 0.93 to 0.95. Deleting individual
items had little effect on alpha for
individual dimension or total scores
except for one item in the Relevance
dimension, “This lesson was not relevant
to my needs because I already knew most
of it.” Deleting this item improved the
Relevance alpha from 0.78 to 0.82 for
Standard and from 0.80 to 0.84 for
Adaptive. This item was one of 10
“reverse-scored” items (i.e., agreement
with this item indicated low motivation).
Alphas remained stable or dropped after
individually deleting the other reverse-
scored items, suggesting that reverse
scoring did not lead to poor item
performance.
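The item analysis just described can be expressed as a loop over items, recomputing alpha with each item left out in turn. This sketch reuses the cronbach_alpha function from the earlier sketch and is illustrative only.

```python
import numpy as np

def alpha_if_item_deleted(items):
    """For each item, Cronbach alpha of the remaining items. A rise in alpha
    after deletion flags an item inconsistent with the rest of the scale, as
    seen above for the 'already knew most of it' Relevance item."""
    x = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(x, j, axis=1)) for j in range(x.shape[1])]
```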
Correlation coefficients between scores
from different IMMS dimensions (Table
2) are lower than the internal consistency
reliability (Cronbach alpha), indicating
some degree of discrimination between
motivation dimensions. However,
interdimension correlations are still fairly
high (r ranging 0.4 to 0.8), particularly
for the Attention dimension correlated
with others, suggesting at least modest
overlap between dimensions.
Correlations between IMMS scores and
the global module satisfaction scores
(assessed at the end of each module)
ranged from 0.40 to 0.63 (Table 2).
We found positive associations between
IMMS total scores and dimension-
specific subscores and knowledge
scores, time spent, and overall course
ratings (assessed after a several-week
delay); see Table 3. These associations
were statistically significant for
regression of Attention and Relevance
scores with knowledge (P=.033 and
P=.01, respectively), and all IMMS scores
with overall course ratings (P<.001).
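A sketch of the repeated-measures regression behind these estimates, using a mixed linear model with a random intercept per resident, is shown below. The article fitted mixed models in SAS; this statsmodels version and its column names (resident, attention, knowledge) are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per resident-module, with columns
# resident, attention (IMMS subscore, 1-5), and knowledge (percent correct).
df = pd.read_csv("imms_outcomes.csv")

# A random intercept per resident accounts for each person rating two modules.
fit = smf.mixedlm("knowledge ~ attention", data=df, groups=df["resident"]).fit()
print(fit.summary())  # the 'attention' coefficient corresponds to b in Table 3
```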
Table 1
Reliability and Differences Between Instructional Designs for Scores (From 79 Internal Medicine Residents) on the Instructional Materials Motivation Survey (IMMS) and Global Ratings, Mayo School of Graduate Medical Education, 2005–2006*

                               Standard (n = 66)    Adaptive (n = 53)    Differences between designs (n = 40)
Scale                          Mean (SD)   Alpha    Mean (SD)   Alpha    Mean difference (95% CI)‡    ICC†
IMMS Attention                 3.1 (0.6)   .82      3.2 (0.6)   .87      0.1 (−0.2, 0.3)              0.41
IMMS Relevance                 3.2 (0.6)   .78      3.2 (0.7)   .80      0.05 (−0.2, 0.3)             0.58
IMMS Confidence                3.6 (0.6)   .75      3.6 (0.6)   .83      0.0 (−0.2, 0.3)              0.56
IMMS Satisfaction              2.5 (0.8)   .89      2.6 (0.9)   .90      0.15 (−0.2, 0.5)             0.59
IMMS total (entire scale)      3.1 (0.5)   .93      3.2 (0.6)   .95      0.1 (−0.1, 0.3)              0.51
Global module satisfaction§    4.3 (0.9)   —        4.2 (1.1)   —        −0.15 (−0.5, 0.2)            0.52

* The scale for all IMMS dimensions and the total score ranges from 1 to 5. Participating residents were asked to complete the IMMS after two modules, and 79 completed the IMMS at least once: 66 following the standard module, 53 following the adaptive module, and 40 after both modules. SD indicates standard deviation; CI, confidence interval.
† ICC indicates intraclass correlation coefficient comparing ratings for the standard versus adaptive design.
‡ All P ≥ .33 comparing instructional designs.
§ Global module satisfaction was assessed using a separate question, with responses ranging from 1 (poor) to 6 (excellent).

Table 2
Correlations of Scores (From 79 Internal Medicine Residents) Among Instructional Materials Motivation Survey Dimensions and With a Global Rating, Mayo School of Graduate Medical Education, 2005–2006*

Scale                           Attention   Relevance   Confidence   Satisfaction
Standard (n = 66)
  IMMS Relevance                0.73        —           —            —
  IMMS Confidence               0.70        0.68        —            —
  IMMS Satisfaction             0.69        0.64        0.40         —
  Global module satisfaction†   0.50        0.47        0.54         0.53
Adaptive (n = 53)
  IMMS Relevance                0.80        —           —            —
  IMMS Confidence               0.69        0.62        —            —
  IMMS Satisfaction             0.73        0.75        0.60         —
  Global module satisfaction†   0.50        0.63        0.45         0.40

* Values in this matrix represent correlations within individual modules. P < .001 for each coefficient. Participating residents were asked to complete the IMMS after two modules, and 79 completed the IMMS at least once: 66 following the standard module, 53 following the adaptive module.
† Global module satisfaction was assessed using a separate question, with responses ranging from 1 (poor) to 6 (excellent).

Comparison between designs
There were no significant differences in
mean IMMS scores between module
designs (Table 1). The mean difference
ranged 0.0 to 0.15 points, indicating that
residents had not consistently perceived
one module design to be more
motivating than the other. However,
ICCs ranging 0.41 to 0.59 showed only
modest consistency from one design to
the next, suggesting that individual
residents provided somewhat different
IMMS scores for the two module designs
(i.e., evidence of discrimination, or
sensitivity to change).
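The ICC reported here can be computed from the paired ratings of the 40 residents who completed both designs. The article does not state which ICC form was used; the sketch below assumes the two-way random-effects, absolute-agreement, single-measures form (commonly called ICC(2,1)), which is one plausible choice.

```python
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, 2) array; one row per resident who rated both the
    standard and adaptive designs. Two-way random-effects, absolute-agreement,
    single-measures ICC (an assumption; the article does not name the form)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    rows, cols = x.mean(axis=1), x.mean(axis=0)
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)    # between-subjects mean square
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)    # between-designs mean square
    mse = ((x - rows[:, None] - cols[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```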
Discussion
In this study, we sought evidence to
support the validity of IMMS scores for
making inferences about the motivational
aspects of instructional events.
Supportive validity evidence included
good to very good internal consistency
reliability, variability in individual
participants’ ratings for different
module designs, lower interdimension
correlation than internal consistency,
significant positive associations with
overall course ratings, and consistently
positive (albeit rarely statistically
significant) associations with
knowledge scores and time spent.
Conversely, IMMS scores were not
sensitive to variation in instructional
design overall, and at least one item in
the Relevance dimension seemed
inconsistent with other scale items.
Correlations with global module
satisfaction ratings were generally lower
than correlations with other IMMS
dimension scores. However, global
module satisfaction reflects not only
motivation but also other aspects of
instructional design, and it may make sense
to see higher correlations among the IMMS
dimensions (all of which focus on
motivation) than between the IMMS
dimensions and this global measure.
The failure to detect a difference in mean
IMMS scores between module designs
could reflect insensitivity of IMMS
scores, perceptual differences in what
constitutes motivational materials, or
similarly motivating module designs. We
did not intend the designs to illustrate
unusually large motivational differences
but, rather, to reflect the variation found
between instructional alternatives used in
real life. Modest ICC values could
indicate that individual residents
perceived and rated the designs
differently (i.e., individuals discriminated
between designs even though averages
did not). However, if, in fact, the modules
were similarly motivating, then the ICCs
would indicate poor test-retest
reproducibility. To address this concern,
future studies might compare modules on
different topics using identical instructional
designs, or module designs that vary more
than those in this study. Of course, if
between-design differences in research
settings exceed those commonly found in
actual educational practice, then differences
in scores will have limited application.
Research might also explore the use of
individual, rather than average, IMMS
scores in certain applications (such as
motivationally adaptive courses).
Internal consistency data in this study are
similar to those reported by Keller (Keller
JM. Development of two measures of
learner motivation: Florida State
University, 2006; unpublished) and those
found in a Korean translation of the
IMMS.8,9 Further evidence of internal
structure might come from research on
the temporal stability or factor structure
of IMMS scores. Relations to other
variables evidence is available from
studies demonstrating that IMMS scores
are sensitive to instructional design
changes intended to highlight the ARCS
model.15,16 Additional evidence might
come from studies correlating IMMS
scores with scores from other
instruments assessing motivation using,
for example, the multitrait–multimethod
matrix.17 Content evidence has been
provided by the IMMS’s author in a
detailed description of item development
(Keller JM. Development of two
measures of learner motivation: Florida
State University, 2006; unpublished).
Evidence that assessing and acting on
IMMS scores can affect learning
outcomes would constitute evidence of
consequences, but no such studies were
found. Likewise, no response process
evidence was found.
In accord with our predictions,
participants who perceived the course as
more relevant and attention-getting had
higher knowledge scores. By contrast, the
associations between motivation and
time were not statistically significant.
Although these findings are preliminary,
they set the stage for further study of
motivation in medical education. IMMS
scores will be most useful when they
inform evidence-based course changes—
for example, through formative feedback
to instructors (after the course) or
through dynamic adaptations in response
to motivational needs of individual
learners (during the course). These, in
turn, will require research investigating
what instructional changes actually
enhance motivation, what motivational
enhancements improve learning
outcomes, and how educators can apply
these enhancements effectively and
efficiently. For example, studies could
investigate how to enhance the relevance
of material and whether that promotes
better learning. Alternatively, educators
might consider adapting Web-based
learning to motivational needs. A study
in high school students found that a
motivationally adaptive computer-
assisted learning design was more
effective than designs with fixed
motivational features.15 To our
knowledge, no such studies have been
done in medical education.

Table 3
Associations Between Instructional Materials Motivation Survey (IMMS) Scores (From 79 Internal Medicine Residents) and Other Study Outcomes, Mayo School of Graduate Medical Education, 2005–2006*

                          IMMS dimension
Outcome†                  Attention:         Relevance:         Confidence:        Satisfaction:      IMMS total:
                          b (SE), P          b (SE), P          b (SE), P          b (SE), P          b (SE), P
Knowledge score           4.6 (2.1), .033    5.0 (1.9), .010    3.7 (2.2), .088    1.4 (1.3), .27     3.4 (2.4), .16
Time                      3.4 (5.0), .50     6.0 (4.5), .19     4.0 (5.2), .45     6.1 (3.4), .074    8.1 (5.8), .16
Overall course rating     1.2 (0.2), <.001   0.8 (0.2), <.001   1.2 (0.2), <.001   0.5 (0.1), <.001   1.3 (0.2), <.001

* Numbers in the table represent the slope (b) of the regression line, indicating the average difference in outcome for a one-unit change in motivation ratings. For example, on average, a one-point increase in Attention ratings would correspond to a 4.6-point improvement in knowledge scores. Motivation ratings are standardized to a five-point scale in which higher ratings indicate higher motivation. The IMMS total score is an unweighted average of the four individual dimension scores.
† Units are as follows: knowledge score (percent correct); time (minutes); course rating (1 = awful, 6 = excellent).
This study has limitations. First, the
validity evidence presented is restricted to
internal consistency and relations to
other variables. Future studies might
explore associations with scores from
other instruments measuring motivation.
Second, the sample size precluded factor
analysis. Third, we conducted multiple
independent hypothesis tests. A
conservative threshold for statistical
significance using Bonferroni adjustment
would be alpha = 0.003 (i.e., 0.05 divided by
the number of tests performed). Finally, score
validity is context-specific, and thus these
results may not apply to other learners or
to non-Web-based learning
environments.
In conclusion, this study presents the first
validity evidence regarding IMMS scores
in medical education. Though limited,
these data are generally supportive and
suggest that cautious use and further
study of this instrument’s scores are
warranted.
Acknowledgments
Funding for this study was provided by the Mayo Clinic College of Medicine Education Innovation program.
References
1. Maslow AH. A theory of human motivation. Psychol Rev. 1943;50:370–396.
2. Bandura A. Self-efficacy. In: Ramachaudran VS, ed. Encyclopedia of Human Behavior. Vol 4. New York, NY: Academic Press; 1994:71–81.
3. Gagné RM. The Conditions of Learning and Theory of Instruction. 4th ed. New York, NY: Holt, Rinehart and Winston; 1985.
4. Biggs JB. Good learning: What is it? How can it be fostered? In: Biggs JB, ed. Teaching for Learning: The View From Cognitive Psychology. Hawthorn, Australia: The Australian Council for Educational Research; 1991.
5. Keller JM. Motivation and instructional design: A theoretical perspective. J Instr Dev. 1979;2(4):26–34.
6. Astleitner H, Keller JM. A model for motivationally adaptive computer-assisted instruction. J Res Comput Educ. 1995;27:270–280.
7. Keller JM. Development and use of the ARCS model of instructional design. J Instr Dev. 1987;10(3):2–10.
8. Jang KS, Park OJ, Hong MS, et al. A study on development of Web-based learning program with multimedia ECG monitoring and its application. J Korean Soc Med Inform. 2003;9:101–110.
9. Jang KS, Hwang SY, Park SJ, Kim YM, Kim MJ. Effects of a Web-based teaching method on undergraduate nursing students' learning of electrocardiography. J Nurs Educ. 2005;44:35–39.
10. Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. New York, NY: American Council on Education/Macmillan; 1989.
11. American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 1999.
12. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–e16.
13. Huang W, Huang W, Diefes-Dux H, Imbrie PK. A preliminary validation of attention, relevance, confidence and satisfaction model-based instructional material motivational survey in a computer-based tutorial setting. Br J Educ Technol. 2006;37:243–259.
14. Cook DA, Beckman TJ, Thomas KG, Thompson WG. Adapting Web-based instruction to residents' knowledge improves learning efficiency: A randomized controlled trial. J Gen Intern Med. 2008;23:985–990.
15. Song SH, Keller JM. Effectiveness of motivationally adaptive computer-assisted instruction on the dynamic aspects of motivation. Educ Technol Res Dev. 2001;49(2):5–22.
16. Gabrielle DM. The Effects of Technology-Mediated Instructional Strategies on Motivation, Performance, and Self-Directed Learning [dissertation]. Tallahassee, FL: Florida State University; 2003. Available at: http://etd.lib.fsu.edu/theses/available/etd-11142003-171019. Accessed July 20, 2009.
17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait–multimethod matrix. Psychol Bull. 1959;56:81–105.
Education Issues
Academic Medicine, Vol. 84, No. 11 / November 2009 1509
... Online instruction can ameliorate barriers due to geography, scheduling, and cost that make in-person learning infeasible for many health professionals and trainees [3]. Learners' motivation is key to their success when learning online [4]. Motivation is an energetic force that instigates and sustains learning behaviour [5]. ...
... Motivation is an energetic force that instigates and sustains learning behaviour [5]. Several studies in HPE have reported positive associations between learners' motivation and their learning behaviours and outcomes, including in online contexts [4,[6][7][8][9][10][11][12]. However, some learners may struggle with their motivation when learning online. ...
... For example, feeling isolated from one's peers and navigating the metacognitive challenges associated with online instruction may negatively affect some learners' motivation [13,14]. Therefore, educators must build motivational support into online instruction via a process of motivational design [4]. Studies have demonstrated that even small changes to instruction can have a meaningful impact on learners' motivation, including in online contexts [15][16][17]. ...
Preprint
BACKGROUND Educators’ choices about how to design online instruction can influence learners’ motivation. To optimize learners’ motivation, educators must be capable of using effective motivational design strategies that target a breadth of motivational constructs (e.g., interest, confidence). OBJECTIVE This systematic review and directed content analysis aimed to catalogue the motivational constructs that researchers have targeted in their experimental comparison studies of motivational design strategies for online instruction in health professions education. The authors sought to identify motivational constructs that have received attention in the field and those that are presently understudied, and thus should be the focus of future studies. METHODS Medline, Embase, Emcare, PsychINFO, ERIC, and Web of Science were searched from 1990 to August 2022. Studies were included if they compared online instructional design strategies intending to support a motivational construct (e.g., interest) or motivation in general, among learners in licensed health professions. Two team members independently screened and coded studies regarding the motivational theories that researchers used to inform their studies and the motivational constructs they targeted with their design strategies. RESULTS From 10,584 records, 46 studies were included. Researchers tested motivational design strategies intended to make instruction more interesting, enjoyable, and fun (n = 23) far more than they tested motivational designs intended to support extrinsic value (n = 9), confidence (n = 6), social connectedness (n = 4), or autonomy (n = 2). A focus on intrinsic value beliefs appeared to be driven by studies that were not informed by any theory of motivation. CONCLUSIONS Researchers in health professions education have primarily focused on motivating learners by making online instruction more interesting, enjoyable, and fun. We recommend that future research investigate motivational design strategies targeting other high-yield motivational constructs such as purpose, confidence, and autonomy. Such research would help to generate a broader tool-kit of strategies for educators to draw on to support learners’ motivation in online settings. CLINICALTRIAL PROSPERO registration number: #CRD42022359521
... Motivation has been attributed to result in 'meaningful learning' (Maslow, 1943;Keller, 1979;Gagne, 1985;Biggs, 1991;Bandura, 1994 andCook et al., 2009). Even though motivation is considered as an individual trait, it has been one of important criteria being considered by educators when planning their lessons. ...
... From this finding also, it can be summarized that students were highly motivated by the course and the course trainer as their high motivation level was sufficient to show that learning had taken place. Hence, it can be concluded also that this is in conjunction with the earlier findings that motivation has been attributed to result in meaningful learning, as reported by Maslow (1943), Keller (1979), Gagne (1985, Biggs (1991), Bandura (1994) and Cook et al. (2009). Moreover, it can also be justified that this study reports the same finding with Keller (2010) and Molaee & Dortaj (2015) that motivation is a key factor in learning. ...
... The first conclusion that can be made from this study is that all respondents generally had high level of motivation when they underwent the Maritime English course during the period of Movement Control Order (MCO) from March to June 2020. In conjunction with this finding also, all respondents were therefore expected not to face any difficulty in passing the subject under the new norm of online class facilitation since their high motivation shall lead to high academic performance in terms of meaningful learning (Maslow, 1943;Bandura, 1994;Gagne 1985;Biggs, 1991;Keller, 1979;and Cook et al., 2009). This also proves that motivation is also affected by lesson delivery (pedagogy) and the trainer's initiatives in the overall conduct of the teaching, learning and assessment process (TLAs) as demonstrated in the experiment, as supported by findings from dimensions of Attention, Confidence and Satisfaction. ...
Article
Full-text available
The global pandemic of Covid-19 has affected the teaching and learning of the STCW courses (Standards of Training, Certification and Watch-keeping for Seafarers) which witnessed the drastic move from normal face-to-face facilitation to full online and distance learning (ODL). This new paradigm shift has resulted in significant changes as well as immense challenges to students who experienced this crisis for the first time. Hence, the study aims to discover students’ motivation level in adapting to the new environment of online learning as experienced by semester 2 students in Maritime English classes via quantitative study adopted Keller’s ARCS Model of Motivation survey administered on 78 respondents. The data collected were analysed and the results showed high level of students’ motivation despite having to undergo challenges in online distance learning during the MCO. Moreover, the four elements of the ARCS Model tested in the experiment indicated very high scores in students’ engagement, confidence, motivation, and satisfaction. This preliminary study has helped to provide a new perspective on online learning as well as students’ motivation to the maritime education and training institutions. Hence, it is hoped that the findings could help them to make continuous quality improvement in pedagogical, technological adaptation and assessment aspects for the benefit of students and stakeholders of the maritime industry.
... The study evaluates the effectiveness of traditional teaching versus game-based learning via a controlled case study. Keller's Instructional Materials Motivational Survey [Cook et al., 2009] is utilized to evaluate student motivation, intending to enhance technology integration in classrooms and cultural learning. Furthering this endeavor is the game Seres do Folclore Brasileiro [Domingues, 2022], which aims to facilitate elementary history education by encouraging the discovery and identification of Brazilian folklore characters hidden within the game's scenario. ...
Article
Full-text available
This paper presents “Caturama VR,” an English-localized VR game that immerses players in Brazilian folklore, portraying the adventures of Caturama, a young indigenous hero. It features a VR-optimized interface, dynamic inventory management, visually captivating special effects, and first-person animated hands for deep narrative interaction within a mystical environment. Employing advanced 3D graphics, the game enriches the educational exploration of folklore, aiming to cultivate an appreciation for indigenous cultural heritage. Rigorous pre-user VR testing, conducted in-house by the development team acting as players, has significantly boosted interaction quality, gameplay balance, immersion, presence, and overall performance, laying a solid foundation for forthcoming advancements and real-user UX evaluations.
... [8][9] The IMMS five-point Likert-type scale has been validated with a range of student populations as an assessment measure of motivation with the goal of enhancing the effectiveness of learning activities. [11][12][13] The second component was comprised of qualitative evaluation questions used to measure participant's perceptions of the Beyond Milestones resource and recommendations for use or modification. These questions were developed by the researchers and tested on a small representative sample of allied health professionals. ...
Article
Purpose: The study aimed to evaluate the observed impact of Beyond Milestones online education resource in developing allied health professionals’ knowledge of normal child development and skills in observational assessment. It also aimed to identify the usefulness of the resource and necessary modifications for potential application with allied health professionals working for New South Wales (NSW) Health. While the effectiveness of the resource with medical clinicians has been demonstrated, no evidence was identified regarding the usefulness with allied health professionals. Methods: The study used a crossover repeated measures design to determine the observed impact of Beyond Milestones on developing the knowledge and skills of participating allied health professionals. Quantitative data was analysed manually. Mean differences between the study groups were compared using independent sample t-tests. A significance level of pResults: A total of 30 participants representing Dietetics, Occupational Therapy, Physiotherapy, Psychology, Speech Pathology and Social Work completed all components of the study. Quantitative results indicated that Beyond Milestones was an online resource that provided an effective learning opportunity. The qualitative evaluation identified perceived improvements to the Beyond Milestones online module. Overall, forty one percent (n=12) of respondents identified that Beyond Milestones is adequate as a standalone online resource with 59% (n=17) identifying that there would be benefit to it being part of a broader education program. Conclusions: Although allied health participants demonstrated a significant improvement in performance on allocated observational assessment tasks, this was not attributable to completion of the Beyond Milestones teaching modules. Despite this, study participants perceived the online resource to be an effective learning opportunity. The recommendations regarding modification of the Beyond Milestones resource require consideration and evaluation prior to broader application. Further research regarding the usefulness of this model of educational practice is warranted.
... The study evaluates the effectiveness of traditional teaching versus game-based learning via a controlled case study. Keller's Instructional Materials Motivational Survey [Cook et al. 2009] is utilized to evaluate student motivation, intending to enhance technology integration in classrooms and cultural learning. Furthering this endeavour is the game "Seres do Folclore Brasileiro" [Domingues 2022], which aims to facilitate elementary history education by encouraging the discovery and identification of Brazilian folklore characters hidden within the game's scenario. ...
Conference Paper
This paper presents “Caturama,” a serious game prototype crafted for educators, students and proponents of cultural heritage preservation. Designed to immerse players in the rich folklore of the Brazilian Amazon’s indigenous tribes, its conception was guided by twin goals: authenticity in visual and narrative representation and engaging user experience. Drawing inspiration from three iconic legends of the region, Caturama seeks to bridge the gap between the modern world and ancient traditions. Detailed within are the design principles, developmental methodologies, and user feedback analytics. With a favourable Net Promoter Score of +70, it underscores the game’s appeal and its potential as an educational medium for fostering cultural understanding and preservation.
Article
Full-text available
Background: The motivational design of online instruction is critical in influencing learners’ motivation. Given the multifaceted and situated nature of motivation, educators need access to a range of evidence-based motivational design strategies that target different motivational constructs (eg, interest or confidence). Objective: This systematic review and directed content analysis aimed to catalog the motivational constructs targeted in experimental studies of online motivational design strategies in health professions education. Identifying which motivational constructs have been most frequently targeted by design strategies—and which remain under-studied—can offer valuable insights into potential areas for future research. Methods: Medline, Embase, Emcare, PsycINFO, ERIC, and Web of Science were searched from 1990 to August 2022. Studies were included if they compared online instructional design strategies intending to support a motivational construct (eg, interest) or motivation in general among learners in licensed health professions. Two team members independently screened and coded the studies, focusing on the motivational theories that researchers used and the motivational constructs targeted by their design strategies. Motivational constructs were coded into the following categories: intrinsic value beliefs, extrinsic value beliefs, competence and control beliefs, social connectedness, autonomy, and goals. Results: From 10,584 records, 46 studies were included. Half of the studies (n=23) tested strategies aimed at making instruction more interesting, enjoyable, and fun (n=23), while fewer studies tested strategies aimed at influencing extrinsic value beliefs (n=9), competence and control beliefs (n=6), social connectedness (n=4), or autonomy (n=2). A focus on intrinsic value beliefs was particularly evident in studies not informed by a theory of motivation. Conclusions: Most research in health professions education has focused on motivating learners by making online instruction more interesting, enjoyable, and fun. We recommend that future research expand this focus to include other motivational constructs, such as relevance, confidence, and autonomy. Investigating design strategies that influence these constructs would help generate a broader toolkit of strategies for educators to support learners’ motivation in online settings. Trial Registration: PROSPERO CRD42022359521; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022359521 JMIR Med Educ 2025;11:e64179 doi:10.2196/64179
Article
Background Pediatric airway diseases are associated with complex challenges because of smaller and more dynamic airway structures in children. These conditions, should be immediately and precisely recognized to prevent life-threatening obstructions and long-term respiratory complications. Recently, virtual reality (VR) has emerged as an innovative approach to clinical medical education. To evaluate and compare the effectiveness of VR-based education and traditional lectures in enhancing knowledge retention, clinical reasoning, and motivation among senior respiratory therapy students. Methods This study was conducted between November 2020 and September 2022, involving 54 students from a School of Respiratory Therapy, with 43 completing a pretest and undergoing random assignment into either a VR or a traditional education (non-VR) group. Samsung Gear VR Oculus headsets were used by the VR group for instructions on conditions such as laryngeal malacia, subglottic stenosis, and tracheomalacia. Theoretical exams, objective structured clinical examinations (OSCE), and instructional material motivation survey (IMMS) were used to evaluate participants’ knowledge retention, clinical reasoning, and application capabilities, followed by a statistical analysis comparing both study groups. Results No significant differences in pretest scores were observed between the two groups. However, the VR group outperformed the non-VR group in OSCE scores significantly (15 ± 3 vs 10 ± 3, p = 0.003) and demonstrated greater learning motivation and satisfaction based on IMMS scores. No notable difference in immediate posteducation theoretical examination scores was observed between the groups. Conclusion VR can effectively serve as a supplemental educational tool in clinical training programs for pediatric airway disease. To optimize its implementation in medical educational settings, further research with larger cohorts and longer follow-up periods is needed.
Chapter
Cerebral magnetic resonance angiography (MRA) is an economical and minimally invasive imaging method utilised to diagnose various neurological disorders. Nevertheless, MRA images are two-dimensional and require a solid grasp of cerebral vascular anatomy for interpretation. In modern medical education, technology-enhanced learning approaches, such as interactive applications and digital 3D models, have been embraced to improve learning outcomes. However, in Indonesia, conventional 2D illustrations in textbooks, lecture slides and anatomical specimens remain prevalent in the learning process. Recognising this disparity, the research presented here is dedicated to developing a web-based interactive 3D application for learning about cerebral MRA and exploring its potential as a supplementary learning tool for Indonesian medical students. Patient MRA data were digitally reconstructed into 3D models depicting normal cerebral vascularization, which were subsequently refined and integrated into a game engine platform to produce eight interactive 3D anatomy models. These models were then combined with relevant medical information and made available on itch.io as a web-based application to ensure broad accessibility. The application’s evaluation was completed through an online survey aimed at Indonesian medical professionals. Responses from 23 participants indicated a high usability rating (average SUS score = 72), positive comments on its usefulness as a learning aid and a significant learning motivation among the participants. While further enhancements are necessary, this research possesses the potential to serve as a valuable supplementary learning approach for integration into Indonesian medical education. Furthermore, it represents an initial step in fostering awareness towards the implementation of technology-enhanced education in the medical field in Southeast Asian countries.
Article
Purpose The Instructional Materials Motivation Survey (IMMS) was developed to measure motivational characteristics of a learning activity, building on Keller's Attention, Relevance, Confidence, Satisfaction (ARCS) motivation model. We aimed to validate IMMS scores using validity evidence of internal structure and relations with other variables. Methods Participants were internal medicine and family medicine residents who completed the IMMS following an online module on outpatient medicine, from 2005 to 2009. We used confirmatory factor analysis (CFA) to examine model fit using half the data (split-sample approach). Finding suboptimal fit, we conducted exploratory factor analysis (EFA) and developed a revised instrument. We evaluated this instrument with CFA using the remaining data. Associations were evaluated between IMMS scores and knowledge and other measures of motivation (Motivated Strategies for Learning Questionnaire, MSLQ). All analyses accounted for repeated measures on subjects. Results There were 242 participants. Although internal consistency reliabilities were good (Cronbach alpha ≥0.70), CFA of the original 36-item, 4-domain instrument revealed poor model fit for data sample 1. EFA found that reverse-scored items clustered strongly together. Further EFA using data sample 1, followed by CFA using data sample 2, found good fit for a 13-item, 4-domain model that omitted reverse-scored items (standardized root mean square residual 0.045, root mean square error of approximation 0.066, comparative fit index 0.96). Linear regression confirmed positive, statistically significant associations for most hypothesized relationships, including IMMS total with knowledge (r=0.19) and MSLQ total (r=0.53; both p<.001). Examination of reverse-scored items suggested participant inattention but not acquiescence. Conclusions IMMS scores show good reliability and relations with other variables. However, the hypothesized and empirical factor structures do not align, and reverse-scored items show particularly poor fit. A 13-item, 4-domain scale omitting reverse-scored items showed good model fit.
Article
Purpose: To validate the Motivated Strategies for Learning Questionnaire (MSLQ), which measures learner motivations; and the Instructional Materials Motivation Survey (IMMS), which measures the motivational properties of educational activities. Methods: Participants (333 pharmacists, physicians, and advanced practice providers) completed the MSLQ, IMMS, Congruence-Personalization Questionnaire (CPQ), and a knowledge test immediately following an online learning module (April 2021). We randomly divided data for split-sample analysis using confirmatory factor analysis (CFA), exploratory factor analysis (EFA), and the multitrait-multimethod matrix. Results: Cronbach alpha was ≥0.70 for most domains. CFA using sample 1 demonstrated suboptimal fit for both instruments, including 3 negatively-worded IMMS items with particularly low loadings. Revised IMMS (RIMMS) scores (which omit negatively-worded items) demonstrated better fit. Guided by EFA, we identified a novel 3-domain, 11-item 'MSLQ-Short Form-Revised' (MSLQ-SFR, with domains: Interest, Self-efficacy, and Attribution) and the 4-domain, 12-item RIMMS as the best models. CFA using sample 2 confirmed good fit. Correlations among MSLQ-SFR, RIMMS, and CPQ scores aligned with predictions; correlations with knowledge scores were small. Conclusions: Original MSLQ and IMMS scores show poor model fit, with negatively-worded items notably divergent. Revised, shorter models-the MSLQ-SFR and RIMMS-show satisfactory model fit (internal structure) and relations with other variables.
Article
Full-text available
"This paper advocates a validational process utilizing a matrix of intercorrelations among tests representing at least two traits, each measured by at least two methods. Measures of the same trait should correlate higher with each other than they do with measures of different traits involving separate methods. Ideally, these validity values should also be higher than the correlations among different traits measure by the same method." Examples from the literature are described as well as problems in the application of the technique. 36 refs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Article
The purpose of this design experiment was to positively affect motivation, performance, and self-directed learning of undergraduate students enrolled in a tuition-free, public military school. A second purpose was to use new technologies to efficiently deliver these instructional strategies as supplementary course content. This empirical study was conducted during one semester with 784 students, representing approximately 20 percent of the population at the academy. The within-subjects research design used a mixed method approach involving quantitative and qualitative data. Four surveys were used to measure motivation and self-directed learning: (1) the Course Interest Survey developed by Keller; (2) the Instructional Materials Motivation Survey developed by Keller; (3) the Self-Directed Learning Readiness Scale developed by Guglielmino, and; 4) the Self-Directed Learning survey. Students in 48 participating sections were randomly divided into control and experimental groups for each of 16 instructors. Within these courses, students in each section had identical syllabi and classroom-based content. The researcher communicated with control and experimental group students via email, and used email to direct experimental group students to the technology-mediated instructional strategies (TMIS). Strategies were designed using Keller’s ARCS model of motivation and delivered via Personal Digital Assistant (PDA), web, CD-ROM, and other technologies. For students in the experimental group, web-based post-strategy SDL surveys were administered throughout the semester, tracking participation, perceptions, and self-directed learning. To provide for a richer study, qualitative data were collected via open-ended questions on the SDL survey and via threaded discussions on web forums. Follow-up interviews also helped triangulate the data. Those students who accessed the TMIS had significantly higher levels of academic performance than control group students. There were also significant differences in motivation and proclivity to be self-directed learners, with higher levels for treatment group students than control group students. These findings suggest that systematically designed technology-mediated instructional strategies can positively effect motivation, performance, and self-directed learning. Further, new technologies such as the PDA can help improve the efficiency of delivering such strategies. Suggestions for future empirical research are presented.
Article
Until now, the matching of teaching processes to cognitive aspects of learning has been in the foreground of discussions in the field of computer-assisted instruction (CAI). There has been little effort to match teaching processes to the motivational dynamics of the learners. This study will attempt to show how theories and empirical findings of research on motivation can be integrated in a formal model in order to describe and predict motivation within the framework of motivationally adaptive computer-assisted instruction. This article begins with a discussion of problems in CAI and the reasons for these problems. The middle section of this article contains the theoretical basis for the study, which includes the components of a formal model to be implemented as a computer simulation. This article concludes with an example of how computer simulation can represent and predict motivational processes in instructional situations.
Article
This paper describes a preliminary validation study of the Instructional Material Motivational Survey (IMMS) derived from the Attention, Relevance, Confidence and Satisfaction motivational design model. Previous studies related to the IMMS, however, suggest its practical application for motivational evaluation in various instructional settings without the support of empirical data. Moreover, there is a lack of discussion regarding the validity of the instrument. Therefore, this study empirically examined the IMMS as a motivational evaluation instrument. A computer-based tutorial setting was selected owing to its wide application in teaching large entry-level college courses. Data collected from 875 subjects were subjected to exploratory and confirmatory factor analyses, and measurement modelling LISREL. Findings suggested that 16 original items should be excluded from the IMMS; the retained 20 items were found to fall into different constructs, indicating that instructional features of the tutorial may influence the validity of the survey items. The implication of the study supports the situational  feature  of  the  IMMS.  Therefore,  a  preevaluation  adjustment  on the IMMS items is recommended to identify suitable items before the full motivational evaluation. Future research should focus on the further validation of the IMMS based on this preliminary evidence.
Article
The ARCS Model of motivation was developed in response to a desire to find more effective ways of understanding the major influences on the motivation to learn, and for systematic ways of identifying and solving problems with learning motivation. The resulting model contains a four category synthesis of variables that encompasses most of the areas of research on human motivation, and a motivational design process that is compatible with typical instructional design models. Following its development, the ARCS Model was field tested in two inservice teacher education programs. Based on the results of these field tests, the ARCS Model appears to provide useful assistance to designers and teachers, and warrants more controlled studies of its critical attributes and areas of effectiveness.