Testing the ATI hypothesis: Should multimedia instruction
accommodate verbalizer-visualizer cognitive style?
Laura J. Massa, Richard E. Mayer
Department of Psychology, University of California, Santa Barbara, CA 93106, United States
Received 9 December 2005; accepted 27 September 2006
Abstract
College students (Experiment 1) and non-college adults (Experiment 2) studied a computer-based 31-frame lesson on
electronics that offered help-screens containing text (text group) or illustrations (pictorial group), and then took a learning test.
Participants also took a battery of 14 cognitive measures related to the verbalizer-visualizer dimension including tests of cognitive
style, learning preference, spatial ability, and general achievement. In Experiment 3, college students received either both kinds of
help-screens or none. Verbalizers and visualizers did not differ on the learning test, and almost all of the verbalizer-visualizer
measures failed to produce significant attribute × treatment interactions (ATIs). There was not strong support for the hypothesis that
verbal learners and visual learners should be given different kinds of multimedia instruction.
© 2006 Elsevier Inc. All rights reserved.
Keywords: cognitive style; learning preference; spatial ability; multimedia learning; e-learning
Some people (who could be called visualizers) learn better with visual methods of instruction, whereas other people
(who could be called verbalizers) learn better with verbal methods of instruction. This idea, deeply engrained in the folklore of educational practice, is one aspect of what can be called the attribute-treatment interaction (ATI)
hypothesis. In the case of verbalizer-visualizer differences, the ATI hypothesis predicts that visualizers will perform
best on tests of learning when they receive visual rather than verbal methods of instruction, whereas verbalizers will
perform best on tests of learning when they receive verbal rather than visual methods of instruction.
In spite of the widespread popularity of the ATI hypothesis among educators, the search for research-based ATIs
over the last 25 years has had a somewhat disappointing history (Cronbach, 2002; Cronbach & Snow, 1977; Sternberg
& Zhang, 2001). For example, Biggs (2001, p. 80) observed: "Significant disordinal interactions of this kind [ATIs] are rare, and providing for them is expensive if not impractical where more than one aptitude is addressed." In reviewing research on ATIs involving cognitive styles, Cronbach and Snow (1977) concluded: "The research has generated hypotheses but no firm conclusions" (p. 389). A quarter century later, the empirical research on ATIs still contains few consistent effects: "the results on any one (ATI) hypothesis are often inconsistent" (Cronbach, 2002, p. 21).
This research was supported by Office of Naval Research Grant N00014-01-1-1039. Qian Su produced the computer-based lessons. For more
information about this paper, please contact Laura J. Massa at massa@psych.ucsb.edu or Richard E. Mayer at mayer@psych.ucsb.edu.
Corresponding author. Tel.: +1 805 893 2472; fax: +1 805 893 4303.
E-mail address: mayer@psych.ucsb.edu (R.E. Mayer).
doi:10.1016/j.lindif.2006.10.001
The purpose of the present study is to carefully examine one aspect of the ATI hypothesis, using 14 different
measures of the verbalizer-visualizer dimension, and an on-line science lesson that offers help screens in the form of
printed text (text group) or illustrations (pictorial group). Previous work (Mayer & Massa, 2003) has identified three
facets of the verbalizer-visualizer dimension: cognitive ability (i.e., proficiency in creating, holding, and manipulating
spatial representations), cognitive style (i.e., tendency to use visual or verbal modes of knowledge representation and
thinking), and learning preference (i.e., preference for receiving instruction involving pictures or words). In the present
study, we examine whether students who score high on spatial ability, visual cognitive style, or visual learning
preference learn better from a multimedia lesson containing pictorial help screens, whereas those scoring high on
verbal ability, verbal cognitive style, or verbal learning preference learn better with text help screens. We also include
several tests of general achievement related to mathematical and verbal achievement.
1. Experiment 1
Experiment 1 was conducted to determine whether visual learners learn better from multimedia instruction that offers help
screens using pictures whereas verbal learners learn better from multimedia instruction that offers help screens using words.
1.1. Method
1.1.1. Participants and design
The participants were 52 college students recruited from the Psychology Subject Pool at the University of
California, Santa Barbara, with 26 students serving in the pictorial group and 26 in the text group. The mean age was
18.00 years (S.D.= 1.04); the percentage of men was 44.20 (n=23) and the percentage of women was 55.80 (n=29);
and the mean combined SAT score was 1180 (S.D. = 144).
1.1.2. Materials and apparatus
The individual differences materials consisted of 11 instruments measuring cognitive style, learning preference, or
spatial ability in which high scores denote visualizers and low scores denote verbalizers, as well as three additional
measures of general achievement. The instruments were categorized based on a previously conducted factor analysis
(Mayer & Massa, 2003), and are summarized in Table 1.
Four measures assessed verbalizer-visualizer cognitive style: the 15-item Verbalizer-Visualizer Questionnaire
(VVQ) developed by Richardson (1977) in which students rated their agreement to statements such as, "I prefer to read instructions about how to do something rather than have someone show me," along a 7-point scale; a 6-item Santa
Barbara Learning Style Questionnaire intended to tap the same factor as the VVQ but with fewer questions (Mayer &
Massa, 2003); a 5-item Learning Scenario Questionnaire that asked about preferences in five learning situations based
on brief text descriptions, such as whether you would rather read a paragraph or see a diagram describing an atom
(Mayer & Massa, 2003); and a 1-item Visual-Verbal Learning Style Rating in which students are asked to rate on a 7-
point scale the degree to which they are more verbal than visual or more visual than verbal (Mayer & Massa, 2003). In
addition, we included another measure intended to assess cognitive style that did not load onto the same factor as any of
the other tests in previous work (Mayer & Massa, 2003): the verbal-imager subtest of the Cognitive Styles Analysis
(CSA) developed by Riding (1991) in which students press "true" or "false" buttons in response to statements on a computer screen such as, "COAL and SNOW are the same COLOR."
Three instruments, all original, assessed learning preference in the context of authentic multimedia training tasks.
First, there are two scales of a 5-item Multimedia Learning Preference Test which consisted of five text frames explaining
the process of lightning formation presented via computer screen so that the learner can click on help buttons that offer an
annotated graphic (i.e., pictorial help) or a glossary that defines selected terms (i.e., verbal help); the choice scale was based
on the number of times the learner selected the visual help first, and the preference scale was based on the number of times
the learner reported that the visual help was most useful when asked subsequently. Finally, a 5-item Multimedia Learning
Preference Questionnaire is a paper version of the preference scale of the Multimedia Learning Preference Test with a
seven-point response scale ranging from strongly prefer verbal help to strongly prefer visual help for each item.
Three measures assessed a specific cognitive ability, namely spatial ability: a 3-minute version of the Card Rotations
Test from the Kit of Factor-Referenced Cognitive Tests (Ekstrom, French, & Harman, 1976), a 3-minute version of the
Paper Folding Test from the Kit of Factor-Referenced Cognitive Tests (Ekstrom et al., 1976), and a 2-item Verbal-Spatial Ability Rating in which students were asked to rate their verbal ability and spatial ability on 5-point scales (Mayer & Massa, 2003).
Table 1
Fourteen individual difference measures
General achievement measures
SAT-Math
1-item Questionnaire Educational testing service
Task: Write SAT-Math score on questionnaire.
Score: Self-reported score from mathematics scale of the SAT (200 to 800).
SAT-Verbal
1-item Questionnaire Educational testing service
Task: Write SAT-Verbal score on questionnaire.
Score: Self-reported score from the verbal scale of the SAT (200 to 800).
Vocabulary Test
18-item timed test Adapted from Barron's Educational Series (2001)
Task: Given a target word such as "gritty," select a synonym from a list of five words.
Score: Number correct minus one-fifth number incorrect in 3 min (0 to 18).
Spatial ability measures
Card Rotations Test
80-item Timed test Ekstrom et al. (1976)
Task: Determine whether a shape is a rotated image of a target shape.
Score: Number correct minus number incorrect in 3 min (0 to 80).
Paper Folding Test
10-item Timed test Ekstrom et al. (1976)
Task: Imagine folding a sheet of paper, punching holes, and opening it. Select pattern from 5 choices.
Score: Number correct minus one-fifth number incorrect in 3 min (0 to 10).
Verbal-Spatial Ability Rating
2-item Questionnaire Original
Task: Rate level of spatial ability on 5-point scale and verbal ability on 5-point scale.
Score: Self-rating of spatial ability minus self-rating of verbal ability (−4 to +4).
Learning preference measures
Multimedia Learning Preference Test-Choice
5-item Computer-based Behavior Original
Task: Choose visual or verbal help in a 5-frame on-line multimedia lesson.
Score: Number of frames in which visual help was chosen first (0 to 5).
Multimedia Learning Preference Test-Rating
5-item Computer-based Rating Original
Task: Rate preference for visual or verbal help in a 5-frame on-line multimedia lesson.
Score: Number of frames in which visual help was rated higher (0 to 5).
Multimedia Learning Preference Questionnaire
5-item Questionnaire Original
Task: Rate preference for visual or verbal help in a 5-frame paper-based multimedia lesson.
Score: Number of frames in which visual help was rated higher (0 to 5).
Cognitive style measures
Verbalizer-Visualizer Questionnaire
15-item Questionnaire Richardson (1977)
Task: Rate agreement with statements about verbal and visual modes of thinking on 7-point scale. [Original VVQ had true-false format rather than 7-point scale.]
Score: Weight of pro-visual ratings minus weight of pro-verbal ratings (−45 to +45). [3 for strongly agree/disagree, 2 for moderately agree/disagree, 1 for slightly agree/disagree.]
Three measures were intended to assess general achievement (or general cognitive ability): Self-reported score on
the SAT-Verbal, self-reported score on the SAT-Math, and an 18-item Vocabulary Test adapted from the vocabulary
scale of the Armed Services Vocational Aptitude Battery. Although not intended to directly measure the verbalizer-
visualizer dimension, these general achievement tests may tap related skills.
The instructional materials consisted of two on-line programs on basic electronics, a definition test sheet, a reasoning
test sheet, and five problem-solving test sheets. The program, created using Visual Basic, consisted of 31 frames divided
into four sections: atomic structure, electron flow, electrical circuits, and electric motors. Each frame contained 120 to
250 words, with 2 to 7 key words indicated in blue color. If a participant clicked on a key word, a text definition appeared
on the screen (for participants in the text group) or an illustration appeared on the screen (for participants in the pictorial
group). By clicking on the "RETURN" key the participant could return to the instructional frame. Participants were told
that they could click on as many key words as they liked and for as many times as they liked. Participants could click on
the "NEXT" button to go to the next frame and the "BACK" button to go to the previous frame.
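The branching logic of the two lesson versions can be sketched in a few lines of code. This is a minimal illustration only, not the authors' Visual Basic program; the key word, definition text, and image path shown here are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class HelpContent:
    text_definition: str     # shown to the text group
    illustration_path: str   # shown to the pictorial group

# Hypothetical help content for one highlighted key word.
HELP = {
    "free electron": HelpContent(
        text_definition="An electron in an atom's outer shell that can move between atoms.",
        illustration_path="images/free_electron.png",
    ),
}

def show_help(keyword: str, condition: str) -> str:
    """Return what a learner sees after clicking a highlighted key word."""
    item = HELP[keyword]
    if condition == "text":
        return item.text_definition                               # text group: printed definition
    return f"[display illustration: {item.illustration_path}]"    # pictorial group

print(show_help("free electron", "pictorial"))
```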
The definition test sheet asked the participant to write brief definitions for six terms that had been defined in the lesson:
aluminum atom, open circuit, free electron, conventional current flow, ampere, and Ohm's law. The reasoning test sheet
presented four multiple-choice questions such as: "In an electrical circuit with one battery and one resistor, the rate of flow of the current is 2 amps. What happens to the rate of flow of the current if you add a second battery in series? ___ decreases to less than 2 amps, ___ stays the same at 2 amps, ___ increases to more than 2 amps, ___ can't tell." The five problem-solving sheets contained the following questions, respectively: (a) "Why are some materials (such as copper) better than others (such as rubber) for conducting electricity?", (b) "Describe what happens inside a wire when electricity flows through it.", (c) "How does a battery work?", (d) "Suppose you turn on an electric motor but the wire loop does not rotate. What could be wrong?", (e) "In an electric motor how could you get the wire loop to rotate in the opposite direction?"
The apparatus for presenting the CSA, the Multimedia Learning Preference Test, and the on-line lesson consisted of
five Sony Vaio laptop computers with 15-inch color screens.
1.1.3. Procedure
Participants were randomly assigned to treatment group and tested in groups of 1 to 5 per session.
Each participant sat in an individual cubicle that contained a laptop computer. First, participants completed the Participant
Questionnaire (which solicited information concerning the participant's SAT-Verbal, SAT-Math, Verbal-Spatial Ability Rating, Verbal-Visual Style Rating) and an assessment of knowledge of electronics.
Table 1 (continued)
Cognitive style measures (continued)
Santa Barbara Learning Style Questionnaire
6-item Questionnaire Original
Task: Rate agreement with statements about verbal and visual modes of learning on 7-point scale.
Score: Weight of pro-visual ratings minus weight of pro-verbal ratings (−18 to +18). [3 for strongly agree/disagree, 2 for moderately agree/disagree, 1 for slightly agree/disagree.]
Verbal-Visual Learning Style Rating
1-item Questionnaire Original
Task: Rate preference for visual versus verbal learning on 7-point scale.
Score: Weight of rating, with "strongly more visual than verbal" counted as +3 and "strongly more verbal than visual" counted as −3 (−3 to +3).
Learning Scenario Questionnaire
5-item Questionnaire Original
Task: Choose preferred mode of learning for descriptions of 5 learning tasks.
Score: Number of tasks on which visual mode is preferred (0 to 5).
Cognitive Styles Analysis
40-item Computer-based Behavior Riding (1991)
Task: Respond to whether on-screen statements involving visual and verbal content are true or false.
Score: Based on the pattern of response times, the program assigns a score.
Note. The Cognitive Styles Analysis is not included in any of the four factors.
Second, participants studied the on-
line electronics lesson at their own rate (with an average of 40 min). Third, participants moved to a cubicle in an adjoining
room, where they completed the definition sheet (with a 6 min time limit), and the reasoning sheet and the 5 problem-
solving sheets (with 3 min/sheet). The learning tests required approximately 25 min. Fourth, participants moved back to
their original cubicle and completed each of the remaining individual differences instruments in the following order: Santa
Barbara Learning Style Questionnaire, Learning Scenario Questionnaire, Card Rotations Test, Paper Folding Test,
Vocabulary Test, Verbalizer-Visualizer Questionnaire, Multimedia Learning Preference Test (Choice Scale and Preference
Scale), Cognitive Style Analysis, and Multimedia Learning Preference Questionnaire. The individual differences
instruments required approximately 40 min. Finally, participants were debriefed and thanked.
1.2. Results
1.2.1. Scoring
The 11 individual difference instruments and 3 general achievement measures were scored as described by Mayer
and Massa (2003). Knowledge of electronics prior to the lesson was determined by response to a 5-point scale (0 = no
knowledge, 4 = very knowledgeable) added to one point for each item of electrical knowledge or experience checked
on a list of 12 items. The prior knowledge score could range from 0 to 16. For the definition sheet, students received
one point for each correct definition, yielding a possible range of 0 to 6. For the reasoning sheet, students received
one point for each correct answer, yielding a possible range of 0 to 4. For each of the problem-solving sheets, we
produced a list of acceptable answers. Students received one point for each acceptable answer they produced, tallied
across all five problems, yielding a possible range of 0 to 20. A composite learning score was created by adding the
scores on the definition, reasoning, and problem-solving sheets. The total test score had a possible range from 0 to 30,
and this measure was used as the dependent variable in all subsequent analyses.
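As a concrete illustration of the scoring just described, the composite could be computed as follows; this is a sketch only, and the column names and sample values are hypothetical placeholders for per-participant data.

```python
import pandas as pd

# Hypothetical per-participant scores; column names are placeholders.
df = pd.DataFrame({
    "definition": [4, 2],            # 0-6: one point per correct definition
    "reasoning": [3, 1],             # 0-4: one point per correct multiple-choice answer
    "problem_solving": [8, 5],       # 0-20: acceptable answers tallied over five problems
    "self_rated_knowledge": [2, 0],  # 0-4 rating scale
    "experience_items": [3, 1],      # number checked from the 12-item list
})

# Composite learning score (0-30), the dependent variable in the analyses.
df["learning"] = df["definition"] + df["reasoning"] + df["problem_solving"]

# Prior-knowledge covariate (0-16): rating plus one point per checked item.
df["prior_knowledge"] = df["self_rated_knowledge"] + df["experience_items"]

print(df[["learning", "prior_knowledge"]])
```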
1.2.2. Do verbalizers and visualizers need different instructional methods?
Prior to testing the main analyses, knowledge of electronics prior to the lesson was examined as a possible covariate.
Knowledge of electronics prior to the lesson (M=4.75, S.D. =2.40) correlated with learning score (M=9.42, S.D.= 3.63),
r = 0.27, p = 0.05, and so was included as a covariate in the analyses of the main hypotheses.
Table 2
Experiment 1: descriptive statistics for learning test score
Measure, followed by n, M, and S.D. for four groups in this order: Pictorial condition/high score (visualizer), Pictorial condition/low score (verbalizer), Text condition/high score (visualizer), Text condition/low score (verbalizer).
General achievement factor 14 12.79 2.16 10 8.50 3.41 8 10.00 2.33 12 7.08 2.68
SAT-Math 12 11.25 3.02 12 10.75 3.93 10 9.30 3.09 10 7.20 2.35
SAT-Verbal 14 11.57 3.25 10 10.20 3.71 6 9.33 1.37 14 7.79 3.26
Vocabulary Test 14 11.93 3.15 12 9.58 3.34 10 9.80 3.55 16 6.87 2.73
Spatial ability factor 13 12.00 3.65 13 9.69 2.78 8 9.13 3.60 18 7.50 3.19
Card Rotations Test 16 11.69 3.57 10 9.50 2.72 10 8.20 3.79 16 7.88 3.14
Paper Folding Test 15 11.80 3.43 11 9.55 3.01 10 8.60 3.53 16 7.62 3.26
Verbal-Spatial Ability Rating 7 13.86 1.95 19 9.74 3.14 5 7.40 2.70 21 8.14 3.51
Learning preference factor 7 11.14 2.41 19 10.74 3.74 9 7.56 2.54 17 8.24 3.67
Multimedia Learning Preference Test-Choice 15 11.07 2.69 11 10.55 4.30 9 7.56 2.46 17 8.24 3.77
Multimedia Learning Preference Test-Rating 11 10.36 3.14 15 11.20 3.63 10 7.20 2.82 16 8.50 3.61
Multimedia Learning Preference Questionnaire 12 11.00 2.49 14 10.71 4.10 13 7.62 2.63 13 8.38 3.99
Cognitive style factor 9 10.89 3.44 17 10.82 3.47 7 7.43 3.21 19 8.21 3.44
Verbalizer-Visualizer Questionnaire 14 11.50 3.48 12 10.08 3.26 11 7.18 3.31 15 8.60 3.33
Santa Barbara Learning Style Questionnaire 12 10.33 2.99 14 11.29 3.75 13 8.31 2.87 13 7.69 3.84
Verbal-Visual Learning Style Rating 14 10.07 3.60 12 11.75 3.02 11 8.09 3.08 15 7.93 3.61
Learning Scenario Questionnaire 11 11.82 2.32 15 10.13 3.93 8 7.75 2.60 18 8.11 3.68
Cognitive Styles Analysis 6 9.67 4.23 16 11.56 3.22 14 7.86 3.42 11 8.36 3.47
Note. The Cognitive Styles Analysis is not included in any of the four factors.
In order to analyze the data, we created composite measures of general achievement, spatial ability, cognitive style, and learning preference by adding
together standard scores for the instruments comprising each composite measure and creating two levels of the attribute by
median split. A 2× 2 analysis of covariance was conducted on each of the four composite measures with attribute
(visualizer versus verbalizer) and treatment group (pictorial versus text) as the between subject factors, and learning test
score as the dependent measure. No significant interactions were found between attribute and treatment for any of the four
composite measures. To further analyze the data a 2× 2 analysis of covariance was conducted on each of the 14 individual
measures. We again created the two attribute levels by a median split. Table 2 summarizes the mean learning score (and
standard deviation) for visualizers and verbalizers in each treatment condition for each of the 14 individual difference
measures and 4 composite measures. Table 3 summarizes the ANCOVA information for the attribute× treatment
interaction for each of the 14 individual difference measures and 4 composite measures. Only one of the 14 individual
difference measures interacted significantly (at p < .05) with the treatment: Verbal-Spatial Ability Rating, in which
visualizers benefited more from the pictorial treatment than did verbalizers. Overall, these results do not provide strong
evidence that different instructional methods are required for visualizers and verbalizers.
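A minimal sketch of this kind of analysis in Python (using pandas and statsmodels) is shown below; the variable and column names are hypothetical, and the original analyses were presumably run in a standard statistics package rather than this way.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def ati_ancova(df: pd.DataFrame, attribute_col: str) -> pd.DataFrame:
    """2 x 2 ANCOVA: attribute (median split) x treatment, with prior knowledge as covariate."""
    data = df.copy()
    # Median split: high scorers labeled visualizers, low scorers verbalizers.
    data["attribute"] = (data[attribute_col] > data[attribute_col].median()).map(
        {True: "visualizer", False: "verbalizer"}
    )
    model = smf.ols(
        "learning ~ C(attribute) * C(treatment) + prior_knowledge", data=data
    ).fit()
    return anova_lm(model, typ=2)  # includes a row for the attribute x treatment interaction

# Example call, assuming df has columns 'learning', 'treatment', 'prior_knowledge',
# and, e.g., a composite 'spatial_ability' score:
# print(ati_ancova(df, "spatial_ability"))
```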
A possible criticism concerns the sample size. We addressed this issue by conducting a replication (Experiment 2),
which produced similar results, and by examining the effect size of each of the 18 interactions in Experiment 1. The
final column in Table 3 lists the value of eta squared, which indicates the proportion of total variance attributed to the
interaction effect. As you can see, none of the general achievement measures or the learning preference measures yielded interaction effects accounting for more than 2% of the variance. The eta squared values were also at or below the 2% range for most cognitive style measures, although one measure (Verbalizer-Visualizer Questionnaire) yielded an interaction effect that accounted for 5% of the variance in the predicted direction, whereas another (Verbal-Visual Learning Style Rating) accounted for 4% of the variance but in the opposite direction. Concerning spatial ability, most measures did not produce large eta squared values, but one measure (the Verbal-Spatial Ability Rating) produced an interaction effect in the predicted direction that accounted for 8% of the variance, the largest of all measures tested. This is also the only
measure to produce a statistically significant interaction. Overall, the ATI effect sizes were very small, thus supporting
our conclusions based on significance testing in the previous paragraph.
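For reference, the eta squared values in Table 3 can be reconstructed from the reported mean squares and degrees of freedom; the formula below assumes the partial form (effect variance relative to effect plus error variance), an assumption on our part that nonetheless reproduces the tabled values.

```latex
\eta^2 = \frac{SS_{\text{interaction}}}{SS_{\text{interaction}} + SS_{\text{error}}}
       = \frac{df_{\text{int}}\, MS_{\text{int}}}{df_{\text{int}}\, MS_{\text{int}} + df_{\text{err}}\, MS_{\text{err}}}
```

For example, for the Verbal-Spatial Ability Rating in Table 3: (1)(42.81) / [(1)(42.81) + (47)(9.83)] = 42.81 / 504.82 ≈ .08, matching the tabled value.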
In all of the 18 ANCOVAs, there was a treatment effect in which the pictorial group outperformed the text group. In
15 of the 18 ANCOVAs, there was no significant effect of attribute; there was a significant effect for the vocabulary test
[F(1, 47) = 8.60, MSE = 9.38, p < .01] in which high verbal ability learners (M = 11.04, S.D. = 3.42) outperformed low
verbal ability learners (M = 8.04, S.D. = 3.25), for the paper folding test [F(1, 47) = 4.63, MSE = 9.99, p = .04] in which
high spatial ability learners (M = 10.52, S.D. = 3.75) outperformed low spatial ability learners (M = 8.41, S.D. = 3.25), and for the composite general achievement measure [F(1, 47) = 19.15, MSE = 6.78, p < .01] in which high achievement learners (M = 11.77, S.D. = 2.56) outperformed low achievement learners (M = 7.73, S.D. = 3.04).
Table 3
Experiment 1: interaction F-values, learning test score as dependent variable
Measure, followed by df, MS interaction, MS error, F, p, and η².
General achievement factor 1, 39 4.70 6.78 0.69 .41 .02
SAT-Math 1, 39 8.12 9.99 0.81 .37 .02
SAT-Verbal 1, 39 0.01 9.92 0.00 .98 .00
Vocabulary Test 1, 47 1.19 9.38 0.13 .72 .00
Spatial ability factor 1, 47 24.03 9.98 2.41 .13 .05
Card Rotations Test 1, 47 17.10 10.43 1.64 .21 .03
Paper Folding Test 1, 47 6.33 9.99 0.63 .43 .01
Verbal-Spatial Ability Rating 1, 47 42.81 9.83 4.35 .04 .08
Learning preference factor 1, 47 2.91 10.96 .27 .61 .01
Multimedia Learning Preference Test-Choice 1, 47 0.03 11.05 0.00 .96 .00
Multimedia Learning Preference Test-Rating 1, 47 2.45 10.52 0.23 .63 .00
Multimedia Learning Preference Questionnaire 1, 47 4.31 10.99 0.39 .53 .01
Cognitive style factor 1, 47 0.37 11.02 0.03 .86 .00
Verbalizer-Visualizer Questionnaire 1, 47 27.56 10.52 2.62 .11 .05
Santa Barbara Learning Style Questionnaire 1, 47 12.72 10.76 1.18 .28 .00
Verbal-Visual Learning Style Rating 1, 47 19.41 10.19 1.90 .17 .04
Learning Scenario Questionnaire 1, 47 2.50 11.02 0.23 .64 .02
Cognitive Styles Analysis 1, 42 5.40 11.35 0.48 .49 .01
Note. The Cognitive Styles Analysis is not included in any of the four factors.
2. Experiment 2
Experiment 1 did not provide strong support for the ATI hypothesis, as reflected in the finding that all learners (i.e.,
both visualizers and verbalizers) benefited more from pictorial help than verbal help. Experiment 2 was conducted with
a different population, non-college educated adults, to determine if the findings generalized beyond the undergraduate
population.
2.1. Method
2.1.1. Participants and design
The participants were 61 non-college educated adults recruited from an employment agency, with 30 serving in the
pictorial group and 31 in the text group. The mean age was 24.62 years (S.D. = 8.47); the percentage of men was 39.34
(n = 24) and the percentage of women was 60.66 (n = 37). None of the participants had graduated from college. Of the
61 participants, 15 stated that the highest level of education they had received was high school, 5 stated they had
completed a technical school program, 40 had taken one or more courses at a junior college, and one participant stated
that the highest level of education completed was the 11th grade.
2.1.2. Materials and apparatus
We used the same materials and apparatus in Experiment 2 as we used in Experiment 1, with one exception. The
Participant Questionnaire was modified by removing a question asking for SAT scores, and including a question asking
participants to state the highest level of education they had completed. The materials included three additional tests
designed to distinguish spatial and imagery types of visualizers (Kozhevnikov, Hegarty, & Mayer, 2002), but we do not
report on them because of difficulties with scoring and reliability.
2.1.3. Procedure
The procedure was identical to that used for Experiment 1, except three additional tests not reported in this
analysis were placed at the end of the session.
2.2. Results
2.2.1. Do verbalizers and visualizers need different instructional methods?
Knowledge of electronics prior to the lesson (M=5.03, S.D. = 2.21) correlated with learning score (M=7.77, S.D. =
3.80), r= 0.26, p= .04, and so knowledge of electronics was used as a covariate in the analyses of the main hypotheses.
A 2 × 2 analysis of covariance was conducted on each of the 12 individual differences measures included in this
experiment and on the three composite measures that they constituted (spatial ability, cognitive style, and learning
preference). Attribute (visualizers versus verbalizers) and treatment group (pictorial versus text) served as the between
subject factors, and learning test score as the dependent measure. As in Experiment 1, we created two levels of the
attribute by a median split. Table 4 summarizes the mean learning score (and standard deviation) for visualizers and
verbalizers in each treatment condition for each of the 12 individual difference measures and the 3 composite measures.
Our main focus is on the degree of support for the ATI hypothesis, which proposes that verbalizers will learn better
with text help and visualizers will learn better with pictorial help. Table 5 summarizes the ANCOVA information for the
attribute × treatment interaction for each of the 12 individual difference measures and the three composite measures.
Eleven of the 12 individual difference measures and all three composite measures did not interact significantly with
treatment. The same pattern of results was also obtained using an ANOVA (without any covariate). As in Experiment 1,
we did not find convincing support for the ATI hypothesis.
Also as in Experiment 1, a possible criticism concerns the sample size. We addressed this issue by conducting
Experiment 2 as a replication producing similar results as in Experiment 1, and by examining the effect size of each of
the 15 interactions in Experiment 2. The final column in Table 5 lists the value of eta squared, which indicates the
proportion of total variance attributed to the interaction effect. As you can see, most of the interaction effects accounted
for 3% or less of the total variance. Of the remaining 3 interactions with eta squared above .03, two (Vocabulary Test and Paper Folding Test) produced patterns in the direction opposite to that predicted, whereas one (CSA) was in the predicted direction.
Overall, the ATI effect sizes were very small (or in the non-predicted direction) thus supporting our conclusions based
on significance testing in the previous paragraph.
Although our main focus was not on the overall effects of the visualizer-verbalizer attribute, we did find a main
effect of cognitive style [F(1, 56) = 4.06, MSE = 12.38, p= .05] in which visualizers (M=8.93, S.D. = 4.18) scored
higher than verbalizers (M = 6.65, S.D. = 3.04). Although our main focus was not on the overall effects of the pictorial versus text treatment, we did find a main effect of condition in the analyses with the composite learning preference score as an independent variable [F(1, 56) = 4.26, MSE = 13.23, p = .04] in which those in the pictorial condition (M = 8.77, S.D. = 3.56) scored higher than those in the text condition (M = 6.81, S.D. = 3.82).
Table 5
Experiment 2: interaction F-values, learning test score as dependent variable
Measure, followed by df, MS interaction, MS error, F, p, and η².
General achievement factor –– – –
SAT-Math –– – –
SAT-Verbal –– – –
Vocabulary Test 1, 56 30.44 10.82 2.82 .10 .05
Spatial ability factor 1, 56 22.53 11.54 1.95 .17 .03
Card Rotations Test 1, 56 4.54 12.60 0.36 .55 .01
Paper Folding Test 1, 56 61.17 10.57 5.79 .02 .09
Verbal-Spatial Ability Rating 1, 56 5.01 12.94 0.39 .54 .03
Learning preference factor 1, 56 5.46 13.23 0.41 .52 .01
Multimedia Learning Preference Test-Choice 1, 56 5.07 13.17 0.38 .54 .01
Multimedia Learning Preference Test-Rating 1, 56 3.61 13.12 0.28 .60 .00
Multimedia Learning Preference Questionnaire 1, 56 15.99 13.03 1.23 .27 .02
Cognitive style factor 1, 56 2.96 12.38 0.24 .63 .00
Verbalizer-Visualizer Questionnaire 1, 56 14.42 12.73 1.13 .29 .02
Santa Barbara Learning Style Questionnaire 1, 56 6.77 13.02 0.52 .47 .01
Verbal-Visual Learning Style Rating 1, 56 19.84 12.90 1.54 .22 .03
Learning Scenario Questionnaire 1, 56 10.94 12.24 0.89 .35 .02
Cognitive Styles Analysis 1, 56 39.34 12.23 3.26 .08 .06
Note. The Cognitive Styles Analysis is not included in any of the four factors.
Table 4
Experiment 2: descriptive statistics for learning test score
Measure, followed by n, M, and S.D. for four groups in this order: Pictorial condition/high score (visualizer), Pictorial condition/low score (verbalizer), Text condition/high score (visualizer), Text condition/low score (verbalizer).
General achievement factor –––––––––––
SAT-Math –––––––––––
SAT-Verbal –––––––––––
Vocabulary Test 17 9.24 4.02 13 8.15 2.88 13 9.38 4.31 18 4.94 1.98
Spatial ability factor 19 9.42 4.09 11 7.64 2.11 13 9.00 4.38 18 5.22 2.44
Card Rotations Test 18 9.44 4.20 12 7.75 2.05 12 8.25 4.71 19 5.89 2.92
Paper Folding Test 15 9.27 4.18 15 8.27 2.86 13 9.54 4.08 18 4.83 2.06
Verbal-Spatial Ability Rating 20 8.95 4.19 10 8.40 1.90 19 7.53 4.49 12 5.67 2.15
Learning preference factor 17 8.59 3.81 13 9.00 3.34 13 7.31 4.70 18 6.44 3.15
Multimedia Learning Preference Test-Choice 13 8.23 2.95 17 9.18 4.00 7 6.86 6.01 24 6.79 3.11
Multimedia Learning Preference Test-Rating 15 8.13 4.17 15 9.40 2.82 11 6.64 2.91 20 6.90 4.32
Multimedia Learning Preference Questionnaire 16 9.50 3.80 14 7.93 3.20 14 6.57 3.76 17 7.00 3.98
Cognitive style factor 20 9.60 3.32 10 7.10 3.60 10 7.60 5.50 21 6.43 2.80
Verbalizer-Visualizer Questionnaire 17 9.65 3.57 13 7.62 3.33 10 7.00 5.06 21 6.71 3.23
Santa Barbara Learning Style Questionnaire 16 8.63 3.46 14 8.93 3.79 10 5.90 4.04 21 7.24 3.74
Verbal-Visual Learning Style Rating 16 9.06 3.82 14 8.43 3.34 9 5.11 3.30 22 7.50 3.88
Learning Scenario Questionnaire 8 9.75 2.44 22 8.41 3.88 9 8.67 5.43 22 6.05 2.75
Cognitive Styles Analysis 16 8.88 4.06 14 8.64 3.03 14 5.36 3.03 17 8.00 4.08
Note. The Cognitive Styles Analysis is not included in any of the four factors.
3. Experiment 3
Experiments 1 and 2 did not provide support for the ATI hypothesis. In Experiment 3, we made a third attempt in
which we replicated Experiment 1 using the same measures of the verbalizer-visualizer attribute but two different
treatments: one group received both pictorial and text help (both group) and another group received no help (none
group). In Experiment 3 we tested the prediction that verbalizers would outperform visualizers with the none treatment
(because the lesson is largely verbal), but visualizers would outperform verbalizers with the both treatment (because
visualizers could seek pictorial help to supplement the largely verbal lesson). In addition, we examined the behavior of
the learners in the both group in Experiment 3, in order to test the behavioral validation of our self-report measures of
the verbalizer-visualizer dimension.
3.1. Method
3.1.1. Participants and design
The participants were 62 college students recruited from the Psychology Subject Pool at the University of California,
Santa Barbara. Half served in the both group and half served in the none group. The mean age was 19.00 (S.D. = 1.44);
the percentage of men was 21.00 (n= 13) and the percentage of women was 79.00 (n=49); and the mean combined SAT
score was 1120 (S.D. = 198).
3.1.2. Materials and apparatus
The materials and apparatus were the same as in Experiment 1 except that two new instructional programsthe both
and none programswere created to replace those used in Experiment 1. The both program offered both pictorial and
verbal help: When the student clicked on a highlighted term on any of the 31 instructional frames, a frame appeared
containing a "V" button, a "P" button, and a "Return" button. When the student clicked on the "V" button the computer displayed the same verbal help as for the verbal group in Experiment 1; when the student clicked on the "P" button the computer displayed the same pictorial help as for the pictorial group in Experiment 1. When the student finished viewing the help, the student clicked on a button that returned the student to the screen showing the "V," "P," and "Return" buttons. From there the student could click on "V" or "P" to get more help, or click on "Return" to go to the current
instructional screen. The none program offered no help options, so students could only click the forward button to
move to the next screen or the back button to go back to the previous screen.
3.1.3. Procedure
The procedure was identical to Experiment 1 except that students were randomly assigned to either the both or none
group, and the instructions for each program were altered accordingly.
3.2. Results
3.2.1. Do verbalizers and visualizers need different instructional methods?
Prior to testing the main analyses, knowledge of electronics prior to the lesson was examined as a possible covariate.
Knowledge of electronics prior to the lesson (M = 4.00, S.D. = 1.85) did not correlate with learning score (M = 6.89,
S.D. = 3.27), r= 0.05, p= 0.71, and so was not included as a covariate in the analyses of the main hypotheses. A 2 × 2
analysis of variance was conducted on each of the four composite measures (general achievement, spatial ability,
cognitive style, and learning preference) and each of the 14 individual differences measures with attribute (visualizers
versus verbalizers) and treatment group (both versus none) as the between subject factors, and learning test score as the
dependent measure. As in Experiment 1, we created two levels of the attribute by a median split.
Our main focus is on whether or not there were attribute × treatment interactions in which verbalizers learned best
with one instructional method and visualizers learned best with another method of instruction. Table 6 summarizes the
mean learning score (and standard deviation) for visualizers and verbalizers in each treatment condition for each of the
four composite measures and the 14 individual difference measures. Table 7 summarizes the ANOVA information for
the attribute × treatment interaction for each of the four composite measures and 14 individual difference measures.
None of the four composite measures interacted significantly with treatment, and none of the 14 individual difference
measures interacted significantly with treatment. We note one marginally significant interaction (p= .06) among the 18
comparisons, involving the Multimedia Learning Preference Questionnaire, in which visualizers benefited more from the both treatment
whereas verbalizers benefited more from the none treatment. In addition, the final column of Table 7 lists the eta
squared values for each ATI, indicating that the interaction effect sizes were very small in Experiment 3.
Table 6
Experiment 3: Descriptive statistics for learning test score
Measure, followed by n, M, and S.D. for four groups in this order: Both condition/high score (visualizer), Both condition/low score (verbalizer), No-help condition/high score (visualizer), No-help condition/low score (verbalizer).
General achievement factor 12 6.83 1.27 13 5.62 2.63 14 7.71 4.92 13 7.15 3.08
SAT-Math 11 7.27 1.42 14 5.36 2.27 14 7.43 4.70 13 7.46 3.46
SAT-Verbal 11 6.73 1.27 14 5.79 2.61 15 8.73 4.37 12 5.83 3.13
Vocabulary Test 13 6.38 1.56 18 6.44 2.85 17 7.94 3.72 14 6.64 4.27
Spatial ability factor 13 6.38 2.53 16 6.38 2.42 17 8.53 3.78 14 5.93 3.83
Card Rotations Test 13 6.38 2.53 17 6.35 2.34 16 8.62 3.88 15 6.00 3.70
Paper Folding Test 12 6.67 1.61 18 6.28 2.84 17 8.35 4.42 14 6.14 3.06
Verbal-Spatial Ability Rating 5 6.80 1.30 26 6.35 2.53 6 7.33 3.50 25 7.36 4.13
Learning preference factor 16 6.69 1.92 14 5.64 2.13 13 6.46 2.76 18 8.00 4.62
Multimedia Learning Preference Test-Choice 15 6.27 1.75 15 6.13 2.39 11 6.73 3.00 20 7.70 4.44
Multimedia Learning Preference Test-Rating 12 6.17 1.99 18 6.22 2.16 11 6.09 2.81 20 8.05 4.38
Multimedia Learning Preference Questionnaire 16 7.31 2.47 15 5.47 1.88 12 6.50 2.88 19 7.89 4.51
Cognitive style factor 18 6.28 2.52 13 6.62 2.22 13 7.85 3.74 18 7.00 4.19
Verbalizer-Visualizer Questionnaire 17 6.53 2.50 14 6.29 2.27 13 8.54 4.94 18 6.50 2.94
Santa Barbara Learning Style Questionnaire 15 6.27 2.76 16 6.56 2.00 14 7.79 3.91 17 7.00 4.09
Verbal-Visual Learning Style Rating 13 6.31 2.84 18 6.50 2.04 13 8.46 5.03 18 6.56 2.88
Learning Scenario Questionnaire 10 7.50 2.55 21 5.90 2.14 7 6.43 3.26 24 7.63 4.17
Cognitive Styles Analysis 13 7.15 3.26 18 5.89 1.28 18 6.94 3.17 13 7.92 4.94
Note. The Cognitive Styles Analysis is not included in any of the four factors.
Table 7
Experiment 3: interaction F-values, learning test score as dependent variable
Measure, followed by df, MS interaction, MS error, F, p, and η².
General achievement factor 1, 48 1.40 11.03 0.13 .72 .00
SAT-Math 1, 48 12.22 10.79 1.13 .29 .02
SAT-Verbal 1, 48 12.28 9.98 1.23 .27 .02
Vocabulary Test 1, 58 7.02 10.79 0.65 .42 .01
Spatial ability factor 1, 56 24.90 10.43 2.39 .13 .04
Card Rotations Test 1, 57 25.388 10.22 2.48 .12 .04
Paper Folding Test 1, 57 12.32 10.52 1.17 .28 .02
Verbal-Spatial Ability Rating 1, 58 0.52 11.00 0.05 .83 .00
Learning preference factor 1, 57 25.05 9.96 2.51 .12 .04
Multimedia Learning Preference Test-Choice 1, 57 4.46 10.30 0.43 .51 .01
Multimedia Learning Preference Test-Rating 1, 57 12.95 9.94 1.30 .26 .02
Multimedia Learning Preference Questionnaire 1, 58 39.61 10.31 3.84 .06 .06
Cognitive style factor 1, 58 5.29 10.90 0.48 .49 .01
Verbalizer-Visualizer Questionnaire 1, 58 12.26 10.46 1.17 .28 .02
Santa Barbara Learning Style Questionnaire 1, 58 4.51 10.92 0.41 .52 .01
Verbal-Visual Learning Style Rating 1, 58 16.62 10.53 1.58 .21 .03
Learning Scenario Questionnaire 1, 58 23.46 10.58 2.22 .14 .04
Cognitive Styles Analysis 1, 58 19.00 10.68 1.78 .19 .03
Note. The Cognitive Styles Analysis is not included in any of the four factors.
Overall, in Experiment 3 we did not find strong support for the ATI hypothesis. Finally, in one last attempt to uncover support for
the ATI hypothesis, we counted the number of interactions that were in the predicted direction in Experiments 1, 2, and
3. This count yielded 20 out of 33 interactions in the predicted direction, which is not statistically different from chance
(at the .05 level) based on a binomial probability test.
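The directional count can be checked with a standard binomial test; a minimal sketch using SciPy is shown below, using the 20-of-33 count reported above.

```python
from scipy.stats import binomtest

# Two-sided test of whether 20 "predicted-direction" interactions out of 33
# differs from the 50% expected if direction were determined by chance.
result = binomtest(20, n=33, p=0.5)
print(result.pvalue)  # well above .05, consistent with "not different from chance"
```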
Although treatment and attribute effects were not our main focus, we found no significant treatment effects in any of the
18 ANOVAs in Experiment 3. There was a significant effect for the SAT-Verbal score [F(1, 48) = 4.73, MSE = 9.98,
p= .04] in which high verbal ability learners (M=7.88, S.D. = 3.51) outperformed low verbal ability learners
(M=5.81, S.D. = 2.80).
3.2.2. Are self-reported measures of verbalizer-visualizer style valid?
The measures used to evaluate cognitive style and learning preference depend on self-reports from students. Are
such reports related to what they actually do when learning in a multimedia learning environment? In order to answer
this question, we examined the log files for 28 students in the both group, and derived the following four measures for
each student: number of times (out of 31 instructional screens) first clicked on pictorial help, number of times (out of 31
instructional screens) first clicked on verbal help, total number of pictorial help screens viewed, and total number of
verbal help screens viewed. Table 8 shows the correlations between each of the four composite measures (cognitive
style, learning preference, spatial ability, and general achievement) and each of the four processing measures (first
pictorial, first verbal, total pictorial, and total verbal).
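A sketch of how such process measures might be derived from click logs and related to the attribute scores is given below; the log format and column names are hypothetical, not the actual log files used in the study.

```python
import pandas as pd

# Hypothetical click log: one row per help-screen view in the "both" condition.
log = pd.DataFrame({
    "participant":    [1, 1, 1, 2, 2],
    "frame":          [3, 3, 7, 3, 9],
    "help_type":      ["P", "V", "P", "V", "V"],  # P = pictorial, V = verbal
    "order_in_frame": [1, 2, 1, 1, 1],            # 1 = first help opened on that frame
})

first = log[log["order_in_frame"] == 1]
measures = pd.DataFrame({
    "first_pictorial": first[first["help_type"] == "P"].groupby("participant").size(),
    "first_verbal":    first[first["help_type"] == "V"].groupby("participant").size(),
    "total_pictorial": log[log["help_type"] == "P"].groupby("participant").size(),
    "total_verbal":    log[log["help_type"] == "V"].groupby("participant").size(),
}).fillna(0)

# Correlate with an attribute score (e.g., a cognitive style composite),
# assuming 'attributes' is a DataFrame indexed by participant:
# print(measures.join(attributes).corr()["cognitive_style"])
print(measures)
```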
First, there is a consistent relation between cognitive style measures and the processing measures, in which people who
report themselves as visualizers tend to rely more on pictorial help whereas people who report themselves as verbalizers
tend to rely more on verbal help. This pattern provides a validation of the self-report instruments used to measure verbal-
visual cognitive style. Table 8 also lists the correlations between the four process measures and each of four instruments
used to measure verbal-visual cognitive style: Verbal-Visual Learning Style Rating, Santa Barbara Learning Style
Questionnaire, Learning Scenario Questionnaire, and the Verbalizer-Visualizer Questionnaire. The two measures that
most strongly correlate with learning process measures are the Verbal-Visual Learning Style Rating and the Learning
Scenario Questionnaire. Importantly, these two short instruments displayed higher validity than did longer questionnaires.
Second, Table 8 shows that there is a strong and consistent relation between learning preference measures and the
processing measures, in which people who report themselves as preferring pictorial presentations tend to rely more on
pictorial help whereas people who report themselves as preferring verbal presentations tend to rely more on verbal help.
Table 8
Correlations of attribute measures with processing measures
Attribute Measure First Pictorial First Verbal Total Pictorial Total Verbal
General achievement factor .32 .23 .29 .24
SAT-Math .24 .10 .23 .10
SAT-Verbal .31 .29 .27 .30
Vocabulary Test .38 .22 .43 .35
Spatial ability factor .35 .04 .37 .05
Card Rotations Test .32 .04 .34 .09
Paper Folding Test .11 .18 .08 .13
Verbal-Spatial Ability Rating .29 .11 .33 .14
Learning preference factor .52 ⁎⁎ .49 ⁎⁎ .44 .44
Multimedia Learning Preference Test-Choice .39 .41 .32 .38
Multimedia Learning Preference Test-Rating .32 .42 .24 .40
Multimedia Learning Preference Questionnaire .51 ⁎⁎ .44 .45 .40
Cognitive style factor .43 .34 .38 .29
Verbalizer-Visualizer Questionnaire .30 .14 .31 .11
Santa Barbara Learning Style Questionnaire .37 .35 .31 .26
Verbal-Visual Learning Style Rating .43 .18 .39 .17
Learning Scenario Questionnaire .31 .44 .23 .40
Cognitive Styles Analysis .02 .19 .00 .20
Note. The Cognitive Styles Analysis is not included in any of the four factors.
⁎ p < .05.
⁎⁎ p < .01.
This pattern provides a validation of the self-report instruments used to measure verbal-visual learning preference.
Included in Table 8 are the correlations between the four process measures and each of three instruments used to measure
verbal-visual learning preference: Multimedia Learning Preference Questionnaire, Multimedia Learning Preference
Test-Choice Scale, and Multimedia Learning Preference Test-Rating Scale. All three learning preference measures
correlated well with the process measures, indicating a strong relation between what people report they will do and what
they actually do in a multimedia learning episode. Although all three learning preference measures display significant
relations with process measures, the Multimedia Learning Preference Questionnaire is the simplest measure, involving only a short paper-and-pencil questionnaire rather than an actual computer-based performance test, and it appears to
produce the strongest correlations with process measures.
Third, Table 8 shows that there is no statistically significant relation between the composite score of spatial ability
and process measures, although some correlations reach marginal significance. Similarly, Table 8 shows that there are
no statistically significant correlations between any of the process measures and any of the three spatial ability
instruments (Card Rotations Test, Paper Folding Test, and Verbal-Spatial Ability Rating). The lack of strong
correlations is consistent with the idea that cognitive ability is separate from cognitive style or learning preference.
Fourth, Table 8 shows that there is no statistically significant relation between the composite score of general ability
measures and process measures. Similarly, Table 8 shows that although self-reported SAT-Verbal and SAT-
Mathematics scores are not related to the process measures, scores on the Vocabulary Test are related. The lack of
strong and consistent correlations is consistent with the idea that cognitive ability is separate from cognitive style or
learning preference.
4. Supplemental analysis
In a previous factor analysis examining the verbalizer-visualizer dimension, Mayer and Massa (2003) found a four-
factor solution incorporating 13 of the 14 verbalizer-visualizer variables (with the CSA not loading on any of the
factors). To determine if this factor structure holds with another group of participants drawn from the same population
we combined the data from Experiments 1 and 3, and then conducted a confirmatory factor analysis using AMOS 4.01
Statistical Package (Arbuckle, 1999). The participants in Experiments 1 and 3 came from the same population as those
used by Mayer and Massa (2003). The confirmatory factor analysis was performed on the variance-covariance matrix
of the 13 measures using maximum likelihood estimation. For ease of interpretation the corresponding correlation
matrix is displayed in Table 9. Fig. 1 displays the four-factor model with factor loadings and correlations among the
factors. An alpha level of .05 was used to determine significance for all analyses.
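For readers who want to reproduce this kind of analysis without AMOS, a rough sketch using the open-source semopy package is shown below. The model syntax mirrors the four-factor structure, but the semopy calls are stated from memory (and therefore commented out), and the variable names are hypothetical placeholders for the 13 measures.

```python
import semopy  # open-source SEM package; exact API usage here is an assumption

# Four-factor measurement model mirroring Mayer and Massa (2003).
model_desc = """
CognitiveStyle =~ VVQ + SBLSQ + LearningScenario + VVLSR
LearningPreference =~ MLPT_Choice + MLPT_Rating + MLPQ
SpatialAbility =~ CardRotations + PaperFolding + VerbalSpatialRating
GeneralAchievement =~ SAT_Math + SAT_Verbal + Vocabulary
"""

# df: one row per participant, one column per measure (hypothetical names above).
# model = semopy.Model(model_desc)
# model.fit(df)
# print(model.inspect())           # factor loadings and factor correlations
# print(semopy.calc_stats(model))  # chi-square, CFI, GFI, RMSEA, etc.
```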
The four cognitive style measures (Verbal-Visual Learning Style Rating, Verbalizer-Visualizer Questionnaire, Santa
Barbara Learning Style Questionnaire, and the Learning Scenario Questionnaire), the three learning preference
measures (Multimedia Learning Preference Questionnaire, Multimedia Learning Preference Test Choice Scale, and Multimedia Learning Preference Test Preference Scale), and the three general achievement measures (SAT-Math, SAT-Verbal, and the Vocabulary Test) all loaded significantly on their corresponding factors.
Table 9
Correlations of the 13 verbalizer-visualizer measures included in the confirmatory factor analysis
Measure 1 2 3 4 5 6 7 8 9 10 11 12
1. SAT-Math
2. SAT-Verbal .42
3. Vocabulary Test .47 .14
4. Card Rotations Test .15 .30 .23 ⁎⁎
5. Paper Folding Test .16 .36 .16 .45
6. Verbal-Spatial Ability Rating .27 .09 .10 .06 .14
7. Multimedia Learning Preference Test-Choice .10 .22 .03 .10 .12 .06
8. Multimedia Learning Preference Test-Rating .05 .10 .17 .03 .05 .33 .37
9. Multimedia Learning Preference Questionnaire .04 .13 .04 .07 .06 .29 .40 .58
10. Verbalizer-Visualizer Questionnaire .16 .18 .06 .17 .27 .37 .19 ⁎⁎ .27 .24
11. Santa Barbara Learning Style Questionnaire .16 .13 .15 .30 .20 .27 .36 .36 .36 .45
12. Verbal-Visual Learning Style Rating .15 .13 .13 .23 ⁎⁎ .16 .39 .33 .33 .40 .41 .70
13. Learning Scenario Questionnaire .02 .26 .12 .15 .28 .24 .32 .41 .43 .35 .43 .46
⁎ p < .01.
⁎⁎ p < .05.
Two of the three spatial ability
measures (Card Rotations, and Paper Folding) loaded significantly on the spatial ability factor. The third spatial ability
measure, the Verbal-Spatial Ability Rating, had a loading that trended toward significance (p= .06).
The confirmatory factor analysis also examined the correlations among the factors. The cognitive style factor was
significantly correlated with the learning preference factor, and with the spatial ability factor. The learning preference
factor only correlated significantly with the cognitive style factor. General achievement correlated significantly with
the spatial ability factor. All correlations were as expected.
The overall confirmatory factor analysis produced a χ²(59) = 110.37, p < .01. According to the Hoelter Index, the sample size would need to be reduced to n = 90 for χ² to no longer be significant. The CFI (.86) and GFI (.87) indicate an acceptable fit of the data. The RMSEA (.08) indicates that we have a reasonable error of approximation of the population covariance matrix. Overall the model indicates support for the four-factor verbalizer-visualizer structure found by Mayer and Massa (2003).
Fig. 1. Results of a confirmatory factor analysis of the four-factor model of verbalizer-visualizer dimensions found by Mayer and Massa (2003).
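For reference, the reported approximate-fit indices follow standard definitions; a common formulation (assumed here, since the paper does not spell the formulas out) is shown below, where χ²_M and df_M refer to the fitted model, χ²_B and df_B to the baseline (independence) model, and N to the pooled sample size (52 + 62 = 114 if no cases were dropped).

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2_M - df_M,\ 0\right)}{df_M\,(N-1)}}
\qquad
\mathrm{CFI} = 1 - \frac{\max\!\left(\chi^2_M - df_M,\ 0\right)}{\max\!\left(\chi^2_B - df_B,\ 0\right)}
```

Plugging the reported χ²(59) = 110.37 and N = 114 into the RMSEA formula gives √(51.37/6667) ≈ .09, close to the reported .08; the small discrepancy may reflect excluded cases or a slightly different formula.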
5. Conclusion
Overall, the present study provides support for the idea that it is possible to use instruments that distinguish between
verbalizers and visualizers (i.e., support for the verbalizer-visualizer hypothesis) but does not provide support for the idea that different instructional methods should be used for visualizers and verbalizers (i.e., no support for the ATI hypothesis).
5.1. Support for the verbalizer-visualizer hypothesis
Some people are visual learners and some people are verbal learners. This idea, which we have called the visualizer-
verbalizer hypothesis (Mayer & Massa, 2003), was supported in two ways. First, in the supplemental analysis, a
confirmatory factor analysis revealed that the four-factor structure found by Mayer and Massa (2003) holds, thus
adding support to their conclusion that cognitive style, learning preference, spatial ability, and general
achievement are four separate facets of the verbalizer-visualizer dimension. Second, in the both group of Experiment 3,
there were substantial correlations between paper-and-pencil measures of cognitive style and learners' behaviors
during learning, and between paper-and-pencil measures of learning preference and learners' behaviors during
learning. For example, students who reported that they used visual modes of representation or preferred visual modes
of presentation tended to select pictorial help screens whereas students who reported that they used verbal modes of
representation or preferred verbal modes of presentation tended to select verbal help screens. This pattern provides
some validation of the paper-and-pencil measures. Overall, consistent with the results of Mayer and Massa (2003),
people appear to differ on the visualizer-verbalizer dimension with respect to cognitive style, learning preference, and
cognitive ability.
5.2. No support for the ATI hypothesis
The ATI hypothesis states that verbal learners should receive verbal methods of instruction and visual learners
should receive visual methods of instruction. To test the ATI hypothesis we constructed a realistic computer-based
training lesson, along with two forms of adjunct support: help in the form of printed words that was intended for verbalizers, and help in the form of labeled diagrams and labeled illustrations intended for visualizers. We attempted to
give the ATI hypothesis a fair hearing by using many different ways of measuring the verbalizer-visualizer dimension
(i.e., 14 different measures and 4 different composite measures), by testing both college students and non-college
educated adults, and by conducting three different experiments. However, the ATI hypothesis was not supported by the
results of each of the three studies: (1) there was no significant ATI in 17 of the 18 tests in Experiment 1 including all
composite measures (i.e., general achievement, spatial ability, learning preference, and cognitive style); (2) there was
no significant ATI in 14 of the 15 tests in Experiment 2 including all composite measures; and (3) there was no
significant ATI in 18 of 18 tests in Experiment 3 including all composite measures. Overall, we tried 51 ways to find a
significant ATI and were successful twice; with alpha at the .05 level we could have expected to find 2.5 significant effects
out of 51 attempts just by chance. In addition, the interaction effect sizes were generally very small.
As one final attempt to test the ATI hypothesis we examined all interactions to determine whether they were in the
predicted direction even if they were not statistically significant. Of the 51 interactions we examined across all three
experiments, 27 were in the predicted direction and 24 were opposite the predicted direction. This difference is not
statistically significant based on a Fisher Exact Test at the .05 level. Overall, the direction of the interaction was almost
equally likely to come out one way as the other, again indicating no evidence for the ATI hypothesis.
In Experiments 1 and 2, our extensive study of verbalizer-visualizer measures failed to yield convincing evidence
for the idea that adding pictorial aids to an on-line lesson helped visualizers more than verbalizers or that adding verbal
aids to an on-line lesson helped verbalizers more than visualizers. Overall, in spite of careful testing using more than a
dozen verbalizer-visualizer measures, we were unable to find support for the ATI hypothesis that verbal learners should
be given verbal instruction and visual learners should be given visual instruction. Instead, adding pictorial aids to an
on-line lesson that was heavily text-based tended to help both visualizers and verbalizers. These results are consistent
with what Mayer (2001) calls the multimedia effect: people learn better from words and pictures than from words alone.
Similarly, in Experiment 3, we were unable to find support for the ATI hypothesis when we used the both-versus-none
treatments. Finally, the lack of main effects attributable to verbalizer-visualizer measures is consistent with the
idea that people can learn equally well as verbalizers or visualizers.
Overall, our results do not provide a convincing rationale for customizing different on-line instruction programs for
visualizers and verbalizers. This conclusion should be tempered by the acknowledgement that our studies are based on
one lesson and one kind of pictorial and verbal instruction. It is possible that ATIs could be obtained with some other
type of lesson and with some other way of implementing pictorial and verbal methods of instruction. Nevertheless, this
work represents a rigorous effort to test the ATI hypothesis, and yields no support for it. In contrast, research on prior
knowledge commonly produces ATIs in which instructional methods that benefit beginners often do not benefit more
experienced learners (Kalyuga, Ayres, Chandler, & Sweller, 2003; Mayer, 2001). Therefore, the failure to obtain ATIs
in the present set of experiments should not be taken to suggest that instruction should never be designed to
accommodate individual differences. Rather, our findings cast doubt on the effectiveness of designing instruction to
accommodate individual differences in the verbalizer-visualizer dimension.
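For readers who want to see what a statistically detectable ATI would look like, the following sketch is purely illustrative: it simulates a crossover (disordinal) attribute x treatment interaction and tests the interaction term with ordinary least squares. The data, variable names, and effect sizes are hypothetical and are not drawn from the experiments reported here; the analysis is a generic regression formulation, not the authors' procedure.

```python
# Illustrative only: simulated data with a built-in crossover (disordinal) ATI,
# not the authors' data or analysis. Requires numpy, pandas, statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
attribute = rng.normal(0.0, 1.0, n)      # hypothetical standardized attribute score
treatment = rng.integers(0, 2, n)        # 0 = verbal help, 1 = pictorial help (random assignment)
# Crossover interaction: the pictorial treatment helps high scorers and hurts low scorers.
score = 50 + 4.0 * treatment * attribute + rng.normal(0.0, 5.0, n)

df = pd.DataFrame({"score": score, "treatment": treatment, "attribute": attribute})

# The attribute x treatment interaction coefficient is the statistical signature of an ATI.
fit = smf.ols("score ~ C(treatment) * attribute", data=df).fit()
print(fit.summary().tables[1])
```

A non-significant coefficient on the interaction term in a model of this form corresponds to the pattern observed throughout the experiments summarized in this section.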
References
Arbuckle, J. L. (1999). AMOS 4.01. Chicago, IL: Smallwaters Corporation.
Biggs, J. (2001). Enhancing learning: A matter of style or approach? In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning, and
cognitive styles (pp. 73–102). Mahwah, NJ: Erlbaum.
Cronbach, L. J. (Ed.). (2002). Remaking the concept of aptitude: Extending the legacy of Richard E. Snow. Mahwah, NJ: Erlbaum.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods. New York: Irvington.
Ekstrom, R. B., French, J. W., & Harman, H. H. (1976). Manual for kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing
Service.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38, 23–32.
Kozhevnikov, M., Hegarty, M., & Mayer, R. E. (2002). Revising the visualizer–verbalizer dimension: Evidence for two types of visualizers.
Cognition and Instruction, 20, 47–78.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Mayer, R. E., & Massa, L. (2003). Three facets of visual and verbal learners: Cognitive ability, cognitive style, and learning preference. Journal of
Educational Psychology, 95, 833–841.
Richardson, A. (1977). Verbalizer–visualizer: A cognitive style dimension. Journal of Mental Imagery, 1, 109–126.
Riding, R. J. (1991). Cognitive styles analysis. Birmingham, UK: Learning and Training Technology.
Sternberg, R. J., & Zhang, L. (Eds.). (2001). Perspectives on thinking, learning, and cognitive styles. Mahwah, NJ: Erlbaum.