The Effects of Online
Formative and Summative
Assessment on
Test Anxiety and Performance
The Journal of Technology, Learning, and Assessment
Volume 4, Number 1 · October 2005
A publication of the Technology and Assessment Study Collaborative
Caroline A. & Peter S. Lynch School of Education, Boston College
Jerrell C. Cassady & Betty E. Gridley
The Effects of Online Formative and Summative Assessment on
Test Anxiety and Performance
Jerrell C. Cassady & Betty E. Gridley
Editor: Michael Russell
Technology and Assessment Study Collaborative
Lynch School of Education, Boston College
Chestnut Hill, MA 02467
Copy Editor: Kevon R. Tucker-Seeley
Design: omas Hoffmann
Layout: Aimee Levy
JTLA is a free on-line journal, published by the Technology and Assessment Study
Collaborative, Caroline A. & Peter S. Lynch School of Education, Boston College.
Copyright ©2005 by the Journal of Technology, Learning, and Assessment
(ISSN 1540-2525).
Permission is hereby granted to copy any article provided that the Journal of Technology,
Learning, and Assessment is credited and copies are not sold.
Preferred citation:
Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative and
summative assessment on test anxiety and performance. Journal of Technology,
Learning, and Assessment, 4(1). Available from
Author’s note:
We would like to thank Judey Budenz-Anders, Gary Pavlechko and Wayne Mock for their
help with an early version of this work. Correspondence concerning this article should be
addressed to Jerrell C. Cassady, Ph.D., Department of Educational Psychology, Ball State
University, TC 522, Muncie, IN 47306;
Volume 4, Number 1
This study analyzed the effects of online formative and summative assessment materials on undergraduates' experiences with attention to learners' testing behaviors (e.g., performance, study habits) and beliefs (e.g., test anxiety, perceived test threat). The results revealed no detriment to students' perceptions of tests or performances on tests when comparing online to paper-pencil summative assessments. In fact, students taking tests online reported lower levels of perceived test threat. Regarding formative assessment, findings indicate a small benefit for using online practice tests prior to graded course exams. This effect appears to be in part due to the reduction of the deleterious effects of negative test perceptions afforded in conditions where practice tests were available. The results support the integration of online practice tests to help students prepare for course exams and also reveal that secure web-based testing can aid undergraduate instruction through improved student confidence and increased instructional time.
The Effects of On-line Formative
and Summative Assessment
on Test Anxiety and Performance
Jerrell C. Cassady
Betty E. Gridley
Department of Educational Psychology
Ball State University
The use of the Internet to provide students with access to course
materials has become an increasingly common practice for undergrad-
uate instruction (Duchastel, 1996). Standard online materials typically
include links to a course syllabus, an outline of class topics, instructional
materials, and communication conduits (Wheeler, 2000). However, recent
developments with user-friendly web-based assessment packages and
secure Internet testing protocols have led to the common usage of online
assignments, quizzes, and tests. Although there is great enthusiasm among
educators regarding the potential for online delivery of both formative
and summative assessment materials, there is little evidence regarding
the impact of web-based assessment practices on student performance
(Buchanan, 1998; 2000). Similarly, the unique impact of online testing
on students’ attitudes and anxieties is an under-explored topic. This
investigation explored undergraduate students’ experiences within
the context of a course utilizing online assessments. In particular, two
primary questions were examined: (1) Are there differences in students’
perceptions and performances for graded (summative) tests based on
the format of delivery (online vs. paper-pencil)?; and (2) How are under-
graduate students’ experiences uniquely influenced by the availability of
online formative assessments (practice quizzes)?
The Learning-Testing Cycle
Perhaps the most comprehensive body of research that has explored
the experience of learners in various testing conditions comes from the test
anxiety literature, which has detailed a variety of conditions and criteria
that tend to positively or negatively influence academic test performance.
One generality in this body of research is that understanding students’
experiences with tests is facilitated when viewing the entire learning and
testing process as a recursive cycle.
Three phases are included in the learning-testing cycle: test prepara-
tion (forethought), test performance, and test reflection (Schutz & Davis,
2000; Zeidner, 1998). Students with high levels of cognitive test anxiety
and other negative test perceptions have difficulty operating in all three of
these phases (Cassady, 2004b). The conclusion from this line of research
has been that the beliefs and behaviors students maintain during each of
these phases directly influence performance. e current study targeted
students’ experiences in the test preparation and performance phases, and
used the established framework of the learning-testing cycle to investigate
theoretical benefits and drawbacks related to online testing.
Test Preparation
In the test preparation phase, students with high levels of cognitive
test anxiety tend to procrastinate, worry over potential failure, utilize inef-
fective study strategies, and demonstrate insufficient cognitive processing
skills to gain effective conceptual understanding for the content (Cassady,
2004b; Culler & Holahan, 1980; Hembree, 1988; Wittmaier, 1972). There
is evidence that students with test anxiety develop these patterns due to
deficient abilities in effectively encoding to-be-learned content (Cassady,
2004a; Naveh-Benjamin et al., 1987), with some research pointing directly
to the articulatory processing loop, which controls verbal processing in
working memory (Ikeda, Iwanaga, & Seiwa, 1996). These pervasive
processing failures have been explained through skill deficit models,
where the students simply have not developed the necessary strategies to
encode, organize, and store the materials at hand (e.g., Naveh-Benjamin
et al., 1987). Training the learner to employ effective strategies for test
preparation should alleviate such a skill deficit, and consequently promote
higher test performance for students who have a history of test anxiety
and test failure. The learning-testing cycle framework predicts that once
a student gains an effective study strategy for encoding and storing core
content, the traditional deleterious effects of test anxiety will be less
dramatic because the student will recognize the content is accessible and
the self-deprecating ruminations and coping strategies such as procrasti-
nation and task avoidance will be less readily activated (Cassady, 2004b).
Another proposition for helping learners overcome the effects of
cognitive test anxiety is to reduce the perceived threat of an evaluative
event. For example, Cassady (2004a) found that under conditions where
there was no external evaluative pressure (i.e., ungraded tests of memory
in a laboratory setting), the influence of test anxiety on performance
was significantly lower than in conditions of high external evaluative
pressure (college entrance exams). This pattern of results indicates that when
the evaluative stress is removed, the processing deficits are attenuated,
supporting the proposition that the test anxious learner has the basic
cognitive skills to encode, organize, and store core content.
This study was designed to extend this laboratory-based finding, obtained with contrived materials, to a realistic educational setting by providing ungraded
practice tests as a test preparation strategy available to learners in educa-
tional psychology courses.
Test Performance
The classic view of test anxiety has focused on the test performance phase, where learners fail to perform well due to task interference. This interference can take many forms, including: (a) sudden, inexplicable
loss of previously mastered information at the time of testing (Covington
& Omelich, 1987); (b) interfering self-deprecating ruminations (Sarason,
1986); (c) distracting thoughts of failure brought on by feelings of threat
to self imposed by the test (Cassady, 2002; 2004b; Schwarzer & Jerusalem,
1992); or (d) physiological reactions that impair stable cognitive action
(e.g., headache, perspiration, heart palpitation; Sarason, 1986). These
distracters during the testing event naturally reduce the ability of the
learner to effectively locate and use relevant information stored in long-
term memory.
Contemporary views of test anxiety have demonstrated additional
problems in the performance phase for those test-anxious students with
poor study skills (e.g., Naveh-Benjamin et al., 1987). These students face
additional difficulty because the encoding and storage processes in the
test preparation phase have been adversely affected as well, significantly
reducing the probability of competent performance under pressure.
To reduce the impact of test anxiety and related test perceptions on
test performance, the use of practice tests in an instructional program can
serve two purposes: (a) provide ungraded testing experiences that serve
as effective test preparation activities and (b) provide non-threatening
practice exams that build student confidence through repeated attempts
and presumed success with realistic testing materials. In this study, online
presentation of practice tests was used as a simplified means to make
practice tests consistently and readily available to students.
Online Formative and
Summative Assessment
There is a limited research base on the use of online tools to deliver
formative and summative assessments. However, the research base
on traditional testing formats is relevant and provides insight into the
experiences of learners. To establish the theoretical framework for this study, we present literature demonstrating that (a) formative assessments can serve as effective test preparation events, (b) providing
multiple formative assessments can influence learners’ test perceptions,
and (c) migrating traditional multiple-choice tests to an online testing protocol produces no universal differences in performance or perceptions.
Impact of Formative Assessment on
Learning and Achievement
The decision to use formative assessment in instruction is typically
motivated by an attempt to provide the instructor with an accurate esti-
mation of student ability at a particular point in the course, or to provide
the students with an assessment task similar in nature to the summative
test (Buchanan, 1998). This allows the student to identify strengths and
weaknesses and to better prepare for the “real” exam. One of the great
advantages of online test programs is the ability to deliver practice tests
that serve as formative assessment tools for the students. Practice tests
have been shown to increase students’ final outcome performance by
roughly twelve percent (Bocij & Greasley, 1999; also see Carrier & Pashler,
1992; Dempster, 1997; Glover, 1989; McDaniel, Kowitz, & Dunay, 1989).
Delivering practice tests online may provide an additional benefit to
the student by allowing her or him to complete the test conveniently
without the environmental distractions that are common during in-class
practice tests.
Because different conceptualizations of “practice test” and “practice quiz” are common, the educational, cognitive, and theoretical implications of practice testing differ dramatically across strategies; thus, operationalization is key. In this
discussion, unless otherwise noted, practice quizzes and formative assess-
ments refer to assessment tools that are completed by students prior
to a summative (graded) assessment. These practice tests are similar to
summative assessments in format and difficulty level, but do not impact
the students’ course grade and are comprised of a different set of items.
The utility of formative assessment partly depends on the manner in which feedback is provided to the learner. The most desirable
feedback approach appears to be immediate post-performance reporting,
which provides feedback directly after the entire quiz or test has been
completed (King & Behnke, 1999). This method takes advantage of a
primary benefit of computer-assisted assessment by supplying timely
feedback (Clariana, Ross, & Morrison, 1991; Jongekrijg & Russell, 1999),
while avoiding the problem of inducing anxiety or distraction that can
arise when providing performance indicators directly after each item
(Wise, Plake, Eastman, Boettcher, & Luken, 1986; Wise, Plake, Pozehl,
Barnes, & Lukin, 1989). The anxiety induced by item-by-item feedback has
been shown to hamper performance through motivational processes such
as learned helplessness or externalized attributions of control over perfor-
mance (Boggiano & Ruble, 1986).
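To make the distinction concrete, the immediate post-performance approach can be sketched as follows; the quiz structure shown here is a hypothetical illustration, not the courseware used in the study. Answers are collected for every item, and feedback is withheld until the whole quiz is complete:

```python
def grade_quiz(items, answers):
    """Grade a complete quiz, deferring all feedback until the end.

    items: list of (prompt, correct_option) pairs (hypothetical format).
    answers: the learner's chosen options, in item order.
    No per-item result is exposed mid-quiz; everything is reported
    after the final item, as immediate post-performance feedback.
    """
    results = [(prompt, given == correct)
               for (prompt, correct), given in zip(items, answers)]
    score = sum(1 for _, ok in results if ok)
    return score, results
```

The design point is simply that the correctness of each response is computed but never surfaced until the learner has finished, avoiding the item-by-item feedback shown to hamper performance.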
Formative Assessment and Students’ Perceptions of Tests
The benefits of repeated formative assessment for students are likely to rest in their perceptions of test preparedness for the summative measure. Bandura (1986) proposed that repeated exposure to successful testing experiences for students with high anxiety would promote self-efficacy
for later tests. The use of formative assessments (where no evaluative
pressure is imposed) as practice for tests is likely to increase the
probability that students will have a positive experience in the testing
event with respect to anxiety. In these formative assessment experiences,
perceived threat, self-awareness, cognitive test anxiety, and emotion-
ality should all be lower than in standard summative assessment sessions
(Kurosawa & Harackiewicz, 1995; Schwarzer & Jerusalem, 1992). With
the suppression of these affective detractors, the student is more likely
to be able to benefit from self-regulatory processes in the practice testing
session, leading to higher performance, growth, and subsequent success
(Bandura, 1986; Schutz & Davis, 2000).
Online Summative Assessment
Summative assessment in an online environment differs in form
and function from the formative assessment process. Not only are the
summative assessments graded, but the methods through which students
access and respond to the tests usually differ. The summative assessment
process requires high levels of control and security in the testing process
to ensure reliability and validity in scores, attention to technical problems
that may arise during the testing session, and assurance that the online
nature of the testing process itself has no impact on actual performance.
An additional concern that is often raised by instructors considering online
summative assessment is that online testing will induce heightened levels
of anxiety over the test, leading to performance levels that underestimate
true ability.
The advantages of providing course tests online can include flexibility
in delivering tests to students and efficiency in scoring, depending upon
the method of delivery chosen by the instructor. With the online delivery
of tests, students are not necessarily bound by the traditional artificial
academic scheduling constraints. Specifically, (a) they can complete exams
at different times of the day to fit their convenience; (b) they can poten-
tially complete the tests in different locations if the test is not a required
“closed-book” exam; and (c) unless there is an explicit reason for a time
limit, students can take as long as needed to complete the exam. Along similar lines, online summative assessment can also free up class time in traditional on-campus courses. That is, rather than taking a class period
to have the students complete the course exam, the instructor can use the
class period for instruction.
In perhaps the most complete examination of online summative assessment to date, Bocij and Greasley (1999) reported that students considered online testing superior because they were less distracted by the process of handwriting their responses, which helped them maintain focus on the test items, and because they felt less panicked. The lower levels of panic stemmed in part from the fact that online tests took less time to complete. Students in Bocij and Greasley’s (1999) work reported the tests were fair,
unbiased, and “less threatening than conventional examinations” (p. 14).
Finally, the authors reported that performance gains were noted in the
online testing conditions, but these effects were not present for the high-ability students, who appeared to be unaffected by test delivery format.
Present Investigation
As mentioned earlier, this investigation addressed two research
questions. e first was a comparison of the effect of delivering course
exams online versus in class on paper. is portion of the study involved
examining the affective experiences of one instructor’s students. e
students were enrolled in the same course, separated by one year. e only
evaluative difference existing between the two courses was the method of
delivering the course exams. For the first group of students, all tests were
delivered in class on paper. For the second group, all tests were delivered
online in a computer-based testing laboratory staffed by testing proctors
who ensured the security of the testing process and corrected any technical
issues that arose. Students’ levels of cognitive test anxiety, emotionality,
and perceived threat of tests were compared to determine if there were dif-
ferential perceptions of tests for students experiencing the two alternate
methods of test delivery. These data were intended to examine the extent
to which online testing leads to heightened levels of fear, anxiety, or worry
over tests. e hypothesis underlying this question was that the method
of presentation would have no meaningful detrimental impact for the
students in any of these variables.
The second part of the study examined the relationships among
the use of online formative assessments, student performance, and test
perceptions. For both groups of students, online practice tests were made
available as a test preparation option for only the third exam. It was
expected that the students using online formative assessment tests (as
practice) would have higher rates of performance on subsequent summa-
tive assessment measures. Due to the differential patterns of behavior
and performance traditionally noted in students with test anxiety based
in part on study strategies (Naveh-Benjamin, McKeachie, & Lin, 1987), no reasonable a priori predictions could be made regarding the relationship between online formative assessment and test perceptions.
Undergraduate students in introductory educational psychology
courses were the participants in this investigation. Participants were drawn
from intact classes of students enrolled in the same Midwestern university
in the fall of 1999 and fall of 2000. Eighty-four undergraduate students
participated in the in-class testing group in the fall of 1999. The participants were predominantly White (n = 81), with the remaining students reporting race as Black (n = 1) or biracial (n = 1), and one student refrained
from reporting on racial status. In the in-class testing group there were
74 females and 10 males, which was representative of the population in
the elementary education program that the courses served. Ninety-two
participants were included in the online testing group in the fall of 2000,
with 3 Black, 2 Hispanic, and 87 White students. There were 24 males and 68 females in the online testing condition. The participants in the study
were all volunteers; participation in the study served as one of many
options to complete a course requirement on professional research.
Performance indicators used in this study were three course examina-
tions taken across the duration of the target academic semester. Tests 1
and 2 in the semester served as indications of prior performance in the
design of this study because they were completed prior to the self-report
instruments that are the focus of the analyses. Test 3 was the targeted test
for the investigation given that it was the test for which online formative
assessments were available and the test students completed shortly after
completing the self-report instruments on test perceptions and prepara-
tion behaviors.
The self-report instruments in this study have all been used and validated in previous work with test anxiety (Cassady, 2004b; Cassady & Johnson, 2002; Cassady et al., 2004). To promote additional replication, all scales have been previously published in their entirety in the noted sources.
Test Anxiety
Test anxiety research has repeatedly validated the existence of two
interrelated factors commonly referred to as worry and emotionality
(Hembree, 1988). Although over two decades of research has confirmed the
presence of both factors, there is clear evidence that the cognitive factor
has the most direct negative impact on test performance (Deffenbacher,
1980; Sarason, 1986). e term “cognitive test anxiety” refers to the wide
variety of thoughts and beliefs that can impair performance either during a
learner’s attempts to prepare for or take an examination (Cassady, 2004b).
These cognitive barriers include (a) comparing self-performance to peers, (b) considering the consequences of failure, (c) low levels of confidence in performance, (d) excessive worry over evaluation, (e) feeling unprepared for tests, or (f) limitations in retrieval cue utilization (Deffenbacher,
1980; Geen, 1980; Hembree, 1988; Morris, Davis, & Hutchings, 1981;
Sarason, 1986). e Cognitive Test Anxiety scale (Cassady & Johnson,
2002) is a 27-item instrument focused on only the cognitive domain of
test anxiety. Students respond to the items on this instrument using a
four-point Likert-type scale (“Not at all typical of me,” “Only somewhat
typical of me,” “Quite typical of me,” “Very typical of me”). Previous
research with this instrument has demonstrated high internal consistency (alpha > .90) as well as construct stability as measured by test-retest consistency at three administration periods (beginning, middle, and end of academic semester; rs = 0.88 to 0.93) (Cassady, 2001b). To measure
cognitive test anxiety, the Cognitive Test Anxiety scale was completed
by all students no more than two days prior to taking the third examination. The timing of the test administration was determined
by prior investigations with similar samples (Cassady, 2004b) that
demonstrated students had sufficient experience with the course testing
procedures to have an adequate understanding of the specific test condi-
tions and procedures for the given course.
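For readers wishing to check such reliability figures against their own data, coefficient alpha can be computed from any item-response matrix. The following is a minimal sketch assuming items scored 1–4 on the Likert-type scale; it is an illustration, not code from the original study:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of equal-length lists, one list of scores per item
    (each inner list holds one respondent's answer per position).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))
```

Perfectly parallel items yield alpha = 1.0; values above .90, as reported for the Cognitive Test Anxiety scale, indicate very high internal consistency.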
The second factor of test anxiety is known as emotionality (Liebert & Morris, 1967). This factor is the individual’s subjective awareness of
heightened autonomic arousal during examinations (Schwarzer, 1984). To
measure the emotionality component of test anxiety, the Bodily Symptoms
subscale of Sarason’s (1984) Reactions to Tests was administered. This
10-item scale addresses students’ self-perceived physiological reactions
during tests (e.g., sweating, increased heart rate, headache). The students
responded to the items using the same response scale as the Cognitive
Test Anxiety scale.
Perceived Test Threat
e Perceived reat of Tests is an 18-item self-report instrument that
focuses on the perception of the upcoming test as threatening, either due
to general difficulty of course content or personal barriers to success on
the test (Cassady, 2004b). Participants respond to a four-point Likert-type
scale, with responses ranging from strongly disagree to strongly agree.
Select items are reverse-coded such that high values on the Perceived
Threat of Tests instrument reveal high levels of perceived threat.
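Reverse-coding on a four-point scale maps 1 to 4 and 2 to 3 before summing. The sketch below illustrates the scoring; the flagged item indices are hypothetical, as the article does not list which items are reverse-coded:

```python
SCALE_MAX = 4  # four-point Likert-type response scale

def score_scale(responses, reversed_items):
    """Sum an instrument's items, reverse-coding the flagged ones.

    responses: raw 1..4 answers, one per item.
    reversed_items: 0-based indices of reverse-coded items
    (hypothetical here). After reversal, high totals consistently
    indicate high levels of the measured construct.
    """
    return sum((SCALE_MAX + 1 - r) if i in reversed_items else r
               for i, r in enumerate(responses))
```

For an 18-item instrument, totals therefore range from 18 to 72 regardless of which items are reversed.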
Test Preparation Strategies
An 8-item study skills survey was also used in this investigation
to gather self-report information on the students’ study habits and
strategies using the same response options as in the Cognitive Test
Anxiety scale (Cassady, 2004b). The items assessed students’ chosen
study activities as well as their perceived ability with test preparation
strategies (e.g., reading comprehension and task focus). A combined score
for the study skills items represents an overall study efficacy rating from
the student, with a high score indicating they rate themselves highly on
positive test preparation activities.
Use of the online practice tests was also coded as an indicator of
individuals’ test preparation activities. For the paper-based testing group,
students self-reported the use of the practice tests in response to a dichot-
omous (yes-no) query after the third exam. Advances in available online
courseware in the fall of 2000 enabled tracking of individual users for
the online testing group. Thus, for that group only, the actual number of times
each participant accessed practice tests was available. Because the paper-
based testing group data were self-reported and did not meet the assump-
tion of interval data, the main analyses exploring the impact of online
practice tests were conducted on data collected only from the online
testing group.
In-class Testing Condition
Students in the in-class testing group took four tests during the
semester, including one comprehensive examination. e first three tests
of the semester are the focus of this investigation, given the unique nature
of final examinations regarding content coverage and student preparation (see Cassady & Johnson, 2002, for details). The three tests were each
completed during 75-minute class sessions in the regular course meeting
room. e instructor was present for the exam administration. e tests
were multiple-choice exams ranging in length from 32 to 36 items, with an
average difficulty index (the percentage of test takers correctly answering
the item) of 0.76. Two days prior to taking the third exam, students in the
study completed the self-report instruments. This contrived timing of data
collection was intended to provide sufficient situational anxiety to capture
heightened rates of perceived threat and emotionality (Cassady, 2004b).
Logistical and ethical concerns prevented completing the scales on the day
of testing. Logistically, there was no reliable time for the students to all
complete the items directly prior to the test and maintain sufficient time
to complete the exam items. Ethically, it is conceivable that completing the
cognitive test anxiety scale or perceived test threat measure would induce
additional anxiety that could have a detrimental impact on performance if
taking the test immediately thereafter.
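The difficulty index defined above (the percentage of test takers correctly answering an item) reduces to a simple proportion; a minimal sketch, for illustration only:

```python
def item_difficulty(flags):
    """Proportion of examinees answering one item correctly.

    flags: list of 0/1 correctness indicators, one per examinee.
    Higher values mean an easier item.
    """
    return sum(flags) / len(flags)

def mean_difficulty(items):
    """Average difficulty index across all items on a test."""
    return sum(item_difficulty(flags) for flags in items) / len(items)
```

An average of 0.76, as reported for the in-class tests, means roughly three quarters of examinees answered a typical item correctly.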
Online Testing Condition
The students in the online testing sample also took four exams, including one comprehensive examination. The tests differed slightly in
content due to differences between the courses. However, the tests were
also multiple-choice tests of similar length with an average difficulty index
of 0.74. e students in this sample took all exams in a secured computer-
based testing laboratory at their convenience, determining at which point
during a 7-day period they would complete the exam. Tests were proctored
by a laboratory assistant, who logged students onto the proper test and
ensured the security of the testing session. The computer-based testing
laboratory was accessible during the weekends, and until midnight every
day for student use. Students in this sample completed the test anxiety and
perceived test threat instruments no more than two days prior to taking
the test (completing the surveys online, with date stamping to ensure the
appropriate time lapse).
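The date-stamping check described above reduces to a timestamp comparison; a minimal sketch, assuming completion times are recorded as datetime values (the function name is hypothetical):

```python
from datetime import datetime, timedelta

def within_window(survey_completed, exam_taken, max_days=2):
    """True if the survey was finished no more than max_days
    before the exam, and not after it."""
    gap = exam_taken - survey_completed
    return timedelta(0) <= gap <= timedelta(days=max_days)
```

Responses falling outside the two-day window would be excluded under the timing rule the study describes.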
Online Formative Assessments
For both semesters, online practice tests1 were made available to
students after the second exam, as an additional test preparation option.
The practice tests were announced in class as well as through the online
course management system. All practice tests were created to provide
related (but not identical) items for student preparation for the course
exams. ere were four practice tests offered to the students, with
each test providing no less than 10 items targeting one of the chapters
covered in the third course exam. Starting four weeks prior to the third
exam, students had freedom to access the practice tests at any time, as
many times as desired.
The results are organized to present the analyses centering on the
two primary questions. First, is there a meaningful difference between
the paper-based and online-testing groups in test perceptions and
performance? Second, what unique contribution to student performance
does using online practice tests provide when simultaneously accounting
for prior performance and test perceptions?
Online vs. In-class Summative Assessment
Given Bocij & Greasley’s (1999) finding that performance gains
observed in computer-based testing conditions did not occur for the higher-
ability students, the participants in this study were split into three groups
based on performance on the first two exams (which occurred prior to col-
lection of any data for this study). Using the students’ mean performance
levels on the first two exams, quartile splits were established. The top 25%
were considered the high-scoring group, the bottom 25% were the low-
scoring group, and the middle 50% were the average-scoring group. Using
this contrived grouping system, a 3 × 2 multivariate analysis of variance
was conducted, examining the main effects and interaction of the inde-
pendent variables: prior performance (high, average, low) and assessment
format (paper, online) on the dependent measures cognitive test anxiety,
emotionality, perceived test threat, study skills, and quiz usage. The
results of the MANOVA revealed significant main effects for both prior
performance, F(10, 294) = 4.08, p < .001, η² = .12, and assessment format,
F(5, 146) = 18.48, p < .001, η² = .39. The interaction effect was not
statistically significant, F(10, 294) = 1.25, p = .26, η² = .04. The absence of a
significant interaction does not confirm the finding by Bocij and Greasley
(1999) of differential benefits of online testing for high- and
low-ability students.
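The quartile split described above can be sketched in a few lines; the function name and data are illustrative, not taken from the study's materials:

```python
import numpy as np

def prior_performance_groups(mean_scores):
    """Assign each student a prior-performance group from the mean of the
    first two exams: bottom 25% = low, middle 50% = average, top 25% = high."""
    q1, q3 = np.percentile(mean_scores, [25, 75])
    return ["low" if s < q1 else "high" if s > q3 else "average"
            for s in mean_scores]
```

Students falling exactly on a quartile boundary land in the average group here; the article does not report how boundary ties were handled.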
Prior Test Performance Effects
Follow-up between-subjects analyses of variance revealed several
statistically significant effects. For simplicity, only significant effects are
presented. For the main effect of prior test performance, a statistically
significant difference was observed for the following dependent variables:
(a) cognitive test anxiety, F(2, 150) = 10.90, p < .001, η² = .13; (b)
perceived test threat, F(2, 150) = 7.14, p < .001, η² = .08; and (c) quiz use,
F(2, 150) = 4.38, p < .02, η² = .06. Examination of the means in Table 1
illustrates the results of Scheffé’s post-hoc analyses (all ps < .05), which
demonstrated that (a) low-scoring students held significantly higher
levels of cognitive test anxiety than both the average- and high-scoring
students; (b) low-scoring students held higher levels of perceived test threat
than the high-scoring students; and (c) more students in the high-scoring
group reported using the practice tests than students in the average-score
group. Note that although the differences are all statistically significant,
the effect sizes are weak.
Table 1: Means and Standard Deviations on Test Perception and Preparation
Measures: Assessment Format and Prior Performance

                                     Prior Test Performance
                              Low             Average         High
Paper-Based Testing         (n = 17)         (n = 30)        (n = 18)
  Cognitive Test Anxiety a  80.41 (12.04)    70.10 (16.26)   65.72 (15.69)
  Emotionality b            17.65 (4.83)     16.97 (6.12)    17.39 (5.28)
  Perceived Test Threat c   56.53 (5.35)     53.20 (7.18)    52.72 (6.52)
  Study Skills Scale d      17.65 (5.18)     18.87 (5.18)    20.83 (5.22)
  Quiz use e                  .65 (.49)        .40 (.49)       .44 (.51)
Online Testing              (n = 24)         (n = 44)        (n = 23)
  Cognitive Test Anxiety    74.33 (16.73)    71.23 (13.16)   58.70 (13.26)
  Emotionality              18.00 (7.46)     18.11 (7.00)    15.74 (5.57)
  Perceived Test Threat     48.29 (5.17)     46.41 (4.29)    42.48 (6.04)
  Study Skills Scale        20.50 (6.33)     21.14 (5.02)    22.04 (3.77)
  Quiz use                    .63 (.49)        .43 (.50)       .87 (.34)

Notes: a Possible score range is 27 to 108.
       b Possible score range is 10 to 40.
       c Possible score range is 18 to 72.
       d Possible score range is 8 to 32.
       e Quiz use is determined by a dummy-code of 0 = “no” and 1 = “yes.”
         Higher scores indicate a greater percentage of the group using the quizzes.
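As note e explains, quiz use is a 0/1 dummy code, so a group's mean is simply the proportion of students who used the quizzes. A minimal illustration with made-up responses:

```python
# Hypothetical quiz-use responses for five students (1 = "yes", 0 = "no").
quiz_use = [1, 0, 1, 1, 0]

# The mean of a dummy-coded variable equals the proportion answering "yes".
proportion_using = sum(quiz_use) / len(quiz_use)  # 0.6, i.e., 60% used them
```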
Testing Format Effects
Between-subjects analyses for the main effect of testing format revealed
significant differences for (a) perceived test threat, F(1, 150) = 76.68,
p < .001, η² = .34, and (b) self-reported study skills, F(2, 150) = 5.90, p < .02,
η² = .04. The means displayed in Table 1 reveal that students in the online
testing group had meaningfully lower levels of perceived test threat. The
results also demonstrate that the weak effect size for self-reported study
skills favored the online testing group.
A separate univariate analysis of covariance was conducted to examine
the effect of online testing on Test 3 performance, using the average
performance level on Test 1 and Test 2 as the covariate. The results revealed
no significant difference based on the format of the test administration,
F(1, 172) = .07, p = .79, η² = .00.
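The covariance analysis reported here can be approximated as a model-comparison F-test: regress Test 3 scores on a format dummy plus the prior-performance covariate, then compare against the covariate-only model. This is a generic sketch with simulated scores, not the authors' analysis code:

```python
import numpy as np

def ancova_f(test3, fmt, prior):
    """F-statistic for assessment format (0 = paper, 1 = online) on Test 3,
    controlling for mean performance on Tests 1-2 as a covariate."""
    test3, fmt, prior = map(np.asarray, (test3, fmt, prior))
    n = len(test3)

    def rss(X):  # residual sum of squares of a least-squares fit
        beta, *_ = np.linalg.lstsq(X, test3, rcond=None)
        resid = test3 - X @ beta
        return resid @ resid

    full = np.column_stack([np.ones(n), fmt, prior])
    reduced = np.column_stack([np.ones(n), prior])
    # One numerator df (the format dummy); n - 3 denominator df.
    return (rss(reduced) - rss(full)) / (rss(full) / (n - 3))

rng = np.random.default_rng(0)
prior = rng.normal(75, 10, 60)
fmt = np.tile([0, 1], 30)
test3 = 0.8 * prior + rng.normal(0, 5, 60)  # no true format effect
f_null = ancova_f(test3, fmt, prior)        # small F, as in the study
```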
The Role of Practice Testing in the Learning-Testing Cycle
The first indirect test of the efficacy of online practice tests was through
student self report. For both semesters, a subset of the participants
provided ratings of the usefulness of the online practice tests by responding
to the statement, “I found the online quizzes to be helpful in preparation
for the exam.” Only six of the 64 students who responded to this
Likert-scaled item disagreed with the statement (41 “agree”; 17 “strongly
agree”). Chi-square analyses revealed no differential rates of endorsing the
statement based on method of summative assessment, χ²(3, N = 64) =
2.64, p > .05.
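The reported statistic can be checked against the χ² distribution: with χ² = 2.64 on 3 degrees of freedom, the p-value is far above .05, consistent with no differential endorsement across formats:

```python
from scipy.stats import chi2

# Survival function gives P(X > 2.64) for a chi-square with df = 3.
p_value = chi2.sf(2.64, df=3)  # well above the .05 threshold
```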
Only the online summative assessment group provided data regarding
the total number of uses for the practice quizzes (recall that the paper
assessment group provided only nominal data indicating use or no-use).
Therefore, the remaining analyses focusing on the influence of practice
testing on the learning-testing cycle are restricted to the online
summative assessment group. This has the additional benefit of eliminating
the confound of having differing formats for the practice (online) and
summative (paper) assessments.
The data presented in Table 2 demonstrate a complex relationship
among the various constructs of perceived test threat, cognitive test
anxiety, performance, and study strategies. The addition of the online
practice quizzes for only the third course exam provided a unique
context for students’ test preparation that had not been available in
previous exams. Initial ANOVA-based analyses revealed no consistent
pattern of impact for the online practice quizzes on outcomes for the third
exam, when using prior test performance as a covariate. However, it is
clear from earlier analyses that those students who are likely to use the
quizzes differ from those who are not, presenting a condition that cannot
be easily interpreted through standard ANOVA. Given the complexity of
the relationships among these variables in the learning-testing cycle, more
detailed examination with structural equation modeling was employed
to investigate the unique influence of practice tests on perceptions and performance.
Table 2: Intercorrelation Matrix for the Online Testing Group (n = 91)

Variable                               1      2      3      4      5      6      7
1. Exam 1 Performance
2. Exam 2 Performance                .52**
3. Exam 3 Performance                .38**  .32**
4. Cognitive Test Anxiety           -.40** -.40** -.12
5. Emotionality                     -.10   -.22   -.11   .69**
6. Perceived Test Threat            -.43** -.36** -.15  -.48**  .30**
7. Number of Practice Quizzes Used   .16    .19    .25*  -.07    .02   -.03
8. Study Skills and Habits           .11    .09    .14   -.07   -.03   -.31    .01

Notes: * p < .01
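Pearson correlations such as those in Table 2 can be reproduced for any pair of score vectors; a generic sketch with hypothetical scores (not the study's data):

```python
import numpy as np

# Hypothetical paired exam scores for a handful of students.
exam2 = np.array([72.0, 85.0, 64.0, 90.0, 78.0])
exam3 = np.array([70.0, 88.0, 61.0, 93.0, 74.0])

# np.corrcoef returns the full correlation matrix; [0, 1] is r(exam2, exam3).
r = np.corrcoef(exam2, exam3)[0, 1]
```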
We created two viable models based on the extant research involving
test perceptions, preparation, and performance. Both structural equation
models proposed that three latent variables provided direct effects on per-
formance on the third exam. These three variables (Test Perceptions, Past
Performance, and Test Preparation) also were modeled to influence one
another, which led to the primary difference between the two presented
models. Model A (Figure 1) rests on the proposition that Test Perceptions
is primarily a stable entity that has influence over upcoming and past
test performances. is proposition rests on the assumption that percep-
tions of tests develop over time and are likely to maintain stability across
one academic semester, as has been supported in earlier work with these
materials (Cassady, 2001a). Perceptions of tests were also hypothesized
to influence Test Preparation indirectly through Past Performance, and
have indirect influence on test performance through the other two latent
variables. Past Performance was hypothesized to be related directly to
Test Preparation and current test performance (also influencing current
performance indirectly through test preparation). The path linking Past
Performance to Test Preparation is consistent with the learning-testing
cycle framework. In that model, during the test reflection phase,
attributions accounting for success or failure in previous testing situations dictate
the types of preparation strategies that are selected. Furthermore, those
attributions are connected to the learner’s perceptions of tests in general
(see Cassady, 2004b).
Figure 1: Model A
Model B (Figure 2) differed by including an additional path leading
from prior test performance to test perceptions. The notion is that past
performances contribute to the overall level and orientation of beliefs
about tests, recognizing a bi-directional relationship between test
perceptions and performances in the past. This relationship is particularly
compelling in a condition such as the current study, where the Past
Performance variable is composed entirely of tests from the same course
as the outcome variable (i.e., Test 3).
[Figure 1: path diagram for Model A, with Test Anxiety and Test Threat as indicators of Test Perceptions, Test 1 and Test 2 as indicators of Past Performance, and Study Skill and Quizzes Used as indicators of Test Preparation, all predicting Test 3; displayed coefficients include -.56 (Test Perceptions to Past Performance) and .27 (Test Perceptions to Test 3).]
Figure 2: Model B
As demonstrated in Figures 1 and 2 and Table 3, with the exception of
the addition of the path from Past Performances to Test Perceptions that
appears only in Model B, the estimates for the paths are identical for the
two models. Most effect sizes (path coefficients) were moderate to low. Past
Performance had a greater direct effect on scores on Test 3 than did either
Test Perceptions or Test Preparation. Test Perceptions had a moderate
effect on Past Performance as did Past Performance on Test Preparation.
The indirect effect of Test Perceptions through Past Performances on Test
Preparation was small. Small indirect effects on the Test 3 scores were also
noted for Test Perceptions, as modeled through both Past Performance
and Test Preparation.
[Figure 2: path diagram for Model B, identical to Model A except for an added path from Past Performance to Test Perceptions; displayed coefficients include -.50 (Test Perceptions to Past Performance), -.09 (Past Performance to Test Perceptions), and .27 (Test Perceptions to Test 3).]
Table 3: Model Comparison Data

                                         Model A    Model B
Direct Effects
  Test Perceptions – Test 3                .27        .27
  Test Perceptions – Past Performance     -.56       -.50
  Test Preparation – Test 3                .25        .25
  Past Performance – Test Perception                 -.09
  Past Performance – Test Preparation      .48        .48
  Past Performance – Test 3                .58        .58
Indirect Effects
  Test Perception – Test Preparation      -.27       -.24
Total Effects
  Test Perception – Test 3               -.117      -.075
  Past Performance – Test 3               .700       .707
  Test Preparation – Test 3               .248       .248
Fit Statistics
  χ²(18)                                 30.40      30.40
  p                                        .03        .03
  χ²/df (ratio)                           1.69       1.69
  TLI                                      .88        .88
  CFI                                      .92        .92
  PCFI                                     .59        .59
  RMSEA                                    .09        .09
  AIC                                    66.40      66.40
Following established criteria for model comparisons (Gridley, 2002),
the fit statistics for the two models are identical (Table 3). The addition of
a path from Past Performances to Test Perceptions in addition to the one
from Test Perceptions to Past Performances does not significantly modify
the statistical explanations available in the models. Therefore, there are
no differences between the models in their ability to fit the data. While
parsimony would suggest adopting Model A, Model B provides a more
theoretically tenable solution given the acknowledgement of the influence
of past performances on the formation of test perceptions. In essence,
Model B illustrates that although Test Perceptions and Past Performance
exert influence upon one another, the downward path in both models
is dominant.
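The total effects in Table 3 follow from standard path tracing: a total effect is the direct path plus the products of the coefficients along each indirect route. Using the Model A coefficients reported in Table 3:

```python
# Model A path coefficients from Table 3.
tp_t3 = 0.27      # Test Perceptions -> Test 3 (direct)
tp_pp = -0.56     # Test Perceptions -> Past Performance
pp_t3 = 0.58      # Past Performance -> Test 3
pp_prep = 0.48    # Past Performance -> Test Preparation
prep_t3 = 0.25    # Test Preparation -> Test 3

# Two indirect routes from Test Perceptions to Test 3: via Past
# Performance, and via Past Performance then Test Preparation.
indirect = tp_pp * pp_t3 + tp_pp * pp_prep * prep_t3
total_tp = tp_t3 + indirect           # about -.12; Table 3 reports -.117
total_pp = pp_t3 + pp_prep * prep_t3  # .70, matching Table 3
```

The small discrepancy for the Test Perceptions total reflects rounding of the published coefficients.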
The intriguing finding from the models in this study highlights the
potential impact of the online practice quizzes. The direct effects of
Test Perceptions on Test 3 performance and on Past Performance confirm
prior results demonstrating an overall impact of test perceptions,
specifically cognitive test anxiety, on test performance levels. However,
in the unique testing situation under investigation in this study, that is, a
testing condition accompanied by online practice quizzes, examination
of the total effects indicated that the standard negative influence of Test
Perceptions was no longer prevalent.
The rapid growth in using the Internet to deliver course materials,
including assessment measures, has opened a new branch of research in
effective instructional practice (Wheeler, 2000). However, to date there
has been limited information examining the learning benefits gained
through systematic use of these online instructional tools (Buchanan,
1998; 2000). Structured around the established framework of the learning-
testing cycle and the broad base of research on the impact of testing condi-
tions on students with test anxiety, this study begins to answer fundamental
questions regarding the utility of online testing practices, and has doc-
umented specific benefits of providing both formative and summative
assessments online.
Online Summative Assessment
Our results provide no evidence that online testing induces
additional anxiety or harms performance levels. However, it is important
to recognize these results should not be overgeneralized to all undergrad-
uate students; all participants in this study were involved in courses that
required frequent use of the Internet to access course materials and
information. This systematic access to technology tools and materials likely
facilitated any adjustment students needed to make to use online evalu-
ative materials. It is improbable that students with lower levels of online
experience would have similar comfort levels, and the level of emotion-
ality and anxiety may be expected to rise for students without systematic
exposure to computer-based instructional processes (Cassady, 2001a).
The only meaningful difference reported by students in the two testing
conditions was the heightened level of perceived threat reported by
students taking tests on paper. We propose this outcome was mostly
influenced by the lack of personal control over the testing events (Boggiano
& Ruble, 1986; Butler, 2003). Given the flexibility afforded by the secure
computer-based testing laboratories, the online testing group was
permitted to complete each test over the course of an entire week, including
evenings and weekends. This led to anecdotal reports from the students
that they enjoyed being able to take tests on “light” days. This ability to
schedule the tests seemed to allow the students to reduce the level of
contextual stress by strategically placing their testing times in convenient
time slots. For the students taking tests during assigned times, there
was no ability to choose what day would work best with their schedules.
These students frequently reported they had several other assignments
or tests during the same day or week that the test was given. As many
students have reported, “everything is due at the same time.” Thus, while the
students reported great satisfaction in their level of choice in testing, this
benefit of online assessment resulted in a confound in these analyses; it is
impossible with the current data to determine whether the reduced test threat
in the online condition is simply due to the ability to choose testing
time. However, even as a confound, this condition of flexible timing for
testing is more easily achieved in online testing given logistic concerns.
The data suggest that providing tests online in a secure, proctored
computer-based testing laboratory may not simply provide a reason-
able alternative method for gathering summative assessment data from
students, but may actually be a preferable method. In addition to lower
levels of perceived test threat and the obvious benefits of ease in scoring or
test delivery, online testing can also provide increased instructional time.
In our case, the gains in instructional time were a by-product of delivering
the tests outside of the confines of class meeting rooms and sessions.
e use of online testing produced approximately 4.5 additional hours of
instructional time, as compared to in-class testing. This additional time
was gained by replacing three 75-minute class periods formerly reserved
for testing (total time = 3.75 hours) as well as an additional 15 minutes
per test for returning corrected tests and providing the correct answers,
which was administered automatically through the online testing module
(conservative estimate; total time = 4.5 hours).
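The time savings described above reduce to simple arithmetic (values taken from the text):

```python
CLASS_PERIOD_MIN = 75   # one 75-minute class period per in-class exam
FEEDBACK_MIN = 15       # per-test time to return and review corrected tests
N_EXAMS = 3

testing_hours = N_EXAMS * CLASS_PERIOD_MIN / 60                 # 3.75 hours
total_hours = N_EXAMS * (CLASS_PERIOD_MIN + FEEDBACK_MIN) / 60  # 4.5 hours
```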
The only noted barriers to effective assessment in an online
environment are the standard logistical concerns. First, as more instructors
become proficient with online testing, labs become stressed to meet
the need for testing. This institutional barrier warrants considerable
attention due to the expense associated with creating and maintaining
additional testing laboratories that can be monitored. Second, some
students struggled with responding on screen rather than on paper. In
particular, some students found it hard to keep track of items they had
skipped over to come back to later. The standard solution to this barrier
has been to suggest that all students bring blank paper to work with during
the test period. Recent advancements in online testing programs have also
helped to alleviate this problem by providing reminders to test takers when
an item has been left unanswered before closing the testing session. Third,
students in the online testing condition were not able to ask questions of
the instructor during the assessment period. Losing the ability to clarify
questions with the instructor prior to responding is a barrier highlighted
by a few students who describe question-asking during the test as a coping
behavior they periodically employ during testing. Finally, testing security
is a constant concern in online testing. Use of secure testing facilities and
software solutions that can randomize pre-selected equivalent content
items helps combat these concerns. Just as instructors have to be consci-
entious in overcoming the “fraternity test file” from previous semesters
with paper-based testing, instructors using online assessments need to
monitor the test conditions to preserve the integrity of assessment.
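Randomizing pre-selected equivalent items, as mentioned above, amounts to drawing one item from each pool of interchangeable questions so every student receives a comparable but non-identical form. A generic sketch (names are illustrative, not the software used in the study):

```python
import random

def build_test_form(item_pools, seed=None):
    """Draw one item from each pool of pre-screened equivalent items,
    yielding a unique but content-comparable test form per student."""
    rng = random.Random(seed)
    return [rng.choice(pool) for pool in item_pools]

pools = [
    ["item_1a", "item_1b", "item_1c"],  # equivalent items for objective 1
    ["item_2a", "item_2b"],             # equivalent items for objective 2
]
form = build_test_form(pools, seed=42)  # one item per objective
```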
Online Formative Assessment
Previous studies have discussed the availability of online formative
assessment tools (Buchanan, 1998; 2000); however, no data have been
available demonstrating the overall impact on students’ performances
or perceptions of testing events. Students overwhelmingly reported that
they found the online formative assessment tools (practice quizzes/tests)
to be useful in preparation for the exam. Although student perceptions
of utility are important in determining the impact of practice tests on
the learning-testing cycle, particularly when taking the impact of cog-
nitive test anxiety and perceived threat into account (Cassady, 2004b),
the contribution of this study comes from the results generated in our
exploration of the relationships among test perceptions, test preparation,
and prior performance variables.
The small but positive impact of practice test use on subsequent course
examination performance provides preliminary evidence that online
practice tests can serve as an effective test preparation strategy. e
data in this study support the pattern of results predicted by the testing
phenomenon (Glover, 1989), where the completion of a realistic testing
event can promote performance on subsequent assessment tasks. In
addition, the similarity between the formative and summative assessment
tools in function, difficulty, and format likely facilitated the transfer of
content information or contextual cues from the practice setting to the
final performance session, which should aid recall of the target informa-
tion (McDaniel et al., 1989; Roediger & Guynn, 1996).
The formative assessment generator used in this study also provided
the pedagogically desirable method of immediate post-test feedback
(King & Behnke, 1999; Wise et al., 1989). The feedback process is
accomplished through a separate pop-up browser window. This allows the user
to simultaneously view the corrective feedback and the original question,
promoting the user’s ability to modify existing cognitive structures and
retrieval cues.
With respect to the learning-testing cycle, the addition of online quizzes
to learners’ test preparation strategies provided a unique structured study
tool that helped to alleviate the overall effect of Test Perceptions on Test
3 performance. In repeated studies of cognitive test anxiety and perfor-
mance, there has been a stable and definite trend documenting a signifi-
cant negative relationship for students from undergraduate populations
(Cassady, 2004a; 2004b; Cassady & Johnson, 2002; Cassady et al., 2004).
This trend was repeated in this sample as well for the first two course
examinations, for which there were no practice tests available. However,
as shown in Table 2, there was no significant correlation between Test 3
performance and cognitive test anxiety or perceived test threat. Indeed,
only prior test performances and the use of the practice tests were signifi-
cantly related to Test 3 performance. As illustrated in Figure 2 (Model B),
although Test Perceptions continue to have influence on the overall model,
the influence in this unique condition appears to be in driving the learner
toward a more useful study strategy (practice tests) that nullifies the
standard effects of test perception.
It is essential to stress that the benefits seen for those students using
the formative assessment quizzes were not likely a mere consequence of
delivery method. We predict that all benefits observed in this study would
be replicated with paper-pencil practice tests, provided they matched
the actual tests in format and difficulty level. The unique contributions
provided by the QuizEditorJS software used in this study rest in the
primary benefits afforded through computerized delivery of assessment:
greater student access, flexibility, ease of constructing the assessment
tools, and immediate formative feedback (Bransford, Brown, & Cocking,
1999; Buchanan, 2000; Dempster & Perkins, 1993). Allowing students
to freely access practice tests and receive immediate corrective feedback
provides personal control over test preparation. This method of delivery
also has benefits over the standard in-class short quiz approach in that
students can repeatedly access a variety of different practice tests.
Limitations and Future Directions
Naturally, conducting research with convenience samples in
naturally occurring educational settings introduces multiple threats to
external validity; replication studies should vary these conditions to
confirm the effects are not situation-specific. The primary limitation in
this study is the small sample size, particularly in the online testing sample
upon which the bulk of the formative assessment data analyses (i.e., SEM)
are based. e small sample size harms the power for all analyses, which
naturally affects significance testing, but more importantly provides
concern for the stability of the two models. Additional participants in
the present study would have enabled more detailed analyses of the
contributing factors leading to the positive effects associated with the
practice quizzes. In particular, we are interested in exploring which
students are most likely to access the quizzes and what role success or failure
on initial attempts with practice quizzes has on repeated attempts.
The presence of confounded variables also needs to be controlled in
future investigations. First, the individual’s control over the timing of
the test administration is likely to influence the perceived level of cog-
nitive test anxiety and perceived test threat. To address this concern,
providing the on-paper group with the option to take the test at any point
in a given time frame would control the confounding variable.
The second confound in our study is that all practice tests were
provided online. Does presentation format of the practice quizzes matter?
Most textbook publishers provide student study guides for core under-
graduate course textbooks that include practice test items. Would the
same benefits be granted with use of these materials? The limitations
of this study preclude a definitive answer; however, we propose that the
presentation format likely does matter. Specifically, the issue of impor-
tance is a positive match in presentation format between the formative
and summative assessments. It is a well-established effect that memory
performance is improved in conditions where retrieval cues sparked in
the testing condition are more consistent with the cues available during
encoding (Roediger & Guynn, 1996; Tulving & Thomson, 1973), or
provide more specific “diagnostic” information that facilitates reconstruc-
tion of the target content (Nairne, 2002a; 2002b).
A third confounding condition that could be controlled in future
investigations is related to the comparison of the online and paper-
based testing conditions. In our study, the paper-based class received
fewer instructional periods given their in-class testing requirement. It is
possible that the effects in this study are influenced by the different
amount of instructional time.
A final limitation to this study is the absence of an attributional
measure following testing which would complete the analysis of the
learning-testing cycle by providing information on the test reflection
phase. Although our models address this phase indirectly as described
earlier, empirical verification is desirable.
1 The formative assessment tool used in this study was QuizEditorJS, which was
designed, coded, and debugged at Ball State University by Wayne K. Mock,
Multimedia Development Coordinator in the Center for Teaching Technology,
Office of Teaching and Learning Advancement and Jon L. Weiss, Lead Micro
Analyst/CWIS Coordinator in University Computing Services. The unique features
of QuizEditorJS are immediate post-performance feedback delivery, privacy
of feedback (only the student taking the quiz sees the performance report in a
separate pop-up window), simplicity of the question-generation interface, and
a cross-platform design. Available online:
References

Bandura, A. (1986). Social foundations of thought and action:
A social-cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bocij, P. & Greasley, A. (1999). Can computer-based testing achieve
quality and efficiency in assessment? International Journal
of Educational Technology, 1(1), 17 pages. Available online:
(last accessed November 5, 2003).
Boggiano, A. K. & Ruble, D. N. (1986). Children’s responses to
evaluative feedback. In R. Schwarzer (Ed.) Self-related cognitions
in anxiety and motivation (pp. 195–228). Hillsdale, NJ: LEA.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn:
Brain, mind, experience, and school. Washington, DC: National
Academy Press.
Buchanan, T. (1998). Using the World Wide Web for formative
assessment. Journal of Educational Technology Systems, 27(1), 71–79.
Buchanan, T. (2000). Potential of the Internet for personality research.
In M.H. Birnbaum (Ed.) Psychological experiments on the Internet.
San Diego: Academic Press.
Butler, D. L. (2003). The impact of computer-based testing
on student attitudes and behavior. The Technology Source,
January/February. Available online:
Carrier, M. & Pashler, H. (1992). The influence of retrieval on retention.
Memory & Cognition, 20, 633–642.
Cassady, J. C. (2001a). Integrating technology instruction in
pre-professional training programs. Trainer’s Forum, 19(3), 1–2; 8–10.
Cassady, J. C. (2001b). The stability of undergraduate students’ cognitive
test anxiety levels. Practical Assessment, Research & Evaluation, 7(20).
Available online:
Cassady, J. C. (2004a). The impact of cognitive test anxiety on text
comprehension and recall in the absence of salient evaluative
pressure. Applied Cognitive Psychology, 18(3), 311–325.
Cassady, J. C. (2004b). The influence of cognitive test anxiety across the
learning-testing cycle. Learning and Instruction, 14(6), 569–592.
Cassady, J. C. & Johnson, R. E. (2002). Cognitive test anxiety and
academic performance. Contemporary Educational Psychology, 27,
Cassady, J. C., Mohammed, A., & Mathieu, L. (2004). Cross-cultural
differences in test anxiety: Women in Kuwait and the United States.
Journal of Cross-Cultural Psychology, 35(6), 715–718.
Clariana, R. B., Ross, S. M., & Morrison, G. R. (1991). The effects
of different feedback strategies using computer-administered
multiple-choice questions as instruction. Educational Training,
Research, and Development, 39, 5–17.
Covington, M. V. & Omelich, C. L. (1987). “I knew it cold before the
exam”: A test of the anxiety-blockage hypothesis. Journal of
Educational Psychology, 79, 393–400.
Culler, R. E. & Holohan, C. J. (1980). Test anxiety and academic
performance: e effects of study-related behaviors. Journal of
Educational Psychology, 72, 16–26.
Deffenbacher, J. L. (1980). Worry and emotionality in test anxiety.
In I. G. Sarason (Ed.) Test anxiety: Theory, research, and applications
(pp. 111–124). Hillsdale, NJ: Lawrence Erlbaum.
Dempster, F. N. (1997). Using tests to promote classroom learning. In
R. F. Dillon (Ed.) Handbook of testing (pp. 332–346). Westport, CT:
Greenwood Press.
Dempster, F. N. & Perkins, P. G. (1993). Revitalizing classroom
assessment: Using tests to promote learning. Journal of Instructional
Psychology, 20, 197–203.
Duchastel, P. (1996). A Web-based model for university instruction.
Journal of Educational Technology Systems, 25, 221–228.
Geen, R. G. (1980). Test anxiety and cue utilization. In I.G. Sarason (Ed.)
Test anxiety: eory, research, and applications (pp. 43–62). Hillsdale,
Glover, J. A. (1989). The “testing” phenomenon: Not gone but nearly
forgotten. Journal of Educational Psychology, 81, 392–399.
Gridley, B. E. (2002b). In search of an elegant solution: Reanalysis of
Plucker, Callahan, and Tomchin, with respects to Pyryt and Plucker.
Gifted Child Quarterly, 46, 224–234.
Hembree, R. (1988). Correlates, causes, and treatment of test anxiety.
Review of Educational Research, 58, 47–77.
Jongekrijg, T. & Russell, J. D. (1999). Alternative techniques for
providing feedback to students and trainees: A literature review
with guidelines. Educational Technology, 39(6), 54–58.
Ikeda, M., Iwanga, M., & Seiwa, H. (1996). Test anxiety and working
memory system. Perceptual and Motor Skills, 82, 1223–1231.
King, P. E. & Behnke, R. R. (1999). Technology-based instructional
feedback intervention. Educational Technology, 39(5), 43–49.
Kurosawa, K. & Harackiewicz, J. M. (1995). Test anxiety, self-awareness,
and cognitive interference: A process analysis. Journal of Personality,
63, 931–951.
Liebert, R. M. & Morris, L. W. (1967). Cognitive and emotional
components of test anxiety: A distinction and some initial data.
Psychological Reports, 20, 975–978.
McDaniel, M. A., Kowitz, M. D., & Dunay, P. K. (1989). Altering memory
through recall: e effects of cue-guided retrieval processing. Memory
& Cognition, 17, 423–434.
Morris, L. W., Davis, M. A., & Hutchings, C. H. (1981). Cognitive and
emotional components of anxiety: Literature review and a revised
worry-emotionality scale. Journal of Educational Psychology, 73,
Nairne, J. S. (2002a). The myth of the encoding-retrieval match.
Memory, 10, 389–395.
Nairne, J. S. (2002b). Remembering over the short-term: The case against
the standard model. Annual Review of Psychology, 53, 53–81.
Naveh-Benjamin, M., McKeachie, W. J., & Lin, Y. (1987). Two types of
test-anxious students: Support for an information processing model.
Journal of Educational Psychology, 79, 131–136.
Roediger, H. L. & Guynn, M. J. (1996). Retrieval processes. In E. C.
Carterette & M. P. Friedman (Series Eds.) & E. L. Bjork & R. A. Bjork
(Vol. Eds.), Handbook of perception and cognition (2nd ed.): Memory.
San Diego, CA: Academic Press.
Sarason, I. G. (1984). Stress, anxiety, and cognitive interference:
Reactions to tests. Journal of Personality and Social Psychology, 46.
Sarason, I. G. (1986). Test anxiety, worry, and cognitive interference.
In R. Schwarzer (Ed.) Self-related cognitions in anxiety and motivation
(pp. 19–34). Hillsdale, NJ: LEA.
Schutz, P. A. & Davis, H. A. (2000). Emotions and self-regulation during
test taking. Educational Psychologist, 35, 243–256.
Schwarzer, R. (1984). Worry and emotionality as separate components in
test anxiety. International Review of Applied Psychology, 33, 205–220.
Schwarzer, R. & Jerusalem, M. (1992). Advances in anxiety theory:
A cognitive process approach. In K. A. Hagtvet & T. B. Johnsen
(Eds.), Advances in test anxiety research (Vol. 7, pp. 2–31). Lisse, The
Netherlands: Swets & Zeitlinger.
Tulving, E. & Thomson, D. (1973). Encoding specificity and retrieval
processes in episodic memory. Psychological Review, 80, 352–373.
Wheeler, S. (2000). Instructional design in distance education through
telematics. Quarterly Review of Distance Education, 1(1), 31–44.
Wise, S. L., Plake, B. S., Eastman, L. A., Boettcher, L. L., & Luken,
M. E. (1986). e effects of item feedback and examinee control
on test performance and anxiety in a computer-administered test.
Computers in Human Behavior, 2, 21–29.
Wise, S. L., Plake, B. S., Pozehl, B. J., Barnes, L. B., & Luken, M. E. (1989).
Providing item feedback in computer-based tests: Effects of initial
success and failure. Educational and Psychological Measurement, 49.
Wittmaier, B. C. (1972). Test anxiety and study habits. The Journal of
Educational Research, 65, 352–354.
Zeidner, M. (1998). Test anxiety: The state of the art. New York:
Plenum Press.
Author Biographies
Jerrell C. Cassady is Associate Professor of Psychology in the Department
of Educational Psychology at Ball State University. His research
interests include test anxiety, student learning, and the influence
of technology on learning and education for students of all ages.
In addition to his research, Dr. Cassady serves as an evaluation
consultant to several projects examining programs designed to
improve learning environments in schools. He also serves as
co-editor of The Teacher Educator, an international peer-reviewed
journal focused on practices for enhancing teacher training.
Betty E. Gridley is Professor of Psychology-Educational Psychology
at Ball State University. She directs the MA/EdS programs in school
psychology. Her current teaching and research interests focus on
assessment and multivariate statistics, particularly as applied to
instrument validation. For over 20 years her varied research projects
have included exceptional learners ranging from those with high
abilities to those with attention and learning problems.
Technology and Assessment Study Collaborative
Caroline A. & Peter S. Lynch School of Education, Boston College
Editorial Board
Michael Russell, Editor
Boston College
Allan Collins
Northwestern University
Cathleen Norris
University of North Texas
Edys S. Quellmalz
SRI International
Elliot Soloway
University of Michigan
George Madaus
Boston College
Gerald A. Tindal
University of Oregon
James Pellegrino
University of Illinois at Chicago
Katerine Bielaczyc
Harvard University
Larry Cuban
Stanford University
Lawrence M. Rudner
University of Maryland
Mark R. Wilson
UC Berkeley
Marshall S. Smith
Stanford University
Paul Holland
Randy Elliot Bennett
Robert J. Mislevy
University of Maryland
Ronald H. Stevens
Seymour A. Papert
Terry P. Vendlinski
Walt Haney
Boston College
Walter F. Heinecke
University of Virginia
The Journal of Technology, Learning, and Assessment