The Fingertip Effects of Computer-based Assessment in Education

By Hong Lin and Francis Dwyer
According to the Committee on the Foundations of Educational Assessment, traditional educational assessment does a reasonable job of measuring knowledge of basic facts, procedures and proficiency in an area of the curriculum. However, the traditional approach fails to capture the breadth and richness of knowledge and cognition (Pellegrino, Chudowsky, & Glaser, 2001). Such a concern arises because traditional assessment practices generally focus on assessing whether a student has acquired the content knowledge, but they often fail to assess the learning process and higher-order thinking skills (Baek, 1994; Bahr & Bahr, 1997). Dede (2003) concludes that current practices of educational assessment are “based on mandating performance without providing appropriate resources, then using a ‘drive by’ summative test to determine achievement” (p. 6).
At a time when traditional assessment is under increasing scrutiny and criticism, the nation is placing greater expectations on the potential role of the computer in educational assessment. It is anticipated that the appropriate use of computer technology would help enhance assessment at multiple levels of practice by incorporating ongoing and multiple assessment strategies into the learning process. Given this possibility, it is timely to review current computer-based assessment practices in educational settings. Furthermore, a review of some emerging assessment tools that incorporate interactive multimedia can also deepen our understanding of the role that computer technology plays in assessment.
Technology use in assessment and fingertip effects

Fingertip effects of computer technology
Computer technology has significantly changed the curriculum and teachers’ instructional practices. It has also changed the way students construct and demonstrate their knowledge and skills. These changes “in turn are stimulating people to rethink what is assessed, how that information is obtained, and how it is fed back into the educational process in a productive and timely way” (Pellegrino, Chudowsky, & Glaser, 2001, p. 272).
Perkins (1985), a pioneering thinker who viewed computers as learning tools, pointed out that computer technology has “a valuable history of putting things at our fingertips to be seized and used widely for their designed objectives as well as for other purposes” (p. 11). However, he warned that the opportunities provided by computer technology are not always accepted in education. He further explained that computer technology actually has two orders of “fingertip effects.” The first order fingertip effects occur when a computer innovation changes “the way people do certain things without actually changing very much the basic aspirations, endeavors, or thinking habits
of a population” (Perkins, 1985, p. 11). For instance, unlike regular mail, email and instant messaging allow for faster communication with friends, relatives and business associates thousands of miles away. Another example is that computer-based tests use built-in databases to automatically collect and compute data. In these instances, the first order fingertip effects of computer technology answer the question, “What could you do that you could not before?” Specifically, computer technology can help automate routine procedures quickly and accurately, thus improving productivity and efficiency.
The second order fingertip effects answer the question, “What difference will a computer really make to a person’s higher-order skills, i.e., decision making, reflection, reasoning and problem solving?” (Perkins, 1985, p. 11). Jonassen (2000) indicated that the second order effects should help “in the construction of generalizable, transferable skills that can facilitate thinking in various fields” (p. 18). In other words, computer technology should go beyond its automated function as a production tool; it should be used to promote higher-order skills. For example, Microsoft Excel is a spreadsheet tool that is useful for teachers as a grade book. By entering grades and applying one function, or a series of functions, a teacher can produce report cards very quickly, thus saving time at the end of a marking period. The spreadsheet data can also be turned into a graph so that the report card can be seen visually or graphically. The teacher can also look at the results and use them to write comments on student performance. In doing so, the teacher can reflect on his or her teaching, ask “what if” questions or help a struggling student. In this instance, having the computer do the menial tasks of averaging grades and displaying graphics constitutes the first order fingertip effects; reflective thinking and helping a struggling student learn constitute the second order.
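To make the distinction concrete, the short Python sketch below automates the grade-book arithmetic described above and then flags students who may need follow-up. It is a minimal illustration only: the names, scores and cutoff are invented, and any spreadsheet or script of this kind would do the same job.

```python
# Illustrative only: invented students, scores and cutoff.
from statistics import mean

gradebook = {
    "Alice": [92, 88, 95],
    "Ben": [61, 58, 70],
    "Chen": [78, 85, 80],
}

# First order fingertip effect: automate the menial averaging.
averages = {student: mean(scores) for student, scores in gradebook.items()}

# A step toward the second order: surface students who may need help, so the
# teacher can spend time reflecting on instruction rather than on arithmetic.
CUTOFF = 70
for student, average in sorted(averages.items(), key=lambda item: item[1]):
    flag = "  <- follow up" if average < CUTOFF else ""
    print(f"{student}: {average:.1f}{flag}")
```

The automation itself remains a first order effect; the second order effect appears only in what the teacher does with the output, such as asking “what if” questions about instruction or intervening with a struggling student.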
To summarize Perkins’ point, if computer technology fails to achieve its full potential, it has been used to achieve only the first order fingertip effects. It is the second order fingertip effects, the non-automatic and effortful processes, that establish the true value of computer technology. With this distinction in mind, the question becomes: which level of fingertip effects have current computer-based assessment tools achieved?
Limitations of computer technology
use in assessment
Computer applications in educational assessment are evident in test preparation, administration, scoring and reporting (Zenisky & Sireci, 2002). To this end, computer technology is often used to present test items and collect responses. Clearly, computer technology does a great job of automating the varying phases of the testing process, such as creating, storing, distributing and sharing test materials. The automation, especially in large-scale assessment such as that administered by the Educational Testing Service (ETS), can benefit both examiners and examinees in multiple ways.
Unlike most computer-assisted tests, Computer Adaptive Testing (CAT), which has been used and improved during the past 15 years, has noteworthy advantages over “fixed-item” tests. This adaptive approach to testing can update the estimate of an examinee’s ability after each test item and select the appropriate level of subsequent items for the examinee. In this way, student deficiencies and strengths can be quickly identified and addressed. Another innovative use of computer-based assessment can be seen in some computer simulation projects. In their project, Shavelson, Baxter and Pine (1992) required their students to replicate electric circuits by manipulating icons of batteries and wires presented on a Macintosh computer.
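As a rough illustration of the adaptive logic described above, and not of any particular CAT system, the following Python sketch selects each item near the current ability estimate and nudges the estimate up or down after every response. Operational CAT programs rely on item response theory; the item bank, probability model and update rule here are invented for illustration.

```python
# Toy adaptive testing loop; all numbers and rules are illustrative.
import random

random.seed(42)

ITEM_BANK = {level: f"item at difficulty {level}" for level in range(1, 11)}
TRUE_ABILITY = 7   # hidden examinee ability, used only to simulate responses
estimate = 5       # start the estimate in the middle of the scale

for step in range(8):
    level = max(1, min(10, estimate))                  # pick an item near the estimate
    p_correct = 1 / (1 + 2 ** (level - TRUE_ABILITY))  # easier items are answered correctly more often
    correct = random.random() < p_correct
    estimate += 1 if correct else -1                   # crude update; IRT would weight this properly
    print(f"step {step}: {ITEM_BANK[level]}, correct={correct}, new estimate={estimate}")

print(f"final ability estimate: {estimate}")
```

Because each item is chosen near the examinee’s estimated level, strengths and deficiencies surface after far fewer items than a fixed-form test would need.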
Obviously, CAT and computer simulations demonstrate a more sophisticated approach to testing, but these strategies are seldom implemented in teacher-made tests due to technical complexity and logistical problems. Instead, teachers often use computers to support such small-scale assessment tasks as creating traditional multiple-choice, fill-in-the-blank and short essay questions. It is worth pointing out that in both large- and small-scale assessments, computer technology is frequently used as a test preparation and production tool (Perkins’ first order fingertip effects) rather than as a learning tool to enhance higher-order skills (the second order fingertip effects).
Another observation about computer-based assessment is that computer technology is often seen by teachers as a “representation container” rather than as an effective assessment tool (Dede, 2003, p. 7). Specifically, computers allow students to create multimedia materials at any time and on demand and, as a result, make learning visual, mobile and fun. However, after
students finish such projects, it is not unusual for students’ competence in using computer software to be measured while their generic problem-solving competencies are ignored (Becker & Lovitts, 2003, p. 134). Obviously, it is much easier to assess computer literacy than the problem-solving process. However, when assessment strategies play only a secondary role and the projects’ real outcomes in generic problem solving are not assessed, chances are that computers are not being used to their full potential by teachers.
The third observation is that the design of computer-based assessment does not “adequately support human practices that produce meaningful information about student learning” (Hall, Knudsen, & Greeno, 1996, p. 316). Take the design of multiple-choice tests, for example. Multiple-choice questions are the most widely used format in computer-based assessments, but this format has been criticized for giving students no practice at expressing their thoughts and for not providing individual feedback or interaction regarding student performance. It is important to note that assessment approaches can easily replace one form of computer technology with another without really paying attention to human interactions. For example, portfolios are used in place of standardized examinations, yet little explicit attention is paid to the human interactions surrounding either the portfolios or the standardized examinations. In fact, effective assessment requires extensive interaction between examiners and examinees (Bahr & Bahr, 1997). Such interactions provide an opportunity for examiners to identify learning gaps and for examinees to moderate their learning.
Emerging technology assessment tools
A review of the related literature indicates that the current use of computer technology in educational assessment generally does not achieve Perkins’ (1985) second order fingertip effects. As discussed above, the second order should go beyond the automated functions of computer technology and extend to enhancing higher-order skills. Fortunately, although still in their early stages of development, some emerging prototype tools have demonstrated great potential to push computer-based assessment beyond mere test automation, “representation containers,” and designs that inadequately support human practices.
The SMART model provides one example. This computer-based learning tool for science and math concepts contains a variety of assessment strategies permeating a problem-based and project-based learning environment. Ongoing assessment is incorporated throughout the learning process in a way that allows computer technology to support student reflection. In addition, student work is evaluated through self-assessment and by peers, teachers and external agencies. In this way, teachers can identify deficiencies and strengths in student performance. Equally important, students can reflect on their learning process and improve their higher-level skills. Table 1 reviews a selection of other prototype tools in which assessment strategies are interwoven with the learning process.
Discussion and conclusions
Computer technology has revolutionized instruction and student learning, and it holds great promise for enhancing educational assessment. Although still in their early stages of development, computer-based assessment tools offer innovative approaches for documenting students’ learning processes, identifying learners’ deficiencies and strengths, and providing timely feedback. Such a promise cannot be realized without the cooperation of instructional technologists, teachers and schools.

It is true that computer technology is as powerful as it is seductive. It is easy for instructional technologists to get carried away and spend all their time designing scenarios and gathering complex data, only then to ask “How do we assess it?” (Mislevy, Steinberg, Almond, Haertel, & Penuel, 2003). When this happens, computer technology is not used to its maximum potential. With rapid advances in computer technology, the challenge for instructional technologists is to capture more complex performances in assessment settings. To design effective complex assessments, instructional technologists should read Messick’s (1994) discussion of computer-based simulations, portfolio assessments and performance tasks.

It is also true that designing advanced assessment prototype tools is beyond many teachers’ abilities. Nevertheless, the concept of Perkins’ second order fingertip effects can be applied to many classroom assessment routines, especially with the help of free internet resources.
Table 1. Prototype tools of computer-based assessment in education.

DIAGNOSER
Area: Key math and science concepts
Source: DIAGNOSER Tools website
Assessment strategies: This web-based program contains the following tools and strategies to assess student learning and provide feedback.
DIAGNOSER: Students receive ongoing feedback as they work through their assignment. Teachers receive a summary of student diagnoses.
Elicitation Questions: After students respond to the carefully constructed questions, the program can pinpoint areas of possible misunderstanding, give immediate feedback on reasoning strategies, and prescribe relevant instruction.
Developmental Lessons: These lessons open up the ideas elicited in the class discussion and help students test their initial ideas.
Prescriptive Activities: Teachers can use activities to target specific problematic ideas.

eduPortfolios
Area: Digital portfolio
Source: Ahn (2004)
Assessment strategies: This tool allows intimate interaction between students, teachers and other stakeholders. On the one hand, students can view and assess real student work and compare it against established learning standards. On the other hand, students are asked to write about how they understand the learning standards and how they meet those standards in their work. Afterwards, feedback from multiple teachers is attached to student portfolios and their reflections. In this way, students can see how their understanding matched or did not match their teachers’ understandings, and vice versa. This approach allows for a continual process of reflection, understanding and learning.

Summary Street
Area: Reading comprehension and writing skills
Source: Wade-Stein & Kintsch (2004)
Assessment strategies: Summary Street is educational software based on latent semantic analysis (LSA), a computer method for representing the content of texts. Students can prepare multiple drafts of a summary and receive content-based feedback. For example, the Redundancy Check performs a sentence-by-sentence comparison to flag sentences that appear to have overlapping content, and the Relevance Check compares each sentence with the original text and pinpoints sentences that have little or no relevance to the topic. (A simplified sketch of this kind of sentence-comparison check follows the table.) In addition, the content feedback is presented in a game-like, easy-to-grasp graphic display. In this way, students are more willing to repeat the cycles of rewriting and revision before submitting their final summaries to their teachers.

PROUST
Area: Programming problems in the Pascal language
Source: Johnson & Soloway (1985)
Assessment strategies: PROUST is considered a milestone in the field of intelligent tutoring systems. The system describes a diverse array of programming problems and the ways in which parts of each problem can be solved. Based on how people reason about computer programs, PROUST is designed to analyze a student’s program, identify strengths and weaknesses in the student’s work, and then present comments on that work. The software can identify not only syntax errors but also, more interestingly, errors in a student’s approach to solving a problem.
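The Summary Street entry above describes two automated checks built on latent semantic analysis. As a much-simplified stand-in, the sketch below uses plain word overlap (cosine similarity on bag-of-words counts) rather than LSA; the source text, summary sentences and thresholds are invented, and the point is only to show how sentence-by-sentence comparison can drive relevance and redundancy feedback.

```python
# Simplified illustration of relevance and redundancy checks; not LSA and not
# Summary Street itself. All texts and thresholds are invented.
import re
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two sentences' word-count vectors."""
    va = Counter(re.findall(r"[a-z]+", a.lower()))
    vb = Counter(re.findall(r"[a-z]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

source_text = "Plants use sunlight, water, and carbon dioxide to make their own food through photosynthesis."
summary = [
    "Plants make their own food using sunlight, water, and carbon dioxide.",
    "Plants use sunlight, water, and carbon dioxide to make food.",
    "My favourite plant is a cactus on my windowsill.",
]

# Relevance check: flag sentences with little connection to the source text.
for sentence in summary:
    if cosine(sentence, source_text) < 0.3:
        print("Low relevance:", sentence)

# Redundancy check: flag pairs of summary sentences that say nearly the same thing.
for i, first in enumerate(summary):
    for second in summary[i + 1:]:
        if cosine(first, second) > 0.6:
            print("Possible overlap:", first, "|", second)
```

A real system would add the game-like display and draft-by-draft tracking that make such feedback usable by students.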
For example, when incorporating computer-assisted tests into their existing practice, teachers can search for creative and effective methods of conducting testing and evaluation beyond the traditional multiple-choice, fill-in-the-blank and short essay questions (Khan, 1997). They can include web-based group discussions and e-portfolio development to evaluate students’ progress. They can allow students to submit comments and reflections about their project design and delivery activities via a web log (or blog, a type of online learning diary). They can also use computer simulations for hands-on performance assessments. All of these assessment strategies can be greatly facilitated by using free online resources.
Another example of applying Perkins’ second order fingertip effects is the use of online rubric tools such as Rubistar. In fact, teachers can relinquish a little of their intellectual authority and have students create the rubric in groups. By negotiating the rubric among their peers and with their teacher, students can spell out their project expectations and take ownership of the assessment process. Afterwards, students can use the rubric as central guidance when providing feedback to their classmates. In this instance, the use of rubric tools for communication, negotiation and peer review is in line with Perkins’ second order fingertip effects. It is toward this end that teachers and students can make the most of computer-based assessment tools.
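For teachers who want to see how little machinery this requires, a class-negotiated rubric can be kept as a simple shared document or data structure and reused for peer feedback. The sketch below is a hypothetical illustration, not a feature of Rubistar or any other specific tool; the criteria, score scale and comments are invented.

```python
# Hypothetical class-negotiated rubric used for peer feedback; all content is invented.
rubric = {
    "Content accuracy": "Facts are correct and supported by sources.",
    "Organization": "Ideas follow a clear, logical sequence.",
    "Use of media": "Images and audio support, rather than decorate, the argument.",
}

def peer_review(project: str, scores: dict, comments: dict) -> None:
    """Print a rubric-based peer review: a 1-4 score plus an optional comment per criterion."""
    print(f"Peer feedback for: {project}")
    for criterion in rubric:
        score = scores.get(criterion, 0)
        note = comments.get(criterion, "")
        print(f"  {criterion}: {score}/4  {note}")

peer_review(
    "Water-cycle multimedia project",
    scores={"Content accuracy": 4, "Organization": 3, "Use of media": 2},
    comments={"Use of media": "The animation is fun but does not explain evaporation."},
)
```

What matters for the second order fingertip effects is the negotiation and peer conversation around the rubric, not the particular tool used to record it.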
Lastly, what are the roles of schools in incorporating Perkins’ second order fingertip effects into computer-based assessment practices? Some may argue that many schools do not have the technology infrastructure and/or the budgets to support the effort of incorporating computer-based tools into classroom assessments. The real challenge, however, is to overcome the fear, suspicion and doubt found in many schools about the relative importance of such efforts. The point cannot be made more clearly than Dede (2003) did when he claimed that “the fundamental barriers to employing these technologies effectively for learning are not technical or economic, but psychological, organizational, political and cultural” (p. 9).
Hong Lin received her doctoral degree from the Department of Learning and Performance Systems at Penn State, University Park campus. She is manager of faculty development in the Institute for Teaching and Learning Excellence at Oklahoma State University. Her research interests include, but are not limited to, online instruction, blended learning, assessment and the ethical applications of instructional technology in higher education.

Dr. Frank Dwyer is professor of education in the instructional systems program in the Department of Learning and Performance Systems at Penn State. Dr. Dwyer was president of the Association for Educational Communications and Technology (AECT) from 1984 to 1985. He led the Department of Adult Education, Instructional Systems and Workforce Education and Development at Penn State from 1990 to 1995. Dr. Dwyer’s research interests focus on distance education, corporate instructional systems, instructional design/strategies and visual learning systems.
References

Ahn, J. (2004, April). Electronic portfolios: Blending technology, accountability and assessment. T.H.E. Journal, 31(9), 12-18. Retrieved April 20, 2005, from http://thejournal.com/magazine/vault/A4757B.cfm

Baek, S. G. (1994). Implications of cognitive psychology for educational testing. Educational Psychology Review, 6(4), 373-389.

Bahr, M. W., & Bahr, C. M. (1997, Winter). Educational assessment in the next millennium: Contributions of technology. Preventing School Failure, 41(2), 90-94.

Becker, H. J., & Lovitts, B. E. (2003). A project-based approach to assessing technology. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology (pp. 129-148). New York, NY: Teachers College Press.

Dede, C. (2003, March-April). No cliché left behind: Why education policy is not like the movies. Educational Technology, 43(2), 5-10.

DIAGNOSER Tools. Retrieved September 1, 2005, from http://www.diagnoser.com/diagnoser/

Hall, R. P., Knudsen, J., & Greeno, J. G. (1996). A case study of systemic aspects of assessment technologies. Educational Assessment, 3(4), 315-361.

Johnson, W. L., & Soloway, E. (1985). PROUST: An automatic debugger for PASCAL programs. In P. Lemmons (Ed.), Lecture notes in computer science (pp. 179-190). Hightstown, NJ: McGraw-Hill, Inc.

Jonassen, D. H. (2000). Computers as mindtools in schools: Engaging critical thinking (2nd ed.). Upper Saddle River, NJ: Prentice-Hall, Pearson Education.

Khan, B. H. (1997). Web-based instruction: What is it and why is it? In B. H. Khan (Ed.), Web-based instruction (pp. 5-18). Englewood Cliffs, NJ: Educational Technology.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13-23.

Mislevy, R. J., Steinberg, L. S., Almond, R. G., Haertel, G. D., & Penuel, W. R. (2003). PADI technical report 2: Leverage points for improving educational assessment. SRI International.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Research Council.

Perkins, D. (1985, August/September). The fingertip effect: How information-processing technology shapes thinking. Educational Researcher, 14, 11-17.

Shavelson, R. J., Baxter, G. P., & Pine, J. (1992). Performance assessments: Political rhetoric and measurement reality. Educational Researcher, 21(4), 168-177.

Wade-Stein, D., & Kintsch, E. (2004). Summary Street: Interactive computer support for writing. Cognition and Instruction, 22(3), 333-362.

Zenisky, A. L., & Sireci, S. G. (2002). Technological innovations in large-scale assessment. Applied Measurement in Education, 15(4), 337-362.