
The Fingertip Effects of Computer-based Assessment in Education

By Hong Lin and Francis Dwyer
According to the Committee on the Foundations of Educational Assessment, traditional educational assessment does a reasonable job of measuring knowledge of basic facts, procedures and proficiency in an area of the curriculum. However, the traditional approach fails to capture the breadth and richness of knowledge and cognition (Pellegrino, Chudowsky, & Glaser, 2001). Such a concern arises because traditional assessment practices generally focus on assessing whether a student has acquired the content knowledge, but they often fail in assessing the learning process and higher-order thinking skills (Baek, 1994; Bahr & Bahr, 1997). Dede (2003) concludes that the current practices of educational assessment are "based on mandating performance without providing appropriate resources, then using a 'drive by' summative test to determine achievement" (p. 6).
At a time when traditional assessment is under increasing scrutiny and criticism, the nation is placing greater expectations on the potential role of the computer in educational assessment. It is anticipated that the appropriate use of computer technology would help enhance assessment at multiple levels of practice by incorporating ongoing and multiple assessment strategies into the learning process. Given this possibility, it is timely to review current computer-based assessment practices in educational settings. Furthermore, a review of some emerging assessment tools that incorporate interactive multimedia can also deepen our understanding of the role that computer technology plays in assessment.
Technology use in assessment and fingertip effects
Fingertip effects of computer technology
Computer technology has significantly changed the curriculum and teachers' instructional practices. It has also changed the way students construct and demonstrate their knowledge and skills. These changes in turn are stimulating people to rethink "what is assessed, how that information is obtained, and how it is fed back into the educational process in a productive and timely way" (Pellegrino, Chudowsky, & Glaser, 2001, p. 272).
Perkins (1985), a pioneer thinker who viewed computers as learning tools, pointed out that computer technology has "a valuable history of putting things at our fingertips to be seized and used widely for their designed objectives as well as for other purposes" (p. 11). However, he warned that the opportunities provided by computer technology are not always accepted in education. He further explained that computer technology actually has two orders of "fingertip effects." The first order fingertip effects occur when a computer innovation changes "the way people do certain things without actually changing very much the basic aspirations, endeavors, or thinking habits
of a population" (Perkins, 1985, p. 11). For instance, unlike regular mail, email and instant messaging allow for faster communication with friends, relatives and business associates thousands of miles away. Another example is that computer-based tests use built-in databases to automatically collect and compute data. In these instances, the first order fingertip effects of computer technology answer the question, "What could you do that you could not before?" Specifically, computer technology can help automate routine procedures quickly and accurately, thus improving productivity and efficiency.
The second order fingertip effects answer the question, "What difference will a computer really make to a person's higher-order skills, i.e., decision making, reflection, reasoning and problem solving?" (Perkins, 1985, p. 11). Jonassen (2000) indicated that the second order effects should help "in the construction of generalizable, transferable skills that can facilitate thinking in various fields" (p. 18). However, when computer technology is used, it should go beyond its automated function as a production tool; it should be used to promote higher-order skills. For example, Microsoft Excel is a spreadsheet tool that is useful for teachers as a grade book. By inputting grades and one function, or a series of functions, a teacher can produce report cards very quickly, thus saving time at the end of a marking period. The spreadsheet data can also be turned into a graph so that the report card can be seen visually or graphically. The teacher can also look at the results and use them to work on comments for student performance. In doing so, the teacher can reflect on his or her teaching, ask "what if" questions or help a struggling student. In this instance, having the computer do the menial task of averaging grades and displaying graphics are the first order fingertip effects; reflective thinking and helping a struggling student's learning are the second order.
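The same distinction can be sketched in a few lines of code. The snippet below is only an illustration (the roster, scores and the 70-point cut-off are hypothetical, and a teacher would more likely work in Excel): averaging the grades is a first order effect, while using the result to decide which students deserve reflective comments and extra help points toward the second order.

# A minimal sketch (hypothetical roster and cut-off; not from the article).
# Averaging the grades is the first order "fingertip effect"; surfacing who
# may need help so the teacher can reflect and intervene is where the second
# order effect begins.

grades = {
    "Avery": [88, 92, 79],
    "Blake": [61, 58, 70],
    "Casey": [95, 90, 97],
}
STRUGGLING_THRESHOLD = 70  # hypothetical cut-off chosen by the teacher

def average(scores):
    """First order effect: automate the menial arithmetic."""
    return sum(scores) / len(scores)

report = {name: round(average(scores), 1) for name, scores in grades.items()}

# The second order effect starts here: the computer only points at who may
# need attention; the "what if" questions and the comments remain the
# teacher's work.
needs_attention = [name for name, avg in report.items() if avg < STRUGGLING_THRESHOLD]

for name, avg in report.items():
    print(f"{name}: {avg}")
print("Follow up with:", ", ".join(needs_attention) or "no one")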
To summarize Perkins' argument: if computer technology fails to achieve its full potential, it has only been used to achieve the first order fingertip effects. It is the second order fingertip effects, the non-automatic and effortful processes, that establish the true value of computer technology. With this distinction in mind, the question then becomes: Which order of fingertip effects have current computer-based assessment tools achieved?
Limitations of computer technology use in assessment
Computer applications in educational assessment are evident in test preparation, administration, scoring and reporting (Zenisky & Sireci, 2002). To this end, computer technology is often used to present test items and collect responses. Clearly, computer technology does a great job of automating varying phases of the testing process, such as creating, storing, distributing and sharing test materials. This automation, especially in large-scale assessment such as that administered by the Educational Testing Service (ETS), can benefit both examiners and examinees in multiple ways.
Unlike most computer-assisted tests, Computer Adaptive Testing (CAT), which has been used and improved during the past 15 years, has noteworthy advantages over "fixed-item" tests. This adaptive approach to testing can update the estimate of an examinee's ability after each test item and select the appropriate level of subsequent items for the examinee. In this way, student deficiencies and strengths can be quickly identified and addressed. Another innovative use of computer-based assessment can be seen in some computer simulation projects. In their project, Shavelson, Baxter and Pine (1992) required their students to replicate electric circuits by manipulating icons of batteries and wires presented on a Macintosh computer.
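To make the adaptive idea concrete, here is a deliberately toy sketch in Python. It is not the algorithm used by ETS or any operational CAT program (those rely on item response theory and calibrated item banks); the item bank, step size and answer rule below are hypothetical, and the loop only illustrates "pick the next item near the current estimate, then nudge the estimate."

# A toy illustration of the adaptive loop described above, NOT an operational
# CAT algorithm: real systems estimate ability with item response theory.
# The item bank, step size and answer function here are hypothetical.

ITEM_BANK = [  # (item_id, difficulty on an arbitrary -3..+3 scale)
    ("q1", -2.0), ("q2", -1.0), ("q3", 0.0), ("q4", 1.0), ("q5", 2.0),
]

def run_adaptive_test(answer_item, n_items=3, step=0.5):
    """Select each item near the current ability estimate, then update it."""
    ability = 0.0                      # start from an average-ability guess
    remaining = list(ITEM_BANK)
    for _ in range(n_items):
        # selection rule: the unused item whose difficulty is closest to the estimate
        item = min(remaining, key=lambda it: abs(it[1] - ability))
        remaining.remove(item)
        correct = answer_item(item)    # True/False from the examinee
        ability += step if correct else -step   # crude up/down adjustment
    return ability

# Example: a hypothetical examinee who answers items easier than +0.5 correctly.
estimate = run_adaptive_test(lambda item: item[1] < 0.5)
print(f"final ability estimate: {estimate:+.1f}")

In a real CAT, the update would come from a maximum-likelihood or Bayesian ability estimate rather than a fixed step, but the select-respond-update cycle is the same.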
Obviously, CAT and computer simulations demonstrate a more sophisticated approach to testing, but these strategies are seldom implemented with teacher-made tests due to technical complexity and logistical problems. Instead, teachers often use computers to help with such small-scale assessments as creating traditional multiple-choice, fill-in-the-blank and short essay questions. It is worth pointing out that in either large- or small-scale assessments, computer technology is frequently used as a test preparation and production tool (Perkins' first order fingertip effects) rather than as a learning tool to enhance higher-order skills (the second order fingertip effects).
Another observation about computer-based assessment is that computer technology is often seen by teachers as a "representation container" rather than as an effective assessment tool (Dede, 2003, p. 7). Specifically, computers allow students to create multimedia materials at any time and on demand and, as a result, make learning visual, mobile and fun. However, after
students finish the projects, it is not unusual for students' competencies in using computer software to be measured while their generic problem-solving competencies are ignored (Becker & Lovitts, 2003, p. 134). Obviously, it is much easier to assess computer literacy than the problem-solving process. However, when assessment strategies play only a secondary role and the real outcomes of the projects, the generic problem-solving skills, are not assessed, chances are that computers are not being used to their full potential by teachers.
The third observation is that the design of computer-based assessment does not "adequately support human practices that produce meaningful information about student learning" (Hall, Knudsen, & Greeno, 1996, p. 316). Take the design of multiple-choice tests, for example. Multiple-choice questions are the most widely used format in computer-based assessments, but this format has been criticized for giving students no practice at expressing their thoughts and for not providing individual feedback or interaction regarding student performance. It is important to note that assessment approaches can easily replace one form of computer technology with another without really paying attention to human interactions. For example, portfolios are used in place of standardized examinations, yet little explicit attention is paid to the human interactions surrounding either the portfolio or the standardized examination. In fact, effective assessment requires extensive interaction between examiners and examinees (Bahr & Bahr, 1997). Such interactions provide an opportunity for examiners to identify learning gaps and for examinees to moderate their own learning.
Emerging technology assessment tools
A review of the related computer literature indicates that the current use of computer technology in educational assessment generally does not achieve Perkins' (1985) second order fingertip effects. As discussed above, the second order should go beyond the automated functions of computer technology and extend to enhancing higher-order skills. Fortunately, although still in their early stage of development, some emerging prototype tools have demonstrated great potential to push computer-based assessment beyond the automation of testing, representation channels and insufficient human practices.
The SMART model provides one example. This computer-based learning tool for science and math concepts contains a variety of assessment strategies permeating a problem-based and project-based learning environment. Ongoing assessment is incorporated throughout the learning process in a way that allows computer technology to support student reflection. In addition, student work is evaluated through self-assessment and by peers, teachers and external agencies. In this way, teachers can identify deficiencies and strengths in student performance. Equally important, students can reflect on their learning process and improve their higher-level skills. Table 1 reviews a selection of other prototype tools in which assessment strategies are interwoven with the learning process.
Discussion and conclusions
Computer technology has revolutionized instruction and student learning, and it holds great promise for enhancing educational assessment. Although still in their early stages of development, computer-based assessment tools offer innovative approaches for documenting students' learning processes, identifying learners' deficiencies and strengths, and providing timely feedback. Such a promise cannot be realized without the cooperation of instructional technologists, teachers and schools.
It is true that computer technology is as powerful as it is seductive. It is easy for instructional technologists to get carried away and spend all their time designing scenarios and gathering complex data, only then to ask, "How do we assess it?" (Mislevy, Steinberg, Almond, Haertel, & Penuel, 2003). When this happens, computer technology is not used to its maximum potential. With rapid advances in computer technology, the challenge for instructional technologists is to capture more complex performances in assessment settings. To design effective complex assessments, instructional technologists should read Messick's (1994) discussion about computer-based simulations, portfolio assessments and performance tasks.
It is also true that it is beyond many teachers' abilities to design advanced assessment prototype tools. In fact, the concept of Perkins' second order fingertip effects can be applied to many classroom assessment routines, especially with the help of free internet resources.
Table 1. Prototype tools of computer-based assessment in education.

Tool: DIAGNOSER
Area: Key math and science concepts
Source: DIAGNOSER Tools website
Assessment strategies: This web-based program contains the following tools and strategies to assess student learning and provide feedback. DIAGNOSER: students receive ongoing feedback as they work through their assignments, and teachers receive a summary of student diagnoses. Elicitation Questions: after students respond to the carefully constructed questions, the program can pinpoint areas of possible misunderstanding, give immediate feedback on reasoning strategies, and prescribe relevant instruction. Developmental Lessons: these lessons open up the ideas elicited in class discussion and help students test their initial ideas. Prescriptive Activities: teachers can use activities to target specific problematic ideas.

Tool: eduPortfolios
Area: Digital portfolio
Source: Ahn (2004)
Assessment strategies: This tool allows intimate interaction between students, teachers and other stakeholders. On the one hand, students can view and assess real student work and compare it against established learning standards. On the other hand, students are asked to write about how they understand the learning standards and how they meet those standards in their work. Afterwards, feedback from multiple teachers is attached to the student portfolios and reflections. In this way, students can see how their understanding matched or did not match their teachers' understandings, and vice versa. This approach allows for a continual process of reflection, understanding and learning.

Tool: Summary Street
Area: Reading comprehension and writing skills
Source: Wade-Stein & Kintsch (2004)
Assessment strategies: Summary Street is educational software based on latent semantic analysis (LSA), a computer method for representing the content of texts. Students can prepare multiple drafts of a summary and receive content-based feedback. For example, the Redundancy Check performs a sentence-by-sentence comparison to flag sentences that appear to have overlapping content, and the Relevance Check compares each sentence with the original text and pinpoints sentences that have little or no relevance to the topic (a simplified sketch of these two checks follows the table). In addition, the content feedback is presented in a game-like, easy-to-grasp graphic display. In this way, students are more willing to repeat the cycles of rewriting and revision before submitting their final summaries to their teachers.

Tool: PROUST
Area: Programming problems using the Pascal language
Source: Johnson & Soloway (1985)
Assessment strategies: PROUST is considered a milestone in the field of intelligent tutoring systems. The system describes a diverse array of programming problems and the ways in which parts of each problem can be solved. Based on how people reason out computer programs, PROUST is designed to analyze a student's program, identify strengths and weaknesses in the student's work, and then present comments on that work. The software can identify not only syntax errors but, more interestingly, errors in the student's approach to solving a problem.
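As noted in the Summary Street row above, the Redundancy and Relevance checks can be approximated in a few lines. Summary Street itself is built on latent semantic analysis; the sketch below substitutes plain bag-of-words cosine similarity and hypothetical thresholds, so it conveys only the shape of the feedback, not the published method (Wade-Stein & Kintsch, 2004).

# A simplified stand-in for Summary Street's checks. The real system uses
# latent semantic analysis; this sketch uses bag-of-words cosine similarity,
# and both cut-off values below are hypothetical.
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def check_summary(summary_sentences, source_text,
                  redundancy_cutoff=0.8, relevance_cutoff=0.2):
    source_vec = vectorize(source_text)
    vecs = [vectorize(s) for s in summary_sentences]
    feedback = []
    for i, (sent, vec) in enumerate(zip(summary_sentences, vecs)):
        # "Redundancy Check": flag sentences that largely overlap an earlier one.
        if any(cosine(vec, earlier) >= redundancy_cutoff for earlier in vecs[:i]):
            feedback.append(f"Possible repetition: {sent!r}")
        # "Relevance Check": flag sentences with little connection to the source.
        if cosine(vec, source_vec) < relevance_cutoff:
            feedback.append(f"May be off topic: {sent!r}")
    return feedback

# Hypothetical usage with a two-sentence summary of a short source passage.
source = "Plants use sunlight, water and carbon dioxide to make food by photosynthesis."
summary = ["Plants make food from sunlight, water and carbon dioxide.",
           "My favorite food is pizza."]
for note in check_summary(summary, source):
    print(note)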
For example, when incorporating the idea of computer-assisted tests into existing teacher practice, teachers can search for creative and effective methods for conducting testing and evaluation in addition to the traditional multiple-choice, fill-in-the-blank and short essay questions (Khan, 1997). They can include web-based group discussions and e-portfolio development to evaluate students' progress. They can allow students to submit comments and reflections about their project design and delivery activities via a web log (or blog, a type of online learning diary). They can also use computer simulations for hands-on performance assessments. All of these assessment strategies can be greatly facilitated by using free online resources.
Another example of applying Perkins' second order fingertip effects is the use of online rubric tools such as RubiStar. In fact, teachers can relinquish their intellectual authority a little and have students create the rubric in groups. By negotiating the rubric among their peers and with their teacher, students can spell out their project expectations and take ownership of the assessment process. Afterwards, the students can use the rubric as central guidance for providing feedback to their peers. In this instance, the use of rubric tools for communication, negotiation and peer review is in line with Perkins' second order fingertip effects. It is toward this end that teachers and students can make the most of computer-based tools for assessment.
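To make the idea tangible, the sketch below represents a class-negotiated rubric as a small data structure and aggregates peer ratings against it. The criteria, levels and scores are hypothetical, and the code is not a description of RubiStar; it only illustrates how a shared rubric can anchor peer feedback.

# A hypothetical, simplified stand-in for an online rubric: the criteria and
# levels would be negotiated by students and teacher; RubiStar itself works
# differently and is only the inspiration here.

RUBRIC = {
    "Content accuracy": ["Needs work", "Developing", "Proficient", "Exemplary"],
    "Use of evidence":  ["Needs work", "Developing", "Proficient", "Exemplary"],
    "Presentation":     ["Needs work", "Developing", "Proficient", "Exemplary"],
}

def summarize_peer_reviews(reviews):
    """Average peer ratings (level index per criterion) and collect comments."""
    summary = {}
    for criterion, levels in RUBRIC.items():
        ratings = [r["ratings"][criterion] for r in reviews]
        summary[criterion] = levels[round(sum(ratings) / len(ratings))]
    comments = [c for r in reviews for c in r.get("comments", [])]
    return summary, comments

# Hypothetical peer reviews of one student's project.
peer_reviews = [
    {"ratings": {"Content accuracy": 3, "Use of evidence": 2, "Presentation": 2},
     "comments": ["Strong facts; cite where the data came from."]},
    {"ratings": {"Content accuracy": 2, "Use of evidence": 2, "Presentation": 3},
     "comments": ["Slides were clear and well paced."]},
]

summary, comments = summarize_peer_reviews(peer_reviews)
print(summary)
print(comments)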
Lastly, what are the roles of schools in incorporating Perkins' second order fingertip effects into computer-based assessment practices? Some may argue that many schools do not have the technology infrastructure and/or the budgets to support the effort of incorporating computer-based tools into classroom assessments. The real challenge, however, is to overcome the fear, suspicion and doubt found in many schools about the relative importance of such efforts. The point cannot be made more clearly than Dede (2003) did when he claimed that "the fundamental barriers to employing these technologies effectively for learning are not technical or economic, but psychological, organizational, political and cultural" (p. 9).
Hong Lin received her doctoral degree from the Department of Learning and Performance Systems at Penn State, University Park campus. She is manager of faculty development in the Institute for Teaching and Learning Excellence at Oklahoma State University. Her research interests include, but are not limited to, online instruction, blended learning, assessment and the ethical applications of instructional technology in higher education.
Dr. Frank Dwyer is professor of education in the instructional systems program in the Department of Learning and Performance Systems at Penn State. Dr. Dwyer was president of the Association for Educational Communications and Technology (AECT) from 1984 to 1985. He led the Department of Adult Education, Instructional Systems and Workforce Education and Development at Penn State from 1990 to 1995. Dr. Dwyer's research interests focus on distance education, corporate instructional systems, instructional design/strategies and visual learning systems.
References
Ahn, J. (2004, April). Electronic portfolios: Blending technology, accountability and assessment. T.H.E. Journal, 31(9), 12-18. Retrieved April 20, 2005, from http://thejournal.com/magazine/vault/A4757B.cfm
Baek, S. G. (1994). Implications of cognitive psychology for educational testing. Educational Psychology Review, 6(4), 373-389.
Bahr, M. W., & Bahr, C. M. (1997, Winter). Educational assessment in the next millennium: Contributions of technology. Preventing School Failure, 41(2), 90-94.
Becker, H. J., & Lovitts, B. E. (2003). A project-based approach to assessing technology. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology (pp. 129-148). New York, NY: Teachers College Press.
Dede, C. (2003, March-April). No cliché left behind: Why education policy is not like the movies. Educational Technology, 43(2), 5-10.
DIAGNOSER Tools. Retrieved September 1, 2005, from http://www.diagnoser.com/diagnoser/
Hall, R. P., Knudsen, J., & Greeno, J. G. (1996). A case study of systemic aspects of assessment technologies. Educational Assessment, 3(4), 315-361.
Johnson, W. L., & Soloway, E. (1985). PROUST: An automatic debugger for PASCAL programs. In P. Lemmons (Ed.), Lecture notes in computer science (pp. 179-190). Hightstown, NJ: McGraw-Hill, Inc.
Jonassen, D. H. (2000). Computers as mindtools in schools: Engaging critical thinking (2nd ed.). Upper Saddle River, NJ: Prentice-Hall, Pearson Education.
Khan, B. H. (1997). Web-based instruction: What is it and why is it? In B. H. Khan (Ed.), Web-based instruction (pp. 5-18). Englewood Cliffs, NJ: Educational Technology.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13-23.
Mislevy, R. J., Steinberg, L. S., Almond, R. G., Haertel, G. D., & Penuel, W. R. (2003). PADI technical report 2: Leverage points for improving educational assessment. SRI International.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Research Council.
Perkins, D. (1985, August/September). The fingertip effect: How information-processing technology shapes thinking. Educational Researcher, 14, 11-17.
Shavelson, R. J., Baxter, G. P., & Pine, J. (1992). Performance assessments: Political rhetoric and measurement reality. Educational Researcher, 21(4), 168-177.
Wade-Stein, D., & Kintsch, E. (2004). Summary Street: Interactive computer support for writing. Cognition and Instruction, 22(3), 333-362.
Zenisky, A. L., & Sireci, S. G. (2002). Technological innovations in large-scale assessment. Applied Measurement in Education, 15(4), 337-362.
Article
Authentic and direct assessments of performances and products are examined in the light of contrasting functions and purposes having implications for validation, especially with respect to the need for specialized validity criteria tailored for performance assessment. These include contrasts between performances and products, between assessment of performance per se and performance assessment of competence or other constructs, between structured and unstructured problems and response modes, and between breadth and depth of domain coverage. These distinctions are elaborated in the context of an overarching contrast between task-driven and construct-driven performance assessment. Rhetoric touting performance assessments because they eschew decomposed skills and decontextualized tasks is viewed as misguided, in that component skills and abstract problems have a legitimate place in pedagogy. Hence, the essence of authentic assessment must be sought elsewhere, that is, in the quest for complete construct representation. With this background, the concepts of “authenticity” and “directness” of performance assessment are treated as tantamount to promissory validity claims that they offset, respectively, the two major threats to construct validity, namely, construct underrepresentation and construct-irrelevant variance. With respect to validation, the salient role of both positive and negative consequences is underscored as well as the need, as in all assessment, for evidence of construct validity.