Toward Automated Computer-Based Visualization and Assessment of
Team-Based Performance
Dirk Ifenthaler
Deakin University
A considerable amount of research has been undertaken to provide insights into the valid assessment of team performance. However, in many settings, manual and therefore labor-intensive instruments for assessing team performance have limitations. Automated assessment instruments can provide more flexible and detailed insights into the complex processes influencing team performance. The central objective of this study was to advance knowledge in the automated assessment of team-based performance using a language-oriented approach. Fifty-six teams of learners (N = 224) in 3 experimental conditions solved 2 tasks in an online learning environment. The teams' solutions were analyzed with the Automated Knowledge Visualization and Assessment (AKOVIA) methodology, which integrates a natural language-oriented algorithm and enables a structural and semantic compression of individual- and team-based knowledge representations. Findings provide initial evidence for the feasibility and validity of the fully automated methodology. A framework for integrating research and methodology development is suggested for improving educational technology innovations such as computer-based assessment environments in international large-scale assessments.
Keywords: team, shared mental model, automated assessment, natural language processing
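To give a concrete sense of the kind of computation a language-oriented assessment methodology performs, the following minimal sketch compares two knowledge representations that have been re-represented as sets of concept-relation-concept propositions, using a Tversky-style set similarity of the sort used in related automated model-comparison work. This is an illustration only, not the published AKOVIA implementation; the function name and the example propositions are hypothetical.

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Tversky (1977) set similarity; with alpha = beta = 0.5 it
    reduces to the Dice coefficient."""
    a, b = set(a), set(b)
    common = len(a & b)
    denominator = common + alpha * len(a - b) + beta * len(b - a)
    return common / denominator if denominator else 0.0

# Hypothetical propositions extracted from two written responses,
# encoded as (concept, relation, concept) triples.
team_model = {
    ("team", "pursues", "shared goal"),
    ("members", "hold", "complementary skills"),
    ("members", "are", "mutually accountable"),
}
reference_model = {
    ("team", "pursues", "shared goal"),
    ("members", "are", "mutually accountable"),
    ("members", "interact", "interdependently"),
}

print(f"Semantic overlap: {tversky_similarity(team_model, reference_model):.2f}")

With alpha = beta = 0.5 the measure treats both representations symmetrically; weighting the parameters asymmetrically would let a reference model count more heavily than a learner or team model.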
Teams are an essential part of most working environments because they combine different views, multiple skills, diverse experiences, analytical judgments, and rich knowledge. Consequently, research on teams and their assessment has been a continuous endeavor in various scientific areas for more than 30 years. Yet various definitions of team exist, each taking a different perspective. For example, Kanaga and Kossler (2011) defined a team as “a specific kind of group whose members are collectively accountable for achieving the team’s goals” (p. 4). A more detailed definition is given by Katzenbach and Smith (2003), who described a team as “a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable” (p. 45). From an operational point of view, Cohen, Levesque, and Smith (1997) defined a team as “a set of agents having a shared objective and a shared mental state” (p. 95). Salas, Dickinson, Converse, and Tannenbaum (1992) described a team as
a distinguishable set of two or more people who interact dynamically,
interdependently, and adaptively toward a common and valued goal,
who have each been assigned specific roles or functions to perform
and who have a limited life span of membership. (p. 4)
To sum up, characteristics common across definitions of a team include at least two individuals, common objectives, shared responsibility and interdependence, and an orientation toward optimal performance.
Instruments for measuring team performance have been developed over the past decades; however, adequate computer-based assessments of team-based performance are scarce (Fischer & Mandl, 2005). Recent advances in web-based technology have widened the scope of computer-based assessments (Csapó, Ainley, Bennett, Latour, & Law, 2012; Frey & Hartig, 2013). For example, international large-scale assessments such as the Programme for International Student Assessment (PISA) and the Programme for the International Assessment of Adult Competencies (PIAAC) currently implement advanced computer-based assessment environments (Organisation for Economic Co-operation and Development [OECD], 2010, 2013).
Previously, most team-based assessment instruments required a great deal of time and effort from highly trained researchers (e.g., think-aloud protocol analysis), were mainly limited to subjective self-reports (Wildman et al., 2012), and required labor-intensive manual analysis of performance indicators (Almond, Steinberg, & Mislevy, 2002). As a result, such assessments have been limited to the scientific community and have had only a minor impact on practical issues such as the design of effective learning, teaching, and working environments. The desire for practical assessment instruments that are both useful and valid has led researchers to significant developments in the last several years (Chung, O’Neil, & Herl, 1999; Mandl & Fischer, 2000). Instruments using graphical representations for computer-based assessment, in particular, have been successfully tested and implemented, such as the DEEP methodology (Spector & Koszalka, 2004), KU-Mapper (Taricani & Clariana, 2006), and knowledge mapping tools (Herl, O’Neil, Chung, & Schacter, 1999; O’Neil, Chuang, & Baker, 2010). However, only a few of these instruments have been fully automated and tested for reliability and validity. Furthermore, automated and language-oriented assessment methodologies that enable a domain-independent analy-
This article was published Online First February 17, 2014.
Correspondence concerning this article should be addressed to Dirk
Ifenthaler, Centre for Research in Digital Learning, Deakin University,
Level 4, 550 Bourke Street, Melbourne VIC 3000, Australia. E-mail:
dirk@ifenthaler.info
Journal of Educational Psychology, 2014, Vol. 106, No. 3, 651–665
© 2014 American Psychological Association. 0022-0663/14/$12.00 DOI: 10.1037/a0035505