For Review
An Empirical Investigation of Virtual Interaction in Supporting Learning
Journal: Information Systems Research
Manuscript ID: ISR-2006-158
Manuscript Type: Research Articles
Manuscript Category: Regular Issue
Date Submitted by the Author: 20-Jun-2006
Complete List of Authors: Cao, Jinwei; University of Delaware, Accounting and MIS
Crews, Janna; University of Nevada, Reno, Accounting and IS
Lin, Ming; University of Arizona, Center for the Management of Information
Burgoon, Judee; University of Arizona, Center for the Management of Information
Nunamaker, Jay; University of Arizona, Center for the Management of Information
Keywords: E-learning, Field experiments, Systems design and implementation, User acceptance of IT
An Empirical Investigation of Virtual Interaction
in Supporting Learning
Jinwei Cao, Accounting and MIS, University of Delaware
Janna M. Crews, Accounting and IS, University of Nevada, Reno
Ming Lin, Judee K. Burgoon and Jay F. Nunamaker, Jr.
Center for the Management of Information, University of Arizona
Authors’ Note
Jinwei Cao is Assistant Professor of MIS at the University of Delaware. Janna Crews is Assistant
Professor of MIS at the University of Nevada at Reno. Ming Lin is a doctoral candidate in MIS at
the University of Arizona. Judee Burgoon (Professor of Communication and Director for Human
Communication Research) and Jay Nunamaker (Professor of Management Information Systems
and Center Director) are both with the Center for the Management of Information at the
University of Arizona. Correspondence should be directed to the first author, Newark, DE 19716,
jcao@lerner.udel.edu; voice: 1-302-831-1796.
Parts of this paper have been presented at ICIS 2005.
Abstract
This research investigates “virtual interaction,” which – in the context of this paper – is a special
type of interaction between a learner and a rich media representation of an instructor. The
impacts of virtual interaction on the effectiveness of online learning are studied based on a
review of multiple learning theories and technologies. An exploratory research model (the LVI
model) is proposed to explain the relationships among the following constructs: system
interactivity, interactive learning activity, learning phases, and learning outcomes. A series of
studies, including a controlled experiment and surveys, was conducted to test this model.
Findings from both quantitative and qualitative results indicate that virtual interaction positively
impacts learner behaviors by encouraging learners to interact more and by increasing learner
satisfaction with the learning process; however, the influence of virtual interaction on actual
learning performance is limited.
Keywords: virtual interaction, e-learning, question answering, effectiveness
Page 1 of 38 Information Systems Research
An Empirical Investigation of Virtual Interaction in Supporting Learning
1. INTRODUCTION
Recent advances in information technology have dramatically affected the learning market.
Thousands of online courses, including degree and certificate programs, are now offered by
universities and corporations world-wide. Among these numerous online programs, many¹ are
now taking a Web-enabled rich media presentation approach. This approach digitally records
traditional, classroom-based courses and makes them available online. Presentation information,
such as a video of the presenter, PowerPoint slides, and any other accompanying graphics, are
synchronized and streamed via the Web to participants (Zhang and Nunamaker 2003). Such a
rich media presentation can easily be viewed using a Web browser with a media player, such as
Real Networks' RealPlayer or Microsoft Media Player, among others.
According to the media richness theory (Daft and Lengel 1984, 1986) and the social presence
theory (Short, Williams, and Christie 1976), using rich media, and particularly video, in this style
of learning provides multiple verbal and nonverbal cues as well as high intimacy and immediacy,
thereby enabling learners to be more involved in and more satisfied with the learning process.
However, from a learner’s perspective, simply watching an instructor lecturing in a video is still
quite different from learning with a human instructor, even with the help of the rich media
presentation format. An important factor of learning, “interaction,” is usually still missing in
multimedia online lectures (O'Connor, Sceiford, Wang, and Foucar-Szocki 2003). In the context
of this paper, we define interaction as “reciprocal events that require at least two objects and two
actions. Interactions occur when these objects and events mutually influence one another”
¹ To name a few:
Stanford University (http://scpd.stanford.edu/scpd/students/onlineClass.htm)
The University of Colorado (http://www.cuonline.edu/)
The University of Delaware (http://www.udel.edu/UMS/itv/demo/)
(Wagner 1994, p.8). Obviously, such a process cannot be supported by a linear playback of the
multimedia recording of lectures.
Meanwhile, many modern learning theories indicate that effective learning requires an
iterative interaction process between the learner and the knowledge providers (Bruner 1960,
1966; Pask 1975). In prior research about technology supported learning, interactions are usually
classified into three types (Moore 1993): learner-content interaction (LCI), learner(s)-tutor(s)²
interaction (LTI), and learner(s)-learner(s) interaction (LLI). Of these three types of interaction, LTI
and LLI are studied the most. For example, collaborative learning technologies, such as chat
rooms and discussion forums, have been commonly used to provide a platform for LTI and LLI
in e-learning systems. These types of interactions have the advantages not only of increasing the
involvement of learners in learning, but also of bringing more personal and social aspects to the
interactions (Hiltz and Wellman 1997). However, the personal aspects of these types of
interactions may also cause a problem. Specifically, one major disadvantage of LTI is its reliance
on the availability of live tutors: qualified tutors may be a scarce resource, and/or
tutors are often not available at the exact time they are needed. Although asynchronous interaction (Hiltz
and Wellman 1997) may provide a temporary solution when the tutor/instructor is unavailable, it
cannot always provide the timely feedback that is generally desirable to support learning.
In contrast, LCI refers to learners examining and studying the course content. For example,
learners can now often explore online learning materials by following navigational
hyperlinks, and the materials they discover can impact their knowledge construction. Such
interaction, however, largely lacks personal aspects, even with the help of rich media
presentations. In particular, it is tedious and time-consuming for learners to search a long, linear
and unstructured video for answers to their specific questions. Meanwhile, a Web search for the
² We use the words “instructor” and “tutor” interchangeably in this paper.
desired information may not only take a great deal of time but may also return unreliable or
inaccurate answers, in contrast to answers obtained directly from the instructor.
The research presented in this paper, therefore, endeavors to address the disadvantages of
both LCI and LTI discussed above. Specifically, this research investigates a special type of
interaction that is referred to as “virtual interaction.” In the context of this paper, “virtual
interaction” is defined as the interaction between a learner and a rich media representation of an
instructor (a “virtual instructor”). It is the process in which a learner questions or is questioned
by the virtual instructor and gets feedback from or gives input/responses to the virtual instructor.
We therefore have two research questions: 1) How can virtual interaction be implemented? and 2)
How does virtual interaction affect the effectiveness of online learning?
The rest of this paper is organized as follows. Section 2 describes the research framework of
this study. Based on a review of multiple learning theories and technologies, an exploratory
model named Learning with Virtual Instructors (LVI) is proposed, and specific constructs and
hypotheses about virtual interaction are discussed. Section 3 introduces a prototype system to
implement the LVI model. Section 4 then describes an exploratory study conducted to explore
the relationships among the core constructs of the LVI model. Both quantitative and
qualitative results are presented in this section and then summarized in Section 5. Section 5 also
discusses the implications of this research study.
2. RESEARCH FRAMEWORK
To investigate the impact of virtual interaction on learning effectiveness, we consider the
following aspects of learning: instructional strategies, information
technologies, and learners’ psychological learning processes. This framing follows Alavi and
Leidner’s (2001) suggestions for Technology Mediated Learning (TML) research.
Alavi and Leidner (2001) suggest that, to generate effective learning outcomes in TML,
information technologies should implement appropriate instructional strategies (e.g., methods of
presenting learning content and organizing instructional events) for the learning task. When
instructional strategies and information technologies work together properly, they will activate
the learners’ psychological learning processes required for the learning task, such as their
cognitive and information processing activities, and thus produce effective learning outcomes.
Therefore, to study the impacts of information technologies on learning in TML, researchers
should first draw on learning theories to understand the unobservable psychological learning
processes, and then find appropriate instructional strategies that may activate the required
psychological learning processes, and finally study how the technologies can support the chosen
instructional strategies (Alavi and Leidner 2001).
Thus, in the following sub-sections, we first examine the existing learning theories to
understand the role and effect of interaction in the learning process. Next we summarize a set of
instructional strategies that are relevant to the use of (virtual) interaction, and then choose certain
information technologies to support the virtual interaction and these relevant instructional
strategies. Based on these discussions, we finally present a research model as the theoretical
foundation for studying the impacts of virtual interaction in learning.
2.1. Interactions in Learning - Learning Theories and Learning Phases
Among many different learning theories, we chose the following four well-accepted ones, which
explain the psychological learning processes of adult learners from different perspectives.
Behaviorist. Behaviorists view learning as a change in observable behavior caused by external
stimuli in the environment and view learners as passive recipients of information delivered by
the expert (Skinner 1938, 1971). Behaviorists usually emphasize traditional lecture-based
instructional strategies. For example, behaviorists suggest that learning materials should be
sequenced properly (e.g. from simple to complex), and repetitive tests and feedback should be
provided so that learners can reinforce their knowledge/skills (Ally 2004). Therefore, in terms of
interaction, behaviorists focus on the interaction initiated by the instructors.
Cognitivist. Behaviorists focus only on the observable measures of learning and cannot explain
what is going on in the learner’s head. Cognitivists, on the other hand, see learning as internal
information processing with the use of memory, motivation, and thinking (Ally 2004). To
support this internal process, the following strategies should be used: 1) facilitate maximum
sensation that allows learners to attend to the information, for example using multiple modes of
delivery (e.g., audio, visuals, animations, etc.); 2) help learners to construct memory links
between the new information and the existing information. For example, if there is a lot of
information in a lesson, then the information should be chunked to prevent overload, and the
chunks should be organized to show their logical structures (Miller 1956). Cognitivists focus on
the presentation and arrangement of information in instruction, which are more relevant to
learners’ internal interaction with the learning materials.
Constructivist. Constructivists assume that knowledge is constructed from previous knowledge,
and students’ personal interpretations are of great importance in the process of knowledge
acquisition (Bruner 1966). Therefore, learning should be learner-centered instead of teacher-
centered (Phillips 1995). There are many instructional strategies relevant to the constructivist
view that recognizes the learner’s role in the process, including: designing instructional materials
that make the content relevant to the learner; providing instructor-guided learner control of the
learning agenda; and applying new knowledge and information to support internalization by
learners (Ally 2004). In terms of interaction, constructivists begin to reveal the importance of
learner-initiated/controlled interaction with learning content and/or instructors.
Social Constructivist. Extending the constructivist view, social constructivists emphasize the
importance of the social and cultural environment to the construction of knowledge (learning).
They believe that meaning and knowledge are constructed by learners through interaction with
instructors, other learners, instructional materials and their environment (Ernest 1998). This
school of thought explains the learning process completely from the perspective of interaction,
and thus in this view instructional strategies should support the two-way interactions between
learners and instructors/learners (Murphy and Cifuentes 2001).
Learning Phases
The learners’ psychological learning process is a dynamic, changing process of knowledge
acquisition. At different stages or phases of such a process, learners usually have different
knowledge backgrounds and different learning objectives/tasks. As a result, different learning
phases may require different instructional strategies that are suggested by different learning
theories (Ertmer and Newby 1993). According to Jonassen (1991), initial (introductory)
knowledge acquisition is perhaps best served by classical instruction that is backed by
behaviorist and cognitivist theories, while a constructivist or social constructivist learning
environment is more suited to the second (advanced) phase of knowledge acquisition in which
learners acquire more specific knowledge to solve more complex, domain specific problems.
Therefore, the role and effect of interaction may also vary with different learning phases.
Interactions, particularly learner-initiated interactions, may be of limited importance in the initial
learning phase, but are expected to play a more important role in the second phase of learning.
2.2. Strategic Use of Interaction - Instructional Strategies
Based on our review of learning theories, we propose that the instructional strategies
outlined in Table 1 be chosen for effective learning in different learning phases.
Table 1. Instructional Strategies
These strategies are not limited to interactions, but they provide explanations for the
appropriate use of interaction. The basic idea is that support for the interactions should be
increased when learners progress from the introductory learning phase to the advanced learning
phase. Particularly, interactions in the introductory learning phase should be limited and mostly
instructor-initiated interaction because learners may not have enough knowledge to initiate
interaction (e.g., ask questions) at that stage. However, once learners acquire enough basic
knowledge, they should be provided ample opportunities to interact with the knowledge source.
2.3. Information Technologies Supporting the Strategic Use of Interaction
Most of the instructional strategies in the introductory learning phase, such as multiple modes
of delivery, can be supported by the Web-enabled rich media presentation described in the
introduction section. This technology usually provides limited interactive functionalities such as
navigable course outlines and embedded quizzes. However, it is still challenging to find
technology to support learner-initiated interaction in the advanced learning phase without relying
on a human instructor. Thus, we need to investigate the following research question: how can
virtual interaction be implemented?
A new technology, video-based question answering (QA), now makes virtual interaction
possible. By pre-recording an instructor’s instructions (e.g. lectures) in digital video format, the
human questioning and answering process can be simulated by finding the specific
video segment(s) that are most relevant to the learner’s question. Because learners can ask
specific questions in natural language and watch a person talking back to them, such virtual
interactions simulate LTI.
The video-based QA technology integrates speech recognition, information retrieval, and
natural language processing technologies. Zhang and Nunamaker (2004) proposed a method that
applies text-based QA approaches to video applications. In this approach, the speech in video
lectures is transcribed into text manually or automatically using speech recognition software.
Each video of a lecture is then manually segmented into short segments representing lecture
topics. The transcript of each video segment is treated as a text document. Finally, a template-
based approach is used to identify answers to posted questions from the collection of transcribed
video segments.
This template-based approach was improved by using phonetic matching technology and
additional knowledge sources (Cao 2005). The improved version was then used as the core
technology to implement the QA type of virtual interaction.
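As a rough illustration of the retrieval step only (not the authors' actual template-based or phonetic-matching method), the following sketch ranks transcribed video segments by TF-IDF cosine similarity to a learner's question; the segment texts and tokenizer are invented for illustration:

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,?") for w in text.split()]

def tf_idf_vectors(docs):
    """Term-frequency vectors weighted by inverse document frequency."""
    counts = [Counter(tokenize(d)) for d in docs]
    df = Counter()
    for c in counts:
        df.update(c.keys())
    n = len(docs)
    return [{t: f * math.log((n + 1) / (df[t] + 1)) for t, f in c.items()}
            for c in counts]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, segments):
    """Rank transcript segments by similarity to the question:
    the 'best fit' answer first, 'close fits' after it."""
    vecs = tf_idf_vectors(segments + [question])
    qvec = vecs[-1]
    return sorted(zip((cosine(qvec, v) for v in vecs[:-1]), segments),
                  key=lambda pair: -pair[0])

# Invented transcript segments for illustration.
segments = [
    "A firewall filters network traffic between trusted and untrusted networks.",
    "Encryption converts plaintext into ciphertext using a secret key.",
    "A virus is a program that replicates by attaching itself to other programs.",
]
ranked = answer("How does encryption protect data?", segments)
```

In a full system, the top-ranked segment's video would auto-play while lower-ranked segments appear as alternative answers, mirroring the "best fit"/"close fit" behavior described later for the prototype.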
2.4. The LVI Model, Constructs and Hypotheses
With the relevant instructional strategies and their supporting technologies identified, we
propose an exploratory research model named Learning with Virtual Instructors (LVI, Figure 2)
to investigate the impact of virtual interaction on learning. To help explain the constructs in the
LVI model, we first illustrate the process showing how learners learn with virtual instructors in
Figure 1. In this figure, the specific instructional technologies and their enabled learning
activities are allocated to the different phases of the learning process.
Figure 1. The LVI Process Illustration
As illustrated in Figure 1, the learning outcomes are represented by learners’ actual learning
performance and their perceptions of and satisfaction with the whole learning process (e.g., perceived
learning effectiveness and satisfaction with interaction in learning). The learning outcomes may
be affected by several factors during the learning process, including the individual learner’s
characteristics, the system features/instructional technologies, the learning phases, and/or the
learner’s learning activities. However, although individual learner characteristics such as
learning motivation and learner capability are also important, they are not investigated herein.
Instead, they are controlled by random assignment in the experimental studies.
In the LVI model (Figure 2), therefore, we focus on the following core constructs that are
directly related to virtual interaction: system interactivity, interactive learning activity, and
learning phases.
Figure 2. The LVI Model
System Interactivity. This is the key construct in the LVI model, representing the
information technology features. As shown in Figure 1, the synchronized rich media presentation,
which represents classical, sequential instruction, provides little-to-no interaction in learning; the
question answering process initiated by learners adds interactivity to the learning
process; and the assessment and feedback process initiated by the virtual instructor (the system)
further increases the interactivity. It is expected that more virtual interactive functions available
in an e-learning system (higher level of system interactivity) will result in learners engaging in
more interactive learning activities, which will then result in improved learning outcomes
(especially when instructional strategies require more interactions).
Interactive Learning Activity. This construct may be considered as a mediating factor
(Baron and Kenny 1986) in the LVI model. We propose that more interactive learning activities
(actual or perceived) can result from a higher level of system interactivity, and that they will
result in better learning outcomes in the advanced learning phase.
Learning Phases. As stated earlier, different learning phases require different instructional
strategies and thus different instructional technologies. Therefore, learning phase becomes an
important moderating factor (Baron and Kenny 1986) in the LVI model. That is, system
interactivity may not play a big role in the introductory learning phase when learners focus on
gathering initial knowledge through traditional instruction. We expect that the effect of system
interactivity will show up in the advanced learning phase when learners are trying to deepen their
understanding of the subject matter.
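Moderation of this kind can be illustrated with a simple slope comparison. The sketch below uses invented toy numbers (not data from this study) to show the pattern the model predicts: the regression slope of a learning outcome on system interactivity is markedly larger in the advanced phase than in the introductory phase:

```python
from statistics import mean

def slope(x, y):
    """Simple OLS slope of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Invented toy data: interactivity level (0-3) and an outcome score,
# observed separately in each learning phase.
interactivity = [0, 1, 2, 3, 0, 1, 2, 3]
intro_outcome = [70, 71, 70, 72, 69, 70, 71, 71]      # nearly flat
advanced_outcome = [68, 72, 76, 81, 67, 73, 77, 80]   # steep

b_intro = slope(interactivity, intro_outcome)
b_adv = slope(interactivity, advanced_outcome)
# b_adv >> b_intro is the moderation pattern: interactivity
# matters mainly in the advanced learning phase.
```

In an actual analysis, the same pattern would be tested with an interaction term (interactivity × phase) in a regression model rather than by eyeballing two slopes.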
To summarize, we have the following five hypotheses about the core constructs of learning
outcomes, system interactivity, interactive learning activities, and learning phases.
Hypothesis 1: Higher levels of system interactivity (assuming a human instructor has the highest
level of interactivity) result in learners engaging in more interactive learning activities:
H1 a. in the advanced learning phase;
H1 b. in the overall learning process.
Hypothesis 2: The influence of system interactivity on learners’ interactive learning activities is
stronger in the advanced learning phase than in the introductory learning phase.
Hypothesis 3: Higher levels of system interactivity result in better learning outcomes,
specifically greater:
H3 a. actual learning performance in the advanced learning phase;
H3 b. actual learning performance in the overall learning process;
H3 c. perceived learning effectiveness in the advanced learning phase;
H3 d. perceived learning effectiveness in the overall learning process;
H3 e. satisfaction with interaction in the advanced learning phase;
H3 f. satisfaction with interaction in the overall learning process.
Hypothesis 4: The influence of system interactivity on learning outcomes is greater in the
advanced learning phase than in the introductory learning phase as measured by:
H4 a. actual learning performance;
H4 b. perceived learning effectiveness;
H4 c. satisfaction with interaction.
Hypothesis 5: Learners engaging in more interactive learning activities will have better learning
outcomes.
3. LVI – THE PROTOTYPE
Figure 3. User Interface of the Prototype System
To test the above hypotheses about the LVI model, a Web-based prototype was developed
using the rich media presentation technology, plus the core video-based QA technology. As
illustrated in Figure 3, in the version of the LVI prototype with the most interactive functions, the
Web-based user interface is divided into four cells or sections: 1) a video display of the instructor,
2) a PPT slide associated with the current video segment, 3) a text note that is the transcription of
the speech in the current video segment, and 4) an outline of all the topics in the lecture. The
content in the four cells is synchronized. Furthermore, learners may interact with the LVI system
in two ways. First, each topic in the outline is directly linked to the relevant video segment,
allowing learners to click on any link to review a specific topic. The outline can be viewed as a
pre-compiled list of questions, although it is less flexible than the second way of virtual
interaction, direct QA. A textbox in the center of the four cells is provided for learners to ask
questions. Once a question is submitted, a new window with the four-cell design will pop up to
present the potential answers. The answer video segment determined to be the “best fit” will be
automatically played, with its associated PPT slide and text note appearing in the other cells.
Other “close fit” video segments are also listed as alternative responses. Therefore learners can
immediately watch and hear the virtual instructor responding. In the answer window, the lecture
outline is replaced with the list of answer topics. If learners are not satisfied with the first answer,
they can click on the links in this answer list to view the other video answers.
Besides the interactions initiated by learners, the virtual instructor uses pop-up quizzes to
initiate interaction with learners. A QA box is also added into the quiz feedback page, so that if
learners find their answer is not correct, they can immediately ask the virtual instructor a
question.
Several video lectures have been created to be used in this prototype. In the study described
in this paper, a lecture (about 40 minutes long) about computer security concepts was used.
4. STUDY DESIGN
When investigating the LVI model, we conducted a field experiment in conjunction with
surveys and interviews. The experiment was a longitudinal pretest-posttest
comparison between one control group and five treatment groups (see research design in Table
2). The five treatment groups are listed below with a brief description for each. The acronyms
mean the following: VI-IN = Virtual Instructor – Instruction only, IN = Instruction, and QA =
Question Answering.
Control Group. Human Instructor: learners attended a face-to-face lecture from a human
instructor;
Treatment 1. VI - IN only: learners could only watch the instruction sequentially;
Treatment 2. VI – IN + Outline: learners could click on the links in the outline to change
topic when watching the instruction;
Treatment 3. VI – IN + QA: learners could ask questions when watching the instruction;
Treatment 4. VI – IN + Outline + QA: learners could not only click on the links in the
outline to change topic but also ask questions when watching the instruction;
Treatment 5. VI – IN + Outline + QA + Quiz: learners could not only click on the links in
the outline to change topic but also ask questions when watching the instruction. In addition, the
system could pop up questions in the middle of the instruction. Five pop-up quizzes were
embedded in the lecture, and they were mandatory: participants were required to
answer the pop-up questions before they could continue.
Table 2. Experimental Design
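The six conditions form a ladder of increasing system interactivity. One compact way to express the design (condition names and feature breakdowns paraphrased from the descriptions above; treating the human instructor as the interactivity ceiling is our reading of Hypothesis 1, and the numeric coding is purely illustrative):

```python
# Feature-set view of the six experimental conditions.
CONDITIONS = {
    "Human Instructor":            {"live_instructor"},
    "VI-IN only":                  {"video"},
    "VI-IN + Outline":             {"video", "outline"},
    "VI-IN + QA":                  {"video", "qa"},
    "VI-IN + Outline + QA":        {"video", "outline", "qa"},
    "VI-IN + Outline + QA + Quiz": {"video", "outline", "qa", "quiz"},
}

def interactivity_level(condition):
    """Count of virtual-interaction features beyond the base video;
    the human instructor is treated as the interactivity ceiling."""
    features = CONDITIONS[condition]
    if "live_instructor" in features:
        return float("inf")
    return len(features - {"video"})
```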
4.1. Participants
One hundred fifty-eight undergraduate students (59% male, 41% female) from an
Information Systems (IS) course at a southwestern university volunteered to participate in this
research study as an alternative to an assignment (the other alternative was to write a research
report for a class-related topic). They were told that the learning materials presented in this study
covered more advanced knowledge of the topic (computer security) that they were studying in the
IS course. Students could get full credit for this assignment as long as they actively participated
in all experimentation sessions. Sixty-nine percent of the participants reported their grade point
average (GPA) as between 3.0-4.0 on a 4.0 scale, while thirty-one percent of the participants
reported their GPA as between 2.0-3.0.
4.2. Procedures
Participants signed up for this research study through a website, and they chose to attend one
of six available time slots. The six time slots were randomly assigned to the six groups (five
treatment groups and one control group). All groups used the same computer lab for the research
study, including the Human Instructor group (the control group).
The study began with a researcher (one of the authors) describing the study design and
procedures to the student participants. Participants in each of the five treatment groups were then
given a quick demonstration of the particular version of the prototype LVI system (treatment)
they would use. Participants in each group participated in two learning sessions with the same
treatment. The first session was designed to be introductory, non-task-oriented learning. In this
session, participants first filled out a pre-experiment survey (30 min total, including demographic
information and surveys that measured learners’ learning motivation/strategy/styles), and then
they completed a pre-test that had 10 multiple-choice questions about the computer security
concepts (15 min). Next, they received the computer security lecture in their particular treatment
condition (50 min; with human instructor or using the LVI prototype). The treatment time was
longer than the lecture time (about 40 minutes) so that participants could have enough time to
interact with the (virtual) instructor. Finally, they completed a post-test that contained the same
questions as the pre-test, but with the order of both the questions and the answer choices
changed (15 min). After the post-test, the participants answered a post-experiment
questionnaire (15 min, including questions about perceived learning effectiveness and other
factors). In addition, the participants’ activities when using the LVI system were recorded to a
system log file, and the activities of the participants in the Human Instructor group were
videotaped.
The second session was designed to be advanced, task-oriented learning. Two weeks after the
first session, the same participants were asked to attend the second session, and they were
required to complete an open-ended assignment with more in-depth questions about the
computer security concepts. They first took a 15-minute pretest (a so-called delayed posttest) on
their knowledge about computer security; next they completed the assignment in 50 minutes
either by asking the instructor directly (in the control group, only one participant could ask a
question at a time) or by using the system (in the other five treatment groups), and then they
completed another posttest (15 min). Finally, participants completed the same post-experiment
questionnaire as in the first session (15 min). Again, the participants’ activities when using the
LVI system were recorded to a system log file, and the activities of the participants in the Human
Instructor group were videotaped.
4.3. Measures and Instruments
4.3.1. Independent Variables for the Experiment
1) System Interactivity
System interactivity increased from treatment group 1 to treatment group 5. We also expected
a human instructor to provide the highest level of interactivity in the learning process.
2) Learning Phases
Each participant went through two learning phases. The first, the class session, was the
introductory learning phase; the second, the assignment session, was the advanced learning phase.
4.3.2. Dependent Variables
1) Learning Activities
There were three types of interactive activities recorded in the system log file for the five
treatment groups: 1) asking a question, 2) switching topic using the outline, and 3) switching
topic using the navigation buttons. This variable was measured by the total number of the
participant’s interactive activities during a session.
2) Learning Performance
The participant’s actual learning performance was measured by his or her percentage
accuracy score on the post-tests (10 multiple-choice questions on the participant’s knowledge of
computer security), as compared to the score on the pre-tests. Participants took tests of the
same format four times across the two sessions; the order of the questions and the response
choices was varied for each test.
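As a minimal sketch of this scoring (the participant's answer counts are hypothetical; the 10-question test length is from the study design), the pre-to-post percentage gain can be computed as:

```python
def performance_gain(pre_correct: int, post_correct: int, n_questions: int = 10) -> float:
    """Percentage-accuracy gain from pre-test to post-test."""
    pre_pct = 100.0 * pre_correct / n_questions
    post_pct = 100.0 * post_correct / n_questions
    return post_pct - pre_pct

# Hypothetical participant: 4/10 correct before the lecture, 7/10 after
gain = performance_gain(pre_correct=4, post_correct=7)  # 30.0 percentage points
```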
3) Perceived Learning Effectiveness
The participant’s perceived learning effectiveness (EFFECT) was measured by an eight-item
scale in the post-experiment questionnaire adapted from Alavi (1994) (Cronbach’s alpha = .88 in
session 1 and .91 in session 2). For all items in the post-experiment questionnaire, participants
rated themselves on a five-point Likert scale ranging from “strongly disagree” (scale = 1) to
“strongly agree” (scale = 5). An individual’s score on the scale was computed as the mean of the
eight items (Alavi 1994). A high score on this scale meant that the participant thought he or she
learned effectively in this study.
4) Satisfaction with Interaction
This variable (INTER) concerned whether or not participants were satisfied with their
interaction with the virtual instructor (or the human instructor in the control group). It was
measured by a self-developed, four-item scale in the post-experiment questionnaire (Cronbach’s
alpha = .82 in session 1 and .88 in session 2). Again, an individual’s score on the scale was
computed as the mean of the items that made up that scale. A high score on this scale meant
that the participant was satisfied with the (virtual) interactions in learning.
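For illustration, the scale scoring and reliability computations described above can be sketched as follows; the response matrix is hypothetical, with the four items mirroring the INTER scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def scale_score(items: np.ndarray) -> np.ndarray:
    """Individual scale score: the mean of the scale's items."""
    return items.mean(axis=1)

# Hypothetical five-point Likert responses to a four-item scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(responses)   # scale reliability
scores = scale_score(responses)     # one score per participant
```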
Besides the quantitative data collected in the experiment, qualitative data were collected from
open-ended questions in the post-experiment questionnaires; these data were used to help
interpret the quantitative results.
5. ANALYSIS AND RESULTS
5.1. Quantitative Results
5.1.1. Quantitative Results about Interactive Learning Activities (H1 and H2)
Table 3 lists the means and standard deviations of participants’ interactive learning activities
in each session.
Table 3. Means and Standard Deviations of Interactive Learning Activities
As mentioned earlier, all participants’ learning activities were recorded into a system log file
for the five treatment groups. However, in the first group, the “IN only” group, the participants
had no access to the interactive system functions such as QA and outline. Therefore, only four
groups had interactive learning activities recorded in system logs, and our analysis was based on
these four groups.
Repeated measures analysis indicated that participants’ interactive learning activities differed
for the factor Session, F(1, 103) = 203.833, p < .005, partial η² = .664; the factor
Treatment, F(3, 103) = 4.089, p = .009, partial η² = .106; and the interaction Session*Treatment,
F(3, 103) = 4.069, p = .009, partial η² = .106. To investigate the differences among the
treatment groups, a planned, reverse Helmert contrast analysis³ was conducted on the
Treatment factor for both session 2 and the overall learning process. Results indicated that in
session 2, participants in group 3 engaged in significantly more interactive activities than did
participants in group 2 (p = .018), but participants did not further increase their interactive
activities in groups 4 and 5. However, for the average interactive learning activities in the overall
learning process (across two sessions), it was the participants in group 5 that engaged in
significantly more interactive activities than did participants in group 2, 3 and 4 (p = .006).
Therefore hypothesis 1 was only partially supported. In addition, the Session*Treatment
interaction indicated that although all groups had more interactive learning activities in the
advanced learning session, the increase was different among groups. A planned, reverse Helmert
contrasts analysis conducted on the difference between the two sessions indicated that
participants in group 3 had a significantly greater increase in interactive activities than did
³ This contrast analysis compares the mean of the first or last group to the mean of the other groups, and then
conducts similar comparisons among the remaining groups. In this case, it compared the mean of group 5
(IN+QA+Outline+Quiz) with the mean of groups 2 (IN+Outline), 3 (IN+QA), and 4 (IN+QA+Outline); then the
mean of group 4 with the mean of groups 2 and 3; and finally the mean of group 3 with the mean of group 2.
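This contrast scheme can be sketched as a coefficient matrix, in which each row compares one group against the mean of the groups preceding it (the mapping of rows to the study's groups is illustrative):

```python
import numpy as np

def reverse_helmert_contrasts(k: int) -> np.ndarray:
    """(k-1) x k matrix of reverse Helmert (difference) contrasts:
    each level is compared with the mean of all preceding levels."""
    C = np.zeros((k - 1, k))
    for i in range(1, k):
        C[i - 1, :i] = -1.0 / i  # mean of the preceding groups
        C[i - 1, i] = 1.0        # the current group
    return C

# Four system groups with logged activities (groups 2-5 in the study)
C = reverse_helmert_contrasts(4)
# The last row contrasts the last group against the mean of the first three:
# [-1/3, -1/3, -1/3, 1]
```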
participants in group 2 (p = .018), but participants did not have a greater increase in groups 4 and
5. Therefore hypothesis 2 was also only partially supported. A possible interpretation of these
results is that if participants had enough background knowledge to form specific questions (e.g.,
in the advanced learning session), they tended to use the QA function more than the outline
function. Therefore, when both the QA and outline functions were available, adding the outline
function did not trigger more interactive learning activities in the advanced learning session. The
inclusion of pop-up quizzes in group 5 increased the absolute amount of interactive activities,
since these quizzes were mandatory. However, with the same time allotted, it might also have
reduced the amount of other learner-initiated interactive activities in the advanced session.
Another important observation was that, although we assumed that the human instructor
would be able to provide the most interactivity, in both sessions we found that there were very
few learner-initiated interactions between the participants and the instructor in the classroom. In
the first session, two participants asked the live instructor a total of three questions at the end of
the lecture; while in the second session, only four participants asked the instructor any questions
when completing the assignment. This was clearly contrary to our expectation. The
qualitative results from the post-experiment, semi-structured interviews helped explain this
circumstance. When asked, “Why didn’t you ask the instructor questions when you did not
understand the topic?” the most common responses were, “I thought maybe another student
would ask this question,” “I didn’t want to appear stupid if the other students all understand
what I wanted to ask,” “I didn’t want to disturb the whole class,” and “I didn’t want to wait in
the line to ask a question.” Therefore, combined with our findings about the interactive activities
in the treatment groups, we concluded that virtual interactions can remove this psychological
barrier and enable more direct question-and-answer processes in learning.
5.1.2. Quantitative Results about Learning Performance (H3.a/b & H4.a)
Table 4 lists the means and standard deviations of participants’ learning performance test
scores in each session. Although percentage scores are presented here for easier interpretation,
an arcsine transformation was performed on all percentage scores to improve the equality of
variance (Broota 1989) before conducting any statistical tests.
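A minimal sketch of this variance-stabilizing transform, assuming scores are expressed as proportions in [0, 1] (the example scores are hypothetical):

```python
import numpy as np

def arcsine_transform(proportions):
    """Arcsine square-root transform for proportion scores,
    commonly used to improve homogeneity of variance."""
    p = np.asarray(proportions, dtype=float)
    return np.arcsin(np.sqrt(p))

# Hypothetical proportion-correct test scores
transformed = arcsine_transform([0.0, 0.25, 0.5, 1.0])
```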
A 6 × 2 × 2 repeated measures analysis was conducted to test the hypotheses. Results showed
that participants’ learning performance test scores were significantly different for the factor
Session, F(1, 146) = 126.467, p < .005, partial η² = .464; and the interaction Session*Prepost⁴,
F(1, 146) = 20.706, p < .005, partial η² = .124. This indicated that participants in all groups had
higher learning performance test scores in the advanced learning session, but the pre-to-post test
score gains differed between the two sessions. All groups had greater test score gains in the
first session than in the second session. This is understandable because in session 2 there
was less room for improvement. However, no significant treatment effect and no significant
interactions (including Session*Treatment, Prepost*Treatment, and Session*Prepost*Treatment)
were found through the repeated measures analysis. It seemed that all groups had similar
patterns of test score changes across sessions; thus hypothesis 4.a was not supported.
Table 4. Means and Standard Deviation of Learning Performance (% of Correct Responses)
As suggested by Bonate (2000), a more powerful Analysis of Covariance (ANCOVA) test
was then conducted to see if there were really no differences among treatment groups. Since we
hypothesized that the performance gain would be different among treatment groups in the second
session (H3.a) and in the overall learning process (H3.b), we conducted two ANCOVA tests,
respectively. To check the performance gain in the second session, we took the posttest in
⁴ The Prepost factor represents the four different pretests and posttests across the two sessions: pretest in session 1,
posttest in session 1, pretest in session 2, and posttest in session 2.
session 2 as dependent variable and the pretest in session 2 as covariate. To check the
performance gain in the overall learning process, we took the posttest in session 2 as dependent
variable and the pretest in session 1 as covariate.
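A minimal sketch of this ANCOVA setup, using simulated data and the statsmodels formula API (the group labels, sample size, and effect sizes are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 25  # hypothetical participants per group
frames = []
for g in ["G1", "G2", "G3", "G4", "G5", "Human"]:
    pre = rng.uniform(0.3, 0.7, n)                           # pretest proportion correct
    post = np.clip(pre + 0.2 + rng.normal(0, 0.1, n), 0, 1)  # posttest proportion correct
    frames.append(pd.DataFrame({"group": g, "pretest": pre, "posttest": post}))
data = pd.concat(frames, ignore_index=True)

# ANCOVA: posttest as dependent variable, pretest as covariate,
# treatment group as the between-subjects factor
model = smf.ols("posttest ~ pretest + C(group)", data=data).fit()
ancova_table = anova_lm(model, typ=2)
```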
The results were indeed different from the repeated measures analysis results. There were
significant differences among groups in both session 2 (F(5, 145) = 2.274, p = .05, partial η²
= .073) and the overall process (F(5, 145) = 2.409, p = .039, partial η² = .077). In session 2, a
reverse Helmert contrasts analysis revealed that the human instructor group had a significantly
lower test score gain when compared to the mean of the five system groups. Similar results were
found for the overall learning process. The overall test score gain (posttest in session 2 – pretest
in session 1) in the human instructor group was significantly lower than the mean of the other
five system groups.
Therefore, the human instructor group, which was expected to have the highest learning
performance gain in both session 2 and overall (see hypothesis 3.a & 3.b), actually had the
lowest learning performance gain. Based on our observations of the participants’ actual
interactive learning activities in both sessions, we might argue that the human instructor in a
classroom setting actually provided the lowest level of interactivity, and thus hypotheses 3.a & 3.b might still
be partially supported. However, it is clear that the learning performance gain was not
significantly different among the five system groups.
5.1.3. Quantitative Results about Perceived Learning Effectiveness/Satisfaction with
Interaction (H3.c/d/e/f & H4.b/c)
Table 5 lists the means and standard deviations of the two variables from the self-report post-
experiment survey in each session.
Table 5. Means and Standard Deviations of Self-reported Learning Outcomes
A multivariate ANOVA test on the EFFECT and INTER measures showed that participants’
perceived learning effectiveness and satisfaction with the interaction had a weakly significant (at
the 0.1 significance level) difference between the two sessions, Wilks’ Λ = .961, F(2, 143) = 2.938,
p = .056, η² = .039. A follow-up univariate analysis of participants’ satisfaction with interaction
found no significant difference between sessions, no significant interaction Session*Treatment
(hypothesis 4.c was not supported), but significant differences among treatment groups
(between-subject main effect, F(5,144) = 12.490, p < .005). A reverse Helmert contrast analysis
found that the satisfaction with interaction was significantly higher in the human instructor group
than in the other five treatment groups both in session 2 (p < .005) and in the overall learning
process (p < .005). It also found that treatment group 3 (IN+QA) had significantly higher
satisfaction with interaction than did the mean of group 1 and group 2, both in session 2
(p = .001) and in the overall learning process (p = .010). However, there were no significant
differences among group 3, 4 and 5. Therefore, hypotheses 3.e & 3.f were only partially
supported. This result was not as expected but was quite similar to what happened with the actual
interactive learning activities.
A univariate analysis of participants’ perceived learning effectiveness, on the other hand,
revealed that perceived learning effectiveness was significantly lower in session 2 than in
session 1, F(1, 144) = 5.914, p = .016. One possible explanation for this finding is that
participants judged their learning effectiveness only on the amount of new knowledge they
received. Because participants learned the same content twice, they did not feel they learned new
knowledge in the second session and, therefore, had a lower perceived learning effectiveness. A
reverse Helmert contrast analysis found that overall the perceived learning effectiveness was
significantly higher in the human instructor group than in the other five treatment groups
(p < .005). However, we failed to find a significant difference among the five treatment groups
both in session 2 and in the overall learning process. Hypotheses 3.c & 3.d were again only
partially supported. In addition, no significant interaction Session*Treatment was found about
the perceived learning effectiveness; therefore, hypothesis 4.b was not supported.
Finally, a correlation analysis was conducted between interactive learning activities and
learning outcomes. However, no significant correlation was found, so hypothesis 5 was not
supported.
5.2. Qualitative Interpretations
The qualitative results from the open-ended responses in the post-experiment questionnaires
provide interpretations for our quantitative results. Responses to these open-ended questions
were analyzed by theme, using content analysis, in which verbal content was coded with a
quasi-objective scheme and then further analyzed using relative frequency statistics. The software
program CDC EZ-Text was used to support the coding and analysis of these open-ended
responses (Carey, Wenzel, Reilly, Sheridan, and Steinberg 1998). Below, we use the common
themes (code frequency >= 5 in each treatment group) and direct quotes of participant responses
(no typos were corrected) to support our interpretation of the quantitative results.
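The relative-frequency step of this coding scheme can be sketched as follows (the theme codes, response counts, and threshold application are hypothetical illustrations, not the study's actual codebook):

```python
from collections import Counter

# Hypothetical theme codes assigned to open-ended responses in one group
coded_responses = [
    ["self_paced", "qa"], ["self_paced"], ["qa", "outline"],
    ["self_paced", "qa"], ["self_paced", "ease_of_use"],
    ["qa"], ["self_paced"], ["qa", "outline"],
]

# Absolute frequency of each theme code across responses
counts = Counter(code for codes in coded_responses for code in codes)
n_responses = len(coded_responses)

# Keep only common themes (code frequency >= 5 within the group),
# reported as relative frequencies
common_themes = {code: count / n_responses
                 for code, count in counts.items() if count >= 5}
```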
5.2.1. Qualitative Results about Interactions
Similar to the quantitative results, we found from the qualitative results that the participants’
perception/satisfaction with interaction was different from their actual interactive learning
activities. First, the participants’ perception about interaction did not change along with their
actual behavior across sessions. Second, although many learners did not talk with the instructor
at all in the classroom, they still thought they had good interaction with the instructor; on the
other hand, while some learners did ask more questions in the LVI system, they did not feel they
were interacting.
From the qualitative responses, we found that the participants’ understanding of interaction
in learning was a bit different from what we expected. Many participants implied that interaction,
in their view, had to be a face-to-face process between live people, in which questions and
answers could be exchanged verbally as needed. Non-verbal communication features such as eye
contact and gestures, as well as the feeling of social presence, were considered important or
even essential elements of interaction. Therefore, even though the participants in the LVI
system with full functionalities could ask questions in plain English and see their questions
answered by the instructor talking back to them in video, they could not feel the personal
connection between themselves and the instructor in the video, and so they did not feel that
they were actually interacting with someone. A participant in group 4 clearly expressed this
sentiment in the following statement, when responding to a question comparing the LVI system to
traditional courses after the introductory session:
I enjoy seeing facial features and feeling the grandness of someones presence in real
life, it just feels different. The excitment a teacher can bring and how the teacher grabs
an entire classes' attention is far better. And spontaneous questions can be asked, and
this way other students might generate questions at the same time, this interaction is
not available in this system. I am more awake when I have someone talking to me or
when a class is actively participating. I didn't feel the need to pay attention with this
system..
Interestingly, even though we found that participants in the Human Instructor group did not
actually interact much with the instructor (most of them just sat there and listened to the lecture),
when asked, “Do you like your interaction with the instructor? Why?” most participants still said
that they liked the interaction with the instructor. One reason is that, instead of reporting their
actual interactive activities during the lecture, participants considered the mere potential for
interaction with a live instructor to constitute good interaction. In addition, even though the
participant himself or herself might not interact with the instructor during the lecture, he or she
could observe the interactions between other participants and the instructor and perceive the
interaction based on this observation. For example, two participants in the
Human Instructor group gave the following responses after the advanced session:
The interaction was good, as the instructor was open to any questions, he could answer.
There wasn’t much but i do like being able to personaly talk with them to answer my
questions.
On the other hand, many participants did not perceive the virtual interaction as good
interaction because of a limitation of the current technology: the answers were replayed segments
of the pre-recorded lecture. Most participants quickly realized that the virtual instructor could not
rephrase an answer based on their individual needs, nor could the virtual instructor answer
anything outside the boundaries of the lecture. Therefore, they felt the virtual instructor could not
really understand their questions or provide the tailored answers they needed. In addition,
participants in group 5 felt that the virtual instructor could not understand their learning status
and thus question them accordingly, because “The pop up questions were just automated, not
actual interaction.” To summarize, the participants did not feel that the virtual instructor was
aware of their presence in the interaction process, and they did not perceive dynamic,
spontaneous interaction initiated from the virtual instructor’s side. Therefore, many participants
did not accept virtual interaction as real, two-way interaction. For example, when responding to
the question about their interaction with the virtual instructor (after the advanced session), a
participant in group 4 stated:
No, I felt that I really didn’t have any. This was more of a one way interaction to me.
Some of my questions weren’t answered and so the virtual teacher wasn’t interacting
with me.
Besides explaining the discrepancy between reality and perception about the interactions, the
qualitative results also confirmed our other findings from the quantitative results. Consistent with
the quantitative results that partially supported hypotheses 3.e & 3.f, the qualitative results also
showed that participants perceived virtual interaction more positively when a higher level of system
interactivity was provided, especially in the advanced session. In particular, when QA functions
were available (groups 3, 4, and 5), participants frequently included QA among the reasons why
they liked the system, while only a few participants (fewer than 5 in each group) included the
outline or the pop-up quiz among those reasons. In addition, a few more participants (2 more in
group 3, 3 more in group 4, and 5 more in group 5) listed QA as one of the reasons they liked the
system in the second session than in the first session. In the detailed responses, participants
described how they liked the way that they could ask questions they might not feel comfortable
asking in class. Some even thought the virtual instructor was like a real professor. For example,
when asked whether they liked the virtual interaction in the system, a participant in group
4 stated the following after the introductory session:
Yes, of course. It is because I can really get the more correct and quick answer rather
than serching the internet or asking classmates.
Supporting hypothesis 3 from another perspective, we found that participants complained
about the lack of interaction when lower levels of system interactivity were provided. For
example, in group 1 and group 2, where the QA function was not available, many participants
complained that they could not ask the instructor questions. Typical comments echoed the
following observation by a participant in group 1 after the introductory session:
I did not like that you could not ask question about a certain topic if you had trouble
with it and you could not go back in the lecture to try to understand it better.
Another desired interactive functionality was the navigable outline. In group 1 and group 3,
where the hyperlinks on the outline were removed, many participants complained that they could
not quickly navigate through the lecture and interact with the learning content. Participants in
group 1 were extremely unhappy with the fact that they had to use the fast-forward/rewind
buttons. Therefore, although adding the outline hyperlinks might not make the participants like
the system more, removing them could result in inconvenience in learning and thus generate
negative feelings towards the system. For instance, a participant in group 1 wrote the following
response after the advanced session:
It is difficult to navigate through. I wanted to be able to use the outline to click my
way to various topics but I had to use fast forward. When I fast forwarded it would
stop at each topic in the lecture and pause to load information. If I wanted to go from
the beginning to the end, it would take a very long time.
We also asked participants to compare the two interactive functions – outline and QA – and
report their preferences. Considering the quantitative results showed that participants in group 4
and 5 (QA+Outline) perceived more interaction than those in group 2 (Outline only) and group 3
(QA only) did; we expected to see that participants preferred the combination of these two
functionalities for interacting with the virtual instructor. However, the qualitative results showed
that, although some participants did prefer the combination, most participants had their own
preference for one of the two functionalities. It seemed that participants who preferred QA
generally thought QA was a faster and easier way to find exact answers, along with related
topics. As a participant in group 4 reported after the advanced session:
Asking questions. Becuase 8 out of 10 I got an immediate answer from the ask function that
was correct, and it provided another link for more info. or some topic related to.
On the other hand, participants who preferred browsing mostly thought browsing was easier
than asking questions, especially when they had no specific questions in mind. For instance,
after the introductory session a student in group 5 reported:
...browsing the lecture outline after listening to the entire lecture. it was easier for me.
I did not feel like i was asking the right questions to get the right response when I did
ask questions.
Those who preferred the combination of the two functionalities found that each was useful in
different situations; therefore, using both was simply the best way to find specific information
in the system. A participant in group 4 described this strategy
clearly after the advanced session:
I used both the question and the outline. If I remembered exactly where the information
I was looking for was in the lecture then I used the outline, but when I was looking for
general information on a topic I used the ask feature.
5.2.2. Qualitative Results about Learning Effectiveness
Similar to the findings about interactivity, the participants’ perception of learning
effectiveness was the opposite of reality. Although participants in the human instructor group
had the fewest actual interactive learning activities and the lowest learning performance gain,
they had the highest perceived learning effectiveness. We also failed to find significant
differences among the five treatment groups for both the actual and perceived learning
effectiveness. Looking at the qualitative results, we realized that the self-paced control and
convenience of reviewing specific content commonly provided in all system groups were
actually the key factors that contributed to the difference between the effectiveness of the virtual
instructor and the human instructor in the advanced learning session. For example, when asked
why they liked the system, most participants listed the self-paced control provided in all system
groups as their top reason. Because the lecture was pre-recorded, they could pause, skip, or
replay any part of the lecture video, without the limitation of time or location. Even for the
instruction-only group (VI – IN only) where the participants had the least control over the lecture
(only standard pause, fast forward, and rewind buttons were available), they liked the system
because of this self-paced control. At the same time, participants also mentioned that they liked
the convenience of reviewing specific content in the system. There was not much difference in
the frequency of these two themes between sessions or among groups. Typical comments include
the following from a student in group 5 after the advanced session:
I like being able to move through the lecture at my own pace. In a classroom you
don’t get to ask the teacher to repeat as much as may be necessary. This program
allows for the student to actually get the answer they need.
Two other common themes across groups and sessions were that participants liked
1) the structured and synchronized multimedia lecture that integrated the lecture video, slides,
and notes; and 2) the ease of use of the system. For example, a student in group 1 after the
introductory session stated:
Having videos and powerpoint slides next to one another on the screen makes it seem
like attending a real lecture. Also, the outline for the presentation was helpful in
keeping me focused on exactly what I was supposed to be learning.
And a student in group 4 after the advanced session stated:
I thought the navigation was very easy and made it much better to learn with than a
traditional system where you would watch the whole thing and then have to find the
place you wanted to watch again.
These four themes partially explain why learning outcomes differed so little among the five
virtual instructor treatment groups. That is, the features of e-learning that mattered most to
participants in this study were not the interaction itself, but features available in every group,
such as the self-paced control and the convenience of reviewing specific content. In other
words, different levels of interactivity determine different levels of convenience in reviewing
specific content, but as long as this convenience exists, participants can learn better with a
virtual instructor than with a human instructor. Therefore, increasing the level of interactivity
mainly increases students' satisfaction with the learning process, not their learning
effectiveness.
Similarly, when asked to compare the LVI system to traditional lectures, participants who
preferred the LVI system over a traditional lecture were mostly attracted by the self-paced
control and convenience of the system, not by its interactivity. They still considered the
interaction in the learning system inferior to interaction with a human instructor in a traditional
course, mainly because the learning system removed much of the non-verbal and social side of
the interactions in learning. In addition, participants who preferred the LVI system typically
reported either that they were shy and did not like to interact with other people, or that they
faced constraints on attending classes regularly (e.g., a busy work schedule or a baby to take
care of). For example, a participant in group 5 described the following reason for preferring
the LVI system:
...this online system because of my personal learning and lifestyle. I liked the
integration of learning styles and I also have a child so it would be nice to do this at my
own pace.
Therefore, again, it is clear that although different levels of interactivity can affect
participants' satisfaction with the virtual learning system, the interaction matters less to
participants' learning effectiveness and their preference for the virtual learning system than
do other factors such as convenience and access. Participants' preference for the virtual
learning system may also be shaped by external factors such as their learning and life styles.
6. DISCUSSION AND IMPLICATIONS
To summarize, this research investigates a special type of interaction between a learner and a
rich media representation of an instructor, which is referred to as “virtual interaction” in the
context of this paper. The impacts of virtual interaction on the effectiveness of online learning
are studied, and an exploratory research model (the LVI model) is proposed to explain the
relationships among the following constructs: system interactivity, interactive learning activity,
learning phases, and learning outcomes. A series of studies was conducted to test this model.
Findings from both the quantitative and qualitative results indicate that, as we expected, the
virtual interaction technology did influence learners' interaction behaviors and did improve
learners' satisfaction with e-learning to some extent. Specifically, participants who were
provided the QA function engaged in more interactive learning activities and were significantly
more satisfied with their interaction with the virtual instructor than those who were provided
only the Outline function. In addition, learning phases did moderate the influence of system
interactivity. Although all participants engaged in more interactive learning activities in the
advanced learning phase than in the introductory learning phase, those provided the QA
function showed a significantly greater increase in interactive activities than those provided
only the Outline function. This indicates that, overall, the QA type of virtual interaction
enabled more actual interaction, and greater satisfaction with interaction, than the traditional
hyperlink type of interaction, particularly in the advanced learning phase, in which more
interaction was required.
However, some results were not expected. In particular, learners' satisfaction with interaction
diverged from their actual interactive learning activities when a human instructor was involved.
Most participants were more satisfied with their interaction with a human instructor than with
the virtual instructor system, even though they actually interacted more with the virtual
instructor. Possible reasons for this discrepancy include both the limitations of the current
technology and the social aspects of interaction. We think the technology limitations might be
mitigated by extending the answers with content extracted from the Web; however, the social
aspects of interaction with a human instructor will be difficult to fully simulate with current
virtual interaction technologies. Asynchronous collaboration may still be needed to
complement the virtual interaction.
Finally, we found that the impact of virtual interaction on learning effectiveness is limited.
Test score gains and perceived learning effectiveness did not differ significantly among the
treatment groups provided different levels of virtual interaction. Meanwhile, one interesting
finding is that the Human Instructor group had the lowest test score gain of all the groups, even
though its participants perceived the highest learning effectiveness. Our qualitative results
indicate that, rather than virtual interaction, it might be factors such as control over the
learning pace and convenient access to information that directly affect the learning
effectiveness of an e-learning system. Another possible explanation for the lack of significant
performance differences among the treatment groups is a limitation of the content in the
system. As many participants pointed out, because the virtual interaction in our experiment
was limited to replaying segments of the same lecture video, learners could not get their
questions answered in a different way. If learners did not understand the material the first time,
repetition may not help them understand it any better the second time. If, however, a learner
could search multiple videos from multiple instructors on the same topic, they would be able
to get reframed answers. Thus, despite the lack of significant results, we believe the current
technology still has the potential to improve performance if the content is enriched in this way.
The research findings reported in this paper have both research and practical implications.
From the research perspective, the findings not only confirm once more that e-learning
provides learning effectiveness comparable to classroom learning, but also, more importantly,
reveal the impacts of a specific technical feature (virtual interaction) that had not previously
been studied. Although some results are not as expected, they provide directions for future
research that may expand the current LVI model into a complete model for successful
e-learning. For
example, while system interactivity should be retained as a factor affecting learners'
satisfaction, the factors that directly affect learning effectiveness still need to be explored in
future work. We also hope that this paper provides an example and a starting point for
e-learning researchers on how to study e-learning problems in depth by considering the
underlying structures of e-learning environments, including instructional strategies,
information technologies, learners' psychological learning processes, and so on.
This research also provides suggestions for e-learning practitioners. Since virtual interaction
does not rely on the availability of human instructors, it can be more accessible, convenient,
and cost-efficient than interaction with human instructors. Although virtual interaction was not
found to directly affect learning performance, it is related to learners' satisfaction with
e-learning. We therefore suggest that adding virtual interaction will help reduce the dropout
rate of e-learning and attract more participants. However, the use of virtual interaction in
e-learning should be adjusted to the learning phase in order to achieve the best learning
outcomes. Specifically, in the introductory phase, simple hypermedia lectures may be enough
and may therefore reduce cost; in the advanced learning phase, when learners need to review
and reinforce earlier content, virtual interaction should be added to help learners quickly find
answers to their questions. Based on the observations in this study, however, virtual interaction
may not be sufficient for a more advanced, discussion-oriented class, where asynchronous
collaboration or face-to-face classroom discussion may still be needed. In that situation, an
appropriate combination of virtual and human instructor(s) may prove to be the best way
to learn.
Figure 1. The LVI Process Illustration
(IT: Instructional Technology; LA: Learning Activity)
Figure 2. The LVI Model
Page 36 of 38Information Systems Research
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
For Review
36
Figure 3. User Interface of the Prototype System
Table 1. Instructional Strategies

Introductory phase:
- Segment the learning content into small chunks, organize these chunks in logical
  structures, and sequence them from simple to complex
- Use multiple modes of delivery (e.g., audio, visuals, animations, etc.)
- Provide repetitive tests and feedback

Advanced phase:
- Include examples that relate to learners in the learning materials
- Have the learners control the learning agenda
- Support learners' interactions with instructors or peers
- Support learners' interactions with information and the environment
Table 2. Experimental Design

Treatments (six groups): Human Instructor; VI – IN only; VI – IN + Outline; VI – IN + QA;
VI – IN + QA + Outline; VI – IN + QA + Outline + Quiz

Procedure (identical for all treatments):
  Class Session: Pretests (45 minutes); Learning (50 minutes, including 10 minutes of
  review time); Posttests (30 minutes)
  Two weeks between sessions
  Assignment Session: Pretests (15 minutes); Finishing an assignment (50 minutes);
  Posttests (30 minutes)
Table 3. Means and Standard Deviations of Interactive Learning Activities

Group                              Session 1: N, mean (std)   Session 2: N, mean (std)
2 (IN + Outline)                   25, 6.44 (6.01)            25, 19.28 (10.50)
3 (IN + QA)                        24, 2.92 (2.90)            23, 27.54 (8.71)
4 (IN + QA + Outline)              28, 9.04 (4.95)            28, 25.04 (14.14)
5 (IN + QA + Outline + quizzes)    30, 11.50 (7.58)           30, 27.17 (13.40)
Total                              158, 6.35 (6.05)           152, 23.46 (12.27)
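As an illustrative sketch (not part of the original analysis), the session-to-session increase in interactive learning activities can be recomputed from the group means in Table 3; the dictionary keys are shorthand for the treatment labels above:

```python
# Recompute the increase in mean interactive learning activities from the
# introductory session (Session 1) to the advanced session (Session 2),
# using the group means reported in Table 3.
activities = {  # group: (Session 1 mean, Session 2 mean)
    "IN + Outline":                (6.44, 19.28),
    "IN + QA":                     (2.92, 27.54),
    "IN + QA + Outline":           (9.04, 25.04),
    "IN + QA + Outline + quizzes": (11.50, 27.17),
}
increase = {g: round(s2 - s1, 2) for g, (s1, s2) in activities.items()}
# Every group with the QA function increases by more than the Outline-only
# group, consistent with the moderating role of the learning phase.
```

The IN + QA group shows the largest jump (24.62 activities on average, versus 12.84 for the Outline-only group).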
Table 4. Means and Standard Deviations of Learning Performance (% of Correct Responses)

                              Session 1 (Class Session)        Session 2 (Assignment Session)
Group                         N    Pretest      Posttest       N    Pretest      Posttest
IN only                       26   28.8 (16.1)  72.7 (16.4)    22   50.9 (16.9)  80.5 (16.5)
IN + Outline                  25   29.2 (17.1)  68.8 (17.2)    25   43.2 (17.3)  74.0 (19.8)
IN + QA                       24   26.3 (15.0)  73.3 (17.4)    23   48.3 (19.7)  80.4 (14.3)
IN + QA + Outline             28   32.5 (21.0)  71.8 (17.7)    28   51.4 (18.6)  76.4 (14.5)
IN + QA + Outline + quizzes   30   29.0 (14.7)  66.3 (19.2)    30   44.7 (21.3)  76.7 (18.8)
Human Instructor              25   30.4 (21.7)  68.0 (18.3)    24   45.4 (21.3)  65.8 (20.2)
Total                         158  29.4 (17.6)  69.9 (17.6)    152  47.2 (19.3)  75.6 (17.9)
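The gain computation behind the discussion of learning performance can be sketched from the Session 2 means in Table 4 (an illustrative recomputation, not the original statistical analysis):

```python
# Pre-to-post gains (in percentage points) for the assignment session,
# computed from the Table 4 group means.
session2 = {  # group: (pretest mean, posttest mean)
    "IN only":                     (50.9, 80.5),
    "IN + Outline":                (43.2, 74.0),
    "IN + QA":                     (48.3, 80.4),
    "IN + QA + Outline":           (51.4, 76.4),
    "IN + QA + Outline + quizzes": (44.7, 76.7),
    "Human Instructor":            (45.4, 65.8),
}
gain = {g: round(post - pre, 1) for g, (pre, post) in session2.items()}
# The Human Instructor group gains the least (20.4 points), despite
# reporting the highest perceived effectiveness in Table 5.
```

This makes the perception/performance discrepancy concrete: every virtual instructor group gains at least 25 points, while the Human Instructor group gains 20.4.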
Table 5. Means and Standard Deviations of Self-Reported Learning Outcomes
(EFFECT: perceived learning effectiveness; INTER: satisfaction with interaction)

                              Session 1 (Class Session)     Session 2 (Assignment Session)
Group                         N    EFFECT      INTER        N    EFFECT      INTER
IN only                       26   3.64 (.50)  2.68 (.82)   22   3.57 (.66)  2.69 (.75)
IN + Outline                  25   3.56 (.53)  2.91 (.77)   25   3.24 (.96)  2.43 (.92)
IN + QA                       24   3.68 (.68)  3.04 (.76)   22   3.54 (.58)  3.23 (.63)
IN + QA + Outline             28   3.47 (.68)  2.94 (.84)   28   3.55 (.69)  3.02 (.84)
IN + QA + Outline + quizzes   30   3.65 (.58)  3.06 (.66)   29   3.34 (.95)  2.83 (.90)
Human Instructor              25   3.86 (.72)  3.98 (.55)   24   3.68 (.69)  3.98 (.50)
Total                         158  3.64 (.62)  3.10 (.83)   150  3.48 (.78)  3.02 (.91)
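The perception/behavior gap discussed in the findings can be read directly off Table 5; a minimal sketch, assuming EFFECT and INTER denote the perceived-effectiveness and interaction-satisfaction scales:

```python
# Class-session self-report means from Table 5: (EFFECT, INTER) per group.
session1 = {
    "IN only":                     (3.64, 2.68),
    "IN + Outline":                (3.56, 2.91),
    "IN + QA":                     (3.68, 3.04),
    "IN + QA + Outline":           (3.47, 2.94),
    "IN + QA + Outline + quizzes": (3.65, 3.06),
    "Human Instructor":            (3.86, 3.98),
}
# The Human Instructor group tops both self-report scales, even though its
# objective score gain was the lowest (Table 4).
top_effect = max(session1, key=lambda g: session1[g][0])
top_inter = max(session1, key=lambda g: session1[g][1])
```

Both maxima fall on the Human Instructor group, which is exactly the discrepancy between perceived and actual effectiveness that the qualitative results set out to explain.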