Please cite: Lipman AJ, Sade RM, Glotzbach AL, Lancaster CJ, Marshall MF. The
Incremental Value of Internet-Based Instruction: A Prospective Randomized Study. Academic
Medicine 2001;76(10):72-6
The Incremental Value of Internet-based Instruction as an Adjunct to Classroom
Instruction: A Prospective Randomized Study
Andrew J. Lipman, MD, Robert M. Sade, MD, Angela L. Glotzbach, Carol J. Lancaster, PhD,
Mary Faith Marshall, PhD
Dr. Lipman is Clinical Associate, Department of Medicine, Tufts University School of Medicine.
Dr. Sade is Professor of Surgery, Department of Surgery, and Director, Institute of Human
Values in Health Care; Ms. Glotzbach was Administrative Assistant, Program in Bioethics; and Dr.
Lancaster is Associate Professor, Department of Biometry and Epidemiology, all at the Medical
University of South Carolina, Charleston. Dr. Marshall is Bioethics Officer, Department of
General and Geriatric Medicine, University of Kansas Medical Center, Kansas City.
Address correspondence and requests for reprints to Robert M. Sade, MD, Department of
Surgery, Medical University of South Carolina, 96 Jonathan Lucas Street, Suite 409, P.O. Box
250612, Charleston, SC 29425; Phone: (843) 792-5278; Fax: (843) 792-8286; E-mail:
Presented at the American Society for Bioethics and Humanities Annual Meeting, October 30,
1999, Philadelphia, Pennsylvania
Purpose. Computer-based methods of instruction offer the possibility of helping medical
students to learn clinical skills and professionalism. Without rigorous documentation of its
pedagogic advantages, the utility of internet-based teaching is not solidly grounded. The authors
carried out a prospective, randomized study of educational outcomes, comparing a traditional
classroom course in clinical ethics with the same course supplemented by internet-based discussion.
Methods. Introduction to Clinical Ethics is a sophomore medical school course that teaches a
specific method for analyzing clinical ethical problems. One sophomore class was randomly
assigned to either classroom teaching alone (traditional group; n = 65) or classroom teaching
supplemented with internet-based discussion of cases illustrating ethical issues (internet
component group; n = 62). A final case analysis comprehensively evaluated students’
understanding of the analytic method taught in the course. Grades for both groups on the final
case analyses, which were rated by two external reviewers, were compared.
Results. The students’ understanding of ethical analysis, as measured by grades of external
reviewers on the final paper, was significantly higher for those in the course with the internet
component than it was for those in the traditional course (3.0 ± 0.6 and 2.6 ± 0.7, respectively; p < .005).
Conclusion. The study documents the incremental value of internet-based teaching of clinical
ethics to sophomore medical students.
Over the past decade, medical educators at all levels have increasingly incorporated computer-
based methods of instruction.1 The internet has been used to teach courses,2 as well as to publish
class schedules, syllabi, and student evaluations.3 Computer literacy and an overall interest in
computers have been studied in medical students, who consider a course on medical
computing in the undergraduate curriculum to be important.4,5
Few studies, however, rigorously explore the pedagogic advantages of internet-based
teaching over classroom-based teaching. Coulehan and co-workers reported anecdotal
experiences of students’ exposure to computer-based teaching modules.6 They offered an e-mail
program to medical students enrolled in a medical humanities course, which students used as a
discussion forum. Students evaluated the course with questionnaires and instructors assessed the
students’ case analyses, but there was no comparison of this teaching method with traditional classroom teaching.
We searched the medical education literature in Medline, using the search terms ‘medical ethics,’
‘medical schools,’ and ‘medical informatics,’ and found 6,607, 1,347, and 613 citations,
respectively. A further search of the Science Citation Index was carried out, using the most
pertinent articles from the previous search. We found no study that provided objective data
based on randomized studies concerning the value of information technology in medical
education. We therefore embarked upon a prospective randomized study of educational outcomes
in our medical school’s sophomore course entitled Introduction to Clinical Ethics (ICE),
comparing the traditional classroom format with the same format supplemented with internet-
based discussion. We reasoned that a prospective randomized trial would limit bias and provide
clear evidence for cause-and-effect relationships.
ICE is a required course in the College of Medicine of the Medical University of South Carolina.
It takes place in the fall semester of the sophomore year. All students use the same textbook of
clinical ethics, which describes and illustrates the use of a specific four-step method of
identifying ethical issues, analyzing them, and bringing them to resolution.7 Small groups of
nine to 12 students meet in a classroom with one or two instructors for two hours weekly during
an 11-week period to discuss an assigned chapter of the textbook. The goal of the course is to
teach medical students to use the four-step method to identify and resolve clinical ethical issues.
In the fall of 1998, all sophomore students in our medical school were randomly assigned to
one of two arms of the study: one traditional, and one with an internet component. Both groups
read textbook assignments for a two-hour classroom session each week, and all students wrote a
detailed case analysis at the end of the course. In addition to this shared format, students in the
traditional arm were required to write a mid-term case analysis. Students assigned to the arm that
contained an internet component were required to regularly visit an internet site housing WebCT,
an internet-based application designed for university-level courses. We found the variety of tools
WebCT provides to be useful for teaching clinical ethics to small groups of medical students,
particularly its capability to stratify discussions hierarchically by topic and by thread, and to
separate discussions by group and by ethical case. Details of its structure and function can be
found at its website.8 During the course, students in the course with the internet component used
the application to discuss, within the structure of the four-step method, a series of four cases
involving substantial ethical problems. Access to the site was password protected, and
appropriate levels of privacy were provided on each page for students and instructors. Traditional
students were not assigned a password and could not participate in the web discussions.
The outcome measures were students’ final grades in the course (scale 1.0 - 4.0), quality of
students’ classroom participation, grades given by instructors and by external reviewers on the
final written case analysis (scale 1.0 - 4.0), and subjective evaluations of the course by both
students (using our university’s standard course evaluation instrument) and faculty (using an
evaluation instrument developed for this study). The final grade itself was an aggregate of small
group participation (25%), mid-term case analyses or WebCT participation (35%), and
instructors’ grades of final case analyses (40%). The faculty survey contained ten items regarding
subjective impressions about the course and the course’s format, the students’ work load, and an
estimate of the number of hours spent on the course each week. The students’ university-wide
course evaluation instrument allowed responses in a Likert-type format to ten items related to the
structure, content, and educational value of the course. It did not ask for an assessment of work
load. The most objective outcome measure was the external reviewers’ grading of the final case
analyses: the reviewers used specific criteria for content and form, were otherwise unconnected to
the course, and were blinded to the students’ group assignments. One was a philosopher from a local
liberal arts college, the other a professor of English. Neither was personally acquainted with any
student in the sophomore class.
The experimental protocol was granted exempt status by our Institutional Review Board.
We used a paired t-test to compare grades (from both instructors and external reviewers) for
the final case analyses, and to compare the external reviewers’ case analysis grades for each
student with those of the instructors. We measured reliability between (1) the external reviewers'
grades and (2) the instructors’ grades and the average of the external reviewers’ grades with
Cronbach’s alpha. We used Student's t-tests to compare the two study groups for each item on the
students’ subjective evaluations of the course, as well as for the aggregate score of the ten items,
and to compare the two study groups for final course grades and small-group participation grades.
Finally, we analyzed the faculty’s subjective evaluations of the course using t-tests to assess
differences between those teaching the traditional course and those teaching the course with the
web component. Wherever appropriate, the alpha level of .05 was adjusted for the number of
comparisons (Bonferroni method).9
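The paired t statistic and the Bonferroni adjustment used above can be sketched in a few lines of plain Python. The grade lists below are invented for illustration, not the study's data, and the helper names (`paired_t`, `bonferroni`) are our own:

```python
import math

def paired_t(x, y):
    """Paired t statistic for two score lists matched by student."""
    assert len(x) == len(y)
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample variance of the within-pair differences (n - 1 denominator).
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

def bonferroni(alpha, m):
    """Per-comparison alpha level adjusted for m comparisons."""
    return alpha / m

# Hypothetical grades (1.0-4.0 scale) for four students:
# instructor grades vs. averaged external-reviewer grades.
instructor = [3.0, 2.5, 4.0, 3.5]
external = [2.5, 2.5, 3.5, 3.0]
t_stat = paired_t(instructor, external)  # positive when instructors grade higher
per_test_alpha = bonferroni(0.05, 10)    # e.g., ten survey items compared
```

The t statistic would then be referred to a t distribution with n - 1 degrees of freedom, and each of the ten survey-item comparisons would be tested at the adjusted level rather than at .05.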
There were no significant differences between the traditional (n = 66) and the internet component
(n = 64) groups in the means of their final course grades (3.7 ± 0.4 and 3.6 ± 0.3, respectively),
small group participation grades (3.5 ± 0.5 and 3.6 ± 0.3), or final case analysis grades from
instructors (3.6 ± 0.5 and 3.6 ± 0.5). The frequency distribution of instructors’ grades on the final
written case analysis was concentrated at the high end of the scale (Figure 1A), while that of the external reviewers’
grades, averaged for each student, more closely resembled a normal distribution (Figure 1B). We
noted, incidentally, that the instructors of two small groups (both in the traditional arm) awarded
a large number of 4.0 grades for the final case analyses (16 of 22 students, 73%).
The case analysis grades from external reviewers were significantly lower than were those of
the instructors (2.8 ± 0.7 and 3.6 ± 0.5; p < .0001). The final case analysis grades from external
reviewers were significantly higher for students in the course with the internet component than
they were for students in the traditional group (3.0 ± 0.6 and 2.6 ± 0.7, respectively; p < .005).
This was true of the reviewers’ grades individually and in aggregate.
Reliability between the two external reviewers for their grades on the final case analysis was
alpha = .62. The reliability of instructors’ grades and the average of the two external reviewers
was alpha = .63, and the correlation was r = .48 (p < .05).
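For readers unfamiliar with these reliability measures, a minimal sketch of Cronbach's alpha and the Pearson correlation follows. The reviewer scores are hypothetical and do not reproduce the coefficients reported above:

```python
import math

def cronbach_alpha(raters):
    """Cronbach's alpha across raters; `raters` is a list of score
    lists, aligned so that position i in each list is one student."""
    k = len(raters)

    def svar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each student's total score summed across raters.
    totals = [sum(col) for col in zip(*raters)]
    return k / (k - 1) * (1 - sum(svar(r) for r in raters) / svar(totals))

def pearson_r(x, y):
    """Pearson correlation between two aligned score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical final-case-analysis grades from two external reviewers.
reviewer_1 = [3.0, 2.5, 4.0, 3.5]
reviewer_2 = [2.5, 2.5, 3.5, 3.0]
alpha = cronbach_alpha([reviewer_1, reviewer_2])
r = pearson_r(reviewer_1, reviewer_2)
```

With only two raters, alpha and r move together: close agreement between the reviewers pushes both toward 1, as in this illustrative data set.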
There were no significant differences in responses to the course evaluation survey between
the students in the two groups. Of the 27 instructors in the course, 18 responded to the survey.
There was only one significant difference between the instructors of the two groups in their
subjective evaluations. Instructors who taught the course with an internet component (n = 9)
reported greater hours of preparation per week for the course than did instructors of the
traditional course (n = 9) (5.8 ± 1.7 and 3.7 ± 1.3 hours a week, respectively; p < .01). The
instructors rated the item “My students had more work to do than students in the other course
format,” (scale of 1 - 5, internet component 2.67, traditional 2.75, p = .83) the same for both
formats. The standard course evaluation instrument for students did not ask for an assessment of work load.
Information technology in many forms is being used increasingly in higher education. It seems
important, therefore, that its value to the educational process be reliably documented. Prospective
randomization has long been used to produce objective evidence for other types of educational
interventions, yet our extensive literature search found no studies that used this method to
evaluate any educational method involving information or internet-based technologies.
One of the difficulties of attempting to study the value of educational methods is identifying
objective outcome measures, especially when the educational goal is to teach a process rather
than a large accumulation of facts that can be tested easily with a pencil-and-paper examination.
The goal of ICE is to teach medical students to use a specific method of case analysis for ethical
questions; therefore, the best way to evaluate how well they learned this method seemed to be a
requirement for a case analysis. Grades of narrative work provided by instructors tend to be
unreliable, as our study illustrates: instructors know the students well and may be biased by
personal factors unrelated to the analysis itself. For example, in two small groups, nearly three
quarters of the grades were 4.0, the highest possible score. Moreover, the frequency distributions
of instructors’ grades for small-group participation, for the course, and for the final case analysis
were sharply biased toward the high end of the grading spectrum.
At the outset of the study, we believed that the most objective outcome measure would be
students’ grades on the final case analysis graded by external reviewers. Our data confirmed this
belief. The external reviewers’ grades had a broad frequency distribution, and had significant
interrater reliability. Students in the course with the internet component had higher grades, as
assessed by the external reviewers’ ratings, than did students in the traditional course, indicating
the former had better mastery of the four-step analytic method. This result documented clearly
that traditional teaching supplemented by a structured discussion of several ethical cases using
the WebCT internet-based program significantly improved the students’ understanding of ethical analysis.
We suspected that the differing backgrounds and perspectives of a philosopher and an
English professor might have led them to grade the case analyses differently, even though they
were both provided the same criteria for grading. Our data showed, however, that students in the
course with the internet component received higher grades from each external reviewer than did
students in the traditional course (with a moderately high degree of interrater reliability), as well
as from both combined. The uniformity of this difference between the groups adds further weight
to the validity of our conclusion: the addition of an internet-based discussion improved
understanding of a particular analytic method for clinical ethics.
The amount of time instructors spent on the course was roughly estimated retrospectively.
The faculty who taught with the internet component recalled spending 57% more time on the
course than did the instructors of the traditional course. It seems likely that the increased time
used by the faculty who taught with the internet component was related to the fact that they
facilitated the classroom discussion, as did the traditional instructors, but additionally they
facilitated the internet-based discussion. Interpretation of this observation is complicated by the
fact that the instructors were not randomly assigned, but self-selected the type of format they
wanted to teach.
The structure of our study merits comment as well. We designed the two arms of the study to
require approximately equal amounts of work by students in each group. Students in the
traditional class had to write a case analysis in the middle of the course that was not required of
the students in the course with the internet component. On the other hand, the internet students
had to participate in much less formal but ongoing discussions on WebCT throughout the course.
Our attempt probably succeeded because the instructors in both formats noted in their post-
course survey that they believed their students did no more work than did the other students.
Although the amount of work was judged by the faculty to be about the same, WebCT was
clearly more effective in helping the students to learn how to use the four-step method of case analysis.
Figure 1. Frequency distribution of grades on final case analysis. A) Grades by course instructors.
B) Grades by external reviewers.
1. Stevens R, Reber E. Maximizing the Utilization and Impact of Medical Educational Software
by Designing for Local Area Network (LAN) Implementation. Proceedings of the Annual
Symposium on Computer Applications in Medical Care. 1993; 781-5.
2. Salas AA, Anderson MB. Introducing Information Technologies into Medical Education:
Activities of the AAMC. Acad Med. 1997;72:191-3.
3. Pallen MJ. Medicine and the Internet: Dreams, Nightmares and Reality. Br J Hosp Med.
4. Kidd MR, Connoley GL, McPhee W. What Do Medical Students Know about Computers?
Med J Austral. 1993;158:283-4.
5. Roberts LW. Sequential Assessment of Medical Student Competence with Respect to
Professional Attitudes, Values, and Ethics. Acad Med. 1997;72:428-9.
6. Coulehan JL, Williams PC, Naser C. Using Electronic Mail for Small-group Curriculum in
Ethical and Social Issues. Acad Med. 1995;70:158-60.
7. Fletcher J. Introduction to Clinical Ethics (2nd edition). Frederick, MD: University
Publishing Group, Inc.; 1997.
8. Goldberg MW. Communication and Collaboration Tools in World Wide Web Course Tools
(WebCT). Proceedings of the conference Enabling Network-Based Learning, 1997 May 28-
30; Espoo, Finland. Available from: URL: http://www.webct.com/library/comm.html
9. Bailar III J. Medical Uses of Statistics. Massachusetts: NEJM Books; 1986.