Journal of the Scholarship of Teaching and Learning, Vol. 9, No. 1, January 2009, pp. 13 - 26.
An empirical study of personal response technology for improving
attendance and learning in a large class
Abstract: Student evaluations of a large General Psychology course indicate that
students enjoy the class a great deal, yet attendance is low. An experiment was
conducted to evaluate a personal response system as a solution. Attendance rose
by 30% as compared to extra credit as an inducement, but was equivalent to
offering pop quizzes. Performance on test items targeted by in-class questions
rose by an average of 21% while control test questions rose by only 3%. The
effect is seen in both factual and conceptual test items. Two theories that may
explain the effect are discussed.
Key Words: Personal Response System, Classroom Technology, Clickers,
Attendance, Test Performance, Learning, Methodology.
It’s a problem familiar to many professors. Each semester my students wrote glowing
reviews of my General Psychology course, rating the class a mean of 4.5 on a 5-point scale. In
spite of the good reviews, large numbers were absent from class on any given day. This
mismatch between students’ perceptions and behaviors is most often found in large classes
where attendance is not easily monitored and students are largely anonymous. Indeed, the
problem became pronounced after my class size increased from 60 to 210 students. The student
evaluations remained very positive but attendance was reduced to roughly 50-60% on any given
day. Moreover, when students were in attendance many were inattentive, either dozing or
otherwise occupied for at least part of the class period. In speaking with colleagues at campuses
across the country, I found the problem to be common. The attendance and attention problem is
directly related to learning because students aren’t learning the classroom material if they aren’t
in class. I increased attendance to roughly 80% by giving pop quizzes throughout the semester.
The system was cumbersome, however, as handing out and collecting tests from hundreds of
students took a lot of time away from class. Grading them and inputting scores to grade books
also proved time consuming.
I found a partial solution, however, in the use of a personal response system (PRS). PRS
facilitates presentation of multiple-choice questions into any class equipped with a digital
projection system. Because I was already using PowerPoint to present all my lecture material, the
technology integrated naturally into my classroom. PRS requires students to purchase a remote
(commonly called a “clicker”) that allows them to “click in” responses, which are recorded by a
receiver. Questions can be used to check comprehension or promote discussion. With the
instructor’s remote, a button click allows instant projection of class responses to provide
immediate feedback. Uploading students’ responses to a grade book also requires just a simple
button click by the instructor.

1 Psychology Department, University of Massachusetts Dartmouth, 285 Old Westport Road, North Dartmouth, MA 02747-2300.
PRS use in large classes has become very common nationwide but research on its
learning effects is both sparse and inconclusive. Some investigations show positive effects of
PRS on learning while others do not. As I will show in the following section, the underlying
reason for differential findings may stem from methodological issues. For this reason, the present
investigation focuses on the attendance and learning effects of PRS. It uses a methodology that
offers a more fine-grained view of the learning effects in the hope of explaining and clarifying
some of the literature’s contradictory results.
A. Research on PRS Effectiveness.
PRS systems have been used for a variety of purposes including teaching case studies
(Herreid, 2006; Brickman, 2006), replicating published studies in class (Cleary, 2008) and
electronic testing (Epstein, Lazarus, Calvano, Matthews, Hendel, Epstein and Brosvic, 2002).
Based on published reports, however, the most common use appears to be during lectures for
assessing students’ comprehension of class material in real time and improving participation and
attendance (Beekes, 2006; Poirier and Feldman, 2007; Shih, Rogers, Hart, Phillis and Lavoi,
2008). The latter function has also been the focus of more scrutiny.
A number of studies have attempted to test the effect of PRS on attendance and
participation. Student volunteers using PRS in a controlled laboratory study were significantly
more likely to participate than students asked to raise their hand or use laminated response cards
to indicate responses to instructor questions (Stowell and Nelson, 2007). In a case study of PRS
in a large introductory biology course, Ribbens (2007) reported an attendance increase of 20%
after the technology was introduced in his course. PRS was also shown to enhance student
participation in classes as part of an institution-wide evaluation across disciplines (Draper and
Brown, 2004). One of the strongest effects of the technology in that study was its ability to
promote class discussion among students. The increased participation may come, in part, from
student enjoyment of the technology. Indeed, a common finding among PRS studies is that
students enjoy the technology in class. For example, Hatch, Jensen and Moore (2005) report that
96% of students enrolled in their anatomy and environmental science courses agreed or strongly
agreed that they liked using the technology. It is highly likely, though, that participation effects
also stem from using PRS to determine required participation grades.
Not all reports have shown a clear improvement in student participation, however.
Morling, McAuliffe, Cohen and DiLorenzo (2008) compared outcomes of two classes using PRS
with two classes that were not. Two instructors each taught one PRS class and one no-PRS class. In
the PRS classes, the technology was used at the start of each class to quiz students on assigned
readings. They found that one instructor’s PRS class rated attendance as more important than the
no-PRS class, but ratings were comparable between the other instructor’s classes. Neither PRS
class reported being more engaged or attentive in class than their matched no-PRS class. It is
important to note, however, that the PRS questions were scored for extra credit and not as a
required component of the course. Moreover, PRS use in this study was limited to tests given at
the start of class and only probed memory for the assigned reading, not for in-class material.
There is no reason to expect that testing students about outside reading in the beginning of class
would cause students to be more attentive or interested in lecture material during class. Indeed,
in a discussion of recommended best practices, Ribbens (2007) suggests integrating PRS
throughout the class and using it as part of the graded requirements.
The learning benefit of PRS is another important issue that has been addressed in the
empirical literature. In an assessment of students’ self-perceptions of learning, Hatch et al.
(2005) report that 92% of students indicated PRS helped them identify what they did and did not
know and 83% indicated that the technology helped them learn. Of course, students’ self-perceptions
are not as accurate as more direct measures. In one study more directly measuring
the effect, Ribbens (2007) found that students in his introductory biology course did 8% better on
tests than his class two years prior, before adopting PRS. Morling et al. (2008) also reported higher
mean test scores on two of four tests in their PRS classes than in their no-PRS classes. Morling et al.’s
study is nicely controlled by its use of four classes counterbalanced between instructors and control
groups. However, the authors do not mention how many of the test questions were directly
related to the information targeted by the in-class PRS questions. The same question arises about
Ribbens’ (2007) findings. That information would be useful to understanding the extent of PRS
effect on students’ learning and understanding, as performance on questions not targeted by PRS
may be diluting the dependent measure.
Other investigations have yielded somewhat mixed results in their analyses of learning
with PRS. For example, Kennedy and Cutts (2007) found that the strength of the relationship
between PRS use and learning outcome measures hinged on how successful students were in
answering the PRS questions. Others have found no learning effects of PRS at all. Stowell and
Nelson (2007) gave laboratory subjects a simulated introductory psychology lecture and
compared test performance between groups asked to use PRS or do other sorts of participative
activities during the lecture. They found no differences between groups on learning outcomes
measures. Of course, the study was conducted in a laboratory so motivation to learn was
different than in a live classroom.
In sum, the majority of studies on PRS point to the technology as an effective
pedagogical tool and methodological issues appear to be a factor in those that do not.
Specifically, assessments of PRS do not always assure internal validity through careful control of
the relationship between in-class PRS use and the dependent measure. The hypothesis explored
in the present study is that learning measures targeted at specific PRS questions will demonstrate
a strong effect of the technology on learning. Before describing the study, the following section
explains more about PRS and how it was incorporated into my classroom.
B. PRS Use in the Present Classroom.
My General Psychology class is typical of most, covering roughly one chapter of a
textbook each week and spanning a cross section of the field. I use demonstrations, role-playing,
audio, video, and interactive activities to demonstrate points. All of my lectures are presented
with PowerPoint slides that are available on my website for students to download.
The system used in the present study was iClicker. The devices cost students $20-35
(depending on whether they came bundled with a text or were bought used). The clickers were
available at the campus bookstore or through Amazon.com. A receiver connected to the
instructor’s computer registers the responses. The iClicker company provides the instructor’s
hardware (a receiver and 2 instructor remotes) and software at no cost. The software runs
concurrently with PowerPoint, with a small function box floating on top of the slides in a place
of the user’s choosing. It allows a bar graph of the class responses to be displayed
instantaneously. Record keeping is also automated, as students’ scores are uploaded to a grade
book within seconds or entered into a text-only file that is transferred easily to an Excel file or
Blackboard grade book. Questions can be ungraded, assigned participation points for entering
any response or assigned points for correct responses only. Earned points can be factored into
final grade calculations or used for extra credit.
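The three grading options just described map onto a simple per-response scoring rule. The sketch below is purely illustrative; the function and mode names are my own shorthand for the policies above, not part of the iClicker software:

```python
def score_response(response, correct_answer, mode):
    """Points for a single clicker response under the three grading
    policies described above (names are illustrative, not iClicker's)."""
    if mode == "ungraded":
        return 0
    if mode == "participation":        # any response earns credit
        return 1 if response is not None else 0
    if mode == "correct_only":         # only the right answer earns credit
        return 1 if response == correct_answer else 0
    raise ValueError(f"unknown grading mode: {mode!r}")
```

Under the "participation" policy a silent (absent) student earns nothing while any keyed response counts, which is what makes the policy useful as an attendance incentive.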
In my class, I present roughly 50-70 credited multiple-choice questions over the course
of the semester. The accumulated PRS points are scaled to a maximum score of 50 and
calculated into the final grade as 50 out of a possible 350 points. Other questions, however, are
not scored and are used solely to make a point or generate discussion. I typically present graded
PRS questions after making an important point, explaining a theory or presenting a research
finding, but only after soliciting questions and encouraging students to ask for clarifications.
Some questions are factual (e.g., What is the major difference between a punishment and a
negative reinforcer?) while others are more conceptual, requiring students to apply a principle
(e.g., Given what we know about the role of proximity and similarity in our attraction to others,
in which setting are you least likely to meet a new friend or your future spouse?). The PRS
question slides are omitted from the download files I make available to students.
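The scaling of accumulated PRS points described above is straightforward proportional arithmetic. A minimal sketch follows; the function name and the sample point totals (other than the 50-of-350 course weighting) are hypothetical:

```python
def scale_prs_score(points_earned, points_possible, max_scaled=50):
    """Scale a student's accumulated clicker points to a fixed maximum
    (here 50 of the course's 350 total points)."""
    if points_possible <= 0:
        raise ValueError("points_possible must be positive")
    return max_scaled * points_earned / points_possible

# e.g., 48 of 60 credited PRS points scales to 40 of the 50 available points
prs_points = scale_prs_score(48, 60)
```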
The purpose of incorporating PRS into the course this way was to improve attendance
and to enhance student learning. The study presented here was conducted to evaluate the
effectiveness of the approach. While the research summarized earlier was encouraging enough to
try out PRS in my class, it also convinced me that evaluations of PRS are particularly sensitive to
variables affecting external and internal validity. To maximize external validity, the study was
conducted in a live classroom. To maximize internal validity, I focused on the relationship
between the PRS questions and the assessment items during stimulus development. I also used
control items and control groups from prior semesters. Specifically, the effect on learning was
measured by pairing in-class PRS questions with specific test questions. Performance on the
targeted test questions was compared with test questions that were not paired with PRS items. In
addition, performance on the same test items in a prior semester that did not include PRS was
used as a baseline measure. The methodology is explained in detail below.
A. Participants.

Students enrolled in a 210-student General Psychology (PSY101) class at the University
of Massachusetts Dartmouth during fall 2007 comprised the experimental group. All but a
handful were traditional students, aged 18-21. The majority, 81%, were freshmen, 14%
sophomores, 4% juniors and 1% seniors. Because the course satisfies a university-wide
distribution requirement as well as requirements within several majors, students came from all
five campus colleges. Business majors made up 29% of the class, nursing 23%, and liberal arts
and sciences 40%. The rest were distributed between engineering and visual and performing arts.
Attendance and test scores of students registered in fall 2006, prior to the implementation of
PRS, were used as baseline measures to evaluate the performance of the fall 2007 class. The
majority were freshmen and sophomores, 84% and 14%, respectively. They represented all five
colleges, with the bulk coming from business, nursing and social sciences/humanities, 43%,
16%, and 36%, respectively. One other attendance measure was used from a class in fall 2005.
Students in that class were similar to the others in distribution of academic years (68% freshmen,
22% sophomores and the rest juniors and seniors) with most majoring in business, nursing and
liberal arts and sciences, 21%, 3% and 58%, respectively. An IRB waiver was obtained prior to
conducting the study.
B. Stimuli, Materials and Procedure.
The course taught to the PRS class was almost identical to the course taught to the No-
PRS class, including the assigned text, all lectures and PowerPoint slides, projected with an
Apple iBook G4. The difference was the addition of the PRS questions in the experimental
semester. In all classes, the course material used for the study spanned half the material covered
during the semester. This encompassed 6 chapters from the required text, covered on the second
and fourth of 4 tests during the semester. A total of 30 test questions were targeted for analysis,
five from each chapter. Of these 30 questions, 18 were factual, asking
about definitions, steps in a process, or other facts about the material. The other 12 questions
were conceptual, requiring application of the factual material to given situations. Although
students were not alerted to any relationship between the PRS items and the test questions, the
relationship was the independent variable used to create 3 experimental and 2 control conditions.
The test questions were chosen carefully in order to match, as closely as possible, their relative
degree of difficulty within each treatment condition. All PRS questions used for the study were
factual, asking only about basic information presented in class.
The 5 test questions from each chapter were each used in one of the study’s 5 conditions.
Sample items are provided in the Appendix. The Identical condition presented a factual test item
in class as a PRS question. The Reworded condition contained factual PRS and test questions on
the same topic, but the items were not identical to one another. Both the questions and the
response choices differed. The Conceptual condition included a factual PRS question in class
that probed the information relevant to a targeted conceptual test question. Conceptual test
questions required students to apply a principle or fact to a hypothetical situation. The Control-
Factual and Control-Conceptual conditions, respectively, presented factual or conceptual
questions on the tests but no in-class PRS questions relevant to their content. Six of each item
type (one from each chapter) were included in the study.2
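Given these five conditions, the planned comparison — performance on targeted versus control test items, each measured against the prior no-PRS semester's baseline — can be sketched as follows. All numeric values here are illustrative placeholders, not the study's data:

```python
from statistics import mean

# Per-item summary: (condition, percent correct in the PRS semester,
# percent correct for the same item in the no-PRS baseline semester).
# All values below are illustrative placeholders, not the study's data.
items = [
    ("Identical",          82.0, 60.0),
    ("Reworded",           74.0, 55.0),
    ("Conceptual",         70.0, 52.0),
    ("Control-Factual",    63.0, 61.0),
    ("Control-Conceptual", 58.0, 55.0),
]

def mean_gain(records, conditions):
    """Mean percentage-point gain over baseline for the given conditions."""
    gains = [prs - base for cond, prs, base in records if cond in conditions]
    return mean(gains)

targeted_gain = mean_gain(items, {"Identical", "Reworded", "Conceptual"})
control_gain = mean_gain(items, {"Control-Factual", "Control-Conceptual"})
```

Comparing gains rather than raw scores controls for item difficulty, since each test item serves as its own baseline across semesters.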
The classes all met three times per week (Monday, Wednesday and Friday) for 50
minutes at 11:00am. In fall 2007, the PRS items were spread across 7 weeks of a 15-week
semester. The PRS questions appeared on slides as part of the PowerPoint presentations
delivered in class each day. Between three and six PRS questions were given in class each week, with
the items relevant to the study dispersed throughout the weeks in which the targeted chapters
were discussed. As the instructor explained concepts or research findings, students were
encouraged to ask questions or engage in discussion about the material. PRS questions were
typically asked after a concept was presented and discussed, and only after students were
encouraged to ask for clarifications or additional information. Some were asked as discussion
starters and others for credit. A title at the top of a PRS slide indicated to students whether a
given question was for credit or discussion. All PRS questions used in the present study were presented for credit.
When the instructor activates the iClicker system with the remote (or keyboard), a timer
appears on the screen, allowing a time limit to be set for responses. Typically, 60-90 seconds was
2 Due to a test production error, one of the items in the Identical condition was left off the second exam. As such, the analyses for
that condition are based on results from 5 items rather than 6.