CLASSROOM RESPONSE SYSTEMS FACILITATE STUDENT
ACCOUNTABILITY, READINESS, AND LEARNING
SARA J. JONES
University of Houston
The University of Texas at Austin
JANE S. VOGLER
Oklahoma State University
DANIEL H. ROBINSON
Colorado State University
In three experiments using crossover designs, we investigated the effects
of Classroom Response Systems (CRS) when presenting multiple-choice
questions in real classrooms. In Experiment 1, students either used CRS for
bonus points or simply saw the questions. There were no differences on a
unit exam. In Experiment 2, students were told prior to a unit that they would
either use CRS for course credit or no credit. Students using CRS for credit
performed better on pre-lecture questions and a unit exam. In Experiment 3,
students used CRS to answer pre-lecture questions for course credit or no
credit. Students using CRS for credit again performed better on a unit exam.
CRS appear to enhance learning when they encourage student accountability
and increase readiness for lectures.
Large lecture classes are not uncommon in many American universities today.
Despite some calls to eliminate these large classes, more universities may be
considering increasing class sizes due to challenging economic times (Krueger,
2003; Nelson & Hevert, 1992; Toth & Montagna, 2002). Such large classes may
present unique challenges for instructors, including how to monitor class
participation, encourage regular attendance, and facilitate student engagement.
Among the many attempted solutions to these challenges is the use of classroom
response systems (CRS).

© 2013, Baywood Publishing Co., Inc.
J. EDUCATIONAL COMPUTING RESEARCH, Vol. 49(2) 155-171, 2013
CRS were first implemented in physics classrooms in the 1960s. These
first devices were built into the desks and were expensive to install. Fortunately,
new technology has allowed CRS to evolve into small, relatively inexpensive,
portable devices. These devices allow instructors to gather a large number of
student responses instantly, monitor student comprehension, and provide
immediate feedback, even adjusting instruction when necessary. Instructors can
choose to record student responses anonymously or by individual name. One
might speculate that tracking student participation in real time using CRS leads
to increased student accountability (e.g., Abrahamson, 2006; DeBourgh, 2008;
Mazur, 1997), resulting in increased student engagement, which might logically
lead to increased student learning. The authors define accountability as a student’s
awareness of and responsiveness toward a given activity’s effect on course credit.
The present study was designed to examine the effects of CRS use on learning
outcomes when they are used during class lectures for accountability purposes—
answering multiple-choice questions for either bonus points or course credit.
Active learning strategies have been associated with student learning and
retention of course content (Mayer, 2008). It has been suggested that CRS can
foster active learning in large classrooms (Edmonds & Edmonds, 2008). Although
most CRS research has focused on attitudinal surveys of students and instructors
(e.g., DeBourgh, 2008; Johnson & Zimmaro, 2004), some recent research has
shown moderate gains in student engagement with the use of CRS (Bunce,
VandenPlas, & Havanki, 2006; Stowell & Nelson, 2007). Both students and
professors report instant feedback and flexibility as benefits of CRS technology
(Blood & Neel, 2008; DeBourgh, 2008; Medina, Medina, Wanzer, Wilson, Er,
& Britton, 2008).
Most research with CRS shows little-to-no gains in learning (King & Joshi,
2008; Morling, McAuliffe, Cohen, & DiLorenzo, 2008). A possible weakness
with these studies is that they tend to use the anonymous mode of the CRS
(e.g., Medina et al., 2008). Holding students accountable for their answers by
associating responses with each student’s registered CRS device may encourage
them to attend class more frequently, prepare for class more thoroughly, and be
more engaged while in class. All of these factors could positively impact student
learning. In a physics class, Burnstein and Lederman (2001) found that when the
student responses counted for more than 15% of the final grade, attendance rates
were above 80%. Additionally, the students stated that they “[made] genuine
attempts to prepare for the reading quizzes and remain alert throughout the
lecture period” (p. 10). Comparing CRS use with graded pop quizzes, Shapiro
(2009) found that students in classes with graded CRS or pop quizzes had higher
attendance rates than classes using pop quizzes only as extra credit. Bruff (2009)
discussed a variety of ways in which instructors have used CRS to increase
the likelihood that students will both attend and prepare for class: “One way to
encourage students to complete reading assignments is to administer a reading
quiz at the start of a class session. Even asking very straightforward questions
about the reading can motivate students to complete reading assignments” (p. 67).
Other instructors administer homework quizzes or simple attendance checks
via CRS. Unfortunately, none of these claims have been tied directly to student
learning in the research literature.
A recent study by Mayer, Stull, DeLeeuw, Almeroth, Bimber, Chun, et al.
(2009) provides a model for CRS research using real classrooms. Using a
quasi-experimental design, three sections of the same course were assigned to
one of three conditions: a CRS condition where students answered multiple-choice
questions during lectures, a non-CRS condition where the same questions were
presented without CRS, or a control condition where no questions were presented.
Mayer et al.
found that the CRS students outperformed the other two conditions on course
exams. In the present study, we sought to extend this research by using an
experimental design where students were assigned to conditions within the same
classroom. To ensure the same instructional conditions for all students, we used a
crossover design. Students who had initially received the treatment condition
received the control condition in the following unit, and those initially in the
control condition then received the treatment condition. By studying CRS and
non-CRS students in the same classroom, we were able to eliminate confounding
variables such as: differences in instruction, instructor bias for or against the
effectiveness of CRS, semester, and time of day.
EXPERIMENT 1

Participants and Setting
Participants were 54 undergraduates enrolled in an educational psychology
course at a large, south-central, public university during the Fall 2009 semester.
Eighty percent were female. All 54 students completed the survey; however, one
student opted not to provide consent to participate in the study, so survey and exam
data for this student were excluded from all analyses. Additionally, three students
were excluded from analyses regarding learning outcomes because they were
absent during both of the instructional days when their group was assigned to
the CRS condition. A fifth student was removed from the analysis of learning
outcomes because one unit exam score was not available. Thus, exam data from
49 students and survey responses for 53 students were used in the analyses.
The course was organized around six instructional units that included lectures,
student presentations, and exams. CRS were incorporated into units 4 and 5,
which occurred during the second half of the semester.
Materials and Apparatus

The iClicker CRS was used. This device was chosen because of its ease of
use and availability on campus. Researchers supplied iClicker devices for students
to use during the class periods involved in the study.
Each instructional unit included two 75-minute lectures presented during two
consecutive class sessions. For the 2 days of lecture, the instructor embedded
a total of eight multiple-choice questions—the same format as the questions on
the unit exams.
Each unit exam included 30 multiple-choice questions. Student performance
on these exams was used to assess learning outcomes. A survey consisting of
eight questions (adapted from Johnson & Zimmaro, 2004) asked students about
their perceptions of the course and use of the CRS device (see Table 1). Items
from this survey were used to assess how the opportunity to use CRS and earn
bonus points affected student preparedness for class.
Table 1. Survey Items
1. The questions asked with the clicker during lecture sessions helped prepare
me for the exams.
2. The questions asked with the clicker during lecture sessions helped me learn
better.
3. The clicker questions helped me realize how much I understood about what
was being covered in the lecture sessions.
4. The clicker helped me feel more engaged during the lecture sessions.
5. Using the clicker caused me to change how I prepared for classes.
6. Using the clicker increased my confidence in my own understanding of the
material.
7. The clicker was easy to use.
8. I would recommend using CRS in future classes.
Students were assigned to one of nine teams at the beginning of the semester,
using stratified random assignment (by sex and academic major). Teams 1-4 were
designated as Group 1, Teams 5-8 were in Group 2, and students in Team 9 were
randomly assigned to either Group 1 or 2. As shown in Table 2, Group 1 was
assigned to the CRS condition and Group 2 was assigned to the non-CRS
condition for unit 4. All students were expected to attend class lectures for each
unit, regardless of the condition to which they were assigned. The conditions
were reversed for unit 5 to allow students equal opportunities for bonus points.
Due to the possibility of carry-over effects from the crossover design, exam scores
for unit 5 were not analyzed.
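The stratified random assignment described above might be sketched as follows. This is our own illustration (the field names, strata, and round-robin allocation are assumptions), not the authors' actual procedure:

```python
import random

def stratified_assign(students, n_groups=2, seed=42):
    """Split students into n_groups, balancing each (sex, major) stratum.
    Illustrative sketch only; the study's exact algorithm is not described."""
    strata = {}
    for s in students:
        strata.setdefault((s["sex"], s["major"]), []).append(s)
    rng = random.Random(seed)
    groups = [[] for _ in range(n_groups)]
    for k, members in enumerate(strata.values()):
        rng.shuffle(members)  # randomize order within the stratum
        for i, student in enumerate(members):
            # Deal out round-robin, offsetting by stratum so leftover
            # students do not all land in the same group.
            groups[(i + k) % n_groups].append(student)
    return groups

# Hypothetical roster of 54 students (not the study's data)
roster = [{"name": f"s{i}", "sex": i % 2, "major": i % 3} for i in range(54)]
g1, g2 = stratified_assign(roster)
print(len(g1), len(g2))  # prints: 27 27
```

Because each stratum is shuffled before being dealt out, group membership is random while the sex and major proportions stay nearly identical across groups.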
Prior to incorporating CRS into class lectures, the instructor sent the fol-
lowing e-mail, which assigned students to a particular condition (CRS/non-CRS)
for that unit:
We have an opportunity to try out some clickers the College of Education
has purchased. In case you have not used clickers before, they are hand-held
remote devices used for audience participation in the classroom. Instructors
typically project multiple-choice questions occasionally throughout their
lectures and students have a chance to earn extra credit points by answering
the questions correctly. I’d like to use the clickers in this format next week.
Now, the problem is that the College of Ed has only about 30 clickers
and we have 54 students. Thus, we’ll have half the students use them this
next week and half will use them during the lectures for the 5th Quiz. For
next week, students in Teams 1-4, plus a few from Team 9 will use the clickers
and have an opportunity to earn 4 extra credit points if they answer questions
correctly during class. The rest of you will be able to see the questions
and may answer them mentally, but you won’t get any extra credit. Your
opportunity will come two weeks later. The questions will be based on the
readings and the lecture, just like the exams.
The instructor also sent an e-mail reminder when the groups switched conditions.
During the days on which lectures were given for units 4 and 5, the instructor
displayed two to six multiple-choice questions in slides, for a total of eight
questions in each unit. For each question, the stem and four or five options would
appear and students were told to submit their answers within one minute. Once the
minute had elapsed or all students with CRS had submitted—whichever occurred
first—the instructor closed the system so that responses could no longer be
submitted. Then the correct answer was shared, followed by a brief explanation.

Table 2. Study Design

                     Unit 4     Unit 5
Group 1 (N = 27)     CRS        Non-CRS
Group 2 (N = 27)     Non-CRS    CRS

Students’ answers were not displayed in graphical form. Students assigned to the
CRS condition had the opportunity to earn up to 4 total bonus points by answer-
ing the multiple-choice questions presented during the lecture. Students who
answered four or more correctly could earn all 4 points. Those who answered
three correctly would get 3 points, etc.
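Read literally, this scheme amounts to one bonus point per correct answer, capped at 4. A minimal sketch of that interpretation (our reading of the description, not code used in the study):

```python
def bonus_points(num_correct, cap=4):
    """One bonus point per correct CRS answer, up to the 4-point cap.
    This encodes one plausible reading of the scheme described above."""
    return min(max(num_correct, 0), cap)

# Out of the eight lecture questions in a unit:
for correct in (8, 4, 3, 0):
    print(correct, "correct ->", bonus_points(correct), "points")
```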
Students in the non-CRS condition had the opportunity to consider each
question that was presented. Although students in this condition were neither
held accountable for their answers by having to “click in” and submit a response,
nor were they eligible to receive bonus points, they were able to view each of
the questions and listen to the explanations. Thus, the only difference between
the two conditions was that students assigned to the CRS condition submitted
answers and received extra-credit points, whereas those in the non-CRS con-
dition did not.
At the end of each unit, all students took the same unit exam. In the following
unit, the students switched conditions so that all students had an opportunity to
use the CRS device to answer questions during class lectures for bonus points.
At the end of these two instructional units, students completed a survey about
the use of CRS in the class and their perceived engagement in the course.
As all previous research has found either no effect or a slight positive effect
of classroom response technology on student achievement, we hypothesized that
students in the clicker group would perform as well as or better than students in
the control groups (Caldwell, 2007). Based on this hypothesis, we performed a
one-tailed t-test of significance to determine if the use of clickers led to better
performance on the unit exams.
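As an illustration of this kind of analysis, a pooled-variance two-sample t statistic can be computed as below. The scores are invented for the example (they are not the study's data), and the function is a sketch, not the authors' analysis code:

```python
import math

def pooled_t(group_a, group_b):
    """Two-sample t statistic with pooled variance, for the one-tailed
    hypothesis mean(group_a) > mean(group_b). Illustrative sketch only."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_var = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean_a - mean_b) / se, na + nb - 2  # t statistic, degrees of freedom

# Invented scores on a 30-point unit exam (NOT the study's data)
crs_scores = [24, 26, 22, 25, 27, 23]
non_crs_scores = [22, 24, 21, 23, 25, 22]
t, df = pooled_t(crs_scores, non_crs_scores)
print(f"t({df}) = {t:.2f}")
```

With real data, the resulting t would be compared against the one-tailed critical value for the stated degrees of freedom; library routines such as `scipy.stats.ttest_ind(a, b, alternative='greater')` perform the equivalent computation.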
Results and Discussion
Group means and standard deviations for the unit exam are reported in Table 3.
A one-tailed t-test indicated no difference between the CRS and non-CRS
conditions on the unit 4 exam, t(47) = 0.43, p = .33. Thus, as demonstrated by
unit exam scores, the use of CRS did not improve students’ learning outcomes
over and above the opportunity to view the questions and listen to each of the
answers being explained by the instructor. However, students received no training
in how to use the iClickers. Also, as they were allowed to use the CRS device for
only 2 days of instruction, one might argue that the duration of the intervention
was too short to have a measurable impact on student learning. Perhaps the
incentive of 4 bonus points was not enough to encourage students to become more
actively engaged in the lecture. Finally, the fact that all students were exposed
to the questions and subsequent discussions of the correct answers may have
minimized the effects of being in the CRS condition. We sought to address these
issues in Experiment 2.
Although there was no difference in learning outcomes as measured by the unit
exam, student survey results were generally positive regarding CRS use during
class lecture (see Table 4). As shown in Figure 1, the majority of students (47 of
53) agreed or strongly agreed with the statement, “The questions asked with the
clicker during lecture sessions helped prepare me for the exams.” Even more
students (52 out of 53) agreed/strongly agreed that the clicker questions helped
them to learn better in the class. Although this question did not consider a specific
alternative form of instruction, perhaps these students were comparing CRS use
to no questions at all. Students in the non-CRS condition also saw and may have
learned from the clicker questions. Regarding readiness, we examined students’
responses to the statement, “Using the clicker caused me to change how I prepared
for classes.” As shown in Table 4, the mean score on this item was lowest of
all eight items on the survey (M = 3.00). As Figure 1 further illustrates, nearly
half of students (24 out of 53) were neutral in response to this item, and more
students responded disagree/strongly disagree to this statement (16) than those
who responded agree/strongly agree (13).
Thus, students felt that using the CRS during instruction was helpful to their
overall learning and exam preparation (even though the unit exam scores indicated
otherwise). This finding is consistent with previous self-report findings in the
literature (Blood & Neel, 2008; Mayer et al., 2009; Morling, McAuliffe et al.,
2008). Even though students reported agreement with the idea that clicker
questions were helpful to their learning, there was no indication that the
accountability afforded by CRS affected the way they prepared for the lecture.
Over half of the students were neutral or disagreed outright to a statement
regarding the use of CRS changing how they prepared for class. Such minimal
impact on student preparedness could possibly be related to the minimal number
of bonus points (4) students were able to earn for correct responses. In a class with
a total of 300 points, it may be that the opportunity to earn 4 bonus points was
not enough for students to value the opportunity afforded by being in the CRS
condition. Furthermore, allowing students to earn 1 bonus point for correctly
answering one of every two questions may have diminished the level of
accountability students felt for correctly responding to every question.

Table 3. Group Means and Standard Deviations for Unit 4 Exam Scores

Table 4. Means and Standard Deviations of Student Responses on Each Item (n = 53)

Survey item                                                      M     SD
1. The questions asked with the clicker during lecture
   sessions helped prepare me for the exams.
2. The questions asked with the clicker during lecture
   sessions helped me learn better.
3. The clicker questions helped me realize how much I
   understood about what was being covered in the lecture
   sessions.
4. The clicker helped me feel more engaged during the
   lecture sessions.
5. Using the clicker caused me to change how I prepared
   for classes.
6. Using the clicker increased my confidence in my own
   understanding of the material.
7. The clicker was easy to use.
8. I would recommend using CRS in future classes.

Note: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree.

Figure 1. Survey items related to effects of CRS opportunity on student preparedness (n = 53).
Experiment 2 was designed to increase the accountability and reduce any
unfamiliarity effects that may have been present in Experiment 1. Instead of bonus
points, the CRS questions counted as part of the students’ overall course grade.
Further, we allowed students to practice using the CRS before they were used
during the lectures. To examine whether CRS could affect student preparation
for class, we gave students 10 “practice” questions at the beginning of the unit.
EXPERIMENT 2

Participants and Design
Participants were 46 different undergraduate students enrolled in the same
course during the Spring 2010 semester. Of the 46 students, 39 consented to allow
researchers access to their responses and exam scores. Eighty-seven percent of
the sample was female, which is common for this education course.
Students were divided into two groups, using the same stratified random
assignment as in Experiment 1, and alternated using CRS in two consecutive units.
All of the students, with and without CRS, were encouraged to attend and
participate in class on a daily basis.
Materials and Apparatus
iClicker devices were again used for the first half of the study. However, to
create the need for a second “training” session in the crossover design, Mobile
Ongoing Course Assessment (MOCA), a web-based program, was used. Students
had the option to use their own laptop or borrow a laptop for these class periods.
Both the laptops and the program were provided to the students free of charge.
CRS questions were described in the course syllabus and served as participation
grades for this semester of the course. The course was graded on a 300-point scale,
of which 30 points could be earned by answering CRS questions. Compared to the
4 bonus points from the previous semester, 30 points could make a letter-grade
difference in the students’ grades for the course. For example, not answering or
incorrectly answering CRS questions could be the difference between earning
an A or a B. Below is the excerpt from the course syllabus:
We will be using classroom response systems (CRS) and laptops at various
times where you will answer questions during the lectures. These questions
will cover the readings and the lecture—just like the quizzes. You may earn
up to 30 points by answering these questions correctly. We won’t have
questions during every lecture—and you will be notified as to which lectures.
During units 2 and 3, 10 multiple-choice pre-reading questions were asked
prior to the lecture on the first day of each unit, for training purposes. These were
completed by all students, but did not count for any points. Additionally, eight
multiple-choice questions were embedded and asked during the 2 lecture days.
Students in the CRS condition answered these questions for course credit for
that particular unit.
For the two units in this study, each exam included 30 multiple-choice
questions. Student performance on these unit exams was used to assess learning
outcomes. Students’ scores on the practice pre-lecture questions at the beginning
of the unit were used to assess student preparedness.
The study took place during two consecutive units (Units 2 and 3). Students
used CRS in alternating units for points toward their grade. During the first unit
of the semester, the class lectures did not contain any in-class multiple-choice
questions. Prior to the second class unit, half of the class was told that they would
have the opportunity to earn points for correctly answering in-class questions.
The control group was instructed that due to a lack of devices, they would not
be answering in-class questions until the following unit. Below is the e-mail
that students received prior to the second unit.
On Thursday, half of you (teams 1-4) will have an opportunity to earn
some points in class by answering questions using hand-held devices. The
lecture covers chapter 7 and we will do this again the following Tuesday.
The reason only half the class will be using the CRS is that we only have
access to about 25-30 iClickers. Otherwise, I would have to require that
everyone purchase a clicker for about $25. So, teams 5-8 will get to use them
for the lectures that go with quiz 3.
There is a survey associated with the CRS so please complete it. You
get one of your 30 possible points for completing the survey. You’ll get
another point for completing a survey later in the semester. The surveys
take only 2 min. to complete. That leaves 28 points to be distributed for
answering in-class questions.
Immediately upon entering class for unit 2, all students were given an iClicker
device and asked to answer 10 pre-lecture, practice questions as a training session.
The practice questions were factual questions based on the assigned chapter.
The students were told that extra devices had been recently purchased and to
save time all of the students would be trained on the devices at once. None of the
students received points for these questions, but they were asked to try to answer
them correctly. After the training session, the control group students were told
that they could keep the CRS to participate in the in-class questions, but they
would not receive points until the following unit. The CRS group kept the CRS
and received participation points for each correct answer. It is important to
note here that the control group differed from that in Experiment 1. In Experi-
ment 1, the control group did not use CRS—they simply saw the questions.
In Experiment 2, all students used CRS. But only the treatment group received
course credit for answering questions correctly. This was done in an attempt to
isolate the accountability variable.
In the following unit, the procedure was reversed. To make it necessary
to conduct a second “training session,” the researchers introduced Mobile
Ongoing Course Assessment (MOCA) as the classroom response system instead
of iClickers. Due to the possibility of crossover effects, scores from this unit
were not analyzed.
Results and Discussion
Scores on the 10 practice items were first examined. Based on our own
results in Experiment 1 and others’ previous research (e.g., Blood & Neel, 2008;
Mayer et al., 2009; Shapiro, 2009), we hypothesized that students in the CRS
condition would do as well as or better than those in the non-CRS condition.
Consequently, we used a one-tailed t-test. Students who were informed they
would be using CRS (M = 5.89, SD = 1.43) answered more questions correctly
than did students who did not expect to use CRS for that unit (M = 5.07, SD =
2.21), t(32) = 1.80, p = .04. Thus, informing students that they would be
using CRS to answer in-class lecture questions for course credit may have been
associated with better preparation for the lecture.
A comparison of unit 1 exam scores, conducted to examine possible pre-existing
differences between the two groups, found that the groups did not differ,
t(37) = 0.16, p = .87.
Table 5 shows the groups’ means on the unit 1 and 2 exams. Using a one-tailed
t-test, students in the CRS condition performed better on the unit 2 exam than
did students who did not receive points for using CRS, t(38) = 1.95, p = .03.
In Experiment 2, increasing the accountability of student CRS responses by
making them part of the overall grade led to better student preparedness and better
scores on the unit exam. This finding confirms claims made by Burnstein and
Lederman (2001). Students who anticipated being graded on CRS questions in
class came to class better prepared than students who were not being graded.
Also, even though all students were allowed to use the CRS during the lectures,
only those students who were held accountable for their responses performed
better on the unit exam, which may have been due to their initial preparedness
advantage. Experiment 3 was designed to further examine the readiness effect
that might occur when students are told they may receive course credit for
correctly answering pre-lecture CRS questions.

Table 5. Group Means and Standard Deviations for Exam Scores by Unit

                     Unit 1 (Control)     Unit 2 (CRS)
Group 1 (n = 21)
Group 2 (n = 18)
EXPERIMENT 3

Participants and Design
Experiment 3 used the same 39 consenting students from Experiment 2. Using
the same groups as before, students participated in the two conditions (CRS,
no-CRS) in two consecutive, different units of the course, units 4 and 5.
Materials and Apparatus
The Mobile Ongoing Course Assessment (MOCA) response system was used
because it allowed students to answer questions outside of class time. Addi-
tionally, the students could answer questions at their own pace within a teacher-
specified time frame.
The CRS questions could be answered for course credit, which was outlined in
the syllabus. Students in the CRS condition received points for correct responses
on 10 pre-lecture questions before each of the two lecture classes in a unit for a
possible total of 20 points. CRS questions were also asked during the lecture
classes, but these questions were optional and students did not receive points
for their answers to in-class questions.
Unit exams were used to measure student learning. Because students in the
control condition were not required to answer pre-lecture questions, we did not
compare the two conditions on this measure. However, student response rates to
the pre-lecture questions were recorded.
During the fourth unit of the semester, half of the class was told that they
would receive participation points for answering questions prior to the start of
class. All students had the ability to view and answer the questions, but during the
fourth unit, only half of the students were able to earn points for correct responses.
In the following unit, the groups switched conditions and the other half of the
students were able to earn points. For both units, students had the opportunity
to answer 10 questions prior to the two lectures for each unit. Each lecture class
covered one chapter of information in the textbook. Students earned 1 point
for each correct answer for a total of 20 possible points.
Multiple-choice questions were also posed during lectures. All students had
the ability to answer these questions using MOCA, but students were not required
to respond and they did not earn points for their responses. The system was used
in every class to reduce any novelty effects of the technology. All students
had been trained in using MOCA prior to unit 3.
Results and Discussion
Because the groups were the same as in Experiment 2, no additional analyses
were conducted to test for initial group differences. Table 6 shows the mean
exam scores for unit 4. Due to the possibility of carry-over effects, scores for unit 5
were not analyzed. Once again, our hypothesis was that students in the CRS
condition would do as well as or better than those in the non-CRS group. A one-tailed
t-test revealed that the CRS group scored significantly higher on the unit 4 exam,
t(37) = 2.39, p < .01. Thus, compared to students who read the same pre-lecture
questions and heard the same explanations, but were not held accountable
for their answers, holding students accountable for pre-lecture questions using
CRS led to better exam performance.
When in the CRS condition, 100% of the students in both groups attempted to
answer the pre-lecture questions. However, during unit 4, only 56% of the students
in the control condition (not receiving points) voluntarily answered the CRS
pre-lecture questions. In unit 5, 71% of students in the control condition volun-
tarily answered the clicker pre-lecture questions. This difference in voluntary
response rates is perhaps because some of these students witnessed the benefit
of reading and preparing for the questions during unit 4 and continued to do so
for unit 5 even though they were not receiving course credit, confirming the
possibility of a carry-over effect.

Table 6. Group Means and Standard Deviations for Unit 4 Exam Scores
GENERAL DISCUSSION

Taken together, the three experiments provide support for using CRS to
increase student accountability, which increases readiness through preparation,
and ultimately may lead to better exam performance. In Experiment 1, students
who answered CRS questions for only a few bonus points did no better on a unit
exam than those who saw the same questions for no points. However, consistent
with several other CRS studies, students perceived that the CRS enabled them to
learn more. Perhaps measuring learning directly, rather than asking students
whether they feel they learned better, may lead to different findings and conclusions.
In Experiment 2, we attempted to increase accountability by making the CRS
questions worth up to 10% of the students’ overall grade, compared to just over
1% in Experiment 1. Students who were told they would be using CRS to answer
questions for course credit performed better on a practice, “pop-quiz-like” assess-
ment of readiness than those who did not expect to use CRS for course credit.
These students who came to lecture better prepared, as demonstrated by their
performance on the clicker questions, also performed better on the subsequent
unit exam.
Finally, in Experiment 3, we had students use CRS to answer pre-lecture
questions outside of class for course credit. Students who did so for course credit
performed better than those who did not on a unit exam. Thus, it appears that one
possible benefit of CRS is in increasing student accountability, which can lead
to student readiness for lectures, which leads to better exam performance. While
paper-based quizzes or other forms of in-class assessment may also afford a
similar level of accountability, many of these methods are time consuming and can
be difficult to manage in large classes. Additionally, these methods do not provide
instructors with instant feedback. Based on our results, CRS may be an effective
replacement for those traditional methods for increasing student accountability.
Even with the relatively brief CRS interventions, the present study showed
increases in student preparation and learning when CRS questions were tied to
course credit. Although the anonymity feature of CRS is often promoted, instruc-
tors may also consider the affordance of holding students accountable with CRS
technology. Our findings showed that attaching even a few points to the questions
can increase students’ preparation for the class.
We did not measure student engagement/attentiveness during the lectures.
Whether using CRS with accountability also increases student engagement during
lectures is a question to be addressed in future studies. Several authors have
claimed that CRS increase student engagement (Blood & Neel, 2008; Bruff, 2009;
Stowell & Nelson, 2007). But again, these authors measured engagement with
student self-reports. We feel it is important to also use more objective measures of engagement.
The within-class, crossover design may be a useful methodology for exam-
ining actual learning gains due to the introduction of technology into the
classroom. This design allows researchers to study learning in actual classroom
settings while controlling for instructor, individual, and various classroom components.
LIMITATIONS AND FUTURE RESEARCH
The CRS interventions in these studies were only two class sessions in length.
These studies focused only on accountability and the effects of pre-lecture
preparation, which should not depend on students' becoming accustomed to clicker
use, so semester-long use by the same students was not considered necessary.
However, to assess long-term effects of using CRS for an entire course, future
studies will need to examine the effects of a longer CRS intervention that could
have more impact on student grades.
In addition to its brevity, the implementation of CRS into these courses was
also relatively shallow. The questions were largely fact- and recall-based, and
the questions were not used to prompt discussion or collaboration. Further, the
majority of students answered each question correctly, so the instructor had no
need to adapt instruction to the students’ responses. It is reasonable to believe,
however, that the observed effects of accountability and CRS use would be
similar or perhaps greater when combined with a more adaptive, student-centered pedagogy.
As with all research designs, the use of a crossover design has limitations. To
eliminate the possibility of carryover effects, results from the second group using
CRS were not analyzed. However, the ability to compare students receiving
exactly the same instruction in the same classroom outweighs the limitations of this design.
A final limitation of these studies is that they all took place in education
courses with a majority-female population. These results may not extend to other
populations or course disciplines. More research needs to be done on CRS in
different classes with different students who may have different levels of moti-
vation, ability, or interest. With these limitations aside, the present study offers
some insight into the possible affordances of CRS.
REFERENCES
Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy,
and implications. In D. Banks (Ed.), Audience response systems in higher education:
Applications and cases (pp. 1-25). Hershey, PA: IDC Publishing, Inc.
Blood, E., & Neel, R. (2008). Using student response systems in lecture-based instruction:
Does it change student engagement and learning? Journal of Technology and Teacher
Education, 16, 375-383.
Bruff, D. (2009). Teaching with classroom response systems: Creating active learning
environments. San Francisco, CA: Jossey-Bass.
Bunce, D. M., VandenPlas, J. R., & Havanki, K. L. (2006). Comparing the effectiveness
on student achievement of a student response system versus online WebCT quizzes.
Journal of Chemical Education, 83(3), 488-493.
Burnstein, R. A., & Lederman, L. M. (2001). Using wireless keypads in lecture classes.
The Physics Teacher, 39, 8-11.
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice
tips. CBE-Life Sciences Education, 6(1), 9-20.
DeBourgh, G. A. (2008). Use of classroom “CRS” to promote acquisition of advanced
reasoning skills. Nurse Education in Practice, 8(2), 76-87.
Edmonds, C. T., & Edmonds, T. P. (2008). An empirical investigation of the effects
of SRS technology on Introductory Managerial Accounting students. Issues in
Accounting Education, 23, 412-434.
Johnson, M., & Zimmaro, D. (2004). Using classroom performance systems in computer
science 303. Retrieved June 14, 2009 from The University of Texas at Austin: Center
for Teaching and Learning.
King, D., & Joshi, S. (2008). Gender differences in the use and effectiveness of
personal response devices. Journal of Science Education and Technology, 17,
Krueger, A. B. (2003). Economic considerations and class size. The Economic Journal, 113,
Mayer, R. E. (2008). Learning and instruction (2nd ed.). Upper Saddle River, NJ: Merrill
Prentice Hall Pearson.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., et al. (2009).
Clickers in college classrooms: Fostering learning with questioning methods in large
lecture classes. Contemporary Educational Psychology, 34, 51-57.
Mazur, E. (1997). Peer instruction: A user's manual. Upper Saddle River, NJ: Prentice Hall.
Medina, M. S., Medina, P. J., Wanzer, D. S., Wilson, J. E., Er, N., & Britton, M. L. (2008).
Use of an audience response system (ARS) in a dual-campus classroom environment.
American Journal of Pharmaceutical Education, 72(2), Article 38.
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. M. (2008). Efficacy of personal
response systems (“CRS”) in large, introductory psychology classes. Teaching of
Psychology, 35, 45-50.
Nelson, R., & Hevert, K. T. (1992). Effects of class size on economies of scale and
marginal costs in higher education. Applied Economics, 24, 473-482.
Shapiro, A. (2009). An empirical study of personal response technology for improving
attendance and learning in a large class. Journal of the Scholarship of Teaching
and Learning, 9, 13-26.
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems
on student participation, learning, and emotion. Teaching of Psychology, 34(4),
Toth, L. S., & Montagna, L. G. (2002). Class size and achievement in higher education:
A review of current research. College Student Journal, 36(2), 253-261.
Direct reprint requests to:
Dr. Sara J. Jones
College of Education
491 Farish Hall
University of Houston
Houston, TX 77204-5029