CLASSROOM RESPONSE SYSTEMS FACILITATE STUDENT
ACCOUNTABILITY, READINESS, AND LEARNING
SARA J. JONES
University of Houston
JASON CRANDALL
The University of Texas at Austin
JANE S. VOGLER
Oklahoma State University
DANIEL H. ROBINSON
Colorado State University
ABSTRACT
In three experiments using crossover designs, we investigated the effects
of Classroom Response Systems (CRS) when presenting multiple-choice
questions in real classrooms. In Experiment 1, students either used CRS for
bonus points or simply saw the questions. There were no differences on a
unit exam. In Experiment 2, students were told prior to a unit that they would
either use CRS for course credit or no credit. Students using CRS for credit
performed better on pre-lecture questions and a unit exam. In Experiment 3,
students used CRS to answer pre-lecture questions for course credit or no
credit. Students using CRS for credit again performed better on a unit exam.
CRS appear to enhance learning when they encourage student accountability
and increase readiness for lectures.
Large lecture classes are not uncommon in many American universities today.
Despite some calls to eliminate these large classes, more universities may be
considering increasing class sizes due to challenging economic times (Krueger,
2003; Nelson & Hevert, 1992; Toth & Montagna, 2002). Such large classes may
present unique challenges for instructors, including how to monitor class par-
ticipation, encourage regular attendance, and facilitate student engagement.
Among the many attempted solutions to these challenges is the use of classroom
response systems (CRS).
CRS were first implemented in physics classrooms in the 1960s. These
first devices were built into the desks and were expensive to install. Fortunately,
new technology has allowed CRS to evolve into small, relatively inexpensive,
portable devices. These devices allow instructors to gather a large number of
student responses instantly, monitor student comprehension, and provide
immediate feedback, even adjusting instruction when necessary. Instructors can
choose to record student responses anonymously or by individual name. One
might speculate that tracking student participation in real time using CRS leads
to increased student accountability (e.g., Abrahamson, 2006; DeBourgh, 2008;
Mazur, 1997), resulting in increased student engagement, which might logically
lead to increased student learning. The authors define accountability as a student’s
awareness of and responsiveness toward a given activity’s effect on course credit.
The present study was designed to examine the effects of CRS use on learning
outcomes when they are used during class lectures for accountability purposes—
answering multiple-choice questions for either bonus points or course credit.
Active learning strategies have been associated with student learning and
retention of course content (Mayer, 2008). It has been suggested that CRS can
foster active learning in large classrooms (Edmonds & Edmonds, 2008). Although
most CRS research has focused on attitudinal surveys of students and instructors
(e.g., DeBourgh, 2008; Johnson & Zimmaro, 2004), some recent research has
shown moderate gains in student engagement with the use of CRS (Bunce,
VandenPlas, & Havanki, 2006; Stowell & Nelson, 2007). Both students and
professors report instant feedback and flexibility as benefits of CRS technology
(Blood & Neel, 2008; DeBourgh, 2008; Medina, Medina, Wanzer, Wilson, Er,
& Britton, 2008).
Most research with CRS shows little-to-no gains in learning (King & Joshi,
2008; Morling, McAuliffe, Cohen, & DiLorenzo, 2008). A possible weakness
with these studies is that they tend to use the anonymous mode of the CRS
(e.g., Medina et al., 2008). Holding students accountable for their answers by
associating responses with each student’s registered CRS device may encourage
them to attend class more frequently, prepare for class more thoroughly, and be
more engaged while in class. All of these factors could positively impact student
learning. In a physics class, Burnstein and Lederman (2001) found that when the
student responses counted for more than 15% of the final grade, attendance rates
were above 80%. Additionally, the students stated that they “[made] genuine
attempts to prepare for the reading quizzes and remain alert throughout the
lecture period” (p. 10). Comparing CRS use with graded pop quizzes, Shapiro
(2009) found that students in classes with graded CRS or pop quizzes had higher
attendance rates than classes using pop quizzes only as extra credit. Bruff (2009)
discussed a variety of ways in which instructors have used CRS to increase
the likelihood that students will both attend and prepare for class: “One way to
encourage students to complete reading assignments is to administer a reading
quiz at the start of a class session. Even asking very straightforward questions
about the reading can motivate students to complete reading assignments” (p. 67).
Other instructors administer homework quizzes or simple attendance checks
via CRS. Unfortunately, none of these claims have been tied directly to student
learning in the research literature.
A recent study by Mayer, Stull, DeLeeuw, Almeroth, Bimber, Chun, et al.
(2009) provides a model for CRS research using real classrooms. In a
quasi-experimental design, three sections of the same course were assigned to
one of three conditions: a CRS condition in which students answered multiple-choice
questions during lectures, a non-CRS condition in which the same questions were
presented without CRS, or a control condition in which no questions were presented. Mayer et al.
found that the CRS students outperformed the other two conditions on course
exams. In the present study, we sought to extend this research by using an
experimental design where students were assigned to conditions within the same
classroom. To ensure the same instructional conditions for all students, we used a
crossover design. Students who had initially received the treatment condition
received the control condition in the following unit, and those initially in the
control condition then received the treatment condition. By studying CRS and
non-CRS students in the same classroom, we were able to eliminate confounding
variables such as: differences in instruction, instructor bias for or against the
effectiveness of CRS, semester, and time of day.
EXPERIMENT 1
Method
Participants and Setting
Participants were 54 undergraduates enrolled in an educational psychology
course at a large, south-central, public university during the Fall 2009 semester.
Eighty percent were female. All 54 students completed the survey; however, one
student opted not to provide consent to participate in the study, so survey and exam
data for this student were excluded from all analyses. Additionally, three students
were excluded from analyses regarding learning outcomes because they were
absent during both of the instructional days when their group was assigned to
the CRS condition. A fifth student was removed from the analysis of learning
outcomes because one unit exam score was not available. Thus, exam data from
49 students and survey responses for 53 students were used in the analyses.
The course was organized around six instructional units that included lectures,
student presentations, and exams. CRS were incorporated into units 4 and 5,
which occurred during the second half of the semester.
Materials
Apparatus
The iClicker CRS was used. This device was chosen because of its ease of
use and availability on campus. Researchers supplied iClicker devices for students
to use during the class periods involved in the study.
CRS Questions
Each instructional unit included two 75-minute lectures presented during two
consecutive class sessions. For the 2 days of lecture, the instructor embedded
a total of eight multiple-choice questions—the same format as questions on
unit exams.
Outcome Measures
Each unit exam included 30 multiple-choice questions. Student performance
on these exams was used to assess learning outcomes. A survey consisting of
eight questions (adapted from Johnson & Zimmaro, 2004) asked students about
their perceptions of the course and use of the CRS device (see Table 1). Items
from this survey were used to assess how the opportunity to use CRS and earn
bonus points affected student preparedness for class.
Table 1. Survey Items
1. The questions asked with the clicker during lecture sessions helped prepare
me for the exams.
2. The questions asked with the clicker during lecture sessions helped me learn
better.
3. The clicker questions helped me realize how much I understood about what
was being covered in the lecture sessions.
4. The clicker helped me feel more engaged during the lecture sessions.
5. Using the clicker caused me to change how I prepared for classes.
6. Using the clicker increased my confidence in my own understanding of the
material.
7. The clicker was easy to use.
8. I would recommend using CRS in future classes.
Procedure
Students were assigned to one of nine teams at the beginning of the semester,
using stratified random assignment (by sex and academic major). Teams 1-4 were
designated as Group 1, Teams 5-8 were in Group 2, and students in Team 9 were
randomly assigned to either Group 1 or 2. As shown in Table 2, Group 1 was
assigned to the CRS condition and Group 2 was assigned to the non-CRS
condition for unit 4. All students were expected to attend class lectures for each
unit, regardless of the condition to which they were assigned. The conditions
were reversed for unit 5 to allow students equal opportunities for bonus points.
Due to the possibility of carry-over effects from the crossover design, exam scores
for unit 5 were not analyzed.
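The team-based stratified assignment described above can be illustrated with a short sketch. This is not the authors' code: the data layout, the field names, and the round-robin dealing of students within strata are our assumptions for illustration only.

```python
# Illustrative sketch (not the authors' procedure): students are shuffled within
# each sex-by-major stratum and dealt round-robin into nine teams, so every team
# receives a similar mix; teams 1-4 form Group 1, teams 5-8 form Group 2, and
# team 9 is split at random between the two groups.
import random
from collections import defaultdict

def assign_groups(students, n_teams=9, seed=0):
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in students:                          # each s: {"name", "sex", "major"}
        strata[(s["sex"], s["major"])].append(s)

    teams = defaultdict(list)
    slot = 0
    for members in strata.values():
        rng.shuffle(members)
        for s in members:                       # deal round-robin across teams 1..9
            teams[slot % n_teams + 1].append(s)
            slot += 1

    groups = {1: [], 2: []}
    for team_id, members in teams.items():
        if team_id <= 4:
            groups[1].extend(members)           # teams 1-4 -> Group 1
        elif team_id <= 8:
            groups[2].extend(members)           # teams 5-8 -> Group 2
        else:
            for s in members:                   # team 9 split at random
                groups[rng.choice([1, 2])].append(s)
    return teams, groups

# Toy roster of 54 students (sexes and majors are made up; roughly 80% female,
# as reported for Experiment 1):
roster = [{"name": f"S{i:02d}",
           "sex": "F" if i % 5 else "M",
           "major": "elementary" if i % 2 else "secondary"}
          for i in range(54)]
teams, groups = assign_groups(roster)
print(len(groups[1]), len(groups[2]))           # roughly 27 and 27
```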
Prior to incorporating CRS into class lectures, the instructor sent the fol-
lowing e-mail, which assigned students to a particular condition (CRS/non-CRS)
for that unit:
We have an opportunity to try out some clickers the College of Education
has purchased. In case you have not used clickers before, they are hand-held
remote devices used for audience participation in the classroom. Instructors
typically project multiple-choice questions occasionally throughout their
lectures and students have a chance to earn extra credit points by answering
the questions correctly. I’d like to use the clickers in this format next week.
Now, the problem is that the College of Ed has only about 30 clickers
and we have 54 students. Thus, we’ll have half the students use them this
next week and half will use them during the lectures for the 5th Quiz. For
next week, students in Teams 1-4, plus a few from Team 9 will use the clickers
and have an opportunity to earn 4 extra credit points if they answer questions
correctly during class. The rest of you will be able to see the questions
and may answer them mentally, but you won’t get any extra credit. Your
opportunity will come two weeks later. The questions will be based on the
readings and the lecture, just like the exams.
The instructor also sent an e-mail reminder when the groups switched conditions.
During the days on which lectures were given for units 4 and 5, the instructor
displayed two to six multiple-choice questions in slides, for a total of eight
questions in each unit. For each question, the stem and four or five options would
appear and students were told to submit their answers within one minute. Once the
minute had elapsed or all students with CRS had submitted—whichever occurred
first—the instructor closed the system so that responses could no longer be
submitted. Then the correct answer was shared, followed by a brief explanation.

Table 2. Study Design

                    Unit 4              Unit 5
Group 1 (N = 27)    CRS condition       non-CRS condition
Group 2 (N = 27)    non-CRS condition   CRS condition

Students’ answers were not displayed in graphical form. Students assigned to the
CRS condition had the opportunity to earn up to 4 total bonus points by answer-
ing the multiple-choice questions presented during the lecture. Students who
answered four or more correctly could earn all 4 points. Those who answered
three correctly would get 3 points, etc.
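In other words, the bonus earned equals the number of correct answers, capped at 4. A minimal sketch of that scoring rule (the function name is ours, not part of the study):

```python
def bonus_points(num_correct, cap=4):
    # One bonus point per correct in-class CRS answer, capped at 4:
    # 4 or more correct earns the full 4 points, 3 correct earns 3, and so on.
    return min(num_correct, cap)

for correct in range(9):                 # 0-8 correct out of the 8 unit questions
    print(correct, "correct ->", bonus_points(correct), "bonus points")
```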
Students in the non-CRS condition had the opportunity to consider each
question that was presented. Although students in this condition were neither
held accountable for their answers by having to “click in” and submit a response,
nor were they eligible to receive bonus points, they were able to view each of
the questions and listen to the explanations. Thus, the only difference between
the two conditions was that students assigned to the CRS condition submitted
answers and received extra-credit points, whereas those in the non-CRS con-
dition did not.
At the end of each unit, all students took the same unit exam. In the following
unit, the students switched conditions so that all students had an opportunity to
use the CRS device to answer questions during class lectures for bonus points.
At the end of these two instructional units, students completed a survey about
the use of CRS in the class and their perceived engagement in the course.
As all previous research has found either no effect or a slight positive effect
of classroom response technology on student achievement, we hypothesized that
students in the clicker group would perform as well as or better than students in
the control group (Caldwell, 2007). Based on this hypothesis, we performed a
one-tailed t-test of significance to determine if the use of clickers led to better
performance on the unit exams.
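A minimal sketch of this planned comparison, using made-up exam scores rather than the study's data (the scipy call and its alternative="greater" argument, available in scipy 1.6 and later, are our choice of tooling, not the authors'):

```python
from scipy import stats

# Hypothetical unit-exam scores (out of 30) for the two conditions;
# the one-tailed test asks whether the CRS condition scored higher.
crs_scores = [27, 24, 25, 22, 26, 23, 24, 25]
non_crs_scores = [23, 25, 22, 24, 21, 26, 23, 22]

t, p = stats.ttest_ind(crs_scores, non_crs_scores, alternative="greater")
df = len(crs_scores) + len(non_crs_scores) - 2
print(f"t({df}) = {t:.2f}, one-tailed p = {p:.3f}")
```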
Results and Discussion
Group means and standard deviations for the unit exam are reported in Table 3.
A one-tailed t-test indicated no difference between the CRS and non-CRS con-
ditions on the unit 4 exam, t(47) = 0.43, p = 0.33. Thus, as demonstrated by
unit exam scores, the use of CRS did not improve students’ learning outcomes
over and above the opportunity to view the questions and listen to each of the
answers being explained by the instructor. However, students received no training
in how to use the iClickers. Also, as they were allowed to use the CRS device for
only 2 days of instruction, one might argue that the duration of the intervention
was too short to have a measurable impact on student learning. Perhaps the
incentive of 4 bonus points was not enough to encourage students to become more
actively engaged in the lecture. Finally, the fact that all students were exposed
to the questions and subsequent discussions of the correct answers may have
minimized the effects of being in the CRS condition. We sought to address these
issues in Experiment 2.
Although there was no difference in learning outcomes as measured by the unit
exam, student survey results were generally positive regarding CRS use during
class lecture (see Table 4). As shown in Figure 1, the majority of students (47 of
53) agreed or strongly agreed with the statement, “The questions asked with the
clicker during lecture sessions helped prepare me for the exams.” Even more
students (52 out of 53) agreed/strongly agreed that the clicker questions helped
them to learn better in the class. Although this question did not consider a specific
alternative form of instruction, perhaps these students were comparing CRS use
to no questions at all. Students in the non-CRS condition also saw and may have
learned from the clicker questions. Regarding readiness, we examined students’
responses to the statement, “Using the clicker caused me to change how I prepared
for classes.” As shown in Table 4, the mean score on this item was lowest of
all eight items on the survey (M = 3.00). As Figure 1 further illustrates, nearly
half of students (24 out of 53) were neutral in response to this item, and more
students responded disagree/strongly disagree to this statement (16) than those
who responded agree/strongly agree (13).
Thus, students felt that using the CRS during instruction was helpful to their
overall learning and exam preparation (even though the unit exam scores indicated
otherwise). This finding is consistent with previous self-report findings in the
literature (Blood & Neel, 2008; Mayer et al., 2009; Morling, McAuliffe et al.,
2008). Even though students reported agreement with the idea that clicker
questions were helpful to their learning, there was no indication that the
accountability afforded by CRS affected the way they prepared for the lecture.
Over half of the students were neutral or disagreed outright with the statement
about whether using the CRS changed how they prepared for class. This minimal
impact on student preparedness may be related to the small number of bonus
points (4) students were able to earn for correct responses. In a class with
a total of 300 points, it may be that the opportunity to earn 4 bonus points was
not enough for students to value the opportunity afforded by being in the CRS
condition. Furthermore, allowing students to earn 1 bonus point for correctly
answering one of every two questions may have diminished the level of
accountability students felt for correctly responding to every question.

Table 3. Group Means and Standard Deviations for Unit 4 Exam Scores

                            M        SD
Group 1-CRS (n = 24)        24.42    3.72
Group 2-non-CRS (n = 25)    23.96    3.72

Table 4. Means and Standard Deviations of Student Responses on Each Item (n = 53)

Survey item                                                            M       SD
1. The questions asked with the clicker during lecture sessions
   helped prepare me for the exams.                                    4.17    0.73
2. The questions asked with the clicker during lecture sessions
   helped me learn better.                                             4.25    0.62
3. The clicker questions helped me realize how much I understood
   about what was being covered in the lecture sessions.               4.23    0.54
4. The clicker helped me feel more engaged during the lecture
   sessions.                                                           4.26    0.76
5. Using the clicker caused me to change how I prepared for
   classes.                                                            3.00    0.98
6. Using the clicker increased my confidence in my own
   understanding of the material.                                      3.87    0.73
7. The clicker was easy to use.                                        4.58    0.57
8. I would recommend using CRS in future classes.                      4.51    0.70

Note: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree.

Figure 1. Survey items related to effects of CRS opportunity on student preparedness (n = 53).
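Summaries like those in Table 4 can be produced directly from the raw Likert responses. A minimal sketch, assuming a hypothetical CSV with one column per survey item coded 1-5 (pandas and the file name are our assumptions, not the authors' pipeline):

```python
import pandas as pd

# Hypothetical file: 53 rows (students) x 8 columns (survey items), values 1-5.
responses = pd.read_csv("survey_responses.csv")

summary = responses.agg(["mean", "std"]).T.round(2)   # per-item M and SD (sample SD)
summary.columns = ["M", "SD"]
print(summary)
```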
Experiment 2 was designed to increase the accountability and reduce any
unfamiliarity effects that may have been present in Experiment 1. Instead of bonus
points, the CRS questions counted as part of the students’ overall course grade.
Further, we allowed students to practice using the CRS before they were used
during the lectures. To examine whether CRS could affect student preparation
for class, we gave students 10 “practice” questions at the beginning of the unit.
EXPERIMENT 2
Method
Participants and Design
Participants were 46 different undergraduate students enrolled in the same
course during the Spring 2010 semester. Of the 46 students, 39 consented to allow
researchers access to their responses and exam scores. Eighty-seven percent of
the sample was female, which is common for this education course.
Students were divided into two groups, using the same stratified random
assignment as in Experiment 1, and alternated using CRS in two consecutive units.
All of the students, with and without CRS, were encouraged to attend and
participate in class on a daily basis.
Materials and Apparatus
Apparatus
iClicker devices were again used for the first half of the study. However, to
create the need for a second “training” session in the crossover design, Mobile
Ongoing Course Assessment (MOCA), a web-based program, was used for the second half. Students
had the option to use their own laptop or borrow a laptop for these class periods.
Both the laptops and the program were provided to the students free of charge.
CRS questions were described in the course syllabus and served as participation
grades for this semester of the course. The course was graded on a 300-point scale,
of which 30 points could be earned by answering CRS questions. Compared to the
4 bonus points from the previous semester, 30 points could make a letter-grade
difference in the students’ grades for the course. For example, not answering or
incorrectly answering CRS questions could be the difference between earning
an A or a B. Below is the excerpt from the course syllabus:
We will be using classroom response systems (CRS) and laptops at various
times where you will answer questions during the lectures. These questions
will cover the readings and the lecture—just like the quizzes. You may earn
up to 30 points by answering these questions correctly. We won’t have
questions during every lecture—and you will be notified as to which lectures.
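To put these point values in perspective, the relative weight of the CRS questions can be computed directly; a trivial sketch (the letter-grade cutoffs in the comment are illustrative assumptions, not taken from the syllabus):

```python
TOTAL_POINTS = 300

for label, crs_points in [("Experiment 1 (bonus)", 4), ("Experiment 2 (course credit)", 30)]:
    share = crs_points / TOTAL_POINTS
    print(f"{label}: {crs_points} points = {share:.1%} of the course grade")

# With 10-percentage-point letter-grade bands (e.g., 270+ = A, 240-269 = B),
# forfeiting all 30 CRS points could drop a student one full letter grade.
```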
During units 2 and 3, 10 multiple-choice pre-reading questions were asked
prior to lecture on the first day with the purpose of providing training. These were
completed by all students, but did not count for any points. Additionally, eight
multiple-choice questions were embedded and asked during the 2 lecture days.
Students in the CRS condition answered these questions for course credit for
that particular unit.
Outcome Measures
For the two units in this study, each exam included 30 multiple-choice
questions. Student performance on these unit exams was used to assess learning
outcomes. Students’ scores on the practice pre-lecture questions at the beginning
of the unit were used to assess student preparedness.
Procedure
The study took place during two consecutive units (Units 2 and 3). Students
used CRS in alternating units for points toward their grade. During the first unit
of the semester, the class lectures did not contain any in-class multiple-choice
questions. Prior to the second class unit, half of the class was told that they would
have the opportunity to earn points for correctly answering in-class questions.
The control group was instructed that due to a lack of devices, they would not
be answering in-class questions until the following unit. Below is the e-mail
that students received prior to the second unit.
On Thursday, half of you (teams 1-4) will have an opportunity to earn
some points in class by answering questions using hand-held devices. The
lecture covers chapter 7 and we will do this again the following Tuesday.
The reason only half the class will be using the CRS is that we only have
access to about 25-30 iClickers. Otherwise, I would have to require that
everyone purchase a clicker for about $25. So, teams 5-8 will get to use them
for the lectures that go with quiz 3.
There is a survey associated with the CRS so please complete it. You
get one of your 30 possible points for completing the survey. You’ll get
another point for completing a survey later in the semester. The surveys
take only 2 min. to complete. That leaves 28 points to be distributed for
answering in-class questions.
Immediately upon entering class for unit 2, all students were given an iClicker
device and asked to answer 10 pre-lecture, practice questions as a training session.
The practice questions were factual questions based on the assigned chapter.
The students were told that extra devices had been recently purchased and to
save time all of the students would be trained on the devices at once. None of the
students received points for these questions, but they were asked to try to answer
them correctly. After the training session, the control group students were told
that they could keep the CRS to participate in the in-class questions, but they
would not receive points until the following unit. The CRS group kept the CRS
and received participation points for each correct answer. It is important to
note here that the control group differed from that in Experiment 1. In Experi-
ment 1, the control group did not use CRS—they simply saw the questions.
In Experiment 2, all students used CRS. But only the treatment group received
course credit for answering questions correctly. This was done in an attempt to
isolate the accountability variable.
In the following unit, the procedure was reversed. To make it necessary
to conduct a second “training session,” the researchers introduced Mobile
Ongoing Course Assessment (MOCA) as the classroom response system instead
of iClickers. Due to the possibility of crossover effects, scores from this unit
were not analyzed.
Results and Discussion
Scores on the 10 practice items were first examined. Based on our own
results in Experiment 1 and others’ previous research (e.g., Blood & Neel, 2008;
Mayer et al., 2009; Shapiro, 2009), we hypothesized that students in the CRS
condition would do as well as or better than those in the non-CRS condition.
Consequently, we used a one-tailed t-test. Students who were informed they
would be using CRS (M = 5.89, SD = 1.43) answered more questions correctly
than did students who did not expect to use CRS for that unit (M = 5.07, SD =
2.21), t(32) = 1.80, p = 0.04. Thus, informing students that they would be
using CRS to answer in-class lecture questions for course credit may have been
associated with better preparation for the lecture.
A comparison of unit 1 exam scores, conducted to check for pre-existing differences
between the two groups, indicated that the groups did not differ, t(37) = 0.16, p = 0.87.
Table 5 shows the groups’ means on the unit 1 and 2 exams. Using a one-tailed
t-test, students in the CRS condition performed better on the unit 2 exam than
did students who did not receive points for using CRS, t(38) = 1.95, p = 0.03.
Table 5. Group Means and Standard Deviations for Exam Scores by Unit

                    Unit 1-Control        Unit 2-CRS
                      M       SD            M       SD
Group 1 (n = 21)    20.19    2.34         22.38    2.87
Group 2 (n = 18)    20.06    2.94         19.61    5.75

In Experiment 2, increasing the accountability of student CRS responses by
making them part of the overall grade led to better student preparedness and better
scores on the unit exam. This finding confirms claims made by Burnstein and
Lederman (2001). Students who anticipated being graded on CRS questions in
class came to class better prepared than students who were not being graded.
Also, even though all students were allowed to use the CRS during the lectures,
only those students who were held accountable for their responses performed
better on the unit exam, which may have been due to their initial preparedness
advantage. Experiment 3 was designed to further examine the readiness effect
that might occur when students are told they may receive course credit for
correctly answering pre-lecture CRS questions.
EXPERIMENT 3
Method
Participants and Design
Experiment 3 used the same 39 consenting students from Experiment 2. Using
the same groups as before, students participated in the two conditions (CRS,
non-CRS) in two consecutive units of the course, units 4 and 5.
Materials and Apparatus
Apparatus
The Mobile Ongoing Course Assessment (MOCA) response system was used
because it allowed students to answer questions outside of class time. Addi-
tionally, the students could answer questions at their own pace within a teacher-
specified time frame.
The CRS questions could be answered for course credit, which was outlined in
the syllabus. Students in the CRS condition received points for correct responses
on 10 pre-lecture questions before each of the two lecture classes in a unit for a
possible total of 20 points. CRS questions were also asked during the lecture
classes, but these questions were optional and students did not receive points
for their answers to in-class questions.
Outcome Measures
Unit exams were used to measure student learning. Because students in the
control condition were not required to answer pre-lecture questions, we did not
compare the two conditions on this measure. However, student response rates to
the pre-lecture questions were recorded.
Procedure
During the fourth unit of the semester, half of the class was told that they
would receive participation points for answering questions prior to the start of
class. All students had the ability to view and answer the questions, but during the
fourth unit, only half of the students were able to earn points for correct responses.
In the following unit, the groups switched conditions and the other half of the
students were able to earn points. For both units, students had the opportunity
to answer 10 questions prior to the two lectures for each unit. Each lecture class
covered one chapter of information in the textbook. Students earned 1 point
for each correct answer for a total of 20 possible points.
Multiple-choice questions were also posed during lectures. All students had
the ability to answer these questions using MOCA, but students were not required
to respond and they did not earn points for their responses. The system was used
in every class to reduce any novelty effects of the technology. All students
had been trained in using MOCA during unit 3, as part of Experiment 2.
Results and Discussion
Because the groups were the same as in Experiment 2, no additional analyses
were conducted to test for initial group differences. Table 6 shows the mean
exam scores for unit 4. Due to the possibility of carry-over effects, scores for unit 5
were not analyzed. Once again, our hypothesis was that students in the CRS
condition would do as well as or better than those in the non-CRS group. A one-tailed
t-test revealed that the CRS group scored significantly higher on the unit 4 exam,
t(37) = 2.39, p < 0.01. Thus, compared to students who read the same pre-lecture
questions and heard the same explanations, but were not held accountable
for their answers, holding students accountable for pre-lecture questions using
CRS led to better exam performance.
Table 6. Group Means and Standard Deviations for Unit 4 Exam Scores

                            M        SD
Group 1-CRS (n = 21)        24.62    3.55
Group 2-non-CRS (n = 18)    21.39    4.85

When in the CRS condition, 100% of the students in both groups attempted to
answer the pre-lecture questions. However, during unit 4, only 56% of the students
in the control condition (not receiving points) voluntarily answered the CRS
pre-lecture questions. In unit 5, 71% of students in the control condition voluntarily
answered the clicker pre-lecture questions. This difference in voluntary
response rates may reflect a carry-over effect: some students who had experienced
the benefit of reading and preparing for the questions while in the CRS condition
during unit 4 apparently continued to do so in unit 5, even though they were no
longer receiving course credit.
DISCUSSION
Taken together, the three experiments provide support for using CRS to
increase student accountability, which increases readiness through preparation,
and ultimately may lead to better exam performance. In Experiment 1, students
who answered CRS questions for only a few bonus points did no better on a unit
exam than those who saw the same questions for no points. However, consistent
with several other CRS studies, students perceived that the CRS enabled them to
learn more. Perhaps the difference between measuring learning directly and asking
students whether they feel they learn better accounts for some of the divergent
findings and conclusions across studies.
In Experiment 2, we attempted to increase accountability by making the CRS
questions worth up to 10% of the students’ overall grade, compared to just over
1% in Experiment 1. Students who were told they would be using CRS to answer
questions for course credit performed better on a practice, “pop-quiz-like” assess-
ment of readiness than those who did not expect to use CRS for course credit.
These students who came to lecture better prepared, as demonstrated by their
performance on the clicker questions, also performed better on the subsequent
unit exam.
Finally, in Experiment 3, we had students use CRS to answer pre-lecture
questions outside of class for course credit. Students who did so for course credit
performed better than those who did not on a unit exam. Thus, it appears that one
possible benefit of CRS is in increasing student accountability, which can lead
to student readiness for lectures, which leads to better exam performance. While
paper-based quizzes or other forms of in-class assessment may also afford a
similar level of accountability, many of these methods are time consuming and can
be difficult to manage in large classes. Additionally, these methods do not provide
instructors with instant feedback. Based on our results, CRS may be an effective
replacement for those traditional methods for increasing student accountability.
Even with the relatively brief CRS interventions, the present study showed
increases in student preparation and learning when CRS questions were tied to
course credit. Although the anonymity feature of CRS is often promoted, instruc-
tors may also consider the affordance of holding students accountable with CRS
technology. Our findings showed that attaching even a few points to the questions
can increase students’ preparation for the class.
We did not measure student engagement/attentiveness during the lectures.
Whether using CRS with accountability also increases student engagement during
lectures is a question to be addressed in future studies. Several authors have
claimed that CRS increase student engagement (Blood & Neel, 2008; Bruff, 2009;
Stowell & Nelson, 2007). But again, these authors measured engagement with
student self-reports. We feel it is important to also use more objective measures
of engagement.
The within-class, crossover design may be a useful methodology for exam-
ining actual learning gains due to the introduction of technology into the
classroom. This design allows researchers to study learning in actual classroom
settings while controlling for instructor, individual, and various classroom com-
munity variables.
LIMITATIONS AND FUTURE RESEARCH
The CRS interventions in these studies were only two class sessions in length.
These studies focused only on accountability and the effects of pre-lecture
preparation, which should not depend on students becoming accustomed to clicker
use, so semester-long use by the same students was not considered necessary.
However, to assess long-term effects of using CRS for an entire course, future
studies will need to examine the effects of a longer CRS intervention that could
have more impact on student grades.
In addition to its brevity, the implementation of CRS into these courses was
also relatively shallow. The questions were largely fact- and recall-based, and
the questions were not used to prompt discussion or collaboration. Further, the
majority of students answered each question correctly, so the instructor had no
need to adapt instruction to the students’ responses. It is reasonable to believe,
however, that the observed effects of accountability and CRS use would be
similar or perhaps greater when combined with a more adaptive, student-centered
pedagogical approach.
As with all research designs, the use of a crossover design has limitations. To
eliminate the possibility of carryover effects, results from the second group using
CRS were not analyzed. However, the ability to compare students receiving
exactly the same instruction in the same classroom outweighs the limitations of
this approach.
A final limitation of these studies is that all three experiments took place in
education courses with majority-female enrollment. These results may not extend
to other populations or course disciplines. More research is needed on CRS in
different classes with students who may have different levels of motivation,
ability, or interest. These limitations aside, the present study offers
some insight into the possible affordances of CRS.
REFERENCES
Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy,
and implications. In D. Banks (Ed.), Audience response systems in higher education:
Applications and cases (pp. 1-25). Hershey, PA: IDC Publishing, Inc.
Blood, E., & Neel, R. (2008). Using student response systems in lecture-based instruction:
Does it change student engagement and learning? Journal of Technology and Teacher
Education, 16, 375-383.
Bruff, D. (2009). Teaching with classroom response systems: Creating active learning
environments. San Francisco, CA: Jossey-Bass.
Bunce, D. M., VandenPlas, J. R., & Havanki, K. L. (2006). Comparing the effectiveness
on student achievement of a student response system versus online WebCT quizzes.
Journal of Chemical Education, 83(3), 488-493.
Burnstein, R. A., & Lederman, L. M. (2001). Using wireless keypads in lecture classes.
The Physics Teacher, 39, 8-11.
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice
tips. CBE-Life Sciences Education, 6(1), 9-20.
DeBourgh, G. A. (2008). Use of classroom “clickers” to promote acquisition of advanced
reasoning skills. Nurse Education in Practice, 8(2), 76-87.
Edmonds, C. T., & Edmonds, T. P. (2008). An empirical investigation of the effects
of SRS technology on Introductory Managerial Accounting students. Issues in
Accounting Education, 23, 412-434.
Johnson, M., & Zimmaro, D. (2004). Using classroom performance systems in computer
science 303. Retrieved June 14, 2009 from The University of Texas at Austin: Center
for Teaching and Learning.
King, D., & Joshi, S. (2008). Gender differences in the use and effectiveness of
personal response devices. Journal of Science Education and Technology, 17,
544-552.
Krueger, A. B. (2003). Economic considerations and class size. The Economic Journal,
113(485), F34-F63.
Mayer, R. E. (2008). Learning and instruction (2nd ed.). Upper Saddle River, NJ: Merrill
Prentice Hall Pearson.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., et al. (2009).
Clickers in college classrooms: Fostering learning with questioning methods in large
lecture classes. Contemporary Educational Psychology, 34, 51-57.
Mazur, E. (1997). Peer instruction: A user’s manual. Upper Saddle River, NJ: Prentice
Hall.
Medina, M. S., Medina, P. J., Wanzer, D. S., Wilson, J. E., Er, N., & Britton, M. L. (2008).
Use of an audience response system (ARS) in a dual-campus classroom environment.
American Journal of Pharmaceutical Education, 72(2), Article 38.
Morling, B., McAuliffe, M., Cohen, L., & DiLorenzo, T. M. (2008). Efficacy of personal
response systems (“clickers”) in large, introductory psychology classes. Teaching of
Psychology, 35, 45-50.
Nelson, R., & Hevert, K. T. (1992). Effects of class size on economies of scale and mar-
ginal costs in higher education. Applied Economics, 24, 473-482.
Shapiro, A. (2009). An empirical study of personal response technology for improving
attendance and learning in a large class. Journal of the Scholarship of Teaching
and Learning, 9, 13-26.
Stowell, J. R., & Nelson, J. M. (2007). Benefits of electronic audience response systems
on student participation, learning, and emotion. Teaching of Psychology, 34(4),
253-258.
Toth, L. S., & Montagna, L. G. (2002). Class size and achievement in higher education:
A review of current research. College Student Journal, 36(2), 253-261.
Direct reprint requests to:
Dr. Sara J. Jones
College of Education
491 Farish Hall
University of Houston
Houston, TX 77204-5029
e-mail: sjjones3@uh.edu