AUDIENCE RESPONSE SYSTEMS AS AN INSTRUMENT OF
QUALITY ASSURANCE IN ACADEMIC TEACHING
Ortrun Gröblinger1, Michael Kopp2, Barbara Hoffmann1
1University of Innsbruck (AUSTRIA)
2University of Graz (AUSTRIA)
Audience response systems (ARS) permit students to answer electronically displayed multiple-choice
questions using a remote control device. In higher education, more and more lecturers use ARS to
engage their students more intensively, especially in mass lectures. This paper addresses the
questions of how ARS, as an innovative technology, can foster student feedback during
ex-cathedra teaching and how this feedback can contribute to quality assurance in academic teaching.
The examination of the advantages and challenges of using ARS, as well as of their didactic and
technical potential, combines theoretical considerations with an online survey among the more than
two hundred students enrolled in a mass lecture held in the winter term of 2015 at the University of
Innsbruck. Additionally, the responsible lecturers were asked to share their experience of using an ARS
for the first time. In closing, the paper offers some answers as to whether and how ARS are valuable
instruments for enhancing quality assurance in higher education.
Keywords: audience response systems, quality assurance, e-learning, educational technology
1 AUDIENCE RESPONSE SYSTEMS AS AN INNOVATIVE TEACHING TECHNOLOGY
Asking students directly whether they have understood the contents of a lecture is one of the most
efficient methods of quality assurance in teaching. In smaller groups, oral interaction with
students often works quite well (provided that they are interested in discussing at all). Initiating an
oral discussion with a larger group (e.g. in a mass lecture) is not that easy, and keeping the
discussion alive is even more difficult. In such situations, the lecturer's role is mostly that of a
reader, and the students are merely passive consumers, as Schmucker points out.
Giving and receiving feedback during mass education is rather challenging. According to
Anderson there are three main reasons for this: 1) the delay in feedback increases dramatically with the
number of students (feedback lag); 2) students are afraid to ask putatively "stupid" questions (student
apprehension); 3) the ex-cathedra teaching setting leads to less active participation (single speaker
paradigm). However, Roy stresses that "effective feedback is an essential part of students' learning
and skills development in the classroom". Though Roy has the feedback of lecturers in mind,
students' feedback is even more important for establishing high-quality teaching and learning.
But how can feedback be given efficiently in the context of mass lectures? Educational technologies
provide a proper solution for this; in particular, audience response systems (ARS) are very suitable
tools. As Kay & LeSage explain, ARS permit students to answer electronically displayed multiple-choice
questions using a remote control device. In higher education, more and more lecturers use
ARS to engage their students more intensively, especially in mass lectures. Kay &
LeSage emphasize that ARS already "have been used to improve student interaction, engagement,
and attention, increase attendance, stimulate peer and class discussion, provide feedback for both
students and instructors in order to improve instruction, and improve learning performance".
For lecturers, using a state of the art ARS means that they can prepare a series of questions which
may be distributed to the students at any time during their lecture. Students use their own devices to
answer those questions. Hence, lecturers get direct feedback about the students' understanding of
the content. Moreover, students have the chance to ask questions themselves, which are collected
automatically by the system and can be answered by the lecturer. This means that ARS provide lecturers
as well as students with a new and effective way of interacting, especially in massive courses.
As Ebner et al. describe, there are generally two different types of ARS: a distinction is made
between the better-known digital front channel systems and the digital back channel systems. Both
can be further divided into qualitative and quantitative applications; of course, there are systems that
support both types. A front channel system is visible to all participants: the lecturer asks a question
and students can answer it. If there are default answers to choose from, it is called a quantitative
system. Conversely, if the students can enter free text as a response, it is a qualitative
system. A back channel system, on the other hand, runs in the background and is not visible. These
systems allow students to write notes, to collect notes, or even to give teachers feedback on the
presentation speed. Again, the difference between qualitative and quantitative lies in the form of the
feedback (cf. Fig. 1).
[Fig. 1 shows the taxonomy: audience response systems split into front channel systems and back
channel systems; each branch splits into qualitative systems (free-text answer option) and
quantitative systems (default answers to choose from).]
Fig. 1: Types of audience response systems, quoted after Ebner et al., translated from German
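The taxonomy in Fig. 1 can be sketched as a small, illustrative data model. This is only a reading aid for the classification, not part of any real ARS implementation; the names `Channel`, `ResponseForm` and `ARSFeature` are our own.

```python
from dataclasses import dataclass
from enum import Enum


class Channel(Enum):
    FRONT = "front"  # visible to all participants
    BACK = "back"    # runs in the background, not visible


class ResponseForm(Enum):
    QUALITATIVE = "free-text answer option"
    QUANTITATIVE = "default answers to choose from"


@dataclass(frozen=True)
class ARSFeature:
    """One feature of an ARS, classified along the two axes of Fig. 1."""
    channel: Channel
    response_form: ResponseForm


# A front channel multiple-choice question (predefined answer options)
quiz = ARSFeature(Channel.FRONT, ResponseForm.QUANTITATIVE)

# Back channel free-text feedback, e.g. a note on the presentation speed
speed_note = ARSFeature(Channel.BACK, ResponseForm.QUALITATIVE)
```

A system that supports both types, as mentioned above, would simply expose features from more than one quadrant of this grid.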
Admittedly, ARS per se are not a brand-new technology; they have been a focus of research since
the 1960s. But since then, technology has evolved radically. Nowadays, not only is an adequate
internet connection (WiFi) available in most lecture halls, but students also bring their own
smartphones and tablets to their lectures. Given this "bring your own device" approach, the use of
special (and expensive) devices is no longer necessary. Simultaneously, ARS have developed
rapidly and offer a wide range of question types and additional feedback functions. Thus,
the combination of widely available internet access and enhanced ARS applications can be
considered an innovative approach to enhancing the quality of teaching and learning.
2 AUDIENCE RESPONSE SYSTEMS AS QUALITY ASSURANCE TOOLS
Naturally, quality assurance in teaching has been on the agenda of higher education institutions for
quite a while. In terms of quality assurance, the educational process has to be seen as supplying
knowledge to students (as customers of higher education institutions). Their satisfaction is a crucial
benchmark for high-quality teaching and learning in higher education. According to Boud & Prosser,
high-quality learning activities have to follow four principles: engagement of learners;
acknowledgement of context; challenge for learners; and the involvement of practice. In the context of
mass education, the question arises how these principles can be adopted and which educational
technologies are helpful and suitable for teaching large numbers of students effectively.
As mentioned above, the traditional university model of mass lectures can be seen as focusing too
heavily on learning through passive reception. In this behaviorist concept, the main focus is on the
passive absorption of a predefined body of knowledge by the learner. The engagement of students is
rather low, and learners are hardly challenged to participate actively. Under the assumption that the
quality of teaching and learning is enhanced if learning is considered active, cumulative, individual,
self-regulated, goal-oriented and situated, a constructivist approach seems more appropriate.
Following this approach, the role of the lecturer shifts from instructor to facilitator. Students become
more active in constructing knowledge and increase their understanding of the delivered
content through the opportunity to ask questions and through the use of assessments.
As Stoyanova points out, the above-mentioned features can be achieved efficiently by using ARS.
Thus, ARS support state-of-the-art instructional strategies in large lecture contexts. Undoubtedly,
a lot of benefits come along with the use of ARS, as Kay & LeSage summarize in their review of the
literature. Concerning classroom environment benefits, ARS increase attendance
(most of all when the questions provided are linked to examination questions), they attract the attention of
students (by interrupting exhausting ex-cathedra lectures), and they foster the engagement of students
(not least because students can take part anonymously). In terms of learning
benefits, numerous studies claim that students learn more and that learning performance is higher
when ARS are used. A reason for this might be the associated increase in interaction and discussion.
Moreover, ARS enable lecturers to modify their teaching immediately based
on the students' feedback.
Taking student feedback into account is an efficient and well-accepted technique for assessing
teaching quality. Marsh & Dunkin identify four purposes for collecting students' evaluations of
teaching: 1) diagnostic feedback to teachers about the effectiveness of their teaching; 2) a measure of
teaching effectiveness to be used in administrative decision making; 3) information for students to use
for the selection of course units and teachers; 4) an outcome or process description for use in
research on teaching. Student feedback gained through the use of ARS assists lecturers in assessing
student comprehension. When ARS are used for formative assessment without grading, misconceptions
can be identified and instructional strategies can be adjusted. Additionally, students can compare their
responses with those of their peers once the ARS feedback is presented in class, so they can monitor
their progress and/or receive confirmation that they are not alone in their misunderstanding.
Since feedback is closely linked to assessment, and assessment is closely linked to quality,
feedback is a crucial component of measuring high-quality teaching and learning. Instruction and
questioning, response and display, as well as data management and analysis are the core features of
ARS. Therefore, ARS can be very helpful for giving, receiving and analyzing feedback, especially in
mass education, which makes them efficient tools for quality assurance in higher education.
Certainly, there are also challenges in using ARS. In didactic terms, lecturers not only have to prepare
adequate questions but also the right number of questions per lecture. Furthermore, they need a
good sense of when to present those questions to the students, to ensure that the overall concept of
the lecture is not compromised by the use of ARS. All of this can be very challenging. As for
students, using an ARS requires more cognitive energy and cooperation, so students may refuse
to take part in ARS votes. Not all discussions initiated by ARS run smoothly, which may lead to
confusion. Students may also be concerned that their responses are being monitored and stored by
the system. Inter alia, the following case study is meant to examine to what extent these challenges
and concerns have to be taken into account when ARS are used in mass lectures.
3 CASE STUDY: ARS USED IN A MASS LECTURE
This case study is mainly intended to explore the advantages and challenges of using an ARS in
lectures with more than two hundred students. Most importantly, it examines whether it is
possible to activate students in such mass lectures and whether there is a didactic added value in using
ARS. Particular attention is paid to the question of whether ARS can be used as an instrument of quality
assurance in academic teaching.
3.1 General settings
The case study is based on the lecture “Grundlagen der Betriebswirtschaft“(Fundamentals of
Management) held in winter term 2015. The course is part of the core curriculum of the Bachelor's
Program Management and Economics at the University of Innsbruck. It provides students with a
broad overview of the field of business and management. Due to the total of more than
five hundred enrolled students, students participated both in class and via online streaming.
The course was held throughout the whole semester for three hours per week. Three lecturers were
involved, two of whom actively used the ARS ARSnova in several of their lessons. ARSnova
is a front channel system (cf. chapter 1) and allows both qualitative and quantitative responses.
The number of questions per lecture ranged from two to four. The questions were time-limited and
had the same design as the questions used in the final exam. So, aside from the aim of activating
students during the lecture, the intention was also to provide them with questions that might be used
in the final exam.
In the first lecture in which ARSnova was used, a total of 392 participants were logged in. During
the whole lecture, 4 questions were asked, and the number of students answering each question
ranged between 84 and 122.
While questions were presented in full screen, students had a certain amount of time to answer them
with their mobile devices. After time ran out, the statistics were displayed and the answers
were discussed. The lecturer usually explained the correct answers in more detail and pointed out common
misunderstandings or tricky parts of the questions. Questions were presented at the beginning of a
lecture, mainly to recapitulate the previous week's content, and also during lessons, mostly to see
whether students had understood the content.
At the end of the course, qualitative interviews were held with the lecturers who had used ARSnova,
and students could give their feedback through an online questionnaire.
ARSnova, the ARS used in the lectures, is developed and supported by THM ("Technische
Hochschule Mittelhessen - University of Applied Sciences") in Germany, and it is an open-source
product. The online service ARSnova is available for free and can be used by all lecturers at
universities, schools and other educational institutions. ARSnova can be operated without any license
on an institution's own server, as is the case at the University of Innsbruck.
ARSnova is based on didactic principles such as peer instruction and the inverted classroom.
Accordingly, there are two sections ('lecture questions' and 'preparation tasks') that the lecturer
can fill with questions. ARSnova offers various question types: besides multiple and single choice,
it is also possible to create flashcards or picture-based questions. The range of functions includes
not only questions that can be answered by students but also the possibility for students to ask
questions or to give the lecturer feedback on whether they can still follow the lesson. During the
period of observation, the lecturers only offered questions that allowed quantitative responses.
Like many other ARS, ARSnova follows an 'easy to use' approach, which suggests that it does not
need a lot of instruction beforehand, neither for lecturers nor for students.
3.2 Didactical dimension
Since ARSnova is designed to be easy and self-explanatory to use, only a short introduction was
given at the beginning of the first lecture. As part of this introduction, the lecturer provided
the students with the link to ARSnova as well as the session ID needed for the login.
According to the feedback questionnaire, most students felt adequately informed after this short
instruction. But the fact that twenty-nine students (out of eighty-two) did not feel sufficiently
informed might suggest that information should be provided before, and not only during, the first
lecture (Fig. 2).
Fig. 2: Information before first use of ARSnova, n = 82
The results in Fig. 2 contrast somewhat with the results of Fig. 3: the majority of students said that
actually using ARSnova for the first time did not cause any problems. This might lead to the
conclusion that, although students got along very well when using ARSnova for the first time,
information provided beforehand would still be appreciated.
Fig. 3: Manageability of ARSnova during the first use, n = 82
As can be seen in Fig. 4, a large number of students felt that the use of ARSnova helped their
understanding of the lecture content. Specifically mentioned was the fact that it was possible to
receive immediate feedback on whether the chosen answer had been correct.
Fig. 4: Supporting the understanding of the course content, n = 82
Asked about the number of questions provided during lessons, the majority of students stated
that more questions would have been appreciated. Usually two or three questions were asked during
a lecture. This was followed closely by the opinion that the number of questions was good, whereas
only a few students thought that there were too many questions (Fig. 5).
Fig. 5: Number of questions per course, n = 80
Students had to answer each question within a time frame, which was usually one or two minutes.
Most students stated they had an adequate amount of time to do so. Some students, however, stated
that having more time to find the correct answer would have been better for them (Fig. 6).
Fig. 6: Available response time per question, n = 80
3.3 Technical dimension
One of the main issues with using browser-based ARS is the quality of the internet connection. So it
was a crucial part of this case study to also get feedback about technical difficulties and problems
that occurred.
Fig. 7 clearly shows that most students had no difficulties using their own mobile device, and only a
few claimed otherwise.
Fig. 7: Students' use of their own mobile device, n = 82
The students were also asked about technical difficulties occurring during the use of ARSnova (Fig. 8).
One of the major problems mentioned was that students could not establish a steady WLAN
connection or failed to load the provided ARSnova link. Others mentioned that the login with the
session ID had to be repeated several times. The problems mentioned can mainly be identified as
internet connection problems.
Fig. 8: Technical difficulties with ARSnova, n = 79
3.4 Students’ perspective
Generally, the students' response to the use of ARSnova in the lecture was very positive. Using
ARSnova on a more regular basis in other lectures as well would be very much appreciated.
When asked which benefits students see in using audience response systems in lectures and which
aspects they liked best, one of the main benefits mentioned was the immediate feedback right after
answering the questions, which allowed students to verify their knowledge of the course content.
From the students' point of view it was also interesting to see how their fellow students voted, so the
possibility of showing live statistics was very well received. They also said it was motivating to
participate in the lecture in a more active way.
The fact that the questions were quite similar to the actual exam questions gave students a preview
of the final exam and was therefore an additional motivation. Some mentioned that it had been
helpful that the lecturer explained the right and wrong answers in depth. It also made it easier for
students to remember the content taught during the lecture.
All in all, the students' reaction to the use of ARSnova was very positive. So the result shown in
Fig. 9 is not surprising: most students hope for the use of ARSnova in other courses, too (Fig. 9).
Fig. 9: Further use of ARSnova, n = 80
3.5 Lecturers’ perspective
A useful tool should fit both students and lecturers. As already seen in the previous sections, the
students' feedback was quite good. To get an impression of how the lecturers felt during the use of
ARSnova, qualitative interviews were conducted with the two responsible lecturers. The following
section summarizes their feedback.
Analogous to the students' questionnaire, the interview started with questions regarding the "look and
feel" of ARSnova, its usability and technical problems. Like the students (see Fig. 3), the lecturers
also found ARSnova easy to use, but they pointed out that it is necessary to have some kind of
instruction prior to first use. This personal instruction by a specialist for the system took about 10
minutes. After this first input, the lecturers spent two hours on average learning how the software
works. The development of questions is not trivial, but the lecturers got used to it after a few trials. It
was mentioned that the correct setting of the timer was a bit challenging, especially at the beginning,
since the timer has to be set directly during the lecture and therefore cannot be prepared beforehand.
Since it was the first time ARSnova was used in a mass lecture, technical support was present at
the beginning of the first lecture to help solve any problems. During the lectures, some small
technical problems occurred, but they could mostly be solved directly. For example, in one case all
questions were visible to the students from the beginning, although the plan was that the lecturer
would decide when to show the next question.
As already pointed out in chapter 3.1, the students were not asked for qualitative feedback. This was
a conscious decision based on the didactic design of the lecture: the focus was on the training effect
for students. They should become familiar with the types, as well as the wording, of the questions
that would appear in the final exam. It was also intended that students experience the same limited
time frame, and therefore the time pressure of answering exam questions. The lecturers were asked
whether they think they will use the qualitative feedback function of ARSnova next time. Both
disagreed, because they did not think that this function would work in a mass lecture with more than
300 students.
Without knowing the results of the students' questionnaire, the lecturers had the impression that the
way they used ARSnova helped their students understand the lecture's contents, especially with
regard to what the questions in the final exam would look like. As shown in Fig. 4, students shared
this impression.
Asked for further comments, both lecturers could imagine using ARSnova again. It was pointed out
that the way such a tool influences one's own didactics can be a huge challenge: it forces the
lecturer to reflect on the course and the learning goals. The lecturers are interested in a detailed
comparison of strengths and weaknesses with other ARS, and they want to learn more about the
didactic opportunities of ARS in smaller groups. Finally, they asked what role ARS can play in the
context of quality assurance of learning and teaching.
4 CONCLUSION
The core assumption of this paper is that ARS make a valuable contribution to quality assurance in
higher education, especially when used in the context of mass lectures. As mentioned above, several
studies carried out by other researchers show that ARS are efficient tools for fostering the
engagement of students. ARS enable them to report directly whether or not they have understood the
content of a lecture. Thus, lecturers can alter their teaching methods immediately if necessary. At the
same time, they can gauge how well the material comes across to their students. Thus, ARS are very
suitable for giving and receiving feedback, which is a veritable basis for assessing both the students'
comprehension and the lecturers' teaching competence. Again, assessment is closely related to
quality assurance, which leads to the conclusion that ARS can in fact be considered a quality
assurance measure.
But is it worth using ARS? Most of the studies conducted previously were based on the use of
so-called "clicker systems", which usually meant installing rather expensive technology in a lecture
hall. The need to provide dedicated devices restricted the operation of ARS due to the associated
costs. In contrast, state-of-the-art ARS do not require additional devices: smartphones and tablets
can be used by the students to give feedback. Moreover, several ARS are available as open-source
products, which means that they are (more or less) scalable and place hardly any burden on the
university's budget. But this is only an advantage if they work smoothly, have a didactic value and
can be operated easily by students and lecturers.
The survey conducted among students at the University of Innsbruck as well as the feedback of the
lecturers in charge show that this is generally the case. This is confirmed by the following results of
the survey and the qualitative interviews:
- Students cope well with the use of ARS even if they are not extensively introduced to the
system (although an appropriate introduction is recommended);
- Lecturers need only a short introduction to use ARS properly (but this short introduction is
necessary);
- Students claim that ARS enhance their understanding of the course content (over 80 percent
were of this opinion), and lecturers share this estimation;
- The didactic approach to engaging the students worked out well: the vast majority of students
report that the number of questions and the response time were adequate;
- It is no problem for students to use their own mobile devices (almost 87 percent responded in
this way);
- A few technical problems occurred among students and lecturers when using ARS, but most
of them were related to a weak WiFi connection and could be solved quickly and easily;
- Concerning the future use of ARS, students as well as lecturers can imagine using them
again, given the recognized advantages.
Concerning the additional didactic benefit as well as the technical issues, these results lead to the
conclusion that ARS can be used as an efficient feedback tool. The present results support the
theoretical considerations and indicate that ARS are a valuable instrument of quality assurance in
academic teaching, as long as the necessary didactic and technical issues are taken into account.
But since this conclusion is based on only a single survey, further research is needed to prove that
the use of ARS in massive lectures enhances the quality of teaching and learning.
REFERENCES
Stuart, I. (2004). The Impact of Immediate Feedback on Student Performance. Global
Perspectives on Accounting Education 1, pp. 1-15.
 Schmucker, S. (2015). Cognitive Activation in Mass Lectures through Voting Systems in the
Lecture Theatre. The Online Journal of Quality in Higher Education 2(2), pp. 17-22.
 Ebner, M. (2013). The Influence of Twitter on the Academic Environment. In: Patrut, B., Patrut,
M., Cmeciu, C. (ed.). Social Media and the New Academic Environment: Pedagogical
Challenges. IGI Global, pp. 293-307.
 Anderson, R.J., Anderson, R., Vandegrift, T., Wolfman, S., Yasuhara, K. (2003). Promoting
Interaction in Large Classes with Computer-Mediated Feedback. In: Designing for Change in
Networked Learning Environments. Proceedings of CSCL 2003, pp. 119-123.
 Roy, J. (2015). The Implementation of Feedback in the English Classes of Bengali Medium
Schools. Global Journal of Human-Social Science: G Linguistics & Education 15(7), pp. 38-54.
Kay, R. H. & LeSage, A. (2009). Examining the Benefits and Challenges of Using Audience
Response Systems: A Review of the Literature. Computers & Education 53, pp. 819-827.
 Haintz, C., Pichler, K., Ebner, M. (2014). Developing a Web-Based Question-Driven Audience
Response System Supporting BYOD. Journal of Universal Computer Science 20(1), pp. 39-56.
 Ebner, M., Haintz, C., Pichler, K., Schön, S. (2014). Technologiegestützte Echtzeit-Interaktion in
Massenvorlesungen im Hörsaal. Entwicklung und Erprobung eines digitalen Backchannels
während der Vorlesung. In: Rummler, K. (ed.). Lernräume gestalten – Bildungskontexte
vielfältig denken. Waxmann, pp. 567-578.
 Mrozek, Z., Adjei, O., Mansour, A. (1997). Quality Assurance in Higher Education. Proceedings
of 4th International Conference Computer Aided Engineering Education, pp. 156-164.
 Boud. D. & Prosser, M. (2002). Key principles for high quality student learning in higher educa-
tion: a framework for evaluation. Educational Media International, 39(3), 237–245.
 Roberts, G. (1993). Educational Technology and the Mass Lecture. A Restatement of
fundamental Issues. Australasian Journal of Educational Technology 9(2), pp. 182-187.
 Dickinson, J. (2005). Enabling E-Learning in Higher Education. Newcastle: Newcastle Business
 Stoyanova, S. (2015). Use of Audience Response Systems in the HE Teaching and Learning
Context. Poster presentation at the Teaching and Learning in Social Sciences conference.
 Brady, M, Seli, H., Rosenthal, J. (2013): Clickers and metacognition: A quasi-experimental
comparative study about metacognitive self-regulation and use of electronic feedback devices.
Computers & Education 65, pp. 56-63.
 Richardson, J. (2005). Instruments for obtaining Student Feedback: A Review of the Literature.
Assessment & Evaluation in Higher Education 30(4), pp. 387-415.
 Marsh, H. W. & Dunkin, M. J. (1992). Students’ evaluations of university teaching: a
multidimensional perspective. In: J. C. Smart (Ed.) Higher education: handbook of theory and
research. Vol. 8.
 Ehlers, U.-D. (2013). Open Learning Cultures. A Guide to Quality, Evaluation, and Assessment
for Future Learning. Springer.
 Deal, A. (2007). Classroom Response Systems. A Teaching with Technology White Paper.
 Peez, G. & Camuka, A. (2015). "Das macht auf jeden Fall die Stunde spannender...".
Strukturmerkmale eines Audience Response Systems und dessen Nutzungsakzeptanz im
Hörsaal. medienimpulse-online 2/2015, pp. 1-3. http://medienimpulse.at/articles/view/793