Capturing student expectations in marketing education: Opportunities for real-time responsiveness
Abstract
Capturing student expectations at several points in a one-semester course creates the
possibility of responding in real time to student expectations, concerns and intentions. In our
discipline, best practice marketing is predicated on good market research. On that platform,
we argue that good marketing education should be responsive to the needs and expectations
of its consumers, in this case students enrolled in marketing courses. The expectations,
concerns and learning of students were captured at three points in a one-semester
undergraduate course. The findings show that understanding student expectations and
intentions can assist in designing courses that achieve high levels of student satisfaction in
terms of both content and process. One contribution of the paper is to explore a second round
of gathering student expectations during a course, which increases the potential to be
responsive in real time.
Keywords: marketing education, student expectations
Introduction
The push for excellence, and the recognition of demands for accountability and
responsiveness to our students’ needs, continue to challenge marketing educators. Much
research has examined student expectations at the macro or institutional level, but less has
been reported at the course (subject) level. Yet the course level is the very interface between
the teacher (marketing educator) and the learner (student). This study reports on a process
developed to enhance educator responsiveness to student expectations, concerns and
intentions.
The Literature
Concern with student expectations is not new per se; investigators like Soutar and Turner
(2002) studied the factors influencing student choice of university. Contemporary
marketing educators work in a context coloured by the shift to accreditation, competition
between tertiary institutions, increased accountability and rising student expectations (see
Smart, Craig, and Conant 2003). Researchers have used studies of service quality in the
university sector as one approach to developing performance indicators (see Soutar and
McNeil, 1996). These studies make their contribution by looking at the broader higher
education sector, and they indicate that many factors affect both educators and students. The
findings have implications for program and university administration and for tertiary
education policy. In this context, Duke (2002) discussed a process for developing learning
outcomes at the institutional level where one of the data collection tools gathered student
perceptions through exit surveys. He suggested that although trends could be tracked across
time, it was important to identify possible explanations for changes, such as changes in
curriculum (Duke 2002); in the Australian context, changes in fee-paying arrangements
would be one potential example of a major influence. However, in the complex university
environment, individual educators interact with students at the specific course (subject) level,
and this study focuses at the point of intervention in the teaching and learning partnership.
Measuring the effectiveness of curriculum, teacher and situation has been used as a way of
placing emphasis on the quality of the learning experience (Patel, 2003), and universities have
fostered the wide use of student evaluations of higher education courses for over 30 years
(Marks, 2000). Wallace and Wallace (1998) went so far as to argue that the cost of such
evaluations had outstripped their contribution, but their use has continued unabated in various
forms. Until recently, the majority of course evaluations have been administered at the end of
courses (Hernandez, 2002), mainly using quantitative methods to analyse student responses to
questions measured on an interval scale (Marks, 2000). These evaluations are more useful for
assessing teaching performance than student satisfaction, and follow-up action usually
concentrates on improving subsequent offerings of the same course.
Focussing on commencing university students, Sander, Stevenson, King and Coates (2000)
looked at student expectations of teaching. They did not however look at the expectations
students have of their own contribution to the teaching and learning partnership. In a different
approach, student expectations at the start of two courses were compared with evaluations at
the end, using gap analysis (LaBay and Comm, 2003).
Taken together, these approaches imply that educators should be striving to be more
responsive to student expectations. It is insufficient to gather data that might only influence
lecturers’ future performance or the conduct of the course in a subsequent semester or year.
The opportunity exists to explore a more comprehensive process for capturing and using
student expectation data.
Research Design
The focus of this qualitative and quantitative study is at the one-semester course level in an
Australian university. The aim was to capture student expectations and intentions at the start
of semester and at mid-semester, which was an addition to the LaBay and Comm (2003)
approach, and to have a final evaluation that was more informative than the centrally driven
teaching evaluations. Three instruments were used to capture data at these points. The
instruments were administered to eighty students enrolled in a second year undergraduate
marketing course, who voluntarily and anonymously completed them.
The purpose of the evaluations at commencement and mid-semester was to establish student
expectations and intentions in relation to a particular course of study. For the educators, the
information obtained at each stage was used to respond to student concerns and refine the
course delivery where appropriate in the ensuing weeks. The end of course evaluation
contributed to a comprehensive review process aimed at improving future courses.
The distinctiveness of the course concerned was achieved by a unique approach to integrating
theory and practice, using a lecture/tutorial format that encouraged active student
participation and interaction. The energy and enthusiasm for learning that was generated by
the highly committed staff from the commencement of the semester set the tone for the
duration of the course. Staff discussed their expectations in the first sessions, and the design
of the first instrument indicated some areas deemed important for success in the course. By
completing the form, students started to make their expectations explicit to themselves, and in
turn to the staff.
The instrument used to collect data at the start of semester contained several qualitative
measures of student key expectations, questions about the course, individual learning styles,
and how students can contribute to their own learning and that of their group. Students were
asked to mark on five-point interval scales the degree to which they were actively scanning
the business environment, and the degree to which they were scanning business literature and
the business press. Student intentions in relation to consultation with tutors, attendance at
lectures and attendance at tutorials were also captured.
At mid-semester, the instrument captured student expectations along similar lines to the start
of semester snapshot. Students were asked to suggest innovations in tutorial process and
content. As well, students were asked to list the best three articles that they had read so far,
indicate future career aspirations and to describe their patterns of tutorial and lecture
attendance. The end of semester learning evaluation asked students to state the best aspects of
the course, how the course could be improved and to give examples of new skills that had
been acquired during the course. In addition to these open-ended questions, students were
asked to indicate, on interval scales, responses to a series of questions relating to
improvements in knowledge and learning as a result of undertaking the course.
Findings
Analysis of the commencement of semester instrument showed that the most frequent key
expectation of students was to learn and understand the course material. This was followed
by obtaining good grades and by acquiring business skills. Students also questioned what the
course entailed and how it would enable them to obtain employment. The best ways in which
students stated they learned were by reading about others’ experiences, by listening, visually,
and through real-life examples. Students believed that the best ways in which they could
contribute to their own learning were through reading, attendance and study, while
contribution to group activities could best be achieved through effective communication,
participation and completing work in a timely manner.
The instruments completed at mid-semester measured student expectations and intentions in
the same terms as the instrument used at the commencement of semester. No major changes
in expectations and intentions were noted. However, additional items were included that
sought student suggestions for innovations in tutorial content and processes, together with a
measure of the best aspects of tutorials and lectures in the course so far. The students
provided few suggestions for innovation in tutorial content or processes, which suggests that
their needs were being met and substantial changes were not needed. When asked about the
best aspects of the tutorials to date, students responded that interaction with other
students and tutors was the most important aspect. This was followed in order of importance
by student presentations. The best aspects of lectures were reported as interaction with
students and lecturer, satisfaction with the lecturer and the use of real-world examples to
illustrate and reinforce concepts.
In the end of semester learning evaluation students reported their perceptions of the degree to
which their expectations in undertaking a course had been met. For this course, students
completed a set of statements designed to assess the degree to which their expectations of
knowledge and learning in this course had been met. Students marked their responses to each
question on an interval scale ranging from 1 to 5. Preliminary analysis of the data is shown in
Table 1. The
majority of students expressed high or very high agreement with the statement made in
conjunction with variables used to indicate process and content issues relating to the course.
It was extremely pleasing to see that students rated the usefulness of lessons to the real world
with the highest mean score (4.18). The textbook, library resources, and guest lecturer
received the lowest mean scores (3.13, 3.14, and 3.57 respectively); although these ratings
were still relatively positive, they offer opportunities for further development when the course
is next offered.
Students responded to two open-ended questions inviting feedback on the best aspects of the
course and how the course could be improved. Most students reported that the lecturer was
the best aspect of the course, followed in decreasing order of frequency by the linking of
theory and practice, and tutorials. Many students identified the posting of lecture notes prior
to the lectures as an area where improvement could be made together with the need to source
a different textbook, preferably Australian.

Table 1
Percentage frequencies of variables

Variable                                          1     2     3     4     5   Mean     SD
Overall satisfaction with the course             1.4   1.4  16.9  52.1  28.2  4.04   .801
Ability to use key concepts                        0   2.8  25.4  57.7  14.1  3.83   .697
Usefulness of lessons to the real world            0   1.4   8.5  60.6  29.6  4.18   .639
Ability to integrate theory and practice           0   1.4  15.5  56.3  26.8  4.08   .692
Future ability to meet strategic challenges        0   2.8  28.2  47.9  21.1  3.87   .773
Utility of course outline                        1.4   2.8  23.9  47.9  23.9  3.90   .848
Marking guidelines and feedback                  2.8   2.8  26.8  42.3  25.4  3.85   .936
Course web page                                  2.8   9.9  25.4  46.5  15.5  3.62   .962
Students’ critical analysis of presentations     1.4   1.4  31.0  45.1  21.1  3.83   .828
Workshops, seminars and tutorials                  0   7.0  28.2  50.7  14.1  3.72   .796
Lectures                                         5.6   9.9  23.9  42.3  18.3  3.58  1.078
Guest lecturer                                   4.3  13.0  23.2  40.6  18.8  3.57  1.078
Textbook                                         8.5  16.9  35.2  32.4   7.0  3.13  1.055
Library resources                               10.1  11.6  36.2  37.7   4.3  3.14  1.033
Students’ own motivation                         1.4   7.0  23.9  52.1  15.5  3.73   .861
Concepts used throughout the course                0   2.8  31.0  46.5  19.7  3.83   .774

* Interval scale figures are expressed as percentages.
** Interval scale measures: 1 = very low agreement; 2 = low agreement; 3 = neutral;
4 = high agreement; 5 = very high agreement.
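The preliminary analysis reported in Table 1 (percentage frequencies, means, and standard deviations for five-point interval items) can be sketched as follows. This is a hypothetical helper written for illustration, not the authors' actual analysis code; the function name and the sample ratings are assumptions.

```python
from collections import Counter
from statistics import mean, stdev


def summarize_likert(responses, scale=(1, 2, 3, 4, 5)):
    """Summarize five-point Likert responses as percentage frequencies, mean, and SD.

    `responses` is a list of integer ratings (1-5). Hypothetical helper that
    mirrors the kind of per-item summary shown in Table 1.
    """
    n = len(responses)
    counts = Counter(responses)
    # Percentage of respondents choosing each scale point, to one decimal place
    percents = {point: round(100 * counts.get(point, 0) / n, 1) for point in scale}
    return {
        "pct": percents,
        "mean": round(mean(responses), 2),
        # Sample standard deviation, as statistics-package output typically reports
        "sd": round(stdev(responses), 3),
    }


# Example: ten hypothetical ratings for a single item
summary = summarize_likert([4, 4, 5, 3, 4, 5, 4, 3, 4, 5])
```

A dict of scale-point percentages is returned alongside the mean and SD so that one call produces a full row of the table for each item.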
Discussion
The use of multiple instruments contributes to an iterative course evaluation process, based on
continuous improvement. To be successful, the teaching team must be able to respond with
agility to matters raised. All of the instruments gave students the opportunity to think about
their involvement and learning in the course, and thus enhanced reflection for the student, as
well as providing feedback to the staff.
By collecting early information, the course convenor can respond to expectations and
concerns and where necessary or possible make modifications. For example, when students
identify concerns about career prospects early, the lecturer can brief guest lecturers, use
current recruitment advertisements and have guest spots from students who have successfully
completed the course in previous years. To illustrate, a student who expressed concerns about
career options was helped to find work experience with one of the university’s industry
partners. Another advantage of the commencement instrument is that it subtly indicates to the
student expectations of the course, and the processes that could contribute to success. A
benefit of the mid-semester instrument was that it served as a critical checkpoint for staff and
students.
The end of semester instrument had reflective components, which helped to focus the students
prior to the final examination. The students welcomed the opportunity to put their ideas and
responses on paper, whereas the researchers were concerned that the students could feel
‘over-sampled’. Informal feedback indicated that the students felt that their opinions and
feedback were valued and that action would be taken where needed and possible.
The three instruments that are the focus of this paper were part of a larger suite of course
evaluation mechanisms which included quantitative student teaching evaluations conducted
centrally by the university, structured peer review, the collection of tutor expectations at
commencement and mid-semester, and tutor feedback at the conclusion of the semester. The
results of all evaluations, together with other inputs such as peer review, were data sources
for the final reflective Course Report compiled by the Course Convenor in conjunction with
tutoring staff.
Implications
The major implications for marketing education are in terms of processes and content, and
ongoing research. Marketing educators need to be committed to evaluation processes and
welcome rather than fear student feedback. In other spheres capturing customer feedback
would be mandatory. The process must be transparent, and students need to see that
academic staff have taken notice of the feedback. Not all matters that students mention can be
rectified within a semester. For example, timetabling and lecture facilities are less amenable
to quick changes. However, in such cases acknowledging concerns at the very least respects
the students’ opinions.
In terms of research, the use of course evaluation tools will permit within course comparisons
over time, and cross course comparisons. The dissemination of results within universities
could contribute to improved practice. Dissemination through marketing education
scholarship could contribute to improved measures.
Conclusions
Marketing educators can use marketing tools to improve performance in educational settings.
The study makes several contributions. Firstly, it extends the points of feedback collection.
In this study, the response from students was positive in terms of being invited to give their
opinions. Secondly, the study indicates that ongoing feedback can contribute to continuous
improvement and student satisfaction within a one-semester course. Finally, the paper
suggests that capturing student feedback is a challenge that all marketing academics can
address in their teaching and research.
References
Hernandez, S. A. 2002. Team learning in a marketing principles course: Cooperative
structures that facilitate active learning and higher level thinking. Journal of Marketing
Education, 24(1), 73-85.
LaBay, D. G., & Comm, C. L. 2003. A case study using gap analysis to assess distance
learning versus traditional course delivery. The International Journal of Educational
Management, 17(7), 312-317.
Marks, R. B. 2000. Determinants of student evaluations of global measures of instructor and
course value. Journal of Marketing Education, 22(2), 108-119.
Patel, N. V. 2003. A holistic approach to learning and teaching interaction: factors in the
development of critical learners. The International Journal of Educational Management, 17(6),
272-284.
Sander, Stevenson, King, & Coates. 2000.
Smart, Craig, & Conant. 2003.
Soutar & McNeil. 1996.
Soutar & Turner. 2002.
Wallace & Wallace. 1998.