The Development and Validation
of the Student Evaluation of Online
Teaching Effectiveness
Arthur W. Bangert
ABSTRACT. Traditional student evaluations of teaching do not adequately
assess the essential constructivist-based practices that have been recom-
mended for effective online instruction. There is a need for student evalua-
tion of teaching instruments that are specifically designed to provide online
instructors with valid feedback about the effectiveness of their online teach-
ing practices. The studies described in this article were undertaken to de-
velop and validate the Student Evaluation of Online Teaching Effectiveness
(SEOTE). Items for the SEOTE were written to assess constructivist-based
online teaching practices represented by Chickering and Gamson’s (1987)
Seven Principles of Effective Teaching. The first validation study was con-
ducted by analyzing SEOTE responses from 498 undergraduate and graduate
students enrolled in online courses. Results from an exploratory factor anal-
ysis of SEOTE responses yielded four interpretable factors: Student–faculty
interaction, active learning, time on task, and cooperation among students.
A second study involving 809 students enrolled in undergraduate and grad-
uate courses was undertaken to provide further validity for the SEOTE.
Results from this second validation study identified and confirmed the same
hypothesized four-factor SEOTE structure identified by the first validation
study.
KEYWORDS. Online course evaluation, online learning, student evalua-
tion of online teaching effectiveness, student evaluations of teaching
Arthur Bangert is an Assistant Professor for the Department of Education
at Montana State University, Reid Hall 115, Bozeman, MT 59717 (E-mail:
abangert@montana.edu.)
Computers in the Schools, Vol. 25(1–2), 2008
Available online at http://xxxx.haworthpress.com
© 2008 by The Haworth Press. All rights reserved.
doi: 10.1080/07380560802157717
The introduction of Internet-based technologies has revolutionized dis-
tance education. Distance or distributed education is no longer limited to
traditional correspondence courses, archaic electronic bulletin boards, or
compressed video. Electronic course management systems such as WebCT,
Blackboard, and eCollege integrate a variety of technologies that
allow instructors to create rich learning experiences for distance learners.
The number of students enrolling in online degree and training programs
continues to grow dramatically each year. According to Allen and Seaman
(2005), by the beginning of the 2004 academic year over 2.35 mil-
lion students were enrolled in online courses. Estimated tuition revenues
generated from Internet-delivered courses during the same academic year
were about five billion dollars (Blumenstyk, 2005), suggesting that offering
online degree programs is a good business decision for most institutions.
The significant growth in the number of online degree programs is not sur-
prising when considering the flexibility they offer to both traditional and
nontraditional students. Web-based degree programs provide educational
opportunities for students who want to take courses that will have positive
impacts on future careers but will at the same time allow them to maintain
family and work responsibilities.
There has been concern among educators that quality assurance proce-
dures for guiding the design and delivery of Internet-based courses have
largely been ignored, as colleges and universities rush to offer an array of
distance-delivered programs that will allow them to maintain and grow en-
rollments (Motiwalla & Tello, 2000; Phipps, Wellman, & Merisotis, 1998).
The complex issues surrounding the evaluation of Web-based distance-
delivered courses suggest that judgments relevant to the quality of courses
and programs must be guided by specific and measurable benchmarks
(Stella & Gnanam, 2004). The dramatic increase in Web-based distance
education courses and programs has compelled higher education policy
and accrediting organizations to offer recommendations to guide the design
and delivery of Web-based courses and programs. The Institute for Higher
Education Policy (IHEP) has recommended 24 benchmarks that cover
seven aspects that define excellence in Internet-based distance learning.
The seven categories that are used to evaluate the quality of online degree
and certification programs include institutional support, course develop-
ment, teaching/learning, course structure, student support, faculty support,
and evaluation and assessment (IHEP, 2000). Regional accreditation com-
missions such as the Western Interstate Commission for Higher Education
(WICHE) have collaborated with the Council of Higher Education Accred-
itation (CHEA) to develop standards for evaluating electronically offered
programs. The five standards used by WICHE to assess online programs
include institutional context and commitment, curriculum and instruction,
faculty support, student support, and evaluation and assessment.
Sheard and Markham (2005) recommend that comprehensive evalua-
tions of Web-based learning environments should consider both the tech-
nical and pedagogical aspects of online instructional systems. Several
comprehensive reviews of the literature have identified small numbers
of studies relevant to the evaluation of Web-based courses (e.g., Stewart,
Hong, & Strudler, 2004). Most of the research reviewed focused on evalu-
ating the appearance and structure of Web pages, the quality of hyperlinks,
navigation, computer-mediated communication, Web-based teaching and
learning, and the effect of technology on learning. A major limitation of
these studies, however, was their failure to use or develop psychometrically
sound instruments to assess the impact of the variables that were studied
on the educational experiences of learners enrolled in Web-based courses.
The lack of commonly accepted, research-based frameworks for evaluat-
ing Web-based learning environments creates major limitations for those
attempting to assess the quality of online learning programs. Despite the
challenges, several efforts have been made to assess the effectiveness of
online learning environments.
Stewart, Hong, and Strudler (2004), for example, have developed a 44-
item instrument used by students to evaluate the quality of Web-based
instruction. The development of their instrument was the result of a rigor-
ous item-development and validation process. The 44 items comprising
Stewart et al.’s instrument were based on an in-depth review of the litera-
ture related to online instruction. The response format for each item is a
five-point Likert scale with the following response options: 1 = strongly
disagree, 2 = disagree, 3 = undecided, 4 = agree, or 5 = strongly agree.
An exploratory factor analysis of 1,405 responses from online students
enrolled in 182 courses from 34 institutions identified seven factors: In-
structor and Peer Interaction, Technical Issues, Appearance of Web Pages,
Hyperlinks and Navigation, Content Delivery, Online Applications, and
Class Procedures and Expectations. The Cronbach’s alpha for each of the
seven dimensions was reported to exceed 0.70.
Roberts, Irani, Telg, and Lundy (2005) used Biner’s (1993) four-step
process to create the Telecourse Evaluation Questionnaire (TEQ). The
TEQ was designed to assess distance-delivered coursework using an eval-
uation form that was consistent with evaluation forms used for traditional,
face-to-face courses. Data from land-grant and Research I universities
were gathered to establish national benchmarks for evaluating distance
education. Based on this data, 85 items were written to evaluate charac-
teristics of sound online distance education. The items were reviewed by
experts in the field of distance education and pilot tested with distance edu-
cation students enrolled in online courses. As a result of reviews by experts
and pilot testing, 20 items were retained for the final version of the TEQ.
Thirteen of the TEQ items are related to instructor characteristics, with
the remaining seven relevant to the course materials, library resources, and
technology used for course delivery. To maintain consistency with evalu-
ation procedures for face-to-face courses, student responses were elicited
using a five-point Likert scale ranging from 1 (poor) to 5 (excellent). In
addition, two open-ended questions were included that asked students to
make suggestions for improving the evaluation form and procedures for
a distance-delivered course. The TEQ was pilot tested with 112 distance
education students enrolled in Web-based courses. Results from this study
found that the final 20-item instrument yielded a Cronbach’s alpha of .95,
indicating a high level of internal consistency.
The two instruments reviewed are solid attempts to provide technically
sound approaches for evaluating Web-based courses and programs. How-
ever, it is difficult for a single instrument to assess the range of complex
variables that contribute to quality online courses and programs. Web-
based learning environments are characterized by dynamic and complex
relationships that co-exist between content, pedagogy, and technology.
Considering the complex contexts that characterize online learning envi-
ronments, it is doubtful if the instruments reviewed are able to adequately
tap the important online instructional practices that integrate technology
appropriately for designing and delivering quality online courses.
THE STUDENT EVALUATION OF ONLINE TEACHING
EFFECTIVENESS (SEOTE)
Constructivist models of learning are almost exclusively recom-
mended as a guide for the design and delivery of Internet-based courses
(e.g., Bonk & Cunningham, 1998; Jonassen, 2000; Partlow & Gibbs, 2003).
The constructivist model of learning is premised on the notion that learn-
ers actively construct their own meaning and knowledge from their experi-
ences (Svinicki, 1999). This learning paradigm views teaching as a process
that involves helping learners to create knowledge through interactive and
authentic learning experiences (Partlow & Gibbs, 2003). The teacher’s
role is to guide students toward experiences that will facilitate meaningful
learning. Direct instructional activities where students passively assim-
ilate knowledge are minimized. Key features of constructivist learning
environments include active learning, authentic instructional tasks, collab-
oration among students, and diverse and multiple learning formats (Partlow
& Gibbs, 2003).
The Student Evaluation of Online Teaching Effectiveness was devel-
oped by Bangert (2004) to assess constructivist-compatible online teaching
practices recommended by Chickering and Gamson’s Seven Principles of
Effective Teaching (1987). Student responses are elicited using a six-point
Likert scale ranging from strongly agree to strongly disagree. In addition,
an open-ended question is administered to capture more individualized and
detailed student perceptions of the quality of Web-based teaching effec-
tiveness. The technical features of the SEOTE have been documented in
pilot studies (Bangert, 2004, 2005a) as well as validation studies involving
exploratory (Bangert, 2005b) and confirmatory factor analyses (Bangert,
2006).
THE SEOTE: ASSESSING RESEARCH-BASED
INSTRUCTIONAL PRACTICES
During the past 75 years thousands of research studies have been con-
ducted to provide insights into the complex array of variables that impact
student learning in college (Cross, 1999). These studies have been synthe-
sized into large volumes of information and position papers that highlight
best-practice for effectively delivering classroom instruction (e.g., APA,
1997; Pascarella & Terenzini, 1991). One of the best known summaries
of research-based instructional practices is the widely disseminated list of
Seven Principles of Effective Teaching authored by Chickering and Gam-
son (1987). The seven principles emerged from a panel of higher education
scholars who were asked to derive from their knowledge and experience a
set of principles that could be applied to improve learning. As a result of this
panel’s work, Chickering and Gamson concluded that student success is
related to effective teaching practices, which encourage (a) student-faculty
contact, (b) cooperation among students, (c) active learning, (d) prompt
feedback, (e) time on task, (f) high expectations, and (g) respect for di-
verse talents and ways of learning. The majority of the learner-centered
instructional practices that comprise the seven principles framework are
clearly focused on constructivist-based teaching practices. The principle
of active learning suggests that effective teaching engages students in
authentic learning activities that require them to select, organize, and in-
tegrate their experiences with existing knowledge to create new cognitive
schema (Hacker & Niederhauser, 2000). Authentic instructional activities
that include simulations, case-based examples, and other problem-solving
exercises not only increase interactive learning but also support the princi-
ple of high expectations. Clear performance expectations that accompany
authentic instructional activities inform students of the criteria necessary
for demonstrating acceptable and proficient levels of performance. When
performance expectations for authentic exercises are clearly communi-
cated, students not only have a better understanding of the criteria required
for successful task completion but also gain insights about expected perfor-
mances necessary for real-world problem solving (Magnani, Nersessian,
& Thagard, 1999; Vye, Schwartz, Bransford, Zech, & CTGVT, 1998). The
principle of cooperation among students is aligned with the constructivist
notion that social interaction enhances learning (Svinicki, 1999). A deeper
understanding of concepts occurs when students have opportunities to talk,
listen, and reflect with their peers as they engage in problem-solving ex-
ercises that require them to apply newly acquired knowledge and skills
(Millis & Cottrell, 1998).
Constructivists assert that learners are responsible for taking control
and ownership for their learning (Jonassen, 2003). The principle of prompt
feedback encourages students to be responsible learners by promoting self-
efficacy (Bandura, 1986) or confidence in their abilities to successfully
accomplish learning tasks. Research has demonstrated that self-efficacy
increases when students are supplied with immediate and frequent per-
formance feedback (Schunk, 1983). When perceived self-efficacy is high,
students are more likely to engage in effective self-regulatory strategies that
enhance academic achievement. Confident students take responsibility for
creating meaningful learning experiences by efficiently monitoring their
academic work time, persisting on tasks when confronted with academic
challenges, and accurately monitoring the quality of their work through
frequent self-evaluations (Pajares, 2002). Improved learner self-efficacy is
necessary for supporting the principle of time on task because students
who are confident about their skills maintain the academic persistence
necessary for high levels of academic achievement (Pintrich & DeGroot,
1990).
Instructional practices that allow for diverse talents and multiple ways
of learning consider that knowledge acquisition is a unique experience
for each learner (Svinicki, 1999). Students bring a varied range of aca-
demic talents, preferences, and experiences to instructional environments.
Allowing students to choose the pathways they will follow to achieve
learning goals is necessary for self-regulated learning and an increased
sense of self-efficacy. The practice of allowing students to choose instruc-
tional activities that are aligned with their unique learning styles, academic
strengths, and interests also supports learner self-efficacy.
The constructivist-based teaching practices recommended by the seven
principles framework are well suited for guiding the design and delivery of
quality Internet-based instruction (Beldarin, 2006; Billings, 2000). How-
ever, manipulating the existing technology in a manner that effectively
operationalizes these best practices for effective instruction may be per-
ceived as a significant challenge. Chickering and Ehrmann (1996) dispel
this notion by emphasizing that the newest versions of course-authoring
tools allow faculty to easily create the kinds of instructional activities
recommended by the seven principles framework. However, what must
be emphasized here is that the pedagogy implicitly defined by the seven
principles framework will ultimately determine the effectiveness of on-
line teaching and not the technology used to create Web-based learning
environments (Reeves & Reeves, 1997).
DEVELOPMENT OF THE SEOTE
Pilot Studies
The SEOTE was first piloted with graduate students (N = 24) enrolled
in an online educational statistics course. Study participants were seeking
master’s degrees in educational leadership, counseling, family and con-
sumer science, curriculum and instruction, and agricultural education
(Bangert, 2004). Items for the initial 35-item scale were written to assess
constructivist-based online teaching practices represented by Chickering
and Gamson’s (1987) Seven Principles of Effective Teaching. Content
validity of the instrument was established by a panel of college and uni-
versity online instructors who reviewed the items for clarity, accuracy,
and appropriateness for assessing the research-based practices that have
been identified as critical for effective online teaching. Student responses
were elicited using a six-point Likert scale which ranged from strongly
agree (6) to strongly disagree (1) (i.e., strongly agree, agree, mildly agree,
mildly disagree, disagree, strongly disagree). The 35-item questionnaire
was administered through the WebCT quiz tool during the last two weeks
of the Spring 2004 semester. Overall, student responses indicated that the
instructor used constructivist-compatible principles effectively to deliver
an online educational statistics course. In addition, the SEOTE was found
to be highly reliable, yielding a coefficient alpha of .94. Similar results
were found when the SEOTE was used to assess the quality of an online
assessment and evaluation course for practicing nurses seeking a master’s
degree in nursing education (Bangert, 2005a).
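As an illustration of the reliability analysis reported for these pilot studies, coefficient alpha can be computed directly from an item-response matrix. The short sketch below is not the author's analysis script; the simulated 24 × 35 response matrix and the Python/pandas implementation are assumptions used only to show the calculation.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient (Cronbach's) alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 24 pilot respondents rating 35 items on a 1-6 Likert scale.
rng = np.random.default_rng(seed=1)
seote_pilot = pd.DataFrame(rng.integers(1, 7, size=(24, 35)),
                           columns=[f"item_{i + 1}" for i in range(35)])
print(f"coefficient alpha = {cronbach_alpha(seote_pilot):.2f}")
```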
Exploratory Factor Analysis: Validation Study 1
The first validation study conducted with the SEOTE involved 498
undergraduate and graduate students enrolled in fully online (70%) and
blended classes (30%) offered through WebCT during the spring 2004
semester at Montana State University (Bangert, 2005b). Results from pilot
studies and further review by content experts identified 26 of the original 35
items as most appropriate for conducting an exploratory factor analysis to
determine if the SEOTE represented the seven distinct instructional dimen-
sions defined by Chickering and Gamson’s principles of effective teaching.
Students participating in this research were sampled from courses
across the following disciplines: agriculture, business, computer science,
education, nursing, arts, music, English, medical health, science (biol-
ogy/chemistry/physics), social science, philosophy, and psychology. The
most frequently identified majors of students completing the SEOTE
were found in education (38%), nursing (15%), and physics (9%) courses.
The remaining 38% of respondents were enrolled in a wide range of course
offerings across all university programs. Sixty-six percent of students com-
pleting the SEOTE were females and 40% were males. Most of the students
(68%) completing the SEOTE were enrolled in undergraduate courses.
Sixty percent of students surveyed indicated that they had previously taken
at least one WebCT course. Items were reviewed and revised based on
recommendations made by the review panel. The SEOTE was voluntar-
ily completed by students using the WebCT quiz tool during the last two
weeks of the Spring 2004 semester.
Twenty-six of the original 35 SEOTE items were analyzed using ex-
ploratory factor analysis procedures recommended by Field (2000) and
Fabrigar, Wenger, MacCallum, and Strahan (1999). The 489 students com-
pleting the SEOTE supplied 16 subjects per assessed variable, ex-
ceeding the sample size recommendations for conducting a factor analysis
with this data. Data screening procedures were also undertaken to evaluate
the factorability of the correlation matrix. Results from the Kaiser-Meyer-
Olkin Measure of Sampling Adequacy (KMO = .96) and Bartlett’s Test of
Sphericity indicated that the data were appropriate for the factor analysis
to proceed.
Several principal component factor analysis procedures were conducted
to establish a factor solution that yielded the most interpretable results.
The clearest factor pattern emerged when using oblique rotation methods
that identified four factors. The four-factor solution was evaluated against
Kaiser’s criterion and Cattell’s (1966) Scree test and found to best represent
the underlying traits for the final 26-item scale. A factor loading criterion
of .40, recommended by Stevens (2002), was adopted for including an item
in the interpretation of the final 26-item scale (see Table 1).
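For readers who wish to reproduce this type of screening and extraction, the sketch below illustrates the sequence just described (KMO, Bartlett's test, an oblique four-factor solution, and the .40 loading criterion). It assumes the Python factor_analyzer package and a hypothetical seote_study1.csv file of item responses; it is not the original SPSS-style analysis.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# Hypothetical file: 26 SEOTE item columns, one row per respondent (1-6 ratings).
responses = pd.read_csv("seote_study1.csv")

# Screen the correlation matrix for factorability.
chi_square, p_value = calculate_bartlett_sphericity(responses)  # Bartlett's test
_, kmo_overall = calculate_kmo(responses)                       # overall KMO
print(f"KMO = {kmo_overall:.2f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Principal-components-style extraction with an oblique (oblimin) rotation,
# retaining the four factors suggested by Kaiser's criterion and the scree test.
efa = FactorAnalyzer(n_factors=4, rotation="oblimin", method="principal")
efa.fit(responses)
print("eigenvalues:", efa.get_eigenvalues()[0].round(2))         # scree plot input

# Interpret only loadings at or above the .40 criterion.
loadings = pd.DataFrame(efa.loadings_, index=responses.columns,
                        columns=["Factor I", "Factor II", "Factor III", "Factor IV"])
print(loadings.where(loadings.abs() >= 0.40).round(2))
```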
TABLE 1. Validation Study 1: Rotated Factor Structure for the Student
Evaluation of Online Teaching Effectiveness
(loadings are listed in the order Factor I = Student–Faculty Interaction,
Factor II = Active Learning, Factor III = Time on Task, and Factor IV =
Cooperation Among Students)
Item   Factor I   Factor II   Factor III   Factor IV
SFC1 .770 .139 .020 .006
SFC2 .520 .290 .045 .142
SFC3 .744 .060 .038 .064
SFC4 .926 .003 .090 .041
CAS2 .600 .088 .121 .287
PF1 .537 .038 .323 .130
PF2 .847 .156 .185 .029
PF3 .849 .096 .051 .054
HE1 .517 .316 .120 .006
DTWL1 .590 .175 .009 .126
DTWL2 .427 .118 .235 .053
DTWL3 .515 .165 .151 .150
AL1 .164 .624 .216 .219
AL2 .142 .662 .196 .077
HE2 .057 .614 .160 .095
HE3 .248 .579 .112 .005
HE4 .006 .801 .129 .036
DTWL4 .203 .636 .178 .039
DTWL5 .089 .630 .274 .066
AL3 .044 .318 .536 .217
TT1 .371 .089 .540 .046
TT2 .241 .361 .406 .127
TT3 .371 .089 .590 .046
AL4 .013 .116 .005 .770
CAS1 .016 .104 .205 .873
CAS3 .101 .021 .230 .850
Alpha Coefficients .94 .87 .82 .82
Twelve of the 26 items comprising the SEOTE loaded on factor 1,
student-faculty interaction. The items associated with this factor were writ-
ten to assess dimensions of student-faculty contact, prompt feedback, and
diverse talents and ways of learning. These results suggest that students’
perceptions related to this group of items not only represent the frequency
of student-faculty contact but also the quality of interactive dialogues that
instructors have with students when supplying feedback, communicating
course expectations, conveying respect for students, and allowing flexibil-
ity for completing course assignments.
The two remaining diverse talents and ways of learning items (DTWL
items 4 and 5), active learning items 1 and 2, and three items from high
expectations (HE 2, 3, and 4) loaded significantly on the active learning
factor. The seven items comprising this factor assess the use of authentic
instructional activities that engage students in active learning experiences
that communicate clear expectations for proficient performance expected
in real-world contexts. Simulations, case-based examples, and other real-
istic problem-solving activities that require students to integrate their new
learning experiences with existing knowledge are essential for promoting
the type of active knowledge construction and discourse that lead to deeper
levels of understanding (Hacker & Niederhauser, 2000). One explanation
offered for the association of the three high expectations items with the
active learning factor is that many authentic learning activities are typi-
cally accompanied by scoring rubrics that provide clear expectations that
inform students of the criteria expected for successful problem solving in
real-world contexts (Magnani, Nersessian, & Thagard, 1999). The two di-
verse talents and ways of learning items, DTWL 4 and 5 (“The course used
a variety of assignments and activities that allowed students to demonstrate
understanding of critical course concepts” and “I was given choices about
the types of activities or assignments that I would complete to demon-
strate understanding of important course concepts”), which also loaded
on the active learning factor, suggest that students felt more actively en-
gaged in coursework when permitted to choose tasks that were interesting
and that they thought could be successfully completed. Allowing multi-
ple pathways for constructing knowledge supports self-regulated learning
because students who engage in tasks with goals that are perceived to be
obtainable more effectively monitor their academic work time, put forth
greater effort, and persist longer when confronted with academic chal-
lenges (Pajares, 2002). Interestingly, the active learning item “The course
allowed me to take responsibility for my learning” (AL 3) was found
to load with items representing the time on task factor, suggesting that
students perceived they were better able to assume responsibility for their
learning when provided with an efficient learning environment.
The fourth factor, cooperation among students, contained two items
written to specifically assess the inclusion of collaborative learning activ-
ities but also captured an item originally created to assess active learning.
The association of the item “The course was used to stimulate thoughtful
discussions” (AL 4) with the cooperation among students factor indicates
that students interpreted their experiences with course discussions as a
feature of group interaction rather than a characteristic of active learning.
Surprisingly, the item “I felt comfortable interacting with the instructor
and other students” (CAS 2) was found to contribute significantly to factor
1 (student-faculty interaction), suggesting that students perceived this item
as assessing their comfort level with instructor interactions rather than the
quality of collaborative interactions occurring mutually with both the in-
structor and other classmates. The coefficient alphas for the four factors to
emerge from this exploratory study ranged from 0.82 to 0.94.
The items written to assess the seven traits represented in Chickering
and Gamson’s (1987) framework failed to emerge as seven unique factors
in the factor analysis. One conclusion that might be drawn from this
finding is that the underlying traits that represent the characteristics of
effective face-to-face classroom settings manifest themselves differently
for online learning environments. Results from this study suggest that the
four factors that were identified to represent the construct of online teach-
ing effectiveness are aligned much more closely with Hacker and Nie-
derhauser’s (2000) proposed framework for promoting deep and durable
online learning. They suggest that meaningful online learning is accom-
plished by creating instructional environments that are characterized by
active learning, collaborative problem solving, and student-faculty inter-
actions that motivate students toward high levels of task engagement and
achievement.
Exploratory and Confirmatory Factor Analysis:
Validation Study 2
A second validation study with the 26-item SEOTE identified in the
previous validation study was conducted with a much larger sample of
undergraduate and graduate students (n = 807) enrolled in fully online
(58%) and blended classes (42%) (Bangert, in press). WebCT was the
course management system used for both forms of online instruction.
Sixty-eight percent of all students surveyed were enrolled in undergraduate
programs of study. Of the total number of students completing the SEOTE,
66% were female and 34% were male. The majority of students were
enrolled in education courses (26%) with the least number of students
taking social science (6%) courses. The remaining 29% of the respon-
dents for this study were enrolled in online courses offered across a di-
verse range of university colleges and departments (e.g., history, archi-
tecture, library science). Sixty percent of the students surveyed indicated
that they had taken at least one WebCT course prior to their participa-
tion in this study. The SEOTE was voluntarily completed by students
using the WebCT quiz tool during the last two weeks of the Spring 2004
semester.
Preliminary Analysis
Before the exploratory and confirmatory factor analytic procedures were
conducted, the data were subjected to tests of normality and skewness.
Both the symmetry (skewness = −1.143, SE = 0.09) and the “flat-
ness” (kurtosis = 1.80, SE = 0.17) were found to depart significantly from
normality (W = 0.925, p < 0.001). Visual inspection indicated that the
distributions for each of the 26 SEOTE items were negatively skewed
(M = −1.47, SD = 0.398). According to Fabrigar et al. (1999), factor an-
alytic procedures that employ maximum likelihood extraction methods are
not adversely affected when skewness of the variables is less than 2.00 and
kurtosis is not greater than 7.00. None of the individual variables analyzed
exceeded these critical thresholds. Additionally, Byrne (1998) points out
that non-normal data do not adversely affect estimation if all the distribu-
tions of the items that comprise the model are skewed in the same direction.
Descriptive statistics for all student responses from the original 26-item
SEOTE are presented in Table 2.
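The normality screening described above can be approximated with standard statistical libraries. The sketch below (scipy-based, with a hypothetical seote_study2.csv response file) shows how skewness, kurtosis, and Shapiro-Wilk values of this kind are obtained and how items would be checked against the Fabrigar et al. (1999) thresholds; it is illustrative rather than the original analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical file: 26 SEOTE item columns, one row per respondent (1-6 ratings).
responses = pd.read_csv("seote_study2.csv")

# Distribution of composite scores: skewness, kurtosis, and Shapiro-Wilk W.
total = responses.sum(axis=1)
print(f"skewness = {stats.skew(total):.3f}")
print(f"kurtosis = {stats.kurtosis(total):.3f}")        # excess kurtosis
w_stat, p_value = stats.shapiro(total)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.4f}")

# Item-level check against the thresholds cited from Fabrigar et al. (1999):
# maximum likelihood extraction is considered robust when |skewness| < 2.00
# and kurtosis < 7.00 for every item.
item_skew = responses.apply(stats.skew)
item_kurtosis = responses.apply(stats.kurtosis)
flagged = responses.columns[(item_skew.abs() >= 2.00) | (item_kurtosis >= 7.00)]
print("items exceeding thresholds:", list(flagged))
```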
Exploratory Analysis
Exploratory factor analysis procedures were conducted using one ran-
domly divided subsample (n = 404) of student responses from the SEOTE.
Guidelines suggested by Field (2000) and Fabrigar et al. (1999) were fol-
lowed when this analysis was conducted. Results from the 404 student
responses supplied 15 individuals for each of the 26 SEOTE variables,
exceeding the sample size recommendations suggested by Fabrigar et al.
(1999). Data screening procedures were also undertaken to evaluate the fac-
torability of the correlation matrix. Results from the Kaiser-Meyer-Olkin
Measure of Sampling Adequacy (0.95) and Bartlett’s Test of Sphericity
indicated that the data were appropriate for the factor analysis to proceed.

TABLE 2. Descriptive Statistics for the Original Twenty-Six-Item Version of
the SEOTE from Validation Study 2

Student Faculty Contact (SFC)
1. The instructor communicated effectively. (M = 5.11, SD = 1.20)
2. The instructor was enthusiastic about online teaching. (M = 5.05, SD = 1.12)
3. The instructor was accessible to me outside of the course. (M = 4.99, SD = 1.15)
4. The amount of contact with the instructor was satisfactory (e.g., email, discussions, face-to-face meeting, etc.). (M = 5.00, SD = 1.16)

Cooperation Among Students (CAS)
1. The course was structured so that I could discuss assignments with other students. (M = 5.03, SD = 1.12)
2. I felt comfortable interacting with the instructor and other students. (M = 5.22, SD = 0.966)
3. This course included activities and assignments that provided students with opportunities to interact with one another. (M = 4.94, SD = 1.16)

Active Learning (AL)
1. This course included interactive assignments and links to examples from the Web that directly involved me in the learning process. (M = 4.61, SD = 1.34)
2. This course used realistic assignments and problem-solving activities that were interesting and motivated me to do my best work. (M = 4.85, SD = 1.14)
3. The course allowed me to take responsibility for my own learning. (M = 5.38, SD = 0.834)
4. The course was used to stimulate thoughtful discussions. (M = 4.78, SD = 1.26)

Prompt Feedback (PF)
1. My questions about WebCT were responded to promptly. (M = 5.22, SD = 0.880)
2. My questions about course assignments were responded to promptly. (M = 5.16, SD = 1.01)
3. I was provided with supportive feedback related to course assignments. (M = 5.06, SD = 1.11)

Time on Task (TT)
1. The course was structured to be user friendly. (M = 5.27, SD = 0.868)
2. The course was designed to provide an efficient learning environment. (M = 5.05, SD = 1.00)
3. The course allowed me to complete assignments across a variety of learning environments. (M = 5.20, SD = 0.984)

High Expectations (HE)
1. This course used examples that clearly communicated expectations for completing course assignments. (M = 4.95, SD = 1.13)
2. This course provided good examples and links to other examples published on the Web that helped to explain concepts and skills. (M = 4.74, SD = 1.20)
3. The assignments for this course were of appropriate difficulty level. (M = 5.05, SD = 0.998)
4. The course used realistic assignments and problem-solving activities related to situations that I am likely to encounter outside of this course or in a future job situation. (M = 4.91, SD = 1.15)

Diverse Talents and Ways of Learning (DTWL)
1. The instructor was respectful of students’ ideas and views. (M = 5.34, SD = 0.952)
2. The course was designed so that technology would minimally interfere with learning. (M = 4.82, SD = 1.17)
3. Flexibility was permitted when completing course assignments. (M = 4.79, SD = 1.25)
4. This course used a variety of assignments and activities that allowed students to demonstrate understanding of critical course concepts. (M = 4.68, SD = 1.23)
5. I was given choices about the types of activities or assignments that I would complete to demonstrate learning of important course concepts. (M = 3.94, SD = 1.50)

Note. Students were asked to rate each item using the following response scale: Strongly Agree (6), Agree (5), Mildly Agree (4), Mildly Disagree (3), Disagree (2), and Strongly Disagree (1).
Open-ended item: Please make specific comments that you might have to explain in more detail your perceptions related to the questions above.
As in the previous study, the clearest factor pattern emerged when using
maximum likelihood extraction and oblique rotation methods. Again, when
Kaiser’s criterion and Cattell’s (1966) Scree test were considered, the four
factors to emerge were found to best represent the underlying dimensions
for the original 26-item scale. The rotated factor solution for the 26-item
Student Evaluation of Online Teaching Effectiveness (SEOTE) is presented
in Table 3. A minimum factor loading criterion of .400 recommended by
Stevens (2002) was adopted for including an item in the final interpreta-
tion. Factor 1, interpreted as student-faculty interaction, captured all four
student-faculty contact items (SFC 1 through 4) and the three items written
to assess prompt feedback (PF 1 through 3). The other items loading with
this factor were from the diverse talents and ways of learning (DTWL 1
and 2), cooperation among students (CAS 2) and the high expectations
(HE 1) items pools. Two items from the original SEOTE, “Flexibility was
permitted when completing course assignments” (DTWL 3) and “This
course used a variety of assignments and activities that allowed students to
demonstrate understanding of critical course concepts” (DTWL 4), exhib-
ited small loadings (<.400) and were not retained to define the SFI factor.
The second factor to emerge, cooperation among students, contained
two of the three items written to assess the use of collaborative learning
activities (CAS 1 and 3) in addition to one active learning item (AL 4).
The third factor, interpreted as active learning, included active learning
items 1 and 2, two high expectations items (HE 2 and HE 4), and
DTWL 5. The item “I was given choices about the types of activities or
assignments that I would complete to demonstrate learning of important
course concepts” (DTWL 5) exhibited a factor loading less than .400 and
was not retained for interpretation. The final factor interpreted, time on task,
captured all three of the time on task items (TT1 through 3) in addition to
one active learning item (AL 3) and one high expectations item (HE 3).
The descriptive statistics and factor loadings for all 23 retained SEOTE
items are presented in Table 3.
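The split-sample design used in this study (an exploratory analysis on one random half of the respondents and a confirmatory analysis, reported next, on the other half) can be set up as in the brief sketch below; the file name and random seed are illustrative assumptions.

```python
import pandas as pd

# Hypothetical file with all 807 SEOTE response records from validation study 2.
responses = pd.read_csv("seote_study2.csv")

# One randomly divided subsample for the exploratory factor analysis,
# the remaining respondents held out for the confirmatory factor analysis.
efa_sample = responses.sample(frac=0.5, random_state=42)
cfa_sample = responses.drop(efa_sample.index)
print(len(efa_sample), len(cfa_sample))   # approximately 404 and 403 cases
```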
Confirmatory Factor Analysis
Once the exploratory factor analysis was completed, a confirmatory fac-
tor analysis using LISREL 8.72 (Jöreskog & Sörbom, 2001) was completed
with the second subsample (n = 403) to test the stability and replica-
bility of the latent model produced by the exploratory factor analysis.
TABLE 3. Rotated Factor Structure for the SEOTE from Validation Study 2
(loadings are listed in the order Factor 1 = Student–Faculty Interaction,
Factor 2 = Cooperation Among Students, Factor 3 = Active Learning, and
Factor 4 = Time on Task)

My questions about course assignments were responded to promptly (PF 2): .865 .009 .013 .046
The amount of contact with the instructor was satisfactory (SFC 4): .829 .025 .025 .075
The instructor was accessible to me outside of this online course (SFC 3): .782 .051 .150 .097
I was provided with supportive feedback related to course assignments (PF 3): .782 .054 .089 .003
The instructor communicated effectively (SFC 1): .720 .105 .252 .074
The instructor was respectful of students’ ideas and views (DTWL 1): .715 .138 .135 .071
I felt comfortable interacting with the instructor and other students (CAS 2): .655 .257 .102 .122
The instructor was enthusiastic about online learning (SFC 2): .615 .121 .301 .093
My questions about WebCT were responded to promptly (PF 1): .605 .102 .010 .101
This course used examples that clearly communicated expectations for completing course assignments (HE 1): .412 .127 .363 .259
The course was structured so that I could discuss assignments with other students (CAS 1): .011 .938 .108 .075
The course was used to stimulate thoughtful discussions (AL 4): .007 .690 .196 .105
This course included activities and assignments that provided students with opportunities to interact with one another (CAS 3): .339 .540 .119 .052
This course included interactive assignments and links to examples from the Web that directly involved me in the learning process (AL 1): .154 .225 .523 .009
This course used realistic assignments and problem-solving activities that were interesting and motivated me to do my best work (AL 2): .066 .088 .582 .337
This course used realistic assignments and problem-solving activities related to situations that I am likely to encounter outside of this course or in a future job situation (HE 4): .078 .097 .545 .272
This course provided good examples and links to other examples published on the Web that helped to explain concepts and skills (HE 2): .242 .074 .437 .132
The course was structured to be user friendly (TT 1): .108 .015 .075 .812
The course was designed to provide an efficient learning environment (TT 2): .083 .008 .163 .743
The course allowed me to complete assignments across a variety of learning environments (TT 3): .035 .121 .099 .643
The course was designed so that technology would minimally interfere with learning (DTWL 2): .310 .060 .104 .511
This course allowed me to take responsibility for my own learning (AL 3): .075 .302 .173 .484
The assignments for this course were of appropriate difficulty level (HE 3): .325 .052 .158 .400
Alpha Coefficients: .93 .86 .83 .82
Results from the confirmatory factor analysis indicated that the indepen-
dence model that tests the hypothesis that all variables are uncorrelated
was easily rejected. The hypothesized four-factor model identified by the
exploratory analyses was tested and found to be a superior fit to the data
(see Figure 1).
There is no clear consensus regarding the indices that are most ap-
propriate for evaluating model fit. However, Byrne (1998) and others
(e.g., Bentler, 1992; MacCallum, Browne, & Sugawara, 1996) have sug-
gested that the root mean square error of approximation (RMSEA), the
comparative fit index (CFI), and the nonnormed fit index (NNFI) provide
optimal information for evaluating model fit. The RMSEA has been rec-
ognized as an informative index of fit because it provides a value that
describes the discrepancy or error between the hypothesized model and
an estimated population model derived from the sample. RMSEA values
less than 0.05 are indicative of good model fit (Byrne, 1998; MacCallum
et al., 1996). Both the CFI and the NNFI indices developed by Bentler are
advantageous for evaluating model fit because they consider both sample
size and model complexity. CFI and NNFI values greater than 0.90 are
indicative of good model fit.
The hypothesized four-factor model for the SEOTE yielded an RMSEA
of .042. The 90% confidence interval (0.038 to 0.047) around the obtained
RMSEA value provides additional evidence to support that the proposed
model is a “close fit” to the estimated population model. Both CFI (0.99)
and NNFI (0.99) values exceeded the recommended threshold value of
0.90 for those indices, providing further evidence of good model fit. The
power of the RMSEA test of close fit for this model with 224 degrees
of freedom and a sample size of 403 was found to approach 1.00 (see
MacCallum et al. 1996 for power calculation procedures). The internal
consistency reliabilities for all four SEOTE factors exceeded 0.80, indicat-
ing acceptable to high levels of internal consistency reliability (Anastasi
& Urbina, 1997). Coefficient alphas for the four SEOTE factors defined
by the hypothesized measurement model were as follows: Student-faculty
interaction (0.94), cooperation among students (0.86), time on task (0.82),
and active learning (0.85).
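A present-day approximation of this confirmatory step is sketched below using the open-source semopy package rather than LISREL; the measurement model follows the factor-item assignments in Table 3, and the item codes are assumed to be the column names of a hypothetical hold-out data file. Fit would be judged against the same benchmarks described above (RMSEA below .05, CFI and NNFI/TLI above .90).

```python
import pandas as pd
import semopy

# Measurement model: the four-factor, 23-item structure reported in Table 3.
model_description = """
SFI =~ SFC1 + SFC2 + SFC3 + SFC4 + PF1 + PF2 + PF3 + DTWL1 + CAS2 + HE1
CAS =~ CAS1 + CAS3 + AL4
AL  =~ AL1 + AL2 + HE2 + HE4
TT  =~ TT1 + TT2 + TT3 + DTWL2 + AL3 + HE3
"""

# Hypothetical hold-out subsample (n = 403) with item codes as column names.
data = pd.read_csv("seote_cfa_sample.csv")

model = semopy.Model(model_description)
model.fit(data)

# calc_stats reports chi-square, RMSEA, CFI, TLI (NNFI), and related indices.
fit_indices = semopy.calc_stats(model)
print(fit_indices.T)
```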
DISCUSSION
The studies discussed were undertaken to examine the psychometric
properties and factor structure of the Student Evaluation of Online Teaching
FIGURE 1. Hypothesized Four-Factor Model for the Final Twenty-Three-
Item SEOTE.
Effectiveness (SEOTE). The initial version of the SEOTE scale contained
35 items and was used to evaluate the effectiveness of online instruction
for an online educational statistics course and an online assessment and
evaluation course. However, results from an
exploratory factor analysis of SEOTE responses from 489 students enrolled
in online courses indicated that 26 of the original 35 items represented
four factors that were determined to best represent the underlying traits
of online teaching effectiveness (Bangert, 2005b). The second validation
study, conducted with 809 online students, also identified four SEOTE
factors. However, this analysis retained 23 of the 26 items identified by
the first validation study. Three of the five “diverse talents
and ways of learning” items (DTWL 3, 4, and 5) displayed small factor
loadings and were not included in the interpretation of the four factors
for this second validation study. In addition, one item, “The assignments
for this course were of appropriate difficulty level,” loaded on the Time on
Task factor. This item was originally associated with the Active Learning
factor when the first exploratory factor analysis was conducted with
489 online students (Bangert, 2005b). Results from the confirmatory factor
analysis verified that the four-factor structure consisting of the 23 items
identified in the second validation study fit the data well. These results
indicate that the most current form of the SEOTE, represented in Table 3,
is appropriate for assessing the online instructional effectiveness of higher
education faculty.
One reason offered to explain why four rather than seven factors
emerged from this analysis is that the dimensions of effective teaching
originally described for face-to-face classroom settings by Chickering and
Gamson’s framework have different causal relationships when applied to
online learning environments. Contextual influences such as student
characteristics, course content, and instructor skills manifest themselves
differently in online courses, implying that they will have different rela-
tionships to the processes and activities required for quality Internet-based
instruction.
Research in the area of student evaluations of teaching suggests that
teaching is a complex, multidimensional trait that is composed of distinct
instructional acts (Abrami & d’Apollonia, 1991; Feldman, 1997). The
similarities in the processes and procedures that define quality teaching
can create difficulties for researchers who attempt to write specific items to
represent categories or factors representing the characteristics of effective
instruction. In past years there has been an ongoing debate about the use
of global factors versus single items to describe students’ perceptions of
effective teaching practices. Marsh (1991) argues that instructor quality is
best defined by nine dimensions or factors. Abrami et al. (1997), on the
other hand, argue that effective teaching is better defined by single global
items that assess individual instructor activities that occur both before and
during teaching. They contend that the causal relationships between any
one teaching activity and teaching dimension (i.e., factor) vary as a function
of contextual influences such as student, instructor, and course content,
producing unstable causal relationships between the specific processes of
teaching and the factors they represent.
Despite the debate about the best use of information gathered from stu-
dent evaluations of teaching, the SEOTE can supply online instructors with
valuable diagnostic and summative feedback about students’ perceptions
of their teaching effectiveness. The four SEOTE factors offer a general
profile that describes an overview of students’ perceptions about the qual-
ity of teaching for online courses. The individual items are suitable for
supplying instructors with specific diagnostic feedback related to the use
of constructivist-compatible practices in the design and for the delivery
of their online courses. The main benefit of this type of feedback is that
instructors can use this information to improve both their courses and the
quality of educational experiences for their online students.
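As a sketch of how such a profile might be produced in practice, the items retained in Table 3 can be grouped by factor and averaged for a single course; the item codes, file name, and pandas implementation below are illustrative assumptions rather than a prescribed scoring procedure.

```python
import pandas as pd

# Retained SEOTE items (Table 3) grouped by the four factors.
FACTOR_ITEMS = {
    "Student-Faculty Interaction": ["SFC1", "SFC2", "SFC3", "SFC4", "PF1", "PF2",
                                    "PF3", "DTWL1", "CAS2", "HE1"],
    "Cooperation Among Students": ["CAS1", "CAS3", "AL4"],
    "Active Learning": ["AL1", "AL2", "HE2", "HE4"],
    "Time on Task": ["TT1", "TT2", "TT3", "DTWL2", "AL3", "HE3"],
}

def instructor_profile(responses: pd.DataFrame) -> pd.Series:
    """Mean rating (1-6 scale) on each SEOTE factor for one course's responses."""
    return pd.Series({factor: responses[items].to_numpy().mean()
                      for factor, items in FACTOR_ITEMS.items()})

# Hypothetical end-of-semester responses for a single online course.
course_responses = pd.read_csv("course_seote_responses.csv")
print(instructor_profile(course_responses).round(2))    # summative factor profile
print(course_responses.mean().round(2))                 # item-level diagnostics
```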
The Student Evaluation of Online Teaching Effectiveness provides a
useful tool for research and practice in the area of online instruction and can
be usefully employed in its current form. However, further research efforts
should be focused on studying the validity of the four-factor structure
of the SEOTE at other institutions where faculty are engaged in Web-
based instruction. These validation studies should also examine the effect
of student characteristics such as enrollment status (e.g., undergraduate
vs. graduate), gender, and ethnicity on the stability of the SEOTE factor
structure.
The items comprising the SEOTE were written to reflect research-based
teaching practices that have been identified as essential for delivering
quality online instruction. Results from this study suggest that the SEOTE
holds promise as an instrument that can be used by faculty to gather both
formative and summative feedback about the quality of their online teach-
ing efforts. The fact that an estimated 80% of faculty use feedback from
instructor evaluations to improve their teaching (Marsh & Bailey, 1993)
suggests that there is a continued need for a diagnostic instrument designed
to assess the effectiveness of online instruction as more faculty deliver
courses via the World Wide Web. Organizations such as the American
Association for Higher Education (AAHE) have made longstanding rec-
ommendations for the use of frameworks such as The Seven Principles of
Effective Teaching to guide the design and delivery of traditional higher
education courses and programs. The use of the Student Evaluation of On-
line Teaching Effectiveness supports those recommendations by supplying
instructors with feedback about the quality of instructional practices that
have been identified as essential for creating and delivering quality Web-
based courses.
REFERENCES
Abrami, P. C., & d’Apollonia, S. (1991). Multidimensional students’ evaluation of teaching
effectiveness—generalizability of “n = 1” research: Comment on Marsh (1991). Journal
of Educational Psychology, 30, 221–227.
Abrami, P. C., d’Apollonia, S., & Rosenfield, S. (1997). The dimensionality of student
ratings of instruction: What we know and what we do not. In R. P. Perry & J. C. Smart
(Eds.), Effective teaching in higher education: Research and practice (pp. 321–367).
New York: Agathon.
Allen, I. E., & Seaman, J. (2003). Sizing the opportunity: The quality and extent of online
education in the United States, 2002 and 2003. Retrieved November 11, 2006, from the
Sloan Consortium Web site: http://www.sloan-c.org/publications/survey/survey03.asp
American Psychological Association. (1997, November). Learner-centered psychological
principles: A framework for school design and reform. Retrieved April 21, 2005 from
http://www.apa.org/ed/lcp.html#Background
Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Englewood Cliffs, NJ:
Prentice-Hall.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.
Englewood Cliffs, NJ: Prentice Hall.
Bangert, A. W. (2006). The development of an instrument for assessing online teaching
effectiveness. The Journal of Educational Computing Research, 35(3), 227– 244.
Bangert, A. W. (2005a). The seven principles of effecting teaching: A framework for design-
ing, delivering and evaluating an Internet-based assessment course for nurse educators.
The Nurse Educator, 30(5), 221–225.
Bangert, A. W. (2005b). Identifying factors underlying the quality of online teaching
effectiveness: An exploratory study. Journal of Computing in Higher Education, 17(2),
79–99.
Bangert, A. W. (2004). The seven principles of good practice: A framework for evaluating
online teaching. The Internet and Higher Education, 7(3), 217–232.
Beldarin, Y. (2006). Distance education trends: Integrating new technologies to foster
student interaction and collaboration. Distance Education, 27(2), 139–153.
Bentler, P. M. (1992). On the fit of models to covariances and methodology to the bulletin.
Psychological Bulletin, 112, 400–404.
Billings, D. M. (2000). A framework for assessing outcomes and practices in Web-based
courses in nursing. Journal of Nursing Education, 39(2), 60–67.
Biner, P. M. (1993). The development of an instrument to measure student attitudes toward
televised courses. The American Journal of Distance Education, 7(1), 62–73.
Blumenstyk, G. (2005, January 7). For profit education: Online courses fuel growth. The
Chronicle of Higher Education, 15.
Bonk, C. J., & Cunningham, D. J. (1998). Searching for learner-centered, constructivist,
and sociocultural components of collaborative educational learning tools. In C. J. Bonk &
K. S. King (Eds.), Electronic collaborators: Learner-centered technologies for literacy,
apprenticeship, and discourse (pp. 25–50). Mahwah, NJ: Erlbaum.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS:
Basic concepts, applications and programming. Mahwah, NJ: Erlbaum.
Cattell, R. B. (1966). The Scree test for the number of factors. Multivariate Behavioral
Research, 1, 245–276.
Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology
as lever. AAHE Bulletin, 49(2), 3–6.
Chickering, A. W., & Gamson, Z. F. (1987, March). Seven principles for good practice in
undergraduate education. AAHE Bulletin, 39(7), 3–7.
Cross, P. K. (1999). What do we know about students’ learning and how do we know it?
Innovative Higher Education, 23(2), 255–270.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating
the use of exploratory factor analysis in psychological research. Psychological Methods,
4(3), 272–299.
Feldman, K. A. (1997). Identifying exemplary teachers and teaching: Evidence from student
ratings. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education:
Research and practice (pp. 368–395). New York: Agathon Press.
Field, A. (2000). Discovering statistics using SPSS for Windows. Thousand Oaks, CA:
Sage.
Hacker, D. J., & Niederhauser, D. S. (2000). Promoting deep and durable learning in the
online classroom. New Directions for Teaching and Learning, 84, 53–63.
Institute for Higher Education Policy. (2000, April). Quality on the line: Benchmarks for
success in Internet-based distance education. Washington, DC: Author.
Jonassen, D. H. (2000). Computers as mindtools for schools. Upper Saddle River, NJ:
Merrill Prentice Hall.
Jonassen, D. H. (2003). The vain quest for a unified theory of learning. Educational
Technology, 43, 5–8.
Jöreskog, K. G., & Sörbom, D. (2001). LISREL (Version 8.51) [Computer software].
Chicago: Scientific Software.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and
determination of sample size for covariance structure modeling. Psychological Methods,
1(2), 130–149.
Magnani, L., Nersessian, N. J., & Thagard, P. (1999). Model-based reasoning in scientific
discovery. New York: Kluwer Academic/Plenum.
Marsh, H. W., & Bailey, M. (1993). Multidimensional students’ evaluations of teaching
effectiveness: A profile analysis. The Journal of Higher Education, 64(1), 1–18.
Millis, B. J., & Cottrell, P. G. (1998). Cooperative learning for higher education faculty.
Phoenix, AZ: Oryx Press.
Motiwalla, L., & Tello, S. (2000). Distance learning on the internet: An exploratory study.
The Internet and Higher Education, 2(4), 253–264.
Pajares, F. (2002). Gender and perceived self-efficacy in self-regulated learning. Theory
Into Practice, 41(2), 118–125.
Partlow, K. M., & Gibbs, W. J. (2003). Indicators of constructivist principles in Internet-
based courses. Journal of Computing in Higher Education, 14(2), 68–97.
Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students. San Francisco:
Jossey-Bass.
Phipps, R., & Merisotis, J. (2000). Quality on the line: Benchmarks for success
in Internet-based distance education. Retrieved November 13, 2006, from
Columbia University, Institute for Higher Education Policy Web site: http://www.
ihep.com/organizations.php3?action=printContentItem&orgid=104&typeID=906&itemID=9239&templateID=1418
Phipps, R. A., Wellman, J. V., & Merisotis, J. P. (1998). Assuring quality in distance learn-
ing: A preliminary review. Washington, DC: Council for Higher Education Accreditation.
Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning com-
ponents of classroom academic performance. Journal of Educational Psychology, 82,
41–50.
Reeves, T. C., & Reeves, P. M. (1997). Effective dimensions of interactive learning on the
World Wide Web. In B. H. Khan (Ed.), Web-based instruction (pp. 59–66). Englewood
Cliffs, NJ: Educational Technology Publications.
Roberts, T. G., Irani, T. A., Telg, R. W., & Lundy, L. K. (2006). The development of
an instrument to evaluate distance education courses using student attitudes. The
American Journal of Distance Education, 19(1), 51–64.
Schunk, D. (1983). Developing children’s self-efficacy and skills: The roles of social
comparative information and goal setting. Contemporary Educational Psychology, 8,
76–86.
Sheard, J., & Markham, S. (2005). Web-based learning environments: Developing a frame-
work for evaluation. Assessment and Evaluation in Higher Education, 30(4), 353–368.
Stella, A., & Gnanam, A. (2004). Quality assurance in distance education: The challenges
to be addressed. Higher Education, 47, 143–160.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences (4th ed.). Mah-
wah, NJ: Erlbaum.
Stewart, I., Hong, E., & Strudler, N. (2004). Development and validation of an instrument
for student evaluation of the quality of Web-based instruction. The American Journal of
Distance Education, 18(3), 131–150.
Svinicki, M. D. (1999). New directions in learning and motivation. New Directions for
Teaching and Learning, 80, 5–27.
Vye, N. J., Schwartz, D. L., Bransford, J. D., Barron, B. J., Zech, L., & Cognition and
Technology Group at Vanderbilt (1998). SMART environments that support monitoring,
reflection, and revision. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacog-
nition in educational theory and practice (pp. 305–346). Hillsdale, NJ: Erlbaum.