Student Effort, Consistency, and Online Performance
Hilde Patron, University of West Georgia in Carrollton
Salvador Lopez, University of West Georgia in Carrollton
Abstract
This paper examines how student effort, consistency, motivation, and marginal learning influence student grades in an online course. We use data on 212 students from eleven Microeconomics courses taught online. Our findings show that consistency, or less variation in the time spent online, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a post-test and a pre-test. GPA is used as a measure of motivation, and the post-test minus pre-test difference as a measure of marginal learning. As expected, motivation is found to be statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.
Literature Review
The role of study time or effort in determining student grades or GPAs has been investigated for
many years and the results obtained have been mixed, from the expected positive, although
moderate, relationship found in early studies (Allen, Lerner, & Hinrichsen, 1972; Wagstaff &
Mahmoudi, 1976) to positive but insignificant (Schuman, Walsh, Olson, & Etheridge, 1985) and
even negative (Greenwald & Gillmore, 1997; Olivares, 2000). Early studies reported correlation
coefficients between study time and grades; later studies, such as the one done by Schuman et al.
(1985), added independent variables like aptitude measures (SAT) and self-reported attendance
and used much larger sample sizes (424 students) over a period of ten years (1973-1982).
Schuman et al. (1985) concluded that study time was not a significant factor explaining grades or
GPAs, but the paper has served as a major reference in the field. One subsequent paper (Rau &
Durand, 2000) observed that the lack of association found in the Schuman paper was due to the limited variability of SAT scores in its highly selective sample (University of Michigan). They used a sample of 252 students from Illinois State University and found a
positive relationship between GPAs and a constructed index based on study time, study habits
and academic orientation. Another related study (Michaels & Miethe, 1989) found a positive
relationship between study time and grades and suggested that the Schuman findings might have
contained specification errors. The authors added to their model a total of fourteen dummy
variables: five “quality of study time” variables and nine background or control variables such as
gender, years in college, field of study, etc. However, the positive relationship was significant
only among freshmen and sophomores. Yet another paper (Olivares, 2000), also arguing
specification errors in the Schuman paper, added other variables like course difficulty level,
grade inflation, and student cognitive ability, and found that study time and grades are negatively
and significantly related.
All of the articles listed above have three things in common. First, they used surveys to obtain
self-reported data. Second, they used a regression technique called stepwise regression. Third,
their reported R-squared values ranged between 0.10 and 0.20, meaning the independent variables explained only a relatively small share of the variance in student performance.
Student performance has also been analyzed in online courses. Some studies have continued
using web questionnaires or surveys (Cheo, 2003; Williams & Clark, 2004; Michinov, Brunot, Le Bohec, Juhel, & Delaval, 2011) while others have continued using the stepwise regression approach (Ramos & Yudko, 2008; Waschull, 2005). Using the information obtained either from surveys or the web-based system used in the course, these studies have concentrated on explaining grades with student participation (Ramos & Yudko, 2008), procrastination (Michinov, Brunot, Le Bohec, Juhel, & Delaval, 2011; Wong, 2008), student ratings of instructor and course
quality (Johnson, Aragon, Shaik, & Palma-Rivas, 2000) and time-management (Taraban,
Williams, & Rynearson, 1999; Wong, 2008).
Method
As indicated above, the studies that have analyzed the relationship between grades and study time, quality of time, procrastination level, student ratings, and time-management skills have used surveys to obtain that information. However, there is evidence that surveys may lead respondents to misreport or exaggerate their answers, especially when the information involves possible embarrassment, punishment, or reward. Researchers have found that survey responses are not reliable when workers report hours worked (Jacobs, 1998), when consumers report the amount of drugs used (Harrell, Kapsak, Cisin, & Wirtz, 1986), and when students report how they distribute their study time (Taraban, Williams, & Rynearson, 1999). Another technical paper (Stinebrickner & Stinebrickner, 2004) shows that reporting errors in survey responses can be substantial and, while discussing how the resulting estimators can be improved, warns about the inaccuracy of results obtained from such samples. In addition, the stepwise regression method used by most of the studies cited above is not reliable, since it leads to biased estimates (Kennedy, 2008, p. 49; Leamer, 2007, p. 101).
Given these findings, we do not rely on surveys to measure study time, nor do we use a stepwise regression method. Instead, we use the recorded time spent online as a measure of effort. In that sense, we follow the approach of Damianov et al. (2009), who found a positive and significant relationship between time spent online and grades, especially for students who obtained grades between D and B. They obtained their results using a Multinomial
Logit Model (MNLM), which they argue is more appropriate than Ordinary Least Squares (OLS) when the dependent variable is a letter grade (Damianov, Kupczynski, Calafiore, Damianova, Soydemir, & Gonzalez, 2009, p. 2). Our paper, however, uses the OLS technique
because our dependent variable is the numerical final grades obtained in the courses, and unlike
the stepwise regression approach, we use all the variables in a single model. While the use of
OLS would be inappropriate when the dependent variable is a discrete variable (Spector &
Mazzeo, 1980), this is not a problem with our model since our grades are continuous.
Variables and Model
Our sample consists of 212 students who were enrolled in eleven Microeconomics courses offered online by an accredited university located in Florida during the academic year 2009-2010. Because the number of minutes spent online per day was available for each student during these one-month intensive courses, we use the total minutes spent online as one explanatory variable, and the coefficient of variation of those daily minutes, the ratio of the standard deviation to the mean times one hundred, as a second explanatory variable measuring student consistency. This variable is our measure of quality of time or time-management skills. Relatively lower values of the coefficient of variation are evidence of higher consistency or better time-management skills, and vice versa. Because the coefficient of variation is a relative measure that does not depend on the level of minutes, it allows us to compare students' use of time across different levels of effort. A third
explanatory variable is the student's cumulative Grade Point Average, or GPA, which we suggest as a measure of student motivation. Our fourth independent variable is the difference between a post-test and a pre-test, each of which consists of the same twenty multiple-choice questions. Students take the pre-test at the beginning of the course, cannot see their grades on it until the end of the course, and are not aware that the same questions will be asked again in the post-test at the end of the course. The difference between these two tests, divided by each student's SAT score, has been used before as a measure of "scholastic effort" (Wetzel, 1977, p. 36). However, we did not have access to SAT scores, so we simply call this variable "marginal learning".
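As an illustration, the sketch below shows one way these explanatory variables could be assembled from per-day activity records. The file names, column names, and data layout are our assumptions for illustration only; they are not the actual export format of the learning management system.

import pandas as pd

# Hypothetical per-day activity log: one row per student per day of the course.
activity = pd.read_csv("daily_minutes.csv")   # assumed columns: student_id, day, minutes
students = pd.read_csv("students.csv")        # assumed columns: student_id, gpa, pre_test, post_test, grade

minutes = activity.groupby("student_id")["minutes"]
features = pd.DataFrame({
    "total_minutes": minutes.sum(),                    # effort: total minutes online
    "cov": minutes.std() / minutes.mean() * 100,       # consistency: coefficient of variation, in percent
})

data = students.set_index("student_id").join(features)
data["marginal_learning"] = data["post_test"] - data["pre_test"]   # post-test minus pre-test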
Our regression equation is Yi = α0 + α1Xi1 + α2Xi2 + α3Xi3 + α4Xi4 + εi, where Yi is the grade obtained in the course by the ith student, Xi1 is the student's GPA, Xi2 is the difference between the grades obtained by the ith student on the post-test and the pre-test, each of which consists of the same twenty multiple-choice questions, Xi3 is the amount of time spent online by the ith student during the course, in minutes, and Xi4 is the coefficient of variation of the time used by the ith student during the course. The symbols α0 and εi are the corresponding intercept and error terms.
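A minimal sketch of how this equation could be estimated by OLS follows, assuming the variables have been assembled as in the previous sketch and the sample restrictions described in the Data section below have been applied; the variable names are ours, for illustration.

import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_white

# OLS of the final grade on motivation (GPA), marginal learning, effort, and consistency.
model = smf.ols("grade ~ gpa + marginal_learning + total_minutes + cov", data=data)
results = model.fit()
print(results.summary())        # coefficients, p-values, R-squared, F-statistic

# White test for heteroscedasticity in the residuals.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(results.resid, results.model.exog)
print(f"White test p-value: {lm_pvalue:.3f}")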
Data
Our data set comes from eleven four-week Microeconomics courses taught online at an accredited university located in Florida. The university uses the learning management system known as Angel, which keeps records of the number of minutes each student spends online per day. Each course has an average of approximately 19 students, and our database does not include students who either did not log in after the second week of classes or did not take the final exam and/or the post-test. Grades are the numerical grades obtained after completion of the course. We do not include grades from students whose GPA was reported as zero. The post-test minus pre-test variable is the difference between an exit test and an entry test that contain identical questions. This variable was restricted to non-negative values, since negative values are usually due to students not taking the post-test and would have introduced a bias into our results. "Total minutes" is the total number of logged-in minutes each student accumulated from the first day of classes until completion of the course. Finally, the coefficient of variation is the ratio of the standard deviation to the mean of those daily minutes over the course, expressed as a percent.
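These sample restrictions translate into a few straightforward filters; the sketch below uses the same assumed column names as above (the rule about students who stopped logging in after the second week would additionally require the per-day activity log).

# Apply the sample restrictions described above (column names are assumptions for illustration).
data = data[data["gpa"] > 0]                          # drop students whose GPA was reported as zero
data = data.dropna(subset=["grade", "post_test"])     # drop students without a final grade or post-test
data["marginal_learning"] = data["marginal_learning"].clip(lower=0)   # keep only non-negative differences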
Table 1 below shows overall summary statistics for each variable, and Table 2 shows the average value of each variable per course.
TABLE 1: Data Summary

Variable                    Mean       Median     Lowest    Highest
Grades                      80.8       82.09      40.35     100
GPA                         3.12       3.25       1         4
Post-test – Pre-test        33.09      32.5       0         80
Total Minutes               2393       2058       413       8001
Coefficient of Variation    111.89%    107.87%    39.78%    247.12%
TABLE 2: Average Values Per Course

Course    Grades    GPA     Pre-Test    Post-Test    Minutes    C.O.V.
1         81.93     3.21    28.88       56.11        1968.88    118
2         81.14     3.30    30.78       59.47        2680.91    93.78
3         83.05     2.99    30.41       52.7         2348.35    125.89
4         78.87     3.11    36.66       62.77        2772.54    99.19
5         82.79     3.21    35.2        65.62        3010.77    98.26
6         82.98     3.08    48.68       66.84        2653.72    102.8
7         82.15     3.18    29.2        60.8         2559.62    120.12
8         78.49     3.08    28.94       47.89        2653.72    102.8
9         77.17     3.06    32.96       60.55        1811.35    120.18
10        78.75     3.07    34.07       59.81        2088.66    124.18
11        78.00     3.09    32.27       52.04        2266.45    115.62
Findings
The OLS regression results are shown in Table 3 below. Our model explains about 46% of the
variance of grades; the studies cited in the literature review explained at most 20%. It is not surprising to find that student motivation (GPA) is positively related to grades and is statistically significant at a 99% level of confidence. This result is consistent with both early (Park & Kerr, 1990) and recent (Crede, Roch, & Kieszczynka, 2010) studies. A 0.10 increase in a
student’s GPA is expected to increase the course grade by almost one point. The most surprising
result is that the amount of minutes spent online is not a statistically significant variable
explaining final grades. That is consistent with the lack of influence of study time on grades
reported by Schuman et al. (1985). Successful performance in online courses does not seem to be
a function of the amount of time spent online or effort. The results also reveal something interesting: students who log in more frequently and with less day-to-day variation in minutes tend to get higher grades. Table 1 shows that the coefficient of variation ranges approximately between 40% and 250%, and Table 3 indicates that if, for example, a student's coefficient of variation is currently 150%, an improvement to 100% would increase her final grade by an average of 2.5 points. This significant result is also found in face-to-face course research
that used other measures of consistency such as attendance (Romer, 1993; Durden & Ellis, 1995)
or different time-management skills (Britton & Tesser, 1991). It is also similar to online-course
research that has measured consistency with page hits (Ramos & Yudko, 2008) and
procrastination level (Michinov, Brunot, Le Bohec, Juhel, & Delaval, 2011). The last regressor in our model, the difference between the post-test and pre-test grades, or marginal learning, is also a significant influence on student grades. A student whose pre-test grade is 40 and whose post-test grade is 50 should expect, on average, an improvement of 0.6 points in her final grade. This is a result
that, in our opinion, should reflect the extent to which the objectives of the course, the pre and
post tests, and the assignments and tests given during the course are consistent with each other.
Even though the coefficient has the expected positive sign and is statistically significant, its value, 0.06, is far from what a one-to-one relationship between the two tests would imply. Since the range of post-test minus pre-test grades is about 80 points and the range of final grades is about 60 points, the coefficient ideally should be about 0.75 (60 divided by 80). We did not
find any reference to this topic in the literature, but we suggest that as the coefficient approaches
an expected one-to-one relationship, it might be an indicator of course-design consistency.
TABLE 3: Regression Results

                α0 (intercept)    α1 (GPA)    α2 (post-pre)    α3 (minutes)    α4 (COV)
Coefficient     53.2              9.46        0.06             0.0005          -0.05
p-value         2.83E-23          7.71E-18    0.03             0.17            0.01
R-squared = 0.46. F-value = 44.07. White test: no heteroscedasticity at the 5% significance level. Residuals show an approximately normal distribution, indicating that the unexplained variation is due to randomness.
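The predicted grade changes discussed in the text follow directly from the Table 3 coefficients; the short sketch below simply reproduces that arithmetic.

# Back-of-the-envelope predictions implied by the Table 3 coefficients.
coef_gpa, coef_post_pre, coef_cov = 9.46, 0.06, -0.05

print(coef_gpa * 0.10)            # ~0.95 points for a 0.10 increase in GPA
print(coef_post_pre * (50 - 40))  # 0.6 points for a 10-point post-test minus pre-test gain
print(coef_cov * (100 - 150))     # 2.5 points for cutting the coefficient of variation from 150% to 100%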
Conclusions and Recommendations
As indicated in the beginning of this paper, the relationship between effort, as measured by study
time, and grades is not clear. We did not rely on self-reported study time and instead used the
recorded amount of minutes students spent logged into the courses as a proxy for effort. Our
results support the evidence that effort is not a significant influence on grades. However, the
coefficient of variation of time, or our measure of student consistency, is a significant influence
on grades. For every 10-percentage-point reduction in the coefficient of variation, the overall grade increases by 0.5 points. This result is important for administrators, advisors, and students. Students should learn that what matters for good grades is not the total amount of time logged in, but how frequently and consistently that time is spent. Student advisors should emphasize that "studying hard" (total minutes) is not as important as "studying smart" (consistency).
Administrators who focus on the amount of minutes spent online as a measure of institutional success should also consider the coefficient of variation of those minutes. Lower coefficients of
variation should be a higher priority than high amounts of minutes. Finally, the difference
between a pre-test and a post-test could be used as a measure of course consistency with goals
and objectives. A well designed course should contain assignments and tests that evaluate
learning of objectives. If the questions on the pre-test and post-test are consistent with the
questions asked on quizzes, mid-term and final exams, and these in turn are also consistent with
the course objectives, the regression coefficient of a post-test minus pre-test should reflect a one-
to-one relationship with the final grades. The extent to which the resulting coefficient approximates that one-to-one relationship could be used as a measure of teaching effectiveness. Since the same Microeconomics course has recently been redesigned precisely to make all assignments and tests more consistent with new goals and objectives, the regression presented in this paper will be re-estimated to test this hypothesis. We also hope to incorporate additional variables indicating how individual students use their time during the one-month course while taking tests and completing assignments.
References
Allen, G., Lerner, W., & Hinrichsen, J. J. (1972). Study behaviors and their relationships to test anxiety and academic performance. Psychological Reports, 30, 407-410.
Britton, B. K., & Tesser, A. (1991). Effects of Time-Management Practices on College Grades. Journal of Educational Psychology, 83(3), 405-410.
Cheo, R. (2003). Making the Grade through Class Effort Alone. Economic Papers, 22, 55-65.
Crede, M., Roch, S., & Kieszczynka, U. (2010). Class Attendance in College: A Meta-Analytic Review of The Relationship of Class Attendance With Grades and Student Characteristics. Review of Educational Research, 80(2), 272-295.
Damianov, D., Kupczynski, L., Calafiore, P., Damianova, E., Soydemir, G., & Gonzalez, E.
(2009). Time Spent Online and Student Performance in Online Business Courses: A
Multinomial Logit Analysis. Journal of Economics and Finance Education, 8(2), 11-19.
Durden, G. C., & Ellis, L. V. (1995). The Effects of Attendance on Student Learning in Principles of Economics. American Economic Review, 85(2), 343-346.
Greenwald, A., & Gillmore, G. M. (1997). No pain, no gain? The importance of measuring course workload in student ratings of instruction. Journal of Educational Psychology, 89(4), 743-751.
Harrell, A., Kapsak, K., Cisin, I. H., & Wirtz, P. W. (1986). The Validity of Self-Reported Drug Use Data: The Accuracy of Responses on Confidential Self-Administered Answer Sheets. Social Research Group, The George Washington University. National Institute on Drug Abuse.
Jacobs, J. A. (1998, December). Measuring time at work: are self-reports accurate? Monthly Labor Review, 43-52.
Johnson, S., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative Analysis of Learner Satisfaction and Learning Outcomes in Online and Face-to-Face Learning Environments. Journal of Interactive Learning Research, 11(1), 29-49.
Kennedy, P. (2008). A Guide to Econometrics (6th ed.). Malden, MA: Blackwell Publishing.
Leamer, E. E. (2007). A Flat World, a Level Playing Field, a Small World After All, or More of the Above? A Review of Thomas L. Friedman's The World is Flat. Journal of Economic Literature, 45, 83-126.
Michaels, J., & Miethe, T. (1989). Academic Effort and College Grades. Social Forces, 68(1), 309-319.
Michinov, N., Brunot, S., Le Bohec, O., Juhel, J., & Delaval, M. (2011). Procrastination, participation, and performance in online learning environments. Computers and Education, 56, 243-252.
Olivares, O. (2000). Radical Pedagogy. Retrieved December 10, 2010, from ICAAP:
http://radicalpedagogy.icaap.org/content/issue4_1/06_olivares.html
Park, K. H., & Kerr, P. M. (1990). Determinants of Academic Performance: A Multinomial Logit Approach. Journal of Economic Education, 21(2), 101-111.
Ramos, C., & Yudko, E. (2008). "Hits" (not "Discussion Posts") predict student success in online courses: A double-cross validation study. Computers and Education, 50, 1174-1182.
Rau, W., & Durand, A. (2000). The academic ethic and college grades: Does hard work help students to "make the grade"? Sociology of Education, 73, 19-38.
Romer, D. (1993). Do Students Go to Class? Should They? Journal of Economic Perspectives, 7, 167-174.
Schuman, H., Walsh, E., Olson, C., & Etheridge, B. (1985). Effort and Reward: The Assumption that College Grades Are Affected by Quantity of Study. Social Forces, 63(4), 945-966.
Spector, L., & Mazzeo, M. (1980). Probit Analysis and Economic Education. Journal of Economic Education, 11, 37-44.
Stinebrickner, R., & Stinebrickner, T. R. (2004). Time-Use and College Outcomes. Journal of Econometrics, 121, 243-269.
Taraban, R., Williams, M., & Rynearson, K. (1999). Measuring study time distributions: Implications for designing computer-based courses. Behavior Research Methods & Instruments, 31(2), 263-269.
Wagstaff, R., & Mahmoudi, H. (1976). Relation of study behaviors and employment to academic performance. Psychological Reports, 38, 380-382.
Waschull, S. B. (2005). Predicting Success in Online Psychology Courses: Self-Discipline and Motivation. Teaching of Psychology, 32(3), 190-208.
Wetzel, J. E. (1977). Measuring Student Scholastic Effort: An Economic Theory of Learning Approach. The Journal of Economic Education, 34-41.
Williams, R., & Clark, L. (2004). College Students' Ratings of Student Effort, Student Ability and Teacher Input as Correlates of Student Performance on Multiple-Choice Exams. Educational Research, 46, 229-239.
Wong, W.-K. (2008). How Much Time-Inconsistency Is There and Does it Matter? Evidence on Self-Awareness, Size, and Effects. Journal of Economic Behavior and Organization, 68(3-4), 645-656.