The Journal of Educators Online, Volume 8, Number 2, July 2011 1
Student Effort, Consistency, and Online Performance
Hilde Patron, University of West Georgia in Carrollton
Salvador Lopez, University of West Georgia in Carrollton
Abstract
This paper examines how student effort, consistency, motivation, and marginal learning
influence student grades in an online course. We use data from eleven Microeconomics courses
taught online for a total of 212 students. Our findings show that consistency, or less time
variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent
online, is not. Other independent variables include GPA and the difference between a pre-test
and a post-test. The GPA is used as a measure of motivation, and the difference between a post-
test and pre-test as marginal learning. As expected, the level of motivation is found statistically
significant at a 99% confidence level, and marginal learning is also significant at a 95% level.
Literature Review
The role of study time, or effort, in determining student grades or GPAs has been investigated for
many years, and the results have been mixed: from the expected positive, although
moderate, relationship found in early studies (Allen, Lerner, & Hinrichsen, 1972; Wagstaff &
Mahmoudi, 1976) to positive but insignificant (Schuman, Walsh, Olson, & Etheridge, 1985) and
even negative (Greenwald & Gillmore, 1997; Olivares, 2000). Early studies reported correlation
coefficients between study time and grades; later studies, such as the one done by Schuman et al.
(1985), added independent variables like aptitude measures (SAT) and self-reported attendance
and used a much larger sample size (424 students) over a ten-year period (1973-1982).
Schuman et al. (1985) concluded that study time was not a significant factor explaining grades or
GPAs, but the paper has served as a major reference in the field. One subsequent paper (Rau &
Durand, 2000) observed that the lack of association found in the Schuman paper was due to the
low variability of its SAT scores, a consequence of the selectivity of the sample (University of
Michigan). They used a sample of 252 students from Illinois State University and found a
positive relationship between GPAs and a constructed index based on study time, study habits
and academic orientation. Another related study (Michaels & Miethe, 1989) found a positive
relationship between study time and grades and suggested that the Schuman findings might have
contained specification errors. The authors added to their model a total of fourteen dummy
variables: five “quality of study time” variables and nine background or control variables such as
gender, years in college, field of study, etc. However, the positive relationship was significant
only among freshmen and sophomores. Yet another paper (Olivares, 2000), also arguing
specification errors in the Schuman paper, added other variables like course difficulty level,
grade inflation, and student cognitive ability, and found that study time and grades are negatively
and significantly related.
All of the articles listed above have three things in common. First, they used surveys to obtain
self-reported data. Second, they used a regression technique called stepwise regression. Third,
their reported R-squared values ranged between 0.10 and 0.20; that is, the independent variables
explained a relatively small share of the variance in student performance.
Student performance has also been analyzed in online courses. Some studies have continued
using web questionnaires or surveys (Cheo, 2003; Williams & Clark, 2004; Michinov, Brunot,
Le Bohec, Juhel, & Delaval, 2011) while others have continued using the stepwise regression
approach (Ramos & Yudko, 2008; Waschull, 2005). Using the information obtained either from
surveys or the web-based system used in the course, these studies have concentrated on explaining
grades with student participation (Ramos & Yudko, 2008), procrastination (Michinov, Brunot,
Le Bohec, Juhel, & Delaval, 2011; Wong, 2008), student ratings of instructor and course
quality (Johnson, Aragon, Shaik, & Palma-Rivas, 2000) and time-management (Taraban,
Williams, & Rynearson, 1999; Wong, 2008).
Method
As indicated above, the studies that have analyzed the relationship between grades and study
time, quality of time, procrastination, student ratings, and time-management skills have used
surveys to obtain that information. However, there is evidence indicating that
the use of surveys may lead to respondents lying or exaggerating their responses, especially
when the information involves possible embarrassment, punishment, or reward. Some
researchers have found that survey responses are not reliable when workers report hours worked
(Jacobs, 1998), consumers report amount of drugs used (Harrell, Kapsak, Cisin, & Wirtz, 1986),
and students report their study time distribution (Taraban, Williams, & Rynearson, 1999). Another
technical paper (Stinebrickner & Stinebrickner, 2004) demonstrates how the reported errors from
survey questions can be relatively significant, discusses how the estimators can be improved, but
warns about the inaccuracies of the results obtained from such samples. In addition, the
stepwise regression method used by most of the studies cited above is not reliable, since it
leads to biased estimates (Kennedy, 2008, p. 49; Leamer, 2007, p. 101).
Given the findings stated above, we do not rely on surveys to measure our variables, nor do we
use a stepwise regression method. Instead, we use the recorded time spent online as a measure
of effort. In that sense, we follow the approach used by Damianov et al. (2009), who found a
positive and significant relationship between time spent online and grades, especially for
students who obtained grades between D and B. They obtained their results using a Multinomial
Logit Model (MNLM), which they argue is more appropriate than Ordinary Least Squares
(OLS) (Damianov, Kupczynski, Calafiore, Damianova, Soydemir, & Gonzalez, 2009, p. 2)
when using letter grades as the dependent variable. Our paper, however, uses OLS because our
dependent variable is the numerical final grade obtained in the course, and unlike
the stepwise regression approach, we use all the variables in a single model. While the use of
OLS would be inappropriate when the dependent variable is a discrete variable (Spector &
Mazzeo, 1980), this is not a problem with our model since our grades are continuous.
Variables and Model
Our sample consists of 212 students who were enrolled in 11 Microeconomics courses offered
online by an accredited university located in Florida during the academic year 2009-2010. Since
the number of minutes spent online per day was available for each student during the one-month
intensive courses, we use the total minutes spent online as one explanatory variable, and we
calculated the coefficient of variation of those minutes, the ratio of the standard deviation to
the mean times one hundred, as a second explanatory variable that estimates student consistency.
This variable is our measure of quality of time or time-management skills. Relatively lower
values of the coefficient of variation are evidence of higher consistency or better
time-management skills, and vice versa. The coefficient of variation is unit-free, so it allows
us to compare student usage of time across different levels of effort. A third explanatory
variable is the student's cumulative Grade Point Average (GPA), which we suggest as a measure
of student motivation. Our fourth independent variable is the difference between a pre-test and
a post-test, each consisting of the same twenty multiple-choice questions. The students
take the pre-test and are not able to see their grades until the end of the course, and they are not
aware that the same questions will be asked at the end of the course in the post-test. The
difference between those two tests divided by the SAT scores of each student has been used
before as a measure of “scholastic effort” (Wetzel, 1977, p. 36). However, we did not have
access to the SAT scores, so we just call this variable “marginal learning”.
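As a concrete illustration of this consistency measure, the sketch below (Python; the daily-minute profiles are hypothetical, since the paper does not show Angel's raw data format) computes the coefficient of variation from a student's daily log-in minutes:

```python
import statistics

def coefficient_of_variation(daily_minutes):
    """Ratio of the standard deviation to the mean of daily log-in
    minutes, expressed as a percent (the paper's consistency measure).
    Lower values indicate more consistent day-to-day study time."""
    mean = statistics.mean(daily_minutes)
    # Population standard deviation; the paper does not specify
    # sample vs. population, so this choice is an assumption.
    sd = statistics.pstdev(daily_minutes)
    return sd / mean * 100

# Two hypothetical students with identical total effort (600 minutes)
# over a 30-day course, but very different consistency:
steady = [20] * 30               # 20 minutes every day
crammer = [0] * 28 + [300, 300]  # everything in the last two days

print(round(coefficient_of_variation(steady), 2))   # → 0.0
print(round(coefficient_of_variation(crammer), 2))  # → 374.17
```

Both students would contribute the same value to the "total minutes" variable, yet their coefficients of variation differ sharply, which is exactly what this regressor is designed to capture.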
Our regression equation is: Yi = α0 + α1Xi1 + α2Xi2 + α3Xi3 + α4Xi4 + εi, where Yi is the
grade obtained in the course by the ith student, Xi1 is the student's GPA, Xi2 is the difference
between the grades obtained by the ith student on a post-test and a pre-test, each containing the
same twenty multiple-choice questions, Xi3 is the amount of time spent online by the ith student
during the course, in minutes, and Xi4 is the coefficient of variation of the time used by the
ith student during the course. The terms α0 and εi are the intercept and error term,
respectively.
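This specification can be estimated with ordinary least squares in a few lines. The sketch below (Python with NumPy) fits the four-regressor model; the data are synthetic, generated from assumed coefficients over ranges similar to Table 1, since the authors' data set is not public:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 212  # sample size used in the paper

# Synthetic regressors (illustration only -- not the authors' data).
gpa      = rng.uniform(1.0, 4.0, n)        # Xi1: motivation (GPA)
learning = rng.uniform(0.0, 80.0, n)       # Xi2: post-test minus pre-test
minutes  = rng.uniform(413, 8001, n)       # Xi3: total minutes online
cov      = rng.uniform(39.78, 247.12, n)   # Xi4: coefficient of variation

# Grades generated from assumed "true" coefficients plus noise.
eps = rng.normal(0.0, 5.0, n)
y = 50.0 + 9.0*gpa + 0.06*learning + 0.0005*minutes - 0.05*cov + eps

# OLS: stack an intercept column and minimize ||X @ alpha - y||.
X = np.column_stack([np.ones(n), gpa, learning, minutes, cov])
alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(alpha, 4))  # estimates of (alpha0, alpha1, ..., alpha4)
```

With this synthetic sample the estimates recover the assumed coefficients closely, which is all the example is meant to show; the paper's actual estimates appear in Table 3.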
Data
Our data set comes from four-week Microeconomics courses at an accredited online university
located in Florida. The university uses the learning management system known as Angel, which
keeps records of the number of minutes each student spends online per day. Each course has
an average of approximately 19 students, and our database does not include students who
either did not log in after the second week of classes or did not take the final exam and/or the
post-test. Grades are the numerical grades obtained after completion of the course. We do not
include grades from students whose GPAs were reported as zero. The Post-test − Pre-test variable
is the difference between an exit test and an entry test, which contain identical questions. This
variable was restricted to non-negative values, since negative values are usually due to students
not taking the post-test and would have introduced a bias in our results. Total
minutes is the final amount of logged-in minutes the students spent from the first day of classes
until completion of the course. Finally, the Coefficient of Variation is the ratio of the standard
deviation to the mean amount of minutes after completion of the course, expressed as a percent.
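The exclusion rules just described amount to a simple record filter. The sketch below (plain Python; the field names and records are hypothetical, and the non-negativity rule is read here as dropping the affected students) illustrates them:

```python
# Hypothetical student records; field names are illustrative only.
students = [
    {"id": 1, "gpa": 3.2, "post_minus_pre": 15, "took_final": True,
     "logged_in_after_week2": True},
    {"id": 2, "gpa": 0.0, "post_minus_pre": 20, "took_final": True,
     "logged_in_after_week2": True},   # dropped: GPA reported as zero
    {"id": 3, "gpa": 2.8, "post_minus_pre": -10, "took_final": True,
     "logged_in_after_week2": True},   # dropped: negative post - pre
    {"id": 4, "gpa": 3.5, "post_minus_pre": 30, "took_final": False,
     "logged_in_after_week2": True},   # dropped: did not take the final
]

def keep(s):
    """Apply the paper's stated exclusion rules to one record."""
    return (s["logged_in_after_week2"]
            and s["took_final"]
            and s["gpa"] > 0
            and s["post_minus_pre"] >= 0)

sample = [s for s in students if keep(s)]
print([s["id"] for s in sample])  # → [1]
```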
Tables 1 and 2 below show overall summary statistics for each variable and average values per
course, respectively.
TABLE 1: Data Summary

Variable                   Mean      Median    Lowest    Highest
Grades                     80.8      82.09     40.35     100
GPA                        3.12      3.25      1         4
Post-test − Pre-test       33.09     32.5      0         80
Total Minutes              2393      2058      413       8001
Coefficient of Variation   111.89%   107.87%   39.78%    247.12%
TABLE 2: Average Values Per Course

Course   Grades   GPA    Pre-Test   Post-Test   Minutes   C.O.V.
1        81.93    3.21   28.88      56.11       1968.88   118
2        81.14    3.30   30.78      59.47       2680.91   93.78
3        83.05    2.99   30.41      52.7        2348.35   125.89
4        78.87    3.11   36.66      62.77       2772.54   99.19
5        82.79    3.21   35.2       65.62       3010.77   98.26
6        82.98    3.08   48.68      66.84       2653.72   102.8
7        82.15    3.18   29.2       60.8        2559.62   120.12
8        78.49    3.08   28.94      47.89       2653.72   102.8
9        77.17    3.06   32.96      60.55       1811.35   120.18
10       78.75    3.07   34.07      59.81       2088.66   124.18
11       78.00    3.09   32.27      52.04       2266.45   115.62
Findings
The OLS regression results are shown in Table 3 below. Our model explains about 46% of the
variance of grades; the studies cited in the literature review explained at most 20%. It is not
surprising to find that student motivation (GPA) is positively related to grades and is
statistically significant at a 99% level of confidence. This result is consistent with early (Park &
Kerr, 1990) as well as recent (Crede, Roch, & Kieszczynka, 2010) studies. A 0.10 increase in a
student’s GPA is expected to increase the course grade by almost one point. The most surprising
result is that the amount of minutes spent online is not a statistically significant variable
explaining final grades. That is consistent with the lack of influence of study time on grades
reported by Schuman et al. (1985). Successful performance in online courses does not seem to be
a function of the amount of time spent online, or effort. The results also reveal something
interesting: students who log in more frequently and with less day-to-day variation in minutes
tend to get higher grades. Table 1 shows that student consistency varies approximately between
40% and 250%. Table 3 indicates that if, for example, a student's coefficient of variation is
currently 150%, an improvement to 100% would increase her final grade by an average of 2.5
points. This significant result is also found in face-to-face course research
that used other measures of consistency such as attendance (Romer, 1993; Durden & Ellis, 1995)
or different time-management skills (Britton & Tesser, 1991). It is also similar to online-course
research that has measured consistency with page hits (Ramos & Yudko, 2008) and
procrastination level (Michinov, Brunot, Le Bohec, Juhel, & Delaval, 2011). The last regressor in
our model, the difference between the post-test and pre-test grades, or marginal learning, is also
a significant influence on student grades. A student whose pre-test grade is 40 and post-test is
50, should expect on average an improvement of 0.6 points in her final grade. This is a result
that, in our opinion, should reflect the extent to which the objectives of the course, the pre and
post tests, and the assignments and tests given during the course are consistent with each other.
Even though our coefficient has the expected positive sign and is statistically significant, its
value, 0.06, is far from what a one-to-one relationship between the two tests would imply. Since
the range of post-test minus pre-test grades is about 80 and the range of final grades is 60, the
post-test minus pre-test variable ideally should have a coefficient of 0.75 (60 divided by 80). We did not
find any reference to this topic in the literature, but we suggest that as the coefficient approaches
an expected one-to-one relationship, it might be an indicator of course-design consistency.
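The grade-impact figures quoted above follow directly from the coefficients reported in Table 3; a quick arithmetic check (Python):

```python
# Reported OLS coefficients from Table 3.
a_gpa, a_learn, a_min, a_cov = 9.46, 0.06, 0.0005, -0.05

# A 0.10 increase in GPA raises the expected grade by about 0.95 points.
print(round(a_gpa * 0.10, 2))          # → 0.95
# A 10-point post-test gain over the pre-test adds 0.6 points.
print(round(a_learn * (50 - 40), 2))   # → 0.6
# Improving consistency from a 150% to a 100% coefficient of variation
# adds 2.5 points (the coefficient is negative, so a 50-point reduction
# in the coefficient of variation raises the grade).
print(round(a_cov * (100 - 150), 2))   # → 2.5
# Ideal "one-to-one" benchmark: 60-point grade range divided by the
# 80-point test range gives 0.75, versus the estimated 0.06.
print(round(60 / 80, 2))               # → 0.75
```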
TABLE 3: Regression Results

              α1 (GPA)    α2 (post-pre)   α3 (minutes)   α4 (COV)
Coefficient   9.46        0.06            0.0005         -0.05
p-value       7.71E-18    0.03            0.17           0.01

R2 = 0.46. F-value = 44.07. White test: no heteroscedasticity at the 5% significance level.
Residuals show an approximately normal distribution, indicating that the unexplained variation is
due to randomness.
Conclusions and Recommendations
As indicated at the beginning of this paper, the relationship between effort, as measured by study
time, and grades is not clear. We did not rely on self-reported study time and instead used the
recorded amount of minutes students spent logged into the courses as a proxy for effort. Our
results support the evidence that effort is not a significant influence on grades. However, the
coefficient of variation of time, our measure of student consistency, is a significant influence
on grades: as the coefficient of variation falls by 10 percentage points, the overall grade
increases by an average of 0.5 points. This result matters for administrators, advisors, and students. Students
should learn that what matters for good grades is not the total amount of time logged in, but
how frequent and stable their daily minutes are. Student advisors should emphasize that
“studying hard” (total minutes) is not as important as “studying smart” (consistency).
Administrators who focus on the amount of minutes spent online as a measure of institutional
success should also consider the coefficient of variation of those minutes. Lower coefficients of
variation should be a higher priority than high amounts of minutes. Finally, the difference
between a pre-test and a post-test could be used as a measure of course consistency with goals
and objectives. A well-designed course should contain assignments and tests that evaluate
learning of objectives. If the questions on the pre-test and post-test are consistent with the
questions asked on quizzes, mid-term and final exams, and these in turn are also consistent with
the course objectives, the regression coefficient of a post-test minus pre-test should reflect a one-
to-one relationship with the final grades. The extent to which the resulting coefficient
approximates an expected one-to-one relationship could be used as a value of teaching
effectiveness. Since the same microeconomics course has recently been redesigned precisely to
make all assignments and tests more consistent with new goals and objectives, the regression
shown in this paper will be run again to test this hypothesis. We also hope to incorporate more
variables indicating how individual students use their time during the one-month course while
taking tests and completing different assignments.
References

Allen, G., Lerner, W., & Hinrichsen, J. J. (1972). Study behaviors and their relationships to test anxiety and academic performance. Psychological Reports, 30, 407-410.

Britton, B. K., & Tesser, A. (1991). Effects of Time-Management Practices on College Grades. Journal of Educational Psychology, 83(3), 405-410.

Cheo, R. (2003). Making the Grade through Class Effort Alone. Economic Papers, 22, 55-65.

Crede, M., Roch, S., & Kieszczynka, U. (2010). Class Attendance in College: A Meta-Analytic Review of the Relationship of Class Attendance With Grades and Student Characteristics. Review of Educational Research, 80(2), 272-295.

Damianov, D., Kupczynski, L., Calafiore, P., Damianova, E., Soydemir, G., & Gonzalez, E. (2009). Time Spent Online and Student Performance in Online Business Courses: A Multinomial Logit Analysis. Journal of Economics and Finance Education, 8(2), 11-19.

Durden, G. C., & Ellis, L. V. (1995). The Effects of Attendance on Student Learning in Principles of Economics. American Economic Review, 85(2), 343-346.

Greenwald, A., & Gillmore, G. M. (1997). No pain, no gain? The importance of measuring course workload in student ratings of instruction. Journal of Educational Psychology, 89(4), 743-751.

Harrell, A., Kapsak, K., Cisin, I. H., & Wirtz, P. W. (1986). The Validity of Self-Reported Drug Use Data: The Accuracy of Responses on Confidential Self-Administered Answer Sheets. Social Research Group, The George Washington University. National Institute on Drug Abuse.

Jacobs, J. A. (1998, December). Measuring time at work: Are self-reports accurate? Monthly Labor Review, 43-52.

Johnson, S., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative Analysis of Learner Satisfaction and Learning Outcomes in Online and Face-to-Face Learning Environments. Journal of Interactive Learning Research, 11(1), 29-49.

Kennedy, P. (2008). A Guide to Econometrics (6th ed.). Malden, MA: Blackwell Publishing.

Leamer, E. E. (2007). A Flat World, a Level Playing Field, a Small World After All, or None of the Above? A Review of Thomas L. Friedman's The World is Flat. Journal of Economic Literature, 45, 83-126.

Michaels, J., & Miethe, T. (1989). Academic Effort and College Grades. Social Forces, 68(1), 309-319.

Michinov, N., Brunot, S., Le Bohec, O., Juhel, J., & Delaval, M. (2011). Procrastination, participation, and performance in online learning environments. Computers and Education, 56, 243-252.

Olivares, O. (2000). Radical Pedagogy. Retrieved December 10, 2010, from ICAAP: http://radicalpedagogy.icaap.org/content/issue4_1/06_olivares.html

Park, K. H., & Kerr, P. M. (1990). Determinants of Academic Performance: A Multinomial Logit Approach. Journal of Economic Education, 21(2), 101-111.

Ramos, C., & Yudko, E. (2008). "Hits" (not "Discussion Posts") predict student success in online courses: A double cross-validation study. Computers and Education, 50, 1174-1182.

Rau, W., & Durand, A. (2000). The academic ethic and college grades: Does hard work help students to "make the grade"? Sociology of Education, 73, 19-38.

Romer, D. (1993). Do Students Go to Class? Should They? Journal of Economic Perspectives, 7, 167-174.

Schuman, H., Walsh, E., Olson, C., & Etheridge, B. (1985). Effort and Reward: The Assumption that College Grades Are Affected by Quantity of Study. Social Forces, 63(4), 945-966.

Spector, L., & Mazzeo, M. (1980). Probit Analysis and Economic Education. Journal of Economic Education, 11, 37-44.

Stinebrickner, R., & Stinebrickner, T. R. (2004). Time-Use and College Outcomes. Journal of Econometrics, 121, 243-269.

Taraban, R., Williams, M., & Rynearson, K. (1999). Measuring study time distributions: Implications for designing computer-based courses. Behavior Research Methods, Instruments, & Computers, 31(2), 263-269.

Wagstaff, R., & Mahmoudi, H. (1976). Relation of study behaviors and employment to academic performance. Psychological Reports, 38, 380-382.

Waschull, S. B. (2005). Predicting Success in Online Psychology Courses: Self-Discipline and Motivation. Teaching of Psychology, 32(3), 190-208.

Wetzel, J. E. (1977). Measuring Student Scholastic Effort: An Economic Theory of Learning Approach. The Journal of Economic Education, 34-41.

Williams, R., & Clark, L. (2004). College Students' Ratings of Student Effort, Student Ability and Teacher Input as Correlates of Student Performance on Multiple-Choice Exams. Educational Research, 46, 229-239.

Wong, W.-K. (2008). How Much Time-Inconsistency Is There and Does It Matter? Evidence on Self-Awareness, Size, and Effects. Journal of Economic Behavior and Organization, 68(3-4), 645-656.
... The relationship between study time and student performance has been studied for years with mixed results (Patron & Lopez, 2011). Early studies showed a positive but moderate link, while later studies had varied findings (Patron & Lopez, 2011). ...
... The relationship between study time and student performance has been studied for years with mixed results (Patron & Lopez, 2011). Early studies showed a positive but moderate link, while later studies had varied findings (Patron & Lopez, 2011). For example, Schuman et al. (1985) and more recently Christensen and colleagues (2019) found no significant connection, but subsequent studies challenged this, citing sample selectivity and specification errors. ...
... Aspek kelima adalah effort, yaitu upaya yang dilakukan untuk meningkatkan hasil akademik. Manfaat effort dalam pembelajaran yaitu dapat meningkatkan motivasi belajar (Patron & Lopez, 2011) dan meningkatkan selfregulated learning pada mahasiswa (Baars et al., 2020). Aspek ini berkontribusi terhadap self-regulated learning sebesar 0,33. ...
Article
Full-text available
ABSTRAK Pandemi COVID-19 yang melanda berbagai negara di dunia menuntut manusia untuk beradaptasi dalam aktivitas sehari-hari, termasuk aktivitas belajar mengajar yang dilakukan secara daring. Berbagai kesulitan terjadi saat belajar daring. Untuk mengatasi kesulitan yang terjadi, diperlukan strategi pembelajaran yang efektif yang dikenal dengan self-regulated learning. Sudah banyak alat ukur mengenai self-regulated learning dalam pembelajaran offline. Penelitian ini bertujuan untuk mengembangkan alat ukur self-regulated learning yang cocok digunakan dalam konteks pembelajaran daring. Alat ukur didasarkan pada teori self-regulated learning yang dikembangkan oleh Toering et al., (2012). Alat ukur bersifat multidimensi dengan enam aspek, diantaranya adalah planning, self-monitoring, evaluation, reflection, effort, dan self-efficacy. Subjek penelitian adalah mahasiswa aktif laki-laki dan perempuan sebanyak 307 siswa. Analisis data menggunakan analisis faktor konfirmatori dengan bantuan software Jamovi. Hasil analisis menunjukkan bahwa model yang digunakan fit dan alat ukurnya dapat digunakan untuk mengukur self-regulated learning dalam konteks pembelajaran daring dengan nilai validitas berkisar antara 0,54-0,79 dan nilai koefisien cronbach's alpha berstrata sebesar 0,93 (RMSEA = 0,041; SRMR = 0,04; TLI = 0,955; CFI = 0,964; χ2/ df = 1,535). Kata kunci: Self-regulated learning, pembelajaran daring, konstruksi skala ABSTRACT The COVID-19 pandemic that has hit various countries in the world requires humans to adapt in their daily activities, including teaching and learning activities that are carried out online. Various difficulties occur when learning online. To overcome the difficulties that occur, an effective learning strategy is needed, known as self-regulated learning. There have been many measuring tools regarding self-regulated learning in offline learning. 
This study aims to develop a self-regulated learning measuring tool that is suitable for use in the context of online learning. The measurement scale is multidimensional with six aspects, including planning, self-monitoring, evaluation, reflection, effort, and self-efficacy. The measuring instrument is based on the theory of self-regulated learning developed by Toering et al., (2012). The research subjects were active male and female students as many as 307 students. Data analysis using Confirmatory Factor Analysis with the help of Jamovi software. The results of the analysis show that the model used is fit and the measuring instrument can be used to measure self-regulated learning in the context of online learning with validity values ranging from 0.54-0.79 and stratified chronbach's alpha coefficient is 0.93 (RMSEA = 0.041; SRMR = 0.04; TLI = 0.955; CFI = 0.964; χ2/df = 1.535).
... Support for these variables exists in the literature. Many studies have examined the importance of GPA as a predictor of persistence [4,6,[23][24][25][26][27][28]. Maslov Kruzicevic et al. [28] and Mendoza-Sanchez et al. [29] identified both GPA and study duration (Term Count) as significant factors in medical and biochemistry Ph.D. programs, respectively. ...
Article
Full-text available
This work examines the indicators of master’s students’ persistence from 2014 to 2021 at a Hispanic-Serving Institution (HSI) in the southern United States. Demographic and academic variables were used in a logistic regression model to predict students’ successful completion across sixteen master’s programs. In this two-fold study, first, we examined the impact of COVID-19 on students enrolled in twelve face-to-face (F2F) programs and evaluated their performance against a pre-pandemic baseline period. Second, we compared student performance in four accelerated online programs against a pre-accelerated baseline. Most demographic variables were insignificant, while all academic variables were significant across program types. However, GPA became an insignificant variable when the F2F programs were forced to move online during the COVID-19 pandemic. During this period, GPA also increased for students who had discontinued their studies. The accelerated online programs recorded a significant decrease in terms enrolled (Term Count) compared to the pre-accelerated baseline. These results add to the limited literature on student success at the master’s level in HSIs, thus filling a vital knowledge gap. This study provides two case studies focusing on how the pandemic and the accelerated online learning model impacted academic persistence at the master’s level at an HSI.
... From the perspective of self-regulation theory, self-discipline orchestrates the deployment of individual efforts towards a target task, encompassing persistent task execution and self-encouragement, ensuring the successful accomplishment of set objectives (Patron & Lopez, 2011). Schwinger et al. (2009) also contended that students calibrate their exertions through self-discipline, culminating in enhanced performance. ...
Article
Full-text available
In the present research, cognitive ability was segmented into five distinct faculties: memory, representational, information processing, logical reasoning, and thinking conversion. The strength of a person’s cognitive ability affects the effectiveness of his or her knowledge acquisition and the efficiency of learning. The influence of these five faculties on academic performance was meticulously examined. Utilizing structural equation modeling (SEM), the mediating role of perseverance in the relationship between cognitive ability and academic performance was explored. Additionally, the intervening influence of self-discipline on this mediation was probed, positioning self-discipline as a moderating variable. Findings revealed a pronounced positive correlation between cognitive capability and academic performance. Perseverance was discerned to act as a partial mediator in this relationship. Notably, the moderation of self-discipline was salient in the initial phase of this mediation, suggesting that as self-discipline levels augment, the influence of cognitive ability on perseverance intensifies. Consequently, in scenarios of elevated self-discipline, the mediation becomes more robust, validating the moderated mediation model.
... Their results do not show a correlation between the collected measures and the learning outcomes. Patron and Lopez [134] study shows that consistency (i.e., lower variation of the time spent on tasks) is a statistically significant explanatory variable of higher grades, while the time spent online is not. Hill [74] also used the time spent as a measure of effort, but he was able to positively relate the time spent studying during weekends with the learning outcomes. ...
Thesis
Data exploitation is a growing phenomenon present in many scenarios, including education, where it holds the promise of advancing our understanding of, and improving, the learning process. From this promise emerged the learning analytics research field, which ideally takes advantage of technology and educational theories to explore educational data. On the technological side, we are interested in recommendation systems because they can help students, teachers, and other stakeholders find the best learning resources, and thus achieve their learning goals and develop competencies in less time. On the theoretical side, we are interested in the social influence technique known as foot-in-the-door, which consists of making consecutive requests of increasing cost. This technique seems particularly relevant to the educational context: not only can it be formalized in a recommendation system, but it is also compatible with the zone of proximal development, which states that the challenge presented by learning resources needs to increase gradually in order to keep students motivated. However, we do not know to what extent explicitly applying this technique via recommendations can influence students. In this thesis, we therefore investigate such influences, assuming that students' effort is a good indicator of the cost of a request, since not only does every learning activity require a certain level of effort, but effort is also often cited as a key factor in students' success. To this end, we modeled the measurement and prediction of students' effort with machine learning models trained on data that can be collected in real-life settings, and exploited these models to explicitly apply the foot-in-the-door technique in a recommendation system. Our results show that, compared to recommendation models that do not formalize this technique, the proposed models have a positive influence on students' effort, compliance, performance, and engagement.
This suggests that the approach has the potential to improve the learning process, as students will exhibit the aforementioned behaviors.
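The thesis's core idea of predicting student effort from observable activity data can be sketched, under invented assumptions, as a simple regression on simulated activity logs. The features (`time_on_task`, `attempts`), their weights, and all values below are hypothetical illustrations, not the thesis's actual model or data:

```python
import numpy as np

# Hypothetical sketch: predict an effort score from observable activity
# features. Feature names, weights, and data are invented for illustration.
rng = np.random.default_rng(1)
time_on_task = rng.uniform(5, 60, 100)   # minutes spent per activity
attempts = rng.integers(1, 5, 100)       # submission attempts (1..4)

# Simulated "true" effort generated from the two features plus noise
effort = 0.8 * time_on_task + 3.0 * attempts + rng.normal(0, 2, 100)

# Fit a linear model by least squares; coef = [intercept, w_time, w_attempts]
X = np.column_stack([np.ones(100), time_on_task, attempts])
coef = np.linalg.lstsq(X, effort, rcond=None)[0]
print(coef)
```

With enough observations the fitted weights approximate the generating ones, which is the sense in which effort can be "measured and predicted" from activity data alone.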
... Small Target Pieces for Reflections: Learners may engage with the course material at multiple times and over multiple sessions, creating a stop-and-start effect that should be addressed in the content design. Course creators should be aware of this pattern and design accordingly, so that students can interact with smaller pieces or sequences of material and continue to make effective use of the asynchronous learning experience [34][35]. By packaging the content in small sequential lessons, students can engage and reflect in small steps, proceeding to the next step only after the previous step has been reflected upon. ...
Article
Full-text available
Academia is adapting to the new age of online teaching and learning as the online mode has rapidly spread during the past several months. This is a significant paradigm shift and can also be viewed as an opportunity to think and experiment outside the box and question the traditional age-old ways of functioning in the onsite mode. As expected, there is continuous need and strong demand for innovative flexible online activities that promote learning. Currently, most academic institutions are in the process of either evaluating or implementing the new online options and tools for their programs. This paper compares the assessment data for online and onsite offerings of electrical circuit STEM classes during 2007-2021 and suggests that online mode is as good as or better than onsite mode. Future research areas are recommended that may contribute to understanding these trends and results in other areas of engineering and computer science.
Chapter
This chapter provides an overview of the major motivation theories, and examines how the ARCS-V model applies motivational theory to instructional design. The chapter also provides a cognitive framework to support aspects of the ARCS-V model. Special attention is given to course design and instructional practices aimed at reducing online student attrition and improving academic performance. Additionally, the chapter reviews research examining the utility of the ARCS-V model, as well as recommendations for implementation within the online modality.
Article
The literature reveals that there has been significant research regarding online learning during the pandemic outbreak. This research project was conducted to investigate the experiences of university students by highlighting a few key barriers and perceptions related to e-learning in Pakistan. A sample of 253 students (age range: 17–31 years) was recruited through a convenience sampling technique. According to participants, the crucial online barriers included social interaction, family support, time, and motivation for studies. The majority of students were satisfied with the internet access and the quality of online learning. In addition, the web platforms used most intensively by Pakistani students included Zoom, WhatsApp, Microsoft Teams, and Meet. Furthermore, the findings revealed that students who sleep after midnight (M = 51.46, SD = 9.61) and wake up around noon (M = 52, SD = 9.09) experience more online education barriers than students who sleep (M = 40.33, SD = 16.75) and rise early (M = 49.36, SD = 11.86). Hence, the findings of the current study might be helpful in addressing barriers and challenges faced by universities and the Higher Education Commission during such pandemic outbreaks.
Article
Full-text available
This study investigated the effects of an online Process Oriented Guided Inquiry Learning approach on the academic self-concept of Junior High School students in Earth Science. Specifically, it aimed to assess students' academic self-concept after exposure to an online process-oriented guided inquiry learning approach, in relation to academic confidence and academic effort. A one-shot pre-experimental design was employed. An adapted academic self-concept survey questionnaire was used, with a Cronbach's alpha reliability of 0.89. Quantitative data from the questionnaire were analyzed using descriptive statistics. The findings revealed that students exhibited a positive academic self-concept after exposure to the online process-oriented guided inquiry learning approach. Further results show a moderately positive outcome for academic confidence and a positive outcome for academic effort. Quantitative analysis indicated that students positively perceived themselves as learners in Earth Science.
Article
Full-text available
Using a qualitative response analysis, the authors estimated and compared key determinants of course grades with those derived with OLS.
Article
Full-text available
Spector and Mazzeo assert that ordinary least squares regression analysis has been misused by many economics education researchers. They explain that OLS is inappropriate for the analysis of discrete dependent variables, and they suggest the use of probit analysis instead. They then show how probit analysis can be employed in an economics education research project and compare the results of this approach with the results obtained by using OLS. (Those who want to know more about probit analysis might be interested in a new book, Carlos Daganzo's Multinomial Probit: The Theory and Its Application to Demand Forecasting, published in 1979 by Academic Press, Inc. Also see the bibliography provided by Spector and Mazzeo.)
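Spector and Mazzeo's point about discrete dependent variables can be sketched with a tiny simulation. Fitting OLS to a binary pass/fail outcome (a "linear probability model") can yield fitted "probabilities" outside [0, 1]; probit avoids this by passing the linear index through the standard normal CDF. The data below are simulated for illustration only, not drawn from any of the studies cited here:

```python
import numpy as np

# Simulated binary outcome: pass/fail as a noisy function of study hours.
# All numbers are invented for illustration.
rng = np.random.default_rng(0)
study_hours = rng.uniform(0, 20, 200)
pass_course = (study_hours + rng.normal(0, 4, 200) > 10).astype(float)

# OLS fit of a 0/1 outcome on hours (the "linear probability model")
X = np.column_stack([np.ones(200), study_hours])
beta = np.linalg.lstsq(X, pass_course, rcond=None)[0]
fitted = X @ beta

# The fitted line escapes the [0, 1] range at extreme hours, which is
# the misspecification probit analysis is designed to avoid.
print(fitted.min(), fitted.max())
```

A probit model would instead maximize the likelihood of Φ(β₀ + β₁·hours), keeping every fitted probability strictly between 0 and 1.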
Article
Full-text available
The relation between college grades and self-reported amount of effort was examined in four major and several minor investigations of undergraduates in a large state university. Grades were operationalized mainly by using grade point average (GPA), though in one investigation grades in a particular course were the focus. Effort was measured in several different ways, ranging from student estimates of typical study over the term to reports of study on specific days. Despite evidence that these self-reports provide meaningful estimates of actual studying, there is at best only a very small relation between amount of studying and grades, as compared to the considerably stronger and more monotonic relations between grades and both aptitude measures and self-reported class attendance. The plausible assumption that college grades reflect student effort to an important extent does not receive much support from these investigations. This raises a larger question about the extent to which rewards are linked to effort in other areas of life—a connection often assumed but seldom investigated.
Article
Full-text available
Most scholars and teachers accept, as part of the natural order of the universe, a strong relationship between study efforts and students' academic performance. Yet the only systematic investigation of this relationship, a 12-year project at the University of Michigan, repeatedly found little to no correlation between hours studied and grades. The study presented here replicated parts of this project but did so with a different conceptualization of effort. This new perspective views effort as the outcome of an "academic ethic," a student worldview that emphasizes diligent, daily, and sober study. This article shows how this concept can be operationalized and measured and provides evidence for its existence among some students at Illinois State University. It then shows a significant and meaningful relationship between methodical, disciplined study and academic performance. It closes by considering how the selectivity of colleges and universities would affect the findings and suggests some new directions for research.
Article
Questionnaire data regarding study time, work time, and study habits, as well as transcript data, were gathered for 190 lower division science and engineering majors. A multiple regression analysis showed that the best predictor of grade point average was a question asking the extent to which students completed assigned work prior to examinations. Variables which added significantly to the prediction equation were ACT entrance exam scores, the estimated number of hours students spent studying per week, and high school GPAs. The data also indicated that students who hold part-time jobs spend about the same amount of time studying as those who do not.
Article
Wetzel asserts that “… using levels of achievement as the dependent variable may miss the major benefits of a particular teaching method, namely less effort and time spent studying economics and additional leisure time …” He then discusses the McKenzie-Staaf model, which is derived from the work-leisure model of wage theory, and asks if it is possible to construct an “effort variable.” Wetzel's paper reports the results of an attempt to develop an “effort variable” and examine its relationship to certain student characteristics. In fact, three different “effort variables” were constructed and tested. Both the results and the techniques employed should be of interest to all economic education researchers.
Article
This study examines the possibility that specification errors contribute to the Schuman et al. (1985) findings of a weak relationship between study time and college grades. Our analyses investigate both main and interactive effects, measures of quantity and quality of study, and various context-specific models of college grades. In contrast to previous findings, we observe significant main and interactive effects of academic effort on college grades.
Article
A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, ρ = .44) and GPA (k = 33, N = 9,243, ρ = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed.
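Pooled correlations like the ρ = .44 reported above are typically sample-size-weighted averages of per-study correlations (the Hunter-Schmidt approach, before corrections for artifacts such as unreliability). A minimal sketch with invented per-study values, not the meta-analysis's actual data:

```python
# Hypothetical per-study results: k = 3 studies with observed
# attendance-grade correlations and sample sizes (invented numbers).
correlations = [0.50, 0.38, 0.45]
sample_sizes = [300, 150, 550]

# Sample-size-weighted mean correlation across the k studies
total_n = sum(sample_sizes)
r_bar = sum(n * r for n, r in zip(sample_sizes, correlations)) / total_n
print(total_n, r_bar)
```

Larger studies pull the pooled estimate toward their own correlation, which is why meta-analyses report both k (number of studies) and total N alongside ρ.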