A Systematic Review of the Effects of
Automatic Scoring and Automatic Feedback in
Educational Settings
Marcelo Guerra Hahn1, Senior Member, IEEE, Silvia Margarita Baldiris Navarro1, Luis De La
Fuente Valentín1, and Daniel Burgos1, Senior Member, IEEE
1Universidad Internacional de La Rioja (UNIR), Logroño, La Rioja 26006 Spain
Corresponding author: Marcelo Guerra Hahn (e-mail: marcelo.guerrahahn261@comunidadunir.net).
This work was partially funded by the PLeNTaS project, "Proyectos I+D+i 2019", PID2019-111430RB-I00, and the PL-NETO project, Proyecto PROPIO UNIR, project ID B0036.
ABSTRACT Automatic scoring and feedback tools have become critical components in the proliferation of online
learning. These tools range from multiple-choice questions to grading essays using machine learning
(ML). Learning environments such as massive open online courses (MOOCs) would not be possible without
them. The use of these mechanisms has opened many exciting areas of study, from the design of questions
to the ML grading tools' precision and accuracy. This paper analyzes the findings of 125 studies published in
journals and proceedings between 2016 and 2020 on the usages of automatic scoring and feedback as a
learning tool. This analysis gives an overview of the trends, challenges, and open questions in this research
area. The results indicate that automatic scoring and feedback have many advantages. The most important
benefits include the ability to scale the number of students without adding a proportional number of instructors,
improving the student experience by reducing the time between submission and the delivery of grades and feedback, and
removing bias in scoring.
On the other hand, these technologies have some drawbacks. The main problem is creating a disincentive to
develop innovative answers that do not match the expected one or have not been considered when preparing
the problem. Another drawback is potentially training the student to answer the question instead of learning
the concepts. In addition, given the existence of a correct answer, that answer can be leaked to the internet,
making it easier for students to avoid solving the problem. Overall, each of these drawbacks presents an
opportunity to improve these technologies so that they provide a better learning experience
to students.
INDEX TERMS Education, Feedback, Inclusive Learning, Literature Reviews, Machine Learning
I. INTRODUCTION
Automatic scoring and feedback consist of calculating
grades on students' work and providing personalized feedback
using technological tools that do not require human
participation [1]. These tools play a significant role in online
learning. Many new learning environments, such as massive open online courses (MOOCs), would not be
possible without them [2]. Automatic scoring itself is not new: multiple-choice tests have been available
for a long time, and large-scale multiple-choice testing has been possible since the introduction of the
Scantron, a tool that continues to be used today [3].
With the rapid growth of technology and internet access, the
use of automatic scoring and feedback has accelerated [4, 5].
The benefits of these tools for institutions and instructors are apparent: they gain the ability to
increase the number of students per instructor and to provide fast and consistent results [6]. However, all these
advantages come with potential drawbacks.
Multiple areas of study can use automatic scoring. Multiple-choice questions were an early
implementation of immediate automatic feedback; generating the questions in multiple-choice
format simplifies any issues related to interpreting the answers [7]. Unit-test evaluation of
programming assignments was also an early entrant in the area, given its simplicity of use
and implementation [8, 9, 10]. With the broader availability of
machine learning in education [11], the field expanded to
include the grading of short essays [12] and long essays [13].
It also started to address other more complex problems, such
as grading code correctness [14].
With the expansion of automatic scoring and feedback as a
tool, several issues have emerged. From a technical
perspective, tools based on machine learning need data,
potentially in substantial amounts, to be accurate [15]. From
an educational perspective, authors, including Bancroft [16],
affirm that automatically scored tests, for example, multiple-
choice tests, "do not test anything more than just straight recall
of facts." Given these potential issues, studies on automatic
feedback, problem setup, and their effects on students' education
and experience are still being produced [17].
This review attempts to expand the literature on the effects
of using automatic scoring and feedback as a learning tool,
emphasizing its impact on the students' learning experience.
To achieve that goal, it focuses on these research questions:
-RQ1 What types of automatic scoring and automatic
feedback are in use?
-RQ2 What are the positive effects on education goals of
using automatic feedback and automatic scoring?
-RQ3 What are the positive effects on the student
experience of using automatic feedback and automatic
scoring?
-RQ4 What are the adverse effects on educational goals and
student experience using automatic feedback and automatic
scoring?
-RQ5 What type of evaluation was carried out to measure
the effect of automatic scoring and feedback on student
academic performance?
-RQ6 What improvements can be made to mitigate the
adverse effects in RQ4?
Below, we list the main tools and application fields to present
the current state of automatic scoring and feedback (RQ1). We present
most tools currently in use, even though some are on their
way to obsolescence, to show the field's evolution. Concerning
the positive effects (RQ2), we look at the experiences and
opportunities these technologies have enabled, focusing on the
student (RQ3). We then shift focus to the problems these tools
may introduce from a learning perspective and a student
experience perspective (RQ4). We also analyze the
evaluations conducted to examine the effect of these tools on the
academic performance of students (RQ5). Having studied the
adverse effects, we look at possible improvements that
mitigate the findings of RQ4 (RQ6).
The remaining sections of this paper continue with a
discussion of previous studies, the process used to carry out
the review, the most relevant findings, an interpretation of
those findings, and potential avenues for future research.
II. RELATED WORK
Multiple papers have investigated the state of automatic
scoring and feedback. These papers tend to focus on ways to
use the tools and the quality of their output. Few of them
concentrate on the educational effects of using the
technologies. Table I provides an overview of some of the
studies and their findings. Among their conclusions is that
automatic feedback is being used more in structured questions
that require well-defined answers. These questions include
multiple-choice [18], fill-in-the-blank [19], or those with a
solution presented in a structured language, i.e., a
mathematical formula [20] or a program [21, 22]. The main
positive effects of automatic feedback include the students
using the feedback for improvement [23], increased student
engagement [24, 25], and reduction of instructor bias [26].
Despite its benefits, automatic scoring is only one of the
potential uses for machine learning and can be expanded to
encompass others, including performance prediction, material
curation, and course adaptability [27]. However, automatic
feedback has some drawbacks, including the complexity of
measuring the feedback quality compared to a manual grader
[28].
This work reviews a broader set of papers than those in Table I,
focusing on examining the effect of feedback on the student experience
and identifying opportunities to improve automatic feedback mechanisms
from a student experience perspective.
TABLE I
RELATED WORKS
Study | Purpose | Findings
[27] | This paper conducted a review of 146 publications from 2007 to 2018. It looked at the overall state of the usage of artificial intelligence in education. | Multiple levels of education use types of artificial intelligence. Automatic scoring is among these types; however, there are others, like student performance prediction, material curation, and course adaptation. The paper predicts that artificial intelligence will continue to expand its place in education and that multiple improvements will happen in the next 20 years.
[28] | This work reviewed 93 works produced between 2015 and 2019. It analyzed the automatic generation of questions as a way to improve their quality. | The review found that this is an area under development. Multiple different approaches are being used. However, the complexity of standardizing ways to measure effectiveness makes it hard to show improvements produced by new methods.
[29] | This review analyzed 44 papers published between 2003 and 2016. It investigates datasets, machine learning techniques, commonly used features, and quality results. | The study found that some of these tools are already in use, that the technology is under continuous evolution, and that the publication of datasets has opened the field for further research and improvement.
The previous reviews do not determine the effect of
automatic scoring and feedback on students' performance. In
this paper, we address this issue.
III. METHODS
This study follows the steps described in [30] and involves
three stages: 1) planning, 2) conducting, and 3) reporting.
IV. PLANNING
This part of the work included creating a strategy to select
the most relevant results to address the research
questions. We performed an iterative search using Web of
Science as the search platform, given the quality of its
database and its iterative filtering capabilities [31].
We searched for the terms "'automatic scoring' AND
education," "'automatic grading' AND education," "'automatic
feedback' AND education," and "'machine learning' AND
education." The search included only works published from
2016 to mid-2020. The first query returned 15 papers, the
second 19, the third 27, and the fourth 1,233.
We narrowed the last query by refining its results to those
mentioning "scoring" or "feedback." The combined result of all
searches and refinements included 256 papers, which were then
manually filtered by title using the inclusion criteria.
V. INCLUSION CRITERIA
The works selected for review fell under the following
parameters:
1-The study focuses on the use of technologies in education.
2-The study helps answer at least one of the research
questions.
3-The study was published after peer review.
After the criteria were applied, the list of works was reduced
to 125.
VI. CONDUCTING THE REVIEW AND REPORTING
After completing the planning, a content analysis was
carried out. As mentioned by [32], content analysis allows one
to find the research trends by analyzing the articles' content
and grouping them according to the shared characteristics. We
created a collection form to code the information relevant to
answering the research questions. The columns included in
that form are shown in Table II. Each paper was thoroughly
reviewed by three of the authors using a shared Excel file. For
categorical values, a simple majority (two votes) was needed
for a value to be selected. For the open-ended questions, two
votes/appearances were likewise required to keep an answer.
This analysis was used to group the papers, and the
groupings were used to answer the research questions.
TABLE II
CODING CRITERIA
Code | Values
Related to automatic scoring and/or feedback | Yes/No
Publication Year | 2016-2020
Education | Yes/No
Research Question | 1/2/3/4/5/6/Multiple
Automated Feature | Scoring/Feedback/Both
One-time experience | Yes/No
Experiment | Yes/No
Field of Education | Subject Independent/Science, Math, Computer/Arts and Humanities/Medicine
Technology Area | Structured Answer/Short Free Form/Long Free Form/Others
Technology Type | Static/Dynamic
Positive Educational Effects | Open-Ended
Positive Student Experience | Open-Ended
Negative Educational Effects | Open-Ended
Negative Student Experience | Open-Ended
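To make the two-vote consolidation rule concrete, the following is a minimal sketch in Python; the field names and example values are illustrative, taken loosely from Table II, and the actual review was carried out in a shared Excel file rather than in code.

from collections import Counter

# Illustrative codes from three reviewers for one paper.
reviews = [
    {"automated_feature": "Both", "technology_type": "Static",
     "positive_effects": {"bias reduction", "grading consistency"}},
    {"automated_feature": "Scoring", "technology_type": "Static",
     "positive_effects": {"bias reduction"}},
    {"automated_feature": "Both", "technology_type": "Static",
     "positive_effects": {"grading consistency", "scalability"}},
]

def consolidate(reviews, categorical_fields, open_ended_fields, votes_needed=2):
    """Keep a categorical value or an open-ended answer only if at least
    `votes_needed` of the three reviewers recorded it (simple majority)."""
    result = {}
    for field in categorical_fields:
        counts = Counter(r[field] for r in reviews)
        value, votes = counts.most_common(1)[0]
        result[field] = value if votes >= votes_needed else None  # unresolved
    for field in open_ended_fields:
        counts = Counter(answer for r in reviews for answer in r[field])
        result[field] = [a for a, v in counts.items() if v >= votes_needed]
    return result

print(consolidate(reviews,
                  categorical_fields=["automated_feature", "technology_type"],
                  open_ended_fields=["positive_effects"]))
# {'automated_feature': 'Both', 'technology_type': 'Static',
#  'positive_effects': ['bias reduction', 'grading consistency']}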
VII. FINDINGS
This section shows the current trends in automatic feedback
and scoring. First, we present general findings, followed by
the analysis of each research question.
A. GENERAL FINDINGS
From the selected papers, 12% are from 2016, 16% from
2017, 27% from 2018, 28% from 2019, and 17% from the
early part of 2020. This trend suggests a likely increase in
interest in the subject. Figure 1 shows this behavior.
FIGURE 1. Number of Studies over Time.
B. EDUCATIONAL LEVEL
According to the International Standard Classification of
Education [33], most of the work reviewed was at the
bachelor's or equivalent level (92% of papers), with small
numbers at the early education level (2% of papers), such as Saha's
study of automatic grading of explanatory answers in middle
school [34] and secondary education (6% of papers) such as
Anohah's analysis of high school computing science courses
[35].
This distribution reflects the fact that the technologies used for
automatic grading and feedback require information technology
infrastructure and a moderate command of language and
mathematics. Despite this, some of the works dealt with
teaching topics in early education, including handwriting [36]
and basic math [20]. See Figure 2.
FIGURE 2. Educational Level.
C. FIELD OF EDUCATION
Most of the papers addressed works that have effects across
disciplines, e.g., using student data to predict performance
[26]. Classifying the discipline-specific ones using the
International Standard Classification of Education (ISCED)
[33], most of the work fell into the categories of the sciences
[37], including areas like geology [38], mathematics [20],
computer science [24], and computer networking [39] (47% of
papers). This is followed by cross-disciplinary applications
(32% of papers) and by arts and humanities (21% of
papers). The set is completed by applications in medicine,
where virtual reality and other technologies are being used to
support immersive practical experiences such as virtual
artificial intelligent assistants [40], surgical skill assessments
[41, 42], physiotherapy training [43], and clinical skills [44].
A couple of papers target areas such as music where
immediate feedback is also used to improve the student
experience in general musical learning [45] and instrument
learning [46].
FIGURE 3. Field of Education.
The following sections describe the findings for each research question.
D. RQ1 WHAT TYPES OF AUTOMATIC SCORING AND
AUTOMATIC FEEDBACK ARE IN USE?
The types of automatic scoring and feedback can be characterized along
two dimensions: the input form and the mechanism used for auto-grading
and generating feedback.
From the input perspective, the primary forms are
structured. These include mathematics [47], code [22], and
controlled environments such as simulations [40]. Other
inputs include short free form (e.g., a short sentence [12]) and
long free form (e.g., an essay [13]). The main mechanisms are
static or dynamic. Static mechanisms include comparing the answer to
a key or set of keys [16] or running a fixed set of unit test cases
[48]. Dynamic mechanisms include comparing the answer to other
students' answers or using machine learning to learn expected
grades from past answers.
The tools used to produce grades and feedback dynamically
include ontologies [7], neural networks to identify possible
solutions [42, 47], machine learning used to identify learning
paths [49, 50], and machine learning used to determine a
student's risk of failing [51].
Table III summarizes the tools and techniques in each area
identified in the works analyzed.
TABLE III
MECHANISMS
Input | Static | Dynamic
Structured | 5% | 46%
Short Free Form | - | 2%
Long Free Form | - | 14%
Others (i.e., virtual reality, handwriting, etc.) | - | 33%
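As an illustration of the static mechanisms above (key comparison and a fixed set of unit tests), the following is a minimal Python sketch; the function names and the example exercise are hypothetical and are not taken from any of the surveyed tools.

def grade_multiple_choice(answers, key):
    """Static key comparison: the score is the fraction of answers
    that match the expected key (one key per question)."""
    correct = sum(1 for given, expected in zip(answers, key) if given == expected)
    return correct / len(key)

def grade_with_unit_tests(student_function, test_cases):
    """Static unit-test grading: run a fixed set of (input, expected output)
    cases against the submitted function and score the pass rate."""
    passed = 0
    for args, expected in test_cases:
        try:
            if student_function(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that test case
    return passed / len(test_cases)

# Hypothetical usage: a student submission for "return the maximum of a list".
def submission(values):
    return sorted(values)[-1]

print(grade_multiple_choice(["A", "C", "B"], ["A", "B", "B"]))      # 0.666...
print(grade_with_unit_tests(submission, [(([1, 5, 3],), 5),
                                          (([-2, -7],), -2)]))       # 1.0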
E. RQ2 WHAT ARE THE POSITIVE EFFECTS ON
EDUCATION GOALS OF USING AUTOMATIC
FEEDBACK AND AUTOMATIC SCORING?
Most of the analyzed papers concluded that there were
educational advantages to automatic feedback and scoring.
The most commonly mentioned advantages included bias
reduction [52] and grading consistency [53], the ability for the
instructor to shift focus from grading to other activities
[49], and allowing more students to participate in the learning
experiences [54]. Table IV summarizes the benefits.
TABLE IV
BENEFITS
Benefit | Papers
Reduction in bias and increased consistency of the grading | 68%
Instructors can focus on other activities instead of grading, e.g., working with students struggling with the material | 21%
Ability to provide education to a larger number of students at the same time (e.g., MOOCs) | 12%
F. RQ3 WHAT ARE THE POSITIVE EFFECTS ON THE
STUDENT EXPERIENCE OF USING AUTOMATIC
FEEDBACK AND AUTOMATIC SCORING?
Very few of the works focused on the student experience,
instead emphasizing the learning experience. They primarily
highlighted students' positive reception of features such as
immediate grading combined with the option of multiple
submissions [55]. Together with this, several works focused
on the ability to create custom learning paths based on student
performance [26, 54, 55, 56, 57, 58, 59], and the ability to
flag students at high risk of not succeeding [51, 60, 61, 62, 63,
64]. Table V summarizes the benefits found in the review.
TABLE V
BENEFITS
Benefit | Papers
Ability to iterate over problems instead of only having one opportunity to get the correct answer | 51%
Ability to create personalized paths and learn at the student's own pace | 6%
Ability to be warned of at-risk status | 6%
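The at-risk flagging mentioned above is typically implemented as a supervised classifier over engagement and performance data. The sketch below, assuming scikit-learn and entirely synthetic features (submission counts and average auto-graded scores), is only illustrative of the general approach; the surveyed papers use a variety of models and feature sets.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: [number of submissions, average auto-graded score]
# with a label of 1 when the student eventually failed the course.
X_train = np.array([[25, 0.85], [30, 0.90], [12, 0.55], [5, 0.30],
                    [22, 0.75], [3, 0.20], [18, 0.65], [8, 0.40]])
y_train = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# Flag current students whose predicted probability of failing is high.
current_students = np.array([[27, 0.88], [6, 0.35]])
risk = model.predict_proba(current_students)[:, 1]
for features, p in zip(current_students, risk):
    status = "at risk" if p >= 0.5 else "on track"
    print(f"submissions={features[0]:.0f}, avg score={features[1]:.2f} -> {status} (p={p:.2f})")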
G. RQ4 WHAT ARE THE ADVERSE EFFECTS ON
EDUCATIONAL GOALS AND STUDENT EXPERIENCE
OF USING AUTOMATIC FEEDBACK AND AUTOMATIC
SCORING?
Most of the papers focused on the usability of the tools and
techniques they work with and evaluated ways to replace
current practices with equivalent or better methods, reporting very
few adverse side effects. When detected, these side effects
included students losing the social aspect of learning, which is
replaced by human-computer interaction [65], and students
learning to work the system (e.g., creating multiple
accounts in a MOOC to gain access to the answers [66]).
Very few studies assessed the adverse effects on student
experience, although some included sections on potential
issues that need to be further studied. These issues included
students learning to solve the assessment questions without
understanding the underlying concepts [67]. This
phenomenon is not exclusive to automatic feedback, as studies
have shown that the number of past tests studied is a strong
predictor of performance on future tests [68]. Other potential adverse effects
included loss of human interaction and lack of interpersonal
skills while solving problems [65], and lack of the personalized
feedback that could help outlier students [69], especially
struggling students [70].
H. RQ5 WHAT TYPE OF EVALUATION WAS USED TO
MEASURE THE EFFECT OF AUTOMATIC SCORING AND
FEEDBACK ON STUDENT ACADEMIC PERFORMANCE?
As shown in Table VI, some of the papers contained
experiments related to the quality of the tool or algorithm;
examples include [71, 72, 73, 74, 75] (36%). Others included
a one-group experiment (34%) or an individual case study
(29%).
TABLE VI
EVALUATION TYPE
Evaluation type | Papers
Algorithm Performance | 36%
One-Group Experiment | 34%
Individual Case Study | 29%
These results show the need for further experiments of this
kind to better understand the actual effects of
automatic feedback and scoring on students' academic
performance.
I. RQ6 WHAT IMPROVEMENTS ARE BEING MADE TO
MITIGATE THE ADVERSE EFFECTS IN RQ4?
Some of the effects cannot be easily mitigated, e.g., bringing
back student-professor interaction [29], which is impossible to
maintain in large MOOCs as automatic systems become the
norm [67]. Chatbot technologies could
eventually help this area provide a more personalized
experience [76, 77]. Learning to answer the specific question can be
mitigated by generating dynamic problems unique to each student [78,
79]. From a student perspective, work can be done to
improve the design and delivery of automatic feedback,
including finding ways to personalize it [80]. Finally, it is essential to
mention the need for more long-term studies to understand the
impact of feedback on the students' experience.
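As an illustration of the dynamic problem generation idea, the following is a minimal Python sketch in which each student receives a uniquely parameterized version of the same exercise; the template and seeding scheme are hypothetical and are not taken from [78] or [79].

import random

def generate_problem(student_id):
    """Produce a parameterized problem (and its answer key) that is unique
    per student but reproducible, by seeding the generator with the ID."""
    rng = random.Random(student_id)  # same student always gets the same values
    speed = rng.randint(40, 90)      # km/h
    hours = rng.randint(2, 6)
    question = (f"A train travels at {speed} km/h for {hours} hours. "
                f"How many kilometers does it cover?")
    return question, speed * hours

def grade(student_id, submitted_answer):
    """Static key comparison against the key regenerated from the seed."""
    _, expected = generate_problem(student_id)
    return submitted_answer == expected

question, _ = generate_problem("student-42")
print(question)
print(grade("student-42", 0))  # False unless the submitted answer matches the key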
VIII. DISCUSSION
The results of this review suggest that automatic scoring and
feedback is an area undergoing constant improvement as
technology evolves and data becomes available. The use of
automatic scoring and feedback has led to the expansion of
MOOCs and online courses and to the ability to support large
numbers of students in the same program [54]. Automated scoring and
feedback are present not only in MOOCs [81] and other
systems where scale requires it but also in smaller settings
as a tool to support learning, including introductory
programming classes [82, 83, 84, 85]. This new capability
has led universities to open their programs to more applicants
and allowed more students to go through those programs.
The most common uses of automatic scoring and feedback
are in three areas: 1) programming problems, through
mechanisms such as assisting the student with the coding [86,
87], analyzing coding patterns [88, 89], automatic grading [90,
91, 92, 93, 94], and customized feedback [86, 87, 88]; 2) short
essays [95]; and 3) extended essays [96]. Programming
problems are the easiest to use as input for this technology, as
they appear in a structured language that computers can
understand [97]. Short essays can also be handled, as their
complexity tends to be low [98], while long essays pose the
greatest challenge for this technology [99]. With
this in mind, automatic scoring and feedback are being
employed broadly in computer science, mathematics [100,
101, 102, 103, 47], and similarly analytical courses,
together with language-learning areas [104, 105, 106, 107, 36, 108].
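To illustrate why short answers are considered tractable, the following is a minimal Python sketch of a word-overlap similarity score of the kind used for short-answer grading (e.g., in [12] and [98]); the tokenization, the example, and the acceptance threshold are illustrative assumptions, not the actual methods of those papers.

def word_overlap_score(student_answer, reference_answer):
    """Score a short answer as the fraction of reference words that also
    appear in the student's answer (a simple bag-of-words overlap)."""
    student_words = set(student_answer.lower().split())
    reference_words = set(reference_answer.lower().split())
    if not reference_words:
        return 0.0
    return len(student_words & reference_words) / len(reference_words)

reference = "photosynthesis converts light energy into chemical energy"
answer = "plants use photosynthesis to turn light into chemical energy"
score = word_overlap_score(answer, reference)
print(f"overlap score: {score:.2f}")      # fraction of reference words covered
print("accepted" if score >= 0.6 else "needs manual review")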
With the expansion of these technologies, we expect to see
the benefits presented in the works surveyed in this paper
materialized beyond the furthering of access, particularly an
improvement in grading consistency [109] and the freeing of
instructor time to dedicate to other activities [110]. An analysis
of this time re-allocation would provide more
information on its ultimate effects. We also expect
to see an increase in student engagement driven by the ability
to solve problems in a more interactive way given the
feedback [111, 112, 113]. Looking at the effects of this
engagement on learning is another area of future study.
Similarly, a more personalized student experience would be
expected to lead to better matching between the learning
experience and the student's learning style [114, 115, 116, 117, 118].
On the other hand, the potential adverse effects of using
automatic feedback cannot be ignored. Students can solve
problems without learning the underlying concepts. Using the
feedback as a trial-and-error exercise or finding answers on the
internet can have a very detrimental effect on
learning [119]. Similarly, subjects where solutions are not easily
defined or grouped will have a more challenging time
adopting these technologies, as they are less developed in
those areas.
IX. LIMITATIONS
This study addresses only some relevant questions when
analyzing the extensive use of automatic scoring and
feedback. There are fundamental questions about the quality
of content and student privacy [120], for example, which are
not considered in the study. The study also does not reveal
funding and other possible biases affecting the underlying
studies and does not focus on the specific tools used to
implement the technologies. The research also does not focus on
features and requirements for automatic scoring and feedback
tools or on possible solutions to many of the challenges identified.
X. CONCLUSIONS AND FUTURE WORK
This work presents a systematic review of the literature with
an analysis of 125 studies focused on using automatic scoring
and feedback. Results indicate that these technologies play an
essential role in expanding access to education and are still
evolving. The use of these technologies is also growing in both
large and small classes in multiple areas. The number of
application areas, tools in use, and published works in this
area is increasing. This trend is most likely related to a
combination of technological advances and the need to serve
more students.
This review shows the current state of automatic scoring
and feedback and identifies areas of potential improvement
and further analysis. Among these areas, the study of the
effects on educational quality and student experience is highly
relevant.
REFERENCES
[1] F. Z. Y. Dong, "Automatic Features for Essay Scoring - An Empirical Study," in 2016 Conference on Empirical Methods in Natural Language Processing, 2016.
[2] A. Chauhan, "Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation," pp. 7-17, 2014.
[3] Y. P. L. N. L. S. O. R. Cao, "Paper or Online?: A Comparison of Exam Grading Techniques," in 2019 ACM Conference on Innovation and Technology in Computer Science Education, 2019.
[4] B. & M. G. Csapó, "Online Diagnostic Assessment in Support of Personalized Teaching and Learning: The eDia System," Frontiers in Psychology, vol. 10, p. 1522, 2019.
[5] A. P. a. d. M. R. F. L. a. R. V. a. A. M. a. F. F. a. G. D. Cavalcanti, "An analysis of the use of good feedback practices in online learning courses," in 2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT), IEEE, 2019, pp. 153-157.
[6] N. W. G. W. M. Alruwais, "Advantages and Challenges of Using e-Assessment," International Journal of Information and Education Technology, vol. 8, no. 1, pp. 34-37, 2018.
[7] M. T. M. Cubric, "Design and Evaluation of an Ontology-Based Tool for Generating Multiple-Choice Questions," Interactive Technology and Smart Education, vol. 17, no. 2, pp. 109-131, 2020.
[8] S. H. P.-Q. M. A. Edwards, "Web-CAT: Automatically Grading Programming Assignments," in 13th Annual Conference on Innovation and Technology in Computer Science Education, 2008.
[9] A. a. M. T. Wakatani, "Web applications for learning CUDA programming," in 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), 2017, pp. 1-5.
[10] J. a. S. S. a. P.-Q. M. a. N. A. a. L. B. DeNero, "Beyond autograding: Advances in student feedback platforms," in Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education, 2017, pp. 651-652.
[11] Y. a. V.-U.-L. P. Takahashi, "Toward Understanding the Impact of Artificial Intelligence on Education: An Empirical Research in Japan," in ECIAIR 2019 European Conference on the Impact of Artificial Intelligence and Robotics, Academic Conferences and Publishing Limited, 2019, p. 433.
[12] F. S. Pribadi, T. B. Adji, A. E. Permanasari and A. Mulwinda, "Automatic Short Answer Scoring Using Word Overlapping Methods," in AIP Conference Proceedings, 2017.
[13] K. N. H. T. Taghipour, "A Neural Approach to Automated Essay Scoring," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016.
[14] J. Huang, C. Piech, A. Nguyen and L. Guibas, "Syntactic and Functional Variability of a Million Code Submissions in a Machine Learning MOOC," in AIED 2013 Workshops Proceedings, 2013.
[15] M. Zhang, "Contrasting Automated and Human Scoring of Essays," R&D Connections, vol. 21, no. 2, pp. 1-11, 2013.
[16] P. Bancroft and K. Woodfors, "Using Multiple Choice Questions Effectively in Information Technology Education," ASCILITE, vol. 4, pp. 948-955, 2004.
[17] O. Bulut, M. Cutumisu, A. M. Aquilina and D. Singh, "Effects of Digital Score Reporting and Feedback on Students' Learning in Higher Education," Frontiers in Education, vol. 4, p. 65, 2019.
[18] S. a. K. V. S. a. S. S. N. a. B. K. Narayanan, "Question bank calibration using unsupervised learning of assessment performance metrics," in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), IEEE, 2017, pp. 19-25.
[19] A. Sahu and P. K. Bhowmick, "Feature Engineering and Ensemble-Based Approach for Improving Automatic Short-Answer Grading Performance," IEEE Transactions on Learning Technologies, vol. 13, no. 1, pp. 77-90, 2019.
[20] J. a. R. S. a. D. G. Kadupitiya, "Automated assessment of multi-step answers for mathematical word problems," in 2016 Sixteenth International Conference on Advances in ICT for Emerging Regions (ICTer), 2016, pp. 66-71.
[21] S. Parihar, Z. Dadachanji, P. K. Singh, D. R. K. A. and A. Bhattacharya, "Automatic grading and feedback using program repair for introductory programming courses," in 2017 ACM Conference on Innovation and Technology in Computer Science Education, 2017.
[22] S. a. X. X. a. B. B. a. X. T. a. T. N. Li, "Measuring code behavioral similarity for programming and software engineering education," in 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C), 2016, pp. 501-510.
[23] M. Marchisio, A. Barana, M. Fioravera, S. Rabellino and A. Conte, "A Model of Formative Automatic Assessment and Interactive Feedback for STEM," in 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), 2018.
[24] S. Marwan, G. Gao, S. Fisk, T. W. Price and T. Barnes, "Adaptive immediate feedback can improve novice programming engagement and intention to persist in computer science," in 2020 ACM Conference on International Computing Education Research, 2020.
[25] H. a. W. T.-H. a. W. I. a. o. Guei, "2048-like games for teaching reinforcement learning," ICGA Journal, pp. 1-24, 2020.
[26] Q. Chen, X. Wang and Q. Zhao, "Appearance Discrimination in Grading? - Evidence from Migrant Schools in China," Economics Letters, pp. 116-119, 2019.
[27] O. M. V. I. B. M. & G. F. Zawacki-Richter, "Systematic review of research on artificial intelligence applications in higher education - where are the educators?," International Journal of Educational Technology in Higher Education, pp. 1-27, 2019.
[28] G. L. J. P. B. S. U. & A.-E. S. Kurdi, "A Systematic Review of Automatic Question Generation for Educational Purposes," International Journal of Artificial Intelligence in Education, pp. 121-204, 2020.
[29] L. B. Galhardi and J. D. Brancher, "Machine Learning Approach for Automatic Short Answer Grading: A Systematic Review," in Ibero-American Conference on Artificial Intelligence, 2018.
[30] B. Kitchenham, P. Brereton, D. Budgen and M. Khalil, "Lessons from applying the systematic review within the software engineering domain," Journal of Systems and Software, pp. 571-583, 2007.
[31] A. O.-M. E. & L.-C. E. D. Martín-Martín, "Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison," Scientometrics, pp. 2175-2188, 2018.
[32] Y.-C. Hsu and Y.-H. Ching, "Peer feedback to facilitate project-based learning in an online environment," International Review of Research in Open and Distributed Learning, pp. 258-276, 2013.
[33] U. ISCED, "International standard classification of education," 2011.
[34] S. K. a. R. C. D. Saha, "Development of a practical system for computerized evaluation of descriptive answers of middle school level students," Interactive Learning Environments, pp. 1-14, 2019.
[35] E. Anohah, "Pedagogy and Design of Online Learning Environment in Computer Science Education for High Schools," International Journal of Online Pedagogy and Course Design (IJOPCD), vol. 6, no. 3, pp. 39-51, 2016.
[36] M. a. R. M. G. A. a. E. J. a. R. M. J. Candeias, "Using Android Tablets to develop handwriting skills: A case study," Heliyon, vol. 5, no. 12, p. e02970, 2019.
[37] X. a. Y. Y. a. P. J. W. a. H. K. C. a. S. L. Zhai, "Applying machine learning in science assessment: a systematic review," Studies in Science Education, vol. 56, no. 1, pp. 111-151, 2020.
[38] S. M. a. B. M. R. Sit, "Creation and assessment of an active e-learning introductory geology course," Journal of Science Education and Technology, vol. 26, no. 6, pp. 629-645, 2017.
[39] V. a. E. I. M. a. A. M. Muniasamy, "Student's Performance Assessment and Learning Skill towards Wireless Network Simulation Tool - Cisco Packet Tracer," iJET, vol. 14, no. 7, pp. 196-208, 2019.
[40] N. Mirchi, V. Bissonnette, R. Yilmaz, N. Ledwos, A. Winkler-Schwartz and R. Del Maestro, "The Virtual Operative Assistant: An explainable artificial intelligence tool for simulation-based training in surgery and medicine," Journal of Surgical Education, 2019.
[41] M. a. M. T. a. K. S. a. G. T. P. a. G. M. Levin, "Automated methods of technical skill assessment in surgery: a systematic review," Journal of Surgical Education, vol. 76, no. 6, pp. 1629-1639, 2019.
[42] H. I. a. F. G. a. W. J. a. I. L. a. M. P.-A. Fawaz, "Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks," International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 9, pp. 1611-1617, 2019.
[43] M. a. S. J. a. K. E. a. J. S. M. Jovanovic, "Automated error detection in physiotherapy training," Studies in Health Technology and Informatics, vol. 248, pp. 164-171, 2018.
[44] Y. a. O. T. a. N. R. a. T. A. Sugamiya, "Construction of Automatic Scoring System to Support Objective Evaluation of Clinical Skills in Medical Education," in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, pp. 4177-4181.
[45] S. a. W. G. a. N. I. a. O. A. a. M. O. a. P. A. a. W. A. a. R. R. Giraldo, "Automatic assessment of tone quality in violin music performance," Frontiers in Psychology, vol. 10, p. 334, 2019.
[46] J. a. K. R. a. H. M.-F. Kosakaya, "A Cooperative Multi-agent-based Musical Scoring System for Tsugaru and Nambu Shamisen," 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), pp. 1061-1067, 2018.
[47] N. a. S. M. Wiggins, "Free response evaluation via neural network for an IMathAS system," in 2019 IEEE International Symposium on Measurement and Control in Robotics (ISMCR), IEEE, 2019, pp. D1-1.
[48] K. a. S. R. a. S. Z. Wang, "Search, align, and repair: data-driven feedback generation for introductory programming exercises," in Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2018, pp. 481-495.
[49] T. a. W. Y. Saito, "Learning path recommendation system for programming education based on neural networks," International Journal of Distance Education Technologies (IJDET), vol. 18, no. 1, pp. 36-64, 2020.
[50] C. a. Y. K. Srisa-An, "Applying Machine Learning and AI on Self Automated Personalized Online Learning," in FSDM, 2019, pp. 137-145.
[51] J. L. a. K. S. A. Harvey, "A practical model for educators to predict student performance in K-12 education using machine learning," in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), IEEE, 2019, pp. 3004-3011.
[52] K. DiCerbo, "Assessment for Learning with Diverse Learners in a Digital World," Educational Measurement: Issues and Practice, vol. 39, no. 3, pp. 90-93, 2020.
[53] Q. a. W. X. a. Z. Q. Chen, "Appearance Discrimination in Grading? - Evidence from Migrant Schools in China," Economics Letters, vol. 181, pp. 116-119, 2019.
[54] L. a. S. E. a. P. M. a. S. R. a. V. E. A. Singelmann, "Design and development of a machine learning tool for an innovation-based learning MOOC," in 2019 IEEE Learning With MOOCS (LWMOOCS), IEEE, 2019, pp. 105-109.
[55] E. a. G. J. a. F. H. Faulconer, "If at first you do not succeed: student behavior when provided feedforward with multiple trials for online summative assessments," Teaching in Higher Education, vol. 26, no. 4, pp. 586-601, 2021.
[56] M. A. a. B. A. H. a. A. K. Alsuwaiket, "Refining Student Marks based on Enrolled Modules Assessment Methods using Data Mining Techniques," arXiv preprint arXiv:2009.06381, 2020.
[57] K. a. F. R. a. K. G. a. K. J. a. B. M. J. Niemeijer, "Constructing and predicting school advice for academic achievement: a comparison of item response theory and machine learning techniques," in Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, 2020, pp. 462-471.
[58] S. a. B. M. a. B. W. Cunningham-Nelson, "Visualizing student opinion through text analysis," IEEE Transactions on Education, vol. 62, no. 4, pp. 305-311, 2019.
[59] S. A. Raza, "Predicting Collaborative Performance at Assessment Level using Machine Learning," in 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS), IEEE, 2019, pp. 1-6.
[60] M. a. M. S. R. a. A. G. A. a. d. C. E. L. M. a. Q. D. M. A. Consuelo Saiz Manzanares, "Detection of at-risk students with Learning Analytics Techniques," European Journal of Investigation in Health, Psychology and Education, vol. 8, no. 3, pp. 129-142, 2018.
[61] A. M. a. C. Z. Radwan, "Improving performance prediction on education data with noise and class imbalance," Intelligent Automation & Soft Computing, pp. 1-8, 2017.
[62] M. a. T. A. F. a. B. L. a. S. P. M. Ciolacu, "Education 4.0 - Artificial Intelligence assisted higher education: early recognition system with machine learning to support students' success," in 2018 IEEE 24th International Symposium for Design and Technology in Electronic Packaging (SIITME), 2018, pp. 23-30.
[63] L. F. a. M. H. A. a. C. J. A. a. L. M. C. a. A. L. P. Robles, "Learning process analysis using machine learning techniques," The International Journal of Engineering Education, vol. 34, no. 3, pp. 981-989, 2018.
[64] F. a. C. Y. Chen, "Utilizing student time series behaviour in learning management systems for early prediction of course performance," Journal of Learning Analytics, vol. 7, no. 2, pp. 1-17, 2020.
[65] D. P. I. & F. S. Gamage, "MOOCs Lack Interactivity and Collaborativeness: Evaluating MOOC Platforms," iJEP, 2020.
[66] G. R.-V. J. A. C. Z. M.-M. P. J. & P. D. E. Alexandron, "Copying@Scale: Using harvesting accounts for collecting correct answers in a MOOC," Computers & Education, vol. 108, pp. 96-114, 2017.
[67] P. O.-A. A. E. E. M.-M. A. V.-S. S. L. & D. Y. Topali, "Exploring the problems experienced by learners in a MOOC implementing active learning pedagogies," in European MOOCs Stakeholders Summit, 2019.
[68] O. O. T. D. A. & S. N. Adesope, "Rethinking the use of tests: A meta-analysis of practice testing," Review of Educational Research, vol. 87, no. 3, pp. 659-701, 2017.
[69] E. D. M. D. P. J. S. C. K. S. G. A. & D. S. K. Jensen, "Toward automated feedback on teacher discourse to enhance teacher learning," in 2020 CHI Conference on Human Factors in Computing Systems, 2020.
[70] T. Walls and C. Zwicky, "Transforming Curriculum, Exploring Identity, and Cultivating Culturally Responsive Educators," in Cultural Competence in Higher Education, Emerald Publishing Limited, 2020.
[71] X. a. Z. W. a. M. D. a. L. N. Wu, "UTCPredictor: An uncertainty-aware novel teaching cases predictor," Computer Applications in Engineering Education, vol. 27, no. 6, pp. 1518-1530, 2019.
[72] A. a. B. D. a. C. R. Bey, "Human Scoring Versus Automatic Scoring of Computer Programs: Does Algo+ Score as well as Instructors? An Experimental Study," in 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT), 2018, pp. 355-357.
[73] J. a. A. R. M. a. G.-T. J.-A. a. V. A. a. M. D. J. M. Riera Guasp, "Students' perception of auto-scored online exams in blended assessment: feedback for improvement," Educacion XX1, vol. 21, no. 2, pp. 79-103, 2018.
[74] J. Rico-Juan, A. Gallego and J. Calvo-Zaragoza, "Automatic detection of inconsistencies between numerical scores and textual feedback in peer-assessment processes with machine learning," Computers & Education, vol. 140, 2019.
[75] J. a. O. M. E. a. N. J. a. T. M. C. a. M. D. a. K. J. a. S. R. J. Ursenbach, "Scoring algorithms for a computer-based cognitive screening tool: An illustrative example of overfitting machine learning approaches and the impact on estimates of classification accuracy," Psychological Assessment, vol. 31, no. 11, p. 1377, 2019.
[76] J. Pereira and M. A. Barcin, "A chatbot assistant for writing good quality technical reports," in Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality, 2019.
[77] V. a. S. G.-A. a. G. C. a. S. F. Fernoaga, "Intelligent education assistant powered by Chatbots," in The International Scientific Conference eLearning and Software for Education, "Carol I" National Defence University, 2018, pp. 376-383.
[78] M. Serra, A. Bikfalvi, J. Soler and J. Poch, "A Generic Tool for Generating and Assessing Problems Automatically using Spreadsheets," International Journal of Emerging Technologies in Learning, vol. 13, no. 1, 2018.
[79] S. A. A. a. S. R. D. C. a. L. T. F. R. a. R. E. d. N. a. D. L. V. C. a. D. S. R. M. Freitas, "Smart quizzes in the engineering education," in 2016 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 66-73.
[80] D. Azcona, I. Hsiao and A. Smeaton, "Detecting students-at-risk in computer programming classes with learning analytics from students' digital footprints," User Modeling and User-Adapted Interaction, vol. 23, no. 4, pp. 759-788, 2019.
[81] M. a. H. M. a. S.-E. P. A. a. L. E. J. M. a. d. J.-B. G. Santamaria Lancho, "Using Semantic Technologies for Formative Assessment and Scoring in Large Courses and MOOCs," Journal of Interactive Media in Education, vol. 2018, no. 1, 2018.
[82] S. D. Z. S. P. K. D. R. K. A. & B. A. Parihar, "Automatic grading and feedback using program repair for introductory programming courses," in 2017 ACM Conference on Innovation and Technology in Computer Science Education, 2017.
[83] J. a. H. P. Krugel, "Computational thinking as springboard for learning object-oriented programming in an interactive MOOC," in 2017 IEEE Global Engineering Education Conference (EDUCON), 2017, pp. 1709-1712.
[84] S. Marginson, "Do rankings drive better performance?," International Higher Education, vol. 89, pp. 6-8, 2017.
[85] B. a. H. Y. a. C. R. a. H. C. a. L. S. a. Z. G. Jiang, "Progressive Teaching Improvement For Small Scale Learning: A Case Study in China," Future Internet, vol. 12, no. 8, p. 137, 2020.
[86] A. a. P. N. a. T. Y. Rubinstein, "In-Depth Feedback on Programming Assignments Using Pattern Recognition and Real-Time Hints," in Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, 2019, pp. 243-244.
[87] W. a. B. Y. a. C. H. Zheng, "A computer-assisted instructional method based on machine learning in software testing class," Computer Applications in Engineering Education, vol. 26, no. 5, pp. 1150-1158, 2018.
[88] C. a. N. Y. a. J. M. Bhanuprakash, "Performance Analysis of Students by Evaluating Their Examination Answer Scripts by Using Soft Computing Techniques," in Emerging Research in Electronics, Computer Science and Technology, Springer, 2019, pp. 549-568.
[89] A. a. B. T. a. S. A. Lobanov, "Automatic classification of error types in solutions to programming assignments at online learning platform," in International Conference on Artificial Intelligence in Education, Springer, 2019, pp. 174-178.
[90] S. a. F. J. a. B. A. a. J. J. Pape, "STAGE: a software tool for automatic grading of testing exercises: case study paper," in Proceedings of the 38th International Conference on Software Engineering Companion, 2016, pp. 491-500.
[91] S. a. N. K. a. H. O. Kiraly, "Some aspects of grading Java code submissions in MOOCs," Research in Learning Technology, vol. 25, 2017.
[92] A. a. J. P. a. D. P. Bey, "A comparison between two automatic assessment approaches for programming: An empirical study on MOOCs," Journal of Educational Technology & Society, vol. 21, no. 2, pp. 259-272, 2018.
[93] A. a. B. I. X. a. C. G. M. J. a. M. E. C. a. Q. C. a. R. G. M. E. a. R. M. O. a. U. T. A. Abello Gamazo, "A software tool for E-assessment of relational database skills," International Journal of Engineering Education, vol. 32, pp. 1289-1312, 2016.
[94] G. a. L. F. a. G. V. a. D. C. a. M. P. Paravati, "Point cloud-based automatic assessment of 3D computer animation courseworks," IEEE Transactions on Learning Technologies, vol. 10, no. 4, pp. 532-543, 2016.
[95] G. G. a. H. R. a. Z. S. Smith, "Computer science meets education: natural language processing for automatic grading of open-ended questions in ebooks," Journal of Educational Computing Research, vol. 58, no. 7, pp. 1227-1255, 2020.
[96] L. B. a. T. A. D. R. a. F. J. V. Moreira, "The use of texts mining in the support to corrections of discursive questions in a higher education institution," Texto Livre - Linguagem e Tecnologia, vol. 11, no. 3, pp. 213-227, 2018.
[97] Z. Huang, Q. Liu, C. Zhai, Y. Yin, E. Chen, W. Gao and G. Hu, "Exploring Multi-Objective Exercise Recommendations in Online Education Systems," in 28th ACM International Conference on Information and Knowledge Management, 2019.
[98] A. Shehab, M. Faroun and M. Rashad, "An automatic Arabic essay grading system based on text similarity algorithms," International Journal of Advanced Computer Science and Applications, vol. 9, no. 3, 2018.
[99] M. Liu, Y. Wang, W. Xu and L. Liu, "Automated Scoring of Chinese Engineering Students' English Essays," International Journal of Distance Education Technologies (IJDET), vol. 15, no. 1, pp. 52-68, 2017.
[100] A. Suresh, T. Sumner, I. Huang, J. Jacobs, B. Foland and W. Ward, "Using deep learning to automatically detect talk moves in teachers' mathematics lessons," 2018 IEEE International Conference on Big Data (Big Data), pp. 5445-5447, 2018.
[101] M. a. K. M. a. N. E. a. W. K. Platz, "Electronic proofs in mathematics education - A South African Teacher Professional Development (TPD) course informing the conceptualisation of an e-proof system authoring support workshop," in 2017 IST-Africa Week Conference (IST-Africa), 2017, pp. 1-9.
[102] T. a. M. R. a. F.-A. M. a. M.-C. N. Sancho-Vinuesa, "Exploring the effectiveness of continuous activity with automatic feedback in online calculus," Computer Applications in Engineering Education, vol. 26, no. 1, pp. 62-74, 2018.
[103] S. a. R. F. a. W. B. a. R.-G. J. a. R. K. Hoch, "Design and research potential of interactive textbooks: the case of fractions," ZDM, vol. 50, no. 5, pp. 839-848, 2018.
[104] X. a. H. J. Dong, "An exploration of impact factors influencing students' reading literacy in Singapore with machine learning approaches," International Journal of English Linguistics, vol. 9, no. 5, pp. 52-65, 2019.
[105] W. Qu, "Research on the Application of Automatic Scoring System in College English Writing [C]," in International Conference on Economics, 2016.
[106] N. a. K. J.-H. a. W. M. K. a. M. S. E. a. P. J. C. Kim, "Automatic scoring of semantic fluency," Frontiers in Psychology, vol. 10, p. 1020, 2019.
[107] Y. a. Y. X. a. Z. F. a. Z. L. a. Y. S. Huang, "Automatic Chinese reading comprehension grading by LSTM with knowledge adaptation," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2018, pp. 118-129.
[108] M. a. W. Y. a. X. W. a. L. L. Liu, "Automated scoring of Chinese engineering students' English essays," International Journal of Distance Education Technologies (IJDET), vol. 15, no. 1, pp. 52-68, 2017.
[109] E. Hegarty-Kelly and D. A. Mooney, "Analysis of an automatic grading system within first year Computer Science programming modules," in Computing Education Practice 2021, 2021.
[110] W. a. D. M. a. K. H. a. R. S. a. T.-M. S. Westera, "Automated essay scoring in applied games: Reducing the teacher bandwidth problem in online training," Computers & Education, vol. 123, pp. 212-224, 2018.
[111] R. a. R.-G. J. C. Cobos, "Improving learner engagement in MOOCs using a learning intervention system: A research study in engineering education," Computer Applications in Engineering Education.
[112] K. a. M. K. a. K. R. Grindrod, "Assessing Performance and Engagement on a Computer-Based Education Platform for Pharmacy Practice," Pharmacy, vol. 8, no. 1, p. 26, 2020.
[113] S. a. G. H. a. Y. B. Fu, "The affordances of AI-enabled automatic scoring applications on learners' continuous learning intention: An empirical study in China," British Journal of Educational Technology, vol. 51, no. 5, pp. 1674-1692, 2020.
[114] H.-C. a. L. I.-F. a. L. C.-T. a. S. Y.-S. Hung, "Applying educational data mining to explore students' learning patterns in the flipped learning approach for coding education," Symmetry, vol. 12, no. 2, p. 213, 2020.
[115] C. A. Bacos, "Machine learning and education in the human age: A review of emerging technologies," in Science and Information Conference, Springer, 2019, pp. 536-543.
[116] F. a. G. A. Duzhin, "Machine learning-based app for self-evaluation of teacher-specific instructional style and tools," Education Sciences, vol. 8, no. 1, p. 7, 2018.
[117] M. a. J. S. a. A. M. a. H. M. a. A. M. a. K. S. a. K. M. a. H. K. Farhan, "IoT-based students interaction framework using attention-scoring assessment in eLearning," Future Generation Computer Systems, vol. 79, pp. 909-919, 2018.
[118] A. a. K. M. a. C. Y. H. V. a. M. T. a. N. A. M. a. B. R. a. D. J. James, "Inferring the climate in classrooms from audio and video recordings: a machine learning approach," 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), pp. 983-988, 2018.
[119] T. a. M. D. E. a. S. J. O'Riordan, "Is critical thinking happening? Testing content analysis schemes applied to MOOC discussion forums," Computer Applications in Engineering Education, 2020.
[120] N. Norouzi and R. Hausen, "Quantitative Evaluation of Student Engagement in a Large-Scale Introduction to Programming Course using a Cloud-based Automatic Grading System," in 2018 IEEE Frontiers in Education Conference (FIE), 2018.
[121] L. Bayerlein, "Students' feedback preferences: How do students react to timely and automatically generated assessment feedback?," Assessment & Evaluation in Higher Education, vol. 8, no. 39, pp. 916-931, 2014.
[122] H. a. L. H. a. X. M. a. W. Y. a. Q. H. Wei, "Predicting student performance in interactive online question pools using mouse interaction features," Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, pp. 645-654, 2020.
[123] S. M. R. a. N. J. a. G. S. a. W. X. a. D. H. a. Z. W. a. Z. W. Abidi, "Demystifying help-seeking students interacting multimodal learning environment under machine learning regime," in Eleventh International Conference on Graphics and Image Processing (ICGIP 2019), International Society for Optics and Photonics, 2020, p. 113732V.
[124] A. a. P. N. a. T. Y. Rubinstein, "In-Depth Feedback on Programming Assignments Using Pattern Recognition and Real-Time Hints," in Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, 2019, pp. 243-244.
[125] B. a. L. C. a. W. B. Hodgkinson, "glGetFeedback - Towards automatic feedback and assessment for OpenGL 3D modelling assignments," in 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), 2016, pp. 1-6.
[126] O. a. B. Y. a. G. S. a. F. E. a. D. C. Mirmotahari, "A case-study of automated feedback assessment," in 2019 IEEE Global Engineering Education Conference (EDUCON), 2019, pp. 1190-1197.
[127] C. a. o. Niculescu, "Intelligent tutoring systems - trends on design, development and deployment," in Conference Proceedings of eLearning and Software for Education (eLSE), "Carol I" National Defence University Publishing House, 2016, pp. 280-285.
[128] O. a. K. M. a. B. M. Kassak, "Student behavior in a web-based educational system: Exit intent prediction," Engineering Applications of Artificial Intelligence, vol. 51, pp. 136-149, 2016.
MARCELO GUERRA HAHN is a PhD
candidate at the Universidad Internacional de la
Rioja. He got his bachelor's and master's in
Computer Science from Universidad de la
Republica in Uruguay. He is a guest lecturer with
the University of Washington and the Director of
Engineering for Sound Commerce. Marcelo is
studying technologies associated with automatic
assignment feedback and their effects on learning
achievements and experiences.
SILVIA BALDIRIS received a bachelor's degree
in systems and industrial engineering from the
Industrial University of Santander (UIS),
Colombia, a master's degree in industrial
informatics and automation, and a Ph.D. degree in
technology from the University of Girona. She is
currently an Associate Professor with Universidad
Internacional de La Rioja, Spain, and Fundación
Universitaria Tecnológico Comfenalco, Colombia.
Since her early twenties, she has been interested in research on how
technologies can facilitate all students' inclusion in the educational system.
She has coordinated and participated in international projects and initiatives
in Europe and North/South America, including serving on the editorial
boards of high-impact scientific journals.
LUIS DE LA FUENTE VALENTÍN is a full-time
associate professor at Universidad Internacional de
La Rioja, UNIR. He got his PhD at Universidad
Carlos III de Madrid, in 2011. He has authored
more than 40 papers and participated in several
national and European public-funded projects, one
of them as an investigator in charge. His current
research interest is in machine learning tools
applied to the educational field.
DANIEL BURGOS works as Full Professor of
Technologies for Education & Communication
and Vice-rector for International Research at
Universidad Internacional de La Rioja (UNIR). He
holds a UNESCO Chair in eLearning and the
ICDE Chair in Open Educational Resources. He
also works as Director of the Research Institute for
Innovation & Technology in Education (UNIR
iTED, http://ited.unir.net). He is or has been
involved in more than 60 European and worldwide R&D
projects. He is a Professor at An-Najah National University (Palestine),
an Adjunct Professor at Universidad Nacional de Colombia (UNAL,
Colombia), an Extraordinary Professor at North-West University (South
Africa), and a Visiting Professor at Coventry University (United Kingdom).
He works as a consultant for the United Nations (UNECE), European
Commission & Parliament, and Russian Academy of Science. He holds
degrees in Communication (Ph.D.), Computer Science (Dr. Ing), Education
(Ph.D.), Anthropology (Ph.D.), Business Administration (DBA), Theology
(Ph.D.), Management (Ph.D.), Open Science and STEM (Ph.D.), and
Artificial Intelligence (MIT, postgraduate).