Concepts for E-Assessments in STEM on the Example of
Engineering Mechanics
How to Assess Complex Engineering Problems Electronically
https://doi.org/10.3991/ijet.v15i12.13725
Markus Orthaber, Dominik Stütz, Thomas Antretter
Montanuniversitaet Leoben, Leoben, Austria
markus.orthaber@unileoben.ac.at
Martin Ebner
Graz University of Technology, Graz, Austria
Abstract—We discuss whether and how it is possible to develop meaningful e-assessments in Engineering Mechanics. The focus is on complex example problems resembling traditional paper-pencil exams. Moreover, the switch to e-assessments should be as transparent as possible for the students, i.e., it should not introduce additional difficulties, while still maintaining sufficiently high discrimination indices for all questions. Example problems have been designed in such a way that a great variety of input types can be handled, ranging from graphical to numerical and algebraic as well as string inputs. Thanks to the implementation of random variables, it is even possible to create an individual set of initial values for every participant. Additionally, when dealing with complex example problems, errors carried forward have to be taken into account. Different approaches to do so are detailed and discussed, e.g., predefined paths for sub-questions, usage of students' previous inputs, or decision trees. The main finding is that complex example problems in Engineering Mechanics can very well be used in e-assessments if the design of these questions is well structured into meaningful sub-questions and errors carried forward are accounted for.
Keywords—Engineering Mechanics, e-assessment, STEM, higher education,
complex problems
1 Overview
1.1 Main goals
Digitalization is becoming more and more important in all areas of education, especially in the form of e-assessments. At the Institute of Mechanics at the Montanuniversitaet Leoben, the learning management system Moodle [1] has been used for six years now for teaching as well as for testing the students' theoretical knowledge [2, 3].
However, the practical part of the exam, in which the students have to solve two complex example problems, still follows a classical paper-pencil approach. At the same time, students as well as teachers typically give excellent feedback on the already implemented e-assessments. Thus, it is planned to take automated testing one step further by changing the exam to a full e-assessment, completely substituting the paper-pencil part. Such an approach might also help to further foster the objectivity of exams, since automated testing eliminates the subjectivity that can occur when evaluating paper-pencil exams, especially when teachers personally know their students. Additionally, a substantial time-saving effect is expected once enough questions have been designed, because apart from rephrasing problematic questions or correcting small typos now and then, full e-assessments no longer have to be corrected manually. At the moment, such manual corrections take about 330 hours of correction work per year in the authors' case: approx. 1000 individual exams, each consisting of 2 paper-pencil example problems, have to be reviewed, and each example problem takes about 10 minutes of correction time. In order to check whether such a switch to full e-assessments is possible in Engineering Mechanics with a reasonable amount of effort, research has recently been conducted within a bachelor's thesis [4]. Its main goal was to show possibilities to test students' abilities to solve complex example problems in an automated way. Furthermore, the aim is to fully retain the level of complexity of current paper-pencil exams. Thus, it is necessary to evaluate errors carried forward in the e-assessments in close analogy to manually corrected paper-pencil exams. This, in turn, makes it necessary to consider various approaches as well as question types offered by Moodle and third-party plugins in order to lay out a roadmap towards full e-assessments in Engineering Mechanics.
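The time estimate above follows from the stated numbers by simple arithmetic:

$$1000\ \tfrac{\text{exams}}{\text{year}} \times 2\ \tfrac{\text{problems}}{\text{exam}} \times 10\ \tfrac{\text{min}}{\text{problem}} = 20\,000\ \tfrac{\text{min}}{\text{year}} \approx 333\ \tfrac{\text{h}}{\text{year}},$$

consistent with the roughly 330 hours of correction work mentioned above.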
The main questions to answer in this publication are: Is it possible to design com-
plex example problems in such a way that the analytical skills necessary to solve them
can be tested with sufficient validity within an e-assessment? Which limitations must
be considered when switching from a classical paper-pencil approach to automated
testing?
2 Current Research in Automated Testing in STEM
The use of digital media supporting learning processes is increasing. Still, the implementation of e-assessments, especially in STEM disciplines, poses many difficulties. Creating an environment that is as transparent as possible, in the sense that it does not impose additional barriers to the students' approaches to solving a specific example problem, becomes ever more difficult as example problems grow more complex.
Within a recently finished master's thesis, a PHP-based quiz application was created to support physics lectures at Graz University of Technology [5, 6]. This solution allows for maximum freedom and flexibility when creating questions. Moreover, it is possible to integrate the quizzes directly into Moodle using the LTI (Learning Tools Interoperability) protocol.
Gamage et al. [7] discuss the effectiveness of multimodal quizzes in teaching and assessing a theoretical engineering course for third-year undergraduate students, Hydraulics and Hydrology. According to that publication, such quizzes can efficiently replace conventional assessments and benefit time-poor academics. The quiz questions used are comparable to the ones discussed in a previous conference paper by the authors [3]; the present paper, however, discusses more complex problems. Furthermore, [8-10] highlight the benefits of online quizzes, such as improving student motivation, enhancing understanding and active learning, and deterring cheating if questions are not too easy. For formative assessments, immediate, high-quality, and detailed feedback is crucial, as it enhances and reinforces student learning [11, 12]. An elaborate discussion of students' satisfaction with different formative assessments is provided in [13]. A. Rasila et al. look at the interplay of automated assessments and conceptual understanding in mathematics. They argue that the pedagogical background of teaching and presenting mathematical knowledge is heavily based on concepts that emerged from using traditional books; information technology is therefore anticipated to be a game-changer in learning and teaching mathematics [14]. These findings are based on experiences with an automated assessment system called STACK [15-20], as discussed later in this work. Furthermore, A. Rasila developed a collaborative e-assessment material bank for STACK called Abacus [21] and provides experiences with automatic assessments in the field of mathematics [22-25], while [26] discusses the use of STACK in circuit theory.
An approach combining e-assessments and learning analytics can be found in [27],
where the aim is to develop a simulator that is able to provide automatic but neverthe-
less personalized feedback to students based on their level of activity. Moreover,
activities provided to the students are personalized based on their level of ability. The
publication aims to develop a framework for this so-called “assessment analytics”.
However, most of the cited research discusses e-assessments in formative scenarios, either in the form of homework assignments or as a general means of learning. Taking the existing literature one step further, the paper at hand details work on (automated) e-assessments in formative as well as (final) summative scenarios.
2.1 Types of assessments
The limited objectivity of oral exams and the time-consuming evaluation of paper-pencil exams are discussed in [28]. These disadvantages can be eliminated by using Learning Management Systems (LMS). When creating such e-assessments, it is necessary not to simply transfer the contents to digital media, but rather to consider methodical and organizational aspects in order to maintain validity, objectivity, and reliability [29]. Assessments are differentiated as shown in Table 1.
Table 1. Different types of assessments according to Ehlers [29], translated from German

- Summative Assessment: determine whether a level of proficiency is reached; examination at the end of a learning process.
- Formative Assessment: evaluate the process of learning; apply knowledge; test examples within the process of learning.
- Diagnostic Assessment / Self-Assessment: allows the student to self-diagnose the learning process; continuous examinations.
- Diagnostic Assessment / Qualifying Exam: initial determination of a level of competence; exam at the beginning of a learning process.
Summative assessments are used in order to determine and prove a level of profi-
ciency. Formative assessments, on the other hand, are expected to foster the process
of learning while being part of the learning process itself. Diagnostic assessments can
further be divided into voluntary self-assessment tests (SAT) as well as qualifying
examinations.
2.2 Types of questions
Test questions for e-assessments might either be convergent or divergent. Conver-
gent questions are characterized by a clearly defined set of solutions and can, there-
fore, be realized, e.g., with multiple-choice questions. Whereas the evaluation of such
tasks is easy, the biggest challenge for the teacher is to formulate the question and
generate reasonable distractors. Convergent questions are best suited for factual
knowledge, even though they might also be used in combination with graphical repre-
sentations in order to assess comprehension [29].
Divergent questions, on the other hand, are best for testing background knowledge,
general approaches as well as explanations, as they require the student to work con-
structively in order to be able to solve the task [29].
Especially for the solution of example problems in the field of Engineering Mechanics, it is necessary to combine theoretical knowledge and computational skills. Thus, divergent questions requiring the student to provide inputs into blank fields are the preferred choice for the authors. For accompanying convergent questions, enough plausible distractors are provided in order to ensure meaningful results.
2.3 Learning analytics
Benchmarking has been identified as one of the goals of learning analytics [30]. It
might help to spot weaknesses in a learning environment or in certain teaching activi-
ties themselves. A detailed literature review of learning analytics in higher education
can be found in [31]. The possibilities arising in conjunction with e-assessments are
virtually endless and will not be discussed in this work. For more details on the appli-
cation of learning analytics and its implications, the reader is referred to [32, 33].
3 Examples of Exam Questions and their Construction
3.1 Prerequisites
The aim of the work at hand is to create a full e-assessment that is as transparent as possible for the students while offering the teachers the possibility to assess a variety of input types. Transparency, in this context, means that the technology used to create and conduct the e-assessment should interfere as little as possible with a specific student's own approach to solving a given example problem. Furthermore, example problems should be constructed such that grading can be done in close analogy to a classical paper-pencil exam. One of the main aspects considered is the need to take the possibility of errors carried forward into account in automated grading. The first idea is to apply one-way navigation, where students are only allowed to navigate forward in the examination environment. After having made a final input for a specific sub-question, the correct answers for that sub-question are displayed to the student to be used for further calculations. This means, however, that students cannot change already registered inputs, so revision of their previous answers is not possible. Moreover, disclosing whether a sub-question has been answered correctly has further drawbacks, e.g., it implicitly hints at the general validity of the approach the student has chosen. More importantly, though, it may discourage the student, thus creating an undesirable psychological bias on the final outcome of the exam.
A more advanced method to account for errors carried forward is to calculate suc-
ceeding sub-questions based on the students’ previous inputs. Such an option is avail-
able using the STACK question type [15-18]. This is not only more elegant, but it is
also more transparent in the sense that it does not require interfering in the progress of
the exam since errors carried forward are taken into account quietly without having to
pass the information on to the student. In combination with decision trees, which we
are planning to elaborate on in a future publication, there are almost no limits to grad-
ing with that second approach.
Additionally, an important aspect is that not only numerical but also algebraic input
is accepted as this is very common in Engineering Mechanics. This, of course, re-
quires the underlying algorithm to treat algebraically equivalent inputs as equal.
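As a minimal sketch of what such an equivalence check amounts to, the following lines use Maxima, the computer algebra system underlying the STACK question type discussed in Section 3.3; the expressions are purely illustrative:

    /* Two differently written, algebraically equivalent answers */
    student_input: (a + b)^2;
    model_answer:  a^2 + 2*a*b + b^2;
    /* The check reduces to asking whether the difference simplifies to zero */
    is(equal(student_input - model_answer, 0));   /* -> true */

Conceptually, STACK's algebraic-equivalence answer test rests on exactly this kind of CAS simplification.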
A further requirement for transparency is a clear layout and easy-to-understand in-
put syntax that allows effortless navigation and a good overview in order to guarantee
that the students can completely focus on the example problems and are not confused
by the examination environment in any way.
3.2 Variable numeric question “Dimension a shaft” using one-way navigation
This example problem requires the student to dimension a shaft. The instructions are followed by an explanation of how to use the exam environment.
In order to ensure a clear layout, the question is divided into several logical subsections with corresponding headlines displayed in the navigation bar on the left-hand side, as shown in Fig. 1. All sub-questions belonging to one subsection are displayed on one page during the exam. The use of different pages allows a clear subdivision of the complex example problem. Moreover, this approach makes it possible to provide certain correct answers to sub-questions wherever necessary. This is done in order to enable the student to always use correct intermediate results, as explained earlier. Of course, the one-way navigation is vital in combination with these subdivisions.
As the student moves forward in the quiz following the predefined direction, already answered questions appear greyed out in the navigation section and can, for obvious reasons, no longer be changed.
Fig. 1. Example problems are divided into subsections for orientation and progress-tracking.
In the context of Engineering Mechanics, it is important to be able to test the students' abilities to construct free-body diagrams. These are not only vital parts of most problems in Engineering Mechanics but also indicate a general understanding of the underlying system. Using the drag-and-drop question types available in Moodle, the input and evaluation of graphical free-body diagrams is possible. As shown in Fig. 2, movable tiles can be dropped onto predefined areas of a background picture to build a free-body diagram. Not all tiles need to be used, but every drop zone needs to be filled, which allows for a larger pool of movable tiles and makes the question harder to guess. Moodle also allows a tile to be used an unlimited number of times, such that it remains in the pool even after it has been placed; that way, one tile can be used for several drop zones.
Tiles can easily be uploaded in the form of vector graphics (.svg), which even support translucent backgrounds. Blank tiles are used to represent a zone where no force or moment acts, which adds to the variety of possible answers, resulting in a higher discrimination index.
Fig. 2. Free-body-diagrams can be realized with Drag-and-Drop question types in Moodle
Once the free-body diagram is set up, the values of the forces are to be provided. Such numerical values can best be tested using the variable numeric question type [34]. It allows for random numbers as input data for the given problem, thus individualizing the question for each student. This obviously requires the question designer to provide, in a system-specific syntax, a mathematical function of the input data that reproduces the correct result. The individual numerical values for each student are taken from a range within well-defined limits in order to ensure physically meaningful results for each combination of variables.
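The following Maxima-style sketch illustrates the idea behind such randomized questions; the variable names, bounds, and the beam formula are hypothetical, not the actual exam content, and the real question is authored in the plugin's own syntax (cf. Fig. 4):

    /* Draw each input from well-defined limits so every combination stays physical */
    F: 100 + random(401);     /* force in N, between 100 and 500 (hypothetical bounds) */
    L: 2 + random(9);         /* span in m, between 2 and 10 (hypothetical bounds)     */
    /* Model answer expressed as a function of the randomized inputs */
    M_max: F*L/4;             /* e.g., maximum bending moment of a simply supported
                                 beam under a central point load                       */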
To prevent errors from being carried forward all the way to the final solution, different subdivisions are created, as already explained above. Whenever necessary for further calculations, the correct solutions are displayed, which implies that the student cannot change any entries at a later stage. In order to enable the student to use the correct intermediate results for calculations in his or her preferred way, not only the numerical value is shown, but also a solution in terms of the given variables, as shown in Fig. 3. Wherever possible, displaying correct solutions is avoided so as not to discourage the student.
Fig. 3. Input field for a numerical solution. The correct result is displayed at the bottom.
In order to be able to solve this specific example problem, it is also necessary to
calculate the torsional moment and identify the position of its maximum. This is real-
ized using a multiple-choice single-select question. Asking for the numerical value of
the maximum, by contrast, requires the “variable numeric” question type explained
above. The syntax to be used to formulate mathematical expressions in that question
type is shown in Fig. 4. It is important to note that numerical values must be typed
into the answer field by the student in the given unit system but without adding the
unit symbol. This must be clearly stated in the working instructions of the exam.
3.3 STACK question “Calculate the forces and moments”
STACK question type. STACK (System for Teaching and Assessment using a
Computer algebra Kernel) is a question type that utilizes the computer algebra system
(CAS) Maxima [35]. Due to its variety of options and possibilities regarding input
values, correction and grading, it is suitable for many applications in STEM disci-
plines.
Creation of the exam question using STACK. A lot of emphasis has been put on a clear layout. To arrive at a well-readable representation, the proper display of vectors and matrices plays a crucial role. The question at hand has deliberately been chosen in order to experiment with exactly that, as it requires input in the form of vectors. This is not possible with the previously described methods, as variable numeric questions are, e.g., restricted to one input field per question. The STACK question type, by contrast, provides several input fields, allowing more than one entry to be evaluated within a single question. Moreover, input fields are automatically displayed in the format of vectors and even matrices if the linked solution is a vector or a matrix. STACK furthermore allows evaluating algebraic expressions. To exploit that feature, the example problem at hand expects students' inputs to be algebraic, using only the given variables. For grading, the inputs are compared with the solutions with regard to algebraic equivalence, meaning they do not need to be represented in exactly the same way as the master solution.
In this specific example problem, students are expected to calculate the existing
forces and moments shown in Fig. 5. The absolute value of the vector S is of particu-
lar importance. We are going to describe the step-by-step procedure necessary to set
up such an example problem using STACK.
Fig. 4. Computation of variables from random values when using variable numeric questions.
STEP 1 – Definition of variables. When starting to create a question using STACK
in Moodle, one must define variables, as shown in Fig. 6. They might be used for
subsequent calculations or expressions. In this example, the variables have been set to
be 3x1 vectors, including initial values given in the description of the example prob-
lem.
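In Maxima syntax, such question variables might look as follows; the numbers are illustrative, not those of the actual exam question:

    /* 3x1 vector holding initial values from the problem statement */
    s: matrix([2], [1], [-2]);               /* force vector S in N (illustrative) */
    /* Derived quantity available for later steps */
    abss: sqrt((transpose(s) . s)[1][1]);    /* absolute value |S| = 3 N */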
STEP 2 – Definition of input fields. In the next step, working instructions are giv-
en. At this point, input fields are defined as shown in Fig. 7. While creating the ques-
tion, one can choose the name of the variables to store students’ inputs. These varia-
bles may be used subsequently for further calculations or display purposes.
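In STACK, such input fields are placed directly in the question text using dedicated tags; for the input variable "anss" used here, the markup would look as follows (the surrounding wording is illustrative):

    Calculate the force vector S and enter your result:
    [[input:anss]] [[validation:anss]]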
STEP 3 – Definition of correct answers. The input variables are linked to the correct solutions, which may also be already defined variables. In the case of the example, the input variable "anss" has been linked to the variable "s" defined in the first step. Here, algebraic equivalence with strict syntax was chosen, as displayed in Fig. 8. Strict syntax requires the student to type a multiplication sign wherever needed and thus interprets adjacent variables without multiplication signs as one single variable. An example is the variable "rho", which can only be interpreted correctly using strict syntax, as it would otherwise be read as "r*h*o". If the correct solution is a vector or a matrix, the input field will automatically be displayed accordingly.
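The practical effect of strict syntax can be summarized in a short Maxima-style sketch (illustrative):

    /* With strict syntax, multiplication signs are mandatory */
    density: rho;      /* parsed as the single symbol rho                */
    product: r*h*o;    /* parsed as an explicit product of three symbols */
    /* An input such as "2x" would be rejected; the student must type 2*x. */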
Fig. 5. Free-body diagram of the example problem described in chapter 3.3.
Fig. 6. Definition of variables, which is also possible without assigning a numerical value.
Fig. 7. Definition of input fields. Students’ input is stored in "anss".
STEP 4 – Feedback variables. Next, so-called feedback variables can optionally be introduced, which are accessible during the grading process. These include expressions, initially defined variables, or input variables that contain the student's inputs. As shown in Fig. 9, the final solutions used for grading are calculated based on the input variable "anss", which stores the student's result for the absolute value of the vector S. If the student obtained a wrong value for the vector S but did not make any further mistakes, the subsequent results are still considered correct.
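A minimal sketch of such feedback variables in Maxima syntax, assuming a hypothetical follow-up quantity that scales |S| with a lever arm d (neither is taken from the actual question):

    /* Model solution for the follow-up quantity, based on the correct |S| */
    m_model: abss*d;
    /* Expected value recomputed from the student's own input "anss", so that
       an early error in |S| is not penalized a second time */
    m_expected: anss*d;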
STEP 5 – Feedback tree. Finally, defining a feedback tree is mandatory when using STACK. One can choose which variables should be compared in what manner, as shown in Fig. 10. Various methods are available, among them algebraic equivalence, string, string sloppy, and numeric. For this specific task, algebraic equivalence is a suitable choice. The tree structure allows adding or subtracting a certain number of points, or even resetting the points to zero altogether, depending on the outcome of the check at each node, i.e., on whether a comparison yields true or false. This even makes it possible to skip certain checks, e.g., if they logically cannot be true due to incorrect prior inputs. The simplest possible feedback tree is displayed in Fig. 11. Considerations regarding more complex feedback trees will be discussed in a future publication.
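Conceptually, each node of the tree applies one answer test and routes the grading accordingly. The following Maxima-flavoured pseudocode mirrors the single-node tree of Fig. 11; "AlgEquiv" names STACK's algebraic-equivalence answer test, the scores are illustrative, and the real tree is configured in STACK's graphical editor, not in code:

    /* Node 1: test the student's input "anss" against the model solution "s" */
    node_1: AlgEquiv(anss, s);
    score:  if node_1 then 1    /* green branch: equivalent, full credit */
                      else 0;   /* red branch: no credit                 */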
From a student's point of view, the full example problem is well structured. After entering the algebraic result, the student clicks the "check" button to have the inputs displayed in classical mathematical notation, with STACK showing how the input has been interpreted by the underlying CAS, see Fig. 12. A second click on the same button confirms the input.
Advantages of STACK. Clearly, the option to take errors carried forward into ac-
count offers a lot of opportunities. It is possible to create complex example problems
that process the student’s input while providing random numbers as initial values.
Hence, there is no need for one-way navigation anymore as discussed earlier. Fur-
thermore, it is not necessary to display the correct values of intermediate results for
the purpose of avoiding double-counting of errors. Moreover, the possibility to use
algebraic as well as numerical and other types of input formats side by side allows
STACK to be used in a wide variety of applications. It is especially helpful for assessing complex example problems such as those typical of Engineering Mechanics.
4 Further Remarks
The presented approach aims at fully substituting paper-pencil exams using auto-
mated testing. However, solving typical paper-pencil exams in Engineering Mechan-
ics usually requires considerable computational effort. Consequently, students need to
be allowed to use draft paper in order to take notes during working on the example
problems. Ultimately it is planned to entirely dispense with this draft paper for grad-
ing in order to arrive at fully automated testing. During the transition phase, however,
it is sensible to collect all the draft papers from the students and cross-check the solu-
tions of the e-assessments with the calculations found on the draft paper. Such a pro-
cedure is expected to help gain further insights into how a complex example problem
should be designed in an automated testing environment. Thus, these cross-checks can
be used to refine questions and the overall design of the exam to finally arrive at a
methodologically sound e-assessment for Engineering Mechanics. Furthermore, collecting all draft paper can also help when unexpected problems occur during the e-assessment, e.g., problems leading to incomplete solutions within the LMS. In such a case, the draft paper could be used as a safety net to ensure proper grading of the exam. In any case, students will always need to be allowed to use draft paper for calculations, sketches, etc., even if it is ultimately not used for grading anymore. Solving complex example problems by merely "thinking" about the problem is neither deemed practical nor feasible and is thus not considered at all.
Fig. 8. Defining the input type and linking the input field to a solution.
Fig. 9. Final results calculated using previous inputs to account for errors carried forward.
Fig. 10. Using feedback trees, several elements can be compared using different methods. In this case, "anss" and "s" are compared with regard to algebraic equivalence.
Fig. 11. Simplest possible feedback tree always leading to the next node (flipped by 90°). The
green path represents correct answers and the pertaining grading scheme, whereas the
red one is followed when an incorrect answer has been given.
Fig. 12. Students can enter their solutions in vector format. STACK previews the input, which
is especially helpful in combination with more complex results.
5 Limitations
As long as the draft paper used by the students during their calculations is collected and can be used as a safety net, limitations are kept within reasonable bounds. Of course, certain classes of example problems are better suited for automated testing than others, while some types of example problems might have to be excluded altogether. Furthermore, free-body diagrams can only be tested using drag-and-drop questions, which is clearly easier than drawing a free-body diagram from scratch as required in the paper-pencil approach. Nonetheless, generally speaking, most of the example problems that can be formulated for paper-pencil exams can also be reasonably transferred to electronic questions using the framework described above. However, when switching to a full e-assessment without taking the draft paper into account anymore, the limitations increase. In STEM disciplines, the particular method, i.e., the path that led to a certain solution, is often as important as the solution itself and thus plays an important role when correcting paper-pencil exams. Admittedly, with fully automated testing as described above, this path to the solution becomes somewhat obscured and cannot be tested anymore, at least not to the extent of a paper-pencil approach. One possibility to mitigate this problem is to clearly define which sub-steps on the way to a final solution are of special importance and then design the individual questions of an example problem such that these sub-steps can be validly tested individually. In such a case, a meaningful allocation of points is of special importance in order to ensure a valid exam.
6 Conclusion and Outlook
The present work has shown possibilities to switch from classical paper-pencil exams to full e-assessments in Engineering Mechanics using the LMS Moodle. Several types of questions are available as a basis to assess complex example problems electronically. Not only can numerical values be evaluated; graphical relations, e.g., free-body diagrams, can also be part of a question, which is vital for Engineering Mechanics. Of special relevance for the authors is the possibility of algebraic comparisons using the STACK question type. In combination with complex decision trees and the opportunity to take errors carried forward into account, it is indeed possible to closely digitize classical paper-pencil exams in Engineering Mechanics without sacrificing informative value. Clearly, a lot of effort has to be put into constructing comprehensive e-assessments, e.g., to track the path to a student's final solution. Furthermore, e-assessments have to be evaluated continuously, starting with their use in formative exams, e.g., to allow the students to evaluate their own knowledge and monitor their learning progress during a course or lecture, or in SAT prior to summative exams. Only then is it advisable to introduce complex problems into summative exams. It is recommended to start with a combination of one electronic and one paper-pencil question to generate further insight into the pros and cons of electronic questions.
Obviously, Engineering Mechanics is by far not the only discipline where sophisticated e-assessments are needed. The challenges faced when transferring complex, computationally intensive example problems from paper-pencil to e-assessments are similar in most STEM fields. Intuitive handling and display, accounting for errors carried forward, complex grading schemes, and the correct handling of various types of student input: all of these aspects have to be taken into account in order to set up an e-assessment environment that is as transparent as possible for the students. Thus, the approach described in the work at hand is considered highly relevant for many STEM disciplines.
7 References
[1] Moodle Pty Ltd. "Moodle LMS." https://moodle.org/ (accessed December 6, 2019).
[2] M. Orthaber, "Experiences with a Blended Learning Concept in a First Year Engineering
Mechanics Course," 12th International Conference of Education, Research and Innovation,
pp. 9229-9239, 2019. https://doi.org/10.21125/iceri.2019.2231
[3] M. Orthaber, T. Antretter, R. Jurisits, and M. Schemmel, "E-Assessment in Engineering Mechanics: How Does It Compare to Classical Paper-Pencil Exams?," 12th International Conference of Education, Research and Innovation, pp. 9381-9390, 2019. https://doi.org/10.21125/iceri.2019.2272
[4] D. Stütz, "Konzeptionierung eines digitalen Prüfungsmodus für Aufgabenstellungen der
Technischen Mechanik," Bachelor's Thesis, Institute of Mechanics, Montanuniversitaet
Leoben, Leoben, 2019.
[5] J. Schweighofer, "Development of a Quiz / (Self-) Assessment Tools and their Integration
in Moodle," Master's Thesis, ISDS - Institute for Interactive Systems and Data Science,
Graz University of Technology, Graz, 2019.
[6] J. Schweighofer, B. Taraghi, and M. Ebner, "Development of a Quiz – Implementation of a (Self-) Assessment Tool and its Integration in Moodle," International Journal of Emerging Technologies in Learning (iJET), vol. 14, p. 141, 2019. https://doi.org/10.3991/ijet.v14i23.11484
[7] S. H. P. W. Gamage, J. R. Ayres, M. B. Behrend, and E. J. Smith, "Optimising Moodle quizzes for online assessments," International Journal of STEM Education, vol. 6, no. 1, p. 27, 2019. https://doi.org/10.1186/s40594-019-0181-4
[8] D. Cohen and I. Sasson, "Online quizzes in a virtual learning environment as a tool for formative assessment," vol. 6, no. 3, p. 21, 2016. https://doi.org/10.1016/b978-0-12-803637-2.00010-5
[9] R. Wallihan, K. G. Smith, M. D. Hormann, R. R. Donthi, K. Boland, and J. D. Mahan, "Utility of intermittent online quizzes as an early warning for residents at risk of failing the pediatric board certification examination," BMC Medical Education, vol. 18, no. 1, p. 287, 2018. https://doi.org/10.1186/s12909-018-1366-0
[10] B. R. Cook and A. Babon, "Active learning through online quizzes: better learning and less (busy) work," Journal of Geography in Higher Education, vol. 41, no. 1, pp. 24-38, 2017. https://doi.org/10.1080/03098265.2016.1185772
[11] J. L. Schneider, S. M. Ruder, and C. F. Bauer, "Student perceptions of immediate feedback testing in student centered chemistry classes," Chemistry Education Research and Practice, vol. 19, no. 2, pp. 442-451, 2018. https://doi.org/10.1039/c7rp00183e
[12] K. Wojcikowski and L. Kirk, "Immediate detailed feedback to test-enhanced learning: An effective online educational tool," Medical Teacher, vol. 35, no. 11, pp. 915-919, 2013. https://doi.org/10.3109/0142159x.2013.826793
[13] B. Bahati, U. Fors, P. Hansen, J. Nouri, and E. Mukama, "Measuring Learner Satisfaction with Formative e-Assessment Strategies," International Journal of Emerging Technologies in Learning (iJET), vol. 14, p. 61, 2019. https://doi.org/10.3991/ijet.v14i07.9120
[14] A. Rasila, J. Malinen, and H. Tiitu, "On automatic assessment and conceptual understand-
ing," Teaching Mathematics and its Applications: An International Journal of the IMA,
vol. 34, no. 3, pp. 149-159, 2015, https://doi.org/10.1093/teamat/hrv013.
[15] TU Clausthal. "Stack (Maxima)." https://doku.tu-clausthal.de/doku.php?id=multimedia:moodle:stack_maxima (accessed December 6, 2019).
[16] C. Sangwin and T. Hunt. "Question types: STACK." https://moodle.org/plugins/qtype_stack (accessed December 6, 2019).
[17] Universität zu Köln. "STACK Documentation (German)." https://stack2.maths.ed.ac.uk/demo2018/question/type/stack/doc/doc.php/index.md (accessed December 6, 2019).
[18] The University of Edinburgh. "STACK." https://www.ed.ac.uk/maths/stack/ (accessed December 6, 2019).
[19] C. Sangwin, "Computer Aided Assessment of Mathematics Using STACK," in Selected Regular Lectures from the 12th International Congress on Mathematical Education, S. J. Cho, Ed. Cham: Springer International Publishing, 2015, pp. 695-713. https://doi.org/10.1007/978-3-319-17187-6_39
[20] C. Sangwin, "Who uses STACK? A report on the use of the STACK CAA system," 2010.
[21] A. Rasila, "E-Assessment Material Bank Abacus," in 8th International Conference on Education and New Learning Technologies (EDULEARN), Barcelona, Spain, Jul. 4-6, 2016. Valencia: IATED, in EDULEARN Proceedings, 2016, pp. 898-904.
[22] A. Rasila, L. Havola, H. Majander, and J. Malinen, "Automatic assessment in engineering mathematics: evaluation of the impact," 2010.
[23] A. Rasila and C. Sangwin, "Development of STACK Assessments to Underpin Mastery Learning," 2016.
[24] H. Majander and A. Rasila, "Experiences of continuous formative assessment in engineering mathematics," 2010.
[25] A. Rasila, M. Harjula, and K. Zenger, "Automatic assessment of mathematics exercises: Experiences and future prospects," The second ReflekTori 2007 symposium of Engineering Education, 2007.
[26] M. Neitola, "Circuit Theory E-Assessment Realized in an Open-Source Learning Envi-
ronment," International Journal of Engineering Pedagogy (iJEP), vol. 9, no. 1, pp. 4-18,
2019, https://doi.org/10.3991/ijep.v9i1.9072.
[27] M. A. Huertas, "An E-Assessment Analytics Framework for STEM in Higher Education," in 8th International Conference on Education and New Learning Technologies (EDULEARN), Barcelona, Spain, Jul. 4-6, 2016. Valencia: IATED, in EDULEARN Proceedings, 2016, pp. 1592-1600.
[28] M. Kopp, M. Ebner, W. Nagler, and E. Höfler, "Technologie in der Hochschullehre. Rah-
menbedingungen, Strukturen und Modelle," 2013, pp. 475-482.
[29] J. P. Ehlers, C. Guetl, S. Hoentzsch, C. Usener, and S. Gruttmann, "Prüfen mit Computer
und Internet - Didaktik, Methodik und Organisation von E-Assessment," 2013, pp. 227-
238.
[30] M. Grandl, B. Taraghi, M. Ebner, P. Leitner, and M. Ebner, "Learning Analytics," in
Handbuch E-Learning: Expertenwissen aus Wissenschaft und Praxis - Strategien, Instru-
mente, Fallstudien, vol. 72. Erg-Lfg.: Wolters Kluwer Deutschland, 2017, pp. 1-16.
[31] P. Leitner, M. Khalil, and M. Ebner, "Learning Analytics in Higher Education - A Literature Review," in Learning Analytics: Fundaments, Applications, and Trends: A View of the Current State of the Art to Enhance E-Learning (Studies in Systems, Decision and Control, vol. 94), A. Peña-Ayala, Ed., 2017, pp. 1-23. https://doi.org/10.1007/978-3-319-52977-6_1
[32] P. Leitner and M. Ebner, "Development of a Dashboard for Learning Analytics in Higher
Education," Learning and Collaboration Technologies: Technology in Education, Lct
2017, Pt Ii, vol. 10296, pp. 293-301, 2017, https://doi.org/10.1007/978-3-319-58515-4_23.
[33] P. Leitner et al., "Learning Analytics: Einsatz an österreichischen Hochschulen," ed. Graz:
Forum Neue Medien in der Lehre Austria, 2019.
[34] T. Hunt, J. Pratt, and P. Butcher. "Question types: Variable numeric." https://moodle.org/plugins/view.php?plugin=qtype_varnumeric (accessed January 8, 2020).
[35] "Project Maxima." http://maxima.sourceforge.net/ (accessed December 6, 2019).
8 Authors
Markus Orthaber is Senior Lecturer at the Institute of Mechanics at the Montanuniversitaet Leoben, Austria. He is experimenting with blended learning, flipped classroom, inverse classroom, and other approaches, as well as researching ways to incorporate (complex) engineering problems into e-assessments in formative as well as summative exams. https://maomech.wordpress.com/
Dominik Stütz completed his bachelor's thesis on e-assessments in Engineering Mechanics at the Institute of Mechanics at Montanuniversitaet Leoben, Austria, in July 2019 and is now a master's student at ETH Zurich, Switzerland.
Thomas Antretter is the head of the Institute of Mechanics at the Montanuniversi-
taet Leoben, Austria. His main interest in higher education is to find ways to incorpo-
rate meaningful e-assessment concepts in engineering education.
Martin Ebner is with the Department of Educational Technology at Graz University of Technology, Graz, Austria. As head of the department, he is responsible for all university-wide e-learning activities. He holds an associate professorship in media informatics and works at the Institute of Interactive Systems and Data Science as a senior researcher. For publications as well as further research activities, please visit http://martinebner.at. Email: martin.ebner@tugraz.at
Article submitted 2020-02-12. Resubmitted 2020-03-21. Final acceptance 2020-03-22. Final version
published as submitted by the authors.