Fößl, T., Ebner, M., Schön, S., & Holzinger, A. (2016). A Field Study of a Video Supported Seamless-Learning-Setting with
Elementary Learners. Educational Technology & Society, 19 (1), 321–336.
ISSN 1436-4522 (online) and 1176-3647 (print). This article of the Journal of Educational Technology & Society is available under the Creative Commons CC-BY-NC-ND
3.0 license (https://creativecommons.org/licenses/by-nc-nd/3.0/). For further queries, please contact the Journal Editors at email@example.com.
A Field Study of a Video Supported Seamless-Learning-Setting with Elementary Learners
Thomas Fößl1, Martin Ebner1*, Sandra Schön2 and Andreas Holzinger3
1Graz University of Technology, Austria // 2Salzburg Research, Salzburg, Austria // 3Medical University of Graz,
Austria // firstname.lastname@example.org // email@example.com // firstname.lastname@example.org
(Submitted December 8, 2014; Revised May 11, 2015; Accepted July 12, 2015)
Seamless learning aims to initiate learning processes that exceed lesson and classroom limits. At the same
time, this approach fosters self-regulated learning by means of inspirational, open education settings.
Advanced learning materials are easily accessible via mobile digital devices connected to the Internet. In this
study it was explored whether and to what extent an open learning approach can be initiated with the support of
videos and incentives. The study took place in a real-world setting during a conventional mathematics class in
an Austrian secondary school with N = 85 children with an average age of 10.6 years. For the investigation, a
traditional face-to-face maths-teaching environment was completely replaced by an open learning environment.
In our study, the elementary learners were able to select their own learning pace and preferences via example
videos. In addition to the open education approach and the videos, their learning was also incentivised via
a reward system of "stars." A pretest-post-test control-group study showed that learning performance
increased significantly. This was due to the combination of a novel teaching and learning setting with
coupled incentives to foster the learning process.
Keywords: Open education, Mathematics, Elementary learners, Video, Seamless learning
Developing a seamless real-world learning setting using open learning and video in primary maths education
Ideally, learning should be a seamless flow across contexts (Wong & Looi, 2011) and therefore be possible anywhere
and anytime in many different scenarios (Ally et al., 2014). Nevertheless, “seamless learning” is much more than just
using (mobile) technologies to assist and enhance learning. It could also be described as a learner’s frame of mind
that should not be taken for granted (Chan et al., 2006; Wong & Looi, 2011).
Nevertheless, there is not much related work available in our particular area, so we have to emphasize that the
combination of the seamless learning concept with mathematics in a real-world study is novel. In a recent work,
Schmitz et al. (2015) report on a mobile game application in which children play an active role in the simulation of a
dynamic process. They also followed a design-based research approach and demonstrated that mobile game-based
learning environments can productively support seamless learning activities for children. However, they also reported
(in line with our experience) that seamless learning design is difficult to achieve; nevertheless, they emphasize that it
can help to bridge the gap between learning in physical and digital worlds. Moreover, Muñoz-Cristóbal et al. (2014)
emphasize that most previous projects provide limited support for connecting learning activities across spaces, which
can be exemplified fundamentally by the flow of artefacts between activities conducted in different spaces; this is
similar to our approach. Guided by design-based research and the notion of seamless learning, Sollervall et al. (2012)
designed a learning activity in mathematics with mobile computer support for transitions between different learning
contexts. A different approach, in medical education, was implemented by Bloice et al. (2014).
Although this contribution focuses on the evaluation and investigation of our new learning setting, its development
itself was a first important step. In contrast to traditional, research-question-driven investigation on a theoretical
basis, a "design-based research" approach is a means to develop an educational setting that fits a specific "real-world"
challenge using existing theoretical and empirical knowledge (Brown, 1992; Collins, 1992). "The ultimate
goal of design-based research [is] to build a stronger connection between educational research and real-world problems"
(Amiel & Reeves, 2008, p. 34; see Figure 1).
Figure 1. Design-based research approach
Building on this approach, we analysed the practical challenges an Austrian maths teacher at elementary level faces
when implementing a seamless real-world learning setting with elementary learners. To initiate seamless learning, an
open learning approach as well as individual usage of mobile devices is central. Pragmatically, our possibility to
implement and analyse a variation of the traditional learning setting was limited: it had to build on the curriculum
and the topics covered in the textbook, it had to be limited to a set of lessons in one discipline, and it had to address
the prior learning and teaching experiences and the equipment of pupils as well as teachers.
Our discussions led to the idea of developing an open learning setting with worked example video podcasts on the topic
of the "circle." "Open education" practice means that learners are permitted and motivated to select learning paths and
learning resources according to their personal needs and interests inside the classroom. There is a large variety of
open education practices. The "openness" of such settings refers to the learner's possibility to choose certain
learning activities, and sometimes even learning goals. Such learning is also known as self-regulated
learning and has gained attention in educational research (Wang, 2011; Kauffman, 2004). Interestingly,
especially in the field of mathematics instruction, many studies over the last decades have pointed out that open
learning approaches (e.g., problem-based learning, project-based learning or the Montessori Method) are not only
associated with greater student achievement and academic success (Lopata et al., 2005), but are also more effective
than traditional methods in teaching mathematics (Kazemi & Ghoraishi, 2012).
The "open education" approach in our setting at this stage consisted of developing learning materials and
opportunities to train and learn on a "map" with a certain list of "basic" tasks as well as optional tasks, where the
pupils were able to fulfil the tasks in teams and with the help of a learning video. We chose a specific unit in
maths ("the circle"). To make additional learning and tasks attractive, "stars" and the possibility to re-do
tests and work were additionally offered. Based upon seamless learning considerations and our existing requirements
for learning and teaching settings, we decided to use internet-based videos as the main component of the teaching.
According to McGarr (2009) video podcasts or vodcasts are video files in a digital format, which are distributed
through the Internet using RSS-feeds. Literature often distinguishes between receptive viewing of video podcasts
(e.g., entire lecture recordings), which happens in a mainly passive manner, and the watching of worked examples
video podcasts, which provide audio-visual explanations of a specific procedural problem (Kay & Kletskin, 2012).
Examining the didactical approach of worked-out examples within maths education demonstrated clearly how
our videos should be developed. The benefits of learning with written worked examples, often in the field of
mathematics instruction, are well established and are often summarized by the term "worked example effect" (Carroll,
1994). Written worked examples provide a step-by-step solution to a specific problem, and it has been shown that
especially weaker students, or students with no or low problem-specific knowledge, benefit from working with them
(Carroll, 1994). Unfortunately, research studies about the use of worked example videos in undergraduate mathematics
instruction are very limited (Kay & Edwards, 2012; Kay, 2012). Nevertheless, there are at least some results which
indicate that this research field should be investigated in more depth. For example, Boster et al. (2007) pointed out
that middle school students who watched mathematical video podcasts performed significantly better in a post-test
than those students who did not. Furthermore, Kay and Edwards (2012) reported that the learning performance of
maths students (between 11 and 13 years old) who watched worked example videos increased significantly.
Additionally, students stated that they liked working with those video podcasts and considered them quite helpful
(Kay & Edwards, 2012).
Building on the design-based research approach, we discussed our plans with teachers as well as researchers,
produced videos and materials, tested them with a small group of peers, and revised them where needed. The following
is the evaluation of the field experiences.
Question and hypotheses
The aim of this study is to evaluate the effects of an open education approach applied in a fifth grade maths class
supported by the usage of worked example videos. Accordingly the overall research question is:
Q: Is it possible to initiate seamless learning within an open education approach in maths education, while achieving
at least the same learning performance as traditional approaches?
On the basis of this research question, six hypotheses have been formed:
H1: The average learning performance of the experimental group is not lower than the learning performance
achieved by the control group.
H2: The participants of the experimental group like the open education approach and feel motivated.
Previous studies have shown that the learning performance of students learning in an "open educational" setting is
lower than in traditional settings (Boekaerts, 1999; Greene & Land, 2000). However, other work suggests that
learning with problem-based examples and problem-based video podcasts can influence learners' achievement in a
positive way (Kay & Edwards, 2012; Kay & Kletskin, 2012; Kay, 2014).
H3: The provided worked example videos are used steadily during class by the participants of the experimental group.
H4: The participants of the experimental group also use the provided worked example videos outside the maths class.
In several studies, participants reported that they had enjoyed learning with worked example videos (Kay &
Edwards, 2012; Kay & Kletskin, 2012; Kay, 2014; Copley, 2007; Dupagne et al., 2009). Furthermore, previous work
has shown high usage of educational video podcasts outside of school and university (Heilesen, 2010; Copley,
2007; Hill & Nelson, 2011).
H5: Not dealing with learning contents covered by worked example videos during the working phase (in or outside
the class) has a negative influence on the post-test.
H6: Continuous feedback from the teacher during the working phase – via incentives and comments – motivates
students to correct their mistakes.
For this study, all fifth grade pupils of an Austrian secondary school in Graz were recruited, providing N = 85
participants. Of these, 8 were female and 77 were male. The average age was 10.6 years (SD = 0.31).
The study took place in a secondary school in Graz and involved students and teachers of four fifth grade maths
classes over a two-week period (see Table 1). The same teacher instructed two of these classes; therefore, three
different teachers (all female) were part of this study. As shown in Figure 2, the same teacher (Teacher A) also taught
the control group. The experimental group was named the E-group and the control group the C-group.
The remaining two classes served as further control groups (FC1-group and FC2-group).
Figure 2. Study design
Table 1. Distribution of the participants according to the four groups
One essential fact is that this field study took place in the participants' daily maths class and not in an artificial
environment. Therefore, it was not possible to control the interaction between the experimental group and the control
groups. Hence, the regular teachers taught all groups of pupils (as shown in Figure 2). Moreover, the examiner of this
research study appeared only as a silent observer in the different classes during the whole experimental phase.
The independent variable in this study is the instructional setting applied in the different classes. Whereas
in the experimental group (E-group) an innovative open education approach (see below for detailed information) was
used, the control groups (C, FC1 and FC2) experienced traditional face-to-face mathematics instruction. We
applied a typical pretest and post-test 2 × 2 design (type of instructional setting × pretest and post-test) to compare
the different groups. Furthermore, the learning performance, as the dependent variable, was measured by comparing
the pretest and post-test results of the participants (learning performance = score in post-test − score in pretest)
(Holzinger et al., 2009).
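The learning-performance measure used throughout the study can be sketched as a simple per-participant difference; the score lists below are hypothetical and for illustration only, not the study's data:

```python
# Learning performance per participant: post-test score minus pretest score
# (Holzinger et al., 2009). Maximum achievable score in this study was 22 points.
# The scores below are invented for illustration.

def learning_performance(pretest, posttest):
    """Return the per-participant gain: post-test minus pretest."""
    return [post - pre for pre, post in zip(pretest, posttest)]

def mean(values):
    return sum(values) / len(values)

# Hypothetical example: three pupils
pre = [1, 0, 2]
post = [17, 14, 18]
gains = learning_performance(pre, post)
print(gains)                   # [16, 14, 16]
print(round(mean(gains), 2))   # 15.33
```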
As shown in Figure 3, the present study started on the 29th of January 2014, when all participants (of all four groups)
completed the pretest. Additionally, the participants of the E-group were introduced to the experiment. The working
phase (experimental phase) took place from the 3rd of February to the 12th of February and included 8 regular
maths lessons (50 minutes each) per group. Finally, on the 13th of February, all participants completed this
study by doing the post-test.
Figure 3. Experimental setting
Boundary conditions for the experimental group
The boundary conditions applied to the E-group were:
• No teacher-centred teaching at all: The open education approach was student-centred throughout the whole
working phase. The teacher was only allowed to answer specific students' questions, and only if the pupils
themselves initiated these questions.
• Worked example videos and further learning material: Overall, 21 worked example videos and a pool of 56
exercises were provided, accessible to students via a course within the learning management system Moodle. In
this context, it is important to mention that only 12 of these exercises (and the corresponding videos), called the
"basic exercises," had to be done in a specific order. These basic exercises had to be done beforehand;
only then were pupils allowed to do the other exercises and to watch the corresponding videos. In contrast to the
basic exercises, students could choose autonomously which exercises they wanted to do (not all of them needed to
be done) and in which order. Finally, consistently pursuing the open education approach, students were also
allowed to use other material, for example their textbook, to do their exercises if they did not want to use the
provided worked example videos.
• Working in teams: The pupils were divided into eight teams by their teacher (Teacher A), as heterogeneously as
possible (three students per team). According to Cohen (1994), working in heterogeneous teams is beneficial,
especially for weaker students, while good students do not suffer under these circumstances. Finally, pupils were
not only allowed to work with students from other teams, but also alone.
• Incentives and feedback mode ("stars"): After pupils finished their exercises, they handed them in and received
written feedback (short statements on any errors they had made) from their teacher in the next lesson. If
everything was done correctly, the pupils were rewarded with "stars." It is important to note that the students
received a work plan from their teacher at the beginning of the working phase, which explained how many stars
could be earned for the completion of which exercise (at least 1 star, at most 3 stars per exercise, see 3.7).
Another crucial factor of the incentive mode was that the stars were added up per team. In short, every pupil was
just as good as his/her team. The hidden agenda of this mode was that a mildly competitive environment would
emerge between the different teams, intended not only to be "fun" for the pupils but also to motivate
them to work closely together within their teams. Moreover, we expected that the possibility to correct mistakes
and still receive stars for them would further motivate the pupils. Finally, it should be mentioned that this
feedback mode was only applied to the experimental group; the students of the control groups only received
verbal feedback from their teachers during their maths lessons.
The work plan (Figure 4) provided an overview of all exercises students could do on a voluntary basis
during the working phase. These exercises were divided into seven different parts:
• Basic exercises (12 exercises): These exercises were the only ones that had to be done in the given order and
within the predefined teams.
• The training ground (15 exercises): The training ground provided all different kinds of exercises.
• The drawing meadow (7 exercises): The “drawing meadow” included just exercises where students had to draw
something (e.g., a chord, a circle pattern).
• The quiz triangle (6 exercises): The “quiz triangle” provided six exercises that mainly dealt with theoretical
knowledge (e.g., the definition of a tangent). These exercises were mostly multiple-choice questions.
• The senior ring (5 exercises): The five exercises from the “senior ring” were relatively difficult, so these
exercises were rewarded with 3 stars each.
• The crafting corner (1 exercise): This was just one exercise, where students could work with scissors and paper.
• Homework (9 exercises): The homework sector included 9 exercises that students could do at home or simply
outside the class; these exercises were not mandatory. However, the intention was that these exercises would
support the open education approach.
Figure 4. Work plan
Figure 4 shows that each of the 56 exercises was marked with a specific token (e.g., "G1" or "Z4"), so that students
could easily find the corresponding worksheets or worked example videos for the exercise in Moodle.
Furthermore, the number of stars each exercise was worth was easily visible.
For each of the 56 exercises on the work plan an additional worksheet was provided in Moodle. Every worksheet
was marked with the appropriate token (e.g., “G1”) – according to the exercise on the work plan. Additionally, the
related worked example video was clearly visible on the sheet. The number of stars (incentives) was indicated at the
bottom of the worksheet. Figure 5 shows an example worksheet.
Figure 5. A worksheet
Worked example videos
Overall, 21 worked example videos were developed. These videos did not contain any information beyond that
provided by the students' textbook (in other words, the students of the E-group did not have any advantage compared
to the students of the control groups).
The videos were filmed with a simple digital camera and afterwards edited with the open film editing software
Lightworks (www.lkws.com). The average development time for one worked example video was about 180 minutes.
Table 2. Worked example videos
                        Duration (min – max)    Mean (SD)
                        159 s – 373 s           258 s (76 s)
The training ground     74 s – 130 s            100 s (18 s)
The senior ring                                 84 s (0 s)
                                                93 s (0 s)
We tried to keep every video as short as possible to address issues of limited attention span (Kay & Kletskin, 2012).
The duration of the videos varied between 74 s (1:14) and 373 s (6:13). On average, the videos last 189 s (3:09) with
a standard deviation of 98 s (1:38). See Table 2 for more detailed information. Figure 6 shows some screenshots of
the videos.
Figure 6. Video screenshots
Pre- and posttests
Identical pre-tests and post-tests were used in this research study. For each test, students had a time limit of 20
minutes, and each test combined two different kinds of exercises:
• 12 single choice questions (Figure 7)
• 4 practical exercises, where students had to draw something (Figure 8)
What is shown in the figure on the right?
Figure 7. One typical single choice question in the pre-post-test
Draw the longest chord of this circle through the Point P!
Label it with s1!
Draw the shortest chord of this circle through the Point P!
Label it with s2!
Figure 8. One practical exercise in the pre-post-test
A Moodle (www.moodle.org) course was provided for the E-group, so that students could easily access all learning
resources such as worksheets and worked example videos – not only in school, but also externally.
Control sheet for students’ working progress
A control sheet was provided for the teacher of the E-group to document every student's working progress for each
day of the experimental working phase. The teacher had to mark in this sheet every exercise a student did and
whether the student did it correctly. In addition, the teacher had to flag each exercise that was not initially
correct but was subsequently corrected by the student after receiving feedback from the teacher.
The students of the E-group had to fill in a 17-item, five-point Likert-type survey assessing the pupils' attitude
concerning the worked example videos. A translated and slightly adapted version of the survey of Kay and Edwards
(2012) was used, because they created this survey for similar purposes, for students of the same age, and the
internal reliability of the entire scale is adequate (0.84).
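The reported internal reliability (0.84) is presumably a Cronbach's alpha coefficient. As a hedged illustration of how such a value is computed from Likert responses, the following sketch uses an invented response matrix, not the study's data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
# The 5-point responses below are hypothetical, not the study's data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item responses per respondent."""
    k = len(rows[0])                          # number of items
    items = list(zip(*rows))                  # responses grouped per item
    totals = [sum(r) for r in rows]           # total score per respondent
    item_var = sum(variance(list(col)) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
]
print(round(cronbach_alpha(responses), 2))   # 0.97
```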
During the entire experiment the following data was collected:
• Learning performance (pre- and post-test)
• Video views
• Working progress (measured in "stars")
• Students' attitude towards worked example videos (survey)
• Students' interviews: after the post-test, we interviewed the students of the E-group in groups of six and elicited
their opinion concerning the entire experimental learning approach
• Teacher's interviews
• Observations during the experiment
H1: The average learning performance of the experimental group (E-group) is not lower than the learning
performance achieved by the control group (C-group)
An identical pretest and post-test was used to measure the learning performance of the participants. The learning
performance is calculated as the difference between the post-test and pretest results (e.g., Holzinger et al.,
2009). As can be seen in Table 3 and in Figure 9, the E-group and the C-group, which were taught by the same teacher,
scored nearly the same in the pretest (split-half reliability coefficient with Spearman-Brown correction r = .67;
most results were near zero), but differ considerably in the results of the post-test (reliability r = .77): the E-group
performed much better. Moreover, the E-group also scored much better in the post-test than the other two
control groups (FC1 and FC2). In total, a maximum of 22 points was achievable.
Table 3. Results of the pre-post-test
Figure 9. Learning performance of all 4 groups
The average learning performance of the E-group is M = 16.13 (SD = 3.55) and that of the C-group is M = 13.74
(SD = 3.80). Hence, the results show that the E-group's learning performance is better than that of the C-group.
Moreover, a univariate analysis of variance (ANOVA) yielded a significant effect of the learning approach
(F(1, 45) = 4.94; p = 0.0313) at the 5 % level. The effect size is r = 0.31 and Cohen's d = 1.13.
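The reported effect size r follows directly from the F statistic and its error degrees of freedom via r = sqrt(F / (F + df_error)); a minimal check of the figures above:

```python
import math

# Effect size r from a one-way ANOVA F statistic with 1 numerator df:
# r = sqrt(F / (F + df_error)).
F, df_error = 4.94, 45   # values reported for the E-group vs. C-group comparison
r = math.sqrt(F / (F + df_error))
print(round(r, 2))       # 0.31
```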
H3: The worked example videos were used steadily during class by the participants of the experimental group
Figure 10. Video views on each day of the experimental working phase
Table 4. Results of the students' survey (mean, SD, % Agree, % Disagree)
1. Overall, I liked using the clips.
2. The clips were easy to follow.
3. The problem was explained well. (M = 4.42, SD = 0.57, Agree 96 %, Disagree 0 %)
4. All steps were explained clearly.
5. I was confused by some steps.
6. The videos helped me to understand.
7. Writing in the clips was easy to read.
8. Diagrams helped me understand. (M = 5.00, SD = 0.00, Agree 100 %, Disagree 0 %)
9. Good tips were provided.
10. The clips were too long.
11. The clips went too fast for me.
12. I used the pause feature to stop the clips sometimes.
13. The videos were boring. (M = 2.17, SD = 0.99, Agree 13 %, Disagree 67 %)
14. I liked using videos better than using the textbook.
15. These clips were helpful for homework.
16. I would use these clips to review for assignments.
17. These clips would be helpful for extra help.
Note. Agree: both Agree (4) and Strongly Agree (5). Disagree: both Disagree (2) and Strongly Disagree (1).
Figure 10 suggests steady usage of the video podcasts during the experimental working phase, especially in the first
four days. This makes sense, since the students had no relevant prior knowledge concerning "the circle," according to
the pretest results.
Certainly, the numbers in Figure 10 also include views outside the maths class and from different (mobile) digital
devices. However, the results of the students' interviews and the students' survey show that the provided worked
example videos were often and willingly used during class. For example, 96 % of the students said in the survey
that they liked working with the provided video podcasts. All students reported that they preferred working with the
video podcasts instead of working with the textbook (see Table 4). Additionally, all of the students thought that the
videos helped them to understand the learning content. All in all, we counted 17 positive comments in the students'
interviews concerning the worked example videos; for example, some mentioned that they enjoyed working with the
videos. One student explained that he liked the possibility to pause and rewind a clip whenever he wanted to. One
participant argued that he did not need the video clips, and another complained about the intro at the beginning of
every clip, which in his opinion was too long (ca. 30 s).
H4: The participants of the experimental group also use the provided worked example videos outside the classroom
In the students' survey (Table 4), all of the students stated that the provided worked example videos were helpful for
doing their homework, i.e., outside the classroom. They rated this statement with 4.63 on average (whereby 1 means
"No, I definitely do not agree" and 5 means "Yes, I fully agree"). Additionally, some students explicitly mentioned
in the interview that they used the videos outside the class.
H5: Not dealing with learning content covered by worked example videos during the working phase (in or
outside the class) has a negative influence on dealing with these contents during the post-test
To test this hypothesis, we included two exercises in our post-test (these exercises were also in the pretest,
since the tests were identical) which refer to two specific exercises (and videos) from the experimental working
phase. Exercise 1 from the pre/post-test refers to exercise "Ü9" from the work plan, and exercise 3 from the
pre/post-test refers to exercise "P5."
According to the work progress control sheet, exactly 13 students did exercise "Ü9" during the working phase and 11
did not. As shown in Table 5, the 13 students who did exercise "Ü9" scored on average M = 1.85 points (SD = 0.38)
on the corresponding post-test exercise 1, while the 11 students who did not do "Ü9" only scored M = 1.09 points
(SD = 0.70).
If we compare the average score on exercise 3 in the post-test of the 14 students who did "P5" during the working
phase (M = 1.79; SD = 0.58) with that of the 10 who did not (M = 0.50; SD = 0.85), the result is even more impressive
(F(1, 22) = 19.54; p < 0.001).
Table 5. Post-test results of exercises 1 and 3, dependent on whether the corresponding exercises (Ü9, P5) were done
during the experimental working phase

Post-test result of exercise 1      Yes (n = 13)    No (n = 11)
  Mean (M)                          1.85            1.09
  Standard deviation (SD)           0.38            0.70

Post-test result of exercise 3      Yes (n = 14)    No (n = 10)
  Mean (M)                          1.79            0.50
  Standard deviation (SD)           0.58            0.85
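The reported F value for the exercise 3 comparison can be approximated from the summary statistics alone: for two groups, a pooled two-sample t statistic satisfies F = t². The sketch below is a check under that assumption; the small deviation from the reported value comes from the means and SDs being rounded:

```python
import math

# Two-sample comparison from summary statistics: pooled-variance t, then F = t^2.
def f_from_stats(m1, sd1, n1, m2, sd2, n2):
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (m1 - m2) / se
    return t**2

# Exercise 3: students who did "P5" (n = 14) vs. those who did not (n = 10)
F = f_from_stats(1.79, 0.58, 14, 0.50, 0.85, 10)
print(round(F, 1))   # close to the reported F(1, 22) = 19.54
```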
H6: Continuous feedback from the teacher during the working phase – via incentives and comments –
motivated students to correct their mistakes
An evaluation of the control sheets of the students' individual working progress showed that overall 396 exercises
needed to be corrected during the working phase and that 299 of them were in fact corrected by the students. This
equals a rate of 75.5 %, which is surprisingly high given that students were not forced to correct any of them. As
can be seen in Table 6, stars seemed to play a big role in students correcting their mistakes: while "only" 68.2 % of
the exercises rewarded with 1 star were corrected, 85.4 % of the difficult "3-star exercises" were corrected.
Furthermore, many students mentioned in the interview that they liked "collecting stars" and that they felt more
motivated to do their exercises (11 statements).
Table 6. Rate of the corrected exercises dependent on their value (stars)
Needed to be corrected: 396 overall; 299 corrected (75.5 %)
  1-star exercises: 68.2 % corrected
  3-star exercises: 85.4 % corrected
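The overall correction rate reported above is a direct ratio of the control-sheet counts; since the per-star counts are not given in the text, only the overall figure is reproduced here:

```python
# Overall correction rate: corrected exercises / exercises that needed correction.
needed, corrected = 396, 299
rate = corrected / needed
print(f"{rate:.1%}")   # 75.5%
```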
H2: The participants of the experimental group liked the open education approach and felt motivated
As already discussed (H3 and H4), the participants liked the worked example videos. In summary, 76 statements from
19 different students concerning how much they liked the whole setting were counted in the interviews. Figure 11
shows the cumulative progress of the incentives (stars) every team collected during the whole experimental working
phase. As can be seen, the progress was very similar in each team. Considering this, it can be stated that the open
education approach was very motivating for most of the students. Finally, the teacher of the E-group (who was also
the teacher of the C-group) stated in the interview that, in her opinion, the students of the E-group were much more
motivated during the whole working phase than the C-group's students in the traditional setting.
Figure 11. Cumulative progress of the incentives (stars) of every team
In H1 it was shown that the open education approach, which supported self-regulated learning, learning with worked
example videos and learning in teams, leads to a significantly better learning result than the traditional method. The
experimental group (E-group) was not only better than the control group (C-group), which was instructed by the
same teacher, but also much better than the other two control groups (FC1-group and FC2-group), which were taught
by different teachers. Furthermore, H2 showed that the students of the E-group also liked working in this
experimental setting much more than the other students.
Worked example videos
H3 and H4 demonstrated that students liked working with the provided worked example videos, not only in school
but also at home. One student, for example, mentioned that he often used typical video features like pausing and
rewinding. These findings correspond with previous experiments in middle school (Kay & Edwards, 2012) and in
higher education (e.g., Copley, 2007; Hill & Nelson, 2011; Kay, 2012).
H5 pointed out that there really is a correlation between watching worked example videos and the learning result:
those students who did not watch two specific videos during the experimental phase did significantly worse on the
corresponding exercises in the post-test than those students who did watch the videos. Kay and Kletskin (2012)
obtained similar results: watching worked example videos can lead to better learning results.
Incentives, feedback and working in teams
A high level of motivation is probably the sin qua non of a good learning success (Ebner & Holzinger, 2007).
Keeping this in mind, we tried to figure out how to motivate students in a way that is (a) age-appropriate, (b) beneficial for teamwork and (c) "fun" for children around the age of 10.
As shown in H6 and H2, the proposed approach of rewarding students' efforts with stars, combined with the fact that the stars were added up per team, seems to fulfil these three requirements. H6 showed, for example, that students were more motivated to correct their mistakes when they were rewarded with additional stars for their corrections.
In addition, H2 indicates that most of the students liked working in teams – but they particularly valued the fact that they were always allowed to choose with whom to work, even across teams, or to work alone. Moreover, the students stated in the interviews that they often worked very closely in teams and that the "better" team members helped the "weaker" ones, so that they could earn a high number of stars. Finally, the teacher of the E-group said in the interview that, in her opinion, this "star-collecting" mode was very age-appropriate, since children of this age often play games (e.g., computer games) in which they have to collect objects such as stars or mushrooms to succeed.
There are, however, a few limitations to our study. First, there was no random assignment of participants to the experimental group and the control groups, since we had to assign whole classes to our groups. Secondly, the experimental group as well as the direct control group (C-group) included only male participants. In some previous research, female students outperformed male students when learning with video podcasts (e.g., Bolliger et al., 2010), while other studies observed no correlation between gender and learning performance with video podcasts (e.g., Chester et al., 2011; Kay & Kletskin, 2012). Nevertheless, gender effects might be possible for the incentive system used and the sports metaphor ("start"/"goal," etc.).
A convincing number of studies have found that learners with poor self-regulated learning (SRL) abilities tend to be less academically successful than learners with strong SRL abilities (Zimmerman, 1989; Butler & Winne, 1995; Boekaerts, 1999). Despite the theoretically clear ambition to initiate and foster SRL competencies with an open education setting, studies show that open educational settings do not always fulfil these expectations. Additionally, many studies have indicated that, especially in e-learning and hypermedia learning environments, students with poor SRL abilities often fail to achieve substantial learning performance (Hu & Gramling, 2008; Hadwin & Winne, 2001; Wang, 2011). Existing differences in SRL competencies, as well as other influences, may thus lead to undesirable differences in learning performance in open education settings.
Finally, with respect to the limited duration of this study, it is possible that the so-called "novelty effect" enhanced the motivation of the participants of the experimental group per se and, in further consequence, also influenced their learning performance. As is always the case in such field interventions, the Pygmalion (Rosenthal) effect (Rosenthal & Jacobson, 1968) and the Hawthorne effect (McCarney et al., 2007) must be considered as potential influences on, or amplifiers of, the positive results. Thus a long-term study should be considered for further research.
First and foremost, this study has clearly shown that an open education approach using worked example videos in
maths can lead to better learning performance than the traditional approach. Given the age of the participants (about ten years) and their limited experience both with self-regulated learning and with using videos for learning, this result is even more impressive – and perhaps surprising.
The design of the teaching approach as an open learning setting – in which the pupils were given a lot of freedom in how they go through and use the learning materials (videos) on the one hand, and a well-prepared, entertaining, and helpful learning setting with incentives (stars) and relatively prompt feedback (each evening) on the other – obviously fits the pupils' needs and prerequisites.
Finally, the study has corroborated that teachers should more often provide open learning environments in which their students can experience self-regulated learning and develop self-regulated learning strategies. The usage of worked example videos thus seems very promising, not only for enhancing self-regulated learning environments, but even more so for the facilitation of seamless learning.
Ally, M., Grimus, M., & Ebner, M. (2014). Preparing teachers for a mobile world, to improve access to education. Prospects,
Amiel, T., & Reeves, T. C. (2008). Design-based research and educational technology: Rethinking technology and the research
agenda. Educational Technology & Society, 11(4), 29–40.
Bloice, M., Simonic, K. M., & Holzinger, A. (2014). Casebook: A Virtual patient iPad application for teaching decision-making
through the use of electronic health records. BMC Medical Informatics and Decision Making, 14(1).
Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in
classroom settings. The Journal of the Learning Sciences, 2(2), 141-178.
Boekaerts, M. (1999). Self-regulated learning: Where we are today. International Journal of Educational Research, 31, 445-457.
Bolliger, D. U., Supanakorn, S., & Boggs, C. (2010). Impact of podcasting on student motivation in the online learning
environment. Computers & Education, 55(2), 714–722.
Boster, F. J., Meyer, G. S., Roberto, A. J., Lindsey, L., Smith, R., Inge, C., & Strom, R. E. (2007). The Impact of video streaming
on mathematics performance. Communication Education, 56(2), 134–144.
Butler, D. L. & Winne, P. H. (1995). Feedback and self-regulated learning: A Theoretical synthesis. Review of Educational
Research, 65, 245–281.
Carroll, W. M. (1994). Using worked examples as an instructional support in the algebra classroom. Journal of Educational
Psychology, 86(3), 360-367.
Chan, T. W., Roschelle, J., Hsi, S., Kinshuk, Sharples, M., Brown, T., Patton, C., Cherniavsky, J., Pea, R., Norris, C., Soloway, E.,
Balacheff, N., Scardamalia, M., Dillenbourg, P., Looi, C. K., Milrad, M., & Hoppe, U. (2006). One-to-one technology-enhanced
learning: an opportunity for global research collaboration. Research and Practice in Technology-Enhanced Learning, 1(1), 3–29.
Chester, A., Buntine, A., Hammond, K., & Atkinson, L. (2011). Podcasting in education: Student attitudes, behaviour and self-
efficacy. Educational Technology & Society, 14(2), 236–247.
Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64(1),
Collins, A. (1992). Toward a design science of education. In E. Scanlon & T. O’Shea (Eds.), New Directions in Educational
Technology (pp. 15-22). Heidelberg, Germany: Springer.
Copley, J. (2007). Audio and video podcasts of lectures for campus-based students: Production and evaluation of student use.
Innovations in Education and Teaching International, 44(4), 387–399.
Dupagne, M., Millette, D. M., & Grinfeder, K. (2009). Effectiveness of video podcast use as a revision tool. Journalism & Mass
Communication Educator, 64(1), 54–70.
Ebner, M., & Holzinger, A. (2007). Successful implementation of user-centered game based learning in higher education: An
Example from civil engineering. Computers & Education, 49(3), 873–890.
Greene, B. A., & Land, S. M. (2000). A Qualitative analysis of scaffolding use in a resource-based learning environment involving
the World Wide Web. Journal of Educational Computing Research, 23, 151–179.
Hadwin, A. F., & Winne, P. H. (2001). CoNoteS2: A Software tool for promoting self regulation. Educational Research and
Evaluation 7, 313–334.
Heilesen, S. B. (2010). What is the academic efficacy of podcasting? Computers & Education, 55(3), 1063–1068.
Hill, J. L., & Nelson, A. (2011). New technology, new pedagogy? Employing video podcasts in learning and teaching about exotic
ecosystems. Environmental Education Research, 17(3), 393–408.
Holzinger, A., Kickmeier-Rust, M. D., Wassertheurer, S., & Hessinger, M. (2009). Learning performance with interactive
simulations in medical education: Lessons learned from results of learning complex physiological models with the
HAEMOdynamics SIMulator. Computers & Education, 52, 292-301.
Hu, H., & Gramling, J. (2008). Learning strategies for success in a web-based course: A Descriptive exploration. Quarterly
Review of Distance Education, 10(2), 123–134.
Kauffman, D. F. (2004). Self-regulated learning in web-based environments: Instructional tools designed to facilitate self-
regulated learning. Journal of Educational Computing Research, 30, 139–162.
Kay, R., & Edwards, J. (2012). Examining the use of worked example video podcasts in middle school mathematics classrooms: A
Formative analysis. Canadian Journal of Learning and Technology, 38(2), 1-20.
Kay, R., & Kletskin, I. (2012). Evaluating the use of problem-based video podcasts to teach mathematics in higher education.
Computers & Education, 59, 619-627.
Kay, R. H. (2012). Exploring the use of video podcasts in education: A Comprehensive review of the literature. Computers in
Human Behavior, 28, 820-831.
Kay, R. H. (2014). Developing a framework for creating effective instructional video podcasts. International Journal of Emerging
Technologies in Learning, 9(1), 22-30.
Kazemi, F., & Ghoraishi, M. (2012). Comparison of problem-based learning approach and traditional teaching on attitude,
misconceptions and mathematics performance of university students. Procedia – Social and Behavioral Sciences, 46, 3852 –
Lopata, C., Wallace, N. V., & Finn, K. V. (2005). Comparison of academic achievement between Montessori and traditional
education programs. Journal of Research in Childhood Education, 20(1), 5-13.
McCarney, R., Warner, J., Iliffe, S., van Haselen, R., Griffin, M., & Fisher, P. (2007). The Hawthorne effect: A Randomised,
controlled trial. BMC Medical Research Methodology, 7(1), 30.
McGarr, O. (2009). A Review of podcasting in higher education: Its influence on the traditional lecture. Australasian Journal of
Educational Technology, 25(3), 309–321.
Muñoz-Cristóbal, J. A., Prieto, L. P., Asensio-Pérez, J. I., Martínez-Monés, A., Jorrín-Abellán, I. M., & Dimitriadis, Y. (2014).
Deploying learning designs across physical and web spaces: Making pervasive learning affordable for teachers. Pervasive and
Mobile Computing, 14, 31-46.
Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. The Urban Review, 3(1), 16-20.
Schmitz, B., Klemke, R., Walhout, J., & Specht, M. (2015). Attuning a mobile simulation game for school children using a
design-based research approach. Computers and Education, 81, 35-48.
Sollervall, H., Otero, N., Milrad, M., Johansson, D., & Vogel, B. (2012). Outdoor activities for the learning of mathematics:
Designing with mobile technologies for transitions across learning contexts. In Proceedings of the 2012 IEEE Seventh
International Conference on Wireless, Mobile and Ubiquitous Technology in Education (WMUTE) (pp. 33-40).
Wang, T. H. (2011). Developing Web-based assessment strategies for facilitating junior high school students to perform self-
regulated learning in an e-Learning environment. Computers and Education, 57, 1801-1812.
Wong, L. H., & Looi, C. K. (2011). What seams do we remove in mobile assisted seamless learning? A Critical review of the
literature. Computers and Education, 57, 2364–2381.
Zimmerman, B. J. (1989). A Social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81,