ISSN: 2292-8588 Volume 35, No. 1, 2020
This work is licensed under a Creative Commons Attribution 3.0 Unported License
Exploring the Creation of Instructional Videos to Improve
the Quality of Mathematical Explanations for Pre-Service
Teachers
Dr. Robin Kay and Dr. Robyn Ruttenberg-Rozen
Abstract: One of the primary skills required by mathematics teachers is the ability to provide
effective explanations to their students. Using Kay's (2014) theory-based framework for creating
instructional videos, this study explored the quality and growth of explanations embedded in
mathematical instructional videos created by 37 pre-service teachers (female = 26, male = 11).
The Instructional Video Evaluation Scale (IVES), comprised of four constructs (establishing
context, explanation heuristics, minimizing cognitive load, engagement), was used to assess the
quality of two videos (pre-feedback and post-feedback). The initial video created by pre-service
teachers (pre-feedback) revealed a number of problem areas, including providing a clear
problem label, using visual supports, noting potential errors that might occur, writing legibly,
highlighting key areas, listing key terms and formulas, being concise, and using a clear,
conversational voice. After receiving detailed feedback based on the IVES, the ratings of the
second video (post-feedback) for each of the initial problem areas increased significantly. The
IVES scale, grounded in Kay's (2014) framework, helped identify and improve the
effectiveness of pre-service teachers' explanations of mathematics concepts.
Keywords: pre-service teachers, instructional videos, mathematics teaching,
explanation
Résumé: L'une des principales compétences requises des professeurs de
mathématiques est de fournir des explications efficaces à leurs élèves. À l'aide du
cadre théorique de Kay (2014) pour la création de vidéos pédagogiques, cette
étude a exploré la qualité et la croissance des explications intégrées dans les
vidéos pédagogiques mathématiques créées par 37 enseignants stagiaires
(femmes = 26, hommes = 11). L'échelle d'évaluation de la vidéo pédagogique
(IVES), composée de quatre concepts (établissement du contexte, heuristique
d'explication, minimisation de la charge cognitive, engagement), a été utilisée
pour évaluer la qualité de deux vidéos (pré-feedback et post-feedback). La vidéo
initiale créée par les enseignants stagiaires (pré-rétroaction) a révélé un certain
nombre de domaines problématiques, notamment fournir une étiquette claire du
problème, utiliser des supports visuels, noter les erreurs potentielles qui
pourraient survenir, écrire lisiblement, mettre en évidence les domaines clés,
énumérer les termes et formules clés, être concis et utiliser une voix claire et
conversationnelle. Après avoir reçu des commentaires détaillés basés
sur l'IVES, les notes de la deuxième vidéo (post-feedback) pour chacun des
problèmes initiaux ont augmenté de manière significative. L’échelle IVES, fondée
sur le cadre de Kay (2014), a permis d’identifier et d’améliorer l’efficacité des
explications des enseignants en formation sur les concepts mathématiques.
Mots-clés: enseignants stagiaires, vidéos pédagogiques, enseignement des
mathématiques, explication
Introduction
Instructional explanations are developed to help learners understand or apply concepts
related to a specific subject area (Leinhardt, 2001). Exemplification, or providing
examples, is critical for effective mathematics explanations (Bills et al., 2006; Inoue,
2009) and helps simplify abstract mathematical concepts (Rowland, 2008). Kirschner et
al. (2006) provide substantial evidence that direct instruction through the use of well-
explained worked examples is particularly useful when students have a limited
understanding of concepts to be learned. Learning to provide clear and explicit
explanations of specific mathematical examples, then, is an essential skill to develop for
secondary school pre-service teachers (Atkinson et al., 2000). Developing those
explanation skills, especially within pre-service mathematics teacher education, is a
complex process (Kay, 2014). A teacher’s pedagogical decisions are multifaceted and
might involve highlighting mathematical elements, procedures used, choice of
technology, and type of representations. Each one of these decisions strongly influences
student learning (Kay, 2014).
Educational research on creating effective instructional explanations in mathematics
and other subject areas has been side-stepped for many years due to a heavy emphasis on
problem-based and exploratory methods (Schopf et al., 2019). Consequently,
comprehensive, evidence-based frameworks on developing explanatory competency
are limited (Kay, 2014; Schopf et al., 2019). However, a set of general guidelines has
emerged including referencing previous knowledge, providing clear objectives and
structures, demonstrating use of knowledge, presenting examples and general rules,
offering visual aids, using straightforward language, following an appropriate pace,
and creating a positive atmosphere of humour and enthusiasm (Schopf et al., 2019;
Wittwer & Renkl, 2008).
An amalgam of heuristics for creating effective mathematics explanations includes
providing an overview at the beginning, following a series of steps in logical order,
offering clear definitions, incorporating adequate visualizations, and using appropriate
language to match the learner (Schopf et al., 2019). However, a coherent framework for
designing and delivering effective mathematical explanations through technology has
yet to be developed. Furthermore, data collection on the quality of mathematics
explanations is predominantly passive and does not adequately assess the process of
explaining.
Literature Review
An alternative approach to examining and fostering high-quality mathematical
explanations is to investigate how technology can be leveraged with video-based
worked examples. With over 1.3 billion users and 5 billion videos watched each day on
YouTube (MerchDope, 2020), an argument could be made that video explanations are
becoming more relevant and dominant than face-to-face explanations. The use of videos
to explain worked examples has been examined by researchers under different labels,
including podcasts (e.g., Crippen & Earl, 2004; Kay, 2014; Loomes et al., 2002), flipped
learning (e.g., Long et al., 2016; Sahin et al., 2015; Triantafyllou & Timcenko, 2015), and
video lectures (e.g., Giannakos et al., 2015; Ljubojevic et al., 2014). A video format is
useful for critically investigating the quality of explanations because it allows both
students and instructors to carefully review, replay, compare, and reflect upon critical
elements presented.
Consistent with previous research heuristics on explanatory competence (Schopf et al.,
2019; Wittwer & Renkl, 2008), Kay (2014) developed an evidence-based framework to
guide the creation of effective video-based explanations of worked examples. This
framework includes four areas: establishing the context of the problem, explanation
heuristics, minimizing cognitive load, and engaging students. Establishing context
includes providing a clear problem label (Bransford et al., 2000), explaining what the
problem is asking (Willingham, 2009), and identifying the type of problem presented
(Ball & Bass, 2000). Providing effective explanations involves breaking a problem into
meaningful steps (e.g., Mason et al., 2010; Polya, 2004), explaining the reasoning for
each step, and using visual supports (Atkinson et al., 2000; Clark & Mayer, 2008; Renkl,
2005). Minimizing cognitive load encompasses factors such as presenting problems in a
well-organized layout, writing clearly, and drawing students' attention to key aspects
of the problem using visual highlighting (Clark & Mayer, 2008; Willingham, 2009).
Finally, engaging students while explaining worked examples refers to using a clear,
personalized voice and proceeding at a pace that is suitable for learning (not too fast,
not too slow), and minimizing distractions (Atkinson et al., 2005; Clark & Mayer, 2008;
Kester et al., 2006).
This study explored the evolution of pre-service teachers’ video-based explanations of
Grade 7 and 8 mathematical concepts using feedback provided by Kay’s (2014)
instructional framework.
Research Design and Methods
Participants
Thirty-seven pre-service teachers (female = 26, male = 11) participated in this study.
They were enrolled in a 12-week course on teaching mathematics in Grades 7 to
12 (intermediate/senior level). This course was part of a 1-year Bachelor of
Education program, situated in a small university (8,000 students) within a community
of 650,000 people. English was the second language for 32% (n = 12) of the participants.
Data Collection
The Instructional Video Evaluation Scale (IVES), based on Kay’s (2014) framework, was
used to analyze the video-based mathematical explanations of the pre-service students.
The IVES, consisting of 19 items, focuses on four constructs: establishing context (n = 3
items), creating effective explanations (n = 7 items), minimizing cognitive load (n = 4
items), and engagement (n = 5 items). Each item in the IVES was rated on a three-point
scale (0 = No, 1 = Sort of, 2 = Yes) assessing whether a pre-service teacher demonstrated
a specific explanation quality.
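To make the scoring scheme concrete, the short Python sketch below shows one way the 19-item rubric could be represented and aggregated. It is an illustrative reconstruction, not the authors' instrument: item names are paraphrased from the construct descriptions that follow, and averaging the 0-2 ratings within each construct is an assumption.

# Hypothetical sketch of the IVES structure: four constructs, 19 items,
# each rated 0 (No), 1 (Sort of), or 2 (Yes). Item names are paraphrased.
from statistics import mean

IVES_ITEMS = {
    "establishing_context": [
        "problem label", "problem type", "key elements"],
    "effective_explanations": [
        "all key steps", "clear reasoning", "mathematical conventions",
        "appropriate strategy", "tips", "visuals", "potential errors"],
    "minimizing_cognitive_load": [
        "organized layout", "readability", "highlighting", "support information"],
    "engagement": [
        "limiting distractions", "pace", "voice", "length", "tone"],
}

def construct_scores(ratings):
    """Average one video's 0-2 item ratings within each IVES construct."""
    return {construct: mean(ratings[item] for item in items)
            for construct, items in IVES_ITEMS.items()}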
The first construct, establishing context (problem label, type, key elements), had an
internal reliability coefficient of 0.77. The second construct, creating effective
explanations (all key steps, clear reasoning, mathematical conventions, appropriate
strategy, tips, visuals, potential errors), had an internal reliability coefficient of 0.85. The
third construct, minimizing cognitive load (organized layout, readability, highlighting,
support information), had an internal reliability coefficient of 0.60. The final construct,
engagement (limiting distractions, pace, voice, length, tone), had an internal reliability
coefficient of 0.69. With the exception of the cognitive load construct, the internal
reliability values are considered acceptable for scales used in social sciences (Kline,
1999; Nunnally, 1978).
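The reliability coefficients reported above are consistent with Cronbach's alpha, the standard internal-consistency statistic for scales of this kind. As a minimal, hedged illustration (the ratings below are invented, not the study's data), alpha for one construct could be computed as follows:

# Illustrative Cronbach's alpha for one IVES construct; data are invented.
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_videos, n_items) array of 0-2 ratings."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                               # items in the construct
    item_variances = x.var(axis=0, ddof=1)       # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented 0-2 ratings on the three establishing-context items (five videos).
context = [[2, 2, 1], [1, 1, 1], [2, 2, 2], [0, 1, 0], [2, 1, 2]]
print(round(cronbach_alpha(context), 2))  # approximately 0.82 for this sample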
Procedure
During a 2-hour teaching session, we introduced pre-service teachers to screencasting
software (Camtasia) and how to use the required hardware (laptop computer with a
Wacom tablet). At the end of the teaching session, we asked them to create instructional
videos covering one or more mathematics concepts in the Grade 7 or 8 Ontario
Elementary Mathematics Curriculum (Ontario Ministry of Education, 2005). We then
provided pre-service teachers with a detailed description of the criteria for each of the
19 items in the IVES to help guide the creation of their instructional mathematics
videos.
Each student created one 4- to 6-minute instructional video on a self-selected topic
within the Grade 7 or 8 Ontario Mathematics Curriculum (Ontario Ministry of
Education, 2005). These videos addressed four of the five strands in the Ontario
Mathematics Curriculum: (i) numbers and number sense (n = 10 videos), (ii) geometry
and spatial sense (n = 13 videos), (iii) patterning and algebra (n = 6 videos), and (iv) data
management and probability (n = 8 videos).
After students created the first video (pre-feedback), they received detailed feedback
based on the IVES framework. We then asked the students to create a second 4- to 6-
minute instructional video (post-feedback) on the same topic as the first.
Data Analysis
For the first video (pre-feedback), we calculated means and standard deviations for
each item on the IVES to provide an overview of pre-service teachers' initial explanation
skills. Next, we compared the percentage of pre-service teachers who fully achieved
items on the IVES to identify where students excelled and struggled with mathematical
explanations. Finally, we conducted paired t-tests for the entire IVES scale (total score),
individual IVES items, and the length of videos to determine whether the quality of
mathematics explanations changed based on feedback provided by the IVES.
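A minimal sketch of these analyses is shown below, assuming SciPy's paired t-test and a pooled-standard-deviation Cohen's d (one common formulation; the paper does not state which it used). The score arrays are invented, not the study's data.

# Illustrative paired comparison of pre- vs. post-feedback IVES scores.
import numpy as np
from scipy import stats

pre = np.array([1.2, 1.5, 1.4, 1.3, 1.6, 1.1, 1.5])   # invented per-teacher means
post = np.array([1.4, 1.6, 1.5, 1.6, 1.7, 1.3, 1.6])

t_stat, p_value = stats.ttest_rel(post, pre)           # paired t-test

# Cohen's d from the pooled standard deviation of the two conditions
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
d = (post.mean() - pre.mean()) / pooled_sd

print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")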
Research Questions
We addressed two research questions in this study:
1. What strengths and challenges do pre-service teachers demonstrate when
creating instructional video explanations of Grade 7 and 8 mathematics content?
2. How does the quality of mathematical explanations provided by pre-service
teachers change based on feedback from the IVES?
Results
Overview
A paired t-test revealed that the average IVES post-feedback score (M = 1.51, SD = 0.48)
was significantly higher than the average IVES pre-feedback score (M = 1.37, SD = 0.45),
t(36) = 2.54, p < .05, with a medium effect size of 0.29 (Cohen, 1988, 1992). In other words,
the overall quality of explanations increased significantly after pre-service teachers
received feedback based on the IVES framework. A second paired t-test indicated that
the average post-feedback mathematics instructional video (M = 234.0 seconds, SD =
73.7) was significantly longer than the average pre-feedback video (M = 352.5 seconds,
SD = 147.4), with a large effect size of 1.02 (Cohen, 1988, 1992).
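As a check, both reported effect sizes can be reproduced to within rounding from the descriptive statistics above, assuming Cohen's d is computed with a pooled standard deviation (the paper does not state which formulation was used):

\[
d = \frac{M_{\text{post}} - M_{\text{pre}}}{\sqrt{(SD_{\text{pre}}^2 + SD_{\text{post}}^2)/2}},
\qquad
d_{\text{IVES}} = \frac{1.51 - 1.37}{\sqrt{(0.45^2 + 0.48^2)/2}} \approx 0.30,
\qquad
d_{\text{length}} = \frac{352.5 - 234.0}{\sqrt{(73.7^2 + 147.4^2)/2}} \approx 1.02.
\]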
Establishing Context
The average initial (pre-feedback) context item score was 1.53 (SD = 0.56), the highest of
the four IVES themes. Seven out of ten students were able to articulate the context and
type of problem addressed in the video, and six out of ten were successful at noting the
key elements for solving the problem, but only half of the students fully achieved the
criteria of correctly labelling the problem (Table 1).
A paired t-test comparing average pre- and post-feedback total context scores was not
significant (t(36) = 0.09, ns). Paired t-tests conducted on the three individual context
items on the IVES revealed no significant differences between pre-feedback and post-
feedback scores.
Table 1
Pre- vs. Post-Feedback Video Scores for Establishing Context Items (n = 37)

Item | Pre-Feedback Mean (SD) | Item Fully Achieved | Post-Feedback Mean (SD) | Item Fully Achieved
Context and type of problem articulated | 1.59 (0.64) | 68% | 1.57 (0.65) | 65%
Key elements of problem explained | 1.54 (0.61) | 60% | 1.54 (0.69) | 65%
Clear problem label | 1.46 (0.65) | 54% | 1.51 (0.73) | 65%
Total context score | 1.53 (0.56) | NA | 1.54 (0.62) | NA
Creating Effective Explanations
The average initial (pre-feedback) explanation item score was 1.27 (SD = 0.52), the lowest
among the four IVES constructs. Approximately six out of ten students were successful
at showing and explaining all the key steps in their mathematics problems and using
the correct mathematical conventions. About half the students were able to offer a
suitable strategy or tip for solving problems. Only one quarter of the students offered
visuals to support their explanations or noted potential errors one might make when
solving the problem addressed in the
video (Table 2).
The average post-feedback total explanation score was significantly higher than the
average pre-feedback score, with a medium effect size (Table 2). Paired t-tests
comparing pre- and post-feedback scores for the seven individual explanation items on
the IVES indicated significant gains for two items: the use of visuals to support
explanations, and communicating potential errors that students could make when
trying to solve the mathematics problem (Table 2). The effect size for the use of visual
supports item is considered medium, according to Cohen (1988, 1992). The effect size
for the noting potential errors item is considered small, according to Cohen (1988, 1992).
It is worth noting that these two items were the lowest-rated explanation items on pre-
feedback videos. The remaining five explanation construct items showed no significant
increases between pre- and post-feedback scores.
Table 2
Pre- vs. Post-Feedback Video Scores for Effective Explanation Items (n = 37)

Item | Pre-Feedback Mean (SD) | Item Fully Achieved | Post-Feedback Mean (SD) | Item Fully Achieved
All key steps shown and explained | 1.59 (0.60) | 65% | 1.65 (0.59) | 70%
Reasoning explained for each step | 1.49 (0.65) | 57% | 1.62 (0.64) | 70%
Correct mathematical conventions used | 1.46 (0.69) | 57% | 1.54 (0.61) | 60%
Appropriate strategy selected | 1.41 (0.64) | 49% | 1.54 (0.65) | 62%
Tips for solving offered | 1.35 (0.75) | 51% | 1.46 (0.80) | 65%
Visuals used to support explanation | 0.89 (0.74) | 22% | 1.27 (0.87)¹ | 54%
Potential errors noted | 0.70 (0.88) | 27% | 0.92 (0.92)² | 38%
Total explanation score | 1.27 (0.52) | NA | 1.43 (0.56)³ | NA

¹ t(36) = 2.90, p < .01
² t(36) = 2.10, p < .05
³ t(36) = 2.37, p < .05
Minimizing Cognitive Load
The average initial (pre-feedback) cognitive load item score was 1.43 (SD = 0.42), the second highest among the
four IVES themes. Eight out of ten students provided a clear, organized layout for
presenting their problems. Half the students had legible writing and visually
highlighted key points in the video explanations. Only one third of the students listed
key supportive elements like key terms or formulas needed to solve the problems (Table
3).
The average post-feedback total cognitive load score was significantly higher than the
average pre-feedback score, with a medium effect size (Table 3). Paired t-tests
comparing pre- and post-feedback scores for the four cognitive load items on the IVES
revealed a significant increase in listing key supportive elements when providing a
mathematical explanation (Table 3). The effect size for this increase is considered
medium, according to Cohen (1988, 1992). It is worth noting that listing key supportive
elements was the lowest-rated cognitive load item in the pre-feedback videos (Table 3).
The remaining three cognitive load items increased, but not significantly (Table 3).
Table 3
Pre- vs. Post-Feedback Video Scores for Cognitive Load Items (n = 37)

Item | Pre-Feedback Mean (SD) | Item Fully Achieved | Post-Feedback Mean (SD) | Item Fully Achieved
Clear, organized layout of problem | 1.78 (0.48) | 81% | 1.81 (0.46) | 84%
Readability of writing | 1.43 (0.60) | 49% | 1.54 (0.69) | 65%
Visually highlighting key points | 1.41 (0.69) | 51% | 1.46 (0.77) | 62%
Listing supportive elements | 1.11 (0.74) | 32% | 1.46 (0.73)¹ | 59%
Total cognitive load | 1.43 (0.43) | NA | 1.57 (0.52)² | NA

¹ t(36) = 2.10, p < .05
² t(36) = 2.22, p < .05
Engagement
The average initial (pre-feedback) engagement item score was 1.37 (SD = 0.52), the third
highest among the four IVES themes. Six out of ten students limited distracting behaviour
(e.g., saying "uhm" too often, clearing their throat, poor sound quality) and proceeded at a pace that was
effective for learning a new concept. Half the students were successful at using a clear,
engaging conversational voice to present concepts and provide an explanation that was
not too long or too short.
The average post-feedback engagement score was significantly higher than the average
pre-feedback score, with a medium effect size (Table 4). All five individual engagement
items increased from pre- to post-feedback scores. However, paired t-tests conducted
on the five individual engagement items on the IVES revealed that these gains were not
significant (Table 4).
Table 4
Pre- vs. Post-Feedback Video Scores for Engagement Items (n = 37)

Item | Pre-Feedback Mean (SD) | Item Fully Achieved | Post-Feedback Mean (SD) | Item Fully Achieved
Limiting distractions | 1.46 (0.80) | 65% | 1.73 (0.60) | 76%
Effective pace for learning | 1.43 (0.80) | 62% | 1.57 (0.60) | 62%
Clear voice | 1.41 (0.73) | 54% | 1.57 (0.69) | 58%
Appropriate length of explanation | 1.32 (0.78) | 51% | 1.46 (0.80) | 65%
Engaging conversational voice | 1.24 (0.80) | 46% | 1.46 (0.69) | 46%
Total engagement score | 1.37 (0.52) | NA | 1.56 (0.52)¹ | NA

¹ t(36) = 2.68, p < .05
Discussion
Providing effective explanations is an essential skill for mathematics teachers (Bills et
al., 2006). In support of developing this skill, we assessed changes in video-based
explanations created by pre-service secondary teachers using four constructs:
establishing context, creating effective explanations, minimizing cognitive load, and
engagement.
Establishing Context
Regarding establishing context in their initial (pre-feedback) video explanations, most
pre-service teachers were able to communicate the type of problem presented.
However, 40% struggled with presenting the key elements required to solve the
problem. Pre-service teachers are just beginning to unpack their previously automatized
mathematical knowledge to make it useful for teaching (Ball & Forzani, 2009). The more
automatized their knowledge, the harder it is to unpack and communicate the required
steps to explain the mathematics concepts. Pre-service teachers might need more direct
guidance with elementary mathematics problems, or they may need to observe students
trying to solve these problems to better understand which elements are essential for
naïve or new learners (Santagata & Bray, 2016).
Almost half of the pre-service teachers were unable to provide a clear label for their
problem, an issue that may be related to new teachers not having an evolved schema of
how to organize and categorize problems. Without entirely unpacking their knowledge,
beginning teachers can solve mathematics problems, but may not understand the big
picture well enough to provide adequate labels or descriptions. Further research,
perhaps in the form of interviews, could be used to understand the challenges that pre-
service teachers have with establishing context.
Effective Explanations
Half of the pre-service teachers had difficulty selecting an appropriate strategy and
offering tips to solve a problem in their first video (pre-feedback). This finding
highlights the need for teacher education programs to spend more time explicitly
focusing on the connections between problem solving and making thinking explicit.
After making these connections, pre-service teachers can use that knowledge to provide
more effective explanations for their students. Pre-service teachers need to be cognizant
of the difference between solving and explaining problems.
Additionally, only 20% of pre-service teachers used visual aids to support their video-
based mathematical explanations. This result signifies a need for teacher education
programs to focus on how representations and visual aids might improve the quality of
explanations (Arcavi, 2003). After receiving direct feedback from the IVES, though, pre-
service teachers’ scores on providing visual aids increased significantly for the second
video (post-feedback). Directing attention to the value of visual aids can lead to short-
term changes.
Finally, only about one quarter of pre-service teachers were able to note potential errors in the
pre-feedback videos. This finding is in line with other research showing that pre-service
and novice teachers require explicit instruction in noticing (Mason, 2002) and
anticipating student misconceptions and errors (e.g., Lee & Francis, 2018; Son, 2013).
Again, the advanced mathematical knowledge that pre-service teachers have, especially
when solving relatively straightforward Grade 7 and 8 problems, undermines their
ability to identify potential errors because they do not make these errors and therefore
have difficulty anticipating them. Consequently, this research highlighted the continued
necessity of including mathematical error analysis to create effective explanations in
teacher education programs. Drawing attention to this weakness resulted in significant
improvements on this item in the second video (post-feedback).
Cognitive Load
For cognitive load, most pre-service teachers presented problems in a clear, organized
format. Half the pre-service teachers had legible writing and highlighted key points;
however, feedback based on the IVES scale did not significantly improve these qualities.
Future research endeavours could focus on why these two features are resistant to
change.
Writing down supportive elements such as the definition of terms or formulas proved
to be challenging for the pre-service teachers. Similar to establishing context, writing
down supportive elements is connected to pre-service teachers' unpacking of their
automatized mathematical knowledge. Our finding aligns with recent calls by Krupa et
al. (2017) for more research on supporting secondary teachers' noticing of student
thinking. Pre-service teachers could improve their explanations by identifying,
highlighting, and writing down key elements to support student thinking. Making
students aware of this problem through feedback from the IVES resulted in significant
increases in pre-service students listing supportive elements.
Engagement
Overall, pre-service students scored highest on creating engaging mathematical
explanations for their first videos (pre-feedback). Many pre-service teachers limited
distracting behaviours and explained the mathematical examples at a pace that was
neither too fast nor too slow. Using a clear and engaging conversational tone was a little
more challenging for the students. Voice may be more critical in a video than in a face-to-
face explanation, but using a more personalized tone is likely more effective, regardless
of the environment (Kay, 2014). Changing voice may be one of the more challenging
skills to develop with pre-service teachers. Finally, about half the students struggled
with creating a video that was long enough to provide sufficient detail. The
automatization of knowledge might make pre-service teachers somewhat blind to the
level of detail required for a novice learner (Ball & Forzani, 2009). However, feedback
from the IVES led students to create significantly longer post-feedback videos. It is also
worthwhile noting that although no individual item in the engagement construct
improved significantly as a result of receiving feedback from the IVES, the overall
average construct score did increase significantly. Again, pre-service students appeared
to be responsive to direct, explicit feedback on their explanation skills.
Conclusions
Overall, the IVES appeared to be a useful metric for analyzing instructional
mathematics videos created by pre-service teachers and identifying potential
opportunities for improvement in explanatory competence. Feedback based on the IVES
was significantly helpful in improving particularly weak problem areas such as
providing visual supports (representations), noting potential errors that students could
make, and listing supportive elements. However, the majority of the 19 items assessed
by the IVES did not show a significant improvement after pre-service teachers received
feedback. Developing high-quality explanation skills takes time because pre-service
teachers have to unpack their automatized mathematical knowledge (Ball & Forzani,
2009), observe and understand naïve learners (Santagata & Bray, 2016), and notice
student misconceptions and thinking (Krupa et al., 2017; Lee & Francis, 2018; Son, 2013).
This study is a first step in systematically exploring the use of videos to improve the
quality of pre-service teachers’ mathematical explanations. However, the sample was
small, and the period for examining improvements in these explanations was relatively
short. Future research might explore the progression of pre-service teachers’
explanation skills over an entire semester, year, or program to identify the rate at which
specific skills develop. In addition, interviews would help identify the source and
progression of acquiring specific explanation skills identified by the IVES. Finally,
student ratings of explanations would help validate the criteria noted in the IVES and
possibly add new essential elements to aid the development of mathematics
explanation skills.
References
Arcavi, A. (2003). The role of visual representations in the learning of mathematics. Educational
Studies in Mathematics, 52(3), 215–241. https://doi.org/10.1023/A:1024312321077
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples:
Instructional principles from the worked examples research. Review of Educational
Research, 70(2), 181–214. https://doi.org/10.3102/00346543070002181
Atkinson, R. K., Mayer, R. E., & Merrill, M. M. (2005). Fostering social agency in multimedia
learning: Examining the impact of an animated agent’s voice. Contemporary Educational
Psychology, 30(1), 117–139. https://doi.org/10.1016/j.cedpsych.2004.07.001
Ball, D. L., & Bass, H. (2000). Interweaving content and pedagogy in teaching and learning to
teach: Knowing and using mathematics. In J. Boaler (Ed.), Multiple perspectives on
mathematics of teaching and learning (pp. 83–104). Ablex Publishing.
Ball, D. L., & Forzani, F. M. (2009). The work of teaching and the challenge for teacher
education. Journal of Teacher Education, 60(5), 497–511.
https://doi.org/10.1177/0022487109348479
Bills, L., Dreyfus, T., Mason, J., Tsamir, P., Watson, A., & Zaslavsky, O. (2006). Exemplification
in mathematics education. In J. Novotná, H. Moraová, M. Krátká, & N. Stehlíková (Eds.),
Proceedings of the 30th Conference of the International Group for the Psychology of Mathematics
Education (Vol. 1, pp. 126–154). Charles University.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience,
and school (Expanded ed.). National Academy Press.
Clark, R. C., & Mayer, R. E. (2008). e-Learning and the science of instruction. Pfeiffer.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Academic Press.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.
https://doi.org/10.1037//0033-2909.112.1.155
Crippen, K. J., & Earl, B. L. (2004). Considering the efficacy of web-based worked examples in
introductory chemistry. Journal of Computers in Mathematics and Science Teaching, 23(2),
151–167. https://www.learntechlib.org/primary/p/12876/
Giannakos, M. N., Jaccheri, L., & Krogstie, J. (2015). Exploring the relationship between video
lecture usage patterns and students’ attitudes. British Journal of Educational Technology,
47(6), 1259–1275. https://doi.org/10.1111/bjet.12313
Inoue, N. (2009). Rehearsing to teach: Content-specific deconstruction of instructional
explanations in pre-service teacher training. Journal of Education for Teaching, 35(1), 47–60.
https://doi.org/10.1080/02607470802587137
Kay, R. H. (2014). Developing a framework for creating effective instructional video
podcasts. International Journal of Emerging Technologies in Learning, 9(1), 22–30.
http://dx.doi.org/10.3991/ijet.v9i1.3335
Kester, L., Lehnen, C., Van Gerven, P. W., & Kirschner, P. A. (2006). Just-in-time, schematic
supportive information presentation during cognitive skill acquisition. Computers in
Human Behavior, 22(1), 93–112. https://doi.org/10.1016/j.chb.2005.01.008
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction
does not work: An analysis of the failure of constructivist, discovery, problem-based,
experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.
https://doi.org/10.1207/s15326985ep4102_1
Kline, P. (1999). The handbook of psychological testing (2nd ed.). Routledge.
Krupa, E. E., Huey, M., Lesseig, K., Casey, S., & Monson, D. (2017). Investigating secondary
preservice teacher noticing of students’ mathematical thinking. In E. O. Schack, M. H.
Fisher, & J. Wilhelm (Eds.), Research in mathematics education (Vol. 6, pp. 49–72). Springer
International Publishing.
Lee, M. Y., & Francis, D. C. (2018). Investigating the relationships among elementary teachers’
perceptions of the use of students’ thinking, their professional noticing skills, and their
teaching practices. The Journal of Mathematical Behavior, 51, 118–128.
https://doi.org/10.1016/j.jmathb.2017.11.007
Leinhardt, G. (2001). Instructional explanations: A commonplace for teaching and location for
contrast. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 333–357).
American Educational Research Association.
Ljubojevic, M., Vaskovic, V., Stankovic, S., & Vaskovic, J. (2014). Using supplementary video in
multimedia instruction as a teaching tool to increase efficiency of learning and quality of
experience. International Review of Research in Open and Distributed Learning, 15(3), 275–
291. https://doi.org/10.19173/irrodl.v15i3.1825
Loomes, M., Shafarenko, A., & Loomes, M. (2002). Teaching mathematical explanation through
audiographic technology. Computers & Education, 38(1–3), 137–149.
https://doi.org/10.1016/S0360-1315(01)00083-5
Long, T., Logan, J., & Waugh, M. (2016). Students’ perceptions of the value of using videos as a
pre-class learning experience in the flipped classroom. TechTrends, 60(3), 245–252.
http://dx.doi.org/10.1007/s11528-016-0045-4
Mason, J. (2002). Researching your own practice: The discipline of noticing. Routledge-Falmer.
Mason, J., Burton, L., & Stacey, K. (2010). Thinking mathematically (2nd ed.). Pearson.
MerchDope. (2020, February 26). 37 mind blowing YouTube facts, figures and statistics – 2020.
https://merchdope.com/youtube-stats
Nunnally, J. C. (1978). Psychometric theory. McGraw-Hill.
Ontario Ministry of Education. (2005). The Ontario curriculum, grades 1–8: Mathematics (revised).
Queen's Printer for Ontario.
Polya, G. (2004). How to solve it: A new aspect of mathematical method (Expanded Princeton Science
Library ed.). Princeton University Press.
Renkl, A. (2005). The worked-out examples principle in multimedia learning. In R. E. Mayer
(Ed.), The Cambridge handbook of multimedia learning (pp. 229–245). Cambridge University
Press.
Rowland, T. (2008). The purpose, design and use of examples in the teaching of elementary
mathematics. Educational Studies in Mathematics, 69(2), 149–163.
https://doi.org/10.1007/s10649-008-9148-y
Sahin, A., Cavlazoglu, B., & Zeytuncu, Y. E. (2015). Flipping a college calculus course: A case
study. Educational Technology & Society, 18(3), 142–152.
https://www.jstor.org/stable/jeductechsoci.18.3.142
Santagata, R., & Bray, W. (2016). Professional development processes that promote teacher
change: The case of a video-based program focused on leveraging students
mathematical errors. Professional Development in Education, 42(4), 547–568.
https://doi.org/10.1080/19415257.2015.1082076
Schopf, C., Raso, A., & Kahr, M. (2019). How to give effective explanations: Guidelines for
business education, discussion of their scope and their application to teaching
operations research. RISTAL, 2, 32–50. https://doi.org/10.23770/rt1823
Son, J.-W. (2013). How preservice teachers interpret and respond to student errors: Ratio and
proportion in similar rectangles. Educational Studies in Mathematics, 84(1), 49–70.
https://www.jstor.org/stable/43589772
Triantafyllou, E., & Timcenko, O. (2015). Student perceptions on learning with online resources in a
flipped mathematics classroom [Paper presentation]. 9th Congress of European Research in
Mathematics Education (CERME9), Prague, Czech Republic.
https://www.researchgate.net/publication/277016310_Student_perceptions_on_learning_
with_online_resources_in_a_flipped_mathematics_classroom
Willingham, D. T. (2009). Why don’t students like school? A cognitive scientist answers questions
about how the mind works and what it means for the classroom. John Wiley & Sons.
Wittwer, J., & Renkl, A. (2008). Why instructional explanations often do not work: A framework
for understanding the effectiveness of instructional explanations. Educational
Psychologist, 43(1), 49–64. https://doi.org/10.1080/00461520701756420
Authors
Dr. Robin Kay is currently a full professor and dean in the Faculty of Education at Ontario Tech
University in Oshawa, Ontario, Canada. He has published over 160 articles, chapters, and
conference papers in the area of technology in education, is a reviewer for five prominent
computer education journals, and has taught in the fields of computer science, mathematics,
and educational technology for over 25 years at the high school, college, undergraduate, and
graduate levels. Current projects include research on laptop use in higher education, BYOD in
K-12 education, web-based learning tools, e-learning and blended learning in secondary and
higher education, video podcasts, scale development, emotions and the use of computers, the
impact of social media tools in education, and factors that influence how students learn with
technology. Dr. Kay received his MA in Computer Applications in Education and his PhD in
Cognitive Science (Educational Psychology) at the University of Toronto. Email:
Robin.Kay@uoit.ca
Dr. Robyn Ruttenberg-Rozen is an assistant professor of STEAM education and graduate
program director in the Faculty of Education at Ontario Tech University in Oshawa, Ontario,
Canada. She explores pedagogical practices and current discourses in STEAM education around
typically underserved, linguistically and culturally diverse, and exceptional populations of
learners and their teachers. At the centre of her research is the study of change, innovation, and
access in pedagogical spaces (virtual and face-to-face) with a focus on strategies and
interventions. Her graduate and undergraduate teaching includes courses in integrated STEAM
learning, mathematics methods and content, qualitative research methods, curriculum theory,
and theories of learning. Email: Robyn.Ruttenberg-Rozen@uoit.ca
Article
This study examined processes at the core of teacher professional development (PD) experiences that might positively impact teacher learning and more specifically teacher change. Four processes were considered in the context of a PD program focused on student mathematical errors: analysis of students’ mathematical misconceptions as a lever for changing practice; review and discussion of images of existing and innovative teaching practices; video-enhanced reflection and detailing of new practices; and cycles of examination/implementation and feedback/revision. Participants were four US elementary school teachers, whose engagement in the PD program was studied through video observations and interviews. Findings revealed several positive shifts in teachers’ interests and classroom practices. Challenges involved teacher content knowledge and counteracting cultural beliefs and practices on the role of errors in learning. The conclusions discuss implications for PD scale-up. © 2015 International Professional Development Association (IPDA)