Content uploaded by Vu Minh Chieu
Author content
All content in this area was uploaded by Vu Minh Chieu on Apr 01, 2017
Content may be subject to copyright.
Can a Teaching Simulation Predict Novice
and Expert Teachers’ Decision-Making?
Vu Minh Chieu, Nicolas Boileau, Mollee Shultz, Patricio Herbst, and Amanda Milewski
University of Michigan - School of Education
610 East University Avenue, Ann Arbor, MI 48109-1259
vmchieu@umich.edu, nboilea@umich.edu, mollee@umich.edu, pgherbst@umich.edu, amilewsk@umich.edu
Abstract: The primary goal of this paper is to investigate whether a computer-based simulation
can detect the difference between novice and expert teachers’ decision-making in mathematics
instruction, which is complex in nature. The design of the simulation is grounded in a
sociological perspective on practical rationality of mathematics teaching. The simulation consists
of classroom scenarios, in the form of cartoon-based storyboards, with a series of decision
moments to simulate the instructional situation of doing proofs in geometry. Empirical data
helped us verify and revise our design hypotheses and principles and showed that the simulation was
able to detect some differences between novice-teacher and expert-teacher decision-making.
Results of this study could inform the development of more advanced, computational models of
mathematics teachers’ decision-making.
Introduction
Teaching is complex in nature (Grossman et al., 2009; Lampert, 2010; Leinhardt & Ohlsson,
1990), because the teacher must attend to and interpret many simultaneous events, and
orchestrate various types of resources to make moment-to-moment teaching decisions. Chieu
and Herbst (2011) define a teaching simulation as a virtual environment that simulates the
practice of teaching, reducing some complexities but instantiating others. A teaching simulation
must enable practice from a first-person perspective. It could be used to assess a teacher
practitioner's knowledge state and to support practice-based learning. For example, Figure 1
illustrates the architecture of SimTeach, which can be described as follows: The apprentice
interacts with the Simulation User Interface, which captures their input (e.g., what action of
teaching to do next). The Apprentice Diagnosis and Modeling component analyzes that input
and updates the apprentice's current state of knowledge, using the Teaching Expertise
component. Then, the Educational Feedback component searches the Teaching Expertise
component for relevant feedback, on the basis of that knowledge state. Finally, the
feedback is provided to the apprentice through the Simulation User Interface, and they continue
to play the simulation. This paper focuses on the assessment role of the simulation.
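The loop just described (capture input, diagnose, search for feedback, present it) can be sketched in code. The sketch below is a hypothetical illustration of that architecture, not the actual SimTeach implementation; all class, function, and action names are assumptions made for the example.

```python
# Hypothetical sketch of the SimTeach feedback loop described above;
# the names here are illustrative, not the actual implementation.
from dataclasses import dataclass, field

# A toy "Teaching Expertise" component: maps teaching actions to an
# assessment of how productive they are.
EXPERTISE = {"ask_for_reason": "productive", "supply_answer": "less_productive"}

@dataclass
class ApprenticeModel:
    """Apprentice Diagnosis and Modeling: tracks the estimated knowledge state."""
    history: dict = field(default_factory=dict)

    def update(self, action: str) -> None:
        # Diagnose the chosen action against the expertise component.
        self.history[action] = EXPERTISE.get(action, "unknown")

def educational_feedback(model: ApprenticeModel) -> str:
    """Search the expertise base for feedback based on the current state."""
    productive = sum(1 for tag in model.history.values() if tag == "productive")
    return f"{productive} of {len(model.history)} moves matched expert practice."

def simulation_step(model: ApprenticeModel, action: str) -> str:
    """One pass through the loop: capture input, diagnose, return feedback."""
    model.update(action)
    return educational_feedback(model)

model = ApprenticeModel()
print(simulation_step(model, "ask_for_reason"))
```

In a real system, the feedback search and the apprentice model would of course be far richer; the point of the sketch is only the cycle of input, diagnosis, and feedback through the user interface.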
Existing teaching simulations (e.g., Dieker et al., 2013; Gibson, Aldrich, & Prensky, 2006;
Girod & Girod, 2008) have only demonstrated that they can enable and assess the practice of
basic teaching and classroom management skills, connecting general theoretical principles with
classroom practices (e.g., monitoring students’ behaviors). The development of SimTeach is
strongly grounded in recent research on mathematics teaching knowledge. Such development,
however, is very difficult because of the complexity of teaching expertise. Thus, a reasonable
first step, and the primary goal of this paper, is to investigate whether a preliminary version of
SimTeach can detect the difference between novice- and expert-teacher decision-making,
before further development is done to diagnose more fine-grained knowledge states (e.g.,
whether a teacher has mastered a particular piece of knowledge).
Figure 1. Architecture of SimTeach, an intelligent teaching simulation (adapted from Luengo et
al., 2007 and Chieu & Herbst, 2011).
Theoretical Framework
The design of the first version of SimTeach is grounded in Herbst and Chazan's (2011, 2012,
2015) account of the practical rationality of mathematics teaching. According to that account,
mathematics teachers' instructional decisions can be understood as requiring mathematical
knowledge for teaching (MKT; Ball, Thames, & Phelps, 2008), as based on their beliefs and
instructional goals (Schoenfeld, 2012), and as regulated by instructional norms (i.e., routine and
tacitly-expected ways of working on routine mathematical tasks) and professional obligations (to
their students, the class, the institutions that they work in, and to the discipline of mathematics),
in an instructional situation: a recurrent segment of classroom interaction in which students
work on routine mathematical tasks. Many of these norms are specific to the instructional
situation that the class is in (i.e., to the type of task assigned), and they regulate how student
work is exchanged for the teacher's claims about the extent to which students have acquired the
knowledge and/or skills that the task was assigned to assess. See Figure 2 for examples of
norms of the situation of doing proofs.
Although these norms are default ways of behaving in instructional situations, they are
sometimes breached. Herbst and Chazan (2012) include knowledge, beliefs, and professional
obligations in their model of teacher decision-making as possible sources of justification for such
breaches. Below, we explain how these ideas influenced our hypotheses about how
instructional decisions of novice and expert teachers may differ and how we design the
simulation to test those hypotheses.
● The students identify the reason for each statement after it is made.
● The justification of a statement needs to be a previously studied theorem, definition,
postulate, or the given.
● Each of the reasons is stated in a conceptual register.
● After a statement in a proof is made and before the next statement is made, a reason
for the first statement is needed.
● The duration of the proof production is gauged in terms of the number of steps.
● Every single statement or reason is produced in a handful of seconds.
Figure 2. Examples of norms that regulate behavior in the instructional situation of doing proofs
in geometry, taken from a longer list provided by Herbst, Chen, Weiss, & González (2009); see
also Nachlieli and Herbst (2009).
A Preliminary Version of SimTeach
In this study, we explore the hypothesis that novice teachers and expert teachers differ in
terms of their behavior towards norms and professional obligations. Specifically, we
conjecture that novice teachers would comply with instructional norms more often than
expert teachers.
To test the above hypothesis, we designed a computer-based teaching simulation in
which participants are asked to play the role of the teacher in a high school geometry class
working on a proof problem, represented using a cartoon-based storyboard. The scenario
consists of parts that the participant is simply asked to view, which we will henceforth refer to as
stems, and parts where participants are asked to indicate what they would do next, which we
will refer to as decision points. The first step in designing this scenario was selecting a proof
problem for the class to work on. We chose what we hypothesized to be a normative proof
problem, chosen from commonly-used high school geometry resources, in order to cue
participants to the instructional situation. This was important because, if we chose a task that
most participants would perceive as novel (i.e., non-normative), they might breach norms that
they would not normally, simply because they would not have their common expectations about
how work on such proof problems should unfold. The next step in the design was to write a
story that we thought would comply with (at least) experienced high school geometry teachers’
expectations for how work on that problem might unfold. We did this by having the teacher
comply with all of the instructional norms described in Herbst, Chen, Weiss, & González (2009);
Figure 3 shows its beginning. Then, we identified points in that story where we hypothesized an
expert teacher might see reason to breach a norm (e.g., in order to comply with one of their
professional obligations) and mapped out what alternative decisions might be made at those
points (see Figure 4). At each decision point, we created four close-ended options: two options
that we hypothesized represent less productive moves, which, by the above conjecture, novice
teachers would be more likely to choose (option 1, shown in Figure 5, and option 2), and two
options that we hypothesized represent more productive moves, which, again by the above
conjecture, expert teachers would be more likely to choose (option 3 and option 4, the latter
shown in Figure 6). Last, we imagined how students would react to each of those four moves
(at each decision point) and used those reactions to design branches of the scenario leading
to other decision points. The preliminary version of the simulation included 20 decision points,
although a participant would be presented with only a subset of these each time they played the
simulation, depending on the choices that they made.
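As a rough sketch of how such a branching scenario might be represented, the structure below pairs each decision point with a stem, four close-ended options tagged as less or more productive, and branches keyed by the chosen option. The decision-point identifiers and option texts are hypothetical; the real scenario's content differs.

```python
# Hypothetical branching-scenario structure: stems, decision points with
# four close-ended options, and branches keyed by the chosen option.
SCENARIO = {
    "dp1": {
        "stem": "Students have written the first statement of the proof.",
        "options": {
            1: {"text": "Supply the reason yourself", "productive": False},
            2: {"text": "Move on to the next statement", "productive": False},
            3: {"text": "Ask the class to justify the statement", "productive": True},
            4: {"text": "Ask a student to restate and justify", "productive": True},
        },
        # Different options lead to different continuations of the story.
        "branches": {1: "dp2a", 2: "dp2a", 3: "dp2b", 4: "dp2b"},
    },
    # ... further decision points would follow the same shape ...
}

def play(scenario: dict, start: str, choices: list) -> list:
    """Traverse the branching scenario, returning the visited decision points."""
    visited, current = [], start
    for choice in choices:
        visited.append(current)
        node = scenario.get(current)
        if node is None:
            break  # reached a decision point not defined in this sketch
        current = node["branches"].get(choice)
    return visited

print(play(SCENARIO, "dp1", [3, 1]))
```

Because each choice selects a branch, different participants visit different subsets of the 20 decision points, which is why some points were rarely visited in the pilot sessions described below.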
Figure 3. Beginning of the simulation.
To provide us with insights into our design of the simulation (e.g., whether the simulation,
including the stems and the decision options, can capture essential aspects of teaching
practice), we selected four expert teachers from four different local high schools and one
pre-service teacher (PST) at our institution. We conducted a separate study session with each
of the four experts. In each session, we asked the expert to complete at least two different branching
scenarios of the simulation. The PST sat next to the expert to facilitate the sharing of their
thinking. Before choosing a close-ended option at each decision point, the expert was asked to
explain and justify to the PST what they would do next. While they were making a choice, the
PST asked them to explain their analysis of the teaching event and the justification of their
choice. Because of the branching nature of the simulation, several decision points were not
visited by any participant. We therefore used a similar protocol to invite two other expert
teachers to navigate through the simulation, guiding them to those specific decision points and
interviewing them about the design of the decision options at each of those points.
Figure 4. The first decision point of the simulation.
Figure 5. Option 1, which we hypothesized as less productive, for the first decision point.
Figure 6. Option 4, which we hypothesized as more productive, for the first decision point.
Then, we used a constant comparative method (Glaser & Strauss, 1967; Fram, 2013;
Morse et al., 2009) to analyze videos and screencasts that captured the interaction between
each expert and the simulation and the PST. We used an iterative analysis process to look for
the similarities and differences of patterns of decision-making across decision points, scenarios,
and participants. This analysis showed that the simulation is able to capture many aspects of
the teaching practice of the expert teachers described earlier. The analysis also helped us
revise the characterization of the options (i.e., as less productive or more productive) at three
decision points. A demo of the preliminary version of SimTeach is available at
https://www.lessonsketch.org/viewer.php?e=knqcMj9B0fS.
Methods
Research Questions
Our research questions, related to our hypotheses about how one might distinguish the
instructional decisions of novice and expert teachers stated above, are the following:
1. What differences exist between the patterns of novice teachers' and expert teachers'
responses at decision points?
2. Are these patterns consistent with our hypothesis (i.e., that novice teachers tend to
choose less productive options more often than expert teachers do)? In other words, can
novice/expert status predict performance in the simulation?
Procedure
To investigate the main hypothesis described above, we selected 30 novice teachers and 30
expert teachers from a national sample of 341 secondary mathematics teachers who had
previously completed a background questionnaire and MKT assessment (Herbst et al., 2017),
which we used to determine these two groups ("novice teachers" were teachers with less than
six years of geometry teaching experience and low MKT scores; "expert teachers" were
teachers with more than five years of geometry teaching experience and high MKT scores). We
asked them to complete four different branching scenarios of the simulation so that they would
go through most of the decision points we designed.
Data Analysis
We used a linear regression model to investigate whether teachers' expertise could predict their
choice to make what we hypothesized to be more productive (or less productive) decisions. To
do this, we dichotomized the variable representing the choice made at each decision point as
0 = less productive move and 1 = more productive move. We then calculated the proportion of
more productive choices made by each participant, which we call the performance score in the
simulation. Finally, we regressed that performance score on the dichotomous variable indicating
whether the participant was an expert teacher or a novice teacher.
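The scoring and regression just described can be sketched with synthetic data. The numbers below are purely illustrative (the real analysis used participants' actual choices), and the sketch exploits the fact that with a single binary predictor, the ordinary-least-squares slope equals the difference in group means.

```python
# Illustrative sketch of the performance score and regression; the data here
# are synthetic, not the study's data.
import random

random.seed(0)

def performance_score(choices: list) -> float:
    """Proportion of choices coded 1 (more productive) for one participant."""
    return sum(choices) / len(choices)

# Synthetic data: 30 novices and 30 experts, each with 13 dichotomized choices
# (0 = less productive, 1 = more productive). Experts are given a higher
# chance of a more productive choice, mirroring the hypothesis.
novices = [[int(random.random() < 0.45) for _ in range(13)] for _ in range(30)]
experts = [[int(random.random() < 0.65) for _ in range(13)] for _ in range(30)]

novice_scores = [performance_score(c) for c in novices]
expert_scores = [performance_score(c) for c in experts]

# With a single binary predictor (expert = 1, novice = 0), the OLS slope
# equals the difference in group mean scores: the estimated expert effect.
effect = sum(expert_scores) / len(expert_scores) - sum(novice_scores) / len(novice_scores)
print(f"estimated expert effect on performance score: {effect:.3f}")
```

A full analysis would additionally report a standard error and p-value for the slope, as the study does.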
Results and Discussion
Because certain options at certain decision points were rarely chosen, only 13 of the 20
decision points were visited by enough participants to statistically compare the differences
between the novice teachers' and expert teachers' choices. The patterns of participants' choices
at 10 of those 13 decision points (about 77%) were somewhat similar to the one illustrated in
Figure 7, which shows that choices varied for both novice teachers and expert teachers. This is
predictable, as the literature points out that teachers' decision-making is influenced by many
attributes (e.g., Schoenfeld, 2012; Westerman, 2010). Yet, there was an important pattern
consistent with the hypothesis stated above: novice teachers were more likely to choose
less productive options than expert teachers were; in other words, expert teachers were
more likely to choose more productive options than novice teachers were. To test that
hypothesis further, we also ran a linear regression model to investigate the relationship between
participants' overall performance score in the simulation and their expert/novice status. The
model indicated that expert/novice status positively predicted participants' performance score
(effect size = 0.09, p < 0.001; see also Figure 8).
Figure 7. Distribution of choices (1, 2, 3, 4, from left to right) at a decision point in the middle of
the simulation (the upper part represents novice teachers' responses and the lower part expert
teachers').
Figure 8. Difference between novice teachers’ and expert teachers’ performance score.
This study, however, is still preliminary because it has not looked into the differences
between the chains of decisions that novice teachers and expert teachers made across a whole
scenario. For example, in subsequent exploratory analysis, we found significant correlations
between expert teachers' choices at some decision points and their choices at the preceding
ones (p < 0.05). Thus, it would be useful to model the temporal dimension of teachers'
decision-making, for instance by using temporal/dynamic Bayesian networks (see Russell &
Norvig, 2009), to detect whether there are differences in the chains of decisions made by
novice teachers and expert teachers.
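A very first step toward such temporal modeling could be to estimate first-order transition probabilities between less productive (0) and more productive (1) choices from each group's decision chains, before committing to a full dynamic Bayesian network. The chains below are illustrative, not study data.

```python
# Sketch: first-order transition probabilities between dichotomized choices
# (0 = less productive, 1 = more productive), pooled across decision chains.
# The chains are made-up examples, not the study's data.
from collections import Counter

def transition_probs(chains: list) -> dict:
    """Estimate P(next choice | previous choice) from a list of chains."""
    counts = Counter()
    for chain in chains:
        # Count each consecutive (previous, next) pair in the chain.
        counts.update(zip(chain, chain[1:]))
    totals = Counter()
    for (prev, _), n in counts.items():
        totals[prev] += n
    # Normalize counts by the number of transitions out of each state.
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

# Illustrative chains: experts tend to stay in productive moves, novices do not.
expert_chains = [[1, 1, 1, 0, 1], [1, 0, 1, 1]]
novice_chains = [[0, 0, 1, 0], [0, 1, 0, 0]]

print(transition_probs(expert_chains))
print(transition_probs(novice_chains))
```

Comparing the two groups' transition matrices (e.g., the probability of following a productive move with another productive move) would be a simple way to quantify differences in decision chains before fitting a richer temporal model.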
Scientific Significance of the Study
This paper shows evidence that a simulation can be used to test and refine hypotheses about
differences between novice teachers’ and expert teachers’ decision-making in mathematics
instruction, from a sociological perspective on practical rationality (Herbst & Chazan, 2011,
2012, 2015). Results of this study could inform the development of more advanced,
computational models of mathematics teachers’ decision-making.
Acknowledgments
The work reported in this paper is supported by NSF grant DRL-1420102 to Patricio Herbst and
Vu Minh Chieu. Opinions expressed here are the sole responsibility of the authors and do not
necessarily reflect the views of the Foundation.
SimTeach has been designed and developed using resources and tools of LessonSketch
(https://www.lessonsketch.org), a multimedia platform that enables teachers and other human
service professionals to represent, examine, share, and discuss their own practices, as well as
the practices of other members of their profession.
References
Ball, D. L., Thames, M., & Phelps, G. (2008). Content knowledge for teaching: What makes it
special? Journal of Teacher Education, 59(5), 389–407.
Chieu, V. M., & Herbst, P. (2011). Designing an intelligent teaching simulator for learning by
practicing in the practice of mathematics teaching. ZDM: The International Journal of
Mathematics Education, 43, 105–117.
Dieker, L. A., Rodriguez, J. A., Lignugaris, B., Hynes, M. C., & Hughes, C. E. (2013). The
potential of simulated environments in teacher education: Current and future possibilities.
Teacher Education and Special Education: The Journal of the Teacher Education Division of
the Council for Exceptional Children. DOI: 10.1177/0888406413512683.
Fram, S. (2013). The constant comparative analysis method outside of grounded theory. The
Qualitative Report, 18, 1–25.
Gibson, D., Aldrich, C., & Prensky, M. (Eds.) (2006). Games and simulations in online learning.
Hershey, PA: Ideas Group.
Girod, M., & Girod, G. R. (2008). Simulation and the need for practice in teacher preparation.
Journal of Technology and Teacher Education, 16(3), 307–337.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for
qualitative research. New York, NY: Aldine Publishing Company.
Grossman, P., Compton, C., Igra, D., Ronfeldt, M., Shahan, E., & Williamson, P. (2009).
Teaching practice: A cross-professional perspective. Teachers College Record, 111(9),
2055–2100.
Herbst, P., Chazan, D., Dimmel, J. K., Erickson, A., Boileau, N., & Inah, K. (2017, April).
Understanding the rationality of teaching using multimedia questionnaires. Paper to be
presented at the Research Pre-session of the Annual Meeting of the National Council of
Teachers of Mathematics, San Antonio, TX.
Herbst, P., & Chazan, D. (2015). Using multimedia scenarios delivered online to study
professional knowledge use in practice. International Journal of Research and Method in
Education, 38(3), 272–287.
Herbst, P., & Chazan, D. (2012). On the instructional triangle and sources of justification for
actions in mathematics teaching. ZDM: The International Journal of Mathematics Education,
44(5), 601–612.
Herbst, P., & Chazan, D. (2011). Research on practical rationality: Studying the justification of
actions in mathematics teaching. The Mathematics Enthusiast, 8(3), 405–462.
Herbst, P., Chen, C., Weiss, M., & González, G., with Nachlieli, T., Hamlin, M., & Brach, C.
(2009). "Doing proofs" in geometry classrooms. In M. Blanton, D. Stylianou, & E. Knuth
(Eds.), Teaching and learning of proof across the grades: A K-16 perspective (pp. 250–268).
New York, NY: Routledge.
Lampert, M. (2010). Learning teaching in, from, and for practice: What do we mean? Journal of
Teacher Education, 61(1–2), 21–34.
Leinhardt, G., & Ohlsson, S. (1990). Tutorials on the structure of tutoring from teachers. Journal
of Artificial Intelligence in Education, 2, 21–46.
Luengo, V., Mufti-Alchawafa, D., & Vadcard, L. (2007). Design of adaptive surgery learning
environment with Bayesian network. Proceedings of the 2007 International Technology,
Education and Development Conference (CD publication, ISBN: 978-84-611-4517-1).
Valencia, Spain: International Association of Technology, Education and Development.
Morse, J. M., Stern, P. N., Corbin, J., Bowers, B., ... & Clarke, A. E. (2009). Developing
grounded theory: The second generation. Routledge.
Nachlieli, T., & Herbst, P., with González, G. (2009). Seeing a colleague encourage a student to
make an assumption while proving: What teachers put to play in casting an episode of
geometry instruction. Journal for Research in Mathematics Education, 40(4), 427–459.
Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper
Saddle River, NJ: Prentice Hall.
Schoenfeld, A. H. (2012). How we think: A theory of human decision-making, with a focus on
teaching. In S. J. Cho (Ed.), The Proceedings of the 12th International Congress on
Mathematical Education, Seoul, Korea. DOI: 10.1007/978-3-319-12688-3_16.
Westerman, D. A. (2010). Expert and novice teacher decision making. Journal of Teacher
Education, 42(4), 292–305.