Logging Students’ Model-Based Learning and Inquiry Skills in Science
Janice Gobert, Paul Horwitz, Barbara Buckley, Amie Mansfield, Edmund Burke, & Dimitry Markman
The Concord Consortium
10 Concord Crossing, Suite 300, Concord, MA 01742
jgobert@concord.org, paul@concord.org, bbuckley@concord.org, eburke@concord.org, amansfield@concord.org,
dmarkman@concord.org
Overview of the project
Recent advances in educational technology have allowed
us to capture students’ learning at a finer-grained level
than previously possible. This has important implications for
Cognitive Science, because it informs us about students’ learning
processes at a much deeper level, and for Education,
because students’ log files can be used to provide
formative assessments to teachers so that they can make
curricular and pedagogical decisions in real time.
We describe the Modeling Across the Curriculum Project
(mac.concord.org; IERI # 0115699), a project in which we
developed and refined Pedagogica™ (Horwitz & Burke,
2002), a runtime and authoring environment, as well as
three interactive curricular units for high school science in
Genetics, Newtonian Mechanics, and Gas Laws.
Pedagogica™ provides runtime support for scripting of Java
components such as general-purpose software tools,
manipulable models, and assessments that pose challenges
to students; Pedagogica™ also monitors what students do
with the models and tools. The scripts enable a researcher or
curriculum developer to control all aspects of the learner’s
interaction with software tools by changing the nature of the
scaffolding and the assessments provided (Gobert et al.,
2004). Our curricular activities for Biology (Genetics),
Physics (Newtonian Mechanics), and Chemistry (Gas Laws)
combine aspects of open-ended exploratory tools with
content-specific text and multimedia, a combination we call
a “hypermodel” (Horwitz, 1995).
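To make the scripting idea concrete, the minimal Java sketch below shows the kind of event-driven logic such a script might express: logging every model interaction and adapting the scaffolding when a student reruns a model without varying its inputs. All names here (ModelEvent, ScaffoldListener, showHint) are illustrative assumptions for exposition, not Pedagogica™’s actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class ScaffoldingSketch {

    // Minimal stand-in for an event a scripted model component might emit.
    record ModelEvent(String type, String detail, long timestampMs) {}

    // Logs every event and intervenes when a student reruns the model
    // repeatedly without changing any inputs.
    static class ScaffoldListener {
        private final List<ModelEvent> log = new ArrayList<>();
        private int unproductiveRuns = 0;

        void onEvent(ModelEvent e) {
            log.add(e); // every interaction is recorded for later extraction
            if (e.type().equals("RUN_MODEL") && e.detail().equals("no-change")) {
                unproductiveRuns++;
                if (unproductiveRuns >= 3) {
                    showHint("Try changing one variable at a time before rerunning.");
                    unproductiveRuns = 0;
                }
            } else {
                unproductiveRuns = 0; // any other action resets the counter
            }
        }

        void showHint(String text) {
            System.out.println("HINT: " + text); // a real script would update the UI
        }

        int eventCount() { return log.size(); }
    }

    public static void main(String[] args) {
        ScaffoldListener listener = new ScaffoldListener();
        for (int i = 0; i < 3; i++) {
            listener.onEvent(new ModelEvent("RUN_MODEL", "no-change", i * 1000L));
        }
        System.out.println("Logged events: " + listener.eventCount());
    }
}
```

The point of the sketch is the separation it illustrates: the model component only emits events, while the script decides, at runtime, what counts as unproductive behavior and what scaffolding to offer.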
The main goals of the MAC project are to promote students’
content knowledge and model-based inquiry skills. Our data
extraction tools, when applied to the log files, afford us a
very detailed view of the students’ interactions with our
models. Some of the general types of data extracted are: time
on screen, model interaction time, inputs to models (e.g.,
numbers entered into variables in Dynamica, or parents or
gametes chosen for breeding in BioLogica), answers (for
auto-scorable items only), and typing time (for open-response
questions). These data are used as indices of components of
students’ model-based learning (Gobert & Buckley, 2000), such
as how systematic or haphazard students are in learning with
models. Additionally, log files are used as indices of students’
model-based inquiry skills both within and across domains,
allowing us to assess transfer from one domain to another
and to assess how a student’s inquiry skills progress
independent of content learning. Since our activities are
enacted over multiple days and in three domains, we avoid
a problem faced by earlier studies of inquiry, in which
there were too few data to characterize students’ inquiry skills
(Shavelson & Ruiz-Primo, 1999). Lastly, we can provide teachers
with formative and summative assessment data bearing on
both students’ content learning and model-based inquiry
skills. We are currently collecting data from 13
Member/Partner schools and 200+ Contributing schools.
Log data from BioLogica™ and Dynamica™ will be
presented in order to demonstrate how we use our data
extraction tools to characterize both students’ systematicity
in learning with models and their inquiry skills.
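As a concrete illustration of this kind of extraction, the short Java sketch below computes time on screen, a model-interaction span, and a count of inputs to the model from a simulated log. The line format (timestampMs,eventType,payload) and the event names are assumptions made for exposition; the project’s actual log schema is not described in this paper.

```java
import java.util.List;

public class LogExtractionSketch {

    public static void main(String[] args) {
        // Simulated log: a student enters a screen, sets variables,
        // runs the model twice, answers a question, and leaves.
        List<String> log = List.of(
                "0,ENTER_SCREEN,gas-laws-3",
                "4000,SET_VARIABLE,pressure=2.0",
                "6000,RUN_MODEL,",
                "15000,SET_VARIABLE,temperature=300",
                "17000,RUN_MODEL,",
                "30000,ANSWER,q1=b",
                "42000,EXIT_SCREEN,gas-laws-3");

        long enter = 0, exit = 0, firstRun = -1, lastRun = -1;
        int inputs = 0;
        for (String line : log) {
            String[] f = line.split(",", 3); // timestampMs, eventType, payload
            long t = Long.parseLong(f[0]);
            switch (f[1]) {
                case "ENTER_SCREEN" -> enter = t;
                case "EXIT_SCREEN"  -> exit = t;
                case "SET_VARIABLE" -> inputs++;      // inputs to the model
                case "RUN_MODEL"    -> {              // bracket interaction time
                    if (firstRun < 0) firstRun = t;
                    lastRun = t;
                }
                default -> { }                        // other events ignored here
            }
        }
        System.out.println("Time on screen (s): " + (exit - enter) / 1000.0);
        System.out.println("Model interaction span (s): " + (lastRun - firstRun) / 1000.0);
        System.out.println("Variable inputs: " + inputs);
    }
}
```

Measures like these, computed per activity and per student, are the raw material from which judgments about systematicity (e.g., changing one variable at a time) and inquiry skill could plausibly be built.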
References
Gobert, J. D., & Buckley, B. C. (2000). Introduction to
model-based teaching and learning in science education.
International Journal of Science Education, 22(9), 891-894.
Gobert, J., Buckley, B., Dede, C., Horwitz, P., Wilensky,
U., & Levy, S. (2004). Modeling Across the Curriculum
(MAC): Technology, Pedagogy, Assessment, & Research.
Paper presented at the annual meeting of the American
Educational Research Association, San Diego, CA.
Horwitz, P. (1995). Linking Models to Data: Hypermodels
for Science Education. The High School Journal, 79(2),
148-156.
Horwitz, P., & Burke, E. J. (2002). Technological
advances in the development of the hypermodel. Paper
presented at the annual meeting of the American
Educational Research Association, New Orleans, LA.
Shavelson, R. J., & Ruiz-Primo, M. A. (1999). Note on
sources of sampling variability in science performance
assessments. Journal of Educational Measurement, 36(1),
61-71.