Steve Ritter’s research while affiliated with Carnegie Learning and other places


Publications (18)


Figure 1. Dialogue mode profiles of the top versus bottom 25% of sessions.
An Analysis of Human Tutors’ Actions in Tutorial Dialogues
  • Conference Paper
  • Full-text available

May 2017 · 351 Reads · 5 Citations

Vasile Rus · [...] · Steve Ritter

Understanding effective human tutors’ strategies is one approach to discovering effective tutorial strategies. These strategies are described in terms of actions that tutors take while interacting with learners. To this end, in this paper we analyze dialogue-based interactions between professional tutors and tutees. There are two challenges when exploring patterns in such dialogue-based tutorial interactions. First, we need to map utterances, by the tutor and by the tutee, into actions. To address this challenge, we rely on the language-as-action theory, according to which when we say something we do something. A second challenge is detecting effective tutorial sessions using objective measurements of learning. To tackle this challenge, we align tutorial conversations with pre- and post-measures of student mastery obtained from an intelligent tutoring system with which the students interacted before and after interacting with the human tutor. We present performance results of the automated tools that we developed to map tutor-tutee utterances onto dialogue acts and dialogue modes. We also report the most interesting emerging patterns in terms of tutors’ and tutees’ actions. These patterns could inform our understanding of the tutoring process and the development of intelligent tutoring systems.


How Mastery Learning Works at Scale

April 2016 · 419 Reads · 80 Citations

Nearly every adaptive learning system aims to present students with materials personalized to their level of understanding (Enyedy, 2014). Typically, such adaptation follows some form of mastery learning (Bloom, 1968), in which students are asked to master one topic before proceeding to the next. Mastery learning programs have a long history of success (Guskey and Gates, 1986; Kulik, Kulik & Bangert-Drowns, 1990) and have been shown to be superior to alternative instructional approaches. Although there is evidence for the effectiveness of mastery learning when it is well supported by teachers, its effectiveness depends crucially on the ability and willingness of teachers to implement it properly. In particular, school environments impose time constraints and set goals for curriculum coverage that may encourage teachers to deviate from mastery-based instruction. In this paper we examine mastery learning as implemented in Carnegie Learning's Cognitive Tutor. As in all real-world systems, teachers and students have the ability to violate mastery learning guidance. We investigate patterns associated with violating and following mastery learning over the course of a full school year at the class and student level. We find that violations of mastery learning are associated with poorer student performance, especially among struggling students, and that this poorer performance is likely attributable to the violations themselves.
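The mastery-learning gate the abstract describes (advance to the next topic only once the current skill is mastered) can be sketched with a simple knowledge-tracing update. The 0.95 threshold and the Bayesian Knowledge Tracing parameters below are illustrative assumptions, not Cognitive Tutor's actual values.

```python
# Minimal sketch of threshold-based mastery progression. The threshold and
# the guess/slip/learn parameters are illustrative, not Carnegie Learning's.

MASTERY_THRESHOLD = 0.95

def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.3):
    """One Bayesian Knowledge Tracing step: revise P(mastered) after a response."""
    if correct:
        likely = p_mastery * (1 - slip)
        evidence = likely + (1 - p_mastery) * guess
    else:
        likely = p_mastery * slip
        evidence = likely + (1 - p_mastery) * (1 - guess)
    posterior = likely / evidence
    # Student may also learn the skill on this opportunity.
    return posterior + (1 - posterior) * learn

def next_topic_allowed(p_mastery):
    """Mastery learning gate: advance only once the estimate clears the bar."""
    return p_mastery >= MASTERY_THRESHOLD
```

Skipping the gate (a "violation" in the paper's terms) corresponds to moving a student forward while `next_topic_allowed` is still false.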



Figure 1. The Super Experiment Framework, showing how each of the component scales informs the others. Each component provides an experimental level that can answer questions difficult or impossible to answer at the other scales, and the components can expand or validate one another's findings: internet-scale experiments can identify areas of focus for lab-scale experiments, which can then be validated in school-scale experiments. An overview of each SEF component appears in Table 1.
Figure 2. Average improvement from the first opportunity to the second opportunity, by item presented. The clear patterns of difficulty are used to generate knowledge component models in Datashop.
The Rise of the Super Experiment

June 2012 · 151 Reads · 18 Citations

Traditional experimental paradigms have focused on executing experiments in a lab setting and eventually moving successful findings to larger experiments in the field. However, data from field experiments can also be used to inform new lab experiments. Now, with the advent of large student populations using internet-based learning software, online experiments can serve as a third setting for experimental data collection. In this paper, we introduce the Super Experiment Framework (SEF), which describes how internet-scale experiments can inform and be informed by classroom and lab experiments. We apply the framework to a research project implementing learning games for mathematics that is collecting hundreds of thousands of data trials weekly. We show that the framework allows findings from the lab-scale, classroom-scale and internet-scale experiments to inform each other in a rapid complementary feedback loop.


Riding the Third Wave

January 2010 · 32 Reads · 1 Citation

Lecture Notes in Computer Science

Work on intelligent tutoring systems falls into three waves. The first wave involves basic research on technical implementation, including authoring systems and tutoring architectures. Second-wave work takes this technological development beyond the laboratory; it involves deep analysis of domain knowledge and empirical validation of systems. The emerging “third wave” takes advantage of widespread use of systems to refine and improve their effectiveness. Work in this area includes data mining and end-user authoring. Although many types of systems have followed this evolution, intelligent tutoring systems are uniquely positioned among educational software to take advantage of the third wave. The architecture and authoring work from the first wave and the ability to incorporate domain knowledge and test pedagogical approaches in the second wave leave us well positioned to ride this third wave. In this talk, I will describe Carnegie Learning’s experience riding these waves. We have taken intelligent tutoring systems for mathematics originally developed at Carnegie Mellon to scale, with over 500,000 users per year, and are now riding the third wave to leverage this user base and improve the effectiveness and utility of our systems.




An architecture for plug-in tutoring agents

11 Reads · 19 Citations

We describe our efforts to build new learning environments that incorporate tutoring elements into pre-existing software packages. Two systems are described, one which provides tutoring support in the Geometer's Sketchpad and the other which supports students using Microsoft Excel. Although the implementation of these two systems was somewhat different, they share many basic components. An analysis of their similarities and differences allows us to move towards a set of standards for tutor agents that interact with complex tools. By constructing learning environments in this manner, we can leverage the power of existing workplace software and educational microworlds to create more powerful learning environments.
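The plug-in idea the abstract describes (one tutoring component serving multiple host tools through a shared interface) can be sketched as follows. The class and method names are illustrative assumptions, not the paper's actual API.

```python
# Sketch of a plug-in tutoring agent: shared tutoring logic behind
# tool-specific adapters (a spreadsheet and a geometry tool). All names
# here are hypothetical, for illustration only.

class TutorBackend:
    """Shared tutoring logic, independent of the host application."""
    def __init__(self, expected_answers):
        self.expected = expected_answers  # item -> correct response

    def check(self, item, answer):
        if self.expected.get(item) == answer:
            return "correct"
        return "hint: re-read the problem"

class SpreadsheetAdapter:
    """Translates spreadsheet events into tutor queries."""
    def __init__(self, tutor):
        self.tutor = tutor

    def cell_entered(self, cell, value):
        return self.tutor.check(cell, value)

class SketchpadAdapter:
    """Translates geometry-tool events into tutor queries."""
    def __init__(self, tutor):
        self.tutor = tutor

    def object_constructed(self, name, kind):
        return self.tutor.check(name, kind)
```

The adapters are the only tool-specific code; the backend never sees which application generated the event, which is the property that lets the same tutoring components serve both systems.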


Citations (11)


... For example, students often struggle to read the text of mathematics word problems, and LLMs have the potential to adapt problems to assist emerging readers. Having GPT-4 rewrite middle school mathematics problems to improve their readability can result in similar effects on student performance as having humans rewrite the problems (Norberg et al., 2024a). And compared to original problems that have not been rewritten, the problems rewritten for improved readability using GPT-4 could in some cases improve students' mastery rates. ...

Reference:

The Implications of Generative Artificial Intelligence for Mathematics Education
Rewriting Content with GPT-4 to Support Emerging Readers in Adaptive Mathematics Software
  • Citing Article
  • July 2024

International Journal of Artificial Intelligence in Education

... Studies on Bayesian Knowledge Tracing and carelessness detectors have shown promising results, with performance being relatively equal across demographic groups (Zambrano et al., 2024). However, traditional bias metrics may not be suitable for educational settings due to hierarchical dependencies in classrooms, necessitating adapted measurements using hierarchical linear models (Belitz et al., 2024). To address these challenges, researchers recommend focusing on solidifying understanding of concrete impacts, moving from unknown to known bias, and transitioning from fairness to equity (Baker & Hawn, 2021). ...

Hierarchical Dependencies in Classroom Settings Influence Algorithmic Bias Metrics
  • Citing Conference Paper
  • March 2024

... The expression "Augmented Humans" [4] refers to the adoption of methods and technology that enhance physical, cognitive, or sensory skills beyond what is common for humans. This paradigm shift is transforming various aspects of daily life and industries, such as education [5] and healthcare [6], by providing immersive learning environments, virtual training simulations, and new forms of entertainment. ...

Towards the Future of AI-Augmented Human Tutoring in Math Learning

Communications in Computer and Information Science

... This data can then be used to design educational content and activities for individual needs. For example, platforms such as Carnegie Learning's MATHia use AI to provide personalized math instruction that adapts to the student's level of understanding and pace of learning (Almoubayyed et al., 2023). Recent studies show the impact of personalized learning on student outcomes (Hashim et al., 2022). ...

Rewriting Math Word Problems to Improve Learning Outcomes for Emerging Readers: A Randomized Field Trial in Carnegie Learning’s MATHia
  • Citing Chapter
  • June 2023

Communications in Computer and Information Science

... A case in point is that of Carnegie Learning in the case of mathematics education. For instance, the personalization of mathematics instruction for every student through artificial intelligence, such as while designing the MATHia system, would be realized because it provides real-time feedback and adjusts the difficulty of the problems based on performance [48], [49]. For example, AutoTutor was designed for conversational learning as a tutor for a human conversational partner using natural language processing. ...

Instruction-Embedded Assessment for Reading Ability in Adaptive Mathematics Software
  • Citing Conference Paper
  • March 2023

... The same group of researchers explored the role of Hidden Markov Models (HMMs), a generative model, and Conditional Random Fields (CRFs), a discriminative model, in classifying speech acts in one-to-one human tutorial sessions [13]. They demonstrated that the CRF model with features constructed from the first three tokens and last token of previous, next and current utterances, length of current utterance, and other surface features such as bigrams and the speech acts of context utterances performed better than HMM models. ...

An Analysis of Human Tutors’ Actions in Tutorial Dialogues

... Digital learning software for secondary education has been developed for decades [2,3,9,18] and has seen widespread adoption, with millions of students using it globally across different platforms (e.g., [19,22,24,27,32]). While a substantial body of research has focused on how such software should be constructed to most effectively support students' learning (e.g., [1,15,21,25,31]), surprisingly little is known about its long-term usage once integrated into classrooms (but see [4,14]). ...

How Mastery Learning Works at Scale
  • Citing Conference Paper
  • April 2016

... Learning engineering is an emerging field that uses evidence to inform educational design. For example, researchers developed and disseminated different versions of a game's script [12] and various conditions of difficulty and support [13] to large audiences to determine which versions produced desirable outcomes and inform design theory. ...

The Rise of the Super Experiment

... On the other hand, such experiments facilitate the logging of detailed ecological data and patterns that do emerge are likely to be more robust to the variability of real classrooms. Student interface actions are logged as selection-action-input triples, representing the element of the interface with which students interact, the action students took, and the input to the action (Ritter & Koedinger, 1997). For example, entering 25 in a table would be represented as (selection = cell A1, action = enterValue, input = "25"). ...

An architecture for plug-in tutoring agents
  • Citing Article
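The selection-action-input logging scheme quoted in the excerpt above can be sketched directly; the dataclass below mirrors the worked example (entering 25 into cell A1), with field names taken from the excerpt.

```python
# Sketch of the selection-action-input triple used to log student
# interface actions, as described in the excerpt above.
from dataclasses import dataclass

@dataclass(frozen=True)
class StudentAction:
    selection: str  # interface element the student acted on
    action: str     # what the student did to it
    input: str      # the value supplied

# The excerpt's example: entering 25 into cell A1 of a table.
event = StudentAction(selection="cell A1", action="enterValue", input="25")
```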

... These architectures generally provide a programming language that allows users to model human cognitive processes and behaviors. However, these languages are almost always specified at a primitive level, similar to assembly code (Cohen et al., 2005; Ritter and Koedinger, 1995). This makes it very time-consuming to develop the models, and usually the models are over-specified. ...

Towards lightweight tutoring agents
  • Citing Conference Paper
  • January 1995