March 2025 · 4 Reads
July 2024 · 13 Reads · 4 Citations
International Journal of Artificial Intelligence in Education
Large language models (LLMs) offer an opportunity to make large-scale changes to educational content that would otherwise be too costly to implement. The work here highlights how LLMs (in particular, GPT-4) can be prompted to revise educational math content for large-scale deployment in real-world learning environments. We tested the ability of LLMs to improve the readability of math word problems and then examined how these readability improvements affected learners, especially those identified as emerging readers. Working with math word problems in the context of an intelligent tutoring system (MATHia by Carnegie Learning, Inc.), we developed an automated process that can rewrite thousands of problems in a fraction of the time required for manual revision. GPT-4 was able to produce revisions with improved scores on common readability metrics. However, when we examined student learning outcomes, the problems revised by GPT-4 showed mixed results. In general, students were more likely to achieve mastery of the concepts when working with problems revised by GPT-4 than with the original, non-revised problems, but this benefit was not consistent across all content areas. Further complicating this finding, students had higher error rates on GPT-4-revised problems in some content areas and lower error rates in others. These findings highlight the potential of LLMs for making large-scale improvements to math word problems, but also the importance of further, nuanced study of how the readability of math word problems affects learning.
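The abstract does not include the prompting pipeline itself; the sketch below is a hypothetical illustration, not the authors' implementation, of how a single problem could be sent to GPT-4 for a readability-focused rewrite and then scored with a standard readability metric. The prompt wording, the example problem, and the use of the textstat package are assumptions.

    # Minimal sketch (not the authors' pipeline): rewrite one math word problem
    # for readability with GPT-4, then compare readability scores.
    from openai import OpenAI   # assumes the openai Python package is installed
    import textstat             # assumes textstat is installed for readability metrics

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rewrite_for_readability(problem: str) -> str:
        """Ask the model to simplify wording while preserving the math."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Rewrite this math word problem so an emerging reader "
                            "can follow it. Keep all quantities, units, and the "
                            "question unchanged."},
                {"role": "user", "content": problem},
            ],
        )
        return response.choices[0].message.content

    # Hypothetical example problem, not taken from MATHia content.
    original = ("A landscaper charges a $35 consultation fee plus $12 per square "
                "meter of turf installed. Write an equation for the total cost.")
    revised = rewrite_for_readability(original)

    # A lower Flesch-Kincaid grade level suggests easier-to-read text.
    print("original grade level:", textstat.flesch_kincaid_grade(original))
    print("revised grade level: ", textstat.flesch_kincaid_grade(revised))

Rewriting thousands of problems, as described above, would amount to looping this call over a problem bank and logging both versions for human review before deployment.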
July 2024 · 6 Reads
July 2024 · 10 Reads
March 2024 · 25 Reads
This paper presents a conceptual exploration of how Digital Learning Platforms (DLPs) can be used to investigate the impact of language clarity, precision, engagement, and contextual relevance on mathematics learning from word problems. Focusing on three distinct DLPs (ASSISTments/E-TRIALS, MATHia/UpGrade, and Canvas/Terracotta), we propose hypothetical studies aimed at uncovering how nuanced language modifications can enhance mathematical understanding and engagement. While these studies are illustrative, they provide a blueprint for researchers interested in leveraging DLPs for empirical investigation, helping future investigators better understand the emerging infrastructure for research in digital learning platforms and the opportunities it provides. By highlighting three distinct implementations of the same core research question, we reveal both commonalities and differences in how different educational technologies might build evidence, offering a unique opportunity to advance math education and other fields of education research.
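The paper proposes hypothetical study designs rather than code, but a shared ingredient across the three platform designs is stable assignment of students to language conditions. The sketch below, with entirely hypothetical names and conditions, shows one common hash-based approach that keeps a student in the same experimental arm across sessions.

    # Illustrative sketch only: deterministic assignment of students to an
    # "original" vs. "revised wording" condition for a platform experiment.
    import hashlib

    CONDITIONS = ["original_wording", "revised_wording"]

    def assign_condition(student_id: str,
                         experiment_id: str = "word-problem-language") -> str:
        """Hash student and experiment IDs into a stable condition assignment."""
        digest = hashlib.sha256(f"{experiment_id}:{student_id}".encode()).hexdigest()
        return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

    print(assign_condition("student-001"))  # same input always yields the same arm

Because the assignment depends only on the student and experiment identifiers, no assignment table needs to be stored, and re-randomization across logins is avoided.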
June 2023 · 202 Reads · 6 Citations
Communications in Computer and Information Science
We present a randomized field trial, delivered in Carnegie Learning's MATHia intelligent tutoring system to 12,374 learners, intended to test whether rewriting the content of "word problems" improves student mathematics performance on this content, especially among students who are emerging English language readers. In addition to describing the facets of word problems targeted for rewriting and the design of the experiment, we present an artificial intelligence-driven approach to evaluating the effectiveness of the rewrite intervention for emerging readers. Data about students' reading ability are generally neither collected nor available to MATHia's developers. Instead, we rely on a recently developed neural network predictive model that infers whether students are likely to be in this target sub-population. We present the results of the intervention on a variety of performance metrics in MATHia, comparing the intervention group to the entire MATHia user base, as well as likely emerging readers to students not inferred to be emerging readers. We conclude with areas for future work using more comprehensive models of learners.
Keywords: machine learning, A/B testing, intelligent tutoring systems, reading ability, middle school mathematics
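The abstract does not detail the predictive model, so the following is only a rough stand-in showing the general shape of such an approach: train a small neural network on log-derived features to flag likely emerging readers, then compare outcomes by inferred group. The features and data here are synthetic placeholders, not the model or data from the paper.

    # Rough sketch, not the paper's model: infer a "likely emerging reader" label
    # from tutor log features with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical log-derived features: hint requests, time per step, error rate.
    X = rng.normal(size=(1000, 3))
    # Synthetic labels standing in for an external reading-ability signal.
    y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # Flag likely emerging readers, then outcome metrics can be compared by group.
    likely_emerging = model.predict(X_test).astype(bool)
    print("share flagged as likely emerging readers:", likely_emerging.mean())

In the study, the inferred labels serve the role played here by the model's predictions: they partition learners so that intervention effects can be reported separately for likely emerging readers and for everyone else.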
... For example, students often struggle to read the text of mathematics word problems, and LLMs have the potential to adapt problems to assist emerging readers. Having GPT-4 rewrite middle school mathematics problems to improve their readability can result in similar effects on student performance as having humans rewrite the problems (Norberg et al., 2024a). And compared to original problems that have not been rewritten, the problems rewritten for improved readability using GPT-4 could in some cases improve students' mastery rates. ...
July 2024
International Journal of Artificial Intelligence in Education
... This data can then be used to design educational content and activities for individual needs. For example, platforms such as Carnegie Learning's MATHia use AI to provide personalized math instruction that adapts to the student's level of understanding and pace of learning (Almoubayyed et al., 2023). Recent studies show the impact of personalized learning on student outcomes (Hashim et al., 2022). ...
June 2023
Communications in Computer and Information Science