November 2024
The Autobiographical Interview is a widely used tool for examining memory and related cognitive functions. It provides a standardized framework for differentiating between internal details, which represent the episodic features of specific events, and external details, which include semantic knowledge and other non-episodic information. This study introduces an automated scoring model for autobiographical memory and future thinking tasks, using large language models (LLMs) to analyze personal event narratives without preprocessing. Building on the traditional Autobiographical Interview protocol, we fine-tuned a LLaMA-3 model to identify internal and external details at the narrative level. The model was trained and tested on narratives from 284 participants across three studies, spanning past and future thinking tasks and multiple age groups, collected in both lab-based and virtual interviews. Results demonstrate strong correlations with human scores, up to r = 0.87 for internal details and up to r = 0.84 for external details, indicating that the model aligns as closely with human raters as human raters do with one another. Additionally, as evidence of the algorithm's construct validity, the model replicated the known age-related pattern that cognitively normal older adults generate fewer internal and more external details than younger adults across all three datasets, detecting this age-group difference even in one dataset where human raters did not. This automated approach offers a scalable alternative to manual scoring, making large-scale studies of human autobiographical memory more feasible.