Poster (PDF available)

The Similar Situations Task: An Assessment of Analogical Reasoning in Healthy and Clinical Populations

Abstract

Analogical reasoning—the ability to understand and use relational similarities between entities despite surface-level differences—helps individuals solve problems and navigate novel situations. This ability varies across healthy and clinical populations, yet current analogical reasoning tasks often fail to capture subtle performance differences between populations. To address this problem, we developed the Similar Situations Task (SST), in which participants are presented with 48 line-art scene analogy problems, with source and target scenes shown separately. In each source, two sets of items (humans, animals, or objects) interact in distinct areas of the scene. One or two arrows direct participants to encode and remember specific items and their relational roles. In each target, two matching items interact analogously to one set of items in the source, while two distractor items interact in a superficially similar manner to the alignable items. Participants must determine which item, if any, is in a situation similar to that of an item pointed to in the source. SST problems proved to be reliable measures of performance and presented a range of challenges for both college students and chronic-phase traumatic brain injury patients. Moreover, SST performance correlated with neuropsychological cognitive measures, but notably did not correlate with measures of verbal working memory or intelligence. The SST appears to be a sensitive, reliable, and realistic test of analogical reasoning that captures the ability to discern analogous relations and roles across different situations. Importantly, the SST results suggest this ability may be independent of other cognitive capacities.
Study 2
15 participants (12 female; age M = 21 years, SD = 3).
The experimental procedure differed from Study 1: in target scenes, participants had to confirm their initial choice by clicking on a blue dot or cloud area.
Introduction
Traditional assessments of analogical reasoning, such as 4-term verbal analogy problems, are often insensitive to reasoning deficits in clinical populations (e.g., traumatic brain injury patients).
A sufficiently challenging and sensitive analogical reasoning task
could help to better characterize executive function deficits in such
populations, as well as inform cognitive rehabilitation programs.
Method
The Similar Situations Task (SST) presented participants with 48 line-art scene analogy problems in source–target pairs.
In each source scene one or two arrows directed participants to
encode and remember the relations and roles of the items pointed to.
For each target scene, participants were tasked with identifying which item, if any, was in a situation similar to one pointed to in the source.
If no close analog was present, participants could click “No Match.”
Participants then rated their confidence (from -4 to 4) in their answer.
After the SST, various other cognitive and neuropsychological
measures (different between Studies 1 and 2) were administered.
Four trial types (see columns) were presented in pseudorandom order.
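As a rough illustration, the pseudorandom ordering of the four trial types can be sketched as follows. The cell counts (12, 24, 4, 8; 48 trials total) are read off the trial-type figure, but the assignment of the 12/24-trial cells to Match (and 4/8 to No Match) and the no-long-runs constraint are assumptions, not the authors' documented procedure.

```python
import itertools
import random

# Hypothetical trial counts per cell; the split of 12/24 trials to
# Match and 4/8 to No Match is an assumption read off the figure.
TRIAL_TYPES = {
    ("low", "match"): 12,
    ("high", "match"): 24,
    ("low", "no_match"): 4,
    ("high", "no_match"): 8,
}

def pseudorandom_order(seed=0, max_run=3):
    """Shuffle the 48 trials, rejecting orders with more than
    max_run identical trial types in a row (assumed constraint)."""
    rng = random.Random(seed)
    trials = [t for t, n in TRIAL_TYPES.items() for _ in range(n)]
    while True:
        rng.shuffle(trials)
        longest = max(len(list(g)) for _, g in itertools.groupby(trials))
        if longest <= max_run:
            return trials

order = pseudorandom_order()
print(len(order))  # 48 trials total
```

Rejection sampling is a simple way to honor a "no long runs" constraint without biasing which orders are reachable.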
Matthew J. Kmiecik1, Guido F. Schauer1, David Martinez1, & Daniel C. Krawczyk1,2
1The University of Texas at Dallas; 2University of Texas Southwestern Medical Center at Dallas
Study 1
38 participants (25 female; age M = 23 years, SD = 7).
SST calibration—how well a participant's confidence tracks their accuracy—was calculated by multiplying each participant's confidence rating by their accuracy (1 if correct, −1 if incorrect) on each trial.
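The trial-level calibration score described above can be sketched in code; note that averaging the trial products into one per-participant score is an assumption about how the values were aggregated.

```python
def sst_calibration(accuracies, confidences):
    """Per-participant calibration: mean of confidence (-4..+4)
    multiplied by signed accuracy (1 correct, -1 incorrect).
    Positive scores mean confidence tracked accuracy; averaging
    across trials is assumed here, not stated on the poster."""
    if len(accuracies) != len(confidences):
        raise ValueError("one accuracy and one confidence per trial")
    products = [a * c for a, c in zip(accuracies, confidences)]
    return sum(products) / len(products)

# Example: confident-correct trials raise the score; a confident
# error (accuracy -1, confidence +4) pulls it down.
acc = [1, 1, -1, 1]
conf = [4, 3, 4, 1]
print(sst_calibration(acc, conf))  # (4 + 3 - 4 + 1) / 4 = 1.0
```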
References
1. Abdi, H., & Williams, L. J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), 433–459.
2. Beaton, D., Chin Fatt, C. R., & Abdi, H. (2014). An ExPosition of multivariate analysis with the singular value decomposition in R. Computational Statistics & Data Analysis, 72, 176–189.
3. Burgess, P. W., & Shallice, T. (1996). Response suppression, initiation and strategy use following frontal lobe lesions. Neuropsychologia, 34(4), 263–272.
[Figure: bar charts (one set per study) of SST Accuracy (proportion correct, 0–1) and SST Correct RT (seconds, 0–8) for Match vs. No Match trials.]
[Figure: example trials illustrating the four trial types, crossing matchability (Match vs. No Match) with relational load (Low vs. High); cells of 12, 24, 4, and 8 trials, 48 in total.]
[Figure: confidence rating scale from −4 (Wrong for Sure) through 0 (Unsure Whether Wrong or Right) to +4 (Right for Sure).]
For SST accuracy, we observed main effects of relational load, F(1, 37) = 4.19, p = .048, and matchability, F(1, 37) = 29.81, p < .001, but no interaction. Similarly, for SST RT for correct trials, we observed main effects of relational load, F(1, 37) = 18.30, p < .001, and matchability, F(1, 37) = 8.26, p = .007, but not their interaction. Error bars represent SEM.
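The 2 (relational load) × 2 (matchability) within-subjects effects reported above can be illustrated with a minimal sketch: for a two-level within-subject factor, the main-effect F(1, n − 1) equals the squared one-sample t of the per-subject condition difference. The data below are fabricated for illustration, not the study's.

```python
import math

def one_sample_t(xs):
    """t statistic testing whether the mean of xs differs from zero."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / math.sqrt(var / n)

def within_subject_F(cond_a, cond_b):
    """F(1, n-1) for a two-level within-subjects factor: the squared
    t of the per-subject difference a - b."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    return one_sample_t(diffs) ** 2

# Fabricated accuracies for 5 subjects, low vs. high relational load
# (each value already averaged over the matchability levels).
low = [0.90, 0.85, 0.80, 0.95, 0.88]
high = [0.80, 0.78, 0.75, 0.85, 0.82]
print(round(within_subject_F(low, high), 2))  # 54.49
```

The interaction term would use the analogous per-subject double difference, (a1 − a2) − (b1 − b2).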
For SST accuracy, we observed a main effect of matchability, F(1, 14) = 15.90, p = .001, but no main effect of relational load and no interaction. However, for SST RT for correct trials, we observed main effects of relational load, F(1, 14) = 12.27, p = .004, and matchability, F(1, 14) = 7.72, p = .015, but not their interaction. Error bars represent SEM.
Principal Component Analysis 1
[Biplot: SST Spring measures — SST Accuracy, SST Calibration, SST Correct RT, SST Confidence Rating, Remote Associates Task, Hayling Section 1 RT, Hayling Section 2 RT, Hayling Semantic Errors (A), Hayling Semantic Errors (B), Symbol Span, Letter Number Span, Digit Span, Stroop, and Raven's Matrices. Component 1 variance: 33% (p < .001); Component 2 variance: 13% (p = .73).]
Principal Component Analysis 2
[Biplot: EDRG measures — SST Accuracy, SST Calibration, SST Correct RT, SST Confidence Rating, Abstraction Accuracy, Abstraction RT, Abstraction+Memory Accuracy, Abstraction+Memory RT, Verbal Analogies Accuracy, Verbal Analogies Correct RT, Symbol Span, Beck Depression Inventory, and Beck Anxiety Inventory. Component 1 variance: 34% (p < .001); Component 2 variance: 23% (p = .003).]
Inferential principal component analyses1,2 were performed using permutation and bootstrapping techniques across 2,000 iterations to calculate significance for components and measures, respectively. Significant measures had bootstrap ratios greater than 2 (p < .05). Dot sizes signify contributions1, or how much variance each measure contributes to the components, with larger dots signifying larger contributions.
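A minimal, pure-Python sketch of the bootstrap-ratio idea (the actual analyses used the ExPosition package in R; refs. 1–2): a statistic's observed value is divided by the standard deviation of its bootstrap distribution, and |ratio| > 2 is treated as significant at roughly p < .05. The sample mean is used below as a stand-in for a measure's component loading.

```python
import random
import statistics

def bootstrap_ratio(values, n_iter=2000, seed=1):
    """Observed mean divided by the standard deviation of bootstrapped
    means; |ratio| > 2 is treated as significant (roughly p < .05).
    The mean here stands in for a measure's component loading."""
    rng = random.Random(seed)
    boot_means = []
    for _ in range(n_iter):
        resample = [rng.choice(values) for _ in values]
        boot_means.append(statistics.fmean(resample))
    return statistics.fmean(values) / statistics.stdev(boot_means)

# A tightly clustered, clearly nonzero statistic yields a large ratio.
loadings = [0.50, 0.60, 0.55, 0.45, 0.65, 0.50, 0.60]
print(bootstrap_ratio(loadings) > 2)  # True: stable, nonzero statistic
```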
Acknowledgements
Many thanks go to Pranali Kamat, Brandon Pires, Niki Allahyari, and Rudy Perez for their help with data collection, and to Lara Jones for allowing us to use her verbal analogy task in Study 2.
[Legend: Relational Load (Low vs. High); component(s) on which each measure was significant (2, 1 & 2, or n.s.); trial timeline (Time).]
Discussion
The variability of SST performance across healthy individuals, together with its relationship to clinically relevant cognitive measures, suggests that the SST may be sensitive to executive function deficits in clinical populations.
Future work will investigate the efficacy of the SST in predicting difficulties in everyday reasoning within various clinical groups.