
Correcting Smith et al.’s (2018) Criticisms of All Rorschach Studies in Mihura, Meyer, Dumitrascu, and Bombel’s (2013) Meta-Analyses

Author: Joni L. Mihura, Department of Psychology, University of Toledo, OH, USA

Abstract

Smith et al. (2018) describe their article as “an evaluation as to the extent that individual studies have conformed to [Exner’s (1995a)] proposed methodological criteria” (Abstract). However, the authors did not conduct analyses to compare research before and after Exner (1995a) in order to assess its impact, nor was the set of criteria they used Exner’s. Instead, they critiqued the individual studies in Mihura and colleagues’ (2013) meta-analyses, declaring all of them methodologically unsound (including Exner’s). They conjectured that Mihura et al. omitted studies with less “methodological bias” that would have provided more support for Rorschach validity. I explain why most of the criteria they use to criticize the studies’ methodology are not sound. But to directly test their hypotheses, I requested their ratings of study methodology. Findings from studies they rated as having more methodological “issues” (e.g., not reporting IQ or Lambda range) or as being “application studies” – which they said should be excluded – were not less supportive of Rorschach validity, as they assumed would be the case. The small effect size associations (r < |.10|) were also in the direction opposite to the one Smith et al. argued for, indicating that the criteria by which they evaluated other researchers’ studies were not sound. Our findings do indicate that researchers are responding to the one criterion that is clearly stated in Exner (1995a), which is Weiner’s (1991) recommendation to report interrater reliability: before 1991, 12% of studies reported interrater reliability, which afterward jumped to 78.4%. Other claims in the article by Smith et al. are also addressed.
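The test described above amounts to correlating a study-level methodological rating with that study's validity effect size and checking the sign and magnitude of the association. A minimal sketch of that kind of check, in plain Python with entirely hypothetical numbers (this is not the author's actual data or analysis), might look like:

```python
# Illustrative sketch only (not the published analysis): correlate
# binary "methodological issue" ratings with study validity effect sizes.
# All data values below are hypothetical, for demonstration.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings: 1 = study flagged as having a methodological
# "issue" (e.g., IQ or Lambda range not reported), 0 = not flagged.
issue_flag = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical validity effect sizes (r) from the same studies.
effect_size = [0.28, 0.25, 0.31, 0.27, 0.30, 0.24, 0.29, 0.26]

r = pearson_r(issue_flag, effect_size)
# An association near zero (|r| < .10), or one in the "wrong" direction,
# would indicate flagged studies are not less supportive of validity.
print(round(r, 3))
```

With a binary flag, this is the point-biserial correlation; the abstract's key observation is that such associations were small (r < |.10|) and ran opposite to Smith et al.'s prediction.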
Commentary
Correcting Smith et al.’s (2018) Criticisms of All Rorschach Studies in Mihura, Meyer, Dumitrascu, and Bombel’s (2013) Meta-Analyses
Joni L. Mihura
Department of Psychology, University of Toledo, OH, USA
Keywords: Rorschach, meta-analysis, methodology
Smith et al. (2018; in Rorschachiana) published a critique of the 215 studies included in the 210 articles in Mihura, Meyer, Dumitrascu, and Bombel’s (2013) meta-analyses of 65 Rorschach variables (meta-analyses that had resulted in the critics lifting their call for an all-out moratorium on the use of the Rorschach; Wood, Garb, Nezworski, Lilienfeld, & Duke, 2015; see also Mihura, Meyer, Bombel, & Dumitrascu, 2015) and declared that the work of hundreds of Rorschach
© 2019 Hogrefe Publishing. Rorschachiana (2019), 40(2), 169–186. https://doi.org/10.1027/1192-5604/a000118
... Just as the RCS was created from the most empirically robust bits and pieces of earlier American Rorschach systems, R-PAS was based only on those aspects of the RCS that have passed strict empirical muster (Mihura et al., 2013). Comparison of the systems has sparked a lively debate about the degree of overlap between the RCS and R-PAS (Mihura, 2019; Smith et al., 2018). Due largely to Mihura et al.'s (2013) meta-analytic studies, R-PAS was used to score and interpret Peter's Rorschach. ...
Article
Full-text available
This manuscript presents a single case study of a psychotically disturbed adult male (whom we call “Peter”), focusing on similarities and differences in Rorschach interpretation based on three different Rorschach approaches. Specific questions were raised as to whether the client suffered from a paranoid psychosis (paranoia) or paranoid schizophrenia. Three distinct models of psychopathology and Rorschach interpretation are initially presented. We then address Peter’s psychotic symptoms according to the Parisian approach (specifically the Nancy French subgroup), the Lausanne Rorschach approach, and the American Rorschach approach (Comprehensive System and R-PAS). Analysis shows many convergences between the three approaches on the nature of the client’s conflicts and links to reality, object relations, self-representation and anxiety, defense mechanisms, and disordered thinking, but interpretation of these variables differed somewhat despite agreement on a diagnosis within the psychotic spectrum. Concluding remarks discuss the divergences and point out the limitations of a case study method. Future research is suggested.
Article
Story completion (SC) – where respondents are presented with the start of a story (the story ‘stem’ or ‘cue’) and asked to complete it – originally developed as a projective technique for clinical and research assessment. While SC continues to be used in this way, it has also evolved into a qualitative data generation technique, providing qualitative researchers with a creative and novel alternative to the self-report data typical of qualitative research. In this paper, we outline the growing interest in the method within psychotherapy and counselling psychology research and explain what we think the method offers to this field of research. To support psychotherapists and counselling psychologists in adding SC to their methodological toolkit, we also provide practical guidance on the design and implementation of SC, drawing on an example study exploring perceptions of ethnic/racial differences between a therapist and client.
Chapter
Long before psychology, bias has existed in science. From the beginning, concerns have been raised about the reliability, validity, and accuracy of social science research (Meehl, 1954). In this chapter, we define and discuss the origins of bias and how it can erode the scientific method. We focus specifically on bias in psychological research, theory, assessment, and treatment. We discuss the range of common misconceptions and misinformation that permeates the female offender literature. Finally, we conclude with ten myths about female offenders and offer guidelines for identifying bias and how to avoid it.
Chapter
In this chapter, we provide a theoretical and empirically based understanding of antisocial and psychopathic women. We begin by clarifying the differences between psychopathy, sociopathy, and ASPD, and then provide a historical perspective of hysteria. While the underlying personality of the female psychopath is paranoid, malignant hysteria is their predominant personality style (Gacono & Meloy, 1994). Overviews of the Hare Psychopathy Checklist-Revised (PCL-R), Personality Assessment Inventory (PAI), and Rorschach are offered as a refresher for experienced clinicians and as a resource for those who are not. Finally, we present group PAI and Rorschach data (also Trauma Symptom Inventory-2 [TSI-2]) for 337 female offenders, including subsets of psychopathic (N = 124) and non-psychopathic (N = 57) females. We make note of the differences between female and male psychopaths.
Article
Full-text available
It is essential to understand that CS validity research does not translate directly to the R-PAS. In this article we discuss essential issues to consider prior to using the R-PAS in an applied context.
Article
Full-text available
We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
Article
Full-text available
Exner’s (1995a) Issues and Methods in Rorschach Research provided a standard of care for conducting Rorschach research; however, the extent to which studies have followed these guidelines has not been examined. Similarly, meta-analytic approaches have been used to comment on the validity of Exner’s Comprehensive System (CS) variables without an evaluation as to the extent that individual studies have conformed to the proposed methodological criteria (Exner, 1995a; Gacono, Loving, & Bodholdt, 2001). In this article, 210 studies cited in recent meta-analyses by Mihura, Meyer, Dumitrascu, and Bombel (2013) were examined. The studies were analyzed in terms of being research on the Rorschach versus research with the Rorschach and whether they met the threshold of validity/generalizability related to specific Rorschach criteria. Only 104 of the 210 (49.5%) studies were research on the Rorschach and none met all five Rorschach criteria assessed. Trends and the need for more stringent methods when conducting Rorschach research were presented.
Article
Full-text available
Empirically analyzing empirical evidence. One of the central goals in any scientific endeavor is to understand causality. Experiments that seek to demonstrate a cause/effect relation most often manipulate the postulated causal factor. Aarts et al. describe the replication of 100 experiments reported in papers published in 2008 in three high-ranking psychology journals. Assessing whether the replication and the original experiment yielded the same result according to several criteria, they find that about one-third to one-half of the original findings were also observed in the replication study. Science, this issue: 10.1126/science.aac4716
Article
Full-text available
Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.
Article
Full-text available
We comment on the meta-analysis by Mihura, Meyer, Dumitrascu, and Bombel (2013), which examined the validity of scores in Exner's Comprehensive System (CS) for the Rorschach. First, we agree there is compelling evidence that 4 categories of cognitive scores-the "Rorschach cognitive quartet"-are related to cognitive ability/impairment and thought disorder. We now feel comfortable endorsing the use of these scores in some applied and research settings. Second, we conducted new meta-analyses (k = 44) for the 4 noncognitive Rorschach scores with highest validity in the Mihura et al. findings. Unlike Mihura et al., we included unpublished dissertations (although we did not attempt to exhaustively unearth all unpublished studies), calculated correlations instead of semipartial correlations, and used the Rorschach International Norms for a larger proportion of comparisons. Our validity estimates for the Suicide Constellation and Weighted Sum of Color were similar to or even higher than those of Mihura et al., although we concluded that support for the Suicide Constellation is limited and that Weighted Sum of Color probably does not measure its intended target. Our validity estimates for Sum Shading and the Anatomy and X-ray score were much lower than those of Mihura et al. We conclude that their meta-analysis accurately reflects the published literature, but their exclusion of unpublished studies led to substantial overestimates of validity for some and perhaps many Rorschach scores. Therefore, the evidence is presently insufficient to justify using the CS to measure noncognitive characteristics such as emotionality, negative affect, and bodily preoccupations.
Article
Full-text available
Female psychopathy has been conceived as a malignant form of hysteria organized at the borderline level of personality function. In this study, the PCL-R was used to assess psychopathy, and the Rorschach Comprehensive System, Extended Aggression Scores, Rorschach Defense Scales, Rorschach Oral Dependency, Trauma Content Index, and Primitive Modes of Relating scoring systems were used to examine psychodynamic variables in female psychopaths. Our findings support the conceptualization of female psychopathy as a malignant form of hysteria and lead us to suggest the need for modifying several PCL-R items with female offenders.
Book
From Guilford: From codevelopers of the Rorschach Performance Assessment System (R-PAS), this essential casebook illustrates the utility of R-PAS for addressing a wide range of common referral questions with adults, children, and adolescents. Compelling case examples from respected experts cover clinical issues (such as assessing psychosis, personality disorders, and suicidality); forensic issues (such as insanity and violence risk assessments, child custody proceedings, and domestic violence); and use in neuropsychological, educational, and other settings. Each tightly edited chapter details R-PAS administration, coding, and interpretation. Designed to replace the widely used Comprehensive System developed by John Exner, R-PAS has a stronger empirical foundation, is accurately normed for international use, is easier to learn and use, and reduces ambiguities in administration and coding, among other improvements.