Psychological Injury and Law
https://doi.org/10.1007/s12207-021-09412-2
Using theInventory ofProblems‑29 (IOP‑29) withtheInventory
ofProblems Memory (IOP‑M) inMalingering‑Related Assessments:
aStudy withaSlovenian Sample ofExperimental Feigners
MajaMašaŠömen1· StašaLesjak1· TejaMajaron1· LucaLavopa2· LucianoGiromini2· DonaldViglione3·
AnjaPodlesek1
Received: 1 February 2021 / Accepted: 6 May 2021
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
Abstract

A recently published article harshly criticized forensic practitioners operating in Slovenia for not including in their assessments any tests specifically designed to assess negative distortion (Areh, 2020). To promote better forensic assessment practice and stimulate future research on symptom and performance validity assessment in Slovenia, the current study translated the Inventory of Problems-29 (IOP-29; Viglione & Giromini, 2020) and its recently developed memory module (IOP-M; Giromini et al., 2020) into the Slovene language and tested their validity and effectiveness by conducting a simulation/analogue study. Among 150 volunteers, 50 completed the IOP-29 and IOP-M under standard instructions; 50 were asked to respond as if they suffered from depression; and 50 were asked to respond pretending to suffer from schizophrenia. Statistical analyses showed that (1) the IOP-29 discriminated well between simulators and honest test-takers (d ≥ 3.56), demonstrating the same effectiveness when inspecting feigned depression (sensitivity = 88%) and feigned schizophrenia (sensitivity = 88%) at an almost perfect specificity (98%); (2) the IOP-M identified 50% of simulators of depression and 80% of simulators of schizophrenia at perfect specificity (100%); and (3) combining the results of the IOP-29 with those of the IOP-M notably improved classification accuracy, demonstrating incremental validity. Taken together, these findings provide initial support for using the IOP-29 and IOP-M in applied settings in Slovenia. Limitations related to the design of the study and recommendations for further research are provided.
Keywords Malingering· Psychological assessment· Symptom validity tests· Performance validity tests· Inventory of
Problems
Malingered mental illness appears to be as old as mental illness itself (Resnick, 1984). According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013), the term "malingering" refers to "the intentional production of false or grossly exaggerated physical or psychological symptoms, motivated by external incentives, such as avoiding military duty, avoiding work, obtaining financial compensation, evading criminal prosecution, or obtaining drugs" (p. 726). Resnick (1984) also mentions purposes such as wanting to transfer out of prison, avoid civil litigation, or achieve hospital admission, the last of which is most frequent among the homeless. Among prisoners specifically, relocation, medication, compensation, attention, or amusement may all be reasons to feign mental illness. However, it must be emphasized that, while feigned symptomatology is present in both factitious disorder and malingering, the DSM-5 considers factitious disorder a genuine mental disorder because it is motivated by internal incentives, whereas malingering is considered deliberate behavior and as such not a form of psychopathology (Rogers & Bender, 2018).
Three different models have been proposed (Rogers & Bender, 2018) as explanations for malingering.
The adaptational model describes malingering as the result of a cost-benefit analysis, in which the malingerer predicts that the utility of malingering will be greater than that of any alternative solution. The pathogenic model hypothesizes that malingerers at first invent their symptoms because of an actual disability that they are experiencing and trying to control, and that only later do they lose control over the malingering. The criminological model describes malingering as an antisocial act that is more often committed by people with antisocial traits. Regardless of which of these explanations is most suitable, malingering should be recognized and prevented in its early stages, as it presents a tremendous cost to society. Indeed, undetected malingerers are awarded compensation or unnecessary psychiatric treatment, creating enormous financial expenses (Chafetz & Underhill, 2013); more broadly, malingering compromises the efficacy of the entire mental health system, as practitioners waste medical resources and time that they should dedicate to treating genuine patients (Viglione et al., 2017).
Malingering is not uncommon. Mittenberg et al. (2003) estimated that 29% of personal injury cases, 30% of disability cases, 19% of criminal cases, and 8% of medical cases probably involve malingering or symptom exaggeration. Larrabee et al. (2009) even suggested that the base rate of malingering in psychological injury cases is 40%, although Young (2015, 2019) has convincingly characterized this estimate as too high. Nevertheless, given that malingering can cause mistaken, life-changing decisions in high-stakes evaluations and the misuse of mental health and financial resources, forensic and other high-stakes evaluations should always assess the credibility of symptom presentations and claims of impairment (Bush et al., 2014).
Symptom andPerformance Validity
Assessment
To evaluate the credibility of presented complaints, forensic assessors rely on various techniques and tests. A widely accepted tool in this context is the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, et al., 1992; Rogers, Kropp, et al., 1992; for an updated version, see also Rogers et al., 2010, and Rogers et al., 2020). It is a comprehensive interview measure that is frequently administered to evaluate response styles associated with intentional distortion of self-reported psychiatric symptoms. Another widely used interview in the field is the Miller Forensic Assessment of Symptoms Test (M-FAST; Miller, 2001). In contrast to the SIRS, the 25-item M-FAST is typically used for screening purposes only.
In addition to structured interviews, practitioners usually administer both self-report symptom validity tests (SVTs) and performance validity tests (PVTs). An SVT is an instrument designed to evaluate the extent to which test-takers complain of symptoms or problems that do not exist in the real clinical world or that occur very rarely (sometimes called "pseudosymptoms"). An example is the Structured Inventory of Malingered Symptomatology (SIMS; Smith & Burger, 1997), a 75-item, true/false questionnaire covering a broad spectrum of improbable symptoms concerning conditions such as psychosis, neurological impairment, and affective disorders. Other examples are the embedded validity scales in multiscale personality inventories such as the Minnesota Multiphasic Personality Inventory (MMPI-3; Ben-Porath & Tellegen, 2020a, b), Personality Assessment Inventory (PAI; Morey, 1991, 2007), and Millon Clinical Multiaxial Inventory (MCMI-IV; Millon et al., 2015).
PVTs, in contrast, are performance-based measures of cognitive ability that are typically aimed at detecting poor cooperation, motivation, or effort. Examples of PVTs are the Test of Memory Malingering (TOMM; Tombaugh, 1996), the Victoria Symptom Validity Test (VSVT; Slick et al., 2005), and the Word Memory Test (WMT; Green et al., 1996). The main reason for their efficacy in discriminating valid from invalid cognitive symptom presentations is that most feigners do not realize that even brain-injured patients typically perform quite well on simple recognition tasks, and many mistakenly believe that severe memory problems accompany a variety of different mental health disorders. As such, feigners tend to exert inadequate or less than optimal effort and often perform more poorly than bona fide patients.
During the past few years, research has shown that combining multiple methods with different symptom validity assessment approaches in a single case can yield substantial incremental validity (Boone, 2013; Erdodi et al., 2017; Giger et al., 2010; Giromini et al., 2019a, b, c; Larrabee, 2008). The underlying assumption is that different tools may tap different feigning strategies, so that using multiple, diverse tests might provide incremental validity compared with using one test alone or two similar measures that share the same method or feigning strategies. Statistically, the lower the correlation between any two tests, the greater the potential for incremental validity and better prediction (Bush et al., 2014). In addition, because of the large amount of variance shared by measures using the same method, tests that differ in the method employed are preferable. Practitioners are therefore encouraged to include multiple SVTs and PVTs in their assessments, so as to provide incremental validity and increased signal detection (Sherman et al., 2020).
The Inventory ofProblems‑29 andInventory
ofProblems—Memory
Because malingerers differ in their preferred strategies of defiance and different situations induce different approaches to malingering, Viglione and Giromini developed two measures that contain multiple detection strategies, namely, the Inventory of Problems-29 (IOP-29; Viglione et al., 2017; Viglione & Giromini, 2020) and the Inventory of Problems-Memory (IOP-M; Giromini et al., 2020). The IOP-29 is typically conceived of as an SVT, and the IOP-M is a forced-choice PVT; each takes about 10 min to complete. When used together, the IOP-29 and IOP-M might offer a quick yet effective, multimethod validity check (Giromini et al., 2020).
The SVT component of the "IOP combo" is the IOP-29, a 29-item, self-administered test focused on the credibility of various symptom presentations (Viglione & Giromini, 2020). Two of its items have an open-ended format, whereas the other 27 offer three response options, i.e., "true," "false," and "doesn't make sense." Unlike many other measures used in the field, its chief feigning score, the False Disorder probability Score (FDS), does not compare the responses of the test-taker against a single set of normative reference values obtained from a large sample of non-clinical responders. Instead, it considers two different sets of reference values, one coming from bona fide patients and one coming from experimental simulators. A low score suggests that the IOP-29 protocol under examination closely resembles those in the bona fide reference sample, and a high score suggests that it closely resembles those in the simulator reference sample. Derived from logistic regression, the FDS thus is a probability score that ranges from zero to one, with higher scores reflecting less credible symptom presentations. According to the test manual (Viglione & Giromini, 2020), a cutoff of FDS ≥ 0.50 should offer sensitivity and specificity values of about 80% across different conditions. A higher cutoff, of FDS ≥ 0.65, would yield a specificity of about 90%, and a lower cutoff, of FDS ≥ 0.30, would yield a sensitivity of about 90%. Research, so far, has largely supported these claims (e.g., Gegner et al., 2021; Giromini et al., 2018; Giromini et al., 2019a, b, c; Ilgunaite et al., 2020; Roma et al., 2019; Winters et al., 2020). The IOP-29 generates validity results similar to those of the MMPI and PAI validity scales (Viglione et al., 2017), even outperforming the SIMS (Giromini et al., 2018) and the Rey Fifteen Item Test (Gegner et al., 2021), and provides incremental validity when combined with the TOMM (Giromini et al., 2019a, b, c) and the MMPI-2 (Giromini et al., 2019a, b, c). Thus, in their introductory description and conceptualization of the field of psychological injury and law, the Editor-in-Chief of Psychological Injury and Law and his colleagues referred to the IOP-29 as "a newer stand-alone SVT that has the required psychometric properties for use in forensic disability and related assessments. Its research profile is accumulating, a hallmark for use in legal settings" (Young et al., 2020; p. 9).
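To make the mechanics of the FDS concrete, the sketch below shows how a logistic-regression-derived probability score is computed and then thresholded at the published cutoffs. The actual FDS item weights are proprietary and are not reproduced here; the function names and all numeric values are illustrative placeholders, not the real scoring key.

```python
# Minimal sketch of a logistic-regression-derived probability score,
# thresholded at the cutoffs published for the IOP-29 FDS. The weights,
# intercept, and item scores below are made up for illustration only.
import math

def probability_score(item_scores, weights, intercept):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    linear = intercept + sum(w * x for w, x in zip(weights, item_scores))
    return 1.0 / (1.0 + math.exp(-linear))

def classify_fds(fds, cutoff=0.50):
    """Cutoffs per the manual: >= .50 balanced; >= .65 favors
    specificity; >= .30 favors sensitivity (Viglione & Giromini, 2020)."""
    return "non-credible" if fds >= cutoff else "credible"

# Example with arbitrary numbers: three responses, arbitrary weights.
fds = probability_score([1, 0, 1], [0.8, -0.4, 1.2], intercept=-1.5)
print(round(fds, 2), classify_fds(fds))   # balanced cutoff (.50)
print(classify_fds(fds, cutoff=0.65))     # conservative cutoff (.65)
```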
The PVT component of the "IOP combo" is the IOP-M (Giromini et al., 2020), a performance validity test module designed to be used in combination with the otherwise free-standing symptom validity test, the IOP-29. Its main purpose is to detect feigned memory deficits or, more broadly, feigned cognitive impairment. It is administered immediately after completion of the IOP-29 and contains 34 two-alternative forced-choice items testing implicit recognition. The results of the developmental study conducted by Giromini et al. (2020), in which 192 participants were instructed to respond honestly (honest controls) and 168 were instructed to feign mental illness (experimental simulators), suggested that the IOP-M has the potential to yield incremental validity and that it might improve classification accuracy over using the IOP-29 alone. In fact, only 6 of the 168 simulators (i.e., less than 4%) passed both the IOP-29 and IOP-M, and only 3 of the 192 honest responders (i.e., less than 2%) failed both. However, unlike the IOP-29, the IOP-M has not yet been thoroughly investigated. To our knowledge, only two studies to date, one in Australia by Gegner et al. (2021) focusing on feigned mild traumatic brain injury (mTBI) and one in Brazil by de Francisco Carvalho et al. (2021) focusing on post-traumatic stress disorder (PTSD), have replicated the initial findings of Giromini et al. (2020). As such, additional research on the effectiveness of the IOP-M would be beneficial.
The Aim ofthePresent Study
A recent article by Areh (2020) pointed out that forensic assessors in Slovenia pay little or no attention to possible malingering. In a quite provocative article (its title is Forensic assessment may be based on common sense assumptions rather than science), he summarized the psychological tests used most frequently in 166 forensic personality assessments conducted in Slovenia in the period 2003–2018 and argued that "possible malingering of the person being evaluated was not detected" (p. 1). In fact, of the 166 inspected evaluations, 42 concerning criminal cases and 124 concerning civil cases, none included any stand-alone SVTs or PVTs, and very few included any broadband personality inventories that incorporate embedded measures of response style. For instance, the MMPI was used in only 3 evaluations, representing less than 2% of the total. As such, he criticized Slovenian forensic practitioners for not including in their assessments "specific psychological instruments used to detect malingering" (p. 7). It should be noted, however, that a brief literature search revealed that commonly used SVTs and PVTs such as the SIMS or TOMM have not been researched or validated in Slovenia. Thus, providing Slovenian practitioners with an empirically sound measure of negative response bias in the Slovene language would be beneficial.
To respond to this call, we developed a Slovene adaptation of the IOP-29 and IOP-M and tested their joint validity. The IOP-29 has already been adapted into numerous other languages: English, German, French, Dutch, Italian, Spanish, Brazilian and European Portuguese, traditional and simplified Chinese, and Lithuanian (www.iop-test.com). Published studies have shown solid support for its original English version and promising results for its adaptations into other languages (e.g., Ilgunaite et al., 2020). However, the IOP-M has been cross-validated only by Gegner et al. (2021), and no Slovene version of either IOP instrument was available when we designed our study. The primary purpose of our research project was thus to determine how a healthy Slovene-speaking population would respond to both tests and how many simulators presenting themselves as psychologically injured would be detected by the IOP instruments when both test components (i.e., the IOP-29 and IOP-M) are administered. We expected results similar to those obtained with samples speaking other languages, that is, that the IOP-29 and IOP-M would each discriminate simulators from an honest non-patient sample, and that the IOP-M would identify simulators who were not identified by the IOP-29.
Method
Participants
We set out to test whether the IOP-29 can effectively discriminate simulators of depression and schizophrenia from honest responders. Based on previous research (Giromini et al., 2020), no differences were expected between simulators of depression and schizophrenia. Thus, considering an alpha of 0.05, a power of 0.80, and an allocation ratio of 2 to 1 (two simulator groups, one control group), it was determined that 144 participants would be needed (48 in one group and 96 in the other) to detect a Cohen's d effect size of 0.50. Accordingly, we aimed to recruit approximately 50 participants for the honest group and approximately 50 participants for each of the two simulator groups.
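For readers who wish to retrace this sample-size computation, the snippet below reproduces it with the power module of statsmodels. The authors do not report which software they used, so this tooling choice is ours and serves only as an illustration of the stated parameters.

```python
# Reproducing the reported power analysis: alpha = .05, power = .80,
# allocation ratio 2:1, two-sided test, Cohen's d = 0.50.
import math
from statsmodels.stats.power import TTestIndPower

n1 = TTestIndPower().solve_power(effect_size=0.50, alpha=0.05,
                                 power=0.80, ratio=2.0,
                                 alternative='two-sided')
n1 = math.ceil(n1)        # smallest whole control group: 48
n2 = 2 * n1               # pooled simulator groups: 96
print(n1, n2, n1 + n2)    # 48 96 144, matching the figure in the text
```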
The overall sample included 150 Slovenian participants, aged 18 to 75 years (M = 30.5, SD = 13.3), 57 (38%) of whom were men. To validate both measures, participants were randomly assigned to three groups, with one responding honestly and the other two attempting to convince the examiner that they suffered from depression or schizophrenia following a work-related accident causing physical pain. The three groups, each with 50 participants, did not differ significantly with regard to gender, χ2(2) = 0.68, p = 0.71; age, F(2, 147) = 1.24, p = 0.29; or education, χ2(2) = 5.03, p = 0.08. The "honest" group included 19 (38%) men and had an average age of 28.2 years (SD = 11.1); 27 (54%) of its participants had a high school education or less, and 23 (46%) had a bachelor's degree or more. The simulators of depression had an average age of 32.3 years (SD = 15.5), with 34 (68%) participants having a high school education or less and 16 (32%) having a bachelor's degree or more; 17 (34%) of them were men. The third group, which simulated schizophrenia, had an average age of 27.5 years (SD = 11.1) and included 21 (42%) men; 23 (46%) of its participants had a high school education or less, and 27 (54%) had a bachelor's degree or more.
Measures
As with other linguistic adaptations of the IOP-29 and IOP-M, our Slovenian versions were developed following the classic translation/back-translation procedure (Brislin, 1980; Geisinger, 2003; Van de Vijver & Hambleton, 1996). All participants were then administered both the IOP-29 and the IOP-M in the Slovene language.
Inventory of Problems-29 (Viglione & Giromini, 2020). The IOP-29 is a self-administered test designed to assist practitioners in evaluating the credibility of symptom presentations related to various psychiatric or cognitive disorders. Its main purpose is to discriminate bona fide patients from feigners. It is composed of 29 items and is administered in a classic paper-and-pencil format or online, using a tablet or a PC. Items address diverse mental health symptoms, attitudes toward one's own condition, test-related behaviors, claims of impairment, and problem-solving abilities. As noted above, the chief feigning measure of the IOP-29 is the FDS, a probability value derived from logistic regression, which compares the responses of the test-taker against those provided by a group of bona fide patients and those provided by a group of experimental feigners (Viglione & Giromini, 2020). The higher the score, the lower the credibility of the presentation. According to the test authors, the cutoff score of FDS ≥ 0.50 ensures the best balance between sensitivity and specificity (Giromini et al., 2018; Viglione & Giromini, 2020; Viglione et al., 2017).
Table 1 Means (and standard deviations in parentheses) of IOP-29 FDS and IOP-M scores in the different groups

                      Honest        Simulator of depression   Simulator of schizophrenia
IOP-29 FDS            0.16 (0.13)   0.75 (0.19)               0.78 (0.21)
IOP-M                 33.5 (0.8)    28.6 (4.4)                23.1 (6.7)

Table 2 Classification accuracy of IOP-29 FDS and IOP-M

                      Honest        Simulator of depression   Simulator of schizophrenia
IOP-29
  FDS < .50           49 (98%)ᵃ     6 (12%)                   6 (12%)
  FDS ≥ .50           1 (2%)        44 (88%)ᵇ                 44 (88%)ᵇ
IOP-M
  # of correct ≥ 30   50 (100%)ᵃ    25 (50%)                  10 (20%)
  # of correct < 30   0 (0%)        25 (50%)ᵇ                 40 (80%)ᵇ

ᵃ Specificity. ᵇ Sensitivity
Inventory of Problems-Memory (Giromini et al., 2020). The IOP-M is administered immediately after completion of the IOP-29. It consists of 34 two-alternative forced-choice items. Each item presents two words or brief statements, one that was part of the IOP-29 item content (target) and one that was not (foil). To preserve the standard IOP-29 administration procedure, incidental memory is tested: there is no mention of a subsequent memory test or any expectation to remember the IOP-29 items. Based on the findings of Giromini et al. (2020), at least 30 of the 34 IOP-M items should be answered correctly by individuals who do not suffer from relatively severe cognitive problems. As such, if the total number of IOP-M items answered correctly is lower than 30, the performance is considered non-credible. Conversely, a total score of ≥ 30 is interpreted as a credible result.
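The pass/fail rule just described reduces to a simple count-and-threshold check. The following sketch implements that rule as we understand it from the text; the function and variable names are ours and are not part of the official IOP-M scoring materials.

```python
# The IOP-M decision rule as described above: 34 two-alternative
# forced-choice items, with performance flagged as non-credible when
# fewer than 30 items are answered correctly.
def score_iop_m(responses, targets):
    """Count correct target identifications and apply the < 30 cutoff."""
    assert len(responses) == len(targets) == 34
    n_correct = sum(r == t for r, t in zip(responses, targets))
    return n_correct, ("non-credible" if n_correct < 30 else "credible")

# Example: a respondent who picks the target on 28 of the 34 items.
answers = ["target"] * 28 + ["foil"] * 6
key = ["target"] * 34
print(score_iop_m(answers, key))  # (28, 'non-credible')
```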
Procedure
The study was approved by the Ethical Committee of the Faculty of Arts, University of Ljubljana. Participants were recruited from the general population via convenience snowball sampling. That is, we first distributed flyers at the faculty and invited our family members, friends, and acquaintances to participate in the study. We also asked our participants to help us spread the word and invite their acquaintances to participate as well, if there was interest. Although participation was completely voluntary, all participants were informed that, upon the completion of data analysis, three of them would receive a €20 Amazon voucher. Individuals who met the inclusion criteria (having Slovenian nationality, not having any psychiatric or cognitive disorders, and no familiarity with the IOP-29) were then asked to sign an informed consent form and were divided into three groups of 50.
The first group was asked to answer as honestly as possible, the second group was instructed to simulate schizophrenia, and the third group was instructed to simulate depression.
Specifically, participants assigned to the schizophrenia and depression groups were presented with a short vignette describing a situation in which being diagnosed with a mental illness would lead to an economic advantage, and they were instructed to take the tests as if they wanted to convince the examiner that they were experiencing symptoms associated with schizophrenia or depression, respectively. A list of symptoms of the disorder to be feigned was presented. Additionally, both groups were cautioned not to overdo their expression of the disorder, so as not to be detected as feigners. All participants were administered a short sociodemographic questionnaire in addition to the IOP-29 and IOP-M. For each participant, FDS values were calculated using the official IOP-29 scoring program, available at www.iop-test.com, and IOP-M errors were counted using a scoring sheet created by the test authors (Giromini et al., 2020).
Results
Table1 shows the scores obtained on the two instruments
in different groups of participants. As expected, the IOP-
29 FDS values statistically significantly differed among the
three groups, F(2, 147) = 193.52, p < 0.001. More specifi-
cally, Bonferroni corrected post hoc tests revealed that the
“honest” group scored notably lower than both simulators
Fig. 3 Scatterplot IOP-29
versus IOP-M scores within the
“honest” group
IOP-M (# of correct)
3530252015105
IOP-29 FDS
1.00
.90
.80
.70
.60
.50
.40
.30
.20
.10
.00
3530252015105
IOP-29 FDS
1.00
.90
.80
.70
.60
.50
.40
.30
.20
.10
.00
Fig. 4 Scatterplot IOP-29 versus IOP-M scores within the simulators of depression
Psychological Injury and Law
1 3
of depression (d = 3.69, p < 0.001) and simulators of schiz-
ophrenia (d = 3.56, p < 0.001), whereas the two simulator
groups did not statistically significantly differ from each
other (d = 0.14, p ≈ 1.00). The IOP-M yielded statistically
significant group differences as well, F(2, 147) = 62.93,
p < 0.001. In this case, however, all pairwise comparisons
were statistically significant: the “honest” group had a nota-
bly higher IOP-M score than both the simulators of depres-
sion (d = 1.55, p < 0.001) and the simulators of schizophre-
nia (d = 2.18, p < 0.001), and the simulators of depression
scored higher than the simulators of schizophrenia (d = 0.97,
p < 0.001).
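As an illustration of the analysis pipeline reported above (one-way ANOVA, Bonferroni-corrected pairwise comparisons, and Cohen's d with a pooled standard deviation), the sketch below runs the same steps on synthetic data generated from the Table 1 means and standard deviations. The per-participant raw scores are not public, so the exact statistics it prints will differ from those reported.

```python
# One-way ANOVA across the three groups, Bonferroni-corrected pairwise
# t tests, and pooled-SD Cohen's d, on synthetic stand-in data.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    pooled = np.sqrt(((len(a) - 1) * np.var(a, ddof=1) +
                      (len(b) - 1) * np.var(b, ddof=1)) /
                     (len(a) + len(b) - 2))
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(0)
honest = rng.normal(0.16, 0.13, 50)   # FDS means/SDs from Table 1
dep = rng.normal(0.75, 0.19, 50)
scz = rng.normal(0.78, 0.21, 50)

f, p = stats.f_oneway(honest, dep, scz)
print("ANOVA:", round(f, 2), round(p, 4))

pairs = [("honest vs dep", honest, dep),
         ("honest vs scz", honest, scz),
         ("dep vs scz", dep, scz)]
for label, a, b in pairs:
    t, p_raw = stats.ttest_ind(a, b)
    p_bonf = min(1.0, p_raw * len(pairs))   # Bonferroni correction
    print(label, "d =", round(cohens_d(b, a), 2), "p =", round(p_bonf, 4))
```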
In terms of classification accuracy, the standard IOP-29 cutoff of FDS ≥ 0.50 yielded a specificity of 0.98 and a sensitivity of 0.88 (Table 2). Notably, exactly the same sensitivity emerged when inspecting feigned depression and feigned schizophrenia. The IOP-M showed perfect specificity, but its sensitivity was notably higher in the schizophrenia simulator group (0.80) than in the depression simulator group (0.50). To offer a better appreciation of the performance of both the IOP-29 and IOP-M, Figs. 1 and 2 show the full frequency distributions of the IOP-29 FDS and IOP-M scores across the three groups.

Fig. 1 Distribution of IOP-29 FDS scores by group (honest, simulator of depression, simulator of schizophrenia)

Fig. 2 Distribution of IOP-M scores by group (honest, simulator of depression, simulator of schizophrenia)
To test incremental validity, three scatterplots were examined (Figs. 3, 4, and 5). As shown in Fig. 3, the only false positive generated by the IOP-29 FDS was correctly classified as a (true) negative by the IOP-M. Figure 4 shows that, in the depression simulator group, four of the six false-negative classifications generated by the IOP-29 FDS were correctly classified as (true) positives by the IOP-M. Likewise, Fig. 5 shows that, in the schizophrenia simulator group, four of the six false-negative classifications generated by the IOP-29 FDS were correctly classified as (true) positives by the IOP-M. Also noteworthy, in the "honest" group, no one failed both the IOP-29 and IOP-M, and in each of the simulator subgroups, only two cases out of 50 passed both tests. Overall, the IOP-29 and IOP-M misclassified the same case in 4 out of 150 instances, i.e., 2.7%.

Fig. 3 Scatterplot of IOP-29 FDS versus IOP-M scores within the "honest" group

Fig. 4 Scatterplot of IOP-29 FDS versus IOP-M scores within the simulators of depression

Fig. 5 Scatterplot of IOP-29 FDS versus IOP-M scores within the simulators of schizophrenia
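The classification arithmetic behind Table 2 and the two-test combination rule discussed above can be restated in a few lines. The counts below are taken directly from the article; the helper function and its names are ours.

```python
# Per-test flags and the joint "fail both / pass both" logic used to
# examine incremental validity of the IOP-29 and IOP-M combination.
def classify(fds, iop_m_correct, fds_cutoff=0.50, m_cutoff=30):
    """Return (failed_iop29, failed_iopm); a case is jointly flagged
    only when both flags are True."""
    return fds >= fds_cutoff, iop_m_correct < m_cutoff

# Example: a case flagged by the IOP-M but not by the IOP-29.
print(classify(fds=0.42, iop_m_correct=27))   # (False, True)

# Single-test accuracy implied by Table 2 (n = 50 per group):
spec_iop29 = 49 / 50       # 98% of honest cases below the FDS cutoff
sens_iop29 = 44 / 50       # 88% of simulators flagged in each group
sens_iopm_dep = 25 / 50    # 50% of depression simulators flagged
sens_iopm_scz = 40 / 50    # 80% of schizophrenia simulators flagged

# Combined rule reported in the text: no honest case failed both tests,
# and only 2 cases per simulator group passed both.
joint_misclassified = (0 + 2 + 2) / 150
print(spec_iop29, sens_iop29, sens_iopm_dep, sens_iopm_scz)
print(round(joint_misclassified, 3))          # 0.027, i.e., ~2.7%
```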
Discussion
The aim of our study was to test the validity of our Slovenian adaptation of the Inventory of Problems-29 (IOP-29; Viglione & Giromini, 2020) and Inventory of Problems-Memory (IOP-M; Giromini et al., 2020). Our statistical analyses showed that, when comparing the mean IOP-29 FDS values of the honest group with those of the two simulating groups, the IOP-29 discriminated significantly between the groups, whereas no differences emerged between the two simulating groups. The average IOP-29 FDS values found in the two simulating groups (0.75 for the depression simulators and 0.78 for the schizophrenia simulators) are similar to those observed in experimental simulator samples in other studies (e.g., 0.77 in Ilgunaite et al., 2020; 0.82 in Giromini et al., 2019a, c). The honest group also differed from both simulator groups on the IOP-M, but here the pairwise comparison between the two simulating groups was significant as well. More importantly, combining the results of the IOP-29 with those of the IOP-M remarkably increased classification accuracy, both when inspecting feigned depression and when inspecting feigned schizophrenia. All in all, our study thus provides additional support for the growing research base for using the IOP-29 and IOP-M in applied settings. Moreover, this article fills an important gap within the Slovenian forensic context (Areh, 2020), given that (to our knowledge) no stand-alone SVTs or PVTs with research support were available to Slovenian practitioners prior to this publication.
The effect sizes generated by the IOP-29 when comparing the honest group against the two simulator groups were very large, d ≥ 3.56 (for a characterization of d values in malingering research, see Rogers et al., 2003). Moreover, the Slovenian IOP-29 yielded excellent classification accuracy both in terms of specificity (98%) and sensitivity (88% in both simulating groups), proving to be very accurate in our study. Also noteworthy, there were no significant differences between simulators who faked depression and those who faked schizophrenia. This finding is consistent with a previous study by Giromini et al. (2019c), in which the IOP-29 scores produced by feigners of depression, schizophrenia, mTBI, and PTSD did not significantly differ from each other. As such, our study seems to confirm Viglione and Giromini's (2020) claim that the IOP-29 likely performs similarly well when used in remarkably different contexts and with different symptom presentations. Such generalizability of a single cutoff score across different disorders, cultures, and languages has rarely if ever been demonstrated for other SVTs.
Another fact that deserves mention is that, in our investigation, the IOP-29 performed as well as it previously did in other simulation studies conducted in Italy (Giromini et al., 2019b), Lithuania (Ilgunaite et al., 2020), Portugal (Giromini et al., 2019a), the UK (Winters et al., 2020), and Australia (Gegner et al., 2021). To some extent, our investigation thus also contributes to the growing empirical research suggesting that the IOP-29 can be applied cross-culturally without any significant adjustments to its FDS formula and cutoffs.
Compared with the IOP-29, the IOP-M showed higher specificity (100%) but lower sensitivity (≈ 90% for the IOP-29; 80% for the IOP-M). Furthermore, the sensitivity of the IOP-M was notably higher for simulators of schizophrenia (80%) than for simulators of depression (50%). This finding could possibly be attributed to the nature of the IOP-M and the difference between the two conditions. More specifically, one might speculate that, while people simulating schizophrenia likely linked symptoms of abnormal reality interpretation with memory deficits, depression simulators perhaps did not consider memory deficits to be a common symptom of depression, and this may be why the difference between the two simulating groups emerged on the IOP-M. Nevertheless, the examination of scatterplots revealed that the combination of both test components (IOP-29 and IOP-M) yielded promising classification results, both when testing feigned depression and when testing feigned schizophrenia. Indeed, the fact that, with only two exceptions per group, at least one of the test components identified the feigners' results as invalid is very encouraging. Similarly important, the only false-positive result generated by the IOP-29 was correctly classified as a credible performance by the IOP-M. Thus, adding the IOP-M does seem to contribute to both the sensitivity and specificity of the original IOP-29, consistent with Giromini et al. (2020) and Gegner et al. (2021).
Several limitations, however, should be underscored. First, because we did not include any other SVTs or PVTs in our study, comparative validity could not be investigated. On the other hand, the lack of such instruments in the Slovene language is exactly one of the primary reasons why this study was initiated. Second, our findings are limited by the fact that we did not have access to any clinical samples, so that our study is essentially a sensitivity study. Actual overall classification accuracy would likely be lower if clinical samples were employed, because of reduced specificity. Thus, research with genuinely impaired individuals and feigners is needed to better appreciate the specificity of the Slovenian IOP-29 and IOP-M. Third, this study investigated only feigned depression and feigned schizophrenia, so additional research is needed to establish the extent to which the Slovenian IOP-29 and IOP-M could be used in applied settings in which other problems (e.g., pain, mTBI symptoms, etc.) may be feigned. Fourth, as is the case for all simulation/analogue studies, the ecological validity of our investigation may be questioned, as there is no way to assess whether real-life malingerers would adopt the same strategies our experimental simulators utilized when pretending to be mentally ill. Additionally, group assignment was a quasi-independent variable: we did not manipulate valid versus invalid responding itself but randomly assigned participants to the given instructions. The internal validity of such a research paradigm depends on the fidelity with which participants carry out those instructions (Rai et al., 2019). In a study using the experimental malingering paradigm, An et al. (2019) found that a group asked to feign cognitive decline nonetheless performed well, possibly because they strove to make their feigning credible as the instructions required, and that some participants in the control group performed worse than their abilities would predict, possibly due to lack of interest and low effort; both effects would lead to an underestimation of the difference between the control and simulating groups. Unfortunately, we could not rely on established SVTs or PVTs as criteria for monitoring participants' compliance with the given instructions, as such instruments do not exist in Slovene. In the absence of a gold standard, we would recommend using tests such as the TOMM or the Rey Fifteen Item Test, which are not language-based, to determine the extent to which simulators follow the feigning instructions. Another option would be to include bilingual participants who speak both Slovene and English (or another language, e.g., Italian) and give them the IOP in Slovene and another established SVT (e.g., the MMPI, SIMS, etc.) in the other language to check their compliance with the feigning instructions.
Nevertheless, this study is the first to independently replicate Giromini et al.'s (2020) encouraging findings concerning the potential utility of the IOP-M in investigating feigned depression and feigned schizophrenia, and the first to contribute to the study of the IOP-29 and IOP-M within a Slovenian sample. Given the encouraging results, we invite Slovenian researchers and practitioners to contact the corresponding author should they be interested in using or further researching the IOP instruments.
Funding The authors acknowledge the financial support of the Slovenian Research Agency (research core funding No. P5-0110).

Data Availability For test security reasons, our data will not be placed in an open access repository. However, we will be willing to share it with interested readers upon reasonable request.
Declarations

Ethics Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Ethical Committee of the Faculty of Arts, University of Ljubljana, Approval No. 172-2019.

Informed Consent Informed consent was obtained from all individual participants included in the study.

Conflict of Interest The fifth and sixth authors declare that they own a share in the corporation (LLC) that holds the rights to the Inventory of Problems. The other five authors declare that they have no conflict of interest to report.
References
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association.

An, K. Y., Charles, J., Ali, S., Enache, A., Dhuga, J., & Erdodi, L. A. (2019). Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test-Short Form using an experimental malingering paradigm. Journal of Clinical and Experimental Neuropsychology, 41(1), 15–25. https://doi.org/10.1080/13803395.2018.1483488

Areh, I. (2020). Forensic assessment may be based on common sense assumptions rather than science. International Journal of Law and Psychiatry, 71, 101607. https://doi.org/10.1016/j.ijlp.2020.101607

Ben-Porath, Y. S., & Tellegen, A. (2020a). MMPI-3 Manual for administration, scoring, and interpretation. University of Minnesota Press.

Ben-Porath, Y. S., & Tellegen, A. (2020b). MMPI-3 Technical manual. University of Minnesota Press.

Boone, K. B. (2013). Clinical practice of forensic neuropsychology. New York, NY: Guilford.

Brislin, R. W. (1980). Translation and content analysis of oral and written material. In H. C. Triandis & J. W. Berry (Eds.), Handbook of cross-cultural psychology (Vol. 1, pp. 389–444). Boston, MA: Allyn & Bacon.

Bush, S. S., Heilbronner, R. L., & Ruff, R. M. (2014). Psychological assessment of symptom and performance validity, response bias, and malingering: Official position of the Association for Scientific Advancement in Psychological Injury and Law. Psychological Injury and Law, 7(3), 197–205. https://doi.org/10.1007/s12207-014-9198-7

de Francisco Carvalho, L., Reis, A., Colombarolli, M. S., Pasian, S. R., Miguel, F. K., Erdodi, L. A., Viglione, D. J., & Giromini, L. (2021). Discriminating feigned from credible PTSD symptoms: A validation of a Brazilian version of the Inventory of Problems-29 (IOP-29). Psychological Injury and Law. Advance online publication. https://doi.org/10.1007/s12207-021-09403-3

Chafetz, M., & Underhill, J. (2013). Estimated costs of malingered disability. Archives of Clinical Neuropsychology, 28(7), 633–639. https://doi.org/10.1093/arclin/act038

Erdodi, L. A., Abeare, C. A., Lichtenstein, J. D., Tyson, B. T., Kucharski, B., Zuccato, B. G., & Roth, R. M. (2017). Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of non-credible responding: The third generation of embedded performance validity indicators. Psychological Assessment, 29(2), 148–157. https://doi.org/10.1037/pas0000319
Gegner, J., Erdodi, L. A., Giromini, L., Viglione, D. J., Bosi, J., & Brusadelli, E. (2021). An Australian study on feigned mTBI using the Inventory of Problems-29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Applied Neuropsychology: Adult. Advance online publication. https://doi.org/10.1080/23279095.2020.1864375

Geisinger, K. F. (2003). Testing and assessment in cross-cultural psychology. In J. R. Graham, J. A. Naglieri, & I. B. Weiner (Eds.), Handbook of psychology (Vol. 10): Assessment psychology (pp. 95–118). Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/0471264385.wei1005

Giger, P., Merten, T., Merckelbach, H., & Oswald, M. (2010). Detection of feigned crime-related amnesia: A multi-method approach. Journal of Forensic Psychology Practice, 10, 440–463. https://doi.org/10.1080/15228932.2010.489875

Giromini, L., Barbosa, F., Coga, G., Azeredo, A., Viglione, D. J., & Zennaro, A. (2019a). Using the Inventory of Problems-29 (IOP-29) with the Test of Memory Malingering (TOMM) in symptom validity assessment: A study with a Portuguese sample of experimental feigners. Applied Neuropsychology: Adult, 27(6), 504–516. https://doi.org/10.1080/23279095.2019.1570929

Giromini, L., Carfora Lettieri, S., Zizolfi, S., Zizolfi, D., Viglione, D., Brusadelli, E., Perfetti, B., Angiola di Carlo, D., & Zennaro, A. (2019b). Beyond rare-symptoms endorsement: A clinical comparison simulation study using the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) with the Inventory of Problems-29 (IOP-29). Psychological Injury and Law, 12, 212–224. https://doi.org/10.1007/s12207-019-09357-7

Giromini, L., Viglione, D., Pignolo, C., & Zennaro, A. (2018). A clinical comparison, simulation study testing the validity of SIMS and IOP-29 with an Italian sample. Psychological Injury and Law, 11, 340–350. https://doi.org/10.1007/s12207-018-9314-1

Giromini, L., Viglione, D. J., Pignolo, C., & Zennaro, A. (2019c). An Inventory of Problems-29 sensitivity study investigating feigning of four different symptom presentations via malingering experimental paradigm. Journal of Personality Assessment, 102, 563–572. https://doi.org/10.1080/00223891.2019.1566914

Giromini, L., Viglione, D., Zennaro, A., Maffei, A., & Erdodi, L. A. (2020). SVT meets PVT: Development and initial validation of the Inventory of Problems-Memory (IOP-M). Psychological Injury and Law, 13, 261–274. https://doi.org/10.1007/s12207-020-09385-8
Green, P., Allen, L. M., & Astner, K. (1996). The Word Memory Test: A user's guide to the oral and computer administered forms. CogniSyst Inc.

Ilgunaite, G., Giromini, L., Bosi, J., Viglione, D. J., & Zennaro, A. (2020). A clinical comparison simulation study using the Inventory of Problems-29 (IOP-29) with the Center for Epidemiologic Studies Depression Scale (CES-D) in Lithuania. Applied Neuropsychology: Adult. Advance online publication. https://doi.org/10.1080/23279095.2020.1725518

Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection of malingering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22, 666–679. https://doi.org/10.1080/13854040701494987

Larrabee, G. J., Millis, S. R., & Meyers, J. E. (2009). 40 plus or minus 10, a new magical number: Reply to Russell. The Clinical Neuropsychologist, 23, 841–849. https://doi.org/10.1080/13854040902796735

Miller, H. A. (2001). M-FAST: Miller Forensic Assessment of Symptoms Test professional manual. Psychological Assessment Resources Inc.

Millon, T., Grossman, S., & Millon, C. (2015). Millon Clinical Multiaxial Inventory-IV (MCMI-IV) manual. Bloomington, MN: NCS Pearson.

Mittenberg, W., Patton, C., Morgan, E., & Condit, D. (2003). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102. https://doi.org/10.1076/jcen.24.8.1094.8379

Morey, L. C. (1991). Personality Assessment Inventory: Professional manual. Odessa, FL: Psychological Assessment Resources.

Morey, L. C. (2007). Personality Assessment Inventory (PAI): Professional manual (2nd ed.). Psychological Assessment Resources.

Rai, J. K., An, K. Y., Charles, J., Ali, S., & Erdodi, L. A. (2019). Introducing a forced choice recognition trial to the Rey Complex Figure Test. Psychology & Neuroscience, 12(4), 451–472. https://doi.org/10.1037/pne0000175

Resnick, P. J. (1984). The detection of malingered mental illness. Behavioral Sciences & the Law, 2(1), 21–38. https://doi.org/10.1002/bsl.2370020104

Rogers, R., Bagby, R. M., & Dickens, S. E. (1992). Structured Interview of Reported Symptoms (SIRS) and professional manual. Odessa, FL: Psychological Assessment Resources.

Rogers, R., & Bender, D. (2018). Clinical assessment of malingering and deception (3rd ed.). Guilford Press.

Rogers, R., Kropp, P., Bagby, M., & Dickens, S. (1992). Faking specific disorders: A study of the Structured Interview of Reported Symptoms (SIRS). Journal of Clinical Psychology, 48(5), 643–648. https://doi.org/10.1002/1097-4679(199209)48:5<643::AID-JCLP2270480511>3.0.CO;2-2

Rogers, R., Sewell, K. W., & Gillard, N. D. (2010). Structured Interview of Reported Symptoms, second edition: Professional test manual (2nd ed.). Psychological Assessment Resources.

Rogers, R., Sewell, K. W., Martin, M. A., & Vitacco, M. J. (2003). Detection of feigned mental disorders: A meta-analysis of the MMPI-2 and malingering. Assessment, 10(2), 160–177. https://doi.org/10.1177/1073191103010002007

Rogers, R., Velsor, S. F., & Williams, M. M. (2020). A brief commentary on SIRS versus SIRS-2 critiques. Psychological Injury and Law. Advance online publication. https://doi.org/10.1007/s12207-020-09379-6

Roma, P., Giromini, L., Burla, F., Ferracuti, S., Viglione, D. J., & Mazza, C. (2019). Ecological validity of the Inventory of Problems-29 (IOP-29): An Italian study of court-ordered, psychological injury evaluations using the Structured Inventory of Malingered Symptomatology (SIMS) as criterion variable. Psychological Injury and Law, 13, 57–65. https://doi.org/10.1007/s12207-019-09368-4

Sherman, E. M. S., Slick, D. J., & Iverson, G. L. (2020). Multidimensional malingering criteria for neuropsychological assessment: A 20-year update of the malingered neuropsychological dysfunction criteria. Archives of Clinical Neuropsychology, 35, 735–764. https://doi.org/10.1093/arclin/acaa019

Slick, D., Hopp, G., Strauss, E., & Thompson, G. B. (2005). VSVT: Victoria Symptom Validity Test. Odessa, FL: Psychological Assessment Resources.

Smith, G. P., & Burger, G. K. (1997). Detection of malingering: Validation of the Structured Inventory of Malingered Symptomatology (SIMS). Journal of the American Academy of Psychiatry and the Law, 25, 180–183.

Tombaugh, T. N. (1996). Test of Memory Malingering (TOMM). New York, NY: Multi-Health Systems.

Van de Vijver, F., & Hambleton, R. K. (1996). Translating tests. European Psychologist, 1(2), 89–99. https://doi.org/10.1027/1016-9040.1.2.89

Viglione, D. J., Giromini, L., & Landis, P. (2017). The development of the Inventory of Problems-29: A brief self-administered measure for discriminating bona fide from feigned psychiatric and cognitive complaints. Journal of Personality Assessment, 99(5), 534–544. https://doi.org/10.1080/00223891.2016.1233882

Viglione, D. J., & Giromini, L. (2020). Inventory of Problems-29: Professional manual. Columbus, OH: IOP-Test, LLC.

Winters, C. L., Giromini, L., Crawford, T. J., Ales, F., Viglione, D. J., & Warmelink, L. (2020). An Inventory of Problems-29 (IOP-29) study investigating feigned schizophrenia and random responding in a British community sample. Psychiatry, Psychology and Law. Advance online publication. https://doi.org/10.1080/13218719.2020.1767720

Young, G. (2015). Malingering in forensic disability-related assessments: Prevalence 15 ± 15%. Psychological Injury and Law, 8(3), 188–199. https://doi.org/10.1007/s12207-015-9232-4

Young, G. (2019). The cry for help in psychological injury and law: Concepts and review. Psychological Injury and Law, 12(3–4), 225–237. https://doi.org/10.1007/s12207-019-09360-y

Young, G., Foote, W. E., Kerig, P. K., Mailis, A., Brovko, J., Kohutis, E. A., McCall, S., Hapidou, E. G., Fokas, K. F., & Goodman-Delahunty, J. (2020). Introducing psychological injury and law. Psychological Injury and Law, 13, 452–463. https://doi.org/10.1007/s12207-020-09396-5
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
... Specifically, symptom validity tests (SVTs) are tests that assess the credibility (or validity) of self-reported symptoms, and performance validity tests (PVTs) are tests that assess the credibility (or validity) of observed performance on cognitive tests. The joint use of both types of instruments is beneficial because they assess semi-independent but not mutually exclusive constructs, and allow a greater number of feigning strategies to be covered Šömen et al., 2021). ...
... Overall, the Spanish IOP-29 findings are similar to both the original version of the instrument ) and other validation studies in different countries (e.g. Carvalho et al., 2021;Šömen et al., 2021). The cutoff FDS ≥ .30 is suitable for use in screening assessments where it is necessary to prioritize sensitivity over specificity, whereas the cutoff FDS ≥ .50 is recommended for the general use of the instrument. ...
... On the other hand, cutoffs of ≤ 30 and ≤ 31 decreased specificity to 87.8% and 78.0%, respectively, in exchange for increasing sensitivity in the uncoached group to 89.0% and 90.9%, respectively, and in the coached group to 96.7% and 98.3%, respectively. These findings are in line with Bosi et al. (2022), Giromini et al. (2020), andŠömen et al. (2021), where the cutoff of ≤ 29 generated similar scores and was recommended for general use. The cutoff of ≤ 27 resulted in a very large specificity in exchange for a slight decrease in sensitivity, making it appropriate for evaluations in high-stake contexts where the minimization of false positives should be prioritized. ...
Article
Objective: The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) with a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology, and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with an excellent classification accuracy for the recommended cutoff score (FDS ≥ .50; sensitivity = 87.10% for coached group and 89.09% for uncoached; specificity = 95.12%). The IOP-M also showed an excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for coached group and 93.55% for uncoached; specificity = 97.56%). Both instruments proved to be resistant to symptom information coaching and performance warnings. Conclusions: The results confirm that both of the IOP measures offer a similarly valid but different perspective compared to SIMS when assessing the credibility of symptoms of mTBI. The encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.
... The Inventory of Problems-29 (IOP-29; Viglione et al., 2017) and its subsequent memory module, the IOP-M , offer a rare symptom validity test and PVT combination that is gathering empirical support across both English-speaking (L. Erdodi et al., 2023;Gegner et al., 2022;Holcomb et al., 2022;Winters et al., 2021) and culturally diverse populations (Banovic et al., 2022;De Francisco Carvalho et al., 2021;Giromini, Barbosa, et al., 2020;Giromini et al., 2018;Grønnerød et al., 2023;Ilgunaite et al., 2021;Šömen et al., 2021). The IOP-29 False Disorder Probability Scale (FDS) was proven to accurately detect a wide range of simulated disorders in experimental malingering (expMAL) studies (e.g., depression- Grønnerød et al., 2023;Ilgunaite et al., 2021;posttraumatic stress disorder-Blavier et al., 2023;de Francisco Carvalho et al., 2021;traumatic brain injury-Gegner et al., 2022;Giromini, Barbosa, et al., 2020;schizophrenia-Banovic et al., 2022;Winters et al., 2021). ...
... The addition of the IOP-M increased the classification accuracy of the IOP-29 in expMAL studies across culturally diverse populations (De Francisco Carvalho et al., 2021;Gegner et al., 2022;Giromini, Barbosa, et al., 2020;Šömen et al., 2021). This recognition-based PVT also accurately classified invalid performance against free-standing PVTs in English-speaking realworld clinical participants, yielding optimal combinations of classification parameters at the liberal cutoff of ≤30 (L. ...
... A cutoff of ≤30 correctly identified targets was previously reported to provide the best combination of specificity and sensitivity (L. Erdodi et al., 2023;Gegner et al., 2022;Holcomb et al., 2022;Šömen et al., 2021). ...
Article
Full-text available
Objective: The present study investigated classification accuracies of the IOP-29-M as a validity measure in a Romanian sample and examined differences between language versions. Method: Ninety-five undergraduates (65 controls and 30 experimental malingerers were administered the Inventory of Problems–29 (IOP-29) and the IOP-M, in English and Romanian, as part of a neurocognitive test battery. IOP-29 False Disorder Score and IOP-M accuracy scores in English and Romanian were compared, and classification accuracies against the experimental malingering criterion were computed. Results: Both indicators differentiated between groups and produced excellent area under the curves (.86–.96) in classifying experimental malingerers. At previously published cutoffs, the Romanian version of the IOP-29 proved more accurate than the English version. The IOP-M yielded virtually identical accuracies for both versions at a standard cutoff of ≤30. Conclusions: The IOP-29-M accurately discriminates valid from invalid protocols in Romanian bilingual undergraduates. The administration language could influence cutoff accuracies of the IOP-29. The IOP-M appears robust to language effects.
... Similarly, Shura et al. (2021) administered an SVT and a PVT to 417 veterans assessed for possible mild traumatic brain injury (mTBI) or PTSD and found that although 20.4% produced invalid results on the administered PVT (independent of SVT results) and 13.8% produced invalid results on the administered SVT (independent of PVT results), only 4.6% produced invalid results on both tests. The results of several other studies also lead to similar conclusions (e.g., Banovic et al., 2022;Carvalho et al., 2021;Gegner et al., 2021;Sabelli et al., 2021;Šömen et al., 2021). Thus, converging research evidence currently suggests that assessees who fail the administered SVT(s) do not often fail the administered PVT(s) and, vice versa, those who fail the administered PVT(s) do not often fail the administered SVT(s). ...
... Unlike the IOP-29, the IOP-M has been little researched so far. In fact, apart from the introductory article by Giromini et al. (2020), there are only a few published studies in the literature that have used the IOP-M (i.e., Banovic et al., 2022; Bosi et al., 2022; Carvalho et al., 2021; Erdodi et al., 2023; Gegner et al., 2021; Holcomb et al., 2022; Šömen et al., 2021). Notably, with the sole exception of Erdodi et al. (2023), which is a follow-up of Holcomb et al. (2022), all of these studies used nonclinical controls, so additional IOP-M research, especially with clinical or subclinical samples, would be beneficial. ...
... In contrast, individuals who feign neuropsychological problems (e.g., mild traumatic brain injury) often fail a substantial number of IOP-M items. Thus, according to Giromini et al. (2020) and in agreement with subsequent studies (Banovic et al., 2022; Bosi et al., 2022; Carvalho et al., 2021; Erdodi et al., 2023; Gegner et al., 2021; Šömen et al., 2021), it is unlikely that an individual without moderate or severe cognitive problems will correctly answer fewer than 30 of the 34 items of the IOP-M, making the standard cutoff score for the IOP-M "≤ 29." However, because the IOP-M is a new test, additional research on its optimal cutoff would be beneficial, so in this study we examined the effectiveness of five different cutoff scores, from IOP-M ≤ 31 to IOP-M ≤ 27. ...
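Because the optimal IOP-M cutoff is still under study, an analysis of the kind described above can be sketched as a simple sweep across the five candidate cutoffs, reporting the sensitivity/specificity trade-off at each. The score arrays below are invented placeholders, not data from the study:

# Sweep candidate IOP-M cutoffs (<= 31 down to <= 27); scores are invented.
import numpy as np

feigners = np.array([20, 24, 27, 29, 30, 31, 33])   # hypothetical IOP-M correct answers
controls = np.array([31, 32, 33, 33, 34, 34, 34])

for cutoff in range(31, 26, -1):
    sensitivity = np.mean(feigners <= cutoff)
    specificity = np.mean(controls > cutoff)
    print(f"IOP-M <= {cutoff}: sensitivity = {sensitivity:.2f}, "
          f"specificity = {specificity:.2f}")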
Article
Sometimes forensic psychologists are asked to determine whether the symptoms of PTSD presented by the plaintiff are genuine or feigned. To this end, they may use both symptom validity tests (SVTs) and performance validity tests (PVTs), but SVTs are used far more frequently in these assessments. Thus, we conducted a natural experiment and administered an SVT (i.e., the IOP-29) and a PVT (i.e., the IOP-M) to 76 individuals instructed to feign PTSD and to 34 controls who self-reported exposure to a devastating flood several months earlier. The results confirm the utility of both measures in detecting feigned PTSD.
... Consistent with previous reports (Giromini et al., 2020b; Šömen et al., 2021) and our a priori prediction, the mean FDS was lower for HON than for SIM (g = 2.68, extremely large effect). Compared with Winters et al. (2020), the FDS for the HON condition was higher (.14 versus .25; ...
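For readers less familiar with the effect sizes quoted throughout these excerpts (Cohen's d, Hedges' g), here is a minimal sketch of the standard computations: d with a pooled standard deviation and g as its small-sample correction. The function names and FDS values are ours, invented for illustration:

# Cohen's d with pooled SD, plus Hedges' small-sample correction (g).
import numpy as np

def cohens_d(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

def hedges_g(x, y):
    df = len(x) + len(y) - 2
    return cohens_d(x, y) * (1 - 3 / (4 * df - 1))  # bias-correction factor J

sim_fds = [0.92, 0.85, 0.78, 0.95, 0.88]   # hypothetical FDS, simulators (SIM)
hon_fds = [0.12, 0.25, 0.18, 0.09, 0.22]   # hypothetical FDS, honest (HON)
print(f"d = {cohens_d(sim_fds, hon_fds):.2f}, g = {hedges_g(sim_fds, hon_fds):.2f}")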
Article
Full-text available
Because the actuarial evidence base for symptom validity tests (SVTs) is developed in a specific population, it is unclear whether their clinical utility is transferable to a population with different demographic characteristics. To address this, we report here the validation study of a recently developed free-standing SVT, the Inventory of Problems-29 (IOP-29), in a Turkish community sample. We employed a mixed design with a simulation paradigm: The Turkish IOP-29 was presented to the same participants (N = 125; 53.6% female; age range: 19-53) three times in an online format, with instructions to respond honestly (HON), randomly (RND), and attempt to feign a psychiatric disorder (SIM) based on different vignettes. In the SIM condition, participants were presented with one of three scripts instructing them to feign either schizophrenia (SIM-SCZ), depression (SIM-DEP), or posttraumatic stress disorder (SIM-PTSD). As predicted, the Turkish IOP-29 is effective in discriminating between credible and noncredible presentations and equally sensitive to feigning of different psychiatric disorders: The standard cutoff (FDS ≥ .50) is uniformly sensitive (90.2% to 92.9%) and yields a specificity of 88%. Random responding produces FDS scores more similar to those of noncredible presentations, and the random responding score (RRS) has incremental validity in distinguishing random responding from feigned and honest responding. Our findings reveal that the classification accuracy of the IOP-29 is stable across administration languages, feigned clinical constructs, and geographic regions. Validation of the Turkish IOP-29 will be a valuable addition to the limited availability of SVTs in Turkish. We discuss limitations and future directions.
... The PVT module of the Inventory of Problems-29 (IOP-29; Viglione et al., 2017), the IOP-M (Giromini et al., 2020a, b), is a novel instrument that is quickly gathering empirical support across non-English-speaking populations (Banovic et al., 2022; Carvalho et al., 2021; Giromini et al., 2019; Ilgunaite et al., 2020; Šömen et al., 2021). The IOP-M increased classification accuracy when added to the IOP-29 in experimental malingering (expMAL) studies (Gegner et al., 2021; Giromini et al., 2020a, b; Šömen et al., 2021). However, its classification accuracy has not been tested in examinees who were administered the test in a language in which they were not native speakers. ...
Article
Full-text available
This study was designed to evaluate the susceptibility of various performance validity tests (PVTs) to limited English proficiency (LEP). A battery of free-standing and embedded PVTs was administered to 95 undergraduate students at a Romanian university, randomly assigned to the control (n = 65) or experimental malingering group (n = 30). Overall correct classification (OCC) at the first cutoff to clear .90 specificity (with group membership as criterion) was used as the main metric to compare PVTs. Mean OCC for free-standing PVTs (.784) was comparable to mean OCC for embedded PVTs (.780). Cutoffs on embedded PVTs often had to be adjusted (more conservative) to meet the specificity standard. Contrary to our predictions, embedded PVTs with high verbal mediation outperformed those with low verbal mediation (mean OCC .807 versus .719). Although multivariate models of PVTs performed very well (mean OCC = .892), several individual free-standing and embedded PVTs produced comparable mean OCC (.863–.895). Other embedded PVTs had trivial sensitivity (.03–.13) at ≥ .90 specificity. PVTs administered in both languages (English and Romanian) provided conclusive evidence of both the deleterious effects of LEP and the cross-cultural validity of existing methods of performance validity testing. Results defied most of our a priori predictions: level of verbal mediation was an influential, but not a decisive factor in the classification accuracy of PVTs; free-standing PVTs were not necessarily superior to embedded PVTs; multivariate models of performance validity assessment outperformed most, but not all their individual components. Our findings suggest that some PVTs may be inherently unfit to be used with examinees with LEP. The multiple unexpected findings signal a fundamental uncertainty about the psychometric properties of instruments developed and validated in North America when applied to examinees outside the US or Canada. Although several existing PVTs have the potential to be useful in examinees with LEP, their relevant psychometric properties should be independently verified in new target populations to ensure the validity of their clinical interpretation. The classification accuracy observed in native speakers of English cannot be assumed to transfer to members of linguistically and culturally different communities; doing so risks potentially consequential errors in performance validity assessment. Of course, the abundance of counterintuitive findings also serves as a note of caution: our findings may not generalize to different samples.
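The study's main metric, overall correct classification (OCC) at the first cutoff to clear .90 specificity, can be sketched roughly as follows. The scan direction (from the most liberal cutoff toward more conservative ones), the variable names, and the data are our assumptions for illustration, not the authors' code:

# Scan cutoffs from most liberal to most conservative; return the first one
# whose specificity clears the threshold, together with its OCC.
import numpy as np

def occ_at_first_valid_cutoff(feigners, controls, min_specificity=0.90):
    feigners, controls = np.asarray(feigners), np.asarray(controls)
    candidates = sorted(set(np.concatenate([feigners, controls]).tolist()),
                        reverse=True)
    for cutoff in candidates:
        specificity = np.mean(controls > cutoff)
        if specificity >= min_specificity:
            true_pos = np.sum(feigners <= cutoff)   # malingerers correctly flagged
            true_neg = np.sum(controls > cutoff)    # controls correctly passed
            occ = (true_pos + true_neg) / (len(feigners) + len(controls))
            return cutoff, occ
    return None, None

feigners = [10, 14, 18, 22, 25, 28]            # hypothetical PVT scores
controls = [26, 28, 29, 30, 31, 32, 33, 34]
cutoff, occ = occ_at_first_valid_cutoff(feigners, controls)
print(f"first cutoff clearing .90 specificity: <= {cutoff}, OCC = {occ:.3f}")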
... Turning to the articles in this special issue that deal with SVTs, Grønnerød and colleagues developed a Norwegian version of the IOP-29 (Viglione et al., 2017) and tested its effectiveness in discriminating the responses of 137 nonclinical participants who were asked to feign depression from those of 138 healthy controls instructed to respond honestly. Consistent with results observed with several other non-English versions of the IOP-29 (e.g., Banovic et al., 2022; Blavier et al., 2023; Boskovic et al., 2022; Carvalho et al., 2021; Giromini et al., 2018, 2019; Ilgunaite et al., 2022; Šömen et al., 2021), the standard cutoff score of the IOP-29 (i.e., False Disorder probability Score ≥ .50) achieved a sensitivity of .77 ...
Article
Full-text available
This editorial article introduces a special issue of Psychology & Neuroscience dealing with performance validity testing and symptom validity tests (SVTs). We first discuss the importance of assessing the credibility of observed performance on cognitive tasks and of symptoms reported in questionnaires or clinical interviews, both in research and in clinical and forensic settings. We then briefly summarize the content of each article in this special issue and discuss their contribution to this topic. We conclude that practitioners have an increasing number of embedded performance validity tests (PVTs) at their disposal, so current research trends are focused on finding newer and better algorithms for integrating results from multiple PVTs. In contrast, there are significantly fewer SVTs available to practitioners, so researchers in this area currently seem to be focused on developing and validating both embedded and free-standing SVTs.
Article
Full-text available
Some widely agreed-upon, official recommendations for professionals conducting psychological assessments suggest employing multiple symptom validity tests (SVTs) to screen the validity of symptom reports. Yet, SVTs are rarely validated in languages other than English, and no free-standing SVT exists in Serbia. To address this gap and stimulate further research on symptom validity within populations from the Balkans, we developed and tested a Serbian version of the Inventory of Problems – 29 (IOP-29). Following the same procedures used in prior IOP-29 validation studies (e.g., Akca et al., 2023), we administered the Serbian IOP-29 to 110 adult volunteers from Serbia. Participants completed the IOP-29 three times under different conditions: responding honestly, randomly, or by feigning a mental disorder (schizophrenia, depression, or post-traumatic stress disorder). We examined the utility of both the False Disorder Probability Score (FDS), which is the chief feigning index of the IOP-29, and of a new index embedded in the IOP-29, which is aimed at detecting random or careless responding. Overall, our results demonstrated that the FDS effectively differentiated between feigned and honest presentations, achieving a sensitivity of 0.86 and a specificity of 0.96 when using the standard cutoff (FDS ≥ 0.50). In addition, the random responding index also successfully identified random responding, achieving a sensitivity of 0.64 and a specificity greater than 0.90 when using a midrange cutoff of T ≥ 67. These findings closely align with outcomes of Akca et al. (2023) and support meta-analytic literature reviews on the IOP-29. More broadly, this study advances and encourages further exploration of symptom validity testing in culturally diverse populations.
Article
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models by entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and the combination of MMPI and IOP indexes might be particularly effective in identifying feigned mTBI.
Article
Full-text available
Objective: Since its third edition (American Psychiatric Association, 1980) and in subsequent editions, the Diagnostic and Statistical Manual of Mental Disorders has favored the criminological model in relation to malingering. However, research on the relationship of psychopathy and antisocial personality disorder to the propensity and ability to feign mental illness has yielded mixed results. Importantly, no study has yet examined the relationship between the Dark Tetrad (Machiavellianism, narcissism, psychopathy, and sadism) and the ability to fake mental illness in a credible manner. Our study aimed to fill this research gap by examining whether individuals with higher Dark Tetrad traits report more schizophrenia symptoms and more credibly fake schizophrenia when asked to do so. Method: Eighty-one nonclinical volunteers from Portugal took the Short Dark Tetrad (SD4) and were instructed to respond honestly. They were then instructed to feign schizophrenia on the following tests: the Eppendorf Schizophrenia Inventory (ESI), the Inventory of Problems–29 (IOP-29), and the memory add-on of the IOP-29 (i.e., IOP-M). Results: None of the SD4 scores correlated significantly with the ESI, IOP-29, and IOP-M scores, and all effect sizes were small. Of note, the standard cutoff score of the IOP-29 (i.e., ≥.50) correctly classified 90.1% of participants as fakers (i.e., sensitivity = .90), and half of the very few false-negative classifications of the IOP-29 were correctly classified as noncredible results by the IOP-M. Conclusions: The ability to plausibly fake schizophrenia is comparable in individuals with higher and lower Dark Tetrad traits. Also, the IOP-29 and the IOP-M showed excellent validity, further supporting their effectiveness in assessing symptom and performance validity.
Article
Full-text available
Current guidelines for conducting symptom validity assessments require that professionals administer multiple symptom validity tests (SVTs) and that the SVTs selected for their evaluations provide nonredundant information. However, not many SVTs are currently available, and most of them rely on the same, (in)frequency-based, feigning detection strategy. In this context, the Inventory of Problems (IOP-29) could be a valuable addition to the assessor’s toolbox because of its brevity (29 items) and its different approach to assessing the credibility of presented symptoms. As its ecological validity has been poorly investigated, the present study used a criterion groups design to examine the classification accuracy of the IOP-29 in a data set of 174 court-ordered psychological evaluations focused on psychological injury. The validity scales of the Minnesota Multiphasic Personality Inventory–2 Restructured Form and the total score of the Structured Inventory of Malingered Symptoms were used as criterion variables. Overall, the results of this study confirm that the IOP-29 is an effective measure (1.70 ≤ d ≤ 2.67) that provides valuable information when added to the multimethod assessment of symptom validity in civil forensic contexts.
Article
Full-text available
Objective: To evaluate the convergent validity and diagnostic accuracy of the Miller Forensic Assessment of Symptoms Test (M-FAST) in a Veteran sample. Method: Participants were identified and recruited for a study of neurocognition of traumatic brain injury (TBI) and posttraumatic stress disorder in post-9/11 Veterans. A standardized neuropsychological battery was administered. From the parent study sample, 405 completed both the M-FAST and the Personality Assessment Inventory (PAI). Nonparametric tests were used to compare the M-FAST Total score across diagnostic and disability variable groupings. Correlations were calculated for the M-FAST Total score in comparison to the PAI symptom validity indices and clinical scales. Diagnostic accuracy analyses were employed to assess M-FAST Total score cutoffs to identify a noncredible group per PAI overreporting scales. Results: The M-FAST Total score was not significantly higher for individuals with a TBI history, but was higher in those with major depressive disorder, posttraumatic stress disorder, and receiving Veterans Affairs disability. The M-FAST correlated well with established symptom validity scales in the PAI, with smaller effects seen when correlated to PAI clinical scales. Using a cutoff of ≥5, the M-FAST achieved an area under the curve of .754 but resulted in a very poor sensitivity of .24. Conclusions: This study evaluated the M-FAST as a screening or adjunct measure of symptom validity in postdeployed Veterans. Even after reducing the Total score cutoff from the manual-recommended score, sensitivity remained poor; thus, the M-FAST should not be used as a sole symptom validity test outside of screening contexts.
Article
Full-text available
The Inventory of Problems-29 (IOP-29) is a recently introduced free-standing symptom validity test (SVT) with a rapidly growing evidence base. Its classification accuracy compares favorably with that of the widely utilized Structured Inventory of Malingered Symptomatology (SIMS), and it provides incremental validity when used in combination with other symptom and performance validity tests. This project was designed to cross-validate the IOP-29 in a Brazilian context. Study 1 focused on specificity and administered the IOP-29 and a PTSD screening checklist to 154 Brazilian firefighters who had been exposed to one or more potentially traumatic stressors. Study 2 implemented a simulation/analogue research design and administered the IOP-29, together with a new IOP-29 add-on memory module, to nonclinical volunteers; 101 asked to respond honestly, 100 instructed to feign PTSD. Taken together, the results of both Study 1 (specificity = .96) and Study 2 (Cohen's d = 2.15; AUC = .92) support the validity, effectiveness, and cross-cultural applicability of the IOP-29. Additionally, Study 2 provides preliminary evidence for the incremental utility of the newly introduced IOP-29 add-on memory module. Despite the encouraging findings, we highlight that the determination of feigning or malingering should never be based on a single test alone.
Article
Full-text available
We investigated the classification accuracy of the Inventory of Problems-29 (IOP-29), its newly developed memory module (IOP-M) and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly, two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating, half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (> 90% for the former, > 80% for the latter), in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
Article
Full-text available
Psychological injury and law is a specialized forensic psychology field that concerns reaching legal thresholds for actionable negligent or related injuries having a psychological component, such as posttraumatic stress disorder, chronic pain, and mild traumatic brain injury. The presenting psychological injuries have to be related causally to the event at issue, and if pre-existing injuries, vulnerabilities, or psychopathologies are involved at baseline, they have to be exacerbated by the event at issue, or added to in unique ways, such that the psychological effects of the event at issue go beyond the de minimis range. The articles in this special issue deal with the legal aspects of cases of psychological injury, including the legal steps and procedures to follow and the causal question of whether an index event is responsible for claimed injuries. They deal with the major psychological injuries, and others such as somatic symptom disorder and factitious disorder. They address best practices in assessment such that testimony and reports proffered to court are probative, i.e., helping the trier of fact to arrive at judicious decisions. The articles in the special issue review the reliable and valid tests in the field, including those that examine negative response bias, negative impression management, symptom exaggeration, feigning, and possible malingering. The latter should be ruled in only through the most compelling evidence in the whole file of an examinee, including test results and inconsistencies. The court will engage in admissibility challenges when testimony, reports, opinions, conclusions, and recommendations do not meet the expected standards of being scientific, comprehensive, impartial, and having considered all the reliable data at hand. The critical topics in the field that cut across the articles in the special issue relate to (a) conceptual and definitional issues, (b) confounds and confusions, (c) assessment and testing, (d) feigning/malingering, and (e) medicolegal/legal/court implications. The articles in the special issue are reviewed in terms of these five themes.
Article
Full-text available
A growing literature indicates that to evaluate the credibility of a clinical presentation it would be optimal to rely on multiple sources of information, and use both symptom validity tests (SVTs) and performance validity tests (PVTs) whenever possible. In this paper, we present the development and initial validation of a PVT module designed to be used in combination with a free-standing SVT. Named Inventory of Problems – Memory (IOP-M), this new PVT module is given to the examinee immediately after completing the Inventory of Problems – 29 (IOP-29). It consists of a 34-item, two-alternative, forced-choice, implicit recognition test. Results from 360 nonclinical volunteers – 192 instructed to respond honestly (honest controls) and 168 instructed to feign mental illness (experimental simulators) – suggest that the IOP-M has the potential to yield incremental validity over using the IOP-29 alone. In fact, a series of hierarchical logistic regressions using group as criterion variable (0 = honest control; 1 = experimental simulator) and the IOP-29 and IOP-M as predictors showed that the models including both measures significantly improved classification accuracy over those including the IOP-29 only, Δχ2 ≥ 19.1, p < .01. When considering the optimal cut scores for each measure, only 6 of the 168 simulators (i.e., less than 4%) passed both the IOP-29 and IOP-M, and only 3 of the 192 honest responders (i.e., less than 2%) failed both. A closer examination of false positive classifications, however, revealed that the IOP-M could be prone to false positive errors in examinees with moderate to severe cognitive impairment.
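The incremental-validity logic described in this abstract, comparing a logistic model with the IOP-29 alone against one that adds the IOP-M via a likelihood-ratio (Δχ2) test, can be sketched in Python with statsmodels and scipy. The simulated data, group effect sizes, and variable names below are arbitrary assumptions for illustration, not the authors' data or code:

# Simulate honest (0) vs. simulator (1) groups, then test whether adding the
# IOP-M improves on the IOP-29 alone via a likelihood-ratio (delta chi2) test.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n // 2)                    # 0 = honest, 1 = simulator
iop29_fds = rng.normal(0.2 + 0.3 * group, 0.15, n)   # FDS: higher for simulators
iop_m = rng.normal(32 - 4 * group, 2.5, n)           # IOP-M correct: lower for simulators

base = sm.Logit(group, sm.add_constant(iop29_fds)).fit(disp=0)
both = sm.Logit(group, sm.add_constant(np.column_stack([iop29_fds, iop_m]))).fit(disp=0)

delta_chi2 = 2 * (both.llf - base.llf)               # likelihood-ratio statistic, df = 1
p_value = stats.chi2.sf(delta_chi2, df=1)
print(f"delta chi2 = {delta_chi2:.1f}, p = {p_value:.4f}")

A significant Δχ2 here would indicate that the IOP-M carries information about group membership beyond the IOP-29 alone, which is the sense in which the abstract reports incremental validity.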
Article
Full-text available
This commentary reviews and critiques three recent SIRS/SIRS-2 comparison studies that reported strongly worded criticisms of the SIRS-2 and appeared to conclude that the original SIRS was far more accurate than its revision. Research designs and methodological considerations for replication research are outlined, and these comparison studies are systematically evaluated regarding their strengths and limitations. As a particularly concerning finding, SIRS/SIRS-2 comparison studies have routinely collapsed SIRS-2 classification categories (genuine, indeterminate-general, indeterminate-evaluate, and feigning) rather than following its well-defined decision rules, rendering comparison study results inapplicable to the SIRS-2 Decision Model. Relevant issues are discussed more generally so that scholars and practitioners may draw their own thoughtful conclusions about the psychometric strengths of the SIRS-2 and its utility for clinical and forensic practice.
Article
Full-text available
Objectives: Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion of malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561). However, the MND criteria are long overdue for revision to address advances in the field of malingering research and to address limitations identified by experts in the field. Method: The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation such as exaggeration of self-reported somatic and psychiatric symptoms. Results: The new proposed criteria simplify diagnostic categories, expand and clarify external incentives, more clearly define the role of compelling inconsistencies, address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy), better define the role of SVTs and of marked discrepancies indicative of malingering, and most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment. Conclusions: The proposed multidimensional malingering criteria that define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment are presented.
Article
Forensic assessments must be scientifically founded, because courts should obtain expert evidence with acceptable evidential value. In Slovenia, professional guidelines of forensic personality assessment are too general and not always in line with international professional recommendations. Thus, experts have no strict guidelines which would lead them to scientifically grounded expert opinions. The aim of the research was to establish which tests are employed in forensic assessment in Slovenia and to what extent the professional guidelines for expert opinions are followed. A total of 166 forensic personality assessments were reviewed, representing the majority of expert opinions issued in the period 2003–2018. The results of the analysis revealed that questionable projective tests are most commonly used. Typically, an expert opinion was rendered based on two tests, at least one of which was projective. What is more, expert opinions did not include hypotheses, in-text citations, reference lists, or proof of the expert witness's competence. The tests and their results were mentioned briefly and inadequately, without mention of their reliability and validity. Possible malingering of the person being evaluated was not assessed. Professional guidelines were not followed and non-standardized tests without normative values and of questionable scientific merit were predominantly used, despite lack of proof that they truly measure what they claim to be measuring. These findings significantly differ from the results of similar research, raising serious concerns over the credibility of expert opinions in Slovenia.
Article
Compared to other Western countries, malingering research is still relatively scarce in the United Kingdom, partly because only a few brief and easy-to-use symptom validity tests (SVTs) have been validated for use with British test-takers. This online study examined the validity of the Inventory of Problems-29 (IOP-29) in detecting feigned schizophrenia and random responding in 151 British volunteers. Each participant completed three IOP-29 administrations: (a) responding honestly; (b) pretending to suffer from schizophrenia; and (c) responding at random. In addition, they responded to a schizotypy measure (O-LIFE) under standard instructions. The IOP-29's feigning scale (FDS) showed excellent validity in discriminating honest responding from feigned schizophrenia (AUC = .99), and its classification accuracy was not significantly affected by the presence of schizotypal traits. Additionally, a recently introduced IOP-29 scale aimed at detecting random responding (RRS) demonstrated very promising results.
Article
This article contributes to the growing research on the validity of the recently developed, Inventory of Problems – 29 (IOP-29) in the discrimination of feigned from bona fide mental or cognitive disorders. Specifically, we first developed a Lithuanian version of the IOP-29 and tested its validity on a sample of 50 depressed patients and 50 healthy volunteers instructed to feign depression. Next, we reviewed all previously published IOP-29 studies reporting on depression-related presentations (k = 5), and compared our results against previously reported findings. Statistical analyses showed that the Lithuanian IOP-29 discriminated almost perfectly between genuine and experimentally feigned major depression, with Area Under the Curve (AUC) = .98 (SE = .01) and Cohen’s d = 3.31. When compared to previously published IOP-29 literature on this same topic, these findings may be characterized as similar or perhaps slightly more encouraging. Indeed, across all international, empirical studies considered in this article, Cohen’s d ranged from 1.80 to 4.30, and AUC ranged from .89 to .99. Taken together, these findings contribute to supporting the strong validity and cross-cultural applicability of the IOP-29. They also provide additional support for its use in forensic evaluations.