Running head: NO EVIDENCE FOR THE ‘TYPE’ IN TYPE A BEHAVIOR
Direct and Conceptual Replications of the Taxometric Analysis of Type A Behavior
Michael P. Wilmot
University of Toronto
Nick Haslam
University of Melbourne
Jingyuan Tian and Deniz S. Ones
University of Minnesota
Accepted version (2/27/2018) before copyediting at
Journal of Personality and Social Psychology
Author Note
Michael P. Wilmot, Department of Management, University of Toronto; Nick Haslam,
School of Psychological Sciences, University of Melbourne; Jingyuan Tian and Deniz Ones,
Department of Psychology, University of Minnesota.
We are grateful to Fred Bryant for sharing data with us. We also thank Yoav Ben-Shlomo
and the School of Social and Community Medicine at the University of Bristol for sharing data
from the Caerphilly Prospective Study. The Caerphilly Prospective Study was undertaken by the
former MRC Epidemiology Unit of South Wales and funded by the Medical Research Council of
the United Kingdom.
Correspondence should be addressed to Michael P. Wilmot, Department of Management,
University of Toronto, 1265 Military Trail, Toronto, ON M1C 1A4, Canada. E-mail:
mp.wilmot@utoronto.ca.
Abstract
We present direct and conceptual replications of the influential taxometric analysis of Type A
Behavior (TAB; Strube, 1989), which reported evidence for the latent typology of the construct.
Study 1, the direct replication (N = 2,373), duplicated sampling and methodological procedures
of the original study, but results showed that the item indicators used in the original study lacked
sufficient validity to unambiguously determine latent structure. Using improved factorial
subscale indicators to further test the question, multiple taxometric procedures, in combination
with parallel analyses of simulated data, failed to replicate the original typological finding. Study
2, the conceptual replication, tested the latent structure of the wider construct of TAB using the
sample from the Caerphilly Prospective Study (N = 2,254), which contains responses to the three
most widely-used self-report measures of TAB: the Jenkins Activity Survey, Bortner scale, and
Framingham scale. Factorial subscale indicators were derived from the measures and submitted
to multiple taxometric procedures. Results of Study 2 converged with those of Study 1, providing
clear evidence of latent dimensional structure. Overall, results suggest there is no evidence for
the type in TAB. Findings imply that theoretical models of TAB, assessment practices, and data
analytic procedures that assume a typology should be replaced by dimensional models, factorial
subscale measures, and corresponding statistical approaches. Specific subscale measures that tap
multiple Big Five trait domains, and show evidence of predictive utility, are also recommended.
Keywords: Type A Behavior; taxometrics; replication; personality structure; categories and
dimensions.
Direct and Conceptual Replications of the Taxometric Analysis of Type A Behavior
A fundamental distinction can be made between two kinds of personality variables
(Meehl, 1992). Dimensions refer to attributes that all people possess in some quantitative degree
(e.g., Extraversion). By contrast, types (or, classes) refer to variables by which individuals differ
qualitatively (e.g., the presence versus absence of Down syndrome or Huntington’s disease). One
variable that has been claimed as a type is the construct of Type A Behavior (TAB; also called,
Type A personality). TAB refers to a pattern of behavior characterized by competitive drive,
speed, impatience, irritability, and time pressure (Bortner, 1969; Friedman & Rosenman, 1959;
Haynes, Levine, Scotch, Feinleib, & Kannel, 1978; Jenkins, Zyzanski, & Rosenman, 1979;
Yarnold, Bryant, & Grimm, 1987). Traditionally, TAB has been conceptualized as typological
(i.e., Types A and B) and an early meta-analysis seemed to support the interpretation (Matthews,
1982, p. 317). However, it was not until Strube (1989) that the conjectured typological status of
TAB received empirical support. In a large undergraduate sample, taxometric analyses appeared
to provide evidence for the type in TAB.
Although the typological conceptualization of TAB remains dominant (e.g., Alarcon,
Eschleman, & Bowling, 2009; Allen, Johnson, Saboe, Cho, Dumani, & Evans, 2012; Chida &
Hamer, 2008), competing evidence shows the construct is better modeled as a multidimensional
syndrome (Bryant & Yarnold, 1995; Edwards, Baglioni, & Cooper, 1990). Meta-analyses also
support this interpretation, indicating that both normal and abnormal personality constructs are
better modeled and assessed as dimensional (Markon, Chmielewski, & Miller, 2011). Further, in
a comprehensive review of taxometric research, Haslam, Holland, and Kuppens (2012) reported
that, of the 41 taxometric investigations of normal personality variables, only 8 (19.5%) yielded
typological results, most of which have been challenged in more recent studies. As a result, the
authors concluded, “most latent variables of interest [to psychologists] are dimensional, and . . .
many influential typological findings of early taxometric research are likely spurious” (p. 903).
At present, TAB is the only remaining normal personality construct in the published literature
with unchallenged evidence of typological structure (Wilmot, 2015, p. 362). It is surprising then
that in nearly 30 years since its publication, Strube (1989) has not yet been replicated. In view of
the preceding work, and in response to calls for a more replicable psychological science (Cooper,
2016; Open Science Collaboration, 2015), we believe that now is the time for such a replication.
The purpose of the present work is to reexamine the claim that TAB is typological. To do
so, we use contemporary taxometric procedures to conduct direct and conceptual replications of
Strube (1989). In a direct replication, a different set of researchers duplicates the sampling and
methodology of the original research. In a conceptual replication, these original procedures are
purposefully altered to test the rigor of the underlying hypothesis. Whereas a direct replication
tests the dependability of the original data and the findings drawn from them to verify facts, a
conceptual replication tests the validity of the construct and associated theory with the goal of
producing knowledge (Makel, Plucker, & Hegarty, 2012; Schmidt, 2009). Seeing as we were
“interested in the construct . . . not in the datum” (Lykken, 1968, p. 156), the combination of
replications provides the most rigorous investigation of the central research question.
The Meaning and Importance of Personality “Types”
The history of modern psychological research on personality types extends back to Carl
Jung and his typology of introverts and extraverts (Meehl, 1992). To propose that a personality
attribute is typological is to make the claim that individual differences on that characteristic are
distributed in a manner that is discontinuous, rather than continuous. Put differently, it is a claim
that the differences are in some sense qualitative matters of kind, rather than quantitative matters
of degree. In the taxometric literature, a typological variable is referred to as “taxonic,” where a
taxon is defined as a non-arbitrary latent class. On this definition, taxon membership is discrete
and either/or, with a metaphorical category boundary separating members from non-members.
These categories are an empirical fact of nature, not merely a social or linguistic convention.
However, three clarifications must be made on this point. First, claims about a personality
attribute being typological refer to latent structure, not to observed variation. In theory, a taxon is
a binary latent entity that cannot be directly inferred from the phenotypic distribution of scores
from individuals. Contrary to widespread intuition, a bimodal score distribution does not provide
strong evidence that a latent category exists. In fact, there are a number of ways one might obtain
a bimodal distribution when the variable in question is not typological, including sampling error
in small samples, selective sampling of individuals or items at low and high extremes of a latent
dimension, and observer biases (see Haslam, 1999). Second, and relatedly, to say an attribute is
typological does not mean that it must be measured in a dichotomous rather than a continuous
way, or that no meaningful quantitative variations exist across group members. Members may
differ systematically in their levels of characteristics that are distinctive to that taxon, and the
phenotypic variations among members and non-members will likely be continuously distributed.
For example, the taxonic nature of Down syndrome at the latent level is based on the presence
rather than absence of a third copy of chromosome 21. Nevertheless, the syndrome is compatible
with variability in height and cognitive ability among class members, both variables of which are
correlates of the syndrome, as well as variance in height and ability among non-members. Thus,
a taxon and its complement can be thought of as two latent distributions overlapping on observed
indicators. Consequently, it is more accurate to describe the typological/dimensional distinction
as “taxonic and dimensional” versus “dimensional only.” Third, although people frequently refer
to groups in a categorical fashion, using nouns like extraverts and introverts, this does not mean
that the variable is a real latent type. Noun classes are often used to refer to arbitrary distinctions
on a continuum (e.g., “tall people” being defined as >6 feet tall or “obese people” as those with a
BMI >30), or to define groups based on social prototypes or a configuration of relevant attributes
(e.g., “working class” vs. “middle class”). These usages involve arbitrary classifications and do not
imply that the latent structure of height, body mass, or social class is truly typological. Much to
the contrary, determining whether or not a variable is taxonic is an empirical question; it is not a
matter of social convention, or either theoretical or political preference.
Determination of taxonicity has important implications for personality psychology. First, for
descriptions of, and broad theories about, personality structure, finding evidence of a personality
type would challenge the sufficiency of the dominant dimensional framework embodied in the
Big Five trait taxonomy (John, Naumann, & Soto, 2008). Second, categories and dimensions have
different implications for assessment. For dimensional variables, the goal of measurement is to
capture individual differences along the full extent of the construct. For typological variables, the
goal is to differentiate taxon members from non-members at the category boundary, in a manner
similar to a medical diagnosis. Third, typological and dimensional structures implicate different
accounts of causation. Dimensional variables typically reflect combined effects of many small
influences, which may be complex, interactive, and configural, or simply additive. By contrast,
typological variables may reflect the influence of a single dichotomous cause (e.g., the effect of
binary polymorphism of a particular gene or a specific environmental influence), an epigenetic or
nonlinear interaction between major influences, or a threshold effect, in which a categorically
distinct outcome occurs when a latent dimensional influence exceeds a critical value. This latter
possibility raises an important point: Evidence that an attribute is typological does not mean its
underpinnings are categorical at the deepest level (i.e., genotypic)—only that it is categorical
below the level of observed variation (i.e., endophenotypic). In summary, empirical findings
about the latent structure of personality have major implications for description, assessment, and
interpretation. Hence, the importance of retesting the latent structure of TAB, which is the final
remaining normal personality construct with unchallenged evidence of typological structure.
Method
Taxometric analysis is used to test whether a latent variable is typological or dimensional,
in the sense described above. Because the latent variable cannot be observed directly, indirect
indicators must be used to assess it. If the variable is indeed typological, and if indicators can
validly distinguish between its latent classes (i.e., the taxon, Type A, and its complement, Type
B), then the observed covariance among indicators should be a function of class membership, not
a result of dimensional variation. Taxometric scholars have developed a suite of independent
procedures for examining covariation patterns of latent structure (Ruscio et al., 2006; Waller &
Meehl, 1998), which have been used in more than 200 published studies.
Summary and Limitations of Strube (1989)
To test the latent structure of TAB, Strube (1989) used a convenience sample of 1,239
undergraduates (62% male) who studied at Washington University between 1982 and 1986, and
completed the full 21-item Type A-B scale of the Student Jenkins Activity Survey (SJAS; Glass,
1977).¹
Items were administered using a variable response format with anchors ranging from two
to four options. Item responses were subsequently recoded (1 = Type A, 0 = Type B) per Glass’
¹ Reported under its original name in Strube (1989), the Jenkins Activity Survey-Form T (Krantz, Glass, & Snyder, 1974) was later republished under the name, Student Jenkins Activity Survey (SJAS; Glass, 1977).
(1977) dichotomous scoring system. Four independent taxometric methods were used,² which
appeared to show convergent evidence of latent typological structure (Strube, 1989, pp. 975-976).
Strube (1989) employed several best practices in taxometrics that remain until this day,
including using large samples, reporting standardized mean difference scores between latent
classes (i.e., indicator validities), computing multiple base rate estimates of class membership,
and utilizing simulated comparison data. However, other practices used have been subsequently
discontinued due to evidence that they may produce false-positives. Haslam et al. (2012) showed
that certain methodological features and data analytic procedures are biased towards typological
results. These methodological features include using dichotomous response formats and single
items (vs summed-item scales) as indicators. Problematic data analytic procedures include the
use of several techniques that have since been abandoned,³ and the failure to report numerical
indices that compare the fit of the data to a simulated typological structure (p. 910). All of these
factors apply to Strube (1989) and may have biased study conclusions in a typological direction.
Fortunately, better-validated techniques have been developed (Ruscio, Ruscio, & Carney, 2011).
Therefore, we used these contemporary procedures in our replications to remedy limitations of
the original study. Across both replication studies, we tested the following research question:
Research question: Are patterns of covariation among TAB indicators better explained
by two latent classes (i.e., Types A and B) or by one (or more) latent dimensions?
Patterns of indicator covariation take different forms when the latent variable of interest
is typological versus dimensional. A conceptual illustration of differences between typological
² The four methods were (a) principal components analysis, (b) revised bootstraps (Golden, 1982), (c) consistency hurdles (Golden & Meehl, 1979), and (d) maximum covariance (MAXCOV; Meehl & Yonce, 1996).
³ The two methods developed by Golden (see Note 2) have been abandoned in modern taxometric research (Ruscio et al., 2006) and are among procedures associated with increased false-positive rates (Haslam et al., 2012). Thus, we did not use them in our direct replication. However, we used Method 4 (i.e., MAXCOV) and a modern advance on Method 1 (i.e., latent mode factor analysis; Waller & Meehl, 1998) in our taxometric analyses.
and dimensional patterns is as follows. For typological variables, indicator covariation patterns
primarily reflect differences between taxon and complement classes (i.e., Types A and B), rather
than covariation within each class. Thus, indicators covary to the extent that classes overlap (i.e.,
where there is a maximum mixture of Types A and B), and tend to do so most at the middle of
the distribution, unless the base rate of the taxon is very small. By contrast, at the extremes of the
distribution, where cases are virtually all Type A or all Type B members, there would be low
covariation, because the main source of variability (i.e., class membership) would be accounted
for. For dimensional variables, indicator covariation tends to be relatively constant along the
entire range of the distribution, because indicators covary throughout a single,
homogeneous population. Alternative taxometric procedures represent different, mathematically
independent, ways of testing latent structure-dependent patterns of indicator covariation.
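To make this contrast concrete, the following R sketch uses simulated data only (the sample size, base rate, and indicator validities are arbitrary values we chose for illustration) to show that, under a taxonic model, indicator covariation arises from class mixture and largely vanishes within each class, whereas under a dimensional model it reflects a shared continuous trait throughout a single population.

# Illustrative simulation (not study data): indicator covariation under taxonic
# vs. dimensional latent structure. All parameter values are arbitrary assumptions.
set.seed(1)
n <- 2000

# Taxonic case: two latent classes (taxon base rate .40), indicators separated by
# d = 2 and uncorrelated within each class.
taxon <- rbinom(n, 1, 0.4)
x_tax <- rnorm(n, mean = 2 * taxon)
y_tax <- rnorm(n, mean = 2 * taxon)

cor(x_tax, y_tax)                          # sizable: driven entirely by class mixture
cor(x_tax[taxon == 1], y_tax[taxon == 1])  # near zero within the taxon
cor(x_tax[taxon == 0], y_tax[taxon == 0])  # near zero within the complement

# Dimensional case: both indicators load .70 on one continuous latent trait, so
# they covary throughout a single, homogeneous population.
trait <- rnorm(n)
x_dim <- 0.7 * trait + rnorm(n, sd = sqrt(1 - 0.49))
y_dim <- 0.7 * trait + rnorm(n, sd = sqrt(1 - 0.49))

cor(x_dim, y_dim)                          # about .49, with no latent classes to separate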
Participants
Study 1: Direct replication. Study 1 used two large convenience samples. Sample 1a
was collected by Bryant and Yarnold (1995) from 1,609 undergraduates (36% male) who studied
at the University of Illinois, Chicago between 1980 and 1989. Smith and Bryant (2012) collected
Sample 1b from 764 undergraduates (20% male) who studied at Loyola University between 2007
and 2010. Participants provided complete responses to the 21-item Type A-B scale of the SJAS,
which was administered using the same response format and recoding scheme described above.
All data were collected in a manner consistent with ethical standards for the treatment of human
subjects as determined by their respective institutions. Portions of the data have been analyzed in
published work using confirmatory factor analysis to test alternative measurement models of the
SJAS (i.e., Bryant & Yarnold, 1995; Smith & Bryant, 2012), but the data have not been analyzed
using taxometric methods.
Study 2: Conceptual replication. Study 2 used a representative sample of the Caerphilly
Prospective Study, which was designed by the former Medical Research Council Epidemiology
Unit of South Wales to study cardiovascular disease in the United Kingdom. The study contacted
all men aged 45 to 59 years from the town of Caerphilly and its adjoining villages; 2,512
participants (response rate of 89%) identified from the electoral register and general practice lists
were examined between July 1979 and September 1983. As a part of the initial intake form of the
study, participants completed the three most widely-used self-report measures of TAB: (a) the
21-item Type A-B scale from the Jenkins Activity Survey (JAS; Jenkins et al., 1979),⁴ (b) the
10-item Bortner scale (Bortner, 1969), and (c) the 14-item Framingham scale (Haynes et al.,
1978). A total of 2,254 participants (90%) provided complete responses to all three scales.
Response formats differed across the TAB scales. Like the SJAS, the JAS used a variable
response format with anchors ranging from two to four response options; items covering the first
half of the Framingham scale used four options, but the latter half used two; and the Bortner used
a 25-point Likert-type format, which approximated the scale’s original graphic response format.
To combine items with heterogeneous scores into common subscales, a common scaling metric
was required. To protect against possible methodological biases, we did not use a dichotomous
recoding scheme (cf. Glass, 1977) or differential scoring weights (cf. Jenkins et al., 1979), but
instead rekeyed all items in the direction of TAB and then standardized them prior to analyses.
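A minimal R sketch of this harmonization step appears below; the column names and the reverse-key list are hypothetical placeholders, not the actual Caerphilly item codes.

# Sketch of the item-harmonization step; variable names are hypothetical.
rekey_and_standardize <- function(items, reverse_keyed) {
  # Reflect reverse-keyed items so that higher scores indicate more Type A behavior.
  for (v in reverse_keyed) {
    rng <- range(items[[v]], na.rm = TRUE)
    items[[v]] <- rng[1] + rng[2] - items[[v]]
  }
  # Standardize every item (M = 0, SD = 1) to place the 2-, 4-, and 25-point
  # response formats on a common metric before forming subscales.
  as.data.frame(scale(items))
}

# Toy example: a 4-point JAS-style item and a 25-point Bortner-style item.
toy <- data.frame(jas_item  = sample(1:4,  100, replace = TRUE),
                  bort_item = sample(1:25, 100, replace = TRUE))
std <- rekey_and_standardize(toy, reverse_keyed = "bort_item")
round(colMeans(std), 2); round(apply(std, 2, sd), 2)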
All data were collected in a manner consistent with ethical standards for the treatment of
human subjects as determined by the Medical Research Council of the United Kingdom. Portions
of the data have been analyzed and published in more than 400 medical publications (for further
⁴ Due to administrative constraints, the study only included items assessing the Type A-B scale. Items used in other JAS-based subscales (e.g., the H-scale [Hard-Driving and Competitiveness], the S-scale [Speed and Impatience], and the J-scale [Job Involvement]; Zyzanski & Jenkins, 1970) were omitted.
information, see https://www.bris.ac.uk/social-community-medicine/projects/caerphilly/).
However, the data have not been analyzed using taxometric methods.
Because the present study involved secondary analyses of previously collected datasets,
which received ethics committee approval for their original collection, are publicly available,
and contain no identifying information, Institutional Review Board approval was not required.
Indicator Construction
Our replication studies used both item and subscale indicators, which are detailed below.
Study 1: Direct replication. Single-item indicators were often used in early taxometric
research, but this practice is now discouraged because single items tend to be less reliable, less
discriminating between latent classes, more susceptible to skewness, and more biased toward
taxonic findings than summed-item indicators (Haslam et al., 2012). Nevertheless, for purposes
of direct replication, Study 1 used the identical 12-item indicator set as reported in the Methods
section of Strube (1989). Items are presented in Table 1.
Because Strube (1989) used single-item indicators only, we also constructed multiple-
item indicators as a supplement. Indicators used in taxometric analysis should be empirically
non-redundant. If redundancy exists, their covariance may reflect a common factor rather than
a latent typological variable (Meehl, 1995). Similarly, to adequately model the structure of a
multifactorial construct like TAB, each meaningfully distinct and independent factor should be
represented by its own indicator. Therefore, following recommendations of Ruscio et al. (2011),
in both replication studies, we developed sets of summed-item subscale indicators that were
empirically derived using factor analysis.
To form subscale indicators for Study 1, we built on findings that a correlated four-factor
model provides the best-fitting and most interpretable solution for the SJAS (Bryant & Yarnold,
1995; Edwards et al., 1990). These factors are Hard-Driving/Competitiveness (e.g., “Do people
consider you hard-driving and competitive?”), Achievement Striving (e.g., “Do you set deadlines
for yourself?”), Speed/Impatience (e.g., “When you listen to someone talking, do you hurry them
along?”), and Rapid Eating (e.g., “How rapidly do you eat?”).⁵
We used items in Table 1 and confirmatory factor analysis to fit a four-factor model to
Sample 1a. We used the root mean square error of approximation (RMSEA) and the standardized
root mean square residual (SRMR) to evaluate fit.⁶ Results indicated excellent fit (RMSEA =
.048; SRMR = .053) and factors’ constituent items met the conventional factor loading criterion
of ≥ |.30|. Hard-Driving/Competitiveness and Achievement Striving correlated strongly (r = .59),
but the remaining factors related weakly by comparison (Mean = .17; Range = .07 to .26).
We fit the same model to Sample 1b. Results again evidenced good fit (RMSEA = .049;
SRMR = .064) and similar relations. Hard-Driving/Competitiveness and Achievement Striving
correlated similarly (r = .50), as did the other factors (Mean = .19; Range = .07 to .33). Satisfied
with model fit and interpretability, we computed subscale indicators of each factor. As Table 1
shows, the Hard-Driving/Competitiveness, Achievement Striving, Speed/Impatience, and Rapid
Eating subscales contained three, seven, two, and two items, respectively.
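A sketch of how such a four-factor measurement model and the summed-item subscales might be specified in R using the lavaan package is given below; the reported analyses were run in AMOS (see Footnote 6), and the item labels and the data object sjas_data are placeholders for the actual assignments listed in Table 1.

# Sketch only: lavaan shown as an assumed R-based alternative to the AMOS analyses.
# Item labels (hd1, as1, ...) and sjas_data are hypothetical placeholders for Table 1.
library(lavaan)

sjas_model <- '
  HardDriving =~ hd1 + hd2 + hd3                              # 3 items
  AchStriving =~ as1 + as2 + as3 + as4 + as5 + as6 + as7      # 7 items
  SpeedImpat  =~ si1 + si2                                    # 2 items
  RapidEating =~ re1 + re2                                    # 2 items
'

fit <- cfa(sjas_model, data = sjas_data, estimator = "ML")
fitMeasures(fit, c("rmsea", "srmr"))   # compare against the < .08 guideline
standardizedSolution(fit)              # check loadings (>= |.30|) and factor correlations

# Summed-item subscale indicators for the taxometric analyses.
subscales <- data.frame(
  HardDriving = rowSums(sjas_data[, c("hd1", "hd2", "hd3")]),
  AchStriving = rowSums(sjas_data[, paste0("as", 1:7)]),
  SpeedImpat  = rowSums(sjas_data[, c("si1", "si2")]),
  RapidEating = rowSums(sjas_data[, c("re1", "re2")])
)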
⁵ Bryant and Yarnold (1995) used Sample 1a to compare various measurement models of the SJAS. Results showed the best-fitting model was a three-factor model comprised of the above factors, but which combined Hard-Driving/Competitiveness and Achievement Striving into one overall factor, General Hard-Driving/Competitiveness. However, based on Edwards et al. (1990) and our own exploratory factor analysis, fit and interpretability improved appreciably by allowing the items tapping Hard-Driving/Competitiveness (i.e., SJAS 8 to 10) to form their own separate factor. Items ask respondents to answer the same question, “Are you hard-driving and competitive?”, three times, based on their own self-rating and perceived ratings of an acquaintance and an intimate. Factor analyses of such items can result in a methodological artifact that has been termed a ‘bloated specific factor’ (Cattell, 1973), which refers to a spurious dimension formed by having subjects respond to identical items. Seeing as this artifact had the potential to distort our taxometric analyses, we decided on a four-factor model. As for the label used for the Achievement Striving factor, it was chosen based on item content and literature precedent (cf. Spence et al., 1987).
⁶ For both indexes, values < .08 represent acceptable fit and those with 95% confidence intervals overlapping .05 represent excellent fit (Kline, 2015). All analyses were performed using AMOS version 23.0.0 (Arbuckle, 2014) with maximum likelihood estimation of full covariance matrices.
Data collection for Samples 1a and 1b was separated by two decades. Nevertheless, to see
whether responses differed substantively across time, we computed between-sample differences
in their respective subscale scores. Results indicated that differences were negligible (Mean d = -
.02; Range = -.11 to .10). In view of excellent model fit across samples and minor differences in
subscale scores, and due to the desirability of larger samples and more streamlined reporting, we
combined samples into one larger sample (N = 2,373), which was used in all Study 1 analyses.⁷
Study 2: Conceptual replication. To form subscale indicators for Study 2, we consulted
the relevant literature and used both exploratory and confirmatory factor analyses. Regarding the
former, Edwards et al. (1990) used factor analysis to examine factors underlying the same three
self-report TAB measures used in the present study. Results showed three major factors across
scales: (1) General Hard-Driving/Competitiveness, (2) General Speed/Impatience, and (3) Time
Pressure. The first two factors dominated JAS and Bortner scale items, but Framingham items
reflected a combination of factors one and three (p. 448). Successive analyses also showed the
presence of various specific sub-factors for General Hard-Driving/Competitiveness (e.g., Hard-
Driving/Competitiveness-Specific, Ambition), and General Speed/Impatience (e.g., Impatience,
Irritability; p. 451). Based on these findings, we also expected at least three factors to emerge in
our data, including the possibility of additional specific sub-factors.
To determine the number of factors present in the 45 items comprising Sample 2, we used
both Velicer’s (1976) minimum average partial (MAP) test and parallel analysis (Horn, 1965).
We also examined the extracted eigenvalues, as well as RMSEA and SRMR indices.⁸ MAP test
results indicated four factors, and parallel analysis showed five. Extracted eigenvalues exceeded
⁷ Taxometric analyses were also run for Samples 1a and 1b separately, but sample-specific results did not alter our substantive conclusions. For the interested reader, complete output is provided in the supplemental online materials.
⁸ EFA was performed via the open-source statistics software R (R Development Core Team, 2017), using the ‘psych’ package (Revelle, 2017).
one for the first four factors (λs = 6.44, 2.23, 1.44, 1.13, and .77) and fit indices were excellent
for both the four- and five-factor solutions (RMSEAs and SRMRs < .05). Based on this evidence, in
combination with prior findings from Edwards et al. (1990), we selected the four-factor solution.
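A sketch of these factor-retention checks using the ‘psych’ package (the R package the EFA relied on; see Footnote 8) is shown below; tab_items is a placeholder for the data frame of 45 standardized items.

# Sketch of the factor-retention checks; tab_items is a hypothetical placeholder.
library(psych)

fa.parallel(tab_items, fm = "minres", fa = "fa")   # parallel analysis (Horn, 1965)
vss_out <- VSS(tab_items, n = 8, fm = "minres", rotate = "promax", plot = FALSE)
print(vss_out)                                     # reports Velicer's (1976) MAP minimum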
EFA. We conducted exploratory factor analysis (EFA) using minres factoring, extracting
and rotating the four factors toward simple structure using a promax rotation, which allows for
correlated factors. We used the loading criterion of ≥ |.30| to interpret the factor pattern matrix
and to select items for inclusion in confirmatory analysis; any cross-loading items were ceded to
the factor with the strongest loading. Overall, results largely replicated the structure of Edwards
et al. (1990). General Hard-Driving/Competitiveness was clearly factor one; 12 items defined it.
However, of these 12 items, the mean loading of four Hard-Driving/Competitiveness items was
appreciably stronger than the mean loading of the eight items reflecting Achievement Striving
(Mean λ = .61 vs. .41). To guard against methodological artifacts and to maintain consistency
with Study 1, these two factors were also separated in subsequent analyses (see Table 1). Nine
items reflecting speed, impatience, and irritability defined factor two, General Speed/Impatience.
Factor three, Time Pressure, consisted of 10 items about urgency and preoccupation with time.
Factor four had four items describing eating rapidly and it was labeled accordingly. Based on
EFA findings, the preceding 35 items were input into confirmatory factor analysis (CFA) as
observed indicators of their respective latent factors. However, before doing so, three extra
Achievement Striving items were introduced on a post-hoc basis. Specifically, JAS items 20 and
21 were added because they approached the ≥ |.30| loading criterion and were found to be good
indicators of the factor in Study 1. In addition, Bortner item 14 was included because it was part
of a relevant specific factor (i.e., Ambition) reported by Edwards et al. (1990, p. 451).
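The extraction and rotation step can be sketched in R as follows; tab_items is again a placeholder for the item-level data.

# Sketch of the four-factor EFA: minres extraction with an oblique (promax) rotation.
library(psych)

efa4 <- fa(tab_items, nfactors = 4, fm = "minres", rotate = "promax")
print(efa4$loadings, cutoff = 0.30)   # interpret the pattern matrix at >= |.30|
round(efa4$Phi, 2)                    # factor intercorrelations under the oblique rotation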
CFA. Having found that a four-factor model fit the data well, but that a five-factor model
was more interpretable and consistent with Study 1, we tested the validity of our items via CFA.
Table 1 presents the final 37 items.⁹ Four items defined Hard-Driving/Competitiveness and 10
items indicated Achievement Striving. Speed/Impatience, Time Pressure, and Rapid Eating were
marked by nine, 10, and four items, respectively. Correlations among all five factors were freely
estimated. Results of CFA indicated very good fit (RMSEA = .052, SRMR = .056) and factors’
constituent items met loading criteria. Hard-Driving/Competitiveness and Achievement Striving
correlated strongly (r = .81), but the remaining factors correlated more moderately (Mean = .44;
Range = .17 to .73). Overall, factors observed in Study 2 strongly resembled those in Study 1. In
fact, 11 out of 14 items (79%) included in the subscales for Study 1 were included in the same
subscales for Study 2. EFA and CFA results can be found in the online supplemental materials.
Table 1 presents the item and subscale indicator sets used in our taxometric analyses.
Study 1 used the 12-item SJAS indicator set from Strube (1989), as well as the four summed-
item subscale indicators empirically derived from the SJAS. Study 2 used the five summed-item
subscale indicators empirically derived from the JAS, Bortner, and Framingham scales.
[Insert Table 1 about here.]
Taxometric Procedures
The base rate classification method (Ruscio, 2009) was used to estimate taxon base rates
and classify cases into putative classes for subsequent analysis. As part of analyses, distributions,
correlations, and validities of the indicator sets were also calculated. All indicators were first
subjected to preliminary analyses to test if they met minimum criteria for taxometric analysis.
Concerning criteria, evidence supports Meehl’s (1995) recommendations that indicators should
⁹ Despite its SJAS equivalent being included as a marker of Achievement Striving in Study 1, JAS 12 failed to meet the loading criterion of ≥ |.30| by some margin in Study 2, and was therefore removed prior to fitting the final model.
(a) separate latent classes by d ≥ 1.25, (b) classes should have an intraclass correlation of rC ≤
|.30|, and (c) the minimum estimated base rate should exceed 10% of the sample (BR ≥ .10; Ruscio,
Walters, Marcus, & Kaczetow, 2010). If indicators fail to meet these standards, then they cannot
unambiguously determine latent structure. However, if criteria are met, taxometric analyses are
performed using the mean base rate estimate, which is used to generate an identical population of
categorical comparison data that is randomly sampled from for each taxometric procedure.
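The following R sketch illustrates these suitability checks for a set of indicators and a provisional classification of cases into putative taxon and complement groups; it is a simplified stand-in for, not a reproduction of, the routines in Ruscio's programs.

# Sketch of the preliminary suitability checks, given indicators (a numeric data
# frame) and assigned_taxon (a 0/1 vector from a provisional classification).
indicator_checks <- function(indicators, assigned_taxon) {
  taxon      <- indicators[assigned_taxon == 1, , drop = FALSE]
  complement <- indicators[assigned_taxon == 0, , drop = FALSE]

  # (a) Indicator validity: standardized mean difference between putative classes
  #     (pooled SD); the guideline is d >= 1.25.
  pooled_sd <- sqrt((apply(taxon, 2, var) + apply(complement, 2, var)) / 2)
  d <- (colMeans(taxon) - colMeans(complement)) / pooled_sd

  # (b) Within-class (nuisance) correlations, which should not exceed |.30|.
  r_taxon      <- mean(abs(cor(taxon)[lower.tri(cor(taxon))]))
  r_complement <- mean(abs(cor(complement)[lower.tri(cor(complement))]))

  # (c) Estimated taxon base rate, which should exceed .10.
  list(validity_d = round(d, 2),
       nuisance_r = round(c(taxon = r_taxon, complement = r_complement), 2),
       base_rate  = round(mean(assigned_taxon), 3))
}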
Consistency rather than significance testing characterizes taxometrics. Confidence in a
structural solution increases as findings converge across multiple mathematically non-redundant
procedures. Across studies, we used four taxometric procedures: (a) mean above minus below a
cut (MAMBAC; Meehl & Yonce, 1994), (b) maximum covariance (MAXCOV; Meehl & Yonce,
1996), (c) maximum slope (MAXSLOPE; Grove, 2004; Grove & Meehl, 1993), and (d) latent mode
factor analysis (L-Mode; Waller & Meehl, 1998). Detailed descriptions are found in the literature
(Ruscio et al., 2006). All analyses used Ruscio’s (2017) suite of taxometric programs for R.
MAMBAC. The MAMBAC procedure is based on the principle that if two latent classes
exist, then they could be distinguished using an optimal cut score of a valid observed indicator.
That cut score, which maximizes the accuracy of classification of individuals into the two latent
groups, corresponds to the point where their latent distributions intersect. However, if the latent
variable is not typological, then no optimal cut score will differentiate groups because there are
no latent classes to distinguish. To aid in visual interpretation, cut scores are plotted, with the x-
axis representing the cut score and the y-axis representing the mean difference score between
groups. Graphed curves for typological data typically peak near the optimal cut score, but when
the data are dimensional, the curves generally appear concave, or U-shaped.
The MAMBAC procedure requires a minimum of two indicators, one to serve as the
input, and the other as the output, which is used to compute mean differences for cases falling
above and below a moving cutting score on the input indicator score distribution. To implement
MAMBAC in the present study, 50 equally spaced cuts were made along the input indicator
starting with 25 cases from either end; 1% of cases from each extreme of the distribution were
also trimmed to reduce the influence of outliers. Ten internal replications were performed.
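A bare-bones R implementation of this logic, offered as a didactic sketch rather than the Ruscio program used for the reported analyses, is as follows.

# Minimal MAMBAC sketch: mean difference on the output indicator for cases above
# vs. below a moving cut on the input indicator (50 cuts, 1% trimming, 25 end cases).
mambac_curve <- function(input, output, n_cuts = 50, trim = 0.01, end_cases = 25) {
  ord    <- order(input)
  input  <- input[ord]; output <- output[ord]
  n      <- length(input)
  keep   <- seq(floor(n * trim) + 1, n - floor(n * trim))   # trim 1% from each extreme
  input  <- input[keep]; output <- output[keep]
  n      <- length(input)

  cut_pos <- round(seq(end_cases, n - end_cases, length.out = n_cuts))
  diffs   <- sapply(cut_pos, function(k) mean(output[(k + 1):n]) - mean(output[1:k]))
  plot(cut_pos, diffs, type = "l",
       xlab = "Cut location (case rank on input indicator)",
       ylab = "Mean difference (above minus below)")
  invisible(data.frame(cut = cut_pos, diff = diffs))
}

# Example call with the (hypothetical) subscale indicators from the earlier sketch:
# mambac_curve(subscales$HardDriving, subscales$AchStriving)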
MAXCOV. The MAXCOV procedure requires a minimum of three indicators, one to
serve as the input along which cases are sorted, and the other two to serve as output indicators
whose covariance is calculated within subsamples of cases. If the latent variable is typological,
and indicator covariation primarily reflects differences between the two classes, this covariation
should be maximized in the subsample of cases wherein the two classes are maximally mixed.
Maximum mixture occurs in the region where the two latent distributions overlap most, which is
typically towards the middle of the input indicator. If the latent variable is dimensional, however,
then indicator covariation should be relatively constant along the full range of the input indicator.
To aid in visual interpretation, the covariation estimates are plotted, with the x-axis representing
scores on the input indicator and the y-axis representing the magnitude of the covariance value.
Typological data produce graphed curves that typically peak at the point where the subsample
is most evenly divided between classes, whereas curves for dimensional data tend to be flat.
Concerning indicators that vary along a small number of ordered categories (i.e., two for
dichotomously scored items), there is a longstanding debate about whether to compute ‘summed
input’ indicators or to allow each indicator to serve as a ‘single input’ as-is. Somewhat
counterintuitively, results show that single inputs yield more accurate and interpretable findings
(Walters & Ruscio, 2009). Therefore, we used single input indicators in our analyses.
Samples were divided into a set of overlapping subsamples (or, ‘windows’) along the
input indicator. All indicators were standardized and 25 windows that overlapped 90% with
adjacent subsamples were used. Monte Carlo evidence shows that 90% overlap generates more
accurate results than using cuts based on discrete intervals (e.g., standard units; Walters &
Ruscio, 2010). Finally, ten internal replications were again performed, which function to
counteract the effects of arbitrarily cutting between cases with tied scores (Ruscio et al., 2011).
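The windowed covariance computation can be sketched in R as follows; this is again a simplified illustration rather than the Ruscio implementation used in the reported analyses.

# Minimal MAXCOV sketch: sort cases on the input indicator, form 25 windows that
# overlap 90% with their neighbors, and compute the covariance of the two output
# indicators within each window.
maxcov_curve <- function(input, out1, out2, n_windows = 25, overlap = 0.90) {
  ord  <- order(input)
  out1 <- out1[ord]; out2 <- out2[ord]
  n    <- length(input)

  # With 90% overlap, consecutive windows advance by 10% of the window width.
  width  <- floor(n / (1 + (n_windows - 1) * (1 - overlap)))
  step   <- floor(width * (1 - overlap))
  starts <- 1 + (0:(n_windows - 1)) * step

  covs <- sapply(starts, function(s) {
    idx <- s:min(s + width - 1, n)
    cov(out1[idx], out2[idx])
  })
  plot(seq_along(covs), covs, type = "l",
       xlab = "Window along input indicator", ylab = "Covariance of output indicators")
  invisible(covs)
}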
MAXSLOPE. MAXSLOPE is a simplified version of MAXCOV that is used in its place
when only two indicators are available. It uses nonlinear regression to fit a scatterplot of scores
on the two indicators, which is then smoothed using a locally weighted scatterplot smoother
procedure (LOWESS; Cleveland, 1979). If the latent variable is typological, then the scatterplot
should contain two generally overlapping clouds of cases—one that is relatively high on both
indicators and one that is relatively low on both—representing distributions of the taxon and
complement classes, respectively. Because indicators should be relatively uncorrelated within
each class, the regression line should be relatively flat within the non-overlapping region of each
cloud of cases, but should rise more steeply where they overlap. Indicator scores are represented
by the x-axis and the y-axis represents the regression slope. Typological variables yield nonlinear
curves in which the slope is steeper in the intermediate range, whereas dimensional variables,
which contain no mixed latent classes, show a consistent rising slope (Ruscio et al., 2011).
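A minimal R sketch of this procedure, using base R's LOWESS smoother, is shown below.

# Minimal MAXSLOPE sketch: smooth the scatterplot of the two indicators with LOWESS
# (Cleveland, 1979) and inspect where the fitted curve rises most steeply.
maxslope_curve <- function(x, y, span = 0.5) {
  sm <- lowess(x, y, f = span)
  plot(x, y, pch = 16, cex = 0.4, col = "grey70",
       xlab = "Indicator 1", ylab = "Indicator 2")
  lines(sm, lwd = 2)

  # Numerical slope of the smoothed curve (tied x values dropped before differencing).
  keep  <- !duplicated(sm$x)
  slope <- diff(sm$y[keep]) / diff(sm$x[keep])
  invisible(data.frame(x = sm$x[keep][-1], slope = slope))
}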
L-Mode. L-Mode follows somewhat different logic than the other taxometric procedures,
though like MAXCOV, it requires a minimum of three indicators of the latent variable. L-Mode
involves extracting a single latent factor from the indicators and examining the distribution of
factor scores. For purposes of interpretation, the x-axis represents factor score estimates and the
y-axis represents the density of cases. Factor scores should separate latent classes more validly
than observed scores, so if the latent variable is typological, then scores should depict a bimodal
distribution. However, if the variable is dimensional, the distribution should appear unimodal.
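In R, the core of the procedure can be sketched with a single-factor model and a density plot of the resulting factor scores; this is a simplified illustration, not the Ruscio routine used for the reported analyses.

# Minimal L-Mode sketch: extract one common factor from three or more indicators
# and inspect the distribution of factor score estimates for one vs. two modes.
lmode_plot <- function(indicators) {
  fit    <- factanal(indicators, factors = 1, scores = "regression")
  fscore <- fit$scores[, 1]
  plot(density(fscore), main = "",
       xlab = "Factor score estimate", ylab = "Density")
  invisible(fscore)
}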
Comparison Data
Outputs of taxometric analysis consist of graphed curves, which are visually inspected for
signs of typological or dimensional structure. To supplement curves, we used bootstrap methods
to simulate comparison data for parallel analysis (Ruscio & Kaczetow, 2008; Ruscio, Ruscio, &
Meron, 2007). Simulated data reproduced the characteristics of the observed data (i.e., equivalent
sample size, indicator validities, intercorrelations), but its latent structure was typological. For
consistency, 100 comparison data sets were generated, and both observed and simulated data
were submitted to identical analyses. The resulting outputs yielded three graphed curves: (a) an
average observed curve, (b) an average categorical curve, and (c) an average dimensional curve.
Average observed curves were overlaid across categorical and dimensional curves to aid in the
visual interpretation of findings (see Figures 1 and 2).
An objective fit index was also calculated, the comparison curve fit index (CCFI; Ruscio
et al., 2010; Ruscio & Kaczetow, 2009; Ruscio & Walters, 2011). The CCFI is computed from
the root mean square residuals (RMSR) between the observed curve and the dimensional and
categorical comparison curves (i.e., CCFI = RMSR_Dim / [RMSR_Dim + RMSR_Cat]). CCFI values range from 0 (dimensional) to 1 (categorical),
with a value of .50 showing equally good fit to both latent structures. The more values deviate
from .50, the stronger and more certain the result. However, findings should be interpreted with
caution when .40 < CCFI < .60. Extensive simulation evidence indicates that the mean CCFI
across procedures provides the best cumulative evidence of latent structure (Ruscio et al., 2010).
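Given the averaged observed, dimensional, and categorical curves from any one procedure, the index can be computed as in the following short R sketch of the formula above.

# Sketch of the comparison curve fit index (CCFI): 0 = dimensional, 1 = categorical,
# .50 = equally good fit to both latent structures.
ccfi <- function(observed, dimensional, categorical) {
  rmsr <- function(a, b) sqrt(mean((a - b)^2))
  rmsr_dim <- rmsr(observed, dimensional)
  rmsr_cat <- rmsr(observed, categorical)
  rmsr_dim / (rmsr_dim + rmsr_cat)
}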
Results
Study 1: Direct Replication
The aim of Study 1 was to conduct a direct replication of Strube (1989). We duplicated
sampling and methodological procedures from the original study by using SJAS responses from
a sample of 2,373 undergraduates. We then subjected the 12-item indicator set to contemporary
taxometric procedures. We also tested four subscale indicators: Hard-Driving/Competitiveness,
Achievement Striving, Speed/Impatience, and Rapid Eating (see Table 1).
Preliminary Analyses
In preliminary analyses, we first obtained an estimate of the taxon base rate by taking the
average estimate across the MAMBAC, MAXCOV, and L-Mode procedures. Using this base
rate estimate as the input, we then calculated indicator validities, which are reported in Table 2.
Item indicators. Preliminary analyses of the 12-item indicator set were conducted for the
direct replication. Results showed that the mean indicator validity (Mean d = .89) fell far short of
the criterion of d ≥ 1.25 for taxometric analysis. In fact, only three of the 12 items (see Table 2)
met this standard. Notably, these insufficiently valid items were the same ones used in Strube
(1989), which casts retrospective doubt on the original finding on methodological grounds.
What this finding means is that the item indicator set was too weak to unequivocally determine
latent structure and so no further analyses could be conducted using them. Despite this failed
direct replication, we continued on by testing the empirically-derived subscale indicator set.
Subscale indicators. Next, preliminary analyses of the four-subscale indicator set were
conducted. Again, results showed that the mean indicator validity (Mean d = 1.16) fell short of
the standard. Low validity was mostly due to the Speed/Impatience and Rapid Eating subscales
(d = .96 and .80, respectively). By comparison, Hard-Driving/Competitiveness met the criterion
(Mean d = 1.65) and Achievement Striving fell just short of it (d = 1.23; see Table 2). Seeing as
Achievement Striving was near the standard, we inspected indicator validities estimated by each
of the three specific procedures. Results showed that validities for Achievement Striving were
acceptable for MAMBAC and MAXCOV (Mean d = 1.26), but fell short for L-Mode (d = 1.11).
Validities for Speed/Impatience and Rapid Eating, in contrast, had low estimates across methods
(Mean d < .91). Based on the collective evidence, we decided to exclude these weaker indicators
and to retain Hard-Driving/Competitiveness and Achievement Striving for further analyses.
The exclusion of Speed/Impatience and Rapid Eating subscales precluded us from using
taxometric procedures that require a minimum of three indicators (e.g., MAXCOV, L-Mode).
Nevertheless, removal was not a problem for either procedural or theoretical reasons. Concerning
procedures, questions about latent structure can be accurately tested using only two indicators
(Ruscio & Walters, 2011; Wilmot, 2015) and both MAMBAC and MAXSLOPE methods are fit
for that purpose. Concerning theory, in most TAB research, overall SJAS scores have been used
to distinguish between Types A and B (Strube, 1989). Overall SJAS scores primarily reflect the
two correlated factors of Hard-Driving/Competitiveness and Achievement Striving. Indeed, part-
whole correlations of overall scores to scores from these two subscales in Sample 1 were strong
(r = .70 and .75, respectively). Thus, if overall SJAS scores primarily reflect these two factors,
then analyzing their covariance should constitute an adequate test of our focal research question.
Because the accuracy of base rate estimates depends on indicator quality, we sought to
obtain a more precise estimate using only the two valid subscale indicators. We subjected Hard-
Driving/Competitiveness and Achievement Striving to MAMBAC and MAXSLOPE procedures,
and then used the average base rate estimate in our taxometric analyses. Results showed that the
validities for Hard-Driving/Competitiveness and Achievement Striving now exceeded standards
(d = 2.19 and 1.76, respectively). Intraclass correlations for taxon and complement groups were
small-to-moderate (rC = -.295 and -.172, respectively) and met the criterion (rC ≤ |.30|; see Table
3). Finally, the mean estimated base rate across methods of .421 exceeded the BR >.10 criterion.
Accordingly, the two-subscale indicator set qualified for taxometric analysis. In accordance with
the base rate classification method, the average base rate estimate of .421 was used to generate a
common population of typological comparison data (Walters & Ruscio, 2011).
[Insert Table 2 about here.]
Taxometric Analyses
Table 3 presents results of taxometric analyses and Figure 1 depicts graphical output for
the two procedures. MAMBAC curves are plotted in the top two panels and MAXSLOPE curves
are plotted in the bottom two panels. For both procedures, left panels depict the observed curves
overlaid across simulated curves from categorical comparison data, whereas right panels reflect
the observed curves overlaid across dimensional comparison data. In all panels, the observed
curve is presented as a bold line, the gray band represents the middle 50% of simulated data
points, and the two darker gray lines reflect minimum and maximum estimated values.
MAMBAC results show the observed curve more closely resembled the shape of the
dimensional comparison data (Figure 1, upper-right panel) than it did the mountain peak-shape
distinctive of categorical data (Figure 1, upper-left panel). A CCFI value of .388 provided
corroborating evidence of latent dimensional structure (see Table 3).
MAXSLOPE results also provided visual and quantitative evidence of dimensionality.
Except for a slight peak in the center, the observed curve was mostly flat and more strongly
resembled the dimensional comparison data (see Figure 1, lower-right panel) than it did the
typological comparison data with its steep intermediate slope (Figure 1, lower-left panel). A
CCFI value of .288 provided further dimensional evidence (see Table 3). Finally, the mean CCFI
of .338 across both procedures provided cumulative evidence of latent dimensional structure.
[Insert Table 3 about here.]
[Insert Figure 1 about here.]
Summary. Study 1 attempted to replicate typological evidence reported in Strube (1989).
To do so, we copied the sampling and methodological procedures used in the original study. We
sampled from the same population (i.e., undergraduates studying at Midwestern universities), one
sample of which was collected during the same decade (the 1980s), and used the same measure, response
format, and recoding scheme. We then subjected the same set of 12-item indicators to identical
taxometric procedures, or, as appropriate, to better-validated modern techniques. Overall, results
showed that the item indicators used in the original study were not sufficiently valid to qualify
for taxometric analysis, let alone to reveal evidence of typological structure. Moreover, of the
two (out of four) subscale indicators found to be suitable for analysis, results provided evidence
of latent dimensional structure. In sum, we failed to replicate findings reported in Strube (1989).
Notwithstanding our fidelity to the original study, a limitation of Strube (1989) and our
direct replication may be the scale used to measure TAB. Although the SJAS is the most widely-
used self-report measure of TAB in undergraduate assessment, it may lack sufficient coverage of
the Speed/Impatience and Time Pressure factors. If the goal is to determine the latent structure of
TAB as a construct, then all its major components should be adequately represented for that test.
Consequently, we proceeded to Study 2, which used multiple measures of TAB.
Study 2: Conceptual Replication
The aim of Study 2 was to conduct a conceptual replication of Strube (1989). We altered
the methods used in the original study to test the rigor of its underlying hypothesis by using a
representative sample of 2,254 middle-aged men who participated in the Caerphilly Prospective
Study and completed three different measures of TAB: the Type A-B scale of the JAS, Bortner
scale, and Framingham scale. We consulted the literature, and used factor analyses to derive a set
of subscales capturing the factors underlying the three measures: Hard-Driving/Competitiveness,
Achievement Striving, Speed/Impatience, Time Pressure, and Rapid Eating (see Table 1).
Preliminary Analyses
Subscale indicators. Preliminary analyses of the five-subscale indicator set were
conducted for the conceptual replication. Results showed that the mean indicator validity (Mean
d = 1.43) met the minimum criterion of d ≥ 1.25. However, on review, the Rapid Eating subscale
failed to meet this standard (d = .96; see Table 2). As a result, we removed it and reran analyses.
Preliminary analyses of the four-subscale indicator set indicated that all of the remaining
subscales met the inclusion criterion (Mean d = 1.68; Range = 1.36 to 1.82; see Table 2). Mean
intraclass correlations for the taxon and complement groups also met the standard of rC ≤ |.30| (rC =
.149 and .101, respectively; see Table 3). Finally, the average estimated base rate of .401 across
procedures exceeded the BR >.10 criterion. Taken together, the four-subscale indicator set met
the qualifications for taxometric analyses. Like Study 1, we used the mean base rate to generate a
common population of typological comparison data for all three procedures.
Taxometric Analyses
Table 3 presents results of taxometric analyses and Figure 2 depicts graphical output for
the three procedures. MAMBAC curves are plotted in the top two panels, MAXCOV curves are
plotted in the middle two, and L-mode curves are plotted in the bottom two panels. As before,
left panels depict observed curves overlaid across simulated curves from categorical comparison
data, and right panels reflect the observed curves overlaid across dimensional comparison data.
Again, the observed curves are bolded, gray bands represent the middle 50% of simulated data
points, and the two darker gray lines reflect the minimum and maximum estimated values.
MAMBAC results showed that the observed curve more closely resembled the U-shaped
curve characteristic of dimensional data (Figure 2, upper-right panel) than it did the typological
comparison curve (Figure 2, upper-left panel). Nevertheless, the CCFI of .521 showed relatively
equal fit to both latent structures.
MAXCOV results, by comparison, showed evidence of dimensionality. The observed
curve did not appear like the mountain peak characteristic of typological data (Figure 2, middle-
left panel), but rather resembled the straight positively sloping line (Figure 2, middle-right panel)
distinctive to dimensional structure. The CCFI value of .332 supported this latter interpretation.
Finally, L-Mode results also showed evidence of latent dimensional structure. The mean
observed curve was clearly unimodal (Figure 2, lower-right panel), not bimodal (Figure 2, lower-
left panel), and the CCFI was .404. Across all three procedures, the mean CCFI value of .419
provided cumulative evidence of latent dimensionality.
[Insert Figure 2 about here.]
Summary. Study 2 attempted to overcome methodological limitations of Strube (1989)
by conducting a conceptual replication using multiple measures that provided better construct
coverage of the components of TAB. We used a representative sample with responses to three
different TAB scales and derived a set of factorial subscale indicators that replicated the factor
structure found in prior research (Edwards et al, 1990). Indicators were subsequently subjected to
preliminary analyses. Results showed that the Rapid Eating subscale lacked sufficient validity to
qualify for analysis, but all the remaining indicators met criteria. Across taxometric procedures,
conceptual replication results extended those of the direct replication. Altogether, they provide
clear and unambiguous evidence of the latent dimensionality of the construct of TAB.
Discussion
Arguably no personality construct has captured the imagination of scholars and the public
alike as much as Type A Behavior. Though research shows that most normal personality variables differ
in degree, not in kind, evidence from taxometric analysis appeared to indicate that TAB might be
a naturally occurring typology (Strube, 1989). This work presents the first direct and conceptual
replications of the taxometric analysis of TAB since its original publication nearly 30 years ago.
Study 1 attempted a direct replication of Strube (1989). Sampling and methodological
procedures used in the original study were duplicated and its 12-item indicator set of SJAS items
was subjected to taxometric procedures. Preliminary analyses showed that item indicators lacked
sufficient validity to definitively detect typological structure. Subsequent analyses showed that two
of the four factorial subscale indicators were similarly unsuitable. Nevertheless, the remaining
two subscales, Hard-Driving/Competitiveness and Achievement Striving, had properties fit for
taxometric analysis. Results of MAMBAC and MAXSLOPE procedures, in combination with
parallel analyses of simulated data, failed to replicate the prior claim. To the contrary, multiple
comparison curves and a mean CCFI of .338 provided evidence of latent dimensional structure.
Study 2 attempted a conceptual replication of Strube (1989). To test the latent structure of
TAB at the construct level, original study methods were purposefully altered and a sample from
the Caerphilly Prospective Study, which reported responses to three different TAB scales, was
used to ensure adequate coverage of the underlying components of TAB. Five factorial subscale
indicators were derived and subjected to preliminary analyses. Results showed four subscales,
Hard-Driving/Competitiveness, Achievement Striving, Speed/Impatience, and Time Pressure,
qualified for analysis. Findings from MAMBAC, MAXCOV, and L-Mode methods, comparison
curve evidence, and a mean CCFI of .419 converged with Study 1 results, providing clear and
unambiguous evidence of dimensionality. Simply put, there is no evidence for the type in TAB.
Contributions and Future Directions
This paper makes two important contributions. First and foremost, we provide evidence that, contrary to its familiar name, TAB does not represent a genuine typology, but rather is better conceptualized as a multidimensional syndrome (Bryant & Yarnold, 1985; Edwards et al., 1990).
Despite concerted efforts to maintain fidelity to Strube (1989), our replication results contradict
the typological evidence reported in the original study. Results not only substantiate a prior review's conclusion that some typological findings from early taxometric research are spurious (Haslam et al., 2012), but also add to knowledge about the role of latent typological structure in human personality. With evidence against a typology for TAB, following a similar disconfirming result for the self-monitoring construct (Wilmot, 2015), no normal personality construct now retains unchallenged evidence of typological structure. Based on the extant taxometric research, the
domain of normal personality appears to be uniformly dimensional (also cf. Markon et al., 2011).
With that said, this claim does not rule out the possibility that some normal personality variables
may reflect more complex configurations, compounds, or interactions of multiple components or
influences. Concluding that normal personality is dimensional does not mean that all personality
variables are the results of simple additive influences, but only that the influences that bear on
them do not generate latent discontinuities.
Second, results replicated Edwards et al. (1990) regarding the main factors underlying the construct of TAB, namely General Hard-Driving/Competitiveness, General Speed/Impatience, and Time Pressure. Sub-factor findings were also replicated (i.e., Hard-Driving/Competitiveness,
Achievement Striving, and Rapid Eating) and proved instrumental to our taxometric analyses. In
view of this replicable factor structure, as well as finding no evidence of a common latent class
variable linking the dimensions, we echo the call for “global measures of TAB to be abandoned
in favor of . . . measures of specific TAB dimensions” (Edwards et al., 1990, p. 452).
Among specific dimensions, scales assessing General Hard-Driving/Competitiveness and
General Speed/Impatience may represent promising research directions. These dimensions show
distinct nomological networks to psychological and external variables. Among psychological
variables, General Hard-Driving/Competitiveness correlates with Big Five Conscientiousness,
Extraversion, and Openness/Intellect, but measures of General Speed/Impatience are related to
Neuroticism and low Agreeableness (Bruck & Allen, 2003; Day, Therrien, & Carroll, 2005). As
for external variables, General Hard-Driving/Competitiveness correlates mostly positively with
educational and occupational variables (e.g., academic performance; Spence et al., 1989; sales
performance; Burns & Bluen, 1992; job satisfaction; Bruk-Lee, Khoury, Nixon, Goh, & Spector,
2009). By comparison, General Speed/Impatience shows mostly negative relations to variables
reflecting physical health (e.g., headaches, respiratory and digestive problems, low sleep quality;
Spence et al., 1987) and psychological wellness (e.g., stress, anxiety, depression; Edwards &
Baglioni, 1990). Based on evidence of their respective predictive and divergent validities, efforts to empirically integrate General Hard-Driving/Competitiveness and General Speed/Impatience into the contemporary taxonomy of personality traits appear promising. Such integration may shed new light on existing TAB evidence and help shape future research questions. As for the
dimensions of Time Pressure and Rapid Eating, interested scholars are directed to the literature
on the multidimensional construct of Time Urgency, which has since subsumed them (Conte,
Landy, & Mathieu, 1995; Landy, Rastegary, Thayer, & Colvin, 1991).
Limitations and Constraints on Generality
Like all studies, the present investigation has its limitations, as well as constraints on the
generalizability of its results (Simons, Shoda, & Lindsay, 2017). We discuss these issues below.
First, the validities of the 12-item SJAS indicator set were too weak (mean d = .89) to permit their inclusion in taxometric analyses. Nevertheless, these insufficiently valid items were the same ones used to generate the typological finding in Strube (1989); thus, this limitation is itself informative about the plausibility of the original claim. The Speed/Impatience and Rapid Eating subscales were also excluded due to poor indicator properties (d = .96 and .80, respectively). As a result, only Hard-Driving/Competitiveness and Achievement Striving met criteria for inclusion in taxometric analysis. Strict adherence to standards may seem exacting, but extensive simulation results show that latent types cannot be definitively detected when validities fail to meet minimum criteria (e.g., d ≥ 1.25). The CCFI is accurate under conditions of satisfactory data, but accuracy degrades considerably when assumptions are violated (Ruscio et al., 2010). Moreover, evidence shows that latent structure can be accurately tested using only two indicators (Ruscio & Walters, 2011). Thus, there is no reason to discount results, or to place constraints on their generalizability,
based solely on the fact that only two indicators qualified for taxometric analysis.
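The screening standard invoked here can be made concrete with a brief Python sketch. This is our own simplification for illustration only: cases are provisionally split at a base-rate quantile of the total score, rather than by the base-rate estimates produced by the taxometric procedures themselves. For each indicator, validity is then the standardized mean difference (Cohen's d) between the provisional taxon and complement groups, checked against the d ≥ 1.25 criterion.

```python
import numpy as np

def cohens_d(indicator, taxon_mask):
    """Cohen's d between provisional taxon and complement groups."""
    taxon, comp = indicator[taxon_mask], indicator[~taxon_mask]
    pooled_var = (((len(taxon) - 1) * taxon.var(ddof=1) +
                   (len(comp) - 1) * comp.var(ddof=1)) /
                  (len(taxon) + len(comp) - 2))
    return (taxon.mean() - comp.mean()) / np.sqrt(pooled_var)

def screen_indicators(data, base_rate, min_d=1.25):
    """Flag indicators whose validity meets the d >= 1.25 criterion.

    data: cases x indicators array; base_rate: estimated taxon base rate.
    Cases are provisionally assigned to the putative taxon by a cut on
    the total score at the (1 - base_rate) quantile (a simplification).
    """
    total = data.sum(axis=1)
    taxon_mask = total >= np.quantile(total, 1 - base_rate)
    d_values = np.array([cohens_d(data[:, j], taxon_mask)
                         for j in range(data.shape[1])])
    return d_values, d_values >= min_d

rng = np.random.default_rng(3)
data = rng.normal(size=(2000, 4)) + rng.normal(size=(2000, 1))  # correlated indicators
d_values, meets_criterion = screen_indicators(data, base_rate=0.40)
print(d_values.round(2), meets_criterion)
```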
Second, although we could adapt to the data-related limitations of Study 1, its design-
related limitations (i.e., using the SJAS) prompted Study 2. Samples that are large enough for
taxometric analysis (i.e., N > 1,000) and report responses to multiple TAB scales are very rare.
To our knowledge, the sample from the Caerphilly Prospective Study is the only existing data set
that meets both criteria. Despite considerable strengths, these data also have limitations. Due to
administrative constraints, only the Type A-B scale of the JAS, Bortner scale, and Framingham
scale were assessed. Although these scales are the three most widely-used self-report measures
of TAB in the literature (Edwards et al., 1990, p. 440), other scales (e.g., Thurstone Temperament Schedule; Thurstone, 1953) and components (e.g., Job Involvement10; Jenkins et al., 1979) were not assessed as part of the study. As a result, we could not test them. A related
limitation is that the sample was composed of predominantly middle-class, middle-aged
Caucasian males. However, because this sample is representative of the population used in the
conceptualization of the TAB construct, as well as in the design, development, and validation of
the three scales used in Study 2, it is unlikely that evidence of latent dimensionality is an artifact
of the characteristics of the sample. Indeed, a similar dimensional finding was produced in Study
1, which used a sample of predominantly female undergraduates. Accordingly, results appear to
be generalizable to both males and females in English-speaking populations, and to the four most
widely used self-report scales. Future studies may be warranted to determine whether results are
generalizable to other populations (e.g., non-English speakers) and to non-self-report methods of
assessing TAB (i.e., structured interviews; Ganster, Schaubroeck, Sime, & Mayes, 1991).
The third limitation is associated with taxometric analysis. Taxometric procedures test
whether relations between observed indicators can be better explained by two latent classes or by
one (or more) dimensions. In contrast, an alternative method, latent class analysis, tests whether
the relations could also be explained by two or more latent classes (Lubke & Miller, 2015). Still
other approaches permit latent structures to have both continuous and categorical characteristics
(Borsboom, Rhemtulla, Cramer, van der Maas, Scheffer, & Dolan, 2016). As a result, taxometric
models test a more restricted set of hypotheses about latent structure than do other, more general
models. Although these alternative approaches go beyond the scope of the present replication,
future researchers may benefit from using them to explore other questions of interest about TAB.
10 Job Involvement is a major construct in its own right and has accrued a sizable literature in industrial-organizational psychology (see Brown, 1996). Thus, it is unlikely that typological structure remains undetected in this component.
That said, researchers should be aware that the alternatives also have limitations. For example, latent class analysis assumes local independence within classes, which is often unrealistic, and the method tends to over-identify classes. It may be more useful for modeling heterogeneity in
samples in categorical terms, rather than providing a rigorous test between latent typological and
dimensional structures. Unlike taxometric procedures, methods that fit mixture distributions to
data depend on restrictive and potentially inaccurate assumptions about the form of those
distributions (e.g., normality; McLachlan & Peel, 2001).
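This last point can be illustrated with a short Python sketch using scikit-learn; the data are simulated for illustration and the example is not an analysis reported in this paper. When a single skewed dimension generates the observations, an information criterion such as the BIC will often prefer a two-class normal mixture over a one-class model, even though no latent classes exist.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# A single positively skewed dimension; no latent classes are present.
rng = np.random.default_rng(4)
x = rng.lognormal(mean=0.0, sigma=0.6, size=(2000, 1))

for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(x)
    print(k, "component(s), BIC =", round(gmm.bic(x), 1))
# With skewed data, the two-component model will often show the lower
# (better) BIC even though the generating structure is dimensional,
# illustrating how normal-mixture assumptions can over-identify classes.
```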
Mindful of these limitations and constraints on generalizability, we have no reason to
believe that our results depend on other characteristics of the participants, materials, or context.
Consequently, the present findings indicate that dimensionality should be considered the null
position in any future investigation of the latent structure of TAB; the burden of proof for
demonstrating otherwise has been shifted to the proponent of typological latent structure.
Practical Implications
Finally, a practical implication of evidence of latent dimensionality is that researchers
and practitioners alike are cautioned against continued treatment of TAB as a naturally occurring
typological variable. Dichotomized theorizing (i.e., Types A vs. B), assessment practices, and
data analytic methods (i.e., artificial dichotomization, range enhancement using extreme scores)
should be replaced by dimensional conceptualizations and corresponding statistical procedures.
Instead, researchers may benefit from using subscale-scoring approaches that reflect the various
factorial components of the TAB construct, which are more interpretable and show evidence of
greater predictive and explanatory utility (Edwards & Baglioni, 1991).
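One statistical cost of the dichotomized approach can be shown with a final illustrative Python sketch (simulated data, not a reanalysis of the present samples): artificially splitting a continuous TAB composite into "Type A" and "Type B" groups discards information and typically attenuates observed relations with criteria.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
tab_score = rng.normal(size=n)                    # continuous TAB composite
criterion = 0.4 * tab_score + rng.normal(size=n)  # criterion related to TAB

r_continuous = np.corrcoef(tab_score, criterion)[0, 1]

# Artificial dichotomization at the median ("Type A" vs. "Type B").
type_a = (tab_score > np.median(tab_score)).astype(float)
r_dichotomized = np.corrcoef(type_a, criterion)[0, 1]

print(round(r_continuous, 3), round(r_dichotomized, 3))
# A median split of a normally distributed score yields a point-biserial
# correlation of roughly 80% of the continuous correlation, so the
# dichotomized "type" analysis gives up predictive power.
```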
References
Alarcon, G., Eschleman, K. J., & Bowling, N. A. (2009). Relationships between personality
variables and burnout: A meta-analysis. Work & Stress, 23, 244-263.
https://doi.org/10.1080/02678370903282600
Arbuckle, J. L. (2014). Amos (Version 23.0) [Computer program]. Chicago, IL: SPSS.
Allen, T. D., Johnson, R. C., Saboe, K. N., Cho, E., Dumani, S., & Evans, S. (2012).
Dispositional variables and work–family conflict: A meta-analysis. Journal of Vocational
Behavior, 80, 17-26. https://doi.org/10.1016/j.jvb.2011.04.004
Borsboom, D., Rhemtulla, M., Cramer, A. O. J., van der Maas, H. L. J., Scheffer, M., & Dolan,
C. V. (2016). Kinds versus continua: A review of psychometric approaches to uncover
the structure of psychiatric constructs. Psychological Medicine, 46, 1567-1579.
https://doi.org/10.1017/S0033291715001944
Bortner, R. W. (1969). A short rating scale as a potential measure of pattern A behavior. Journal
of Chronic Diseases, 22, 87-91. https://doi.org/10.1016/0021-9681(69)90061-7
Brown, S. P. (1996). A meta-analysis and review of organizational research on job involvement.
Psychological Bulletin, 120, 235-255. https://doi.org/10.1037/0033-2909.120.2.235
Bruck, C. S., & Allen, T. D. (2003). The relationship between Big Five personality traits,
negative affectivity, Type A behavior, and work–family conflict. Journal of Vocational
Behavior, 63, 457-472. https://doi.org/10.1016/S0001-8791(02)00040-4
Bruk-Lee, V., Khoury, H. A., Nixon, A. E., Goh, A., & Spector, P. E. (2009). Replicating and
extending past personality/job satisfaction meta-analyses. Human Performance, 22, 156-
189. https://doi.org/10.1080/08959280902743709
Bryant, F. B., & Yarnold, P. R. (1995). Comparing five alternative factor-models of the Student
Jenkins Activity Survey: Separating the wheat from the chaff. Journal of Personality
Assessment, 64, 145-158. https://doi.org/10.1207/s15327752jpa6401_10
Cattell, R. B. (1973). Personality and mood by questionnaire. San Francisco, CA: Jossey-Bass.
Chida, Y., & Hamer, M. (2008). Chronic psychosocial factors and acute physiological responses
to laboratory-induced stress in healthy populations: A quantitative review of 30 years of
investigations. Psychological Bulletin, 134, 829-885. https://doi.org/10.1037/a0013342
Cleveland, W. S. (1979). Robust locally-weighted regression and smoothing scatterplots. Journal
of the American Statistical Association, 74, 829-836.
https://doi.org/10.1080/01621459.1979.10481038
Conte, J. M., Landy, F. J., & Mathieu, J. E. (1995). Time urgency: Conceptual and construct
development. Journal of Applied Psychology, 80, 178-185. https://doi.org/10.1037/0021-
9010.80.1.178
Cooper, M. L. (2016). Editorial. Journal of Personality and Social Psychology, 110, 431–434.
https://doi.org/10.1037/pspp0000033
Day, A. L., Therrien, D. L., & Carroll, S. A. (2005). Predicting psychological health: Assessing
the incremental validity of emotional intelligence beyond personality, Type A behaviour,
and daily hassles. European Journal of Personality, 19, 519-536.
https://doi.org/10.1002/per.552
Edwards, J. R., & Baglioni, A. J. (1991). Relationship between Type A behavior pattern and
mental and physical symptoms: A comparison of global and component measures.
Journal of Applied Psychology, 76, 276-290. https://doi.org/10.1037/0021-9010.76.2.276
Edwards, J. R., Baglioni, A. J., & Cooper, C. L. (1990). Examining the relationships among self-
report measures of the Type A behavior pattern: The effects of dimensionality,
measurement error, and differences in underlying constructs. Journal of Applied
Psychology, 75, 440-454. https://doi.org/10.1037/0021-9010.75.4.440
Friedman, M., & Rosenman, R. H. (1959). Association of specific overt behavior pattern with
blood and cardiovascular findings: Blood cholesterol level, blood clotting time, incidence
of arcus senilis, and clinical coronary artery disease. Journal of the American Medical
Association, 169, 1286-1296. https://doi.org/10.1001/jama.1959.03000290012005
Ganster, D. C., Schaubroeck, J., Sime, W. E., & Mayes, B. T. (1991). The nomological validity
of the Type A personality among employed adults. Journal of Applied Psychology, 76,
143-168. https://doi.org/10.1037/0021-9010.76.1.143
Glass, D. C. (1977). Behavior patterns, stress, and coronary disease. Hillsdale, NJ: Erlbaum.
Golden, R. R. (1982). A taxometric model for the detection of a conjectured latent taxon.
Multivariate Behavioral Research, 17, 389-416.
https://doi.org/10.1207/s15327906mbr1703_6
Golden, R. R., & Meehl, P. E. (1979). Detection of the schizoid taxon with MMPI indicators.
Journal of Abnormal Psychology, 88, 217-233. https://doi.org/10.1037/0021-843X.88.3.217
Grove, W. M. (2004). The MAXSLOPE taxometric procedure: Mathematical derivation,
parameter estimation, consistency tests. Psychological Reports, 95, 517-550.
https://doi.org/10.2466/pr0.95.2.517-550
Haslam, N. (1999). Taxometric and related methods in relationships research. Personal
Relationships, 6, 519-534. https://doi.org/10.1111/j.1475-6811.1999.tb00207.x
Haslam, N., Holland, E., & Kuppens, P. (2012). Categories versus dimensions in personality and
psychopathology: A quantitative review of taxometric research. Psychological Medicine,
42, 903-920. https://doi.org/10.1017/S0033291711001966
Haynes, S. G., Levine, S., Scotch, N., Feinleib, M., & Kannel, W. B. (1978) The relationship of
psychosocial factors to coronary heart disease in the Framingham study. American
Journal of Epidemiology, 107, 362-402. https://jhu.pure.elsevier.com/en/publications/the-
relationship-of-psychosocial-factors-to-coronary-heart-diseas-8
Horn, J. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika,
30, 179-185. https://doi.org/10.1007/BF02289447
Jenkins, C., Zyzanski, S. J., & Rosenman, R. H. (1979). Jenkins Activity Survey Manual. New
York: Psychological Corporation.
John, O. P., Naumann, L. P., & Soto, C. J. (2008). Paradigm shift to the integrative Big Five trait
taxonomy: History, measurement, and conceptual issues. In O. P. John, R. W. Robins, &
L. A. Pervin (Eds.), Handbook of personality: Theory and research (3rd ed., pp. 114-
158). New York, NY: Guilford Press.
Kline, R. B. (2015). Principles and practice of structural equation modeling (4th Ed). New York,
NY: Guilford Press.
Krantz, D. D., Glass, D. C., & Snyder, M. L. (1974). Helplessness, stress level, and the coronary
prone behavior pattern. Journal of Experimental Social Psychology, 10, 284-300.
https://doi.org/10.1016/0022-1031(74)90074-2
Landy, F. J., Rastegary, H., Thayer, J., & Colvin, C. (1991). Time urgency: The construct and its
measurement. Journal of Applied Psychology, 76, 644-657.
https://dx.doi.org/10.1037/0021-9010.76.5.644
Lubke, G. H., & Miller, P. J. (2015). Does nature have joints worth carving? A discussion of
taxometrics, model-based clustering and latent variable mixture modeling. Psychological
Medicine, 45, 705-715. https://doi.org/10.1017/S003329171400169X
Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin,
70, 151-159. https://doi.org/10.1037/h0026141
Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How
often do they really occur? Perspectives on Psychological Science, 7, 537-542.
https://doi.org/10.1177/1745691612460688
Markon, K. E., Chmielewski, M., & Miller, C. J. (2011). The reliability and validity of discrete
and continuous measures of psychopathology: A quantitative review. Psychological
Bulletin, 137, 856-879. https://doi.org/10.1037/a0023678
Matthews, K. A. (1982). Psychological perspectives on the Type A behavior pattern.
Psychological Bulletin, 91, 293-323. https://dx.doi.org/10.1037/0033-2909.91.2.293
McLachlan, G., & Peel, D. (2001). Finite mixture models. New York, NY: Wiley-Interscience.
Meehl, P. E. (1992). Factors and taxa, traits and types, difference of degree and differences in
kind. Journal of Personality, 60, 117-174. https://doi.org/10.1111/j.1467-6494.1992.tb00269.x
Meehl, P. E. (1995). Bootstraps taxometrics: Solving the classification problem in
psychopathology. American Psychologist, 50, 266-274.
https://doi.org/10.1037/0003-066X.50.4.266
Meehl, P. E., & Yonce, L. J. (1994). Taxometric analysis: I. Detecting taxonicity with two
quantitative indicators using means above and below a sliding cut (MAMBAC
procedure). Psychological Reports, 74, 1059-1274.
Meehl, P. E., & Yonce, L. J. (1996). Taxometric analysis: II. Detecting taxonicity using
covariance of two quantitative indicators in successive intervals of a third indicator
(MAXCOV procedure). Psychological Reports, 78, 1091-1227.
https://doi.org/10.2466/pr0.1996.78.3c.1091
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Revelle, W. (2017). Procedures for psychological, psychometric, and personality research:
User’s manual [computer software and manual]. Retrieved from: http://www.personality-
project.org/r
Ruscio, J. (2009). Assigning cases to groups using taxometric results: An empirical comparison
of classification techniques. Assessment, 16, 55-70.
https://doi.org/10.1177/1073191108320193
Ruscio, J. (2017). Taxometric programs for the R computing environment: User’s manual
[computer software and manual]. Retrieved from:
http://ruscio.pages.tcnj.edu/quantitative-methods-program-code/
Ruscio, J., Haslam, N., & Ruscio, A. M. (2006). Introduction to the taxometric method: A
practical guide. New York, NY: Routledge.
Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative
algorithm. Multivariate Behavioral Research, 43, 355-381.
https://doi.org/10.1080/00273170802285693
Ruscio, J., & Kaczetow, W. (2009). Differentiating categories and dimensions: Evaluating the
robustness of taxometric analyses. Multivariate Behavioral Research, 44, 259-280.
https://doi.org/10.1080/00273170902794248
Ruscio, J., Ruscio, A. M., & Carney, L. M. (2011). Performing taxometric analysis to distinguish
categorical and dimensional variables. Journal of Experimental Psychopathology, 2, 170-
196. https://doi.org/10.5127/jep.01091.
Ruscio, J., Ruscio, A. M., & Meron, M. (2007). Applying the bootstrap to taxometric analysis:
Generating empirical sampling distributions to help interpret results. Multivariate
Behavioral Research, 44, 349-386. https://doi.org/10.1080/00273170701360795
Ruscio, J., & Walters, G. D. (2011). Differentiating categorical and dimensional data with
taxometric analysis: Are two variables better than none? Psychological Assessment, 23,
287-299. https://doi.org/10.1037/a0022054
Ruscio, J., Walters, G. D., Marcus, D. K., & Kaczetow, W. (2010). Comparing the relative fit of
categorical and dimensional latent variable models using consistency tests. Psychological
Assessment, 22, 5-21. https://doi.org/10.1037/a0018259
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected
in the social sciences. Review of General Psychology, 13, 90-100.
https://doi.org/10.1037/a0015108
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed
addition to all empirical papers. Perspectives on Psychological Science, 12, 1123-1128.
https://doi.org/10.1177/1745691617708630
Smith, J. L., & Bryant, F. B. (2012). Are we having fun yet? Savoring, Type A behavior, and
vacation enjoyment. International Journal of Wellbeing, 3, 1-19.
https://doi.org/10.5502/ijw.v3i1.1
Strube, M. J. (1989). Evidence for the type in Type A behavior: A taxometric analysis. Journal
of Personality and Social Psychology, 56, 972-987. https://doi.org/10.1037/0022-
3514.56.6.972
Thurstone, L. L. (1953). Thurstone Temperament Schedule. Chicago, IL: Science Research
Associates.
Velicer, W. F. (1976). Determining the number of components from the matrix of partial
correlations. Psychometrika, 41, 321-327. https://doi.org/10.1007/BF02293557
Waller, N. G., & Meehl, P. E. (1998). Multivariate taxometric procedures: Distinguishing types
from continua. Thousand Oaks, CA: Sage.
Walters, G. D., & Ruscio, J. (2009). To sum or not to sum: Taxometric analysis with ordered
categorical assessment items. Psychological Assessment, 21, 99-111.
https://doi.org/10.1037/a0015010
Walters, G. D., & Ruscio, J. (2010). Where do we draw the line? Assigning cases to subsamples
for MAMBAC, MAXCOV, and MAXEIG taxometric analyses. Assessment, 17, 321-333.
https://doi.org/10.1177/1073191109356539
Wilmot, M. P. (2015). A contemporary taxometric analysis of the latent structure of self-
monitoring. Psychological Assessment, 27, 353-364. https://doi.org/10.1037/pas0000030
Yarnold, P. R., Bryant, F. B., & Grimm, L. G. (1987). Comparing the long and short forms of the
student version of the Jenkins Activity Survey. Journal of Behavioral Medicine, 10, 75-
90. https://doi.org/10.1007/BF00845129
Zyzanski, S. J., & Jenkins, C. D. (1970). Basic dimensions within the coronary-prone behavior
pattern. Journal of Chronic Diseases, 22, 781-795. https://doi.org/10.1016/0021-
9681(70)90080-9
Tables
Table 1
Item and Subscale Indicator Sets Used in Preliminary and Taxometric Analyses
                                     Scale items
Indicator                            Jenkins Activity Survey                     Bortner Scale       Framingham Scale
Study 1: Direct replication
  Item indicators
    12-item set                      1, 8, 9, 10, 11, 12, 15, 16, 17, 19, 20, 21
  Subscale indicators
    Hard-Driving/Competitiveness     8, 9, 10
    Achievement Striving             11, 12, 15, 16, 19, 20, 21
    Speed/Impatience                 5, 6
    Rapid Eating                     3, 4
Study 2: Conceptual replication
  Subscale indicators
    Hard-Driving/Competitiveness     8, 9, 10                                     –                   1
    Achievement Striving             11, 19, 20, 21                               2, 6, 11, 14        3, 4
    Speed/Impatience                 5, 6, 13                                     3, 4, 5, 7, 8       10
    Time Pressure                    1, 14, 15, 16, 18                            –                   2, 6, 7, 8, 9
    Rapid Eating                     3, 4                                         10                  5
Note. For Study 1, all item numbers refer to the order as presented in the 21-item Type A-B scale of the Student Jenkins Activity Survey (SJAS; Glass, 1977; Yarnold et al., 1987). The 12-item indicator set is identical to the set used in Strube (1989, Table 1, p. 978). For Study 2, all item numbers refer to the order as presented in the 21-item Type A-B scale of the Jenkins Activity Survey (JAS; Jenkins et al., 1979), the 14-item Bortner scale (Bortner, 1969), and the 10-item Framingham scale (Haynes et al., 1978). Dashes indicate that no items from that measure contribute to the indicator.
Table 2
Descriptive Statistics for Item and Subscale Indicator Sets Used in Preliminary and Taxometric Analyses
                                     Study 1: Direct Replication (N = 2,373)            Study 2: Conceptual Replication (N = 2,254)
Indicator                            M      SD     Skew    Validity       α              M      SD     Skew    Validity       α
Item indicators (Study 1 only)
  1                                  .53    .50    -.11     .57
  8                                  .48    .50     .06    1.35
  9                                  .55    .50    -.20    1.28
  10                                 .54    .50    -.15    1.31
  11                                 .21    .41    1.43     .88
  12                                 .39    .49     .47     .67
  15                                 .46    .50     .29     .79
  16                                 .26    .44    1.09     .61
  17                                 .07    .26    3.32     .44
  19                                 .34    .47     .70     .89
  20                                 .28    .45     .96     .95
  21                                 .25    .43    1.15     .90
  Mean                               .36    –       .75     .89
Subscale indicators
  Hard-Driving/Competitiveness       1.56   1.29   -.09    1.65 (2.19)    .82            .00    3.00    .58    1.53 (1.73)    .74
  Achievement Striving               2.17   1.68    .55    1.23 (1.76)    .55            .00    5.20    .32    1.60 (1.79)    .71
  Speed/Impatience                   .52    .72    1.28     .96           .50            .00    4.92    .13    1.39 (1.36)    .71
  Time Pressure                      –      –      –        –             –              .00    5.51    .47    1.68 (1.82)    .75
  Rapid Eating                       .64    .74     .76     .80           .57            .00    3.28    .66     .96           .84
  Mean                               1.22   –       .63    1.16 (1.97)    .61            .00    –       .43    1.43 (1.68)    .75
Note. Validity = standardized mean difference (i.e., d-value) between latent classes across indicators.
Regarding the minimum criterion for inclusion in taxometric analyses, evidence supports Meehl's (1995) recommendation that indicators should separate classes by d ≥ 1.25 (see Ruscio et al., 2010). Based on failure to meet this criterion in preliminary analyses, (a) the 12-item indicator set, (b) the Speed/Impatience and Rapid Eating subscale indicators (Study 1), and (c) the Rapid Eating subscale (Study 2) were eliminated from consideration. After weak subscale indicators were removed, validities were re-estimated (in parentheses) for those indicators retained for taxometric analyses. Dashes indicate values not applicable or not reported.
Table 3
Results of Taxometric Analyses
             Study 1: Direct Replication (N = 2,373)             Study 2: Conceptual Replication (N = 2,254)        CCFI
Procedure    Validity  ICC Taxon  ICC Compl.  Base Rate Est.     Validity  ICC Taxon  ICC Compl.  Base Rate Est.    S1      S2
MAMBAC       1.97      -.295      -.172       .421               1.68      .149       .101        .401              .388    .521
MAXCOV       –         –          –           –                  1.68      .149       .101        .401              –       .332
MAXSLOPE     1.97      -.295      -.172       .421               –         –          –           –                 .288    –
L-Mode       –         –          –           –                  1.68      .149       .101        .401              –       .404
Mean                                                                                                                .338    .419
Note. MAMBAC = mean above minus below a cut; MAXCOV = maximum covariance; MAXSLOPE = maximum slope; L-Mode = latent mode factor analysis. Validity = average standardized mean difference (i.e., d-value) between latent classes across indicators; ICC Taxon and ICC Compl. = average within-group (intraclass) correlation across indicators for the taxon and complement groups; Base Rate Est. = average base rate estimate across procedures used in preliminary analyses; CCFI = comparison curve fit index (S1 = Study 1, S2 = Study 2). CCFI values range from 0 (dimensional) to 1 (categorical), with a value of .50 representing equally good fit of both structures. The more values deviate from .50, the stronger and more certain the result. Findings should be interpreted with caution when .40 < CCFI < .60. Dashes indicate that the procedure was not used in that study.
Regarding the minimum criteria for inclusion in taxometric analyses, evidence supports Meehl's (1995) recommendations that (a) indicators should separate the latent classes by d ≥ 1.25, (b) the classes should have intraclass correlations of rC ≤ |.30|, and (c) the estimated base rate should be greater than 10% of the sample (BR ≥ .10; see Ruscio et al., 2010). Accordingly, taxometric procedures that require a minimum of two indicators (i.e., MAMBAC and MAXSLOPE) were used in Study 1 because only two indicators met inclusion criteria. By comparison, taxometric procedures that require a minimum of three indicators (i.e., MAXCOV and L-Mode) could also be used in Study 2, alongside MAMBAC, because four indicators met inclusion criteria. MAXSLOPE is a simplified version of MAXCOV and is used in its place when only two indicators are available; thus, this procedure was used in Study 1 only.
Figures
[Figure 1 image: MAMBAC and MAXSLOPE curve panels]
Figure 1. Study 1 (N = 2,373). Results of taxometric analyses for the two-subscale indicator set empirically derived
from the Type A-B scale of the Student Jenkins Activity Survey (SJAS; Glass, 1977; Yarnold et al., 1987). Because
only two subscale indicators met inclusion criteria, taxometric procedures that require a minimum of two indicators
were used: Mean above minus below a cut (i.e., MAMBAC) and maximum slope (i.e., MAXSLOPE). MAXSLOPE
is a simplified version of the MAXCOV procedure and is used in its place when only two indicators are available.
To aid in visual interpretation, the observed curve (bold line) from each taxometric procedure is overlaid across
simulated curves from categorical comparison data (left panel) and dimensional comparison data (right panel). Gray
bands represent the middle 50% of simulated data points, with the two darker gray lines reflecting minimum and
maximum estimated values.
[Figure 2 image: MAMBAC, MAXCOV, and L-Mode curve panels]
Figure 2. Study 2 (N = 2,254). Results of taxometric analyses for the four-subscale indicator set empirically derived
from the Type A-B scale of the Jenkins Activity Survey (JAS; Jenkins et al., 1979), the Bortner scale (Bortner,
1969), and the Framingham scale (Haynes et al., 1978). Because four subscale indicators met inclusion criteria,
taxometric procedures that require a minimum of two or three indicators were used: Mean above minus below a cut
(i.e., MAMBAC), maximum covariance (i.e., MAXCOV), and latent mode factor analysis (i.e., L-Mode). To aid in
visual interpretation, the observed curve (bold line) from each taxometric procedure is overlaid across simulated
curves from categorical comparison data (left panel) and dimensional comparison data (right panel). Gray bands
represent the middle 50% of simulated data points, with the two darker gray lines reflecting minimum and maximum
estimated values.