Licensing Exam Pass Rate Disparities in Marriage
and Family Therapy: Using an Analysis of Predictive
Factors to Inform a More Equitable Licensing Exam
Process
Kevin Lyness
Antioch University New England
Diane Gehart
California State University, Northridge
Brian Hannigan
Antioch University New England
Barrie Birge
Antioch University New England
Sheiketha Ross
Antioch University New England
Research Article
Keywords: licensing exams, marriage and family therapy, pass rates, racial disparities, age disparities
Posted Date: September 24th, 2024
DOI: https://doi.org/10.21203/rs.3.rs-4959863/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Additional Declarations: No competing interests reported.
Abstract
This article describes the findings of a study that explored potential factors that influence the pass rate
for those taking marriage and family therapy (MFT) licensing exams, both the national and California
exams. An online, national survey was conducted to determine factors associated with passing the MFT
licensing exams. The survey included measures of test anxiety, coping strategies, perceived stress, and
experience of discrimination. The demographic results included patterns of racial and age disparities
similar to those reported by the Association of Social Work Boards (2022), especially for Black
respondents. Specific and readily implemented recommendations for making the current exams more
equitable include (a) changing the phrasing of questions, (b) clarifying and reducing the scope of the
content, (c) reducing the number of questions during the 4-hour period, and (d) ensuring adequate
accommodations for disabilities.
Introduction
Standardized examinations are a common step towards state licensure within the mental health
professions: marriage and family therapy (MFT), clinical mental health counseling
(CMHC), clinical social work (CSW), and psychology. Despite the significance of such tests in the profession, little research has explored the specific factors that contribute to one's likelihood of passing or failing (Caldwell & Rousmaniere, 2022).
This article describes the findings of a study that explored prominent factors that influence the pass rate for those taking the MFT licensing exams. Based on these findings, the discussion section includes recommendations for exam improvement, including bridging age and racial pass rate disparities.
Current State of Research on Licensure Examinations
The research pool regarding factors that influence passing the MFT, CMHC, CSW, and psychologist examinations is limited, with much of the related literature being oriented towards other fields of health,
law, and teaching professions (Allen & O’Dell, 2007; ABA, 2022; ASWB, 2022; Caldwell & Rousmaniere,
2022; Nettles et al., 2011).
Family therapy has minimal published research on its licensing exam. In 2011, Caldwell and colleagues
conducted a study that examined whether students from schools accredited by the Commission on
Accreditation for Marriage and Family Therapy Education (COAMFTE) had higher exam pass rates than
those from regionally accredited schools on the California licensing exams. They found that students
who graduated from COAMFTE programs had significantly higher pass rates on the California exams than those who came from non-accredited programs. A potential explanation is that students with stronger traditional academic skills are accepted into COAMFTE programs (Caldwell et al., 2011).
One article has looked specifically at the MFT licensure exam, though it is now dated, using data from 1994-1996 (Lee, 1998). In that study, the author was able to examine actual total exam scores on the national exam in relation to questionnaire data gathered at the time participants took the exam, for over 1000 test takers over a three-year period. Gender and age both affected scores, with women and younger respondents scoring significantly higher (Lee, 1998). The sample was over 90% White, with only 1.9% reported as Black. Lee reported overall significant variability in scores by race, but the cell sizes were too small for meaningful group comparisons. Lee also found that 86% of the sample was taking the exam for the first time, 10% for the second time, and 4% for the third or more time, and that those taking the exam for the first time scored significantly higher. In addition, test takers who had graduated more recently relative to taking the exam did better, and participants who used multiple preparation methods also scored significantly higher. Lee did not report pass rates for the exam, just total scores. One advantage of Lee's study is that the exam scores came from the testing service and the questionnaire was administered directly to those taking the exam, reducing the potential for response bias.
Regarding the exam for CMHC, one dissertation (Carr, 2016) was found that began to explore the presence of test anxiety in students taking the National Clinical Mental Health Counseling Examination (NCMHCE). The CSW exam, governed by the Association of Social Work Boards (ASWB), is more prominently written about, with Allen and O'Dell (2007) describing correlational, yet statistically insignificant, relationships between passing the ASWB exam and being involved with a preparation course. More historically, Borenzweig (1977) concluded that neither age, sex, ethnicity, graduate school orientation, fieldwork, nor supervisor credentials had any statistical bearing on passing the social work exam in California in 1977. Interestingly, the only statistically significant finding for Borenzweig was that people who passed were more likely to be in their own personal therapy than those who had failed.
Most recently, the ASWB (2022) published an in-depth analysis of pass rate data for their licensing exams. Their analysis included findings from test-takers from 2011-2021 on all five of their exams, with their clinical exam closest to licensing exams in family therapy, counseling, and psychology. They compared the "eventual pass rate" (p. 4) over the four-year period of 2018-2021 for all candidates taking the clinical exam based on gender, age, ethnicity/race, and primary language. Their analysis identified significant disparities based on age, ethnicity/race, and language, but not gender. The eventual pass rate over the four-year period was 82.7% for women and 80.1% for men. In terms of age, they found that rates dropped significantly for older candidates. Over the four-year period of 2018-2021, pass rates varied by age as follows: 18-29 = 91.0%, 30-39 = 86.1%, 40-49 = 75.5%, 50 and over = 64.8%. A similar disparity in pass rates was found related to race/ethnicity: Black = 57.0%, Native American/Indigenous peoples = 73.5%, Hispanic/Latino/a = 76.6%, Asian = 79.7%, Multiracial = 86.6%, White = 90.7% (ASWB, 2022). The ASWB (2022) also found that candidates whose first language was English had an 83.4% pass rate, whereas 70% of candidates who spoke English as a second language passed.
In their discussion of age disparities, the ASWB (2022) identified factors such as increased family, financial, and professional responsibilities as possible reasons that older exam candidates may find it hard to prioritize exam preparation. Similarly, regarding racial disparities, they suggested lower household income and wealth, educational inequities, and lower rates of health coverage as possible explanations. Additionally, they posited that stereotype threat, an individual's fear that their test performance may confirm negative stereotypes, could also be a factor.
The Examination for Professional Practice in Psychology (EPPP; Association of State and Provincial Psychology Boards, 2024) is the exam required for licensure as a psychologist. Sharpless (2019, 2021; Sharpless & Barber, 2013) has published several recent articles exploring factors that influence pass rates for the EPPP, including demographics and program characteristics. Sharpless has consistently found that minority racial status (especially being Black) is related to poorer pass rates on the EPPP. Regarding program characteristics, Sharpless and Barber (2013) found that GRE scores, percentage of minorities in the program, and internship match rates were all predictive of pass rates for graduates, and they also found that those with PhDs passed the exam at higher rates than those with PsyDs. Chaparro (2020) also explored the effects of numerous variables (e.g., GRE scores, gender, program type, years to completion) on EPPP pass rates and found only the admission rates of one's college/university to be statistically related to pass rates: as admittance rates decreased, pass rates increased (Chaparro, 2020). All other variables in Chaparro's work were not predictive of passing the EPPP. In another study, Macura and Ameen (2021) examined pass rates for the EPPP and identified statistically significant relationships between passing the test and race (White psychologists had higher first-time pass rates), degree type, and institution accreditation status. Macura and Ameen (2021) also highlighted anecdotal accounts of study material usage, study time, personal life factors (e.g., unexpected life events at the time of the exam), and challenges with test accessibility.
Caldwell (2023) analyzed the current mental health licensing exams in terms of their adherence to industry standards for testing, specifically the Standards for Educational and Psychological Testing established by the American Educational Research Association (AERA). Caldwell notes that mental health licensing exams fail to meet many of the required standards for fair and equitable testing. First, the exams do not meet the standards for construct clarity, which would require a clear description of what exactly is being tested. In stark contrast to testing norms, licensing exam test developers provide vague lists of general topics rather than clearly identifying the specific knowledge covered on the exam. Second, the author notes that these exams also fail to meet the standards of construct validity (evidence that the test accurately measures what it says it measures) and criterion validity (the standard by which the scores are interpreted and used to make decisions). None of the licensing exam developers in mental health have taken reasonable efforts to assess whether these exams meaningfully assess a person's ability to be an effective, independent practitioner, despite the use of these exams for decades to determine such readiness. The AERA standards also require that test developers ensure that their tests are fair to all test-takers, which most licensing exam test developers have ignored until recently. Finally, testing standards also require rigorous statistical analysis to ensure that individual exam items as well as the exam itself are not biased toward groups of test takers. However, developers of mental health licensing tests either ignore or downplay the importance of using statistical analysis to reduce test bias.

Caldwell concludes that "the overall structure of these exams (primarily four-option multiple choice, with a single correct answer; often based on a very brief case vignette) is an inappropriate vehicle for performing and assessing 'knowledge' relevant to professional clinical practice" and recommends suspending licensing exams until the exams are able to meet the industry standards (p. 15; emphasis in the original).
Factors Influencing Examination Anxiety
A critical consideration when exploring exam pass rates is test anxiety. The consequences associated with test anxiety are far-reaching and well documented, including lower motivation (Elliot & McGregor, 1999), diminished cognitive ability, and reduced immune system function, all of which lead to lower test scores, grades, and opportunity (Eysenck & Calvo, 1992; Sarason, 1988; Zatz & Chassin, 1983). Test anxiety has been found to affect females more than males (El-Zahhar & Hocevar, 1991; Spielberger, 1980; Zeidner & Nevo, 1993). Females may perceive test taking as more threatening, experiencing emotions such as fear, worry, and anger, whereas males may experience test taking as more of a "personal challenge" (Peleg-Popko, 2004, p. 649), using anxiety in a more productive way.
Research about specific factors affecting individuals taking the MFT licensing examination was sparse. However, becoming a licensed MFT takes an enormous commitment of time, energy, and resources. Although states have the final say in what is required to be a licensed practicing MFT, the amount of work to become a clinician is significant. The overall stress of completing eleven required courses, a specified number of supervised clinical hours with clients, at least 1000 hours of postgraduate clinical work, 200 hours of postgraduate supervision, and a licensing exam is anxiety provoking (West et al., 2010). Research to identify specific stressors affecting Doctor of Pharmacy students was conducted at two diverse universities, Howard University and the University of Houston (Sansgiry et al., 2005). Test anxiety has been negatively associated with academic performance, academic competence, test competence, and time management (Sansgiry et al., 2005). An empirical study of test anxiety revealed that academic competence and test competence were the most significant predictors of test anxiety (Sansgiry et al., 2015).
Test anxiety, a multidimensional issue, includes worrying about exams, lack of confidence in test performance, and thinking about failure and its consequences. The emotional part of test anxiety consists of feelings of "tension, apprehension, and nervousness towards exam" (p. 122) with congruent somatic feelings experienced as "nausea, sweating and increased heart rate" (Sansgiry et al., 2015, p. 122). Also contributing to test anxiety is a student's perception of how difficult the study material is. At the University of Houston, students were comfortable with their pharmacology classes. However, the study material given to the students for the exam and the amount of preparation time necessary may have increased test-taking anxiety (Sansgiry et al., 2015).
Stress, in general, is part of most academic and licensure testing. However, when stress becomes extreme, it leads to anxiety and ultimately impacts academic achievement. Student stress seems to be universal. A study by Kumari and Jain (2014) in India identified signs of stress that impair exam achievement: insufficient or irregular sleep; feeling tired, isolated, or sad; and somatic conditions such as upset stomach and restlessness, all of which led to an inability to recall what the students had studied (Kumari & Jain, 2014). Addressing stressors such as lifestyle (rest, nutrition, and time management) and gathering information before the test (date of exam, location of exam, content covered, paperwork required), while reducing catastrophic thinking ("there is no way I am going to pass this") and irrational thoughts ("I will hate myself if I fail"), is critical to reducing examination stress (Kumari & Jain, 2014).
Another study researched undergraduate nursing students and the factors that influenced their examination anxiety. Once again, the researchers determined that too much stress confuses, exhausts, and overwhelms students' test-taking ability. The sample comprised 340 undergraduate nursing students (90.3% female); 61% experienced average amounts of or no test anxiety, 25% mild test anxiety, and 2% severe test anxiety (Vaz et al., 2018). The study looked at four factors impacting test anxiety: "learning process" (study habits, preparedness, course content, sleep pattern, motivation), "perceptions related to examinations" (confidence level, expectations, experience, test situation, health aspects, and recall), "learning patterns" (how students deal with challenging subjects, time management, revision), and "over expectations related to learning outcomes" (expectations of parents and student) (Vaz et al., 2018). The research showed that all four factors had a positive correlation (at the 0.05 level) with examination stress. Perceptions related to examinations and learning patterns had moderate correlations (r = 0.655 and r = 0.368), while the remaining factors, including over expectations related to learning outcomes, had the weakest correlations (r = 0.017 and r = 0.132) (Vaz et al., 2015). Results from the study point to elements that contribute to stress: study habits, past experiences, health aspects, course content, test situation, motivation, self-concept, student expectations, and parental pressure may all impact examination anxiety.
Marriage and Family Therapy Licensing Exam
Currently, there are two clinical exams used for licensing marriage and family therapists: the "National Exam," overseen by the Association of Marital and Family Therapy Regulatory Boards (AMFTRB), and the California Clinical Exam, administered by California's Board of Behavioral Sciences
(BBS). All states use the National Exam except California (AMFTRB, 2024; BBS, 2024). Many states add a
supplemental part to cover their specic legal matters (AMFTRB, 2024; Caldwell et al., 2011). Both
exams use a multiple-choice format, with four options (i.e., A, B, C, D) and rely heavily on vignette-based
questions that require the exam candidate to apply professional knowledge. The national MFT exam has
180 questions, and California’s Clinical exam has 170 questions.
Method
Sampling
Participants were recruited through convenience sampling via emails to program directors in accredited
marriage and family therapy programs, via postings to social media, and via emails to email lists
accessible to the authors. A second round of emails and postings was designed to recruit additional
participants who had not passed the exam as our initial sample was overrepresentative of those who
had passed. Data collection took place over about two-and-a-half months at the end of 2021 and start of
2022.
Procedure
We utilized SurveyMonkey (www.surveymonkey.com) as our online survey platform, and our survey was
only available in English. The survey started with an informed consent document followed by
demographics and measures. The research was approved by the university Institutional Review Board as
an exempt study. It took respondents an average of 29 minutes to complete the survey.
Measures
Westside Test Anxiety Scale
The Westside Test Anxiety Scale (WTAS; Driscoll, 2007) is a short 10-item measure of test anxiety that has been demonstrated to be correlated with test performance and is a reliable indicator of impairment in test performance (Driscoll, 2007). The ten items are measured on a 5-point Likert-type scale (1 – not at all or never true to 5 – extremely or always true), and the overall score is the average of those scores (ranging from 1 to 5). The WTAS was designed to identify individuals who would benefit from an anxiety-reduction intervention to improve test performance.
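As a concrete illustration of the scoring just described (a sketch, not the authors' code; the item values are hypothetical), the overall WTAS score is simply the mean of the ten responses:

```python
def score_wtas(items):
    """Score the Westside Test Anxiety Scale: the mean of ten item
    responses, each rated 1 (not at all or never true) to
    5 (extremely or always true), giving a score from 1 to 5."""
    if len(items) != 10:
        raise ValueError("the WTAS has exactly 10 items")
    if any(not 1 <= x <= 5 for x in items):
        raise ValueError("responses must be on the 1-5 scale")
    return sum(items) / len(items)

# Hypothetical respondent reporting moderate anxiety on most items
print(score_wtas([3, 4, 2, 3, 3, 4, 2, 3, 3, 3]))  # 3.0
```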
Coping Strategies Inventory – Short Form
The Coping Strategies Inventory – Short Form (CSI; Tobin et al., 1989) is a 32-item measure of coping to manage stress. There are several ways to interpret the measure. The first is to look at eight primary subscales or primary factors (Problem Solving, Cognitive Restructuring, Emotional Expression, Social Support, Problem Avoidance, Wishful Thinking, Self-Criticism, and Social Withdrawal). These can be combined into four secondary factors (Problem Engagement, Emotion Engagement, Problem Disengagement, and Emotion Disengagement) and two tertiary factors (Engagement and Disengagement) (Tobin et al., 1989). We utilized the primary factors in our analyses.
Perceived Stress Scale
The Perceived Stress Scale (PSS; Cohen et al., 1983) is a short 10-item measure of respondents' perceptions of the amount of stress experienced in the past month. The items are measured using a 5-point Likert-type scale (0 – never to 4 – very often). Four of the items are reverse-scored, and the overall score is the total of the items, ranging from 0 to 40. In addition to perceived stress, we asked a few questions about history of trauma.
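A minimal scoring sketch for the PSS as described above (not the authors' code): items are summed after reverse-scoring four of them. The particular items reversed below follow the common PSS-10 keying (items 4, 5, 7, and 8), an assumption not stated in the text:

```python
def score_pss(items, reverse_idx=(3, 4, 6, 7)):
    """Score the 10-item Perceived Stress Scale: items rated
    0 (never) to 4 (very often); the four reverse-scored items
    (zero-based indices, assumed per the common PSS-10 keying) are
    flipped as 4 - x, and the total ranges from 0 to 40."""
    if len(items) != 10:
        raise ValueError("the PSS-10 has exactly 10 items")
    total = 0
    for i, x in enumerate(items):
        if not 0 <= x <= 4:
            raise ValueError("responses must be on the 0-4 scale")
        total += (4 - x) if i in reverse_idx else x
    return total

# Hypothetical respondent answering "sometimes" (2) to every item
print(score_pss([2] * 10))  # 20
```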
Everyday Discrimination Scale
The Everyday Discrimination Scale (EDS; Williams et al., 1997) is a short 10-item measure that assesses experiences of discrimination as well as the attributed reason for that discrimination (including a wide range of possible reasons). The first nine items are measured on a 6-point frequency scale (1 – never to 6 – almost every day), while the final item asks what the respondent believes is the main reason for these experiences, with response options including ancestry or national origin, gender, race, age, religion, and so forth. The overall score is the sum of the first nine items.
Additional Measures
Our primary measure of exam success was the question "Have you passed the LMFT licensure exam?" (yes or no). We also asked how many times the respondent had taken the exam, with those answering "one" having passed the exam on the first try. With this information, we were able to know whether someone passed on the first attempt, passed on a subsequent attempt, or had not yet passed the exam, though for many analyses we simply used passed/not passed as the criterion variable. Respondents were also asked whether the exam was the national exam, the California exam, or both.
We asked a series of demographic questions about age, ethnicity, gender, sexual orientation, marital status, income, and employment status. In addition, we asked about highest degree level, type of degree and subject area, and whether the program was accredited by COAMFTE (yes/no/unsure), as well as the location of the program (by state) and the delivery model (face-to-face, online, hybrid).
Respondents were also asked questions about challenges and barriers to successfully passing the exam. One question asked whether the respondent struggled with test anxiety, with knowledge of the content of the exam, and/or with knowing the correct test-taking strategy for the exam, while another question asked about barriers to taking the exam (such as logistics, inadequate disability supports, language barriers, etc.). For some analyses, we looked at the number of barriers reported. Finally, we asked some questions about the exam preparation materials used and the strategies and time put into studying for the exam.
Results
Of the 340 people who started the survey, 317 submitted it. On average it took about 29 minutes to
complete the survey. After accounting for missing data, we had 270 complete surveys. IBM SPSS
Statistics (Version 26) was used to analyze the data.
General Exam Outcomes for Respondents
Overall, 78% had passed the exam (passed 1st attempt, n = 197; passed on 2nd or subsequent attempt, n = 47; not passed, n = 69). Nearly 70% of respondents had taken the exam once while nearly 7% had taken the exam 4 or more times (the highest was 12 times). Regarding which exam was taken, 64.2% took the National exam, 31.9% took the CA exam, and 3.8% took both. Pass rates were similar for those taking the national exam (n = 198; 75.8% passed), the CA exam (n = 99; 80.8% passed), or both (n = 12; 91.7% passed) (X2(4) = 3.02, p = .56).
Demographics
The mean age was 43.46 (SD = 12.2). Those who passed on the first attempt were significantly younger (M = 41.86) than those who had not passed (M = 46.25) (overall F(2,309) = 4.68, p = .01; Tukey HSD post-hoc test p = .03). (The p value comparing passed on 1st vs. subsequent attempts was .07, and the mean age of that group was 46.13.)
The sample was 88% female, 11.7% male, and .3% gender queer. Gender identity was unrelated to pass rates (X2(4) = 1.53, p = .82). The reported sexual orientation in the sample was 86% straight, 5% bi, 3.5% lesbian, 2.5% queer, 2% something else, .6% gay, and .6% preferred not to say. Reported sexual orientation was also unrelated to pass rates (X2(12) = 11.63, p = .476).
The sample was relatively middle class, with only 15% reporting incomes less than $50k/year, and almost 22% reporting family income over $150k/yr. Nearly 75% were employed full-time, with fewer than 6% not working. Income was related to pass rates (X2(12) = 57.85, p < .001), with those with higher income more likely to have passed on the first attempt and less likely to have not passed. Similarly, employment status was significantly related to pass rates (X2(4) = 11.47, p = .02), with those working part-time more likely to have passed on the first attempt and those not working for pay more likely not to have passed yet.
Regarding marital status, the sample was 60.8% married, 19.3% single, 10.8% cohabiting, and 9.2% divorced. Marital status was significantly related to pass rates (X2(6) = 28.35, p < .001), with married respondents being more likely to have passed on the first attempt than those reporting other marital statuses.
The sample was fairly diverse regarding race: 59.5% White (n = 187), 13.9% Black (n = 44), 12% Latino/a (n = 38), 4.1% Asian American/Asian (n = 12), 2.5% other race (n = 8), and 7.9% multiracial (n = 24). Race was significantly related to passing the exam (X2(5) = 38.34, p < .001). The pass rates for each group were as follows: White, 87%; Black, 48%; Latino/a, 66%; Asian or Asian American, 75%; other race, 63%; multiracial, 87.5%.
Regarding education, 86.7% had a master's degree, with 10.8% having a PhD, and 2.5% a professional doctorate. Regarding COAMFTE accreditation, 74.4% said yes, 17% said no, and 8.5% were unsure; 88.6% had degrees in C/MFT. Those with PhDs and professional doctorates were more likely than those with master's degrees to have passed on the first attempt (X2(4) = 9.83, p = .04), but graduating from a COAMFTE-accredited degree program was unrelated (74.1% graduated from a COAMFTE program; X2(4) = 8.37, p = .08). Program delivery model was strongly related to pass rates, with students from online programs (37% passed, group n = 31) being less likely to have passed the exam, while those from face-to-face programs (82% passed, group n = 244) and hybrid or low-residency programs (82% passed, group n = 49) were more likely to have passed (X2(2) = 28.95, p < .001). Because the face-to-face and hybrid programs had essentially the same pass rates, later comparisons used online vs. not-online. Programs were located in 38 states or territories and 4 other countries (but not Canada), and 36% of respondents attended CA graduate programs (next highest were KY and MN at 6% each).
Preliminary Analyses
We conducted a number of preliminary analyses exploring variables of interest. Self-reported hours of studying was significantly related to pass rates: those who studied more were less likely to pass (likely because those who anticipated struggling studied more but still struggled). There was also a small but significant positive correlation (r = .123, p = .032) between hours studying and the number of times taking the test.
We asked if the respondents' programs had provided exam preparation training as part of the program (nearly 70% of programs did not provide this), but this was unrelated to pass rates (X2(4) = 7.13, p = .13).
Nearly 93% of respondents reported using formal test preparation materials (and 20 of the 22 people who did not use formal test prep materials passed the exam on the first attempt, skewing the results such that using test prep materials was related to not passing the exam). Those who reported using test prep materials reported significantly more test anxiety than those who did not, likely indicating that those who are more anxious are more likely to use those materials (t(309) = 2.50, p = .013). A more useful measure looked at how useful respondents found the test prep materials, and unsurprisingly those who had not passed the exam were less likely to find the materials helpful (scale of 1 – a great deal to 5 – not at all) (one-way ANOVA F(2, 287) = 18.43, p < .001; M(passed 1st) = 1.56, M(passed 2+) = 1.48, M(not passed) = 2.30). Kendall's tau-b correlation was .228 (p < .01).
We explored barriers experienced that may have interfered with taking the exam. We looked at these data a couple of different ways. First, we created an index counting how many of the barriers were experienced (range 0 to 7, M = 1.25, SD = 1.34). This index was significantly related to pass rates, with those who had not passed reporting significantly more barriers than those who had passed on the first or subsequent attempts (M(not passed) = 2.03; M(passed 1st) = 1.01 [p < .001]; M(passed 2+) = 1.13 [p = .001]). We also used logistic regression (with the binary variable passed or not passed) to explore which of the barriers was most predictive of not passing the exam and found that only two barriers significantly predicted not passing while controlling for all the others: inadequate disability accommodations (OR = 4.71, p = .008) and difficulty with logistics (OR = 5.19, p < .001). One note is that African American respondents were more likely to report having difficulty with logistics (X2(1) = 10.05, p = .002). There was also a significant effect when looking at passing or not passing the exam (X2(1) = 21.10, p < .001): among those who passed the exam, 85.4% of non-Black participants reported no logistical difficulties compared with only 59.0% of Black respondents, meaning that not experiencing logistical difficulties was more beneficial for non-Black respondents.
We also explored the role of trauma, focusing on a question that asked how much negative effect on current functioning the participant reported (from 1 = a great deal to 5 = none). Those who had passed the exam reported significantly lower effects from trauma than those who had not passed (M = 3.07 vs. 3.78; t(285) = 3.96, p < .001). This variable was also related to other study variables: WTAS r = -.224**, EDS r = -.183**, PSS r = -.377**. There were also significant correlations with almost all of the coping measures.
Primary Results
Logistic regression was chosen as the best analytic strategy to explore what best predicts passing the exam. While we also had a three-group classification (passed on 1st attempt, passed on 2nd or subsequent attempt, not passed), the relationship of nearly all of the predictor variables with this three-group classification was linear. While we could have used discriminant function analysis, it is less robust, especially to unequal group sizes like those here. Logistic regression also provides easy-to-understand odds ratios, with each predictor showing how strong the effect is while controlling for all of the other variables, and we felt it provided the best and most parsimonious analysis of the data.
To that end, the first step was to examine bivariate relationships, looking for significant relationships with the binary pass/not-pass variable. The everyday discrimination variable was eliminated due to very low correlations with study variables. We decided to include all of the coping strategies in the first regression analysis even though several showed low correlations with the primary outcome variable, for theoretical reasons (variables that have theoretical similarity are often retained in hierarchical regression models at the first step).
The second step was to create a stepwise logistic regression. The first block included demographic variables (age, being Black vs. non-Black, family income, being married vs. non-married). Because the primary differences in preliminary analyses were between Black vs. non-Black and married vs. non-married respondents, these simplified dummy variables were used in the regression (in the second block we similarly used online vs. other delivery models). The second block included variables related to the testing (specific barriers related to logistics and disability accommodations, test anxiety, perceived stress, online delivery model, trauma effects, and struggles with test content and test strategy). The third block included coping strategies.
The final step in the regression process was to simplify the model by eliminating variables with p values greater than .25 (trauma effects, social contact, problem avoidance, and wishful thinking). Each step of the regression showed significant improvement in the model. See Table 1 for final model results.
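The blockwise procedure described above can be sketched as follows. This is an illustrative reimplementation on synthetic data with a simplified two-block variable set (not the study data or its exact model), fitting each nested model by Newton-Raphson and testing each block's improvement with a likelihood-ratio chi-square:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit logistic regression by Newton-Raphson (IRLS).

    X must include an intercept column; returns (beta, log-likelihood).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                      # observation weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return beta, ll

rng = np.random.default_rng(0)
n = 4000
# Hypothetical stand-ins for the study's predictor blocks (not the study data)
age = rng.normal(size=n)        # block 1: demographic variable
anxiety = rng.normal(size=n)    # block 2: testing-related variable
true_logit = -0.5 + 0.4 * age + 0.8 * anxiety
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
ones = np.ones(n)

# Block 1 model (demographics), then block 2 added on top
beta1, ll1 = fit_logit(np.column_stack([ones, age]), y)
beta2, ll2 = fit_logit(np.column_stack([ones, age, anxiety]), y)
lr_chi2 = 2.0 * (ll2 - ll1)     # likelihood-ratio improvement test, df = 1
odds_ratios = np.exp(beta2[1:]) # each exp(coefficient) is an odds ratio
```

In practice this would be done with a statistical package; the sketch shows why "each step showed significant improvement" is a likelihood-ratio comparison of nested models.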
Table 1
Hierarchical Logistic Regression Examining Passing or not Passing Licensure Exam
Predictor B SE B Odds Ratio
Age 0.07** .02 1.07
Black 1.2* .58 3.30
Family Income -0.45** .16 0.64
Not Married 0.56 .45 1.75
Westside Test Anxiety 0.79** .29 2.19
Perceived Stress Scale 0.10** .04 1.11
Barrier: Logistics 1.19 .64 3.29
Barrier: Inadequate Disability Accommodations 1.59* .74 4.90
Online Delivery Model (vs F2F or Hybrid) 2.01** .71 7.45
Struggled with test content 0.77 .48 2.16
Struggled with test taking strategy 0.89 .50 2.44
Expressing Emotions -0.55* .25 0.58
Self-Criticism 0.50* .22 1.66
Social Withdrawal -0.73* .29 0.48
Constant -6.65 2.15
Notes: Overall model X2(15) = 131.96, p < .001. Outcome variable coded 1 = Passed Exam, 2 = Not Passed.
* p < .05, ** p < .01
The predictor with the largest odds ratio is coming from a program with an online delivery model (OR =
7.45), followed by inadequate disability accommodations (OR = 4.90) and being Black (OR = 3.30).
Although struggling with test content and strategy both had relatively large odds ratios, they were not significant predictors of passing when controlling for these other variables.
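Because the odds ratios in Table 1 are exp(B), the reported values can be checked directly from the coefficients (pairs transcribed from Table 1 above; small deviations reflect rounding of B to two decimals):

```python
import math

# (B, reported odds ratio) pairs transcribed from Table 1
table1 = [
    (0.07, 1.07),   # Age
    (1.2, 3.30),    # Black
    (-0.45, 0.64),  # Family income
    (0.56, 1.75),   # Not married
    (0.79, 2.19),   # Westside Test Anxiety
    (0.10, 1.11),   # Perceived Stress Scale
    (1.19, 3.29),   # Barrier: logistics
    (1.59, 4.90),   # Barrier: disability accommodations
    (2.01, 7.45),   # Online delivery model
    (0.77, 2.16),   # Struggled with test content
    (0.89, 2.44),   # Struggled with test-taking strategy
    (-0.55, 0.58),  # Expressing emotions
    (0.50, 1.66),   # Self-criticism
    (-0.73, 0.48),  # Social withdrawal
]
# exp(B) reproduces each reported OR up to rounding of B
deviations = [abs(math.exp(b) - orr) for b, orr in table1]
```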
One result of note is that one strong predictor of not passing the exam is being Black (OR = 3.30). We did several additional analyses to explore this result. There were no significant differences by race on any predictors of pass rates (e.g., test anxiety, number of barriers, perceived stress, etc.) except for family income and program delivery model. Black respondents had lower income (t(312) = 3.80, p < .001) and were more likely to have attended online programs (X2(2) = 16.96, p < .001), and each of these is a predictor of lower pass rates. Further analysis of delivery model also shows a racial effect. More Black students went to online programs (24% of Black students vs. 6% of non-Black; overall X2(2) = 16.96, p < .001), and Black students who went to online programs were less likely to have passed the exam than non-Black students at online programs (17.4% of non-Black students in online programs had not passed compared to 39.1% of Black students). In fact, if delivery model is removed from the regression analysis, being Black becomes the variable with the largest odds ratio predicting not passing the exam.
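The race-by-delivery-model comparisons above are chi-square tests of independence. A minimal sketch with hypothetical cell counts (not the study's actual counts, which would come from the survey data):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical pass/not-pass counts by delivery model (2 x 2 table)
observed = [[40, 10],   # face-to-face: passed, not passed
            [30, 20]]   # online: passed, not passed
stat = chi_square(observed)  # df = (rows - 1) * (cols - 1) = 1
```

The statistic is then compared against a chi-square distribution with the stated degrees of freedom.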
Most barriers did not show significant differences comparing Black to non-Black respondents. One exception was Difficulty with Logistics of the exam (X2(1) = 10.05, p = .002): Black respondents reported this difficulty at 22% vs. 7.5% for non-Black respondents. However, follow-up analysis showed that Difficulty with Logistics was only significantly related to not passing the exam for non-Black respondents, so even though Black respondents reported this barrier more often, it was less likely to affect their passing the exam. The pass rate for Black respondents was virtually identical on the national exam (51% passed) vs. the CA exam (50% passed). For non-Black participants, it was 81% passed for the national and 84% for the CA exam. So, the exam pass rate disparities for Black respondents cannot be explained by differences in other variables (like test anxiety, coping strategies, stress, barriers, or difficulties, each controlled for in the analysis and not demonstrating any difference between Black and non-Black respondents). One key consideration, though, is program delivery.
Discussion
The data from this study closely parallel the findings from ASWB's 2022 exam analysis related to gender, age, and ethnoracial identity. Both studies found no significant difference in pass rate based on gender. Similarly, our study did not find differences based on sexual orientation. However, both studies found a significant difference based on age, as did Lee (1998).
Table 2.
Comparison of this Study’s Age Data and ASWB 2022 Data
Pass rate by Age Current Study of MFTs ASWB 2022 (Clinical Exam)
18-29 88.5% 91.0%
30-39 81.6% 86.1%
40-49 77.8% 75.5%
50+ 70.7% 64.8%
In terms of ethnoracial identity, both our study and the ASWB data showed significant disparities, which is similar to research results on licensing exams in teaching, nursing, law, pharmacy, and social work (ASWB, 2022). In both this study and the ASWB study, Black-identifying exam candidates had significantly lower pass rates compared with white candidates, followed by “other” race, Native American/Indigenous, and Latino/a/x. Lee (1998) did not have enough minority respondents to meaningfully analyze group differences.
In contrast to other standardized exams such as the Scholastic Aptitude Test (SAT), in which Asian-American candidates often outperform all other candidates (Reglin & Adams, 1990), in both our study and the ASWB study white candidates had an 11-12% higher pass rate than Asian-Americans, indicating that a different testing dynamic may be at play with licensing exams compared with traditional high-stakes college exams.
Table 3.
Comparison of this Study’s Ethnoracial Identity Data and ASWB 2022 Clinical Exam Data
Pass rate by Ethnoracial identity Current Study of MFTs ASWB 2022 (Clinical Exam)
Black 48% 57%
Other race 63% Not reported
Native American No responses 74%
Latino/a/x 66% 77%
Asian/Asian American 75% 80%
White 87% 91%
Multiracial 88% 87%
Interpreting these racial disparities can be challenging. The ASWB (2022) concluded that systemic issues are likely related to these differences in pass rates. They noted that historically marginalized groups often experience higher rates of socioeconomic hardship, higher poverty rates, inequities in educational resources, as well as lower rates of health coverage, wealth, and home ownership. These factors may affect exam candidates’ access to preparation resources and time to study. Another possible contributing factor to pass rate disparities the ASWB identified was stereotype threat, defined as an individual’s fear that their performance may reinforce preexisting negative stereotypes.
McWhorter (2022), a Black linguist at Columbia University, offered another explanation for ethnoracial disparities in exam pass rates. McWhorter believes that these disparities have an added dimension of social class, a factor identified as contributing to pass rates in this study. McWhorter cites a classic study in linguistics in which the language socialization of working-class Black families was compared to that of middle-class white families. In working-class Black families, the conversations between parents and their children focused on practical problems: addressing problems in the real world with less reliance on book knowledge. In contrast, middle-class suburban parents engaged their children in conversations that involved “disembodied information-seeking” (para. 9), discussing facts for facts’ sake with no direct real-world value. These linguistic differences have been observed across ethnoracial groups, meaning that white working-class families have patterns more similar to Black working-class families.
There have been several race-based critiques of licensure exams that are relevant here. Caldwell and Rousmaniere (2022) state:
After more than 50 years of use, there remains no evidence that clinical exams in mental health care improve the quality or safety of that care. Absent such evidence, our reliance on these exams is built on trust, from professionals, policymakers, and the public...With ample evidence of racial disparity in exam performance, credible and longstanding criticisms that have not been adequately addressed, and potential conflicts of interest among boards serving as both exam buyers and sellers, that trust is not deserved. (Caldwell & Rousmaniere, 2022, p. 3)
Similarly, Kendi has said:
[T]oday, many Americans still imagine an achievement gap rather than an opportunity gap. We still think there’s something wrong with the kids rather than recognizing the[re is] something wrong with the tests. Standardized tests have become the most effective racist weapon ever devised to objectively degrade Black and Brown minds and legally exclude their bodies from prestigious schools. (2020, para. 12)
The National Education Association subtitled their report on racism in standardized testing “From grade school to college, students of color have suffered from the effects of biased testing” (Rosales & Walker, 2021, p. 1). In discussing bias in teacher preparation programs, Petchauer (2014) says: “Because African American test takers are roughly half as likely to pass basic skills exams on their first attempt compared to White test takers, this portion of the licensure exam is a key gatekeeper to the field and directly shapes the racial diversity of the profession” (p. 1).
Caldwell and Rousmaniere (2022) summarize the issue of race in licensure exams well:
Clinical exams have been repeatedly shown to produce disparate outcomes on the basis of race and ethnicity. Rather than being passive recipients of existing disparities, evidence suggests that clinical exams add a unique layer of structural racism to the process of mental health licensure. Clinical exams also limit the mental health workforce by constraining licensure–a function that would make sense if there was evidence of their benefit, but without such evidence, only serves to reduce the supply and diversity of mental health care professionals available to serve the public. (p. 4)
There is a clear ongoing critique in the literature around race and racial bias in these exams, and our data contribute to that critique.
Coping Strategies
The CSI-SF secondary factors include Emotion-Focused Engagement and Emotion-Focused Disengagement (as well as Problem-Focused versions of both; Tobin et al., 1989). Three of the four Emotion-Focused scales were significant predictors in the final regression analysis, with two of those in expected directions (self-criticism positively predicts not passing the exam, while emotional expression is associated with passing the exam). What was interesting is that social withdrawal was associated with passing the exam (the fourth Emotion-Focused strategy is Social Contact, which was not a significant predictor of passing the exam). It may be that in this instance withdrawing from friends and others is an adaptive strategy, perhaps reducing negative influences. Interventions focused on coping strategies may then focus on reducing self-criticism.
Recommendations for Short-Term Adjustments to Licensing Exams
Licensing of family therapists is a complex ecosystem with multiple interlocking bureaucratic systems, including individual state licensing boards, state legislatures, exam creators and administrators, hundreds of universities, several professional organizations, and professional accrediting bodies. None of these systems makes changes quickly alone. Trying to make sweeping changes across several of these systems is a daunting task that will require higher levels of cooperation than we have typically seen in the past. Nonetheless, strategic modifications to existing exams and exam practices should be the first step in reducing disparities in exam outcomes. These modifications include:
Changing the Phrasing of Questions
The use of over one hundred hypothetical scenarios to test a candidate’s clinical skill appears to be contributing to ethnoracial disparities in pass rates. The ability to analyze disembodied scenarios unfairly favors those who have more experience with such linguistic exercises, which has been largely attributed to the social class of one’s family of origin. Vignettes never capture the richness and complexity of real-life clinical situations, and exam candidates tend to fill in the gaps of the vignette with their own clinical and personal experience, often introducing assumptions and issues that were not in the written vignette. To reduce the bias, test-writers should write exam questions that clearly and objectively measure the candidate’s knowledge of well-established principles without the use of hypothetical scenarios that easily introduce bias and cultural assumptions. It is interesting to note that the recent revision of the National Clinical Mental Health Counselor Exam takes the opposite approach: It offers much longer clinical vignettes than other mental health exams do, utilizing 11 case examples (NBCC, 2024). This may serve the same ends: It may reduce instances of examinees adding their own assumptions and biases into exam questions, though there is not yet evidence of this.
For example, California’s current clinical exam outline includes the following sample question (BBS,
2023, p. 27):
A therapist is currently involved in a contentious divorce and perceives his spouse as aggressive and unreasonable. The therapist begins meeting weekly with a colleague for consultation to prevent his feelings from impacting therapy with his clients. Three weeks later, a client who has been in ongoing therapy for symptoms of depression begins describing relationship difficulties that are similar to what the therapist is experiencing. Which of the following actions should the therapist take to manage the ethical issues involved in this case?
a. Provide continued treatment to the client and discuss the case with the colleague to monitor own
feelings.
b. Utilize limited self-disclosure and reassure the client of the therapist’s understanding to enhance
therapeutic empathy.
c. Explain the potential for bias on the part of the therapist and refer the client to an alternate therapist
to provide ongoing treatment.
d. Contain the therapist’s own feelings and focus discussions on the client’s depression to maintain
consistency with established treatment goals.
This question is attempting to assess the candidate’s knowledge related to ethical practice, specifically Standard 3.3 of the AAMFT (2015) Code of Ethics:
3.3 Seek Assistance. Marriage and family therapists seek appropriate professional assistance for issues that may impair work performance or clinical judgment.
None of the four answers clearly and unambiguously aligns with the ethical standard. Instead, the candidate is expected to apply the standard to the vignette and the four answer options. However, many assumptions and leaps of interpretation need to be made, which is the most likely source of the current ethnoracial pass rate disparities. Consider the process of answering this question:
1. The first option involves continuing to do what he is already doing, which is discussing the case with a colleague. The problem with this response is twofold: (a) he is not doing anything new or different now that the client reveals he is in the same situation as the therapist, and (b) arguably, talking with a colleague is not “seeking appropriate professional assistance.” A colleague is likely to be a friend, and thus biased, and may or may not have sufficient clinical experience to be helpful. Professional assistance would more commonly be defined as a supervisor, personal therapist, or consultant.
2. The second answer involves providing limited self-disclosure to promote empathy, which some theories would support. Some therapists would consider this an appropriate clinical response, but not all would agree. However, the question asks about managing the ethical issues in the case, and this response relates more to clinical issues.
3. In the third option, the therapist goes to great lengths to ensure crystal-clear boundaries and to avoid harming the client due to his personal issues. However, from the vignette, it is not clear whether he is impaired to the point of warranting a referral, which ultimately costs clients money and lost time and typically creates emotional distress.
4. The last option describes how the therapist manages the situation by not bringing up their personal
issues and instead focusing on the presenting problem. Depending on the situation, this may be an
appropriate direction.
When an exam candidate who is familiar with the standard reads these four answers, there is no quickly identifiable correct answer that directly aligns with the standard. Instead, the candidate must sift through several options that each have some merit. To identify the correct answer, the candidate must notice in the stem that the question asks about what the therapist “should” do to manage ethical issues, which makes B and D less desirable because they primarily address clinical issues. This is the first place where candidates who have more experience with “disembodied information” can catch the subtle linguistic distinction in the question.
Then the candidate is left deciding between A and C, with the former describing continuing to do what the therapist was doing before, which is talking with a colleague but not a typical form of “professional assistance,” and the latter moving in a very cautious and conservative direction, which is often the correct answer on licensing exams. The candidate is left to decide whether to go with an option that is less formal than what is described in the standard or a more conservative option based on the vignette. This is another point where experience and fluency with “disembodied information” benefit the candidate in weighing the pros and cons. This is also an example of how knowing the ethical standard does not ensure answering the question correctly.
In contrast, shifting to a style of question that eliminates ambiguity and directly measures knowledge of clearly identifiable and agreed-upon exam content would make it possible for all exam candidates to identify the correct answer. Using the above question as an example, here is a less ambiguous way to assess knowledge of the same ethical standard:
If a therapist is experiencing countertransference with a client, the ethical standards include the
following guidance:
1. The therapist is instructed to seek professional assistance to help them address issues that may
impair their judgment or performance.
2. The therapist is encouraged to use transparency and appropriate self-disclosure to communicate
their empathy with the client’s experience.
3. The therapist is required to immediately refer out all clients with whom they experience
countertransference to ensure clear boundaries and avoid any possible harm.
4. The therapist is directed to redirect the session to focus on topics that are more comfortable for the
therapist.
The correct answer (option 1) is much easier to identify when the question is phrased with a focus on factual knowledge and without applying the ethical standard to a hypothetical case.
Increasing Content Clarity and Reducing the Scope of Content
As noted in Caldwell (2023), the MFT licensing exams fail to meet industry standards for construct clarity, which may be due to the field as a whole not having a clearly defined scope of required knowledge. Arguably, the closest approximation to a description of required knowledge for MFT licensure is the MFT Core Competencies. Published in 2004 by AAMFT, these competencies are a set of 128 statements about general areas of knowledge and skill needed to be an independently practicing MFT. However, to ensure their applicability over time and across contexts, these competencies do not name specifics, such as essential theories or areas of research. Similarly, the condensed version of these competencies proposed by Northey and Gehart (2019) does not list specific theories or areas of knowledge, to increase its applicability. Likewise, the MFT accreditation standards (COAMFTE, 2023, Version 12.5) summarize the required curriculum in nine broad areas of knowledge that do not define specific theories, practices, research, or areas of knowledge. Thus, the field does not have any readily agreed-upon specific areas of knowledge for licensing boards to build their exams around.
To add further confusion to the exam content, the questions are initially drafted by subject matter experts, which the BBS simply defines as licensed MFTs in good standing and currently practicing, a bar lower than typical academic standards for an expert. When developing items for the national exam, the AMFTRB has the questions written by experts, then reviewed and revised by a second committee appointed by the board, for which the standards are again not clearly defined (AMFTRB, 2024; BBS, 2024). In sum, the level of competence of the persons writing exam questions is difficult to determine.
The BBS (2024) identifies 356 areas of content knowledge described over 25 pages of their exam handbook and specifically lists 19 different theories, several of which are listed as “general family systems theories,” “general cognitive behavioral theories,” “general postmodern theories,” “general psychodynamic theories,” and “general humanistic-existential theories” (pp. 23-24). Similarly, the AMFTRB (2024) lists 106 clinical tasks, such as “practice therapy in a manner consistent with the philosophical perspectives found in systemic theory” (p. 17), as well as 70 general knowledge areas, such as “family studies and science,” “models of marital, couple, and family therapy,” and “individually based theory and therapy models” (p. 22). Even with these long lists of possible content, the exact theories, research, and practice standards that need to be studied remain unclear.
In addition to lacking clarity, the scope of the MFT exams is arguably infinite, with the entirety of the knowledge foundation of multiple disciplines listed as “what to study.” The age disparity in pass rates that begins with candidates in their 30s in both the present study and the ASWB (2022) study is best explained by older students typically having more family and work commitments and less time to study. Having more clearly defined and narrowly focused content to study would likely reduce the current age disparities. In the course evaluations of an exam preparation program, candidates reported studying an average of 100 hours in order to pass their exam on the first attempt, which equates to adding a part-time job to their regular workload: 10 hours per week for 10 weeks or 5 hours per week for 20 weeks (XXXX, personal communication).
The content of the MFT licensing exams could be clarified and narrowed to the core knowledge necessary to protect the welfare of the public and render competent care by focusing on:
1. Diagnosis: Know how to assess and diagnose all mental health disorders in the DSM-5-TR that are within our scope of practice; know when to refer out for those outside of our scope of practice.
2. Law and ethics: Know how to use the AAMFT code of ethics (CA and national exam) and CAMFT code of ethics (CA) to protect clients and render professional care.
3. Core MFT theories: Know 8 of the foundational theories in the field, upon which newer approaches are built. Specifying the foundational theories rather than newer theories significantly reduces the costs of exam preparation. We recommend the following 8 theories due to their enduring influence in the field and centrality in most major theory textbooks in the field (Gehart, 2024; Gladding, 2018; Nichols & Davis, 2016):
Strategic family therapy
Structural family therapy
Bowen intergenerational family therapy
Satir family therapy
Emotionally focused couple and family therapy
Cognitive-behavioral family therapy
Solution-focused therapy
Narrative therapy
Reducing the Total Number of Questions
Anxiety was a significant predictor of exam outcome in this study, which is consistent with other studies on test anxiety (Eysenck & Calvo, 1992; Sarason, 1988; Vaz et al., 2018; Zatz & Chassin, 1983). One of the major sources of anxiety on licensing exams is time pressure. On the National MFT exam, candidates have 240 minutes to answer 180 questions, approximately 1.33 minutes per question, while on the California exam they answer 170 questions in 4 hours, approximately 1.4 minutes per question. Most questions require reading vignettes that are 200 words or more. Test-takers who have English as a second language, read slowly, experience stereotype threat, or become anxious for other reasons are likely to score lower for reasons other than their mastery of the content (ASWB, 2022).
Thus, another strategy for reducing racial and age disparities is to reduce the number of questions by 50% (to 90 on the national exam and 85 on the California exam) over the same 4-hour exam period. Reducing the time pressure will create more equitable testing conditions for candidates from diverse and working-class backgrounds, who speak English as a second language, who experience more test anxiety, and/or who may be older.
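The per-question pacing under the current and proposed item counts works out as follows (simple arithmetic from the figures above):

```python
# Minutes per question under current and proposed (50% reduced) item counts
exams = {
    "National (current)": (240, 180),
    "California (current)": (240, 170),
    "National (proposed)": (240, 90),
    "California (proposed)": (240, 85),
}
pace = {name: minutes / items for name, (minutes, items) in exams.items()}
```

Halving the item count roughly doubles the time available per question, from about 1.3-1.4 minutes to about 2.7-2.8.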
Accommodations
Inadequate disability accommodations were identified as one of the strongest predictors of not passing the exam. These findings suggest that the exam testing sites may not be providing adequate accommodations for those with disabilities, which is concerning. The AMFTRB and BBS should investigate further to determine how best to meet the needs of test-takers with disabilities.
Research Limitations
There are several important limitations in this study. First, and most importantly, this is self-report data
from a convenience sample. There is no way to determine if there is signicant response bias (perhaps
participants in one racial or ethnic group who had not passed were more likely to respond that those
from that group who had passed, skewing the results). In addition, this is correlational data and we
cannot establish causaility—we can only note that the variables are related in this data set. However,
given how our data compares to other similar data, we feel that this is an important start in exploring
these issues on the LMFT exams.
Conclusion
Similar to other mental health professions, the field of marriage and family therapy must critically examine its existing approach to licensing exams. Data from multiple studies indicate clear patterns of racial and age disparities that cannot be ignored. Although a long-term solution may move away from multiple-choice exams altogether, all existing evidence demands immediate action. One clear implication is that we need data on the current exam. We believe that the AMFTRB and BBS must produce and publish exam performance data disaggregated by demographic factors, just as ASWB did. The AMFTRB and BBS can produce much higher quality data because they would not be reliant on convenience sampling, and these data are vital to understanding the factors that influence success on the exam. In addition, the current exams can be rapidly improved by (a) changing how questions are phrased, (b) clarifying and reducing the scope of the content, (c) reducing the number of items during the 4-hour period, and (d) ensuring adequate accommodations for disabilities.
Declarations
Author Contribution
KL wrote the main draft, including tables and figures, with BH, BB, and SR all contributing to the literature review and DG contributing to the discussion. Data collection was conducted by KL, BH, BB, and SR, with DG contributing research questions. All authors reviewed the manuscript.
Acknowledgement
We would like to thank Dr. Benjamin Caldwell for comments on a draft of this manuscript.
Data Availability
The dataset cannot currently be shared in order to protect the privacy of respondents.
References
1. Allen, S. C., & O’Dell, K. J. (2007). Exploring factors associated with passing the basic social work
license examination.
Arete, 30
(2), 21-34.
2. American Association for Marriage and Family Therapy. (2004).
Marriage and family therapy core
competencies
. Alexandria, VA: Author.
3. American Association of Marriage and Family Therapy. (2015).
Code of ethics.
Author.
4. Association of Marriage and Family Therapy Regulatory Boards. (2024).
Handbook for candidates of
the AMFTRB marital and family therapy national examination.
Professional Testing Corporation.
5. Association of Marriage and Family Therapy Regulatory Boards (2024, June 20). Test construction.
https://amftrb.org/exam-info/
. Association of Social Work Boards (2022, August).
2022 ASWB exam pass rate
analysis.
www.aswb.org/wp-content/uploads/2022/07/2022-ASWB-Exam-Pass-Rate-Analysis.pdf
7. Association of State and Provincial Psychology Boards. (2024).
EPPP candidate handbook:
Examination for professional practice in psychology.
www.asppb.net/resource/resmgr/eppp_/eppp_candhnbk_jan24.pdf
. Board of Behavioral Sciences (2024).
Licensed marriage and family therapy written clinical exam
outline.
Pearson Vue.
9. Board of Behavioral Sciences, (2024, June 20). Exam development.
https://bbs.ca.gov/exams/news.html
10. Borenzweig, H. (1977). Who passes the California licensing examinations?
Social Work, 22
(3), 173-
177.
11. Caldwell, B. E. (2022, March 28).
The controversy over racial bias in mental health clinical exams.
https://www.simplepractice.com/blog/racial-bias-mental-health-clinical-exams/
12. Caldwell, B. E. (2023). Mental health clinical exams’ evident adherence to industry standards for
testing.
Journal of Mental Health and Clinical Psychology, 7
(3), 9-18.https://doi.org/10.29245/2578-
2959/2023/3.1283
13. Caldwell, B. E., Kunker, S.A., Brown, S. W. & Saiki, D. Y. (2011). COAMFTE accreditation and California
MFT licensing exam success.
Journal of Marital and Family Therapy
,
37
, 4, 468-478.
14. Caldwell, B. E., & Rousmaniere, T. (2022, October 31).
Clinical licensing exams in mental health
care.
https://www.psychotherapynotes.com/wp-content/uploads/2022/10/Clinical-Licensing-
Exams-in-Mental-Health-Care-October-2022.pdf
15. Carr, A. M. (2016).
An exploratory study of test anxiety as it relates to the national clinical mental
health counseling examination
[Doctoral dissertation, University of South Florida]. ProQuest
Dissertations Publishing.
1. Chaparro, E. (2020).
Predictors for passing the psychology license examination
[Doctoral
dissertation, Walden University]. ProQuest Dissertations Publishing.
Page 23/25
17. Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A global measure of perceived stress.
Journal of
Health and Social Behavior, 24
, 386-396.
1. Commission on Accreditation of Marriage and Family Therapy Education (2023).
Accreditation
Standards. Version 12.5.
COAMFTE.
19. Driscoll, R. (2007).
Westside test anxiety scale validation
. Distributed by ERIC Clearinghouse.
Retrieved July 28, 2022, from https://eric.ed.gov/?id=ED495968.
20. Elliot, A. J., & McGregor, H. A. (1999). Test anxiety and the hierarchical model of approach and
avoidance achievement motivation.
Journal of Personality and Social Psychology, 76
(4), 628–644.
https://doi.org/10.1037/0022-3514.76.4.628
21. El-Zahhar, N. E., & Hocevar, D. (1991). Cultural and sexual differences in test anxiety, trait anxiety and
arousability: Egypt, Brazil, and the United States. Journal of Cross-Cultural Psychology, 22(2), 238-
249. https://doi.org/10.1177/0022022191222005
22. Eysenck, M. W. & Calvo, M. G. (1992) Anxiety and performance: The processing eciency theory.
Cognition and Emotion, 6(6), 409-434, DOI: 10.1080/02699939208409696
23. Gehart, D. (2024).
Mastering competencies in family therapy: A practical approach to theory and
clinical case documentation
(4th ed.)
.
Cengage: Brooks/Cole.
24. Gladding, S. (2018).
Family therapy: History, theory, and practice
(7th ed.)
.
Pearson
25. Kendi, I. X. (2020, October 21).
Testimony in support of the working group recommendation to
#suspendthe test.
https://www.bosedequity.org/blog/read-ibram-x-kendis-testimony-in-support-of-
the-working-group-recommendation-to-suspendthetest
26. Kumari, A., & Jain, J. (2014). Examination stress and anxiety: A study of college students. Global Journal of Multidisciplinary Studies, 12/31/2014.
27. Macura, Z., & Ameen, E. J. (2021). Factors associated with passing the EPPP on first attempt: Findings from a mixed methods survey of recent test takers. Training and Education in Professional Psychology, 15(1), 23-32. http://dx.doi.org/10.1037/tep0000316
28. McWhorter, J. (2022, August 27). Lower Black and Latino pass rates don't make a test racist. The New York Times. https://www.nytimes.com/2022/08/27/opinion/racism-test.html
29. NBCC. (2024, April). National Clinical Mental Health Counseling Examination. https://www.nbcc.org/exams/ncmhce
30. Nettles, M. T., Scatton, L. H., Steinberg, J. H., & Tyler, L. L. (2011). Performance and passing rate differences of African American and white prospective teachers on Praxis examinations: A joint project of the National Education Association (NEA) and Educational Testing Service (ETS). ETS Research Report Series, 2011(1), i-82. https://doi.org/10.1002/j.2333-8504.2011.tb02244.x
31. Nichols, M., & Davis, S. (2016). Family therapy: Concepts and methods (11th ed.). Pearson.
32. Northey, W., & Gehart, D. (2019). The condensed MFT core competencies: A streamlined approach for measuring student and supervisee learning using the MFT core competencies. Journal of Marital and Family Therapy, 46, 42-61. https://doi.org/10.1111/jmft.12386
33. Peleg-Popko, O. (2004). Differentiation and test anxiety in adolescents. Journal of Adolescence, 27(6), 645-662.
34. Petchauer, E. (2014). "Slaying ghosts in the room": Identity contingencies, teacher licensure testing events, and African American preservice teachers. Teachers College Record, 116, 1-40.
35. Reglin, G. L., & Adams, D. R. (1990). Why Asian-American high school students have higher grade point averages and SAT scores than other high school students. The High School Journal, 73(3), 143-149.
36. Rosales, J., & Walker, T. (2021, March 20). The racist beginnings of standardized testing. NEA Today. https://www.nea.org/nea-today/all-news-articles/racist-beginnings-standardized-testing
37. Sansgiry, S. S., Bhosle, M., & Dutta, A. P. (2005). Predictors of test anxiety in Doctor of Pharmacy students: An empirical study. Pharmacy Education, 5(2), 121-129.
38. Sarason, I. G. (1988). Anxiety, self-preoccupation and attention. Anxiety Research, 1(1), 3-7. https://doi.org/10.1080/10615808808248215
39. Sharpless, B. A. (2019). Are demographic variables associated with performance on the Examination for Professional Practice in Psychology (EPPP)? The Journal of Psychology, 153(2), 161-172. https://doi.org/10.1080/00223980.2018.1504739
40. Sharpless, B. A. (2021). Pass rates on the Examination for Professional Practice in Psychology (EPPP) according to demographic variables: A partial replication. Training and Education in Professional Psychology, 15, 18-22. https://doi.org/10.1037/tep0000301
41. Sharpless, B. A., & Barber, J. P. (2013). Predictors of program performance on the Examination for Professional Practice in Psychology (EPPP). Professional Psychology: Research and Practice, 44(4), 208-217. https://doi.org/10.1037/a0031689
42. Spielberger, C. D. (1980). Test anxiety inventory: Preliminary professional manual. Consulting Psychologists Press.
43. Tobin, D. L., Holroyd, K. A., Reynolds, R. V., & Wigal, J. K. (1989). The hierarchical factor structure of the Coping Strategies Inventory. Cognitive Therapy and Research, 13, 343-361.
44. Vaz, J. C., Pothiyil, T. D., George, L. S., Alex, S., Pothiyil, D. I., & Kamath, A. (2018). Factors influencing examination anxiety among undergraduate nursing students: An exploratory factor analysis. Journal of Clinical and Diagnostic Research, 12(7), JC19.
45. West, C., Jeff, H. W., Grames, H., & Adams, M. A. (2013). Marriage and family therapy: Examining the impact of licensure on an evolving profession. Journal of Marital and Family Therapy, 39(1), 112-126. https://doi.org/10.1111/jmft.12010
46. Williams, D. R., Yu, Y., Jackson, J. S., & Anderson, N. B. (1997). Racial differences in physical and mental health: Socioeconomic status, stress, and discrimination. Journal of Health Psychology, 2(3), 335-351.
47. Zatz, S., & Chassin, L. (1983). Cognitions of test-anxious children. Journal of Consulting and Clinical Psychology, 51(4), 526-534. https://doi.org/10.1037/0022-006X.51.4.526
48. Zeidner, M., & Nevo, B. (1993). Test Anxiety Inventory/Hebrew adaptation (TAI/HB): Scale development, psychometric properties, and some demographic and cognitive correlates. Megamot, 35(2-3), 293-306.