Measuring Faking in the Employment Interview: Development and
Validation of an Interview Faking Behavior Scale
Julia Levashina
Indiana University Kokomo
Michael A. Campion
Purdue University
An Interview Faking Behavior (IFB) scale is developed and validated in 6 studies (N = 1,346). In Study 1, a taxonomy of faking behavior is delineated. The factor structure of a measure is evaluated and refined (Studies 2 and 3). The convergent and discriminant validity of the measure is examined (Study 4). The IFB scale consists of 4 factors (Slight Image Creation, Extensive Image Creation, Image Protection, and Ingratiation) and 11 subfactors (Embellishing, Tailoring, Fit Enhancing, Constructing, Inventing, Borrowing, Masking, Distancing, Omitting, Conforming, and Interviewer Enhancing). A study of actual interviews shows that scores on the IFB scale are related to getting a 2nd interview or a job offer (Study 5). In Study 6, an experiment is conducted to test the usefulness of the new measure for studying methods of reducing faking using structured interviews. It is found that past behavior questions are more resistant to faking than situational questions, and follow-up questioning increases faking. Finally, over 90% of undergraduate job candidates fake during employment interviews; however, fewer candidates engage in faking that is semantically closer to lying, ranging from 28% to 75%.
Keywords: Interview Faking Behavior scale, taxonomy of faking behavior, faking during structured
interviews, base rates of faking in job interviews
Faking or intentional response distortion has been discussed and
studied extensively in the literature on personality measures (Com-
rey & Backer, 1975; Furnham, 1986; Stark, Chernyshenko, Chan,
Lee, & Drasgow, 2001). However, few empirical studies have
explored faking in the employment interview (e.g., Fletcher,
1990). The purpose of this article was threefold. First, we offer a
conceptual definition of faking and develop a taxonomy of faking
behaviors in the employment interview (Study 1). Second, we
develop a measure of faking behaviors from the proposed taxon-
omy, evaluate the factor structure of the measure (Studies 2 and 3),
examine the convergent and discriminant validity of the new scale
against other measures (Study 4), and examine the criterion-related
validity of the new scale (Study 5). Finally, we conduct an exper-
imental study to test the usefulness of the new scale for conducting
research on interview faking (Study 6).
Definitional Issues
The present conceptual foundation for faking is provided by the
overlapping constructs of social desirability responding and im-
pression management (Leary & Kowalski, 1990; Levin & Zickar,
2002). Conflicting definitions have made it unclear as to how
faking is similar to or different from impression management and
social desirability.
Faking and Social Desirability
Social desirability is generally defined as the tendency for
people to present themselves in a socially favorable light (Ed-
wards, 1957; Holden & Fekken, 1989). For example, respondents
may create a good-citizen image by emphasizing socially desirable
personal characteristics with respect to the current social norms
and standards (e.g., Zerbe & Paulhus, 1987) or create an ideal
self-image by claiming good traits and denying negative ones (e.g.,
Furnham, 1990; Isaksen & Davis, 1979). Traditionally, most re-
search on faking in personality measures equated faking with
socially desirable responding (SDR; Ellingson, Smith, & Sackett,
2001; Ones & Viswesvaran, 1998). Individuals who scored high
on the SDR scales and whose self-report measures correlated
highly with the SDR scales were assumed to be “faking good.”
However, this conceptualization of faking is not fully appropriate
in the context of the employment interview. First, during the
employment interview, candidates would try to fake the selection
instrument in order to gain a specific job by presenting themselves
as having necessary job-related credentials rather than simply
emphasizing socially desirable personal characteristics. Second,
deception is not necessarily present in SDR (e.g., Holden &
Fekken, 1989). People might score high on SDR in the interview
when they have the tendency to actually behave in socially desir-
able ways.
Faking and Impression Management
There is probably even more confusion with the definition of
impression management (IM) and its relation to faking. This
confusion stems from the fact that IM has been defined differently
in the personality literature than in the literature on social behav-
iors in organizations. In the framework of personality research, IM
Julia Levashina, School of Business, Indiana University Kokomo; Mi-
chael A. Campion, Krannert Graduate School of Management, Purdue
University.
Correspondence concerning this article should be addressed to Julia
Levashina, School of Business, Indiana University Kokomo, 2300 South
Washington Street, Kokomo, IN 46904-9003. E-mail: jlevashi@iuk.edu
Journal of Applied Psychology, 2007, Vol. 92, No. 6, 1638–1656. Copyright 2007 by the American Psychological Association. 0021-9010/07/$12.00 DOI: 10.1037/0021-9010.92.6.1638
has been conceptualized as one of the two components of SDR
(Paulhus, 1984). According to this tradition, IM equates to faking
and refers to the intentional distortion of responses to create a
favorable impression. It is distinguished from self-deception or
unintentional distortion of responses. Self-deception is manifested
in socially desirable, positively biased self-descriptions that the
respondents actually believe to be true. Many researchers (e.g.,
Zerbe & Paulhus, 1987) have argued that self-deception will not
vary as a function of social factors such as publicity or presence of
extrinsic rewards. IM, however, is a situation-induced temporary
state to present oneself in a positive (or otherwise appropriate)
way. Thus, if faking in the employment interview is affected by
social and situational variables, then it is more likely that the
behavior is motivated by IM rather than by ego protection (Mor-
rison & Bies, 1991). Consequently, faking should be linked with
the intentional distortion or IM component of social desirability
only.
On the contrary, an established tradition in the literature on
social behaviors in organizations is to define IM as a negotia-
tion of the interpretations attached to behaviors in social set-
tings (Gilmore, Stevens, Harrell-Cook, & Ferris, 1999; Schlen-
ker, 1980). According to this tradition, IM is not necessarily
deceptive or intentional. Some researchers would argue that
faking is a part of IM; others believe that they are two separate
constructs. For example, Baumeister (1982, 1989) argued that
there are two kinds of IM: “pleasing the audience,” which
involves conforming to others’ preferences and changing one’s
behavior and appearance depending on the others’ expectations,
and “self-construction,” which is motivated by the self-
presenter’s own values and involves constructing an identity
that fits one’s own personal ideas and desires. In the employ-
ment interview context, Gilmore et al. (1999) defined self-
presentation as attempts to influence self-relevant images; thus,
IM is distinguished from misrepresentation.
The employment interview research has adopted the view of the
literature on social behaviors in organizations on IM, defining it as
a conscious or unconscious attempt to influence images during
interaction (e.g., Ellis, West, Ryan, & DeShon, 2002; McFarland,
Ryan, & Kriska, 2003). The issue of whether IM is deceptive or
not has not been studied, despite Gilmore and Ferris’s (1989) call
to investigate deceptive IM in the interview. Moreover, observa-
tion has been recommended and used primarily as the method to
study IM during the employment interview (e.g., Stevens &
Kristof, 1995). However, this methodology does not allow us to
measure deceptive IM tactics because intent cannot be observed
directly.
To unify these two views on IM, we need to include both honest
and deceptive IM. Not all of IM that occurs in the employment
interview is deceptive. Although some forms of IM used in the
employment interview could be honest and necessary to accurately
present and highlight one’s attributes and credentials (e.g., clear
articulation of job-related credentials), there are others that involve
deceptive behaviors and intentional misrepresentation (e.g., stating
nonexisting achievements) and constitute faking (Fletcher, 1989,
1990). Job candidates may use IM tactics to look good without
being untruthful, or they may use them and be dishonest and
untruthful.
Expanding the Definition of Faking in the Employment
Interview
In the present study, we integrate two distinctions from the
personality literature (intentional distortion vs. unintentional dis-
tortion) and the literature on social behaviors (dishonest vs. honest
IM) into our definition. We define faking in the employment
interview as deceptive IM or the conscious distortions of answers
to the interview questions in order to obtain a better score on the
interview and/or otherwise create favorable perceptions.
There are two main implications of this conceptualization of
faking. First, candidates may engage in faking to meet the require-
ments of the interview question and to make a positive impression
on the interviewer. For example, to answer a past behavior ques-
tion that asks the candidate to describe a specific situation (e.g.,
“Give me an example of a job or project where problems were a
regular occurrence”), candidates without such past experience may
invent the situation by describing a nonexistent one. Second, a
central component of this perspective is the treatment of informa-
tion. Information could be added or subtracted from the perceived
truth or information could be invented (Hopper & Bell, 1984;
Knapp & Comadena, 1979).
Information can be added to the perceived truth in many ways.
Job candidates might answer interview questions having in mind
the image of an ideal candidate for the job or an ideal job incum-
bent. They might exaggerate their job-related credentials or past
achievements. Information can also be omitted or taken away from
what is perceived as the truth. Omission occurs when job candi-
dates intentionally omit some aspects of the requested information
that might decrease their score in the interview or make a negative
impression of the candidate. For example, a job candidate may
have left a previous job for multiple reasons, such as a perceived
lack of opportunity for career progression, conflict with a super-
visor, and job burnout, but mentioned only one reason during an
interview, such as lack of opportunity, and intentionally omitted
the other two due to the possible negative impression they might
create. Finally, information can be invented, and applicants might
present information that is verifiably false (Levin & Zickar, 2002).
This approach equates faking with lying, defined as completely
untrue verbal statements. For instance, lying would occur if job
candidates claimed to have a master’s degree when they only took
classes but never graduated. Thus, a wider view of faking should
be adopted, one that includes not only lying but also pretense,
concealment, exaggeration, and so forth.
Study 1: Identification of Faking Behaviors
Our identification of faking behaviors was driven by three
sources. First, we performed a review of the literature on IM and
influence behaviors in organizations (e.g., Ellis et al., 2002; Gia-
calone & Rosenfeld, 1989; Kipnis, Schmidt, & Wilkinson, 1980;
Kristof-Brown, Barrick, & Franke, 2002; Kumar & Beyerlein,
1991; Tedeschi & Melburg, 1984). Research was focused on three
groups of IM behaviors. First, assertive tactics were used by the
applicants to acquire and promote favorable impressions by por-
traying themselves as a particular type of person with particular
beliefs, opinions, knowledge, and experience. Second, defensive
tactics were used to protect images. Third, ingratiation was used to
evoke interpersonal liking and attraction between interviewers and
themselves.
Second, we conducted content analysis of popular press books
on preparing for the employment interview (e.g., Drake, 1997;
Medley, 1993; Sincoff & Goyer, 1984) to identify recommended
lay strategies on how to improve performance in the employment
interview, how to deal with questions asked about weaknesses and
work-related conflicts, and how to fake successfully during an
interview without lying. Palmer, Campion, and Green (1999) ar-
gued that many job seekers use how-to books to train themselves
in interview preparation.
Third, we conducted a qualitative study using semistructured
interviews with 35 job candidates. The classification of IM behav-
iors was used to develop questions for the interview. Thirty can-
didates were first-year master’s of business administration students
who were interviewing with companies for internship positions at
the time and had an average of six recent employment interviews.
Five participants were doctoral students who were in the job
market at the time and had completed an average of seven recent
employment interviews.
The three sources identified 125 faking behaviors. These
items were analyzed by five judges, doctoral students from
psychology and management departments. The judges reviewed
the items for clarity, appropriateness, and content validity.
Thirty behaviors were eliminated. Then, the authors sorted the
remaining 95 items in terms of the purpose of the faking
behaviors into three groups: image creation (faking in order to
create an image of a good candidate), image protection (faking
in order to protect an image of a good candidate), and ingrati-
ation (faking in order to gain a favorable interviewer’s percep-
tion). Of these 95 items, 52 were related to image creation, 21
to image protection, and 22 items to ingratiation. The item
composition was consistent with the finding in the IM literature
that job candidates used more assertive tactics than defensive
tactics in the interview (e.g., Stevens & Kristof, 1995). Next,
the authors sorted these faking behaviors into several subcate-
gories within each group. Consensus among the coders was
used as the criterion for assigning a given behavior to a cate-
gory. To refine the assignment of items to subcategories, three
coders (doctoral students from a management department) in-
dependently back-translated the 95 behaviors into the three
categories and 11 subcategories. The degree of agreement be-
tween the coders was used as a criterion to retain the behavior
for further analysis. Items were removed if at least one rater
disagreed with the other two raters. The rater disagreement was
uniform across categories. This procedure resulted in the elim-
ination of 31 items, leaving a pool of 64 items for the initial test
instrument.
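To illustrate the retention rule used in this back-translation step, the following is a minimal sketch (not the authors' code): an item is kept only when all three coders assign it to the same subcategory, so a single dissenting coder removes it. The item labels and category names are hypothetical examples.

```python
# Sketch of the Study 1 retention rule: keep an item only when all three coders
# place it in the same subcategory during back-translation.
from collections import Counter

def retained_items(assignments):
    """assignments: dict mapping item -> list of category labels, one per coder."""
    kept, dropped = [], []
    for item, labels in assignments.items():
        # Unanimous agreement is required; one dissenting coder removes the item.
        if len(set(labels)) == 1:
            kept.append(item)
        else:
            dropped.append(item)
    return kept, dropped

example = {
    "item_12": ["Embellishing", "Embellishing", "Embellishing"],  # kept
    "item_47": ["Masking", "Omitting", "Masking"],                # dropped
}
print(retained_items(example))
```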
The developed taxonomy suggests that candidates might fake in
order to create an image of a good candidate, protect an image of
a good candidate, or ingratiate. The subcategories of embellishing,
tailoring, fit enhancing, constructing, inventing, and borrowing
could be used to create an image of a good candidate. The
subcategories of omitting, masking, and distancing could be used
to defend an image of a good candidate. Finally, the subcategories
of opinion conforming and interviewer/organization enhancing
could be used to ingratiate or to gain favor with the interviewer to
improve the appearance of a good candidate. The proposed tax-
onomy of faking behaviors and the Interview Faking Behavior
(IFB) scale are shown in the Appendix.
Study 2: Validation and Refinement of the IFB Scale
Using Exploratory Factor Analysis (EFA)
The purpose of this study was to validate and refine the factor
structure of the IFB scale.
Method
The IFB scale was developed on the basis of the taxonomy of
faking behaviors proposed in Study 1. The 64-item IFB scale was
administered to 260 senior-level undergraduate students (36%
women) of a large university located in the midwestern United
States who were in the job market and had several job interviews.
Respondents were asked to describe, on a 5-point scale, the degree
to which they had used each behavior in their employment inter-
views. The 5-point scale had the following anchors: 5 = to a very
great extent, 4 = to a considerable extent, 3 = to a moderate
extent, 2 = to a little extent, and 1 = to no extent (Bass, Cascio,
& O'Connor, 1974). To ensure that respondents answered the
instrument honestly, no names were collected. Respondents were
asked to be honest about this topic. They were assured that the
researchers would have no way of connecting their answers back
to them, their answers would be used for research purposes only,
and they would not be used to evaluate them.
EFA. The means and standard deviations for the IFB scale are
presented in Table 1. The item–total correlations ranged from 0.39
to 0.68.¹ All items met the standard criterion of 0.30 (Nunnally &
Bernstein, 1994) and were retained. Scores obtained on the IFB
scale were initially factor analyzed with the maximum likelihood
extraction method and oblique factor rotation (promax). We used
oblique rotation because the factors were expected to be correlated.
To determine the number of “meaningful” factors to retain, we
used four criteria: Kaiser criterion (Kaiser, 1960), the scree test
(Cattell, 1966), percentage of variance accounted for, and inter-
pretability. On the basis of the first three criteria, four factors were
retained. Four items with cross-loadings were eliminated. The four
high cross-loadings could have occurred due to chance, and al-
though future research could determine whether the cross-loadings
would replicate, dropping the items in the present study improves
scale discriminability while leaving enough items remaining for
reliable scales.
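A minimal sketch of this extraction step is shown below, assuming the item responses sit in a pandas DataFrame and using the third-party factor_analyzer package (the authors do not name their software, so the package, file name, and the 0.30 cross-loading cutoff are illustrative assumptions).

```python
# Sketch of the Study 2 extraction: maximum likelihood factor analysis with an
# oblique (promax) rotation, plus the Kaiser criterion and a cross-loading check.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("ifb_responses.csv")  # hypothetical file of 1-5 ratings

# Eigenvalues for the Kaiser criterion and scree inspection.
fa_all = FactorAnalyzer(rotation=None, method="ml")
fa_all.fit(items)
eigenvalues, _ = fa_all.get_eigenvalues()
print("Eigenvalues > 1:", sum(eigenvalues > 1))

# Retain four factors and rotate obliquely, since the factors are expected to correlate.
fa4 = FactorAnalyzer(n_factors=4, rotation="promax", method="ml")
fa4.fit(items)
loadings = pd.DataFrame(fa4.loadings_, index=items.columns)
communalities = pd.Series(fa4.get_communalities(), index=items.columns)

# Flag items with cross-loadings (two or more loadings above an illustrative cutoff).
cross_loaded = loadings[(loadings.abs() > 0.30).sum(axis=1) > 1]
print(cross_loaded)
```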
A subsequent factor analysis with maximum likelihood method
and the same oblique rotation was conducted on the remaining 60
items. Four factors were retained on the basis of the described
criteria. The rotated factors accounted for 63% of the variance.
Factors, percentages of variance explained, factor loadings, com-
munalities, and reliability statistics are presented in Table 1. The
extracted four factors demonstrated a simple structure, meaning
that most of the variables had relatively high factor loadings on
only one factor and near-zero loadings for the other factors. The
extracted four-factor solution was consistent with the proposed
taxonomy of faking behaviors. The only difference was that the
hypothesized factor Image Creation had divided into two factors:
Factor 1 and Factor 4. Factor 1, which was named Extensive Image
Creation, reflected faking behaviors that were closer to pure forms
¹ Table with item-total correlations for the IFB scale is available from Julia Levashina upon request.
Table 1
Means, Standard Deviations, and Exploratory Factor Analysis of the IFB Scale
Factor pattern (standardized regression coefficients)
Item  M  SD  Extensive image creation  Ingratiation  Image protection  Slight image creation  h²
ICCON18 1.56 0.93 0.77 0.02 0.13 0.13 0.63
ICCON19 1.80 1.07 0.77 0.16 0.19 0.05 0.62
ICCON20 1.67 1.05 0.87 0.02 0.14 0.04 0.71
ICCON21 1.55 0.97 0.81 0.01 0.01 0.03 0.62
ICCON22 1.85 1.09 0.77 0.14 0.02 0.07 0.61
ICCON23 1.91 1.06 0.65 0.13 0.07 0.15 0.60
ICCON24 1.35 0.81 0.84 0.01 0.09 0.02 0.63
ICINV25 1.48 0.78 0.52 0.06 0.20 0.02 0.37
ICINV26 1.68 0.85 0.46 0.02 0.29 0.11 0.50
ICINV27 1.38 0.80 0.71 0.22 0.16 0.00 0.53
ICINV28 1.97 1.08 0.36 0.18 0.22 0.05 0.33
ICINV29 1.71 0.93 0.64 0.03 0.28 0.04 0.61
ICINV30 2.20 1.09 0.45 0.03 0.26 0.16 0.53
ICINV31 1.63 1.03 0.85 0.02 0.01 0.04 0.70
ICINV32 1.84 1.09 0.66 0.01 0.12 0.07 0.58
ICBOR33 1.52 0.88 0.56 0.03 0.23 0.06 0.43
ICBOR34 1.43 0.87 0.66 0.10 0.12 0.04 0.49
ICBOR35 1.47 0.88 0.64 0.12 0.18 0.03 0.51
INCON53 2.51 1.07 0.09 0.60 0.01 0.13 0.52
INCON54 2.32 1.11 0.22 0.62 0.04 0.04 0.61
INCON55 2.45 1.07 0.01 0.62 0.00 0.15 0.50
INCON56 2.49 1.06 0.10 0.78 0.08 0.02 0.63
INCON57 2.66 1.06 0.08 0.73 0.00 0.05 0.63
INCON58 2.72 1.15 0.06 0.73 0.00 0.06 0.52
INCON59 2.42 1.14 0.04 0.59 0.13 0.02 0.43
INCON60 2.35 1.09 0.09 0.73 0.08 0.01 0.67
INENH61 3.14 1.24 0.22 0.62 0.14 0.08 0.36
INENH62 2.69 1.14 0.06 0.56 0.14 0.04 0.40
INENH63 2.99 1.15 0.11 0.65 0.16 0.03 0.50
INENH64 2.98 1.19 0.13 0.56 0.09 0.11 0.39
IPOMI37 2.11 1.11 0.10 0.07 0.40 0.06 0.22
IPOMI38 2.23 1.13 0.03 0.02 0.50 0.13 0.35
IPOMI39 2.45 1.18 0.03 0.05 0.48 0.12 0.31
IPOMI40 2.54 1.17 0.09 0.29 0.46 0.02 0.39
IPOMI41 2.15 1.17 0.10 0.24 0.40 0.06 0.33
IPOMI42 1.68 1.08 0.14 0.02 0.54 0.04 0.38
IPMAS44 2.29 1.17 0.11 0.15 0.33 0.00 0.23
IPMAS46 1.92 1.06 0.21 0.14 0.56 0.16 0.48
IPMAS47 2.18 1.21 0.09 0.25 0.41 0.00 0.39
IPMAS48 2.91 1.21 0.06 0.27 0.44 0.08 0.40
IPMAS49 1.80 1.11 0.09 0.01 0.63 0.03 0.43
IPDIS50 1.99 1.06 0.08 0.03 0.82 0.04 0.67
IPDIS51 2.08 1.11 0.04 0.06 0.74 0.08 0.62
IPDIS52 2.00 1.08 0.04 0.02 0.70 0.10 0.61
ICBOR36 1.80 0.89 0.27 0.01 0.46 0.07 0.44
ICEMB2 2.30 0.98 0.14 0.14 0.23 0.36 0.30
ICEMB3 2.27 1.15 0.05 0.04 0.15 0.32 0.23
ICEMB4 2.57 1.16 0.13 0.11 0.08 0.57 0.38
ICEMB5 2.42 1.10 0.18 0.04 0.13 0.49 0.42
ICEMB6 3.04 1.14 0.08 0.02 0.20 0.44 0.28
ICTAI7 2.59 1.02 0.05 0.00 0.02 0.70 0.45
ICTAI8 3.02 1.03 0.03 0.12 0.08 0.68 0.47
ICTAI9 2.74 1.03 0.00 0.19 0.17 0.68 0.52
ICTAI10 2.44 1.12 0.09 0.03 0.01 0.63 0.46
ICTAI11 2.33 1.09 0.08 0.01 0.00 0.64 0.46
ICTAI12 2.39 1.18 0.05 0.24 0.02 0.43 0.39
ICFIT13 2.52 1.03 0.04 0.13 0.06 0.41 0.28
ICFIT14 2.35 1.02 0.07 0.09 0.02 0.50 0.39
ICFIT15 2.36 1.05 0.05 0.04 0.07 0.59 0.48
ICFIT17 2.63 1.13 0.02 0.16 0.12 0.54 0.50
% of Variance (rotated solution) 22.78 14.39 13.14 11.17
Alpha coefficient 0.95 0.92 0.91 0.90
Note. The first two letters in each variable name correspond to three big groups of faking behaviors (IC = image creation, IN = ingratiation, and IP = image protection); the following letters correspond to 11 subfactors of faking behaviors: CON = constructing, INV = inventing, BOR = borrowing, CON = opinion conforming, ENH = interviewer or organization enhancing, OMI = omitting, MAS = masking, DIS = distancing, EMB = embellishing, TAI = tailoring, and FIT = fit enhancing. Numbers correspond to the item number in the instrument. For example, IPOMI38 is item number 38 in Image Protection, Omission. Analysis is based on N = 260. Interfactor correlations are in the range from 0.39 to 0.55. IFB = Interview Faking Behavior; h² = item communalities at extraction. Boldface values indicate that the item loads on the factor.
Factor 1, which was named Extensive Image Creation, reflected
faking behaviors that were closer to pure forms of lying and
deception and contained items that measured Con-
structing, Inventing, and Borrowing of experiences or accomplish-
ments (e.g., “I claimed that I have skills that I do not have”). Factor
4, named Slight Image Creation, reflected mild types of faking and
was composed of items that measured Embellishing, Tailoring,
and Fit Enhancing (e.g., “I exaggerated my responsibilities on my
previous jobs”). Factor 2 was identical to the hypothesized Ingra-
tiation Factor (e.g., “I tried to show that I shared the interviewer’s
views and ideas even if I did not”). Finally, Factor 3 was almost
the same as the hypothesized Image Protection factor (e.g., “When
asked directly, I did not mention some problems that I had in past
jobs”) with one exception. One item (image creation-embellishing
6) that was hypothesized to reflect the Image Creation factor
showed higher factor loadings on Factor 3. The internal consistency
reliabilities for Factors 1, 2, 3, and 4 were
0.95, 0.92, 0.91, and 0.90, respectively. Interfactor correlations
ranged from 0.39 to 0.55.
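The reliability statistics reported here (coefficient alpha) and the corrected item-total correlations used earlier for item screening can be computed as in the following sketch. The data are simulated 1-5 ratings and the item labels are illustrative, so the printed values will not reproduce the figures above.

```python
# Illustrative sketch: Cronbach's alpha for a factor's item set and corrected
# item-total correlations, computed on simulated data.
import numpy as np
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    # Correlate each item with the sum of the remaining items.
    return pd.Series(
        {col: df[col].corr(df.drop(columns=col).sum(axis=1)) for col in df.columns}
    )

factor_items = pd.DataFrame(
    np.random.randint(1, 6, size=(260, 7)),  # simulated 1-5 ratings
    columns=[f"ICCON{i}" for i in range(18, 25)],
)
print(round(cronbach_alpha(factor_items), 2))
print(corrected_item_total(factor_items).round(2))
```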
Descended EFA. The first EFA extracted four factors that
corresponded to higher order factors in our hypothesized taxon-
omy of faking behaviors. At the same time, we believe that the
hypothesized 11 subcategories are meaningful in terms of measur-
ing specific types of faking behaviors. Thus, we performed de-
scended EFA, meaning that we conducted the factor analysis
within each category to see whether the hypothesized subcatego-
ries would emerge.
We again used the maximum likelihood method and oblique
factor rotation (promax) and the same four criteria to determine the
number of factors to retain. Six items were eliminated because they
had high cross-loadings. EFA was repeated for the factors after the
items with cross-loadings were eliminated. The final results are
shown in Tables 2-5.
Factor analysis of Factor 1 (Extensive Image Creation) resulted
in a three-factor solution that mirrored the hypothesized three
factors: Constructing, Inventing, and Borrowing. EFA of Factor 2
(Ingratiation) resulted in a two-factor solution that was identical
with the hypothesized two factors: Opinion Conforming and In-
terviewer/Organization Enhancing. EFA of Factor 3 (Image Pro-
tection) resulted in a three-factor solution. The first factor (Dis-
tancing) was composed of all the items hypothesized to assess the
Distancing factor (e.g., “I clearly separated myself from my past
work experiences that would reflect poorly on me”) and one item
that was intended to assess the Masking factor (“I covered up some
skeletons in my closet”). The second factor was similar to the
hypothesized Omitting factor. Finally, the third factor (Masking)
had three items that were intended to measure Masking and one
item that was intended to measure Omitting (“When asked di-
rectly, I did not mention my true reason for quitting my previous
job”). Analysis of Factor 4 (Slight Image Creation) resulted in a
three-factor solution that corresponded with the hypothesized fac-
tors: Tailoring, Fit Enhancing, and Embellishing.
Summary of the Results of the EFA of the IFB Scale
The factor analysis suggested a hierarchical factor structure of
faking with four factors and 11 subfactors. On the basis of the IM
research, we hypothesized that there were three factors. The factor
analysis suggested that there were four factors. Two of them
(Image Protection and Ingratiation) were identical to the hypoth-
esized factors, and they were represented by the hypothesized
subfactors. Two other factors represented the hypothesized Image
Creation factor. Content analysis of the subfactors that loaded on
the two emerging factors revealed the conceptual difference be-
tween them. Extensive Image Creation represented socially less
appropriate behaviors (e.g., borrowing experiences of others, in-
venting job-related credentials), which are semantically closer to
lying. Slight Image Creation represented mild forms of faking
(e.g., embellishing job-related credentials). The items with cross-
loadings were removed, leaving 54 items in our instrument. Fi-
nally, internal consistencies of factors and subfactors were ade-
quate.
Study 3: Confirmatory Factor Analysis (CFA) of the IFB
Scale
We next tested the scale using CFA and a new sample of job
applicants. In Study 3, the hypothesized taxonomy of faking was
compared with the results of the EFA. Six models were compared
with each other to identify the model that best fits the data. The
first model was the hypothesized model with one third-order
general factor (Faking), three factors (Image Creation, Image Pro-
tection, and Ingratiation), and 11 subfactors. The second model
was a variation of the hypothesized model, with three factors and
11 subfactors but no general factor (Faking). The third model was
a model derived from the EFA with four factors: Slight Image
Creation, Extensive Image Creation, Image Protection, and Ingratiation.
Table 2
Descended EFA of Extensive Image Creation
Factor pattern (standardized regression coefficients)
Item  Constructing  Inventing  Borrowing  h²
ICCON18 0.82 0.02 0.04 0.69
ICCON19 0.77 0.08 0.04 0.65
ICCON20 0.79 0.08 0.04 0.76
ICCON21 0.62 0.06 0.33 0.66
ICCON22 0.65 0.26 0.05 0.66
ICCON23 0.67 0.24 0.11 0.63
ICCON24 0.79 0.08 0.14 0.68
ICINV25 0.04 0.44 0.22 0.38
ICINV26 0.13 0.56 0.10 0.50
ICINV28 0.12 0.53 0.24 0.38
ICINV29 0.04 0.75 0.10 0.73
ICINV30 0.07 0.82 0.08 0.65
ICINV31 0.32 0.52 0.13 0.73
ICINV32 0.21 0.60 0.09 0.64
ICBOR33 0.09 0.22 0.49 0.49
ICBOR34 0.01 0.05 0.89 0.82
ICBOR35 0.03 0.09 0.82 0.82
% of varianceᵃ  14.61  10.99  12.07
Alpha coefficient 0.93 0.89 0.87
Note. The first two letters in each variable name correspond to three big groups of faking behaviors (IC = image creation), and the following letters correspond to 3 of 11 subfactors of faking behaviors: CON = constructing, INV = inventing, and BOR = borrowing. Analysis is based on N = 260. Interfactor correlations are r(Factor 1, Factor 2) = 0.66, r(Factor 1, Factor 3) = 0.57, and r(Factor 2, Factor 3) = 0.56. EFA = exploratory factor analysis; h² = item communalities at extraction. Boldface values indicate that the item loads on the factor.
ᵃ Rotated solution.
The fourth model was derived from the descended EFA. It had
four factors (Slight Image Creation, Extensive Image Creation, Image
Protection, and Ingratiation), and 11 subfactors. The fifth model was
a variation of the fourth model, in which the variances in four factors
were explained by one more general third-order factor (Faking).
Finally, the sixth model had one first-order general factor (Faking).
Sample and Procedure
To perform CFA, new independent data were collected. The
new sample consisted of 589 undergraduate students (40%
women) from a large midwestern university, who were in the job
market and had several job interviews. As in the previous study,
respondents were asked to describe on the same 5-point scale used
in Study 1 the extent to which they had used each behavior in their
interviews. Also, to ensure that respondents answered the instru-
ment honestly, no names were collected. Respondents were told
that their answers would be used for research purposes.
Analysis
CFA is sensitive to the violation of multivariate normality. In
our study, we assumed that the departure from normality was not
very extensive. The decision was based on the examination of
frequency distributions and kurtosis and skewness of item-level
data.² The analysis of distributions revealed that we had two types
of distributions: an approximation of univariate normal and the
Poisson distributions. Items that had the Poisson distribution were
transformed by using a log function (Rummel, 1970). After the
transformation, the kurtosis and skewness of all items did not
exceed 1.5, indicating that the normality assumption was not
violated (Kline, 1998). Maximum likelihood estimation has been
recommended for use with ordered categorical data when item-
level characteristics are approximately normal (DiStefano, 2002;
Dolan, 1994). CFA with maximum likelihood estimation was
performed using AMOS 5.0 (Arbuckle, 2003). To assess model fit,
five recommended measures were used: the chi-square/df ratio,
root mean residuals (RMRs), comparative fit index (CFI), Tucker-
Lewis Index (TLI), and root-mean-square error of approximation
(RMSEA) (e.g., Marsh, Balla, & Hau, 1996; Maruyama, 1998).
The chi-square/df and RMR provide information about how
closely the model fit compares with a perfect fit. Generally, values
of chi-square/df that are less than 3 and values of RMR that are less
than 0.05 are interpreted as indicating a good fit of a model to the
data (e.g., Kelloway, 1998; Kline, 1998). CFI, TLI, and RMSEA
are relative indexes that are used to compare the fit of different
models with the same data set. For both CFI and TLI, values
exceeding .90 are indicative of a good fit to the data (Bentler &
Bonett, 1980). Finally, RMSEA values of .05 or less indicate close
fit between the model and the sample data (e.g., Bentler, 1990).
Simulation studies have found that RMSEA does not appear to be
affected by sample size or model size and was recommended for
use with ordered categorical data (DiStefano, 2002).
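For reference, the relative indexes named above can be computed directly from a model's chi-square and degrees of freedom (plus those of the independence/null model for CFI and TLI). The sketch below uses standard formulas; the Model 4 values from Table 6 are plugged in as a check, while the null-model chi-square is a hypothetical placeholder because it is not reported here.

```python
# Sketch of the fit indexes used in Study 3, computed from chi-square statistics.
import math

def cfi(chi2_m, df_m, chi2_0, df_0):
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_0 - df_0, chi2_m - df_m, 0.0)
    return 1.0 - num / den

def tli(chi2_m, df_m, chi2_0, df_0):
    return ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1.0)

def rmsea(chi2_m, df_m, n):
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Model 4: chi-square = 2,999 on 1,360 df, N = 589 (Table 6).
chi2_null, df_null = 21_000.0, 1_431.0  # hypothetical baseline (null) model values
print(round(cfi(2999, 1360, chi2_null, df_null), 3))
print(round(tli(2999, 1360, chi2_null, df_null), 3))
print(round(rmsea(2999, 1360, 589), 3))  # reproduces the reported 0.045
```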
Results
The measures of fit for the different models are in Table 6.
² Data are available from Julia Levashina upon request.
Table 3
Descended EFA of the Ingratiation Factor
Factor pattern (standardized regression coefficients)
Item  Opinion conforming  Interviewer or organization enhancing  h²
INCON53 0.73 0.00 0.53
INCON54 0.68 0.12 0.58
INCON55 0.73 0.01 0.55
INCON56 0.91 0.08 0.76
INCON57 0.84 0.01 0.71
INCON58 0.68 0.09 0.53
INCON59 0.44 0.27 0.41
INCON60 0.64 0.24 0.64
INENH61 0.02 0.74 0.52
INENH62 0.03 0.86 0.72
INENH63 0.12 0.77 0.71
INENH64 0.10 0.68 0.55
% of varianceᵃ  12.04  8.31
Alpha coefficient 0.92 0.87
Note. The first two letters in each variable name correspond to three big groups of faking behaviors (IN = ingratiation), and the following letters correspond to 2 of 11 subfactors of faking behaviors: CON = opinion conforming and ENH = interviewer or organization enhancing. Analysis is based on N = 260. The interfactor correlation is r(Factor 1, Factor 2) = 0.58. EFA = exploratory factor analysis; h² = item communalities at extraction. Boldface values indicate that the item loads on the factor.
ᵃ Rotated solution.
Table 4
Descended EFA of the Image Protection Factor
Factor pattern (standardized regression coefficients)
Item  Distancing  Masking  Omitting  h²
IPDIS50 0.86 0.04 0.00 0.79
IPDIS51 0.83 0.02 0.03 0.74
IPDIS52 0.61 0.11 0.13 0.60
IPMAS49 0.52 0.22 0.04 0.44
IPMAS44 0.07 0.46 0.25 0.30
IPMAS46 0.22 0.63 0.02 0.58
IPMAS47 0.12 0.58 0.06 0.47
IPOMI42 0.21 0.54 0.03 0.46
IPOMI37 0.03 0.24 0.45 0.31
IPOMI38 0.16 0.12 0.82 0.68
IPOMI39 0.05 0.11 0.75 0.66
% of varianceᵃ  7.89  4.20  5.34
Alpha coefficient 0.87 0.75 0.76
Note. The first two letters in each variable name correspond to three big groups of faking behaviors (IP = image protection), and the following letters correspond to 3 of 11 subfactors of faking behaviors: DIS = distancing, MAS = masking, and OMI = omitting. Analysis is based on N = 260. Interfactor correlations are r(Factor 1, Factor 2) = 0.55, r(Factor 1, Factor 3) = 0.63, and r(Factor 2, Factor 3) = 0.46. EFA = exploratory factor analysis; h² = item communalities at extraction. Boldface values indicate that the item loads on the factor.
ᵃ Rotated solution.
Models 4 and 5 provided the best fit to the data when compared
with the other models investigated. The sixth model and the third
model (the EFA-derived model with four first-order correlated
factors) were the worst fitting models. Also, a nested comparison
of Model 3 and Model 4 indicates that the higher order four factors
alone are not empirically sufficient and that the subfactors are
important, Δχ²(586, N = 589) = 3,666, p < .01. These results
support our decision to perform descended factor analysis. Because
Models 4 and 5 were nested models, chi-squares could be compared
to test which model provided a better statistical fit to the data.
Although the fit indexes of Model 4 were slightly better than those
of Model 5, the difference in chi-squares, Δχ²(2, N = 589) = 4,
p > .2, showed that Model 5 fits no worse than Model 4 (e.g.,
Kline, 1998), and Model 5 was preferred on the grounds of parsimony
and theory. The results of the CFA provided strong support for the
multidimensional nature of faking during the interview. CFA supported
the concept of a total faking score as well as four scales and 11 subscales.
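The nested-model comparisons reported above amount to chi-square difference tests; a minimal sketch follows, using the chi-square and df values from Table 6 and assuming scipy is available.

```python
# Sketch of the chi-square difference tests for nested CFA models (Table 6 values).
from scipy.stats import chi2

def chi_square_difference(chi2_restricted, df_restricted, chi2_full, df_full):
    d_chi2 = chi2_restricted - chi2_full
    d_df = df_restricted - df_full
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# Model 3 (four factors only) vs. Model 4 (four factors + 11 subfactors).
print(chi_square_difference(6665, 1946, 2999, 1360))  # large, significant difference

# Model 5 (adds a general Faking factor) vs. Model 4.
print(chi_square_difference(3003, 1362, 2999, 1360))  # small, nonsignificant difference
```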
Study 4: Convergent and Discriminant Validity and Test–
Retest Reliability of the IFB Scale
If the IFB scale is measuring meaningful constructs, then it
should demonstrate convergent and discriminant validity by a
predictable pattern of relationships with other variables within the
“nomological network” (Cronbach & Meehl, 1955). To build the
nomological network, we used findings and recent developments
in the integrity testing, IM, and deception literature. Although
alternative measures of faking during the employment interview
were not available, we assessed the correlation between our mea-
sure and measures of SDR and IM. At the beginning of this article,
we argued that faking, SDR, and IM are overlapping constructs but
cannot be used interchangeably. Thus, we expected that the IFB
scale would be moderately correlated with measures of SDR and
IM (Hypothesis 1).
Overt integrity tests mainly consist of two parts (Sackett &
Wanek, 1996). One part includes measures of theft attitudes.
People who believe that others engage in dishonest behaviors tend
to behave fraudulently themselves. The other part refers to the
assessment of one’s own honesty and admissions of theft and other
wrongdoing. Therefore, we argue that people who think that others
are untruthful will engage more in faking during the employment
interview, but people who value honesty will engage less in faking
(Hypothesis 2).
The findings in the deception literature suggest that Machiavel-
lianism and self-monitoring should predict faking in the employ-
ment interview. People who self-monitor more fully consider
characteristics of the social situation in presenting themselves to
others and vary their actual behavior in response to subtle changes
in social norms (Snyder, 1974; Snyder & Monson, 1975). At the
same time, people high in Machiavellianism, who believe that
others can be manipulated, are particularly likely to engage in
strategic self-presentation to influence others (Christie & Geis,
They tend to tell more everyday lies (e.g., Kashy & DePaulo, 1996).
Table 5
Descended EFA of the Slight Image Creation Factor
Factor pattern (standardized regression coefficients)
Item  Tailoring  Fit enhancing  Embellishing  h²
ICTAI7 0.77 0.04 0.01 0.57
ICTAI8 0.81 0.09 0.03 0.62
ICTAI9 0.78 0.02 0.03 0.60
ICTAI10 0.55 0.15 0.08 0.48
ICTAI11 0.43 0.24 0.11 0.43
ICTAI12 0.49 0.25 0.06 0.39
ICFIT13 0.12 0.82 0.02 0.60
ICFIT14 0.04 0.80 0.00 0.66
ICFIT15 0.21 0.56 0.09 0.55
ICFIT17 0.25 0.53 0.05 0.53
ICEMB2 0.13 0.11 0.33 0.25
ICEMB3 0.13 0.18 0.56 0.35
ICEMB4 0.11 0.07 0.79 0.68
ICEMB5 0.04 0.02 0.84 0.74
% of varianceᵃ  6.78  5.71  5.92
Alpha coefficient 0.85 0.83 0.78
Note. The first two letters in each variable name correspond to three big groups of faking behaviors (IC = image creation), and the following letters correspond to 3 of 11 subfactors of faking behaviors: TAI = tailoring, FIT = fit enhancing, and EMB = embellishing. Analysis is based on N = 260. Interfactor correlations are r(Factor 1, Factor 2) = 0.52, r(Factor 1, Factor 3) = 0.56, and r(Factor 2, Factor 3) = 0.45. EFA = exploratory factor analysis; h² = item communalities at extraction. Boldface values indicate that the item loads on the factor.
ᵃ Rotated solution.
Table 6
Alternative Models and Significance Test
Model  χ²  df  χ²/df  RMR  CFI  TLI  RMSEA (90% CI)
Model 1 4,809 1938 2.5 0.074 0.873 0.868 0.050 (0.048, 0.052)
Model 2 4,809 1938 2.5 0.074 0.873 0.868 0.050 (0.048, 0.052)
Model 3 6,665 1946 3.4 0.057 0.791 0.784 0.064 (0.063, 0.066)
Model 4 2,999 1360 2.2 0.049 0.917 0.913 0.045 (0.043, 0.047)
Model 5 3,003 1362 2.2 0.050 0.915 0.911 0.046 (0.044, 0.048)
Model 6 11,379 1952 5.8 0.087 0.583 0.569 0.091 (0.089, 0.092)
Note. Analysis is based on N = 589. Model 1 has one third-order factor (Faking), three factors (Image Creation, Image Protection, and Ingratiation), and 11 subfactors. Model 2 has 11 subfactors. Model 3 has four factors (Slight Image Creation, Extensive Image Creation, Image Protection, and Ingratiation). Model 4 has four factors and 11 subfactors. Model 5 has a third-order factor (Faking), four factors, and 11 subfactors. Model 6 has one first-order factor (Faking). Absolute indexes: χ²/df = normed chi-square; RMR = root mean residual. Relative indexes: CFI = comparative fit index; TLI = Tucker–Lewis Index. Fit index for comparing nonnested models: RMSEA = root-mean-square error of approximation.
Thus, we expect that high self-monitoring people will engage more
often in faking during the employment interview (Hypothesis 3).
Scores on the Machiavellianism scale can be
expected to correlate with scores on the IFB scale (Hypothesis 4).
Discriminant validity refers to the extent to which there are
negligible relationships between measures of unrelated constructs.
The findings in the deception literature suggest that there are no
gender differences in the use of deceptive behaviors. Women and
men tend to tell the same amount of lies (DePaulo, Epstein, &
Wyer, 1993; DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996;
Tyler & Feldman, 2004). Thus, we expect negligible correlations
between gender and our measure (Hypothesis 5). Findings from
research on cheating suggest that there is no correlation
between ethnicity and cheating (Cizek, 1999). Moreover, Topp
(2001) found no relation between ethnicity and IM. Thus, we
expected no relationship between ethnicity and our measure (Hy-
pothesis 6). Also, research on cheating revealed that there is a
slight to moderate negative correlation between cheating and grade
point average (GPA; Cizek, 1999). Students with lower grades are
more likely to report and engage in cheating, whereas students
with a higher GPA are less likely to do either. Recently, Higgins
and Judge (2004) reported only a slight negative correlation be-
tween GPA and self-promotion tactics and no correlation between
GPA and ingratiation during the employment interview. Thus, we
expected slight negative or zero correlations between GPA and the
IFB scale and its subscales (Hypothesis 7). Finally, at the begin-
ning of this article, we argued that faking does not relate to the
self-deception component of SDR. Thus, we expect an insignifi-
cant correlation between a measure of self-deception and our
measure (Hypothesis 8).
Method
Participants. The data were collected from a new sample of
156 undergraduate students (39% women) who were in the job
market for either permanent full-time or temporary internship
positions and had on average four employment interviews.
Measures. To measure social desirability, we used the Social
Desirability Scale (SDS) developed by Crowne and Marlowe
(1960). To measure IM and self-deception (SD) we used the
Balanced Inventory of Desirable Responding-7 (BIDR-7; Paulhus,
1991). To measure the attitude towards honesty of other people,
we used the Trustworthiness scale (Wrightsman, 1964). We used
seven items from the Trustworthiness scale that ask specifically
about opinions on whether other people usually behave dishon-
estly. Because items in the Trustworthiness scale ask about dis-
honest behaviors of other people in everyday life, we developed
three additional items that asked specifically about the likelihood
of faking of other people during an interview (“Given a chance,
most people would try to fake during the employment interview”;
“People usually tell the truth during the employment interview,
even when they know they would be better off by lying”; “Most
people are honest during the employment interview”). These items
composed what we called the Interview Trustworthiness scale. To
measure personal honesty, we used the Honesty scale (Scott,
1965), which includes 20 items (e.g., “Always representing one’s
own true thoughts and feelings honestly”). There is relatively
strong evidence for the construct validity of the scale (e.g., Braith-
waite & Scott, 1991). To measure self-monitoring, we used the
Self-Monitoring scale (Snyder, 1974). To measure Machiavellian-
ism, we used the Machiavellianism scale developed by Allsopp,
Eysenck, and Eysenck (1991). The items in this scale measure the
respondent’s own behavior rather than attitudes toward Machia-
vellian behaviors, as in the original Machiavellianism scale devel-
oped by Christie and Geis (1970). The ethnicity variable had six
categories: American Indian or Alaska Native, Asian or Pacific
Islander, Black, Hispanic, White, and “other.” Gender was coded
as female = 1, male = 0. To measure GPA, we asked respondents
to indicate current cumulative GPA as it was reported on their
most recent official grade report.
Procedure. Undergraduate students who were in the job mar-
ket were recruited to participate in this study for extra credit in a
college course. Participants completed all measures except for the
IFB scale at the beginning of the 16-week semester, and the IFB
scale was completed at the end of that semester. This helped
reduce common method variance. The participation in this study
was voluntary and anonymous. Each participant was assigned a
random anonymous number that he or she had to put on two forms.
Results and Discussion
The convergent and discriminant validity correlations are re-
ported in Table 7. Hypothesis 1 stated that our measure would be
significantly correlated with a measure of SDR and a measure of
IM. The IFB scale and all four of its subscales were moderately
correlated with the Social Desirability scale, with rs ranging from
.18 to .29 (ps < .05). Also, the IFB scale and its subscales
were moderately correlated with the IM scale of the BIDR-7, with
rs ranging from .16 to .31 (ps < .05). Therefore, the IFB scale
showed convergent validity, and Hypothesis 1 was supported.
Hypothesis 2 stated that the IFB scale and its subscales would be
positively correlated with a measure of attitude toward other
people’s dishonesty and negatively correlated with an assessment
of one’s own honesty. The IFB scale and its subscales were
positively correlated with the Trustworthiness scale (rs ranging
from .21 to .27, ps < .01), indicating that participants who believed
that others behave dishonestly engaged more often in faking
during an interview. Also, the IFB scale and its subscales correlated
with the Interview Trustworthiness scale (rs ranging from .18
to .26, ps < .05). As expected, we obtained significant negative
correlations (rs ranging from −.39 to −.27, ps < .001) between
the Honesty scale and our measure, meaning that people who value
honesty tend not to engage in faking behaviors. Thus, Hypothesis
2 was supported.
Hypotheses 3 and 4 stated that the IFB scale and its subscales
would be positively correlated with measures of self-monitoring
and Machiavellianism. The Self-Monitoring scale correlated significantly
with the IFB scale and its subscales (rs ranging from .16
to .29, ps < .05), except for the Extensive Image Creation subscale.
This latter finding is consistent with research that found that
self-monitoring and success at deception, defined as lying, were
unrelated (e.g., Miller, DeTurck, & Kalbfleisch, 1983; Riggio,
Tucker, & Throckmorton, 1987). This result provides a prelimi-
nary explanation of the existing paradox that although self-
monitoring is theoretically linked to deception ability, empirical
research has failed to support this link. On the basis of the findings
from past research and our study, it could be concluded that
self-monitoring does not relate to deception, defined as lying, but
relates to other types of deception (e.g., exaggeration, omitting).
The IFB scale was significantly correlated with the Machiavellianism
scale (rs ranging from .23 to .40, ps < .01). Therefore,
Hypotheses 3 and 4 were supported.
Hypothesis 5 stated that the IFB scale would not be related to
gender. Accordingly, neither the IFB scale nor its subscales were
correlated with gender, and Hypothesis 5 was supported. Hypoth-
esis 6 stated that there would be no relationship between faking
and ethnicity. We found the predicted pattern of relationships
between ethnicity and Image Protection and Ingratiation. How-
ever, White candidates tend to engage less often in Slight and
Extensive Image Creation, whereas Asian candidates report more
of these behaviors. Thus, Hypothesis 6 was partially supported.
Hypothesis 7 stated that the IFB scale would be insignificantly
related to GPA. Neither the IFB scale nor its subscales were
correlated with GPA. Thus, Hypothesis 7 was supported. Finally,
Hypothesis 8 stated that the IFB scale would not be related to
self-deception. We found the predicted pattern of relationships
between self-deception and the IFB scale and its subscales except
for Ingratiation. Therefore, Hypothesis 8 was partially supported.
In addition to the reported internal consistency reliability, test–
retest reliability was also assessed. Retest reliability shows the
extent to which scores on a measure can be generalized over
different occasions and the degree to which the latent construct
determines observed scores over time (Nunnally & Bernstein,
1994). We administered the IFB scale twice at 1-month intervals to
70 undergraduate students (44% women, 56% men) with an aver-
age of four recent job interviews. Test–retest reliabilities for the
IFB total score, Slight Image Creation, Extensive Image Creation,
Image Protection, and Ingratiation were .87, .82, .84, .71, and .83,
respectively. These results confirm the stability of responses to the
instrument.
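The retest computation itself is straightforward: scores from the two administrations are matched by the anonymous participant code and correlated. A small sketch follows; the file and column names are hypothetical.

```python
# Sketch of the test-retest computation: correlate scale scores from the two
# administrations, given one month apart.
import pandas as pd

time1 = pd.read_csv("ifb_time1.csv")  # hypothetical files keyed by the same
time2 = pd.read_csv("ifb_time2.csv")  # anonymous participant code

merged = time1.merge(time2, on="participant_code", suffixes=("_t1", "_t2"))
for scale in ["ifb_total", "slight_ic", "extensive_ic", "image_protection", "ingratiation"]:
    r = merged[f"{scale}_t1"].corr(merged[f"{scale}_t2"])
    print(scale, round(r, 2))
```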
Study 5: The Effect of Candidate Faking Behaviors on
Interview Outcomes
The main purpose of this study was to provide a criterion-related
validity study of the IFB scale by examining whether interviewee
faking behaviors affect outcomes of actual employment inter-
views. Research on IM found a positive link between IM behaviors
(e.g., ingratiation, self-promotion) and interview outcomes such as
perceived applicant suitability (Stevens & Kristof, 1995), per-
ceived applicant fit and similarity (Kristof-Brown et al., 2002), and
perceived fit and hiring recommendations (Higgins & Judge,
2004). Therefore, we hypothesize that faking behaviors, as mea-
sured by the IFB scale, will correlate with getting another inter-
view or job offer.
Method
Participants and procedure. The data were collected from a
new sample of 85 undergraduate students (48% women) who were
in the job market for either permanent full-time or temporary
internship positions. The students were recruited to participate in
this study for extra credit in a college course. Participants were
asked to complete the IFB scale on the basis of their interview
within 24 hr after the interview. We also measured several control
variables at that time. Finally, students recorded the name of the
interviewing company and a password that they had to remember
in order to claim their extra credit and to indicate the outcome of
that interview at the end of the semester. Participation in this study was
voluntary and anonymous.
Measures. The measure of faking was the IFB scale. The
measure of interview outcome was scored 1 if participants were invited
for the next round of interviews with the company or received a
job offer and scored 0 otherwise. Several control variables were
also collected. Because applicant quality and interviewing skills
might affect the outcomes of the interview, we asked applicants to
indicate the number of previous employment interviews that they
had. Participants were asked to indicate their current cumulative
GPA as it was reported on their most recent official grade report.
Also, the round of the interview (first, second, etc.) was collected.
Finally, gender was coded as female = 1, male = 0.
Table 7
Convergent and Discriminant Validity Correlations of the IFB Scale and Measures of Related Constructs
Measure (and M/SD)  Alpha coefficient  Slight image creation  Extensive image creation  Image protection  Ingratiation  Faking
SDS (15.24/4.85)  .73  .29***  .18*  .26***  .23**  .29***
BIDR-7 (9.23/4.68)  .78  .27***  .14  .19*  .31****  .27***
BIDR-7_SD (1.35/1.91)  .70  .14  .05  .07  .16*  .12
BIDR-7-IM (7.89/3.68)  .72  .27***  .16*  .21**  .31****  .28***
Honesty scale (8.96/4.18)  .80  .35****  .36****  .31****  .27***  .39****
Trustworthiness scale (27.77/4.88)  .75  .21**  .21**  .25***  .24**  .27***
Interview Trustworthiness scale (10.17/4.00)  .78  .20**  .24**  .23**  .18*  .26***
Self-Monitoring scale (12.26/3.56)  .63  .21**  .05  .16*  .29***  .21**
Machiavellianism scale (10.19/5.15)  .84  .33****  .23**  .35****  .40****  .38****
Genderᵃ (0.39/0.49)    .04  .02  .15  .05  .07
GPA (3.26/0.35)    .02  .07  .07  .02  .05
White (n = 115)    .16*  .24**  .14  .02  .18*
Asian (n = 27)    .17*  .23**  .13  .03  .18*
Hispanic (n = 8)    .00  .08  .00  .07  .00
Black (n = 3)    .04  .08  .03  .07  .03
Note. N = 156. IFB = Interview Faking Behavior; SDS = Social Desirability Scale; BIDR-7 = Balanced Inventory of Desirable Responding-7; GPA = grade point average.
ᵃ 1 = female, 0 = male.
* p < .05. ** p < .01. *** p < .001. **** p < .0001.
Analysis and Results
The means, standard deviations, and correlations are shown in
Table 8. Because the outcome variable was dichotomous, hierar-
chical logistic regression analysis was used to test the hypothesis
(see Table 9). Variables were entered into the regression in two
steps (Step 1 = control variables [number of employment interviews,
interview round, GPA, and gender]; Step 2 = four types of
faking behaviors [Slight Image Creation, Extensive Image Creation,
Image Protection, and Ingratiation]). The Step 1 model
chi-square is significant, indicating that the model with only the
control variables fits significantly better than does a model containing
only the constant. For Step 2, the chi-square model improvement,
Δχ²(4, N = 85) = 13.36, p < .05, indicates that adding the faking
behaviors did improve the model. The same conclusion can be
drawn from two other indicators: pseudo R² (Aldrich & Nelson, 1984)
improved from 0.23 to 0.39, and Akaike's information criterion
decreased from 112 to 107, indicating a better model fit at Step 2
(Greene, 1990).
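A minimal sketch of this two-step analysis is given below (not the authors' code); it assumes the Study 5 variables sit in a CSV with the hypothetical column names shown, and that statsmodels and scipy are available.

```python
# Sketch of the hierarchical (two-step) logistic regression used in Study 5.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

d = pd.read_csv("study5.csv")  # hypothetical data file
controls = ["num_interviews", "interview_round", "gpa", "gender"]
faking = ["slight_ic", "extensive_ic", "image_protection", "ingratiation"]

step1 = sm.Logit(d["outcome"], sm.add_constant(d[controls])).fit(disp=False)
step2 = sm.Logit(d["outcome"], sm.add_constant(d[controls + faking])).fit(disp=False)

# Chi-square model improvement at Step 2: twice the gain in log-likelihood,
# with df equal to the number of added predictors.
lr = 2 * (step2.llf - step1.llf)
print("delta chi-square:", round(lr, 2), "p =", round(chi2.sf(lr, len(faking)), 3))
print("AIC, Step 1 vs. Step 2:", round(step1.aic, 1), round(step2.aic, 1))
print(step2.summary())  # per-predictor z statistics (Wald tests)
```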
The logistic regression analysis uses the Wald statistic (which is
conceptually equivalent to the t tests reported in ordinary least
squares regression; Greene, 1990) to test the significance of individual
variables in the model. This test indicates that Extensive Image
Creation was a significant positive predictor of the interview
outcome (Wald = 8.81, p < .01), and Image Protection was a
negative predictor that approached significance (Wald = 3.29, p =
.069). Therefore, our hypothesis was partially supported. Thus, the
results indicate that the probability of having a positive interview
outcome increases when an interviewee engages in Extensive
Image Creation or does not engage in Image Protection.
Further analyses illustrate the practical impact of faking in the
interview. If an interviewee had had two previous employment interviews,
was in a first-round interview, and did not engage in either Extensive Image
Creation or Image Protection, then the probability of receiving a
next interview or job offer was 0.31. However, if the interviewee
engaged in Extensive Image Creation to a little extent, then the
probability of a positive interview outcome rose to 0.77. In contrast,
if the interviewee engaged in Image Protection to a little extent,
then the probability of a positive interview outcome fell to 0.11.
The results are similar to the findings in the literature on influence
tactics showing that interviewee influence attempts have a signif-
icant impact on recruiters’ judgments (e.g., Gilmore & Ferris,
1989; Higgins & Judge, 2004; Kristof-Brown et al., 2002). At the
same time, the estimates of the probability values from the logistic
analysis should be considered with caution. They might be differ-
ent if other variables that impact interview outcomes were in-
cluded in an equation.
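For readers reproducing this kind of illustration, such probabilities follow from passing the linear predictor (log-odds) of the fitted logistic model through the inverse-logit transform. The sketch below uses hypothetical coefficients as placeholders, not the Table 9 estimates.

```python
# Worked illustration: converting a logistic regression linear predictor to a probability.
import math

def predicted_probability(intercept, coefs, values):
    logit = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients for [num_interviews, interview_round,
# extensive_image_creation, image_protection]:
b0, b = -2.0, [0.3, 0.5, 1.2, -1.1]

baseline = predicted_probability(b0, b, [2, 1, 1, 1])   # "to no extent" scored 1
more_eic = predicted_probability(b0, b, [2, 1, 2, 1])   # Extensive Image Creation "to a little extent"
print(round(baseline, 2), round(more_eic, 2))
```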
Study 6: The Use of the IFB Scale for Studying Methods
of Reducing Faking in the Employment Interviews
In this section, we report on an experimental study of two
factors that are important aspects of structured interviews and
likely to influence faking in the interview. On the basis of a review
of the literature on faking in personality tests and the literature on
deception, we proposed a model of faking during the employment
interview and developed 19 testable propositions for when people
would fake during the interview (Levashina & Campion, 2006). In
this study, we tested two propositions. This study also demon-
strated how the IFB scale might be used in future research.
Hypothesis 1: Candidates will engage more in faking when
answering situational questions than when answering past
behavioral questions during a structured interview.
Hypothesis 2: Candidates will engage more in faking when
there is no follow-up questioning during a structured inter-
view.
Method
Participants and interviewers. A new sample of 151 under-
graduate students (25% women) who were enrolled in a manage-
ment career course voluntarily participated in this study. One of
the requirements of this course was to have an interview for a
grade and to prepare for the postgraduation job search process. Six
interviewers, graduate students with either concentrations or past
work experience in human resource management, participated in
the interview process. All interviewers had experience in conduct-
ing interviews (on average 50 interviews).
Study design. A 2 × 2 between-subjects design with factors of question type (past behavioral vs. situational) and follow-up questioning (presence vs. absence) was used.
Table 8
Means, Standard Deviations, and Intercorrelations Among Study 5 Variables

Variable                              M (SD)          1      2      3      4      5      6      7      8      9
1. Number of employment interviews    3.49 (2.73)
2. Interview round                    1.21 (0.41)    .04
3. GPA                                3.41 (0.69)    .04    .07
4. Gender (a)                         0.48 (0.50)    .22*   .02    .08
5. Slight image creation              2.22 (0.83)    .18    .03    .09    .19    (0.93)
6. Extensive image creation           1.62 (0.74)    .08    .05    .12    .14    .71**  (0.96)
7. Image protection                   1.91 (0.77)    .04    .07    .08    .27*   .78**  .67**  (0.90)
8. Ingratiation                       2.63 (0.96)    .06    .08    .10    .26*   .62**  .48**  .62**  (0.95)
9. Faking                             2.09 (0.71)    .10    .07    .12    .26*   .90**  .82**  .89**  .82**  (0.98)
10. Interview outcome                 0.51 (0.50)    .31**  .28**  .04    .13    .11    .25*   .01    .05    .12

Note. N = 85. Reliabilities are on the diagonal in parentheses. GPA = grade point average.
(a) 1 = female, 0 = male.
* p < .05. ** p < .01.
Each participant was
randomly assigned to the type of interview. All interviews con-
sisted of 14 questions. Four questions were the same in each
interview (e.g., “Why did you choose to major in ___?”), and there
were either 10 past behavioral or 10 situational questions, depend-
ing on the condition. The 10 questions were designed to assess 10
competencies deemed essential to most jobs that undergraduate
students might be interviewing for in the future (e.g., oral com-
munication, ability to influence and persuade, leadership). Two
parallel questions were written (one past behavioral and one situ-
ational) to address each competency. For example, the past behav-
ioral question asked the participants to “Describe a time when you
had a good idea, but there was opposition to it. How did you
persuade others to ‘see things your way?’” The parallel situational
question was “Suppose you have a great idea, but there is oppo-
sition to it among your colleagues. What would you do to persuade
your colleagues to ‘see things your way?’” In interviews with
follow-up questioning, interviewers asked at least one of the fol-
lowing standardized probing questions after each interview ques-
tion: “Why?”, “Could you please elaborate?”, “Please explain in
more detail”, “What do you mean?”, “Tell me why you did that?”
In interviews with no follow-up questioning, all prompting was
prohibited.
Procedure and measures. Before any of the interviews began,
the six interviewers participated in a 1-hr training session. The
interviewers were instructed how to administer the different types
of interviews. Interviewers were kept blind to the hypotheses of the
study.
To recruit participants, the author made an announcement in the
class asking students to voluntarily participate in a study by
completing an instrument (the IFB scale) after their mock inter-
views. Students were told that the purpose of the study was to
examine different behaviors that could be used to impress inter-
viewers and to increase scores on the interview. It is important to
emphasize that at no point were participants instructed to fake
mock interviews, but rather they were told to use this opportunity
to better prepare themselves for their real future employment
interviews. Also, it was explained to participants that their partic-
ipation would be completely anonymous, no names would be
recorded, and neither the researcher nor the course instructor
would be able to link their responses on the instrument back to
them.
All interviews were conducted in an interview room in one of
the university career centers during a 3-month period. Students
signed up for their mock interviews according to their availability.
Different interview types (past behavioral with follow-up, situa-
tional with follow-up, past behavioral with no follow-up, and
situational with no follow-up) were assigned in a way that all four
types were conducted each week for 1 day each, and all six
interviewers conducted approximately the same number of each
interview. Students were unaware of the type of interview they
would have or about the different interviews. One interviewer
conducted each interview. Within each interview, four nonbehav-
ioral questions were asked first and then the 10 past behavioral or
situational questions were asked next, with the order of the items
randomized across participants. After the interview, interviewees
returned to the orientation room to complete the IFB scale, seal it
in a provided envelope, and put it in a special box. This box was
emptied by Julia Levashina after each day. Cumulative GPA and
gender were measured as control variables. Two subscales (Fit
Enhancing and Interviewer or Organization Enhancing) were re-
moved from the IFB scale because the mock interview was not job
or organization specific.
Results
All types of faking behaviors were regressed on gender and GPA. Both variables were found to be insignificant predictors of faking.
Table 9
Results of a Logistic Regression of Interview Outcome (a)

                                       Step 1                           Step 2
Variable                               B (SE)         Wald     p        B (SE)         Wald     p
Constant                               2.17 (1.44)    2.26    .133      3.34 (1.83)    3.32    .068
Number of employment interviews        0.26 (0.12)    5.67    .017      0.33 (0.14)    5.69    .017
Interview round                        1.49 (0.64)    5.45    .020      1.63 (0.68)    5.77    .016
GPA                                    0.09 (0.33)    0.08    .783      0.02 (0.34)    0.01    .951
Gender (b)                             0.32 (0.48)    0.44    .505      0.50 (0.56)    0.78    .381
Slight image creation                                                   0.19 (0.59)    0.10    .747
Extensive image creation                                                2.08 (0.70)    8.81    .003
Image protection                                                        1.22 (0.67)    3.29    .069
Ingratiation                                                            0.04 (0.37)    0.01    .912
−2 log likelihood                      101.72                           88.37
AIC                                    111.72                           106.37
Model χ²                               16.10**                          29.46***
Pseudo R²                              0.23                             0.39

Note. N = 85. GPA = grade point average; AIC = Akaike's information criterion. Values in the last four rows describe the model fit at Step 1 or Step 2.
(a) Interview outcome was coded as 1 if an interviewee was invited for the next round of interviews with the company or received a job offer, and coded as 0 otherwise.
(b) 1 = female, 0 = male.
** p < .01. *** p < .001.
Thus, they were not included as control variables in the subsequent analysis. To test the major hypotheses, data on faking behaviors were analyzed by a two-way analysis of variance (ANOVA) with the following between-subjects factors: question type (past behavioral vs. situational) and follow-up questioning (follow-up vs. no follow-up). An unbalanced two-way ANOVA was used because of the unequal number of observations per condition. The means and standard deviations of reported faking behaviors across the different conditions are presented in Table 10. The two-way ANOVA revealed statistically significant differences in the extent to which candidates engaged in faking behaviors among the four types of interview (all F ratios were significant at p < .01), and the interaction between question type and follow-up questioning was not significant (see Table 10). Therefore, the main effects could be interpreted.
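As an illustration of this kind of analysis, the following minimal sketch uses hypothetical data and variable names (ours, not the study's) to fit an unbalanced two-way ANOVA; Type II sums of squares are one common way to accommodate unequal cell sizes.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical faking scores; column names and values are illustrative only.
df = pd.DataFrame({
    "faking":        [1.4, 1.8, 1.7, 2.0, 1.3, 1.9, 1.6, 2.1, 1.5, 2.2],
    "question_type": ["PB", "PB", "SIT", "SIT", "PB", "SIT", "PB", "SIT", "PB", "SIT"],
    "follow_up":     ["no", "yes", "no", "yes", "no", "yes", "yes", "no", "no", "yes"],
})

# Fit the 2 x 2 factorial model and request Type II sums of squares,
# which remain interpretable when cell sizes are unequal.
model = smf.ols("faking ~ C(question_type) * C(follow_up)", data=df).fit()
print(anova_lm(model, typ=2))
```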
Hypothesis 1 stated that applicants would engage more often in
all types of faking behaviors when answering situational questions
rather than past behavioral questions. The main effect of question
type was significant, indicating that participants engaged more in
total faking behaviors, Slight Image Creation, and Ingratiation
when answering situational questions than past behavioral ques-
tions (see Table 10). However, the effect of question type was
insignificant for Extensive Image Creation and Image Protection.
Thus, Hypothesis 1 was mostly supported.
Hypothesis 2 stated that applicants would engage more often in all types of faking when there was no follow-up questioning during structured interviews. As shown in Table 10, the main effect of follow-up questioning was significant for all types of faking behaviors (ps < .01). Follow-up questioning did not decrease faking as hypothesized but instead significantly increased all types of faking behaviors (see Table 10). Therefore, Hypothesis 2 was not supported. However, this strong opposite finding is important both theoretically and practically.
Although the interaction effect between question type and
follow-up questioning was not significant, we performed multiple
comparisons of means of all types of faking behaviors across the
four interview conditions. A Tukey-Kramer test was used because
of unequal group sizes. The means and standard deviations are
reported in Table 10. This test revealed that the means of all types
of faking behaviors in two conditions (past behavioral interviews
with follow-up questioning and situational interviews with no
follow-up questioning) were not significantly different from each
other, whereas the means of all types of faking behaviors in
situational interviews with follow-up questioning were signifi-
cantly greater (all t ratios were significant at ps < .01) than the
means for past behavioral interviews with no follow-up question-
ing.
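A minimal sketch of such pairwise comparisons is shown below. The data and condition labels are hypothetical; pairwise_tukeyhsd in statsmodels handles unequal group sizes, which corresponds to the Tukey-Kramer case used here.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical faking scores in the four interview conditions (labels are ours).
df = pd.DataFrame({
    "faking":    [1.4, 1.5, 1.3, 1.8, 1.9, 1.6, 1.7, 1.5, 2.0, 2.1, 2.2, 1.9],
    "condition": ["PB/no follow-up"] * 3 + ["PB/follow-up"] * 2 +
                 ["SIT/no follow-up"] * 3 + ["SIT/follow-up"] * 4,
})

# Pairwise comparisons of condition means at alpha = .05.
result = pairwise_tukeyhsd(endog=df["faking"], groups=df["condition"], alpha=0.05)
print(result.summary())
```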
Discussion
Main findings. This study focused on the issue of faking in the
employment interview. Following established psychometric pro-
cedures for scale development (e.g., Nunnally & Bernstein, 1994),
six separate studies were conducted. Study 1 involved item gen-
eration. On the basis of the three sources, we identified and
proposed a taxonomy of faking behaviors. The identified faking
behaviors were converted into the IFB scale. Study 2 involved
evaluation of the proposed taxonomy of faking behaviors and item
reduction through EFA of data collected from undergraduate job candidates (N = 260).
Table 10
Means and Standard Deviations of Faking Behaviors Across Different Conditions

                              Question type                                   Follow-up questioning
Faking behavior           Past behavioral    Situational     F(1, 147)    Follow-up      No follow-up    F(1, 147)
                            (n = 76)          (n = 75)                     (n = 81)        (n = 70)
                            M     SD          M     SD                     M     SD        M     SD
Total                      1.62  0.60        1.83  0.60       4.35*        1.91  0.64      1.52  0.50     15.91**
Slight image creation      1.69  0.63        2.01  0.71       7.16**       2.05  0.71      1.62  0.58     15.18**
Extensive image creation   1.36  0.57        1.41  0.55       0.15         1.52  0.63      1.22  0.41     10.97**
Image protection           1.69  0.70        1.86  0.73       1.76         1.95  0.77      1.58  0.60     10.14**
Ingratiation               1.73  0.91        2.08  0.86       5.01*        2.12  0.91      1.66  0.84      9.42**

                                   Past behavioral                      Situational
Faking behavior            No follow-up      Follow-up         No follow-up      Follow-up       Interaction
                             (n = 38)         (n = 38)           (n = 32)         (n = 43)        F(1, 147)
                             M     SD         M     SD           M     SD         M     SD
Total                       1.41  0.39       1.83  0.70         1.65  0.59       1.98  0.57        0.20
Slight image creation       1.51  0.51       1.89  0.68         1.75  0.63       2.19  0.71        0.13
Extensive image creation    1.17  0.27       1.54  0.71         1.28  0.53       1.50  0.56        0.40
Image protection            1.46  0.47       1.92  0.81         1.71  0.71       1.97  0.74        0.75
Ingratiation                1.50  0.76       1.97  0.98         1.85  0.90       2.25  0.80        0.06

Note. N = 151.
* p < .05. ** p < .01.
Study 3 described a CFA of data collected from a new sample (N = 589) to confirm the factor structure. Study 4 provided preliminary convergent and discriminant validity evidence for the IFB scale using a separate sample (N = 156) of undergraduate job candidates. Also, test–retest reliability was assessed on a separate sample (n = 70) of job candidates. Study 5 described a criterion-related study using a new sample (N = 85) to show that the IFB scale relates to actual interview outcomes. Finally, Study 6 described a study using the new measure for studying methods of reducing faking during structured interviews (N = 151).
The first four studies showed that the faking construct was repre-
sented by four factors (Slight Image Creation, Extensive Image
Creation, Image Protection, and Ingratiation) and 11 subfactors
(Embellishing, Tailoring, Fit Enhancing, Constructing, Inventing,
Borrowing, Masking, Distancing, Omitting, Opinion Conforming,
and Interviewer or Organization Enhancing). This structure of
faking is useful for conceptual, empirical, and practical
reasons. Conceptually, when candidates engage in Slight Image
Creation, they exaggerate, but they are still close to the truth.
When candidates engage in Extensive Image Creation, they invent
information (e.g., they lie). When candidates engage in Image
Protection, they intentionally omit job-related information. Fi-
nally, when job candidates insincerely ingratiate, they are trying to
make interviewers like them and give them a better score on the
interview regardless of their performance. Empirically, the results
of the EFA and CFA indicate that faking is a multidimensional
construct. It is likely that different variables would predict the
likelihood of engaging in different faking behaviors. For example,
if candidates do not know the job requirements, then they would
not be able to tailor but would be able to exaggerate. In addition,
results from the criterion-related study indicate that different fak-
ing factors impact the interview outcome in different ways. Prac-
tically, there are statistically significant differences in the means of
job candidate faking behavior. Undergraduate job candidates use
significantly more ingratiation, followed by slight image creation,
image protection, and extensive image creation (see Table 11).
Thus, this structure gives practitioners useful information about
different types of faking behavior and the likelihood of their
occurrence.
Study 5 showed that faking behaviors affect interview out-
comes. Particularly, Extensive Image Creation increases the prob-
ability of getting another interview or job offer, whereas Image
Protection decreases the probability. Study 6 showed that past
behavioral interviews were more resilient to faking compared with
situational interviews. This is practically important because even
though applicants may want to fake, there may be ways to inter-
view that constrain faking. One of the most interesting findings
was that follow-up questioning increased faking in both situational
and past behavioral structured interviews. We hypothesized the
opposite effect by assuming that probing would be a response
verification mechanism that would inhibit faking. Informal de-
briefing with participants revealed that follow-up questioning was
perceived not as response verification but rather as a cue signaling
what types of answers were important and critical. Finally, this
study showed that past behavioral interviews with no follow-up
questioning were the most resilient to faking, whereas situational
interviews with follow-up questions were the least resilient to
faking. These studies show that the scale demonstrated content
validity, consistent factor structure, reliabilities above the recom-
mended level for new scales, convergent and discriminant validity,
criterion-related validity, and initial empirical evidence of inter-
view methods of reducing faking.
Base rate of faking behaviors. Table 11 provides the percent-
ages, means, and standard deviations of undergraduate candidates’
use of faking behaviors. The table shows data obtained in Studies
3, 5, and 6. In Study 3, undergraduate candidates were asked to
report faking behaviors on the basis of all employment interviews they have had,
Table 11
Base Rate of Faking Behaviors Across Three Studies

                                            Percentage of candidates using          Means (SD) of job candidates'
                                            faking behaviors                        faking behavior use
Type of faking behavior                     Study 3    Study 5    Study 6           Study 3       Study 5       Study 6
                                            (n = 589)  (n = 85)   (n = 151)         (n = 589)     (n = 85)      (n = 151)
Slight image creation                       99.49      95.29      85.43             2.49 (0.74)   2.22 (0.83)   1.85 (0.69)
  Embellishing                              96.10      85.88      72.19             2.39 (0.86)   2.05 (0.91)   1.65 (0.67)
  Tailoring                                 96.60      91.76      72.85             2.56 (0.84)   2.29 (0.94)   2.05 (0.93)
  Fit enhancing                             94.57      90.59                        2.52 (0.89)   2.30 (0.92)
Extensive image creation                    91.85      80.00      64.9              1.68 (0.72)   1.62 (0.74)   1.38 (0.56)
  Constructing                              71.31      63.51      51.66             1.71 (0.85)   1.66 (0.86)   1.42 (0.71)
  Inventing                                 88.12      74.71      58.28             1.82 (0.76)   1.81 (0.81)   1.43 (0.58)
  Borrowing                                 42.95      34.12      27.81             1.50 (0.81)   1.38 (0.76)   1.30 (0.65)
Image protection                            95.76      85.88      86.75             2.09 (0.74)   1.91 (0.77)   1.78 (0.72)
  Omitting                                  85.40      74.12      78.81             2.28 (0.93)   2.06 (1.00)   2.16 (0.96)
  Masking                                   84.21      82.35      59.60             2.01 (0.84)   1.87 (0.84)   1.58 (0.77)
  Distancing                                75.21      58.82      60.00             1.99 (0.91)   1.78 (0.93)   1.59 (0.79)
Ingratiation                                98.64      95.29      77.48             2.76 (0.87)   2.63 (0.96)   1.90 (0.90)
  Opinion conforming                        96.26      95.29      77.48             2.56 (0.91)   2.52 (0.95)   1.90 (0.90)
  Interviewer or organization enhancing     96.60      91.76                        2.97 (1.02)   2.73 (1.08)
Total                                       99.49      98.82      93.38             2.25 (0.63)   2.09 (0.71)   1.73 (0.61)

Note. Data for the percentage of candidates using Fit Enhancing are not available for Study 6 because of the nature of the mock interview.
whereas in Studies 5 and 6, undergraduate candi-
dates were asked to report faking behaviors after an actual inter-
view and mock interview, respectively. Results of all studies are
consistent and indicate that undergraduate job candidates engage
more in Ingratiation and Slight Image Creation and engage less in
Image Protection and Extensive Image Creation. On the basis of
the results of Study 5, a dependent-sample multivariate t test (similar to a T² test for equality of treatments in a repeated measures design) indicates that there are significant differences in the mean use of these types of faking behaviors (p < .001).
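A minimal sketch of such a test on simulated scores (not the study data) is shown below: it applies a one-sample Hotelling's T² test to within-person difference scores between adjacent faking factors, under the null hypothesis that all four factor means are equal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 85
# Simulated factor scores: Slight IC, Extensive IC, Image Protection, Ingratiation.
scores = rng.normal(loc=[2.2, 1.6, 1.9, 2.6], scale=0.8, size=(n, 4))

# Difference scores between adjacent factors (p = 3); equal factor means
# imply a zero mean-difference vector.
diffs = scores[:, 1:] - scores[:, :-1]
mean_d = diffs.mean(axis=0)
cov_d = np.cov(diffs, rowvar=False)
p = diffs.shape[1]

t_squared = n * mean_d @ np.linalg.solve(cov_d, mean_d)
f_stat = (n - p) / (p * (n - 1)) * t_squared          # F(p, n - p) under the null
p_value = stats.f.sf(f_stat, p, n - p)
print(round(f_stat, 2), format(p_value, ".4f"))
```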
Base rates in multiple studies show that over 90% of undergrad-
uate job candidates fake during employment interviews; however,
fewer candidates engage in faking behaviors that are semantically
closer to lying, ranging from 28% to 75%. On the basis of the
results of Study 5, 95% of undergraduate candidates engage in
Slight Image Creation and Ingratiation during their actual employ-
ment interviews. They use all types of faking that comprise these
two factors (e.g., 86% of undergraduate job candidates embellish,
91% engage in fit enhancing). However, fewer candidates engage
in Extensive Image Creation (80%) or Image Protection (86%)
during actual employment interviews, and they tend to be more
selective in the use of different types of faking that comprise these
two factors (e.g., 64% of undergraduate candidates engage in
Constructing, 75% engage in Inventing, and 34% engage in Bor-
rowing). Also, undergraduate job candidates who engage in Extensive Image Creation tend to engage in all other types of faking as well, so that the impact of Slight Image Creation is overshadowed by these extreme cases.
At the same time, there are some differences in the base rates of
faking behaviors across different studies. The percentages and
means of faking in Study 6 are much lower than in Study 3 and
Study 5. This could be due to at least two reasons (Levashina &
Campion, 2006). First, the structured interviews that were used in
Study 6 may provide less opportunity for job candidates to engage
in faking behaviors. Second, motivation to engage in faking may
be lower in mock employment interviews (see Study 6) than in
actual employment interviews (see Study 5).
Theoretical and practical implications. There are theoretical
as well as practical implications of this research. First, this is the
first study that directly investigated faking during employment
interviews. A number of studies have been devoted to the issue of
detection of deception in different contexts (e.g., interviews, court
testimonies, interpersonal interactions). The main finding of this
research is that humans cannot detect deception accurately (e.g.,
DePaulo, Stone, & Lassiter, 1985). Therefore, self-report measures
of faking behaviors are needed to understand, predict, and prevent
faking during employment interviews. Second, the IFB scale pro-
vides a conceptually useful framework for understanding factors of
interview behavior. The IFB scale was developed to help improve the selection process, not to serve as a selection device. Third,
this study examined two assumptions of valid interview questions.
Motowidlo (1999) argued that the validity of situational questions
rests on the assumption that what people say they will do in
hypothetical situations accurately represents their true intentions
for future actions in such situations, and the validity of past
behavior questions rests on the assumption that applicants truth-
fully describe their past behaviors, and they are likely to behave
the same in the future. However, applicants might be inclined to
deceive recruiters by telling them what they think will create a
favorable impression. When candidates do not state their true
intentions or do not truthfully describe their past behaviors, the
assumptions are not met, and the validity of both interview types
suffers. However, past research has failed to examine these as-
sumptions. This study demonstrates that past behavioral interviews
may be more resilient to faking. The IFB scale could be used to
improve the theory of interviews by assessing the likelihood of
faking as a function of other components of interview structure.
Fourth, this study investigates the role of follow-up questioning in
promoting faking. We could not locate any prior study that investigated
follow-up questioning in interviews. However, it has been pro-
posed that follow-up questions could be used to get a specific and
detailed answer (Motowidlo et al., 1992), keep candidates on track
(Janz, 1989), or test hypotheses about the candidate (Drake, 1982).
In this study, standardized follow-up questions (e.g., “Could you
please elaborate?”) were perceived by interviewees as cues signal-
ing that the requested information was important for the inter-
viewer and prompting more detailed answers that encouraged
respondents to fake. Fifth, the results of this study indicate that
undergraduate candidates engage in different types of faking dur-
ing their employment interviews, and they fake to different ex-
tents. Finally, we hope that the results of this article are not
perceived as an endorsement of faking behaviors and do not
suggest to students that faking is practically necessary to gain a job
offer. Candidates should answer interview questions in a way that
will help them get a job offer that fits their true personality and
credentials. For example, they might use self-promotion tactics
(Stevens & Kristof, 1995) by presenting their best experiences, but
they should not outright fake or lie.
Limitations
There are two limitations caused by the samples used. First, the
taxonomy of faking behaviors was studied in samples of job
candidates with limited work experience. Some types of faking
behaviors may be more common for candidates with more work
experience. Second, the fact that college students served as can-
didates in Study 6 may have decreased the generalizability of the
results. However, students were motivated to do their best during
the interview because of not only extrinsic motivation (a better
grade) but also intrinsic motivation (to prepare themselves for
future employment interviews). Also, the nature of the mock
interview could have affected the behaviors of participants. How-
ever, the experimental design that we used allowed us to manip-
ulate the interview conditions without imposing any artificial
restrictions on the behaviors of participants (e.g., asking them to
fake an interview). At the same time, participants may have still
felt some encouragement to impression manage in the interviews
despite our instructions. Nevertheless, we believe that our results
provide some useful information about the behaviors of job appli-
cants.
Future Research
There are many areas for future research. First, future research
may refine the scale. For example, data from different samples
could be collected to cross-validate the factor structure of the scale.
Also, future research could examine the relationships between the
IFB scale and additional measures in the nomological network.
This may include other measures of faking and measures of
applicant reactions (e.g., perceptions of justice). Second, future
research should examine the relationships between identified types
of faking and different types of IM to see whether, for instance,
Slight Image Creation is the typical case of the self-promotion
tactic (Stevens & Kristof, 1995). Third, future research should
examine further the criterion-related validity of the IFB scale and
assess whether faking is related to job performance if faking
candidates are hired. Fourth, future research should explore other
components of interview structure to identify those that encourage
less faking (e.g., number of interviewers, interview length, rating
scales). Fifth, more studies on follow-up questioning are needed.
There is some evidence that candidates as well as interviewers may
prefer follow-up questioning (Dipboye, 1994). Thus, future re-
search could investigate whether there are types of follow-up
questioning that could be used to clarify answers and seek infor-
mation without prompting faking. Sixth, future research could
investigate whether interviewers are able to detect different faking
behaviors that candidates use during an interview. For example,
interviewers’ perceptions of whether faking behaviors appeared to
occur could be compared with behaviors that interviewees self-
reported on the IFB. In the end, it is hoped that this study and the
availability of the IFB scale will encourage more research on this
long-neglected topic.
References
Aldrich, J. H., & Nelson, F. D. (1984). Linear probability, logit, and probit
models. Beverly Hills, CA: Sage.
Allsopp, J., Eysenck, H. J., & Eysenck, S. B. (1991). Machiavellianism as
a component in psychoticism and extraversion. Personality and Individ-
ual Differences, 12, 29–41.
Arbuckle, J. L. (2003). AMOS 5.0 user’s guide. Chicago: Smallwaters.
Bass, B. M., Cascio, W. F., & O’Connor, E. J. (1974). Magnitude estima-
tions of frequency and amount. Journal of Applied Psychology, 59,
313–320.
Baumeister, R. F. (1982). A self-presentational view of social phenomena.
Psychological Bulletin, 91, 3–26.
Baumeister, R. F. (1989). Motives and costs of self-presentation in orga-
nizations. In R. A. Giacalone & P. Rosenfeld (Eds.), Impression man-
agement in the organization (pp. 57–73). Hillsdale, NJ: Erlbaum.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psy-
chological Bulletin, 107, 238 –246.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of
fit in the analysis of covariance structures. Psychological Bulletin, 88,
588–606.
Braithwaite, V. A., & Scott, W. A. (1991). Values. In J. P. Robinson, P. R.
Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social
psychological attitudes (pp. 661–753). San Diego, CA: Academic Press.
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate
Behavioral Research, 1, 254 –276.
Christie, R., & Geis, F. L. (Eds.). (1970). Studies in Machiavellianism.
New York: Academic Press.
Cizek, G. J. (1999). Cheating on tests: How to do it, detect it, and prevent
it. Mahwah, NJ: Erlbaum.
Comrey, A. L., & Backer, T. E. (1975). Detection of faking on the Comrey
Personality Scales. Multivariate Behavioral Research, 10, 311–319.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological
tests. Psychological Bulletin, 52, 281–302.
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability
independent of psychopathology. Journal of Consulting Psychology, 24,
349 –354.
DePaulo, B. M., Epstein, J. A., & Wyer, M. M. (1993). Sex differences in
lying: How women and men deal with the dilemma of deceit. In M.
Lewis & C. Saarni (Eds.), Lying and deception in everyday life (pp.
126 –147). New York: Guilford Press.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein,
J. A. (1996). Lying in everyday life. Journal of Personality and Social
Psychology, 70, 979 –995.
DePaulo, B. M., Stone, J. I., & Lassiter, G. D. (1985). Deceiving and
detecting deceit. In B. R. Schlenker (Ed.), The self and social life (pp.
323–370). New York: McGraw-Hill.
Dipboye, R. L. (1994). Structured and unstructured selection interviews:
Beyond the job-fit model. In G. R. Ferris (Ed.), Research in personnel
and human resources management (Vol. 12, pp. 79 –123). Greenwich,
CT: JAI Press.
DiStefano, C. (2002). The impact of categorization with confirmatory
factor analysis. Structural Equation Modeling, 9, 327–346.
Dolan, C. V. (1994). Factor analysis of variables with 2, 3, 5, and 7
response categories: A comparison of categorical variable estimators
using simulated data. British Journal of Mathematical and Statistical
Psychology, 47, 309 –326.
Drake, J. D. (1982). Interviewing for managers: A complete guide to
employment interviewing. New York: AMACOM.
Drake, J. D. (1997). The perfect interview: How to get the job you really
want. New York: AMACOM.
Edwards, A. L. (1957). The social desirability variable in personality
assessment and research. Ft. Worth, TX: Dryden Press.
Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the
influence of social desirability on personality factor structure. Journal of
Applied Psychology, 86, 122–133.
Ellis, A. P. J., West, B. J., Ryan, A. M., & DeShon, R. P. (2002). The use
of impression management tactics in structured interviews: A function of
question type? Journal of Applied Psychology, 87, 1200 –1208.
Fletcher, C. (1989). Impression management in the selection interview. In
R. A. Giacalone & P. Rosenfeld (Eds.), Impression management in the
organization (pp. 269 –281). Hillsdale, NJ: Erlbaum.
Fletcher, C. (1990). The relationships between candidate personality, self-
presentation strategies, and interviewer assessments in selection inter-
views: An empirical study. Human Relations, 43, 739 –749.
Furnham, A. (1986). Response bias, social desirability, and dissimulation.
Personality and Individual Differences, 7, 385– 400.
Furnham, A. (1990). Faking personality questionnaires: Fabricating differ-
ent profiles for different purposes. Current Psychology: Research &
Reviews, 9, 46 –55.
Giacalone, R. A., & Rosenfeld, P. (Eds.). (1989). Impression management
in the organization. Hillsdale, NJ: Erlbaum.
Gilmore, D. C., & Ferris, G. R. (1989). The effects of applicant impression
management tactics on interviewer judgments. Journal of Management,
15, 557–564.
Gilmore, D. C., Stevens, C. K., Harrell-Cook, G., & Ferris, G. R. (1999).
Impression management tactics. In R. W. Eder & M. M. Harris (Eds.),
The employment interview handbook (pp. 321–336). Thousand Oaks,
CA: Sage.
Greene, W. H. (1990). Econometric analysis. New York: Macmillan.
Higgins, C. A., & Judge, T. A. (2004). The effect of applicant influence
tactics on recruiter perceptions of fit and hiring recommendations: A
field study. Journal of Applied Psychology, 89, 622– 632.
Holden, R. R., & Fekken, G. C. (1989). Three common social desirability
scales: Friends, acquaintances, or strangers? Journal of Research in
Personality, 23, 180 –191.
Hopper, R., & Bell, R. A. (1984). Broadening the deception construct.
Quarterly Journal of Speech, 70, 288 –302.
Isaksen, S. G., & Davis, G. A. (1979). Faking high and low creativity
scores on the Adjective Check List. Journal of Creative Behavior, 13,
139 –145.
Janz, T. (1989). The patterned behavior description interview: The best
prophet of future is the past. In R. W. Eder & G. R. Ferris (Eds.), The
employment interview: Theory, research, and practice (pp. 158 –168).
Newbury Park, CA: Sage.
Kaiser, H. F. (1960). The application of electronic computers to factor
analysis. Educational and Psychological Measurement, 20, 141–151.
Kashy, D. A., & DePaulo, B. M. (1996). Who lies? Journal of Personality
and Social Psychology, 70, 1037–1051.
Kelloway, E. K. (1998). Using LISREL for structural equation modeling:
A researcher’s guide. Thousand Oaks, CA: Sage.
Kipnis, D., Schmidt, S. M., & Wilkinson, I. (1980). Intraorganizational
influence tactics: Explorations in getting one’s way. Journal of Applied
Psychology, 65, 440–452.
Kline, R. B. (1998). Principles and practice of structural equation mod-
eling. New York: Guilford Press.
Knapp, M. L., & Comadena, M. E. (1979). Telling it like it isn’t: A review
of theory and research on deceptive communications. Human Commu-
nication Research, 5, 270 –285.
Kristof-Brown, A., Barrick, M. R., & Franke, M. (2002). Applicant im-
pression management: Dispositional influences and consequences for
recruiter perceptions of fit and similarity. Journal of Management, 28,
27– 46.
Kumar, K., & Beyerlein, M. (1991). Construction and validation of an
instrument for measuring ingratiatory behaviors in organizational set-
tings. Journal of Applied Psychology, 76, 619–627.
Leary, M. R., & Kowalski, R. M. (1990). Impression management: A
literature review and two-component model. Psychological Bulletin,
107, 34–47.
Levashina, J., & Campion, M. A. (2006). A model of faking likelihood in
the employment interview. International Journal of Selection and As-
sessment, 14, 299 –316.
Levin, R. A., & Zickar, M. J. (2002). Investigating self-presentation, lies,
and bullshit: Understanding faking and its effects on selection decisions
using theory, field research, and simulation. In J. M. Brett & F. Drasgow
(Eds.), The psychology of work: Theoretically based empirical research
(pp. 253–276). Mahwah, NJ: Erlbaum.
Marsh, H. W., Balla, J. R., & Hau, K. (1996). An evaluation of incremental
fit indices: A clarification of mathematical and empirical properties. In
G. A. Marcoulides & R. E. Schumacker (Eds.), Advanced structural
equation modeling: Issues and techniques (pp. 315–353). Mahwah, NJ:
Erlbaum.
Maruyama, G. M. (1998). Basics of structural equation modeling. Thou-
sand Oaks, CA: Sage.
McFarland, L. A., Ryan, A. M., & Kriska, S. D. (2003). Impression
management use and effectiveness across assessment methods. Journal
of Management, 29, 641– 661.
Medley, H. A. (1993). Sweaty palms: The neglected art of being inter-
viewed. Berkeley, CA: Ten Speed.
Miller, G. R., DeTurck, M. A., & Kalbfleisch, P. J. (1983). Self-
monitoring, rehearsal, and deceptive communication. Human Commu-
nication Research, 10, 97–117.
Morrison, E. W., & Bies, R. J. (1991). Impression management in the
feedback-seeking process: A literature review and research agenda.
Academy of Management Review, 16, 522–541.
Motowidlo, S. J. (1999). Asking about past behavior versus hypothetical
behavior. In R. W. Eder & M. M. Harris (Eds.), The employment
interview handbook (pp. 179 –190). Thousand Oaks, CA: Sage.
Motowidlo, S. J., Carter, G. W., Dunnette, M. D., Tippins, N., Werner, S.,
Burnett, J. R., & Vaughan, M. J. (1992). Studies of the structured
behavioral interview. Journal of Applied Psychology, 77, 571–587.
Nunnally, J., & Bernstein, I. H. (1994). Psychometric theory. New York:
McGraw-Hill.
Ones, D. S., & Viswesvaran, C. (1998). The effects of social desirability
and faking on personality and integrity assessment for personnel selec-
tion. Human Performance, 11, 245–269.
Palmer, D. K., Campion, M. A., & Green, P. C. (1999). Interviewing
training for both applicant and interviewer. In R. W. Eder & M. M.
Harris (Eds.), The employment interview handbook (pp. 337–353).
Thousand Oaks, CA: Sage.
Paulhus, D. L. (1984). Two-component models of socially desirable re-
sponding. Journal of Personality and Social Psychology, 46, 598–609.
Paulhus, D. L. (1991). Measurement and control of response bias. In J. P.
Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of
personality and social psychological attitudes (pp. 17–59). San Diego,
CA: Academic Press.
Riggio, R. E., Tucker, J., & Throckmorton, B. (1987). Social skills and
deception ability. Personality and Social Psychology Bulletin, 13, 568–577.
Rummel, R. J. (1970). Applied factor analysis. Evanston, IL: Northwestern
University Press.
Sackett, P. R., & Wanek, J. E. (1996). New developments in the use of
measures of honesty, integrity, conscientiousness, dependability, trust-
worthiness, and reliability for personnel selection. Personnel Psychol-
ogy, 49, 787– 829.
Schlenker, B. R. (1980). Impression management: The self-concept, social
identity, and interpersonal relations. Monterey, CA: Brooks/Cole.
Scott, W. A. (1965). Values and organizations: A study of fraternities and
sororities. Chicago: Rand McNally.
Sincoff, M. Z., & Goyer, R. S. (1984). Interviewing. New York: Macmil-
lan.
Snyder, M. (1974). Self-monitoring of expressive behaviors. Journal of
Personality and Social Psychology, 30, 526 –537.
Snyder, M., & Monson, T. C. (1975). Persons, situations, and the control
of social behavior. Journal of Personality and Social Psychology, 32,
637– 644.
Stark, S., Chernyshenko, O. S., Chan, K., Lee, W. C., & Drasgow, F.
(2001). Effects of the testing situation on item responding: Cause for
concern. Journal of Applied Psychology, 86, 943–953.
Stevens, C. K., & Kristof, A. L. (1995). Making the right impression: A
field study of applicant impression management during job interviews.
Journal of Applied Psychology, 80, 587– 606.
Tedeschi, J. T., & Melburg, V. (1984). Impression management and
influence in the organization. In S. Bacharach & E. J. Lawler (Eds.),
Research in the sociology of organization (Vol. 3, pp. 31–58). Green-
wich, CT: JAI Press.
Topp, K. L. (2001). Perceptions of organizational cultures and impression
management behaviors: An investigation of the relationship. Disserta-
tion Abstracts International, 62(05), 2526B. (UMI No. 3014293)
Tyler, J. M., & Feldman, R. S. (2004). Truth, lies, and self-presentation:
How gender and anticipated future interaction relate to deceptive behav-
ior. Journal of Applied Social Psychology, 34, 2602–2615.
Wrightsman, L. S. (1964). Measurement of philosophies of human nature.
Psychological Reports, 14, 743–751.
Zerbe, W. J., & Paulhus, D. L. (1987). Socially desirable responding in
organizational behavior: A reconception. Academy of Management Re-
view, 12, 250 –264.
Appendix
Taxonomy of Faking Behaviors and the Interview Faking Behavior Scale
Please think about the last employment interviews that you had. What strategies from the list below have you used during your interview? Rate the extent to which you used each strategy by circling the appropriate number.

1 = To no extent; 2 = To a little extent; 3 = To a moderate extent; 4 = To a considerable extent; 5 = To a very great extent

Your answers will remain completely confidential and anonymous. We have no way of connecting the answers back to you. Please answer as honestly as possible.
I. SLIGHT IMAGE CREATION
(to make an image of a good candidate for the job)

Embellishing (to overstate or embellish answers beyond a reasonable description of the truth)
ICEMB1 I said that I am an expert in an area even though I am only familiar with it. (a)
ICEMB2 I said that it would take less time to learn the job than I knew it would.
ICEMB3 I exaggerated my future goals.
ICEMB4 I exaggerated my responsibilities on my previous jobs.
ICEMB5 I exaggerated the impact of my performance in my past jobs.
ICEMB6 I used examples of my best performance to answer questions about my everyday performance. (a)

Tailoring (to modify or adapt answers to fit the job)
ICTAI7 During the interview, I distorted my answers based on the comments or reactions of the interviewer.
ICTAI8 During the interview, I distorted my answers to emphasize what the interviewer was looking for.
ICTAI9 I distorted my answers based on the information about the job I obtained during the interview.
ICTAI10 I distorted my work experience to fit the interviewer’s view of the position.
ICTAI11 I distorted my qualifications to match qualifications required for the job.
ICTAI12 I tried to find out about the organization’s culture and then use that information to fabricate my answers.

Fit Enhancing (to create the impression of a fit with the job or organization in terms of beliefs, values, or attitudes)
ICFIT13 I enhanced my fit with the job in terms of attitudes, values, or beliefs.
ICFIT14 I inflated the fit between my values and goals and values and goals of the organization.
ICFIT15 I inflated the fit between my credentials and needs of the organization.
ICFIT16 When asked, I did not mention any disagreements with the organization’s philosophies. (a)
ICFIT17 I tried to use information about the company to make my answers sound like I was a better fit than I actually was.
II. EXTENSIVE IMAGE CREATION
(to invent an image of a good candidate for the job)

Constructing (to build stories by combining or arranging work experiences to provide better answers)
ICCON18 I told fictional stories prepared in advance of the interview to best present my credentials.
ICCON19 I fabricated examples to show my fit with the organization.
ICCON20 I made up stories about my work experiences that were well developed and logical.
ICCON21 I constructed fictional stories to explain the gaps in my work experiences.
ICCON22 I told stories that contained both real and fictional work experiences.
ICCON23 I combined, modified and distorted my work experiences in my answers.
ICCON24 I used made-up stories for most questions.
Inventing (to cook up better answers)
ICINV25 I claimed that I have skills that I do not have.
ICINV26 I made up measurable outcomes of performed tasks.
ICINV27 I claimed work experiences that I do not actually have. (a)
ICINV28 I promised that I could meet all job requirements (e.g., working late or on weekends), even though I probably could not.
ICINV29 I misrepresented the description of an event.
ICINV30 I stretched the truth to give a good answer.
ICINV31 I invented some work situations or accomplishments that did not really occur.
ICINV32 I told some “little white lies” in the interview.

Borrowing (to answer based on the experiences or accomplishments of others)
ICBOR33 My answers were based on examples of job performance of other employees.
ICBOR34 When I did not have a good answer, I borrowed work experiences of other people and made them sound like my own.
ICBOR35 I used other people’s experiences to create answers when I did not have good experiences of my own.
ICBOR36 I described team accomplishments as primarily my own. (a)
III. IMAGE PROTECTION
(to defend an image of a good candidate for the job)

Omitting (to not mention some things in order to improve answers)
IPOMI37 When asked directly, I tried to say nothing about my real job-related weaknesses.
IPOMI38 I tried to avoid discussion of job tasks that I may not be able to do.
IPOMI39 I tried to avoid discussing my lack of skills or experiences.
IPOMI40 I tried not to admit that I did not know an answer. (a)
IPOMI41 I did not mention that I believed I needed additional training to do the job. (a)
IPOMI42 When asked directly, I did not mention my true reason for quitting previous job.

Masking (to disguise or conceal aspects of background to create better answers)
IPMAS43 I tried to mention only my limitations that are easily remedied. (a)
IPMAS44 I did not reveal my true career intentions about working with the hiring organization.
IPMAS45 I tried not to show my true personality. (a)
IPMAS46 When asked directly, I did not mention some problems that I had in past jobs.
IPMAS47 I did not reveal requested information that might hurt my chances of getting a job.
IPMAS48 I talked mainly about my strengths to mask my weaknesses. (a)
IPMAS49 I covered up some “skeletons in my closet.”

Distancing (to improve answers by separating from negative events or experiences)
IPDIS50 I tried to suppress my connection to negative events in my work history.
IPDIS51 I clearly separated myself from my past work experiences that would reflect poorly on me.
IPDIS52 I tried to convince the interviewer that factors outside of my control were responsible for some negative outcomes even though it was my responsibility.
IV. INGRATIATION
(to gain favor with the interviewer to improve the appearance of a good candidate for the job)

Opinion Conforming (to express beliefs, values, or attitudes held by the interviewer or organization)
INCON53 I tried to adjust my answers to the interviewer’s values and beliefs.
INCON54 I tried to agree with interviewer outwardly even when I disagree inwardly.
INCON55 I tried to find out interviewer’s views and incorporate them in my answers as my own.
INCON56 I tried to express the same opinions and attitudes as the interviewer.
INCON57 I tried to appear similar to the interviewer in terms of values, attitudes, or beliefs.
INCON58 I tried to express enthusiasm or interest in anything the interviewer appeared to like even if I did not like it.
INCON59 I did not express my opinions when they contradicted the interviewer’s opinions.
INCON60 I tried to show that I shared the interviewer’s views and ideas even if I did not.

Interviewer or Organization Enhancing (to insincerely praise or compliment the interviewer or organization)
INENH61 I laughed at the interviewer’s jokes even when they were not funny.
INENH62 I exaggerated the interviewer’s qualities to create the impression that I think highly of him/her.
INENH63 I exaggerated my positive comments about the organization.
INENH64 I complimented the organization on something, however insignificant it may actually be to me.
Note. The first two letters in each variable name correspond to the three big groups of faking behaviors (IC = image creation, IP = image protection, IN = ingratiation), the following letters correspond to the 11 subfactors of faking behaviors (EMB = embellishing, TAI = tailoring, FIT = fit enhancing, CON = constructing, INV = inventing, BOR = borrowing, OMI = omitting, MAS = masking, DIS = distancing, CON = opinion conforming, and ENH = interviewer or organization enhancing), and the number corresponds to the item number in the instrument. For example, IPOMI38 is item number 38 in Image Protection, Omitting.
(a) Items were eliminated on the basis of the results of the factor analysis. The final Interview Faking Behavior Scale has 54 items.
Received March 13, 2006
Revision received March 1, 2007
Accepted March 14, 2007
... Although validity is of primordial importance, faking and negative applicant reactions can also undermine organizations' ability to hire the best job candidates because they can negatively impact validity and recruitment outcomes (Levashina & Campion, 2006;McCarthy et al., 2017). Unfortunately, comparisons of faking (e.g., Bill & Melchers, 2023;Bourdage et al., 2018;Levashina & Campion, 2007) and applicant reactions (e.g., Conway & Peneno, 1999;Day & Carroll, 2003) across question types have been largely limited to situational and behavioural questions. A more comprehensive comparison of all four question types is thus warranted. ...
... Faking is a form of deceptive impression management, and can be defined as "conscious distortions of answers to the interview questions in order to obtain a better score on the interview and/or otherwise create favourable perceptions" (Levashina & Campion, 2007, p. 1639. Faking may be a serious issue as applicants' attempts to misrepresent their qualifications or characteristics may lead to inflated interview ratings, ill-advised job offers, and poor work performance . ...
... Only a few studies offer preliminary empirical evidence about faking across question types. Levashina and Campion (2007) directly compared interviews using only situational questions or only behavioural questions, and they found more faking in the former. Bourdage et al. (2018) asked applicants to report whether those two question types were used or not in their last interview. ...
Article
Full-text available
Structured job interviews are often built around four better question types: behavioral, situational, background, and job knowledge questions. This study provides the first comparative examination of these four question types in terms of interviewee faking and reactions. Prolific respondents (N = 150) completed an asynchronous video interview comprising eight questions (i.e., two of each type), then rewatched their responses, and reported their faking and reactions. Overall, question type had a small effect on faking and a small–medium effect on reactions. Specifically, situational and job knowledge questions were associated with less faking than behavioral and background questions. Finally, background questions were associated with worse affective, utility, and procedural justice reactions, particularly compared to situational questions.
... They opt for honest IM when they can genuinely showcase their skills and experiences in response to questions (Bourdage et al., 2018). On the other hand, when they find it challenging to answer, they might resort to deceptive IM (or "faking"), which involves using slight exaggerations, insincere flattery, or creating false impressions (Levashina & Campion, 2007). ...
... Further research should also include additional constructs like perceived social presence, AVI performance, and deceptive IM using a scale like the Interview Faking Behavior scale (Levashina & Campion, 2007) to investigate if providing information about the evaluator could influence these outcomes. Furthermore, future research should explore other ways of providing evaluator's information to applicants. ...
Article
Asynchronous video interviews (AVIs) are widely used in hiring, but the lack of social presence (e.g., uncertainty about the identity of evaluators) may hinder effective impression management (IM) for applicants. This study examined whether providing information about evaluators facilitates applicant IM use in AVIs, specifically ingratiation or self-promotion. It also explored the experience involved in applicants’ response generation. In a mock AVI, 160 participants were randomly assigned to one of two conditions (with or without information about the evaluator). They reported their thoughts after watching their interview recordings. Providing information about the evaluator enhanced ingratiation but did not affect self-promotion. Qualitative analyses revealed that participants with evaluator information were more likely to reference organizational values and align themselves with the evaluator, whereas those without it concentrated more on demonstrating their job-relevant skills. Participants' reported thoughts and emotions suggested that formulating suitable answers and interacting with a computer represent major concerns.
... Job boards are the primary resource used by candidates -although not the only one (Smith 2015), who aim to leverage digital tools without risking damaging their professional image (Gershon 2014). In fact, individuals strive to manage the impression they make on potential employers, also in person during employment interviews (Levashina et al. 2013;Levashina and Campion 2007), and on PSNs like LinkedIn, which can even be used to actively seek a job (Krings et al. 2021). While LinkedIn users tend to disclose a variety of elements to attract recruiters (Krings et al. 2021), some of them may feel uneasy about online self-promotion, as indicated by Nicol et al. (2022). ...
Article
Possible side effects of using web job boards in the e-recruitment context, such as candidates dropping out from the hiring process, may emerge if these tools are not transparent about data usage, collection, and processing. In response, we developed a novel web job board designed to enhance transparency, simulating a job-matching recommender system. A qualitative study with 20 Italian participants, combining direct observation of the job board use with the Thinking Aloud protocol and interviews, examines participants’ privacy behaviours in terms of data disclosure and seclusion. Findings indicate a general willingness among participants to share personal data, except for information related to their identity. We found that both the design of the job board and the meanings ascribed by participants to data shaped their privacy behaviours. Features enhancing user understanding of data usage and control of privacy settings were positively received, underscoring the importance of design in fostering thoughtful engagement with job board technologies. We contribute to research on privacy behaviours in the context of job search and we draw suggestions from the study findings on how to design platforms that support data protection and allow safe and purposeful disclosure of personal data, sustaining job seekers throughout the recruitment process.
... These authors were the only ones to use a statistical algorithm to classify their participants' truthful and false responses, with a discriminant analysis accuracy ranging from 50.8% for lies to 71.6% for truths. Schneider et al. (2015) instructed their student participants to achieve as high ratings as possible in a simulated structured interview for the position of administrative assistant, after which they were asked to complete the Interview Faking Behavior scale (Levashina & Campion, 2007). Slight image creation was associated with fewer silent pauses in participants' speech, extensive image making was associated with fewer smiles and more speech errors, while deceptive ingratiation was correlated with fewer smiles and fewer silent pauses in speech. ...
Article
The aim of this study was to investigate the possibility of faking detection in a selection interview using a multimodal approach based on paraverbal, verbal/nonverbal cues, and facial expressions. In addition, we compared detection accuracies of simple linear and complex nonlinear machine learning algorithms. A sample of 102 participants were interviewed in two conditions—honest responding and simulated highly realistic selection. Results showed only several significant univariate effects of experimental condition for paraverbal, verbal, and facial expression cues. All the algorithms performed comparably and above chance levels, except for random forests, which overfitted on the training sets and underperformed on the testing sets. Still, considering the algorithms' accuracy was limited, usefulness of multimodal data for deception detection remains questionable.
... Independent of the specific tactic used, IM is classified into honest versus deceptive behaviour (Gilmore & Ferris, 1989). In the latter case, candidates may distort true facts by inventing qualifications, embellishing prior accomplishments and omitting or masking undesirable characteristics as well as experiences (Levashina & Campion, 2007). ...
Article
Full-text available
The paper investigates the signalling behaviour of digital native applicants in employment interviews and analyses how their reactions differ in face-to-face versus video-mediated contexts. The social presence within the interview setting and the possibility of employing impression management tactics are of particular interest to understanding the subjective acceptance and perceived fairness of the two types of selection procedures. The analyses of novel primary data from a German survey with 513 valid responses reveal that digital natives, similar to older applicants, appreciate signalling to lower information asymmetries. Regardless of interview mode, social presence and impression management are strong positive drivers of acceptance and perceived fairness. While members of the generational cohort still accept face-to-face interviews more than those mediated by videoconferencing technology, they perceive the former as less fair. This result, which may be explained by the specific characteristics of digital natives, contradicts the findings of studies that have investigated preceding generations. Hence, the paper complements the literature on applicant reactions by focusing on two younger generational cohorts, namely Generation Y and Z. Furthermore, the adoption of the signalling framework in this context suggests that the beneficial effects of signalling may stand vis-à-vis feelings of unfairness, which can be interpreted as additional psychological costs that are driven by moral considerations.
Chapter
Human behavior in cyber space is extremely complex. Change is the only constant as technologies and social contexts evolve rapidly. This leads to new behaviors in cybersecurity, Facebook use, smartphone habits, social networking, and many more. Scientific research in this area is becoming an established field and has already generated a broad range of social impacts. Alongside the four key elements (users, technologies, activities, and effects), the text covers cyber law, business, health, governance, education, and many other fields. Written by international scholars from a wide range of disciplines, this handbook brings all these aspects together in a clear, user-friendly format. After introducing the history and development of the field, each chapter synthesizes the most recent advances in key topics, highlights leading scholars and their major achievements, and identifies core future directions. It is the ideal overview of the field for researchers, scholars, and students alike.
Article
Asynchronous video interviews can use many configurations of design features to create the interviewee experience, but not all designs are equal. Design features may influence interviewees' deceptive and honest impression management, their reactions to the procedure, and interview performance evaluations. Three experiments using mock interviews tested the effects of preparation time and self-views (N = 206, from Prolific), reviewing and re-recording (N = 230, from Prolific), and giving faking warnings with human versus automated evaluation (N = 297 university students) on interview outcomes. The design had limited effects on interviewee behavior, but some features may increase interviewees' willingness to fake when used in combination. Opportunities for longer preparation time and re-recording increased interview performance ratings. Warnings and evaluator type did not affect behavior, reactions, or performance. The implications of these effects are discussed.
Article
Employing an interim CEO is one of the key strategies organizations use to address urgent changes in leadership, yet there is a notable lack of attention in the existing corporate governance literature regarding the impact of interim CEOs on non-market strategic behaviors. In an effort to bridge this gap, our study integrates institutional theory with the impression management literature. Based on unbalanced panel data from Chinese non-state-owned listed companies from 2010 to 2019, the study finds that the succession of an interim CEO is associated with a simultaneous reduction in both corporate social responsibility (CSR) and corporate social irresponsibility (CSI) activities. The negative relationship between interim CEO succession and CSI activities is weakened in the context of high institutional voids. Mechanism analysis reveals that interim CEOs tend to focus more on the present and allocate more attention toward external stakeholder management strategies and low-cost and efficiency strategies. Additional analysis indicates that in the face of negative financial performance aspirations, interim CEOs are more likely to reduce CSR activities. Similarly, when confronted with negative social performance aspirations, interim CEOs tend to decrease CSI activities to a greater extent.
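As a rough illustration of the moderation claim in this abstract (that high institutional voids weaken the negative interim-CEO-to-CSI relationship), the sketch below estimates a pooled panel regression with an interaction term and firm-clustered standard errors. The variable names, data file, and simplified specification (year dummies only, no firm fixed effects) are assumptions made for illustration, not the study's actual model.

```python
# Hypothetical sketch of a moderation test on firm-year panel data.
# Placeholder file and columns: firm_id, year, csi, interim_ceo, inst_voids.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_year_panel.csv")

model = smf.ols(
    "csi ~ interim_ceo * inst_voids + C(year)",   # interaction captures moderation
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# If the main interim_ceo coefficient is negative, a positive coefficient on the
# interim_ceo:inst_voids interaction would indicate that high institutional voids
# attenuate (weaken) that negative effect on CSI activities.
print(model.summary().tables[1])
```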
Article
Full-text available
This study provides a comprehensive investigation into whether social desirability alters the factor structure of personality measures. The study brought together 4 large data sets wherein different organizational samples responded to different personality measures. This facilitated conducting 4 separate yet parallel investigations. Within each data set, individuals identified through a social desirability scale as responding in an honest manner were grouped together, and individuals identified as responding in a highly socially desirable manner were grouped together. Using various analyses, the fit of higher order factor structure models was compared across the 2 groups. Results were the same for each data set. Social desirability had little influence on the higher order factor structures that characterized the relationships among the scales of the personality measures.
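The cited study compared the fit of higher order factor models across honest and socially desirable responder groups. As a much simpler, hypothetical stand-in for that idea, the sketch below fits an exploratory factor model separately in each group and compares the loading matrices with Tucker's congruence coefficient. The column names, grouping flag, number of factors, and data file are all assumptions, and in practice the factors would first need to be aligned (for example, by Procrustes rotation) before being compared column by column.

```python
# Illustrative sketch, not the cited study's analysis: compare factor loadings
# across "honest" vs. "highly socially desirable" responder groups.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def tucker_congruence(a, b):
    """Congruence between two loading vectors (values near 1 = same factor)."""
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

df = pd.read_csv("personality_items.csv")   # placeholder: item responses + SD flag
items = [c for c in df.columns if c.startswith("item_")]

loadings = {}
for group, sub in df.groupby("high_social_desirability"):   # placeholder 0/1 flag
    fa = FactorAnalysis(n_components=5, random_state=0).fit(sub[items])
    loadings[group] = fa.components_.T                      # items x factors

# Compare corresponding factors across the two groups (assumes aligned factor order).
for k in range(5):
    c = tucker_congruence(loadings[0][:, k], loadings[1][:, k])
    print(f"factor {k + 1}: congruence = {c:.2f}")
```

High congruence values across groups would point to the abstract's conclusion that social desirability leaves the factor structure largely intact; the cited study itself relied on confirmatory model-fit comparisons rather than this exploratory shortcut.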