Poster presented at the annual meeting of the Society for Industrial and Organizational Psychology, May 2006, Dallas, Texas.
Do structured interviews eliminate bias? A meta-analytic comparison of
structured and unstructured interviews
Michael G. Aamodt, Radford University
Ellyn G. Brecher, The College of New Jersey
Eugene J. Kutcher, Virginia Tech
Jennifer D. Bragger, Montclair State University
We conducted a meta-analysis of studies investigating the extent to which structured and unstructured interviews are affected by such sources of potential bias as applicant attractiveness, pregnancy, weight, sex, race, and use of non-verbal cues. To be included in the meta-analysis, a study had to use an experimental design and directly compare interview scores of structured and unstructured interviews. On the basis of 24 effect sizes, we found that both unstructured (d = .59) and structured interviews (d = .23) were affected by sources of bias. Though both interview types were affected, unstructured interviews were significantly more susceptible to bias than were structured interviews.
Though the high costs of employee turnover and incompetent employees have resulted in the development of sophisticated and technologically advanced selection techniques, the employment interview continues to be used by virtually all organizations in making selection decisions. The employment interview has been defined as "an interpersonal interaction of limited duration between one or more interviewers and a job-seeker for the purpose of identifying interviewee knowledge, skills, abilities and behaviors that may predict success in subsequent employment. The operational indicators of this success include the criteria of job performance, training success, promotion, and tenure" (Wiesner & Cronshaw, 1988, p. 276). Although the job interview remains the preeminent selection tool for most organizations, research has not found a strong relationship between scores from interviews low in structure and measures of job performance (Huffcutt & Arthur, 1994). In addition to this lack of predictive validity, meta-analysis results indicate that interviews low in structure (d = .32) are more prone to adverse impact than are interviews high in structure (d = .23; Huffcutt & Roth, 1998). Furthermore, research has indicated that selection decisions based on interviews low in structure yield lower scores for applicants who are disabled (Wennet, 1994), obese (Kutcher & Bragger, 2004), or pregnant (Bragger, Kutcher, Morgan, & Firth, 2002). The purpose of our research is to conduct a meta-analysis of research investigating the susceptibility of unstructured and structured interviews to biases against applicants on the basis of reactions to non-job-related cues such as pregnancy and obesity.
The Predictive Validity of the Job Interview
To assess the utility of the job interview in predicting job performance, Hunter and Hunter (1984) conducted a meta-analysis comparing the mean validity coefficients (correlations with supervisory ratings) from studies that investigated any of 11 different predictors (e.g., cognitive ability tests, assessment centers, biodata) used in selection for entry-level jobs. The mean validity calculated for interviews ranked sixth, at .14. This finding suggests that less than two percent of the variance in job performance is explained by performance on the job interview.
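The two-percent figure is simply the squared validity coefficient:

$$ r^2 = (.14)^2 = .0196 \approx 2\% $$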
Predictive validity and structure. Subsequent research has established that adding structure to the interview process can improve the predictive validity, as well as other psychometric properties, of the selection interview. The term "structure," when referring to an interview, can be broadly defined as "any enhancement of the interview that is intended to increase the psychometric properties by increasing standardization or otherwise assisting the interviewer in determining what questions to ask or how to evaluate responses" (Campion, Palmer, & Campion, 1997, p. 656).
Campion et al. (1997) presented a review of the many ways that structure can be integrated into an interview, and how these components of structure affect the validity and reliability of the interview. Among the 15 components of structure are the following: base questions on a job analysis; ask all candidates the same questions in the same order; limit prompting, follow-up questioning, and elaboration; ask questions that are situational, behavior-based, or focused on job knowledge; ask a greater number of questions; control ancillary information such as application forms, resumes, and test scores; rate each answer on scales tailored for each question; use behaviorally anchored rating scales (BARS); take detailed notes on applicants' responses; use multiple interviewers; use the same interviewers for all candidates; do not allow interviewers to discuss the candidates' answers; provide extensive training to the interviewers; and use statistical procedures to determine the best candidate.
While it would be ideal to incorporate all 15 components of structure, in practice only a few components are typically used at a time. Accordingly, any interview's overall degree of structure falls somewhere on a continuum, where any or all of the suggestions above are applied to some degree. That is, whereas some organizations implement the highest degree of a given component of structure (e.g., interviewers ask the exact same questions in the exact same order to all applicants), other organizations use a milder form (e.g., interviewers are given more flexible questioning guidelines). Because there are so many methods used to structure an interview, and such variability in the intensity with which each method is applied, there are innumerable ways to add more structure to an interview; each additional structuring element applied should add incrementally to the interview's validity (Campion et al., 1997).
The identification of structure as a way to improve the interview has led to numerous research studies, and some meta-analyses, documenting the structured interview as a more valid selection tool than originally thought, as well as a vast improvement over the unstructured interview. Wiesner and Cronshaw (1988) conducted a meta-analysis comparing structured and unstructured interviews. The corrected validity coefficient for the structured interview (ρ = .62) was twice that of the unstructured interview (ρ = .31). Wright, Lichtenfels, and Pursell (1989) found a meta-analytic estimated effect size for structured interviews of r = .39, though the analysis did not compare this to unstructured interviews. A third meta-analysis, by McDaniel, Whetzel, Schmidt, and Maurer (1994), found the mean corrected validity (ρ) for the unstructured interview to be .33, the mean corrected validity for the structured interview to be .44, and the mean corrected validity for the situational interview (a type of structured interview) to be even higher at .50. Recognizing that structure can be applied to different degrees, Huffcutt and Arthur (1994) coded 114 interview validity coefficients into four categories of structure, ranging from (1) no formal structure to (4) structured questioning and scoring. The resulting meta-analytic validity coefficients ranged from ρ = .20 (no structure) to ρ = .57 (greatest structure). Similarly, Conway, Jako, and Goodman (1995) coded interviews along a continuum of high, moderate, and low structure. The mean validities were r = .67, .56, and .34, respectively,
indicating that (a) the predictive validity of the structured interview was almost twice that of the
unstructured interview and (b) that increasing degrees of structure incrementally add to its utility.
All of the above meta-analyses found improved psychometric properties of the structured interview, identifying it as a more valid tool for predicting job performance than the unstructured interview. A structured interview incorporates more standardization and decreases or eliminates subjectivity, leading to a greater reliance on job-related criteria (Campion, Pursell, & Brown, 1988). Therefore, besides increasing predictive validity, the structured interview should also reduce the effect of bias in employment decisions.
Bias in the interview. Many research studies (e.g., Latham & Saari, 1984; Latham, Saari, Pursell, & Campion, 1980) have explored the relationship between structuring the job interview and the resulting predictive validity. Relatively fewer studies have introduced specific systematic sources of bias into the job interview and then systematically investigated the influence of bias on structured and unstructured interviewer ratings. Pingitore, Dugoni, Tindale, and Spring (1994) considered the influence of weight and gender bias by having participants watch mock employment interviews of male and female normal-weight versus overweight job applicants and rate whether they would hire the applicant. Through the use of costumes and scripts, the qualifications of the applicants remained constant across conditions. Ratings showed significant effects of both weight and gender: overweight and female applicants were recommended for hire significantly less often than their normal-weight and male counterparts, respectively. Additional research has also found a clear selection bias against physically unattractive and overweight job applicants (Cash, Gillen, & Burns, 1977; Morrow, 1990; Kutcher & Bragger, 2004). Other biases that have been found to influence interviewer scores include disability (Bricout & Bentley, 2000; Miceli, Harvey, & Buckley, 2001; Ravaud, Madiot, & Ville, 1992), attire (Forsythe, Drake, & Cox, 1985), age (Perry, Kulik, & Bourhis, 1996), and non-verbal expression (Burnett & Motowidlo, 1998; DeGroot & Motowidlo, 1999).
If predictive validity is the relationship between a job applicant's score on a pre-employment selection test (i.e., the interview) and an ultimate measurement of job performance, then bias refers to all sources (systematic and random) that influence the selection test scores (and decisions) but are not related to how the applicant would perform on the job. Though it is important to assess the predictive validity of the structured and unstructured interview, a problem with doing so is that criterion measures of job performance (e.g., sales, goals met, performance appraisal ratings) are also associated with bias. The sources of bias in job performance measurement may be the same as or similar to the sources of bias in the job interview measurement, causing inflated correlations between the two scores. This may be especially true when the same people are involved in both measurement events. It is particularly problematic when the sources of bias are the age, race, gender, ethnicity, or disability of the job candidate. Can structure actually eliminate this bias? Several research studies indicate that structure seems at least to reduce it (e.g., Bragger, Kutcher, Morgan, & Firth, 2002; Brecher, Bragger, Kutcher, & Miller, 2004; Kutcher & Bragger, 2004).
Experimental studies of bias in the job interview can control the credentials of the job candidate as conveyed by his or her interview responses; this establishes a candidate's "true score," which can then be compared to an interviewer's ratings. This "true score" would not be influenced by measurement bias in job performance. We therefore see value in determining the influence of specific sources of bias on interviewer ratings, and the influence of structuring the job interview in reducing rating bias. The purpose of our research is to conduct a meta-analysis of those studies that introduce and investigate bias against candidates in structured versus unstructured job interviews. Accordingly, we present the following predictions regarding the literature studying bias in interviews:
Hypothesis 1: Potential sources of bias will significantly affect unstructured interview scores.
Hypothesis 2: Potential sources of bias will not significantly affect structured interview scores.
Hypothesis 3: Potential sources of bias will have a greater effect on unstructured interview scores than on structured interview scores.
Method
Finding Studies
The first step in the meta-analysis was to locate studies directly comparing the effect of a
source of irrelevant information (e.g., attractiveness, pregnancy, obesity) on structured and
unstructured interview scores. The search for such studies was concentrated on journal articles,
theses, and dissertations published between 1970 and 2005. To find relevant studies, the
following sources were used:
Dissertation Abstracts Online was used to search for relevant dissertations.
WorldCat was used to search for relevant master’s theses, dissertations, and books.
WorldCat is a listing of books contained in many libraries throughout the world and was
the single best source for finding relevant master’s theses. There were a few theses that
could not be obtained because their home library would not loan them and they were not
available for purchase.
PsycInfo, InfoTrac OneFile, ArticleFirst, ERIC, Periodicals Contents Index, Factiva, and
Lexis-Nexis were used to search for relevant journal articles and other periodicals.
Hand searches were made of the Journal of Applied Psychology, Personnel Psychology,
Applied H.R.M. Research, and the International Journal of Selection and Assessment.
Reference lists from journal articles, theses, and dissertations were used to identify other
relevant material.
Keywords used to search electronic databases included combinations of interview terms
(e.g., structured, situational, behavioral, interview, unstructured) with potential sources of bias
(e.g., sex, race, attractiveness, obesity, pregnancy, first impressions, contrast effects).
The search for documents stopped when computer searches failed to yield new sources and no new sources appeared in reference lists. To be included in our meta-analysis, a study had to directly compare structured and unstructured interviews and had to include a d score, another statistic that could be converted to a d score (e.g., r, t, F, χ²), or tabular or raw data that could be analyzed to yield a d score. Studies that investigated a source of bias on only one type of interview were not included. The literature search yielded nine relevant studies: five journal articles, two master's theses, and two conference presentations. From these nine studies, 24 independent effect sizes (12 structured, 12 unstructured) were used in the meta-analysis.
Converting Research Findings to d Scores
Once the studies were located, statistical results were converted into d scores using the formulas provided in Arthur, Bennett, and Huffcutt (2001). In some cases, raw data or frequency data listed in tables were entered into an Excel program to compute a d score directly. If a study provided more than two levels of structure, we categorized the highest level as structured and the lowest level as unstructured, and ignored the levels in between.
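For readers who wish to replicate the conversions, the sketch below gives minimal Python versions of the standard transformations for the statistics we encountered (r, t, one-df F, and group means). These are the textbook formulas, offered as an illustration rather than as the exact routines in Arthur, Bennett, and Huffcutt (2001); d_from_f assumes a two-group (one-df) design.

```python
import math

def d_from_r(r):
    """Convert a correlation (e.g., point-biserial r) to d."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_from_t(t, n1, n2):
    """Convert an independent-groups t to d."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def d_from_f(f, n1, n2):
    """Convert a one-df, two-group F to d via t = sqrt(F); the sign
    must be recovered from the direction of the group means."""
    return d_from_t(math.sqrt(f), n1, n2)

def d_from_means(m1, m2, sd1, sd2, n1, n2):
    """Convert tabled group means and SDs to d using the pooled SD."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled
```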
Cumulating d Scores
After the individual d scores were computed, the effect size for each study was weighted by the size of the sample, and the effect sizes were combined using the method suggested by Hunter and Schmidt (1990) and Arthur, Bennett, and Huffcutt (2001). In addition to the mean effect size, the observed variance, the amount of variance expected due to sampling error, and the 95% confidence interval were calculated. All meta-analysis calculations were performed using Meta-Analyzer 5.2, an Excel-based meta-analysis program.
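A bare-bones version of the cumulation step, written to the Hunter and Schmidt (1990) logic as we apply it, is sketched below; the actual computations were done in Meta-Analyzer 5.2, which may differ in its corrections. The interval is computed from the residual standard deviation, which is why it collapses to a point when sampling error explains 100% of the observed variance (cf. Table 1).

```python
import math

def cumulate(ds, ns):
    """Sample-size-weighted cumulation of d scores (Hunter-Schmidt style)."""
    big_n = sum(ns)
    mean_d = sum(n * d for d, n in zip(ds, ns)) / big_n
    # Sample-size-weighted observed variance of the effect sizes
    var_obs = sum(n * (d - mean_d) ** 2 for d, n in zip(ds, ns)) / big_n
    # Expected sampling-error variance: (4 / n)(1 + mean_d^2 / 8) per study,
    # averaged with the same sample-size weights
    var_err = sum(n * (4 / n) * (1 + mean_d ** 2 / 8) for n in ns) / big_n
    pct_se = 100 * min(var_err / var_obs, 1.0) if var_obs > 0 else 100.0
    sd_res = math.sqrt(max(var_obs - var_err, 0.0))  # residual SD
    ci = (mean_d - 1.96 * sd_res, mean_d + 1.96 * sd_res)
    return mean_d, var_obs, var_err, pct_se, ci
```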
Searching for Moderators and Generalizing Results
Being able to generalize meta-analysis findings across all similar organizations and settings (validity generalization) is an important goal of any meta-analysis. In this meta-analysis, when variance due to sampling error accounted for less than 75% of the observed variance, the next step was to remove outliers. Outliers were defined as effect sizes that were at least three standard deviations from the mean. Outliers are removed from meta-analyses on the assumption that a study obtaining results very different from those of other studies did so due to such factors as calculation errors, coding errors, or the use of a unique sample. If, after removing outliers, the variance accounted for was still less than 75%, a search for potential moderators was conducted. Potential moderators explored in this meta-analysis were the type of potential bias (priming, nonverbal cues, disability, pregnancy, race, weight, sex), interview medium (face-to-face, video), provision of other information about the applicant (no, yes), interview scoring method (sum of question ratings, overall rating), and question type (situational only, situational and behavioral, general conversation).
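The decision sequence described above can be summarized schematically as follows; this is a restatement for illustration (reusing the hypothetical cumulate() helper from the previous sketch), not the code actually used:

```python
def analyze(ds, ns):
    """75% rule, then 3-SD outlier screen, then (if needed) moderator search."""
    mean_d, var_obs, var_err, pct_se, ci = cumulate(ds, ns)
    if pct_se >= 75:
        return "generalize"  # sampling error accounts for the variance
    # Drop effect sizes at least 3 SDs (of the observed distribution) from the mean
    sd_obs = var_obs ** 0.5
    kept = [(d, n) for d, n in zip(ds, ns) if abs(d - mean_d) < 3 * sd_obs]
    ds, ns = [d for d, _ in kept], [n for _, n in kept]
    mean_d, var_obs, var_err, pct_se, ci = cumulate(ds, ns)
    return "generalize" if pct_se >= 75 else "search for moderators"
```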
Results
Meta-analyses were conducted separately for the structured and unstructured interview effect sizes. We hypothesized that unstructured interviews would be significantly affected by potential sources of bias. As shown in Table 1, the initial mean effect size for unstructured interviews was not significant, as the confidence interval included zero. Because less than 75% of the observed variance in unstructured interviews could be explained by sampling error, we looked for outliers. The d of 2.24 from Study 2 of Kutcher and Bragger (2004) was removed, as it was more than three standard deviations from the mean effect size. As shown in Table 1, after removing this study, the mean effect size for unstructured interviews was significantly different from zero, and 100% of the observed variability in effect sizes would be expected from sampling error alone. Thus, Hypothesis 1 was supported, and these results can be generalized, as there is no need to search for moderators.
Our second hypothesis was that structured interviews would not be significantly affected by potential sources of bias. As shown in Table 1, this hypothesis was not supported, as the mean effect size for structured interviews was significantly different from zero.
Our third hypothesis, that structured interviews would be less susceptible to sources of bias than unstructured interviews, was supported. We tested this hypothesis by comparing the effect size for structured interviews (d = .23) with the effect size for unstructured interviews (d = .59). Because the 95% confidence intervals surrounding the two effect sizes do not overlap, we can conclude that they are significantly different from one another.
Table 1: Meta-analysis results

                                        95% Confidence Interval
Interview type      K      N      d      Lower     Upper     SE%      Qw
Overall            24  1,359    .47      -.19      1.13      40%    60.4*
Structured         12    663    .23       .23       .23     100%     3.2
Unstructured       12    696    .70      -.08      1.50      32%    37.6*
  Outlier removed  11    648    .59       .59       .59     100%    10.7

K = number of studies; N = total sample size; d = mean effect size; SE% = percentage of variance explained by sampling error; Qw = within-group homogeneity statistic.
* Effect sizes are not homogeneous.
Discussion
Since the introduction of structured interviewing in the personnel selection literature, several studies have attempted to show its superiority over traditional interviews. Meta-analytic reviews have demonstrated that more structure in the collection and evaluation of interview information yields higher validity coefficients. One of the primary mechanisms through which this greater validity operates is the reduction of contamination by irrelevant biases. The current meta-analysis contributes to the literature by specifically comparing the effect sizes of interviewer biases during structured and unstructured interviews.
The evidence from the current investigation clearly shows that bias affects interviews. In both structured and unstructured interviews, the estimated effect size of biases is considerable (d = .23 and .59, respectively). While the support for Hypothesis 1 (that bias is indeed significantly associated with unstructured interviews) was expected, the lack of support for Hypothesis 2 (that bias would not be associated with structured interview scores) was not. This indicates that biases also affect decision making when the collection and evaluation of information are guided by stricter standardization and guidance. Hypothesis 3 was also supported; while bias does appear to have a meaningful effect on structured interviews, it is even stronger in unstructured interviews.
Although counter to our hypotheses, it is understandable that biases may affect a highly structured situation. One of the more common structuring elements in an interview setting is the nature of the decision making. Whereas in an unstructured interview a single holistic hiring decision is formed, in a structured interview several question-level or dimension-level decisions or evaluations are made. Although other structuring elements would ideally encourage more thoughtful and careful processing of relevant information only, it is possible that interviewer biases are simply affecting more decisions. The finding of a small but significant bias/structured-interview effect size should be considered along with the support for the final hypothesis: that there is a larger association between biases and unstructured interviews. New research may seek to investigate which of the many structuring elements are most efficacious at reducing the impact of interviewer bias.
Some limitations should be noted with the overall conclusions. In our method, we
discarded any data representing intermediate or partially structured interviews. Many studies
present the interview structure variable as dichotomous, where interviews are either completely
unstructured or highly structured. The reality is that most interviews, when conducted in practice,
are likely mildly structured. Furthermore, there has been a call for research in structured
interviewing to represent more than two levels of structure (Lievens & DePaepe, 2004).
Therefore, the studies that attempt to represent more than two levels of structure are probably the
most informative in terms of generalizability to practice. In our study, we looked solely at
unstructured and structured interviews to establish the main effect finding that bias has a greater
impact on unstructured interviews. Other primary studies, and ultimately other meta-analyses,
would benefit from the incorporation of intermediately structured interviews.
Although the findings did not point toward a need for tests of potential moderating variables, the biases examined in the collection of studies were diverse. Biases were related to demographic, behavioral, and appearance factors in the applicant (e.g., Bragger et al., 2002; Martin & Stockner, 2000), orientations of the interviewer (e.g., Gousie, 1993), and properties of the interview or format (e.g., Beech, 1996). One might expect that a significant moderation effect would have appeared, but no such heterogeneity in the effect size distribution was evident. This lends more confidence to the effect size estimates found and provides motivation to test additional biases in structured interview settings. For example, although some studies have linked structured interviews to more legally defensible hiring practices with respect to racial discrimination (Williamson, Campion, Malos, Roehling, & Campion, 1997), no laboratory studies have manipulated interview structure to examine racial biases across interviews. Other biases that have yet to be examined alongside interview structure include religious affiliation, sexual orientation, and deeper psychological biases such as similarity (between the interviewer and interviewee) and order effects.
Furthermore, there is the potential for the file-drawer problem or secondary sampling error. The researchers undertook the steps necessary to identify and locate presentations, theses, and dissertations. In fact, four of the nine studies included in the meta-analysis were not published in peer-reviewed journals. The fact that these source studies were not subjected to the strict peer-review process should not detract from the results. Rather, it should serve as evidence that a common criticism of meta-analyses was addressed and, with no evidence of heterogeneity in the effect size distributions, that the effects of bias in these studies were not materially different from those in published studies.
A main purpose of meta-analysis is to collect relevant studies on a common topic and accumulate data to represent the nature and strength of important relationships. From here, additional qualification and generalization can be suggested and pursued. In the current study, we have recognized that biases have a small but significant effect on structured interviews, and a larger effect on unstructured interviews. Perhaps the clearest next steps are to test for the influence of bias in intermediately structured interview contexts, to determine the specific structuring elements that may allow these biases to emerge, to examine any additional biases that are generally recognized to affect interview situations, and, most importantly, to identify any interventions or behaviors that may inhibit the impact of interviewer biases.
References
References marked with an asterisk were used in the meta-analysis.
Arthur, W., Bennett, W., & Huffcutt, A. I. (2001). Conducting meta-analysis using SAS. Mahwah, NJ: Lawrence Erlbaum Associates.
*Beech, B. A. (1996). Preinterview bias effects and number of benchmarks in the situational
interview. Unpublished master’s thesis, Radford University.
*Bragger, J. D., Kutcher, E., Morgan, J., & Firth, P. (2002). The effects of the structured interview on reducing biases against pregnant job applicants. Sex Roles, 46(7/8), 215-226.
*Brecher, E. G., Bragger, J. D., Kutcher, E. J., & Miller, J. (2004, April). The structured interview: Reducing biases towards disabled job applicants. Paper presented at the 19th Annual Conference of the Society for Industrial and Organizational Psychology, Chicago, Illinois.
Bricout, J., & Bentley, K. (2000). Disability status and perceptions of employability by
employers. Social Work Research, 24, 87-95.
Burnett, J. R., & Motowidlo, S. J. (1998). Relations between different sources of information in the structured selection interview. Personnel Psychology, 51(4), 963-983.
Campion, M. A., Palmer, D. K., & Campion, J. E. (1997). A review of structure in the selection
interview. Personnel Psychology, 50, 655-702.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. Personnel Psychology, 41(1), 25-42.
Cash, T. F., Gillen, B., & Burns, S. (1977). Sexism and beautyism in personnel consultant decision making. Journal of Applied Psychology, 62, 301-310.
*Chapman, D.S. & Rowe, P.M. (2001). The impact of videoconference media, interview
structure, and interviewer gender on interviewer evaluations in the employment
interview: A field experiment. Journal of Occupational and Organizational Psychology,
74, 279-298.
Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565-579.
DeGroot, T., & Motowidlo, S. J. (1999). Why visual and vocal interview cues can affect interviewers' judgments and predict job performance. Journal of Applied Psychology, 84(6), 986-993.
Forsythe, F., Drake, M. F., & Cox, C. E. (1985). Influence of applicant's dress on interviewer's selection decisions. Journal of Applied Psychology, 70(2), 374-378.
*Gousie, L. J. (1993). Interview structure and interviewer prejudice as factors in the evaluations
and selection of minority and non-minority applicants. Applied H.R.M. Research, 4(1), 1-
13.
Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Huffcutt, A. I., & Roth, P. L. (1998). Racial group differences in employment interview
evaluations. Journal of Applied Psychology, 83(2), 179-189.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis. Newbury Park, CA: Sage.
*Kutcher, E. J., & Bragger, J. D. (2004). Selection interviews of overweight job applicants: Can
structure reduce the bias? Journal of Applied Social Psychology, 34(10), 1993-2022.
Latham, G. P., & Saari, L. M. (1984). Do people do what they say? Further studies on the
situational interview. Journal of Applied Psychology, 69, 569-573.
Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.
Lievens, F., & DePaepe, A. (2004). An empirical investigation of interviewer-related factors that discourage the use of high structure interviews. Journal of Organizational Behavior, 25(1), 29-46.
*Martin, J., & Stockner, J. (2000). Does the structured interview control for the effects of applicant gender and nonverbal cues? Paper presented at the 21st annual Graduate Student Conference in Industrial-Organizational Psychology and Organizational Behavior, Knoxville, TN.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79(4), 599-616.
*McShane, T. D. (1993). Effect of nonverbal cues and verbal first impressions in unstructured and situational interview settings. Applied H.R.M. Research, 4(2), 137-150.
Miceli, N. S., Harvey, M., & Buckley, M. R. (2001). Potential discrimination in structured employment interviews. Employee Responsibilities and Rights Journal, 13, 15-38.
Morrow, P. C. (1990). Physical attractiveness and selection decision making. Journal of Management, 16, 45-60.
Perry, E. L., Kulik, C. T., & Bourhis, A. C. (1996). Moderating effects of personal and contextual factors in age discrimination. Journal of Applied Psychology, 81(6), 628-647.
Pingitore, R., Dugoni, B. L., Tindale, R. S., & Spring, B. (1994). Bias against overweight job
applicants in a simulated employment interview. Journal of Applied Psychology, 79,
909-917.
Ravaud, J., Madiot, B., & Ville, I. (1992). Discrimination towards disabled people seeking
employment. Social Science and Medicine, 35, 951-958.
*Wennet, C. L. (1994). Effects of a past disability on selection interview decisions.
Unpublished master’s thesis, Radford University.
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275-290.
Williamson, L. G., Campion, J. E., Malos, S. B., Roehling, M. V., & Campion, M. A. (1997). Employment interview on trial: Linking interview structure with litigation outcomes. Journal of Applied Psychology, 82(6), 900-912.
Wright, P. M., Lichtenfels, P. A., & Pursell, E. D. (1989). The structured interview: Additional
studies and a meta-analysis. Journal of Occupational Psychology, 62, 191-199.
Article
Empirical research on the role of physical attractiveness in employment selection is reviewed. Physical attractiveness is conceptualized as a beneficial status characteristic, although further investigation of the magnitude of the bias is needed. Conceptual and methodological problems impeding understanding of physical attractiveness are noted and a descriptive model specifying the role of attractiveness in selection decision-making is offered.