Table 12 - uploaded by In-Sue Oh
1. Reanalysis of Validity of Employment Interview Data

Source publication
Chapter
Employment interviews are one of the most widely used selection tools across organizations, industries, and countries (Dipboye, 1992, 1997; Dipboye & Jackson, 1999; Ryan, McFarland, Baron, & Page, 1999; Salgado, Viswesvaran, & Ones, 2001; Wilk & Cappelli, 2003, Table 1). Interviews also play an important role in government employment decisions, pa...

Contexts in source publication

Context 1
... we show additional information about how much the traditional way of correcting for range restriction (DRR) underestimates the operational validity of employment interviews (the % under column in Tables 12.1–12.2) and the 95% confidence intervals for each meta-analytic estimate. Table 12.1 shows the overall validities of structured and unstructured interviews for job and training performance regardless of performance rating type. For job performance, when corrected/adjusted for IRR, publication bias, and measurement error in the criterion measure, the operational validity of unstructured interviews (ρ = .556) ...
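The gap between DRR- and IRR-based estimates discussed above can be made concrete with a small sketch. The functions below follow the standard Thorndike Case II correction for direct range restriction and the Hunter, Schmidt, and Le (2006) Case IV procedure for indirect range restriction; the numeric inputs are made up for illustration and are not values from the chapter's data set.

```python
import math

def correct_drr(r_obs, u_x, ryy_i):
    """Thorndike Case II correction for direct range restriction,
    followed by correction for measurement error in the criterion."""
    U = 1.0 / u_x                                   # unrestricted/restricted SD ratio
    r_rr = U * r_obs / math.sqrt((U**2 - 1) * r_obs**2 + 1)
    return r_rr / math.sqrt(ryy_i)                  # disattenuate the criterion only

def correct_irr(r_obs, u_x, rxx_a, ryy_i):
    """Hunter, Schmidt, & Le (2006) Case IV correction for indirect
    range restriction; rxx_a is predictor reliability in the applicant
    (unrestricted) population, ryy_i is criterion reliability in the
    incumbent (restricted) sample."""
    # 1. Range-restriction ratio on predictor true scores
    u_t = math.sqrt((u_x**2 - (1 - rxx_a)) / rxx_a)
    # 2. Restricted-sample predictor reliability, then full disattenuation
    rxx_i = 1 - (1 - rxx_a) / u_x**2
    r_tp_i = r_obs / math.sqrt(rxx_i * ryy_i)
    # 3. Case II correction applied at the true-score level
    U_t = 1.0 / u_t
    r_tp_a = U_t * r_tp_i / math.sqrt((U_t**2 - 1) * r_tp_i**2 + 1)
    # 4. Reattenuate for predictor unreliability -> operational validity
    return r_tp_a * math.sqrt(rxx_a)

# Illustrative (made-up) inputs: observed r = .25, u_x = .70,
# applicant-pool predictor reliability .80, criterion reliability .60
drr = correct_drr(0.25, 0.70, 0.60)
irr = correct_irr(0.25, 0.70, 0.80, 0.60)
print(round(drr, 3), round(irr, 3))  # the IRR-based estimate is the larger one
```

With these inputs the IRR-corrected operational validity comes out noticeably higher than the DRR-corrected one, which is the direction of the "% under" gap the chapter reports.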
Context 2
... the case of training performance (Table 12.1), when corrected/adjusted for IRR, publication bias, and measurement error in the criterion, the operational validity of unstructured interviews (ρ = .530) is about 35% greater than that of structured interviews (ρ = .392). ...
Context 3
... Table 12.2, following Schmidt and Zimmerman (2004), in addition to interview structure, we also considered the types of job performance ratings (research-purpose vs. administrative) as a moderator which could affect the magnitude of operational validity estimates for job performance. ...
Context 4
... difference is not substantial and thus caution is needed given that the 95% confidence intervals for the two estimates completely overlap. As shown in Table 12.2, the 95% confidence interval for ...
Context 5
... difference is not small, but caution is still needed given that the 95% confidence intervals for the two estimates overlap substantially. As shown in Table 12.2, the 95% confidence interval for the operational validity of structured interviews (.398–.672) ...
Context 6
... about 20%. Taking research-purpose and administrative job performance ratings together, as found in Table 12.2, these findings suggest that optimal correction methods for artifacts changed the sign of advantage in favor of unstructured interviews over structured interviews. ...
Context 7
... the 95% confidence intervals for both estimates completely overlap, whereas when interviews are structured the rating type makes a larger difference (.135 = .567 − .432) in validity and the 95% confidence intervals do not overlap substantially (see Table 12.2 for more details). ...
Context 8
... lowest validity estimate for unstructured interviews with administrative ratings (ρ = .348). Table 12.3 provides an overall summary of the interview types with the higher validity estimate, categorized by correction method(s) applied and criterion measure/interview purpose. ...
Context 9
... interaction between interview structure and rating type on interview validity for job performance; the validity estimates used for this plot are those corrected for measurement error in the criterion measure, indirect range restriction, and publication bias (see the note for Table 12.2 for details) as reported in the intersections between the second row (validity adj. ...
Context 10
... interaction between interview structure and rating type on interview validity for job performance; the validity estimates used for this plot are those corrected for measurement error in the criterion measure, indirect range restriction, and publication bias (see the note for Table 12.2 for details) as reported in the intersections between the second row (validity adj. for pub bias) and the fourth column (IRR) in Table 12.2. ~Equal represents a validity difference between structured and unstructured interviews of less than .01; ...
Context 11
... the results of this study indicate, applying recent developments in meta-analytic methodology to an existing interview data set can result in rather dramatic changes in long-held assumptions and knowledge about the operational validity estimates of structured and unstructured employment interviews. It is useful to briefly consider in turn how each of the recent meta-analytic advancements applied in this study affects the difference in validity between structured and unstructured interviews (a summary is presented in Table 12.3). When interview validities were estimated in a traditional manner (using DRR correction methods), structured interviews displayed a higher operational validity estimate than unstructured interviews in all cases except for overall training performance (Table 12.1). ...
Context 12
... is useful to briefly consider in turn how each of the recent meta-analytic advancements applied in this study affects the difference in validity between structured and unstructured interviews (a summary is presented in Table 12.3). When interview validities were estimated in a traditional manner (using DRR correction methods), structured interviews displayed a higher operational validity estimate than unstructured interviews in all cases except for overall training performance (Table 12.1). In this exception, structured and unstructured interviews were estimated to have approximately equal validities. ...
Context 13
... interview operational validities were estimated using IRR corrections (unadjusted for publication bias), the structured and unstructured interviews displayed approximately the same operational validity estimate for predicting overall job performance (Table 12.1). Correcting for IRR (and, further, adjusting for publication bias) reversed the sign advantage for structured interviews in favor of unstructured interviews for overall training performance (Table 12.1) and job performance measured using research-purpose ratings or administrative ratings (Table 12.2). ...
Context 14
... interview operational validities were estimated using IRR corrections (unadjusted for publication bias), the structured and unstructured interviews displayed approximately the same operational validity estimate for predicting overall job performance (Table 12.1). Correcting for IRR (and, further, adjusting for publication bias) reversed the sign advantage for structured interviews in favor of unstructured interviews for overall training performance (Table 12.1) and job performance measured using research-purpose ratings or administrative ratings (Table 12.2). These findings illustrate the consequences that may result from correcting for direct range restriction when range restriction is known to be indirect. ...
Context 15
... should again be noted that the information in Table 12.3 is based on comparisons at the mean level alone and thus should be carefully interpreted with reference to the relevant 95% confidence intervals shown in Tables 12.1 and 12.2 because the 95% confidence intervals overlap considerably in some cases. However, it can be safely concluded that unstructured interviews may be as valid as structured interviews in most cases. ...
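The confidence-interval comparison urged in this passage is simple to make concrete. A minimal sketch, using illustrative numbers rather than the chapter's actual summary statistics, computes each 95% confidence interval from a mean validity, its standard deviation across studies, and the number of studies, then tests for overlap:

```python
import math

def ci95(mean_rho, sd_rho, k):
    """95% confidence interval for a meta-analytic mean validity,
    using the standard error of the mean across k studies."""
    se = sd_rho / math.sqrt(k)
    return (mean_rho - 1.96 * se, mean_rho + 1.96 * se)

def intervals_overlap(a, b):
    """True when intervals a = (lo, hi) and b = (lo, hi) share any points."""
    return a[0] <= b[1] and b[0] <= a[1]

# Illustrative summary statistics for two interview types (made up)
structured = ci95(0.535, 0.20, 89)
unstructured = ci95(0.556, 0.22, 39)
print(intervals_overlap(structured, unstructured))
```

When the two intervals overlap, as they do here, a mean-level difference between the estimates should be interpreted cautiously, which is the point the passage makes.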
Context 16
... for publication bias using the trim and fill method also reduced the advantage of structured interviews by lowering their estimated validities (right panel in Table 12.3). Publication bias was found only in structured interview data. ...
Context 17
... to this, Cooper (2003) argued as follows: "In particular, research that fails to achieve standard levels of statistical significance is frequently left in researchers' file drawers.... Published estimates of effect may make relationships appear stronger than if all estimates were retrieved by the synthesist" (p. 6). In our reanalysis, publication bias was greatest in the overall sample of structured interviews used to predict job performance, with the trim and fill method indicating 19 missing studies in addition to the original 106 (Table 12.1 and Figure 12.1). This adjustment resulted in a 20% reduction of the operational validity estimate for structured interviews used to predict job performance, which broadly confirms Cooper's (2003) argument. ...
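The mechanics of the trim-and-fill adjustment described above can be sketched in a simplified, unweighted form: once the number of suppressed studies (k0) has been estimated, mirror images of the k0 most extreme effects are imputed on the opposite side of the mean and the pooled estimate is recomputed. The effect sizes below are made up, and a full implementation (e.g., Duval and Tweedie's estimators) would also determine k0 iteratively and weight studies by precision; this sketch shows only the fill step.

```python
def fill_step(effects, k0):
    """Simplified 'fill' step of trim-and-fill: impute mirror images of
    the k0 largest effects about the unweighted mean, then recompute
    the mean (no precision weights, suppression assumed one-sided)."""
    mean = sum(effects) / len(effects)
    largest = sorted(effects)[-k0:]            # the k0 most extreme effects
    imputed = [2 * mean - e for e in largest]  # reflect them about the mean
    filled = effects + imputed
    return sum(filled) / len(filled)

# Made-up observed validities, skewed high as if small estimates
# were left in the file drawer
observed = [0.30, 0.35, 0.40, 0.50, 0.60]
adjusted = fill_step(observed, k0=2)
print(round(adjusted, 3))  # -> 0.396, lower than the unadjusted mean of 0.43
```

Because the imputed studies land below the mean, the adjusted estimate is always pulled downward when suppression is on the low side, which is the direction of the 20% reduction reported for structured interviews.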


Citations

... Until recently, the available meta-analytic data indicated that the unstructured interview was less valid than the structured interview. Application of the new, more accurate method of correcting for range restriction changed that conclusion (Oh, Postlethwaite, & Schmidt, 2013). As shown in Table 1, the average operational validity of the structured and unstructured interviews is equal at .58. ...