Table 12
Source publication
Employment interviews are one of the most widely used selection tools across organizations, industries, and countries (Dipboye, 1992, 1997; Dipboye & Jackson, 1999; Ryan, McFarland, Baron, & Page, 1999; Salgado, Viswesvaran, & Ones, 2001; Wilk & Cappelli, 2003, Table 1). Interviews also play an important role in government employment decisions, pa...
Contexts in source publication
Context 1
... we show additional information about how much the traditional way of correcting for range restriction (DRR) underestimates the operational validity of employment interviews (the "% under" column in Tables 12.1–12.2) and the 95% confidence intervals for each meta-analytic estimate. Table 12.1 shows the overall validities of structured and unstructured interviews for job and training performance regardless of performance rating type. For job performance, when corrected/adjusted for IRR, publication bias, and measurement error in the criterion measure, the operational validity of unstructured interviews (ρ = .556) ...
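For orientation, the correction for measurement error (unreliability) in the criterion measure that underlies these operational validity estimates follows the standard disattenuation formula; the numerical values below are illustrative placeholders, not the figures used in the chapter:

    \rho_{op} = \frac{\bar{r}_{xy}}{\sqrt{r_{yy}}}\,, \qquad \text{e.g., } \frac{.30}{\sqrt{.52}} \approx .42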
Context 2
... the case of training performance (Table 12.1), when corrected/adjusted for IRR, publication bias, and measurement error in the criterion, the operational validity of unstructured interviews (ρ = .530) is about 35% greater than that of structured interviews (ρ = .392). ...
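The percentage difference quoted here follows directly from the two point estimates reported in the context:

    \frac{.530 - .392}{.392} \approx .352 \approx 35\%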
Context 3
... Table 12.2, following Schmidt and Zimmerman (2004), in addition to interview structure, we also considered the types of job performance ratings (research-purpose vs. administrative) as a moderator which could affect the magnitude of operational validity estimates for job performance. ...
Context 4
... difference is not substantial and thus caution is needed given that the 95% confidence intervals for the two estimates completely overlap. As shown in Table 12.2, the 95% confidence interval for ...
Context 5
... difference is not small, but caution is still needed given that the 95% confidence intervals for the two estimates overlap substantially. As shown in Table 12.2, the 95% confidence interval for the operational validity of structured interviews (.398 to .672) ...
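As a reminder of how such intervals are formed, a meta-analytic 95% confidence interval around a mean operational validity is typically constructed from the estimate and its standard error; the symbols below are generic, not the chapter's own notation:

    95\%\ \text{CI} = \hat{\rho} \pm 1.96 \times SE_{\hat{\rho}}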
Context 6
... about 20%. Taking research-purpose and administrative job performance ratings together, as found in Table 12.2, these findings suggest that the optimal correction methods for artifacts reversed the advantage, favoring unstructured interviews over structured interviews. ...
Context 7
... the 95% confidence intervals for both estimates completely overlap, whereas when interviews are structured the rating type makes a larger difference (.135 = .567 − .432) in validity and the 95% confidence intervals do not overlap substantially (see Table 12.2 for more details). ...
Context 8
... lowest validity estimate for unstructured interviews with administrative ratings (ρ = .348). Table 12.3 provides an overall summary of the interview types with the higher validity estimate, categorized by correction method(s) applied and criterion measure/interview purpose. ...
Context 9
... interaction between interview structure and rating type on interview validity for job performance; the validity estimates used for this plot are those corrected for measurement error in the criterion measure, indirect range restriction, and publication bias (see the note for Table 12.2 for details) as reported in the intersections between the second row (validity adj. ...
Context 10
... interaction between interview structure and rating type on interview validity for job performance; the validity estimates used for this plot are those corrected for measurement error in the criterion measure, indirect range restriction, and publication bias (see the note for Table 12.2 for details) as reported in the intersections between the second row (validity adj. for pub bias) and the fourth column (IRR) in Table 12.2. ~ Equal represents a validity difference between structured and unstructured interviews of less than .01; ...
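A minimal sketch of how such an interaction plot can be reproduced from the four cell estimates in Table 12.2 is given below. The two structured values (.567, .432) and the unstructured/administrative value (.348) come from the surrounding contexts; the unstructured/research-purpose value is a placeholder that must be replaced with the estimate from Table 12.2.

    # Sketch only: structure-by-rating-type interaction plot for interview validity.
    import matplotlib.pyplot as plt

    rating_types = ["Research-purpose", "Administrative"]
    structured = [.567, .432]          # from the text (Table 12.2 cells)
    unstructured = [.55, .348]         # .55 is a PLACEHOLDER; .348 is from the text

    plt.plot(rating_types, structured, marker="o", label="Structured")
    plt.plot(rating_types, unstructured, marker="s", label="Unstructured")
    plt.ylabel("Operational validity (job performance)")
    plt.title("Interview structure x rating type (illustrative)")
    plt.legend()
    plt.show()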
Context 11
... the results of this study indicate, applying recent developments in meta-analytic methodology to an existing interview data set can result in rather dramatic changes in long-held assumptions and knowledge about the operational validity estimates of structured and unstructured employment interviews. It is useful to briefly consider in turn how each of the recent meta-analytic advancements applied in this study affects the difference in validity between structured and unstructured interviews (a summary is presented in Table 12.3). When interview validities were estimated in a traditional manner (using DRR correction methods), structured interviews displayed a higher operational validity estimate than unstructured interviews in all cases except for overall training performance (Table 12.1). ...
Context 12
... is useful to briefly consider in turn how each of the recent meta-analytic advancements applied in this study affects the difference in validity between structured and unstructured interviews (a summary is presented in Table 12.3). When interview validities were estimated in a traditional manner (using DRR correction methods), structured interviews displayed a higher operational validity estimate than unstructured interviews in all cases except for overall training performance (Table 12.1). In this exception, structured and unstructured interviews were estimated to have approximately equal validities. ...
Context 13
... interview operational validities were estimated using IRR corrections (unadjusted for publication bias), the structured and unstructured interviews displayed approximately the same operational validity estimate for predicting overall job performance (Table 12.1). Correcting for IRR (and, further, adjusting for publication bias) reversed the advantage of structured interviews in favor of unstructured interviews for overall training performance (Table 12.1) and for job performance measured using research-purpose ratings or administrative ratings (Table 12.2). ...
Context 14
... interview operational validities were estimated using IRR corrections (unadjusted for publication bias), the structured and unstructured interviews displayed approximately the same operational validity estimate for predicting overall job performance (Table 12.1). Correcting for IRR (and, further, adjusting for publication bias) reversed the advantage of structured interviews in favor of unstructured interviews for overall training performance (Table 12.1) and for job performance measured using research-purpose ratings or administrative ratings (Table 12.2). These findings illustrate the consequences that may result from correcting for direct range restriction when range restriction is known to be indirect. ...
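To make the DRR/IRR distinction concrete, a small sketch of the two correction routines is given below. It follows the standard Thorndike Case II formula for direct range restriction and the procedure described by Hunter, Schmidt, and Le (2006) for indirect range restriction. All inputs (observed r, u_x, reliabilities) are hypothetical and chosen only to show that, for the same inputs, the IRR correction yields a larger operational validity than the DRR correction; they are not the values used in the chapter.

    from math import sqrt

    def correct_drr(r_obs, u_x, ryy_i):
        """Traditional correction: criterion unreliability plus direct range
        restriction (Thorndike Case II). u_x = restricted/unrestricted SD of x."""
        r = r_obs / sqrt(ryy_i)                          # disattenuate for criterion error
        return r / sqrt(u_x**2 + r**2 * (1 - u_x**2))    # Case II range restriction correction

    def correct_irr(r_obs, u_x, ryy_i, rxx_a):
        """Indirect range restriction correction (Hunter, Schmidt, & Le, 2006).
        rxx_a = predictor reliability in the unrestricted (applicant) group."""
        rxx_i = 1 - (1 - rxx_a) / u_x**2                 # predictor reliability, restricted group
        u_t = sqrt((u_x**2 - (1 - rxx_a)) / rxx_a)       # true-score SD ratio
        r_tp = r_obs / sqrt(ryy_i * rxx_i)               # fully disattenuated r (restricted group)
        rho_tp = r_tp / sqrt(u_t**2 + r_tp**2 * (1 - u_t**2))  # Case II on true scores
        return rho_tp * sqrt(rxx_a)                      # reattenuate to operational validity

    # Hypothetical artifact values, for illustration only
    r_obs, u_x, ryy_i, rxx_a = .25, .70, .60, .80
    print(round(correct_drr(r_obs, u_x, ryy_i), 3))          # about .438
    print(round(correct_irr(r_obs, u_x, ryy_i, rxx_a), 3))   # about .545

Under these illustrative inputs, correcting as if range restriction were direct understates the operational validity relative to the indirect correction, which is the pattern the chapter describes.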
Context 15
... should again be noted that the information in Table 12.3 is based on comparisons at the mean level alone and thus should be carefully interpreted with reference to the relevant 95% confidence intervals shown in Tables 12.1 and 12.2 because the 95% confidence intervals overlap considerably in some cases. However, it can be safely concluded that unstructured interviews may be as valid as structured interviews in most cases. ...
Context 16
... for publication bias using the trim and fill method also reduced the advantage of structured interviews by lowering their estimated validities (right panel in Table 12.3). Publication bias was found only in structured interview data. ...
Context 17
... to this, Cooper (2003) argued as follows: "In particular, research that fails to achieve standard levels of statistical significance is frequently left in researchers' file drawers.... Published estimates of effect may make relationships appear stronger than if all estimates were retrieved by the synthesist" (p. 6). In our reanalysis, publication bias was greatest in the overall sample of structured interviews used to predict job performance, with the trim and fill method indicating 19 missing studies in addition to the original 106 (Table 12.1 and Figure 12.1). This adjustment resulted in a 20% reduction of the operational validity estimate for structured interviews used to predict job performance, which broadly confirms Cooper's (2003) argument. ...
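For readers unfamiliar with the method, the core idea of trim and fill can be sketched as follows. This is a deliberately simplified illustration of the "fill" step (imputing mirror-image studies around the pooled mean and recomputing it), not the full iterative Duval and Tweedie estimator used in the chapter, and the input values are hypothetical.

    import numpy as np

    def fill_step(effects, k0):
        """Simplified 'fill' step of trim and fill: impute k0 mirror-image
        studies reflected around the current unweighted mean, then recompute
        the mean. In the real method, k0 (the number of suspected missing
        studies) is estimated iteratively from funnel-plot asymmetry."""
        effects = np.asarray(effects, dtype=float)
        center = effects.mean()
        extremes = np.sort(effects)[-k0:]        # k0 most extreme effects on one side
        imputed = 2 * center - extremes          # mirror them around the mean
        adjusted = np.concatenate([effects, imputed])
        return center, adjusted.mean()

    # Hypothetical validity coefficients, skewed toward large values
    observed = [.20, .25, .30, .35, .40, .45, .50, .55]
    before, after = fill_step(observed, k0=3)
    print(round(before, 3), round(after, 3))     # adjusted mean is lower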
Similar publications
In the present study, we used a Chinese sample to investigate work-home interference (WHI) in relation to psychological strain, physical strain, and job performance. In addition, we tested the moderating effect of Chinese work value (CWV) on those relations. As expected, WHI was positively associated with physical and psychological strains; however...
These strategies help to influence the employer's performance and to raise it to a certain level. From the calculation, the results showed that providing compensation and benefits helps to increase the working performance of the employees. The Relative Importance Index of the surveyed strategies...
Social network detection and identification constitute an important topic in the field of sociology. Previous graph-similarity measures have focused on either the topological structure of the graph or the feature values of vertices. In this work, a multi-similarity measure for communities is described. The approach is devised using multi-similarity properties ba...
This report provides the results of a survey of South African companies which pertain to the application of high performance work practices (HPWPs) in their organisations. The report provides an overview of the findings. The basis of the survey was a questionnaire completed by the financial, marketing and human resource managers of the companies th...
Citations
... The challenge posed by restrictions in the distribution of the data material has perhaps become more pronounced in recent times. Newer and more advanced meta-analyses, which claim to account for this issue to a greater extent, reveal findings that call into question the difference in zero-order validity between the interview formats (Oh, Postlethwaite, & Schmidt, 2013; Schmidt et al., 2016). That is, the structured and the unstructured interview showed equal correlations with performance data, r = .58, when the predictive accuracy of the methods was considered in isolation (Oh et al., 2013; Schmidt et al., 2016). ...
... This is reflected in newer and more advanced meta-analyses, which claim to account for this issue to a greater extent and reveal findings that call into question the difference in zero-order validity between the interview formats (Oh, Postlethwaite, & Schmidt, 2013; Schmidt et al., 2016). That is, the structured and the unstructured interview showed equal correlations with performance data, r = .58, when the predictive accuracy of the methods was considered in isolation (Oh et al., 2013; Schmidt et al., 2016). The findings further indicated that the unstructured interview could actually be a better indicator than the structured interview for selection into education and training programs (Oh et al., 2013). ...
... That is, the structured and the unstructured interview showed equal correlations with performance data, r = .58, when the predictive accuracy of the methods was considered in isolation (Oh et al., 2013; Schmidt et al., 2016). The findings further indicated that the unstructured interview could actually be a better indicator than the structured interview for selection into education and training programs (Oh et al., 2013). This speaks against the use of structured interviews at FOS. ...
The Norwegian Home Guard plays a crucial role within the Norwegian Armed Forces. It is the first, and in some places the only, military contribution in a crisis of war and peace, and it possesses a unique understanding of the local community. However, most of its personnel are conscripts who are obliged to participate in combat training on a yearly basis from the age of 19 to 36. Its NCOs are generally young adults who have just finished high school and have little to no prior leadership experience. Taking on such a great leadership challenge obviously demands talent. The question of this thesis is to what extent the Home Guard's Non-Commissioned Officer School succeeds in identifying the right personnel. Selection data and school performance from two school years (N = 78) were statistically analyzed, first by correlation and then by hierarchical regression. In general, the results are somewhat surprising, as they diverge from prior similar studies in some areas. However, the overall accuracy of the selection process corresponds well to that of prior similar studies. Results are discussed and conclusions are drawn.
... Until recently, the available meta-analytic data indicated that the unstructured interview was less valid than the structured interview. Application of the new, more accurate method of correcting for range restriction changed that conclusion (Oh, Postlethwaite, & Schmidt, 2013). ...
Key digested message
This article explores the role of intuition in judgement and decision-making in assessment and selection and proposes that the shift towards statistically driven approaches has led to the dismissal of intuition, with negative consequences.
The article by Schmidt and Hunter (1998) provides an overview and comparison of the criterion validity of a wide range of personnel selection procedures. For their article, Schmidt and Hunter selected meta-analytically derived validities that had been corrected for range restriction in the selection procedure and for unreliability of the criterion (i.e., the measurement of job performance). In doing so, Schmidt and Hunter aimed to enable a comprehensive comparison of the true validity of the various selection procedures. Nevertheless, a number of aspects complicate the interpretation of this comparison or may lead to incorrect conclusions: first, the quality and data basis of the meta-analyses used for the comparison; second, the missing information on uncorrected meta-analytic validities of the various selection procedures; third, the differing procedures and assumptions underlying the corrections carried out in the meta-analyses used; fourth, the choice of criterion to be predicted by the various selection procedures; and fifth, the problem of selection procedures not considered in this overview. This contribution explains each of these aspects and its significance in more detail and shows how conclusions regarding the validity of selection procedures can change when assumptions and procedures other than those of Schmidt and Hunter (1998) are chosen.
A first version of the interview standards of the Forum Assessment was published in 2008 and was thus probably the first systematic compilation of quality criteria for interviews used in aptitude assessment. The present second, updated and completely revised version is the result of a working group of the Forum Assessment and is to be officially published in 2020.