Organizational Research Methods

Published by SAGE Publications
Online ISSN: 1094-4281
Article
The author expresses gratitude for and reflects on the growth of the journal Organizational Research Methods (ORM). He notes the important contributions made by the associate editors of ORM, including Herman Aguinis, Jeff Edwards, Karen Locke, and Bob Vandenberg. The journal has published several feature topics, and its editorial board members and ad hoc reviewers provide constructive feedback to authors. He expresses appreciation to several individuals whose professional and personal support has contributed both directly and indirectly to the journal's growth.
 
Response Facilitation Approaches 
N-BIAS Techniques 
Article
A survey is a potentially powerful assessment, monitoring, and evaluation tool available to organizational scientists. To be effective, however, a survey must be completed, and in the inevitable case of nonresponse, we must understand whether our results exhibit bias. In this article, the nonresponse bias impact assessment strategy (N-BIAS) is proposed. The N-BIAS approach is a series of techniques that, when used in combination, provide evidence about a study's susceptibility to bias and its external validity. The N-BIAS techniques stem from a review of extant research and theory. To inform future revisions of the N-BIAS approach, a research agenda for advancing the study of survey response and nonresponse is provided.
 
Article
Missing data are a common problem in psychological research. Missing data can occur due to attrition in a longitudinal study or nonresponse to questionnaire items in a laboratory or field setting. Improper treatments of missing data (e.g., listwise deletion, mean imputation) can lead to biased statistical inference when statistical techniques that require complete cases are applied. This paper presents a method for dealing with missing data, multiple imputation (Rubin, 1987; Schafer, 1997), that allows for valid statistical inference using standard complete-data analyses. Software for implementing multiple imputation under a multivariate normal model is available (Schafer, 1997; King, Honaker, Joseph, & Scheve, 1999) and should be routinely used for imputing missing data. We illustrate the application of these techniques using data from the HomeNet project (Kraut, Patterson, Lundmark, Kiesler, Mukhopadhyay, & Scherlis, 1998). Any social or psychological research which makes use of collect...
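To make the mechanics concrete, here is a minimal sketch of multiple imputation under a simple normal regression model, using only NumPy. It is not the Schafer or King et al. software the abstract mentions; the variable names and missingness pattern are illustrative, and a fully proper procedure would also draw the regression parameters from their posterior.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a bivariate data set (x fully observed, y missing at random).
n = 500
x = rng.normal(0, 1, n)
y = 0.5 * x + rng.normal(0, 1, n)
y_obs = y.copy()
y_obs[rng.random(n) < 0.3 * (x > 0)] = np.nan  # missingness depends on x (MAR)

def impute_once(x, y, rng):
    """Draw one imputed data set from a normal regression of y on x."""
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta, *_ = np.linalg.lstsq(X, y[obs], rcond=None)
    resid = y[obs] - X @ beta
    sigma = resid.std(ddof=2)
    X_mis = np.column_stack([np.ones((~obs).sum()), x[~obs]])
    y_imp = y.copy()
    # Proper imputation adds residual noise rather than filling in the mean
    # (a fully Bayesian version would also draw beta and sigma themselves).
    y_imp[~obs] = X_mis @ beta + rng.normal(0, sigma, (~obs).sum())
    return y_imp

# Create m imputed data sets and pool the mean of y with Rubin's rules.
m = 20
estimates, variances = [], []
for _ in range(m):
    y_imp = impute_once(x, y_obs, rng)
    estimates.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / n)

q_bar = np.mean(estimates)              # pooled point estimate
u_bar = np.mean(variances)              # within-imputation variance
b = np.var(estimates, ddof=1)           # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b     # Rubin's total variance
print(f"pooled mean = {q_bar:.3f}, SE = {np.sqrt(total_var):.3f}")
```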
 
Article
Continued discussion and debate regarding the appropriate use of null hypothesis significance testing (NHST) has led to greater reliance on effect size testing (EST) in the published literature. This article examines the myth that uncritically replacing NHST with EST will improve our science. The use of NHST and EST is described, along with a summary of the arguments offered for and against each. After addressing the veracity of these assertions, the article describes the concept of the translation mechanism and compares the success of NHST and EST as translation mechanisms. Finally, the authors suggest changes that may facilitate translation in future research.
 
Article
In management research, theory testing confronts a paradox described by Meehl in which designing studies with greater methodological rigor puts theories at less risk of falsification. This paradox exists because most management theories make predictions that are merely directional, such as stating that two variables will be positively or negatively related. As methodological rigor increases, the probability that an estimated effect will differ from zero likewise increases, and the likelihood of finding support for a directional prediction boils down to a coin toss. This paradox can be resolved by developing theories with greater precision, such that their propositions predict something more meaningful than deviations from zero. This article evaluates the precision of theories in management research, offers guidelines for making theories more precise, and discusses ways to overcome barriers to the pursuit of theoretical precision.
 
The Action Research Process for the Transferring Insight to Practice (TIP) Project. Note: PA = Poverty Alliance.
Article
This article contributes to an understanding of action research as a phenomenological methodological paradigm for carrying out research into management and organizations. Two case studies of action research are presented. Three areas of choice—overtness, visibility, and riskiness—that emerge out of the cases and that are significant issues in designing action research projects are discussed. Highlighting and explicating these provides a basis for greater rigor and reflexivity in action research.
 
Article
Assessment of noncognitive constructs in organizational research and practice is challenging because of response biases that can distort test scores. Researchers must also deal with time constraints and the ensuing trade-offs between test length and the number of constructs measured. This article describes a novel way of improving the efficiency of noncognitive assessments using computer adaptive testing (CAT) with multidimensional pairwise preference (MDPP) items. Tests composed of MDPP items are part of a broader family of forced choice measures that ask respondents to choose between two or more equally desirable statements in an effort to combat response distortion. The authors conducted four computer simulations to explore the influences of test design, dimensionality, and the advantages of adaptive item selection for trait score and error estimation with tests involving as many as 25 dimensions. Overall, adaptive MDPP testing produced gains in accuracy over nonadaptive MDPP tests comparable to those observed with traditional unidimensional CATs. In addition, an empirical illustration involving a 15-dimension MDPP CAT administered in a field setting showed patterns of correlations that were consistent with expectations, thus showing construct validity.
 
Article
Network-based research in the management field largely assumes one-mode (unipartite) networks, despite the widespread presence of two-mode (bipartite) networks. In empirical work, scholars usually project a bipartite network onto a unipartite network, ignoring issues related to the interdependence of ties and the potential loss of information. Yet new advances in measures and methods related to bipartite networks in the fields of sociology, physics, and biology may make such tactics unnecessary. This article presents an overview of three research streams related to bipartite networks, namely, (a) refinements related to the projection of bipartite networks onto unipartite networks; (b) the extension of network measures from unipartite to bipartite networks, with a focus on clustering coefficients; and (c) approaches unique to bipartite networks, such as nestedness. We apply these approaches and compare their findings with those of a traditional unipartite network analysis, using both a simple example and a sample of 10,223 directors of 1,528 Indian firms in 2009.
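As an illustration of the first two research streams, the sketch below builds a toy director-firm affiliation network with the networkx library (our choice; the article is not tied to any particular software), projects it onto one mode, and computes clustering coefficients directly on the two-mode structure. All node names are invented.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy director-firm affiliation (two-mode) network.
B = nx.Graph()
directors = ["d1", "d2", "d3", "d4"]
firms = ["f1", "f2", "f3"]
B.add_nodes_from(directors, bipartite=0)
B.add_nodes_from(firms, bipartite=1)
B.add_edges_from([("d1", "f1"), ("d2", "f1"), ("d2", "f2"),
                  ("d3", "f2"), ("d3", "f3"), ("d4", "f3")])

# (a) Projection onto the director mode; edge weights count shared boards.
P = bipartite.weighted_projected_graph(B, directors)
print(list(P.edges(data=True)))  # e.g., d1 and d2 share firm f1

# (b) Bipartite clustering coefficients computed on the two-mode network
# itself, avoiding the information loss that projection entails.
print(bipartite.clustering(B, directors))

# (c) A simple bipartite-specific measure: density of the two-mode graph.
print(bipartite.density(B, directors))
```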
 
Article
Multilevel theory and research have advanced organizational science but are limited because the research focus is incomplete. Most quantitative research examines top-down, contextual, cross-level relationships. Emergent phenomena that manifest bottom-up from the psychological characteristics, processes, and interactions among individuals—although examined qualitatively—have been largely neglected in quantitative research. Emergence is theoretically assumed, examined indirectly, and treated as an inference regarding the construct validity of higher level measures. As a result, quantitative researchers are investigating only one fundamental process of multilevel theory and organizational systems. This article advances more direct, dynamic, and temporally sensitive quantitative research methods designed to unpack emergence as a process. We argue that direct quantitative approaches, largely represented by computational modeling or agent-based simulation, have much to offer with respect to illuminating the mechanisms of emergence as a dynamic process. We illustrate how indirect and direct approaches can be complementary and, appropriately integrated, have the potential to substantially advance theory and research. We conclude with a set of recommendations for advancing multilevel research on emergent phenomena in teams and organizations.
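A minimal agent-based sketch of the kind of direct approach the authors advocate: agents repeatedly interact in dyads, and within-team dispersion is tracked over time, so consensus emergence is observed as a process rather than inferred from a single aggregated snapshot. The averaging rule and all parameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(7)

# 10 teams of 5 agents; each agent holds a latent attitude.
n_teams, team_size, steps = 10, 5, 50
attitudes = rng.normal(0, 1, (n_teams, team_size))

dispersion = []
for t in range(steps):
    for g in range(n_teams):
        # Dyadic interaction: a random pair moves toward its midpoint
        # (a simple averaging rule; real models vary this mechanism).
        i, j = rng.choice(team_size, size=2, replace=False)
        midpoint = (attitudes[g, i] + attitudes[g, j]) / 2
        attitudes[g, i] += 0.5 * (midpoint - attitudes[g, i])
        attitudes[g, j] += 0.5 * (midpoint - attitudes[g, j])
    dispersion.append(attitudes.std(axis=1).mean())

# Within-team dispersion shrinks over time: consensus "emerges" bottom-up
# from repeated local interactions, which a static snapshot would miss.
print([round(d, 3) for d in dispersion[::10]])
```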
 
A, Boxplots of simulation results for the biased standard deviation as a function of group size. B, Boxplots of simulation results for the bias-corrected standard deviation as a function of group size.  
Group Size-Dependent Absolute Results for Various Diversity Indexes
Group Size-Dependent Bias of Various Diversity Indexes in Percentage Deviation From Reference Category
Bias-Corrected Operationalizations of Group Diversity Types
Article
Work group diversity can be conceptualized in different ways (i.e., variety, separation, and disparity), and the appropriate operationalization of a diversity dimension depends on which of these diversity types researchers have in mind. Based on prior work on the measurement of the different types of diversity, we show that the most common diversity indexes (i.e., Blau’s index, Teachman’s index, standard deviation, mean Euclidean distance [MED], Gini coefficient, and coefficient of variation) are systematically biased whenever they are used in field studies in which the overall sample comprises groups of varying sizes. Using simulated data, we illustrate this bias inherent in all of the common diversity measures. This bias can lead to erroneous conclusions concerning the impact of group size and the relationship between group diversity and group outcomes. We offer bias-corrected formulas and suggest that diversity researchers henceforth use these adjusted versions when investigating the effects of group diversity in organizational settings.
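The flavor of the bias can be seen with Blau's index alone. In the sketch below (our construction, not the authors' simulation), random groups of varying size are drawn from a population whose true Blau index is .75; small groups systematically understate diversity, and rescaling by n/(n - 1), our assumption of the corrected form for this index, removes the distortion.

```python
import numpy as np

rng = np.random.default_rng(1)

def blau(group):
    """Blau's index of variety: 1 minus the sum of squared proportions."""
    _, counts = np.unique(group, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

# Population with 4 equally likely categories: true Blau = 1 - 4(.25^2) = .75.
categories = np.arange(4)

for n in (3, 5, 10, 50):
    vals = [blau(rng.choice(categories, size=n)) for _ in range(20_000)]
    naive = np.mean(vals)
    corrected = naive * n / (n - 1)   # small-group correction (assumed form)
    print(f"n={n:3d}  naive={naive:.3f}  corrected={corrected:.3f}")
# Small groups understate diversity; the corrected values recover the
# population figure (~.75) regardless of group size.
```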
 
Article
Two-mode networks are used to describe dual patterns of association between distinct social entities through their joint involvement in categories, activities, issues, and events. In empirical organizational research, the analysis of two-mode networks is typically accomplished either by (a) decomposition of the dual structure into its two unimodal components defined in terms of indirect relations between entities of the same kind or (b) direct statistical analysis of individual two-mode dyads. Both strategies are useful, but neither is fully satisfactory. In this article, the authors introduce newly developed stochastic actor-based models for two-mode networks that may be adopted to redress the limitations of current analytical strategies. The authors specify and estimate the model in the context of data they have collected on the dual association between software developers and software problems observed during a complete release cycle of an open source software project. The authors discuss the general methodological implications of the models for organizational research based on the empirical analysis of two-mode networks.
 
Article
Agent-based simulation models make it possible to reproduce the structure of interactions between the members of an organization, deriving organizational decision making from individual behavior. This article explains the philosophy of agent-based models, presents exemplary applications to crucial issues in organization science, and discusses their validation and acceptance by the scientific community.
 
Article
This article describes a new approach for assessing cognitive precursors to aggression. Referred to as the Conditional Reasoning Measurement System, this procedure focuses on how people solve what on the surface appear to be traditional inductive reasoning problems. The true intent of the problems is to determine whether solutions based on implicit biases (i.e., biases that operate below the surface of consciousness) are logically attractive to a respondent. The authors focus on the types of implicit biases that underlie aggressive individuals' attempts to justify aggressive behavior. People who consistently select solutions based on these types of biases are scored as being potentially aggressive because they are cognitively prepared to rationalize aggression. Empirical tests of the conditional reasoning system are interpreted in terms of Ozer's criteria for ideal personality instruments. Noteworthy findings are that the system has acceptable psychometric properties and an average, uncorrected empirical validity of 0.44 against behavioral indicators of aggression (based on 11 studies).
 
Percentage of articles published in Personnel Psychology and the Journal of Applied Psychology that used interrater agreement statistics, including rWG, average deviation (AD), intraclass correlation (ICC), percentage agreement, and Cohen's kappa.
Critical Values and Null Ranges for AD_M Given Distributions Defined by Skew
Article
Currently, guidelines do not exist for applying interrater agreement indices to the vast majority of methodological and theoretical problems that organizational and applied psychology researchers encounter. For a variety of methodological problems, we present critical values for interpreting the practical significance of observed average deviation (AD) values relative to either single items or scales. For a variety of theoretical problems, we present null ranges for AD values, relative to either single items or scales, to be used for determining whether an observed distribution of responses within a group is consistent with a theoretically specified distribution of responses. Our discussion focuses on important ways to extend the usage of interrater agreement indices beyond problems relating to the aggregation of individual level data.
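For readers unfamiliar with the index, AD for a single item is simply the average absolute deviation of ratings from the item mean (or median). The sketch below computes AD_M and applies the conventional A/6 practical-significance cutoff, which we supply as an assumption here; the article's contribution is to replace such generic cutoffs with distribution-specific critical values and null ranges.

```python
import numpy as np

def ad_m(ratings):
    """Average deviation around the mean for a single item (AD_M)."""
    ratings = np.asarray(ratings, dtype=float)
    return np.mean(np.abs(ratings - ratings.mean()))

group = [4, 4, 5, 3, 4, 5]   # six raters on a 5-point item
A = 5                        # number of response options
print(f"AD_M = {ad_m(group):.3f}")
# A common rule of thumb treats AD_M <= A/6 (here 5/6 ~ 0.83) as showing
# practically significant agreement; the article's critical values and
# null ranges refine this by conditioning on distributional assumptions.
print("agrees:", ad_m(group) <= A / 6)
```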
 
Article
The measure of within-group agreement most frequently encountered in organizational psychology is the rWG index. The rWG index is determined by comparing the observed group variance among raters with an expected random variance. The most critical issue in calculating the rWG is the choice of an appropriate random distribution that would be expected to follow from raters making their ratings at random. A data-driven approach that uses random-group resampling (RGR) procedures to determine the expected random variance has been proposed. In the present study, the application of the RGR procedure will be illustrated with reference to students' ratings of their mathematics instruction and critically compared with a recently proposed simulation-based approach. It will be shown mathematically that the probability of obtaining statistically significant within-group agreement when applying the RGR procedure strongly depends on the intraclass correlation as well as on the group sizes. Finally, implications for applying the RGR procedure to assess within-group agreement in multilevel data will be discussed.
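A compact sketch of both ingredients, with illustrative data: rWG(1) against the uniform null, and an RGR-style empirical null built by repeatedly composing pseudo-groups from pooled ratings. The pooled sample here is simulated, so treat this as a schematic of the procedure rather than a reproduction of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def rwg(ratings, n_options):
    """rWG(1): 1 - observed variance / uniform-null expected variance."""
    s2 = np.var(ratings, ddof=1)
    sigma_eu = (n_options ** 2 - 1) / 12.0   # uniform null variance
    return 1 - s2 / sigma_eu

# Ratings of one instructor by a class of 8 students on a 5-point scale.
group = np.array([4, 5, 4, 4, 3, 4, 5, 4])
print(f"rWG(1) = {rwg(group, 5):.3f}")

# Random-group resampling (RGR): reshuffle raters into pseudo-groups to
# build an empirical null distribution for the within-group variance.
all_ratings = rng.integers(1, 6, size=400)   # pooled sample (illustrative)
obs_var = np.var(group, ddof=1)
null_vars = []
for _ in range(5_000):
    pseudo = rng.choice(all_ratings, size=len(group), replace=False)
    null_vars.append(np.var(pseudo, ddof=1))
# Small values indicate agreement beyond what random grouping produces.
p_value = np.mean(np.array(null_vars) <= obs_var)
print(f"P(random-group variance <= observed) = {p_value:.3f}")
```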
 
Article
For continuous constructs, the most frequently used index of interrater agreement, rwg(1), can be problematic. Typically, rwg(1) is estimated with the assumption that a uniform distribution represents no agreement. The authors review the limitations of this uniform-null rwg(1) index and discuss alternative methods for measuring interrater agreement. A new interrater agreement statistic, awg(1), is proposed. The authors derive the awg(1) statistic and demonstrate that awg(1) is an analogue to Cohen's kappa, an interrater agreement index for nominal data. A comparison is made between agreement estimates based on the uniform-null rwg(1) and awg(1), and issues such as minimum sample size and practical significance levels are discussed. The authors close with recommendations regarding the use of rwg(1)/rwg(J) when a uniform null is assumed, rwg(1)/rwg(J) indices that do not assume a uniform null, awg(1)/awg(J) indices, and generalizability estimates of interrater agreement.
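For comparison with the uniform-null rwg(1) above, here is our reconstruction of the awg(1) statistic, which indexes agreement against the maximum variance attainable at the observed mean on a bounded scale. The formula is our reading of Brown and Hauenstein (2005) and should be checked against the source before use.

```python
import numpy as np

def awg1(ratings, low, high):
    """a_wg(1) as we reconstruct it: agreement relative to the maximum
    variance possible at the observed mean on a [low, high] scale
    (a sketch, not a canonical implementation)."""
    x = np.asarray(ratings, dtype=float)
    n, m, s2 = len(x), x.mean(), np.var(x, ddof=1)
    max_var = ((high + low) * m - m ** 2 - high * low) * n / (n - 1)
    return 1 - 2 * s2 / max_var

group = [4, 5, 4, 4, 3, 4, 5, 4]
print(f"a_wg(1) = {awg1(group, 1, 5):.3f}")
```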
 
Framework for Elevating Constructs Using Computer-Aided Text Analysis. 
Descriptive Statistics and Correlations Among the Dimensions of Organizational Psychological Capital. 
Validation Variable Descriptive Statistics and Correlation Matrix. 
Article
Applying individual-level constructs to higher levels of analysis can be a fruitful practice in organizational research. Although this practice is beneficial in developing and testing theory, there are measurement and validation concerns that, if improperly addressed, may threaten the validity and utility of the research. This article illustrates how computer-aided text analysis might be utilized to facilitate construct elevation while ensuring proper validation. Specifically, we apply a framework to develop organizational-level operationalizations of individual-level constructs using the psychological capital construct as an example.
 
Article
Corpus linguistics studies real-life language use on the basis of a text corpus, drawing on both quantitative and qualitative text analysis techniques. This article seeks to bridge the gap between the social sciences and linguistics by introducing the techniques of corpus linguistics to the field of computer-aided text analysis. The article first discusses the differences between corpus linguistics and computer-aided text analysis, which is divided into computer-aided content analysis and computer-aided interpretive textual analysis. It then outlines the techniques of corpus linguistics for exploring textual data. In an exemplary analysis of letters to shareholders, the article demonstrates how these techniques can be applied to compare letters to shareholders from two different years. The article concludes with a discussion of the strengths and limitations of corpus linguistics for management and organization studies.
 
Item Intercorrelation Matrix
CFA Results for Extreme Item Wording
Item-level CFA Results for Extreme Item Wording
Classical Test Statistics
Article
When writing items for survey measures, common advice dictates that one should avoid using extreme words like "always." However, the systematic study of extreme wording effects is rare. The current study applies confirmatory factor analysis (CFA) and item response theory (IRT) methods to assess the effects of extreme item wording (i.e., the word "always") on item-level, response option-level, and scale- (or test-) level invariance. The authors hypothesized that including the word "always" in an item stem would affect responses such that individuals would be less likely to strongly agree with these items. To test this hypothesis, six items with extreme wording from the Wong and Law Emotional Intelligence Scale (WLEIS) were compared with more moderately worded versions of the same items. Although an effect was found for item wording, the magnitude of nonequivalence was small and is unlikely to have a strong influence on scale-level measurement outcomes. Implications for evaluating survey psychometric properties are discussed.
 
Article
Meta-analysis is commonly used to quantitatively review research findings in the social sciences. This article looks at what happens next, after a meta-analysis is published. The authors examine how meta-analytic findings are cited in subsequent studies and whether the citing authors take full advantage of the information meta-analyses provide. A review of 1,489 citations to meta-analyses in 319 empirical studies published in three journals over two decades indicates that the frequency of citing meta-analyses is accelerating. An analysis of citing practices indicates that authors use data for a variety of purposes in subsequent research studies. However, the citing studies often underreported important aspects of meta-analytic data, and additional opportunities exist to build on the data provided by meta-analytic reviews.
 
Well-Being Items: Means and Standard Deviations
Job Satisfaction Items: Means and Standard Deviations, Women Managers
Article
Summated scales are widely used in management research to measure constructs such as job satisfaction and organizational commitment. This article suggests that Revelle’s (1979) coefficient beta, implemented in Revelle’s (1978) ICLUST item-clustering procedure, should be used in conjunction with Cronbach’s coefficient alpha measure of internal consistency as criteria for judging the dimensionality and internal homogeneity of summated scales. The approach is demonstrated using ICLUST reanalyses of sample responses to Warr’s (1990) affective well-being scale and O’Brien, Dowling, and Kabanoff’s (1978) job satisfaction scale. Coefficient beta and item clustering are shown to more clearly identify the homogeneity and internal dimensional structure of summated scale constructs than do traditional principal components analyses. Given these benefits, Revelle’s approach is a viable alternative methodology for scale construction in management, organizational, and cross-cultural contexts, especially when researchers need to make defensible choices between using whole scales or subscales.
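The logic is easy to demonstrate. On our reading, Revelle's beta is the worst split-half reliability (the minimum Guttman lambda-4 across all even splits), so a scale built from two unrelated item clusters can show a respectable alpha while beta collapses, exposing the multidimensionality. The simulated data and exhaustive-search implementation below are illustrative, not Revelle's ICLUST.

```python
import numpy as np
from itertools import combinations

def cronbach_alpha(items):
    """items: n_persons x k matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def worst_split_half(items):
    """Minimum split-half reliability (Guttman lambda-4) over all even
    splits; we take this as the sense of Revelle's coefficient beta."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    coefs = []
    for half in combinations(range(k), k // 2):
        a = items[:, list(half)].sum(axis=1)
        b = np.delete(items, list(half), axis=1).sum(axis=1)
        coefs.append(2 * (1 - (a.var(ddof=1) + b.var(ddof=1)) / total_var))
    return min(coefs)

# Two distinct 3-item clusters: alpha looks fine while beta reveals the
# multidimensionality, which is the article's point.
rng = np.random.default_rng(11)
f1, f2 = rng.normal(size=(2, 300, 1))
items = np.hstack([f1 + rng.normal(0, .7, (300, 3)),
                   f2 + rng.normal(0, .7, (300, 3))])
print(f"alpha = {cronbach_alpha(items):.2f}, "
      f"beta = {worst_split_half(items):.2f}")
```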
 
Article
A Monte Carlo simulation was used to examine the effectiveness of univariate analysis of variance (ANOVA), multivariate analysis of variance (MANOVA), and multiple indicator structural equation (MISE) modeling to analyze data from multivariate factorial designs. The MISE method yielded downwardly biased standard errors for the univariate parameter estimates in the small sample size conditions. In the large sample size data conditions, the MISE method outperformed MANOVA and ANOVA when the covariate accounted for variation in the dependent variable and variables were unreliable. With multivariate statistical tests, MANOVA outperformed the MISE method in the Type I error conditions and the MISE method outperformed MANOVA in the Type II error conditions. The Bonferroni methods were overly conservative in controlling Type I error rates for univariate tests, but a modified Bonferroni method had higher statistical power than the Bonferroni method. Both the Bonferroni and modified methods adequately controlled multivariate Type I error rates.
 
Article
In the two decades since storytelling was called the "sensemaking currency of organizations," storytelling scholarship has employed a wide variety of research methods. The storytelling diamond model introduced here offers a map of this paradigmatic terrain based on wider social science ontological, epistemological, and methodological (both quantitative and qualitative) considerations. The model is beneficial for both researchers and reviewers as they plan for and assess the quality and defensibility of storytelling research designs. The main paradigms considered in the storytelling diamond model are narrativist, living story, materialist, interpretivist, abstractionist, and practice, all as integrated by the antenarrative process.
 
Article
A common research problem in validation studies is the estimation of the population correlation between predictor X and performance Y from a non-randomly selected sample. Procedures for obtaining unbiased estimates of population correlations have been developed for a limited set of conditions in which no rejection of job offers is assumed. However, in applied selection settings, it is very likely that some of the candidates who receive job offers reject them through a self-selection process. If an estimation model based on the assumption that there is no rejection of job offers via self-selection is used, estimates of population parameters may be biased due to model misspecification. In the current study, a procedure is developed that is applicable to a variety of realistic validation settings, including a setting in which both institutional selection and applicants' rejection of job offers are involved. Data requirements of the procedure are also discussed.
 
Article
Lodahl and Kejner’s Job Involvement (JI) measure has been and continues to be heavily used despite known measurement deficiencies. Using a convergent evidence approach, the authors examine the psychometric properties of that scale and offer a refined version that accurately taps the JI construct. Based on a combination of five methodologies (qualitative content analysis, classical item analyses, item response theory analyses, partial confirmatory factor analyses, and discriminant validity analysis) applied to five samples, results indicate that numerous items function inadequately as indicators of JI, whereas a core of items have superior item statistics and conceptually match the definition of JI. The advantages of using a convergent evidence approach are discussed.
 
Article
The use of interpretive approaches within the management and organizational sciences has increased substantially. However, appropriate criteria for justifying research results from interpretive approaches have not developed as rapidly as their adoption. This article examines the potential of common criteria for justifying knowledge produced within interpretive approaches. Based on this investigation, appropriate criteria are identified and a strategy for achieving them is proposed. Finally, an interpretive study of competence in organizations is used to demonstrate how the proposed criteria and strategy can be applied to justify knowledge produced within interpretive approaches.
 
A visual representation of proxies with strong and weak construct validity.
The Use of R&D Intensity as an Archival Proxy.
The Use of Patent Counts as an Archival Proxy.
The Use of Patent Citations as an Archival Proxy.
Article
Archival proxies have long played a central role within strategic management research, but the degree to which archival proxies are construct valid measures of theoretical constructs remains a source of concern. In some cases, there does not appear to be a close association between an archival proxy and the construct that the proxy is meant to capture. In this brief commentary, we discuss the use of three prominent archival proxies (research and development intensity, patent counts, and patent citations) within recent articles in three leading journals. Each of these measures has been used to represent a wide variety of constructs, which creates challenges when interpreting findings. We then offer three suggestions for improving the use of archival proxies. Implementation of these suggestions would enhance knowledge development within the strategic management field.
 
Article
The authors illustrate a problem with confirmatory factor analysis (CFA)-based strategies to model disaggregated multitrait-multirater (MTMR) data—the potential to find markedly different results with the same sample of ratees simply as a result of how one selects and identifies raters within the data set one has gathered for analysis. Using performance ratings gathered as part of a large criterion-related validation study, the authors show how such differences manifest themselves in several ways including variation in (a) covariance matrices that serve as input for the modeling effort, (b) model convergence, (c) admissibility of solutions, (d) overall model fit, (e) model parameter estimates, and (f) model selection. Implications of this study for past research and recommendations for future CFA-based MTMR modeling efforts are discussed.
 
Article
This study examines the construct validity of the Goldberg International Personality Item Pool (IPIP) measure by comparing it to a well-developed measure of the five-factor model, the NEO Five-Factor Inventory (NEO-FFI). A sample of 353 diverse students from a large U.S. university completed both measures. Structural equation modeling was used to conduct the multitrait-multimethod, multiple-group, and latent mean analyses. A model with five correlated trait factors and two method factors provided the best fit to the data. Support for convergent and discriminant validity was also found. Racial and gender differences were relatively small for both instruments. These results support the construct validity of the IPIP. However, neither the NEO-FFI nor the IPIP produced a very good fit when analyzing item-level data, suggesting considerable room for improvement.
 
Article
The authors examine how strategy scholars have measured and tested industry effects. They report findings from three studies. First, they replicate the Dess, Ireland, and Hitt (1990) article on industry controls in strategic management research using a new sample of studies published during 2000 to 2009, finding that there has been a decrease in the proportion of articles that do not control for industry effects at all and at the same time noting a significant increase in the number of single-industry studies. Second, they employ a fine-grained content analysis of articles published in the Strategic Management Journal at three different points during the study period to identify the different ways that industry effects have been considered. Findings depict a myriad of highly diverse industry-level measures that researchers have applied. Third, they test the empirical implications of applying different measures of one particular industry characteristic, industry performance. They demonstrate that empirical findings and the interpretation of theoretical models can differ based on how industry effects are incorporated. Recommendations are offered for guiding future research about how to examine industry effects.
 
Summary of studies using the structural measures of causal maps
Descriptive statistics of the undergraduate and evening MBA student samples
Results of Factor Analysis
Article
Recently, text-based causal maps (TBCMs) have generated enthusiasm as a methodological tool because they provide a way of accessing large, untapped sources of data generated by organizations. Although TBCMs have been used extensively in organizational behavior and strategic management research, studies assessing the psychometric properties of TBCM measures are virtually nonexistent. With the intention of facilitating large-sample substantive research using TBCMs, we examine the construct validity of the two most frequently employed structural properties of TBCMs: complexity and centrality. In assessing construct validity, we examine the internal consistency, dimensionality, and predictive validity of the structural properties. Our results suggest that complexity is not a general cognitive attribute. Rather, it is indicative of domain knowledge. On the other hand, centrality, which reflects the degree of hierarchy characterizing the TBCM, is related to cognitive ability and may reflect general information processing. Moreover, we found that complexity and centrality, but not cognitive ability, predicted student performance. We discuss the implications of these results.
 
Article
The personnel selection literature has recently included discussion of statistically based banding as a way to handle some differences in test scores when assessing job applicants. Banding uses classical test theory and an estimated standard error of measurement to create bands of individual scores, and these bands are treated as equivalent with respect to top-down selection. However, such banding operationally assumes that standard errors of measurement are homogeneous, whereas a focus on the top score logically and statistically implies the use of a conditional standard error. Other methods, such as item response theory and binomial error models, are therefore more appropriate for computing bands. Via example and analysis, the authors demonstrate that more accurately computed bands are substantially narrower under a variety of circumstances than currently computed bands. Bands as currently constructed will inaccurately label an excess of individuals as equivalent, particularly if the test is relatively easy.
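The arithmetic behind the claim, under assumptions we supply here (a 50-item test, reliability .90, and Lord's binomial error model for the conditional standard error):

```python
import numpy as np

k = 50      # number of test items
sd = 8.0    # observed-score standard deviation (assumed)
rxx = 0.90  # reliability estimate (assumed)
z = 1.96    # two-tailed 95% criterion

# Traditional band: a homogeneous SEM, widened by sqrt(2) because bands
# compare pairs of scores (standard error of a difference).
sem = sd * np.sqrt(1 - rxx)
band_traditional = z * np.sqrt(2) * sem

# Conditional SEM at the top score under Lord's binomial error model:
# SEM(x) = sqrt(x * (k - x) / (k - 1)) for number-correct score x.
top = 48
csem = np.sqrt(top * (k - top) / (k - 1))
band_conditional = z * np.sqrt(2) * csem

print(f"traditional band width: {band_traditional:.2f} points")
print(f"conditional band width: {band_conditional:.2f} points")
# Near the score ceiling the conditional SEM is small, so bands anchored
# at the top score come out much narrower than the traditional band.
```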
 
Article
The use of Bayesian methods for data analysis is creating a revolution in fields ranging from genetics to marketing. Yet, results of our literature review, including more than 10,000 articles published in 15 journals between January 2001 and December 2010, indicate that Bayesian approaches are essentially absent from the organizational sciences. Our article introduces organizational science researchers to Bayesian methods and describes why and how they should be used. We use multiple linear regression as the framework to offer a step-by-step demonstration, including the use of software, regarding how to implement Bayesian methods. We explain and illustrate how to determine the prior distribution, compute the posterior distribution, possibly accept the null value, and produce a write-up describing the entire Bayesian process, including graphs, results, and their interpretation. We also offer a summary of the advantages of using Bayesian analysis and examples of how specific published research based on frequentist approaches failed to benefit from the advantages offered by a Bayesian approach and how using Bayesian analyses would have led to richer and, in some cases, different substantive conclusions. We hope that our article will serve as a catalyst for the adoption of Bayesian methods in organizational science research.
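The authors' demonstration uses dedicated software; the sketch below is instead a minimal conjugate Bayesian linear regression in NumPy, with the noise variance treated as known for simplicity, to show the prior-to-posterior step and a credible interval. All priors and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Simulated data: y = 1.0 + 0.5 * x + noise.
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

# Conjugate normal prior on the coefficients, beta ~ N(0, tau^2 I),
# with the noise variance sigma^2 treated as known for simplicity.
sigma2, tau2 = 1.0, 10.0
prior_precision = np.eye(2) / tau2

# The posterior is also normal: a precision-weighted combination of the
# prior and the data.
post_cov = np.linalg.inv(X.T @ X / sigma2 + prior_precision)
post_mean = post_cov @ (X.T @ y / sigma2)

se = np.sqrt(np.diag(post_cov))
ci = np.column_stack([post_mean - 1.96 * se, post_mean + 1.96 * se])
print("posterior means:", np.round(post_mean, 3))
print("95% credible intervals:\n", np.round(ci, 3))
# Unlike a frequentist CI, the credible interval supports direct
# probability statements about the slope, and the prior can encode
# earlier findings.
```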
 
Article
We analyze the efficiency of six missing data techniques for categorical item nonresponse under the assumption that data are missing at random or missing completely at random. By efficiency we mean a procedure that produces an unbiased estimate of true sample properties and is also easy to implement. The investigated techniques include listwise deletion, mode substitution, random imputation, two forms of regression imputation, and a Bayesian model-based procedure. We analyze efficiency under six experimental conditions for a survey-based data set. We find that listwise deletion is efficient for the data analyzed. If data loss due to listwise deletion is an issue, the analysis points to the Bayesian method. Regression imputation is also efficient, but the result is conditioned on the specific data structure and may not hold in general. Additional problems that arise when using regression imputation make it less appropriate.
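Three of the six techniques are simple enough to sketch directly (a pandas-based toy, with invented data; the regression and Bayesian procedures are omitted):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

df = pd.DataFrame({"item": rng.choice(["agree", "neutral", "disagree"],
                                      size=10, p=[.5, .3, .2])})
df.loc[[2, 7], "item"] = np.nan  # two nonrespondents

# (1) Listwise deletion: drop incomplete cases entirely.
listwise = df.dropna()

# (2) Mode substitution: replace with the most frequent category.
mode_sub = df.fillna(df["item"].mode()[0])

# (3) Random imputation: draw from the observed response distribution.
observed = df["item"].dropna().to_numpy()
rand_imp = df.copy()
n_missing = df["item"].isna().sum()
rand_imp.loc[df["item"].isna(), "item"] = rng.choice(observed, size=n_missing)

for name, d in [("listwise", listwise), ("mode", mode_sub),
                ("random", rand_imp)]:
    print(name, d["item"].value_counts(normalize=True).round(2).to_dict())
```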
 
Article
Partial least squares path modeling (PLS) was developed in the 1960s and 1970s as a method for predictive modeling. In the succeeding years, applied disciplines, including organizational and management research, have developed beliefs about the capabilities of PLS and its suitability for different applications. On close examination, some of these beliefs prove to be unfounded and to bear little correspondence to the actual capabilities of PLS. In this article, we critically examine several of these commonly held beliefs. We describe their origins, and, using simple examples, we demonstrate that many of these beliefs are not true. We conclude that the method is widely misunderstood, and our results cast strong doubts on its effectiveness for building and testing theory in organizational research.
 
The Logic of Adopting Grounded Theory Approach (GTA) to Study Academic Research Management
Article
This article addresses the methodology that can be applied when researching the field of academic research management, in which the adoption of a knowledge-based view (KBV) is especially appropriate. In particular, it discusses whether the adoption of a grounded theory approach (GTA) in this type of research is justifiable, given the contested character of the KBV constituents. GTA, so it is argued, is especially useful for investigating such a field because of three interrelated arguments: (a) that KBV and related debates provide insufficient theoretical guidance, (b) that the research managers' experience and viewpoints should form the basis of theory development and relevancy, and (c) that the concepts of knowledge and management are obscure. Adopting a GTA does not completely remove the KBV perspective from the methodological discussions. Instead, it may be useful for modifying the GTA outcomes, thus engendering theoretical plausibility, applicability, and credibility.
 
Regression Metric/Analysis and Their Associated Purpose and Features.
Output from plot.yhat for select predictor metrics from illustrative example. 
Correlation Matrix for Example 2.
Output from plot.yhat for all-possible-subsets (APS)–related metrics from illustrative example. 
Descriptive Statistics for Kendall's Tau Across Bootstrap Iterations.
Article
Multiple linear regression (MLR) remains a mainstay analysis in organizational research, yet intercorrelations between predictors (multicollinearity) undermine the interpretation of MLR weights in terms of predictor contributions to the criterion. Alternative indices include validity coefficients, structure coefficients, product measures, relative weights, all-possible-subsets regression, dominance weights, and commonality coefficients. This article reviews these indices, and uniquely, it offers freely available software that (a) computes and compares all of these indices with one another, (b) computes associated bootstrapped confidence intervals, and (c) does so for any number of predictors so long as the correlation matrix is positive definite. Other available software is limited in all of these respects. We invite researchers to use this software to increase their insights when applying MLR to a data set. Avenues for future research and application are discussed.
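As one example of the indices reviewed, the sketch below implements Johnson's relative weights as we reconstruct them: orthogonalize the predictors through the symmetric square root of their correlation matrix, regress the criterion on the orthogonal variables, and map the explained variance back to the original predictors. This is not the authors' software, and the matrix recipe should be verified against Johnson (2000).

```python
import numpy as np

def relative_weights(X, y):
    """Johnson-style relative weights (our reconstruction); the returned
    weights are nonnegative and sum to the model R^2."""
    X = (X - X.mean(0)) / X.std(0, ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    R = np.corrcoef(X, rowvar=False)              # predictor correlations
    rxy = X.T @ y / (len(y) - 1)                  # predictor-criterion r's
    evals, V = np.linalg.eigh(R)
    lam = V @ np.diag(np.sqrt(evals)) @ V.T       # R^(1/2)
    beta = np.linalg.solve(lam, rxy)              # weights on orthogonal Z
    return (lam ** 2) @ (beta ** 2)               # relative weights

rng = np.random.default_rng(8)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)          # collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + 0.5 * x3 + rng.normal(0, 1.5, n)

eps = relative_weights(X, y)
print("relative weights:", np.round(eps, 3), " R^2 =", round(eps.sum(), 3))
```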
 
Article
Employee selection often involves a series of sequential tests (or hurdles). However, validation strategies under this complex design are not found in the literature. Missing is a discussion of the statistical properties important in establishing criterion-related validity in multiple-hurdle designs. The authors address this gap in the literature by suggesting a general statistical model for range restriction corrections. Because the multiple-hurdle design includes predictive and concurrent designs as special cases, the corrections apply to these designs as well. The general correction model is based on algorithms from the missing data literature. Two missing data procedures are examined: the expectation-maximization procedure and the Bayesian multiple imputation (MI) procedure. These procedures are large-sample equivalent and often yield similar results. The MI procedure, however, has the added advantage of providing easily obtainable standard errors. A hypothetical example of a multiple-hurdle design is used to illustrate the procedures.
 
Article
The author analyzes reporting biases in regression analyses. The consequences of researchers’ strategy to select significant predictors and omit nonsignificant predictors from regression analyses are examined, focusing on how this strategy—labeled the Texas sharpshooter (TS) approach—creates a predictor reporting bias (PRB) in primary studies and research syntheses. PRB was demonstrated in simulation studies when correlation coefficients from several primary regression studies with an underlying TS approach were aggregated in meta-analyses. Several important findings are noted. First, meta-analytical effect sizes of true effects can be overestimated because smaller, nonsignificant findings are omitted from regression models. Second, suppression effects of correlated predictor variables create biased effect size estimations for variables that are not related to the outcome. Finally, existing small effects are concealed, and between-study heterogeneity can be overestimated. Results show that PRB is contingent on sample size. While PRB is substantial in studies with small sample sizes (N < 100), it is negligible when large sample sizes (N > 500) are analyzed. Preconditions and remedies for reporting biases in regression analyses are discussed.
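A miniature version of the simulation logic (our construction, with illustrative parameters): each "study" fits a full regression but reports the focal coefficient only when it is significant, and the surviving estimates are then averaged, mimicking a meta-analysis of selectively reported effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def ts_study(n, n_preds=5, true_beta=0.15):
    """One 'Texas sharpshooter' study: fit a full regression, then report
    the focal predictor's coefficient only if p < .05."""
    X = rng.normal(size=(n, n_preds))
    y = true_beta * X[:, 0] + rng.normal(size=n)  # only predictor 0 matters
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    mse = resid @ resid / (n - n_preds - 1)
    se = np.sqrt(mse * np.diag(np.linalg.inv(Xd.T @ Xd)))
    p = 2 * stats.t.sf(np.abs(beta / se), df=n - n_preds - 1)
    return beta[1] if p[1] < .05 else np.nan      # selective reporting

for n in (50, 500):
    reported = [ts_study(n) for _ in range(2_000)]
    reported = [b for b in reported if not np.isnan(b)]
    print(f"n={n}: mean reported beta = {np.mean(reported):.3f} (true 0.15)")
# With n=50 only large estimates pass the significance filter, so the
# aggregated (reported) effect is inflated; with n=500 the bias fades,
# consistent with the sample-size contingency described above.
```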
 
Article
We aim to develop a meaningful single-source reference for management and organization scholars interested in using bibliometric methods for mapping research specialties. Such methods introduce a measure of objectivity into the evaluation of scientific literature and hold the potential to increase rigor and mitigate researcher bias in reviews of scientific literature by aggregating the opinions of multiple scholars working in the field. We introduce the bibliometric methods of citation analysis, co-citation analysis, bibliographical coupling, co-author analysis, and co-word analysis and present a workflow for conducting bibliometric studies with guidelines for researchers. We envision that bibliometric methods will complement meta-analysis and qualitative structured literature reviews as a method for reviewing and evaluating scientific literature. To demonstrate bibliometric methods, we performed a citation and co-citation analysis to map the intellectual structure of the Organizational Research Methods journal.
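At its core, co-citation analysis is a counting exercise over reference lists. A minimal sketch with toy data (the reference keys are invented):

```python
from collections import Counter
from itertools import combinations

# Each citing article contributes one set of cited references
# (in practice these come from a bibliographic database).
reference_lists = [
    {"Podsakoff2003", "Aguinis2013", "Edwards2010"},
    {"Podsakoff2003", "Edwards2010"},
    {"Podsakoff2003", "Aguinis2013"},
    {"LeBreton2008", "Edwards2010"},
]

# Co-citation strength: how often two works are cited together.
cocitation = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(refs), 2):
        cocitation[pair] += 1

for pair, count in cocitation.most_common(3):
    print(pair, count)
# Thresholding these counts yields the co-citation network whose clusters
# map a field's intellectual structure, as in the article's ORM analysis.
```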
 
Article
CEO duality describes the governance structure in which a firm's chief executive officer also holds the position of chairman of the board. Duality is central to theoretical perspectives on corporate governance and top management, yet duality's relationship with numerous outcomes is characterized by nonsignificant coefficients and bivariate correlations hovering near zero. We argue and present evidence that CEO duality represents a "dummy construct"—an intentionally pejorative assessment of the widespread use of binomial categorical "dummy" variables to represent complex constructs. While we highlight CEO duality, the use of dummy variables as constructs is common in research. We review CEO duality as a construct and assess typical approaches to its measurement. When compared to actual patterns of duality within organizations, we find that current operationalizations fall short due to inattention to temporal considerations. This raises questions about the construct validity of current conceptualizations of CEO duality. Actual patterns suggest constructs and theoretical perspectives not previously considered. We present a taxonomy of CEO duality archetypes and offer suggestions on the incorporation of time for studies using dummy variables.
 
Article
Organizational research has seen several calls for the incorporation of neuroscience techniques. The aim of this article is to describe the methods of neuroeconomics and the promises of applying these methods to organizational research problems. To this end, the most important neuroeconomics techniques will be described, along with four specific examples of how these methods can greatly benefit theory development, testing, and pruning in the organizational sciences. The article concludes by contrasting the benefits and limitations of neuroeconomics and by discussing implications for future research.
 
Article
The purpose of this article is to provide the research design of a meta-synthesis of qualitative case studies. The meta-synthesis aims at building theory out of primary qualitative case studies that have not been planned as part of a unified multisite effort. By drawing on an understanding of research synthesis as the interpretation of qualitative evidence from a postpositivistic perspective, this article proposes eight steps for synthesizing existing qualitative case study findings to build theory. An illustration of the application of this method in the field of dynamic capabilities is provided. After enumerating the options available to meta-synthesis researchers, the potential challenges as well as the prospects of this research design are discussed.
 
Valuation Method (Ratings Versus Policy-Capturing Weights) × Assumed Decision Strategy Interaction
Choice Proportions (in Percentages) by Favorite Condition and Attribute Valuation Method, Assuming Noncompensatory Approach, Choice Set 4
Article
When studying applicants' job attribute preferences, researchers have used either direct estimates (DE) of importance or regression-derived statistical weights from policy-capturing (PC) studies. Although each methodology has been criticized, no research has examined the efficacy of weights derived from either method for predicting choices among job offers. In this study, participants were assigned to either a DE or PC condition, and weights for 14 attribute preferences were derived. Three weeks later, the participants made choices among hypothetical job offers. As predicted, PC weights outperformed DE weights when a noncompensatory strategy was assumed, and DE weights outperformed PC weights when a compensatory strategy was assumed. Implications for researchers' choice of methodology when studying attribute preferences are discussed.
 
Article
Researchers in many fields use multiple-item scales to measure important variables such as attitudes and personality traits but find that some respondents fail to complete certain items. Past missing data research has focused on entirely missing instruments and is of limited help here because there are few variables available to impute missing scores and those variables are often not highly related to each other. Multiple-item scales offer the unique opportunity to impute missing values from other correlated items designed to measure the same construct. A Monte Carlo analysis was conducted to compare several missing data techniques. The techniques included listwise deletion, regression imputation, hot-deck imputation, and two forms of mean substitution. Results suggest that regression imputation and substituting the mean response of a person to other items on a scale are very promising approaches. Furthermore, the imputation techniques often outperformed listwise deletion.
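A small simulation in the spirit of the article (our construction, with illustrative parameters): one item is knocked out for a subset of respondents, and the missing response is imputed from the person's mean on the remaining items of the same scale.

```python
import numpy as np

rng = np.random.default_rng(21)

# A 6-item scale measuring one construct (loadings illustrative).
n = 1_000
trait = rng.normal(size=n)
items = trait[:, None] + rng.normal(0, .8, (n, 6))
true_scores = items.mean(axis=1)

# Knock out one random item for 30% of respondents.
miss_rows = rng.random(n) < .3
miss_cols = rng.integers(0, 6, size=n)
obs = items.copy()
obs[miss_rows, miss_cols[miss_rows]] = np.nan

# Person-mean substitution: impute from that person's other scale items,
# exploiting the inter-item correlations the abstract highlights.
person_means = np.nanmean(obs, axis=1)
imputed = np.where(np.isnan(obs), person_means[:, None], obs)

est = imputed.mean(axis=1)
r = np.corrcoef(est, true_scores)[0, 1]
print(f"corr(imputed scale score, complete scale score) = {r:.3f}")
print(f"cases lost to listwise deletion: {miss_rows.sum()} of {n}")
```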
 
Article
Evidence-based management requires management scholars to draw causal inferences. Researchers generally rely on observational data sets and regression models in which the independent variables have not been exogenously manipulated to estimate causal effects; however, using such models on observational data sets can produce biased estimates of treatment effects. This article introduces the propensity score method (PSM)—which has previously been widely employed in social science disciplines such as public health and economics—to the management field. This research reviews the PSM literature, develops a procedure for applying the PSM to estimate the causal effects of an intervention, elaborates on the procedure using an empirical example, and discusses the potential application of the PSM in different management fields. The implementation of the PSM in the management field will increase researchers' ability to draw causal inferences using observational data sets.
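A minimal PSM sketch, assuming scikit-learn for the propensity model; the data-generating process, the 1:1 nearest-neighbor matching with replacement, and all parameters are illustrative simplifications of the procedure the article develops.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

# Observational data: treatment uptake depends on covariates (confounding).
n = 2_000
x = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
treated = rng.random(n) < p_treat
y = 2.0 * treated + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # effect = 2

print(f"naive difference: {y[treated].mean() - y[~treated].mean():.2f}")

# Step 1: estimate propensity scores from the covariates.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: 1:1 nearest-neighbor matching on the propensity score
# (with replacement, for simplicity).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = [c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx]

# Step 3: average treated-minus-matched-control difference (the ATT).
att = np.mean(y[t_idx] - y[np.array(matches)])
print(f"PSM estimate of the treatment effect: {att:.2f}")
```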
 
Article
Managerial and organizational cognition (MOC) researchers have issued a number of calls over the years for large-scale studies involving the mass application of causal mapping techniques. A number of important advances in the development of procedures for the systematic elicitation and comparison of cause maps mean that such work has been technically feasible for some time. However, due to a dearth of suitable supporting computer software, very few researchers to date have responded to this key challenge, vital to the longer-term viability of the MOC field. Building on innovations by Langfield-Smith and Wirth and Markóczy and Goldberg, the authors report on the development of Cognizer™, a comprehensive computer package designed to meet the requirements of researchers looking to elicit and compare large numbers of maps on a longitudinal or cross-sectional basis. The principal features of Cognizer are illustrated using a cross-sectional data set comprising 200 cause maps from five organizations.
 
Example of Response Judgment Process for Two Dyadic FC Items Posited by the Multidimensional Unfolding Model With Stimuli as Anchors (Responses Are "AB" and "BA" to Items 1 and 2, Respectively)
Pattern of Trait θs and Item Locations That Would Result in a Large Discrepancy Between Actual and Estimated θ Values. Note: Dashed lines indicate the point at which a response change would be expected based on the unfolding model. The θj's represent true trait standing on the layered, multidimensional continuum. The θ̂j's represent the estimated trait standing based on the responses to each item.
The Coombs Unidimensional Unfolding Model for Four Items (Reflecting a Single Dimension) and Two Respondents (Indicated as θ1 and θ2)
Example of How Estimation Region Boundaries Are Constructed From an Individual's Response Vector to a Pentad Forced-Choice Scale
Article
This article presents a psychometric approach for extracting normative information from multidimensional forced-choice (MFC) formats while retaining the method's faking-resistant property. The approach draws on concepts from Coombs's unfolding models and modern item response theory to develop a theoretical model of the judgment process used to answer MFC items, which is then used to develop a scoring system that provides estimates of normative trait standings.
 
Article
In this overview, the authors use the seven studies included in the feature topic as a platform to delineate three areas that latent class procedures are particularly useful for in advancing the field of organizational research. The first topic area focuses on dealing with the need to identify and understand unobserved subpopulations in organizational research. The second topic area focuses on recognizing the unobserved heterogeneity in measurement functioning. The third topic area focuses on addressing the challenges surrounding the existence of multiple longitudinal change (both quantitative and qualitative) patterns in organizational research. The authors conclude this overview by highlighting further thoughts on the ways that latent class procedures should be utilized to advance organizational research.
 
Article
There is growing evidence that an organization’s training climate can influence the effectiveness of formal and informal training activities. Unfortunately, there is limited data regarding the psychometric properties of climate measures that have been used in training research. The purpose of this article is to examine the construct validity of a training climate measure. Results from content adequacy, reliability, aggregation, and convergent, discriminant, and criterion-related validity assessments provide support for the measure’s use in diagnostic and theory testing efforts.
 