Article

The Best and the Rest: Revisiting the Norm of Normality of Individual Performance

Authors: Ernest O'Boyle Jr. and Herman Aguinis

Abstract

We revisit a long‐held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes. Results are remarkably consistent across industries, types of jobs, types of performance measures, and time frames and indicate that individual performance is not normally distributed—instead, it follows a Paretian (power law) distribution. Assuming normality of individual performance can lead to misspecified theories and misleading practices. Thus, our results have implications for all theories and applications that directly or indirectly address the performance of individual workers including performance measurement and management, utility analysis in preemployment testing and training and development, personnel selection, leadership, and the prediction of performance, among others.
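
To make the distributional claim concrete, here is a minimal Python sketch, not the authors' procedure: it fits both a normal and a Pareto distribution to a simulated right-skewed productivity sample and compares their log-likelihoods. The data are synthetic and only illustrate why a heavy-tailed sample is poorly described by a Gaussian.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
performance = rng.pareto(a=2.0, size=5000) + 1.0        # synthetic heavy-tailed "output" counts

norm_params = stats.norm.fit(performance)               # maximum-likelihood fit of a normal
pareto_params = stats.pareto.fit(performance, floc=0)   # maximum-likelihood fit of a Pareto

loglik_norm = stats.norm.logpdf(performance, *norm_params).sum()
loglik_pareto = stats.pareto.logpdf(performance, *pareto_params).sum()
print(f"log-likelihood, normal: {loglik_norm:.1f}  Pareto: {loglik_pareto:.1f}")
print(f"sample skewness: {stats.skew(performance):.2f}  (a normal sample would be near 0)")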


... Recent research has focused on utilizing objective measures of performance, and presented supporting evidence for the perspective that performance actually exhibits positive skewness. O'Boyle and Aguinis (2012) conducted five empirical studies across various industries, and found that individual performance actually follows a Paretian, or power law, distribution (this study uses the terms "Paretian" and "power law" interchangeably). A Paretian distribution is non-normal and skewed, in the sense that it features wider tails than does a normal curve. ...
... O'Boyle and Aguinis (2012) found that a power law distribution fits the distribution of performance across different professional fields, job types, time frames, and measures of performance. Moreover, the existence of significant individual performance differences has been supported by other researchers, for example in the realm of gifted youths' life achievements (Kell, Lubinski, & Benbow, 2013). ...
... Neither of these definitions provides an explicit threshold separating a star from a non-star performer, nor is this surprising given the contextual nature of performance. However, it may be useful to note that Paretian distributions are also known as the 80/20 principle, wherein 20 percent of the population is responsible for 80 percent of a specific outcome (O'Boyle & Aguinis, 2012). A number of research studies have set their own benchmarks for exceptional performers, such as the top 10 percent of the relevant population (e.g., Gallardo-Gallardo, Dries, & González-Cruz, 2013). ...
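As a hedged numerical illustration of the 80/20 arithmetic mentioned in the excerpt above: for a standard Pareto distribution with shape parameter alpha, the top fraction p of the population accounts for roughly p**(1 - 1/alpha) of the total output, and alpha near 1.16 reproduces the classic 80/20 split. The short Python check below uses simulated data, not any of the samples discussed here.

import numpy as np

alpha, p = 1.16, 0.20                            # alpha ~ 1.16 reproduces the classic 80/20 split
rng = np.random.default_rng(0)
output = rng.pareto(alpha, size=200_000) + 1.0   # standard Pareto samples with minimum 1

top = np.sort(output)[-int(p * output.size):]    # the top 20% of performers
print("simulated share of total output held by the top 20%:", top.sum() / output.sum())
print("closed-form share p**(1 - 1/alpha):      ", p ** (1 - 1 / alpha))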
Conference Paper
PURPOSE: The nature of performance distributions is a topic of current research, but has not been thoroughly studied within team contexts. In this study, we examined whether the interindividual performance profiles of teams tend more towards normal or non-normal distributions, and investigated how the nature of the distribution relates to overall team performance. METHODOLOGY: The sample consisted of teams and skaters from the 2013-2014 through 2017-2018 seasons of the National Hockey League (NHL). For multiple objective measures of individual performance behaviors and outcomes, we utilized the Kolmogorov-Smirnov (KS) statistic to compare the fit of each team’s performance profile with a Gaussian (normal) and Paretian (non-normal) distribution. We then analyzed the degree of normality as a predictor of team performance. RESULTS: Behavior-based performance data generally fit a normal distribution better than a Paretian distribution. For teams in the playoffs, however, outcome-based performance tended to fit a Paretian distribution better. KS-Gaussian statistics (both behavior- and outcome-based) predicted playoff team performance, indicating that the greater the normality of performance within a team, the better the team performed on average. LIMITATIONS: The findings may primarily generalize to team contexts with similar levels of interdependence and/or role differentiation. RESEARCH/PRACTICAL IMPLICATIONS: The results help to clarify the conditions and reasons for variability in how individual performance is distributed within work teams. Teams in high-pressure environments may benefit from structures that promote balanced contributions and interdependent collaboration. ORIGINALITY/VALUE: By focusing on teams, this study contributes a unique perspective to the debate over the normality of performance distributions.
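A hedged sketch of the kind of per-team comparison the study describes, not the authors' code or data: compute the Kolmogorov-Smirnov statistic of one team's individual scores against a fitted normal and a fitted Pareto distribution and report which reference distribution fits better. The scores below are hypothetical.

import numpy as np
from scipy import stats

team_points = np.array([62, 55, 48, 41, 33, 29, 25, 21, 18, 15, 12, 10, 8, 6, 5, 3, 2, 1], float)

ks_gaussian = stats.kstest(team_points, 'norm', args=stats.norm.fit(team_points)).statistic
ks_paretian = stats.kstest(team_points, 'pareto', args=stats.pareto.fit(team_points, floc=0)).statistic

better = "normal" if ks_gaussian < ks_paretian else "Paretian"
print(f"KS vs normal = {ks_gaussian:.3f}, KS vs Pareto = {ks_paretian:.3f} -> closer to {better}")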
... In line with recommendations for studies that include eminent individuals, outliers were not excluded and the data were not transformed (O'Boyle & Aguinis, 2012; Simonton, 2014). Contrary to the assumptions of normality in standard models, the distribution of eminent individuals is considered to be non-normal (Den Hartigh, Van Dijk, Steenbeek, & Van Geert, 2016; O'Boyle & Aguinis, 2012; Simonton, 2014). More specifically, research has demonstrated that eminent individuals produce a highly skewed distribution, in which exceptional individuals are found in the right tail (Den Hartigh et al., 2016; Simonton, 2014; Simonton & Baumeister, 2005). These distributions do not follow a Gaussian distribution but rather are governed by Paretian distributions (O'Boyle & Aguinis, 2012; Simonton, 2014). In contrast to a normal curve, where a value exceeding three standard deviations from the mean is ordinarily considered an outlier, a Paretian distribution considers these values common and the elimination or transformation of such outliers antithetical to theory (O'Boyle & Aguinis, 2012). ...
Article
Objectives Researchers investigating the psychological aspects of Olympic coaching have studied coaches as a homogenous group, and the effect of coaches' psychological characteristics on performance-related outcomes remains unclear. The objective of this research, therefore, was to examine whether psychological factors discriminate between world-leading (i.e., Olympic gold medal winning) and world-class (i.e., Olympic non-gold medal winning) coaches. Method Self-reported psychometric questionnaires were completed by 36 Olympic coaches who had collectively coached 169 swimmers to win 352 Olympic medals, of which 155 were gold medals. The questionnaires assessed 12 variables within the Big Five personality traits, the dark triad, and emotional intelligence, and the data was analyzed using three one-way multivariate analysis of variance and follow-up univariate F-tests. Results The results showed that the 21 world-leading coaches were significantly more agreeable, had greater perception of emotion, were better at managing their own emotion, and were less Machiavellian and narcissistic than the 15 world-class coaches. The groups of coaches showed no differences in levels of conscientiousness, openness to experience, extraversion, neuroticism, psychopathy, managing other emotion, or utilization of emotion. Conclusions Psychological factors discriminate between world-leading and world-class coaches. The implications of these differences are discussed for psychology researchers and practitioners operating in Olympic sport.
... In this regard, Goldberg [3] asserts that "the variety of individual differences... are insignificant in people's daily interactions"; a viewpoint that echoes that of Galton [18]. On the other hand, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed [19][20][21] behavioral responses. For instance, Micceri [22] analyzed the distributional characteristics of over 440 samples of achievement and psychometric measures. ...
... In so doing, we contrasted these assumptions with the case that primarily relied on data to uncover its underlying properties as a more tenable choice. Our study was motivated by previous findings that identified the shortcomings of such assumptions [19][20][21][22][23] and differed from the viewpoints that regard the differences among individuals' behavioural responses as insignificant [3,18]. ...
... It is important to note that our study is not meant to be construed as a hard and fast rule for preferring one of the adopted models over and above all the other alternatives, but to provide demonstrable results on some of the issues that can arise when opting for unwarranted assumptions [8], [19][20][21][22][23], [38]. In fact, falling for such simplifications may well explain some of the discrepancies that are present in the field of human behaviour [8,38]. ...
Article
Full-text available
A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human-human touch interaction in which individuals signal their face area pre-touch distance boundaries. We then use these pre-touch distances along with their respective azimuth and elevation angles around the face area and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use a Gaussian processes regression to evaluate whether assumption of normal distribution in participants' reactions warrants a reliable estimate of this boundary. Second, we apply a support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants' pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with the scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that within the context of facial pre-touch interaction, normal distribution does not capture the variability that is exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be more adequately estimated by considering the individuals' variable behavior and preferences through such estimation strategies as ordinary regression that solely relies on the distribution of their observed behavior which may not necessarily follow a parametric distribution.
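For readers unfamiliar with the estimation strategies named above, the following minimal Python sketch fits a Gaussian process regressor, a support vector regressor, and, as a simple stand-in for the paper's "ordinary regression" step, an ordinary least-squares model to hypothetical pre-touch-distance data. The data, the single azimuth predictor, and the error metric are illustrative assumptions, not the study's setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
azimuth = rng.uniform(-90, 90, size=(200, 1))                       # degrees around the face (hypothetical)
distance = 35 + (azimuth[:, 0] / 45) ** 2 + rng.gumbel(0, 4, 200)   # skewed, non-Gaussian noise

models = {
    "Gaussian process regression": GaussianProcessRegressor(),
    "support vector regression":   SVR(kernel="rbf", C=10.0),
    "ordinary least squares":      LinearRegression(),
}
for name, model in models.items():
    pred = model.fit(azimuth, distance).predict(azimuth)
    print(f"{name:28s} in-sample MAE = {mean_absolute_error(distance, pred):.2f} cm")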
... The prevailing models that psychology researchers apply proceed from Gaussian distributions (Walberg, Strykowski, Rovai, & Hung, 1984), which hold for various human characteristics such as height, blood pressure (Pater, 2005) and intelligence (Burt, 1957). Accordingly, in industrial and organizational psychology it was long believed that individual performance displays a Gaussian distribution as well (Muchinsky, 1994; O'Boyle & Aguinis, 2012; Schmidt & Hunter, 1983). However, individuals who ultimately reach star performance find themselves in the right tail of a highly right-skewed distribution (e.g., Aguinis, O'Boyle, Gonzalez-Mulé, & Joo, 2016; Aguinis & O'Boyle, 2014; Den Hartigh et al., 2016; Den Hartigh, Hill, & Van Geert, 2018; Huber, 2000; Lotka, 1926; Muchinsky, 1994; O'Boyle & Aguinis, 2012; Simonton, 2003, 2005a; Sutter & Kocher, 2001). In other words, given that star performers produce much more output than average individuals, they sit far further to the right of the performance distribution than one would expect if individual performance followed a Gaussian distribution. For instance, O'Boyle and Aguinis (2012) conducted five studies using 198 samples. These samples involved entertainers, politicians, researchers, and amateur and professional athletes. ...
Across different domains, there are 'star performers' who are able to generate disproportionate levels of performance output. To date, little is known about the model principles underlying the rise of star performers. Here, we propose that star performers' abilities develop according to a multi-dimensional, multiplicative and dynamical process. Based on existing literature, we defined a dynamic network model, including different parameters functioning as enhancers or inhibitors of star performance. The enhancers were multiplicity of productivity, monopolistic productivity, job autonomy, and job complexity, whereas productivity ceiling was an inhibitor. These enhancers and inhibitors were expected to influence the tail-heaviness of the performance distribution. We therefore simulated several samples of performers, thereby including the assumed enhancers and inhibitors in the dynamic networks and compared their tail-heaviness. Results showed that the dynamic network model resulted in heavier and lighter tail distributions, when including the enhancer- and inhibitor-parameters, respectively. Together, these results provide novel insights into the dynamical principles that give rise to star performers in the population.
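The following is an illustrative sketch only, not the authors' dynamic network model: a simple multiplicative growth process in which each performer's ability compounds over time. Raising the "enhancer" parameter fattens the right tail of the resulting distribution, while imposing a productivity ceiling (an inhibitor) thins it; tail-heaviness is summarized here by sample kurtosis.

import numpy as np
from scipy.stats import kurtosis

def simulate(n=10_000, steps=100, enhancer=0.08, ceiling=None, seed=0):
    rng = np.random.default_rng(seed)
    ability = np.ones(n)
    for _ in range(steps):
        ability *= 1.0 + enhancer * rng.normal(1.0, 1.0, n)   # multiplicative gains each step
        ability = np.clip(ability, 0.0, ceiling)              # optional productivity ceiling
    return ability

print("tail-heaviness (kurtosis), enhancer only:      ", round(kurtosis(simulate()), 1))
print("tail-heaviness (kurtosis), with ceiling = 50.0:", round(kurtosis(simulate(ceiling=50.0)), 1))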
... Employees' high performance (i.e., those behaviors that result in the production of high-quality goods and services; Rotundo & Sackett, 2002) is extremely valuable to their organizations because it improves the organization's reputation (Cravens & Oliver, 2006), encourages repeat business from customers (Maxham et al., 2008), and enhances the organization's overall profitability (Judge et al., 2001;Koys, 2001). Because high-performing employees bring considerable value to their organizations (Becker & Huselid, 2006;O'Boyle & Aguinis, 2012), organizations tend to invest substantial resources into the recruitment, selection, and training of "top talent" (Gardner, 2005). Additionally, high-performing employees are often considered assets that need to be protected and further cultivated to avoid substantial losses that could come from their voluntary turnover (e.g., Bedeian & Armenakis, 1998;Trevor & Nyberg, 2008). ...
... High performers obviously provide a fundamental good to their organization. Their skills, capabilities, and productivity help the organization survive and thrive (Becker & Huselid, 2006;O'Boyle & Aguinis, 2012). Furthermore, because high performers usually possess unique skills, they may overinflate the novelty of their contributions and therefore, feel superior (Sachdeva et al., 2009). ...
Article
Full-text available
We extend the performance literature by moving beyond a focus on antecedents of employees' job performance. Rather, we consider the effects of employees' high performance on their subsequent psychological states and behaviors. We adopt a social exchange approach to explain why powerful, high-performing employees may feel psychologically entitled (i.e., a belief that they are owed more than what is typical from the organization), which then prevents them from engaging in organizational citizenship behaviors (i.e., discretionary behaviors that contribute to the effective functioning of the organization). We first establish internal validity by testing our theoretical model using an experimental study design. We then establish external validity by testing our theoretical model using multi-source field data from university employees in the United States. Both studies provide support for our theoretical model in that psychological entitlement mediates the negative indirect relationship between employees' performance and OCB when employee power is higher versus lower. Theoretical and practical implications are discussed.
... However, the results regarding the limitations to achieving the said goals are not conclusive. The so-called high performers may be 400% more productive than the average employee [30] and, in the case of complex jobs, this performance difference may be as large as 800% [27]. ...
... Capelli [31] places more importance on concentrating the retention effort on certain employees. O'Boyle and Aguinis [30] maintain that the hiring or departure of a talented employee can have significant consequences for the overall productivity of an organization. This approach, promoting exclusive talent and focusing talent management on top performers, seeks to demonstrate that these employees make a disproportionately high contribution to the development of the company [18]. ...
Article
Full-text available
The digital transformation means that companies are redefining the process of talent management. Previous models involved functions, practices and processes that ensured a correct flow of employees towards key positions or a generic talent management view. The digital breakthrough, together with the growing panorama of competition for talent in the market, requires a different focus to enable well-grounded and agile decision-making processes in a sustainable world. The current research considers the functions that applied research has established as the limits of talent management, and that are the key topics in an employee life cycle, namely, talent attraction and acquisition, training, evaluation, and development. In addition, new tools such as employee advocacy and/or brand ambassadors have been added in order to draw conclusions about the future trends of talent management. This article examines the employee life cycle of talent attraction and acquisition, training, evaluation, and development in a study of the main digital tools utilized in the Spanish market, by both national and multinational corporations. The results indicate that future investments are needed to correlate the digital tools and take advantage of better employee life cycle management. The main results show a rapid increase in the number and variety of tools used in the talent acquisition process, an expanded use of social networks to enhance the scope of those processes, and conversely, a minor use of digital tools for both talent development and talent retention processes.
... (O'Boyle & Aguinis, 2012). In general, social phenomena such as income, wealth and price show 'strong skewness with long tail on the right, implying inequality' (Abramo, D'Angelo, & Soldatenkova, 2017, p. 324), and academic knowledge production is no exception (Kwiek, 2018c). ...
Article
Full-text available
The academic profession is internally divided as never before. This cross-national comparative analysis of stratification in higher education is based on a sample of European academic scientists (N = 8,466) from universities in 11 countries. The analysis identifies three types of stratification: 'academic performance stratification', 'academic salary stratification', and 'international research stratification'. This emergent stratification of the global scientific community is predominantly research-based, and internationalization in research is at its center; prestige-driven, internationally competitive, and central to academic recognition systems, research is the single most stratifying factor in higher education at the level of the individual scientist. These stratification processes pull the various segments of the academic profession in different directions. The study analyses highly productive academics ('research top performers'), highly paid academics ('academic top earners'), and highly internationalized academics (research 'internationalists') and explores the implications for individual scientists. It is an expanded version of the Opening Keynote Address presented at the SRHE International Conference on Research into Higher Education (2018).
... Murphy [28] also states that 'performance management should die', highlighting the disappointment that stems from PA systems in business settings and providing four reasons why performance evaluation is not effective. First, the distribution of performance is not Gaussian (normal) but rather Paretian (power law) [32]; thus assessment done under the assumption of normality of the performance distribution is pointless. Second, due to the vast amount of subjectivity and errors in performance ratings, it is almost impossible to provide valid and reliable measurements of individual performance. ...
Article
Full-text available
Performance appraisal (PA) has become a prominent feature on the agenda of higher education institutions (HEIs). However, the traditional culture of the typical university is based on individual commitment, scientific teamwork, dedication to public service and intrinsic motivation of the academic staff, all of which are the essential components of public service motivation (PSM). By interviewing key informants from three public universities, the purpose of our research was to identify various tensions between PA and PSM, by asking what is the impact of PA on PSM of academics in public HEIs. Our findings have shown that the purposefulness of PA activities may not be fully understood by public HEI management and academics. The existing tensions between PA normative aims of motivation and fair evaluation and its descriptive effects of increasing bureaucracy and dissatisfaction might undermine PSM, an essential driving force that motivates academics to work in public HEIs.
... The point is that, in the vast majority of cases, all evaluations of someone's talent are carried out a posteriori, just by looking at his/her performances, or at the results reached, in some specific area of our society like sport, business, finance, art, science, etc. This kind of misleading evaluation ends up switching cause and effect, rating as the most talented people those who are, simply, the luckiest ones [45,46]. In line with this perspective, previous works advanced a warning against such a kind of "naive meritocracy" and showed the effectiveness of alternative strategies based on random choices in many different contexts, such as management, politics and finance [47][48][49][50][51][52][53][54]. ...
Article
Full-text available
This paper further investigates the Talent versus Luck (TvL) model described by [Pluchino et al. Talent versus luck: The role of randomness in success and failure, Adv. Complex Syst. 21 (2018) 1850014], which models the relationship between ‘talent’ and ‘luck’ and their impact on an individual’s career. It is shown that the model is very sensitive to both random sampling and the choice of value for the input parameters. Running the model repeatedly with the same set of input parameters gives a range of output values of over 50% of the mean value. The sensitivity of the inputs of the model is analyzed using a variance-based approach based upon generating Sobol sequences of quasi-random numbers. When using the model to look at the talent associated with an individual who has the maximum capital over a model run, it has been shown that the choice for the standard deviation of the talent distribution contributes 67% of the model variability. When investigating the maximum amount of capital returned by the model, the probability of a lucky event at any given epoch has the largest impact on the model, almost three times more than any other individual parameter. Consequently, during the analysis of the model results one must keep in mind the impact that only small changes in the input parameters can have on the model output.
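For orientation, here is a minimal Python re-implementation sketch of the TvL rules as they are commonly described: talent drawn from a normal distribution, capital doubled on an exploited lucky event and halved on an unlucky one. All parameter values are illustrative, and this is not the sensitivity-analysis code used in the article.

import numpy as np

def run_tvl(n=1000, epochs=80, talent_mean=0.6, talent_sd=0.1, p_event=0.5, seed=0):
    rng = np.random.default_rng(seed)
    talent = np.clip(rng.normal(talent_mean, talent_sd, n), 0, 1)
    capital = np.full(n, 10.0)
    for _ in range(epochs):
        event = rng.random(n) < p_event                    # an event hits the agent this epoch
        lucky = event & (rng.random(n) < 0.5)              # half of the events are lucky
        unlucky = event & ~lucky
        exploited = lucky & (rng.random(n) < talent)       # lucky events pay off with prob. = talent
        capital = np.where(exploited, capital * 2, capital)
        capital = np.where(unlucky, capital / 2, capital)
    return talent, capital

talent, capital = run_tvl(seed=1)
print("talent of the wealthiest agent:", round(float(talent[np.argmax(capital)]), 2))
print("maximum final capital:", round(float(capital.max()), 1))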
... With respect to target task performance, high performers are valued by their companies, their leaders, and their peers, and thus their mistreatment is worthy of intervention because the costs of ignoring such behavior can be high. In particular, targets of mistreatment often react by searching for another job (Bowling & Beehr, 2006) and losing a good performer results in high financial costs to the organization (O'Boyle & Aguinis, 2012;Trevor, Gerhart, & Boudreau, 1997). Furthermore, recent research shows that colleagues working in close proximity to high performers also themselves show increased performance (Greenbaum, Housman, & Minor, 2016); should those high performers leave, the productivity gains leave with them. ...
Article
Full-text available
The current research integrates theory on the contextual characteristics that impact bystanders’ decisions to prosocially intervene against workplace incivility. We built a model based upon two of the most influential theories of prosocial intervention—Latané and Darley’s (1970) decision-tree model and Piliavin et al.’s (1981) arousal: cost-reward model—and assert that decisions to intervene are affected by the inherent ambiguity of the uncivil context as well as the costs versus rewards of intervention in ways that facilitate action. Yet, depending on the gender composition of the dyad involved in the uncivil exchange, and both the moral identity and the role (i.e., supervisor versus coworker) of the observer, ambiguity and cost-reward considerations may differ in their relative impact. Policy capturing methodology was used to test the relative influence of ambiguity-reduction (i.e., harm to the target, appeal for help) versus cost-reward (i.e., target’s task performance level, bystander workload) situational cues on coworker and supervisor bystanders’ decisions to intervene with either social support to the target or confronting the perpetrator. Results of a large-scale experiment with over 3400 participants revealed that each of the situational cues surrounding the uncivil exchange positively influenced observers’ decisions to intervene in theorized ways and that cost-reward considerations and role obligations are intricately intertwined.
... Stars can contribute to firm revenue (Han & Ravid, 2020), improve the odds of firm survival (Bedeian & Armenakis, 1998), facilitate new product development (Zucker & Darby, 1996), motivate peers to perform better (Ammann et al., 2016) and achieve other desirable criteria (Bendapudi & Leone, 2001; Liu, 2014), which are more likely if stars are supported by colleagues and firms (Amankwah-Amoah et al., 2017; Groysberg & Lee, 2008). Thus, stars may add value directly via exceptional output or indirectly by providing their firms with access to external resources and exerting significant influence on colleagues (Grigoriou & Rothaermel, 2014; Kehoe et al., 2018; O'Boyle & Aguinis, 2012). The greater the number and variety of ways in which a star contributes to value, the more sustainable is the star's value creation for the firm (Kehoe et al., 2018). ...
Article
Full-text available
We assessed the financial value of human resource management (HRM) as a function of obtaining more star performers. Specifically, we implemented utility analysis procedures on 206 samples of individual performance (i.e. output) encompassing 824,924 workers. We found that HRM adds greater financial value by obtaining more stars. Our results also offer several specific contributions to HRM theory. First, regarding how HRM produces greater value by obtaining more stars, our evidence points to a nonlinear model of HRM’s value, where HRM generates significant yet diminishing returns by increasingly obtaining the most productive ones. Second, regarding when, our results show that diminishing returns from HRM are stronger when output differences among top stars are relatively small. Third, regarding why, our study explains that small output differences among top stars may create various costs which diminish the returns from obtaining the most productive stars. Our explanation of HRM’s nonlinear pattern contributes to the star literature by helping integrate a variety of specific explanations for stars’ curvilinear influence discussed in past research. Regarding HRM practices, we highlight the need to use utility analysis procedures that more fully consider the existence of stars. Supplemental data for this article is available online at https://doi.org/10.1080/09585192.2021.1948890 .
... When examined at scale, big data belie an old empirical conjecture about the commonality of the normal distribution of phenomena. Research has already proposed that many human performance measures are Paretian rather than normally distributed (O'Boyle Jr. & Aguinis, 2012). That is, what is sometimes referred to as the 80/20 rule means that the preponderance of achievements is enacted by a minority of performers. ...
Article
The study of personal relationships has traditionally relied on self‐reports or observations of face‐to‐face interaction. Digital media increasingly provide the ability to trace communication and relationships at scale. Such methods portend significant theoretical and methodological challenges, as well as potential. As a way of illustrating such potential, big data approaches to the select traditional relational concepts of routine relating, propinquity, homophily, small world, and reciprocity are reviewed. The fields of communication and personal relationships will need to inform such research by developing their own interdisciplinary relationships with geographic information sciences, computational linguistics, and computer sciences or cede a significant frontier of their field to these other disciplines.
... Research also began to demonstrate the difference that quality talent could make when intangible assets were the primary source of firm value (Paschen, Wilson, & Ferreira, in press). A study of 600,000 researchers, entertainers, politicians, and athletes found that the very best of them were more than 400% more productive than the average among them (O'Boyle & Aguinis, 2012). In another study, McKinsey found that for complex jobs, the impact on performance was an astonishing 800% higher for top performers compared to the average performer (Keller & Meaney, 2017). ...
Article
AI-enabled recruiting systems have evolved from nice to talk about to necessary to utilize. In this article, we outline the reasons underlying this development. First, as competitive advantages have shifted from tangible to intangible assets, human capital has transitioned from supporting cast to a starring role. Second, as digitalization has redesigned both the business and social landscapes, digital recruiting of human capital has moved from the periphery to center stage. Third, recent and near-future advances in AI-enabled recruiting have improved recruiting efficiency to the point that managers ignore them or procrastinate their utilization at their own peril. In addition to explaining the forces that have pushed AI-enabled recruiting systems from nice to necessary, we outline the key strategic steps managers need to take in order to capture its main benefits.
Article
Training has shown little effectiveness in altering harassing or discriminatory behavior. Limitations of prior intervention efforts may reflect poor conceptualization of the problems involved, poor training intervention design, approaches that engender cynicism, or misunderstanding psychological principles of attitude and behavior change. Interventions should capitalize on behavioral science models and tools at multiple levels from a broad array of disciplines to explain harassment and bias, and then to defeat these behaviors. Measures to ensure fair treatment should focus on leadership socialization, organizational culture and climate, increased professional competence, and integration with organizational approaches to corporate social responsibility and performance.
Article
Full-text available
The Scale of Positive and Negative Experience (SPANE) aims to measure affect with high transcultural validity. The bifactor model is the best theoretical option to represent affective balance, although it is not typically used in validation studies. The objectives of this research were to test a bifactor model vis-à-vis the traditional model composed of two correlated factors, to prove its invariance across sexes, and to provide evidence of concurrent validity. A nonprobability sample composed of 600 Mexican students of psychology and medicine was recruited. One-group and multigroup confirmatory factor analyses were carried out. The SPANE and the scales selected to assess depression, perceived stress, and satisfaction with life were applied. The bifactor model showed better goodness-of-fit indices than the two correlated factors model: Δχ²(11) = 121.436, p < .001, Δχ²/Δdf = 11.04 > 5, ΔGFI = .034, ΔNFI = .025, ΔNNFI = .022, and ΔCFI = .023 > .01. The internal consistency for the general factor as well as for the factor of positive affect was excellent, whereas it was good for the factor of negative affect. The measurement model was valid across sexes. The general factor of affective balance had a very high correlation with depression, high with perceived stress, and medium with satisfaction with life. It is concluded that the SPANE is reliable and shows evidence of validity among Mexican students of psychology and medicine, and the bifactor model is adequate to represent affective balance.
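As a quick arithmetic check of the reported comparison, assuming the usual chi-square difference test applies, the p-value for Δχ²(11) = 121.436 can be computed directly:

from scipy.stats import chi2
print(chi2.sf(121.436, df=11))   # survival function = p-value of the chi-square difference test, far below .001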
Article
Full-text available
We examined the experience of multiple jobholders with families in reference to work-family conflict and psychological stress. Survey data were collected from American employees who held one or more jobs (N = 410) to test a model that indicated a path from jobs held to stress through work-family conflict and performance quality as stressors, with the latter stressor conditional on the importance that employees place on performance quality. Our results confirmed that more jobs held were associated with more work-to-family conflict, which in turn was associated with low-rated performance quality, which in turn was associated with more psychological stress. Also confirmed was that the performance quality-stress relationship was conditional on high importance placed on performance quality. Suggestions for future research and stress-reduction intervention are discussed.
Article
We advance the understanding and measurement of the concept of time by offering a taxonomy of four distinct time constructs: duration, frequency, timing, and sequence. On the basis of a literature review of human resource management and allied fields (i.e., organizational behavior, industrial and organizational psychology, general management, entrepreneurship, and strategic management studies), we offer recommendations on how to measure each construct as well as illustrations drawn from different domains and theories on how these recommendations can be implemented. In addition, for each construct, we offer specific, practical, and actionable recommendations regarding critical design choices, dilemmas, and trade-offs that must be considered when investigating time conceptually and empirically. We discuss these recommendations in the form of a sequential decision-making process that can be used as a roadmap by researchers. We hope our conceptualization and recommendations will serve as a catalyst and useful resource for future conceptual and empirical research that aims to formulate better time-sensitive and temporally falsifiable theories.
Article
Purpose Accounting work is characterized by high job demands and tight deadlines. With less task variety, accounting work is susceptible to employee disengagement. This paper aims to examine the role of enhanced performance management practices as an intervention mechanism for disengagement among accountants. Design/methodology/approach A total of 105 accountants participated in an online survey, answering self- and social reports. Hypotheses were tested using regression analyses. Findings Enhanced performance management practices promote engagement among accountants. In turn, engagement promotes job satisfaction and affective commitment among accountants. Research limitations/implications Further studies are necessary to test the study’s findings. Future research should focus on replicating this study in other settings. Practical implications Performance planning and implementation are critical to enhancing accountants’ work attitudes and behaviors. Originality/value The accounting literature has consistently addressed negative accounting work outcomes from the perspective of burnout (a negative approach). This paper addresses the issue from the perspective of engagement (a positive approach).
Chapter
In Chapter 9 we turn to strategic competency management, the second approach to strategy implementation. Three main reasons explain why this approach has gained significantly in importance over the past 15 years. First, the growing popularity of the Resource Based View, and in particular the notion of core competencies, has sharply increased interest in the topic of competencies. Second, there is dissatisfaction with the qualitative side of strategic workforce planning. Third, because they provide a common language, competency models act as a bracket that allows the various HR instruments to be integrated, for example in HR marketing, recruitment, personnel development, and also performance management. In this chapter we examine to what extent strategic competency management can live up to these expectations, what framework conditions it requires, and how widespread the instrument is in practice.
Article
Full-text available
Global talent management is a key success factor for multinational corporations, as investments made to attract and retain talent are enormous. However, the link between talent management practices and retention is under-researched. In this paper, we fill this research gap by proposing a conceptual framework linking global talent management practices and talent retention in multinational corporations, by exploring the role of individual careers through knowing-whom career capital and career success. We conducted a survey among talent and a control group within a multinational company, to test our framework through structural equation modeling. The main results show that talent management practices have a positive effect on talent's intention to stay and that career-related aspects are key factors in retaining this talent on a global scale. Thus, our contribution is threefold: a conceptual framework, empirical evidence, and a new literature-based TM index, which makes the perceived intensity of TM programs measurable.
Chapter
In this chapter we look for answers to our fourth question: How actively should human capital be involved in strategy development? Is the role of human capital active or rather passive? This depends above all on which school of strategy the top management subscribes to. Proponents of the Resource Based View in particular argue that this approach offers the possibility of differentiating the firm from its competitors in a sustainable way. The question is how this can be done, since the Resource Based View remains vague when it comes to concrete suggestions for building and extending a competitive lead. The task is to take employees who are, in themselves, interchangeable and, in combination with other employees or resources, create a constellation of human capital so firm-specific that it is difficult for competitors to copy. There are essentially five options for creating a competitive advantage through human capital that is as sustainable as possible: the use of 'better' employees, the firm's HR architecture, its social capital and organizational capital, and finally the corporate culture. The option of gaining a competitive advantage through 'better' employees alone, however, rarely proves sustainable. The interactions between these options and the lack of clear boundaries between them explain why only relatively few firms manage to build human capital into a source of sustainable competitive differentiation: we are dealing with a complex process that is therefore difficult to steer.
Research Proposal
Full-text available
Public administration, as a field of both academic study and professional practice, would benefit greatly from a more systematic and cohesive strand of research that is explicitly geared towards studying the successes and positive contributions of government. At present, the citizenry at large is ill-informed about what government does well, while the civil service operates in a political environment that tends to derogate or discount its accomplishments. In this environment, it is incumbent on scholars to offer a more balanced appreciation for, and more empowering understanding of, public administration. Inspired by comparable developments in other disciplines, we outline the concrete steps and challenges in launching a "Positive Public Administration," an approach to research and scholarship that examines the degree to which, the manner in which, and the conditions under which public policies, programs, projects, organizations, networks, and partnerships thrive, advance important democratic values, and produce widely valued societal outcomes.
Conference Paper
The study in this article reveals the main structure of indicators that should be used to measure the compatibility process in knowledge-intensive organizations. Due to the shift in the management logic of Fit theory and the widely accepted presumption of a talent shortage, the management of compatibility has come into focus. Many research organizations and business sector companies are using key performance indicators for managing compatibility, but there is a lack of data on how successfully public sector companies are using them. The created and proposed assessment methodology is a preparatory step for empirical research. A systemic analysis of the indicators used to manage compatibility between knowledge workers and workplace solutions was carried out in a dynamic approach to create an application technique.
Article
Full-text available
Industrial/organizational (I/O) psychology, the subfield of psychology applied to the context of work, has been criticized for being dominated by U.S. authors because this dominance could prevent the generalizability of results and the enrichment of theories, paradigms, and approaches by researchers from other parts of the world. Previous estimates of the extent of the U.S. dominance are, however, likely restricted in scope, outdated, and likely biased by non-U.S. researchers who were socialized in the U.S. or received help from U.S. co-authors. As such, we measured the level of U.S. dominance by analyzing 5,626 papers published in the top ten journals of the field of I/O psychology in the last eleven years and their authors. The results show that the U.S. dominance continues, although the internationalization of industrial/organizational psychology has steadily increased. An additional analysis of the gender distribution across our sample revealed that female first authorship is slightly more common among authors with no U.S. affiliation. We suggest several steps to further increase the level of internationalization.
Article
Full-text available
Purpose There has been a surge of interest in leader character and a push to bring character into mainstream management theory and practice. Research has shown that CEOs and board members have many questions about the construct of leader character. For example, they like to see hard data indicating to what extent character contributes to organizational performance. Human resource management professionals are often confronted with the need to discuss and demonstrate the value of training and development initiatives. The question as to whether such interventions have a dollars-and-cents return on the investment is an important one to consider for any organizational decision-maker, especially given the demand for increased accountability, the push for transparency and tightening budgets in organizations. The authors investigated the potential dollar impact associated with the placement of managers based on the assessment of leader character, and they used utility analysis to estimate the dollar value associated with the use of one instrument – the Leader Character Insight Assessment or LCIA – to measure leader character. Design/methodology/approach The authors used field data collected for purposes of succession planning in a large Canadian manufacturing organization. The focus was on identifying senior management candidates suitable for placement into the most senior levels of leadership in the organization. Peers completed the LCIA to obtain leader character ratings of the candidates. The LCIA is a behaviorally based and validated instrument to assess leader character. Performance assessments of the candidates were obtained through supervisor ratings. Findings The correlation between the leader character measure provided by peers and performance assessed by the supervisor was 0.30 ( p < 0.01). Using the data required to calculate ΔU from the Brogden-Cronbach-Gleser model leads to an estimate of CAD $564,128 for the use of the LCIA over the expected tenure of 15 years, which is equivalent to CAD $37,609 yearly; and CAD $375,285 over an expected tenure of 10 years, which is equivalent to CAD $37,529 yearly. The results of the study also indicate that there is still a positive and sizeable return on investment or ROI associated with the LCIA in employee placement even with highly conservative adjustments to the basic utility analysis formula. Originality/value Utility analysis is a quantitative and robust method of evaluating human resource programs. The authors provide an illustration of the potential utility of the LCIA in a selection process for senior managers. They assert that selecting and promoting managers on leader character and developing their character-based leadership will not only leverage their own contributions to the organization but also contribute to a trickle-down effect on employees below them.
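For readers who want to see the arithmetic behind such estimates, here is a generic, hypothetical illustration of the Brogden-Cronbach-Gleser utility formula the study relies on. Only the validity coefficient of 0.30 is taken from the abstract above; every other number is a made-up placeholder, not one of the study's inputs, and serves only to show how a validity coefficient is converted into a dollar figure.

n_selected = 5            # managers placed (hypothetical)
tenure_years = 10         # expected tenure, T (hypothetical)
validity = 0.30           # correlation between character ratings and performance, r (from the abstract)
sd_y = 40_000             # SD of yearly performance in dollars, SDy (hypothetical)
z_selected = 1.0          # mean standardized predictor score of those selected (hypothetical)
n_candidates = 40         # candidates assessed (hypothetical)
cost_per_candidate = 500  # assessment cost per candidate (hypothetical)

delta_u = (n_selected * tenure_years * validity * sd_y * z_selected
           - n_candidates * cost_per_candidate)
print(f"estimated utility gain: ${delta_u:,.0f}")   # -> $580,000 with these made-up inputs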
Article
Full-text available
Anomaly detection is a hard data analysis process that requires constant creation and improvement of data analysis algorithms. Using traditional clustering algorithms to analyse data streams is impossible due to processing power and memory issues. To solve this, the traditional clustering algorithm complexity needed to be reduced, which led to the creation of sequential clustering algorithms. The usual approach is two-phase clustering, which uses online phase to relax data details and complexity, and offline phase to cluster concepts created in the online phase. Detecting anomalies in a data stream is usually solved in the online phase, as it requires unreduced data. Contrarily, producing good macro-clustering is done in the offline phase, which is the reason why two-phase clustering algorithms have difficulty being equally good in anomaly detection and macro-clustering. In this paper, we propose a statistical hierarchical clustering algorithm equally suitable for both detecting anomalies and macro-clustering. The proposed algorithm is single-phased and uses statistical inference on the input data stream, resulting in statistical distributions that are constantly updated. This makes the classification adaptable, allowing agglomeration of outliers into clusters, tracking population evolution, and to be used without knowing the expected number of clusters and outliers. The proposed algorithm was tested against typical clustering algorithms, including two-phase algorithms suitable for data stream analysis. A number of typical test cases were selected, to show the universality and qualities of the proposed clustering algorithm.
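As a rough illustration of the general idea, not the paper's algorithm: the sketch below maintains running statistics per cluster (Welford-style updates), assigns each incoming point to the nearest cluster if it lies within k standard deviations, and otherwise flags it as an anomaly while opening a new candidate cluster, so outliers can later agglomerate into clusters. It is one-dimensional for brevity.

import math

class StreamCluster:
    def __init__(self, x):
        self.n, self.mean, self.m2 = 1, x, 0.0
    def add(self, x):                       # Welford update of running mean/variance
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def std(self):
        return math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0

def process(stream, k=3.0):
    clusters, anomalies = [], []
    for x in stream:
        best = min(clusters, key=lambda c: abs(x - c.mean), default=None)
        if best is not None and abs(x - best.mean) <= k * best.std:
            best.add(x)                          # point fits an existing cluster
        else:
            if clusters:
                anomalies.append(x)              # far from every known cluster: flag it
            clusters.append(StreamCluster(x))    # open a new candidate cluster
    return clusters, anomalies

clusters, anomalies = process([10, 11, 9, 10.5, 50, 49, 51, 200])
print(len(clusters), "clusters found; flagged as anomalous:", anomalies)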
Article
Full-text available
Small firms can contribute to job creation and aggregate income. However, small firms are volatile and only a fraction of those can transition into larger firms, which create high-paying jobs. Entrepreneurs self-select for the transition. This study examines the pattern of entrepreneur self-selection. The main determinants of the self-selection are ability distribution of entrepreneurs and business environment, which represents skill distribution of laborers and social capital. This study predicts that firm-size distribution is truncated with the entrepreneur self-selection and aggregate income is larger when the business environment is better. This study contributes to the literature on firm-size distribution.
Purpose This study aimed at developing and testing a model to evaluate employee performance in Isfahan municipality. Design/methodology/approach A mixed-method design is applied in this study. To extract the model, a semi-structured interview based on the thematic analysis approach was employed. The qualitative data were obtained using a researcher-made questionnaire from a sample of 12 municipal experts selected based on purposive sampling. In the quantitative phase, the sample consisted of 76 managers and interim managers. The validity of the questionnaire was determined by the content validity index, while the structural validity was tested based on structural equation modeling using SmartPLS software. The reliability of the questionnaire was confirmed using Cronbach's alpha and composite reliability indices. Findings The factors obtained in the qualitative model included performance evaluation criteria, the desired time interval for performance evaluation, results announcement, performance evaluation approach, performance evaluation method and evaluator-related variables. There should have been an agreement between evaluators and those who were evaluated in all components of the model. In the quantitative section, performance evaluation criteria, evaluators, the evaluation method and time interval were confirmed with coefficients of 0.871, 0.815, 0.646 and 0.615, respectively. Practical implications The novelty of this study is that it uses a mixed-method research approach to extract a performance evaluation model that is specific to the Isfahan municipality. Originality/value The novelty of this study is that it uses a mixed-method research approach to extract a performance evaluation model that is specific to the Isfahan municipality.
Article
Full-text available
The paper examines the constructs of luck and strong social networks in the innovation process within societies. Philosophy has a rich diversity of co-evolutionary work in progress seeking to understand and integrate luck and technology generation in different spheres of social settings. The first part of this paper addresses the embedded knowledge systems that subsist within the rural spaces of India and that come into the public domain after withstanding an existing problem. Such knowledge corresponds to the development of agricultural technology at the grassroots level, which shapes social views and affects daily living and facts. It therefore explains how the arrival of new capital technology affects farmers in India and their income generation, relating innovations to luck (Howitt, Violante and Aghion 2020), which depends on the context. It provides a fresh understanding of the usefulness of merit on the grounds of accomplishment and of the possibility of acknowledging those who have been fortunate in their field of expertise. The second part of this paper discusses that at certain intervals it is not mere luck that makes a
Article
Full-text available
This study aims to present the translation and adaptation of the Job Dedication Scale into Czech. The final sample consisted of 142 workers in the social services of retirement homes across the Czech Republic, evaluated by their direct superiors on the Job Dedication Scale. The translation took the form of back translations using three independent translators. Descriptive statistics showed the negative skewness of the given ratings and the reduced variability of responses for some items. Despite the high value of internal consistency, the confirmatory factor analysis of the full version of the scale did not demonstrate agreement with the obtained data. Following the results of factor analyses and modification indices, residual correlations between selected items were allowed, which increased the quality of the tested model. The modified model showed a good match with the data. Values of the internal consistency of the scale were excellent. Eliminating any item would reduce this value. The small and specific sample is one of the limits of the study. Another limit is an approach to assessing the job dedication of the employee using only one evaluator.
Article
Full-text available
Bayesian analysis offers strategy scholars numerous benefits. In addition to aligning empirical and theoretical endeavors by incorporating prior knowledge, the Bayesian approach allows researchers to estimate and visualize relationships that reflect the probability distributions many strategy researchers mistakenly interpret from conventional techniques. Yet, strategy scholars have proven hesitant to adopt Bayesian methods. We suggest this is because there is no accessible template for employing the technique with the types of data strategy researchers tend to encounter. The central objective of our research is to synthesize disparate contributions from the Bayesian literature that are relevant for strategy scholarship, especially for nested data. We provide an intuitive overview of Bayesian thinking and illustrate how scholars can employ Bayesian techniques to analyze nested data using an example dataset involving CEO compensation. Our results show how using Bayesian models may lead to substantively different interpretations and conclusions compared to traditional approaches based on frequentist techniques.
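As a hedged, minimal example of the kind of nested-data Bayesian model the article advocates, written with PyMC: the variable names firm_idx, firm_size, and ceo_pay and the simulated data are hypothetical, not the authors' dataset or model.

import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_firms, n_obs = 20, 200
firm_idx = rng.integers(0, n_firms, n_obs)        # which firm each CEO-year belongs to (hypothetical)
firm_size = rng.normal(0, 1, n_obs)               # standardized predictor (hypothetical)
ceo_pay = 1.0 + 0.5 * firm_size + rng.normal(0, 1, n_firms)[firm_idx] + rng.normal(0, 0.5, n_obs)

with pm.Model():
    mu_a = pm.Normal("mu_a", 0, 5)                              # population-level intercept
    sigma_a = pm.HalfNormal("sigma_a", 2)                       # between-firm spread
    a = pm.Normal("a", mu_a, sigma_a, shape=n_firms)            # firm-level intercepts (nesting)
    beta = pm.Normal("beta", 0, 5)                              # slope for firm size
    sigma = pm.HalfNormal("sigma", 2)                           # within-firm noise
    pm.Normal("pay", a[firm_idx] + beta * firm_size, sigma, observed=ceo_pay)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["beta"].mean()))                    # posterior mean of the slope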
Chapter
Compliance, or the behavioral response to legal rules, has become an important topic for academics and practitioners. A large body of work exists that describes different influences on business compliance, but a fundamental challenge remains: how to measure compliance or noncompliance behavior itself? Without proper measurement, it's impossible to evaluate existing management and regulatory enforcement practices. Measuring Compliance provides the first comprehensive overview of different approaches that are or could be used to measure compliance by business organizations. The book addresses the strengths and weaknesses of various methods and offers both academics and practitioners guidance on which measures are best for different purposes. In addition to understanding the importance of measuring compliance and its potential negative effects in a variety of contexts, readers will learn how to collect data to answer different questions in the compliance domain, and how to offer suggestions for improving compliance measurement.
Article
Given that replication studies are important for theory building, theory testing, knowledge accumulation, and domain legitimacy, we attempted to replicate 19 seminal studies of new venture emergence that used PSED-type data; only six attempts were successful. Our humbling experience highlights how changes at the author, journal, and institutional levels—indeed, a communal effort—can encourage, facilitate, and expedite replication studies. We provide entrepreneurship scholars with ten best practices for conducting replication studies, as well as recommendations to other stakeholders to steer away from the replication “crisis” plaguing other research domains. As they say, it takes a village.
Article
Full-text available
Research and development are central to economic growth, and a key challenge for countries of the global South is that their research performance lags behind that of the global North. Yet, among Southern researchers, a few significantly outperform their peers and can be styled research “positive deviants” (PDs). In this paper we ask: who are those PDs, what are their characteristics and how are they able to overcome some of the challenges facing researchers in the global South? We examined a sample of 203 information systems researchers in Egypt who were classified into PDs and non-PDs (NPDs) through an analysis of their publication and citation data. Based on six citation metrics, we were able to identify and group 26 PDs. We then analysed their attributes, attitudes, practices, and publications using a mixed-methods approach involving interviews, a survey and analysis of publication-related datasets. Two predictive models were developed using partial least squares regression; the first predicted if a researcher is a PD or not using individual-level predictors and the second predicted if a paper is a paper of a PD or not using publication-level predictors. PDs represented 13% of the researchers but produced about half of all publications, and had almost double the citations of the overall NPD group. At the individual level, there were significant differences between both groups with regard to research collaborations, capacity development, and research directions. At the publication level, there were differences relating to the topics pursued, publication outlets targeted, and paper features such as length of abstract and number of authors.
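A hedged sketch of a publication-level predictive model in the spirit of the one described above, using scikit-learn's PLSRegression on synthetic data; the predictors, threshold, and outcome coding are assumptions, not the study's actual model:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 500
    # Hypothetical publication-level predictors: abstract length, number of authors, outlet indicator
    X = np.column_stack([
        rng.normal(200, 50, n),     # abstract word count
        rng.integers(1, 8, n),      # number of authors
        rng.normal(0, 1, n),        # publication-outlet score
    ])
    # Hypothetical binary label: 1 = paper authored by a positive deviant
    y = (0.01 * X[:, 0] + 0.4 * X[:, 1] + X[:, 2] + rng.normal(0, 1, n) > 3.5).astype(float)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
    pred = (pls.predict(X_te).ravel() > 0.5).astype(float)   # threshold the continuous PLS score
    print("holdout accuracy:", (pred == y_te).mean())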
Article
Full-text available
The Gospel according to Saint Matthew, beyond Pier Paolo Pasolini's controversial film version, is one of the four gospels of the New Testament. Each of the four gospels presents a kind of biography of Jesus with different emphases. Of the four evangelists, two (Matthew and John) were among the "Twelve" apostles, while the other two (Mark and Luke) were their followers, a second generation of sorts. "Parables" are one of the literary genres most often used to place in Jesus's mouth teachings meant to sink deep into readers' hearts. The "parable of the talents" is one of the most famous. It is interesting that the word "talent" appears in this context, though with another meaning; it is not exactly the one we are used to in management. The key text of the parable is found in Mt 25:14-30: "For to everyone who has, more will be given and he will have abundance, but from him who has not, even what he has will be taken away" (Jerusalem Bible version). Notably, the same saying had already been attributed to Jesus a little earlier in the same gospel (13:12), within the parable of the sower. Two parables for the same teaching, which indicates that this is one of those sayings that circulated in the early Christian communities toward the end of the first century. The parable of the talents tells the story of a man who summoned his servants and entrusted his estate to them. He gave to each according to his ability: to some five talents, to another two, and to another one. The servants were expected to trade with them and earn more. The first two doubled the talents they had received; the last, by contrast, returned exactly what he had received, that is, he did not put it to work. The master's somewhat enigmatic reply is that God harvests where he did not sow and gathers where he did not scatter. The master then decided to take away the one talent and give it to the servant who had ten (five plus five), and he ordered the servant who had not doubled his talent to be cast out of the estate, a kind of eternal punishment. One possible interpretation is that the powers granted to stewards grow with use and diminish with disuse. From Matthew to Robert Merton: at some point, sociology took up the gospel story and, where there had been distributive justice, saw instead a kind of disproportion that generates inequality. In this way, disproportionate rewards arrived in the world of science, and with them subjectivity crept into scientific logic. In the late 1950s, Robert Merton (1957), one of the greatest sociologists of the twentieth century, observed that rewards in science are distributed mainly in the currency of recognition granted to research by other scientists. In other words, you are treated as you are seen. Scientists' public image is shaped, to a large extent, by the validating testimony of other renowned researchers who have been at the forefront of the demanding institutional requirements of their roles. This is what Nassim Taleb (2008) called "the reputation effect". In 1962, Thomas Kuhn published his best-remembered text, "The Structure of Scientific Revolutions". If anything became clear from reading it, it is that the legitimacy and validity of scientific statements depend, to a large degree, on the consensus within the paradigmatic community that enjoys the greatest prestige and recognition.
Scientific truth, then, is relative and ends up being a social phenomenon, an attribution granted by certain relevant groups (opinion leaders).
Article
Full-text available
This work revises the concept of defects in crystalline solids and proposes a universal strategy for their characterization at the atomic scale using outlier detection based on statistical distances. The proposed strategy provides a generic measure that describes the distortion score of local atomic environments. This score facilitates automatic defect localization and enables a stratified description of defects, which allows to distinguish the zones with different levels of distortion within the structure. This work proposes applications for advanced materials modelling ranging from the surrogate concept for the energy per atom to the relevant information selection for evaluation of energy barriers from the mean force. Moreover, this concept can serve for design of robust interatomic machine learning potentials and high-throughput analysis of their databases. The proposed definition of defects opens up many perspectives for materials design and characterization, promoting thereby the development of novel techniques in materials science.
Article
Full-text available
Crowdfunded microlending research implies that both communal and agentic characteristics are valued. These characteristics, however, are often viewed as being at odds with one another due to their association with gender stereotypes. Drawing upon expectancy violation theory and research on gender stereotypes, we theorize that gender-counterstereotypical facial expressions of emotion provide a means for entrepreneurs to project “missing” agentic or communal characteristics. Leveraging computer-aided facial expression analysis to analyze entrepreneur photographs from 43,210 microloan appeals, we show that women benefit from stereotypically masculine facial expressions of anger and disgust, whereas men benefit from stereotypically feminine facial expressions of sadness and happiness.
Thesis
Full-text available
The diploma thesis deals with contextual performance, emotional intelligence and the attachment of workers in the social services of retirement homes. To expand knowledge about job performance, a new method was created for the empirical part of the thesis to assess the job performance required directly by retirement homes. The theoretical part describes emotional intelligence, attachment and contextual performance in the form of interpersonal facilitation and job dedication. The empirical part of the thesis explores the possibilities of predicting job performance from the emotional intelligence and attachment of workers in social services. Four methods administered to social service workers (MSCEIT, EWR-I, LMX-7 and BFI-2) and four methods administered to department heads to assess employee job performance (Job Facilitation Scale, Job Dedication Scale, LMX-7 and the SQSS questionnaire created by us) were used to measure the study variables. The research group consists of 141 workers in the social services. According to the results of the study, it is not possible to predict the selected types of job performance from employees' emotional intelligence. We were also unable to predict the selected types of employee job performance from their attachment beyond the quality of the relationship between them and their managers and beyond personality traits in terms of the Big Five. Based on our results, high attachment anxiety and attachment avoidance do not significantly predict reduced contextual performance or job performance as measured by the SQSS. Although attachment and emotional intelligence did not predict job performance in our data, we consider this study an important starting point for similar studies that would help clarify the importance of emotional intelligence and attachment among retirement home workers.
Article
Purpose This study aims to draw lessons on how talent identification becomes a critical factor in the field of talent management (TM). Design/methodology/approach A simulation approach with three developed scenarios is used in the paper. The first utilised the standard deviation of skewed performance scores, the second applied the standard deviation of normalised data and the third used a percentile approach. To normalise the employee performance data, the paper proposes a weighted function that addresses skewness. Findings The results indicate that the process of identifying talent using a nine-box grid is sensitive to changes in the classification criteria used, indicating a bias in identifying talent. In sum, a standard deviation approach applied to transformed data is the most appropriate choice for performance data with a skewed distribution. Practical implications The Government of West Java Province, Indonesia, can use the simulation results to objectively identify excellent civil servants and develop an appropriate TM strategy. A similar treatment can be implemented in other organisations that face skewed distributions. Originality/value This paper introduces a weighted-function approach to address the practical problems posed by asymmetric distributions of employee performance scores when identifying talent within a TM framework. It shows the application of a mathematical technique to solve issues found in human resource management systems.
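The following sketch is not the paper's weighted function, but it illustrates how the three simulated scenarios can diverge on skewed performance data (synthetic scores; Box-Cox normalisation and a one-standard-deviation cut-off are stand-in choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    perf = rng.lognormal(mean=0, sigma=0.6, size=1000)     # skewed performance scores

    def top_band(scores):
        """Flag 'talent' as scores more than one SD above the mean (one axis of a nine-box grid)."""
        return scores > scores.mean() + scores.std(ddof=1)

    raw_flag = top_band(perf)                              # scenario 1: SD on raw, skewed scores
    transformed, _ = stats.boxcox(perf)                    # scenario 2: SD on normalised (Box-Cox) scores
    norm_flag = top_band(transformed)
    pct_flag = stats.rankdata(perf) / len(perf) > 0.90     # scenario 3: percentile cut-off (top 10%)

    print("flagged as talent:", raw_flag.sum(), norm_flag.sum(), pct_flag.sum())
    print("agreement, raw vs normalised:", (raw_flag == norm_flag).mean())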
Article
Full-text available
In this constructive replication, we evaluate the effect of star performers on unit performance beyond the presence of other high performers and high mean levels of unit performance to clarify and confirm stars' unique contribution to unit performance. Furthermore, we extend prior work by assessing stars' moderating effect on the unit turnover‐to‐unit performance relationship. With a sample drawn from Major League Baseball, we confirm that stars have a positive influence on unit performance above and beyond the effects of high levels of mean unit productivity and other high performers, but that the strength of this relationship diminishes as the number of stars increases. We also find that stars moderate the relationship between unit turnover and performance such that star performers effectively eliminate the harmful effects of turnover on unit performance. However, this moderating effect is itself eliminated when the turnover rate changes drastically.
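A generic moderation test of the kind reported above can be sketched with an interaction term in ordinary least squares; the data, variable names, and coefficients below are synthetic illustrations, not the study's Major League Baseball model:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 300
    df = pd.DataFrame({
        "turnover": rng.uniform(0, 0.4, n),                # unit turnover rate
        "n_stars": rng.integers(0, 4, n),                  # number of star performers in the unit
    })
    # Synthetic outcome: turnover hurts performance, but less so when stars are present
    df["unit_perf"] = (0.5 - 1.5 * df["turnover"] + 0.1 * df["n_stars"]
                       + 1.2 * df["turnover"] * df["n_stars"] + rng.normal(0, 0.2, n))

    model = smf.ols("unit_perf ~ turnover * n_stars", data=df).fit()
    print(model.params[["turnover", "n_stars", "turnover:n_stars"]])   # positive interaction = buffering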
Article
Individuals with credentials (Board Certified Behavior Analyst–Doctoral and Board Certified Behavior Analyst) from the Behavior Analyst Certification Board throughout the United States were asked to identify the characteristics and corresponding behaviors of individuals they consider to be exemplary in the profession. From these responses, a list of 35 characteristics and attendant behaviors was compiled into the Exemplary Behavior Analyst Checklist. This checklist contains a number of characteristics that are traditionally representative of the field (e.g., analytical, applied, conceptually systematic, technological) and relate to technical and conceptual skills. Respondents also identified a number of characteristics associated with compassion and support of clients/individuals (e.g., client centered, culturally competent, empathetic, positive/encouraging). A “top 10” list of the qualities and behaviors of exemplary behavior analysts identified by participants is presented, and a discussion regarding the implications for the training of credentialed professionals is provided.
Chapter
This chapter makes the case for universal pay equality in an organizational context by highlighting its benefits in facilitating change, innovation and workforce deployment, but it also describes its downsides particularly in denying employees financial recognition for difficult work conditions or increased contribution. The chapter highlights the complexity of financial incentives—effective in some occupations/sectors, deleterious in others—but overall favors group rewards (profit sharing/gainsharing) over individual, with base pay differentials only reflecting extra skills and responsibilities.
Article
Sustainable development is being reconsidered as a process with an unknown endpoint. Outputs of sustainable urban water systems, defined as 'policies, projects, laws, technologies, and consumption and reuse amounts associated with urban water sustainability goals', are therefore being viewed as inadequate monitoring instruments. I propose a new methodology for sustainability monitoring whereby the normality of a system is diagnosed through the normality of its supporting inputs rather than the normality of its complex outputs. Supporting inputs are 'intents and behaviors that support system goals'. Supporting inputs follow a principle of self-organization to remain in the norm and behavior zone commonly associated with system goals. This implies that the normality of supporting inputs can be inferred from their longitudinally normal (Gaussian) distribution, which can be examined with significance tests, in particular the Shapiro-Wilk test, which is most powerful for n < 50. We identify fourteen supporting inputs of sustainable urban water systems (such as internet searches, community campaigns, staff training, agent-principal reporting and legislative propositions about water sustainability) and define quantitative indicators for them. The Shapiro-Wilk test and the Kolmogorov-Smirnov (K-S) test of these indicators, and a subsequent boxplot outlier examination of non-normal indicators, are undertaken in Yazd, a desert city in central Iran with a historic record of water conservation, in the light of its complex wastewater speculation. Qualitative examination of non-normal supporting inputs confirms the ability of our statistical methodology to detect problems in the system.
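A minimal sketch of the proposed normality screening for one indicator series, using SciPy's Shapiro-Wilk and Kolmogorov-Smirnov tests (the indicator values are synthetic, and estimating the normal parameters from the data makes the K-S p-value only a rough screen):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Hypothetical monthly indicator for one supporting input, e.g. water-related internet searches
    indicator = rng.normal(100, 15, size=36)

    w, p_sw = stats.shapiro(indicator)                     # Shapiro-Wilk: preferred for small n
    d, p_ks = stats.kstest(indicator, "norm",
                           args=(indicator.mean(), indicator.std(ddof=1)))
    print(f"Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")
    if min(p_sw, p_ks) < 0.05:
        print("Indicator departs from normality; inspect it with a boxplot for outliers.")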
Article
Team research typically assumes that team performance is normally distributed: teams cluster around average performance, performance variability is not substantial, and few teams inhabit the upper range of the distribution. Ironically, although most team research and methodological practices rely on the normality assumption, many theories actually imply nonnormality (e.g., performance spirals, team composition, team learning, punctuated equilibrium). Accordingly, we investigated the nature and antecedents of team performance distributions by relying on 274 performance distributions including 200,825 teams (e.g., sports, politics, firefighters, information technology, customer service) and more than 500,000 workers. First, regarding their overall nature, only 11% of the distributions were normal, star teams are much more prevalent than predicted by normality, the power law with an exponential cutoff is the most dominant distribution among nonnormal distributions (i.e., 73%), and incremental differentiation (i.e., differential performance trajectories across teams) is the best explanation for the emergence of these distributions. Second, this conclusion remained unchanged after examining theory-based boundary conditions (i.e., tournament versus nontournament contexts, performance as aggregation of individual-level performance versus performance as a team-level construct, performance assessed with versus without a hard left-tail zero, and more versus less sample homogeneity). Third, we used the team learning curve literature as a conceptual framework to test hypotheses and found that authority differentiation and lower temporal stability are associated with distributions with larger performance variability (i.e., a greater proportion of star teams). We discuss implications for existing theory, future research directions, and methodological practices (e.g., need to check for nonnormality, Bayesian analysis, outlier management).
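A small sketch of the distribution-comparison logic for a single team, contrasting Kolmogorov-Smirnov fit statistics for Gaussian and Paretian candidates on synthetic scores; this illustrates the general approach, not the article's full estimation procedure:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    team_scores = rng.pareto(a=2.5, size=25) + 1           # one team's individual performance scores

    # Fit both candidate distributions and compare K-S statistics (lower = better fit)
    mu, sd = team_scores.mean(), team_scores.std(ddof=1)
    ks_gauss = stats.kstest(team_scores, "norm", args=(mu, sd)).statistic
    b, loc, scale = stats.pareto.fit(team_scores, floc=0)
    ks_pareto = stats.kstest(team_scores, "pareto", args=(b, loc, scale)).statistic

    print(f"KS (Gaussian) = {ks_gauss:.3f}, KS (Paretian) = {ks_pareto:.3f}")
    print("Better-fitting shape:", "Paretian" if ks_pareto < ks_gauss else "Gaussian")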
Article
Full-text available
Strategies of place branding are an important tool for managing the flow of human capital. At the same time, the diversification of the labor force predetermines the specificity of a region and influences its brand. Keeping in mind that strong and effective place brands must be based on a clear vision, local governments become key players in the elaboration, implementation and maintenance of a brand. A regional brand is multidimensional and a region’s policy aimed at gaining talented people is one of its components. Considering the necessity for measuring the effectiveness of regional policies as well as comparing the attractiveness of regions with respect to a special group of human capital of talented individuals, it is possible to use indexes. The present study aims to answer the question: what is the difference in the attractiveness of Polish voivodships from the perspective of attracting talented people? The research design is based on the theory of place branding and is of exploratory nature. The aim of the study was achieved through a ranking of Polish voivodships based on the Global Talent Competitiveness Index. The ranking was completed applying the TOPSIS method. The index had been constructed using five dimensions of territorial attractiveness connected to gaining talent: enable, attract, grow, retain, be global. In general, the results show that all voivodships have good socio-economic and development conditions; however, opportunities for talented people and their global integration were less advanced. Conclusions reached through the study enable the identification of voivodships that are most and least attractive to talented people. The paper shows that the attractiveness of territories for talented people can be estimated at the meso-level providing researchers and practitioners with yet another perspective—in addition to the micro (corporate) and macro (national) levels that are better explored. The outcomes of such research facilitate the recognition of regions’ potential for attracting talent.
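A compact sketch of the TOPSIS ranking step on hypothetical regional scores; the pillar values and equal weights are assumptions, not the study's data:

    import numpy as np

    def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
        """Return closeness-to-ideal scores; higher = more attractive alternative."""
        norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))          # vector normalisation
        v = norm * weights                                          # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))     # best value per criterion
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))      # worst value per criterion
        d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_neg / (d_pos + d_neg)

    # Hypothetical scores of four regions on five pillars: enable, attract, grow, retain, be global
    regions = np.array([
        [0.7, 0.6, 0.5, 0.8, 0.4],
        [0.5, 0.9, 0.6, 0.6, 0.7],
        [0.6, 0.5, 0.8, 0.7, 0.5],
        [0.4, 0.4, 0.4, 0.5, 0.3],
    ])
    weights = np.full(5, 0.2)                 # equal weights as a neutral assumption
    benefit = np.array([True] * 5)            # all pillars are "more is better"
    print(np.argsort(-topsis(regions, weights, benefit)))   # ranking, most attractive region first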
Article
Full-text available
Idiosyncratic employment arrangements (i-deals) stand to benefit the individual employee as well as his or her employer. However, unless certain conditions apply, coworkers may respond negatively to these arrangements. We distinguish functional i-deals from their dysfunctional counterparts and highlight evidence of i-deals in previous organizational research. We develop propositions specifying both how i-deals are formed and how they impact workers and coworkers. Finally, we outline the implications i-deals have for research and for managing contemporary employment relationships.
Article
Full-text available
The relationship between work attitudes and individual job performance was investigated using artificial neural networks (ANNs). ANNs use pattern recognition algorithms that are well suited to capturing nonlinear relationships among variables thereby providing a new perspective on research on this topic area. Results from the neural network analysis provided strong evidence of nonlinearity suggesting that nonlinear models are needed to understand the work attitude-job performance relationship. In so doing, the neural network model had greater predictive accuracy than did traditional OLS regression. Implications of this finding for theory development and future research were discussed.
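The contrast drawn above can be sketched by comparing an ordinary least squares fit with a small neural network on a synthetic, nonlinear attitude-performance relationship (data and architecture are illustrative assumptions):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(7)
    attitude = rng.uniform(-2, 2, size=(800, 1))
    # Synthetic nonlinear link: performance gains flatten out at the attitude extremes
    performance = np.tanh(1.5 * attitude[:, 0]) + rng.normal(0, 0.2, 800)

    X_tr, X_te, y_tr, y_te = train_test_split(attitude, performance, random_state=0)
    ols = LinearRegression().fit(X_tr, y_tr)
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

    print("OLS R^2:", round(r2_score(y_te, ols.predict(X_te)), 3))
    print("ANN R^2:", round(r2_score(y_te, nn.predict(X_te)), 3))   # higher when the relation is nonlinear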
Article
Full-text available
During the past 30 years, meta-analysis has been an indispensable tool for revealing the hidden meaning of our research literatures. The four articles in this special section on meta-analysis illustrate some of the complexities entailed in meta-analysis methods. Although meta-analysis is a powerful tool for advancing cumulative knowledge, researchers can be confused by the complicated issues involved in the methodology. Each of these four articles contributes both to advancing this methodology and to the increasing complexities that can befuddle researchers. In these comments, the author attempts to clarify both of these aspects and provide a perspective on the methodological issues examined in these articles.
Article
Full-text available
Examined the effect of race on the peer ratings of 43 black and 50 white industrial employees recently exposed to a foreman-training program which included intensive human relations training. Contrary to previous studies, no race effect was found. In addition, almost all the requirements for convergent and discriminant validity between the races were met. Possible explanations for these results and implications for the use of peer ratings in integrated settings are discussed. (15 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A literature review indicates that the standard deviation of employee output averaged 20% of mean output under nonpiecework compensation systems and 15% under piecework systems. For both systems, variability around the mean was small. Implications for selection and workforce productivity are discussed. (25 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Used decision theoretic equations to estimate the impact of the Programmer Aptitude Test (PAT) on productivity if used to select new computer programmers for 1 yr in the federal government and the national economy. A newly developed technique was used to estimate the standard deviation of the dollar value of employee job performance, which in the past has been the most difficult and expensive item of required information. For the federal government and the US economy separately, results are presented for different selection ratios and for different assumed values for the validity of previously used selection procedures. The impact of the PAT on programmer productivity was substantial for all combinations of assumptions. Results support the conclusion that hundreds of millions of dollars in increased productivity could be realized by increasing the validity of selection decisions in this occupation. Similarities between computer programmers and other occupations are discussed. It is concluded that the impact of valid selection procedures on work-force productivity is considerably greater than most personnel psychologists have believed. (37 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
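The decision-theoretic estimate described above is commonly written in the Brogden-Cronbach-Gleser form; as a generic, hedged illustration (the symbols and the worked numbers below are not taken from the study):

    \Delta U = T \, N_s \, r_{xy} \, SD_y \, \bar{z}_x \; - \; N_{app} \, C

where T is the average tenure of those hired, N_s the number selected, r_{xy} the validity of the predictor, SD_y the standard deviation of job performance in dollars, \bar{z}_x the mean standardized predictor score of those selected, N_{app} the number of applicants tested, and C the cost of testing one applicant. For instance, with T = 2 years, N_s = 50, r_{xy} = .40, SD_y = \$10{,}000, \bar{z}_x = 1.0, 500 applicants and C = \$20 per applicant, \Delta U = 2 \times 50 \times .40 \times 10{,}000 \times 1.0 - 500 \times 20 = \$390{,}000.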
Article
Full-text available
Five experiments with 596 undergraduates contrasted Ss' intuitive evaluation of data for hypothesis testing with the Bayesian concept of diagnosticity. According to that normative model, the impact of a datum, D relative to a pair of hypotheses, H and H̄, is captured by its likelihood ratio, equal to P( D/H)/ P( D/H̄). Results show that when Ss were asked to test the validity of H, only half expressed an interest in P( D/H). That proportion increased when they were asked to determine whether H or H̄ was true. That proportion decreased when the instructions more forcefully encouraged Ss to solicit only pertinent information. Thus Ss generally had a strong interest only in the conditional probability that mentioned the hypothesis (or hypotheses) that they were explicitly asked to test. When, however, they were presented with both components of the likelihood ratio, most Ss revealed a qualitative understanding of their meaning vis-à-vis hypothesis testing. Results are discussed in terms of the kinds of understanding that people might have for statistical principles. (22 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
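The diagnosticity concept defined above can be made concrete with a small worked example (numbers chosen for illustration only):

    LR = \frac{P(D \mid H)}{P(D \mid \bar{H})}, \qquad \text{posterior odds} = \text{prior odds} \times LR

If P(D | H) = .80 and P(D | H̄) = .20, then LR = 4, so prior odds of 1:1 on H become posterior odds of 4:1, that is, P(H | D) = .80. Both conditional probabilities are needed: P(D | H) = .80 on its own is uninformative if P(D | H̄) is also .80, since then LR = 1 and the datum does not discriminate between the hypotheses.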
Article
Full-text available
Reviews research that is concerned with evaluating the psychometric qualities of data in the form of ratings (rating errors) and that has been plagued with conceptual and operational confusion and inconsistency. Following a brief historical survey, inconsistencies in definitions, quantifications, and methodologies are documented in a review of more than 20 relevant articles published in Journal of Applied Psychology, Organizational Behavior and Human Performance, and Personnel Psychology (1975–1977). Empirical implications of these inconsistencies are discussed, and a revised typology of rating criteria, combined with a multivariate analytic approach, is suggested. (65 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
NSF Graduate Fellowships are awarded to approximately half of a homogeneous group of applicants in a procedure that approximates random assignment to the conditions of either fellowship or honorable mention. This natural experiment permits assessment of the effect on early career accomplishments of being named an NSF fellow. The authors found a consistent effect for PhD completion—overall, fellows were 7% more likely to complete the PhD than were nonawardees—but found no reliable fellowship effect on achieving faculty status, achieving top faculty status, or submitting or receiving an NSF or a National Institutes of Health research grant. The authors conclude that the positive expectancies associated with this prestigious fellowship have only a small influence (Pygmalion or Galatea effect) in graduate school and no effect thereafter.
Article
Tests and structured assessments are used to make inferences and decisions about individuals and groups. In personnel selection, these can range from assessments of the knowledge, skills, and abilities thought to be necessary for successful job performance to evaluations of current and past job performance. This article discusses assessments that range from paper-and-pencil tests of work-related abilities and skills to the measures based on the judgments of an interviewer or a supervisor. Many of the principles of psychometrics were first developed in the context of multi-item written tests of abilities or other enduring characteristics of individuals. In this article, the descriptions of the main models and methods of psychometrics are often framed in terms of specific characteristics of these tests (e.g., the use of multiple test items, in which all items are designed to measure the same characteristic of individuals).
Article
The "80-20" rule that describes buyer concentration is a predictable feature of consumer behavior for established brands. It is governed by the same principles that affect the overall purchase frequency distribution, namely brand popularity and frequency of purchase. Marketers interested in brand growth and in total brand-profit contribution to the company will focus on increasing brand popularity rather than narrowly targeting a small percent of "profitable households.".
Book
Interdisciplinary and research-based in approach, Applied Psychology in Human Resource Management integrates psychological theory with tools and methods for dealing with human resource problems in organizations and for making organizations more effective and more satisfying places to work. The seventh edition reflects the state of the art in personnel psychology and the dramatic changes that have recently characterized the field, and outlines a forward-looking, progressive model toward which HR specialists should aim.
Article
The reward and communication systems of science are considered.
Article
This article attempts to explain how organizations are controlled through exchange relationships with their environments. Most organizations are dependent on their environments at five points. Emerson's definitional treatment of power provides a method for weighting these five kinds of exchange relationships to determine which are more problematic for an organization. Organizational behavior can then be represented in part as a rank-weighted average of the forces emanating from these external dependencies. One partial exception to this analysis obtains when potential controllers are fractionated or dispersed. Where these conditions hold, close control will be difficult. Examining some of the conditions under which coercive influence attempts become probable completes the picture of the environmental controls over formal organizations.
Article
A rating-scoring technique for evaluating free response answers is described and illustrated. An experimental (trainee) group of 18 supervisors and a matched control group of 18 each gave free response answers to four human relations questions. The E group was trained, and then both groups answered the questions again. Four social scientist raters sorted the 72 responses to each question into a 7-category forced-normal distribution. The pre and post-test score for each respondent was the sum of the category numbers assigned by the four raters over the four questions. Validity of the device as a measure of training effectiveness was previously reported (see 25: 7152). Interrater reliability is reported here: the summated pretest score reliability was .85; posttest score reliability was .88. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Power in organizations is a fluid social construction subject to multiple interpretations. The extensive literature on power provides insights about the antecedents and consequences of power at the individual and group levels but does not provide a model tracing the linkages between them and describing how power develops and is transferred between individuals and groups. In this article we describe some of the conditions necessary for power identities and reputations to develop and transfer effectively between individuals and groups in organizations. A nurse, an administrator, and a high-powered physician were asked to join forces as a team to create a new patient scheduling system for a surgical unit at General Hospital. Perceptions of the power of these three individuals differed significantly at the time they formed the team. Despite these initial differences, the team was perceived by its own members and others as increasingly powerful over time. Moreover, perceptions of power were transferred back from the team to its individual members. That is, the team's ultimate power enhanced their own and others' perceptions of each member's power.
Article
One of the most critical challenges faced by management scholars is how to integrate micro and macro research methods and theories. This article introduces a special issue of the Journal of Management addressing this integration challenge. First, the authors describe the nature of the micro—macro divide and its challenge for the field of management. Second, the authors provide a summary of each of the four guest editorials and seven articles published in the special issue and how each piece, in its own unique way and adopting a different perspective, makes a novel contribution toward addressing this challenge. Finally, they offer suggestions for future research that they hope will stimulate greater integration of management research with the goal of bridging not only the micro—macro gap but also the science—practice gap.
Article
Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from "factors" to "actors" in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction. Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach.
Article
This article presents statistical methods for identifying outcomes in a given sample that can be inferred as plausible extreme and whether the extremes on two variables are associated. Applications to CEO pay and performance of 50 top-paid CEOs illustrate these methodologies. Thresholds between extremes and nonextremes are found using high probability intervals under the probability distributions that govern sampling variations of the sample extremes. A Bayesian approach is used to compute odds on the association between the extremes of the two variables. The extreme pay—performance analysis of 50 top-paid CEOs reveals astonishing odds in favor of a company being extreme high only on one of the two versus on both variables. The result is considered decisive evidence for a negative association between extreme on CEO pay and extreme on performance among such top-paid CEOs. By contrast, analysis of the nonextreme CEOs yielded no evidence of any association between CEO pay and performance.
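The following sketch is not the article's Bayesian procedure; it is a deliberately simplified frequentist stand-in that thresholds "extremes" at the 90th percentile and checks their association with Fisher's exact test on synthetic data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    pay = rng.lognormal(3, 0.5, size=50)                   # hypothetical CEO pay
    perf = rng.normal(0.08, 0.05, size=50)                 # hypothetical firm performance

    pay_hi = pay >= np.quantile(pay, 0.9)                  # crude threshold for "extreme high" pay
    perf_hi = perf >= np.quantile(perf, 0.9)               # crude threshold for "extreme high" performance

    table = np.array([
        [np.sum(pay_hi & perf_hi),  np.sum(pay_hi & ~perf_hi)],
        [np.sum(~pay_hi & perf_hi), np.sum(~pay_hi & ~perf_hi)],
    ])
    odds_ratio, p_value = stats.fisher_exact(table)        # association between the two "extreme" flags
    print(table)
    print("odds ratio:", odds_ratio, "p:", p_value)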
Article
Scale coarseness is a pervasive yet ignored methodological artifact that attenuates observed correlation coefficients in relation to population coefficients. The authors describe how to disattenuate correlations that are biased by scale coarseness in primary-level as well as meta-analytic studies and derive the sampling error variance for the corrected correlation. Results of two Monte Carlo simulations reveal that the correction procedure is accurate and show the extent to which coarseness biases the correlation coefficient under various conditions (i.e., value of the population correlation, number of item scale points, and number of scale items). The authors also offer a Web-based computer program that disattenuates correlations at the primary-study level and computes the sampling error variance as well as confidence intervals for the corrected correlation. Using this program, which implements the correction in primary-level studies, and incorporating the suggested correction in meta-analytic reviews will lead to more accurate estimates of construct-level correlation coefficients.
Article
Artificial neural networks are rapidly gaining popularity in the hard sciences and in social science. This article discusses neural networks as tools business researchers can use to analyze data. After providing a brief history of neural networks, the article describes limitations of multiple regression. Then, the characteristics and organization of neural networks are presented, and the article shows why they are an attractive alternative to regression. Shortcomings and applications of neural networks are reviewed, and neural network software is discussed.
Article
This study uses data on the U.S. film industry from 1982 to 2001 to analyze the effects on box office performance of prior relationships between film producers and distributors. In contrast to prior studies, which have appeared to find performance benefits to both buyers and sellers when exchange occurs embedded within existing social relations, we propose that the apparent mutual advantages of embedded exchange can also emerge from endogenous behavior that benefits one party at the expense of the other: actors offer better terms of trade and allocate more resources to transactions embedded within existing social relations, thereby contributing to the ostensible advantages of such exchange patterns. Findings show that not only do distributors exhibit a preference for carrying films involving key personnel with whom they had prior exchange relations, but also they tend to favor these films when allocating scarce resources (opening dates and promotion effort). After controlling for the effects of these decisions, films with deeper prior relations to the distributor perform worse at the box office. The results suggest that, rather than benefiting from repeated exchange, distributors overallocate scarce resources to these prior exchange partners, enacting a self-confirming dynamic.
Article
This cautionary note provides a critical analysis of a statistical practice that is used pervasively by researchers in strategic management and related fields in conducting covariance structure analyses: The argument that a “large” sample size renders the χ² goodness-of-fit test uninformative and a statistically significant result should not be an indication that the model does not fit the data well. Our analysis includes a discussion of the origin of this practice, what the attributed sources really say about it, how much merit this practice really has, and whether we should continue using it or abandon it altogether. We conclude that it is not correct to issue a blanket statement that, when samples are large, using the χ² test to evaluate the fit of a model is uninformative and should be simply ignored. Instead, our analysis leads to the conclusion that the χ² test is informative and should be reported regardless of sample size. In many cases, researchers ignore a statistically significant χ² inappropriately to avoid facing the inconvenient fact that (albeit small) differences between the observed and hypothesized (i.e., implied) covariance matrices exist.
Article
We highlight important differences between twenty‐first‐century organizations as compared with those of the previous century, and offer a critical review of the basic principles, typical applications, general effectiveness, and limitations of the current staffing model. That model focuses on identifying and measuring job‐related individual characteristics to predict individual‐level job performance. We conclude that the current staffing model has reached a ceiling or plateau in terms of its ability to make accurate predictions about future performance. Evidence accumulated over more than 80 years of staffing research suggests that general mental abilities and other traditional staffing tools do a modest job of predicting performance across settings and jobs considering that, even when combined and corrected for methodological and statistical artifacts, they rarely predict more than 50% of the variance in performance. Accordingly, we argue for a change in direction in staffing research and propose an expanded view of the staffing process, including the introduction of a new construct, in situ performance, and an expanded view of staffing tools to be used to predict future in situ performance that take into account time and context. Our critical review offers a novel perspective and research agenda with the goal of guiding future research that will result in more useful, applicable, relevant, and effective knowledge for practitioners to use in organizational settings.
Article
Two methods for estimating dollar standard deviations were investigated in a simulated environment. 19 graduate students with management experience managed a simulated pharmaceutical firm for 4 quarters. Ss were given information describing the performance of sales representatives on 3 job components. Estimates derived using the method developed by F. L. Schmidt et al (see record 1981-02231-001) were relatively accurate with objective sales data that could be directly translated to dollars, but resulted in overestimates of means and standard deviations when data were less directly translatable to dollars and involved variable costs. An additional problem with the Schmidt et al procedure involved the presence of outliers, possibly caused by differing interpretations of instructions. The Cascio-Ramos estimate of performance in dollars (CREPID) technique, proposed by W. F. Cascio (1982), yielded smaller dollar standard deviations, but Ss could reliably discriminate among job components in terms of importance and could accurately evaluate employee performance on those components. Problems with the CREPID method included the underlying scale used to obtain performance ratings and a dependency on job component intercorrelations. (11 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The goals of this chapter are to introduce organizational responsibility research and practice to the field of industrial and organizational (I/O) psychology and to encourage I/O psychology researchers and practitioners to embrace organizational responsibility in their research and practice. Although its definition is elaborated in detail later in the chapter, organizational responsibility is defined as context-specific organizational actions and policies that take into account stakeholders’ expectations and the triple bottom line of economic, social, and environmental performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This volume summarizes the results of a research project in industrial relations in which the Industrial Research Department of the Harvard Graduate School of Business Administration cooperated with the Western Electric Company. 12 years of research bring the authors to a critical evaluation of the traditional view that workers, supervisors, or executives be considered apart from their social setting and treated as essentially "economic men." For example, "it became clear that the beneficial effects of rest pauses could be explained equally well in terms of the social function." The work involved was not heavy manual labor. Again, "the efficiency of a wage incentive is so dependent on its relation to other factors that it is impossible to separate it out as a thing in itself having an independent effect." The book, 26 chapters in length, is divided into 5 parts. There is a foreword by C. G. Stoll of Western Electric and a preface by Elton Mayo. 34 tables and 48 figures assist the reader in visualizing details. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The most ubiquitous method of performance appraisal is rating. Ratings, however, have been shown to be prone to various types of systematic and random error. Studies relating to performance rating are reviewed under the following headings: roles, context, vehicle, process, and results. In general, cognitive characteristics of raters seem to hold the most promise for increased understanding of the rating process. A process model of performance rating is derived from the literature. Research in the areas of implicit personality theory and variance partitioning is combined with the process model to suggest a unified approach to understanding performance judgments in applied settings. (6 p ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Cognitive complexity, defined as the degree to which a person possesses the ability to perceive behavior in a multidimensional manner, is argued to distinguish between raters for whom Behavioral Expectation Scales (BES) would be effective and for whom they would not. With a sample of 60 manufacturing workers, it was found that cognitively complex raters (CRs) (a) were more confident (p < .001) with the BES format, as opposed to a simpler format; (b) preferred it in use (p < .025); and (c) exhibited ratings with significantly less leniency and restriction of range error when using it. CRs had less halo error than simple raters (SRs) with both formats, however. SRs preferred the simple format, as predicted. The hypothesized importance of compatibility between rater cognitive structure and cognitive demands made by appraisal formats was thus confirmed. Results are discussed in the context of past BES research, and suggestions for future appraisal research are made. (38 ref) (PsycINFO Database Record (c) 2006 APA, all rights reserved).
Article
Behavioral examples of how military units express varying degrees of morale were provided by US military personnel in the US and in 2 foreign locations. From these examples, behaviorally anchored rating scales were developed for 8 dimensions of group morale. They were used to rate morale of 47 platoon-sized units in the US Army stationed in a foreign location. Although errors of leniency and restriction of range did not seem severe, the ratings did show indications of halo error and only low to moderate interrater reliability. Despite these psychometric deficiencies, correlations with ratings of unit effectiveness and self-reports of unit members provided some evidence for convergent validity. Military units rated high on the morale scales were also rated high on overall effectiveness and low on frequency of low-morale activities like dissent, drug abuse, and destruction/sabotage. Members of units rated high on some of the morale scales were more likely to report high morale and intentions of reenlisting. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
"Check lists for use in evaluating task performance in several related naval job specialities (ratings) were shown to meet the Thurstone and Guttman scalability requirements. The Scaled Technical Proficiency Check Lists evaluate the status of a technician with reference to tasks normally performed by men of equivalent pay grade and rating. The lists contain only a relatively small number of items, so that they are simple and convenient to use. Yet, because the tasks included form a scale, the score obtained from them can be generalized in meaning to the 'universe' of tasks of which they are representative." From Psyc Abstracts 36:04:4LD37S. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The practice of psychological testing has advanced with great rapidity within recent years. The early crude methods are being replaced by scientific procedures, and the early naive views in regard to the test-aptitude relation and the possibilities of tests are giving way before more adequate theories and more sober expectations. In a word, aptitude testing, like medicine and engineering, is ceasing to be a job for amateurs and is becoming the work of technically trained professionals. It has been the purpose of the author to include within a convenient space two of the essentials of the training for aptitude work: (1) an account of the fundamental principles of aptitude testing and (2) an intelligible description of the most effective and the most economical methods of constructing batteries of aptitude tests. Specifically the book is designed as a text for university and college classes in aptitude testing and as a general handbook for those engaged in aptitude work of all kinds, whether in the form of vocational guidance, general personnel work, or employment selection. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
An investigation of the distributional characteristics of 440 large-sample achievement and psychometric measures found all to be significantly nonnormal at the alpha .01 significance level. Several classes of contamination were found, including tail weights from the uniform to the double exponential, exponential-level asymmetry, severe digit preferences, multimodalities, and modes external to the mean/median interval. Thus, the underlying tenets of normality-assuming statistics appear fallacious for these commonly used types of data. However, findings here also fail to support the types of distributions used in most prior robustness research suggesting the failure of such statistics under nonnormal conditions. A reevaluation of the statistical robustness literature appears appropriate in light of these findings. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Describes the development of a method for estimating the standard deviation of job performance in dollars that might permit wider application of utility analysis to personnel activities. The method builds on traditional industrial psychological principles of job analysis and performance measurement, and it allows translation of behaviorally based performance rating data into economic terms. In a field study of 602 1st-level managers, promoted either via a panel selection/interview process or via an assessment center, the method was shown to be feasible, practical, and simple to use. Comparative utility analysis indicated a significant payoff in terms of improved performance per person per year for the assessment center. (26 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
In deregulated industries former monopolies often adopt asymmetric behaviors: these firms impede the entry of foreign competitors in their home market, especially using defensive political strategies, and, at the same time, aggressively develop international strategies in foreign markets. To account for this behavior, I develop a game theoretic model involving three players: the former monopoly, its home government, and the host government of the country into which the firm wants to enter. I show first that there are in fact different asymmetric strategies that former monopolies can use in such a setting, and that a global strategy cannot always be implemented by those firms because of cooperation issues between the two governments. I also study the conditions under which these issues can be solved and show that this can happen only when the firm develops a political strategy that integrates both defensive and offensive activities. Overall, this paper therefore argues that asymmetric strategies are not always adopted to maintain monopoly rents but are also dictated by the nature of the international relationships between the governments involved. Copyright © 2003 John Wiley & Sons, Ltd.