Journal of Business and Psychology (J BUS PSYCHOL)

Publisher: Business Psychology Research Institute (Mendota, Minn.), Springer-Verlag

Journal description

Journal of Business and Psychology publishes empirical research, case studies, and literature reviews dealing with psychological concepts and services implemented in business settings. Written by psychologists, behavioral scientists, and organizational specialists employed in business, industry, and academia, articles deal with all aspects of psychology that apply to the business sector. Subjects include personnel selection and training; organizational assessment and development; risk management and loss control; and marketing and consumer behavior research.

Current impact factor: 1.25

Impact Factor Rankings

2015 Impact Factor: Available summer 2015
2009 Impact Factor: 0.444

Additional details

5-year impact: 1.32
Cited half-life: 7.90
Immediacy index: 0.44
Eigenfactor: 0.00
Article influence: 0.46
Website: Journal of Business and Psychology website
Other titles: Journal of business and psychology
ISSN: 0889-3268
OCLC: 13847167
Material type: Periodical, Internet resource
Document type: Journal / Magazine / Newspaper, Internet Resource

Publisher details

Springer-Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as arXiv.org
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    • green

Publications in this journal

  • ABSTRACT: The purpose of the study was to examine antecedents of interview performance commonly measured via two divergent methods: selection tests and evaluator assessments. General mental ability (GMA), emotional intelligence (EI), and extraversion have been largely studied in isolation. This study evaluates the relative strength of these traits across methods and tests whether selection test and evaluator-assessed traits interact to further enhance the prediction of interview performance. Eighty-one interviewees were asked to complete traditional selection tests of GMA, EI, and extraversion, as well as a video-recorded structured behavioral and situational job interview. The traits and the behavioral and situational interview performance were then evaluated by three independent sets of raters. Regression analysis was used to investigate the extent to which these traits predicted structured interview performance. Results indicate that each trait was a strong predictor of interview performance, but results differed based on the method of measurement and the type of structured interview assessed. Further, evaluator perceptions related to interview performance more strongly than did selection tests. Finally, evaluator assessments of each trait interacted with its respective selection test counterpart to further enhance the prediction of interview performance. This improves our understanding of how applicant traits impact hiring decisions. This is the first study to directly compare tested versus others’ ratings of interviewee GMA, EI, and extraversion as predictors of interview performance.
    Journal of Business and Psychology 09/2015; 30(3). DOI:10.1007/s10869-014-9381-6
  • Journal of Business and Psychology 08/2015; DOI:10.1007/s10869-015-9416-7
  • ABSTRACT: Purpose The purpose of this study was to investigate whether chronological age sparks negative expectancies, thus initiating a self-fulfilling prophecy in technology training interactions. Design/Methodology/Approach Data were obtained from undergraduate students (age ≤ 30) paired in 85 trainer–trainee dyads and examined through the actor–partner interdependence model. Trainer and trainee age (younger or older) were manipulated in this laboratory experiment by presenting pre-selected photographs coupled with voice-enhancing software. Findings As compared to younger trainees, ostensibly older trainees evoked negative expectancies when training for a technological task, which ultimately manifested in poorer training interactions and trainer evaluations of trainee performance. Implications Identifying a connection between chronological age and negative expectancies in technology training advances our theoretical understanding of sources contributing to older trainees’ poorer performance in workforce training programs. This study provides evidence of a negative relationship between trainees’ chronological age and trainers’ expectations for trainee success and subsequent training evaluations. Such knowledge offers initial support for a “train-the-trainer” intervention through educating trainers on the potential dangers of age-based stereotypes, which could help to reduce age-based performance discrepancies. Originality/Value This is the first study to manipulate age during training, thus isolating the influence of age-based stereotypes on training experiences. Given that potential age-related performance decrements in capability and motivation can be eliminated as explanations, this evidence of poorer interactions and outcomes for older workers is critical.
    Journal of Business and Psychology 02/2015; DOI:10.1007/s10869-014-9390-5
  • ABSTRACT: Purpose An item-sort task is a common method to reduce over-representative item lists during the scale-creation process. The current article delineates the limitations and misapplications of the accepted statistical significance formula for item-sort tasks and proposes a new statistical significance formula with greater utility across a wider range of item-sort tasks. Design First, a simulation study compares the two formulas in an array of conditions that vary on sample size and number of assignment choices. Second, an empirical study compares the results of three separate item-sort tasks across the two formulas for statistical significance. Findings In the empirical study, the proposed formula produces more correct retention decisions than the existing formula across all three item-sort tasks. In the simulation study, the proposed formula is more appropriate than the existing formula under most conditions. The two formulas function identically in item-sort tasks with only two assignment choices. Implications Researchers could obtain erroneous results when misapplying the existing item-sort task statistical significance formula to cases with more than two assignment choices. The proposed formula corrects this limitation, ultimately providing accurate results more often than the existing formula. Applying the proposed formula could help future research and practice throughout the scale development process. Originality Despite widespread use, few attempts have been made to improve scale-creation pretest methods, particularly item-sort tasks. The current study demonstrates that even conventional statistical methods are susceptible to misuse and misapplication, and future research could benefit from the reexamination of other common methods.
    Journal of Business and Psychology 01/2015; DOI:10.1007/s10869-015-9404-y
  • ABSTRACT: Purpose Drawing from core self-evaluations (CSE) theory, we argue and demonstrate that disposition plays an important role in explaining the way job applicants respond to testing procedures in the selection process. We demonstrate that CSE predicts job candidate reapplication intentions, acceptance intentions, and recommendation intentions—even after controlling for test performance. Moreover, we show that CSE moderates the relationship between perceived fairness and applicant behavioral intentions. Design/Methodology/Approach Drawing from a sample of 194 applicants for the position of police officer, this research uses data at four different time periods to explain the impact that applicant CSE has on outcomes in a high-stakes (i.e., civil service) testing environment. Findings Our results indicate that behavioral intentions resulting from selection processes are attributable at least in part to applicant CSE and that self-serving attributions are not the only relevant driving factor. We also show that CSE influences the relationship between perceptions of fairness and behavioral intentions. Implications Theoretically, this manuscript explains why and shows how CSE is a driving force behind intention formation. This research provides practitioners with insight into the formation of applicant reactions and intentions, showing that important perceptions about the organization can be impacted by CSE. We also demonstrate that CSE impacts selection test performance. Originality/Value This is the first study to examine the impact of CSE on applicant responses related to the formation of organizationally relevant outcomes.
    Journal of Business and Psychology 01/2015; DOI:10.1007/s10869-015-9405-x
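The first abstract above (DOI:10.1007/s10869-014-9381-6) reports that evaluator-assessed traits interacted with their selection-test counterparts in predicting interview performance. The article's own analyses are not reproduced here; the snippet below is only a generic sketch of how such a trait-by-method interaction is commonly tested with moderated regression, using hypothetical variable names (gma_test, gma_rated, interview_perf) and simulated data rather than the study's measures.

```python
# Illustrative moderated regression: does an evaluator-rated trait
# moderate the relationship between a tested trait and interview
# performance?  All data below are simulated for demonstration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 81  # sample size mentioned in the abstract

gma_test = rng.normal(size=n)   # hypothetical selection-test score
gma_rated = rng.normal(size=n)  # hypothetical evaluator rating
interview_perf = (
    0.3 * gma_test + 0.5 * gma_rated + 0.2 * gma_test * gma_rated
    + rng.normal(size=n)
)

df = pd.DataFrame(
    {"gma_test": gma_test, "gma_rated": gma_rated, "interview_perf": interview_perf}
)

# "gma_test * gma_rated" expands to both main effects plus their
# interaction; a significant interaction coefficient is the usual
# evidence that the two measurement methods jointly enhance prediction.
model = smf.ols("interview_perf ~ gma_test * gma_rated", data=df).fit()
print(model.summary())
```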
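The item-sort abstract above (DOI:10.1007/s10869-015-9404-y) does not reproduce either significance formula, so none is restated here. To make the underlying idea concrete, the sketch below shows one generic way an item-sort retention decision can be framed: under the null hypothesis that each judge assigns an item to one of k constructs at random, the count of "correct" assignments follows a binomial distribution with chance probability 1/k. The function name, the 1/k chance assumption, and the exact binomial tail are illustrative choices, not the article's proposed formula.

```python
from math import comb

def item_sort_p_value(n_judges: int, n_correct: int, n_choices: int) -> float:
    """One-sided exact binomial p-value for an item-sort retention decision.

    Null hypothesis: each of n_judges assigns the item to one of n_choices
    constructs at random, so the chance probability of an assignment to the
    intended construct is 1 / n_choices.  Returns P(X >= n_correct) for
    X ~ Binomial(n_judges, 1 / n_choices).
    """
    p0 = 1.0 / n_choices
    return sum(
        comb(n_judges, x) * p0**x * (1.0 - p0) ** (n_judges - x)
        for x in range(n_correct, n_judges + 1)
    )

# Example: 20 judges, 14 of whom place the item on its intended construct.
# The same count is far more convincing when there are more assignment
# choices, which echoes the abstract's point that results depend on the
# number of assignment choices.
print(item_sort_p_value(20, 14, 2))  # ~0.058 against chance = .50
print(item_sort_p_value(20, 14, 4))  # ~3e-05 against chance = .25
```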