Employee Survey Research: A Critical Review of Theory and Practice
Lewis Garrad
Mercer | Sirota
Patrick K. Hyland
Mercer | Sirota
Author Note
Lewis Garrad, Mercer|Sirota, Singapore;
Patrick K. Hyland, Mercer|Sirota, Purchase, New York.
Authors contributed equally; names are listed alphabetically.
Abstract
Surveys have become a mainstay of organizational research. When surveys are efficient, targeted, theory-driven, and predictive of useful outcomes, they provide a valuable data set for decision making. Yet many organizational researchers and HR practitioners overlook the dark side of studying employees at work, especially when it comes to challenging ontological and epistemological assumptions. Gathering, analyzing, and using employee survey data is often fraught with complexity that limits action and change. This chapter outlines how the emergence and establishment of employee survey research led to modern assumptions about what we seek to learn from people at work, and how those assumptions both help and hinder progress toward improving organizational performance and employee wellbeing. We then make recommendations for the future development of survey research in the broader context of social and technological development.
Keywords: Survey research; research methods; employee feedback; data analysis
Introduction
Employee survey research has become common practice in modern organizations. Two
decades ago, researchers found that over 70% of US organizations surveyed their employees on
an annual or bi-annual basis (Paul & Bracken, 1995). More recently, researchers at The
Engagement Institute (Ray, Hyland, Dye, Kaplan & Pressman, 2013) found that over 80% of
global organizations survey their employees on a regular basis, with an increasing number
administering regular pulse surveys throughout the year.
While employee surveys are popular, this methodology has its limitations. Many of the
fundamentals of employee survey research were established in the early part of the 20th century
(Jacoby, 1988). But a lot has changed in the world of work, the field of psychology, and the
practice of management over the past 100 years. By evaluating the extent to which the discipline
has kept pace with new developments and discoveries, we can identify methodological weak
spots and opportunities for improvement and progress.
In this chapter, we conduct a critical review of employee survey research theory and
practice. We start by exploring the theoretical foundations of employee survey research in
context of new perspectives about work, organizational change, and workplace dynamics. Then
we highlight common methodological and analytical mistakes that researchers in the field often
make. Finally, we provide recommendations for improving the way employee research is
conducted in today’s business environment.
The Theoretical Foundations of Employee Survey Research
The origins of employee survey research can be traced back to the 1920s, when
organizations started using questionnaires to assess worker morale and improve employee
relations (Jacoby, 1988). Following the development of the Thurstone procedure for attitude
assessment (Thurstone & Chave, 1929), an increasing number of companies started conducting
employee surveys (Schneider, Ashworth, Higgs, and Carr, 1996). In the 1940s, Rensis Likert
and Kurt Lewin each founded research centers that advanced and refined organizational survey
research methods (Burke, 2002). In the decades that followed, field work by researchers like
Floyd Mann (1957), David Nadler (1977), and Ben Schneider (e.g., Schneider, Parkington, and
Buxton, 1980) helped establish employee survey research as a core practice in the field of
organization development and industrial-organizational psychology.
Today, surveys are typically used to gather employee feedback about a range of topics
(e.g., employee engagement, job satisfaction, leader effectiveness, team collaboration, and
workplace efficiency). While the content and structure of employee surveys often vary across
organizations, there are three main activities that characterize all well-designed organizational
survey projects.
Careful Measurement. Employee surveys are designed with a specific goal in mind: to
measure personal perceptions in an objective way. In their primer on survey research,
Church and Waclawski (2017) highlight the importance of measurement, defining
employee surveys as “a systematic process of data collection designed to quantitatively
measure specific aspects of organizational members’ experience as they relate to work”
(p. 3). Because accurate measurement is the sine qua non for any employee survey,
survey researchers and practitioners place heavy emphasis on developing and utilizing
well-designed items and scales, valid and reliable indices, and representative and
generalizable research samples.
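To make this measurement emphasis concrete, the short Python sketch below estimates Cronbach's alpha, a common index of internal-consistency reliability for a multi-item scale. It is purely illustrative: the item names and response values are hypothetical, and in practice a full psychometric workflow would go well beyond this minimal calculation.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Estimate internal-consistency reliability for a set of survey items.

    Rows are respondents; columns are items scored on the same Likert scale.
    """
    items = items.dropna()                      # listwise deletion, for simplicity
    k = items.shape[1]                          # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: three engagement items rated on a 1-5 scale
responses = pd.DataFrame({
    "eng_1": [4, 5, 3, 4, 2, 5],
    "eng_2": [4, 4, 3, 5, 2, 5],
    "eng_3": [3, 5, 2, 4, 1, 4],
})
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")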
Robust Data Analysis. After survey data have been collected, they are subjected to robust statistical analysis. Using various techniques, researchers seek to generate insight from survey responses. For example, descriptive statistics are often computed to gain a basic understanding of the data. Comparative measures, based on internal norms, between-group difference tests, longitudinal results, or external benchmarks, are used to identify high and low scores and positive and negative trends. If open-ended questions are included in the survey, content coding techniques are often used to convert written comments into quantifiable themes. Correlational, regression, and relative weights analyses, along with structural equation modeling, are often conducted to identify relationships between variables and determine key drivers of important outcomes. On
occasion, linkage analysis is conducted to explore the relationship between employee
attitudes and other organizational metrics like customer satisfaction, store sales, or
product defects.
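As a rough illustration of how such analyses might look in practice, the following Python sketch computes descriptive statistics and a simple regression-based key driver analysis on a hypothetical single-survey data set. The file name and column names are assumptions, and relative weights analysis or structural equation modeling would require additional tooling.

import pandas as pd
import statsmodels.api as sm

# Hypothetical single-survey extract: one row per respondent, item means on a 1-5 scale
df = pd.read_csv("survey_responses.csv")   # assumed columns: engagement, career_path,
                                           # manager_support, teamwork, workload

# Descriptive statistics: means, standard deviations, and so on
print(df.describe())

# Simple key driver analysis: ordinary least squares on standardized scores
predictors = ["career_path", "manager_support", "teamwork", "workload"]
X = sm.add_constant((df[predictors] - df[predictors].mean()) / df[predictors].std())
y = (df["engagement"] - df["engagement"].mean()) / df["engagement"].std()
model = sm.OLS(y, X).fit()

# Larger standardized coefficients suggest stronger "drivers" of the outcome
print(model.params.drop("const").sort_values(ascending=False))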
Collective Feedback & Action Planning. After data analysis is conducted, survey results
are often shared broadly with organizational members. Influenced by Lewin’s seminal
thinking about change (i.e., data can unfreeze groups and create change) and action
research (i.e., stakeholders should be involved in any research effort focused on creating
collective change), Likert and Mann emphasized the importance of involving employees,
managers, and leaders in the process of interpreting results, identifying areas for change,
and developing plans for improvement. Based on his field experience, Mann (1957)
noticed that employees tended to become frustrated and cynical when managers did not
review results and involve them in the action planning process. Decades later, Nadler observed that surveys create expectations for action, noting that "the mere act of data collection generates energy around those activities that are being measured" and "creates expectations that the data collection will have consequences" (1996, p. 180).
All research paradigms are based on a set of philosophical assumptions about the nature
of reality (i.e., ontology), the process of knowledge-building (i.e., epistemology), and the essence
of what is right and wrong (i.e., axiology) (Patton, 2002). These philosophical assumptions
influence the kinds of questions that researchers raise, the methods they use to study those
questions, and the conclusions they reach (Chilisa & Kawulich, 2012).
Inherent in the three main survey activities described above is a set of basic assumptions
about the nature of organizations, the correct way to do research, and the appropriate way to treat
people in work settings. From an ontological perspective, the logic of employee survey research
is influenced by both general systems theory (von Bertalanffy, 1950) and the field of cybernetics
(Wiener, 1948). As a result, survey researchers tend to view organizations as open systems and
place great importance on feedback as a source of organizational self-regulation and survival
(Katz & Kahn, 1978). Organizational change is assumed to be a teleological, episodic, and time
bound process, progressing in a linear, rational, goal-oriented way (Van de Ven & Poole, 1995).
From an epistemological standpoint, the practice of employee survey research was greatly
shaped by the scientific thinking of the first half of the 20th century, which emphasized
reductionism, determinism, and equilibrium as core principles (Hayles, 1991). Influenced by the
philosophical perspectives of positivism, functionalism, and social determinism, survey
researchers seek to make sense of workplace phenomena by using nomothetic methods—often
focused on contextual factors—to discover underlying patterns of behavior, test hypotheses,
identify causal relationships, determine universal truths, and produce pragmatic solutions
(Burrell and Morgan, 1979). In terms of axiology, Lewin, Likert, Mann and other early
organization development practitioners emphasized that employee survey research should be
conducted in a way that promotes humanistic values, participative processes, and democratic
ideals (Burke, 1997).
Employee survey research is clearly founded on reasoned philosophical principles. But
new theory and research highlight potential limitations, liabilities, and blind spots associated
with this methodology. In the decades following the establishment of the employee survey field,
new scientific perspectives—based on different philosophical assumptions—have been
advanced. For example, interpretivism—which emerged as a critique of the positivist paradigm
in the mid-twentieth century—contends that idiographic techniques and qualitative methods are
the best way to understand and interpret human behavior (Neuman, 2000; Hudson & Ozanne,
1988). Complexity theory is causing scientific communities to shift their focus from
reductionism, predictability, and linearity to holism, uncertainty, and nonlinearity (Grobman,
2005). Complex adaptive systems theory, an outgrowth of complexity theory, is leading some
organizational researchers and practitioners to question whether open systems theory is sophisticated enough to account for chaotic, disruptive, and emergent organizational dynamics. Finally, a
growing body of cross-cultural research (for example, see Dickson, Den Hartog, and Mitchelson,
2003) calls into question whether Western norms and ideals—like participative leadership and
democratic decision making—are valued by non-Western employees.
More than 50 years ago, Abraham Kaplan (1964) cautioned researchers to beware of the
law of the instrument—the tendency to design research studies based on one’s methodological
expertise. "It comes as no particular surprise,” he said, “to discover that a scientist formulates
problems in a way which requires for their solution just those techniques in which he himself is
especially skilled" (p. 28). For organizational researchers, this means it is critical to carefully
consider the limitations of employee surveys, particularly in light of new paradigms, emerging
frameworks, and global findings like the ones discussed above.
Assuming that surveys are the only effective way to investigate workplace research
questions and initiate organizational change is a mistake. To guard against over-utilizing
employee surveys, researchers should think carefully about the best way to explore various
organizational phenomena and consider a variety of options. Recently, Edmondson & McManus
(2007) introduced a contingency framework for conducting field research that provides a helpful
set of recommendations. They argue that the best way to produce high quality field research is to
create a fit between research methodology and level of knowledge. When a body of knowledge
or a theory is new and research support is limited, they recommend using qualitative research
techniques (e.g., interviews, observations, ethnographies) and pattern analysis to generate new
insights in an inductive way. When a body of knowledge is growing and developing, having
received mixed support, they recommend deploying both qualitative and quantitative research
methods and exploratory analyses to expand knowledge. And when a body of knowledge or
theory is well established, they recommend using quantitative research techniques, like surveys,
and conducting formal hypothesis testing in a confirmatory way. By utilizing a framework like
this, organizational researchers can ensure they are deploying employee surveys in a judicious
way.
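As a loose illustration of this contingency logic, the sketch below encodes our paraphrase of the framework as a simple lookup. The category labels and method lists are our summary of Edmondson and McManus's (2007) argument, not the authors' exact wording.

def recommend_methods(theory_maturity: str) -> dict:
    """Rough paraphrase of the methodological-fit logic: match methods to theory maturity."""
    framework = {
        "nascent": {
            "methods": ["interviews", "observation", "ethnography"],
            "analysis": "inductive pattern analysis to generate new insights",
        },
        "intermediate": {
            "methods": ["qualitative and quantitative methods, including surveys"],
            "analysis": "exploratory analysis of proposed relationships",
        },
        "mature": {
            "methods": ["surveys and other quantitative measures"],
            "analysis": "confirmatory hypothesis testing",
        },
    }
    return framework[theory_maturity]

print(recommend_methods("nascent"))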
From Theory to Practice: Common Methodological and Analytical Mistakes
Even when employee surveys are the best available method for investigating workplace
questions, methodological and analytical errors can undermine the accuracy, validity, and utility
of results. In this section, we highlight three common methodological concerns that employee
survey researchers often overlook, along with three common analytical mistakes they often
make.
While survey strategies vary across organizations, there are some normative practices.
Surveys are often administered to a population of employees (e.g., either a sample or a census) at
a given point in time (e.g., annually, quarterly, monthly, daily, ad hoc), with a series of questions
asking about work-related attitudes, feelings, behaviors, perceptions, and preferences. Employee
surveys usually include items that measure proposed predictor and criterion variables. Once data
collection is complete, researchers typically conduct various analyses (e.g., correlational
analysis, key driver analysis, structural equation modeling) to determine if empirical
relationships exist.
From a methodological perspective, survey researchers often ignore three potentially
confounding factors when they follow these practices.
Overlooking the limitations of cross-sectional design. Many researchers use a cross-
sectional design when conducting organizational surveys. Cross-sectional surveys are popular because they are easy to administer: a researcher gets a snapshot of workforce attitudes at a particular point in time by administering a survey (often anonymously) to current employees. Assuming a representative sample completes the
survey, cross-sectional design allows researchers to understand prevalence rates and
explore associations between variables. To understand the extent to which attitudes
change over time, researchers who use this design usually compare survey results across
a number of administrations and calculate trend scores, either at the overall organization,
business unit, or team level.
While this approach can provide a directional indication of change-over-time trends, it
can also lead to inaccurate conclusions, particularly if the employee population changes
between survey administrations. For example, various researchers and practitioners
have found that a honeymoon effect exists at work: new hires often have very favorable
attitudes when they first join an organization (e.g., Johnson, 2018). If a number of new
employees join an organization during the course of a year, their positive attitudes could
impact overall organizational survey results, leading researchers to conclude that
workforce attitudes are improving (when in fact the honeymoon effect is elevating
results). The only way to arrive at accurate change-over-time assessments and causal
conclusions is to use a longitudinal survey design. This requires a researcher to track
attitudes at the individual level, and make comparisons based only on employees who
complete all surveys over a given time period. Fortunately, new technology is making
this easier to do. Using HRIS data and web-based surveys, an increasing number of
organizations are now conducting identified, confidential surveys. But even when
organizations do conduct identified surveys, many researchers continue to compare total
organization, unit, or team results across time, failing to consider or explore how hiring,
turnover, or other changes in the composition of the workforce may impact trend results.
This can lead to inaccurate findings and faulty conclusions. When exploring survey
trends, researchers should either limit their analysis to a within-person longitudinal
design, or at least ensure that cross-sectional comparisons are based on comparable
organizational samples (e.g., similar size, response rate, percentage of new hires,
demographic composition, etc.) and highlight possible interpretive risks.
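The following Python sketch illustrates the difference between a naive cross-sectional trend and a within-person trend computed only on employees who responded to both administrations. It is a minimal sketch: the file and column names are hypothetical, and it assumes identified (non-anonymous) survey records.

import pandas as pd

# Hypothetical identified survey extracts for two administrations
wave1 = pd.read_csv("survey_2023.csv")   # assumed columns: employee_id, engagement
wave2 = pd.read_csv("survey_2024.csv")

# Naive cross-sectional trend: compares whoever responded each year, so new-hire
# "honeymoon" scores can inflate the apparent improvement
naive_trend = wave2["engagement"].mean() - wave1["engagement"].mean()

# Within-person trend: restrict the comparison to employees who completed both surveys
matched = wave1.merge(wave2, on="employee_id", suffixes=("_t1", "_t2"))
within_person_trend = (matched["engagement_t2"] - matched["engagement_t1"]).mean()

print(f"Naive cross-sectional trend: {naive_trend:+.2f}")
print(f"Within-person trend (n={len(matched)}): {within_person_trend:+.2f}")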
Ignoring common method bias & the necessary conditions for claiming causality. Many
organizational surveys seek to determine both how employees feel about their work
experience and why they feel the way they do. Researchers often take data collected
from a single survey and conduct correlational and key driver analysis to evaluate the
extent to which survey items and dimensions are related with each other. When
empirical relationships are discovered, they are often presented as being part of a
potential causal chain (e.g., key driver results suggest that if we clarify career paths,
employees are more likely to be motivated). There are two methodological problems
with this approach. First, it does not take into account the potential systematic
measurement error that could be caused by method variance, which is the “variance that
is attributable to the measurement method rather than to the construct of interest”
(Bagozzi and Yi, 1991, p. 426). This means that statistical relationships discovered
between items or dimensions measured by a single survey may not be valid or accurate.
As Podsakoff and colleagues note in their seminal article on common method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003), method variance can inflate or deflate the observed covariation between attitudinal constructs, causing researchers to erroneously understate or overstate the size of the relationship between variables. Second,
concluding that causal relationships exist between survey items or dimensions measured
at a single point in time is bad practice. While any good survey researcher or
practitioner knows that correlation does not imply causation, some leaders,
managers, and employees may not.
Fortunately, there is a straightforward way to correct for both problems. Creating a
temporal separation between measuring predictor variables and criterion variables is one
way researchers can reduce method variance (Podsakoff et al., 2003). It also allows
researchers to establish time precedence, one of the necessary conditions for
demonstrating causality. Many organizations are currently administering regular pulse
surveys (e.g., weekly, monthly, or quarterly). By conducting statistical analyses across
surveys, rather than within surveys, researchers can control for common method bias
and test for causal relationships.
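As an illustration of this cross-survey logic, the sketch below regresses an outcome measured in one pulse survey on a predictor measured in an earlier one, using a hypothetical identified data set. The file and variable names are assumptions, and temporal separation alone does not establish causality.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical identified pulse surveys: predictor measured in Q1, outcome in Q2
q1 = pd.read_csv("pulse_q1.csv")   # assumed columns: employee_id, career_clarity
q2 = pd.read_csv("pulse_q2.csv")   # assumed columns: employee_id, motivation

lagged = q1.merge(q2, on="employee_id")

# Predictor and criterion come from separate administrations, creating the temporal
# separation Podsakoff et al. (2003) recommend and establishing time precedence
# (necessary, though not sufficient, for any causal claim)
model = smf.ols("motivation ~ career_clarity", data=lagged).fit()
print(model.summary().tables[1])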
Surveying only the evaluating and remembering self. Typically, organizational surveys
ask employees to make global evaluations about work based on retrospective reflections.
Employees are often presented with a statement (e.g., “My job makes good use of my
skills and abilities”) and then are instructed to use a Likert scale to indicate the extent to
which they agree with the statement. Sometimes survey items explicitly direct
employees to reflect on events or experiences that have happened in the past (e.g., “Over
the past month, how often has your manager provided you with helpful feedback?”).
Wellbeing research by Daniel Kahneman and his colleagues suggests that these types of
survey items only assess part of the phenomenon of work. Through a series of studies,
they discovered that people's moment-by-moment assessments of events can be quite different from their after-the-fact, reflective evaluations of those same events. For example, Wirtz, Kruger, Scollon, and Diener (2003) found that respondents' experience of their vacation, assessed using day-by-day event-sampling surveys, was notably different from their evaluation of the vacation, assessed by a series of post-vacation survey items.
Based on findings like these, Kahneman & Riis (2005) emphasize the importance of
measuring both moment-by-moment experiences and retrospective evaluations.
“Evaluation and memory are important on their own,” they say, “because they play a
significant role in decisions, and because people care deeply about the narrative of their
life. On the other hand, an exclusive focus on retrospective evaluations is untenable if
these evaluations do not accurately reflect the quality of actual experience” (p. 289).
For organizational researchers seeking to develop a comprehensive understanding of their
work environment, this line of research has important implications. Retrospective and evaluative
assessments of critical workplace events (e.g., starting a new job; receiving a promotion;
working on a team; going through an organizational change) may not provide a fully accurate
understanding of the employee experience. By using experience sampling techniques like the Day Reconstruction Method (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) or daily
diaries, researchers may be able to generate new insights about a range of important topics.
According to Fisher and To (2012), these methodologies are particularly well suited for
“studying dynamic within-person processes involving affect, behavior, interpersonal interactions,
work events, and other transient workplace phenomena” (p. 865).
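A minimal sketch of how momentary and retrospective measures might be compared is shown below; the diary and retrospective files, their column names, and the rating scales are all hypothetical.

import pandas as pd

# Hypothetical diary data: one momentary rating per employee per workday episode,
# plus a single end-of-month retrospective rating per employee
diary = pd.read_csv("daily_diary.csv")            # assumed: employee_id, date, momentary_affect
retro = pd.read_csv("monthly_retrospective.csv")  # assumed: employee_id, remembered_affect

# Average each person's moment-by-moment ratings over the month
experienced = (
    diary.groupby("employee_id")["momentary_affect"]
    .mean()
    .rename("experienced_affect")
    .reset_index()
)

merged = retro.merge(experienced, on="employee_id")

# If the remembering self and the experiencing self diverge (Kahneman & Riis, 2005),
# the correlation will be modest and mean levels may differ
print(merged[["remembered_affect", "experienced_affect"]].corr())
print(merged[["remembered_affect", "experienced_affect"]].mean())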
From an analytical perspective, survey researchers often make three mistakes when they
analyze and interpret survey data.
Mistake 1: Assuming more is better. Many survey practitioners and researchers assume that positive attitudes are good, and that the more positive they are, the better. The positive psychology movement has been influential in advancing the idea that positive emotions and attitudes result in better outcomes (Peterson, Park, & Sweeney, 2008), citing evidence that people with positive mindsets often experience greater creativity and better wellbeing (e.g., Fredrickson, 2001; Tugade, Fredrickson, & Barrett, 2004). But there are three potential drawbacks that come with consistently positive survey results.
1. Strongly favorable attitudes may prevent organizational leaders from
engaging in the critical process of reflection and change. Organizational
change experts like Lewin and Kotter have long emphasized the importance of
using disruptive data to unfreeze organizations and create an impetus for
change (Cummings, Bridgman, & Brown, 2016). If employees are
satisfied with the way things are, organizations may struggle to break out of
the status quo.
2. Research suggests that employees in some roles may actually benefit from a
sense of dissatisfaction at work. For example, studies have found that creative
people are more likely to experience negative affect (Akinola & Mendes, 2008) and to engage in antisocial behavior (Gino & Ariely, 2012), which makes engaging them more of a challenge. For these people, it is often their frustration that drives their action (Norem & Cantor, 1986).
3. It is also possible that some attitudes and states are better measured via an ideal-point approach rather than a dominance approach, which intrinsically assumes that stronger endorsement of an item predicts better outcomes. For example, research shows that people who are overly optimistic about their current performance tend to mobilize less energy and therefore prepare less effectively for the future (Kappes, Oettingen, Mayer, & Maglio, 2011). Research also shows that highly engaged employees suffer more work-family interference (Halbesleben, Harvey, & Bolino, 2009). The implication is that some attitudes and employee states may actually be more beneficial if they fluctuate (as noted by George, 2010) rather than being maintained at a consistently high level.
Taken together, these concerns mirror patterns often seen with other psychological variables, where curvilinear relationships are common. The "too much of a good thing" effect has been found for everything from intelligence (Antonakis, House, & Simonton, 2017) to personality (Le, Oh, Robbins, Ilies, Holland, & Westrick, 2011). More is not always better, and this is an important point that survey researchers and practitioners should keep in mind as they interpret survey data and share results with organizational members.
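One simple way to check for such curvilinear patterns is to add a quadratic term to a regression model, as in the hypothetical Python sketch below. The data file and variable names are assumptions, and a significant negative coefficient on the squared term would be suggestive rather than conclusive.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical linked data: survey-measured engagement plus a later performance rating
df = pd.read_csv("engagement_performance.csv")   # assumed columns: engagement, performance

# Center the predictor, then test a quadratic term; a negative coefficient on the
# squared term is consistent with a "too much of a good thing" pattern
df["engagement_c"] = df["engagement"] - df["engagement"].mean()
model = smf.ols("performance ~ engagement_c + I(engagement_c ** 2)", data=df).fit()
print(model.params)
print(model.pvalues)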
Mistake 2: Overemphasizing contextual factors. Most employee surveys focus on
measuring climate factors and shared employee experiences. This is because
researchers primarily seek to understand how contextual and situational factors—like
teamwork and leader effectiveness—impact organizational outcomes like engagement
or retention. While this is not an illogical strategy per se, this focus has led to a lack
of practical integration between the psychology of individual differences and
perceptions of workplace experience.
This is important because recent research suggests that individual factors like
personality play a significant role in how people think and feel about their work. For example, a meta-analysis published in 2018 estimated that around 50% of the variance in an employee's work engagement (as measured by the Utrecht Work Engagement Scale; UWES) is predicted by personality and character variables (Young, Glerum, Wang, & Joseph, 2018). Given the relative stability of personality in
most people during their work life (Roberts & DelVecchio, 2000), this finding
implies that variables like engagement are heavily influenced by the nature of the
people selected into the organization. It also helps to underscore the importance of
understanding the interactions between individual and situational factors. For
example, the way people construct their views about work can be quite personal, and research into how employees experience meaningfulness in their jobs shows significant individual variance, even among people doing the same work (Bailey & Madden, 2015). Actions that focus only on situational factors are therefore likely to have
limited impact.
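As an illustrative way to gauge the relative contribution of individual and situational factors, the sketch below compares the variance explained by personality measures alone with the variance explained once climate ratings are added. The data set and variable names are hypothetical, and this hierarchical-regression approach is only one of several possible designs.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged data set: engagement scores plus personality scores (e.g., from a
# selection assessment) and climate ratings for the same employees
df = pd.read_csv("engagement_with_personality.csv")
# assumed columns: engagement, conscientiousness, positive_affectivity,
#                  manager_support, team_climate

# Step 1: individual-difference variables only
m1 = smf.ols("engagement ~ conscientiousness + positive_affectivity", data=df).fit()

# Step 2: add situational (climate) variables
m2 = smf.ols(
    "engagement ~ conscientiousness + positive_affectivity + manager_support + team_climate",
    data=df,
).fit()

# The increment in R-squared indicates how much the situational measures add beyond
# stable individual differences (and vice versa if the entry order is reversed)
print(f"R2, individual factors only: {m1.rsquared:.2f}")
print(f"R2, with climate added: {m2.rsquared:.2f} (delta = {m2.rsquared - m1.rsquared:.2f})")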
Mistake 3: Assuming survey results will lead to change. Publicly available data on
employee attitudes rarely show much change. For example, Harter (2018) reported
that the percentage of engaged employees “has ranged from a low of 26% in 2000
and 2005 to the recent six-month high of 34% in 2018. On average, 30% of
employees have been engaged at work during the past 18 years." With a variation of only four percentage points around that average, it is difficult to argue that survey programs focused on engagement actually change anything. Why do people seem so stubborn in their opinions about work?
A principal assumption of the employee survey process is that leaders have the
capacity and willingness to use employee feedback to create change. Yet this
assumption has not always held true, for several reasons. First, if leadership style is a function of leader personality (Judge, Bono, Ilies, & Gerhardt, 2002), and employee experiences are related to leadership style (Martin, Guillaume, Thomas, Lee, & Epitropaki, 2016), then the stability of leader personality limits behavioral change unless leaders apply sustained effort and learning. Second, individual employee
personality also plays a significant role in determining how people respond to
surveys, reducing the impact a leader can have on the situational factors that drive the
feedback.
Third, as leaders have become more aware of the role that broader systemic and cultural issues play in determining organizational performance (e.g., Judge & Cable, 1997), survey researchers have expanded their questions to wider and more complex issues of organizational effectiveness. These elements of organizational functioning are hard to change and usually require significant coordinated activity across various teams and leaders simultaneously. The role of senior leadership has also been found to be critical, with studies demonstrating that factors like CEO personality play a role in culture development (O'Reilly, Caldwell, Chatman, & Doerr, 2014). The effort involved in achieving this type of coordinated shift is usually beyond the resources many organizations are willing to commit unless they are in dire need.
Combined, these issues mean that it can be difficult for any individual manager to effect change. The risk is that many then become disillusioned and frustrated with the employee survey process and/or become passive participants.
Looking Beyond Employee Surveys: Building Better Employee Research Programs
When employee surveys are carefully planned and rigorously designed, the data they
generate can produce powerful insights, foster meaningful change, enhance the employee
experience, and improve organizational performance. But, as we’ve argued in this chapter,
surveys have limitations. When researchers and practitioners misuse this methodology, or
assume it is the only way to learn from their employees, they run the risk of creating faulty
feedback loops, misguided conclusions, and organizational blind spots.
In recent years, various experts have argued that evidence-based human resources (e.g.,
Rousseau & Barends, 2011), advanced people analytics (e.g., Waber, 2013), organizational sense
making (Weick & Sutcliffe, 2015), and organizational learning (Hess, 2014) are critical in
today’s volatile and uncertain business environment. In light of these arguments, we think
organizations should develop a broad set of research capabilities that extend beyond traditional
employee surveys. For organizational researchers and practitioners seeking to build a best-in-
class employee research program, we recommend focusing on four practices.
Learn to use multiple methodologies. Surveys can be effective for measuring well-tested
theories. But so much of what organizations need to learn about in today’s fast-paced
world is nascent, emergent, nuanced, and complex. For some research questions,
qualitative methods may yield the best insights. Technological advances in artificial
intelligence, computer-assisted text analysis, and natural language processing are making
it easier to analyze extensive amounts of unstructured data in efficient and effective ways.
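As a simple illustration of what such text analysis might involve, the Python sketch below groups hypothetical open-ended survey comments into rough themes using TF-IDF features and k-means clustering. The file name, column name, and number of themes are assumptions, and production-grade natural language processing would go well beyond this.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical open-ended comments exported from a survey platform
comments = pd.read_csv("open_comments.csv")["comment_text"].dropna()

# Convert comments to TF-IDF vectors and group them into rough themes
vectorizer = TfidfVectorizer(stop_words="english", max_features=2000)
X = vectorizer.fit_transform(comments)
themes = KMeans(n_clusters=5, random_state=0, n_init=10).fit(X)

# Show the most characteristic terms for each theme
terms = vectorizer.get_feature_names_out()
for k, center in enumerate(themes.cluster_centers_):
    top_terms = [terms[i] for i in center.argsort()[-5:][::-1]]
    print(f"Theme {k}: {', '.join(top_terms)}")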
Assess phenomena at multiple levels. Fifteen years ago, Richard Hackman (2003) wrote
a compelling article about learning more by crossing levels. His main point was that
because most organizational dynamics happen at multiple levels, with individual, group-
level, and organizational-level causes and consequences, researchers can develop a better
understanding of workplace phenomena by conducting research at multiple levels. Using
a multi-level approach, researchers could explore team effectiveness, for example, by
assessing team member personalities, group level behavior, and organizational level
culture. Unfortunately, many researchers do not take this approach, instead focusing on
just one level of analysis. As a result, they often develop a limited understanding of
organizational dynamics.
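A minimal sketch of a multi-level analysis is shown below: a random-intercept model that treats individual survey responses as nested within teams. The data file, variable names, and choice of predictor are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical individual-level survey records with a team identifier attached
df = pd.read_csv("survey_with_teams.csv")   # assumed columns: engagement, team_id,
                                            # psychological_safety

# Random-intercept model: individual responses nested within teams
model = smf.mixedlm("engagement ~ psychological_safety", data=df, groups=df["team_id"]).fit()
print(model.summary())

# Intraclass correlation: share of engagement variance sitting at the team level
team_var = model.cov_re.iloc[0, 0]
residual_var = model.scale
print(f"ICC(1) = {team_var / (team_var + residual_var):.2f}")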
Integrate multiple data sets. Many organizations have fallen into the habit of relying on
single-survey correlational and/or key driver analysis to make significant organizational
decisions. As we noted earlier, this type of analysis is prone to common method bias.
These days, most organizations have robust effectiveness, performance, and productivity
metrics. By linking this data with attitudinal data, researchers and practitioners can
conduct analyses that are more sophisticated, more valid, and potentially more insightful
for their organizations.
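The sketch below illustrates a basic linkage analysis, merging hypothetical unit-level survey scores with unit-level business metrics and examining their correlations; the file and column names are assumptions.

import pandas as pd

# Hypothetical unit-level files: aggregated survey scores and business metrics
survey = pd.read_csv("unit_survey_scores.csv")      # assumed: unit_id, engagement, service_climate
metrics = pd.read_csv("unit_business_metrics.csv")  # assumed: unit_id, customer_sat, turnover_rate

linked = survey.merge(metrics, on="unit_id")

# Linking attitudes to operational outcomes avoids single-source, single-survey
# analysis and its exposure to common method bias
print(linked[["engagement", "service_climate", "customer_sat", "turnover_rate"]].corr())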
Engage multiple stakeholders. In our opinion, the best employee research programs
create insight, energy, and direction for organizational members. With that in mind, we
think that organizational researchers and practitioners should think carefully about how to
design research assessments, analyses, and feedback mechanisms that empower and
energize everyone from front-line employees to c-suite leaders. Is our research actually
motivating employees? Is it helping managers and leaders be more effective? Is it truly
improving the work environment and organizational effectiveness? These are the kinds
of questions we think researchers should ask themselves on a regular basis.
As we look to the future, we think that employee surveys should continue to play a critical role
in employee research programs. But we also think that the smartest organizations will develop
and deploy a comprehensive set of research tools and methodologies to understand their
workforce.
References
Akinola, M., & Mendes, W. B. (2008). The dark side of creativity: Biological vulnerability and negative emotions lead to greater artistic creativity. Personality and Social Psychology Bulletin, 34(12), 1677-1686. doi: 10.1177/0146167208323933
Antonakis, J., House, R. J., & Simonton, D. K. (2017). Can super smart leaders suffer from too much of a good thing? The curvilinear effect of intelligence on perceived leadership behavior. Journal of Applied Psychology, 102(7), 1003-1021. doi: 10.1037/apl0000221
Bagozzi, R. P., & Yi, Y. (1991). Multitrait–multimethod matrices in consumer research. Journal
of Consumer Research, 17, 426–439.
Bailey, C., & Madden, A. (2015). Time reclaimed: Temporality and the experience of meaningful work. Work, Employment and Society. doi: 10.1177/0950017015604100
Burke, W. W. (1997). The new agenda for organization development. Organization Dynamics,
25(1), 7-21.
Burke, W.W. (2002) Organizational Change: Theory and Practice. Sage, Thousand Oaks.
Burrell, G., & Morgan, G. (1979). Sociological Paradigms and Organizational Analysis.
Aldershot, UK: Gower.
Chilisa, B. & Kawulich, B. B. 2012. Selecting a research approach: paradigm, methodology and
methods. In Doing social research: a global context. C. Wagner, B. B. Kawulich & M.
Garner, Eds. London: McGraw Hill.
Church, A. H., & Waclawski, J. (2017). Designing and using organizational surveys: A seven
step approach. San Francisco, CA: Jossey-Bass.
Cummings, S., Bridgman, T., & Brown, K. G. (2016). Unfreezing change as three steps:
Rethinking Kurt Lewin’s legacy for change management. Human Relations, 69(1), 33–
60. https://doi.org/10.1177/0018726715577707
Roberts, B. W., & DelVecchio, W. F. (2000). The rank-order consistency of personality traits from childhood to old age: A quantitative review of longitudinal studies. Psychological Bulletin, 126(1), 3-25.
Dickson, M. W., Den Hartog, D. N., & Mitchelson, J. K. (2003). Research on leadership in a
cross-cultural context: Making progress, and raising new questions. Leadership
Quarterly, 14(6), 729-768.
Edmondson, A. and McManus, S. (2007) Methodological Fit in Management Field Research.
Academy of Management Review, 32, 1155-1179.
http://dx.doi.org/10.5465/AMR.2007.26586086
Fisher, C. D., & To, M. L. (2012). Using experience sampling methodology in organizational
behavior. Journal of Organizational Behavior, 33(7), 865-877.
https://doi.org/10.1002/job.1803
Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. American Psychologist, 56(3), 218-226.
George, J. M. (2010). More engagement is not necessarily better: The benefits of fluctuating
levels of engagement. In S. L. Albrecht (Ed.), New horizons in management. Handbook
of employee engagement: Perspectives, issues, research and practice (pp. 253-263).
Gino, F., & Ariely, D. (2012). The dark side of creativity: Original thinkers can be more
dishonest. Journal of Personality and Social Psychology, 102(3), 445-459.
Grobman, G. (2005). Complexity Theory: A New Way to Look at Organizational Change.
Public Administration Quarterly, 29(3), 351-384.
Hackman, J. R. (2003). Learning more from crossing levels: Evidence from airplanes, orchestras,
and hospitals. Journal of Organizational Behavior, 24, 1-18.
Halbesleben, J. R. B., Harvey, J., & Bolino, M. C. (2009). Too engaged? A conservation of resources view of the relationship between work engagement and work interference with family. Journal of Applied Psychology, 94(6), 1452-1465. doi: 10.1037/a0017595
Harter, J. (2018). Employee Engagement on the Rise in the U.S. Retrieved from
https://news.gallup.com/poll/241649/employee-engagement-rise.aspx
Hayles, K. (1991). Chaos and Order: Complex Dynamics in Literature and Science. Bibliovault
OAI Repository, the University of Chicago Press.
Hess, E.D. (2014). Learn or Die: Using Science to Build a Leading-Edge Learning
Organization. New York: Columbia University.
Hudson, L. A., & Ozanne, J. L. (1988). Alternative ways of seeking knowledge in consumer
research. Journal of Consumer Research, 14, 508-521. http://dx.doi.org/10.1086/209132
Jacoby, S. M. (1988). Employee Attitude Surveys in Historical Perspective. Industrial Relations,
27(1), 74-93. https://doi.org/10.1111/j.1468-232X.1988.tb01047.x
Johnson, S. R. (2018). Engaging the Workplace: Using Surveys to Spark Change. New York:
ASTD.
Kahneman, D., Krueger, A. B., Schkade, D. A., Schwarz, N., & Stone, A. A. (2004). A survey method for characterizing daily life experience: The day reconstruction method (DRM). Science, 306(5702), 1776-1780.
Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A
qualitative and quantitative review. Journal of Applied Psychology, 87(4), 765-780.
http://dx.doi.org/10.1037/0021-9010.87.4.765
Judge, T. A. and Cable, D. M. (1997), Applicant Personality, Organizational Culture, and
Organization Attraction. Personnel Psychology, 50, 359-394. doi:10.1111/j.1744-
6570.1997.tb00912.x
Kahneman, D. and Riis, J. (2005) Living, and thinking about it, two perspectives. In F. A.
Huppert, N. Baylis, and B. Keverne (Eds.), The Science of Well-Being. New York:
Oxford University Press.
Kaplan A. (1964). The conduct of inquiry: methodology for behavioral science. San Francisco,
CA: Chandler.
Kappes, H. B., Oettingen, G., Mayer, D., & Maglio, S. (2011). Sad mood promotes self-initiated
mental contrasting of future and reality. Emotion, 11(5), 1206-1222
Katz, D., & Kahn, R. L. (1978). The social psychology of organizations. New York: Wiley.
Le H., Oh I.S., Robbins S.B., Ilies R., Holland E., Westrick P. (2011). Too much of a good thing:
curvilinear relationships between personality traits and job performance. Journal of
Applied Psychology, 96(1):113-33. doi: 10.1037/a0021016
Mann, F.C. (1957). Studying and creating change: A means to understanding social
organization. Research in Industrial Human Relations. (Industrial Relations Research
Association, Publication No. 17).
Martin, R., Guillaume, Y., Thomas, G., Lee, A., & Epitropaki, O. (2016). Leader‐Member
Exchange (LMX) and performance: a meta‐analytic review. Personnel Psychology,
69(1), 67–121. https://doi.org/10.1111/peps.12100
Nadler, D. A. (1996). Setting expectations and reporting results: Conversations with top
management. In A. I. Kraut (Ed.), Organizational surveys: Tools for assessment and
change. San Francisco: Jossey-Bass.
Nadler, D. A. (1977). Feedback and organization development: using data-based methods.
London: Addison-Wesley.
Neuman, W. L. (2000). Social research methods: Qualitative and quantitative approaches (4th ed.). Needham Heights, MA: Allyn & Bacon.
Norem J.K., & Cantor N. (1986) Defensive pessimism: harnessing anxiety as motivation. Journal
of Personality and Social Psychology, 51(6):1208-17.
O’Reilly, C. A., Caldwell, D. F., Chatman, J. A., & Doerr, B. (2014). The Promise and Problems
of Organizational Culture: CEO Personality, Culture, and Firm Performance. Group &
Organization Management, 39(6), 595–625. https://doi.org/10.1177/1059601114550713
Patton. M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks,
CA: Sage Publications.
Paul, K. B., & Bracken, D. W. (1995). Everything you always wanted to know about employee
surveys. Training & Development, 49(1), 45-49.
Peterson, C., Park, N., & Sweeney, P. J. (2008). Group well-being: Morale from a positive psychology perspective. Applied Psychology, 57, 19-36. doi: 10.1111/j.1464-0597.2008.00352.x
Podsakoff, P.M., Mackenzie, S.B., Lee, J.Y. and Podsakoff, N.P. (2003) Common Method
Biases in Behavioral Research: A Critical Review of the Literature and Recommended
Remedies. Journal of Applied Psychology, 88, 879-903. http://dx.doi.org/10.1037/0021-
9010.88.5.879
Ray, R., Hyland, P. K., Dye, D., Kaplan, J., and Pressman, A. (2013). The DNA of Engagement.
Retrieved from //www.conference-board.org/councils/councildetail.cfm?councilid=1058
Rousseau, D. M. and Barends, E. G. (2011), Becoming an evidence‐based HR practitioner.
Human Resource Management Journal, 21: 221-235. doi:10.1111/j.1748-
8583.2011.00173.x
Schneider, B., Ashworth, S. D., Higgs, A. C., & Carr, L. (1996). Design, validity, and use of
strategically focused employee attitude surveys. Personnel Psychology, 49(3), 695-705.
http://dx.doi.org/10.1111/j.1744-6570.1996.tb01591.x
Schneider, B., Parkington, J. J., & Buxton, V. M. (1980). Employee and customer perceptions of
service in banks. Administrative Science Quarterly, 25, 252-267.
Thurstone, L. L. & Chave, E. J. (1929). The measurement of attitude. Oxford, England: Univ. of
Chicago Press.
Tugade, M. M., Fredrickson, B. L., & Barrett, L. F. (2004). Psychological resilience and positive emotional granularity: Examining the benefits of positive emotions on coping and health. Journal of Personality, 72(6), 1161-1190.
Van De Ven, A.H. & Poole, M.S. (1995) Explaining Development and Change in Organizations.
Academy of Management Review, 20, 510-540. http://dx.doi.org/10.2307/258786
Von Bertalanffy, L. (1950). An outline of general systems theory. British Journal of the
Philosophy of Science, 1, 134-165.
Waber, B. (2013). People Analytics: How Social Sensing Technology Will Transform Business
and What It Tells Us about the Future of Work. New Jersey: FT Press.
Weick, K. E., & Sutcliffe, K. M. (2015). Managing the unexpected: Sustained performance in a complex world. Hoboken, NJ: Wiley.
Wiener, N. (1948). Cybernetics; or control and communication in the animal and the machine.
Oxford, England: John Wiley.
Wirtz, D., Kruger, J., Scollon, C. N., & Diener, E. (2003). What to do on spring break? The role of predicted, on-line, and remembered experience in future choice. Psychological Science, 14(5), 520-524.
Young H.R., Glerum D.R., Wang W., Joseph D.L. (2018). Who are the most engaged at work?
A meta‐analysis of personality and employee engagement. Journal of Organizational
Behavior, 39, 1330–1346. https://doi.org/10.1002/job.2303