NBER WORKING PAPER SERIES
INEQUALITY AT WORK:
THE EFFECT OF PEER SALARIES ON JOB SATISFACTION
David Card
Alexandre Mas
Enrico Moretti
Emmanuel Saez
Working Paper 16396
http://www.nber.org/papers/w16396
NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
September 2010
We are grateful to Stefano Dellavigna, Ray Fisman, Kevin Hallock, Lawrence Katz, Andrew Oswald,
and numerous seminar participants for many helpful comments. We thank the Princeton Survey Research
Center, particularly Edward Freeland and Naila Rahman for their assistance in implementing the surveys.
We are grateful to the Center for Equitable Growth at UC Berkeley and the Industrial Relations Section
at Princeton University for research support. This paper reflects solely the views of the authors and
not necessarily the views of the institutions they belong to, nor those of the National Bureau of Economic
Research.
© 2010 by David Card, Alexandre Mas, Enrico Moretti, and Emmanuel Saez. All rights reserved.
Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided
that full credit, including © notice, is given to the source.
Inequality at Work: The Effect of Peer Salaries on Job Satisfaction
David Card, Alexandre Mas, Enrico Moretti, and Emmanuel Saez
NBER Working Paper No. 16396
September 2010, Revised April 2011
JEL No. J0
ABSTRACT
We use a simple theoretical framework and a randomized manipulation of access to information on
peers' wages to provide new evidence on the effects of relative pay on individual job satisfaction and
job search intentions. A randomly chosen subset of employees of the University of California (UC)
was informed about a new website listing the pay of University employees. All employees were then
surveyed about their job satisfaction and job search intentions. Our information treatment doubles
the fraction of employees using the website, with the vast majority of new users accessing data on
the pay of colleagues in their own department. We find an asymmetric response to the information
treatment: workers with salaries below the median for their pay unit and occupation report lower pay
and job satisfaction, while those earning above the median report no higher satisfaction. Likewise,
below-median earners report a significant increase in the likelihood of looking for a new job, while
above-median earners are unaffected. Our findings suggest that job satisfaction depends directly on
relative pay comparisons, and that this relationship is non-linear.
David Card
Department of Economics
549 Evans Hall, #3880
University of California, Berkeley
Berkeley, CA 94720-3880
and NBER
card@econ.berkeley.edu
Alexandre Mas
Industrial Relations Section
Firestone Library
Princeton University
Princeton, NJ 08544
and NBER
amas@princeton.edu
Enrico Moretti
University of California, Berkeley
Department of Economics
549 Evans Hall
Berkeley, CA 94720-3880
and NBER
moretti@econ.berkeley.edu
Emmanuel Saez
Department of Economics
University of California, Berkeley
549 Evans Hall #3880
Berkeley, CA 94720
and NBER
saez@econ.berkeley.edu
1 Introduction
Economists have long been interested in the possibility that individuals care about both their
absolute income and their income relative to others.1 Relative income concerns have important
implications for microeconomic and macroeconomic policy,2 and for understanding the impact of
income inequality.3 Recent studies have documented a systematic correlation between measures
of relative income and reported job satisfaction (e.g., Clark and Oswald, 1996), happiness (e.g.,
Luttmer, 2005 and Solnick and Hemenway 1998), health and longevity (e.g., Marmot, 2004),
and reward-related brain activity (e.g., Fliessbach et al. 2007).4 Despite confirmatory evidence
from laboratory experiments (e.g., Fehr and Schmidt, 1999), the interpretation of the empiri-
cal evidence is not always straightforward. Relative pay effects pose a daunting challenge for
research design, since credible identification hinges on the ability to isolate exogenous variation
in the pay of the relevant peer group.
In this paper we propose and implement a new strategy for evaluating the effect of relative
pay comparisons, based on a randomized manipulation of access to information on co-workers’
wages.5 Following a court decision on California’s “right to know” law, the Sacramento Bee
newspaper established a website (www.sacbee.com/statepay) in early 2008 that made it possible
to search for the salary of any state employee, including faculty and staff at the University of
California. In the months after this website was launched we contacted a random subset of
employees at three UC campuses, informing them about the existence of the site.6 A few days
1Early classical references are Smith (1759) and Veblen (1899). Modern formal analysis began with Duesen-
berry’s (1949) relative income model of consumption. Easterlin (1974) used this model to explain the weak link
between national income growth and happiness. Hamermesh (1975) presents a seminal analysis of the effect of
relative pay on worker effort. Akerlof and Yellen (1990) provide an extensive review of the literature (mostly
outside economics) on the impact of relative pay comparisons.
2For example, Boskin and Sheshinski (1978) show how optimal taxation is affected by relative income concerns.
More recently, Grossman and Helpman (2007) develop the implications of relative wage concerns for the optimal
extent of off-shoring. Fuhrer and Moore (1995) introduce relative wage concerns in an overlapping contract
macro wage model.
3Most of the work on inequality has focused on the explanations for the rise in earnings inequality in recent
decades (see reviews by Katz and Autor, 1999 and Acemoglu and Autor, 2011). However, there is less work on
the question of why inequality per se is a matter of public concern.
4There are also studies that have concluded a more important role for absolute income than relative income,
for example, Stevenson and Wolfers (2008). Kuhn et al. (2011) find that people do not experience reduced
happiness when their neighbors win the lottery.
5A number of recent empirical studies in behavioral economics have used similar methods that manipulate
information–rather than the underlying economic parameters–to uncover the effects of various policies. See
Hastings and Weinstein (2009) on school quality, Jensen (2010) on returns to education in developing countries;
Chetty, Looney, and Kroft (2009) on sales taxes, Chetty and Saez (2009) on the Earned Income Tax Credit, and
Kling et al. (2008) on Medicare prescription drug plans.
6Initially the website was relatively unknown. Even as late as June 2009, when we conducted the last of
later we surveyed all campus employees to elicit information about their use of the Sacramento
Bee website, their pay and job satisfaction, and their job search intentions. We compare the
answers from workers in the treatment group (who were informed of the site) and the control
group (who were not). We use administrative salary data matched to the survey responses
to compare the effects of the information treatment on individuals who were earning above or
below the median pay in their unit and occupation, and estimate models that allow the response
to treatment to depend on an individual’s salary relative to the median for his or her unit and
occupation. Throughout our analysis we define peers as co-workers in the same occupation
group (faculty vs. staff) and the same unit (i.e., department or school) within the University.
Theoretically there are two broad reasons why information on peer salaries may affect work-
ers’ utilities. Much of the existing relative pay literature assumes that workers’ preferences
depend directly on their salary relative to their peers’. Alternatively, workers may have no
direct concern about co-workers’ pay but may use peer wages to help predict their own future
pay, as in the “tunnel effect” of Hirschman and Rothschild (1973).7 These models have different
predictions on how information on co-worker salary affects utility.
In the relative utility model, we assume that individuals value their position relative to
co-workers in the same pay unit and occupation, and that in the absence of external informa-
tion, people have imperfect information on their co-workers’ wages. Accessing information on
the Sacramento Bee website allows people to revise their estimates of co-worker pay. If job
satisfaction depends linearly on relative pay, information revelation has a negative effect on
below-median earners and a positive effect on above-median earners, with an average impact
of zero. If job satisfaction is a concave function of relative pay, as would be the case under
inequality aversion (eg. Fehr and Schmidt 1999), the negative effect on below-median earners is
larger in magnitude than the positive effect on above-median earners, and information revelation
causes a reduction in average job satisfaction.
The predicted pattern of impacts is quite different in a model where people have no direct
our three surveys, only about 40% of employees who had not been directly informed about the site through our
experiment report being aware of its existence.
7Hirschman and Rothschild (1973) proposed this model in the context of developing economies where increases
in inequality are tolerable because they act as a signal for future own income growth. Senik (2004) proposed a
recent test of the model in the case of Russia in transition. More closely related to our study, Galizzi and Lang
(1998), using administrative data from Italian firms, show that, conditional on own wages, the average wages of
similar workers in the firm are positively related to future wage growth and negatively related to quits. Clark et
al. (2009) show, using matched employer-employee panel data, that individual job satisfaction is higher when
other workers in the same establishment are better-paid, and interpret this as evidence of a tunnel effect.
concern over co-worker wages, but rationally use information on peer salaries to update their
future pay prospects. Indeed, in our data, future earnings growth is positively related to current
median earnings in one’s department (conditioning on current individual earnings). Hence, if
co-worker salaries provide a signal about future wages, either through career advancement or
a bargaining process, learning that one’s wage is low (high) relative to one’s co-workers causes
expected future wages to be updated positively (negatively). In this case the revelation of
co-workers’ salaries raises the job satisfaction of relatively low-wage workers and lowers the
satisfaction of relatively high-wage workers. Our simple randomized design allows us to mea-
sure the causal impacts of information revelation for workers at different points in the salary
distribution and distinguish between the alternative models.
Informing UC employees about the Sacramento Bee website had a large and highly significant
impact on the fraction who used the site. In the absence of treatment we estimate that only
about one-quarter of UC employees had used the site. Our treatment more than doubled that
rate. Most new users (80%) report that they investigated the wages of colleagues in their own
department or pay unit. This strong “first stage” result establishes that workers are interested in
co-workers’ wages – particularly the pay of peers in the same department – and that information
manipulation is a powerful and practical way to estimate the effects of relative pay on workers.
We find that access to information on co-workers’ wages had different effects on employees
with salaries above and below the median in their department and occupation group: The in-
formation treatment caused a reduction in pay and job satisfaction for those whose wages are
below the median in their department and occupation group and an increase in the probability
that the worker reports looking for another job. The reductions in pay and job satisfaction,
and the increased reporting of job search in the treatment group, are more pronounced the further
the wage falls below the unit and occupation median. By comparison, those who are paid
above the median experienced no significant change in pay satisfaction or reported job search.
The evidence further suggests that the response to treatment depends more on the wage rank
than on the wage level relative to the median, as in Parducci’s (1995) theory. Overall, these findings
are consistent with relative pay effects, and inconsistent with the alternative rational learning
model. The information treatment also leads to an increase in the fraction of respondents who
think that overall inequality in America is too high. However, we do not find evidence that
the treatment affected turnover 2-3 years after the experiment, possibly because the informa-
tion diffused over time to both the control and treatment groups, or because the Great Recession reduced
voluntary quits.
Our results provide credible field-based evidence confirming the importance of the relative
pay comparisons that have been identified in earlier observational studies of job turnover (Kwon
and Milgrom, 2011), job satisfaction (Clark and Oswald, 1996; Hamermesh, 2001; Lydon and
Chevalier, 2002) and happiness (Frey and Stutzer, 2002; Luttmer, 2005), and in some (but not
all) lab-based studies.8 Specifically, they support the theory of relative income in which negative
comparisons reduce workers’ satisfaction but positive comparisons have less impact.
Our results also contribute to the literature on pay secrecy policies.9 About one-third of U.S.
companies have “no-disclosure” contracts that forbid employees from discussing their pay with
co-workers. Such contracts are controversial and are explicitly outlawed in several states. Our
finding of an asymmetric impact of access to wage information for lower-wage and higher-wage
workers suggests that employers have an incentive to maintain pay secrecy, since the cost to
low-paid employees is greater than any benefit received by their high-wage peers.
The remainder of the paper is organized as follows. Section 2 presents a simple theoretical
framework for structuring our empirical investigation. Section 3 describes the experimental
design, our data collection and assembly procedures, and selection issues. Section 4 presents
our main empirical results. Section 6 concludes.
2 A Simple Theoretical Framework
In this section we lay out two simple models that illustrate how information on co-worker pay
may affect job satisfaction. We are particularly interested in understanding how the relation
between job satisfaction and information on co-workers’ pay may differ for those whose wage
is above the average wage of their co-workers and those whose wage is below the average of
8Lab experimental studies have developed a series of games such as the dictator game, the ultimatum game,
or the trust game (see Rabin 1998 for a survey) showing evidence that relative outcomes matter. See in particular
Fehr and Falk 1999, Charness 1999, Fehr et al. 1998, Fehr et al. 1993, Fehr and Schmidt, 1999, Charness and
Rabin, 2002, and Clark et al., 2010 for lab evidence of relative pay effects. Note however that in experimental
effort games, Charness and Kuhn (2007) and Bartling and Von Siemens (2010) find that workers’ effort is highly
sensitive to their own wages, but unaffected by co-worker wages. Following the theory that ordinal rank matters
proposed in psychology by Parducci (1995), some lab studies have shown that rank itself matters (see e.g. Brown
et al. 2008 and Kuziemko et al. 2010).
9The seminal work on pay secrecy is Lawler (1965). Futrell (1978) presents a comparison of managerial
performance under pay secrecy and disclosure policies, while Manning and Avolio (1985) study the effects of
pay disclosure of faculty salaries in a student newspaper. Most recently Danziger and Katz (1997) argue that
employers use pay secrecy policies to reduce labor mobility and raise monopsonistic profits.
their co-workers. We begin with the case where workers care directly about relative pay, as
in the model of Clark and Oswald (1996). We then consider an alternative scenario in which
people do not care about relative pay, but use information on their co-workers’ pay to form
expectations about their own future pay. In both cases we assume that in the absence of the
website, people know their own salary with certainty and have imperfect information on their
peers’ salary. With access to the website they have complete information on co-workers’ salary.
2.1 Model 1 – Relative Utility
Consider a worker whose own wage is w and who works in a unit with an average wage m. For
simplicity we will assume that wages within each unit are symmetrically distributed (so mean
and median wages in the unit are the same), and that agents who lack complete information
hold Bayesian priors. Let I denote the information set available to the worker: I = I0 will
denote the information set in the absence of access to the Sacramento Bee website, and I = I1
will denote the information set with access to the site. For the sake of the model, our experiment
can be thought of as changing the information set from I0 to I1. In practice, our experiment has
“imperfect compliance”, in the sense that some members of the control group have information
I1 and some members of the treatment group have information I0. We defer a discussion of this
issue until section 2.3, below.
Assume that the worker’s utility, or job satisfaction, given set I, can be written as:
S(w, I) = u(w) + v(w − E[m|I]) + e,   (1)
where e is an individual-specific term representing random taste variation and v(·) represents
feelings arising from relative pay.10 With suitable choices for the functions u(·) and v(·), this
specification encompasses most of the functional forms that have been proposed in the literature
on relative pay. We assume that in the absence of the website, individuals only know their own
salary, and that they hold a prior for m that is centered on their own wage, i.e., E[m|I0] = w.
Under these assumptions, job satisfaction in the absence of external information is:
S(w, I0) = u(w) + v(w − E[m|I0]) + e = u(w) + e,
10 These feelings depend on information about co-workers’ pay that may never be revealed; this is
why the expectation term in (1) is inside the function v(·) rather than outside.
We ignore the cost of effort because it is not affected by the information treatment, and therefore is on average
the same for the group of workers who receive the information treatment and the control group of workers who
do not.
where we assume (w.l.o.g.) that v(0) = 0. With access to the website we assume that individuals
can observe m perfectly.11 Then job satisfaction conditional on using the website is
S(w, I1) = u(w) + v(w − E[m|I1]) + e = u(w) + v(w − m) + e.
Let D be an indicator for whether an individual is informed or not, then we have
S(w, m, D) = u(w) + D·v(w − m) + e.   (2)
This equation provides a complete description of an idealized experiment in which members
of the control group have D = 0 and members of the treatment group have D = 1. For such
an experiment the treatment response function R(w, m) ≡ E[S(w, m, 1) − S(w, m, 0) | w, m]
identifies the relative pay concern function v(w − m). A simple specification of the function v(·)
is a piece-wise linear model allowing for different slopes above and below the median m:
S(w, m, D) = u(w) + b0·D·(w − m)·1(w ≤ m) + b1·D·(w − m)·1(w > m) + e.   (3)
We assume b0 ≥ b1 ≥ 0 to allow (potentially) for concavity in the relative utility function,
the so-called “inequality aversion” hypothesis of Fehr and Schmidt (1999).12 In this case the
treatment reduces job satisfaction for those with w ≤ m and weakly increases job satisfaction
for those with w > m, implying that the average effect of treatment is weakly negative. Holding
constant m, the effect of treatment is increasing in w, with a slower rate of increase once w > m.
The linear case with no kink at the median corresponds to b0 = b1. In this case the average
treatment effect is zero.
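To make the piece-wise linear specification in equation (3) concrete, the following is a minimal simulation sketch; it is purely illustrative, with an own-wage component u(w) = log(w) and slope values chosen only so that b0 ≥ b1 ≥ 0, none of which are estimates from the paper.

```python
import numpy as np

def satisfaction(w, m, D, b0=0.02, b1=0.005):
    """Piece-wise linear relative-utility model of equation (3).

    w : own wage, m : unit median wage, D : 1 if informed, 0 otherwise.
    b0 >= b1 >= 0 encodes inequality aversion; u(w) = log(w) is an
    illustrative choice for the own-wage component.
    """
    rel = w - m
    relative_term = D * np.where(rel <= 0, b0 * rel, b1 * rel)
    return np.log(w) + relative_term

# A hypothetical pay unit with median wage 60 (thousands of dollars).
w = np.array([40.0, 55.0, 60.0, 70.0, 90.0])
m = 60.0

treatment_effect = satisfaction(w, m, D=1) - satisfaction(w, m, D=0)
print(treatment_effect)         # negative below the median, weakly positive above
print(treatment_effect.mean())  # weakly negative on average when b0 > b1
```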
2.2 Model 2 – Co-worker Wages as a Signal of Future Wages
Appendix Table A1 presents a set of earnings growth comparisons and regression models for
wages of workers at UCLA between 2007 and 2008. The table shows that workers paid below the
median for their department and occupation group experience significant earnings gains relative
to those above the median, and that wage growth is strongly correlated with median peer wages,
11Complete information is a strong assumption, and can be relaxed by assuming that access to the website
provides a noisy signal of the true mean wage of co-workers. This addition does not substantively change our
theoretical model.
12Other related models could also be consistent with those predictions. For example, individuals may value
social approval or social esteem. Learning that one earns less than peers might be taken as an indication that
one’s work is not valued as much by the employer, and this disappointment may lead to lower job satisfaction.
If utility is concave in social esteem, then these predictions would also hold.
holding constant own wages. These patterns suggest a possible alternative to the relative income
model in which utility depends solely on own earnings, but people use their co-workers’ wages to
help predict future pay.13 If respondents interpret the pay and job satisfaction questions in our
surveys as reflecting not just their current situation, but also their expected future trajectory,
then new information on co-worker pay could lead to changes in satisfaction through updating,
rather than relative pay considerations. In essence, information on peers’ salaries provides a
signal of expected future pay, arising through either career advancement or bargaining.
Formally, suppose that people evaluate their job satisfaction based on their current wage w
and on the net present value of their expected future wages w′ given their information set I:
S(w, I) = w + βE[w′|I] + e,   (4)
where β > 0 is a discount factor and the linearity assumption is made for simplicity (see our
discussion below). We assume that future wages are normally distributed and that individuals
hold a conjugate prior centered on their current wage with precision q (i.e., their prior is w′ ∼
N(w, 1/q)).14 In addition, individuals who receive the information treatment observe a noisy
signal about their future wage from their peers’ average wage m. In particular, we assume
that m = w′ + u, where u is assumed to be normally distributed with mean 0 and precision
k, independent of w′.15 The larger is k, the more informative is the signal. Workers form
expectations about future wages by combining their prior and the signal:
E[w′|I1] = (1 − λ)w + λm,
where λ ≡ k/(q + k) represents the relative precision of the signal. Observed job satisfaction
for members of the control and treatment groups, conditional on (w, m, D), is given by
S(w, m, D) = (1 + β)w + D·b0·(m − w) + e,   (5)
where b0 ≡ βλ. Although this equation has the same form as equation (3) above, the predictions
are the opposite of those of the relative pay model. When people learn about their own future
wages from co-worker pay, the effect of access to information on job satisfaction is increasing in
13For example, if people believe that their employer has a strict pay ceiling, then learning that a colleague’s
pay is above that ceiling increases the probability of obtaining a higher wage in the future.
14 Assuming that the mean of w′ is (1 + g)w, where g is a common growth factor, does not affect these results.
15This assumes that peer wages are an unbiased signal of future wages. We could easily incorporate more
general signals with no substantive change in the model.
the gap between m and w because the further an individual is below the mean for his or her
peers, the greater is his or her expected growth of w in the future. On average, half the workers
have a positive surprise and half have a negative surprise, with an average impact of zero.16
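As an illustration of the updating step behind equation (5), here is a small sketch of the posterior mean E[w′|I1] = (1 − λ)w + λm with λ = k/(q + k); the wage and precision values are arbitrary placeholders, not estimates from our data.

```python
def posterior_future_wage(w, m, q, k):
    """Posterior mean of the future wage w' in the learning model.

    Prior:  w' ~ N(w, 1/q), centered on the current wage with precision q.
    Signal: m = w' + u, with u ~ N(0, 1/k), so the posterior mean is the
    precision-weighted average (1 - lam) * w + lam * m, lam = k / (q + k).
    """
    lam = k / (q + k)
    return (1.0 - lam) * w + lam * m

# Illustrative numbers only: own wage 50, observed peer wage 60 (thousands).
print(posterior_future_wage(50.0, 60.0, q=1.0, k=3.0))  # 57.5: strong signal, large upward revision
print(posterior_future_wage(50.0, 60.0, q=3.0, k=1.0))  # 52.5: weak signal, small revision
```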
2.3 Empirical Implementation
This section describes how we test the predictions of the alternative models. We first discuss
the issue of imperfect compliance. We then turn to a discussion of the empirical models that
we fit to the data and the empirical tests that we perform. These tests directly follow from the
predictions of the models in sections 2.1 and 2.2.
2.3.1 Incomplete Compliance
In the theoretical models above, we have assumed that all treated individuals do access the
web site salary information, and none of the individuals in the control group do. In practice,
however, our experiment has incomplete compliance. Prior to our experimental intervention
some employees of the UC system had already used the Sacramento Bee website. After our
information treatment not everyone who was informed about the existence of the website decided
to use it.17 Thus some members of the control group were informed, while some members of the
treatment group were uninformed. As in other experimental settings this incomplete compliance
raises potential difficulties for the interpretation of our empirical results.
Let T denote the treatment status of a given individual (T = 0 for the control group; T = 1
for the treatment group), and let π0 = E[D|T = 0, w, m] and π1 = E[D|T = 1, w, m] denote
the probabilities of being informed conditional on treatment status, individual wages, and peer
mean wages. With this notation, we have
S = u(w) + π0·v(w − m) + T·(π1 − π0)·v(w − m) + e + φ,   (6)
for some functions u and v, and where φ is an error component reflecting the deviation of an
16 It is possible to extend this learning model to the case where workers value income in each period using
a concave utility function u(w): S(w, I) = u(w) + βE[u(w′)|I] + e. With concavity the positive surprises
experienced by lower-wage workers lead to a relatively large gain in satisfaction, while the negative surprises
experienced by high-wage workers lead to relatively smaller reductions in satisfaction. Thus, the average change
in satisfaction is positive. This will be true for any concave utility function, including a reference point utility
function where there is a concave kink at the reference point (Kahneman and Tversky, 1979).
17Some treated employees may have failed to read our initial email informing them of the website. Others
may have been concerned about clicking a link in an unsolicited email, and decided not to access the site.
individual’s actual information status from his or her expected status.18 Under the assumption
that the “information treatment intensity” δ ≡ π1 − π0 is constant across individuals, equation
(6) implies that the observed treatment response function in our experiment is simply an atten-
uated version of the “full compliance” treatment effect, with an attenuation factor of δ. As in
a simpler model with a homogeneous treatment effect, we can therefore inflate the coefficients
of the estimated treatment response function using an estimate of δ from a first-stage linear
probability model that relates the probability of using the website to treatment status and the
other observed characteristics of an individual.
In the more general case in which the information treatment varies with w and m, the
experimental response reflects a combination of the variation in the information treatment effect
(π1 − π0) and the difference in satisfaction in the presence or absence of information (v(w − m)).
Below we estimate a variety of “first stage” models that measure the effect of the information
treatment on use of the Sacramento Bee website, including models that allow the treatment
effect to vary with functions of (w − m). Importantly, we find that the information treatment
intensity is independent of the observed characteristics of individuals, including their wage and
relative wage. This allows us to interpret our satisfaction models as variants of equation 6 with
an attenuated treatment response.
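The rescaling logic can be summarized in a few lines. The sketch below divides an intent-to-treat coefficient by an estimate of δ from the first-stage linear probability model, the standard Wald-style inflation under a constant information treatment intensity; the numbers shown are placeholders, not estimates from the paper, and the rescaled standard error ignores sampling error in the first stage.

```python
def inflate_by_first_stage(itt_coef, itt_se, delta_hat):
    """Rescale an intent-to-treat estimate by the first-stage compliance rate.

    delta_hat is the estimated pi1 - pi0, i.e. the increase in the probability
    of being informed caused by assignment to treatment.  The standard error
    returned here is simply scaled and ignores sampling error in delta_hat.
    """
    return itt_coef / delta_hat, itt_se / delta_hat

# Placeholder numbers: a first stage of 0.28 and an ITT effect of -0.05.
print(inflate_by_first_stage(-0.05, 0.02, 0.28))  # roughly (-0.179, 0.071)
```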
2.3.2 Econometric Models
Based on the simple predictions arising from the models described above, we fit two main models
to the measures of job satisfaction collected in our survey. First, we fit models of the form:
S = g(w, x) + a·1(w ≤ m) + b0·T·1(w ≤ m) + b1·T·1(w > m) + µ,   (7)
which include controls for individual wages and other covariates (x), a dummy for whether the
individual’s wage is less than the median in his or her pay unit and occupation, and interactions
of a treatment dummy with indicators for whether the individual’s wage is below or above the
median for his or her pay unit and occupation.
Our second set of empirical models focus directly on the shape of the treatment response
function implied by equations 3 and 5. These models have the form
S = g(w, x) + c1·T·(w − m)·1(w ≤ m) + c2·T·(w − m)·1(w > m) + µ,   (8)
18 The models described above imply that satisfaction can be written as S = u(w) + D·v(w − m) + e. Formally,
φ = [D − Tπ1 − (1 − T)π0]·v(w − m). This term is mean-independent of the conditioning variables in π0 and
π1.
which includes controls for individual wages and other covariates (x), a dummy for treatment
status, an interaction of treatment status with the individual’s relative wage when the wage is
below median in the individual’s pay unit, and a second interaction between the relative wage
and treatment status when the wage exceeds the median. As discussed below, we apply these
models to three complementary measures of job satisfaction.
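As a concrete sketch of how equation (7) can be taken to data, the code below builds the below-/above-median interaction regressors, fits the model by OLS with standard errors clustered by pay unit, and runs the joint test that the two treatment coefficients are zero. The data, column names, and effect sizes are entirely synthetic placeholders (this is not the UC sample); equation (8) would simply replace the two indicator interactions with interactions between T and (w − m) on either side of the median.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Purely synthetic data for illustration.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dept": rng.integers(0, 100, n),                    # pay unit (cluster)
    "w": rng.lognormal(mean=4.1, sigma=0.3, size=n),    # own wage
    "T": rng.integers(0, 2, n),                         # information treatment
})
df["m"] = df.groupby("dept")["w"].transform("median")   # peer median wage
df["below"] = (df["w"] <= df["m"]).astype(int)
df["T_below"] = df["T"] * df["below"]
df["T_above"] = df["T"] * (1 - df["below"])
# Outcome with an asymmetric (relative-pay style) response, for illustration only.
df["S"] = np.log(df["w"]) - 0.3 * df["T_below"] + rng.normal(0, 1, n)

m7 = smf.ols("S ~ np.log(w) + below + T_below + T_above", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["dept"]})
print(m7.params[["T_below", "T_above"]])
print(m7.f_test("T_below = 0, T_above = 0"))  # joint test of any treatment effect
```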
We consider several tests of the estimated coefficients from these models. We first consider
the test that the treatment effects are jointly zero. Assuming that the observed treatment
response function in our data is simply a rescaled version of the “full compliance” response
function described by the competing models, this can be interpreted as a general test of whether
information about co-workers’ pay affects job satisfaction at all. This test cannot distinguish
why or how information about co-workers’ pay might affect job satisfaction.
Equation (7) allows us to distinguish between a model in which relative wages have a direct
effect on satisfaction and one in which people learn about their own future wages from their
peers’ salaries. A finding of b0 < 0 and b1 ≥ 0, for example, would favor the relative wage model.
To test a model with linear effects of the relative wage on job satisfaction versus a model with
a strictly concave response to relative wages we would test −b1 = b0 < 0 vs. b0 < −b1 ≤ 0.
Inequality aversion also suggests the presence of a “kink” in the response to the comparison
wage once w > m, which implies c1 > c2 ≥ 0 in equation (8).
3 Design, Data, and Selection Issues
3.1 Experimental Design and Data Collection
In March 2008, the Sacramento Bee posted a searchable database at www.sacbee.com/statepay
containing individual pay information for California public employees including workers at the
University of California (UC) system. Although public employee salaries have always been
considered “public” information in California, in practice access to salary data was extremely
restricted and required a written request to the State or the University of California. The
Sacramento Bee database was the first to make this information easily accessible.19 At its
19Prior to March 2008, other local newspapers (the San Francisco Chronicle and the San Jose Mercury) had
posted online databases on top earners at the University of California (defined as workers paid over $200,000
in the year). The SacBee updates its website annually when new compensation information is made available.
Data for calendar year t earnings are posted in June of year t + 1. Others have also posted the comprehensive
information online after March 2008. For example, http://ucpay.globl.org/letters.php posts the complete data
from year 2004 to 2009.
inception the database contained pay information for calendar year 2007 for all UC workers
(excluding students and casual workers) as well as monthly pay for all other state workers.
3.1.1 Information Treatment
In the spring of 2008, we decided to conduct an experiment to measure the reactions of employees
to the availability of information on the salaries of their co-workers. We elected to use a ran-
domized design with stratification by department (or pay unit). Ultimately we focused on three
UC campuses: UC Santa Cruz (UCSC), UC San Diego (UCSD), and UC Los Angeles (UCLA).
Our information treatment consisted of an email (sent from special email accounts established
at UC Berkeley and Princeton) informing the recipient of the existence of the Sacramento Bee
website, and asking recipients to report whether they were aware of the existence of the site or
not. The emails were sent in October 2008 for UCSC, in November 2008 for UCSD, and in
May 2009 for UCLA. The exact text of the email was as follows:
“We are Professors of Economics at Princeton University and Cal Berkeley conducting a re-
search project on pay inequality at the University of California. The Sacramento Bee newspaper
has launched a web site listing the salaries for all State of California employees, including UC
employees. The website is located at www.sacbee.com/statepay or can be found by searching
“Sacramento Bee salary database” with Google. As part of our research project, we wanted to
ask you: Did you know about the Sacramento Bee salary database website?”
About 25% of people who received these emails responded by filling out a 1-question online
survey on their knowledge of the site. Since the answers are only available for the treatment
group we do not use the response to this online survey in the analysis below.
Our experimental design is summarized in Table 1. We collected online directories at
each of the three campuses to use as the basis for assignment. These directories contain
employees’ names, job titles, departments, and email addresses.20 At each campus, a fraction
of departments was randomly selected for treatment (two-thirds of departments at UC Santa
Cruz; one-half at the other two campuses). Within each treated department a random fraction
of employees was selected for treatment (60% at UC Santa Cruz, 50% at UC San Diego, 75% at
UCLA). Our original design targeted 40% of employees at UC Santa Cruz, 25% of employees at
UC San Diego, and 37.5% of employees at UCLA to receive treatment. As indicated in column
20Since our treatment and survey are administered by email, we omit all employees who do not have a UC
email address–a rare situation at UC.
2 of Table 1, the actual fractions receiving treatment were relatively close to these targets.21
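For readers who want the assignment mechanics spelled out, here is a small sketch of the two-stage stratified randomization: departments are drawn first, then a fraction of employees within each selected department. The column names and the example data are hypothetical; the shares shown are the UCSC values as an example.

```python
import numpy as np
import pandas as pd

def assign_treatment(directory, dept_share, within_share, seed=0):
    """Two-stage stratified assignment sketch.

    directory    : DataFrame with (hypothetical) columns 'name' and 'dept'.
    dept_share   : share of departments selected for treatment.
    within_share : share of employees treated within a selected department.
    """
    rng = np.random.default_rng(seed)
    depts = directory["dept"].unique()
    n_treated_depts = int(round(dept_share * len(depts)))
    treated_depts = rng.choice(depts, size=n_treated_depts, replace=False)
    in_treated_dept = directory["dept"].isin(treated_depts)
    drawn = rng.random(len(directory)) < within_share
    out = directory.copy()
    out["treated"] = (in_treated_dept & drawn).astype(int)
    return out

# Example with made-up data and the UCSC shares (2/3 of departments, 60% within).
roster = pd.DataFrame({"name": [f"emp{i}" for i in range(300)],
                       "dept": np.repeat(np.arange(30), 10)})
roster = assign_treatment(roster, dept_share=2/3, within_share=0.60)
print(roster["treated"].mean())  # close to the 40% target share
```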
The stratified treatment design was chosen to test the possibility of peer interactions in
the response to treatment.22 Specifically, we anticipated that employees who received the
information treatment might inform colleagues and co-workers in their department about the
site. As we show below, however, any within-department spillover effects appear to have been
very small in our experiment, and in our main analysis we therefore focus on simple comparisons
between people who were directly treated and those who were not, though we cluster the
standard errors for all models by department to reflect the stratified design.
As indicated in the third column of Table 1, we also randomly selected one-quarter of
departments at UCLA as “Placebo treatment” departments. The placebo treatment informed
people about a UC website listing the salaries of top UC administrators, and invited them
to fill out a 1-question survey on their knowledge of the site. Within these departments 75%
of individuals were randomly selected to receive the placebo treatment. We use the group
of workers who received the placebo treatment to assess the validity of our interpretation of
the evidence in light of possible confounders including priming effects due to the language of
the treatment email and differential response rates between the treatment and control groups. Like
workers in the treatment group, workers in the placebo group received an email about salary
differences within the UC system. But unlike the email received by workers in the treatment
group, the email received by workers in the placebo group provides no information about peers’
salary. Therefore, if our interpretation of the evidence is correct, the estimated effect of the
placebo treatment should be limited or null.
3.1.2 Second Stage Survey
The second stage of our design consisted of a follow-up survey, emailed to 100% of employees at
each campus some 3-10 days after the initial treatment emails were sent. The survey (repro-
duced in the appendix section A.1) included questions on knowledge and use of the Sacramento
Bee website, on job satisfaction and future job search intentions, on the respondent’s age and
gender, and on the length of time they had worked in their current position and at the University
21There is wide variation in the size of departments (from a handful in some departments to over 1000 at the
Business School at UCLA). To keep our design simple we decided to randomize across departments with no
regard for department size. This created some imbalance in the fraction of employees assigned to treatment.
22Such interactions were present in the response to the information treatment considered by Duflo and Saez
(2003) who studied the effects of a benefits fair on retirement savings plan participation at a large University.
of California. The survey was completed online by following a personalized link to a website.
In an effort to raise response rates we randomly assigned a fraction of employees at the first
two campuses in our experiment to be offered a chance at one of three $1000 prizes for people who
completed the survey.23 Again, we used a stratified design detailed in column 4 of Table 1: all
employees in one-third of departments were offered the incentive; and one-half of the employees
in another third of departments were offered the incentive. The selection of departments (and
individuals) to receive the incentive offer was made independently of the selection to receive the
original information treatment. Based on the positive reaction to the incentive offer at UCSC
and UCSD, we decided to extend the incentive to everyone in the UCLA survey. In all, just
over three-quarters of employees at the three campuses were offered the response incentive, and
a total of nine respondents across the three campuses won $1000 each.
For our surveys at UCSC and UCSD we also randomly varied the amount of time between
the information treatment and the follow-up survey: employees in one-half of departments were
emailed the survey 3 days after the initial treatment emails; employees at the other half were
emailed the survey 10 days after. For UCLA we decided to simplify the design and send all the
follow-up surveys 10 days after the information treatments. At all three campuses, we sent up
to two additional email reminders asking people to complete the follow-up survey.
3.1.3 Matching Administrative Salary Data
Our final dataset combines treatment status information, campus and department location,
follow-up survey responses, and administrative data on the salaries of employees at the Univer-
sity of California. The salary data – which were obtained from the same official sources used
by the Sacramento Bee – include employee name, base salary, and total wage payments from
the UC for calendar year 2008. We matched the salary data to the online directory database
by employee name. Specifically we matched observations from the online directories used as
the basis for random assignment with the salary file by first and last name, dropping all cases
for which the match was not one-to-one (i.e., any cases where two or more employees had the
same first and last name). Appendix Table A2 presents some summary statistics on the success
of our matching procedures. Overall, we were able to match about 76% of names from our
online directories to the salary database. The match rate varies by campus, with a high of
23More precisely, all respondents were eligible for the prize, but only a randomly selected sample were told
what it would be.
81% at UCSD and a low of 71% at UCSC. We believe that these differences are explained by
differences in the quality and timeliness of the information in the online directories at the three
campuses.
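A minimal sketch of the one-to-one name match described above is given below; the column names 'first' and 'last' are hypothetical stand-ins for however the directory and salary files store names.

```python
import pandas as pd

def match_directory_to_salary(directory, salary):
    """Match directory rows to salary records by first and last name.

    Any name that appears more than once in either file is dropped, so that
    only unambiguous one-to-one matches survive, as described in the text.
    """
    keys = ["first", "last"]
    dir_unique = directory[~directory.duplicated(subset=keys, keep=False)]
    sal_unique = salary[~salary.duplicated(subset=keys, keep=False)]
    return dir_unique.merge(sal_unique, on=keys, how="inner", validate="1:1")

# Toy example: 'Ann Lee' appears twice in the salary file and is therefore dropped.
directory = pd.DataFrame({"first": ["Ann", "Bo"], "last": ["Lee", "Chen"], "dept": ["Econ", "Math"]})
salary = pd.DataFrame({"first": ["Ann", "Ann", "Bo"], "last": ["Lee", "Lee", "Chen"], "pay": [1, 2, 3]})
print(match_directory_to_salary(directory, salary))  # only Bo Chen matches
```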
3.2 Response to the Follow-up Survey
Overall, just over 20% of employees at the three campuses responded to our follow-up survey
(appendix Table A2). While comparable to the response rates in many other non-governmental
surveys, this is still a relatively low rate, leading to some concern that the respondent sample
differs systematically from the overall population of UC employees. A particular concern is
that response rates may be affected by our information treatment, potentially confounding any
measured treatment effects on job satisfaction.
Table 2 presents a series of linear probability models for the event that an individual re-
sponded to our follow-up survey. The models in columns 1-2 are fit to the overall universe of
41,975 names that were subject to random assignment (based on the online directories). The
models in columns 3-6 are fit on the subset of 31,887 names we were able to match to the ad-
ministrative salary data. The baseline model in column 1 includes additive effects for our three
primary experimental manipulations: (1) receiving the information treatment; (2) receiving the
placebo treatment; (3) being informed of the lottery prize for survey respondents. As discussed
above, the information treatment and placebo treatment were offered to a random subsample
of people in randomly selected departments. Likewise, the response incentive was offered to
everyone in some departments, and a fraction of people in other “partially incentivized” de-
partments. To allow for spillover effects in the information treatment and placebo treatment
we include a dummy for direct assignment to treatment/placebo status, and a second dummy
for people who were in treated or placebo departments but not treated or offered the placebo
(with the omitted group being people in departments where no one received the information
or placebo treatment). Similarly, we include separate indicators for people in departments
where everyone was informed of the response incentive, people offered the response incentive in
departments with a 50% offer rate; and people in the partially incentivized departments who
were not offered the incentive (with the omitted group being people in departments where no
one was offered the incentive). The baseline model also includes a dummy if the individual
could be matched to the administrative salary data, and a full set of interactions of campus and
faculty/staff status.24 For comparison, column 2 shows a model in which potential spillover
effects from the information and placebo treatments, and the response incentive are set to zero.
The coefficient estimates for the models in columns 1 and 2 point to several interesting
conclusions. First, as suggested by the simple comparisons in Appendix Table A2, the
response rate for people who could be matched to the administrative salary data is significantly
higher (roughly +3.4 percentage points) than for those who could not. Second, assignment to
either the information treatment or the placebo treatment had a significant negative effect on
response rates, on the order of -3 to -5 percentage points. This pattern suggests that there was
a “nuisance” effect of being sent two emails that lowered response rates to the follow-up survey
independently of the content of the first email. Third, being offered the response incentive had
a sizeable positive (+4 percentage point) effect on response rates. Finally, none of the three
primary manipulations appear to have had within-department spillover effects. An F-test for
exclusion of all the spillover effects (reported in the bottom row of the table) has a p-value of
0.85. The estimates of the individual assignment coefficients are also very similar whether the
spillover effects are included or excluded (compare column 1 and column 2).
The models in columns 3-6 of Table 2 repeat these specifications on the subset of people
who can be matched to wage data, with and without the addition of a cubic polynomial in
individual wages as an added control. As would be expected if random assignment was correctly
implemented, the latter addition has little impact on the estimated coefficients for the various
assignment classes, though it does lead to some increase in the explanatory power of the model.
Again, tests for exclusion of all the spillover effects are insignificant, with p-values in the range
of 30-40%. Finally, in anticipation of the treatment effect models estimated below, the model
in column 6 allows for a differential treatment effect on response rates for people whose wages
are above or below the median for their occupation and pay unit. The estimation results in
column 6 suggest that the negative response effect of treatment assignment is very similar for
people with above-median wages (-4.04%) and below-median wages (-3.60%), and we cannot
reject a homogeneous effect. We also fit a variety of richer models allowing interactions between
wages and treatment status, and allowing a potential kink in the effect of wages at the median
of the pay unit. In none of these models could we reject the homogeneous effects specification
presented in column 5.
24We define faculty status based on job title in the directories. There is likely a small amount of misclassifi-
cation error in the determination of faculty status.
Overall, we conclude that all three of our experimental manipulations–assignment to the
information treatment, assignment to the placebo treatment, and assignment to the response
incentive–had significant effects on response rates to our follow-up survey. Although the nega-
tive effect of the information treatment on the response rate is modest in magnitude (about a
15 percent reduction in the likelihood of responding), it is highly statistically significant, and
poses a potential threat to the interpretation of our estimates of the effect of treatment, which
rely on data from survey respondents. Importantly, the effects of the information treatment
and placebo treatment on the response rate are very similar, suggesting that it was the nuisance
of being contacted twice that lowered the response rate of the treatment group, rather than the
content of the treatment email. Because the response rates in the treatment and placebo are
close, we can test whether the placebo group shows a similar pattern of effects as the treatment
group to probe for possible selection-biases.25
3.3 Summary Statistics and Comparisons by Treatment Status
Table 3 presents some comparisons between people who were assigned to receive our information
treatment and those who were not. For simplicity we refer to these two groups as the treatment
and control groups of the experiment.26 Beginning with our overall sample, the fractions of
employees classified as faculty and the fraction who can be matched to wage data are very similar
between the treatment and control groups. The third column of the table reports a t-test for
equality of the means for the two groups, taken from a linear regression model that also includes
campus effects (which control for the differential treatment rates at the three campuses). The
t-tests (clustered by department to reflect the stratified design) are not significant for either
variable. Next we focus on the subset of employees who can be matched to wage data. Base
earnings (which exclude over-time, extra payments, etc.) are slightly higher for the treatment
group than the control group (t= 2.04), but the gap in total earnings (which include over-
time and supplements like summer pay and housing allowances) is smaller and not significant.
Similarly, neither the fraction with total earnings less than $20,000 nor the fraction with total
earnings over $100,000 is significantly different between the two groups. As noted above,
however, the fraction of the treatment group who responded to our follow-up survey is about
25We may observe the same pattern of effects in the placebo and treatment even without sample selection
bias, for example if there is a priming effect. The placebo treatment will capture the overall effect from our
first-stage email absent the disclosure of the salary database.
26Here the control group includes the group of workers who received the placebo treatment.
3 percentage points lower than the rate for the controls, and the difference is highly significant
(t= 4.49). Finally, the bottom panel of Table 3 presents comparisons in our main analysis
sample, which consists of the 6,411 people who responded to our follow-up survey (with non-
missing responses for the key outcome variables) and can be matched to administrative salary
data. This sample is comprised of 85% staff and 15% faculty, with mean total earnings of
around $67,000. Data from the follow-up survey suggest that sample members are about 60%
female, and have relatively long tenure at the University and in their current position. None
of these characteristics are different between the treatment and control groups. Within the
analysis sample the probability of treatment is statistically unrelated to age, tenure at UC,
tenure at the current job position, gender, and wages.27
4 Empirical Results
4.1 Treatment Effect on Use of the Sacramento Bee Website
We now turn to our main analysis of the effects of the information treatment. Except in Section
4.4, we restrict attention to the subsample of survey respondents in our main analysis sample,
although we include some specifications that use a selection correction term derived from the
larger sample. We begin in Table 4a by estimating a series of linear probability models that
quantify the effect of our information treatment on use of the Sacramento Bee web site.28 The
mean rate of use reported by the control group is 19.2%. As shown by the model in column 1,
the information treatment more than doubles that rate (by 28 percentage points, to a mean rate of 48%). The
spillover effect of being in a department where other colleagues were informed of the treatment
(but not being directly informed) is very close to zero, and the estimated effect of treatment is
similar when we restrict the spillover effect to zero (column 2). This indicates that the spread
of information about the web site by word of mouth was limited.
In column 3 we include a dummy indicating whether the individual was offered a probabilistic
monetary response incentive. Recall that a random subset of individuals surveyed were offered
27We fit a logit for individual treatment status, including campus dummies (to reflect the design of the
experiment) and a set of 15 additional covariates: 3 dummies for age category, 4 dummies for tenure at the UC,
4 dummies for tenure in current position, a dummy for gender, and a cubic in total wages received from UC.
The p-value for exclusion of the 15 covariates is 0.74.
28All the models include controls for campus and faculty/staff status (fully interacted) as well as a cubic
polynomial in total individual pay. The faculty/staff and individual pay controls have no effect on the size of
the estimated treatment effect but do contribute to explanatory power.
the monetary incentive. The coefficient estimate for the treatment dummy is the same as in
column 2, and the coefficient on the incentive dummy is very close to 0.
Column 4 shows a model in which we add in demographic controls (gender, age dummies,
and dummies for tenure at the UC and tenure in current position). These controls have some
explanatory power (e.g., women are about 5 percentage points less likely to use the website than
men, with t = 4.3), but their addition has no impact on the effect of the information treatment.
Our theoretical framework suggests that there are potentially interesting interactions be-
tween the information treatment and an employee’s relative position in the wage structure of
his or her pay unit. As noted in Section 2.3.1, however, interpreting any differential response
to the treatment is complicated if people with different relative wages responded differently in
their use of the Sacramento Bee website. The models in columns 5 and 6 of Table 4a address
this potential complication. The specification in column 5 allows separate treatment effects
for people paid above or below the median for their pay unit. As in Table 2, we define pay
unit as the intersection of department and faculty-staff status. The estimated treatment effects
are very similar in magnitude and we easily accept the hypothesis of equal effects (p=0.75,
reported in bottom row of the table). The specification in column 6 allows a main effect for
treatment, and an interaction of treatment status with wage relative to the median wage in the
pay unit, with a potential kink in the interaction term when salary exceeds the median salary
in the pay unit. The interaction terms are very small in magnitude and again we easily accept
the hypothesis of a homogenous treatment effect at all relative salary levels (p=0.89). We have
fit many other interaction specifications and, consistent with the models in Table 4a, found
that the information treatment had a large and relatively homogeneous effect on the use of the
Sacramento Bee website.29 On balance, we believe the evidence is quite consistent with the
hypothesis that the information treatment had a homogeneous effect on the use of the web site.
Having shown that our information treatment increased the use of the salary web site, a
second interesting question is whose salary information the new users actually checked at the
site. We gathered information on the uses of the web site only in our UCLA survey. Specifically,
we asked whether people had looked at the pay of: (1) colleagues in their own department; (2)
29The estimated effect of treatment is a little larger at UCSC (33%, standard error = 5%) than at the other
two campuses (UCSD: 28%, standard error =2%; UCLA: 28%, standard error = 2%) but we cannot reject a
constant treatment effect (p=0.21). The estimated treatment effect is also somewhat larger for faculty (32%,
standard error 3%) than for staff (28%, standard error 2%), but again we cannot reject a constant effect at
conventional significance levels (p=0.23).
people in other departments at their campus; (3) colleagues at other UC campuses; (4) “high
profile” people like coaches, chancellors, and provosts. The answers to this question were not
mutually exclusive as respondents could pick more than one answer. Table 4b reports estimated
linear probability models (fit to the UCLA sample) for 6 alternative dependent variables.
The first, in column 1, is just a dummy for any use of the Sacramento Bee site. For
simplicity we show only two specifications: one with a single treatment effect, the second with
separate treatment effects for people with salaries above or below the median in their pay unit.
The results for this dependent variable mirror the results in Table 4a and show a large and
homogeneous treatment effect on use of the site. The second variable (column 2) is a dummy
equal to 1 if the individual reported using the site and reported looking up the salaries of
colleagues in his/her own department. Here the combined treatment effect is 24.1 percentage
points. Compared with the treatment effect of 27.6 percentage points for any use of the site, this estimate
suggests that among “new users” who were prompted to look at the site by our information
treatment, 87% (=24.1/27.6) examined the pay of colleagues in their own department. Columns
3-5 show similar models for using the web site and investigating colleagues in other departments
at the same campus, colleagues at other campuses, and high profile people. In all cases we find
relatively large and homogeneous effects of our information treatment.
Overall, the results in Table 4b confirm that people who were informed about the Sacramento Bee website by our treatment e-mail were very likely to use the site to look up the pay of their closest co-workers (defined as those in the same department). We take this as direct evidence that the department is a relevant unit for defining relative pay comparisons. This may also explain why we fail to find any spillover effects of the information treatment within departments: if workers primarily look up the pay of peers in their own department, they may be reluctant to mention it to those peers, for fear of being seen as invading their privacy.
4.2 Treatment Effect on Job and Salary Satisfaction and Mobility
4.2.1 Baseline Models
We turn now to models of the effect of the information treatment on various measures of an
employee’s satisfaction. Our surveys asked respondents a number of questions related to their
overall satisfaction with their pay and job, and whether they planned on making a serious effort
to look for a new job. The first is based on responses to the question: "How satisfied are you with your wage/salary on this job?". Respondents could choose one of four categories: "very satisfied", "somewhat satisfied", "not too satisfied" or "not at all satisfied". The second is based on responses to the question: "All in all, how satisfied are you with your job?". Respondents could choose among the same four categories as for wage satisfaction. The third is based on responses to the question: "Do you agree or disagree that your wage is set fairly in relation to others in your department/unit?". Respondents could choose "Strongly Agree", "Agree", "Disagree" or "Strongly Disagree". The fourth is based on responses to the question: "Taking everything into consideration, how likely is it you will make a genuine effort to find a new job within the next year?". Respondents could choose "very likely", "somewhat likely" or "not at all likely".
We report in Appendix Table A3 the distributions of responses to these questions among the
control and treatment groups of our analysis sample. We also show the distribution of responses
for the controls when they are reweighted across the three campuses to be directly comparable
to the treatment group. In general, UC employees are relatively happy with their jobs but less
satisfied with their wage or salary levels. Despite their professed job satisfaction, just over one-
half say they are somewhat likely or very likely to look for a new job next year. Close inspection
of the distributions of responses between the treatment and control groups of our experiment
reveal few large differences. Indeed, simple chi-square tests (which make no allowance for the
design effects in our sample) show the distributions of job satisfaction and job search intentions
are very similar (p=0.99 for job satisfaction, p=0.43 for search intentions) between the groups.
There is a clearer indication of a gap in wage satisfaction (which is somewhat lower for the
treatment group), for which the simple chi-square test is significant (p=0.05).
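As a rough illustration of the simple chi-square comparisons described above (which, as noted, make no allowance for the clustered design), one could cross-tabulate each four-category response by treatment status. This is a minimal sketch with placeholder column names, not the authors' code.

import pandas as pd
from scipy.stats import chi2_contingency

def simple_response_test(df: pd.DataFrame, outcome: str):
    # Cross-tabulate the four response categories by treatment status and run an
    # ordinary chi-square test of independence, ignoring design effects.
    table = pd.crosstab(df[outcome], df["treated"])
    stat, p_value, dof, _expected = chi2_contingency(table)
    return stat, p_value, dof

# Hypothetical usage:
# simple_response_test(survey, "job_satisfaction")   # p of about 0.99 in the paper
# simple_response_test(survey, "wage_satisfaction")  # p of about 0.05 in the paper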
For much of the subsequent analysis we consider three dependent variables. In order to
simplify the presentation of results, and to improve precision, we combine wage satisfaction,
job satisfaction, and wage fairness into a single index by taking the simple average of these
measures.30 The resulting variable, which we call the satisfaction index, is interpretable as a general measure of work satisfaction. The index has a ten-point scale, with higher values
indicating the respondent is more satisfied based on the three underlying measures.31 The
30Specifically, for the three variables we assign numerical scores 1-4. For wage and job satisfaction 4 represents
“very satisfied” and 1 represents “not at all satisfied.” For wage fairness 4 represents “strongly agree” and 1
represents “strongly disagree”. We then take the average of these responses. We have experimented with
different ways of combining these variables, for example taking the first principal component of these variables,
and the estimates are not sensitive to these alternatives.
31We show the results of the baseline models for each of the sub-components in Appendix Table A4.
second outcome variable in the main analysis is the job search intention measure described above, where larger values mean that the respondent reports being more likely to look for a new job. The third outcome is a binary variable indicating that the respondent is dissatisfied and is looking for a new job: it takes the value 1 if the respondent is dissatisfied (below the median on the satisfaction index) and responds "very likely" to the job search intentions question, and 0 otherwise. We treat these outcome variables as arbitrarily
scaled responses from a single latent index of satisfaction, and assume that the unobserved
components of satisfaction are normally distributed, implying an ordered probit response model
for each measure (the binary outcome reduces to a probit).
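The construction of the satisfaction index and the ordered probit specification can be sketched as follows. This is only an illustration of the approach described in the text: the item and covariate names are placeholders, and we assume each survey item has already been coded 1-4 with 4 the most satisfied (or "strongly agree") response.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def build_satisfaction_index(df: pd.DataFrame) -> pd.DataFrame:
    # Average the three 1-4 items (wage satisfaction, job satisfaction, wage
    # fairness); the average takes ten distinct values between 1 and 4.
    df = df.copy()
    df["satisfaction_index"] = df[["wage_sat", "job_sat", "wage_fair"]].mean(axis=1)
    return df

def fit_ordered_probit(df: pd.DataFrame, outcome: str, rhs_cols):
    # Treat the outcome as an arbitrarily scaled response from a latent
    # satisfaction index with normally distributed errors (ordered probit).
    model = OrderedModel(df[outcome], df[rhs_cols], distr="probit")
    return model.fit(method="bfgs", disp=False)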
Tables 5 and 6 present estimates of a series of ordered probit models for these three measures.
The models in Table 5 follow the specification of equation (7) and include treatment effects
interacted with whether the individual is paid above or below the median for his/her unit. We
begin with the basic models in columns 1, 5, and 9 which include only a simple treatment
dummy. The estimated treatment effects from this simple specification are either insignificant
or only borderline significant. The point estimate for the satisfaction index is negative (t= 1),
the point estimate for search intentions is positive (t= 1.2), and the point estimate for the
combined variable (dissatisfied and likely looking for a new job) is positive (t = 1.7). These estimates suggest a negative, though imprecisely estimated, average impact on satisfaction. All the
models include controls for a cubic in individual wage, interacted with campus and occupation
(staff/faculty). The coefficients on these controls (not reported in the table) indicate that in
the range of observed wages, higher wages are associated with higher job and wage satisfaction,
and lower probability of looking for a new job.
Allowing for differential treatment effects for those with below-median and above-median
wages (columns 2, 6, 10 of Table 5) indicates that the small average effect masks a larger negative
impact on satisfaction for those with below-median wages, coupled with a zero or very weak positive effect for those with above-median wages. For workers whose salaries are below the median in their unit
and occupation, the point estimate for the satisfaction index is negative (t= 2.1), the point
estimate for search intentions is positive (t= 2.6), and the point estimate for the combined
binary variable is positive (t= 3). For workers earning more than the unit and occupation
median the treatment effect is insignificant in all cases.32 The table shows the difference in
32We have also estimated restrictive models (not shown here) that assume no treatment effect on above-median workers; these fit as well as models that allow an effect on this group, and show a pattern of negative treatment effects on job satisfaction and positive effects on search intentions.
the estimated treatment effect between above- and below-median workers, which is statistically significant for all three models at the five percent level. The estimates in columns 3,
7, and 11 of Table 5 include a set of demographic controls and are qualitatively very similar.33
As an initial probe of the robustness of our inferences to potential selection bias, we fit selection-correction models that exploit the random assignment of the incentive we introduced to raise response rates, as well as the random assignment of the placebo, which reduced response rates. In column 4 we present estimates from a Heckit model
for the satisfaction index outcome where in the first stage we estimate a probit model using
the explanatory variables from model 2 in Table 5 and dummies for whether the respondent
was randomly assigned to the response incentive or placebo groups, the latter two of which are
excluded from the second stage.34 In columns 8 and 12, we report selection-corrected max-
imum likelihood estimates for the ordered probit model for the job search and the combined
satisfaction/job search variables (the latter, a binary variable, again reduces to a probit) using
the same exclusion restrictions. While none of these models shows a significant correlation between the errors in the outcome and participation equations, we place little weight on these estimates because we have found that they are not generally robust to the choice of exclusion restrictions (both placebo and response incentive, versus just the placebo, versus just the incentive) or estimation procedure (two-step versus maximum likelihood). We therefore postpone drawing conclusions about the extent of selection bias until we discuss the placebo experiment, which we view as our strongest test.
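A minimal two-step version of the selection correction described above might look like the sketch below, assuming a data frame with a response indicator and the randomly assigned incentive and placebo dummies that serve as excluded instruments. It is only illustrative: the formulas and column names are hypothetical, and the second-step standard errors shown here are not adjusted for the generated regressor, unlike a full Heckman two-step procedure.

import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

def heckit_two_step(df, outcome_formula: str, selection_formula: str):
    # Step 1: probit for survey response; the response-incentive and placebo
    # dummies appear here but are excluded from the outcome equation.
    first = smf.probit(selection_formula, data=df).fit(disp=False)
    df = df.copy()
    linear_index = first.fittedvalues  # X'b from the response probit
    imr = pd.Series(norm.pdf(linear_index) / norm.cdf(linear_index),
                    index=linear_index.index)
    df["inv_mills"] = imr
    # Step 2: outcome equation on respondents only, adding the inverse Mills ratio.
    respondents = df[df["responded"] == 1]
    second = smf.ols(outcome_formula + " + inv_mills", data=respondents).fit(
        cov_type="cluster", cov_kwds={"groups": respondents["campus_dept"]}
    )
    return first, second

# Hypothetical call:
# heckit_two_step(survey,
#     "satisfaction_index ~ treat_below + treat_above + below_median + wage",
#     "responded ~ treat_below + treat_above + below_median + wage + incentive + placebo")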
Overall, the findings in Table 5 are more consistent with a model in which relative pay
comparisons play a direct role in workers' utility than with a model in which workers learn about future pay opportunities from the salaries of co-workers. The negative impact of information on
below-median workers coupled with the absence of any positive effect for above-median workers
is consistent with inequality aversion.
The specifications in Table 6 are based directly on a piece-wise linear variant of inequality
aversion and follow the specification of equation (8). The specifications in columns 1, 5, and 9
include an interaction of treatment with an individual’s wage relative to the pay unit median
33We postpone discussion of the magnitude of the effects until we present the estimates from Table 6.
34Because the satisfaction index has a ten-point scale, estimating an ordered probit maximum likelihood selection model was not possible.
for workers who are below the median, and a separate interaction for workers who are above
the median, thus allowing a kink in the treatment response function at the median wage of
the pay unit.35 These models suggest a negative information treatment effect on the lowest-wage individuals for all outcomes. The pattern of estimates confirms the non-linearity in the
interaction between the treatment effect and relative wages suggested by Table 5: higher wages
reduce the negative effect on satisfaction of the information treatment for those whose wage is
less than the median of their pay unit. Once an individual’s wage exceeds the median for his or
her unit, there is no additional effect. Across all models reported in Table 6, we cannot reject
that the treatment response function is zero when the wage exceeds the pay unit median.
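Constructing the piece-wise linear terms behind these specifications is mechanical. The sketch below is one way to build them, assuming a data frame with wage, pay-unit, and treatment columns whose names are ours, not the authors'.

import numpy as np
import pandas as pd

def add_kink_terms(df: pd.DataFrame) -> pd.DataFrame:
    # Deviation of own wage from the pay-unit median, split at zero so that the
    # treatment response function can have a kink at the median (equation (8)).
    df = df.copy()
    median_wage = df.groupby("pay_unit")["wage"].transform("median")
    deviation = df["wage"] - median_wage
    df["dev_neg"] = np.minimum(deviation, 0.0)   # nonzero only below the median
    df["dev_pos"] = np.maximum(deviation, 0.0)   # nonzero only above the median
    df["below_median"] = (deviation < 0).astype(int)
    # Interactions with the treatment dummy enter the ordered probit directly.
    df["treat_dev_neg"] = df["treated"] * df["dev_neg"]
    df["treat_dev_pos"] = df["treated"] * df["dev_pos"]
    return df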
To get a sense of the magnitude of these effects, we can relate the size of the treatment effect to the loss of income that would produce the same decline in the latent satisfaction index in the control group. We fit some simple ordered probit models to the control group relating each of the dependent variables to the wage and a set of demographic and campus controls.36 These models show that in the control group an additional $10,000 is associated with a shift in the latent index of 4.7 for the satisfaction index. This implies that the effect of being $10,000 closer to the median in the treatment group is equivalent to about $5,500 in extra income in the control group. Because of incomplete compliance, discussed in Section 2.3.1, this estimate needs to be inflated. However, it is not obvious what the appropriate inflation factor should be. This is a case where the treatment effect is likely to be heterogeneous, since the effect of peer salary on workers' utility is likely to vary significantly across the population, and this heterogeneity could be related to the propensity to look at the SacBee website. A conservative range is that the effect of being $10,000 closer to the median in the treatment group is equivalent to between $5,500 and $22,000 in extra income in the control group. We view these as large effects.
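The dollar-equivalence calculation above can be reproduced with simple arithmetic. The sketch below treats both slopes as being expressed per $10,000 of salary on the latent-index scale, and the compliance adjustment shown is purely illustrative of how an upper bound of roughly $22,000 could arise; it is not the authors' exact procedure.

# Back-of-the-envelope version of the calculation in the text.
treat_slope_below = 2.6    # treatment x (wage - median) interaction below the median (Table 6, col. 1)
control_wage_slope = 4.7   # latent-index gain per additional $10,000 of own pay, control group

itt_dollar_equiv = 10_000 * treat_slope_below / control_wage_slope
print(round(itt_dollar_equiv))         # roughly 5,500: intention-to-treat equivalent

# Rescaling for incomplete compliance inflates this figure; for example, dividing
# by an assumed first-stage effect of about 0.25 on SacBee use pushes the
# equivalent toward the $22,000 upper end of the range reported in the text.
print(round(itt_dollar_equiv / 0.25))  # roughly 22,000 (illustrative scaling only)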
In Table 6 we further explore the effect of the information treatment based on wage rank
rather than levels. The motivation for this specification is that it is possible in principle that
ordinal rank matters more for relative utility considerations than absolute salary differences, as
has been suggested in the psychology literature (Parducci, 1995). In these models we replace
the relative wage in levels with the percentile rank in pay unit (expressed so median = 0) in
35We present in appendix Table A5 specifications that also include a main treatment effect, with similar
results.
36In addition to wage, the explanatory variables are dummies for gender, age, tenure, time in position and
controls for campus interacted with faculty/staff.
the interaction terms. For the satisfaction and the satisfaction/search outcomes, rank shows
a more pronounced effect than the model based on levels (specifications 2 and 10). When we
estimate models with both rank and levels, rank wins the “horse race” for these two outcomes.
Specifically, the interaction of treatment and rank is significant for below-median workers, and once this interaction is in the model the interaction of treatment and the relative wage level is no longer significant (specifications 3, 7, and 11).37 For the job search variable we lack the precision to distinguish between the two.
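The rank-based interaction terms can be built analogously to the wage-deviation terms, for example with a within-unit percentile rank centered at the median. The sketch below again uses placeholder column names and is only one possible implementation of the idea described in the text.

import pandas as pd

def add_rank_terms(df: pd.DataFrame) -> pd.DataFrame:
    # Percentile rank of own wage within the pay unit, centered so that the
    # median worker is at zero, split into negative and positive parts.
    df = df.copy()
    df["rank_dev"] = df.groupby("pay_unit")["wage"].rank(pct=True) - 0.5
    df["rank_dev_neg"] = df["rank_dev"].clip(upper=0.0)   # below-median part
    df["rank_dev_pos"] = df["rank_dev"].clip(lower=0.0)   # above-median part
    df["treat_rank_neg"] = df["treated"] * df["rank_dev_neg"]
    df["treat_rank_pos"] = df["treated"] * df["rank_dev_pos"]
    return df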
In columns 4, 8, and 12 of Table 6 we fit selection-correction models using the approach
from Table 5. We fit these to the models based on the rank order of the respondent, but the
results are similar for levels. We find no evidence of selection bias as the correlations between
the outcome and participation errors are small and insignificant in all cases.
We have also looked at whether the effects vary depending on whether we sent the follow-up
survey 3 days or 10 days after treatment. We find robust evidence that the estimates are
stronger for surveys sent 10 days after the treatment than for those sent 3 days after, with similar rates of SacBee
use for both groups (results available upon request). This is suggestive evidence that these
effects are not entirely transitory.
4.2.2 Effects by Subgroups
We have estimated models allowing the treatment effects to vary by gender, faculty/staff status,
and length of tenure, shown in Appendix Table A6 (same model as in Table 5). Although both
men and women express the same elevated dissatisfaction following the information treatment,
women appear more inclined to report that they are searching for a new job following treatment.
Among low-paid respondents, those with low tenure respond to the treatment with elevated job search intentions while those with high tenure do not.38 Staff appear to be more responsive than
faculty to the treatment on both dissatisfaction and job search, but the relatively small number
of faculty limits our ability to make precise comparisons.
We have also explored models in which we examine effects using the full campus (instead
37Models where we have added a treatment main-effect (Appendix Table A5) also show that the rank variable
appears to be more significant in the treatment response than relative wage levels.
We have also estimated models where wages are measured in logs. Estimates (not reported, but available
upon request) are qualitatively similar to the ones in Table 6.
38This is not surprising as very few UC employees with long tenure change jobs. We use this feature to test
that responses to job search are truthful (and not cheap talk due to wage dissatisfaction). In Appendix Table A7,
we show that treatment effects on job search are present only in the group of more mobile workers as predicted
by age, tenure, time in position, sex, faculty/staff status, and campus (estimated from the control group).
of the department) as the peer unit, while always maintaining the staff/faculty distinction. The
results are presented in Table 7. Interestingly, faculty experience a very strong and significant
treatment response on the satisfaction index both below median, where satisfaction drops, and
above median, where satisfaction increases (column 1). The difference between below and above
median is highly significant (t=3.6). In contrast, staff show only weak dissatisfaction below the median and no effect above the median (column 2). The same qualitative results are found for job
search (columns 3 and 4) and the satisfaction/job search index (columns 5 and 6), although the
results are not as significant for faculty. This suggests that the kink effect in relative pay utility
might not be present for faculty. From our own experience, and likely that of many readers, it is perhaps not surprising that faculty in departments where pay is relatively low (such as the humanities) would feel upset when they discover the true pay gap with departments where pay is relatively high (such as economics, business, or law). Conversely, faculty in high-pay departments who feel underpaid relative to their department colleagues might take some solace in seeing that they are still much better paid than faculty in low-pay departments. Overall, these results suggest that the relevant comparison group for faculty might be campus-wide colleagues, while the department might be the relevant comparison group for staff.
4.2.3 Effects of the Placebo Treatment
While our randomized research design provides a strong basis for inferences about the effects
of an information treatment, there may be a concern that the interpretation of the measured
treatment effects is flawed. For example, it is conceivable that receiving the first-stage email about research on inequality at UC campuses could have reduced the job satisfaction of relatively low-paid employees, independently of the information they obtained from the Sacramento Bee. Such effects are known in the psychology literature as "priming effects." Another possible concern, which we have already discussed, is the lower response rate in the treatment group, which may introduce selection bias.
One simple way to address these concerns is to fit the same types of models used in Tables
5 and 6, using the placebo treatment instead of our real information treatment. The placebo
experiment is subject to the same set of potential biases as the treatment. Because the placebo
reduced the response rate to our survey by the same magnitude as the treatment, we should observe a pattern of estimates similar to that for the treatment if the effects are due to selection bias.
If the effects are due to priming, we should see the same pattern of effects in the placebo group.
The wording of the placebo treatment email closely follows the wording of our main informa-
tion treatment, and was as follows: "We are Professors of Economics at Princeton University
and Cal Berkeley conducting a research project on pay inequality and job satisfaction at the
University of California. The University of California, Office of the President (UCOP) has
launched a web site listing the individual salaries of all the top administrators on the UC cam-
puses. The listing is posted at [...]. As part of our research project, we wanted to ask you: Did
you know that UCOP had posted this top management pay information online?”.
This treatment was only administered at UCLA, and was randomly assigned to three quarters
of people in a random one-quarter of departments (see Table 1). To analyze the effects of the
placebo treatment, we use all individuals who were not assigned to the information treatment
at the UCLA campus (i.e., the UCLA “control group”), distinguishing within this subsample
of 1,880 people between those who were assigned the placebo treatment (N=503) and those
who were not (N=1,377). As a first step we analyzed the effect of the placebo treatment on use of the Sacramento Bee website. Among the placebo treatment group the rate of use of the website
was 25.6%, while the rate for the remainder of the controls was 23.8%. The gap is small and
insignificant (t= 0.6 accounting for the clustered design). We also fit various models similar
to the ones in Table 4 and found no indication that the placebo treatment had any effect on
use of the Sacramento Bee site.
We then fit the models summarized in Table 8, which relate the placebo treatment to our
three outcome measures. For each outcome we show two estimates: the baseline specification
that interacts the treatment dummy with indicators for wages above or below the median of the
pay unit for the UCLA sample (excluding the placebo group) and a specification that interacts
the placebo dummy with indicators for whether the respondent’s earnings are above or below
the median in his/her pay unit (excluding the treatment group). In the third column of each panel, we show p-values for the test that the parameters of the information treatment model are equal to those of the placebo model.
These results suggest that the systematic pattern of estimates in Table 5 is not an artifact of priming effects or selection biases arising from our earlier email contact with the treatment
group. Among the UCLA treatment group, the pattern of estimates is very similar to the
pattern in Table 5 for all three campuses, though less precise because of the smaller sample.
The low-earnings group receiving our email informing them of the Sacramento Bee database has lower satisfaction, is more likely to report searching for a job, and is more likely to be dissatisfied and searching for a job, relative both to the control group and to the high-earnings group. By contrast, for the low-earnings group receiving the placebo email, we do not observe significant effects in any of these dimensions. Indeed, the point estimates show the opposite pattern. For all three outcomes, we can reject at the 10% level the hypothesis that the interaction of treatment with being below the pay-unit median is equal to the corresponding placebo interaction.
4.3 The Effect of Peer Salary Disclosure on Perceptions of Inequality
In addition to our basic questions on wage and job satisfaction, and job search intentions,
we asked a more general question on overall inequality in the United States. Specifically,
respondents were asked to what extent they agreed or disagreed that “Differences in income
in America are too large” (with the same 4-point scale). UC employees appear to be in
nearly unanimous agreement with this statement: 38% of our sample agreed and 48% strongly
agreed, while only 11% disagreed and 2% strongly disagreed. Table 9 reports estimates for this
dependent variable. Here our results suggest that the response to the information treatment is
homogenous: people who were informed of the Sacramento Bee website express a significantly
higher rate of agreement with the statement, regardless of their relative wage position (t= 1.92
without demographic controls and t= 1.85 with controls). Columns 3 and 4 include the
interaction of treatment with whether the respondent is paid below median in their pay unit.
Unlike for the other dependent variables we have considered, this interaction term is not significantly different from zero and, if anything, the effects are larger for higher earners.
Information about peer salary thus appears to increase concerns about nationwide income inequality; because the effects are, if anything, larger for higher earners, they are likely driven by fairness concerns rather than envy. Overall, these findings suggest that learning about pay disparity can have significant impacts on concerns about inequality. In principle, this could ultimately have effects on voting behavior.
4.4 Effects on Actual Turnover in the Medium-Run
One important limitation of our study is that we are constrained to self-reported outcomes,
raising the question as to whether the estimated effects translate into changes in economic
behavior, such as quits or changes in earnings. We have attempted to estimate whether
there is a treatment effect on job turnover more than two years after we sent the first treatment.
We linked our original sample to online directories for the three campuses as of March 2011 to assess whether there is a relationship between treatment status and whether individuals were still at their campus at that date. This exercise is summarized in Table 10. Reassuringly, the analysis shows that our outcome variable for job search intentions is highly predictive of whether someone left the university in the intervening two years. However, we do not find differences
in the probability of leaving by treatment status. It is impossible to know whether this lack of an
effect reflects a treatment effect that was transitory, or whether there were too many confounders
to be able to detect any differences. There are several important potential confounders. First,
information about the Sacramento Bee website (as well as other subsequent websites) has been
diffusing over time so that the treatment effect of our initial intervention has become diluted.
Second, we no longer have a clean control group: to measure first-stage compliance, at the end of our original survey we asked the control group whether they were aware of the Sacramento Bee website, thereby informing them of its existence. Third, our survey took place during a severe recession with a high
unemployment rate in the state of California making it difficult for UC workers to quit their
jobs. Because of these challenges, we view the development of research designs to estimate the
longer-term effects of salary disclosure on behavior as a promising path for future research.
5 Conclusion
We evaluate the effects of our information treatment on employees’ satisfaction and on their job
search intentions. We find that the information treatment has a negative effect on people paid below the median for their unit and occupation, with no effect on more highly paid individuals. For workers below the median, there is a relationship between the treatment response in satisfaction and distance from the median, and this relationship appears to be generally stronger for wage rank than for the relative wage level. These patterns are consistent with inequality aversion in preferences, which imposes a utility cost of having wages below the median of the
appropriate comparison unit, but no reward for having wages above the median. Overall, our
results support previous observational studies and many laboratory experiments on relative income. Our evidence also suggests that access to information about pay disparity at the workplace increases concerns about both pay-setting fairness and nationwide inequality. We have only a very limited window on the effects of salary disclosure on long-term changes in economic behavior. Finding ways to estimate these longer-term effects through experimental
or quasi-experimental research designs is a promising path for future research.
In terms of workplace policies, our findings indicate that employers have a strong incentive
to impose pay secrecy rules. Forcing employers to disclose the salary of all workers would
result in a decline in aggregate utility for employees, holding salaries constant. However, it is possible that mandated disclosure of all workers' salaries would ultimately induce an endogenous change in wages and in the mix of workers, affecting the distribution of wages as in the models of Frank (1984), Bewley (1999), or Bartling and von Siemens (2010b).
References
Acemoglu, Daron and David Autor (2011) “Skills, Tasks and Technologies: Implications
for Employment and Earnings.” in Orley Ashenfelter and David Card, eds., Handbook of Labor
Economics, Volume 4, Amsterdam: Elsevier-North Holland.
Akerlof, George, and Janet Yellen (1990). “The Fair-wage Effort Hypothesis and Unem-
ployment,” Quarterly Journal of Economics 105(2), 255-84.
Bartling, Bjorn and Ferdinand von Siemens (2010). “Wage Inequality and Team Produc-
tion: An Experimental Analysis,” forthcoming Journal of Economic Psychology.
Bartling, Bjorn and Ferdinand von Siemens (2010b). “The Intensity of Incentives in
Firms and Markets: Moral Hazard with Envious Agents,” Labour Economics 17 , 598-607.
Bewley, Truman (1999). Why Wages Don’t Fall during a Recession. Cambridge, MA: Harvard
University Press.
Boskin, Michael and Eytan Sheshinski (1978). “Optimal Redistributive Taxation When
Individual Welfare Depends Upon Relative Income.” Quarterly Journal of Economics 92(4),
589-601.
Brown, Gordon, Gardner, Jonathan, Oswald, Andrew, and Jing Qian (2008). “Does
Wage Rank Affect Employees' Wellbeing?" Industrial Relations 47, 355-389.
Charness, Gary (1999). “Optimal Contracts, Adverse Selection, and Social Preferences: an
Experiment.” manuscript, University of California, Santa Barbara.
Charness, Gary, and Peter Kuhn (2007). “Does Pay Inequality Affect Worker Effort?
Experimental Evidence.” Journal of Labor Economics 23(4): 693–724.
Charness, Gary, and Matthew Rabin (2002). “Understanding Social Preferences with
Simple Tests,” Quarterly Journal of Economics 117(3), 817–869.
Chetty, Raj, Adam Looney and Kory Kroft (2009). “Salience and Taxation: Theory and
Evidence” American Economic Review 99(4): 1145-1177.
Chetty, Raj and Emmanuel Saez (2009) “Teaching the Tax Code: Earnings Responses to
an Experiment with EITC Recipients,” NBER Working Paper No. 14836.
Clark, Andrew E., Kristensen, Nicolai, and Niels Westergard-Nielsen (2009). “Job
Satisfaction and Co-worker Wages: Status or Signal?” Economic Journal 119, 430-447.
Clark, Andrew, E., Masclet, David, and Marie Claire Villeval (2010). “Effort and Com-
parison Income: Experimental and Survey Evidence,” Industrial and Labor Relations Review
63, 407-426.
Clark, Andrew E. and Andrew J. Oswald (1996). “Satisfaction and Comparison Income,”
Journal of Public Economics 61(3), 359-381.
Danziger, Leif, and Eliakim Katz (1997). “Wage Secrecy as a Social Convention,” Economic
Inquiry 35, 59-69.
Duflo, Esther and Emmanuel Saez (2003) “The Role of Information and Social Interactions
in Retirement Plan Decisions: Evidence from a Randomized Experiment," Quarterly Journal
of Economics 118(3), 815-842.
Duesenberry, James S. (1949). Income, Saving and the Theory of Consumer Behavior.
Cambridge, MA: Harvard University Press.
Easterlin, Richard A. (1974). “Does Economic Growth Improve the Human Lot? Some
Empirical Evidence.” In Nations and Households in Economic Growth: Essays in Honor of
Moses Abramowitz, edited by P.A. David and M.W. Reder. New York: Academic Press.
Fehr, Ernst, and Armin Falk (1999). “Wage Rigidity in a Competitive Incomplete Market,”
Journal of Political Economy 107(1), 106–34.
Fehr, Ernst, Erich Kirchler, Andreas Weichbold, and Simon Gachter (1998). “When
Social Norms Overpower Competition: Gift Exchange in Experimental Labor Markets,” Journal
of Labor Economics 16(2), 324–51.
Fehr, Ernst, Georg Kirchsteiger, and Arno Riedl (1993). “Does Fairness Prevent Market
Clearing? An Experimental Investigation,” Quarterly Journal of Economics 108(2), 437–59.
Fehr, Ernst, and Klaus M. Schmidt (1999). “A Theory of Fairness, Competition, and
Cooperation.” Quarterly Journal of Economics 114(August), 817–868.
Fliessbach, K., Weber, B., Trautner, P., Dohmen, T., Sunde, U., Elger, C., and
Falk, A. (2007). “Social comparison affects reward-related brain activity in the human ventral
striatum,” Science 318, 1305-1308.
Frank, Robert H. (1984). “Are Workers Paid their Marginal Products?” American Economic
Review 74(4), 549-571.
Frey, Bruno S., and Alois Stutzer (2002). “What Can Economists Learn from Happiness
Research?” Journal of Economic Literature 40, pp. 402–435.
Fuhrer, Jeff and George Moore (1995). “Inflation Persistence.” Quarterly Journal of
Economics 110 (February), 127-159.
Futrell, Charles M. (1978). “Effects of Pay Disclosure on Satisfaction for Sales Managers: A
Longitudinal Study.” Academy of Management Journal 21 (March), 140-144.
Galizzi, Monica and Kevin Lang (1998). “Relative Wages, Wage Growth and Quit Behav-
ior,” Journal of Labor Economics 16, 367-391.
Grossman, Gene M. and Elhanan Helpman (2007). “Fair Wages and Foreign Sourcing.”
Harvard University Department of Economics Working Paper, January.
Hamermesh, Daniel (1975). “Interdependence in the Labor Market,” Economica 42, 420–29.
Hamermesh, Daniel S. (2001). “The Changing Distribution of Job Satisfaction.” Journal of
Human Resources 36 (Winter), pp. 1-30.
Hastings, Justine and Jeffrey Weinstein (2009). “Information, School Choice, and Aca-
demic Achievement: Evidence from Two Experiments.” Quarterly Journal of Economics,
124(4), 1373-1414.
Hirschman, Albert O., and Michael Rothschild (1973). "The Changing Tolerance for
Income Inequality in the Course of Economic Development,” Quarterly Journal of Economics
87(4), 544-566.
Jensen, Robert (2010). “The (Perceived) Returns to Education and the Demand for School-
ing,” Quarterly Journal of Economics 125(2), 515-548.
Kahneman, Daniel, and Amos Tversky (1979). “Prospect Theory: An Analysis of Decision
under Risk.” Econometrica 47(2), pp. 263–91.
Katz, Lawrence, and David Autor (1999). “Changes in the Wage Structure and Earnings
Inequality,” Handbook of Labor Economics, O. Ashenfelter and D. Card, eds., Amsterdam:
North-Holland, Volume 3A.
Kling, Jeffrey, Sendhil Mullainathan, Eldar Shafir, Lee Vermeulen, Marian V.
Wrobel (2008). “Misperceived Prices: Medicare Drug Plan Choice,” unpublished manuscript.
Kuhn, Peter, Peter Kooreman, Adriaan R. Soetevent, and Arie Kapteyn (2011).
“The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch
Postcode Lottery,” forthcoming, American Economic Review.
Kuziemko, Ilyana, Ryan Buell, Taly Reich, and Michael Norton (2010). “Last-place
Aversion: Evidence and Redistributive Implications”, manuscript, Princeton University.
Kwon, Illoong and Eva M. Milgrom (2011). "Status in the Workplace: Evidence from
M&A”, forthcoming Economic Journal.
Lawler, Edward E. (1965). “Managers’ Perceptions of Their Subordinates’ Pay and of Their
Superiors’ Pay.” Personnel Psychology 18, 413-422.
Luttmer, Erzo (2005). “Neighbors as Negatives: Relative Earnings and Well-Being,” Quar-
terly Journal of Economics 120(3), pp. 963–1002.
Lydon, Reamonn and Arnaud Chevalier (2002). “Estimates of the Effect of Wages on Job
Satisfaction.” London School of Economics Center for Economic Performance Working Paper.
Manning, Michael R. and Bruce J. Avolio (1985). "The Impact of Blatant Pay Disclosure
in a University Environment.” Research in Higher Education 23 (2), pp. 135-149.
Marmot, Michael (2004). The Status Syndrome: How Social Standing Affects Our Health
and Longevity. Times Book, New York.
Parducci, Allen (1995). Happiness, Pleasure, and Judgment: The Contextual Theory and its
Applications. Mahwah, NJ: Erlbaum.
Rabin, Matthew (1998). “Psychology and Economics,” Journal of Economic Literature 36(1),
11–46.
Senik, Claudia (2004). “When Information Dominates Comparison. Learning from Russian
Subjective Panel Data,” Journal of Public Economics 88, 2099-2133.
Smith, Adam (1759). The Theory of Moral Sentiments. Glasgow, Scotland.
Solnick, Sara J. and David Hemenway (1998). "Is More Always Better?: A Survey on
Positional Concerns,” Journal of Economic Behavior and Organization 37(3), 373-383.
Stevenson, Betsey and Justin Wolfers (2008). “Economic Growth and Subjective Well-
Being: Re-assessing the Easterlin Paradox,” Brookings Papers on Economic Activity, Spring.
Veblen, Thorstein (1899). The Theory of the Leisure Class. Macmillan Company, New York.
Table 1: Design of the Information Experiment

UC Santa Cruz (N=3,606 in 223 departments or administrative units)
Information treatment assignment: 66.7% of departments assigned; 60% of individuals in treated departments assigned; target = 40% of individuals; actual = 42.0%.
Placebo assignment: none.
Response incentive assignment: 33% of departments assigned to 100% incentive (all receive incentive); 33% of departments assigned to 50% incentive (one-half receive incentive); 33% of departments assigned to no incentive (none receive incentive); target = 50% of individuals; actual = 49.3%.

UC San Diego (N=17,857 in 410 departments or administrative units)
Information treatment assignment: 50% of departments assigned; 50% of individuals in treated departments assigned; target = 25% of individuals; actual = 23.9%.
Placebo assignment: none.
Response incentive assignment: 33% of departments assigned to 100% incentive (all receive incentive); 33% of departments assigned to 50% incentive (one-half receive incentive); 33% of departments assigned to no incentive (none receive incentive); target = 50% of individuals; actual = 55.0%.

UCLA (N=20,512 in 445 departments or administrative units)
Information treatment assignment: 50% of departments assigned; 75% of individuals in treated departments assigned; target = 37.5% of individuals; actual = 36.4%.
Placebo assignment: 25% of departments assigned; 75% of individuals in placebo departments assigned; target = 18.8% of individuals; actual = 21.9%.
Response incentive assignment: all individuals receive incentive.

All three campuses (N=41,975 in 1,078 departments or administrative units)
Information treatment assignment: target = 32.4% of individuals; actual = 31.6%.
Placebo assignment: target = 9.2% of individuals; actual = 10.7%.
Response incentive assignment: target = 74.4% of individuals; actual = 76.5%.

Notes: Assignment was based on name/email and department information contained in online directories. Sample sizes reflect the number of valid email addresses extracted from the directories. See text for the procedures used to define departments/administrative units. The response incentive assignment offered the opportunity to win $1,000 (from a random lottery with 3 winners for each campus) for survey respondents. The information treatment assignment and the response incentive assignment were orthogonal. Placebo treatment departments were chosen among control departments which did not receive the information treatment.
Table 2: Linear Probability Models for Survey Response

Columns 1-2: overall sample (N=41,975); columns 3-6: subsample matched to wage data (N=31,887).

All coefficients × 100 (1) (2) (3) (4) (5) (6)
Dummy if match to wage 3.37 3.37 -- -- -- --
(0.58) (0.58)
Treatment Effects:
Treated individual (all in treated departments) -3.53 -3.81 -3.38 -3.47 -3.82 --
(0.70) (0.54) (0.79) (0.78) (0.61)
Untreated individual in treated department 0.45 0.00 0.48 0.39 0.00 0.00
(0.82) -- (0.92) (0.91) -- --
Placebo individual (all in placebo departments) -5.10 -5.46 -5.49 -5.41 -5.89 -5.90
(1.05) (0.88) (1.20) (1.17) (1.01) (1.01)
Untreated individual in placebo department 1.71 0.00 2.79 2.91 0.00 0.00
(1.55) -- (1.49) (1.47) -- --
Response Incentive Effects:
Offered prize in 100% incentive department 4.37 4.25 4.57 4.43 4.23 4.24
(0.99) (0.75) (1.11) (1.10) (0.86) (0.86)
Offered prize in 50% incentive department 3.82 4.25 3.14 3.10 4.23 4.24
(1.18) -- (1.38) (1.36) -- --
Not offered prize in 50% incentive department -0.15 0.00 -0.52 -0.55 0.00 0.00
department (1.29) -- (1.43) (1.46) -- --
Treatment Effects Based on Relative Wage:
Treated individual with wage less than median -- -- -- -- -- -3.60
in pay unit (0.79)
Treated individual with wage greater than -- -- -- -- -- -4.04
median in pay unit (0.81)
Dummy if wage greater than median in pay -- -- -- -- -- -0.73
unit (0.75)
Cubic in wage? no no no yes yes yes
P-value for test: only individual treatment or 0.85 -- 0.36 0.32 -- --
incentive status matters (4 degrees of freedom)
Notes: Standard errors, clustered by campus/department, are in parentheses (1,078 clusters for models in columns 1-2; 1,044 for columns 3-6). The dependent variable in all models is a dummy for responding to the survey (mean = 0.204 for columns 1-2; mean = 0.214 for columns 3-6). All models include interacted effects for campus and faculty or staff status (5 dummies). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. Columns 1-2 include the full sample while columns 3-6 include only the subsample successfully matched to the wage data. In columns 2, 5, and 6, we do not include dummies for spillover effects within departments (i.e., not being treated in a department where some colleagues are treated). Columns 4-6 include wage controls (up to a cubic term). Column 6 includes interactions of treatment and relative wage in the unit.
Table 3: Comparison of Treated and Non-treated Individuals

Columns: (1) mean of control group(a); (2) mean of treatment group; (3) difference (adjusted for campus, standard error in parentheses); (4) t-test.
Overall Sample (N=41,975)
Percent faculty 16.2 19.1 1.47 0.91
(1.61)
Percent matched to wage data 76.3 75.2 0.12 0.10
(1.15)
Sample Matched to Wage Data (N=31,887)
Mean base earnings ($1000's) 54.73 58.26 2.50 2.04
(1.23)
Mean total earnings (base + supplements, $1000's) 63.35 66.93 2.34 1.22
(1.91)
Percent with total earnings < $20,000 13.2 12.8 -0.37 0.47
(0.77)
Percent with total earnings > $100,000 15.3 16.9 0.90 0.77
(1.16)
Percent responded to survey with non-missing 21.1 17.8 -2.76 4.49
responses for 8 key variables (0.61)
Survey Respondents with Wage Data and non-Missing Values (N=6,411)
Percent faculty 15.0 17.9 1.22 0.68
(1.79)
Mean total earnings (base + supplements, $1000's) 65.61 69.09 1.69 0.75
(2.23)
Percent female 60.9 61.0 0.43 0.24
(1.79)
Percent age 35 or older 72.9 75.9 1.68 1.15
(1.46)
Percent employed at UC 6 years or more 59.1 62.7 1.03 0.62
(1.67)
Percent in current position 6 years or more 40.3 43.8 1.76 1.08
(1.63)
a Includes placebo treatment group (at UCLA only).
Notes: Entries represent means for treated and untreated individuals in the indicated samples. The difference between the means of the treatment and control groups, adjusting for campus effects to reflect the experimental design, is presented in column 3 along with estimated standard errors (in parentheses), clustered by campus/department. The t-test for the difference in means of the treatment and control groups is presented in column 4.
Table 4a: Linear Probability Models for Effect of Treatment on Use of Sacramento Bee Website

(1) (2) (3) (4) (5) (6)
Treated individual (coefficient × 100) 28.4 28.3 28.3 28.5 -- 28.7
(1.8) (1.6) (1.6) (1.6) (2.1)
Untreated individual in treated department 0.3 -- -- -- -- --
(coefficient × 100) (1.7)
Treated individual with wage less than median -- -- -- -- 28.9 --
in pay unit (coefficient × 100) (2.2)
Treated individual with wage greater than median -- -- -- -- 28.1 --
in pay unit (coefficient × 100) (2.0)
Treated individual × deviation of wage from median -- -- -- -- -- -0.2
in pay unit (coefficient × 100) (0.7)
Treated individual × deviation of wage from median -- -- -- -- -- 0.0
in pay unit if deviation positive (coefficient × 100) (1.0)
Dummy for response incentive (test for -- -- 0.0 -- -- --
selection bias in respondent sample) (1.8)
Dummy for wage less than median -- -- -- -- -1.4 --
in pay unit (coefficient × 100) (1.9)
Deviation of wage from median (coefficient × 100) -- -- -- -- -- -0.2
(0.50)
Deviation of wage from median -- -- -- -- -- 0.2
if deviation positive (coefficient × 100) (0.60)
Controls for campus × (staff/faculty) and cubic yes yes yes yes yes yes
in wage?
Demographic controls (gender, age, tenure and no no no yes yes yes
time in position)
P-value for test against model in column 4 -- -- -- -- 0.75 0.89
Notes to Table 4a: Standard errors, clustered by campus/department, are in parentheses (819 clusters for all models). The dependent variable in all models is a dummy for using the Sacramento Bee website (mean for control group = 19.1%; mean for treatment group = 49.4%; overall mean = 27.5%). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. The model in column 5 also includes a dummy indicating whether the individual's wage is below the median of the pay unit. The model in column 6 also includes the deviation of the wage from the median of the pay unit interacted with a dummy for whether the deviation is positive.

Table 4b: Treatment Effects on Use of Sacramento Bee Website for Different Types of Salary Information

Used Sacramento Bee website and looked at salary information for: (1) use of Sacramento Bee website (any use); (2) colleagues in own department; (3) colleagues in other departments, own campus; (4) colleagues at other UC campuses; (5) "high-profile" UC employees; (6) any of those in columns 2-5.

Mean rate of use for control group (percent): (1) 24.3; (2) 15.2; (3) 10.1; (4) 6.4; (5) 13.2; (6) 23.9

Estimated treatment effect from model with basic controls:
Treated individual (coefficient × 100): (1) 27.8 (2.4); (2) 24.1 (2.2); (3) 15.0 (1.7); (4) 7.5 (1.4); (5) 9.5 (2.0); (6) 27.6 (2.4)

Estimated treatment effect from interacted model with basic controls:
Treated individual with wage less than median in pay unit (coefficient × 100): (1) 29.5 (3.5); (2) 25.4 (3.3); (3) 14.5 (2.3); (4) 7.6 (2.0); (5) 10.6 (2.9); (6) 29.4 (3.5)
Treated individual with wage greater than median in pay unit (coefficient × 100): (1) 26.3 (2.8); (2) 23.0 (2.7); (3) 15.6 (2.1); (4) 7.4 (1.7); (5) 8.7 (2.4); (6) 26.1 (2.8)
P-value for equality of treatment effects(a): (1) 0.45; (2) 0.54; (3) 0.72; (4) 0.92; (5) 0.56; (6) 0.41

(a) t-test for equality of treatment effects for people with wage below the median in the pay unit and those with wage above the median in the pay unit.

Notes to Table 4b: Estimated on the sample of 2,806 survey respondents from UCLA (1,880 controls, including those assigned the placebo treatment, and 926 treated individuals).
Table 5: Ordered Probit Models for Effect of Information Treatment on Measures of Job Satisfaction

Columns 1-4: satisfaction index (10-point scale); columns 5-8: likely to look for new job (1-3 scale); columns 9-12: dissatisfied and likely looking for a new job (0-1). Column 4 is a Heckit selection model; columns 8 and 12 are maximum likelihood selection models.

Treated individual (coefficient × 100): (1) -3.2 (3.3); (5) 4.0 (3.4); (9) 8.8 (5.1)
I. Treated individual with wage less than median in pay unit (coefficient × 100): (2) -9.6 (4.5); (3) -9.0 (4.5); (4) -5.1 (2.9); (6) 11.6 (4.5); (7) 11.5 (4.5); (8) 12.7 (5.0); (10) 20.1 (6.6); (11) 19.8 (6.6); (12) 19.6 (7.8)
II. Treated individual with wage greater than median in pay unit (coefficient × 100): (2) 2.7 (4.1); (3) 1.9 (4.1); (4) 3.1 (2.8); (6) -3.3 (4.9); (7) -0.9 (4.6); (8) -2.2 (5.3); (10) -5.0 (7.5); (11) -3.7 (7.5); (12) -5.5 (8.0)
II - I: (2) 12.3 (5.4); (3) 10.9 (5.4); (4) 8.2 (3.6); (6) -14.9 (6.6); (7) -12.4 (6.4); (8) -14.9 (6.5); (10) -25.2 (9.6); (11) -23.5 (9.5); (12) -25.1 (9.6)
Inverse Mills ratio: (4) -0.14 (0.14)
Correlation between outcome and participation errors: (4) -0.21; (8) -0.13 (0.25); (12) 0.06 (0.32)
Controls for campus × (staff/faculty) and cubic in wage? yes in all columns
Demographic controls (gender, age, tenure and time in position)? yes in columns 3, 7, and 11 only
P-value for exclusion of treatment effects: (1) 0.34; (2) 0.05; (3) 0.09; (4) 0.07; (5) 0.24; (6) 0.03; (7) 0.04; (8) 0.03; (9) 0.08; (10) 0.01; (11) 0.01; (12) 0.02

Notes: Unless otherwise noted, specifications are ordered probit models. Model (4) is a two-step Heckman selection model. Models (8) and (12) are maximum likelihood selection models for an ordered categorical dependent variable. See text for details on the specification of the selection models. Standard errors, clustered by campus/department, are in parentheses (819 clusters for all models). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. The satisfaction index is the average of responses to the questions "How satisfied are you with your wage/salary on this job?", "How satisfied are you with your job?", and "Do you agree or disagree that your wage is set fairly in relation to others in your department/unit?". Responses are ordered so that higher values indicate greater satisfaction. The variable "dissatisfied and likely looking for a new job" is 1 if the respondent is below the median value of the satisfaction index (median = 8/3) and reports being "very likely" to make an effort to find a new job. See text and Appendix Table 2 for further details on the construction of the dependent variables. In addition to the explanatory variables presented in the table, all models other than (1), (5), and (9) include an indicator for whether the respondent is paid at least the median in his/her pay unit.
Table 6: Ordered Probit Models for Effect of Information Treatment on Measures of Job Satisfaction

Columns 1-4: satisfaction index (10-point scale); columns 5-8: likely to look for new job (1-3 scale); columns 9-12: dissatisfied and likely looking for a new job (0-1). Column 4 is a Heckit selection model; columns 8 and 12 are maximum likelihood selection models.

Treated individual × deviation of wage from median if deviation negative (coefficient × 100): (1) 2.6 (1.4); (3) -1.4 (2.4); (5) -4.1 (1.5); (7) -1.8 (2.5); (9) -6.3 (2.2); (11) 0.0 (3.6)
Treated individual × deviation of wage from median if deviation positive (coefficient × 100): (1) -1.1 (1.0); (3) -1.7 (1.6); (5) -1.2 (1.1); (7) -1.8 (1.7); (9) -1.4 (1.9); (11) -0.7 (2.4)
Treated individual × deviation of rank from 0.5 if deviation negative (coefficient × 10): (2) 3.8 (1.5); (3) 5.4 (2.8); (4) 1.9 (1.1); (6) -4.6 (1.7); (7) -2.9 (2.9); (8) -4.9 (1.9); (10) -7.3 (2.3); (11) -7.3 (4.0); (12) -7.2 (2.7)
Treated individual × deviation of rank from 0.5 if deviation positive (coefficient × 10): (2) -0.8 (1.4); (3) 1.6 (2.5); (4) 0.2 (1.0); (6) -1.3 (1.7); (7) 1.1 (2.7); (8) -0.8 (1.8); (10) -2.5 (2.9); (11) -1.5 (4.2); (12) -2.6 (3.1)
Inverse Mills ratio: (4) -0.15 (0.14)
Correlation between outcome and participation errors: (4) -0.24; (8) -0.13 (0.26); (12) 0.02 (0.34)
Controls for campus × (staff/faculty) and cubic in wage? yes in all columns
P-value for exclusion of treatment effects: (1) 0.11; (2) 0.05; (3) 0.05; (4) 0.2; (5) 0.02; (6) 0.02; (7) 0.08; (8) 0.02; (9) 0.01; (10) 0.00; (11) 0.02; (12) 0.01

Notes: Unless otherwise noted, specifications are ordered probit models. Model (4) is a two-step Heckman selection model. Models (8) and (12) are maximum likelihood selection models for an ordered categorical dependent variable. See text for further detail on the specification of the selection models. Standard errors, clustered by campus/department, are in parentheses (819 clusters for all models). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. See the note to Table 5 for a description of the dependent variables. In addition to the explanatory variables presented in this table, specifications 1, 3, 5, 7, 9, and 11 include the deviation of the wage from the median wage in the pay unit if the deviation is positive, the deviation of the wage from the median wage in the pay unit if the deviation is negative, and an indicator for whether the deviation is negative. Specifications 2-4, 6-8, and 10-12 include the deviation of the rank in the pay unit from 0.5 if the deviation is positive, the deviation of the rank in the pay unit from 0.5 if the deviation is negative, and an indicator for whether the deviation is negative.
Table 7: Effect of Information Treatment on Job Satisfaction by Pay Relative to Campus/Occupation Median

Columns 1-2: satisfaction index (10-point scale), faculty and staff; columns 3-4: likely to look for new job (1-3 scale), faculty and staff; columns 5-6: dissatisfied and likely looking for a new job (0-1), faculty and staff.

I. Treated individual with wage less than occupation/campus median (coefficient × 100): (1) -24.7 (10.4); (2) -8.7 (5.3); (3) 10.2 (11.1); (4) 13.2 (5.3); (5) 22.8 (15.6); (6) 17.8 (6.8)
II. Treated individual with wage greater than occupation/campus median (coefficient × 100): (1) 25.2 (8.9); (2) -0.2 (4.4); (3) -10.9 (11.2); (4) -2.4 (5.5); (5) -10.0 (19.6); (6) -1.7 (8.8)
II - I: (1) 50.0 (14.0); (2) 8.5 (6.0); (3) -21.2 (15.0); (4) -15.6 (7.4); (5) -32.8 (24.8); (6) -19.5 (10.5)
Controls for campus and cubic in wage? yes in all columns
P-value for exclusion of treatment effects: (1) 0.00; (2) 0.24; (3) 0.37; (4) 0.04; (5) 0.30; (6) 0.03

Notes: A low-pay department/unit corresponds to a department/unit where the department/unit median wage is below the median department at the campus, computed separately for faculty and staff. In addition to the explanatory variables presented in the table, all models other than (1), (5), and (9) include an indicator for whether the department/unit median (computed separately for faculty/staff) is below the campus median for all departments/units by faculty/staff.

Table 8: Estimates of the Effect of "Placebo" Treatment

For each of the three outcomes the table reports the information treatment specification (estimated excluding the placebo group), the placebo specification (estimated excluding the treatment group), and the p-value(a) for equality of the two.

Satisfaction index (10-point scale):
  Wage less than median in pay unit (coefficient × 100): treatment -12.7 (7.2); placebo 2.2 (7.2); p-value 0.06
  Wage more than median in pay unit (coefficient × 100): treatment -3.3 (6.1); placebo -2.4 (6.1); p-value 0.90
Likely to look for new job (1-3 scale):
  Wage less than median in pay unit (coefficient × 100): treatment 11.8 (7.4); placebo -7.3 (9.6); p-value 0.08
  Wage more than median in pay unit (coefficient × 100): treatment -0.7 (7.3); placebo -10.9 (7.5); p-value 0.23
Dissatisfied and likely looking for a new job (0-1):
  Wage less than median in pay unit (coefficient × 100): treatment 29.2 (9.5); placebo -18.7 (16.4); p-value 0.01
  Wage more than median in pay unit (coefficient × 100): treatment -7.8 (10.7); placebo 6.9 (11.0); p-value 0.23
Controls for staff/faculty status and cubic in wage? yes in all specifications
Observations: 2,303 in the treatment specifications; 1,880 in the placebo specifications

(a) p-value for the hypothesis that the placebo and treatment effects are equal.

Notes: Specifications are ordered probit models. Standard errors, clustered by campus/department, are in parentheses. "Treatment" denotes the information treatment; "placebo" denotes the placebo treatment. The sample is for UCLA only. Treatment specifications exclude the placebo group; placebo specifications exclude the treatment group. "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. Models are based on specifications 2, 6 and 10 of Table 5. For additional details see the notes to Table 5 and the text.

Table 9: Effect of Information Treatment on Perceptions of Overall Inequality

Dependent variable: "Differences in income in America are too large" (1-4 scale); four specifications.

Treated individual (coefficient × 100): (1) 7.1 (3.7); (2) 6.8 (3.7); (3) 7.5 (4.7); (4) 7.9 (6.2)
Treated individual with wage less than median in pay unit (coefficient × 100): (3) -1.4; (4) -1.5 (6.2)
Controls for campus × (staff/faculty) and cubic in wage? yes in all columns
Demographic controls (gender, age, tenure and time in position)? (1) no; (2) yes; (3) no; (4) yes

Notes: Specifications are ordered probit models. Standard errors, clustered by campus/department, are in parentheses (818 clusters). The dependent variable is "4" if the respondent "strongly agrees" that differences in income in America are too large, "3" if they "agree", "2" if they "disagree", and "1" if they "strongly disagree". See Appendix Table 2 for means of the variable. "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. In addition to the explanatory variables presented in the table, models (3) and (4) include an indicator for whether the respondent is paid at least the median in his/her pay unit.

Table 10: Linear Probability Models for Effect of Information Treatment on Job Mobility

Dependent variable: dummy equal to 1 if the individual could be located in an online campus directory in March 2011. Specifications are estimated separately for survey responders (N=6,835), nonresponders (N=25,048), and the full sample (N=31,883).

Reported "very likely" to make a genuine effort to find a new job (coefficient × 100): 20.5 (1.6) (responders only)
Reported "somewhat likely" to make a genuine effort to find a new job (coefficient × 100): 5.9 (1.1) (responders only)
Treated individual (coefficient × 100): responders -0.1 (1.3); nonresponders -0.2 (1.2); full sample 0.0 (1.1)
Treated individual with wage less than median in pay unit (coefficient × 100): responders 0.4 (1.9); nonresponders -0.76 (1.5); full sample -0.40 (1.4)
Treated individual with wage greater than median in pay unit (coefficient × 100): responders -0.6 (1.5); nonresponders 0.36 (1.2); full sample 0.31 (1.1)
Controls for campus × (staff/faculty) and cubic in wage? yes in all specifications

Notes: The dependent variable is 1 if we were able to locate individuals in the original subsample matched to wage data (see Table 2) in online campus directories in March 2011. We found 49% of the original sample at UCSC, 76% at UCSD, and 74.5% at UCLA. The excluded category for the job search intention dummies is "not at all likely". The mean of the dependent variable for the full sample is 0.27. In addition to the explanatory variables presented in the table, the specifications interacting treatment with relative pay include an indicator for whether the respondent is paid at least the median in his/her pay unit.
A Appendix (not for publication)
A.1 Survey Questions
In this appendix, we reproduce the exact wording of the online second stage survey. We show
the exact questions in the case of UCLA (UCSC and UCSD surveys had a similar set of ques-
tions but did not include questions C1-C5 on detailed usage of the Sacramento Bee website).
The survey is divided into three parts: A. job satisfaction and pay equity questions; B. demographic and job characteristics questions; C. knowledge and use of the SacBee website. These part labels were not presented or flagged to the subjects, so as not to influence the responses.
A. Job Satisfaction and Pay Equity:
1. Please indicate whether you agree or disagree with the following statements:
(a) “My wage/salary is set fairly in relation to others in my department or unit.”
(b) “My wage/salary is set fairly in relation to workers in similar jobs on campus.”
(c) “My wage/salary is set fairly in relation to workers in similar jobs at other UC
campuses.”
Strongly Agree/Agree/Disagree/Strongly Disagree
2. Please indicate whether you agree or disagree with the following statement: “Differences
in income in America are too large.”
Please pick one of the answers below.
– Strongly agree
– Agree
– Disagree
– Strongly disagree
3. Do you expect to receive a salary increase in the next 3 years over and above the standard
cost of living adjustment?
Please pick one of the answers below.
– Yes
– No
4. Please indicate whether you agree or disagree with the following statement: “At UC,
individual performance on the job plays an important role in promotions and salary in-
creases.”
Please pick one of the answers below.
– Strongly agree
– Agree
– Disagree
– Strongly disagree
(a) How satisfied are you with your wage/salary on this job?
Please pick one of the answers below.
– Very satisfied
– Somewhat satisfied
– Not too satisfied
– Not at all satisfied
(b) All in all, how satisfied are you with your job?
Please pick one of the answers below.
– Very satisfied
– Somewhat satisfied
– Not too satisfied
– Not at all satisfied
5. Taking everything into consideration, how likely is it you will make a genuine effort to
find a new job within the next year?
Please pick one of the answers below.
– Very likely
– Somewhat likely
– Not at all likely
B. Demographic and Job Characteristics Questions:
Please tell us a few things about yourself:
1. Are you working full-time or part-time in your job on campus?
Please pick one of the answers below.
– Full-time
– Part-time
(a) Is your position covered by a collective bargaining agreement?
Please pick one of the answers below.
– Yes
– No
2. Are you female or male?
Please pick one of the answers below.
– Female
– Male
3. What is your current age?
Please pick one of the answers below.
– Under 25
– 25-34
– 35-54
– Over 55
4. How many years have you worked at this university?
Please pick one of the answers below.
– Less than 1 year
– 2 to 5 years
– 6 to 10 years
– 11 to 20 years
– More than 20 years
5. How many years have you worked in your current position?
Please pick one of the answers below.
– Less than 1 year
– 2 to 5 years
– 6 to 10 years
– 11 to 20 years
– More than 20 years
C. Awareness and use of the Sacramento Bee website:
1. Are you aware of the web site created by the Sacramento Bee newspaper that lists salaries
for all State of California employees? (The website is located at www.sacbee.com/statepay,
or can be found by entering the following keywords in a search engine: Sacramento Bee
salary database).
Please pick one of the answers below.
– Yes
– No
If yes, skip 4; otherwise, skip 2-3.
2. (a) When did you learn about the salary database posted by the Sacramento Bee?
Please pick one of the answers below.
– In the last few weeks
– More than one month ago
(b) Please tell us: Have you used the Sacramento Bee salary database?
Please pick one of the answers below.
– Yes
– No
3. (a) Which people’s salaries were you most interested in? (You may select more than one
group.)
– Colleagues in my department
– Colleagues in other departments on campus
– Colleagues at other campuses
– Highly paid or high profile people
(b) Were the salaries you checked higher or lower than you expected?
Please pick one of the answers below.
– Higher
– About what I expected
– Lower
4. Why didn’t you use the SacBee website? (Select all the options that apply.)
– I already know enough about salaries of University employees
– Learning about colleagues’ pay could make me feel underpaid
– Learning about colleagues’ pay could make me feel overpaid
– I want to respect the privacy of my colleagues on campus
– Information about salaries of University employees is of no interest to me
5. Do you think that making available public information on individual salaries is:
– Helpful for people who are paid less than average
– Harmful for people who are paid less than average
– Helpful for morale in your department
– Harmful for morale in your department
– Likely to lead to salary increases for some people
– Likely to lead some people to look for other jobs
If you have any additional comments please feel free to enter them here before you submit
the questionnaire. Please write your answer in the space below.
A.2 Additional Empirical Results
Appendix Table A1 presents estimates of earnings changes of workers at UC Los Angeles be-
tween 2007 and 2008 as a function of their 2007 level of earnings for workers who are paid above
and below the median in their department and occupation. The sample is all UCLA employees
matched to their earnings and who are present in both 2007 (base year) and 2008 (following
year). We further restrict the sample to employees with base pay above $10,000 in 2007 (this
is done to avoid extreme log changes that would bias results in favor of our hypothesis; qualitative
results are highly robust to changes in the $10,000 cut-off). The pay unit (for median
computation) refers to faculty or staff members in an individual’s department (in this sample
with base earnings above $10,000). Medians are defined separately for base pay and total pay.
Col. (1) reports the average for employees below median, col. (2) for employees above median,
col. (3) reports the difference (2)-(1). Cols. (4) and (5) report the coefficients of regressing the
left-hand-side variable on own pay and base median pay (those two right-hand-side variables
are in log when the left-hand-side is log and use the same definition of base vs. total pay as the
left-hand-side variables). The table shows that workers below the median experience significant
earnings gains relative to those above the median and that, holding own pay constant, a higher
pay-unit median is associated with higher future wage growth on average.
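The calculation behind Appendix Table A1 can be sketched in a few lines. The snippet below is an illustrative reconstruction in Python, using hypothetical file and column names (ucla_pay_2007_2008.csv, base_pay_2007, department, faculty_or_staff, and so on); it shows the logic of the median computation and the column (4)-(5) regression, not the code actually used for the paper.

    # Illustrative sketch of the Appendix Table A1 calculations; file and
    # column names are hypothetical, and this is not the authors' actual code.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ucla_pay_2007_2008.csv")   # one row per employee present in 2007 and 2008
    df = df[df["base_pay_2007"] > 10_000]        # drop very low 2007 base pay

    # Pay unit = faculty or staff members within a department; 2007 medians.
    unit = ["department", "faculty_or_staff"]
    df["median_base_2007"] = df.groupby(unit)["base_pay_2007"].transform("median")
    df["below_median"] = df["base_pay_2007"] < df["median_base_2007"]

    # Outcome for panel A: change in log base pay between 2007 and 2008.
    df["dlog_base"] = np.log(df["base_pay_2008"]) - np.log(df["base_pay_2007"])

    # Columns (1)-(3): mean change below and above the unit median.
    print(df.groupby("below_median")["dlog_base"].mean())

    # Columns (4)-(5): regress the change on own 2007 pay and the unit median,
    # both in logs because the outcome is in logs.
    ols = smf.ols("dlog_base ~ np.log(base_pay_2007) + np.log(median_base_2007)",
                  data=df).fit()
    print(ols.params)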
Appendix Table A2 presents some summary statistics on the success of our matching pro-
cedures. Overall, we were able to match about 76% of names from our online directories to
the salary database. The match rate varies by campus, with a high of 81% at UCSD and a
low of 71% at UCSC. We believe that these differences are largely driven by differences in the
quality and timeliness of the information in the online directories at the three campuses. Some
evidence in support of this conjecture is provided by the fact that the survey response rate was
significantly higher for people we could match to the wage data (21.4%) than for those we could not
match (17.7%). This pattern would be expected if some of the names that could not be matched
to the salary data were for former employees who were no longer working at the university.
Appendix Table A3 reports the distributions of responses to these questions among the con-
trol and treatment groups of our analysis sample. We also show the distribution of responses
for the controls when they are reweighted across the three campuses to have the same distri-
bution as the treatment group. In general, UC employees are relatively happy with their jobs
but less satisfied with their wage or salary levels. For example, about 85% of the control group
say they are somewhat satisfied or very satisfied with their job, but only 52% express the same
sentiment about their salary. Despite their professed job satisfaction, just over one-half say
they are somewhat likely or very likely to look for a new job next year. Close inspection of the
distributions of responses for the treatment and control groups of our experiment reveals
few large differences. Indeed, simple chi-square tests (which make no allowance for the design
effects in our sample) show the distributions of job satisfaction and job search intentions are very
similar (p=0.99 for job satisfaction, p=0.43 for search intentions) between the groups. There
is a clearer indication of a gap in wage satisfaction (which is somewhat lower for the treatment
group), and the simple chi-square test is significant (p=0.05) for this measure.
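For readers who want to reproduce the flavor of these comparisons, the sketch below runs a standard chi-square test of independence on a table of counts by treatment status. The counts shown are made-up placeholders rather than the actual survey cell counts, and, as in the simple tests described above, no allowance is made for the design effects in the sample.

    # Illustrative chi-square comparison of response distributions by treatment
    # status; the counts below are placeholders, not the actual survey counts.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: control, treatment; columns: the four wage-satisfaction categories.
    counts = np.array([
        [737, 1506, 1831, 561],   # control (hypothetical counts)
        [307,  540,  742, 187],   # treatment (hypothetical counts)
    ])
    chi2, p_value, dof, _expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {p_value:.3f}")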
Appendix Table A4 presents estimates from model 2 of Table 5 using the survey measures
from which the satisfaction index is derived. The coefficient on the interaction of treatment
with earning below the pay-unit median is negative for all of the sub-components of the satisfaction
index, with varying levels of precision.
Appendix Table A5 presents estimates comparable to those in Table 6 but adds a main treatment
effect to the specification. Some precision is lost, but the results remain generally consistent with
those in Table 6 and again suggest that the rank variable matters more for the treatment response
than the relative wage level.
Appendix Table A6 shows how our results vary across specific subgroups of employees.
In particular, we show estimates by gender (columns 1 and 2), by faculty/staff status (columns
3 and 4) and by length of tenure (columns 5 and 6). For ease of exposition, we present results
only for the baseline specification presented in columns 2, 6, and 10 of Table 5.
Men and women respond differently to the information treatment. Both express elevated
dissatisfaction of about the same magnitude following the information treatment (Panel A), but
women appear more inclined to report that they are searching for a new job following treatment
(Panel B). Staff appear to be more responsive than faculty to the treatment on both measures,
but the relatively small number of faculty (n=1015) results in imprecision that limits our ability
to make reliable comparisons. The pattern of results for tenure also cuts both ways. We observe
significant negative satisfaction effects for low-paid respondents with higher tenure (defined as
6 or more years of seniority) but not for lower tenure respondents (Panel A, columns 5 and
6). By contrast, low-paid, low-tenure respondents respond to the treatment with elevated job
search intentions, while high-tenure respondents do not (Panel B, columns 5 and 6).
It is not surprising that higher-tenure respondents in the treatment are not reporting elevated
job search intentions relative to the control because in the UC system few employees with long
tenure change jobs. Indeed, this finding is a useful specification check because it suggests that
respondents are responding to our survey truthfully. One might be concerned that a respondent
who is not mobile, and does not plan to change jobs, nevertheless reports to us that they plan
to look for a new job because they are upset by what they learned from the salary data and
because talk is cheap. The tenure result suggests that this is not occurring systematically.
Following this intuition, we provide a formal test of the “truthfulness” of responses in Ap-
pendix Table A7. We first estimate a probit model on the control group sample where the
dependent variable is 1 if the respondent reports being “very likely” to be searching for a new
job with age, gender, tenure, faculty/staff, campus, and time in position dummies as covariates.
From the probit, we predict the probability that a respondent reports being very likely to look for
a new job, in both the treatment and control groups. If respondents are answering the question about job
search intention truthfully, we should see a limited response in the treatment from respondents
who are unlikely to report searching for a new job because of their characteristics.
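A stripped-down version of this two-step procedure is sketched below. It assumes a DataFrame with hypothetical column names (very_likely_search, treated, below_median, and the covariate dummies) and, for brevity, uses a simple binary probit in the second step rather than the ordered specifications reported in the table; it illustrates the logic only and is not the authors' code.

    # Illustrative sketch of the Appendix Table A7 "truthfulness" check.
    # Column names are hypothetical; the second step is simplified to a binary
    # probit instead of the ordered models reported in the table.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("analysis_sample.csv")   # hypothetical input file

    # Step 1: probit for reporting "very likely" to search, control group only.
    first_stage = smf.probit(
        "very_likely_search ~ C(age_group) + C(gender) + C(tenure_group)"
        " + C(faculty_or_staff) + C(campus) + C(time_in_position)",
        data=df[df["treated"] == 0],
    ).fit()

    # Step 2: predicted search probability (in percent) for everyone.
    df["p_search"] = first_stage.predict(df) * 100

    # Step 3: interact the treatment with the predicted probability, here for
    # workers paid below their pay-unit median (Panel A of the table).
    below = df[df["below_median"]]
    second_stage = smf.probit(
        "very_likely_search ~ treated + p_search + treated:p_search",
        data=below,
    ).fit()
    print(second_stage.summary())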
For ease of exposition, Panel A limits the sample to workers who are paid less than the pay
unit median. Column 1 is a model with just a treatment dummy; it shows the expected result
that below-median respondents are more likely to report searching for a job in the treatment
than control. Column 2 includes an interaction of the treatment with the predicted probability
of job search. Consistent with truthful response, workers who we predict are less mobile because
of their characteristics do not respond to the treatment by reporting to us that they plan to look
for a new job. The interaction term is statistically significant (t= 2.5) and implies that the
response to the treatment on job search intention is only positive when the respondent has more
than a 17% predicted chance of looking for a new job. As could be expected, this interaction
does not enter significantly when using the satisfaction index (columns 3 and 4): low-mobility
workers can still report elevated dissatisfaction as a result of new information. Panel B shows
the same set of models for the above-median respondents and, not surprisingly, the interaction
term is insignificant in both models.
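To see where the 17% figure comes from, one can combine the two treatment coefficients from column (2) of Panel A of Appendix Table A7 (both reported as coefficient × 100, and treating the predicted probability as measured in percentage points, as the reported magnitudes imply). The implied treatment effect on the latent search-intention index is proportional to

    -25.9 + 1.5 × p,

which is positive only when p > 25.9 / 1.5 ≈ 17.3, that is, when a respondent's predicted probability of reporting a genuine job search exceeds roughly 17 percent.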
Appendix Table A1: Earnings Growth and Median Peer Earnings

                                         Below/Above Median comparison         Regressions
                                         Below      Above      Difference     Own 2007    Median 2007
                                         Median     Median     (2)-(1)        pay         pay
                                         (1)        (2)        (3)            (4)         (5)
A. Log pay changes from 2007 to 2008
  log (total pay)                        .116       .026       -.090          -.174       .145
                                                                (.007)        (.006)      (.009)
  log (base pay)                         .121       .023       -.098          -.201       .204
                                                                (.007)        (.007)      (.011)
B. Level pay changes from 2007 to 2008
  (total pay)                            6775       4906       -1870          .013        .035
                                                                (433)         (.004)      (.006)
  (base pay)                             6246       4142       -2104          -.028       .088
                                                                (262)         (.004)      (.006)
Observations                             6654       6505       13,159         13,159      13,159

Notes: Standard errors are in parentheses. Sample is all UCLA employees matched to their earnings and who are present in both 2007 (base year) and 2008 (following year). We further restrict the sample to employees with base pay above $10,000 in 2007 (this is done to avoid extreme log changes that would bias results in favor of our hypothesis; qualitative results are highly robust to changes in the $10,000 cut-off). Pay unit (for median computation) refers to faculty or staff members in an individual's department (in this sample with base earnings above $10,000). Medians are defined separately for base pay and total pay. Col. (1) reports the average for employees below the median, col. (2) for employees above the median, and col. (3) reports the difference (2)-(1). Cols. (4) and (5) report the coefficients from regressing the left-hand-side variable on own pay and pay-unit median pay (those two right-hand-side variables are in logs when the left-hand side is in logs, and use the same definition of base vs. total pay as the left-hand-side variable).
Appendix Table A2: Matching and Response Rates

                      Number in    Pct. Matched   Pct. Responded   Pct. Responded    Pct. With Wage and    Sample Size in
                      Online       to Wage Data   to Survey        Conditional on    Non-missing           Analysis File
                      Directory                                    Wage Data         Survey Data
                      (1)          (2)            (3)              (4)               (5)                   (6)
UC Santa Cruz
  Staff               2,797        70.3           14.7             16.8              10.9                  306
  Faculty               809        73.6           18.9             21.2              14.7                  119
  All                 3,606        71.1           15.6             17.8              11.8                  425
UC San Diego
  Staff              15,782        81.1           24.0             24.0              17.9                  2,830
  Faculty             2,075        78.8           21.7             23.8              17.5                  363
  All                17,857        80.8           23.7             23.9              17.9                  3,193
UCLA
  Staff              16,227        73.8           19.0             19.8              14.1                  2,283
  Faculty             4,285        68.1           16.3             19.1              12.5                  536
  All                20,512        72.6           18.4             19.6              13.7                  2,819
All Three Campuses
  Staff              34,806        76.8           20.9             21.6              15.6                  5,419
  Faculty             7,169        71.8           18.2             20.8              14.1                  1,018
  All                41,975        76.0           20.4             21.4              15.3                  6,437

Notes: Sample sizes in column (1) reflect the number of valid email addresses extracted from directories. Wage data were matched to directory data by campus and name. Entries in columns (5) and (6) are based on individuals in the online directory who can be matched to wage data, responded to the survey, and provided non-missing responses for 8 key questions.
Appendix Table A3: Means of Outcome Measures by Treatment Status

"How satisfied are you with your wage/salary on this job?"
                                   Not At All    Not Too      Somewhat     Very
                                   Satisfied     Satisfied    Satisfied    Satisfied
  Overall Sample (N=6411)          16.3          31.9         40.1         11.7
  Control Group (N=4635)           15.9          32.5         39.5         12.1
  Controls Reweighted (a)          15.6          32.9         39.6         11.8
  Treatment Group (N=1776)         17.3          30.4         41.8         10.6

"How satisfied are you with your job?"
  Overall Sample (N=6411)           3.3          12.1         47.3         37.3
  Control Group (N=4635)            3.3          12.2         47.4         37.2
  Controls Reweighted (a)           3.0          12.1         47.1         37.8
  Treatment Group (N=1776)          3.3          12.0         47.1         37.6

"How likely is it you will make a genuine effort to find a new job within the next year?"
                                   Not At All    Somewhat     Very
                                   Likely        Likely       Likely
  Overall Sample (N=6411)          47.0          30.8         22.2
  Control Group (N=4635)           47.2          30.7         21.9
  Controls Reweighted (a)          47.5          30.5         22.1
  Treatment Group (N=1776)         45.8          31.1         23.1

"Do you agree or disagree that your wage is set fairly in relation to others in your department/unit?"
                                   Strongly      Disagree     Agree        Strongly
                                   Disagree                                Agree
  Overall Sample (N=6411)          11.7          31.1         47.5          9.8
  Control Group (N=4635)           11.4          31.0         47.8          9.9
  Controls Reweighted (a)          11.3          31.4         47.5          9.8
  Treatment Group (N=1766)         12.6          31.1         46.9          9.4

"Do you agree or disagree that differences in income in America are too large?"
  Overall Sample (N=6397)           1.9          11.4         38.1         48.5
  Control Group (N=4625)            2.1          11.6         38.8         47.6
  Controls Reweighted (a)           2.2          11.4         38.5         48.0
  Treatment Group (N=1772)          1.6          11.0         36.5         51.0

Satisfaction Index (10 point scale)
                                   1      4/3    5/3    2      7/3    8/3    3      10/3   11/3   4
  Overall Sample (N=6411)          1.3    2.7    5.8    9.8    14.7   18.3   20.4   15.4   7.4    4.2
  Control Group (N=4635)           1.3    2.7    5.6    9.6    14.9   18.5   20.5   15.2   7.3    4.5
  Controls Reweighted (a)          1.2    2.5    5.7    9.4    15.1   19.0   20.5   15.2   7.0    4.6
  Treatment Group (N=1766)         1.4    2.7    6.2    10.3   14.3   17.9   20.2   16.1   7.6    3.4

Dissatisfied and likely to make an effort to find a job
                                   No      Yes
  Overall Sample (N=6411)          86.6    13.4
  Control Group (N=4635)           87.1    12.9
  Controls Reweighted (a)          87.2    12.8
  Treatment Group (N=1766)         85.1    14.9

Notes: Entries are tabulations of responses for the analysis sample (or the subset of the analysis sample with non-missing responses).
(a) Means for the control group are reweighted across campuses to reflect the unequal probability of treatment at different campuses. Reweighted controls are then directly comparable to the treatment group.
Appendix Table A4: Ordered Probit Models for Effect of Information Treatment on Measures of Job Satisfaction

                                               Wage is Fair   Satisfied with   Satisfied     Likely to Look
                                               (1-4 scale)    Wage on Job      with Job      for New Job
                                                              (1-4 scale)      (1-4 scale)   (1-3 scale)
                                               (1)            (2)              (3)           (4)
I. Treated individual with wage < median
   in pay unit (coefficient × 100)             -10.1          -6.3             -8.5          11.6
                                               (4.9)          (4.5)            (4.9)         (4.5)
II. Treated individual with wage > median
   in pay unit (coefficient × 100)             2.5            -0.5             6.3           -3.3
                                               (4.5)          (4.5)            (4.4)         (4.9)
II - I                                         12.6           5.8              14.8          -14.9
                                               (6.0)          (5.7)            (6.5)         (6.6)
Controls for campus × (staff/faculty)
   and cubic in wage?                          Yes            Yes              Yes           Yes
P-value for exclusion of treatment effects     0.08           0.38             0.07          0.03

Notes: Specifications are ordered probit models. Standard errors, clustered by campus/department, are in parentheses (819 clusters for all models). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. See Appendix Table A3 and text for descriptions and means of the dependent variables. For columns 1-3, responses are ordered so that higher values indicate greater satisfaction. Models are based on specification (2) of Table 5. In addition to the explanatory variables presented in the table, all models include an indicator for whether the respondent is paid at least the median in his/her pay unit.
Appendix Table A5: Ordered Probit Models for Effect of Information Treatment on Measures of Job Satisfaction

Outcome: Satisfaction Index (10 point scale); column (4) is a two-step Heckman selection model ("Heckit").
                                                      (1)      (2)      (3)      (4)
Treated individual (coefficient × 100)                2.4      10.3     9.8      6.4
                                                      (4.3)    (5.7)    (5.8)    (3.6)
Treated × dev. of wage from median if negative        3.1      --       -1.6     --
  (coefficient × 100)                                 (1.6)             (2.4)
Treated × dev. of wage from median if positive        -1.4     --       -1.4     --
  (coefficient × 100)                                 (1.1)             (1.6)
Treated × dev. of rank from 0.5 if negative           --       7.0      8.7      4.0
  (coefficient × 10)                                           (2.1)    (3.2)    (1.5)
Treated × dev. of rank from 0.5 if positive           --       -3.9     -1.8     -1.9
  (coefficient × 10)                                           (2.2)    (3.2)    (1.4)
Inverse Mills ratio                                   --       --       --       -0.12
                                                                                 (0.14)
Correlation                                           --       --       --       -0.2
P-value for exclusion of treatment effects            0.18     0.01     0.01     0.06

Outcome: Likely to Look for New Job (1-3 scale); column (8) is a maximum likelihood selection model.
                                                      (5)      (6)      (7)      (8)
Treated individual (coefficient × 100)                2.7      0.6      -0.1     1.1
                                                      (4.6)    (6.8)    (6.8)    (6.8)
Treated × dev. of wage from median if negative        -3.6     --       -1.8     --
  (coefficient × 100)                                 (1.7)             (2.5)
Treated × dev. of wage from median if positive        -1.6     --       -1.8     --
  (coefficient × 100)                                 (1.2)             (1.6)
Treated × dev. of rank from 0.5 if negative           --       -4.4     -2.9     -4.6
  (coefficient × 10)                                           (2.7)    (3.6)    (2.7)
Treated × dev. of rank from 0.5 if positive           --       -1.4     1.1      -1.1
  (coefficient × 10)                                           (2.7)    (3.4)    (2.7)
Correlation                                           --       --       --       -0.14
                                                                                 (0.25)
P-value for exclusion of treatment effects            0.04     0.06     0.1      0.05

Outcome: Dissatisfied and Likely Looking for a New Job (0-1); column (12) is a maximum likelihood selection model.
                                                      (9)      (10)     (11)     (12)
Treated individual (coefficient × 100)                6.8      6.7      6.5      6.5
                                                      (6.5)    (8.7)    (8.8)    (8.9)
Treated × dev. of wage from median if negative        -4.8     --       -0.2     --
  (coefficient × 100)                                 (2.6)             (3.6)
Treated × dev. of wage from median if positive        -2.4     --       -0.5     --
  (coefficient × 100)                                 (2.2)             (2.4)
Treated × dev. of rank from 0.5 if negative           --       -5.2     -5.1     -5.1
  (coefficient × 10)                                           (3.3)    (4.6)    (3.4)
Treated × dev. of rank from 0.5 if positive           --       -4.6     -3.8     -4.7
  (coefficient × 10)                                           (4.2)    (5.5)    (4.2)
Correlation                                           --       --       --       0.04
                                                                                 (0.33)
P-value for exclusion of treatment effects            0.02     0.01     0.3      0.02

All models include controls for campus × (staff/faculty) and a cubic in wage.

Notes: Unless otherwise noted, specifications are ordered probit models. Model (4) is a two-step Heckman selection model. Models (8) and (12) are maximum likelihood selection models for an ordered categorical dependent variable. See text for further detail on the specification of the selection models. Standard errors, clustered by campus/department, are in parentheses (819 clusters for all models). "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. See note to Table 5 for description of the dependent variables. In addition to the explanatory variables presented in this table, specifications 1, 3, 5, 7, 9, and 11 include the deviation of the wage from the median wage in the pay unit if the deviation is positive, the deviation of the wage from the median wage in the pay unit if the deviation is negative, and an indicator for whether the deviation is negative. Specifications 2-4, 6-8 and 10-12 include the deviation of the rank in the pay unit from 0.5 if the deviation is positive, the deviation of the rank in the pay unit from 0.5 if the deviation is negative, and an indicator for whether the deviation is negative.
Appendix Table A6: Ordered Probit Models for Effect of Information Treatment -- by Subgroup

Panel A: Satisfaction Index (10 point scale)
                                               Females   Males    Staff    Faculty   Low Tenure   High Tenure
                                               (1)       (2)      (3)      (4)       (5)          (6)
I. Treated individual with wage < median       -9.2      -10.0    -10.9    -3.0      -4.7         -14.3
   in pay unit (coefficient × 100)             (5.5)     (6.8)    (5.3)    (9.9)     (6.1)        (6.3)
II. Treated individual with wage > median       6.1      -1.5      2.2      5.7      -4.7          4.6
   in pay unit (coefficient × 100)             (5.6)     (6.2)    (4.5)    (9.6)     (7.5)        (4.8)
II - I                                         15.3       8.4     13.1      8.7       0.0         18.9
                                               (7.5)     (8.6)    (6.3)    (13.9)    (9.0)        (7.4)
P-value for exclusion of treatment effects     0.11      0.34     0.08     0.80      0.64         0.03
Observations                                   3908      2503     5396     1015      2558         3853

Panel B: Likely to Look for New Job (1-3 scale)
                                               Females   Males    Staff    Faculty   Low Tenure   High Tenure
                                               (1)       (2)      (3)      (4)       (5)          (6)
I. Treated individual with wage < median       17.8       0.0     14.9     -4.8      19.5          3.8
   in pay unit (coefficient × 100)             (5.7)     (8.2)    (5.2)    (10.8)    (6.4)        (6.9)
II. Treated individual with wage > median      -7.2       1.4     -4.8      3.5      -1.5         -3.9
   in pay unit (coefficient × 100)             (6.4)     (7.0)    (5.5)    (11.7)    (8.5)        (5.6)
II - I                                         -25.1      1.5     -19.6     8.3      -21.0        -7.6
                                               (8.1)     (11.4)   (7.3)    (15.2)    (10.3)       (9.1)
P-value for exclusion of treatment effects     0.00      0.98     0.01     0.86      0.01         0.69

Panel C: Dissatisfied and Likely Looking for a New Job (0-1)
                                               Females   Males    Staff    Faculty   Low Tenure   High Tenure
                                               (1)       (2)      (3)      (4)       (5)          (6)
I. Treated individual with wage < median       20.6      19.0     21.3     12.9      21.3         19.7
   in pay unit (coefficient × 100)             (7.8)     (11.1)   (7.3)    (15.4)    (8.4)        (9.6)
II. Treated individual with wage > median      -7.5      -1.8     -6.4      5.6       2.4         -8.3
   in pay unit (coefficient × 100)             (9.9)     (10.2)   (8.2)    (18.9)    (11.4)       (9.1)
II - I                                         -28.1     -20.8    -27.7    -7.3      -18.9        -28.1
                                               (12.1)    (15.8)   (10.3)   (23.8)    (13.5)       (13.2)
P-value for exclusion of treatment effects     0.02      0.23     0.01     0.68      0.04         0.08

Notes: Standard errors, clustered by campus/department, are in parentheses. "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. Models are based on specifications 2, 6 and 10 of Table 5. For additional details see notes to Table 5 and text.
Appendix Table A7: Effect of Predicted Mobility on Search and Satisfaction Treatment Effects

                                                Likely to Look for New Job     Satisfaction Index
                                                (1-3 scale)                    (10 point scale)
                                                (1)          (2)               (3)          (4)
Panel A: Workers with wage < median
Treated individual (coefficient × 100)          11.1         -25.9             -9.3         -11.7
                                                (4.6)        (16.7)            (4.5)        (14.5)
Treated individual × predicted probability      --           1.5               --           0.1
  of search (coefficient × 100)                              (0.6)                          (0.6)
Predicted probability of search                 --           3.5               --           -0.1
                                                             (0.3)                          (0.3)
Controls for campus × (staff/faculty)
  and cubic in wage?                            Yes          Yes               Yes          Yes

Panel B: Workers with wage > median
Treated individual (coefficient × 100)          -3.7         -14.6             3.4          19.2
                                                (4.8)        (12.8)            (4.0)        (10.4)
Treated individual × predicted probability      --           0.6               --           -0.8
  of search (coefficient × 100)                              (0.6)                          (0.5)
Predicted probability of search                 --           3.5               --           0.1
                                                             (0.3)                          (0.3)
Controls for campus × (staff/faculty)
  and cubic in wage?                            Yes          Yes               Yes          Yes

Notes: Standard errors, clustered by campus/department, are in parentheses. "Wage" refers to total UC payments in 2008. Pay unit refers to faculty or staff members in an individual's department. The predicted probability of search is the predicted value from a probit model estimated over the control group where the dependent variable is 1 if the respondent reports being "very likely" to be searching for a new job, with age, gender, tenure, faculty/staff, campus, and time in position dummies as covariates. See note to Table 5 for definitions of the dependent variables.