The Roots of Bias on Uber
Benjamin V. Hanrahan, Ning F. Ma, Chien Wen Yuan
College of Information Sciences and Technology, The Pennsylvania State University, USA
bvh10@ist.psu.edu, nzm37@ist.psu.edu, tuy11@psu.edu
Abstract. In the last decade there has been a growth in digitally mediated workplaces: workplaces that are primarily mediated via an often proprietary, algorithmically managed platform, through which the majority of interactions between stakeholders take place. In fact, the replacement of the relationships between stakeholders by the platform is a key aspect of these workplaces, and the root of the decrease in contractual responsibility each stakeholder has to the others. In this paper we are particularly interested in how a particular lack of accountability, bias, plays out on Uber, a ridesharing application and digitally mediated workplace.
1 Introduction
Recently, there has been a growth in digitally mediated workplaces, which are partially defined by the particular stakeholder structure that they rely on: a platform creator, who is responsible not only for defining and implementing the functionality of the platform, but also the policies around the workplace that the platform implements or supplements; a worker, who uses the platform to find, claim, and obtain remuneration from jobs; and a client, who uses the platform to procure and pay for labor. This structure is instantiated by a number of different platforms for a number of different purposes, e.g.: Amazon Mechanical Turk (AMT), where the worker is part of the crowd and the client requests work from the crowd; Fiverr, Upwork, or Elance, which are primarily for freelancers to sell their services to clients; and Uber (the focus of this paper), where the worker is the driver and the client is the passenger.
A key aspect of a digitally mediated workplace is that the, usually proprietary, platform (e.g. AMT, Fiverr, Ola, TaskRabbit, or Uber) replaces much of the relationship between the worker and client, or the employee and employer. This also drastically alters, if not eradicates, the contractual responsibilities of each stakeholder to the others and reduces the level of accountability all around (sometimes discussed as algorithmic accountability (Lustig et al., 2016; Lee et al., 2015; Wagenknecht et al., 2016)). In this paper, we are interested in a particular aspect of this accountability: protection against bias for the worker and the client.
There is already evidence that bias and discrimination are having a demonstrable impact on the stakeholders of these platforms (Hannák et al., 2017; Edelman and Luca, 2014). However, existing work has looked more at the existence of bias and less at how biased decisions are performed on or via these platforms. To begin to investigate how to design against bias more broadly on these platforms, we first need to look at how bias specifically occurs, and what its roots might be, on a specific platform. The platform that we report on in this paper is Uber, a ridesharing or Transportation Network Company, where passengers obtain rides from drivers. Drivers must use their own cars and obtain rides via the Uber app.
We argue that, while Uber certainly is not wholly a digital workplace, it is a digitally mediated workplace. That is, while there are certainly face-to-face interactions between the driver and the passenger, these exchanges are arranged via the Uber app and the consequences of the interactions are mediated by the app. So Uber serves as an interesting mixed setting for a digitally mediated workplace, as consequences of face-to-face interactions are both captured and propagated through the digital platform.
This means that, while there are face-to-face interactions between the worker and client, whatever rating they give each other is mediated solely through digital means. That is, there is no human in the loop to take different factors into account or impart a level of flexibility or subjectivity to the process. As these ratings have a real impact on both the driver's and passenger's ability to provide and procure services, this opens up an avenue for unfettered biased judgements that are propagated by the platform (Mcgregor et al.). To best illustrate our point, we offer this speculative comparison. In an existing, more traditional taxi service, if a passenger would like to make a biased complaint they must call a supervisor, or at the very least a representative of the taxi service. During this call, there is a likelihood that the supervisor will uncover or detect the bias, due to the existing relationship between the supervisor and the driver, in addition to the supervisor's judgement as to the validity and veracity of the complaint. So there is at least some level of human mediation when fielding complaints. Contrast this with a biased complaint on Uber, where the only signal of the complaint might be a rating, and where all of the nuance and reasoning behind it are neither interrogated nor even captured by the system. This biased judgement then propagates throughout the system, since that rating is used by the system and its users to determine which driver to send to a job.
In this paper, we draw on a similar methodology to Martin et al. (2014) and examine what discussions Uber drivers are having about bias online. As we argue that Uber is a digitally mediated workplace, online forums are where the shop talk happens, just as they are for Turkers. For this paper, we looked at whether and how drivers discussed the effect of biases on them, or even their own biases. We report some of our preliminary findings on how biases play out both by and towards drivers on Uber, and what the role of the platform is. In this way, we are beginning to look at how the same phenomena that led to protections for workers and customers in traditional workplaces are reoccurring in digitally mediated ones. Analyzing the practice of bias is the first step towards designing functionality to govern these digitally mediated workplaces analogous to the policies that govern traditional ones.
2 Related Work
In this section we review research into the digital mediation of work and how biases may be enacted in the workplace.
2.1 Peer-to-peer platforms and technology mediation
Beyond technological tools that mediate work, like email (Hinds and Kiesler, 1995), instant messaging (Isaacs et al., 2002), or social network sites (Dimicco et al.), peer-to-peer (P2P) platforms like Uber, Lyft, or Ola are digitally mediated workplaces where workers manage their tasks and negotiate transactions with their clients both online and offline. While the task is completed offline, such as the driver taking the passenger to a destination and potentially engaging in social interaction along the way, many practices are structured by the technological features and computational algorithms of the platforms. Automated dispatch systems use genetic or optimization algorithms and devices with built-in GPS to match drivers with passengers in real time based on geo-locations (Karande and Bogiri, 2015; Rawley and Simcoe, 2013). Fares and payment rates are set based on locations, times of day (e.g., higher in rush hours), and the services requested (e.g., single ride or shared ride). In addition to real-time data, Uber assigns work to drivers and allows passengers to request services based on historical data, namely the rating system on the platform (Ahmed et al., 2016).
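To make the dispatch step concrete, the following is a minimal sketch, in Python, of the kind of nearest-driver assignment described above. It is purely illustrative: the function names and the greedy nearest-driver heuristic are our own assumptions, as production dispatch algorithms are proprietary and considerably more elaborate (weighing ETAs, traffic, and the historical rating data discussed above).

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_driver(request, drivers):
    """Assign the nearest available driver to a ride request.

    `request` is a (lat, lon) pair; `drivers` is a list of dicts with
    'id', 'lat', 'lon', and 'available' keys. A real dispatcher would
    also fold in ratings, ETA, and demand prediction.
    """
    candidates = [d for d in drivers if d["available"]]
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: haversine_km(request[0], request[1],
                                          d["lat"], d["lon"]))
```

The relevant point for this paper is that once historical signals such as ratings are folded into an automated assignment like this, they propagate with no human review.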
Much previous work has investigated issues revolving around such computing systems and algorithms, and their influence on users. Automated dispatch systems may deploy drivers outside their familiar geographic areas (Hsiao et al., 2008). While this allows drivers to acquire information about some potential hotspots, it also demands that drivers develop temporal and spatial knowledge. Devices with GPS systems shape drivers' wayfinding and navigation skills and potentially change the social dynamics of the riding process between drivers and passengers (Girardin and Blat, 2010; Hsiao et al., 2008). Given these influences on practices and work around P2P platforms, the most prominent issue with these algorithms and systems is their lack of transparency to users (Lustig et al., 2016). Despite this invisibility and inaccessibility, users still have to make sense of how to interact with the systems in order to manage their work (Lee et al., 2015), rely on the digital infrastructure to quantify their work and establish their accountability via the rating system (Scott and Orlikowski, 2012), or deal with potential offline consequences, like the uncertainty of finding the next customer, when taking requests from the dispatch system (Ahmed et al., 2016).
Designed to collect data to facilitate the coordination or even prediction of human work, computing systems and algorithms are often valued for their instrumental functions. Given the issues identified above, however, computing systems and algorithms may not be as neutral and objective as they seem (Kneese et al., 2014). It is possible that the digital infrastructure imposes and reinforces biases, intentionally or unintentionally, against users (Wagenknecht et al., 2016). In this study, we complement prior work by exploring and identifying some existing biases. We include experiences from both workers and clients, using Uber as our target platform, in an attempt to draw a holistic picture of this issue.
2.2 Biases in the workplace
Biases usually refer to stereotypical generalizations, based on sociodemographic or physical characteristics of certain groups, that are assigned to individual group members. Previous research has reported gender bias (Heilman, 2012), ageism (Rupp et al., 2006), racial bias (Rosette et al., 2008), and weight bias (Rudolph et al., 2009) in the workplace. These biases are associated with inequality in employment decisions, career advancement, performance expectations, workload, overall evaluations, etc.
While these biases are prevalent in physical workplaces, because the characteristics and attributes that elicit implicit or explicit biases are visible and obvious there, they do not disappear when the work is mediated. Research has reported that biases also take place on technological platforms. For example, workers on TaskRabbit used geolocation to evaluate whether to accept a task, and were found to tend to avoid distant and less well-to-do areas (Thebault-Spieker et al., 2015). Clients, on the other hand, may choose workers on these P2P platforms based on their gender and race, whether the tasks are completed in physical or virtual contexts (Hannák et al., 2017). Workers also have to have adequate equipment, like bank accounts, a smartphone with built-in GPS, or, in UberBlack's case, a fancy car, to be able to provide services at all (Kasera et al., 2016).
In addition to biases rooted in sociodemographic and physical factors, we argue that in the digitally mediated workplace these biases can be reinforced and propagated by the digital infrastructure.
The rating system on Uber represents a record of drivers' work performance and is used to evaluate their eligibility to receive service requests. However, there is no clear metric, such as driving skill, safety, or route-picking strategy, by which performance is evaluated. Instead, drivers may have to engage in "emotional labor," in which they need to quickly build "micro-relationships" that make passengers feel good, so as to get good ratings (Nardi, 2015; Rogers, 2015; Mcgregor et al.). Such emotional labor is easily influenced by random factors, and the efficacy and accuracy of the rating system might benefit from a more holistic evaluation (Lee et al., 2015).
In addition, while racial and gender biases are said to be mitigated by Uber's matching algorithm, Mcgregor et al. pointed out that the algorithm actually denies users the ability to choose their desired drivers or passengers, and therefore deepens the negative effect of expected homophily for both drivers and passengers. The consequence may be a lowered rating. On the Uber platform, drivers usually have to respond to requests within 15 seconds, without knowing the destination or expected fare. Under the threat of deactivation from the platform, Uber drivers often do not have sufficient time for decision-making (Rosenblat and Stark, 2016) and have to deal with offline consequences reinforced by the platforms (Ahmed et al., 2016).
In our study, we explore several different occurrences of biased practices and
judgements that are either enabled by the digital infrastructure or rooted in an aspect
of it.
3 Method
To investigate if and how Uber drivers discuss bias in the workplace, we borrowed heavily from the approach taken by Martin et al. (2014) in their study of Turkers' issues and concerns. We focused on the most popular forum for Uber drivers, https://uberpeople.net. The primary way that we differ from Martin et al. (2014) is that we were interested in a specific topic, and did not let all of the topics that concern Uber drivers emerge from our study. That said, we still took an exploratory approach to our investigation of bias in the workplace, looking at all forms and instances, e.g. not just biases on the part of passengers, but also biases expressed by the drivers on the forums.
UberPeople is a forum that is run by drivers, for drivers. Its users come from major cities around the world, with most active members located within the US. The forum is divided into many sections; the ones that we looked at most closely were community related: Advice, Stories, People, and Complaints. The Advice section is the most active, closely followed by Complaints; the other two, Stories and People, have significantly less activity. The primary source of the content in this paper is the Complaints section.
Over two months, we collected content from the various posts on the forum, gathering threads posted between January 2015 and February 2017 that were relevant to the bias theme. From the collected threads, in this preliminary paper we report on 16 selected threads that represent a range of biased practices and scenarios in the workplace. To gauge how broadly applicable and valid the content of the different posts was, we looked at the responses by the community. For instance, if a user wrote a post making an uncommon, potentially outrageous claim, then the community would respond in kind. That said, outrage at a claim of bias is not uncommon, so we took care to avoid false negatives. However, if the community is supportive and in agreement, this is a strong sign that a phenomenon is valid. For any thread that contains a mix of opinions on the part of the forum users, we present both sides of the argument. All the selected posts and threads are categorized as being rooted in either a lack of transparency or a lack of recourse. While presenting the different themes that emerged, we note whether the biases impact drivers or passengers.
4 Findings
In our reading of the forum we saw a number of themes emerge around the discussion of biases on Uber: some were enabled by the platform, some were enacted by the platform itself, and some were clearly due to the behaviors of the different stakeholders. That is, some biases are seen as inherent in the design of the Uber marketplace and tool, while other biases may be propagated or supported by the system unwittingly, but clearly originate from one stakeholder and are clearly directed at another. Somewhat surprisingly to us, we found a diverse set of biases. That is, while we expected to see – and did see – biases that impacted the drivers (drivers were, after all, the primary users of the forum), we also saw discussions about biases aimed towards passengers, by both the drivers and the platform structure. We saw two main roots to the perception or practice of bias: a lack of transparency, which manifested mostly in the rating system; and a lack of recourse, since there was no clear way to contest what drivers saw as biased judgements, drivers developed strategies which themselves contained biases.
4.1 Biases Rooted in a Lack of Transparency
One of the frustrations that drivers had with Uber's rating system is that it is not terribly transparent with respect to passengers' ratings, which was especially concerning when the drivers had received low ratings.
The reason why we need to know who rated to be able to fix any issue ... This system
will make riders more accountable before they ruin someones life. - F1
At times, this lack of transparency leads drivers down a path of suspicion. It is hard for the drivers to know what exactly they did to deserve a poor rating, and they begin to speculate about a variety of reasons. When drivers belong to a minority and are receiving low ratings for reasons that are unknown to them, they begin to speculate – with ample reason at times – that it is related to a particular bias.
4.1.1 Racial
Drivers are clearly aware of the potential for biased ratings, as well as the inability to
know whether or not bias has influenced their ratings. Drivers are certainly worried
that biased ratings might be impacting them.
If I were black and got deactivated I’d be screaming from the hilltops about racism.
It’s probably THE best argument against the rating system there is... Ageism is abso-
lutely a factor too. But if you are an older black male I would say it’s worse... But the
bottom line is the ratings are unfairly applied. It probably depends on the area and the
demographics of the customer base as to HOW they are unfairly applied. But anyone
who thinks race isn’t a factor (and ageism and sexism) in any system is deluded. - F2
One user believed that they were suffering from biased ratings, which was particularly problematic as they had just started and were in danger of being deactivated.
This is my 4th day driving. My rating now stands at 4.64... I just can’t figure out why
my rating are borderline deactivation level. This is crazy. I’m curious, especially to
hear from other young(ish) black male drivers if they are constantly on the borderline
as well. I hate even having to bring up this topic, but honestly I don’t know what else
I could even be doing to bring my rating up. - F3
Conversations around biases, particularly racism, seem to become contentious
fairly quickly on the forum (similar to other venues). When the issue is specifically
called out by a user, passionate voices fall on both sides of the issue. Some minimize
and deride the claim of bias:
Every bad thing in your life that happens to you is racially motivated. “The man” is
out to get you. - F4
Others provide support and counter other members to defend the original poster:
You can talk all the sh!t you like, I am a black man in America, I see, hear and
experience racism on a weekly basis. - F5
4.1.2 Other Biases
There were other biases related to language that one driver claimed to have noticed.
I’ve noticed a number of posts by poor-English speakers about bad ratings. That’s
probably one of the most difficult biases to overcome. - F6
One user hypothesized that all manner of biases are probably at play in the rating
system.
Of course the crowd-sourced rating system is racist. Probably sexist and ageist too.
Ugly people get lower ratings than attractive people too. - F7
It seems clear that the lack of transparency behind the reasoning for passengers' ratings is opening the door to biased ratings that go unchecked by the system. At the very least, this lack of accountability is leading to a lot of suspicion. Drivers even speculated that Uber assigns certain types of drivers to certain types of areas:
I think as much as possible Uber tries to send us black drivers into the “hood”.... To
pick up black passengers.... This morning I was at the air port the 3rd one to go
out....when I get a ping...I look at my phone, and see the pax is 25 min away and has
a very ethnic specific name - F10
This was met with skepticism from other drivers, however, who encouraged the poster to be more selective about the types of neighborhoods and distances they traveled to for their passengers.
4.2 Strategies in Response to Perceived Bias
While there is evidence on the forums that drivers are impacted by the biases of passengers, there is also evidence of the various strategies that drivers have developed in response.

Passengers themselves are not immune to the biases of the drivers either. The biases that we saw on the part of the drivers were, surprisingly, rooted in practices that drivers had enacted as a strategic response to the perception of passenger biases.
4.2.1 Ignore
One of the more innocuous strategies, which drivers suggested in the earlier example in response to F3, was to simply tolerate the bias as a part of doing business. They advised not to worry about it, as cases of bias are absorbed by the majority of good, decent passengers, and as time went on these incidents had less and less impact on one's overall rating.
It only takes one rider to dent your rating when you’re new. I wouldnt worry too much
just yet. - prk
Seriously, do not worry about your rating this early in the game. I get the exact same
BS feedback you got at 4.92 ratings after 500 plus rides. - F11
Simply tolerating this intolerance is, however, anathema to the zero-tolerance policy to which Uber subscribes (https://www.uber.com/legal/policies/zero-tolerance-policy/en/).
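The advice above is, at bottom, a claim about the arithmetic of averages. Assuming the rating is a simple running mean over all rides (an illustrative assumption on our part, not something Uber documents), a short calculation shows why a single biased 1-star rating is far more dangerous to a new driver than to an established one:

```python
def mean_after_one_star(current_mean, n_rides):
    """Mean rating after one additional 1-star rating, assuming
    the displayed rating is a simple running mean."""
    return (current_mean * n_rides + 1.0) / (n_rides + 1)

# A new driver with ten 5-star rides is dented badly by one 1-star;
# a veteran with 500 rides barely moves.
print(round(mean_after_one_star(5.00, 10), 2))   # 4.64
print(round(mean_after_one_star(4.92, 500), 2))  # 4.91
```

These regimes echo the forum: F3's borderline 4.64 after four days and F11's stable 4.92 after 500-plus rides are consistent with this arithmetic.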
4.2.2 Retaliation, Protest
In one case, a driver frustrated by receiving poor ratings for inscrutable reasons decided to take a protest action: giving any and all passengers that they drove that day a poor rating.
Ok. So since Uber doesnt let us know who give us a bad rating and leave us guessing.
I decided to punish all riders of the day if my rating goes down .01 point. ... I think
we have the right to know who rate us bad and the reason. Otherwise i will use this
method. I know it wont matter. But when the rider check their ratings they will see
how it dipped down too. - F1
In another thread discussing the effect of biased ratings on drivers, the conversation turned towards speculation about 'certain areas' and 'stupid biases' being the source of poor ratings. One user had taken a similarly oppositional practice of awarding high ratings only to exceptional passengers.
Im done worrying about riders so much. If you work around certain areas. Youll
realize your rating drops even if you keep the cleanest car and is the best driver. Now
the pax needs to amuse me to get over 4 stars. Stupid Biases and complexes really get
in the way. - F14
4.2.3 Avoidance
The instances of driver bias towards passengers mostly surfaced in how the drivers tried to avoid certain areas or types of passengers.
One example is a driver who, after a bad experience with passengers from the Black Entertainment Television awards, experienced a dip in their rating and came to this conclusion:
I’m not ignorant of the racial tensions in this country right now. I’m sure there’s some
real animosity. I think there’s something about Rap too that brings out the hate. Now
when I see a group of black guys I’m automatically going to just hit cancel. I hate
saying that too because I love my black friends but what are you going to do. - F9
In this same thread, other drivers provided numerous counter examples where
they had positive experiences with African American passengers. Clearly, there is
the potential for drivers’ biases to impact passengers’ ability to procure a ride.
A different driver had a set of much more blatantly racist complaints about a different group of riders, framing them as others who inhabit a different world of sorts.
1 They do not know this is a ride-sharing. They treat you like a low-educated, no-skill
cab driver. 2 They intentionally make you wait for up to 5 minutes 3 They ask you
drive up to the front door even they live in an apartment complex...4 Most of them
have very strong body odors... 5 Most of their rides are a $4 trip including pick up
from or go to the Indian grocery store or Indian restaurant...7 They never tip...8 They
gave you wrong directions and blame you taking the longest route from point A to
B. 9 They give you lower rating too. In their world, a 5star is impossible and never
exists. - F8
In these avoidance strategies, "experienced" drivers make use of Uber's cancellation policies, which provide a loophole for drivers who want to avoid passengers while suffering few, if any, consequences. These strategies do have a negative impact on the passengers, which can be seen in one of the rare instances of a passenger posting to the forum.
This guy wasted my time (which apparently was very precious in that span), didn’t
answer my calls, THEN had the nerve to charge me a cancellation fee! Isn’t there
some way to rate this guy as unprofessional? I have his ID number. - F12
This passenger was canceled on by the driver on a day of severe weather. Due to the design of the app, the passenger was charged a fee even though the trip was canceled. This shows that there is at least a reciprocal avenue through which passengers can be impacted by drivers' biases.
5 Discussion
When we set out to start this study, we expected to see drivers discussing the impact of biases on themselves. What surprised us was the candor with which they discussed their own biases and how these biases impacted passengers. One forum member felt that the various avoidance strategies that drivers used were being reinforced by the various pricing strategies that Uber employs.
Uber has brought back redlining with its boost incentives. It is subsidizing the rides
of the well off, mostly white riders on the west side and leaving minorities and lower
income residents in Central LA and South LA with fewer drivers. Uber, ..., are the
ones responsible for ride share redlining ... - F13
Redlining is a practice also seen in more traditional taxi companies, where companies refused fares from low-income communities. This practice was dealt with via legislation, but now seems to be reemerging on Uber.
5.1 Transparency
When biases are more apparent and obvious, Uber is able to take action. Such was the case when a same-sex couple in Raleigh, NC was kicked out of an Uber driver's car; their story was covered in the media and later discussed in the forum with mixed voices. Uber released a statement and blocked the driver from giving rides on Uber. However, the small instances of bias that we have seen evidence of, be it by drivers or passengers, are much more difficult to trace and take action on.
A great deal of the problem of detecting and preventing bias is rooted in the lack of transparency in the rating system. This lack of transparency leads to a lack of accountability in how ratings are given, as well as to suspicion among drivers that biases are affecting their status. The biases that we saw in our study were either directly enabled by the lack of transparency, or arose in direct response to the presumption of bias due to this lack of transparency.
5.2 Design Implications
There are two preliminary design implications from our findings. First, we would argue for a higher degree of transparency about the reasons behind low ratings. This could take the form of pinging the author of a low rating for additional, more qualitative feedback that the driver could act on. Second, Uber could better leverage the various data points that it gathers about the ratings and interactions between a particular passenger and different drivers, or between a particular driver and different passengers. For instance, Uber may be able to track a certain passenger's reactions to different demographics and use this information to reduce the weight of that person's ratings. Perhaps the passenger could even be confronted with this perceived bias, as it may be implicit and unrealized, so that they can act to remedy it.
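As a sketch of the second implication, per-rater weights could be computed by comparing a passenger's average ratings across driver demographic groups. Everything below is hypothetical: the group labels, gap threshold, and penalty are illustrative parameters of our own, not anything Uber is known to compute, and the gap itself is a noisy, confounded signal.

```python
from collections import defaultdict
from statistics import mean

def rater_weights(ratings, penalty=0.5, gap_threshold=0.5):
    """Down-weight raters whose average rating differs sharply
    across driver demographic groups.

    `ratings` is a list of (rater_id, driver_group, stars) tuples.
    Hypothetical sketch: the threshold and penalty are arbitrary.
    """
    by_rater = defaultdict(lambda: defaultdict(list))
    for rater, group, stars in ratings:
        by_rater[rater][group].append(stars)

    weights = {}
    for rater, groups in by_rater.items():
        group_means = [mean(stars) for stars in groups.values()]
        gap = max(group_means) - min(group_means)
        # A large gap across groups is a (noisy) signal of possible bias.
        weights[rater] = penalty if gap > gap_threshold else 1.0
    return weights

def weighted_driver_rating(driver_ratings, weights):
    """Weighted mean of a driver's (rater_id, stars) pairs."""
    num = sum(weights.get(r, 1.0) * s for r, s in driver_ratings)
    den = sum(weights.get(r, 1.0) for r, _ in driver_ratings)
    return num / den if den else None
```

Because geography and other factors confound such a signal, it would, as suggested above, arguably be better used to prompt feedback to the passenger than to silently discount them.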
6 Limitations
Our preliminary study has obvious limitations in the length of time that we have been collecting data and the breadth of data that we have collected. That said, we feel that we have several concrete examples of a relatively rarely discussed phenomenon, examples that map to the bias that other researchers have reported on these platforms. We have also begun to identify some of the strategies that drivers have taken in response to perceived bias.
References
Ahmed, S. I., N. J. Bidwell, H. Zade, S. H. Muralidhar, A. Dhareshwar, B. Karachiwala, C. N. Tandong, and J. O'Neill (2016): 'Peer-to-peer in the Workplace'. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16. New York, New York, USA, pp. 5063–5075, ACM Press.
Dimicco, J. M., D. R. Millen, W. Geyer, and C. Dugan. ‘Research on the Use of
Social Software in the Workplace’.
Edelman, B. and M. Luca (2014): ‘Digital Discrimination: The Case of
Airbnb.com’. Harvard Business School, pp. 21.
Girardin, F. and J. Blat (2010): ‘The co-evolution of taxi drivers and their in-car
navigation systems’.
Hannák, A., C. Wagner, D. Garcia, A. Mislove, M. Strohmaier, and C. Wilson (2017): 'Bias in Online Freelance Marketplaces: Evidence from TaskRabbit and Fiverr'.
Heilman, M. E. (2012): ‘Gender stereotypes and workplace bias’. Research in
Organizational Behavior, vol. 32, pp. 113–135.
Hinds, P. and S. Kiesler (1995): 'Communication across Boundaries: Work, Structure, and Use of Communication Technologies in a Large Organization'. Organization Science, vol. 6, no. 4, pp. 373–393.
Hsiao, R.-L., S.-H. Wu, and S.-T. Hou (2008): ‘Sensitive cabbies: Ongoing sense-
making within technology structuring’. Information and Organization, vol. 18,
no. 4, pp. 251–279.
Isaacs, E., A. Walendowski, S. Whittaker, D. J. Schiano, and C. Kamm (2002):
‘The character, functions, and styles of instant messaging in the workplace’. In:
Proceedings of the 2002 ACM conference on Computer supported cooperative
work - CSCW ’02. New York, New York, USA, p. 11, ACM Press.
Karande, N. B. and N. Bogiri (2015): ‘Solution To Carpool Problems using Genetic
Algorithms’. International Journal of Engineering and Techniques, vol. 1, no. 3.
Kasera, J., J. O'Neill, and N. J. Bidwell (2016): 'Sociality, Tempo & Flow: Learning from Namibian Ridesharing'. In: Proceedings of the First African Conference on Human Computer Interaction - AfriCHI'16. New York, New York, USA, pp. 36–47, ACM Press.
Kneese, T., A. Rosenblat, and d. boyd (2014): 'Understanding Fair Labor Practices in a Networked Age'. SSRN Electronic Journal.
Lee, M. K., D. Kusbit, E. Metsky, and L. Dabbish (2015): 'Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers'. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. pp. 1603–1612.
Lustig, C., K. Pine, B. Nardi, L. Irani, M. K. Lee, D. Nafus, and C. Sandvig (2016):
‘Algorithmic Authority: the Ethics, Politics, and Economics of Algorithms that
Interpret, Decide, and Manage’. In: Proceedings of the 2016 CHI Conference
Extended Abstracts on Human Factors in Computing Systems - CHI EA ’16. New
York, New York, USA, pp. 1057–1062, ACM Press.
Martin, D., B. V. Hanrahan, J. O’Neill, and N. Gupta (2014): ‘Being a turker’. In:
CSCW 2014. pp. 224–235.
Mcgregor, M., B. Brown, M. Glöss, and A. Lampinen. 'On-Demand Taxi Driving: Labour Conditions, Surveillance, and Exclusion'.
Nardi, B. (2015): ‘Inequality and limits’. First Monday, vol. 20, no. 8.
Rawley, E. and T. S. Simcoe (2013): ‘Information Technology, Productivity, and
Asset Ownership: Evidence from Taxicab Fleets’. Organization Science, vol. 24,
no. 3, pp. 831–845.
Rogers, B. (2015): ‘The Social Costs of Uber’. The University of Chicago Law
Review Dialogue, vol. 82, no. 85, pp. 85–102.
Rosenblat, A. and L. Stark (2016): 'Algorithmic Labor and Information Asymmetries: A Case Study of Uber's Drivers'. International Journal of Communication, vol. 10, pp. 3758–3784.
Rosette, A. S., G. J. Leonardelli, and K. W. Phillips (2008): 'The White standard: Racial bias in leader categorization'. Journal of Applied Psychology, vol. 93, no. 4, pp. 758–777.
Rudolph, C. W., C. L. Wells, M. D. Weller, and B. B. Baltes (2009): ‘A meta-
analysis of empirical studies of weight-based bias in the workplace’. Journal of
Vocational Behavior, vol. 74, no. 1, pp. 1–10.
Rupp, D. E., S. J. Vodanovich, and M. Crede (2006): 'Age Bias in the Workplace: The Impact of Ageism and Causal Attributions'. Journal of Applied Social Psychology, vol. 36, no. 6, pp. 1337–1364.
Scott, S. V. and W. J. Orlikowski (2012): ‘Reconfiguring relations of accountability:
Materialization of social media in the travel sector’. Accounting, Organizations
and Society, vol. 37, no. 1, pp. 26–40.
Thebault-Spieker, J., L. G. Terveen, and B. Hecht (2015): ‘Avoiding the South Side
and the Suburbs’. In: Proceedings of the 18th ACM Conference on Computer
Supported Cooperative Work & Social Computing - CSCW ’15. New York, New
York, USA, pp. 265–275, ACM Press.
Wagenknecht, S., M. K. Lee, C. Lustig, J. O’Neill, and H. Zade (2016): ‘Algorithms
at Work: Empirical Diversity, Analytic Vocabularies, Design Implications’.