Perceived Role Relationships in Human-
Algorithm Interactions: The Context of Uber
Drivers
Short Paper
Xinru Page
Bentley University
Waltham, MA
xpage@bentley.edu
Marco Marabelli
Bentley University
Waltham, MA
mmarabelli@bentley.edu
Monideepa Tarafdar
Lancaster University
Lancaster, United Kingdom
m.tarafdar@lancaster.ac.uk
Abstract
As individuals increasingly interact with algorithms in a work context, it is important to
understand these new types of ‘human-algorithm’ relationships. We investigate the
human-algorithm interaction between Uber drivers and the Uber driver app in
managing customers, routes and fares. This research-in-progress paper reports on
initial findings from an ongoing study, from interviews with ten Uber drivers in the
United States. Our findings illustrate that Uber drivers experience role ambiguity and
role conflict as they attribute different roles to the algorithms embedded in their app.
The literature shows that ambiguity and conflict create workplace uncertainty. We
expand on it by identifying several new sources of role ambiguity and role conflict that
emerge between the driver and the algorithm. Our initial results are positioned within
the literature that studies the emerging role of algorithms at work.
Keywords: Human-computer interaction, Human-algorithm interaction, Role Conflict,
Role Ambiguity, Workplace Relationships
Introduction
The last decade has witnessed the widespread diffusion of (more or less) portable devices equipped with
applications running algorithms that interact with their users (Beer 2009; Loebbecke and Picot 2015;
Newell and Marabelli 2015). The human-algorithm interactions emerging from the intense use of these
devices are meaningful because they disrupt the traditional, one-way relationship between humans
and machines (Marabelli et al. 2017; Urquhart and Rodden 2017). These algorithms are built and trained
to develop a ‘relationship’ with the user that underpins a series of sociotechnical dynamics that vary
across devices and their users (Holzinger 2016). For instance, Amazon Echo might be considered a
‘helper’ because it responds to our requests by suggesting music to listen to or providing the meaning of
words we do not know. Waze could be seen as a ‘buddy’ that both gives and accepts instructions; it
provides hints pointing to the fastest (or cheapest, or shortest) route while relying on the user to inform
Waze of road conditions. Algorithms embedded in transportation robots, such as those Amazon uses to
manage storage processes in its warehouses alongside specialized employees, may represent more of a
‘boss’, as the robots instruct workers when and how to pick and assemble orders that need to be shipped.
Investigating how an algorithm can be perceived in multiple ways by the human whose task is mediated
by it extends the HCI (human-computer interaction) literature on human behavior relating to algorithms
(e.g., Crabtree and Mortier 2015; Dix 2009; Lee and Baykal 2017). Specifically, analyzing how various
users perceive an algorithm from a role/relationship perspective (e.g., something to rely on or something
to challenge or question) helps us build a theoretical understanding of the nuances of human behavior
when interacting with IT artifacts that take on the role of human colleagues (in that they are supposed to
give ‘human-like’ responses and learn from our reactions). Practical implications have to do with how
these algorithms respond to human actions and the different roles users might perceive them in. We
therefore frame the human–algorithm relationship in the theoretical edifice of the literature that
addresses uncertainties in workplace roles and relationships, and we draw on role conflict and role
ambiguity theory (Graen 1976; Katz and Kahn 1978). We address the following research questions: (1) Are
ambiguities and conflicts in workplace roles manifested in the human-algorithm relationship? (2) What
issues shape these ambiguities and conflicts?
Our research focuses on Uber drivers and the algorithms (embedded in a specific for-driver app on
smartphones) that provide the driver with customer locations, expected charge/compensation, routes,
pick up and drop off choices, customer details and so on. In this paper we report preliminary findings
derived from a set of exploratory interviews with Uber drivers. In particular, we investigate whether and
how Uber drivers experience role ambiguity and role conflict as they attribute a variety of different roles
to the algorithm, as part of an ongoing and dynamic relationship with it.
Background
Organizational Roles and Human Expectations
In the management and sociology literatures, organizations have been defined as systems of
roles (Katz and Kahn 1978). Every position in an organization has a specific set of tasks or responsibilities
associated with it. These are determined by the person’s ‘role’ in the organization. Roles therefore form
the context that determines the task and function-related expectations and responsibilities of the
individual within the organization (Cooper et al. 2001; Perrone et al. 2003); these expectations and
responsibilities guide an individual’s behavior and his or her interaction with co-workers. Roles have
traditionally been regarded as embodiments of the patterns of human connectedness within which human
behavior in organizations takes place (McGrath 1976).
The implementation of IT causes changes in work, routines, and organizational structure. It can thus
change an individual’s role in two ways. The first way is through a change in the material artifacts (i.e.
physical and technical systems) with which individuals interact (Graen 1976). For instance, when an
enterprise system (ES) is introduced, individuals are required to perform new tasks, such as use the
system’s screens and functionalities as prescribed by the system’s configuration. They are also not
required to do certain other things, such as follow up a paper trail of documents with individuals in other
departments. The second way is through a change in the social and cultural systems in which individuals
work. These systems consist of reporting, hierarchical, departmental, and authority structures within the
organization (Graen 1976; Katz and Kahn 1978). The ES, for example, makes follow-up and interaction on
specific tasks redundant because of workflow automation. This creates a change in how individuals
interact with each other. These examples illustrate the more ‘traditional’ human-computer interaction
where people ‘use’ the system (in this case, the ES) to accomplish their everyday tasks (Marabelli and Newell
2009; Markus and Tanis 2000). However, if we examine the same interactions in the context of
algorithms embedded in modern portable devices, the interaction between humans and IT is less
straightforward, as algorithms react and respond much as humans in specific organizational roles
would (Brynjolfsson and McAfee 2014; McAfee et al. 2012). Further, given the emergent and dynamic
nature of these types of interactions, it is hard for individuals to anticipate the algorithm’s reactions to
their own actions. There are thus three important possibilities in these interactions: (1) the human is
conflicted as to what role the algorithm plays; (2) the human is not clear about what is expected of him or
her; and (3) the human may find his or her expectations regarding the algorithm’s reaction contradicted
by what the algorithm actually does. We therefore draw on the literatures on role conflict, role ambiguity
and expectation confirmation theory, to investigate these particular contexts of human-IT interaction.
Role Conflict, Role Ambiguity and Expectation Confirmation Theory
Uncertainties in workplace roles are embodied in the two concepts of Role Conflict and Role Ambiguity.
Role conflict is defined as “an incompatibility between job tasks, resources, rules or policies, and other
people” (Rizzo et al. 1970). This happens when an individual is
exposed to contradictory, incompatible or incongruent role requirements, such as when he or she is asked
to fulfill the requirements of more than one role, the expectations from which may be opposite or at odds
(Nicholson and Goh 1983, p. 149) and imply different or contradictory expectations of a person’s behavior
(Abdel-Halim 1981; Kahn et al. 1964; Rizzo et al. 1970). Role ambiguity is defined as a lack of clear,
adequate, available and consistent information about the individual’s role within the organization (Katz
and Kahn 1978; McGrath 1976). In short, role conflict concerns incompatible or contradictory
requirements of the role, whereas role ambiguity concerns a lack of clear information about it.
Role ambiguity may arise because role-related information either does not exist or has not been clearly
communicated. The individual may be uncertain about many aspects of his/her work environment;
he or she may not know what exactly to do, or may not know how to do it. Both role ambiguity and role
conflict are detrimental to the individual and ultimately the organization, because they are known causes
of decreased performance, satisfaction, and commitment (Graen 1976; Kahn et al. 1964; McGrath 1976).
Indicators of role conflict include multiple and incompatible requirements from the job, demands from
different supervisors, requests from different colleagues, and multiple definitions of one role. Indicators
of role ambiguity include the individual perceiving a lack of a reasonably clear ambit for his or her tasks,
lack of specific information about key job aspects such as who the supervisor/supervisees are, how he or
she will be evaluated, expectation levels of performance, consequences of low performance and control
systems for feedback (Katz and Kahn 1978; Nicholson and Goh 1983).
Expectation Confirmation Theory suggests that the individual’s continued use of an application depends
on the extent to which his or her expectations of the application are confirmed or not. Individuals form an
initial expectation of an application’s functionality based on its early use. They use the application and
assess its functionality and performance vis-à-vis their original expectation. By doing so, they determine
the extent to which their expectation is confirmed or not. The greater the extent of confirmation, the
greater their satisfaction with the application and continued use of the application (Bhattacherjee 2001).
In the case of the human-algorithm interaction as described above, given the emergent nature of the
interaction, the extent of confirmation is likely to be low. This would have implications for how the
individual would continue to use the algorithm over time.
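The logic of this cycle can be summarized schematically. The rendering below is a stylized sketch of the relationships just described, not the structural model estimated in Bhattacherjee (2001): confirmation is the gap between perceived performance and the initial expectation, and satisfaction and continued use increase with it.

```latex
% Stylized expectation-confirmation logic (our summary, not the paper's model)
\begin{align*}
  c &= p - e              && \text{confirmation: perceived performance minus expectation} \\
  s &= f(c), \quad f' > 0 && \text{satisfaction increases with confirmation} \\
  \Pr(\text{continued use}) &= g(s), \quad g' > 0 && \text{continuance increases with satisfaction}
\end{align*}
```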
Method
To understand the driver-algorithm relationship, we conducted 10 interviews over the period January –
March 2017 with drivers who use ride-hailing apps such as Uber. While one driver had just started, others
had been driving for over 2 years (average experience 1 year). We recruited by advertising the study
through email lists and social media, as well as using our personal networks. It is worth noting that
recruiting Uber drivers was particularly challenging because of recent negative publicity surrounding the
company. All Uber drivers who participated in our study operate in the US. We interviewed 3 female and 7
male drivers, half of whom were in their thirties, one in his twenties, two in their forties, and two in their
sixties. Six interviewees drove part-time to supplement their day job.
Our approach was highly exploratory. We therefore first conducted a pilot of 3 interviews following
no specific protocol. Instead, informed by past research aimed at capturing practices through interviews
(Fisher and Gitelson 1983; Nicholson and Goh 1983; Tubre and Collins 2000) we conducted informal,
open-ended and unstructured interviews aimed at collecting as much detail as possible through carefully
listening to participants’ “stories” around their use of technologies supporting their job (Kahn et al. 1964).
Preliminary analysis of the pilot interviews informed the creation of a semi-structured interview protocol
that probed on their experiences as a driver using the Uber app. This approach to qualitative data
collection involving ‘back and forth’ interactions between fieldwork and literature is in line with previous
research aimed at studying emerging (and under-studied) phenomena and positioning them within the
interpretive paradigm (Kaplan and Orlikowski 2013). We were able to uncover a number of preliminary
characteristics of the human-algorithm interaction. We used Nvivo to analyze the data (unit of analysis:
“human-algorithm interactions”) through thematic coding for overall role ambiguity and conflict,
supplemented by open-coding (Miles and Huberman 1984) to capture emergent themes within each. One
researcher coded all interviews, coming to consensus with the other authors for the final codes.
Findings and Analysis
We uncovered many significant characteristics of the Uber service as perceived by the drivers, which
influenced driver attitudes and behaviors. Drivers described the Uber algorithm as having two key
functionalities. The primary one is that of matching the driver with riders. According to our respondent-
drivers, the app’s main objectives in this process are to keep wait-times and costs low for riders, while
allowing drivers the flexibility of being their own boss in deciding when and who to drive. Thus, drivers
perceived that the Uber app should help them optimize matchmaking with riders. Drivers believed that
the Uber algorithm generates ride requests by considering the distance between driver and rider and their
previous history of being matched. Our more seasoned drivers believed that the Uber app looks at the physical
number of miles from the customer to the driver to determine who is closest, or in other words, the driver
with the smallest “radial” distance from the rider. Drivers report having a matter of seconds to accept the
request before it is passed on to another driver: “It’s around six seconds or whatever to accept or deny the
ride…but if you just leave it there it will just go to the next Uber driver who’s closer.”
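To make this folk model concrete, the sketch below renders ride-matching as our drivers describe it: offer the request to drivers nearest-first by straight-line (“radial”) distance, with a short acceptance window before it passes to the next driver. This is a minimal illustration of the drivers’ mental model, not Uber’s proprietary dispatch logic; all function and field names are ours, and the accept/decline prompt is stubbed out.

```python
import math

def radial_distance_km(a, b):
    """Straight-line (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def driver_accepts(driver, rider_loc, window_s):
    """Stand-in for the app's accept/decline prompt (here: always accept)."""
    return True

def dispatch(rider_loc, drivers, accept_window_s=6):
    """Offer the ride nearest-first by radial distance; when the roughly
    six-second window lapses or the driver declines, pass the request on."""
    for driver in sorted(drivers,
                         key=lambda d: radial_distance_km(d["loc"], rider_loc)):
        if driver_accepts(driver, rider_loc, accept_window_s):
            return driver
    return None  # no driver accepted the request

drivers = [{"id": "near", "loc": (42.37, -71.22)},
           {"id": "far", "loc": (42.35, -71.06)}]
print(dispatch((42.36, -71.20), drivers)["id"])  # "near"
```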
Drivers perceived the second main function of the algorithm to be facilitating workflow by providing support
for pickup, navigation and payment. When the driver arrives at the pickup location, ideally the customer
is there and ready to go. However, the Uber app provides suggested wait times beyond which the driver
should feel free to cancel the ride. Drivers typically cited 5 minutes, but many were more flexible and
responsive to customers requesting more time. On the other hand, drivers explained that for UberPOOL,
the acceptable time was a matter of only 1-2 minutes, after which the app pushed them to continue the ride,
since a long wait would impact the current rider (we sketch these reported thresholds below). Once the driver accepted a ride, the Uber
app assisted drivers to navigate to the rider’s location as well as to get to the final destination. Initially,
drivers only know the rider’s starting location and upon pickup, they find out the customer’s destination.
The Uber app facilitates getting the driver to the ride location by invoking the driver’s navigation app of
choice (e.g. Maps, Waze) and initializing it to the correct destination. Drivers’ local knowledge of their
area varied from being completely dependent upon the navigation app to having little need of it.
Regardless, drivers left the navigation system visible to the rider both for practical reasons and as a sign of
trustworthiness: “If you think there’s a faster way, like you just turn left when like Waze says go right,
then it will reroute and that person can see. Because I always have my phone on like a little stand thing in
the middle so they can see the route.” They would confirm with the passenger before changing routes.
However, drivers observed that it wasn’t uncommon to see other drivers hiding the interface by “putting it
on the left. I think it’s like sketchy.”
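The wait-time rules drivers reported reduce to a pair of thresholds, as the sketch below shows. The figures come straight from the interviews; the app’s actual cancellation logic is unknown to us, so this is an illustration of the drivers’ account only.

```python
def may_cancel(service, waited_minutes):
    """Wait thresholds drivers cited: roughly 5 minutes on UberX and only
    1-2 on UberPOOL, where a co-rider is already in the car."""
    limit = 2 if service == "UberPOOL" else 5
    return waited_minutes >= limit

print(may_cancel("UberX", 6))     # True: free to cancel
print(may_cancel("UberPOOL", 2))  # True: app pushes to continue the ride
print(may_cancel("UberX", 3))     # False: keep waiting
```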
The Uber app calculates the fare for a given ride and facilitates transfer of funds from the rider’s account
to the driver’s. This eliminates the need for the driver and rider to spend time doing so at the end of each
ride. The Uber app does so by tracking the distance traveled and time spent during each ride. The driver
indicates to the app when a ride starts, and also tells it when the ride ends. Drivers reported that the app
needs to have a cell signal to register the start or end of a ride, and thus problems could arise when
their phones had no signal in rural or other areas with spotty coverage. We next organize our findings around
role conflict and role ambiguity.
Role Ambiguity for the Uber Driver
We observed a number of instances of role ambiguity in the human-algorithm relationship. This
happened because important aspects of the driver’s tasks and conditions of work changed unpredictably
or were not clearly enunciated. We describe sources of role ambiguity next.
Price-Setting
The price of each ride is calculated dynamically: drivers reported that earnings are calculated based on a
per-mile and per-minute rate, with a minimum base rate per ride. However, we observed that these rates
vary widely per region (drivers are assigned to their local region). For example, a Charlotte, North
Carolina driver reported a base rate of $1.10 while a Washington, DC area driver received over two dollars.
These rates also changed over time. Thomas, a driver in his forties and operating in North Carolina, told
us that “when I first started driving the rate was around $1 per mile and then 20 cents per minute or
something like that. And then not too long after I started it went down to 75 cents per mile and 15 cents
per minute. And recently it went down again to 65 cents per mile and like I don’t remember per minute.”
Furthermore, he cites a change in commission: “Before I started, drivers used to get paid 80% of the fare
but I started right at the time they switched to drivers getting paid 75% of the fare.” This “inspired” him to
only go when there is a “big surge. It’s not worth my time if I am going to be getting, barely recovering my
costs.” This caused ambiguity in that drivers did not truly feel in control of deciding when to work –
rather, these drivers were at the beck and call of algorithmic surges.
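A worked example makes the scale of these rate changes concrete. The per-mile and per-minute figures below are Thomas’s, the base rate is the Charlotte figure reported above, and the 75% driver share follows his account of the commission change; exactly how the base rate, any per-ride minimum, and the commission combine is our assumption.

```python
def driver_take_home(miles, minutes, per_mile, per_min, base=1.10, share=0.75):
    """Driver's cut of one fare, assumed as (base + distance + time) * share.
    The real formula, including any per-ride minimum, is not public."""
    return round((base + per_mile * miles + per_min * minutes) * share, 2)

ride = dict(miles=10, minutes=20)
print(driver_take_home(**ride, per_mile=1.00, per_min=0.20))  # rates when Thomas started: ~11.32
print(driver_take_home(**ride, per_mile=0.65, per_min=0.15))  # rates he cites now: ~7.95
```

Under these assumptions, the same ride pays roughly 30% less than when Thomas started, before accounting for gas and wear, which is consistent with his decision to drive only during a “big surge”.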
The Uber app also dynamically adjusts pricing in specific local areas to match supply and demand. This is
known as surge pricing and occurs when there is an extreme shortage of drivers relative to the number of
riders making requests in that area. Drivers reported seeing surge rates ranging from mild, such as one and
a half times the normal fare, up to ten times the normal price. When the surge is especially high, drivers receive text and
email messages urging them to sign on and go to the surge area. This often happened when many people
were trying to get to or from large events. Rush hour was also commonly a high-earnings time. A surge
could cover the whole town/city or be more targeted. However, drivers commonly complained about bad
experiences of driving to a surge area only to have the prices go down, or of being in a surge area only to
receive requests that originated outside of it. Often they were counseled by other drivers not to
“chase the surge”. This caused further role ambiguity in that drivers believed they could determine the
price and location where they wanted to drive, but were not able to in reality. Several drivers coped by
ignoring surge pricing in favor of finding high volume patterns on their own. Some drivers even resorted
to using the rider Uber app to infer high demand patterns such as seeing cars quickly appearing and
disappearing on the screen.
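A minimal sketch illustrates why “chasing the surge” so often disappointed. The decay dynamics below are invented purely for illustration; the substantive point from the interviews is only that the multiplier is applied when a request comes in, not when the alert goes out.

```python
def surge_multiplier(surge_at_alert, minutes_en_route, decay_per_min=0.5):
    """Illustrative only: the surge decays while the driver travels toward
    the area, so the multiplier actually applied can be far below the one
    that triggered the alert."""
    return max(1.0, surge_at_alert - decay_per_min * minutes_en_route)

print(surge_multiplier(2.5, minutes_en_route=0))  # 2.5x when the alert goes out
print(surge_multiplier(2.5, minutes_en_route=3))  # back to 1.0x on arrival
```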
Contextual Knowledge
Although the algorithm was supposed to facilitate the driver’s workflow, it often lacked contextual
knowledge of traffic and road conditions, and thus provided incomplete or confusing information about the
most important aspects of the work: driving and navigation. When helping the driver navigate, or assigning
customers, key contextual geographic details would be missing: “I was on the other side of the water from
where they were so I had to get there through the tunnel which took 20, 30 minutes." Often drivers would
be geographically close, but the drivable route would be much further. Another common problem was
getting to the other side of a street whether being restricted from “crossing over the train tracks” or “if
you’re on the [road] in Boston it is only one way and you have to drive a few miles if you want to turn
around to the other side.” Sometimes the location would take them to the wrong side of the building: “It’s
a fairly common experience of Uber drivers to be called to some place where the passenger isn’t.” This
highlighted the role ambiguity of drivers who were not familiar enough with the area to fend off the
misguided directions. They considered it unfair that customers blamed them instead of the algorithm for
the problem.
Algorithmic ride-matching also led to role ambiguity. It was unclear how far drivers should go to
accommodate customers. One driver often would be called to pick up teenagers who are not allowed to
use the app. She would commonly have to cancel the ride but be stuck having wasted her gas and time
getting to the would-be customer. Ride requests also gave no indication of customers with dogs,
wheelchairs, or luggage, which often took up extra seats and space; this is especially problematic with
UberPOOL, where the driver may already have passengers. This was an especially common issue at some
airports with riders who would bring an extra passenger and have several pieces of luggage. Other riders,
to save money, would call UberPOOL rather than UberX and there would be no space.
Acceptance Rates
Drivers “have the ability to accept or decline” a ride request that comes through the Uber app. “However if
you decline too many then they threaten to close your account.” Many drivers were unsure what the exact
acceptance threshold was; they just knew that they had to accept a high percentage of rides to be able to keep
driving. With an unknown limit, some had violated this rule and received “an email. They call it
acceptance rate, they say your acceptance rate is too low…I usually brought it back up within a week and
then they were happy. They’d send me another one saying you’re fine now.” These types of interactions
contributed to the ambiguity of whether Uber drivers really are the boss and in control of their work
conditions and clientele.
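This opacity can be restated as a check against a threshold the drivers never learn. The 80% figure below is a pure placeholder, which is precisely the point: drivers only ever see the warning, never the rule.

```python
def acceptance_warning(accepted, offered, hidden_threshold=0.80):
    """Drivers see the warning email, not the threshold that triggers it
    (0.80 is a placeholder, not a known Uber value)."""
    rate = accepted / offered
    if rate < hidden_threshold:
        return f"Your acceptance rate ({rate:.0%}) is too low."
    return "You're fine now."

print(acceptance_warning(accepted=15, offered=20))  # warning at 75%
print(acceptance_warning(accepted=18, offered=20))  # fine at 90%
```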
Driver Rating
At the end of the ride, customers rate the drivers. Drivers asserted that customers normally rated them a 5
if the ride went smoothly, although outside factors such as problems with the GPS could affect the ratings.
Since the logic underlying a particular rating was not clear or apparent, to keep their ratings high, drivers
went the extra mile by providing amenities such as multiple types of phone chargers or bottled water to
get on the customer’s good side: “At the end of the day… the overall rating for that day they will tell me oh,
over 5 or 7 customers your average is 4.85. Or, there were a couple of five stars, good job. So they will
forward some of the good comments to you. So with that and with all these updates and reminders I will
definitely be very conscious that I don’t do things to upset my customers.” Drivers were aware only of the
aggregate rating given to them each day rather than ratings from individual customers. Furthermore, the
way their rating is evaluated by the Uber app is to compare it with fellow drivers in the region: “These
ratings are from 1 to 5 and Uber will tell us that a typical Uber driver will be getting on average about 4.7
or 4.8. So if you have a rating of 4.5 then you’re not very good, you need to improve. And if you go down to
4.2 or 4.0 they’re going to put you on probation and make sure that you improve yourself.”
As a result of these unpredictable metrics, one interviewee was required to go to Uber classes to bring up
his rating when he first started driving. Another interviewee described a driver he knew who “got kicked
off driving for Uber because he would drive a lot of people at night and they’re drunk and they’d give him
bad ratings because they thought he was driving bad. Uber’s just like…You’re performing below par, like
get out of our service, we don’t want you anymore.” Because the bar for getting a good rating was not well
defined, drivers also were unsure if their role as a dependable driver was enough – they often added other
customer service roles.
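The thresholds drivers quoted can be sketched as a simple status check. The cutoffs below paraphrase the interviewee’s account (a typical average of about 4.7-4.8, 4.5 meaning “you need to improve”, 4.2 or below risking probation); Uber’s actual evaluation logic is not public, and the comparison to a regional average is as the drivers described it.

```python
def rating_status(my_avg, regional_avg=4.75):
    """Cutoffs paraphrased from a driver's account; the real rule is unknown."""
    if my_avg <= 4.2:
        return "probation: improve or lose access"
    if my_avg < regional_avg:
        return "below typical drivers in the region: told to improve"
    return "fine"

def daily_summary(ratings):
    # Drivers see only this aggregate, never which rider gave which rating.
    avg = round(sum(ratings) / len(ratings), 2)
    return avg, rating_status(avg)

print(daily_summary([5, 5, 4, 5, 5, 5, 5]))  # (4.86, 'fine')
```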
Role Conflict for the Uber Driver
We observed a number of instances of role conflict in the human-algorithm relationship. This happened
because the drivers viewed important aspects of their work as being subject to competing and
contradictory demands and requirements. We describe sources of role conflict next.
Multiple Algorithms
The multiple algorithms in the Uber app often came into conflict with one another, and drivers described
the conflicts that arose from being subject to them simultaneously. For example, the ride request algorithm
often worked against the incentives and surge pricing. It was frustrating when drivers were called out of a
surge area, or guaranteed wage area, or incentive area to service a request. For example, one driver
explains his experience with chasing guaranteed wages where “on really busy nights they’ll say we
guarantee that you will get, you know, $20 an hour from this hour to this hour and then $19 from this
hour to that hour. But written in this little fine print is that you have to maintain an acceptance rate of
90% and you’ve got a minimum number of rides per hour in order to get them.” In practice, this meant
that the driver loses his guarantee if he refuses a single ride, but “if I take a ride that is twenty minutes out
and I take that guy five minutes, I’m almost guaranteed not to get my guarantees because I’m not going to
get back in time to where the populated centers are to be able to get enough rides.” Here, the algorithm for
calculating compensation introduces a conflict between serving the company during specific needed times
and serving a particular customer.
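The fine print this driver quotes is easy to render as a rule, which makes the double bind mechanical. The 90% acceptance floor comes from his quote; the per-hour ride minimum was left unspecified, so the value below is a placeholder.

```python
def meets_guarantee(acceptance_rate, rides_this_hour,
                    min_acceptance=0.90, min_rides=2):
    """Hourly-guarantee eligibility as the driver described it: decline too
    many requests, or complete too few rides, and the guaranteed wage is
    forfeited (min_rides=2 is a placeholder; the real minimum was unstated)."""
    return acceptance_rate >= min_acceptance and rides_this_hour >= min_rides

# Decline the far-away request and the acceptance floor is breached:
print(meets_guarantee(acceptance_rate=8/10, rides_this_hour=3))   # False
# Accept it, spend the hour on one long out-of-area trip, and the
# per-hour ride minimum is missed instead:
print(meets_guarantee(acceptance_rate=10/10, rides_this_hour=1))  # False
```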
Also frustrating was when drivers were directed out of a surge area: “Sometimes you get a ping that there
was a request and if I pull out of a really busy area where there was a surge, I could go somewhere out of
the way.” The drivers would be frustrated that they responded to a surge request and then the algorithm
directed them elsewhere. What was worse was that “sometimes the trip was cancelled when you got there
and that could be pretty frustrating if you were in a surge area where you get higher prices and it takes you
out of the surge area and you ended up getting nothing.” Again, the ride-matching algorithm creates a
conflict between serving the needs of out-of-area customers and serving in a high-need area.
Another common source of these frustrating conflicts arose for drivers who attempted to service the
airport, whose patrons usually request longer and more profitable rides. When drivers arrive in the airport area,
they are automatically added to a queue waiting to get requests from airport patrons. As one driver
explained, “I wanted to just serve the airport and so my expectation was that when I was in a queue I
would only be getting requests from the airport queue, but they would send me requests from like a 20-
mile radius still… around five o’clock when people were getting off work at these call centers and things
that are around the airport area is, you’d be getting requests to go pick them up and take them home
because they didn’t have a car.” Unfortunately this would leave too few drivers at the airport queue and
“they’d send out what they call a panic text. They send out a mass text message that always says don’t text
and drive, but there’s only two cars waiting at the airport so you need to get back there.” The algorithm
that emphasizes the driver’s role to cover the airport was at odds with the algorithm that emphasizes the
driver’s duty to help all customers, resulting in a tug-of-war shuffling drivers back and forth. These types of
conflicts were especially frustrating when the non-airport rides were short and went to low-traffic areas,
making it unlikely that the driver could pick up another ride.
Performance Metrics
What aggravated the conflicts between tasks described above was that performance metrics were also
often misaligned with the tasks. For instance, turning down requests from customers outside the surge
area or outside the airport area would allow drivers to meet the higher demand for drivers in those areas.
However, it would also damage their acceptance rate, a key performance metric used by Uber to evaluate
its drivers. Thus, by assigning requests out of area, the algorithm created conflicts between drivers’ need
to maximize their profits as entrepreneurs and the emphasis on their evaluation as contractors to Uber.
The driver rating performance metric was also at odds with Uber’s task assignments. A common complaint
from drivers concerned UberPOOL’s automatic passenger assignment. The app would add
customers to be picked up along the way, going roughly in the same direction as the current rider.
Sometimes the new customer would be in the opposite direction from where the driver was going. Other
times, the driver would be on the freeway and getting to the rider required a much longer detour. In such situations
where picking up an additional UberPOOL customer would cause a much longer delay, the driver risks a
lower driver rating from the delayed customer. Thus, drivers overwhelmingly chose to cancel the
automatically added ride. This was an effort to preserve customer satisfaction and keep their driver rating
high, but at the same time, it hurt their acceptance rate. Thus, the algorithm would introduce conflicts
between their customer-service and contractor-oriented performance metrics, without a way to reconcile
the two.
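A stylized decision sketch captures the bind: however the driver resolves an ill-fitting UberPOOL add-on, exactly one metric takes the hit. The magnitudes below are invented; only the structure of the trade-off comes from our interviews.

```python
def pool_addon_outcome(cancel, detour_minutes):
    """Either choice damages one metric (all magnitudes are invented)."""
    if cancel:
        return {"acceptance_rate_hit": 0.05, "rating_hit": 0.0}
    return {"acceptance_rate_hit": 0.0,
            "rating_hit": min(0.3, 0.02 * detour_minutes)}

print(pool_addon_outcome(cancel=True, detour_minutes=15))   # hurts acceptance rate
print(pool_addon_outcome(cancel=False, detour_minutes=15))  # hurts driver rating
```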
Sources of Role Ambiguity and Role Conflict
Based on the data, our analysis shows that the human-algorithm relationship, like human-human
relationships, exhibits role ambiguity and role conflict. We identify sources of each, some of which are
unique to the fact that the algorithm plays the managerial role. We saw three sources of role ambiguity.
First, drivers did not feel empowered to dictate their work hours or circumstances because of uncertainty
around how much they would be paid, ranging from periodic high-level shifts in pricing down to
uncertainty over whether they would qualify for incentive or surge pricing. Second, they often received
instructions for pick-up location, customer, or wait times from the driver Uber app that did not factor in
key contextual knowledge. This led to drivers being blamed for not doing their job, or wasting time and
resources when directed to customers whom they could not pick up.
Third, there was role ambiguity resulting from unclear performance metrics, such as not knowing the
acceptance rate goal. This ambiguity also stemmed from ambiguous feedback, such as knowing only the
aggregate rating rather than how each customer rated the driver, and from performance metrics based on
a moving target, such as driver ratings that are compared to the average in the area.
We saw two sources of role conflict as well. The first type of role conflict emerged due to the contradictory
functionalities of the algorithm itself, which gave rise to different and opposing requirements for the
driver. A human manager is usually an embodiment of a single role – e.g. a supervisor or a supervisee or a
colleague. However, our study’s participants reported being at the mercy of various algorithms that were
at odds with one another and did not reconcile their differences. A second source of role conflict arises
from these algorithms being misaligned with the performance metrics, forcing drivers to decide which
metric to sacrifice. Table 1 synthesizes the findings in light of our analysis.
| Macro-theme | Key factor involved | Description |
|---|---|---|
| Role ambiguity | Unknown compensation | Stems from dynamic pricing and uncertainty around obtaining incentives and surge pricing |
| Role ambiguity | Managing in absence of context | App assigns tasks and even micromanages despite lacking key contextual knowledge |
| Role ambiguity | Unknown performance metric | Due to lack of information (e.g. acceptance rate), moving targets (e.g. driver rating is defined relative to others), or ambiguity (e.g. not knowing which rider rated you poorly) |
| Role conflict | Disembodied, multi-headed boss | Rather than one boss that can provide a single cohesive plan of action, drivers have many bosses that act in silos and often conflict |
| Role conflict | Misalignment of tasks with performance metrics | Tasks are not always aligned with performance metrics |

Table 1. Role Ambiguity and Conflict
Conclusions and Future Research
Uber provides a work context in which humans interact with an algorithm to accomplish tasks, much as
they have traditionally interacted with other humans. However, how this relationship is
perceived and performed is theoretically unclear. Our preliminary findings are interesting because they
show that key aspects of human-human interactions (such as role conflict and role ambiguity) manifest in
the human-algorithm interactions as well. We found that drivers did not perceive the Uber algorithm in a
single role of the traditional manager, colleague, or subordinate type. Instead, they
perceived the algorithm to embody each of these different roles at different times, depending on the
context. Interestingly, the driver-algorithm interaction was different from what has been observed in the
case of human-human interaction. This new kind of workplace relationship, and the role conflicts and role
ambiguity that resulted from it, led to new types of consequences as well.
One consequence was a dialectic between blaming the app, or not, when something went wrong. On the
one hand, we saw that drivers and passengers alike attributed a sense of neutrality to the algorithm, so
that it was not held accountable for its actions. Rather, either the fault would be attributed to the driver, or there
would simply be no blame. For instance, when the destination setting feature or other aspects requiring
cell signal do not work, there may be no fault assigned to anyone. On the other hand, drivers also
commonly were blamed when it was their Uber app that failed. This was especially problematic for the
driver since the Uber app would send conflicting tasks, as described earlier.
A second consequence of the algorithm-driven passenger selection and pick-up was the lack of a relational
and social component in the driving. The Uber app is optimized for each ride to be, as one interviewee put
it, “a single serving friendship”. This one-time-use mentality affected ratings, contributed to the lack of
tips, and increased the driver’s willingness to cancel when a customer slipped up, such as by making the
driver wait too long. However,
we found that many drivers craved a more social connection and, in fact, that was what motivated them to
drive. Several drivers found Facebook Uber driver groups valuable for emotional support and picking up
tips such as dealing with drunk passengers. Many would also get tips from passengers who were Uber
drivers themselves, or from their driver when taking Uber themselves. This sense of community was
invaluable.
Based on our preliminary observations, we suggest that the nature of the human-algorithm relationship
leads to role conflict and role ambiguity. In addition, the specific role that the human perceives of the
algorithm is dynamic over time. Specifically, drivers start by putting their trust in the algorithm’s ability to
efficiently manage their tasks. As the interaction introduces role conflict and role ambiguity however,
drivers stop heeding the algorithm and may even start gaming it. In this way, the relationship may change
over time.
Future work includes undertaking additional interviews and data collection in the US to further examine
these ideas. We expect our study will open up new research opportunities to theoretically understand the
nuances of human-algorithm interaction from the point of view of how they affect the individual’s
perceptions of organizational roles. Besides these theoretical implications, we also expect our study to
practically inform organizations that deploy human-algorithm interactions in business processes about
how their employees might perceive such algorithms, and management implications thereof.
References
Abdel-Halim, A. A. 1981. "Effects of Role Stress-Job Design-Technology Interaction on Employee Work
Satisfaction," Academy of Management Journal (24:2), pp. 260-273.
Beer, D. 2009. "Power through the Algorithm? Participatory Web Cultures and the Technological
Unconscious," New Media & Society (11:6), pp. 985-1002.
Bhattacherjee, A. 2001. "Understanding Information Systems Continuance: An Expectation-Confirmation
Model," MIS Quarterly (25:3), pp. 351-370.
Brynjolfsson, E., and McAfee, A. 2014. The Second Machine Age: Work, Progress, and Prosperity in a
Time of Brilliant Technologies. New York: WW Norton & Company.
Cooper, C. L., Dewe, P. J., and O'Driscoll, M. P. 2001. Organizational Stress: A Review and Critique of
Theory, Research, and Applications. London and New Delhi: Sage.
Crabtree, A., and Mortier, R. 2015. "Human Data Interaction: Historical Lessons from Social Studies and
CSCW," ECSCW 2015: Proceedings of the 14th European Conference on Computer Supported
Cooperative Work, 19-23 September 2015, Oslo, Norway: Springer, pp. 3-21.
Dix, A. 2009. "Human-Computer Interaction," in Encyclopedia of Database Systems. Berlin: Springer,
pp. 1327-1331.
Fisher, C. D., and Gitelson, R. 1983. "A Meta-Analysis of the Correlates of Role Conflict and Ambiguity,"
Journal of Applied Psychology (68:2), pp. 320-333.
Graen, G. 1976. "Role-Making Processes within Complex Organizations," in Handbook of Industrial and
Organizational Psychology, M.D. Dunnette (ed.). Chicago: Rand McNally, pp. 1201-1245.
Holzinger, A. 2016. "Interactive Machine Learning for Health Informatics: When Do We Need the
Human-in-the-Loop?," Brain Informatics (3:2), pp. 119-131.
Kahn, R. L., Wolfe, D. M., Quinn, R. P., Snoek, J. D., and Rosenthal, R. A. 1964. Organizational Stress:
Studies in Role Conflict and Ambiguity. Oxford: Wiley.
Kaplan, S., and Orlikowski, W. J. 2013. "Temporal Work in Strategy Making," Organization Science
(24:4), pp. 965-995.
Katz, D., and Kahn, R. L. 1978. The Social Psychology of Organizations. New York: Wiley.
Lee, M. K., and Baykal, S. 2017. "Algorithmic Mediation in Group Decisions: Fairness Perceptions of
Algorithmically Mediated vs. Discussion-Based Social Division," Proceedings of the 20th ACM
Conference on Computer-Supported Cooperative Work & Social Computing (CSCW).
Loebbecke, C., and Picot, A. 2015. "Reflections on Societal and Business Model Transformation Arising
from Digitization and Big Data Analytics: A Research Agenda," The Journal of Strategic Information
Systems (24:3), pp. 149-157.
Marabelli, M., Hansen, S., Newell, S., and Frigerio, C. 2017. "The Light and Dark Side of the Black Box:
Sensor-Based Technology in the Automotive Industry," Communications of the AIS (40:16), pp. 351-
374.
Marabelli, M., and Newell, S. 2009. "Organizational Learning and Absorptive Capacity in Managing ERP
Implementation Projects," Proceedings of ICIS 2009, Phoenix, AZ.
Markus, M. L., and Tanis, C. 2000. "The Enterprise Systems Experience-from Adoption to Success," in
Framing the Domains of IT Management: Projecting the Future through the Past, R.W. Zmud (ed.).
Cincinnati, OH: Pinnaflex Educational Resources, Inc., pp. 173-207.
McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D., and Barton, D. 2012. "Big Data: The Management
Revolution," Harvard Business Review (90:10), pp. 61-67.
McGrath, J. E. 1976. "Stress and Behavior in Organizations," in Handbook of Industrial and
Organizational Psychology, M.D. Dunnette (ed.). Chicago: Rand-McNally, pp. 1351-1395.
Miles, M. B., and Huberman, A. M. 1984. Qualitative Data Analysis: A Sourcebook of New Methods.
Newbury Park, CA: SAGE Publications.
Newell, S., and Marabelli, M. 2015. "Strategic Opportunities (and Challenges) of Algorithmic Decision-
Making: A Call for Action on the Long-Term Societal Effects of ‘Datification’," The Journal of
Strategic Information Systems (24:1), pp. 3-14.
Nicholson, P. J., and Goh, S. C. 1983. "The Relationship of Organization Structure and Interpersonal
Attitudes to Role Conflict and Ambiguity in Different Work Environments," Academy of Management
Journal (26:1), pp. 148-155.
Perrone, V., Zaheer, A., and McEvily, B. 2003. "Free to Be Trusted? Organizational Constraints on Trust
in Boundary Spanners," Organization Science (14:4), pp. 422-439.
Rizzo, J. R., House, R. J., and Lirtzman, S. I. 1970. "Role Conflict and Ambiguity in Complex
Organizations," Administrative Science Quarterly (15:2), pp. 150-163.
Tubre, T. C., and Collins, J. M. 2000. "Jackson and Schuler (1985) Revisited: A Meta-Analysis of the
Relationships between Role Ambiguity, Role Conflict, and Job Performance," Journal of
Management (26:1), pp. 155-169.
Urquhart, L., and Rodden, T. 2017. "New Directions in Information Technology Law: Learning from
Human–Computer Interaction," International Review of Law, Computers & Technology (Published
online: 28 Mar 2017), pp. 1-20.