Perceived Role Relationships in Human-Algorithm Interactions: The Context of Uber Drivers
Short Paper
Xinru Page
Bentley University
Waltham, MA
xpage@bentley.edu
Marco Marabelli
Bentley University
Waltham, MA
mmarabelli@bentley.edu
Monideepa Tarafdar
Lancaster University
Lancaster, United Kingdom
m.tarafdar@lancaster.ac.uk
Abstract
As individuals increasingly interact with algorithms in a work context, it is important to
understand these new types of ‘human-algorithm’ relationships. We investigate the
human-algorithm interaction between Uber drivers and the Uber driver app in
managing customers, routes and fares. This research-in-progress paper reports on
initial findings from an ongoing study, from interviews with ten Uber drivers in the
United States. Our findings illustrate that Uber drivers experience role ambiguity and
role conflict as they attribute different roles to the algorithms embedded in their app.
The literature shows that ambiguity and conflict create workplace uncertainty. We extend this literature by identifying several new sources of role ambiguity and role conflict that emerge between the driver and the algorithm. Our initial results are positioned within the literature that studies the emerging role of algorithms at work.
Keywords: Human-computer interaction, Human-algorithm interaction, Role Conflict,
Role Ambiguity, Workplace Relationships
Introduction
The last decade has witnessed the widespread diffusion of (more or less) portable devices equipped with
applications running algorithms that interact with their users (Beer 2009; Loebbecke and Picot 2015;
Newell and Marabelli 2015). The human-algorithm interactions emerging from the intense use of these
devices are meaningful because they disrupt the traditional, one-way relationship between humans and machines (Marabelli et al. 2017; Urquhart and Rodden 2017). These algorithms are built and trained to develop a ‘relationship’ with the user that underpins a series of sociotechnical dynamics that vary across devices and their users (Holzinger 2016). For instance, Amazon Echo might be considered a ‘helper’ because it responds to our requests by suggesting music to listen to or providing the meaning of words we do not know. Waze could be seen as a ‘buddy’ that both gives and accepts instructions; it provides hints pointing to the fastest (or cheapest, or shortest) route while relying on the user to inform it of road conditions. Algorithms embedded in transportation robots, such as those Amazon uses to manage storage processes in its warehouses alongside specialized employees, may represent more of a ‘boss’, as the robots instruct workers when and how to pick and assemble orders for shipment.
Investigating how an algorithm can be perceived in multiple ways by the human whose task is mediated
by it extends the HCI (human-computer interaction) literature on human behavior relating to algorithms
(e.g., Crabtree and Mortier 2015; Dix 2009; Lee and Baykal 2017). Specifically, analyzing how various
users perceive an algorithm from a role/relationship perspective (e.g., something to rely on or something
to challenge or question) helps us build a theoretical understanding of the nuances of human behavior
when interacting with IT artifacts that take on the role of human colleagues (in that they are supposed to
give ‘human-like’ responses and learn from our reactions). The practical implications concern how these algorithms respond to human actions and the different roles users might perceive them in. We therefore frame the human-algorithm relationship within the theoretical edifice of the literature that addresses uncertainties in workplace roles and relationships, and we draw on role conflict and role ambiguity theory (Graen 1976; Katz and Kahn 1978). We address the following research questions: (1) Are ambiguities and conflicts in workplace roles manifested in the human-algorithm relationship? (2) What issues shape these ambiguities and conflicts?
Our research focuses on Uber drivers and the algorithms (embedded in a specific driver-facing app on
smartphones) that provide the driver with customer locations, expected charge/compensation, routes,
pick up and drop off choices, customer details and so on. In this paper we report preliminary findings
derived from a set of exploratory interviews with Uber drivers. In particular, we investigate whether and
how Uber drivers experience role ambiguity and role conflict as they attribute a variety of different roles
to the algorithm, as part of an ongoing and dynamic relationship with it.
Background
Organizational Roles and Human Expectations
Drawing on the management and sociology literatures, organizations have been defined as systems of
roles (Katz and Kahn 1978). Every position in an organization has a specific set of tasks or responsibilities
associated with it. These are determined by the person’s role in the organization. Roles therefore form
the context that determines the task and function-related expectations and responsibilities of the
individual within the organization (Cooper et al. 2001; Perrone et al. 2003); these expectations and
responsibilities guide an individual’s behavior and his or her interaction with co-workers. Roles have
traditionally been regarded as embodiments of the patterns of human connectedness within which human behavior in organizations takes place (McGrath 1976).
The implementation of IT causes changes in work, routines, and organizational structure. It can thus
change an individual’s role in two ways. The first way is through a change in the material artifacts (i.e.
physical and technical systems) with which individuals interact (Graen 1976). For instance, when an
enterprise system (ES) is introduced, individuals are required to perform new tasks, such as use the
system’s screens and functionalities as prescribed by the system’s configuration. They are also no longer required to do certain other things, such as following a paper trail of documents with individuals in other departments. The second way is through a change in the social and cultural systems in which individuals
work. These systems consist of reporting, hierarchical, departmental, and authority structures within the
organization (Graen 1976; Katz and Kahn 1978). The ES, for example, makes follow-up and interaction on
specific tasks redundant because of workflow automation. This creates a change in how individuals
interact with each other. These examples illustrate the more ‘traditional’ human-computer interaction, where people ‘use’ the system (in this case, the ES) to accomplish their everyday tasks (Marabelli and Newell 2009; Markus and Tanis 2000). However, if we examine the same interactions in the context of algorithms embedded in modern portable devices, the interaction between humans and IT is less straightforward, as algorithms react and respond similarly to what humans in specific organizational roles would do (Brynjolfsson and McAfee 2014; McAfee et al. 2012). Further, given the emergent and dynamic nature of these types of interactions, it is hard for individuals to anticipate the algorithm’s reactions to their own actions. There are thus three important possibilities in these interactions: (1) the human is conflicted as to what role the algorithm plays; (2) the human is not clear about what is expected of him or her; and (3) the human may find his or her expectations regarding the algorithm’s reaction contradicted by what the algorithm actually does. We therefore draw on the literatures on role conflict, role ambiguity and expectation confirmation theory to investigate these particular contexts of human-IT interaction.
Role Conflict, Role Ambiguity and Expectation Confirmation Theory
Uncertainties in workplace roles are embodied in the two concepts of role conflict and role ambiguity. Role conflict is defined as “an incompatibility between job tasks, resources, rules or policies, and other people” (Nicholson and Goh 1983, p. 149). It occurs when an individual is exposed to contradictory, incompatible or incongruent role requirements, such as when he or she is asked to fulfill the requirements of more than one role whose expectations may be opposite or at odds, implying different or contradictory expectations of the person’s behavior (Abdel-Halim 1981; Kahn et al. 1964; Rizzo et al. 1970). Role ambiguity is defined as a lack of clear, adequate, available and consistent information about the individual’s role within the organization (Katz and Kahn 1978; McGrath 1976).
Role ambiguity might result because role-related information either does not exist or has not been clearly communicated. The individual may be uncertain about many aspects of his/her work environment; he or she may not know what exactly to do, or may not know how to do it. Both role ambiguity and role conflict are detrimental to the individual and ultimately the organization, because they are known causes of decreased performance, satisfaction, and commitment (Graen 1976; Kahn et al. 1964; McGrath 1976). Indicators of role conflict include multiple and incompatible requirements from the job, demands from different supervisors, requests from different colleagues, and multiple definitions of one role. Indicators of role ambiguity include the individual perceiving a lack of a reasonably clear scope for his or her tasks, and a lack of specific information about key job aspects such as who the supervisor/supervisees are, how he or she will be evaluated, expected levels of performance, consequences of low performance, and control systems for feedback (Katz and Kahn 1978; Nicholson and Goh 1983).
Expectation Confirmation theory suggests that the individual’s continued use of an application depends
on the extent to which his or her expectations of the application are confirmed or not. Individuals form an
initial expectation of an application’s functionality based on its early use. They use the application and
assess its functionality and performance vis-à-vis their original expectation. By doing so, they determine
the extent to which their expectation is confirmed or not. The greater the extent of confirmation, the
greater their satisfaction with the application and continued use of the application (Bhattacherjee 2001).
In the case of the human-algorithm interaction as described above, given the emergent nature of the
interaction, the extent of confirmation is likely to be low. This would have implications for how the
individual would continue to use the algorithm over time.
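Schematically, the chain of relationships that expectation confirmation theory posits (Bhattacherjee 2001) can be summarized as follows; the notation is ours, introduced only for illustration:

```latex
% Schematic summary of expectation confirmation theory (Bhattacherjee 2001).
% Notation ours: E = initial expectation, P = perceived performance,
% C = extent of confirmation, S = satisfaction, I = continuance intention.
\begin{align*}
  C &= f(P - E) \\
  S &= g(C), \qquad \frac{\partial S}{\partial C} > 0 \\
  I &= h(S), \qquad \frac{\partial I}{\partial S} > 0
\end{align*}
```

In the human-algorithm setting described above, the algorithm’s emergent behavior makes P hard to anticipate, which keeps confirmation C low and, through satisfaction S, depresses continued use I.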
Method
To understand the driver-algorithm relationship, we conducted 10 interviews over the period January to March 2017 with drivers who use ride-hailing apps such as Uber. While one driver had just started, others had been driving for over two years (average experience: one year). We recruited by advertising the study
through email lists and social media, as well as through our personal networks. It is worth noting that recruiting Uber drivers was particularly challenging because of the recent negative publicity surrounding Uber. All drivers who participated in our study operate in the US. We interviewed 3 female and 7
male drivers, half of whom were in their thirties, one in his twenties, two in their forties, and two in their
sixties. Six interviewees drove part-time to supplement their day job.
Our approach was highly exploratory. We therefore first conducted a pilot of 3 interviews following
no specific protocol. Instead, informed by past research aimed at capturing practices through interviews
(Fisher and Gitelson 1983; Nicholson and Goh 1983; Tubre and Collins 2000), we conducted informal,
open-ended and unstructured interviews aimed at collecting as much detail as possible through carefully
listening to participants’ “stories” around their use of technologies supporting their job (Kahn et al. 1964).
Preliminary analysis of the pilot interviews informed the creation of a semi-structured interview protocol
that probed their experiences as drivers using the Uber app. This approach to qualitative data collection, involving ‘back and forth’ interactions between fieldwork and literature, is in line with previous research aimed at studying emerging (and under-studied) phenomena and positioning them within the interpretive paradigm (Kaplan and Orlikowski 2013). We were able to uncover a number of preliminary characteristics of the human-algorithm interaction. We used NVivo to analyze the data (unit of analysis:
“human-algorithm interactions”) through thematic coding for overall role ambiguity and conflict,
supplemented by open-coding (Miles and Huberman 1984) to capture emergent themes within each. One
researcher coded all interviews, coming to consensus with the other authors for the final codes.
Findings and Analysis
We uncovered many significant characteristics of the Uber service as perceived by the drivers, characteristics that influenced driver attitudes and behaviors. Drivers described the Uber algorithm as having two key
functionalities. The primary one is that of matching the driver with riders. According to our respondent-
drivers, the app’s main objectives in this process are to keep wait-times and costs low for riders, while
allowing drivers the flexibility of being their own boss in deciding when and whom to drive. Thus, drivers
perceived that the Uber app should help them optimize matchmaking with riders. Drivers believed that the Uber algorithm generates ride requests by considering the distance between driver and rider and their previous history of being matched. Our more seasoned drivers believed that the Uber app looks at the physical number of miles from the customer to the driver to determine who is closest; in other words, the driver with the smallest radial distance from the rider. Drivers reported having a matter of seconds to accept the
request before it is passed on to another driver: “It’s around six seconds or whatever to accept or deny the
ride…but if you just leave it there it will just go to the next Uber driver who’s closer.”
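To make the drivers’ perceived matching model concrete, the sketch below picks the driver with the smallest straight-line (‘radial’) distance from the rider. This is a reconstruction of what our interviewees believed, not Uber’s actual algorithm; the haversine helper, the function names and the sample coordinates are our own illustrative assumptions:

```python
import math

def radial_distance_km(lat1, lon1, lat2, lon2):
    """Straight-line (great-circle) distance in km via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_driver(rider, drivers):
    """Drivers' perceived model: the request goes to the radially closest driver."""
    return min(drivers, key=lambda d: radial_distance_km(rider[0], rider[1],
                                                         d["lat"], d["lon"]))

# Hypothetical positions in the Boston area
rider = (42.3601, -71.0589)
drivers = [
    {"id": "d1", "lat": 42.3736, "lon": -71.1097},  # across the river
    {"id": "d2", "lat": 42.3550, "lon": -71.0650},
]
print(match_driver(rider, drivers)["id"])  # -> "d2"
```

Note that the radially closest driver may still face a much longer drivable route (a tunnel, train tracks, a one-way road); this gap between the perceived model and the road network resurfaces as a source of ambiguity under ‘Contextual Knowledge’ below.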
Drivers perceived the second main function of the algorithm to be facilitating workflow by providing support for pickup, navigation and payment. When the driver arrives at the pickup location, ideally the customer is there and ready to go. However, the Uber app provides suggested wait times beyond which the driver should feel free to cancel the ride. Drivers typically cited 5 minutes, but many were more flexible and responsive to customers’ requests for more time. On the other hand, drivers explained that for UberPOOL,
the acceptable time was a matter of only 1-2 minutes after which the app pushed them to continue the ride
since it would impact the current rider if the wait is too long. Once the driver accepted a ride, the Uber app assisted the driver in navigating to the rider’s location as well as to the final destination. Initially, drivers know only the rider’s starting location; upon pickup, they find out the customer’s destination.
The Uber app facilitates getting the driver to the ride location by invoking the driver’s navigation app of
choice (e.g., Maps, Waze) and initializing it to the correct destination. Drivers’ local knowledge of their area varied widely: some were completely dependent on the navigation app while others had little need of it.
Regardless, drivers left the navigation system visible to the rider both for practical reasons and as a sign of
trustworthiness: “If you think there’s a faster way, like you just turn left when like Waze says go right,
then it will reroute and that person can see. Because I always have my phone on like a little stand thing in
the middle so they can see the route.” They would confirm with the passenger before changing routes.
However, drivers observed that it wasn’t uncommon to see other drivers hiding the interface by “putting it
on the left. I think it’s like sketchy.”
The Uber app calculates the fare for a given ride and facilitates the transfer of funds from the rider’s account to the driver’s. This eliminates the need for driver and rider to settle payment at the end of each ride. The Uber app does so by tracking the distance traveled and time spent during each ride. The driver indicates to the app when a ride starts and when it ends. Drivers reported that the app
needs to have a cell signal for it to register the start or end of a ride and thus problems could arise when
their phones were not getting signal in rural or other spotty areas. We next organize our findings around
role conflict and role ambiguity.
Role Ambiguity for the Uber Driver
We observed a number of instances of role ambiguity in the human-algorithm relationship. This
happened because important aspects of the driver’s tasks and conditions of work changed unpredictably
or were not clearly enunciated. We describe sources of role ambiguity next.
Price-Setting
The price of each ride is calculated dynamically: drivers reported that earnings are calculated based on per-mile and per-minute rates, with a minimum base rate per ride. However, we observed that these rates
vary widely per region (drivers are assigned to their local region). For example, a Charlotte, North
Carolina driver reported a base rate of $1.10 while a Washington, DC area driver received over two dollars.
These rates also changed over time. Thomas, a driver in his forties operating in North Carolina, told us that “when I first started driving the rate was around $1 per mile and then 20 cents per minute or something like that. And then not too long after I started it went down to 75 cents per mile and 15 cents per minute. And recently it went down again to 65 cents per mile and like I don’t remember per minute.” Furthermore, he cited a change in commission: “Before I started, drivers used to get paid 80% of the fare but I started right at the time they switched to drivers getting paid 75% of the fare.” This “inspired” him to only go out when there is a “big surge. It’s not worth my time if I am going to be getting, barely recovering my costs.” This caused ambiguity in that drivers did not truly feel in control of deciding when to work; rather, they were at the beck and call of algorithmic surges.
The Uber app also dynamically adjusts the pricing of specific local areas to match supply and demand. When there is an extreme shortage of drivers relative to the number of riders making requests in an area, this is known as surge pricing. Drivers reported seeing surge rates ranging from mild, such as one and a half times the normal fare, up to ten times the normal price. When the surge is especially high, drivers receive text and email messages urging them to sign on and go to the surge area. This often happened when many people were trying to get to or from large events. Rush hour was also a commonly high-earnings time. A surge could cover the whole town/city or be more targeted. However, drivers commonly complained about bad experiences trying to get to a surge area only to have the prices go down, or of being in a surge area only to receive requests that originate outside of it. Often they were counseled by other drivers not to
“chase the surge”. This caused further role ambiguity in that drivers believed they could determine the
price and location where they wanted to drive, but were not able to in reality. Several drivers coped by
ignoring surge pricing in favor of finding high volume patterns on their own. Some drivers even resorted
to using the rider Uber app to infer high demand patterns such as seeing cars quickly appearing and
disappearing on the screen.
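Putting the drivers’ reports together, a minimal sketch of the earnings logic as they perceived it might look as follows. The function, the parameter names, and the choice to apply the surge multiplier to the whole fare are our own illustrative assumptions drawn from the interview excerpts above, not Uber’s actual pricing code:

```python
def driver_earnings(miles, minutes, per_mile, per_minute, base_fare,
                    surge_multiplier=1.0, commission=0.25):
    """Drivers' perceived earnings model: distance and time rates plus a base
    fare, scaled by any surge multiplier, with Uber's commission taken off."""
    fare = (base_fare + per_mile * miles + per_minute * minutes) * surge_multiplier
    return fare * (1 - commission)

# Rates reported by Thomas (North Carolina): 65 cents/mile, ~15 cents/minute,
# drivers keeping 75% of the fare; base rate of $1.10 per the Charlotte driver.
print(round(driver_earnings(10, 20, 0.65, 0.15, base_fare=1.10), 2))  # about $7.95
print(round(driver_earnings(10, 20, 0.65, 0.15, base_fare=1.10,
                            surge_multiplier=1.5), 2))                # about $11.93
```

Even this simple reconstruction shows why earnings were hard to predict: every parameter (rates, commission, surge multiplier) varied by region and changed over time without notice.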
Contextual Knowledge
Although the algorithm was supposed to facilitate the driver’s workflow, it often lacked contextual
knowledge of traffic and road conditions, and thus provided incomplete or confusing information about
the most important aspects: driving and navigation. When helping the driver navigate, or assigning
customers, key contextual geographic details would be missing: “I was on the other side of the water from
where they were so I had to get there through the tunnel which took 20, 30 minutes." Often drivers would
be geographically close, but the drivable route would be much further. Another common problem was
getting to the other side of a street whether being restricted from “crossing over the train tracks” or “if
you’re on the [road] in Boston it is only one way and you have to drive a few miles if you want to turn
around to the other side.” Sometimes the location would take them to the wrong side of the building: “It’s
a fairly common experience of Uber drivers to be called to some place where the passenger isn’t.” This
highlighted the role ambiguity of drivers who were not familiar enough with the area to fend off the
misguided directions. They considered it unfair that customers blamed them instead of the algorithm for
the problem.
Algorithmic ride-matching also led to role ambiguity. It was unclear how far drivers should go to accommodate customers. One driver often would be called to pick up teenagers, who are not allowed to use the app. She would commonly have to cancel the ride but be stuck having wasted her gas and time getting to the would-be customer. Customers with dogs, wheelchairs, and luggage also were not flagged as such in ride requests and would often take up extra seats and space; this is especially problematic with UberPOOL, where the driver may already have passengers. This was an especially common issue at some airports with riders who would bring an extra passenger and have several pieces of luggage. To save money, some riders would also call UberPOOL rather than UberX, and there would be no space.
Acceptance Rates
Drivers “have the ability to accept or decline” a ride request that comes through the Uber app. “However if you decline too many then they threaten to close your account.” Many drivers were unsure what the exact acceptance rate threshold was; they just knew that they had to accept a high percentage of rides to be able to keep driving. With an unknown limit, some had violated this rule and received “an email. They call it
acceptance rate, they say your acceptance rate is too low…I usually brought it back up within a week and
then they were happy. They’d send me another one saying you’re fine now.” These types of interactions
contributed to the ambiguity of whether Uber drivers really are the boss and in control of their work
conditions and clientele.
Driver Rating
At the end of the ride, customers rate the drivers. Drivers asserted that customers normally rated them a 5
if the ride went smoothly, although outside factors such as problems with the GPS could affect the ratings.
Since the logic underlying a particular rating was not clear or apparent, to keep their ratings high, drivers
went the extra mile by providing amenities such as multiple types of phone chargers or bottled water to
get on the customer’s good side: “At the end of the day, the overall rating for that day, they will tell me oh,
over 5 or 7 customers your average is 4.85. Or, there were a couple of five stars, good job. So they will
forward some of the good comments to you. So with that and with all these updates and reminders I will
definitely be very conscious that I don’t do things to upset my customers.” Drivers were aware only of the
aggregate rating given to them each day rather than ratings from individual customers. Furthermore, the
way their rating is evaluated by the Uber app is to compare it with fellow drivers in the region: “These
ratings are from 1 to 5 and Uber will tell us that a typical Uber driver will be getting on average about 4.7
or 4.8. So if you have a rating of 4.5 then you’re not very good, you need to improve. And if you go down to
4.2 or 4.0 they’re going to put you on probation and make sure that you improve yourself.”
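The thresholds in this account suggest an evaluation that is relative to regional peers, i.e., a moving target. A minimal sketch under our own assumptions (the function, tier labels and exact cutoffs are invented for illustration, loosely following the numbers in the quote):

```python
def evaluate_driver(rating, regional_average=4.75):
    """Perceived rating evaluation: judged relative to peers in the region,
    with probation at roughly 4.2 and below, per driver accounts."""
    if rating <= 4.2:
        return "probation"            # "they're going to put you on probation"
    if rating <= regional_average - 0.25:
        return "needs improvement"    # "4.5 then you're not very good"
    return "in good standing"

for r in (4.85, 4.5, 4.1):
    print(r, evaluate_driver(r))
# 4.85 in good standing; 4.5 needs improvement; 4.1 probation
```

Because the regional average itself shifts, a driver’s standing can change without any change in how customers actually rate them.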
As a result of these unpredictable metrics, one interviewee was required to go to Uber classes to bring up
his rating when he first started driving. Another interviewee described a driver he knew who “got kicked
off driving for Uber because he would drive a lot of people at night and they’re drunk and they’d give him
bad ratings because they thought he was driving bad. Uber’s just like…You’re performing below par, like
get out of our service, we don’t want you anymore.” Because the bar for getting a good rating was not well
defined, drivers were also unsure whether their role as a dependable driver was enough; they often added other customer service roles.
Role Conflict for the Uber Driver
We observed a number of instances of role conflict in the human-algorithm relationship. This happened
because the drivers viewed important aspects of their work as being subject to competing and
contradictory demands and requirements. We describe sources of role conflict next.
Multiple Algorithms
The multiple algorithms in the Uber app often came into conflict with one another, and drivers described the conflicts that arose from being subject to them. For example, the ride request algorithm
often worked against the incentives and surge pricing. It was frustrating when drivers were called out of a
surge area, or guaranteed wage area, or incentive area to service a request. For example, one driver
explains his experience with chasing guaranteed wages where “on really busy nights they’ll say we
guarantee that you will get, you know, $20 an hour from this hour to this hour and then $19 from this
hour to that hour. But written in this little fine print is that you have to maintain an acceptance rate of
90% and you’ve got a minimum number of rides per hour in order to get them.” In practice, this meant
that the driver loses his guarantee if he refuses a single ride, but “if I take a ride that is twenty minutes out
and I take that guy five minutes, I’m almost guaranteed not to get my guarantees because I’m not going to
get back in time to where the populated centers are to be able to get enough rides.” Here, the algorithm for
calculating compensation introduces a conflict between serving the company during specific needed times
and serving a particular customer.
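A sketch of the fine-print logic this driver describes may help; the 90% acceptance threshold comes from his quote, while the function, the field names and the minimum-rides value are our own illustrative assumptions:

```python
def guarantee_eligible(hours, min_acceptance=0.90, min_rides_per_hour=2):
    """Perceived guarantee rule: the hourly wage guarantee applies only if the
    driver keeps the acceptance rate AND a minimum ride count in every hour.
    `hours` is a list of dicts with 'requests', 'accepted' and 'rides' counts."""
    total_requests = sum(h["requests"] for h in hours)
    total_accepted = sum(h["accepted"] for h in hours)
    acceptance_rate = total_accepted / total_requests if total_requests else 1.0
    return (acceptance_rate >= min_acceptance and
            all(h["rides"] >= min_rides_per_hour for h in hours))

# Accepting a twenty-minutes-out request keeps the acceptance rate at 100%,
# but the long round trip leaves only one completed ride in that hour:
busy_night = [{"requests": 3, "accepted": 3, "rides": 3},
              {"requests": 3, "accepted": 3, "rides": 1}]
print(guarantee_eligible(busy_night))  # -> False

# Refusing that request instead sinks the acceptance rate below 90%:
busy_night[1] = {"requests": 3, "accepted": 2, "rides": 2}
print(guarantee_eligible(busy_night))  # -> False (5/6 is roughly 83%)
```

The double bind is visible either way: accepting the distant ride violates the rides-per-hour condition, while refusing it violates the acceptance condition.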
Also frustrating was when drivers were directed out of a surge area: “Sometimes you get a ping that there
was a request and if I pull out of a really busy area where there was a surge, I could go somewhere out of
the way.” The drivers would be frustrated that they responded to a surge request and then the algorithm
directed them elsewhere. What was worse was that “sometimes the trip was cancelled when you got there
and that could be pretty frustrating if you were in a surge area where you get higher prices and it takes you
out of the surge area and you ended up getting nothing.” Again, the ride-matching algorithm creates a
conflict between serving the needs of out-of-area customers and serving in a high-need area.
Another common source of these frustrating conflicts involved drivers who attempted to serve the airport, whose patrons usually request longer and more profitable rides. When drivers arrive in the airport area,
they are automatically added to a queue waiting to get requests from airport patrons. As one driver
explained, “I wanted to just serve the airport and so my expectation was that when I was in a queue I
would only be getting requests from the airport queue, but they would send me requests from like a 20-
mile radius still… around five o’clock when people were getting off work at these call centers and things
that are around the airport area is, you’d be getting requests to go pick them up and take them home
because they didn’t have a car.” Unfortunately this would leave too few drivers at the airport queue, and “they’d send out what they call a panic text. They send out a mass text message that always says don’t text
and drive, but there’s only two cars waiting at the airport so you need to get back there.” The algorithm
that emphasizes the driver’s role to cover the airport was at odds with the algorithm that emphasizes the
driver’s duty to help all customers, resulting in a tug-of-war shuffling drivers back and forth. These types of conflicts were especially frustrating when the non-airport rides were short and to low-traffic areas, making it unlikely that the driver would pick up another ride.
Performance Metrics
What aggravated the conflicts between tasks described above was that performance metrics were also often misaligned with the tasks. For instance, turning down requests from customers outside the surge area or outside the airport area would allow drivers to meet the higher demand for drivers in those areas. However, it would also damage their acceptance rate, a key performance metric used by Uber to evaluate its drivers. Thus, by assigning requests out of area, the algorithm created conflicts between the drivers’ need to maximize their profits as entrepreneurs and the emphasis on their evaluation as contractors to Uber.
The driver rating performance metric was also at odds with Uber’s task assignments. A common refrain among drivers was frustration with UberPOOL’s automatic passenger assignment. The app would add
customers to be picked up along the way, going roughly in the same direction as the current rider.
Sometimes the new customer would be in the opposite direction from where the driver was going. Other times, the driver would be on the freeway and getting to the rider would require a much longer detour. In such situations
where picking up an additional UberPOOL customer would cause a much longer delay, the driver risks a
lower driver rating from the delayed customer. Thus, drivers overwhelmingly chose to cancel the
automatically added ride. This was an effort to preserve customer satisfaction and keep their driver rating
high, but at the same time, it hurt their acceptance rate. Thus, the algorithm would introduce conflicts between their customer-service-oriented and contractor-oriented performance metrics, without a way to reconcile the two.
Sources of Role Ambiguity and Role Conflict
Based on the data, our analysis identifies that the human-algorithm relationship, like human-human
relationships, also exhibits role ambiguity and role conflict. We identify sources of role ambiguity and role
conflict, some of which are unique to the fact that the algorithm is playing the managerial role. We saw
three sources of role ambiguity. First, drivers did not feel empowered to dictate their work hours or circumstances because of uncertainty around how much they would be paid. This resulted from periodic high-level shifts in pricing, as well as from uncertainty over whether they would obtain incentive or surge pricing. Second, they often received instructions for pick-up location, customer, or wait times from the driver Uber app that did not factor in key contextual knowledge. This led to drivers being blamed for not doing their job, or wasting time and resources being directed to customers whom they could not pick up.
Lastly, there was role ambiguity that resulted from unclear performance metrics such as not knowing the
acceptance rate goal. This ambiguity also resulted from ambiguous feedback such as not knowing how a
customer rates the driver, only knowing the aggregate rating, or from performance metrics based on a
moving target, such as driver ratings that are compared to the average in the area.
We saw two sources of role conflict as well. The first type of role conflict emerged due to the contradictory
functionalities of the algorithm itself, which gave rise to different and opposing requirements for the
driver. A human manager is usually an embodiment of a single role, e.g., a supervisor, a supervisee, or a colleague. However, our study’s participants reported being at the mercy of various algorithms that were
at odds with one another and did not reconcile their differences. A second source of role conflict arises
from these algorithms being misaligned with the performance metrics. Drivers had to decide which one to
let down. Table 1 synthesizes the findings in light of our analysis.
Role ambiguity:
- Unknown compensation: stems from dynamic pricing and uncertainty around obtaining incentives and surge pricing.
- Managing in absence of context: the app assigns tasks and even micromanages despite lacking key contextual knowledge.
- Unknown performance metric: due to lack of information (e.g., the acceptance rate threshold), moving targets (e.g., driver rating is defined relative to others), or ambiguity (e.g., not knowing which rider rated you poorly).

Role conflict:
- Disembodied, multi-headed boss: rather than one boss that can provide a single cohesive plan of action, drivers have many bosses that act in silos and often conflict.
- Misalignment of tasks with performance metrics: tasks are not always aligned with performance metrics.

Table 1. Role Ambiguity and Conflict
Conclusions and Future Research
Uber provides a work context in which humans interact with an algorithm for accomplishing tasks, in the
same way they have traditionally interacted with other humans. However, how this relationship is
perceived and performed is theoretically unclear. Our preliminary findings are interesting because they
show that key aspects of human-human interactions (such as role conflict and role ambiguity) manifest in
the human-algorithm interactions as well. We found that drivers did not perceive the Uber algorithm in a
single role of the traditional manager, colleague, or subordinate type relationships. Instead, they
perceived the algorithm to embody each of these different roles at different times, depending on the
context. Interestingly, the driver-algorithm interaction was different from what has been observed in the case of human-human interaction. This new kind of workplace relationship, and the role conflict and role ambiguity that resulted from it, led to new types of consequences as well.
One consequence was a dialectic between blaming the app and not blaming it when something went wrong. On the
one hand, we saw that drivers and passengers alike attributed a sense of neutrality where the algorithm
was not held accountable for its actions. Rather, either the fault would be attributed to the driver, or there
would simply be no blame. For instance, when the destination setting feature or other aspects requiring
cell signal do not work, there may be no fault assigned to anyone. On the other hand, drivers also
commonly were blamed when it was their Uber app that failed. This was especially problematic for the
driver since the Uber app would send conflicting tasks, as described earlier.
A second consequence of the algorithm-driven passenger selection and pick-up was the lack of a relational
and social component in the driving. The Uber app is optimized for each ride to be, as one interviewee put
it, “a single serving friendship”. This one-time use mentality affected ratings and lack of tips, and the
driver’s willingness to cancel when a customer slips up such as making the driver wait too long. However,
we found that many drivers craved a more social connection and, in fact, that was what motivated them to
drive. Several drivers found Facebook Uber driver groups valuable for emotional support and picking up
tips such as dealing with drunk passengers. Many would also get tips from passengers who were Uber
drivers themselves, or from their driver when taking Uber themselves. This sense of community was
invaluable.
Based on our preliminary observations, we suggest that the nature of the human-algorithm relationship
leads to role conflict and role ambiguity. In addition, the specific role that the human perceives of the
algorithm is dynamic over time. Specifically, drivers start by putting their trust in the algorithm’s ability to
efficiently manage their tasks. As the interaction introduces role conflict and role ambiguity, however,
drivers stop heeding the algorithm and may even start gaming it. In this way, the relationship may change
over time.
Future work includes undertaking additional interviews and data collection in the US to further examine
these ideas. We expect our study will open up new research opportunities to theoretically understand the
nuances of human-algorithm interaction from the point of view of how they affect the individual’s
perceptions of organizational roles. Besides these theoretical implications, we also expect our study to
practically inform organizations that deploy human-algorithm interactions in business processes about
how their employees might perceive such algorithms, and management implications thereof.
References
Abdel-Halim, A. A. 1981. "Effects of Role Stress-Job Design-Technology Interaction on Employee Work
Satisfaction," Academy of Management Journal (24:2), pp. 260-273.
Beer, D. 2009. "Power through the Algorithm? Participatory Web Cultures and the Technological
Unconscious," New Media & Society (11:6), pp. 985-1002.
Bhattacherjee, A. 2001. "Understanding Information Systems Continuance: An Expectation-Confirmation
Model," MIS Quarterly (25:3), pp. 351-370.
Brynjolfsson, E., and McAfee, A. 2014. The Second Machine Age: Work, Progress, and Prosperity in a
Time of Brilliant Technologies. New York: WW Norton & Company.
Cooper, C. L., Dewe, P. J., and O'Driscoll, M. P. 2001. Organizational Stress: A Review and Critique of
Theory, Research, and Applications. London and New Delhi: Sage.
Crabtree, A., and Mortier, R. 2015. "Human Data Interaction: Historical Lessons from Social Studies and
CSCW," ECSCW 2015: Proceedings of the 14th European Conference on Computer Supported
Cooperative Work, 19-23 September 2015, Oslo, Norway: Springer, pp. 3-21.
Dix, A. 2009. "Human-Computer Interaction," in Encyclopedia of Database Systems. Berlin: Springer,
pp. 1327-1331.
Fisher, C. D., and Gitelson, R. 1983. "A Meta-Analysis of the Correlates of Role Conflict and Ambiguity,"
Journal of Applied Psychology (68:2), pp. 320-333.
Graen, G. 1976. "Role-Making Processes within Complex Organizations," in Handbook of Industrial and
Organizational Psychology, M.D. Dunnette (ed.). Chicago: Rand McNally, pp. 1201-1245.
Holzinger, A. 2016. "Interactive Machine Learning for Health Informatics: When Do We Need the
Human-in-the-Loop?," Brain Informatics (3:2), pp. 119-131.
Kahn, R. L., Wolfe, D. M., Quinn, R. P., Snoek, J. D., and Rosenthal, R. A. 1964. Organizational Stress:
Studies in Role Conflict and Ambiguity. Oxford: Wiley.
Kaplan, S., and Orlikowski, W. J. 2013. "Temporal Work in Strategy Making," Organization Science
(24:4), pp. 965-995.
Katz, D., and Kahn, R. L. 1978. The Social Psychology of Organizations. New York: Wiley.
Lee, M. K., and Baykal, S. 2017. "Algorithmic Mediation in Group Decisions: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division," Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW).
Loebbecke, C., and Picot, A. 2015. "Reflections on Societal and Business Model Transformation Arising
from Digitization and Big Data Analytics: A Research Agenda," The Journal of Strategic Information
Systems (24:3), pp. 149-157.
Marabelli, M., Hansen, S., Newell, S., and Frigerio, C. 2017. "The Light and Dark Side of the Black Box: Sensor-Based Technology in the Automotive Industry," Communications of the AIS (40:16), pp. 351-374.
Marabelli, M., and Newell, S. 2009. "Organizational Learning and Absorptive Capacity in Managing ERP Implementation Projects," Proceedings of ICIS 2009, Phoenix, AZ.
Markus, M. L., and Tanis, C. 2000. "The Enterprise Systems Experience-from Adoption to Success," in
Framing the Domains of IT Management: Projecting the Future through the Past, R.W. Zmud (ed.).
Cincinnati, OH: Pinnaflex Educational Resources, Inc., pp. 173-207.
McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D., and Barton, D. 2012. "Big Data: The Management
Revolution," Harvard Business Review (90:10), pp. 61-67.
McGrath, J. E. 1976. "Stress and Behavior in Organizations," in Handbook of Industrial and
Organizational Psychology, M.D. Dunnette (ed.). Chicago: Rand-McNally, pp. 1351-1395.
Miles, M. B., and Huberman, A. M. 1984. Qualitative Data Analysis: A Sourcebook of New Methods.
Newbury Park, CA: SAGE Publications.
Newell, S., and Marabelli, M. 2015. "Strategic Opportunities (and Challenges) of Algorithmic Decision-
Making: A Call for Action on the Long-Term Societal Effects of ‘Datification’," The Journal of
Strategic Information Systems (24:1), pp. 3-14.
Nicholson, P. J., and Goh, S. C. 1983. "The Relationship of Organization Structure and Interpersonal
Attitudes to Role Conflict and Ambiguity in Different Work Environments," Academy of Management
Journal (26:1), pp. 148-155.
Perrone, V., Zaheer, A., and McEvily, B. 2003. "Free to Be Trusted? Organizational Constraints on Trust
in Boundary Spanners," Organization Science (14:4), pp. 422-439.
Rizzo, J. R., House, R. J., and Lirtzman, S. I. 1970. "Role Conflict and Ambiguity in Complex
Organizations," Administrative Science Quarterly (15:2), pp. 150-163.
Tubre, T. C., and Collins, J. M. 2000. "Jackson and Schuler (1985) Revisited: A Meta-Analysis of the
Relationships between Role Ambiguity, Role Conflict, and Job Performance," Journal of
Management (26:1), pp. 155-169.
Urquhart, L., and Rodden, T. 2017. "New Directions in Information Technology Law: Learning from Human-Computer Interaction," International Review of Law, Computers & Technology (published online 28 Mar 2017), pp. 1-20.
... Nevertheless, we really don't know how matching algorithm works. Some Uber drivers seem to be reluctant to trust it, as they argue that it generates ride requests by considering distance from one another and previous history of being matched (Page et al., 2017). Lee et al. (2015) argue that other factors, such as passenger-driver mutual rating and driver login time, could be factored into the algorithm in addition to passenger-driver distance. ...
... Although it is explained that not accepting trip requests does not lead to permanent loss of a driver's account, it is also specified that declining trip requests in a consistent way leads to being logged out of the app, assuming that the driver just does not want to accept more trips. 13 Some interviewed drivers confirm that declining too many trips leads to the threat of having their accounts closed (Page et al., 2017). Sometimes, if a driver has an acceptance rate below a certain threshold, he is encouraged to raise his acceptance rate through occasional promotions that offer a guaranteed hourly pay (Lee et al., 2015) in this case the threshold value is not indicated either, thus leading drivers to accept as many assignments as possible (Lee et al., 2015). ...
... -Available time to accept the request. It is about ten seconds, although an interviewed driver revealed it really was about 6 seconds (Page et al., 2017). And yet, it should be highlighted that the driver is not aware of the passenger destination, and therefore of the possible earning, until he accepts the request and picks up the rider. ...
Article
Full-text available
Workers of digital platforms are managed by algorithms and then ratedby customersabout their work. The aim of the paper is thus to describe howtechnological innovation is used to manage and control digital workers, and to present tocollective actors some proposals in order to negotiate algorithms and technological innovation with the purpose of improving drivers’working conditions. In particular, the paper analyses Uber case-study and the technology employed: geo-localisation techniques for smartphones and GPS, as well as algorithms. The paper is structured in two parts. After a brief analysis of legal cases, in the first part, it describes which technologies are (or could be) employed by the company to estimate the geographic position of users, to match supply and demandand to control drivers in the fulfilment of their working activity. In particular, it goes in depth into some technical issuesand parameters used to manage working conditions. In a second part, by adopting the results of the technical analysis as the starting point, it presents some proposals on how collective bargaining can intervene in the management of those kinds of technology to improve drivers’ working conditions.
... Moreover, platform workers are also reported to experience precarious work conditions, with unpredictable earnings, and often feel the pressure to do extra work. Drivers did not feel empowered to "dictate their work hours or circumstance because of uncertainty around how much they would be paid" (Page et al., 2017) Hand in hand with the studies of the effects of algorithmic decisions is the scholar's interest in how workers react to them. Such responses can be classified into two types: emotional or actional (Langer and Landers, 2021;Pregenzer et al., 2021). ...
... Workers receive limited information formulated by an algorithm so that unpopular jobs cannot be identified and rejected at the early stages (Möhlmann and Zalmanson, 2017). Page et al. (2017) mentioned that due to the lack of information, platform workers also experience role ambiguity. Moreover, the situation is only aggravated by the fact that the algorithms are constantly changing, and platform workers, along with a large amount of information provided, cannot independently develop at least some strategies for interacting with the algorithm (Ens, 2019). ...
Conference Paper
Full-text available
We review the literature on algorithmic management to help future researchers acquire a comprehensive "recap" of past research with detailed discussions on the main findings and develop a taxonomy as a tool of summarization that assists researchers in reflecting critically on their systems and identifying potential gaps. We determine five critical areas of algorithmic management: the mechanisms of algorithmic management, effects of algorithmic management, second party's response to algorithmic management, concerns around algorithmic management, design of algorithmic management, and policy implications. These topics are analyzed and discussed.
... Despite its manifold benefits for online labor platforms, algorithmic management is a doubleedged sword. Workers exposed to algorithmic management often report that they experience tensions in their work environment (Gal et al., 2020;Kellogg et al., 2020;Möhlmann et al., 2021;Page et al., 2017;Tilson et al., 2021;Wiener et al., 2021). For example, while gig workers often experience high levels of autonomy and flexibility (Rosenblat & Stark, 2016), they still feel surveilled and controlled through real-time surveillance (Newell & Marabelli, 2015;Zuboff, 2019). ...
... First, algorithm sensemaking is characterized by complexity; thus, this notion helps unpack contextual factors and accounting for essential elements in this setting. Algorithm sensemaking reminds us to look more deeply at technology's role in sensemaking (Mesgari & Okoli, 2019) and how people managed by algorithms perceive, and react to, the automated activity imposed on them (Page et al., 2017). ...
Article
Full-text available
Algorithmic management may create work environment tensions that are detrimental to workplace well-being and productivity. One specific type of tension originates from the fact that algorithms often exhibit limited transparency and are perceived as highly opaque, which impedes workers' understanding of their inner workings. While algorithmic transparency may facilitate sensemaking, the algorithm's opaqueness may aggravate sensemaking. By conducting an empirical case study in the context of the Uber platform, we explore how platform workers make sense of the algorithms managing them. Drawing on Weick's enactment theory, we theorize a new form of sensemaking-algorithm sensemaking-and unpack its three sub-elements: (1) focused enactment, (2) selection modes, and (3) retention sources. The sophisticated, multi-step process of algorithm sensemaking allows platform workers to keep up with algorithmic instructions systematically. We add to previous literature by theorizing algorithm sensemaking as a mediator linking workers' perceptions about tensions in their work environment and their behavioral responses.
... Recent literature depicts algorithms as bosses (Mohlman et al 2021) or as prescriptive agents that can take on supervisory roles (Baird and Maruping 2021). • Algorithms automatically capture digital traces of the human's task actions to calculate further parameters such as performance and rewards (Schildt 2017) with no human involvement or co-constructed dialog (Gal et al. 2017;Momin and Mishra 2015) Interaction of the algorithm with the human • Interactions occur through digital interfaces (e.g., an app) and are mediated by underlying algorithmic computations • Humans relate to algorithms as co-workers (Page et al. 2017) • Computational logic behind the algorithm's outputs is not always transparent (e.g., Bernstein 2017;Dolata et al. 2021;Feuerriegel et al. 2020) Outputs of the algorithm ...
... This can make communication from the algorithm unclear to the human (Rahman 2021; Parent-Rocheleau and Parker 2021). Yet, the algorithm's work instructions may be mandatory and not following them might lead to penalties for the human as observed in the case of algorithmic ridesharing (Page et al. 2017). ...
Article
Full-text available
In algorithmic work, algorithms execute operational and management tasks such as work allocation, task tracking and performance evaluation. Humans and algorithms interact with one another to accomplish work so that the algorithm takes on the role of a co‐worker. Human–algorithm interactions are characterised by problematic issues such as absence of mutually co‐constructed dialogue, lack of transparency regarding how algorithmic outputs are generated, and difficulty of over‐riding algorithmic directive – conditions that create lack of clarity for the human worker. This article examines human–algorithm role interactions in algorithmic work. Drawing on the theoretical framing of organisational roles, we theorise on the algorithm as role sender and the human as the role taker. We explain how the algorithm is a multi‐role sender with entangled roles, while the human as role taker experiences algorithm‐driven role conflict and role ambiguity. Further, while the algorithm records all of the human's task actions, it is ignorant of the human's cognitive reactions – it undergoes what we conceptualise as ‘broken loop learning’. The empirical context of our study is algorithm‐driven taxi driving (in the United States) exemplified by companies such as Uber. We draw from data that include interviews with 15 Uber drivers, a netnographic study of 1700 discussion threads among Uber drivers from two popular online forums, and analysis of Uber's web pages. Implications for IS scholarship, practice and policy are discussed.
... Think of Uber drivers, who need to follow specific routes to pick-up and drop-off customers according to an algorithm that makes decisions on their behalf. Deviating from its prescriptions has consequences that include a lower rating, loss of income and ultimately being laid off (Page et al. 2017). Yet, the long-term use of these algorithms has demonstrated that drivers cannot sever customers effectively if they follow the (rigid) algorithm blindly (Möhlmann et al. 2020). ...
... We have the knowledge and business-related background on how the workforce should (or should not) be managed. This would also add to (and build on) the growing (IS) body of literature on algorithms at work, for instance in Uber contexts (Möhlmann et al. 2020;Page et al. 2017). This would help addressing questions such as: how is it possible to incorporate human vetting of ADMS in novel and emergent surveillance settings? ...
Article
Full-text available
in this viewpoint article we discuss algorithmic decision-making systems (ADMS), which we view as organizational sociotechnical systems with their use in practice having consequences within and beyond organizational boundaries. We build a framework that revolves around the ADMS lifecycle and propose that each phase challenges organizations with "choices" related to technical and processual characteristics-ways to design, implement and use these systems in practice. We argue that it is important that organizations make these strategic choices with awareness and responsibly, as ADMS' consequences affect a broad array of stakeholders (the workforce, suppliers, customers and society at-large) and involve ethical considerations. With this article we make two main contributions. First, we identify key choices associated with the design, implementation and use in practice of ADMS in organizations, that build on past literature and are tied to timely industry-related examples. Second, we provide IS scholars with a broad research agenda that will promote the generation of new knowledge and original theorizing within the domain of the strategic applications of ADMS in organizations.
... 3 While in principle an analytics approach to people is powerful (Isson et al. 2016), most research reports issues related to people being surveilled and "bossed around" by algorithms (Mohlmann and Zalmanson 2017;Page et al. 2017;Rosenblat 2018). Some argue that employees' reluctance to engage with analytics in the workplace stems from beliefs that algorithms are not fully capable to assess one's job (Jago 2019). ...
... Some of them decided to leave the organization precisely because they did not tolerate that something (and not someone) would tell them how to perform their job (doctors), or because they were afraid they would be laid off once the PI initiative found that things could be done with less people (nurses). While it is quite typical that people experience unpleasant feelings when monitored by IT (Mohlmann and Zalmanson 2017;Page et al. 2017;Rosenblat 2018) a common reaction is to try and game the system, so circumvent analytics control by for instance turning off monitoring devices (Lee et al. 2015) or anyway attempting to bypass algorithmic assessment of one's job in creative ways, yet against organizational policies (Curchod et al. 2019). ...
Conference Paper
Full-text available
In this paper we focus on how organizational practices are generated through the introduction of analytics-based technologies aimed at monitoring employees' performance. We use longitudinal data collected from June 2017 to December 2019 at a healthcare network in the Greater Boston Area. The related literature often points to negative aspects of workplace surveillance through these systems; what is more, it is not clear why some organizations can benefit from process improvement through analytics while others cannot. These mixed findings, and a gray area around the reasons underpinning the successful deployment of these systems, motivate our study. We found that a mixture of top-down and bottom-up practices, in the long term, promotes collective action that supports the effective use of analytics. Top-down practices (management) focus on the reorganization of structures and formal processes; bottom-up practices (employees) concern cross-community bonding, creative workarounds to improve current practices, and attempts to transfer these (improved) practices to different contexts where the same analytics standards "rule." We theorize on how these practices need to be interwoven to be successful, and we highlight that this takes time. We therefore contribute to (and question the pessimism of) the related literature by showcasing the bright side of analytics at work as it unfolds over time.
... However, studies that connect the characteristics of work to workers' experiences of wellbeing are scarce (Vartiainen and Hyrkkänen 2010). The boundaries of our theorizing can be extended beyond BC-RMWs to inform research on other types of mobile work, such as gig workers on digital platforms who, like BC-RMWs, have low task latitude, high monitoring and little human contact, and who suffer low occupational wellbeing (Page et al. 2017). It is worth exploring whether they might benefit from ICT-enabled job crafting practices. ...
Article
Blue collar remote and mobile workers (BC-RMWs), such as repair/installation engineers, delivery drivers and construction workers, constitute a significant share of the workforce. They work away from a home or office work-base, at customer and remote work sites, and are highly dependent on ICT for completing their work tasks. Low occupational wellbeing is a key concern regarding BC-RMWs. The objective of this research is to understand how BC-RMWs can use Information and Communication Technology (ICT) to elevate their occupational wellbeing. We conducted a study of 28 BC-RMWs employed in two private sector firms (telecom service provision and construction industries) in the UK, across 14 remote work sites. Based on our findings, we develop the concept of ICT-enabled job crafting and theorize how ICT-enabled job crafting by BC-RMWs can help them increase their occupational wellbeing.
... Although this example might not transfer one-to-one to innovation processes, it still raises questions about the importance of DIAs in these settings. As individuals increasingly interact with algorithms in a work context, it is important to understand not only these novel types of 'actor-actant' relationships but also the newly arising category of 'actant-actor' relationships (Page, Marabelli, & Tarafdar, 2017). ...
Article
Full-text available
Although service provision and consumption are two sides of the same coin of value co-creation in the gig economy, value as an outcome has been investigated only from the customer's point of view, not the provider's. This study aims to explore the impact of algorithmic management, customer dysfunctional behavior and perceived injustice on Uber and Careem drivers' perceived value in Egypt. Qualitative interviews and content analysis were employed, with thematic analysis used for identifying, analyzing and reporting patterns within the data. Our findings show how drivers' perceived value is negatively influenced by algorithmic management, customer dysfunctional behavior and perceived injustice. To increase drivers' perceived value, ride-hailing companies should not only consider how to improve the control of algorithmic management and customer empowerment, but also revise their policies and decisions to provide positive value to their drivers.
Conference Paper
Full-text available
How do individuals perceive algorithmic vs. group-made decisions? We investigated people's perceptions of mathematically proven fair division algorithms making social division decisions. In our first, qualitative study, about one third of the participants perceived algorithmic decisions as less than fair (30% for self, 36% for group), often because algorithmic assumptions about users did not account for multiple concepts of fairness or social behaviors, and the process of quantifying preferences through interfaces was prone to error. In our second, experimental study, algorithmic decisions were perceived to be less fair than discussion-based decisions, depending on participants' interpersonal power and computer programming knowledge. Our work suggests that for algorithmic mediation to be fair, algorithms and their interfaces should account for social and altruistic behaviors that may be difficult to define in mathematical terms.
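For readers unfamiliar with fair division, a classic, provably envy-free procedure for two agents is divide-and-choose. The sketch below is a generic textbook example, not necessarily the algorithm the study deployed:

```python
# Divide-and-choose for two agents over indivisible items: a generic,
# textbook fair-division procedure (not necessarily the study's algorithm).
# The divider splits the items to balance her own valuations; the chooser
# then takes whichever bundle he values more, so he can never be envious.
from itertools import combinations

def divide_and_choose(divider_vals: dict, chooser_vals: dict) -> dict:
    items = list(divider_vals)
    best_split, best_gap = None, float("inf")
    # Brute-force search is fine for the handful of items typical here.
    for r in range(len(items) + 1):
        for bundle in combinations(items, r):
            rest = [i for i in items if i not in bundle]
            gap = abs(sum(divider_vals[i] for i in bundle)
                      - sum(divider_vals[i] for i in rest))
            if gap < best_gap:
                best_gap, best_split = gap, (list(bundle), rest)
    a, b = best_split
    if sum(chooser_vals[i] for i in a) >= sum(chooser_vals[i] for i in b):
        return {"chooser": a, "divider": b}
    return {"chooser": b, "divider": a}

print(divide_and_choose({"tv": 5, "sofa": 3, "desk": 2},
                        {"tv": 2, "sofa": 6, "desk": 2}))
# prints: {'chooser': ['sofa', 'desk'], 'divider': ['tv']}
```

Note the one-sided guarantee: the chooser can never envy the divider, while the divider is protected only insofar as she balanced the split by her own valuations; this is exactly the kind of mathematically defined fairness that, per the study above, participants did not always perceive as fair.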
Article
Full-text available
Sensor-based technologies are increasingly integrated into diverse aspects of our everyday lives. Despite the importance of understanding how these technologies are adopted and exploited by businesses and consumers, the information systems (IS) community has thus far devoted relatively little attention to the topic. Accordingly, our objective in this paper is to foster an exploration of the issue amongst IS scholars by focusing on the emergent use of sensor-based technologies in the automotive insurance industry. Insurance providers are increasingly turning to such technologies to gain competitive advantage around risk assessment and behavior-based pricing. To investigate this phenomenon, we consider the experiences of two organizations operating in distinct national contexts – Progressive Insurance (US) and Generali (Italy). These two insurance providers have been first movers in the adoption of sensor-based technologies for risk assessment and policy pricing. First, we highlight the key similarities and differences between the cases with regard to the technologies adopted, business models pursued, and anticipated benefits and pitfalls for the companies and their consumers. Second, in a more holistic way we discuss the implications and unintended consequences of sensor-based technologies in the automotive insurance industry. We formulate several research questions that will provide opportunities and encourage more research in this emerging area of study.
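To illustrate what behavior-based pricing from sensor data can look like, here is a minimal sketch. The base premium, per-event weights and surcharge cap are hypothetical assumptions for exposition, not the actual models used by Progressive or Generali:

```python
# Hypothetical behavior-based pricing sketch. The base premium, per-event
# weights and surcharge cap are assumptions for exposition only; they are
# not the actual models used by Progressive or Generali.

BASE_PREMIUM = 1000.00  # assumed annual base premium

WEIGHTS = {             # assumed risk weight per recorded telemetry event
    "hard_brake": 0.002,
    "rapid_accel": 0.001,
    "night_mile": 0.0005,
}

def adjusted_premium(events: dict) -> float:
    """Scale the base premium by a telemetry-derived risk multiplier."""
    risk = sum(WEIGHTS[k] * events.get(k, 0) for k in WEIGHTS)
    multiplier = min(1.0 + risk, 1.5)  # assumed cap on the total surcharge
    return round(BASE_PREMIUM * multiplier, 2)

print(adjusted_premium({"hard_brake": 40, "rapid_accel": 25, "night_mile": 300}))
# prints: 1255.0
```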
Article
Full-text available
Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well established, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and assistance of a human agent involved in the learning phase.
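A minimal human-in-the-loop sketch of the iML idea, using uncertainty sampling with scikit-learn; the toy data and the simulated oracle stand in for a human expert who would label each queried instance:

```python
# Minimal iML sketch: uncertainty sampling, where at each step the
# learner queries the instance it is least sure about. Here a stand-in
# oracle answers automatically; in a real iML setting a human would.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))                      # unlabeled pool (toy data)
oracle = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # simulated human labels

# Seed with one example from each class so the model can fit.
labeled = [int(np.argmax(oracle)), int(np.argmin(oracle))]
model = LogisticRegression()

for _ in range(10):
    model.fit(X_pool[labeled], oracle[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    certainty = np.abs(proba - 0.5)
    certainty[labeled] = np.inf        # never re-query labeled points
    query = int(np.argmin(certainty))  # most uncertain instance
    labeled.append(query)              # the 'human' supplies oracle[query]

print(f"labeled {len(labeled)} of {len(X_pool)} points")
```

The human's role here is precisely the heuristic sample selection the abstract describes: each query spends scarce expert attention where the model is least certain.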
Article
Effectively regulating the domestic Internet of Things (IoT) requires a turn to technology design. However, the role of designers as regulators still needs to be situated. By drawing on a specific domain of technology design, human–computer interaction (HCI), we unpack what an HCI-led approach can offer IT law. By reframing the three prominent design concepts of provenance, affordances and trajectories, we offer new perspectives on the regulatory challenges of the domestic IoT. Our HCI concepts orientate us towards the social context of technology. We argue that novel regulatory strategies can emerge through a better understanding of the relationships and interactions between designers, end users and technology. Accordingly, closer future alignment of IT law and HCI approaches is necessary for effective regulation of emerging technologies.
Conference Paper
Human Data Interaction (HDI) is an emerging field of research that seeks to support end-users in the day-to-day management of their personal digital data. This is a programmatic paper that seeks to elaborate foundational challenges that face HDI from an interactional perspective. It is rooted in and reflects foundational lessons from social studies of science that have had a formative impact on CSCW, and core challenges involved in supporting interaction/collaboration from within the field of CSCW itself. These are drawn upon to elaborate the inherently social and relational character of data and the challenges this poses for the ongoing development of HDI, particularly with respect to the 'articulation' of personal data. Our aim in doing this is not to present solutions to the challenges of HDI but to articulate the core problems that confront this fledgling field as it moves from nascent concept to find a place in the interactional milieu of everyday life, and the particular research challenges that accompany this move.