This is the author's version of a work that was published in the following source:
Kretzer, M. and Maedche, A. (2018). "Designing Social Nudges for Enterprise Recommendation Agents: An Investigation in the Business Intelligence Systems Context". To appear in: Journal of the Association for Information Systems (JAIS).
Please note: Copyright is owned by the author and/or the publisher. Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Fritz-Erler-Strasse 23
76133 Karlsruhe - Germany
http://iism.kit.edu
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
Accepted for publication: Journal of the Association for Information Systems (JAIS)
Designing Social Nudges for Enterprise Recommendation Agents:
An Investigation in the Business Intelligence Systems Context
Martin Kretzer
PricewaterhouseCoopers
martin.kretzer@ch.pwc.com
Alexander Maedche
Karlsruhe Institute of Technology (KIT)
alexander.maedche@kit.edu
Abstract
According to behavioral economists, a ‘nudge’ is an attempt to steer individuals toward making desirable
choices without affecting their range of choices. In this paper, we draw on the nudge concept. We design and
examine nudges that exploit social influence’s effects to control individuals’ choices. Although
recommendation agent research provides numerous insights into extending information systems and
assisting end-consumers, it lacks insights into extending enterprise information systems to assist
organizations’ internal employees. We address this gap by demonstrating how enterprise recommendation
agents (ERAs) and social nudges can be used to tackle a common challenge that enterprise information
systems face. Specifically, we use an ERA to facilitate information (i.e., report) retrieval in a business intelligence system. In addition, we use social nudges to steer users toward reusing specific recommended reports rather than choosing among recommended reports at random.
To test the effects of the ERA and the four social nudges, we conduct a within-subjects lab experiment with
187 participants. We also conduct gaze analysis (“eye tracking”) to examine the impact of participants’
elaboration. The results of our logistic mixed-effects model show that the ERA and the proposed social
nudges steer individuals toward certain choices. Specifically, the ERA steers users toward reusing certain
reports.
These theoretical findings also have high practical relevance and applicability: in an enterprise setting, the ERA allows employees to reuse existing resources, such as existing reports, more effectively across their organizations, because employees can find the reports they actually need more easily. This, in turn, prevents the development of duplicate reports.
Keywords: Behavioral economics, nudge, social influence, recommender system, workaround systems,
laboratory experiment, eye tracking, elaboration.
1 Introduction
Employees in many organizations supplement their Information Systems (IS) with additional Workaround
Systems (WS) to adapt their IS to their daily work routines (Alter, 2014; Jasperson et al., 2005). For instance,
employees frequently supplement their organizations’ core Business Intelligence Systems (BIS) with
spreadsheet applications when preparing business reports (Burton-Jones and Grange, 2013; Davenport,
2014; Li et al., 2013). Generally, such WS are considered flexible and are very popular with end users
(Bagayogo et al., 2014; Sun, 2012; Gass et al., 2015). Since many employees have the expertise required
to develop and/or change WS, the latter are well suited for implementing emerging requirements rapidly
(Alter, 2014). For instance, using spreadsheet applications, many users can quickly and easily extend a
report with additional fields (Panko and Aurigemma, 2010; Powell et al., 2008, 2009). However, from an
organization's perspective, the development and use of supplementary WS create problems, such as the limited reuse of reports (in this paper, reuse refers to different individuals' usage of an IS resource, not to repeated usage of the same IS resource by the same individual), inconsistent data, and poor decision making if decisions are based on outdated and
erroneous data (e.g., Brazel and Dang, 2008; Tyre and Orlikowski, 1994). In addition, the use of WS may
also lead to flawed and nonstandard routines and procedures (Alter, 2014).
To address these challenges, extant literature has focused on two IS governance-based approaches (Tiwana
and Kim, 2015). First, certain researchers focus on reducing the generation and use of WS by defining IS
governance policies and enforcing compliance (Liang et al., 2013; Rivard and Lapointe, 2012). Empirical
studies in this stream examine, for instance, individuals’ compliance with organizational guidelines and
policies (Abubakre et al., 2015; Xue et al., 2011). However, recent articles indicate that attempts to prohibit
the use of WS foster the use of “hidden” WS, known as shadow systems (e.g., Alter, 2014; Behrens, 2009).
A second research stream centers on empowering employees and managing WS. This stream acknowledges
the value of WS, allowing and empowering employees to build and use WS (Tiwana and Konsynski, 2010;
Weill and Ross, 2005). However, additional organizational units need to be established to manage and
balance the use of organizations’ core IS and supplementary WS. For instance, in the BIS context, these
units are generally cross-functional and focus on tasks such as gathering and synthesizing requirements,
integrating data, designing report templates, developing user authorization concepts, and defining data and
application responsibilities (O’Neill, 2011; Unger et al., 2008). Unfortunately, establishing and running these
additional, cross-functional organizational units create high costs for organizations and, by introducing an
additional governance layer, reduce the WS’ flexibility. Thus, both governance-based approaches have
significant limitations.
As a consequence, we suggest that BIS should recommend existing reports to users in order to complement
governance-based approaches and to support report reuse. This supports reuse of reports and prevents
users from creating redundant reports in WS. Specifically, BIS should be extended with recommendation
agents (RAs) that help users search for required reports. RAs usually elicit users’ preferences and
requirements, as well as recommend items that address these. RAs therefore facilitate users’ search for
information (Arazy et al., 2010). For instance, many online stores adopt RAs to help them recommend
products to their customers (Xiao and Benbasat, 2007). However, although RAs are very popular extensions to systems that target end consumers such as online stores (Li and Karahanna, 2015), they are rarely used
as extensions to enterprise IS such as BIS. We make this distinction explicit by referring to RAs that extend
enterprise IS as enterprise recommendation agents (ERAs). To date, ERAs have rarely been examined in
the literature (Hess et al., 2009), although information retrieval is very important in the enterprise IS context
(Bernstein and Haas, 2008; Mukherjee and Mao, 2004). To address this gap and to support report reuse, the
first objective of this paper is to extend a BIS with an ERA that recommends reports to users. By
increasing the reuse of existing reports, such an ERA could prevent redundant reports, data inconsistencies
and, eventually, poor decision making.
However, identifying and recommending reports alone is not sufficient to influence an individual's decision
to actually reuse a certain report. Instead, in line with Simon’s (1954) multi-stage decision-making process,
the ERA also needs to influence an individual’s actual decision to click on a certain recommendation and,
thus, to reuse a certain report. Hence, the second objective of this paper is to influence users to choose
a specific report recommendation.
We address this second objective by drawing on the body of works that behavioral economists have
produced. It is well known that individuals’ behaviors are not entirely rational, because their cognitive biases
influence individuals (Goes, 2013). For instance, changing the way in which different options are presented
to individuals affects their choices (Hanna, 2015). Hence, even if the range of available choices is not
affected, individuals can be enticed to make certain choices. Social psychologists and behavioral economists
refer to this form of influence as a nudge (Thaler and Sunstein, 2003), a concept whose originator was recently awarded the prestigious Nobel Memorial Prize in Economic Sciences (The Royal Swedish Academy of Science, 2017).
Designing an ERA that influences users’ report recommendation choices is such a nudge.
To support report reuse, such a nudge should be based on a cognitive bias that frequently occurs in
organizational settings. In particular, the ERA in this study exploits the effects of previous users' social
influence. The social influence of one individual on another is a well-known phenomenon in organizations.
For instance, the hierarchical power of individuals often influences others in organizations (Clegg, 2013;
Courpasson et al., 2012). Consequently, we refer to the ERA’s effects as social nudges. Building on
behavioral economists’ concept of a nudge (Sunstein and Thaler, 2003; Sunstein, 2014), we define two
criteria for a social nudge: (1) a social nudge steers an individual’s choice toward a desired option by
exploiting the effects of social influence between individuals, and (2) a social nudge does not change the
range of choices available to the individual.
Three common forms of social influence in organizations are social cohesion (proximity), institutional
isomorphism (similarity of positions), and hierarchical power (Borgatti and Foster, 2003; Fiske, 2010; Friedkin
and Cook, 1990). Building on these three forms of social influence, we design four social nudges. The first
social nudge aims to control an individual’s recommendation choice by exploiting that individual’s tendency
to be biased by his/her proximity to other individuals. The second and third social nudges aim to determine
individuals' recommendation choices by exploiting their tendency to be biased by the similarity between their
position in the organization and other individuals’ positions. Finally, the fourth social nudge aims to determine
an individual’s recommendation choice by exploiting that individual’s tendency to be biased by other
individuals' hierarchical power.
In addition, as a third objective, we examine recommendation elaboration as a moderator. Although the
described social nudges are based on social influence’s well-known effects, RAs provide an important new
context. Since RAs only display “social” information about previous users, but are not humans, only the
displayed reference information exerts social influence. Researchers therefore need to determine that
individuals actually cognitively process the displayed information about previous users. For instance, an ERA
that wants to steer users’ choices toward certain report recommendations can only succeed if users view and
process the information it provides. If users were to not elaborate on the provided information about previous
users, this information would be ignored. We therefore follow Meservy et al.’s (2014) recommendations and
use contemporary eye-tracking devices to reliably compute users' fixations and to control for recommendation
elaboration’s moderating effect.
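To illustrate how such a fixation-based elaboration measure could be derived, the following minimal Python sketch aggregates the time that gaze samples spend inside the screen region (area of interest) of a displayed recommendation. The sample format, the coordinates, and the function name are illustrative assumptions; they do not reproduce the fixation algorithm of the eye-tracking device used in the study.

# Minimal sketch (not the study's implementation): approximate recommendation
# elaboration as the total gaze time spent inside the recommendation's area of
# interest (AOI). Gaze samples are assumed to be (timestamp_s, x, y) tuples.

def dwell_time_in_aoi(samples, aoi):
    """Sum the time between consecutive samples whose gaze lies inside the AOI."""
    x_min, y_min, x_max, y_max = aoi
    dwell = 0.0
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        if x_min <= x <= x_max and y_min <= y <= y_max:
            dwell += t1 - t0
    return dwell

# Hypothetical gaze samples recorded at 60 Hz while a recommendation is shown.
samples = [(0.000, 100, 500), (0.017, 210, 180), (0.033, 215, 185), (0.050, 220, 190)]
recommendation_aoi = (200, 150, 400, 300)  # assumed screen rectangle of the recommendation
print(dwell_time_in_aoi(samples, recommendation_aoi))  # roughly 0.033 seconds inside the AOI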
To summarize, this paper has three objectives: First, we propose an alternative approach to balancing BIS
use and WS use. Extending a BIS with an ERA that supports report retrieval and, thus, report reuse, reduces
the need to develop supplementary WS. Second, we design and investigate the effects of four social nudges.
Since employees in an organization work together and influence each other, the social influence of prior
report users is a suitable bias to exploit when designing nudges in organizational settings. Third, using eye-
tracking technology, we highlight the importance of recommendation elaboration as a moderator of social
nudges’ effects on individuals’ recommendation choices.
The remainder of this article is structured as follows. Section 2 introduces the underlying theoretical
foundations for designing social nudges, while section 3 develops our hypotheses. The ERA that aims at
steering users toward choosing certain report recommendations is introduced in section 4, which also
describes a lab experiment to evaluate the ERA’s effects. Subsequently, section 5 presents our manipulation
checks, data analysis, and the experiment results. The implications of our work for theory and practice are
discussed in section 6. Section 7 concludes this paper.
2 Theoretical Foundations
2.1 Nudge
The Royal Swedish Academy of Sciences introduced the 2017 Prize in Economic Sciences awarded to Richard Thaler with the
words “Humans behave in complex ways. Although we try to make rational decisions, we have limited
cognitive abilities and limited willpower. […] Moreover, cognitive abilities, self-control, and motivation can
vary significantly across different individuals.” (The Royal Swedish Academy of Science, 2017, p. 1)
In fact, people's decision making is not entirely rational, because many environmental features can influence their decisions (Thaler et al., 2010). For instance, a person's decisions can be influenced by changing that person's environment. The literature refers to such intentional changes in a person's environment as nudges (Thaler et al., 2010). A nudge aims to influence people's decision making in certain ways without
them necessarily noticing that they have been influenced. Sunstein (2014, p. 17), one of the advocates of the
nudge concept, defines a nudge as an “initiative that maintains freedom of choice while also steering people’s
decisions in the right direction." This definition is consistent with the meaning of a nudge in everyday life,
which refers to a gentle hint or suggestion, and is the reverse of an obligation, a strict requirement, and/or
the use of force (Halpern, 2015, p. 22).
For example, a restaurant’s menu is a nudge. Although the selection on the menu does not change depending
on how it is presented on the menu, its presentation may affect a guest’s choice of food. For example, by
clustering the items differently, using larger/smaller images of them, and showing/hiding their prices,
restaurant managers can steer guests toward making certain choices (Thaler et al., 2010). A picture of a
smoker’s lungs on a pack of cigarettes is a second example of a nudge (Sunstein, 2014). Although the picture
does not force individuals to stop smoking, it presents the hazards of smoking, thereby steering individuals
toward reducing the number of cigarettes they smoke. However, pictures and warnings are not the only form
of nudges. Other popular forms of nudges include disclosure of information, default rules that become
individuals’ choices if they do not change these, the framing of choices, and “cooling-off” periods (Hanna,
2015; Leonard, 2008).
The nudge concept is based on behavioral economists’ research stream, called libertarian paternalism
(Thaler and Sunstein, 2003). Advocates of this stream observe that there are ways of influencing decision
making that do not limit liberty, but also do not quite fit the mold of ordinary persuasion. Because individuals
are susceptible to various cognitive biases, the way in which options are structured and presented affects
their decision making (Hanna, 2015). Libertarian paternalism aims to exploit these effects in order to
encourage more prudent decision making. In particular, libertarian paternalism promotes the use of nudges
to create choice architectures. Like an architect who creates buildings, a choice architect creates a contextual
background against which choices need to be made (Thaler et al., 2010). By doing this, the choice architect
deliberately builds a choice architecture that presents choices in a certain way and, thus, nudges individuals
toward making certain choices (Sunstein, 2014).
On the whole, we can highlight three essential nudge characteristics: first, a nudge does not restrict the
choices available to an individual (Bovens, 2008; Cohen, 2013). Second, a nudge changes the environment
in which choices are made (Sunstein, 2014). Third, a nudge “harnesses cognitive biases for good” ends
(Thaler and Sunstein, 2008, p. 8; Trout, 2005, p. 432). Deception, or the deliberate withholding of information
that one is obliged to disclose, is not considered a nudge (Hanna, 2015). Hausman and Welch (2010)
describe a nudge as a preference-shaping intervention as opposed to a non-preference shaping intervention.
The use of nudges may seem morally disputable to advocates of freedom of choice (Leonard, 2008).
According to their criticism, any intentional influence of individuals’ decision making should generally be
avoided, including individuals’ rights to take risks and make errors. However, liberal paternalists counter that
decisions are not made in a vacuum (Thaler et al., 2010). Influences on choices are inevitable, whether they
are intentional, or the product of any kind of conscious design (Sunstein, 2014). Hence, Sunstein (2014)
argues that nudges should support individuals’ autonomy rather than reduce their freedom of choice. For
instance, nudges may enable individuals to consider relevant alternatives that would otherwise be ignored
due to information overload.
IS studies on examining and designing nudges are still very rare. However, there are related studies on cognitive biases (Goes, 2013). For instance, IS researchers have examined default options in the past (Thaler et al., 2010). Allen and Parsons (2010) showed that providing anchors affects individuals'
decision making and, specifically, that anchoring leads to an adjustment bias. If individuals who need to write
program code are provided with an anchor (i.e., code pieces), they frequently fail to make sufficient changes
to the anchor and are overconfident of their solution. Furthermore, Weinmann et al. (2016) provide an
overview of common forms of nudges and discuss them in the context of online product-rating platforms.
These authors also suggest a process for designing and evaluating other forms of nudges (Weinmann et al.,
2015). This work follows their suggestions.
In line with our second research objective, we design nudges to steer an individual toward choosing a certain
report recommendation from a set of multiple recommendations. In particular, we focus on nudges that exploit
social influence’s effects in order to control an individual’s recommendation choices. Social influence has
been identified as an important determinant for explaining individuals’ behaviors in organizations and
institutions (Tichy et al., 1979; Tsai and Goshal, 1998). Consequently, the following sections introduce forms
of social influence that arise from social networks within organizations and organizational hierarchies.
2.2 Social Influence
2.2.1 Social Influence based on Social Networks
Social influence is a process through which individuals modify others' behaviors, thoughts, and feelings
(Cartwright, 1959; Lewin, 1951). Social psychology has examined many different forms and perspectives of
social influence (Fang et al., 2015; Kilduff and Tsai, 2003). For instance, Fiske (2010) and Hogg (2010) have
reviewed different forms of social influence. These include, for example, the social cognition of attitude change as a consequence of influence, propaganda and the mass transformation of attitudes, interpersonal persuasion, the development and change of behavioral norms, the effects of behavioral regularities on people's behavior, and group socialization processes. Owing to the sheer number of effects based on social influence, the
common approach to studying its effects is to focus on its specific effects in a particular context (Anderson
and Kilduff, 2009). In this paper, we therefore focus on social influence processes that are particularly strong
within organizations. Specifically, we focus on (1) social networks, because, from an employee’s perspective,
organizational structures represent a social network (Kilduff and Tsai, 2003; Sparrowe et al., 2001; Tichy et
al., 1979) and on (2) hierarchical power, because most organizations are based on hierarchical structures.
There are two main approaches to researching social influence in social networks. Borgatti and Foster (2003)
refer to them as a connectionist versus a structuralist approach (or a flow-based vs. a topology-based
approach, or a relational vs. a structural approach). The connectionist approach highlights an interpersonal
transmission process between those with pre-existing social ties (Kilduff and Tsai, 2003). Connectionists
argue that, at its core, the social cohesion between two individuals (typically defined as the proximity
between them) causes them to influence each other (Borgatti and Foster, 2003). In contrast, the structuralist
approach emphasizes the structural similarity of nodes in a network, although no tie may connect them.
According to this approach, the extent to which individuals share similar isomorphic positions in a network
determines the social influence they exert on each other.
It is important to note that the effects that high proximity between individuals may have on social influence
and the effects that the high isomorphic similarity between individuals' positions may have on social
influence may overlap. Consequently, IS researchers focusing on social influence have developed and tested
hypotheses based on both the approaches. For instance, Singh and Phelps (2013) find that individuals’
decisions when choosing a license type for their open source projects are influenced by (1) the license type
of the previous projects to which these individuals were closely connected and by (2) the license choice of
isomorphically similar projects (i.e., projects with similar social network structures).
2.2.2 Social Influence based on Power in Organizational Hierarchies
Many organizations are based on hierarchical structures. Although recent studies indicate a shift to more
heterarchical structures (Kellogg et al., 2006; Stark, 2009), organizations are still highly hierarchical
(Courpasson et al., 2012) and organizational hierarchies represent people’s most common daily experience
of hierarchies outside the family (Fiske, 2010).
Status and power form the bases of hierarchical differentiation in organizations (Anicich et al., 2016; Clegg
et al., 2006). Social psychologists define status as social respect, recognition, importance, and prestige (e.g.,
Fiske, 1993). In contrast, power is defined as the control over valued resources (e.g., Fiske, 1993; Keltner et
al., 2003; Magee and Galinsky, 2008). Power and status often depend on each other (Clegg, 2013). For
instance, as explained by Fiske (2010), many managers in institutional hierarchies are respected (status) and
control resources (power).
3 Hypothesis Development
We design and evaluate an ERA that supports the reuse of reports. Limited report reuse is a common
phenomenon and a serious problem for organizations, because it results in operational inefficiencies and
poor decision making. To address these issues, our ERA uses social nudges that aim to steer individuals
toward choosing certain report recommendations and, thus, toward reusing certain desirable reports. In
particular, we describe the effects of these social nudges on individuals’ recommendation choices. We model
all nudges as external stimuli and individuals’ recommendation choice as responses to those stimuli. In
addition, we include elaboration in our model. This is important, because in our context, stimuli are messages
displayed on screens rather than “real” physical influences. Consequently, these messages’ effect on certain
individuals depends on the extent to which these individuals process them, which is commonly referred to as
elaboration (Angst and Agarwal, 2009; Bhattacherjee and Sanford, 2006; Ho and Bodoff, 2014; Meservy et
al., 2014). Elaboration is defined as the amount of message-relevant thinking in which an individual engages
while evaluating a message (Petty and Cacioppo, 1986a, 1986b). In our model, we examine elaboration as
a moderator, because it might strengthen the effects of messages displayed on computer monitors.
3.1 Social Nudges as External Stimuli
We design specific nudges by drawing on theoretical knowledge about social influence in networks, because
organizations can be viewed as networks (Kilduff and Tsai, 2003). We suggest effects based on (1) social
cohesion and (2) institutional isomorphism. We also consider (3) power in organizational hierarchies, because
most organizations are based on hierarchical structures.
3.1.1 Social Cohesion: Social Influence based on Proximity between Individuals
The connectionist view of social influence is based on the proximity between two individuals in a network,
which is commonly referred to as social cohesion (e.g., Gargiulo and Benassi, 2000). Social cohesion
focuses on how two individuals are connected to each other and how they communicate; it is sometimes also
referred to as the flow approach to social influence (Borgatti and Foster, 2003).
Social cohesion refers to the ties between individuals. Marsden and Friedkin (1993) defined cohesion
in terms of the number, length, and strength of the paths that connect actors in a network. According to this
approach, the degree of proximity (i.e., the degree of social cohesion) between two individuals is high if they
are directly tied in a network via a short connection. Conversely, the degree of proximity between two
individuals is low if they are not connected, or only connected via many intermediaries.
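As a purely illustrative example of this definition, the following sketch computes a path-based proximity score in a small, hypothetical organizational network, using the inverse of the shortest-path distance; neither the network nor the measure is taken from the study.

# Illustrative sketch: social cohesion (proximity) operationalized as the inverse
# of the shortest-path distance between two employees in a hypothetical network.
import networkx as nx

g = nx.Graph([("Thomas", "Anna"), ("Anna", "Michael"), ("Michael", "Lena")])

def proximity(graph, a, b):
    """Return 1/d for shortest-path distance d, or 0.0 if no path exists."""
    try:
        return 1.0 / nx.shortest_path_length(graph, a, b)
    except nx.NetworkXNoPath:
        return 0.0

print(proximity(g, "Thomas", "Anna"))  # directly tied: 1.0 (high cohesion)
print(proximity(g, "Thomas", "Lena"))  # connected only via intermediaries: ~0.33 (low cohesion)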
In line with the nudge concept, changing the presentation of recommendations may influence the
recommendation a user chooses. Displaying information about previous users of recommended items may
specifically influence the choices of new users presented with a set of alternative recommended items. This
study aims to exploit this potential bias by means of social influence’s effects.
This study therefore first draws on theoretical knowledge about social cohesion’s effects and suggests that
new users are likely to choose a certain recommendation if the social cohesion between them and the
previous user about whom information is displayed, is high. We therefore suggest the following hypothesis:
Hypothesis 1 (H1). High social cohesion between two individuals increases the probability that each of them
will choose a recommended item associated with the other.
Note that social cohesion focuses on the proximity between two individuals within a network. Multiple biasing
factors could, however, change the effect of social cohesion. For instance, if two individuals had bad
experiences when working together, this could reverse social cohesion’s effect. While H1 assumes that, as
such, the effect of social cohesion is positive, biasing factors, such as bad experiences, may cause it to have
a negative effect. That is, individuals will then be less likely to choose a recommended item associated with
the other. However, we do not consider these possibilities in our study, because we focus on social cohesion
and not on potentially biasing external factors.
3.1.2 Institutional Isomorphism: Social Influence based on Similar Positions in Organizational Structures
The structuralist view of social influence is based on the degree to which individuals have equivalent positions
in an organizational structure. The institutional isomorphism concept best captures the process of
structural equivalence in organizations (DiMaggio and Powell, 1983). Unlike social cohesion, isomorphism
does not depend on proximity (Borgatti and Everett, 1992). In an isomorphic network, “nodes may be
adjacent, distant, or completely unreachable from each other” (Borgatti and Everett, 1992).
In this study, we use the concept of isomorphism rather than equivalence. Whereas structural equivalence
only views two individuals as occupying the same position if they are connected to the same third individual,
structural isomorphism views two individuals as occupying the same position if they are connected to
corresponding others (Borgatti and Everett, 1992). Isomorphism does not therefore depend on a direct
connection and can be distinguished from social cohesion. Structural equivalence, however, is an inseparable
part of social cohesion, as it requires links to the same individual (Borgatti and Everett, 1992). Furthermore,
institutional isomorphism specifically addresses the structural determinants that individuals perceive
(DiMaggio, 1986). It does not consider, for example, psychological determinants, which are difficult to
separate from the effects resulting from social cohesion (DiMaggio and Powell, 1983).
In Hawley’s (1968) description, isomorphism is a constraining process that forces one actor to resemble other
actors who face the same set of environmental conditions. Early research on institutional isomorphism
focused on isomorphic processes at the organizational level. For instance, DiMaggio and Powell (1983)
examined isomorphic processes that cause organizations to change and adapt the structural models of other
organizations if these organizations are competitors that are more successful. However, the theory of
isomorphism also applies to the individual level. Individual actors within organizations may adopt their
colleagues' successful practices. For instance, an internal blog may influence a software developer working
in the US department of a global company, because another software developer in the same company wrote
the blog. Even if the second software developer were working in Asia and was not connected to the software
developer in the US via a direct, personal tie, the similarity of their positions could cause the software
developer in Asia to influence the American software developer. Note that the social influence exerted in this
example arises from the individuals' isomorphic positions within their organization: both individuals are software developers in the same organization.
The effects of institutional isomorphism and social cohesion may obviously overlap and reinforce each other
(DiMaggio and Powell, 1983; Borgatti and Everett, 1992). For instance, if the software developer in the US
also knows the software developer in Asia personally, or if they are connected via the same supervisor, the
influence would be all the stronger. This shows that social cohesion and institutional isomorphism
complement each other.
However, it is important to keep the key differentiator between the two in mind. While influence based on
social cohesion depends on a tie between individuals, influence based on institutional isomorphism does not
need a direct tie, but does require an understanding of the structure or context of the individuals’ relationships.
Consequently, we also theorize that an individual is more likely to choose a report recommendation that is associated with a colleague who works in an isomorphic position.
We specifically examine business functions and locations (i.e., geographical regions, countries) as positions
in organizations’ networks, because organizations are usually structured according to their business functions
and/or locations (Miles, 2012). For instance, in an organization, common business functions include
accounting, marketing, IT, sales, and human resources. Since individuals working in the same business
function are considered to cooperate and exchange resources and knowledge frequently, many organizations
are primarily built on such functions. Consequently, we theorize as follows:
Hypothesis 2 (H2). High institutional isomorphism between two individuals (regarding their primary business
functions) increases the probability that both of them will choose a recommended item associated with the
other.
Organizations are also frequently structured according to locations. These may correspond directly to
countries (e.g., Brazil, China, India, the US, and the UK), but also to more generic regions, such as time-
zone-based and continent-based regions (e.g., America, Europe-Middle-East-Africa, Asia-Pacific-Australia).
Again, since employees in the same time-zone and/or the same country are expected to work together
closely, many organizations are structured according to locations. We therefore theorize that:
Hypothesis 3 (H3). High institutional isomorphism between two individuals (regarding the location in which
they primarily work) increases the probability that each of them will choose a recommended item associated
with the other.
In our study, we focus on these two social nudges based on institutional isomorphism. However,
organizations, or researchers, could also design and test alternative social nudges based on institutional
isomorphism. These could, for instance, focus on employees’ job roles, such as sales analysts, ad campaign
analysts, software engineers, etc. In real organizations, institutional isomorphism with regards to job roles
seems to be an especially powerful basis for nudging employees toward reusing certain reports, because we
assume that, for example, two sales analysts use their BIS to achieve similar objectives. We did not
investigate this nudge, because institutional isomorphism regarding job roles and business functions (e.g.,
sales departments, marketing departments, IT departments) would be very similar. We therefore only focused
on institutional isomorphism regarding business functions, because we assumed that job roles (e.g., ad
campaign analysts) would be more difficult to understand in a lab setting than business functions (e.g.,
marketing departments).
3.1.3 Power in Organizational Hierarchies
Like social networks, power in social hierarchies often causes influence in terms of changing other people’s
beliefs and behavior (Hogg, 2010). In general, in a hierarchy, the upper levels have more power than the
lower levels (Fiske, 2010).
The items that a RA suggests are often, at least partly, based on their historical usage. For instance, a movie
RA may suggest a movie to a new user, because another user, who seems to be similar to the (potential)
new user, had watched it. In general, RAs based on structured information (e.g., movies categorized into
movie genres) often provide more useful recommendations than those solely based on unstructured
information, or substantially less structured information (e.g., uncategorized news articles) (Paterek, 2007).
We therefore propose that using RAs for information retrieval within organizations is particularly useful,
because they already have a structure that could be used to compute similarities between potential users.
For instance, an ERA based on an organization’s hierarchy may suggest a report to an employee, because
this employee’s supervisor used it in the past. Drawing on the effects of power within organizational
hierarchies (Anicich et al., 2016; Clegg et al., 2006), we propose that an employee would prefer a
recommendation from a relatively powerful user in the organization’s hierarchy, who controls financial
budgets and employees. Thus, we suggest the following hypothesis:
Hypothesis 4 (H4): The extent of an individual’s power (regarding this individual’s hierarchical position and
control of financial budgets and employees) increases the probability that others will choose a recommended
item associated with this powerful individual.
Whether individuals know each other further affects the effect of hierarchical power. Employees in a high
position, such as directors, may influence other employees even if they are not directly connected to them.
Conversely, employees who are in low hierarchical positions, such as interns, may generally only influence
directly connected employees (e.g., colleagues whom they directly assist), but are unlikely to influence other
employees. Thus, providing additional information about social cohesion (i.e., proximity) affects the influence of employees in lower organizational positions more than that of employees in higher organizational positions. Accordingly, we define an interaction effect between power and social
cohesion:
Hypothesis 5 (H5): The extent of an individual’s power (regarding this person’s hierarchical position and
control over financial budgets and employees) moderates the effect of social cohesion between this individual
and others. Specifically, the degree to which the individual is powerful weakens the effect of social cohesion.
3.2 Recommendation Elaboration as Moderator
Individuals’ decision making is based on the identification of candidate options that are then evaluated and
reduced to the most appropriate choice or choices (Simon, 1957). This process depends greatly on the
degree to which an individual scrutinizes the set of available choices (Petty and Cacioppo, 1986a, 1986b). In
this study, individuals have to choose from a set of available recommendations, and, although they do not
know the recommended item (i.e., the report), they could process information about the recommendation.
For instance, they could consider information about colleagues associated with recommended items. This is
important, because individuals’ likelihood of engaging in effortful processing of such information determines
their chosen recommendations.
Petty and Cacioppo (1986a, 1986b) described this likelihood in terms of an elaboration continuum. At the
high end of the elaboration continuum, people assess all of the available information to obtain a carefully
considered, although not necessarily unbiased, evaluation (Gawronski and Creighton, 2013). This means
that the greater the extent to which individuals elaborate on a certain recommendation, the greater the
probability that any additional information about the recommendation will influence them. For instance, if a
recommendation is displayed with a short message describing the data on which the recommendation is
based, this message is more likely to affect users who elaborate very extensively. Accordingly, at the low end
of the elaboration continuum, people engage in considerably less scrutiny of object-relevant information
(Gawronski and Creighton, 2013). Any additional information provided about a certain recommended item is
therefore less likely to affect these individuals. In other words, if individuals do not engage in recommendation
elaboration, they will ignore additional information about recommended items and choose recommendations
randomly.
Thus, the extent to which an individual elaborates on additional information about a set of recommended items
determines the effect this information has on this individual’s decision to choose a certain recommendation.
For instance, the effect of additional information about previous users associated with the recommended
items depends on new users’ elaboration of this information. Accordingly, we define the following hypotheses:
Hypothesis 6a (H6a). The degree to which an individual elaborates on information about a recommended
item strengthens the effect of providing information about social cohesion between that individual and another
associated with the recommended item.
Hypothesis 6b (H6b). The degree to which an individual elaborates on information about a recommended
item strengthens the effect of providing information about institutional isomorphism (regarding business
functions) between that individual and another associated with the recommended item.
Hypothesis 6c (H6c). The degree to which an individual elaborates on information about a recommended
item strengthens the effect of providing information about institutional isomorphism (regarding locations)
between that individual and another associated with the recommended item.
Hypothesis 6d (H6d). The degree to which an individual elaborates on information about a recommended
item strengthens the effect of providing information about the power of another individual associated with the
recommended item (regarding hierarchical position and control over financial budgets and employees).
For instance, the effect of providing information about the social cohesion between the potential new user
and a previous user (H1) depends on the new user’s elaboration of such information. Similarly, concerning
the effect of institutional isomorphism, we theorize that an individual needs to elaborate on a recommended
item in order for additional information about institutional isomorphism regarding business functions (H2) and
locations (H3) of previous users to influence this individual. Finally, the impact of providing information about
previous users’ power (H4, H5) also depends on new users’ elaboration of that information because, without
elaboration, the new user would ignore the information.
3.3 Recommendation Choice as Response to Social Nudges
As responses to the defined social nudges, we examine individuals’ recommendation choices. Currently,
many RAs provide multiple recommendations, rather than just one. Individuals can therefore choose between
several recommendations.
In line with our hypotheses, we suggest that social nudges could be used to steer individuals toward choosing
certain recommendations. We operationalize our measure of recommendation choice in section 4.4. We do
not examine any other individual responses besides recommendation choice. Recommendation choice is our
study’s only dependent variable.
Figure 1. Research model.
Figure 1 summarizes our research model. In line with other IS studies focusing on various design features
(e.g., Choi et al., 2015; Day et al., 2009; Xu et al., 2014; Zhang et al., 2011), we model our social nudges as
independent variables. In addition to the hypotheses developed above, we examine the influence of four
control variables: age, gender, nationality, and culture (with regard to one’s first language as a child).
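Although the estimation details follow in section 5, one plausible way to formalize Figure 1 as a logistic mixed-effects model is sketched below; the symbols and the random-effects structure are our own shorthand rather than the exact specification reported later.

\mathrm{logit}\,\Pr(y_{it}=1) \;=\; \beta_0 + \beta_1\,SC_{it} + \beta_2\,II^{\mathrm{func}}_{it} + \beta_3\,II^{\mathrm{loc}}_{it} + \beta_4\,POW_{it} + \beta_5\,(POW \times SC)_{it} + \textstyle\sum_k \gamma_k\,(ELAB \times X_k)_{it} + \boldsymbol{\delta}^{\top}\mathbf{c}_i + u_i, \qquad u_i \sim \mathcal{N}(0,\sigma_u^2)

Here, y_{it} indicates whether participant i chooses the focal recommendation in treatment t; SC, II^{func}, II^{loc}, and POW denote the displayed levels of social cohesion, institutional isomorphism, and hierarchical power; the ELAB x X_k terms capture the hypothesized elaboration moderation (H6a-H6d); c_i collects the control variables; and u_i is a participant-level random intercept that accounts for the within-subjects design.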
4 Research Method
To test our research model, we conducted a laboratory experiment. Lab experiments are particularly suited
to examining the temporal precedence of causes and to eliminating alternative explanations of possible cause-
effect connections in the IS discipline (Cook and Campbell, 1979; Colquitt, 2008; Dennis and Valacich, 2001;
James, 1980).
4.1 Material: BIS with ERA and Report Recommendations
Our study uses a self-developed prototype of a web-based BIS. This BIS is extended with an ERA which
provides users with report recommendations. Figure 2 shows a screen-shot of the BIS and the ERA. The
report recommendations (lower box on the left side of the screen) change every few seconds.
Figure 2. BIS extended with an ERA that suggests reports to users.
All the participants in our experiment used the BIS with the ERA that recommends reports. The BIS provides
users with business reports and the basic functionality for analyzing these reports, such as filtering reports,
sorting results, adjusting the number of rows shown on a single webpage, etc. Specifically, we used
Microsoft’s publicly available “Contoso” (Microsoft, 2016) BIS training data as the dataset. This dataset
contains data of an electronics retailer called "Contoso." The data includes all of Contoso's sales transactions
over a period of three years, as well as detailed information about its stores, locations, customers, orders,
marketing campaigns, and inventory stocks. Using this dataset, a simulation program created a set of 75
reports. These reports only provide information in tables and not in charts, dashboards, or other
visualizations. The BIS provides access to these reports. Although each report is unique, parts of the reports
may overlap, which means that some reports may contain the same columns. Consequently, in some cases,
users may find the information they require in multiple reports.
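The simulation program itself is not described further; the following sketch, using invented column names, only illustrates the stated property that each generated report is unique while different reports may share columns.

# Illustrative sketch (invented column names, not the authors' simulation program):
# generate unique tabular report definitions per fact type by sampling column
# subsets, so that different reports may share some of their columns.
import random

SALES_COLUMNS = ["StoreName", "ProductCategory", "SalesAmount", "OrderDate", "CustomerKey"]

def generate_reports(fact_type, pool, n_reports, seed=0):
    rng = random.Random(seed)
    reports, seen = [], set()
    while len(reports) < n_reports:
        columns = tuple(sorted(rng.sample(pool, rng.randint(3, len(pool)))))
        if columns in seen:  # keep every report definition unique
            continue
        seen.add(columns)
        reports.append({"name": f"{fact_type} Report {len(reports) + 1}", "columns": columns})
    return reports

for report in generate_reports("Sales", SALES_COLUMNS, 3):
    print(report["name"], report["columns"])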
The BIS provides a list of all reports and an ERA recommending reports to support users searching for certain
information. The BIS entry page shows a list of all the reports on the left side of the screen. Users can scroll
through the list and click on reports’ names to open them. All the report names start with the type of business
facts they provide. Based on the Contoso dataset, five types of business facts are distinguished: sales
transactions, orders, inventory stocks, product categories, and promotions. Examples of report names are
"Sales Report 1" and "Orders Report 4".
In addition to this list of available reports, the BIS provides an ERA that recommends reports for users. It is
important to note that, linked to a recommended report, each report recommendation provides additional
information about a previous user. In particular, each recommendation provides information about previous
users with regard to their business functions (e.g., sales department), the locations in which they work (e.g.,
the USA), and their positions (e.g., project manager). The recommendation also indicates whether the
recommended report's previous user is directly connected to the current user (e.g., "Your director…"), or not (e.g., "A director…"). For instance, an example recommendation would be "Michael, a project manager from the Sales department in the USA liked this report", where "this report" is a link that opens the
recommended report. Note that the recommendation does not show the name of the recommended report,
nor does the gender of the previous report users change within a set of alternatively displayed report
recommendations. The experiment participants cannot therefore select a certain report recommendation on
the grounds of gender preferences. We only use common names derived from an online database of baby
names (World-English, 2015).
With the additional information provided about previous users of recommended reports, we intend to nudge
new users toward choosing certain report recommendations. The information allows new users to infer the
social influence of the previous report users. Existing studies have shown that users who view information about others on their screen associate this information with those people (Guadagno et al., 2011; Teubner
et al., 2015). This additional information conveys the social influence of these previous users. We use this
effect to generate different social nudges in our experiment. Table 1 lists our experimental treatments of
social nudges. In line with these treatments, Table 2 shows exemplary report recommendations as provided
to users by the ERA.
Table 1. Experimental treatments of social nudges.
Social cohesion: high = "Your [project manager…]"; low = "A [project manager…]".
Institutional isomorphism with regard to business function: high = same department; medium = similar, closely related department (e.g., sales and marketing); low = different, unrelated department (e.g., sales and risk mgmt.).
Institutional isomorphism with regard to location: high = same country; medium = different country from the same continent; low = different country from a different continent.
Hierarchical power: high = director; medium = project manager or project leader; low = intern.
Table 2. Example report recommendations.
Social cohesion high, institutional isomorphism (business function) high, institutional isomorphism (location) high, hierarchical power high: "Your director from the Sales department in Germany liked this report."
Social cohesion low, institutional isomorphism (business function) medium, institutional isomorphism (location) medium, hierarchical power medium: "A project leader from the Marketing department in France liked this report."
Social cohesion low, institutional isomorphism (business function) low, institutional isomorphism (location) low, hierarchical power low: "An intern from the Risk Mgmt. department in Canada liked this report."
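To make the mapping from treatment levels to recommendation wording concrete, the sketch below composes recommendation texts like those in Table 2 from the levels in Table 1; the function and its arguments are illustrative, and the wording simply mirrors the two tables.

# Illustrative sketch: compose a report recommendation text from the social nudge
# levels of Table 1 (wording mirrors Tables 1 and 2; names are not from the study).
POWER_ROLE = {"high": "director", "medium": "project leader", "low": "intern"}

def recommendation_text(cohesion, power, department, country):
    """cohesion: 'high' (directly connected) or 'low'; power: 'high', 'medium', or 'low'."""
    role = POWER_ROLE[power]
    if cohesion == "high":
        prefix = f"Your {role}"                      # e.g., "Your director ..."
    else:
        article = "An" if role[0] in "aeiou" else "A"
        prefix = f"{article} {role}"                 # e.g., "An intern ..."
    return f"{prefix} from the {department} department in {country} liked this report."

print(recommendation_text("high", "high", "Sales", "Germany"))
# -> Your director from the Sales department in Germany liked this report.
print(recommendation_text("low", "low", "Risk Mgmt.", "Canada"))
# -> An intern from the Risk Mgmt. department in Canada liked this report.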
4.2 Experiment Design
Our experimental treatments distinguish between (a) low and high levels of social cohesion, (b) low, medium,
and high levels of institutional isomorphism regarding business function, (c) low, medium, and high levels of
institutional isomorphism regarding location, and (d) low, medium, and high levels of power in organizational
hierarchies.
Since we suggest an interaction effect between social cohesion and power in organizational hierarchies (H5),
we need a 2*3 cross-factorial design between these treatments. Crossing this design with additional
treatments is not reasonable, because (1) we did not theorize any interaction effects with institutional
isomorphism and (2) the unnecessary crossing of experimental treatments would reduce the analysis’s
statistical power. We therefore crossed the two forms of institutional isomorphism using a 3*3 cross-factorial
design, but did not cross the two factorial designs with each other. Consequently, we get 2*3 + 3*3 = 15
experimental treatments, with one experimental treatment included in both the factorial designs. In this study,
there are therefore 14 experimental treatments. Appendix 9.1 provides the details of all the experimental
treatments.
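The treatment counting can be made explicit with a short sketch; the level labels follow Table 1, and which single treatment the two factorial designs share is left to Appendix 9.1.

# Sketch of the treatment count: a 2x3 design (social cohesion x hierarchical power)
# and a 3x3 design (isomorphism w.r.t. business function x location) are used side
# by side; one treatment is included in both designs, so 6 + 9 - 1 = 14 remain.
from itertools import product

cohesion = ["low", "high"]
power = ["low", "medium", "high"]
iso_function = ["low", "medium", "high"]
iso_location = ["low", "medium", "high"]

design_a = list(product(cohesion, power))             # 2 * 3 = 6 treatments
design_b = list(product(iso_function, iso_location))  # 3 * 3 = 9 treatments
n_shared = 1                                          # one treatment appears in both designs
print(len(design_a) + len(design_b) - n_shared)       # 14 distinct experimental treatments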
To collect data for each experimental treatment, we employed a counterbalanced within-subjects experiment
design for the following two reasons: first, each treatment represents one set of recommended items from
which a participant can choose one recommended item. This relatively fast experimental task does not take
longer than a minute. Subjects can therefore participate in multiple treatments, which makes a within-subjects
design feasible for our experiment. Second, the total of 14 experimental treatments is relatively high. A
between subjects design would require too many participants and is therefore not practically feasible.
We used common approaches to reduce bias from carryover effects. In particular, we randomized the order
of experimental treatments by using a Latin Square design. However, since experience with experimental
treatments does not help participants complete their tasks (i.e., answer the questions), the risk of carryover
effects is already low.
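One simple way to construct such an ordering is a cyclic Latin square, sketched below; the paper does not specify the exact construction used, so this is only one possible variant.

# Sketch of a cyclic Latin square: row i presents the treatments rotated by i
# positions, so every treatment appears exactly once in every serial position.
def cyclic_latin_square(treatments):
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

for order in cyclic_latin_square(["T1", "T2", "T3", "T4"]):
    print(order)
# ['T1', 'T2', 'T3', 'T4']
# ['T2', 'T3', 'T4', 'T1']
# ['T3', 'T4', 'T1', 'T2']
# ['T4', 'T1', 'T2', 'T3']
# Each participant (or group of participants) is then assigned one row as the
# presentation order of the experimental treatments.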
4.3 Sample, Scenario, and Task
The experiment was conducted with 187 students at a public university. The group consisted of 91 graduate
students specializing in Business Intelligence Systems and 96 undergraduate students specializing in
Development and Management of Information Systems. Detailed information about the sample is provided
in section 5.1 “Demographic Data” and in Appendix 9.2. Although the experiment was conducted in an
organizational setting, students are suitable subjects, because they are much less likely to be biased due to
personal experiences than experienced professionals. If the latter have had bad experiences when working
with people from certain countries, these experiences might influence whether they would, in the experiment,
choose a recommendation from someone working in that country.
In the experiment, the participants are provided with an organizational scenario. In this scenario, they take
the role of Thomas, who is an employee at Contoso; i.e., an employee at the company introduced above and
for which the BIS provides data. Thomas works for Contoso’s sales department in Germany.
In the scenario, Thomas needs to complete nine tasks. Each task consists of one question that needs to be
answered. All the questions focus on information in a specific report that the BIS has provided. To answer
the questions correctly, Thomas needs to identify the relevant report and find the required information. For
instance, one question could be: “To which income group does the customer with the customer key ‘19037’
belong?” Appendix 9.3.1 provides detailed information on all nine tasks, and Appendix 9.3.2 similar
information on Contoso’s organizational structure.
We used the following five techniques to train and prepare participants for the experiment: First, we made a
15-minute introductory video about the experiment available to participants one week before the experiment
took place. This video introduced them to the experiment, the scenario, and the usage of the BIS (incl. the
ERA). Second, before the start of the experiment, the experiment instructor personally introduced participants
to the scenario and the BIS, and demonstrated how to solve the first task. Third, we provided two training
tasks to familiarize the participants with the role of Thomas and the BIS. Hence, the first two of the nine tasks
are not considered in the data analysis. They are only used to familiarize participants with the role of Thomas
and the BIS. Only tasks 3-9 were relevant for data analysis. Fourth, each participant received a reference
paper illustrating Contoso’s organizational setting, which avoided the need to remember Thomas’s role at
Contoso and/or Contoso’s institutional or hierarchical structure. The reference paper is provided in Appendix
9.3.2. Fifth, during the experiment, the experiment instructor provided personal support with the use of the
BIS if participants required this.
To motivate them to perform well, the participants received course credit for each task they answered correctly, excluding the training tasks 1 and 2. They were informed of this, and also that the time they required to complete the tasks would not be considered.
The participants could answer the tasks by either (1) browsing through the list of reports and then checking
those they expected to provide relevant information, or by (2) using the ERA’s report recommendations. Table
3 shows the process for completing an experimental task. The participants were always given a set of three
report recommendations, presented in turn (carousel effect). That is, every few seconds the recommendation
changed to the next one, and after the third the first recommendation appeared again.
The recommendations were not dynamic, but predefined. All the report recommendations within the same
set of three alternative recommendations would always link to the same report. However, the participants did
not know this. Predefined recommendations were important, because the participants’ usage history would
have influenced the recommended reports if dynamic recommendations were used. In turn, this could have
affected the recommended reports' usefulness, which could have affected how the users continued using the
system.
Each experimental treatment had three recommendations. After a participant had chosen one
recommendation out of the set of three recommendations, all three recommendations were updated with the
next experimental treatment’s recommendations.
Furthermore, all sets of report recommendations were predefined because they represent our experimental
treatments and thus had to be controlled. At the latest, the third set of recommendations would forward
participants to a report that would provide them with the information they needed to answer one experimental
task. Consequently, we could gather data for up to three experimental treatments per experimental task.
Finally, after completing the nine tasks, participants were asked to answer a post-experiment survey. All 187
experiment participants also participated in the post-experiment survey.
Table 3. Exemplary experimental task.

Step 1. Entry page (each experimental task begins on the BIS entry page):
In total, each participant receives nine tasks (two introductory tasks and seven that will be analyzed). Each task consists of a question that has to be answered (e.g., "How many female customers have a Bachelor's degree?"). Throughout the task, this question is shown in the upper part of the screen. The participants enter their answer on the entry page. Once the answer has been entered, the next question is shown. In addition, the overview screen displays a list of 75 reports.

Step 2.1. First report with the first experimental treatment (report with ERA; the ERA presents its three recommendations in turn, i.e., a carousel effect):
By clicking on a certain report, the selected report is shown (Figure 2). In addition, the ERA with a report recommendation is shown on the left side of the screen (Figure 2). This report recommendation changes every few seconds (carousel effect); after the third recommendation, the first is shown again. Note that the report recommendations only indicate a previous user and do not provide a relevant description, name, or ID.
Each set of three recommendations represents one experimental treatment. In experimental treatment 1, recommendations 1 and 3 always have the same degree of social influence (low); only recommendation 2 differs in its degree of social influence. Our analysis thus focuses on the probability that recommendation 2 will be chosen, and we code our dependent variable as a binary variable: "click recommendation 2" versus "click recommendation 1 or 3".
Note: The use of the ERA is voluntary. Participants are not forced to use the report recommendations; the answers to the questions can also be found by examining the 75 reports one by one. The sample size therefore differs slightly between the experimental treatments.

Step 2.2. Further reports with further experimental treatments (the second report provides the second treatment; the third report provides the third treatment):
After the participants have clicked on one of the three report recommendations, the next report ("report 2") is shown. It is important to note that all three report recommendations from the same experimental treatment always link to the same report. Therefore, it does not matter which report recommendation is chosen when answering the question. Note that the participants do not know this.
The new report ("report 2") shows a new set of three report recommendations. This set of recommendations represents a new experimental treatment ("treatment 2"). One experimental task can therefore examine multiple experimental treatments.

Step 3. Final report that provides the answer to the experimental task (the recommendations shown in the final report link back to the final report):
The "final report," which is shown after completion of all the experimental treatments of a specific experimental task, provides the answer to the question.
We varied the number of experimental treatments per experimental task: the "final report" is shown after one, two, or three experimental treatments. This variation prevents the participants from recognizing that the "final report" is always shown once the same number of recommendations has been clicked.
The "final report" also shows a set of three report recommendations. However, these do not represent an experimental treatment. When clicking on one of them, the "final report" is reloaded.
4.4 Measurement of Recommendation Choice
The recommendation choice is the dependent variable in our study. We suggest that the probability of new
users choosing a certain recommendation changes depending on the information about the previous users
of recommended reports. Throughout the experiment, participants can select one of three recommendations
or examine a list of all the reports.
Since each set of three recommendations represents one experimental treatment, our analysis will compare
the probabilities of choosing a certain manipulated recommendation between different sets of (three)
recommendations. Consequently, it is important that the manipulated recommendation is always shown in
the same position throughout all the sets of recommendations to prevent the position from biasing our results.
For instance, some participants could simply click on recommendation 1, because it is the first they
encounter.
To avoid such bias, the manipulated recommendation is always shown as the second of three recommendations. By keeping the position in which the manipulated recommendation is shown constant across the experimental treatments (i.e., in all treatments, the manipulated recommendation is shown in position 2), we can assume that its position does not affect the treatments differently. In turn, this ensures that the experimental treatments are comparable.
In contrast, a randomized approach would limit such a comparison, because the participants do not have to use the ERA. Randomizing the position of the manipulated recommendation would therefore probably lead to some variation in the ratio of ERA users who received the manipulated recommendation at position 1, 2, or 3. For a certain treatment, randomization could, for example, lead to 36% of ERA users receiving the manipulated recommendation as recommendation 1, while, for another treatment, it could lead to 30% of ERA users receiving it as recommendation 1. Such a difference would bias our results. Thus, instead of randomizing its position, we always displayed the manipulated recommendation as the second of three recommendations throughout the experimental treatments.
For the manipulated recommendation (i.e., recommendation 2), the degree of social influence changes between the experimental treatments. Conversely, recommendation 1 and recommendation 3 always refer to a previous user with the same degree of social influence: a project manager who is not directly connected to Thomas and who works in a different department and on a different continent than Thomas (see Appendix 9.1 for a detailed overview of all the experimental treatments).
To collect data on the chosen recommendations, we logged the participants’ clicks on the recommendations.
We determined the probability of the participants choosing a certain recommendation by computing the frequency at which recommendation 2 was selected within each experimental treatment, divided by the frequency at which any recommendation was selected within that experimental treatment (i.e., the sum of the clicks on recommendations 1, 2, and 3). This ratio represents the probability that participants would select recommendation 2 within a certain experimental treatment. In turn, this probability allows us to compare the effects of social influence, because only recommendation 2 differed in its degree of social influence across the experimental treatments. Table 3 shows an example of the procedure in an experimental task.
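As an illustration of this computation, the following R sketch derives the ratio from a click log; the data frame and its columns (treatment, recommendation) are hypothetical stand-ins for the logged clicks described above.

# Hypothetical click log: one row per click on a recommendation,
# with the experimental treatment and the clicked position (1, 2, or 3).
clicks <- data.frame(
  treatment      = c("T1", "T1", "T1", "T2", "T2"),
  recommendation = c(2, 1, 2, 3, 2)
)

# Probability of choosing recommendation 2 per treatment:
# clicks on recommendation 2 divided by all recommendation clicks in that treatment.
prob_rec2 <- tapply(clicks$recommendation == 2, clicks$treatment, mean)
prob_rec2  # for this toy log: T1 = 0.667, T2 = 0.5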
4.5 Measurement of Recommendation Elaboration
We measure the extent to which users elaborate on recommendations. Since all recommendations are
messages displayed on the screen, we need to measure the extent to which users process these messages
cognitively.
While a vast literature has used questionnaires and surveys to measure elaboration, or focused on the
antecedents of elaboration (e.g., Angst and Agarwal, 2009; Bhattacherjee and Sanford, 2006; Ho and Bodoff,
2014), only Meservy et al. (2014) demonstrate the benefits of eye tracking as a means to measure
elaboration. Contemporary eye-tracking devices provide multiple biometric metrics that can be aggregated
to compute users’ fixation (i.e., gaze) on certain screen elements (Just and Carpenter, 1976; Loftus, 1972).
In particular, state-of-the-art eye-tracking devices can measure users' pupil size and, simultaneously, the coordinate points on the screen at which users look. This allows computing the amount of time a user has gazed at a message on the screen and is an important step toward developing a reliable elaboration metric.
In addition to gaze, users’ degree of cognitive processing can be computed by means of the collected pupil
size metrics (Goldberg and Kotval, 1999; Weigle and Banks, 2014). Analysis of pupil sizes allows us to
determine whether users either focus and process the displayed texts, or whether they only move their heads
and merely skim the texts. Therefore, the product of coordination point metrics and pupil size metrics allows
us to compute a reliable fixation metric.
To compute fixation, we use the fixation filter provided by the application ProStudio, because it is the recommended filter for our eye-tracking device, the Tobii Pro X2 (Tobii, 2015). This filter combines multiple fixation algorithms (Komogortsev et al., 2010; Rayner et al., 2007; Over et al., 2007). Figure 2 above shows
an example screen with the projection of fixation points (red dots). The size of the points represents the
fixation length in milliseconds.
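As a minimal sketch (independent of the ProStudio filter itself), per-recommendation elaboration time could be aggregated from exported fixation data as follows; the column names (recommendation_id, on_recommendation, fixation_ms) are assumptions about such an export, not the tool's actual output format.

# Hypothetical fixation export: one row per detected fixation, with its
# duration and a flag indicating whether it fell on the recommendation area.
fixations <- data.frame(
  recommendation_id = c("R1", "R1", "R1", "R2"),
  on_recommendation = c(TRUE, FALSE, TRUE, TRUE),
  fixation_ms       = c(420, 180, 650, 910)
)

# Elaboration per recommendation = total fixation time (in seconds) spent
# on the recommendation area of the screen.
rec_fix <- fixations[fixations$on_recommendation, ]
elaboration_sec <- tapply(rec_fix$fixation_ms, rec_fix$recommendation_id, sum) / 1000
elaboration_sec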
4.6 Pretest
A pretest with 26 participants was conducted two months prior to the main experiment to ensure that the
manipulation of social cohesion, institutional isomorphism with regard to business function, institutional
isomorphism with regard to location, and hierarchical power was successful and that the scenario and the
experimental tasks were easy to understand.
5 Data Analysis and Results
This section first reports the experiment participants' demographic data. This is followed by a description of the manipulation checks and the presentation of our hypothesis tests' results.
5.1 Demographic Data
Appendix 9.1 summarizes the participants’ characteristics. More men (73%) participated than women (27%).
The majority were between 21 and 29 years old. As cultural indicators, we asked them about their nationality and the first language they learned as a child. Overall, 57% of the participants were German, and for 48%, German was the first language they learned as a child. This is not surprising, since the study was conducted at a university in Germany. Participants from Arabic countries, China, Egypt, India, Russia, Spain, Turkey, Greece, Vietnam, and the USA formed minorities. Thus, apart from the predominance of German participants, the demographic profile is fairly well distributed across countries and cultures.
5.2 Manipulation Checks
We conducted manipulation checks in respect of the experimental treatments. These are summarized in
Table 4. We administered a post-experiment survey, asking all the participants whether they had noticed that
the four experimental treatments (social cohesion, institutional isomorphism regarding business function,
institutional isomorphism regarding location, hierarchical power) differed between specific recommendations.
We also added two general items to control whether they had noticed that the recommendations always
changed. Importantly, the items were phrased such that a consistent answer required the participant to
answer one item with “yes” and the other item with “no” (see Table 4).
In total, 167 of the 187 participants (89.3%) stated that they had noticed all the differences and answered the two additional items consistently (i.e., the first item with "yes" and the second item with "no"). This large majority allows us to consider the manipulation of the experimental treatments successful.
5.3 Measurement Model
Besides the manipulation checks, no variables in this study were measured by means of surveys. All the
independent variables represent experimental treatments. Recommendation elaboration was measured by
using gaze data (eye tracking) and the dependent variable was measured using log data of the BIS and the
ERA.
Table 4. Manipulation checks for experimental treatments.

Experimental treatment | Post-experiment survey item | Ratio of positive ("yes") answers
Social cohesion | I noticed that some recommendations were based on a direct colleague (e.g., my own intern or my supervising project manager), while other recommendations were based on less close colleagues. {yes, no} | 97.8%
Institutional isomorphism regarding business function | I noticed that the colleagues, who were shown in the report recommendations, worked in different departments. {yes, no} | 100.0%
Institutional isomorphism regarding location | I noticed that the countries of the colleagues shown in the report recommendations were changing. {yes, no} | 98.3%
Hierarchical power | I noticed that the colleagues, who were shown in the report recommendations, were assigned to different hierarchical levels. {yes, no} | 98.9%
Overall | I noticed that recommendations were changing. {yes, no} | 98.9%
Overall | All recommendations were based on the same user. {yes, no} | 5.3%
5.4 Results of Hypothesis Tests
As described above, our experiment participants were asked to complete several tasks by answering
questions. Each task consisted of exactly one question. We assumed that the participants would answer
these questions correctly, because the recommended reports provided the information required to answer
correctly.
Overall, 93% of all questions were answered correctly; for individual questions, this ratio ranged from 85% to 99%. The high ratio of correct answers indicates that the participants were motivated during the experiment and, as expected, could complete the tasks correctly. However, to avoid potential bias from unmotivated participants and/or guesses, our data analysis only considered tasks completed correctly, i.e., tasks whose questions were answered correctly. We provide detailed information about all the tasks considered for the data analysis, as well as a list of all the experimental treatments, in Appendix 9.1.
Appendix 9.1 provides (a) the frequency with which a participant chose any recommendation out of a set of
three and (b) the frequency with which a participant chose any recommendation out of a set of three and
answered the question correctly.
Since the manipulated recommendation (and thus the recommendation of interest) is always
recommendation 2 out of a set of three, the list also provides (c) the frequency with which a participant chose
recommendation 2 and answered the question correctly. Finally, this allows us to compute (d) the probability
of the participants choosing recommendation 2 if they chose one of the three recommendations and
answered the question correctly.
Overall, our results indicate that all the social nudges steered the users toward choosing a certain report
recommendation. Changing the social influence of the recommended reports' previous users increased the probability that BIS users would choose a certain recommendation. Figure 3 shows the mean
values of the probability of choosing a certain recommendation.
Figure 3. Mean values of probability of choosing a certain recommendation.
We conducted a logistic linear mixed effects analysis, using the glmer() function of the R package lme4 (version 1.1-12) (Bates et al., 2015). A logistic linear model is suitable, because we
suggested linear effects and have one binary dependent variable (values: choose the manipulated
recommendation; do not choose the manipulated recommendation). Similarly, a mixed effects analysis is
suitable, because we administered a within-subject experiment (i.e., each participant was presented with
multiple treatments).
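For illustration, a minimal sketch of such a model call is shown below. The data frame and variable names (choices, choose_rec2, cohesion, power, elaboration, subject) are hypothetical, and the data are simulated placeholders; only the structure of the call (binomial logistic mixed model with a random intercept per participant, fitted with glmer()) mirrors the analysis reported here.

library(lme4)  # the study reports lme4 version 1.1-12 (Bates et al., 2015)

# Simulated placeholder data (NOT the study data): one row per evaluated set
# of recommendations, with the binary choice of the manipulated recommendation,
# the treatment levels, and the measured elaboration time.
set.seed(1)
n <- 999
choices <- data.frame(
  subject     = factor(rep(1:175, length.out = n)),
  cohesion    = factor(sample(c("low", "high"), n, TRUE), levels = c("low", "high")),
  power       = factor(sample(c("low", "medium", "high"), n, TRUE),
                       levels = c("low", "medium", "high")),
  elaboration = sample(0:4, n, TRUE),
  choose_rec2 = rbinom(n, 1, 0.33)
)

# Main-effects model: logistic (binomial) mixed model with a random intercept per subject.
main_model <- glmer(choose_rec2 ~ cohesion + power + elaboration + (1 | subject),
                    data = choices, family = binomial)

# Moderator model: adds the interaction terms with elaboration.
moderator_model <- glmer(choose_rec2 ~ cohesion * power * elaboration + (1 | subject),
                         data = choices, family = binomial)

summary(moderator_model)
exp(fixef(moderator_model))  # odds ratios derived from the log-odds estimates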
Table 5 and Table 6 present the results. As explained in section 4.2, we used a 2*3 experiment design to test the effects of social cohesion, hierarchical power, and elaboration, and an additional 3*3 experiment design to test the effects of institutional isomorphism (with regard to business functions), institutional isomorphism (with regard to location), and elaboration. Table 5 presents the results for social cohesion, hierarchical power, and elaboration (i.e., H1, H4, H5, H6a, H6d). Table 6 presents the results for institutional isomorphism (regarding business functions), institutional isomorphism (regarding location), and elaboration (i.e., H2, H3, H6b, H6c).
To compare competing models (i.e., main models versus moderator models), we focus on information criteria, because we have multiple variables and information criteria penalize the inclusion of variables that do not significantly improve fit (Williams, 2017). Particularly with large samples such as ours, information criteria can lead to more parsimonious but adequate models. Consistent with recent statistics literature (Müller et al., 2013), we report Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC). AIC and BIC estimate the information loss of a fitted model relative to the true data-generating process; thus, the smaller their values, the better the fit of the model (note that BIC penalizes model complexity more heavily; Müller et al., 2013). Our results show that the moderator models fit better than the respective main effect models, because the AIC and BIC of the moderator models are smaller than those of the main models.
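Using the two hypothetical model objects from the sketch above, this comparison can be made directly in R; the model with the smaller AIC and BIC is preferred.

AIC(main_model, moderator_model)
BIC(main_model, moderator_model)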
In addition, to provide a pseudo-R² statistic, we report the R² for generalized linear mixed models (the so-called R²GLMM) after Nakagawa and Schielzeth (2013), as implemented in the R package MuMIn version 1.4 (Barton, 2017). Note that the R²GLMM aims to allow comparison of generalized linear mixed models and to represent an absolute value for the goodness-of-fit of a model (which the AIC and BIC do not provide). However, the R²GLMM needs to be assessed with caution if compared to R² statistics from linear models or general linear models (Nakagawa and Schielzeth, 2013). In line with the AIC and BIC, the R²GLMM values in our study confirm that the moderator models significantly improve the fit of the model. Regarding the effects of social cohesion and hierarchical power, adding elaboration as moderator increases the R²GLMM from 17.2% to 42.3%. Similarly, regarding the effect of institutional isomorphism, adding elaboration as moderator increases the R²GLMM from 9.5% to 24.3%. (Note: We do not distinguish between the R²GLMM for the fixed effects and the R²GLMM for the entire model, because differences only occurred after the fourth relevant digit.)
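For the hypothetical model objects sketched above, the R²GLMM can be obtained with the MuMIn package; r.squaredGLMM() returns the marginal R² (fixed effects only) and the conditional R² (fixed plus random effects) after Nakagawa and Schielzeth (2013).

library(MuMIn)
r.squaredGLMM(main_model)
r.squaredGLMM(moderator_model)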
Interpretation of the estimates (= log odds) and odds ratios in Table 5 and Table 6: The estimates represent the change in the log odds of choosing the manipulated recommendation relative to the baseline model (i.e., low social cohesion and low hierarchical power). Example: In the logistic linear mixed effects model, high social cohesion (rather than low social cohesion) increases the log odds by 1.280. We also report the odds ratio to facilitate interpretation. Example: A log odds change of 1.280 corresponds to an odds ratio of e^1.280 = 3.596. In other words, the odds of choosing the manipulated recommendation are 3.596 times higher under high social cohesion than in the baseline model with low social cohesion.
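In compact notation (with p denoting the modeled probability of choosing the manipulated recommendation and the betas denoting the reported log-odds estimates), this conversion can be summarized as:

\[
\log\frac{p}{1-p} = \beta_0 + \beta_{\text{cohesion}}\,x_{\text{cohesion}} + \ldots,
\qquad \mathrm{OR}_{\text{cohesion}} = e^{\beta_{\text{cohesion}}} = e^{1.280} \approx 3.596
\]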
Table 5. Logistic linear mixed effects model of the effects of social cohesion, hierarchical power, and recommendation elaboration on the recommendation choice.

Main model (AIC=1248.6, BIC=1278.0, R²GLMM=17.2%):

Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|) | Sig. | Hyp.
Intercept | -1.042 | 0.353 | 0.157 | -6.623 | 3.51E-11 | *** |
Social cohesion [high] | 1.280 | 3.596 | 0.141 | 9.085 | 2.00E-16 | *** | H1
Hier. power [medium] | 0.070 | 1.072 | 0.169 | 0.412 | 0.681 | | H4
Hier. power [high] | 1.039 | 2.826 | 0.187 | 5.543 | 2.97E-08 | *** | H4
Elaboration | 0.091 | 1.095 | 0.056 | 1.626 | 0.104 | |

Random effects | Variance | Std. dev. | Observations | Groups
Subject (Intercept) | 0 | 0 | 999 | 175

Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -2.083 | -0.685 | -0.594 | 0.858 | 1.684

Moderator model (AIC=1125.2, BIC=1189.0, R²GLMM=42.3%):

Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|) | Sig. | Hyp.
Intercept | -0.739 | 0.478 | 0.259 | -2.857 | 0.004 | ** |
Social cohesion [high] | 1.138 | 3.120 | 0.352 | 3.230 | 0.001 | ** |
Hier. power [medium] | 0.389 | 1.476 | 0.310 | 1.255 | 0.209 | |
Hier. power [high] | 0.727 | 2.068 | 0.363 | 2.000 | 0.046 | * |
Elaboration | -0.311 | 0.733 | 0.120 | -1.554 | 0.120 | |
Social cohesion [high] * Hierarchy [med.] | -1.386 | 0.250 | 0.489 | -2.832 | 0.005 | ** | H5
Social cohesion [high] * Hierarchy [high] | -2.546 | 0.078 | 0.643 | -3.958 | 7.55E-05 | *** | H5
Social cohesion [high] * Elaboration | 0.293 | 1.341 | 0.288 | 1.019 | 0.308 | | H6a
Hierarchy [med.] * Elaboration | -0.485 | 0.616 | 0.260 | -1.869 | 0.062 | . | H6d
Hierarchy [high] * Elaboration | 0.654 | 1.924 | 0.247 | 2.647 | 0.008 | ** | H6d
Social cohesion [high] * Hierarchy [med.] * Elaboration | 1.743 | 5.714 | 0.422 | 4.128 | 3.65E-05 | *** |
Social cohesion [high] * Hierarchy [high] * Elaboration | 1.106 | 3.021 | 0.475 | 2.328 | 0.020 | * |

Random effects | Variance | Std. dev. | Observations | Groups
Subject (Intercept) | 0 | 0 | 999 | 175

Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -3.090 | -0.716 | -0.201 | 0.819 | 6.084

Significance levels: ***p<0.001, **p<0.01, *p<0.05.
Table 6. Logistic linear mixed effects model of the effects of institutional isomorphism and recommendation elaboration on the recommendation choice.

Main model (AIC=1990.6, BIC=2027.9, R²GLMM=9.5%):

Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|) | Sig. | Hyp.
Intercept | -0.973 | 0.3780 | 0.113 | -8.640 | 2.00E-16 | *** |
Inst. isomorphism regarding business function [medium] | 0.828 | 2.289 | 0.131 | 6.319 | 2.63E-10 | *** | H2
Inst. isomorphism regarding business function [high] | 1.004 | 2.728 | 0.130 | 7.711 | 1.25E-14 | *** | H2
Inst. isomorphism regarding location [medium] | 0.068 | 1.071 | 0.134 | 0.509 | 0.610451 | | H3
Inst. isomorphism regarding location [high] | 0.468 | 1.596 | 0.141 | 3.307 | 0.000943 | *** | H3
Elaboration | 0.119 | 1.126 | 0.041 | 2.871 | 0.004092 | ** |

Random effects | Variance | Std. dev. | Observations | Groups
Subject (Intercept) | 0 | 0 | 1513 | 179

Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -1.618 | -0.930 | -0.615 | 0.910 | 1.627

Moderator model (AIC=1888.6, BIC=1989.8, R²GLMM=24.3%):

Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|) | Sig. | Hyp.
Intercept | -0.349 | 0.705 | 0.172 | -2.038 | 0.042 | * |
Inst. isomorphism regarding business function [medium] | -0.031 | 0.969 | 0.290 | -0.107 | 0.915 | |
Inst. isomorphism regarding business function [high] | -0.153 | 0.858 | 0.284 | -0.539 | 0.590 | |
Inst. isomorphism regarding location [medium] | -0.300 | 0.741 | 0.298 | -1.006 | 0.315 | |
Inst. isomorphism regarding location [high] | 1.206 | 2.790 | 0.384 | 2.672 | 0.008 | ** |
Elaboration | -0.796 | 0.451 | 0.165 | -4.807 | 1.53E-06 | *** |
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [med.] | 0.648 | 1.913 | 0.481 | 1.348 | 0.178 | |
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [med.] | 0.241 | 1.273 | 0.474 | 0.509 | 0.611 | |
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [high] | -1.801 | 0.165 | 0.697 | -2.582 | 0.010 | ** |
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [high] | -1.698 | 0.183 | 0.656 | -2.589 | 0.010 | ** |
Inst. isomorph. (bf) [med.] * Elaboration | 1.490 | 4.438 | 0.261 | 5.719 | 1.07E-08 | *** | H6b
Inst. isomorph. (bf) [high] * Elaboration | 1.609 | 4.999 | 0.256 | 6.292 | 3.13E-10 | *** | H6b
Inst. isomorph. (loc) [med.] * Elaboration | 0.846 | 2.329 | 0.202 | 4.179 | 2.92E-05 | *** | H6c
Inst. isomorph. (loc) [high] * Elaboration | 0.444 | 1.559 | 0.219 | 2.025 | 0.043 | * | H6c
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [med.] * Elaboration | -1.475 | 0.229 | 0.307 | -4.809 | 1.52E-06 | *** |
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [med.] * Elaboration | -1.222 | 0.295 | 0.308 | -3.963 | 7.40E-05 | *** |
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [high] * Elaboration | -0.530 | 0.589 | 0.356 | -1.488 | 0.137 | |
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [high] * Elaboration | -0.444 | 0.629 | 0.350 | -1.327 | 0.185 | |

Random effects | Variance | Std. dev. | Observations | Groups
Subject (Intercept) | 0 | 0 | 1513 | 179

Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -2.788 | -0.840 | -0.193 | 0.912 | 6.084

Significance levels: ***p<0.001, **p<0.01, *p<0.05.
All the computed models' statistical results indicate that the residuals vary around a slightly negative median (between -0.2 and -0.6), that the first quartile varies between -0.9 and -0.7, and the third quartile between 0.7 and 0.9. However, the random effect (i.e., the variance explained by a certain participant or "subject") is approximately zero in all the computed models. This estimate by glmer() indicates that the residual term alone explains the extent of the subject variation (Barr et al., 2013). The variation between the participants is too small to warrant an additional random effect estimate.
While Figure 3 above shows the direct effects (H1-4), Figures 4a-d below show the moderation effects of
recommendation elaboration (H6a-d), and Figure 5 visualizes the interaction effect between social cohesion
and hierarchical power (H5). The following subsections explain the statistical and graphical results of all the
hypotheses individually.
Figures 4a-d. Moderating effects of recommendation elaboration.
Figure 5. Interaction effect between social cohesion and hierarchical power.
5.4.1 The Effects of Social Nudges based on Social Cohesion and Hierarchical Power
As shown in Figure 3, each of the four social nudges increased the probability of the participants choosing a
certain recommendation. The probability of a participant choosing a certain recommendation is significantly
higher if the recommendation is based on the usage data of a direct colleague (i.e., high social cohesion),
who is referred to, for example, as “your project manager” instead of “a project manager.” Since this increase
is statistically significant at p<0.001, it provides evidence for H1. Specifically, the mixed effects model in Table 5 (main effects model) indicates that the odds of choosing a recommendation with high social cohesion are 3.6 times the odds of choosing one with low social cohesion.
In addition, we suggested that the probability of a participant choosing a certain recommendation is also significantly higher if the recommended report's previous user holds a hierarchically powerful position, such as a director or a project manager (H4). However, only a high hierarchical power level ("director") increases the probability significantly (at p<0.001). Compared to a low hierarchical power level ("intern"), the medium hierarchical power level ("project manager") only increases the odds by a factor of 1.1, whereas a high hierarchical power level ("director") increases them by a factor of 2.8. This indicates that a social nudge based on hierarchical power is generally suitable for influencing users' actions, but its effect size varies strongly.
We found that employees in a powerful position, such as directors, can influence other employees even if they are not directly connected to them (H5). However, the opposite does not seem to hold. Employees in less powerful hierarchical positions, such as interns, can usually only influence directly connected employees. In contrast to employees in highly powerful positions, such as directors, who can influence all employees, interns are unlikely to influence employees with whom they have no ties. Consequently, nudges based on less powerful users benefit far more from social cohesion.
H5 defined this interaction effect between hierarchical power and social cohesion. Our results support H5.
Figure 5 indicates that recommendations already based on users with high hierarchical power will benefit
very little from social cohesion (i.e., slight slope along the social cohesion dimension in Figure 5). However,
recommendations based on users with only medium, or even low, hierarchical power will benefit strongly if
social cohesion between these users is high (i.e., steep slope along the social cohesion dimension in Figure
5). The mixed effects model also shows statistical significance at p<0.001 in respect of this interaction effect
(Table 5, moderating effects model).
5.4.2 The Effects of Social Nudges based on Institutional Isomorphism
Furthermore, we designed two nudges based on institutional isomorphism. The results show that institutional isomorphism with regard to a user's business function (H2) increases the probability of that user choosing a specific recommendation. Compared to low similarity between two business functions (e.g., a sales department and an IT department), medium similarity (e.g., a sales department and a marketing department) increases the odds by a factor of 2.3, while high similarity (e.g., two sales departments) increases the odds by a factor of 2.7 (Table 6, main effects model). Both increases are statistically significant at p<0.001, which supports H2.
However, as can be seen in Figure 3, the effect size of the social nudge based on institutional isomorphism regarding a user's location (H3) is smaller. Compared to users' locations on different continents (e.g., Germany and Canada), locations on the same continent (e.g., Germany and France) only increase the odds by a factor of 1.1, while locations within the same country (e.g., two locations in Germany) only increase the odds by a factor of 1.6 (Table 6, main effects model). While the latter increase is still statistically significant at p<0.001, which supports H3, it is interesting to note that, in our study, institutional isomorphism with regard to users' business functions has a much larger impact than institutional isomorphism with regard to users' locations.
5.4.3 The Moderating Effect of Recommendation Elaboration
We suggested that the effect of any nudge depends on whether the participants elaborate and choose a
recommendation, or whether they merely click on one without processing it cognitively. As described above,
we used an eye tracker to measure recommendation elaboration. We identified five relevant elaboration
intervals to analyze recommendation elaboration: 0 seconds (i.e., no elaboration), 0.1 to 1 second, 1.1 to 2
seconds, 2.1 to 3 seconds, and 3.1 seconds or more. The plots of H6a-d in Figures 4a-d visualize the effects
of these elaboration intervals on all the experimental treatments. Furthermore, Appendix 9.4 provides detailed
information about all the combinations of elaboration intervals and experimental treatments.
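A minimal R sketch of this binning is shown below; the vector elab_seconds and its values are hypothetical placeholders for the per-recommendation elaboration times measured with the eye tracker.

# Bin elaboration times (in seconds) into the five intervals used in the analysis.
elab_seconds  <- c(0, 0.4, 1.7, 2.5, 4.2)
elab_interval <- cut(elab_seconds,
                     breaks = c(-Inf, 0, 1, 2, 3, Inf),
                     labels = c("0 s", "0.1-1 s", "1.1-2 s", "2.1-3 s", "> 3 s"))
table(elab_interval)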
The interaction plots support the directions of H6a-d. The probability of the participants choosing the
manipulated recommendation increases with high levels of social influence (e.g., high social cohesion).
However, the probability does not decrease with low levels of social influence (e.g., low social cohesion).
Specifically, it can be noted that the probabilities of no elaboration (0 seconds) and low elaboration (0.1 to 1
second) are similar. As soon as elaboration increases, our nudges’ effects become visible. This supports our
assumption that people need to cognitively process a recommendation aimed at nudging them. Regarding
the statistical significance of these results, our mixed effects models (Table 5 and Table 6, moderating effects
model) show that H6b-d are significant. Only H6a is not significant.
5.4.4 Control Model
The hypotheses analyzed above represent the main research model. We analyzed potentially biasing factors
effects. Our analysis controlled for the participants’ age, gender, nationality, and their culture (with regard to
their first language). However, none of these control variables had any significant influence on our main
research model (Appendix 9.5, Table 12 and Table 13). We therefore conclude that the experiment
participants age, gender, nationality, and/or culture do not bias our results.
6 Discussion
Our findings provide important insights for improving the reuse of reports, as well as for designing social
nudges in the context of BIS. The findings shed light on how individuals’ recommendation elaboration
determines the influence of RAs and, thus, the influence of recommendations that provide additional
information to steer users toward specific choices.
6.1 Implications for Theory
6.1.1 A novel approach for managing core BIS and supplementary WS
Our study describes a new approach for tackling the challenges of workaround systems (WS), such as the
limited reuse of information, poor decision making based on inconsistent data, and loss of synergies across
employees. The idea behind our approach is that large BIS should proactively reduce individuals’ need to
develop and use WS. We therefore proposed and examined an enterprise recommendation agent (ERA) that
extends existing BIS. This ERA reduces individuals’ need to develop and use WS that store individuals’
reports, because it facilitates the retrieval of relevant existing reports. Specifically, the ERA uses social
nudges to motivate individuals to use report recommendations and, thus, increase report reuse.
While previous literature focused on BIS governance to tackle the challenges of WS, the ERA focuses on
facilitating information reuse. The ERA supports BIS users’ search for potentially relevant reports. According
to Simon’s (1957) decision-making process, individuals first need to identify relevant choices and thereafter
select the most suited option. Meservy et al. (2014) show that this process also holds in the IS context. Thus, by suggesting candidate reports to BIS users, the ERA supports their report reuse decisions. The ERA helps users identify potentially relevant reports and thus supports the reuse of reports that their colleagues originally developed. Ultimately, this should reduce individuals' need to develop WS, because they will more often find existing reports that provide them with the required information. The ERA thus helps increase the reuse of BIS reports and reduce the use of WS.
The existing literature suggests approaches focusing on IS governance to manage and balance the use of
large enterprise IS and WS (Alter, 2014). The ERA proposed in this study complements these approaches,
because the downsides of these IS governance-based approaches do not affect it. Most IS governance-
based approaches either emphasize the need to prevent WS, or the need to acknowledge the value of WS
and help individuals build WS. However, both approaches have their limitations. Attempts to control the use of large enterprise IS and to prevent WS have been shown to cause shadow systems (Alter, 2013, 2014; Behrens, 2009; Sun, 2012). Similarly, the value of empowering individuals is also limited. If organizations foster the development of WS, the complexity of the overall IS environment inevitably increases, which in turn needs to be managed, usually by means of additional IS governance units. However, establishing and running these organizational units is very costly, and only the specific context can determine whether introducing additional IS governance units will increase the IS's flexibility (Brown and Magill, 1994, 1998; Gebauer and Schober,
2006; Tiwana and Kim, 2015). In contrast to these IS governance-based approaches, our ERA neither limits
individuals’ flexibility regarding using existing information, nor requires additional governance units to manage
increasingly complex IS environments. The ERA is therefore a valuable complement to existing approaches
for managing core BIS and supplementary WS.
6.1.2 Design and evaluation of social nudges
Individuals’ decision making is not entirely rational. Numerous cognitive biases may influence their attempts
to make a rational decision. Extant literature has shown that these biases also exist in the IS context (e.g., Adomavicius et al., 2013; Allen and Parsons, 2010; Goes, 2013). However, empirical IS studies have not
examined how additional information about previous users can be used to exert a social influence on new
users and, thus, steer them toward making certain desirable choices. We addressed this gap by designing
and evaluating four social nudges.
Social psychologists and behavioral economists use the notion of a nudge to refer to a change in the way
different options are presented (without affecting the options themselves) in order to promote desirable
choices (Thaler and Sunstein, 2003). A frequently mentioned example of a nudge is a picture of a smoker’s
lungs on a pack of cigarettes (Sunstein, 2014). Although the picture does not force smokers to stop smoking,
it influences their choice. In other words, the picture of the smoker’s lungs nudges (potential) smokers and
influences their decisions in desirable ways (i.e., to quit smoking). Adapting the nudge concept to the IS context, we introduced the notion of social nudges. In line with the nudge literature (Halpern, 2015; Sunstein,
2014), a social nudge is a nudge that uses the effects of social influence to promote desirable choices.
Compared to pictures showing the adverse health effects of certain activities, social influence is far more
relevant in the enterprise IS context and, thus, in the BIS context.
We presented four social nudges based on three different forms of social influence in organizations: social
influence based on proximity between individuals (i.e., social cohesion), social influence based on similar
positions in organizational settings (i.e., institutional isomorphism) regarding business functions and
locations, and social influence based on the power in organizational hierarchies. Our results indicate that
providing additional information about previous users may have a social influence on new users and, thus,
increases the probability of these new users choosing a specific recommendation.
By demonstrating the concrete application of social nudges, this study motivates further IS research that aims
to explore ways to utilize cognitive biases in order to promote desired user behaviors. While the vast body of work by behavioral economists theorizes deeply on nudges' potential benefits (e.g., Hanna, 2015), IS research can contribute by refining and designing concrete nudges, as well as by demonstrating and testing their effects.
6.1.3 Effect of recommendation elaboration
As a third theoretical implication, this study reveals how an individual’s recommendation elaboration shapes
the effect that a recommendation agent (RA) has on a user. We operationalized users' recommendation elaboration as their fixation on the recommendation agent and used eye-tracking devices to measure this fixation. The results demonstrate that, without recommendation elaboration, users' recommendation choice is random and not influenced by additionally provided information about a recommendation. Consequently, without elaboration, social nudges that provide additional information about previous users do not affect the recommendation choice.
The results of examining recommendation elaboration indicate that just 1 to 2 seconds of recommendation
elaboration allowed the designed social nudges to influence individuals’ recommendation choices. This
finding indicates that even very little elaboration is influential, which addresses calls for research on
recommendation timing (e.g., Ho et al., 2011). Finally, the effect of recommendation elaboration could also
be very interesting for other IS research and marketing domains that attempt to optimize the frequency with
which display advertisements are updated (e.g., Balseiro et al., 2014).
6.2 Implications for Practice
Besides implications for theory, our findings provide interesting insights for managers, IS designers, IS
developers, and RA designers. First, we present a novel approach for managing the trade-off between core
BIS and supplementary WS. That is, we propose an ERA to increase the reuse of BIS reports. Higher report
reuse generates synergies across employees. It reduces data inconsistencies, which in turn improves
managers’ decision making. In contrast to other approaches, the proposed ERA targets report reuse without
restricting user authorizations and without requiring expensive IS governance units. In addition, this study
designed and tested four social nudges, using information about reports’ previous users. This information
included a proximity indicator (“a”, “your”), previous users’ department and country, as well as their role in
their organizations’ hierarchies. These information chunks are concrete examples of social nudges in an
enterprise IS context. They are therefore useful guidelines for IS designers and developers.
Finally, our findings provide two interesting insights for RA designers. First, the proposed ERA indicates a
new context for which RAs could be designed and developed. In addition, our findings indicate that RA
designers need to take care when building RAs that update their recommendations after certain time
intervals. Our experiment showed that most users required 1-3 seconds to process a small amount of
additional information about the recommendations. However, since this finding may be highly context and
RA-dependent, we recommend that RAs should be tested within their specific context in order to determine
suitable recommendation update frequencies.
6.3 Limitations and Suggestions for Future Research
Despite its contributions to theory and practice, our study has limitations and also opens opportunities for
future research. To begin with, we focused on nudges in the context of BIS. Although BIS are a common
context in the IS research domain, the generalizability of our results should also be examined in other
contexts. We conducted a lab experiment for two main reasons. First, the lab experiment allowed us to control
for external effects. In a field setting, controlling for social influence would have been almost impossible,
because of the numerous, potentially confounding, factors (e.g., experience with working together, trust,
looks, etc.). Second, the lab experiment allowed us to conduct fixation analysis and, thus, reliably measure
recommendation elaboration. However, a future study could build on our findings and test our hypotheses in
a field setting.
Next, choosing students as the experiment participants may have reduced our findings’ generalizability to
employees within organizations. However, although students do not have extensive experience of working with BIS, we argue that they are a reasonably representative group, because external factors, such as historical experiences, bias them less (e.g., Choi et al., 2015; Colquitt, 2008; Xu et al., 2014). In addition, to
mitigate the risk associated with inexperienced participants, we only used tabular reports, as well as very
basic and intuitive functionalities and tasks (e.g., open a report from a list, open a recommended report, filter
a report, and look for certain information within a report). We used several techniques to train the participants
to use the BIS (i.e., introduction video, personal introduction, personal support, training tasks, and reference
papers) and after the experiment, we did not find any indication that the participants had had difficulties with
using the BIS. The number of correctly completed experimental tasks (93%) also supports this assumption.
Finally, a third limitation relates to the measurement of elaboration, which we measured with regard to each
recommendation with state-of-the-art eye-tracking technology. However, since eye-tracking devices
constantly improve, future studies could leverage, for instance, a more granular resolution. This would allow
researchers to provide detailed elaboration measurements of sub-parts of the recommendations.
Besides tackling these potential limitations, future research could extend our work, as well as design and
investigate other nudges. We focused on social nudges due to the significance of social influence on
individuals’ behaviors in organizations (Tichy et al., 1979; Tsai and Goshal, 1998). However, IS users’
behaviors could perhaps not only be manipulated by means of social effects and future studies could
therefore investigate similar nudges that leverage other cognitive biases (Goes, 2013).
From a practitioner’s perspective, future research could also extend our ERA by adding concrete information
about the recommended reports. For instance, report recommendations could be extended with short
descriptions that indicate why other colleagues use the reports. These studies could specifically improve our
ERA, because, in its current form, it only focuses on presenting information about previous users without
considering information about the content of the recommended reports.
7 Conclusion
Recent IS articles have called on researchers to draw on and contribute to behavioral economists' research (Goes, 2013). This study draws on their nudge concept. We suggested a specific refinement described as a
social nudge, which refers to an attempt to steer an individual toward desirable choices by exploiting social
influence’s effects on this individual. However, in line with the nudge concept, a social nudge should not
change the range of available options from which the individual can choose.
Moreover, we designed four social nudges based on theoretical knowledge about social cohesion,
institutional isomorphism, and hierarchical power. We implemented these social nudges and examined their
effects on BIS users. Our findings showed that the four implementations of social nudges steered users
toward making the targeted choices. These findings are interesting for IS researchers and behavioral
economists because they provide insights into concrete applications and the effects of the nudge concept.
We studied the effects of the four social nudges in the context of RAs. Previous RA literature focused on end-
consumers rather than employees as RA users. However, we suggest that RAs should also be considered
potential extensions of enterprise IS. We therefore proposed an enterprise recommendation agent (ERA) as
an extension of BIS and showed that an ERA can complement existing approaches to managing core BIS
and supplementary WS. Specifically, an ERA can facilitate information retrieval and, thus, increase reuse of
existing information and reports. Since this reduces employees’ needs to build redundant and duplicate
reports using WS, the ERA represents a means for better balancing core BIS and supplementary WS. Finally,
we also contribute to RA literature and elaboration research by empirically examining and reliably quantifying
the effects of users’ recommendation elaboration using recent eye tracking technology and conducting gaze
analysis.
8 References
Abubakre, M. A., Ravishankar, M. N., & Coombs, C. R. (2015). The role of formal controls in facilitating
information system diffusion. Information and Management, 52, 599-609.
Adomavicius, G., Bockstedt, J. C., Curley, S. P., & Zhang, J. (2013). Do recommender systems manipulate
consumer preferences? A study of anchoring effects. Information Systems Research, 24(4), 956-975.
Allen, H., & Parsons, J. (2010). Is query reuse potentially harmful? Anchoring and adjustment in adapting
existing database queries. Information Systems Research, 21(1), 56-77.
Alter, S. (2014). Theory of workarounds. Communications of the AIS, 34(55), 1041-1066.
Alter, S. (2013). Work system theory: Overview of core concepts, extensions, and challenges for the future.
Journal of the AIS, 14(2), 72-121.
Anderson, C., & Kilduff, G. J. (2009). Why do dominant personalities attain influence in face-to-face groups?
The competence-signaling effects of trait dominance. Journal of Personality and Social Psychology,
96(2), 491-503.
Anicich, E. M., Fast, N. J., Halevy, N., & Galinsky, A. D. (2016). When the bases of social hierarchy collide:
Power without status drives interpersonal conflict. Organization Science, 27(1), 123-140.
Angst, C. M., & Agarwal, R. (2009). Adoption of electronic health records in the presence of privacy concerns:
The elaboration likelihood model and individual persuasion. MIS Quarterly, 33(2), 339-370.
Arazy, O., Kumar, N., & Shapira, B. (2010). A theory-driven design framework for social recommender
systems. Journal of the AIS, 11(9), 455-490.
Bates, D., Mächler, M., Bolker, B. M., & Walker, S. C. (2015). Fitting linear mixed-effects models using lme4.
Journal of Statistical Software, 67(1).
Bagayogo, F. F., Lapointe, L., & Bassellier, G. (2014). Enhanced use of IT: A new perspective on post-
adoption. Journal of the AIS, 15(7), 361-387.
Balseiro, S. R., Feldman, J., Mirrokni, V., & Muthukrishnan, S. (2014). Yield optimization of display advertising
with ad exchange. Management Science, 60(12), 2886-2907.
Bartón, K. (2017). Package ‘MuMIn’. Retrieved November 27, 2017, from https://cran.r-
project.org/web/packages/MuMIn/MuMIn.pdf
Barr, D. J., Levy, R., Scheepers, C., & Tilly, H. J. (2013). Random effects structure for confirmatory hypothesis
testing: keep it maximal. Journal of Memory and Language, 68, 255-278.
Behrens, S. (2009). Shadow systems: The good, the bad, and the ugly. Communications of the ACM, 52(2),
124-129.
Bernstein, P. A., & Haas, L. M. (2008). Information integration in the enterprise. Communications of the ACM,
51(9), 72-79.
Bhattacherjee, A., & Sanford, C. (2006). Influence processes for information technology acceptance: An
elaboration likelihood model. MIS Quarterly, 30(4), 805-825.
Borgatti, S. P., & Everett, M. G. (1992). Notions of position in social network analysis. Sociological
Methodology, 22, 1-35.
Borgatti, S. P., & Foster, P. C. (2003). The network paradigm in organizational research: A review and
typology. Journal of Management, 29, 991-1013.
Bovens, L. (2008). The ethics of nudge. In T. Grüne-Yanoff & S. O. Hansson (Eds.), Preference change:
Approaches from philosophy, economics and psychology. Berlin: Springer (pp. 207-220).
Burton-Jones, A., & Grange, C. (2013). From use to effective use: A representation theory perspective.
Information Systems Research, 24(3), 632-658.
Brazel, J. F., & Dang, L. (2008). The effect of ERP system implementations on the management of earnings
and earnings release dates. Journal of Information Systems, 22, 122.
Brown, C., & Magill, S. (1994). Alignment of the IS function with the enterprise: Toward a model of
antecedents. MIS Quarterly, 18(4), 371-403.
Brown, C., & Magill, S. (1998). Reconceptualizing the context-design issue for the information systems
function. Organization Science, 9(2), 176-194.
Cartwright, D. (1959). Introduction. In D. Cartwright (Ed.), Studies in social power (pp. 150-165). Ann Arbor:
Institute for Social Research.
Choi, B. C. F., Jiang, Z. J., Xiao, B., & Kim, S. S. (2015). Embarrassing exposure in online social networks:
An integrated perspective of privacy invasion and relationship bonding. Information Systems Research,
26(4), 675-694.
Clegg, S. (2013). The theory of power and organization. New York: Routledge.
Clegg, S., Courpasson, D., & Phillips, N. (2006). Power and organizations. London: Sage.
Cohen, S. (2013). Nudging and informed consent. The American Journal of Bioethics, 13(6), 3-11.
Colquitt, J. A. (2008). From the editors. Publishing laboratory research in AMJ: A question of when, not if.
Academy of Management Journal, 51(4), 616-620.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings.
Boston: Houghton-Mifflin.
Courpasson, D., Golsorkhi, D., & Salaz, J. J. (2012). Rethinking power in organizations, institutions, and
markets. Research in Sociology of Organizations, 34, 1-20.
Day, J. M., Junglas, I., & Silva, L. (2009). Information flow impediments in disaster relief supply chains.
Journal of the AIS, 10(8), 637-660.
Davenport, T. H. (2014). Big data @ work. Uncovering the myths, uncovering the opportunities. Boston:
Harvard Business Review Press.
Dennis, A. R., & Valacich, J. S. (2001). Conducting experimental research in Information Systems.
Communications of the AIS, 7(5).
DiMaggio, P. (1986). Structural analysis of organizational fields: A blockmodel approach. Research in
Organizational Behavior, 8, 335-370.
DiMaggio, P., & Powell, W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality
in organizational fields. American Sociological Review, 48(2), 147-160.
Fang, R., Landis, B., Zhang, Z., Anderson, M. H., Shaw, J. D., & Kilduff, M. (2015). A meta-analysis of
personality, network position, and work outcomes in organizations. Organization Science, 26(4), 1243-
1260.
Field, A., Miles, J., & Field, Z. (2012). Discovering statistics using R. London: Sage.
Fiske, S. T. (2010). Interpersonal stratification. Status, power, and subordination. In S. Fiske, D. Gilbert, & G.
Lindzey (Eds.), Handbook of social psychology (pp. 941-982).
Fiske, S. T. (1993). Controlling other people: The impact of power on stereotyping. American Psychologist,
48, 621-628.
Friedkin, N. E., & Cook, K. S. (1990). Peer group influence. Sociological Methods & Research, 19(1), 122-
143.
Gargiulo, M., & Benassi, M. (2000). Trapped in your own net? Network cohesion, structural holes, and the
adaptation of social capital. Organization Science, 11(2), 183-196.
Gass, O., Ortbach, K., Kretzer, M., Maedche, A., & Niehaves, B. (2015). Conceptualizing individualization in
information systems A literature review. Communications of the AIS, 37(3), 64-88.
Gawronski, B., & Creighton, L. A. (2013). Dual-process theories. In D. E. Carlston (Ed.), The Oxford
handbook of social cognition. New York: Oxford University Press.
Gebauer, J., & Schober, F. (2006). Information system flexibility and the cost efficiency of business
processes. Journal of the AIS, 7(3), 122-147.
Goes, P. B. (2013). Information systems research and behavioral economics. MIS Quarterly, 37(3), pp. iii-
viii.
Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: Methods and
constructs. International Journal of Industrial Ergonomics, 24, 631-645.
Guadagno, R.E., Swinth, K.R., & Blascovich, J. (2011). Social evaluations of embodied agents and avatars.
Computers in Human Behavior, 27(6), 2380-2385.
Halpern, D. (2015). Inside the nudge unit. How small changes can make a big difference. London: WH Allen.
Hanna, J. (2015). Libertarian paternalism, manipulation, and the shaping of preferences. Social Theory and
Practice, 41(4), 618-643.
Hausman, D., & Welch, B. (2010). Debate: To nudge or not to nudge. Journal of Political Philosophy, 18,
123-136.
Hawley, A. (1968). Human ecology. In D. L. Sills (Ed.), International encyclopedia of the social sciences. New
York: Macmillan.
Hess, T. J., Fuller, M., & Campbell, D. E. (2009). Designing interfaces with social presence: using vividness
to create social recommendation agents. Journal of the Association for Information Systems, 10(12),
889.
Ho, S. Y., & Bodoff, D. (2014). The effects of web personalization on user attitude and behavior: An integration
of the elaboration likelihood model and consumer search theory. MIS Quarterly, 38(2), 497-520.
Ho, S. Y., Bodoff, D., & Tam, K. Y. (2011). Timing of adaptive web personalization and its effects on online
consumer behavior. Information Systems Research, 22(3), 660-679.
Hogg, M. A. (2010). Influence and leadership. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social
Psychology (pp. 1166-1207).
James, L. R. (1980). The unmeasured variables problem in path analysis. Journal of Applied Psychology, 65,
415-421.
Jasperson, J. S., Carter, P. E., & Zmud, R. W. (2005). A comprehensive conceptualization of the post-
adoptive behaviors associated with IT-enabled work systems. MIS Quarterly 29(3), 525-557.
Just, M. A., & Carpenter, P. A. (1976). Eye fixations and cognitive processes. Cognitive Psychology, 8, 441-
480.
Kellogg, K. C., Orlikowski, W. J., & Yates, J. (2006). Life in the trading zone: Structuring coordination across
boundaries in post-bureaucratic organizations. Organization Science, 17(1), 22-44.
Keltner, D., Gruenfeld, D., & Anderson, C. (2003). Power, approach, and inhibition. Psychological Review,
110, 265-284.
Kilduff, M., & Tsai, W. (2003). Social networks and organizations. London: Sage.
Komogortsev, O. V., Gobert, D. V., Jayarathna, S., Koh, D. H., & Gowda, S. M. (2010). Standardization of
automated analyses of oculomotor fixation and saccadic behaviors. IEEE Transactions on Biomedical
Engineering, 57(11), 2635-2645.
Leonard, T. C. (2008). Richard H. Thaler, Cass, R. Sunstein, Nudge. Improving decisions about health,
wealth, and happiness. Constitutional Political Economy, 19, 356-360.
Lewin, K. (1951). Field theory in social science: Selected theoretical papers. New York: Harpers.
Liang, H., Xue, Y., & Wu, L. (2013). Ensuring employees’ IT compliance: Carrot or stick? Information Systems
Research, 24(2), 279-294.
Li, X., Hsieh, J. J. P.-A., & Rai, A. (2013). Motivational differences across post-acceptance information system
usage behaviors: An investigation in the business intelligence systems context. Information Systems
Research, 24(3), 659-682.
Li, S. S., & Karahanna, E. (2015). Online recommendation systems in a B2C e-commerce context: A review
and future directions. Journal of the AIS, 16(2), 72-107.
Loftus, G. R. (1972). Eye fixations and recognition memory for pictures. Cognitive Psychology, 3, 525-551.
Magee, J. C., & Galinsky, A. D. (2008). Social hierarchy: The self-reinforcing nature of power and status.
Academy of Management Annals, 2, 351-398.
Marsden, P. V., & Friedkin, N. E. (1993). Network studies of social influence. Sociological Methods &
Research, 22(1), 127-151.
Meservy, T. O., Jensen, M. L., & Fadel, K. J. (2014). Evaluation of competing candidate solutions in electronic
networks of practice. Information Systems Research, 25(1), 15-34.
Microsoft (2016). Microsoft Contoso BI demo dataset for retail industry. Retrieved March 21, 2016, from
http://www.microsoft.com/en-us/download/confirmation.aspx?id=18279
Miles, J. A. (2012). Management and organization theory. San Francisco: John Wiley.
Mukherjee, R., & Mao, J. (2004). Enterprise search: tough stuff. ACM Queue, 2(2), 36-46.
Müller, S., Scealy, J. L., & Welsh, A. H. (2013). Model selection in linear mixed models. Statistical Science, 28(2), 135-167.
Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining R² from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133-142.
O’Neill, D. (2011). Business intelligence competency centers: Centralizing an enterprise business intelligence strategy. International Journal of Business Intelligence Research, 2(3), 21-35.
Over, E. A. B., Hooge, I. T. C., Vlaskamp, B. N. S., & Erkelens, C. J. (2007). Coarse-to-fine eye movement
strategy in visual search. Vision Research, 47(17), 2272-2280.
Panko, R. R., & Aurigemma, S. (2010). Revising the Panko-Halverson taxonomy of spreadsheet errors. Decision Support Systems, 49, 235-244.
Paterek, A. (2007). Improving regularized singular value decomposition for collaborative filtering. Proceedings of KDD Cup and Workshop 2007.
Petty, R. E., & Cacioppo, J. T. (1986a). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123-192.
Petty, R. E., & Cacioppo, J. T. (1986b). Communication and persuasion: Central and peripheral routes to
attitude change. New York: Springer.
Powell, S. G., Baker, K. R., & Lawson, B. (2009). Impact of errors in operational spreadsheets. Decision
Support Systems, 47, 126-132.
Powell, S. G., Baker, K. R., & Lawson, B. (2008). A critical review of the literature on spreadsheet errors,
Decision Support Systems, 46, 128-138.
Rayner, K., Li, X., Williams, C. C., Cave, K. R., & Well, A. D. (2007). Eye movements during information
processing tasks: Individual differences and cultural effects. Vision Research, 47(21), 2714-2726.
Rivard, S., & Lapointe, L. (2012). Information technology implementers’ responses to user resistance: Nature
and effects. MIS Quarterly, 36(3), 897-920.
Simon, H. A. (1957). Administrative Behavior, 4th ed., Cambridge: Cambridge University Press.
Singh, P. V., & Phelps, C. (2013). Networks, social influence, and the choice among competing innovations:
Insights from open source software licenses. Information Systems Research, 24(3), 539-560.
Sparrowe, R. T., Liden, R. C., Wayne, S. J., & Kraimer, M. L. (2001). Social networks and the performance
of individuals and groups. Academy of Management Journal, 44(2), 316-325.
Stark, D. (2009). The sense of dissonance: Accounts of worth in economic life. Princeton: Princeton University Press.
Sun, H. (2012). Understanding user revision when using information system features: Adaptive system use
and triggers. MIS Quarterly, 36(2), 453-478.
Sunstein, C. R. (2014). Why nudge? The politics of libertarian paternalism. New Haven: Yale University
Press.
Sunstein, C. R., & Thaler, R. H. (2003). Libertarian paternalism is not an oxymoron. The University of Chicago
Law Review, 70, 1159-1202.
Sutton, R. I., & Staw, B. M. (1995). What theory is not. Administrative Science Quarterly, 40(3), 371-384.
Teubner, T., Adam, M., & Riordan, R. (2015). The impact of computerized agents on immediate emotions,
overall arousal and bidding behavior in electronic auctions. Journal of the AIS, 16(10), 838-879.
Thaler, R. H. & Sunstein, C. R. (2003). Libertarian paternalism. American Economic Review, 93, 175-179.
Thaler, R. H., Sunstein, C. R., & Balz, J. P. (2010). Choice architecture. Retrieved November 19, 2016, from
http://ssrn.com/abstract=1583509
The Royal Swedish Academy of Sciences (2017). Scientific background on the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2017. Richard H. Thaler: Integrating economics with psychology. The Committee for the Prize in Economic Sciences in Memory of Alfred Nobel. Retrieved November 27, 2017, from https://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2017/
Tichy, N. M., Tushman, M. L., & Fombrun, C. (1979). Social network analysis for organizations. Academy of
Management Review, 4(4), 507-519.
Tiwana, A., & Kim, S. K. (2015). Discriminating IT governance. Information Systems Research, forthcoming,
1-19.
Tiwana, A., & Konsynski, B. (2010). Complementarities between organizational and IT architecture and
governance structure. Information Systems Research, 21(2), 288-304.
Tobii. (2015). User manual Tobii Studio. Retrieved September 24, 2015, from http://acuity-
ets.com/downloads/Tobii%20Studio%203.3%20User%20Guide.pdf
Trout, J. D. (2005). Paternalism and cognitive bias. Law and Philosophy, 24, 393-434.
Tsai, W., & Ghoshal, S. (1998). Social capital and value creation: The role of intrafirm networks. Academy of Management Journal, 41(4), 464-476.
Tyre, M. J., & Orlikowski, W. J. (1994). Windows of opportunity: Temporal patterns of technological adaptation
in organizations. Organization Science, 5(1), 98-118.
Unger, C., Kemper, H.-G., & Russland, A. (2008). Business intelligence center concepts. Proceedings of the 14th Americas Conference on Information Systems, paper 147.
Weill, P., & Ross, J. (2005). A matrixed approach to designing IT governance. Sloan Management Review, 46(2), 26-34.
Weigle, C., & Banks, D. C. (2014). Analysis of eye-tracking experiments performed on a Tobii T60.
Proceedings of The International Society for Optical Engineering.
Weinmann, M., Schneider, C., & vom Brocke, J. (2016). Digital nudging (updated version). Business and Information Systems Engineering, 58(6), 433-436.
Weinmann, M., Schneider, C., & vom Brocke, J. (2015). Digital nudging. Retrieved November 29, 2015, from
http://ssrn.com/abstract=2708250
Williams, R. (2017). Scalar measures of fit: Pseudo R² and information measures (AIC & BIC). University of Notre Dame. Last revised February 2, 2017. Retrieved from https://www3.nd.edu/~rwilliam/stats3/L05.pdf
World-English (2015). Baby names. Retrieved September 24, 2015, from http://www.world-english.org/baby_names.htm
Xiao, B., & Benbasat, I. (2007). E-commerce product recommendation agents: Use, characteristics, and
impact. MIS Quarterly, 31(1), 137-209.
Xu, J. D., Benbasat, I., & Cenfetelli, R. T. (2014). The nature and consequences of trade-off transparency in
the context of recommendation agents. MIS Quarterly, 38(2), 379-406.
Xue, Y., Liang, H., & Wu, L. (2011). Punishment, justice, and compliance in mandatory IT settings.
Information Systems Research, 22(2), 400-414.
Zhang, X., Venkatesh, V., & Brown, S. A. (2011). Designing collaborative systems to enhance team
performance. Journal of the AIS, 12(8), 556-584.
9 Appendix
9.1 Experimental Treatments and Probabilities
We used a within-subjects experiment design. Experimental treatments 1-9 were used for the cross-factorial examination of institutional isomorphism with regard to business function and location. In addition, experimental treatments 10-15 were used for the cross-factorial examination of social cohesion and hierarchical power. Note that in treatments 10 and 11, the hierarchical position of the user referred to in recommendation 1 and recommendation 3 is “intern”, because recommendation 2 would otherwise have referred to users with a lower social influence than recommendation 1 and recommendation 3.
In total, 187 subjects participated in our experiment. However, since we did not want to force them to select a recommendation, some participants solved the experimental tasks without choosing any of the three provided report recommendations. The actual sample size N of each experimental treatment is therefore less than 187; specifically, it ranges from 149 to 171.
Note that one experimental treatment is included in both cross-factorial designs: experimental treatment 1 and experimental treatment 12 are identical. Consequently, the sample size of this treatment is approximately twice the sample size of the other treatments.
Table 7 provides all experimental treatments. Table 8 shows (a) the frequency with which participants followed any recommendation from a set of three, and (b) the frequency with which they followed any recommendation from a set of three and answered the task correctly. Since the manipulated recommendation (and thus the recommendation of interest) is always the second of a set of three “rotating” recommendations, the table also provides (c) the frequency with which a user followed the second recommendation and answered the task correctly. Finally, this allows us to compute (d) the probability that a user selected the second recommendation, given that he or she followed one of the three recommendations of the condition and answered the task correctly.
Note that the number of observations in Table 5 and Table 6 corresponds to the sum of column (b) in Table 8. Specifically, the number of observations (999) in the models estimating the effects of social cohesion, power, and elaboration equals the sum of Table 8, column (b), rows 10-15. In contrast, the number of observations (1513) in the models estimating the effects of institutional isomorphism (regarding business functions and regarding locations) and elaboration equals the sum of Table 8, column (b), rows 1-9.
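To make the arithmetic behind column (d) and these observation counts explicit, the following minimal Python sketch re-derives them from the values reported in Table 8 (the data tuples are copied from the table; the variable names are ours and purely illustrative):

```python
# Minimal sketch: re-deriving column (d) of Table 8 and the observation counts
# of the two analyses. Values are copied from Table 8; note that treatment 12
# reuses the data of treatment 1.
rows = [
    # (treatment id, (b) N with correct answer, (c) recommendation 2 chosen)
    (1, 297, 78), (2, 150, 54), (3, 155, 75), (4, 147, 82), (5, 159, 84),
    (6, 140, 85), (7, 155, 86), (8, 146, 83), (9, 164, 113),
    (10, 141, 37), (11, 126, 75), (12, 297, 78), (13, 141, 96),
    (14, 153, 94), (15, 141, 101),
]

# Column (d): probability of choosing recommendation 2, given that one of the
# three recommendations was followed and the task was answered correctly.
for tid, b, c in rows:
    print(f"treatment {tid:>2}: {c / b:.2%}")

# Observation counts of the two cross-factorial analyses.
print(sum(b for tid, b, _ in rows if tid <= 9))   # treatments 1-9 (institutional isomorphism): 1513
print(sum(b for tid, b, _ in rows if tid >= 10))  # treatments 10-15 (cohesion / power): 999
```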
Table 7. Experimental treatments: Overview.
Exp. treatment id | Inst. isomorph. regarding business functions | Inst. isomorph. regarding locations | Social cohesion | Hierarchical power | (a) N
1 | different | different | low | medium | (314)
2 | different | similar | low | medium | 160
3 | different | same | low | medium | 168
4 | similar | different | low | medium | 152
5 | similar | similar | low | medium | 163
6 | similar | same | low | medium | 153
7 | same | different | low | medium | 171
8 | same | similar | low | medium | 161
9 | same | same | low | medium | 166
10 | different | different | low | low | 150
11 | different | different | high | low | 149
12 (=1) | different | different | low | medium | (314)
13 | different | different | high | medium | 158
14 | different | different | low | high | 158
15 | different | different | high | high | 156
Table 8. Experimental treatments - Analysis.
Exp. treatment id | (a) N | (b) N with correctly answered question | (c) Recommendation 2 chosen | (d) Probability of choosing recommendation 2
1 | (314) | (297) | (78) | 26.26%
2 | 160 | 150 | 54 | 36.00%
3 | 168 | 155 | 75 | 48.39%
4 | 152 | 147 | 82 | 55.78%
5 | 163 | 159 | 84 | 52.83%
6 | 153 | 140 | 85 | 60.71%
7 | 171 | 155 | 86 | 55.48%
8 | 161 | 146 | 83 | 56.85%
9 | 166 | 164 | 113 | 68.90%
10 | 150 | 141 | 37 | 26.24%
11 | 149 | 126 | 75 | 59.52%
12 (=1) | (314) | (297) | (78) | 26.26%
13 | 158 | 141 | 96 | 68.09%
14 | 158 | 153 | 94 | 61.44%
15 | 156 | 141 | 101 | 71.63%
9.2 Demographic Data
Table 9. Demographic Data.
Category | Value | Absolute | Percentage
Participants | n/a | 187 | 100.00%
Sex | Men | 136 | 72.73%
 | Women | 51 | 27.27%
 | Total | 187 | 100.00%
Age | 17 or younger | 1 | 0.53%
 | 18-20 | 47 | 25.13%
 | 21-29 | 135 | 72.19%
 | 30-39 | 4 | 2.14%
 | 40-49 | 0 | 0.00%
 | 50-59 | 0 | 0.00%
 | 60 or older | 0 | 0.00%
 | Total | 187 | 100.00%
Nationality | Albanian | 4 | 2.14%
 | Belarus | 1 | 0.53%
 | Bolivian | 1 | 0.53%
 | Bulgarian | 3 | 1.60%
 | Chinese | 12 | 6.42%
 | Colombian | 2 | 1.07%
 | Dutch | 1 | 0.53%
 | Egyptian | 4 | 2.14%
 | French | 1 | 0.53%
 | German | 106 | 56.68%
 | Greek | 4 | 2.14%
 | Hungarian | 1 | 0.53%
 | Indian | 6 | 3.21%
 | Iraqi | 1 | 0.53%
 | Italian | 3 | 1.60%
 | Jordanian | 1 | 0.53%
 | Korean (Republic) | 2 | 1.07%
 | Lithuanian | 1 | 0.53%
 | Mexican | 2 | 1.07%
 | Moroccan | 1 | 0.53%
 | Norwegian | 1 | 0.53%
 | Pakistan | 1 | 0.53%
 | Peruvian | 1 | 0.53%
 | Romanian | 1 | 0.53%
 | Russian | 5 | 2.67%
 | Spanish | 4 | 2.14%
 | Suisse | 1 | 0.53%
 | Swedish | 2 | 1.07%
 | Syrian | 1 | 0.53%
 | Turkish | 6 | 3.21%
 | US American | 3 | 1.60%
 | Vietnamese | 4 | 2.14%
 | Total | 187 | 100.00%
First language as a child | Albanian | 4 | 2.14%
 | Arabic | 8 | 4.28%
 | Bulgarian | 3 | 1.60%
 | Cantonese | 2 | 1.07%
 | Chechen | 1 | 0.53%
 | Chinese | 11 | 5.88%
 | Dutch | 1 | 0.53%
 | English | 2 | 1.07%
 | German | 90 | 48.13%
 | Greek | 3 | 1.60%
 | Hindi | 2 | 1.07%
 | Hungarian | 1 | 0.53%
 | Italian | 3 | 1.60%
 | Korean | 2 | 1.07%
 | Lithuanian | 2 | 1.07%
 | Norwegian | 1 | 0.53%
 | Romanian | 3 | 1.60%
 | Russian | 11 | 5.88%
 | Serbian | 1 | 0.53%
 | Shanghainese | 1 | 0.53%
 | Slovak | 1 | 0.53%
 | Spanish | 10 | 5.35%
 | Swedish | 2 | 1.07%
 | Tamil | 2 | 1.07%
 | Telugu | 2 | 1.07%
 | Turkish | 11 | 5.88%
 | Urdu | 1 | 0.53%
 | Vietnamese | 6 | 3.21%
 | Total | 187 | 100.00%
9.3 Experiment Design
9.3.1 Experimental tasks
The experiment participants were asked to complete nine tasks. Each task consists of one question, which
they could answer by searching for specific information in the BIS and using the ERA and/or the list of all
reports. The time needed to answer a question was not measured and participants were informed about this
before the experiment. Table 10 provides the list of all tasks. Note that task 1 and task 2 were used as training
tasks and not considered in our data analysis.
Table 10. List of experimental tasks.
Task | Type | Question | Answer
1 | Training | Where does the customer come from who placed the order with the number 200701011CS567? | United Kingdom
2 | Training | What is the product sub-category of order number 20070101311515? | Computers Accessories
3 | Experiment | How many male customers have 5 children and own a house (HouseOwnerFlag=1)? [count rows] | 2
4 | Experiment | How many Contoso Carrying Case E312 Silver were ordered in the year 2007? [count the rows] | 4
5 | Experiment | What is the gender and customer key of the customer with the highest consumption? | M and 178
6 | Experiment | What is the yearly income of customer 137? | 40.000,00
7 | Experiment | How many female customers have a Bachelor’s degree? [count rows] | 6
8 | Experiment | In which country does customer 343 usually place his orders? | France
9 | Experiment | Where does customer 252 live and how old is he/she? | 83 and Canada
9.3.2 Organizational structure of Contoso
Contoso is the scenario company in the experiment. The departments, organizational hierarchy, and regions
in which Contoso operates are displayed in Figure 6. Figure 6 was also printed and given to all the participants
during the experiment.
Figure 6. Contoso reference paper provided to participants during the experiment.
9.4 Recommendation Elaboration Analysis
Recommendation elaboration was measured using Tobii Pro X2 (Tobii, 2015) eye-tracking devices. A
detailed description of this measure is provided in section 4.5. To visualize the effects of recommendation
elaboration, we defined five intervals and computed the probabilities that participants would choose the
manipulated recommendation, i.e., recommendation 2. The results are illustrated in Figures 4a-d in section
5.4. Table 11 below provides detailed probabilities. Related statistical analyses and significance tests of H6a-
d are presented in Table 5 and Table 6 in section 5.4.
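As a minimal sketch of how the interval probabilities in Table 11 can be obtained from trial-level data, the following Python snippet bins recommendation elaboration into the five intervals and averages the binary choice indicator per interval and condition level. It assumes a hypothetical pandas DataFrame `trials` with the columns `elaboration_sec`, `chose_rec2` (0/1), and one column per social-nudge condition (e.g., `social_cohesion`); these names are illustrative and not taken from the authors' analysis scripts.

```python
import pandas as pd

# Five elaboration intervals as used for Table 11: 0.0s, 0.1-1.0s, 1.1-2.0s, 2.1-3.0s, 3.1s-inf.
BINS = [-0.001, 0.0, 1.0, 2.0, 3.0, float("inf")]
LABELS = ["0.0s", "0.1-1.0s", "1.1-2.0s", "2.1-3.0s", "3.1s-inf"]

def choice_probabilities(trials: pd.DataFrame, condition: str) -> pd.DataFrame:
    """Probability of choosing the manipulated recommendation per elaboration interval and condition level."""
    d = trials.copy()
    d["elab_bin"] = pd.cut(d["elaboration_sec"], bins=BINS, labels=LABELS)
    return (d.groupby([condition, "elab_bin"], observed=True)["chose_rec2"]
              .mean()
              .unstack("elab_bin"))

# Example usage (hypothetical data frame):
# print(choice_probabilities(trials, "social_cohesion"))
```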
Table 11. Probabilities to choose manipulated recommendation by rec. elaboration.
Elaboration [sec] | 0.0s | 0.1-1.0s | 1.1-2.0s | 2.1-3.0s | 3.1s-inf
Recommendation elaboration and social cohesion:
Probability of choosing the manipulated rec. if social cohesion was high | 0.47 | 0.56 | 0.72 | 0.86 | 0.97
Probability of choosing the manipulated rec. if social cohesion was low | 0.42 | 0.37 | 0.30 | 0.29 | 0.31
Recommendation elaboration and institutional isomorphism regarding business function:
Probability of choosing the manipulated rec. if the business function was the same | 0.37 | 0.44 | 0.55 | 0.72 | 0.82
Probability of choosing the manipulated rec. if the business function was similar | 0.43 | 0.47 | 0.59 | 0.60 | 0.65
Probability of choosing the manipulated rec. if the business function differed | 0.40 | 0.38 | 0.37 | 0.27 | 0.24
Recommendation elaboration and institutional isomorphism regarding location:
Probability of choosing the manipulated rec. if the location was the same | 0.39 | 0.48 | 0.57 | 0.61 | 0.64
Probability of choosing the manipulated rec. if the location was similar | 0.40 | 0.40 | 0.49 | 0.55 | 0.56
Probability of choosing the manipulated rec. if the location differed | 0.40 | 0.43 | 0.41 | 0.43 | 0.40
Recommendation elaboration and hierarchical power:
Probability of choosing the manipulated rec. if the hierarchical power was high | 0.40 | 0.57 | 0.67 | 0.90 | 0.84
Probability of choosing the manipulated rec. if the hierarchical power was medium | 0.42 | 0.37 | 0.38 | 0.38 | 0.45
Probability of choosing the manipulated rec. if the hierarchical power was low | 0.48 | 0.44 | 0.38 | 0.29 | 0.30
9.5 Control Model Analysis
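The control models reported in Tables 12 and 13 add the participants' age, gender, nationality, and first language ("Culture FL") to the treatment factors and recommendation elaboration. As a rough orientation only, the sketch below shows how such a control model could be specified in Python with statsmodels; because the subject-level random-intercept variance is reported as zero in both tables, a plain logistic regression with the same fixed-effect structure is used here as an approximation of the logistic mixed-effects model. The column names are hypothetical and this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_control_model(trials: pd.DataFrame):
    """Approximate the control model of Table 12 with a plain logistic GLM.

    Assumes a trial-level DataFrame with the hypothetical columns chose_rec2 (0/1),
    social_cohesion, hierarchical_power, elaboration, age, gender, nationality,
    and first_language (these names are ours, not the authors').
    """
    formula = (
        "chose_rec2 ~ C(social_cohesion) * C(hierarchical_power) * elaboration"
        " + age + C(gender) + C(nationality) + C(first_language)"
    )
    result = smf.glm(formula, data=trials, family=sm.families.Binomial()).fit()
    print(result.summary())        # estimates on the log-odds scale, cf. the 'Estimate' column
    print(np.exp(result.params))   # odds ratios, cf. the 'Odds ratio' column
    return result
```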
Table 12. Logistic linear mixed effects model with control variables.
Social cohesion, hierarchical power, recommendation elaboration, control variables:
Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|)
Intercept | -1.479 | 0.228 | 0.583 | -2.538 | 0.011 *
Social Cohesion [high] | 0.787 | 2.197 | 0.220 | 3.572 | 3.54E-04 ***
Hierarchical power [med.] | 0.255 | 1.291 | 0.192 | 1.330 | 0.184
Hierarchical power [high] | 0.469 | 1.599 | 0.227 | 2.066 | 0.039 *
Elaboration | -0.152 | 0.859 | 0.111 | -1.369 | 0.171
Social Cohesion [high] * Hierarchy [med.] | -0.860 | 0.423 | 0.306 | -2.810 | 0.005 **
Social Cohesion [high] * Hierarchy [high] | -1.576 | 0.207 | 0.397 | -3.972 | 7.13E-05 ***
Social Cohesion [high] * Elaboration | 0.120 | 1.128 | 0.174 | 0.695 | 0.487
Hierarchy [med.] * Elaboration | -0.286 | 0.751 | 0.141 | -2.037 | 0.042 *
Hierarchy [high] * Elaboration | 0.401 | 1.493 | 0.142 | 2.821 | 0.005 **
Social Cohesion [high] * Hierarchy [med.] * Elab. | 1.067 | 2.907 | 0.248 | 4.300 | 1.71E-05 ***
Social Cohesion [high] * Hierarchy [high] * Elab. | 0.670 | 2.014 | 0.283 | 2.471 | 0.013 *
Age | 0.093 | 1.098 | 0.114 | 0.817 | 0.414
Gender [woman] | -0.031 | 0.969 | 0.132 | -0.238 | 0.812
Nationality [Belarus] | 0.129 | 1.138 | 0.933 | 0.138 | 0.890
Nationality [Bulgarian] | 0.431 | 1.539 | 0.639 | 0.675 | 0.450
Nationality [Chinese] | -0.656 | 0.519 | 0.801 | -0.819 | 0.413
Nationality [Colombian] | 0.688 | 1.989 | 0.591 | 1.163 | 0.245
Nationality [Dutch] | 0.496 | 1.642 | 0.621 | 0.799 | 0.424
Nationality [Egyptian] | 0.489 | 1.631 | 0.527 | 0.928 | 0.354
Nationality [French] | 0.185 | 1.203 | 1.043 | 0.177 | 0.860
Nationality [German] | 0.052 | 1.053 | 0.690 | 0.075 | 0.940
Nationality [Greek] | 5.573 | 26.313 | 162.452 | 0.034 | 0.973
Nationality [Hungarian] | 1.780 | 5.927 | 0.748 | 2.380 | 0.017 *
Nationality [Indian] | 0.729 | 2.072 | 0.992 | 0.735 | 0.463
Nationality [Iraqi] | 0.548 | 1.730 | 0.783 | 0.700 | 0.484
Nationality [Italian] | 1.157 | 3.181 | 0.574 | 2.017 | 0.044 *
Nationality [Jordanian] | 0.454 | 1.574 | 0.702 | 0.646 | 0.518
Nationality [Korea (Rep.)] | 1.409 | 4.090 | 0.638 | 2.208 | 0.027 *
Nationality [Lithuanian] | -5.182 | 0.006 | 132.642 | -0.039 | 0.969
Nationality [Mexican] | 0.714 | 2.041 | 0.578 | 1.234 | 0.217
Nationality [Moroccan] | 0.759 | 2.137 | 0.715 | 1.062 | 0.288
Nationality [Norwegian] | 0.510 | 1.665 | 0.692 | 0.737 | 0.461
Nationality [Pakistan] | 0.154 | 1.167 | 0.967 | 0.160 | 0.873
Nationality [Peruvian] | 0.488 | 1.629 | 0.638 | 0.764 | 0.445
Nationality [Romanian] | -0.542 | 0.581 | 1.100 | -0.493 | 0.622
Nationality [Russian] | 0.390 | 1.476 | 0.786 | 0.496 | 0.620
Nationality [Spanish] | 0.998 | 2.712 | 0.541 | 1.842 | 0.065 .
Nationality [Suisse] | 0.411 | 1.508 | 0.891 | 0.461 | 0.644
Nationality [Swedish] | 0.922 | 2.515 | 0.764 | 1.207 | 0.228
Nationality [Syrian] | 1.465 | 4.326 | 0.701 | 2.088 | 0.037 *
Nationality [Turkish] | 0.312 | 1.367 | 0.800 | 0.390 | 0.696
Nationality [US American] | -4.829 | 0.008 | 132.641 | -0.036 | 0.971
Nationality [Vietnamese] | 0.600 | 1.822 | 0.566 | 1.059 | 0.289
Culture FL [Cantonese] | 1.834 | 6.261 | 0.810 | 2.263 | 0.024 *
Culture FL [Chechen] | -0.168 | 0.845 | 0.861 | -0.195 | 0.845
Culture FL [Chinese] | 1.164 | 3.204 | 0.619 | 1.880 | 0.060 .
Culture FL [English] | 0.656 | 1.926 | 1.084 | 0.604 | 0.546
Culture FL [German] | 0.675 | 1.964 | 0.521 | 1.295 | 0.195
Culture FL [Greek] | -5.210 | 0.005 | 162.453 | -0.032 | 0.974
Culture FL [Hindi] | 0.506 | 1.659 | 0.656 | 0.772 | 0.440
Culture FL [Lithuanian] | 5.964 | 38.903 | 132.641 | 0.045 | 0.964
Culture FL [Romanian] | 0.285 | 1.330 | 0.807 | 0.354 | 0.724
Culture FL [Russian] | 0.342 | 1.407 | 0.591 | 0.578 | 0.563
Culture FL [Serbian] | 1.056 | 2.875 | 0.735 | 1.437 | 0.151
Culture FL [Shanghainese] | 1.284 | 3.612 | 0.884 | 1.453 | 0.146
Culture FL [Slovak] | 5.071 | 15.938 | 132.642 | 0.038 | 0.970
Culture FL [Tamil] | 0.097 | 1.102 | 0.952 | 0.102 | 0.919
Culture FL [Telugu] | -0.825 | 0.438 | 1.045 | -0.790 | 0.430
Culture FL [Turkish] | 0.615 | 1.850 | 0.579 | 1.062 | 0.288
Random effects | Var. | Std. dev. | Observ. | Groups
Subject (Intercept) | 0 | 0 | 999 | 175
Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -3.245 | -0.690 | -0.142 | 0.766 | 7.481
Table 13. Logistic linear mixed effects model with control variables.
Institutional isomorphism regarding business function, institutional isomorphism regarding location, recommendation elaboration, control variables:
Fixed effects | Estimate | Odds ratio | Std. error | z value | Pr(>|z|)
Intercept | -0.768 | 0.464 | 0.459 | -1.671 | 0.095
Inst. isomorphism regarding business funct. [medium] | -0.002 | 0.998 | 0.181 | -0.012 | 0.990
Inst. isomorphism regarding business funct. [high] | -0.083 | 0.920 | 0.177 | -0.470 | 0.638
Inst. isomorphism regarding location [medium] | -0.211 | 0.810 | 0.187 | -1.127 | 0.260
Inst. isomorphism regarding location [high] | 0.681 | 1.976 | 0.242 | 2.816 | 0.005 **
Elaboration | -0.459 | 0.632 | 0.087 | -5.300 | 1.16E-07 ***
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [med.] | 0.454 | 1.574 | 0.305 | 1.489 | 0.136
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [med.] | 0.160 | 1.174 | 0.298 | 0.539 | 0.590
Inst. isomorph. (bf) [med.] * Inst. isomorph. (loc) [high] | -1.191 | 0.304 | 0.434 | -2.747 | 0.006 **
Inst. isomorph. (bf) [high] * Inst. isomorph. (loc) [high] | -1.012 | 0.363 | 0.406 | -2.496 | 0.013 *
Inst. isomorph. (bf) [med.] * Elab. | 0.880 | 2.410 | 0.145 | 6.073 | 1.26E-09 ***
Inst. isomorph. (bf) [high] * Elab. | 0.968 | 2.632 | 0.146 | 6.606 | 3.96E-11 ***
Inst. isomorph. (loc) [med.] * Elab. | 0.500 | 1.650 | 0.114 | 4.389 | 1.14E-05 ***
Inst. isomorph. (loc) [high] * Elab. | 0.218 | 1.243 | 0.126 | 1.734 | 0.083 .
Inst. isom. (bf) [med.] * Inst. isom. (loc) [med.] * Elab. | -0.895 | 0.409 | 0.178 | -5.023 | 5.09E-07 ***
Inst. isom. (bf) [high] * Inst. isom. (loc) [med.] * Elab. | -0.732 | 0.481 | 0.181 | -4.039 | 5.36E-05 ***
Inst. isom. (bf) [med.] * Inst. isom. (loc) [high] * Elab. | -0.245 | 0.783 | 0.208 | -1.175 | 0.240
Inst. isom. (bf) [high] * Inst. isom. (loc) [high] * Elab. | -0.270 | 0.764 | 0.206 | -1.311 | 0.190
Age | 0.071 | 1.074 | 0.085 | 0.835 | 0.404
Gender [woman] | 0.062 | 1.064 | 0.102 | 0.606 | 0.545
Nationality [Belarus] | -0.560 | 0.571 | 0.785 | -0.713 | 0.476
Nationality [Bulgarian] | 0.300 | 1.350 | 0.528 | 0.569 | 0.569
Nationality [Chinese] | 0.216 | 1.241 | 0.663 | 0.326 | 0.745
Nationality [Colombian] | -0.180 | 0.835 | 0.481 | -0.374 | 0.708
Nationality [Dutch] | 0.582 | 1.789 | 0.576 | 1.009 | 0.313
Nationality [Egyptian] | 0.109 | 1.115 | 0.421 | 0.258 | 0.797
Nationality [French] | -0.232 | 0.793 | 0.847 | -0.274 | 0.784
Nationality [German] | 0.204 | 1.226 | 0.591 | 0.345 | 0.730
Nationality [Greek] | -0.158 | 0.854 | 0.921 | -0.172 | 0.864
Nationality [Hungarian] | 0.557 | 1.745 | 0.538 | 1.035 | 0.301
Nationality [Indian] | 0.979 | 2.663 | 0.832 | 1.177 | 0.239
Nationality [Iraqi] | 0.322 | 1.380 | 0.533 | 0.603 | 0.546
Nationality [Italian] | 0.442 | 1.555 | 0.477 | 0.926 | 0.354
Nationality [Jordanian] | -0.211 | 0.810 | 0.576 | -0.366 | 0.715
Nationality [Korea (Rep.)] | 0.664 | 1.942 | 0.524 | 1.268 | 0.205
Nationality [Lithuanian] | 1.535 | 4.643 | 1.351 | 1.137 | 0.256
Nationality [Mexican] | 0.434 | 1.544 | 0.466 | 0.932 | 0.351
Nationality [Moroccan] | -0.285 | 0.752 | 0.568 | -0.502 | 0.616
Nationality [Norwegian] | 0.429 | 1.535 | 0.582 | 0.737 | 0.461
Nationality [Pakistan] | 0.818 | 2.265 | 0.623 | 1.312 | 0.190
Nationality [Peruvian] | 0.351 | 1.421 | 0.571 | 0.615 | 0.539
Nationality [Romanian] | -0.313 | 0.731 | 0.923 | -0.340 | 0.734
Nationality [Russian] | 0.316 | 1.372 | 0.659 | 0.479 | 0.632
Nationality [Spanish] | 0.390 | 1.476 | 0.422 | 0.923 | 0.356
Nationality [Suisse] | 0.940 | 2.559 | 0.770 | 1.220 | 0.222
Nationality [Swedish] | 0.848 | 2.335 | 0.536 | 1.581 | 0.114
Nationality [Syrian] | -0.226 | 0.797 | 0.604 | -0.375 | 0.708
Nationality [Turkish] | 0.130 | 1.139 | 0.666 | 0.195 | 0.845
Nationality [US American] | 0.896 | 2.450 | 1.172 | 0.765 | 0.445
Nationality [Vietnamese] | 0.421 | 1.524 | 0.437 | 0.963 | 0.335
Culture FL [Cantonese] | -0.127 | 0.881 | 0.634 | -0.200 | 0.842
Culture FL [Chechen] | 0.331 | 1.392 | 0.703 | 0.470 | 0.638
Culture FL [Chinese] | -0.003 | 0.997 | 0.524 | -0.006 | 0.995
Culture FL [English] | -0.920 | 0.399 | 0.877 | -1.048 | 0.294
Culture FL [German] | 0.192 | 1.211 | 0.466 | 0.412 | 0.680
Culture FL [Greek] | 0.297 | 1.346 | 0.901 | 0.330 | 0.742
Culture FL [Hindi] | -0.396 | 0.673 | 0.560 | -0.660 | 0.509
Culture FL [Lithuanian] | -1.520 | 0.219 | 1.223 | -1.244 | 0.214
Culture FL [Romanian] | 0.792 | 2.209 | 0.645 | 1.229 | 0.219
Culture FL [Russian] | 0.030 | 1.031 | 0.504 | 0.060 | 0.952
Culture FL [Serbian] | 0.265 | 1.303 | 0.613 | 0.432 | 0.666
Culture FL [Shanghainese] | -0.544 | 0.581 | 0.711 | -0.765 | 0.444
Culture FL [Slovak] | -0.837 | 0.433 | 1.204 | -0.695 | 0.487
Culture FL [Tamil] | -0.319 | 0.727 | 0.814 | -0.392 | 0.695
Culture FL [Telugu] | -1.925 | 0.146 | 0.935 | -2.059 | 0.040 *
Culture FL [Turkish] | 0.170 | 1.185 | 0.513 | 0.331 | 0.740
Random effects | Var. | Std. dev. | Observ. | Groups
Subject (Intercept) | 0 | 0 | 1513 | 179
Residuals | Min. | 1Q | Median | 3Q | Max.
Residuals | -3.487 | -0.832 | -0.163 | 0.870 | 7.524