This is the author’s version of a work that was published in the following source:
Ruoff, Marcel and Gnewuch, Ulrich (2021). “Designing Multimodal BI&A Systems for Co-Located Team Interactions”. In Proceedings of the 29th European Conference on Information Systems (ECIS 2021), A Virtual AIS Conference.
Please note: Copyright is owned by the author and/or the publisher. Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Kaiserstraße 89-93
Kollegiengebäude am Kronenplatz (Geb. 05.20)
76133 Karlsruhe
http://iism.kit.edu
© 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
DESIGNING MULTIMODAL BI&A SYSTEMS FOR
CO-LOCATED TEAM INTERACTIONS
Research Paper
Marcel Ruoff, Karlsruhe Institute of Technology (KIT), Institute of Information Systems and
Marketing (IISM), Karlsruhe, Germany, marcel.ruoff@kit.edu
Ulrich Gnewuch, Karlsruhe Institute of Technology (KIT), Institute of Information Systems
and Marketing (IISM), Karlsruhe, Germany, ulrich.gnewuch@kit.edu
Abstract
Teams are crucial for organizations in making data-driven decisions. However, current business
intelligence & analytics (BI&A) systems are primarily designed to support individuals and, therefore,
cannot be used effectively in co-located team interactions. To address this challenge, we conduct a
design science research (DSR) project to design a multimodal BI&A system providing touch and
speech interactions that can be used effectively by teams. Drawing on the theory of effective use and
existing guidelines for multimodal user interfaces, we propose three design principles and instantiate
them in a software artifact. The results of a focus group evaluation indicate that enhancing the BI&A
system with multimodal capabilities increases transparent interaction and facilitates effective use of
the system in co-located team interactions. Our DSR project contributes novel design knowledge for
multimodal BI&A systems with touch and speech modalities that facilitate effective use in co-located
team interactions.
Keywords: Multimodal Interaction, Business Intelligence and Analytics, Theory of Effective Use,
Design Science Research, Co-Located Team Interaction.
1 Introduction
The increasing importance of data-driven decision making in organizations is reshaping the work practices of employees at all levels (Chen et al., 2012). To support employees’ data understanding and decision
making, most organizations have implemented business intelligence & analytics (BI&A) systems.
These systems process and present data to a broad spectrum of users, for example, in the form of
reports or dashboards. Given their widespread availability, BI&A systems are now used in all areas of
business to facilitate decision making. However, the success of BI&A systems will be determined by
how effectively they are used (Burton-Jones & Grange, 2013).
Today, decisions based on BI&A systems are not only made by individuals alone but increasingly also
by teams. Due to this trend, teams are crucial for organizations in making data-driven decisions
(Majchrzak et al., 2012). For example, before deciding on a new customer retention strategy,
employees from sales, controlling, and management departments meet and analyze churn data from
the past. These insights and informed actions are derived in co-located team interactions (Dennis,
1996; Isenberg et al., 2012; Schmidt et al., 2001). Yet, surprisingly few BI&A systems support co-located team interactions (Berthold et al., 2010; Isenberg et al., 2012), and many teams struggle to work together equitably and flexibly using current BI&A systems (Dayal et al., 2008; Kaufmann & Chamoni, 2014). For example, with current BI&A systems, only one person in a team meeting interacts with the system and carries out the analysis, while the other meeting participants can only
observe the activities or comment on the results. Consequently, achieving effective use of BI&A
systems in co-located team interactions remains a challenge.
According to Burton-Jones and Grange (2013), effective use of information systems (IS) involves three core elements: transparent interaction, representational fidelity, and informed action. Teams need to interact with a BI&A system in an unimpeded way in order to obtain faithful representations (e.g., data analyses), which ultimately enables them to take informed actions (e.g., make business decisions). Therefore, at the most fundamental level, BI&A systems need to be designed in a way that facilitates transparent interaction, because otherwise achieving effective use is hardly possible. One approach
to facilitate transparent interaction with BI&A systems in co-located team interactions could be to
supplement the established interaction modalities of BI&A systems (i.e., mouse, keyboard, and touch)
with speech interaction. In recent years, the capabilities of conversational user interfaces (CUI) have
greatly improved and they are increasingly used to enable users to access information and interact with
a system in a more natural and intuitive way (McTear, 2017). Hence, combining existing interaction modalities with speech interaction provided through a CUI may compensate for the disadvantages of each individual modality and, therefore, facilitate effective use of BI&A systems. Consequently, BI&A systems that support multiple modalities (hereafter referred to as multimodal BI&A systems) could enable teams to interact with a BI&A system in a flexible and effective manner and more actively involve all team members in the decision making process (Deng et al., 2004; Oviatt, 1999).
However, while there is a large body of design knowledge on BI&A systems for individual use
contexts, research on the effective use of BI&A systems for team interaction is scarce. Furthermore,
multimodal BI&A systems have been predominantly studied from a technology-centric perspective
(Turk, 2014). Thus, there is a lack of prescriptive knowledge on how to design multimodal BI&A
systems for co-located team interactions. Moreover, it is not well understood whether and how
multimodal BI&A systems can facilitate effective use and support decision making in co-located team
interactions. Hence, we address the following research question:
How should multimodal BI&A systems be designed for co-located team interactions in order to facilitate their effective use?
To address this question, we conduct a Design Science Research (DSR) project (Kuechler &
Vaishnavi, 2008). Drawing on the theory of effective use (Burton-Jones & Grange, 2013) and existing
design knowledge for multimodal user interfaces (MUI) (Deng et al., 2004; Reeves et al., 2004), we
designed, implemented, and evaluated a multimodal BI&A system that combines touch and speech
interaction. We developed and evaluated our software artifact using a confirmatory focus group in
cooperation with the finance & accounting department of a large European energy provider.
This paper presents the results of our first design cycle. Overall, our DSR project contributes to the
body of design knowledge for BI&A systems by demonstrating how the combination of touch and
speech increases transparent interaction and representational fidelity in order to achieve effective use
in co-located team interactions. Furthermore, our proposed design principles advance existing guidelines for MUIs and ground them in the theory of effective use. In particular, we contribute three design principles for multimodal BI&A systems for teams. Our work represents an improvement in the DSR knowledge contribution framework (Gregor & Hevner, 2013), as it provides a more efficient and effective solution for a known problem. For practitioners, we provide applicable guidelines for the implementation of multimodal BI&A systems (Gregor & Jones, 2007).
2 Related Work and Theoretical Foundations
2.1 Business Intelligence & Analytics Systems for Teams
Business intelligence & analytics (BI&A) is often described as “techniques, technologies, systems,
practices, methodologies, and applications that analyze critical business data to help an enterprise
better understand its business and market and make timely business decisions” (Chen et al. 2012, p.
1166). BI&A reinforces human cognition and capitalizes on human perceptual capabilities by integrating data analysis systems with decision support systems (Yigitbasioglu & Velcu, 2012). In order to accomplish this, tools, applications, and technologies focusing on decision making are required (Larson & Chang, 2016).
In the process of deriving knowledge from data using BI&A systems and making decisions, teams additionally require tools that support them in collaborating (Abbasi et al., 2016). Different approaches have been used to support teams during decision making and data understanding. Group decision support systems (GDSS), for example, have long been researched in order to increase team effectiveness, efficiency, and satisfaction in decision making (Burstein et al., 2008; Nunamaker & Deokar, 2008). A key insight from these research streams is that cross-functional teams can increase effectiveness due to synergies. However, they can also lead to incomplete access to and use of the information needed for successful decision making (Nunamaker & Deokar, 2008). These insights are crucial to data-driven decision making in organizations, and the collaborative aspect of decision making is therefore receiving increasing attention in BI&A system research (Abelló et al., 2013; Berthold et al., 2010). This suggests that, in the transfer from the individual to the team level, especially the functional and technical aspects need to be mapped to the requirements that teams pose to BI&A systems (Kaufmann & Chamoni, 2014). However, research on BI&A systems for co-located team interaction and their requirements is crucial but scarce (Berthold et al., 2010; Ruoff et al., 2020).
2.2 Multimodal User Interfaces
Multimodal user interfaces (MUI) enable the processing of two or more input modalities from users, such as speech, touch, or gaze (Oviatt, 2003). Their fundamental idea is to remove existing constraints on human-computer interaction by leveraging the full communication and interaction capabilities of humans in order to provide a natural interaction between the user and the system (Turk, 2014). The first MUI was Bolt’s “Put-that-there” system (1980), which integrated speech and gesture to increase the ease of use of the system. Since then, many MUIs have been developed (e.g., Turk, 2014). In particular, speech input has often been used in combination with other modalities, since speech has powerful complementary capabilities, such as enabling complex interactions in contrast to the simple interactions of touch (Deng et al., 2004; Saktheeswaran et al., 2020). Several guidelines have been published by research describing the general requirements for MUIs (Reeves et al., 2004) and by practice describing requirements for the combination of specific modalities (Deng et al., 2004), integrating insights from different research streams, such as research on CUIs (Gnewuch et al., 2018; McTear, 2017) and on interaction preferences (Pitt et al., 2011). Today, MUIs are attributed a high degree of relevance for BI&A systems as they can provide fluid interactions during decision making (Dayal et al., 2008; Roberts et al., 2014; Saktheeswaran et al., 2020). However, there is still a lack of research on multimodal BI&A systems, even though multimodality could enhance the interaction between users and BI&A systems and lead to improved effectiveness and efficiency (Dayal et al., 2008).
2.3 Theory of Effective Use
IS should be used effectively, since shallow use alone is not sufficient to ensure that the organization’s objectives are met (Seddon, 1997). According to Burton-Jones and Grange (2013), effective use can be defined as “using a system in a way that helps attain the goals for using the system” (p. 4). Based on their conceptualization, effective use is an aggregated construct comprising three hierarchical dimensions: (1) transparent interaction, (2) representational fidelity, and the outcome dimension (3) informed action (Burton-Jones and Grange 2013). As illustrated in Figure 1, the three dimensions of effective use influence each other. Unimpeded access to the system’s representations (transparent interaction) improves the ability to obtain representations that faithfully reflect the domain (representational fidelity). Representational fidelity, in turn, improves informed action, which is the extent to which a user acts on faithful representations.
Therefore, a user’s overall level of effective use is determined by the aggregated levels of the three
dimensions (Burton-Jones & Grange, 2013). For example, users of a BI&A system need to access
accurate business information (transparent interaction), such as which products had lower revenue
than expected based on the purchase history (representational fidelity), to be able to make decisions
for future business endeavors (informed action).
[Figure 1 depicts the three dimensions of effective use: transparent interaction improves the ability to obtain representational fidelity, which in turn improves the ability to take informed action, leading to effectiveness and efficiency.]
Figure 1. Theory of Effective Use (adapted from Burton-Jones & Grange (2013))
In order to positively influence effective use during the interaction between users and IS, Burton-Jones
and Grange (2013) identified two major drivers: adaptation actions and learning actions. In our paper,
we focus on adaptation actions, which are defined as any action a user takes to improve (1) a system’s
representation of the domain of interest; or (2) his or her access to them, through a system’s surface or
physical structure. Therefore, researchers in the context of BI&A systems need to expand their focus from organizational aspects and data quality (Surbakti et al., 2020) to also include the interaction between users and the system. Especially when designing multimodal BI&A systems, researchers should consider how users are able to adapt their interaction with the system according to the task and context.
3 Design Science Research Project
To design a multimodal BI&A system that can be effectively used in co-located team interactions, we
follow the DSR approach as described by Kuechler and Vaishnavi (2008). We argue that this research
approach is particularly suited to address our research question because it allows us to integrate
existing design knowledge (Deng et al., 2004; Reeves et al., 2004), descriptive knowledge from the
theory of effective use (Burton-Jones & Grange, 2013), and empirical results from our evaluation
phases to incrementally improve our artifact. These foundations provide a rigorous grounding and
allow us to contribute to the existing knowledge base. To further provide relevance to our rigorous
approach (Hevner, 2007) in understanding multimodal BI&A systems, we collaborate with an industry
partner serving as our research case. Our industry partner is the finance & accounting department of a
large European energy provider. The joint research project was initiated because the company recognizes the need to establish new forms of interaction with data. Access to its practitioners enables us to sharpen our awareness of the problem as well as to perform evaluations with practitioners.
[Figure 2 shows the three cycles of the DSR project along the general design science cycle (awareness of problem, suggestion, development, evaluation, conclusion). Cycle 1 (understanding): literature review and interaction-elicitation study; synthesis of design principles based on empirical findings and theory; instantiation of the design principles as a software artifact; qualitative evaluation of the software artifact (confirmatory focus group); reflection of the focus group analysis. Cycle 2 (lab experiment evaluation): adaptation of the design principles based on the evaluation results and insights from the focus groups; modification of the software artifact; quantitative evaluation of the software artifact (lab experiment); reflection of the experiment analysis. Cycle 3 (application to practice): adaptation of the design principles based on the evaluation results and insights from the lab experiment; modification of the software artifact; quantitative evaluation of the software artifact (field experiment); delivery of a nascent design theory.]
Figure 2. Design Science Research Project (adapted from Kuechler & Vaishnavi (2008))
In our first design cycle, we focus on the fundamental dimension of the theory of effective use, the
transparent interaction with multimodal BI&A systems, and the impact of the systems’ design on their
effective use.
Awareness of Problem: In order to better understand issues of data-driven decisions in co-located
teams and potential issues in the design of multimodal BI&A systems, we started our research by
conducting a literature review on multimodal BI&A systems for co-located team interactions. This
literature review provided us with potential issues in the design of multimodal BI&A systems for co-located team interactions and allowed us to extract approaches for tackling these issues from various disciplines, such as computer-supported cooperative work and information visualization.
Subsequently, we conducted an interaction-elicitation study following the approach by Morris (2012)
to derive data on how people would want to interact with a multimodal BI&A system to compare the
proposed guidelines to feedback from potential users. Overall, 30 participants with an average age of
22.8 years (SD = 1.9) took part in the study. There were 8 female and 22 male participants, mostly
students with a background in economics and engineering. In accordance with Badam and Elmqvist
(2019), we consider students a suitable population because the focus of this study was to elicit interactions with multimodal BI&A systems, for which no specific expertise beyond experience with touch and speech interfaces is needed.
The interaction-elicitation study consisted of two parts (Ruoff & Maedche, 2020). First, the
participants were shown 14 randomized core functionalities of BI&A systems, such as filtering,
selecting, and obtaining details, which we extracted based on the framework of Yi et al. (2007). After
each demonstration of a functionality, the participant was asked to propose an interaction on how s/he
would invoke the functionality using speech, touch, and the combination of these modalities. For each
modality, the participant stated in which context s/he would use this interaction. Furthermore, the
participant rated for each functionality which modality s/he would prefer and stated why s/he rated the
modalities in this order. Finally, after proposing interactions for each functionality, a semi-structured
post-study interview was conducted with a focus on the use of multimodal BI&A systems as well as
on how they provide assistance to users in order to interact properly. With the consent of the
participants, audio and video were recorded for the whole interaction-elicitation study.
In order to analyze our results, we coded the post-study interviews to derive common issues from the
users’ perspective and the user-defined interactions for the core functionalities. To calculate the
agreement for the interaction of each modality and core functionality, we derived the percentage of
participants proposing the most popular interaction (Morris, 2012). For example, 17 participants
proposed the interaction “Filter for <Entity>” as a speech interaction for the functionality “filtering”.
Therefore, the interaction for filtering using the modality speech has an agreement of 57%.
Furthermore, based on the ranking of the modalities for each functionality, we were able to derive the
modalities preferred for the functionalities.
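To illustrate the agreement computation, consider the following minimal sketch (the data structures and function names are ours, for illustration only):

```typescript
// Agreement for one modality/functionality pair: the share of participants
// who proposed the most popular interaction (Morris, 2012).
function agreement(proposals: string[]): number {
  const counts = new Map<string, number>();
  for (const proposal of proposals) {
    counts.set(proposal, (counts.get(proposal) ?? 0) + 1);
  }
  return Math.max(...counts.values()) / proposals.length;
}

// Example: if 17 of 30 participants proposed "Filter for <Entity>" as the
// speech interaction for "filtering", the agreement is 17/30 ≈ 57%.
```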
Suggestion: To address the issues identified in the problem awareness phase, we proposed three
design principles for multimodal BI&A systems. These design principles were derived based on our
literature review, the results of our interaction-elicitation study, and the theory of effective use as our
kernel theory.
Development: To demonstrate how these design principles can be implemented, we instantiated them
in a software artifact using state-of-the-art technologies for the recognition of speech and touch input.
Evaluation: In the evaluation phase, we opted for confirmatory focus groups, as they provide a collective view on a topic of interest from a group of experienced participants and help establish the utility of the software artifact in field use (Tremblay et al., 2010). We invited thirteen employees from the finance & accounting department with a focus on controlling, customer processes, data science, as well as general management in the context of finance (9 males; mean age = 34.6 years; mean work experience = 10.1 years). All practitioners therefore have experience using BI&A systems in co-located team interactions and can provide insights into the topic of interest. The guiding thought of these confirmatory focus groups was issues related to practitioners’ use of multimodal BI&A systems in co-located team interactions and the possible strengths, weaknesses, opportunities, and threats of facilitating the interactions of the multimodal BI&A system through touch and speech.
After a short introduction to the goal and procedure of the confirmatory focus group, we separated the participants into two groups of six and seven practitioners. The confirmatory focus group with both groups
followed the same procedure. First, the use case of leveraging the multimodal BI&A system in co-
located team interactions was presented to the practitioners. The moderating researcher guided the
practitioners through questions that are of interest in a typical decision making task (e.g., whether the
price for an energy product should be increased in the future). During the demonstration of the use
case, the moderating researcher was supported by our multimodal BI&A system and used various
possible interactions with the multimodal BI&A system, such as speech for filtering or touch to select
data of interest. The practitioners were included in the interaction with the system and could also use
the multimodal BI&A system during the demonstration. After the demonstration, questions regarding
the use case and the multimodal BI&A system were discussed. Following a 20 minute discussion, we
explained the Strength-Weakness-Opportunity-Threat (SWOT) analysis method to the practitioners
which was used to structure the confirmatory focus group. Subsequently, the practitioners were given
time to write down their perceived strengths, weaknesses, opportunities, and threats of multimodal
BI&A systems in co-located team interactions on index cards. Finally, the index cards were read out
loud and explained by the respective practitioner, providing the researchers with the possibility to ask
follow-up questions on recurring points. Both sessions were recorded with the consent of the
practitioners and transcribed after the workshop.
Following the confirmatory focus groups, all audio recordings were transcribed using MAXQDA 2018. Similar to previous evaluation studies that used recorded verbalization, our coding scheme “consisted of a series of categories about the behavior to be studied” (Vitalari, 1985, p. 226). More specifically, our coding scheme included the concepts of effective use (e.g., transparent interaction and representational fidelity) and the relationships between them. In the first step, we combined similar
index cards with overlapping explanations by the respective practitioner based on the results of the
initial coding. In a second step, we derived first-order concepts from these groups (Zhang, 2017). For
example, “no tool knowledge needed” and “makes it easier to find options that can otherwise only be
reached with many clicks” were combined with other similar statements to a group and the first-order
concept “Limited knowledge about the functionality of the system necessary” was derived and mapped
to the corresponding design principle.
As depicted in Figure 2, we plan to conduct two additional cycles to further refine our design and
evaluate it in a lab and field experiment. In the second design cycle, we plan to refine the design
principles based on the evaluation results of the first cycle. Furthermore, we will focus on how to
adapt the multimodal BI&A system to team characteristics and context. We plan to experimentally evaluate how the adaptation of transparent interaction and representational fidelity affects the effective use of the BI&A system. The third and final design cycle aims to fine-tune our design
principles using the results of the previous evaluations. This will provide us the opportunity to
introduce the multimodal BI&A system to various teams in the finance & accounting department and
to better understand the impact of the design principles on effective use. Our ultimate goal is to deliver
a nascent design theory for multimodal BI&A systems as described by Gregor and Jones (2007).
4 Results
4.1 Awareness of the Problem
In the following, we present the results of the problem awareness phase along the two main
dimensions of effective use as a lens: (1) transparent interaction and (2) representational fidelity.
Specifically, we raise three major issues (I) with regard to current BI&A systems.
Transparent Interaction: Researchers aim to facilitate effective use by providing unimpeded access
to current BI&A systems through additional input modalities. Multiple studies have explored how the
combination of different modalities in multimodal BI&A systems can assist teams during co-located
team interactions (Badam et al., 2016; Langner et al., 2018; Lee et al., 2015; Nguyen et al., 2017). The
combination of modalities used in these studies varies between touch and speech, mid-air hand
gestures and touch, mid-air hand gestures and speech as well as touch and pen. Therefore, it is difficult
to generalize the results of these studies. However, the general conclusion of these studies is that only
providing additional modalities to users does not automatically increase effective use (Nguyen et al.,
2017). Therefore, it is unclear which modalities should be combined in BI&A systems, and how, in order to facilitate transparent interaction (I1).
A common modality used for multimodal BI&A systems is touch since it conveys the team member’s
“intention quickly and unambiguous to the system” (Badam et al., 2016) and is in line with the
affordance of displays to be touched (Norman, 2016). However, teams are still unable to convey
complex information to the multimodal BI&A systems without help from menus. To tackle the
limitations of touch and to fulfill the requirements of the adaptivity of MUIs (Reeves et al., 2004),
researchers combine touch with additional modalities. To augment touch as a modality, guidelines for
MUIs and the results of our interaction-elicitation study indicate that speech could be beneficial to
convey complex information (Deng et al., 2004; Saktheeswaran et al., 2020). Especially since the team
can “easily manipulate the visualized data in a natural and intuitive approach” (Nguyen et al. 2017, p.
7) through speech. However, in most multimodal BI&A systems, speech is still a hidden affordance as
the microphone is subtly integrated into the display and the interaction provides no physical feedback.
Therefore, individuals and teams struggle to use modalities, such as speech, because they are less
“visible” (I2).
Representational Fidelity: In many studies, achieving representational fidelity is supported by
providing either a dashboard (Badam et al., 2016; Langner et al., 2019; Lee et al., 2015) or a single
information visualization (Nguyen et al., 2017). In order to maintain representational fidelity during
decision making, teams need to be able to adapt the visual representations using transparent interaction
(Srinivasan et al., 2020), by altering queries to the data (Jetter et al., 2011), or by enhancing or
changing the underlying data (Chung et al., 2014). These adaptation actions can be performed using
different modalities. For example, users could click on a filter (touch) or ask the system to select a
specific year (speech). However, in the context of MUIs, researchers currently design the mapping
between interaction techniques, which users can utilize to maintain the representational fidelity, and
the system functionality bottom-up based on their specific system. As a result, a guiding paradigm or
design principle is missing to guide this process. Therefore, it is unclear how to map fundamental
dashboard interaction techniques to multimodal system functionalities (I3).
In summary, there are several issues in the design of multimodal BI&A systems for co-located team
interactions. Based on the results of our literature review and interaction-elicitation study, we
determined that existing research lacks an understanding of how to facilitate effective use of BI&A systems in co-located team interactions. Therefore, we subsequently focus on how multimodal BI&A systems need to be designed to facilitate effective use and how teams can be assisted during their interaction.
4.2 Suggestion
To address the identified issues of multimodal BI&A systems, we suggest designing a system that
facilitates effective use by providing a MUI. Building on the theory of effective use, we argue that a
multimodal BI&A system that provides unimpeded access to the system’s representation (transparent
interaction) and enables users to obtain faithful representations (representational fidelity) will
positively influence informed actions and, therefore, facilitate effective use. Consequently, we
formulate two meta-requirements (MR) based on the dimensions of effective use: Multimodal BI&A
systems should provide a high level of transparent interaction (MR1) and representational fidelity
(MR2).
To increase transparent interaction (MR1) and to tackle I1 and I2, the theory of effective use suggests adapting the physical structure and the surface structure. It further indicates that “the sole purpose of these structures is to support access to representations” (Burton-Jones & Grange 2013, p. 646). In the
context of the physical structure, the core strength of providing multiple input modalities is that
multimodal systems decrease the distance between intent and interaction (Lee et al., 2012) and,
therefore, support the access to the representations of the system, which is based upon the use of
different modalities complementing each other (Sundar et al., 2015). By providing the possibility to
choose between modalities, the multimodal BI&A system is robust to varying contexts, such as noise,
and team member preferences. This addresses the guidelines for error prevention and adaptivity in the
“Guidelines for multimodal user interface design” by Reeves et al. (2004). Furthermore, unimpeded
access to the system’s representation in the context of co-located team interaction is only possible if
the whole team can view the multimodal BI&A system. Particularly during decision-making,
perspectives of all team members need to be considered in the analysis and thus systems are required
to support all team members in their transparent interaction (Dennis, 1996; Dennis et al., 2001).
Therefore, we articulate the first DP:
DP1: To improve team members’ transparent interaction with a BI&A system in co-located team
interactions, integrate multimodal interaction capabilities on large interactive displays.
In the context of adapting the surface structure, the most critical mechanisms are the affordances and
the feedback the system provides. In order to address the issues of hidden affordances (I2), we propose
to implement signifiers for the affordances of multimodal BI&A systems, in accordance with the
theory of affordances (Norman, 2016). The crucial affordances of multimodal BI&A systems
providing touch and speech as modalities are touching the system and speaking to the system.
However, even though most of the displays used in co-located team interactions integrate
microphones, in conformance with I2, the affordance to speak to the system is not visible to the team
members and lacks signifiers. Therefore, an approach to make speech perceptible is to provide
signifiers to the team members. These signifiers create awareness for team members on what
modalities are available for interacting with the multimodal BI&A system. Furthermore, teams need to
understand how to properly interact with the multimodal BI&A system in order to increase transparent
interaction. Therefore, the multimodal BI&A system should provide perceptual information on the
basis of which teams can reinforce and, if necessary, modify their behavior. Deng et al. (2004)
proposed to implement reactive feedback in CUI in order to assist users during the interaction. Using
reactive feedback, the system’s interpretation of the team members’ speech interaction can be
visualized for confirmation and missing information can be requested by the multimodal BI&A
system. For example, after a complex speech interaction, team members should be able to understand
whether the system invoked the correct functionalities or if the team members need to undo the last
step and try again in a different way. Therefore, we articulate the second DP:
DP2: To improve team members’ transparent interaction with a BI&A system in co-located team
interactions, employ feedback and signaling affordances that clarify its interaction capabilities.
To increase representational fidelity (MR2) and to tackle I3, the theory of effective use suggests adapting the representations of the system. In this cycle, we focus on the visual representations of the system and not on adapting the mapping of the database or the functionalities of the system, which are also part of representational fidelity. In order to achieve higher representational fidelity by adapting the visual representations using transparent interaction, direct manipulation of the visual representations is crucial. Direct manipulation has been shown to simplify the mapping between goals and actions by reducing the semantic and articulatory distance (Frohlich, 1993). Furthermore, Yi et al. (2007) proposed a set of interaction techniques for visual representations that are independent of the modality used for facilitation. Combining these two concepts enables users to utilize transparent interaction to adapt the representation of the system in order to maintain representational fidelity during decision making, even when the problem statement or the information need shifts.
Therefore, we articulate the third DP:
DP3: To support team members in obtaining faithful representations while using a multimodal BI&A
system in co-located team interactions, enable direct manipulation of visual representations using
common interaction techniques (e.g., selecting, filtering).
4.3 Development
For our first design principle, we chose a Microsoft Surface Hub 2S to provide the touch and speech
modality as well as the visualization of the system (Figure 3), as it provides a large interactive display
to the team. Our multimodal BI&A system should be independent of specific BI&A systems used in
teams. Therefore, we used a two-layer architecture. The first layer is responsible for the integration of
the BI&A system and its corresponding data into the system. We used the SDK of Microsoft Power
BI, which is the platform for BI&A mainly used in the case organization. However, the focus of our
system is on the second layer, which is responsible for the interaction between teams and the BI&A
system. To provide a CUI and to implement speech interaction into our BI&A system, we used Microsoft’s Cognitive Services, which provides us with the capability to perform speech-to-text analysis and intent identification. The touch interactions were implemented in JavaScript.
Figure 3. Multimodal BI&A System in co-located team interactions at industry partner
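As an illustration of this two-layer separation, the following TypeScript sketch shows how touch and speech input could be normalized into the same intents and dispatched against an abstract dashboard adapter (all interface and function names are illustrative assumptions, not the artifact’s actual API):

```typescript
// Layer 1: adapter that wraps a concrete BI&A platform (in our case,
// Power BI embedded via its SDK); the interaction layer only sees this.
interface DashboardAdapter {
  filter(field: string, value: string): Promise<void>;
  select(mark: string): Promise<void>;
  switchTab(tab: string): Promise<void>;
}

// Layer 2: both modalities are normalized into the same intents, so every
// functionality remains reachable via touch or speech interchangeably.
type Intent =
  | { kind: "filter"; field: string; value: string }
  | { kind: "select"; mark: string }
  | { kind: "switchTab"; tab: string };

async function dispatch(intent: Intent, dashboard: DashboardAdapter): Promise<void> {
  switch (intent.kind) {
    case "filter":
      return dashboard.filter(intent.field, intent.value);
    case "select":
      return dashboard.select(intent.mark);
    case "switchTab":
      return dashboard.switchTab(intent.tab);
  }
}
```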
Ruoff & Gnewuch / Designing Multimodal BI&A Systems
Twenty-Ninth European Conference on Information Systems (ECIS 2021), A Virtual AIS Conference. 10
To instantiate the second design principle, we provide a signifier for the affordance of speaking to the system. Signifiers in the digital context can consist of, but are not limited to, buttons, labels, sounds coming out of a speaker, or haptic vibration. We provide a signifier that is constantly available to the team. Furthermore, in the post-study interviews of our interaction-elicitation study, participants
stated that they would prefer a visual representation, indicating the availability of speech to the team.
Therefore, we opted for a visual representation of the affordance that provides a visible signifier to the
team at all times during the interaction. A microphone symbol on the large interactive display
indicates the ability to speak to the system and tapping the symbol initiates speech interaction.
Additionally, to provide reactive feedback to the team members (DP2), the system displays the
interpretation of the speech input and explains the changes that were made based on that interpretation
in the CUI. Figure 4 shows the feedback that the team receives after filtering the dashboard via speech.
It includes the functionality invoked (“filter”) and what parameters were changed (i.e., ”Planning
Status”). This provides team members the ability to check whether the system understood them
correctly or if they need to undo the last step and try again in a different way.
[Figure 4 shows the dashboard after a speech interaction: the CUI displays the reactive feedback “You have set a filter for Planning Status.” (DP2) alongside the directly manipulable visual representations (DP3).]
Figure 4. Instantiation of the second and third design principle
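A minimal sketch of such reactive feedback could look as follows (the message templates and the shape of the interpreted intent are our assumptions, not the artifact’s actual implementation):

```typescript
// Reactive feedback (DP2): confirm the system's interpretation of a speech
// input, or request the missing information, so the team can spot mistakes
// and undo the last step if necessary.
interface InterpretedIntent { kind: "filter"; field?: string; value?: string }

function reactiveFeedback(intent: InterpretedIntent | null): string {
  if (intent === null) {
    return "Sorry, I did not understand that. Please undo the last step and try again.";
  }
  if (!intent.field) {
    return "Which field would you like to filter?"; // request missing information
  }
  return `You have set a filter for ${intent.field}.`; // confirm interpretation
}
```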
Finally, to instantiate the third design principle, we used the results of the interaction-elicitation study
to understand how users would like to perform the interaction techniques provided by Yi et al. (2007)
with BI&A systems using touch and speech. To demonstrate the capabilities of a multimodal BI&A
system and the implementation of our third design principle, we opted for filtering, selecting,
reconfiguring visualizations, interacting with bookmarks, asking questions about the data (e.g., “What is the product with the highest return in 2019?”), as well as switching tabs as core functionalities provided by multimodal BI&A systems.
For each modality and functionality, we selected the interaction that was proposed by most participants of the interaction-elicitation study. However, if multiple interactions had a high agreement for a modality and functionality and did not conflict, we integrated all of them. For example, for filtering via touch, the integration of a drop-down menu has an agreement rate of 53%, and tapping on the depiction of a variable in a visualization has an agreement rate of 40%. By providing both possibilities, we are able to provide interactions independent of team member preferences. Furthermore, we provide the possibility to choose between speech and touch at any step of the interaction. To continue the example of filtering, as depicted in Figure 4, team members are able to use speech (“Filter for Prognose”) or touch (drop-down menu or tap on a variable in a visualization) based on their current context and preferences.
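The resulting mapping can be thought of as a small registry that binds each core functionality to all accepted interactions per modality, as in the following sketch (the registry structure and patterns are illustrative assumptions):

```typescript
// Mapping elicited interactions to functionalities (DP3): where several
// interactions had high agreement and no conflict, all are registered.
const interactionMap: Record<string, { touch: string[]; speech: RegExp[] }> = {
  filter: {
    touch: ["drop-down-menu", "tap-on-variable"], // 53% and 40% agreement
    speech: [/^filter for (.+)$/i],               // 57% agreement
  },
  select: {
    touch: ["tap-on-mark"],
    speech: [/^select (.+)$/i],
  },
};

// Resolve an utterance such as "Filter for Prognose" to the same
// functionality that a tap on the drop-down menu would invoke.
function resolveSpeech(utterance: string): { functionality: string; entity: string } | null {
  for (const [functionality, bindings] of Object.entries(interactionMap)) {
    for (const pattern of bindings.speech) {
      const match = utterance.match(pattern);
      if (match) return { functionality, entity: match[1] };
    }
  }
  return null;
}
```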
4.4 Evaluation
In order to evaluate our multimodal BI&A system for co-located team interactions, we conducted
confirmatory focus groups (Tremblay et al., 2010) with thirteen employees of our industry partner.
The recorded focus group discussions were analyzed using a SWOT analysis. The results of the
SWOT analysis in the context of each design principle are explained in more detail below.
Strengths:
- S1 (DP1): Modality can be selected based on context, team member characteristics, and task.
- S2 (DP1): Increased interactivity of co-located team interactions and involvement of all team members.
- S3 (DP2): The team can concentrate on the communication and the task at hand.
- S4 (DP3): Limited knowledge about the functionality of the system necessary.
- S5 (DP3): Increased effectiveness of co-located team interactions due to ad-hoc analysis.

Weaknesses:
- W1 (DP1): Missing trust in the reliability of speech and its adaptivity to the context and team member characteristics.
- W2 (DP1): Speech is seen to reduce the privacy of its users.
- W3 (DP3): Onboarding needed to provide teams the ability to interact properly with the system.

Opportunities:
- O1 (DP2): Shifting the role of the BI&A system from an information provision platform towards becoming a key tool for teamwork.
- O2 (DP3): Increased effectiveness of co-located team interactions as additional information can be acquired based on more complex interactions with and drill-down into the data.

Threats:
- T1 (DP1): Every team member can interact with the system, which limits the control of a presenter and may lead to inefficient teamwork.
- T2 (DP3): Simplification and automation of the functionality through more intuitive modalities can lead to unnoticed mistakes.

Table 1. Summary of the SWOT Analysis
First, participants stated that integrating multimodal interaction capabilities on large interactive displays (DP1) would help them use the BI&A system more effectively. Particularly the interactivity and involvement of all team members in co-located team interactions was regarded as a major benefit. One participant stated: “When working with people who are experts in their field, everyone can interact from their standpoint and provide insights to the discussion” and that “the modalities in the system assist the interactivity of the meeting”. Furthermore, the first design principle was regarded as a key strength of the multimodal BI&A system, “as it offers more possibilities in contrast to current systems and, therefore, enables us to choose the fitting modality. For example, if the noise in the room is too loud, the team members can switch to touch.” Moreover, the participants confirmed the insights from existing literature that “the combination of touch and speech is beneficial, as they are able to use speech for complex interactions and touch for simple and fast interactions.” However, one major weakness of the multimodal BI&A system, hindering effective use, is the missing trust in the reliability of speech processing and its adaptivity to the context and team member characteristics. The participants fear that “the system would require an unnatural syntax for speech interaction” and that it cannot be adapted to the respective team members. Finally, participants mentioned that speech “decreases privacy, as everyone hears what you are working on.”
In general, the participants also liked the fact that the multimodal BI&A system employs feedback and signaling affordances that clarify its multimodal interaction capabilities (DP2), especially since, in the context of decision making using BI&A systems, they fear that “through the ability to invoke complex functionalities with simple interactions, multimodal BI&A systems may misinterpret the intentions and provide the wrong information for the following discussion.” Therefore, the reactive feedback would
help them spot mistakes in the system’s interpretation of the interaction. However, during the
discussion, team members may still miss the feedback provided by the system and use the information
provided by an unfaithful representation to derive wrong insights.
The participants additionally mentioned that enabling direct manipulation of visual representations
using common multimodal interaction techniques helps them to “derive insights and configurations
that else would be hard to find” and enables “ad-hoc analysis to answer questions arising in the
discussion”, which supports the third design principle. They further stated that this would help them to
improve their informed actions and would, therefore, facilitate the effective use of the multimodal
BI&A system. As the system already provides transparent interaction (DP1 & DP2) for easily invoking complex functionalities, the participants imagine that the third design principle could provide “additional insights that would be overlooked in current meetings and would currently require the team to reschedule the meeting.” Moreover, “meetings and analysis, in general, could get
faster.” However, according to the participants, providing the direct manipulation of the visual
representations using speech might require “the user to learn the syntax beforehand.”
5 Discussion
While important decisions based on data are often made by cross-functional teams, current BI&A
systems are primarily designed to support individual decision makers. To address this problem, we
conduct a DSR project to design multimodal BI&A systems for co-located team interactions. Drawing
on the theory of effective use, we examined how the combination of touch and speech modalities can
facilitate the effective use of multimodal BI&A systems. In the first cycle of our DSR project, we
proposed three design principles and instantiated them in our artifact. Subsequently, we conducted a
confirmatory focus group evaluation with our industry partner. The results of our evaluation suggest
that the combination of touch and speech for multimodal BI&A systems provides teams with
additional possibilities to interact properly based on the team characteristics and context. However, the
results also illustrate that the adaptivity of the speech interaction and an onboarding phase might
further increase transparent interaction. Therefore, our DSR project provides valuable theoretical
contributions and practical implications that we discuss in the following.
First, our research contributes to the body of design knowledge for multimodal BI&A systems in
particular, and MUIs in general. The results of our evaluation suggest that the effective use of
multimodal BI&A systems in co-located team interactions can be increased by offering touch and
speech modalities on a large interactive display (DP1). This design principle enables team members to
select modalities depending on their preferences and their current tasks, but they also have the ability
to choose another modality if the context changes. Furthermore, the system creates awareness of
possible modalities and provides reactive feedback (DP2), which allows team members to understand
how to properly interact with the system and to spot mistakes in the system’s interpretation (e.g., of
their speech input). This reduces team members’ worry of overlooking possible mistakes of the system and using the wrong information to make decisions. Moreover, all design principles are key to providing
the possibility to conduct ad-hoc analysis during co-located team interactions and to derive insights
that would otherwise be overlooked. Therefore, these design principles can facilitate effective use of
multimodal BI&A systems in co-located team interactions. Taken together, our research shows how
the theory of effective use can be applied to improve the interaction of users with BI&A systems and
advances our understanding of how users interact with MUIs.
Our evaluation also sheds light on additional design issues, which offer valuable starting points for a
further improvement of multimodal BI&A systems. First, one weakness of multimodal BI&A systems
derived in our evaluation indicates that the users need to be able to perform adaptation actions on the
speech interaction itself. If the speech interaction feels unnatural to team members or the system
repeatedly fails to understand their speech input, teams are unlikely to use multimodal BI&A systems.
To provide the system with the capabilities of adapting its speech interaction and to facilitate
transparent interaction, Li et al. (2017) propose to make multimodal systems “instructable”. This
would imply that, if the multimodal BI&A system fails to understand the team’s intention or input, team members are able to provide feedback to the system. More specifically, teams could not only mark their input as interpreted incorrectly but also demonstrate the correct intention to the system using touch, which the system supports due to its multimodal nature. For example, if a user wants to “Filter for the critical customers”, the system would not know what critical customers are. The user can then demonstrate, using touch, that for future cases critical customers have an order volume of higher than 1 million and a remaining contract term of 1 year. Thus,
providing MUIs with the ability to improve their recognition of intentions for a certain modality using
input from another modality could facilitate the effective use of MUIs in general.
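To make this idea concrete, a minimal sketch of such a demonstration-based concept store might look as follows (the data model, names, and thresholds are illustrative assumptions following the example above):

```typescript
// "Instructable" MUI sketch (in the spirit of Li et al., 2017): concepts that
// speech recognition cannot resolve are defined once by touch demonstration
// and reused for future speech interactions.
interface Customer { orderVolume: number; remainingContractYears: number }

const learnedConcepts = new Map<string, (c: Customer) => boolean>();

// Demonstrated via touch: "critical customers" = order volume above 1 million
// and a remaining contract term of 1 year.
learnedConcepts.set("critical customers",
  c => c.orderVolume > 1_000_000 && c.remainingContractYears <= 1);

// A later utterance "Filter for the critical customers" can now be resolved.
function filterForConcept(concept: string, customers: Customer[]): Customer[] {
  const predicate = learnedConcepts.get(concept);
  if (!predicate) {
    throw new Error(`Unknown concept "${concept}": please demonstrate it via touch.`);
  }
  return customers.filter(predicate);
}
```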
Furthermore, our results suggest that an initial onboarding could further facilitate effective use as it
helps teams to learn how to interact properly with the system. Using multimodal BI&A systems during
co-located team interactions allows everyone to interact with the system and contribute equally to the
discussion and derivation of insights. However, this brings new challenges to the moderator of the
discussion and the proper interaction with the system. Therefore, teams should be guided through the
system in an onboarding phase to help them adapt their behavior to the system (e.g., how to formulate
their questions in natural language) and to show them which modalities can be used to obtain which information.
Furthermore, during the use of the multimodal BI&A system, feedback should be provided based on
the current interactions to help teams understand which information is further needed by the system,
where the boundaries of the system are, and what modalities are available. Our reactive feedback
(DP2) already provides feedback to teams on their current interactions. However, it does not provide
explicit suggestions on how to interact with the system and how teams may adapt the multimodal
BI&A systems in accordance with their team characteristics. This reactive feedback could be enhanced
with further inquiries, suggestions, and insights in order to make the interaction between the team and
the multimodal BI&A system not a one-way, but a two-way conversation.
Finally, there are also some limitations of our work that should be considered. First, our multimodal BI&A
system only implemented two modalities: touch and speech. Although they are generally considered to
be important modalities in HCI, future research could evaluate how other modalities (e.g., gaze and
speech) complement each other and can be integrated into multimodal BI&A systems to facilitate
effective use. Second, we instantiated our design principles on a large interactive display. However,
the size, as well as the appearance of the interactive surface, may influence how people interact with
our artifact. Consequently, future research could evaluate the influence of the type of device used for
the provision of the artifact. Finally, we used a confirmatory focus group to perform a qualitative
evaluation of the impact of the software artifact on the facilitation of effective use. Although we argue
that this approach is appropriate given the innovative nature of multimodal BI&A systems, further
research using quantitative evaluation methods is needed. Therefore, a quantitative field-based study
could provide additional insights into the impact of multimodal BI&A systems on their effective use in
co-located team interaction.
6 Conclusion
This paper reports the results of the first cycle of a DSR project focusing on the design of multimodal
BI&A systems for co-located team interactions. Overall, our DSR project contributes design knowledge that can be applied to facilitate the effective use of multimodal BI&A systems in co-located team interactions. In particular, we contribute three design principles for providing teams with a multimodal BI&A system that offers user-defined multimodal interactions as well as feedback and signaling affordances for speech interaction. The design principles were derived based on the theory of effective use, guidelines for the design of MUIs, and empirical insights from an interaction-elicitation study. We instantiated our design principles and developed a running software
artifact based on state-of-the-art technology. Finally, our evaluation of the software artifact in the form
of a confirmatory focus group with an industry partner demonstrates the potential of our proposed
software artifact.
References
Abbasi, A., Sarker, S., & Chiang, R. (2016). Big Data Research in Information Systems: Toward an
Inclusive Research Agenda. Journal of the Association for Information Systems, 17(2), I–XXXII.
Abelló, A., Darmont, J., Etcheverry, L., Golfarelli, M., Mazón, J. N., Naumann, F., Pedersen, T. B.,
Rizzi, S., Trujillo, J., Vassiliadis, P., & Vossen, G. (2013). Fusion cubes: Towards self-service
business intelligence. International Journal of Data Warehousing and Mining, 9(2), 66–88.
Badam, S. K., Amini, F., Elmqvist, N., & Irani, P. (2016). Supporting visual exploration for multiple
users in large display environments. 2016 IEEE Conference on Visual Analytics Science and
Technology (VAST), 1–10.
Badam, S. K., & Elmqvist, N. (2019). Visfer: Camera-based visual data transfer for cross-device
visualization. Information Visualization, 18(1), 68–93.
Berthold, H., Rösch, P., Zöller, S., Wortmann, F., Carenini, A., Campbell, S., Bisson, P., &
Strohmaier, F. (2010). An architecture for ad-hoc and collaborative business intelligence.
Proceedings of the 1st International Workshop on Data Semantics - DataSem ’10, 1.
Bolt, R. A. (1980). “Put-that-there”: Voice and gesture at the graphics interface. Proceedings of the
7th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1980,
262–270.
Burstein, F., Holsapple, C. W., Carlsson, S. A., & El Sawy, O. A. (2008). Decision Support in Turbulent and High-Velocity Environments. In Handbook on Decision Support Systems 2 (pp. 3–17). Springer Berlin Heidelberg.
Burton-Jones, A., & Grange, C. (2013). From Use to Effective Use: A Representation Theory Perspective. Information Systems Research, 24(3), 632–658.
Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business Intelligence and Analytics: From Big Data to Big Impact. MIS Quarterly, 36(4), 1165–1188.
Chung, H., North, C., Self, J. Z., Chu, S., & Quek, F. (2014). VisPorter: Facilitating information sharing for collaborative sensemaking on multiple displays. Personal and Ubiquitous Computing, 18(5), 1169–1186.
Dayal, U., Vennelakanti, R., Sharma, R., Castellanos, M., Hao, M., & Patel, C. (2008). Collaborative business intelligence: Enabling collaborative decision making in enterprises. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Vol. 5331 LNCS (pp. 8–25).
Deng, L., Wang, Y., Wang, K., Acero, A., Hon, H., Droppo, J., Boulis, C., Mahajan, M., & Huang, X. D. (2004). Speech and Language Processing for Multimodal Human-Computer Interaction. The Journal of VLSI Signal Processing-Systems for Signal, Image, and Video Technology, 36(2/3), 161–187.
Dennis, A. R. (1996). Information Exchange and Use in Group Decision Making: You Can Lead a Group to Information, but You Can’t Make It Think. MIS Quarterly, 20(4), 433–457.
Dennis, A. R., Wixom, B. H., & Vandenberg, R. J. (2001). Understanding Fit and Appropriation Effects in Group Support Systems via Meta-Analysis. MIS Quarterly, 25(2), 167–193.
Frohlich, D. M. (1993). The history and future of direct manipulation. Behaviour and Information Technology, 12(6), 315–329.
Gnewuch, U., Morana, S., & Maedche, A. (2018). Towards Designing Cooperative and Social Conversational Agents for Customer Service. Proceedings of the 38th International Conference on Information Systems (ICIS 2017), Seoul, South Korea, 0–11.
Gregor, S., & Hevner, A. R. (2013). Positioning and Presenting Design Science Research for Maximum Impact. MIS Quarterly, 37(2), 337–355.
Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312–335.
Hevner, A. R. (2007). A Three Cycle View of Design Science Research. Scandinavian Journal of Information Systems, 19(2), 87–92.
Isenberg, P., Fisher, D., Paul, S. A., Morris, M. R., Inkpen, K., & Czerwinski, M. (2012). Co-Located Collaborative Visual Analytics around a Tabletop Display. IEEE Transactions on Visualization and Computer Graphics, 18(5), 689–702.
Jetter, H.-C., Gerken, J., Zöllner, M., Reiterer, H., & Milic-Frayling, N. (2011). Materializing the query with facet-streams. Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems - CHI ’11, 3013–3022.
Kaufmann, J., & Chamoni, P. (2014). Structuring collaborative business intelligence: A literature review. Proceedings of the Annual Hawaii International Conference on System Sciences, 3738–3747.
Kuechler, B., & Vaishnavi, V. (2008). On theory development in design science research: anatomy of a research project. European Journal of Information Systems, 17(5), 489–504.
Langner, R., Horak, T., & Dachselt, R. (2018). VisTiles: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration. IEEE Transactions on Visualization and Computer Graphics, 24(1), 626–636.
Langner, R., Kister, U., & Dachselt, R. (2019). Multiple Coordinated Views at Large Displays for Multiple Users. IEEE Transactions on Visualization and Computer Graphics, 25(1), 608–618.
Larson, D., & Chang, V. (2016). A review and future direction of agile, business intelligence, analytics and data science. International Journal of Information Management, 36(5), 700–710.
Lee, B., Isenberg, P., Riche, N. H., & Carpendale, S. (2012). Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2689–2698.
Lee, B., Smith, G., Riche, N. H., Karlson, A., & Carpendale, S. (2015). SketchInsight: Natural data exploration on interactive whiteboards leveraging pen and touch interaction. 2015 IEEE Pacific Visualization Symposium (PacificVis), 199–206.
Li, T. J.-J., Azaria, A., & Myers, B. A. (2017). SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 6038–6049.
Majchrzak, A., More, P. H. B., & Faraj, S. (2012). Transcending Knowledge Differences in Cross-Functional Teams. Organization Science, 23(4), 951–970.
McTear, M. F. (2017). The Rise of the Conversational Interface: A New Kid on the Block? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Vol. 10341 LNAI (pp. 38–49). Springer Verlag.
Morris, M. R. (2012). Web on the wall: Insights from a multimodal interaction elicitation study. Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces - ITS ’12, 95–104.
Nguyen, H., Ketchell, S., Engelke, U., Thomas, B. H., & de Souza, P. (2017). Augmented Reality Based Bee Drift Analysis: A User Study. 2017 International Symposium on Big Data Visual Analytics (BDVA), 1–8.
Norman, D. (2016). The Design of Everyday Things. Vahlen.
Nunamaker, J. F., & Deokar, A. V. (2008). GDSS Parameters and Benefits. In Handbook on Decision Support Systems 1 (pp. 391–414). Springer Berlin Heidelberg.
Oviatt, S. (1999). Ten myths of multimodal interaction. Communications of the ACM, 42(11), 74–81.
Oviatt, S. (2003). Advances in robust multimodal interface design. IEEE Computer Graphics and Applications, 23(5), 62–68.
Pitt, L., Berthon, P., & Robson, K. (2011). Deciding When to Use Tablets for Business Applications.
MIS Quarterly Executive, 10(3).
Reeves, L. M., Lai, J., Larson, J. A., Oviatt, S., Balaji, T. S., Buisine, S., Collings, P., Cohen, P., Kraal, B., Martin, J. C., McTear, M., Raman, T. V., Stanney, K. M., Su, H., & Wang, Q. Y. (2004). Guidelines for multimodal user interface design. Communications of the ACM, 47(1), 57–59.
Roberts, J. C., Ritsos, P. D., Badam, S. K., Brodbeck, D., Kennedy, J., & Elmqvist, N. (2014). Visualization beyond the desktop - the next big thing. IEEE Computer Graphics and Applications, 34(6), 26–34.
Ruoff, M., Gnewuch, U., & Maedche, A. (2020). Designing Multimodal BI&A Systems for Face-to-
Face Team Interactions. SIGHCI 2020 Proceedings.
Ruoff, M., & Maedche, A. (2020). Towards Understanding Multimodal Interaction for Visual Data
Analysis. Posters of IEEE Visualization, Oct 2020, Salt Lake City, United States.
Saktheeswaran, A., Srinivasan, A., & Stasko, J. (2020). Touch? Speech? or Touch and Speech? Investigating Multimodal Interaction for Visual Network Exploration and Analysis. IEEE Transactions on Visualization and Computer Graphics, 26(6), 2168–2179.
Schmidt, J. B., Montoya-Weiss, M. M., & Massey, A. P. (2001). New Product Development Decision-Making Effectiveness: Comparing Individuals, Face-To-Face Teams, and Virtual Teams. Decision Sciences, 32(4), 575–600.
Seddon, P. B. (1997). A Respecification and Extension of the DeLone and McLean Model of IS Success. Information Systems Research, 8(3), 240–253.
Srinivasan, A., Lee, B., Henry Riche, N., Drucker, S. M., & Hinckley, K. (2020). InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.
Sundar, S. S., Jia, H., Waddell, T. F., & Huang, Y. (2015). Toward a Theory of Interactive Media Effects (TIME). In The Handbook of the Psychology of Communication Technology (pp. 47–86). Wiley.
Surbakti, F. P. S., Wang, W., Indulska, M., & Sadiq, S. (2020). Factors influencing effective use of big
data: A research framework. Information and Management, 57(1), 103146.
Tremblay, M. C., Hevner, A. R., & Berndt, D. J. (2010). Focus Groups for Artifact Refinement and Evaluation in Design Research. Communications of the Association for Information Systems, 26, 599–618.
Turk, M. (2014). Multimodal interaction: A review. Pattern Recognition Letters, 36, 189–195.
Vitalari, N. P. (1985). Knowledge as a basis for expertise in systems analysis: An empirical study. MIS Quarterly, 9(3), 221–240.
Yi, J. S., Kang, Y. A., & Stasko, J. (2007). Toward a Deeper Understanding of the Role of Interaction in Information Visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1224–1231.
Yigitbasioglu, O. M., & Velcu, O. (2012). A review of dashboards in performance management: Implications for design and research. International Journal of Accounting Information Systems, 13(1), 41–59.
Zhang, X. (2017). Knowledge Management System Use and Job Performance: A Multilevel Contingency Model. MIS Quarterly, 41(3), 811–840.