Please quote as: Söllner, M.; Hoffmann, A.; Hoffmann, H. & Leimeister, J. M. (2012):
How to Use Behavioral Research Insights on Trust for HCI System Design. In: ACM
SIGCHI Conference on Human Factors in Computing Systems (CHI), Austin, Texas,
USA.
How to Use Behavioral Research Insights on
Trust for HCI System Design
We present a way to systematically derive trust-supporting design elements (TSDEs) using trust theory. Trust is the belief “that an agent will help achieve an individual’s goal in a situation characterized by uncertainty and vulnerability” [8].
Approach for Systematically Deriving TSDEs
1. Identify & prioritize uncertainties → 2. Select suitable antecedents from theory → 3. Translate into functional requirements → 4. Derive TSDEs
Insights on Trust
[Figure: Trust is a latent construct formed by three dimensions – performance, process, and purpose – which are in turn formed by the antecedents competence, information accuracy, reliability over time, responsibility, dependability, understandability, control, predictability, motives, benevolence, and faith.]
Dinner Now – A Restaurant Recommendation System
Uncertainty → antecedent → functional requirement:
• Quality of the recommendation → understandability → the user should be able to receive information regarding the degree to which the configured preferences were considered.
• Loss of control over Dinner Now → control → the user should be able to access the available restaurants and select a restaurant on his own using different selection criteria.
• Reliability of the user ratings → information accuracy → the user should be able to explicitly rely on ratings of friends.
[Screenshots: original version vs. redesign with TSDEs]
Using the Approach for Dinner Now
Matthias Söllner, Axel Hoffmann, Holger Hoffmann, Jan Marco Leimeister
Information Systems, Kassel University, Germany
Evaluation
166 undergraduate students were split into two groups, and each group evaluated one version of the prototype. The TSDEs we derived from theory using the approach were regarded as important by the participants. We observed a significant increase in the mean scores of users’ trust in Dinner Now as well as in their intention to use it in the future.
Motivation
• Trust has been shown to be a major antecedent of technology acceptance and usage.
• Behavioral research has created a vast amount of insights on trust building.
• Only a small fraction of the existing literature also shows ways of systematically including these insights into system design.
• The potential of most behavioral insights on trust for developing new systems thus often remains only partly realized.
• Behavioral research insights on trust can be systematically integrated into system design.
VENUS is a research cluster at the interdisciplinary Research Center for Information Systems Design (ITeG) at Kassel University. We thank
Hesse’s Ministry of Higher Education, Research, and the Arts for funding the project as part of the research funding program “LOEWE – Landes-
Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz”. For further information, please visit: http://www.iteg.uni-kassel.de/venus.
* Trust research has shown that formal pictures of the authors increase readers’/viewers’ trust in their research. Do they?
How to Use Behavioral Research
Insights on Trust for HCI System
Design
Abstract
Trust has been shown to be a major antecedent of
technology acceptance and usage. Consequently,
behavioral research has created vast insights on trust
building. However, only a small fraction of the existing
literature also shows ways of systematically including
these insights into system design. Hence, the potential
of most behavioral insights on trust for developing new
systems often remains only partly realized. To alleviate
this problem, we present a way to systematically derive
trust-supporting design elements using trust theory.
Using a laboratory experiment, we show that the trust-
related design elements derived from theory are
regarded as being important by the participants, and
significantly increased their trust in a restaurant
recommendation system as well as in their intention to
use it in the future.
Author Keywords
Theory; trust; technology acceptance and usage;
system design; laboratory experiment
ACM Classification Keywords
H.5.m [Information Interfaces and Presentation (e.g.,
HCI)];
Copyright is held by the author/owner(s).
CHI’12, May 5–10, 2012, Austin, Texas, USA.
ACM 978-1-4503-1016-1/12/05.
Matthias Söllner
Kassel University
Information Systems
Nora-Platiel-Str. 4
Kassel, D-34127, Germany
soellner@uni-kassel.de
Axel Hoffmann
Kassel University
Information Systems
Nora-Platiel-Str. 4
Kassel, D-34127, Germany
axel.hoffmann@uni-kassel.de
Holger Hoffmann
Kassel University
Information Systems
Nora-Platiel-Str. 4
Kassel, D-34127, Germany
hoffmann@uni-kassel.de
Jan Marco Leimeister
Kassel University
Information Systems
Nora-Platiel-Str. 4
Kassel, D-34127, Germany
leimeister@uni-kassel.de
Introduction
Even in technology-oriented fields, such as Information
Systems (IS) and Human Computer Interaction (HCI),
a significant number of empirical papers focus on
understanding human behavior [11]. The reason is that
researchers have discovered the importance of
understanding human behavior for designing better
systems. One heavily researched topic in this regard is
“trust,” as can be seen in several special issues of
major journals in IS [1, 2] and HCI [3]. Despite the fact
that the synergetic potential of behavioral and design
research has been emphasized [6], very little of the
literature addresses the issue of how to use the insights
created by behavioral research on trust to
systematically design more trustworthy systems.
To address this weakness, we present an approach to
systematically derive trust-supporting design elements
(TSDE) from theory on trust in automation [8]. We
then illustrate its application to a restaurant
recommendation system and evaluate the effects of the TSDEs in a laboratory experiment with 166 participants.
The interplay between behavioral research
and design research
In general, there exist two complementary types of
research: behavioral research and design research [6].
Behavioral research develops and justifies theories
explaining or predicting phenomena relevant for an
identified need. It aims at discovering “truth.” Design
research builds and evaluates artifacts, which are
designed to meet an identified need. It aims at creating
utility. According to Hevner et al. [6], truth and utility
can hardly be separated. For example, researchers may
discover surprising utility in an artifact simply because
a truth has not yet been discovered. On the other hand,
artifacts may lack utility because a previously
discovered truth was not considered when designing
the artifact. In this work-in-progress paper, we focus
on the latter case, arguing that several valuable
insights from behavioral research on trust are not
systematically considered during system design. Thus,
the utility of a system is often lower than is the utility
that could be achieved if behavioral research insights
on trust had been systematically considered right from
the beginning.
Behavioral research insights on trust
Research on technology acceptance shows that trust is
a key determinant of technology adoption and usage
[5]. Since the early 1990s, a stream of HCI research
has focused on trust in automation. According to Lee
and See [8], automation is defined as “technology that
actively selects data, transforms information, makes
decisions, or controls processes” (p. 50) – a definition
that fits well with most recently designed systems.
Regarding trust, we adopt the definition of Lee and See
[8], and define trust as the belief “that an agent will
help achieve an individual’s goal in a situation
characterized by uncertainty and vulnerability” (p. 51).
In behavioral literature, trust is interpreted as being a
multi-dimensional latent construct [7]. Consequently,
research on trust in automation shares this view and
identifies three dimensions forming a user’s trust in
automated systems: performance, process, and
purpose. The performance dimension reflects the
capability of the system in helping the user to achieve
his goals, the process dimension reflects the user’s
perception regarding the degree to which the system’s
algorithms are appropriate, and the purpose dimension
reflects the user’s perception of the intentions that the
designers of the system have.
Each of the three dimensions is formed by a number of
different antecedents [8, 9]. We focus on describing the
antecedents that will be used later for deriving TSDEs
for a restaurant recommendation system. A detailed
description of the remaining antecedents can be found
in [12]. The antecedents addressed in this work-in-progress paper are: understandability – covering how well the user is able to understand how the system works; control – dealing with the degree to which the user feels he has the system under control; and information accuracy – focusing on whether the information provided by
the system is accurate. Figure 1 includes the
dimensions and the complete set of antecedents of
trust in an automated system, referring to [12].
[Figure: trust as a latent construct formed by the dimensions performance, process, and purpose, which are in turn formed by the antecedents competence, information accuracy, reliability over time, responsibility, dependability, understandability, control, predictability, motives, benevolence, and faith.]
Figure 1. The formation of trust in automated systems [12].
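Because trust is interpreted here as a formative latent construct [7, 12], the structure in Figure 1 can also be written as a generic formative measurement model. The following LaTeX sketch uses illustrative notation only (the weights w, β and error terms δ, ζ are not estimates from this paper):

```latex
% Generic formative specification of trust (illustrative notation only):
% each dimension \eta_j is formed by its antecedents x_{ji},
% and trust \eta_T is formed by the three dimensions.
\eta_j = \sum_i w_{ji}\, x_{ji} + \delta_j, \qquad j \in \{\text{perf}, \text{proc}, \text{purp}\}
\eta_T = \beta_1 \eta_{\text{perf}} + \beta_2 \eta_{\text{proc}} + \beta_3 \eta_{\text{purp}} + \zeta
```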
Using behavioral research insights on trust
to systematically derive TSDEs
In order to systematically derive TSDEs from theory on
trust in automation, we developed an approach
consisting of four steps (see Figure 2).
[Figure: the four steps – (1) identify & prioritize uncertainties, (2) select suitable antecedents from theory, (3) translate into functional requirements, (4) derive TSDEs.]
Figure 2. Approach for systematically deriving TSDEs.
As we know from the definition of trust, trust is only
important in situations characterized by uncertainty.
Thus, the uncertainties that the user faces when using a particular system need to be identified first and then prioritized based on their threat to successful user adoption of the system. The prioritization is necessary, since every uncertainty that is to be countered leads to additional development effort, and thus costs. Based on the given constraints (budget, time, etc.), the number of uncertainties to be countered needs to be defined, and suitable antecedents of trust for countering these uncertainties need to be identified
from theory. As is known from requirements
engineering, the antecedents of a latent construct can
be interpreted as under-specified functional
requirements [10]. Thus, when these requirements are
considered during system design, they need to be
translated into functional requirements. These
functional requirements, in turn, will later be included
in the software engineering approach of choice, and
ultimately lead to the desired TSDEs. To further illustrate the approach, we apply it to derive TSDEs for a restaurant recommendation system.
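As a rough sketch of how the four steps could be operationalized, the following Python fragment models the derivation chain from uncertainties to TSDEs. All names, threat scores, and the budget parameter are hypothetical illustrations; the paper prescribes the process, not an implementation:

```python
from dataclasses import dataclass

@dataclass
class Uncertainty:
    description: str
    threat: int  # hypothetical prioritization score: threat to successful user adoption

@dataclass
class Antecedent:
    name: str  # antecedent of trust taken from theory, e.g. "understandability" [8, 12]

@dataclass
class TSDE:
    requirement: str  # functional requirement the design element realizes

def derive_tsdes(uncertainties, select_antecedent, translate, budget):
    """Steps 1-4: prioritize uncertainties, select antecedents from theory,
    translate them into functional requirements, and derive TSDEs."""
    # Step 1: prioritize by threat; the budget caps how many uncertainties are countered.
    prioritized = sorted(uncertainties, key=lambda u: u.threat, reverse=True)[:budget]
    tsdes = []
    for u in prioritized:
        antecedent = select_antecedent(u)       # Step 2: pick a suitable antecedent
        requirement = translate(u, antecedent)  # Step 3: concretize into a functional requirement
        tsdes.append(TSDE(requirement))         # Step 4: the requirement enters the SE process as a TSDE
    return tsdes

# Hypothetical usage mirroring the Dinner Now example:
uncertainties = [
    Uncertainty("quality of the recommendation", threat=3),
    Uncertainty("loss of control over Dinner Now", threat=2),
    Uncertainty("reliability of the user ratings", threat=1),
]
antecedent_for = {
    "quality of the recommendation": "understandability",
    "loss of control over Dinner Now": "control",
    "reliability of the user ratings": "information accuracy",
}
tsdes = derive_tsdes(
    uncertainties,
    select_antecedent=lambda u: Antecedent(antecedent_for[u.description]),
    translate=lambda u, a: f"Counter '{u.description}' via {a.name}",
    budget=3,
)
```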
Dinner Now – a restaurant recommendation
system
To show how to systematically include behavioral
research insights on trust into system design, we
developed an improved version of an existing prototype
of a context sensitive, self-adaptive restaurant
recommendation system, called “Dinner Now.”
Compared to the existing version, we changed only
design elements related to trust theory in order to limit
the observed effects during the evaluation to the
presence of the derived TSDEs.
Dinner Now allows a user to find the best restaurant for
himself and his company, based upon their preferences
and current location. The user’s and his company’s
preferences regarding the ethnicity of the restaurant
(style of food), the ambience, and previous experience
can be included in the recommendation generation
process, as well as user ratings found on the Internet.
After the user has selected the data to be included and
started the search, the most suitable restaurant is
presented. On the restaurant screen, the user can call the restaurant, e.g., to request a reservation, or switch to a map that shows the shortest route from his current position to the restaurant. Alternatively, the user can generate a new
recommendation if he is not satisfied with the current
one.
Systematically deriving TSDEs for Dinner
Now
Following our approach, we first identified the
uncertainties the user is confronted with in different
situations during the interaction process with the
system. For the most important uncertainties (quality
of the recommendation, loss of control over Dinner
Now, and reliability of the user ratings) identified by
test-user prioritization, we selected one antecedent to
counter each uncertainty: understandability, control,
and information accuracy. Limiting the selection in this way is necessary, as every requirement considered in system design increases development costs. We hence decided to reduce the
number of under-specified functional requirements that
would be translated into functional requirements in
order to obtain a scenario that is economically sensible.
Concretizing these antecedents resulted in the following
functional requirements: Understandability – R1) after
getting the recommendation, the user should be able to
receive information regarding the degree to which the
configured preferences were considered. Control – R2)
after getting the recommendation, the user should be
able to access the available restaurants and select a
restaurant on his own using different selection criteria.
Information accuracy – R3) the user should be able to explicitly rely on ratings of friends before a recommendation is generated. R4) the user should be able to rely on ratings of friends for assessing the quality of a presented recommendation. The last two
requirements (R3 and R4) are based on the insight that
people tend to trust their friends the most [4]. Thus,
they should perceive ratings from their friends as being
more accurate than those from anonymous users. The
set of four requirements was used as input to a standard software engineering process. Figure 3
illustrates the approach for systematically deriving
TSDEs using Dinner Now and the uncertainty regarding
the quality of recommendation as an example.
Altogether, using the approach, we were able to derive
the four TSDEs highlighted in Figure 4.
[Figure: worked example – the uncertainty regarding the quality of the recommendation is countered by the antecedent understandability, which is translated into requirement R1: after getting the recommendation, the user should be able to receive information regarding the degree to which the configured preferences were considered.]
Figure 3. Example of the outcomes of each of the four steps of the approach for deriving TSDEs.
[Figure: two screens of Dinner Now with the four derived TSDEs (TSDE 1–4) highlighted.]
Figure 4. Two screens of “Dinner Now” including the TSDEs.
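To make R1 more concrete, here is a minimal sketch of how a recommender could report the degree to which the configured preferences were considered. The scoring scheme and field names are hypothetical; the paper does not disclose Dinner Now's internals:

```python
# Hypothetical sketch for R1: report how far each configured preference
# was reflected in the recommended restaurant (not Dinner Now's actual code).
def preference_match(preferences: dict, restaurant: dict) -> dict:
    """Return a per-preference match report plus an overall score in [0, 1]."""
    report = {key: restaurant.get(key) == wanted for key, wanted in preferences.items()}
    report["overall"] = sum(report.values()) / len(preferences) if preferences else 1.0
    return report

prefs = {"ethnicity": "italian", "ambience": "casual"}
restaurant = {"name": "Trattoria Roma", "ethnicity": "italian", "ambience": "formal"}
print(preference_match(prefs, restaurant))
# -> {'ethnicity': True, 'ambience': False, 'overall': 0.5}
```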
Study Design
To investigate the effects of the TSDEs implemented in
Dinner Now, we recruited 166 undergraduate students
(85 female, 81 male, mean age 24) to evaluate two
versions (with and without the TSDEs) of the system
using a between-subjects laboratory experiment. Each
participant received a ten-minute introduction: an explanation of the idea and of how to operate the system. They were then given several tasks to fulfill using the system, ensuring that the participants experienced the full functionality of Dinner Now. This took the participants between 15 and 20 minutes.
Afterwards, they were asked to fill out a questionnaire
capturing the measures necessary for the evaluation
(we used a bipolar 7-point Likert response format
ranging from strongly disagree to strongly agree). The
items were adapted from the literature. After consistency
checks, we included 143 questionnaires (68 referring to
the system with the TSDEs) into the evaluation.
Results and Discussion
In the questionnaire, we asked participants whether
they missed (version without TSDEs) or especially
appreciated (version with TSDEs) the features reflecting
the implemented TSDEs. The mean values that the
groups reported regarding the importance of the TSDEs
ranged from 5.41 to 6.01 (standard deviations ranged
from 1.16 to 1.57). The results show that both groups regard all four TSDEs as important (lowest mean value: 5.41). Thus, using trust theory, we were able
to derive four design elements for Dinner Now, which
were regarded as being important by the participants.
The second question we intended to answer was
whether this result is also reflected in the values for
trust and intention to use the system in the future, as
indicated by the participants. Using a t-test in SPSS 20, we are able to show that the means of both the
participants’ trust and their intention to use the system
in the future are significantly higher in the group that
evaluated the system with TSDEs. The mean value for
trust increased from 4.81 to 5.11 (p < 0.075), and the
mean value for intention to use increased from 4.88 to
5.39 (p < 0.01).
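For readers who want to run this kind of between-subjects comparison outside SPSS, a minimal Python sketch of an independent-samples t-test follows. The scores below are synthetic placeholders seeded to the reported group sizes and means, not the study's actual responses:

```python
# Illustrative replication of the between-subjects comparison; the data are
# synthetic (group sizes 75/68 and means 4.81/5.11 taken from the text, SDs assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trust_without_tsdes = rng.normal(loc=4.81, scale=1.2, size=75)
trust_with_tsdes = rng.normal(loc=5.11, scale=1.2, size=68)

t, p = stats.ttest_ind(trust_with_tsdes, trust_without_tsdes)
print(f"t = {t:.2f}, p = {p:.3f}")
```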
Thus, the comparison shows that the high importance of the TSDEs we derived from theory is confirmed by the participants. Furthermore, the TSDEs designed to improve trust and intention to use the system in the future resulted in a significant rise in both values. The
results of the evaluation show that our approach is
suitable for deriving specific design elements from
behavioral research insights on trust that increase
users’ trust in the system and lead to a higher chance
of the system being adopted and used by potential
users.
Conclusion and Next Steps
The objective of this work-in-progress paper is to show that behavioral research insights on trust can be systematically integrated into system design. We present an approach to systematically derive TSDEs, apply it to redesign a restaurant recommendation system, and then evaluate the effects in a laboratory experiment. We show that our approach is feasible, and the results of this first evaluation show that the systematically derived TSDEs for the restaurant recommendation system led to design elements that were regarded as important by participants and increased their trust in the system as well as their intention to use it in the future.
Nevertheless, more research is necessary to reliably
prove the value of systematically integrating behavioral
research insights on trust into system design. First,
although we were able to show that the approach
works for one specific recommender system, we need
to investigate whether the observed results hold across
different recommender systems, as well as other
classes of systems. Second, we evaluated the effects of
the TSDEs in only one usage setting, and need to
investigate whether the observed effects hold across
different laboratory settings, as well as across other
types of studies (e.g., field studies). Third, we
evaluated the effects in only a single point in time,
which was right after the participants’ first usage
experience. Since trust building is a dynamic process,
we need to investigate whether the observed effects
hold over time. Finally, the current results of the
evaluation were limited to the population of
undergraduate students, and thus we need to
investigate whether the observed results hold across
different populations.
References
[1] Benbasat, I., D. Gefen, and P.A. Pavlou, Special
Issue: Trust in Online Environments. Journal of
Management Information Systems 24, 4 (2008), 5-11.
[2] Benbasat, I., D. Gefen, and P.A. Pavlou,
Introduction to the Special Issue on Novel Perspectives
on Trust in Information Systems. MIS Quarterly 34, 2
(2010), 367-371.
[3] Corritore, C.L., B. Kracher, and S. Wiedenbeck,
Editorial. International Journal of Human-Computer
Studies 58, 6 (2003), 633-635.
[4] Forrester Research, North American
Technographics Media and Marketing Online Survey.
2009, Forrester Research, Inc.
[5] Gefen, D., E. Karahanna, and D.W. Straub, Trust
and TAM in Online Shopping: An Integrated Model. MIS
Quarterly 27, 1 (2003), 51-90.
[6] Hevner, A.R., S.T. March, P. Jinsoo, and S. Ram,
Design Science in Information Systems Research. MIS
Quarterly 28, 1 (2004), 75-105.
[7] Jarvis, C.B., S.B. Mackenzie, and P.M. Podsakoff, A
Critical Review of Construct Indicators and
Measurement Model Misspecification in Marketing and
Consumer Research. Journal of Consumer Research 30,
2 (2003), 199-218.
[8] Lee, J.D. and K.A. See, Trust in Automation:
Designing for Appropriate Reliance. Human Factors 46,
1 (2004), 50-80.
[9] Muir, B.M., Trust in automation: Part I. Ergonomics 37, 11 (1994), 1905-1922.
[10] Pohl, K., Requirements Engineering. dpunkt.verlag, Heidelberg, 2008.
[11] Sidorova, A., N. Evangelopoulos, J.S. Valacich, and
T. Ramakrishnan, Uncovering the Intellectual Core of
the Information Systems Discipline. MIS Quarterly 32,
3 (2008), 467-A20.
[12] Söllner, M., A. Hoffmann, H. Hoffmann, and J.M. Leimeister, Towards a Theory of Explanation and Prediction for the Formation of Trust in IT Artifacts. In SIGHCI 2011 Proceedings (2011), Paper 6.