Preprint

Designing Trustworthy User Interfaces

Abstract

Interface design can directly influence the perceived trustworthiness of software and thereby affects users' intention to use a tool. However, previous research on user trust has not comprehensively addressed user interface design. We lack (1) an understanding of what makes interfaces trustworthy and (2) actionable measures to improve trustworthiness. We contribute to closing both gaps. Based on a systematic literature review, we give a thorough overview of the theory on user trust and provide a taxonomy of factors influencing user interface trustworthiness. We then derive concrete measures to address these factors in interface design and use the results to create a proof-of-concept interface. In a preliminary evaluation, we compare a variant designed to elicit trust with one designed to reduce it. Our results show that the measures we apply can be effective in fostering user trust.

References
Article
Full-text available
Visualizing data through graphs can be an effective way to communicate one's results. A ubiquitous graph and common technique to communicate behavioral data is the bar graph. The bar graph was first invented in 1786 and little has changed in its format. Here, a replacement for the bar graph is proposed. The new format, called a hat graph, maintains some of the critical features of the bar graph such as its discrete elements, but eliminates redundancies that are problematic when the baseline is not at zero. Hat graphs also include design elements based on Gestalt principles of grouping and graph design principles. The effectiveness of the hat graph was tested in five empirical studies. Participants were nearly 40% faster to find and identify the condition that led to the biggest difference from baseline to final test when the data were plotted with hat graphs than with bar graphs. Participants were also more sensitive to the magnitude of an effect plotted with a hat graph compared with a bar graph that was restricted to having its baseline at zero. The recommendation is to use hat graphs when plotting data from discrete categories.
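The hat graph itself is specified in the cited article; as a rough, hypothetical approximation of the underlying idea (plotting discrete categories against a shared, non-zero baseline), the following matplotlib sketch marks each condition's baseline and final value as horizontal segments joined by risers. All numbers are invented and the exact published design may differ.

```python
# Hypothetical sketch: discrete baseline-vs-final plot with a non-zero baseline.
# An approximation of the idea described above, not the published hat-graph spec.
import matplotlib.pyplot as plt

conditions = ["A", "B", "C"]
baseline = [62, 64, 61]   # made-up baseline scores
final = [68, 75, 66]      # made-up final-test scores

fig, ax = plt.subplots(figsize=(5, 3))
for i, (b, f) in enumerate(zip(baseline, final)):
    ax.hlines(b, i - 0.3, i + 0.3, color="grey", lw=2)        # baseline segment
    ax.hlines(f, i - 0.3, i + 0.3, color="black", lw=3)       # final segment ("hat")
    ax.vlines([i - 0.3, i + 0.3], b, f, color="black", lw=1)  # risers showing the change

ax.set_xticks(range(len(conditions)))
ax.set_xticklabels(conditions)
ax.set_ylabel("Score")
ax.set_ylim(55, 80)  # baseline deliberately not at zero
plt.tight_layout()
plt.show()
```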
Article
Full-text available
The Internet of Things (IoT) refers to the network of devices which contain electronics, sensors or software that enables them to connect at any time and anywhere through a cyber-physical system. Before the establishment of such a system, it should be considered to what extent the users are ready to adopt and use it in their daily routines. Therefore, this paper explores users' attitudes towards using IoT technologies to receive healthcare services. This is in contrast to most previous research, which has studied the technical requirements or devices of the IoT that are required in healthcare services, or ways in which connectivity and performance can be improved using the IoT. Based on known models of technology acceptance, an integrated framework was developed to investigate the impact of security and privacy concerns, and familiarity with the technology, on users' trust in the IoT, and then to measure the effect of that trust on Omani users' attitudes regarding use of IoT technologies to receive healthcare services. This framework enabled the measurement of risk perception as a mediator between user trust and their attitudes towards using the IoT. Data were collected from 387 respondents and were analysed using SPSS 25 and AMOS 25 statistics software. Exploratory and confirmatory analysis and structural equation modelling were applied. The findings showed that levels of security, privacy and familiarity affected trust in the IoT. Furthermore, these levels of trust in the IoT were found to affect both users' perceptions of risk in, and their attitude towards, using the IoT. The users' risk perception partially mediated the relations between users' trust and their attitude regarding use of the IoT. The framework was supported and explained 40 per cent of the variance in the attitude towards using the IoT in healthcare, while the model including the mediator explained 47 per cent of the variance in the attitude towards using the IoT in healthcare.
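The abstract describes risk perception as a partial mediator between trust and attitude. As a minimal sketch of that kind of mediation test (plain regressions with a bootstrapped indirect effect rather than the AMOS structural equation model used in the study; the variable names and simulated data are illustrative assumptions):

```python
# Minimal mediation sketch: trust -> risk perception -> attitude.
# Illustrative only; the cited study used SPSS/AMOS and full SEM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 387                                  # sample size reported in the abstract
trust = rng.normal(size=n)
risk = -0.5 * trust + rng.normal(scale=0.8, size=n)          # simulated mediator
attitude = 0.4 * trust - 0.3 * risk + rng.normal(scale=0.8, size=n)

def fit(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

a = fit(risk, trust).params[1]                          # trust -> risk
model_b = fit(attitude, np.column_stack([trust, risk]))
b = model_b.params[2]                                   # risk -> attitude (controlling for trust)
direct = model_b.params[1]                              # direct effect of trust
indirect = a * b                                        # mediated (indirect) effect

# Simple percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_i = fit(risk[idx], trust[idx]).params[1]
    b_i = fit(attitude[idx], np.column_stack([trust[idx], risk[idx]])).params[2]
    boot.append(a_i * b_i)
ci = np.percentile(boot, [2.5, 97.5])
print(f"direct={direct:.3f}, indirect={indirect:.3f}, 95% CI {ci.round(3)}")
```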
Article
Full-text available
Trust in automation has become a topic of intensive study over the past two decades. While the earliest trust experiments involved human interventions to correct failures/errors in automated control systems, a majority of subsequent studies have investigated information acquisition and analysis decision-aiding tasks, such as target detection, for which automation reliability is more easily manipulated. Despite the high level of international dependence on automation in industry and transport, almost all current studies have employed Western samples, primarily from the US. The present study addresses these gaps by running a large-sample experiment in three diverse cultures (US, Taiwan, and Turkey) using a 'trust-sensitive task' consisting of both automated control and target detection subtasks. This paper presents results for the target detection subtask, for which reliability and task load were manipulated. The current experiments allow us to determine whether reported effects are universal or specific to Western culture, vary in baseline or magnitude, or differ across cultures. Results generally confirm consistent effects of manipulations across the three cultures as well as cultural differences in initial trust and variation in effects of manipulations consistent with 10 cultural hypotheses based on Hofstede's Cultural Dimensions and Leung and Cohen's theory of Cultural Syndromes. These results provide critical implications and insights for enhancing human trust in intelligent automation systems across cultures. Our paper presents the following contributions: First, to the best of our knowledge, this is the first set of studies that deals with cultural factors across all the cultural syndromes identified in the literature by comparing trust in Honor, Face, and Dignity cultures. Second, this is the first set of studies that uses a validated cross-cultural trust measure for measuring trust in automation. Third, our experiments are the first to study the dynamics of trust across cultures.
Article
Full-text available
This study examined the utility of the concept of expressive aesthetics by testing websites that did or did not match this concept. A website scoring highly on this concept was created and was then compared to websites that were either non-aesthetic or corresponded to the concept of classical aesthetics. Sixty website users of a broad age range (18–60 years) were allocated to three experimental groups (expressive, classical, and non-aesthetic) and asked to complete a series of information search tasks. During the experiment, measures were taken of performance, perceived usability, perceived aesthetics, emotion, and trustworthiness. The results showed that expressive aesthetics can be considered a distinct concept. It also emerged that the website scoring high on expressive aesthetics shows a similar pattern of results to classical aesthetics. Both aesthetically appealing websites received higher ratings of perceived usability and trustworthiness than the non-aesthetic website. The effects of website aesthetics on subjective measures were not moderated by age.
Technical Report
Full-text available
The purpose of this document is to describe the development, evaluation, validation and potential use of a measure of Air Traffic Control (ATC) trust. The measure, named ‘SATI’ for ‘SHAPE ATM Trust Index’, is primarily concerned with human trust of ATC computer-assistance tools and other forms of automation support, which are expected to be major components of future Air Traffic Management (ATM) systems.
Article
Full-text available
In this study, we attempt to evaluate the user preferences for web design attributes (i.e., typography, color, content quality, interactivity, and navigation) to determine the trust, satisfaction, and loyalty for uncertainty avoidance cultures. Content quality and navigation have been observed as strong factors in building user trust with e-commerce websites. In contrast, interactivity, color, and typography have been observed as strong determinants of user satisfaction. The most relevant and interesting finding is related to typography, which has been rarely discussed in e-commerce literature. A questionnaire was designed to collect data to corroborate the proposed model and hypotheses. Furthermore, the partial least-squares method was adopted to analyze the collected data from the students who participated in the test (n = 558). Finally, the results of this study provide strong support to the proposed model and hypotheses. Therefore, all the web design attributes were observed as important design features to develop user trust and satisfaction for uncertainty avoidance cultures. Although both factors seem to be relevant, the relationship between trust and loyalty was observed to be stronger than between satisfaction and loyalty; thus, trust seems to be a stronger determinant of loyalty for risk/high uncertainty avoidance cultures.
Conference Paper
Full-text available
Cloud services are changing the software development context and are expected to increase dramatically in the forthcoming years. Within the cloud context, platform-as-a-service tools emerge as an important segment with an expected yearly growth between 25 and 50% in the next decade. These tools enable businesses to design and deploy new applications easily, thereby reducing operational expenses and time to market. This is increasingly important due to the lack of professional developers, and it also raises a long-standing issue in computer-aided software engineering: the need for easy-to-learn (low-threshold), functional (high-ceiling) tools enabling non-experts to create and adapt new cloud services. Despite their importance and impact, no research to date has addressed the measurement of tools' ceiling and threshold. In this paper, we describe a first attempt to advance the state of the art in this area through an in-depth usability study of platform-as-a-service tools in terms of their threshold (learnability) and ceiling (functionality). The measured learnability issues evidenced a strong positive correlation with usability defects and a weaker correlation with performance. Remarkably, the fastest and easiest to use and learn tool falls into the low-threshold/low-ceiling pattern.
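As a small illustration of the kind of analysis reported above (correlating counts of learnability issues with usability defects and with task performance), the sketch below computes rank correlations; all tool names and figures are invented for illustration.

```python
# Hypothetical correlation check between learnability issues, usability defects
# and task completion time; the numbers are invented, not the study's data.
from scipy.stats import spearmanr

tools = ["tool_1", "tool_2", "tool_3", "tool_4", "tool_5"]
learnability_issues = [12, 7, 15, 4, 9]
usability_defects   = [10, 6, 14, 3, 8]
completion_minutes  = [34, 30, 41, 22, 36]

rho_def, p_def = spearmanr(learnability_issues, usability_defects)
rho_perf, p_perf = spearmanr(learnability_issues, completion_minutes)
print(f"learnability vs. defects:     rho={rho_def:.2f} (p={p_def:.3f})")
print(f"learnability vs. performance: rho={rho_perf:.2f} (p={p_perf:.3f})")
```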
Conference Paper
Full-text available
Trust seals, such as the VeriSign and TRUSTe logos, are widely used to indicate a website is reputable. But how much protection do they offer to online shoppers? We conducted a study in which 60 experienced online shoppers rated 6 websites, with and without trust seals, based on how trustworthy they perceived them to be. Eye tracking data reveals that 38% of participants failed to notice any of the trust seals present. When seals were noticed, the ratings assigned to each website were significantly higher than for the same website without a seal, but qualitative analysis of the interview data revealed significant misconceptions of their meaning (e.g. "presence of seals automatically legitimizes any website"). Participants tended to rely on self-developed but inaccurate heuristics for assessing trustworthiness (e.g. perceived investment in website development, or references to other recognizable entities). We conclude that trust seals currently do not offer effective protection against scam websites, and suggest that other mechanisms, such as automatic verification of authenticity, are required to support consumers' trust decisions.
Conference Paper
Full-text available
Trust is conceived as an attitude leading to intentions resulting in user actions involving automation. It is generally believed that trust is dynamic and that a user's prior experience with automation affects future behavior indirectly by causing changes in trust. Additionally, individual differences and cultural factors have been frequently cited as contributors influencing trust beliefs about using and monitoring automation. The presented research focuses on modeling humans' trust when interacting with automated systems across cultures. The initial trust assessment instrument, comprising 110 items along with 2 perceptions (general vs. specific use of automation), has been empirically validated. Detailed results comparing items and dimensionality with our new pooled measure will be presented.
Article
Full-text available
Smart environments are able to support users during their daily life. For example, smart energy systems can be used to support energy saving by controlling devices, such as lights or displays, depending on context information, such as the brightness in a room or the presence of users. However, proactive decisions should also match the users’ preferences to maintain the users’ trust in the system. Wrong decisions could negatively influence the users’ acceptance of a system and at worst could make them abandon the system. In this paper, a trust-based model, called User Trust Model (UTM), for automatic decision-making is proposed, which is based on Bayesian networks. The UTM’s construction, the initialization with empirical data gathered in an online survey, and its integration in an office setting are described. Furthermore, the results of a live study and a live survey analyzing the users’ experience and acceptance are presented.
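As a minimal, hand-rolled sketch of the Bayesian-network idea behind such a User Trust Model (the variables, probabilities, and decision threshold below are invented for illustration; the actual UTM is larger and was initialized with survey data), consider:

```python
# Minimal Bayesian-network sketch of trust-based automatic decision-making.
# Structure and probabilities are invented; inference is done by enumeration.
from itertools import product

# Priors over two context variables (True/False).
p_bright = {True: 0.4, False: 0.6}
p_present = {True: 0.7, False: 0.3}

# P(user accepts dimming | brightness, presence): conditional probability table.
p_accept = {
    (True, True): 0.9,   # bright room, user present -> dimming likely accepted
    (True, False): 0.95,
    (False, True): 0.3,  # dark room, user present -> dimming likely rejected
    (False, False): 0.8,
}

def posterior_accept(evidence):
    """P(accept | evidence) by enumerating the unobserved context variables."""
    num = den = 0.0
    for bright, present in product([True, False], repeat=2):
        if any(evidence.get(k) not in (None, v)
               for k, v in (("brightness", bright), ("presence", present))):
            continue  # skip assignments inconsistent with the evidence
        prior = p_bright[bright] * p_present[present]
        num += prior * p_accept[(bright, present)]
        den += prior
    return num / den

# Only act proactively (dim the lights) if predicted acceptance is high enough,
# so that wrong decisions do not erode the user's trust.
p = posterior_accept({"presence": True})
print(f"P(user accepts dimming | user present) = {p:.2f}")
print("dim lights" if p > 0.7 else "ask the user first")
```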
Article
Full-text available
Objective: We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Background: Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators' trust. Method: We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers, containing 127 eligible studies, were included in the review. Results: Our analysis revealed three layers of variability in human–automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Conclusion: Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.
Article
Full-text available
This paper explores the importance of transparency and control to users in the context of inferred user interests. More specifically, we illustrate the association between various levels of control the users have over their inferred interests and users' trust in organizations that provide corresponding content. Our results indicate that users value transparency and control very differently. We segment users into two groups: one that states it does not care about its personal interest model and another that desires some level of control. We found substantial differences in trust impact between segments, depending on the actual control option provided.
Article
Full-text available
The aim of this study was to investigate the antecedents of trust in technology for active users and passive users working with a shared technology. According to the prominence-interpretation theory, to assess the trustworthiness of a technology, a person must first perceive and evaluate elements of the system that includes the technology. An experimental study was conducted with 54 participants who worked in two-person teams in a multi-task environment with a shared technology. Trust in technology was measured using a trust in technology questionnaire, and antecedents of trust were elicited using an open-ended question. A list of antecedents of trust in technology was derived using qualitative analysis techniques. The following categories emerged from the antecedents: technology factors, user factors, and task factors. Similarities and differences between active users' and passive users' responses in terms of trust in technology are discussed.
Article
Full-text available
As mobile technology has developed, mobile banking has become accepted as part of daily life. Although many studies have been conducted to assess users’ satisfaction with mobile applications, none has focused on the ways in which the three quality factors associated with mobile banking – system quality, information quality and interface design quality – affect consumers’ trust and satisfaction. Our proposed research model, based on DeLone and McLean’s model, assesses how these three external quality factors can impact satisfaction and trust. We collected 276 valid questionnaires from mobile banking customers, then analyzed them using structural equation modeling. Our results show that system quality and information quality significantly influence customers’ trust and satisfaction, and that interface design quality does not. We present herein implications and suggestions for further research.
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults, we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
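As a rough sketch of the kind of time-series model hinted at above (trust at time t explained by previous trust, current automation performance, and recent faults), the following fits a first-order dynamic model by least squares; coefficients and data are simulated, and the published 'trust transfer function' may take a different form.

```python
# Hypothetical first-order dynamic trust model:
#   trust[t] = a * trust[t-1] + b * performance[t] + c * fault[t] + noise
# fitted by ordinary least squares. Purely illustrative, not the published model.
import numpy as np

rng = np.random.default_rng(1)
T = 200
performance = rng.uniform(0.5, 1.0, T)
fault = (rng.random(T) < 0.1).astype(float)      # occasional simulated system faults

trust = np.zeros(T)
trust[0] = 0.5
for t in range(1, T):
    trust[t] = (0.8 * trust[t - 1] + 0.2 * performance[t]
                - 0.3 * fault[t] + rng.normal(scale=0.02))

# Recover the coefficients from the simulated series.
X = np.column_stack([trust[:-1], performance[1:], fault[1:]])
coef, *_ = np.linalg.lstsq(X, trust[1:], rcond=None)
a_hat, b_hat, c_hat = coef
print(f"a≈{a_hat:.2f} (inertia), b≈{b_hat:.2f} (performance), c≈{c_hat:.2f} (fault)")
```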
Article
Full-text available
Trust has been shown to be a key factor for technology adoption by users, that is, users prefer to use applications they trust. While existing literature on trust originating in computer science mostly revolves around aspects of information security, authentication, etc., research on trust in automation—originating from behavioral sciences—almost exclusively focuses on the sociotechnical context in which applications are embedded. The behavioral theory of trust in automation aims at explaining the formation of trust, helping to identify countermeasures for users' uncertainties that lead to lessened trust in an application. We hence propose an approach to augment the system development process of ubiquitous systems with insights from behavioral trust theory. Our approach enables developers to derive design elements that help foster trust in their application by performing four key activities: identifying users' uncertainties, linking them to trust antecedents from theory, deducing functional requirements and finally designing trust-supporting design elements (TSDEs). Evaluating user feedback on two recommender system prototypes, gathered in a study with over 160 participants, we show that by following our process, we were able to derive four TSDEs that helped to significantly increase the users' trust in the system.
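The four activities described above lend themselves to a simple traceability structure. A minimal sketch follows; the data class and example entries are invented for illustration and are not taken from the cited paper.

```python
# Hypothetical traceability chain for the four activities described above:
# uncertainty -> trust antecedent -> functional requirement -> TSDE.
from dataclasses import dataclass

@dataclass
class TrustSupportingDesignElement:
    uncertainty: str        # activity 1: user uncertainty identified in interviews
    antecedent: str         # activity 2: trust antecedent from behavioral theory
    requirement: str        # activity 3: derived functional requirement
    design_element: str     # activity 4: concrete TSDE in the user interface

tsdes = [
    TrustSupportingDesignElement(
        uncertainty="Why was this item recommended to me?",
        antecedent="understandability / transparency",
        requirement="The system shall explain the basis of each recommendation.",
        design_element="'Why this recommendation?' explanation panel",
    ),
    TrustSupportingDesignElement(
        uncertainty="What happens to my usage data?",
        antecedent="perceived privacy",
        requirement="The system shall disclose what data it stores and why.",
        design_element="Inline data-usage notice with an opt-out control",
    ),
]

for e in tsdes:
    print(f"{e.uncertainty!r} -> {e.antecedent} -> {e.design_element}")
```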
Article
Full-text available
It is proposed that trust is a critical element in the interactive relations between humans and the automated and robotic technology they create. This article presents (a) why trust is an important issue for this type of interaction, (b) a brief history of the development of human-robot trust issues, and (c) guidelines for input by human factors/ergonomics professionals to the design of human-robot systems with emphasis on trust issues. Our work considers trust an ongoing and dynamic dimension as robots evolve from simple tools to active, sentient teammates.
Article
Full-text available
One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to effectively measure trust. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A 3-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed to better understand similarities and differences in the concepts of trust and distrust, and among the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
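The 12 potential factors mentioned above were identified via cluster analysis of trust-related words. As a toy sketch of that clustering step (the words, similarity ratings, and resulting groupings below are invented and do not reproduce the published factors):

```python
# Hypothetical hierarchical clustering of trust-related words based on
# pairwise similarity ratings; data are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

words = ["reliable", "dependable", "predictable", "familiar", "competent", "secure"]
# Symmetric similarity matrix (1 = identical meaning), made up for the example.
sim = np.array([
    [1.0, 0.9, 0.7, 0.3, 0.6, 0.4],
    [0.9, 1.0, 0.7, 0.3, 0.6, 0.4],
    [0.7, 0.7, 1.0, 0.4, 0.5, 0.3],
    [0.3, 0.3, 0.4, 1.0, 0.3, 0.2],
    [0.6, 0.6, 0.5, 0.3, 1.0, 0.5],
    [0.4, 0.4, 0.3, 0.2, 0.5, 1.0],
])
dist = squareform(1.0 - sim, checks=False)   # convert similarity to condensed distance
Z = linkage(dist, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

for cluster in sorted(set(labels)):
    members = [w for w, c in zip(words, labels) if c == cluster]
    print(f"factor {cluster}: {', '.join(members)}")
```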
Article
Full-text available
Over the last decade, 'user experience' (UX) became a buzzword in the field of human – computer interaction (HCI) and interaction design. As technology matured, interactive products became not only more useful and usable, but also fashionable, fascinating things to desire. Driven by the impression that a narrow focus on interactive products as tools does not capture the variety and emerging aspects of technology use, practitioners and researchers alike, seem to readily embrace the notion of UX as a viable alternative to traditional HCI. And, indeed, the term promises change and a fresh look, without being too specific about its definite meaning. The present introduction to the special issue on 'Empirical studies of the user experience' attempts to give a provisional answer to the question of what is meant by 'the user experience'. It provides a cursory sketch of UX and how we think UX research will look like in the future. It is not so much meant as a forecast of the future, but as a proposal – a stimulus for further UX research.
Conference Paper
Full-text available
Among the variety of heuristics evaluation methods available, four paramount approaches have emerged: Nielsen’s ten usability heuristics, Shneiderman’s Eight Golden Rules of Interface Design, Tognazzini’s First Principles of Interaction Design, and a set of principles based on Edward Tufte’s visual display work. To simplify access to a comprehensive set of heuristics, this paper describes an approach to integrate existing approaches (i.e., identify overlap, combine conceptually related heuristics) in a single table hereafter referred to as the Multiple Heuristics Evaluation Table (MHET). This approach also seeks to update these approaches by addressing existing gaps and providing concrete examples that illustrate the application of concepts. Furthermore, the authors identify three decision factors that support meaningful communication among stakeholders (e.g., product managers, engineers) and apply them to the MHET heuristics. Finally, this paper discusses the practical implications and limitations of the MHET.
Conference Paper
Full-text available
Robotic systems are being introduced into military echelons to extend warfighter capabilities in complex, dynamic environments. While these systems are designed to complement human capabilities (e.g., aiding in battlefield situation awareness and decision making, etc), they are often misused or disused because the user does not have an appropriate level of trust in his or her robotic counterpart(s). We describe a continuing body of research that identifies factors impacting a human's level of trust in a robotic teammate. The factors identified to date can be categorized as human influences (e.g., individual differences in terms of personality, experience, culture), machine influences (e.g., robotic platform, robot performance in terms of levels of automation, failure rates, false alarms), and environmental influences (e.g. task type, operational environment, shared mental models). A framework for human-robot team trust was constructed, which is evolving into a working model contingent upon the results of an on-going meta-analysis.
Article
Password authentication is still ubiquitous although alternatives have been developed to overcome its shortcomings, such as high cognitive load for users. Using an objective rating scheme, Bonneau et al. (2012) demonstrated that replacing the password poses a challenge that remains unsolved. To shed light on this intractable issue, we turn towards subjective user perceptions that influence acceptance and actual use of authentication schemes. We first conducted an extensive rating of objective features of authentication schemes to inform our selection of schemes for this research. Building on the findings thereof, 41 users interacted with twelve different authentication schemes in a laboratory study. The participants' ratings revealed that the password, followed by fingerprint authentication, scored highest in terms of preference, usability and intention to use, and lowest in terms of expected problems and effort. Usability and effort seem to be important factors for users' preference ratings, whereas security and privacy ratings were not correlated with preference. One reason for these factors to fall behind might be their opacity and the resulting difficulty of evaluating them from a user perspective. Further, security and usability perceptions deviated from objective factors and should therefore be carefully considered before making decisions in terms of authentication. Suggestions for making security and privacy features more tangible and allowing for an easier integration into the users' decision process are discussed.
Article
Operators of highly automated driving systems may exhibit behaviour characteristic for overtrust issues due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heart beat combined with a numerical display and users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. In addition, eye tracking data indicate that operators can adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties using a visual display significantly increases operator workload and impedes users in the execution of non-driving related tasks. Practitioner Summary: This article illustrates how the communication of system uncertainty information helps operators calibrate their trust in automation and, consequently, gain situation awareness. Multilevel analysis results of a driving simulator study affirm the benefits for trust calibration and highlight that operators adjust their behaviour according to multiple uncertainty levels.
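Multilevel (mixed-effects) analysis of repeated takeover observations nested within drivers could, in its simplest form, look like the sketch below; the data are simulated and the variable names are illustrative assumptions, not the study's actual model specification.

```python
# Hypothetical mixed-effects model: trust rating as a function of the displayed
# uncertainty level, with a random intercept per participant. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
participants, trials = 30, 8
rows = []
for p in range(participants):
    baseline = rng.normal(5.0, 0.8)               # per-participant trust baseline
    for _ in range(trials):
        uncertainty = rng.integers(0, 3)          # 0 = low, 1 = medium, 2 = high
        trust = baseline - 0.6 * uncertainty + rng.normal(scale=0.5)
        rows.append({"participant": p, "uncertainty": uncertainty, "trust": trust})
df = pd.DataFrame(rows)

model = smf.mixedlm("trust ~ uncertainty", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```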
Chapter
While creating an optimal assortment of products, assortment planners need to take into account a large amount of information, which leads to a certain level of uncertainty. These trade-offs can diminish the quality of the assortment decisions made by the planners. To reduce their impact, assortment planners can now use artificial intelligence (AI) based recommendation agents (RAs) throughout their decision-making process, thus benefiting from their ability to process a large quantity of information to improve their decisions. However, research on user–RA interaction shows that there are some challenges to their adoption. For instance, RA adoption depends on the users' perceived credibility of its recommendations. Hence, this study investigates how the richness of the information provided by the RA and the effort necessary to access this information influence the assortment planners' usage behavior (visual attention) and perceptions (credibility, satisfaction, performance, intention to adopt the RA). A within-subject lab experiment was conducted with twenty participants. The results show the importance of RA recommendations that include easily accessible explanations of the variables included in their calculations for the usage behavior, perceptions, and decision quality of the assortment planners. These findings contribute to the HCI literature and the theory of RA adoption in B2B contexts by providing insights on features enhancing employee adoption.
Article
With the development of pervasive and ubiquitous computing, of the IoT and personal devices, user-centric solutions will be the paradigm for most future applications. In this context, user-centric solutions must be proposed, from deployment models to content management. Obviously, suitable Security, Privacy and Trust (SPT) solutions have to be proposed to ensure the smooth operation of systems and the straightforward management required for successful mass-user adoption. In this paper, we summarize the literature related to user-centric SPT scenarios and present a selection of the most recent advances in these areas.
Article
Purpose: The characteristics of the Internet of Things (IoT) are such that traditional models of trust developed within interpersonal, organizational, virtual and information systems contexts may be inappropriate for use within an IoT context. The purpose of this paper is to offer empirically generated understandings of trust within potential IoT applications. Design/methodology/approach: In an attempt to capture and communicate the complex and all-pervading but frequently inconspicuous nature of ubiquitous technologies within potential IoT techno-systems, propositions developed are investigated using a novel mixed methods research design combining a videographic projective technique with a quantitative survey, sampling 1,200 respondents. Findings: Research findings suggest the dimensionality of trust may vary according to the IoT techno-service context being assessed. Originality/value: The contribution of this paper is twofold. First, and from a theoretical perspective, it offers a conceptual foundation for trust dimensions within potential IoT applications based upon empirical evaluation. Second, and from a pragmatic perspective, the paper offers insights into how findings may guide practitioners in developing appropriate trust management systems dependent upon the characteristics of particular techno-service contexts.
Article
Software-based agents are becoming increasingly ubiquitous and automated. However, current technology and algorithms are still fallible, which considerably affects users' trust and interaction with such agents. In this article, we investigate two factors that can engender user trust in agents: Reliability and attractiveness of agents. We show that agent reliability is not more important than agent attractiveness. Subjective user ratings of agent trust and perceived accuracy suggest that attractiveness may be even more important than reliability.
Article
The use of Enterprise Resource Planning (ERP) systems is proven to be valuable in several ways and it is considered a necessity in today's business. However, despite the high cost and efforts required in implementing ERPs, the success rate is reported to be unsatisfactory in Iranian organizations. It is argued that the success of ERP implementation is significantly related to the users' adoption behavior. This study investigates factors affecting the intention to use ERP systems, one of the most important predictors of adoption behavior. In particular, using the Technology Acceptance Model (TAM), we examined the effects of absorptive capacity, communication and trust on the intention to use ERP systems. A questionnaire was sent to ERP users in 7 organizations in Iran, and 184 responses were used for the analysis. The findings suggest that trust, together with perceived ease of use and perceived usefulness, has a positive significant relationship with intention to use ERP. Furthermore, absorptive capacity and communication have a direct effect on the perceived ease of use which, in turn, impacts the intention to use ERP. As such, this study advances the current knowledge of adoption behavior by investigating the role of trust, communication and absorptive capacity on the intention to use.
Conference Paper
We present a prototype of the user interface of a transparency tool that displays an overview of a user's data disclosures to different online service providers and allows them to access data collected about them stored at the services' sides. We explore one particular type of visualization method consisting of tracing lines that connect a user's disclosed personal attributes to the service to which these attributes have been disclosed. We report on the ongoing iterative process of design of such visualization, the challenges encountered and the possibilities for future improvements.
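A bare-bones version of such a tracing-lines visualization (attributes on the left connected to the services to which they were disclosed) might look like the sketch below; the attribute and service names are invented placeholders, not data from the prototype.

```python
# Hypothetical "tracing lines" sketch: connect disclosed attributes (left)
# to the services that received them (right). Data are invented.
import matplotlib.pyplot as plt

disclosures = {
    "email address": ["ShopA", "SocialB"],
    "date of birth": ["SocialB"],
    "home address": ["ShopA"],
    "phone number": ["ShopA", "BankC"],
}
attributes = list(disclosures)
services = sorted({s for targets in disclosures.values() for s in targets})

fig, ax = plt.subplots(figsize=(6, 3))
for i, attr in enumerate(attributes):
    ax.text(0.02, i, attr, ha="left", va="center")
    for service in disclosures[attr]:
        j = services.index(service)
        ax.plot([0.25, 0.75], [i, j], color="steelblue", alpha=0.7)  # tracing line
for j, service in enumerate(services):
    ax.text(0.98, j, service, ha="right", va="center")

ax.set_xlim(0, 1)
ax.set_ylim(-1, max(len(attributes), len(services)))
ax.axis("off")
plt.tight_layout()
plt.show()
```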
Conference Paper
In today’s rapidly developing Internet, the web sites and services end users see are more and more composed of multiple services, originating from many different providers in a dynamic way. This means that it can be difficult for the user to single out individual web services or service providers and consequently judge them regarding how much they trust them. So the question is how to communicate indicators of trustworthiness and provide adequate security feedback to the user in such a situation. Contemporary literature on trust design and security feedback is mostly focused on static web services and, therefore, only partially applicable to dynamic composite web services. We conducted two consecutive studies (a qualitative and a quantitative one) to answer the questions of how and when security feedback in dynamic web service environments should be provided and how it influences the user’s trust in the system. The findings from the studies were then analyzed with regards to Riegelsberger and Sasse’s ten principles for trust design [24]. The outcome we present in this paper is an adapted list of trust principles for dynamic systems.
Chapter
Usability and user experience are two important factors in the development of mass-customizable personalized products. A broad range of evaluation methods is available to improve products during a user-centered development process. This chapter gives an overview of these methods and how to apply them to achieve easy-to-use, efficient and effective personalized products that are additionally fun to use. A case study on the development of a new interaction technique for interactive TV helps to understand how to set up a mix of evaluation methods to cope with some of the limitations of current usability and user experience evaluation methods. The chapter concludes with some guidelines on how to change organizations to focus on usability and user experience.
Article
This paper explores the differences in users' responses to a spoken language search interface through voice and touch gesture input when compared with a textual input search interface. A Wizard of Oz user experiment was conducted with 48 participants who were asked to complete an entry questionnaire and then six tasks on a spoken search interface and six tasks on a textual search interface. Post-task and post-system questionnaires were also completed followed by an exit interview. The content analysis method was used to analyze the transcribed exit interview data. Results from the content analysis indicated that users' familiarity with the system, ease-of-use of the system, speed of the system, as well as trust, comfort level, fun factor and novelty were all factors that affected users' perception. We identified several major factors that may have implications for the design of future spoken language search interfaces and potential improvements in the user experience of such interfaces or systems.
Conference Paper
Perceived visual aesthetics of a web site positively affect a user's credibility assessment of the site, and a less visually complex web page is associated with more favorable attitudes toward the page. Here we further investigate whether the visual complexity of a web site affects its aesthetic preference and, as a consequence, is associated with users' credibility judgments. Two experiments with an on-line payment scenario were conducted. Experiment 1 shows users trust pages with higher text-based complexity more. Experiment 2 shows perceived image-based complexity is negatively correlated with credibility. Our results show text-based complexity and image-based complexity have different effects on the credibility of an on-line shopping site. Designers can decrease the image-based complexity of a web site to increase users' aesthetic preference and trust. This work can serve as the foundation for developing automatic evaluation tools to predict users' trust in and preference for a web page based on visual complexity computation.
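One simple, hypothetical proxy for the image-based visual complexity of a page screenshot is its edge density. The sketch below illustrates such a computation; the metric and file names are our illustration, not the measure used in the cited experiments.

```python
# Hypothetical visual-complexity proxy: edge density of a page screenshot.
# Illustrative only; not the complexity measure used in the cited experiments.
import numpy as np
from PIL import Image

def edge_density(path, threshold=30):
    """Fraction of pixels whose local gradient magnitude exceeds a threshold."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    gy, gx = np.gradient(gray)
    magnitude = np.hypot(gx, gy)
    return float((magnitude > threshold).mean())

# Usage (paths are placeholders):
# print(edge_density("screenshot_simple.png"), edge_density("screenshot_busy.png"))
```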
Conference Paper
Even though trust is a frequently articulated topic in the software technology literature, the user-centered point of view of trust is hardly discussed. How users perceive the trustworthiness of software systems is not trivial; in fact, if a user cannot trust a program to execute on his behalf, then he should not run it [36]. This paper identifies a potential lack of examination of trust in software systems from the user's perspective and aims to develop a conceptual User-Centered Trust (UCT) framework to model it. This model integrates both the Technology Acceptance Model (TAM) and trust under the Theory of Reasoned Action (TRA) nomological network. In order to integrate them, trust has been conceptualized as an attitude towards the usage of the systems, having two distinct dimensions: cognitive and affective.
Conference Paper
Assessing the quality of information on the Web is a challenging issue for at least two reasons. First, as a decentralized data publishing platform in which anyone can share nearly anything, the Web has no inherent quality control mechanisms to ensure that content published is valid, legitimate, or even just interesting. Second, when assessing the trustworthiness of web pages, users tend to base their judgments upon descriptive criteria such as the visual presentation of the website rather than more robust normative criteria such as the author's reputation and the source's review process. As a result, Web users are liable to make incorrect assessments, particularly when making quick judgments on a large scale. Therefore, Web users need credibility criteria and tools to help them assess the trustworthiness of Web information in order to place trust in it. In this paper, we investigate the criteria that can be used to collect supportive data about a piece of information in order to improve a person's ability to quickly judge the trustworthiness of the information. We propose normative trustworthiness criteria, namely authority, currency, accuracy and relevance, which can be used to support users' assessments of the trustworthiness of Web information. In addition, we validate these criteria using an expert panel. The results show that the proposed criteria are helpful. Moreover, we obtain weighting scores for the criteria which can be used to calculate the trustworthiness of information and suggest a piece of information that is more likely to be trustworthy to Web users.
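The weighting scheme described above reduces to a weighted aggregation of per-criterion scores. A minimal sketch follows; the weights and page scores are invented placeholders, not the values elicited from the expert panel.

```python
# Hypothetical weighted trustworthiness score over the proposed criteria.
# Weights and per-page scores are invented; the paper derives its own weights
# from an expert panel.
weights = {"authority": 0.35, "currency": 0.15, "accuracy": 0.35, "relevance": 0.15}
scores = {"authority": 0.8, "currency": 0.6, "accuracy": 0.9, "relevance": 0.7}  # in [0, 1]

assert abs(sum(weights.values()) - 1.0) < 1e-9
trustworthiness = sum(weights[c] * scores[c] for c in weights)
print(f"trustworthiness score: {trustworthiness:.2f}")  # 0.79 for these placeholder values
```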
Article
In the context of personalization technologies, such as Web-based product-brokering recommendation agents (RAs) in electronic commerce, existing technology acceptance theories need to be expanded to take into account not only the cognitive beliefs leading to adoption behavior, but also the affect elicited by the personalized nature of the technology. This study takes a trust-centered, cognitive and emotional balanced perspective to study RA adoption. Grounded on the theory of reasoned action, the IT adoption literature, and the trust literature, this study theoretically articulates and empirically examines the effects of perceived personalization and familiarity on cognitive trust and emotional trust in an RA, and the impact of cognitive trust and emotional trust on the intention to adopt the RA either as a decision aid or as a delegated agent. An experiment was conducted using two commercial RAs. PLS analysis results provide empirical support for the proposed theoretical perspective. Perceived personalization significantly increases customers' intention to adopt by increasing cognitive trust and emotional trust. Emotional trust plays an important role beyond cognitive trust in determining customers' intention to adopt. Emotional trust fully mediates the impact of cognitive trust on the intention to adopt the RA as a delegated agent, while it only partially mediates the impact of cognitive trust on the intention to adopt the RA as a decision aid. Familiarity increases the intention to adopt through cognitive trust and emotional trust.
Article
Web site usability is concerned with how easy and intuitive it is for individuals to learn to use and interact with a Web site. It is a measure of the quality of a Web site's presence, as perceived by users. The usability of Web sites is important, because high usability is associated with a positive attitude toward the Web site and results in higher online transactions. Poorly designed Web sites with low usability, on the other hand, lead to negative financial impacts. Existing approaches to Web site usability include measurement and tracking of parameters, such as response time and task completion time, and software engineering approaches that specify general usability guidelines and common practices during software development. This paper analyzes usability from the point of view of Web site design parameters. An analysis of usability and other design characteristics of 200 Web sites of different kinds revealed that design aspects, such as information content, ease of navigation, download delay, and Web site availability positively influence usability. Web site security and customization were not found to influence usability. The paper explains these results and suggests design strategies for increasing Web site usability.
Article
Website design elements (information design, information content, navigation design, visual design), disposition to trust, website trust, and transaction security are examined for differences in an eight country sample with a total of 1156 participants (including Canada, the United States, India, Germany, Japan, Mexico, Chile, and China). Within Canada, users from English Canada and French Canada were also compared. In a theoretical context that includes cultural differences for uncertainty avoidance (e.g. Hofstede’s classification) and the GLOBE study which identifies similar country clusters, overall and as predicted, low uncertainty avoidance countries of French Canada, English Canada, and the United States have the highest scores on the various constructs indicating more favorable reactions by users. Largest differences across most of the constructs occur between Germany, Japan, and China with other countries in the sample.
Article
This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.
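The reported pattern (an effect of the implicit measure under ambiguous performance and an implicit-by-explicit interaction under obvious errors) is the kind of result a moderated regression can express. The sketch below uses simulated data and illustrative variable names; it is not the study's actual analysis.

```python
# Hypothetical moderated regression: trust predicted by explicit propensity,
# implicit attitude, and automation-performance condition, with interactions.
# Simulated data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "explicit": rng.normal(size=n),
    "implicit": rng.normal(size=n),
    "performance": rng.choice(["good", "ambiguous", "poor"], size=n),
})
# Simulate trust with an implicit effect only in the ambiguous condition.
df["trust"] = (0.3 * df["explicit"]
               + 0.5 * df["implicit"] * (df["performance"] == "ambiguous")
               + rng.normal(scale=0.5, size=n))

model = smf.ols("trust ~ explicit * implicit * C(performance)", data=df).fit()
print(model.summary())
```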
Article
Data from 574 participants were used to assess perceptions of message, site, and sponsor credibility across four genres of websites; to explore the extent and effects of verifying web-based information; and to measure the relative influence of sponsor familiarity and site attributes on perceived credibility. The results show that perceptions of credibility differed, such that news organization websites were rated highest and personal websites lowest, in terms of message, sponsor, and overall site credibility, with e-commerce and special interest sites rated between these, for the most part. The results also indicated that credibility assessments appear to be primarily due to website attributes (e.g. design features, depth of content, site complexity) rather than to familiarity with website sponsors. Finally, there was a negative relationship between self-reported and observed information verification behavior and a positive relationship between self-reported verification and internet/web experience. The findings are used to inform the theoretical development of perceived web credibility.
Article
The Federal Trade Commission has declared the privacy and security of consumer information to be two major issues that stem from the rapid growth in e-commerce, particularly in terms of consumer-related commerce on the Internet. Although prior studies have assessed online retailer responses to privacy and security concerns with respect to retailers' disclosure of their practices, these studies have been fairly general in their approaches and have not explored the potential for such disclosures to affect consumers. The authors examine online retailer disclosures of various privacy- and security-related practices for 17 product categories. They also compare the prevalence of disclosures to a subset of data from a consumer survey to evaluate potential relationships between online retailer practices and consumer perceptions of risk and purchase intentions across product categories.
Article
End-user trust has been increasingly recognized as important for successful management and use of web information systems (IS). This research investigates influential factors for the establishment of end-user trust toward web IS. The theory of reasoned action and the technology acceptance model are adopted as the theoretical foundation for the research model. Based on analysis of data collected from 88 participants at an international coffee company, the results reveal that perceived ease of use, perceived usefulness, user familiarity, and system normality positively contributed to the establishment of end-user trust toward a web communication IS. This research highlights the importance of trust in web IS adoption and concludes with theoretical and practical contributions.
Article
We performed a study to determine the influence that perceived usability has on the user's loyalty to websites that they visit. The results of the empirical analysis confirmed that the trust of the user increases when the user perceived that the system was usable and that there was a consequent increase in the degree of website loyalty. In the same way, greater usability was found to have a positive influence on user satisfaction, and this also generated greater website loyalty. Finally, it was found that user trust was partially dependent on the degree of consumer website satisfaction.
Article
Trust is emerging as a key element of success in the on-line environment. Although considerable research on trust in the offline world has been performed, to date empirical study of on-line trust has been limited. This paper examines on-line trust, specifically trust between people and informational or transactional websites. It begins by analysing the definitions of trust in previous offline and on-line research. The relevant dimensions of trust for an on-line context are identified, and a definition of trust between people and informational or transactional websites is presented. We then turn to an examination of the causes of on-line trust. Relevant findings in the human–computer interaction literature are identified. A model of on-line trust between users and websites is presented. The model identifies three perceptual factors that impact on-line trust: perception of credibility, ease of use and risk. The model is discussed in detail and suggestions for future applications of the model are presented.
Article
Advancements in computer technology have allowed the development of human-appearing and -behaving virtual agents. This study examined if increased richness and anthropomorphism in interface design lead to computers being more influential during a decision-making task with a human partner. In addition, user experiences of the communication format, communication process, and the task partner were evaluated for their association with various features of virtual agents. Study participants completed the Desert Survival Problem (DSP) and were then randomly assigned to one of five different computer partners or to a human partner (who was a study confederate). Participants discussed each of the items in the DSP with their partners and were then asked to complete the DSP again. Results showed that computers were more influential than human partners but that the latter were rated more positively on social dimensions of communication than the former. Exploratory analysis of user assessments revealed that some features of human–computer interaction (e.g. utility and feeling understood) were associated with increases in anthropomorphic features of the interface. Discussion focuses on the relation between user perceptions, design features, and task outcomes.
Article
With the rapid change in all types of working environment, there is a need to implement electronic learning (e-learning) systems to train people in new technologies, products, and services. However, the large investment in e-learning has made user acceptance an increasingly critical issue for technology implementation and management. Although user acceptance received fairly extensive attention in prior research, efforts were needed to examine or validate previous results, especially in different technologies, user populations, and/or organizational contexts. We therefore proposed a new construct, perceived credibility, to examine the applicability of the technology acceptance model (TAM) in explaining engineers’ decisions to accept e-learning, and address a pragmatic technology management issue. Based on a sample of 140 engineers taken from six international companies, the results strongly support the extended TAM in predicting engineers’ intention to use e-learning.