This is the author's version of a work that was published in the following source:
Toreini, Peyman; Langner, Moritz; Vogel, Tobias; Maedche, Alexander (2021): "EyeTC: Attentive Terms and Conditions of Internet-based Services with Webcam-based Eye Tracking", to appear in: Information Systems and Neuroscience (NeuroIS Retreat 2021) Proceedings.
Please note: Copyright is owned by the author and/or the publisher. Commercial use is not allowed.
Institute of Information Systems and Marketing
(IISM)
Kaiserstraße 89-93
76133 Karlsruhe, Germany
http://iism.kit.edu/
© 2017. This manuscript version is made available under the
CC-BY-NC-ND 4.0 license
http://creativecommons.org/licenses/by-nc-nd/4.0/
EyeTC: Attentive Terms and Conditions of Internet-
based Services with Webcam-based Eye Tracking
Peyman Toreini1, Moritz Langner1, Tobias Vogel2, and Alexander Maedche1
1 Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
{peyman.toreini,moritz.langner,alexander.maedche}@kit.edu
2 University of Mannheim, Mannheim, Germany
Darmstadt University of Applied Sciences, Darmstadt, Germany
vogel@uni-mannheim.de, tobias.vogel@h-da.de
Abstract. Time and again, users are asked to accept terms and conditions (T&C) before using Internet-based services. Previous studies show that users ignore T&C most of the time and accept them tacitly without reading, even though these documents may contain critical information. This study addresses the problem by designing an innovative NeuroIS application called EyeTC. EyeTC uses webcam-based eye tracking technology to track users' eye movement data in real time and provides attention feedback when users do not read the T&C of Internet-based services. We tested the effectiveness of EyeTC in changing users' T&C reading behavior. The results show that when users receive EyeTC-based attention feedback, they allocate more attention to the T&C, leading to higher text comprehension. However, participants articulated privacy concerns about providing eye movement data in a real-world setup.
Keywords: eye tracking, attentive user interface, attention feedback, NeuroIS
1 Introduction
Internet users are confronted with legally binding documents such as terms and conditions (T&C) on a daily basis. However, almost no one reads them before agreeing to the content, a phenomenon that has been called "the biggest lie on the internet" [1]. Nevertheless, such documents may include critical information that allows third parties to benefit from users' data without their genuine consent. Users often give the provider permission to keep, analyze, and sell their data when accepting the T&C of Internet-based services. Previous studies show that when users signed up for a fictitious social network service, 98% of them missed clauses allowing data sharing with the NSA and employers [1]. The failure to read important legal texts has also been analyzed for computer usage policies [2], security warnings for downloads [3], and connecting to public Wi-Fi [4]. One reason users accept such information without reading it is that they consider it an interruption of their primary task, such as finishing an online purchase or signing up for a new Internet-based service [5]. Attitude, social trust, and apathy have also been found to partially explain why users elect not to read such legal documents [2]. Habituation may further explain this behavior, as the design of T&C can create habituation and lead to fewer people reading and cognitively processing what they agree to [3, 5].
Regardless of why people do not read T&C, there is a need to increase users' awareness of this failure and to guide them in reading missed parts of the T&C, especially when they include critical information. Existing approaches force users to stay on the T&C page for a specific time or to scroll to the end of the T&C before accepting, hoping to encourage reading. However, these approaches do not guarantee that users actually read the document, and more intelligent approaches are needed. One solution is to convert T&C into attentive documents in order to track how documents are really read [6]. Attentive user interfaces (AUI) are user interfaces that are aware of the user's attention and support them in allocating their limited attention [7–10]. Eye tracking technology is the primary means of designing such AUIs, as it allows retrieving information about visual attention [11, 12]. NeuroIS researchers have also suggested using this technology to design innovative applications [13–16] and AUIs [17–21]. However, there is a lack of research on using eye tracking devices for designing attention feedback [22]. Therefore, in this study, we design an AUI that focuses on T&C. We name this application EyeTC. EyeTC is an attentive T&C that tracks users' eye movements in real time and provides attention feedback when users skip the content of the T&C. We especially focus on webcam-based eye trackers since they are inexpensive and widely available, so users do not need to buy extra hardware to use EyeTC. Consequently, we address the following research question (RQ):
RQ: How to design attentive T&Cs with webcam-based eye tracking to enhance users' attention to T&Cs and their comprehension?
To answer this question, we investigated the use of webcam-based eye trackers for designing attentive T&C within a design science research (DSR) project. Scholars have emphasized the need to integrate the DSR and NeuroIS fields in order to design innovative applications [14, 15]. In this project, we propose the EyeTC application, which tracks users' eye movements via webcams in real time and uses this information to provide attention feedback while users process T&C. In this study, we focus on the development and evaluation phase of the first design cycle. After instantiating the suggested design, we evaluated it in a laboratory experiment. Our results show that attentive T&C improve users' attention allocation on the T&C as well as their text comprehension. However, participants articulated privacy concerns about sharing their eye movement data in a real-world scenario. We contribute to the field of NeuroIS by providing evidence of how eye tracking technology can be used to design AUIs that support users in reading T&C.
2 The EyeTC Prototype
To conceptualize and implement EyeTC, we followed the eighth and ninth contribution types of the NeuroIS field suggested by [12]. Specifically, we defined two main components of EyeTC: an attentive T&C, which is considered a neuro-adaptive IS, and attention feedback in the form of live biofeedback. Figure 1 depicts an overview of the instantiation of these two dimensions in EyeTC.
For the attentive T&C component, we used webcam-based eye tracking technology. Low-cost eye trackers have been suggested for information systems research [20, 23], and one option is webcam-based eye tracking [24]. We converted webcams into eye trackers by integrating the WebGazer JavaScript library1 [25]. The eye tracking system retrieves gaze data from the webcam recording and stores the predicted gaze position (sensing attention). The reading detector of the attentive T&C then analyzes the user's reading intensity (reasoning about attention); if visual attention does not pass a certain threshold and the user agrees to the T&C without reading the text, the attention feedback system is activated (regulating interactions). First, users are informed by a pop-up warning message stating the importance of reading legal documents and describing the upcoming attention feedback. Next, the attention feedback system uses the information about the user's reading activity to highlight the specific areas of interest (AOIs) that were not yet read sufficiently when the T&C was accepted.
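As an illustration, the reasoning step can be sketched as follows. This is a minimal sketch, not the authors' implementation: the paragraph AOI layout, the 50 ms sampling interval, and the 2-second dwell threshold are assumptions made for this example.

```python
# Illustrative sketch of EyeTC's "reasoning about attention" step:
# credit gaze dwell time to paragraph AOIs and flag paragraphs whose
# dwell falls below a reading threshold, so they can be highlighted later.
from dataclasses import dataclass

@dataclass
class AOI:
    """Axis-aligned screen region covering one T&C paragraph."""
    name: str
    top: int       # y-coordinate of the upper edge, in pixels
    bottom: int    # y-coordinate of the lower edge, in pixels
    dwell_ms: float = 0.0

def accumulate_dwell(aois, gaze_samples, sample_interval_ms=50):
    """Credit each predicted gaze point (x, y) to the AOI containing its y."""
    for _x, y in gaze_samples:
        for aoi in aois:
            if aoi.top <= y < aoi.bottom:
                aoi.dwell_ms += sample_interval_ms
                break
    return aois

def unread_aois(aois, threshold_ms=2000):
    """Names of paragraphs whose dwell time is below the reading threshold."""
    return [a.name for a in aois if a.dwell_ms < threshold_ms]
```

For example, three seconds of gaze samples inside the first paragraph would leave only the second paragraph flagged for highlighting.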
[Figure 1 schematically shows the two components: a webcam-based eye tracker collects and analyzes gaze data in real time to derive the user's attentional state on the T&C (sensing, reasoning), and the attention feedback component (1) shows a warning message and (2) highlights the previously unread paragraphs based on eye movement data (regulating interaction).]
Fig. 1. Components of EyeTC to enhance users' attention to T&C and comprehension
1 https://webgazer.cs.brown.edu/
3 Experimental Design
To evaluate EyeTC, we conducted a controlled laboratory experiment with two groups in which the attention feedback type was manipulated between subjects. As apparatus, we used a Logitech Brio 4K Ultra HD webcam on all laboratory computers together with WebGazer. In the following, we discuss the two attention feedback types as well as the experimental procedure.
3.1 Attention Feedback Types
In this study, we designed two different types of attention feedback for T&C readers, distributed across a control and a treatment group. Both groups received their feedback after being forced to scroll to the end of the T&C and to agree to the provided content (similar to existing approaches on the Internet). After clicking continue, the treatment group received EyeTC and the corresponding attention feedback, consisting of both the warning message and the highlighting discussed in the previous section. The control group received general attention feedback in the form of a warning message only. This warning message aimed to create bottom-up attention and reminded participants to read the legal text carefully. Both groups received the same warning message with the same primary text; the only difference is that treatment group users were informed that they would receive highlighted passages in the next step. With this design, we argue that both groups experienced the same situation except for the personalized highlighted passages provided by EyeTC to the treatment group.
[Figure 2 shows the two feedback designs: the treatment group received (1) the warning message plus (2) highlighted text, while the control group received the warning message only.]
Fig. 2. Two types of attention feedback used in this study to investigate EyeTC
3.2 Experimental Procedure
Figure 3 shows the experimental steps used to evaluate EyeTC. After reading the experimental instructions and performing the calibration, we started a bogus experiment in which we asked users to choose their favorite picture from two options while we tracked their eyes, ostensibly to find the relationship between their choice and their visual behavior. After the bogus experiment, we offered users the chance to participate in a lottery to win an extra 20 euros on top of the compensation for taking part in the experiment. For that, they had to read and accept our designed T&C. Both groups were forced to scroll down the T&C before the accept button was activated. In this phase, the attentive T&C started to record and analyze the user's eye movements while reading the T&C. After users accepted the T&C, the treatment group received a warning message and attention feedback, while the control group received only a warning message. Next, participants from both groups were required to check the T&C again, which we consider their revisit phase. During all these steps, the user's interaction data was recorded, and the time spent on each step was taken as the duration of allocated attention. After the experiment, we measured participants' comprehension of the T&C text with a declarative knowledge test consisting of 15 multiple-choice questions. Finally, participants completed a survey covering demographic questions, the perceived usefulness of the attention feedback types, and privacy concerns about using webcam-based eye trackers.
[Figure 3 shows the sequence: bogus experiment → first visit of the T&C (forced to scroll) → attention feedback → revisit of the T&C → declarative knowledge test → survey.]
Fig. 3. Experiment steps used for evaluating EyeTC
4 Results
In total, 62 university students (32 female, 30 male) with an average age of 22.82 (SD=2.61) participated in the laboratory experiment. Participants were assigned randomly to one of the two groups. All participants in both groups visited the T&C twice, and the system did not detect anyone who read the T&C thoroughly during the first visit.
First, we checked the duration of the users' first visit. A Wilcoxon rank-sum test shows that the first visit duration of the treatment group (M=122s, SD=87s) did not differ significantly from that of the control group (M=125s, SD=81s), W=463, p=.81, r=-.03, indicating that both groups initially showed similar T&C reading behavior. However, in the revisit phase, participants in the treatment group (M=144s, SD=97s) had a significantly longer reading duration than the control group (M=38s, SD=42s), W=814, p<.001, r=-.595, showing that users who received attention feedback changed their behavior and spent more time on the T&C.
The provided T&C comprises 914 words; with an assumed silent-reading speed of 250–280 words/minute [26], a reader would need roughly 196–219 seconds to read the text. Comparing the total reading time (first visit plus revisit) of both groups, we argue that a reader could have read the full T&C in the treatment condition (total time spent M=266s, SD=130s), but not in the control condition (total time spent M=163s, SD=84s). Comparing the total duration on the T&C also shows that the treatment group spent significantly more time on the T&C than the control group, t=3.69, p<.001.
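The reading-time estimate is simple arithmetic and can be reproduced from the reported figures (914 words, a silent-reading speed in the 250–280 words/minute range, and the reported group means for total time spent):

```python
WORDS = 914  # length of the T&C used in the experiment

def reading_time_s(words: int, wpm: float) -> float:
    """Seconds needed to read `words` at a speed of `wpm` words per minute."""
    return words / wpm * 60.0

slow = reading_time_s(WORDS, 250)  # about 219 s
fast = reading_time_s(WORDS, 280)  # about 196 s

# Reported mean total reading times (first visit + revisit), in seconds:
treatment_total, control_total = 266, 163
# Only the treatment group's mean exceeds even the faster estimate, so only
# that group could plausibly have read the whole text.
```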
Furthermore, performance on the declarative knowledge test, measured by the number of correct answers, was higher for users in the treatment group (M=10, SD=2.8) than for users in the control group (M=8.8, SD=2.1), W=650, p<.05, r=-.305. Nevertheless, the survey results show that both groups have high privacy concerns about using eye tracking technology, with no difference between the treatment group (M=5.74, SD=1.1) and the control group (M=5.55, SD=1.12), W=527.5, p=.511, r=-.083.
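The effect sizes reported in this section follow the common rank-sum convention r = Z/√N. As a plausibility check (a sketch using only the reported summary statistics, not the raw data), the Z statistic implied by r for the revisit comparison can be converted back into a p-value with the standard normal survival function:

```python
from math import sqrt, erfc

def normal_sf(x: float) -> float:
    """Survival function of the standard normal distribution, via erfc."""
    return 0.5 * erfc(x / sqrt(2.0))

# Revisit-phase comparison as reported above: r = -.595, N = 62 participants.
n, r = 62, -0.595
z = r * sqrt(n)                      # Z implied by the convention r = Z / sqrt(N)
p_two_sided = 2 * normal_sf(abs(z))  # about 3e-6, consistent with p < .001
```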
Fig. 4. The influence of EyeTC on attention allocation and text comprehension
5 Discussion
Our experimental results show a positive effect of EyeTC on users' reading of T&C. The personalized highlighting of passages that had not been read was significantly more effective than a simple reminder in the form of a prompt. In conclusion, EyeTC led to a longer reading duration on the T&C and better text comprehension. Tracking tools are often seen as decreasing privacy, but we show that eye tracking can be used to increase privacy by supporting people in reading and understanding T&C. Based on the DSR contribution types of Gregor and Hevner [27], this project is of the "improvement" type, since we provide a solution (EyeTC) for a known problem (ignoring T&C). Furthermore, by implementing EyeTC as trustworthy eye tracking software [28], users can decide to use eye tracking in a way that helps them not to miss important content.
However, this research also has limitations that should be addressed in the future. Webcam-based eye tracking was beneficial for designing EyeTC, as webcams are integrated into most personal computers and are more widely available than dedicated eye trackers. However, they are less accurate and precise than infrared eye trackers. They are also very sensitive to movement, so we controlled for a steady posture of the participants during the experiment. Still, there is a chance that EyeTC did not provide accurate highlighting for some participants; as people typically ignore T&C, such inaccuracies were not reported by any participant. Furthermore, we did not include the users' eye movement data in the evaluation, to control for data noise from the webcam-based eye trackers; instead, we focused on users' mouse clicks as interaction data as well as the survey results. As future work, we suggest general highlighting of typical passages that people do not read, and investigating users' reactions and the need for personalized adaptation of the system. To validate the results, we also suggest designing and evaluating EyeTC with more accurate eye trackers. A more accurate eye tracker could help to better understand how users process T&C and could support EyeTC in distinguishing between skimming, reading, and non-reading behavior [6, 29, 30]. Furthermore, the results are based on a controlled laboratory environment, and the effectiveness of EyeTC needs to be examined in field and long-term studies. A further item on the agenda is to establish standards for integrating EyeTC, either by T&C providers or as a tool that users can install to receive support. Finally, the findings from this study may be developed further into applications beyond attentive T&C: for example, the system could be used in e-learning courses to motivate learners to read factual texts, by companies to enhance the reading of certain documents, or for other legal documents such as contracts.
References
1. Obar, J.A., Oeldorf-Hirsch, A.: The biggest lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services. Inf. Commun. Soc. 23, 128–147 (2020).
2. Bryan Foltz, C., Schwager, P.H., Anderson, J.E.: Why users (fail to) read computer usage policies. Ind. Manag. Data Syst. 108, 701–712 (2008).
3. Anderson, B.B., Jenkins, J.L., Vance, A., Kirwan, C.B., Eargle, D.: Your memory is working against you: How eye tracking and memory explain habituation to security warnings. Decis. Support Syst. 92, 3–13 (2016).
4. Fox-Brewster, T.: Londoners give up eldest children in public Wi-Fi security horror show. The Guardian (2014), https://www.theguardian.com/technology/2014/sep/29/londoners-wi-fi-security-herod-clause
5. Böhme, R., Köpsell, S.: Trained to accept? A field experiment on consent dialogs. Conf. Hum. Factors Comput. Syst. - Proc. 4, 2403–2406 (2010).
6. Buscher, G., Dengel, A., Biedert, R., Elst, L. V.: Attentive Documents: Eye Tracking as Implicit Feedback for Information Retrieval and Beyond. ACM Trans. Interact. Intell. Syst. 1, 1–30 (2012).
7. Vertegaal, R.: Attentive User Interfaces. Commun. ACM 46, 30–33 (2003).
8. Anderson, C., Hübener, I., Seipp, A.-K., Ohly, S., David, K., Pejovic, V.: A Survey of Attention Management Systems in Ubiquitous Computing Environments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 1–27 (2018).
9. Bulling, A.: Pervasive Attentive User Interfaces. Computer 49, 94–98 (2016).
10. Roda, C., Thomas, J.: Attention aware systems: Theories, applications, and research agenda. Comput. Human Behav. 22, 557–587 (2006).
11. Duchowski, A.T.: Eye Tracking Methodology: Theory and Practice. Springer International Publishing, Cham (2017).
12. Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., Van De Weijer, J.: Eye Tracking: A comprehensive guide to methods and measures. Oxford University Press, Oxford (2011).
13. Davis, F.D., Riedl, R., Hevner, A.R.: Towards a NeuroIS Research Methodology: Intensifying the Discussion on Methods, Tools, and Measurement. J. Assoc. Inf. Syst. 15, I–XXXV (2014).
14. Riedl, R., Léger, P.-M.: Fundamentals of NeuroIS: Information Systems and the Brain. Springer, Berlin, Heidelberg (2016).
15. vom Brocke, J., Riedl, R., Léger, P.-M.: Application Strategies for Neuroscience in Information Systems Design Science Research. J. Comput. Inf. Syst. 53, 1–13 (2013).
16. Dimoka, A., Davis, F.D., Pavlou, P.A., Dennis, A.R.: On the Use of Neurophysiological Tools in IS Research: Developing a Research Agenda for NeuroIS. MIS Q. 36, 679–702 (2012).
17. Hummel, D., Toreini, P., Maedche, A.: Improving Digital Nudging Using Attentive User Interfaces: Theory Development and Experiment Design Using Eye-tracking. In: Research in Progress Proceedings of the 13th International Conference on Design Science Research in Information Systems and Technology (DESRIST), pp. 1–8, Chennai, India (2018).
18. Toreini, P., Langner, M.: Designing User-Adaptive Information Dashboards: Considering Limited Attention and Working Memory. In: Proceedings of the 27th European Conference on Information Systems (ECIS 2019), Stockholm-Uppsala, Sweden (2019).
19. Langner, M., Toreini, P., Maedche, A.: AttentionBoard: A Quantified-Self Dashboard for Enhancing Attention Management with Eye-Tracking (in press). In: Davis, F., Riedl, R., vom Brocke, J., Léger, P., Randolph, A., Fischer, T. (eds.) Information Systems and Neuroscience (NeuroIS Retreat 2020), Virtual Conference (2020).
20. Toreini, P., Langner, M., Maedche, A.: Using Eye-Tracking for Visual Attention Feedback. In: Davis, F., Riedl, R., vom Brocke, J., Léger, P., Randolph, A., Fischer, T. (eds.) Information Systems and Neuroscience (NeuroIS Retreat 2019). Lecture Notes in Information Systems and Organisation, pp. 261–270. Springer, Vienna, Austria (2020).
21. Toreini, P., Langner, M., Maedche, A.: Use of attentive information dashboards to support task resumption in working environments. In: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1–3. ACM Press (2018).
22. Lux, E., Adam, M.T.P., Dorner, V., Helming, S., Knierim, M.T., Weinhardt, C.: Live Biofeedback as a User Interface Design Element: A Review of the Literature. Commun. Assoc. Inf. Syst. 43, 257–296 (2018).
23. Zugal, S., Pinggera, J.: Low-Cost Eye-Trackers: Useful for Information Systems Research? In: Iliadis, L., Papazoglou, M., Pohl, K. (eds.) Advanced Information Systems Engineering Workshops. CAiSE 2014. Lecture Notes in Business Information Processing, pp. 159–170. Springer, Cham (2014).
24. Burton, L., Albert, W., Flynn, M.: A Comparison of the Performance of Webcam vs. Infrared Eye Tracking Technology. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 58, 1437–1441 (2014).
25. Papoutsaki, A.: Scalable Webcam Eye Tracking by Learning from User Interactions. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 219–222. ACM, New York, NY, USA (2015).
26. Rayner, K.: Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 124, 372–422 (1998).
27. Gregor, S., Hevner, A.R.: Positioning and Presenting Design Science Research for Maximum Impact. MIS Q. 37, 337–355 (2013).
28. Steil, J., Hagestedt, I., Huang, M.X., Bulling, A.: Privacy-aware eye tracking using differential privacy. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–9. ACM, New York, NY, USA (2019).
29. Biedert, R., Buscher, G., Schwarz, S., Hees, J., Dengel, A.: Text 2.0. In: Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '10), p. 4003. ACM Press, New York, NY, USA (2010).
30. Gwizdka, J.: Differences in reading between word search and information relevance decisions: Evidence from eye-tracking. Lect. Notes Inf. Syst. Organ. 16, 141–147 (2017).
ResearchGate has not been able to resolve any citations for this publication.
Conference Paper
Full-text available
In the age of information, office workers process huge amounts of in- formation and distribute their attention to several tasks in parallel. However, attention is a scarce resource and attentional breakdowns, such as missing important information, may occur while using information systems (IS). Currently, there is a lack of support to understand and improve attention management to avoid such breakdowns. In the meantime, self-tracking applications are becoming popular due to the increasing sensory capabilities of smart devices. These systems support their users in understanding and reflecting their behavior. In this research-in-progress paper, we suggest leveraging self-tracking concepts for attention management while working with ISs and describe the design of the NeuroIS-based system called “AttentionBoard”. The goal of AttentionBoard is to help office workers in improving their attention management competencies. The system records attention allocation in real-time using eye- tracking and presents the aggregated data as metrics and visualizations on a dashboard. This paper presents the first step by motivating and introducing an initial design following the design science research (DSR) methodology.
Conference Paper
Full-text available
Business intelligence systems provide information dashboards that aim to assist decision-makers in understanding business situations and consequently support in decision-making. They typically include compressed visual information to raise data exploration from different perspectives. Although information visualization is known as a possible solution to overcome human cognitive limitations such as attention and working memory, we do not know much about existing cognitive challenges of users when performing data exploration tasks using information dashboards. In this paper, we present the results of an eye-tracking experiment to study the impact of attention and working memory limitations on the effectiveness of dashboards. For that, we explicitly considered visuospatial working memory capacity (WMC) as one critical individual difference and investigated how users with different visuospatial WMC allocate attentional resources while conducting data exploration task. We found that both users with high and low visuospatial WMC have difficulties to control their attentional resources. However, these difficulties are more for users with low visuospatial WMC in compare with high WMC. Our results highlighted the need for designing user-adaptive information dashboards. Therefore, we articulated meta-requirements for designing information dashboards that are sensitive to the attention and WMC of their users as two central components of information processing theory.
Article
Full-text available
Today's information and communication devices provide always-on connectivity, instant access to an endless repository of information, and represent the most direct point of contact to almost any person in the world. Despite these advantages, devices such as smartphones or personal computers lead to the phenomenon of attention fragmentation, continuously interrupting individuals' activities and tasks with notifications. Attention management systems aim to provide active support in such scenarios, managing interruptions, for example, by postponing notifications to opportune moments for information delivery. In this article, we review attention management system research with a particular focus on ubiquitous computing environments. We first examine cognitive theories of attention and extract guidelines for practical attention management systems. Mathematical models of human attention are at the core of these systems, and in this article, we review sensing and machine learning techniques that make such models possible. We then discuss design challenges towards the implementation of such systems, and finally, we investigate future directions in this area, paving the way for new approaches and systems supporting users in their attention management.
Conference Paper
Full-text available
Digital nudging is building on the nudging concept established by behavioral economics. Although nudging and digital nudging have received increasing attention from academia and practitioners, there is evidence that it might be less effective than expected. This lacking effectiveness is in part due to not noticing and cognitively processing the digital nudge. Thus, more invasive methods are needed, and we suggest that using attentive user interfaces based on eyetracking technology can further enhance the impact of digital nudges. These will be particularly effective when users would have missed out on certain digital nudges presented on the user interface. Hence, we propose a design science project with a focus on the evaluation phase which includes the theoretical underpinning as well as an experimental design of attentive user interfaces for digital nudges. Thereby, we build on an e-commerce context and suggest giving customers, that do not recognize a digital nudge, interactive real-time feedback. Our artifact can be used by practitioners to improve the usability of digital interfaces.
Conference Paper
With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users' privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy sensitive tasks: We show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.
Article
With the advances in sensor technology and real-time processing of neurophysiological data, a growing body of academic literature has begun to explore how live biofeedback can be integrated into information systems for everyday use. While researchers have traditionally studied live biofeedback in the clinical domain, the proliferation of affordable mobile sensor technology enables researchers and practitioners to consider live biofeedback as a user interface element in contexts such as decision support, education, and gaming. In order to establish the current state of research on live biofeedback, we conducted a literature review on studies that examine self and foreign live biofeedback based on neurophysiological data for healthy subjects in an information systems context. By integrating a body of highly fragmented work from computer science, engineering and technology, information systems, medical science, and psychology, this paper synthesizes results from existing research, identifies knowledge gaps, and suggests directions for future research. In this vein, this review can serve as a reference guide for researchers and practitioners on how to integrate self and foreign live biofeedback into information systems for everyday use.
Article
This paper addresses ‘the biggest lie on the internet’ with an empirical investigation of privacy policy (PP) and terms of service (TOS) reading behavior. An experimental survey (N = 543) assessed the extent to which individuals ignored the PP and TOS when joining a fictitious social networking service (SNS), NameDrop. Results reveal that 74% skipped the PP by selecting the ‘quick join’ clickwrap. Based on average adult reading speed (250–280 words per minute), the PP should have taken 29–32 minutes and the TOS 15–17 minutes to read. For those who did not select the clickwrap, average PP reading time was 73 seconds. All participants were presented the TOS, with an average reading time of 51 seconds. Most participants agreed to the policies (97% to the PP and 93% to the TOS), with decliners reading the PP 30 seconds longer and the TOS 90 seconds longer. A regression analysis identifies information overload as a significant negative predictor of reading the TOS upon sign-up, when the TOS changes, and when the PP changes. Qualitative findings suggest that participants view policies as a nuisance, ignoring them to pursue the ends of digital production without being inhibited by the means. The implications are stark: 98% missed NameDrop TOS ‘gotcha clauses’ about data sharing with the NSA and employers, and about providing a first-born child as payment for SNS access.
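The reported reading-time estimates follow from simple arithmetic: expected minutes = word count / reading speed. The word counts below (roughly 8,000 for the PP and 4,200 for the TOS) are assumed values chosen only to reproduce the reported 29–32 and 15–17 minute ranges, not figures from the study:

```python
def expected_reading_minutes(word_count, wpm_slow=250, wpm_fast=280):
    """Return the (fastest, slowest) expected reading time in minutes
    for a text of `word_count` words at average adult reading speed."""
    return word_count / wpm_fast, word_count / wpm_slow

# Assumed word counts consistent with the reported ranges
pp_fast, pp_slow = expected_reading_minutes(8000)    # about 28.6 to 32.0 min
tos_fast, tos_slow = expected_reading_minutes(4200)  # about 15.0 to 16.8 min
```

Against these expectations, the observed 73-second and 51-second averages correspond to reading only a few percent of each document.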
Conference Paper
Interruptions are one of the major challenges in working environments. Because the primary task is often resumed improperly, interruptions can cause task resumption failures and negatively influence task performance. This phenomenon also occurs when users work with information dashboards. To address this problem, an attentive dashboard that issues visual feedback is developed. This feedback supports users in resuming the primary task after an interruption by guiding their visual attention. The attentive dashboard captures the user's visual attention allocation with a low-cost screen-based eye tracker while they monitor the graphs. The dashboard detects external interruptions by tracking eye-movement data in real time. Based on the collected eye-movement data, two types of visual feedback are designed, which highlight the last fixated graph and the unnoticed ones.
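The feedback logic this abstract describes, remembering which graph was fixated last and which graphs received no fixations at all, can be sketched as a small area-of-interest (AOI) tracker fed by gaze coordinates. All class names, AOI names, and coordinates below are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """A rectangular dashboard region (e.g. one graph) in screen pixels."""
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, gx, gy):
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

class AttentionTracker:
    """Map fixations to AOIs and derive resumption feedback: the last
    fixated graph (to resume at) and the graphs never fixated."""

    def __init__(self, aois):
        self.aois = aois
        self.seen = set()   # names of AOIs fixated at least once
        self.last = None    # name of the most recently fixated AOI

    def on_fixation(self, gx, gy):
        """Record one fixation at gaze point (gx, gy); return the hit AOI name."""
        for aoi in self.aois:
            if aoi.contains(gx, gy):
                self.seen.add(aoi.name)
                self.last = aoi.name
                return aoi.name
        return None  # fixation fell outside all graphs

    def feedback(self):
        """Feedback to show after an interruption is detected."""
        unnoticed = [a.name for a in self.aois if a.name not in self.seen]
        return {"resume_at": self.last, "unnoticed": unnoticed}
```

In a real system, an interruption would be inferred from a sustained absence of on-screen gaze samples, at which point `feedback()` drives the two highlight styles.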
Article
We investigated differences in reading strategies in relation to information search task goals and perceived text relevance. Our findings demonstrate that some aspects of reading when looking for a specific target word are similar to reading relevant texts to find information, while other aspects are similar to reading irrelevant texts to find information. We also show significant differences in pupil dilation on final fixations on relevant words and on relevance decisions. Our results show the feasibility of using eye-tracking data to infer the timing of decisions made during information search tasks in relation to the required depth of information processing and the relevance level.
Article
Security warnings are critical to the security of end users and their organizations, often representing the final defense against an attack. Because warnings require users to make a contextual judgment, it is critical that they pay close attention to warnings. However, research shows that users routinely disregard them. A major factor contributing to the ineffectiveness of warnings is habituation, the decreased response to a repeated warning. Although previous research has identified the problem of habituation, the phenomenon has only been observed indirectly through behavioral measures. Therefore, it is unclear how habituation develops in the brain in response to security warnings, and how this in turn influences users' perceptions of these warnings. This paper contributes by using eye tracking to measure the eye movement-based memory (EMM) effect, a neurophysiological manifestation of habituation in which people unconsciously scrutinize previously seen stimuli less than novel stimuli. We show that habituation sets in after only a few exposures to a warning and progresses rapidly with further repetitions. Using guidelines from the warning science literature, we design a polymorphic warning artifact which repeatedly changes its appearance. We demonstrate that our polymorphic warning artifact is substantially more resistant to habituation than conventional security warnings, offering an effective solution for practice. Finally, our results highlight the value of applying neuroscience to the domain of information security behavior.