Conference Paper

A Systematic Analysis of User Evaluations in Security Research

Authors: Hamm et al.
Affiliations:
  • Capgemini Invent
  • Continental Automotive Technologies GmbH

Abstract

We conducted a literature survey on the reproducibility and replicability of user surveys in security research. For that purpose, we examined all papers published over the last five years at three leading security research conferences and recorded the type of study, whether the authors made the underlying responses available as open data, and whether they published the questionnaire or interview guide they used. We found that user surveys are becoming more widespread in security research and that authors and conferences are increasingly publishing their methodologies, but we found no examples of the underlying data being made available. Based on these findings, we recommend that future researchers publish their data in addition to their results to facilitate replication and ensure a firm basis for user studies in security research.
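The coding procedure the abstract describes (record each paper's study type and whether its questionnaire and raw responses were published, then tally) can be illustrated with a toy script. The coding sheet below is hypothetical, not the authors' data:

```python
from collections import Counter

# Hypothetical coding sheet (not the authors' data): one record per paper,
# noting the study type and which research artifacts were published.
papers = [
    {"type": "online survey", "questionnaire": True,  "open_data": False},
    {"type": "interview",     "questionnaire": True,  "open_data": False},
    {"type": "field study",   "questionnaire": False, "open_data": False},
    {"type": "online survey", "questionnaire": False, "open_data": False},
]

by_type = Counter(p["type"] for p in papers)
methodology_rate = sum(p["questionnaire"] for p in papers) / len(papers)
open_data_rate = sum(p["open_data"] for p in papers) / len(papers)

print(dict(by_type))
print(f"methodology published: {methodology_rate:.0%}")           # 50%
print(f"responses published as open data: {open_data_rate:.0%}")  # 0%
```

A tally like this is what supports the paper's headline observation: questionnaires are increasingly published, while the open-data column stays empty.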


... This is exactly where the CS community falls short, as not much research has been done that explores the human factors in cybersecurity from a multidisciplinary perspective. There are a few works that focus on developers' perspective on cybersecurity [42], the methodologies used for gathering data from humans in a cybersecurity environment [15], where current cybersecurity education is heading [41], or creating a taxonomy for cybersecurity research [40]. But the issue of the "human-in-the-link" has largely been ignored, due to which not much is known about the current state of the art of human factors research in cybersecurity, and what lessons can be learned for the future. ...
... Nevertheless, the very few works available in this context are presented in this section to highlight the research gaps. The authors in [15] conducted a survey on the replicability of user surveys in human-centric cybersecurity research. They concluded that user surveys are the most commonly used methodology in security research involving human subjects; however, such data has never been made publicly available. ...
Conference Paper
Full-text available
Humans are often considered to be the weakest link in the cybersecurity chain. However, Computer Science (CS) researchers have traditionally investigated the technical aspects of cybersecurity, focusing on encryption and network security mechanisms. The human aspect, although very important, is often neglected. In this work we carry out a scoping review to investigate the take of the CS community on the human-centric cybersecurity paradigm by considering the top conferences on network and computer security for the past six years. Results show that broadly two types of users are considered: expert and non-expert users. Qualitative techniques dominate the research methodology employed; however, there is a lack of focus on the theoretical aspects. Moreover, the samples have a heavy bias towards the Western community, due to which the results cannot be generalized, and the effect of culture on cybersecurity is a lesser-known aspect. Another issue is the unavailability of standardized security-specific scales that can measure users' cybersecurity perception. New insights are obtained and avenues for future research are presented.
... Overall, we systematize a significantly larger body of literature than related surveys, for example, Hamm et al., who cover three conferences over five years [114], or Tahaei and Vaniea, who focus only on developers [263]. ...
Thesis
Full-text available
Technological systems and infrastructures form the bedrock of modern society and it is system administrators (sysadmins) who configure, maintain and operate these infrastructures. More often than not, they do so behind the scenes. The work of system administration tends to be unseen and, consequently, not well known. After all, do you think of your IT help-desk when everything is working just fine? Usually, people reach out for help when something is not working as expected or when they need something. A lot of work and effort goes into ensuring that systems are working as expected most of the time and, paradoxically, this smooth functioning results in the invisibilization of the work and effort that went into it. This PhD research focuses on system administration work and what that entails in day-to-day tasks. Instead of proposing technical and social solutions, we try to better understand the “problem” that these proposed solutions are meant to solve. Drawing from safety science research and feminist research approaches, we perform a qualitative exploration of sysadmins’ work. We center their experiences via an in-depth interview investigation and a focus group study. We identify and describe the coordination mechanisms and gender considerations embedded in their work. We shed light on care work as part of sysadmin work and the phenomenon of double invisibility that is experienced by sysadmins who are not cis men. The thesis wraps up with a set of recommendations for moving toward safe and equitable work environments for sysadmins.
... Overall, we systematize a significantly larger body of literature than related surveys, for example, Hamm et al., who cover three conferences over five years [59], or Tahaei and Vaniea, who focus only on developers [133]. ...
Preprint
Full-text available
Instead of only considering technology, computer security research now strives to also take into account the human factor by studying regular users and, to a lesser extent, experts like operators and developers of systems. We focus our analysis on research on the crucial population of experts, whose human errors can impact many systems at once, and compare it to research on regular users. To understand how far we have advanced in the area of human factors, how the field can further mature, and to provide a point of reference for researchers new to this field, we analyzed the past decade of human factors research in security and privacy, identifying 557 relevant publications. Of these, we found 48 publications focused on expert users and analyzed all in depth. For additional insights, we compare them to a stratified sample of 48 end-user studies. In this paper we investigate: (i) the perspective on human factors, and how we can learn from safety science; (ii) how and from whom participants are recruited, and how this, as we find, creates a Western-centric perspective; (iii) research objectives, and how to align these with the chosen research methods; (iv) how theories can be used to increase rigor in the community's scientific work, including limitations to the use of Grounded Theory, which is often incompletely applied; and (v) how researchers handle ethical implications, and what we can do to account for them more consistently. Although our literature review has limitations, new insights were revealed and avenues for further research identified.
Article
Full-text available
Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
Conference Paper
Full-text available
Identifying security vulnerabilities in software is a critical task that requires significant human effort. Currently, vulnerability discovery is often the responsibility of software testers before release and white-hat hackers (often within bug bounty programs) afterward. This arrangement can be ad-hoc and far from ideal; for example, if testers could identify more vulnerabilities, software would be more secure at release time. Thus far, however, the processes used by each group — and how they compare to and interact with each other — have not been well studied. This paper takes a first step toward better understanding, and eventually improving, this ecosystem: we report on a semi-structured interview study (n=25) with both testers and hackers, focusing on how each group finds vulnerabilities, how they develop their skills, and the challenges they face. The results suggest that hackers and testers follow similar processes, but get different results due largely to differing experiences and therefore different underlying knowledge of security concepts. Based on these results, we provide recommendations to support improved security training for testers, better communication between hackers and developers, and smarter bug bounty policies to motivate hacker participation.
Article
Full-text available
Unpublished code and sensitivity to training conditions make many claims hard to verify.
Conference Paper
Full-text available
Despite security advice in the official documentation and an extensive body of security research about vulnerabilities and exploits, many developers still fail to write secure Android applications. Frequently, Android developers fail to adhere to security best practices, leaving applications vulnerable to a multitude of attacks. We point out the advantage of a low-time-cost tool both to teach secure coding more effectively and to improve app security. Using the FixDroid IDE plug-in, we show that professional and hobby app developers can work with and learn from an in-environment tool without it impacting their normal work; and by performing studies with both students and professional developers, we identify key UI requirements and demonstrate that code delivered with such a tool by developers previously inexperienced in security contains significantly fewer security problems. Perfecting and adding such tools to the Android development environment is an essential step in getting both security and privacy for the next generation of apps.
Article
Full-text available
Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different than the circumstances under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources. We built a classifier to make privacy decisions on the user's behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user's past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and only prompt the user when the system is uncertain of the user's preferences. We show that our approach can accurately predict users' privacy decisions 96.8% of the time, which is a four-fold reduction in error rate compared to current systems.
Conference Paper
Full-text available
In this paper, we study the Over-The-Top (OTT) bypass fraud, a recent form of interconnect telecom fraud. In OTT bypass, a normal phone call is diverted over IP to a voice chat application on a smartphone, instead of being terminated over the normal telecom infrastructure. This rerouting (or hijack) is performed by an international transit operator in coordination with the OTT service provider, but without explicit authorization from the caller, callee and their operators. By doing so, they collect a large share of the call charge and induce a significant loss of revenue to the bypassed operators. Moreover, this practice degrades the quality of service without providing any benefits for the users. In this paper, we study the possible techniques to detect and measure this fraud and evaluate the real impact of OTT bypass on a small European country. For this, we performed more than 15,000 test calls during 8 months and conducted a user study with more than 8,000 users. In our measurements, we observed up to 83% of calls being subject to OTT bypass. Additionally, we show that OTT bypass degrades the quality of service, and sometimes collides with other fraud schemes, exacerbating the quality issues. Our user study shows that OTT bypass and its effects are poorly understood by users.
Conference Paper
Full-text available
Detecting phishing attacks (identifying fake vs. real websites) and heeding security warnings represent classical user-centered security tasks subjected to a series of prior investigations. However, our understanding of user behavior underlying these tasks is still not fully mature, motivating further work concentrating at the neuro-physiological level governing the human processing of such tasks. We pursue a comprehensive three-dimensional study of phishing detection and malware warnings, focusing not only on what users' task performance is but also on how users process these tasks based on: (1) neural activity captured using Electroencephalogram (EEG) cognitive metrics, and (2) eye gaze patterns captured using an eye-tracker. Our primary novelty lies in employing multi-modal neuro-physiological measures in a single study and providing a near realistic set-up (in contrast to a recent neuro-study conducted inside an fMRI scanner). Our work serves to advance, extend and support prior knowledge in several significant ways. Specifically, in the context of phishing detection, we show that users do not spend enough time analyzing key phishing indicators and often fail at detecting these attacks, although they may be mentally engaged in the task and subconsciously processing real sites differently from fake sites. In the malware warning tasks, in contrast, we show that users are frequently reading, possibly comprehending, and eventually heeding the message embedded in the warning. Our study provides an initial foundation for building future mechanisms based on the studied real-time neural and eye gaze features, that can automatically infer a user's "alertness" state, and determine whether or not the user's response should be relied upon.
Conference Paper
Full-text available
Users are increasingly storing, accessing, and exchanging data through public cloud services such as those provided by Google, Facebook, Apple, and Microsoft. Although users may want to have faith in cloud providers to provide good security protection, the confidentiality of any data in public clouds can be violated; consequently, while providers may not be "doing evil," we cannot and should not trust them with data confidentiality. To better protect the privacy of user data stored in the cloud, in this paper we propose a privacy-preserving system called Mimesis Aegis (M-Aegis) that is suitable for mobile platforms. M-Aegis is a new approach to user data privacy that not only provides isolation but also preserves the user experience through the creation of a conceptual layer called Layer 7.5 (L-7.5), which is interposed between the application (OSI Layer 7) and the user (Layer 8). This approach allows M-Aegis to implement true end-to-end encryption of user data with three goals in mind: 1) complete data and logic isolation from untrusted entities; 2) preservation of the original user experience with target apps; and 3) applicability to a large number of apps and resilience to app updates. In order to preserve the exact application workflow and look-and-feel, M-Aegis uses L-7.5 to put a transparent window on top of existing application GUIs to both intercept plaintext user input before transforming the input and feeding it to the underlying app, and to reverse-transform the output data from the app before displaying the plaintext data to the user. This technique allows M-Aegis to transparently integrate with most cloud services without hindering usability and without the need for reverse engineering. We implemented a prototype of M-Aegis on Android and show that it can support a number of popular cloud services, e.g., Gmail, Facebook Messenger, WhatsApp, etc. Our performance evaluation and user study show that users incur minimal overhead when adopting M-Aegis on Android: imperceptible encryption/decryption latency and a low and adjustable false positive rate when searching over encrypted data.
Article
Full-text available
Empirically analyzing empirical evidence: One of the central goals in any scientific endeavor is to understand causality. Experiments that seek to demonstrate a cause/effect relation most often manipulate the postulated causal factor. Aarts et al. describe the replication of 100 experiments reported in papers published in 2008 in three high-ranking psychology journals. Assessing whether the replication and the original experiment yielded the same result according to several criteria, they find that about one-third to one-half of the original findings were also observed in the replication study. Science, this issue: 10.1126/science.aac4716
Article
Full-text available
Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet, a growing body of evidence suggests that this is not the case (4–6).
Conference Paper
Full-text available
Immersive experiences that mix digital and real-world objects are becoming reality, but they raise serious privacy concerns as they require real-time sensor input. These experiences are already present on smartphones and game consoles via Kinect, and will eventually emerge on the web platform. However, browsers do not expose the display interfaces needed to render immersive experiences. Previous security research focuses on controlling application access to sensor input alone, and does not deal with display interfaces. Recent research in human computer interactions has explored a variety of high-level rendering interfaces for immersive experiences, but these interfaces reveal sensitive data to the application. Bringing immersive experiences to the web requires a high-level interface that mitigates privacy concerns. This paper presents SurroundWeb, the first 3D web browser, which provides the novel functionality of rendering web content onto a room while tackling many of the inherent privacy challenges. Following the principle of least privilege, we propose three abstractions for immersive rendering: 1) the room skeleton lets applications place content in response to the physical dimensions and locations of renderable surfaces in a room; 2) the detection sandbox lets applications declaratively place content near recognized objects in the room without revealing if the object is present; and 3) satellite screens let applications display content across devices registered with SurroundWeb. Through user surveys, we validate that these abstractions limit the amount of revealed information to an acceptable degree. In addition, we show that a wide range of immersive experiences can be implemented with acceptable performance.
Conference Paper
Full-text available
Establishing secure voice, video and text over Internet (VoIP) communications is a crucial task necessary to prevent eavesdropping and man-in-the-middle attacks. The traditional means of secure session establishment (e.g., those relying upon PKI or KDC) require a dedicated infrastructure and may impose unwanted trust onto third-parties. "Crypto Phones" (popular instances such as PGPfone and Zfone), in contrast, provide a purely peer-to-peer user-centric secure mechanism claiming to completely address the problem of wiretapping. The secure association mechanism in Crypto Phones is based on cryptographic protocols employing Short Authenticated Strings (SAS) validated by end users over the voice medium. The security of Crypto Phones crucially relies on the assumption that the voice channel, over which SAS is validated by the users, provides the properties of integrity and source authentication. In this paper, we challenge this assumption, and report on automated SAS voice imitation man-in-the-middle attacks that can compromise the security of Crypto Phones in both two-party and multi-party settings, even if users pay due diligence. The first attack, called the short voice reordering attack, builds arbitrary SAS strings in a victim's voice by reordering previously eavesdropped SAS strings spoken by the victim. The second attack, called the short voice morphing attack, builds arbitrary SAS strings in a victim's voice from a few previously eavesdropped sentences (less than 3 minutes) spoken by the victim. We design and implement our attacks using off-the-shelf speech recognition/synthesis tools, and comprehensively evaluate them with respect to both manual detection (via a user study with 30 participants) and automated detection. The results demonstrate the effectiveness of our attacks against three prominent forms of SAS encodings: numbers, PGP word lists and Madlib sentences. These attacks can be used by a wiretapper to compromise the confidentiality and privacy of Crypto Phones voice, video and text communications (plus authenticity in case of text conversations).
Article
Full-text available
Due to the amount of data that smartphone applications can potentially access, platforms enforce permission systems that allow users to regulate how applications access protected resources. If users are asked to make security decisions too frequently and in benign situations, they may become habituated and approve all future requests without regard for the consequences. If they are asked to make too few security decisions, they may become concerned that the platform is revealing too much sensitive information. To explore this tradeoff, we instrumented the Android platform to collect data regarding how often and under what circumstances smartphone applications are accessing protected resources regulated by permissions. We performed a 36-person field study to explore the notion of "contextual integrity," that is, how often are applications accessing protected resources when users are not expecting it? Based on our collection of 27 million data points and exit interviews with participants, we examine the situations in which users would like the ability to deny applications access to protected resources. We found that at least 80% of our participants would have preferred to prevent at least one permission request, and overall, they thought that over a third of requests were invasive and desired a mechanism to block them.
Article
Full-text available
Two-factor authentication protects online accounts even if passwords are leaked. Most users, however, still prefer password-only authentication. One of the reasons behind two-factor authentication being unpopular is the extra steps that the user must complete in order to log in. Current two-factor authentication mechanisms require the user to interact with his phone, and e.g., copy a verification code to the browser. In this paper we propose Sound-Proof, a two-factor authentication mechanism that does not require interaction between the user and his phone. In Sound-Proof the second authentication factor is the proximity of the user's phone to the device being used to log in. The proximity of the two devices is verified by comparing the ambient noise recorded by their microphones. Audio recording and comparison are transparent to the user. Sound-Proof can be easily deployed as it works with major browsers without plugins. We build a prototype for both Android and iOS. We provide empirical evidence that ambient noise is a robust discriminant to determine the proximity of two devices both indoors and outdoors, and even if the phone is in a pocket or purse. We further conduct a user study designed to compare the perceived usability of Sound-Proof with Google 2-Step Verification. Participants ranked Sound-Proof as more usable and the majority would be willing to use Sound-Proof even for scenarios in which two-factor authentication is optional.
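Sound-Proof's actual pipeline compares filtered spectral features of the two recordings; purely as an illustration of the underlying idea, correlating the two microphones' samples under small time offsets can be sketched as follows. The lag range and decision threshold here are made-up values, not the paper's parameters:

```python
import math

def norm_corr(x, y):
    """Pearson correlation of two sample windows (truncated to equal length)."""
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def same_environment(mic1, mic2, max_lag=4, threshold=0.6):
    """Declare the two devices nearby if the best correlation over small
    alignment lags exceeds the (made-up) decision threshold."""
    lags = range(max_lag + 1)
    best = max(max(norm_corr(mic1[k:], mic2) for k in lags),
               max(norm_corr(mic1, mic2[k:]) for k in lags))
    return best >= threshold

# Toy samples: the "phone" hears the same ambient noise one sample later.
room = [0.1, 0.8, -0.3, 0.5, -0.9, 0.2, 0.7, -0.4, 0.3, -0.6]
phone = room[1:] + [0.0]
print(same_environment(room, phone))       # True
print(same_environment(room, [0.5] * 10))  # False (no shared structure)
```

A real implementation would work on audio frames, compensate for network delay between devices, and tune the threshold empirically; the transparency to the user comes from this comparison requiring no interaction at all.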
Conference Paper
Nowadays, security incidents have become a familiar "nuisance," and they regularly lead to the exposure of private and sensitive data. The root causes for such incidents are rarely complex attacks. Instead, they are enabled by simple misconfigurations, such as authentication not being required, or security updates not being installed. For example, the leak of over 140 million Americans' private data from Equifax's systems is among the most severe misconfigurations in recent history: The underlying vulnerability was long known, and a security patch had been available for months, but was never applied. Ultimately, Equifax blamed an employee for forgetting to update the affected system, highlighting his personal responsibility. In this paper, we investigate the operators' perspective on security misconfigurations to approach the human component of this class of security issues. We focus our analysis on system operators, who have not received significant attention in prior research. Hence, we investigate their perspective with an inductive approach and apply a multi-step empirical methodology: (i) a qualitative study to understand how to approach the target group and measure the misconfiguration phenomenon; and (ii) a quantitative survey rooted in the qualitative data. We then provide the first analysis of system operators' perspective on security misconfigurations, and we determine the factors that operators perceive as the root causes. Based on our findings, we provide practical recommendations on how to reduce security misconfigurations' frequency and impact.
Conference Paper
Password reuse is widespread, so a breach of one provider's password database threatens accounts on other providers. When companies find stolen credentials on the black market and notice potential password reuse, they may require a password reset and send affected users a notification. Through two user studies, we provide insight into such notifications. In Study 1, 180 respondents saw one of six representative notifications used by companies in situations potentially involving password reuse. Respondents answered questions about their reactions and understanding of the situation. Notifications differed in the concern they elicited and intended actions they inspired. Concerningly, less than a third of respondents reported intentions to change any passwords. In Study 2, 588 respondents saw one of 15 variations on a model notification synthesizing results from Study 1. While the variations' impact differed in small ways, respondents' intended actions across all notifications would leave them vulnerable to future password-reuse attacks. We discuss best practices for password-reuse notifications and how notifications alone appear insufficient in solving password reuse.
Conference Paper
People tend to choose short and predictable passwords that are vulnerable to guessing attacks. Passphrases are passwords consisting of multiple words, initially introduced as more secure authentication keys that people could recall. Unfortunately, people tend to choose predictable natural language patterns in passphrases, again resulting in vulnerability to guessing attacks. One solution could be system-assigned passphrases, but people have difficulty recalling them. With the goal of improving the usability of system-assigned passphrases, we propose a new approach of reinforcing system-assigned passphrases using implicit learning techniques. We design and test a system that implements this approach using two implicit learning techniques: contextual cueing and semantic priming. In a 780-participant online study, we explored the usability of 4-word system-assigned passphrases using our system compared to a set of control conditions. Our study showed that our system significantly improves usability of system-assigned passphrases, both in terms of recall rates and login time.
Conference Paper
Internet users can download software for their computers from app stores (e.g., Mac App Store and Windows Store) or from other sources, such as the developers' websites. Most Internet users in the US rely on the latter, according to our representative study, which makes them directly responsible for the content they download. To enable users to detect if the downloaded files have been corrupted, developers can publish a checksum together with the link to the program file; users can then manually verify that the checksum matches the one they obtain from the downloaded file. In this paper, we assess the prevalence of such behavior among the general Internet population in the US (N=2,000), and we develop easy-to-use tools for users and developers to automate both the process of checksum verification and generation. Specifically, we propose an extension to the recent W3C specification for sub-resource integrity in order to provide integrity protection for download links. Also, we develop an extension for the popular Chrome browser that computes and verifies checksums of downloaded files automatically, and an extension for the WordPress CMS that developers can use to easily attach checksums to their remote content. Our in situ experiments with 40 participants demonstrate usability and effectiveness issues with checksum verification, and show users' desire for our extension.
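The manual procedure the paper's tools automate, hashing the downloaded file and comparing against the developer-published digest, can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's browser or WordPress extension:

```python
import hashlib
import hmac
import os
import tempfile

def file_checksum(path, algo="sha256", chunk_size=8192):
    """Stream the downloaded file through a hash so large files
    need not be read into memory at once."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, published_checksum, algo="sha256"):
    """Compare the local file's digest with the checksum the developer
    published next to the download link."""
    return hmac.compare_digest(file_checksum(path, algo),
                               published_checksum.lower())

# Example: a stand-in for a downloaded installer and its published digest.
fd, tmp = tempfile.mkstemp()
os.close(fd)
with open(tmp, "wb") as f:
    f.write(b"hello installer\n")
expected = hashlib.sha256(b"hello installer\n").hexdigest()
print(verify_download(tmp, expected))  # True
print(verify_download(tmp, "0" * 64))  # False
os.remove(tmp)
```

The paper's point is precisely that few users carry out this comparison by hand, which is why automating it in the browser (in the spirit of W3C sub-resource integrity) matters.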
Conference Paper
The strength of an anonymity system depends on the number of users. Therefore, User eXperience (UX) and usability of these systems is of critical importance for boosting adoption and use. To this end, we carried out a study with 19 non-expert participants to investigate how users experience routine Web browsing via the Tor Browser, focusing particularly on encountered problems and frustrations. Using a mixed-methods quantitative and qualitative approach to study one week of naturalistic use of the Tor Browser, we uncovered a variety of UX issues, such as broken Web sites, latency, lack of common browsing conveniences, differential treatment of Tor traffic, incorrect geolocation, operational opacity, etc. We applied this insight to suggest a number of UX improvements that could mitigate the issues and reduce user frustration when using the Tor Browser.
Conference Paper
Many computer-security defenses are reactive---they operate only when security incidents take place, or immediately thereafter. Recent efforts have attempted to predict security incidents before they occur, to enable defenders to proactively protect their devices and networks. These efforts have primarily focused on long-term predictions. We propose a system that enables proactive defenses at the level of a single browsing session. By observing user behavior, it can predict whether they will be exposed to malicious content on the web seconds before the moment of exposure, thus opening a window of opportunity for proactive defenses. We evaluate our system using three months' worth of HTTP traffic generated by 20,645 users of a large cellular provider in 2017 and show that it can be helpful, even when only very low false positive rates are acceptable, and despite the difficulty of making "on-the-fly" predictions. We also engage directly with the users through surveys asking them demographic and security-related questions, to evaluate the utility of self-reported data for predicting exposure to malicious content. We find that self-reported data can help forecast exposure risk over long periods of time. However, even on the long-term, self-reported data is not as crucial as behavioral measurements to accurately predict exposure.
Conference Paper
The security field relies on user studies, often including survey questions, to query end users' general security behavior and experiences, or hypothetical responses to new messages or tools. Self-report data has many benefits -- ease of collection, control, and depth of understanding -- but also many well-known biases stemming from people's difficulty remembering prior events or predicting how they might behave, as well as their tendency to shape their answers to a perceived audience. Prior work in fields like public health has focused on measuring these biases and developing effective mitigations; however, there is limited evidence as to whether and how these biases and mitigations apply specifically in a computer-security context. In this work, we systematically compare real-world measurement data to survey results, focusing on an exemplar, well-studied security behavior: software updating. We align field measurements about specific software updates (n=517,932) with survey results in which participants respond to the update messages that were used when those versions were released (n=2,092). This allows us to examine differences in self-reported and observed update speeds, as well as examining self-reported responses to particular message features that may correlate with these results. The results indicate that for the most part, self-reported data varies consistently and systematically with measured data. However, this systematic relationship breaks down when survey respondents are required to notice and act on minor details of experimental manipulations. Our results suggest that many insights from self-report security data can, when used with care, translate to real-world environments; however, insights about specific variations in message texts or other details may be more difficult to assess with surveys.
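The claim that self-reported data "varies consistently and systematically with measured data" is the kind of statement typically backed by a rank correlation. A minimal, ties-free sketch of Spearman's rho between self-reported and measured update delays (the data and helper are illustrative, not the paper's analysis):

```python
def ranks(values):
    """1-based rank of each value (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)); valid without ties."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A rho near 1 on (self-reported delay, measured delay) pairs would support the systematic-relationship finding; a breakdown, as the abstract notes for minor message details, would show up as a much weaker correlation.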
Conference Paper
In 2016, a large North American university was subject to a significant crypto-ransomware attack and did not pay the ransom. We conducted a survey with 150 respondents and interviews with 30 affected students, staff, and faculty in the immediate aftermath to understand their experiences during the attack and the recovery process. We provide analysis of the technological, productivity, and personal and social impact of ransomware attacks, including previously unaccounted secondary costs. We suggest strategies for comprehensive cyber-response plans that include human factors, and highlight the importance of communication. We conclude with a Ransomware Process for Organizations diagram summarizing the additional contributing factors beyond those relevant to individual infections. FULL TEXT AVAILABLE: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-zhang-kennedy.pdf
Conference Paper
Humanitarian action, the process of aiding individuals in situations of crises, poses unique information-security challenges due to natural or manmade disasters, the adverse environments in which it takes place, and the scale and multi-disciplinary nature of the problems. Despite these challenges, humanitarian organizations are transitioning towards a strong reliance on the digitization of collected data and digital tools, which improves their effectiveness but also exposes them to computer security threats. In this paper, we conduct a qualitative analysis of the computer-security challenges of the International Committee of the Red Cross (ICRC), a large humanitarian organization with over sixteen thousand employees, an international legal personality involving privileges and immunities, and over 150 years of experience with armed conflicts and other situations of violence worldwide. To investigate the computer security needs and practices of the ICRC from an operational, technical, legal, and managerial standpoint by considering individual, organizational, and governmental levels, we interviewed 27 field workers, IT staff, lawyers, and managers. Our results provide a first look at the unique security and privacy challenges that humanitarian organizations face when collecting, processing, transferring, and sharing data to enable humanitarian action for a multitude of sensitive activities. These results highlight, among other challenges, the trade-offs between operational security and requirements stemming from all stakeholders, the legal barriers to data sharing across jurisdictions, and especially the need to complement privileges and immunities with robust technological safeguards in order to avoid any leakages that might hinder access and potentially compromise the neutrality, impartiality, and independence of humanitarian action.
Article
Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which most published results are unreliable due to growing problems with research and publication practices. This article provides an overview of recent evidence suggesting that this narrative is mistaken, and argues that a narrative of epochal changes and empowerment of scientists would be more accurate, inspiring, and compelling.
Article
Privacy policies are the primary channel through which companies inform users about their data collection and sharing practices. In their current form, policies remain long and difficult to comprehend, thus merely serving the goal of legally protecting the companies. Short notices based on information extracted from privacy policies have been shown to be useful and more usable, but face a significant scalability hurdle, given the number of policies and their evolution over time. Companies, users, researchers, and regulators still lack usable and scalable tools to cope with the breadth and depth of privacy policies. To address these hurdles, we propose Polisis, an automated framework for privacy Policies analysis. It enables scalable, dynamic, and multi-dimensional queries on privacy policies. At the core of Polisis is a privacy-centric language model, built with 130K privacy policies, and a novel hierarchy of neural network classifiers that caters to the high-level aspects and the fine-grained details of privacy practices. We demonstrate Polisis's modularity and utility with two robust applications that support structured and free-form querying. The structured querying application is the automated assignment of privacy icons from the privacy policies. With Polisis, we can achieve an accuracy of 88.4% on this task, when evaluated against earlier annotations by a group of three legal experts. The second application is PriBot, the first free-form Question Answering about Privacy policies. We show that PriBot can produce a correct answer among its top-3 results for 82% of the test questions.
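The hierarchy of classifiers the abstract describes — a coarse category first, then fine-grained details within it — can be illustrated with a toy, keyword-based stand-in. All labels and keywords below are hypothetical; the real Polisis uses neural classifiers built from 130K privacy policies:

```python
# Toy two-level hierarchy: coarse category, then a fine-grained label
# within that category. Hypothetical labels and cue words throughout.
HIGH_LEVEL = {
    "data-collection": ("collect", "gather"),
    "data-sharing": ("share", "third part"),
}
FINE_GRAINED = {
    "data-collection": {"location": ("location", "gps"),
                        "contact": ("email", "phone")},
    "data-sharing": {"advertising": ("advertis",),
                     "analytics": ("analytics",)},
}

def classify_segment(text):
    """Return (category, fine_label) for one policy segment, or None."""
    t = text.lower()
    for category, keywords in HIGH_LEVEL.items():
        if any(k in t for k in keywords):
            for label, cues in FINE_GRAINED[category].items():
                if any(c in t for c in cues):
                    return category, label
            return category, None  # coarse match, no fine-grained hit
    return None
```

The two-stage structure is the point of the sketch: the fine-grained classifier only runs within the coarse category that matched, which is what lets such a hierarchy cater to both high-level aspects and fine-grained details.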
Article
ASR (automatic speech recognition) systems like Siri, Alexa, Google Voice or Cortana have become quite popular recently. One of the key techniques enabling the practical use of such systems in people's daily life is deep learning. Though deep learning in computer vision is known to be vulnerable to adversarial perturbations, little is known about whether such perturbations remain effective against practical speech recognition. In this paper, we not only demonstrate that such attacks can happen in reality, but also show that they can be systematically conducted. To avoid drawing users' attention, we choose to embed the voice commands into a song, called CommandSong. In this way, the song carrying the command can spread through radio, TV or even any media player installed in portable devices like smartphones, potentially impacting millions of users over long distances. In particular, we overcome two major challenges: minimizing the revision of a song in the process of embedding commands, and letting the CommandSong spread through the air without losing the voice "command". Our evaluation demonstrates that we can craft songs to "carry" arbitrary commands, and that the modification is extremely difficult to notice. Notably, the physical attack, in which we play the CommandSongs over the air and record them, succeeds 94% of the time.
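The "minimal revision" objective the abstract mentions can be sketched as additively mixing a scaled command waveform into a song clip and measuring the largest per-sample change. This is only an illustration of the objective; the actual attack shapes the perturbation so that ASR systems decode the command while listeners perceive only the song:

```python
def embed_command(song, command, eps=0.25):
    """Additively embed a scaled command waveform into a song clip
    (both given as lists of samples) and report the largest per-sample
    revision -- a crude proxy for how noticeable the change is."""
    n = min(len(song), len(command))
    mixed = [song[i] + eps * command[i] for i in range(n)]
    distortion = max(abs(m - s) for m, s in zip(mixed, song))
    return mixed, distortion
```

Lowering `eps` reduces the measured distortion but also weakens the embedded command — the trade-off the attack has to balance.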
Article
Ideas supported by well-defined and clearly described methods and evidence are one of the cornerstones of science. After several publications indicated that a substantial number of scientific reports may not be readily reproducible, the scientific community and public began engaging in discussions about mechanisms to measure and enhance the reproducibility of scientific projects. In this context, several innovative steps have been taken in recent years. The results of these efforts confirm that improving reproducibility will require persistent and adaptive responses, and as we gain experience, implementation of the best possible practices.
Article
Passwords are still a mainstay of various security systems, as well as the cause of many usability issues. For end-users, many of these issues have been studied extensively, highlighting problems and informing design decisions for better policies and motivating research into alternatives. However, end-users are not the only ones who have usability problems with passwords! Developers who are tasked with writing the code by which passwords are stored must do so securely. Yet history has shown that this complex task often fails due to human error with catastrophic results. While an end-user who selects a bad password can have dire consequences, the consequences of a developer who forgets to hash and salt a password database can lead to far larger problems. In this paper we present a first qualitative usability study with 20 computer science students to discover how developers deal with password storage and to inform research into aiding developers in the creation of secure password systems.
Conference Paper
The computer security community has advocated widespread adoption of secure communication tools to counter mass surveillance. Several popular personal communication tools (e.g., WhatsApp, iMessage) have adopted end-to-end encryption, and many new tools (e.g., Signal, Telegram) have been launched with security as a key selling point. However, it remains unclear if users understand what protection these tools offer, and if they value that protection. In this study, we interviewed 60 participants about their experience with different communication tools and their perceptions of the tools' security properties. We found that the adoption of secure communication tools is hindered by fragmented user bases and incompatible tools. Furthermore, the vast majority of participants did not understand the essential concept of end-to-end encryption, limiting their motivation to adopt secure tools. We identified a number of incorrect mental models that underpinned participants' beliefs.
Conference Paper
Few users have a single, authoritative, source from whom they can request digital-security advice. Rather, digital-security skills are often learned haphazardly, as users filter through an overwhelming quantity of security advice. By understanding the factors that contribute to users' advice sources, beliefs, and security behaviors, we can help to pare down the quantity and improve the quality of advice provided to users, streamlining the process of learning key behaviors. This paper rigorously investigates how users' security beliefs, knowledge, and demographics correlate with their sources of security advice, and how all these factors influence security behaviors. Using a carefully pre-tested, U.S.-census-representative survey of 526 users, we present an overview of the prevalence of respondents' advice sources, reasons for accepting and rejecting advice from those sources, and the impact of these sources and demographic factors on security behavior. We find evidence of a "digital divide" in security: the advice sources of users with higher skill levels and socioeconomic status differ from those with fewer resources. This digital security divide may add to the vulnerability of already disadvantaged users. Additionally, we confirm and extend results from prior small-sample studies about why users accept certain digital-security advice (e.g., because they trust the source rather than the content) and reject other advice (e.g., because it is inconvenient and because it contains too much marketing material). We conclude with recommendations for combating the digital divide and improving the efficacy of digital-security advice.
Article
To encourage repeatable research, fund repeatability engineering and reward commitments to sharing research artifacts.
Article
In addition to storing a plethora of sensitive personal and work information, smartphones also store sensor data about users and their daily activities. In order to understand users' behaviors and attitudes towards the security of their smartphone data, we conducted 28 qualitative interviews. We examined why users choose (or choose not) to employ locking mechanisms (e.g., PINs) and their perceptions and awareness about the sensitivity of the data stored on their devices. We performed two additional online experiments to quantify our interview results and the extent to which sensitive data could be found in a user's smartphone-accessible email archive. We observed a strong correlation between use of security features and risk perceptions, which indicates rational behavior. However, we also observed that most users likely underestimate the extent to which data stored on their smartphones pervades their identities, online and offline.
Article
Multiple-Input Multiple-Output (MIMO) offers great potential for increasing network capacity by exploiting spatial diversity with multiple antennas. Multiuser MIMO (MU-MIMO) further enables Access Points (APs) with multiple antennas to transmit multiple data streams concurrently to several clients. In MU-MIMO, clients need to estimate Channel State Information (CSI) and report it to APs in order to eliminate interference between them. We explore the vulnerability in clients' plaintext feedback of estimated CSI to the APs and propose two advanced attacks that malicious clients can mount by reporting forged CSI: (1) a sniffing attack that enables concurrently transmitting malicious clients to eavesdrop on other ongoing transmissions; (2) a power attack that enables malicious clients to enhance their own capacity at the expense of others'. We have implemented and evaluated these two attacks in a WARP testbed. Based on our experimental results, we suggest a revision of the current CSI feedback scheme and propose a novel CSI feedback system, called CSIsec, to prevent CSI forging without requiring any modification at the client side, thus facilitating its deployment.
Article
One of the largest outstanding problems in computer security is the need for higher awareness and use of available security tools. One promising but largely unexplored approach is to use social proof: by showing people that their friends use security features, they may be more inclined to explore those features, too. To explore the efficacy of this approach, we showed 50,000 people who use Facebook one of 8 security announcements - 7 variations of social proof and 1 non-social control - to increase the exploration and adoption of three security features: Login Notifications, Login Approvals, and Trusted Contacts. Our results indicated that simply showing people the number of their friends that used security features was most effective, and drove 37% more viewers to explore the promoted security features compared to the non-social announcement (thus, raising awareness). In turn, as social announcements drove more people to explore security features, more people who saw social announcements adopted those features, too. However, among those who explored the promoted features, there was no difference in the adoption rate of those who viewed a social versus a non-social announcement. In a follow-up survey, we confirmed that the social announcements raised viewers' awareness of available security features.
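The "37% more viewers" figure is a relative lift of the treatment rate over the control rate. A small sketch of that computation (the counts below are hypothetical, not the paper's data):

```python
def exploration_lift(social_explored, social_shown,
                     control_explored, control_shown):
    """Relative increase in exploration rate for the social-proof arm
    over the non-social control; 0.37 would correspond to the 37% lift
    reported in the abstract."""
    social_rate = social_explored / social_shown
    control_rate = control_explored / control_shown
    return (social_rate - control_rate) / control_rate
```

For example, 137 explorers out of 1,000 viewers in the social arm against 100 out of 1,000 in the control arm yields a lift of 0.37.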
Article
Mobile applications are part of the everyday lives of billions of people, who often trust them with sensitive information. These users identify the currently focused app solely by its visual appearance, since the GUIs of the most popular mobile OSes do not show any trusted indication of the app origin. In this paper, we analyze in detail the many ways in which Android users can be confused into misidentifying an app, thus, for instance, being deceived into giving sensitive information to a malicious app. Our analysis of the Android platform APIs, assisted by an automated state-exploration tool, led us to identify and categorize a variety of attack vectors (some previously known, others novel, such as a non-escapable full screen overlay) that allow a malicious app to surreptitiously replace or mimic the GUI of other apps and mount phishing and click-jacking attacks. Limitations in the system GUI make these attacks significantly harder to notice than on a desktop machine, leaving users completely defenseless against them. To mitigate GUI attacks, we have developed a two-layer defense. To detect malicious apps at the market level, we developed a tool that uses static analysis to identify code that could launch GUI confusion attacks. We show how this tool detects apps that might launch GUI attacks, such as ransomware programs. Since these attacks are meant to confuse humans, we have also designed and implemented an on-device defense that addresses the underlying issue of the lack of a security indicator in the Android GUI. We add such an indicator to the system navigation bar; this indicator securely informs users about the origin of the app with which they are interacting (e.g., the PayPal app is backed by 'PayPal, Inc.'). We demonstrate the effectiveness of our attacks and the proposed on-device defense with a user study involving 308 human subjects, whose ability to detect the attacks increased significantly when using a system equipped with our defense.