Article

Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0

Authors: Alma Whitten and J. D. Tygar

Abstract

User errors cause or contribute to most computer security failures, yet user interfaces for security still tend to be clumsy, confusing, or near-nonexistent. Is this simply due to a failure to apply standard user interface design techniques to security? We argue that, on the contrary, effective security requires a different usability standard, and that it will not be achieved through the user interface design techniques appropriate to other types of consumer software. To test this hypothesis, we performed a case study of a security program which does have a good user interface by general standards: PGP 5.0. Our case study used a cognitive walkthrough analysis together with a laboratory user test to evaluate whether PGP 5.0 can be successfully used by cryptography novices to achieve effective electronic mail security. The analysis found a number of user interface design flaws that may contribute to security failures, and the user test demonstrated that when our test participants were g...


... In the context of S&P, motivation and ability are both well understood "barriers" to pro-S&P behaviors. The field of usable privacy and security traces its origins to identifying and addressing the ability challenges in user-facing security systems (Zurko and Simon, 1996; Whitten and Tygar, 1999; Adams and Sasse, 1999). As early as 1996, for example, Zurko and Simon discuss the need for "user-centered security" in which they outline a vision for "considering user needs as a primary goal at the start of secure system development." ...

... As early as 1996, for example, Zurko and Simon discuss the need for "user-centered security" in which they outline a vision for "considering user needs as a primary goal at the start of secure system development." (Zurko and Simon, 1996) In their seminal 1999 paper, "Why Johnny Can't Encrypt", Whitten and Tygar systematically uncovered a wide array of usability flaws that inhibited average end-users from properly using PGP 5.0 to encrypt email correspondence (Whitten and Tygar, 1999). Around that same time, Adams and Sasse offered a rebuttal against the prevailing notion of "users being the weakest link" in security by arguing that security lacked a user-centered design process which, in turn, resulted in security controls that were fundamentally unusable (Adams and Sasse, 1999). ...

... Awareness, alone, does not result in action: people must also be willing and motivated to act. However, users are often unmotivated to behave in accordance with expert S&P recommendations, an observation that dates back to the origins of the usable security and privacy field of study (Zurko and Simon, 1996; Whitten and Tygar, 1999; Adams and Sasse, 1999). Inspired by the factors that make up general models of human behavior and technology adoption, our review of prior work suggests that there are four factors that impact end-user motivation towards accepting and/or adopting expert-recommended security advice: subjective norms, perceived relative advantage, trialability, and compatibility. ...
Book
Cybersecurity and Privacy (S&P) unlock the full potential of computing. Use of encryption, authentication, and access control, for example, allows employees to correspond with professional colleagues via email with reduced fear of leaking confidential data to competitors or cybercriminals. It also allows, for example, parents to share photos of children with remote loved ones over the Internet with reduced fear of this data reaching the hands of unknown strangers, and anonymous whistleblowers to share information about problematic practices in the workplace with reduced fear of being outed. Conversely, failure to employ appropriate S&P measures can leave people and organizations vulnerable to a broad range of threats. In short, the security and privacy decisions we make on a day-to-day basis determine whether the data we share, manipulate, and store online is protected from theft, surveillance, and exploitation. How can end-users be encouraged to accept recommended S&P behavior from experts? In this monograph, prior art in human-centered S&P is reviewed, and three barriers to end-user acceptance of expert recommendations are identified. These three barriers make up what we call the “Security & Privacy Acceptance Framework” (SPAF). The barriers are: (1) awareness, i.e., people may not know of relevant security threats and appropriate mitigation measures; (2) motivation, i.e., people may be unwilling to enact S&P behaviors because, e.g., the perceived costs are too high; and (3) ability, i.e., people may not know when, why, and how to effectively implement S&P behaviors. This monograph also reviews and critically analyzes prior work that has explored mitigating one or more of the barriers that make up the SPAF.
Finally, using the SPAF as a lens, the monograph discusses how the human-centered S&P community might re-orient to encourage widespread end-user acceptance of pro-S&P behaviors by employing integrative approaches that address each of the awareness, motivation, and ability barriers.
... The groundbreaking works 'User-Centered Security' (Zurko and Simon 1996), 'Users Are Not the Enemy' (Adams and Angela Sasse 1999), and 'Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0' (Whitten and Doug Tygar 1999), which manifest the beginning of the research field of usable security around 1996, already convey a different paradigm through their titles. They address the problem of how systems can support users in the reliable use of security mechanisms. ...
... When the new research field of usable security was launched in the mid-90s (Zurko and Simon 1996), it focussed primarily on passwords and email security from the end user perspective, building on the pioneering work of Angela Sasse (1999), and Whitten and Doug Tygar (1999). The field gained further traction through special workshops, which over the years have become established venues that are now an integral part of the scientific discourse. ...
Article
In the last decades, research has shown that both technical solutions and user perceptions are important to improve security and privacy in the digital realm. The field of ‘usable security’ already started to emerge in the mid-90s, primarily focussed on password and email security. Later on, the research field of ‘usable security and privacy’ evolved and broadened the aim to design concepts and tools to assist users in enhancing their behaviour with regard to both privacy and security. Nevertheless, many user interventions are not as effective as desired. Because of highly diverse usage contexts, leading to different privacy and security requirements and not always to one-size-fits-all approaches, tailorability is necessary to address this issue. Furthermore, transparency is a crucial requirement, as providing comprehensible information may counter reactance towards security interventions. This article first provides a brief history of the research field in its first quarter-century and then highlights research on the transparency and tailorability of user interventions. Based on this, this article then presents six contributions with regard to (1) privacy concerns in times of COVID-19, (2) authentication on mobile devices, (3) GDPR-compliant data management, (4) privacy notices on websites, (5) data disclosure scenarios in agriculture, as well as (6) rights under data protection law and the concrete process should data subjects want to claim those rights. This article concludes with several research directions on user-centred transparency and tailorability.
... At the same time, USEC researchers have argued for the importance of human-centered design since the 1970s, when Saltzer and Schroeder (1975) outlined that security protection mechanisms require "psychological acceptability." This position was taken further by researchers from both the security and HCI communities (Adams & Sasse, 1999; Whitten & Tygar, 1999; Zurko & Simon, 1996). ...
... Conducting human-centered studies should be an integral part of the evaluation of usable privacy and security systems rather than just "box ticking," and should be integrated at an early stage of the process. That being said, while some communities appreciate innovative solutions even if they lack in-depth user evaluations, a novel privacy or security system that is not usable will not be secure; if a system is not usable, users will work around it or misuse it, resulting in poor privacy and security (Adams & Sasse, 1999; Whitten & Tygar, 1999). Besides users' perceived ease of use and the social benefits that can influence users' adoption decisions, the potential risks associated with other users' privacy also play an important role in the likelihood of use and adoption of ubiquitous systems (Rauschnabel et al., 2016). ...
... No, not at all. While the birth of usable privacy and security happened around 1995 with the works by Zurko and Simon (1996), Whitten and Tygar (1999), Adams and Sasse (1999), and Jermyn et al. (1999), the first formal gathering of the USEC community can indeed be traced back to a workshop at a non security-focused venue: ACM CHI 2003 (Andrew et al., 2003). The history of usable privacy and security research, including the way in which the USEC community has been established and our experts' voiced challenges, shows that USEC research does not exist in a vacuum. ...
Article
Full-text available
Iterative design, implementation, and evaluation of prototype systems is a common approach in Human-Computer Interaction (HCI) and Usable Privacy and Security (USEC); however, research involving physical prototypes can be particularly challenging. We report on twelve interviews with established and nascent USEC researchers who prototype security and privacy-protecting systems and have published work in top-tier venues. Our interviewees range from professors to senior PhD candidates, and researchers from industry. We discussed their experiences conducting USEC research that involves prototyping, opinions on the challenges involved, and the ecological validity issues surrounding current evaluation approaches. We identify the challenges faced by researchers in this area such as the high costs of conducting field studies when evaluating hardware prototypes, the scarcity of open-source material, and the resistance to novel prototypes. We conclude with a discussion of how the USEC community currently supports researchers in overcoming these challenges and places to potentially improve support.
... a usable way. However, while many unusable security mechanisms are known to have created problems in the past [69], and while research has been done on how developers (fail to) implement security [1], [2], [24], the root causes of the lack of usable security (or of successfully implemented usable security) in software products are yet unknown. ...
... In 1996, Mary Ellen Zurko established the term user-centered security and made the case that usability for different groups of non-security specialists (software developers, system administrators, and end-users) was a necessary condition for systems to be secure and function in practice [71]. In 1999, Whitten and Tygar demonstrated in their seminal paper that the target users of PGP 5.0 were not able to operate it securely and identified the usability issues that were the cause [69]. Stransky et al. [60] conducted a field study in 2021 and confirmed that end-to-end encryption for email is still a huge issue. ...
Conference Paper
Full-text available
For software to be secure in practice, users need to be willing and able to appropriately use security features. These features are usually implemented by software professionals during the software development process (SDP), who may be unable to consider the usability of these mechanisms. While research has made progress in supporting developers in creating secure software products, very little attention has been paid to whether and how these security features are made usable. In a semi-structured interview study with 25 software professionals (software developers, designers, architects), we explored how they and other decision-makers encounter and deal with security and usability during the software development process in their companies. Based on 37 hours of interview recordings, we qualitatively analyzed and investigated 23 distinct development contexts in detail. In addition to individual awareness and factors that directly influence the implementation phase, we identify a high impact of contextual factors, such as stakeholder pressure, presence of expertise, and collaboration culture, and the specific implementation of the SDP on usable security in software products. We conclude our work by highlighting important gaps, such as studying and improving contextual factors that contribute to usable security and discussing potential improvements of the status quo.
... Today recommendation engines are used in many fields, starting from mini news headlines on various social media pages, online articles from various news portals, providing streaming services with watchable content, friend requests to join groups or connect with individuals, all the way to various e-commerce sites with their products available for purchase [1,2]. During daily use, recommendation engines can play an important role in users' decision-making regarding whether they like the platform that serves the data [3,4]. ...
... The cosine similarity index calculation [4]. ...
Article
Full-text available
One of the essential components of Recommender Systems in Software Engineering is the static analysis responsible for producing recommendations for users. There are different techniques for how static analysis is carried out in recommender systems. This paper drafts a technique for creating recommendations using cosine similarity. Evaluation of such a system is done using precision, recall, and the so-called Dice similarity coefficient. Ground-truth evaluations consisted of having experienced software developers test the recommendations. In addition, a statistical t-test has been applied to compare the means of the two evaluated approaches. These tests point out a significant difference between the two compared sets.
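The abstract above names its similarity measure (cosine) and its set-overlap evaluation metric (the Dice coefficient) without giving formulas. A minimal sketch of both, with illustrative data (the vectors and item names are made up, not taken from the paper):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def dice_coefficient(recommended, relevant):
    """Dice similarity between a recommended set and a ground-truth set:
    2*|A intersect B| / (|A| + |B|)."""
    recommended, relevant = set(recommended), set(relevant)
    if not recommended and not relevant:
        return 1.0
    return 2 * len(recommended & relevant) / (len(recommended) + len(relevant))

# Toy example: two items described by term-frequency vectors.
print(cosine_similarity([1, 2, 0, 1], [1, 1, 0, 0]))  # ~0.866

# Evaluating a recommendation set against (hypothetical) expert judgements.
print(dice_coefficient({"m1", "m2", "m3"}, {"m2", "m3", "m4"}))  # ~0.667
```

Precision and recall can be computed from the same two sets; Dice is simply their harmonic mean (the F1 score) under this set view.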
... In general, there is only a little research on the usability of password managers, authentication methods, and identity management solutions, and so far, the results have not been particularly encouraging: Password management and digital identity solutions that do not involve significant privacy and control trade-offs (like federated identity management) are complex to use, have low to moderate usability, and are hard to understand for users (Acemyan et al., 2018;Bonneau et al., 2012;Dillon et al., 2020;Dunphy et al., 2018;Whitten et al., 1999;Zaeem et al., 2021). This lack of technical understanding also leads to security issues (Dunphy et al., 2018;Vaziripour et al., 2017) or even security fatigue (Cram et al., 2021). ...
... Apart from SSI-specific studies, there have been several publications on the usability of identity management in general. The earliest systematic usability study with participants was conducted by Whitten et al. (1999) on Pretty Good Privacy (PGP). The researchers involved tested individually prioritized usability attributes with 12 participants. ...
Conference Paper
Today’s systems for digital identity management exhibit critical security, efficiency, and privacy issues. A new paradigm, called Self-Sovereign Identity (SSI), addresses these shortcomings by equipping users with mobile wallets and empowering them to manage their digital identities. Various companies and governments back this paradigm and promote its development and diffusion. User experience often plays a subordinate role in these efforts, even though it is crucial for user satisfaction and adoption. We thus conduct a comprehensive user experience study of four prominent SSI wallets using a mixed-method approach that involves moderated and remote interviews and the User Experience Questionnaire (UEQ). We find that the examined wallets already provide a decent level of user experience, yet further improvements need to be done. In particular, the examined wallets do not make their novelty and benefits sufficiently apparent to users. Our analysis contributes to user experience research and offers guidance for SSI practitioners.
... Usability and operational security. Usability is a well-known weakness in security systems [38,40,49]; if Johnny can't encrypt, how can we expect him to blow the whistle anonymously? Operational security mistakes can happen at either end of the communication channel and are more likely when journalists are doing something they do rarely, and the source is doing something for the first time in their life under great stress [3]. ...
... PGP offers the major benefit that messages are not passed in plaintext, but PGP clients are notoriously dif-ficult for even college-educated people to use [38,40,49]. There is also no onion encryption or cover traffic to thwart correlation attacks. ...
Article
Full-text available
Whistleblowing is hazardous in a world of pervasive surveillance, yet many leading newspapers expect sources to contact them with methods that are either insecure or barely usable. In an attempt to do better, we conducted two workshops with British news organisations and surveyed whistleblowing options and guidelines at major media outlets. We concluded that the soft spot is a system for initial contact and trust establishment between sources and reporters. CoverDrop is a two-way, secure system to do this. We support secure messaging within a news app, so that all its other users provide cover traffic, which we channel through a threshold mix instantiated in a Trusted Execution Environment within the news organisation. CoverDrop is designed to resist a powerful global adversary with the ability to issue warrants against infrastructure providers, yet it can easily be integrated into existing infrastructure. We present the results from our workshops, describe CoverDrop’s design and demonstrate its security and performance.
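CoverDrop channels source-reporter messages through a threshold mix. The paper's design runs the mix inside a Trusted Execution Environment and relies on app users for cover traffic; the core batching idea alone can be sketched as follows (class name and parameters are illustrative, not CoverDrop's API):

```python
import random

class ThresholdMix:
    """Toy threshold mix: buffer incoming messages and only release them
    in a shuffled batch once `threshold` messages have accumulated, so an
    observer cannot link an individual input to an output by arrival order."""

    def __init__(self, threshold: int, seed=None):
        self.threshold = threshold
        self.pool = []
        self.rng = random.Random(seed)

    def submit(self, message):
        """Add a message; return a flushed, shuffled batch if the threshold is met."""
        self.pool.append(message)
        if len(self.pool) >= self.threshold:
            batch, self.pool = self.pool, []
            self.rng.shuffle(batch)  # break the arrival order
            return batch
        return None

mix = ThresholdMix(threshold=3, seed=42)
assert mix.submit("m1") is None          # buffered, nothing released
assert mix.submit("m2") is None
batch = mix.submit("m3")                 # threshold reached: flush
assert sorted(batch) == ["m1", "m2", "m3"]
```

A real mix would additionally pad and encrypt messages and inject dummy (cover) traffic so that the flush itself leaks nothing about how many genuine messages were present.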
... Johnny's story begins in 1999, in an article entitled "Why Johnny Can't Encrypt" [WT99]. The aim of the study was to identify, in the design of a security application, the obstacles that could prevent a user, Johnny, from making good use of the application, or even discourage him from using it. ...
... Johnny is the reluctant hero of a series of works about security and usability. Johnny's story begins in 1999, in the episode "Why Johnny can't encrypt" by Whitten and Tygar [WT99]. The purpose was to evaluate the design of security interfaces and to identify the obstacles that keep a user, Johnny, from using them correctly (or, indeed, from using them at all). ...
Thesis
Secure Instant Messaging applications (such as WhatsApp or Signal) have become unavoidable means of communication in our everyday lives. These applications offer desirable security features such as end-to-end encryption, forward secrecy, and post-compromise security. However, these properties are often limited to one-to-one communications. The purpose of the work presented here is to reach, in the multi-device context as well as in group messaging, a level of security as strong as that of classical one-to-one communications. On the multi-device side, we propose a Multi-Device Instant Messaging protocol based on the ratcheted key exchange used in Signal, which is already widely deployed in other applications. On the group side, we are interested in the Messaging Layer Security (MLS) protocol, which aims at providing a secure group messaging solution. The security of the protocol relies in particular on the possibility for any user to update the group secrets. In its current design, a flaw appears in this updating process. We propose a solution to secure the update mechanism, using Zero-Knowledge (ZK) techniques as building blocks. As a main contribution, we provide two different ZK protocols to prove knowledge of the input of a pseudorandom function implemented as a circuit, given an algebraic commitment of the output and the input.
... Except computer security, where the idea was that people should do as they are told, because security is important. In 1999, two seminal papers highlighted the consequences of unusable security "Users are not the enemy" [1] and "Why Johnny can't encrypt" [51]. In 2000, Bruce Schneier introduced the narrative of "the user as weakest link" [47]. ...
... In 2000, Bruce Schneier introduced the narrative of "the user as weakest link" [47]. This implicitly blames people, a perspective that even most usable security researchers subscribe to: when people can't or won't follow expert prescriptions, it is because they are "unmotivated" [51] or "lazy" [45]. Klimburg-Witjes [30] recount how this perspective is pervasive in both academic and practitioner events today, and so deeply ingrained that some employees believe it themselves [14]. ...
Chapter
Full-text available
Over the past decade, researchers investigating IT security from a socio-technical perspective have identified the importance of trust and collaboration between different stakeholders in an organisation as the basis for successful defence. Yet, when employees do not follow security rules, many security practitioners attribute this to them being “weak” or “careless”; many employees in turn hide current practices or planned development because they see security as “killjoys” who “come and kill our baby”. Negative language and blaming others for problems are indicators of dysfunctional relationships. We collected a small set of statements from security experts about employees to gauge how widespread this blaming is. To understand how employees view IT security staff, we performed a Prolific survey with 100 employees (n = 92) from the US & UK, asking them about their perceptions of, and emotions towards, IT security staff. Our findings indicate that security relationships are indeed often dysfunctional. Psychology offers frameworks for identifying relationship and communication flows that are dysfunctional, and a range of interventions for transforming them into functional ones. We present common examples of dysfunctionality, show how organisations can apply those interventions to rebuild trust and collaboration, and establish a positive approach to security in organisations that seizes human potential instead of blaming the human element. We propose Transactional Analysis (TA) and the OLaF questionnaire as measurement tools to assess how organisations deal with error, blame and guilt. We continue to consider possible interventions inspired by therapy, such as conditions from individual and group therapy which can be implemented, for example, in security dialogues, or the use of humour and clowns. Keywords: Human factors in IT security; IT security awareness; Dysfunctional relationships; Socio-technical systems; Interpersonal communication; Transactional analysis; Joint optimisation
... The burden of privacy should not fall most heavily on the users of devices. Protection by default avoids the limitations of user interventions [57][58][59] or providing security advice [56], but can be complex or costly. Default protections also ameliorate the scrutiny opt-in measures may attract [60]. ...
... Following [15], we extend our analysis to include usability (for users) and adoptability (for providers). For users, these factors intuitively include performance and ease of use [57][58][59]. For providers, they include costs of implementation, and for cloud services the costs of maintenance [67]. ...
Article
Full-text available
Mobile devices have become an indispensable component of modern life. Their high storage capacity gives these devices the capability to store vast amounts of sensitive personal data, which makes them a high-value target: these devices are routinely stolen by criminals for data theft, and are increasingly viewed by law enforcement agencies as a valuable source of forensic data. Over the past several years, providers have deployed a number of advanced cryptographic features intended to protect data on mobile devices, even in the strong setting where an attacker has physical access to a device. Many of these techniques draw from the research literature, but have been adapted to this entirely new problem setting. This involves a number of novel challenges, which are incompletely addressed in the literature. In this work, we outline those challenges, and systematize the known approaches to securing user data against extraction attacks. Our work proposes a methodology that researchers can use to analyze cryptographic data confidentiality for mobile devices. We evaluate the existing literature for securing devices against data extraction adversaries with powerful capabilities including access to devices and to the cloud services they rely on. We then analyze existing mobile device confidentiality measures to identify research areas that have not received proper attention from the community and represent opportunities for future research.
... having their messages protected, while the manual effort and expertise needed to use PGP or S/MIME has led to the longest-running series of usability failures [11], leaving email security vastly unused. ...
... However, before secure communication can be enabled, users must generate encryption and signature key pairs, be verified by a certificate authority (CA), and receive CA-signed certificates. Furthermore, key management issues, including the key storage capacity required to archive all the private keys for distinct users and the key certification and validation processes [24,25], are major drawbacks of the practical implementation of public-key cryptography. The idea of identity-based cryptography was first proposed by Shamir in 1984 [26], putting forth the notion of using a unique string such as a user's name, email address or contact number to explicitly compute the user's private key. ...
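In Shamir's identity-based setting, a trusted authority (the Private Key Generator) holds a master secret and derives each user's private key directly from an identity string, so no per-user certificate has to be issued or looked up. Real IBE schemes (e.g. Boneh-Franklin) use pairings on elliptic curves; the HMAC toy below only illustrates the "identity maps to key" idea, and every name and value in it is a demo assumption:

```python
import hmac
import hashlib

# Hypothetical master secret held by the Private Key Generator (demo only;
# a real PKG would guard a pairing-based master key, not an HMAC key).
MASTER_SECRET = b"pkg-master-secret-demo-only"

def derive_user_key(identity: str) -> bytes:
    """Deterministically derive a per-identity key from the master secret.
    This mirrors the IBC workflow, where the key authority computes the
    private key matching a public identity string on request."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

alice = derive_user_key("alice@example.com")
bob = derive_user_key("bob@example.com")
assert alice != bob                                    # distinct identities, distinct keys
assert alice == derive_user_key("alice@example.com")   # deterministic: no key directory needed
```

The appeal for senders is that the recipient's public identifier is already known (their email address), which sidesteps exactly the certificate distribution and validation burden the passage describes.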
Article
Full-text available
The Internet of Things (IoT) represents a growing aspect of how entities, including humans and organizations, are likely to connect with others in their public and private interactions. The exponential rise in the number of IoT devices, resulting from ever-growing IoT applications, also gives rise to new opportunities for exploiting potential security vulnerabilities. In contrast to conventional cryptosystems, frameworks that incorporate fine-grained access control offer better opportunities for protecting valuable assets, especially when the connectivity level is dense. Functional encryption is an exciting new paradigm of public-key encryption that supports fine-grained access control, generalizing a range of existing fine-grained access control mechanisms. This survey reviews the recent applications of functional encryption and the major cryptographic primitives that it covers, identifying areas where the adoption of these primitives has had the greatest impact. We first provide an overview of different application areas where these access control schemes have been applied. Then, an in-depth survey of how the schemes are used in a multitude of applications related to IoT is given, rendering a potential vision of security and integrity that this growing field promises. Towards the end, we identify some research trends and state the open challenges that current developments face for a secure IoT realization.
... These certificates indicate that the key was compromised and should therefore no longer be used. However, PGP and its web of trust have been shown to be impractical [37] and require central key servers. Another alternative to PKI is the Decentralised Public Key Infrastructure (DPKI) [35,38]. ...
Preprint
Full-text available
Self-Sovereign Identity (SSI) aspires to create a standardised identity layer for the Internet by placing citizens at the centre of their data, thereby weakening the grip of big tech on current digital identities. However, as millions of both physical and digital identities are lost annually, it must also be possible to revoke SSIs to prevent misuse. Previous attempts at designing a revocation mechanism typically violate the principles of SSI by relying on central trusted components. This lack of a distributed revocation mechanism hampers the development of SSI. In this paper, we address this limitation and present the first fully distributed SSI revocation mechanism that does not rely on specialised trusted nodes. Our novel gossip-based propagation algorithm disseminates revocations throughout the network and provides nodes with a proof of revocation that enables offline verification of revocations. We demonstrate through simulations that our protocol adequately scales to national levels.
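The gossip-based propagation described above can be sketched with a toy push-gossip simulation: each node that knows about a revocation forwards it to a few random peers per round, until the whole network is informed. The fanout parameter, seeding, and structure are illustrative assumptions, not the paper's actual protocol.

```python
import random

def gossip(n_nodes: int, fanout: int = 3, seed: int = 0) -> int:
    """Spread one revocation from node 0 via push gossip.

    Each round, every informed node forwards the message to `fanout`
    randomly chosen peers. Returns the number of rounds until all
    n_nodes have seen the revocation.
    """
    rng = random.Random(seed)
    informed = {0}          # node 0 originates the revocation
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        for _ in list(informed):
            # forward to `fanout` random peers (self-sends are harmless no-ops)
            for peer in rng.sample(range(n_nodes), min(fanout, n_nodes)):
                informed.add(peer)
    return rounds

print(gossip(1000))  # rounds needed to cover a 1000-node network
```

Because each round roughly multiplies the informed set, coverage grows exponentially and the round count grows only logarithmically in network size, which is the intuition behind the "scales to national levels" claim.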
... As usability studies of PGP tools have indicated [41,47], I was far from alone in my confusion. ...
Chapter
Full-text available
The PGP Web of Trust was intended to provide a decentralised trust model for digital security, an alternative to centralised security models that might be subject to government control. Drawing from five years of ethnographic research among cybersecurity engineers into the everyday practice of using the Web of Trust, I critically examine the relationship between security and trust in distributed computing systems. I employ sociological perspectives on trust to examine the distinct roles that decentralised interpersonal trust and centralised assurance structures play in ensuring security in the Web of Trust. I illustrate how the Web of Trust, although designed to evade government control, paradoxically relies upon assurances provided by government-issued documents to validate identity, even while also relying upon interpersonal trust for this purpose. Through my analysis, I offer a framework for thinking about the relationship between centralisation and decentralisation, and between trust and assurance, to ensure security in the design and operation of distributed computing systems.
... Recommender Systems for Software Engineering (RSSEs) are software tools that can assist developers with a wide range of activities, from reusing code to suggesting what to do next during development [1]. These tools can also be used to guide developers and recommend next activities, based on code written by other developers. ...
Article
Full-text available
Recommender Systems are software tools that can assist developers with a wide range of activities, from reusing code to suggesting what to do during development. This paper outlines an approach to generating recommendations using SQL Semantic Search. The performance of this recommender system is measured by calculating precision, recall and F1-measure. A subjective evaluation with 10 experienced developers validated the recommendations. A statistical t-test is used to compare the means of the two evaluation approaches.
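The metrics named in the abstract, precision, recall and F1-measure, can be computed from the sets of retrieved (recommended) and relevant items; the query identifiers below are invented for illustration:

```python
def precision_recall_f1(retrieved: set, relevant: set):
    """Standard retrieval metrics over recommendation sets."""
    tp = len(retrieved & relevant)                       # correctly recommended
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)                # harmonic mean
    return precision, recall, f1

# Four recommendations, of which two are among the three relevant items.
p, r, f1 = precision_recall_f1({"q1", "q2", "q3", "q4"}, {"q2", "q3", "q5"})
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.5 0.67 0.57
```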
... Finally, given that users often struggle with and reject security and privacy solutions that encumber or complicate their use of computing systems [15,38,43,43,86,104], a core design goal for Image DePO was to minimize disruption to the user experience of Facebook. Accordingly, we directly integrated all end-user touch-points for Image DePO within Facebook's web interface. ...
Article
Full-text available
Centralized online social networks --- e.g., Facebook, Twitter and TikTok --- help drive social connection on the Internet, but have nigh unfettered access to monitor and monetize the personal data of their users. This centralization can especially undermine the use of the social internet by minority populations, who disproportionately bear the costs of institutional surveillance. We introduce a new class of privacy-enhancing technology --- decentralized privacy overlays (DePOs) --- that helps cOSN users regain some control over their personal data by allowing them to selectively share secret content on cOSNs through decentralized content distribution networks. As a first step, we present an implementation and user evaluation of Image DePO, a proof-of-concept design probe that allows users to upload and share secret photos on Facebook through the Interplanetary File System peer-to-peer protocol. We qualitatively evaluated Image DePO in a controlled, test environment with 19 queer and Black, Indigenous, (and) Person of Color (BIPOC) participants. We found that while Image DePO could help address the institutional threats with which our participants expressed concern, interpersonal threats were the more salient concern in their decisions to share content. Accordingly, we argue that in order to see widespread use, DePOs must align protection against abstract institutional threats with protection against the more salient interpersonal threats users consider when making specific sharing decisions.
... It is well known that it is hard to get users to use PICS tools. Unsurprisingly, the usability of such tools has been the focus of much research, and usability is clearly a factor that determines which tools and features users choose to adopt [19,3,41,31]. A related discussion is that of the digital divide, a phenomenon in which some people are excluded from the digital world for various reasons [32]. ...
Chapter
Privacy, Information, and Cybersecurity (PICS) are related properties that have become a concern for more or less everyone. A large portion of the responsibility for PICS is put on the end-user, who is expected to adopt PICS tools, guidelines, and features to stay secure and maintain organizational security. However, the literature describes that many users do not adopt PICS tools, and a key reason seems to be usability. This study acknowledges that the usability of PICS tools is a crucial concern and seeks to problematize further by adding cognitive ability as a key usability aspect. We argue that a user’s cognitive abilities determine how the user perceives the usability of PICS tools and that usability guidelines should account for the varying cognitive abilities held by different user groups. This paper presents a case study with a focus on how cognitive disabilities can affect the usability of PICS tools. Interviews with users with cognitive disabilities as well as usability experts, and experts on cognitive disabilities were conducted. The results suggest that many of the usability factors are shared by all users, cognitive challenges or not. However, cognitive challenges often cause usability issues to be more severe. Based on the results, several design guidelines for the usability of PICS tools are suggested.
... The research into usable security solutions is fortunately growing, starting with the question of why users did not use email encryption back in 1999 [97]. While individual security tools might be designed so that users can easily use them, and this is verified in lab experiments, surveys, and interviews, research on how well these tools fit into the organizational context and into the everyday routine of employees is limited. ...
Conference Paper
Full-text available
Security awareness is big business: virtually every organization in the Western world provides some form of awareness or training, mostly bought from external vendors. However, studies and industry reports show that these programs have little to no effect in terms of changing the security behavior of employees. We explain the conditions that enable behavior change, and identify one significant blocker in the implementation phase: not disabling existing (insecure) routines, a failure to take out the trash, prevents the embedding of new (secure) routines. Organizational Psychology offers the paradigm Intentional Forgetting (IF) and associated tools for replacing old (insecure) behaviors with new (secure) ones by identifying and eliminating the different cues (sensoric, routine-based, time- and space-based, as well as situational-strength cues) that trigger old behavior. We introduce the underlying theory, give examples of successful application in safety contexts, and show how its application leads to effective behavior change by reducing the information that needs to be transmitted to employees and suppressing obsolete routines.
... Also, poor usability hinders the adoption of security and privacy tools. For instance, user interface design problems lead to users' failure to utilize secure tools for creating PGP encrypted emails [18,58]. However, some of the secure messaging applications (e.g., WhatsApp and Signal) hide the encryption details, and the users are unable to perform the authentication ceremony without adequate instructions [52]. ...
Article
Full-text available
Gmail’s confidential mode enables a user to send confidential emails and control access to their content through setting an expiration time and passcode, pre-expiry access revocation, and prevention of email forwarding, downloading, and printing. This paper aims to understand user perceptions and motivations for using Gmail’s confidential mode (GCM). Our structured interviews with 19 Gmail users at UNC Charlotte show that users utilize this mode to share their private documents with recipients and perceive that this mode encrypts their emails and attachments. The most commonly used feature of this mode is the default time expiration of one week, and the least used feature is the pre-expiry access revocation. Our analysis suggests several design improvements.
... The vulnerability and dependence of digital system end users are often discussed in the computer science literature. Under the famous "Why Johnny can't encrypt" lemma, several studies demonstrate users' reticence to adopt information security measures mostly because of the limited usability of available solutions (Whitten and Tygar 1999;Sheng et al. 2006;Ruoti et al. 2015). These problems affect individual users as well as entire industries, nation-states, and corporations, as pointed out by scholars working in the field of information security economics (Anderson and Moore 2006). ...
Book
When we click on a picture of a shopping cart it connects a complex set of technologies to represent a simple idea that we’re all familiar with. A heart icon under a photo is understood as an expression of interest or appreciation. Digital metaphors like these are everywhere in our lives, but they aren’t just a clever way to describe technology. Metaphors like “the cloud” are also changing the way we think. They are the linchpin for a constant feedback loop between us and the digital technologies we consume. Meaningful Technologies focuses on the feedback loop between digital metaphors and technology: its meaning, how it changes over time, and how it impacts learning and attention. Uniting approaches from philosophy and the cognitive and computer sciences, the authors examine digital technologies, especially social media and smartphone apps like Facebook, Snapchat, and TikTok to offer a systematic reconsideration of the ways in which digital technologies impact our lives both individually and collectively. Eric Chown is the Sarah and James Bowdoin Professor of Digital and Computational Studies at Bowdoin College. Fernando Nascimento is Assistant Professor in Digital and Computational Studies at Bowdoin College.
Chapter
Pretty Good Privacy (PGP) is well-known in the civil rights community – its author Phil Zimmerman was sued for violating US export controls, and Edward Snowden used PGP to secure his email communication with Glenn Greenwald. This chapter will concentrate on OpenPGP as a universal cryptographic data format. It can be used to encrypt files in an operating system, sign software updates, or communicate securely via email. Like any other cryptographic data format, it has been subject to attacks. Attacks on the data format itself are covered in this chapter, and attacks on OpenPGP-based email encryption are covered in chapter 18.
Chapter
Massive amounts of data on human beings can now be analyzed. Pragmatic purposes abound, including selling goods and services, winning political campaigns, and identifying possible terrorists. Yet 'big data' can also be harnessed to serve the public good: scientists can use big data to do research that improves the lives of human beings, improves government services, and reduces taxpayer costs. In order to achieve this goal, researchers must have access to this data - raising important privacy questions. What are the ethical and legal requirements? What are the rules of engagement? What are the best ways to provide access while also protecting confidentiality? Are there reasonable mechanisms to compensate citizens for privacy loss? The goal of this book is to answer some of these questions. The book's authors paint an intellectual landscape that includes legal, economic, and statistical frameworks. The authors also identify new practical approaches that simultaneously maximize the utility of data access while minimizing information risk.
Article
We investigate the privacy practices of labor organizers in the computing technology industry and explore the changes in these practices as a response to remote work. Our study is situated at the intersection of two pivotal shifts in workplace dynamics: (a) the increase in online workplace communications due to remote work, and (b) the resurgence of the labor movement and an increase in collective action in workplaces-especially in the tech industry, where this phenomenon has been dubbed the tech worker movement. The shift of work-related communications to online digital platforms in response to an increase in remote work is creating new opportunities for and risks to the privacy of workers. These risks are especially significant for organizers of collective action, with several well-publicized instances of retaliation against labor organizers by companies. Through a series of qualitative interviews with 29 tech workers involved in collective action, we investigate how labor organizers assess and mitigate risks to privacy while engaging in these actions. Among the most common risks that organizers experienced are retaliation from their employer, lateral worker conflict, emotional burnout, and the possibility of information about the collective effort leaking to management. Depending on the nature and source of the risk, organizers use a blend of digital security practices and community-based mechanisms. We find that digital security practices are more relevant when the threat comes from management, while community management and moderation are central to protecting organizers from lateral worker conflict. Since labor organizing is a collective rather than individual project, individual privacy and collective privacy are intertwined, sometimes in conflict and often mutually constitutive. 
Notions of privacy that solely center individuals are often incompatible with the needs of organizers, who noted that safety in numbers could only be achieved when workers presented a united front to management. Based on our interviews, we identify key topics for future research, such as the growing prevalence of surveillance software and the needs of international and gig worker organizers. We conclude with design recommendations that can help create safer, more secure and more private tools to better address the risks that organizers face.
Conference Paper
The importance of designing for actual security, i.e., security in practice, is underlined by previous work. However, clear design principles for actual security are missing. This paper reports early work on establishing such design principles based on heuristic evaluations of security-enhancing applications. Seven experts evaluated the actual security of four applications for secure email communication. Based on a qualitative analysis of the experts’ findings, we formulate six design principles for actual security that apply to secure email, secure communication, and other security-enhancing applications.
Article
While recent research on intelligent transportation systems including vehicular communication systems has focused on technical aspects, little research work has been conducted on drivers’ privacy perceptions and preferences. Understanding the driver’s privacy perceptions and preferences will allow researchers to design usable privacy and identity management systems offering user privacy choices and controls for intelligent transportation systems. We conducted in-depth semi-structured interviews with 17 Swedish drivers to analyse their privacy perceptions and preferences for intelligent transportation systems, particularly for user control and for privacy trade-offs with cost, safety and usability. We also compare our results from the interviews with Swedish drivers with results from interviews that we conducted previously with South African drivers. Our cross-cultural comparison shows that perceived privacy implications, the drivers’ willingness to share location information under certain conditions with other parties, as well as their appreciation of Privacy Enhancing Technologies differ significantly across drivers with different cultural backgrounds. We further discuss the cultural impact on privacy preferences, including those for privacy trade-offs, and the implications of our results for usable privacy-enhancing Identity Management for future vehicular communication systems. In particular, we provide recommendations for suitable pre-defined privacy options to be offered to users with different cultural backgrounds enabling them to easily make privacy-related control choices.
Chapter
Considering the cyber-threat landscape over the past few years, the security community has repeatedly advocated the use of encryption for ensuring the privacy and integrity of users’ data and communications. As a result, we have witnessed growth in the number and usage of secure messaging tools. End-to-end encryption has been adopted by several existing and popular messaging apps. Email service providers have made efforts to integrate encryption as a core feature rather than treating it as optional. Recently, the pandemic has only reinforced this belief, causing us to rethink communication on a global scale and to place special emphasis on its security. Despite the advances in research on usable security, much of this software still violates best practices in UI design and uses ineffective design strategies to communicate, and let users control, the security status of a message. Prior research indicates that many users have difficulty using encryption tools correctly and confidently. They either lack an understanding of encryption or knowledge of its features. The paper investigates usable-security design guidelines and models that lead to a better user experience for message encryption software. The study includes qualitative and quantitative research methods, conducted on a group of users, to discover the different factors that affect the user experience of encryption in everyday life. In the meantime, it also uncovers issues that current users are facing. Several recommendations and practical guidelines are outlined; these can be used by practitioners to make encryption more usable and thus improve the overall user experience of the software. Keywords: Information security and privacy; Security; Email security; Encryption
Chapter
Today people depend on technology, but often do not take the necessary steps to prioritize privacy and security. Researchers have been actively studying usable security and privacy to enable better response and management. A breadth of research focuses on improving the usability of tools for experts and organizations. Studies that look at non-expert users tend to analyze the experience for a device, software, or demographic. There is a lack of understanding of security and privacy among average users, regardless of technology, age, gender, or demographic. To address this shortcoming, we surveyed 47 publications in the usable security and privacy space. The work presented here uses qualitative text analysis to find major themes in user-focused security research. We found that a user’s misunderstanding of technology is central to risky decision-making. Our study highlights trends in the research community and remaining work. This paper contributes to this discussion by generalizing key themes across user experience in usable security and privacy. Keywords: Cybersecurity; Human factors; Information security; Privacy; Usability
Article
Supporting users with secure password creation is a well-explored yet unresolved research topic. A promising intervention is the password meter, i.e. providing feedback on the user's password strength as and when it is created. However, findings related to the password meter's effectiveness are varied. An extensive literature review revealed that, besides password feedback, effective password meters often include: (a) feedback nudges to encourage stronger passwords choices and (b) additional guidance. A between-subjects study was carried out with 645 participants to test nine variations of password meters with different types of feedback nudges exploiting various heuristics and norms. This study explored differences in resulting passwords: (1) actual strength, (2) memorability, and (3) user perceptions. The study revealed that password feedback, in combination with a feedback nudge and additional guidance, labelled a hybrid password meter, was generally more efficacious than either intervention on its own, on all three metrics. Yet, the type of feedback nudge targeting either the person, the password creation task, or the social context, did not seem to matter much. The meters were nearly equally efficacious. Future work should explore the long-term effects of hybrid password meters in real-life settings to confirm the external validity of these findings.
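The hybrid meter characterised above, strength feedback plus additional guidance, can be sketched as follows. The scoring heuristic, labels, and guidance strings are invented for illustration and are far simpler than the meters evaluated in the study (real meters typically use estimators such as zxcvbn rather than character-class counting):

```python
import string

def meter(password: str):
    """Toy hybrid password meter: returns a strength label plus guidance."""
    score = 0
    if len(password) >= 12:
        score += 1
    if any(c.islower() for c in password) and any(c.isupper() for c in password):
        score += 1
    if any(c.isdigit() for c in password):
        score += 1
    if any(c in string.punctuation for c in password):
        score += 1

    labels = {0: "very weak", 1: "weak", 2: "fair", 3: "strong", 4: "very strong"}

    guidance = []  # the "additional guidance" component of a hybrid meter
    if len(password) < 12:
        guidance.append("use at least 12 characters")
    if not any(c in string.punctuation for c in password):
        guidance.append("add a symbol")
    return labels[score], guidance

print(meter("correct horse battery staple!"))  # → ('fair', [])
```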
Chapter
Full-text available
An increasing amount of sensitive information is being communicated and stored online. Frequent reports of data breaches and sensitive data disclosures underscore the need for effective technologies that users and administrators can deploy to protect sensitive data. Privacy-enhancing technologies can control access to sensitive information to prevent or limit privacy violations. This chapter focuses on some of the technologies that prevent unauthorized access to sensitive information. These technologies include secure messaging, secure email, HTTPS, two-factor authentication, and anonymous communication. Usability is an essential component of a security evaluation because human error or unwarranted inconvenience can render the strongest security guarantees meaningless. Quantitative and qualitative studies from the usable security research community evaluate privacy-enhancing technologies from a socio-technical viewpoint and provide insights for future efforts to design and develop practical techniques to safeguard privacy. This chapter discusses the primary privacy-enhancing technologies that the usable security research community has analyzed and identifies issues, recommendations, and future research directions.
Conference Paper
Full-text available
Although end-to-end encryption (E2EE) is more widely available than ever before, many users remain confused about its security properties. As a result, even users with access to E2EE tools turn to less secure alternatives for sending private information. To investigate these issues, we conducted a 357-participant online user study analyzing how explanations of security impact user perceptions. In a between-subjects design, we varied the terminology used to detail the security mechanism, whether encryption was on by default, and the prominence of security in an app-store-style description page. We collected participants' perceptions of the tool's utility for privacy, security against adversaries, and whether use of the tool would be seen as "paranoid". Compared to "secure", describing the tool as "encrypted" or "military-grade encrypted" increased perceptions that it was appropriate for privacy-sensitive tasks, whereas describing it more precisely as "end-to-end encrypted" did not. However, "military-grade encrypted" was also associated with a greater perception of tool use as paranoid. Overall, we find that, compared to prior work from 2006, the social stigma associated with encrypted communication has largely disappeared.
Chapter
We now aim to develop an awareness of what can go wrong on the web, through browser-server interactions as web resources are transferred and displayed to users. When a browser visits a web site, the browser is sent a page (HTML document). The browser renders the document by first assembling the specified pieces and executing embedded executable content (if any), perhaps being redirected to other sites. Much of this occurs without user involvement or understanding. Documents may recursively pull in content from multiple sites (e.g., in support of the Internet’s underlying advertising model), including scripts (active content). Two security foundations discussed in this chapter are the same-origin policy (SOP), and how HTTP traffic is sent over TLS (i.e., HTTPS). HTTP proxies and HTTP cookies also play important roles. As representative classes of attacks, we discuss cross-site request forgery, cross-site scripting and SQL injection. Many aspects of security from other chapters tie in to web security.
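Of the attack classes listed above, SQL injection is the easiest to demonstrate in a few lines. The table and values below are invented for the example; the parameterized form shows the standard defence of passing user input as data rather than concatenating it into the query string:

```python
import sqlite3

# In-memory database with one secret row (all names/values are illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, secret TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query logic.
unsafe = con.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the input strictly as a value.
safe = con.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(unsafe)  # → [('s3cret',)]  (the injected OR clause matched every row)
print(safe)    # → []             (no user is literally named "nobody' OR '1'='1")
```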
Article
Full-text available
Access control is an indispensable part of any information sharing system. Collaborative environments introduce new requirements for access control, which cannot be met by using existing models developed for non-collaborative domains. We have developed a new access control model for meeting these requirements. The model is based on a generalized editing model of collaboration, which assumes that users interact with a collaborative application by concurrently editing its data structures. It associates fine-grained data displayed by a collaborative application with a set of collaboration rights and provides programmers and users a multi-dimensional, inheritance-based scheme for specifying these rights. The collaboration rights include traditional read and write rights and several new rights such as viewing rights and coupling rights. The inheritance-based scheme groups subjects, protected objects, and access rights; allows each component of an access specification to refer to both groups...
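The inheritance-based scheme can be sketched as follows, under the assumption that rights granted on a container apply to its contents unless overridden by a nearer grant. The class names and the set of rights are illustrative stand-ins, not the model's actual API:

```python
# Rights loosely modeled on the abstract: traditional read/write plus
# the newer viewing and coupling rights.
RIGHTS = {"read", "write", "view", "couple"}

class Node:
    """A protected object in a hierarchy; grants are inherited from parents."""

    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.grants = {}  # subject -> set of rights granted at this node

    def grant(self, subject, *rights):
        self.grants.setdefault(subject, set()).update(RIGHTS & set(rights))

    def allowed(self, subject, right):
        # Walk up the object hierarchy; the nearest explicit grant wins.
        node = self
        while node is not None:
            if subject in node.grants:
                return right in node.grants[subject]
            node = node.parent
        return False

doc = Node("document")
section = Node("section-1", parent=doc)
doc.grant("alice", "read", "write")
section.grant("bob", "view")             # bob may view this section only

assert doc.allowed("alice", "write")
assert section.allowed("alice", "write")  # inherited from the document
assert section.allowed("bob", "view")
assert not doc.allowed("bob", "view")     # no grant at or above the document
```

Grouping grants at the container level keeps specifications small (one grant covers a whole subtree), while per-object overrides supply the fine granularity the abstract describes.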
Article
We present a detailed case study of a computer scientist learning and using the Cognitive Walkthrough (CW) technique to assess a multimedia authoring tool. We then compare the predictions produced by the analysis to the usability problems actually found in empirical usability tests. In this case, the concepts of Cognitive Walkthrough were learnable by the analyst who had little prior training in psychology or Human-Computer Interaction, flexible enough for the analyst to make reasonable modifications to the technique to fit the design situation, but disappointingly ineffective in predicting actual user problems. We present several hypotheses about the cause of low effectiveness, which suggest that additional knowledge, currently tacit in CW-experts, could be used to improve the technique. In addition, the emergent picture of the process this evaluator went through to produce his analysis sets realistic expectations for other novice analysts who contemplate learning and using Cognitive Walkthroughs. © 1997 John Wiley & Sons, Inc.
This paper reports the results of three iterative usability tests of a security application as it evolved through the application development process and highlights the use of several methodological techniques: 1) reusable color foil prototypes of application panels as an alternative to developing online prototypes during short development cycles, 2) field tests as a complement to laboratory tests, 3) iterative testing of an evolving prototype, and 4) analysis of dollar value of usability work. The techniques used represent an attempt to apply usability engineering to system design (Whiteside, Bennett, and Holtzblatt; 1988) and to provide management with a dollar value estimate of human factors work (Mantei, 1988). Significant improvements in end user performance and satisfaction occurred across the three iterative tests (field prototype test, laboratory prototype test, and laboratory integration test) conducted across 7 months with 27 participants. The product usability objective was met during the third test. By using the reusable foil prototypes of the interface panels, usability staff were able to efficiently and effectively identify problems, make design changes, and retest the panels. The field test furnished unique data necessary to understanding end user issues. Iterative testing provided the opportunity to test the impact of changes made to the interface and a reliability check on previous results. The methodology for computing the value of usability work provided a feasible way of analyzing the cost benefit of the human factors work.
Conference Paper
We introduce the term user-centered security to refer to security models, mechanisms, systems, and software that have usability as a primary motivation or goal. We discuss the history of usable secure systems, citing both past problems and present studies. We develop three categories for work in user-friendly security: applying usability testing and techniques to secure systems, developing security models and mechanisms for user-friendly systems, and considering user needs as a primary design goal at the start of secure system development. We discuss our work on user-centered authorization, which started with a rules-based authorization engine (MAP) and will continue with Adage. We outline the lessons we have learned to date and how they apply to our future work. We evaluate the pros and cons of this effort, as a precursor to further work in this area, and include a brief description of our current work in user-centered authorization. As our conclusion points out, we hope to see more work in user-centered security in the future; work that enables users to choose and use the protection they want, that matches their intuitions about security and privacy, and that supports the policies that teams and organizations need and use to get their work done.
Article
Analyzes trends in Internet security through an investigation of incidents reported to the CERT Coordination Center. Outcomes include a taxonomy for classifying Internet attacks and incidents, analysis of incident records available at the CERT/CC, and recommendations to improve Internet security. Thesis (Ph. D.)--Carnegie Mellon University, 1997. Includes bibliographical references (leaves 243-246). Photocopy.
Article
Human factors are perhaps the greatest current barrier to effective computer security. Most security mechanisms are simply too difficult and confusing for the average computer user to manage correctly. Designing security software that is usable enough to be effective is a specialized problem, and user interface design strategies that are appropriate for other types of software will not be sufficient to solve it. In order to gain insight and better define this problem, we studied the usability of PGP 5.0, which is a public key encryption program mainly intended for email privacy and authentication. We chose PGP 5.0 because it has a good user interface by conventional standards, and we wanted to discover whether that was sufficient to enable non-programmers who know little about security to actually use it effectively. After performing both user testing and a cognitive walkthrough analysis, we conclude that PGP 5.0 is not sufficiently usable to provide effective security for most users. In the course of our study, we developed general principles for evaluating the usability of computer security utilities and systems. This study is of interest not only because of the conclusions that we reach, but also because it can serve as an example of how to evaluate the usability of computer security software. This publication was supported by Contract No. 102590-98-C-3513 from the United States Postal Service. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the United States Postal Service. Keywords: security, human-computer interaction, usability, public key cryptography, electronic mail, PGP.
We present a detailed case study, drawn from many information sources, of a computer scientist learning and using Cognitive Walkthrough to assess a multi-media authoring tool. We then compare the predictions produced by the analysis to the usability problems actually found through empirical usability tests. This study results in several clear messages to both system designers and to developers of evaluation techniques: (1) the Cognitive Walkthrough technique is currently learnable and usable, but (2) it may not predict most usability errors found empirically, and (3) several elaborations of the technique might improve its effectiveness. In addition, the emergent picture of the process this evaluator went through to produce his analysis sets realistic expectations for other novice analysts who contemplate learning and using Cognitive Walkthroughs. 1. Introduction Information systems in our networked and multimedia age must be usable by the people they are intended to serve. From the pr...
Designers of cryptographic systems are at a disadvantage to most other engineers, in that information on how their systems fail is hard to get: their major users have traditionally been government agencies, which are very secretive about their mistakes. In this article, we present the results of a survey of the failure modes of retail banking systems, which constitute the next largest application of cryptology. It turns out that the threat model commonly used by cryptosystem designers was wrong: most frauds were not caused by cryptanalysis or other technical attacks, but by implementation errors and management failures. This suggests that a paradigm shift is overdue in computer security; we look at some of the alternatives, and see some signs that this shift may be getting under way. 1 Introduction Cryptology, the science of code and cipher systems, is used by governments, banks and other organisations to keep information secure. It is a complex subject, and its national security ov...
Public-key cryptography has low infrastructural overhead because public-key users bear a substantial but hidden administrative burden. A public-key security system trusts its users to validate each others' public keys rigorously and to manage their own private keys securely. Both tasks are hard to do well, but public-key security systems lack a centralized infrastructure for enforcing users' discipline. A compliance defect in a cryptosystem is such a rule of operation that is both difficult to follow and unenforceable. This paper presents five compliance defects that are inherent in public-key cryptography; these defects make public-key cryptography more suitable for server-to-server security than for desktop applications. 1 Introduction Public-key cryptography is uniquely well-suited to certain parts of a secure global network. It is widely accepted that public-key security systems are easier to administer, more secure, less trustful, and have better geographical reach, than symmetric...
Stephen Kent. Security. In More Than Screen Deep: Toward Every-Citizen Interfaces to the Nation's Information Infrastructure. National Academy Press, Washington, D.C., 1997.
Simson Garfinkel. PGP: Pretty Good Privacy. O'Reilly and Associates, 1995.
Mary Ellen Zurko and Richard T. Simon. User-Centered Security. New Security Paradigms Workshop, 1996.
Matt Bishop. UNIX Security: Threats and Solutions. Presentation to SHARE 86.0, March 1996. At http://seclab.cs.ucdavis.edu/~bishop/scriv/1996-share86.pdf.