Article

Increasing Security Sensitivity With Social Proof: A Large-Scale Experimental Confirmation

Authors: Sauvik Das, Adam D. I. Kramer, Laura A. Dabbish, Jason I. Hong

Abstract

One of the largest outstanding problems in computer security is the need for higher awareness and use of available security tools. One promising but largely unexplored approach is to use social proof: by showing people that their friends use security features, they may be more inclined to explore those features, too. To explore the efficacy of this approach, we showed 50,000 people who use Facebook one of 8 security announcements - 7 variations of social proof and 1 non-social control - to increase the exploration and adoption of three security features: Login Notifications, Login Approvals, and Trusted Contacts. Our results indicated that simply showing people the number of their friends who used security features was most effective, and drove 37% more viewers to explore the promoted security features compared to the non-social announcement (thus raising awareness). In turn, as social announcements drove more people to explore security features, more people who saw social announcements adopted those features, too. However, among those who explored the promoted features, there was no difference in the adoption rate between those who viewed a social versus a non-social announcement. In a follow-up survey, we confirmed that the social announcements raised viewers' awareness of available security features.
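As an illustration of the kind of condition comparison the abstract describes, the sketch below runs a pooled two-proportion z-test on hypothetical exploration counts for a social-proof announcement versus a non-social control. All counts are invented to produce roughly the reported 37% relative lift; they are not the study's data.

```python
# Hypothetical sketch: comparing exploration rates between a social-proof
# announcement and a non-social control. Counts are invented, not the
# study's data; they are sized to show a ~37% relative lift.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

social_explored, social_viewers = 685, 6250    # ~11.0% explored
control_explored, control_viewers = 500, 6250  # ~8.0% explored

z, p = two_proportion_ztest(social_explored, social_viewers,
                            control_explored, control_viewers)
lift = (social_explored / social_viewers) / (control_explored / control_viewers) - 1
print(f"relative lift = {lift:.0%}, z = {z:.2f}, p = {p:.3g}")
```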


... This attitude reflects how VPN providers tend to write about the practice, emphasizing and sometimes misrepresenting the risks of using Tor to recommend their VPN for accessing the dark net. Our document analysis suggests that background information establishes normative beliefs when users look to others for appropriate security behavior (i.e., social proof [11][12][13]), contributing to the spread of this kind of security folklore. The importance of normative beliefs, which do not rely on knowledge about the practice's purpose and effects, explains why many Tor over VPN users expect general security benefits. ...
... Fishbein and Ajzen [22, p. 131] differentiate two types of norms: injunctive norms, which concern other individuals' or groups' moral judgment on behavior, and descriptive norms, which concern the perception of other individuals' or groups' behavior. Related Usable Security research [12,13] also uses the term social proof for these descriptive norms to describe how people look to others to learn the appropriate behavior. ...
... Descriptive normative beliefs concern people's perceptions of typically performed behavior based on observations. Related security research refers to this behavior as social proof [11][12][13]. Our analysis focused on these descriptive norms, i.e., what users would find in information sources when they look for social proof about the appropriate use of Tor. ...
Article
Users face security folklore in their daily lives in the form of security advice, myths, and word-of-mouth stories. Using a VPN to access the Tor network, i.e., Tor over VPN, is an interesting example of security folklore because of its inconclusive security benefits and its occurrence in pop-culture media. Following the Theory of Reasoned Action, we investigated the phenomenon with three studies: (1) we quantified the behavior on real-world Tor traffic and measured a prevalence of 6.23%; (2) we surveyed users' intentions and beliefs, discovering that they try to protect themselves from the Tor network or increase their general security; and (3) we analyzed online information sources, suggesting that perceived norms and ease-of-use play a significant role while behavioral beliefs about the purpose and effect are less crucial in spreading security folklore. We discuss how to communicate security advice effectively and combat security misinformation and misconceptions.
... Similarly, others found that individuals often work together within their social network of friends, family members, and coworkers to resolve privacy issues [9], [10]. Social influence exerted by trusted individuals has a significant impact on behavior [11]. ...
... While previous studies have confirmed the value of leveraging social processes in privacy decision-making [11], our study is one of the first to conceptualize a tool that incorporates these social features into the design of an app prototype that helps users collaboratively manage their mobile app privacy permissions. ...
... Goecks et al. proposed a privacy management system that helped individuals manage their cookies based on feedback from the community of users who have previously visited the site [25]. In a study to observe how user actions are influenced by their friends, Das et al. [11] presented Facebook users with announcements prompting them to check on extra security features available to them. They found that an announcement that included a message saying a user's Facebook friends used the security feature being advertised influenced the user receiving the message to explore the feature but not to act on it. ...
... Sparks and facilitators also pose interesting opportunities for S&P, as few end-users have both high motivation and high ability to engage in pro-S&P behaviors. An example of a spark that encourages S&P behaviors is Das et al. (2014b)'s social proof notifications, which informed Facebook users of the number of their friends who used optional security tools on Facebook. An example of an effective facilitator that simplifies S&P behaviors comes from Akhawe and Felt (2013)'s redesign of the Chrome SSL warning to simplify exiting out of suspicious webpages. ...
... In a later experiment with 50,000 Facebook users, Das et al. (2014b) found that increasing the observability of the use of optional security and privacy tools was significantly more likely to result in end-users exploring the adoption of those tools themselves. Prior work has also found that social sharing of pertinent media information can increase awareness of S&P threats and the mitigation strategies thereof (Das et al., 2018b). ...
... intentions in both the theory of reasoned action and the theory of planned behavior (Ajzen, 1991; Fishbein and Ajzen, 1977), as well as in technology adoption in the DoI (Rogers, 1962), the technology acceptance model (TAM) (Davis, 1989), and the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003). In the context of security and privacy, prior work suggests that subjective norms and social influences can strongly influence people's motivation to accept and/or adopt expert-recommended security and privacy advice (Rader et al., 2012; Das et al., 2014a; 2014b). ...
Book
Cybersecurity and Privacy (S&P) unlock the full potential of computing. Use of encryption, authentication, and access control, for example, allows employees to correspond with professional colleagues via email with reduced fear of leaking confidential data to competitors or cybercriminals. It also allows, for example, parents to share photos of children with remote loved ones over the Internet with reduced fear of this data reaching the hands of unknown strangers, and anonymous whistleblowers to share information about problematic practices in the workplace with reduced fear of being outed. Conversely, failure to employ appropriate S&P measures can leave people and organizations vulnerable to a broad range of threats. In short, the security and privacy decisions we make on a day-to-day basis determine whether the data we share, manipulate, and store online is protected from theft, surveillance, and exploitation. How can end-users be encouraged to accept recommended S&P behavior from experts? In this monograph, prior art in human-centered S&P is reviewed, and three barriers to end-user acceptance of expert recommendations have been identified. These three barriers make up what we call the “Security & Privacy Acceptance Framework” (SPAF). The barriers are: (1) awareness: i.e., people may not know of relevant security threats and appropriate mitigation measures; (2) motivation: i.e., people may be unwilling to enact S&P behaviors because, e.g., the perceived costs are too high; (3) and, ability: i.e., people may not know when, why, and how to effectively implement S&P behaviors. This monograph also reviews and critically analyzes prior work that has explored mitigating one or more of the barriers that make up the SPAF. Finally, using the SPAF as a lens, discussed is how the human-centered S&P community might re-orient to encourage widespread end-user acceptance of pro-S&P behaviors by employing integrative approaches that address each one of the awareness, motivation, and ability barriers.
... Das et al. describe security sensitivity as comprised of users' awareness of both the relevant threats and the means to combat them, along with motivation to comply with security advice and knowledge of how to use tools that will protect systems across the technology stack [15,16]. These studies found that such compliance with advice and conformity with security practices are driven in part by social influences [16,18] in the form of social proof via Facebook posts [17,18], conversations sparked by personally experiencing a privacy or security breach [16] and hearing or seeing news about privacy and security breaches [19]. Building particularly on this work with security sensitivity, we developed and validated a relatively short measure of security attitudes called SA-6 [30]. ...
... In our past work on SA-6 [30], we generated an initial set of 200+ candidate scale items, drawn from empirical research by Das et al. [15][16][17][18][19], Egelman and Peer [25], and other work in usable security and in the psychology of computer use [35,74], as well as from our own expertise in the subject. These items comprised statements of cybersecurity attitudes rated on a 5-point Likert-type agreement scale (1=Strongly disagree, 5=Strongly agree). ...
... Sample items: "I generally am aware of methods to send email or text messages that can't be spied on" [15][16][17][18]; "I have much bigger problems than my risk of a security breach" [16,22]; "I need to change my security behaviors to improve my protection against security threats (such as phishing, computer viruses, identity theft, password hacking)." ...
Preprint
Full-text available
We present SA-13, the 13-item Security Attitude inventory. We develop and validate this assessment of cybersecurity attitudes by conducting an exploratory factor analysis, confirmatory factor analysis, and other tests with data from a U.S. Census-weighted Qualtrics panel (N=209). Beyond a core six indicators of Engagement with Security Measures (SA-Engagement, three items) and Attentiveness to Security Measures (SA-Attentiveness, three items), our SA-13 inventory adds indicators of Resistance to Security Measures (SA-Resistance, four items) and Concernedness with Improving Compliance (SA-Concernedness, three items). SA-13 and the subscales exhibit desirable psychometric qualities; and higher scores on SA-13 and on the SA-Engagement and SA-Attentiveness subscales are associated with higher scores for security behavior intention and for self-reported recent security behaviors. SA-13 and the subscales are useful for researchers and security awareness teams who need a lightweight survey measure of user security attitudes. The composite score of the 13 indicators provides a compact measurement of cybersecurity decisional balance.
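To make the subscale structure concrete, here is a minimal scoring sketch under stated assumptions: the item-to-subscale positions are hypothetical placeholders that only mirror the subscale sizes named in the abstract, and the simple mean composite stands in for the paper's actual scoring rules.

```python
# Minimal sketch of scoring SA-13-style subscales from 5-point Likert
# responses (1 = Strongly disagree ... 5 = Strongly agree). The item
# groupings below only mirror the subscale sizes in the abstract; the
# actual item texts, order, and scoring rules are defined in the paper.
from statistics import mean

SUBSCALES = {
    "SA-Engagement": [0, 1, 2],        # 3 items (hypothetical positions)
    "SA-Attentiveness": [3, 4, 5],     # 3 items
    "SA-Resistance": [6, 7, 8, 9],     # 4 items
    "SA-Concernedness": [10, 11, 12],  # 3 items
}

def score_sa13(responses):
    """Return per-subscale means and a simplified composite over all 13 items."""
    assert len(responses) == 13 and all(1 <= r <= 5 for r in responses)
    scores = {name: mean(responses[i] for i in idx)
              for name, idx in SUBSCALES.items()}
    scores["SA-13 composite"] = mean(responses)  # simplified composite
    return scores

# One hypothetical respondent:
print(score_sa13([4, 5, 4, 3, 4, 4, 2, 2, 1, 2, 3, 3, 2]))
```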
... For instance, only 28% of adults surveyed could correctly identify an example of two-factor authentication, and only 24% were familiar with the concept of private browsing [49]. The report also found that younger adults (ages 18-29) were significantly more likely to answer questions about digital privacy and security correctly, compared to older adults (ages 65 and older). The lack of general knowledge around digital privacy and security, combined with the prevalent use of personal digital devices, creates a critical need for innovative approaches that fill knowledge gaps in a way that helps protect individuals from potential privacy and security threats. ...
... As such, researchers have begun to identify the importance of social support in managing individual and collective digital privacy and security (e.g., [18,19,35]). Several studies have demonstrated that social influence plays an important role in gaining knowledge and also changing an individual's privacy behaviors [23,28,45]. ...
... Their study revealed that people with low self-efficacy on privacy matters were more open to adopting privacy practices when their social network influences them. Das et al. [19,20] found evidence that an individual can be motivated to adopt a security feature merely by viewing how many of their friends used that feature, which is referred to as social proof. Similarly, Tabassum et al. [47] sought to understand users' perspectives about privacy and trust in connection to sharing smart home devices with individuals living outside of the home and discovered that smart device owners took a community-based approach to the safety and care of their home. ...
Article
Full-text available
Managing digital privacy and security is often a collaborative process, where groups of individuals work together to share information and give one another advice. Yet, this collaborative process is not always reciprocal or equally shared. In many cases, individuals with more expertise help others without receiving help in return. Therefore, we studied the phenomenon of "Tech Caregiving" by surveying 20 groups (112 individuals) comprised of friends, family members, and/or co-workers who identified at least one member of their group as someone who provides informal technical support to the people they know. We found that tech caregivers reported significantly higher levels of power use and self-efficacy for digital privacy and security, compared to tech caregivees. However, caregivers and caregivees did not differ based on their self-reported community collective-efficacy for collaboratively managing privacy and security together as a group. This finding demonstrates the importance of tech caregiving and community belonging in building community collective efficacy for digital privacy and security. We also found that caregivers and caregivees most often communicated via text message or phone when coordinating support, which was most frequently needed when troubleshooting or setting up new devices. Meanwhile, discussions specific to privacy and security represented only a small fraction of the issues for which participants gave or received tech care. Thus, we conclude that educating tech caregivers on how to provide privacy and security-focused support, as well as designing technologies that facilitate such support, has the potential to create positive network effects towards the collective management of digital privacy and security.
... Several recent human-computer interaction studies have explored community approaches to security and privacy, such as social influence [12,28] and social support [2,9,22,27,44,46]. Das et al. found that close social members may influence users to adopt similar security behaviors and have conversations about security features [12]. ...
... Several recent human-computer interaction studies have explored community approaches to security and privacy, such as social influence [12,28] and social support [2,9,22,27,44,46]. Das et al. found that close social members may influence users to adopt similar security behaviors and have conversations about security features [12]. Mendel and Toch have shown that social ties influence users' susceptibility to adopt security and privacy behaviors more than formal sources do [28]. ...
... For example, when an application requests permission that is not legitimate, the user should not trust the developer and designer of the application. Second, social help may encourage actual conversations with a family member about security features, which are key enablers of a socially-driven behavioral change and essential for online safety learning [12]. Third, older adults prioritize social resources based on availability rather than cybersecurity expertise (e.g., developers and designers), and they may avoid using the internet for cybersecurity information [33]. ...
... For this purpose, we introduce a 6-item self-report measure of security attitudes: SA-6. Our measure is based on user-centered empirical and theoretical studies of awareness, motivation to use and knowledge of expert-recommended security tools and practices (security sensitivity) [20][21][22][23][24]. Using principles of psychological scale development [28,39,46,53], we generate 48 candidate items that on their face corresponded to prior work on security attitudes and that pilot testers found to be unambiguous and easily answered. ...
... In the field of usable security, end-user security sensitivity is defined by Das as "the awareness of, motivation to use, and knowledge of how to use security tools" and practices [20]. Das and collaborators based this construct on empirical findings in interview studies that many people believe themselves in no danger of falling victim to a security breach and are unaware of the existence of tools to protect them against those threats; also, they perceive the inconvenience and cost to their time and attention of using these tools and practices as outweighing the harm of experiencing a security breach; and, they think these measures are too difficult to use or lack the knowledge to use them effectively [20][21][22][23]. ...
... Sample items: "I care very much about the issue of security threats (such as phishing, computer viruses, identity theft, password hacking)" [20][21][22][23]; "I dread that using recommended security measures will backfire on me (such as forgetting a needed password, updated software becoming unusable, etc.)" [21,45]; "I feel guilty when I do not use recommended security measures (such as by reusing passwords, putting off software updates, etc.)." ...
Preprint
Full-text available
We present SA-6, a six-item scale for assessing people's security attitudes that we developed by following standardized processes for scale development. We identify six scale items based on theoretical and empirical research with sufficient response variance, reliability, and validity in a combined sample (N = 478) from Amazon Mechanical Turk and a university-based study pool. We validate the resulting measure with a U.S. Census-tailored Qualtrics panel (N = 209). SA-6 significantly associates with self-report measures of behavior intention and recent secure behaviors. Our work contributes a lightweight method for (1) quantifying and comparing people's attitudes toward using recommended security tools and practices, and (2) improving predictive modeling of who will adopt security behaviors.
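As a small illustration of the association the abstract reports, the sketch below correlates hypothetical SA-6 scores (means of six 5-point Likert items) with hypothetical behavior-intention scores. Every number is invented, and the paper's validation analyses are more extensive.

```python
# Hypothetical sketch: testing whether an SA-6-style attitude score
# associates with a behavior-intention score. All data are invented.
from statistics import mean
from scipy.stats import pearsonr

sa6_items = [
    [4, 4, 5, 3, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 3, 3, 2, 3, 3],
    [4, 3, 4, 4, 4, 3],
]
sa6_scores = [mean(items) for items in sa6_items]  # one score per respondent
intention_scores = [4.2, 2.5, 4.8, 3.0, 3.9]       # hypothetical intentions

r, p = pearsonr(sa6_scores, intention_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```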
... An approach that has shown promise is to leverage the social influence of informal networks to infuse expertise and exert influence on privacy and security decisions [50]. For example, several studies have shown that people tend to trust and follow privacy advice from their trusted circles, often turning to friends and family for guidance on digital privacy and security topics [24,25,50]. ...
... However, a large body of research work also demonstrated that individuals learn and change their digital privacy and security behaviors when they become aware of their close trusted circle's privacy and security practices. For example, Das et al.'s studies [25,26] demonstrated the effectiveness of "social proof" - i.e., being able to view how many friends in their social network use a specific security feature - and showed that individuals are more influenced to adopt a privacy and security feature when they witness adoption by others. ...
Preprint
Full-text available
We conducted a 4-week field study with 101 smartphone users who self-organized into 22 small groups of family, friends, and neighbors to use ``CO-oPS,'' a mobile app for co-managing mobile privacy and security. We differentiated between those who provided oversight (i.e., caregivers) and those who did not (i.e., caregivees) to examine differential effects on their experiences and behaviors while using CO-oPS. Caregivers reported higher power use, community trust, belonging, collective efficacy, and self-efficacy than caregivees. Both groups' self-efficacy and collective efficacy for mobile privacy and security increased after using CO-oPS. However, this increase was significantly stronger for caregivees. Our research demonstrates how community-based approaches can benefit people who need additional help managing their digital privacy and security. We provide recommendations to support community-based oversight for managing privacy and security within communities of different roles and skills.
... Social proof, in which people look to others for signifiers of correct behaviors [4], is an influence on security awareness and adoption [9,11] that can operate at mass scale through social computing [14,58]. A pair of studies found that social influence in Facebook friend networks affected users' likelihood to adopt a security feature, varying by the attributes of the feature (observability) and how the feature has already diffused through the network [12,13]. ...
... Sample survey items: (11) Immediately installing needed updates to the operating system and other software. (12) Setting your computing devices to automatically lock when you do not use them. ... Q4.14: Using a password, passcode, thumbprint or other method to unlock your computing devices. Q6.1: On the next page, we will present a series of statements about the use of security measures [19,21]. ...
Preprint
Full-text available
Much research has found that social influences (such as social proof, storytelling, and advice-seeking) help boost security awareness. But we have lacked a systematic approach to tracing how awareness leads to action, and to identifying which social influences can be leveraged at each step. Toward this goal, we develop a framework that synthesizes our design ideation, expertise, prior work, and new interview data into a six-step adoption process. This work contributes a prototype framework that accounts for social influences by step. It adds to what is known in the literature and the SIGCHI community about the social-psychological drivers of security adoption. Future work should establish whether this process is the same regardless of culture, demographic variation, or work vs. home context, and whether it is a reliable theoretical basis and method for designing experiments and focusing efforts where they are likely to be most productive.
... Social proof means that people rely on others' behavior to direct their own actions (Cialdini, 2009), and it serves as an important factor in influencing trust (MacCoun, 2012); such indirect information persuades users to embrace algorithms because the behavior of those around them influences their actions (Das et al., 2014). For instance, in human-AI interactions, participants tended to rely on an AI recommender system when they were informed that other people had used it (Alexander et al., 2018). ...
... There were two ways that social proof could affect trust in AI. When DMP was absent or quantitatively presented, AI was regarded as a product, and social proof affected trust in AI by providing social acceptance information about the product AI (Alexander et al., 2018;Das et al., 2014). Whereas when DMP was non-quantitatively presented, AI was regarded as a social agent, and social proof could affect trust in AI by providing social acceptance information about the social-agent AI. ...
... To answer these research questions, we conducted an in-depth user study with 19 parent-teen (ages 13-17) pairs. Participants were first asked about their current mobile privacy and security practices (RQ1). ...
... Rader et al. reported in their studies [41,42] that individuals often learn privacy strategies from their loved ones (e.g., families, friends, colleagues). Moreover, users are influenced by others' privacy behavior and adopt online safety tools to keep themselves safe online based on the advice of others [17,18,33]. In fact, teens often provide tech support within their families. ...
Article
Full-text available
Our research aims to highlight and alleviate the complex tensions around online safety, privacy, and smartphone usage in families so that parents and teens can work together to better manage mobile privacy and security-related risks. We developed a mobile application ("app") for Community Oversight of Privacy and Security ("CO-oPS") and had parents and teens assess whether it would be applicable for use with their families. CO-oPS is an Android app that allows a group of users to co-monitor the apps installed on one another's devices and the privacy permissions granted to those apps. We conducted a study with 19 parent-teen (ages 13-17) pairs to understand how they currently managed mobile safety and app privacy within their family and then had them install, use, and evaluate the CO-oPS app. We found that both parents and teens gave little consideration to online safety and privacy before installing new apps or granting privacy permissions. When using CO-oPS, participants liked how the app increased transparency into one another's devices in a way that facilitated communication, but were less inclined to use features for in-app messaging or to hide apps from one another. Key themes related to power imbalances between parents and teens surfaced that made co-management challenging. Parents were more open to collaborative oversight than teens, who felt that it was not their place to monitor their parents, even though both often believed parents lacked the technological expertise to monitor themselves. Our study sheds light on why collaborative practices for managing online safety and privacy within families may be beneficial but also quite difficult to implement in practice. We provide recommendations for overcoming these challenges based on the insights gained from our study.
... Specifically, we have been analyzing and testing how to apply Cialdini's Social Influence Theory [1] to improve end users' awareness, motivation and knowledge of cybersecurity tools and best practices. Previous work has found that social factors were responsible for nearly half of all reported changes in security behaviors, such as using a smartphone PIN or enabling a Facebook security feature [3][4][5]. We now are extending this research to a workplace context. ...
... Das et al. found support for the application of social influence theory to problems in usable privacy and security [5], notably in a large-scale study of Facebook's implementation of the "social proof" concept for the Trusted Contacts user authentication feature [4]. We now are turning our focus to the workplace as a context in which social factors can be leveraged to spur the adoption of security tools and best practices. ...
Preprint
Full-text available
Among the underappreciated roles of information technology professionals is that of "sales and marketing" for end-user security compliance. In this workshop paper, I offer ideas drawn from social psychology for communication strategies and micro-interventions that could help IT professionals on the front lines of end-user support to improve voluntary compliance with mandated security tools and best practices. I then describe our team's current work to document and analyze workplace resource sharing through questionnaires on Amazon Mechanical Turk and interviews with local IT professionals to get a better picture of workers as social actors within a mixture of enterprise and consumer systems for user authentication and authorization. We hope our work can help identify and lead to effective interventions for pain points in end-user security support.
... Security sensitivity is defined by Das as "the awareness of, motivation to use, and knowledge of how to use security tools" [3]. Das and collaborators based this construct on prior findings that many people believe themselves in no danger of falling victim to a security breach and are unaware of the existence of tools to protect them against those threats; they perceive the inconvenience and cost to their time and attention as outweighing the harm of experiencing a security breach, and they think these tools are too difficult to use or lack the knowledge to use them effectively [3][4][5]. This conception builds in turn on work from Davis et al. [6,7] on user perceptions of usefulness and ease of use, from Egelman et al. [10]'s adaptation of the Communication-Human Information Processing cognitive model to end-user security, and from Rogers' Diffusion of Innovations theory [15] of how messages spread in a social network about a "new idea." ...
... These processes of change can also be effective for those in the Contemplation stage, who are beginning to doubt their negative attitude toward change (corresponding to a statement such as "I worry about the impact of my lax security behaviors"), but the focus shifts to Self re-evaluation, combining cognitive and affective assessments of how unhealthy habits affect their self-image and confidence; followed by Self liberation and Social liberation for the Preparation/Determination stage ("I want to change" or "I need to change" statements from end users). In the latter stage, a public commitment to behavior change can be particularly effective [14], which echoes Das et al.'s findings that social influence techniques such as observable adoption of security behaviors can drive secure behavior adoption by social ties [4]. ...
Preprint
Full-text available
The continued susceptibility of end users to cybersecurity attacks suggests an incomplete understanding of why some people ignore security advice and neglect to use best practices and tools to prevent threats. A more detailed and nuanced approach can help more accurately target security interventions for end users according to their stage of intentional security behavior change. In this paper, we adapt the Transtheoretical Model of Behavior Change for use in a cybersecurity design context. We provide a visual diagram of our model as adapted from public health and cybersecurity literature. We then contribute advice for designers' use of our model in the context of human-computer interaction and the specific domain of usable privacy and security, such as for encouraging timely software updates, voluntary use of two-factor authentication and attention to password hygiene.
... Facebook groups are virtual, informal gatherings of like-minded people, making social proof easier to establish. Social proof can motivate people to act (Das et al., 2014) by convincing them that success is attainable. ...
Article
Even though tourism and hospitality employ large numbers of women and female micro-entrepreneurship plays a significant role in sustainable tourism development, the proportion of female micro-entrepreneurs is low. Particularly in developing countries, barriers to female micro- entrepreneurship remain significant. This article explores how social media platforms like Facebook can empower female tourism and hospitality micro-entrepreneurs in developing and highly tourism-dependent economies. Employing a netnographic approach, data were collected in two stages: (1) A total of 3214 posts by female micro-entrepreneurs were gathered from two Facebook groups to identify dimensions of platform empowerment; (2) semi-structured interviews with twelve members of the two groups were conducted to further explore these dimensions. The findings show four ways in which Facebook supports empowerment processes and outcomes at individual and collective levels, namely as a (1) learning resource; (2) informal entrepreneurial ecosystem; (3) self-development tool; and (4) business development exchange. By identifying the role social media platforms play in bridging social policy gaps, this study contributes knowledge that is critical for a more inclusive development of sustainable tourism.
... Social nudges refer to a social norm which supposedly favors the behavior the nudge aims to facilitate. Social nudges have been applied successfully with regard to healthy eating [29,66], online security [17], and cookie privacy [15]. The effect seems to be strongest when the social norm includes a percentage of people who show the desired behavior [84]. ...
Article
Legal frameworks rely on users to make an informed decision about data collection, e.g., by accepting or declining the use of tracking technologies. In practice, however, users hardly interact with tracking consent notices deliberately on a website-by-website basis, but usually accept or decline optional tracking technologies altogether in a habituated behavior. We explored the potential of three different nudge types (color highlighting, social cue, timer) and default settings to interrupt this auto-response in an experimental between-subjects design with 167 participants. We did not find statistically significant differences regarding the buttons clicked. Our results showed that opt-in default settings significantly decrease acceptance rates of tracking technology use. These results are a first step towards understanding the effects of different nudging concepts on users' interaction with tracking consent notices.
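For intuition about the null result described above, the sketch below shows how one might test consent-button clicks across the three nudge conditions with a chi-square test of independence. The contingency table is invented for illustration and is not the study's data.

```python
# Hypothetical sketch: chi-square test of accept/decline clicks across
# three nudge conditions (color highlight, social cue, timer). Counts
# are invented for illustration, not the study's data.
from scipy.stats import chi2_contingency

#          accepted, declined
clicks = [
    [30, 12],   # color highlighting
    [28, 14],   # social cue
    [27, 15],   # timer
]
chi2, p, dof, expected = chi2_contingency(clicks)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p here would mirror the reported null result.
```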
... Secondly, they tap into the desire to avoid the disutility associated with failing to conform with the behavioral expectations of the group. Providing social proof has been shown to change behavior in the areas of energy consumption (Brandon et al., 2017), over-prescribing (Hallsworth et al., 2016), and adoption of computer security features (Das et al., 2014). ...
... Das defines this concept as "the awareness of, motivation to use, and knowledge of how to use security tools" [15]. His empirical research documented how lack of information reduces sensitivity due to people not correctly perceiving their danger of falling victim to a security breach and failing to register the existence of tools to protect them and their close ties against such threats [15,17-19]. ... [44,63]. The TTM marks a shift from thinking of behavior change as occurring in a single, decisive moment to that of a longer-term, cyclical process in which people balance pros and cons along with self-efficacy and temptation in their decision making. ...
Preprint
Full-text available
Behavior change ideas from health psychology can also help boost end user compliance with security recommendations, such as adopting two-factor authentication (2FA). Our research adapts the Transtheoretical Model Stages of Change from health and wellness research to a cybersecurity context. We first create and validate an assessment to identify workers on Amazon Mechanical Turk who have not enabled 2FA for their accounts as being in Stage 1 (no intention to adopt 2FA) or Stages 2-3 (some intention to adopt 2FA). We randomly assigned participants to receive an informational intervention with varied content (highlighting process, norms, or both) or not. After three days, we again surveyed workers for Stage of Amazon 2FA adoption. We found that those in the intervention group showed more progress toward action/maintenance (Stages 4-5) than those in the control group, and those who received content highlighting the process of enabling 2FA were significantly more likely to progress toward 2FA adoption. Our work contributes support for applying a Stages of Change Model in usable security.
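As a concrete illustration of staging, the sketch below maps hypothetical screening answers about 2FA to Transtheoretical Model stages. The rule and its inputs are invented placeholders, not the paper's validated assessment, which distinguishes the stages with its own instrument.

```python
# Hypothetical sketch: assigning a Transtheoretical Model stage for 2FA
# adoption from simple screening answers. The rule below is an invented
# placeholder; the paper validates its own staging assessment.
def stage_of_change(uses_2fa: bool, months_using: int, intends_within_6mo: bool) -> int:
    """Return a TTM stage (1-5) for 2FA adoption."""
    if uses_2fa:
        return 5 if months_using >= 6 else 4   # Maintenance vs. Action
    if intends_within_6mo:
        # Preparation vs. Contemplation would need a finer-grained probe;
        # we collapse the "some intention" stages here for simplicity.
        return 2
    return 1                                    # Precontemplation

print(stage_of_change(False, 0, True))   # -> 2 (some intention to adopt)
print(stage_of_change(True, 12, False))  # -> 5 (maintenance)
```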
... One such work proposed interfaces for filesystems that show people how others implement security [26]. Similar work found that showing people that their friends use security-enhancing features on social networks increases the uptake of these features [24]. ...
... Social factors are influential in secure behavior adoption broadly [10][11][12]. More specifically, De Luca et al. and Abu-Salma et al. found that peer influence significantly outweighs privacy protection in adoption of secure messaging systems [4,13]. ...
Conference Paper
Full-text available
Although end-to-end encryption (E2EE) is more widely available than ever before, many users remain confused about its security properties. As a result, even users with access to E2EE tools turn to less secure alternatives for sending private information. To investigate these issues, we conducted a 357-participant online user study analyzing how explanations of security impact user perceptions. In a between-subjects design, we varied the terminology used to detail the security mechanism, whether encryption was on by default, and the prominence of security in an app-store-style description page. We collected participants' perceptions of the tool's utility for privacy, security against adversaries, and whether use of the tool would be seen as "paranoid." Compared to "secure," describing the tool as "encrypted" or "military-grade encrypted" increased perceptions that it was appropriate for privacy-sensitive tasks, whereas describing it more precisely as "end-to-end encrypted" did not. However, "military-grade encrypted" was also associated with a greater perception of tool use as paranoid. Overall, we find that, compared to prior work from 2006, the social stigma associated with encrypted communication has largely disappeared.
... Older adults tend to rely on digital security advice from family and friends, significantly more than younger people do [34,35]. Studies have shown that social ties influence users' susceptibility (for all age groups) to adopt security and privacy behaviors more than other sources do [26], and that social cues can make users more likely to adopt the same security behaviors as their friends [15]. Furthermore, older adults may have concerns about sharing their personal information with strangers, which is often the case with security and privacy support [22]. ...
Conference Paper
Older people experience difficulties when managing their security and privacy in mobile environments. However, support from the older adult's social network, and especially from close-tie relations such as family and close friends, is known to be an effective source of help in coping with technological tasks. On the basis of this existing phenomenon, I investigate how new methods can increase the availability of social support to older adults and enhance learning in tackling privacy and security challenges. I will develop and evaluate several technological interventions in the support process within social networks for older adults: finding methods that increase seekers' technology learning and methods that increase help availability and quality. In my Ph.D., I suggest conducting three studies: the first study aims to analyze existing approaches and scenarios of social support to older adults. The initial results suggest that people have a significant willingness to help their older relatives (specifically, their parents), but the actual instances in which they do so are much rarer. We conclude that the potential for social help is far from being exploited. In the second study, I plan to explore social support as a system to increase older adults' self-efficacy and collective efficacy to overcome privacy and security problems. The final study will investigate physiological signals to identify when an older adult requires help with mobile security and privacy issues. A successful outcome will be a theoretical model of social support, focused on the domain of privacy and security, and based on vulnerable populations such as older adults. From a practical standpoint, the thesis will offer and evaluate a set of technologies that enable and encourage social support for older adults on mobile platforms.
... Adoption criteria of secure messaging tools and services, as well as social influence on users' choices of security tools and security behavior, have been intensively investigated in the past [17,11,23,12]. For example, De Luca et al. [23] conducted an online survey with 1500 participants, making a quantitative analysis of how much of a role security played in people's decisions to use a mobile messenger. ...
... Researchers have explored the effectiveness of different types of interventions for increasing cybersecurity awareness. Recent approaches for cybersecurity awareness and behavior change include delivering just-in-time notifications using browser plug-ins [6], and raising awareness using games [1,8]. Some commonly used formats for these games include but are not limited to role-playing, puzzle, interactive narratives, and attack-and-defend games [1,8]. ...
Conference Paper
Eliciting cybersecurity behavior change in users has been a difficult task. Although most users have concerns about their safety online, few take precautions. Transformational games offer a promising avenue for cybersecurity behavior change. To date, however, studies typically focus on entertainment value instead of investigating the effectiveness and design potential of games in cybersecurity. As a first step to filling this gap, we present the design of Hacked Time, a desktop game that aims to encourage cybersecurity behavior change by translating self-efficacy theory into the game's design. As cybersecurity games are a relatively novel area, our design aims to serve as a prototype for mapping specific behavior change principles relevant to this area onto game design practice.
... hackers) using everyday words to talk about computer security concerns, while newspaper reports focus on sensational rather than 'mundane' attacks [41]. Yet, news articles typically drive everyday discussions about security [11], which means that citizens are more likely to talk about large-scale attacks rather than focus on their own everyday problems. ...
Preprint
Full-text available
Older adults are increasingly vulnerable to cybersecurity attacks and scams. Yet we know relatively little about their understanding of cybersecurity, their information-seeking behaviours, and their trusted sources of information and advice in this domain. We conducted 22 semi-structured interviews with community-dwelling older adults in order to explore their cybersecurity information seeking behaviours. Following a thematic analysis of these interviews, we developed a cybersecurity information access framework that highlights shortcomings in older adults' choice of information resources. Specifically, we find that older users prioritise social resources based on availability, rather than cybersecurity expertise, and that they avoid using the Internet for cybersecurity information searches despite using it for other domains. Finally, we discuss the design of cybersecurity information dissemination strategies for older users, incorporating favoured sources such as TV adverts and radio programming.
... Making users aware of how the underlying technology works and the associated security and privacy issues helps build their trust and confidence in the technology. It is worth further exploring how peer-to-peer learning and social influence [54,55,99], which are more effective in a collectivist society compared to an individualist society, could be used to reduce security and privacy knowledge gaps. ...
Conference Paper
Prior research suggests that security and privacy needs of users in developing regions are different than those in developed regions. To better understand the underlying differentiating factors, we conducted a systematic review of Human-Computer Interaction for Development and Security & Privacy publications in 15 proceedings, such as CHI, SOUPS, ICTD, and DEV, from the past ten years. Through an in-depth analysis of 114 publications that discuss security and privacy needs of people in developing regions, we identified five key factors---culture, knowledge gaps, unintended technology use, context, and usability and cost considerations---that shape security and privacy preferences of people in developing regions. We discuss how these factors influence their security and privacy considerations using case studies on phone sharing and surveillance. We then present a set of design recommendations and research directions for addressing security and privacy needs of people in resource-constrained settings.
Article
Internet-based social engineering (SE) attacks are a major cyber threat. These attacks often serve as the first step in a sophisticated sequence of attacks that target, among other things, victims’ credentials and can cause financial losses. The problem has received mounting attention in recent years, with many publications proposing defenses against SE attacks. Despite this, the situation has not improved. In this article, we aim to understand and explain this phenomenon by investigating the root cause of the problem. To this end, we examine Internet-based SE attacks and defenses through a unique lens based on psychological factors (PFs) and psychological techniques (PTs). We find that there is a key discrepancy between attacks and defenses: SE attacks have deliberately exploited 46 PFs and 16 PTs in total, but existing defenses have only leveraged 16 PFs and seven PTs in total. This discrepancy may explain why existing defenses have achieved limited success and prompt us to propose a systematic roadmap for future research.
Article
In order to keep one's computing systems and data secure, it is critical to be aware of how to effectively maintain security and privacy online. Prior experimental work has shown that social media are effective platforms for encouraging security-enhancing behavior. Through an analysis of historical social media logs of 38 participants containing almost 200,000 social media posts, we study the extent to which participants talked about security and privacy on social media platforms, specifically Facebook and Twitter. We found that interactions with posts that feature content relevant to security and privacy made up less than 0.09% of all interactions we observed. A thematic analysis of the security- and privacy-related posts that participants interacted with revealed that such posts very rarely discussed security and privacy constructively, instead often joking about security practices or encouraging undesirable behavior. Based on the overall findings from this thematic analysis, we develop and present a taxonomy of how security and privacy may be typically discussed on social networks, which is useful for constructing helpful security and privacy advice or for identifying advice that may have an undesirable impact. Our findings, though based on a fraction of the population of social media users, suggest that while social networks may be effective in influencing security behavior, there may not be enough substantial or useful discussions of security and privacy to encourage better security behaviors in practice and on a larger scale. Our findings highlight the importance of increasing the prevalence of constructive security and privacy advice on online social media in order to encourage widespread adoption of healthy security practices.
Chapter
Persuasive techniques and persuasive technologies have been suggested as a means to improve user cybersecurity behaviour, but there have been few quantitative studies in this area. In this paper, we present a large scale evaluation of persuasive messages designed to encourage University staff to complete security training. Persuasive messages were based on Cialdini's principles of persuasion, randomly assigned, and transmitted by email. The training was real, and the messages sent constituted the real campaign to motivate users during the study period. We observed statistically significant variations, but with mild effect sizes, in participant responses to the persuasive messages. 'Unity' persuasive messages that had increased emphasis on the collaborative role of individual users as part of an organisation-wide team effort towards cybersecurity were more effective compared to 'Authority' messages that had increased emphasis on a mandatory obligation of users imposed by a hierarchical authority. Participant and organisational factors also appear to impact upon participant responses. The study suggests that the use of messages emphasising different principles of persuasion may have different levels of effectiveness in encouraging users to take particular security actions. In particular, it suggests that the use of social capital, in the form of increased emphasis of 'unity', may be more effective than increased emphasis of 'authority'. These findings motivate further studies of how the use of social capital may be beneficial for encouraging individuals to adopt similar positive security behaviours.
Chapter
The concept of "personalized security nudges" promises to solve the contradictions between people's heterogeneity and one-size-fits-all security nudges, whereas the psychological traits needed for personalization are not easy to obtain. To address the problem, we propose to leverage users' behaviors logged by information systems, from which multiple behavioral features are extracted. A between-subjects lab experiment was conducted, during which participants' behavioral features and responses to three famous security nudges (the so-called nudge effects) were logged. To test the feasibility of our proposal, we analyzed the relationships between the behavioral features and the nudge effects and discovered the significant moderation effects expected for all three security nudges involved. The results indicate the feasibility of personalizing security nudges according to user behaviors, liberating personalized security nudge schemes from the dependence on psychological scales.
Article
Since personalization was introduced to security nudges, several approaches using the correlations between the General Decision-Making Styles (GDMS) and nudge effects have been proposed. However, the GDMS-based schemes do not apply to real systems well since it is challenging, if not impossible, to obtain the GDMS without psychological scales. Instead, we propose a practical scheme that leverages users’ system-use behaviors to personalize security nudges. To verify the effectiveness of the developed scheme, we analyze the data collected through two between-subjects lab experiments (N1 = 312, N2 = 696). By comparing the efficacy of the behavior-based and the GDMS-based approaches, we find that the behaviors outperform the GDMS in accurately predicting nudge effects, and more importantly, the behavior-based personalization scheme is comparably effective and more robust in improving nudge effects. This confirms that the behavior-based framework can be a practical and promising solution when implementing personalized nudge schemes to improve security behaviors.
Article
Computer users are generally faced with difficulties in making correct security decisions. While an increasingly fewer number of people are trying or willing to take formal security training, online sources including news, security blogs, and websites are continuously making security knowledge more accessible. Analysis of cybersecurity texts from this grey literature can provide insights into the trending topics and identify current security issues as well as how cyber attacks evolve over time. These in turn can support researchers and practitioners in predicting and preparing for these attacks. Comparing different sources may facilitate the learning process for normal users by creating the patterns of the security knowledge gained from different sources. Prior studies neither systematically analysed the wide range of digital sources nor provided any standardisation in analysing the trending topics from recent security texts. Moreover, existing topic modelling methods are not capable of identifying the cybersecurity concepts completely and the generated topics considerably overlap. To address this issue, we propose a semi-automated classification method to generate comprehensive security categories to analyse trending topics. We further compare the identified 16 security categories across different sources based on their popularity and impact. We have revealed several surprising findings as follows: (1) The impact reflected from cybersecurity texts strongly correlates with the monetary loss caused by cybercrimes, (2) security blogs have produced the context of cybersecurity most intensively, and (3) websites deliver security information without caring about timeliness much.
Article
Full-text available
Peer support is a powerful tool for improving the digital literacy of older adults. However, while existing literature has investigated reactive support, this paper examines proactive support for mobile safety. To predict the moments when users need support, we conducted a user study measuring the severity of mobile safety scenarios (n=300) and users' attitudes toward receiving support during a specific safety-related interaction on a mobile device (n=150). We compared classification methods and show that a random forest produces better performance than the regression models we tested. We show that user anxiety, openness to social support, self-efficacy, and security awareness are important factors for predicting willingness to receive support. We also explore how the age composition of the training sample affects prediction of the moments users need support: training on the youngest population produces inferior results for older adults, and training on the aging population produces poor outcomes for young adults. We conclude by discussing how our findings can inform the design of feasible proactive support applications that provide support at the right moment.
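The model comparison in this abstract can be sketched, under assumptions, with scikit-learn: cross-validate a random forest against a regression-style baseline on the four predictors the authors name. Everything below (features, data) is simulated for illustration, not the study's dataset.

```python
# Hedged sketch: random forest vs. logistic-regression baseline for
# predicting willingness to receive support. Data are simulated; the four
# columns stand in for anxiety, openness to support, self-efficacy, awareness.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 450
X = rng.normal(size=(n, 4))
y = (X @ np.array([0.8, 0.6, 0.4, 0.7]) + rng.normal(size=n) > 0).astype(int)

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("logistic regression", LogisticRegression())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.2f}")
```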
Article
Full-text available
Two-factor authentication (2FA) is a recommended or imposed authentication mechanism for valuable online assets. However, 2FA mechanisms usually exhibit user-experience issues that create friction and can even lead to poor acceptance, hampering the wider spread of 2FA. In this paper, we investigate user perceptions of 2FA through in-depth interviews with 42 participants, revealing key requirements that are not well met today despite recently emerged 2FA solutions. First, we investigate past experiences with authentication mechanisms, emphasizing problems and aspects that hamper a good user experience. Second, we investigate the different authentication factors more closely. Our results reveal particularly interesting preferences regarding the authentication factor “ownership” in terms of properties, physical realizations, and interaction. These findings suggest a path towards 2FA mechanisms with considerably better user experience, promising to improve the acceptance and hence the proliferation of 2FA for the benefit of security in the digital world.
Article
Digital extortion has emerged as a significant threat to organizations that rely on information technologies for their operations. Using human-subject experimentation, we study the effectiveness of message appeals in encouraging defenders to adopt two mitigation strategies against digital extortion threats: investment in security and refusal to pay ransoms. We explore two types of appeals, benefit and normative, for this purpose. We find that the decisions of the defenders (representing any organization that could be a potential victim) deviate from the predictions of game theory. However, given the strategic interactions between the defenders and the attacker, as well as noisy decision-making behaviors, it is challenging to untangle the influence of the appeals on the defenders. We develop a structural model based on the quantal response equilibrium framework to measure how message appeals change the defenders’ utilities of investment and payment refusal. Although the interventions may succeed in increasing the utilities of investment and/or payment refusal, their impacts on the investment rate and payment rate are mitigated by the attacker reducing ransoms. Thus, it is challenging for an intervention to significantly boost a community’s investment rate or to suppress its ransom payment rate. We characterize how the security outcomes of a community (including expected ransom, attack rate, investment rate, and payment rate) vary with the defenders’ utilities of investment and payment refusal. This paper was accepted by Chris Forman, information systems.
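For readers unfamiliar with the quantal response equilibrium framework the abstract invokes, the standard logit specification makes choice probabilities proportional to exp(λ × utility), so decisions are noisy rather than strictly optimal. A minimal sketch follows (illustrative only; not the paper's estimated model or parameters):

```python
# Logit quantal-response choice rule: with precision lam = 0 choices are
# uniform; as lam grows, the higher-utility action is chosen almost surely.
import numpy as np

def logit_response(utilities, lam):
    z = lam * np.asarray(utilities, dtype=float)
    z -= z.max()                 # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical defender utilities for (invest, not invest):
print(logit_response([1.2, 0.8], lam=2.0))   # ~[0.69, 0.31]
```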
Chapter
The Model of Influence in Cybersecurity with Frames unifies the current literature on influence and media effects in cybersecurity messaging. Building on Scheufele's Process Model of Framing Research, this new model applies directly to cybersecurity, providing a macro-level view to further researchers' understanding of cybersecurity influence and offering options for intervention by organizational security professionals. The analysis covered 42 documents on influencing users to engage in secure behavior, spanning persuasion, user interface design, equivalency framing, managing and understanding user perceptions, and user mental models of cybersecurity. The review also investigates the use of framing in cybersecurity and the definitions needed to contextualize and understand cybersecurity research that uses framing. The model is intended as a starting point from which to build a larger understanding of cybersecurity communication and address human factors in cybersecurity.
Chapter
The growth of computer-mediated communication has created real challenges for society; in particular, the internet has become an important resource for “convincing” or persuading a person to make a decision. From a cybersecurity perspective, online attempts to persuade someone to make a decision have implications for the radicalisation of individuals. This chapter reviews multiple definitions and theories of decision making and considers their applicability to online decision making in areas such as buying behaviour, social engineering, and radicalisation. Research investigating online decision making is outlined, and the point is made that research conducted online has a different focus from research exploring online decision making. The chapter concludes with key questions for scholars and practitioners. In particular, it notes that online decision making cannot be explained by any single model, as none is sufficient on its own to underpin all forms of online behaviour.
Conference Paper
We conducted a literature survey on the reproducibility and replicability of user surveys in security research. For that purpose, we examined all papers published over the last five years at three leading security research conferences and recorded the type of study, whether the authors made the underlying responses available as open data, and whether they published the questionnaire or interview guide they used. We found that user surveys are becoming more widespread in security research and that authors and conferences are increasingly publishing their methodologies, but we found no examples of the data being made available. Based on these findings, we recommend that future researchers publish their data in addition to their results to facilitate replication and ensure a firm basis for user studies in security research.
Article
Full-text available
Non-expert computer users regularly need to make security-relevant decisions; however, these decisions tend not to be particularly good or sophisticated. Nevertheless, their choices are not random. Where does the information that these non-experts base their decisions upon come from? We argue that much of it comes from stories they hear from other people. We conducted a survey asking open- and closed-ended questions about the security stories people hear from others. We found that most people have learned lessons from stories about security incidents told informally by family and friends. These stories impact the way people think about security, and their subsequent behavior when making security-relevant decisions. In addition, many people retell these stories to others, meaning a single story has the potential to influence multiple people. Understanding how non-experts learn from stories, and what kinds of stories they learn from, can help us devise new methods for helping these people make better security decisions.
Article
Full-text available
Reports on the relationship between the size of a stimulus crowd, standing on a busy city street looking up at a building, and the response of passersby. As the size of the stimulus crowd increased, a greater proportion of passersby adopted the behavior of the crowd. Ss were 1,424 pedestrians. The results suggest a modification of the J. S. Coleman and J. James model of the size of free-forming groups to include a contagion assumption.
Article
Full-text available
Conducted 3 experiments to test the effectiveness of a rejection-then-moderation procedure for inducing compliance with a request for a favor. Ss were a total of 202 passersby on a university campus. All 3 experiments included a condition in which a requester first asked for an extreme favor (which was refused) and then for a smaller favor. In each instance, this procedure produced more compliance with the smaller favor than a procedure in which the requester asked solely for the smaller favor. Additional control conditions in each experiment support the hypothesis that the effect is mediated by a rule for reciprocation of concessions. Several advantages of the rejection-then-moderation procedure for producing compliance are discussed.
Article
Full-text available
Human behaviour is thought to spread through face-to-face social networks, but it is difficult to identify social influence effects in observational studies, and it is unknown whether online social networks operate in the same way. Here we report results from a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users' friends, and friends of friends. The effect of social transmission on real-world voting was greater than the direct effect of the messages themselves, and nearly all the transmission occurred between 'close friends' who were more likely to have a face-to-face relationship. These results suggest that strong ties are instrumental for spreading both online and real-world behaviour in human social networks.
Article
Full-text available
There are currently dozens of freely available tools to combat phishing and other web-based scams, many of which are web browser extensions that warn users when they are browsing a suspected phishing site. We developed an automated test bed for testing anti-phishing tools. We used 200 verified phishing URLs from two sources and 516 legitimate URLs to test the effectiveness of 10 popular anti-phishing tools. Only one tool was able to consistently identify more than 90% of phishing URLs correctly; however, it also incorrectly identified 42% of legitimate URLs as phish. The performance of the other tools varied considerably depending on the source of the phishing URLs. Of these remaining tools, only one correctly identified over 60% of phishing URLs from both sources. Performance also changed significantly depending on the freshness of the phishing URLs tested. Thus we demonstrate that the source of phishing URLs and the freshness of the URLs tested can significantly impact the results of anti-phishing tool testing. We also demonstrate that many of the tools we tested were vulnerable to simple exploits. In this paper we describe our anti-phishing tool test bed, summarize our findings, and offer observations about the effectiveness of these tools as well as ways they might be improved.
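The core measurement in a test bed like this reduces to two rates per tool: the fraction of phishing URLs correctly flagged and the fraction of legitimate URLs incorrectly flagged. A minimal sketch with invented verdicts (the paper's actual data are not reproduced here):

```python
# True-positive rate (phish caught) and false-positive rate (legitimate
# sites mislabeled) for one tool over a labeled URL set. Data are invented.
def evaluate_tool(verdicts, labels):
    """verdicts/labels: parallel booleans (True = flagged / truly phishing)."""
    tp = sum(v and l for v, l in zip(verdicts, labels))
    fp = sum(v and not l for v, l in zip(verdicts, labels))
    n_phish = sum(labels)
    n_legit = len(labels) - n_phish
    return tp / n_phish, fp / n_legit

labels   = [True] * 8 + [False] * 12            # 8 phishing, 12 legitimate
verdicts = [True] * 7 + [False] + [True] * 5 + [False] * 7
tpr, fpr = evaluate_tool(verdicts, labels)
print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")
```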
Article
Full-text available
This paper reviews past and current work on usability of security mechanisms. Given that most users interact with computer security on a daily basis, it is astonishing how little interest the CHI community has taken in the design of security systems. Many usability problems associated with security mechanisms could be avoided through application of basic usability knowledge and methods. At the same time, the design of security systems raises some issues that cannot be met with existing CHI knowledge and methods. In conclusion, I will outline the research challenges for improving usability of security systems.
Article
Full-text available
Ubiquitous and mobile technologies create new challenges for system security. Effective security solutions depend not only on the mathematical and technical properties of those solutions, but also on people's ability to understand them and use them as part of their work. As a step towards solving this problem, we have been examining how people experience security as a facet of their daily life, and how they routinely answer the question, “Is this system secure enough for what I want to do?” We present a number of findings concerning the scope of security, attitudes towards security, and the social and organizational contexts within which security concerns arise, and point towards emerging technical solutions.
Conference Paper
Full-text available
Many popular web browsers now include active phishing warnings, since research has shown that passive warnings are often ignored. In this laboratory study we examine the effectiveness of these warnings and examine if, how, and why they fail users. We simulated a spear-phishing attack to expose users to browser warnings. We found that 97% of our sixty participants fell for at least one of the phishing messages that we sent them. However, we also found that when presented with the active warnings, 79% of participants heeded them, which was not the case for the passive warning we tested, where only one participant heeded the warnings. Using a model from the warning sciences, we analyze how users perceive warning messages and offer suggestions for creating more effective phishing warnings.

Keywords: Phishing, warning messages, mental models, usable privacy and security
Conference Paper
Full-text available
In this paper we describe the design and evaluation of Anti-Phishing Phil, an online game that teaches users good habits to help them avoid phishing attacks. We used learning science principles to design and iteratively refine the game. We evaluated the game through a user study: participants were tested on their ability to identify fraudulent web sites before and after spending 15 minutes engaged in one of three anti-phishing training activities (playing the game, reading an anti-phishing tutorial we created based on the game, or reading existing online training materials). We found that the participants who played the game were better able to identify fraudulent web sites compared to the participants in other conditions. We attribute these effects to both the content of the training messages presented in the game as well as the presentation of these materials in an interactive game format. Our results confirm that games can be an effective way of educating people about phishing and other security attacks.
Conference Paper
Full-text available
As interest in usable security spreads, a number of researchers have examined visual approaches in which the functioning of a distributed system is made visually available to end users. In this paper, we discuss the use of the social navigation paradigm as a way of organizing visual displays of system action. Drawing on a previous study of security in the Kazaa peer-to-peer system, we present examples of the ways in which social navigation can be incorporated in support of usable security.
Article
Full-text available
In this article, the author discusses why users compromise computer security mechanisms and how to take remedial measures. Confidentiality is an important aspect of computer security; it depends on authentication mechanisms, such as passwords, to safeguard access to information. Traditionally, authentication procedures are divided into two stages: identification and secret password. To date, the security and usability of these password mechanisms have rarely been investigated. Since security mechanisms are designed, implemented, applied, and breached by people, human factors should be considered in their design. Currently, hackers seem to pay more attention to the human link in the security chain than security designers do, using social engineering techniques to obtain passwords. The key element in password security is the crackability of a password combination. System-generated passwords are essentially the optimal security approach; user-generated passwords are potentially more memorable and thus less likely to be disclosed. Regarding password composition, an alphanumeric password is more secure than one composed of letters alone.
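The composition claim is easy to quantify for randomly chosen passwords: the search space is charset_size^length, so its size in bits grows with the character set. A short worked example follows; note that user-chosen passwords are typically far less random than this upper bound assumes.

```python
# Search-space size, in bits, of a random 8-character password for
# different character sets; an upper bound for user-chosen passwords.
import math

def search_space_bits(charset_size, length):
    return length * math.log2(charset_size)

print(f"lowercase letters (26^8):  {search_space_bits(26, 8):.1f} bits")
print(f"mixed-case letters (52^8): {search_space_bits(52, 8):.1f} bits")
print(f"alphanumeric (62^8):       {search_space_bits(62, 8):.1f} bits")
```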
Conference Paper
Full-text available
Social networking sites (SNS) are only as good as the content their users share. Therefore, designers of SNS seek to improve the overall user experience by encouraging members to contribute more content. However, user motivations for contribution in SNS are not well understood. This is particularly true for newcomers, who may not recognize the value of contribution. Using server log data from approximately 140,000 newcomers in Facebook, we predict long-term sharing based on the experiences the newcomers have in their first two weeks. We test four mechanisms: social learning, singling out, feedback, and distribution. In particular, we find support for social learning: newcomers who see their friends contributing go on to share more content themselves. For newcomers who are initially inclined to contribute, receiving feedback and having a wide audience are also predictors of increased sharing. On the other hand, singling out appears to affect only those newcomers who are not initially inclined to share. The paper concludes with design implications for motivating newcomer sharing in online communities.
Article
Full-text available
Current systems for banking authentication require that customers not reveal their access codes, even to members of their family. A study of banking and security in Australia shows that the practice of sharing passwords does not conform to this requirement. For married and de facto couples, password sharing is seen as a practical way of managing money and a demonstration of trust. Sharing Personal Identification Numbers (PINs) is a common practice among remote indigenous communities in Australia; in areas with poor banking access, it is the only way to access cash. People with certain disabilities have to share passwords with carers, and PINs with retail clerks. In this paper we present the findings of a qualitative user study of banking and money management, and we suggest design criteria for banking security systems based on the observed social and cultural practices of password and PIN sharing.
Article
Full-text available
Two field experiments examined the effectiveness of signs requesting hotel guests' participation in an environmental conservation program. Appeals employing descriptive norms (e.g., "the majority of guests reuse their towels") proved superior to a traditional appeal widely used by hotels that focused solely on environmental protection. Moreover, normative appeals were most effective when describing group behavior that occurred in the setting that most closely matched individuals' immediate situational circumstances (e.g., "the majority of guests in this room reuse their towels"), which we refer to as provincial norms. Theoretical and practical implications for managing proenvironmental efforts are discussed.
Article
Full-text available
Investigated the extinction of avoidance responses through observation of modeled approach behavior directed toward a feared stimulus without any adverse consequences accruing to the model. Children who displayed fearful and avoidant behavior toward dogs were assigned to a condition in which they (1) participated in a series of brief modeling sessions in which they observed, within a highly positive context, a fearless peer model exhibit progressively stronger approach responses toward a dog; (2) observed the same graduated modeling stimuli, but in a neutral context; (3) merely observed the dog in the positive context, with the model absent; or (4) participated in the positive activities without any exposure to either the dog or the modeled displays. The 2 groups who had observed the model interact nonanxiously with the dog displayed stable and generalized reduction in avoidance behavior and differed significantly in this respect from children in the dog-exposure and positive-context conditions. However, the positive context, which was designed to induce anxiety-competing responses, did not enhance the extinction effects produced through modeling.
Article
In this editorial the author discusses publication and the publication delay the journal experienced. He reports to readers and authors that the mounting publication lag, which reached a peak of 18 months during early 1966, dropped to 12 months by the end of that year. Factors contributing to the delay are then reviewed. The editorial also provides a listing of the 1966 reviewers who reviewed two or more papers during the previous year.
Conference Paper
Password meters tell users whether their passwords are "weak" or "strong." We performed a laboratory experiment to examine whether these meters influenced users' password selections when they were forced to change their real passwords, and when they were not told that their passwords were the subject of a study. We observed that the presence of meters yielded significantly stronger passwords. We performed a followup field experiment to test a different scenario: creating a password for an unimportant account. In this scenario, we found that the meters made no observable difference: participants simply reused weak passwords that they used to protect similar low-risk accounts. We conclude that meters result in stronger passwords when users are forced to change existing passwords on "important" accounts and that individual meter design decisions likely have a marginal impact.
Conference Paper
We explore how well the intersection between our own everyday memories and those captured by our smartphones can be used for what we call autobiographical authentication: a challenge-response authentication system that queries users about day-to-day experiences. Through three studies (two on MTurk and one field study), we found that users are good at answering autobiographical questions but make systematic errors. Using Bayesian modeling to account for these systematic response errors, we derived a formula for computing, from a sequence of question-answer responses, a confidence rating that the attempting authenticator is the user. We tested our formula against five simulated adversaries based on plausible real-life counterparts. Our simulations indicate that our model of autobiographical authentication generally performs well, assigning high confidence estimates to the user and low confidence estimates to impersonating adversaries.
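The paper's exact formula is not reproduced in the abstract; a generic version of the idea is a sequential Bayes (likelihood-ratio) update over question-answer outcomes, where the user's systematic error rates set the per-question probabilities. A hedged sketch with hypothetical probabilities:

```python
# Illustrative Bayes update (not the paper's derived formula): after each
# question, multiply the odds that the responder is the genuine user by the
# likelihood ratio of the observed answer under "user" vs. "adversary".
def confidence(responses, p_user, p_adv, prior=0.5):
    """responses: bools (answer correct?); p_user/p_adv: hypothetical
    per-question correct-answer probabilities for user and adversary."""
    odds = prior / (1 - prior)
    for correct, pu, pa in zip(responses, p_user, p_adv):
        odds *= (pu / pa) if correct else ((1 - pu) / (1 - pa))
    return odds / (1 + odds)

# Three correct answers and one miss still yield high confidence here.
print(confidence([True, True, False, True],
                 p_user=[0.9, 0.8, 0.85, 0.9],
                 p_adv=[0.3, 0.2, 0.25, 0.3]))   # ~0.88
```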
Article
In this paper we study large-scale emotional contagion through an examination of Facebook status updates. After a user makes a status update with emotional content, their friends are significantly more likely to make a valence-consistent post. This effect is significant even three days later, and even after controlling for prior emotion expressions by both users and their friends. This indicates not only that emotional contagion is possible via text-only communication and that emotions flow through social networks, but also that emotion spreads via indirect communications media.
Conference Paper
To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.
Book
Generalized linear models (GLMs) extend standard linear (Gaussian) regression techniques to models with a non-Gaussian, or even discrete, response. GLM theory is predicated on the exponential family of distributions—a class so rich that it includes the commonly used logit, probit, and Poisson distributions. Although one can fit these models in Stata by using specialized commands (e.g., logit for logit models), fitting them under the GLM paradigm with Stata’s glm command offers the advantage of having many models under the same roof. For example, model diagnostics may be calculated and interpreted similarly regardless of the assumed distribution. This text thoroughly covers GLMs, both theoretically and computationally. The theory consists of showing how the various GLMs are special cases of the exponential family, general properties of this family of distributions, and the derivation of maximum likelihood (ML) estimators and standard errors. The book shows how iteratively reweighted least squares, another method of parameter estimation, is a consequence of ML estimation via Fisher scoring. The authors also discuss different methods of estimating standard errors, including robust methods, robust methods with clustering, Newey–West, outer product of the gradient, bootstrap, and jackknife.
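The book works in Stata's glm command, but the unifying idea (one estimation framework, many exponential-family models) can be sketched in Python with statsmodels; the data below are simulated for illustration:

```python
# One GLM interface, different families: swapping Poisson for Binomial or
# Gaussian changes the model while estimation (IRLS / Fisher scoring) and
# diagnostics stay uniform. Simulated Poisson counts with a log link.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(200, 1)))
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```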
Article
Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.
Article
User errors cause or contribute to most computer security failures, yet user interfaces for security still tend to be clumsy, confusing, or near-nonexistent. Is this simply due to a failure to apply standard user interface design techniques to security? We argue that, on the contrary, effective security requires a different usability standard, and that it will not be achieved through the user interface design techniques appropriate to other types of consumer software. To test this hypothesis, we performed a case study of a security program which does have a good user interface by general standards: PGP 5.0. Our case study used a cognitive walkthrough analysis together with a laboratory user test to evaluate whether PGP 5.0 can be successfully used by cryptography novices to achieve effective electronic mail security. The analysis found a number of user interface design flaws that may contribute to security failures, and the user test demonstrated that when our test participants were given 90 minutes in which to sign and encrypt a message using PGP 5.0, the majority of them were unable to do so successfully.