Conference Paper

Where the Wild Warnings Are: Root Causes of Chrome HTTPS Certificate Errors


Abstract

HTTPS error warnings are supposed to alert browser users to network attacks. Unfortunately, a wide range of non-attack circumstances trigger hundreds of millions of spurious browser warnings per month. Spurious warnings frustrate users, hinder the widespread adoption of HTTPS, and undermine trust in browser warnings. We investigate the root causes of HTTPS error warnings in the field, with the goal of resolving benign errors. We study a sample of over 300 million errors that Google Chrome users encountered in the course of normal browsing. After manually reviewing more than 2,000 error reports, we developed automated rules to classify the top causes of HTTPS error warnings. We are able to automatically diagnose the root causes of two-thirds of error reports. To our surprise, we find that more than half of errors are caused by client-side or network issues instead of server misconfigurations. Based on these findings, we implemented more actionable warnings and other browser changes to address client-side error causes. We further propose solutions for other classes of root causes.
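Among the client-side causes the abstract alludes to, an incorrect client clock is a classic example: the certificate is fine, but the client's clock sits outside its validity window. A toy version of such a root-cause rule might look like the following sketch (the function name and labels are invented for illustration; the paper's actual classification rules are more elaborate):

```python
from datetime import datetime, timezone

def classify_date_error(not_before, not_after, client_time, reference_time):
    """Return a coarse root-cause label for a certificate date error.

    A date error is likely a client-clock problem when the certificate is
    valid per an authoritative time source but not per the client's clock.
    """
    if not_before <= reference_time <= not_after:
        if client_time < not_before:
            return "client clock behind"
        if client_time > not_after:
            return "client clock ahead"
        return "no date error"
    return "server misconfiguration (expired or not yet valid)"

nb = datetime(2024, 1, 1, tzinfo=timezone.utc)
na = datetime(2025, 1, 1, tzinfo=timezone.utc)
# A client whose clock is stuck in 2023 sees a "not yet valid" certificate
# even though the certificate is perfectly fine.
print(classify_date_error(nb, na,
                          datetime(2023, 6, 1, tzinfo=timezone.utc),
                          datetime(2024, 6, 1, tzinfo=timezone.utc)))
```

Rules in this spirit let a browser show an actionable "fix your clock" message instead of a generic security warning.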


... For the TLS infrastructure to work, end entities authenticate themselves using X.509 certificates [14]. Certificate validation errors are quite common [4,5], although an error does not necessarily imply a security incident. For example, getting a self-signed certificate may be either an attack (adversary pretending to be a trusted site) or only a misconfiguration (the local administrator unable or unwilling to obtain a certificate signed by a trusted certificate authority). ...
... an extra 's' in the name). Browser-based measurements show that roughly 10% [4] to 18% [5] of certificate errors are due to a name mismatch. • Self-signed. ...
... was signed by its own key (it did not chain up to a trusted root). Such errors constitute roughly 1% of browser certificate errors [4], but their prevalence in TLS scans can be as high as 25% [24] or 88% [12]. ...
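The two error classes discussed in these excerpts are mechanically simple to detect: a self-signed certificate has identical subject and issuer, and a name mismatch means the requested hostname appears neither as the subject nor in the Subject Alternative Names. A minimal sketch, using toy dictionaries rather than real X.509 parsing (field names are illustrative):

```python
def diagnose(cert, requested_host):
    """Return the subset of {self-signed, name mismatch} causes that apply."""
    causes = []
    if cert["issuer"] == cert["subject"]:
        # Issuer equals subject: the certificate signs itself.
        causes.append("self-signed")
    valid_names = {cert["subject"]} | set(cert.get("san", []))
    # Real name matching also handles wildcards (RFC 6125); omitted here.
    if requested_host not in valid_names:
        causes.append("name mismatch")
    return causes

print(diagnose({"subject": "example.com", "issuer": "example.com", "san": []},
               "www.example.com"))
```

Note that, as the excerpt stresses, neither diagnosis by itself distinguishes an attack from a misconfiguration; it only names the technical cause.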
Preprint
Flawed TLS certificates are not uncommon on the Internet. While they signal a potential issue, in most cases they have benign causes (e.g., misconfiguration or even deliberate deployment). This adds fuzziness to the decision on whether to trust a connection or not. Little is known about perceptions of flawed certificates by IT professionals, even though their decisions impact high numbers of end users. Moreover, it is unclear how much the content of error messages and documentation influences these perceptions. To shed light on these issues, we observed 75 attendees of an industrial IT conference investigating different certificate validation errors. We also analyzed the influence of reworded error messages and redesigned documentation. We find that people working in IT have very nuanced opinions, with trust decisions being far from binary. The self-signed and the name-constrained certificates seem to be over-trusted (the latter also being poorly understood). We show that even small changes in existing error messages can positively influence resource use, comprehension, and trust assessment. At the end of the article, we summarize lessons learned from conducting usable security studies with IT professionals.
... It has the potential to be utilized in various applications: (1) It could be adopted by individual users for self-censorship and parental controls, to prevent highly sensitive content from being posted to online social networks, especially when the users are careless or emotional. (2) PrivScore could be integrated with AI-based interactive agents, especially the ones with learning capabilities, such as social media chatbots (Twitterbots, Dominator) and virtual assistants (Siri, Alexa, Cortana), to evaluate the content before delivering to users. (3) PrivScore could be aggregated over a large population (across demographic groups, friend circles, users in an organization, etc.) to examine privacy attitudes from a statistical perspective. ...
... The contributions of this paper are threefold: (1) We collect the privacy perceptions from a diverse set of users, and examine the consensuses in the responses to model the sensitiveness of content. (2) We make the first attempt to develop a computational model for quantitative assessment of content sensitiveness using deep neural networks. The context-free privacy score resembles the "consensus" perception of average users. ...
... As we can see, the majority of the tweets in this set get PrivScores close to 1. Similarly, the bottom-right sub-figure is for tweets annotated as [3,3,3], whose PrivScores lean toward 3. Moreover, PrivScores in sets [1,1,2] and [2,3,3] also demonstrate clear tendencies towards 1 and 3, respectively. It is worth pointing out that the PrivScore distribution of set [1,2,3] shows the maximal randomness (i.e., almost uniformly distributed in [1,3]). ...
Article
Full-text available
With the growing popularity of online social networks, a large amount of private or sensitive information has been posted online. In particular, studies show that users sometimes reveal too much information or unintentionally release regretful messages, especially when they are careless, emotional, or unaware of privacy risks. As such, there is a great need to identify potentially sensitive online content so that users can be alerted to such findings. In this paper, we propose a context-aware, text-based quantitative model for private information assessment, namely PrivScore, which is expected to serve as the foundation of a privacy leakage alerting mechanism. We first solicit diverse opinions on the sensitiveness of private information from crowdsourcing workers, and examine the responses to discover a perceptual model behind the consensuses and disagreements. We then develop a computational scheme using deep neural networks to compute a context-free PrivScore (i.e., the "consensus" privacy score among average users). Finally, we integrate tweet histories, topic preferences and social contexts to generate a personalized context-aware PrivScore. This privacy scoring mechanism could be employed to identify potentially-private messages and alert users to think again before posting them to OSNs.
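The first step of the pipeline described above, turning crowdsourced sensitivity labels into a "consensus" ground truth, can be illustrated with a tiny aggregation sketch. This is not the paper's actual method: the scoring scale (1 = very sensitive, 3 = not sensitive, as in the annotated sets like [1,1,2]) is taken from the excerpts, but the agreement measure below is an invented stand-in:

```python
from statistics import mean, pstdev

def consensus(labels):
    """Aggregate per-item crowd labels into (score, agreement).

    agreement is 1.0 for unanimous labels and shrinks as the
    population standard deviation of the labels grows.
    """
    score = mean(labels)
    spread = pstdev(labels)
    agreement = 1.0 if spread == 0 else 1.0 / (1.0 + spread)
    return round(score, 2), round(agreement, 2)

print(consensus([1, 1, 2]))  # leans sensitive, fairly high agreement
print(consensus([1, 2, 3]))  # maximal disagreement across the scale
```

Items with low agreement are exactly the ones where, as the excerpt notes, the learned PrivScore distribution is close to uniform.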
... Other groups of people deployed snippets of JavaScript code on websites that recruited their visitors' CPU power, often unknowingly, to mine for them as part of a bigger mining network (i.e., a mining pool). However, both approaches quickly became infeasible as the computing power required ...
... Determining appropriate thresholds for client-side processing that are high enough to allow legitimate applications and low enough to deter cryptojacking is an open research problem, as is the wording of any notifications that would lead the user to make an informed decision about allowing or not allowing resource consumption (cf. SSL/TLS warnings [31], [30], [1]). Browsers such as Opera have taken a stance against cryptojacking scripts and blocked them via their "NoCoin" blacklist [25]. ...
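The thresholding problem this excerpt describes can be made concrete with a naive heuristic: flag a page when the CPU usage attributable to its scripts stays pegged above a budget for a sustained period. The sample values and thresholds below are made up for illustration; real detectors use many more signals than raw CPU:

```python
def flag_cryptojacking(cpu_samples, threshold=0.8, min_sustained=5):
    """cpu_samples: per-second CPU fractions attributed to one tab.

    Returns True if usage stays at or above `threshold` for at least
    `min_sustained` consecutive samples.
    """
    run = best = 0
    for s in cpu_samples:
        run = run + 1 if s >= threshold else 0
        best = max(best, run)
    return best >= min_sustained

benign = [0.9, 0.2, 0.1, 0.9, 0.3, 0.2, 0.1, 0.2]  # short bursts, e.g. page load
miner = [0.95] * 30                                 # CPU pegged for 30 seconds
print(flag_cryptojacking(benign), flag_cryptojacking(miner))
```

The open problem is precisely picking `threshold` and `min_sustained` so that games and media players pass while miners do not.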
Preprint
In this paper, we examine the recent trend towards in-browser mining of cryptocurrencies; in particular, the mining of Monero through Coinhive and similar codebases. In this model, a user visiting a website will download JavaScript code that executes client-side in her browser, mines a cryptocurrency, typically without her consent or knowledge, and pays out the seigniorage to the website. Websites may consciously employ this as an alternative or supplement to advertisement revenue, may offer premium content in exchange for mining, or may be unwittingly serving the code as a result of a breach (in which case the seigniorage is collected by the attacker). The cryptocurrency Monero is preferred seemingly for its unfriendliness to large-scale ASIC mining that would drive browser-based efforts out of the market, as well as for its purported privacy features. In this paper, we survey this landscape, conduct some measurements to establish its prevalence and profitability, outline an ethical framework for considering whether it should be classified as an attack or business opportunity, and make suggestions for the detection, mitigation and/or prevention of browser-based mining for non-consenting users.
... On the other hand, users tend to perceive security dialogs as rather annoying and ignore them by clicking through them, even if risks are present [7], [12], [32]. Besides identifying and reducing root causes of false positives [1], it is an important goal for usable security and privacy research to design security dialogs that prevent such habituation effects and are harder to ignore [2], [11]. ...
... The study results (1) confirm previous findings that both habituation and visual attractors influence the rate of (non-)compliant decisions in the replication as well as in the revised study design. Furthermore, we show that (2) monetary incentives have a significant influence on reducing the number of non-compliant answers to dialogs. ...
... Alternatively, trusted CAs can cross-sign other CAs to extend their trust to them, thereby mitigating the lengthy and costly validation process that new CAs need to undergo. Cross-signing describes the approach of obtaining signatures from several issuers for one certificate. It enables new CAs to quickly establish trust. ...
... However, their analysis is limited to an overview of the occurrence of general cross-signing and a very brief statement on the effect on root store coverage, without any further analysis. Acer et al. [1] analyzed the causes of certificate validation errors encountered by Chrome users during web browsing. They find that missing cross-sign certificates in a presented certificate path cause some errors, and thus acknowledge the importance of cross-signing as a means to root trust in widely trusted stores. ...
Preprint
Public Key Infrastructures (PKIs) with their trusted Certificate Authorities (CAs) provide the trust backbone for the Internet: CAs sign certificates which prove the identity of servers, applications, or users. To be trusted by operating systems and browsers, a CA has to undergo lengthy and costly validation processes. Alternatively, trusted CAs can cross-sign other CAs to extend their trust to them. In this paper, we systematically analyze the present and past state of cross-signing in the Web PKI. Our dataset (derived from passive TLS monitors and public CT logs) encompasses more than 7 years and 225 million certificates with 9.3 billion trust paths. We show benefits and risks of cross-signing. We discuss the difficulty of revoking trusted CA certificates where, worrisomely, cross-signing can result in valid trust paths remaining after revocation; a problem for non-browser software that often blindly trusts all CA certificates and ignores revocations. However, cross-signing also enables fast bootstrapping of new CAs, e.g., Let's Encrypt, and achieves a non-disruptive user experience by providing backward compatibility. Finally, we propose new rules and guidance for cross-signing to preserve its positive potential while mitigating its risks.
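Why cross-signing produces multiple trust paths, and why revocation becomes tricky, is easy to see in miniature: model certificates as (subject, issuer) pairs and chain a leaf upward until a trusted root is reached. The CA names below are invented, and real path building must also handle cycles and revocation checks, which this sketch omits:

```python
# Toy certificate store: a leaf, plus an intermediate that carries both
# its original signature (Root A) and a cross-signature (Root B).
certs = [
    ("leaf.example", "Intermediate"),
    ("Intermediate", "Root A"),   # original signature
    ("Intermediate", "Root B"),   # cross-signature
]
roots = {"Root A", "Root B"}

def trust_paths(subject, chain=()):
    """Enumerate all chains from `subject` up to a trusted root."""
    paths = []
    for subj, issuer in certs:
        if subj != subject:
            continue
        if issuer in roots:
            paths.append(chain + ((subj, issuer),))
        else:
            paths.extend(trust_paths(issuer, chain + ((subj, issuer),)))
    return paths

for p in trust_paths("leaf.example"):
    print(" -> ".join(s for s, _ in p) + " -> " + p[-1][1])
```

Revoking only the "Root A" signature leaves the path through "Root B" valid, which is exactly the revocation hazard the abstract describes.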
... One reason to believe that disinformation warnings can be made effective is the evolution of security warnings. At first, warnings for threats such as malware and phishing websites broadly failed to protect users [8,9], but after a decade of iterative, multi-method research, these warnings became highly effective [10][11][12][13][14][15][16][17][18][19][20][21]. Modern security warnings reliably inform users' security decisions and help users distinguish harmful and inauthentic content online [10,17]. ...
... We adapted contextual and interstitial disinformation warnings from modern security warnings used by Google. Google's warnings are well-studied [10,14,15,17,19,21,103] and widely deployed, making them a useful template to design warnings that participants will believe are real. ...
Preprint
Full-text available
Online platforms are using warning messages to counter disinformation, but current approaches are not evidence-based and appear ineffective. We designed and empirically evaluated new disinformation warnings by drawing from the research that led to effective security warnings. In a laboratory study, we found that contextual warnings are easily ignored, but interstitial warnings are highly effective at inducing subjects to visit alternative websites. We then investigated how comprehension and risk perception moderate warning effects by comparing eight interstitial warning designs. This second study validated that interstitial warnings have a strong effect and found that while warning design impacts comprehension and risk perception, neither attribute resulted in a significant behavioral difference. Our work provides the first empirical evidence that disinformation warnings can have a strong effect on users' information-seeking behaviors, shows a path forward for effective warnings, and contributes scalable, repeatable methods for establishing evidence on the effects of disinformation warnings.
... Modern browsers mitigate this severe threat by implementing the Mixed Content policy. Roughly, this policy mandates that active content like scripts must be blocked when included in HTTPS pages over HTTP connections, while browser vendors are left at liberty to be more tolerant towards passive content like images. It is good practice to avoid the use of mixed content in high-security sites to ensure appropriate protection also for users of browsers that do not implement the Mixed Content policy, e.g., legacy browsers. ...
... Luckily, anecdotal evidence shows that HSTS has been gaining traction over the years: for example, a recent small-scale session security study on 20 popular sites found that more than half of the analyzed sites made use of HSTS [8]. Other studies on HTTPS security focused on the certificate ecosystem [11,1,15] and certificate errors in particular [2]. ...
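The active/passive distinction in the Mixed Content policy described above can be illustrated with a rough lint over page markup: flag scripts and iframes loaded over plain HTTP as blockable, and images as merely upgradable. Real browsers operate on the DOM and the Fetch specification, not on regexes; this is illustrative only, and the URLs are made up:

```python
import re

ACTIVE = re.compile(r'<(script|iframe)[^>]+src=["\'](http://[^"\']+)', re.I)
PASSIVE = re.compile(r'<img[^>]+src=["\'](http://[^"\']+)', re.I)

def mixed_content(html):
    """Classify plain-HTTP subresources of an HTTPS page."""
    return {
        "blockable": [m.group(2) for m in ACTIVE.finditer(html)],
        "upgradable": [m.group(1) for m in PASSIVE.finditer(html)],
    }

page = ('<script src="http://cdn.example/app.js"></script>'
        '<img src="http://cdn.example/logo.png">')
print(mixed_content(page))
```

A site audit in this spirit catches the legacy-browser hazard the excerpt mentions before any browser has to intervene.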
Conference Paper
Full-text available
In this paper we carry out a systematic analysis of the state of the HTTPS deployment of the most popular Italian university websites. Our analysis focuses on three different key aspects: HTTPS adoption and activation, HTTPS certificates, and cryptographic TLS implementations. Our investigation shows that the current state of the HTTPS deployment is unsatisfactory, yet it is possible to significantly improve the level of security by working exclusively at the web application layer. We hope this observation will encourage site operators to take actions to improve the current state of protection.
... Inspired by the work of Acer et al. [8], we designed a normalized solution to process the various verification results. In detail, we grouped the verification results of each program encountered in our experiment into 16 categories, each with a new error code, making it easy to redefine the reward and analyze verification codes. ...
... Acer et al. [8] investigated the root causes of Chrome HTTPS certificate errors. Since hundreds of millions of spurious browser warnings are triggered per month, they frustrate users and undermine trust in browser warnings, which may cause users to ignore real warnings. ...
Preprint
The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols are the foundation of network security. The certificate verification in SSL/TLS implementations is vital and may become the weak link in the whole network ecosystem. In previous works, some research focused on the automated testing of certificate verification, and the main approaches rely on generating massive certificates through randomly combining parts of seed certificates for fuzzing. Although the generated certificates could meet the semantic constraints, the cost is quite heavy, and the performance is limited due to the randomness. To fill this gap, in this paper, we propose DRLGENCERT, the first framework applying deep reinforcement learning to the automated testing of certificate verification in SSL/TLS implementations. DRLGENCERT accepts ordinary certificates as input and outputs newly generated certificates which can trigger discrepancies with high efficiency. Benefiting from deep reinforcement learning, when generating certificates our framework can choose the best next action according to the result of a previous modification, instead of relying on simple random combinations. At the same time, we developed a set of new techniques to support the overall design, like a new feature extraction method for X.509 certificates and fine-grained differential testing. Also, we implemented a prototype of DRLGENCERT and carried out a series of real-world experiments. The results show DRLGENCERT is quite efficient: we obtained 84,661 discrepancy-triggering certificates from 181,900 certificate seeds, i.e., around 46.5% effectiveness. Also, we evaluated six popular SSL/TLS implementations, including GnuTLS, MatrixSSL, MbedTLS, NSS, OpenSSL, and wolfSSL. DRLGENCERT successfully discovered 23 serious certificate verification flaws, and most of them were previously unknown.
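The differential-testing idea at the heart of this abstract, and the normalized error codes mentioned in the excerpt above it, can be sketched in a few lines: run the same certificate through several verifiers, map each implementation-specific error code into a shared category, and flag any disagreement. The verifiers and the category map below are stand-ins, not the paper's actual 16-category scheme:

```python
# Hypothetical normalization table from implementation-specific codes
# to shared categories; real implementations have many more codes.
NORMALIZE = {
    "OK": "ok",
    "ERR_EXPIRED": "expired",
    "X509_V_ERR_CERT_HAS_EXPIRED": "expired",
    "ERR_SELF_SIGNED": "self-signed",
    "DEPTH_ZERO_SELF_SIGNED": "self-signed",
}

def differential_test(cert, verifiers):
    """Return ({verifier: normalized result}, discrepancy_found)."""
    results = {name: NORMALIZE.get(fn(cert), "other")
               for name, fn in verifiers.items()}
    return results, len(set(results.values())) > 1

# Two toy verifiers: impl_b has a bug and never checks expiry.
verifiers = {
    "impl_a": lambda c: "X509_V_ERR_CERT_HAS_EXPIRED" if c["expired"] else "OK",
    "impl_b": lambda c: "OK",
}
print(differential_test({"expired": True}, verifiers))
```

Certificates that trigger a discrepancy are exactly the interesting test cases the framework rewards during generation.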
... In order to use certificate validity as a proxy for detecting DNS manipulation, we need to account for certificates that would be invalid even in a control setting, as invalid certificates are common on the Internet [1,4,34,70]. These "control certificates" serve as ground truth for the case where the certificates are invalid because of deployment errors from domain administrators. ...
Article
Full-text available
DNS manipulation is an increasingly common technique used by censors and other network adversaries to prevent users from accessing restricted Internet resources and hijack their connections. Prior work in detecting DNS manipulation relies largely on comparing DNS resolutions with trusted control results to identify inconsistencies. However, the emergence of CDNs and other cloud providers practicing content localization and load balancing leads to these heuristics being inaccurate, underscoring the need for more verifiable signals of DNS manipulation. In this paper, we develop a new technique, CERTainty, that utilizes the widely established TLS certificate ecosystem to accurately detect DNS manipulation, and obtain more information about the adversaries performing such manipulation. We find that untrusted certificates, mismatching hostnames, and blockpages are powerful proxies for detecting DNS manipulation. Our results show that previous work using consistency-based heuristics is inaccurate, allowing for 72.45% false positives in the cases detected as DNS manipulation. Further, we identify 17 commercial DNS filtering products in 52 countries, including products such as SafeDNS, SkyDNS, and Fortinet, and identify the presence of 55 ASes in 26 countries that perform ISP-level DNS manipulation. We also identify 226 new blockpage clusters that are not covered by previous research. We are integrating techniques used by CERTainty into active measurement platforms to continuously and accurately monitor DNS manipulation.
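The certificate-based signal described in this abstract, together with the "control certificates" caveat in the excerpt above it, can be condensed into one decision rule: a test resolution is suspect only if the server it returns presents an invalid certificate while control vantage points obtain valid ones for the same domain. Field names and data below are illustrative, not CERTainty's actual schema:

```python
def suspect_manipulation(test_cert, control_certs, domain):
    """Flag a resolution as likely manipulated based on certificate validity."""
    def valid(cert):
        # Simplified validity: chain trusted and hostname covered.
        return cert["trusted"] and domain in cert["names"]

    if not any(valid(c) for c in control_certs):
        # Invalid everywhere: a deployment error by the site, not censorship.
        return False
    return not valid(test_cert)

control = [{"trusted": True, "names": ["blocked.example"]}]
injected = {"trusted": False, "names": ["filter.local"]}  # e.g. a filter appliance
print(suspect_manipulation(injected, control, "blocked.example"))
```

Conditioning on the control set is what keeps the many benign invalid certificates on the Internet from being miscounted as censorship.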
... While in theory, cryptographic solutions are provably secure, in practice, the security of communication depends on the correctness of implementation of the existing tools that support encryption standards. Over the past decade, numerous studies pointed out weaknesses of cryptographic security of various protocols (TLS/SSL [18], SSH [16], HTTPS [2,10,17]). The majority of these studies investigated insufficient security of generated keys as a main root cause of the problem. ...
Chapter
In spite of strong mathematical foundations of cryptographic algorithms, the practical implementations of cryptographic protocols continue to fail. Insufficient entropy, faulty library implementations, and API misuse not only jeopardize the security of cryptographic keys, but also lead to distinct patterns that can enable keys' origin attribution. In this work, we examined attribution of cryptographic keys based on their moduli. We analyzed over 6.5 million keys generated by 43 cryptographic library versions on 20 Linux OS versions released over the past 8 years. We showed that with only a few moduli characteristics, we can accurately (with 75% accuracy) attribute an individual key to the originating library. Depending on the library, our approach is sensitive enough to pinpoint the corresponding major, minor, and build release of several libraries that generated an individual key with an accuracy of 81%–98%. We further explore attribution of SSH keys collected from publicly facing IPv4 addresses, showing that our approach is able to differentiate individual libraries of RSA keys with 95% accuracy.
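The kind of "moduli characteristics" this abstract relies on can be illustrated with a toy feature extractor over an RSA modulus: bit length, the most significant byte (libraries differ in how they force the top bits), and residues modulo small primes. The exact features used in the paper differ; this is a sketch, and the sample modulus is a made-up 64-bit value standing in for a real 2048-bit one:

```python
def modulus_features(n):
    """Extract simple attribution features from an RSA modulus (int)."""
    return {
        "bits": n.bit_length(),
        "msb": n >> (n.bit_length() - 8),            # top 8 bits of the modulus
        "residues": [n % p for p in (3, 5, 7, 11)],  # small-prime fingerprint
    }

print(modulus_features(0xF123456789ABCDEF))
```

Feeding such feature vectors from many known-origin keys into a standard classifier is the essence of the attribution approach.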
... Researchers have been studying users' behavior and response to security warnings in Web browsers to improve their effectiveness. Apart from general technical problems such as date/time mismatches and antivirus alerts [14], security warnings can be triggered by visiting malicious Websites that may download malware or cause phishing attacks by spoofing popular Websites. Egelman et al. [15] designed an experiment to study participants' behavior when presented with security warnings in the browser caused by spear-phishing attacks. ...
Article
Full-text available
This paper reports a formative evaluation of auditory representations of cyber security threat indicators and cues, referred to as sonifications, to warn users about cyber threats. Most Internet browsers provide visual cues and textual warnings to help users identify when they are at risk. Although these alarming mechanisms are very effective in informing users, there are certain situations and circumstances where these alarming techniques are unsuccessful in drawing the user's attention: (1) security warnings and features (e.g., blocking out malicious Websites) might overwhelm a typical Internet user, and thus the users may overlook or ignore visual and textual warnings and, as a result, they might be targeted; (2) these visual cues are inaccessible to certain users such as those with visual impairments. This work is motivated by our previous work on the use of sonification of security warnings for users who are visually impaired. To investigate the usefulness of sonification in general security settings, this work uses real Websites instead of simulated Web applications with sighted participants. The study targets sonification for three different types of security threats: (1) phishing, (2) malware downloading, and (3) form filling. The results show that on average 58% of the participants were able to correctly remember what the sonification conveyed. Additionally, about 73% of the participants were able to correctly identify the threat that the sonification represented while performing tasks using real Websites. Furthermore, the paper introduces "CyberWarner", a sonification sandbox that can be installed on the Google Chrome browser to enable auditory representations of certain security threats and cues that are designed based on several URL heuristics.

Article highlights:
It is feasible to develop sonified cyber security threat indicators that users intuitively understand with minimal experience and training.
Users are more cautious about malicious activities in general. However, when navigating real Websites, they are less informed. This might be due to the appearance of the navigated Websites or the overwhelming issues when performing tasks. Participants' qualitative responses indicate that even when they did not remember what the sonification conveyed, it was able to capture their attention and prompt safe actions in response.
... On the other hand, end-user clients often do not have a precise clock time available [2]. To counteract that, we propose to consider the epoch before and after the current one during the delegation's validation. ...
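The skew-tolerant validation the excerpt proposes can be sketched as accepting the epochs adjacent to the current one. The one-hour epoch length and the function names below are assumptions for illustration, not the cited scheme's actual parameters.

```python
# Sketch of a skew-tolerant delegation check: accept the previous,
# current, and next epoch so that modest client clock error does not
# reject an otherwise valid delegation. Epoch length is an assumption.
def epoch_of(unix_time: int, epoch_len: int = 3600) -> int:
    return unix_time // epoch_len

def delegation_valid(delegation_epoch: int, client_time: int,
                     epoch_len: int = 3600) -> bool:
    current = epoch_of(client_time, epoch_len)
    return delegation_epoch in (current - 1, current, current + 1)
```

The trade-off is standard: each extra accepted epoch widens the window in which a leaked short-lived delegation remains usable.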
Preprint
Full-text available
On today's Internet, combining the end-to-end security of TLS with Content Delivery Networks (CDNs) while ensuring the authenticity of connections results in a challenging delegation problem. When CDN servers provide content, they have to authenticate themselves as the origin server to establish a valid end-to-end TLS connection with the client. In standard TLS, the latter requires access to the secret key of the server. To curb this problem, multiple workarounds exist to realize a delegation of the authentication. In this paper, we present a solution that renders key sharing unnecessary and reduces the need for workarounds. By adapting identity-based signatures to this setting, our solution offers short-lived delegations. Additionally, by enabling forward-security, existing delegations remain valid even if the server's secret key leaks. We provide an implementation of the scheme and discuss integration into a TLS stack. In our evaluation, we show that an efficient implementation incurs less overhead than a typical network round trip. Thereby, we propose an alternative approach to current delegation practices on the web.
... Based on the passive measurement of over 300,000 users over nine months, Akhawe et al. identified the low-risk TLS warnings that consume most user attention and proposed recommendations to browser developers that help to maintain the user attention in high-risk warnings [3]. The large-scale study in 2017 shows that most HTTPS errors are caused by client-side or network issues, instead of server misconfigurations [1]. ...
Conference Paper
To detect fraudulent TLS server certificates and improve the accountability of certification authorities (CAs), certificate transparency (CT) is proposed to record certificates in publicly-visible logs, from which the monitors fetch all certificates and watch for suspicious ones. However, if the monitors, either domain owners themselves or third-party services, fail to return a complete set of certificates issued for a domain of interest, potentially fraudulent certificates may not be detected and then the CT framework becomes less reliable. This paper presents the first systematic study on CT monitors. We analyze the data in 88 public logs and the services of 5 active third-party monitors regarding 3,000,431 certificates of 6,000 selected Alexa Top-1M websites. We find that although CT allows ordinary domain owners to act as monitors, it is impractical for them to perform reliable processing by themselves, due to the rapidly increasing volume of certificates in public logs (e.g., on average 5 million records or 28.29 GB daily for the minimal set of logs that need to be monitored). Moreover, our study discloses that (a) none of the third-party monitors guarantees to return the complete set of certificates for a domain, and (b) for some domains, even the union of the certificates returned by the five third-party monitors can probably be incomplete. As a result, the certificates accepted by CT-enabled browsers are not absolutely visible to the claimed domain owners, even when CT is adopted with well-functioning logs. The risk of invisible fraudulent certificates in public logs raises doubts on the reliability of CT in practice.
... Automation serves several goals for Let's Encrypt. On the Web server side, it greatly reduces the human effort required for HTTPS deployment, along with the concomitant risk of configuration errors that can lead to security problems [9,14]. Automated support for Let's Encrypt has been integrated into Web server software [40,67], IOT devices [16], large host platforms [71,75], and CDNs [12]. ...
Conference Paper
Let's Encrypt is a free, open, and automated HTTPS certificate authority (CA) created to advance HTTPS adoption to the entire Web. Since its launch in late 2015, Let's Encrypt has grown to become the world's largest HTTPS CA, accounting for more currently valid certificates than all other browser-trusted CAs combined. By January 2019, it had issued over 538 million certificates for 223 million domain names. We describe how we built Let's Encrypt, including the architecture of the CA software system (Boulder) and the structure of the organization that operates it (ISRG), and we discuss lessons learned from the experience. We also describe the design of ACME, the IETF-standard protocol we created to automate CA-server interactions and certificate issuance, and survey the diverse ecosystem of ACME clients, including Certbot, a software agent we created to automate HTTPS deployment. Finally, we measure Let's Encrypt's impact on the Web and the CA ecosystem. We hope that the success of Let's Encrypt can provide a model for further enhancements to the Web PKI and for future Internet security infrastructure.
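As a concrete taste of the ACME automation described above: the HTTP-01 challenge response in RFC 8555 is a "key authorization" built from the challenge token and the RFC 7638 thumbprint of the account key. A minimal stdlib sketch, assuming the JWK dict already contains exactly its required members (e.g. crv/kty/x/y for an EC key):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Unpadded base64url, as ACME requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Sketch of the RFC 8555 HTTP-01 key authorization. Assumes the JWK dict
# holds exactly the members required by RFC 7638, which are serialized
# sorted and without whitespace before hashing.
def key_authorization(token: str, account_jwk: dict) -> str:
    canonical = json.dumps({k: account_jwk[k] for k in sorted(account_jwk)},
                           separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    return f"{token}.{thumbprint}"
```

The client provisions this string at /.well-known/acme-challenge/<token>, letting the CA verify domain control without any manual steps.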
... Akhawe et al. studied browser warnings for malware, phishing, and TLS errors, and analyzed how often users ignore them [17]. The authors of [18] investigated the causes of HTTPS certificate errors and found that many non-attack events trigger browser warnings. Akhawe et al. [19] studied TLS warnings through a large-scale measurement and identified low-risk scenarios that consume a large share of user attention. ...
... Telemetry methodology thus has yielded some impressive findings [2,22], but it has its limits; it could generate statistics about how users behave in the wild, but not why. Thus, several questions remain, including: (1) if the classic problems have largely been solved, why, in some situations, do users still not adhere to or comprehend warnings?; and (2) when users do adhere to warnings, why do they do so? ...
Conference Paper
Full-text available
Web browser warnings should help protect people from malware, phishing, and network attacks. Adhering to warnings keeps people safer online. Recent improvements in warning design have raised adherence rates, but they could still be higher. And prior work suggests many people still do not understand them. Thus, two challenges remain: increasing both comprehension and adherence rates. To dig deeper into user decision making and comprehension of warnings, we performed an experience sampling study of web browser security warnings, which involved surveying over 6,000 Chrome and Firefox users in situ to gather reasons for adhering or not to real warnings. We find these reasons are many and vary with context. Contrary to older prior work, we do not find a single dominant failure in modern warning design, like habituation, that prevents effective decisions. We conclude that further improvements to warnings will require solving a range of smaller contextual misunderstandings.
Article
Digital certificates are frequently used to secure communications between users and web servers. Critical to the Web's PKI is the secure validation of digital certificates. Nonetheless, certificate validation itself is complex and error-prone, and it is further undermined by particular constraints of mobile browsers. These issues have long been overlooked. In this article, we undertook the first systematic and large-scale study of the certificate validation mechanism within popular mobile browsers, highlighting the necessity of reassessing it across all released browsers. To this end, we first compile a comprehensive test suite to identify security flaws in certificate validation from various aspects. By designing and implementing a generic, automated testing pipeline, we effectively evaluate 30 popular browsers on two mobile OS versions and compare them with five representative desktop browsers. We found that the latest mobile browsers accept as many as 33.2% of invalid certificates and reject merely 5.4% of them on average, leaving the majority to be decided by users, who usually have little expertise. Our findings shed light on the severity and inconsistency of certificate validation flaws across mobile browsers, which are likely to expose users to MITM attacks, spoofing attacks, and so forth.
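One class of invalid certificates such test suites probe is name mismatch (cf. the roughly 10% to 18% figures cited above, often triggered by something as small as an extra 's' in the name). A deliberately simplified matcher; this is a sketch, not full RFC 6125: wildcards are honored only as the entire left-most label, and there is no handling of IP addresses or internationalized names.

```python
# Simplified hostname matching (not full RFC 6125): "*" is accepted only
# as the whole left-most label; comparison is case-insensitive.
def hostname_matches(pattern: str, hostname: str) -> bool:
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):
        return False
    if p[0] == "*":
        return p[1:] == h[1:]
    return p == h
```

Even this toy version shows why "example.com" is not covered by a "*.example.com" certificate, a common real-world misconfiguration.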
Article
Webmail, protected by the HTTPS protocol, only works correctly if both the server and the client implement HTTPS-related features without vulnerabilities. Nevertheless, the deployment situation of these features in the webmail world is still unclear. To this end, we perform the first end-to-end, large-scale measurement of webmail services. For the server side, we first build an email address set of 2.2 billion addresses. Then we construct two webmail domain datasets: one contains 21 k domains filtered from the email address set; the other includes only 34 domains but supports more than 75% of the 2.2 billion email addresses. After performing a comprehensive measurement on these two webmail domain datasets, we find that some features are poorly deployed. Furthermore, we rank servers by analyzing the properties of HTTPS-related features. For the client side, we investigate the implementation of HTTPS-related features in 50 different combinations of web browsers and operating systems (OSes). We find that even the latest browsers have poor support for some features; for example, Firefox on all OSes does not support CT. Our findings highlight that full deployment of the security features of the HTTPS ecosystem is still a challenge, even for webmail services.
Chapter
Certificates are the foundation of secure communication over the internet. However, not all certificates are created and managed in a consistent manner and the certificate authorities (CAs) issuing certificates achieve different levels of trust. Furthermore, user trust in public keys, certificates, and CAs can quickly change. Combined with the expectation of 24/7 encrypted access to websites, this quickly evolving landscape has made careful certificate management both an important and challenging problem. In this paper, we first present a novel server-side characterization of the certificate replacement (CR) relationships in the wild, including the reuse of public keys. Our data-driven CR analysis captures management biases, highlights a lack of industry standards for replacement policies, and features successful example cases and trends. Based on the characterization results we then propose an efficient solution to an important revocation problem that currently leaves web users vulnerable long after a certificate has been revoked.
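The key-reuse signal the chapter studies can be checked by fingerprinting the SubjectPublicKeyInfo (SPKI) of the old and replacement certificates: matching fingerprints mean the replacement did not retire the key, so revoking the old certificate leaves a compromised key in service. A sketch with illustrative function names:

```python
import hashlib

# Sketch: compare SPKI fingerprints of an old certificate and its
# replacement. Equal fingerprints indicate public-key reuse across
# the certificate replacement (CR). Function names are illustrative.
def spki_fingerprint(spki_der: bytes) -> str:
    return hashlib.sha256(spki_der).hexdigest()

def replacement_reuses_key(old_spki_der: bytes, new_spki_der: bytes) -> bool:
    return spki_fingerprint(old_spki_der) == spki_fingerprint(new_spki_der)
```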
Chapter
Full-text available
An increasing amount of sensitive information is being communicated and stored online. Frequent reports of data breaches and sensitive data disclosures underscore the need for effective technologies that users and administrators can deploy to protect sensitive data. Privacy-enhancing technologies can control access to sensitive information to prevent or limit privacy violations. This chapter focuses on some of the technologies that prevent unauthorized access to sensitive information. These technologies include secure messaging, secure email, HTTPS, two-factor authentication, and anonymous communication. Usability is an essential component of a security evaluation because human error or unwarranted inconvenience can render the strongest security guarantees meaningless. Quantitative and qualitative studies from the usable security research community evaluate privacy-enhancing technologies from a socio-technical viewpoint and provide insights for future efforts to design and develop practical techniques to safeguard privacy. This chapter discusses the primary privacy-enhancing technologies that the usable security research community has analyzed and identifies issues, recommendations, and future research directions.
Chapter
Users in blockchain systems are exposed to address replacement attacks due to the weak binding between websites and smart contracts, as they have no way to verify the authenticity of obtained addresses. Prior research introduced TLS-endorsed Smart Contracts (TeSC) that equip Smart Contracts with authentication information, proving the relation to the domain name of the respective website. For an efficient and user-friendly approach, this technology needs to be integrated with wallets. Based on the analysis of browser warnings regarding TLS-certificates, we augment MetaMask with the ability to detect TeSC and warn users if attack scenarios are detected. To evaluate our work, we conduct a study with 40 participants to show the effectiveness of TeSC to prevent address-replacement attacks and ensure the safe interaction of users and addresses.
Chapter
HTTPS is the typical security best practice to protect data transmission. However, it is difficult to deploy HTTPS correctly even for administrators with technical expertise, and misconfigurations often lead to user-facing errors and potential vulnerabilities. One major reason is that administrators do not keep up with new features as the HTTPS ecosystem evolves, so mistakes go unnoticed and persist for years.
Article
To detect fraudulent TLS server certificates and improve the accountability of certification authorities (CAs), certificate transparency (CT) is proposed to record certificates in publicly-visible logs, from which the monitors fetch all certificates and watch for suspicious ones. However, if the monitors, either domain owners themselves or third-party services, fail to return a complete set of certificates issued for a domain of interest, potentially fraudulent certificates may not be detected and then the CT framework becomes less reliable. This paper presents the first systematic study on CT monitors. We analyze the data in 88 public logs and the services of 5 active third-party monitors regarding 3,000,431 certificates of 6,000 selected Alexa Top-1M websites. We find that although CT allows ordinary domain owners to act as monitors, it is impractical for them to perform reliable processing by themselves, due to the rapidly increasing volume of certificates in public logs (e.g., on average about 5 million records or 28.29 GB daily for the minimal set of logs that need to be monitored in 2018, or more than 7 million records per day in 2020, according to the Chrome CT policy). Moreover, our study discloses that (a) none of the third-party monitors guarantees to return the complete set of certificates for a domain, and (b) for some domains, even the union of the certificates returned by the five third-party monitors can probably be incomplete. As a result, the certificates accepted by CT-enabled browsers are not actually visible to the claimed domain owners, even when CT is adopted with well-functioning logs. The risk of invisible fraudulent certificates in public logs raises doubts on the reliability of CT in practice.
Chapter
The usage of anonymous proxies and virtual private networks has increased due to privacy and Internet censorship concerns. The traffic passing through proxies (middle boxes) can easily be intercepted and modified by the controller to perform man-in-the-middle attacks such as data injection, data tampering, and data deletion. A stealthy attack called cryptojacking has started infecting popular Web sites to mine cryptocurrency without the Web site visitors' consent. This paper proposes an effective and stealthy approach to performing a cryptojacking attack by injecting a cryptomining script into an anonymous proxy's Web site traffic. To increase the efficiency of the attack at larger scale, a testbed environment for a private The Onion Router (Tor) network is deployed to implement the same attack on a Tor exit node. Our study shows that the covertness of the attack can be improved by varying the victim's central processing unit usage during mining to avoid detection. Existing defensive mechanisms to prevent this attack are also reviewed.
Chapter
Modern smartphone messaging apps now use end-to-end encryption to provide authenticity, integrity and confidentiality. Consequently, the preferred strategy for wiretapping such apps is to insert a ghost user by compromising the platform’s public key infrastructure. The use of warning messages alone is not a good defence against a ghost user attack since users change smartphones, and therefore keys, regularly, leading to a multitude of warning messages which are overwhelmingly false positives. Consequently, these false positives discourage users from viewing warning messages as evidence of a ghost user attack. To address this problem, we propose collecting evidence from a variety of sources, including direct communication between smartphones over local networks and CONIKS, to reduce the number of false positives and increase confidence in key validity. When there is enough confidence to suggest a ghost user attack has taken place, we can then supply the user with evidence to help them make a more informed decision.
Chapter
In recent years, multiple security incidents involving Certificate Authority (CA) misconduct demonstrated the need for strengthened certificate issuance processes. Certificate Transparency (CT) logs make the issuance publicly traceable and auditable. In this paper, we leverage the information in CT logs to analyze if certificates adhere to the industry’s Baseline Requirements. We find 907 k certificates in violation of Baseline Requirements, which we pinpoint to issuing CAs. Using data from active measurements we compare certificate deployment to logged certificates, identify non-HTTPS certificates in logs, evaluate CT-specific HTTP headers, and augment IP address hitlists using data from CT logs. Moreover, we conduct passive and active measurements to carry out a first analysis of CT’s gossiping and pollination approaches, finding low deployment. We encourage the reproducibility of network measurement research by publishing data from active scans, measurement programs, and analysis tools.
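One Baseline Requirements check of the kind this chapter performs is the maximum validity period: around the time of such studies, the CA/Browser Forum Baseline Requirements capped newly issued certificates at 825 days (the cap has since been reduced). A sketch with that assumed limit:

```python
from datetime import datetime

# Assumed limit: the Baseline Requirements capped new certificate
# validity at 825 days (certificates issued after 2018-03-01); the
# limit has since tightened further. Illustrative check only.
def violates_validity_limit(not_before, not_after, max_days: int = 825) -> bool:
    return (not_after - not_before).days > max_days
```

Running such checks over CT log entries is exactly how logged certificates can be pinpointed to issuing CAs that violate the requirements.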
Article
Full-text available
Users will pay attention to reliable and credible indicators of a risk they want to avoid. More accurate detection and better security tools are necessary to regain users' attention and respect.
Conference Paper
Full-text available
We report the results of a large-scale measurement study of the HTTPS certificate ecosystem: the public-key infrastructure that underlies nearly all secure web communications. Using data collected by performing 110 Internet-wide scans over 14 months, we gain detailed and temporally fine-grained visibility into this otherwise opaque area of security-critical infrastructure. We investigate the trust relationships among root authorities, intermediate authorities, and the leaf certificates used by web servers, ultimately identifying and classifying more than 1,800 entities that are able to issue certificates vouching for the identity of any website. We uncover practices that may put the security of the ecosystem at risk, and we identify frequent configuration problems that lead to user-facing errors and potential vulnerabilities. We conclude with lessons and recommendations to ensure the long-term health and security of the certificate ecosystem.
Article
Full-text available
The authors review the diverse literature on the effects of product warnings. They conclude that warnings inform rather than persuade consumers and that consumers selectively attend to warning messages. They also examine research on potential warning message ineffectiveness due to frequent use and on possible reactive behavior induced by warning messages. They conclude that greater caution in the design of warning messages is needed because of the multiple effects of warnings and the varying responses of different groups of consumers. Furthermore, they suggest that warning messages should be designed using empirical research rather than expert opinion or judgment.
Article
Full-text available
The SSL and TLS infrastructure used in important protocols like HTTPS and IMAPS is built on an X.509 public-key infrastructure (PKI). X.509 certificates are thus used to authenticate services like online banking, shopping, e-mail, etc. However, it has long been felt that the certification processes of this PKI may lack stringency, resulting in a deployment where many certificates do not meet the requirements of a secure PKI. This paper presents a comprehensive analysis of X.509 certificates in the wild. To shed more light on the state of the deployed and actually used X.509 PKI, we obtained and evaluated data from many different sources. We conducted HTTPS scans of a large number of popular HTTPS servers over a 1.5-year time span, including scans from nine locations distributed over the globe. To compare certification properties of highly ranked hosts with the global picture, we included a third-party scan of the entire IPv4 space in our analyses. Furthermore, we monitored live SSL/TLS traffic on a 10 Gbps uplink of a large research network. This allows us to compare the properties of the deployed PKI with the part of the PKI that is being actively accessed by users. Our analysis reveals that the quality of certification lacks stringency, due to a number of reasons, among which incorrect certification chains and invalid certificate subjects give the most cause for concern. Similar concerns can be raised about the properties of certification chains and the many self-signed certificates used in the deployed X.509 PKI. Our findings confirm what has long been believed: namely, that the X.509 PKI we use in our everyday lives is in a sorry state.
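Two of the problems this analysis reports, incorrect certification chains and self-signed certificates, can be screened for with simple structural checks. The sketch below compares name strings only; real validation must also verify the signatures, not just compare names.

```python
# Structural screens only: certificates are modeled as dicts with
# "subject" and "issuer" name strings. Real validation must verify
# signatures with the issuer's public key, not merely compare names.
def looks_self_signed(cert: dict) -> bool:
    return cert["issuer"] == cert["subject"]

def chain_order_ok(chain: list) -> bool:
    """Each certificate's issuer should be the subject of the next one up."""
    return all(chain[i]["issuer"] == chain[i + 1]["subject"]
               for i in range(len(chain) - 1))
```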
Conference Paper
As miscreants routinely hijack thousands of vulnerable web servers weekly for cheap hosting and traffic acquisition, security services have turned to notifications both to alert webmasters of ongoing incidents as well as to expedite recovery. In this work we present the first large-scale measurement study on the effectiveness of combinations of browser, search, and direct webmaster notifications at reducing the duration a site remains compromised. Our study captures the life cycle of 760,935 hijacking incidents from July 2014 to June 2015, as identified by Google Safe Browsing and Search Quality. We observe that direct communication with webmasters increases the likelihood of cleanup by over 50% and reduces infection lengths by at least 62%. Absent this open channel for communication, we find browser interstitials, while intended to alert visitors to potentially harmful content, correlate with faster remediation. As part of our study, we also explore whether webmasters exhibit the necessary technical expertise to address hijacking incidents. Based on appeal logs where webmasters alert Google that their site is no longer compromised, we find 80% of operators successfully clean up symptoms on their first appeal. However, a sizeable fraction of site owners do not address the root cause of compromise, with over 12% of sites falling victim to a new attack within 30 days. We distill these findings into a set of recommendations for improving web security and best practices for webmasters.
Conference Paper
We measure the prevalence and uses of TLS proxies using a Flash tool deployed with a Google AdWords campaign. We generate 2.9 million certificate tests and find that 1 in 250 TLS connections are TLS-proxied. The majority of these proxies appear to be benevolent, however we identify over 1,000 cases where three malware products are using this technology nefariously. We also find numerous instances of negligent, duplicitous, and suspicious behavior, some of which degrade security for users without their knowledge. Distinguishing these types of practices is challenging in practice, indicating a need for transparency and user awareness.
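The certificate-comparison idea behind such proxy measurements can be sketched as follows: have the client report the leaf certificate it actually received and compare its fingerprint with the one the site is known to present. A mismatch indicates a TLS proxy, or an attacker, in the path. Names here are illustrative.

```python
import hashlib

# Sketch of interception detection by certificate comparison: a mismatch
# between the leaf the client received and the leaf the site is known to
# serve indicates a middlebox re-signing the connection.
def connection_intercepted(known_leaf_der: bytes,
                           observed_leaf_der: bytes) -> bool:
    known = hashlib.sha256(known_leaf_der).hexdigest()
    observed = hashlib.sha256(observed_leaf_der).hexdigest()
    return known != observed
```

The hard part in practice, as the study notes, is distinguishing benevolent proxies (antivirus, corporate filters) from malicious ones: the fingerprint mismatch alone looks identical in both cases.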
Conference Paper
Browsers warn users when the privacy of an SSL/TLS connection might be at risk. An ideal SSL warning would empower users to make informed decisions and, failing that, guide confused users to safety. Unfortunately, users struggle to understand and often disregard real SSL warnings. We report on the task of designing a new SSL warning, with the goal of improving comprehension and adherence. We designed a new SSL warning based on recommendations from warning literature and tested our proposal with microsurveys and a field experiment. We ultimately failed at our goal of a well-understood warning. However, nearly 30% more total users chose to remain safe after seeing our warning. We attribute this success to opinionated design, which promotes safety with visual cues. Subsequently, our proposal was released as the new Google Chrome SSL warning. We raise questions about warning comprehension advice and recommend that other warning designers use opinionated design.
Conference Paper
The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonly these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attacks on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with via forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses against such attacks are also discussed.
Article
Previous research showed that the SSL infrastructure is a fragile system: X.509 certificate validation fails for a non-trivial number of HTTPS-enabled websites, resulting in SSL warning messages presented to users. Studies revealed that warning messages do not provide easy-to-understand information or are ignored by web browser users. SSL warning messages are a critical component of the HTTPS infrastructure, and many attempts have been made to improve them. However, an important question has not received sufficient attention yet: why do webmasters (deliberately) deploy non-validating, security-critical X.509 certificates on publicly available websites? In this paper, we conduct the first study with webmasters operating non-validating X.509 certificates to understand their motives for deploying those certificates. We extracted the non-validating certificates from the body of X.509 certificates collected by Google's web crawler, informed webmasters about the problem with the X.509 certificate configuration on their websites, and invited a random sample of the respective webmasters to participate in our study. 755 webmasters participated, giving us insight into their motives. While one third of them admitted to having misconfigured their web server accidentally, two thirds gave reasons for deliberately using a non-validating X.509 certificate.
Article
Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions.
Conference Paper
When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results.
Here's My Cert, So Trust Me, Maybe? Understanding TLS Errors on the Web
  • Devdatta Akhawe
  • Bernhard Amann
  • Matthias Vallentin
  • Robin Sommer
The Transport Layer Security (TLS) Protocol Version 1.2
  • T Dierks
  • E Rescorla
T. Dierks and E. Rescorla. 2008. The Transport Layer Security (TLS) Protocol Version 1.2. https://tools.ietf.org/html/rfc5246#section-7.4.2.
An update on SHA-1 certificates in Chrome
  • Lucas Garron
  • David Benjamin
Lucas Garron and David Benjamin. 2015. An update on SHA-1 certificates in Chrome. https://security.googleblog.com/2015/12/ an-update-on-sha-1-certificates-in.html.
Kaspersky: SSL interception differentiates certificates with a 32bit hash
  • Tavis Ormandy
Tavis Ormandy. 2016. Kaspersky: SSL interception differentiates certificates with a 32bit hash. https://bugs.chromium.org/p/project-zero/issues/detail?id=978.
Survey on Behaviors of Captive Portals
  • Mariko Kobayashi
Mariko Kobayashi. 2017. Survey on Behaviors of Captive Portals. https://www.ietf.org/proceedings/98/slides/slides-98-capport-survey-00.pdf.
Chromium Blog: Changes to the Field Trials Infrastructure
  • Tyler Odean
Tyler Odean. 2012. Chromium Blog: Changes to the Field Trials Infrastructure. https://blog.chromium.org/2012/05/changes-to-field-trials-infrastructure.html.
How to Fix Slow or Incorrect Windows Computer Clock
  • Waseem Patwegar
Waseem Patwegar. 2016. How to Fix Slow or Incorrect Windows Computer Clock. http://www.techbout.com/fix-slow-incorrect-windows-computer-clock-14287/.
Avast Web Shield scans HTTPS sites for malware and threats
  • Deborah Salmi
Deborah Salmi. 2015. Avast Web Shield scans HTTPS sites for malware and threats. https://blog.avast.com/2015/05/25/explaining-avasts-https-scanning-feature/.
A Week to Remember: The Impact of Browser Warning Storage Policies
  • Joel Weinberger
  • Adrienne Porter Felt
Joel Weinberger and Adrienne Porter Felt. 2016. A Week to Remember: The Impact of Browser Warning Storage Policies. In Twelfth Symposium on Usable Privacy and Security (SOUPS 2016). USENIX Association, Denver, CO, 15-25. https://www.usenix.org/conference/soups2016/technical-sessions/presentation/weinberger
Safe Browsing protection
Google Chrome Privacy Whitepaper: Safe Browsing protection. https://www.google.com/chrome/browser/privacy/whitepaper.html#malware.