## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

... By a standard hybrid argument, we furthermore have Adv^{INT-CTXT}_{AEAD,A}(q_F) ≤ q_F · Adv^{INT-CTXT}_{AEAD,A}(1). This linear loss in the number of forgery attempts indeed surfaces in the security bounds of many AEAD schemes, including AES-CCM [29], AES-GCM [23][24][25], and ChaCha20+Poly1305 [16,37] underlying DTLS 1.3 and QUIC. The forgery limits for packet encryption, added to QUIC in draft-29 and to DTLS 1.3 in draft-38 [40][46][47][48] following our analysis, are determined based on these AEAD schemes' integrity bounds, aiming at security margins similar to those of the key usage limits for confidentiality in TLS 1.3 (cf. ...
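The linear hybrid loss quoted above is what drives the forgery-limit calculation in the protocol drafts. A minimal sketch in Python, where the per-attempt advantage and the target margin are illustrative assumptions, not figures taken from the excerpt:

```python
# Hedged sketch: a linear hybrid loss means q_F forgery attempts degrade
# INT-CTXT security from eps_1 (one attempt) to q_F * eps_1.  A forgery
# limit is then the largest q_F keeping that product below a target margin.
def forgery_limit(log2_eps1: int, log2_target: int) -> int:
    """Largest q_F with q_F * 2**log2_eps1 <= 2**log2_target."""
    return 2 ** (log2_target - log2_eps1)

# e.g. a per-attempt advantage of ~2^-117 and a target margin of 2^-57
# leave room for ~2^60 forgery attempts (illustrative numbers).
assert forgery_limit(-117, -57) == 2**60
```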

... Luykx and Paterson [32]). Both standards, as well as an IRTF CFRG draft on AEAD usage limits [22], further take the protocols' rekeying mechanisms into account through multi-user AEAD bounds [12,16,23]. ...

... Compared to secure channels over reliable transports (like TLS over TCP), the integrity bound is not tight but, at its core, contains a loss linear in the number of received ciphertexts (denoted by q_R in the theorem statement below): the channel's robustness leads to the adversary being able to make multiple forgery attempts on the underlying AEAD scheme, in principle with every delivered ciphertext. This result matches the linear loss in the security bounds of many AEAD schemes, including AES-CCM [29], AES-GCM [23][24][25], and ChaCha20+Poly1305 [16,37] underlying QUIC and DTLS 1.3. It also coincides with the observation that vulnerabilities in a channel's encryption scheme are easier to exploit over non-reliable networks; see, e.g., the Lucky Thirteen attack on the (D)TLS record protocols [3]. ...

The common approach in secure communication channel protocols is to rely on ciphertexts arriving in order and to close the connection upon any rogue ciphertext. Cryptographic security models for channels generally reflect this design. This is reasonable when running atop lower-level transport protocols like TCP that ensure in-order delivery, as is the case for TLS or SSH. However, protocols like QUIC or DTLS, which run over a non-reliable transport such as UDP, do not, and in fact cannot, close the connection if packets are lost or arrive in a different order. These protocols instead have to carefully handle effects arising naturally in unreliable networks, usually by using a sliding-window technique where ciphertexts can be decrypted correctly as long as they are not displaced too far. To capture QUIC and the newest DTLS version 1.3, we introduce a generalized notion of robustness of cryptographic channels. This property captures unreliable network behavior and guarantees that adversarial tampering cannot hinder ciphertexts that can be decrypted correctly from being accepted. We show that robustness is orthogonal to the common notion of integrity for channels, but together with integrity and chosen-plaintext security it provides a robust analog of chosen-ciphertext security of channels. In contrast to prior work, robustness allows us to study packet encryption in the record layer protocols of QUIC and of DTLS 1.3 and the novel sliding-window techniques both protocols employ. We show that both protocols achieve robust chosen-ciphertext security based on certain properties of their sliding-window techniques and the underlying AEAD schemes. Notably, the robustness needed in handling unreliable network messages requires both record layer protocols to tolerate repeated adversarial forgery attempts.
This means we can only establish non-tight security bounds (in terms of AEAD integrity), a security degradation that was missed in earlier protocol drafts. Our bounds led the responsible IETF working groups to introduce concrete forgery limits for both protocols and the IRTF CFRG to consider AEAD usage limits more broadly.
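The sliding-window handling the abstract refers to can be sketched as a bitmask over recently seen sequence numbers. This is an illustrative sketch only: the window size, naming, and bitmask layout are assumptions, and real QUIC/DTLS 1.3 processing additionally involves packet-number reconstruction and AEAD verification before the window is updated.

```python
# Hedged sketch of a sliding-window replay/ordering check: a ciphertext
# with sequence number n is accepted iff it is not too far behind the
# highest number seen so far and has not been received before.
class SlidingWindow:
    def __init__(self, size: int = 64):
        self.size = size
        self.highest = -1
        self.mask = 0  # bit i set => sequence number (highest - i) seen

    def check_and_update(self, n: int) -> bool:
        if n > self.highest:
            shift = n - self.highest
            self.mask = ((self.mask << shift) | 1) & ((1 << self.size) - 1)
            self.highest = n
            return True
        offset = self.highest - n
        if offset >= self.size or (self.mask >> offset) & 1:
            return False  # displaced too far back, or a replay
        self.mask |= 1 << offset
        return True

w = SlidingWindow()
assert w.check_and_update(5)       # new highest: accepted
assert w.check_and_update(3)       # reordered but within window: accepted
assert not w.check_and_update(3)   # replay: rejected
assert not w.check_and_update(5)   # replay of the highest: rejected
```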

... Nonce collisions are not the only threat one needs to consider when using authenticated encryption schemes. Other security properties, like integrity or indistinguishability, also degrade with increasing data sizes and rotated encryption keys [9,16,31,39]. However, for the schemes considered in this paper, the practical impact of this degradation on session tickets is limited. ...

... To detect a CBC padding oracle vulnerability, we need to modify the second-to-last ciphertext block n − 1. Based on the assumed format, we combine possible MAC lengths (0, 16, 20, 28, 32, 48, 64) with possible block lengths (8, 16) to get possible positions for block n − 1. To get the last byte, we need to modify this block such that the last block n contains a valid 1-byte padding. ...
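The probe described in that snippet can be sketched as follows. The `oracle` function here is a hypothetical stand-in for the server's accept/reject behaviour, and the block layout is an assumption for illustration; a real scan would combine it with the MAC/block-length guesses listed above.

```python
# Hedged sketch: flip the last byte of the second-to-last ciphertext
# block until the server accepts the padding.  By the CBC decryption
# rule, XORing a mask into byte i of block n-1 XORs the same mask into
# byte i of the decrypted last block.
def probe_last_byte(blocks, block_len, oracle):
    """Return the XOR mask that turns the last plaintext byte into a
    valid 1-byte padding (a 0x00 padding byte in TLS-style CBC)."""
    for guess in range(256):
        mod = [bytearray(b) for b in blocks]
        mod[-2][block_len - 1] ^= guess
        if oracle(b"".join(bytes(b) for b in mod)):
            return guess
    return None

def make_mock_oracle(blocks, block_len, secret_last_byte):
    # Mock server: tracks only the CBC effect on the last plaintext byte.
    original = blocks[-2][block_len - 1]
    def oracle(ct):
        tampered = ct[-2 * block_len:-block_len][block_len - 1]
        delta = tampered ^ original
        return (secret_last_byte ^ delta) == 0x00
    return oracle

blocks = [bytes(16)] * 3
oracle = make_mock_oracle(blocks, 16, 0x41)
assert probe_last_byte(blocks, 16, oracle) == 0x41
```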

... These changes can have an impact on the security of session tickets. Given the previous works on security bounds affecting indistinguishability and integrity of encryption schemes in different contexts [9,16,31,39], we would like to encourage implementors to be mindful when selecting different algorithms and to have these limits in mind. ...

Session tickets improve the performance of the TLS protocol. They allow abbreviating the handshake by using secrets from a previous session. To this end, the server encrypts the secrets using a Session Ticket Encryption Key (STEK) only known to the server, which the client stores as a ticket and sends back upon resumption. The standard leaves details such as data formats, encryption algorithms, and key management to the server implementation. TLS session tickets have been criticized by security experts for undermining the security guarantees of TLS. An adversary who can guess or compromise the STEK can passively record and decrypt TLS sessions and may impersonate the server. Thus, weak implementations of this mechanism may completely undermine TLS security guarantees. We performed the first systematic large-scale analysis of the cryptographic pitfalls of session ticket implementations. (1) We determined the data formats and cryptographic algorithms used by 12 open-source implementations and designed online and offline tests to identify vulnerable implementations. (2) We performed several large-scale scans and collected session tickets for extended offline analyses. We found significant differences in session ticket implementations and critical security issues in the analyzed servers. Vulnerable servers used weak keys or repeating keystreams in the issued tickets, allowing for session ticket decryption. Among others, our analysis revealed a widespread implementation flaw within the Amazon AWS ecosystem that allowed for passive traffic decryption for at least 1.9% of the Tranco Top 100k servers.
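The "repeating keystreams" failure mode the abstract reports can be illustrated in a few lines. This is a hedged toy model, not the paper's methodology: the keystream below is a random stand-in for what a stream cipher would produce under one fixed STEK and nonce.

```python
# Hedged illustration: if a server encrypts two tickets under the same
# STEK and nonce with a stream cipher, the keystream repeats, and XORing
# the two ciphertexts cancels it, leaking the XOR of the plaintexts.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)            # same key + same nonce => same keystream
t1 = xor(b"ticket-secret-A".ljust(32), keystream)
t2 = xor(b"ticket-secret-B".ljust(32), keystream)

leak = xor(t1, t2)                    # keystream cancels out entirely
assert leak == xor(b"ticket-secret-A".ljust(32), b"ticket-secret-B".ljust(32))
```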

Cryptographers rely on visualization to effectively communicate cryptographic constructions with one another. Visual frameworks such as constructive cryptography (TOSCA 2011), the joy of cryptography (online book) and state-separating proofs (SSPs, Asiacrypt 2018) are useful to communicate not only the construction, but also their proof visually by representing a cryptographic system as graphs.
A core feature of SSPs is the re-use of code: e.g., a package of code might be used in a game and also be part of the description of a reduction. Thus, in a proof, the linear structure of a paper either requires the reader to turn pages to find definitions or requires writers to re-state them, thereby interrupting the visual flow of the game hops that are defined by a sequence of graphs.
We present an interactive proof viewer for state-separating proofs (SSPs) which addresses these limitations, and we perform three case studies: the equivalence between simulation-based and game-based notions for symmetric encryption, the security proof of the Goldreich-Goldwasser-Micali construction of a pseudorandom function from a pseudorandom generator, and Brzuska's and Oechsner's SSP formalization of the proof for Yao's garbling scheme.

A Rugged Pseudorandom Permutation (RPRP) is a variable-input-length tweakable cipher satisfying a security notion that is intermediate between tweakable PRP and tweakable SPRP. It was introduced at CRYPTO 2022 by Degabriele and Karadžić, who additionally showed how to generically convert such a primitive into nonce-based and nonce-hiding AEAD schemes satisfying either misuse-resistance or release-of-unverified-plaintext security, as well as Nonce-Set AEAD, which has applications in protocols like QUIC and DTLS. Their work shows that RPRPs are powerful and versatile cryptographic primitives. However, the RPRP security notion itself can seem rather contrived, and the motivation behind it is not immediately clear. Moreover, they only provided a single RPRP construction, called UIV, which calls into question the generality of their modular approach and whether other instantiations are even possible. In this work, we address this question positively by presenting new RPRP constructions, thereby validating their modular approach and providing further justification in support of the RPRP security definition. Furthermore, we present a more refined view of their results by showing that strictly weaker RPRP variants, which we introduce, suffice for many of their transformations. From a theoretical perspective, our results show that the well-known three-round Feistel structure achieves stronger security as a permutation than a mere pseudorandom permutation, as was established in the seminal result by Luby and Rackoff. We conclude on a more practical note by showing how to extend the left domain of one RPRP construction for applications that require larger values in order to meet the desired level of security.

Polynomial hashing as an instantiation of universal hashing is a widely employed method for the construction of MACs and authenticated encryption (AE) schemes, the ubiquitous GCM being a prominent example. It is also used in recent AE proposals within the CAESAR competition which aim at providing nonce misuse resistance, such as POET. The algebraic structure of polynomial hashing has given rise to security concerns: At CRYPTO 2008, Handschuh and Preneel describe key recovery attacks, and at FSE 2013, Procter and Cid provide a comprehensive framework for forgery attacks. Both approaches rely heavily on the ability to construct forgery polynomials having disjoint sets of roots, with many roots (“weak keys”) each. Constructing such polynomials beyond naïve approaches is crucial for these attacks, but still an open problem.
In this paper, we comprehensively address this issue. We propose to use twisted polynomials from Ore rings as forgery polynomials. We show how to construct sparse forgery polynomials with full control over the sets of roots. We also achieve complete and explicit disjoint coverage of the key space by these polynomials. We furthermore leverage this new construction in an improved key recovery algorithm.
As cryptanalytic applications of our twisted polynomials, we develop the first universal forgery attacks on GCM in the weak-key model that do not require nonce reuse. Moreover, we present universal weak-key forgeries for the nonce-misuse resistant AE scheme POET, which is a CAESAR candidate.

In this paper we study time/memory/data trade-off attacks from two points of view. We show that the time-memory trade-off (TMTO) by Hellman may be extended to a time/memory/key trade-off. For example, AES with a 128-bit key has only 85-bit security if 2^43 encryptions of an arbitrary fixed text under different keys are available to the attacker. Such attacks are generic and are more practical than some recent high-complexity chosen related-key attacks on round-reduced versions of AES. They constitute a practical threat for any cipher with 80-bit or shorter keys and are marginally practical for 128-bit key ciphers. We show that the UNIX password scheme, even with carefully generated passwords, is vulnerable to practical trade-off attacks. Our second contribution is to present a unifying framework for the analysis of multiple-data trade-offs. Both the Babbage-Golic (BG) and Biryukov-Shamir (BS) formulas can be obtained as special cases of this framework. Moreover, we identify a new class of single-table multiple-data trade-offs which cannot be obtained either as a BG or a BS trade-off. Finally, we consider the analysis of the rainbow method of Oechslin and show that for multiple data, the TMTO curve of the rainbow method is inferior to the TMTO curve of the Hellman method.
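The AES example in this abstract reduces to a simple relation: with data from D distinct keys available, effective strength drops by log2(D) bits. A sketch of that arithmetic (the relation is the abstract's own example; the helper name is ours):

```python
# Hedged sketch of the time/memory/key trade-off arithmetic: given a
# fixed plaintext encrypted under D = 2^d distinct keys, the effective
# key strength against this generic attack drops to k - d bits.
def effective_bits(key_bits: int, log2_data: int) -> int:
    return key_bits - log2_data

# The abstract's AES figures: 128-bit keys, 2^43 encryptions => 85 bits.
assert effective_bits(128, 43) == 85
```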

Constructing a pseudorandom function (PRF) is a fundamental problem in cryptology. Such a construction, implemented by truncating the last m bits of permutations of {0,1}^n, was suggested by Hall et al. (1998). They conjectured that the distinguishing advantage of an adversary with q queries, Adv_{n,m}(q), is small if q = o(2^{(n+m)/2}), established an upper bound on Adv_{n,m}(q) that confirms the conjecture for m < n/7, and also stated a general lower bound Adv_{n,m}(q) = Ω(q^2/2^{n+m}). The conjecture was essentially confirmed by Bellare and Impagliazzo (1999). Nevertheless, the problem of estimating Adv_{n,m}(q) remained open. Combining the trivial bound 1, the birthday bound, and a result of Stam (1978) leads to the upper bound Adv_{n,m}(q) = O(min{q(q-1)/2^n, q/2^{(n+m)/2}, 1}). In this paper we show that this upper bound is tight for every 0 ≤ m < n and any q. This, in turn, verifies that the converse to the conjecture of Hall et al. is also correct, i.e., that Adv_{n,m}(q) is negligible only for q = o(2^{(n+m)/2}).
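The three-term upper bound in this abstract is easy to evaluate numerically (constants suppressed; parameter values below are illustrative, not from the paper):

```python
# Hedged calculation of the bound Adv_{n,m}(q) = O(min{q(q-1)/2^n,
# q/2^((n+m)/2), 1}) for truncated-permutation PRFs.
def adv_bound(n: int, m: int, q: int) -> float:
    return min(q * (q - 1) / 2**n, q / 2**((n + m) / 2), 1.0)

# At q = 2^((n+m)/2) the middle term reaches 1: the construction stops
# looking random there, matching the tightness result.
assert adv_bound(128, 64, 2**96) == 1.0
```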

The multi-key, or multi-user, setting challenges cryptographic algorithms to maintain high levels of security when used with many different keys, by many different users. Its significance lies in the fact that in the real world, cryptography is rarely used with a single key in isolation. A folklore result, proved by Bellare, Boldyreva, and Micali for public-key encryption in EUROCRYPT 2000, states that the success probability in attacking any one of many independently keyed algorithms can be bounded by the success probability of attacking a single instance of the algorithm, multiplied by the number of keys present. Although sufficient for settings in which not many keys are used, once cryptographic algorithms are used on an internet-wide scale, as is the case with TLS, the effect of multiplying by the number of keys can drastically erode security claims. We establish a sufficient condition on cryptographic schemes and security games under which multi-key degradation is avoided. As illustrative examples, we discuss how AES and GCM behave in the multi-key setting, and prove that GCM, as a mode, does not have multi-key degradation. Our analysis allows limits on the amount of data that can be processed per key by GCM to be significantly increased. This leads directly to improved security for GCM as deployed in TLS on the Internet today.
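The folklore bound this abstract starts from, multi-key advantage at most u times the single-key advantage, translates into a simple bit-security erosion. A sketch with illustrative numbers (not figures from the paper):

```python
from math import log2

# Hedged sketch of the folklore multi-key degradation: with u keys in
# play, the generic bound loses log2(u) bits of security relative to the
# single-key setting.
def mu_security_bits(single_key_bits: float, n_keys: int) -> float:
    return single_key_bits - log2(n_keys)

# Internet-scale example: 2^30 keys erode a 128-bit claim to 98 bits,
# which is why avoiding the factor-u loss (as shown for GCM) matters.
assert mu_security_bits(128, 2**30) == 98.0
```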

We initiate the study of multi-user (mu) security of authenticated encryption (AE) schemes as a way to rigorously formulate, and answer, questions about the “randomized nonce” mechanism proposed for the use of the AE scheme GCM in TLS 1.3. We (1) Give definitions of mu ind (indistinguishability) and mu kr (key recovery) security for AE (2) Characterize the intent of nonce randomization as being improved mu security as a defense against mass surveillance (3) Cast the method as a (new) AE scheme RGCM (4) Analyze and compare the mu security of both GCM and RGCM in the model where the underlying block cipher is ideal, showing that the mu security of the latter is indeed superior in many practical contexts to that of the former, and (5) Propose an alternative AE scheme XGCM having the same efficiency as RGCM but better mu security and a more simple and modular design.

Universal hash functions are commonly used primitives for fast and secure message authentication in the form of message authentication codes or authenticated encryption with associated data schemes. These schemes are widely used and standardised, the most well known being McGrew and Viega's Galois/Counter Mode (GCM). In this paper we identify some properties of hash functions based on polynomial evaluation that arise from the underlying algebraic structure. As a result we are able to describe a general forgery attack, of which Saarinen's cycling attack from FSE 2012 is a special case. Our attack removes the requirement for long messages and applies regardless of the field in which the hash function is evaluated. Furthermore we provide a common description of all published attacks against GCM, by showing that the existing attacks are the result of these algebraic properties of the polynomial-based hash function. We also greatly expand the number of known weak GCM keys and show that almost every subset of the keyspace is a weak key class. Finally, we demonstrate that these algebraic properties and the corresponding attacks are highly relevant to GCM/2^+, a variant of GCM designed to increase the efficiency in software.
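The algebraic structure these attacks exploit can be shown in miniature: two messages collide under exactly the hash keys that are roots of their difference polynomial. This is a hedged toy model over a small prime field; GHASH itself works in GF(2^128), and the key and message values below are arbitrary illustrations.

```python
# Hedged toy polynomial-evaluation hash: hash(H, m) = sum_i m_i * H^(L-i).
P = 2**61 - 1  # stand-in field modulus (illustrative, not GCM's field)

def poly_hash(key: int, blocks: list) -> int:
    acc = 0
    for b in blocks:
        acc = (acc + b) * key % P
    return acc

H = 123_456_789                  # the targeted "weak" key
m1 = [1, 2, 3]
m2 = [1, 3, (3 - H) % P]         # difference polynomial: X^2 - H*X = X(X - H)

assert poly_hash(H, m1) == poly_hash(H, m2)          # collision under key H
assert poly_hash(H + 1, m1) != poly_hash(H + 1, m2)  # but only at its roots
```

The forgery polynomial `X(X - H)` was chosen to have the targeted key as a root; the attacks in the paper construct such polynomials systematically, covering many keys at once.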

A constant of 2^22 appears in the security bounds of the Galois/Counter Mode of operation, GCM. In this paper, we first develop an algorithm to generate nonces that have a high counter-collision probability. We show concrete examples of nonces with a counter-collision probability of about 2^{20.75}/2^{128}. This shows that the constant in the security bounds, 2^22, cannot be made smaller than 2^{19.74} if the proof relies on "the sum bound." We next show that it is possible to avoid using the sum bound, leading to improved security bounds for GCM. One of our improvements shows that the constant of 2^22 can be reduced to 32.

In this paper, we study the security proofs of GCM (Galois/Counter Mode of Operation). We first point out that a lemma, which is related to the upper bound on the probability of a counter collision, is invalid. Both the original privacy and authenticity proofs by the designers are based on the lemma. We further show that the observation can be translated into a distinguishing attack that invalidates the main part of the privacy proof. It turns out that the original security proofs of GCM contain a flaw, and hence the claimed security bounds are not justified. A very natural question is then whether the proofs can be repaired. We give an affirmative answer to the question by presenting new security bounds, both for privacy and authenticity. As a result, although the security bounds are larger than what were previously claimed, GCM maintains its provable security. We also show that, when the nonce length is restricted to 96 bits, GCM has better security bounds than a general case of variable length nonces.

ChaCha8 is a 256-bit stream cipher based on the 8-round cipher Salsa20/8. The changes from Salsa20/8 to ChaCha8 are designed to improve diffusion per round, conjecturally increasing resistance to cryptanalysis, while preserving—and often improving—time per round. ChaCha12 and ChaCha20 are analogous modifications of the 12-round and 20-round ciphers Salsa20/12 and Salsa20/20. This paper presents the ChaCha family and explains the differences between Salsa20 and ChaCha.

Suppose we sequentially throw m balls into n bins. It is a natural question to ask for the maximum number of balls in any bin. In this paper we derive sharp upper and lower bounds which are attained with high probability. We prove bounds for all values of m(n) ≥ n/polylog(n) by using the simple and well-known method of the first and second moment.
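The quantity studied here is easy to simulate. A hedged sketch (parameters and the sanity window are illustrative; in the classical m = n case the expected maximum load grows like ln n / ln ln n):

```python
# Hedged simulation of balls-into-bins maximum load.
import random

def max_load(m: int, n: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(m):
        bins[rng.randrange(n)] += 1
    return max(bins)

n = 10_000
load = max_load(n, n)
# ln n / ln ln n is about 4.1 for n = 10^4; the observed maximum should
# be of that order (a loose sanity window, not a proof).
assert 2 <= load <= 20
```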

In this paper we analyze the complexity of recovering cryptographic keys when messages are encrypted under various keys. We suggest key-collision attacks, which show that the theoretic strength of a block cipher (in ECB mode) cannot exceed the square root of the size of the key space. As a result, in some circumstances, some keys can be recovered while they are still in use, and these keys can then be used to substitute messages by messages more favorable to the attacker (e.g., transfer $1000000 to bank account 123-4567890). Taking DES as our example, we show that one key of DES can be recovered with complexity 2^28, and one 168-bit key of (three-key) triple-DES can be recovered with complexity 2^84. We also discuss the theoretic strength of chaining modes of operation, and show that in some cases they may be vulnerable to such attacks.
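The square-root limit rests on the birthday effect among independently chosen keys. A hedged toy demonstration (a 16-bit keyspace stands in for DES; the counts are illustrative):

```python
# Hedged illustration of the key-collision principle: among roughly
# 2^(k/2) independently chosen k-bit keys, two are expected to coincide,
# so ECB-mode strength cannot exceed the square root of the keyspace.
import random

rng = random.Random(2024)
KEYSPACE = 2**16
keys = [rng.randrange(KEYSPACE) for _ in range(4096)]  # ~2^(k/2 + 4) keys

# With this many draws a repeat is overwhelmingly likely; an attacker
# holding one of the colliding keys can decrypt the other user's traffic.
assert len(set(keys)) < len(keys)
```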

Poly1305-AES is a state-of-the-art message-authentication code suitable for a wide variety of applications. Poly1305-AES computes a 16-byte authenticator of a variable-length message, using a 16-byte AES key, a 16-byte additional key, and a 16-byte nonce. The security of Poly1305-AES is very close to the security of AES; the security gap is at most 14⌈L/16⌉D/2^106 if messages have at most L bytes, the attacker sees at most 2^64 authenticated messages, and the attacker attempts D forgeries. Poly1305-AES can be computed at extremely high speed: for example, fewer than 3.625(ℓ + 170) Athlon cycles for an ℓ-byte message. This speed is achieved without precomputation; consequently, 1000 keys can be handled simultaneously without cache misses. Special-purpose hardware can compute Poly1305-AES at even higher speed. Poly1305-AES is parallelizable, incremental, and not subject to any intellectual-property claims.

The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.

We describe an RSA-based signing scheme which combines essentially optimal efficiency with attractive security properties. Signing takes one RSA decryption plus some hashing, verification takes one RSA encryption plus some hashing, and the size of the signature is the size of the modulus. Assuming the underlying hash functions are ideal, our schemes are not only provably secure, but are so in a tight way: an ability to forge signatures with a certain amount of computational resources implies the ability to invert RSA (on the same size modulus) with about the same computational effort. Furthermore, we provide a second scheme which maintains all of the above features and in addition provides message recovery. These ideas extend to provide schemes for Rabin signatures with analogous properties; in particular their security can be tightly related to the hardness of factoring.

The Transport Layer Security (TLS) Protocol Version 1.3. RFC 8446 (Proposed Standard)

- E. Rescorla

ChaCha20 and Poly1305 based Cipher suites for TLS draft-agl-tls-chacha20poly1305-00

- A. Langley

Improved Algorithms for the Shortest Vector Problem and the Closest Vector Problem in the Infinity Norm

- Divesh Aggarwal
- Priyanka Mukhopadhyay

The "Coefficients H" Technique (Invited Talk)

- Jacques Patarin

Usage Limits on AEAD Algorithms. draft-irtf-cfrg-aead-limits-03

- Felix Günther
- Martin Thomson
- Christopher A. Wood

The Multi-user Security of GCM, Revisited: Tight Bounds for Nonce Randomization

- Viet Tung Hoang
- Stefano Tessaro
- Aishwarya Thiruvengadam

Limits on Authenticated Encryption Use in TLS (2015)

- Atul Luykx
- Kenneth G. Paterson

Using TLS to Secure QUIC

- M. Thomson

ChaCha20 and Poly1305 for IETF Protocols

- Y. Nir
- A. Langley