Public-Key Cryptosystems Resilient to Key Leakage

SIAM Journal on Computing (Impact Factor: 0.74). 01/2009; 2009(4):105. DOI: 10.1007/978-3-642-03356-8_2
Source: DBLP

ABSTRACT Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the "cold boot attacks" of Halderman et al. (USENIX Security '08), Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of L/polylog(L) bits, where L is the length of the secret key. In this paper we revisit the above-mentioned framework, and our main results are as follows:
• We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any universal hash proof system. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying hash proof system. Existing constructions of hash proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker d-Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption.
• We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its d-Linear variants), and show that the resulting scheme is resilient to any leakage of L(1 − o(1)) bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a "circular-secure" encryption scheme, fits our generic approach and is also resilient to any leakage of L(1 − o(1)) bits.
• We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of L(1 − o(1)) bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of L/4 bits, and CCA2-secure with any leakage of L/6 bits.
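The leakage framework described above can be made concrete with a toy sketch of the key-leakage game: the adversary picks an arbitrary efficiently computable function f, and the only restriction is a bound on the output length |f(sk)|. This is not the paper's construction; `keygen` and the length bound `lam` are illustrative placeholders.

```python
# Toy sketch (not the paper's scheme): the bounded-key-leakage game,
# where the adversary learns f(sk) for any f with short output.
import os
import hashlib

def keygen():
    sk = os.urandom(32)                       # 256-bit secret key (toy)
    pk = hashlib.sha256(b"pk" + sk).digest()  # placeholder "public key"
    return pk, sk

def leak(sk: bytes, f, max_bits: int) -> bytes:
    """Apply an adversarially chosen leakage function f, enforcing the
    length bound |f(sk)| <= max_bits that the framework imposes."""
    out = f(sk)
    assert len(out) * 8 <= max_bits, "leakage exceeds the allowed bound"
    return out

# The adversary asks for the first lam/8 bytes of the key; any efficiently
# computable f is allowed, as long as its output stays within the bound.
pk, sk = keygen()
lam = 64                                      # allow 64 bits of leakage
leaked = leak(sk, lambda k: k[:lam // 8], lam)
assert leaked == sk[:8]
```

A scheme is leakage-resilient for lam bits if it remains semantically secure even after the adversary receives such a `leaked` value alongside `pk`.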

20 Reads
    • "As noted previously, standard privacy amplification (e.g., postprocessing using a randomness extractor) does not work in this setting, because the adversary also knows the seed for the extractor. However, there are other ways of solving this problem, for instance by assuming the availability of a random oracle, or by using something similar to leakage-resilient encryption [31], [32] (but with a different notion of leakage, where the "leakage function" is restricted to use only LOCC operations, but is allowed access to side-information)."
    ABSTRACT: One-time memories (OTM's) are simple, tamper-resistant cryptographic devices, which can be used to implement sophisticated functionalities such as one-time programs. OTM's cannot exist in a fully-classical world, or in a fully-quantum world, but there is evidence that they can be built using "isolated qubits" -- qubits that can only be accessed using local operations and classical communication (LOCC). Here we present new constructions for OTM's using isolated qubits, which improve on previous work in several respects: they achieve a stronger "single-shot" security guarantee, which is stated in terms of the (smoothed) min-entropy; they are proven secure against general LOCC adversaries; and they are efficiently implementable. These results use Wiesner's idea of conjugate coding, combined with error-correcting codes that approach the capacity of the q-ary symmetric channel, and a high-order entropic uncertainty relation, which was originally developed for cryptography in the bounded quantum storage model.
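The "standard privacy amplification" that the quoted snippet says is insufficient in the isolated-qubits setting can be sketched with a seeded two-universal hash extractor (Carter-Wegman style). The parameters here are illustrative; the point of the snippet is that this construction only helps when the seed is hidden from (or independent of) the adversary's side information.

```python
# Sketch of seeded privacy amplification with a two-universal hash family
# h_{a,b}(x) = ((a*x + b) mod P) mod 2^out_bits, P prime and larger than x.
# Illustrative parameters; not a hardened implementation.
import secrets

P = (1 << 127) - 1  # a Mersenne prime, larger than any 15-byte input

def sample_seed():
    """Public seed (a, b) indexing a hash from the two-universal family."""
    return (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))

def ext(x: bytes, seed: tuple, out_bits: int) -> int:
    """Extract out_bits nearly uniform bits from a weak source x,
    assuming x has enough min-entropy and the seed is uniform."""
    a, b = seed
    xi = int.from_bytes(x, "big")
    return ((a * xi + b) % P) % (1 << out_bits)

x = secrets.token_bytes(15)  # weak source; 120 bits, so int(x) < P
seed = sample_seed()
r = ext(x, seed, 64)
assert 0 <= r < (1 << 64)
```

By the leftover hash lemma, the output is close to uniform given the seed, but only when the adversary's side information is independent of the seed, which is exactly what fails in the setting the snippet describes.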
    • "There is also a relationship to recent work on leakage-resilient and auxiliary-input models of encryption, which mostly falls into the "self-loop" category. In leakage-resilient models, such as those of Akavia, Goldwasser and Vaikuntanathan [4] and Naor and Segev [32], the adversary is given some function h of the secret key, not necessarily an encryption, such that it is information-theoretically impossible to recover sk. The auxiliary input model, introduced by Dodis, Kalai and Lovett [17], relaxes this requirement so that it only needs to be difficult to recover sk."
    ABSTRACT: Traditional definitions of encryption security guarantee secrecy for any plaintext that can be computed by an outside adversary. In some settings, such as anonymous credential or disk encryption systems, this is not enough, because these applications encrypt messages that depend on the secret key. A natural question to ask is: do standard definitions capture these scenarios? One area of interest is n-circular security, where the ciphertexts E(pk_1, sk_2), E(pk_2, sk_3), …, E(pk_{n-1}, sk_n), E(pk_n, sk_1) must be indistinguishable from encryptions of zero. Acar et al. (Eurocrypt 2010) provided a CPA-secure public-key cryptosystem that is not 2-circular secure due to a distinguishing attack. In this work, we consider a natural relaxation of this definition. Informally, a cryptosystem is n-weak circular secure if an adversary given the cycle E(pk_1, sk_2), E(pk_2, sk_3), …, E(pk_{n-1}, sk_n), E(pk_n, sk_1) has no significant advantage in the regular security game (e.g., CPA or CCA), where ciphertexts of chosen messages must be distinguished from ciphertexts of zero. Since this definition is sufficient for some practical applications and the Acar et al. counterexample no longer applies, the hope is that it would be easier to realize, or perhaps even implied by standard definitions. We show that this is unfortunately not the case: even this weaker notion is not implied by standard definitions. Specifically, we show:
    • For symmetric encryption, under the minimal assumption that one-way functions exist, n-weak circular (CPA) security is not implied by CCA security, for any n. In fact, it is not even implied by authenticated encryption security, where ciphertext integrity is guaranteed.
    • For public-key encryption, under a number-theoretic assumption, 2-weak circular security is not implied by CCA security.
    In both of these results, which also apply to the stronger circular security definition, we actually show for the first time an attack in which the adversary can recover the secret key of an otherwise-secure encryption scheme after an encrypted key cycle is published. These negative results are an important step in answering deep questions about which attacks are prevented by commonly-used definitions and systems of encryption. They say to practitioners: if key cycles may arise in your system, then even if you use CCA-secure encryption, your system may break catastrophically; that is, a passive adversary might be able to recover your secret keys.
    Proceedings of the 15th international conference on Practice and Theory in Public Key Cryptography; 05/2012
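The key cycles central to that abstract are easy to illustrate with a deliberately toy symmetric scheme (XOR with a SHA-256-derived keystream; insecure, purely for exposition): each key encrypts the next key in the cycle, and the last encrypts the first. The scheme and parameters below are assumptions for illustration, not any construction from the cited works.

```python
# Toy symmetric scheme used only to show the shape of an n-key cycle
# E(k_1, k_2), E(k_2, k_3), ..., E(k_n, k_1). NOT a secure cipher.
import hashlib
import os

def enc(key: bytes, msg: bytes, nonce: bytes) -> bytes:
    """XOR msg (at most 32 bytes) with a SHA-256 keystream; toy only."""
    stream = hashlib.sha256(key + nonce).digest()[: len(msg)]
    return nonce + bytes(a ^ b for a, b in zip(msg, stream))

def dec(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    stream = hashlib.sha256(key + nonce).digest()[: len(body)]
    return bytes(a ^ b for a, b in zip(body, stream))

n = 3
keys = [os.urandom(32) for _ in range(n)]
# The published cycle: key i encrypts key (i+1) mod n.
cycle = [enc(keys[i], keys[(i + 1) % n], os.urandom(16)) for i in range(n)]

# Learning any single key unravels the entire cycle:
assert dec(keys[0], cycle[0]) == keys[1]
```

Circular security asks that publishing `cycle` be harmless; the cited negative results show that standard CPA/CCA security does not guarantee this, and can even allow full key recovery.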
    • "Once side channels and faults are recognized as legitimate threats, they can be addressed within the paradigm. The challenge is to identify side channels appropriately, extend the threat model by modeling side channels abstractly, and then design systems that remain secure under this threat model [8], [9]. In extending security modeling beyond cryptography, there may be good reasons to consider several different threat models."
    ABSTRACT: The past and the future of privacy and cybersecurity are addressed from four perspectives, by different authors: theory and algorithms, technology, policy, and economics. Each author considers the role of the threat from the corresponding perspective, and each adopts an individual tone, ranging from a relatively serious look at the prospects for improvement in underlying theory and algorithms to more lighthearted considerations of the unpredictable futures of policy and economics.
    Proceedings of the IEEE 05/2012; 100(Special Centennial Issue):1659-1673. DOI:10.1109/JPROC.2012.2189794 · 4.93 Impact Factor