ABSTRACT: Location-sharing-based services (LSBSs) allow users to share their location with their friends in a sporadic manner. In currently deployed LSBSs, users must disclose their location to the service provider in order to share it with their friends. This default disclosure of location data introduces privacy risks. We define the security properties that a privacy-preserving LSBS should fulfill and propose two constructions. First, a construction based on identity-based broadcast encryption (IBBE) in which the service provider does not learn the user's location, but learns which other users are allowed to receive a location update. Second, a construction based on anonymous IBBE in which the service provider does not learn the latter either. As advantages with respect to previous work, in our schemes the LSBS provider does not need to perform any operations to compute the reply to a location data request, but only needs to forward IBBE ciphertexts to the receivers. We implement both constructions and present a performance analysis that shows their practicality. Furthermore, we extend our schemes such that the service provider, by performing some verification work, is able to collect privacy-preserving aggregate statistics on the locations users share with each other.
ABSTRACT: Public key Kerberos (PKINIT) is a standard authentication and key establishment protocol. Unfortunately, it suffers from a security flaw when combined with smart cards. In particular, temporary access to a user's card enables an adversary to impersonate that user for an indefinite period of time, even after the adversary's access to the card is revoked. In this paper, we extend Shoup's key exchange security model to the smart card setting and examine PKINIT in this model. Using this formalization, we show that PKINIT is indeed flawed, propose a fix, and provide a proof that this fix leads to a secure protocol.
International Journal of Information Security 06/2014; 13(3). · 0.94 Impact Factor
ABSTRACT: We analyze the Grøstl-0 hash function, i.e., the version of Grøstl submitted to the SHA-3 competition. This paper extends Peyrin's internal differential strategy, which uses differential paths between the permutations P and Q of Grøstl-0 to construct distinguishers of the compression function. This results in collision attacks and semi-free-start collision attacks on the Grøstl-0 hash function and compression function with reduced rounds. Specifically, we show collision attacks on the Grøstl-0-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities 2^48 and 2^112, and on the Grøstl-0-512 hash function reduced to 6 out of 14 rounds with time complexity 2^183. Furthermore, we demonstrate semi-free-start collision attacks on the Grøstl-0-256 compression function reduced to 8 rounds and the Grøstl-0-512 compression function reduced to 9 rounds. Finally, we show improved distinguishers for the Grøstl-0-256 permutations with reduced rounds.
Designs Codes and Cryptography 03/2014; · 0.73 Impact Factor
ABSTRACT: Cryptographic hash functions play a central role in cryptography: they map arbitrarily large input strings to fixed-length output strings. The main applications are to create a short unique identifier for a string, to transform a string with a one-way mapping, and to commit to a string or to confirm knowledge of it without revealing it. Additional applications are the mapping of group or field elements to strings, key derivation, and the extraction of entropy. The main security requirements are preimage and second preimage resistance, collision resistance, and indifferentiability from a random oracle. During the last three decades, more than 200 hash function designs have been published; many of those have been cryptanalyzed, including widely used schemes such as MD5 and SHA-1. Moreover, there was a lack of theoretical understanding of their constructions; as a consequence, structural flaws were identified in widely used designs. These concerns also undermined to some extent the confidence in the SHA-2 hash functions, which have been designed for long-term security. As a consequence, the US National Institute of Standards and Technology organized an open competition. The competition started in November 2007; after five years of intense design, analysis and debate, the Keccak function was announced as the winner in October 2012. The new FIPS standard is expected to be published in 2014. This extended abstract will identify some lessons learned during this competition.
Proceedings of the 6th International Conference on Security of Information and Networks; 11/2013
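The fixed-length mapping and the avalanche behavior the abstract describes can be seen directly with SHA3-256, the FIPS 202 instance derived from the competition winner Keccak; a minimal sketch using Python's standard `hashlib`:

```python
import hashlib

# Arbitrary-length input, fixed 256-bit (32-byte) output.
msg = b"an arbitrarily long input string"
digest = hashlib.sha3_256(msg).hexdigest()

# Flipping a single bit of the input yields an unrelated digest,
# which is what collision and (second) preimage resistance rely on.
digest2 = hashlib.sha3_256(b"an arbitrarily long input strinh").hexdigest()

print(len(digest) // 2)     # 32 bytes, regardless of input length
print(digest == digest2)    # False
```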
ABSTRACT: In the modern web, the browser has emerged as the vehicle of choice that users trust, customize, and use to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications. In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large-scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.
Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security; 11/2013
ABSTRACT: Various Location Privacy-Preserving Mechanisms (LPPMs) have been proposed in the literature to address the privacy risks derived from the exposure of user locations through the use of Location Based Services (LBSs). LPPMs obfuscate the locations disclosed to the LBS provider using a variety of strategies, which come at a cost either in terms of quality of service, or of resource consumption, or both. Shokri et al. propose an LPPM design framework that outputs optimal LPPM parameters considering a strategic adversary that knows the algorithm implemented by the LPPM and has prior knowledge of the users' mobility profiles. The framework allows users to set a constraint on the tolerable loss of quality of service due to perturbations in the locations exposed by the LPPM. We observe that this constraint does not capture the fact that some LPPMs rely on techniques that augment the level of privacy by increasing resource consumption. In this work we extend Shokri et al.'s framework to account for constraints on bandwidth consumption. This allows us to evaluate and compare LPPMs that generate dummy queries or that decrease the precision of the disclosed locations. We study the trilateral trade-off between privacy, quality of service, and bandwidth, using real mobility data. Our results show that dummy-based LPPMs offer the best protection for a given combination of quality and bandwidth constraints, and that, as soon as communication overhead is permitted, both dummy-based and precision-based LPPMs outperform LPPMs that only perturb the exposed locations. We also observe that the maximum value of privacy a user can enjoy can be reached either by sufficiently relaxing the quality loss or the bandwidth constraints, or by choosing an adequate combination of both constraints.
Our results contribute to a better understanding of the effectiveness of location privacy protection strategies, and to the design of LPPMs with constrained resource consumption.
Proceedings of the 12th ACM workshop on Workshop on privacy in the electronic society; 11/2013
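The bandwidth cost of a dummy-based LPPM comes from reporting several locations per query. A hypothetical toy sketch (not the paper's framework; the function name, bounding-box sampling and parameter k are illustrative assumptions) that hides one real location among k uniform dummies:

```python
import random

def dummy_lppm(real_loc, k, bbox, rng=None):
    """Toy dummy-based LPPM sketch (illustrative, not Shokri et al.'s
    framework): hide the real location among k dummies drawn uniformly
    from a bounding box. Bandwidth grows linearly: k+1 reported points."""
    rng = rng or random.Random(0)
    (min_lat, min_lon), (max_lat, max_lon) = bbox
    dummies = [(rng.uniform(min_lat, max_lat), rng.uniform(min_lon, max_lon))
               for _ in range(k)]
    report = dummies + [real_loc]
    rng.shuffle(report)  # the LBS provider cannot tell which entry is real
    return report

report = dummy_lppm((50.88, 4.70), k=4, bbox=((50.8, 4.6), (51.0, 4.8)))
print(len(report))  # 5 locations sent instead of 1
```

The quality of service is unchanged (the real location is reported exactly), which is why such mechanisms only become comparable to perturbation-based ones once a bandwidth constraint is modeled explicitly.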
ABSTRACT: In this paper we propose Sancus, a security architecture for networked embedded devices. Sancus supports extensibility in the form of remote (even third-party) software installation on devices while maintaining strong security guarantees. More specifically, Sancus can remotely attest to a software provider that a specific software module is running uncompromised, and can authenticate messages from software modules to software providers. Software modules can securely maintain local state, and can securely interact with other software modules that they choose to trust. The most distinguishing feature of Sancus is that it achieves these security guarantees without trusting any infrastructural software on the device. The Trusted Computing Base (TCB) on the device is only the hardware. Moreover, the hardware cost of Sancus is low. We describe the design of Sancus, and develop and evaluate a prototype FPGA implementation of a Sancus-enabled device. The prototype extends an MSP430 processor with hardware support for the memory access control and cryptographic functionality required to run Sancus. We also develop a C compiler that targets our device and that can compile standard C modules to Sancus protected software modules.
Proceedings of the 22nd USENIX conference on Security; 08/2013
ABSTRACT: In this paper we present a flexible hardware design for performing simultaneous exponentiations on embedded platforms. Simultaneous exponentiations are often used in anonymous credential protocols. The hardware is designed in VHDL and is fit for use in embedded systems. The kernel of the design is a pipelined Montgomery multiplier. The length of the operands and the number of stages can be chosen before synthesis. We show the effect of the operand length and the number of stages on the maximum attainable frequency as well as on the FPGA resources used. In addition to this design-time scalability, the hardware supports different operand lengths at run-time. The design uses generic VHDL without any device-specific primitives, ensuring portability to other platforms. As a test case, we integrated the hardware in a MicroBlaze embedded platform and show that simultaneous exponentiations with our hardware are performed 70 times faster than with an all-software implementation.
Proceedings of the 9th international conference on Reconfigurable Computing: architectures, tools, and applications; 03/2013
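A simultaneous exponentiation computes a product such as g^x * h^y mod n in a single pass rather than as two separate exponentiations. The textbook software version of this idea (Shamir's trick, sketched below in Python for illustration; the paper's contribution is a pipelined Montgomery-multiplier hardware realization, not this code) scans both exponents' bits jointly:

```python
def simultaneous_exp(g, x, h, y, n):
    """Compute (g**x * h**y) % n with one shared square-and-multiply
    loop (Shamir's trick). Per bit: one squaring plus at most one
    multiplication by g, h, or the precomputed product g*h."""
    gh = (g * h) % n                     # precompute for the bits where both are 1
    acc = 1
    for i in range(max(x.bit_length(), y.bit_length()) - 1, -1, -1):
        acc = (acc * acc) % n            # shared squaring step
        bx, by = (x >> i) & 1, (y >> i) & 1
        if bx and by:
            acc = (acc * gh) % n
        elif bx:
            acc = (acc * g) % n
        elif by:
            acc = (acc * h) % n
    return acc
```

Compared with computing pow(g, x, n) and pow(h, y, n) separately, the squarings are shared between the two exponents, which is why such products are the natural unit to accelerate in hardware for anonymous credential protocols.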
ABSTRACT: In this paper, we analyze the communication of trusted platform modules and their interface to the hosting platforms. While trusted platform modules are considered to be tamper resistant, the communication channel between these modules and the rest of ...
Computers & Mathematics with Applications. 03/2013; 65(5).
ABSTRACT: Content distribution applications such as digital broadcasting, video-on-demand (VoD) services, video conferencing, surveillance and telesurgery are confronted with difficulties, besides the inevitable compression and quality challenges, with respect to intellectual property management, authenticity, privacy regulations, access control, etc. Meeting such security requirements in an end-to-end video distribution scenario poses significant challenges. If the entire content is encrypted at the content creation side, the space for signal processing operations is very limited. Decryption, followed by video processing and re-encryption, is also to be avoided, as it is far from efficient, complicates key management and could expose the video to attacks. Additionally, once the content is delivered and decrypted, the protection is gone. Watermarking can complement encryption in these scenarios by embedding a message within the content itself, containing for example ownership information, unique buyer codes or content descriptions. Ideally, securing the video distribution should therefore be possible throughout the distribution chain in a flexible way, allowing the encryption, watermarking and encoding/transcoding operations to commute.
This paper introduces the reader to the relevant techniques that are needed to implement such an end-to-end commutative security system for video distribution, and presents a practical solution for encryption and watermarking compliant with H.264/AVC and the upcoming HEVC (High Efficiency Video Coding) video coding standards. To minimize the overhead and visual impact, a practical trade-off between the security of the encryption routine, robust watermarking and transcoding possibilities is investigated. We demonstrate that our combined commutative protection system effectively scrambles video streams, achieving SSIM (Structural Similarity Index) values below 0.2 across a range of practical bit rates, while allowing robust watermarking and transcoding.
IEEE Signal Processing Magazine 03/2013; 30(2):97-107. · 4.48 Impact Factor
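The SSIM scores cited above compare luminance, contrast and structure between the original and the scrambled frame. A minimal sketch of the SSIM formula, computed globally over a flat pixel sequence rather than averaged over local windows as the standard metric does (so it is only an approximation of reported SSIM values):

```python
from statistics import fmean

def global_ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Single-window SSIM between two equal-length pixel sequences:
    (2*mu_x*mu_y + C1)(2*cov + C2) /
    ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2)).
    The standard metric averages this over sliding local windows."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = fmean(x), fmean(y)
    vx = fmean((a - mx) ** 2 for a in x)
    vy = fmean((b - my) ** 2 for b in y)
    cov = fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

print(global_ssim([10, 20, 30, 40], [10, 20, 30, 40]))  # 1.0 for identical inputs
```

Identical frames score 1.0, while structurally unrelated frames score near (or below) zero; an effectively scrambled stream staying under 0.2 means the decoded video retains almost none of the original structure.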
ABSTRACT: End-users have become accustomed to the ease with which online systems allow them to exchange messages, pictures, and other files with colleagues, friends, and family. This convenience, however, sometimes comes at the expense of having their data viewed by a number of unauthorized parties, such as hackers, advertisement companies, other users, or governmental agencies. A number of systems have been proposed to protect data shared online; yet these solutions typically just shift trust to another third-party server, are platform specific (e.g., work for Facebook only), or fail to hide that confidential communication is taking place. In this paper, we present a novel system that enables users to exchange data over any web-based sharing platform, while both keeping the communicated data confidential and hiding from a casual observer that an exchange of confidential data is taking place. We provide a proof-of-concept implementation of our system in the form of a publicly available Firefox plugin, and demonstrate the viability of our approach through a performance evaluation.
Proceedings of the third ACM conference on Data and application security and privacy; 02/2013
ABSTRACT: With the large growth of Online Social Networks (OSNs), several privacy threats have been highlighted, as well as solutions to mitigate them. Most solutions focus on restricting the visibility of users' information. However, OSNs also represent a threat for contextual information, such as the OSN structure and how users communicate with each other. Recently proposed de-anonymization techniques proved to be effective in re-identifying users in anonymized social networks. In this paper, we present Friend in the Middle (FiM): a novel approach to make OSNs more resilient against de-anonymization techniques. Additionally, we evaluate and demonstrate through experimental results the feasibility and effectiveness of our proposal.
Pervasive Computing and Communications Workshops (PERCOM Workshops), 2013 IEEE International Conference on; 01/2013
ABSTRACT: This paper describes a cross-protocol attack on all versions of TLS; it can be seen as an extension of the Wagner and Schneier attack on SSL 3.0. The attack presents valid explicit elliptic curve Diffie-Hellman parameters signed by a server to a client that incorrectly interprets these parameters as valid plain Diffie-Hellman parameters. Our attack enables an adversary to successfully impersonate a server to a random client after obtaining 2^40 signed elliptic curve keys from the original server. While attacking a specific client is improbable due to the high number of signed keys required during the lifetime of one TLS handshake, it is not completely unrealistic for a setting where the server has high computational power and the attacker is content with recovering one out of many session keys. We remark that popular open-source server implementations are not susceptible to this attack, since they typically do not support the explicit curve option. Finally, we propose a fix that renders the protocol immune to this family of cross-protocol attacks.
Proceedings of the 2012 ACM conference on Computer and communications security; 10/2012
ABSTRACT: In the white-box attack context, i.e., the setting where an implementation of a cryptographic algorithm is executed on an untrusted platform, the adversary has full access to the implementation and its execution environment. In 2002, Chow et al. presented a white-box AES implementation which aims at preventing key extraction in the white-box attack context. However, in 2004, Billet et al. presented an efficient practical attack on Chow et al.'s white-box AES implementation. In response, in 2009, Xiao and Lai proposed a new white-box AES implementation which is claimed to be resistant against Billet et al.'s attack. This paper presents a practical cryptanalysis of the white-box AES implementation proposed by Xiao et al. The linear equivalence algorithm presented by Biryukov et al. is used as a building block. The cryptanalysis efficiently extracts the AES key from Xiao et al.'s white-box AES implementation with a work factor of about 2^32.
ABSTRACT: In 2004, we introduced the related-key boomerang/rectangle attacks, which allow us to enjoy the benefits of the boomerang attack and the related-key technique simultaneously. The new attacks have since been used to attack numerous block ciphers. While the claimed applications are significant, most of them have a major drawback: their validity cannot be verified experimentally due to their high complexity. Together with the lack of rigorous justification of the probabilistic assumptions underlying the technique, this led Murphy to claim that attacks using the related-key boomerang/rectangle technique are not legitimate. This paper contains two contributions. The first is a rigorous analysis of the related-key boomerang/rectangle attacks, including devising provably optimal distinguishers and computing their success rate, and discussing the underlying independence assumptions. The second contribution is an extensive experimental verification of the related-key boomerang attack against the GSM block cipher, KASUMI. Our experiments reveal that the success probability of the distinguisher, when averaged over different choices of the keys, is close to the theoretical prediction. However, the exact probability depends on the key, such that for some portion of the keys, the distinguisher holds with a higher probability than expected, while for the rest of the keys, the distinguisher fails completely.
IEEE Transactions on Information Theory 07/2012; 58(7):4948-4966. · 2.65 Impact Factor
ABSTRACT: Public key Kerberos (PKINIT) is a standardized authentication and key establishment protocol which is used by the Windows Active Directory subsystem. In this paper we show that card-based public key Kerberos is flawed. In particular, access to a user's card enables an adversary to impersonate that user even after the adversary's access to the card is revoked. The attack neither exploits physical properties of the card, nor extracts any of its secrets. We propose protocol fixes and discuss properties that authentication and/or key establishment protocols should provide in order to be better equipped against the threats that arise due to the usage of smart cards.
ABSTRACT: Due to their fast performance in software, an increasing number of cryptographic primitives are constructed using the operations addition modulo 2^n, bit rotation and XOR (ARX). However, the resistance of ARX-based ciphers against differential cryptanalysis is not well understood. In this paper, we propose a new tool for evaluating more accurately the probabilities of additive differentials over multiple rounds of a cryptographic primitive. First, we introduce a special set of additive differences, called UNAF (unsigned non-adjacent form) differences. Then, we show how to apply them to find good differential trails using an algorithm for the automatic search for differentials. Finally, we describe a key-recovery attack on the stream cipher Salsa20 reduced to five rounds, based on UNAF differences.
Proceedings of the 19th international conference on Fast Software Encryption; 03/2012
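Salsa20, the paper's target, is a pure ARX design: its quarter-round uses only addition mod 2^32, XOR and fixed-distance rotations. A minimal sketch of that quarter-round, following the Salsa20 specification:

```python
MASK32 = 0xFFFFFFFF

def rotl32(v, r):
    """Rotate a 32-bit word left by r bits."""
    return ((v << r) | (v >> (32 - r))) & MASK32

def salsa20_quarterround(a, b, c, d):
    """Salsa20 quarter-round: only addition mod 2^32, XOR and fixed
    rotations (the ARX operations whose additive differentials the
    UNAF tool tracks)."""
    b ^= rotl32((a + d) & MASK32, 7)
    c ^= rotl32((b + a) & MASK32, 9)
    d ^= rotl32((c + b) & MASK32, 13)
    a ^= rotl32((d + c) & MASK32, 18)
    return a, b, c, d

# Test vector from the Salsa20 specification:
print([hex(w) for w in salsa20_quarterround(0x00000001, 0, 0, 0)])
# → ['0x8008145', '0x80', '0x10200', '0x20500000']
```

The difficulty the paper addresses is visible even here: an additive input difference interacts nonlinearly with the XORs and rotations, so its propagation probability over several such rounds cannot be read off round by round.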