Bart Preneel

iMinds, Ledeberg, Flanders, Belgium

Publications (569)

  • ABSTRACT: Public key Kerberos (PKINIT) is a standard authentication and key establishment protocol. Unfortunately, it suffers from a security flaw when combined with smart cards. In particular, temporary access to a user’s card enables an adversary to impersonate that user for an indefinite period of time, even after the adversary’s access to the card is revoked. In this paper, we extend Shoup’s key exchange security model to the smart card setting and examine PKINIT in this model. Using this formalization, we show that PKINIT is indeed flawed, propose a fix, and provide a proof that this fix leads to a secure protocol.
    International Journal of Information Security 06/2014; 13(3). · 0.48 Impact Factor
  • ABSTRACT: We analyze the Grøstl-0 hash function, that is, the version of Grøstl submitted to the SHA-3 competition. This paper extends Peyrin’s internal differential strategy, which uses differential paths between the permutations P and Q of Grøstl-0 to construct distinguishers of the compression function. This results in collision attacks and semi-free-start collision attacks on the Grøstl-0 hash function and compression function with reduced rounds. Specifically, we show collision attacks on the Grøstl-0-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities 2^48 and 2^112 and on the Grøstl-0-512 hash function reduced to 6 out of 14 rounds with time complexity 2^183. Furthermore, we demonstrate semi-free-start collision attacks on the Grøstl-0-256 compression function reduced to 8 rounds and the Grøstl-0-512 compression function reduced to 9 rounds. Finally, we show improved distinguishers for the Grøstl-0-256 permutations with reduced rounds.
    Designs Codes and Cryptography 03/2014; · 0.78 Impact Factor
  • Bart Preneel
    ABSTRACT: Cryptographic hash functions play a central role in cryptography: they map arbitrarily large input strings to fixed-length output strings. The main applications are to create a short, unique identifier for a string, to transform a string with a one-way mapping, and to commit to a string or to confirm its knowledge without revealing it. Additional applications are the mapping of group or field elements to strings, key derivation and the extraction of entropy. The main security requirements are preimage and second preimage resistance, collision resistance and indifferentiability from a random oracle. During the last three decades, more than 200 hash function designs have been published; many of those have been cryptanalyzed, including widely used schemes such as MD5 and SHA-1. Moreover, there was a lack of theoretical understanding of their constructions; as a consequence, structural flaws were identified in widely used designs. These concerns also undermined to some extent the confidence in the SHA-2 hash functions, which have been designed for long-term security. As a consequence, the US National Institute of Standards and Technology has organized an open competition. The competition started in November 2007; after five years of intense design, analysis and debate the Keccak function was announced as the winner in October 2012. The new FIPS standard is expected to be published in 2014. This extended abstract will identify some lessons learned during this competition.
    Proceedings of the 6th International Conference on Security of Information and Networks; 11/2013
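The competition's outcome is now directly usable in practice: Keccak was standardized as SHA-3, and Python's hashlib has exposed it since version 3.6. A minimal sketch (the message is arbitrary):

```python
import hashlib

# SHA-3 (Keccak) maps an arbitrary-length input to a fixed-length digest.
msg = b"The quick brown fox jumps over the lazy dog"
digest = hashlib.sha3_256(msg).hexdigest()
print(len(digest))  # 64 hex characters = 256 bits

# Changing even one character of the input yields an unrelated digest,
# as expected of a function indifferentiable from a random oracle.
digest2 = hashlib.sha3_256(msg + b".").hexdigest()
print(digest != digest2)  # True
```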
  • ABSTRACT: In the modern web, the browser has emerged as the vehicle of choice that users trust, customize, and use to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications. In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large-scale analysis of the million most popular websites on the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.
    Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security; 11/2013
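As a toy illustration of the kind of fingerprinting FPDetective hunts for (this is not the paper's detection method, and all attribute names below are hypothetical): hashing a canonical encoding of collected browser attributes, such as the installed-font list, yields a stable per-user identifier.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Toy browser fingerprint: hash a canonical (sorted-key) JSON
    encoding of the collected attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two users differing only in their installed fonts already receive
# distinct, stable fingerprints -- no cookies required.
alice = {"user_agent": "Mozilla/5.0", "screen": "1920x1080",
         "fonts": ["Arial", "Calibri", "Georgia"]}
bob = dict(alice, fonts=["Arial", "DejaVu Sans"])
print(fingerprint(alice) != fingerprint(bob))  # True
```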
  • ABSTRACT: Various Location Privacy-Preserving Mechanisms (LPPMs) have been proposed in the literature to address the privacy risks derived from the exposure of user locations through the use of Location Based Services (LBSs). LPPMs obfuscate the locations disclosed to the LBS provider using a variety of strategies, which come at a cost either in terms of quality of service, or of resource consumption, or both. Shokri et al. propose an LPPM design framework that outputs optimal LPPM parameters considering a strategic adversary that knows the algorithm implemented by the LPPM, and has prior knowledge on the users' mobility profiles [23]. The framework allows users to set a constraint on the tolerable quality-of-service loss due to perturbations in the locations exposed by the LPPM. We observe that this constraint does not capture the fact that some LPPMs rely on techniques that augment the level of privacy by increasing resource consumption. In this work we extend Shokri et al.'s framework to account for constraints on bandwidth consumption. This allows us to evaluate and compare LPPMs that generate dummy queries or that decrease the precision of the disclosed locations. We study the trilateral trade-off between privacy, quality of service, and bandwidth, using real mobility data. Our results show that dummy-based LPPMs offer the best protection for a given combination of quality and bandwidth constraints, and that, as soon as communication overhead is permitted, both dummy-based and precision-based LPPMs outperform LPPMs that only perturb the exposed locations. We also observe that the maximum value of privacy a user can enjoy can be reached by either sufficiently relaxing the quality loss or the bandwidth constraints, or by choosing an adequate combination of both constraints. Our results contribute to a better understanding of the effectiveness of location privacy protection strategies, and to the design of LPPMs with constrained resource consumption.
    Proceedings of the 12th ACM workshop on Workshop on privacy in the electronic society; 11/2013
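A minimal sketch of the dummy-based strategy evaluated above, under a toy uniform dummy model (real dummy-based LPPMs draw from realistic mobility profiles); function and parameter names are illustrative:

```python
import random

def dummy_lppm(true_loc, k, region, rng=random.Random(42)):
    """Report the true location hidden among k-1 dummies drawn
    uniformly from the region. The parameter k directly sets the
    bandwidth overhead of each query: privacy is bought with bytes."""
    (xmin, xmax), (ymin, ymax) = region
    dummies = [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
               for _ in range(k - 1)]
    report = dummies + [true_loc]
    rng.shuffle(report)              # hide which entry is the real one
    return report

# One query now costs k location transmissions instead of one.
report = dummy_lppm((50.88, 4.70), k=5, region=((50.0, 51.5), (3.5, 5.5)))
print(len(report))  # 5
```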
  • ABSTRACT: In this paper we propose Sancus, a security architecture for networked embedded devices. Sancus supports extensibility in the form of remote (even third-party) software installation on devices while maintaining strong security guarantees. More specifically, Sancus can remotely attest to a software provider that a specific software module is running uncompromised, and can authenticate messages from software modules to software providers. Software modules can securely maintain local state, and can securely interact with other software modules that they choose to trust. The most distinguishing feature of Sancus is that it achieves these security guarantees without trusting any infrastructural software on the device. The Trusted Computing Base (TCB) on the device is only the hardware. Moreover, the hardware cost of Sancus is low. We describe the design of Sancus, and develop and evaluate a prototype FPGA implementation of a Sancus-enabled device. The prototype extends an MSP430 processor with hardware support for the memory access control and cryptographic functionality required to run Sancus. We also develop a C compiler that targets our device and that can compile standard C modules to Sancus protected software modules.
    Proceedings of the 22nd USENIX conference on Security; 08/2013
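Sancus implements attestation in hardware; as a loose software analogue (not the actual Sancus protocol, and all names below are illustrative), remote attestation can be modeled as a MAC over the module's hash under a key that the possibly compromised software stack cannot read:

```python
import hmac, hashlib, os

# Toy model: the device holds a key readable only by hardware, and
# MACs a hash of the deployed module plus a fresh provider nonce.
device_key = os.urandom(32)              # provisioned into hardware

def attest(module_binary: bytes, nonce: bytes) -> bytes:
    module_id = hashlib.sha256(module_binary).digest()
    return hmac.new(device_key, module_id + nonce, hashlib.sha256).digest()

module = b"mock module text section"
nonce = os.urandom(16)                   # freshness, chosen by the provider
tag = attest(module, nonce)

# The software provider, sharing the key with the hardware, checks
# that the untampered module produced the tag.
expected = hmac.new(device_key,
                    hashlib.sha256(module).digest() + nonce,
                    hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True
```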
  • Selected Areas in Cryptography - SAC 2013 - 20th International Conference, Burnaby, BC, Canada; 08/2013
  • ABSTRACT: In this paper we present a flexible hardware design for performing Simultaneous Exponentiations on embedded platforms. Simultaneous Exponentiations are often used in anonymous credentials protocols. The hardware is designed in VHDL and is fit for use in embedded systems. The kernel of the design is a pipelined Montgomery multiplier. The length of the operands and the number of stages can be chosen before synthesis. We show the effect of the operand length and number of stages on the maximum attainable frequency as well as on the FPGA resources being used. Besides this design-time scalability, the hardware supports different operand lengths at run-time. The design uses generic VHDL without any device-specific primitives, ensuring portability to other platforms. As a test-case we effectively integrated the hardware in a MicroBlaze embedded platform. With this platform we show that simultaneous exponentiations with our hardware are performed 70 times faster than with an all-software implementation.
    Proceedings of the 9th international conference on Reconfigurable Computing: architectures, tools, and applications; 03/2013
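The operation the hardware accelerates can be sketched in software with Shamir's trick: computing g^a · h^b mod m in one shared square-and-multiply pass instead of two separate exponentiations (the values below are illustrative):

```python
def simultaneous_exp(g, a, h, b, m):
    """Compute g^a * h^b mod m in a single square-and-multiply pass
    (Shamir's trick), scanning both exponents bit by bit."""
    gh = (g * h) % m                       # precompute the joint factor
    result = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        result = (result * result) % m     # one squaring per bit
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits == (1, 1):
            result = (result * gh) % m
        elif bits == (1, 0):
            result = (result * g) % m
        elif bits == (0, 1):
            result = (result * h) % m
    return result

m = (1 << 127) - 1                         # a Mersenne prime modulus
g, h, a, b = 3, 7, 2**100 + 12345, 2**90 + 67890
print(simultaneous_exp(g, a, h, b, m) == (pow(g, a, m) * pow(h, b, m)) % m)  # True
```

The shared pass saves roughly half the squarings compared to two independent exponentiations, which is why protocols built on products of powers benefit from dedicated hardware for it.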
  • ABSTRACT: In this paper, we analyze the communication of trusted platform modules and their interface to the hosting platforms. While trusted platform modules are considered to be tamper resistant, the communication channel between these modules and the rest of ...
    Computers & Mathematics with Applications. 03/2013; 65(5).
  • ABSTRACT: Content distribution applications such as digital broadcasting, video-on-demand services (VoD), video conferencing, surveillance and telesurgery are confronted with difficulties, besides the inevitable compression and quality challenges, with respect to intellectual property management, authenticity, privacy regulations, access control, etc. Meeting such security requirements in an end-to-end video distribution scenario poses significant challenges. If the entire content is encrypted at the content creation side, the space for signal processing operations is very limited. Decryption, followed by video processing and re-encryption, is also to be avoided as it is far from efficient, complicates key management and could expose the video to possible attacks. Moreover, once the content is delivered and decrypted, the protection is gone. Watermarking can complement encryption in these scenarios by embedding a message within the content itself containing, for example, ownership information, unique buyer codes or content descriptions. Ideally, securing the video distribution should therefore be possible throughout the distribution chain in a flexible way, allowing the encryption, watermarking and encoding/transcoding operations to commute. This paper introduces the reader to the relevant techniques that are needed to implement such an end-to-end commutative security system for video distribution, and presents a practical solution for encryption and watermarking compliant with H.264/AVC and the upcoming HEVC (High Efficiency Video Coding) video coding standards. To minimize the overhead and visual impact, a practical trade-off between the security of the encryption routine, robust watermarking and transcoding possibilities is investigated. We demonstrate that our combined commutative protection system effectively scrambles video streams, achieving SSIM (Structural Similarity Index) values below 0.2 across a range of practical bit rates, while allowing robust watermarking and transcoding.
    IEEE Signal Processing Magazine 03/2013; 30(2):97-107. · 3.37 Impact Factor
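The commutativity requirement can be illustrated with a toy model (not the paper's H.264/AVC scheme): if encryption and watermarking act on disjoint bit positions of each sample, the order of the two operations does not matter.

```python
import random

def encrypt(samples, seed=5):
    """Toy encryption: XOR a keystream into the upper 7 bits of each
    8-bit sample, leaving the least significant bit untouched."""
    ks = random.Random(seed)
    return [s ^ (ks.getrandbits(7) << 1) for s in samples]

def watermark(samples, bits):
    """Toy watermark: substitute the LSB of each sample."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

# Because the two operations touch disjoint bit positions, they
# commute: any party in the distribution chain may act first.
img = [17, 200, 96, 55]
wm = [1, 0, 1, 1]
print(watermark(encrypt(img), wm) == encrypt(watermark(img, wm)))  # True
```

Applying the XOR keystream twice with the same seed is the identity, so decryption is the same routine as encryption in this sketch.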
  • F. Beato, M. Conti, B. Preneel
    ABSTRACT: With the large growth of Online Social Networks (OSNs), several privacy threats have been highlighted, as well as solutions to mitigate them. Most solutions focus on restricting the visibility of users' information. However, OSNs also represent a threat to contextual information, such as the OSN structure and how users communicate among each other. Recently proposed de-anonymization techniques proved to be effective in re-identifying users in anonymized social networks. In this paper, we present Friend in the Middle (FiM): a novel approach to make OSNs more resilient against de-anonymization techniques. Additionally, we evaluate and demonstrate through experimental results the feasibility and effectiveness of our proposal.
    Pervasive Computing and Communications Workshops (PERCOM Workshops), 2013 IEEE International Conference on; 01/2013
  • ABSTRACT: This paper describes a cross-protocol attack on all versions of TLS; it can be seen as an extension of the Wagner and Schneier attack on SSL 3.0. The attack presents valid explicit elliptic curve Diffie-Hellman parameters signed by a server to a client that incorrectly interprets these parameters as valid plain Diffie-Hellman parameters. Our attack enables an adversary to successfully impersonate a server to a random client after obtaining 2^40 signed elliptic curve keys from the original server. While attacking a specific client is improbable due to the high number of signed keys required during the lifetime of one TLS handshake, it is not completely unrealistic for a setting where the server has high computational power and the attacker contents itself with recovering one out of many session keys. We remark that popular open-source server implementations are not susceptible to this attack, since they typically do not support the explicit curve option. Finally we propose a fix that renders the protocol immune to this family of cross-protocol attacks.
    Proceedings of the 2012 ACM conference on Computer and communications security; 10/2012
  • Yoni De Mulder, Peter Roelse, Bart Preneel
    ABSTRACT: In the white-box attack context, i.e., the setting where an implementation of a cryptographic algorithm is executed on an untrusted platform, the adversary has full access to the implementation and its execution environment. In 2002, Chow et al. presented a white-box AES implementation which aims at preventing key-extraction in the white-box attack context. However, in 2004, Billet et al. presented an efficient practical attack on Chow et al.'s white-box AES implementation. In response, in 2009, Xiao and Lai proposed a new white-box AES implementation which is claimed to be resistant against Billet et al.'s attack. This paper presents a practical cryptanalysis of the white-box AES implementation proposed by Xiao et al. The linear equivalence algorithm presented by Biryukov et al. is used as a building block. The cryptanalysis efficiently extracts the AES key from Xiao et al.'s white-box AES implementation with a work factor of about 2^32.
    SAC 2012; 08/2012
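The table-encoding idea behind white-box implementations such as Chow et al.'s can be sketched in miniature (a single encoded lookup table; real designs network many such tables together, which is what cryptanalyses like the one above unravel):

```python
import random

rng = random.Random(0)
S = list(range(256)); rng.shuffle(S)     # stand-in for the AES S-box
f = list(range(256)); rng.shuffle(f)     # secret input encoding
g = list(range(256)); rng.shuffle(g)     # secret output encoding
f_inv = [0] * 256
for i, v in enumerate(f):
    f_inv[v] = i

# Published lookup table T = g o S o f^-1: an attacker holding only T
# sees the composition, not S or the random encodings around it.
T = [g[S[f_inv[x]]] for x in range(256)]

x = 123
print(T[f[x]] == g[S[x]])  # True: encoded input in, encoded output out
```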
  • ABSTRACT: Due to their fast performance in software, an increasing number of cryptographic primitives are constructed using the operations addition modulo 2^n, bit rotation and XOR (ARX). However, the resistance of ARX-based ciphers against differential cryptanalysis is not well understood. In this paper, we propose a new tool for evaluating more accurately the probabilities of additive differentials over multiple rounds of a cryptographic primitive. First, we introduce a special set of additive differences, called UNAF (unsigned non-adjacent form) differences. Then, we show how to apply them to find good differential trails using an algorithm for the automatic search for differentials. Finally, we describe a key-recovery attack on the stream cipher Salsa20 reduced to five rounds, based on UNAF differences.
    Proceedings of the 19th international conference on Fast Software Encryption; 03/2012
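The kind of additive differential probability such tools estimate can be approximated empirically. A Monte Carlo sketch for the propagation of additive differences through XOR (parameters and the word size are illustrative; additive differences propagate deterministically through addition itself, so XOR is the probabilistic step in ARX):

```python
import random

def adp_xor(alpha, beta, gamma, n=8, trials=50000, seed=1):
    """Monte Carlo estimate of the probability, over random x and y,
    that additive input differences (alpha, beta) propagate through
    XOR to the additive output difference gamma, all modulo 2^n."""
    rng = random.Random(seed)
    mask = (1 << n) - 1
    hits = 0
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        d = ((((x + alpha) & mask) ^ ((y + beta) & mask)) - (x ^ y)) & mask
        if d == gamma:
            hits += 1
    return hits / trials

# The zero differential holds with probability one; nonzero
# differences survive XOR only probabilistically.
print(adp_xor(0, 0, 0))   # 1.0
print(adp_xor(1, 0, 1))   # strictly between 0 and 1
```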
  • ABSTRACT: In 2007, the US National Institute of Standards and Technology (NIST) announced a call for the design of a new cryptographic hash algorithm in response to vulnerabilities like differential attacks identified in existing hash functions, such as MD5 and SHA-1. NIST received many submissions, 51 of which were accepted to the first round. 14 candidates were left in the second round, out of which 5 candidates have recently been chosen for the final round. An important criterion in the selection process is the security of the SHA-3 hash function. We identify two important classes of security arguments for the new designs: (1) the possible reductions of the hash function security to the security of its underlying building blocks, and (2) arguments against differential attacks on building blocks. In this paper, we compare the state-of-the-art provable security reductions for the second round candidates, and review arguments and bounds against classes of differential attacks. We discuss all the SHA-3 candidates at a high functional level, analyze and summarize the security reduction results and bounds against differential attacks. Additionally, we generalize the well-known proof of collision resistance preservation, such that all SHA-3 candidates with a suffix-free padding are covered.
    International Journal of Information Security 02/2012; 11(2). · 0.48 Impact Factor
  • Bart Preneel, Jongsung Kim, Damien Sauveron
    Mathematical and Computer Modelling. 01/2012; 55:1-2.
  • Conference Paper: DES Collisions Revisited.
    Sebastiaan Indesteege, Bart Preneel
    ABSTRACT: We revisit the problem of finding key collisions for the DES block cipher, twenty-two years after Quisquater and Delescaille demonstrated the first DES collisions. We use the same distinguished points method, but in contrast to their work, our aim is to find a large number of collisions. A simple theoretical model to predict the number of collisions found with a given computational effort is developed, and experimental results are given to validate this model.
    Cryptography and Security: From Theory to Applications - Essays Dedicated to Jean-Jacques Quisquater on the Occasion of His 65th Birthday; 01/2012
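A toy version of the distinguished-points method, with a truncated SHA-256 standing in for the DES key-collision map (all sizes and names are illustrative, far smaller than DES's 56-bit key space): trails of iterated f are walked until a point with a fixed number of zero low bits appears, only those points are stored, and two trails ending at the same distinguished point are rewound to the merge, which is a collision.

```python
import hashlib, random

N_BITS, DP_BITS = 20, 5                  # toy sizes, not DES's 56 bits

def f(x: int) -> int:
    """Toy one-way map standing in for 'encrypt a fixed plaintext
    under key x': SHA-256 truncated to N_BITS (illustrative only)."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") & ((1 << N_BITS) - 1)

def trail(start):
    """Iterate f until a distinguished point (low DP_BITS all zero)."""
    x, steps = start, 0
    while x & ((1 << DP_BITS) - 1):
        x, steps = f(x), steps + 1
        if steps > 1 << (DP_BITS + 4):   # abandon rare cycling trails
            return None
    return x, steps

def find_collision(seed=0):
    rng = random.Random(seed)
    seen = {}                            # distinguished point -> (start, steps)
    while True:
        start = rng.getrandbits(N_BITS)
        t = trail(start)
        if t is None:
            continue
        dp, steps = t
        if dp in seen and seen[dp][0] != start:
            (s1, n1), (s2, n2) = seen[dp], (start, steps)
            while n1 > n2: s1, n1 = f(s1), n1 - 1   # equalize lengths
            while n2 > n1: s2, n2 = f(s2), n2 - 1
            if s1 != s2:                 # two distinct trails that merge
                while f(s1) != f(s2):    # walk in lockstep to the merge
                    s1, s2 = f(s1), f(s2)
                return s1, s2            # two keys, one image
        else:
            seen[dp] = (start, steps)

a, b = find_collision()
print(a != b and f(a) == f(b))  # True: a key collision
```

Storing only distinguished points keeps memory proportional to the number of trails rather than to the number of f-evaluations, which is what makes large-scale collision searches practical.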
  • Roel Peeters, Dave Singelée, Bart Preneel
    IEEE Pervasive Computing 01/2012; 11(3):76-83. · 2.06 Impact Factor
  • ABSTRACT: Perceptual hashing is a promising tool for multimedia content authentication. Digital watermarking is a convenient way of hiding data. By combining the two, we get a more efficient and versatile solution. In a typical scenario, multimedia data is sent from a server to a client. The corresponding hash value is embedded in the data. The data might undergo incidental distortion and malicious modification. In order to verify the authenticity of the received content, the client can compute a hash value from the received data, and compare it with the hash value extracted from the data. The advantage is that no extra communication is required: the original hash value is always available and synchronized. On the other hand, image quality can be degraded due to watermark embedding. There is an interesting interaction between hashing and watermarking. We investigate this issue by proposing a content authentication system. The hash algorithm and the watermarking algorithm are designed to have minimal interference. This is achieved by doing hashing and watermarking in different wavelet subbands. Through extensive experiments we show that the parameters of the watermarking algorithm have significant influence on the authentication performance. This work gives useful insights into the research and practice in this field.
    13th Pacific-Rim Conference on Multimedia - Advances in Multimedia Information Processing (PCM 2012). 01/2012;
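The robustness property perceptual hashing relies on can be shown with the simplest member of the family, an average hash (the paper's scheme works in wavelet subbands; this sketch is not it): incidental distortion flips only a few hash bits, while unrelated content lands about half the bits away.

```python
import random

def average_hash(pixels):
    """Toy perceptual hash (aHash): one bit per pixel, set when the
    pixel value exceeds the image mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    return bin(a ^ b).count("1")

rng = random.Random(7)
img = [rng.randrange(256) for _ in range(64)]          # 8x8 grayscale
noisy = [max(0, min(255, p + rng.randint(-3, 3))) for p in img]
other = [rng.randrange(256) for _ in range(64)]        # unrelated image

# Small noise flips only bits near the mean; unrelated content lands
# around half of the 64 bits away, so a Hamming threshold separates
# incidental distortion from different content.
print(hamming(average_hash(img), average_hash(noisy)))
print(hamming(average_hash(img), average_hash(other)))
```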

Publication Stats

6k Citations
77.14 Total Impact Points


  • 2014
    • iMinds
      Ledeberg, Flanders, Belgium
  • 2012
    • University of Limoges
      Limoges, Limousin, France
    • University of Luxembourg
      Luxembourg City, Luxembourg
  • 2002–2012
    • Technical University of Denmark
      • Department of Mathematics
      Copenhagen, Capital Region, Denmark
  • 1970–2012
    • KU Leuven
      • Department of Electrical Engineering (ESAT)
      Leuven, VLG, Belgium
  • 2010
    • Tsinghua University
      Beijing, China
  • 2008
    • Katholieke Hogeschool Limburg
      Limburg, Flanders, Belgium
  • 2005
    • Universitair Psychiatrisch Centrum KU Leuven
      Kortenberg, Flanders, Belgium
    • Bulgarian Academy of Sciences
      • Institute of Mathematics and Informatics
      Sofia, Oblast Sofiya-Grad, Bulgaria
  • 2000
    • University of the Aegean
      • Department of Information and Communication Systems Engineering
      Mytilíni, Voreio Aigaio, Greece
  • 1998–1999
    • University of Bergen
      Bergen, Hordaland, Norway
  • 1997
    • University of London
      London, England, United Kingdom