Updates on the Security of FPGAs
against Power Analysis Attacks
F.-X. Standaert*, F. Mace, E. Peeters, J.-J. Quisquater
UCL Crypto Group, Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium
Abstract. This paper reports on the security of cryptographic algo-
rithms implemented on FPGAs against power analysis attacks. We first
present some improved experiments against these reconfigurable devices,
due to an improved measurement process. Although it is usually believed
that FPGAs are noisy targets for such attacks, it is shown that simple
power consumption models can nearly perfectly correlate with actual
measurements. Then, we evaluate how these correlation values depend
on the resources used in the FPGAs. Finally, we investigate the possibil-
ity to counteract these attacks by using random pre-charges in the de-
vices and determine how this technique allows a designer to increase the
security of an implementation. These results confirm that side-channel
attacks present a serious threat for most microelectronic devices, includ-
ing FPGAs. To conclude, we discuss the security vs. efficiency tradeoffs.

1 Introduction
Hardware designs are usually evaluated within an area-time implementation
space. However, in the context of cryptographic implementations, the efficiency
is not the only metric by which one can measure an implementation’s quality.
In particular, the physical security of microelectronic circuits has recently attracted a lot of attention. While originally applied to small devices like smart cards, certain attacks have recently been shown to be quite efficient against FPGA implementations as well (e.g. [10,14]). As an illustration, in this paper, we consider the resistance of FPGA implementations against power analysis attacks and update certain assumptions on their actual security.
In these attacks, an adversary uses a hypothetical model of a target device
in order to predict its power consumption. The predictions are then compared
to the real, measured power consumption in order to recover secret information.
Therefore, the better a power consumption model can correlate with actual mea-
surements, the more efficient the resulting attack is. In this context, previously
published results against FPGA devices suggested that these are challenging
components to target with power analysis. Assumed reasons for this notably were (1) the difficulty of obtaining good power consumption measurements for FPGAs, and (2) the possibility of performing parallel computing within these devices.
* François-Xavier Standaert is a postdoctoral researcher funded by the FNRS (Funds for National Scientific Research, Belgium).
In this paper, we first suggest that, as far as the quality of the measurements
is concerned, FPGAs do not significantly differ from small devices like smart
cards. In particular, even very simple power consumption models based on the
prediction of the number of bit transitions within a device can nearly perfectly
correlate with actual measurements, if some simple signal processing is applied.
In practice, we perform and evaluate some improved experimental correlation
attacks against FPGA implementations of cryptographic algorithms. We also
discuss how these attacks depend on the resources used in the devices.
In a second part of the paper, we investigate the possibility of counteracting these attacks by using random pre-charges in the FPGAs and evaluate how this technique allows one to increase the security of an implementation. In particular, as
already observed in the context of smart cards, such a proposal makes it impossible to predict bit transitions. This is because one out of every two consecutive values in a device is then random and unknown. As a consequence, targeting such designs requires the use of more complex power consumption models (e.g. based
on distinguishing 0 → 1 from 1 → 0 bit transitions), for which the correlations
obtained are lower. We evaluate these correlation values in the paper.
The rest of the paper is structured as follows. Section 2 describes the principles
of power analysis attacks. Section 3 evaluates the correlation obtained between
a simple power consumption model based on the switching activity within an
FPGA and actual measurements. Section 4 illustrates how these correlations
depend on the resources used by a target design. Section 5 performs the same experiments when random pre-charges are used within the FPGA. Section 6 discusses the resulting security vs. efficiency tradeoffs and our conclusions are in Section 7.
2 Correlation Power Analysis Attacks

Power analysis attacks [6] generally require a hypothetical model of the device under attack to predict its power consumption. For example, FPGAs are usually made of CMOS gates, for which it is reasonable to assume that the main component of the power consumption is due to the switching activity [12]. For a single CMOS gate, we can express it as follows:

P = C_L · V_DD² · P_{0→1} · f (1)

where C_L is the gate load capacitance, V_DD the supply voltage, P_{0→1} the probability of a 0 → 1 output transition and f the clock frequency. Equation (1)
specifies that the power consumption of CMOS circuits is data-dependent. An attacker may consequently estimate a device's power consumption at time t by the number of bit transitions inside the device at that time. Based on this simple
observation, power analysis attacks have been applied to numerous algorithms
and devices, including smart cards, ASICs and FPGAs. In practice, the use of
secret key information in cryptographic designs only allows us to predict a part
of the bit transitions, but it is sufficient to correlate with actual measurements
of the power consumption.
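As a minimal illustration of Equation (1) and of the transition-count estimate it motivates, the following sketch uses hypothetical gate parameters (the capacitance, voltage and frequency values are illustrative, not measured on any actual device):

```python
def dynamic_power(c_load, v_dd, p01, f_clk):
    """Average dynamic power of a CMOS gate, Equation (1):
    P = C_L * V_DD^2 * P_{0->1} * f."""
    return c_load * v_dd ** 2 * p01 * f_clk

def transition_count(prev_state, next_state):
    """Number of bit flips between two consecutive register states:
    the simple data-dependent power estimate used in the attacks."""
    return bin(prev_state ^ next_state).count("1")

# Hypothetical device parameters (illustrative only).
p = dynamic_power(c_load=20e-15, v_dd=2.5, p01=0.25, f_clk=50e6)
print(p)                                  # about 1.56 microwatts here
print(transition_count(0b1010, 0b0110))   # 2 bit flips
```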
We illustrate the attack principle with the simple encryption network of Figure 1, which contains the same basic elements as most present block ciphers, e.g. the DES [7] and the AES Rijndael [8]. That is, the plaintext is XORed
with a secret key, then goes through a layer of relatively small substitution boxes
and is finally sent to a larger permutation (e.g. a linear diffusion layer for the
AES Rijndael). The same operations are iterated a number of times. For the
purposes of this paper, it is not necessary to know more details on these algo-
rithms. The attack proceeds as follows.
Fig.1. A simple encryption network.
Let the adversary target the 4 key bits entering the left S-box of Figure 1, de-
noted as K0[0..3]. Then, for N different plaintexts, he first predicts the number
of transitions at the S-box output, for every possible value of K0[0..3]. The result
of this prediction is an N × 2⁴ selected prediction matrix P, containing numbers
between 0 and 4. For simulation purposes, it is also interesting to produce the
global prediction matrix G that contains the number of bit transitions inside
the whole design. This can of course not be computed by an actual adversary,
but can be done if the secret key is known (i.e. when evaluating the attacks).
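To make the construction of the selected prediction matrix concrete, the sketch below builds P for one 4-bit S-box under the bit-transition model. The S-box values and the assumed initial register state are illustrative, not the exact design of Figure 1:

```python
# Hypothetical 4-bit S-box; any bijective 4-bit table works for this sketch.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def hamming(x):
    """Number of set bits."""
    return bin(x).count("1")

def prediction_matrix(plaintexts):
    """Build the N x 2^4 selected prediction matrix P: for each plaintext
    and each key guess, the predicted number of bit transitions (0..4) at
    the S-box output register, assuming it initially holds 0."""
    prev = [0] * 16          # assumed previous register value per key guess
    P = []
    for pt in plaintexts:
        row = []
        for k in range(16):
            out = SBOX[pt ^ k]
            row.append(hamming(prev[k] ^ out))
            prev[k] = out    # the register now holds the new S-box output
        P.append(row)
    return P

P = prediction_matrix([0x3, 0x7, 0xC])
print(len(P), len(P[0]))     # N rows, one column per key guess
```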
In the second part of the attack, the adversary lets the circuit encrypt the same N plaintexts with a fixed key (the same as during the predictions if G was computed, a secret one in case of real attacks) and measures the power consumption of the device while the chip is performing the targeted operation. This results in an N × 1 measurement vector M.
Finally, the attacker computes the correlation between the measurement vec-
tor and all the columns of the selected prediction matrix (corresponding to all
the possible key guesses). If the attack is successful, it is expected that only
one value, corresponding to the correct key bits, leads to a high correlation. An
efficient way to compute the correlation is to use the Pearson coefficient that
can be expressed as follows:
C(M,P) = (µ(M·P) − µ(M)·µ(P)) / √(σ²(M)·σ²(P)) (2)
In this expression, µ(M) denotes the mean of the set of measurements M and σ²(M) its variance. For a more detailed explanation of the power analysis attack principles, we refer to previous publications, e.g. [2,14]. We note that different statistical tools could be considered to mount power analysis attacks and the use of the correlation coefficient is not optimal in this respect. For example, maximum likelihood techniques may yield better results. However, with the
simple power consumption models considered here, correlation attacks provide
good results and are extremely easy to manipulate (e.g. they do not require any
estimation of the noise in the target devices).
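The whole procedure, predictions, measurements and Pearson ranking, can be simulated end to end. Everything below is synthetic (a hypothetical 4-bit S-box, Gaussian noise standing in for a real power trace); it only illustrates the mechanics of Equation (2) and of the attack steps above:

```python
import random
import statistics

# Hypothetical 4-bit S-box (any bijective table works for the sketch).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def hd(a, b):
    """Hamming distance: number of bit transitions between two values."""
    return bin(a ^ b).count("1")

def pearson(x, y):
    """Pearson correlation coefficient of Equation (2)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
SECRET = 0xB
plaintexts = [random.randrange(16) for _ in range(200)]

# Simulated measurement vector M: transitions at the S-box output
# register plus Gaussian noise (a stand-in for a real trace).
M, prev = [], 0
for pt in plaintexts:
    out = SBOX[pt ^ SECRET]
    M.append(hd(prev, out) + random.gauss(0, 0.3))
    prev = out

# One column of predicted transition counts per key guess; the guess
# whose column correlates best with M should be the secret key.
scores = []
for k in range(16):
    col, prev = [], 0
    for pt in plaintexts:
        out = SBOX[pt ^ k]
        col.append(hd(prev, out))
        prev = out
    scores.append(pearson(M, col))

best = max(range(16), key=lambda k: scores[k])
print(hex(best))
```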
Finally, let us recall two simple formulas proven in previous work. Firstly, the correlation coefficient we are interested in during an attack is the one between the selected predictions and the measurements. It can be rewritten as:

C(P,M) = C(P,G) × C(G,M) (3)
In this equation, the coefficient C(G,M) only relates to the quality of the mea-
surement and for example, is independent of the FPGA design considered. On
the contrary, the coefficient C(P,G) is specifically related to the implementation
under attack and depends on the number of bit transitions that can actually be
predicted. In our previous example, we did only predict the transitions of one tar-
get S-box, out of the 12 S-boxes in Figure 1. Secondly, the number of generated
plaintexts N required to have a successful correlation attack is worth:
N = c × 1 / C(P,M)² (4)
where c is a small constant value. In the following sections, we would like to
answer the question: “How precisely can we correlate our simple power con-
sumption models with actual measurements of the power consumption?”
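Equations (3) and (4) together quantify what is at stake: a small worked example (with illustrative values of c and C(P,G)) shows that improving the measurement correlation C(G,M) from roughly 0.45 to 0.9 divides the required number of plaintexts by 4:

```python
def traces_needed(c, c_pg, c_gm):
    """Equation (4) with Equation (3) substituted in:
    N = c / (C(P,G) * C(G,M))^2. All values illustrative."""
    return c / (c_pg * c_gm) ** 2

n_before = traces_needed(10, 0.2, 0.45)  # older measurement quality
n_after = traces_needed(10, 0.2, 0.90)   # improved measurement quality
print(round(n_before / n_after, 2))      # ratio of required plaintexts
```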
3 Correlation measurements and consequences
Target designs: For all our experiments, we used the four target designs repre-
sented in Figure 2. They are again made of XOR operations, substitution boxes
and diffusion layers. The first three designs loop on one iteration while the fourth one loops on two iterations. These designs also differ in their number of pipeline stages. For simplicity purposes, we forced all the operations to be performed by one single layer of look-up tables (LUTs) in the FPGA (e.g. we used the 4-bit substitution boxes of the Khazad block cipher [1], which perfectly fit these constraints). Also, the potential leaking points, corresponding to the
points in the design for which the transitions consume power, are denoted as
a,b,c and d (and further letters for the fourth design). All the architectures are
128-bit wide. When the values a,b,c,d,... are stored in registers, and according to the terminology introduced in [14], the dark gray registers are full (meaning that their bit transitions are strongly correlated to the key values) while the light gray ones are empty (meaning the opposite). Also, the small black boxes suggest that a part of the register can actually be predicted by an adversary, because it only depends on a limited number of key bits. Conversely, registers without black boxes cannot be predicted. This is typically the case of the registers after the diffusion layer.
Fig.2. Target designs.
Our measurements were performed on a Xilinx Spartan-2 device. Although building a good measurement setup is an important step in side-channel attacks, the technical description of such a setup is outside the scope of this work. We simply note that our approach was to use a dedicated board in order to isolate the FPGA from any other component representing a potential noise source in the observations. It is important to keep in mind that the following results highly depend on our measurement capabilities and the context considered. For example, targeting an FPGA on a prototyping board including various processors, memories, etc. would be more challenging. On the other hand, the measurement process itself could still be improved and this work is in progress. Again, the objective of this paper is to suggest that basic methods can already yield good results.
Our initial strategy to evaluate the correlation coefficient C(G,M) was the fol-
lowing. We considered the third design of Figure 2, with four pipeline stages.
Then, we assumed that the leaking points a,b,c,d (all of them being stored in registers) contributed similar parts of the power consumption and predicted
the bit transitions in these registers. On a power trace like the one in the left
part of Figure 3, we finally observed the peaks occurring at the rising edges of
the clock signal and evaluated how the values of these peaks were correlated
with the total number of bit transitions predicted.
Fig.3. A single power trace: time and frequency domains.
Two simple signal processing steps were applied. First, the spectrum of the
power traces was observed (partially represented in Figure 3) and we identified
a number of parasitic signals that were filtered with their harmonics. Second, we
performed a small averaging on the filtered traces.
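These two pre-processing steps can be sketched as follows. This is a pure-Python stand-in under assumed trace parameters (synthetic clock-edge peaks, a single parasitic tone of known period, Gaussian noise); the real setup relied on spectrum inspection and filtering of the parasitic harmonics:

```python
import math
import random

def remove_tone(trace, period):
    """Subtract the best-fitting sinusoid of a known parasitic period
    (a simple stand-in for notch-filtering the tone)."""
    n = len(trace)
    c = sum(v * math.cos(2 * math.pi * i / period)
            for i, v in enumerate(trace)) * 2 / n
    s = sum(v * math.sin(2 * math.pi * i / period)
            for i, v in enumerate(trace)) * 2 / n
    return [v - c * math.cos(2 * math.pi * i / period)
              - s * math.sin(2 * math.pi * i / period)
            for i, v in enumerate(trace)]

def average(traces):
    """Point-wise average of repeated traces: reduces the Gaussian
    noise standard deviation as 1/sqrt(number of traces)."""
    return [sum(col) / len(col) for col in zip(*traces)]

random.seed(0)
# Synthetic "power trace": peaks at the rising clock edges, every 20 samples.
signal = [1.0 if i % 20 == 0 else 0.0 for i in range(100)]
# 16 noisy acquisitions corrupted by a parasitic tone of period 7 samples.
traces = [[v + 0.3 * math.sin(2 * math.pi * i / 7) + random.gauss(0, 0.2)
           for i, v in enumerate(signal)] for _ in range(16)]

clean = remove_tone(average(traces), period=7)
```

The cleaned trace is much closer to the underlying peak pattern than any single raw acquisition, which is exactly the effect exploited in the correlation figures below.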
The correlations between our predictions and a single FPGA power trace, fil-
tered or not, are represented in the left part of Figure 4, for different numbers
of generated plaintexts. The correlations after averaging are in the right part of the figure. Roughly, we observe that a single trace allows us to reach a correlation of up to 75%, while a small averaging increases this value beyond 90%. As a comparison, previously published results suggested correlation values of around 45%, roughly corresponding to our single non-filtered trace experiment.
Therefore, referring to Equation 4, the number of required plaintexts to perform
a successful attack would be divided by 4.
Fig.4. Correlation between our predictions and actual power consumption measure-
ments for the design (3), without random pre-charges: single trace and averaged traces.
4 Resources dependencies
In the previous experiment, we correlated the total number of bit transitions in
the device registers with actual power consumption measurements. As already
mentioned, this involves the important assumption that all the leaking points
contribute identically to the power consumption. Obviously, this may not be formally correct and the aim of this section is to evaluate how relevant this assumption is for practical applications. In particular, we would like to answer two questions: (1) does the use of registers in a design influence its power consumption?
(2) how do the various FPGA resources contribute to the power consumption?
In order to answer the first question, we implemented the designs (1),(2) and
(3) of Figure 2. Since they only differ in their number of pipeline stages, they
actually require (roughly) the same number of LUTs and slices. The only difference is in their number of flip-flops. Then, we measured the power consumption of these three architectures when fed with the same inputs. We could not
distinguish any significant difference between the power consumption patterns.
The assumed reason for this observation is that the overall power consumed by an FPGA mainly depends on the amount of resources (e.g. the slices) used by a design, which is roughly the same in the three experiments.
To answer the second question, we used again the designs (1), (2) and (3) and
evaluated separately the correlations between their power consumption measure-
ments and the bit transitions for the leaking points a,b,c,d. This is illustrated for the design (1) in the left part of Figure 5, where we can clearly observe that the leaking points do not correlate the same way and therefore do not contribute equally to the global power consumption. Then, to obtain better results, we used a weighted sum of the predictions of the different leaking points. This is represented in the right part of the figure, where it is compared with a non-weighted sum (as we used in the previous section).
Fig.5. Resource dependent correlations without random pre-charges.
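The effect of weighting the leaking points can be reproduced on synthetic data. In the sketch below, the weights play the role of the (unknown) relative contributions of a,b,c,d; their values are chosen to mimic the correlations of Figure 5 and are assumptions, not measurements:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of Equation (2)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(2)
N = 500
# Synthetic transition counts (0..4) at four leaking points a,b,c,d.
T = [[random.randrange(5) for _ in range(N)] for _ in range(4)]
# Assumed relative contributions, mimicking the correlations of Figure 5.
w = [0.2, 0.25, 0.45, 0.5]
# Simulated measurements: weighted contributions plus Gaussian noise.
M = [sum(w[j] * T[j][i] for j in range(4)) + random.gauss(0, 0.5)
     for i in range(N)]

flat = [sum(T[j][i] for j in range(4)) for i in range(N)]      # naive sum
weighted = [sum(w[j] * T[j][i] for j in range(4)) for i in range(N)]

print(pearson(M, flat), pearson(M, weighted))  # weighted correlates better
```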
These results confirm the experiments performed in [13], where it is shown that
the dynamic power consumption in FPGAs actually depends on the effective
capacitances of the resources used. For example, it is shown that the effective
capacitances of signals within a slice are much lower than the ones of long connec-
tion wires. This could explain that the correlations with certain bit transitions
appear to be much higher than others. It is also interesting to observe that filtering the trace can again yield an even better correlation. This is easily understood, since filtering reduces the noise due to parasitic frequencies within the signal, while the use of weighted predictions increases the quality of our leakage model. That is, the two techniques address different noise sources.
From a practical point of view, it is important to keep in mind that the use of weighted predictions involves a different attack context, usually denoted as template attacks [4]. Indeed, in the most general setting, an actual adversary will
not be able to determine precisely which transitions in a design contribute the
most to the power consumption. Therefore, the naive strategy (without attributing weights to the bit transitions) is the only one applicable. Conversely, if
the adversary can use a programmable device to build a better power consump-
tion model (i.e. in the template attack context), improved strategies as the one
presented in this section are applicable. Note that, in our example, we only used
four different weights (i.e. for the a,b,c,d leaking points), although it would be possible to further improve the process by considering more fine-grained weights.
To further analyze these observations, we use the following lemma, also applied in previous work: the correlation coefficient between the sum of n arbitrary independent identically distributed random variables and the sum of the first m < n of these variables equals √(m/n). If we assume that the various bit transitions in a design contribute additively to the global power consumption, it means that the four correlations in the left part of Figure 5 respectively correspond to 0.2² ≈ 4%, 0.25² ≈ 6%, 0.45² ≈ 20% and 0.5² ≈ 25% of the total power consumption, that is a sum of 55%. Such a prediction should allow a correlation of √0.55 ≈ 74%, which is close to the one observed in the right part of the figure, for the (non-filtered) weighted sum experiment. This re-confirms that a significant part of the power consumption is not predicted, which may be caused by various noise sources and/or parasitic signals. Those could be removed by filtering (as the right part of Figure 5 suggests) or averaging, as the previous section underlined.
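The lemma is easy to check empirically, as in this short simulation (uniform variables; the sample sizes are chosen arbitrarily):

```python
import math
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of Equation (2)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(3)
n, m, samples = 10, 4, 20000
partial, total = [], []
for _ in range(samples):
    xs = [random.random() for _ in range(n)]  # n i.i.d. variables
    partial.append(sum(xs[:m]))               # sum of the first m of them
    total.append(sum(xs))                     # sum of all n
r = pearson(partial, total)
print(r, math.sqrt(m / n))  # both close to sqrt(4/10) = 0.632
```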
To conclude this section, we note that although the knowledge of a design's details may allow one to improve correlation analysis attacks, a basic side-channel adversary will probably be limited to simple strategies, e.g. assuming all the bit transitions contribute equally to the power consumption. It must be observed that, if the leaking points targeted by an adversary are connected to low effective capacitances within an FPGA, the actual attack may become more challenging.
Another remark is that, due to their high diffusion properties, encryption algo-
rithms usually require the use of long connection wires, which probably increases
their power consumption compared to other designs. Finally, we reproduced the attack against an FPGA implementation of the AES Rijndael performed in [14], with the improved measurement process corresponding to the right part of Figure 4. It is represented in the left part of Figure 6, where we observe that the attack is successful after 300 generated plaintexts. Compared with the results in [14], it confirms our expectations that this number is roughly divided by 4.
Fig.6. An attack against the AES Rijndael, 0 → 1 and 1 → 0 bit transition differences.
5 Random pre-charges and consequences
A common countermeasure used in the smart card industry to counteract side-
channel analysis is to pre-charge the buses with random values. Such a solution
can be straightforwardly transposed in the context of FPGA implementations
at the cost of a reduction of the throughput. Indeed, if one out of every two inputs of the encryption design is a random number generated within the FPGA¹, an adversary will not be able to predict the transitions within the implementation anymore (of course, the resulting ciphertext should not be output from the device). As suggested in [11], the only solution is then to distinguish 0 → 1 from 1 → 0 bit transitions through the leakages. In this case, one can predict
the number of 0’s and 1’s in the device at some time, rather than predicting the
number of bit transitions at this time. That is, we use a model based on the
Hamming weight of the data manipulated rather than on its Hamming distance.
To confirm that such a model is applicable, we performed a preliminary exper-
iment, pictured on the right part of Figure 6. We observed the power traces of
large bit-vectors switching between "all zeroes" and "all ones" patterns. Typically,
this experiment suggested power consumption differences of about 10%.
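A small simulation illustrates why the Hamming-weight model remains exploitable under random pre-charges while transition counts become unpredictable. The roughly 10% asymmetry between 0 → 1 and 1 → 0 transitions is taken from the experiment above; the register width and noise level are assumptions:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of Equation (2)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def hw(x):
    return bin(x).count("1")

# Assumed ~10% asymmetry between the two transition types.
W01, W10 = 1.0, 0.9

def leak_bits(pre, tgt):
    """Leakage of a 4-bit register going from a random pre-charge `pre`
    to the target value `tgt`, with distinct 0->1 / 1->0 contributions."""
    e = 0.0
    for i in range(4):
        p, t = (pre >> i) & 1, (tgt >> i) & 1
        if p == 0 and t == 1:
            e += W01
        elif p == 1 and t == 0:
            e += W10
    return e

random.seed(4)
N = 20000
leak, hw_pred, hd_pred = [], [], []
for _ in range(N):
    target = random.randrange(16)   # value the adversary can predict
    pre = random.randrange(16)      # random pre-charge, unknown to him
    leak.append(leak_bits(pre, target) + random.gauss(0, 0.1))
    hw_pred.append(hw(target))                         # Hamming-weight model
    hd_pred.append(hw(random.randrange(16) ^ target))  # guessed transitions

r_hw = pearson(leak, hw_pred)
r_hd = pearson(leak, hd_pred)
print(r_hw, r_hd)  # small but exploitable vs. essentially zero
```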
The correlations between Hamming weight-based predictions and a single FPGA
power trace (using the design (3) of Figure 2, as in Section 3), filtered or not,
are represented in the left part of Figure 7, for different numbers of generated
plaintexts. The correlations after averaging are in the right part of the figure.
Roughly, we observe that a single trace allows us to reach a correlation of up to 15%, while a small averaging increases this value beyond 20%. One can conclude that, although the correlations obtained are significantly lower (due to a much higher model-matching noise), they are still sufficient to perform the attacks. This is especially true when considering that the measurement process is still likely to
be improved and that other side-channel information could be used to increase these correlations, e.g. the electromagnetic radiation [3]. On the other hand, if combined with other countermeasures, such random pre-charges may increase the difficulty of performing the attacks at a relatively low implementation cost.
¹ E.g., a true random number generator [5] could be used to produce the initial seeds of a pseudo-random number generator which will consequently generate the pre-charges.
Let us finally remark that the differences between 0 → 1 and 1 → 0 bit transi-
tions could also be used to slightly improve our power consumption model of the
previous sections, again using different weights for these different transitions.
Fig.7. Correlation between our predictions and actual power consumption measure-
ments, with random pre-charges: single trace and averaged traces.
6 Security vs. efficiency tradeoffs
The previous considerations can be summarized in order to easily determine the number of plaintexts N required to have a successful attack:

N = c × 1 / (C(P,G) × C(G,M))² (5)
In this expression:
1. C(G,M) is the expected correlation obtained between the power consump-
tion model and the actual measurements (investigated in this paper). Our
results suggest that reasonable values for this parameter are:
– 0.50 < C(G,M) < 0.95 if no random pre-charges are used (we observed
values in this range). The better the correlation is, the more efficient the
resulting attack is.
– 0.10 < C(G,M) < 0.50 if random pre-charges are used. Our results
suggest no more than 0.25 but could possibly be improved. Therefore, a
small security margin is reasonable.
2. C(P,G) relates to the number of bits for which the power consumption can be predicted in the attack. If n_pred is this number of predictable bits and n_tot is the total number of bits in the design, we roughly² have C(P,G) = √(n_pred/n_tot).
² More precisely, if n_pf is the number of predictable and full bits, n_pe the number of predictable and empty bits and n_u the number of unpredictable bits, with n_tot = n_pf + n_pe + n_u, the correlation we are interested in is √(n_pf/n_tot).
3. c is a small constant value depending on the number of key bits targeted during the attack. For example, it could be estimated once for 8-bit substitution boxes (like the ones of the AES Rijndael) as follows. Knowing that:
– the attack of Figure 6 uses C(G,M) ≈ 0.9,
– the ratio of predictable registers is known (from [14]),
– the attack of Figure 6 is successful after 300 plaintexts,
we find that a reasonable value (including a small security margin) is c ≈ 10.
From Equation 5, it is now extremely simple to evaluate the security of our different implementations in Figure 2. For example, let us consider a correlation attack against implementation (3), without random pre-charges. First, we assume a reasonable value C(G,M) ≈ 0.8. Then, we know that we have a total of 4 × 128 = 512 bits in the design, among which 3 × 8 = 24 are predictable. This yields C(P,G) = √(24/512). Finally, we find: N ≈ 333. Now, let us consider the fourth design, for which the total number of bits is 8 × 128 = 1024 and we still have n_pred = 24. It yields N ≈ 666. If we additionally consider random pre-charges in the same fourth design, we could have C(G,M) ≈ 0.25 (at the cost of a throughput reduction) and therefore N ≈ 6882. That is, any possible similar hardware architecture could be analyzed. As already frequently discussed, we observe that the attack efficiencies depend on the implementation size and therefore involve a security vs. efficiency tradeoff. Note that in addition to the use of random pre-charges, various combinations of repetition codes (e.g. sending one true plaintext for x random ones, in variable orders, to the encryption device) could be considered. Also, as suggested in [9], such countermeasures could be particularly interesting in the context of feedback implementations, where pipelining cannot be used for increasing the performances, but possibly for fault detection or improved side-channel resistance.
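These evaluations can be reproduced directly from Equation (5), with C(P,G) = √(n_pred/n_tot) and the constant c ≈ 10 estimated above (small discrepancies with the quoted numbers come from rounding):

```python
import math

def traces_needed(c, n_pred, n_tot, c_gm):
    """Equation (5): N = c / (C(P,G) * C(G,M))^2, with
    C(P,G) = sqrt(n_pred / n_tot)."""
    c_pg = math.sqrt(n_pred / n_tot)
    return c / (c_pg * c_gm) ** 2

print(round(traces_needed(10, 24, 512, 0.8)))    # design (3): ~333
print(round(traces_needed(10, 24, 1024, 0.8)))   # design (4): ~667
print(round(traces_needed(10, 24, 1024, 0.25)))  # design (4), pre-charges: ~6827
```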
We finally mention that in all our experiments, we only considered the peak
values of the power traces occurring at the rising edges of the clock. It is reason-
able (and verified in our experiments) to assume that these values give a good
image of the power consumption because of the inherently synchronous behavior
of RAM-based FPGAs. However, this may not be the case for other devices.

7 Conclusions
The correlation between the power consumption measurements of an isolated FPGA implementation of a cryptographic algorithm and a simple prediction based on the number of bit transitions within the device can be up to 90%. Using random pre-charges in the FPGA allows one to decrease these correlation values (our experiments suggest 25%) but is not sufficient to counteract the attacks. We provide simple techniques for estimating the number of measurements required to defeat one particular implementation. These estimations suggest that most FPGA implementations of symmetric-key block ciphers can be defeated with a low number (e.g. a few hundred) of power traces.

References
1. P. Barreto, V. Rijmen, The KHAZAD Legacy-Level Block Cipher, Submission to
NESSIE project, available from http://www.cosic.esat.kuleuven.ac.be/nessie/
2. E. Brier, C. Clavier, F. Olivier, Correlation Power Analysis with a Leakage Model,
in the proceedings of CHES 2004, Lecture Notes in Computer Science, vol 3156,
pp 16-29, Boston, USA, August 2004.
3. P. Buysschaert, E. De Mulder, S. B. Örs, P. Delmotte, B. Preneel, G. Vandenbosch,
I. Verbauwhede, Electromagnetic Analysis Attack on an FPGA Implementation of
an Elliptic Curve Cryptosystem, in the proceedings of EUROCON 2005 - The
International Conference on Computer as a Tool, IEEE, 4 pages, 2005.
4. S. Chari, J. Rao, P. Rohatgi, Template Attacks, in the proceedings of CHES 2002,
Lecture Notes in Computer Science, vol 2523, pp 13-28, Redwood City, CA, USA, August 2002.
5. V. Fischer, M. Drutarovsky, True Random Number Generator Embedded in Recon-
figurable Hardware, in the proceedings of CHES 2002, Lecture Notes in Computer
Science, vol 2523, pp 415-430, Redwood Shores, California, USA, August 2002.
6. P. Kocher, J. Jaffe, B. Jun, Differential Power Analysis, in the proceedings of Crypto 1999, Lecture Notes in Computer Science, vol 1666, pp 398-412, Santa Barbara, USA, August 1999, Springer-Verlag.
7. National Bureau of Standards, FIPS PUB 46, The Data Encryption Standard,
Federal Information Processing Standard, NIST, U.S. Dept. of Commerce, January 1977.
8. National Bureau of Standards, FIPS 197, Advanced Encryption Standard, Federal
Information Processing Standard, NIST, U.S. Dept. of Commerce, November 2001.
9. T.G. Malkin, F.-X. Standaert, M. Yung, A Comparative Cost/Security Analysis
of Fault Attack Countermeasures, in the proceedings of FDTC 2005, Edinburgh,
Scotland, September 2005.
10. S.B. Ors, E. Oswald, B. Preneel, Power-Analysis Attacks on an FPGA – First
Experimental Results, in the proceedings of CHES 2003, Lecture Notes in Computer
Science, vol 2779, pp 35-50, Cologne, Germany, September 2003, Springer-Verlag.
11. E. Peeters, F.-X. Standaert, J.-J. Quisquater, Power and Electromagnetic Analysis:
Improved Model, Consequences and Comparisons, to appear in Integration, the
VLSI Journal, Spring 2006, Elsevier.
12. J.M. Rabaey, Digital Integrated Circuits, Prentice Hall International, 1996.
13. L. Shang, A. Kaviani, K. Bathala, Dynamic Power Consumption in Virtex-2 FPGA
Family, in the proceedings of FPGA 2002, pp 157-164, Monterey, California, USA, February 2002.
14. F.-X. Standaert, S.B. Ors, B. Preneel, Power Analysis of an FPGA Implementation
of Rijndael : Is Pipelining a DPA Countermeasure?, in the proceedings of CHES
2004, Lecture Notes in Computer Science, vol 3156, pp 30-44, Cambridge, MA,
USA, August 2004.
15. F.-X. Standaert, E. Peeters, G. Rouvroy, J.-J. Quisquater, Power Analysis Attacks
and Countermeasures of Field Programmable Gate Arrays: Recent Results, to ap-
pear in the Proceedings of the IEEE, special issue on Cryptographic Hardware and
Embedded Systems, Spring 2006.
16. K. Tiri, I. Verbauwhede, Synthesis of Secure FPGA Implementations, in the pro-
ceedings of the International Workshop on Logic and Synthesis (IWLS 2004), pp
224-231, June 2004.