Content uploaded by Frank Schuhmacher on Sep 23, 2020

Software-based self-testing for the

authentication of car components

Frank.Schuhmacher@segrids.com

Segrids GmbH

Abstract. We present a software solution for the authentication of ECUs based on hardware intrinsic authentication features of standard micro-controllers. It requires that the group of authentic ECUs is characterized by a dedicated MCU model and a group identifier in read-only memory. No secret ECU key is required. We make use of the fact that an MCU running a suitable self-test is a complex dynamical system that is hard to simulate in a cycle-accurate way. We demonstrate that software-based self-testing can serve as a “time-bounded” authentication method. One field of application is the detection or lock-out of counterfeits.

1 Introduction

A “security control unit” (SCU) in an on-board network might be required to authenticate ECUs at startup. One motivation is the detection of counterfeit spare parts.

Key based cryptography is not a good choice for the authentication of an ECU without a secure hardware element, since it is most likely vulnerable to probing, DPA, or fault injection attacks [1]. Cryptographic authentication is broken as soon as a single secret ECU key is disclosed. Due to the immense revenues in product counterfeiting [2], a strong attack potential must be presumed. Secure hardware elements providing a sufficient key protection level are expensive for a price-sensitive market.

Physical unclonable functions (PUFs) were originally developed as an alternative to key based cryptographic authentication. However, only “weak” PUF solutions [3] play a practical role so far. Weak PUFs are not used directly as an authenticator but serve as a secure key storage. They support secure key handling in a cryptographic solution but do not solve the problem of key leakage during a cryptographic computation.

The inventions of a “public” PUF (PPUF) in [4] and of “time-bounded” authentication (TBA) in [5] are based on the insight that it is not mandatory for device authentication that the challenge-response function be completely unclonable. It is sufficient that it can be computed faster on an authentic device than on any simulator, assuming that the challenger verifies that the response time is below a suitable limit, in addition to the correctness of the response. Both inventions are based on device-unique signal propagation characteristics of gate arrays due to production process variations. The article [6] suggests the term SIMPL (simulation possible but laborious) instead of PPUF or TBA.

Our contribution is a new time-bounded authentication solution which does not require additional hardware or an FPGA. It is a perfect fit for automotive applications, since it can be combined with software-based self-testing (SBST) for functional safety [7] at start-up.

We consider an ECU authentic if it has an authentic MCU. An MCU is authentic if it is of the same model and has the same group identifier in a read-only memory space as an authentic reference MCU. Our solution requires a contract between the ECU manufacturer and the MCU manufacturer, and organizational means to ensure that no unauthorized third party gets access to authentic MCU samples.

We use the fact that an MCU executing an SBST is a complex dynamical system, and that it is hard to simulate its behavior in a cycle-accurate way. In the authentication, the responder MCU interprets a received challenge as a test pattern of an SBST. The SBST stimulates multiple parts of the MCU, covering a set of peripheral modules characteristic for the MCU model, measures the MCU behavior by sampling static and dynamic system states, and computes a hash sum over the sampled data. The sequence of hash states serves as pseudo-random stimulus data to achieve an unpredictable behavior and timing. Two kinds of dynamic data will only be generated by an authentic MCU and will serve as a timing signature: clock counter values sampled at dedicated program states, and program counters sampled at dedicated runtimes.

Authentication by software-based self-testing of MCU intrinsic group features differs in many aspects from state-of-the-art authentication. Therefore, we introduce a suitable formalism in Section 2, and provide a precise attacker model in Section 3. We formally define four security objectives in Section 4, and prove the security of the authentication scheme provided that its implementation achieves the four security objectives.

Apart from security, reliability is a central requirement. The complementary paper [8] describes an implementation of our authentication scheme on an example MCU model, together with reliability tests for the implementation. The test observations indicate that reliability can be achieved. The paper finally verifies that the implementation satisfies the four security objectives. Section 5 summarizes the results of this reference.

Section 6 provides automotive-specific details, e.g. the provisioning of the SCU with challenge-response pairs, and the setup of a back-end server for garage updates or updates over-the-air. The final Sections 7, 8, and 9 summarize advantages and disadvantages with respect to the state of the art, draw conclusions, and specify future work.

2 Deﬁnitions

2.1 Field of application

A field of application is a pair (LCU, pth) of a set LCU of “limited resource” control units, and a probability threshold pth defining the acceptable probability for a malicious LCU to pass an authentication. Informally, the set LCU will be a set of ECUs with the following limitations: (1) the production cost per piece is limited by the sales price of an original ECU; (2) the size is limited by the size of an original ECU; (3) no online connectivity.

Examples of LCUs are automotive ECUs, and MCUs if not over-sized for the field of application. For counterfeit prevention, we only need to consider LCUs as adversaries in the field.

2.2 Invariants

For a given MCU instance M, denote by model(M) the set of all MCUs of the same MCU product model. Fix a challenge space and a response space. A CRT program is a binary for processing a challenge from the challenge space, and computing a response

response = prog(M, challenge)

in the response space, depending deterministically on the triple (prog, M, challenge).

The response of a CRT program depends in general on the MCU hardware instance. We set prog(L, challenge) = undef if an LCU L doesn't support a program prog. A CRT program prog is called an invariant of a subset group ⊆ MCU if

prog(M, challenge) = prog(M′, challenge)

for each M, M′ ∈ group and each challenge.

2.3 Authenticators

Fix a field of application, an invariant prog of a group ⊆ MCU of “authentic” MCUs, and a response time limit tlim. Introduce the relation

auth_responder(response)

to state that the sender of a received response is authentic, and the relation

match(challenge, response)

to state (a) that the responder is an LCU, (b) that the response time is below tlim, and (c) that the received response matches the authentic reference response. The definition of an invariant requires the deterministic implication:

auth_responder(response) =⇒ match(challenge, response) (1)

Implication (1) is referred to as reliability. Reliability needs to be supported by testing, including environmental stress tests (refer to Section 5).

In the case ¬auth_responder(response), we need to work with probabilistic implications. Introduce the relation fresh(challenge) to state that challenge was never used for an authentication in the field before. A pair (prog, tlim) of an invariant prog of a subset group ⊆ MCU and a response time limit tlim is an authenticator for a given field of application (LCU, pth) if the response time of each group element is below tlim, and the conditional probability of

fresh(challenge) ∧ match(challenge, response) (2)

under the condition ¬auth_responder(response) is smaller than pth. An important aspect of the authenticator definition is that only malicious devices L in the universe LCU are to be considered. To prevent ECU counterfeiting, an authenticator doesn't need to detect non-authentic big and expensive hardware.

2.4 Software-based self-tests

Consider a software-based self-test (SBST) as a CRT program prog that loads and executes a challenge-dependent self-test binary (code, data), such that the execution covers:

1. System stimulation by writing stimulus data into hardware registers at locations w_0, ..., w_{M−1};

2. Sampling a sequence of memory or register values s_0, ..., s_{N−1} at locations r_0, ..., r_{N−1} and sample times t_0, ..., t_{N−1};

3. Computing the response response = prog(M, challenge) covering a hash sum hash(r_0, s_0, ..., r_{N−1}, s_{N−1}) over locations and samples;

and satisfying the following requirements: (A) the code instruction flow uniquely depends on the challenge; (B) the data depends on the challenge; (C) the sequence of locations w_0, ..., w_{M−1} depends on data; (D) the sequence of locations r_0, ..., r_{N−1} depends on data.

An SBST measures a part of the system behavior. The self-test data defines the test pattern.
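The stimulate-sample-hash loop with hash-state feedback (requirements A through D) can be sketched as follows. This is an illustrative model, not the implementation of [8]; the callbacks mcu_read and mcu_write are hypothetical placeholders for memory-mapped register accesses.

```python
import hashlib

def sbst_response(mcu_read, mcu_write, challenge: bytes, n_samples: int) -> bytes:
    """Illustrative model of an SBST as a CRT program. The hash state
    drives the choice of stimulus and sample locations (requirements C
    and D), and each (location, sample) pair is fed back into the hash
    (step 3), so the response depends on every sampled system state."""
    state = hashlib.sha256(challenge).digest()
    for _ in range(n_samples):
        w = int.from_bytes(state[0:4], "big")    # stimulus location from hash state
        mcu_write(w, state[4:8])                 # 1. stimulate the system
        r = int.from_bytes(state[8:12], "big")   # sample location from hash state
        s = mcu_read(r)                          # 2. sample a memory/register value
        state = hashlib.sha256(state + r.to_bytes(4, "big") + s).digest()  # 3. hash
    return state
```

On real hardware, mcu_write would configure peripherals such as timers and DMA channels, and mcu_read would sample their registers; here both are left abstract.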

2.5 MIGA

Consider an identifier of model(M) as a map id from model(M) to a set of bit strings. A subset of the form

group(M, model, id) := {M′ ∈ model(M) | id(M′) = id(M)}

will be called an MCU intrinsic group. Provided a field of application (LCU, pth), we call an authenticator (prog, tlim) of an MCU intrinsic group, where prog is an SBST, an MCU intrinsic group authenticator (MIGA).

Obviously, the identifier id(M) needs to be sampled by any authenticator of group(M, model, id), and it needs to be ensured that the sampling is done from the authentic identifier address. We will show that, at least for some MCU models, a MIGA can be realized without the need of a secret key. A core element will be the timing signature as defined in the following subsection.

2.6 Timing signature

In our MIGA construction, the self-test code will sample some states s_i in order to measure the timing of the system (M, prog) in a clock-cycle accurate way. Timing covers the instruction flow of code and the cycle-accurate behavior of the peripheral MCU modules. There are two complementary variants of timing data: system states changing every clock cycle (e.g. clock counters) measured at dedicated program states, and program counters measured at dedicated runtimes. In [8], we realize the second variant by sampling stacked return addresses via timer interrupt service routines. The fastest-changing bit of a timing sample is bit zero (LSB) for the first variant, and bit one for the second variant if the width of most CPU instructions is 16 bit.

In our MIGA construction, the self-test code will successively append dedicated fast-changing bits to a bit sequence called the timing signature and denoted σ(challenge). In an SBST with timing signature, we assume that the response contains the timing signature in addition to the hash sum.

A necessary condition for a unique timing signature is that all involved hardware modules (in particular the timers and busses) are clocked without prescaler by the same clock as the CPU. If, for example, timers are clocked with prescaler 2, then in the best case the response takes one out of two possible values.
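Extracting the fastest-changing bit for the two variants can be sketched as follows; this is an illustrative helper, with the 16-bit-instruction assumption from above.

```python
def timing_bit(sample: int, variant: int) -> int:
    """Fastest-changing bit of a timing sample: bit 0 (LSB) for clock
    counter values (variant 1); bit 1 for sampled program counters
    (variant 2), since with mostly 16-bit instructions the PC moves in
    steps of 2 and its LSB carries no timing information."""
    return (sample >> (0 if variant == 1 else 1)) & 1

def extend_signature(signature: str, sample: int, variant: int) -> str:
    """Append one timing bit to the bit string sigma(challenge)."""
    return signature + str(timing_bit(sample, variant))
```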

Definition 1. For an SBST prog with timing signature σ and integers m ≤ n ≤ N, denote by [m:n] the set {m, m+1, ..., n−1}, and by

K(m, n, challenge) ⊂ [m:n]

the subset of indices i where the element s_i of the sequence s_1, ..., s_N of sampled data contributes a bit to the timing signature, and call the minimal d such that for each challenge:

n − m ≥ d =⇒ |K(m, n, challenge)| ≥ 1

the mesh size d(σ).
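For a fixed challenge, the mesh size is determined by the longest run of sample indices that contributes no timing bit. A small helper, illustrative only and with hypothetical inputs, can compute it:

```python
def mesh_size(timing_indices, n_total: int) -> int:
    """Smallest d such that every window [m:n) with n - m >= d contains
    at least one timing index (Definition 1): one more than the longest
    run of indices in [0:n_total) contributing no timing bit."""
    ks = sorted(timing_indices)
    if not ks:
        return n_total + 1  # no window is guaranteed a timing bit
    gaps = [ks[0]]                                   # empty prefix window
    gaps += [b - a - 1 for a, b in zip(ks, ks[1:])]  # empty interior windows
    gaps.append(n_total - ks[-1] - 1)                # empty suffix window
    return max(gaps) + 1
```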

The security of MIGA is based on the hardness of cycle-accurate system emulation, i.e. on the hardness of determining the timing signature on any non-authentic hardware.

2.7 Synchronization

In order to get a deterministic response in an SBST with timing signature, the MCU must execute code synchronously to a reference MCU. A synchronous execution includes a synchronous instruction execution timing, and synchronous read and write accesses to the system bus. It requires starting from a known state [9]. The known state can only be achieved deterministically after receiving the challenge if it is very simple, e.g. if memory and bus controllers have a known initial configuration, the peripheral modules to be used in the self-testing are clocked but disabled and in a known initial configuration, no interrupt is pending or active, and no DMA process is running. The observations in [8] indicate that – in combination with the no-prescaler assumption – these conditions are also sufficient, at least for two tested MCU models (HT32F52 and SAM3x8e).

We will implement an SBST in such a way that after some system stimuli, these prerequisites no longer hold: DMA processes will compete with the CPU and memories for bus services, and timer interrupts of pseudo-randomly configured timer modules will occur frequently. This makes the system state at a dedicated code instruction or dedicated runtime t hard to predict. Runtime always refers to the number of CPU clock cycles counted from the known state.

It is even harder for any system (authentic or malicious) to synchronize with a reference system at a runtime t if running some asynchronous code′ before t, since this would require not only the knowledge of but also the capability to establish an equivalent system state. The security of MIGA also depends on the hardness of such a synchronization in the middle of a self-test execution.

3 Attacker model

In this section, we establish a sufficiently generic attacker model. Consider a triple (prog, σ, tlim) of an SBST with timing signature and a time limit, implemented for the authentication of group(M, model, id) elements in a field of application (LCU, pth). An attacker is a counterfeit manufacturer, i.e. the implementer of malicious systems (L, prog′) with

L ∈ LCU \ group(M, model, id)

trying to pass challenge-response-tests with probability higher than pth. For passing a challenge-response-test, the malicious L needs to determine the hash sum and the timing signature. The SBST shall be designed in such a way that for each malicious L there exists at least one i such that L will not sample the authentic system state s_i at the authentic location r_i. For achieving the correct hash result nevertheless, the malicious L might try “spoofing”, to be defined in Section 3.1. Introduce the relation recovery(response) to indicate that the responder correctly determined – deterministically or by guessing – the timing signature. By construction:

match(challenge, response) =⇒ recovery(response) (3)

If the challenge is fresh, a malicious responder has three possibilities for the recovery of the timing signature. First, by running a code′ synchronously to an authentic reference MCU running the authentic self-test code; in this case, we assume that L is able to sample the authentic timing values. This will be formalized in Section 3.2. Secondly, by running an asynchronous code′ and figuring out the authentic timing samples by a mix of simulation and guessing. This case will be formalized in Section 3.3. Third, by switching between synchronous and asynchronous sampling. This case will be formalized in Section 3.4.

3.1 Spooﬁng

Introduce the relation spoof(m, n, response) to indicate that for some i ∈ [m:n] the responder did not sample the authentic s_i at the authentic location r_i, but nevertheless performed a hashing of each authentic pair (r_i, s_i) for i ∈ [m:n] to obtain the authentic hash states hash_i of an authentic MCU at sample time t_i.

3.2 Synchronous sampling

Introduce the relation sync(m, n, response) to state that the responder has the same processor model as an authentic MCU, and that for integers 0 ≤ m < n ≤ N the responder sampled the timing values within s_m, ..., s_{n−1} synchronously to the authentic reference device. This includes that the responder CPU executed an instruction sequence synchronously to an authentic MCU, that the responder sampled a sequence s′_m, ..., s′_{n−1} of system states at the authentic sampling runtimes t_m, ..., t_{n−1}, and that for i ∈ K(m, n, challenge), we have s′_i = s_i. Introduce the logical relation sound(m, n, response) to state that sync(m, n, response) holds, and that for i ∈ [m:n], we have s′_i = s_i and r′_i = r_i.

Introduce the relation auth_binary(m, n, response) to state that the responder executed the authentic self-test binary (code, data) without interruption for sampling s′_m, ..., s′_{n−1}.

3.3 Asynchronous sampling

Introduce the relation async(m, n, response) to state that the responder is malicious, and that for integers m < n ≤ N it correctly determined the authentic timing samples within s_m, ..., s_{n−1} asynchronously, or that the responder has a different processor model than an authentic MCU and sampled these timing samples correctly by chance. The case m = n is used if a synchronous sampling is interrupted and restarted after a re-synchronization; the case m = n = 0 shall indicate that the responder started the sampling asynchronously. An example of an asynchronous sampling would be a cycle-accurate emulation of an authentic system. This is out of scope for elements L ∈ LCU due to sophisticated mechanisms for branch prediction, bus arbitration, caching effects, and latencies of peripheral responses [10], [11].

A malicious L will not be able to do cycle-accurate system emulation within the time limit tlim, but in the worst case it can do something close: mix functional system emulation and guessing of timing data in such a way that if it has already determined s_1, ..., s_{n−1} correctly, then it is able to determine a non-timing s_n in real time and guess s_n for n ∈ K(challenge) with a guess rate limited by some global px < 1.

Axiom 1 (No cycle-accurate emulation). There exists a global boundary px < 1 such that for 0 ≤ m ≤ n ≤ N, the probability of

async(m, n, response)

under the condition of a malicious responder is smaller than or equal to px^|K(m, n, challenge)|.

If a synchronous program sequence runs after an asynchronous one, a malicious L must also first do a synchronization, see Section 2.7. Such a synchronization is a heuristic process that will only succeed with a small success rate.

Axiom 2. There exists a global boundary py < 1/2 such that for 0 ≤ l ≤ m < n ≤ N, the probability of

async(l, m, response) ∧ sync(m, n, response)

under the condition of a malicious responder is smaller than or equal to py.

The minimal pair (px, py) can be considered as the attack potential of the worst malicious L ∈ LCU. For the example implementation in [8], we assume (px, py) = (0.9, 0.1) as the worst-case attack potential.

3.4 Mixed attack strategy

We state that the recovery of the authentic timing signature requires a mix of synchronous and asynchronous sampling if the challenge is fresh.

Axiom 3. If fresh(challenge) and recovery(response), then there exist a Y ≥ 0 and a sequence

0 = n_0 ≤ n_1 ≤ ... ≤ n_{2Y+2} = N − 1 (4)

with sync(n_i, n_{i+1}, response) for even i, async(n_i, n_{i+1}, response) for odd i, and n_i strictly smaller than n_{i+1} for even i ≥ 2.

We call Y the re-synchronization count. It is zero if the responder is authentic. If the responder is a malicious LCU, the sequence (4) can be considered as the attack strategy. An attack strategy might be challenge dependent.

Remark 1. For an attacker with potential (px, py), the probability for an attack with strategy (4) is limited by

px^X · py^Y,

where X is the number of timing indices in

[n_1:n_2] ∪ [n_3:n_4] ∪ ... ∪ [n_{2Y+1}:n_{2Y+2}].
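This bound is a one-line computation; a small helper, illustrative only, using the worst-case potential (px, py) = (0.9, 0.1) assumed above as an example:

```python
def attack_bound(px: float, py: float, x_timing_bits: int, y_resyncs: int) -> float:
    """Remark 1: upper bound px^X * py^Y on the success probability of
    an attack strategy with X asynchronously recovered timing bits and
    Y re-synchronizations."""
    return (px ** x_timing_bits) * (py ** y_resyncs)
```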

4 Security architecture

Fix a field of application (LCU, pth). Section 4.1 defines security objectives for a pair (prog, tlim) of a group(M, model, id)-invariant SBST and a response time limit. We prove in Section 4.2 that the security objectives are suitable for a MIGA if the minimal samples count N is chosen big enough.

4.1 Security objectives

Objectives 1. Provide a “coverage constant” c and a “feedback parameter” o, such that for each m, n with n − m > c, and assuming a malicious responder, the probability of each of the following implications is greater than 1 − pth/4:

1. Hardware coverage: if the responder generated the response by faithful execution of the authentic binary, and sampled the authentic system states at the authentic locations, then it is authentic. More precisely:

auth_binary(m, n, response) ∧ sound(m, n, response) =⇒ auth_responder(response)

2. Code coverage: if the responder sampled the authentic system states at the authentic locations, it necessarily executed the authentic binary faithfully. More precisely:

sound(m, n + o, response) =⇒ auth_binary(m, n, response)

3. Feedback: if the responder was in sync with an authentic reference MCU, then it fed the authentic data into the authentic hash function. More precisely:

sync(m, n + o, response) =⇒ sound(m, n, response) ∨ spoof(m, n, response)

4. Spoofing detection: if the responder did not read the authentic data, or not from the authentic addresses, but computed the hash sum nevertheless over authentic data, then this was not in sync with an authentic reference MCU. More precisely:

spoof(m, n, response) =⇒ ¬sync(m, n + o, response)

Remark 2. If Objectives 1 are satisfied for parameters (c, o), the implication

sync(m, n, response) ∧ n − m > c + o =⇒ auth_responder(response)

holds with probability greater than 1 − pth.

4.2 Rationale

By the following theorem, the four objectives are sufficient for a MIGA implementation if the minimal samples count N is big enough. The latter only depends on the maximal assumed attack potential (px, py).

Theorem 1. Let (LCU, pth) be a field of application, prog an SBST invariant of group(M, model, id) with minimal samples count N and mesh size d, and tlim a response time limit. Assume that for natural numbers c, o, the sum c + o is a multiple of d, and that Objectives 1 are satisfied. Assume a maximal attack potential (px, py) for malicious devices in LCU. Set p := max{px^((c+d+o)/d), py} and let Z be the smallest integer with p^Z ≤ pth. If N > (c + d + o) · (Z + 1), then the probability for (2) is smaller than pth, i.e. (prog, tlim) is an MCU intrinsic group authenticator.

Proof. We have to show that the probability for (2) under the condition of a malicious responder is smaller than or equal to pth. By implication (3), the assumption (2) implies recovery(response). By Axiom 3 and Remark 2 we can assume an attack strategy (4) with n_{i+1} − n_i ≤ c + o for even i. By Definition 1, we get

X ≥ (N − Σ_{i even} (n_{i+1} − n_i + d)) div d ≥ (N − (Y + 1)(c + d + o)) div d > (Z − Y)(c + d + o)/d.

In consequence, the probability for the attack is limited by:

px^X · py^Y < px^((Z−Y)(c+d+o)/d) · py^Y ≤ p^(Z−Y) · p^Y ≤ p^Z ≤ pth. □
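The sample-count condition of Theorem 1 can be evaluated mechanically. Below is a minimal sketch using exact rational arithmetic to avoid borderline floating-point errors in the test p^Z ≤ pth. With the parameters later reported in Section 5.3 (px = 0.9, py = 0.1, pth = 0.001, c = 1548, o = 220, d = 52) it yields Z = 3 and the boundary (c + d + o)(Z + 1) = 7280, so the smallest N satisfying the strict inequality is 7281.

```python
from fractions import Fraction

def min_samples_count(px: Fraction, py: Fraction, pth: Fraction,
                      c: int, o: int, d: int) -> int:
    """Smallest N satisfying the sufficient condition of Theorem 1,
    N > (c + d + o) * (Z + 1), with Z the smallest integer such that
    p^Z <= pth and p = max(px^((c+d+o)/d), py). Assumes c + o is a
    multiple of d, as the theorem requires."""
    e = (c + o + d) // d                  # integer exponent (c+d+o)/d
    p = max(px ** e, py)
    Z, q = 0, Fraction(1)
    while q > pth:                        # find smallest Z with p^Z <= pth
        q *= p
        Z += 1
    return (c + d + o) * (Z + 1) + 1      # strict inequality: boundary + 1
```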

5 Proof of concept

A proof of concept is provided in the reference [8]. This section briefly summarizes its results.

5.1 Platform

The reference specifies an SBST design, and describes an example implementation for an MCU intrinsic group with the Holtek HT32F52 as MCU model, and the default “custom ID” 0xFF...FF as group identifier. The runtime of the example implementation is 30 ms at a clock rate of 8 MHz.

5.2 Reliability

The reference describes reliability tests performed with six HT32 starter kits as test boards. The testing covered (a) repeated power-on/power-off cycles, (b) voltage variations (2 V to 6 V), and (c) temperature variations (20 °C to 120 °C). At least 10 million challenge-response-tests were executed per sample. In none of the tests was a mismatch between a response and a reference response observed, except for missing responses due to a complete failure of the MCU functionality, e.g. when the VDD supply voltage was below 2.3 V or above 5.2 V.

Equivalent test observations were made for a second implementation on an Atmel SAM3x8e (Arduino Due) with Cortex-M3 processor. We consider these observations as promising. Further tests are work in progress. Before relying on a MIGA scheme for a dedicated MCU model, reliability testing on a larger scale is mandatory.

5.3 Security

The reference assumes an attack potential of (px, py) = (0.9, 0.1), and requires a maximal rate pth = 0.001 of false positives. The reference proves that the example implementation satisfies Objectives 1 with coverage constant c = 1548, feedback parameter o = 220, mesh size d = 52, and minimal samples count N = 7280. It follows from Theorem 1 that the example implementation is an MCU intrinsic group authenticator for the group of all HT32F52 samples with default custom ID.

6 Application to car components

In this section, we show that the MIGA scheme is applicable to the automotive use case sketched in the introduction.

6.1 Challenger application

A security control unit SCU will act as the challenger in CRTs with the to-be-

authenticated ECUs as responders. The challenger maintains for each responder

an individual set of CRPs consisting of a challenge and reference response, and

a response time limit. For the authentication, the challenger sends an individual

challenge to each of the responders, collects the responses, and measures the

response times. If a response time exceeds the response time limit, the response

is withdrawn. Otherwise, an authentication passes if the response matches its

reference value.

Since the to-be-authenticated ECUs can compute responses in parallel, and

the response time is about 30ms for each ECU, the time for the authentication

of multiple ECUs is only limited by the bandwidth of the on-board network.
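The challenger logic above can be sketched as follows; send, recv, and the CRP store layout are hypothetical placeholders for the on-board network interface, not part of the scheme's specification.

```python
import time

def challenge_response_test(send, recv, crp_store) -> bool:
    """Challenger-side CRT sketch. Draw a stored CRP, send the
    challenge, and accept only if the response arrives within the time
    limit and matches the reference response. Each CRP is consumed,
    so a challenge is never reused."""
    challenge, reference, t_lim = crp_store.pop()
    t0 = time.monotonic()
    send(challenge)                      # challenge the responder ECU
    response = recv()                    # collect its response
    elapsed = time.monotonic() - t0      # measured response time
    return elapsed <= t_lim and response == reference
```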

6.2 Back-end

The implementer can choose a strategy for the reuse of CRPs, or for “offline CRP updates”, where an already authenticated ECU is misused as a response generator for future CRTs, see [12]. Nevertheless, the SCU might occasionally need to update its sets of CRPs. This requires a CRP update server in a back-end of the manufacturer. CRP update requests can be sent over-the-air, if applicable, or by a diagnosis tool in the garage. The garage update is applicable in particular when an ECU has been replaced with an original spare part and the SCU is not yet equipped with CRPs for the new ECU model.

The CRP update server has a random number generator for the generation of uniformly distributed challenges, and reference MCUs for generating reference responses for each ECU model in the field. It serves its client SCUs with CRPs upon request. Each CRP must only be sent once to a client.

MIGA and symmetric key authentication can be mixed without any overhead in the SCU software. In this case, the back-end is also used for the generation of CRPs for ECUs with a symmetric group key as authentication factor. This only requires reference MCUs with a valid group key in the back-end.

7 Pros and cons

This section describes advantages and disadvantages of the MIGA scheme with

respect to the state of the art.

7.1 Compared to key based authentication

An obvious advantage of the MIGA scheme is that the costs per piece are significantly smaller compared to secure hardware elements.

As a counterfeit prevention measure, key based authentication is broken as soon as a single key is disclosed. An attacker can buy and analyze authentic ECUs in his laboratory in order to disclose an authentication key with methods including DPA, DFA, and probing. To achieve a suitable protection level, a certified secure element is mandatory. But even certified secure elements are sometimes vulnerable [13].

The advantage of the MIGA scheme is that no secret key is required, thus there is no risk of a key disclosure. As shown in the present article, it nevertheless achieves a high security level for the prevention of ECU counterfeits. Note that the MIGA scheme does not provide data encryption.

7.2 Compared to PUFs

The advantage of the MIGA scheme with respect to non-cryptographic PUF solutions [14], [15] is that it is much more practical, due to the fact that it uses a group authentication feature. A CRP can be used to authenticate any ECU of the given product model. A responder ECU can be replaced by an ECU of the same product model without the need of a CRP update. Non-cryptographic PUF solutions require responder-individual CRPs. The pre-generation and storage of device-individual CRPs in a back-end is very complex. Non-cryptographic PUF solutions are not very common on the market. Available PUF solutions use a weak PUF, e.g. an SRAM-PUF. Weak PUFs have too little entropy to serve for challenge-response-testing. They are used instead as a secure key storage. Since keys can be attacked not only at their memory location but also during crypto operations, weak PUFs do not make secure hardware elements redundant.

7.3 Counterfeit MCUs

A disadvantage of the MIGA scheme is that it does not apply if a counterfeit ECU manufacturer can produce or purchase MCU counterfeits behaviorally equivalent to the authentic MCU model, and with the authentic group identifier. This would apply to (1) overproduced MCUs of the original model on behalf of the counterfeit ECU manufacturer, (2) cloned MCUs based on illegal copies of the authentic design data, and (3) cloned MCUs based on layer-by-layer reverse engineering of the original MCU. We consider recycled MCUs of the authentic model as a minor problem: recycled MCUs will not be available on a large scale with the valid group identifier.

The described disadvantage will decrease in the future due to advances in MCU cloning prevention technologies, such as hardware obfuscation with key based locking [16].

7.4 Combination of functional testing and authentication

An advantage of the MIGA approach is the possible combination of SBST driven by functional safety requirements and authentication. Due to the coverage objective, an SBST implemented for the MIGA scheme already provides evidence that the MCU is working properly. Bad responses in the MIGA scheme should always be considered as a possible safety issue in safety-critical applications.

7.5 Crypto agility

The MIGA authentication permits the challenger device to choose the security level dynamically. This only requires that the samples count N is chosen by the challenger (as a challenge parameter). The challenger could decide to choose a small N as long as no counterfeit ECUs are observed on the market, and increase N in response to an increased attack potential (px, py) of counterfeit manufacturers. The suitable N is provided by Theorem 1.

8 Conclusions

We introduced MIGA as a time-bounded authentication scheme based on hardware intrinsic characteristic group features of MCUs of the same model and with a common group identifier. It is based on the fact that an MCU is a complex dynamical system, and hard to emulate in a cycle-accurate way. The MIGA scheme is optimized for counterfeit prevention. To our knowledge, it is the first authentication method based on digital hardware intrinsic group features.

We provided a generic attacker model, and defined four security objectives on a MIGA implementation in software. We proved the security of the MIGA scheme provided that the four security objectives are satisfied. We referred to [8] for a practical implementation and the proof that the four security objectives can be achieved for a specific MCU model. In particular, the reference provides promising results on the reliability of the MIGA scheme. We indicated how the authentication scheme can be applied as an anti-counterfeiting solution to automotive ECUs.

9 Future work

Planned work items cover: (1) large-scale reliability tests with the support of an MCU manufacturer; (2) testing if MIGA can help in detecting some classes of MCU counterfeits; (3) proving the security of authentication with “relaxed freshness”, similar to the result in [12], for an embedded system with multiple to-be-authenticated device components; (4) building a demonstrator for a complete on-board network where MIGA and key based authentication are mixed; (5) publishing a MIGA implementation, and organizing a hacker competition for circumventing MIGA with a low-resource control unit.

References

1. “Requirements to perform integrated circuit evaluations,” May 2013. https://www.commoncriteriaportal.org/files/supdocs/CCDB-2013-05-001.pdf.

2. Michigan State University, “Defining the threat of product counterfeiting,” 2019. https://www.michiganstateuniversityonline.com/resources/acapp/threat-of-product-counterfeiting/.

3. T. McGrath, I. E. Bagci, Z. M. Wang, U. Roedig, and R. J. Young, “A PUF taxonomy,” Applied Physics Reviews, vol. 6, no. 1, p. 011303, 2019.

4. N. Beckmann and M. Potkonjak, “Hardware-based public-key cryptography with public physically unclonable functions,” Information Hiding, Springer-Verlag, pp. 206–220, 2009.

5. M. Majzoobi and F. Koushanfar, “Time-bounded authentication of FPGAs,” IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1123–1135, 2011.

6. U. Rührmair, “SIMPL systems as a keyless cryptographic and security primitive,” in D. Naccache (editor), Cryptography and Security: From Theory to Applications – Essays Dedicated to Jean-Jacques Quisquater on the Occasion of His 65th Birthday, Lecture Notes in Computer Science, vol. 6805, Springer, 2012.

7. M. Psarakis, D. Gizopoulos, E. Sanchez, and M. Sonza Reorda, “Microprocessor software-based self-testing,” IEEE Design & Test of Computers, vol. 27, no. 3, pp. 4–19, 2010.

8. F. Schuhmacher, “A MIGA design and example implementation,” 2020. https://www.researchgate.net/publication/344257849_A_MIGA_design_and_example_implementation.

9. F. Reimann, M. Glaß, J. Teich, A. Cook, L. R. Gómez, D. Ull, H. Wunderlich, U. Abelein, and P. Engelke, “Advanced diagnosis: SBST and BIST integration in automotive E/E architectures,” in 2014 51st ACM/EDAC/IEEE Design Automation Conference (DAC), pp. 1–6, 2014.

10. J. Bauer and F. Freiling, “Towards cycle-accurate emulation of Cortex-M code to detect timing side channels,” 11th International Conference on Availability, Reliability and Security, IEEE, 2016.

11. M. Reshadi and N. Dutt, “Generic pipelined processor modeling and high performance cycle-accurate simulator generation,” Proceedings of the Conference on Design, Automation and Test in Europe – Volume 2, Washington, DC, USA: IEEE Computer Society, pp. 786–791, 2005.

12. F. Schuhmacher, “Relaxed freshness in component authentication,” 2020. https://eprint.iacr.org/2020/106.

13. M. Wagner and S. Heyse, “Single-trace template attack on the DES round keys of a recent smart card,” Cryptology ePrint Archive, Report 2017/057, 2017. https://eprint.iacr.org/2017/057.

14. R. S. Pappu, “Physical one-way functions,” PhD thesis, MIT, 2001.

15. S. Devadas, G. E. Suh, S. Paral, R. Sowell, T. Ziola, and V. Khandelwal, “Design and implementation of PUF-based “unclonable” RFID ICs for anti-counterfeiting and security applications,” International Conference on RFID, pp. 58–64, 2008.

16. S. Amir, B. Shakya, D. Forte, M. Tehranipoor, and S. Bhunia, “Comparative analysis of hardware obfuscation for IP protection,” in Proceedings of the Great Lakes Symposium on VLSI 2017, GLSVLSI '17, New York, NY, USA, pp. 363–368, Association for Computing Machinery, 2017.