Defending Against Attacks on Main Memory Persistence∗
William Enck, Kevin Butler, Thomas Richardson, Patrick McDaniel, and Adam Smith
Systems and Internet Infrastructure Security (SIIS) Laboratory,
Department of Computer Science and Engineering, The Pennsylvania State University
Abstract

Main memory contains transient information for all res-
ident applications. However, if memory chip contents sur-
vive power-off, e.g., via freezing DRAM chips, sensitive
data such as passwords and keys can be extracted. Main
memory persistence will soon be the norm as recent ad-
vancements in MRAM and FeRAM position non-volatile
memory technologies for widespread deployment in lap-
top, desktop, and embedded system main memory. Unfor-
tunately, the same properties that provide energy efficiency,
tolerance against power failure, and “instant-on” power-
up also subject systems to offline memory scanning. In
this paper, we propose a Memory Encryption Control Unit
(MECU) that provides memory confidentiality during sys-
tem suspend and across reboots. The MECU encrypts all
memory transfers between the processor-local level 2 cache
and main memory to ensure plaintext data is never writ-
ten to the persistent medium. The MECU design is out-
lined and performance and security trade-offs considered.
We evaluate a MECU-enhanced architecture using the Sim-
pleScalar hardware simulation framework on several hard-
ware benchmarks. This analysis shows the majority of mem-
ory accesses are delayed by less than 1 ns, with higher ac-
cess latencies (caused by resume state reconstruction) sub-
siding within 0.25 seconds of a system resume. In effect,
the MECU provides zero-cost steady state memory confi-
dentiality for non-volatile main memory.
1 Introduction

Main memory containing sensitive information persists
for indefinite periods during system uptime. Recently,
Halderman et al.  demonstrated how to extend main
memory persistence by “freezing” DRAM chips to main-
tain memory cell state after the system is powered off, al-
lowing an adversary to retrieve any passwords or crypto-
graphic keys that were not overwritten before system shut-
down. While this attack provides an effective vector for key
retrieval, the adversary must have physical access before the
system is shut down. This precondition becomes unneces-
sary as new non-volatile memory technologies emerge.
∗This material is based upon work supported by the National Science
Foundation under Grant Nos. CCF-0621429, CNS-0627551, and CNS-
0643907. Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the author(s) and do not necessarily
reflect the views of the National Science Foundation.
Non-volatile memories such as MRAM (magnetic
RAM) and FeRAM (ferro-electric RAM)  provide en-
ergy efficiency, tolerance of power failure, and “instant-
on” power-up. These technologies are reaching maturity
and manufacturers are already selling chips with up to 4-
Mbit of storage [11, 12] to replace battery-backed SRAM
in embedded systems. Recent advancements in speed 
and capacity  make these technologies appropriate for
main memory in laptops, desktops, and embedded systems.
Because systems that use non-volatile main memory retain
all state across reboots and suspends, users need not en-
dure long boot cycles or memory restoration from slow sec-
ondary storage during resumption.
The characteristics of non-volatile main memory
(NVMM) that provide these advantages also introduce new
vulnerabilities–sensitive data can be extracted or modified
by an adversary who gains access to the memory while the
computer is not turned on or after reboot. Unlike the attack
described by Halderman et al., no freezing is required, and
the memory chips can be retrieved at any time. This work
develops techniques to protect the contents of main mem-
ory while retaining the advantages of non-volatile memory.
Note that these techniques are also effective against frozen
volatile memory chips.
The remainder of this paper is structured as follows. Sec-
tion 2 discusses related work. Section 3 defines the problem
and threat model. Section 4 describes our solution. Sec-
tion 5 evaluates the performance impact of the MECU us-
ing SimpleScalar. Section 6 considers a number of practical
issues in the use of the MECU and its application to next
generation processors. Section 7 concludes.
2 Secure Memory Systems
Operating systems and applications assume memory
does not survive across reboots. However, sensitive data
such as passwords and cryptographic keys commonly re-
side in main memory. If this data is written to mag-
netic media (e.g., via swap operations), it may persist even
longer. Therefore, best practice recommends ensuring
memory plaintext never reaches disk. While data can be se-
curely deallocated and crash reports can be cleansed,
encrypted swap is still required for reused data.
The introduction of NVMM invalidates a basic assump-
tion upon which operating system and application security
is based. Therefore, it is imperative that the underlying
architecture transparently preserve the security guarantees
upon which the systems were built, i.e., mechanisms must
be implemented within the hardware and BIOS. Our ap-
proach is unique in that it considers full memory encryption
without OS interaction and provides optimizations specific
to systems with NVMM. Many previous memory encryp-
tion architectures [8,17–19,22,31] were designed for a ver-
tical set of applications, e.g., Digital Rights Management
(DRM) and tamperproof computation for grid processing.
As such, only the memory segments of “protected” applica-
tions are encrypted. This DRM model has two significant
disadvantages: it often requires changes to the processor
instruction set, operating system, and/or applications, and
significant performance degradation results from processor
stalls necessary for protection against online attacks. A sim-
ilar side effect exists in architectures providing protection
against bus sniffing. Securing NVMM need not nec-
essarily require protection against online attacks, therefore
the associated performance penalty is avoidable.
While many previous systems do not directly provide
full memory encryption appropriate for efficiently protect-
ing systems with NVMM, lessons can be learned from their
evolution. Execute Only Memory (XOM) , an early ar-
chitecture designed to protect DRM applications, encrypted
data directly, resulting in significant performance degrada-
tion. Suh et al.  improved performance by applying a
variant of counter mode encryption to generate one time
pads in parallel with memory lookups. However, in order
to protect against online attacks, the secure processor must
maintain a counter for each cache line, i.e., one per 64
bytes of memory (for systems with 64-byte cache lines).
The counters must be stored within the secure processor to
avoid the overhead of performing two memory accesses per
cache miss. As these storage requirements are often im-
practical, subsequent architectures minimized on-chip stor-
age using caches  and prediction algorithms [25, 28].
Unfortunately, these techniques still result in a significant
memory bottleneck throughout system run time. Further,
storing counters in memory is insecure; therefore, Yan et
al.  ensure counter integrity using hash trees similar to
architectures designed by Suh et al. [13,31]. In addition to
ensuring counter integrity, Yan et al. also split the counter
into major and minor portions, thereby further decreasing
storage size. While their architecture provides improved
performance, the overhead due to processor stalls is con-
stant throughout the system operation. Additionally, an ar-
chitecture designed to protect the entire main memory must
be careful when storing counters to memory, otherwise the
counters may become inaccessible.
These preceding approaches fail to preserve the secu-
rity guarantees that modern operating systems will place
on NVMM. These operating systems require that the mem-
ory architecture defend against offline physical attacks and
avoid run-time processor stalls–a unique combination of
features and performance that no memory system has pre-
viously achieved. Furthermore, the architecture must sup-
port all legacy software and hardware interfaces, including
DMA and multiprocessors [24,29], and do so within a mod-
est component footprint. We explore how these features
are simultaneously achieved within our MECU-enhanced
architecture in the following sections.
3 Non-Volatile Main Memory
Consider a commodity desktop machine with power
management capabilities. During normal operation, the sys-
tem is active, i.e., usable for processing data, performing
reads and writes from memory, etc. When the system is
not in use, it can move into a state of low power consump-
tion, either automatically or through user invoked suspen-
sion. There are two different suspend modes: powered sus-
pend and unpowered suspend (commonly known as hiber-
nate). When a volatile memory system enters powered sus-
pend mode, power-intensive components (e.g., displays and
disk drives) are turned off, while reduced power is applied
to others (e.g., main memory). Importantly, memory con-
tents persist while in the low power state. When a sys-
tem with volatile memory is placed into hibernate mode,
main memory is transferred to secondary storage (e.g., disk)
and power cut off, effectively zeroing the physical memory.
When the system is resumed, the memory is restored from
secondary storage. Conversely, architectures with NVMM
need not provide any facilities to retain memory state, as
contents persist inherently (even across system reboots).
Two attack vectors are enabled by the introduction of
NVMM into current architectures—an online attack where
a booted operating system accesses a previously booted op-
erating system’s memory, and an offline attack where the
physical memory is probed by an adversary while the sys-
tem is powered off, e.g., through regular read-out ports or
via more sophisticated techniques such as optical probing of
the memory with a laser and electromagnetic analysis.
We do not seek to protect main memory in normal oper-
ation, as solutions already exist. Additionally, we do
not consider hibernation, as solutions such as encrypt-on-
hibernate already address it.
For clarity in distinguishing between a reboot and suspend,
we introduce the concept of an OS instance. We assume that
the system has the ability to suspend operations as it tran-
sitions into suspend mode and to subsequently resume its
previous state. The system thus has the same OS instance
before the system is suspended and after it resumes, but a
different instance after it reboots.
An OS may reboot systematically or abruptly. In an on-
line attack, the new OS instance attempts to access the pre-
vious instance’s memory. Traditional OS security models
assume that volatile main memory does not survive a reboot
(while undesirable, this is not always the case [7,15]). We
require that this characteristic also hold for non-volatile sys-
tems. The potential for abrupt power loss mandates that the
system always remain in a protected state: any attempt to
provide protection solely at suspend or shutdown could be
trivially circumvented by an adversary who cuts power be-
fore the security mechanisms are applied.
The vulnerabilities introduced by the use of NVMM lead
to the following informal design goals for the MECU. First,
a MECU-enhanced system must be resilient to physical at-
tacks on suspended memory. In particular, we principally
desire to protect confidentiality of the memory: an adver-
sary must not be able to derive the content of main memory
when the system is suspended or powered off. We defer is-
sues of integrity in the initial MECU design and analysis,
but sketch possible solutions in Section 6. Second, no op-
erating system state should be retrievable after shutdown or
reboot. Third, protections must be maintained without sup-
port from, or trust in, the operating systems running on the
host. Fourth, the protections must require little change to
the hardware architecture and operate virtually invisibly to
the rest of the system architecture. We term these latter two
goals transparency. Finally, the solution must induce little
overhead on memory accesses. Note that this work is re-
stricted to the security of main memory only. Systems that
use the MECU may still leak data elsewhere; for ex-
ample, past systems have shown that virtual memory paged
to disk can expose a significant amount of sensitive infor-
mation. Other system artifacts such as network traffic
can similarly expose information. Such vulnerabilities
are outside the scope of the current work.
4 MECU Design
The threat model outlined above requires that data can
never be present in main memory in the clear: an adver-
sary able to abruptly cut power could thereafter read any
plaintext data present in memory. Therefore, we adopt an
approach in which data is encrypted when written by the
processor to main memory and decrypted when read, i.e.,
all memory operations are mediated by the MECU. This
has the advantage of transparency, where no changes to ei-
ther the processor or memory organization are necessary to
achieve the desired security. To be more precise, we intro-
duce a MECU on the memory bus between the processor-
local layer 2 memory cache and NVMM. Figure 1 depicts
the MECU’s placement in the architecture.
Figure 1. A MECU-enhanced architecture
The central design challenge of this approach is to en-
sure that the mediation of memory operations is both secure
and efficient. A naive implementation of processor writes
through the MECU would encrypt data directly using a suit-
able encryption algorithm (e.g., DES, AES), then write the re-
sulting ciphertext to memory. Read operations reverse this
operation, decrypting the ciphertext before use. Because
writes and reads would be delayed by cryptographic opera-
tions, unacceptable delays would be introduced. These de-
lays would ultimately lead to processor stalls (idle periods
where the processor waits for memory operations to complete).
Encryption overhead can be mitigated by creating pads
that are XORed with plaintext to perform encryption. Here,
the creation of the pad is the computationally expensive
(and potentially offline) operation. As illustrated in Fig-
ure 1, we apply this approach in the MECU, where we
mask the pad computation costs in read operations by par-
allelizing the memory fetch operation and pad creation.
The MECU computes the pad while the cache line is be-
ing fetched from memory.1 Because the pad can be created
faster than the fetch delay, the pad is ready when the data
arrives on the bus from main memory. Therefore, the ob-
servable overhead for each memory access is only the one
or two gate delays (depending on the fabrication technol-
ogy) needed to XOR the data with the pad.
Memory writes are similar to reads. The MECU gener-
ates and applies the pad for each memory write as described
above. In this case there is no latency to mask the pad cre-
ation overhead. The MECU uses a write buffer to mask both
the encryption and memory delay, similar to methods of re-
ducing write latencies in write-through caches. Instead of
waiting for the pad creation operation to complete before
writing, the data is written into a MECU internal buffer.
When the pad is created (some cycles later), the data is
XORed and written to main memory.
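The read and write paths just described can be sketched in software. This is a minimal sketch, not the MECU implementation: the class and function names are hypothetical, a SHA-512 hash stands in for the hardware pad generator, and a Python dict stands in for NVMM.

```python
import hashlib

LINE_SIZE = 64  # bytes per cache line, as assumed throughout the paper

def prf_pad(key: bytes, addr: int) -> bytes:
    # Illustrative pad generator; the real design also mixes in a per-block
    # state counter (Section 4.1) and runs in hardware.
    return hashlib.sha512(key + addr.to_bytes(8, "big")).digest()  # 64 bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class MECU:
    def __init__(self, key: bytes):
        self.key = key
        self.nvmm = {}          # address -> ciphertext cache line
        self.write_buffer = []  # pending (addr, plaintext) writes

    def write(self, addr: int, line: bytes):
        # Plaintext enters an internal buffer; the XOR with the pad happens
        # some cycles later, masking pad-creation latency from the CPU.
        self.write_buffer.append((addr, line))

    def drain(self):
        # Pads are now ready: encrypt and commit buffered writes to NVMM.
        for addr, line in self.write_buffer:
            self.nvmm[addr] = xor(line, prf_pad(self.key, addr))
        self.write_buffer.clear()

    def read(self, addr: int) -> bytes:
        # In hardware the pad is computed while the line is fetched, so the
        # only observable delay is the final XOR.
        return xor(self.nvmm[addr], prf_pad(self.key, addr))
```

Only ciphertext ever reaches the `nvmm` dict; decryption is the same XOR applied a second time.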
While this high-level approach of XORing pads gener-
ated in parallel to memory access exists in previous sys-
tems [28, 31, 33], pad generation techniques vary. Se-
curely and efficiently storing and accessing seed informa-
tion presents a nontrivial architectural challenge, and previ-
ous systems incur significant performance or storage penal-
ties. As described below, characteristics specific to NVMM
systems allow optimizations to provide memory confiden-
tiality with nominal overhead.
1For simplicity, we refer to cache-line-sized blocks in main memory as
a cache line. We assume a 64-byte cache line size in all discussions and
experiments below, but all results remain valid for any cache line size.
4.1 Pad Generation
As mentioned above, we assume that the adversary can
read the entire contents of main memory (and the memory
on the MECU) each time the computer is suspended. Let
M_t denote the unencrypted, logical contents of the mem-
ory at time t, and let C_t be the real contents seen by the
adversary who can access the raw NVMM (C_t consists
of one ciphertext for each cache-line-sized block of main
memory, plus an array of “state counters” stored on the
MECU – see below). The standard notion of confidential-
ity in cryptography is semantic security: namely, for any
two sequences of logical memory contents M_0, M_1, M_2, ...
and M'_0, M'_1, M'_2, ..., the adversary should not be able to
tell which of the two sequences of plaintexts was actually
encrypted. The scheme described here achieves a slight
variant: as cache lines are only re-encrypted upon re-write,
the adversary may learn that certain portions of the memory
were not written to during a particular resume cycle.
We first outline the scheme, then discuss the notion of
security and implementation. The main components are:
(a) A master key k. This key is refreshed (i.e., generated
at random) when the system is rebooted.
(b) A state counter s (16 bits will typically suffice). This
counter is reset to 0 on reboot, and incremented by 1
on each resume. (Thus, it counts the number of resume
cycles since the last reboot).
(c) An array of 16-bit timestamps, one per memory block
(for now, think of blocks as cache lines). The entry s_a
records the value held by the state counter the last time
block a of the memory was written to (i.e., s_a is the
number of the last resume cycle during which the pro-
cessor wrote to block a). To be clear: timestamps do
not record physical time, only the state counter value.
The master key (a) and state counter (b) are stored on a
removable device such as a smart card (see below). This
device is assumed to be removed on suspend.
The pads used for encryption are created by applying
a pseudorandom function F_k(·) to the pair (a, s_a). Intu-
itively, this ensures that each pad is indistinguishable from a
uniformly random string, even given all the other pads used
in the system (even on different suspends). It also ensures
that a given pad is never used to encrypt different messages
on different suspends.
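The lifecycle of the master key and state counters can be sketched as follows. The names here are hypothetical, and HMAC-SHA512 stands in for the pseudorandom function F_k of the actual design.

```python
import hashlib
import hmac
import os

def pad(k: bytes, a: int, s: int) -> bytes:
    # F_k(a, s_a): HMAC-SHA512 as a stand-in PRF yielding a 64-byte pad.
    msg = a.to_bytes(8, "big") + s.to_bytes(2, "big")
    return hmac.new(k, msg, hashlib.sha512).digest()

class MECUState:
    def __init__(self):
        self.reboot()

    def reboot(self):
        self.k = os.urandom(16)  # master key refreshed at random on reboot
        self.s = 0               # 16-bit state counter reset to 0
        self.s_a = {}            # per-block timestamps (last-write cycle)

    def resume(self):
        self.s += 1              # incremented once per resume cycle

    def write_block(self, a: int) -> bytes:
        # A write re-encrypts block a under the current state counter.
        self.s_a[a] = self.s
        return pad(self.k, a, self.s_a[a])
```

A block rewritten in a later resume cycle gets a fresh pad, so no pad ever encrypts two different messages.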
Blocks are only re-encrypted when written to by the pro-
cessor. Therefore, an adversary who observes the memory
on successive suspends can learn whether a particular
block was overwritten. To specify the security properties
Figure 2. A simple pseudorandom function
F_k(a, s) implemented using AES: four paral-
lel AES_k calls on inputs (a, s, i, 0...0), for a
two-bit index i, produce a 512-bit memory
block pad.
more precisely, we define the write footprint of a particu-
lar sequence of resume/suspend cycles to be a sequence of
sets of memory blocks S_0, S_1, S_2, ..., where S_i is the set of
memory blocks written to during the ith resume cycle. In
our scheme, a passive adversary learns the write sequence
but nothing else. More specifically, the scheme maintains
completeness: assuming that the adversary is passive and
does not modify any information stored in the MECU or
main memory, the system behaves as expected. It also main-
tains security, as described below.
Let A be a passive adversary who observes the main
memory and MECU contents on every suspend cycle (over
multiple reboots), and does not have direct access to the key
k. Consider choosing at random between two runs of the
system with identical write footprints, and giving A access
to one of the two runs. The probability that the adversary
can guess which of the two runs she is observing is at most
1/2 + O(ε), where ε is the advantage a related adversary A'
would have at distinguishing F_k(·) from a random function.
The adversary A' simply simulates the system (that is, en-
cryption plus the adversary A), making appropriate queries
to F_k. The running time of A' is the running time of the OS
plus that of A. Hence, if the pseudorandom function fam-
ily {F_k} is secure against a polynomially-bounded adver-
sary, then the MECU prevents leakage of any information
beyond the write footprint of a particular run.
As described above, any secure pseudorandom function
F from a large enough input space (enough to contain the
address of a memory block and a state counter) to a large
enough output space (the size of a memory block) will suf-
fice for generating pads. The main efficiency requirement is
that the PRF be fast enough for the pad to be generated within
the difference between the round-trip time from the MECU
to the main memory and the time necessary to fetch the
timestamp s_a from the MECU's memory. This ensures the
pad will be ready before main memory responds and mini-
mizes the delay observed by the CPU.
A particular PRF, based on AES, is described in Fig-
ure 2. To evaluate F_k(a, s), one calls AES with key k
on several inputs constructed from (a,s) by appending ex-
tra digits. For example, in an architecture with 64-byte
memory blocks as in Figure 2, F makes 4 parallel calls
to AES. If ε_PRP(q) is the probability that the adversary A
can distinguish AES_k from a truly random family of per-
mutations using q queries, then the probability that A can
distinguish AES_k from a random family of functions is at
most ε_PRP(q) + (q choose 2) · 2^-128 (the extra term is due to the birth-
day paradox). This implementation is convenient since fast
hardware implementations of AES exist and their timings
are well-studied. To understand the speed disparity be-
tween memory access times and in-hardware AES, consider
the high-speed Rambus DRAM (RDRAM). Access time re-
quirements for a 64-byte cache line are 131.25 ns, based on
a 3.75 ns clock cycle for the memory bus . Given a low-
end desktop machine with a 1 GHz processor, an AES en-
cryption will require 44 cycles, or 44 ns, far below the mem-
ory access speed [28, 31, 33]. That said, our simulations
indicate that the PRF evaluation is not an efficiency bottle-
neck in our proposed MECU architecture, so it is likely that
other PRF implementations would work equally well.
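A software sketch of this construction follows. Since the paper assumes a hardware AES core, a truncated SHA-256 hash stands in here for the 128-bit AES_k block calls, and the zero padding of Figure 2 is abbreviated; `block_prf` and `Fk` are illustrative names.

```python
import hashlib

def block_prf(k: bytes, msg: bytes) -> bytes:
    # 128-bit stand-in for one AES_k invocation.
    return hashlib.sha256(k + msg).digest()[:16]

def Fk(k: bytes, a: int, s: int) -> bytes:
    # Four parallel 128-bit calls on (a, s, i), with i a 2-bit index,
    # concatenated into one 512-bit (64-byte) memory block pad.
    parts = [
        block_prf(k, a.to_bytes(8, "big") + s.to_bytes(2, "big") + bytes([i]))
        for i in range(4)
    ]
    return b"".join(parts)
```

Distinct (address, state counter) pairs yield independent-looking pads, which is what prevents pad reuse across suspends.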
As mentioned above, we propose using a 16-bit state
counter to prevent pads being reused to encrypt different
data. The state counter size has an effect on the maximum
uptime for an OS instance. For example, a two-byte state
counter supports up to 65,536 separate suspend/resume cy-
cles before an OS reboot must be forced. In this case, the
OS could be suspended and resumed an average of 179
times a day, or 7.5 times an hour, every hour for a year be-
fore requiring an OS reboot. For all practical purposes, this
is an infinite number of suspensions,2 and thus the counters
are at least as large as needed for these systems. Smaller
state counters may be more desirable, but we defer consid-
eration to future work. We adopt conservatively large 16-bit
state counters in all experiments discussed below.
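The arithmetic behind these figures checks out directly:

```python
cycles = 2 ** 16               # suspend/resume cycles a 16-bit counter allows
per_day = cycles / 365         # cycles available per day over a full year
per_hour = per_day / 24        # cycles available per hour, every hour
print(cycles, int(per_day), round(per_hour, 1))  # 65536 179 7.5
```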
The security of the encryption scheme relies on keep-
ing k secret. Storing k on the MECU is problematic since
we assume the adversary has access to the MECU contents
during suspends. After considering several alternatives, we
decided to place k on a removable smart card (or similar re-
movable storage directly connected to and controlled by the
MECU firmware). This ensures that the system is resilient
to an offline physical attack as long as (i) the smart card is
removed during suspends and (ii) the circuitry which uses
k when the system is live retains no memory once power has
been suspended. We consider the practical use and implica-
tions of the smart card in later sections of this paper.
For many architectures, the burden of providing a state
counter for every cache line may be unmanageable. To
illustrate, a system with 4 gigabytes of RAM re-
quires 128 megabytes of non-volatile state counters inter-
nal to the MECU—a considerable design and manufactur-
ing challenge. These costs can be mitigated by sharing a
state counter between multiple cache lines. Here the MECU
organizes contiguous cache lines into memory blocks shar-
ing a single state counter. Figure 3 juxtaposes the individual
and shared counter strategies. The cost savings can be sub-
stantial: in the above example, sharing a counter among 64
lines drops the requirements from 128 MB to 2 MB.
2The probability any current system survives 65,000 suspends without
software or hardware failure requiring an OS reboot is very, very low.
Figure 3. Optimizing storage via shared state
counters (in-MECU storage shown in gray):
(a) per-cache-line state counters; (b) shared
cache line state counters. Each 16-bit counter
covers one or more 64-byte cache lines.
Shared state counters require further changes to the
MECU design and operation. When a counter is updated
for one cache line, all other cache lines within that block
must be encrypted with a pad based on the new state counter
value. Hence, the MECU must retrieve, re-encrypt, and
write back to main memory each associated cache line fol-
lowing a counter update. Fortunately, because counters are
only updated after the system is resumed, each memory
block must only be re-encrypted once per system resume.
Shared state counters exhibit subtle trade-offs between
performance and storage costs. In the degenerate case, all
of physical memory would map to a single state counter. In
this case, as soon as one cache line is written to memory af-
ter a resume, all of memory must be re-encrypted. However,
this is a one-time cost (per resume). By grouping cache
lines into smaller blocks, we allow for lazy re-encryption,
wherein only the cache lines spatially close to accessed
memory must be re-encrypted. As the breadth of memory
access increases, more blocks will be re-encrypted, effec-
tively diffusing the one-time cost into a series of smaller
costs. Section 5 empirically explores these trade-offs.
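The re-encryption accounting for shared counters can be modeled with a toy sketch (hypothetical names; a real MECU would fetch, re-pad, and write back each sibling line rather than merely count them):

```python
class SharedCounters:
    """Toy model of shared state counters, for illustration only."""

    def __init__(self, lines_per_block: int):
        self.lines_per_block = lines_per_block
        self.counter = {}      # block index -> state counter at last write
        self.s = 0             # global state counter, bumped on each resume
        self.reencrypted = 0   # total cache lines re-encrypted so far

    def resume(self):
        self.s += 1

    def write_line(self, line: int):
        block = line // self.lines_per_block
        if self.counter.get(block) != self.s:
            # First write into this block since the last resume: advance the
            # shared counter and re-encrypt all sibling lines under new pads.
            self.counter[block] = self.s
            self.reencrypted += self.lines_per_block
```

With 64 lines per block, 70 writes after a resume touch two blocks and cost 128 line re-encryptions, a one-time cost per resume rather than a per-write cost.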
The optimized state counter storage can thus be com-
puted using the following equation:
Size = (S_mem / (S_line · N_lines)) · log2(N_state) bits,
where S_mem is the size of the byte-addressable physical ad-
dress space (2^32 for a 32-bit processor)3, S_line is the size
of a cache line in bytes (typically 64), N_state is the num-
ber of states supported, and N_lines is the number of cache
lines in a memory block. Modifying the number of states
only logarithmically affects in-MECU storage, while an in-
verse linear relationship exists between the number of cache
lines per memory block and storage size. Thus, storage re-
quirements are better decreased by increasing the number
of cache lines per memory block, rather than reducing the
number of states.
3Note that this only needs to be equivalent to the maximum memory
size supported by the system (e.g., an embedded system may be designed
to support 256 MB of physical memory, so an S_mem of 2^28 is sufficient).
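Plugging the running example into the equation above reproduces the 128 MB and 2 MB figures (`counter_storage_bytes` is an illustrative helper, not part of the design):

```python
import math

def counter_storage_bytes(s_mem, s_line, n_state, n_lines):
    # Size = S_mem / (S_line * N_lines) * log2(N_state), in bits;
    # divide by 8 to report bytes of in-MECU counter storage.
    bits = s_mem / (s_line * n_lines) * math.log2(n_state)
    return bits / 8

per_line = counter_storage_bytes(2**32, 64, 2**16, 1)   # one counter per line
shared = counter_storage_bytes(2**32, 64, 2**16, 64)    # 64 lines per block
print(int(per_line) // 2**20, int(shared) // 2**20)     # 128 2
```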