Performance Assessment of an All-RRAM Solid State Drive
through a Cloud-Based Simulation Framework
Lorenzo Zuolo†, Michele Cirella†, Cristian Zambelli†, *Rino Micheloni‡, Stephen Bates‡ and Piero Olivo†
†Dipartimento di Ingegneria – Università di Ferrara (Italy)
‡Microsemi
lorenzo.zuolo@unife.it
Solid State Drives (SSDs) are now the most effective solution for mass storage applications:
•High robustness: no mechanical parts
•Low power consumption: power wall at 25 W
•Good reliability: two years with ten Disk Fills Per Day
Performance? Latency?
An SSD’s performance and latency are tightly coupled with those of the underlying storage medium: NAND flash memories.
NAND FLASH MEMORIES ARE TOO SLOW:
•milliseconds to program a page
•hundreds of microseconds to read a page
•tens of milliseconds to erase a block
Using RRAM memories as the main storage medium of SSDs boosts performance but poses new challenges; therefore, a thorough design space exploration is required.
RRAM memories seem to be a good candidate for NAND replacement.
SSD architecture (block diagram): Host IF, DRAM, CTRL, and RRAM channels.
“All-RRAM” SSDs allow:
•High performance
•High reliability
•Low power
RRAMs ARE EXTREMELY FAST
The RRAM chips used in this work were designed to connect to the standard ONFI buses normally used in SSDs.
Chip Parameters             Configuration
IO-Bus interface            ONFI
IO-Bus speed                200 MT/s
Native page size (1T-nR)    256 Bytes
Emulated page size          512 / 1024 / 4096 Bytes
TREAD per page              1 µs
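As a rough sanity check of these figures (a sketch, not a poster result; it assumes an 8-bit ONFI data bus, so 200 MT/s ≈ 200 MB/s, and ignores controller, ECC and protocol overhead), the per-page read cost is the array TREAD plus the bus transfer time:

# Rough per-page read latency estimate from the chip parameters above.
# Assumptions (not from the poster): 8-bit ONFI data bus (200 MT/s ~= 200 MB/s),
# no controller, ECC, or protocol overhead.
T_READ_US = 1.0        # RRAM array read time per page (us)
BUS_MB_S = 200.0       # ONFI bus throughput (MB/s)

def page_read_latency_us(page_bytes: int) -> float:
    """Array read time plus the time to move the page over the ONFI bus."""
    transfer_us = page_bytes / BUS_MB_S   # bytes / (MB/s) is numerically in us
    return T_READ_US + transfer_us

for size in (256, 512, 1024, 4096):
    print(f"{size:4d} B page -> ~{page_read_latency_us(size):.2f} us")
# 256 B -> ~2.28 us, 4096 B -> ~21.48 us: the bus transfer dominates at large pages.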
We used a cloud-based simulation framework to explore the behavior of the target All-RRAM SSD across more than 100 different working conditions.
SSD Parameters              Configuration
Capacity                    512 GBytes
Channels/Targets            16/8
Host interface              PCI-E Gen2 x8
Protocol                    NVM Express
Logical Block Address size  256 / 512 / 1024 / 4096 Bytes
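For context, a back-of-the-envelope bandwidth budget for this configuration (a sketch assuming an 8-bit ONFI bus per channel and the standard 8b/10b encoding of PCIe Gen2; neither assumption is stated on the poster):

# Raw bandwidth budget of the simulated SSD configuration.
# Assumptions (not from the poster): 8-bit ONFI bus per channel,
# ideal utilization on both the memory channels and the host link.
CHANNELS = 16
ONFI_MB_S = 200.0              # 200 MT/s on an 8-bit bus ~= 200 MB/s per channel
PCIE_GEN2_LANE_MB_S = 500.0    # 5 GT/s with 8b/10b encoding ~= 500 MB/s per lane
PCIE_LANES = 8

channel_bw = CHANNELS * ONFI_MB_S            # 3200 MB/s aggregate memory bandwidth
host_bw = PCIE_LANES * PCIE_GEN2_LANE_MB_S   # 4000 MB/s raw host bandwidth

print(f"aggregate ONFI bandwidth: {channel_bw:.0f} MB/s")
print(f"PCIe Gen2 x8 bandwidth:   {host_bw:.0f} MB/s")
# In this raw budget the host link is not the bottleneck; the memory channels
# (and their per-page transfer time) limit the achievable throughput.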
Data were gathered as a function of both the RRAM page size and the host interface queue depth.
Figure: All-RRAM SSD’s bandwidth and latency.
Figure: All-RRAM SSD’s percentage of memory I/O bus usage and active RRAM dies.
Figure: All-RRAM SSD’s latency PDF and CDF for a queue depth of 1 and an RRAM page size of 256 Bytes.
Figure: All-RRAM SSD’s latency PDF and CDF for a queue depth of 32 and an RRAM page size of 4096 Bytes.
https://ssdexplorer.azurewebsites.net
Storage paradigm    NAND Flash 1X-MLC    1T-nR RRAM
TREAD               40 µs                1 µs
IO-Bus interface    ONFI                 ONFI
IO-Bus speed        200 MT/s             200 MT/s
Page size           4096 Bytes           4096 Bytes
Conclusions:
•All-RRAM SSDs show extremely low latency only when specific configurations are adopted (native page size and a queue depth of 1)
•In standard use cases (4-kByte pages), All-RRAM SSDs behave like standard NAND flash-based SSDs because of the high memory bus transfer time
•Reusing standard SSD controller architectures that were not designed for RRAM memories is a waste of resources
Take away #1: RRAM memories keep the promise of ultra-low latency only when the native 256-Byte page size is used and a queue depth of 1 is set.
… But applications running on real file systems are designed to issue 4-kByte-aligned transactions with a queue depth of 32 commands or even more…
In normal working conditions, will an All-RRAM SSD outperform a NAND flash-based SSD?
Working assumption: use the same SSD controller configuration and change only the storage paradigm.
Take away #2: the NAND flash-based SSD shows a higher latency variability than the All-RRAM SSD because of the higher TREAD of NAND flash memories and the MLC storage paradigm.
Take away #3: the higher transfer time due to the “emulated” 4-kByte RRAM page mode heavily impacts the All-RRAM SSD’s latency.
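The effect can be estimated from the comparison table above (a sketch assuming an 8-bit ONFI bus, so 200 MT/s ≈ 200 MB/s, and no controller overhead): the transfer time of a 4096-byte page dwarfs the RRAM TREAD.

# Why the All-RRAM SSD converges toward NAND-like latency at 4-kByte pages.
# Assumption (not from the poster): 8-bit ONFI bus, so 200 MT/s ~= 200 MB/s.
PAGE_BYTES = 4096
BUS_MB_S = 200.0
transfer_us = PAGE_BYTES / BUS_MB_S   # ~20.5 us on the shared channel

for name, t_read_us in (("NAND Flash 1X-MLC", 40.0), ("1T-nR RRAM", 1.0)):
    print(f"{name:17s}: TREAD {t_read_us:4.1f} us + transfer {transfer_us:4.1f} us "
          f"= ~{t_read_us + transfer_us:.1f} us per page")
# NAND ~60.5 us vs RRAM ~21.5 us: the same order of magnitude once the bus
# transfer is included, consistent with the final take away below.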
Final take away: when standard 4-kByte transactions have to be served (as in normal file systems), the simulated All-RRAM SSD shows latencies of the same order of magnitude as a NAND flash-based SSD.
*Work done at Università degli Studi di Ferrara