Conference Paper

Designing Floating Codes for Expected Performance

Authors:
H. Finucane, Z. Liu, and M. Mitzenmacher

Abstract

Floating codes are codes designed to store multiple values in a write-asymmetric memory, with applications to flash memory. In this model, a memory consists of a block of n cells, with each cell in one of q states {0, 1, ..., q − 1}. The cells are used to represent k variable values from an ℓ-ary alphabet. Cells can move from lower values to higher values easily, but moving any cell from a higher value to a lower value requires first resetting the entire block to an all-0 state. Reset operations are to be avoided; generally a block can only experience a large but finite number of resets before wearing out entirely. A code here corresponds to a mapping from cell states to variable values, together with a transition function that specifies how to rewrite cell states when a variable is changed. Previous work has focused on developing codes that maximize the worst-case number of variable changes, or equivalently cell rewrites, that can be experienced before resetting. In this paper, we introduce the problem of maximizing the expected number of variable changes before resetting, given an underlying Markov chain that models variable changes. We demonstrate that codes designed for expected performance can differ substantially from optimal worst-case codes, and suggest constructions for some simple cases.
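The optimization target described in the abstract can be made concrete with a small simulation. The sketch below is a hedged illustration, not a construction from the paper: it estimates, by Monte Carlo, the expected number of writes a deliberately naive code (one cell per binary variable, variable value = parity of the cell level) can absorb before a reset, under a biased writing process; all function names and parameters are invented for illustration.

import random

def expected_writes(n, q, step, next_var, trials=2000, seed=0):
    """Monte Carlo estimate of the expected number of variable changes a
    code can absorb before a block reset, under a given writing process.

    step(cells, i)    -> new cell levels after variable i changes, or None
                         if the change cannot be absorbed without a reset.
    next_var(prev, r) -> index of the next variable to change (a simple
                         Markov chain on the variables; prev is the last one).
    """
    rng, total = random.Random(seed), 0
    for _ in range(trials):
        cells, writes, prev = [0] * n, 0, None
        while True:
            i = next_var(prev, rng)
            updated = step(cells, i)
            if updated is None:          # some cell would exceed q - 1
                break
            cells, writes, prev = updated, writes + 1, i
        total += writes
    return total / trials

# Toy code (NOT the paper's construction): one cell per binary variable,
# variable i is the parity of cell i's level, so a change of variable i
# is absorbed by incrementing cell i.
def toy_step(q):
    def step(cells, i):
        if cells[i] == q - 1:
            return None
        return cells[:i] + [cells[i] + 1] + cells[i + 1:]
    return step

# Writing process: variable 0 changes 90% of the time, variable 1 otherwise.
def biased_writer(prev, rng):
    return 0 if rng.random() < 0.9 else 1

if __name__ == "__main__":
    q, n = 8, 2
    avg = expected_writes(n, q, toy_step(q), biased_writer)
    print(f"expected writes before reset: {avg:.2f}  "
          f"(out of n(q-1) = {n * (q - 1)} available transitions)")

Under the biased writer, this toy code resets after roughly q − 1 writes on average even though 2(q − 1) transitions are available, which is exactly the kind of gap the paper's expected-performance codes are designed to close.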

Citations

... However, as stated previously, flash cells today endure thousands of block erasures. It is quite unlikely that the most unfortunate scenario is repeated thousands of times, and the expected performance should have a strong and direct relation to the lifetime of mass-produced flash memory products [2], [7]. In Ref. [2], the expected performance is discussed in terms of the "cost" of moves in a certain Markov chain model, and a flash code with good expected performance was proposed. ...
... It is quite unlikely that the most unfortunate scenario is repeated thousands of times, and the expected performance should have a strong and direct relation to the lifetime of mass-produced flash memory products [2], [7]. In Ref. [2], the expected performance is discussed in terms of the "cost" of moves in a certain Markov chain model, and a flash code with good expected performance was proposed. The code construction is further improved in Ref. [7], but these two studies do not discuss ILIFC. ...
... The code construction is further improved in Ref. [7], but these two studies do not discuss ILIFC. Suzuki considered improving the expected performance of ILIFC, and applied the Markov chain formalization of Ref. [2] in their analysis [13]. The formalization helps estimate the expected performance of ILIFC with small parameters; however, the approach does not seem scalable, because one needs to construct and analyze a Markov model whose size is exponential in N. Kaji modeled the behavior of ILIFC as a multi-token cyclic random-walk model, and clarified the expected performance of ILIFC in a uniform writing scenario in which it is assumed that the K data bits are selected by write operations with equal probability [6]. ...
Article
A random-walk model is investigated and utilized to analyze the performance of a coding scheme that aims to extend the lifetime of flash memory. Flash memory is widely used in various products today, but the cells that constitute flash memory wear out as they experience many operations. This issue can be mitigated by employing a clever coding scheme known as a flash code. The purpose of this study is to establish a well-defined random-walk model of a flash code known as the index-less indexed flash code (ILIFC), and to clarify the expected performance of ILIFC. A preliminary study was made by the author for a simplified model of data operation, and the contribution of this study is to extend that model to a more general and practical one. Mathematical properties of the random-walk model are reconsidered, and useful properties are derived that help analyze the performance of ILIFC in both non-asymptotic and asymptotic scenarios.
... The cost per bit flip is averaged over time, based on an underlying probabilistic model, and the goal is to minimize this average cost. This formulation was introduced recently [5], and is presented in more detail in Chapter 4. ...
... Chapters 2 and 4 are original exposition of work that has been previously published. The results of Chapter 4 were developed in collaboration with Flavio Chierichetti, Zhenming Liu, and Michael Mitzenmacher [5]. ...
... In Section 4.5, we discuss the subset-flip model of floating codes, where any subset of the bits can flip in a given timestep. This chapter, with the exception of Proposition 4.5.1, is original exposition of work done in collaboration with Flavio Chierichetti, Zhenming Liu, and Michael Mitzenmacher which has been published [5]. Proposition 4.5.1 is a contribution of this thesis. ...
... Their data graphs D are generalized hypercubes and de Bruijn graphs, respectively. Multiple floating codes have been presented, including the code constructions in [13], [14], the flash codes in [16], [24], and the constructions based on Gray codes in [7]. The floating codes in [7] were optimized for the expected rewriting performance. ...
... Multiple floating codes have been presented, including the code constructions in [13], [14], the flash codes in [16], [24], and the constructions based on Gray codes in [7]. The floating codes in [7] were optimized for the expected rewriting performance. ...
... [Comparison table from the citing paper, flattened by text extraction; approximately: floating codes [23] and weakly robust codes [7] (data graph D a hypercube) address the case k = Θ(1), ℓ = 2; for WOM codes (D a complete graph), the citing paper obtains asymptotically optimal t(C) when n = Ω(log² ℓ); for more general coding (D with maximum out-degree Δ; for floating codes, Δ = k(ℓ − 1)), t(C) is asymptotically optimal when n = Ω(L), or when n = Ω(log² L) and Δ is suitably bounded, and when n = Ω(log² L) it is asymptotically optimal in the worst-case sense (worst case over all data graphs D); strongly robust codes are obtained when L² log L = o(qn).] ...
Conference Paper
Full-text available
A constrained memory is a storage device whose elements change their states under some constraints. A typical example is flash memories, in which cell levels are easy to increase but hard to decrease. In a general rewriting model, the stored data changes with some pattern determined by the application. In a constrained memory, an appropriate representation is needed for the stored data to enable efficient rewriting. In this paper, we define the general rewriting problem using a graph model. This model generalizes many known rewriting models such as floating codes, WOM codes, buffer codes, etc. We present a novel rewriting scheme for the flash-memory model and prove it is asymptotically optimal in a wide range of scenarios. We further study randomization and probability distributions to data rewriting and study the expected performance. We present a randomized code for all rewriting sequences and a deterministic code for rewriting following any i.i.d. distribution. Both codes are shown to be optimal asymptotically.
... This takes time, consumes energy, and reduces the lifetime of the memory. Therefore, it is important to design efficient rewriting schemes that maximize the number of rewrites between two erasures [7], [1], [2], [4]. The rewriting schemes increase some cell charge levels based on the current ...
... Two different objective functions for modulation codes are primarily considered in previous work: (i) maximizing the number of rewrites for the worst case [7], [1], [2] and (ii) maximizing for the average case [4]. As Finucane et al. [4] mentioned, the reason for considering average performance is the averaging effect caused by the large number of erasures during the lifetime of a flash memory device. ...
... Two different objective functions for modulation codes are primarily considered in previous work: (i) maximizing the number of rewrites for the worst case [7], [1], [2] and (ii) maximizing for the average case [4]. As Finucane et al. [4] mentioned, the reason for considering average performance is the averaging effect caused by the large number of erasures during the lifetime of a flash memory device. Our analysis shows that the worst-case objective and the average case objective are two extreme cases of our optimization objective. ...
Conference Paper
In this paper, we consider modulation codes for practical multilevel flash memory storage systems with q cell levels. Instead of maximizing the lifetime of the device we maximize the average amount of information stored per cell-level, which is defined as storage efficiency. Using this framework, we show that the worst-case criterion and the average-case criterion are two extreme cases of our objective function. A self-randomized modulation code is proposed which is asymptotically optimal, as q → ∞, for an arbitrary input alphabet and i.i.d. input distribution. In practical flash memory systems, the number of cell-levels q is only moderately large. So the asymptotic performance as q → ∞ may not tell the whole story. Using the tools from load-balancing theory, we analyze the storage efficiency of the self-randomized modulation code. The result shows that only a fraction of the cells are utilized when the number of cell-levels q is only moderately large. We also propose a load-balancing modulation code, based on a phenomenon known as "the power of two random choices", to improve the storage efficiency of practical systems. Theoretical analysis and simulation results show that our load-balancing modulation codes can provide significant gain to practical flash memory storage systems. Though pseudo-random, our approach achieves the same load-balancing performance, for i.i.d. inputs, as a purely random approach based on the power of two random choices.
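The "power of two random choices" phenomenon invoked above can be demonstrated directly. The sketch below is not the paper's modulation code; it only contrasts picking one random cell against picking the less-charged of two random cells, under the simplifying (toy) stopping rule that a write landing on a full cell forces an erase. All names and parameters are illustrative.

import random

def fill_until_overflow(n, q, choose, rng):
    """Increment cell levels one write at a time until the chosen cell is
    already at its maximum level q-1 (toy rule standing in for 'a write
    that cannot be absorbed'); return the number of writes absorbed."""
    levels = [0] * n
    writes = 0
    while True:
        i = choose(levels, rng)
        if levels[i] == q - 1:
            return writes
        levels[i] += 1
        writes += 1

def one_choice(levels, rng):
    return rng.randrange(len(levels))

def two_choices(levels, rng):
    i, j = rng.randrange(len(levels)), rng.randrange(len(levels))
    return i if levels[i] <= levels[j] else j

if __name__ == "__main__":
    n, q, trials = 64, 8, 500
    rng = random.Random(1)
    for name, choose in [("one random choice", one_choice),
                         ("two random choices", two_choices)]:
        avg = sum(fill_until_overflow(n, q, choose, rng)
                  for _ in range(trials)) / trials
        frac = avg / (n * (q - 1))
        print(f"{name}: {avg:.1f} writes before overflow "
              f"({frac:.2%} of the n(q-1) transitions used)")

Running this shows that the two-choice rule uses a far larger fraction of the available level transitions before the first overflow, which is the load-balancing effect the cited construction exploits.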
... Most flash codes proposed thus far are designed to optimize the worst-case performance, such as the write deficiency [2], [3], [4], [7]. On the other hand, flash codes that improve the average number of rewritable bits have recently been reported [5], [6], [8]. The average number of rewritable bits is the average number of allowable bit flips between consecutive erase operations. ...
... The average number of rewritable bits is the average number of allowable bit flips between consecutive erase operations. The flash codes based on the Gray codes proposed by Finucane, Liu, and Mitzenmacher [5] exhibit excellent average performance. They also presented a method for analyzing the average number of rewritable bits, which is based on a Markov chain model constructed from the state diagram of the code and a probabilistic model for the rewriting process [5]. ...
... The flash codes based on the Gray codes proposed by Finucane, Liu, and Mitzenmacher [5] exhibit excellent average performance. They also presented a method for analyzing the average number of rewritable bits, which is based on a Markov chain model constructed from the state diagram of the code and a probabilistic model for the rewriting process [5]. In the lifetime of a flash memory, it is expected that poor average performance will result in early collapse of the cells. ...
Article
Full-text available
In the present paper, a modification of the Index-less Indexed Flash Code (ILIFC) for flash memory storage systems is presented. Although the ILIFC proposed by Mahdavifar et al. has excellent worst-case performance, it can be further improved in terms of average-case performance. The proposed scheme, referred to as the layered ILIFC, is based on the ILIFC; however, the primary focus of the present study is the average-case performance. The main feature of the proposed scheme is the use of layer-based index coding to represent the indices of information bits. The layer index coding promotes uniform use of cell levels, which leads to better average-case performance. Experiments show that the proposed scheme achieves a larger average number of rewrites than the original ILIFC without loss of worst-case performance.
... How the stored data can change its value with each rewrite, which we call the rewriting model, depends on the data-storage application and the used data structure. Several more specific rewriting models have been studied in the past, including write-once memory (WOM) codes [4], [5], [7], [20], [23], [27], floating codes [6], [13], [15], [19], [33] and buffer codes [2], [32]. In WOM codes, with each rewrite, the data can change from any value to any other value. ...
... Their data graphs D are generalized hypercubes and de Bruijn graphs, respectively. Multiple floating codes have been presented, including the code constructions in [13], [15], the flash codes in [19], [33], and the constructions based on Gray codes in [6]. The floating codes in [6] were optimized for the expected rewriting performance. ...
... Multiple floating codes have been presented, including the code constructions in [13], [15], the flash codes in [19], [33], and the constructions based on Gray codes in [6]. The floating codes in [6] were optimized for the expected rewriting performance. The study of WOM codes – with new applications to flash memories – is also continued, with a number of improved code constructions [16], [28]–[31]. ...
Article
Full-text available
Flash memory is well-known for its inherent asymmetry: the flash-cell charge levels are easy to increase but are hard to decrease. In a general rewriting model, the stored data changes its value with certain patterns. The patterns of data updates are determined by the data structure and the application, and are independent of the constraints imposed by the storage medium. Thus, an appropriate coding scheme is needed so that the data changes can be updated and stored efficiently under the storage-medium's constraints. In this paper, we define the general rewriting problem using a graph model. It extends many known rewriting models such as floating codes, WOM codes, buffer codes, etc. We present a new rewriting scheme for flash memories, called the trajectory code, for rewriting the stored data as many times as possible without block erasures. We prove that the trajectory code is asymptotically optimal in a wide range of scenarios. We also present randomized rewriting codes optimized for expected performance (given arbitrary rewriting sequences). Our rewriting codes are shown to be asymptotically optimal.
... This takes time, consumes energy, and reduces the lifetime of the memory. Therefore, it is important to design efficient rewriting schemes that maximize the number of rewrites between two erasures [7], [1], [2], [4]. The rewriting schemes increase some cell charge levels based on the current ...
... In this paper, we call a rewriting scheme a modulation code. Two different objective functions for modulation codes are primarily considered in previous work: (i) maximizing the number of rewrites for the worst case [7], [1], [2] and (ii) maximizing for the average case [4]. As Finucane et al. [4] mentioned, the reason for considering average performance is the averaging effect caused by the large number of erasures during the lifetime of a flash memory device. ...
... Two different objective functions for modulation codes are primarily considered in previous work: (i) maximizing the number of rewrites for the worst case [7], [1], [2] and (ii) maximizing for the average case [4]. As Finucane et al. [4] mentioned, the reason for considering average performance is the averaging effect caused by the large number of erasures during the lifetime of a flash memory device. Our analysis shows that the worst-case objective and the average-case objective are two extreme cases of our optimization objective. ...
Article
In this paper, we consider modulation codes for practical multilevel flash memory storage systems with q cell levels. Instead of maximizing the lifetime of the device [Ajiang-isit07-01, Ajiang-isit07-02, Yaakobi_verdy_siegel_wolf_allerton08, Finucane_Liu_Mitzenmacher_aller08], we maximize the average amount of information stored per cell-level, which is defined as storage efficiency. Using this framework, we show that the worst-case criterion [Ajiang-isit07-01, Ajiang-isit07-02, Yaakobi_verdy_siegel_wolf_allerton08] and the average-case criterion [Finucane_Liu_Mitzenmacher_aller08] are two extreme cases of our objective function. A self-randomized modulation code is proposed which is asymptotically optimal, as q → ∞, for an arbitrary input alphabet and i.i.d. input distribution. In practical flash memory systems, the number of cell levels q is only moderately large, so the asymptotic performance as q → ∞ may not tell the whole story. Using the tools from load-balancing theory, we analyze the storage efficiency of the self-randomized modulation code. The result shows that only a fraction of the cells are utilized when the number of cell levels q is only moderately large. We also propose a load-balancing modulation code, based on a phenomenon known as "the power of two random choices" [Mitzenmacher96thepower], to improve the storage efficiency of practical systems. Theoretical analysis and simulation results show that our load-balancing modulation codes can provide significant gain to practical flash memory storage systems. Though pseudo-random, our approach achieves the same load-balancing performance, for i.i.d. inputs, as a purely random approach based on the power of two random choices.
... The focus is on the fundamental tradeoff between rewriting and storage capacities when both memories and data change in constrained ways. After their introduction in [3], [14], the study of floating and buffer codes has been continued in [8], [15], [17], [22], [29], [30], among others. In [8], [17], the design of floating codes with good expected performance was studied. ...
... The focus is on the fundamental tradeoff between rewriting and storage capacities when both memories and data change in constrained ways. After their introduction in [3], [14], the study of floating and buffer codes has been continued in [8], [15], [17], [22], [29], [30], among others. In [8], [17], the design of floating codes with good expected performance was studied. In [17], the rewriting model for data was further generalized using directed graphs of bounded degrees. ...
Article
Full-text available
Memories whose storage cells transit irreversibly between states have been common since the start of data storage technology. In recent years, flash memories have become a very important family of such memories. A flash memory cell has q states (state 0, 1, ..., q−1) and can only transit from a lower state to a higher state before the expensive erasure operation takes place. We study rewriting codes that enable the data stored in a group of cells to be rewritten by only shifting the cells to higher states. Since the considered state transitions are irreversible, the number of rewrites is bounded. Our objective is to maximize the number of times the data can be rewritten. We focus on the joint storage of data in flash memories, and study two rewriting codes for two different scenarios. The first code, called floating code, is for the joint storage of multiple variables, where every rewrite changes one variable. The second code, called buffer code, is for remembering the most recent data in a data stream. Many of the codes presented here are either optimal or asymptotically optimal. We also present bounds on the performance of general codes. The results show that rewriting codes can integrate a flash memory's rewriting capabilities for different variables to a high degree.
... We note that a number of papers on coding for flash memories have recently appeared in the literature. These include codes for efficient rewriting [3], [7], [10], [15] (also known as floating codes or flash codes), error-correcting codes [4], and rank-modulation codes for reliable cell programming [11], [13]. However, to the best of our knowledge, this paper is the first to address storage coding at the page level instead of the cell level. ...
... Although minimizing erasures for every instance is NP-hard, both algorithms that use coding achieve an approximation ratio of two with respect to an optimal solution that minimizes the number of block erasures. There have been multiple recent works on coding for flash memories, including codes for efficient rewriting [6], [8], [12], error-correcting codes [5], and rank modulation for reliable cell programming [10], [11]. This paper is the first work on storage coding at the page level instead of the cell level, and the topic itself is also distinct from all previous works. ...
Article
Full-text available
Flash memory is a nonvolatile computer memory comprised of blocks of cells, wherein each cell is implemented as either a NAND or a NOR floating gate. NAND flash is currently the most widely used type of flash memory. In a NAND flash memory, every block of cells consists of numerous pages; rewriting even a single page requires the whole block to be erased and reprogrammed. Block erasures determine both the longevity and the efficiency of a flash memory. Therefore, when data in a NAND flash memory are reorganized, minimizing the total number of block erasures required to achieve the desired data movement is an important goal. This leads to the flash data movement problem studied in this paper. We show that coding can significantly reduce the number of block erasures required for data movement, and present several optimal or nearly optimal data-movement algorithms based upon ideas from coding theory and combinatorics. In particular, we show that sorting-based (noncoding) schemes require O(n log n) erasures to move data among n blocks, whereas coding-based schemes require only O(n) erasures. Furthermore, coding-based schemes use only one auxiliary block, which is the best possible, and achieve a good balance among the numbers of erasures in the n+1 blocks.
... Besides studying the worst-case performance of rewriting codes, the expected rewriting performance is equally interesting. Some rewriting codes for expected performance are reported in [7], [19]. ...
... Optimizing rewriting codes for expected performance is also an interesting topic. In [7], floating codes of this type were designed based on Gray code constructions. In [19], randomized WOM codes of robust performance were proposed. ...
Chapter
Full-text available
We would like to thank all our co-authors for their collaborative work in this area. In particular, we would like to thank Mike Langberg and Moshe Schwartz for many of the main results discussed in this chapter.
... It is also related to the study on coding for flash memories, where many of the proposed coding schemes are based on the monotonic transitions of flash cell states [11], [13]. In particular, the works on rewriting codes for flash memories extend wom codes [2], [5], [12], [15]. ...
Preprint
A write-once memory (wom) is a storage medium formed by a number of "write-once" bit positions (wits), where each wit initially is in a "0" state and can be changed to a "1" state irreversibly. Examples of write-once memories include SLC flash memories and optical disks. This paper presents a low-complexity coding scheme for rewriting such write-once memories, which is applicable to general problem configurations. The proposed scheme is called the position modulation code, as it uses the positions of the zero symbols to encode some information. The proposed technique can achieve code rates higher than state-of-the-art practical solutions for some configurations. For instance, there is a position modulation code that can write 56 bits 10 times on 278 wits, achieving rate 2.01. In addition, the position modulation code is shown to achieve a rate at least half of the optimal rate.
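The core ingredient named above, encoding information in the positions of the zero symbols, can be illustrated with a standard enumerative mapping between integers and position sets. The sketch below is only that ingredient, not the position modulation code itself (which adds multi-write support and rate optimizations); the names and parameters are assumptions made for illustration.

from math import comb

def unrank_positions(value, n, z):
    """Map an integer value in [0, C(n, z)) to z marked positions out of n
    (here: the positions left as zeros), via enumerative coding."""
    positions, k = [], z
    for pos in range(n):
        if k == 0:
            break
        c = comb(n - pos - 1, k - 1)   # words whose next marked position is pos
        if value < c:
            positions.append(pos)
            k -= 1
        else:
            value -= c
    return positions

def rank_positions(positions, n):
    """Inverse mapping: recover the integer from the marked positions."""
    value, k, start = 0, len(positions), 0
    for pos in positions:
        for skipped in range(start, pos):
            value += comb(n - skipped - 1, k - 1)
        k -= 1
        start = pos + 1
    return value

if __name__ == "__main__":
    n, z = 12, 4                       # 12 wits, 4 of them kept at zero
    for v in (0, 100, comb(n, z) - 1):
        pos = unrank_positions(v, n, z)
        assert rank_positions(pos, n) == v
        print(v, "->", pos)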
... Since then, a number of coding schemes emerged. These codes commonly use an indexing scheme to associate the bit value with the index containing that bit value [2,4,5,13]. ...
... Recent advanced memory devices such as flash memory and phase-change memory (PCM) require appropriate coding to improve their write efficiency. For example, coding schemes for flash memories [6], [7], [8], [9] can improve write efficiency so that the lifetime of the flash memory is extended. Several constrained coding schemes suitable for PCM have been presented [10], [11]. ...
Article
Full-text available
A code design problem for memory devices with restricted state transitions is formulated as a combinatorial optimization problem called the subgraph domatic partition (subDP) problem. If every neighbor set of a given state transition graph contains all the colors, then the coloring is said to be valid. The goal of a subDP problem is to find a valid coloring with the largest number of colors for a subgraph of a given directed graph. The number of colors in an optimal valid coloring gives the writing capacity of a given state transition graph. The subDP problem is computationally hard; it is proved to be NP-complete in this paper. One of our main contributions is to show the asymptotic behavior of the writing capacity C(G) for sequences of dense bidirectional graphs, which is given by C(G) = Ω(n / ln n), where n is the number of nodes. A probabilistic method, the Lovász local lemma (LLL), plays an essential role in deriving this asymptotic expression.
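The validity condition stated above is easy to check mechanically. The following sketch does exactly that on a toy state transition graph (the graph and coloring are invented for illustration and are not taken from the paper).

def is_valid_coloring(adj, color, num_colors):
    """Validity condition from the abstract: for every state, the set of
    colors appearing in its out-neighborhood must contain all colors, so
    any new data value is always one state transition away.
    adj maps each state to the states reachable by one write; color maps
    each state to the data value (color) it represents."""
    required = set(range(num_colors))
    return all(required <= {color[v] for v in adj[u]} for u in adj)

if __name__ == "__main__":
    # Toy state transition graph: complete digraph on 4 states, no self-loops.
    adj = {u: [v for v in range(4) if v != u] for u in range(4)}
    color = {0: 0, 1: 1, 2: 0, 3: 1}
    print(is_valid_coloring(adj, color, 2))   # True: 2 colors are feasible
    print(is_valid_coloring(adj, color, 3))   # False: color 2 never appears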
... Papers such as [3], [4], [6] discuss lowering the worst-case write deficiency of flash codes. Other papers, such as [2], [5], discuss the importance of constructing flash codes that have low write deficiency in the average-case scenario. Both scenarios are equally important. ...
Chapter
Full-text available
This paper proposes a novel coding scheme which can extend the lifespan of flash memory. Flash memory has a number of advantages over conventional storage devices, but it must be noted that the flash cells which constitute a flash memory can endure a large but nonetheless limited number of operations. A flash code provides a clever way to represent data values in flash memory so that the number of operations over flash cells becomes as small as possible, and this contributes to extending the lifespan of flash memory. Several flash codes have been studied so far, and this paper proposes a novel coding scheme which makes use of two different modes of encoding. Computer simulation shows that the proposed coding scheme achieves much better average-case performance than existing codes. Besides the computer simulation, the paper also gives a detailed analysis of the performance, which justifies the advantage of the proposed code from a more theoretical viewpoint.
... Despite the fact that Fiat and Shamir studied non-binary WOM-codes more than 20 years ago [2], only a few constructions of such codes exist. Notable work on this topic includes results on the existence of some non-binary WOM-codes [3], and an expression for the capacity region for a fixed number of generations and levels [4]. In [10], a family of two-write non-binary WOM-codes was given. ...
Article
Full-text available
A Write-Once Memory (WOM)-code is a coding scheme that allows information to be written in a memory block multiple times, but in a way that the stored values are not decreased across writes. This work studies non-binary WOM-codes with applications to flash memory. We present two constructions of non-binary WOM-codes that leverage existing high sum-rate WOM-codes defined over smaller alphabets. In many instances, these constructions provide the highest known sum-rates of the non-binary WOM-codes. In addition, we introduce a new class of codes, called level distance WOM-codes, which mitigate the difficulty of programming a flash memory cell by eliminating all small-magnitude level increases. We show how to construct such codes and state an upper bound on their sum-rate.
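The binary building blocks such constructions start from can be illustrated with the classic two-write WOM code, which stores 2 bits, twice, in 3 write-once positions. The tables below are one common variant of that construction (in the spirit of the early Rivest-Shamir code), included only for orientation; they are not a code from the cited paper.

# Classic two-write binary WOM code: 2 bits written twice into 3 wits.
FIRST = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
SECOND = {v: tuple(1 - b for b in w) for v, w in FIRST.items()}
FIRST_INV = {w: v for v, w in FIRST.items()}

def decode(wits):
    """Weight <= 1: read the first-generation table directly; otherwise
    complement the word and read the same table."""
    key = wits if sum(wits) <= 1 else tuple(1 - b for b in wits)
    return FIRST_INV[key]

def write(wits, value):
    """Return updated wits encoding value; bits may only go 0 -> 1."""
    for target in (FIRST[value], SECOND[value]):
        if all(t >= w for t, w in zip(target, wits)):
            return target
    raise ValueError("block erase needed")

if __name__ == "__main__":
    wits = (0, 0, 0)
    for value in (2, 3):               # two successive 2-bit messages
        wits = write(wits, value)
        print(f"store {value:02b} -> wits {wits} -> read back {decode(wits):02b}")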
... — were recently introduced in [1], [8], [9]. Since then, several more papers on this subject have appeared in the literature [5], [10]–[12], [15], [19]. It should be pointed out that flash codes and buffer codes can be regarded as examples of memories with constrained source, which were described in [12]. ...
Article
Full-text available
Flash memory is a non-volatile computer memory comprising blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes - known as floating codes (or flash codes) and buffer codes - have been designed in order to maximize the number of times that information stored in a flash memory can be written (and re-written) prior to incurring a block erasure. An (n, k, t)q flash code C is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of C, defined as δ(C) = n(q-1) - t, is a measure of how close the code comes to perfectly utilizing all these transitions. In this paper, we show a construction of flash codes with write deficiency O(qk log k) if q ≥ log2 k, and at most O(k log² k) otherwise. An (n, r, ℓ, t)q buffer code is a coding scheme for storing a buffer of r ℓ-ary symbols such that for any sequence of t symbols it is possible to successfully decode the last r symbols that were written. We improve upon a previous upper bound on the maximum number of writes t in the case where there is a single cell to store the buffer. Then, we show how to improve a construction by Jiang et al. that uses multiple cells, where n ≥ 2r.
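As a quick worked instance of the deficiency measure just defined (the numbers are chosen purely for illustration, not taken from the paper): a code using n = 4 cells with q = 8 levels that guarantees t = 25 writes has

δ(C) = n(q−1) − t = 4 · 7 − 25 = 3,

i.e., at most 3 of the 28 available level transitions go unused before an erasure is forced.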
... The expected number of writes for floating codes was studied in [23], [24] and can be more important than the worst-case analysis in determining which codes to use in practice. Code constructions in [15] have a guarantee of (q − 1) + ⌊(q − 1)/2⌋ writes for a k = 2-dimensional message space and n = 2 cells. ...
Article
This paper investigates the design and application of write-once memory (WOM) codes for flash memory storage. Using ideas from Merkx ('84), we present a construction of WOM codes based on finite Euclidean geometries over F2. This construction yields WOM codes with new parameters and provides insight into the criterion that incidence structures should satisfy to give rise to good codes. We also analyze methods of adapting binary WOM codes for use on multilevel flash cells. In particular, we give two strategies based on different rewrite objectives. A brief discussion of the average-write performance of these strategies, as well as concatenation methods for WOM codes, is also provided.
... Since the cell levels monotonically increase during programming, flash memories are a type of Write Asymmetric Memory [5]. In [2], [4], [5], [6], [8], [9], [10], [13], [15], coding schemes are studied for modifying data or correcting errors by only increasing cell levels. It is clearly also interesting to study how to program cells accurately, as the precision of cell programming determines the storage capacity of flash memories. ...
Conference Paper
Flash memory cells use the charge they store to represent data. The amount of charge injected into a cell is called the cell's level. Programming a cell is the process of increasing a cell's level to the target value via charge injection, and the storage capacity of flash memories is limited by the precision of cell programming. To optimize the precision of the final cell level, a cell is programmed adaptively with multiple rounds of charge injection. Due to the high cost of block erasure, when cells are programmed, their levels are only allowed to increase. Such a storage medium can be modelled by a Write Asymmetric Memory model. It is interesting to study how well such storage media can be programmed. In this paper, we focus on the programming strategy that optimizes the expected precision. The performance criteria considered here include two metrics that are suitable for the multi-level cell technology and the rank modulation technology, respectively. Assuming that the charge-injection noise has a uniform random distribution, we present an effective algorithm for finding the optimal programming strategy. The optimal strategy can be used to program cells efficiently.
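The adaptive-programming idea can be illustrated with a toy strategy. The sketch below is a hedged illustration under an assumed noise model (requesting an increment a deposits an amount uniform in [a(1−ε), a(1+ε)]); it is not the optimal strategy derived in the paper, and all names are invented.

import random

def program_cell(target, rounds, eps, rng):
    """Adaptive, monotone programming sketch: each round requests the
    largest increment that cannot overshoot the target under the assumed
    uniform multiplicative noise, so the level only ever moves upward."""
    level = 0.0
    for _ in range(rounds):
        request = (target - level) / (1.0 + eps)   # worst case just reaches target
        if request <= 0:
            break
        level += request * rng.uniform(1.0 - eps, 1.0 + eps)
    return level

if __name__ == "__main__":
    rng, target, eps, trials = random.Random(0), 1.0, 0.2, 20000
    for rounds in (1, 2, 4, 8):
        err = sum(abs(target - program_cell(target, rounds, eps, rng))
                  for _ in range(trials)) / trials
        print(f"{rounds} round(s): mean |error| = {err:.4f}")

The mean error shrinks with each additional round of charge injection, which is the trade-off between programming precision and programming time that the paper studies.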
... Optimizing rewriting codes for expected performance is also an interesting topic. In [7], floating codes of this type were designed based on Gray code constructions. In [18], randomized WOM codes of robust performance were proposed. ...
Conference Paper
Full-text available
Flash memories are a very widely used type of non-volatile memory. Like magnetic recording and optical recording, flash memories have their own distinct properties. These distinct properties introduce very interesting information-representation and coding problems, which address many aspects of a successful storage system. In this paper, we survey recent results in this area. A focus is placed on rewriting codes and rank modulation.
... There have been multiple recent works on coding for flash memories, including codes for efficient rewriting [5], [7], [11], error-correcting codes [4], and rank modulation for reliable cell programming [8], [10]. This paper is the first work on storage coding at the page level instead of the cell level, and the topic itself is also distinct from all previous works. ...
Conference Paper
Full-text available
NAND flash memories are currently the most widely used flash memories. In a NAND flash memory, although a cell block consists of many pages, to rewrite one page, the whole block needs to be erased and reprogrammed. Block erasures determine the longevity and efficiency of flash memories. So when data is frequently reorganized, which can be characterized as a data movement process, how to minimize block erasures becomes an important challenge. In this paper, we show that coding can significantly reduce block erasures for data movement, and present several optimal or nearly optimal algorithms. While the sorting-based non-coding schemes require O(n log n) erasures to move data among n blocks, coding-based schemes use only O(n) erasures and also optimize the utilization of storage space.
... Such coding schemes, known as floating codes or flash codes, were first introduced in [3] two years ago. Since then, a few more papers on this subject have appeared in the literature [2], [4], [6], [8]. It should be pointed out, however, that flash codes may be regarded as a generalization of codes for write-once memories [1], [7], which have been studied since the early 1980s. ...
Conference Paper
Full-text available
Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes - known as floating codes or flash codes - have been designed in order to maximize the number of times that information stored in a flash memory can be written (and re-written) prior to incurring a block erasure. An (n, k, t)q flash code C is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes (where a write is a transition 0 → 1 or 1 → 0 in any one of the k bits) can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of C, defined as δ(C) = n(q-1) - t, is a measure of how close the code comes to perfectly utilizing all these transitions. For k > 6 and large n, the best previously known construction of flash codes achieves a write deficiency of O(qk²). On the other hand, the best known lower bound on write deficiency is Ω(qk). In this paper, we present a new construction of flash codes that approaches this lower bound to within a factor logarithmic in k. To this end, we first improve upon the so-called "indexed" flash codes, due to Jiang and Bruck, by eliminating the need for index cells in the Jiang-Bruck construction. Next, we further increase the number of writes by introducing a new multi-stage (recursive) indexing scheme. We then show that the write deficiency of the resulting flash codes is O(qk log k) if q ≥ log2 k, and at most O(k log² k) otherwise.
... It is also related to the study on coding for flash memories, where many of the proposed coding schemes are based on the monotonic transitions of flash cell states [11], [13]. In particular, the works on rewriting codes for flash memories extend wom codes [2], [5], [12], [15]. The motivation for this study is to look for a general method for constructing wom codes that can achieve low encoding/decoding complexity and high rates, which can potentially be used in practice (e.g., for flash memories). ...
Article
A write-once memory (wom) is a storage medium formed by a number of “write-once” bit positions (wits), where each wit initially is in a “0” state and can be changed to a “1” state irreversibly. Examples of write-once memories include SLC flash memories and optical disks. This paper presents a low complexity coding scheme for rewriting such write-once memories, which is applicable to general problem configurations. The proposed scheme is called the position modulation code, as it uses the positions of the zero symbols to encode some information. The proposed technique can achieve code rates higher than state-of-the-art practical solutions for some configurations. For instance, there is a position modulation code that can write 56 bits 10 times on 278 wits, achieving rate 2.01. In addition, the position modulation code is shown to achieve a rate at least half of the optimal rate.
... Suppose we have a cycle that traverses the (n choose k) · k! partial permutations, and increasing a logical digit by one corresponds to moving forward one step in the cycle. Then we can take a set of such logical digits, increase some digits by a small amount at a time (e.g., as in floating codes [3], [4]), and construct a code for flash memory. The key result in this paper is a generalization of Gray codes from complete permutations [1] to k-partial permutations. ...
Conference Paper
Full-text available
Rank modulation was recently proposed as an information representation for multilevel flash memories, using permutations or ranks of n flash cells. The current decoding process finds the cell with the i-th highest charge level at iteration i, for i = 1, 2, ..., n-1. Motivated by the need to reduce the number of such iterations, we consider k-partial permutations, where only the highest k cell levels are considered for information representation. We propose a generalization of Gray codes for k-partial permutations such that information is updated efficiently.
... Nevertheless, the coding technique in the above algorithm can be readily utilized in any per-instance-optimal solution. A number of recent works have studied coding for rewriting [4], [6], [8], [10], [13] and error correction [9], [12] in flash memories at the cell level. There are also many works studying algorithms and data structures for flash data-storage systems [5]. ...
Conference Paper
Full-text available
NAND flash memories are the most widely used non-volatile memories, and data movement is common in flash storage systems. We study data movement solutions that minimize the number of block erasures, which are very important for the efficiency and longevity of flash memories. To move data among n blocks with the help of Δ auxiliary blocks, where every block contains m pages, we present algorithms that use Θ(n · min{m, log_Δ n}) erasures without the tool of coding. We prove this is almost the best possible for non-coding solutions by presenting a nearly matching lower bound. Optimal data movement can be achieved using coding, where only Θ(n) erasures are needed. We present a coding-based algorithm, which has very low coding complexity, for optimal data movement. We further show the NP-hardness of both coding-based and non-coding schemes when the objective is to optimize data movement on a per-instance basis.
... For example, a lookup operation in MicroHash may need to follow multiple pointers to locate the desired key in a chain of flash blocks and can be very slow. Other recent works on designing efficient codes for flash memory to increase its effective capacity [30, 27] are orthogonal to our work, and BufferHash can be implemented on top of these codes. ...
Conference Paper
Full-text available
We show how to build cheap and large CAMs, or CLAMs, using a combination of DRAM and flash memory. These are targeted at emerging data-intensive networked systems that require massive hash tables running into a hundred GB or more, with items being inserted, updated and looked up at a rapid rate. For such systems, using DRAM to maintain hash tables is quite expensive, while on-disk approaches are too slow. In contrast, CLAMs cost nearly the same as using existing on-disk approaches but offer orders of magnitude better performance. Our design leverages an efficient flash-oriented data structure called BufferHash that significantly lowers the amortized cost of random hash insertions and updates on flash. BufferHash also supports flexible CLAM eviction policies. We prototype CLAMs using SSDs from two different vendors. We find that they can offer average insert and lookup latencies of 0.006ms and 0.06ms (for a 40% lookup success rate), respectively. We show that using our CLAM prototype significantly improves the speed and effectiveness of WAN optimizers.
... A rewriting code builds a one-to-many mapping from the data to the cells' states, so that the data can be changed repeatedly without a block erasure. The rewriting codes include write-once-memory (WOM) codes for storing individual variables [4], [6], [17], floating codes for the joint coding of multiple variables [5], [8], [9], [21], buffer codes for buffering recent values in a data stream [2], and the rewriting codes in [11] that generalize the aforementioned data models. In these rewriting codes, every cell has discrete levels, and the cell levels can only increase. ...
Article
Full-text available
Flash memories are currently the most widely used type of nonvolatile memories. A flash memory consists of floating-gate cells as its storage elements, where the charge level stored in a cell is used to represent data. Compared to magnetic recording and optical recording, flash memories have the unique property that the cells are programmed using an iterative procedure that monotonically shifts each cell's charge level upward toward its target value. In this paper, we model the cell as a monotonic storage channel, and explore its capacity and optimal programming. We present two optimal programming algorithms based on a few different noise models and optimization objectives.
... Rewriting codes are a coding-theoretic approach to allow rewriting in memories that have some type of write restriction, typically that values stored in memory may only be increased. While codes for binary media were proposed in the 1980s [1], [2], within the past few years a large number of rewriting codes directed at flash memory have been described [3], [4], [5], [6], [7], [8]. Most of these codes are designed for flash memory cells that can store one of q discrete levels, where the values can only increase on successive rewrites. ...
Article
A rewriting code construction for flash memories based upon lattices is described. The values stored in flash cells correspond to lattice points. This construction encodes information to lattice points in such a way that data can be written to the memory multiple times without decreasing the cell values. The construction partitions the flash memory's cubic signal space into blocks. The minimum number of writes is shown to be linear in one of the code parameters. An example using the E8 lattice is given, with numerical results.
... There have been a number of recent works using the information-theoretic approach to develop new storage schemes for flash memories. They include coding schemes for rewriting data [1], [4], [5], [6], [10], codes for correcting limited-magnitude errors [3], and the new rank modulation scheme for efficient and reliable cell programming and data storage [7], [8]. In this paper, we focus on and extend the rank modulation scheme. ...
Article
Full-text available
Rank modulation has been recently introduced as a new information representation scheme for flash memories. Given the charge levels of a group of flash cells, sorting is used to induce a permutation, which in turn represents data. Motivated by the lower sorting complexity of smaller cell groups, we consider bounded rank modulation, where a sequence of permutations of given sizes are used to represent data. We study the capacity of bounded rank modulation under the condition that permutations can overlap for higher capacity.
... The bits are represented in a clever way to guarantee that every sequence of up to t writes (of a single bit) does not lead to any of the n cells exceeding its maximum value q − 1. Recently, several more papers have appeared [1], [3], [5], [8], [9], [13], [14], [15], [17] that discuss coding techniques for this model of flash memories. ...
Article
Full-text available
Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different levels corresponding to the number of electrons it contains. Increasing the cell level is easy; however, reducing a cell level forces all the other cells in the same block to be erased. This erasing operation is undesirable and therefore has to be used as infrequently as possible. We consider the problem of designing codes for this purpose, where k bits are stored using a block of n cells with q levels each. The goal is to maximize the number of bit writes before an erase operation is required. We present an efficient construction of codes that can store an arbitrary number of bits. Our construction can be viewed as an extension to multiple dimensions of the earlier work of Jiang and Bruck, where single-dimensional codes that can store only 2 bits were proposed.
Article
Every bit of information in a storage or memory device is bound by a multitude of performance specifications, and is subject to a variety of reliability impediments. At the other end, the physical processes tamed to remember our bits offer a constant source of risk to their reliability. These include a variety of noise sources, access restrictions, inter-cell interferences, cell variabilities, and many more issues. Tying together this vector of performance figures with that vector of reliability issues is a rich matrix of emerging coding tools and techniques. Channel coding schemes ensure target reliability and performance and have been at the core of memory systems since their nascent age. In this survey, we first overview the fundamentals of channel coding and summarize well-known codes that have been used in non-volatile memories (NVMs). Next, we demonstrate why the conventional coding approaches ubiquitously based on symmetric channel models and optimization for the Hamming metric fail to address the needs of modern memories. We then discuss several recently proposed innovative coding schemes. Behind each coding scheme lies an interesting theoretical framework, building on deep ideas from mathematics and the information sciences. We also survey some of the most fascinating bridges between deep theory and storage performance. While the focus of this survey is primarily on the pervasive multi-level NAND Flash, we envision that other benefiting memory technologies will include phase change memory, resistive memories, and others.
Article
Non-volatile memories (NVMs) have emerged as the primary replacement of hard-disk drives for a variety of storage applications, including personal electronics, mobile computing, intelligent vehicles, enterprise storage, data warehousing, and data-intensive computing systems. Channel coding schemes are a necessary tool for ensuring target reliability and performance of NVMs. However, due to operational asymmetries in NVMs, conventional coding approaches - commonly based on designing for the Hamming metric - no longer apply. Given the immediate need for practical solutions and the shortfalls of existing methods, the fast-growing discipline of coding for NVMs has resulted in several key innovations that not only answer the needs of modern storage systems but also directly contribute to the analytical toolbox of coding theory at large. This monograph discusses recent advances in coding for NVMs, covering topics such as error correction coding based on novel algebraic and graph-based methods, write-once memory (WOM) codes, rank modulation, and constrained coding. Our goal in this monograph is multifold: to illuminate the advantages - as well as challenges - associated with modern NVMs, to present a succinct overview of several exciting recent developments in coding for memories, and, by presenting numerous potential research directions, to inspire other researchers to contribute to this timely and thriving discipline.
Article
The expected write deficiency of the index-less indexed flash codes (ILIFC) is studied. ILIFC is a coding scheme for flash memory, and consists of two stages with different coding techniques. This study investigates the write deficiency of the first stage of ILIFC, and shows that omitting the second stage of ILIFC can be a practical option for realizing flash codes with good average performance. To discuss the expected write deficiency of ILIFC, a random walk model is introduced as a formalization of the behavior of ILIFC. Based on the random walk model, two different techniques are developed to estimate the expected write deficiency. One technique requires some computation, but gives very precise estimation of the write deficiency. The other technique gives a closed-form formula of the write deficiency under a certain asymptotic scenario.
Article
Novel flash codes with small average write deficiency are proposed. A flash code is a coding scheme for mitigating the wear of cells in flash memory. One approach to developing flash codes with large parameters is to make use of slices, which are small groups of cells. A preliminary study shows that using small slices brings several favorable characteristics, but naive use of small slices induces a certain overhead. In this study, a new structure called a cluster is devised to develop a good slice-based flash code. Two different slice encoding schemes are used in a cluster, which decreases the overhead of using small slices while retaining their advantages. The proposed flash codes show much smaller write deficiency compared to another slice-based flash code.
Article
The goal of this paper is to present constructions of high-rate nonbinary write-once memory (WOM) codes for multilevel flash memories. The constructions provided here are all based on the basic idea of mapping high-rate binary codebooks to nonbinary codebooks. The proposed codes maintain the same length and encoding complexity as their underlying binary constituents. We begin by presenting some elementary, yet rate-efficient constructions. Afterward, we consider a high-rate two-write WOM-code defined over an alphabet of size four. In addition, we consider the application of our constructions to the creation of fixed-rate WOM codes. The constructions presented in this paper improve upon the best-known code constructions for certain code lengths.
Conference Paper
Flash memory is a non-volatile, non-mechanical data storage technology that stores data by trapping charge and can be reused by freeing the trapped charge with an internal erase operation. When flash memory cells are erased, there is a considerable negative impact on the longevity and performance of the device. To defer and minimize these erasures, a floating code is able to store variable updates as cell increments. An (n, q, k) floating code uses an array of n cells with q levels to store k binary variables. In this paper, we investigate the poset (partially ordered set) structures derived from the various states of the n cells and k variables. These posets have fundamentally different structures that make designing floating codes a challenge, most notably the structure of their vertex covers. Based on the poset structure, we present a new floating code for ℓ = 2 and arbitrary q, k and n ∈ {k, k+1}, or arbitrary n, q and k = 2, that is optimal for single-cell increments and has a deficiency of O(qk), the best possible deficiency. We present an algorithm for constructing the floating code and prove that the algorithm produces a valid floating code.
Conference Paper
A rewriting code construction for flash memories based upon lattices is described, where the values stored in flash cells correspond to lattice points. This construction encodes information to lattice points in such a way that data can be written to the memory multiple times without decreasing the cell values. The construction partitions the flash memory's cubic signal space into blocks, which aids with encoding. The minimum number of writes is approximately linear in one of the code parameters. Using the E8 lattice as an example, the average number of writes can be increased by introducing randomization in the encoding.
Article
This work investigates the structure of capacity-achieving write-once memory codes, with particular attention to the case where each cell of the flash memory device is capable of representing more than one bit. These results are used to characterize the rates achieved across generations for capacity-achieving codes, as well as to construct a high-rate ternary two-write code. Additionally, the problem of maximizing the sum rate for two writes, given that both writes encode at the same rate, is considered.
Article
The expected write deficiency of the index-less indexed flash codes (ILIFC) is studied, and a technique is developed to improve the write deficiency of ILIFC. ILIFC is a coding scheme for flash memory and consists of two stages with different coding techniques. This study first clarifies the average write deficiency of the first stage of ILIFC, and shows that omitting the second stage of ILIFC can be a practical option for realizing flash codes with good average performance. The study also investigates an improvement of the index-less coding used in the first stage of ILIFC. The improvement reduces the write deficiency of ILIFC and relaxes the constraints on the rate of the code.
Conference Paper
Flash memory is a non-volatile computer memory comprised of blocks of cells, wherein each cell can take on q different levels corresponding to the number of electrons it contains. Increasing the cell level is easy; however, reducing a cell level forces all the other cells in the same block to be erased. This erasing operation is undesirable and therefore has to be used as infrequently as possible. We consider the problem of designing codes for this purpose, where k bits are stored using a block of n cells with q levels each. The goal is to maximize the number of bit writes before an erase operation is required. We present an efficient construction of codes that can store an arbitrary number of bits. Our construction can be viewed as an extension to multiple dimensions of the earlier work of Jiang and Bruck, where single-dimensional codes that can store only 2 bits were proposed.
Conference Paper
Floating codes are codes for multi-level flash memories. Those codes have two main properties: the worst-case block erasure period and the average block erasure period. Codes with a large average block erasure period can be constructed from the Gray code. First, it is shown that a construction method for floating codes can be interpreted using labelled graphs. Floating codes are proposed for the cases (a) n = 5, k = 4, ℓ = 2, q ≥ 2 and (b) n = 8, k = 4, ℓ = 2, q ≥ 2, where n is the number of cells in a block, k the number of information variables, ℓ the number of levels of the information variables, and q the number of levels of the cells. It is shown that if the input data are not uniformly distributed, then the frequency of block erasures is low when the proposed codes are used.
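The Gray-code ingredient mentioned above is the standard one: consecutive codewords differ in exactly one position, which is what allows a single variable change to correspond to a single step along a path of cell states. The generator below is the textbook binary reflected Gray code; how the cited constructions embed such a path into the cell-state space is not reproduced here.

def reflected_gray(k):
    """Standard k-bit binary reflected Gray code as a list of tuples."""
    code = [(0,), (1,)]
    for _ in range(k - 1):
        code = [(0,) + w for w in code] + [(1,) + w for w in reversed(code)]
    return code

if __name__ == "__main__":
    code = reflected_gray(3)
    # Consecutive codewords (cyclically) differ in exactly one bit -- the
    # property that lets one variable change correspond to one step.
    for a, b in zip(code, code[1:] + code[:1]):
        assert sum(x != y for x, y in zip(a, b)) == 1
    print(code)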
Article
Full-text available
We explore a novel data representation scheme for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The only allowed charge-placement mechanism is a "push-to-the-top" operation which takes a single cell of the set and makes it the top-charged cell. The resulting scheme eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. We present unrestricted Gray codes spanning all possible n-cell states and using only "push-to-the-top" operations, and also construct balanced Gray codes. We also investigate optimal rewriting schemes for translating an arbitrary input alphabet into n-cell states that minimize the number of programming operations.
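The "push-to-the-top" operation is simple enough to state in a few lines. The following sketch models a block state as a permutation of cell indices listed from the top-charged cell down; it is an illustration of the operation described in the abstract, not code from the paper, and the names are ours.

def push_to_top(perm, cell):
    # Reprogram 'cell' so that it becomes the top-charged cell; the relative
    # order of all other cells is unchanged.
    return [cell] + [c for c in perm if c != cell]

state = [2, 0, 1, 3]            # cell 2 currently holds the highest charge
state = push_to_top(state, 3)   # a single "push-to-the-top" on cell 3
print(state)                    # [3, 2, 0, 1]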
Conference Paper
Full-text available
Flash memory is an electronic non-volatile memory with wide applications. Due to the substantial impact of block erasure operations on the speed, reliability and longevity of flash memories, writing schemes that enable data to be modified numerous times without incurring a block erasure are desirable. This requirement is addressed by floating codes, a coding scheme that jointly stores and rewrites data and maximizes the rewriting capability of flash memories. In this paper, we present several new floating code constructions. They include both codes with specific parameters and general code constructions that are asymptotically optimal. We also present bounds on the performance of floating codes.
Conference Paper
Full-text available
Certain storage media such as flash memories use write-asymmetric, multi-level storage elements. In such media, data is stored in a multi-level memory cell whose contents can only be increased, or reset. The reset operation is expensive and should be delayed as much as possible. Mathematically, we consider the problem of writing a binary sequence into write-asymmetric q-ary cells, while recording the last r bits written. We want to maximize t, the number of possible writes, before a reset is needed. We introduce the term buffer code to describe the solution to this problem: a buffer code is a code that remembers the r most recent values of a variable. We present the construction of a single-cell (n = 1) buffer code that can store a binary (l = 2) variable with t = ⌊q/2^(r-1)⌋ + r − 2, and a universal upper bound on the number of rewrites that a single-cell buffer code can have: t ≤ ⌊(q−1)/(l^r − 1)⌋·r + ⌊log_l(((q−1) mod (l^r − 1)) + 1)⌋. We also show a binary buffer code for arbitrary n, q, r; the code uses n q-ary cells to remember the r most recent values of one binary variable, and can rewrite the variable t = (q−1)(n − 2r + 1) + r − 1 times, which is asymptotically optimal in q and n. We then extend the code construction for the case r = 2 and obtain a code that can rewrite the variable t = (q−1)(n − 2) + 1 times. When q = 2, the code is strictly optimal.
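Taking the rewrite guarantees quoted above at face value, they are easy to tabulate for concrete parameters. The small script below simply evaluates the stated expressions (a transcription of the abstract's formulas, not a re-derivation), for example with q = 16, r = 3, l = 2.

from math import floor, log

def single_cell_writes(q, r):
    # t = floor(q / 2^(r-1)) + r - 2 for one q-ary cell and a binary variable
    return q // 2 ** (r - 1) + r - 2

def single_cell_upper_bound(q, r, l=2):
    # t <= floor((q-1)/(l^r - 1))*r + floor(log_l(((q-1) mod (l^r - 1)) + 1))
    m = l ** r - 1
    return ((q - 1) // m) * r + floor(log((q - 1) % m + 1, l))

q, r = 16, 3
print(single_cell_writes(q, r), single_cell_upper_bound(q, r))  # 5 7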
Conference Paper
Full-text available
Memories whose storage cells transit irreversibly between states have been common since the start of the data storage technology. In recent years, flash memories and other non-volatile memories based on floating-gate cells have become a very important family of such memories. We model them by the Write Asymmetric Memory (WAM), a memory where each cell is in one of q states – state 0, 1, ... , q-1 – and can only transit from a lower state to a higher state. Data stored in a WAM can be rewritten by shifting the cells to higher states. Since the state transition is irreversible, the number of times of rewriting is limited. When multiple variables are stored in a WAM, we study codes, which we call floating codes, that maximize the total number of times the variables can be written and rewritten. In this paper, we present several families of floating codes that either are optimal, or approach optimality as the codes get longer. We also present bounds to the performance of general floating codes. The results show that floating codes can integrate the rewriting capabilities of different variables to a surprisingly high degree.
Conference Paper
Full-text available
Several physical effects that limit the reliability and performance of Multilevel Flash memories induce errors that have low magnitude and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions for such channels when the number of errors is bounded by t. The construction uses known codes for symmetric errors over small alphabets to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. An extension of the construction is proposed to include systematic codes as a benefit to practical implementation.
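The reduction described in the abstract can be illustrated with a toy instance, assuming the usual modular reading of the construction: the residues of the q-ary cell levels modulo (l+1) form a word over the small alphabet {0, ..., l}, and a code correcting t symmetric errors over that alphabet then corrects t asymmetric errors of magnitude at most l on the original levels. In the sketch below a length-3 repetition code (majority vote) stands in for the small-alphabet code; all names and parameters are illustrative.

from collections import Counter

L_MAX = 2          # maximum error magnitude l
QP = L_MAX + 1     # small-alphabet size l + 1

def decode(y):
    # y: received q-ary levels, each possibly raised by an asymmetric error
    # in {0, ..., L_MAX}; at most one position is in error (what a length-3
    # repetition code over the small alphabet can correct).
    residues = [v % QP for v in y]
    corrected = Counter(residues).most_common(1)[0][0]   # majority vote
    errors = [(r - corrected) % QP for r in residues]    # error magnitudes
    return [v - e for v, e in zip(y, errors)]

x = [7, 10, 4]      # stored levels: all residues mod 3 equal 1 (a codeword)
y = [7, 12, 4]      # one asymmetric error of magnitude 2 on the second cell
print(decode(y))    # [7, 10, 4]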
Article
Full-text available
Many forms of digital memory have been developed for the permanent storage of information. These include keypunch cards, paper tapes, PROMs, photographic film and, more recently, digital optical disks. All these "write-once" memories have the property that once a "one" is written in a particular cell, this cell becomes irreversibly set at one. Thus, the ability to rewrite information in the memory is hampered by the existence of previously written ones. The problem of storing temporary data in permanent memory is examined here. Consider storing a sequence of t messages W_1, W_2, ..., W_t in such a device. Let each message W_i consist of k_i bits and let the memory contain n cells. We say that a rate t-tuple (R_1 = k_1/n, R_2 = k_2/n, ..., R_t = k_t/n) is achievable if we can store a sequence of messages at these rates for some n. The capacity C*_t ⊂ R_+^t is the closure of the set of achievable rates. The capacity C*_t for an optical disk-type memory is determined. This result is related to the work of Rivest and Shamir. A more general model for permanent memory is introduced. This model allows for the possibility of random disturbances (noise), larger input and output alphabets, more possible cell states, and a more flexible set of state transitions. An inner bound on the capacity region C*_t for this model is presented. It is shown that this bound describes C*_t in several instances.
Article
Full-text available
A computer memory with defects is modeled as a discrete memoryless channel with states that are statistically determined. The storage capacity is found when complete defect information is given to the encoder or to the decoder, and when the defect information is given completely to the decoder but only partially to the encoder. Achievable storage rates are established when partial defect information is provided at varying rates to both the encoder and the decoder. Arimoto-Blahut type algorithms are used to compute the storage capacity.
Article
Full-text available
This paper mainly focuses on the development of the NOR flash memory technology, with the aim of describing both the basic functionality of the memory cell used so far and the main cell architecture consolidated today. The NOR cell is basically a floating-gate MOS transistor, programmed by channel hot electron and erased by Fowler-Nordheim tunneling. The main reliability issues, such as charge retention and endurance, are discussed, together with an understanding of the basic physical mechanisms responsible. Most of these considerations are also valid for the NAND cell, since it is based on the same concept of floating-gate MOS transistor. Furthermore, an insight into the multilevel approach, where two bits are stored in the same cell, is presented. In fact, the exploitation of the multilevel approach at each technology node allows an increase of the memory efficiency, almost doubling the density at the same chip size, enlarging the application range and reducing the cost per bit. Finally, NOR flash cell scaling issues are covered, pointing out the main challenges. Flash cell scaling has been demonstrated to be really possible and to be able to follow Moore's law down to the 130-nm technology generations. Technology development and consolidated know-how is expected to sustain the scaling trend down to 90- and 65-nm technology nodes. One of the crucial issues to be solved to allow cell scaling below the 65-nm node is the tunnel oxide thickness reduction, as tunnel thinning is limited by intrinsic and extrinsic mechanisms.
Article
Storage media such as digital optical disks, PROMs, or paper tape consist of a number of "write-once" bit positions (wits); each wit initially contains a "0" that may later be irreversibly overwritten with a "1". It is demonstrated that such "write-once memories" (WOMs) can be "rewritten" to a surprising degree. For example, only 3 wits suffice to represent any 2-bit value in a way that can later be updated to represent any other 2-bit value. For large k, about 1.29·k wits suffice to represent a k-bit value in a way that can be similarly updated. Most surprising, allowing t writes of a k-bit value requires only t + o(t) wits, for any fixed k. For fixed t, approximately k·t/log(t) wits are required as k → ∞. An n-wit WOM is shown to have a "capacity" (i.e., k·t when writing a k-bit value t times) of up to n·log(n) bits.
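The 3-wit example mentioned above is short enough to write out. The sketch below uses the table usually attributed to Rivest and Shamir: each 2-bit value has a first-generation codeword of weight at most 1 and a second-generation codeword equal to its complement, so the second write only turns 0s into 1s. Helper names are ours.

FIRST = {0b00: 0b000, 0b01: 0b100, 0b10: 0b010, 0b11: 0b001}
SECOND = {v: w ^ 0b111 for v, w in FIRST.items()}   # complementary codewords

def decode(wits):
    table = FIRST if bin(wits).count("1") <= 1 else SECOND
    return next(v for v, w in table.items() if w == wits)

def rewrite(wits, new_value):
    # Second write: keep the state if the value is unchanged; otherwise move
    # to the complementary codeword, which only sets additional wits.
    if decode(wits) == new_value:
        return wits
    target = SECOND[new_value]
    assert target & wits == wits, "write-once constraint violated"
    return target

wits = FIRST[0b01]            # first write stores the value 01
wits = rewrite(wits, 0b10)    # later update to the value 10
print(bin(wits), decode(wits) == 0b10)  # 0b101 True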
Article
A write-once memory (WOM) is a binary storage medium in which the individual bit positions can be changed from the 0 state to the 1 state only once. Examples of WOMs are paper tapes, punched cards, and, most importantly, optical disks. For the latter storage medium, the 1's are marked by a laser that burns away a portion of the disk. In a recent paper, Rivest and Shamir showed that it is possible to update or rewrite a WOM to a surprising degree, and that the total amount of information which can be stored in an N-position WOM over many write/read "generations" or "stages" can be much larger than N. In this paper we extend their results in several directions. Let C(T, N) be the total number of bits of information that can be stored in an N-position WOM using T write/read generations. We consider the four cases that result when the writer (encoder) and/or reader (decoder) knows the state of the memory at the previous generation. For three of these cases, when either the encoder and/or the decoder knows the previous state, we show that C(T, N) ∼ N·log(T + 1), with T held fixed, as N → ∞. For the remaining case, when neither the encoder nor the decoder knows the previous state, we show that C(T, N) < N·π²/(6 ln 2) ≈ 2.37·N, and that this bound can be approached arbitrarily closely with T, N sufficiently large.
Conference Paper
WOM (write-once memory) codes are codes for efficiently storing and updating data in a memory whose state transitions are irreversible. Storage media that can be classified as WOMs include flash memories, optical disks and punch cards. Error-correcting WOM codes can correct errors in addition to their regular data-updating capability. They are increasingly important for electronic memories using MLCs (multi-level cells), where the stored data are prone to errors. In this paper, we study error-correcting WOM codes that generalize the classic models. In particular, we study codes for jointly storing and updating multiple variables - instead of one variable - in WOMs with multi-level cells. The error-correcting codes we study here are also a natural extension of the recently proposed floating codes [7]. We analyze the performance of the generalized error-correcting WOM codes and present several bounds. The number of valid states of a code is an important measure of its complexity. We present three optimal codes for storing two binary variables in n q-ary cells, for n = 1, 2, 3, respectively. We prove that among all the codes with the minimum number of valid states, the three codes maximize the total number of times the variables can be updated.
Article
We introduce write-efficient memories (WEM) as a new model for storing and updating information on a rewritable medium. A cost function φ: X × X → [0, ∞) is assigned to changes of letters. A collection of subsets C = {C_i : 1 ≤ i ≤ M} of X^n is an (n, M, D) WEM code if the C_i are pairwise disjoint and, for all i ≠ j, every element of C_i can be rewritten to an element of C_j with cost at most D. D_max is called the maximal correction cost with respect to the given cost function. The performance of a code can also be measured by two parameters, namely the maximal cost per letter d = n^(-1)·D_max and the rate r = n^(-1)·log M. The largest rate achievable with maximal per-letter cost d is the most basic quantity (the storage capacity) of a WEM (X^n, φ^n), n = 1, 2, .... We give a characterization of this and related quantities.
Article
Storage media such as digital optical discs, PROMs, or punched cards consist of a number of write-once bit positions (wits); each wit initially contains a "0" that may later be irreversibly overwritten with a "1". Rivest and Shamir have shown that such write-once memories (WOMs) can be reused very efficiently. Generalized WOMs are considered, in which the basic storage element has more than two possible states and the legal state transitions are described by an arbitrary directed acyclic graph. The capabilities of such memories depend on the depth of the graphs rather than on their size, and the decision problem associated with the generalized WOMs is NP-hard even for 3-ary symbols rewritten several times or multiple values rewritten once.
Article
Write-efficient memories (WEMs) were introduced by Ahlswede and Zhang (1989) as a model for storing and updating information on a rewritable medium with cost constraints. We note that the work of Justesen and Hoholdt (1984) on maxentropic Markov chains actually provides a method for calculating the capacity of WEM. Using this method, we derive a formula for the capacity of WEM with a double-permutation cost matrix. Furthermore, some capacity theorems are established for a special class of WEM called deterministic WEM. We show that the capacity of a deterministic WEM is equal to the logarithm of the largest eigenvalue of the corresponding connectivity matrix; it is interesting to note that the deterministic WEM behaves like the discrete noiseless channels of Shannon (1948). By specializing our results, we also obtain some interesting properties for the maximization problem of information functions with multiple variables which are difficult to obtain otherwise. Finally, we present a method for constructing error-correcting codes for WEM with the Hamming distance as the cost function. The covering radius of linear codes plays an important role in the constructions.
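The eigenvalue characterization of the deterministic WEM capacity can be checked numerically in a few lines, in the same way one computes the capacity of Shannon's discrete noiseless channel. The connectivity matrix below is a made-up example, not one taken from the paper.

import numpy as np

def deterministic_wem_capacity(connectivity):
    # Capacity = log2 of the largest eigenvalue (spectral radius) of the
    # connectivity matrix, per the characterization stated above.
    eigenvalues = np.linalg.eigvals(np.asarray(connectivity, dtype=float))
    return float(np.log2(max(abs(eigenvalues))))

# Hypothetical 3-state connectivity matrix: entry (i, j) = 1 iff updating
# from state i to state j is allowed within the per-letter cost budget.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(deterministic_wem_capacity(A))  # ≈ 1.0 (spectral radius 2)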
Article
The generalized write-once memory introduced by Fiat and Shamir (1984) is a q-ary information storage medium. Each storage cell can store one of q symbols, and the legal state transitions are described by an arbitrary directed acyclic graph. This memory model can be understood as a generalization of the binary write-once memory introduced by Rivest and Shamir (1982), in which, during the process of updating information, the contents of a cell can be changed from the 0-state to the 1-state but not vice versa. We study the problem of reusing a generalized write-once memory for T successive cycles (generations). We determine the zero-error capacity region and the maximum total number of information bits stored in the memory over T consecutive cycles for the situation where the encoder knows, and the decoder does not know, the previous state of the memory. These results extend the results of Wolf, Wyner, Ziv, and Korner (1984) for the binary write-once memory.
Article
Write-unidirectional memories generalize write-once memories, storing binary sequences of some fixed length in a reusable manner. At every new usage, the content of the memory can be rewritten by either changing some of the zeroes to ones or changing some of the ones to zeroes, but not both. The author constructs codes of rate 0.5325. He discusses the four cases that arise according to whether or not the encoder and/or the decoder is informed of the previous state of the memory. J. M. Borden's converse bound (submitted to IEEE Trans. Inf. Theory) is rederived using Fibonacci sequences.
Pulse code communications
  • F Gray
On the capacity of computer memory with defects
  • C A Heegard
On the generalization of error-correcting WOM codes
  • Jiang