Article

Abstract

In many channels, the transmitted signals are corrupted not only by noise but also by offset mismatch. In the prior art, maximum likelihood (ML) decision criteria have already been developed for noisy channels suffering from signal-independent offset. In this paper, such an ML criterion is considered for the case of binary signals suffering from Gaussian noise and signal-dependent offset. The signal dependency of the offset signifies that it may differ for distinct signal levels, i.e., the offset experienced by the zeros in a transmitted codeword is not necessarily the same as the offset for the ones. Besides the ML criterion itself, an option to reduce its complexity is also considered. Further, a brief performance analysis is provided, confirming the superiority of the newly developed ML decoder over classical decoders based on the Euclidean or Pearson distances.
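
The abstract does not spell out the criterion itself, so the following is only a hedged sketch of how such a decoder could look under additional assumptions: binary levels 0/1, known noise standard deviation sigma_n, and independent zero-mean Gaussian offsets (standard deviation sigma_b) applied to the zero positions and to the one positions of each codeword. The codebook and all numeric values are invented for illustration.

```python
# A minimal sketch (not the paper's exact criterion): ML-style decoding for a
# binary codeword sent over a channel with Gaussian noise plus a separate,
# zero-mean Gaussian offset for the zero-level and the one-level positions.
import numpy as np

rng = np.random.default_rng(1)

CODEBOOK = np.array([[0, 0, 1, 1],
                     [0, 1, 0, 1],
                     [1, 0, 0, 1],
                     [1, 1, 1, 0]], dtype=float)
sigma_n = 0.3   # std of the additive Gaussian noise (assumed known)
sigma_b = 0.5   # std of the per-level Gaussian offsets (assumed known)

def ml_metric(r, x):
    """Negative log-likelihood (up to constants) of r given codeword x,
    when each signal level experiences its own zero-mean Gaussian offset."""
    metric = np.sum((r - x) ** 2) / sigma_n ** 2
    for level in (0.0, 1.0):
        idx = (x == level)
        n_l = np.count_nonzero(idx)
        s_l = np.sum(r[idx] - level)          # summed residual on this level
        metric -= (sigma_b ** 2 / (sigma_n ** 2 *
                   (sigma_n ** 2 + n_l * sigma_b ** 2))) * s_l ** 2
        metric += np.log(sigma_n ** 2 + n_l * sigma_b ** 2)
    return metric

def decode(r, metric):
    return min(CODEBOOK, key=lambda x: metric(r, x))

euclidean = lambda r, x: np.sum((r - x) ** 2)

# One transmission: codeword + level-dependent offsets + noise.
x_sent = CODEBOOK[2]
b0, b1 = rng.normal(0, sigma_b, size=2)
r = x_sent + np.where(x_sent == 0, b0, b1) + rng.normal(0, sigma_n, size=4)
print("sent     :", x_sent)
print("ML       :", decode(r, ml_metric))
print("Euclidean:", decode(r, euclidean))
```

Under these assumptions the noise-plus-offset vector is Gaussian with a two-block covariance, which is what the per-level correction terms in ml_metric account for; the Euclidean decoder ignores the offsets entirely.
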

Conference Paper
Full-text available
Reliability is a critical issue for modern multi-level cell memories. We consider a multi-level cell channel model such that the retrieved data is not only corrupted by Gaussian noise, but hampered by scaling and offset mismatch as well. We assume that the intervals from which the scaling and offset values are taken are known, but no further assumptions on the distributions on these intervals are made. We derive maximum likelihood (ML) decoding methods for such channels, based on finding a codeword that has closest Euclidean distance to a specified set defined by the received vector and the scaling and offset parameters. We provide geometric interpretations of scaling and offset and also show that certain known criteria appear as special cases of our general setting.
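
As a rough companion to this description, the sketch below approximates the "closest Euclidean distance to a set" rule by a brute-force grid search over the scaling interval [A_LO, A_HI] and offset interval [B_LO, B_HI]; the intervals, the codebook, and the grid resolution are illustrative assumptions rather than values from the paper.

```python
# Sketch of decoding by minimising the Euclidean distance between the received
# vector r and the set {a*x + b*1 : a in [A_LO, A_HI], b in [B_LO, B_HI]} for
# each candidate codeword x. A coarse grid search stands in for the exact
# minimisation; all numeric values are illustrative.
import numpy as np

A_LO, A_HI = 0.8, 1.2       # assumed interval for the unknown scaling a
B_LO, B_HI = -0.2, 0.2      # assumed interval for the unknown offset b
CODEBOOK = np.array([[0, 1, 2, 3], [3, 2, 1, 0], [1, 3, 0, 2]], dtype=float)

def distance_to_set(r, x, steps=101):
    """Approximate min over (a, b) in the box of ||r - a*x - b*1||^2."""
    a_grid = np.linspace(A_LO, A_HI, steps)
    b_grid = np.linspace(B_LO, B_HI, steps)
    A, B = np.meshgrid(a_grid, b_grid)
    # residuals for every (a, b) pair at once: shape (steps, steps, len(r))
    residual = r - A[..., None] * x - B[..., None]
    return np.min(np.sum(residual ** 2, axis=-1))

def decode(r):
    return min(CODEBOOK, key=lambda x: distance_to_set(r, x))

r = 1.1 * CODEBOOK[1] + 0.15 + np.random.default_rng(0).normal(0, 0.1, 4)
print("decoded:", decode(r))
```
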
Conference Paper
Full-text available
Data storage systems may not only be disturbed by noise. In some cases, the error performance can also be seriously degraded by offset mismatch. Here, channels are considered for which both the noise and offset are bounded. For such channels, Euclidean distance-based decoding, Pearson distance-based decoding, and Maximum Likelihood decoding are considered. In particular, for each of these decoders, bounds are determined on the magnitudes of the noise and offset intervals which lead to a word error rate equal to zero. Case studies with simulation results are presented confirming the findings.
Conference Paper
Full-text available
We consider the transmission and storage of data that use coded symbols over a channel, where a Pearson-distance-based detector is used for achieving resilience against unknown channel gain and offset, and corruption with additive noise. We discuss properties of binary Pearson codes, such as the Pearson noise distance that plays a key role in the error performance of Pearson-distance-based detection. We also compare the Pearson noise distance to the well-known Hamming distance, since the latter plays a similar role in the error performance of Euclidean distance-based detection.
Conference Paper
Full-text available
We investigate machine learning based on clustering techniques that are suitable for the detection of n-symbol words of q-ary symbols transmitted over a noisy channel with partially unknown characteristics. We consider the detection of the n-symbol q-ary data as a classification problem, where objects are recognized from a corrupted vector, which is obtained by an unknown corruption process.
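
A toy instance of the clustering idea (not necessarily the authors' algorithm) is sketched below: the received samples of a q-ary word are clustered into q groups with a small hand-rolled 1-D k-means, the cluster centres are ranked, and each sample is mapped to the rank of its centre, which makes the decision insensitive to a common unknown gain and offset. All parameters are illustrative.

```python
# Toy clustering-based detection of a q-ary word: 1-D k-means on the received
# samples, then map each sample to the rank of its cluster centre. A common
# gain/offset shifts all centres together, so the detected indices are unaffected.
import numpy as np

def kmeans_1d(samples, q, iters=50):
    centres = np.quantile(samples, np.linspace(0, 1, q))   # simple init
    for _ in range(iters):
        labels = np.argmin(np.abs(samples[:, None] - centres[None, :]), axis=1)
        for k in range(q):
            if np.any(labels == k):
                centres[k] = samples[labels == k].mean()
    labels = np.argmin(np.abs(samples[:, None] - centres[None, :]), axis=1)
    return centres, labels

def detect(received, q):
    centres, labels = kmeans_1d(np.asarray(received, dtype=float), q)
    ranks = np.argsort(np.argsort(centres))     # rank of each centre
    return ranks[labels]                        # symbol index per sample

rng = np.random.default_rng(3)
word = rng.integers(0, 3, size=30)              # q = 3 symbols
received = 0.9 * word + 0.4 + rng.normal(0, 0.05, size=word.size)  # gain/offset/noise
print("symbol errors:", np.count_nonzero(detect(received, 3) != word))
```
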
Article
Full-text available
Sequences encoded with Pearson codes are immune to channel gain and offset mismatch that cause performance loss in communication systems. In this paper, we introduce an efficient method of constructing capacity-approaching variable-length Pearson codes. We introduce a finite state machine (FSM) description of Pearson codes, and present a variable-length code construction process based on this FSM. We then analyze the code rate, redundancy and the convergence property of our codes. We show that our proposed codes have less redundancy than codes recently described in the literature and that they can be implemented in a straightforward fashion.
Article
Full-text available
We consider the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal. We derive simple rules for dynamically estimating the unknown scale (gain) and offset. The estimates of the actual gain and offset so obtained are used to adjust the threshold levels or to re-scale the received signal within its regular range. Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where optimum use can be made of the distance properties of the code by applying, for example, the Chase algorithm. A worked example of a spin-torque transfer magnetic random access memory (STT-MRAM) with an application to an extended (72, 64) Hamming code is described, where the retrieved signal is perturbed by additive Gaussian noise and unknown gain or offset.
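
The exact estimation rules are in the paper; the sketch below only illustrates the general re-scaling idea under simplifying assumptions: binary levels 0/1, and gain/offset estimates taken from the averages of the k smallest and k largest received samples, assuming each codeword is known to contain at least k zeros and k ones.

```python
# Rough sketch of the re-scaling idea (not the paper's exact estimation rules):
# estimate the unknown gain and offset from the averages of the k lowest and k
# highest received samples, bring the signal back to its nominal 0/1 range, and
# apply a fixed threshold.
import numpy as np

LOW, HIGH = 0.0, 1.0       # nominal binary signal levels

def rescale_and_detect(r, k=2):
    r = np.asarray(r, dtype=float)
    s = np.sort(r)
    est_low = s[:k].mean()                 # estimate of gain*LOW + offset
    est_high = s[-k:].mean()               # estimate of gain*HIGH + offset
    gain = (est_high - est_low) / (HIGH - LOW)
    offset = est_low - gain * LOW
    rescaled = (r - offset) / gain         # back to the nominal range
    return (rescaled > 0.5).astype(int), rescaled

rng = np.random.default_rng(7)
codeword = np.array([0, 1, 1, 0, 1, 0, 0, 1])
received = 0.8 * codeword + 0.3 + rng.normal(0, 0.05, codeword.size)
bits, _ = rescale_and_detect(received)
print("detected:", bits)
```
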
Conference Paper
Full-text available
The recently proposed Pearson codes offer immunity against channel gain and offset mismatch. These codes have very low redundancy, but efficient coding procedures were lacking. In this paper, systematic Pearson coding schemes are presented. The redundancy of these schemes is analyzed for memoryless uniform sources. It is concluded that simple coding can be established at only a modest rate loss.
Article
Full-text available
The performance of certain transmission and storage channels, such as optical data storage and nonvolatile memory (flash), is seriously hampered by the phenomena of unknown offset (drift) or gain. We will show that minimum Pearson distance (MPD) detection, unlike conventional minimum Euclidean distance detection, is immune to offset and/or gain mismatch. MPD detection is used in conjunction with T-constrained codes that consist of q-ary codewords, where in each codeword T reference symbols appear at least once. We will analyze the redundancy of the new q-ary coding technique and compute the error performance of MPD detection in the presence of additive noise. Implementation issues of MPD detection will be discussed, and results of simulations will be given.
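
For concreteness, a minimal MPD detector over a toy binary codebook is sketched below; the codebook is illustrative, and constant codewords are excluded because the Pearson distance is undefined for them.

```python
# Minimum Pearson distance (MPD) detection for a toy binary codebook. The
# Pearson distance 1 - rho(r, x) is invariant to a common gain and offset on r,
# which is why MPD detection is immune to such mismatch. Constant codewords
# (all-zeros, all-ones) are excluded, since rho is undefined for them.
import numpy as np

CODEBOOK = np.array([[0, 0, 1, 1],
                     [0, 1, 0, 1],
                     [0, 1, 1, 0],
                     [1, 0, 0, 1]], dtype=float)

def pearson_distance(r, x):
    r_c, x_c = r - r.mean(), x - x.mean()
    return 1.0 - np.dot(r_c, x_c) / (np.linalg.norm(r_c) * np.linalg.norm(x_c))

def mpd_decode(r):
    return min(CODEBOOK, key=lambda x: pearson_distance(r, x))

rng = np.random.default_rng(5)
sent = CODEBOOK[1]
# gain 1.3, offset 0.6, plus noise: MPD still recovers the codeword
received = 1.3 * sent + 0.6 + rng.normal(0, 0.1, sent.size)
print("decoded:", mpd_decode(received))
```
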
Article
Full-text available
This paper presents a practical writing/reading scheme for nonvolatile memories, called balanced modulation, for minimizing the asymmetric component of errors. The main idea is to encode data using a balanced error-correcting code. When reading information from a block, the scheme adjusts the reading threshold such that the resulting word is also balanced or approximately balanced. Balanced modulation has suboptimal performance for any cell-level distribution and can be easily implemented in current nonvolatile memory systems. Furthermore, we study the construction of balanced error-correcting codes, in particular balanced LDPC codes, which have very efficient encoding and decoding algorithms and are more efficient than prior constructions of balanced error-correcting codes.
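
A minimal illustration of the reading side of this idea: if every stored block is balanced, the read threshold can simply be placed at the median of the retrieved cell levels. The drift model and numbers below are invented for illustration.

```python
# Minimal illustration of the balanced-modulation reading idea: since each
# stored block is encoded to contain equally many zeros and ones, the read
# threshold can be placed at the median of the retrieved cell levels, which
# keeps the read word balanced even if all levels drift together.
import numpy as np

def read_balanced(levels):
    levels = np.asarray(levels, dtype=float)
    threshold = np.median(levels)          # half the cells above, half below
    return (levels > threshold).astype(int)

rng = np.random.default_rng(11)
stored = np.array([1, 0, 1, 0, 0, 1, 1, 0])                      # balanced codeword
drifted = stored * 0.7 + 0.5 + rng.normal(0, 0.05, stored.size)  # common drift + noise
print("read back:", read_balanced(drifted))
```
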
Article
Full-text available
In 1986, Don Knuth published a very simple algorithm for constructing sets of bipolar codewords with equal numbers of ones and zeros, called balanced codes. Knuth's algorithm is well suited for use with large codewords. The redundancy of Knuth's balanced codes is a factor of two larger than that of a code comprising the full set of balanced codewords. In this paper, we will present results of our attempts to improve the performance of Knuth's balanced codes.
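
For reference, the core of Knuth's construction can be sketched in a few lines: for a binary word of even length n there is always an index k such that inverting the first k bits balances the word, and k is what the (separately balanced) prefix has to convey. The encoding of the prefix itself is omitted here.

```python
# Core of Knuth's balancing algorithm: find an index k such that inverting the
# first k bits of an even-length binary word makes it balanced; the decoder
# re-inverts the first k bits to recover the original word.
import numpy as np

def knuth_balance(word):
    word = np.asarray(word, dtype=int)
    n = len(word)
    for k in range(n + 1):
        candidate = word.copy()
        candidate[:k] ^= 1                  # invert the first k bits
        if candidate.sum() == n // 2:       # balanced?
            return k, candidate
    raise ValueError("word length must be even")

def knuth_unbalance(k, balanced):
    word = np.asarray(balanced, dtype=int).copy()
    word[:k] ^= 1
    return word

data = np.array([1, 1, 1, 0, 1, 1, 0, 1])
k, balanced = knuth_balance(data)
print("k =", k, "balanced word =", balanced)
print("recovered      =", knuth_unbalance(k, balanced))
```
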
Conference Paper
Full-text available
Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing magnetic hard disks or serving as an additional secondary storage. We study the design of algorithms and data structures that can exploit flash memory devices better. For this, we characterize the performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effect of misalignments, aging, past I/O patterns, etc., on the performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block-based devices), the algorithms designed in the RAM model or the external memory model do not realize the full potential of the flash memory devices. We later give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.
Conference Paper
Maximum likelihood (ML) decision criteria have been developed for channels suffering from signal independent offset mismatch. Here, such criteria are considered for signal dependent offset, which means that the value of the offset may differ for distinct signal levels rather than being the same for all levels. An ML decision criterion is derived, assuming uniform distributions for both the noise and the offset. In particular, for the proposed ML decoder, bounds are determined on the standard deviations of the noise and the offset which lead to a word error rate equal to zero. Simulation results are presented confirming the findings.
Article
Besides the omnipresent noise, gain and/or offset mismatches are other important impairments in communication and storage systems. In the prior art, a maximum likelihood (ML) decision criterion has already been developed for Gaussian noise channels suffering from unknown gain and offset mismatches. Here, such criteria are considered for Gaussian noise channels suffering from either an unknown offset or an unknown gain. Furthermore, ML decision criteria are derived when assuming a Gaussian or uniform distribution for the offset in the absence of gain mismatch.
Article
K.A.S. Immink and J.H. Weber recently defined and studied a channel with both gain and offset mismatch, modelling the behaviour of charge-leakage in flash memory. They proposed a decoding measure for this channel based on minimising Pearson distance (a notion from cluster analysis). The paper derives a formula for maximum likelihood decoding for this channel, and also defines and justifies a notion of minimum distance of a code in this context.
Article
Flash, already one of the dominant forms of data storage for mobile consumer devices, such as smartphones and media players, is experiencing explosive growth in cloud and enterprise applications. Flash devices offer very high access speeds, low power consumption, and physical resiliency. Our goal in this article is to provide a high-level overview of error correction for Flash. We will begin by discussing Flash functionality and design. We will introduce the nature of Flash deficiencies. Afterwards, we describe the basics of ECCs. We discuss BCH and LDPC codes in particular and wrap up the article with more directions for Flash coding.
Article
The reliability of mass storage systems, such as optical data recording and non-volatile memory (Flash), is seriously hampered by uncertainty of the actual value of the offset (drift) or gain (amplitude) of the retrieved signal. The recently introduced minimum Pearson distance detection is immune to unknown offset or gain, but this virtue comes at the cost of a lessened noise margin at nominal channel conditions. We will present a novel hybrid detection method, where we combine the outputs of the minimum Euclidean distance and Pearson distance detectors so that we may trade detection robustness versus noise margin. We will compute the error performance of hybrid detection in the presence of unknown channel mismatch and additive noise.
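
The paper's exact combination rule is not reproduced here; the sketch below shows one plausible (assumed, not the authors') way to blend the two detectors, namely a weighted sum of the squared Euclidean distance and the Pearson distance, with a parameter lam trading nominal noise margin against mismatch robustness.

```python
# One plausible blend (not necessarily the authors' formulation): score each
# codeword by a weighted sum of its squared Euclidean distance and its Pearson
# distance to the received vector. lam = 0 recovers pure Euclidean detection,
# lam = 1 pure Pearson detection; intermediate values trade noise margin
# against robustness to gain/offset mismatch.
import numpy as np

CODEBOOK = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], dtype=float)

def pearson_distance(r, x):
    r_c, x_c = r - r.mean(), x - x.mean()
    return 1.0 - np.dot(r_c, x_c) / (np.linalg.norm(r_c) * np.linalg.norm(x_c))

def hybrid_decode(r, lam=0.5):
    def score(x):
        return (1 - lam) * np.sum((r - x) ** 2) + lam * pearson_distance(r, x)
    return min(CODEBOOK, key=score)

r = np.array([0.1, 1.2, 1.0, 0.2])          # noisy, slightly offset reception
print("decoded:", hybrid_decode(r, lam=0.3))
```
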
Article
Coding schemes for storage channels, such as optical recording and non-volatile memory (Flash), with unknown gain and offset are presented. In its simplest case, the coding schemes guarantee that a symbol with a minimum value (floor) and a symbol with a maximum (ceiling) value are always present in a codeword so that the detection system can estimate the momentary gain and the offset. The results of the computer simulations show the performance of the new coding and detection methods in the presence of additive noise.
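
A crude illustration of the floor/ceiling principle (not the paper's actual codes): append one symbol at the lowest and one at the highest nominal level to every codeword, estimate the momentary gain and offset from the observed extremes, and round the re-scaled samples to the nearest level. Levels and numbers are assumptions for the sketch.

```python
# Floor/ceiling sketch: guarantee that each codeword contains the minimum and
# maximum nominal level, estimate gain and offset from the observed extremes,
# re-scale, and round to the nearest nominal level.
import numpy as np

LEVELS = np.array([0.0, 1.0, 2.0, 3.0])        # nominal q-ary levels

def encode(data):
    return np.concatenate([data, [LEVELS.min(), LEVELS.max()]])  # add references

def detect(received):
    received = np.asarray(received, dtype=float)
    gain = (received.max() - received.min()) / (LEVELS.max() - LEVELS.min())
    offset = received.min() - gain * LEVELS.min()
    rescaled = (received - offset) / gain
    # round each sample to the nearest nominal level; drop the two references
    symbols = LEVELS[np.argmin(np.abs(rescaled[:, None] - LEVELS), axis=1)]
    return symbols[:-2]

rng = np.random.default_rng(17)
data = np.array([2.0, 0.0, 3.0, 1.0, 1.0])
received = 1.2 * encode(data) + 0.4 + rng.normal(0, 0.05, data.size + 2)
print("detected:", detect(received))
```
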
Conference Paper
Multilevel-cell (MLC) storage is the preferred way for achieving increased capacity and thus lower cost-per-bit in memory technologies. In phase-change memory (PCM), MLC storage is hampered by noise and resistance drift. In this paper the issue of reliability in MLC PCM devices is addressed at the array level. The purpose of this study is to identify the dominant reliability issues in PCM arrays and to provide a practical methodology to assess the reliability and predict the retention of multilevel states. Experimental data are used to derive and fit simple empirical models which can be used to assess the device reliability over the course of time.
Book
When first published in 2005, Matrix Mathematics quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully updated and expanded edition, the author brings together the latest results on matrix theory to make this the most complete, current, and easy-to-use book on matrices. Each chapter describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly and rigorously with cross references, citations to the literature, and illuminating remarks. Beginning with preliminaries on sets, functions, and relations, Matrix Mathematics covers all of the major topics in matrix theory, including matrix transformations; polynomial matrices; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; vector and matrix norms; the matrix exponential and stability theory; and linear systems and control theory. Also included are a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page references, and an exhaustive subject index. This significantly expanded edition of Matrix Mathematics features a wealth of new material on graphs, scalar identities and inequalities, alternative partial orderings, matrix pencils, finite groups, zeros of multivariable transfer functions, roots of polynomials, convex functions, and matrix norms. Covers hundreds of important and useful results on matrix theory, many never before available in any book. Provides a list of symbols and a summary of conventions for easy use. Includes an extensive collection of scalar identities and inequalities. Features a detailed bibliography and author index with page references. Includes an exhaustive subject index with cross-referencing.
Article
The temperature dependent tunneling resistance of magnetic tunnel junctions with MgO barriers was characterized. In the junctions prepared by magnetron sputtering, the tunnel magnetoresistance decreases with increasing temperature. Various contributions to the tunnel conductance are discussed using different models. Not only the direct elastic tunneling contributes to the temperature dependence of tunnel magnetoresistance, but also the assisted, spin-independent tunneling plays an important role in determining the temperature dependent behavior in our magnetic tunneling junctions. The process is further investigated assuming magnon and phonon assisted tunneling and compared to junctions with alumina tunnel barrier.
Article
Since DC offsets can have a large, negative impact on the performance of direct-conversion receivers (DCRs), it is important to determine the offset performance of a given design. This article discusses one technique that can be used to characterize and measure DC offsets in DCR circuit applications. DC offsets are a primary concern in the design of DCRs and must be characterized in determining the performance of a particular receiver design. By measuring the DC level at the output of a DCR front-end in separate steps, the sources of the offsets (device mismatches, LO self-mixing, and second-order nonlinearities) can be resolved.
Article
A class of codes and decoders is described for transmitting digital information by means of bandlimited signals in the presence of additive white Gaussian noise. The system, called permutation modulation, has many desirable features. Each code word requires the same energy for transmission. The receiver, which is maximum likelihood, is algebraic in nature, relatively easy to instrument, and does not require local generation of the possible sent messages. The probability of incorrect decoding is the same for each sent message. Certain of the permutation modulation codes are more efficient (in a sense described precisely) than any other known digital modulation scheme. PCM, PPM, orthogonal, and biorthogonal codes are included as special cases of permutation modulation.
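
Since every codeword is a permutation of one base vector, the ML receiver for (Variant I) permutation modulation under additive white Gaussian noise reduces to a rank assignment: the k-th smallest received component is mapped to the k-th smallest base symbol. The base vector below is an illustrative choice.

```python
# Rank-based ML receiver for (Variant I) permutation modulation under AWGN:
# assign the sorted base symbols to the received components in rank order.
import numpy as np

BASE = np.array([-3.0, -1.0, -1.0, 1.0, 1.0, 3.0])   # illustrative amplitude multiset

def pm_decode(received):
    received = np.asarray(received, dtype=float)
    decoded = np.empty_like(received)
    # the k-th smallest received sample gets the k-th smallest base symbol
    decoded[np.argsort(received)] = np.sort(BASE)
    return decoded

rng = np.random.default_rng(23)
sent = rng.permutation(BASE)                 # a random codeword
received = sent + rng.normal(0, 0.4, BASE.size)
print("sent   :", sent)
print("decoded:", pm_decode(received))
```
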
Rank modulation for flash memories, by A. Jiang, R. Mateescu, M. Schwartz, and J. Bruck.