**Figures**


Fig 5 - uploaded by Jehoshua Bruck


## Source publication

This paper presents a practical writing/reading scheme for nonvolatile
memories, called balanced modulation, which minimizes the asymmetric component
of errors. The main idea is to encode data using a balanced error-correcting
code. When reading information from a block, the scheme adjusts the reading
threshold such that the resulting word is also balanced o...

## Context in source publication

**Context 1**

... two trends. First, due to cell-level drift, the difference between the means of g_t(v) and h_t(v) becomes smaller. Second, due to the existence of different types of noise and disturbance, their variances increase over time. To study the performance of balanced modulation, we consider both effects separately in some simple scenarios (see Fig. 5). We assume that the fixed threshold is v_f = 1/2, which satisfies g_0(v_f) = h_0(v_f). In the above example, the cell-level distribution corresponding to bit '1' drifts but its variance does not change. We ...
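The drift effect discussed in this excerpt can be reproduced numerically. The sketch below is illustrative only; the Gaussian cell-level models and all parameter values are assumptions, not taken from the paper. The '1' distribution drifts downward while the fixed threshold v_f = 1/2 stays put, producing asymmetric 1-to-0 errors, whereas a balanced (median) threshold tracks the drift.

```python
import random

random.seed(1)

def read_errors(drift, n=1000, sigma=0.08):
    """Count read errors for n cells, half storing '0' (mean 0.2) and
    half storing '1' (mean 0.8 - drift), under a fixed threshold 0.5
    versus a balanced (median) threshold."""
    zeros = [random.gauss(0.2, sigma) for _ in range(n // 2)]
    ones = [random.gauss(0.8 - drift, sigma) for _ in range(n // 2)]
    levels = zeros + ones
    fixed = sum(v >= 0.5 for v in zeros) + sum(v < 0.5 for v in ones)
    t = sorted(levels)[n // 2]          # balanced threshold: the median
    bal = sum(v >= t for v in zeros) + sum(v < t for v in ones)
    return fixed, bal

print(read_errors(0.0))   # no drift: both thresholds behave similarly
print(read_errors(0.35))  # drift: fixed threshold mislabels many '1' cells
```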

## Similar publications

We propose a novel joint decoding technique for distributed source-channel
(DSC) coded systems for transmission of correlated binary Markov sources over
additive white Gaussian noise (AWGN) channels. In the proposed scheme,
relatively short-length, low-density parity-check (LDPC) codes are
independently used to encode the bit sequences of each sour...

In this work we study the reliability and secrecy performance achievable by
practical LDPC codes over the Gaussian wiretap channel. While several works
have already addressed this problem in asymptotic conditions, i.e., under the
hypothesis of codewords of infinite length, only a few approaches exist for the
finite length regime. We propose an appr...

We consider the decoding of LDPC codes over GF(q) with the low-complexity
majority algorithm from [1]. A modification of this algorithm with multiple
thresholds is suggested. A lower estimate on the decoding radius realized by
the new algorithm is derived. The estimate is shown to be better than the
estimate for a single threshold majority decoder....

Spatially coupled low-density parity-check (SC-LDPC) codes can achieve the
channel capacity under low-complexity belief propagation (BP) decoding. For
practical finite coupling lengths, however, there is a non-negligible rate
loss due to termination effects. In this paper, we focus on tail-biting SC-LDPC
codes, which do not require termination an...

## Citations

... The detector resilience to unknown mismatch caused by drift can be improved in various ways, for example, by employing coding techniques. Balanced codes [6,7,8,9] and composition check codes [10,11], in conjunction with Slepian's optimal detection [12], offer excellent resilience in the face of channel mismatch on a per-block basis. These coding and signal processing techniques are often considered too expensive in terms of code redundancy and hardware, in particular when high-speed applications are considered. ...

We report on the feasibility of k-means clustering techniques for the dynamic threshold detection of encoded q-ary symbols transmitted over a noisy channel with partially unknown channel parameters. We first assess the performance of the k-means clustering technique without dedicated constrained coding. We then apply constrained codes, which allow a wider range of channel uncertainties and thus improve the detection reliability.
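A minimal sketch of the clustering idea (a plain 1-D k-means written out by hand; the cited work's exact algorithm and parameters are not reproduced here): the q received level clusters are found without knowing the channel gain or offset, and symbols are assigned by cluster rank.

```python
def kmeans_1d(samples, q, iters=25):
    """Cluster scalar channel samples into q level groups (plain 1-D
    k-means) and return a symbol decision for every sample.

    Centroids are initialised by spreading them over the sample range,
    so no knowledge of the channel gain or offset is needed.
    """
    lo, hi = min(samples), max(samples)
    centers = [lo + (hi - lo) * (j + 0.5) / q for j in range(q)]
    for _ in range(iters):
        groups = [[] for _ in range(q)]
        for v in samples:
            j = min(range(q), key=lambda j: abs(v - centers[j]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    centers.sort()
    return [min(range(q), key=lambda j: abs(v - centers[j]))
            for v in samples]

# 4-ary symbols 0..3 sent as levels 0,1,2,3, received with gain 0.8,
# offset 0.3 and a little noise:
rx = [0.31, 1.12, 1.89, 2.71, 0.28, 1.08, 1.92, 2.69]
print(kmeans_1d(rx, 4))  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```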

... To illustrate, consider the situation in Fig. 7(a). If we set L_max = 10, then the received sequence is padded with invalid symbol values, resulting in a vector [0, 1, 0, 0, 0, 1, 0, 0, 1, −1] of length 10, and the output codeword boundary vector is γ = [2, 6, 9, 11, 11]. ...

Constrained sequence (CS) codes, including fixed-length CS codes and variable-length CS codes, have been widely used in modern wireless communication and data storage systems. Sequences encoded with constrained sequence codes satisfy constraints imposed by the physical channel to enable efficient and reliable transmission of coded symbols. In this paper, we propose using deep learning approaches to decode fixed-length and variable-length CS codes. Traditional encoding and decoding of fixed-length CS codes rely on look-up tables (LUTs), which are sensitive to errors that occur during transmission. We introduce fixed-length constrained sequence decoding based on multilayer perceptron (MLP) networks and convolutional neural networks (CNNs), and demonstrate that we are able to achieve low bit error rates that are close to maximum a posteriori probability (MAP) decoding as well as improve the system throughput. Further, implementation of capacity-achieving fixed-length codes, where the complexity is prohibitively high with LUT decoding, becomes practical with deep learning-based decoding. We then consider CNN-aided decoding of variable-length CS codes. Unlike conventional decoding, where the received sequence is processed bit by bit, we propose using CNNs to perform one-shot batch processing of variable-length CS codes such that an entire batch is decoded at once, which improves the system throughput. Moreover, since the CNNs can exploit global information with batch processing instead of only making use of local information as in conventional bit-by-bit processing, the error rates can be reduced. We present simulation results that show excellent performance with both fixed-length and variable-length CS codes that are used at the frontiers of wireless communication systems.
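The fragility of LUT decoding mentioned above is easy to demonstrate with a toy example. The 2-bit-to-4-bit table below is purely illustrative (it is not a code from the cited paper): a single channel bit flip turns a valid codeword into a word that is absent from the table, so a pure LUT decoder has no answer at all.

```python
# Toy fixed-length constrained-sequence code: each 2-bit dataword maps
# to a 4-bit codeword with no run of more than two identical bits.
# This table is purely illustrative, not taken from the cited paper.
ENC = {(0, 0): (0, 1, 0, 1),
       (0, 1): (0, 1, 1, 0),
       (1, 0): (1, 0, 0, 1),
       (1, 1): (1, 0, 1, 0)}
DEC = {cw: dw for dw, cw in ENC.items()}

cw = ENC[(1, 0)]
print(DEC[cw])  # clean channel: decodes back to (1, 0)

# A single channel bit flip produces a word that is not in the table:
corrupted = (1, 1, 0, 1)
print(DEC.get(corrupted, "invalid"))  # -> invalid
```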

... Estimation of the unknown shifts may be achieved by using reference cells, but this is very expensive with respect to redundancy. Also, coding techniques can be applied to strengthen the detector's reliability in case of scaling and offset mismatch; these include rank modulation [9], balanced codes [10], and composition check codes [11]. However, these methods often suffer from large redundancy and high complexity. ...

Reliability is a critical issue for modern multi-level cell memories. We consider a multi-level cell channel model such that the retrieved data is not only corrupted by Gaussian noise, but hampered by scaling and offset mismatch as well. We assume that the intervals from which the scaling and offset values are taken are known, but no further assumptions on the distributions on these intervals are made. We derive maximum likelihood (ML) decoding methods for such channels, based on finding a codeword that has closest Euclidean distance to a specified set defined by the received vector and the scaling and offset parameters. We provide geometric interpretations of scaling and offset and also show that certain known criteria appear as special cases of our general setting.
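A hedged sketch of the decoding rule this abstract describes: for each codeword, the scale a and offset b are fitted by least squares and clipped to their known intervals, and the codeword whose scaled-and-shifted version lies closest to the received vector is chosen. The codebook, signal values, and the clipping heuristic are all illustrative assumptions; the paper's geometric treatment of the constrained minimisation is more refined.

```python
def ml_decode(r, codebook, a_range, b_range):
    """Sketch of ML decoding under scaling/offset mismatch: for each
    codeword x, fit scale a and offset b by least squares, clip them
    to their known intervals, and pick the codeword whose version
    a*x + b lies closest (in Euclidean distance) to the received r."""
    n = len(r)
    best, best_d = None, float("inf")
    for x in codebook:
        mx = sum(x) / n
        mr = sum(r) / n
        var = sum((xi - mx) ** 2 for xi in x)
        if var > 0:
            a = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, r)) / var
        else:
            a = a_range[0]
        a = min(max(a, a_range[0]), a_range[1])
        b = min(max(mr - a * mx, b_range[0]), b_range[1])
        d = sum((ri - (a * xi + b)) ** 2 for xi, ri in zip(x, r))
        if d < best_d:
            best, best_d = x, d
    return best

codebook = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 1, 0, 0)]
# (0, 1, 0, 1) sent with scale 1.2, offset 0.4, mild noise:
r = [0.42, 1.58, 0.38, 1.61]
print(ml_decode(r, codebook, (0.8, 1.5), (0.0, 0.5)))  # -> (0, 1, 0, 1)
```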

... Other approaches are error-correcting techniques. To date, various coding techniques have been applied to aid detection in the case of channel mismatch, specifically rank modulation [4], balanced codes [5], and composition check codes [6]. These methods are often considered too expensive in terms of redundancy and complexity. ...

Data storage systems may not only be disturbed by noise; in some cases, the error performance can also be seriously degraded by offset mismatch. Here, channels are considered for which both the noise and the offset are bounded. For such channels, Euclidean distance-based decoding, Pearson distance-based decoding, and maximum likelihood decoding are considered. In particular, for each of these decoders, bounds are determined on the magnitudes of the noise and offset intervals that lead to a word error rate equal to zero. Case studies with simulation results are presented confirming the findings.
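The contrast between the first two decoders in this abstract can be shown in a few lines. The codebook and signal values below are toy assumptions: a large additive offset pulls the received vector closer (in Euclidean distance) to a wrong codeword, while the Pearson distance, which is invariant to an additive offset of the received vector, still picks the transmitted one.

```python
def euclid(r, x):
    """Squared Euclidean distance between received vector and codeword."""
    return sum((ri - xi) ** 2 for ri, xi in zip(r, x))

def pearson(r, x):
    """Pearson distance 1 - rho(r, x); invariant to an additive offset
    (and positive scaling) of the received vector r."""
    n = len(r)
    mr, mx = sum(r) / n, sum(x) / n
    num = sum((ri - mr) * (xi - mx) for ri, xi in zip(r, x))
    den = (sum((ri - mr) ** 2 for ri in r)
           * sum((xi - mx) ** 2 for xi in x)) ** 0.5
    return 1 - num / den

codebook = [(0, 1, 0, 1), (1, 1, 0, 1), (1, 0, 1, 0)]
r = [0.6, 1.6, 0.6, 1.6]  # (0, 1, 0, 1) shifted by a large offset 0.6
print(min(codebook, key=lambda x: euclid(r, x)))   # -> (1, 1, 0, 1): wrong
print(min(codebook, key=lambda x: pearson(r, x)))  # -> (0, 1, 0, 1): right
```

Note that Pearson-distance decoding needs a codebook without constant words (the denominator would vanish); the cited literature constructs such Pearson codes explicitly.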

... Since we now have W p (Ψ) which contains the same number of words in each state, we can perform partial extensions and NGH coding to obtain the codebook in a manner similar to that introduced in Section III-D. As above, the evaluation of the average code rate is given by (10). Within predetermined limits on n max and/or l max , an exhaustive search can be performed to determine the codebook with the highestR. ...

We consider the construction of capacity-approaching variable-length constrained sequence codes based on multi-state encoders that permit state-independent decoding. Based on the finite state machine description of the constraint, we first select the principal states and establish the minimal sets. By performing partial extensions and normalized geometric Huffman coding, efficient codebooks that enable state-independent decoding are obtained. We then extend this multi-state approach to a construction technique based on n-step FSMs. We demonstrate the usefulness of this approach by constructing capacity-approaching variable-length constrained sequence codes with improved efficiency and/or reduced implementation complexity to satisfy a variety of constraints, including the runlength-limited (RLL) constraint, the DC-free constraint, and the DC-free RLL constraint, with an emphasis on their application in visible light communications.

... Alternatively, coding techniques can be applied to aid detection in the case of channel mismatch. Specifically, balanced codes [5], [6], [7] and composition check codes [8], [9], preferably in conjunction with Slepian's optimal detection [10], offer resilience in the face of channel mismatch. These coding methods are often considered too expensive in terms of coding hardware and redundancy, specifically when high-speed applications are considered. ...

We investigate machine learning based on clustering techniques that are suitable for the detection of encoded strings of q-ary symbols transmitted over a noisy channel with partially unknown characteristics. We consider the detection of the q-ary data as a classification problem, where objects are recognized from a corrupted vector obtained by an unknown corruption process. We first evaluate the error performance of the k-means clustering technique without constrained coding. Second, we apply constrained codes that improve the detection reliability and allow a wider range of channel uncertainties.

... Also, coding techniques can be applied to aid detection in the case of channel mismatch. Specifically, balanced codes [3], [4], [5] and composition check codes [6], [7], preferably in conjunction with Slepian's optimal detection [8], have been shown to offer solace in the face of channel mismatch. These coding methods are often considered too expensive in terms of coding hardware and redundancy when high-speed applications are considered. ...

We consider the transmission and storage of encoded strings of symbols over a noisy channel, where dynamic threshold detection is proposed for achieving resilience against unknown scaling and offset of the received signal. We derive simple rules for dynamically estimating the unknown scale (gain) and offset. The estimates of the actual gain and offset so obtained are used to adjust the threshold levels or to re-scale the received signal within its regular range. Then, the re-scaled signal, brought into its standard range, can be forwarded to the final detection/decoding system, where optimum use can be made of the distance properties of the code by applying, for example, the Chase algorithm. A worked example of a spin-torque transfer magnetic random access memory (STT-MRAM) with an application to an extended (72, 64) Hamming code is described, where the retrieved signal is perturbed by additive Gaussian noise and unknown gain or offset.
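One simple way to realise the "estimate gain and offset, then re-scale" idea of this abstract is sketched below. It assumes a balanced codeword (half low, half high cells), so the lower and upper halves of the sorted samples estimate the received low and high levels; the paper derives its own estimation rules, and this is only an illustration of the principle.

```python
def rescale(r, lo_nominal=0.0, hi_nominal=1.0):
    """Sketch of dynamic threshold detection: estimate the unknown
    gain and offset from the received block itself and bring the
    signal back to its nominal range before final detection.

    Assumes a balanced codeword, so the lower and upper halves of the
    sorted samples estimate the received 'low' and 'high' levels.
    """
    n = len(r)
    s = sorted(r)
    lo_est = sum(s[: n // 2]) / (n // 2)      # estimated received low
    hi_est = sum(s[n // 2:]) / (n - n // 2)   # estimated received high
    gain = (hi_est - lo_est) / (hi_nominal - lo_nominal)
    offset = lo_est - gain * lo_nominal
    return [(ri - offset) / gain for ri in r]

# Balanced word 1,0,1,0 sent as levels 1,0,1,0, received with
# gain 0.5 and offset 0.25:
r = [0.75, 0.25, 0.75, 0.25]
print(rescale(r))  # -> [1.0, 0.0, 1.0, 0.0]
```

The re-scaled signal can then be passed to an ordinary detector/decoder, as the abstract describes.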

... Balanced, sometimes called dc-free, q-ary sequences have found widespread application in popular optical recording devices such as CD, DVD, and Blu-Ray [1], in cable communication [2], and recently in non-volatile (flash) memories [3]. A sequence of symbols is said to be balanced if the sum of the symbols equals the prescribed balancing value. ...

... Example 3: Let q = 5 and k = 2, and let the user word be a = (3, 2). Using the shortened linear code with the given generator matrix, the user word is encoded as x = (3, 2, 0, 1, 1, 4), and after appending a redundant '0' we have x = (3, 2, 0, 1, 1, 4, 0). ...

We investigate a Knuth-like scheme for balancing q-ary codewords, which has the virtue that look-up tables for coding and decoding the prefix are avoided by using precoding and error correction techniques. We show how the scheme can be extended to allow for error correction of single channel errors using a fast decoding algorithm that depends on syndromes only, making it considerably faster compared to the prior art exhaustive decoding strategy. A comparison between the new and prior art schemes, both in terms of redundancy and error performance, completes the study.
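The q-ary constructions above generalize Knuth's classical binary balancing scheme, which is worth sketching for reference: inverting the first i bits of a word for some index i always yields a balanced word, and the index i is what the prefix encodes. A minimal sketch (exhaustive search over i; real schemes encode i compactly):

```python
def knuth_balance(word):
    """Knuth's balancing scheme for binary words of even length n:
    inverting the first i bits for some index i always yields a word
    with equal numbers of 0s and 1s.  Returns (i, balanced_word).

    As i goes from 0 to n, the running weight changes by +-1 per step
    and moves from w to n - w, so it must pass through n/2."""
    n = len(word)
    for i in range(n + 1):
        cand = [1 - b for b in word[:i]] + word[i:]
        if sum(cand) == n // 2:
            return i, cand
    raise ValueError("n must be even")

i, bal = knuth_balance([1, 1, 1, 1, 0, 1])
print(i, bal)  # -> 2 [0, 0, 1, 1, 0, 1]
```

The decoder, given i, simply re-inverts the first i bits to recover the original word.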

... Constant weight codes were recently proposed for use with rank modulation in NAND flash memory devices [7]. They were also shown to be efficient in coping with electrical charge leakage in the cells, when used with dynamic reading thresholds [24], [25]. ...

In this article, we study properties and algorithms for constructing sets of 'constant weight' codewords with bipolar symbols, where the sum of the symbols is a constant q, q ≠ 0. We show various code constructions that extend Knuth's balancing vector scheme, q = 0, to the case where q > 0. We compute the redundancy of the new coding methods. Index Terms: balanced code, channel capacity, constrained code, magnetic recording, optical recording. I. INTRODUCTION. Let q be an integer. A set C, which is a subset of {w = (w_1, w_2, ..., w_n) ∈ {−1, +1}^n : Σ_{i=1}^n w_i = q}, ...
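The Knuth-like prefix-inversion step carries over to bipolar words, as the sketch below illustrates. This is a toy search, not one of the paper's constructions: negating the first i symbols walks the sum from s to -s in steps of ±2, so it reaches a target q whenever |q| ≤ |s| and q has the same parity as the word length n; the cited constructions add redundancy to guarantee balancing in all cases, which this search does not.

```python
def bipolar_prefix_balance(word, q):
    """Knuth-like step for bipolar (+1/-1) words: negate the first i
    symbols and look for an index i at which the symbol sum equals
    the target q.  Returns (i, balanced_word), or None if no prefix
    inversion reaches q."""
    for i in range(len(word) + 1):
        cand = [-w for w in word[:i]] + word[i:]
        if sum(cand) == q:
            return i, cand
    return None

# Sum 4 walked down to the target q = 2 by negating one symbol:
print(bipolar_prefix_balance([+1, +1, +1, -1, +1, +1], 2))
# -> (1, [-1, 1, 1, -1, 1, 1])
```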

... Balanced, sometimes called dc-free, q-ary sequences have found widespread application in popular optical recording devices such as Compact Disc, DVD, and Blu-Ray [1], in cable communication, and recently in non-volatile (Flash) memories [2]. Prior art codes were presented by Capocelli et al. [3], Tallini and Vaccaro [4], and Swart and Weber [5]. ...

We present a Knuth-like method for balancing q-ary codewords, which is characterized by the absence of a prefix that carries the information of the balancing index. Look-up tables for coding and decoding the prefix are avoided. We also show that this method can be extended to include error correction of single channel errors.