J.K. Wolf's research while affiliated with University of California, San Diego and other places

Publications (26)

Article
Full-text available
Flash memory is a nonvolatile computer memory composed of blocks of cells, wherein each cell is implemented as either a NAND or a NOR floating gate. NAND flash is currently the most widely used type of flash memory. In a NAND flash memory, every block of cells consists of numerous pages; rewriting even a single page requires the whole block to be eras...
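To make the block/page asymmetry above concrete, here is a minimal Python sketch, assuming hypothetical block and page sizes, in which a page can be programmed only while the block is erased, so rewriting one page forces a copy-out, a full block erase, and a write-back. The class and method names are invented for illustration.

```python
# Minimal, hypothetical model of a NAND flash block: pages can be programmed
# only when the block is in the erased state, so rewriting a single page
# forces an erase (and rewrite) of the entire block.

class NandBlock:
    def __init__(self, pages_per_block=64, page_size=2048):
        self.pages_per_block = pages_per_block
        self.page_size = page_size
        self.erase_count = 0
        self.pages = [None] * pages_per_block   # None = erased page

    def erase(self):
        """Erase the whole block (the smallest erasable unit)."""
        self.pages = [None] * self.pages_per_block
        self.erase_count += 1

    def program(self, page_index, data):
        """Program a single page; allowed only if that page is still erased."""
        if self.pages[page_index] is not None:
            raise RuntimeError("page already programmed; erase the whole block first")
        if len(data) > self.page_size:
            raise ValueError("data exceeds page size")
        self.pages[page_index] = bytes(data)

    def rewrite_page(self, page_index, data):
        """Rewriting one page: copy out live pages, erase the block, write back."""
        live = {i: p for i, p in enumerate(self.pages)
                if p is not None and i != page_index}
        self.erase()
        for i, p in live.items():
            self.program(i, p)
        self.program(page_index, data)


if __name__ == "__main__":
    block = NandBlock()
    block.program(0, b"hello")
    block.rewrite_page(0, b"world")            # triggers a full block erase
    print("erase count:", block.erase_count)   # 1
```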
Conference Paper
Full-text available
We consider terminated LDPC convolutional codes (LDPC-CC) constructed from protographs and explore the performance of these codes on correlated erasure channels including a single-burst channel (SBC) and Gilbert-Elliott channel (GEC). We consider code performance with a latency-constrained message passing decoder and the belief propagation decoder...
Conference Paper
Despite flash memory's promise, it suffers from many idiosyncrasies such as limited durability, data integrity problems, and asymmetry in operation granularity. As architects, we aim to find ways to overcome these idiosyncrasies while exploiting flash memory's useful characteristics. To be successful, we must understand the trade-offs between the p...
Conference Paper
The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) arising from previously written transitions is one of these. The signal distortion induced by NLTS is reduced by use of write precompensation during data recording. In this paper, we numerically evaluate the...
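As a toy illustration of how NLTS and write precompensation interact (not the paper's channel model), the sketch below assumes a transition is pulled earlier by a fixed fraction of a bit cell whenever the preceding bit cell also contains a transition, and that precompensation delays the write of exactly those transitions; the shift values are arbitrary.

```python
import numpy as np

def transition_positions(bits, T=1.0, nlts=0.0, precomp=0.0):
    """Write positions of transitions under a toy NLTS / precompensation model.

    bits    : 0/1 array, 1 = transition in that bit cell (nominal position k*T)
    nlts    : fraction of T by which a transition is pulled EARLIER when the
              preceding bit cell also contains a transition (toy model)
    precomp : fraction of T by which such transitions are written LATER
              (write precompensation) to cancel the shift
    """
    pos = []
    for k, b in enumerate(bits):
        if b == 1:
            p = k * T
            if k > 0 and bits[k - 1] == 1:   # adjacent preceding transition
                p -= nlts * T                # nonlinear transition shift
                p += precomp * T             # compensating write delay
            pos.append(p)
    return np.array(pos)

bits = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1])
ideal   = transition_positions(bits)
shifted = transition_positions(bits, nlts=0.15)
fixed   = transition_positions(bits, nlts=0.15, precomp=0.15)
print("max position error without precomp:", np.max(np.abs(shifted - ideal)))
print("max position error with precomp:   ", np.max(np.abs(fixed - ideal)))
```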
Conference Paper
In this paper we compare some aspects of the design and analysis of one-dimensional (1-D) and two-dimensional (2-D) storage systems. We show that for modulation codes and for the detection of signals corrupted by intersymbol interference and additive white Gaussian noise, the design and analysis is much more complicated in the 2-D case as compared...
Conference Paper
Full-text available
The achievable information rates for multilevel coding (MLC) systems with multistage decoding (MSD) are examined on two-dimensional binary-input intersymbol interference (ISI) channels. One MSD scheme employs trellis-based detection, while another involves zero-forcing equalization and linear noise prediction. Information rates are determined by ex...
Conference Paper
Full-text available
We present bit-stuffing schemes which encode arbitrary data sequences into two-dimensional (2-D) constrained arrays. We consider the class of 2-D runlength-limited (RLL) (d, ∞) constraints as well as the 'no isolated bits' (n.i.b.) constraint, both defined on the square lattice. The bit stuffing technique was previously introduced and applied t...
Conference Paper
This paper describes the analysis of convolutional codes on the erasure channel. We compare maximum likelihood (ML) sequence decisions and maximum a posteriori (MAP) symbol decisions for codes transmitted over the erasure channel. When a codeword from a linear error correcting code with elements from the field GF is transmitted ove...
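Over the erasure channel, a MAP symbol decision on an erased position reduces to asking whether that position is uniquely determined by the unerased ones. The sketch below performs this check for a generic binary linear code via Gaussian elimination over GF(2); the Hamming(7,4) generator matrix and the erasure pattern are illustrative choices, not taken from the paper.

```python
import numpy as np

def gf2_nullspace(A):
    """Return a basis (rows) of {x : A x = 0} over GF(2)."""
    A = A.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        pivots.append(col)
        row += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        x = np.zeros(n, dtype=np.uint8)
        x[f] = 1
        for r, c in enumerate(pivots):
            x[c] = A[r, f]          # back-substitute pivot variables
        basis.append(x)
    return np.array(basis, dtype=np.uint8) if basis else np.zeros((0, n), dtype=np.uint8)

def recoverable_erasures(G, erased):
    """Erased codeword positions that are uniquely determined (MAP-recoverable)."""
    n = G.shape[1]
    known = [j for j in range(n) if j not in erased]
    # Messages whose codewords vanish on all unerased positions.
    U = gf2_nullspace(G[:, known].T)
    ambiguous_cols = (U @ G) % 2
    ambiguous = {j for j in erased if ambiguous_cols[:, j].any()}
    return set(erased) - ambiguous

# Hamming(7,4) generator matrix, used purely as a small worked example.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
erased = {0, 2, 4, 5}
print("recoverable erased positions:", sorted(recoverable_erasures(G, erased)))
```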
Article
Full-text available
Uniform magnetization (dc) noise is studied using numerically generated arrays of grains. The effects of packing fraction, grain size distribution, and anisotropy orientation are examined. From electron microscopy results, in this initial study, the intergranular grain boundary separation is held fixed. In general, the dc noise increases with incre...
Conference Paper
Analytic expressions for the exact probability of erasure for systematic, rate- 1/2 convolutional codes used to communicate over the binary erasure channel and decoded using the soft-input, soft-output (SISO) and a posteriori probability (APP) algorithms are given. An alternative forward-backward algorithm which produces the same result as the SISO...
Conference Paper
The performance of iterative detection methods is evaluated for the binary-input two-dimensional (2D), linear inter-symbol interference (ISI) channel with additive white Gaussian noise (AWGN). All of the proposed detection schemes involve message passing between detectors operating iteratively on the rows and columns of the 2D channel observations....
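For reference, the following sketch generates the kind of 2D observation array that such row/column detectors iterate over: a binary input array passed through a small 2D linear ISI mask with AWGN added. The 2x2 mask, the SNR, and the crude matched-filter baseline at the end are arbitrary choices for illustration, not the paper's channel or detectors.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Binary input array (antipodal +/-1 symbols) on an N x N page.
N = 32
x = rng.integers(0, 2, size=(N, N)) * 2 - 1

# Hypothetical 2x2 averaging ISI mask; real targets vary by system.
h = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])

snr_db = 12.0
signal = convolve2d(x, h, mode="full")           # 2D linear ISI
noise_var = np.mean(signal ** 2) / (10 ** (snr_db / 10))
y = signal + rng.normal(0.0, np.sqrt(noise_var), size=signal.shape)

# 'y' is the observation array a row/column message-passing detector would
# iterate over; the matched-filter threshold below is only a crude baseline
# whose large residual-ISI error rate motivates better (iterative) detection.
x_hat = np.sign(convolve2d(y, h[::-1, ::-1], mode="valid"))
print("symbol error rate of the crude baseline:", np.mean(x_hat != x))
```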
Conference Paper
Parallel message-passing detectors for partial-response channels have the property that a bit is estimated using channel symbols in a window of size W centered upon that bit. Distinct input sequences that produce the same output sequence result in undesirable failure of window decoders, but precoding can eliminate this input-to-output mapping ambig...
Article
The design of finite-length decision-feedback equalization (DFE) forward and feedback filters under the assumption of genie-aided feedback and independent and equally likely transmitted symbols is considered. It is shown that the problem of determining DFE filters that minimize the probability of symbol error at high signal-to-noise ratio (SNR) is...
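A minimal sketch of the genie-aided DFE loop assumed in the paper: the forward filter acts on received samples and the feedback filter acts on known-correct past symbols. The channel, the fixed filter taps, and their lengths below are invented for illustration; a real design would optimize them for MSE or, as in the paper, for symbol error probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1D ISI channel and noise level (not from the paper).
h = np.array([1.0, 0.6, 0.3])
n_sym = 5000
a = rng.integers(0, 2, n_sym) * 2 - 1                  # +/-1 symbols
r = np.convolve(a, h)[:n_sym] + 0.1 * rng.normal(size=n_sym)

# Hypothetical fixed DFE filters: a trivial forward filter f and a feedback
# filter b chosen here to cancel the known postcursor ISI exactly.
f = np.array([1.0])                                    # forward filter
b = np.array([0.6, 0.3])                               # feedback taps

decisions = np.zeros(n_sym)
errors = 0
for k in range(n_sym):
    # Forward-filtered sample (causal part only, for simplicity).
    ff = sum(f[i] * r[k - i] for i in range(len(f)) if k - i >= 0)
    # Genie-aided feedback: past *transmitted* symbols, per the assumption.
    fb = sum(b[j] * a[k - 1 - j] for j in range(len(b)) if k - 1 - j >= 0)
    z = ff - fb
    decisions[k] = 1.0 if z >= 0 else -1.0
    errors += decisions[k] != a[k]

print("symbol error rate with genie-aided DFE:", errors / n_sym)
```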
Conference Paper
Previous work on the application of turbo decoding techniques to partial response channels has focused on additive white Gaussian noise channel models. Simulations using these ideal partial response channel models show gains exceeding 5 dB over uncoded systems at bit error rates of 10^-5. Since the APP detectors of the turbo decoder assum...
Article
Two algorithms for characterization of input error events producing specified distance at the output of certain binary-input partial-response (PR) channels are presented. Lists of error events are tabulated for PR channels of interest in digital recording
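The flavor of such a characterization can be reproduced by brute force for short events: enumerate nonzero input error sequences, pass them through a PR target, and sort by squared Euclidean output distance. The PR4 target (1 - D^2) and the event-length cap below are illustrative choices, not the paper's algorithms.

```python
import numpy as np
from itertools import product

def error_event_distances(target, max_len):
    """Brute-force list of (squared distance, error event) for a PR target.

    Input error events e_k in {-1, 0, +1} (difference of two binary input
    sequences), required to start and end with a nonzero symbol.
    """
    results = []
    for length in range(1, max_len + 1):
        for e in product((-1, 0, 1), repeat=length):
            if e[0] == 0 or e[-1] == 0:
                continue                      # skip padded duplicates
            out = np.convolve(e, target)
            results.append((float(np.sum(out.astype(float) ** 2)), e))
    results.sort(key=lambda t: t[0])
    return results

# PR4 target 1 - D^2, a common partial-response channel in recording.
pr4 = np.array([1, 0, -1])
for d2, e in error_event_distances(pr4, max_len=4)[:5]:
    print(f"d^2 = {d2:.1f}  error event = {e}")
```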
Article
Constrained codes are a key component in the digital recording devices that have become ubiquitous in computer data storage and electronic entertainment applications. This paper surveys the theory and practice of constrained coding, tracing the evolution of the subject from its origins in Shannon's classic 1948 paper to present-day applications in...
Conference Paper
Bit-stuffing constructions of binary 2-dimensional constrained arrays satisfying (d,∞) or (0,k) runlength constraints in both horizontal and vertical dimensions are described. Lower bounds on the capacity of these constrained arrays are derived
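A minimal sketch of the raster-scan bit-stuffing idea for the 2D (1, ∞) constraint (no two horizontally or vertically adjacent 1s): cells forced to 0 by a 1 already written above or to the left are stuffed, free cells take the next data bit, and the empirical data-bits-per-cell ratio gives a crude simulation-based estimate of a capacity lower bound. This is an illustrative reconstruction, not the paper's exact construction or its analytical bounds.

```python
import numpy as np

rng = np.random.default_rng(2)

def bit_stuff_2d(data_bits, rows, cols):
    """Raster-scan bit stuffing for the 2D (1, inf) runlength constraint."""
    A = np.zeros((rows, cols), dtype=np.uint8)
    used = 0
    for i in range(rows):
        for j in range(cols):
            forced_zero = (i > 0 and A[i - 1, j] == 1) or (j > 0 and A[i, j - 1] == 1)
            if forced_zero:
                A[i, j] = 0                      # stuffed bit
            else:
                A[i, j] = data_bits[used]        # free cell: next data bit
                used += 1
    return A, used

rows = cols = 200
data = rng.integers(0, 2, rows * cols)           # unbiased data bits
A, used = bit_stuff_2d(data, rows, cols)

# Sanity check: no two adjacent 1s in either dimension.
assert not np.any(A[:, :-1] & A[:, 1:]) and not np.any(A[:-1, :] & A[1:, :])

print("empirical rate (data bits per cell):", used / (rows * cols))
# For comparison, the hard-square capacity is about 0.5879; stuffing with
# biased data bits would close part of the gap.
```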
Conference Paper
The microtrack model for transition noise with partial erasure in thin film media has been shown to be a useful tool for simulating and analyzing detector performance under realistic conditions. However, several effects resulting from interactions between neighboring transitions are not accounted for in this model. These effects include transition...
Article
In this paper, data-dependent partial local feedback noise prediction schemes will be considered for magnetic recording applications where media noise is significant. These schemes are shown to provide improvement over standard partial response maximum likelihood detection as well as non-data-dependent noise prediction
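As a rough illustration of the idea (not the paper's detector), the sketch below fits separate linear noise predictors for two local data patterns (transition vs. no transition) and compares the residual variance against a single pattern-independent predictor; the synthetic noise model and pattern definition are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data-dependent noise: the noise correlation depends on whether the
# current bit pattern contains a transition (a stand-in for media noise).
n = 20000
bits = rng.integers(0, 2, n)
e = rng.normal(size=n)
noise = np.zeros(n)
for k in range(1, n):
    if bits[k] != bits[k - 1]:
        noise[k] = 0.9 * noise[k - 1] + 0.6 * e[k]   # transition: strongly correlated
    else:
        noise[k] = 0.1 * noise[k - 1] + 0.6 * e[k]   # no transition: nearly white

def predictor_residual(idx, order=2):
    """Fit a linear predictor of noise[k] from past noise on samples idx."""
    idx = idx[idx >= order]
    X = np.stack([noise[idx - i] for i in range(1, order + 1)], axis=1)
    y = noise[idx]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ w), len(idx)

trans = np.where(bits[1:] != bits[:-1])[0] + 1       # transition patterns
no_trans = np.where(bits[1:] == bits[:-1])[0] + 1

v_global, _ = predictor_residual(np.arange(n))
v_t, n_t = predictor_residual(trans)
v_n, n_n = predictor_residual(no_trans)
v_dd = (n_t * v_t + n_n * v_n) / (n_t + n_n)
print("residual variance, pattern-independent predictor:", round(v_global, 3))
print("residual variance, data-dependent predictors:    ", round(v_dd, 3))
```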
Article
Distance spectra for partial response maximum likelihood (PRML) channels are computed by finding generating functions for their error state diagrams. A matrix oriented technique for calculating the generating function is discussed that is reasonable for systems with a relatively small number of states. For those cases where the number of states is...
Article
A new model for the signal dependent transition noise and partial erasure which occur in the readback signal from thin film recording media is reviewed. This model, which is referred to as the microtrack model, has its basis in physics and can be entirely specified by three parameters of the media. It is used both as a simulation tool and for analy...
Article
An interpretation for controlled polarity modulation that allows the write waveforms to be generated using a standard run-length limited modulator is presented. The output of the run-length limited modulator passes through a digital filter and is then shifted up in frequency via amplitude modulation. This resultant signal has a bias signal added to...
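The described signal chain can be sketched directly: a stand-in for the run-length-limited modulator output is passed through a digital filter, shifted up in frequency by amplitude-modulating a carrier, and offset by a bias. All waveform parameters below (oversampling, filter taps, carrier frequency, bias level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for the output of a run-length-limited modulator: an NRZ waveform,
# oversampled, with illustrative parameters (not the paper's code or rates).
bits = rng.integers(0, 2, 64)
samples_per_bit = 16
nrz = np.repeat(bits * 2 - 1, samples_per_bit).astype(float)

# Digital filter applied to the modulator output (a simple low-pass FIR here).
taps = np.ones(8) / 8.0
filtered = np.convolve(nrz, taps, mode="same")

# Shift up in frequency via amplitude modulation onto a carrier, then add bias.
fs = 1.0                                   # normalized sample rate
fc = 0.1 * fs                              # hypothetical carrier frequency
t = np.arange(len(filtered))
carrier = np.cos(2 * np.pi * fc * t / fs)
bias = 0.5
write_waveform = filtered * carrier + bias

print("waveform length:", len(write_waveform),
      " mean (reflects bias):", round(float(np.mean(write_waveform)), 3))
```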

Citations

... Constrained coding is a method of eliminating error-prone sequences in data recording and communication systems, by encoding arbitrary user data sequences into sequences that respect a constraint (see, for example, [1] or [2]). In this work, we consider the problem of designing explicit constrained binary codes that achieve good rates of transmission over a noisy binary memoryless symmetric (BMS) channel. ...
... NAND flash memory exhibits limited endurance, which is typically specified by the maximum number of program-erase operations (or PE cycles) allowed on a memory block. The number of PE cycles may impact the nominal page program time, t_prog, and stressed pages with a high number of PE cycles may take more time to program [29][30][31]. Hence, an implementation of EXPRESS needs to consider the number of PE cycles. Figure 12a shows the cumulative distribution of the nominal page program time for SLC pages in a fresh flash memory block, and in a memory block that has been exposed to 10,000 PE cycles. ...
... The eIRA codes can be designed to have low bit-error-rate and codeword-error-rate floors [20], [21]. The parity-check matrices have a semirandom format (1), in which a sparse random matrix containing no weight-two columns is paired with a full-rank matrix given by (2). Note that, with the parity-check matrix in this form, encoding may be performed directly from it; that is, one may solve for the parity bits recursively. ...
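The recursive parity computation mentioned in this excerpt is immediate when the second part of the parity-check matrix has the dual-diagonal (accumulator) form commonly used for eIRA codes; that form is assumed in the sketch below, and the small random H1 is purely illustrative (it does not enforce the no-weight-two-columns condition).

```python
import numpy as np

rng = np.random.default_rng(5)

k, m = 8, 6                                   # info bits, parity bits (toy sizes)
H1 = rng.integers(0, 2, size=(m, k)).astype(np.uint8)   # illustrative random part

# Assumed dual-diagonal H2 (1s on the diagonal and first subdiagonal), so that
# H1 u + H2 p = 0 can be solved for the parity bits p recursively.
H2 = np.eye(m, dtype=np.uint8)
for i in range(1, m):
    H2[i, i - 1] = 1

u = rng.integers(0, 2, size=k).astype(np.uint8)          # information bits
s = (H1 @ u) % 2

# Recursive encoding: p[0] = s[0], p[i] = s[i] XOR p[i-1].
p = np.zeros(m, dtype=np.uint8)
p[0] = s[0]
for i in range(1, m):
    p[i] = s[i] ^ p[i - 1]

c = np.concatenate([u, p])
H = np.concatenate([H1, H2], axis=1)
print("parity checks satisfied:", not np.any((H @ c) % 2))
```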
... In this section, frame error rate performance of irregular turbo codes over the BEC is shown. Although the analysis so far is based on the BCJR algorithm that supposes soft information exchange, it was shown in [8] that a hard-input hard-output (HIHO) decoding algorithm (namely the Viterbi algorithm [27]) for convolutional codes is optimal in terms of bit error probability over the BEC. For this reason, we will use a HIHO decoding algorithm for irregular turbo codes inspired by the algorithm in [28] for LDPC codes, in that it propagates in the trellis of the turbo code by removing transitions in the same way edges are removed in a bipartite graph under message-passing decoding [29]. ...
... The procedure defined above can be regarded as a function from a set of 1D data sequences to 2D patterns. The function can also be regarded as a variant of the bit-stuffing algorithm in [12], [13], [14]. Hence, the procedure can be used to calculate lower bounds on the capacity of a given constraint [15]. ...
... capacity signal-to-noise-ratio (SNR) limit [61], [62], [70], [145]. In the past two decades, many researchers have tried to develop effective algorithms to calculate the SIR of MR systems [64]-[67], [70], [126], [145]. To be specific, the SIR of OD-ISI channels was first investigated in [61], [70], [145], and subsequently generalized to the SIR of TD-ISI channels [64]-[67], [126]. ...
... A dynamic programming method [18] is studied to precompensate for the NLTS and partial erasure in a longitudinal recording channel. Later, a two-level precompensation scheme [19] was proposed to apply precompensation according to the transition pattern in the two preceding bit positions for the conventional perpendicular recording channel. ...
... 1) There are many papers in the literature discussing TD constrained coding. Bounds on the capacity of TD RLL codes were discussed in [47] and [48]. Explicit coding techniques to stuff bits into a TD grid such that certain RLL constraints are satisfied in both directions were presented in [25] and [26]. ...
... Techniques to reduce or tolerate cell wear have been examined for flash and emerging non-volatile memory devices. In general, these techniques fall into three categories depending on whether they reduce wear by (1) throttling writes to a device (e.g., [179]), by (2) converting data to a representation that does not cause as much wear (e.g., [79]), or by (3) wear-leveling writes to a device (e.g., [29,139,142,254,317,318,253]). Techniques to reduce or tolerate cell wear per unit time have been examined extensively for storage technologies, most notably non-volatile storage and memory devices [29,139,142,254,317,318,253]. ...
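As a trivial illustration of category (3), the toy allocator below always directs the next write to the physical block with the lowest erase count; the policy and sizes are invented for the example and are not taken from any of the cited schemes.

```python
import heapq

class WearLevelingAllocator:
    """Toy wear-leveling policy: write to the least-erased physical block."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id).
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        """Pick a block for the next write; count it as one erase cycle."""
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))
        return block

alloc = WearLevelingAllocator(num_blocks=4)
blocks = [alloc.write() for _ in range(12)]
print("write targets:", blocks)                 # cycles evenly over the 4 blocks
print("erase counts: ", sorted(alloc.heap))     # all equal after 12 writes
```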
... If a complete SP is erased, all check nodes that have connections to this SP have at least two erased connections and hence the erased variable nodes of that SP are not recoverable. This situation has been studied in [14], where the authors provide protograph constructions that avoid this situation and additionally maximize the correctable burst length given some structural constraints of the code. To correct bursts encompassing an SP, the authors in [15] apply interleaving (therein denoted band splitting) to a protograph-based SC-LDPC code. ...