## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

Besides the omnipresent noise, gain and/or offset mismatches are other important impairments in communication and storage systems. In the prior art, a maximum likelihood (ML) decision criterion has already been developed for Gaussian noise channels suffering from both unknown gain and offset mismatches. Here, such criteria are considered for Gaussian noise channels suffering from either an unknown offset only or an unknown gain only. Furthermore, ML decision criteria are derived when assuming a Gaussian or uniform distribution for the offset in the absence of gain mismatch.
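For the Gaussian-distributed offset case, the shape of such a criterion can be sketched. Assuming the model $r = x + b\mathbf{1} + n$ with $n \sim \mathcal{N}(0, \sigma^2 I)$ and $b \sim \mathcal{N}(0, \sigma_b^2)$ (this notation is an assumption for illustration, not taken from the abstract), the received vector given $x$ has covariance $\sigma^2 I + \sigma_b^2 \mathbf{1}\mathbf{1}^\top$, and the Sherman–Morrison identity collapses the Mahalanobis quadratic form into a cheap closed-form measure:

```python
import numpy as np

def gaussian_offset_ml_measure(r, x, sigma2, sigma_b2):
    """Decision measure for r = x + b*1 + n, n ~ N(0, sigma2*I), b ~ N(0, sigma_b2).

    Obtained from the quadratic form (r-x)^T Sigma^{-1} (r-x) with
    Sigma = sigma2*I + sigma_b2*J (J the all-ones matrix), expanded via the
    Sherman-Morrison identity and scaled by sigma2; scaling does not change
    the arg-min over candidate codewords x.
    """
    e = r - x
    n = len(e)
    return e @ e - sigma_b2 * e.sum() ** 2 / (sigma2 + n * sigma_b2)

# Sanity check: the shortcut equals the explicit Mahalanobis quadratic form.
rng = np.random.default_rng(0)
n, sigma2, sigma_b2 = 5, 0.1, 0.5       # illustrative parameter values
r = rng.normal(size=n)
x = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
Sigma = sigma2 * np.eye(n) + sigma_b2 * np.ones((n, n))
exact = sigma2 * (r - x) @ np.linalg.inv(Sigma) @ (r - x)
assert np.isclose(gaussian_offset_ml_measure(r, x, sigma2, sigma_b2), exact)
```

Note how the second term discounts the component of the error that looks like a common offset; with $\sigma_b^2 \to 0$ the measure reduces to the plain squared Euclidean distance.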

... Blackburn [61] investigates an ML criterion for channels with Gaussian noise and unknown gain and offset mismatch. In a subsequent study, ML decoding criteria are derived for Gaussian noise channels when assuming various distributions for the offset in the absence of gain mismatch [62]. This research aims to investigate possible coding techniques for noisy channels with gain and/or offset mismatch. ...

... Further, in [56] and [62] a decoder was proposed based on minimizing a weighted sum of Euclidean and Pearson distances, which is proved to be optimal for channels with Gaussian noise and offset mismatch. ...

... In addition, for Gaussian distributed noise and offset mismatch, we derive the ML criterion considering successive channel outputs, which includes the results in [56, 62] as special cases. A concatenated coding scheme is proposed for the case of Gaussian noise and offset mismatch. ...

... Secondly, the proposed ML criterion provides a general framework, including the scaling-only case and the offset-only case. Some known criteria [13], [14] are shown to be special cases of this framework for particular settings of $a_1$, $a_2$, $b_1$, and $b_2$. This paper aims to generalize ML decoding for multilevel cell channels with Gaussian noise and scaling and offset mismatch. ...

... In Theorem 2 of [13], the following ML criterion was presented for the case that there is bounded scaling ($0 < a_1 \le a \le a_2$) and no offset mismatch ($b = 0$): $L_e(\mathbf{r}/a_1, \mathbf{x})$ if $\langle \mathbf{r}, \mathbf{x} \rangle > \langle \mathbf{r}, \mathbf{r} \rangle / a_1$, $L_e(\mathbf{r}/a_2, \mathbf{x})$ if $\langle \mathbf{r}, \mathbf{x} \rangle < \langle \mathbf{r}, \mathbf{r} \rangle / a_2$, ...

... In Theorem 1 of [13], the following ML criterion was presented for the case that $a = 1$ and $b_1 \le b \le b_2$: ...

Reliability is a critical issue for modern multi-level cell memories. We consider a multi-level cell channel model such that the retrieved data is not only corrupted by Gaussian noise, but hampered by scaling and offset mismatch as well. We assume that the intervals from which the scaling and offset values are taken are known, but no further assumptions on the distributions over these intervals are made. We derive maximum likelihood (ML) decoding methods for such channels, based on finding a codeword that has closest Euclidean distance to a specified set defined by the received vector and the scaling and offset parameters. We provide geometric interpretations of scaling and offset and also show that certain known criteria appear as special cases of our general setting.
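Concretely, the decoding rule described above amounts to computing, for each candidate codeword $x$, the Euclidean distance from the received vector $r$ to the set $\{a x + b\mathbf{1} : a \in [a_1, a_2],\, b \in [b_1, b_2]\}$. Since this is a convex two-variable problem over a box, alternating clamped least-squares updates converge; the sketch below uses illustrative interval values and is not the paper's own implementation:

```python
import numpy as np

def dist_to_scaled_offset_set(r, x, a_lo, a_hi, b_lo, b_hi, iters=100):
    """Squared Euclidean distance from r to {a*x + b*1 : a, b in given intervals}.

    f(a, b) = ||r - a*x - b*1||^2 is convex in (a, b), so exact coordinate
    minimisation with clamping onto the box converges to the minimum.
    """
    a = np.clip(1.0, a_lo, a_hi)
    b = np.clip(0.0, b_lo, b_hi)
    for _ in range(iters):
        a = np.clip((r - b) @ x / (x @ x), a_lo, a_hi)   # best scaling for fixed offset
        b = np.clip((r - a * x).mean(), b_lo, b_hi)      # best offset for fixed scaling
    e = r - a * x - b
    return e @ e

def decode(r, codebook, a_lo, a_hi, b_lo, b_hi):
    """ML decoding: pick the codeword whose scaled-and-shifted set is closest to r."""
    return min(codebook,
               key=lambda x: dist_to_scaled_offset_set(r, np.array(x), a_lo, a_hi, b_lo, b_hi))

codebook = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1)]     # toy codebook, no all-zero word
sent = np.array([0, 1, 0, 1])
r = 0.9 * sent + 0.15 + np.array([0.02, -0.01, 0.01, -0.02])  # gain 0.9, offset 0.15, tiny noise
assert decode(r, codebook, 0.8, 1.2, -0.3, 0.3) == (0, 1, 0, 1)
```

The all-zero codeword is excluded from the toy codebook because a scaling interval acting on the zero vector would make `x @ x` vanish.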

... In this paper, we have proposed a model with improved performance that builds on recent entropy-model techniques [4]. We have introduced a discretized Gaussian mixture likelihood based entropy model which is flexible and accurate. ...

Many compression standards have been developed over the past few decades, and technological advances have resulted in many methodologies with promising results. As far as the PSNR metric is concerned, there is a performance gap between reigning compression standards and learned compression algorithms. Building on this research, we experimented with a more accurate entropy model for learned compression algorithms to determine the rate-distortion performance. In this paper, a discretized Gaussian mixture likelihood is proposed to model the latent code distribution in order to attain a more flexible and accurate entropy model. Moreover, we have also enhanced performance by introducing recent attention modules into the network architecture. Simulation results indicate that, when compared with previously existing techniques on high-resolution and Kodak datasets, the proposed work achieves better rate-distortion performance. When MS-SSIM is used for optimization, our work generates more visually pleasant images.
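The discretized Gaussian mixture likelihood mentioned above integrates each mixture component over a unit-width bin around the integer latent value, $P(y) = \sum_k w_k \left(\Phi\!\big(\tfrac{y + 0.5 - \mu_k}{s_k}\big) - \Phi\!\big(\tfrac{y - 0.5 - \mu_k}{s_k}\big)\right)$. A minimal sketch of that computation (the mixture parameters below are made up for illustration):

```python
import math

def discretized_gmm_likelihood(y, weights, means, scales):
    """P(y) = sum_k w_k * (Phi((y + 0.5 - mu_k)/s_k) - Phi((y - 0.5 - mu_k)/s_k)).

    Each quantized latent y is treated as an integer bin of width 1, so the
    continuous Gaussian mixture density is integrated over [y - 0.5, y + 0.5].
    """
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return sum(
        w * (phi((y + 0.5 - mu) / s) - phi((y - 0.5 - mu) / s))
        for w, mu, s in zip(weights, means, scales)
    )

# A 2-component mixture: bin probabilities over a wide integer range sum to ~1.
w, mu, s = [0.3, 0.7], [-2.0, 3.0], [1.0, 0.5]
total = sum(discretized_gmm_likelihood(y, w, mu, s) for y in range(-50, 51))
assert abs(total - 1.0) < 1e-9
```

In the actual codec the weights, means, and scales would be predicted per latent element by the hyperprior network; here they are fixed constants.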

... In [10], Blackburn investigated a maximum likelihood (ML) criterion for channels with Gaussian noise and unknown gain and offset mismatch. In a subsequent study, ML decision criteria were derived for Gaussian noise channels when assuming various distributions for the offset in the absence of gain mismatch [11]. ...

Data storage systems may not only be disturbed by noise. In some cases, the error performance can also be seriously degraded by offset mismatch. Here, channels are considered for which both the noise and the offset are bounded. For such channels, Euclidean distance-based decoding, Pearson distance-based decoding, and maximum likelihood decoding are considered. In particular, for each of these decoders, bounds are determined on the magnitudes of the noise and offset intervals which lead to a word error rate equal to zero. Case studies with simulation results are presented, confirming the findings.

... The single label distribution of fatty liver is a balanced proportion, while the single label distributions of hypertension and diabetes are both imbalanced, as can be seen in Figure 1B. Correlation coefficient analysis can indicate the label dependencies, and it can be calculated by the Pearson product-moment correlation coefficient (PMMC) (Mohamad Asri et al., 2018; Weber and Immink, 2018). Figure 1C shows that the correlation coefficient value between hypertension and fatty liver is the largest among the three chronic-disease pairs, but that value is only 0.24. ...
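The correlation-coefficient computation referenced here is a one-liner with NumPy; the binary label vectors below are hypothetical, made up purely to show the calculation:

```python
import numpy as np

# Hypothetical binary label vectors for eight patients (1 = has the condition).
hypertension = np.array([1, 0, 1, 1, 0, 0, 1, 0])
fatty_liver  = np.array([1, 0, 1, 0, 0, 1, 1, 0])

# Pearson product-moment correlation coefficient between the two label vectors.
rho = np.corrcoef(hypertension, fatty_liver)[0, 1]
```

For these toy vectors `rho` works out to 0.5; with real physical-examination labels the coefficient would be computed the same way per disease pair.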

Chronic diseases are one of the biggest threats to human life. It is clinically significant to predict chronic diseases prior to the time of diagnosis and begin effective therapy as early as possible. In this work, we use problem-transformation methods to convert chronic disease prediction into a multi-label classification problem and propose a novel convolutional neural network (CNN) architecture named GroupNet to solve it. Binary Relevance (BR) and Label Powerset (LP) methods are adopted to transform the multiple chronic disease labels. We present the correlated loss as the loss function used in GroupNet, which integrates the correlation coefficients between different diseases. The experiments are conducted on physical examination datasets collected from a local medical center. In the experiments, we compare GroupNet with other methods and models; GroupNet outperforms them and achieves the best accuracy of 81.13%.

In many channels, the transmitted signals do not only face noise, but offset mismatch as well. In the prior art, maximum likelihood (ML) decision criteria have already been developed for noisy channels suffering from signal-independent offset. In this paper, such an ML criterion is considered for the case of binary signals suffering from Gaussian noise and signal-dependent offset. The signal dependency of the offset signifies that it may differ for distinct signal levels, i.e., the offset experienced by the zeroes in a transmitted codeword is not necessarily the same as the offset for the ones. Besides the ML criterion itself, an option to reduce its complexity is also considered. Further, a brief performance analysis is provided, confirming the superiority of the newly developed ML decoder over classical decoders based on the Euclidean or Pearson distances.
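One way to picture the signal-dependent model: the positions carrying a 0 share one offset, and the positions carrying a 1 share another. If those per-level offsets are treated as free nuisance parameters, the squared-error-optimal fit for each level is the mean residual of that level, so the decision measure becomes the sum of within-level variances of the residuals. This is a simplified sketch of that idea, not the paper's exact criterion:

```python
import numpy as np

def signal_dependent_measure(r, x):
    """Residual energy after removing the best per-level offset for codeword x.

    For each symbol value v in x, the positions with x_i == v share one offset;
    fitting it by the mean residual of those positions leaves the within-level
    variance as the cost of candidate x.
    """
    e = r - x
    cost = 0.0
    for v in np.unique(x):
        ev = e[x == v]
        cost += np.sum((ev - ev.mean()) ** 2)
    return cost

codebook = [np.array(c) for c in [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0)]]
# Zeros pushed up by 0.2, ones pulled down by 0.3 (signal-dependent), no noise.
sent = codebook[1]
r = sent + np.where(sent == 0, 0.2, -0.3)
best = min(codebook, key=lambda x: signal_dependent_measure(r, x))
assert np.array_equal(best, sent)
```

Note that every codeword in the toy codebook contains both levels; an all-zero or all-one word would let any received vector be fit perfectly by a single offset.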

Maximum likelihood (ML) decision criteria have been developed for channels suffering from signal-independent offset mismatch. Here, such criteria are considered for signal-dependent offset, which means that the value of the offset may differ for distinct signal levels rather than being the same for all levels. An ML decision criterion is derived, assuming uniform distributions for both the noise and the offset. In particular, for the proposed ML decoder, bounds are determined on the standard deviations of the noise and the offset which lead to a word error rate equal to zero. Simulation results are presented confirming the findings.

The performance of certain transmission and storage channels, such as optical data storage and nonvolatile memory (flash), is seriously hampered by the phenomena of unknown offset (drift) or gain. We will show that minimum Pearson distance (MPD) detection, unlike conventional minimum Euclidean distance detection, is immune to offset and/or gain mismatch. MPD detection is used in conjunction with T-constrained codes that consist of q-ary codewords, where in each codeword T reference symbols appear at least once. We will analyze the redundancy of the new q-ary coding technique and compute the error performance of MPD detection in the presence of additive noise. Implementation issues of MPD detection will be discussed, and results of simulations will be given.
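The claimed immunity is easy to verify numerically: the Pearson distance $\delta(r, x) = 1 - \rho(r, x)$ is unchanged when the received vector is scaled by any positive gain and shifted by any offset. A minimal sketch (the codeword and received vector are illustrative; codewords are assumed non-constant, which is what the reference-symbol constraint guarantees):

```python
import numpy as np

def pearson_distance(r, x):
    """delta(r, x) = 1 - rho(r, x); x must be non-constant, hence the
    T-constrained codes that force reference symbols into every codeword."""
    rc, xc = r - r.mean(), x - x.mean()
    return 1.0 - (rc @ xc) / (np.linalg.norm(rc) * np.linalg.norm(xc))

x = np.array([0.0, 1.0, 2.0, 1.0])      # illustrative q-ary codeword
r = np.array([0.1, 1.05, 1.9, 1.1])     # noisy reception of x

# Immunity to (positive) gain a and offset b: delta(a*r + b, x) == delta(r, x).
a, b = 0.6, 0.25
assert np.isclose(pearson_distance(a * r + b, x), pearson_distance(r, x))
```

Centering removes any offset and normalisation removes any positive gain, which is exactly why MPD detection needs no knowledge of the channel's drift or amplitude.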

We explore a novel data representation scheme for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The only allowed charge-placement mechanism is a "push-to-the-top" operation, which takes a single cell of the set and makes it the top-charged cell. The resulting scheme eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. We present unrestricted Gray codes spanning all possible n-cell states and using only "push-to-the-top" operations, and also construct balanced Gray codes. We also investigate optimal rewriting schemes for translating an arbitrary input alphabet into n-cell states that minimize the number of programming operations.
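On a permutation, the "push-to-the-top" operation is simply a move-to-front; a minimal sketch of the state transition (cell labels are illustrative):

```python
def push_to_top(state, cell):
    """Apply the 'push-to-the-top' operation: the chosen cell becomes top-charged.

    state lists cells from top (highest charge) to bottom; the chosen cell is
    moved to the front while the relative order of the other cells is kept.
    """
    return [cell] + [c for c in state if c != cell]

# Walk three cells through successive states using only push-to-the-top.
state = [1, 2, 3]                 # cell 1 currently holds the highest charge
state = push_to_top(state, 3)     # cell 3 is charged above all others
state = push_to_top(state, 2)     # then cell 2
assert state == [2, 3, 1]
```

A Gray code in this scheme is a cycle through all n! such permutation states in which consecutive states differ by exactly one `push_to_top` call.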

Predetermined fixed thresholds are commonly used in nonvolatile memories for reading binary sequences, but they usually result in significant asymmetric errors after a long duration, due to voltage or resistance drift. This motivates us to construct error-correcting schemes with dynamic reading thresholds, so that the asymmetric component of the errors is minimized. In this paper, we discuss how to select dynamic reading thresholds without knowing the cell level distributions, and present several error-correcting schemes. Analysis based on Gaussian noise models reveals that bit error probabilities can be significantly reduced by using dynamic thresholds instead of fixed thresholds, hence leading to a higher information rate.
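One simple way to pick a threshold without knowing the cell-level distributions is to split the sorted readings where the total within-class variance is smallest (an Otsu-style rule). This is an illustrative sketch of the idea, not one of the paper's specific schemes, and the drifted readings below are made up:

```python
def dynamic_threshold(readings):
    """Choose a reading threshold from the data alone: among midpoints between
    consecutive sorted readings, pick the split minimising the total
    within-class variance of the two resulting groups."""
    s = sorted(readings)
    def within_var(group):
        m = sum(group) / len(group)
        return sum((v - m) ** 2 for v in group)
    best_t, best_cost = None, float("inf")
    for i in range(1, len(s)):
        t = (s[i - 1] + s[i]) / 2
        cost = within_var(s[:i]) + within_var(s[i:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Cells written as 0s (nominal 0.0) and 1s (nominal 1.0) have all drifted upward.
readings = [0.78, 0.82, 0.80, 1.58, 1.62, 1.60]
stored   = [0, 0, 0, 1, 1, 1]
t = dynamic_threshold(readings)
fixed_errors   = sum((v > 0.5) != bool(b) for v, b in zip(readings, stored))
dynamic_errors = sum((v > t) != bool(b) for v, b in zip(readings, stored))
assert fixed_errors == 3 and dynamic_errors == 0
```

The fixed threshold of 0.5 misreads every drifted zero, while the data-driven threshold lands between the two clusters and reads all cells correctly.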

K.A.S. Immink and J.H. Weber recently defined and studied a channel with both
gain and offset mismatch, modelling the behaviour of charge-leakage in flash
memory. They proposed a decoding measure for this channel based on minimising
Pearson distance (a notion from cluster analysis). The paper derives a formula
for maximum likelihood decoding for this channel, and also defines and
justifies a notion of minimum distance of a code in this context.

Flash, already one of the dominant forms of data storage for mobile consumer devices such as smartphones and media players, is experiencing explosive growth in cloud and enterprise applications. Flash devices offer very high access speeds, low power consumption, and physical resiliency. Our goal in this article is to provide a high-level overview of error correction for Flash. We begin by discussing Flash functionality and design and introduce the nature of Flash deficiencies. Afterwards, we describe the basics of error-correcting codes (ECCs), discuss BCH and LDPC codes in particular, and wrap up the article with further directions for Flash coding.

The reliability of mass storage systems, such as optical data recording and non-volatile memory (Flash), is seriously hampered by uncertainty of the actual value of the offset (drift) or gain (amplitude) of the retrieved signal. The recently introduced minimum Pearson distance detection is immune to unknown offset or gain, but this virtue comes at the cost of a lessened noise margin at nominal channel conditions. We will present a novel hybrid detection method, where we combine the outputs of the minimum Euclidean distance and Pearson distance detectors so that we may trade detection robustness versus noise margin. We will compute the error performance of hybrid detection in the presence of unknown channel mismatch and additive noise.
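One plausible way to sketch such a hybrid is to interpolate between the Euclidean and Pearson measures with a weight that trades robustness against noise margin; the combination rule and the weight below are illustrative assumptions, not the paper's exact detector:

```python
import numpy as np

def euclidean_d(r, x):
    return float(np.sum((r - x) ** 2))

def pearson_d(r, x):
    rc, xc = r - r.mean(), x - x.mean()
    return 1.0 - float(rc @ xc) / (np.linalg.norm(rc) * np.linalg.norm(xc))

def hybrid_decode(r, codebook, w):
    """w = 0: pure Euclidean (best noise margin at nominal conditions);
    w = 1: pure Pearson (immune to gain/offset mismatch)."""
    return min(codebook,
               key=lambda x: (1 - w) * euclidean_d(r, x) + w * pearson_d(r, x))

codebook = [np.array(c, dtype=float) for c in [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0)]]
sent = codebook[2]
r = 0.9 * sent - 0.1 + np.array([0.01, -0.02, 0.02, -0.01])  # mild gain/offset mismatch
assert np.array_equal(hybrid_decode(r, codebook, w=0.5), sent)
```

Under mild mismatch an intermediate weight decodes correctly; as the mismatch grows, pushing `w` toward 1 recovers the Pearson detector's immunity at the cost of its reduced nominal noise margin.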