SERBIAN JOURNAL OF ELECTRICAL ENGINEERING
Vol. 8, No. 1, February 2011, 27-36
Lossless Predictive Compression of
Medical Images*
Aleksej Avramović1, Slavica Savić1
Abstract: Among the many categories of images that require lossless compression, medical images stand out as one of the most important. Lossy compression of medical images impairs their diagnostic value; therefore, there are often legal restrictions on compressing such images with losses. Among the common approaches to medical image compression, transformation-based and prediction-based approaches can be distinguished. This paper presents prediction algorithms based on edge detection and estimation of the local gradient. In addition, a novel prediction algorithm that combines the advantages of the standardized median predictor and the gradient predictor is presented and analyzed. The removed redundancy was estimated by comparing the entropies of the medical images after prediction.
Keywords: Medical imaging, Digital image processing, Lossless image compression, Prediction.
1 Introduction
Digital radiology has resulted in a significant increase in the use of digital medical images in the process of diagnosis [1]. In order to preserve the diagnostic value of medical images, it is necessary to provide lossless image compression. Apart from practical reasons, legal restrictions often require medical images to be compressed without loss. As for the method of compression, predictive compression is much simpler than transformation-based compression and, in addition, usually results in a lower bit rate. During recent years, several algorithms for predictive lossless image compression have been presented.
Predictive algorithms for image compression can be classified in two
groups:
the algorithms with a single pass, and
the algorithms with two passes.
Single-pass algorithms compute all parameters required for compression, such as the optimal predictor parameters, during one scan of the image.
1Faculty of Electrical Engineering, University of Banja Luka, Patre 5, 78000 Banja Luka, Bosnia and
Herzegovina, E-mails: aleksej@etfbl.net; slavica.savic@etfbl.net
*Award for the best paper presented in Section Electric Circuits and Systems and Signal Processing, at Conference
ETRAN 2010, June 7-11, Donji Milanovac, Serbia.
UDK: 004.92.032.2:616-7
Coding of the prediction error can be done in combination with the prediction or in a separate phase after the prediction. Two-pass algorithms analyze the image in the first pass and efficiently determine the optimal parameters for prediction and coding, while the second pass consists of the prediction and coding themselves. Parameter optimization is usually based on the least-squares error principle, which generally gives better results than switching predictors. A switching predictor chooses a sub-predictor from a set of predictors based on the context in which the pixel is located.
This paper is organized as follows. The second section gives a brief description of predictive compression methods, while the third section describes the most important predictors used in the proposed algorithms. The proposed predictor and its characteristics are also described there. The fourth section provides a comparative analysis of the proposed predictor against the other predictors, and the fifth section concludes the paper.
2 Predictive Image Compression
Predictive compression consists of several stages, as shown in Fig. 1: prediction, contextual modeling, error modeling and entropy coding. This type of lossless compression method became a standard during the last decade of the last century, when several algorithms were proposed and tested for the purpose of adopting a standard for lossless image compression [2]. Prediction is the crucial part of the compression, because it removes most of the spatial redundancy, and the choice of the optimal predictor is essential for the efficiency of the compression method. The prediction may be linear or nonlinear. Linear prediction is based on one or more fixed predictors, which can be combined with appropriate weights, or the optimal sub-predictor can be chosen on the basis of the properties of a certain part of the picture, i.e. on the basis of the context of the region. The context can be determined for each pixel separately; for example, during raster scanning a pixel can be found in the region of a horizontal edge, a vertical edge, a sloping edge, a certain texture, a smooth region, etc. Linear predictors based on a finite group of sub-predictors are simple and fast, and a significant advantage is the possibility of an integer implementation. Several predictors based on least-squares optimization have been proposed; in comparison with fixed predictors they can be more efficient, but they are much more complicated and computationally demanding. The optimization is based on a finite group of causal pixels, the so-called context. Since optimization for each pixel can be computationally demanding, adaptation of the coefficients is usually done only when another type of region occurs, for example an edge. For effective adaptation, a larger context than for fixed predictors is required. Nonlinear prediction is based on neural networks, vector quantization, etc.
Fig. 1 – General scheme of lossless predictive image compression.
Contextual modeling means adaptive correction of the pixel prediction in order to exploit repeated patterns in a picture. For example, for a certain layout of contextual pixel values, it is possible to use information about the prediction at the next occurrence of that layout and to correct the prediction each time it occurs. Even for 8-bit images, the number of contexts defined according to the layout of pixel values grows rapidly with the number of contextual pixels. Therefore, a much smaller number of contexts is defined in practice. A particular context may be selected by quantizing the contextual pixel values or by another transform operation.
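As a toy illustration of such context quantization (the differences and thresholds below are invented for this example and are not taken from the paper or any standard), two local differences quantized to three levels each already reduce the context count to nine:

```python
def context_bin(w: int, n: int, nw: int) -> int:
    """Map a pixel's causal neighborhood to one of 9 contexts by
    quantizing two local differences into 3 levels each."""
    def q(d: int) -> int:
        # negative / near-zero / positive difference
        return 0 if d < -4 else (1 if d <= 4 else 2)
    return 3 * q(n - nw) + q(w - nw)

print(context_bin(100, 110, 90))  # both differences positive -> context 8
```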
Modeling the prediction error enables more efficient coding by the entropy coder. Error modeling can further reduce the entropy of the prediction error image: the entropy of the prediction error conditioned on a particular context is smaller than the entropy of the errors without contexts. In addition, it is possible to transform the alphabet so that it is properly adapted to a particular type of entropy coder. For example, there are versions of Golomb-Rice codes that require non-negative input data.
Entropy coding removes the statistical redundancy of the prediction error image. The most commonly used methods are Huffman codes, Golomb-Rice codes and arithmetic coding. Entropy coding actually performs the compression and produces the final output sequence of compressed data. Coding can be done after the prediction as an independent phase or progressively during the prediction.
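As an illustration of such an alphabet transformation (a generic sketch, not a detail of the paper's own method), the usual folding of signed prediction errors into non-negative integers for a Golomb-Rice style coder looks as follows:

```python
def fold_error(e: int) -> int:
    """Fold signed errors 0, -1, 1, -2, 2, ... onto 0, 1, 2, 3, 4, ...,
    producing the non-negative input a Golomb-Rice coder expects."""
    return 2 * e if e >= 0 else -2 * e - 1

print([fold_error(e) for e in (-2, -1, 0, 1, 2)])  # [3, 1, 0, 2, 4]
```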
3 Linear Predictors
The most important predictors for lossless image compression are the median predictor used in the JPEG-LS standard and the gradient predictor used in the CALIC algorithm.
Median Edge Detection (MED) belongs to the group of switching predictors: it selects one of three optimal sub-predictors depending on whether it detects a vertical edge, a horizontal edge, or a smooth region [3]. In fact, the MED predictor selects the median of the three values W, N and W + N – NW (the common labels are chosen after the points of the compass, Fig. 2), and represents an optimal combination of simplicity and efficiency. The prediction is based on:
$$P = \begin{cases} \min(W, N), & \text{if } NW \ge \max(W, N), \\ \max(W, N), & \text{if } NW \le \min(W, N), \\ W + N - NW, & \text{otherwise}. \end{cases} \qquad (1)$$
The Gradient Adjusted Predictor (GAP) is based on gradient estimation around the current pixel [4]. The gradient is estimated from the context of the current pixel, which in combination with predefined thresholds gives the final prediction. GAP distinguishes three types of edges – strong, simple and soft – and is characterized by high flexibility across different regions. The gradient estimation is done using:
$$g_v = |W - NW| + |N - NN| + |NE - NNE|, \qquad g_h = |W - WW| + |N - NW| + |N - NE|, \qquad (2)$$
and the prediction is made by the algorithm:
$$\begin{aligned}
&\textbf{if } (g_v - g_h > 80)\quad P = W \\
&\textbf{else if } (g_v - g_h < -80)\quad P = N \\
&\textbf{else } \{\; P = (W + N)/2 + (NE - NW)/4 \\
&\qquad \textbf{if } (g_v - g_h > 32)\quad P = (P + W)/2 \\
&\qquad \textbf{else if } (g_v - g_h > 8)\quad P = (3P + W)/4 \\
&\qquad \textbf{else if } (g_v - g_h < -32)\quad P = (P + N)/2 \\
&\qquad \textbf{else if } (g_v - g_h < -8)\quad P = (3P + N)/4 \;\}
\end{aligned} \qquad (3)$$
where the labels of the causal pixels in (2) and (3) are also harmonized with Fig. 2.
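A Python sketch of eqs. (2) and (3) follows. The zero-padding of out-of-image neighbors is our simplification for brevity; a real codec handles image borders explicitly:

```python
def gap_predict(px, i, j):
    """GAP prediction of pixel (i, j) from its causal neighbors, eqs. (2)-(3).
    `px` is a 2D list of already decoded pixel values."""
    def at(r, c):  # causal neighbor access with zero padding at borders
        return px[r][c] if 0 <= r < len(px) and 0 <= c < len(px[0]) else 0
    W, WW = at(i, j - 1), at(i, j - 2)
    N, NN = at(i - 1, j), at(i - 2, j)
    NW, NE, NNE = at(i - 1, j - 1), at(i - 1, j + 1), at(i - 2, j + 1)
    gv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)
    gh = abs(W - WW) + abs(N - NW) + abs(N - NE)
    d = gv - gh
    if d > 80:             # strong edge: copy a neighbor directly
        return W
    if d < -80:
        return N
    p = (W + N) / 2 + (NE - NW) / 4
    if d > 32:             # simple edge: pull the estimate toward W
        p = (p + W) / 2
    elif d > 8:            # soft edge
        p = (3 * p + W) / 4
    elif d < -32:          # simple edge in the other direction
        p = (p + N) / 2
    elif d < -8:           # soft edge
        p = (3 * p + N) / 4
    return int(p)

img = [[100, 101, 102, 103]] * 4  # toy 4x4 image
print(gap_predict(img, 2, 2))     # 102
```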
Fig. 2 – Causal and contextual neighbors.
The authors in [5] have proposed adaptive prediction based on a combination of thirteen simple predictors and an appropriate penalization of the predictors that produce large prediction errors. In the same work it is shown that similar results can be obtained using only six simple predictors. The six predictors, defined as the P6 set, are:
$$P6 = \{\, W,\; N,\; NW,\; NE,\; (N + W)/2,\; W + N - NW \,\}, \qquad (4)$$
where the labels are harmonized with Fig. 2. The authors have concluded that the much reduced and simpler P6 version gives only slightly worse results than the P13 version. DARC is an adaptive predictor which adjusts the prediction based on a simple estimation of the horizontal and vertical gradients [2]:
$$g_v = |W - NW|, \qquad g_h = |N - NW|, \qquad (5)$$
with the corresponding weighting coefficient $\alpha = g_v / (g_v + g_h)$. The prediction is then performed by applying the equation:

$$P = \alpha W + (1 - \alpha) N. \qquad (6)$$
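A minimal sketch of eqs. (5) and (6); the fallback for a perfectly flat neighborhood, where both gradients are zero and α is undefined, is our own assumption:

```python
def darc_predict(w: int, n: int, nw: int) -> float:
    """DARC predictor, eqs. (5)-(6): blend W and N according to the
    local gradient estimates."""
    gv = abs(w - nw)
    gh = abs(n - nw)
    if gv + gh == 0:       # flat neighborhood: W == N == NW, so return W
        return float(w)
    alpha = gv / (gv + gh)
    return alpha * w + (1 - alpha) * n

print(darc_predict(100, 110, 90))  # weights W and N by 10/30 and 20/30
```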
Predictors based on the Minimum Mean Square Error (MMSE) criterion adapt the prediction coefficients on the basis of a training set of causal pixels. This approach can achieve a lower entropy than using a fixed set of sub-predictors. However, the adaptation requires computing a pseudo-inverse whose order grows with the size of the training set. If the last m pixels are used for the adaptation of the coefficients of a k-th order predictor, the vector of coefficients is calculated according to the expression:

$$\mathbf{a} = \left(\mathbf{C}^{T}\mathbf{C}\right)^{-1}\mathbf{C}^{T}\mathbf{y}, \qquad (7)$$

where C is the m×k matrix of the training set and y is the vector of the last m pixel values.
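Equation (7) is the standard least-squares solution. A sketch in NumPy, where `np.linalg.lstsq` computes the same pseudo-inverse solution in a numerically stable way (the toy numbers below are purely illustrative):

```python
import numpy as np

def mmse_coefficients(C: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve eq. (7), a = (C^T C)^(-1) C^T y, for a k-th order predictor
    from the m x k training matrix C and the m recent pixel values y."""
    a, *_ = np.linalg.lstsq(C, y, rcond=None)
    return a

# m = 4 training pixels, k = 2 taps (W and N neighbors of each pixel)
C = np.array([[100, 98], [102, 101], [99, 97], [101, 100]], dtype=float)
y = np.array([99.0, 102.0, 98.0, 101.0])
print(mmse_coefficients(C, y))  # weights to apply to (W, N) of the next pixel
```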
For the compression of 3D medical images, a simple solution based on differential pulse code modulation, which exploits the spatial redundancy in all three dimensions, was proposed in [6]. The algorithm, called 3DPCM, makes the prediction along all three coordinates based only on the previous pixel value. For each frame of the 3D image, prediction is first done with pixel W along rows, then with pixel N along columns. Redundancy between frames is reduced by predicting each pixel from the value of that pixel in the previous frame.
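One possible reading of this scheme, as a sketch on a (frames, rows, cols) volume; the exact scan order and border handling in [6] may differ:

```python
import numpy as np

def dpcm3d_errors(volume: np.ndarray) -> np.ndarray:
    """Prediction errors for a volume: W within the first row of the
    first frame, N for its remaining rows, and the co-located pixel of
    the previous frame for all later frames."""
    e = volume.astype(np.int32)
    e[0, 0, 1:] -= volume[0, 0, :-1]   # first row: left neighbor W
    e[0, 1:, :] -= volume[0, :-1, :]   # later rows: upper neighbor N
    e[1:] -= volume[:-1]               # later frames: previous frame
    return e

vol = np.random.randint(0, 4096, size=(4, 8, 8))  # toy 12-bit volume
print(dpcm3d_errors(vol).shape)                   # (4, 8, 8)
```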
Here we propose a solution that combines the advantages of the MED and GAP predictors, i.e. the simplicity of the median predictor and the gradient estimation of the GAP predictor. Like the MED predictor, it chooses between the contexts of a vertical edge, a horizontal edge and a smooth region. The selection is done by a simple estimation of the horizontal and vertical gradients and a single threshold. The number of contextual pixels is also a compromise between the MED and GAP predictors, and in contrast to the GAP predictor, which has three predefined thresholds, the proposed predictor is based on a single threshold. Since the gradient estimation serves for edge detection, we adopt the name Gradient Edge Detection (GED). The GED local gradient estimation is done using the following equations:
$$g_v = |NW - W| + |NN - N|, \qquad g_h = |WW - W| + |NW - N|, \qquad (8)$$
Local gradient estimation is followed by a simple predictor. The prediction
is done using the algorithm:
$$\begin{aligned}
&\textbf{if } (g_v - g_h > T)\quad P = W \\
&\textbf{else if } (g_v - g_h < -T)\quad P = N \\
&\textbf{else } P = \frac{3(W + N)}{8} + \frac{WW + NW + NN}{12},
\end{aligned} \qquad (9)$$
where T is the designated threshold, which can be predefined or derived from the context. The second variant, named the GED2 predictor, differs from the first one only in the case when a smooth region is detected. For the estimation of the local gradient, equation (8) is used as well, but the complicated and inefficient expression in (9) is replaced by the same expression that the MED predictor uses in the case of a smooth region. Therefore, the GED2 prediction is given by:
$$\begin{aligned}
&\textbf{if } (g_v - g_h > T)\quad P = W \\
&\textbf{else if } (g_v - g_h < -T)\quad P = N \\
&\textbf{else } P = N + W - NW.
\end{aligned} \qquad (10)$$
The GED2 predictor is thus a simple combination of the gradient and median predictors. The local gradient estimate and a threshold decide which of the three sub-predictors is optimal, i.e. whether the pixel lies in the context of a horizontal edge, a vertical edge or a smooth region, as the sketch below illustrates.
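Both variants share the gradient test of eq. (8) and differ only in the smooth-region branch. A Python sketch (the threshold value 16 is just one of the powers of two tested later; the smooth-region weights of GED follow our reading of eq. (9)):

```python
def ged_gradients(w, ww, n, nn, nw):
    """Local gradient estimates of eq. (8)."""
    gv = abs(nw - w) + abs(nn - n)
    gh = abs(ww - w) + abs(nw - n)
    return gv, gh

def ged_predict(w, ww, n, nn, nw, t=16):
    """GED, eqs. (8)-(9), with a weighted smooth-region predictor."""
    gv, gh = ged_gradients(w, ww, n, nn, nw)
    if gv - gh > t:
        return w               # gradient test favors the left neighbor
    if gv - gh < -t:
        return n               # gradient test favors the upper neighbor
    return int(3 * (w + n) / 8 + (ww + nw + nn) / 12)

def ged2_predict(w, ww, n, nn, nw, t=16):
    """GED2, eqs. (8) and (10): MED-style planar predictor when smooth."""
    gv, gh = ged_gradients(w, ww, n, nn, nw)
    if gv - gh > t:
        return w
    if gv - gh < -t:
        return n
    return n + w - nw

print(ged2_predict(w=100, ww=98, n=104, nn=103, nw=101))  # smooth: 103
```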
4 Comparative Analysis
Lossless image compression has to preserve the exact value of each pixel, regardless of whether noise is present or not. Predictor performance could be expressed through the compression ratio, i.e. the relation between the required memory space before and after compression, or equivalently through the bit rate, which is the average number of bits needed to code a single pixel. However, these parameters cannot be used to evaluate a predictor on its own: a predictor only removes redundancy and does not itself perform compression. Therefore, the entropy of the prediction error image is used as the measure of predictor efficiency. Assuming that the image can be modeled as a memoryless source, the entropy is the minimum possible bit rate that can be achieved after coding. If the picture is denoted as a random variable X with alphabet $A = (a_0, a_1, a_2, \ldots, a_{2^N - 1})$, which means that an N-bit image is processed, the entropy can be calculated as:
$$H(X) = -\sum_{x \in A} p(x) \log_2 p(x), \qquad (11)$$
where p(x) denotes the probability of occurrence of the symbol x, i.e. of the gray level x in our case.
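Equation (11) can be estimated directly from the image histogram. A NumPy sketch:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """First-order entropy of eq. (11), estimated from the histogram of
    pixel (or prediction error) values."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

checker = np.indices((8, 8)).sum(axis=0) % 2  # two equiprobable symbols
print(image_entropy(checker))                 # 1.0 bit per pixel
```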
The prediction algorithms described in the previous section were tested on a set of uncompressed medical images. In practice, medical images often come as a series of 2D slices; sequential slices acquired with a small spatial step together form a 3D image, Fig. 3. Therefore, image series from computed tomography (CT) and magnetic resonance imaging (MRI) were used for testing. As the output parameter, the total entropy of the prediction error of the 3D image was calculated. The test set contains 150 previously uncompressed medical images, some of which are shown in Fig. 4.
Fig. 3 – 3D medical image.
Medical images are usually represented as 12-bit or 16-bit images, while the gradient adjusted predictor uses fixed thresholds defined for 8-bit images. Therefore, the version with scaled gradient estimation was also tested, as described in [4]. If an N-bit image is compressed, the authors in [4] suggest scaling the gradient estimates according to the empirical relation:
$$\lambda = 2^{\left\lfloor N - 8 - \max\left(0,\, \log_2 \sigma - 5\right) \right\rfloor}, \qquad (12)$$
where σ is the total absolute prediction error of the previously scanned row. Table 1 gives an overview of the performance of the different predictors applied to 3D medical images, expressed through entropy. In addition to the MED predictor, two versions of the GAP predictor were tested: the basic version described by (2) and (3), and the GAPs version scaled with (12). The proposed GED predictor was tested with different thresholds, selected as powers of two. The label GED16 denotes the GED predictor with threshold 16, and the second variant is labeled analogously, e.g. GED216 is the GED2 predictor with threshold 16. The described DARC, P6 and DPCM3d predictors were also tested, while predictors based on the MMSE method were not considered, as they violate the principle of simplicity.
The analysis of the results in Table 1 shows that the best performance is achieved by the P6 predictor, based on penalization and the combination of several simple sub-predictors. However, this predictor is considerably more complicated than the other examined predictors. The scaling of the gradient estimation in the GAP predictor, given by (12), does not guarantee efficient prediction, although it considerably complicates the prediction algorithm. It is also noticeable that simple predictors, such as MED, P6 and GED2, give better predictions than the more complicated ones, such as GAP or GED.
Table 1
Comparative analysis of the prediction error entropy of
3D medical images, for different predictors.
Predictor CT1 CT2 CT3 MRI1 MRI2
MED 7,615 5,796 3,998 4,747 4,253
GAP 7,563 5,727 4,304 4,851 4,375
GAPs 7,507 5,705 4,305 4,976 4,514
GED16 7,606 5,779 4,598 5,072 4,684
GED64 7,495 5,758 4,711 5,264 4,876
GED128 7,445 5,787 4,786 5,359 4,945
GED216 7,754 6,007 4,150 4,847 4,278
GED264 7,893 6,032 3,918 4,738 4,170
GED2128 7,926 6,025 3,788 4,702 4,144
DARC 7,725 5,937 4,522 5,044 4,596
P6 7,532 5,703 3,706 4,643 4,107
DPCM3d 7,755 6,322 5,558 6,238 5,745
Table 2
Entropy mean value after prediction of 150 CT images with different predictors.
Predictor MED GAP GAPs GED16 GED64
Entropy 3,953 4,199 4,210 4,475 4,606
Predictor GED216 GED264 GED2128 DARC P6
Entropy 4,078 3,883 3,780 4,417 3,690
Table 2 presents the analysis of the predictors' performance on a large number of independent 2D medical images. A set of 150 2D CT images was tested, and the final entropy is the result of averaging the individual image entropies. Examples of the medical images used are shown in Fig. 4.
Fig. 4 – Examples of medical images.
5 Conclusion
This paper describes the process of predictive lossless image compression. The most important predictors of the most important lossless compression algorithms are also described – those accepted in standards or representative by their characteristics. A novel solution for simple linear prediction based on edge detection, called GED, is presented and compared with the described predictors. The GED algorithm combines the distinguishing features of the most representative linear predictors, namely MED and GAP. The proposed predictor has shown satisfactory results on the chosen set of uncompressed medical images.
6 Acknowledgment
This paper is the result of research within the project Lossless image compression, supported by the Ministry of Science and Technology of the Republic of Srpska, contract No. 06/0-020/961-166/09. The authors also wish to thank KBC Paprikovac, Banja Luka, which supplied the medical images used for testing in this research, and professor Branimir Reljin, School of Electrical Engineering in Belgrade, for his useful suggestions.
7 References
[1] I.N. Bankman: Handbook of Medical Image Processing and Analysis, Elsevier, London, 2009.
[2] N. Memon, X. Wu: Recent Developments in Context-based Predictive Techniques for
Lossless Image Compression, The Computer Journal, Vol. 40, No. 2/3, 1997, pp. 127 – 136.
[3] M.J. Weinberger, G. Seroussi, G. Sapiro: LOCO-I: A Low Complexity, Context-based, Lossless Image Compression Algorithm, Conference on Data Compression, Snowbird, UT, USA, March/April 1996, pp. 140 – 149.
[4] X. Wu, N. Memon: CALIC – A Context based Adaptive Lossless Image Codec, IEEE
International Conference on Acoustics, Speech, and Signal Processing, Atlanta, GA, USA,
Vol. 4, May 1996, pp. 1890 – 1893.
[5] G. Deng, H. Ye: Lossless Image Compression using Adaptive Predictor Combination,
Symbol Mapping and Context Filtering, Conference on Image Processing, Kobe, Japan,
Vol. 4, Oct. 1999, pp. 63 – 67.
[6] V. Nandedkar, S.V.B. Kumar, S. Mukhopadhyay: Lossless Volumetric Medical Image Compression with Progressive Multiplanar Reformatting using 3D DPCM, National Conference on Image Processing, Bangalore, India, March 2005.