[Fig. 1: stages of predictive lossless image compression — prediction, contextual modeling, error modeling, entropy coding]
Source publication
Among the many categories of images that require lossless compression, medical images are one of the most important. Lossy compression impairs the diagnostic value of medical images; for this reason, there are often legal restrictions on lossy compression of such images. Among the common approaches to medical image compression we...
Context in source publication
Context 1
... compression is based on several stages, as shown in Fig. 1: prediction, contextual modeling, error modeling and entropy coding. This type of lossless compression method became a standard during the last decade of the twentieth century, when several algorithms were proposed and tested with the aim of adopting a standard for lossless image compression [2]. Prediction is the ...
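A minimal sketch of these stages on a synthetic image may make the pipeline concrete. The simple west-neighbour predictor and the first-order entropy estimate below are illustrative assumptions, not the method of the source publication; contextual modeling and error modeling are only indicated by comments:

```python
import numpy as np

def predict_west(img):
    """Prediction stage: estimate each pixel from its west (left) neighbour.
    A deliberately simple linear predictor; practical coders use MED, GAP, etc."""
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]          # the first column is predicted as 0
    return pred

def entropy_bits_per_pixel(errors):
    """Entropy-coding stage, idealized: the first-order entropy of the
    prediction-error image bounds the rate an entropy coder can approach."""
    _, counts = np.unique(errors, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth synthetic 8-bit image: prediction removes almost all redundancy.
img = (np.add.outer(np.arange(64), np.arange(64)) % 256).astype(np.int32)
errors = img - predict_west(img)       # residuals; contextual and error
                                       # modelling would further shape them
print(entropy_bits_per_pixel(img), "->", entropy_bits_per_pixel(errors))
```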
Similar publications
In this paper, we propose a novel method to select the most informative subset of features, which has little redundancy and very strong discriminating power. Our proposed approach automatically determines the optimal number of features and selects the best subset accordingly by maximizing the average pairwise informativeness, thus has obvious advantage...
Data aggregation has been an important mechanism for achieving energy efficiency in WSNs. Aggregation reduces the transmission of redundant data, which results in improved energy usage. A large number of data aggregation protocols have been developed in the past based on various techniques for optimizing delay and energy. This paper surveyed...
Visual context is fundamental to understanding human actions in videos. However, the discriminative temporal information of videos is usually sparse, and most frames are redundant, mixed with a large amount of interference information, which may result in redundant computation and recognition failure. Hence, an important question is how to efficiently e...
A technique is provided for controlling errors in the functioning of nodes that form $q$-valued pseudo-random sequences (PRS), operating under both random errors and errors generated through an intentional attack by an adversary, in which systems of characteristic equations are realized by arithmetic polynomials that allow the calculation pro...
Cascaded outages often result in power system islanding followed by a blackout and are therefore considered a severe disturbance. Maintaining the observability of each island may help in taking proper control actions to preserve the stability of individual islands, thus averting system collapse. With this intent, a strategy for placement of synchron...
Citations
... Prediction is a crucial part of compression because it can remove most of the spatial redundancy between pixels, and the choice of an optimal predictor is essential for the efficiency of compression methods. Linear predictors have a significant advantage: they can be realized entirely in integer arithmetic [3]. In our method, a proposed two-dimensional linear prediction method is used to remove the inter-pixel redundancy of images. ...
There is an increasing amount of image data produced in our lives nowadays, which creates a big challenge for storage and transmission. For fields requiring high fidelity, lossless image compression becomes significant because it can reduce the size of image data without quality loss. To address the difficulty of improving the lossless image compression ratio, we propose an improved lossless image compression algorithm that theoretically provides approximately quadruple compression by combining linear prediction, the integer wavelet transform (IWT) with output-coefficient processing, and Huffman coding. A new hybrid transform exploiting a new prediction template and a coefficient processing of the IWT is the main contribution of this algorithm. Experimental results on three different image sets show that the proposed algorithm outperforms state-of-the-art algorithms; compression ratios are improved by at least 6.22% and up to 72.36%. Our algorithm is more suitable for compressing images with complex texture and higher resolution at an acceptable compression speed.
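The integer wavelet transform mentioned above can be realized losslessly with the lifting scheme. The following is a minimal one-level integer Haar (S-transform) sketch, not the paper's hybrid transform or its new prediction template:

```python
import numpy as np

def haar_iwt(x):
    """One level of the integer Haar (S) transform via lifting:
    exactly invertible in integer arithmetic, hence lossless."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even              # detail (high-pass) coefficients
    s = even + d // 2           # approximation (low-pass), floor division
    return s, d

def haar_iwt_inverse(s, d):
    even = s - d // 2           # undo the update step
    odd = d + even              # undo the predict step
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([12, 15, 14, 14, 200, 201, 90, 88])
s, d = haar_iwt(x)
assert np.array_equal(haar_iwt_inverse(s, d), x)   # lossless round trip
```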
... MED and GAP were analyzed, and a comparative analysis of these predictors was carried out in terms of entropy. The authors in [15] adopted a compression method based on a combination of predictive coding and bit-plane slicing for the compression of medical and natural image samples. This lossless compression technique achieves high system performance with a high compression ratio. ...
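Bit-plane slicing, one half of the combination adopted in [15], is simple to illustrate; the sketch below shows only the slicing itself, not its coupling with predictive coding:

```python
import numpy as np

def bit_planes(img, bits=8):
    """Split an unsigned-integer image into its binary bit planes,
    most significant plane first."""
    img = np.asarray(img, dtype=np.uint16)
    return [(img >> b) & 1 for b in range(bits - 1, -1, -1)]

def from_planes(planes):
    """Reassemble the image from its planes; slicing is lossless."""
    img = np.zeros_like(planes[0], dtype=np.uint16)
    for p in planes:
        img = (img << 1) | p
    return img

img = np.random.randint(0, 256, (4, 4))
assert np.array_equal(from_planes(bit_planes(img)), img)
```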
Technological advancement of medical imaging techniques is progressing constantly, dealing with images of ever-increasing resolution. In hospitals, medical imaging techniques such as X-ray, magnetic resonance imaging and computed tomography produce high-resolution images that consume large storage space. Such high-resolution medical images transmitted over the network utilize large bandwidth, which often results in degradation of image quality. Compression is therefore the only practical solution for efficient archival and communication of medical images. Predictive coding is explored in this paper for medical image compression, as it performs well for lossless compression. This paper presents a comparative investigation of 2D predictors' coding efficiency and complexity on CT images. It was observed that among 2D predictors the Gradient Edge Detection (GED) predictor gave better results than the Median Edge Detector (MED) and DPCM. At a proper threshold value, the GED predictor achieved approximately the same results in terms of various performance metrics as the Gradient Adaptive Predictor (GAP), although it is less complex.
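For concreteness, the MED predictor compared above is fully specified in JPEG-LS; given the west (W), north (N) and north-west (NW) neighbours of the current pixel:

```python
def med_predict(W, N, NW):
    """Median Edge Detector (JPEG-LS): picks the min or max of W and N
    at a detected edge, otherwise the planar estimate W + N - NW."""
    if NW >= max(W, N):
        return min(W, N)       # edge: predict from the smaller neighbour
    if NW <= min(W, N):
        return max(W, N)       # edge: predict from the larger neighbour
    return W + N - NW          # smooth region: planar prediction
```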
... Avramovic and Savic proposed a predictive algorithm based on the estimation of local gradients for edge detection. Entropy analysis for different predictors is performed after prediction on different images such as CT and MRI [15]. Owen Zhao et al. proposed an efficient lossless image compression scheme called super-spatial structure prediction [16]. ...
... A combination of both standard predictors results in GED, which takes its simplicity from MED and its efficiency from GAP. GED is also threshold-based, just like GAP, but in GED the threshold value is user-defined [15]. In the literature, different encoding techniques are available, such as Huffman, run-length, dictionary, arithmetic and bit-plane coding. ...
The block-based lossless coding technique presented in this paper targets the compression of volumetric medical images of 8-bit and 16-bit depth. The novelty of the proposed technique is its ability to select the threshold for prediction and the optimal block size for encoding. A Resolution-Independent Gradient Edge Detector is combined with a Block Adaptive Arithmetic Encoding algorithm, with extensive experimental tests to find a universal threshold value and an optimal block size that are independent of image resolution and modality. The performance of the proposed technique is demonstrated and compared with benchmark lossless compression algorithms. BPP values obtained from the proposed algorithm show that it effectively reduces inter-pixel and coding redundancy. For volumetric medical images, the proposed technique outperforms CALIC and JPEG-LS by 0.70% and 4.62%, respectively, in terms of coding efficiency.
... This predictor was proposed in [3]. It is a causal predictor that combines the simplicity of the MED predictor and the efficiency of the GAP predictor. ...
With the broad development and evolution of digital data exchange, security has become an important issue in data storage and transmission, since digital data can be easily manipulated and modified. Reversible data hiding algorithms are a special class of steganography capable of recovering the original cover image upon extraction of the secret data. This property is of interest in medical and military imaging applications. Many algorithms in this class exploit prediction in order to increase the embedding capacity as well as the quality of the stego image. However, the performance of these algorithms depends on the type of predictor being used. The main goal of this paper is to survey different predictors and evaluate their performance when employed in two classical reversible data hiding algorithms. The evaluation considered plugging 22 predictors into the two algorithms to process 1438 test images. Experimental results validated the varying capabilities of different predictors and showed that the non-causal median predictor had the best performance in the two algorithms. Furthermore, the paper proposes a new multi-predictor reversible data hiding algorithm. The algorithm employs multiple predictors in an extended version of the modification of prediction errors (MPE) algorithm, and takes advantage of the results obtained from the performance evaluation of different predictors to select the best set of predictors. Performance evaluation proved the ability of the proposed algorithm to increase the embedding capacity while maintaining high stego-image quality.
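As a rough illustration of the modification-of-prediction-errors idea, the sketch below embeds payload bits into zero-valued prediction errors of a 1-D signal with a west-neighbour predictor. The single embedding bin and the omitted overflow handling are simplifying assumptions; this is not the paper's multi-predictor algorithm:

```python
def mpe_embed(x, bits):
    """Embed bits into zero-valued prediction errors; shift positive
    errors up by one so the mapping stays invertible."""
    y, k = [x[0]], 0
    for i in range(1, len(x)):
        e = x[i] - x[i - 1]            # west-neighbour prediction error
        if e > 0:
            y.append(x[i] + 1)         # shift: frees the value e == 1
        elif e == 0 and k < len(bits):
            y.append(x[i] + bits[k])   # e == 0 carries one payload bit
            k += 1
        else:
            y.append(x[i])             # negative or unused errors unchanged
    return y

def mpe_extract(y, nbits):
    """Recover the payload and the original signal; the predictor uses
    already-recovered pixels, so extraction mirrors embedding."""
    x, bits = [y[0]], []
    for i in range(1, len(y)):
        e = y[i] - x[i - 1]            # x[i-1] is the recovered original
        if e >= 2:
            x.append(y[i] - 1)         # undo the shift
        elif e in (0, 1) and len(bits) < nbits:
            bits.append(e)             # embedded bit is the error itself
            x.append(x[i - 1])         # original pixel equalled predictor
        else:
            x.append(y[i])
    return x, bits

x = [10, 10, 11, 9, 10]
y = mpe_embed(x, [1, 0])
assert mpe_extract(y, 2) == (x, [1, 0])
```

The better the predictor, the more errors fall in the zero bin, which is precisely why predictor choice governs embedding capacity in such schemes.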
... Prediction coding techniques based on the modelling concept are increasingly used in image compression; simplicity, symmetry of encoder/decoder and flexibility of use are the most significant advantages of this technique [9]. The core of prediction coding lies in the design of mathematical models that predict or estimate each pixel value from nearby or neighbouring pixels, followed by computing the difference between the predicted value and the actual value, called the residual or prediction error [10][11][12]. The polynomial linear basis was adopted by [13], and followed by [14][15][16][17][18], to compress images effectively by modelling the distance between image pixels and the centre, using the linearization basis, i.e. a first-order Taylor series. In this paper an extended approximate non-linear polynomial model (second-order Taylor series) for compressing images is utilized. ...
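For reference, the generic first- and second-order Taylor expansions that underlie such polynomial models are (the specific coefficient arrangement of the cited model is not reproduced here):

$f(x) \approx f(x_0) + f'(x_0)(x - x_0)$ (first order, the linearization basis)

$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2$ (second order, the extended non-linear model)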
This paper introduced a new image compression technique based on the modelling concept of a polynomial second-order Taylor series representation with a nonlinear basis. The results showed high performance in terms of compression ratio and quality, even with the greater complexity of coefficient estimation.
... Prediction is the key part of compression, because it removes most of the spatial redundancy, and the choice of the optimal predictor is essential for the efficiency of compression methods. Prediction may be linear or nonlinear [12]. Linear predictors, based on a finite group of sub-predictors, are simple and fast. ...
... Linear predictors, based on a finite group of sub-predictors, are simple and fast. Nonlinear prediction is based on neural networks, vector quantization, etc. [12]. Contextual modeling means adaptive correction of the prediction of pixels in order to exploit repeated patterns in a picture. ...
... Error modeling can further reduce the entropy of the prediction-error image. Entropy coding removes the statistical redundancy of the prediction-error image [12]. ...
... Ferni Ukrit et al. (2011) performed a survey of various lossless compression techniques. Avramovic and Savić (2011) described a predictive lossless image compression process. Sridevi et al. (2012) used various medical image compression techniques such as JPEG2000 image compression and JPEG2000 scaling-based ROI coding. ...
Digital radiology has resulted in a significant increase in the use of digital medical images in the process of diagnosis. Lossless image compression is necessary to preserve the diagnostic value of medical images, since it compresses the information without any loss. More particularly, this paper aims to increase the compression ratio and quality (PSNR) of medical images. Initially, an MRI brain image is segmented using grow cut. This algorithm extracts the tumour region (abnormal region) and the non-tumour region (normal region). The tumour region is selected using a seed point, which is chosen by extracting run-length features. Morphological processing is used to highlight the tumour region. The integer wavelet transform is applied to both the tumour and non-tumour regions. The abnormal region is compressed losslessly using arithmetic coding, and the normal region is compressed using lossy (EZW) coding. Finally, the compressed image of the tumour region (abnormal region) is fused with the compressed image of the normal region. This implementation is found to possess superior reconstruction properties and a better compression ratio.
... Visually lossless compression with a high PSNR value is obtained. M. Ferni Ukrit et al. [12] performed a survey of various lossless compression techniques. Aleksej Avramovic et al. [13] described a predictive lossless image compression process. ...
This paper proposes an improved medical image compression method based on seam identification using the integer wavelet transform and near-lossless encoding techniques. Image retargeting is generally required at the user end of mobile multimedia communications, and this work addresses the increasing demand for visual signal delivery to terminals with arbitrary resolutions, without a heavy computational burden on the receiving end. A block-based seam energy map is generated in the pixel domain for each input image, and the integer wavelet transform (IWT) is performed on the retargeted image. The IWT coefficients are grouped and encoded according to the resultant seam energy map using SPIHT, followed by arithmetic coding. At the decoder side, the end user has the ultimate choice of spatial scalability without the need to examine the visual content; the received images of arbitrary resolution preserve important content while achieving high coding efficiency for transmission.
... The GED predictor is a simple combination of gradient and median predictors. An estimate of the local gradient and a threshold are used to decide which of the three sub-predictors is optimal, i.e. whether the pixel lies in the context of a horizontal edge, a vertical edge or a smooth region [7]. ...
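A sketch of this selection logic follows; the exact gradient estimates and the smooth-region sub-predictor are assumptions styled after GAP and MED, and the threshold T is user-defined as noted above:

```python
def ged_predict(W, N, NW, WW, NN, T=8):
    """Gradient Edge Detection: choose a sub-predictor from local
    gradient estimates and a user-defined threshold T."""
    g_h = abs(W - WW) + abs(N - NW)    # horizontal activity (assumed form)
    g_v = abs(W - NW) + abs(N - NN)    # vertical activity (assumed form)
    if g_v - g_h > T:
        return W                       # horizontal edge: predict from the west
    if g_h - g_v > T:
        return N                       # vertical edge: predict from the north
    return W + N - NW                  # smooth region: planar, MED-style
```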
... $bin = \mathrm{cont}(W, N, NW, WW, NN, Er, g_h, g_v, P)$ (7), where the first five inputs of the cont function are neighbor pixels, $Er$ is the prediction error of previous pixels, $g_h$ and $g_v$ are local gradient estimates, and $P$ is the prediction of the current pixel. The first five bits of the unique binary number are determined by comparing the neighbor pixels with the current prediction. ...
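One plausible way to assemble such a context number, consistent with the description above (the bit layout beyond the first five comparison bits is an assumption):

```python
def context_number(W, N, NW, WW, NN, P, g_h, g_v, Er):
    """Build a binary context index: five bits from comparing the causal
    neighbours with the current prediction P, plus coarse gradient and
    error-sign bits (assumed layout)."""
    bits = [int(n >= P) for n in (W, N, NW, WW, NN)]   # first five bits
    bits.append(int(g_h >= g_v))                       # dominant gradient
    bits.append(int(Er >= 0))                          # sign of previous error
    ctx = 0
    for b in bits:
        ctx = (ctx << 1) | b                           # pack bits into an index
    return ctx                                         # here: 0 .. 127
```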
In this paper, a novel predictive lossless image compression algorithm is presented. Lossless compression must be applied when the acquired data are important and expensive, as in aerial, medical and space imaging. Besides achieving compression ratios as high as possible, lossless image coding algorithms must be fast. The proposed algorithm is developed for efficient and fast processing of 12-bit medical images. A comparison with the standardized lossless compression algorithm JPEG-LS is performed on a set of 12-bit medical images with different statistical features. It is shown that the proposed solution can achieve approximately the same bitrates as JPEG-LS even though it is much simpler.