Conference Paper

Comparative Study of Neural Network based Compression Techniques for Medical Images

Article
Full-text available
A compression technique for still digital images is proposed with deep neural networks (DNNs) employing rectified linear units (ReLUs). We exploit the DNN's capabilities to find a reasonable estimate of the underlying compression/decompression relationships. We aim for a DNN for image compression that has better generalization, reduced training time, and supports real-time operation. The use of ReLUs, which map more plausibly to biological neurons, makes the training of our DNN significantly faster, shortens the encoding/decoding time, and improves its generalization ability. The introduction of the ReLUs establishes efficient gradient propagation, induces sparsity in the proposed network, and is computationally efficient, making these networks suitable for real-time compression systems. Experiments performed on standard real-world images show that using ReLUs instead of logistic sigmoid units speeds up the training of the DNN, which converges markedly faster. The evaluation of the objective and subjective quality of the reconstructed images also shows that our DNN achieves better generalization, as most of the images have never been seen by the network before.
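As a rough illustration of the idea (not the authors' trained network), the sketch below runs a single-bottleneck forward pass with ReLU units; the layer sizes and random weights are assumptions made only for the example.

```python
# Minimal sketch (not the paper's network): a single-hidden-layer
# autoencoder-style compressor with ReLU units, forward pass only.
# Layer sizes and (untrained) weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# 8x8 image block flattened to 64 values, compressed to 16 hidden activations.
W_enc = rng.normal(scale=0.1, size=(16, 64))   # encoder weights (untrained)
W_dec = rng.normal(scale=0.1, size=(64, 16))   # decoder weights (untrained)

block = rng.random(64)                 # stand-in for a normalized image block
code = relu(W_enc @ block)             # compressed representation (bottleneck)
recon = W_dec @ code                   # reconstructed block

mse = np.mean((block - recon) ** 2)    # distortion of this (untrained) sketch
print(f"code length: {code.size}, block MSE: {mse:.4f}")
```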
Article
Full-text available
Despite rapid improvements in storage and data transmission techniques, there is an increasing need for medical image compression. Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce huge amounts of volumetric images of various anatomical structures of the human body. These images need to be compressed for storage and communication purposes. In this paper we propose a lossless method of volumetric medical image compression and decompression using a block-based coding technique. The algorithm is tested on different sets of CT colour images using Matlab. The Digital Imaging and Communications in Medicine (DICOM) images are compressed using the proposed algorithm and stored as DICOM-formatted images. The inverse nature of the algorithm is used to reconstruct the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of images, giving a comparative analysis of computational burden versus compression ratio for various predefined block sizes. The paper finally shows that the proposed methodology is better in terms of computational complexity and compression ratio.
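A hedged sketch of the general block-based idea follows (the block size and the minimum-plus-residual code are illustrative stand-ins, not the paper's exact algorithm); it shows that such a scheme reconstructs the image losslessly.

```python
# Illustrative block-based lossless coding: store each block's minimum and its
# non-negative residuals, which need fewer bits than the raw samples.
import numpy as np

def encode_blocks(img, b=8):
    """Split into b x b blocks; keep (block minimum, residuals) per block."""
    h, w = img.shape
    blocks = []
    for i in range(0, h, b):
        for j in range(0, w, b):
            blk = img[i:i+b, j:j+b]
            m = int(blk.min())
            blocks.append((m, blk.astype(np.int32) - m))
    return blocks

def decode_blocks(blocks, shape, b=8):
    h, w = shape
    out = np.empty(shape, dtype=np.int32)
    k = 0
    for i in range(0, h, b):
        for j in range(0, w, b):
            m, res = blocks[k]; k += 1
            out[i:i+b, j:j+b] = res + m
    return out

img = np.random.randint(0, 256, (64, 64))
assert np.array_equal(decode_blocks(encode_blocks(img), img.shape), img)  # lossless
```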
Article
Full-text available
A novel dual-level differential pulse code modulation (DL-DPCM) is proposed for lossless compression of medical images. The DL-DPCM consists of a linear DPCM followed by a nonlinear DPCM, namely a context adaptive switching neural network predictor (CAS-NNP). The CAS-NNP adaptively switches between three NN predictors based on the context texture of the predicted pixel in the image. Experiments on magnetic resonance (MR) images showed lower prediction error for the DL-DPCM compared to the GAP and the MED, which are used in the benchmark algorithms CALIC and LOCO-I respectively. The overall improvement in data reduction after entropy coding the prediction error was 0.21 bpp (6.5%) compared to the CALIC and 0.40 bpp (11.7%) compared to the LOCO-I.
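For context, here is a sketch of a DPCM stage built on the MED predictor (the LOCO-I predictor that the paper uses as one of its benchmarks); the CAS-NNP neural stage itself is not reproduced.

```python
# DPCM residuals using the median edge detector (MED) predictor from LOCO-I.
import numpy as np

def med_predict(a, b, c):
    """MED predictor: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def dpcm_residuals(img):
    img = img.astype(np.int32)
    res = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            a = img[i, j-1] if j > 0 else 0
            b = img[i-1, j] if i > 0 else 0
            c = img[i-1, j-1] if i > 0 and j > 0 else 0
            res[i, j] = img[i, j] - med_predict(a, b, c)
    return res  # these residuals would be entropy coded in a full codec

img = np.random.randint(0, 256, (32, 32))
print("residual spread (std):", dpcm_residuals(img).std())
```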
Article
Full-text available
Through the development of medical imaging systems and their integration into a complete information system, the need for advanced joint coding and network services becomes predominant. PACS (Picture Archiving and Communication System) aims to acquire, store, compress, retrieve, present, and distribute medical images. These systems have to be accessible via the Internet or wireless channels. Thus, protection processes against transmission errors have to be added to obtain a powerful joint source-channel coding tool. Moreover, these sensitive data require confidentiality and privacy for archiving and transmission purposes, leading to the use of cryptography and data-embedding solutions. This chapter introduces data integrity protection and presents dedicated tools for content protection and secure bitstream transmission of encoded medical images. In particular, the LAR image coding method is defined together with advanced security services.
Article
Full-text available
In this paper, a novel medical data compression algorithm, termed layered set partitioning in hierarchical trees (LSPIHT) algorithm, is presented for telemedicine applications. In the LSPIHT, the encoded bit streams are divided into a number of layers for transmission and reconstruction. Starting from the base layer, by accumulating bit streams up to different enhancement layers, we can reconstruct medical data with various signal-to-noise ratios (SNRs) and/or resolutions. Receivers with distinct specifications can then share the same source encoder to reduce the complexity of telecommunication networks for telemedicine applications. Numerical results show that, besides having low network complexity, the LSPIHT attains better rate-distortion performance as compared with other algorithms for encoding medical data.
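The layering principle can be illustrated with plain bit-plane layering (this is not the LSPIHT coder itself): coefficient magnitudes are sent most-significant plane first, and a receiver reconstructs from however many layers it has accumulated, so SNR grows with the number of layers.

```python
# Illustrative bit-plane layering of 8-bit coefficient magnitudes.
import numpy as np

coeffs = np.random.randint(0, 256, 1000)   # stand-in for encoded magnitudes
n_planes = 8

def reconstruct(coeffs, planes_received):
    """Keep only the top `planes_received` bit planes of each value."""
    shift = n_planes - planes_received
    return (coeffs >> shift) << shift

for layers in (2, 4, 6, 8):
    approx = reconstruct(coeffs, layers)
    err = (coeffs - approx).astype(float)
    snr = 10 * np.log10(np.sum(coeffs.astype(float) ** 2) / max(np.sum(err ** 2), 1e-12))
    print(f"layers={layers}: SNR ~ {snr:.1f} dB")   # SNR improves as layers accumulate
```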
Article
Full-text available
We propose a new scheme of designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. Index Terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
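A rough sketch of the codebook-generation idea follows: a small one-dimensional self-organizing map trained on mean-removed 4x4 blocks. The cubic-surface modelling, generic codebook construction, and Huffman coding of indices are not reproduced, and all sizes and learning rates are illustrative assumptions.

```python
# Tiny 1-D SOM trained on mean-removed image blocks, then used as a VQ codebook.
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)

# Mean-removed 4x4 block vectors (16-dimensional).
blocks = []
for i in range(0, 64, 4):
    for j in range(0, 64, 4):
        v = img[i:i+4, j:j+4].ravel()
        blocks.append(v - v.mean())
blocks = np.array(blocks)

codebook = rng.normal(scale=10.0, size=(32, 16))   # 32 codevectors
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                    # decaying learning rate
    radius = max(1, int(4 * (1 - epoch / 20)))     # shrinking neighbourhood
    for v in blocks:
        winner = np.argmin(np.sum((codebook - v) ** 2, axis=1))
        lo, hi = max(0, winner - radius), min(len(codebook), winner + radius + 1)
        codebook[lo:hi] += lr * (v - codebook[lo:hi])   # pull neighbourhood toward v

indices = [int(np.argmin(np.sum((codebook - v) ** 2, axis=1))) for v in blocks]
print("codebook size:", len(codebook), "first indices:", indices[:8])
```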
Article
The generation of high volumes of medical images in recent years has increased the demand for more efficient compression methods to cope with the resulting storage and transmission problems. In the case of medical images, it is important to ensure that the compression process does not affect the image quality adversely. In this paper, a predictive image coding method is proposed which preserves the quality of the medical image in the diagnostically important region (DIR) even after compression. In this method, the image is initially segmented into two portions, namely DIR and non-DIR portions, using a graph-based segmentation procedure. The prediction process is implemented using two identical feed-forward neural networks (FF-NNs) at the compression and decompression stages. Gravitational search and particle swarm algorithms are used for training the FF-NNs. Prediction is performed both in a lossless (LLP) and near-lossless (NLLP) manner for evaluating the performance of the two FF-NN training algorithms. The prediction error sequence, which is the difference between the actual and predicted pixel values, is further compressed using a Markov model based arithmetic coding. The proposed method is tested using the CLEF med 2009 database. The experimental results demonstrate that the proposed method is capable of compressing medical images with minimal degradation in image quality. The gravitational search method is found to achieve higher PSNR values compared to the particle swarm and backpropagation methods.
Article
In many multimedia applications, such as image storage and transmission, compression plays a major role. The fundamental objective of image compression is to represent an image with the least number of bits at an acceptable image quality. A technique based on the second-generation curvelet transform and a Back-Propagation Neural Network (BPNN) has been proposed. The image compression is accomplished by approximating curvelet coefficients using the BPNN. By applying the BPNN to compress curvelet coefficients, we propose a new compression algorithm derived from the characteristics of the curvelet transform. Initially, the image is transformed by the fast discrete curvelet transform, and then, based on their statistical properties, different coding and quantization schemes are employed for the coefficients. Differential Pulse Code Modulation (DPCM) is employed to compress low-frequency band coefficients, and the BPNN is used to compress high-frequency band coefficients. Subsequently, vector quantization is performed on the BPNN hidden layer coefficients, resulting in a reconstructed image with less degradation at higher compression ratios. For a given number of bits per pixel (bpp), the curvelet transform with BPNN gives better performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Computation Time (CT) when compared to the wavelet transform with BPNN and JPEG.
Article
Background/Objectives: The main aim of this hybrid image compression method is to provide good picture quality and a better compression ratio while removing block artifacts in the reconstructed image. Methods/Statistical analysis: To compress an image using the proposed algorithm, images are first digitized, and different transformations are applied to the digital information; in this method wavelet transformations (Haar and Daubechies wavelets) are used. The resulting transform coefficients are quantized to the nearest integer values, with vector quantization playing an important role in quantizing the coefficients. After quantization they are encoded using one of the compression encoding techniques. Huffman encoding, which derives a variable-length code table from the exact frequencies of the source, is used for compressing tablet images and tablet strip images; the encoded source symbols stored in this table are transferred through the channel for decoding. Findings: Since unsupervised neural network learning algorithms are incorporated in this algorithm, the picture quality is improved and block artifacts are removed. Conclusion/Application: Since cloud computing provides elastic services, high performance, and scalable large data storage, image files are compressed and stored using this hybrid compression algorithm to facilitate long-term storage and efficient transmission and to enhance the performance of recent compression algorithms. The compressed and reconstructed images are evaluated using error measures such as CR (Compression Ratio) and PSNR (Peak Signal-to-Noise Ratio). The results show that the proposed algorithm provides better results than traditional methods.
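A minimal sketch of the transform and integer-quantization steps described above, using a hand-rolled one-level 2-D Haar transform; the Daubechies option, vector quantization, and Huffman coding stages are not reproduced, and the array sizes are illustrative.

```python
# One level of a 2-D Haar transform followed by rounding to integer values.
import numpy as np

def haar2d_level(x):
    """Return LL, LH, HL, HH sub-bands of one 2-D Haar decomposition level."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.random.randint(0, 256, (64, 64)).astype(float)
ll, lh, hl, hh = haar2d_level(img)
quantized = [np.rint(band) for band in (ll, lh, hl, hh)]  # nearest-integer quantization
print("LL shape:", quantized[0].shape, "HH energy:", float((quantized[3] ** 2).sum()))
```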
Article
This paper considers a novel image compression technique called hybrid predictive wavelet coding. The proposed technique combines the properties of predictive coding and discrete wavelet coding. In contrast to JPEG2000, the image data values are pre-processed using predictive coding to remove interpixel redundancy. The error values, which are the differences between the original and the predicted values, are then transformed with the discrete wavelet transform. A nonlinear neural network predictor is utilised in the predictive coding system. The simulation results indicate that the proposed technique can achieve good compressed images at high decomposition levels in comparison to JPEG2000.
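A minimal sketch of the hybrid pipeline order (prediction first, wavelet transform of the residuals second): a plain left-neighbour predictor stands in for the paper's neural network predictor, and PyWavelets' Haar DWT stands in for its wavelet stage.

```python
# Hybrid predictive-wavelet sketch: residuals from a simple predictor are
# passed to a discrete wavelet transform instead of the raw pixels.
import numpy as np
import pywt

img = np.random.randint(0, 256, (64, 64)).astype(np.int32)

# Predictive stage: residual = pixel - left neighbour (first column kept as-is).
pred = np.zeros_like(img)
pred[:, 1:] = img[:, :-1]
residual = img - pred                      # interpixel redundancy largely removed

# Wavelet stage applied to the residual image.
cA, (cH, cV, cD) = pywt.dwt2(residual.astype(float), 'haar')
print("residual std:", residual.std(), "detail-band std:", cD.std())
```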
Article
Image compression is used to reduce the number of bits required to represent an image, which reduces storage space and transmission cost. Image compression techniques are widely used in many applications, especially in the medical field, where large numbers of medical image sequences are produced in hospitals and medical organizations. Large images can be compressed into smaller ones so that the memory they occupy is considerably reduced, which also lowers broadcast and transmission costs. This is achieved by compressing different types of medical images while providing a good compression ratio (CR), low mean square error (MSE), low bits per pixel (BPP), high peak signal-to-noise ratio (PSNR), small compressed-image size, minimal memory requirements, and low computational time. The pixels and other contents of the images vary little during the compression process. This work outlines different compression methods, namely Huffman, fractal, neural network back propagation (NNBP), and neural network radial basis function (NNRBF), applied to medical images such as MR and CT images. Experimental results show that the NNRBF technique achieves a higher CR, BPP and PSNR, with lower MSE, on CT and MR images when compared with the Huffman, fractal and NNBP techniques.
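The comparison rests on standard size and quality metrics; the sketch below shows how CR, BPP, MSE, and PSNR are typically computed for an 8-bit image (the compressed size used here is an arbitrary placeholder).

```python
# Standard compression metrics for an 8-bit image.
import numpy as np

def mse(original, reconstructed):
    return float(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))

def psnr(original, reconstructed, peak=255.0):
    m = mse(original, reconstructed)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, n_pixels):
    return 8.0 * compressed_bytes / n_pixels

orig = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
recon = np.clip(orig.astype(int) + np.random.randint(-2, 3, orig.shape), 0, 255)
compressed_bytes = 4096                     # placeholder for an encoder's output size
print(f"MSE={mse(orig, recon):.2f}  PSNR={psnr(orig, recon):.2f} dB")
print(f"CR={compression_ratio(orig.nbytes, compressed_bytes):.2f}  "
      f"BPP={bits_per_pixel(compressed_bytes, orig.size):.2f}")
```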
Conference Paper
A technique for image encryption is proposed using random phase masks and the fractional Fourier transform. The method uses four random phase masks and two fractional orders that act as the encryption key. The encryption scheme transmits the data to the authorized user while maintaining its integrity and confidentiality. Numerical simulations have been carried out to validate the algorithm and to calculate its Mean Square Error (MSE). Furthermore, an image is divided into four sections, a different algorithm is applied to each section, and the encryption and decryption times as well as the MSEs are compared to find the most suitable algorithm.
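As a rough illustration of the phase-mask idea, the sketch below uses double-random-phase encoding in which the ordinary FFT stands in for the fractional Fourier transform of the paper; the fractional orders that form part of the key are therefore not modelled.

```python
# Double-random-phase encryption/decryption sketch with the ordinary FFT.
import numpy as np

rng = np.random.default_rng(42)
img = rng.random((64, 64))                                # stand-in plaintext image

mask1 = np.exp(2j * np.pi * rng.random(img.shape))        # random phase mask 1 (key)
mask2 = np.exp(2j * np.pi * rng.random(img.shape))        # random phase mask 2 (key)

cipher = np.fft.ifft2(np.fft.fft2(img * mask1) * mask2)   # encryption
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(mask2)) * np.conj(mask1)  # decryption

mse = np.mean(np.abs(recovered.real - img) ** 2)
print(f"decryption MSE: {mse:.3e}")                       # ~0 with the correct keys
```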
Article
Data compression, and especially image compression, is becoming increasingly important for efficient resource utilization. Many digital image file formats therefore include universally usable compression methods. These treat every image separately and do not profit from the similar image contents of a larger image data set, which are present in numerous biomedical applications. This situation provided the impetus to develop and implement a technical system that incorporates a priori information on typical image contents into image compression on the basis of artificial neural networks, and thus increases compression performance for larger image data sets with frequently recurring image contents.
Article
Efficient storage and transmission of medical images in telemedicine is of utmost importance; however, this efficiency can be hindered by limited storage capacity and bandwidth constraints. Thus, a medical image may require compression before transmission or storage. An ideal image compression system must yield high-quality compressed images with a high compression ratio; this can be achieved using wavelet transform based compression. However, the choice of an optimum compression ratio is difficult, as it varies depending on the content of the image. In this paper, a neural network is trained to relate radiograph image contents to their optimum compression ratio. Once trained, the neural network chooses the ideal Haar wavelet compression ratio of x-ray images upon their presentation to the network. Experimental results suggest that our proposed system can be efficiently used to compress radiographs while maintaining high image quality.
Article
High resolution images acquired by aerial digital cameras and high resolution satellite images are expected to become a more powerful data source for GIS. Since the large data volume of a high resolution image is difficult to handle, lossy image compression is becoming indispensable. The quality of a reconstructed image after decompression is usually evaluated by visual inspection. Although numerical measures such as RMSE or PSNR are used to compare various image compression techniques, numerical evaluation of the quality of a reconstructed image is seldom conducted. Therefore, we carried out an empirical investigation into the effects of lossy image compression on the quality of color aerial images using color and texture measures. From the experimental results, it can be concluded that the color space conversion and downsampling in JPEG compression affect the quality of a reconstructed image. The results showed that lossy JPEG 2000 compression is superior to lossy JPEG compression in color features; however, lossy JPEG 2000 compression does not necessarily provide an image of good quality in texture features. Moreover, the results indicated that an image with finer texture features is less compressible, and the quality of its reconstructed image is worse in both color and texture features. Finally, it was confirmed that it is difficult to set an appropriate quality factor, because the optimal setting of the quality factor varies from one image to another.
Article
In this paper, we present an implementation of the set partitioning in hierarchical trees (SPIHT) image compression technique in programmable hardware. A lifting-based Discrete Wavelet Transform (DWT) architecture has been selected for exploiting the correlation among the image pixels. In addition, we provide a study of the storage elements required for the wavelet coefficients. A modified SPIHT algorithm is presented for encoding the wavelet coefficients. The modifications include a simplification of the coefficient scanning process, the use of a 1-D addressing method instead of the original 2-D arrangement of wavelet coefficients, and a fixed memory allocation for the data lists instead of the dynamic allocation required in the original SPIHT. The proposed algorithm has been illustrated on both the 2-D Lena image and a 3-D MRI data set, and is found to achieve appreciable compression with a high peak signal-to-noise ratio (PSNR).
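For reference, a one-level sketch of the lifting-based integer 5/3 DWT often paired with SPIHT-style coders (forward and inverse, with simple symmetric boundary extension and an even-length signal assumed); this is a generic illustration, not the paper's hardware architecture.

```python
# 1-D integer 5/3 lifting transform: predict then update, perfectly invertible.
import numpy as np

def dwt53_forward(x):
    x = x.astype(np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail coefficients
    odd -= (even + np.append(even[1:], even[-1])) // 2
    # update step: approximation coefficients
    even += (np.insert(odd[:-1], 0, odd[0]) + odd + 2) // 4
    return even, odd

def dwt53_inverse(even, odd):
    even, odd = even.copy(), odd.copy()
    even -= (np.insert(odd[:-1], 0, odd[0]) + odd + 2) // 4
    odd += (even + np.append(even[1:], even[-1])) // 2
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, 64)
s, d = dwt53_forward(x)
assert np.array_equal(dwt53_inverse(s, d), x)   # integer lifting is lossless
```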
Article
We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.
Article
A Hilbert space-filling curve is a curve traversing the 2^n x 2^n two-dimensional space, visiting neighboring points consecutively without crossing itself. The application of Hilbert space-filling curves in image processing is to rearrange image pixels in order to enhance pixel locality. A computer program for the Hilbert space-filling curve ordering, generated from a tensor product formula, is used to rearrange the pixels of medical images. We implement four lossless encoding schemes, run-length encoding, LZ77 coding, LZW coding, and Huffman coding, along with the Hilbert space-filling curve ordering. Combinations of these encoding schemes are also implemented to study the effectiveness of various compression methods. In addition, differential encoding is applied to the medical images to study the effect of different image representations on the above encoding schemes. In the paper, we report the resulting compression ratios and a performance evaluation. The experiments show that the pre-processing operation of differential encoding followed by the Hilbert space-filling curve ordering, with LZW coding followed by Huffman coding, gives the best compression result.
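A sketch of the pixel-reordering pre-processing step follows: the standard iterative distance-to-coordinate (d2xy) construction of the Hilbert curve is used to linearize an image, after which a simple run-length encoder is applied. The LZ77, LZW, and Huffman stages are omitted, and the tiny low-entropy test image is illustrative.

```python
# Hilbert-curve pixel reordering followed by run-length encoding.
import numpy as np

def d2xy(n, d):
    """Map distance d along the Hilbert curve of an n x n grid (n a power of 2) to (x, y)."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_order(img):
    n = img.shape[0]
    return np.array([img[y, x] for x, y in (d2xy(n, d) for d in range(n * n))])

def run_length_encode(seq):
    runs, prev, count = [], seq[0], 1
    for v in seq[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((int(prev), count)); prev, count = v, 1
    runs.append((int(prev), count))
    return runs

img = np.random.randint(0, 4, (8, 8))        # small, low-entropy stand-in image
print(run_length_encode(hilbert_order(img))[:10])
```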
Article
A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.
Conference Paper
Image compression using the Discrete Cosine Transform (DCT) is one of the simplest commonly used compression methods. The quality of compressed images, however, is marginally reduced at higher compression ratios due to the lossy nature of DCT compression, hence the need to find an optimum DCT compression ratio. An ideal image compression system must yield high-quality compressed images with a good compression ratio while maintaining minimum time cost. Neural networks perform well in simulating non-linear relationships. This paper suggests that a neural network can be trained to recognize an optimum DCT compression ratio for an image upon presenting the image to the network. The neural network associates the image intensity with its compression ratios in search of an optimum ratio. Experimental results suggest that a trained neural network can simulate such a non-linear relationship and can thus be used to provide an intelligent optimum image compression system.
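To make the underlying DCT stage concrete, here is a small sketch of the 8x8 block DCT compression whose ratio such a network would tune: an orthonormal DCT-II matrix built with numpy, retention of the largest coefficients, and the inverse transform. The block content and the number of retained coefficients are illustrative.

```python
# 8x8 block DCT compression by keeping only the largest coefficients.
import numpy as np

N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                   # orthonormal DCT-II matrix

def compress_block(block, keep=16):
    coeffs = C @ block @ C.T                 # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]   # keep the `keep` largest magnitudes
    coeffs = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return C.T @ coeffs @ C                  # inverse 2-D DCT (reconstruction)

block = np.random.randint(0, 256, (8, 8)).astype(float)
recon = compress_block(block, keep=16)
print("block MSE at 16/64 coefficients kept:", float(np.mean((block - recon) ** 2)))
```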
Article
Neural networks can be of benefit in many image compression schemes. However, any system is constrained by the performance of the paradigm on which it is based. For example, although neural networks have been shown to improve differential pulse code modulation (DPCM) image compression, the overall performance of the system is still limited by the performance of DPCM. In this work a multiresolution neural network (MRNN) filter bank has been created for use within a state-of-the-art subband-coding framework. A polyphase implementation and training algorithm is presented. A filter bank that can synthesize the signal accurately from only the reference coefficients is well suited for low-bitrate coding where the detail coefficients are coarsely quantized. Thus, the low-pass channel of the MRNN filter bank is trained to recreate the signal accurately. The high-pass channel is trained for perfect reconstruction so that the MRNN filter bank will also be effective at high bitrates. This paper presents an analysis of the MRNN filter bank and its potential as a transform for coding. The MRNN filter bank has been used in place of a linear filter bank in the set partitioning in hierarchical trees (SPIHT) coder. The new filter bank shows advantages over the linear filter bank for coding at low bitrates, although its performance suffers at high bitrates. However, the results are encouraging and suggest that further work in this area is warranted.
Article
One of the purposes of this article is to give a general audience sufficient background into the details and techniques of wavelet coding to better understand the JPEG 2000 standard. The focus is on the fundamental principles of wavelet coding and not the actual standard itself. Some of the confusing design choices made in wavelet coders are explained. There are two types of filter choices: orthogonal and biorthogonal. Orthogonal filters have the property that they are energy or norm preserving. Nevertheless, modern wavelet coders use biorthogonal filters, which do not preserve energy. Reasons for these specific design choices are explained. Another purpose of this article is to compare and contrast “early” wavelet coding with “modern” wavelet coding. The article compares the techniques of modern wavelet coders to subband coding techniques so that the reader can appreciate how different modern wavelet coding is from early wavelet coding. It discusses basic properties of the wavelet transform which are pertinent to image compression. It builds on the background material on generic transform coding, shows that boundary effects motivate the use of biorthogonal wavelets, and introduces the symmetric wavelet transform. Subband coding, or “early” wavelet coding, is discussed, followed by an explanation of the EZW coding algorithm. Other modern wavelet coders that extend the ideas found in the EZW algorithm are also described.
Article
A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for `lossy' compression, and a predictive method for `lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method.
Article
With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates have become necessary. Boundaries and edges in the tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. Unlike existing lossy transform-based compression techniques such as FFT and DCT, edge preservation is addressed in this new compression scheme. The proposed Edge Preserving Image Compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on Dynamic Associative Neural Networks (DANN) to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvemen...
A Simple Block-Based Lossless Image Compression Scheme
  • S G Chang
  • G S Yovanof
S.G. Chang and G.S. Yovanof, "A Simple Block-Based Lossless Image Compression Scheme," in Proc. Thirtieth Asilomar Conference on Signals, Systems, and Computers, 1996, pp. 591-595.
Wavelet-based Medical Image Compression with Adaptive Prediction
  • Y T Chen
  • D C Tseng
  • P C Chang
Y.T. Chen, D.C. Tseng, and P.C. Chang, "Wavelet-based Medical Image Compression with Adaptive Prediction," in Proc. International Symposium on Intelligent Signal Processing and Communication Systems, 2005, pp. 825-828.
  • H Singh
H. Singh, "Cryptosystem for Securing Image Encryption Using Structured Phase Masks in Fresnel Wavelet Transform Domain," 3D Research, vol. 7, no. 4, 2016.