Table 2. Comparison of coding results for a pancreas CT image


Source publication
Article
Full-text available
With the progress of digital medical imaging techniques, the need to compress the wide variety of medical images has grown. In medical imaging, reversible compression of the image's region of interest (ROI), which is diagnostically relevant, is considered essential. Then, an improvement in the global compression rate of the image can also be obtained by separately c...

Similar publications

Article
Full-text available
Medical images contain very significant information, so very high-quality images are used in the medical imaging domain. These high-quality images demand considerable cost and bandwidth for storage and transmission. Therefore, image compression is very important in medical image processing. The main objective of the paper is to achieve a high compression ratio and...

Citations

... Despite significant strides in compression methodologies, particularly those leveraging wavelet transforms, the quest for optimizing compression efficiency without compromising image quality remains a complex challenge [1]. ...
Article
Full-text available
This paper presents a cutting-edge algorithmic framework for lossless image compression, directly addressing the limitations and quality compromises inherent in existing compression models. Traditional approaches often fail to effectively balance efficiency with quality retention across various image complexities, leading to degraded image fidelity. Our proposed framework distinguishes itself by adeptly integrating smart partitioning, selective encoding, and wavelet coefficient analysis, thereby achieving marked improvements in compression efficiency without sacrificing image quality. Essential to the framework's efficacy is a methodical approach to image preprocessing, which ensures images are in an optimal state for processing. Through rigorous evaluation against industry standards such as JPEG2000 and PNG, the proposed model demonstrated exceptional performance enhancements: achieving compression ratios up to 4.2:1, enhancing Peak Signal-to-Noise Ratios (PSNR) to 49 dB for low complexity images, and maintaining Structural Similarity Index (SSIM) values as high as 0.99. These quantitative outcomes not only underline the model's superior compression capability but also its robustness in preserving the structural and perceptual quality of images across varying complexities. The significance of this research lies in its potential to redefine benchmarks within the lossless image compression domain, as evidenced by its superior performance metrics. Further exploration into machine learning for partitioning automation, real-time adaptive encoding mechanisms, and expanded framework applicability promises to optimize compression efficiency further. Ultimately, this study lays a foundational stone for future advancements in digital image management, addressing the critical need for high-efficiency, quality-conserving image compression solutions.
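For reference, the PSNR values quoted above follow the standard definition (the abstract does not state the exact test configuration):

    MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\bigl(I(i,j) - K(i,j)\bigr)^{2},
    \qquad
    PSNR = 10\,\log_{10}\!\left(\frac{MAX_I^{2}}{MSE}\right)

where I is the original m×n image, K the reconstruction, and MAX_I the maximum possible pixel value (255 for 8-bit images).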
... This algorithm performed better for both lossy and lossless compression of medical images. Elhannachi et al. [14] proposed an embedded lossless wavelet-based image coding algorithm based on successive differences for efficient region-based image compression. An irreversible lifting wavelet with a SPIHT coder was used to compress the background. ...
... Tanh is a non-linear function with output in the range (-1, +1), calculated mathematically using Eq. (14). ...
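For reference (Eq. (14) itself is not reproduced in this snippet), the standard hyperbolic tangent is

    \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}

which maps any real input into the open interval (-1, +1).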
Article
Full-text available
Medical imaging systems generate enormous amounts of information that place a heavy burden on storage and transmission. As a result, image data compression is a major research topic in the field of medical imaging. Therefore, in this paper, an efficient image compression technique is proposed. The proposed technique consists of three stages: segmentation, image compression, and decompression. Initially, the medical images are collected from the internet. Then, the images are segmented into ROI and Non-ROI regions using the Otsu thresholding technique. The ROI regions are compressed using an optimal zero tree wavelet (OZTW) transform, and the Non-ROI regions are compressed using an enhanced convolutional neural network (ECNN). The threshold value of the zero tree wavelet (ZTW) transform and the weight and bias values of the convolutional neural network (CNN) are optimally selected using the sunflower optimization (SFO) algorithm. After the compression process, the reverse process is carried out for reconstruction. The performance of the proposed approach is analyzed based on PSNR, similarity index, compression ratio, and mean square error.
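As an illustration of the segmentation stage only, a minimal sketch assuming a grayscale input (this is not the authors' OZTW/ECNN pipeline, and the file name is a placeholder), an Otsu-based ROI/Non-ROI split could look like this:

    import numpy as np
    from skimage import io
    from skimage.filters import threshold_otsu

    def split_roi(image: np.ndarray):
        """Split a grayscale image into ROI and Non-ROI masks via Otsu thresholding."""
        t = threshold_otsu(image)      # global Otsu threshold
        roi_mask = image >= t          # assumed: the brighter region is the diagnostically relevant ROI
        return roi_mask, ~roi_mask     # second mask is the Non-ROI / background

    # Hypothetical usage; 'ct_slice.png' is a placeholder file name.
    img = io.imread("ct_slice.png", as_gray=True)
    roi_mask, non_roi_mask = split_roi(img)
    print("ROI fraction of pixels:", roi_mask.mean())

Each mask would then be routed to its own coder (lossless for the ROI, lossy for the Non-ROI).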
... Annex A of PS3.5 (Data Structures and Encoding) defines several transfer syntaxes following the JPEG 2000 standard and provides lossless (bit-preserving) and lossy compression schemes. A transfer syntax is a collection of encoding rules that can clearly express abstract syntaxes [2], [27]. ...
Article
Full-text available
In this modern era, medical image sharing has become a routine activity within hospital information systems. Digital medical images have become valuable resources that aid health care systems’ decision-making and treatment procedures. A medical image consumes a significant amount of memory, and the size of medical images continues to grow as medical imaging technology progresses. In addition, an image is shared for analysis to support knowledge sharing and disease diagnosis. Therefore, health care systems must ensure that medical images are appropriately distributed without information loss in a timely and secure manner. Image compression is the primary process performed on each medical image before it is shared to ensure that the purpose of sharing an image is accomplished. Hybrid region of interest-based medical compression algorithms reduce image size. Furthermore, these algorithms shorten the image compression process time by exploiting the advantages of both lossy and lossless compression techniques. A comprehensive review of previous studies that utilized this approach was conducted. Sample studies were selected from published articles in an open database subscribed to by Universiti Teknologi Malaysia over a ten-year period (2012 to 2023). This work aims to critically review and comprehensively analyze previous types of algorithms by focusing on their main performance results: compression ratio, mean square error, and peak signal-to-noise ratio. This article will identify which type of algorithm can give the optimal value of the primary performance metric for compressing medical images.
... All of the previous methodologies proposed for image compression in telemedicine applications make use of Neural Networks [2,5,8,12,28,30,31,35,36,38,41] and Fuzzy logic [14,29]; they gave efficient results in high-bandwidth systems but were unable to deliver accurate results under low-bandwidth conditions. The neural network model employed in this work executes a selection process with the goal of deleting unnecessary pixel coefficients and fine points of medical image data while retaining finer details around larger shapes. ...
Article
Full-text available
Using a hybrid image compression model, the proposed research creates a practical method for integrating the advantages of a learning system with a decision logic framework. The emphasis here is that, when integrated with conventional image coding technology, the decision logic is used for decision making. The execution is divided into three stages. First, the DCT representation of the image is computed for different energy levels. Each energy coefficient is then processed in parallel, resulting in substantially higher processing speed. In the second phase, differential pulse code modulation is used to compress the coefficients that correspond to the lowest energy level; coefficients from the learning system are used as the energy component to extract the coefficients. Finally, the algorithm is fed the results of the probabilistic decisions made in the second step. To validate the proposed approach, the suggested method is tested on different Magnetic Resonance Imaging (MRI) medical samples. The simulation findings reveal good results and suggest that the reconstructed images are better than those of the conventional system. The developed Neuro-Fuzzy image compression model attains high accuracy and precision with reduced processing overhead and computational complexity.
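As a rough sketch of the two transform steps mentioned above, a 2-D DCT followed by DPCM (successive differences) of the coefficients, and not the paper's Neuro-Fuzzy model, the round trip below is lossless by construction; the 8x8 block size and coefficient ordering are assumptions:

    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_block(block: np.ndarray):
        """Sketch: 2-D DCT of an image block, then DPCM on the flattened coefficients."""
        coeffs = dctn(block.astype(float), norm="ortho")   # 2-D DCT-II
        flat = coeffs.ravel()
        dpcm = np.diff(flat, prepend=0.0)                   # DPCM: store successive differences
        return coeffs.shape, dpcm

    def decode_block(shape, dpcm):
        """Invert the sketch: undo DPCM, then apply the inverse 2-D DCT."""
        flat = np.cumsum(dpcm)                              # undo the successive differences
        return idctn(flat.reshape(shape), norm="ortho")

    block = np.arange(64, dtype=float).reshape(8, 8)        # toy 8x8 block
    shape, dpcm = encode_block(block)
    print(np.allclose(decode_block(shape, dpcm), block))    # True: the round trip is exact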
... Additionally, generic data compression techniques such as LZMA, Deflate, PPMd, Bzip2, and Gzip are also put to the test. In [14], an efficient embedded image coder based on a reversible discrete cosine transform (RDCT) was proposed for lossless ROI coding with a high compression ratio. ...
Article
Full-text available
Companies that produce energy transmit it to households via a power grid, which is a regulated power transmission hub that acts as a middleman. When a power grid fails, the whole area it serves is blacked out. To ensure smooth and effective functioning, a power grid monitoring system is required. Computer vision is among the most commonly utilized and active research applications in the world of video surveillance. Though a lot has been accomplished in the field of power grid surveillance, a more effective compression method is still required so that large quantities of grid surveillance video data can be archived compactly and sent efficiently. Video compression has become increasingly essential with the advent of contemporary video processing algorithms. An algorithm’s efficacy in a power grid monitoring system depends on the rate at which video data is sent. A novel compression technique for video inputs from power grid monitoring equipment is described in this study. Due to a lack of redundancy in visual input, traditional techniques are unable to fulfil the current demand standards for modern technology; as a result, the volume of data that needs to be saved and handled in real time grows. By encoding frames and reducing duplication in surveillance video using texture-information similarity, the proposed technique overcomes the aforementioned problems through a Robust Particle Swarm Optimization (RPSO) based run-length coding approach. Based on experimental findings and assessments of different surveillance video sequences using varied parameters, our solution surpasses other current and relevant existing algorithms. A massive collection of surveillance videos was compressed at a 50% higher rate using the suggested approach than with existing methods.
... The image compression process usually consists of two basic stages: encoder and decoder. The encoder stage in the transmitter converts the original image into a code sequence [3]. The decoder regenerates the required data at the receiver to recreate the original image, so that the reconstructed image looks like the original one. ...
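To make the two-stage encoder/decoder idea concrete, here is a minimal round trip using plain run-length encoding; it is a generic illustration with invented data, not the coder used in any of the cited works.

    import numpy as np

    def rle_encode(pixels: np.ndarray):
        """Encoder: convert a 1-D pixel sequence into (value, run_length) pairs."""
        pixels = np.asarray(pixels).ravel()
        change = np.flatnonzero(np.diff(pixels)) + 1                 # positions where the value changes
        starts = np.concatenate(([0], change))
        lengths = np.diff(np.concatenate((starts, [pixels.size])))
        return list(zip(pixels[starts].tolist(), lengths.tolist()))

    def rle_decode(pairs):
        """Decoder: regenerate the original pixel sequence from the code."""
        return np.concatenate([np.full(n, v) for v, n in pairs])

    data = np.array([0, 0, 0, 255, 255, 7, 7, 7, 7])
    code = rle_encode(data)
    print(code)                                    # [(0, 3), (255, 2), (7, 4)]
    print(np.array_equal(rle_decode(code), data))  # True: reconstruction matches the original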
Article
Full-text available
The main aim of this study is to decrease the amount of storage as much as possible while keeping the decoded image seen on the monitor as close as possible to the original image. The main goal is to design a fully hybrid system for medical image compression. For this purpose, hybrid techniques were used to enhance the compression performance, decrease the computational complexity, and raise the CR (Compression Ratio). The proposed system rests on these tools: a new fully hybrid image compression system to compress a medical image (brain tumour disease type); a new, reliable, and computationally efficient algorithm to identify the ROI (Region of Interest) and NROI (Non-Region of Interest) before the compression process; and new algorithms to compress the ROI and NROI regions. The first region, the ROI, is compressed by cascading the SPIHT and BAT algorithms, while the second region (NROI) is compressed by the 2D-DWT algorithm. Finally, a new coding system is designed by combining the RLE (Run-Length Encoding) and Huffman coding algorithms to improve the CR. The results indicate that the SPIHT-BAT algorithm increases the compression ratio compared with SPIHT alone, and that the ROI region yields better results than the NROI region. The coding result when using the combined RLE-Huffman algorithm is better than when using RLE or Huffman alone. The different parameters of the compression process indicate that the proposed system outperforms the traditional systems described in the literature.
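As a generic illustration of the final coding stage described above (run-length symbols followed by Huffman coding), and not the authors' exact coder, a minimal Huffman code table could be built as follows; the example run-length symbols are invented:

    import heapq
    from collections import Counter

    def huffman_code(symbols):
        """Build a prefix code (symbol -> bit string) from symbol frequencies."""
        freq = Counter(symbols)
        if len(freq) == 1:                      # degenerate case: only one distinct symbol
            return {next(iter(freq)): "0"}
        # Heap items: (frequency, tie_breaker, {symbol: code_so_far})
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    # Hypothetical run-length symbols produced by an earlier RLE pass.
    runs = [(0, 3), (255, 2), (0, 3), (0, 3), (7, 4)]
    table = huffman_code(runs)
    bitstream = "".join(table[s] for s in runs)
    print(table, bitstream)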
... Sid Ahmed Elhannachi, Nacéra Benamrane, and Abdelmalik Taleb-Ahmed identified an efficient embedded image encoder based on a reversible discrete cosine transform (RDCT) designed for lossless ROI coding at a very high compression ratio [23]. The ROI selection is made manually by the researcher on various kinds of medical images, including MRI, X-ray, CT scan, and ultrasound. ...
... Table 2 shows the evaluation metrics used by the previous studies [16], [20]-[24]. ...
... The study in [16] did not consider the quality of the output image and therefore did not report the PSNR value. The studies in [21, 23] aim for quality of the output image; thus, they do not measure the compression ratio. ...
Article
Full-text available
Digital medical images have become a vital resource that supports decision-making and treatment procedures in healthcare facilities. Medical images consume large amounts of memory, and their size keeps growing with the evolution of medical imaging technology. Telemedicine encourages medical practitioners to share medical images to support knowledge sharing when diagnosing and analysing the image. The healthcare system needs to ensure that medical images are distributed accurately, with zero loss of information, quickly and securely. Image compression is beneficial in achieving the goal of sharing these data. Region of interest-based hybrid medical compression algorithms play a part in reducing image size and shortening the medical image compression process. Various studies have improved performance by combining numerous techniques to obtain an ideal result. This paper reviews previous work on region of interest-based hybrid medical image compression algorithms.
... A trained NN can be treated as an "expert" that can give correct information based on the information given to it for analysis. Many image compression algorithms [27][28][29][30][31][32][33][34][35][36] were proposed in the past, but given the new circumstances, the interest in providing such expert-like estimates and the question of what they can be used to answer motivated this research work. ...
... 19]. Embedded coding is based on a threshold: coefficients whose values are greater than or equal to the threshold are called significant.
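To illustrate the significance test that embedded coders build on (a sketch only, with an invented coefficient array; not the coder from the thesis below), the threshold starts at the largest power of two not exceeding the largest coefficient magnitude and is halved on each pass:

    import numpy as np

    def significance_passes(coeffs: np.ndarray, n_passes: int = 4):
        """Report which coefficients become significant at each halved threshold."""
        mags = np.abs(coeffs).ravel()
        T = 2 ** int(np.floor(np.log2(mags.max())))     # initial threshold (assumes a nonzero input)
        found = np.zeros(mags.size, dtype=bool)
        for _ in range(n_passes):
            significant = (mags >= T) & ~found          # |c| >= T and not yet reported
            print(f"T={T}: significant indices {np.flatnonzero(significant).tolist()}")
            found |= significant
            T //= 2                                     # halve the threshold for the next pass
            if T == 0:
                break

    significance_passes(np.array([63, -34, 9, -1, 4, 18, 2, 0]))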
Thesis
Full-text available
In this thesis, four methods are proposed to enhance the performance of the 5G mobile system. The first proposal is a Direction of Arrival (DOA) estimator for 5G mobile systems. The proposed Separated Steering Matrix (SSM) DOA algorithm is based on separating the covariance matrix into two parts to minimize the computational time required for the Eigenvalue Decomposition (EVD) process. Since large numbers of antennas are employed in the 5G mobile system, called Massive Multiple-Input Multiple-Output (MMIMO), the computation time also increases dramatically. The proposed SSM was tested against other methods and showed that the required number of processes when using 20×20 antenna elements was decreased by a factor of 2×10^3 over traditional Multiple Signal Classification (MUSIC) and by 10^2 over the enhanced propagator method (PM). The SSM method was able to distinguish between two targets as long as the separation between them is a fraction of the beamwidth. The second proposal is an adaptive Minimum Variance Distortionless Response-Least Mean Square beamformer (MVDR-LMS BF), where the first BF is MVDR and the second is LMS. The proposed adaptive MVDR-LMS BF can form thinner beams in 3% less time than LMS. The beam power is reduced to 10.2% of the MVDR power level, with side lobe power below 50 dB. The third proposal is Computer Generated-Sparse Code Multiple Access (CG-SCMA), a codebook for Non-Orthogonal Multiple Access (NOMA) based on SCMA. This codebook is generated using a computer program to specify the most appropriate values of the 16-point star Quadrature Amplitude Modulation (QAM) constellation, then using Trellis Coded Modulation (TCM) to divide the star-QAM constellation into four sub-constellations to increase the minimum Euclidean Distance (MinED). The new codebook reaches MinED values of {3.46, 2.16, 2.16, 3.46} for the four sub-constellations and achieves an increment of 10.1% in the main constellation and 7.5% in the sub-constellations over the SCMA codebook based on 16-point star-QAM. The multiplexer and de-multiplexer using both the proposed and traditional SCMA codebooks were implemented on a netFPGA-1G-CML Kintex-7 and were able to achieve a 5×10^-5 bit error rate (BER) at an SNR of 10 dB. The final proposal is an enhancement of the Joint Photographic Experts Group 2000 (JPEG2000) standard for image compression compatible with 5G. This algorithm uses only the low-resolution quarter of the wavelet transform with hybrid Listless Modified Set Partitioning in Hierarchical Trees (LM-SPIHT) based coding and an additional level of run-length encoding to increase image compression. This proposal was implemented on a netFPGA-1G-CML Kintex-7 and was able to produce a PSNR of 51.45 dB, an increment of 19.65% over methods used in the literature. The hardware implementation was able to speed up processing by 15.3% over previous standards.
... The use of joint source-channel coding enabled multiple three-dimensional ROIs to achieve higher transmission priorities in the context of wireless transmission. Elhannachi et al. [17] proposed an embedded image encoder based on an efficient reversible discrete cosine transform (RDCT). The proposed rearrangement structure was well coupled with a lossless embedded zerotree wavelet encoder (LEZW). ...
Article
Full-text available
Magnetic resonance imaging (MRI), which assists doctors in determining clinical staging and the expected surgical range, has high medical value. A large number of MRI images require a large amount of storage space and transmission bandwidth in the PACS system for offline storage and remote diagnosis. Therefore, high-quality compression of MRI images is a worthwhile research topic. Current compression methods for MRI images with a high compression ratio cause loss of information on lesions, leading to misdiagnosis, while compression methods with a low compression ratio do not achieve the desired effect. Therefore, a fast fractal-based compression algorithm for MRI images is proposed in this paper. First, three-dimensional (3D) MRI images are converted into a two-dimensional (2D) image sequence, which facilitates fractal compression of the image sequence. Then, range and domain blocks are classified according to the inherent spatiotemporal similarity of 3D objects. By using self-similarity, the number of blocks in the matching pool is reduced to improve the matching speed of the proposed method. Finally, a residual compensation mechanism is introduced to achieve compression of MRI images with high decompression quality. Experimental results show that the compression speed is improved by 2-3 times, and the PSNR is improved by nearly 10 dB. This indicates that the proposed algorithm is effective and resolves the contradiction between high compression ratio and high quality for MRI medical images.