Figure - uploaded by Ali Kamil Ahmed
Source publication
Image denoising and enhancement are essential processes in many underwater applications. Various scientific fields, including marine science and territorial defence, require underwater exploration. Underwater, the noise power spectral density is inconsistent within a certain frequency range, and the noise autocorrelation function is...
Citations
... The images thus obtained are still hazy and foggy, so they applied a contrast enhancement technique in which the intensities of the dark and light zones are altered to adjust the contrast. Abdulwahed and Ahmed [11] used a pre-whitening filter and the discrete wavelet transform for noise reduction in underwater images. Their method first applies the pre-whitening filter to the noisy image to convert the coloured noise into white noise. ...
Underwater environments present significant challenges for object detection due to limited visibility and inconsistent lighting. This research aims to develop a computational model to improve underwater image quality, leading to more accurate detection of aquatic organisms, specifically fish. To achieve this, we investigate the efficacy of the YOLOv8m model, a state-of-the-art deep learning architecture, for underwater object detection. The model’s performance is evaluated on a comprehensive dataset focused on fish detection. Additionally, we compare YOLOv8m’s performance against established models like Faster-RCNN and Single Shot MultiBox Detector (SSD). The results of this study demonstrate exceptional performance by the YOLOv8m model, achieving a noteworthy F1 score of 64.31%. This score suggests superior efficiency and effectiveness in underwater object detection compared to the alternative models. These findings reaffirm the potential of the proposed model for underwater object detection within aquatic environments. The impressive results highlight the model’s potential to enhance subaquatic monitoring and contribute valuable data for marine research and applications.
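The F1 score reported in the abstract above is the harmonic mean of precision and recall. A minimal sketch of how it is computed from detection counts (the counts below are hypothetical, purely for illustration, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for a detector.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 8 correct detections, 2 spurious, 2 missed.
score = f1_score(8, 2, 2)
```

A detector can trade precision against recall by adjusting its confidence threshold; F1 summarizes that trade-off in a single number.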
... The power spectral density (PSD) of white Gaussian noise has a constant value over the full frequency range, with magnitude N0/2 at every frequency. At any given time instant, its probability density function pdf( ) is given as (3) [20]: ...
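The excerpt elides the referenced equation (3) itself. For reference, the textbook probability density function of zero-mean Gaussian noise with variance \(\sigma^2\) has this well-known form (this is the standard expression, not necessarily the cited paper's exact equation):

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)
```

For white Gaussian noise with two-sided PSD \(N_0/2\) passed through an ideal filter of bandwidth \(B\), the variance is \(\sigma^2 = N_0 B\).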
The principal goal of any communication scheme is to provide error-free data transmission. Channel coding is advantageous because it increases the rate at which data can be transmitted through a channel while maintaining a given error rate. The convolutional (channel) encoder adds redundant bits to the message bits to be transmitted. At the receiver end of the channel, a Viterbi decoder is used to extract the original message sequence from the received data. Convolutional encoding and Viterbi decoding are widely used error-correction approaches in communication systems for improving bit error rate (BER) performance. This study examines the convolutional encoder and Viterbi decoder for constraint lengths of 2 and 6 and code rates of 1/2 and 1/3 in the presence of (1/f) noise. The simulation outcomes reflect the performance of the convolutional encoding / hard-decision Viterbi decoding forward error correction (FEC) method. The findings demonstrate that the BER as a function of signal-to-noise ratio (SNR) obtained for uncoded binary phase shift keying (BPSK) in the presence of additive white Gaussian noise (AWGN) is inferior to that obtained with a hard-decision Viterbi decoder.
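The uncoded BPSK-over-AWGN curve that the study uses as its baseline has a standard closed form, Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)). A small sketch of that theoretical baseline (this is the textbook formula, not the paper's coded-system simulation):

```python
import math

def ber_bpsk_awgn(ebn0_db):
    """Theoretical bit error rate of uncoded BPSK over AWGN.

    Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0)),
    with Eb/N0 given in dB.
    """
    ebn0 = 10.0 ** (ebn0_db / 10.0)  # convert dB to linear scale
    return 0.5 * math.erfc(math.sqrt(ebn0))

# BER falls steeply as SNR rises; e.g. at 0 dB it is about 7.9e-2.
curve = [ber_bpsk_awgn(db) for db in range(0, 11)]
```

A coded system with hard-decision Viterbi decoding shifts this curve left (a coding gain), which is the improvement the abstract describes.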
... In digital image processing, images are sometimes corrupted by different kinds of noise, which reduces image quality; whether the noise can be filtered out efficiently affects subsequent processing such as image decryption, edge detection, object segmentation, and feature extraction [21,22]. The next phases describe the process of image denoising. ...
Information security is one of the important issues of the information age, used to preserve secret information during transmission in practical applications. Many information-security schemes have been applied to image encryption. Such approaches can be categorized into two domains: the frequency domain and the spatial domain. The presented work develops an encryption technique based on a conventional watermarking system using singular value decomposition (SVD), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness in comparison to other conventional approaches, and is further enhanced for robustness against Gaussian noise attacks by using a DWT-based denoising approach. The mean square error (MSE) and the peak signal-to-noise ratio (PSNR) are the performance measures on which this study's results are based; they show that the algorithm used in this study has high robustness against Gaussian noise attacks.
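The MSE and PSNR measures named in the abstract have simple standard definitions. A minimal sketch over flat pixel sequences (assuming 8-bit images, so a peak value of 255; this illustrates the metrics only, not the watermarking scheme):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(a, b)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / e)
```

In robustness tests like the ones described, PSNR is computed between the original and the attacked (noisy) watermarked image; a method is judged robust if PSNR stays high after the attack.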
... These images can be acquired in two ways: offline acquisition and live-scan [3][4]. In the second step, pre-processing improves the overall quality of the captured image. Due to the presence of large noisy areas in the image [5][6], this process is often difficult to realize. After that, segmentation is applied. It is the process of separating the image into two regions: the region of the fingerprint image containing all the important data required for recognition, called the foreground region, and the blurred or noisy areas, called the background region. In the next step, feature points such as ridge endings and bifurcations, commonly called minutiae, are extracted from the pre-processed fingerprint image. ...
The fingerprint identification system is nowadays the most exploited biometric sector. Segmentation of the fingerprint image is one of its first processing stages. This stage typically affects the feature extraction and matching process, and thus the accuracy of the fingerprint recognition system. Three important steps are proposed in this paper. First, Sobel and TopHat filtering are used to improve the quality of the fingerprint images. Then, K-means clustering of a 5-dimensional feature vector (variance, mean difference, gradient coherence, ridge direction, and energy spectrum) accurately separates the foreground and background regions for each local block in a fingerprint image. Local variance thresholding is also used in our approach to reduce the computing time of segmentation. Finally, a DBSCAN clustering stage is combined with our system to overcome the disadvantages of K-means classification in fingerprint image segmentation. The proposed algorithm is tested on four different databases. Experimental results show that our approach separates the ridge and non-ridge regions significantly more effectively than some recently published techniques.
... Because of the great requirements of contemporary computer vision and image processing [1][2][3][4][5] for more than one and a half decades, images with higher spatial resolution are in strong demand; however, most recorded images have low spatial resolution, being captured by commercial built-in CCTV or cameras. Therefore, digital image interpolation is the conceptual idea of creating an image with higher spatial resolution from one with lower spatial resolution. ...
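Creating a higher-resolution image from a lower-resolution one means sampling the source grid at fractional coordinates. A minimal sketch of bilinear interpolation, one common choice for this (a simplified illustration over a plain list-of-rows grid, not the cited paper's specific method):

```python
def bilinear(img, y, x):
    """Sample a 2-D grid (list of rows) at fractional coordinates
    by bilinearly blending the four surrounding pixels."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)      # clamp at the border
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0             # fractional offsets
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

# Sampling midway between four pixels returns their average.
grid = [[0, 10], [20, 30]]
center = bilinear(grid, 0.5, 0.5)  # 15.0
```

Upscaling by a factor s then just evaluates bilinear(img, i/s, j/s) for every output pixel (i, j).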
... In this step, pre-processing improves the overall quality of the captured image. However, this process is frequently difficult to realize because of the presence of large noisy areas in the image [5][6]. After that, segmentation is applied. ...
Nowadays, the fingerprint identification system is the most exploited sector of biometrics. Fingerprint image segmentation is one of its first processing stages. This stage typically affects feature extraction and matching, which lead to a high-accuracy fingerprint recognition system. Three major steps are proposed in this paper. First, Sobel and TopHat filtering are used to improve the quality of the fingerprint images. Then, for each local block in the fingerprint image, an accurate separation of the foreground and background regions is obtained by K-means clustering of a 5-dimensional feature vector (variance, difference of means, gradient coherence, ridge direction and energy spectrum). Additionally, local variance thresholding is used in our approach to reduce the computing time of segmentation. Finally, a DBSCAN clustering algorithm is combined with our system to overcome the drawbacks of K-means classification in fingerprint image segmentation. The proposed algorithm is tested on four different databases. Experimental results demonstrate that our approach is significantly more effective than some recently published techniques in separating the ridge and non-ridge regions.
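The local variance thresholding step mentioned above exploits the fact that ridge (foreground) blocks vary much more in intensity than flat background blocks. A minimal sketch of that idea (a simplified illustration with a hand-picked threshold, not the paper's full 5-feature K-means/DBSCAN pipeline):

```python
def block_variance(block):
    """Variance of pixel intensities within one local block."""
    vals = [p for row in block for p in row]
    mean = sum(vals) / len(vals)
    return sum((p - mean) ** 2 for p in vals) / len(vals)

def segment_blocks(blocks, threshold):
    """Label each block foreground (True) if its variance exceeds
    the threshold: ridge areas vary more than flat background."""
    return [block_variance(b) > threshold for b in blocks]

# A flat block is background; a high-contrast ridge block is foreground.
flat = [[5, 5], [5, 5]]
ridged = [[0, 255], [255, 0]]
labels = segment_blocks([flat, ridged], threshold=100)
```

Because variance is cheap to compute, this pre-filter lets the more expensive clustering run only on ambiguous blocks, which is how it reduces segmentation time.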
Exploration of underwater resources plays a vital role in national development. Underwater surveillance systems play a crucial role in security applications, requiring accurate detection of suspicious objects in underwater images. However, noise, poor visibility, and uneven lighting in underwater environments pose significant challenges for reliable object detection. This work proposes an integrated approach for underwater image de-noising, pre-processing, enhancement, and subsequent suspicious object detection by combining DnCNN (Deep Convolutional Neural Network), CLAHE (Contrast Limited Adaptive Histogram Equalization), and additional image enhancement techniques. In addition to de-noising and pre-processing, it incorporates various image enhancement techniques to further improve object detection performance. These techniques include color correction, contrast adjustment, and edge enhancement, aiming to enhance the visual characteristics and saliency of suspicious objects in underwater images. To evaluate the effectiveness of the proposed approach, this work conducted extensive experiments on an underwater image dataset containing diverse scenes and suspicious objects. The work compares the proposed method with existing de-noising, pre-processing, and object detection techniques, analyzing the results using quantitative performance metrics including precision, recall, and F1 score. The experimental results demonstrate that the proposed integrated approach outperforms the individual methods and achieves superior detection performance by enhancing the quality of underwater images and improving the visibility of suspicious objects.
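The CLAHE step named above builds on ordinary histogram equalization, which remaps intensities through the normalized cumulative histogram so they spread across the full range. A minimal sketch of the global (non-adaptive, non-clipped) version, for intuition only; CLAHE applies the same mapping per tile with a clip limit:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization of a flat 8-bit pixel list.

    Each level is remapped through the normalized cumulative
    distribution function (CDF) of the intensity histogram.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Map each pixel's level through the normalized CDF.
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

# Two mid-range levels are stretched toward the full 0..255 range.
stretched = equalize([100, 100, 200, 200])
```

CLAHE's tile-local histograms and clip limit avoid the over-amplification of noise that this global version can produce, which is why it suits low-contrast underwater images.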
In aquaculture, fish behaviour monitoring and analysis can provide the information required to guide daily feeding, schedule making and disease diagnosis. Technologies such as machine vision, bio-loggers and acoustic systems are essential for analysing fish behaviour. This paper focuses on tools and algorithms for fish behaviour quantification analysis. The goal is to present their basic concepts and principles, including the quantification analysis procedure and its potential application scenarios. This review shows that the most common behaviour quantification indexes can be categorised into three classes: swimming indexes, physical indexes and context indexes. Typically, swimming indexes are of the most interest to researchers. However, achieving comprehensive information and quantification precision remains challenging in fish behaviour analysis. In brief, this paper aims to help researchers and practitioners better understand the current state-of-the-art in behavioural quantification analysis, which provides strong support for the implementation of intelligent breeding.