
Performance of Levenberg-Marquardt Backpropagation for Full Reference Hybrid Image Quality Metrics

Authors: Kuryati Kipli, Mohd Saufee Muhammad, Sh. Masniah Wan Masra, Nurdiani Zamhari, Kasumawati Lias, Dayang Azra Awang Mat

Abstract

Image quality analysis studies the quality of images and develops methods to determine image quality efficiently and swiftly. It is an important process, especially in this digital age in which transmission, compression and conversion of images are unavoidable. This paper therefore proposes a hybrid method for determining image quality using a Levenberg-Marquardt Back-Propagation Neural Network (LMBNN). Three well-known quality metrics were combined as the input element to the network. A proper set of network properties was chosen to represent this element, and the network was trained using the Levenberg-Marquardt algorithm (trainlm) in MATLAB. The preliminary simulation produced promising results, indicated by good performance metrics and good regression fitting.
Index Terms: Image Quality Metrics, Levenberg-Marquardt, Neural Network, Hybrid
I. INTRODUCTION
Image quality analysis is the science of analyzing and comparing the characteristics and features of an image against the original image or against predetermined standards. Image quality measures should be employed to determine the usability of images after they have undergone any kind of manipulation, for example compression, transmission or conversion. Studying the various approaches to image quality analysis therefore provides information on which assessment methods can be employed efficiently under a given set of circumstances.
Kuryati Kipli is with Department of Electronic Engineering, Faculty of
Engineering, Universiti Malaysia Sarawak, 94300 Kota Samarahan,
Sarawak, Malaysia (+6082-583354; fax:+6082-583410; email:
kkuryati@feng.unimas.my).
Mohd Saufee Muhammad, Sh. Masniah Wan Masra, Nurdiani Zamhari,
Kasumawati Lias and Dayang Azra Awang Mat are with Department of
Electronic Engineering, Faculty of Engineering, Universiti Malaysia
Sarawak, 94300 Kota Samarahan, Sarawak, Malaysia.
(email: msaufee@feng.unimas.my, wmmasnia@feng.unimas.my,
znurdiani@feng.unimas.my, lkasumawati@feng.unimas.my,
dmazra@feng.unimas.my).
Methods of image quality assessment can be classified into subjective and objective methods. The subjective method requires human discretion to decide the level of an image's quality [1-2]. This method is subject to bias in the form of the testers' tastes and preferences; nevertheless, the result of subjective analysis is often trusted, as it is only natural for people to judge with their own eyes. The demerit of subjective assessment is that it is time- and labour-consuming. The objective method is unbiased and automated, and therefore provides a result that is faithful to all assigned parameters [1-2]. The demerit of objective assessment is that it may not be reliable.
Most digital image analysis processes try to simulate the human visual cortex, as the human eye remains a superior judge of image quality. For example, if a computer judges an image to be of good quality but a human judges it to be of bad quality, the image will most likely be scrapped. The computer's reliability and accuracy will therefore be considered low if there is poor correlation between its results and the human eye's judgment.
Depending on the availability of reference images, there are three categories of objective image quality metrics (IQMs): full-reference (FR), reduced-reference (RR) and no-reference (NR) [3-5]. These IQMs are developed based on color appearance, blur assessment, wavelets, pixel comparison, hue saturation and many other features. Among the available image quality metrics, the most widely known are the Peak Signal-to-Noise Ratio (PSNR), the Mean Squared Error (MSE) and the Structural Similarity (SSIM) index [6-7].
The objective of this paper is to investigate the potential of combining multiple metrics with an artificial neural network (ANN) in order to achieve an image quality score similar to the human visual measure. Another objective is to evaluate the performance of LMBP as a hybrid IQM. To achieve these objectives, a number of objective assessments were conducted and compared to a corresponding subjective assessment. The measurement data were then combined and used as the input vectors to a Levenberg-Marquardt Back-propagation network.
This paper aims to combine the PSNR, MSE and SSIM metrics under the assumption that each metric can compensate for the weaknesses of the others. Each of these metrics has its own weakness in detecting certain types of image degradation, and other limitations of the existing metrics include consistency, accuracy and computational cost. By combining these metric values and training the combination with an LMBP ANN, it is hoped that a new image quality metric which is more intelligent and accurate can be obtained.
II. METHODOLOGY
Four assessments were carried out, consisting of three objective assessments and one subjective assessment. The subjective assessment was conducted using a sample set of 34 images and 80 participants. The sample set consists of a reference image and 33 digitally altered images produced using four categories of operations, namely morphological operations, noise addition, format conversion and filtering. The results of this subjective assessment were tabulated as Mean Opinion Score (MOS) values ranging from 1 to 5, with 5 being the highest quality. These values were further used as the target for the network.
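As an illustration only (the paper does not describe how the raw ratings were stored), the MOS targets could be formed in MATLAB from a matrix of participant ratings as follows:

% Illustrative sketch: MOS per altered image as the mean of the participants' ratings.
% ratings is assumed to be an 80-by-33 matrix (one row per participant,
% one column per altered image), with scores from 1 to 5.
mos = mean(ratings, 1);   % 1-by-33 vector of Mean Opinion Scores used as the network target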
Three objective assessments were conducted with the aim of determining the Mean Squared Error (MSE), the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) between the reference image and the sample images. The results of all four image quality assessments were then tabulated and analyzed to determine the correlation characteristics, using the regression coefficient R, between each objective assessment and the subjective assessment.
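As an illustration of how the three objective scores for one reference/distorted pair might be computed in MATLAB (a minimal sketch; the file names are placeholders and an SSIM implementation such as the one published with [7] is assumed to be on the path):

% Illustrative sketch: MSE, PSNR and SSIM for one distorted image against the reference.
ref  = im2double(imread('reference.png'));     % images assumed grayscale;
test = im2double(imread('distorted_01.png'));  % convert with rgb2gray first if RGB

err     = ref - test;
mseVal  = mean(err(:).^2);                     % Mean Squared Error
psnrVal = 10 * log10(1 / mseVal);              % PSNR in dB for intensities scaled to [0, 1]
ssimVal = ssim(test, ref);                     % Structural Similarity index (assumed implementation)

fprintf('MSE = %.4f  PSNR = %.2f dB  SSIM = %.4f\n', mseVal, psnrVal, ssimVal);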
The measured PSNR, MSE and SSIM data were used as the inputs of the network and the MOS as its target. The experiment was set up using the MATLAB Neural Network Toolbox, and the network was trained with the Levenberg-Marquardt backpropagation algorithm (trainlm). This algorithm was chosen for its good characteristics in solving fitting problems, in which the neural network must map well between a data set of numeric inputs and a set of numeric targets.
The network used is a two-layer feed-forward network, as illustrated in Fig. 1. A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons (newfit) can fit multi-dimensional mapping problems arbitrarily well, given consistent data and enough neurons in its hidden layer. After a few experimental runs, the number of neurons in the hidden layer was set to 20.
Fig. 1 Two-layer feed-forward network
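A minimal sketch of this setup in the Neural Network Toolbox is given below; the variable names and the 3-by-N/1-by-N data layout are assumptions for illustration, while newfit, trainlm and the 20 hidden neurons follow the description above.

% Illustrative sketch: two-layer feed-forward fitting network trained with trainlm.
% inputs  : 3-by-N matrix, rows = PSNR, MSE and SSIM for the N sample images
% targets : 1-by-N vector of the corresponding MOS values (1 to 5)
net = newfit(inputs, targets, 20);        % 20 sigmoid hidden neurons, linear output
net.trainFcn = 'trainlm';                 % Levenberg-Marquardt backpropagation
[net, tr] = train(net, inputs, targets);  % tr holds the training record (data division, errors)
outputs = sim(net, inputs);               % predicted MOS scores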
A. Levenberg-Marquardt BP
The application of Levenberg-Marquardt to neural
network training is described in [8-9]. This algorithm has
been shown to be the fastest method for training moderate-
sized feed-forward neural networks (up to several hundred
weights). It also has an efficient implementation in
MATLAB software, since the solution of the matrix
equation is a built-in function, so its attributes become even
more pronounced in a MATLAB environment [10].
The training function trainlm can train any network as long as its weight, net input, and transfer functions have derivative functions. Backpropagation is used to calculate the Jacobian jX of the performance with respect to the weight and bias variables X. Each variable is then adjusted according to the Levenberg-Marquardt update:
jj = jX * jX
je = jX * E
dX = -(jj + I*mu) \ je                    (1)
where E is the vector of all errors and I is the identity matrix. The adaptive value mu is increased until the change above results in a reduced performance value [10].
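Written out as MATLAB statements, one update step of (1) looks roughly as follows; this is an illustrative sketch of the standard Levenberg-Marquardt step with the transposes made explicit, not the Toolbox source code.

% Illustrative sketch of one Levenberg-Marquardt update as in (1).
% jX : Jacobian of the error vector E with respect to the weight/bias vector X
% mu : adaptive damping value
jj = jX' * jX;                           % Gauss-Newton approximation of the Hessian
je = jX' * E;                            % gradient of the squared-error performance
dX = -(jj + mu * eye(size(jj, 1))) \ je; % weight/bias change
X  = X + dX;                             % the change is kept only if performance improves;
                                         % otherwise mu is increased and the step is recomputed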
III. RESULTS AND DISCUSSIONS
For the purpose of analysis, the regression of the existing IQMs with MOS is discussed first. Detailed correlation of these IQMs based on the categories of degradation is also shown. Results of training and testing the hybrid metric using LMBP are then presented.
A. Existing IQMs (PSNR, MSE and SSIM)
Results of the objective measurements for PSNR, SSIM and MSE are shown in Table I and Table II. Table I shows the overall correlation of each objective measure with the subjective measure, while Table II shows the correlation between the three objective measurements and MOS for specific categories. From these results, the three metrics show good correlation with MOS when measured on a per-category basis. However, the correlation drops when an overall score is considered.
TABLE I
CORRELATION COEFFICIENT

Metrics        SSIM    PSNR    MSE
Regression, R  0.5180  0.5007  0.5735
TABLE II
CORRELATION COEFFICIENT BASED ON CATEGORIES

Category of Images       SSIM     PSNR     MSE
Morphologically Altered  0.7144  -0.1360   0.7326
Noise-Added              0.9121  -0.8290   0.3851
Format Converted         0.0000   0.9000  -0.9449
Filtered                 0.6844  -0.0459   0.7901
Table II indicates that the SSIM algorithm achieved a
satisfactory degree of correlation to the subjective
assessment for every category except the format converted
images. The MSE algorithm had satisfactory correlations
for two out of the four categories, and the PSNR performed the worst, with negative correlations for three out of the four categories. The results reveal that even though the objective assessments could not achieve a satisfactory level of correlation to the subjective assessment overall, each of the assessments did perform well in analyzing the image quality for certain categories.
The following results show that by combining the three unique metrics, an intelligent hybrid metric is obtained. This is also supported by the discussion of Table II, which shows that each metric has its own merits over the others depending on the category.
B. Network Predictive Ability and General Performance
The Mean Squared Error (MSE) is the performance metric adopted to determine the network performance, while the regression coefficient R is used to measure the correlation between outputs and targets. The fitting curve between the targets/outputs and the inputs is shown in Fig. 2, and the MSE and regression values are tabulated in Table III. MSE is the average squared difference between outputs and targets; lower values are better, with zero indicating no error. An R value of 1 indicates a close relationship, while a value of 0 indicates a random relationship.
Fig. 2 The fitting curve between output/target and input: (a) training only; (b) training and testing
TABLE III
THE MSE AND REGRESSION OF THE NETWORK

Model          Training Set  Test Set  Whole Set
MSE            0.2435        0.3995    0.2634
Regression, R  0.9236        0.854     0.8907
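The entries in Table III can be reproduced from a trained network using basic MATLAB operations, roughly as sketched below; tr is the training record returned by train in the earlier sketch, and the variable names are the same assumptions used there.

% Illustrative sketch: MSE and regression R between network outputs and MOS targets.
outputs  = sim(net, inputs);                                         % predicted MOS for all samples
mseAll   = mean((targets - outputs).^2);                             % MSE over the whole set
trainErr = mean((targets(tr.trainInd) - outputs(tr.trainInd)).^2);   % training-set MSE
testErr  = mean((targets(tr.testInd)  - outputs(tr.testInd)).^2);    % test-set MSE
c        = corrcoef(targets, outputs);                               % 2-by-2 correlation matrix
rAll     = c(1, 2);                                                  % regression coefficient R, whole set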
The results in Table III and Fig. 2 show that the Levenberg-Marquardt backpropagation algorithm has a good ability to predict the MOS, with high regression correlation for both training and testing. The training error is 0.244 and the testing error is 0.399, indicative of about 70% accuracy. To compare the new hybrid LMBP metric with the individual objective scores, Table IV shows the correlation of PSNR, SSIM, MSE and the new hybrid LMBP metric with MOS as a percentage (data extracted from Table I and Table III).
TABLE IV
COMPARISON OF REGRESSION (%) FOR THE FOUR IQMs

Metrics  Regression, R
SSIM     51.8%
PSNR     50.1%
MSE      57.4%
LMBP     89.1%
The correlation with MOS is below 60% for the existing metrics, whereas the new LMBP metric is more than 80% correlated with MOS. The performance of the new metric therefore shows a promising result. Further investigation with a larger sample and proper selection of network properties is expected to improve the predictive ability and general performance of the ANN.
IV. CONCLUSIONS
In this study, a new method for constructing image quality metrics has been proposed. This paper is an attempt to investigate the potential of combining existing metrics with an ANN to predict the quality of images. It was shown that the Levenberg-Marquardt backpropagation algorithm has a good ability to predict the MOS, with high correlation for training and testing and with training and testing errors of 0.244 and 0.399 respectively. The regression coefficient R showed that the hybrid metric is highly correlated with the mean opinion score (MOS) compared to the individual metrics (PSNR, MSE or SSIM).
In future work, more distortion measures and feature domains will be used for the image samples. The relationship between the metrics adopted for the combination will also be further investigated to find the best combination among them. More experiments are needed to validate the properties of the network, such as its optimum number of hidden-layer neurons, the validation scheme, etc. A performance comparison of LMBP with other networks should also be carried out.
ACKNOWLEDGMENT
This research work is supported by the Universiti
Malaysia Sarawak (UNIMAS), under grant scheme
NF (F02)/144/2010(47). The research work was carried out
at Department of Electronic Engineering, Faculty of
Engineering, UNIMAS.
REFERENCES
[1] Wang, Z., Sheikh, H.R., & Bovik, A.C. (2003). Objective Video Quality Assessment. In Handbook of Video Databases: Design and Applications (Chapter 41, pp. 1041-1078). CRC Press.
[2] Saffor, A., Ramli, A., Ng, K., & Dowsett, D. (2002). Objective and Subjective Evaluation of Compressed Computed Tomography (CT) Images. Internet Journal of Medical Simulation, Vol. 1(1).
[3] Eskicioglu, A.M., & Fisher, P.S. (1995). Image Quality Measures and Their Performance. IEEE Transactions on Communications, Vol. 43(12).
[4] Ahmet, M., & Paul, S. (1995). Image Quality Measures and Their Performance. IEEE Transactions on Communications, Vol. 43, pp. 2959-2965.
[5] Choi, M.G., Jung, J.H., & Jeon, J.W. (2009). No-Reference Image Quality Assessment Using Blur and Noise. International Journal of Computer Science and Engineering, Vol. 3(2), pp. 76-80.
[6] Cui, L., & Allen, A.R. (2008). An Image Quality Metric Based on a Colour Appearance Model. Proceedings of the 10th International Conference on Advanced Concepts for Intelligent Vision Systems, France, pp. 696-707.
[7] Wang, Z., Bovik, A.C., Sheikh, H.R., & Simoncelli, E.P. (2004). Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, Vol. 13(4), pp. 600-612.
[8] Hagan, M.T., & Menhaj, M. (1994). Training Feedforward Networks with the Marquardt Algorithm. IEEE Transactions on Neural Networks, Vol. 5(6), pp. 989-993.
[9] Kermani, B.G., Schiffman, S.S., & Nagle, H.G. (2005). Performance of the Levenberg-Marquardt Neural Network Training Method in Electronic Nose Applications. Sensors and Actuators B: Chemical, Vol. 110(1), pp. 13-22.
[10] Charrier, C., Lebrun, G., & Lezoray, O. (2007). Selection of Features by a Machine Learning Expert to Design a Color Image Quality Metrics. 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), Scottsdale, Arizona, pp. 113-119.
[11] Zampolo, R., & Seara, R. (2003). A Measure for Perceptual Image Quality Assessment. Proceedings of the IEEE International Conference on Image Processing.
[12] Cadik, M., & Slavik, P. (2004). Evaluation of Two Principal Approaches to Objective Image Quality Assessment. 8th International Conference on Information Visualisation, IEEE Computer Society Press, pp. 513-518.
[13] Demuth, H., & Beale, M. MATLAB Neural Network Toolbox Help, Version R2008a, The MathWorks, Inc.