Interval-valued intuitionistic fuzzy generator based low-light enhancement model for referenced image datasets

Chithra Selvam¹ · Dhanasekar Sundaram¹

Artificial Intelligence Review (2025) 58:141
https://doi.org/10.1007/s10462-025-11138-5

Accepted: 6 February 2025
© The Author(s) 2025

Chithra Selvam
chithramathz@gmail.com
Dhanasekar Sundaram
dhanasekar.sundaram@vit.ac.in

¹ Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology, Chennai, Tamil Nadu 600127, India
Abstract
Image processing is a rapidly evolving research field with diverse applications across science and technology, including biometric systems, surveillance, traffic signal control and medical imaging. Digital images taken in low-light conditions often suffer from poor contrast and degraded pixel detail, leading to uncertainty. Although various fuzzy based techniques have been proposed for low-light image enhancement, there remains a need for a model that can manage greater uncertainty while providing better structural information. To address this, an interval-valued intuitionistic fuzzy generator is proposed to develop an advanced low-light image enhancement model for referenced image datasets. The enhancement process involves a structural similarity index measure (SSIM) based optimization approach with respect to the parameters of the generator. For experimental validation, the Low-Light (LOL), LOLv2-Real and LOLv2-Synthetic benchmark datasets are utilized. The results are compared with several existing techniques using quality metrics such as SSIM, peak signal-to-noise ratio, absolute mean brightness error, mean absolute error, root mean squared error, blind/referenceless image spatial quality evaluator and naturalness image quality evaluator, demonstrating the superiority of the proposed model. Finally, the model's performance is benchmarked against state-of-the-art methods, highlighting its enhanced efficiency.

Keywords Low-light image enhancement; Intuitionistic fuzzy generator; Interval-valued intuitionistic fuzzy image; Structural similarity index measure; Referenced image dataset
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
1 Introduction
Image processing is a significant field that has found a wide range of applications in science and technology. It is an important tool for dealing with various real-world image based problems such as low-light image enhancement (LLIE), fusion, edge detection, de-blurring, morphology, noise removal, haze removal, segmentation, compression and so on. Among these, LLIE plays a key role in multiple fields, including the medical domain, traffic signals, surveillance, forensics, cyber security, autonomous driving and so on. Images captured in low light are frequently affected by a number of difficulties, including low contrast and brightness as well as color distortion. The main goal of LLIE is to improve the visual perception of images taken in low or poor light conditions. Enhancing a low-light image with less noise or distortion aims to improve the brightness, contrast and clarity of the image. For this purpose, various image processing and computer vision tools have been utilized by researchers. For the validation and experimental analysis of the models, various performance metrics such as structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), absolute mean brightness error (AMBE), Shannon's entropy, etc., have been applied.

The LLIE tool has also gained diverse real-world applications in many fields including security systems, biometrics, medical images, sand dust images, underwater images, traffic images, invisible medical images, etc. Recently, various researchers have contributed to these problems. For instance,
(1) Surveillance systems help to handle criminal activities, infrastructure management, etc. Due to outdated surveillance devices, wild images suffer from low light, poor visual perception and noise. Therefore, to improve the visual perception and quality of wild images taken by webcam, Dai et al. (2024) designed a model called ImCam which combines a generative adversarial network (GAN) and a Retinex model.
(2) Because their texture characteristics are difficult to reproduce yet easy to acquire, unique and stable, palmprints have a wide range of real-world applications. Sometimes, palmprint images lose a significant amount of important features due to low-light conditions and long-distance capturing, affecting the accuracy of palmprint recognition. To deal with this problem, Kaijun et al. (2024) initiated a low-contrast palmprint enhancement model using GUNet and the wavelet transform.
(3) In the medical field, MRI, X-ray, CT scan and similar images are important sources for detecting various diseases at earlier stages. Due to shaking during capture, non-uniform illumination, low-light conditions and faults in the imaging device, medical images often lose much information. To deal with this problem, Yadav et al. (2024) developed a LLIE model using harmonic filtering and entropy curve approaches.
(4) Sand dust images often fail to capture detailed pixel information due to excessive dust, uneven illumination, poor light conditions, etc., which greatly affects the visibility of the images. Therefore, Hassan et al. (2024) explored a sand dust image enhancement approach utilizing an adaptive gray world-blue channel method with normalized intensity and saturation correction in the RGB and HSI color spaces, respectively.
(5) Underwater images have poor visibility due to light absorption and experience varying levels of structural and statistical damage. This causes a nonuniform shift in feature representation, further degrading visual performance and complicating vision tasks. To rectify these problems, Zhou et al. (2023) created a multi-interval sub-histogram equalization (HE) model for the enhancement of underwater images.
(6) Due to low contrast in low-light conditions, detecting vehicles and pedestrians in intelligent transportation systems is challenging. Traditional deep learning methods designed for LLIE, which need paired data, are impractical in real traffic scenarios. To enhance low-light traffic images, Zhang et al. (2024) designed a self-supervised network that can be trained without paired images. Additionally, a denoising network was introduced to pre-process input images, mitigating noise and artifacts.
(7) Poor image illumination increases uncertainty in clinical treatments and heightens the risk of misdiagnosis. To address this, Subramani and Veluchamy (2023) proposed a novel bilateral tone-mapped gamma correction technique that enhances the visual quality of medical images without compromising their natural color. Furthermore, Veluchamy and Subramani (2023) introduced an artificial bee colony optimization based weighted gamma correction method to improve the quality of contrast distorted images. This approach enhances perceived contrast by adjusting pixel values through expansion and compression.

Noise removal methods play a crucial role in image enhancement operations. Khmag et al. (2019) developed a cluster-based image denoising model using the wavelet domain. Additionally, Khmag proposed a mixed noise removal model based on the regularized Perona-Malik model (Khmag 2023a) and employed a GAN model for additive Gaussian noise removal (Khmag 2023b). Veluchamy et al. (2021) presented an optimized Bezier curve-based intensity mapping scheme for enhancing low-light image quality. The algorithm dynamically adjusts intensity curves in both dark and bright regions, with control points automatically obtained through an optimal weighting process. Balanced enhancement was achieved using the whale optimization algorithm, which resolves issues of under- and over-enhancement. Singh et al. (2024) introduced a novel LLIE method that combines reflection models and wavelet fusion. The algorithm dynamically adapts to dark images using multiscale techniques.

In reality, digital images often fail to provide a clear visual due to uncertain pixel information. For example, a low-light image does not readily reveal how bright or dark the scene actually is. This uncertain information is very challenging in the image enhancement process. To deal with such problems, fuzzy logic tools help in examining the uncertain information. Fuzzy logic, an extension of Boolean logic, helps to deal with various real-world uncertain problems. Initially, Zadeh (1965) introduced the idea of fuzzy sets associated with fuzzy logic. A fuzzy set helps to examine the belongingness grade of an element in a set, as an extension of a crisp set, where an element either belongs or does not belong to the set. Consequently, Atanassov (1986) defined the notion of the intuitionistic fuzzy set (IFS) to analyze the belongingness and non-belongingness grades of an element in a set. In digital images, the IFS helps to examine the grades of membership and non-membership for pixel intensities. Researchers have frequently applied the concept of IFS in developing image processing models, especially image enhancement techniques, as reviewed in Sect. 2.
2 Literature review
The process of enhancing low-light images with less noise or distortion aims to improve the contrast, brightness and clarity of the image. To handle such problems, various researchers have proposed a variety of image processing and computer vision tools including HE, deep learning, Retinex, transformer and fusion based models.
HE is a model that improves low-contrast images effectively. Recently, Vijayalakshmi and Nath (2022) designed a variational HE model to handle artifacts and over-amplification in uniform background images. As an extension of HE, adaptive histogram equalization (AHE) examines an image as a tile or an integration of nearby regions. Colney et al. (2023) explored an AHE technique to enhance surveillance video for safety and security purposes. Another noteworthy extension of HE and AHE, called contrast limited adaptive histogram equalization (CLAHE), tackles the problem of excessive noise amplification which comes with AHE. CLAHE is a rapid and simple technique which is frequently used for image enhancement tasks. It divides an image with an uneven pixel range into several rectangular sections for processing. Singh and Bhandari (2021) investigated an innovative multi-scale method based on principal component analysis (PCA) and reflection models to improve low-light image contrast utilizing the CLAHE methodology in the HSV color space. Demir and Kaplan (2023) proposed a sharpening-smoothing image filter (SSIF) approach for enhancing low-light images using the CLAHE model. Hu et al. (2024) designed a HE technique for segmenting radial ring images with the goal of processing images with reduced radial contrast enhancement. Consequently, Retinex theory is one of the important image enhancement concepts, and researchers have frequently developed image enhancement techniques based on it. Wang et al. (2019) developed a fusion based approach and a correction function with respect to the Weber-Fechner equation to improve the LLIE process. For better improvement of low-light images, Al-Hashim and Al-Ameen (2020) developed a multiphase Retinex based method. Li et al. (2024) created a Retinex decomposition and multi-scale adjustment based low-light image improvement network that carries out an initial decomposition followed by modification. Chen and Yu (2024) presented an unsupervised reflectance Retinex and noise model.
Deep learning is a rapidly growing research field with many applications, especially in image processing. Recently, many scholars have focused on designing deep learning based image enhancement models. The majority of current LLIE techniques usually necessitate costly training data pairs, which poses considerable practical difficulties. However, unsupervised approaches, which require unpaired data and manually generated prior knowledge, frequently have issues with color distortion, structural blurring and even unpredictable and inefficient enhancement in complicated circumstances. To tackle this problem, Xu et al. (2024) developed a unique unsupervised approach with degraded structure and hue guided auxiliary learning. Qu et al. (2024) designed a double domain guided real-time LLIE network for ultra-high definition transportation surveillance in order to satisfy the requirements on both enhancement quality and computational speed. Park et al. (2023) proposed a novel LLIE network that uses a U-shaped enhanced lightening back-projection to address poor contrast and color distortion difficulties.
Due to uncertainty in pixel information, digital images often lack clear details for human vision. To deal with such uncertainty, fuzzy logic tools have
recently been applied by various scholars. IFS helps to deal with the uncertainty of pixel information by examining membership, non-membership and hesitancy degrees. Initially, with the base of IFS, Yager (1980) proposed a generalized class of fuzzy complements to generate the non-membership grade, and Sugeno (1997) explored another type of fuzzy complement. With these hints, Chaira (2020) proposed a new class of fuzzy complement, known as the intuitionistic fuzzy generator (IFG), and applied it to improve the contrast of low-illumination mammography images. Ghosh and Ghosh (2022) combined the generating functions of Yager and Sugeno with divergence measure based enhancement for mammogram images. Chaira (2021) employed a different IFG and clustering principles to enhance the quality and segmentation of mammography images. In order to improve poor-contrast color images using HE and defuzzification approaches, Jebadass and Balasubramaniam (2022) presented a LLIE model using Yager's generator. Recently, Selvam et al. (2024) designed a novel IFG and applied it with the CLAHE technique to enhance low-light images; it was compared with various existing approaches to showcase its superiority. Also, Chinnappan and Sundaram (2024) proposed a new IFG to enhance low-light video using the HE method. In digital images, non-uniform illumination conditions often exist due to uneven light allocation. To generate images with more clarity by correcting non-uniform illumination, Al-Ameen (2024) designed an adapted type-2 fuzzy algorithm as an effective tool.
Atanassov and Gargov (1989) extended the notion of IFS to the interval-valued intuitionistic fuzzy set (IVIFS) to deal with uncertainty in a more effective manner. Based on this idea, Yager's (Jebadass and Balasubramaniam 2022) and Chaira's (Jebadass and Balasubramaniam 2024) generators were applied to improve low-contrast images utilizing the idea of IVIFS with the CLAHE process. All these LLIE models (Jebadass and Balasubramaniam 2022; Selvam et al. 2024; Jebadass and Balasubramaniam 2022, 2024) have been built on the idea of an entropy measure, which is applied in the optimization process when enhancing low-light images. However, when reviewing the existing studies, the present study identifies several research gaps as follows:
(i) The existing LLIE approaches which apply IFG (Jebadass and Balasubramaniam 2022, 2022, 2024) utilize pre-existing generators.
(ii) The studies (Jebadass and Balasubramaniam 2022; Selvam et al. 2024; Jebadass and Balasubramaniam 2022, 2024) employ either HE or CLAHE for the enhancement of low-light images.
(iii) SSIM is an important performance metric which helps to examine the pixel relationship between the original (ground truth) and enhanced images. However, the existing approaches have not applied SSIM in the optimization process.
(iv) In the literature, the studies (Jebadass and Balasubramaniam 2022, 2024) employ the LLIE technique in the IVIFS context. However, they do not apply a new generator, and they are additionally designed around the CLAHE method.
(v) The existing studies (Jebadass and Balasubramaniam 2022; Selvam et al. 2024; Jebadass and Balasubramaniam 2024) have not examined the SSIM and PSNR values comprehensively. Additionally, they fail to compare these values with those obtained from deep learning approaches.

To address these research gaps, the present study offers several novel contributions as follows:
(i) Selvam et al. (2024) proposed a novel IFG for the LLIE technique and demonstrated its superiority by comparing it with existing approaches. Therefore, the present study utilizes this IFG function to propose a novel IVIFG.
(ii) Using the new IVIFG, a novel LLIE model is designed without employing HE or CLAHE approaches.
(iii) As a novel approach, SSIM is applied for optimization during the LLIE process.
(iv) Performance measures such as PSNR and SSIM are compared with existing deep learning approaches to demonstrate the superiority of the proposed technique.

From these points, it is observed that the novel contributions reveal the uniqueness and effectiveness of this research. The present study is initiated with the following key objectives:
● To propose a novel IVIFG based on the existing IFG defined by Selvam et al. (2024).
● To design a new LLIE technique using the proposed IVIFG.
● To employ the SSIM metric in optimizing the enhanced images.
● To propose the LLIE model without additional approaches like HE and CLAHE.
● To compare the SSIM and PSNR metrics of the proposed model's results with existing state-of-the-art (SOTA) approaches.

The remainder of the article is structured as follows: The fundamental concepts are provided in Sect. 3. The construction of IVIFS is discussed in Sect. 4. The proposed methodology is provided in Sect. 5. The experimental results and comparative study are discussed in Sect. 6. The implications and limitations of the model are discussed in Sect. 7. Finally, the conclusion and future scope are given in Sect. 8.
3 Fundamental concepts
This part discusses the fundamental ideas that support the present work and help to enhance
the understanding of the new methodology.
3.1 Intuitionistic fuzzy set
A fuzzy set
˜
A
of a universal set X is dened as
˜
A={(
x, µ
˜
A(
x
))|
x
∈
X
}
, where
µ˜
A(x)
is
called a membership function which maps from X to [0, 1].
Currently, the idea of fuzzy sets is frequently used in the fields of computer vision and image processing, where membership grades indicate pixel intensity. In actuality, digital images frequently convey much information with less accuracy, and image processing techniques are quite challenging because of the ambiguity present in images. In this scenario, the concept of IFS has been applied to address these types of problems.
An intuitionistic fuzzy set $F$ in a universe $X$ is a collection $F = \{(y, \mu_F(y), \nu_F(y)) \mid y \in X\}$, where $\mu_F(y): X \to [0,1]$ and $\nu_F(y): X \to [0,1]$ are called the membership and non-membership functions of $y$ in $F$, respectively, such that $\mu_F(y) + \nu_F(y) \le 1$.
The complement of the sum of the membership and non-membership grades is defined as the hesitancy grade of the element in the IFS; it is a mapping $\pi_F(y): X \to [0,1]$ such that $\mu_F(y) + \nu_F(y) + \pi_F(y) = 1$.
3.2 Interval-valued intuitionistic fuzzy set
When the degree of uncertainty in real-world problems increases, the view of an interval-valued fuzzy set helps to handle such problems. Similarly, the idea of IVIFS is an efficient tool when an IFS is not sufficient to deal with the problem, and the IVIFS plays an important role in addressing uncertainty in an efficient manner.
Atanassov and Gargov (1989) defined the notion of IVIFS as follows: An IVIFS $\tilde{F}$ over $X$ is an object having the form

$$\tilde{F} = \{(y, \mu^I_F(y), \nu^I_F(y)) \mid y \in X\},$$

where $\mu^I_F(y) = [\mu^L_F(y), \mu^U_F(y)] \subset [0,1]$ and $\nu^I_F(y) = [\nu^L_F(y), \nu^U_F(y)] \subset [0,1]$. It must satisfy

$$\sup_{y \in X} \mu^I_F(y) + \sup_{y \in X} \nu^I_F(y) \le 1.$$
3.3 Intuitionistic fuzzy generator
A fuzzy complement is defined as a mapping $c: [0,1] \to [0,1]$. The set $F = \{(y, \mu_F(y), c(\mu_F(y))) \mid y \in X\}$ is not always an IFS, because $\mu_F(y) + c(\mu_F(y))$ may be greater than 1 and hence $\pi_F(y) < 0$. For instance:

(i) Yager's (Yager 1980) class of fuzzy complements is defined as $c(\mu_F(y)) = (1 - (\mu_F(y))^b)^{1/b}$, where $0 < b < 1$.

(ii) Sugeno's (Sugeno 1997) class is defined as $c(\mu_F(y)) = \dfrac{1 - \mu_F(y)}{1 + b\mu_F(y)}$, where $b \ge 0$.

The condition $\pi_F(y) < 0$ affects the theory of the IFS. Hence, a function $\phi: [0,1] \to [0,1]$ is considered in this study such that $0 \le \mu_F(y) + \phi(\mu_F(y)) \le 1$ is verified for every $y$. The function $\phi(y)$ is called an IFG if

$$\phi(y) \le 1 - y \quad (1)$$

for all $y \in [0,1]$.

An IFG is an increasing, decreasing or continuous function if $\phi$ is increasing, decreasing or continuous, respectively. Therefore, $\phi(1) = 0$ and $\phi(0) \le 1$ with respect to the above condition.
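As a quick numerical sanity check (not part of the paper; the helper names are ours), the sketch below evaluates both classical complements against condition (1). Sugeno's class always satisfies $c(\mu) \le 1 - \mu$, while an exponent outside Yager's stated range ($b = 2$, used purely for illustration) pushes $\mu_F(y) + c(\mu_F(y))$ above 1, which is exactly the failure that the IFG condition rules out.

```python
import numpy as np

def yager_complement(mu, b):
    """Yager's class: c(mu) = (1 - mu**b)**(1/b)."""
    return (1.0 - mu**b) ** (1.0 / b)

def sugeno_complement(mu, b):
    """Sugeno's class: c(mu) = (1 - mu) / (1 + b*mu)."""
    return (1.0 - mu) / (1.0 + b * mu)

mu = np.linspace(0.0, 1.0, 101)

# Sugeno's class (b >= 0) respects condition (1), so pi_F(y) >= 0.
assert np.all(sugeno_complement(mu, b=2.0) <= 1.0 - mu + 1e-12)

# An exponent outside Yager's stated range (0 < b < 1) can violate it:
# mu + c(mu) = 0.5 + (1 - 0.25)**0.5 > 1, i.e. pi_F(y) < 0.
violation = 0.5 + yager_complement(0.5, b=2.0)
assert violation > 1.0
```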
3.4 Construction of intuitionistic fuzzy generator
In the literature, various IFG such as Yager (1980), Sugeno (1997), Chaira (2020, 2021) and
their integrated versions are available. Recently, a novel IFG was proposed by Selvam et al.
(2024) and it was applied to design a new LLIE technique for obtaining a better enhance-
ment for the low-illumination images.
If $\alpha: [0,1] \to [0,1]$, then $\alpha$ is an involution function if and only if there exists a continuous, increasing function $p$ with $p(0) = 0$. It is denoted as

$$\alpha(\mu(y)) = p^{-1}\left(p(1) - p(\mu(y))\right) \quad (2)$$

Let us assume

$$p(y) = \frac{1}{(b+1)^2}\log\left(1 + y(b+1)^2\right) \quad (3)$$

with $p(0) = \frac{1}{(b+1)^2}\log(1) = 0$ and $p(1) = \frac{1}{(b+1)^2}\log\left(1 + (b+1)^2\right)$.

Then the inverse function of Eq. (3) is derived as

$$p^{-1}(y) = \frac{e^{y(b+1)^2} - 1}{(b+1)^2}$$

so that

$$\alpha(y) = p^{-1}\left(\frac{1}{(b+1)^2}\log\left(1+(b+1)^2\right) - \frac{1}{(b+1)^2}\log\left(1+y(b+1)^2\right)\right)$$

$$\alpha(y) = p^{-1}\left(\frac{1}{(b+1)^2}\log\frac{1+(b+1)^2}{1+y(b+1)^2}\right)$$

$$\alpha(y) = \frac{1-y}{1+y(b+1)^2} \quad (4)$$

for any value of $b$.

The generator is obtained by applying $\alpha$ to the membership grade $\mu(y)$:

$$\alpha(\mu(y)) = \phi(\mu(y)) = \frac{1-\mu(y)}{1+\mu(y)(b+1)^2} \quad (5)$$
Applying Eq. (5) with condition (1) determines the modified membership function of the element in the set, as provided in Eq. (6):

$$\mu'(y) = 1 - \frac{1-\mu(y)}{1+\mu(y)(b+1)^2} = \frac{\left(1+(b+1)^2\right)\mu(y)}{1+\mu(y)(b+1)^2} \quad (6)$$

The non-membership function $\nu'(y)$ is obtained using Eq. (6):

$$\nu'(y) = \phi(\mu'(y)) = \frac{1-\mu'(y)}{1+\mu'(y)(b+1)^2} = \frac{1 - \dfrac{\left(1+(b+1)^2\right)\mu(y)}{1+\mu(y)(b+1)^2}}{1 + (b+1)^2\,\dfrac{\left(1+(b+1)^2\right)\mu(y)}{1+\mu(y)(b+1)^2}}$$

$$\nu'(y) = \frac{1-\mu(y)}{1+\mu(y)(b+1)^2\left(2+(b+1)^2\right)}. \quad (7)$$
Additionally, the hesitancy degree can be calculated using Eq. (8):

$$\pi'(y) = 1 - \mu'(y) - \nu'(y). \quad (8)$$
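A minimal sketch of Eqs. (6)-(8) (our function names; vectorized with NumPy) that also verifies two properties guaranteed by the construction: the three grades always sum to 1, and the hesitancy never goes negative:

```python
import numpy as np

def membership(mu, b):
    """Modified membership mu'(y), Eq. (6)."""
    s = (b + 1.0) ** 2
    return (1.0 + s) * mu / (1.0 + mu * s)

def non_membership(mu, b):
    """Non-membership nu'(y), Eq. (7)."""
    s = (b + 1.0) ** 2
    return (1.0 - mu) / (1.0 + mu * s * (2.0 + s))

def hesitancy(mu, b):
    """Hesitancy pi'(y), Eq. (8)."""
    return 1.0 - membership(mu, b) - non_membership(mu, b)

mu = np.linspace(0.0, 1.0, 101)
for b in (0.1, 0.5, 1.0):
    m, n, p = membership(mu, b), non_membership(mu, b), hesitancy(mu, b)
    assert np.allclose(m + n + p, 1.0)   # grades sum to 1 (Eq. 8)
    assert np.all(p >= -1e-12)           # pi'(y) >= 0, so a valid IFS
```

Since $\mu'(y) \ge \mu(y)$ for every $\mu \in [0,1]$, the generator brightens membership grades, which is what drives the enhancement in Sect. 5.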
4 Construction of interval-valued intuitionistic fuzzy set
In the present study, the IFG proposed by Selvam et al. (2024) is employed to construct the IVIFS using the existing approach (Jebadass and Balasubramaniam 2024). This IFG has already been validated and compared with existing works to show its efficiency. Here, it is further extended into the IVIFS context to accommodate more uncertainty and to enhance low-contrast images effectively. The developed IVIFG handles uncertainty by generating the non-membership and hesitancy grades as intervals; thus, the proposed generator is considered efficient compared with existing fuzzy generators. To construct the IVIFS, let us first consider the function $\tau: \mathrm{IFS} \to \mathrm{IVIFS}$ defined as

$$\tau(F) = \{(y, \mu^I_F(y), \nu^I_F(y)) \mid y \in X\} \to \tilde{F}, \quad (9)$$
where $\mu^I_F(y) = [\mu^L_F(y), \mu^U_F(y)]$ and $\nu^I_F(y) = [\nu^L_F(y), \nu^U_F(y)]$ are sub-intervals of $[0, 1]$ such that $0 \le \mu^U_F(y) + \nu^U_F(y) \le 1$. Further, the lower and upper limits of the membership and non-membership grades are computed as follows:

$$\mu^U_F(y) = \mu'(y) + \sigma\pi'(y), \quad 0 \le \sigma \le 1 \quad (10)$$

$$\mu^L_F(y) = \mu'(y) - k\pi'(y), \quad 0 \le k \le \frac{\mu'(y)}{\pi'(y)} \quad (11)$$

$$\nu^U_F(y) = \nu'(y) + \eta\pi'(y), \quad 0 \le \eta \le 1 \quad (12)$$

$$\nu^L_F(y) = \nu'(y) - l\pi'(y), \quad 0 \le l \le \frac{\nu'(y)}{\pi'(y)} \quad (13)$$

with $0 \le (\sigma+\eta) \le 1$ and $0 < (\sigma+k), (\eta+l) \le 1$.
Now, let us define

$$\beta = \mu^U_F(y) - \mu^L_F(y) = (\sigma+k)\,\pi'(y), \quad (14)$$

$$\gamma = \nu^U_F(y) - \nu^L_F(y) = (\eta+l)\,\pi'(y). \quad (15)$$
From this derivation, it is observed that the IVIFS is constructed so that the width of neither the membership nor the non-membership interval exceeds $\pi'(y)$. Here, if $F$ belongs to a fuzzy set, then $\tau(F) = \{(y, \mu^I_F(y), \nu^I_F(y)) \mid y \in X\}$ with $\pi'(y) = 0$, which implies

$$\mu^U_F(y) = \mu^L_F(y) = \mu'(y), \qquad \nu^U_F(y) = \nu^L_F(y) = \nu'(y),$$

and hence $\tau(F) = F$. Therefore, if $F$ belongs to a fuzzy set, then $\tau(F) = F$.
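The interval construction in Eqs. (10)-(15) can be sketched as follows (a toy example with hand-picked grades and parameters, not values from the paper), confirming that the interval widths reduce to $(\sigma+k)\pi'$ and $(\eta+l)\pi'$ and that the upper grades still form a valid IFS pair:

```python
import numpy as np

def interval_grades(mu_p, nu_p, pi_p, sigma, k, eta, l):
    """Lower/upper membership and non-membership bounds, Eqs. (10)-(13)."""
    mu_interval = (mu_p - k * pi_p, mu_p + sigma * pi_p)
    nu_interval = (nu_p - l * pi_p, nu_p + eta * pi_p)
    return mu_interval, nu_interval

# Example grades with mu' + nu' + pi' = 1, and parameters satisfying
# sigma + eta <= 1, 0 < sigma + k <= 1, 0 < eta + l <= 1.
mu_p, nu_p, pi_p = 0.6, 0.25, 0.15
sigma, k, eta, l = 0.4, 0.3, 0.2, 0.5
(mu_L, mu_U), (nu_L, nu_U) = interval_grades(mu_p, nu_p, pi_p, sigma, k, eta, l)

assert np.isclose(mu_U - mu_L, (sigma + k) * pi_p)   # beta, Eq. (14)
assert np.isclose(nu_U - nu_L, (eta + l) * pi_p)     # gamma, Eq. (15)
assert mu_U + nu_U <= 1.0                            # valid IFS pair
```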
These theoretical insights greatly help to deal with wider uncertainty in real-world problems. In this study, the derived systems are applied to design a LLIE model based on the SSIM, which is defined in Eq. (16).

SSIM can be utilized to measure the similarity between two images. In this work, the SSIM is used to compute the similarity between the ground truth and enhanced images in order to optimize the enhanced image with respect to the parameters involved in the proposed IVIFG. The formula of SSIM is defined as

$$\mathrm{SSIM}(I_1, I_2) = \frac{(2\bar{\mu}_{I_1}\bar{\mu}_{I_2} + k_1)(2\sigma_{I_1 I_2} + k_2)}{(\bar{\mu}^2_{I_1} + \bar{\mu}^2_{I_2} + k_1)(\sigma^2_{I_1} + \sigma^2_{I_2} + k_2)} \quad (16)$$

where $I_1$ and $I_2$ denote the original and enhanced images, $\bar{\mu}_{I_1}$ and $\bar{\mu}_{I_2}$ represent the means of the pixel values of $I_1$ and $I_2$, $\sigma^2_{I_1}$ and $\sigma^2_{I_2}$ denote the variances of $I_1$ and $I_2$, and $\sigma_{I_1 I_2}$ denotes the covariance of $I_1$ and $I_2$.
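Eq. (16) can be implemented directly. The sketch below computes a single global SSIM over the whole image (the formula as written above, rather than the common windowed variant); the stabilizing constants $k_1$ and $k_2$ are small assumed values, since the paper does not specify them:

```python
import numpy as np

def ssim(img1, img2, k1=1e-4, k2=9e-4):
    """Global SSIM per Eq. (16) for images scaled to [0, 1]."""
    m1, m2 = img1.mean(), img2.mean()
    v1, v2 = img1.var(), img2.var()              # sigma^2 terms
    cov = ((img1 - m1) * (img2 - m2)).mean()     # sigma_{I1 I2}
    return ((2.0 * m1 * m2 + k1) * (2.0 * cov + k2)) / (
        (m1**2 + m2**2 + k1) * (v1 + v2 + k2))

rng = np.random.default_rng(0)
a = rng.random((8, 8))
assert np.isclose(ssim(a, a), 1.0)   # identical images score exactly 1
assert ssim(a, 1.0 - a) < 1.0        # a dissimilar image scores lower
```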
5 Methodology
In this part, the interval-valued intuitionistic fuzzy image (IVIFI), the computation of $b$ and the hesitancy degree $\beta$, defuzzification and the proposed LLIE technique are discussed to improve the understanding of the working methodology. One of the main goals of the present study is to apply the theoretical concepts discussed above to design a novel LLIE technique; thus, the proposed IVIFG is applied to efficiently enhance low-contrast images. Moreover, the overall workflow of the proposed method is depicted in Fig. 1 to demonstrate the intermediate results of the steps involved in the model.
5.1 Interval-valued intuitionistic fuzzy image
Initially, the input low-light image $O = [t_{ij}]_{m \times n}$ is normalized using the linear function given in Eq. (17) to generate the fuzzified image $A = [\mu_{ij}]_{m \times n}$:

$$\mu_{ij} = \frac{t_{ij} - t_{min}}{t_{max} - t_{min}}, \quad (17)$$

where $t_{ij}$ is the pixel value and $t_{max}$ and $t_{min}$ are the maximum and minimum gray levels in the image, respectively.
In a fuzzy set, choosing a membership value is a challenging task. Hence, the proposed method seeks to strengthen the input image by eliminating low membership intensities for the pixels of uncertain source images. Eq. (6) is applied to transform the fuzzy image into the intuitionistic fuzzy image (IFI) $F = [f_{ij}]_{m \times n}$ with

$$f_{ij} = \mu'_{ij} + \pi'_{ij}.$$

This IFI is further enhanced by converting it into the IVIFI $\tilde{F} = [\tilde{f}_{ij}]_{m \times n}$ utilizing the theoretical insights derived in Sect. 4. From Eqs. (6) and (14), the IVIFI is determined in Eq. (18):
Fig. 1 Workflow of the proposed model
$$\tilde{f}_{ij} = \mu'_{ij} + (\sigma+k)\,\pi'_{ij}$$

$$\tilde{f}_{ij} = \mu'_{ij} + \beta \quad (18)$$

Since the IVIFI has a higher possibility of enhancing the values with intervals than the IFI, the IVIFI is preferable to the IFI; there are more opportunities to choose the generated image. As a result, the IVIFI reveals greater quality compared with the IFI.
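Putting Eqs. (17), (6)-(8) and (18) together, the IVIFI of a gray image can be sketched as below (function names are ours; $\beta$ enters as the scalar offset from Eq. (18)):

```python
import numpy as np

def fuzzify(img):
    """Normalize gray levels to [0, 1], Eq. (17)."""
    t = img.astype(float)
    return (t - t.min()) / (t.max() - t.min())

def ivifi(img, b, beta):
    """Return (f~_ij, mu'_ij, pi'_ij) per Eqs. (6)-(8) and (18)."""
    mu = fuzzify(img)
    s = (b + 1.0) ** 2
    mu_p = (1.0 + s) * mu / (1.0 + mu * s)           # Eq. (6)
    nu_p = (1.0 - mu) / (1.0 + mu * s * (2.0 + s))   # Eq. (7)
    pi_p = 1.0 - mu_p - nu_p                         # Eq. (8)
    return mu_p + beta, mu_p, pi_p                   # Eq. (18)

img = np.array([[10, 120], [200, 255]], dtype=np.uint8)
f_tilde, mu_p, pi_p = ivifi(img, b=0.3, beta=0.2)
assert f_tilde.shape == img.shape
assert np.allclose(f_tilde - mu_p, 0.2)   # the interval offset is uniform
```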
5.2 Detailed optimization process by the computation of b and β

The parameters $b$ and $\beta$ in Eq. (18) play a crucial role in the proposed technique, and their optimization based on SSIM values builds on Sect. 4 and is illustrated in Fig. 2. A systematic procedure is followed to determine the optimal values of $b$ and $\beta$.
Step 1: Range of $b$ and $\beta$ values: Initially, 10 values for both $b$ and $\beta$ are selected, ranging from 0.1 to 1.0 in increments of 0.1 (i.e., $0.1, 0.2, \ldots, 1.0$). These values cover a range of modifications that can be applied to enhance image quality. In total, 100 images are generated, representing all possible combinations of the selected $b$ and $\beta$ values.

Step 2: SSIM computation: Using the combinations from Step 1, 100 enhanced images are produced. For each generated image, the SSIM value is calculated by comparing it with the corresponding ground truth image. This step is essential for the optimization process.

Step 3: Optimization procedure for selecting $b$ and $\beta$: The optimal enhancement parameters are determined from the $b$ and $\beta$ values that maximize the SSIM score, ensuring the highest similarity between the enhanced image and the ground truth. The pair of values that yields the maximum SSIM score is considered the optimal choice, fine-tuning the enhancement process for the best possible result.

Step 4: Image-specific optimization: It is important to note that the ideal values of $b$ and $\beta$ may vary across different images. The optimal selection of these parameters depends on the SSIM value and is influenced by various factors, such as illumination, noise levels and the complexity of the image content. Therefore, the determination of the optimal $b$ and $\beta$ values is tailored to each individual image to achieve the best enhancement results.
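Steps 1-3 amount to an exhaustive 10 × 10 grid search. A generic sketch follows (with toy stand-ins for the enhancement and scoring functions; in the actual model these are the IVIFI construction of Eq. (18) and the SSIM of Eq. (16)):

```python
import numpy as np

def grid_search(low, truth, enhance, score):
    """Scan b, beta over {0.1, ..., 1.0} (100 combinations) and keep the
    pair whose enhanced image maximizes the score against the ground truth."""
    grid = np.round(np.arange(0.1, 1.01, 0.1), 1)
    best = (-np.inf, None, None)
    for b in grid:
        for beta in grid:
            s = score(truth, enhance(low, b, beta))
            if s > best[0]:
                best = (s, b, beta)
    return best                                  # (best score, b*, beta*)

# Toy demonstration: the truth is exactly recoverable at b=0.5, beta=0.3.
low = np.linspace(0.0, 1.0, 16).reshape(4, 4)
truth = 0.5 * low + 0.3
s, b_opt, beta_opt = grid_search(
    low, truth,
    enhance=lambda x, b, beta: b * x + beta,     # stand-in for Eq. (18)
    score=lambda t, e: -np.mean((t - e) ** 2))   # stand-in for SSIM
assert (b_opt, beta_opt) == (0.5, 0.3)
```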
Fig. 2 Optimization process
The proposed model aims to improve low-contrast images while maintaining maximum similarity with the ground truth image information and minimizing loss, by selecting the $b$ and $\beta$ values that maximize the SSIM. This method ensures that the enhancement procedure is precisely matched to every individual image, yielding better results across a range of visual scenarios.
5.3 Defuzzification

Defuzzification is the process of transforming a fuzzy quantity into a crisp quantity. It is frequently used in image processing to transform fuzzy images into crisp images. Refer to the study (Jebadass and Balasubramaniam 2022) for the defuzzification of an IVIFI into a crisp image. The crisp version of the image is determined from the fuzzified one as

$$e_{ij} = \mu_{ij}(t_{max} - t_{min}) + t_{min}.$$

Similarly, the IVIFI can be defuzzified using Eq. (19):

$$e_{ij} = \frac{\tilde{f}_{ij} - (\sigma+k)\,\pi'_{ij}}{\mu'_{ij}} \quad (19)$$
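The crisp mapping above is the exact inverse of the normalization in Eq. (17), which the round-trip check below confirms (a sketch with our helper names; Eq. (19) additionally removes the interval contribution from the IVIFI before this step):

```python
import numpy as np

def defuzzify(mu, t_min, t_max):
    """Crisp defuzzification: e_ij = mu_ij * (t_max - t_min) + t_min."""
    return mu * (t_max - t_min) + t_min

img = np.array([[10, 120], [200, 255]], dtype=np.uint8)
t_min, t_max = float(img.min()), float(img.max())
mu = (img.astype(float) - t_min) / (t_max - t_min)   # Eq. (17)
restored = defuzzify(mu, t_min, t_max)

# Fuzzification followed by crisp defuzzification is lossless.
assert np.allclose(restored, img)
```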
5.4 Algorithm of the proposed method
The detailed process of the proposed technique is provided in this section.
Step 1: Input a pair of low-contrast
O(= tij )
and ground truth image
G(= gij )
of size
m×n
.
Step 2: Obtain the fuzzied the image
F(= µij )
from O by normalizing the image using
Eq. (17),
µ
ij =
t
ij
−tmin
tmax −tmin
where t_ij represents the intensity of the (i, j)-th pixel. The grades t_max and t_min denote the highest and lowest gray-level intensities, respectively.
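The normalization of Eq. (17) maps directly to a few lines of NumPy. A minimal sketch; the guard for a flat image (t_max = t_min) is an added assumption not stated in the text:

```python
import numpy as np

def fuzzify(t):
    """Min-max normalization (Eq. 17): membership grades in [0, 1]."""
    t = np.asarray(t, dtype=float)
    t_min, t_max = t.min(), t.max()
    if t_max == t_min:        # flat image: avoid division by zero
        return np.zeros_like(t)
    return (t - t_min) / (t_max - t_min)
```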
Step 3: Calculate the b values to determine the grades of membership and non-membership for the pixels by applying the novel IFG

μ′_ij = (1 + (b + 1)²) μ_ij / (1 + μ_ij (b + 1)²)

ν′_ij = (1 − μ_ij) / (1 + μ_ij (b + 1)² (2 + (b + 1)²))

which is derived in Subsect. 3.4.
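The two IFG formulas above transcribe directly to code. A hedged sketch (a straight transcription of the displayed expressions, not the authors' MATLAB routine):

```python
import numpy as np

def ifg_grades(mu, b):
    """Membership and non-membership grades from the proposed IFG.

    mu is the fuzzified image in [0, 1]; b is the generator parameter.
    """
    c = (b + 1.0) ** 2
    mu_p = (1.0 + c) * mu / (1.0 + c * mu)
    nu_p = (1.0 - mu) / (1.0 + mu * c * (2.0 + c))
    return mu_p, nu_p
```

Note that μ′_ij + ν′_ij ≤ 1 for μ ∈ [0, 1], so the hesitancy degree computed in Step 4 stays non-negative.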
Step 4: Compute the hesitancy degree π′_ij as follows

π′_ij = 1 − μ′_ij − ν′_ij
C. Selvam, D. Sundaram
Step 5: Aggregate the membership μ′_ij and hesitancy degree π′_ij to generate the IFI F(= f_ij) by using the expression

f_ij = μ′_ij + π′_ij
Step 6: Construct the IVIFI F̃(= f̃_ij) with the b and β values using the proposed IVIFG

f̃_ij = μ′_ij + β

for every b, β ∈ {0.1, 0.2, ..., 1}, as discussed in Subsect. 5.2.
Step 7: Optimize the obtained IVIFI to get the enhanced IVIFI F̃*(= f̃*_ij) by computing the maximum SSIM value between the 100 pairs of the ground truth and IVIFIs based on the parameters, as given in Fig. 2.

F̃* = F̃, where SSIM(G, F̃) = max_{b,β} SSIM(G, F̃).
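Steps 6 and 7 amount to a 10 × 10 grid search over (b, β). The sketch below is an illustrative reconstruction, not the authors' code: it scores candidates with the single-window SSIM of Eq. (20) evaluated over the whole image, and the clipping of μ′_ij + β to [0, 1] is an added assumption.

```python
import numpy as np

def ssim_global(g, e, k1=0.01, k2=0.03):
    """Eq. (20) evaluated globally (no sliding window)."""
    mg, me = g.mean(), e.mean()
    cov = ((g - mg) * (e - me)).mean()
    num = (2 * mg * me + k1) * (2 * cov + k2)
    den = (mg**2 + me**2 + k1) * (g.var() + e.var() + k2)
    return num / den

def optimize_ivifi(mu, ground_truth):
    """Scan b, beta in {0.1, ..., 1.0}; keep the IVIFI with maximum SSIM."""
    grid = np.round(np.arange(0.1, 1.01, 0.1), 2)   # 10 values each
    best_score, best = -np.inf, None
    for b in grid:
        c = (b + 1.0) ** 2
        mu_p = (1.0 + c) * mu / (1.0 + c * mu)          # membership (Step 3)
        nu_p = (1.0 - mu) / (1.0 + mu * c * (2.0 + c))  # non-membership
        pi_p = 1.0 - mu_p - nu_p                        # hesitancy (Step 4)
        for beta in grid:
            f_tilde = np.clip(mu_p + beta, 0.0, 1.0)    # IVIFI (Step 6)
            score = ssim_global(ground_truth, f_tilde)  # Step 7 criterion
            if score > best_score:
                best_score, best = score, (b, beta, f_tilde, mu_p, pi_p)
    return best_score, best
```

The double loop touches each pixel a constant (100) number of times, consistent with the O(mn) complexity stated for the model.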
Step 8: Defuzzify the optimized IVIFI F̃* to obtain the proposed enhanced image E(= e_ij) using Eq. (19),

e_ij = ( f̃*_ij − (σ + k) π′_ij ) / μ′_ij
The pseudocode of the proposed model is discussed in Algorithm 1. From the pseudocode, it is observed that the time complexity of the model is O(mn). A visual workflow of the model and the intermediate results of the images are demonstrated in Fig. 1.
Algorithm 1 Pseudo code of proposed technique
6 Experimental analysis
The experimental investigation is performed using a computer system with the following specific hardware and software components. The machine's CPU is a 13th Gen Intel(R) Core(TM) i9-13900 processor with a base frequency of 2.00 GHz and Dual-Core technology. It also includes a 1 TB hard disk and 64.0 GB of RAM, operating on the latest variant of the Windows 11 Pro operating system. The experimental setup is conducted using MATLAB R2023b along with the Image Processing Toolbox.
6.1 Data description
The proposed LLIE model is evaluated through the LOw-Light (LOL) dataset (https://daooshee.github.io/BMVC2018website/), a widely utilized benchmark consisting of 500 low/normal-light image pairs, which are separated into 485 training pairs and 15 testing pairs. The low-light images contain noise generated during the photo capturing process. In this dataset, the majority of the images depict indoor scenes and all images have a dimension of 400 × 600. Additionally, the proposed model is evaluated through the LOLv2 datasets (https://github.com/lowlevelai/glare), which comprise two subsets, namely a real and a synthetic dataset. The LOLv2-Real dataset contains 689 training and 100 testing low/normal-light image pairs, and the LOLv2-Synthetic dataset contains 900 training and 100 testing pairs. The size of the images in these datasets is 400 × 600.
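For reference, the low/normal-light pairs in these releases can be collected by matching file names across the two folders each dataset provides. The sketch below makes an assumption about the local directory layout (folder names such as `low`/`high` are illustrative, not part of the datasets' specification):

```python
from pathlib import Path

def paired_images(low_dir, high_dir, exts=(".png", ".jpg", ".bmp")):
    """Return (low-light, ground-truth) path pairs matched by file name."""
    low = {p.name: p for p in Path(low_dir).iterdir() if p.suffix.lower() in exts}
    high = {p.name: p for p in Path(high_dir).iterdir() if p.suffix.lower() in exts}
    # keep only names present in both folders, in a stable order
    return [(low[n], high[n]) for n in sorted(low.keys() & high.keys())]
```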
6.2 Implementation of the model
The proposed model is implemented in MATLAB on the system with the above-mentioned specifications. The following steps are employed for the implementation of the model. Initially, the three benchmark datasets that contain ground truth images are considered, as detailed in Subsect. 6.1.
In step 1, an image from the above datasets is considered as input. Then, the following processes are repeated for all the images in the datasets. In step 2, the input image is fuzzified using Eq. (17).
In step 3, the membership μ′_ij and non-membership grades ν′_ij of the image are computed by setting the b value from 0.1 to 1 with an increment of 0.1. The hesitancy grades π′_ij are then computed in step 4. Further, the μ′_ij and π′_ij grades are aggregated to form the IFI in step 5.
In step 6, the IVIFI is then constructed from the IFI using Eq. (18), as detailed in Subsect. 5.2, for further enhancement of the images. At this stage, 100 images are generated from all the combinations of the parameters b and β.
In step 7, the IVIFI is optimized by computing the SSIM between the 100 images and the ground truth of the input image. After the computation of the SSIM values, the maximum among the 100 values is identified. The corresponding image is called the optimized image, which is considered the most similar to the ground truth.
Finally, the optimized IVIFI is defuzzified in step 8 using Eq. (19).
6.3 Investigation detail
All the training and testing pairs are initially gathered for each dataset. The proposed model is applied to all the datasets and the enhanced images are determined as detailed in Subsect. 5.4. Then, the existing models are assessed through the MATLAB tool with the same datasets and their enhanced images are determined. In these assessments, the training and testing subsets are not separated for the implementation. Further, the performance metrics are evaluated for all the enhanced images obtained through the existing methods, as detailed in Subsect. 6.5. Finally, the values are compared with the outcomes of the proposed model.
6.4 Existing methods for comparative study
In this study, a comparative investigation is performed with various types of existing methodologies as follows: (traditional methods) HE, CLAHE and Dehaze; (filter based method) SSIF (Demir and Kaplan 2023); (Retinex based methods) AIE (Wang et al. 2019), RBMA (Al-Hashim and Al-Ameen 2020) and PCA (Singh and Bhandari 2021); (fuzzy based methods) IFS (Jebadass and Balasubramaniam 2022), IVIFS1 (Jebadass and Balasubramaniam 2022), IVIFS2 (Jebadass and Balasubramaniam 2024) and Chithra's IFI (Selvam et al. 2024). Among these models, the functions of HE, CLAHE and Dehaze are already available in many databases. The codes for the fuzzy methods IFS (Jebadass and Balasubramaniam 2022), IVIFS1 (Jebadass and Balasubramaniam 2022), IVIFS2 (Jebadass and Balasubramaniam 2024) and Chithra's IFI (Selvam et al. 2024) are obtained from the authors of the respective papers. The codes for SSIF (Demir and Kaplan 2023), AIE (Wang et al. 2019), RBMA (Al-Hashim and Al-Ameen 2020) and PCA (Singh and Bhandari 2021) are gathered from the GitHub repositories of the respective authors. All these codes are implemented through the MATLAB tool with the aforementioned system specifications. Also, a group of visual comparisons of the proposed and existing approaches is portrayed in Fig. 11.
6.5 Performance metrics
This part discusses the quality measures employed for the comparative study. Among various metrics, SSIM, PSNR, AMBE, MAE, RMSE, BRISQUE and NIQE are widely used to evaluate the quality of enhanced images. Thus, these quality measures are employed in this study to showcase the superiority and efficiency of the proposed technique. Further, the outcomes are displayed in Tables 1, 2 and 3.
6.5.1 Structural similarity index measure (SSIM)
SSIM is employed to compute the similarity between two images. In this study, the SSIM
is used to measure the similarity between the ground truth G and enhanced E images. The
SSIM can be measured using Eq. (20).
SSIM(G, E) = [(2 μ̄_G μ̄_E + k_1)(2 σ_GE + k_2)] / [(μ̄_G² + μ̄_E² + k_1)(σ_G² + σ_E² + k_2)]  (20)
where μ̄_G and μ̄_E represent the means of the pixel values of G and E, σ_G² and σ_E² depict the variances of G and E, and σ_GE denotes the covariance of G and E.
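Eq. (20) maps directly to code. The sketch below evaluates the expression over the whole image; note that common library implementations (e.g. MATLAB's ssim) instead average this expression over local windows, so values will differ from the single-window form shown here:

```python
import numpy as np

def ssim(g, e, k1=0.01, k2=0.03):
    """Eq. (20) evaluated over the whole image (single window)."""
    mg, me = g.mean(), e.mean()
    cov = ((g - mg) * (e - me)).mean()   # covariance sigma_GE
    num = (2 * mg * me + k1) * (2 * cov + k2)
    den = (mg**2 + me**2 + k1) * (g.var() + e.var() + k2)
    return num / den
```

For identical images the score is exactly 1, and it decreases as the enhanced image departs from the ground truth.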
6.5.2 Peak signal-to-noise ratio (PSNR)
PSNR is used to measure the quality of an enhanced image compared to the original noise-free image. It calculates the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. It is typically expressed in decibels (dB) and relies on the calculation of the mean squared error (MSE) between the ground truth and enhanced images. The MSE between G and E is defined as
MSE(G, E) = (1/mn) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} (g_ij − e_ij)²
The PSNR (in dB) between the ground truth G and enhanced E images is defined as
Table 1 Performance metrics comparison LOL dataset
Methods    SSIM (↑)        PSNR (↑)         AMBE (↓)        MAE (↓)         RMSE (↓)        BRISQUE (↓)      NIQE (↓)
           Mean    Std     Mean     Std     Mean    Std     Mean    Std     Mean    Std     Mean     Std     Mean    Std
AIE 0.4936 0.1719 15.7716 3.9758 0.1424 0.1056 0.1604 0.0914 0.1813 0.0902 36.0316 5.3977 9.7205 1.7850
Chithra 0.5538 0.1621 17.7105 3.4888 0.0987 0.0765 0.1213 0.0637 0.1418 0.0643 35.2926 5.4275 9.1472 1.8896
CLAHE 0.3808 0.1737 9.9014 3.3489 0.3155 0.1129 0.3166 0.1108 0.3410 0.1104 30.2021 4.9831 8.4057 1.5655
Dehaze 0.4668 0.1648 14.3822 4.4960 0.1916 0.1138 0.1992 0.1052 0.2162 0.1042 34.7505 5.0747 9.4602 1.8211
He 0.4252 0.1624 15.2662 2.5199 0.0931 0.0684 0.1514 0.0485 0.1799 0.0535 37.6757 5.7909 9.7228 1.9752
IFS 0.4246 0.1524 15.2868 2.3979 0.0835 0.0575 0.1486 0.0423 0.1786 0.0491 38.5327 5.5805 9.7820 2.1342
IVIFS1 0.5744 0.1584 17.7337 3.1243 0.0865 0.0616 0.1176 0.0505 0.1385 0.0514 35.2285 5.5558 8.9939 1.7158
IVIFS2 0.5458 0.1751 15.8796 4.5795 0.1577 0.1077 0.1671 0.0992 0.1844 0.0985 34.3415 5.0923 9.0468 1.8416
PCA 0.3580 0.1887 14.8060 2.9360 0.1107 0.0895 0.1616 0.0661 0.1928 0.0689 36.9504 5.8196 9.5198 1.8647
RBMA 0.5889 0.1784 16.9655 4.3320 0.1292 0.0931 0.1433 0.0826 0.1605 0.0826 29.8631 7.9019 8.1431 1.7510
SSIF 0.4197 0.1793 11.4814 4.0010 0.2713 0.1175 0.2736 0.1138 0.2923 0.1143 33.4385 4.7567 8.8694 1.7001
Proposed 0.6202 0.1858 19.1104 4.5832 0.1017 0.0779 0.1103 0.0749 0.1280 0.0761 31.1165 7.5956 8.3835 1.8450
PSNR(G, E) = 10 log_10 ( (max(g_ij))² / MSE(G, E) )
where max(g_ij) represents the maximum pixel value of the ground truth image G.
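The MSE and PSNR formulas above can be sketched as follows; treating identical images as infinite PSNR is a conventional choice, not something the text specifies:

```python
import numpy as np

def psnr(g, e):
    """PSNR in dB; the peak is taken from the ground truth, as in the text."""
    g = np.asarray(g, dtype=float)
    e = np.asarray(e, dtype=float)
    mse = np.mean((g - e) ** 2)
    if mse == 0.0:
        return np.inf            # identical images: no noise at all
    return 10.0 * np.log10((g.max() ** 2) / mse)
```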
6.5.3 Absolute mean brightness error (AMBE)
AMBE is a quality measure of images. In this study, the AMBE is used to assess the quality of the enhanced images against the ground truth. It represents the difference in mean gray levels between the original and enhanced images.
AMBE(G, E) = |Ḡ − Ē|  (21)
where Ḡ and Ē denote the mean values of the ground truth and enhanced images, respectively.
6.5.4 Mean absolute error (MAE)
The MAE is a quantitative metric used to evaluate the performance of an enhancement algorithm by measuring the average absolute difference between the pixel values of the enhanced image and a reference image (often the ground truth). It helps determine how closely the enhanced image matches the original, ideal version. It is represented by
MAE(G, E) = (1/N) Σ_{i=1}^{N} |g_i − e_i|
where N is the total number of pixels, and g_i and e_i represent the i-th pixel values of the ground truth and enhanced images, respectively. A lower MAE indicates better image quality.
6.5.5 Root mean squared error (RMSE)
The RMSE is a widely used metric to measure the difference between the enhanced image and the ground truth (reference) image. In the context of LLIE, RMSE quantifies how much the enhanced image deviates from the ideal version by considering the magnitude of pixel-wise errors. A lower RMSE, especially near 0, indicates fewer errors and better image quality.
RMSE(G, E) = √( (1/N) Σ_{i=1}^{N} (g_i − e_i)² )
where N is the total number of pixels, and g_i and e_i represent the i-th pixel values of the ground truth and enhanced images, respectively.
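The three referenced error measures (AMBE of Eq. 21, MAE, and RMSE) each reduce to a one-line NumPy expression. A minimal sketch:

```python
import numpy as np

def ambe(g, e):
    """Eq. (21): absolute difference of the mean brightness."""
    return abs(np.mean(g) - np.mean(e))

def mae(g, e):
    """Mean absolute pixel error."""
    return np.mean(np.abs(np.asarray(g, float) - np.asarray(e, float)))

def rmse(g, e):
    """Root mean squared pixel error."""
    return np.sqrt(np.mean((np.asarray(g, float) - np.asarray(e, float)) ** 2))
```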
Table 2 Performance metrics comparison LOLv2 Real dataset
Methods    SSIM (↑)        PSNR (↑)         AMBE (↓)        MAE (↓)         RMSE (↓)        BRISQUE (↓)      NIQE (↓)
           Mean    Std     Mean     Std     Mean    Std     Mean    Std     Mean    Std     Mean     Std     Mean    Std
AIE 0.4002 0.2007 14.4688 3.9672 0.1445 0.1098 0.1818 0.0920 0.2091 0.0940 40.0897 7.8784 12.1071 4.4604
Chithra 0.4376 0.2207 15.8452 4.1088 0.1017 0.0832 0.1511 0.0755 0.1800 0.0841 39.5858 7.8929 12.1742 5.4467
CLAHE 0.3138 0.1784 10.2272 3.1802 0.2804 0.1295 0.2970 0.1107 0.3274 0.1080 35.2752 9.7795 11.0512 5.2568
Dehaze 0.3753 0.1950 13.2333 4.1356 0.1766 0.1196 0.2139 0.0981 0.2409 0.0997 38.7758 7.7010 12.0066 4.6701
He 0.3516 0.1815 14.2393 2.7705 0.0940 0.0681 0.1689 0.0538 0.2040 0.0639 41.2963 7.2759 12.4532 5.1429
IFS 0.3444 0.1804 14.2773 2.7021 0.0823 0.0586 0.1661 0.0499 0.2026 0.0619 41.7743 6.8990 12.8429 5.7563
IVIFS1 0.4526 0.2251 16.0143 3.7366 0.0863 0.0614 0.1434 0.0619 0.1730 0.0722 39.6988 8.0517 11.9700 5.3304
IVIFS2 0.4292 0.2233 14.4805 4.4176 0.1500 0.1124 0.1865 0.0956 0.2125 0.0984 38.7248 8.1388 11.9963 5.3224
PCA 0.2930 0.1884 13.6307 3.1075 0.1092 0.0887 0.1835 0.0693 0.2215 0.0769 40.6919 7.4964 12.3867 5.3331
RBMA 0.4774 0.2274 15.5794 4.2245 0.1247 0.0937 0.1614 0.0794 0.1856 0.0832 36.7101 10.6705 11.7787 4.9733
SSIF 0.3401 0.1919 11.3648 3.6219 0.2460 0.1281 0.2654 0.1096 0.2921 0.1083 37.6802 8.4289 11.6783 5.3978
Proposed 0.5036 0.2376 17.0657 4.9575 0.1213 0.0839 0.1395 0.0835 0.1635 0.0896 36.3366 9.9838 10.9837 4.7143
6.5.6 Blind/referenceless image spatial quality evaluator (BRISQUE)
The BRISQUE is a no-reference image quality assessment metric that evaluates the quality of an image without the need for a ground truth or reference image. In the LLIE process, BRISQUE measures how natural and visually pleasing the enhanced image appears based on distortion patterns and deviations from natural scene statistics. A lower BRISQUE value indicates better image quality.
6.5.7 Naturalness image quality evaluator (NIQE)
The NIQE is also a no-reference image quality assessment metric, like BRISQUE, but it is
an opinion-unaware model. This means NIQE evaluates image quality without relying on
subjective human opinion scores during training. It assesses how much an image deviates
from statistical regularities found in high-quality natural images.
6.6 Quantitative comparative analysis
In this part, a quantitative comparative analysis is performed utilizing the quality measures discussed in Subsect. 6.5, namely SSIM, PSNR, AMBE, MAE, RMSE, BRISQUE and NIQE. The results of the analysis are provided in Tables 1, 2 and 3. To clearly demonstrate the outcomes of the quantitative analysis, line charts are separately given in Figs. 4, 5, 6, 7, 8, 9 and 10 for the mean values of all the metrics with the LOL, LOLv2-Real and LOLv2-Synthetic datasets.
From the quantitative comparative analysis presented in Tables 1, 2 and 3, it is evident that the proposed method significantly outperforms the existing methods in terms of the mean and standard deviation of the important performance metrics.
● LOL dataset - From the analysis of the mean values with the LOL dataset, it is observed that the proposed model outperforms the other methods in SSIM (0.6202), PSNR (19.1104), MAE (0.1103) and RMSE (0.1280). The model also ranks second in BRISQUE (31.1165) and NIQE (8.3835).
● LOLv2-Real dataset - From the analysis of the mean values with the LOLv2-Real dataset, it is demonstrated that the proposed model outperforms the other methods in SSIM (0.5036), PSNR (17.0657), MAE (0.1395), RMSE (0.1635) and NIQE (10.9837). It also ranks second in BRISQUE (36.3366).
● LOLv2-Synthetic dataset - From the analysis of the mean values with the LOLv2-Synthetic dataset, it is demonstrated that the proposed model outperforms the other methods in SSIM (0.8357), PSNR (22.0751), AMBE (0.0364), MAE (0.0729) and RMSE (0.0925). It also ranks second in NIQE (4.3425).
● Additionally, the standard deviations for all the datasets are computed to demonstrate the spread of the outcomes from the mean values. From this computation, it is observed that the proposed method is moderately more efficient than the other methods.
Fig. 13 demonstrates a pictorial illustration of the SSIM measure between the ground truth
and enhanced images by existing methods and the proposed model.
Table 3 Performance metrics comparison LOLv2 Synthetic dataset
Methods    SSIM (↑)        PSNR (↑)         AMBE (↓)        MAE (↓)         RMSE (↓)        BRISQUE (↓)      NIQE (↓)
           Mean    Std     Mean     Std     Mean    Std     Mean    Std     Mean    Std     Mean     Std     Mean    Std
AIE 0.8198 0.1061 18.5333 4.1893 0.0977 0.0653 0.1119 0.0566 0.1321 0.0606 30.3566 10.4554 5.1899 1.5025
Chithra 0.6919 0.1883 16.6030 4.0561 0.1022 0.0737 0.1351 0.0762 0.1659 0.0878 26.8210 9.9084 4.4740 1.2066
CLAHE 0.5974 0.1906 13.4195 4.1600 0.1985 0.1037 0.2061 0.0951 0.2361 0.0964 24.9677 9.3925 4.4532 1.2241
Dehaze 0.7170 0.1486 17.1744 4.1581 0.1139 0.0772 0.1304 0.0677 0.1547 0.0735 28.3367 9.7735 4.8217 1.3208
He 0.7626 0.1057 16.6358 3.8816 0.1159 0.0780 0.1374 0.0662 0.1624 0.0731 25.7581 9.9104 4.6711 1.2610
IFS 0.6360 0.2292 14.9910 4.3733 0.1173 0.0795 0.1681 0.0886 0.2020 0.1047 26.4615 9.5400 4.7479 1.4900
IVIFS1 0.7133 0.1251 16.4803 3.3207 0.1102 0.0751 0.1342 0.0608 0.1614 0.0641 25.5334 9.9845 4.3309 1.2594
IVIFS2 0.7260 0.1300 16.7261 3.4062 0.1087 0.0717 0.1304 0.0597 0.1575 0.0643 25.6131 9.8644 4.3816 1.3071
PCA 0.7420 0.1273 17.1701 2.7290 0.0830 0.0587 0.1146 0.0446 0.1457 0.0490 26.6032 10.3009 4.7903 1.2266
RBMA 0.7441 0.1321 17.9813 4.4801 0.1098 0.0677 0.1220 0.0595 0.1422 0.0650 25.8158 9.5805 4.5419 0.9779
SSIF 0.6448 0.1701 14.2205 4.2068 0.1766 0.0942 0.1860 0.0846 0.2155 0.0885 25.3117 9.5929 4.3816 1.1783
Proposed 0.8357 0.1088 22.0751 4.4739 0.0364 0.0441 0.0729 0.0601 0.0925 0.0718 27.5846 9.8180 4.3425 1.0751
These quantitative results with the different datasets illustrate that the proposed model not only excels in maintaining structural integrity and enhancing image quality but also competes closely with the existing methods in terms of brightness preservation. The superior SSIM, PSNR, AMBE, MAE, RMSE and NIQE values, along with a competitive BRISQUE, collectively validate the proposed model's effectiveness and superiority over existing LLIE techniques.
6.7 Comparative study with SOTA approaches
In this section, a comparative analysis is conducted based on the SSIM and PSNR of the enhanced images produced by the proposed technique against 13 SOTA approaches, namely LIME, SID, DeepUPE, DeepLPF, RF, LECARM, SRIE, 3DULT, IPT, ZeroDCE, RetinexNet, DSLR and SCI (Xu et al. 2022; Cai et al. 2023; Jiang et al. 2023). This analysis utilizes the LOL dataset, with the SSIM and PSNR results sourced from the existing literature (Xu et al. 2022; Cai et al. 2023; Jiang et al. 2023), where these SOTA methods were evaluated using the same dataset. While some SOTA approaches obtain competitive outcomes, the proposed technique demonstrates superior performance over the SOTA models, as shown in Table 4 and portrayed in Fig. 12.
From this comparative analysis, it is evident that the proposed model achieves the highest SSIM (0.6202) and PSNR (19.1104) among all evaluated approaches. In Table 4, the bold entries represent the maximum values. The proposed model not only outperforms established techniques but also showcases significant improvements in preserving structural similarity and enhancing image quality in low-light conditions. These results affirm the superiority of the proposed technique, highlighting its effectiveness and robustness in handling LLIE tasks compared to the other SOTA methods.
7 Implications and limitations of the model
In this part, the implications and limitations of the proposed model are discussed.
7.1 Implications
This work proposes a novel IVIFG based low-light enhancement model for referenced image datasets. In this model, the SSIM with respect to the reference images is used to optimize the enhanced images. This helps obtain enhanced images with clearer information and better quality that closely match the ground truth. The implications of the model are as follows:
● In everyday life, people use mobile phones for various purposes and often secure their devices with a face lock. Some users may set multiple faces for unlocking their phones. Initially, this process requires a clear image for recognition, which is stored as the ground truth with a label for future verification. If a user wants to remove the face lock, they must capture a new image. However, the image may sometimes be captured under poor lighting conditions. The proposed model can be used to enhance such images, ensuring they closely resemble the ground truth for successful recognition.
● In colleges, industries and offices, the face images of students or employees are collected and stored in a database, labelled with their names or unique identification numbers for recognition. These stored images serve as the ground truth. For daily attendance or verification purposes, individuals are required to capture new images using a camera. Occasionally, due to poor lighting, these images may have low contrast. The proposed model can be applied in such scenarios to enhance the captured images, ensuring they closely match the stored data for reliable verification.
● Similarly, fingerprint images are gathered and stored in a database as ground truth. During attendance verification, poor-quality fingerprint scans or non-uniform impressions may occur. In such cases, the proposed model can be applied to enhance the fingerprint images, improving the accuracy of the verification process. These implications provide valuable insights for researchers to further develop and apply the proposed model for various purposes. Thus, the applicability of the proposed model is demonstrated through these practical scenarios.

Fig. 4 Comparison of SSIM

Fig. 5 Comparison of PSNR
Fig. 6 Comparison of AMBE

Fig. 7 Comparison of MAE
7.2 Limitations of the model
Although the proposed study delivers promising results in experimental and quantitative
analyses, it has certain limitations:
● While the proposed method is highly effective for enhancing low-light images, it is not suitable for handling noisy, blurred or hazy images.
Fig. 8 Comparison of RMSE

Fig. 9 Comparison of BRISQUE
●The proposed method relies on ground truth images for optimization. Therefore, it is
only applicable to referenced image datasets.
8 Conclusion
In the present study, a novel optimized LLIE technique is designed for referenced image datasets utilizing the proposed IVIFG and SSIM. Through a comprehensive experimental analysis using benchmark referenced datasets, the proposed model demonstrated significant improvements over existing techniques. The comparative results on all the datasets showed that the proposed model achieved superior SSIM, PSNR, AMBE, MAE, RMSE and NIQE values, along with a competitive BRISQUE, indicating the best performance in preserving structural integrity and enhancing image quality. The SSIM and PSNR values of the proposed approach were also compared with 13 SOTA methods, including LIME, SID, DeepUPE, DeepLPF, RF, LECARM, SRIE, 3DULT, IPT, ZeroDCE, RetinexNet, DSLR and SCI. Although some SOTA methods achieve competitive results, the proposed model consistently outperformed them in both SSIM (0.6202) and PSNR (19.1104), highlighting its robustness and effectiveness in handling low-light conditions.
Further, this research can be extended with the following objectives.
● The interval-valued type-2 intuitionistic fuzzy generator (IVT2IFG) can be derived to deal with higher dimensional uncertainties.
● The IVT2IFG can be applied to design a novel image enhancement technique without considering ground truth images.
● Different reference and no-reference quality metrics can be utilized for the optimization process.
● The applications of LLIE, such as traffic images, underwater images, medical images, etc., can be analyzed through the designed model.

Fig. 10 Comparison of NIQE
Table 4 Comparative study of SSIM and PSNR
Methods LIME SID DeepUPE DeepLPF RF LECARM SRIE
SSIM 0.5600 0.4360 0.4460 0.4730 0.4520 0.5690 0.4950
PSNR 16.7600 14.3500 14.3800 15.2800 15.2300 14.4099 11.8550
Methods 3DULT IPT ZeroDCE RetinexNet DSLR SCI Proposed
SSIM 0.4450 0.5040 0.5840 0.4620 0.5700 0.5250 0.6202
PSNR 14.3500 16.2700 14.8610 16.7700 14.8160 14.7840 19.1104
Fig. 11 Visual comparison
Fig. 12 Comparative analysis
with SSIM and PSNR
Fig. 13 Visual comparison of SSIM
Author contributions Chithra Selvam: Conceptualization, Methodology, Visualization, Validation, Writing-Original Draft. Dhanasekar Sundaram: Conceptualization, Methodology, Validation, Formal Analysis, Supervision, Writing-Review & Editing.
Funding Open access funding provided by Vellore Institute of Technology.
Data availability The employed datasets are publicly available for research at https://daooshee.github.io/BMVC2018website/ and https://github.com/lowlevelai/glare.
Declarations
Conflict of interest The authors declare no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
licence, and indicate if changes were made. The images or other third party material in this article are
included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Al-Ameen Z (2024) Adapted type-II fuzzy algorithm to process images with non-uniform illumination. Signal Image Video Process 18(4):3109–3122
Al-Hashim MA, Al-Ameen Z (2020) Retinex-based multiphase algorithm for low-light image enhancement.
Traitement du Signal, 37(5):733-743
Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20(1):87–96
Atanassov K, Gargov G (1989) Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst 31(3):343–349
Cai Y, Bian H, Lin J, Wang H, Timofte R, Zhang Y (2023) Retinexformer: one-stage retinex-based transformer for low-light image enhancement. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 12504–12513
Chaira T (2020) Intuitionistic fuzzy approach for enhancement of low-contrast mammogram images. Int J
Imaging Syst Technol 30(4):1162–1172
Chaira T (2021) An intuitionistic fuzzy clustering approach for detection of abnormal regions in mammogram images. J Digit Imaging 34(2):428–439
Chen X, Yu Y (2024) An unsupervised low-light image enhancement method for improving V-SLAM localization in uneven low-light construction sites. Autom Constr 162:105404
Chinnappan R R, Sundaram D (2024) A low-light video enhancement approach using novel intuitionistic
fuzzy generator. Eur Phys J Spec Top, pp 1–13
Colney L, Kumar S, Jodwal M, Bharti M, Acharya UK (2023) Performance analysis of adaptive histogram equalization-based image enhancement schemes. In: 2023 International conference on computing, communication, and intelligent systems (ICCCIS). IEEE, pp 128–134
Dai J, Li Q, Wang H, Liu L (2024) Understanding images of surveillance devices in the wild. Knowl-Based
Syst 284:111226
Demir Y, Kaplan NH (2023) Low-light image enhancement based on sharpening-smoothing image filter. Digital Signal Process 138:104054
Ghosh SK, Ghosh A (2022) A novel hyperbolic intuitionistic fuzzy divergence measure based mammogram
enhancement for visual elucidation of breast lesions. Biomed Signal Process Control 75:103586
Hassan MF, Siaw Lang W, Paramesran R (2024) An integrated enhancement method to improve image visibility and remove color cast for sand-dust image. Multimed Tools Appl, pp 1–16
Hu C, Li H, Ma T, Zeng C, Ji X (2024) An improved image enhancement algorithm: radial contrast-limited adaptive histogram equalization. Multimed Tools Appl 83:83695–83707
Jebadass JR, Balasubramaniam P (2022) Low light enhancement algorithm for color images using intuitionistic fuzzy sets with histogram equalization. Multimed Tools Appl 81(6):8093–8106
Jebadass JR, Balasubramaniam P (2022) Low contrast enhancement technique for color images using interval-valued intuitionistic fuzzy sets with contrast limited adaptive histogram equalization. Soft Comput 26(10):4949–4960
Jebadass JR, Balasubramaniam P (2024) Color image enhancement technique based on interval-valued intuitionistic fuzzy set. Inf Sci 653:119811
Jiang H, Luo A, Fan H, Han S, Liu S (2023) Low-light image enhancement with wavelet-based diffusion models. ACM Trans Graph 42(6):1–14
Kaijun Z, Duojie L, Guangnan L, Xiancheng Z, Yemei Q (2024) Gunet: a novel and efficient low-illumination palmprint image enhancement method. Signal Image Video Process 18(8):6093–6101
Khmag A (2023) Natural digital image mixed noise removal using regularization Perona-Malik model and
pulse coupled neural networks. Soft Comput 27(21):15523–15532
Khmag A (2023) Additive Gaussian noise removal based on generative adversarial network model and semi-soft thresholding approach. Multimed Tools Appl 82(5):7757–7777
Khmag A, Ramli AR, Kamarudin N (2019) Clustering-based natural image denoising using dictionary learning approach in wavelet domain. Soft Comput 23(17):8013–8027
Li J, Hao S, Li T, Zhuo L, Zhang J (2024) RDMA: low-light image enhancement based on retinex decomposition and multi-scale adjustment. Int J Mach Learn Cybern 15(5):1693–1709
LOL dataset: https://daooshee.github.io/BMVC2018website/
LOL-v2 datasets: https://github.com/lowlevelai/glare
Park JY, Park CW, Eom IK (2023) ULBPNet: low-light image enhancement using U-shaped lightening back-projection. Knowl-Based Syst 281:111099
Qu J, Liu RW, Gao Y, Guo Y, Zhu F, Wang FY (2024) Double domain guided real-time low-light image enhancement for ultra-high-definition transportation surveillance. IEEE Trans Intell Transp Syst 25(8):9550–9562
Selvam C, Jebadass RJJ, Sundaram D, Shanmugam L (2024) A novel intuitionistic fuzzy generator for low-
contrast color image enhancement technique. Inf Fusion 108:102365
Singh N, Bhandari AK (2021) Principal component analysis-based low-light image enhancement using
reection model. IEEE Trans Instrum Meas 70:1–10
Singh P, Bhandari A K, Kumar R (2024) Low light image enhancement using reection model and wavelet
fusion. Multimed Tools Appl, pp 1–29
Subramani B, Veluchamy M (2023) Bilateral tone mapping scheme for color correction and contrast adjust-
ment in nearly invisible medical images. Color Res Appl 48(6):748–760
Sugeno M (1993) Fuzzy measures and fuzzy integrals: a survey. Readings in Fuzzy Sets for Intelligent Sys-
tems, pp. 251-257
Veluchamy M, Subramani B (2023) Articial bee colony optimized image enhancement framework for invis-
ible images. Multimed Tools Appl 82(3):3627–3646
Veluchamy M, Bhandari AK, Subramani B (2021) Optimized Bezier curve based intensity mapping scheme
for low light image enhancement. IEEE Trans Emerg Top Comput Intell 6(3):602–612
Vijayalakshmi D, Nath MK (2022) A novel multilevel framework based contrast enhancement for uniform
and non-uniform background images using a suitable histogram equalization. Digital Signal Process
127:103532
Wang W, Chen Z, Yuan X, Wu X (2019) Adaptive image enhancement method for correcting low-illumina-
tion images. Inf Sci 496:25–41
Xu H, Liu X, Zhang H, Wu X, Zuo W (2024) Degraded structure and Hue Guided Auxiliary learning for low-
light image enhancement. Knowl-Based Syst 295:111779
Xu X, Wang R, Fu C -W, Jia J (2022) SNR-aware low-light image enhancement. In: 2022 IEEE/CVF confer-
ence on computer vision and pattern recognition (CVPR), New Orleans, LA, USA, pp. 17693–17703
Yadav PS, Gupta B, Lamba SS (2024) A new approach of contrast enhancement for medical images based on
entropy curve. Biomed Signal Process Control 88:105625
Yager RR (1980) On the measure of fuzziness and negation. Part II: Lattices, information and control
44:236–260
Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
Zhang H, Yang KF, Li YJ, Chan LLH (2024) Self-supervised network for low-light trac image enhance-
ment based on deep noise and artifacts removal. Comput Vis Image Understand 246:104063
Zhou J, Pang L, Zhang D, Zhang W (2023) Underwater image enhancement method via multi-interval sub-
histogram perspective equalization. IEEE J Oceanic Eng 48(2):474–488
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.