LIP operators: simulating exposure variations to
perform algorithms independent of lighting conditions
Maxime Carre*
NT2I
Saint Etienne, France
m.carre@nt2i.fr
Michel Jourlin
Laboratoire Hubert Curien
Université Jean Monnet
Saint Etienne, France
michel.jourlin@univ-st-etienne.fr
Abstract: The Logarithmic Image Processing (LIP) Model is a mathematical and physical framework dedicated to image processing. In this paper, a novel application of LIP operators is presented: we explain how LIP addition and subtraction can be used to simulate a variation of sensor sensitivity or a change of source intensity, depending on whether the imaging context is transmitted or reflected light. Such a property yields an interesting result for another LIP operator: the LIP additive contrast is independent of brightness variations, due for instance to a modification of the exposure time. The results are illustrated through different image processing techniques such as image enhancement and contour detection, and many other applications are possible.
Keywords: LIP model; LIP contrast; exposure time; brightness;
edge detectors
I. INTRODUCTION
The quality of image acquisition is a decisive step in the
resolution of imaging problems. Variable or difficult
acquisition conditions can lead to instability in image
processing algorithms. The LIP Model, a mathematical and
physical framework dedicated to image processing, has already
demonstrated its ability to compensate for changing acquisition conditions. LIP operators and many imaging tools have been developed since the creation of the Model. The basic LIP operators were defined in a transmitted light context. Nevertheless, the interest of using these LIP operators in a reflected light model has already been shown, in particular by Brailean [1], who established their consistency with human vision. This point opens the way to processing images acquired in reflection, in particular when we aim at interpreting them as a human eye would do.
In this paper, the basic notions of the Model are recalled and their meaning under a reflected light model is detailed. Moreover, a new property of the LIP additive contrast is shown, which makes it possible to obtain image processing algorithms invariant to acquisition conditions.
II. LIP MODEL: RECALLS AND NOTATIONS
Introduced by Jourlin et al. ([2], [3], [4]), the LIP (Logarithmic Image Processing) model first proposes a framework adapted to images acquired in transmitted light (when the observed object is placed between the source and the sensor). In this context, each grey level image may be identified with the observed object, as long as the acquisition conditions (source intensity and sensor aperture) remain stable.
A. Introduction
An image f is defined on a spatial support D, with values in the grey scale [0, M[, which may be written:

f : D → [0, M[ ⊂ ℝ

In the LIP context, 0 corresponds to the "white" extremity of the grey scale, that is, to the source intensity: no obstacle (object) is placed between the source and the sensor. The other extremity M is a limit situation where no element of the source is transmitted (black value). This value is excluded from the scale, and when working with 8-bit digitized images, the 256 grey levels correspond to the interval of integers [0, ..., 255].
The transmittance Tf(x) of an image f at x ∈ D is defined as the ratio of the outgoing flux at x to the incoming flux (intensity of the source). In a mathematical formulation, Tf(x) can be understood as the probability, for a particle of the source incident at x, to pass through the observed obstacle, i.e. to be seen by the sensor.
B. Operations
The addition of two images f and g corresponds to the superposition of the two obstacles (objects) generating respectively f and g. The resulting image will be noted f ⨹ g. Such an addition is strongly linked to the transmittance law:

T(f ⨹ g) = Tf × Tg    (1)

It means that the probability, for a particle emitted by the source, to pass through the "sum" of the obstacles f and g equals the product of the probabilities to pass through f and through g.
In this representation, an image f can be interpreted as a luminance filter and be defined according to:

f(x) = M (1 − It(x) / Ii(x))

where It(x) and Ii(x) represent respectively the transmitted and incident luminance. The grey level function f can be seen as the opacity of the observed absorbing medium, and the transmittance function is the complement of this normalized opacity. The link between the transmittance Tf(x) and the grey level f(x) is established by the following relation:

Tf(x) = 1 − f(x) / M    (2)

Replacing in (1) the transmittances by their values obtained from (2) yields:

f ⨹ g = f + g − f·g / M    (3)

From this law, it is possible to derive the LIP multiplication of an image by a positive real number a according to:

a ⨻ f = M − M (1 − f/M)^a    (4)

From these two operations, the LIP subtraction can be defined:

f ⨺ g = f ⨹ ((−1) ⨻ g) = M (f − g) / (M − g)    (5)
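For readers who wish to experiment with these laws, the three operations (3), (4) and (5) translate directly into code. The following sketch is ours (it does not come from the original paper) and assumes grey levels stored as floating-point values in [0, M[ with M = 256:

```python
import numpy as np

M = 256.0  # upper bound of the grey scale (excluded), for 8-bit images

def lip_add(f, g):
    """LIP addition (3): superposition of two absorbing obstacles."""
    return f + g - f * g / M

def lip_mult(a, f):
    """LIP multiplication (4) of an image by a positive real number a."""
    return M - M * (1.0 - f / M) ** a

def lip_sub(f, g):
    """LIP subtraction (5), defined wherever g(x) < M."""
    return M * (f - g) / (M - g)

def transmittance(f):
    """Relation (2) between grey level and transmittance."""
    return 1.0 - f / M

# Numerical check of the transmittance law (1): T(f LIP+ g) = T(f) * T(g)
f = np.array([10.0, 120.0, 200.0])
g = np.array([50.0, 50.0, 50.0])
assert np.allclose(transmittance(lip_add(f, g)), transmittance(f) * transmittance(g))
```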
C. LIP additive contrast
In the LIP context, Jourlin [5] introduced the LAC (Logarithmic Additive Contrast), noted C(x,y)(f), of a grey level function f at a pair (x, y) of points of D² according to the formula:

Min(f(x), f(y)) ⨹ C(x,y)(f) = Max(f(x), f(y))    (6)

Such a contrast represents the grey level which must be added (superposed) to the brightest point (smallest grey level) in order to obtain the darkest one (highest grey level). Being itself a grey level, this logarithmic contrast can be visualized without normalization. It is also possible to express this contrast as a LIP subtraction:

C(x,y)(f) = Max(f(x), f(y)) ⨺ Min(f(x), f(y))    (7)
In 2012, Jourlin et al. [6] demonstrated a link between the LAC and the Michelson contrast, giving a precise "physical" meaning to the latter. Metrics and image processing tools based on the LAC have been defined and make it possible to use this notion of contrast in different imaging applications (contour detection, automated thresholding and pattern recognition).
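As an illustration of formula (7), the LAC reduces to a single LIP subtraction between the two grey levels of the pair. A minimal sketch (ours), reusing M = 256 from above, also shows that the same arithmetic difference produces a much stronger LAC on a dark pair of pixels (high grey levels, since 0 is the white extremity) than on a bright pair:

```python
def lip_additive_contrast(u, v, M=256.0):
    """LAC (7): LIP subtraction of the smallest grey level from the largest.
    The result is itself a grey level in [0, M[ and needs no normalization."""
    lo, hi = min(u, v), max(u, v)
    return M * (hi - lo) / (M - lo)

# Two pairs with the same arithmetic difference of 50 grey levels:
print(lip_additive_contrast(10, 60))    # ~52.0  : bright pair (low grey levels)
print(lip_additive_contrast(200, 250))  # ~228.6 : dark pair, much stronger LAC
```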
III. PHYSICAL MEANING OF THE LIP ADDITION AND
SUBTRACTION
The LIP operators take their origin in transmitted light
imaging. Let’s analyze their physical meaning and see how
such operations can be interpreted in a reflected light model in
order to justify their use.
A. Representation in reflected light
In a transmitted light model, an absorbing medium is placed between a source and a sensor. The LIP addition consists of adding a homogeneous absorbing medium after the existing one. The sensor will receive less light and the resulting image will be darker than the original. On the contrary, the LIP subtraction consists in deleting a homogeneous part of the observed semi-transparent medium. The sensor will receive more light and the resulting image will look brighter.
Let us now interpret the LIP addition (resp. subtraction) of a constant to (resp. from) a grey level image in a reflected light model. The use of these operations on reflected images can be justified by supposing that an identical image could have been obtained in a transmitted light situation. Indeed, with each image f obtained in a reflected model can be associated a semi-transparent medium placed between the source and the sensor that would generate the same image. The LIP addition of a constant to f then amounts to adding a second absorbing medium, homogeneous and of constant thickness, which darkens the resulting image. Conversely, a LIP subtraction increases the amount of light reaching the sensor, either by removing such a homogeneous thickness or by increasing the intensity of the source.
B. Physical interpretation
Let us come back to the transmitted light model in order to study more precisely the effect of adding a second, homogeneous absorbing medium (LIP addition of a constant) between a source and a sensor.
Mayet et al. [7] showed that the LIP model is based on the physical laws of exponential absorption. This attenuation results in the following equation:

I(x,y) = I0(x,y) exp(−μ(x,y) z(x,y))

where I0(x,y) represents the incoming intensity function of the source, z(x,y) denotes the thickness function of the absorbing medium, μ(x,y) is the medium absorption function (supposed constant along the crossed thickness, i.e. μ(x,y) does not depend on z(x,y)) and I(x,y) is the transmitted intensity that reaches the sensor.
Let us now consider μ1(x,y) and μ2(x,y) two media absorption functions and z1(x,y), z2(x,y) two media thickness functions, transmitting intensities I1(x,y) and I2(x,y). We have:

I1(x,y) = I0(x,y) exp(−μ1(x,y) z1(x,y))
I2(x,y) = I0(x,y) exp(−μ2(x,y) z2(x,y))

By superposing these two absorbing media for an incoming intensity I0, the transmitted intensity function is given by:

I3(x,y) = I0(x,y) exp(−μ1(x,y) z1(x,y)) exp(−μ2(x,y) z2(x,y))    (8)

If we now consider the second absorbing medium as being homogeneous at each point (x,y), we have (with k a constant):

exp(−μ2(x,y) z2(x,y)) = k

Then equation (8) becomes:

I3(x,y) = k · I0(x,y) exp(−μ1(x,y) z1(x,y))
I3(x,y) = I0'(x,y) exp(−μ1(x,y) z1(x,y))   with I0' = k · I0
The addition of a second homogeneous absorbing medium between the source and the sensor thus amounts to varying the incoming intensity I0 of the source. It may also be interpreted as the variation of a sensor parameter, for instance the exposure time, which adjusts the duration during which the sensor is sensitive to the light.
In a reflected light configuration, simulating a variation of the source intensity by a medium placed just in front of the sensor does not really make sense. In this case, the source does not correspond to a classical point source: it corresponds to the observed scene. The luminance emitted by the scene does not only depend on the light source but on many parameters: reflection, transmission or absorption of the light linked to the material, the geometry, etc. of the objects composing the scene. On the contrary, simulating a variation of exposure time remains valid because this parameter is attached only to the sensor, not to the observed scene.
To conclude, LIP addition and subtraction of a constant may be used to simulate variations of source intensity or of sensor sensitivity (exposure time for instance) under transmitted light conditions. Moreover, they can be used to modify the sensitivity of the sensor under reflected light conditions.
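To make this conclusion explicit in terms of the LIP laws of Section II, superposing a homogeneous medium of transmittance k amounts to LIP-adding the constant c = M(1 − k): the transmittance law (1) then gives T(f ⨹ c) = k · Tf, i.e. exactly the attenuated source I0' = k · I0 obtained above. A small numerical check (ours):

```python
import numpy as np

M = 256.0

def lip_add(f, g):
    return f + g - f * g / M

def transmittance(f):
    return 1.0 - f / M

f = np.array([10.0, 120.0, 200.0])  # grey levels of the observed medium
k = 0.5                             # transmittance of the added homogeneous medium
c = M * (1.0 - k)                   # equivalent LIP constant

I0 = 1.0  # arbitrary incoming intensity
# flux reaching the sensor after superposition == same scene under the source k * I0
assert np.allclose(I0 * transmittance(lip_add(f, c)), (k * I0) * transmittance(f))
```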
C. Applications
An interesting property of LIP addition and subtraction of a constant is their linearity. From this remark and from the physical properties of these operations, simple and fast algorithms can be designed to obtain realistic image enhancement.
For instance, it is possible to compute the thickness of the absorbing medium that must be added or subtracted in order to perform an accurate variation of luminance. The aim is to reduce (or increase) the transmittance of an absorbing medium, i.e. its probability p0 of transmitting a luminous flux, by a ratio r (for instance r = 0.25 to divide the probability by four). A medium with a transmission probability r must then be added (or subtracted). Indeed, the probability for a particle to go through the "sum of the obstacles" f and g corresponds to the product of the probabilities to go through f and g separately. At each point x of the sum of the obstacles, we have the probability psum(x) = r · p0(x) (psub(x) = p0(x) / r for a subtraction).
Figure 1 presents a simulation of exposure time variation using a LIP subtraction. In this example, an image acquired at 10 ms is corrected by a LIP subtraction of a constant computed in order to simulate an image acquired at 100 ms.

Figure 1. LIP subtraction simulating exposure time variation. Top left: image acquired at 10 ms; top right: image acquired at 100 ms; bottom: LIP subtraction of a constant applied to the image acquired at 10 ms in order to simulate an image acquired at 100 ms.
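Concretely, the constant used for Figure 1 can be deduced from the ratio of exposure times: multiplying the exposure time by r multiplies the collected flux, hence the transmittance, by r, which is obtained by LIP-subtracting c = M(1 − 1/r). The sketch below is our reading of this computation (function and variable names are ours); it assumes a sensor response linear in exposure time, and grey levels pushed below 0 are clipped, which corresponds to saturated pixels:

```python
import numpy as np

M = 256.0

def lip_sub(f, g):
    """LIP subtraction (5)."""
    return M * (f - g) / (M - g)

def simulate_exposure(f, t_acquired, t_target):
    """Simulate the image that would have been acquired at t_target (> t_acquired).
    The flux ratio is r = t_target / t_acquired; the transmittance must be
    multiplied by r, i.e. T(f LIP- c) = T(f) / (1 - c/M) = r * T(f),
    hence the constant c = M * (1 - 1/r)."""
    r = t_target / t_acquired
    c = M * (1.0 - 1.0 / r)
    out = lip_sub(f.astype(np.float64), c)
    return np.clip(out, 0.0, M - 1.0)  # negative values = over-exposed (saturated) pixels

# e.g. simulate a 100 ms acquisition from an image acquired at 10 ms (as in Figure 1):
# bright = simulate_exposure(dark_image, t_acquired=10.0, t_target=100.0)
```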
Another application concerns a LIP tone mapping algorithm that consists in applying these LIP operations of addition and subtraction of a constant locally. Simulating local exposure variations makes it possible to correct the dark and bright parts of an image differently.
An example of this local correction is presented (see figure 2); it avoids the saturation effects generated by a "global" LIP subtraction. Similar saturation effects would appear on an image of the scene acquired with a higher exposure time.
Figure 2. Local correction by LIP operations. Top left: initial image; top right: correction by LIP subtraction; bottom: local correction by LIP subtraction.
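The paper does not detail the local scheme, so the sketch below is only one plausible reading (entirely our assumption, with hypothetical parameter names): the global constant is replaced by a per-pixel constant derived from a low-pass estimate of the local grey level, so that dark neighbourhoods receive a stronger LIP subtraction than bright ones and already bright areas do not saturate:

```python
import numpy as np
from scipy.ndimage import uniform_filter

M = 256.0

def lip_sub(f, g):
    return M * (f - g) / (M - g)

def local_lip_correction(f, window=31, strength=0.8):
    """Hypothetical local tone mapping: the constant subtracted at each pixel grows
    with the local mean grey level (dark areas get a larger correction), so bright
    areas are left almost untouched and do not saturate."""
    f = f.astype(np.float64)
    local_mean = uniform_filter(f, size=window)      # low-pass brightness estimate
    c = np.minimum(strength * local_mean, M - 1.0)   # per-pixel LIP constant (ours)
    return np.clip(lip_sub(f, c), 0.0, M - 1.0)
```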
IV. INDEPENDENCE OF THE ADDITIVE CONTRAST TO LIGHT
CONDITIONS
Starting from the ability of LIP addition and subtraction to simulate luminance changes, let us focus on another LIP operator, the LIP additive contrast (based on a LIP subtraction), and see a direct consequence of this novel property.
Applying a LIP addition (or subtraction) of a constant k to the image does not change the value of the LIP additive contrast:

C(x,y)(f) = Max(f(x), f(y)) ⨺ Min(f(x), f(y))
          = (Max(f(x), f(y)) ⨹ k) ⨺ (Min(f(x), f(y)) ⨹ k)
          = C(x,y)(f ⨹ k)
We can therefore establish that the LIP additive contrast is invariant to source intensity variations and to sensor sensitivity modifications in a transmitted light model, and invariant to sensor sensitivity variations in a reflected light model.
An application of this property is presented (see figure 3): the LIP additive contrast is used in a contour detection algorithm. A mean contrast AC(i) is computed at each pixel i of an image with each of its 8 neighbours Nk(i) [5]:

AC(i) = (1/8) Σ_{k=1..8} C(i, Nk(i))
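A compact implementation of this detector, together with a numerical check of the invariance property established above, is sketched below (the sketch is ours; 8-connected neighbours, border pixels left at 0 for brevity):

```python
import numpy as np

M = 256.0

def lip_add(f, c):
    return f + c - f * c / M

def lac(u, v):
    """LIP additive contrast between two arrays of grey levels."""
    lo, hi = np.minimum(u, v), np.maximum(u, v)
    return M * (hi - lo) / (M - lo)

def mean_lac_contours(f):
    """Mean LAC of each pixel with its 8 neighbours (border pixels set to 0)."""
    f = f.astype(np.float64)
    acc = np.zeros_like(f)
    for di, dj in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
        acc += lac(f, np.roll(np.roll(f, di, axis=0), dj, axis=1))
    acc /= 8.0
    acc[0, :] = acc[-1, :] = acc[:, 0] = acc[:, -1] = 0.0
    return acc

# Invariance check: a LIP addition of a constant (simulated exposure change)
# leaves the contour map unchanged.
img = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
assert np.allclose(mean_lac_contours(img), mean_lac_contours(lip_add(img, 100.0)))
```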
In this example, this contour detection is applied to several images of the same scene acquired at different exposure times. As expected, the same contours are detected on each image (the remaining differences are due to noise in the darkest parts of the images). By comparison, "usual" contour detectors based on classical grey-level differences, such as the Sobel [8], Prewitt [9] or Canny [10] operators, detect information that depends on the exposure time used.
Another interest of a contour detector based on the LIP additive contrast is its robustness to local brightness changes inside an image. An example of texture detection by this LIP contrast is presented (see figure 4). The original image shows a metallic bowl placed on a rocky ground; the bowl casts a shadow on the ground. The LIP additive contrast behaves in the same way in the dark and bright parts of the ground: the "rocky texture" is detected in the sunny ground as well as in the bowl's shadow.
By comparison, a classical grey-level difference detects information that depends on the local brightness: in the example, the ground texture is not detected in the shadow area.
It is important to notice that in both examples (see figures 3 and 4), for a rigorous comparison, the resulting images obtained with classical grey-level-difference-based methods (here, the Sobel operator) have been normalized in order to fit the grey-level scale, in our case [0, 255]. In fact, the use of grey-level differences produces mostly dark levels. On the contrary, the LIP additive contrast is defined as a grey level and does not need any normalization to be visualized.

Figure 3. LAC robustness to exposure time variation. Top: two original images acquired at 100 ms and 25 ms; center: images resulting from contour detection using the LIP additive contrast; bottom: images resulting from the Sobel operator.
V. CONCLUSION
In this paper, a novel property of LIP addition and subtraction of a constant is introduced. These operations can be used to simulate exposure time variation under reflected light conditions; under transmitted light conditions, they can be used to simulate exposure time or source intensity variation. This property gives a physical sense to LIP image enhancement techniques based on these operators. From this result, another interesting property is presented concerning the LIP additive contrast: this contrast notion is invariant under such brightness variations.
Figure 4. Robustness of texture detection using the LIP additive contrast to lighting conditions. Left: original image; center: contour detection using the LIP additive contrast; right: contour detection by the Sobel operator.
This last property is presented through two examples of contour and texture detection, but many other applications of this contrast are possible, benefiting from its invariance to brightness variations. Wherever an algorithm computes a difference between two grey levels, this LIP contrast can be used instead. Automated thresholding techniques, pattern recognition or image filtering (e.g. bilateral filtering [11]) constitute possible applications of this contrast.
REFERENCES
[1] J.C. Brailean, "Evaluating the EM algorithm for image processing using a human visual fidelity criterion", International Conference on Acoustics, Speech, and Signal Processing, pp. 2957-2960, 1991.
[2] M. Jourlin and J.C. Pinoli, "A model for logarithmic image processing", Journal of Microscopy, 149, pp. 21-35, 1988.
[3] M. Jourlin and J.C. Pinoli, "Image dynamic range enhancement and stabilization in the context of the logarithmic image processing model", Signal Processing, 41(2), pp. 225-237, 1995.
[4] M. Jourlin and J.C. Pinoli, "The mathematical and physical framework for the representation and processing of transmitted images", Advances in Imaging and Electron Physics, 115, pp. 129-196, 2001.
[5] M. Jourlin, J.C. Pinoli and R. Zeboudj, "Contrast definition and contour detection for logarithmic images", Journal of Microscopy, 156, pp. 33-40, 1989.
[6] M. Jourlin, M. Carré, J. Breugnot and M. Bouabdellah, "Logarithmic Image Processing: Additive Contrast, Multiplicative Contrast, and Associated Metrics", Advances in Imaging and Electron Physics, 171, pp. 358-404, 2012.
[7] F. Mayet, J.-C. Pinoli and M. Jourlin, "Physical Justifications and Applications of the LIP Model for the Processing of Transmitted Light Images", Traitement du Signal, 13(3), 1996.
[8] W.K. Pratt, Digital Image Processing, Wiley-Interscience, 1978.
[9] J.M.S. Prewitt, "Object Enhancement and Extraction", Picture Processing and Psychopictorics, Academic Press, 1970.
[10] J. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.
[11] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images", IEEE International Conference on Computer Vision, pp. 836-846, 1998.