LIP operators: simulating exposure variations to
perform algorithms independent of lighting conditions
Maxime Carre*
NT2I
Saint Etienne, France
m.carre@nt2i.fr
Michel Jourlin
Laboratoire Hubert Curien
Université Jean Monnet
Saint Etienne, France
michel.jourlin@univ-st-etienne.fr
Abstract— The Logarithmic Image Processing (LIP) model is a mathematical and physical framework dedicated to image processing. In this paper, a novel application of the LIP operators is presented: we explain how LIP addition and subtraction can be used to simulate a variation of sensor sensitivity or a change of source intensity, depending on whether the imaging context is transmitted or reflected light. This property yields an interesting result for another LIP operator: the LIP additive contrast is independent of brightness variations, due for instance to a modification of the exposure time. The results are illustrated through different image processing techniques such as image enhancement and contour detection, and many other applications are possible.
Keywords: LIP model; LIP contrast; exposure time; brightness;
edge detectors
I. INTRODUCTION
The quality of image acquisition is a decisive step in the
resolution of imaging problems. Variable or difficult
acquisition conditions can lead to instability in image
processing algorithms. The LIP Model, a mathematical and
physical framework dedicated to image processing, has already
demonstrated its ability to compensate for changing acquisition
conditions. LIP operators and many imaging tools have been
developed since the creation of the Model. The basic LIP
operators have been defined in a transmitted light context.
Nevertheless, the interest in using these LIP operators in a
reflected light model has already been shown, in particular by
Brailean [1] who established their consistency with human
vision. This point opens the way to process images acquired in
reflection, in particular when we aim at interpreting them as a
human eye would do.
In this paper, the basic notions of the model are recalled
and their meaning under a reflected light model is detailed.
Moreover, a new property of the LIP contrast is shown, which
makes it possible to obtain image processing algorithms that are
invariant to acquisition conditions.
II. LIP MODEL: RECALLS AND NOTATIONS
Introduced by Jourlin et al. ([2], [3], [4]), the LIP
(Logarithmic Image Processing) model proposes first a
framework adapted to images acquired in transmitted light
(when the observed object is placed between the source and the
sensor). In this context, each grey level image may be
identified with the observed object, as long as the acquisition
conditions (source intensity and sensor aperture) remain stable.
A. Introduction
An image f is defined on a spatial support D, with values in
the grey scale [0, M[, which may be written:
f: D ⊂ R² → [0, M[ ⊂ R
In the LIP context, 0 corresponds to the "white" extremity
of the grey scale, i.e. to the source intensity. This
means that no obstacle (object) is placed between the source
and the sensor. The other extremity M is a limit situation where
no element of the source is transmitted (black value). This
value is excluded from the scale, and when working with 8-bit
digitized images, the 256 grey levels correspond to the interval
of integers [0, ..., 255].
The transmittance Tf(x) of an image f at x ∈ D is defined as
the ratio of the outgoing flux at x to the incoming flux
(intensity of the source). In a mathematical formulation, Tf(x)
can be understood as the probability, for a particle of the source
incident at x, to pass through the observed obstacle, i.e. to be
seen by the sensor.
B. Operations
The addition of two images f and g corresponds to the
superposition of two obstacles (objects) generating respectively
f and g. The resulting image will be noted f ⨹ g. Such an
addition is strongly linked to the transmittance law:
Tf ⨹ g = Tf x Tg (1)
It means that the probability, for a particle emitted by the
source, to pass through the “sum” of the obstacles f and g,
equals the product of the probabilities to pass through f and g.
In this representation, an image f can be interpreted as a
luminance filter and be defined according to:
f(x) = M ( 1 – It (x) / Ii (x) )
where It (x) and Ii (x) represent respectively the transmitted
and incident luminance. The grey level function f can be seen
as the opacity of the observed absorbing medium; the
transmittance function is the complement of this (normalized) opacity.
The link between the transmittance Tf(x) and the grey level f(x)
is established by the following relation:
Tf(x) = 1 – f(x) / M (2)
Replacing in (1) the transmittances by their values obtained
in (2) yields:
f ⨹ g = f + g – f.g / M (3)
From this law, it is possible to derive the LIP multiplication
of an image by a positive real number a according to:
a ⨻ f = M – M( 1 – f/M )a (4)
From these two operations, the LIP subtraction can be
defined:
f ⨺ g = f ⨹ ( -1 ⨻ g ) = M . ( f – g ) / ( M – g ) (5)
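As a concrete illustration, the three operations (3), (4) and (5) can be implemented directly on grey levels. The sketch below is a minimal NumPy version (not the authors' implementation), assuming the usual 8-bit bound M = 256; it also checks numerically that the three laws are mutually consistent:

```python
import numpy as np

M = 256.0  # upper bound of the grey scale [0, M[

def lip_add(f, g):
    """LIP addition (3): f + g - f*g/M."""
    return f + g - f * g / M

def lip_mul(a, f):
    """LIP scalar multiplication (4): M - M*(1 - f/M)**a."""
    return M - M * (1.0 - f / M) ** a

def lip_sub(f, g):
    """LIP subtraction (5): M*(f - g)/(M - g)."""
    return M * (f - g) / (M - g)

# Quick numerical check of the consistency of the three laws:
f = np.array([0.0, 50.0, 120.0, 200.0])
g = np.array([0.0, 30.0, 60.0, 100.0])

s = lip_add(f, g)
# Transmittance law (1): T(f + g) = T(f) * T(g), with T = 1 - grey/M
assert np.allclose(1 - s / M, (1 - f / M) * (1 - g / M))
# Subtraction undoes addition
assert np.allclose(lip_sub(s, g), f)
# Multiplying by 2 equals adding the image to itself
assert np.allclose(lip_mul(2, f), lip_add(f, f))
```

Note that all three functions keep their results inside [0, M[ as long as both operands do, which is why no clipping is needed.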
C. LIP additive contrast
In the LIP context, Jourlin [5] introduced the LAC
(Logarithmic Additive Contrast), noted C⨹(x,y)(f), of a grey
level function f at a pair (x,y) of points of D², according
to the formula:
Min( f(x), f(y) ) ⨹ C⨹(x,y) (f) = Max( f(x), f(y) ) (6)
Such a contrast represents the grey level which must be
added (superposed) to the brightest point (smallest grey level)
in order to obtain the darkest one (highest grey level). By
definition, this logarithmic contrast can be visualized without
normalization. It is also possible to express this contrast as a
LIP subtraction:
C⨹(x,y) (f) = Max( f(x), f(y) ) ⨺ Min( f(x), f(y) ) (7)
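In code, the LAC of formulas (6) and (7) reduces to a single LIP subtraction per pair of grey levels. The following sketch (again assuming M = 256) returns the contrast directly as a visualizable grey level and checks that definition (6) is recovered:

```python
import numpy as np

M = 256.0

def lip_sub(f, g):
    # LIP subtraction (5)
    return M * (f - g) / (M - g)

def lac(fx, fy):
    """LIP Additive Contrast (7): LIP difference of Max and Min."""
    return lip_sub(np.maximum(fx, fy), np.minimum(fx, fy))

# Definition (6) is recovered: Min LIP-added to the contrast gives Max
fx, fy = 40.0, 160.0
c = lac(fx, fy)
recovered = fx + c - fx * c / M   # LIP addition of Min(=40) and the contrast
assert abs(recovered - 160.0) < 1e-9
```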
In 2012, Jourlin [6] demonstrated a link between the LAC
and the Michelson contrast giving a precise “physical”
meaning to the Michelson contrast. Metrics and image
processing tools based on the LAC have been defined and
make it possible to use this notion of contrast in various imaging
applications (contour detection, automated thresholding and
pattern recognition).
III. PHYSICAL MEANING OF THE LIP ADDITION AND
SUBTRACTION
The LIP operators take their origin in transmitted light
imaging. Let’s analyze their physical meaning and see how
such operations can be interpreted in a reflected light model in
order to justify their use.
A. Representation in reflected light
In a transmitted light model, an absorbing medium is placed
between a source and a sensor. The LIP addition consists in
adding a homogeneous absorbing medium behind the existing one.
The sensor then receives less light and the resulting image is
darker than the original. Conversely, the LIP subtraction
consists in removing a homogeneous part of the observed semi-
transparent medium. The sensor receives more light and the
resulting image looks brighter.
Let us now interpret the LIP addition (resp. subtraction) of a
constant to (resp. from) a grey level image in a reflected light
model. The use of these operations on reflected images can be
justified by supposing that it is possible to obtain, in a
transmitted light situation, an image identical to the one obtained
in a reflected one. Indeed, with each image f obtained in a
reflected light model can be associated a semi-transparent medium,
placed between the source and the sensor, that would generate
the same image. A LIP addition of a constant to f then amounts to
adding a second absorbing medium, homogeneous and of constant
thickness, which darkens the resulting image. Conversely,
a LIP subtraction increases the amount of light reaching the
sensor, by removing a homogeneous thickness or by increasing
the intensity of the source.
B. Physical interpretation
Let us come back to a transmitted light model in order to
study more precisely the effect of adding a second, homogeneous
absorbing medium (LIP addition of a constant) between a
source and a sensor.
Mayet [7] showed that the LIP model is based on the
physical law of exponential absorption. This attenuation is
expressed by the following equation:
I(x,y) = I0(x,y)exp(-μ(x,y)z(x,y))
where I0(x,y) represents the incoming intensity of the
source, z(x,y) denotes the thickness function of the
absorbing medium, μ(x,y) is the medium absorption function
(assumed constant along the crossed thickness, i.e.
μ(x,y) does not depend on z(x,y)) and I(x,y) is the transmitted
intensity reaching the sensor.
Let us now consider two media, with absorption functions
μ1(x,y), μ2(x,y) and thickness functions z1(x,y), z2(x,y),
transmitting intensities I1(x,y) and I2(x,y). We have:
I1(x,y) = I0(x,y)exp(-μ1(x,y)z1(x,y))
I2(x,y) = I0(x,y)exp(-μ2(x,y)z2(x,y))
By superposing these two absorbing media for an incoming
intensity I0, the transmitted intensity function is given by:
I3(x,y) = I0(x,y).exp(-μ1(x,y)z1(x,y)).exp(-μ2(x,y)z2(x,y)) (8)
If the second absorbing medium is now homogeneous, we have
at each point (x,y), with k a constant:
exp(-μ2(x,y)z2(x,y)) = k
Equation (8) then becomes:
I3(x,y) = k.I0(x,y).exp(-μ1(x,y)z1(x,y))
I3(x,y) = I0'(x,y).exp(-μ1(x,y)z1(x,y)) with I0' = k.I0
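This equivalence is easy to check numerically: superposing a homogeneous second medium multiplies the transmitted intensity by the same constant k everywhere, exactly as if the source intensity had been scaled by k. A small sketch with purely illustrative values:

```python
import numpy as np

I0 = 1000.0                      # incoming source intensity (arbitrary units)
mu1 = np.array([0.5, 1.0, 2.0])  # absorption of the observed medium
z1 = np.array([1.0, 0.7, 0.3])   # its thickness at three sample points

# Homogeneous second medium: mu2 * z2 is constant everywhere
mu2, z2 = 0.8, 0.5
k = np.exp(-mu2 * z2)            # its constant transmittance

# Transmitted intensity with both media superposed
I3 = I0 * np.exp(-mu1 * z1) * np.exp(-mu2 * z2)

# Same result with a dimmed source I0' = k * I0 and no second medium
I3_dimmed = (k * I0) * np.exp(-mu1 * z1)
assert np.allclose(I3, I3_dimmed)
```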
Adding a second homogeneous absorbing medium between the
source and the sensor thus amounts to varying the incoming
intensity I0 of the source. It may also be interpreted as the
variation of a sensor parameter, for instance the exposure
time, which sets the duration during which the sensor is
sensitive to light.
In a reflected light configuration, simulating a variation of
the source intensity by a medium placed just in front of the
sensor does not really make sense. In this case, the source does
not correspond to a classical point source: it corresponds to
the observed scene. The luminance emitted by the scene depends
not only on the light source, but on many parameters:
reflection, transmission or absorption of light linked to the
material, the geometry, etc. of the objects composing the
scene. On the contrary, simulating a variation of exposure time
remains valid because this parameter is attached only to the
sensor, not to the observed scene.
To conclude, LIP addition and subtraction of a constant
may be used to simulate variations of source intensity or sensor
sensitivity (exposure time, for instance) under transmitted light
conditions. Moreover, they can be used to simulate a modification
of the sensor sensitivity under reflected light conditions.
C. Applications
An interesting property of the LIP addition and subtraction of a
constant lies in their linearity. From this remark and from the
physical properties of these operations, simple and fast
algorithms exploiting them make it possible to obtain
realistic image enhancement.
For instance, it is possible to compute the thickness of the
absorbing medium that must be added or subtracted in order to
perform an accurate variation of luminance. The aim is to
reduce (or increase) the transmittance of an absorbing medium,
i.e. its probability p0 of transmitting a luminous flux, by a ratio
r (for instance r = 0.25 to divide the probability by four). A
medium with transmission probability r must then be added (or
subtracted). Indeed, the probability for a particle to go through
the "sum of the obstacles" f and g corresponds to the product of
the probabilities of going through f and g separately. At each
point x of the sum of the obstacles, we thus have psum(x) =
r.p0(x) (psub(x) = p0(x)/r with a subtraction). Figure 1 presents
a simulation of exposure time variation using LIP subtraction:
an image acquired at 10 ms is corrected by a LIP subtraction
with a constant computed so as to simulate an image acquired
at 100 ms.
Figure 1. LIP subtraction simulating exposure time variation. Top left:
image acquired at 10 ms; top right: image acquired at 100 ms; bottom:
LIP subtraction by a constant applied to the image acquired at 10 ms in
order to simulate an image acquired at 100 ms.
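The constant involved here follows from the transmittance law: LIP-subtracting a constant C divides the grey-level transmittance by T(C) = 1 - C/M, so simulating an exposure increase by a factor r amounts to taking C = M(1 - 1/r). The sketch below is our own derivation under that assumption (the 10 ms / 100 ms values match the example above; the grey levels are illustrative):

```python
import numpy as np

M = 256.0

def lip_sub(f, g):
    # LIP subtraction (5)
    return M * (f - g) / (M - g)

def exposure_constant(ratio):
    """Constant C such that LIP-subtracting C multiplies the
    transmitted intensity by `ratio` (e.g. 100 ms / 10 ms = 10)."""
    return M * (1.0 - 1.0 / ratio)

# Simulate re-acquiring a 10 ms image at 100 ms
ratio = 100.0 / 10.0
C = exposure_constant(ratio)          # here C = 230.4

f = np.array([250.0, 240.0, 235.0])   # dark grey levels (10 ms image)
brightened = lip_sub(f, C)
# Check on the transmittances: T(f LIP-minus C) = ratio * T(f)
assert np.allclose(1 - brightened / M, ratio * (1 - f / M))
```

Grey levels below C = M(1 - 1/r) would leave the scale (negative results), i.e. saturate, which is precisely what the local correction discussed next avoids.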
Another application concerns a LIP tone mapping algorithm,
which consists in applying these LIP additions and subtractions
of a constant locally. Simulating local exposure variations
makes it possible to correct the dark and bright parts of an
image differently.
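A minimal version of such a local correction can be sketched as follows. This is our own illustrative scheme, not the authors' algorithm: the LIP constant is chosen pixel-wise from a blurred estimate of the local brightness (a simple box filter here), so dark regions are brightened more than bright ones:

```python
import numpy as np

M = 256.0

def lip_sub(f, g):
    # LIP subtraction (5); g may vary per pixel
    return M * (f - g) / (M - g)

def local_lip_correction(f, target=128.0, window=15):
    """Brighten each pixel by a LIP constant derived from the
    local mean grey level (box filter as a locality estimate)."""
    pad = window // 2
    padded = np.pad(f, pad, mode="edge")
    h, w = f.shape
    local_mean = np.zeros_like(f)
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    # Subtract more where the neighbourhood is dark (mean above target)
    c = np.clip(local_mean - target, 0.0, M - 1.0)
    return lip_sub(f, c)

# Toy image: bright half (low grey levels) and dark half (high grey levels)
img = np.concatenate([np.full((8, 8), 60.0), np.full((8, 8), 220.0)], axis=1)
out = local_lip_correction(img)
assert np.allclose(out[:, :4], 60.0)   # bright half left untouched
assert out[:, -1].mean() < 220.0       # dark half brightened
```

A real implementation would of course use a smoother locality estimate (e.g. Gaussian filtering) and handle saturation explicitly; the point is only that the correction amount varies with local brightness.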
An example of this local correction is presented (see figure
2); it avoids the saturation effects generated by a
"global" LIP subtraction. Similar effects would appear on an
image of the scene acquired with a higher exposure time.
Figure 2. Local correction by LIP operations. Top left: initial image; top
right: correction by LIP subtraction; bottom: local correction by LIP
subtraction.
IV. INDEPENDENCE OF THE ADDITIVE CONTRAST TO LIGHT
CONDITIONS
Starting from the ability of LIP addition and subtraction to
simulate luminance changes, let us focus on another LIP
operator, the LIP additive contrast (based on a LIP subtraction),
and examine a direct consequence of this novel property.
LIP-adding (or subtracting) a constant to the image does not
change the value of this contrast:
C⨹(x,y) (f) = Max( f(x), f(y) ) ⨺ Min( f(x), f(y) )
= ( Max( f(x), f(y) ) ⨹ k ) ⨺ ( Min( f(x), f(y) ) ⨹ k )
= C⨹(x,y) (f ⨹ k)
We can therefore establish that the LIP additive contrast is
invariant to source variations or sensor sensitivity
modifications in a transmitted light model, and to sensor
sensitivity variations in a reflected light model.
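This invariance is straightforward to verify numerically: LIP-adding any constant k to both grey levels leaves their contrast unchanged. A short check under the usual assumption M = 256:

```python
import numpy as np

M = 256.0

def lip_add(f, g):
    return f + g - f * g / M          # LIP addition (3)

def lip_sub(f, g):
    return M * (f - g) / (M - g)      # LIP subtraction (5)

def lac(fx, fy):
    return lip_sub(max(fx, fy), min(fx, fy))  # LIP additive contrast (7)

fx, fy = 40.0, 160.0
reference = lac(fx, fy)
# Simulate several exposure changes by LIP-adding constants k
for k in (10.0, 64.0, 128.0, 200.0):
    darkened = lac(lip_add(fx, k), lip_add(fy, k))
    assert abs(darkened - reference) < 1e-9
```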
An application of this property is presented (see figure 3):
the LIP additive contrast is used in a contour detection
algorithm. A mean contrast AC⨹(i) is computed at each pixel i
of an image with each of its eight neighbors Nk(i) [5]:
AC⨹(i) = (1/8) ⨻ ∑⨹k=1..8 C⨹( i, Nk(i) )
In this example, the contour detection is applied to several
images of the same scene acquired at different exposure times.
As expected, the same contours are detected in each image
(the remaining differences are due to noise in the darkest
parts of the images). By comparison, "usual" contour detectors
based on classical grey-level differences, such as the Sobel [8],
Prewitt [9] or Canny [10] operators, detect information that
depends on the exposure time used.
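The mean contrast AC⨹(i) above can be implemented compactly: by the transmittance law, the LIP sum of the eight contrasts multiplies their transmittances, and the (1/8) LIP multiplication takes the eighth root, so T(AC) is the geometric mean of the contrast transmittances. A sketch of this detector (our own NumPy rendering, M = 256 assumed, edge padding at the borders):

```python
import numpy as np

M = 256.0

def lip_sub(f, g):
    return M * (f - g) / (M - g)      # LIP subtraction (5)

def lac(fx, fy):
    return lip_sub(np.maximum(fx, fy), np.minimum(fx, fy))

def mean_lac_edges(img):
    """Mean LIP additive contrast of each pixel with its 8 neighbours,
    computed via transmittances: T(AC) = (prod_k T(C_k)) ** (1/8)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    t_prod = np.ones((h, w))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbour = padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            c = lac(img, neighbour)
            t_prod *= 1.0 - c / M               # LIP sum of the contrasts
    return M - M * t_prod ** (1.0 / 8.0)        # (1/8) LIP multiplication

# A vertical step edge: contrast appears only at the transition columns
img = np.concatenate([np.full((5, 5), 50.0), np.full((5, 5), 180.0)], axis=1)
edges = mean_lac_edges(img)
assert np.allclose(edges[:, 0], 0.0)             # flat region: zero contrast
assert edges[2, 4] > 0.0 and edges[2, 5] > 0.0   # step detected
```

The result is itself a grey level in [0, M[, so, as noted below for figure 3, it can be displayed without any normalization.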
Another advantage of a contour detector based on the LIP
additive contrast is its robustness to local brightness changes
inside an image. An example of texture detection by this LIP
contrast is presented (see figure 4). The original image shows
a metallic bowl placed on rocky ground; the bowl casts a
shadow on the ground. The LIP additive contrast behaves the
same way in the dark and bright parts of the ground: the rocky
texture is detected in the sunny ground as well as in the bowl's
shadow. By comparison, a classical grey level difference detects
information that depends on the local brightness: in this
example, the ground texture is not detected in the shadow area.
It is important to note that in both examples (see figures 3
and 4), for a rigorous comparison, the resulting images
obtained with methods based on classical grey-level differences
(here, the Sobel operator) have been normalized in order to fit
into a grey-level scale, in our case [0, 255]. Indeed, the use
of grey-level differences mostly produces dark levels. On the
contrary, the LIP additive contrast is defined as a grey level and
does not need any normalization to be visualized.
Figure 3. LAC robustness to exposure time variation. Top: two original
images acquired at 100 ms and 25 ms; center: images resulting from
contour detection using the LIP additive contrast; bottom: images resulting
from the Sobel operator.
V. CONCLUSION
In this paper, a novel property of the LIP addition and
subtraction of a constant is introduced. These operations can
be used to simulate exposure time variations under reflected
light conditions and, under transmitted light conditions,
exposure time or source intensity variations. This property
gives a physical meaning to LIP image enhancement techniques
based on these operators. From this new result, another
interesting property is presented concerning the LIP additive
contrast: this contrast notion is invariant under such brightness
variations.
Figure 4. Robustness of texture detection using the LIP additive contrast
to lighting conditions. Left: original image; center: contour detection
using the LIP additive contrast; right: contour detection by the Sobel operator.
This last property is illustrated through two examples of
contour and texture detection, but many other applications of
this contrast are possible, benefiting from its invariance to
brightness variations. Whenever an algorithm computes a
difference between two grey levels, this LIP contrast can be
used instead. Automated thresholding, pattern recognition
and image filtering (e.g. bilateral filtering [11]) constitute
possible applications of this contrast.
REFERENCES
[1] J.C. Brailean, "Evaluating the EM algorithm for image processing using
a human visual fidelity criterion", International Conference on Acoustics,
Speech, and Signal Processing, pp. 2957-2960, 1991.
[2] M. Jourlin and J.C. Pinoli, “A model for logarithmic image processing”,
Journal of Microscopy, 149, pp. 21-35, 1988.
[3] M. Jourlin and J.C. Pinoli, “Image dynamic range enhancement and
stabilization in the context of the logarithmic image processing model”,
Signal Processing, 41(2), pp. 225-237, 1995.
[4] M. Jourlin and J.C. Pinoli, “The mathematical and physical framework
for the representation and processing of transmitted images”, Advances
in Imaging and Electron Physics, 115, pp. 129-196, 2001.
[5] M. Jourlin, J.C. Pinoli and R. Zeboudj, "Contrast definition and contour
detection for logarithmic images", Journal of Microscopy, 156, pp. 33-40,
1989.
[6] M. Jourlin, M. Carré, J. Breugnot and M. Bouabdellah, "Logarithmic
Image Processing: Additive Contrast, Multiplicative Contrast, and
Associated Metrics", Advances in Imaging and Electron Physics, 171,
pp. 358-404, 2012.
[7] F. Mayet, J.-C. Pinoli, M. Jourlin, “Physical Justifications and
Applications of the LIP Model for the Processing of Transmitted Light
Images”, Traitement du Signal, vol. 13 (3), 1996.
[8] W.K. Pratt, “Digital Image Processing”, Wiley-Interscience Publication,
1978.
[9] J.M.S. Prewitt “Object Enhancement and Extraction”, Picture processing
and Psychopictorics, Academic Press, 1970.
[10] J. Canny, "A computational approach to edge detection", IEEE Trans.
Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.
[11] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color
images", IEEE Int. Conf. on Computer Vision, pp. 836-846, 1998.