All-sky imaging: a simple, versatile system
for atmospheric research
Axel Kreuter,* Matthias Zangerl, Michael Schwarzmann, and Mario Blumthaler
Division for Biomedical Physics, Department of Physiology and Medical Physics,
Innsbruck Medical University, Müllerstrasse 44, 6020 Innsbruck, Austria
*Corresponding author: axel.kreuter@imed.ac.at
Received 17 November 2008; revised 16 January 2009; accepted 17 January 2009;
posted 21 January 2009 (Doc. ID 104048); published 13 February 2009
A simple and inexpensive fully automated all-sky imaging system based on a commercial digital camera
with a fish-eye lens and a rotating polarizer is presented. The system is characterized and two examples
of applications in atmospheric physics are given: polarization maps and cloud detection. All-sky polar-
ization maps are obtained by acquiring images at different polarizer angles and computing Stokes
vectors. The polarization in the principal plane, a vertical cut through the sky containing the Sun, is
compared to measurements of a well-characterized spectroradiometer with polarized radiance optics
to validate the method. The images are further used for automated cloud detection using a simple color-
ratio algorithm. The resulting cloud cover is validated against synoptic cloud observations. A Sun cover-
age parameter is introduced that shows, in combination with the total cloud cover, useful correlation with
UV irradiance. © 2009 Optical Society of America
OCIS codes: 010.1615, 100.2960, 110.5405, 120.5410.
1. Introduction
Observations of the sky are one of the oldest methods
in planetary sciences such as meteorology and as-
tronomy. In many cases the human observer is still
indispensable; however, autonomous digital technology has grown in importance, e.g., in large observation networks. Digital all-sky imaging has been
utilized in the automated investigation of diverse
phenomena such as Auroras [1], urban light pollu-
tion [2], and photosynthetically active radiation un-
der forest canopies [3]. In astronomy, meteorology or
atmospheric physics, all-sky imaging in its most ele-
mentary form is a convenient way to record the gen-
eral atmospheric situation during observations [4].
More specifically, all-sky cameras with polarization
filters have been used for polarization mapping of
the sky hemisphere [5–10]. When calibrated against a radiometric standard, such a system is a multiwavelength, multiangle radiometer measuring the radiance of the whole sky in one exposure [5]. Compared to sky-scanning grating spectroradiometers, acquisition speed is a trade-off against well-defined wavelength bandwidth, dynamic range, and precision of the detector. The sky's polarization is sensitive to the aerosol properties in the sky and is thus an ideal complementary measurement in combination with aerosol optical depth measurements.
Systems for cloud observation and detection pur-
poses have been presented in [10–14]. Cloud type
identification has been attempted but remains a
challenging issue [15]. It seems that a single camera
picture on its own is not quite sufficient for a detailed
automated cloud type analysis since, e.g., cloud
height information is difficult to recover. However,
in combination with satellite images in different
wavelength regions and a ceilometer, an all-sky im-
age could add valuable information.
The combination of all-sky imaging with UV radia-
tion measurements has been shown in [16–18],
where cloud cover analysis was correlated with
enhanced UV irradiation. Under certain cloud
configurations the global UV radiation field can be
enhanced compared to the clear-sky value. For
radiation measurements in general, all-sky imaging
20 February 2009 / Vol. 48, No. 6 / APPLIED OPTICS 1091
adds an extra dimension to systematic data analysis.
Cloud cover and Sun coverage are key parameters for
estimating actual irradiances from clear-sky model
predictions.
In this study we present a particularly simple and
inexpensive all-sky imaging system for atmospheric
research and demonstrate two applications: polari-
zation maps and cloud detection. We show that rudi-
mentary characterization and relative calibration is
sufficient for these purposes.
2. General System Description
Images of the full-sky hemisphere are recorded with a
commercial compact digital camera (Canon A75) with
a fish-eye objective [field-of-view (FOV) 180°] and a
stepper motor controlled linear polarizer, situated be-
tween the objective and the camera. The system is
mounted in a weatherproof housing with a glass dome
and connected via Ethernet cable to a personal computer for external automated control. A horizontal setup
is assured by a bubble level on the housing. Four
images at polarizer angles differing by 45° are acquired at a fixed exposure time of 1/125 s and an aperture of f/6. At one polarizer angle, a second underexposed image is acquired [a rudimentary high dynamic range (HDR) method, described, e.g., in [19]] to gain extra information close to the Sun, where pixels are often saturated. The images are transmitted in JPEG format and have an intensity resolution of 8 bit for three color channels (RGB) and a spatial resolution of 1536 × 2048 pixels. The whole process of acquiring the set of five images, rotating the polarizer, and transmitting the data takes less than 1 min.
The system was installed on a building rooftop in
the city of Innsbruck, Austria (47.26°N, 11.39°E, 620 m amsl). In routine operation, acquisitions are taken hourly at solar elevations >5°. The system was also installed at two additional sites featuring different environments and horizons: a flat urban site in Vienna (48.24°N, 16.33°E, 180 m amsl) and a mid-altitude alpine site with an obscured horizon in Kolm-Saigurn (47.07°N, 12.98°E, 1600 m amsl).
A shadow mechanism for obscuring the direct Sun was initially installed but has since been removed because, for most of our purposes here, the advantages of omitting it (a simpler system, fewer moving parts, less obscured sky) outweigh the disadvantages (the area around the direct Sun is difficult for image processing, and reflections occur in the lens system).
The hemisphere is projected onto the flat CCD
chip by an equiangular projection. Each point in the sky, characterized by two angles (azimuth angle φ and zenith angle Θ), is mapped onto a circular area in the x–y plane. For increased computation speed, the original JPEG images are downscaled by a factor of 4 by nearest-neighbor interpolation and cut so that the hemisphere of 168° FOV is a circle in a 325 × 325 pixel square plane. The x and y coordinates are centered on the zenith pixel (x_0, y_0) and converted to polar coordinates φ and r. The radius r is proportional to the zenith angle in the sky, Θ = r(Θ_max/r_max), with Θ_max = FOV/2 and r_max = 325/2, while the azimuth angle is invariant in the
transformation:
(r, φ)_polar → (r·Θ_max/r_max, φ) = (Θ, φ)_sky.    (1)
The zenith pixel is the geometric center of the
square plane, when the camera is perfectly level.
This is checked by marked points (mountain peaks)
near the horizon. The resulting angle that each pixel subtends is Θ_max/r_max ≈ 0.5°. Considering the scale of the atmospheric structures in the sky, an adequate spatial resolution is easily met by a 1 Mpixel CCD chip.
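The equiangular pixel-to-sky mapping described above can be sketched as follows. This is a minimal illustration, not the authors' software: the function name and the assumption that the zenith pixel sits at the geometric center of the 325 × 325 square are ours, and the azimuth is given in image coordinates without the orientation calibration the paper performs against mountain peaks.

```python
import numpy as np

FOV = 168.0          # field of view of the cropped hemisphere, degrees
SIZE = 325           # downscaled image is SIZE x SIZE pixels
R_MAX = SIZE / 2.0   # radius of the hemisphere circle in pixels
THETA_MAX = FOV / 2.0

def pixel_to_sky(x, y, x0=SIZE / 2.0, y0=SIZE / 2.0):
    """Map pixel (x, y) to sky angles (zenith Theta, azimuth phi) in degrees."""
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy)                   # radial distance from the zenith pixel
    theta = r * THETA_MAX / R_MAX          # equiangular: Theta proportional to r
    phi = np.degrees(np.arctan2(dy, dx)) % 360.0  # azimuth is invariant
    return theta, phi
```

The zenith pixel maps to Θ = 0, and a pixel on the edge of the circle maps to Θ = Θ_max = 84°, matching the 168° FOV.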
Since each pixel is illuminated by a radiant power
from a solid angle and integrated over the spectral
responsivity, the measured radiometric quantity is
radiance. In fact, an idealized camera attempts to imitate the human eye's spectral responsivity, so the quantity would then be luminance.
Each pixel can be considered a set of three inde-
pendent broadband detectors with 8 bit resolution.
The radiance R at each pixel is a nonlinear function of the stored pixel counts C:

R = k·f(C),    (2)

where k is a calibration constant for radiance in absolute units (W m⁻² sr⁻¹), so f(C) is a linearized pixel intensity or relative radiance. This nonlinear conversion is generally implemented in imaging systems to increase contrast. Normally, this function is called gamma correction and is of the form f(C) = C^γ with γ ≈ 2.1 [20]. For our camera this relationship has been found to be too inaccurate over the full dynamic range, and the function f has been established experimentally by analyzing a series of images of an illuminated reflection plate with increasing exposure time τ and fitting a 3rd-order polynomial of the form f(C) = aC − bC² + cC³ as an empirical function (see Fig. 1). The result of the least-squares fit for the blue channel is [a b c] = [0.241  0.001  0.0000154]. To determine the constant k, a radiance standard such as an integrating sphere must be used. It is possible
Fig. 1. Linearization function f(C) to convert pixel counts into relative radiance for the blue channel. Data points are fitted with a 3rd-order polynomial.
that k is a function of zenith angle Θ, an effect known as vignetting in photography, particularly with fish-eye objectives. It is a great advantage of the polarization measurements and cloud detection presented in this study that proper absolute radiance calibration is not required, which is a considerable experimental simplification. Note that the dark noise is automatically subtracted by the camera, so that our function f has no offset. As a test, a series of images was acquired at exposure times of 0.0005–15 s with the camera completely in the dark. The mean counts of the JPEG images ranged from 0.003 at 0.0005 s exposure time to 0.25 at 15 s. It is concluded that for the exposure times used here (around 0.01 s) the dark noise is sufficiently subtracted.
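A minimal sketch of the empirical linearization for the blue channel, using the polynomial fit and coefficients quoted in the text (we read the polynomial as f(C) = aC − bC² + cC³); the function name is ours, and the absolute constant k is left out since only relative radiance is needed.

```python
import numpy as np

# Blue-channel fit coefficients [a, b, c] quoted in the text
A, B, C3 = 0.241, 0.001, 0.0000154

def relative_radiance(counts):
    """Convert 8-bit JPEG pixel counts C to relative radiance f(C)."""
    c = np.asarray(counts, dtype=float)
    # 3rd-order polynomial with no offset (dark noise already subtracted)
    return A * c - B * c**2 + C3 * c**3
```

With these coefficients, f is strictly increasing over the full 8-bit range, as a linearization must be, and f(0) = 0 since the camera subtracts the dark noise internally.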
Note also that, for the following image processing,
each image is rotated so that the line through the ze-
nith and the Sun is always vertical, the Sun being on
the upper half. This cut through the hemisphere is
called the principal plane (PP).
3. Polarization Maps
Polarization of the sky was first reported by Arago in
the early 19th century [21]. A first qualitative expla-
nation could be given by Lord Rayleigh using his
molecular scattering theory [22]. Indeed, the very
simplified consideration of single molecular scatter-
ing and geometric scattering angles reproduces the
polarization pattern for longer wavelengths in the
visible spectral range quite well. In the real atmo-
sphere, light is scattered multiple times and might
be backreflected by the ground, resulting in lower
than unity maximum polarization and points of zero
polarization, so-called neutral points. These neutral
points were observed quite early by Arago, Brewster
and Babinet and have more recently been imaged
in [23].
The generalized polarization state of light is con-
veniently described by the Stokes vector formalism
[21]. Decomposing the electric field vector into its two orthogonal complex field amplitudes E_r and E_l, the 4-component Stokes vector is defined as

(I, Q, U, V)^T = (E_l E_l* + E_r E_r*,  E_l E_l* − E_r E_r*,  E_l E_r* + E_r E_l*,  i(E_l E_r* − E_r E_l*))^T,    (3)
where E* denotes the complex conjugate amplitude. All the components are real, physical quantities, namely, irradiances E E* = |E|² = I_α measured at a different polarizer angle α:
(I, Q, U, V)^T = (I_0 + I_90,  I_0 − I_90,  I_45 − I_135,  I_+ − I_−)^T,    (4)
where I_+ and I_− denote circularly polarized irradiances. Equation (4) can be simplified for the specific case here by noting that the irradiance is proportional to the radiance for each pixel and neglecting the circular polarization of the sky:
(I, Q, U)^T = (R_0 + R_90,  R_0 − R_90,  R_45 − R_135)^T.    (5)
R_α denotes the measured radiances at relative polarizer angles α. The resulting degree of (linear) polarization Π and its angle χ are given by [21]

Π = √(Q² + U²)/I,    χ = 0.5 arctan(U/Q).    (6)
Geometrically, we can visualize Q/I and U/I as the two orthogonal components in a unit circle (a horizontal cut through the Poincaré sphere), whose vector sum is the degree of polarization. It is clear that Π is invariant under rotation of the coordinate system, so the zero offset of the analyzer is irrelevant. Note also that, in the expressions for Π and χ, any calibration constant cancels, so it suffices to compute sums of relative radiances f(C). The polarization of the sky's hemisphere is computed by combining four images at relative polarizer angles of 0°, 45°, 90°, and 135° [Fig. 2(a)]. The images are linearized by the conversion function f to obtain four relative radiance maps. Applying Eqs. (5) and (6) at each pixel yields the polarization map [Fig. 2(b)]. The maps are smoothed by a 20 × 20 pixel 2D median filter for spatial noise reduction without obscuring real atmospheric structures. In this study we restrict ourselves to the blue channel of the camera, since it has the strongest signal and can be compared best to the UV spectroradiometer. The center wavelength is about 450 nm with a full width at half-maximum (FWHM) of 50 nm.
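The per-pixel Stokes computation from the four linearized radiance maps can be sketched as below. This is an illustrative implementation of Eqs. (5) and (6), not the authors' code; the function and variable names are ours, and it also returns the redundancy check d discussed in the text.

```python
import numpy as np

def polarization_map(R0, R45, R90, R135):
    """Degree of linear polarization Pi and angle chi (radians) per pixel.

    R0..R135 are linearized relative radiance maps at relative polarizer
    angles 0, 45, 90, 135 degrees; circular polarization is neglected.
    """
    # First Stokes parameter, averaged over the redundant pair of sums
    I = 0.5 * (R0 + R45 + R90 + R135)
    Q = R0 - R90
    U = R45 - R135
    Pi = np.sqrt(Q**2 + U**2) / I           # degree of linear polarization
    chi = 0.5 * np.arctan2(U, Q)            # polarization angle
    # Quality check: d should be close to zero for a consistent image set
    d = (R0 + R90) - (R45 + R135)
    return Pi, chi, d
```

For fully linearly polarized light along the 0° axis (R0 = 1, R90 = 0, R45 = R135 = 0.5), this yields Π = 1, χ = 0, and d = 0, as expected.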
Finally, the polarizing property of the composite
optical system (dome/objective/polarizer/CCD-chip)
is tested. Errors introduced by optical components
can be described by the Müller matrix, operating
Fig. 2. (a) Blue pixel counts C from JPEG images at four polarizer angles. Counts are converted to relative radiance and inserted into Eqs. (5) and (6) to yield the polarization map. (b) Corresponding polarization map of the cloud-free sky for a wavelength of 450 ± 50 nm (blue channel) on 26 February 2008, 10:30 UTC. The degree of polarization is coded in gray shades; undetermined areas are rendered white.
on the input Stokes vector [21]. The Müller matrix
describes a nonunitary polarization transformation,
i.e., it can rotate and change the length of the Stokes
vector. Here we confirm that the length of the Stokes
vector is conserved. A large, completely polarized
area (polarizer in front of a white reflectance plate)
is used as the test field and analyzed from different
angles of incidence (0°, 45°, and 80°) and polarizer
angle offsets (0°, 15°, and 30°) to confirm the invariance of Π under these angles. The measured polarization is averaged over the test area and shown
along with the standard deviation as an error bar
in Fig. 3. No dependence is found for these angles of
incidence and polarizer offset angles. So we assume
the Müller matrix to be the identity for all the angles
of incidence.
However, small bright sources like the Sun may cause internal reflections at certain angles in the objective, which are superimposed on the sky's image.
These reflections may locally introduce large errors
in the polarization but are clearly visible in the raw
images and can be identified as artifacts.
Since the Stokes vector is independent of a polarizer offset angle, it is clear that the first Stokes parameter I is overdetermined: I = R_0 + R_90 = R_45 + R_135, i.e., three polarizer angles would suffice for a full Stokes-vector recovery. This is expected because the left-hand side of Eq. (5) contains only three unknowns. We use the additional information for better statistics, averaging the first Stokes parameter I to I = (1/2) Σ_α R_α, summing over the four polarizer angles. Furthermore, the difference d = (R_0 + R_90) − (R_45 + R_135) should be zero and is used as a quality check of the images.
To estimate the statistical error in our polarization
analysis, we measured consecutive maps of the sky every 2 min under stable atmospheric conditions. From the variation of this series, we estimate a 1σ standard deviation of 3% for Π.
For validation of this method with respect to sys-
tematic errors, we compare the polarization in the
principal plane to the values measured with a well-
characterized spectroradiometer with polarized
radiance optics with a small FOV of 1.5° [24]. The
wavelength of the spectroradiometer has been set
to 495 nm. As displayed in Fig. 4, the general shape
and value of polarization agree well within 3%.
Around the Sun at a polar angle of 60°, the camera's pixels are close to overexposure, resulting in a large
deviation of the polarization from the spectroradi-
ometer measurement. Also, around the maximum po-
larization, at 30° zenith angle (90° behind the Sun), a
polarized reflection within the fish-eye objective
causes a distortion of the polarization curve. To con-
firm the effect of the reflections from the unobscured
Sun, a direct comparison between the polarization in
the principal plane with and without a mounted sha-
dow band is shown in Fig. 5. When the shadow band is omitted, the measured polarization is perturbed only at the locations of the reflections (around 45°) and around the Sun (62°). Taking these limitations into account,
all-sky maps of the polarization can be investigated.
Two interesting examples of polarization maps are
given. Figure 6(a) shows clear-sky polarization after
sunset, when two neutral points are distinctly visible. The minima around +70° and −70° zenith angle in the principal plane are called the Babinet and Arago neutral points, respectively. Their positions depend on aerosol parameters in the atmosphere
and ground reflectivity [21] and will be investigated
more closely in the future.
In contrast, Fig. 6(b) shows the polarization map of
a partly covered sky around noon (solar elevation is
41°). High, thin cirrus clouds reduce the maximum
polarization below 50%, while lower, optically thick
cumulus clouds are barely polarized. It has also been noted that scattered cumulus clouds reduce the polarization even in the clear sky in between, due to reflected light, corresponding to a higher ground albedo. So although clouds have a dramatic effect on the sky's polarization, cloud detection is based on a different method, described in Section 4.
Fig. 3. Analysis of a fully polarized test field (white reflectance plate) at different angles of incidence and polarizer angle offsets of 0°, 15°, and 30°. Within the experimental uncertainty, no influence of these parameters on the measured polarization can be found.
Fig. 4. Comparison of the all-sky camera blue polarization and
spectroradiometer radiance measurements in the principal plane
on a clear-sky day, 26 February 2008, 10:30 UTC. Solar zenith
angle is 58°.
4. Cloud Detection
Color is the primary property that allows visual dis-
tinction of clouds in the sky. Because of different
wavelength dependence of scattering, the color of
the clear-sky region is blue rather than the whitish
or gray color of clouds. So in a digital color image,
clear-sky pixels have a higher ratio of blue/red radi-
ance than cloud pixels. The cloud detection method
is based on setting a threshold on this ratio [1114].
Considering a cloudless, aerosol-free sky, the blue/red ratio is a function of both zenith and azimuth angles in the sky and the solar elevation. Toward the horizon, the sky appears paler than at the zenith. In addition, scattering by aerosols, which in general is less wavelength dependent and much stronger in the forward direction than molecular scattering [25], also diminishes the blueness of the sky, most prominently in the region around the Sun. However, aerosol content and scattering properties may vary so much in time that this temporal variation masks the spatial variation of the pristine cloudless-sky color. Hence, without aerosol information, a constant blue/red ratio threshold is taken for the entire hemisphere to cover the full range of atmospheric conditions.
Using a diverse set of images with typical cloud
situations (low and high clouds, illuminated and
dark clouds, different solar zenith angles), a threshold of 1.3 on this ratio was found to best discriminate cloud-marked areas from clear sky. The threshold was confirmed by investigating
the number of cloud-marked pixels as a function of
threshold. The number first increases before reach-
ing a plateau for the optimal value of the threshold,
after which it increases again. It should be noted that
the threshold is unique to each camera system since
it depends on the color response of the CCD chip as
well as any gamma correction. Also, the location has
an influence on the threshold as altitude and typical
aerosol background will result in a different clear-
sky color.
For cloud-marked pixels, the underexposed image
is used as a second criterion, applying another
threshold for the blue/red ratio. This step is neces-
sary for the region near the unobscured Sun, where
pixels close to saturation would always be cloud-
marked. The total cloud cover (TCC) is then com-
puted as the ratio of cloud-marked pixels to total
pixel number in the hemisphere, in which the hori-
zon (up to 20° for the Innsbruck site) is masked
out. It is noted that the fish-eye projection is not
area-conserving and the projected solid angle per
pixel strictly is a function of zenith angle (see, e.g.,
[11] for a detailed mathematical formulation). So
the simple ratio is an approximation with the rela-
tive error growing with increasing zenith angle.
However, the absolute error in the resulting TCC
is below 0.01 for most situations and can safely be
ignored, considering the accuracy of the TCC value,
which is normally rounded to one decimal place or
given in octas.
Furthermore, the area around the Sun is investi-
gated in more detail. When the Sun is unobscured,
diffraction around the blades of the camera's aperture produces a star-flare pattern around the Sun with a sixfold symmetry. The number of flares allows a quantitative definition of a Sun coverage parameter (SCP) as a measure of how much the Sun is obscured by clouds. Three discrete cases are distinguished: when no pixels in the underexposed
distinguished: when no pixels in the underexposed
image are saturated, the Sun is completely obscured,
and the SCP is assigned unity. In all cases, when pix-
els are saturated but the number of detected flares is
less than five, the Sun is assumed to be partially cov-
ered with an associated SCP of 0.5. When five or
more Sun flares are detected, the Sun is considered
unobscured and has an SCP of 0.
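The three-level SCP decision described above reduces to a few comparisons. In this sketch, `n_saturated` and `n_flares` are assumed to come from prior processing of the underexposed image around the Sun (saturation count and detected flare spokes); how they are extracted is left abstract, and the function name is ours.

```python
def sun_coverage(n_saturated: int, n_flares: int) -> float:
    """Sun coverage parameter (SCP) from the underexposed image."""
    if n_saturated == 0:
        return 1.0   # no saturated pixels: Sun completely obscured
    if n_flares >= 5:
        return 0.0   # full star-flare pattern: Sun unobscured
    return 0.5       # saturation but fewer than five flares: partial cover
```

The ordering matters: the obscured case is tested first, so any saturation with fewer than five detected flares falls through to the partially covered class.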
Here we use the JPEG image with the polarizer set
parallel to the principal plane, which approximates
an unpolarized relative radiance image. In principle,
the degree of polarization could also be applied for
cloud discrimination, but it was found to be less
selective than the color-ratio threshold and requires
more image processing.
Two representative examples of the performance
of the cloud detection method are shown in Fig. 7.
Cumulus-type clouds [Figs. 7(a) and 7(b)] have a par-
ticularly sharp boundary and good contrast in the
Fig. 6. (a) Polarization map at dusk; solar elevation is 6° and the Sun has set behind the mountains. Note the neutral points at about 70° zenith angle (9 September 2008, 17:00 UTC). (b) Polarization map around noon with high and low clouds (24 September 2008, 12:00 UTC).
Fig. 5. Comparison of the polarization in the principal plane with and without obscuring the Sun using a shadow arm. The measured polarization is only perturbed at the locations of the reflections around 45° and around the Sun (solar zenith angle = 62°). 24 October 2008, 09:46 and 09:48 UTC.
blue/red ratio against the clear sky. In this case, the
detection works very well, even close to the unobs-
cured Sun. More problematic are thin clouds in front
of the Sun [Figs. 7(c) and 7(d)], where blue/red ratios
are very similar. Nevertheless, total cloud cover re-
sults are not severely affected. Note that the contrail
is nicely detected, as well as cumulus-type clouds
near the horizon.
For a quantitative statistical validation, more than
1 yr of hourly data in the period from 2 August 2007
to 27 October 2008 has been compared to the synop-
tic (SYNOP) observation at Innsbruck airport, located 3 km to the west of the camera site (see Fig. 8). Of a total of 3903 analyzed camera images, 73% agree to within 1 octa with the SYNOP observations.
The distribution of the differences shows a slight
asymmetry, i.e., our cloud cover results tend to un-
derestimate those of the SYNOP observers. However,
cloud observations always bear certain interpretational variance; some differences will even exist between human observers. Specifically, the transition from haze to cloud is continuous. For example, on some hazy days (aerosol optical depth at 500 nm of >0.4) our analysis results in zero cloud cover, whereas observers often interpret such a situation as totally overcast. Furthermore, clouds in front of mountains are considered only by the SYNOP observers and add to the bias.
Finally, the significance of the SCP is validated by correlating it with the erythemally weighted global UV irradiance (UV index or UVI) and the TCC [Figs. 9(a)–9(c)]. The method for measuring the UVI
and determining the clear-sky model value is de-
scribed in detail in [26]. For the cases when the Sun
was labeled obscured (SCP of 1), the sky is mostly
found totally covered and the measured UVI is much
lower than the predicted clear-sky value, with the ra-
tio peaking around 0.3. For cases of a partially covered Sun, the whole range of TCC values is found
clear-sky UVI decreases with increasing TCC. For a
SCP of 0, predominantly cloud-free sky occurs with
small TCC and the measured UVI is close to the pre-
dicted clear-sky value. These observations compare
well with those in [16]. The classification of three
Sun coverage scenarios poses a refinement to
Fig. 7. (a) Image of a sky with convective cloud type (cumulus) of
low and medium height. This cloud type has a sharp contrast and
is relatively easy to discriminate from the clear sky (19 September 2008, 12:00 UTC). (b) Processed image after cloud detection; clouds
are rendered white, while clear sky is gray and mask is black. The
underexposed image (not shown here) allows good discrimination
close to the Sun. Total cloud cover here is 0.63. (c) Image of a sky
with both low and high clouds, including altostratus and a contrail.
(24 September 2008, 12:00 UTC). (d) Total cloud cover is 0.29. Note
the problematic area near the Sun when thin clouds are present.
Fig. 8. Comparison of total cloud cover (TCC) obtained from the camera images and SYNOP cloud observations (TCC_camera − TCC_SYNOP). 73% of a total of 3903 analyzed camera images agree within 1 octa with the SYNOP observations.
Fig. 9. (a) For SCP of 1, the Sun is totally occluded, which coincides with a TCC near 1. The ratio of measured UVI and clear-sky prediction peaks around 0.3. In the range 0.98 < TCC < 1, more than 620 data points are accumulated. (b) SCP of 0.5 implies a partially covered Sun. In this case increasing TCC is correlated with a decreasing ratio of measured-to-clear-sky prediction. (c) For SCP of 0, the Sun is assumed totally unobscured, which correlates with small TCC and the measured UVI converging toward the clear-sky prediction.
previous work where only two cases were distin-
guished (Sun obscured or not obscured). It is further
noted that the extreme cases SCP of 0 and 1 repre-
sent a well-defined class of scenarios, while SCP of
0.5 spans a larger scope, which may suggest a further
improvement in the future by refined partitioning of
this case.
5. Conclusion and Outlook
A simple all-sky imaging system with a polarizing
filter has been described and characterized with
respect to measuring the sky's degree of polarization
and detecting total cloud cover. Both types of analysis
have been validated against independent measuring
methods and found to agree well within their respec-
tive uncertainties. A Sun coverage parameter has
been obtained from image processing around the
Sun. It has been shown that the combination of total
cloud cover and Sun coverage parameter form a solid
basis for UV radiation estimation under cloudy
conditions. A cloud type analysis could be attempted
using pattern-recognition techniques in combination
with satellite images. Polarization maps of the sky
contain valuable information of aerosol content in
the atmosphere. The correlation of aerosol optical
depth and degree of polarization is one of the para-
mount topics in our ongoing work. Furthermore, in contemporary climate research, the interaction of aerosols and clouds is a key uncertainty in the Earth's energy budget. As a complementary device together with an aerosol optical depth measurement such as a sun photometer, all-sky imaging could be a powerful tool for the investigation of aerosol–cloud interaction.
This work was supported by the Austrian Science
Fund (FWF) under Project P18780. We gratefully
acknowledge Lanzinger at Austrocontrol, Innsbruck
Airport, for supplying the SYNOP cloud observa-
tions. The camera system was developed in coopera-
tion with Schreder (CMS-Ing.Dr.Schreder GmbH).
References
1. S. B. Mende, S. E. Harris, H. U. Frey, V. Angelopoulos, C. T. Russell, E. Donovan, B. Jackel, M. Greffen, and L. M. Peticolas, "The THEMIS array of ground-based observatories for the study of auroral substorms," Space Sci. Rev., doi: 10.1007/s11214-008-9380-x (2007).
2. G. Zotti, "Measuring light pollution with a calibrated high dynamic range all-sky image acquisition system," presented at DARKSKY 2007, 7th European Symposium for the Protection of the Night Sky, Bled, Slovenia (2007).
3. R. L. Chazdon and C. B. Field, "Photographic estimation of photosynthetically active radiation: evaluation of a computerized technique," Oecologia 73, 525–532 (1987).
4. T. E. Pickering, "The MMT all-sky camera," Proc. SPIE 6267, 62671A (2006).
5. Y. Liu and K. Voss, "Polarized radiance distribution measurements of skylight. I. System description and characterization," Appl. Opt. 36, 6083–6094 (1997).
6. N. J. Pust and J. A. Shaw, "Dual-field imaging polarimeter using liquid crystal variable retarders," Appl. Opt. 45, 5470–5478 (2006).
7. J. Gál, G. Horváth, V. B. Meyer-Rochow, and R. Wehner, "Polarization patterns of the summer sky and its neutral points measured by full-sky imaging polarimetry in Finnish Lapland north of the Arctic Circle," Proc. R. Soc. A 457, 1385–1399 (2001).
8. J. A. North and M. J. Duggin, "Stokes vector imaging of the polarized sky-dome," Appl. Opt. 36, 723–730 (1997).
9. M. V. Berry, M. R. Dennis, and R. L. Lee, Jr., "Polarization singularities in the clear sky," New J. Phys. 6, 162 (2004).
10. G. Horváth, A. Barta, J. Gál, B. Suhai, and O. Haiman, "Ground-based full-sky imaging polarimetry of rapidly changing skies and its use for polarimetric cloud detection," Appl. Opt. 41, 543–559 (2002).
11. C. N. Long, J. M. Sabburg, J. Calbó, and D. Pagès, "Retrieving cloud characteristics from ground-based daytime color all-sky images," J. Atmos. Ocean. Technol. 23, 633–652 (2006).
12. N. H. Schade, A. Macke, H. Sandmann, and C. Stick, "Total and partial cloud detection during summer months 2005 at Westerland (Sylt, Germany)," Atmos. Chem. Phys. Discuss. 8, 13479–13505 (2008).
13. U. Feister, J. Shields, M. Karr, R. Johnson, K. Dehne, and M. Woldt, "Ground-based cloud images and sky radiances in the visible and near infrared region from whole sky imager measurements," in Proceedings of Climate Monitoring Satellite Application Facility Training Workshop (Dresden, 2000).
14. U. Feister and J. Shields, "Cloud and radiance measurements with the VIS/NIR Daylight Whole Sky Imager at Lindenberg (Germany)," Meteor. Zeitschr. 14, 627–639 (2005).
15. K. A. Buch and C. H. Sun, "Cloud classification using whole-sky imager data," presented at the Ninth Symposium on Meteorological Observations and Instrumentation, paper 7.5, Charlotte, North Carolina, 1995.
16. G. Pfister, R. L. McKenzie, J. B. Liley, A. Thomas, B. W. Forgan, and C. N. Long, "Cloud coverage based on all-sky imaging and its impact on surface solar irradiance," J. Appl. Meteorol. 42, 1421–1434 (2003).
17. N. H. Schade, A. Macke, H. Sandmann, and C. Stick, "Enhanced solar global irradiance during cloudy sky conditions," Meteor. Zeitschr. 16, 295–303 (2007).
18. J. M. Sabburg and C. N. Long, "Improved sky imaging for studies of enhanced UV irradiance," Atmos. Chem. Phys. 4, 2543–2552 (2004).
19. J. Stumpfel, C. Tchou, A. Jones, T. Hawkins, A. Wenger, and P. Debevec, in Proceedings of the Third International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (Association for Computing Machinery, 2004), pp. 145–149.
20. C. A. Poynton, Digital Video and HDTV: Algorithms and Interfaces (Morgan Kaufmann, 2003).
21. K. L. Coulson, Polarization and Intensity of Light in the Atmosphere (Deepak, 1988).
22. J. W. Strutt, "On the light from the sky, its polarisation and colour," Philos. Mag. 41, 107–120, 274–279 (1871).
23. G. Horváth, J. Gál, I. Pomozi, and R. Wehner, "Polarization portrait of the Arago point: video-polarimetric imaging of the neutral points of skylight polarization," Naturwissenschaften 85, 333–339 (1998).
24. M. Blumthaler, B. Schallhart, M. Schwarzmann, R. McKenzie,
P. Johnston, M. Kotkamp, and H. Shiona, Spectral UV mea-
surements of global irradiance, solar radiance, and actinic flux
in New Zealand: intercomparison between instruments and
model calculations,J. Atmos. Ocean. Technol. 25, 945958
(2008).
25. S. Twomey, Atmospheric Aerosols (Elsevier, 1977).
26. B. Schallhart, M. Blumthaler, J. Schreder, and J. Verdebout, A
method to generate near real time UV-index maps of Austria,
Atmos. Chem. Phys. 8, 74837491 (2008).
20 February 2009 / Vol. 48, No. 6 / APPLIED OPTICS 1097
All aspects of aerosol physics important in the oformation, evolution and removal of particulate material in the atmosphere are presented, and the influence of such particles on the climate and weather are outlined. The book opens with a discussion of the physics of aerosols and derives some f the more important relationships in the physics of single aerosol particles. These are then used as a basis for subsequent examination of interactions between particles and the dynamics of populations of particles relative to the evolution amd maintenance of particle size distributions in the atmosphere and for the production - modification and coagulation-removal cycle. The balance between production and removal is then reviewed and the regions of the size spectrum where the various formative and removal processes are most effective are identified. The last five chapters are devoted to the influence of atmospheric particles on weather, atmospheric optics and radiative transfer, atmospheric electricity and atmospheric energetics and climate.