Spectral Characterization of a Digital Still
Camera’s NIR Modification to Enhance
Archaeological Observation
Geert J. Verhoeven, Philippe F. Smet, Dirk Poelman, and Frank Vermeulen
Abstract—Scholars using still cameras to take (mostly) oblique
imagery from a low-flying aircraft of various possible archaeolog-
ically related anomalies can be defined as aerial archaeologists.
At present, as well as in the past, aerial/air archaeology has been
acquiring data almost exclusively in the visible range of the elec-
tromagnetic spectrum. This phenomenon can largely be attributed
to the critical imaging process and sometimes unconvincing re-
sults related to the film-based approach of near-infrared (NIR)
photography. To overcome the constraints of detecting and inter-
preting only the varying visible colors in vegetation (the so-called
crop marks), while still maintaining the flexible and low-cost
approach characteristic for aerial archaeology, a consumer digital
still camera was modified to capture NIR radiation. By its spectral
characterization, more insight was gained into its imaging prop-
erties and necessary guidelines for data processing, and future
improvements could be formulated, all in an attempt to better
capture the archaeologically induced anomalous growth stresses in
crops.
Index Terms—Aerial archaeology, camera characterization,
crop mark, digital photography, near-infrared (NIR).
I. INTRODUCTION
A. Aerial Archaeology
THE TERM “aerial archaeology” encompasses the entire
process from the acquisition and inventory of imagery to
the mapping and the final interpretation. It comprises the whole
study of all sorts of archaeological remains by using informa-
tion acquired from a certain altitude: digital or film-based low-
altitude aerial photographs, satellite imagery, lidar, radar, etc.
The majority of source data used by most aerial archaeologists
are acquired from the cabin of a low-flying airplane using
small- or medium-format handheld cameras with (generally)
uncalibrated lenses, mostly capturing oblique imagery. Al-
though this specific type of data acquisition may seem strange
to the nonarchaeological community, the noninvasive approach
easily yields interpretable imagery with abundant spatial detail,
Manuscript received November 20, 2008; revised February 10, 2009. First
published August 7, 2009; current version published September 29, 2009. This
work was supported by the Fund for Scientific Research—Flanders (FWO).
G. J. Verhoeven and F. Vermeulen are with the Department of Archaeol-
ogy and Ancient History of Europe, Ghent University, 9000 Ghent, Belgium
(e-mail: Geert.Verhoeven@UGent.be; Frank.Vermeulen@UGent.be).
P. F. Smet and D. Poelman are with LumiLab, Department of Solid State
Sciences, Ghent University, 9000 Ghent, Belgium (e-mail: Philippe.Smet@
UGent.be; Dirk.Poelman@UGent.be).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TGRS.2009.2021431
is extremely flexible, might be cost efficient (certainly when
compared to other prospecting methods and applied in previ-
ously unexplored areas), and is driven by the specific nature of
the archaeological anomalies.
Archaeological remains such as settlements, graveyards, and
roads can show up on the surface in a number of ways. Aside
from still-standing material relics (e.g., churches, bridges, and
fortifications) and partly eroded structures (e.g., earthen banks,
mounds, and ditches), most of the features that can be viewed
from above are the remains of buried archaeological sites.
Whereas the first type of archaeological features is directly
visible, the second type—often referred to as earthworks—is
mostly recorded from the air when thrown into relief by low-
slanting sunlight (sometimes referred to as shadow marks) and
in northern Europe by differential snow accumulations or dif-
ferential melting of snow or frost. The buried or leveled remains
might be disclosed by distinct tonal differences in the (usually
ploughed) soil (soil marks) or differences in color and/or height
of vegetation on top of the remains (crop/plant marks), with
the variations in the subsoil being the prime movers in their
creation. In other words, archaeological residues must exhibit
a certain localized contrast in their surrounding matrix to be
detected [1]. Although these marks are mostly discovered,
photographed, and mapped using visible light, this paper will
explore how these anomalies, particularly crop marks, can
benefit from detection and interpretation by low-cost digital
aerial imaging of near-infrared (NIR) radiation. Consequently,
the nature of crop marks needs to be considered first.
B. Crop Marks and Related Plant Reflectance
Subsurface archaeological remains such as pits or trenches
will often be filled with organic material and/or new soil, which
has greater moisture retention than the surrounding matrix. In
periods of drought, these soils might have a favorable effect on
the crops, allowing the plants to grow luxuriantly and for an
extended period of time. The adjacent plants will be less tall
and thinner and ripen quicker, leading to differences in chroma
and/or plant size that can be seen from above as positive crop
marks [Fig. 1(a)].
In unfavorable situations [e.g., plants growing over buried
stone walls or floors—Fig. 1(b)], weaker and shorter plants
might occur, in which case negative crop marks are yielded
[2]–[9]. Speaking in more technical terms, such adverse sit-
uations put a certain stress on the vegetation, hence blocking
the growth, development, or metabolism of the plant. It is the
Fig. 1. (a) Positive and (b) negative crop marks (adapted from [2, Fig. 13]).
Fig. 2. Kodak Ektachrome Professional Infrared image of a dense archaeo-
logical landscape containing Neolithic and Roman features [29, Fig. 8].
stress-related loss of chlorophyll—a green pigment that can be
found in all green plants and largely absorbs incident visible
wavelengths in the blue waveband (centered around 450 nm)
and red (around 650 nm) spectral region [10]–[12]—which
induces an increased visible reflectance in the green–yellow–
orange waveband and the red chlorophyll absorption region
around 670 nm [13], [14]. Consequently, the plant’s domi-
nant green color disappears in favor of a yellowing discol-
oration, which is a phenomenon called chlorosis [15]–[17]. By
recording the reflected portion of the visible radiation, aerial
photographs thus allow the remote assessment of vegetation
status [18].
However, aerial archaeologists have sometimes acquired im-
agery using other parts of the electromagnetic (EM) spectrum
(Fig. 2), particularly the NIR waveband (see [19] for an ex-
tensive overview). In the NIR (700/750 to 1400 nm), pigment
absorption is extremely low [20], and the leaf's internal cellular
structure (more particularly the structure of the spongy meso-
phyll) effects a very high and diffuse reflectance [12], [21]–[23].
In the case of diseased, senescent, and heavily nutrient-deficient
vegetation, reflectance can significantly drop in the photo-
graphic NIR region [24]–[28], with an absolute change in the
NIR reflectance that might be far more noticeable than the re-
flectance increase in the visible band (for an in-depth overview
of a plant’s physiological- and morphological-state-related
spectral differences in the NIR, consider [19]). Although imag-
ing reflected NIR has been recognized as potentially beneficial,
a film-based approach has certain inherent drawbacks (e.g.,
the requirement for cooled storage and transportation of emul-
sions, inappropriate exposure determination, narrow exposure
latitude, and relatively weak sensitivity), making the complete
NIR image acquisition and processing workflow costly and
complicated, with a final outcome that is rather unpredictable.
C. Digital NIR Acquisition
Since the advent of digital photographic cameras [also called
digital still cameras (DSCs)], the acquisition of such NIR
imagery has enormously been simplified, because their silicon
image sensors are very sensitive to this invisible radiation, with
a so-called cutoff wavelength λ_c at circa 1100 nm [30]–[32]. In
addition to the digital image sensor, the whole imaging array of
most one-shot DSCs also consists of a microlens array, which
is used to increase the amount of photons impinging on the
sensor’s photodiode (i.e., the light-sensitive area that collects
photons, hence creating one pixel of the final digital image), and
a color filter array (CFA), which is a mosaic pattern of colored
filters positioned above the photodiodes [Fig. 3(a)] [31], [33]–
[35]. As every photodiode of the image sensor has such a filter,
only a specific spectral range can be transmitted, subsequently
generating a charge in the photodiode [Fig. 3(b)].
Although both the sensor technology and the arrays of mi-
crolenses and colored filters are responsible for some variation
in the spectral responses of DSCs, it is safe to state that most
imaging matrices are very responsive to NIR radiation (for a
more in-depth discussion, consider [36]). To cut out the image-
degrading effect of these nonvisible wavelengths, camera man-
ufacturers place an NIR-blocking filter in front of the sensor
[37]–[39]. By removing this optical element and replacing it
with a visibly opaque filter, all visible wavelengths are removed
before they reach the sensor, allowing only NIR photons to
pass. Such a modification hugely increases the DSC’s sensitiv-
ity to NIR, while retaining the facility to view through the lens
(impossible in the film-based approach of pure NIR imaging).
Using a dedicated NIR DSC also deals with most of the
difficulties presented by film. Additionally, digital solutions
offer enhanced quantum efficiencies (QEs) and larger dynamic
ranges [41], [42] when compared to analog approaches, which
means that the former can be applied in far-from-optimal oper-
ational conditions.
Moreover, a DSC’s linear response to radiation, as well as
its direct feedback on accurate focusing and exposure, enables
a very consistent output. Finally, DSCs are suited for mapping
purposes, as they do not suffer from geometric film distortions
[43], [44]. In spite of these major advantages, the application of
digital NIR imaging with DSCs was never really investigated in
archaeological reconnaissance.
Using imagery generated by such a modified DSC and con-
ventional frames from a simultaneously operated unmodified
DSC, Verhoeven [19] gives an overview of situations in which
these easy-to-use NIR-imaging instruments might be archae-
ologically advantageous. Specifically, by comparing both data
sources, the author demonstrated the potential of this approach
to overcome the constraints of detecting and interpreting only
the varying visible colors in vegetation, while still maintaining
a flexible and economic approach (in terms of imaging instru-
ments).
This paper further explores the possibilities of such con-
verted DSCs in extracting even more meaningful information
from an acquired NIR frame, reporting on the evaluation and
quantification (as with any scientific measuring tool) of the
intrinsic properties of an NIR modified digital single-lens reflex
Fig. 3. (a) Bayer CFA [36, Fig. 7]. (b) Wavelength versus absolute QE for the Kodak KAF-8300 (adapted from [40, Fig. 5]).
(D-SLR) camera. This assessment of the channel-dependent
spectral responses and the accuracy of capturing NIR photons
might offer significant possibilities in the data processing, inter-
pretation, and quantification of the acquired imagery. Instead of
only using the imagery straight out of the camera, exploiting the
DSC’s individual spectral responses should ideally permit the
capture of (archaeologically) induced growth stresses in crops
even better (i.e., enhance the contrast between the archaeologi-
cal residue and the landscape matrix [1]).
II. DSC CHARACTERIZATION: MATERIALS AND METHOD
A. Hardware
For the reasons discussed in [45], a Nikon D50 D-SLR was
employed. The NIR modification of the DSC (hereafter called
D50NIR) was executed by Chen [46], who placed a sort of cold
mirror in front of the sensor to block most visible radiation. The
sensor itself, a Sony ICX413AQ APS-C format sensor (called
DX format by Nikon) of the charge-coupled device (CCD) type,
measures 23.7 mm × 15.6 mm and contains 3008 effective
photodiodes in width by 2000 photodiodes in height [47], [48].
Above this sensor, an on-chip three-color red–green–blue
(RGB) CFA is fitted, with the filters arranged in a Bayer pattern,
as shown in Fig. 3(a). Bayer’s pattern features twice as many
green filters as blue or red filters to improve the sampling of
the luminance information [49], generating digital imagery with
higher perceived sharpness [49], [50].
As the majority of optical glasses and polymers freely trans-
mit NIR [39], most lenses can be used for NIR imaging [37],
[51], [52]. On the D50NIR, the Nikkor 20-mm f/3.5 AI-S and
the AF-S DX Zoom-Nikkor 17–55-mm f/2.8 G IF-ED are
used for Helikite aerial photography (i.e., remotely controlled
photography by means of a Helikite, a helium balloon with kite
wings [53], [54]) and photography from an airplane, respec-
tively. Whereas the latter lens is slightly more prone to hot spots
(i.e., a brighter area in the center of the image produced by in-
ternal reflections) than the fixed-focal-length lens, it allows for
zooming, which is often necessary when flying. The prime lens
is, however, a top-class performer in the NIR, capable of pro-
ducing very crisp and extremely sharp images [55]. Moreover,
it features an NIR focus mark. This lens was also used in the
subsequently described spectral analyses. To verify the consis-
tency of the results, all tests were repeated with an AF Nikkor
50-mm f/1.8 D.
B. Image Acquisition
To identify the NIR behavior of the D50NIR's complete
imaging system (lens + cold mirror + microlenses + CFA +
CCD), spectral response data are very important as they rep-
resent the digital output of the image sensor per incident light
energy of a certain wavelength. In the procedure followed, a
2800-K tungsten lamp was used as a reference EM source with
known spectral output. A small part of the emission spectrum
was selected with a Zeiss quartz prism monochromator (type
Carl Zeiss M4 QII) in the wavelength range from 400 to
1100 nm. Using quartz prisms for wavelength selection is bene-
ficial as no second-order contributions, which are typical when
using a diffraction grating, exist. Nevertheless, it was verified
that no spurious light in other than the selected wavelength
range was present.
Subsequently, a small entrance slit was fitted on the mono-
chromator to obtain a Gaussian-distributed narrow-band stim-
ulus. The transmitted waveband was then characterized with a
calibrated Ocean Optics QE65000 spectrometer (with a wave-
length resolution of 0.8 nm) to accurately determine the peak
wavelength and the bandwidth, which typically had a full width
at half maximum (FWHM) of 2.8 nm at 600 nm and 5.2 nm at
950 nm. Finally, characterization with the spectrometer allowed
the number of photons that passed each selected wavelength to
be determined. The D50NIR was irradiated with its sensor per-
pendicular to the output of the monochromator to minimize as
much as possible the angular dependence of the image sensor
[56]. Pictures of the transmitted radiation were acquired at
monochrome EM levels every 5 nm to obtain sufficient data
points. The D-SLR used a lens aperture of f/5.6 and a total ex-
posure time short enough (0.25 s for visual and 5 s for NIR)
to make sure that no photodiode became saturated, while the
integration time was still long enough to generate sufficiently
high digital numbers [DNs, also called analog-to-digital units
(ADUs)], essential for an acceptable signal-to-noise ratio
(SNR) (or S/N) and related measurement accuracy. For all im-
ages, the D50NIR's default ISO 200 setting was used, yielding a minimal gain g of 6.57 e−/DN with the 12-bit analog-to-digital
converter (ADC). This value was calculated according to the
method described by Berry and Burnell [57] and indicates the
number of electrons that will cause the DN to increase by
one [33], hence corresponding to a linear scaling factor K of
0.152 DN/e− (K = 1/g).
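To make the relationship between g and K concrete, the short sketch below converts digital numbers to photoelectrons and back; the numerical values are the ones reported above, the array contents are dummy data, and Python/numpy is used purely for illustration.

```python
import numpy as np

# Gain and scaling factor reported above for the D50NIR at ISO 200 (12-bit ADC).
g = 6.57          # electrons per DN
K = 1.0 / g       # DN per electron, about 0.152

# Dummy bias- and dark-corrected DNs from a RAW frame (illustrative only).
dn = np.array([120.0, 850.0, 2300.0, 4095.0])

electrons = dn * g           # signal expressed in photoelectrons
back_to_dn = electrons * K   # K = 1/g recovers the original DNs

print(electrons)             # e.g., 788.4 e- for 120 DN
print(back_to_dn)
```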
As it is very important to work with the initially generated
integer values, using RAW imagery is crucial. In essence, a
RAW file is nothing but an array of DNs, each of them gen-
erated by one photodiode and proportional to the EM radiation
of a certain wavelength range (determined by the colored filter
on top) plus some offset due to dark current and bias. Because
the D50NIR utilizes a 12-bit ADC, the DNs can vary from 0 to 4095, corresponding to a tonal range of 2^12 gradations. Using
a RAW workflow ensures that the imagery for analysis is the
“pristine” sensor data, as these files (which can be created by
most consumer and all professional DSCs) were not subjected
to any color-processing algorithms (i.e., white balancing, demo-
saicking, tonal curve) by the DSC’s firmware, unlike in-camera-
generated JPEGs and TIFFs (for a discussion on the necessity
of using RAW in scientific imaging, consider [58]).
C. Image Calibration
Subsequently, the RAW images (called NEF by Nikon,
which means Nikon Electronic Format) were imported to The
MathWorks’ MATLAB to measure the DSC’s response to the
narrow-band illuminations but not before calibrating the im-
agery by removing some unwanted signals.
In scientific digital imaging, only the stream of photons
that reach the sensor (i.e., the photon signal) is of interest.
However, the light frame captured by an image sensor always
encompasses three particular signals: the photon signal, the
dark-current signal, and the bias signal/direct-current offset
[57]. Unlike the photon signal, which is generated by the
accumulated EM radiation during the exposure, dark current
is a signal that is produced even when the sensor is not illu-
minated, due to thermally induced electrons. This dark charge
accumulates with integration time and is heavily temperature
and ISO dependent. The bias component, which is a small and
mostly steady zero voltage offset that occurs even in the total
absence of illumination, is due to the effects of the electrical
charge applied to the detector prior to exposure [57], [59].
Each of these nonrandom signals has some corresponding
random variation (i.e., noise) embedded, all three varying ac-
cording to the imaging technology used [60]. In addition to
photon/shot noise (σ) and dark-current noise (σ_d), caused by the inherently random process of photon arrival and both obeying the law of Poissonian statistics [33], [61], there is the signal-independent read/readout/bias noise (σ_ron): the sum of the reset noise (σ_reset), the on- and off-chip amplifier noise (σ_amp-on and σ_amp-off), and the quantization noise (σ_ADC) [34], [62]. In the D50NIR, this minimal noise floor was measured to be about 1.04 ADU (12 bits) or about 6.83 root-mean-square electrons (i.e., 1.04 × g), an extremely low value that makes the D50NIR completely photon noise limited when imaging normal signal levels and set to ISO 200.
TABLE I
APPROPRIATE SYMBOLS AND UNITS OF ALL MENTIONED DSC QUANTITIES

Hence, the DNs making up an NEF picture are the sum of the photon signal (with its corresponding Poisson noise), an unwanted dark-current signal (with Poisson noise), and a bias constant (with readout noise), mathematically written as (1), with the noise equal to (2) ([57], all symbols are defined in Table I)

$$ S_{\mathrm{raw}} = \frac{x}{g} + \frac{x_d}{g} + b \qquad (1) $$

$$ \sigma_{\mathrm{raw}} = \frac{1}{g}\sqrt{\sigma^2 + \sigma_d^2 + \sigma_{\mathrm{ron}}^2}. \qquad (2) $$
Due to their randomness, the noise components are difficult
to correct. However, the dark-current and bias signals can be
removed during calibration. To reveal the dark characteristics of
the D50NIR, several sets of five NEF images were shot at dark
condition, each set with a different integration time, starting
from the fastest possible shutter speed (0.00025 s) up to 1 s,
while the DSC was in thermal equilibrium at a constant room
temperature (20 °C).
After linearly reading them out (i.e., omitting the nonlinear
tonal redistribution normally applied by DSCs) and disregard-
ing white balance (WB), the RAW frames were converted to
16-bit TIFFs (one averaged version per set), and both the mean
and the standard deviation of the output values were plotted
versus integration time. The results are presented in Fig. 4(a)
and show that this D50NIR has significantly low dark-noise levels at ISO 200.
However, the sudden drop in maximum dark-pixel value
makes a particular Nikon characteristic apparent. That is,
the firmware runs a median filter when the DSC takes an exposure ≥ 1 s, aimed at reducing the effects of hot pixels
during long exposures yielded by particular photodiodes with
abnormally high dark current. The French astronomer Buil
found a way around this [63], by turning noise reduction on and
shutting down the D-SLR immediately after the exposure has
completed, thereby aborting the noise reduction job and saving
the pure RAW image directly from the buffer to the memory
card. When applying this method, it is seen in Fig. 4(b) that
the mean linear dark current is still not even 5 e−/diode (i.e., 0.7 DN × 6.57 e−/DN) at an exposure of 5 s, which means that its error contribution is still negligible (apart from a few hot pixels).
Fig. 4. (a) DNs generated by dark current (+ bias signal) versus exposure
time (in seconds) for very short exposures. (b) DNs generated by dark current
(+ bias signal) without subsequent median filtering.
After using this method in the data acquisition, a dark frame
was subtracted from all RAW images as in (3), with the total
image noise mathematically expressed by (4)
$$ S_{\mathrm{image}} = S_{\mathrm{raw}} - S_{\mathrm{dark}} = \left(\frac{x}{g} + \frac{x_d}{g} + b\right) - \left(\frac{x_d}{g} + b\right) \qquad (3) $$

$$ \sigma_{\mathrm{image}} = \sqrt{\sigma_{\mathrm{raw}}^2 + \sigma_{\mathrm{dark}}^2}. \qquad (4) $$
Expression (4) clearly shows the noise to slightly increase
by dark subtraction. Therefore, rather than generating a single
frame, a high-S/N master dark frame yielded by averaging ten
stacked 5-s dark frames (or 0.25-s dark frames) was subtracted
from the original image to average the random noise. As the
master dark frame also contains the bias component b, this
operation corrects for both unwanted signals, making the use
of a bias frame obsolete [57], [64], [65]. Third, this approach
also accounts for the possible amplifier glow resulting from a
response of the photodiodes to radiation emitted by the readout
amplifiers every time the detector is read out [59], although the
latter was not visually attested.
In addition to dark subtraction, calibration also involves the
removal of a multiplicative component by flat fielding [57],
[61], [64], [65]. This process corrects the image for photo-
response nonuniformity (PRNU) by dividing the dark-
subtracted light frame with a master flat frame: an average of
several dark-current-corrected images taken from a uniform or
“flat” field of light, hence recording dust particles on the lens
and sensor, optical vignetting, and photodiode nonuniformity,
which is the main cause of PRNU [66].
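The two calibration steps just described (master dark subtraction and flat fielding) can be summarized in a few lines of array arithmetic. The sketch below assumes the light, dark, and flat frames are already available as floating-point numpy arrays; file reading, demosaicking, and the hot-pixel handling discussed above are deliberately left out.

```python
import numpy as np

def calibrate(light, dark_stack, flat_stack):
    """Dark-subtract and flat-field a raw light frame.

    light      : 2-D array of DNs from the exposure of interest
    dark_stack : 3-D stack of dark frames taken with the same exposure
                 time and temperature (these include the bias signal)
    flat_stack : 3-D stack of dark-corrected flat-field frames
    """
    # High-S/N master dark: averaging a stack reduces the random noise added
    # by the subtraction, as expression (4) implies.
    master_dark = dark_stack.mean(axis=0)

    # Master flat, normalized so the correction preserves the overall level.
    master_flat = flat_stack.mean(axis=0)
    master_flat /= master_flat.mean()

    # Remove dark current + bias as in (3), then divide out PRNU, vignetting,
    # and dust shadows recorded in the flat field.
    return (light - master_dark) / master_flat

# Hypothetical usage with stand-in data sized like the 3008 x 2000 sensor.
rng = np.random.default_rng(0)
light = rng.uniform(0, 4095, (2000, 3008))
darks = rng.uniform(0, 5, (10, 2000, 3008))
flats = rng.uniform(1800, 2200, (10, 2000, 3008))
print(calibrate(light, darks, flats).shape)
```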
Fig. 5. Relative response versus wavelength of the Nikon D50NIR with a Nikkor 20-mm f/3.5 AI-S.
Finally, all calibrated RAW images were analyzed with a
purpose-written MATLAB program. Once the spectral and
intensity response of both green filter sets were verified to be
identical, a DN for the red, green, and blue sensor responses
was extracted by averaging over a rectangular section of some
15 pixels × 100 pixels in the center portion of every image. The
resulting set of three measured intensities allowed plotting the
color-filter-dependent relationship between the captured wave-
length and the ratio of the DN to the intensity of the emitted ra-
diant energy. However, accurate measurement of such a spectral
sensor response requires the output signal to be linearly propor-
tional to the incident light intensity over a large range of input
levels. Although this is known to be mostly the case [67] and
certainly to be expected for modern DSCs [68], a coefficient of
determination, i.e., R² > 0.99 (calculated for both the complete
CFA and all three color channels), confirms the almost perfect
linearity of the photometric response below saturation for this
CCD, an observation that was also reported in [69].
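As a sketch of the per-channel measurement and linearity check described here, the snippet below pulls the four Bayer sub-mosaics out of a raw frame, averages a central patch of roughly 15 × 100 photodiodes, and fits DN against relative intensity with ordinary least squares. The RGGB layout, the patch coordinates, and the fitted data are assumptions made only for illustration.

```python
import numpy as np

def bayer_channel_means(raw, patch=((992, 1007), (1454, 1554))):
    """Average a central patch of each Bayer sub-mosaic (RGGB layout assumed)."""
    (r0, r1), (c0, c1) = patch
    crop = raw[r0:r1, c0:c1]
    return {
        "R":  crop[0::2, 0::2].mean(),
        "G1": crop[0::2, 1::2].mean(),
        "G2": crop[1::2, 0::2].mean(),
        "B":  crop[1::2, 1::2].mean(),
    }

# Linearity check: mean DN versus relative incident intensity (dummy values).
intensity = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
dn = np.array([310.0, 622.0, 1240.0, 1869.0, 2480.0, 3105.0])

slope, offset = np.polyfit(intensity, dn, 1)
pred = slope * intensity + offset
r_squared = 1.0 - np.sum((dn - pred) ** 2) / np.sum((dn - dn.mean()) ** 2)
print(round(r_squared, 5))   # values above 0.99 indicate a near-linear response
```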
III. DSC CHARACTERIZATION: RESULTS AND PROCESSING
A. Spectral Response Curves
Fig. 5 displays the relative spectral sensitivity response of
the different photodiodes in the D50NIR to the 2800-K lamp
as measured with the procedure explained above. The graph
describes the way in which the whole imaging matrix responds
to particular wavelengths. By repeating the same procedure
with an AF Nikkor 50-mm f/1.8 D, it was verified that the
impact of the photographic lens can be ignored to a large extent.
Only from 740 nm onward do the eleven lens elements of the
Nikkor 20-mm f/3.5 AI-S [70] slightly decrease the NIR trans-
mission rate [71] compared to the 50-mm lens (which consists
of only six lens elements [72]). This fact confirms that normal
photographic lenses are highly transparent to NIR radiation,
although—strictly speaking—they also have a specific spectral
absorption response.
In addition to transmitting radiation in specific spectral bands
of the visual spectrum, the colored filters thus also function
as wavelength-specific filters in the NIR range, allowing the
photodiodes to capture information in particular spectral bands.
From the curves, it is clearly seen that the spectral sensitivity
is almost negligible for visible light with wavelengths below
650 nm, corresponding to the cut-on wavelength of the NIR-pass filter in front of the CCD. Starting at about 660 nm,
the red photodiodes are most sensitive for deep-red to NIR
wavelengths, reaching a maximum at 730 nm. Above this
value, the QE markedly drops due to generated electrons often
recombining before reaching a sensor’s depletion region where
they are stored [34].
The blue filter locations are, however, totally insensitive to
the entire visible part of the EM spectrum, as their sensitivity
onset lies at 780 nm, rapidly increasing to a maximum response
at around 815 nm. The spectral range of 795–875 nm at half
maximum indicates that most information is gathered before
the moisture-sensitive NIR trough starting at about 940 nm
[73], [74], making the blue-filtered diodes particularly sensitive
to vegetation density or biomass [10], [12], [75]. Because the
general spectral response in the blue channel is much weaker
than the green and red responses, it is best to expose with a
somewhat longer-than-normal integration time. This will ef-
fectively counter high noise levels, as the following equation
shows that the SNR increases with the square root of all photons
captured by the diode [33], [61]:
$$ \mathrm{SNR}_x = \sqrt{x}. \qquad (5) $$
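As a quick numerical illustration of (5) (the photon counts are hypothetical), doubling the number of collected photons improves the SNR only by a factor of $\sqrt{2}$:

$$ x = 10\,000 \Rightarrow \mathrm{SNR} = 100, \qquad 2x = 20\,000 \Rightarrow \mathrm{SNR} \approx 141 \approx 100\sqrt{2}, $$

which is why a somewhat longer integration time is an effective way to strengthen the weaker blue-channel signal.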
Finally, the green diodes show an intermediate spectral be-
havior, being responsive to EM radiation from 680 nm onward,
until they also reach a maximum at about 815 nm. On the
long-wavelength side (> 820 nm), the similar response of
the particular diodes indicates that the RGB filters become
nearly completely transparent to the incident radiation, until
the imaging matrix becomes the perfect equivalent of a mono-
chrome detector at around 850 nm, which means that all filtered
photodiodes are equally sensitive to the incoming radiation. For
wavelengths longer than 1000 nm, the D50NIR's QE becomes
extremely low, due to the inherent wavelength-dependent low
absorption coefficient [34]. On the other side of the spectrum,
the sensitivity in the wavelength range from 400 to 650 nm
is extremely low, as one would expect from a good visible-
blocking filter. Only the green and red photodiodes show a
very small response, with green spectrally peaking at 565 nm.
Nevertheless, the contribution of these wavelengths to the final
output can safely be ignored.
B. New Spectral Bands
NIR imagery generated by the D50NIR has already been
used in archaeological research [19], [36], [45]. However, the
spectral characterization described above allows one to go
beyond the initial approaches in which the default output was
used. Because this analysis has clearly revealed the unequal
spectral responses of each photodiode type, spectroscopic in-
formation can be extracted by differentiating between the red,
green, and blue channels. The normalized spectral response
after subtraction and addition of particular channels is shown
in Fig. 6. These mathematical operations make sense, as all three diode types have the same transmittance on the long-wavelength side, whereas the blue and green spectral responses completely fit within the response ranges of the green and red diodes, respectively. This way, the blue pure NIR component can effectively be filtered out of the green channel, whereas subtracting the green from the red channel seriously narrows the bandwidth of the latter. Adding the green to the blue band, on the other hand, creates a new spectral range that peaks at around 815 nm, with a better response in the 750–900-nm range, where a plant's maximum NIR reflectance lies [76]. Table II gives an overview of all primary and newly created bands that can be worked with and their close resemblance to particular spectral bands acquired by satellite sensors, although for the purposes of this study, only the archaeological potential of the bands displayed in Fig. 6 is exploited (see Section IV). First, however, one extra elementary processing step is explained.

Fig. 6. Red channel minus the green channel (R − G), the blue channel subtracted from the green channel (G − B), and the blue channel added to the green one (G + B). The peak response of each band is normalized to unity.

TABLE II
ALL WORKABLE BANDS GENERATED BY THE D50NIR
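A minimal sketch of how these derived bands can be formed from a linearly demosaicked D50NIR frame is given below; the input layout (a rows × cols × 3 float array of linear DNs) and the variable names are assumptions for illustration only.

```python
import numpy as np

def derived_bands(rgb):
    """Derived spectral bands from a linearly demosaicked D50NIR frame.

    rgb : float array of shape (rows, cols, 3) holding the red, green, and
          blue channels as linear DNs (no gamma curve, no white balance).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    r_minus_g = R - G   # narrowed red/NIR band (about 690-775 nm at half maximum)
    g_minus_b = G - B   # green channel with its pure NIR component removed
    g_plus_b  = G + B   # broad NIR band peaking at around 815 nm
    return r_minus_g, g_minus_b, g_plus_b
```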
C. Demosaicking
Apart from the few DSCs that have a Foveon X3 sensor,
single-shot DSCs usually feature one CCD, complementary
metal–oxide–semiconductor, n-channel metal–oxide–
semiconductor, or junction field-effect transistor sensor
with an additional CFA to allow one particular spectral band to
be captured by each photodiode. Consequently, a mathematical
operation must be executed to fill in the DNs for the other
two bands, which is a process commonly referred to as
demosaicking, color reconstruction, CFA-interpolation, or de-
Bayering (in case a Bayer array is used). Given the widespread
use of CFAs, a large range of linear and nonlinear algorithms
has been created to reconstruct the final RGB image as accu-
rately as possible (e.g., [77]–[82]). However, these methods
Fig. 7. Processed images from the same aerial picture taken with the D50NIR. (a) RAW file developed by Capture NX. (b) Same RAW file linearly developed in dcraw. (c) Contrast-enhanced version of (a). (d) Output after a simple mathematical operation (6) on version (b).
were designed to demosaic information from the visible
domain, and the assumptions underlying most of them may
not hold for NIR wavelengths, making them sometimes
unsuited for interpolating missing information in NIR imagery.
Previous research by Verhoeven [58], however, indicated that
the adaptive homogeneity-directed demosaicking algorithm
[83] performed very well in this invisible domain. As this
algorithm is implemented in the program dcraw, this software
has been used to demosaic all NEF images. Moreover, this free
ANSI C RAW decoder works on any operating system and is
capable of writing reconstructed 16-bit TIFF files [84] without
applying any tonal/gamma curve or WB (omitting the latter
two is often of utmost importance in scientific applications
[58]). As in-camera-generated TIFF and JPEG files do not
allow this approach, the following analysis assumes a complete
RAW workflow, yielding completely linearly developed files in
which the DNs are still equal to the ones initially generated by
the sensor but with all three channels completely reconstructed.
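For readers who prefer a scripted workflow over the dcraw command line, a comparable linear development can be approximated with the third-party rawpy package (a libraw wrapper). The parameter names below follow rawpy's documented postprocess() options but should be verified against the installed version; the file name in the usage comment is hypothetical.

```python
import numpy as np
import rawpy  # third-party libraw wrapper, not part of the standard library

def develop_linear(nef_path):
    """Demosaic a NEF as linearly as possible: AHD interpolation, equal
    channel multipliers (no white balance), linear tone curve, no
    auto-brightening, 16-bit output in the camera's own color space."""
    with rawpy.imread(nef_path) as raw:
        rgb = raw.postprocess(
            demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,
            use_camera_wb=False,
            user_wb=[1.0, 1.0, 1.0, 1.0],
            gamma=(1, 1),
            no_auto_bright=True,
            output_color=rawpy.ColorSpace.raw,
            output_bps=16,
        )
    return np.asarray(rgb)

# Hypothetical usage: linear_rgb = develop_linear("DSC_0042.NEF")
```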
IV. ARCHAEOLOGICAL RESULTS
Do the three dissimilar spectral responses of the D50NIR allow the researcher to gain more archaeological information
out of a straight-from-the-camera NIR frame? The answer to
this question is illustrated in Fig. 7. In the upper part [Fig. 7(a)
and (b)], two 16-bit versions of the same aerial photograph are
shown, taken with the D50NIR on July 20, 2007 at 13:30 h above the central Adriatic Roman town of Septempeda (43°14′10″ N, 13°11′52″ E–WGS84). Fig. 7(a) was created
by opening the original RAW file in Capture NX (Nikon
Corporation), a dedicated RAW converter for NEF files. As
with all RAW converters, this program automatically applies
a tonal correction to the data (a gamma-like curve to rectify
the mismatch between the approximately logarithmic human
visual system (HVS) and the linear sensor) and white balances
the scene by multiplying every spectral channel with a preset
weight, thereby correcting for the differential spectral response
of the DSC and compensating for the varying spectral output of
the light source.
Fig. 7(b), on the other hand, was converted and demosaicked
using dcraw. The corresponding histogram shows that the chan-
nels are not equal [unlike in Fig. 7(a)], and the maximum DNs
are also smaller than the Capture NX version, indicating that the
file is completely linearly processed. Histogram stretching of
Fig. 7(a), which is often necessary to tackle the nonmaximized
tonal range in NIR aerial photographs, yields the greater con-
trast seen in Fig. 7(c). Although some features start to become
faintly apparent, this result is largely inferior to Fig. 7(d), which
clearly indicates lighter and darker patches in the colza field,
indicating the presence of underground structures such as roads,
buildings, and ditches. The approach that yielded the result in
Fig. 7(d) was a simple arithmetic operation on Fig. 7(b), i.e.,
$$ F(i,j) = \frac{R(i,j) - G(i,j)}{G(i,j) + B(i,j)} \qquad (6) $$
in which F (i, j) is the final pixel, and R, G, and B indicate the
value of this pixel in the red, green, and blue channels, respec-
tively (a computation that is valid, as demosaicking attributed
each pixel with three complete spectral channels).
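A sketch of the operation in (6) on a linearly developed frame is shown below; the small epsilon guarding the division and the percentile stretch used for display are additions of this sketch, not part of the procedure described above.

```python
import numpy as np

def simple_ratio(rgb, eps=1e-6):
    """Compute F = (R - G) / (G + B) per pixel, following (6)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (R - G) / (G + B + eps)   # eps avoids division by zero on dark pixels

def stretch_for_display(img, low=2, high=98):
    """Linear percentile stretch to 8 bits, for visual inspection only."""
    lo, hi = np.percentile(img, [low, high])
    out = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

# Hypothetical usage: sr_view = stretch_for_display(simple_ratio(linear_rgb))
```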
This operation clearly enhances the contrast between the soil
and the vegetation, as well as biomass differences in the canopy,
revealing subtle dissimilarities that are largely masked in the
structure of the original image [1]. The result is no coincidence.
Although the bands used are rather broad (85-nm FWHM and
95-nm FWHM), dividing them yields a so-called simple ratio
(SR), a result that is also known as the ratio vegetation index
(VI) or VI number. As the first true VI developed by Birth
and McVey [85], Jordan [86], and Pearson and Miller [87], this
ratio is known to indicate the amount of green biomass or leaf
area index (LAI) better than either band alone [86], [88], [89].
In all three of these pioneering cases, an NIR waveband was
divided by a part of the red spectrum (740 nm/675 nm, 800 nm/
675 nm, and 780 nm/680 nm, respectively). Although [16] also
Fig. 8. Comparison between (a) a conventional photograph and (b)–(d) three versions of a NIR photograph depicting approximately the same scene. (b) The
complete NIR frame. (c) The Blue NIR channel. (d) The result of the SR.
suggested an R_NIR/R_700 ratio, it was opted to divide the red by
the NIR band, just out of convenience rather than following
other scholars (e.g., [90]). This way, the resulting vegetation
marks have a greater resemblance to crop marks as they appear
in the visible spectrum. Because the maxima of the red and
NIR bands are situated near 730 and 815 nm, respectively, the
operation also has close resemblance to the R_850/R_710 ratio,
with the latter being proven by Datt [76], [91] to exhibit a very
strong correlation with chlorophyll content.
In addition to these comparisons, Fig. 7(d) demonstrates that
this simple VI is effective, exploiting the fact that when dealing
with healthy green vegetation, absorption is high in the red
band, whereas the plant’s mesophyll tissue allows for a strong
NIR reflection. Correspondingly, these areas are displayed dark
in the output. In the case of the Roman road in the center of the
picture, the bare soil and/or decreased LAI markedly increase
the magnitude of the red/NIR ratio, creating lighter areas or
negative crop marks. Although the SPOT-3-similar blue band
[92] has the advantage over the green or green + blue channel
through not including any visible radiation, the incorporation
into the SR did not yield better results (as all pictures were taken
before the DSC’s spectral characterization and the signal of the
blue channel was not optimized to counter the noise levels).
Longer exposures with a higher SNR should yield equal, if not
better, results.
In a second example, the same SR was tested on remotely
sensed data from a totally different situation. Fig. 8(a) shows
the grayscale and histogram-stretched version of a Canon
EOS 300D digital color photograph of the western grassland
part of the Italian Adriatic Roman coastal colony Potentia
(43°24′53″ N, 13°40′14″ E–WGS84), taken on July 17, 2007
at 15:00 h. It shows an excavation area (1), traces of the Roman
street pattern (2), and a plot of cut grass (3), needed to perform
geophysical research. Additionally, two paths to the excavation
area are depicted: one created by mowing (4) and a second
smaller path of trampled vegetation (5) as a result of passage to
and from the excavation area. Just as the traces of the wheel-
barrow traffic (6), the latter is characterized by a yellowish-
brown appearance, which is a very strong visual indication of
plant stress [15]. Fig. 8(b) and (c), respectively, shows a demo-
saicked, linearly converted, and histogram-stretched 16-bit
aerial D50NIR photograph and its extracted blue layer, taken
on the same day at 12:45 h.
Due to the extreme and long-term drought-induced stress the
plants suffered from in the Italian summer of 2007, Fig. 8(b)
[and certainly the pure NIR image in Fig. 8(c)] clearly shows
the traces of the Roman street pattern much better than Fig. 8(a).
Although the stressed plants reflect greater green and red ra-
diation (due to the substantial loss of chlorophyll), the street
traces stay faint in the visible domain as the surrounding
vegetation is also wilted to a certain extent and the lower
canopy closure causes an increased reflectance due to a lower
density of photosynthetic pigments per unit soil surface area.
Consequently, the differences between both vegetation stages
in the visible domain are small when compared to the NIR
reflectance dissimilarity. The fact that these NIR crop marks are
even visible in grasslands indicates the very high soil moisture
deficits this vegetation is suffering from [6]. Moreover, color
infrared (CIR) imaging was also reported earlier to have a clear
advantage over color photography in detecting archaeological
crop marks in pastures during summer [93], whereas pure NIR
should better reveal crop marks in dry vegetation [94].
On the other hand, all other features mentioned are easier
to distinguish in the visible domain than in any of the D50NIR's three layers, as the decrease in total chlorophyll content is much
larger than the change in the internal cellular structure of the
vegetation. However, the aforementioned ratio again clearly
reveals [Fig. 8(d)] these biomass related traces—the square,
both the paths, and the wheelbarrow area. As the street pattern
almost completely disappears in Fig. 8(d), this feature is less
related to large differences in chlorophyll content and LAI.
Although both pictures were not simultaneously taken from
the same spot (at 15:00 h from the airplane and circa 2 h
before with the use of the Helikite—marked by its shadow in
the middle of the frame), the angles of view of the DSCs and
the position of the sun did not change to such an extent that the
Fig. 9. (a) Visible image of the central part of the Roman town of Ricina.
(b) NIR image of the same scene with some contrast enhancement. (c) Output
when applying the SR with the channels from (b). The images were acquired
with (a) a Nikon D200 and (b) and (c) a Nikon D50NIR.
observed differences could be attributed to them. Indeed, the
parameter that changed the most was the solar geometry, whose
effects are known to be of limited importance [95], certainly
when the sun has a very small zenith angle [96].
In addition to negative crop marks, positive crop marks might
also be distinguished by the SR. From the contrast-enhanced
RAW image in Fig. 9(b), two zones with higher and denser
vegetation are obviously registered brighter when compared to
the surrounding plant canopy, due to the fact that the larger bio-
mass of both features effects a higher reflection of incident NIR
radiation. The visible frame from this scene [Fig. 9(a)], simul-
taneously captured with the NIR image above the center of the
Roman town of Ricina (43°19′41″ N, 13°25′26″ E–WGS84)
on May 15, 2008 at 11:27 h, gives only a small hint of the pres-
ence of these nonarchaeological positive grass marks [1 and 2
in Fig. 9(a)]. Moreover, the hydrographical features visible in
Fig. 9(b) are largely indiscernible in Fig. 9(a), showing the
importance of NIR acquisition in this situation [19]. Notwith-
standing, the NIR record fails to clearly distinguish between
the stone walls of the Roman theater (upper part of the frame)
and the grass growing in between. Calculating the SR yields
Fig. 9(c). When comparing all three frames, the magnitude of
reflectance dissimilarity in the grass field seems largest in the
SR output. This mathematical operation also highlighted the
lack of contrast between the theater walls and the vegetation,
although it was not able to visualize the old hydrographical
features.
V. DISCUSSION
From the results presented, it is clear that the archaeological
potential of a modified NIR-enabled DSC should not be underestimated. Both the use of individual spectral channels (e.g., the
pure NIR image generated by the blue diodes) and that of arith-
metic operations performed on a combination of channels (e.g.,
the calculation of an SR) offer many opportunities to visually
enhance archaeologically related anomalies and/or even reveal
completely new archaeological information (as shown in [19]
and [45]). Although the application of NIR aerial imaging is
by no means novel in archaeological reconnaissance, the ad-
ditional advantages modified DSCs can offer in the generation
and interpretation of NIR photographs are substantial. Not only
do they significantly simplify the complete workflow, but they
also expand the possibilities known from the film-based NIR
approach (pure NIR or CIR), without the costs of the latter.
However, the real-world examples also point to some impor-
tant issues. First, both visible information and NIR information
(pure NIR and calculated SR) clearly need to be used together
to get a relevant picture, in archaeology [93] as in other nonarchaeological disciplines [97], certainly at times when stress
has sufficiently developed, causing lower NIR reflectance of
the canopy. From an interpretational point of view, the visible
information remains very important since the HVS is trained
to spot and interpret vegetation marks (as well as soil, shadow,
and other patterns) in this part of the spectrum. Moreover, when
dealing with chlorotic vegetation, reflectance data in the visible
domain are also of utmost importance as these very common
negative crop marks are extremely hard to distinguish in a pure
NIR image (as also witnessed in [98]), even though the SR can
tackle this issue to a large extent. Therefore, building a simple
camera rig to hold two DSCs is advised to simultaneously
acquire NIR and visible wavelengths (while offering the possi-
bility to mathematically combine particular spectral channels).
Second, all photographs (except those in Fig. 9) were ac-
quired in less-than-optimal circumstances, because long hot
dry periods present the least discriminating conditions to fly in
[21]. It can be expected that flying directly after rainfall could
significantly improve the results yielded by the D50NIR and the calculated SR.
Third, the values of the SR sometimes exhibit very little
variation, a phenomenon that can largely be attributed to two
causes. On one hand, the photographs under consideration show
grassland and semiarid zones, which are regions where the SR
is known to be less effective in discriminating biomass/LAI
variations [99]. To counteract this, other mathematical
operations were tried (particularly normalizations and VIs such
as difference VI (DVI) and normalized DVI). Generally, it was
this SR that yielded the best and certainly the most consistent
results in these low-cover areas, which confirms to a degree the
results of the work of Baugh and Groeneveld [100].
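For completeness, the alternative operations mentioned above (difference and normalized-difference indices) can be formed from the same two derived bands; the sketch below gives only the broadband analogues built from the D50NIR channels, not the published narrow-band definitions.

```python
import numpy as np

def broadband_indices(rgb, eps=1e-6):
    """Broadband DVI and normalized DVI analogues from the D50NIR channels,
    using G + B as the NIR band and R - G as the red band."""
    rgb = np.asarray(rgb, dtype=np.float64)
    red = rgb[..., 0] - rgb[..., 1]           # R - G
    nir = rgb[..., 1] + rgb[..., 2]           # G + B
    dvi = nir - red                           # difference vegetation index
    ndvi = (nir - red) / (nir + red + eps)    # normalized difference index
    return dvi, ndvi
```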
On the other hand and more importantly, the applied SR does
not really involve the mean red reflected radiant flux to mean
NIR radiant flux. Whereas the blue + green channel (with a
spectral range at half maximum of 780–875 nm and a sensitivity
peaking at 815 nm) is well suited as a reference band, being
very little affected by either chlorophyll or water vapor absorp-
tion [75], the red − green channel is still spectrally too broad to
be effectively used as a band that shows maximum sensitivity
to pure chlorophyll absorption. Although the green subtraction
proved very useful in removing much NIR radiation from the
red channel, the resulting response curve—which has a spectral
range of 690–775 nm at half maximum—completely overlaps
the stress-sensitive red-edge region (i.e., the very steep increase
in a healthy green plant’s reflectance curve at the edge of the
visible light and the beginning of the NIR spectrum [101]),
something that should be omitted as it reduces the accuracy of
vegetation investigation [102]–[104]. A solution to tackle these
problems of the D50NIR and the resulting SR is being worked
on, involving flying with another simultaneously operated DSC
that acquires only radiation from the red-edge spectral region
(690–710 nm). This zone has been proven several times to give
the most consistent leaf (and even canopy) reflectance response
to plant physiological stress [102], [105]–[110] and is therefore
of extreme importance in several narrow-band VIs for chloro-
phyll estimation, even at the canopy level [111]–[113]. As this
range is severely compromised in unmodified DSCs, a similar
modified DSC equipped with a narrow-band interference filter
attached to the lens would be needed to generate aerial frames
using only the reflected radiation from this stress-sensitive side
of the chlorophyll absorption band. This would increase the
correlation of the proposed reflectance ratio to plant senescence
and stress, allowing the spectral characteristics of the D50NIR to be more fully exploited. Such an approach offers archaeol-
ogists an affordable and easily managed multispectral tool that
can provide useful information on the vegetation’s physiolog-
ical and morphological conditions to aid in the survey of the
archaeological subsurface. If flying with a second (visible) or
third (visible and 700 nm) DSC is impossible to achieve, the
spectral characteristics of the D50NIR and the resulting SR will
still most likely allow more relevant vegetation information to
be gathered in comparison with only a pure NIR band.
However, no matter how efficient and accurate this new
“tool” can be, an increase in site discovery rate using multispec-
tral imaging with DSCs is unlikely as long as the predominant
flying strategy of “observer-directed” survey and photography
is in practice [114]. This approach generates extremely selec-
tive (i.e., biased) data that are totally dependent on an airborne
observer recognizing archaeological phenomena. Thus, subsur-
face soil disturbances that are visually imperceptible at the time
of flying will not make it into an NIR photograph (even if the
spectral response in this domain is distinct). The large-scale
use of the techniques advocated in this paper requires a new (or
call it additional) approach to aerial archaeology, that is, flying
to collect geographically unbiased photographs of large areas
(a point that was already raised by other scholars concerning
aerial imaging in the visible domain [114]–[117]). Otherwise,
nonvisible and narrow-band imaging will only enhance the
record of known features and—in the best case—reveal pre-
viously undetected archaeological details within a site that can
be seen from above (which, however, should still not be under-
estimated, as new evidence may always alter the archaeological
appraisal [118]).
VI. CONCLUSION
Archaeological aerial reconnaissance has long been and, to a
certain extent, is still largely equated to flying around in a small
aircraft, using still cameras to record archaeological anomalies
recognized by the airborne observer. Although satellite and
multispectral and hyperspectral airborne data have been used
in a variety of archaeological surveys, most users often lack
both the financial and staff resources to acquire and handle
the majority of these data (let alone the fact that the image
acquisition is executed without taking the specific archaeolog-
ical requirements and constraints into account). This does not,
however, imply that technical enhancements have to be ignored
and certainly not if they can cheaply be achieved. It is therefore
encouraging to see that the products of the current digital
photography industry can contribute greatly to the low-
cost technological improvements needed to better understand
the buried landscape record. In 1936, Reeves wrote about aerial
archaeology, pointing out that as “its methods and technique
are improved, aerial photography will increase in scientific
value” [119, p. 107]. Seen from this perspective, the ability of
modified DSCs to acquire nonvisible data in wide and/or narrow
wavebands can be just the tool archaeologists need to increase
the scientific value of every single flight. However, testing these
tools on their spectral capabilities is an absolute prerequisite
for the optimal use of the generated aerial (archaeological)
imagery, given the fact that no two imaging matrices are alike.
Once all essential characteristics are known, such highly NIR-
sensitive devices provide a cheap, compact, robust, and easy-
to-handle means for a “spectroscopic” aerial approach.
Allowing that the presented imagery was acquired in an
unfavorable period and the red − green channel seems signifi-
cantly broader than the ideal 690–710-nm band, the individual
channels of a modified Nikon D50 proved very useful in the
calculation of a simple VI to indicate chlorophyll-related issues,
whereas the pure broadband NIR channels are more suited
to reveal severe drought and nutrient stress in the canopy
reflectance [120]. In addition to using the three channels gen-
erated by one single modified DSC, their combination with
discrete specifically chosen spectral bands (which are generated
by a tandem of photographic cameras) looks promising. Just as
their use is not solely restricted to crop mark archaeology [19],
NIR-enabled DSCs could also be applied in several nonarchae-
ological domains, including agriculture, forest management,
and the mapping of water bodies. Rather than making the
other methods of data acquisition obsolete, modified DSCs
thus offer convenient low-cost possibilities to yield essential
beyond-visible information for the benefit of various aerial and
ground-based disciplines.
ACKNOWLEDGMENT
The authors would like to thank D. Cowley (Royal Commis-
sion on the Ancient and Historical Monuments of Scotland) for
proofreading the manuscript and the two anonymous reviewers
for their helpful comments. This paper arises from the first
author’s Ph.D., which was conducted with the permission of
the Fund for Scientific Research—Flanders (FWO).
REFERENCES
[1] A. R. Beck, “Archaeological site detection: The importance of contrast,”
presented at the Annu. Conf. Remote Sensing and Photogrammetry
Society, RSPSoc, Newcastle, U.K., 2007. [Online]. Available: http://
www.ceg.ncl.ac.uk/rspsoc2007/papers/189.pdf
[2] T. E. Avery and T. R. Lyons, Remote Sensing. Aerial and Terres-
trial Photography for Archeologists. Washington, DC: Cultural Resour.
Manage. Division, 1981.
[3] G. W. G. Allen, “Discovery from the air,” Aerial Archaeol., vol. 10,
pp. 37–92, 1984.
[4] R. Evans, “The weather and other factors controlling the appearance of
crop marks on clay and ‘difficult’ soils,” in Populating Clay Landscapes,
J. Mills and R. Palmer, Eds. Stroud, U.K.: Tempus, 2007, pp. 16–27.
[5] R. Evans and R. J. A. Jones, “Crop marks and soil marks at two ar-
chaeological sites in Britain,” J. Archaeol. Sci., vol. 4, no. 1, pp. 63–76,
Mar. 1977.
[6] R. J. A. Jones and R. Evans, “Soil and crop marks in the recognition of
archaeological sites by air photography,” in Aerial Reconnaissance for
Archaeology, D. R. Wilson, Ed. London, U.K.: Council Brit. Archaeol.,
1975, pp. 1–11.
[7] D. N. Riley, “The technique of air-archaeology,” Archaeol. J., vol. 101,
pp. 1–16, 1946.
[8] D. N. Riley, “Factors in the development of crop marks,” Aerial
Archaeol., vol. 4, pp. 28–32, 1979.
[9] D. R. Wilson, Air Photo Interpretation for Archaeologists. Stroud,
U.K.: Tempus, 2000.
[10] E. B. Knipling, “Physical and physiological basis for the reflectance
of visible and near-infrared radiation from vegetation,” Remote Sens.
Environ., vol. 1, no. 3, pp. 155–159, Dec. 1970.
[11] G. S. Rabideau, C. S. French, and A. S. Holt, “The absorption and
reflection spectra of leaves, chloroplast suspensions, and chloroplast
fragments as measured in an Ulbricht sphere,” Am. J. Bot., vol. 33,
no. 10, pp. 769–777, 1946.
[12] J. T. Woolley, “Reflectance and transmittance of light by leaves,” Plant
Physiol., vol. 47, no. 5, pp. 656–662, May 1971.
[13] M. N. Merzlyak and A. A. Gitelson, “Why and what for the leaves
are yellow in autumn? On the interpretation of optical spectra of senesc-
ing leaves (Acer platanoides L.),” J. Plant Physiol., vol. 145, no. 3,
pp. 315–320, 1995.
[14] A. Young and G. Britton, “Carotenoids and stress,” in Stress Responses
in Plants: Adaptation and Acclimation Mechanisms, R. Hobbs and
H. Mooney, Eds. New York: Springer-Verlag, 1990, pp. 87–112.
[15] M. L. Adams, W. D. Philpot, and W. A. Norvell, “Yellowness index:
An application of spectral second derivatives to estimate chlorosis of
leaves on stressed vegetation,” Int. J. Remote Sens., vol. 20, no. 18,
pp. 3663–3675, Dec. 1999.
[16] A. A. Gitelson, M. N. Merzlyak, and H. K. Lichtenthaler, “Detection of
red-edge position and chlorophyll content by reflectance measurements
near 700 nm,” J. Plant Physiol., vol. 148, no. 3/4, pp. 501–508, 1996.
[17] G. A. F. Hendry, J. D. Houghton, and S. B. Brown, “Tansley review
no. 11. The degradation of chlorophyll—A biological enigma,” New
Phytol., vol. 107, no. 2, pp. 255–302, Oct. 1987.
[18] A. R. Benton, R. H. Haas, J. W. Rouse, and R. W. Toler, “Low-cost aerial
photography for vegetation analysis,” J. Appl. Photogr. Eng.,vol.2,
pp. 46–49, 1976.
[19] G. J. J. Verhoeven, “Near-infrared sensing of vegetation marks,” in Be-
yond Conventional Boundaries. New Technologies, Methodologies, and
Procedures for the Benefit of Aerial Archaeological Data Acquisition
and Analysis, G. J. J. Verhoeven, Ed. Zelzate, Belgium: Nautilus Aca-
demic Books, 2009, pp. 193–216.
[20] E. W. Chappelle, M. S. Kim, and J. E. McMurtrey, III, “Ratio analysis
of reflectance spectra (RARS): An algorithm for the remote estimation
of the concentrations of chlorophyll a, chlorophyll b, and carotenoids
in soybean leaves,” Remote Sens. Environ., vol. 39, no. 3, pp. 239–247,
Mar. 1992.
[21] H. W. Gausman, “Leaf reflectance of near-infrared,” Photogramm. Eng.,
vol. 40, pp. 183–191, 1974.
[22] H. W. Gausman, W. A. Allen, and R. Cardenas, “Reflectance of
cotton leaves and their structure,” Remote Sens. Environ., vol. 1, no. 1,
pp. 19–22, Mar. 1969.
[23] V. I. Myers, M. D. Heilman, R. J. P. Lyon, L. N. Namken, D. Simonett,
J. R. Thomas, C. L. Wiegand, and J. T. Woolley, “Soil, water and plant
relations,” in Remote Sensing With Special Reference to Agriculture and
Forestry. Washington, DC: Nat. Acad. Sci., 1970, pp. 253–297.
[24] R. C. Heller, “Imaging with photographic sensors,” in Remote Sensing
With Special Reference to Agriculture and Forestry. Washington, DC:
Nat. Acad. Sci., 1970, pp. 35–72.
[25] C. C. D. Lelong, P. C. Pinet, and H. Poilvé, “Hyperspectral imag-
ing and stress mapping in agriculture: A case study on wheat in
Beauce (France),” Remote Sens. Environ., vol. 66, no. 2, pp. 179–191,
Nov. 1998.
[26] M. S. Moran, P. J. Pinter, B. E. Clothier, and S. G. Allen, “Effect
of water stress on the canopy architecture and spectral indices of ir-
rigated alfalfa,” Remote Sens. Environ., vol. 29, no. 3, pp. 251–261,
Sep. 1989.
[27] S. H. Shakir Hanna and B. Girmay-Gwahid, “Spectral characterization
of water stress impact on some agricultural crops: II. Studies on alfalfa
using handheld radiometer,” in Proc. SPIE—Remote Sensing for Agri-
culture, Ecosystems, and Hydrology, Barcelona, Spain, 1998, vol. 3499,
pp. 296–306.
[28] S. H. Shakir Hanna and B. Girmay-Gwahid, “Spectral characterization
of water stress impact on some agricultural crops: III. Studies on Sudan
grass and other different crops using handheld radiometer,” in Proc.
SPIE—Remote Sensing for Earth Science, Ocean, and Sea Ice Appli-
cations, Florence, Italy, 1999, vol. 3868, pp. 154–166.
[29] O. Braasch, “Gallipoli Ahead—Air survey between the Baltic and
Mediterranean,” Publikácia vznikla v rámci Centra excelentnosti SAV
Výskumné centrum Najstarších dejín Podunajska pri Archeologickom
ústave SAV v Nitre, Nitra, Slovakia, pp. 84–96, 2007.
[30] Eastman Kodak Company, Conversion of Light (Photons) to Electronic
Charge, Aug. 5, 1999. [Online]. Available: http://www.kodak.com/US/
en/digital/pdf/ccdPrimerPart1.pdf
[31] J. Nakamura, “Basics of image sensors,” in Image Sensors and Signal
Processing for Digital Still Cameras, J. Nakamura, Ed. Boca Raton,
FL: Taylor & Francis, 2006, pp. 53–93.
[32] F. van de Wiele, “Photodiode quantum efficiency,” in Proc. NATO
Adv. Study Inst. Solid State Imag., Louvain-la-Neuve, Belgium, 1975,
pp. 47–90.
[33] F. Dierks, “Sensitivity and image quality of digital cameras,”
Basler AG, Ahrensburg, Germany, Oct. 27, 2004. [Online]. Available:
http://www.baslerweb.com/downloads/9835/Image_Quality_of_Digital_
Cameras.pdf
[34] G. C. Holst, CCD Arrays, Cameras and Displays. Bellingham, WA:
SPIE, 1996.
[35] A. J. P. Theuwissen, Solid-State Imaging With Charge-Coupled Devices.
Dordrecht, The Netherlands: Kluwer, 1995.
[36] G. J. J. Verhoeven, “Imaging the invisible using modified digital still
cameras for straightforward and low-cost archaeological near-infrared
photography,” J. Archaeol. Sci., vol. 35, no. 12, pp. 3087–3100,
Dec. 2008.
[37] D. D. Busch, Digital Infrared Pro Secrets. Boston, MA: Thomson,
2007.
[38] T. Koyama, “Optics in digital still cameras,” in Image Sensors and Signal
Processing for Digital Still Cameras, J. Nakamura, Ed. Boca Raton,
FL: Taylor & Francis, 2006, pp. 21–51.
[39] S. F. Ray, Applied Photographic Optics. Lenses and Optical Systems
for Photography, Film, Video, Electronic and Digital Imaging. Oxford,
U.K.: Focal Press, 2002.
[40] Eastman Kodak Company, Kodak KAF-8300CE Image Sensor.
3326 (H) x 2504 (V). Full-Frame CCD Color Image Sensor with
Square Pixels for Color Cameras, Apr. 18, 2005. [Online]. Available:
http://www.kodak.com/ezpres/business/ccd/global/plugins/acrobat/en/
datasheet/fullframe/KAF-8300LongSpec.pdf
[41] D. M. Chabries, S. W. Booras, and G. H. Bearman, “Imaging the
past: Recent applications of multispectral imaging technology to de-
ciphering manuscripts,” Antiquity, vol. 77, no. 296, pp. 359–372,
Jun. 2003.
[42] D. Har, Y. Son, and S. Lee, “SLR digital camera for forensic pho-
tography,” in Proc. SPIE—Sensors and Camera Systems for Scientific,
Industrial, and Digital Photography Applications V, San Jose, CA, 2004,
vol. 5301, pp. 276–284.
[43] D. King, P. Walsh, and F. Ciuffreda, “Airborne digital frame camera
imaging for elevation determination,” Photogramm. Eng. Remote Sens.,
vol. 60, no. 11, pp. 1321–1326, 1994.
[44] D. J. King, K. Haddow, and D. G. Pitt, “Evaluation of CIR digital camera
imagery in regeneration assessment: Research design and initial results,”
in Proc. 1st North Amer. Symp. Small Format Aerial Photography,
Cloquet, MN, 1997, pp. 186–195.
[45] G. J. J. Verhoeven, “Becoming a NIR-sensitive aerial archaeologist,” in
Proc. SPIE—Remote Sensing for Agriculture, Ecosystems, and Hydrol-
ogy IX, Florence, Italy, 2007, vol. 6742, pp. 674 20Y-1–674 20Y-13.
[46] J. Chen, Digital Infrared at Jim Chen Photography, 2007. [Online].
Available: http://www.jimchenphoto.com
[47] The Nikon Guide to Digital Photography With the D50 Digital Camera,
Nikon Corporation, Tokyo, Japan, 2005.
[48] “ICX413AQ. APS Size Diagonal 28.4 mm (Type 1.8) 6.15 M Effec-
tive Pixels Color CCD Image Sensor,” Cx-News. Sony Semiconductor
News, 2002. [Online]. Available: http://www.sony.net/Products/SC-HP/
cx_news/vol28/pdf/icx413np.pdf
[49] B. E. Bayer, “Color imaging array,” U.S. Patent 3 971 065, Jul. 20, 1976.
[50] K. Parulski and K. Spaulding, “Color image processing for digi-
tal cameras,” in Digital Color Imaging Handbook, G. Sharma, Ed.
Boca Raton, FL: CRC Press, 2003, pp. 728–757.
[51] P. G. Dorrell, Photography in Archaeology and Conservation.
Cambridge, U.K.: Cambridge Univ. Press, 1994.
[52] B. Rørslett, All You Ever Wanted to Know About Digital UV and IR
Photography, But Could Not Afford to Ask, 2004. [Online]. Available:
http://www.naturfotograf.com/UV_IR_rev00.html
[53] G. J. J. Verhoeven and J. Loenders, “Looking through black-tinted
glasses—A remotely controlled infrared eye in the sky,” in 2nd Int. Conf.
Remote Sens. Archaeol. Proc. 2nd Int. Workshop—From Space to Place,
Rome, Italy, 2006, pp. 73–79.
[54] G. J. J. Verhoeven, J. Loenders, F. Vermeulen, and R. Docter, “Helikite
aerial photography or HAP—A versatile means of unmanned, radio
controlled low altitude aerial archaeology,” Arch. Prosp., vol. 16, no. 2,
pp. 125–138, 2009.
[55] B. Rørslett, Lens Survey and Subjective Evaluations, 2007. [Online].
Available: http://www.naturfotograf.com/lens_surv.html
[56] T. Mizoguchi, “Evaluation of image sensors,” in Image Sensors
and Signal Processing for Digital Still Cameras, J. Nakamura, Ed.
Boca Raton, FL: Taylor & Francis, 2006, pp. 179–203.
[57] R. Berry and J. Burnell, The Handbook of Astronomical Image Process-
ing. Richmond, VA: Willmann-Bell, 2005.
[58] G. J. J. Verhoeven, “It’s all about the format—Unleashing the power of
RAW aerial photography,” Int. J. Remote Sens., to be published.
[59] C. J. Skinner and L. Bergeron, “Characteristics of NICMOS detector
dark observations,” Space Telesc. Sci. Inst., Baltimore, MD, Oct. 1997.
[Online]. Available: http://www.stsci.edu/hst/nicmos/documents/isrs/
isr_026.pdf
[60] H. T. Hytti, “Characterization of digital image noise properties based on
RAW data,” in Proc. SPIE—Image Quality and System Performance III,
2005, vol. 6059, p. 605 90A.
[61] Eastman Kodak Company, CCD Image Sensor Noise Sources,
Jan. 10, 2005. [Online]. Available: http://www.kodak.com/global/
plugins/acrobat/en/digital/ccd/applicationNotes/noiseSources.pdf
[62] Y. Reibel, M. Jung, M. Bouhifd, B. Cunin, and C. Draman, “CCD
or CMOS camera noise characterisation,” Eur. Phys. J., Appl. Phys.,
vol. 21, no. 1, pp. 75–80, Jan. 2003.
[63] C. Buil, Comparaison du Canon 10D et du Nikon D70 en imagerie as-
tronomique longue pose. [Online]. Available: http://astrosurf.com/build/
70v10d/eval.htm
[64] S. B. Howell, Handbook of CCD Astronomy. Cambridge, U.K.:
Cambridge Univ. Press, 2006.
[65] R. Wodaski, The New CCD Astronomy: How to Capture the Stars With
a CCD Camera in Your Own Backyard. Duvall, WA: New Astronomy
Press, 2002.
[66] J. Lukás, J. Fridrich, and M. Goljan, “Digital camera identification from
sensor pattern noise,” IEEE Trans. Inf. Forensics Secur., vol. 1, no. 2,
pp. 205–214, Jun. 2006.
[67] W. Budde, “Definition of the linearity range of Si photodiodes,” Appl.
Opt., vol. 22, no. 11, pp. 1780–1784, Jun. 1983.
[68] P. L. Vora, J. E. Farrell, J. D. Tietz, and D. H. Brainard, “Linear models
for digital cameras,” in Proc. IS&T’s 50th Annu. Conf.—A Celebration
of All of Imaging, Cambridge, MA, 1997, pp. 377–382.
[69] R. N. Clark, The Nikon D50 Digital Camera: Sensor Noise, Dynamic
Range, and Full Well Analysis, Mar. 3, 2006. [Online]. Available: http://
clarkvision.com/imagedetail/evaluation-nikon-d50/index.html
[70] Leofoo, Additional Information on Nikkor 20 mm Ultra-Wideangle
Lenses—20 mm f/3.5, 2001. [Online]. Available: http://www.mir.com.my/rb/photography/companies/nikon/nikkoresources/late70nikkor/ultrawides/20mma.htm
[71] A. Mann, Infrared Optics and Zoom Lenses. Bellingham, WA: SPIE,
2000.
[72] Nikon Corporation, AF Nikkor 50 mm f/1.8D, 2007. [Online]. Avail-
able: http://www.nikonimaging.com/global/products/lens/af/normal/af_
50mmf_18d/index.htm
[73] J. Peñuelas, I. Filella, C. Biel, L. Serrano, and R. Savé, “The reflectance
at the 950–970 nm region as an indicator of plant water status,” Int. J.
Remote Sens., vol. 14, no. 10, pp. 1887–1905, Jul. 1993.
[74] P. S. Thenkabail, R. B. Smith, and E. De Pauw, “Hyperspectral vege-
tation indices and their relationships with agricultural crop characteris-
tics,” Remote Sens. Environ., vol. 71, no. 2, pp. 158–182, Feb. 2000.
[75] C. J. Tucker, “A comparison of satellite sensor bands for vege-
tation monitoring,” Photogramm. Eng. Remote Sens., vol. 44, no. 11,
pp. 1369–1380, 1978.
[76] B. Datt, “Visible/near infrared reflectance and chlorophyll content in
eucalyptus leaves,” Int. J. Remote Sens., vol. 20, no. 14, pp. 2741–2759,
Sep. 1999.
[77] J. Adams, K. Parulski, and K. E. Spaulding, “Color processing in digital
cameras,” IEEE Micro, vol. 18, no. 6, pp. 20–30, Nov./Dec. 1998.
[78] D. H. Brainard and D. Sherman, “Reconstructing images from
trichomatic samples: From basic research to practical applications,” in
Proc. 3rd IS&T/SID Color Imag. Conf.—Color Sciences, Systems and
Applications, Scottsdale, AZ, 1995, pp. 4–10.
[79] L. Chang and Y.-P. Tan, “Effective use of spatial and spectral correlations
for color filter array demosaicking,” IEEE Trans. Consum. Electron.,
vol. 50, no. 1, pp. 355–365, Feb. 2004.
[80] B. C. de Lavarène, D. Alleysson, and J. Hérault, “Practical implementa-
tion of LMMSE demosaicing using luminance and chrominance spaces,”
Comput. Vis. Image Underst., vol. 107, no. 1/2, pp. 3–13, Jul. 2007.
[81] B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and
R. M. Mersereau, “Demosaicking: Color filter array interpolation. Ex-
ploring the imaging process and the correlations among three color
planes in single-chip digital cameras,” IEEE Signal Process. Mag.,
vol. 22, no. 1, pp. 44–54, Jan. 2005.
[82] R. Lukac and K. N. Plataniotis, “Demosaicked image postprocessing
using local color ratios,” IEEE Trans. Circuits Syst. Video Technol.,
vol. 14, no. 6, pp. 914–920, Jun. 2004.
[83] K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed
demosaicing algorithm,” IEEE Trans. Image Process., vol. 14, no. 3,
pp. 360–369, Mar. 2005.
[84] D. Coffin, Decoding RAW Digital Photos in Linux, 2008. [Online]. Avail-
able: http://cybercom.net/~dcoffin/dcraw/
[85] G. S. Birth and G. R. McVey, “Measuring the color of growing turf
with a reflectance spectrophotometer,” Agron. J., vol. 60, pp. 640–643,
1968.
[86] C. F. Jordan, “Derivation of leaf-area index from quality of light on the
forest floor,” Ecology, vol. 50, no. 4, pp. 663–666, 1969.
[87] R. L. Pearson and L. D. Miller, “Remote mapping of standing crop
biomass for estimation of the productivity of the short-grass prairie,
Pawnee National Grasslands, Colorado,” in Proc. 8th Int. Symp. Remote
Sens. Environ., 1972, pp. 1355–1379.
[88] J. Peñuelas and I. Filella, “Visible and near-infrared reflectance tech-
niques for diagnosing plant physiological status,” Trends Plant Sci.,
vol. 3, no. 4, pp. 151–156, 1998.
[89] M. Schlerf, C. Atzberger, and J. Hill, “Remote sensing of forest biophys-
ical variables using HyMap imaging spectrometer data,” Remote Sens.
Environ., vol. 95, no. 2, pp. 177–194, Mar. 2005.
[90] D. Zhao, K. R. Reddy, V. G. Kakani, J. J. Read, and S. Koti, “Selection
of optimum reflectance ratios for estimating leaf nitrogen and chloro-
phyll concentrations of field-grown cotton,” Agron. J., vol. 97, no. 1,
pp. 89–98, 2005.
[91] B. Datt, “A new reflectance index for remote sensing of chlorophyll
content in higher plants: Tests using eucalyptus leaves,” J. Plant Physiol.,
vol. 154, no. 1, pp. 30–36, 1999.
[92] G. Begni, “Selection of the optimum spectral bands for the SPOT satel-
lite,” Photogramm. Eng. Remote Sens., vol. 48, no. 10, pp. 1613–1620,
1982.
[93] J. N. Hampton, “An experiment in multispectral air photography for
archaeological research,” Photogramm. Rec., vol. 8, no. 43, pp. 37–64,
Apr. 1974.
[94] R. Lasaponara and N. Masini, “Detection of archaeological crop marks
by using satellite QuickBird multispectral imagery,” J. Archaeol. Sci.,
vol. 34, no. 2, pp. 214–221, Feb. 2007.
[95] D. B. Lobell, G. P. Asner, B. E. Law, and R. N. Treuhaft, “View
angle effects on canopy reflectance and spectral mixture analysis of
coniferous forests using AVIRIS,” Int. J. Remote Sens., vol. 23, no. 11,
pp. 2247–2262, Jun. 2002.
[96] D. F. Wanjura and J. L. Hatfield, “Vegetative and optical character-
istics of four-row crop canopies,” Int. J. Remote Sens., vol. 9, no. 2,
pp. 249–258, Feb. 1988.
[97] R. N. Colwell, “Spectrometric considerations involved in making rural
land use studies with aerial photography,” Photogrammetria, vol. 20,
no. 1, pp. 15–33, Feb. 1965.
[98] J. H. Everitt, D. E. Escobar, J. R. Noriega, M. R. Davis, and
I. Cavazos, “A digital imaging system and its application to natural
resource management,” in Proc. 1st North Amer. Symp. Small Format
Aerial Photography, Cloquet, MN, 1997, pp. 123–135.
[99] A. R. Huete, K. Didan, T. Miura, E. P. Rodriguez, X. Gao, and
G. Ferreira, “Overview of the radiometric and biophysical performance
of the MODIS vegetation indices,” Remote Sens. Environ., vol. 83,
no. 1/2, pp. 195–213, Nov. 2002.
[100] W. M. Baugh and D. P. Groeneveld, “Broadband vegetation index per-
formance evaluated for a low-cover environment,” Int. J. Remote Sens.,
vol. 27, no. 21, pp. 4715–4730, Nov. 2006.
[101] D. N. H. Horler, M. Dockray, and J. Barber, “The red edge of plant
leaf reflectance,” Int. J. Remote Sens., vol. 4, no. 2, pp. 273–288,
1983.
[102] G. A. Carter, “Ratios of leaf reflectances in narrow wavebands as indica-
tors of plant stress,” Int. J. Remote Sens., vol. 15, no. 3, pp. 697–703,
1994.
[103] L. S. Galvão, I. Vitorello, and M. A. Pizarro, “An adequate band posi-
tioning to enhance NDVI contrasts among green vegetation, senescent
biomass, and tropical soils,” Int. J. Remote Sens., vol. 21, no. 9,
pp. 1953–1960, Jun. 2000.
[104] C. J. Tucker, “Red and photographic infrared linear combinations for
monitoring vegetation,” Remote Sens. Environ., vol. 8, no. 2, pp. 127–
150, May 1979.
[105] G. A. Carter, “Responses of leaf spectral reflectance to plant stress,”
Am. J. Bot., vol. 80, no. 3, pp. 239–243, Mar. 1993.
[106] G. A. Carter, “General spectral characteristics of leaf reflectance
responses to plant stress and implications for the remote sensing of veg-
etation,” in Proc. ASPRS Annu. Conf.—Gateway to the New Millennium,
St. Louis, MO, 2001. CD-ROM.
[107] G. A. Carter, W. G. Cibula, and R. L. Miller, “Narrow-band re-
flectance imagery compared with thermal imagery for early detec-
tion of plant stress,” J. Plant Physiol., vol. 148, no. 5, pp. 515–522,
1996.
[108] G. A. Carter and L. Estep, “General spectral characteristics of leaf re-
flectance responses to plant stress and their manifestation at the land-
scape scale,” in From Laboratory Spectroscopy to Remotely Sensed
Spectra of Terrestrial Ecosystems, R. S. Muttiah, Ed. Dordrecht,
The Netherlands: Kluwer, 2002, pp. 271–293.
[109] G. A. Carter and R. L. Miller, “Early detection of plant stress by dig-
ital imaging within narrow stress-sensitive wavebands,” Remote Sens.
Environ., vol. 50, no. 3, pp. 295–302, Dec. 1994.
[110] A. A. Gitelson and M. N. Merzlyak, “Signature analysis of leaf
reflectance spectra: Algorithm development for remote sensing of
chlorophyll,” J. Plant Physiol., vol. 148, no. 3/4, pp. 494–500, 1996.
[111] N. C. Coops, C. Stone, D. S. Culvenor, L. A. Chisholm, and
R. N. Merton, “Chlorophyll content in eucalypt vegetation at the leaf
and canopy scales as derived from high spectral resolution data,” Tree
Physiol.: Int. Botanical J., vol. 23, no. 1, pp. 23–31, Jan. 2003.
[112] C. P. Ferri, A. R. Formaggio, and M. A. Schiavinato, “Narrow band
spectral indexes for chlorophyll determination in soybean canopies
[Glycine max (L.) Merril],” Braz. J. Plant Physiol., vol. 16, no. 3,
pp. 131–136, Dec. 2004.
[113] H. K. Lichtenthaler, A. A. Gitelson, and M. Lang, “Non-destructive
determination of chlorophyll content of leaves of a green and an
aurea mutant of tobacco by reflectance measurements,” J. Plant Physiol.,
vol. 148, no. 3/4, pp. 483–493, 1996.
[114] R. Palmer, “If they used their own photographs they would
not take them like that,” in
From the Air: Understanding Aerial Archae-
ology, K. Brophy and D. Cowley, Eds. Stroud, U.K.: Tempus, 2005,
pp. 94–116.
[115] S. Coleman, “Taking advantage: Vertical aerial photographs commis-
sioned for local authorities,” in Populating Clay Landscapes, J. Mills
and R. Palmer, Eds. Stroud, U.K.: Tempus, 2007, pp. 28–33.
[116] J. Mills, “Bias and the world of the vertical aerial photograph,” in
From the Air: Understanding Aerial Archaeology, K. Brophy and
D. Cowley, Eds. Stroud, U.K.: Tempus, 2005, pp. 117–126.
[117] R. Palmer, “Seventy-five years v. ninety minutes: Implications of
the 1996 Bedfordshire vertical aerial survey on our perceptions of
clayland archaeology,” in Populating Clay Landscapes, J. Mills and
R. Palmer, Eds. Stroud, U.K.: Tempus, 2007, pp. 88–103.
[118] J. N. Hampton, “Aerial reconnaissance for archaeology: Uses of the
photographic evidence,” Photogramm. Rec., vol. 9, no. 50, pp. 265–272,
Oct. 1977.
[119] D. M. Reeves, “Aerial photography and archaeology,” Amer. Antiq.,
vol. 2, no. 2, pp. 102–107, Oct. 1936.
[120] J. Dungan, L. Johnson, C. Billow, P. Matson, J. Mazzurco, J. Moen, and
V. Vanderbilt, “High spectral resolution reflectance of Douglas fir grown
under different fertilization treatments: Experiment design and treatment
effects,” Remote Sens. Environ., vol. 55, no. 3, pp. 217–228, Mar. 1996.
Geert J. Verhoeven was born in 1978. He received
the Master’s degree in archaeology from Ghent Uni-
versity, Ghent, Belgium, in 2002. Since 2003, he has
been working at the Department of Archaeology and
Ancient History of Europe, Ghent University. From
September 2004 till October 2008, he was a Ph.D.
fellowship of the Research Foundation—Flanders
(FWO) and developed new technologies, methodolo-
gies, and data processing procedures for the benefit
of aerial archaeological data acquisition and analysis.
For this research, he obtained the Ph.D. degree in
May 2009.
Since 2003, he has been working at the Department of Archaeology and
Ancient History of Europe, Ghent University. His main research interests
concern remote sensing technology, GIS, aerial and ground-based photography,
photogrammetry, and archaeological computing.
Philippe F. Smet was born in 1979. He received
the M.Sc. and Ph.D. degrees in physics from Ghent
University, Ghent, Belgium, in 2001 and 2005,
respectively.
He is currently a Postdoctoral Researcher for the
Fund for Scientific Research—Flanders (FWO) with
LumiLab, Department of Solid State Sciences, Ghent
University. His main research is focused on color
conversion materials for light-emitting diodes and
persistent luminescent materials for safety applica-
tions. His other research topics include the effects of
particle size on the emission properties of rare-earth-doped materials.
Dirk Poelman was born in 1963. He received the
Ph.D. degree in physics, on electroluminescent thin
films, from Ghent University, Ghent, Belgium.
He is currently leading the research group Lu-
miLab, Department of Solid State Sciences, Ghent
University. In addition, he lectures several courses
at the bachelor and master levels. He is a coauthor
of more than 130 international publications and
conference contributions. His research interests in-
clude luminescent powders and thin films, structural
characterization of materials using microscopic and
X-ray techniques, and photocatalysis for air purification.
Frank Vermeulen was born in 1960. He received the
Ph.D. degree in archaeology from Ghent University,
Ghent, Belgium, in 1988.
Since 1999, he has been a Full-Time Professor
in Roman archaeology and archaeological methods
with the Department of Archaeology and Ancient
History of Europe, Ghent University. His research
mainly focuses on the archaeology of landscapes,
with an emphasis on Mediterranean environments
and the development of geoarchaeological method-
ology and fieldwork. He has organized seven inter-
national congresses, published more than ten archaeological monographs, and
written more than 80 articles in international journals and series.