SSC17-VI-02
CHIEM: A new compact camera for hyperspectral imaging
Joris Blommaert, Bavo Delauré, Stefan Livens, Dirk Nuyts
VITO Remote Sensing Department
Boeretang 200, B-2400 Mol, Belgium; +32 14 33 55 11
joris.blommaert@vito.be
Klaas Tack, Andy Lambrechts
IMEC
Kapeldreef 7, B-3001 Leuven, Belgium; +32 16 28 14 36
klaas.tack@imec.be
Vincent Moreau
AMOS
Liège Science Park, Rue des Chasseurs Ardennais, B-4031 Angleur, Belgium, +32 4 361 40 40
vincent.moreau@amos.be
Eric Callut, Gérard Habay, Koen Vanhoof, Michel Caubo
Deltatec
Rue Gilles Magnée 92/6, B-4430 Ans, Belgium, +32 4 2397880
e.callut@deltatec.be
Jan Vandenbussche
CMOSIS bvba, AMS
Covelierstraat 15, B-2600 Antwerp, +32 3 260 17 30
jan.vandenbussche@cmosis.com
Atul Deep, Kyriaki Minoglou
European Space Agency - ESTEC,
Keplerlaan 1, PO Box 299, +31 71 565 6565
atul.deep@esa.int
ABSTRACT
We have developed an engineering model of a novel compact hyperspectral imager. The CHIEM instrument is
designed to be compatible with a 12U CubeSat satellite, offering a swath of 100km and a GSD of 25m from 600km
altitude. The hyperspectral sensor has thin film interference filters directly deposited on a 12Mpixel CMOS 2D
detector array. The spectral range covers 470 to 900 nm, with narrow spectral resolution (FWHM) between 5 and
10nm. Besides the hyperspectral zone which covers 2/3 of the detector array, it also contains 2 panchromatic zones
without filters. While the baseline design uses a conventional front-side illuminated CMOS sensor, the development
also includes filter depositions on a back-side illumination (BSI) version with a higher sensitivity. For the optical
design of the front telescope, CHIEM uses a very compact three mirror anastigmat, which allows a wide field of
view in both the across-track and along-track direction (> 9.5° x 7.2°). The readout electronics (ROE) provides all
required sensor interfaces (power, control, data) enabling its full performance operation, and also a set of backend
interfaces for system power, remote control, and backend remote data (to EGSE) and local storage interfaces.
INTRODUCTION
In the CHIEM project we have developed an instrument
for remote sensing which addresses two important
trends in the space business in recent decades: the
demand for detailed spectral information and the need
to build smaller and cheaper satellite platforms. CHIEM
stands for Compact Hyperspectral Instrument
Engineering Model. Spectral observations allow
detailed studies in different areas like water quality,
land cover, land use and climate change. Current
operational hyperspectral missions are Hyperion and
CHRIS [1]. New developments giving higher sensitivity
and better spectral coverage are for example EnMAP,
HyspIRI and PRISMA [2,3,4]. These missions are also
quite large in size, have a relatively low revisit rate, are
expensive and complex. They are built and operated by
space agencies.
An earlier, successful example where a much smaller
satellite replaced a larger one is PROBA-V. This
multispectral imaging mini-satellite provides global
land cover and vegetation monitoring every two days
for the entire planet [5,6]. Nowadays, further
instrument miniaturization allows the industry to
develop extremely small remote sensing satellites on
CubeSat platforms. One example of such a commercial
initiative is the one by the American company Planet
Labs, Inc. They aim to offer imaging of the Earth using
a cluster of CubeSats at 3-5 m resolution and daily
coverage. The latter examples however only provide
multispectral imaging and lack spectral detail.
With the CHIEM project we develop an instrument that combines both: imaging with high spectral resolution on a platform that is 12U CubeSat compatible. With this instrument, the hyperspectral
imaging is obtained with a so-called Linear Variable
Filter (LVF) directly deposited on a 2D sensor array.
The cross-track dimension of the array provides the
spatial information, whereas the along-track dimension
registers the spectral information. As is schematically
shown in Figure 1, a spectrum over the full provided
wavelength range is obtained by scanning the array
along track at the appropriate frame rate.
The baseline instrument design is for a satellite mission operating at 600 km altitude, but alternative operational scenarios for the imager, such as airborne applications, are also taken into account in the development. An overview of the relevant parameters for the two scenarios is given in Table 1. The wavelength coverage of the spectrometer is at least 470-900 nm.
Table 1: Overview of parameters for the baseline
satellite design and for an airborne application.
The concept of a very small LVF-based hyperspectral
satellite was first proposed in the PhytoMapper project
[7] and then further refined, including a separate
panchromatic detector to combine the acquisition of
high spatial resolved information with hyperspectral
information [8]. Similar concepts, targeting a CubeSat
platform, are described in [9,10]. Around the same
time, advancements in microelectronics technology
showed the possibility to create hyperspectral imagers
based on interference filters using a new technique in
which filter material is deposited directly onto the
detector. This process has a number of advantages compared to traditional LVF filters. It allows great design flexibility, up to defining individual filters at the single-pixel level. At the same time it is possible to define zones with different functionality on an individual detector, for instance a combination of spectral zones with panchromatic zones. Moreover, it offers improved performance compared to classical LVF filters through better alignment and reduced straylight.
The practical feasibility of this process was
demonstrated on a 2Mpixel CMV2000 sensor from
CMOSIS using a single filter stack, resulting in a
spectral range of 600nm-900nm [11]. This sensor-filter
combination has spurred considerable interest, most
notably for instruments aboard small remotely piloted
aircraft system platforms. An example of such a camera
is the COSI-Cam (COmpact Hyperspectral Imaging
System) [12,13]. The COSI spectral imager has been
operated by VITO for agricultural purposes to derive
different parameters like chlorophyll content, biomass
and hydric status indicators. The COSI system provides accurate action and information maps related to the crop status [14]. An illustration of the COSI images and hypercube data is shown in Figure 2.

Figure 1: Schematic overview of the scanning technique with a LVF-based satellite instrument
From the above experience, it was also evident that extending the spectral range coverage further into the visible would greatly increase the usefulness. To realize this, the ESA
project FIDELHEO (FIlter DEposition for Lightweight
Hyperspectral Earth Observers) was initiated and
performed by IMEC and VITO. From the start, the aim
of the FIDELHEO project was to design and produce a
hyperspectral imager optimized for Earth observation
purposes [15]. This starting point has guided the design
choices made during development. To cover a wider
spectral range of 470-900nm, it was necessary to use a
more complex filter deposition in which two filter
stacks are used, each of a different material. The project was successful, and the resulting sensor is nowadays used in commercial cameras like the ButterflEYE LS (Cubert GmbH) and Imec's Snapscan.
While FIDELHEO-like hyperspectral filters are very
well suited for Earth observation from aerial platforms,
employing them efficiently for imaging from space
would greatly benefit from a detector which offers a
larger number of pixels in the across-track direction,
enabling wider swath imaging for a fixed resolution.
This was the first reason to initiate a new ESA project,
called CHIEM, targeted at making interference
hyperspectral imagers more suitable for space
applications. The project is not limited to hyperspectral filter development, but also covers the design of the other subsystems needed to match the filters.
In the CHIEM project we develop the compact hyperspectral instrument from breadboard to engineering model by implementing a number of important improvements in the different subsystems.
The ESA project is funded by BELSPO and is carried
out by a consortium of Belgian companies with the
following responsibilities:
- System engineering activities to refine the conceptual design of the Compact Hyperspectral Instrument and to complete the end-to-end performance analysis (VITO).
- Opto-mechanical design, including telescope, Focal Plane Assembly, baffling, and thermo-mechanical I/F (AMOS).
- Modification of an existing large format CMOS sensor design to allow the deposition of Fabry-Pérot filters at wafer level (CMOSIS).
- Improvement of the Linear Variable Filter (LVF) to match the detector format and the requirements of out-of-band rejection (IMEC).
- Deposition of LVF filters on BSI sensors (IMEC).
- Development of the detector read-out electronics and of the relevant electrical I/F (Deltatec).
- Development of the EGSE to interface the read-out electronics to a standard computer (Deltatec).
- Testing and verification of the fully assembled focal plane assembly.
At present (June 2017), the CHIEM project is in its
final phases, where the opto-mechanical design is
finalized, as well as the manufacturing of the ROE and
EGSE of the system. The characterization campaign of
the produced BSI and FSI sensors has started. Analysis
of the test campaign is expected to be finalized in the
Summer of 2017.
Figure 3: First CHIEM hyperspectral FSI sensor
image
Figure 2 : An illustrative example with a false colour
image and a hyperspectral data cube, obtained with
the COSI-cam above an agricultural field
(strawberry) [12,13,14]. Each pixel in the image
contains a continuous spectrum.
In the following sections the different aspects of the instrument are described in more detail.
CMOS SENSOR
With the targeted application in mind, the system
specifications have been translated to sensor
specifications and compared to proven CMOS imager
products. Finally the CMV12k was selected. The CMV
standard product series serves a broad range of
industrial vision, movie, traffic monitoring and motion control applications. For this application the sensor is equipped with an 8T global shutter pixel structure, as shown in Figure 4. This pixel combines pipelined global shutter operation with correlated double sampling (CDS) [16].
Figure 4: Pixel architecture and operation.
The image sensor architecture is shown in Figure 5.
The pixel array measures 4096 by 3072 pixels at 5.5 µm
pitch. With a dual readout on the top and bottom of the
sensor, a global throughput rate of 36 Gbps is achieved for this 12Mpixel sensor at 10-bit resolution and approximately 300 frames/s.
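As a rough cross-check of these figures, the raw pixel rate implied by the array size and frame rate can be computed directly. The short sketch below is only indicative: it ignores readout overhead and LVDS framing, and takes the 64 links at 600 Mbps from the readout electronics section later in this paper.

```python
# Rough throughput estimate for the CMV12k-class sensor described above.
# Values (4096 x 3072 pixels, 10-bit output, ~300 fps, 64 LVDS links at
# 600 Mbps) are taken from the paper; overheads are ignored.

cols, rows = 4096, 3072
bit_depth = 10
fps = 300

pixel_rate = cols * rows * fps                      # pixels per second
data_rate_gbps = pixel_rate * bit_depth / 1e9       # useful data rate

lvds_links, lvds_rate_mbps = 64, 600
link_capacity_gbps = lvds_links * lvds_rate_mbps / 1e3

print(f"pixel rate       : {pixel_rate / 1e9:.2f} Gpixel/s")
print(f"useful data rate : {data_rate_gbps:.1f} Gbit/s")     # ~37.7 Gbit/s
print(f"LVDS capacity    : {link_capacity_gbps:.1f} Gbit/s")  # 38.4 Gbit/s
```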
The CMV12k sensor was used for both FSI and BSI filter post-processing, adding hyperspectral filters designed for the targeted applications. The BSI route offers the advantage of higher sensitivity and eliminates the ripple in the quantum efficiency (see next section).
Some samples from the FSI route were characterized before deposition of the filters, so no color filters or microlenses were present on the measured samples. The most important measured characteristics of this sensor (FSI, without hyperspectral filters) are listed in Table 2.
Figure 5: Image sensor architecture.
Table 2: CMV12k Sensor Performance

Specification | Value | Measured | Unit
Pixel size | 5.5 | - | µm
Resolution | 4096 x 3072 | - | px
Frame rate | 300 @ 10bit, 132 @ 8bit | - | fps
Read noise at 25°C, default gain | 25 | 16 | e-
Dark current at 25°C die temperature | <100 | <100 | e-/s
Quantum Efficiency | 50 | 20 (1) | %
Full Well | 10 | 9.4 | ke-
Dynamic range | 60 | 55.5 | dB
Power consumption | 3 | 2.2 | W

Note (1): Test silicon in this work was starting material for further hyperspectral filter processing and not equipped with microlenses.
LINEAR VARIABLE FILTER
Introduction
In the traditional approach, narrow band linear variable
filters are typically deposited on a glass substrate and
then integrated separately with the sensor (e.g. as cover
glass). This approach has several disadvantages related
to the integration of the filter with the sensor. First,
reflections between the Linear Variable Filter and the
focal plane array make it more sensitive to stray light
and second, the alignment of the filters with the pixels
is not obvious. These disadvantages can be eliminated by depositing the filter directly on the focal plane array, using typical semiconductor process tools, which guarantees accurate alignment and removes the additional reflections.
An example of a wafer with imager and optical filters is
shown in Figure 6.
Figure 6: CMV12000 wafer: image sensor with
integrated optical filters
An additional advantage of this approach is the freedom
for the layout of the filters on the focal plane array.
Indeed, using our technique, every pixel has its own
narrow band filter and selecting the layout is only a
matter of designing the correct masks.
Figure 7: CMV12000 imager with integrated
hyperspectral filter bank and panchromatic zone
As shown in Figure 7, this enables us to combine stripes with different functionality on the imager. The gold colored parts of the chip at the top and the bottom are panchromatic zones, combined with a grey colored hyperspectral zone in the center of the chip.
Filter design
A first important aspect of the filter design is to select a
filter architecture that matches the typical operations of
semiconductor process technology, i.e. deposit a film,
pattern the film using lithography and remove the
unwanted part using an etch process. The hyperspectral
sensor presented in this paper therefore implements an
approximation of a Linear Variable Filter (LVF) using a
set of Fabry-Pérot filters. Figure 8 illustrates a typical Fabry-Pérot filter, made of a transparent layer (called the cavity) with a mirror on each side of that layer. The
mirrors are Bragg reflectors consisting of a multiple
layer stack of alternating high and low refractive index
materials. The reflectivity of these mirrors defines the
spectral range, the Full Width Half Max (FWHM) and
the quality of the filter. The cavity thickness t defines
the central wavelength of the optical filter.
Figure 8: The presented hyperspectral sensor relies on the use of Fabry-Pérot filters post-processed on top of the CMOS imager.
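To give a feel for the relation between cavity thickness and filter position, the first-order Fabry-Pérot condition can be evaluated for a range of thicknesses. This is a simplified sketch only: it assumes an ideal cavity with an effective refractive index n_eff at normal incidence and neglects the mirror phase shifts, so the numbers are purely illustrative and the value of n_eff is an assumption, not a design parameter from the paper.

```python
import numpy as np

# Idealized first-order Fabry-Pérot condition: m * wavelength = 2 * n_eff * t
# (normal incidence, mirror phase shifts neglected). n_eff is an assumed
# effective cavity index used only for illustration.

n_eff = 1.7      # assumed effective refractive index of the cavity
order = 1        # interference order used for this illustration

def center_wavelength_nm(cavity_thickness_nm, n=n_eff, m=order):
    """Central wavelength of an ideal Fabry-Pérot cavity of thickness t."""
    return 2.0 * n * cavity_thickness_nm / m

for t in np.linspace(140, 270, 5):   # illustrative cavity thicknesses in nm
    print(f"t = {t:5.1f} nm  ->  central wavelength ~ {center_wavelength_nm(t):4.0f} nm")
```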
As illustrated in Figure 8, the spectral filters are
arranged with cavities in a wedge or staircase-like
structure in which each step of the wedge acts as an
independent optical filter. To reduce the number of
process steps, the Bragg mirrors are shared between all
filters and only the cavity thickness is varied. The
spectral range of the Fabry-Pérot filter is therefore
limited by the bandwidth of the Bragg mirror. An
important parameter in the design of the Linear
Variable Filter is the selection of the material
combination for the Bragg mirrors, which shall be
materials compatible with CMOS (in terms of thermal
budget [17,18] and contamination level).
The combination of the selected filter architecture and the available CMOS-compatible materials enables a spectral range covering 470 nm to 900 nm. The full range can however not be covered with one Bragg stack. For this reason, the sensor integrates two wedges, completely independent from each other, each using a different build-up for their Bragg reflectors. The first wedge covers the range from 470-620 nm, while the second wedge covers from 600 up to 900 nm. The ~20 nm overlap allows coping with deposition tolerances and enables a smooth spectral acquisition across both ranges.
The selection of the specific bands, i.e. thickness
selection of the cavities, is done by adapting the
sampling rate to the FWHM of the optical filters, i.e.
the sampling distance between two filters is slightly
lower than the FWHM of the filters. At the edges of the spectral range, the FWHM of the Fabry-Pérot filters increases, allowing the sampling rate to be decreased.
This results in 104 bands for the wedge covering 600
nm to 900 nm and 50 bands for the wedge between 470
nm and 600 nm. The simulation results are shown in
Figure 9, visualizing a spectrogram, i.e. the spectral
response of all the filters shown as color intensity
variations. The lines in the spectrogram represent the
bands, while the columns in the spectrogram represent
the wavelengths. Dark blue means low transmission,
while brighter colors represent higher transmission.
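The spectrogram view of Figure 9 can be reproduced qualitatively by stacking the transmission curves of a set of idealized Fabry-Pérot filters into a matrix, with bands as rows and wavelength samples as columns. The sketch below uses the standard Airy transmission function; the mirror reflectivity R is an assumed value, not a parameter quoted in the paper, and the real filter responses also include the Bragg mirror bandwidth and the rejection filters discussed below.

```python
import numpy as np

# Qualitative "spectrogram": rows = bands, columns = wavelength samples.
# Ideal lossless Fabry-Pérot (Airy) transmission; the mirror reflectivity R
# is an assumption for this sketch.

R = 0.9
wavelengths = np.linspace(450, 950, 1000)      # nm
band_centers = np.linspace(470, 900, 154)      # 154 bands in total (50 + 104)

def airy_transmission(wl, center, reflectivity=R):
    # First-order cavity: the transmission peaks at wl == center.
    half_phase = np.pi * center / wl
    F = 4 * reflectivity / (1 - reflectivity) ** 2
    return 1.0 / (1.0 + F * np.sin(half_phase) ** 2)

spectrogram = np.array([airy_transmission(wavelengths, c) for c in band_centers])
print(spectrogram.shape)    # (154, 1000): bands x wavelengths, as in Figure 9
```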
Figure 9: Selection of patterning procedure - location of filters spread over the spectral range. Filter responses are shown as color intensity, with the lines representing the spectral bands and the columns representing the wavelength. Spectral responses are the responses of the Fabry-Pérot filters only, before applying the rejection filters.
A schematic layout of the sensor with the two wedges
can be observed in Figure 10. As already mentioned,
the image sensor used is the CMOSIS CMV12000. This
sensor has 4096 x 3072 pixels. To maximize the swath, we have selected the dimension with 4096 pixels as the width of the sensor. Because of the high
number of rows in the CMV12000 sensor, multiple
lines can be grouped per band. It is important for the
synchronization of the frame rate of the camera to the
speed of the scanned object, that the number of lines per
band is the same in both wedges. We have chosen to assign 12 lines to every band, resulting in 600 lines for LVF1 and 1248 lines for LVF2. The remaining lines are spent on two PAN zones at the top and the bottom and one dead zone of 72 lines.
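The resulting line budget on the 3072-row array can be checked with a short calculation. This sketch only uses the numbers quoted above; the even split of the remaining rows between the two PAN zones is an assumption, as the paper does not state it explicitly.

```python
# Line budget of the CMV12000 focal plane as described above.
rows_total = 3072
lines_per_band = 12

lvf1_lines = 50 * lines_per_band     # 600 lines, 470-620 nm wedge
lvf2_lines = 104 * lines_per_band    # 1248 lines, 600-900 nm wedge
dead_zone = 72                       # transition zone between the two wedges

pan_lines = rows_total - lvf1_lines - lvf2_lines - dead_zone
print(f"PAN lines in total: {pan_lines}")                          # 1152 lines
print(f"per PAN zone (assuming an even split): {pan_lines // 2}")  # 576 lines
```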
Figure 10: Layout of the two wedge filters on the
CMV12000 sensor, with 50 ‘blue’ and 104 ‘red’
spectral bands.
As can be seen in Figure 9, both LVFs have significant leakage of light outside their own spectral range, but still inside the range of the combined LVF. The following blocking filters integrated with the LVF are therefore needed to remove this leakage:
1. A low-pass filter on LVF 1 (450 nm to 650 nm), removing all leakage above 680 nm.
2. A high-pass filter on LVF 2 (600 nm to 900 nm), removing all leakage below 580 nm.
Figure 11 shows a schematic cross-section of this
sensor, with the two independent filter banks or wedges
(wedge 1 covering 470-620 nm and wedge 2 for the
600-900 nm range), which are co-fabricated on the
same imager wafer.
Two options were implemented for the integration of
the rejection filter, enabling a comparison of the
performance of both approaches:
1) Hybrid integration: the low-pass and high-pass filters are deposited on a glass substrate and epoxied on top of the monolithic LVF.
2) Monolithic integration: the low-pass and high-pass filters are deposited directly onto the LVF.
Figure 11: Schematic cross-section of the sensor,
with the two wedges (wedge 1 and wedge 2 covering
470-620 and 600-900 nm respectively) post-
processed on top of the incoming CMOSIS
CMV12000 imager. The rejection filters, an LP to
be integrated on top of wedge 1 and an HP to be
integrated on top of wedge 2, are also shown.
Dimensions not to scale.
In the case of hybrid integration, the rejection filter was
deposited on a glass substrate which is thin enough to
fit in the CMV12000 standard package. The high-pass
and low-pass filter were deposited on separate glass
substrates, diced and epoxied together to form one
piece. Figure 12 shows a cross section of the chip with
the hybrid integrated filters (illustrated on one of the
two wedges for clarity). The rejection filter is epoxied
on top of the LVF filter with a typical distance between
the rejection filter and the LVF of 100 µm. Calculated
from the filter to the focal plane array, this distance
results in a maximum allowable half cone angle of 60
degrees. The 72-line transition zone in between the red and blue zones is used as space to cope with tolerances on alignment and dimensions of the additional filters.
Figure 12: Option 1 - rejection filter glued on top of the LVF (shown for LVF 1 only)
In the case of the monolithic integration, the low-pass
and high-pass filters integrated on LVF 1 and LVF 2
are directly deposited onto the LVF using the same
tools and materials.
The most important design option for these filters is the selection of the cut-off wavelength. This cut-off wavelength is determined by the free spectral range of both LVFs, in order to obtain a clean spectral response over the full spectral range. The final
simulation results are shown in the spectrogram of
Figure 13.
Figure 13: Simulation of final spectrogram after
applying the rejection filters
Filters on backside illuminated sensors
Backside illuminated (BSI) sensors have the advantage of higher optical efficiency compared to FSI imagers. Indeed, because of the metal lines on top of the photosensitive area, the fill factor of an FSI imager is limited, reducing the sensitive area and thus the optical efficiency (often expressed as a reduced QE after applying the fill factor). BSI imagers are not illuminated through the
metal lines on top of the sensor, but from the back
where nothing covers the photosensitive area. The fill
factor is therefore 100% and the optical efficiency of a
BSI imager will always be higher compared to FSI
imagers. The higher sensitivity that can be reached is
the main reason why we are currently developing a
process for the deposition of the optical filters on BSI
imagers. This is however not the only reason.
FSI imagers have an isolating and transparent material
between the metal lines covering the photosensitive
area. This introduces a thick layer in the optical path
that causes additional internal reflections between the
photosensitive area and the top layer of the image
sensor. These reflections interfere with each other
causing a ripple in the QE plot of the imagers as
function of wavelength (as illustrated in Figure 14). BSI
imagers do not have this transparent material on top of
the photosensitive area and will therefore not suffer
from these additional reflections, eliminating the ripple
on the QE plot. The difference in spectral response
between filters on FSI and filters on BSI imagers is
shown in Figure 15. The blue curve has a maximum
transmission around 480 nm and shows some additional
ripple on its spectral response. The green curve is the
spectral response of the filter on top of a BSI imager. In
this case, the efficiency is much higher and the spectral
response does not show any ripple before 700 nm.
Silicon has an absorption coefficient that decreases with wavelength. Depending on the thickness of the silicon in the photosensitive area, part of the light is no longer fully absorbed by the silicon in the BSI imager. This light enters the silicon of the photosensitive area, is partially reflected on the metal lines/transparent material at the other side, and interferes with the light at the first interface. This is why some ripple will still occur at NIR wavelengths.
Figure 14: Quantum Efficiencies for the FSI (in red)
and BSI (in blue) sensors, with fringe visibility.
Figure 15: FSI (in blue) vs BSI (in green) filter
responses and reference FSI QE (in red).
READOUT ELECTRONICS
ROE general description
The CMV12000 CMOS image sensor is capable of a frame rate of 300 fps at 10-bit depth. To achieve this throughput, pixels are output on 64 high-speed LVDS links running at 600 Mbps, for a total useful data rate of 37.75 Gbit/s. For real-world use, this high amount of
data needs to be compressed before being transmitted.
To achieve an efficient compression, the raw image of the sensor is first corrected for per-pixel gain and offset variations, and non-responsive pixels are replaced. All of these computations are done on the camera itself. After that, the image compression can be done.
The required processing on the data is the following:
- Fixed Pattern Noise (FPN) correction (offset)
- Photo Response Non Uniformity (PRNU) correction (gain)
- Bad pixel replacement
- Binning
- Raw image accumulation for calibration map generation
- Cropping of the image (image width reduction)
All these corrections have to be done at full speed,
requiring high-speed parallel data processing. Each
processing step can be enabled or disabled on request.
On the back-end, the processed data is downloaded to
the EGSE using 8 high-speed 3G-SDI (3 Gigabit Serial
Digital Interface) uncompressed video links. This
interface allows to have access to the full-quality
images for analysis but could be replaced by lower-
speed RF links using prior compression for real-world
use.
The sensor configuration is such that the throughput to
the storage system will not be higher than ~18 Gbits/s.
Nevertheless, for a short period of time, the sensor could be used at its highest frame rate. In this case the images are stored in the internal ROE DDR3 memory (running at 1600 MT/s) and are downloaded afterwards to the
storage system.
To be able to cope with this high data throughput and
high speed parallel data processing, the ROE is based
on the Zynq-7000 SoC from Xilinx. This device encapsulates an FPGA (Programmable Logic) and a dual-core ARM Cortex-A9 hardware processor
(Processing System), with hardwired high-speed
connections between them. The programmable logic is
used to interface the sensor, to process the data and to
download it to the EGSE. The processing system
manages all high-level (control) aspects of the camera
and connects to the EGSE for the TM/TC link.
ROE modularity
The ROE consists of an assembly of 3 boards:
- The first (top) board, also called Sensor Interface Board (SI Board), contains the sensor, the passive components and the voltage converters.
- The second (middle) board, also called Frame Grabber and Processing Board (FGP Board), is the heart of the ROE. It contains the SoC, all its peripherals (DDR memories, Flash memories) and voltage converters.
- The third (bottom) board, also called External Interface and Power Board (EIP Board), contains all the interface connectors: SDI drivers and connectors, Ethernet connector and PHY, primary power connector and Micro-SD Card slot.
The ROE is modular: the 3G-SDI interface could be replaced by a SATA interface or by a compression board without modification of the frame grabber processing board and/or the sensor interface board.
Use of the Zynq internal functions
As explained, the Zynq contains programmable logic and a processing system. The programmable logic implements the interfacing to the sensor, to the 8x 3G-SDI links and to the FPGA-dedicated DDR3 memory (1 GByte @ 1600 MT/s). The FPGA also performs all of the image processing mentioned before.
The processor side of the Zynq takes care of the lower
speed interfacing to the outside world (UART, Ethernet
and SD card), the booting and configuration of the
FPGA part and the management of the correction maps.
For all of this, it has its own working memory of 1 GB
@ 1066 MT/s.
ROE Features
As mentioned before, the ROE performs real-time image processing on the image stream. To cope with the pixel rate of approximately 3.5 Gpixel/s, the implementation of the image correction (FPN and PRNU), pixel accumulation and the bad pixel replacement is done for 64 pixels in parallel. This part of the processing runs at a frequency of 200 MHz in order to have a few clock cycles per pixel.
To make optimal use of the FPGA resources, the pixel
correction and averaging is implemented in a single
DSP block per pixel. The pixel accumulation and pixel
correction are calculated in 2 clock cycles: during the
first cycle the pixel correction is calculated; in the next
cycle the pixel accumulation is done. To achieve that,
the DSP block implements the following function:
P = (Pixel - FPN) * PRNU + Accumulated Pixel
This formula maps perfectly on the resources of the
DSP block available in the programmable logic of the
Zynq. The block diagram of a Zynq DSP block is
shown below.
Figure 16: The block diagram of a Zynq DSP block.
With:
- D: the Pixel Raw value (12 bits)
- A: the Pixel Offset (FPN) value (12 bits), or 0 in case of image accumulation
- B: the PRNU value (12 bits), or 1 in case of image accumulation
- C: the already accumulated pixel value (24 bits), or 0 in case of image correction
- P: the pixel value with FPN/PRNU correction (12 bits), or the accumulated pixel (24 bits)
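For illustration, the same per-pixel operation can be expressed in a few lines of array code. This is a behavioural sketch of the correction and accumulation only: the actual implementation runs 64 of these operations in parallel in FPGA DSP blocks with fixed-point arithmetic, and the exact fixed-point format of the PRNU gain is not specified in the paper.

```python
import numpy as np

# Behavioural model of the ROE per-pixel operation:
#   P = (raw - FPN) * PRNU + accumulated
# Floating point is used here for clarity; the FPGA uses fixed-point DSP blocks.

def correct_frame(raw, fpn, prnu):
    """FPN (offset) and PRNU (gain) correction of one frame (A = FPN, B = PRNU, C = 0)."""
    return (raw.astype(np.int32) - fpn) * prnu

def accumulate_frame(raw, acc):
    """Raw image accumulation for calibration map generation (A = 0, B = 1, C = acc)."""
    return acc + raw

# Tiny example on a 4 x 4 'frame'
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, (4, 4))        # 12-bit raw pixels
fpn = rng.integers(0, 50, (4, 4))          # per-pixel offset map
prnu = rng.uniform(0.95, 1.05, (4, 4))     # per-pixel gain map
print(correct_frame(raw, fpn, prnu))
```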
In order to further improve the compression possibilities, bad pixel replacement replaces a bad pixel by one of its 8 neighbouring pixels. To implement this, a large number of DSP blocks are used to create large multiplexers running at 200 MHz. The replacement logic for one pixel uses 4 DSP blocks, so in total the bad pixel replacement alone uses 256 DSP blocks.
The ROE uses the ROI (Region of Interest) feature of
the sensor to limit the height of the image if needed. Up
to 10 different ROIs can be defined on the sensor to
create an image which only contains useful information.
Where the ROI feature can only select complete sensor lines, the cropping step allows tailoring the image size further. Cropping is the last step in the image processing and reduces the width of the complete image to a user-defined size.
Because the analog binning function provided by the sensor is not flexible enough, the ROE provides a binning feature with the following modes: 2x2, 3x3, 4x4 and 6x6.
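The digital binning can be modelled as a simple block reduction of the frame. The sketch below assumes the binned pixels are summed; whether the ROE sums or averages, and how it rounds, is not stated in the paper.

```python
import numpy as np

def bin_frame(frame, factor):
    """n x n digital binning by summing pixel blocks (assumed behaviour)."""
    rows, cols = frame.shape
    rows -= rows % factor            # drop edge pixels that do not fill a block
    cols -= cols % factor
    blocks = frame[:rows, :cols].reshape(rows // factor, factor,
                                         cols // factor, factor)
    return blocks.sum(axis=(1, 3))

frame = np.arange(12 * 16).reshape(12, 16)
for f in (2, 3, 4, 6):               # binning modes supported by the ROE
    print(f"{f}x{f} binning -> output shape {bin_frame(frame, f).shape}")
```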
DDR3 memory bandwidth
Doing all of the image processing in parallel results in a large data bandwidth requirement for the DDR3 memory, which stores the image correction data, the accumulated image data and the video buffers for the 8x 3G-SDI links.
To sustain all of the processing in the most demanding
use case (doing raw image accumulation while
streaming corrected images over SDI), the DDR3
memory needs to sustain a data rate of +/- 90 Gbps to
get/deliver the data from/to 5 different sources/sinks at
the same time. The implementation uses a fully
pipelined AXI interconnect of 512 bits and local
buffering, resulting in a DDR3 efficiency of more than
86%.
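As a rough plausibility check, the available DDR3 bandwidth can be estimated from the transfer rate and bus width. The 64-bit data bus width below is an assumption; the paper only quotes the 1600 MT/s transfer rate and the efficiency of more than 86%.

```python
# Rough DDR3 bandwidth estimate for the ROE frame-grabber memory.
transfer_rate_mts = 1600      # mega-transfers per second (from the paper)
bus_width_bits = 64           # assumed DDR3 data bus width
efficiency = 0.86             # controller efficiency reported in the paper

peak_gbps = transfer_rate_mts * 1e6 * bus_width_bits / 1e9
usable_gbps = peak_gbps * efficiency
print(f"peak bandwidth   : {peak_gbps:.1f} Gbit/s")    # 102.4 Gbit/s
print(f"usable bandwidth : {usable_gbps:.1f} Gbit/s")  # ~88 Gbit/s, same order as the ~90 Gbps demand
```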
Towards a space-qualified design
The current requirements (data bandwidth, image
correction …) could not be satisfied with available
space qualified technologies. Nevertheless new
products with better performance are coming on the
market. This will be assessed during future
development.
One interesting approach is a hybrid system architecture using:
- COTS components for the data processing requiring high-speed processing resources (based on Mil-grade latest-generation devices like the Xilinx 7 series or Zynq family, plus DDR2/3 memory).
- Simpler RadHard devices to monitor and manage the COTS devices.
- RadHard Flash memory, power conditioning, etc.
- Fault-tolerant mechanisms: ECC, scrubbing, internal and external watchdogs, etc.
EGSE design
CHIEM’s ROE is connected to the computer through
8x SDI links for the image grabbing and through a
10/100 Ethernet interface for the TM/TC.
TM/TC interface
The 10/100 Ethernet Interface of the computer will be
used for the TM/TC.
Data Interface
Two PCIe Delta-3G-elp-d-40 cards from DELTACAST are used for the video grabbing (http://www.deltacast.tv/products/developer-products/sdi-cards/delta-3g-elp-d-40).
Data Storage
In order not to reduce the system performance in terms of data rate, the EGSE stores the grabbed images at a data rate of up to 30 Gbps (3.75 GB/s). Currently, the best performance is achieved with 4 NVMe SSD disks in a software stripe configuration.
Results
The ROE typically operates in 3 different use cases:
1) Recording use case: recording raw frames from the sensor in the DDR3 memory at the highest possible frame rate and the highest possible resolution (12Mpixels).
2) Streaming use case: streaming images from the sensor to the 8x 3G-SDI interface while doing image correction, binning and cropping, with a resolution of 4096 x 1920 pixels.
3) Accumulation use case: the same as the streaming use case, but combined with image accumulation of uncorrected images, all at a lower frame rate than in the streaming use case, with 12-bit pixels and a resolution of 4096 x 1920 pixels.
Final testing of the ROE showed the following results:
Table 3: Achieved frame rates (fps) for the three different use cases, given for the different bit depths.

Pixel bit depth | Recording | Streaming | Accumulation
8 | 328 | 150 | NA
10 | 294 | 125 | NA
12 | 130 | 124 | 99
OPTICAL DESIGN
A fully reflective Three Mirror Anastigmat (TMA) design is selected as baseline for the front telescope. This off-axis design, both in aperture and field, is compact, completely unobscured, relatively fast (F/4.5) and diffraction limited over a large field of view, i.e. 7.2° along track and 9.5° across track.
The proposed TMA optical layout is shown in Figure
17. The TMA fits within a 200x200x100 millimeters
volume. M1 and M3 are strongly aspherical concave
mirrors, while M2 is a spherical convex mirror.
The focal length of the telescope is 135 mm, with an entrance pupil diameter of 31 mm. The design is telecentric, with an exit pupil located more than 1.5 m behind the focal plane.
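These first-order optical parameters can be related to the mission geometry quoted in the abstract with simple pinhole-camera relations. The sketch below ignores Earth curvature, off-nadir viewing and the exact usable detector area, so it only reproduces the quoted figures approximately.

```python
import math

# First-order geometry check: 135 mm focal length, 5.5 um pixels, 600 km altitude.
focal_length_m = 0.135
pixel_pitch_m = 5.5e-6
altitude_m = 600e3
across_track_pixels = 4096

gsd = altitude_m * pixel_pitch_m / focal_length_m     # ground sampling distance
swath = gsd * across_track_pixels                     # across-track swath
fov_across_deg = 2 * math.degrees(math.atan(across_track_pixels * pixel_pitch_m
                                            / (2 * focal_length_m)))

print(f"GSD              : {gsd:.1f} m")              # ~24.4 m, consistent with the quoted 25 m
print(f"Swath            : {swath / 1e3:.0f} km")     # ~100 km
print(f"Across-track FoV : {fov_across_deg:.1f} deg") # ~9.5 deg
```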
A similar compact TMA has been manufactured by
AMOS in optical quality aluminum for the
multispectral Proba-V instrument [6], using ultra-
accurate single point diamond turning technology. The
instrument is fully athermal as all elements are made of the same material.
Image Quality
The image quality has been optimized for different FoV positions at the Nyquist frequency of 91 lp/mm, corresponding to 5.5 µm pixels. The spot diagram over the detector is presented in Figure 18. The optimization also considers the evolution of wavelength from line to line, according to the variable transmission band of the LVF.
The MTF (Modulation Transfer Function) at the Nyquist frequency of 91 lp/mm ranges from 0.6 @ 470 nm to 0.4 @ 900 nm, considering manufacturing and alignment
tolerances. These tolerances are quite severe (<10µm on
mirror positions and 10 arcsec in tilts, < 20 nm rms
surface error) due to the small pixel size.
Image Distortion
The distortion can be decomposed into swath curvature and keystone components. Swath curvature will result in non-straight pixel lines on the ground, which can be corrected through image post-processing. Swath
curvature values for each channel are given in Figure
19. The keystone will impact inter channel pixel co-
registration. The telescope design has been optimized to
reduce keystone. However, as the along track field of
view is wide it cannot be totally eliminated. The
keystone values are given in Figure 20.
Such values of keystone and swath curvature are too large for direct recombination of the complete hyperspectral image stack. Resampling of the individual bands will be necessary for good spatial registration, but the distortions remain sufficiently low not to impact the addition of up to 12 successive bands, as foreseen for SNR improvement by digital TDI.
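Digital TDI here means co-adding the same ground line as it is seen by the successive detector lines assigned to one spectral band on consecutive frames; for shot-noise limited signals the SNR then grows roughly with the square root of the number of co-added lines. The sketch below illustrates this on synthetic data; it assumes perfect line-to-frame registration, which in practice requires the resampling discussed above, and the signal and noise levels are arbitrary.

```python
import numpy as np

# Digital TDI principle: add the N frames in which the same ground line is
# seen by the N detector lines assigned to one spectral band.

rng = np.random.default_rng(1)
signal = 200.0       # mean signal per line (arbitrary units, assumed)
read_noise = 10.0    # read noise in the same units (assumed)
n_tdi = 12           # lines per band available for co-addition

def snr(frames):
    """Shot + read noise SNR of the co-added line."""
    total = frames.sum(axis=0)
    noise = np.sqrt(frames.shape[0] * (signal + read_noise ** 2))
    return total.mean() / noise

frames = rng.poisson(signal, (n_tdi, 4096)) + rng.normal(0, read_noise, (n_tdi, 4096))
print(f"single line SNR ~ {snr(frames[:1]):.1f}")
print(f"{n_tdi}-line TDI SNR ~ {snr(frames):.1f}")   # roughly sqrt(12) times higher
```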
Transmittance and Polarization sensitivity
The proposed coating is a space-qualified protected silver coating from Cilas (France). This coating is well adapted to metallic mirrors and was applied on the Proba-V instrument. The analyses consider the measured variations of the coating reflectance with wavelength, incidence angle and light polarization.
From the optical model the total transmittance for both
P and S polarization can be computed, taking into
account the reflection angle of each single ray on the
three mirrors. The results of this calculation are
presented in Figure 21.
Figure 18 : Spot diagram over the detector. Airy
disk is also represented for the LVF wavelength at
the corresponding FoV.
Figure 17 : Optical Layout of the CHIEM front
telescope
The total transmittance ranges from 80% in the blue to 85% in the NIR. The change of transmittance over the across-track FoV is negligible (about 0.1%).
From the P and S data, we can evaluate the polarization sensitivity of the telescope, using the formula

Psens = (P - S) / (P + S)     (1)
The computed maximum polarization sensitivity is
about 1.3% in the blue. The dependence of this sensitivity on the FoV is very low (< 0.2% change over the across-track FoV).
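As an illustration of how equation (1) is applied, the polarization sensitivity after three silver-coated mirrors can be estimated from per-mirror P and S reflectances. The reflectance values below are placeholders chosen for illustration, not the measured Cilas coating data used in the actual analysis.

```python
# Polarization sensitivity of the three-mirror telescope from equation (1).
# Per-mirror P/S reflectances are illustrative placeholders; the real analysis
# uses measured coating data versus wavelength and incidence angle.

mirror_reflectance = [      # (R_p, R_s) for M1, M2, M3 (assumed values)
    (0.956, 0.962),
    (0.958, 0.960),
    (0.957, 0.963),
]

T_p = T_s = 1.0
for r_p, r_s in mirror_reflectance:
    T_p *= r_p              # total transmittance for P polarization
    T_s *= r_s              # total transmittance for S polarization

p_sens = abs(T_p - T_s) / (T_p + T_s)
print(f"T_p = {T_p:.3f}, T_s = {T_s:.3f}, Psens = {100 * p_sens:.2f} %")
```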
CHIEM STATUS AND OUTLOOK
We presented the CHIEM project, in which we have developed an engineering model for a hyperspectral instrument to be used on board a satellite mission, operating at 600 km and combining high spatial resolution (GSD 25 m) and hyperspectral (470-900 nm, Δλ < 10 nm) imaging over a swath of 100 km. The LVF is directly deposited on the large CMOS (4096 x 3072) sensor array, providing several benefits like higher design flexibility, reduced straylight and better alignment. One third of the array is used as PAN zones (situated at the bottom and top of the array). Also part of the project is the design of the TMA telescope, which is very compact and allows a wide field of view in both the across-track and along-track direction (> 9.5° x 7.2°). A dedicated ROE has been developed that is able to read out the sensor data at the highest frame rate and resolution and to perform on-board data processing.
CHIEM has finalized the manufacturing of the ROE
and EGSE and the Hyperspectral sensors, both FSI and
BSI, as well as the optical design of the TMA. The
project has now passed the Test Readiness Review and
has started the extensive characterization of the sensors,
which is expected to be finalized in the Summer of
2017.
Operational scenarios are being investigated. Several
applications, such as the determination of biophysical
parameters, require a high SNR. Due to the narrow
spectral bands, the amount of light per pixel is limited.
To increase the SNR, the image processing will be optimized and made flexible so that spatial or spectral resolution can be exchanged for higher SNR by binning of pixels. Techniques like PAN-sharpening will be used to improve the spatial resolution of the LVF images.

Figure 19: Swath curvature distortion (in mm on the focal plane) for different along-track FoV.

Figure 20: Keystone distortion (in mm on the focal plane) for different across-track FoV.

Figure 21: Transmittance of the complete TMA for the central FoV (protected silver coating).
Acknowledgement and disclaimer
The CHIEM project acknowledges the support of the
Belgian Science Policy Office. It is also a pleasure to
thank Luca Maresi from ESA for his inspiring role in
the development of LVF instrumentation for remote
sensing and his support for the CHIEM project.
The views expressed in this paper can in no way be taken to reflect the official opinion of the European Space Agency.
REFERENCES
1. M. Marshall, P. Thenkabail, Advantage of
hyperspectral EO-1 Hyperion over multispectral
IKONOS, GeoEye-1, WorldView-2, Landsat
ETM+, and MODIS vegetation indices in crop
biomass estimation, ISPRS Journal of
Photogrammetry and Remote Sensing, Vol. 108,
Pages 205-218, Oct. 2015.
2. L. Guanter et al, The EnMAP Spaceborne
Imaging Spectroscopy Mission for Earth
Observation, Remote Sensing, 7(7), 8830-8857,
2015
3. M. Meini, E. Fossati, L. Giunti, M. Molina, R.
Formaro, F. Longo, G. Varacalli, The PRISMA
Mission Hyperspectral Payload, Proc. 66th Int.
Astronautical Congress (IAC 2015), Jerusalem,
Israel, Oct.12-16, 2015.
4. C. M. Lee, M. L. Cable, S. J. Hook, R. O. Green,
S. L. Ustin, D. J. Mandl, E. M. Middleton, An
introduction to the NASA Hyperspectral InfraRed
Imager (HyspIRI) mission and preparatory
activities, Remote Sensing of Environment,
Volume 167, Pages 6-19, 2015
5. W. Dierckx, S. Sterckx, I. Benhadj, S. Livens, G.
Duhoux, T. Van Achteren, M. Francois, K.
Mellabb and G. Saint, PROBA-V mission for
global vegetation monitoring: standard products
and image quality, Int. J. of Remote Sensing,
Volume 35, Issue 7, 2014
6. L. de Vos, W. Moelans, J. Versluys, V. Moreau,
J.F Jamoye, Jan Vermeiren, L. Maresi, M.
Taccola, The Vegetation Instrument for the
PROBA-V Mission. In: Sandau R., Roeser HP.,
Valenzuela A. (eds) Small Satellite Missions for
Earth Observation. Springer, Berlin (2010),
Heidelberg
7. L. Maresi, M. Taccola, M. Kohling and S.
Livens, PhytoMapper - Compact Hyperspectral
Wide Field of View Instrument, Proc Small
Satellite Missions for Earth Observation, Berlin,
Germany, 2010.
8. V. Moreau, C. DeClerq, G. Lousberg, L. Maresi,
B. Delauré, Development of a Compact
Hyperspectral/Panchromatic Imager for
Management of Natural Resources, in the 4S
symposium, 4-8 June 2012.
9. A. Näsilä et al, Aalto-1-A Hyperspectral Earth
Observing Nanosatellite, Proc. SPIE 8176,
Sensors, Systems, and Next-Generation Satellites
XV, 3 October 2011.
10. Conticello et al, Hyperspectral Imaging for Real
Time Land and Vegetation Inspection, Proc. 4S
conference 2016, Malta.
11. A. Lambrechts, P. Gonzalez, B. Geelen, P.
Soussan, K. Tack and M. Jayapala, A CMOS-
compatible, integrated approach to hyper- and
multispectral imaging, Proc IEEE Int. Electron
Devices Meeting, San Francisco, CA, 15-17 Dec.
2014
12. A. Sima; S. Livens; W. Dierckx, B. Delauré, K.
Tack, B. Geelen and A. Lambrechts, Spatially
variable filters - Expanding the spectral
dimension of compact cameras for remotely
piloted aircraft systems, Proc IEEE Geoscience
and Remote Sensing Symposium, 2014
13. Sima, A., Baeck, P., Nuyts, D., Delalieux, S.,
Livens, S., Blommaert, J., Delauré, B., Boonen,
M., Compact Hyperspectral Imaging System (COSI) for Small Remotely Piloted Aircraft Systems (RPAS) - System Overview and First Performance Evaluation Results, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.,
XLI-B1, 1157-1164, doi:10.5194/isprs-archives-
XLI-B1-1157-2016, 2016
14. P. Baeck, J. Blommaert, S. Delalieux, B. Delauré,
S. Livens, D. Nuyts, A. Sima, G. Jacquemin and
J.P. Goffart, High resolution vegetation mapping
with a novel compact hyperspectral camera
system, in 13th International Conference on Precision Agriculture, St Louis, 2016.
15. Livens S., Delauré B., Lambrechts A., Tack N.,
Hyperspectral Imager Development using Direct
Deposition of Interference Filters, 4S
symposium, 2014
16. X. Wang, et al, A 2.2M CMOS Image Sensor for
High Speed Machine Vision Applications, proc.
SPIE vol.7536, San Jose, Jan. 2010
17. Sedky, S., Witvrouw, A. , Bender H. and Baert,
K., “Experimental determination of the maximum
annealing temperature for standard CMOS
wafers”, IEEE Trans. Electron Devices 48 (2),
377-385 (2001).
18. Takeuchi, H., Wung , A., Sun, X., Howe, R. T.
and King, T.-J. , “Thermal budget limits of
quarter-micrometer foundry CMOS for post-
processing MEMS devices”, IEEE Trans.
Electron Devices, 52 (9), 2081-2086 (2005).
19. Macleod, H.A., [Thin-Film Optical Filters, 4th
Ed.], Taylor & Francis, 45-52 (2001).