Spectroradiometer Data Structuring, Pre-Processing
and Analysis – an IT Based Approach
A. Hueni* and M. Tuohy
Institute of Natural Resources,
Massey University, Private Bag, Palmerston North 5301, New Zealand
Phone: +64-6-3569099 ext 7371, Fax: +64-6-3505632
*Corresponding author
Hyperspectral data collection results in huge datasets that need pre-processing prior to
analysis. A review of the pre-processing techniques identified repetitive procedures and,
consequently, a high potential for automation. Data from different hyperspectral field studies were
collected and subsequently used as test sets for the described system. A relational database was
utilized to store hyperspectral data in a structured way. Software was written to provide a
graphical user interface to the database, pre-processing and analysis functionality. The resulting
system provides excellent services in terms of organised data storage, easy data retrieval and
efficient pre-processing. It is suggested that the use of such a system can improve the
productivity of researchers significantly.
Field spectroradiometry has experienced ever-increasing popularity in the last few years. The
technology has advantages over conventional techniques, allowing the non-destructive sampling
of objects and potentially enabling the user to gain critical information more quickly and cheaply. As
a result, many scientists are now actively researching applications of hyperspectral sensing. The
operation of the instruments tends to be relatively easy and data are collected quickly. However,
the interpretation of these data is not so simple. The main issue when dealing with hyperspectral
data is their dimensionality which is the result of sampling a wide spectral range in very narrow
bands. This is in itself a problem because the influence of noise on narrow channels is much
higher than on traditional broadband channels. Hyperspectral data are more complex than
previous multispectral data and different approaches for data handling and information extraction
are needed (Vane and Goetz, 1988, Landgrebe, 1997).
Hyperspectral data are essentially multivariate data, consisting of hundreds or even thousands
of variables. It has been shown that more bands do not automatically imply better results.
Although the separability of classes does increase with growing dimensionality, the classification
accuracy does not follow this trend endlessly but will decrease at a certain point. This is called the
Hughes Phenomenon and is caused by the ever increasing number of samples needed to build
sound statistics if the dimensionality grows (Landgrebe, 1997). In practice this means that more
samples must be collected to ensure successful statistical analyses. It is necessary, therefore, to
collect a large number of spectral data files, each containing a hyperspectral spectrum. The sheer
number of files and variables can become overwhelming. Interestingly, very few studies
concerned with hyperspectral data have ever mentioned how the data had been organised and stored.
A further issue that is rarely addressed is the reusability of the data. Reference data are usually
compiled in so-called spectral libraries. The majority of the publicly available spectral libraries are
distributed as physical files. This has drawbacks such as low flexibility and low query
performance (Bojinski et al., 2003). Another drawback of most libraries is their restriction in the
number of spectra per class. In many cases, only one reference spectrum is supplied. This
reduces any statistical analysis to first order statistics. The use of average values may be useful
in some circumstances, however, Landgrebe (1997) noted that the reduction of data to mean
values results in a loss of information. Second order statistics contain vital information about the
distribution of data in spectral or feature space and should therefore be included in spectral data collections.
The time and effort that is spent in collecting spectral data, combined with the characteristically
large number of files, makes it clear that spectral data should be well organised. Otherwise,
valuable data can be lost or lose their value because of missing metadata. Considering the above,
it seems logical to employ a database to store spectral data in a suitable form. Only one example
of such a database has been found: SPECCHIO (Bojinski et al., 2003) contains spectral
metadata ordered by campaigns, information about sensors, instrument models, landuse type of
the sampled area, spatial position and descriptions of the target. A relational database
management system (DBMS) is used to hold the above data in several tables. The actual
reflectance data is not stored in the DB but held on a dedicated file server and the spectral
database links the metadata to the reflectance file via a file path.
A further characteristic of hyperspectral data is their redundancy. It has been shown that
neighbouring wavebands have a high degree of correlation (Thenkabail et al., 2004). This
redundancy is created by oversampling, i.e. the spectral signal is sampled at small enough steps
to describe very narrow features that could be discriminating (Shaw and Manolakis, 2002). The
redundancy and general noisiness of the data usually mean that certain pre-processing must be
carried out before any useful analysis can be performed.
We present here a possible solution for the efficient storage and pre-processing of field
spectroradiometer data. The system has been successfully used in studies concerned with New
Zealand native vegetation, soil properties and pastures.
Field Data Structuring
A hierarchical data structure that reflects the real world and the setup of sampling campaigns
for vegetation was designed. This structure was derived from the following conditions:
1. Reflectances of several different species were captured
2. In order to describe the in-species variation, several specimens of a species were sampled
3. The variability of the specimens was described by several measurements per specimen
The spatial extent where a specimen was sampled was termed a sample site, thus a species
contained a number of sample sites. The sites were numbered in the order of sampling. At each
site, several readings were taken to capture the variation exhibited by the specimen in question.
A site therefore contained a number of spectra. This led to a hierarchical directory structure
(Figure 1). As a general rule at least 10 spectra were collected per site. The calculation of
statistics like covariances requires at least 15 spectra per species to obtain meaningful
representations in feature space. This implied that a minimum of two sites (replicates) per species
were to be captured.
Figure 1: Hierarchical directory structure
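The directory hierarchy above lends itself to simple automated parsing. Below is a minimal sketch (in Python rather than the C++ of the described system) that folds species/site/spectrum paths into a nested structure; the path and species names are purely illustrative:

```python
def build_hierarchy(paths):
    """Fold 'species/site/spectrum' relative paths into a nested dict:
    {species: {site: [spectrum files]}}."""
    tree = {}
    for p in paths:
        species, site, spectrum = p.split("/")
        tree.setdefault(species, {}).setdefault(site, []).append(spectrum)
    return tree

# Illustrative file names following the hierarchical layout of Figure 1.
paths = [
    "manuka/site01/spec0000.asd",
    "manuka/site01/spec0001.asd",
    "manuka/site02/spec0000.asd",
]
tree = build_hierarchy(paths)
# tree == {'manuka': {'site01': ['spec0000.asd', 'spec0001.asd'],
#                     'site02': ['spec0000.asd']}}
```

A structure like this can then be walked to load whole campaigns into the database in one pass.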
Spectral Database Model
The spectral database was designed as a relational database. The presented table structure is
in third normal form (3NF). The process of database normalisation reduces complex user views to
a set of small, stable table structures (McFadden and Hoffer, 1988). Thus the transition of a
model into 3NF removes data redundancy. In practice a certain redundancy is sometimes
reintroduced in the form of foreign keys which simplify navigation and data queries in the
operative system. Such added relations can be observed between the entities study, species, site
and spectrum. For an overview of the spectral database model showing all entities and their
relations please refer to Figure 2.
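The hierarchy and its foreign-key navigation can be illustrated with a toy schema. The sketch below uses SQLite for brevity (the described system used MySQL), and its table and column names are illustrative, not the authors' actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One table per entity of the study -> species -> site -> spectrum hierarchy,
# each child carrying a foreign key to its parent (3NF).
conn.executescript("""
CREATE TABLE study    (study_id    INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE species  (species_id  INTEGER PRIMARY KEY,
                       study_id    INTEGER REFERENCES study, name TEXT);
CREATE TABLE site     (site_id     INTEGER PRIMARY KEY,
                       species_id  INTEGER REFERENCES species, number INTEGER);
CREATE TABLE spectrum (spectrum_id INTEGER PRIMARY KEY,
                       site_id     INTEGER REFERENCES site, reflectance BLOB);
""")
conn.execute("INSERT INTO study VALUES (1, 'native_vegetation')")
conn.execute("INSERT INTO species VALUES (1, 1, 'Kunzea ericoides')")
conn.execute("INSERT INTO site VALUES (1, 1, 1)")
conn.execute("INSERT INTO spectrum VALUES (1, 1, NULL)")

# The foreign keys allow navigation from a spectrum back up to its study.
row = conn.execute("""
    SELECT st.name, sp.name
    FROM spectrum s
    JOIN site si    ON s.site_id = si.site_id
    JOIN species sp ON si.species_id = sp.species_id
    JOIN study st   ON sp.study_id = st.study_id
""").fetchone()
# row == ('native_vegetation', 'Kunzea ericoides')
```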
The desired feature list of a spectral database according to the requirements identified in this
study was as follows:
- Hierarchical structure: implements the same hierarchical structure as used for the field data to store species, site and spectrum data
- Multiple studies: can hold spectral data of different field/laboratory campaigns
- Reflectance storage: stores the reflectance data in the database in its original form
- Processing parameters: holds parameters that are needed for the processing of the data
- Statistics: holds 1st and 2nd order statistics to enable classification, discriminant analysis and separability measurements to be carried out efficiently
Figure 2: Database model overview at entity level
The entities species, site and spectrum reflect the hierarchical structure that was introduced
previously. The study entity was added to the top of this structure to enable the storage of data
belonging to different studies in the same database.
The waveband_filter and waveband_filter_range entities hold data that are needed for the
removal of noisy or uncalibrated bands from the spectra. These were defined at the study level
because every study might have different requirements for the data filtering. For example, a study
that contains data collected by a contact probe will not need to remove water bands, as the
influence of the atmosphere is effectively non-existent. Similarly, if a study wishes to concentrate on a
certain part of the spectrum only, the unused wavebands can be removed by entering them into
the filter structure.
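Conceptually, the waveband filter reduces to dropping every channel whose wavelength falls inside a configured removal range. A minimal Python sketch, with illustrative range values rather than ones taken from the paper:

```python
def filter_wavebands(wavelengths, reflectances, remove_ranges):
    """Drop every channel whose wavelength lies inside any removal range."""
    kept = [(w, r) for w, r in zip(wavelengths, reflectances)
            if not any(lo <= w <= hi for lo, hi in remove_ranges)]
    return [w for w, _ in kept], [r for _, r in kept]

# Illustrative channels and atmospheric water-band ranges (nm).
wl = [350, 900, 1400, 1900, 2400]
refl = [0.1, 0.4, 0.2, 0.15, 0.05]
water = [(1350, 1460), (1790, 1960)]
wl_f, refl_f = filter_wavebands(wl, refl, water)  # 1400 and 1900 removed
```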
The library can be thought of as a collection of data that can be referred to for the identification
of unknown signatures. A library is built for certain settings of the data processing chain, namely
waveband filtering, smoothing, sensor convolution, derivative calculation and feature space
transformation. The resulting library is set up for classification of data that has been processed in
exactly the same way. In other words, before a classification can be carried out on a dataset, its
library must be built. A library therefore references the entities waveband_filter, smoothing_filter,
sensor, derivative and feature_space. The actual data needed for a classification is held in the
statistic entity in the form of a mean vector and a covariance matrix for every species.
The smoothing_filter entity holds data needed for the smoothing by a Savitzky-Golay filter
(Savitzky and Golay, 1964, Tsai and Philpot, 1998).
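As a sketch of what the smoothing step computes, the following applies a width-5, degree-2 Savitzky-Golay filter using the classic tabulated coefficients (-3, 12, 17, 12, -3)/35; this is an illustration in Python, not the system's C++ implementation:

```python
def savgol_smooth(values):
    """Smooth a spectrum with a width-5, degree-2 Savitzky-Golay filter.
    Edge points without a full window are left unchanged."""
    coeffs = (-3, 12, 17, 12, -3)  # classic quadratic, 5-point coefficients
    out = list(values)
    for i in range(2, len(values) - 2):
        out[i] = sum(c * values[i + k - 2] for k, c in enumerate(coeffs)) / 35.0
    return out

# Sanity check: a degree-2 polynomial passes through the filter unchanged.
quadratic = [x * x for x in range(8)]
smoothed = savgol_smooth(quadratic)
```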
The sensor entity contains data for the synthesizing of sensor responses. Two general classes
of sensors exist, defined by the description of the response type of their elements:
1. Gaussian: each sensor element response is modelled by a Gaussian function. The
Gaussian curve is defined by the average wavelength and the full width at half the
maximum (FWHM).
2. Ratio: each sensor element response is modelled by ratios applied to narrow band data
over a certain range of wavelengths.
The entity sensor_element holds both Gaussian and Ratio settings, depending on the type of
sensor. In the case of Gaussian sensors, one sensor_element entry describes one sensor band.
For Ratio sensors, many sensor_element entries may be needed to describe one sensor band.
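The Gaussian case can be sketched as a weighted average of the narrowband reflectances, with weights given by a Gaussian response defined by centre wavelength and FWHM. Function and parameter names below are illustrative:

```python
import math

def gaussian_band(wavelengths, reflectances, centre, fwhm):
    """Synthesize one broadband value by weighting narrowband reflectances
    with a normalised Gaussian response (centre wavelength + FWHM)."""
    k = 4.0 * math.log(2.0) / fwhm ** 2  # converts FWHM to the exponent scale
    weights = [math.exp(-k * (w - centre) ** 2) for w in wavelengths]
    return sum(w * r for w, r in zip(weights, reflectances)) / sum(weights)

# A spectrally flat target must yield the same value in the synthesized band.
wl = list(range(400, 701))            # 1 nm sampling, 400-700 nm
flat = [0.5] * len(wl)
band = gaussian_band(wl, flat, centre=550.0, fwhm=30.0)
```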
The derivative entity holds data for the calculation of derivatives either by an iterative method or
by Savitzky-Golay coefficients.
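The iterative method can be read as a plain finite-difference derivative; a hedged Python sketch follows (the Savitzky-Golay variant would instead apply precomputed convolution coefficients):

```python
def first_derivative(wavelengths, reflectances):
    """Central-difference first derivative; endpoints use one-sided differences."""
    n = len(reflectances)
    d = [0.0] * n
    for i in range(1, n - 1):
        d[i] = (reflectances[i + 1] - reflectances[i - 1]) / \
               (wavelengths[i + 1] - wavelengths[i - 1])
    d[0] = (reflectances[1] - reflectances[0]) / (wavelengths[1] - wavelengths[0])
    d[-1] = (reflectances[-1] - reflectances[-2]) / (wavelengths[-1] - wavelengths[-2])
    return d

# A linear spectrum has a constant derivative everywhere.
wl = list(range(400, 411))
refl = [0.001 * w for w in wl]
deriv = first_derivative(wl, refl)
```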
The feature_space entity holds or refers to data needed for the feature space transformation.
Three types of feature space were considered to be useful, although more possibilities exist:
1. Derivative Greenness Vegetation Indices (DGVI): a feature space is formed by calculating
several DGVIs (Elvidge and Chen, 1995, Thenkabail et al., 2004). The band ranges for
these indices are held in the band_range entity.
2. Normalized Two Band Indices (NTBI): a feature space is formed by calculating several
NTBIs. The two bands that define each index are held in the band_range entity. NTBIs are
a generalized version of the well known NDVI (Normalized Difference Vegetation Index)
which traditionally uses the values of red and infrared channels (Lillesand et al., 2004).
3. Principal Component Transformation (PCT): PCT is the most widely used algorithm for
data reduction and de-correlation (Shaw and Manolakis, 2002). Principal component
analysis performs an eigen-decomposition; the resulting eigenvectors are used to build a
transformation matrix, which is then applied to the original data. A feature space is thus
formed by calculating a certain number of components. The transformation matrix is held in
the pca_data entity. The number of components to be calculated is equal to the dimension
of the feature space.
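The NTBI of point 2 is a one-line computation; with a near-infrared and a red band it reduces to the familiar NDVI. A minimal sketch with illustrative band values:

```python
def ntbi(band1, band2):
    """Normalized two-band index; with (NIR, red) this is the classic NDVI."""
    return (band1 - band2) / (band1 + band2)

# Healthy vegetation: high NIR, low red reflectance -> index near +0.67.
ndvi = ntbi(0.50, 0.10)
```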
Like the library, the pca_data is calculated for a certain setup of waveband filtering, smoothing,
sensor synthesizing and derivative calculation.
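For intuition, the PCT of point 3 can be demonstrated on two bands, where the 2x2 eigen-decomposition has a closed form. The sketch below is illustrative only; the actual system operates on full mean vectors and covariance matrices:

```python
import math

def pct_2band(b1, b2):
    """Two-band principal component transformation: covariance matrix,
    closed-form 2x2 eigen-decomposition, projection onto the first component."""
    n = len(b1)
    m1, m2 = sum(b1) / n, sum(b2) / n
    a = sum((x - m1) ** 2 for x in b1) / (n - 1)          # var(b1)
    c = sum((y - m2) ** 2 for y in b2) / (n - 1)          # var(b2)
    b = sum((x - m1) * (y - m2) for x, y in zip(b1, b2)) / (n - 1)  # cov
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam1, lam2 = (a + c) / 2.0 + disc, (a + c) / 2.0 - disc
    # Eigenvector of [[a, b], [b, c]] for the larger eigenvalue lam1.
    v = (b, lam1 - a) if b != 0 else (1.0, 0.0)
    norm = math.hypot(*v)
    v = (v[0] / norm, v[1] / norm)
    pc1 = [(x - m1) * v[0] + (y - m2) * v[1] for x, y in zip(b1, b2)]
    return (lam1, lam2), pc1

# Perfectly correlated bands: all variance falls on the first component.
(eigs, pc1) = pct_2band([1, 2, 3, 4], [2, 4, 6, 8])
```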
A Spectral Data Management and Processing Software
A spectral database such as that described above is not of much use on its own. Data must be
fed into the database and data extraction routines must exist in order to exploit the benefits of the
database. The technical requirements for such a system were identified as follows:
- Graphical user interface to the database
- Functions for loading spectral data into the database
- Data pre-processing functions
- Data analysis functions
- File export functions to allow data analysis and plotting in 3rd party packages
The resulting, object oriented software was called SpectraProc.
File System Interfaces
SpectraProc provides input and output interfaces to the file system as illustrated in Figure 3.
Input file formats are: ASD binary file as produced by the ASD FieldSpecPro Spectroradiometer,
ENVI Z-Profiles that are signatures extracted from hyperspectral imagery in ENVI and sensor
specifications in a proprietary, tab-separated format. ASD files can be imported into the
database as part of a study or loaded into memory for classification against a study dataset. ENVI
Z-Profiles can be loaded for classification only. Sensor specification files are a way of defining
new sensors in the database.
Output can be written in three data formats: (1) CSV (Comma Separated Values) for import into
various 3rd party applications like spreadsheets or statistical packages, (2) ENVI Spectral Library
for import into ENVI and subsequent use for signature matching and (3) ARFF which is a special
format used by WEKA (University of Waikato, 2005).
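The CSV path can be sketched with the standard library alone. The column layout below (a wavelength column plus one column per spectrum) is an assumption for illustration, not the documented SpectraProc format:

```python
import csv
import io

def export_csv(wavelengths, spectra):
    """Write spectra as CSV: one wavelength column, one column per spectrum.
    spectra maps a spectrum name to its list of reflectance values."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    names = sorted(spectra)
    writer.writerow(["wavelength"] + names)
    for i, w in enumerate(wavelengths):
        writer.writerow([w] + [spectra[n][i] for n in names])
    return buf.getvalue()

out = export_csv([400, 401], {"leaf_01": [0.05, 0.06]})
```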
Figure 3: File system interfaces
Spectral Processing Concept
The spectral database stores only the raw spectral data. Further processing of the data is
performed at runtime and the results are held in memory. Once a spectrum is loaded from the
database it is put through a cascade of operations as shown in Figure 4. The result of every stage
is saved in a separate data structure in memory. This allows easy file export of spectral data at
any processing step.
The implemented pre-processing steps were:
- Removal of unwanted bands in freely configurable wavelength regions
- Data smoothing using a Savitzky-Golay filter
- Synthesizing of other sensor responses or downsampling
- Derivative calculation
- Feature space transformations: derivative indices (e.g. DGVI), normalized two-band indices (e.g. NDVI), Principal Component Transformation (PCT)
The processing parameters for the waveband filtering, synthesizing/downsampling and feature
space transformation operations are read from the database. The parameters for smoothing and
derivative calculation are taken directly from the settings in the user interface.
Figure 4: Spectral data processing cascade showing the intermediate storage of spectral data in
memory and the processing parameters supplied by the database.
Analysis Functionality
Basic analysis functionality was built into the software: (a) separability analysis in the form of
the Jeffries-Matusita (JM) and the Bhattacharyya (B) distance (Richards, 1993, Schmidt and
Skidmore, 2003), (b) discriminant analysis with the choice of three different discriminant functions
(quadratic (Gaussian) distance, general squared distance and Spectral Angle Mapper), resulting
in the output of a confusion matrix including producer and user accuracy and (c) principal
component analysis (PCA) with the output being the eigenvalues, proportions and cumulative proportions.
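For a single band, the Bhattacharyya and Jeffries-Matusita distances take a simple closed form; the software computes the multivariate version from the stored mean vectors and covariance matrices. A one-dimensional sketch:

```python
import math

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya distance between N(m1, v1) and N(m2, v2), v = variance."""
    return (0.125 * (m1 - m2) ** 2 * 2.0 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def jeffries_matusita(b):
    """JM distance; saturates at 2 for perfectly separable classes."""
    return 2.0 * (1.0 - math.exp(-b))

# Two well-separated classes give a JM distance close to the maximum of 2.
b = bhattacharyya_1d(0.2, 0.01, 0.8, 0.01)
jm = jeffries_matusita(b)
```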
The database was implemented in MySQL (MySQL AB, 2005), open source software released under the GNU General Public License.
MySQL is a relational database management system that can handle large amounts of data,
allows data access via standard SQL commands, provides multi-user access over TCP/IP and
supports several APIs (Application Programming Interfaces) amongst which is C/C++.
The database interface software was developed for the Microsoft Windows environment using
Microsoft Visual C++ V6.0. The graphical user interface was based on Microsoft Foundation
Classes (MFC), using a simple Document-View architecture with one document and one
associated view. MySQL C API was used for the database access from C++ code. Matrix
calculations were based on the excellent C++ matrix library NewMat V10B (Davies, 2002) which
is available free on the internet.
Field Data Structuring
The structuring of the field data had two main influences on the data collection process: (a) the
structure had to be set up before the actual sampling took place, a fact which led to better planned
sampling campaigns and (b) the resulting spectral data files were well ordered and could be
automatically loaded into the spectral database.
A Spectral Data Management and Processing Software
The combination of a relational database with associated software for data processing was
found to be highly efficient while dealing with hyperspectral field spectra from vegetation, soil and
pasture studies. Typically, the data from a full day of sampling could be loaded as a new study
into the database in a matter of minutes. Subsequent analysis and availability of results at various
stages followed almost instantly. The fast data processing allowed the use of the software for the
experimental analysis of the influence of different pre-processing parameters on the analysis
result. For example, 1046 spectra from a study of New Zealand native plants could be pre-processed by
waveband filtering, Savitzky-Golay smoothing, Hyperion sensor synthesizing and first derivative
calculation using Savitzky-Golay coefficients, and written to a file, in just 10 seconds on a
Pentium 4 machine.
Collaboration with other researchers has confirmed that the presented solution greatly
improved the speed of their research. Operations that would have taken hours or days with
conventional methods could be carried out in seconds.
Graphical User Interface
The graphical user interface (GUI) was based on the structure of the processing chain (see
Figure 5). The left side of the main window consists of controls for the selection of the study and
the main settings for smoothing filter, synthesizing, derivative calculation, feature space
transformation and classifier discriminant function. Processing details are entered in pop-up
windows, shown here with the example of the smoothing function. The text output panel in the
middle of the main window is used to display processing and error information.
The listbox on top of the text output panel is used to display spectra files that are loaded directly
into memory. The ‘Indiv. Classify’ button under it classifies the selected, individually loaded
spectra against the current library.
The library status box on the top right of the screen indicates whether statistical information has
been compiled for the current pre-processing settings.
Figure 5: Screen capture of SpectraProc
Spectral databases
The database developed for this project proved to be ideal for the data analysis that was
carried out. It was however not designed to act as a repository for spectra that could be accessed
by persons having no prior knowledge of the stored spectra. Therefore information such as the
instrument used, illumination conditions, collector details and extensive target description were
not included. Furthermore, the hierarchical structuring featuring species, sites and spectra could
be regarded as somewhat restrictive. The experiences gained so far indicate however that the
chosen structure applies to most experiments. In some cases the site level might not be needed,
but this inconvenience could be solved by a software modification, leaving the database structure unchanged.
The database approach also enables the data to be stored in a central place and offers
simultaneous data access to several users. The implemented system however does not offer
multi-user capability, i.e. users cannot store their own personalized settings.
Future spectral databases should provide multi-user access to studies and more information on
the instrumentation and environmental conditions of the sampling. The direct linkage with a
geographic information system (GIS) should also be considered when designing the database.
Spectral Processing Chain
The spectral processing chain consisted of waveband filtering, smoothing, sensor
synthesizing/downsampling, derivative calculation and feature space transformation. These are
the most commonly used operations in hyperspectral studies. It is, however, clear that the
implemented steps are not exhaustive. Other data processing operations, such as continuum removal and
special indices like band depth indices, are in use in the research community. Such operations do
not fit into the current chain. Furthermore, one could argue about the logical order of the
processing steps. For example, the derivatives could be calculated before or after the data reduction
(sensor synthesizing / downsampling). For such a modification, a more flexible approach would
be needed where the processing methods would be modularised allowing the interactive building
of processing chains.
Analysis Functionality
Only basic analysis functionality in the form of separability, discriminant and principal
components analysis was implemented. It was found that the effort in writing analysis functions
was only justified if the concerned function was used often. For more rarely used or more
complex functions, the use of 3rd party software on the pre-processed data proved to be more efficient.
Availability of SpectraProc to the Remote Sensing Community
The presented software has raised considerable interest among potential users and we are
currently evaluating different options as to how SpectraProc can be made available to the remote
sensing community. Expressions of interest are welcome and should be directed to the
corresponding author.
Fast and repeatable data processing is a key factor to the efficient study of hyperspectral data.
By storing the spectral data in a database, all subsequent operations can be carried out on the
original dataset which remains unchanged. The implementation of software with a database
interface that handled data input, processing and output proved to be a most effective way of
hyperspectral data processing. The processing chain developed in this study contains methods
that are most commonly used in hyperspectral studies. It is recommended that future processing
chains should be of a modular nature to accommodate more varieties of data processing steps.
Statistical research should be carried out in other software packages, and only if a certain method
has proven to be useful and often needed should it be implemented in the database interface software.

References
Bojinski, S., Schaepman, M., Schlaepfer, D. & Itten, K. (2003). SPECCHIO: a spectrum database
for remote sensing applications. Computers & Geosciences 29: 27-38.
Davies, R. (2002). NewMat.
Elvidge, C. D. & Chen, Z. (1995). Comparison of broadband and narrowband red and near-
infrared vegetation indices. Remote Sensing of Environment 54: 38-48.
Landgrebe, D. (1997). On Information Extraction Principles for Hyperspectral Data. West Lafayette, Purdue University.
Lillesand, T. M., Kiefer, R. W. & Chipman, J. W. (2004). Remote Sensing and Image
Interpretation, John Wiley & Sons.
McFadden, F. R. & Hoffer, J. A. (1988). Database Management. Redwood City, The
Benjamin/Cummings Publishing Co.
MySQL AB (2005). MySQL.
Richards, J. A. (1993). Remote Sensing Digital Image Analysis. Berlin, Springer Verlag.
Savitzky, A. & Golay, M. J. E. (1964). Smoothing and Differentiation of Data by Simplified Least
Squares Procedures. Analytical Chemistry 36(8): 1627-1639.
Schmidt, K. S. & Skidmore, A. K. (2003). Spectral discrimination of vegetation types in a coastal
wetland. Remote Sensing of Environment 85: 92-108.
Shaw, G. & Manolakis, D. (2002). Signal Processing for Hyperspectral Image Exploitation. IEEE
Signal Processing Magazine 19(1): 12-16.
Thenkabail, P. S., Enclona, E. A. & Ashton, M. S. (2004). Accuracy assessment of hyperspectral
waveband performance for vegetation analysis applications. Remote Sensing of Environment 91:
Tsai, F. & Philpot, W. (1998). Derivative Analysis of Hyperspectral Data. Remote Sensing of
Environment 66: 41-51.
University of Waikato (2005). WEKA.
Vane, G. & Goetz, A. F. H. (1988). Terrestrial Imaging Spectroscopy. Remote Sensing of
Environment 24: 1-29.
... Neste sentido, para visualização e processamento de dados hiperespectrais existem opções proprietárias e livres. Vamos realizar uma comparação aqui com as opções ENVI -THE LEADING GEOSPATIAL IMAGE ANALYSIS SOFTWARE (2020), GARFAGNOLI et al. (2013) e HUENI;TUOHY (2006). ...
Novos métodos baseados em Veículos Aéreos Não Tripulados, Sistema Global de Navegação por Satélite, Fotogrametria Digital e sensores hiperespectrais estão se tornando cada vez mais comuns e causando mudanças disruptivas na capacidade de coleta e processamento de dados nas Geociências. Esses métodos dão ao profissional acesso a informações que anteriormente não eram acessíveis ou cuja obtenção era demasiadamente custosa. Na Geologia do Petróleo, especificamente, a porosidade da rocha é uma propriedade vital para o estudo de fluxo de fluidos em reservatórios. Contudo, apesar da transformação digital ter modificado a forma de se trabalhar em diversas frentes, a medição da porosidade ainda utiliza métodos analíticos tradicionais diretos ou indiretos que demandam transporte de amostras para laboratório, são destrutivas e demandam tempo. Essa tese propõe uma alternativa não destrutiva, utilizável em campo e potencialmente contígua para estimar a porosidade de rochas carbonáticas usando dados hiperespectrais de reflectância e aprendizado de máquina. Para definição e validação do método proposto, apresentamos um experimento que leva em consideração rochas carbonáticas coletadas em dois afloramentos distintos, Cachoeira do Roncador localizada no município de Felipe Guerra (RN) e Pedreira Sal localizada no município de Campo Formoso (BA). Os afloramentos estudados possuem formações análogas às rochas de reservatório do Pré Sal Brasileiro. Utilizando o conjunto de dados coletado realizamos estimativas de porosidade cujo erro absoluto médio estimado fica abaixo de 2%.
... Hyperspectral software exist to associate hyperspectral measurements with their metadata. It is possible to identify two main systems developed for close range (ASD) measurements: SPECCHIO (Bojinski et al., 2003) and SpectraProc (Hueni and Tuohy, 2006). SPECCHIO in particular offers data including detailed metadata describing the environment, sampling geometry, spatial position, target type, object type, measurement sensor and acquisition campaign. ...
Les forêts tropicales, représentant 6,4% de la surface terrestre, abritent la plus grande biodiversité des écosystèmes terrestres et jouent un rôle fondamental dans le cycle du carbone à l'échelle mondiale. La durabilité de l'exploitation des forêts tropicales est un enjeu fondamental tant du point de vue de la conservation de la biodiversité que de la réduction des émissions liées à la déforestation et à la dégradation des forêts (REDD +). L'Office National des Forêts (ONF) est chargé de la conservation et de la gestion de 6 millions d'hectares de forêts privé en Guyane française. La possibilité de cartographier les espèces dans la canopée par télédétection est d'un intérêt évident, tant appliquées que scientifique.Les inventaires spatialisés à l'échelle du paysage contribueraient à faire progresser les connaissances fondamentales de ce biome complexe et menacé et aiderait à sa gestion durable. Les cartes de distribution d’espèces peuvent être croisées avec les facteurs environnementaux et fournir ainsi des clés d’interprétation des schémas d’organisation des peuplements forestiers. Du point de vue de la gestion, les cartes de distribution des espèces offre une rationalisation de l'exploitation forestière. La cartographie des espèces commerciales pourrait favoriser des pratiques forestières minimisant l'impact environnemental de l'exploitation. L'identification des espèces permettrait de prioriser les zones particulièrement riches en espèces commerciales, tout en évitant d'ouvrir des pistes d'exploitation dans les zones à faible niveau de ressources exploitables. La télédétection offre également la possibilité de surveiller l’extension des espèces proliférantes, telles que les lianes.Des capteurs hyperspectraux et LiDAR ont été utilisés à bord d’un avion pour identifier les espèces dans les forêts tropicales guyanaises. Une large gamme spectrale issue des capteurs hyperspectraux (400–2500 nm) est mesurée permettant d'avoir de nombreux descripteurs. 
Le LiDAR embarqué offre une description fine de la structure du couvert, facilitant la segmentation des houppiers. La fusion de ces deux informations améliore la caractérisation de la ressource.Afin de tirer le meilleur parti des données hyperspectrales, différents prétraitements radiométriques ont été évalués. Le lissage spatial et le filtrage des ombres sont les principaux facteurs qui améliorent la discrimination des espèces. L'utilisation de la gamme spectrale complète est également bénéfique. Ces résultats de classification ont été obtenus sur un groupe 20 espèces abondantes. L’identification de ces mêmes espèces en mélange au sein d’un peuplement hyperdivers a constitué la deuxième étape de ce travail.Nous avons évalué le niveau d'information nécessaire et le degré de confusion tolérable dans les données d’apprentissage afin de retrouver une espèce cible dans une canopée hyperdiverse. Une méthode de classification spécifique a été mise en œuvre pour être insensible à la contamination entre classes focales/non focales. Même dans le cas où la classe non focale contient jusqu’à 5% de pixels de la classe focale (espèce à identifier), les classifieurs se sont révélés efficaces.La troisème étape aborde le problème de la transposabilité des classifieurs d’une acquisition à une autre. La caractérisation des conditions d’acquisition et la prise en compte de leurs effets sont nécessaires pour convertir les données de radiance en réflectance de surface. Cependant cette opération de standardisation reste une étape extrêmement délicate au vue des nombreuses sources de variabilité : état de l’atmosphère, géométrie soleil-capteur et conditions d'éclairement. Nous évaluons en comparant des vols répétés sur le même site, la contribution des diverses caractéristiques d’acquisition à la divergence spectrale entre dates. 
Ce travail vise à proposer des pistes pour développer des méthodes de reconnaissance d'espèces qui soient plus robustes aux variations des caractéristiques d'acquisition.
... White reference is also called a calibration panel, which gives 100 % reflectance property. As shown in fig. 5 the one straight line indicates the white reference is reflected completely and now the user is ready to take ridings of spectral signature [18]. ...
Biometrics is the science of body measurements and calculations; in computer science it is used to establish the identity of individuals within a group. Numerous biometric methods and techniques are available for the automatic verification and identification of persons. Authentication tools involve verifying what individuals know (passwords, PINs), what they have (tokens, smart cards), and what they are (facial features, hand recognition, fingerprints, hand geometry, iris patterns). Compared with other biometric characteristics, palmprints are more accurate and reliable, and palmprint images carry more information than fingerprints. Hyperspectral palmprint recognition is a promising biometric technology that has received considerable research interest. Over the past decades many different algorithms and systems have been proposed and built. Although great success has been achieved in palmprint research, accuracy and anti-spoofing mechanisms remain limited in some cases, as palmprint features may appear similar under a given spectral illumination; hyperspectral palmprint recognition addresses this issue, as it can quickly provide more discriminative information under different illuminations. Previous research relied on images for identification and authentication, and such images can easily be spoofed with rubber cement, gelatin copies and similar media. In this paper, we solve this problem with a spectroscopic device that generates spectral signatures of the palmprint; these spectral signatures are unique to every person. This provides high security and prevents spoofing. The paper outlines the development of a palmprint spectral library using the ENVI 5.5 application software.
... Pre-processing of the Hyperion image is essential to remove geometric and radiometric errors (Hueni and Tuohy 2006). Due to its push-broom system and the huge volume of spectral data (242 contiguous bands), the Hyperion image is affected by atmospheric attenuation such as aerosols and water vapour. ...
The mineralogical composition of a hematite-deposited surface is investigated using a spectral model. The model grades hematite ore from a Hyperion satellite image. The analysis is restricted to the visible near-infrared wavelength range. Pre-processing was carried out for radiance-to-reflectance conversion, dimensionality reduction and the minimisation of scanning errors in the image data. Typical image spectral modelling follows the steps of continuum removal, locating the peak and absorption positions of the spectra, band-depth calculation (BD), spectral slope and full width at half maximum (FWHM). Well-defined relationships are evident between the concentration of iron oxide and (a) the slope between peak and absorption trough (R²: 0.729); and (b) FWHM (R²: 0.853). Lucey's model was therefore utilised to generate the hematite abundance map. The result was validated by the observed relationship between predicted and actual Fe, with R²: 0.80 and an average error of ±3.81%. This study demonstrates the possibility of evaluating iron grades from image-spectral parameters.
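The continuum-removal and band-depth steps listed in this abstract can be sketched as follows; the shoulder wavelengths and the synthetic spectrum in the usage are illustrative assumptions, not values from the study:

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, right):
    """Continuum-removed band depth for one absorption feature.

    A straight-line continuum is drawn between the feature shoulders at
    `left` and `right` (nm); the depth is 1 - R/continuum at the
    absorption minimum. Returns (depth, absorption position in nm).
    """
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    mask = (w >= left) & (w <= right)
    # Reflectance at the two shoulders, interpolated from the spectrum.
    rl = np.interp(left, w, r)
    rr = np.interp(right, w, r)
    # Linear continuum across the feature, then divide it out.
    continuum = np.interp(w[mask], [left, right], [rl, rr])
    removed = r[mask] / continuum
    i = np.argmin(removed)
    return 1.0 - removed[i], w[mask][i]
```

For example, a symmetric dip from 0.5 down to 0.3 between 800 nm and 1000 nm yields a depth of 0.4 at the 900 nm minimum.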
... Results of spectral measurements are often compiled and stored in digital spectral libraries. While several institutions keep their own spectral libraries for internal research, several spectral libraries are available in the public domain (Bojinski et al. 2003;Hueni and Tuohy 2006;Pfitzner et al. 2006;Clark et al. 2007;Baldridge et al. 2009;Viscarra Rossel et al. 2016). With the increasing number of spectrometer users and ongoing research activities, the number and diversity of such libraries grow as well. ...
The focus of this study was the comparative analysis and evaluation of reflectance measurements (350 nm–2500 nm) of a chlorite rock sample, which were collected by 26 institutions in 42 different spectroscopic set-ups as a part of an international measurement comparison designed to document the plurality in laboratory reflectance spectroscopy. The impacts of the different set-ups on the chlorite spectra were determined by analysing the parameter variations of two characteristic chlorite absorption features at 1400 nm and 2345 nm and interpretation based on user-provided metadata. The positions of the 1400 nm absorption features showed a standard deviation of 1.4 nm. Larger deviations were observed for the shoulders, widths, depths and areas. Here, the strongest deviations could be clearly related to impacts of unfavourable background materials and unsuitable illumination types. The positions of the absorption feature at 2345 nm showed a standard deviation of 16 nm and the variations in absorption width were stronger compared to the 1400 nm feature. In contrast, the variations of depths and areas of the feature at 2345 nm were comparable to the variations we observed for the 1400 nm feature but could not be assigned to singular influencing factors. Although the majority of the spectra showed the typical shapes and specific features of chlorite, strong deviations were present in a few spectra which are likely to hamper the spectral identification of chlorite and quantitative spectral analysis. Thus, the results of this study underline the necessity to define measurement standards and protocols and to provide basic information for future standards.
This paper focuses on vegetation health condition (VHC) assessment and mapping using high-resolution airborne hyperspectral AVIRIS-NG imagery, validated with field spectroscopy-based vegetation spectral data. It also quantifies the effect of mining on vegetation health for geo-environmental impact assessment at a fine scale. In this study, we developed a modified vegetation index (VI)-based model for VHC assessment and mapping at coal mining sites. Thirty narrow-band VIs were screened statistically to identify the suitable indices; those with the highest Pearson's r and R², the lowest RMSE, and significant P values were used for combined-pixel VI analysis. The vegetation combination index (VCI) with the highest difference between healthy and unhealthy vegetation was selected for VHC assessment and mapping. We also compared the VI model-based VHC results with the ENVI (software) forest health tool and spectral-based SAM classification results. The first VCI showed the highest difference (72.07%) among the VCIs. The AUC values of the ROC curve showed a better fit for the VI model (0.79) than for the spectral classification (0.74) and the ENVI FHT (0.68). The VHC results showed that unhealthy vegetation classes are located at short distances from mine sites, while healthy vegetation classes are situated at greater distances. There is also a highly significant positive relationship (R² = 0.70) between VHC classes and distance from mines. These results provide a guideline for geo-environmental impact assessment at coal mining sites.
Spectroradiometry has gained popularity over conventional techniques and is now used in numerous fields, such as in hyperspectral remote sensing. Spectroradiometry allows the non‐destructive sampling of objects for retrieval of biochemical and biophysical properties to provide the user with critical information more quickly and cheaply. This is facilitated by compilation of these signatures in a database and can further be utilized in the retrieval of relevant information. Hyperspectral imaging technology, which is also based on spectroradiometry, is used in the retrieval of spectral characteristics of surface features at the synoptic scale. This chapter reviews spectroradiometer types, data collection procedures, and their processing, with some examples.
Geological remote sensing is an emerging discipline that strengthens geological applications with advanced remote sensing methods and describes morpho-geological processes. Its collection of geo-scientific techniques is frequently used by field planners, analysts, researchers and scientists. This chapter serves to introduce remote sensing techniques in geology. The emphasis is on mineral mapping, as it interfaces strongly with both geology and remote sensing. The topics covered include mineral spectral characteristics (hyperspectral), with special reference to iron and copper ore, mineral exploration techniques using satellite data, mineral alteration mapping and lineament extraction methods. These approaches reduce the field effort for geologists who use conventional methods to collect field data. Furthermore, this chapter covers the various applications of multispectral and hyperspectral satellite data, such as Landsat, ASTER and EO-1 Hyperion, and their processing techniques. The use of remote sensing in geological applications, by means of spaceborne, airborne or ground-based sensors (spectroradiometers), is known as 'geological remote sensing'.
The main objectives of this research were to: (a) determine the best hyperspectral wavebands for the study of vegetation and agricultural crops over the spectral range of 400-2500 nm; and (b) assess the vegetation and agricultural crop classification accuracies achievable using various combinations of the best hyperspectral narrow wavebands. The hyperspectral data were gathered for shrub, grass, weed, and agricultural crop species from four ecoregions of African savannas using a 1-nm-wide hand-held spectroradiometer, but were aggregated to 10-nm-wide bandwidths to match the first spaceborne hyperspectral sensor, Hyperion. After accounting for atmospheric windows and/or areas of significant noise, a total of 168 narrowbands in the 400-2500 nm range were used in the analysis. Rigorous data mining techniques consisting of principal component analysis (PCA), lambda-lambda R² models (LLR²M), stepwise discriminant analysis (SDA), and derivative greenness vegetation indices (DGVI) established 22 optimal bands (in the 400-2500 nm spectral range) that best characterize and classify vegetation and agricultural crops. Overall accuracies of over 90% were attained when the 13-22 best narrowbands were used in classifying vegetation and agricultural crop species. Beyond 22 bands, accuracies increase only marginally up to 30 bands. Accuracies become asymptotic or near zero beyond 30 bands, rendering 138 of the 168 narrowbands redundant for extracting vegetation and agricultural crop information. Relative to Landsat Enhanced Thematic Mapper Plus (ETM+) broadbands, the best hyperspectral narrowbands provided an increased accuracy of 9-43% when classifying shrub, weed, grass, and agricultural crop species.
With the goal of applying derivative spectral analysis to analyze high-resolution, spectrally continuous remote sensing data, several smoothing and derivative computation algorithms have been reviewed and modified to develop a set of cross-platform spectral analysis tools. Emphasis was placed on exploring different smoothing and derivative algorithms to extract spectral details from spectral data sets. A modular program was created to perform interactive derivative analysis. This module calculated derivatives using either a convolution (Savitzky–Golay) or finite divided difference approximation algorithm. Spectra were smoothed using one of the three built-in smoothing algorithms (Savitzky–Golay smoothing, Kawata–Minami smoothing, and mean-filter smoothing) prior to the derivative computation procedures. Laboratory spectral data were used to test the performance of the implemented derivative analysis module. An algorithm for detecting the absorption band positions was executed on synthetic spectra and a soybean fluorescence spectrum to demonstrate the usage of the implemented modules in extracting spectral features. Issues related to smoothing and spectral deviation caused by the smoothing or derivative computation algorithms were also observed and are discussed. A scaling effect, resulting from the migration of band separations when using the finite divided difference approximation derivative algorithm, can be used to enhance spectral features at the scale of specified sampling interval and remove noise or features smaller than the sampling interval.
In attempting to analyze, on digital computers, data from basically continuous physical experiments, numerical methods of performing familiar operations must be developed. The operations of differentiation and filtering are especially important both as an end in themselves, and as a prelude to further treatment of the data. Numerical counterparts of analog devices that perform these operations, such as RC filters, are often considered. However, the method of least squares may be used without additional computational complexity and with considerable improvement in the information obtained. The least squares calculations may be carried out in the computer by convolution of the data points with properly chosen sets of integers. These sets of integers and their normalizing factors are described and their use is illustrated in spectroscopic applications. The computer programs required are relatively simple. Two examples are presented as subroutines in the FORTRAN language.
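The convolution-with-integer-sets approach described above can be illustrated with the classic 5-point quadratic smoothing weights tabulated by Savitzky and Golay. This is a minimal sketch of the smoothing pass only; a full implementation would also handle the endpoints and the derivative weight sets:

```python
import numpy as np

# The 5-point quadratic Savitzky-Golay smoothing set: integers
# (-3, 12, 17, 12, -3) with normalising factor 35.
SG5 = np.array([-3, 12, 17, 12, -3]) / 35.0

def sg_smooth(y):
    """Smooth a spectrum by least-squares convolution with the 5-point
    weights. The two samples at each end are left unsmoothed for
    simplicity (a sketch, not a complete implementation)."""
    y = np.asarray(y, dtype=float)
    out = y.copy()
    # The kernel is symmetric, so convolution equals correlation here.
    out[2:-2] = np.convolve(y, SG5, mode="valid")
    return out
```

Because the weights fit a local quadratic, any straight line or parabola passes through the filter unchanged, which is the key advantage of this method over a simple moving average.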
An experiment has been conducted in which narrow-band field reflectance spectra were acquired of a roofed pinyon pine canopy with five different gravel backgrounds. Leaf area was successively removed as the measurements were repeated. From these reflectance spectra, narrow-band and broad-band (AVHRR, TM, MSS) red and near-infrared (NIR) vegetation index values were calculated. The performance of the vegetation indices was evaluated based on their capability to accurately estimate leaf area index (LAI) and percent green cover. Background effects were found for each of the tested vegetation indices. However, the background effects are most pronounced in the normalized difference vegetation index (NDVI) and ratio vegetation index (RVI). Background effects can be reduced using either the perpendicular vegetation index (PVI) or soil adjusted vegetation index (SAVI) formulations. The narrow-band versions of these vegetation indices had only slightly better accuracy than their broad-band counterparts. The background effects were minimized using derivative based vegetation indices, which measure the amplitude of the chlorophyll red-edge using continuous narrow-band spectra from 626 nm to 795 nm.
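The NDVI and SAVI formulations compared in this experiment follow standard red/NIR definitions; a brief sketch (the soil-adjustment constant L = 0.5 is a commonly used default, an assumption here rather than a value from the experiment):

```python
def ndvi(red, nir):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(red, nir, L=0.5):
    """Soil-adjusted vegetation index. The L term dampens the soil
    background signal that the abstract identifies as a problem for
    NDVI; L = 0.5 is a typical default, not a value from the study."""
    return (1 + L) * (nir - red) / (nir + red + L)
```

With a bright background, the extra L term in the denominator pulls SAVI values down less erratically than NDVI, which is why the abstract reports reduced background effects for the SAVI formulation.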
A review of progress made in the new field of imaging spectroscopy is presented based on the nine papers making up the special issue of this journal. Background material on the motivation for the new approach to earth remote sensing is discussed. The history, design, and performance of the pioneering sensor for terrestrial high resolution remote sensing, the Airborne Imaging Spectrometer (AIS), are presented. Concluding this paper is a discussion of plans for the future of imaging spectroscopy of the earth.
Remote sensing is an important tool for mapping and monitoring vegetation. Advances in sensor technology continually improve the information content of imagery for airborne, as well as space-borne, systems. This paper investigates whether vegetation associations can be differentiated using hyperspectral reflectance in the visible to shortwave infrared spectral range, and how well species can be separated based on their spectra. For this purpose, the field reflectance spectra of 27 saltmarsh vegetation types of the Dutch Waddenzee wetland were analysed in three steps. Prior to analysis, the spectra were smoothed with an innovative wavelet approach.
Representative and comprehensive information on the spectral properties of natural and artificial materials on the Earth's surface is highly relevant in aircraft or satellite remote sensing, such as geological mapping, vegetation analysis, or water quality estimation. For this reason, the spectrum database SPECCHIO (Spectral Input/Output) has been developed, offering ready access to spectral campaign data, modelled data, and existing spectral libraries. Web-based and command line interfaces allow for the input of spectral data of heterogeneous formats and descriptions, as well as interactive queries, previews, and downloads. ASCII and ENVI spectral library data formats are currently supported. SPECCHIO is used as a reference database for the retrieval of geophysical and biophysical parameters from remotely sensed data, accounting for the frequent lack of surface spectra. The database is also used for the general management of spectral data, including detailed ancillary data.
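Relational storage of spectra, as the SPECCHIO description outlines, can be sketched with a small in-memory example; the table and column names below are illustrative only, not the actual SPECCHIO schema:

```python
import sqlite3

# One table for spectrum-level metadata, one for the per-wavelength
# measurements, linked by a foreign key (names are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE spectrum (
    spectrum_id INTEGER PRIMARY KEY,
    campaign    TEXT,
    target      TEXT
);
CREATE TABLE measurement (
    spectrum_id   INTEGER REFERENCES spectrum(spectrum_id),
    wavelength_nm REAL,
    reflectance   REAL
);
""")
con.execute("INSERT INTO spectrum VALUES (1, 'field-2006', 'pasture')")
con.executemany(
    "INSERT INTO measurement VALUES (1, ?, ?)",
    [(350.0, 0.05), (351.0, 0.06)],
)
# Interactive queries then reduce to ordinary SQL over the metadata.
rows = con.execute(
    "SELECT wavelength_nm, reflectance FROM measurement "
    "WHERE spectrum_id = 1 ORDER BY wavelength_nm"
).fetchall()
```

Normalising the data this way is what makes the campaign-level queries and ancillary-data management described above straightforward.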