ISPRS Journal of Photogrammetry and Remote Sensing

Published by Elsevier
Print ISSN: 0924-2716
Mapping and monitoring impervious surface dynamics in a complex urban-rural frontier with medium or coarse spatial resolution images is a challenge due to the mixed pixel problem and the spectral confusion between impervious surfaces and other non-vegetation land covers. This research selected Lucas do Rio Verde County in Mato Grosso State, Brazil as a case study to improve impervious surface estimation through the integrated use of Landsat and QuickBird images, and to monitor impervious surface change by analyzing normalized multitemporal Landsat-derived fractional impervious surfaces. The research demonstrates the importance of a two-step calibration. The first step calibrates the Landsat-derived fractional impervious surface values with a regression model established against the QuickBird-derived impervious surface image of 2008. The second step normalizes the impervious surface images from other dates against the calibrated 2008 image. The results indicate that the per-pixel based method overestimates impervious surface area in the urban-rural frontier by 50-60%. To estimate impervious surface area accurately, it is necessary to map the fractional impervious surface image and further calibrate the estimates with high spatial resolution images. Normalization of the multitemporal fractional impervious surface images is also needed to reduce the impact of differing environmental conditions, so that impervious surface change can be detected effectively in a complex urban-rural frontier. The procedure developed in this paper for mapping and monitoring impervious surface area is especially valuable in urban-rural frontiers, where traditional per-pixel classification of multitemporal Landsat images cannot accurately extract impervious surface features because it does not handle the mixed pixel problem effectively.
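The two-step calibration described above can be illustrated with a minimal least squares sketch. All coefficients and arrays here are synthetic illustrations, not values from the study:

```python
import numpy as np

def fit_calibration(landsat_fraction, reference_fraction):
    """Fit a linear model reference ~= a * landsat + b by least squares."""
    A = np.column_stack([landsat_fraction, np.ones_like(landsat_fraction)])
    (a, b), *_ = np.linalg.lstsq(A, reference_fraction, rcond=None)
    return a, b

def apply_calibration(fraction, a, b):
    """Calibrate a fractional impervious-surface image and clip to [0, 1]."""
    return np.clip(a * fraction + b, 0.0, 1.0)

# Synthetic example: coarse-resolution fractions are systematically biased
# relative to the fine-resolution (QuickBird-like) reference.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 1.0, 200)        # reference impervious fraction
landsat = 0.6 * truth + 0.2               # biased coarse-resolution estimate
a, b = fit_calibration(landsat, truth)
calibrated = apply_calibration(landsat, a, b)
```

The same regression machinery can serve the second step, normalizing other dates against the calibrated 2008 image.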
 
In research on forest mapping with data from the Landsat Thematic Mapper, special attention has been given to work aimed at covering large sections of Bavaria at a scale of 1:200 000. The first 1:200 000 sheet processed was the Regensburg sheet. The classification was based on signature analysis of training areas. With that information, the stands were studied according to tree species, age class and mixture distribution. Multitemporal methods were applied to separate forest from non-forest areas. In addition to the main classes of deciduous, coniferous and mixed deciduous/coniferous, the separation of spruce and pine was included.
 
This paper deals with the orientation of three-line imagery taken during the MOMS-02/D2 experiment in spring 1993 and during the MOMS-2P/PRIRODA mission since April 1996. The reconstruction of the image orientation is based on a combined adjustment of the complete image, ground control, orbit and attitude information. The combined adjustment makes use of either the orientation point approach or the orbital constraints approach. In the second case, the bundle adjustment algorithm is supplemented by a rigorous dynamical modelling of the spacecraft motion. The results of the combined adjustment using MOMS-2P/D2 imagery and control information of orbit #75b are presented. Based on the orientation point approach, an empirical height accuracy of up to 4 m (0.3 pixel) is obtained. In planimetry the empirical accuracy is limited to about 10 m (0.7 pixel), since the ground control points (GCP) and check points could not be identified in the imagery with the required accuracy. Computer simulations of MOMS-2P/PRIRODA image orientation based on realistic input information have shown that good accuracies of the estimated exterior orientation parameters and object point coordinates can be obtained either with a single strip and a few precise GCP, or even without ground control information if a block of several overlapping and crossing strips with high geometric strength (q~60%) is adjusted.
 
Image segmentation can be performed on raw radiometric data, but also on transformed colour spaces or, for high-resolution images, on textural features. We review several existing colour space transformations and textural features and investigate which combination of inputs gives the best results for the task of segmenting high-resolution multispectral aerial images of rural areas into their constituent cartographic objects, such as fields, orchards, forests or lakes, with a hierarchical segmentation algorithm. A method to quantitatively evaluate the quality of a hierarchical image segmentation is presented, and the behaviour of the segmentation algorithm for various parameter sets is also explored.
 
This article deals with the design, construction and use of the Dahl Autograph, an innovative stereo-plotting instrument designed and built in the late 1930s by a Norwegian surveying engineer, J.J. Dahl, details of which have remained unknown till now. After giving some biographical information on the designer's background and career in surveying and mapping, the situation in photogrammetry in general and in Norway in particular at the time of the instrument's conception is explained. The basic principle and the main design features of the instrument are then covered, with the use of positive positions of the mechanical photo planes, the zero-base arrangement of the projection system, and the use of a unique double stereo-viewing system being picked out as clear innovations in instrumental design at the time of its construction. Comparisons are made with other later instruments incorporating the same or similar design features. Finally the operational aspects of the Dahl Autograph are discussed together with the results of accuracy tests conducted by Professor Hallert on both the normal angle and wide angle versions of the instrument.
 
The Canadian government's imaginative approach to fostering scientific excellence produced an environment ideal to draw the very best from Uki Helava's fertile intellect when he joined the Photogrammetric Section, Division of Physics, NRC, Ottawa in 1953. Initial work on error control in aerial triangulation led into the design of instruments for image measurement. This work culminated in the production of the NRC Monocomparator, but Helava's main effort was devoted to the analytical plotter, for which a working model was exhibited in 1963. Difficulties with the commercialization of this device led to Helava's decision to leave NRC in 1965.
 
The Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) has a unique low-light imaging capability developed for the detection of clouds using moonlight. In addition to moonlit clouds, the OLS also detects lights from human settlements, fires, gas flares, heavily lit fishing boats, lightning and the aurora. By analysing the location, frequency, and appearance of lights observed in an image time series, it is possible to distinguish four primary types of lights present at the earth's surface: human settlements, gas flares, fires, and fishing boats. We have produced a global map of the four types of light sources as observed during a 6-month time period in 1994–1995. We review a number of environmental applications that have been developed or proposed based on the night-time light data. We examine the relationship between area of lighting, population, economic activity, electric power consumption, and energy related carbon emissions for 200 nations, representing 99% of the world's population.
 
A mathematical model is designed to provide an accurate method of transforming PAN imagery from image space to object space and vice versa, using a minimum of one ground control point (GCP) to determine the exterior orientation of the images. The model, initially developed for SPOT images, uses collinearity condition equations to model the satellite path, while variations of the satellite attitude with time are modelled using higher-order polynomials. Initial orbit information is obtained from the given ephemeris data and refinement is carried out using an iterative least squares solution. The model is tested for three different cases: (1) a single image, (2) a strip (acquired from one detector during a single orbit pass), and (3) a stereopair. For cases (1) and (2), an average error of 9.1 m in latitude and 7.6 m in longitude could be achieved using a single surveyed GCP for modelling. Using one ground control point identified from a 1:50,000 scale map, accuracies of the order of 38.3 m in latitude, 42.6 m in longitude and 23.8 m in height were obtained for a stereopair. The results verify the model and give some idea of the extent to which IRS-1C PAN will contribute to future evolutions in photogrammetry and cartography. The software for orbit attitude modelling described in this paper is a part of SOFTSPACE, a softcopy photogrammetric workstation for handling stereo data of IRS-1C PAN and SPOT images, which is an integrated package of preprocessing, restitution, DTM and feature capture modules.
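The time-dependent attitude modelling can be sketched minimally as a low-order polynomial fitted to an attitude angle by least squares; the drift coefficients below are invented for illustration and are not values from the paper:

```python
import numpy as np

def fit_attitude(times, angles, degree=2):
    """Fit a low-order polynomial to a time-dependent attitude angle
    (e.g. roll) by least squares, as a stand-in for the higher-order
    polynomial attitude model refined iteratively in the paper."""
    return np.polynomial.Polynomial.fit(times, angles, degree)

t = np.linspace(0.0, 10.0, 50)                # seconds along the orbit pass
roll = 1e-4 + 2e-5 * t - 3e-6 * t**2          # synthetic slow drift (radians)
model = fit_attitude(t, roll)
max_residual = float(np.max(np.abs(model(t) - roll)))
```

In the full model, such polynomials for roll, pitch and yaw enter the collinearity equations, and their coefficients are estimated together with the orbit refinement.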
 
Fusion of optical and synthetic aperture radar data has been attempted in the present study for mapping various lithologic units over a part of the Singhbhum Shear Zone (SSZ) and its surroundings. ERS-2 SAR data over the study area were enhanced using a Fast Fourier Transform (FFT) based filtering approach and also using the Frost filtering technique. Both enhanced SAR images were then separately fused with a histogram-equalized IRS-1C LISS III image using the Principal Component Analysis (PCA) technique. The Feature-oriented Principal Components Selection (FPCS) technique was then applied to generate False Color Composite (FCC) images, from which corresponding geological maps were prepared. Finally, GIS techniques were successfully used for change detection analysis of the lithological interpretation between the published geological map and the fusion-based geological maps. In general, there is good agreement between these maps over a large portion of the study area. Based on the change detection studies, a few areas could be identified which need attention for further detailed ground-based geological studies.
 
Figure captions: (1) Coverage and source dataset of the Landsat ETM+ scenes used for the forest/non-forest classification. (2) Flowchart of the developed processing chain. (3) An example of the effect of the minimum mapping unit on the CLC: the small forest patches present in the extract of the original Landsat ETM+ imagery (left) have been merged with the dominating land cover class (agriculture) in the final CLC product (right). (4) Pan-European forest/non-forest map representing year-2000 forested area (coordinate reference system ETRS-LAEA, spatial resolution 25 m).
This paper describes a simple and adaptive methodology for large area forest/non-forest mapping using Landsat ETM+ imagery and CORINE Land Cover 2000. The methodology is based on scene-by-scene analysis and supervised classification. The fully automated processing chain consists of several phases, including image segmentation, clustering, adaptive spectral representativity analysis, training data extraction and nearest-neighbour classification. This method was used to produce a European forest/non-forest map through the processing of 415 Landsat ETM+ scenes. The resulting forest/non-forest map was validated with three independent data sets. The results show that the map’s overall point-level agreement with our validation data generally exceeds 80%, and approaches 90% in central European conditions. Comparison with country-level forest area statistics shows that in most cases the difference between the forest proportion of the derived map and that computed from the published forest area statistics is below 5%.
 
The overarching goal of this research was to develop methods and protocols for mapping irrigated areas using a Moderate Resolution Imaging Spectroradiometer (MODIS) 500 m time series, to generate irrigated area statistics, and to compare these with ground- and census-based statistics. The primary mega-file data-cube (MFDC), comparable to a hyper-spectral data cube, used in this study consisted of 952 bands of data in a single file, derived from MODIS 500 m, 7-band reflectance data acquired every 8 days during 2001–2003. The methods consisted of (a) segmenting the 952-band MFDC based not only on elevation-precipitation-temperature zones but also on major and minor irrigated command area boundaries obtained from India’s Central Board of Irrigation and Power (CBIP), (b) developing a large ideal spectral data bank (ISDB) of irrigated areas for India, (c) adopting quantitative spectral matching techniques (SMTs) such as the spectral correlation similarity (SCS) R2-value, (d) establishing a comprehensive set of protocols for class identification and labeling, and (e) comparing the results with the National Census data of India and field-plot data gathered during this project for determining accuracies, uncertainties and errors. The study produced irrigated area maps and statistics of India at the national and subnational (e.g., state, district) levels based on MODIS data from 2001–2003. The Total Area Available for Irrigation (TAAI) and the Annualized Irrigated Area (AIA) were 113 and 147 million hectares (MHa), respectively. The TAAI does not consider the intensity of irrigation, and its nearest equivalent is the net irrigated area in the Indian national statistics. The AIA considers the intensity of irrigation and is the equivalent of the “irrigation potential utilized (IPU)” reported by India’s Ministry of Water Resources (MoWR).
The field-plot data collected during this project showed that the accuracy of the TAAI classes was 88%, with a 12% error of omission and a 32% error of commission. Comparisons between the AIA and IPU produced an R2-value of 0.84; however, the AIA was consistently higher than the IPU. The causes of the differences lay both in the traditional approaches and in remote sensing. The causes of uncertainty unique to the traditional approaches were (a) inadequate accounting of minor irrigation (groundwater, small reservoirs and tanks), (b) the unwillingness of individual Indian states to share irrigated area statistics because of their stakes in the reported figures, (c) the absence of comprehensive statistical analyses of the reported data, and (d) subjectivity in the observation-based data collection process. The causes of uncertainty unique to the remote sensing approaches were (a) the irrigated area fraction estimate and the related sub-pixel area computations and (b) the resolution of the imagery. The causes of uncertainty common to both traditional and remote sensing approaches were definitional and methodological issues.
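The spectral matching step above can be sketched as follows: each pixel time series is compared against an ideal spectral data bank using the SCS R2-value, and the best-matching class is assigned. The class names and signatures below are invented for illustration, not entries from the actual ISDB:

```python
import numpy as np

def scs_r2(signature, ideal):
    """Spectral correlation similarity: the squared Pearson correlation
    between a pixel time series and an ideal class signature."""
    r = np.corrcoef(signature, ideal)[0, 1]
    return r * r

def match_class(signature, data_bank):
    """Assign the class whose ideal signature has the highest SCS R2."""
    scores = {name: scs_r2(signature, ideal) for name, ideal in data_bank.items()}
    return max(scores, key=scores.get)

# Hypothetical 8-observation NDVI-like time series per class.
bank = {
    "double_crop_irrigated": np.array([0.2, 0.5, 0.7, 0.3, 0.6, 0.8, 0.4, 0.2]),
    "single_crop_rainfed":   np.array([0.2, 0.3, 0.6, 0.7, 0.4, 0.2, 0.2, 0.2]),
}
pixel = np.array([0.25, 0.55, 0.65, 0.35, 0.55, 0.75, 0.45, 0.25])
label = match_class(pixel, bank)
```

Because the correlation is invariant to offset and gain, the match responds to the shape of the seasonal curve rather than its absolute reflectance level.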
 
After a period of anomalous activity affecting the Volcano of Stromboli (Aeolian volcanic arc, Italy), the “Sciara del Fuoco” slope, situated on the north-east flank of the island, was affected by major landslides on December 30, 2002. Recent lava accumulations deposited since the beginning of the eruption (December 28, 2002) and a portion of the subaerial and submarine deposits were detached. As a result, tsunami waves several meters high affected the coasts of the island. After the event, monitoring activities coordinated by the Italian Civil Protection Department included systematic photogrammetric surveys. The digital photogrammetric technique was adopted to extract high-resolution digital elevation models and large-scale orthophotos. The comparison between the data collected before the eruption and those acquired on January 5, 2003, together with bathymetric data, made it possible to define the geometry and estimate the volume of the surfaces involved in the landslides. The following 13 photogrammetric surveys (from January to September 2003) enabled the monitoring of the continuous and substantial morphological changes induced by both the lava flow and the evolution of the instability phenomena. The method adopted for the data analysis and the results obtained are described in the paper.
 
In the 2003 Mars Exploration Rover (MER) mission, the twin rovers, Spirit and Opportunity, carry identical Athena instrument payloads and engineering cameras for exploration of the Gusev Crater and Meridiani Planum landing sites. This paper presents the photogrammetric processing techniques for high accuracy topographic mapping and rover localization at the two landing sites. Detailed discussions about camera models, reference frames, interest point matching, automatic tie point selection, image network construction, incremental bundle adjustment, and topographic product generation are given. The developed rover localization method demonstrated the capability of correcting position errors caused by wheel slippages, azimuthal angle drift and other navigation errors. A comparison was also made between the bundle-adjusted rover traverse and the rover track imaged from the orbit. Mapping products including digital terrain models, orthophotos, and rover traverse maps have been generated for over two years of operations, and disseminated to scientists and engineers of the mission through a web-based GIS. The maps and localization information have been extensively used to support tactical operations and strategic planning of the mission.
 
Monitoring the evolution of polar glaciers, ice caps and ice streams is of utmost importance because they constitute a good indicator of global climate change and contribute significantly to ongoing sea level rise. Accurate topographic surveys are particularly relevant as they reflect the geometric evolution of ice masses. Unfortunately, the precision and/or spatial coverage of current satellite missions (radar altimetry, ICESat) or field surveys are generally insufficient. Improving our knowledge of the topography of polar regions is the goal of the SPIRIT (SPOT 5 stereoscopic survey of Polar Ice: Reference Images and Topographies) international polar year (IPY) project. SPIRIT will allow (1) the acquisition of a large archive of SPOT 5 stereoscopic images covering most polar ice masses, and (2) the delivery of digital terrain models (DTMs) to the scientific community. Here, we present the architecture of this project and the coverage achieved over northern and southern polar areas during the first year of the IPY (July 2007 to April 2008). We also provide the first accuracy assessments of the SPIRIT DTMs. Over Jakobshavn Isbrae (West Greenland), SPIRIT elevations are within ±6 m of ICESat elevations for 90% of the data. Comparisons with ICESat profiles over the Devon ice cap (Canada), the St Elias Mountains (Alaska) and west Svalbard confirm the good overall quality of the SPIRIT DTMs, although large errors are observed in the flat accumulation area of the Devon ice cap. We then demonstrate the potential of SPIRIT DTMs for mapping glacier elevation changes. The comparison of summer-2007 SPIRIT DTMs with October-2003 ICESat profiles shows that the thinning of Jakobshavn Isbrae (by 30–40 m in 4 years) is restricted to the fast glacier trunk. The thinning of the coastal part of the ice stream (by over 100 m) and the retreat of its calving front (by up to 10 km) are clearly depicted by comparing the SPIRIT DTM to an ASTER April-2003 DTM.
 
Decisions based on basic geometric entities can only be optimal if their uncertainty is propagated through the entire reasoning chain. This concerns the construction of new entities from given ones, the testing of geometric relations between entities, and the estimation of the parameters of geometric entities based on spatial relations which have been found to hold. Basic feature extraction procedures often provide measures of uncertainty. Incorporating these uncertainties into the representation of geometric entities permits statistical testing, eliminates the need to specify non-interpretable thresholds and enables statistically optimal parameter estimation. Using the calculus of homogeneous coordinates, the power of algebraic projective geometry can be exploited in these steps of image analysis. This review collects, discusses and evaluates the various representations of uncertain geometric entities in 2D, together with their conversions. The representations are extended to achieve a consistent set of representations allowing geometric reasoning. The statistical testing of geometric relations is presented. Furthermore, a generic estimation procedure is provided for multiple uncertain geometric entities based on possibly correlated observed geometric entities and geometric constraints.
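A standard construction in this setting is joining two uncertain 2D points into a line using homogeneous coordinates, with the covariance propagated to the first order. The sketch below illustrates that pattern on invented point coordinates and covariances; it is not code from the review:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) with S(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def join_uncertain_points(x, Sx, y, Sy):
    """Line through two uncertain homogeneous 2D points: l = x cross y,
    with first-order covariance propagation
    Sl = S(y) Sx S(y)^T + S(x) Sy S(x)^T."""
    l = np.cross(x, y)
    Sl = skew(y) @ Sx @ skew(y).T + skew(x) @ Sy @ skew(x).T
    return l, Sl

# Two image points with unit variance in each Euclidean coordinate.
x = np.array([0.0, 0.0, 1.0])
y = np.array([4.0, 0.0, 1.0])
Sx = np.diag([1.0, 1.0, 0.0])
Sy = np.diag([1.0, 1.0, 0.0])
l, Sl = join_uncertain_points(x, Sx, y, Sy)
```

The resulting covariance of the line parameters is what makes statistical tests of relations such as incidence possible without hand-tuned thresholds.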
 
This paper describes the geometric in-flight calibration of the Modular Optoelectronic Multispectral Scanner MOMS-2P, which collected digital multispectral and threefold along-track stereoscopic imagery of the earth's surface from the PRIRODA module of the Russian space station MIR from October 1996 to August 1999. The goal is the verification and, if necessary, the update of the calibration data, which were estimated in the geometric laboratory calibration. The paper is subdivided into two parts, describing two different procedures of geometric in-flight calibration. The first method is based on DLR matching software and is restricted to the nadir-looking channels, which are read out simultaneously. From a high number of individual point matches between the images of the same area taken by the different CCD arrays, the most reliable ones are selected and used to calculate shifts, with components in and across the flight direction, between the CCD arrays. These actual shifts are compared to the nominal shifts derived from the results of the laboratory calibration, and the parameters of the valid camera model are estimated from both data sets by least squares adjustment. A special case of band-to-band registration is the two optically combined CCD arrays of the nadir high-resolution channel. They are read out simultaneously with a nominal 10-pixel overlap in stereoscopic imaging mode A. The DLR matching software is applied to calculate the displacement vector between the two CCD arrays. The second method is based on a combined photogrammetric bundle adjustment using an adapted functional model for the reconstruction of the interior orientation. It requires precise and reliable ground control information as well as navigation data of the navigation package MOMS-NAV. Nine contiguous image scenes of MOMS-2P data-take T083C, forming an approximately 550-km-long strip over southern Germany and Austria taken in March 1997, were evaluated.
Calibration data are estimated from both procedures; they are presented and compared with the laboratory calibration results.
 
Automatic 3D building reconstruction has become increasingly important for a number of applications. The reconstruction of buildings using only aerial images as the data source has proven to be a very difficult problem. The complexity of the reconstruction can be greatly reduced by combining the aerial images with other data sources. In this paper, we describe a 3D building reconstruction method that integrates aerial image analysis with information from large-scale 2D Geographic Information System (GIS) databases and domain knowledge. By combining the images with GIS data, the specific strengths of both the images (high resolution, accuracy and large information content) and the GIS data (relatively simple interpretation) are exploited.
 
3D surface matching is an ill-conditioned problem when the curvature of the object surface is either homogeneous or isotropic, e.g. for planar or spherical objects. A reliable solution can only be achieved if supplementary information or functional constraints are introduced. In a previous paper, an algorithm was proposed for the least squares matching of overlapping 3D surfaces digitized/sampled point by point using a laser scanner device, the photogrammetric method or other techniques [Gruen, A., and Akca, D., 2005. Least squares 3D surface and curve matching. ISPRS Journal of Photogrammetry and Remote Sensing 59 (3), 151–174]. That method estimates the transformation parameters between two or more fully 3D surfaces, minimizing by least squares the Euclidean distances, rather than the z-differences, between the surfaces. In this paper, an extension to the basic algorithm is given which can simultaneously match the surface geometry and its attribute information (e.g. intensity, colour, temperature) under a combined estimation model. Three experimental results based on terrestrial laser scanner point clouds are presented. The experiments show that the basic algorithm can solve the surface matching problem provided that the object surface carries at least the minimal information; if not, the laser-scanner-derived intensities are used as supplementary information to find a reliable solution. The method derives its mathematical strength from the least squares image matching concept and offers a high level of flexibility for many kinds of 3D surface correspondence problems.
 
Surveying techniques such as terrestrial laser scanning have recently been used to measure surface changes via 3D point cloud (PC) comparison. Two types of approaches have been pursued: 3D tracking of homologous parts of the surface to compute a displacement field, and distance calculation between two point clouds when homologous parts cannot be defined. This study deals with the second approach, typical of natural surfaces altered by erosion, sedimentation or vegetation between surveys. Current comparison methods are based on a closest-point distance or require at least one of the PCs to be meshed, with severe limitations when surfaces present roughness elements at all scales. We introduce a new algorithm performing a direct comparison of point clouds in 3D. Surface normals are first estimated in 3D at a scale consistent with the local surface roughness. The mean change along the normal direction is then measured with an explicit calculation of a confidence interval. Comparison with existing methods demonstrates the higher accuracy of our approach, as well as an easier workflow due to the absence of surface meshing or DEM generation. Application of the method to a rapidly eroding meandering bedrock river (the Rangitikei river canyon) illustrates its ability to handle 3D differences in complex situations (flat and vertical surfaces in the same scene), to reduce the uncertainty related to point cloud roughness by local averaging and to generate 3D maps of uncertainty levels. Combined with mm-range local georeferencing of the point clouds, levels of detection down to 6 mm can be routinely attained in situ over ranges of 50 m. We provide evidence for the self-affine behavior of different surfaces. We show how this impacts the calculation of normal vectors and demonstrate the scaling behavior of the level of change detection.
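The core measurement, the mean change along the local normal with a confidence interval, can be sketched as below. This is a simplified illustration on a synthetic flat patch; the published method additionally selects the normal and projection scales from the local roughness:

```python
import numpy as np

def normal_change(core, normal, cloud1, cloud2, radius=1.0, z=1.96):
    """Mean surface change along the normal at a core point, with a
    confidence interval on the mean (simplified sketch of the idea)."""
    def offsets(cloud):
        d = cloud - core
        along = d @ normal                        # signed distance along normal
        lateral = d - np.outer(along, normal)     # in-surface component
        # keep points inside a cylinder of the given radius around the normal
        return along[np.linalg.norm(lateral, axis=1) < radius]
    d1, d2 = offsets(cloud1), offsets(cloud2)
    change = d2.mean() - d1.mean()
    ci = z * np.sqrt(d1.var(ddof=1) / len(d1) + d2.var(ddof=1) / len(d2))
    return change, ci

# Synthetic rough plane; the second survey is uplifted by 5 cm.
rng = np.random.default_rng(0)
def rough_plane(n, lift=0.0):
    xy = rng.uniform(-1.0, 1.0, (n, 2))
    zc = rng.normal(0.0, 0.002, n) + lift         # 2 mm surface roughness
    return np.column_stack([xy, zc])

change, ci = normal_change(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                           rough_plane(500), rough_plane(500, lift=0.05))
```

Averaging many points inside the cylinder is what shrinks the confidence interval well below the raw roughness, giving the mm-scale levels of detection reported.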
 
The development of tools for the generation of 3D city models started almost two decades ago. From the beginning, fully automatic reconstruction systems were envisioned to fulfil the need for efficient data collection. However, research on automatic city modelling is still a very active area. The paper will review a number of current approaches in order to comprehensively elaborate the state of the art of reconstruction methods and their respective principles. Originally, automatic city modelling only aimed at polyhedral building objects, which mainly reflects the respective roof shapes and building footprints. For this purpose, airborne images or laser scans are used. In addition to these developments, the paper will also review current approaches for the generation of more detailed facade geometries from terrestrial data collection.
 
A three-dimensional (3D) spatial index is required for real-time applications involving the integrated organization and management, in virtual geographic environments, of above-ground, underground, indoor and outdoor objects. As one of the most promising methods, the R-tree spatial index has received increasing attention in 3D geospatial database management. Since existing R-tree methods are usually limited by low efficiency, due to the critical overlap of sibling nodes and the uneven size of nodes, this paper introduces the k-means clustering method and employs the 3D overlap volume, the 3D coverage volume and the minimum-bounding-box shape value of nodes as integrative grouping criteria. A new spatial cluster grouping algorithm and a new R-tree insertion algorithm are then proposed. Experimental analysis of comparative spatial indexing performance shows that with the new method the overlap of R-tree sibling nodes is reduced drastically and a balance in the volumes of the nodes is maintained.
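Two of the grouping criteria named above, node coverage volume and sibling overlap volume, reduce to simple axis-aligned box arithmetic. A minimal sketch on invented boxes (the shape-value criterion and the k-means grouping loop are omitted):

```python
import numpy as np

def mbb(points):
    """Axis-aligned minimum bounding box (lo, hi) of a 3D point set."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def box_volume(lo, hi):
    """3D coverage volume of a box; zero if degenerate."""
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def overlap_volume(box_a, box_b):
    """3D overlap volume of two boxes: the quantity the grouping
    criteria try to minimize between sibling nodes."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    return box_volume(lo, hi)

a = mbb([[0, 0, 0], [2, 2, 2]])
b = mbb([[1, 1, 1], [3, 3, 3]])
c = mbb([[5, 5, 5], [6, 6, 6]])
ov_ab = overlap_volume(a, b)   # boxes a and b share a unit cube
ov_ac = overlap_volume(a, c)   # boxes a and c are disjoint
```

In the proposed index, candidate groupings produced by k-means would be scored by such volumes so that sibling nodes end up compact, similar in size and nearly disjoint.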
 
Current terrain modelling algorithms are not capable of reconstructing 3D surfaces, but are restricted to so-called 2.5D surfaces: for each planimetric position, only one height may exist. The objective of this paper is to extend terrain relief modelling to 3D. In a 3D terrain model, overhangs and caves, cliffs and tunnels are represented correctly. Random measurement errors, limitations in data sampling and the requirement for a smooth surface rule out a triangulation of the original measurements as the final terrain model. A new algorithm, starting from a triangular mesh in 3D and following the subdivision paradigm, is presented. It is a stepwise refinement of a polygonal mesh, in which the locations of the vertices on the next level are computed from the vertices on the current level. This yields a series of triangulated terrain surfaces with increasing point density and smaller angles between adjacent triangles, converging to a smooth surface. The proposed algorithm can accommodate the special requirements of terrain modelling, e.g. breaklines. The refinement process can be stopped as soon as a resolution suitable for a specific application is obtained. Examples of an overhang, a bridge modelled as part of the terrain surface, and a 2.5D terrain surface are presented. The implications of extending modelling to 3D are discussed for typical terrain model applications.
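The subdivision paradigm can be illustrated with the simplest refinement step: each triangle is split into four via shared edge midpoints. This sketch omits the smoothing rules that move vertices between levels (and the breakline handling), which are what make the published scheme converge to a smooth surface:

```python
def subdivide(vertices, triangles):
    """One step of midpoint (1-to-4) triangle subdivision in 3D.
    Each edge gets one shared midpoint vertex; each triangle is
    replaced by four smaller triangles."""
    vertices = [tuple(v) for v in vertices]
    midpoint = {}
    def mid(i, j):
        key = (min(i, j), max(i, j))      # an edge is shared by two triangles
        if key not in midpoint:
            a, b = vertices[i], vertices[j]
            vertices.append(tuple((p + q) / 2.0 for p, q in zip(a, b)))
            midpoint[key] = len(vertices) - 1
        return midpoint[key]
    new_tris = []
    for i, j, k in triangles:
        ij, jk, ki = mid(i, j), mid(j, k), mid(k, i)
        new_tris += [(i, ij, ki), (ij, j, jk), (ki, jk, k), (ij, jk, ki)]
    return vertices, new_tris

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]
tris = [(0, 1, 2)]
verts2, tris2 = subdivide(verts, tris)
```

Because the vertices live in full 3D rather than as heights over a plane, the same refinement applies unchanged to overhangs, tunnels and vertical cliffs.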
 
In image analysis, scale-space theory is used, e.g., for object recognition. A scale-space is obtained by deriving coarser representations of an image at different scales. With it, the behaviour of image features over scales can be analysed. One example of a scale-space is the reaction-diffusion space, a combination of linear scale-space and mathematical morphology. As scale-spaces have an inherent abstraction capability, they are used here for the development of an automatic generalization procedure for three-dimensional (3D) building models. It can be used to generate level-of-detail (LOD) representations of 3D city models. Practically, it works by moving parallel facets towards each other until a 3D feature below a certain extent is eliminated or a gap is closed. As not all building structures consist of perpendicular facets, means for squaring non-orthogonal structures are provided. Results for generalization and squaring are shown and remaining problems are discussed. The conference version of this paper is Forberg [Forberg, A., 2004. Generalization of 3D Building Data Based on a Scale-Space Approach. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 35 (Part B4) http://www.isprs.org/istanbul2004/comm4/papers/341.pdf (accessed January 17, 2007)].
 
The automatic co-registration of point clouds, representing 3D surfaces, is a relevant problem in 3D modeling. This multiple registration problem can be defined as a surface matching task. We treat it as least squares matching of overlapping surfaces. The surface may have been digitized/sampled point by point using a laser scanner device, a photogrammetric method or other surface measurement techniques. Our proposed method estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface, using the Generalized Gauss–Markoff model, minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation gives the opportunity of matching arbitrarily oriented 3D surface patches. It fully considers 3D geometry. Besides the mathematical model and execution aspects we address the further extensions of the basic model. We also show how this method can be used for curve matching in 3D space and matching of curves to surfaces. Some practical examples based on the registration of close-range laser scanner and photogrammetric point clouds are presented for the demonstration of the method. This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for the analysis of the quality of final matching results.
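The least squares estimation of a rigid transform between corresponding point sets, one building block of such surface matching, has a well-known closed-form solution. The sketch below uses the Kabsch/SVD solution on synthetic data as a simplified stand-in; the paper's method instead iterates a Generalized Gauss-Markoff adjustment against the nearest surface patch, without assuming point-to-point correspondences:

```python
import numpy as np

def rigid_fit(template, search):
    """Closed-form least squares rigid transform (R, t) taking `search`
    points onto `template` points with known correspondences."""
    ct, cs = template.mean(axis=0), search.mean(axis=0)
    H = (search - cs).T @ (template - ct)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard reflections
    R = Vt.T @ D @ U.T
    t = ct - R @ cs
    return R, t

# Synthetic test: a rotated and translated copy of a template cloud.
rng = np.random.default_rng(1)
template = rng.uniform(-1.0, 1.0, (50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
search = template @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_fit(template, search)
aligned = search @ R.T + t
```

The full surface matcher also estimates scale and, crucially, minimizes Euclidean distances to the template surface itself, which is what allows arbitrarily oriented patches to be registered.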
 
This paper describes the fusion of information extracted from multispectral digital aerial images for highly automatic 3D map generation. The proposed approach integrates spectral classification and 3D reconstruction techniques. The multispectral digital aerial images consist of a high resolution panchromatic channel as well as lower resolution RGB and near infrared (NIR) channels and form the basis for information extraction. Our land use classification is a two-step approach that uses the RGB and NIR images for an initial classification and the panchromatic images as well as a digital surface model (DSM) for a refined classification. The DSM is generated from the high resolution panchromatic images of a specific photo mission. Based on the aerial triangulation using area- and feature-based points of interest, the algorithms are able to generate a dense DSM by a dense image matching procedure. Afterwards a true ortho image can be computed for the classification, panchromatic or color input images. In a last step, specific layers for buildings and vegetation are generated and the classification is updated.
 
3D point clouds of natural environments relevant to problems in geomorphology often require classification of the data into elementary relevant classes. A typical example is the separation of riparian vegetation from ground in fluvial environments, the distinction between fresh surfaces and rockfall in cliff environments, or more generally the classification of surfaces according to their morphology. Natural surfaces are heterogeneous and their distinctive properties are seldom defined at a unique scale, prompting the use of multi-scale criteria to achieve a high degree of classification success. We have thus defined a multi-scale measure of the point cloud dimensionality around each point, which characterizes the local 3D organization. We can thus monitor how the local cloud geometry behaves across scales. We present the technique and illustrate its efficiency in separating riparian vegetation from ground and classifying a mountain stream as vegetation, rock, gravel or water surface. In these two cases, separating the vegetation from ground or other classes achieves accuracies larger than 98%. Comparison with a single-scale approach shows the superiority of the multi-scale analysis in enhancing class separability and spatial resolution. The technique is robust to missing data, shadow zones and changes in point density within the scene. The classification is fast and accurate and can account for some degree of intra-class morphological variability such as different vegetation types. A probabilistic confidence in the classification result is given at each point, allowing the user to remove the points for which the classification is uncertain. The process can be fully automated, but also fully customized by the user, including a graphical definition of the classifiers. Although developed for fully 3D data, the method can be readily applied to 2.5D airborne lidar data.
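At a single scale, the dimensionality around a point can be characterized by the normalized eigenvalues of the covariance of its neighborhood: roughly (1, 0, 0) for a linear (1D) structure, two comparable values and a third near zero for a planar (2D) patch, and three comparable values for volumetric (3D) scatter such as vegetation. A single-scale sketch (the paper combines such features across several radii; `numpy` is assumed):

```python
import numpy as np

def dimensionality(cloud, center, radius):
    """Normalized covariance eigenvalues (descending) of the neighborhood of
    `center` within `radius`; ~(1,0,0) = linear, third value ~0 = planar."""
    nb = cloud[np.linalg.norm(cloud - center, axis=1) <= radius]
    cov = np.cov((nb - nb.mean(axis=0)).T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return ev / ev.sum()

# A noisy horizontal plane: variance is spread over two axes only.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0.0, 0.01, 500)])
p = dimensionality(plane, np.zeros(3), 1.0)
# p[0] and p[1] are comparable, p[2] is near zero: a 2D (planar) neighborhood.
```

Evaluating this feature at several radii and stacking the results gives the multi-scale descriptor on which a classifier can be trained.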
 
In this paper, we present an automatic building extraction method from Digital Elevation Models based on an object approach. First, a rough approximation of the building footprints is realized by a method based on marked point processes: the building footprints are modeled by rectangle layouts. Then, these rectangular footprints are regularized by improving the connection between the neighboring rectangles and detecting the roof height discontinuities. The obtained building footprints are structured footprints: each element represents a specific part of an urban structure. Results are finally applied to a 3D-city modeling process.
 
The recent developments in the fields of pattern recognition and parallel computation open new possibilities to motion analysis. The quantitative analysis of movement is no longer restricted to research, but is becoming an important tool for the assessment of patients and therapeutic evaluation in the clinical environment, as well as in industrial applications. The main features of an automatic system for motion analysis using optoelectronic sensors (special nonmetric TV cameras) and passive lightweight markers (with no limits in number) are described herein. The system is hierarchically organized on two levels: the first provides for marker recognition and is implemented by a dedicated hardware processor, and the second is devoted to the software processing of the spatial coordinates of the markers. Special attention has been paid to the description of the algorithms for 3D spatial resection and intersection. The features of the system lead to high performance with respect to noise rejection, flexibility of set-up, accuracy and ease of use, even in critical environmental conditions.
 
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
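The geometric filtering in the second step can be sketched as a simple tolerance test on candidate strip dimensions. The specification values and tolerance below are hypothetical placeholders, not the paper's actual road-marking standards:

```python
# Hypothetical strip specification (metres) and relative tolerance; real values
# come from the applicable national road-marking standard.
SPEC = {"length": 4.0, "width": 0.5, "tol": 0.2}

def plausible_strip(length_m, width_m, spec=SPEC):
    """Keep a candidate strip only if both measured dimensions fall within the
    relative tolerance of the specified shape."""
    ok_len = abs(length_m - spec["length"]) <= spec["tol"] * spec["length"]
    ok_wid = abs(width_m - spec["width"]) <= spec["tol"] * spec["width"]
    return ok_len and ok_wid

# (length, width) measured on 3D linked edges from the matching step.
candidates = [(4.1, 0.48), (2.0, 0.50), (4.0, 1.20)]
kept = [c for c in candidates if plausible_strip(*c)]
print(kept)  # [(4.1, 0.48)]
```

Only the surviving linked edges are passed to the final model-fitting step, which keeps the fit from being contaminated by spurious edges.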
 
In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (simulated crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. Both day-to-day users and high-level decision makers are addressees of this tailored information product. The degree of abstraction required for understanding a complex analytical content is balanced with the ease and appeal by which the context is conveyed.
 
A joint research project was conducted at ETH Zurich to develop a user-friendly software environment for the representation, visual manipulation, analysis and design of urban areas. Three groups were involved in the project: (1) the ‘Architecture and Planning’ group defined the requirements and expectations for the system; (2) the ‘Photogrammetry’ group acquired and processed raster and 3D vector data to form a 3D model of the urban area; and (3) the ‘CAAD’ (Computer Aided Architectural Design) group embedded the data into AutoCAD and implemented database functionality. Results of the photogrammetry group are presented, including the implementation of a ‘topology builder’ which automatically fits roof planes to manually or semi-automatically measured roof points in order to create AutoCAD-compatible 3D building models. Digital orthoimages and derived products such as perspective views, and the geometric correction of house roofs in digital orthoimages, were also generated for test sites in Switzerland.
 
Three-dimensional photo-models are object models with texture information taken from photographs. They can be used, among other things, to present structural projects to the public and decision-makers. The geometrical modeling of our photo-model is supported by assumptions on the object shape. These assumptions are introduced as fictitious feature observations into the process of hybrid bundle block adjustment. The resulting geometrical model is then filled with texture by projecting the original photographs onto it. A suitable data format for such purposes is VRML (Virtual Reality Modeling Language) because such three-dimensional photo-models can be made available over the Internet. In addition, they can be combined very easily with other media such as texts, sounds, images, movies and, because of that, may serve as a basis for information systems.
 
In this paper, a novel multiple representation data structure for dynamic visualisation of 3D city models, called CityTree, is proposed. To create a CityTree, the ground plans of the buildings are generated and simplified. Then, the buildings are divided into clusters by the road network and one CityTree is created for each cluster. The leaf nodes of the CityTree represent the original 3D objects of each building, and the intermediate nodes represent groups of close buildings. By utilising CityTree, it is possible to have dynamic zoom functionality in real time. The CityTree methodology is implemented in a framework where the original city model is stored in CityGML and the CityTree is stored as X3D scenes. A case study confirms the applicability of the CityTree for dynamic visualisation of 3D city models.
 
Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.
 
This paper addresses the steps associated with reconstructing a model from a number of scans acquired with different sensor techniques. It discusses the concept of managing and operating on scans with different origins, which may have different spatial resolution, accuracy and coverage. In order to cope with these varying sensor characteristics, the article introduces the concept of an error bound, which describes a point-wise uncertainty inherited from the acquisition system. Further processing steps in a 3D model reconstruction task need to consider this measure in order to handle varying point accuracies and correctly manage the final fusion step, which builds a single model from the best available information in all partial datasets. Furthermore, the paper discusses concepts for using normal weights and normal cut-offs in the ICP algorithm, and we show experimental results comparing these techniques. The registration and fusion technique proposed in the article is applied to a 16-scan project aimed at accurately reconstructing an office building and some utility constructions.
 
An image-based 3D surface reconstruction method based on simultaneous evaluation of intensity and polarisation features (shape from photopolarimetric reflectance) and its combination with absolute depth data is introduced in this article. The proposed technique is based on the analysis of single or multiple intensity and polarisation images. To compute the surface gradients, we present a global optimisation method based on a variational framework and a local optimisation method based on solving a set of non-linear equations individually for each image pixel. These approaches are suitable for strongly non-Lambertian surfaces and those of diffuse reflectance behaviour and can also be adapted to surfaces of non-uniform albedo. We describe how independently measured absolute depth data is integrated into the shape from photopolarimetric reflectance framework in order to increase the accuracy of the 3D reconstruction result. In this context we concentrate on dense but noisy depth data obtained by depth from defocus and on sparse but accurate depth data obtained by stereo or structure from motion analysis. We show that depth from defocus information should preferentially be used for initialising the optimisation schemes for the surface gradients. For integration of sparse depth information, we suggest an optimisation scheme that simultaneously adapts the surface gradients to the measured intensity and polarisation data and to the surface slopes implied by depth differences between pairs of depth points. In principle, arbitrary sources of depth information are possible in the presented framework. Experiments on synthetic and on real-world data reveal that while depth from defocus is especially helpful for providing an initial estimate of the surface gradients and the albedo in the absence of a-priori knowledge, integration of stereo or structure from motion information significantly increases the 3D reconstruction accuracy. 
In our real-world experiments, we regard the scenarios of 3D reconstruction of raw forged iron surfaces in the domain of industrial quality inspection and the generation of a digital elevation model of a section of the lunar surface in the context of space-based planetary exploration.
 
An investigation of the application of 1-m Ikonos satellite imagery to 3D point positioning and building reconstruction is reported. The focus of the evaluation of Geo panchromatic imagery has been upon 3D positioning accuracy, radiometric quality and attributes of the image data for building feature extraction. Following an initial review of characteristics of the Ikonos system, the multi-image dataset employed is described, as is the Melbourne Ikonos testfield. Radiometric quality and image preprocessing aspects are discussed, with attention being given to noise level and artifacts, as well as to methods of image enhancement. The results of 2D and 3D metric accuracy tests using straightforward geometric sensor models are summarised, these showing that planimetric accuracy of 0.3–0.6 m and height accuracy of 0.5–0.9 m are readily achievable with only three to six ground control points. A quantitative and qualitative assessment of the use of stereo Ikonos imagery for generating building models is then provided, using the campus of the University of Melbourne as an evaluation site. The results of this assessment are discussed, these highlighting the high accuracy potential of Ikonos Geo imagery and the limitations to be considered in building reconstruction when comprehensive and detailed modelling is required.
 
This paper summarises the results achieved from a number of laser scanning experiments performed in our laboratories and on remote sites. The potential of this technology for imaging applications and as an input to virtualised reality environments is discussed. Parameters to be considered for this type of activity are related to the design of laser scanners with adequate depth of field, image resolution, shape reproduction fidelity, registered colour information, robustness to ambient light interference and scanning strategies. The first case reviewed is an application geared towards improving access to art collections belonging to museums. A number of digital 3D models acquired in Italy in 1997–1998 are presented, e.g. marble statue from G. Pisano (circa 1305). The second case aims at digitising large structures. Examples of a large sculpture located outside of the Canadian Museum of Civilisation in Hull, Canada and the Orbiter Docking System (ODS) located at the Kennedy Space Center in Florida, are presented.
 
Improvements and new developments in the fields of sensor and computer technology have had a major influence on photogrammetry, and today a large variety of cameras and photogrammetric systems is available. The application of the CAD-based 3D feature extraction routine within a system for digital photogrammetry and architectural design (DIPAD) is described. This system enables the operator to perform the whole reconstruction of the three-dimensional object without any manual measurement. The operator's task is only to interpret the scene qualitatively while selecting the features to be measured, whereas the quantitative measurement and calculations are performed by the computer. An automatic data transfer of the photogrammetrically generated three-dimensional object description to a CAAD (computer aided architectural design) system provides new capabilities for architects and conservationists.
 
This paper presents a hybrid concept of interaction between scene and sensors for image interpretation. We present a strategy for 3D building acquisition which combines different approaches based on different levels of description and different sensors: the detection of regions of interest, and the automatic and semi-automatic reconstruction of object parts and complete buildings.
 
Figure highlights: inappropriate vs. penalized hypersurfaces; 3D-facet hard constraints; 3D-segment constraint impact; impact of the smoothing parameter K on different classes.
This paper aims to present a new approach for automatic urban scene modeling from high-resolution satellite images, with a focus on building areas. The input data consist of a panchromatic stereo pair of satellite images, with a submetric resolution of 50–70 cm and a low base-to-height ratio B/H [0.05–0.2]. Since a detailed extraction and description of building roofs is complex in a satellite context, we propose to describe the scene by means of a 3D surface that provides either raster or vector information using different description levels. The main contribution of our approach is the use of 3D features such as 3D-segments and 3D-facets to guide the optimization process. 3D surface modeling can be formulated as a matching problem that can be solved by graph cut minimization. The novelty consists in the original construction of the graph to combine input 2D data and 3D feature constraints to control the final surface. Complementary features are used: 3D-segments model discontinuities and 3D-facets help to regularize the surface by planar patches. The proposed automatic system provides a surface height map with subpixel precision. Moreover, the system is generic and extensible to other data such as aerial and terrestrial images or to a multiple view context. External databases can also easily be added to the process to constrain the optimization.
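The energy minimized by such a graph cut combines a data (matching) term with a smoothness term weighted by a regularization constant. For a single 1D column of pixels the same kind of energy can be minimized exactly by dynamic programming, which makes the data/smoothness trade-off easy to see; this toy sketch is illustrative only and is not the paper's 2D graph construction:

```python
import numpy as np

def best_profile(cost, smooth_k):
    """Exact minimization of E(h) = sum_i cost[i, h_i] + K * sum_i |h_i - h_{i+1}|
    over a 1D chain of pixels by dynamic programming."""
    n, num_labels = cost.shape
    labels = np.arange(num_labels)
    dp = cost[0].astype(float).copy()
    back = np.zeros((n, num_labels), dtype=int)
    for i in range(1, n):
        # trans[l, m] = cost of arriving at label l from previous label m
        trans = dp[None, :] + smooth_k * np.abs(labels[:, None] - labels[None, :])
        back[i] = trans.argmin(axis=1)
        dp = cost[i] + trans.min(axis=1)
    h = np.zeros(n, dtype=int)
    h[-1] = int(dp.argmin())
    for i in range(n - 1, 0, -1):
        h[i - 1] = back[i, h[i]]
    return h

# Observed height labels with one outlier; unary cost = |label - observation|.
obs = np.array([0, 0, 1, 0, 0])
cost = np.abs(np.arange(2)[None, :] - obs[:, None]).astype(float)
print(best_profile(cost, smooth_k=0.6))  # [0 0 0 0 0] -- the outlier is smoothed away
```

With a smaller smoothness weight the outlier survives; the 3D-segment and 3D-facet constraints in the paper play a similar role, steering the minimum toward geometrically plausible surfaces.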
 
Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR, which became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and automatic recognition and reconstruction of these objects from digital aerial images, but also from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes ‘buildings’, ‘cars’ and ‘trees’ by using aerial colour images of an urban area of the town of Engen in Germany.
 
This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. This approach has been implemented in a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and creating reality models to be used in general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, architecture and others. The paper presents the main components of the EST/AEST, and presents some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.
 
This paper presents a method to assess the geometric quality of 3D building models. The quality depends on properties of the input data and the processing steps. Insight in the quality of 3D models is important for users to judge whether the models can be used in their specific applications. Without a proper quality description it is likely that the building models are either treated as correct or considered as useless because the quality is unknown. In our research we analyse how the quality parameters of the input data affect the quality of the 3D models. The 3D models have been reconstructed from dense airborne laser scanner data of about 20 pts/m2. A target based graph matching approach has been used to relate specific data features to general building knowledge. The paper presents a theoretical and an empirical approach to identify strong parts and shortcomings in 3D building models reconstructed from airborne laser scanning data without the use of reference measurements. Our method is tested on three different scenes to show that a proper quality description is essential to correctly judge the quality of the models.
 
In industry there is a growing demand for as-built digital information on large pipe systems. In this paper two models are presented which enable the reconstruction of straight and curved pipes from digital images. The model for straight pipes is based on Mulawa's coplanarity constraint, defining a relation between the unknown parameters of a 3D line and measured points on this line in a set of images. Furthermore, a model is developed for the reconstruction of curved pipes. The combination of both models results in a fast and more accurate determination of the parameters of both the straight and curved pipes. It is shown that line photogrammetry can be an appropriate method for the reconstruction of straight pipes. Tests on a set of images of a pipe frame give encouraging results for the use of digital line photogrammetry for the 3D reconstruction of pipe systems.
 
Photogrammetry has many advantages as a technique for the acquisition of three-dimensional models for virtual reality, but the traditional photogrammetric process of extracting 3D geometry from multiple images is often considered too labour-intensive. In this paper a method is presented with which a polyhedral object model can be efficiently derived from measurements in a single image combined with geometric knowledge of the object. Man-made objects can often be described by a polyhedral model and usually many geometric constraints are valid. These constraints are inferred during image interpretation or may even be extracted automatically. In this paper different types of geometric constraints and their use for object reconstruction are discussed. Applying more constraints than needed for reconstruction leads to redundancy and thereby to the need for an adjustment. This redundancy is the basis for the reliability that is introduced by testing for possible measurement errors. The adjusted observations are used for object reconstruction in a separate step. Of course, the model obtained from a single image will not be complete, for instance due to occlusion. An arbitrary number of models can be combined using similarity transformations based on the coordinates of common points. The information gathered allows for a bundle adjustment if the highest accuracy is sought. In virtual reality applications this is generally not the case, as quality is mainly determined by visual perception. A visual aspect of major importance is the photo-realistic texture mapped to the faces of the object. This texture is extracted from the same (single) image. In this paper the measurement process, the different types of constraints, their adjustment and the object model reconstruction are treated. A practical application of the proposed method is discussed, in which a texture-mapped model of a historic building is constructed and the repeatability of the method is assessed.
The application shows the feasibility of the method and the potential of photogrammetry as an efficient tool for the production of 3D models for virtual reality applications.
 
Scanning of analogue images has become a key hardware technology specific to modern digital photogrammetry. Since specialised photogrammetric scanners were introduced in the late 1980s, a gradual development and improvement of their performance regarding hardware, software, functionality and productivity has been observed. Originally, geometric accuracy was the overriding specification for scanners. This is increasingly being augmented by a concern for good colour and radiometric performance. This article describes the UltraScan 5000, a modern photogrammetric scanner manufactured by Vexcel Imaging Austria, and its features, assesses its radiometric and geometric performance with various well-founded tests, and discusses its versatility and use in production. The UltraScan 5000 was introduced in November 1998 and since then a surprisingly large number of systems has been installed worldwide. Their successful operation illustrates on a daily basis the validity of the technical solution, and tests at user sites have confirmed good to excellent performance regarding geometric accuracy and resolution, radiometric performance (noise, dynamic range) and colour rendition.
 
A comparison between data acquisition and processing from passive optical sensors and airborne laser scanning is presented. A short overview and the major differences between the two technologies are outlined. Advantages and disadvantages with respect to various aspects are discussed, such as sensors, platforms, flight planning, data acquisition conditions, imaging, object reflectance, automation, accuracy, flexibility and maturity, production time and costs. A more detailed comparison is presented with respect to DTM and DSM generation. Strengths of laser scanning with respect to certain applications are outlined. Although airborne laser scanning competes to a certain extent with photogrammetry and will replace it in certain cases, the two technologies are fairly complementary and their integration can lead to more accurate and complete products, and open up new areas of application.
 
This paper presents an algorithm for the segmentation of airborne laser scanning data. The segmentation is based on cluster analysis in a feature space. To improve the quality of the computed attributes, a recently proposed neighborhood system, called slope adaptive, is utilized. Key parameters of the laser data, e.g., point density, measurement accuracy, and horizontal and vertical point distribution, are used for defining the neighborhood among the measured points. Accounting for these parameters facilitates the computation of accurate and reliable attributes for the segmentation, irrespective of point density and the 3D content of the data (step edges, layered surfaces, etc.). The segmentation with these attributes reveals more of the information that exists in the airborne laser scanning data.
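Cluster analysis in a feature space can be sketched with a plain k-means over per-point attributes (say, height above ground and a roughness measure). This sketch does not reproduce the paper's slope-adaptive neighborhood, which governs how such attributes are computed; the attribute values below are synthetic:

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centroids = [features[0]]
    for _ in range(k - 1):  # seed each new centroid at the farthest point
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# Two synthetic attribute clusters: low, smooth points vs. high, rough points;
# attributes are (height, roughness).
rng = np.random.default_rng(2)
ground = rng.normal([0.0, 0.1], 0.05, size=(100, 2))
veg = rng.normal([3.0, 1.0], 0.05, size=(100, 2))
labels = kmeans(np.vstack([ground, veg]), 2)
# All "ground" points share one label, all "vegetation" points the other.
```

The quality of such a segmentation hinges entirely on the attributes, which is why the paper invests in a neighborhood definition that keeps them reliable across varying point densities.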
 
The article discusses the theoretical precision for the measurement of six degrees of freedom (6DOF = position and orientation in 3D space) of an object with respect to a reference system using one camera only. Single-camera solutions are of interest in applications where the set-up is restricted to only one camera, e.g., due to restrictions in terms of costs, synchronisation demands, or spatial observation conditions. A pure photogrammetric approach is discussed in which two sequential space resections generate the required six transformation values. The paper describes the mathematical model and precision evaluations based on mathematical simulations, as well as some practical examples. With suitable configurations of camera position, object size and reference system, accuracies of better than 1:10 000 of the measuring volume can be achieved, whereby translations in Z and rotations around ω and φ are the most critical to measure.
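The forward model behind space resection is the collinearity (pinhole) projection: the six transformation values (three rotations ω, φ, κ plus the projection-centre position) map object points to image coordinates, and resection inverts this mapping from known control points, typically by iterative least squares. A sketch of the forward model only, under one common rotation convention:

```python
import numpy as np

def rotation(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega), one common photogrammetric convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points, omega, phi, kappa, cam_center, f):
    """Pinhole/collinearity forward model: image coordinates of object points
    for a camera at cam_center with attitude (omega, phi, kappa), focal length f."""
    R = rotation(omega, phi, kappa)
    cam = (points - cam_center) @ R.T   # object -> camera coordinates, row-wise
    return f * cam[:, :2] / cam[:, 2:3]

# Identity attitude, camera at the origin, unit focal length:
xy = project(np.array([[1.0, 2.0, 10.0]]), 0.0, 0.0, 0.0, np.zeros(3), 1.0)
print(xy)  # [[0.1 0.2]]
```

Resection linearizes this model around approximate values of the six parameters and iterates; the precision figures quoted in the article follow from propagating the image-measurement noise through that adjustment.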
 
Top-cited authors
Thomas Blaschke
  • University of Salzburg
George Vosselman
  • University of Twente
Peter Hofmann
  • Deggendorf Institute of Technology
Lucian Drǎguţ
  • West University of Timisoara
Mariana Belgiu
  • University of Twente