PRECISE 3D GEO-LOCATION OF UAV IMAGES USING GEO-REFERENCED DATA
M. Hamidi a*, F. Samadzadegan a
a School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
(m.hamidi, samadz)@ut.ac.ir
Commission VI, WG VI/4
KEY WORDS: UAV, Geo-location, Direct Geo-referencing, Database matching, Ray-DSM intersection.
ABSTRACT:
In most UAV applications it is essential to determine the exterior orientation of on-board sensors and precise ground locations of the images acquired by them. This paper presents a precise methodology for 3D geo-location of UAV images using geo-referenced data. The fundamental concept behind this geo-location process is the use of a database matching technique for refining the coarse initial attitude and position parameters of the camera derived from the navigation data. These refined exterior orientation parameters are then used for geo-locating the entire image frame using a rigorous collinearity model in a backward scheme. A forward geo-location procedure is also proposed, based on a ray-DSM intersection method, for cases where the ground location of specific image targets (and not the entire frame) is required. Experimental results demonstrate the potential of the proposed method for accurate geo-location of UAV images. Applying this method, an RMSE of about 14 m in horizontal and 3D positions has been obtained.
1. INTRODUCTION
Nowadays, Unmanned Aerial Vehicles (UAVs) are a valuable source of data for inspection, surveillance, mapping and 3D modelling tasks. Their ability and suitability in performing dangerous and repetitive tasks, as well as providing imagery of high spatial and temporal resolution, are great advantages that have made this technology so widespread (Rango et al., 2006; Heintz et al., 2007; Semsch et al., 2009; Remondino et al., 2011; Saari et al., 2011; Neitzel et al., 2011; Nex et al., 2014; Bollard-Breen et al., 2015; Wischounig-Strucl and Rinner, 2015). In most of these applications it is essential to determine the exterior orientation of the sensor and the precise ground locations of UAV images. The mapping between camera coordinates and ground coordinates, called geo-location, depends both on the position and attitude of the sensor and on the distance and topography of the ground (Kumar et al., 2000).
With geodetic-grade GPS/IMU on manned aerial vehicles, direct and integrated sensor orientation can determine exterior orientation parameters precisely. However, the accuracy of the GPS/IMU devices on-board UAV platforms is not sufficient for these applications. The small size and reduced payload of many UAV platforms rule out carrying high-quality IMU devices like those coupled to airborne cameras or LiDAR sensors used for mapping (Remondino et al., 2011). Moreover, GPS is mainly used in code-based positioning mode and thus is not sufficient for accurate direct sensor orientation (Remondino et al., 2011). Furthermore, integrated sensor orientation needs an image block, which in several UAV applications might not be available.
Matching UAV-acquired images against previously available geo-referenced imagery as a database can provide accurate position and orientation parameters of the UAV platform with no GCPs or image block required, and thereby improve the geo-location accuracy considerably. However, in places where there are not many details on the terrain, or over the sea, this method is of little help. Further, in disasters like floods, earthquakes or tsunamis, the terrain may have undergone substantial changes in the areas of interest and hence registration may fail (Kushwaha et al., 2014). The accuracy of this geo-location process depends on many factors, such as the accuracy of the GPS/IMU data, the accuracy of the reference database (image and DEM), the camera calibration parameters, the image matching accuracy, and the number and dispersal of matched points in the image.
* Corresponding author
Barber et al. (2006) presented a method for determining the GPS location of a ground-based object imaged from a fixed-wing miniature air vehicle (MAV). Using the pixel location of the target in an image, together with measurements of MAV position and attitude and camera pose angles, the target is localized in world coordinates. They present four techniques for reducing the localization error: RLS filtering, bias estimation, flight path selection, and wind estimation.
Kumar et al. (2011) proposed a method for determining the location of a ground-based target viewed from an Unmanned Aerial Vehicle. They use the concept of direct geo-referencing in combination with a range finder to convert pixel coordinates on the video frame to the target's geo-location in the North-East-Down (NED) frame. They fuse RGB vision and thermal images in order to provide day- and night-time operation.
Arun et al. (2012) discussed an unmanned aerial vehicle capable of navigating autonomously to geo-localize an arbitrary ground target. They used two on-board cameras, one forward-looking for vision-based navigation, and the other nadir-pointing for geo-location purposes. The geo-location task is achieved by first registering the video sequence obtained from the vehicle with aerial images of the region, and then performing a geometric coordinate transformation from the aerial images to the video frames using the homography matrix derived from the matching phase.
Kushwaha et al. (2014) discussed a model for obtaining the geo-location of a target in real time from UAV videos, taking the digital elevation data into account as well. That work also uses the principles of the direct geo-referencing technique: it intersects the light rays coming from the perspective center with the elevation map to find the ground location of the target of interest.
While most publications focus on geo-locating a specific object in the video stream, with the method described here all information collected by the on-board camera is accurately geo-located through registration with pre-existing geo-referenced imagery. This paper presents a precise methodology for 3D geo-location of UAV images based on a database matching technique.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-1/W5, 2015
International Conference on Sensors & Models in Remote Sensing & Photogrammetry, 23–25 Nov 2015, Kish Island, Iran
This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XL-1-W5-269-2015
The rest of the paper is organized as follows. Section 2 presents the proposed method for geo-locating UAV imagery, as well as for providing the fine exterior orientation of the sensor while acquiring these image frames. Experimental results for various image frames in different parts of the study area are provided in Section 3. Conclusions and discussion are presented in Section 4.
2. PROPOSED METHOD
This paper presents a precise, rigorous-model-based methodology for 3D geo-location of UAV images using geo-referenced data. The procedure uses a database matching technique to provide Virtual Control Points (VCPs) in the coverage area of each frame. The initial Exterior Orientation Parameters (EOPs), together with the positional information of the VCPs provided for each frame, are then adjusted through a weighted least-squares resection process. Finally, using the obtained fine EOPs of each frame, it is possible to geo-locate the entire image frame following a rigorous model (collinearity equations) in a backward scheme. If the ground location of specific image targets (and not the entire frame) is required, it can be obtained through a forward geo-location scheme. In this case, a repetitive ray-DSM intersection method is needed. Considering the divergence conditions of the common method for solving this problem (see section 2.3.1), especially in the case of UAV imagery, we use a method that prevents these divergence cases. The main stages of this geo-location process are as follows:
i. Extract features and descriptors from the reference image
ii. Coarse geo-locate forward
iii. Image resection using the LS technique
iv. Fine geo-locate forward
v. Fine geo-locate inverse
2.1 Extract features and descriptors from reference image
In the first stage, Scale Invariant Feature Transform (SIFT) descriptors (Lowe, 2004) are derived from the geo-referenced image and stored as part of our database. This process is time-consuming, so it is done once at the beginning of the procedure and the results are stored in the database for subsequent use. The remaining stages are performed repeatedly for each acquired image frame.
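The extract-once, reuse-many pattern of this stage can be sketched as follows. This is a minimal sketch: the file name, array layout, and function names are illustrative assumptions, and the SIFT extraction itself (e.g. with OpenCV's cv2.SIFT_create().detectAndCompute) is assumed to have happened upstream.

```python
import numpy as np

def store_reference_features(pixel_coords, descriptors, path):
    """Cache reference keypoint pixel coordinates (N x 2) and SIFT
    descriptors (N x 128) so that extraction runs only once per database."""
    np.savez(path,
             coords=np.asarray(pixel_coords, dtype=np.float64),
             descriptors=np.asarray(descriptors, dtype=np.float32))

def load_reference_features(path):
    """Reload the cached arrays for matching against each new frame."""
    data = np.load(path)
    return data["coords"], data["descriptors"]
```

Storing coordinates and descriptors as plain arrays (rather than keypoint objects) keeps the database format simple and fast to reload for every incoming frame.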
2.2 Coarse geo-locate forward
For each image frame, the coarse geo-location of its borders is determined using the GPS/IMU parameters extracted from the navigation data in a forward geo-referencing process. This procedure is equivalent to the forward projection step in image orthorectification techniques based on a forward projection scheme. For each image corner, the light ray passing through the camera's projection center and that corner is intersected with the three-dimensional ground surface defined by the DSM, yielding the position of the corresponding corner in ground space.
Even though the EOPs of the image and the DSM are available, this process is not straightforward because of the mutual dependency of the horizontal and vertical ground coordinates: computation of the horizontal ground coordinates depends on the vertical coordinate, and the vertical coordinate read from the DSM clearly depends on the horizontal coordinates. As a consequence, the translation from 2D image space to 3D object space requires a repetitive computation scheme as described in (Bang et al., 2007). This method is commonly used in the orthorectification of satellite imagery following the forward projection approach. However, as described in section 2.3, this method carries a divergence risk, so we will suggest another scheme for solving this problem in that section.
In the next step of the geo-location procedure, the ground locations of the four image corners obtained in the forward geo-location step, extended by a confidence margin area, are used to extract the candidate reference descriptors already available in the database.
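As a concrete illustration, the forward projection of an image point onto a horizontal plane at an assumed mean terrain height (the simplification discussed in section 2.3 for the coarse corner locations) can be sketched as follows. The omega-phi-kappa rotation convention and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation mapping camera coordinates to the ground frame
    (omega-phi-kappa convention, angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def forward_project(x, y, f, eo, ground_z):
    """Intersect the ray through image point (x, y) with the plane Z = ground_z.

    eo = (X0, Y0, Z0, omega, phi, kappa) is the coarse exterior orientation
    from the navigation data.  Using a single mean terrain height in place of
    the DSM gives the coarse corner locations of section 2.2.
    """
    X0, Y0, Z0, omega, phi, kappa = eo
    # Ray direction in ground coordinates for image point (x, y, -f).
    d = rotation_matrix(omega, phi, kappa) @ np.array([x, y, -f])
    s = (ground_z - Z0) / d[2]          # scale along the ray
    return X0 + s * d[0], Y0 + s * d[1]
```

For a nadir view (all angles zero) the principal point projects directly below the camera, and off-axis points are displaced in proportion to flying height over focal length.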
2.3 Fine geo-locate forward
At this point, SIFT feature descriptors are extracted from the UAV image frame and matched against the candidate reference descriptors extracted in the previous step. After removing potential outliers, if at least three matched points are available, it is possible to refine the camera parameters using these points as virtual control points, whose vertical positions are simply read from the DSM available in the database. For this purpose, the VCP information (image and ground positions) as well as the GPS/IMU data are integrated in a combined weighted least-squares adjustment that solves the resection problem and yields adjusted exterior orientation parameters of the camera. Weights are obtained from the predicted accuracy of the telemetry data and the positional accuracy of the VCPs, which is estimated from the accuracy of the reference database and the matching procedure. Accurate 3D geo-location of any object visible in the image can then be obtained using the refined camera parameters in a forward geo-referencing process.
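The weighted least-squares update at the core of this combined adjustment can be sketched generically in the Gauss-Markov form x = (AᵀWA)⁻¹AᵀWl. Here A, l and w are illustrative stand-ins for the linearised collinearity/GPS-IMU design matrix, the misclosure vector, and the observation weights; the actual resection iterates this update on the linearised collinearity equations.

```python
import numpy as np

def weighted_least_squares(A, l, w):
    """Solve A x ~ l with per-observation weights w (Gauss-Markov model).

    In the resection of section 2.3, A would hold the linearised collinearity
    partial derivatives plus pseudo-observations of the GPS/IMU parameters,
    l the misclosures, and w weights derived from the telemetry and VCP
    accuracies.  This generic solver sketches one update step.
    """
    A = np.asarray(A, float)
    l = np.asarray(l, float)
    W = np.diag(w)
    N = A.T @ W @ A                      # normal matrix
    x = np.linalg.solve(N, A.T @ W @ l)  # weighted parameter estimate
    residuals = l - A @ x
    return x, residuals
```

With inconsistent observations, the weights decide the compromise: an observation with three times the weight pulls the estimate three times as hard, which is exactly how accurate VCPs dominate noisy navigation data in the combined adjustment.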
It should be noted that for obtaining the coarse coordinates of the image borders in ground space, one can neglect the topography of the ground surface and simply assume a mean height for the area, thereby avoiding the repetitive computations needed for the ray-DSM intersection procedure. This strategy is reasonable because a confidence margin is considered around the obtained area. But whenever geo-locating a single target on the image is the purpose, and geo-locating the whole scene is not required, the repetitive ray-DSM intersection procedure must be followed in order to prevent displacement due to altitude differences. So, considering the divergence risk of the common method, we use a different method for solving the ray-DSM intersection problem, described in the next section.
2.3.1 Ray-DSM intersection: Figure 1 (a) illustrates the conventional method for solving the iterative ray-DSM intersection procedure. As can be seen in Figure 1 (a-c), this process converges only when the slope of the light ray from the perspective center is greater than the slope of the ground surface in the intersection area. Although this condition is common with manned aerial and especially satellite imagery, UAV platforms generally fly at low altitudes and may also capture imagery at highly oblique attitudes, so cases (b) and (c) in Figure 1 may be common for this type of platform, resulting in divergence when the traditional ray-DSM intersection technique is used.
For these divergence cases we use a technique similar to the bisection method for finding roots of nonlinear functions in numerical analysis. The bisection method, as its name indicates, uses successive bisections of an interval around the root of the function f(x) (Figure 2. a), so it is enough to find two starting points with different signs in order to find the root.
Figure 1. (a) Common method for solving the ray-DSM intersection problem; (b and c) its convergence problems
Figure 2. (a) Bisection method; (b) bisection-based ray-DSM intersection method; (c) convergence process of the method
The similarity of the root-finding concept to the ray-DSM intersection problem can be seen by comparing the two images depicted in Figure 2 (a) and (b). By considering the light ray as the x-axis, and the ground surface as the function whose root (i.e. intersection with the x-axis) must be found, the equivalency of the two concepts becomes clear. As depicted in Figure 2 (b), the common characteristic of all points on each side of the light ray is that the Z differences obtained for them from the collinearity equations and from the DSM have the same sign. The first two starting points are obtained from the first two repetitions of the common method (as illustrated in Figure 1 (a), these points have different signs). Then the coordinates of a third point are calculated by averaging the coordinates of these two points. For the next repetition, the third point replaces the first or the second point according to its position with respect to the intersection point. Then, using the new first and second points, the explained steps are repeated. This procedure continues until the difference between the Z value calculated from the collinearity equations and the Z value interpolated from the DSM becomes negligible (Figure 2. c).
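A sketch of this bisection-based intersection, parameterised by distance along the ray, is given below. All names are illustrative: `dsm_height` stands in for DSM interpolation, and the two bracketing distances are assumed to come from the first two iterations of the common scheme (whose Z differences have opposite signs).

```python
import numpy as np

def ray_dsm_bisection(origin, direction, dsm_height, s_lo, s_hi,
                      tol=0.01, max_iter=60):
    """Bisection-based ray-DSM intersection (idea of section 2.3.1).

    origin, direction: the ray from the camera perspective center.
    dsm_height(X, Y):  callable returning the terrain height (stand-in for
                       DSM interpolation).
    s_lo, s_hi:        two distances along the ray whose ray-minus-terrain
                       height differences have opposite signs.
    Returns the 3D intersection point on the terrain.
    """
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)

    def height_diff(s):
        p = origin + s * direction
        return p[2] - dsm_height(p[0], p[1])   # ray Z minus DSM Z

    f_lo = height_diff(s_lo)
    for _ in range(max_iter):
        s_mid = 0.5 * (s_lo + s_hi)
        f_mid = height_diff(s_mid)
        if abs(f_mid) < tol:
            break
        # Keep the sub-interval whose endpoints still bracket the surface.
        if f_lo * f_mid > 0:
            s_lo, f_lo = s_mid, f_mid
        else:
            s_hi = s_mid
    return origin + s_mid * direction
```

Because the interval is halved each repetition, the method converges regardless of the relative slopes of the ray and the terrain, which is exactly the failure mode of the common scheme in Figure 1 (b) and (c).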
2.4 Fine geo-locate inverse
The availability of accurate camera parameters, as well as altitudinal information from the DEM data, makes it possible to geo-reference the whole UAV image frame at different ground sampling distances (GSD) in a backward geo-referencing process. In backward projection, each pixel in the geo-referenced image takes its pixel value from the UAV image using the collinearity condition and the ground space coordinates X, Y, and Z of the corresponding DSM cell. These geo-referenced images can then be used to produce a wider mosaic of the study area.
3. EXPERIMENTS AND RESULTS
Performance analysis of the proposed geo-location procedure has been carried out on data acquired during a planned flight over an area with varied topography (Figure 3). Data collection was performed using a multirotor UAV platform (Figure 4. a) flown in fully autonomous mode at a mean altitude of 400 meters above the ground. The imaging camera is a Sony NEX-5R digital camera (Figure 4. b) equipped with a 16 mm lens, which acquired still images during the flight.
Figure 3. Planned flight path over a mountainous area
Figure 4. (a) Mini-UAV Quad-Copter; (b) Sony NEX-5R digital
camera
In this research we used DigitalGlobe satellite imagery derived from Google Earth and the SRTM 90 m DEM, which are freely and globally available, as the geo-referenced database.
Accuracy assessment was performed visually and statistically using geotagged images acquired at different locations of the study area. Figure 5 illustrates the results of the geo-location process for some of the image frames over different parts of the area. The results of the matching process for one example of these
images is shown in Figure 6. Because of the different distortion sources of both the UAV image frames and the reference image acquired from the Google Earth database on the one hand, and the absence of distinctive feature points as well as the presence of repetitive patterns in natural scene imagery on the other hand, the matched feature points are few and poorly distributed. In some image frames the matching procedure even failed for the above-mentioned reasons. Figure 7 presents some examples of matched pairs. Figure 8 depicts the geo-located image boundaries resulting from the coarse (red) and fine (green) geo-location stages for the example frame shown in Figure 6.
For visual comparison, fully geo-referenced images were produced from the UAV image frames and then overlaid on the reference data (Figure 9). Closer coincidence indicates better performance of the intended geo-location process. Figure 9 shows the resulting geo-referenced images using the initial coarse as well as the fine EOPs for the selected image frame shown in Figure 6, from different viewpoints. As can be seen, the EOPs from the proposed geo-location process produced much better results than the initial coarse EOPs extracted from the navigation data.
Figure 5. The results of the geo-location process for some of the image frames over different parts of the area
Figure 6. The results of the matching process for a selected UAV image frame against the database; left: UAV image frame, right: reference image
Figure 7. Some examples of matched pairs; top: image frame, bottom: reference image
Figure 8. Geo-located image boundaries resulting from the coarse (red) and fine (green) geo-location stages for four example frames shown in Figure 5
Figure 9. Geo-referenced image frames overlaid on the (a) reference image, resulting from the (b) initial coarse and (c) fine EOPs
In order to provide a context for statistical analysis, nine distinct point features with a proper distribution over the sample geo-located image frames were measured. These frames were obtained by applying the coarse and fine geo-location procedures to one sample UAV image frame. The calculated locations of these points were then compared with their reference locations obtained from the database. Horizontal residual vectors of these points are depicted in Figures 10 and 11 for the coarse and fine geo-location procedures, respectively. As can be seen, the residuals after applying the fine geo-location process are reduced considerably.
The resulting differences were then used to extract statistical parameters indicating the performance of the intended process. The statistical parameters Root Mean Square Error (RMSE), Mean Absolute Error (MAE), minimum (Min), and maximum (Max) were calculated for the X, Y, and Z coordinate components, as well as for the horizontal (2D) and three-dimensional (3D) coordinates. These parameters are given in Tables 1 and 2 for the coarse and fine geo-location procedures, respectively. As can be seen, applying the fine geo-location process has improved the positional accuracy by on the order of 100 m. For example, after applying the fine geo-location process the RMSE values for the horizontal and 3D locations are 14.349 m and 14.476 m respectively, compared with 114.765 m and 119.605 m for the coarse geo-location process.
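The statistics of Tables 1 and 2 can be reproduced from check-point residuals as follows. This is a sketch of the computation only; the residual values in the test below are synthetic illustrations, not the paper's measurements.

```python
import numpy as np

def geolocation_statistics(dx, dy, dz):
    """RMSE, MAE, Min and Max of per-axis, 2D and 3D check-point residuals,
    mirroring the row/column layout of Tables 1 and 2."""
    dx, dy, dz = (np.abs(np.asarray(a, float)) for a in (dx, dy, dz))
    d2d = np.hypot(dx, dy)                    # horizontal residual magnitude
    d3d = np.sqrt(dx**2 + dy**2 + dz**2)      # 3D residual magnitude
    stats = {}
    for name, d in zip(("dX", "dY", "dZ", "d2D", "d3D"),
                       (dx, dy, dz, d2d, d3d)):
        stats[name] = {"RMSE": float(np.sqrt(np.mean(d**2))),
                       "MAE": float(np.mean(d)),
                       "Min": float(d.min()),
                       "Max": float(d.max())}
    return stats
```

By construction RMSE is never smaller than MAE for each column, a quick sanity check that also holds for the values in Tables 1 and 2.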
Figure 10. Residual vectors obtained by differencing control
point positions in the case of geo-located image resulted from
coarse geo-location process
Figure 11. Residual vectors obtained by differencing control
point positions in the case of geo-located image resulted from
fine geo-location process
        dX (m)    dY (m)   dZ (m)   d2D (m)   d3D (m)
RMSE    112.918   20.507   33.678   114.765   119.605
MAE     112.778   20.111   28.889   114.608   119.434
Min     103       14       3        103.947   104.757
Max     123       26       47       125.718   127
Table 1. Statistical parameters in the case of applying the coarse geo-location process
        dX (m)   dY (m)   dZ (m)   d2D (m)   d3D (m)
RMSE    8.557    11.518   1.915    14.349    14.476
MAE     6.333    9.111    1.444    12.236    12.368
Min     2        1        0        3.606     3.606
Max     21       23       4        23.706    24.042
Table 2. Statistical parameters in the case of applying the fine geo-location process
The results indicate that the proposed method can improve the geo-location accuracy considerably. Using this method, geo-location of UAV images is performed with an accuracy comparable to that of the geo-referenced database used. The experimental results demonstrate the potential of the proposed method for accurate geo-location of UAV images.
It should be noted that although the results of geo-locating the example frames presented here are satisfactory, these frames have common characteristics that simplify their geo-location with the proposed method. First, they contain sufficient distinct feature points to facilitate the matching process. Second, they are all near-vertical image frames, so their orientation parameters are all near zero, i.e. the initial orientation parameters are already close to the true values. In natural environments it is common for image frames not to have sufficient distinctive feature points; in such cases one must use alternative robust features, such as structural ones, in the matching stage. It is also common in UAV data acquisition to have highly oblique image frames, whose geo-location needs accurate feature points with good dispersion over the frame.
4. CONCLUSION
In this paper we proposed a procedure for 3D geo-location of UAV image frames using a geo-referenced database consisting of geo-referenced imagery and a DSM.
The experimental results demonstrate the potential of the proposed method for accurate geo-location of UAV images when they contain a sufficient number and dispersion of feature points. The results indicate that the proposed method can improve the geo-location accuracy considerably; geo-location of UAV images is performed with an accuracy comparable to that of the geo-referenced database used.
However, in cases without sufficient feature points the matching process will fail, or, even if it does not, an erratic dispersion of feature points in the image will prevent an accurate solution of the attitude parameters. Developing more robust matching strategies would be an interesting issue for future research.
REFERENCES
Arun, A., Yadav, M. M., Latha, K., & Kumar, K. S., 2012. Self
directed unmanned aerial vehicle for target geo-localization
system. In Computing, Electronics and Electrical Technologies
(ICCEET), 2012 International Conference on (pp. 984-990).
IEEE.
Bang, K. I., Habib, A. F., Kim, C., & Shin, S., 2007.
Comprehensive analysis of alternative methodologies for true
ortho-photo generation from high resolution satellite and aerial
imagery. In American Society for Photogrammetry and Remote
Sensing, Annual Conference, Tampa, Florida, USA, May, pp. 7-
11.
Barber, D. B., Redding, J. D., McLain, T. W., Beard, R. W., &
Taylor, C. N., 2006. Vision-based target geo-location using a
fixed-wing miniature air vehicle. Journal of Intelligent and
Robotic Systems, 47(4), 361-382.
Bollard-Breen, B., Brooks, J. D., Jones, M. R., Robertson, J.,
Betschart, S., Kung, O., ... & Pointing, S. B., 2015. Application
of an unmanned aerial vehicle in spatial mapping of terrestrial
biology and human disturbance in the McMurdo Dry Valleys,
East Antarctica. Polar Biology, 38(4), 573-578.
Heintz, F., Rudol, P., & Doherty, P., 2007. From images to traffic
behavior-a uav tracking and monitoring application.
In Information Fusion, 2007 10th International Conference
on (pp. 1-8). IEEE.
Kumar, R., Samarasekera, S., Hsu, S., & Hanna, K., 2000.
Registration of highly-oblique and zoomed in aerial video to
reference imagery. In Pattern Recognition, 2000. Proceedings.
15th International Conference on (Vol. 4, pp. 303-307). IEEE.
Kushwaha, D., Janagam, S., & Trivedi, N., 2014. Compute-
Efficient Geo-Localization of Targets from UAV Videos: Real-
Time Processing in Unknown Territory. International Journal of
Applied Geospatial Research (IJAGR), 5(3), 36-48.
Lowe, D. G., 2004. Distinctive image features from scale-
invariant keypoints. International journal of computer
vision, 60(2), 91-110.
Neitzel, F., & Klonowski, J., 2011. Mobile 3D mapping with a
low-cost UAV system. Int. Arch. Photogramm. Remote Sens.
Spat. Inf. Sci, 38, 1-6.
Nex, F., & Remondino, F., 2014. UAV for 3D mapping
applications: a review. Applied Geomatics, 6(1), 1-15.
Rango, A., Laliberte, A., Steele, C., Herrick, J. E., Bestelmeyer,
B., Schmugge, T., ... & Jenkins, V., 2006. Using unmanned aerial
vehicles for rangelands: current applications and future
potentials. Environmental Practice, 8(03), 159-168.
Remondino, F., Barazzetti, L., Nex, F., Scaioni, M., & Sarazzi,
D., 2011. UAV photogrammetry for mapping and 3d modeling
current status and future perspectives. International Archives of
the Photogrammetry, Remote Sensing and Spatial Information
Sciences, 38(1), C22.
Saari, H., Pellikka, I., Pesonen, L., Tuominen, S., Heikkilä, J.,
Holmlund, C., ... & Antila, T., 2011. Unmanned Aerial Vehicle
(UAV) operated spectral camera system for forest and agriculture
applications. In SPIE Remote Sensing (pp. 81740H-81740H).
International Society for Optics and Photonics.
Semsch, E., Jakob, M., Pavlíček, D., & Pěchouček, M., 2009.
Autonomous UAV surveillance in complex urban environments.
In Web Intelligence and Intelligent Agent Technologies, 2009.
WI-IAT'09. IEEE/WIC/ACM International Joint Conferences
on (Vol. 2, pp. 82-85). IET.
Wischounig-Strucl, D., & Rinner, B., 2015. Resource aware and
incremental mosaics of wide areas from small-scale
UAVs. Machine Vision and Applications, 1-20.
... Other researches present alternative methods for georeferencing images, by combining techniques such as feature matching algorithms and direct georeferencing. This is discussed in [4]. This paper aims to propose and implement a georeferencing algorithm capable of georeferencing high-altitude aerial images, providing information about the location of wildres to reghting teams operating on the terrain. ...
Chapter
High-altitude balloons (HAB), allied with flying-wing unmanned aerial vehicles (UAV), may play an important role in fire monitoring. Due to their aerostatic lift, a HAB may effortlessly carry an UAV to reach higher altitudes and therefore survey a wider area. Considering high-altitude UAV acquired imagery, this work presents a direct georeferencing method based on the geolocation algorithm, that consists on computing the pose of the camera with respect to the ground followed by the mapping between a 3D point and a 2D image pixel using the projection equation. Real-flight data covering diverse situations is used for evaluating the algorithm performance. The complementary filter is used on the measurements from the payload sensors to compute the necessary parameters for the direct georeferencing.KeywordsDirect georeferencingAerial imageryHigh altitude balloon
... The proposed method achieved a RMSE of 2.25 m while the intersection method best result was a RMSE of 22 m. Hamidi and Samadzadegan [29] propose the IPG algorithm combined with EP refinement using feature matching with georeferenced imagery. The DEM used was the Shuttle Radar Topography Mission [30], with a spatial resolution of 90 m. ...
Article
Full-text available
Although aerial vehicle images are a viable tool for observing large-scale patterns of fires and their impacts, their application is limited by the complex optical georeferencing procedure, owing to the lack of distinctive visual features in forest environments. For this reason, an exploratory study on rough and flat terrains was conducted to use and validate the Iterative Ray-Tracing method in combination with a Bearings-Range Extended Kalman Filter as a real-time forest fire georeferencing and filtering algorithm for images captured by an aerial vehicle. The Iterative Ray-Tracing method requires a vehicle equipped with a Global Positioning System (GPS), an Inertial Measurement Unit (IMU), a calibrated camera, and a Digital Elevation Map (DEM). The proposed method receives real-time input from the GPS and IMU together with the image coordinates of the pixels to georeference (computed by a companion fire-front detection algorithm), and outputs the geographical coordinates corresponding to those pixels. The Unscented Transform B is proposed to characterize the Iterative Ray-Tracing uncertainty. A Bearings-Range filter measurement model is introduced in a sequential filtering architecture to reduce the noise in the measurements, assuming static targets. A performance comparison is made between the Bearings-Only and Bearings-Range observation models, and between the Extended and Cubature Kalman Filters. In simulation studies with ground truth, without filtering we obtained georeferencing Root Mean Squared Errors (RMSE) of 30.7 and 43.4 m for the rough and flat terrains, respectively, while filtering with the proposed Bearings-Range Extended Kalman Filter showed the best results, reducing the RMSE to 11.7 and 19.8 m, respectively. In addition, the comparison of both filter algorithms showed good performance of the Bearings-Range filter, which was also slightly faster.
These experiments based on real data demonstrated the applicability of the proposed methodology for real-time georeferencing of forest fires.
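The core of an iterative ray-tracing step like the one described above can be sketched as a ray-marching search over a DEM. The sketch below is a minimal illustration, not the cited implementation: `dem_height` is a synthetic terrain stand-in, and the step size, search range, and camera values are assumptions for the example.

```python
import numpy as np

def dem_height(x, y):
    # Synthetic DEM for illustration: gentle rolling terrain (metres).
    return 100.0 + 10.0 * np.sin(x / 200.0) * np.cos(y / 200.0)

def ray_dem_intersect(cam_pos, ray_dir, step=1.0, max_range=5000.0):
    """March along the view ray until it passes below the terrain,
    then refine the crossing point by bisection."""
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    t_prev = 0.0
    for t in np.arange(step, max_range, step):
        p = cam_pos + t * d
        if p[2] <= dem_height(p[0], p[1]):   # ray dipped below terrain
            lo, hi = t_prev, t
            for _ in range(30):               # bisection refinement
                mid = 0.5 * (lo + hi)
                pm = cam_pos + mid * d
                if pm[2] <= dem_height(pm[0], pm[1]):
                    hi = mid
                else:
                    lo = mid
            return cam_pos + hi * d
        t_prev = t
    return None                               # no intersection within range

cam = np.array([0.0, 0.0, 600.0])             # UAV 600 m above the datum
ray = np.array([0.3, 0.1, -1.0])              # oblique, downward-looking ray
ground = ray_dem_intersect(cam, ray)
```

A real implementation would sample the DEM grid (e.g. SRTM cells) with interpolation rather than an analytic surface, and would propagate the GPS/IMU uncertainty through the intersection as the cited paper does.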
... In photogrammetry and computer vision, the problem of spatial resection involves the determination of the spatial position and attitude of a camera or the image taken with that camera, with respect to the object space coordinate system [9]. It is solved with the aid of known coordinates of ground control points on the earth surface whose features appear on the image [10]. ...
... Others sought faster methods by introducing the digital terrain model (DTM). The aerial images were corrected, and the object location was calculated by matching the DTM data [26], [27]. The accuracy of the results was significantly affected by the reference data, and the ground reference points were shown to be indispensable. ...
Article
Full-text available
Unmanned aerial vehicles (UAVs) have been widely used in urban traffic supervision in recent years. However, the detection, tracking and geolocation of moving vehicles from an airborne platform suffer from small object sizes, complex scenes and low-accuracy sensors. To address these problems, this paper develops a framework for detecting, tracking and geolocating moving vehicles based on a monocular camera, a GPS receiver and inertial measurement unit (IMU) sensors. First, a method based on YOLOv3 is employed for vehicle detection owing to its effectiveness and efficiency for small-object detection in complex scenes. Then, a visual tracking method based on correlation filters is introduced, and a passive geolocation method is presented to calculate the GPS coordinates of the moving vehicle. Finally, a flight control method driven by the preceding image processing results is introduced to guide the UAV in following the moving vehicle of interest. The proposed scheme has been built on a DJI M100 platform to which a monocular camera and a Jetson TX1 microcomputer are added. The experimental results show that this scheme is capable of detecting, tracking and geolocating the moving vehicle of interest with high precision. The framework demonstrates its capacity for automatic supervision of target vehicles in real-world experiments, which suggests potential applications in urban traffic, logistics, and security.
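The passive geolocation step in such a pipeline is essentially a ray-to-ground intersection from the camera pose. Below is a minimal flat-ground sketch under assumed conventions (ZYX Euler angles, the optical axis along the camera −z axis, pixel offsets measured from the principal point in the same units as the focal length); it is an illustration, not the paper's implementation, and all numeric values are invented.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Body-to-world rotation from Euler angles (radians), ZYX order."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def geolocate_pixel(cam_pos, yaw, pitch, roll, px, py, f, ground_z=0.0):
    """Intersect the pixel's view ray with the flat plane z = ground_z."""
    ray_cam = np.array([px, py, -f], float)        # ray in the camera frame
    ray_w = rotation_matrix(yaw, pitch, roll) @ ray_cam
    if ray_w[2] >= 0:
        return None                                # ray never reaches the ground
    t = (ground_z - cam_pos[2]) / ray_w[2]
    return cam_pos + t * ray_w

cam = np.array([100.0, 200.0, 500.0])              # UAV at 500 m altitude
nadir_pt = geolocate_pixel(cam, 0.0, 0.0, 0.0, 0.0, 0.0, f=1000.0)
off_pt = geolocate_pixel(cam, 0.0, 0.0, 0.0, 1000.0, 0.0, f=1000.0)
```

With zero attitude and the pixel at the principal point, the ray is vertical and the ground point lies directly below the camera; replacing the flat plane with a DEM intersection yields the backward geo-location scheme of the head paper.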
... Reusing existing oriented images in a second photogrammetric project improves the efficiency of geospatial product generation. M. Hamidi and F. Samadzadegan [Hamidi and Samadzadegan, 2015] used referenced images and a DTM to orient images from a UAV. However, their method assumes that the images serving as control have coarser resolution than the referenced aerial images. ...
Article
Full-text available
Aim. Determining the elements of exterior orientation of surveying systems at the moment of image acquisition is a fundamental task in photogrammetry. In principle, this problem is solved in two ways. The first is direct positioning and measurement of the camera optical axis direction in geodetic space with GNSS/INS equipment. The second is an analytical solution using a set of reference information (often a set of ground control points whose geodetic positions are known with sufficient accuracy and which are reliably recognised on the aerial images of the photogrammetric block). The authors consider the task of providing reference and control information using the second approach, which has a number of advantages in terms of the reliability and accuracy of determining the unknown exterior orientation parameters of images. It is proposed to obtain additional images of ground control points by auxiliary aerial photography with an unmanned aerial vehicle (UAV) at a larger scale than that of the images of the photogrammetric block. The aim of the presented work is to implement this method of creating reference points and to confirm its effectiveness for photogrammetric processing experimentally. Methods and results. To fully realize the potential of the analytical approach to determining the exterior orientation of images, a certain number of ground control points (GCPs) is required, arranged according to a defined scheme across the photogrammetric block. As the main source of input data, the authors use UAV aerial images of the terrain, obtained separately from the aerial survey block, which have better geometric resolution and clearly depict the control reference points.
Such auxiliary images make it possible to transfer the position of a ground control point into the images of the main photogrammetric block automatically. In our interpretation, these images of ground control points and their surroundings on the ground are called "control reference images". The core of the work is a method for obtaining the auxiliary control reference images and transferring the positions of the GCPs depicted on them into aerial or space images of the terrain by means of computer stereo matching. To this end, we developed a processing method that creates control reference images from an aerial image or from a series of auxiliary multi-scale aerial images obtained by a drone from different heights above the reference point. The operator identifies and measures the GCP once, on the auxiliary aerial image of highest resolution. The control reference image is then automatically stereo-matched through the whole series of auxiliary images in succession, with decreasing resolution, and ultimately directly with the aerial images of the photogrammetric block. At this stage no recognition or cursor targeting by a human operator is needed, so the discrepancies and errors associated with it are eliminated. In addition, if fairly large control reference images are used, the proposed method can be applied to low-texture terrain and therefore, in many cases, dispenses with the physical marking of points measured by GNSS. This is a way to simplify and reduce the cost of photogrammetric technology. The developed method has been verified experimentally by providing control reference information for a block of archival aerial images of low-texture terrain.
The results of the experimental testing of the proposed method give grounds to assert that it allows geodetic referencing of photogrammetric projects to be performed more efficiently by eliminating the physical marking of the area before the aerial survey. The proposed method can also be used to obtain information for checking the quality of a photogrammetric survey through the provision of check points. The authors argue that using additional equipment, a semi-professional-class UAV, to obtain control reference images is economically feasible. Scientific novelty and practical relevance. The results of testing the "control reference image" method with stereo pairs of aerial images having a vertically placed base are presented for the first time. The properties of such stereo pairs for obtaining images of reference points were studied. The effectiveness of including reference images in the main block of the digital aerial triangulation network created from UAV images is shown.
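The automatic matching of a small control reference image against a larger frame, as described above, can be illustrated with plain normalized cross-correlation (NCC). This is a simplified, exhaustive sketch on synthetic data; a real pipeline would use pyramid (coarse-to-fine) search and subpixel refinement, and all names and sizes here are assumptions for the example.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive NCC search; returns ((row, col), score) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r+th, c:c+tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0:
                continue                      # flat window, correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(0)
img = rng.random((60, 60))                    # stand-in for an aerial frame
tmpl = img[20:30, 35:45].copy()               # the "control reference image" patch
loc, score = ncc_match(img, tmpl)
```

Because the template is cut directly from the frame, the search recovers its true location with a correlation score near 1.0; with real multi-scale imagery the score degrades gracefully with resolution differences.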
Chapter
Geographic information products of UAVs have developed from a single direction into multiple directions. Obtaining high-quality pose parameters for the images is the key to ensuring product accuracy. In terms of both production quality and efficiency, the development of RTK-UAVs has gradually become a focus of public attention. Because these UAVs are mostly used for navigation, real-time correction of the camera coordinates is not considered, and the accuracy of the attitude parameters obtained by the inertial navigation unit of a micro/mini UAV is low, so a small number of GCPs still need to be laid out to meet the accuracy requirements of surveying and mapping products. This paper studied a new PPK method. Taking the BD930 GNSS module as an example, the UAV system was modified with a configuration of three non-collinear antennas. A carrier-phase double-difference model combining GPS, BDS and GLONASS was constructed, and the baseline vectors were then determined with the LAMBDA method. The systematic errors caused by the fixed sampling frequency, camera exposure delay and position offset were eliminated by a cubic spline GNSS interpolation algorithm and eccentricity measurement for space resection. Taking the corrected camera positions and the calculated attitude parameters as initial values, GNSS-assisted self-calibrating bundle block adjustment was introduced to obtain high-precision pose and distortion parameters for the UAV images. A flight test in a hilly area showed that the accuracy of the differential GNSS system could reach 0.01 m. Under dynamic flight, the fixed-solution ratio was 58.7% higher than that of the traditional method. Without any GCPs, the error at the check points (CPs) was less than 0.3 m, and with four corner GCPs the median error at the CPs was less than 0.2 m.
This confirms that the position and distortion parameters obtained in this paper are correct and meet the accuracy requirements of geographic information products.
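The GNSS interpolation step above — recovering the antenna position at each camera exposure epoch between discrete GNSS fixes — can be sketched with a natural cubic spline. This is an illustrative stand-in (I read the abstract's spline interpolation as a cubic spline; the timestamps and east coordinates below are invented), implemented with NumPy only:

```python
import numpy as np

def cubic_spline_eval(t, y, tq):
    """Natural cubic spline through knots (t, y), evaluated at query times tq."""
    t = np.asarray(t, float); y = np.asarray(y, float)
    tq = np.atleast_1d(np.asarray(tq, float))
    n = len(t)
    h = np.diff(t)
    # Solve the tridiagonal system for second derivatives M (natural ends: M=0).
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[n-1, n-1] = 1.0
    for i in range(1, n - 1):
        A[i, i-1] = h[i-1]
        A[i, i] = 2.0 * (h[i-1] + h[i])
        A[i, i+1] = h[i]
        b[i] = 6.0 * ((y[i+1] - y[i]) / h[i] - (y[i] - y[i-1]) / h[i-1])
    M = np.linalg.solve(A, b)
    out = np.empty(len(tq))
    for k, x in enumerate(tq):
        i = int(np.clip(np.searchsorted(t, x) - 1, 0, n - 2))
        dt = t[i+1] - t[i]
        a = (t[i+1] - x) / dt
        bq = (x - t[i]) / dt
        out[k] = (a * y[i] + bq * y[i+1]
                  + ((a**3 - a) * M[i] + (bq**3 - bq) * M[i+1]) * dt * dt / 6.0)
    return out

t_gnss = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # 1 Hz GNSS epochs (s)
east = np.array([0.0, 8.0, 15.0, 21.0, 26.0, 30.0])  # assumed east track (m)
t_expo = np.array([2.35, 3.70])                      # camera exposure epochs (s)
east_at_expo = cubic_spline_eval(t_gnss, east, t_expo)
```

The same one-dimensional interpolation is applied per coordinate axis; a production PPK workflow would also correct the exposure delay before interpolating.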
Article
Full-text available
The McMurdo Dry Valleys of Antarctica are a unique yet threatened polar biome. Cyanobacterial mats form a large part of the standing biomass in the McMurdo Dry Valleys and are therefore an indicator of ecosystem productivity and health. They are, however, patchily distributed, and this has hampered spatial ecology studies due to the logistical challenges of ground-based field sampling. Here, we report the application of remote sensing using a fixed-wing unmanned aerial vehicle (UAV) and GIS spatial mapping to identify cyanobacterial mats, estimate their extent and discriminate between different mat types. Using the Spalding Pond area of Taylor Valley as a test site, we were able to identify mats on soil surfaces within the hyporheic zone, as well as benthic mats below the water surface. The mapping also clearly identified the footprint of campsites and walking trails on soils, and we highlight the potential of this technique in monitoring human impact in this fragile ecosystem.
Conference Paper
Full-text available
In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented program for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities for georeferencing are described, and the achieved accuracy has been determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived with a low-cost UAV.
Article
Full-text available
Unmanned aerial vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping, and 3D modeling issues. As UAVs can be considered as a low-cost alternative to the classical manned aerial photogrammetry, new applications in the short- and close-range domain are introduced. Rotary or fixed-wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semiautomated, and autonomous modes. Following a typical photogrammetric workflow, 3D results like digital surface or terrain models, contours, textured 3D models, vector information, etc. can be produced, even on large areas. The paper reports the state of the art of UAV for geomatics applications, giving an overview of different UAV platforms, applications, and case studies, showing also the latest developments of UAV image processing. New perspectives are also addressed.
Article
Full-text available
VTT Technical Research Centre of Finland has developed a Fabry-Perot interferometer (FPI) based hyperspectral imager compatible with lightweight UAV platforms. The concept of the hyperspectral imager has been published in SPIE Proc. 7474 and 7668. In forest and agriculture applications, recording multispectral images at a few wavelength bands is in most cases adequate. The possibility of calculating a digital elevation model of the forest area and crop fields provides a means to estimate biomass and perform forest inventory. The full UAS multispectral imaging system will consist of a high-resolution false-color imager and an FPI-based hyperspectral imager that can be used at resolutions from VGA (480 x 640 pixels) up to 5 Mpix in the wavelength range 500 - 900 nm, with user-selectable spectral resolutions in the range 10...40 nm @ FWHM. The resolution is determined by the order at which the Fabry-Perot interferometer is used. The overlap between successive images of the false-color camera is 70...80%, which makes it possible to calculate a digital elevation model of the target area. The field of view of the false-color camera is typically 80 degrees, and the ground pixel size at 150 m flying altitude is around 5 cm. The field of view of the hyperspectral imager is presently 26 x 36 degrees, and its ground pixel size at 150 m flying altitude is around 3.5 cm. The UAS system was tested in summer 2011 in Southern Finland over forest and agricultural areas. During the first test campaigns the false-color camera and hyperspectral imager were flown over the target areas on separate flights. The design and calibration of the hyperspectral imager are briefly explained, and the test flight campaigns on forest and crop fields and their preliminary results are also presented in this paper.
Article
Full-text available
High resolution aerial photographs have important rangeland applications, such as monitoring vegetation change, developing grazing strategies, determining rangeland health, and assessing remediation treatment effectiveness. Acquisition of high resolution images by Unmanned Aerial Vehicles (UAVs) has certain advantages over piloted aircraft missions, including lower cost, improved safety, flexibility in mission planning, and closer proximity to the target. Different levels of remote sensing data can be combined to provide more comprehensive information: 15–30 m resolution imaging from space-borne sensors for determining uniform landscape units; < 1 m satellite or aircraft data to assess the pattern of ecological states in an area of interest; 5 cm UAV images to measure gap and patch sizes as well as percent bare soil and vegetation ground cover; and < 1 cm ground-based boom photography for ground truth or reference data. Two parallel tracks of investigation are necessary: one that emphasizes the utilization of the most technically advanced sensors for research, and a second that emphasizes the minimization of costs and the maximization of simplicity for monitoring purposes. We envision that in the future, resource management agencies, rangeland consultants, and private land managers should be able to use small, lightweight UAVs to satisfy their needs for acquiring improved data at a reasonable cost, and for making appropriate management decisions.
Article
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
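The nearest-neighbour matching with a distinctiveness check described in this abstract is commonly implemented as Lowe's ratio test: a match is kept only when the nearest descriptor is clearly closer than the second nearest. A minimal sketch with synthetic 128-D descriptors (the 0.8 threshold and the random data are assumptions for illustration, and the brute-force search stands in for the fast nearest-neighbour algorithm of the paper):

```python
import numpy as np

def ratio_test_match(desc_query, desc_db, ratio=0.8):
    """For each query descriptor, accept its nearest neighbour in the
    database only if it beats the second nearest by the given ratio."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_db - d, axis=1)   # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]                # two closest candidates
        if dists[j1] < ratio * dists[j2]:             # distinctiveness check
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(1)
db = rng.random((50, 128))                            # database descriptors
query = db[[3, 17, 42]] + 0.01 * rng.random((3, 128)) # noisy copies of three
matches = ratio_test_match(query, db)
```

The surviving matches would then feed the Hough clustering and least-squares pose verification stages that the abstract describes.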
Article
Unmanned Air Vehicles (UAVs) have crucial roles to play in traditional warfare, asymmetric conflicts, and also civilian applications such as search and rescue operations. Though satellites provide extensive coverage and capabilities crucial to many remote sensing tasks, UAVs have a distinct edge over satellites in dynamic situations due to shorter revisit times and desired area/time coverage. The course, speed and altitude of a UAV can be dynamically altered, and details of an activity of interest can be monitored by loitering over the area as desired. A fundamental requirement in most UAV operations is to find the geo-coordinates of an object in the captured image. Most small, low-cost UAVs use low-cost, less accurate sensors. Matching against pre-registered images may not be possible in areas with few details or in emergency situations where the terrain has undergone severe sudden changes. In these situations, which demand near real-time results and wider coverage, it is often enough to provide approximate results as long as bounds on their accuracy can be established. Even when image registration is possible, it can benefit from these bounds to reduce the search space, thereby saving execution time. The prime contributions of this paper are the computation of target location anywhere in the image, even at large slant ranges; an optimized algorithm to compute the terrain elevation at the target point; and the use of a visual simulation tool to validate the model. Analyses from simulation and results from real UAV flights are presented.
Article
Small-scale unmanned aerial vehicles (UAVs) are an emerging research area and have recently been demonstrated in many applications, including disaster response management, construction site monitoring and wide-area surveillance, where multiple UAVs offer various benefits. In this work we present a system composed of multiple networked UAVs for autonomously monitoring a wide-area scenario. Each UAV is able to follow waypoints and capture high-resolution images. To overcome the strong resource limitations, we implement an incremental approach for generating an orthographic mosaic from the individual images. Captured images are pre-processed on board, annotated with other sensor data and transferred by a prioritized transmission scheme. The ultimate goal of our approach is to generate an overview mosaic as quickly as possible and to improve its quality over time. The mosaicking exploits position and orientation data of the UAV to compute rough image projections, which are incrementally refined by scene structure analysis as more image data become available. We evaluate our incremental mosaicking in a strongly resource-limited UAV network composed of up to three concurrently flying UAVs. Our results are compared to state-of-the-art mosaicking methods and show a unique performance in our dedicated application scenarios.
Article
An unmanned aerial vehicle capable of navigating autonomously to geo-localize an arbitrary ground target is proposed here. From the video captured by the aerial vehicle, successive frame comparisons are made to extract 3D scene points. We have also designed decision-making rules that are used for autonomous navigation. While traversing, the target geo-localization system uses a nadir-pointing camera to find the target's ground coordinates. This task is achieved by first registering the video sequence obtained from the vehicle with aerial images of the region where the vehicle is flying, and then performing a geometric coordinate transformation from the aerial images to the video sequence frames. Using the video sequence captured by the vehicle, both autonomous navigation and the coordinates of an arbitrary ground target can be computed. The major application of the proposed work is search and rescue operations.