Technical Report

Best practice tutorial: Technical handling of the UAV "DJI Phantom 3 Professional" and processing of the acquired data


Abstract

This tutorial assumes no prior knowledge of handling low-budget Unmanned Aerial Vehicles (UAVs) in ecological and environmental contexts. It begins with general information on the preparation and composition of a typical UAV system, followed by instructions on planning and implementing UAV flights with the available commercial software, importing the acquired imagery, relative orientation, optimization of camera parameters, generation of dense point clouds, and finally digital surface modeling from the point clouds. The tutorial closes with lessons learned as well as tips and tricks on further processing and potential applications of the UAV topographic products.
BEST PRACTICE TUTORIAL
Technical handling of the UAV "DJI Phantom 3 Professional" and processing of the acquired data
Marius Röder
(marius.roeder@web.de)
Steven Hill
Hooman Latifi
Table of contents
Composition and preparation of the UAV system
Performance of UAV recordings with DJI GO
Performance of UAV data acquisition with Pix4Dcapture
    Planning
    Implementation
Evaluation of UAV recordings in Agisoft Photoscan
    Import of the images
    Relative orientation
    Import of GCPs and exterior orientation
    Optimization of the camera parameters
    Dense Point Cloud Creation
    Creation of the DSM
Further processing and applications of the DSM
Literature
Composition and preparation of the UAV system
See Quick Start Guide
Performance of UAV recordings with DJI GO
For classic applications of a low-budget UAV, i.e. the simple recording of photos and videos, the app DJI GO is recommended. With this manufacturer-provided app, the drone can be controlled manually via the remote control, allowing photos and videos to be captured by manual triggering. In addition, the sensors of the drone can be calibrated and further parameters, such as the exposure time of the camera, can be adjusted.
However, DJI GO does not allow a fixed flight path to be defined before take-off. The app is therefore not suitable for capturing images that are later to be processed with photogrammetric software, and it is not covered further here. For details on handling DJI GO, please refer to the Quick Start Guide or the user manual.
Performance of UAV data acquisition with Pix4Dcapture
Last update: March 2017, app version 3.7.1 (Android)
Further information: Pix4Dcapture online manual, https://support.pix4d.com/hc/en-us/articles/203873435--Android-Pix4Dcapture-Manual#gsc.tab=0 (accessed March 21, 2017)
As described above, the manufacturer's app DJI GO only allows manual control of the drone; a fixed flight path cannot be defined before take-off. The free app Pix4Dcapture is a very good alternative: flights can be planned in advance along a predefined flight path, and many other flight parameters can be set by the user. The following is a description of how to use the app.
In order to connect Pix4Dcapture to a DJI drone, the app Ctrl+DJI has to be downloaded from the Google Play Store and installed on the smartphone alongside Pix4Dcapture. This app runs in the background and allows Pix4Dcapture to work with DJI drones.
Problems can arise when Pix4Dcapture and DJI GO are installed simultaneously on a smartphone/tablet. It is therefore recommended to install only one of the two flight control apps.
For a successful UAV flight, some settings can already be made in advance. For this reason, this chapter is divided into planning (office work) and implementation (fieldwork).
Planning
When the app Pix4Dcapture is opened after installation, a window appears in which the user must first create a free account via Sign up for free (see Figure 1).
Figure 1: Creation of a free account
If the user is logged in, the actual start screen of the app appears (see Figure 2).
Figure 2: Start screen Pix4Dcapture
General settings can be made under Settings. In the General tab, the corresponding drone is selected (in this case, the DJI Phantom 3 Professional). Another important setting is Sync automatically when mission ends. If this option is activated, the captured images are automatically transmitted to the smartphone by radio after the flight has been completed. Since no further processing takes place on the smartphone, it is recommended to deactivate this option to avoid unnecessary memory consumption; the images are then stored exclusively on the SD card. It is also important that the Save offline maps option is enabled in the Maps tab. This saves the background maps (satellite images or vector maps) on the smartphone when the project is created, so that they remain available in areas without mobile data reception. Furthermore, in the Advanced tab under Root directory path, you can specify where the metadata for the flights should be stored on the smartphone.
Once the general settings have been made, different flight missions can be selected on the start screen. The app offers four mission modes, which differ in the type of flight path. The Grid Mission is best suited to generating 2D maps from the images; here, the drone flies a simple grid over a defined area. The Circular Mission was designed to create 3D models of a single object (such as a house); the images are taken in a circular arrangement. The Free Flight Mission can be used to create a project in which the drone is controlled manually while the camera is triggered automatically at a specified interval. This mode should not be confused with the fully manual control in DJI GO, where additional settings such as the exposure time of the camera can be adjusted; in the Free Flight Mission, only the flying is manual, and the images are taken automatically at the specified time interval rather than manually. To create 3D models of the Earth's surface, the Double Grid Mission is recommended. Here, the drone flies a flight path over a defined area that corresponds to two perpendicular grids (see Figure 3).
Figure 3: Double Grid flight path (black) with sample images (red)
Due to the high overlap from different viewing angles, this flight pattern is best suited for creating 3D models of the recorded area by photogrammetric methods (Pix4D 2017). High overlap also leads to better accuracy of 3D point clouds (Haala, Cramer and Rothermel 2013). The workflow with the Double Grid Mission is described below.
Clicking on Double Grid Mission will display a user interface with various settings (see Figure
4).
Figure 4: GUI Pix4Dcapture
The majority of the GUI is taken up by the background map, which can be switched between vector and satellite data via the corresponding buttons. On the map, a green polygon represents the flight path (double grid). The polygon can be dragged to the desired size of the area to be captured.
On the left side, Alt can be used to set the flight altitude. This depends on the objects to be recorded during the project. If a flight altitude of less than 30 m is selected, the app warns in the upper bar (message: Low!) that a probably too low value has been chosen. If a flight altitude above 100 m is selected, the app warns that it is too high (message: High!) and may violate local regulations and laws. This must be observed during flight planning.
Further settings can be made via the settings button (see Figure 5).
Figure 5: Adjustment of speed, angle and overlap
The speed can be selected in small steps between slow and fast. The slowest speed is recommended here, as it yields the best possible image quality by avoiding distortion or motion blur in the images. The recording direction can be set under Angle. For the calculation of 3D models, this option should be set to vertical, which corresponds to a camera angle of 80°. This yields approximately nadir photographs, as are usual in aerial photography. The overlap should be set as high as 90% to achieve optimal accuracy for the later orientation of the images and the subsequent point cloud calculation (Haala, Cramer and Rothermel 2013). It has also been shown that high image overlap minimizes height errors (Dandois, Olano and Ellis 2015).
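The relationship between flight altitude, image footprint and overlap can also be estimated numerically. The following sketch illustrates the geometry; the camera constants are assumptions for the Phantom 3 Professional (they are not values taken from this tutorial), so treat the numbers as rough estimates only.

# Hedged sketch: estimating ground footprint, GSD and flight-line spacing
# from altitude and overlap. Camera constants are assumptions for the
# Phantom 3 Professional (~6.17 x 4.55 mm sensor, 3.61 mm focal length,
# 4000 x 3000 px), not values from this tutorial.

SENSOR_W_MM, SENSOR_H_MM = 6.17, 4.55   # assumed sensor size
FOCAL_MM = 3.61                          # assumed focal length
IMG_W_PX = 4000                          # assumed image width in pixels

def footprint_m(altitude_m):
    """Ground footprint (width, height) of one nadir image in metres."""
    w = SENSOR_W_MM / FOCAL_MM * altitude_m
    h = SENSOR_H_MM / FOCAL_MM * altitude_m
    return w, h

def gsd_cm(altitude_m):
    """Ground sampling distance in cm per pixel."""
    w, _ = footprint_m(altitude_m)
    return w / IMG_W_PX * 100.0

def spacing_m(altitude_m, overlap=0.9):
    """Spacing between flight lines (side) and exposures (forward)."""
    w, h = footprint_m(altitude_m)
    return w * (1.0 - overlap), h * (1.0 - overlap)

if __name__ == "__main__":
    alt = 50.0  # example flight altitude in metres
    print("GSD:       %.1f cm/px" % gsd_cm(alt))          # ~2.1 cm/px
    print("Footprint: %.0f x %.0f m" % footprint_m(alt))  # ~85 x 63 m
    print("Spacing:   %.1f m side, %.1f m forward" % spacing_m(alt))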
The zoom button can be used to center the view on the polygon. At the bottom of the GUI, the size of the polygon and the duration of the flight with the specified settings are indicated. The flight duration depends on the size of the area to be captured, the flight speed and the flight altitude. One drone battery lasts about 23 minutes. So that the drone can be landed safely, it is recommended not to exceed a flight time of approximately 16 minutes, which still leaves sufficient time for landing. In addition, the app shows a warning in the bar at the top of the screen (message: Flight time!) if the planned flight time is too long.
After all settings have been made, the project is saved via the save button. The created projects can be accessed from the start screen under Project List. When the project is saved, the background maps for the area are automatically downloaded and stored on the smartphone so that they are available offline.
Implementation
Once all settings of the flight planning have been made, the flight can be carried out in the
field.
First, the drone has to be prepared for the flight (see Quick Start Guide). This includes attaching the rotors, inserting the battery and checking the cleanliness of the camera lens. Then the individual devices are switched on and connected to each other. According to the manufacturer, first the remote control and then the drone should be switched on, and finally the smartphone/tablet should be connected to the remote control via the USB cable (Pix4D 2017). The app is then started and the project created in the office is selected (start screen → Project List → Project xx). It is important that the GPS function of the smartphone is now active. The locate button allows the view to be centered on the current position of the smartphone.
The connection of the remote control and the smartphone to the drone can be verified by checking that the Wi-Fi symbol is green (see Figure 6). In addition, a drone symbol now appears on the GUI, showing the position of the drone (known from its integrated GNSS receiver) (see Figure 6).
Figure 6: GUI after connecting with drone
Once the devices are connected to each other, you can switch between map mode and camera mode before the flight. In camera mode, a live view of the drone camera is shown (see Figure 7).
Figure 7: Camera mode
Since the required size of the grid could only be estimated roughly during mission planning, the flight area can be readjusted in the field.
Clicking on Start will bring up a new window in which the app summarizes the most important
mission data and confirms that the smartphone or remote control is connected to the drone
(see Figure 8).
Figure 8: Control screen before the start of the drone (1)
Clicking on Next will bring up another window listing the requirements for starting (see Figure
9).
Figure 9: Control screen with checklist before the start of the drone (2)
For example, the app checks whether the individual components (smartphone - remote control - drone) are connected, whether sufficient GNSS satellites are available and whether there is sufficient space on the SD card. If this is not the case, the app issues warnings. While familiarizing ourselves with the software, several warnings occurred. When using Pix4Dcapture, for example, the switch on the remote control has to be set to "F", otherwise a warning appears. In addition, the mission programmed in the app could not be loaded onto the drone when the remote control firmware and the drone firmware had different versions installed; please ensure that the same version is installed on both devices. If the drone is started indoors, errors occur because there are not enough GPS satellites. As a result, for example, the so-called home point, from which the start takes place, is not known (see Figure 10).
Figure 10: Control screen with checklist before the start of the drone (3)
As soon as all prerequisites are met, the Take off button is held for three seconds. The drone then climbs vertically to the desired flight altitude. The relative height above ground is determined not by GNSS but by barometer (Pix4D 2017). Once the flight altitude is reached, the drone moves to the starting point of the double grid and flies the previously programmed flight path fully automatically. It is not necessary to intervene with the remote control during the flight. In live view mode, the camera image can be watched live during the flight (see Figure 11).
Figure 11: Live view during the flight
After the recordings are complete, the drone returns at the designated flight altitude to the home point that was determined at the start. Above the home point, the UAV descends fully automatically. From a flight height of approx. 10 m, it is recommended to land the drone manually with the remote control. Care must be taken to ensure that the landing area is free of obstructing objects.
Back in the office, the images are then transferred via USB cable from the internal memory card of the drone to an external hard disk.
Evaluation of UAV recordings in Agisoft Photoscan
Last update: March 2017, software version: Agisoft Photoscan Professional 64-bit, version 1.2.6
The software product Agisoft Photoscan is used to evaluate the recorded UAV images. It is a stand-alone software product that performs photogrammetric processing of digital images and generates three-dimensional spatial data.
In the following sub-chapters, the workflow for the evaluation of UAV recordings is explained using an example project. The UAV flight took place as part of an M.Sc. thesis (Röder 2017) in the Bavarian Forest National Park. The settings, calculation times, etc. are based on the experience gained in that thesis and are specifically adapted to this project. They are therefore not universally applicable to all UAV analyses with Agisoft Photoscan and must be chosen individually for each project.
Import of the images
After the program has been started, the newly opened project is saved via File → Save (see Figure 12).
Figure 12: Saving the project
Clicking on Workflow → Add Photos... opens a window in which all images taken with the drone are selected and imported into the project (see Figure 13).
Figure 13: Adding the photos
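The same import can also be scripted via Photoscan's built-in Python console. The following is a minimal sketch, assuming Photoscan Professional 1.2.x; the image folder and project paths are hypothetical.

import os
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()  # a new chunk, as created automatically in the GUI

image_dir = "D:/uav_flight/images"  # hypothetical image folder
photos = [os.path.join(image_dir, f)
          for f in os.listdir(image_dir)
          if f.lower().endswith(".jpg")]

chunk.addPhotos(photos)  # camera positions are read from the EXIF data
doc.save("D:/uav_flight/project.psz")  # hypothetical project path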
On the left side of the GUI is the workspace, in which the newly imported images are listed (see Figure 14). After the images are imported, a chunk is created automatically. Any number of chunks can be created per workspace. The division into chunks is useful; for example, a chunk can be created for each processing step so that the information from the previous steps is not lost. In addition, two UAV flights with overlapping areas can initially be oriented separately and subsequently processed together. NA behind each of the images stands for Not Aligned and indicates that the images have not yet been oriented relative to each other.
In the middle of the GUI, after the import, the approximate positions of the cameras appear as blue dots in the model window (see Figure 14). The drone is equipped with a single-frequency GNSS receiver, which stores the position of the camera at the time of capture in the metadata of the images. The accuracy of these positions is several meters, which is why they are referred to as approximate positions. Agisoft Photoscan reads this information automatically from the EXIF data of the images during import.
Figure 14: Workspace-tab after importing the images
If the user moves from the Workspace tab to the Reference tab (lower-left corner of the GUI), the approximate positions in the geodetic datum WGS84 (latitude, longitude, height) can be seen in the Cameras window (see Figure 15, left side). The Accuracy column has a value of 10 m (the default accuracy assumed for the single-frequency GNSS receiver). In the lower third of the GUI, the images are shown as thumbnails. Double-clicking on one of the images opens a larger view in the middle of the GUI (see Figure 15, right side).
Figure 15: Reference tab after importing the images (left) and single-image view (right)
The images of the example project are quite dark. With the Pix4Dcapture app, the exposure settings of the camera cannot be changed during recording; the exposure is always adjusted automatically. Agisoft Photoscan provides a feature for adjusting the brightness of the images. The subsequent visual evaluations of the point clouds or the orthomosaic by a human observer can be greatly facilitated by this radiometric adjustment. Clicking on Photo → Image Brightness opens a new window (see Figure 16).
Figure 16: Call of the function Image Brightness
Via Estimate the software calculates an optimal value for the image brightness (see Figure
17). The exposure adjustment in the images is then clearly visible (see Figure 18).
Figure 17: Image Brightness before (left) and after (right) estimation of a fit value
Figure 18: Sample image after adjusting the Image Brightness
Relative orientation
After importing the images and the brightness adjustment, the relative orientation of the images
to one another takes place. At the moment, only an approximation position of the images is
available via the GNSS receiver of the drone. The images do not yet "know" how they are
positioned opposite the other images. For this purpose, the relative orientation of the images
must be established. Using Workflow Align Photos... opens a new window with different
settings for the relative orientation (see Figure 19).
Figure 19: Accessing the Align Photos... function (left) and its settings (right)
Under General, the parameters Accuracy and Pair preselection can be set. It is recommended to always set Accuracy to Highest, which computes the camera positions with the highest accuracy. This setting results in a longer processing time for the relative orientation; however, high accuracy is the prerequisite for a precise generation of subsequent products such as the DSM or the orthomosaic. With Highest, the original images are scaled up by a factor of four; each lower accuracy level scales the images down by a factor of four, which considerably reduces the processing time of the orientation. Under Pair preselection, Reference is selected. Overlapping pairs of images are then determined in advance from their approximate positions (from the GNSS receiver of the drone), which facilitates the relative orientation and reduces the calculation time.
Additional parameters can be set under Advanced; the default settings are used here. The Key point limit specifies the upper limit of feature points that are considered per image during processing. The software first searches for these salient pixels in each image and then uses them to orient the images relative to each other. The Tie point limit sets the maximum number of matching points per image; the most reliable and accurate feature points are selected by the software. The Adaptive camera model fitting parameter is always activated. It automatically includes in the adjustment those camera parameters whose reliability measures are sufficiently high, which prevents the divergence of some parameters, particularly with aerial image data sets. The OK button starts the orientation. The progress can be followed in a new window (see Figure 20).
Figure 20: Work progress during the alignment
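The same alignment step can also be run from Photoscan's Python console. A minimal sketch, assuming Photoscan Professional 1.2.x, mirroring the dialog settings described above; the point limits shown are typical GUI defaults, and the adaptive_fitting argument assumes the installed version supports it.

import PhotoScan

chunk = PhotoScan.app.document.chunk  # the active chunk

chunk.matchPhotos(accuracy=PhotoScan.HighestAccuracy,           # Accuracy: Highest
                  preselection=PhotoScan.ReferencePreselection,  # Pair preselection: Reference
                  keypoint_limit=40000,                          # Key point limit (default)
                  tiepoint_limit=4000)                           # Tie point limit (default)
chunk.alignCameras(adaptive_fitting=True)  # adaptive camera model fitting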
First, the feature points are detected. The overlapping image pairs are then selected and finally matched. When the relative orientation is completed, the so-called sparse point cloud appears in the middle of the GUI (see Figure 21).
Figure 21: Sparse point cloud after alignment
You can see all the tie points used to create the relative orientation in the images. By clicking
on the Show Cameras button you can see the positions of the cameras taking into account the
relative orientation (see Figure 22).
Figure 22: Displaying the camera positions (right) via the Show Cameras Button (left)
In the single-image view (double-click on one of the images), clicking on View Points shows the unused feature points in gray and the used tie points in blue (see Figure 23).
Figure 23: View the Feature Points (white) and the Tie Points (blue) (right) using the View Points Button (left)
In the Reference tab, more information is now available (see Figure 24).
Figure 24: Reference tab after alignment
The column Error (m) shows the difference between the camera position from the approximate coordinates and the camera position according to the relative orientation. Projections shows the number of tie points per image, and Error (pix) indicates an RMSE value for the reprojection error. Since values for yaw, pitch and roll have not been imported, these columns are empty. Right-clicking on the chunk → Show Info... displays details of the chunk (see Figure 25). Under Alignment parameters, you can see which settings were used for the relative orientation; the calculation time for the orientation is also given here.
Figure 25: Call up the chunk information
Import of GCPs and exterior orientation
The relative orientation of the images is now established. The next step is the exterior orientation using ground control points (GCPs). In principle, the images are already georeferenced; however, the accuracy of the single-frequency GNSS receiver of the drone is not sufficient. For this reason, GCPs are used, which are usually measured with an accuracy of better than 10 cm. The GCPs must be marked in the software by so-called markers. To do this, double-click an image and search for a GCP. If one of the GCPs is found, a marker is placed centrally on it via right-click → Create Marker (see Figure 26).
Figure 26: Create a marker on a GCP
The marker is automatically assigned the name "point 1", which can be changed in the Workspace tab. In this case, the markers were named according to their IDs (see Figure 27).
Figure 27: GCP before (left) and after (right) rename
When the next image is opened, a line appears on which the marker must lie in this image: the epipolar line. This makes it easier to find the point. Once the marker is found in the second image, it is placed via right-click → Place Marker (see Figure 28).
Figure 28: Place the newly created GCP in another image using the epipolar line
Once the marker has been set in two images, its position in all other images is known from the already established relative orientation. To improve the orientation manually, the point should nevertheless be placed in the correct position in every image. If another image is opened, the marker appears as a point with a gray flag (see Figure 29, left). This is the proposed position for this marker. Use the left mouse button to move the marker centrally onto the correct position of the GCP. The marker is then shown with a green flag (see Figure 29, right).
Figure 29: Activation of the proposed approximation position (left) by displacement (right)
In order to improve the accuracy of the orientation, it is recommended to place the marker in
all images in which it is clearly visible. In images that are out of focus or in which the marker is
difficult to recognize, it is not recommended to set the marker. This workflow must be done for
all markers. Figure 30 shows the marker list in the workspace tab after setting all the markers
in the sample project.
Figure 30: Marker list after setting all GCPs
Georeferencing is already possible with three GCPs. In the sample project, six GCPs were
available. If the 3D position of all markers is known, they can be displayed via the Show
Markers button (see Figure 31).
Figure 31: Display the markers in the Sparse Point Cloud (below) by the Show Markers button (top)
In order to carry out the bundle adjustment of the aerial images by means of GCPs, their precisely measured coordinates have to be imported. In this example, the reference points are available as shapefiles. These were loaded in QGIS and the necessary attributes (longitude and latitude, accuracy, height, ID) were exported as a tab-delimited CSV file. Afterwards, a text file was created for each plot in which the values for the individual attributes are listed separately (see Figure 32, left). Using the Import button in the Reference tab, the GCPs are imported into Agisoft Photoscan (see Figure 32, right).
Figure 32: Structure of the text file (left) and import button (right)
A new window with import settings appears (see Figure 33). The values can be assigned to
the individual tab-separated columns. By activating the Load accuracy checkbox, the accuracy
determined in the GNSS measurements can also be imported. After the import, the software
automatically performs bundle block adjustment. The images are then precisely georeferenced
by the GCPs.
Figure 33: Settings for importing the GCPs
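This import can also be scripted. The sketch below assigns measured coordinates to existing markers via the Python API instead of the Import dialog, assuming Photoscan 1.2.x; the file path and column layout (ID, longitude, latitude, height, accuracy) are hypothetical.

import PhotoScan

chunk = PhotoScan.app.document.chunk
gcp_file = "D:/uav_flight/gcps.txt"  # hypothetical tab-delimited file

coords = {}
with open(gcp_file) as f:
    for line in f:
        name, lon, lat, h, acc = line.strip().split("\t")
        coords[name] = (float(lon), float(lat), float(h), float(acc))

for marker in chunk.markers:
    if marker.label in coords:
        lon, lat, h, acc = coords[marker.label]
        marker.reference.location = PhotoScan.Vector([lon, lat, h])
        marker.reference.accuracy = acc  # scalar in 1.2.x; later versions also accept a per-axis vector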
In the Marker section of the Reference tab, the markers are now listed with the imported coordinates and accuracies (see Figure 34). At the same time, the software shows in the column Error (m) the difference between the imported coordinates and the coordinates estimated in the adjustment. The values in the Projections column indicate in how many images each marker was set. Error (pix) returns the RMSE of the marker's reprojection error, calculated over all images in which the marker is visible.
Figure 34: Marker section in the Reference tab after importing the GCPs
Optimization of the camera parameters
After the import of the GCPs has been completed, the camera parameters are improved in the
next step in order to optimize the accuracy of the model.
Agisoft Photoscan estimates the inner and exterior orientation parameters of the camera
during the orientation of the images. The accuracy of the estimation depends on many factors,
e.g. the overlap or the shape of the terrain. This can lead to errors which can lead to non-linear
deformations in the model. During georeferencing using GCPs, the model is linearly
24
transformed by means of a 7-parameter similarity transformation (3 translations, 3 rotations, 1
scale). Linear errors can be compensated, but non-linear components cannot be
compensated. For this reason, errors are mainly caused by georeferencing. In order to
eliminate the non-linear deformations, the Sparse Point Cloud and the camera parameters are
optimized on the basis of the known reference coordinates. In this step, Agisoft Photoscan
compensates the estimated point coordinates and camera parameters by minimizing the sum
of the reprojection errors and the reference coordinate errors.
For the optimization, it is recommended to duplicate the chunk within the project. This way, the state after the relative orientation and the import of the GCPs remains accessible if unexpected problems occur in subsequent processing steps. To do this, right-click on the chunk → Duplicate to copy the chunk within the project (see Figure 35). For a better overview, the first chunk is renamed Alignment and the second chunk Optimization.
Figure 35: Duplicate the alignment chunk (left) and rename (right)
In the first step of the optimization, the tie points that are clearly recognizable as outliers are eliminated. To do this, the sparse point cloud is loaded and viewed from different perspectives. Clearly visible outliers are selected with the selection tools of the software and removed with the Delete key (see Figure 36).
Figure 36: Detection of outliers
Next, tie points are removed that have a high reprojection error, a high reconstruction uncertainty or a low projection accuracy. For this, Agisoft offers with Edit → Gradual Selection a function with which tie points are selected using a threshold value (see Figure 37).
Figure 37: Open the Gradual Selection Tool
The thresholds used are based on experience reported in the literature (Gatewing 2017; Mallison 2015). In the Gradual Selection window, Reprojection Error is initially selected as the criterion. Here, the value 1 was selected as the threshold, with few exceptions. This means that all tie points with a reprojection error larger than 1 are selected on OK; in the model, the selected points are marked in pink, and the Delete key removes them. In some plots the reprojection errors were so low that a threshold of 0.5 was used. Next, the criterion Reconstruction uncertainty was selected and all points above the threshold 10 were removed. Points located at the edge of the image block generally have a higher reconstruction uncertainty than points in the middle, because they are detected only in images with forward overlap while the lateral overlap is missing. With the last criterion, Projection accuracy, all points with a value greater than 2 were selected and removed. Figure 38 summarizes the settings made for the gradual selection.
Figure 38: Thresholds for the reprojection error (top left), the reconstruction uncertainty (top right) and the
projection accuracy (bottom)
As a result of the above measures, approximately 80% of the tie points are eliminated (see Figure 39); 20% of the initially generated tie points are sufficient to link the images. However, care should be taken that no more than 90% of the total number of tie points is removed in order to maintain a good relative orientation. If necessary, the threshold values should be set higher.
Figure 39: Sparse point cloud after filtering through gradual selection
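The same filtering can be scripted via the PointCloud.Filter class of the Python API. A minimal sketch, assuming Photoscan 1.2.x and the thresholds quoted above:

import PhotoScan

chunk = PhotoScan.app.document.chunk
Filter = PhotoScan.PointCloud.Filter

# (criterion, threshold) pairs as recommended above
steps = [(Filter.ReprojectionError, 1.0),
         (Filter.ReconstructionUncertainty, 10.0),
         (Filter.ProjectionAccuracy, 2.0)]

for criterion, threshold in steps:
    f = Filter()
    f.init(chunk, criterion=criterion)  # compute the criterion for all tie points
    f.selectPoints(threshold)           # select points above the threshold
    chunk.point_cloud.removeSelectedPoints()
    # afterwards, verify that no more than ~90% of the points were removed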
Before the actual optimization is performed, settings in the Reference tab must be changed.
Click the Settings button to open the Reference Settings window. The settings must be set
according to the accuracy of the drone, the accuracy of the markers etc. (see Figure 40).
Figure 40: Call the reference settings before performing the optimization
The best results are achieved when the orientation parameters are first optimized based on the camera coordinates and then based on the GCP coordinates. First, all cameras are activated and the GCPs are deactivated. Clicking the Optimize button triggers the optimization (see Figure 41).
Figure 41: Performing the optimization with the Optimize Cameras Button
Afterwards, the cameras are deactivated, the GCPs are activated, and the optimization is started again using the Optimize button. The average reprojection errors and the residuals should have become significantly smaller after the optimization. Figure 42 shows the Marker section of the sample project after the optimization of the camera parameters. Compared to Figure 34, the total error of the residuals has decreased from 18.6 cm to 7.6 cm and the total reprojection error from 0.65 pix to 0.22 pix.
Figure 42: Marker section in the Reference tab after optimization of camera parameters
If the residuals (column Error (m)) are still high (> 10 cm) for some GCPs after the optimization, the GCPs may not have been clicked exactly in the images, or the GCP itself may have moved. If necessary, remove the GCP from the adjustment (uncheck its box).
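The two-stage optimization can likewise be scripted. A sketch, assuming Photoscan 1.2.x; the enabled flags mirror the checkboxes in the Reference pane:

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Stage 1: optimize against the camera (GNSS) coordinates only
for camera in chunk.cameras:
    camera.reference.enabled = True
for marker in chunk.markers:
    marker.reference.enabled = False
chunk.optimizeCameras()

# Stage 2: optimize against the GCP coordinates only
for camera in chunk.cameras:
    camera.reference.enabled = False
for marker in chunk.markers:
    marker.reference.enabled = True
chunk.optimizeCameras()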
Dense Point Cloud Creation
After the optimization of the camera parameters, the creation of a dense point cloud takes
place. For this, it is recommended to create a new chunk by duplicating the optimization chunk
(see Figure 43).
Figure 43: Structure of the workspace after duplication of another chunk
Before the point cloud generation is started, the area for which the dense point cloud is to be calculated must be defined (Area of Interest). To do this, use the two buttons Resize Region and Rotate Region (see Figure 44).
Figure 44: Definition of the Area of Interest by Resize Region (left) and Rotate Region (right)
The size and orientation of the bounding box can be changed, and only the area enclosed by it is processed; this can save a large part of the processing time. Here, the bounding box was chosen to enclose the plot marked out by the GCPs (see Figure 45).
Figure 45: Definition of the Area of Interest
Workflow → Build Dense Cloud... opens a window in which the settings for the point cloud generation are made (see Figure 46).
Figure 46: Call up the function Build Dense Cloud... (left) and its settings (right)
Under Quality, the desired quality of the reconstruction is set. A higher quality means a more detailed and accurate geometry of the point cloud, but also a longer processing time. The highest quality level, Ultra High, uses the images at their original size; each lower quality level scales the images down by a factor of four. The quality level High is recommended as the best setting: qualitatively good geometries are created while the calculation time remains economical. With Ultra High, calculation times of approximately one week resulted for the sample plot, which does not appear economically viable; with High, the point cloud of the plot was calculated in about 10 hours. In addition, under Depth Filtering, you can set whether and how the point clouds are filtered to eliminate outliers. If Disabled is selected, no filter is used; this setting is not recommended, however, as the resulting point clouds are extremely noisy. If the area contains small details that should still be recognizable in the point cloud, the setting Mild is recommended. If not, Aggressive applies a very strong filter that removes a large number of outliers.
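The corresponding Python call is short. A sketch, assuming Photoscan 1.2.x and the settings recommended above:

import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,   # Quality: High
                      filter=PhotoScan.MildFiltering)  # Depth Filtering: Mild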
Figure 47 shows a very dense point cloud of the sample plot, in which the individual patches of spruce regeneration are clearly visible. The deadwood trunks are also clearly reconstructed.
Figure 47: 3D section of the resulting Dense Point Cloud (Quality: High, Depth Filtering: Mild)
Creation of the DSM
The last step of the analysis with Agisoft Photoscan is the creation of the DSM. For this
purpose, the chunk in which the point cloud was calculated is duplicated and renamed (see
Figure 48).
Figure 48: Structure of the Workspace after duplication of another chunk
Since the depth filter Mild does not completely remove all outliers, it is necessary to select and
eliminate the remaining outliers manually (see Figure 49).
Figure 49: Removing remaining noise from the point cloud (top) and side view of the resulting point cloud (bottom)
After this step, Workflow → Build DEM... opens the window with the settings for the DSM creation (see Figure 50).
Figure 50: Call up the function Build DEM... (left) and its settings (right)
Agisoft Photoscan generally designates surface models as DEM (Digital Elevation Model), which in this case is the same as a DSM. It is important to specify the dense cloud as the data source under Source data. Interpolation is enabled so that gaps in the point cloud are filled with interpolated points. The remaining settings are left at their defaults. The resolution, or raster cell size, of the DSM (Resolution (m/pix)) cannot be changed here; it is only set when the DSM is exported. The DSM is calculated with OK. In the display window in the middle of the GUI, a 2D image of the DSM is shown (see Figure 51).
Figure 51: 2D-View of the resulting DSM
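Scripted, the DSM generation is a single call. A sketch, assuming Photoscan 1.2.x and the dialog settings above:

import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.buildDem(source=PhotoScan.DenseCloudData,              # Source data: Dense cloud
               interpolation=PhotoScan.EnabledInterpolation)  # fill gaps by interpolation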
The export of the DSM is done in the Workspace tab by right-clicking on the DEM → Export DEM → Export TIFF/BIL/XYZ (see Figure 52; alternatively, the DSM can also be exported as a KMZ file).
Figure 52: Export function of the DSM (left) and its settings (right)
A window with the export settings opens. Here, the resolution of the output raster mentioned above can be entered in meters via Metres... All DSMs were exported with a resolution of 5 cm. The remaining settings were left at their default values. If necessary, the DSM can be divided into blocks, or only a certain region of the DSM can be exported. Clicking on Export... opens another window in which the location and file format are specified (see Figure 53). The DSMs generated for the sample plots were exported in XYZ format, which is universal and readable by many software packages.
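The export can also be scripted; a sketch, assuming the exportDem signature of Photoscan 1.2.x (keyword names may differ between versions) and a hypothetical output path:

import PhotoScan

chunk = PhotoScan.app.document.chunk
chunk.exportDem("D:/uav_flight/dsm.xyz",  # hypothetical path; .xyz is a universal text format
                format="xyz",
                dx=0.05, dy=0.05)         # 5 cm raster resolution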
Figure 53: Save the DSM as a .xyz file
Further processing and applications of the DSM
At the end of the tutorial, we will briefly discuss how the DSMs derived from UAV images can
be further processed and which applications are possible with them.
If, for example, an externally provided DTM is available, the DSM can be normalized. A normalized surface model (nDSM) gives the height above ground, which allows, for example, tree heights or building heights to be derived.
The subtraction can be performed with raster functions in QGIS (Raster → Raster Calculator) or ArcGIS (ArcToolbox → 3D Analyst Tools → Raster Math → Minus). To do this, the .xyz file must first be converted to a raster format (ArcGIS: ArcToolbox → Conversion Tools → To Raster; QGIS: Raster → Conversion → Rasterize (Vector to Raster)).
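The same subtraction can also be done outside a GIS, e.g. in Python. A sketch using GDAL and NumPy, assuming both rasters have already been converted to GeoTIFF and share the same extent, cell size and CRS; the file names are hypothetical:

import numpy as np
from osgeo import gdal

dsm_ds = gdal.Open("dsm.tif")
dtm_ds = gdal.Open("dtm.tif")

dsm = dsm_ds.GetRasterBand(1).ReadAsArray().astype(np.float32)
dtm = dtm_ds.GetRasterBand(1).ReadAsArray().astype(np.float32)

ndsm = dsm - dtm      # height above ground
ndsm[ndsm < 0] = 0    # clamp small negative residuals to zero

driver = gdal.GetDriverByName("GTiff")
out = driver.Create("ndsm.tif", dsm_ds.RasterXSize, dsm_ds.RasterYSize,
                    1, gdal.GDT_Float32)
out.SetGeoTransform(dsm_ds.GetGeoTransform())  # copy georeferencing from the DSM
out.SetProjection(dsm_ds.GetProjection())
out.GetRasterBand(1).WriteArray(ndsm)
out.FlushCache()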
General applications of DSMs and/or nDSMs derived from UAV recordings are, for example, volume calculations, change detection of the Earth surface's topography, or the inventory of forest areas.
In contrast to Lidar data, only the surface is modeled in photogrammetric evaluations; the vertical structure is not captured. The active Lidar method, in contrast, uses first and last pulses to record these data.
This gives the Lidar method the advantage that, for example, DSM and DTM can be derived simultaneously in forest areas. However, manned Lidar flights are very expensive and only economical for large areas. Using a low-budget UAV, on the other hand, is cost-effective and allows very flexible recording of the objects to be captured. Due to the very low flight altitudes, UAV recordings can also achieve much higher spatial resolutions than Lidar flights.
In summary, this tutorial provides a description of how high-quality three-dimensional remote sensing products can be produced with a commercially available UAV, an Android smartphone, and the corresponding evaluation software.
Literature
Dandois, Jonathan, Mark Olano, and Erle Ellis. "Optimal Altitude, Overlap and Weather Conditions for Computer Vision UAV Estimates of Forest Structure." Remote Sensing, October 23, 2015.
Gatewing. "Software Workflow AgiSoft PhotoScan Pro 0.9.0 For use with Gatewing X100 UAS." 2017.
Haala, Norbert, Michael Cramer, and Mathias Rothermel. "Quality of 3D point clouds from highly overlapping UAV Imagery." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, September 2013.
Mallison, Heinrich. Photogrammetry tutorial 11: How to handle a project in Agisoft Photoscan - Online Tutorial. 2015. https://dinosaurpalaeo.wordpress.com/2015/10/11/photogrammetry-tutorial-11-how-to-handle-a-project-in-agisoft-photoscan/ (accessed March 22, 2017).
Pix4D. Pix4Dcapture: Android Manual. 2017. https://support.pix4d.com/hc/en-us/articles/203873435--Android-Pix4Dcapture-Manual#gsc.tab=0 (accessed March 21, 2017).
Röder, Marius. "Eignungsprüfung einer UAV-basierten Forstinventur als Ersatz zu traditionellen Feldverfahren in Verjüngungsbeständen." Master's thesis, Hochschule für Technik Stuttgart, 2017.
... There is still neither a comprehensive guideline on how to clean the sparse point cloud on behalf of Agisoft nor a description of the algorithm implemented to do it. Therefore, for a given project, it is imperative to establish a workflow, including the appropriate selection of the parameters involved, sustained on several work presented in the literature (USGS, 2017a;USGS, 2017b;Röder et al., 2017;Agisoft Community, 2012. Different parameters setup and varied processes, such as optional compensation of rolling shutter, might produce different results and thus the presented workflow should be used as a guideline. ...
... It is selected in the Gradual Selection dialogue, expressed as a non-dimensional value referring to the directional overlap of photos. Tie points located in the edges of the project area generally have a higher degree of reconstruction uncertainty than those in the block centre, since images covering that area show reduced lateral overlapping (Röder et al., 2017). Such determined 3D points can noticeably deviate from the object surface, introducing noise in the point cloud. ...
... It is recommended to delete not more than 10-20% of the total tie points in every single gradual selection process, since the aim is to just delete points with high re-projection errors. Otherwise the overall photogrammetric point cloud might be over-constraint and the photo alignment can fail, which would be reflected in doming deformations (Röder et al., 2017). The improvement is an iterative process and the impact of each step needs to be carefully checked. ...
Conference Paper
Full-text available
UAV-based remote sensing offers the possibility to acquire aerial images with high geometric and temporal resolutions. With highly automated photogrammetric software packages, this imagery is extensively and quite successfully used for the production of geo-information. However, the utilization of inexpensive, un-calibrated small-format cameras mounted on lightweight UAV makes processing of aerial images challenging. After the automatic measurements of tie points the image orientations and the camera calibration is computed in a bundle block adjustment for compensating systematic effects. The quality and distribution of tie points may be optimized to avoid reconstruction errors before dense point cloud computation. For processing the acquired images, the operator tends to rely on the default parameter values given by the software provider, which may be inadequate. Additionally, inconsistent advises are available in the literature on how to reduce tie point errors correctly and efficiently. Therefore, it is essential to establish a comprehensive workflow to process aerial images for an efficient production of accurate geo-information. Focusing on the commercial software package Agisoft PhotoScan, the impact of possible workflows that include different processes and parameter values, on the accuracy of the resulting Digital Surface Models (DSM) and orthophoto will be presented.
... PS calculates how reliable these parameters are as part of the adaptive camera fitting and automatically uses the best combination. This process prevents divergence of parameters in aerial photography (De Reu et al. 2013;Mallison 2015;Marius et al. 2017;Agisoft 2018). Adaptive camera fitting was set to on. ...
... The script then aligns all images in each chunk and optimises the model before applying the gradual selection process. The gradual selection strategy selected is based on a number of sources and adapted to produce consistent results (USGS 2017;Mallison 2015;Marius et al. 2017). The aim of the script is to reduce the tie points to 80% of the original tie points in three steps, retaining only the highest quality points. ...
Article
Full-text available
In 2017, hurricane Maria caused unprecedented damage and fatalities on the Caribbean island of Dominica. In order to ‘build back better’ and to learn from the processes causing the damage, it is important to quickly document, evaluate and map changes, both in Dominica and in other high-risk countries. This paper presents an innovative and relatively low-cost and rapid workflow for accurately quantifying geomorphological changes in the aftermath of a natural disaster. We used unmanned aerial vehicle (UAV) surveys to collect aerial imagery from 44 hurricane-affected key sites on Dominica. We processed the imagery using structure from motion (SfM) as well as a purpose-built Python script for automated processing, enabling rapid data turnaround. We also compared the data to an earlier UAV survey undertaken shortly before hurricane Maria and established ways to co-register the imagery, in order to provide accurate change detection data sets. Consequently, our approach has had to differ considerably from the previous studies that have assessed the accuracy of UAV-derived data in relatively undisturbed settings. This study therefore provides an original contribution to UAV-based research, outlining a robust aerial methodology that is potentially of great value to post-disaster damage surveys and geomorphological change analysis. Our findings can be used (1) to utilise UAV in post-disaster change assessments; (2) to establish ground control points that enable before-and-after change analysis; and (3) to provide baseline data reference points in areas that might undergo future change. We recommend that countries which are at high risk from natural disasters develop capacity for low-cost UAV surveys, building teams that can create pre-disaster baseline surveys, respond within a few hours of a local disaster event and provide aerial photography of use for the damage assessments carried out by local and incoming disaster response teams.
... The raw images were processed with AgiSoft PhotoScan Pro 1.3.4 following the "Best Practice Tutorial DJI Phantom 3 Professional" (Röder et al. 2017) to create a digital surface model (DSM, including vegetation) using the "structure from motion" approach (Marteau et al. 2017) and to stitch georeferenced orthomosaics for each survey. To allow the determination of the vegetation height, an additional digital terrain model (DTM) was generated for each survey, including only ground points. ...
... To allow the determination of the vegetation height, an additional digital terrain model (DTM) was generated for each survey, including only ground points. Ground points were automatically identified by Photo-Scans ground point classification tool (Röder et al. 2017) with the maximum distance set to 0.2 m, angle set to 6°and cell size set to 10 m. Data processing of one survey took 5-6 h without any user interaction on a standard PC (2.6 GHz Intel i7, 16 GB RAM). ...
Article
The increasing importance of conservation and restoration of our natural capital is associated with a growing demand for reliable and cost-effective scientific data to support decision making and monitoring of implemented measures. Such data include information about habitats, abiotic conditions and disturbances. The small extent or small-scale structure of most restoration sites requires a high spatial resolution exceeding that provided by standard satellite imagery. When the site is still unstable during the initial phase or when frequent disturbance is expected, additionally a high temporal resolution is required. In this case study, we demonstrate the usefulness of a UAV (unmanned aerial vehicle) for monitoring river and floodplain restoration based on a conservation project aimed to preserve Germany's last remaining population of the highly endangered alpine river plant C. chondrilloides. This population is confined to a small, 5 ha area within a highly dynamic alluvial fan in the Bavarian Alps. We used the data acquired by UAV to monitor stream channel dynamics, to quantify bedload transport, erosion and deadwood structures and to characterize vegetation cover and height with the aim of identifying types and loss of habitats and consequently estimating extinction risk. The results show a highly dynamic stream channel and considerable bedload transport and erosion between flood events. The majority of the remaining C. chondrilloides population is found on terraces which are prone to erosion and escape habitats are missing. Between 2016 and 2017 we documented a loss of almost 25% of the species' potential habitats due to erosion events. Three percent of the existing population was lost to those events. Substantial spread of the species was only at the edge terrace which is subject to a high risk of extinction. Our study further demonstrates that many important parameters for the implementation of evidence-based conservation can be easily and cost-effectively derived from standard RGB images taken by UAVs and GIS software. As a consequence, we encourage the increased use of UAVs for restoration and monitoring practice.
... This phase was automatically carried out in Agisoft photoscan software. Using Align Photos function, the relative orientation of images to each other was done [24]. After the relative orientation of the images is established, the exterior orientation for the first model should be made using GCPs. ...
Article
Full-text available
In recent years, scholars have witnessed the increasing progress of using unmanned aerial systems (UASs) in topographic mapping due to its lower cost compared with alternative systems These UASs enables tree height estimation by capturing overlapped images and generating 3D point cloud through the structure from motion (SfM) algorithm. To ensure that the normalized digital surface model (nDSM) in the mountain areas is created accurately, careful attention to flight patterns and uniform distribution of ground control points (GCPs) are necessary. To this end, a quadcopter equipped with an RGB camera is used for imaging an area of 131 hectares in two steps: firstly, through a single flight strip with an optimized distribution of GCP and secondly through an improvement of the flight configuration. Afterward, two nDSMs were created by the automatic processing of raw images of both approaches. The prominent results demonstrate that the smart integration of key parameters in flight design can bring the root mean square errors (RMSE) down to 52.43 cm without the need to include GCPs. However, using GCPs with an appropriate distribution culminates in RMSE of 33.59 cm, which means 35.93% better performance. This study highlights the impacts of optimal distribution in GCP on nDSM accuracy, as well as the strategy of using images extracted from the combination of two flight strips with different altitudes and high overlap when local GCP is inaccessible, was found to be beneficial for increasing the overall nDSM accuracy.
... When suffering from serious natural disasters, [33] showed that, instead of communication vehicles which are heavy and greatly affected by bumpy roads, using UAVs is more likely to keep local networks connected in a cost-effective way. In practical business, one company-DJI UAV-has shown that the battery capacity of a normal UAV is 5870 mAh, the speed of a UAV can reach 15 m/s, and the longest flight of 30 min can be achieved [34]. This description indicates that the performance of a UAV in data collection is better than that of terrestrial vehicles, especially when road conditions are complex. ...
Article
In wireless rechargeable sensor networks, mobile vehicles (MVs) combining energy replenishment and data collection are studied extensively. To reduce data overflow, most recent work has utilized more vehicles to assist the MV to collect buffered data. However, the practical network environment and the limitations of the vehicle in the data collection are not considered. UAV-enabled data collection is immune to complex road environments in remote areas and has higher speed and less traveling cost, which can overcome the lack of the vehicle in data collection. In this paper, a novel framework joining the MV and UAV is proposed to prolong the network lifetime and reduce data overflow. The network lifetime is correlated with the charging order; therefore, we first propose a charging algorithm to find the optimal charging order. During the charging period of the MV, the charging time may be longer than the collecting time. An optimal selection strategy of neighboring clusters, which could send data to the MV, was found to reduce data overflow. Then, to further reduce data overflow, an algorithm is also proposed to schedule the UAV to assist the MV to collect buffered data. Finally, simulation results verified that the proposed algorithms can maximize network lifetime and minimize the data loss simultaneously.
... We used a limit of 40 000 key points and 4000 tie points for the alignment and construction of the sparse point cloud, followed by a noise filtering stage with the gradual selection method [41]. ...
Article
Full-text available
In this work, we apply close-range photogrammetry with unmanned aircraft systems to quantify erosion with milli-metric spatial resolution in agricultural plots. We evaluate the proposed methodology against the traditional runoff method on active plots. A database of digital elevation models was constructed with a ground sampling distance of 7 mm/pixel and maximum root-mean-square total error of 4.8 mm, which allowed the follow-up of soil erosion dynamics within the runoff plots for a period of three months. Good agreement of the photogrammetric estimations with respect to field measurements was observed, whereas it provides a more detailed spatial information that can be used for precise soil loss dynamic studies. Index Terms-DEMs of difference (DoD), digital elevation model (DEM), photogrammetry, runoff plots, soil erosion, unmanned aerial systems (UAS).
... Each photograph taken by the camera was precisely located by GPS using WGS84 geographic coordinates. The images were processed to obtain a sub-decimeter structure-for-motion (SFM) model following the workflow of Röder et al. (2017). ...
Article
Full-text available
The seismic cycle model is roughly constrained by limited offset data sets from the eastern Altyn Tagh fault with a low slip rate. The recent availability of high-resolution topographic data from the eastern Altyn Tagh fault provides an opportunity to obtain distinctly improved quantitative, dense measurements of fault offsets. In this paper, we used airborne light detection and ranging data and unmanned aircraft vehicle photogrammetry to evaluate fault offsets. To better constrain the large earthquake recurrence model, we acquired dense data sets of fault displacements using the LaDiCaoz_v2.1 software. A total of 321 offset measurements below 30 m highlight two new observations: (1) surface-slip of the most recent earthquake and multiple events exhibit both short-wavelength (m-scale) and long-wavelength (km-scale) variability; and (2) synthesis of offset frequency analysis and co­efficient of variation indicate regular slip events with ~6 m slip increment on fault segments to the west of the Shulehe triple junction. The distribution of offsets and paleoseismological data reveal that the eastern Altyn Tagh fault exhibits characteristic slip behavior, with the characteristic slip of ~6 m and a recurrence period ranging from 1170 to 3790 years. Paleoearthquake recurrence intervals and slip increments yield mean horizontal slip-rate estimates of 2.1–2.6 mm/yr for fault segments to the west of the Shulehe triple junction. Assuming a 10 km rupture depth and a 30 GPa shear modulus, we estimated a characteristic slip event moment magnitude (Mw) of ~7.6. Finally, we discuss the interaction mechanism between Altyn Tagh fault (strike fault) and the NW-trending thrust faults (reverse faults) that caused the sudden decrease of sinistral slip rate at the Shulehe and Subei triple junctions; our results support the eastward “lateral slip extrusion” model.
... The flights were performed using an 80% lateral and longitudinal overlap [33,42] and included a double collection flight plan, where the area is flown twice with perpendicular flight lines to increase the number of possible camera views [42,43]. The image collection was performed on the same day, at the same time as the field PHA measurements, therefore there were no differences in the creek and water level. ...
Article
Physical Habitat Assessments (PHA) are useful to characterize and monitor stream and river habitat conditions, but can be costly and time-consuming. Alternative methods for data collection are getting attention, such as Unmanned Aerial Vehicles (UAV). The objective of this work was to evaluate the accuracy of UAV-based remote sensing techniques relative to ground-based PHA measurements, and to determine the influence of flight altitude on those accuracies. A UAV quadcopter equipped with an RGB camera was flown at the altitudes of 30.5 m, 61.0 m, 91.5 m and 122.0 m, and the metrics wetted width (Ww), bankfull width (Wbf) and distance to water (Dw) were compared to field PHA. The UAV-PHA method generated similar values to observed PHA values, but underestimated distance to water, and overestimated wetted width. Bankfull width provided the largest RMSE (25-28%). No systematic error patterns were observed considering the different flight altitudes, and results indicated that all flight altitudes investigated can be reliably used for PHA measurements. However, UAV flight at 61 m provided the most accurate results (CI = 0.05) considering all metrics. All UAV parameters over all altitudes showed significant correlation with observed PHA data, validating the use of UAV-based remote sensing for PHA.
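The 80% overlap mentioned in the citation context above translates into flight-line and trigger spacing via the image footprint. Here is a small worked example, assuming nominal Phantom 3 Professional camera values (3.61 mm focal length, 6.17 mm x 4.63 mm sensor); flight-planning apps perform the same geometry internally.

    # Footprint and spacing for a nadir flight with a given overlap
    focal   = 3.61e-3   # m, nominal focal length (Phantom 3 Professional, FC300X)
    sens_w  = 6.17e-3   # m, nominal sensor width (1/2.3" CMOS)
    sens_h  = 4.63e-3   # m, nominal sensor height
    alt     = 61.0      # m, flight altitude (one of the altitudes tested above)
    overlap = 0.80      # 80 % lateral and longitudinal overlap

    foot_w = alt * sens_w / focal           # across-track footprint, ~104 m
    foot_h = alt * sens_h / focal           # along-track footprint, ~78 m
    line_spacing  = foot_w * (1 - overlap)  # spacing between flight lines, ~21 m
    photo_spacing = foot_h * (1 - overlap)  # spacing between triggers, ~16 m
    print(round(line_spacing, 1), round(photo_spacing, 1))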
Article
Aim. Accurate knowledge of the extent and local distribution of pollution plays a key role in many areas of life. Method. Although there are many well-known and generally accepted methods for obtaining the intended data, these methods do not give satisfactory results when the exact parameters of pollution must be determined quickly in a relatively small area (e.g. an industrial zone of several square kilometers, a residential area, etc.) and changes in these parameters must be expressed numerically. Small UAVs (fixed-wing or rotary-wing) were equipped with sensitive detectors for gamma rays and polluting gases, with flight coordinates assigned to the measured data. Such combined data sets make it possible to determine the distribution of radiation or air-polluting gases. Using this method, it is possible to identify and localise illegally stored or illegally released gamma-ray-emitting materials, to continuously monitor pollution caused by chemical disasters and to determine the spatial distribution of pollution. Results. The article presents systems based on practical experiments which, in the case of a gamma detector, allow the localisation of objects with low radiation doses along with a high-quality map of gamma radiation in a specific area, and, in the case of gas sensors, the visualisation of the spatial distribution of a polluting gas. The method is used primarily in the field to detect gamma emitters with low activity or to analyse the emission of pollutants from industrial facilities. Conclusion. The combination of spatial coordinates with remote sensing data comprises an effective measurement method. The developed system is generally applicable to mobile platforms equipped with sensors. The systems are designed to provide fast, efficient and reliable measurements that can be used for both detection and control. The type of pollutants to be measured depends on the sensors used. The experiments also indicate that, when replacing the sensors used, it may be necessary to adapt the processing of the measured data to the characteristics of the particular sensor; in general, however, data processing and visualisation of the results can be carried out in practice.
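Gridding georeferenced point measurements into a pollution map of this kind can be sketched with plain NumPy: bin the readings into regular cells and average per cell. The log-file name, column layout and cell size below are assumptions.

    import numpy as np

    # Hypothetical log: projected x, y coordinates (m) and one sensor reading per row
    x, y, value = np.loadtxt("sensor_log.csv", delimiter=",", unpack=True)

    cell = 5.0  # grid resolution in metres (assumption)
    xe = np.arange(x.min(), x.max() + cell, cell)
    ye = np.arange(y.min(), y.max() + cell, cell)
    sums,   _, _ = np.histogram2d(x, y, bins=(xe, ye), weights=value)
    counts, _, _ = np.histogram2d(x, y, bins=(xe, ye))
    with np.errstate(invalid="ignore"):
        grid = sums / counts   # mean reading per cell; NaN where nothing was measured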
Article
Ecological remote sensing is being transformed by three-dimensional (3D), multispectral measurements of forest canopies by unmanned aerial vehicles (UAV) and computer vision structure-from-motion (SFM) algorithms. Yet applications of this technology have outpaced understanding of the relationship between collection method and data quality. Here, UAV-SFM remote sensing was used to produce 3D multispectral point clouds of temperate deciduous forests at different levels of UAV altitude, image overlap, weather, and image processing. Error in canopy height estimates was explained by the alignment of the canopy height model to the digital terrain model (R² = 0.81) due to differences in lighting and image overlap. Accounting for this, no significant differences were observed in height error at different levels of lighting, altitude, and side overlap. Overall, accurate estimates of canopy height compared to field measurements (R² = 0.86, RMSE = 3.6 m) and LIDAR (R² = 0.99, RMSE = 3.0 m) were obtained under optimal conditions of clear lighting and high image overlap (>80%). Variation in point cloud quality appeared related to the behavior of SFM 'image features'. Future research should consider the role of image features as the fundamental unit of SFM remote sensing, akin to the pixel of optical imaging and the laser pulse of LIDAR.
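The accuracy figures reported above (R², RMSE) follow standard definitions; below is a minimal sketch with toy values, assuming R² is computed as the squared Pearson correlation between field and UAV heights, a common convention in such comparisons.

    import numpy as np

    field = np.array([18.2, 21.5, 24.9, 19.7, 23.1])  # field heights, m (toy values)
    uav   = np.array([16.1, 20.2, 22.0, 17.5, 21.4])  # UAV-SFM heights, m (toy values)

    rmse = np.sqrt(np.mean((uav - field) ** 2))
    r2 = np.corrcoef(field, uav)[0, 1] ** 2
    print(round(rmse, 2), round(r2, 2))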
Conference Paper
Abstract: For investigations of regeneration in bark beetle-infested stands in Central Europe, monitoring of the forest structure is of particular importance. Conventionally, the structural parameters in small-scale areas are collected through time- and personnel-intensive field inventories. In recent years, image acquisition by means of Unmanned Aerial Vehicles (UAV) has come to the fore as a new form of small-scale forest remote sensing. This work examines whether a UAV-based forest inventory is suitable as a replacement for traditional field methods in these regeneration stands. The results show that, in the small-scale study areas of the regeneration stands, the UAV inventory can only keep up with the field inventory to a limited extent in terms of quality and quantity. From a purely economic point of view, however, its advantages outweigh those of the field method.
Gatewing. "Software Workflow AgiSoft PhotoScan Pro 0.9.0: For use with Gatewing X100 UAS." 2017.
Mallison, Heinrich. "Photogrammetry tutorial 11: How to handle a project in Agisoft Photoscan." Online tutorial, 2015. https://dinosaurpalaeo.wordpress.com/2015/10/11/photogrammetry-tutorial-11-howto-handle-a-project-in-agisoft-photoscan/ (accessed March 22, 2017).
Georeferencing is already possible with three GCPs; in the example project, six GCPs were available. Once the 3D position of all markers is known, they can be made visible via the Show Markers button (cf. Figure 31).
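Loading the GCP coordinates can also be scripted rather than typed in by hand. Below is a sketch assuming the Metashape Python API (exact names vary between versions), a hypothetical gcps.csv with name, x, y, z columns and an example CRS; placing the markers in the individual images is still done in the GUI as described.

    import Metashape

    doc = Metashape.Document()
    doc.open("project.psx")
    chunk = doc.chunk

    # Example CRS; adjust to the datum of your GCP survey
    chunk.crs = Metashape.CoordinateSystem("EPSG::25832")
    chunk.importReference("gcps.csv",
                          format=Metashape.ReferenceFormatCSV,
                          columns="nxyz", delimiter=",",
                          create_markers=True)
    doc.save()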
Since the Mild depth filter does not remove all outliers completely, it is necessary to select and eliminate the remaining outliers manually using the selection function (cf. Figure 49).
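The depth-filtering step referred to here can also be reproduced in the scripting interface. A minimal sketch, assuming the Metashape 2.x API and a hypothetical project file:

    import Metashape

    doc = Metashape.Document()
    doc.open("project.psx")
    chunk = doc.chunk

    # Depth maps with the Mild filter, as in the tutorial step above
    chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
    chunk.buildPointCloud()   # Metashape 2.x; version 1.x uses chunk.buildDenseCloud()
    doc.save()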
If an externally provided DTM is available, for example, the DSM can be normalized. In normalized surface models it is then possible to read off the height above ground. nDSMs can therefore be used, for example, to capture tree heights or building heights.
To do this, the .xyz file must first be converted into a raster format (ArcGIS: ArcToolbox → Conversion Tools → To Raster; QGIS: Raster menu → Conversion → Rasterize (Vector to Raster)). A prerequisite for the subtraction is that both data sources (DSM and DTM) are in the same geodetic datum.
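The same conversion and subtraction can be scripted with GDAL, whose XYZ driver reads regularly gridded .xyz files directly. The sketch below assumes hypothetical file names and that DSM and DTM share the same datum, extent and cell size.

    from osgeo import gdal

    # Convert the gridded .xyz export to GeoTIFF
    gdal.Translate("dsm.tif", "dsm.xyz")

    dsm = gdal.Open("dsm.tif")
    dtm = gdal.Open("dtm.tif")    # externally provided DTM, same datum and grid
    ndsm = dsm.ReadAsArray().astype("float64") - dtm.ReadAsArray().astype("float64")

    drv = gdal.GetDriverByName("GTiff")
    out = drv.Create("ndsm.tif", dsm.RasterXSize, dsm.RasterYSize, 1, gdal.GDT_Float32)
    out.SetGeoTransform(dsm.GetGeoTransform())   # nDSM inherits the DSM georeferencing
    out.SetProjection(dsm.GetProjection())
    out.GetRasterBand(1).WriteArray(ndsm)
    out.FlushCache()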
However, manned lidar surveys are very expensive and only become economical for large areas. Acquisitions with a low-budget UAV, by contrast, are inexpensive and allow a temporally very flexible capture of the objects to be recorded. Moreover, owing to the very low flight altitude, UAV acquisitions achieve considerably higher spatial resolutions than lidar surveys.
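The resolution advantage can be quantified via the ground sampling distance, GSD = pixel pitch x flight altitude / focal length. A short calculation with nominal Phantom 3 Professional camera values (1/2.3" sensor of ~6.17 mm width, 4000 px image width, 3.61 mm focal length):

    # GSD = pixel_pitch * altitude / focal_length
    pixel_pitch = 6.17e-3 / 4000   # m per pixel on the sensor (nominal values)
    focal = 3.61e-3                # m, nominal focal length

    for alt in (30, 60, 100):      # flight altitudes in metres
        gsd_cm = pixel_pitch * alt / focal * 100
        print(alt, "m:", round(gsd_cm, 1), "cm/px")   # ~1.3, ~2.6, ~4.3 cm/px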