High-Resolution Trench Photomosaics from Image-Based
Modeling: Workflow and Error Analysis
by Nadine G. Reitman, Scott E. K. Bennett,* Ryan D. Gold,
Richard W. Briggs, and Christopher B. DuRoss
Abstract Photomosaics are commonly used to construct maps of paleoseismic
trench exposures, but the conventional process of manually using image-editing soft-
ware is time consuming and produces undesirable artifacts and distortions. Herein, we
document and evaluate the application of image-based modeling (IBM) for creating
photomosaics and 3D models of paleoseismic trench exposures, illustrated with a
case-study trench across the Wasatch fault in Alpine, Utah. Our results include a
structure-from-motion workflow for the semiautomated creation of seamless, high-
resolution photomosaics designed for rapid implementation in a field setting. When
compared with conventional manual methods, the IBM photomosaic method provides
a more accurate, continuous, and detailed record of paleoseismic trench exposures in
approximately half the processing time and 15%–20% of the user input time. Our error
analysis quantifies the effect of the number and spatial distribution of control points on
model accuracy. For this case study, an ∼87 m² exposure of a benched trench photo-
graphed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square
error (rmse) with as few as six control points. Rmse decreases as more control points
are implemented, but the gains in accuracy are minimal beyond 12 control points.
Spreading control points throughout the target area helps to minimize error. We pro-
pose that 3D digital models and corresponding photomosaics should be standard prac-
tice in paleoseismic exposure archiving. The error analysis serves as a guide for future
investigations that seek balance between speed and accuracy during photomosaic and
3D model construction.
Online Material: Image-based modeling workflow for paleoseismic trench photo-
mosaics, 3D trench model, example photomosaic log, table of error analysis data, and
Python script for exporting photomosaics in a vertical plane.
Introduction
Active fault studies for seismic-hazard analysis typically
yield information on the timing, displacement, rupture ex-
tent, and magnitude of past large earthquakes (e.g., Personius
et al., 2007; McCalpin, 2009; Scharer et al., 2014). Such
studies commonly include detailed surface maps for site
characterization and subsurface trench investigations to as-
sess earthquake histories. Surface maps greatly benefit from
high-resolution (≤1 m) topographic data (Fig. 1a), whereas
maps (or logs) of geologic relationships exposed in trench
walls (Figs. 1b,c and 2a) benefit from detailed (∼1 cm res-
olution) basemaps composed of overlapping photos (Figs. 2b
and 3a). Stratigraphic and structural contacts between sub-
surface strata exposed in trenches are typically drafted di-
rectly onto a composite image of trench-wall photographs
(photomosaic) manually constructed using image-editing
software (conventional photomosaic) (Fig. 3a). However,
conventional photomosaics are time consuming to create,
and abrupt changes in contrast and tone at seams between
photographs limit their quality and usability. Furthermore,
conventional photomosaics do not record the 3D character
of the trench exposures. Here, we address these deficiencies
with a semiautomated workflow for creating high-resolution
photomosaics using structure-from-motion (SFM) image-
based modeling (IBM). This method produces accurate,
seamless photomosaics and 3D models in a fraction of
the time required to produce mosaics with conventional
image-editing software.
*Now at U.S. Geological Survey, Department of Earth and Space Scien-
ces, University of Washington, Box 351310, Seattle, Washington 98195.
Bulletin of the Seismological Society of America, Vol. 105, No. 5, pp. , October 2015, doi: 10.1785/0120150041
Image-Based Modeling
IBM is the general process of constructing 3D models
from collections of 2D photographs (Snavely, Garg, et al.,
2008, and references therein). SFM is a form of IBM devel-
oped in the computer vision community to reconstruct
unknown 3D scene structure, camera positions, and orienta-
tions using feature-matching algorithms (Snavely et al.,
2006; Snavely, Garg, et al., 2008; Snavely, Seitz, et al.,
2008). Hereafter, we use IBM to refer to the general process
and SFM to refer specifically to the feature-matching step.
SFM is similar to traditional stereoscopic photogrammetry
in that it relies on overlapping images of a still object or
scene to reconstruct 3D scene structure. An important differ-
ence, however, is that SFM operates on a set of unordered
photographs without the need for organized image acquisi-
tion, precalibration of the camera, and control points with
known locations, as in traditional photogrammetry. Com-
bined with a multiview stereo algorithm to build dense point
clouds, IBM is capable of constructing high-resolution
(<1 cm) photorealistic 3D models and seamless photomo-
saics with only a consumer-grade camera and inexpensive or
open-source software. New software packages make the
Figure 1. Typical setting of a paleoseismic trench investigation of a normal fault (dashed line). (a) Mountain front-scale perspective view,
showing surface topography derived from airborne light detection and ranging (lidar) data. (b) Site-scale perspective view with trench ex-
cavation (inside box) perpendicular to the fault scarp. (c) Trench-scale schematic view of subsurface stratigraphy and structures exposed in
the trench. The land surface is visible behind the trench. The color version of this figure is available only in the electronic edition.
process fast, user-friendly, and automated. See Bemis et al.
(2014) for examples of commercial and open-source soft-
ware packages.
IBM applications in the geosciences have recently be-
come widespread and well-documented (e.g., Harwin and
Lucieer, 2012; James and Robson, 2012; Westoby et al.,
2012; Fonstad et al., 2013; Lucieer et al., 2013; Bemis et al.,
2014; Javernick et al., 2014; Johnson et al., 2014; Kaiser
et al., 2014; Tavani et al., 2014). IBM techniques offer a
low-cost and portable alternative to airborne and terrestrial
light detection and ranging (lidar) surveys to make high-res-
olution topographic datasets (e.g., Castillo et al., 2012;
James and Robson, 2012; Bemis et al., 2014; Johnson et al.,
2014). Bemis et al. (2013, 2014) illustrated the potential for
using IBM to generate photomosaics and 3D models for com-
plex paleoseismic exposures from pre-existing trench photo-
graphs. We build on this prior work by implementing an IBM
workflow in a start-to-finish paleoseismic trench study with
precise 3D control and rigorous error analysis.
Paleoseismic Trench Investigations
Paleoseismic trench investigations are the primary
approach to estimate past earthquake timing, recurrence inter-
vals, per-event displacements, and fault-slip rates for seismic-
hazard analysis. Trenches typically expose the fault zone,
sedimentary units displaced by faulting, scarp-derived collu-
vium, and secondary faults and fractures (e.g., McCalpin,
2009). Detailed mapping of structure and stratigraphy within
the trench is essential for reconstructing stratigraphic and geo-
metric relations that are used to infer fault rupture history. In
modern paleoseismic studies, stratigraphic contacts, shear
zones, and sample locations are logged directly onto photo-
mosaic basemaps (e.g., Personius et al., 2007). Required res-
olution and scale of the photomosaic basemaps depend on the
magnitude of the offsets and the grain size of the substrate
material. A trench that exposes centimeter- to decimeter-scale
displacements of fine-grained deposits (e.g., Scharer et al.,
2014) will require higher resolution and finer-scale basemaps
than a trench that exposes decimeter- to meter-scale displace-
ments in coarse material (e.g., Personius et al., 2007).
Conventional trench-wall photomosaics (Fig. 3a) are
created manually and require a string grid overlain on the
trench walls, careful image acquisition, and several hours
of labor-intensive image manipulation. In the conventional
workflow, images are acquired systematically such that each
photograph spans one grid rectangle and is shot orthogonal
to the trench wall to minimize geometric distortion. Each
image is then cropped, color-balanced, and warped in image-
editing software and compiled into a composite image of the
trench wall.
The primary drawbacks to the conventional photomosaic
method are the time required to acquire and edit individual
photographs and the poor visual and color continuity of the
final photomosaic (Haddad et al., 2012;Bemis et al.,2014).
For example, compiling a photomosaic for an ∼250 m²
benched trench exposure on the Wasatch fault (Bennett et al.,
2014) took >50 hrs of user input. Frequently observed prob-
lems with the conventional approach include distortion at the
margins of individual images and parallax exaggerated by 3D
irregularities of trench walls (e.g., holes, protruding clasts;
Figure 2. (a) Looking east up the case-study benched trench across the Wasatch fault in Alpine, Utah. (b) Same view of 3D model of the
south wall showing the aligned photographs (rectangles) and their look directions (vectors). Images are approximately orthogonal to the
trench walls and acquired from three levels: in the base of the trench, on the opposite bench, and from the ground surface. The tripods are
1.5 m tall in both panels. The color version of this figure is available only in the electronic edition.
Fig. 3a, inset box). These issues result in misalignment or
duplication of features, such as large clasts, and obvious
distortion of string grids. Finally, the difficulties inherent in
manually color-balancing and aligning images result in photo-
mosaics with obvious seams and poor color continu-
ity (Fig. 3a).
Recently, terrestrial laser scanning (TLS) has been used
as an alternative to conventional photomosaics for producing
high-resolution 3D georeferenced models of trenches and
outcrops (Haddad et al., 2012; Minisini et al., 2014). When
coupled with an onboard or mounted camera, TLS produces
photorealistic trench models that are more detailed and
Figure 3. Comparison of photomosaics of the Wasatch fault zone exposed in the lower south wall of the Alpine trench. Images generated
using (a) conventional photomosaic methodology via manual processing and tiling of individual images in image-editing software (Adobe
Photoshop) and (b) image-based modeling (IBM) methodology using automated software (Agisoft PhotoScan, see Data and Resources). The
duplicated clast highlighted in (a) inset box is represented accurately as a single clast in (b). Images in (a) acquired 4 June 2014 and (b) 26
May 2014. The color version of this figure is available only in the electronic edition.
accurate than conventional photomosaics (Haddad et al.,
2012) and require little postprocessing. However, the equip-
ment required for TLS is costly, fragile, bulky, and requires
external power, and the technique can be time-consuming
to implement in the field (Castillo et al., 2012; James and Rob-
son, 2012). For these reasons, the TLS method can be prohibi-
tively expensive and/or logistically challenging in remote
field sites.
In this study, we demonstrate the utility of IBM in
paleoseismic trench investigations to rapidly produce high-
resolution trench photomosaic basemaps in a semiautomated
workflow in a field setting. We evaluate the speed and accu-
racy of the IBM technique to construct a detailed photomo-
saic and a photorealistic 3D model of a paleoseismic trench
exposure on the Wasatch fault in Alpine, Utah (Bennett et al.,
2015). We also explore the relationship between the number
and distribution of ground control points (GCPs) and model
accuracy.
Image-Based Modeling Methods
Field Data Collection
IBM 3D models and corresponding 2D photomosaics
depend on a sufficient number of overlapping images and
accurate control points in order to correctly preserve the 3D
geometry and location of the target area. Resolution of the
photomosaic is a function of camera sensor resolution and the
distance between the camera and the target. We found that a
consumer-grade camera with a 14 megapixel sensor and pho-
tographs taken 1.5–7 m from the trench wall are sufficient to
distinguish very coarse sand and larger material on the photo-
mosaic. Images should overlap 50%–60%, be taken
roughly orthogonal to the target, extend beyond the area of
interest (e.g., include the trench floor and the ground surface
above the trench), and be captured from multiple vertical and
horizontal positions. Incorporating a few photographs taken
at oblique angles can reduce systematic error (James and
Robson, 2014). Accuracy of the 3D model generally in-
creases with image density; however, at a minimum, each
point in the trench exposure should be visible in at least three
photographs. It is advisable to take excess photographs,
especially if working in an ephemeral setting.
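The relationship between camera resolution, viewing distance, and photomosaic resolution can be sketched as a simple ground-sample-distance estimate. The pixel pitch below is an illustrative assumption for a small-sensor consumer camera, not a value reported in this study:

```python
def ground_sample_distance(distance_m, focal_length_mm, pixel_pitch_um):
    """Approximate size (mm) of one image pixel projected onto the trench
    wall: gsd = distance * pixel_pitch / focal_length, assuming the wall is
    roughly orthogonal to the camera's look direction."""
    return (distance_m * 1000.0) * (pixel_pitch_um / 1000.0) / focal_length_mm

# Assumed values: an ~2.8-micron pixel pitch and the short (11 mm) end of the lens.
for d in (1.5, 7.0):
    print(d, "m ->", round(ground_sample_distance(d, 11.0, 2.8), 2), "mm/pixel")
```

Under these assumed camera parameters, the 1.5–7 m viewing distances predict roughly sub-millimeter to millimeter-scale pixel footprints on the wall, consistent with resolving very coarse sand.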
Geometric accuracy of the model and photomosaic de-
pends on incorporating control points in either relative or ab-
solute coordinates. Our process for establishing precise
survey control at the trench site has three steps: (1) conduct
Real Time Kinematic Global Positioning System (RTK GPS)
surveys of one or more tripod locations at the ground surface
(Fig. 2a) and a minimum of three control points on the
ground surface surrounding the trench site; these points
are postprocessed to obtain coordinates in an absolute coor-
dinate system (e.g., World Geodetic System 1984 [WGS 84]
horizontal, North American Vertical Datum of 1988
[NAVD88] vertical), if desired; (2) use a total station at
the tripod location(s) to set a grid of points (e.g., nails) into
each trench wall in relative coordinates (i.e., horizontal and
vertical trench units); and (3) use the total station to record
locations and elevations of select trench grid points to be
used as control points for the IBM model. Optionally, after
step 2, absolute coordinates and elevations obtained from
postprocessed Global Positioning System (GPS) data can be
applied to the total station location(s) to place the trench
model in an absolute coordinate system during step 3. Re-
cording coordinates of many (∼50) points enables rigorous
error analysis, in which some of the points are incorporated
into the model as control points and others are used as check
points to test model accuracy.
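The hold-out design described above can be sketched as a random partition of the surveyed grid points into control and check sets. The point labels and counts here are illustrative, not the surveyed data:

```python
import random

def split_points(surveyed, n_control, seed=0):
    """Partition surveyed nail-head labels into control points (used to
    optimize the model) and check points (held out to test accuracy)."""
    rng = random.Random(seed)
    labels = sorted(surveyed)
    control = set(rng.sample(labels, n_control))
    check = [p for p in labels if p not in control]
    return sorted(control), check

# ~50 surveyed nail heads per wall, labeled in trench coordinates like "19H 5V"
points = {f"{h}H {v}V" for h in range(10) for v in range(5)}
control, check = split_points(points, n_control=12)
print(len(control), "control points,", len(check), "check points")
```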
We demonstrate this workflow with a case study of a
normal-fault trench across the Wasatch fault in Alpine, Utah.
For this 32-m-long, 3-m-deep, benched trench (Fig. 2), we
used a Magellan ProMark 500 RTK GPS system and a TOP-
CON GPT-7503 Pulse total station to establish site survey
control and place a 1 × 1 m grid of nails in the trench walls.
Measurement distances between total station and trench-wall
control points ranged from 5 to 25 m. The nail heads were
marked with a +, and nails were labeled with relative (trench)
coordinates (e.g., 19H 5V). Five ground-surface control
points and two total station tripod locations were established
and recorded in absolute coordinates with the RTK GPS
system. After creating the trench-wall nail grid in relative
coordinates, the total station was placed in the Universal
Transverse Mercator (UTM) coordinate system using its GPS-
established location, with corrections obtained from the On-
line Positioning User Service (OPUS) of the National Geodetic
Survey (see Data and Resources). UTM coordinates of trench-
wall control points were recorded with the total station by
shooting the center of each nail head in "no-prism" mode.
In this trench exposure, establishing the grid and recording
coordinates of nail heads took two to three scientists approx-
imately eight hours per wall and resulted in >50 control
points on each wall.
Images were acquired using a Nikon AW 1 mirrorless
camera with an 11–27.5 mm (30–74 mm, 35-mm equivalent) lens,
internal GPS, and 14 megapixel resolution sensor. The camera
was set to automatic mode, and images were acquired with
ample ambient light (no flash), avoiding direct sunlight and
shadows. The photographer stood in the base of the trench
to photograph the lowest wall and on the bench to photograph
the opposite upper wall, in both cases capturing images from
different heights while maintaining a roughly orthogonal ori-
entation to the trench wall (Figs. 2b and 4a). Photographs were
also taken from the ground surface (Fig. 2b) to capture the top
portion of the opposite upper wall and the ground surface. Im-
ages were taken from 1.5 to 7 m away, as dictated by trench
geometry. Optical zoom was used when >3 m away. Image
acquisition of both walls took less than two hours and resulted
in ∼1300 photographs, with a goal of total and redundant im-
age coverage from multiple angles and distances. See the
workflow in the electronic supplement to this article for a
more detailed description of field methods.
Processing
Automated processing of the trench photographs in IBM
software yields a 3D model of the trench exposure. Our
processing steps are based on Agisoft PhotoScan Profes-
sional Edition v.1.1 (PhotoScan) software (see Data and Re-
sources). Out-of-focus photographs are discarded prior to
processing. No other preprocessing or calibration is required.
There are four primary steps (Fig. 4) in the general IBM
workflow to transform an uncalibrated, unordered photoset
into a 3D photorealistic model: (1) align photos, (2) build
dense point cloud, (3) grid point cloud into a 3D surface
(mesh), and (4) add color (texture) from original photographs.
Step 1 uses SFM feature-matching algorithms to detect
feature points in each photograph, match feature points across
photos, and align the photographs. The outputs from step 1 are
aligned photographs (camera positions and orientations,
Fig. 4a); internal camera-calibration parameters, which in-
clude focal lengths and radial and tangential distortion coef-
ficients; and a sparse, colorized point cloud (Fig. 4b).
Step 2 uses a multiview stereo algorithm to make a dense
point cloud (Fig. 4c) from the aligned photographs. The dense
point cloud comprises millions of points with 3D locations
and colors. The dense point cloud is usually two to three or-
ders of magnitude denser than the sparse point cloud.
Step 3 grids the point cloud into a 3D surface or mesh
(Fig. 4d). Step 4 uses the original photographs to drape low-
resolution texture (color) over the 3D surface (Fig. 4e), result-
ing in a photorealistic 3D model. The original photographs
can also be mosaicked and exported at high resolution as a
2D photomosaic basemap for trench logging. The resolution
of the exported photomosaic depends on the resolution of the
input images and can be downsampled during export. See the
Agisoft User Manual (Data and Resources) and Verhoeven
(2011) for more information on the specific algorithms em-
ployed by PhotoScan.
In practice, the general workflow described above is
customized for each project, and control points are required to
make geometrically accurate and/or georeferenced models.
Adding control points after photograph alignment (step 1)
allows the user to take advantage of automated control point
placement in IBM software, considerably expediting the proc-
ess. After control points are entered, the sparse point cloud can
be updated to a scaled and/or georeferenced system (linear
transformation) and optimized (nonlinear transformation)
based on control point coordinates and camera-calibration
parameters (output from step 1) to reduce nonlinear warping.
To obtain accurate models, it is also important to edit the point
cloud to remove points with high error and those beyond the
area of interest. See the step-by-step workflow available in
the electronic supplement for more details about processing.
The general workflow is altered in one important way to
make rapid trench photomosaic basemaps: instead of
Figure 4. Results of Agisoft PhotoScan processing steps. The tripod legs are 1.5 m tall. (a) Aligned photographs. The squares represent
photographs, and vectors represent their look direction. (b) Sparse point cloud. (c) Dense point cloud. (d) Triangular irregular network (TIN)
3D mesh based on downsampled dense point cloud. (e) Final model with photomosaic derived from automated blending of original photo-
graphs draped over the 3D mesh, creating a photorealistic 3D model. The color version of this figure is available only in the electronic edition.
building a dense point cloud, we based the 3D model on the
sparse point cloud. Creation of the dense point cloud can be
omitted when rapid, 2D photomosaic basemaps provide
sufficient resolution for trench logging in the field and when
3D surface topography is of secondary importance. We
skipped building the dense point cloud because this step is
computationally intensive, time-consuming, and does not
improve resolution of the exported photomosaic, which de-
pends solely on resolution of the input photographs. Building
the dense point cloud may enhance accuracy of the photo-
mosaic, but that greater level of accuracy is often not neces-
sary for trench photomosaics, and the increased processing
time should be weighed against the gains in accuracy and
project needs. Conversely, if the desired outcome is a high-
resolution 3D model of a trench (or high-resolution topog-
raphy), then building the dense point cloud remains an
essential step in the process.
For this case study, the north and south trench walls
were processed as separate projects, each covering ∼87 m²
of trench exposure. The south wall model comprises 689
photographs and was optimized with 17 control points. The
sparse point cloud has 2.28 million points, with a spatially
variable average density of 200 points/m². Pixel resolution
of the photomosaic is 0.45 mm, which is more than sufficient
to resolve structural and stratigraphic features of interest dur-
ing trench logging. The north wall model is built from 583
photographs and 15 control points, and the sparse point
cloud has 1.98 million points, with an average density of
125 points/m². Resolution of the photomosaic is 0.32 mm.
For trench basemaps, the photomosaics were exported at
0.7 mm resolution and overlain with a 1 m² grid in a Geo-
graphic Information System (GIS). Processing time for one
trench wall using PhotoScan v.1.0 on a laptop computer with
16 GB RAM and 2.7 GHz processor was ∼9 hrs, including
∼3 hrs of user input. See the electronic supplement to
this article for the complete workflow, with a detailed de-
scription of the settings used, the time required for each step,
a discussion of the photomosaic exporting method, and a
script for exporting photomosaics projected onto a vertical
plane. We include an example of a 3D, rotatable trench
model (Fig. S1) and an example photomosaic basemap
used for logging (Fig. S2).
Error Analysis
Sources of Error
Accuracy of an IBM model depends on the quality of the
photoset; geometry of the target; and number, spatial distribu-
tion, and precision of the control points. Here, we evaluate and
quantify the accuracy of check points from the SFM sparse
point cloud model of the case-study trench in terms of absolute
and relative positions. Absolute position, or georeferencing
accuracy, is ultimately limited by uncertainty in the RTK GPS
survey (measurement error and postprocessing error reported
by OPUS), which is ∼4.5 cm. We disregard GPS
uncertainty in quantifying internal geometric model accuracy
(relative positions), because the GPS survey data are used only
to locate the total station in an absolute coordinate system
(UTM). In the case-study trench (Fig. 2), control point loca-
tions (nail heads) for each wall are recorded without moving
the total station, so relative positions are independent of any
error in the absolute location of the total station.
Relative accuracy of points in the sparse point cloud is
more important than absolute positional accuracy for paleo-
seismic trench studies and is subject to three sources of error:
control point measurement, user error, and error in image
alignment and the recovery of camera parameters during the
alignment step. Control point measurement was performed
with a total station that has a reported measurement accuracy
of 1 cm for the distances and settings used in this study
(TOPCON, 2007, see Data and Resources). User error exists
at each stage of the process: for example, typical user error
occurs during target layout or while sighting the total station
on each target (+ on nail heads). User error can be com-
pounded during model construction when the user manually
refines control point placement on each photo. We estimate
that typical user error in total station sighting and control
point placement is ∼1 cm, but it may be sporadically higher.
Geometric accuracy is most often affected by errors in
image alignment and the recovery of camera parameters.
This processing step can cause systematic error that may ex-
ceed instrument and user error. Problems with image align-
ment and camera models can lead to nonlinear deformation
in the point cloud, which cannot be corrected during the
seven-parameter linear transformation applied during geore-
ferencing (e.g., if using Agisoft, see Agisoft User Manual in
the Data and Resources). Nonlinear error due to camera
model recovery can be reduced by using camera precalibra-
tion to obtain more accurate lens-distortion models (James
and Robson, 2014, and references therein). Alternatively,
nonlinear error can be minimized by optimizing the sparse
point cloud using control points. Because local control points
(e.g., nail heads at grid intersections) are usually established
as standard practice when mapping paleoseismic trench ex-
posures, we focus on the use of control points for minimizing
nonlinear error. At least three control points are required to
georeference and scale (linear transform) or optimize (non-
linear transform) the point cloud. It is clear that more than
three control points are required to obtain high internal geo-
metric precision, but it is uncertain how many control points
are optimal.
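The seven-parameter linear (similarity) transformation mentioned above, i.e., three translations, three rotations, and one scale, can be estimated from three or more control points by least squares. The sketch below uses Umeyama's closed-form solution as an illustration; it is not the solver PhotoScan applies internally:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 7-parameter (scale, rotation, translation) transform
    mapping src -> dst, each an (n, 3) array of n >= 3 non-collinear points
    (Umeyama's method). This linear step cannot remove nonlinear warping in
    the point cloud, which is why optimization against control points is
    still needed."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                             # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Round-trip check with a synthetic rotation, scale, and shift
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
target = 2.0 * pts @ Rz.T + np.array([5.0, -1.0, 0.5])
s, R, t = fit_similarity(pts, target)
print(round(s, 6))  # recovers the applied scale of 2.0
```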
Methods
To assess how many control points are required to obtain
maximum internal geometric accuracy for a typical exposure
of one trench wall (∼87 m²), we optimized the sparse point
cloud of the Alpine trench south wall in 19 separate models,
varying the number and spatial distribution of control points
(Fig. 5). We evaluate the results (Fig. 6; Table S1) by
calculating the magnitude of the residual between the
observed and estimated 3D locations of points not used in
optimization (check points) using equation (1):
\mathrm{Residual}_i = \sqrt{(x_{\mathrm{obs}} - x_{\mathrm{est}})^2 + (y_{\mathrm{obs}} - y_{\mathrm{est}})^2 + (z_{\mathrm{obs}} - z_{\mathrm{est}})^2}.  (1)
Observed, or measured, locations are those recorded
with the total station in the field. Estimated locations are
those calculated for the check points based on the optimized
sparse point cloud. To determine error per model we calcu-
late the mean and median of the residuals for each model
(Fig. 6a; Table S1). We also calculate the root mean
square error (rmse) for each model and for models with the
same number of control points (Fig. 6b; Table S1), and an
error envelope of twice the rmse (shaded region in Fig. 6b).
To calculate rmse, we use equation (2):
\mathrm{rmse} = \sqrt{\frac{\sum_{i=1}^{n} \mathrm{Residual}_i^2}{n}}.  (2)
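Equations (1) and (2) translate directly into code. The coordinates below are illustrative, not measured values:

```python
import math

def residual(obs, est):
    """Equation (1): 3D distance between an observed (total station) and an
    estimated (model) check-point location, each an (x, y, z) tuple."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)))

def rmse(observed, estimated):
    """Equation (2): root mean square of the check-point residuals."""
    r = [residual(o, e) for o, e in zip(observed, estimated)]
    return math.sqrt(sum(x * x for x in r) / len(r))

# Two hypothetical check points with ~1-2 cm misfit (units of meters)
obs = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
est = [(0.01, 0.0, 0.0), (1.0, 1.02, 1.0)]
print(round(rmse(obs, est), 4))  # 0.0158
```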
We also compared pairs of models (Figs. 5 and 6a;
Table S1) with 10 (models I and J) and 12 (models K and L)
control points distributed throughout the model (scatter style)
or around the edges of the model (edges style) to evaluate the
effect that spatial distribution of control points has on check
point accuracy.
Results
The error analysis (Fig. 6; Table S1) indicates that
there is a correlation between the number of control points
implemented and check point accuracy: using progressively
more control points to optimize the sparse point cloud sub-
stantially reduces misfit in the model, but the relationship be-
tween additional control points and the reduction in error is
nonlinear. An exponential fit to the data (Fig. 6a) shows a
rapid decay in the mean residuals from three to six control
points. This exponential fit represents the average misfit
but does not capture the full range of residuals. For example,
all models with at least 10 control points scattered throughout
the target area have mean residuals <1 cm (Figs. 5 and 6b;
Table S1), but the data are not normally distributed (Fig. 6e,f),
and maximum residuals can reach 5–7 cm (Fig. 6b).
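An exponential decay of this kind can be summarized by linear least squares on the logarithm of the residuals, i.e., fitting residual ≈ a·exp(−b·n). The data points below are synthetic values shaped like the trend in Figure 6a, not the measured residuals:

```python
import numpy as np

def fit_exponential_decay(n_control, mean_residual):
    """Fit residual ~ a * exp(-b * n) by linear least squares on
    log(residual); a quick way to summarize how misfit decays as more
    control points are added."""
    n = np.asarray(n_control, float)
    y = np.log(np.asarray(mean_residual, float))
    slope, log_a = np.polyfit(n, y, 1)
    return np.exp(log_a), -slope

# Synthetic (number of control points, mean residual in cm) pairs
n = [3, 6, 10, 12, 20, 40]
res_cm = [2.5, 1.2, 0.9, 0.8, 0.7, 0.65]
a, b = fit_exponential_decay(n, res_cm)
```

A positive decay constant b confirms that misfit shrinks with added control points, while the flattening tail reflects the diminishing returns beyond roughly a dozen points.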
We use rmse to better understand variation in the data
and control point spatial distributions. The rmse analysis
(Fig. 6b) indicates rmse values also drop significantly, from
∼3 to 1.5 cm, as three to six control points are used. Models
with six or more control points have more consistent rmse
values of <2 cm. Although >95% of all residuals are within
an error envelope of twice the rmse (Fig. 6b), each model has
1–2 outliers with larger (up to 9.3 cm) residuals. This may be
due to variation in the quality of control point measurement
and placement (e.g., user error). The models with >25 con-
trol points were chosen to optimize the point cloud using
50% (n = 26–27), 75% (n = 40), and 100% (n = 53) of
the 53 available control points, in which the spatial distribu-
tion of control points was somewhat randomized. The mod-
els with >25 control points serve to highlight that some
measure of variability in check point accuracy is possible
even when using many control points (e.g., the observed in-
crease in rmse for model N; Fig. 6a,b). This variability arises
in part from the distribution and quality of the control points
used for optimization. Furthermore, because all 53 available
control points are used in optimization for model S (Fig. 6a,
b), this model represents the practical lowest obtainable limit
for check point accuracy in this case study.
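The rmse and the twice-rmse error envelope discussed above follow directly from the 3D check-point residuals. A minimal sketch, assuming a hypothetical set of residual magnitudes in centimeters (including one outlier):

```python
import numpy as np

# Hypothetical 3D check-point residuals (cm) for one optimized model;
# the 9.3 cm value stands in for an outlier like those noted in the text.
residuals = np.array([0.4, 0.7, 1.1, 0.9, 1.6, 0.5, 2.8, 1.2, 0.8, 9.3])

rmse = np.sqrt(np.mean(residuals**2))
envelope = 2.0 * rmse

# Fraction of check points falling within the 2*rmse error envelope,
# analogous to the shaded region in Fig. 6b.
within = np.mean(residuals <= envelope)
print(f"rmse = {rmse:.2f} cm, 2*rmse = {envelope:.2f} cm, "
      f"{100 * within:.0f}% of check points within envelope")
```

Note that a single large outlier inflates rmse noticeably, which is why the text reports maximum residuals alongside the mean and rmse.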
Using the minimum number of control points (n = 3) for point cloud optimization results in check points that are on average accurate to within 1.5–4 cm (Fig. 6a). Rmse for models with three control points is ∼3 cm, but the maximum residuals can be as large as 8 cm (Fig. 6b).
For pairs of models with 10 (I and J) and 12 (K and L) control points, the models with control points scattered throughout the target area yield lower rmse than models with control points near the edges (Figs. 5 and 6a; Table S1).
Discussion
Check Point Accuracy and Optimal Control Point Distribution
The error analysis evaluates check point accuracy from sparse point clouds optimized with a nonlinear transformation. We draw four primary observations from these results:
1. The use of progressively more control points results in smaller mean residuals and rmse (Fig. 6a,b).
2. Gains in accuracy per added control point are relatively small beyond 12 control points (Fig. 6a,b).
3. Using the minimum number of control points (n = 3) yields low mean residuals and rmse (Fig. 6a,b), but results in high scatter (Fig. 6b,c).
4. Higher accuracy is achieved when control points are distributed throughout the target area (Figs. 5 and 6a; Table S1).
The error analysis supports the intuitive notion that incorporating more control points results in increasingly accurate model geometries. However, if field time is limited, using only three control points in a typical trench-wall exposure of ∼87 m² results in reasonably accurate check points, provided the control points are measured and placed precisely and distributed throughout the target area. We caution that using only a few control points is risky, because the model is more vulnerable to error at individual points, and using additional control points scattered throughout the exposure will increase the likelihood of producing accurate check points. For the target area described in this case study, we find diminishing gains in accuracy for each control point added beyond 12 control points. Although check point accuracy is highest when >25 control points are used for optimization, the relatively small gains in accuracy should be evaluated against the time spent measuring, recording, and inputting those additional control points.
We also find that the spatial distribution of control points is an important factor in check point accuracy (Figs. 5 and 6a; Table S1). Because models with control points scattered throughout the target area are more accurate than models with control points near the edges, we suggest that scattering control points throughout the target area is a better strategy than placing them only at the edges.
Ultimately, the number and distribution of control points implemented depends on the accuracy goals, size, and geometry of the project. For rapid photomosaics of a normal-fault trench exposure in coarse material with meter-scale offsets, using 3–6 control points per wall (∼87 m² exposure) may be sufficient for field logging, but for projects requiring greater accuracy (e.g., trenches in fine-grained deposits with centimeter-scale offsets), at least 12 control points and/or a dense point cloud may be necessary. Similarly, the shape of the target area may affect how many control points are needed. For example, a long, shallow exposure may require twice as many control points as this case study, whereas a square exposure may not require more control points.
Figure 5. Schematic trench walls illustrating the distribution of control points and check points for each point cloud optimization. Models A–S use 3 to 53 control points (A–C: 3; D–F: 5; G–H: 6; I–J: 10; K–L: 12; M: 17; N: 26; O–Q: 27; R: 40; S: 53). Optimal control point distributions have at least 10–12 control points scattered throughout the target area (e.g., models I and K).
This analysis agrees with Harwin and Lucieer (2012),
who found that both the number and distribution of control
points impacted the geometric accuracy of check points in a
dense point cloud reconstructed from imagery taken with an
unmanned aerial vehicle.
Adapting the Workflow for Different Paleoseismic Exposures
The process presented in this case study is meant to serve as a best-practices workflow in paleoseismic trench settings in which high accuracy is required, survey equipment such as an RTK GPS system and a total station are available, and the trench will be open for at least one week. Furthermore, this workflow is primarily designed to facilitate rapid, high-resolution photomosaics using the SFM sparse point cloud. We hope this study serves as a useful guide for future paleoseismic investigations, but it is meant as a jumping-off point rather than a definitive workflow. Here, we discuss ways this workflow can be modified for different paleoseismic exposure settings.
Some of the techniques we use in this benched normal-fault trench may not be optimal for other paleoseismic excavations, as each trench has its own unique geometry and complexities, along with specific research goals. The resolution and accuracy required for paleoseismic trench photomosaics and 3D models depend on the grain size of the substrate, the stratigraphy, and the magnitude of the offsets. Different trench geometries (e.g., slot, open-pit, benched) (e.g., McCalpin, 2009; Bemis et al., 2014; Scharer et al., 2014) may require modified photo acquisition and control point deployment strategies. For example, slot trenches will necessitate more photos due to the limited viewing distance, as will any trench requiring ultra-high resolution and accuracy.

Figure 6. Error analysis. (a) Three-dimensional residuals of check points used in the error analysis; the exponential fit shown is y = 0.038x^(−0.58) (R² = 0.74). Labels A–S indicate models with different quantities and distributions of control points, as illustrated in Figure 5. Control points are scattered throughout the target area for models I and K and at the edges for models J and L. Each model yields an optimized point cloud for which we calculate mean, minimum, and maximum residuals (Table S1). Error bars extend to the maximum and minimum residual for each model. (b) Root mean square error (rmse) analysis. For models with the same number of control points (e.g., models A, B, and C with three control points), the mean and median residuals include points from all models. Check points show the total range in residuals. An error envelope of twice the rmse (shaded area) shows that models having at least six control points are consistently more accurate. For models with the same number of control points arranged in scatter and edges distributions (models I and J with 10 control points and models K and L with 12 control points), the model with the scatter distribution is shown. (c–f) Histograms showing the distribution of check point residuals for four models.
The process described in this study utilizes control points with absolute coordinates measured with RTK GPS precision, but this is not required to make scaled trench models and photomosaics (e.g., Bemis et al., 2014). At a minimum, a 3D scaled model of a trench can be made with only a consumer camera and a tape measure. In this case, distances between at least three pairs of objects in the trench that are visible on the photographs are measured and used to provide model scale, but not absolute position. This approach is suitable for situations in which survey time and resources are extremely limited. Bemis et al. (2014) caution that these distance measurements should be taken over the width of the target area, rather than at small intervals. Such long measurements are difficult to capture accurately by hand (e.g., Castillo et al., 2012), and consequently model errors may be large and unknown.
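Scaling an unreferenced model from tape-measured distances amounts to averaging the ratio of measured to model distance over several point pairs. The sketch below is a schematic illustration of that idea with invented coordinates and measurements; it is not part of the published workflow, and the function name is hypothetical.

```python
import numpy as np

def scale_factor(model_pairs, measured_dists):
    """Average ratio of tape-measured distance (m) to model distance for
    pairs of points picked in an arbitrarily scaled SfM model.

    model_pairs: list of ((x1, y1, z1), (x2, y2, z2)) tuples in model units
    measured_dists: field tape measurements (m) for the same pairs
    """
    ratios = []
    for (p1, p2), d in zip(model_pairs, measured_dists):
        model_d = np.linalg.norm(np.subtract(p1, p2))
        ratios.append(d / model_d)
    return float(np.mean(ratios))

# Three invented point pairs, each 2 model units apart; tape measurements
# cluster around 4 m, so the model scale should be ~2 m per model unit.
pairs = [((0, 0, 0), (2, 0, 0)),
         ((1, 1, 0), (1, 3, 0)),
         ((0, 0, 1), (0, 0, 3))]
s = scale_factor(pairs, [4.0, 4.1, 3.9])
print(f"model units -> meters: multiply by {s:.3f}")
```

Consistent with the caution from Bemis et al. (2014), pairs spanning the full width of the target area constrain the scale far better than short baselines, where a centimeter of tape error is a large fractional error.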
Additionally, georeferencing may be accomplished using coordinates from GPS-tagged photos, but in this case distance measurements between objects in the trench should be used for optimization because the low-precision GPS in consumer cameras can introduce model error. We used GPS-tagged photos because we found that using a GPS-enabled camera significantly reduced image alignment time in initial testing with PhotoScan v.0.9. However, this benefit is not as pronounced in recent versions of PhotoScan (v.1.1), and poor-quality GPS photos may slightly increase processing time. Error can also arise from the use of handheld GPS devices to measure control points. Users should expect decreased accuracy with any of these methods, as accuracy is influenced by instrument precision limits, user errors, and error in image alignment and camera model recovery. Although high quality, precise survey control data are optimal for research-grade trench studies, we recognize this approach may not be feasible in all field conditions (Bemis et al., 2014).
One of the benefits of SFM is that the image-matching algorithms can process photographs from multiple views and cameras, enabling rapid photo acquisition and the use of inexpensive cameras. These qualities are desirable in field settings, when quick photo acquisition may be necessary in fleeting good light conditions or an expensive camera could be harmed. For example, trench photos can be acquired simultaneously by multiple people with multiple cameras if field time is extremely limited. However, a number of studies have shown that using a digital single-lens reflex (dSLR) camera with a fixed-focal-length lens and precalibration can significantly reduce error in model geometry (e.g., James and Robson, 2012, 2014). In this case study, we opted not to use a dSLR camera and fixed-focal-length lens because we are able to distinguish very coarse sand on the photomosaics without these tools and because the larger file size increases processing time. Nevertheless, such tools may be needed to obtain adequate accuracy in some trenches.
Advantages of Using IBM for Paleoseismic Photomosaics
The IBM approach for constructing trench photomosaics provides an alternative to the more time-consuming and labor-intensive practice of manually making photomosaics using image-editing software. At the Alpine trench site, a single user acquired photographs just after sunrise, and the photomosaic basemaps were generated and ready for trench logging by the following morning. Photomosaic basemaps for both walls were completed in approximately 18 hrs, though only 6 of those hours required human input. For comparison, we estimate it would take 3–4 people working 10 hrs each to produce conventional photomosaics for the case-study trench within a day. The IBM methodology used in this study requires only 15%–20% of the user input time and approximately half of the total processing time that would be required to generate a conventional photomosaic, though exact time saved depends on the size and geometry of the trench, processing settings used, and user familiarity with software. For the case-study trench and photomosaics, user input time was ∼2 min/m² of exposure, and total processing time was ∼6 min/m² of exposure. We estimate a manual photomosaic takes at least ∼12 min/m² of exposure, all requiring user input. Much of the processing time for IBM photomosaics is automated and requires only one user, relieving other members of the field party to tend to tasks at the trench site. The most time-consuming step in creating the photomosaic is inputting control points; however, the accuracy of the final product is highly dependent on survey control, in either a relative or absolute coordinate system. The error analysis presented here (Figs. 5, 6; Table S1) serves as a guide for future studies in planning control point deployment and finding an optimal balance between speed and accuracy.
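The per-area rates quoted above imply a simple time budget for a planned excavation. This back-of-the-envelope sketch treats the ∼2, ∼6, and ∼12 min/m² figures as rough case-study estimates rather than general constants; the function name and defaults are hypothetical.

```python
def time_budget(area_m2, user_rate=2.0, total_rate=6.0, manual_rate=12.0):
    """Rough photomosaic time estimates (hours) for a trench exposure.

    Rates are min/m2 from the Alpine case study: ~2 (IBM user input),
    ~6 (IBM total processing), ~12 (conventional manual mosaicking).
    """
    return {
        "ibm_user_hr": area_m2 * user_rate / 60.0,
        "ibm_total_hr": area_m2 * total_rate / 60.0,
        "manual_hr": area_m2 * manual_rate / 60.0,
    }

budget = time_budget(87.0)  # ~87 m2 exposure, one wall of the case study
print(budget)
```

For an 87 m² wall this gives roughly 2.9 hr of user input and 8.7 hr of total IBM processing versus about 17.4 hr of fully manual mosaicking, mirroring the roughly half-the-time, 15%–20%-of-the-input comparison in the text.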
Trench photomosaics generated with the IBM method provide basemaps that are more spatially and visually accurate than conventional photomosaics (Fig. 3). Common features in a trench, such as holes, protruding clasts, and benches that are difficult to handle in conventional photomosaics (Fig. 3a), are easily modeled with the IBM approach (Fig. 3b). Furthermore, exported IBM photomosaics are seamless and color-continuous. Although geometrically accurate models can be made using relative coordinates or scale bars, use of an absolute coordinate system (e.g., UTM) allows for a more complete archive of trench data and the 3D model to be displayed and analyzed in the context of detailed site mapping in a GIS environment. Two-dimensional photomosaics of the trench exposure can be placed in a GIS environment and georeferenced with relative coordinates, which facilitates faster digitizing and subsequent modification of the trench logs, more efficient management of the data, and easier creation of publication-quality trench logs.
IBM also provides an affordable and timesaving alternative to TLS in the documentation of paleoseismic trench exposures. A number of researchers have quantitatively evaluated the accuracy of IBM-derived topographic data compared with laser-based methods such as airborne and terrestrial lidar (Castillo et al., 2012; Harwin and Lucieer, 2012; James and Robson, 2012; Westoby et al., 2012; Fonstad et al., 2013; Johnson et al., 2014). These studies found that IBM products derived from dense point clouds are similar in accuracy and precision to laser-based methods at many scales (James and Robson, 2012) when control points are used for georeferencing (linear transformation) (Harwin and Lucieer, 2012) and optimization (nonlinear transformation) (Johnson et al., 2014). James and Robson (2012) systematically compare IBM dense point clouds with TLS in terms of cost, efficiency, and accuracy at three scales and report that IBM with a dense point cloud is capable of accuracy to 0.1% (i.e., accurate to 1 cm over a project scale of 100 m). On scales ranging from 1 to 1000 m, James and Robson (2012) demonstrate that IBM is more efficient than TLS in terms of field time, cost, and flexibility and produces data of similar quality. Similarly, Castillo et al. (2012) calculated that IBM methods are less expensive and faster per meter of exposure than TLS for gully erosion surveys, which are similar in shape and length to paleoseismic trenches. TLS surveying typically requires more field time than IBM, with minimal postprocessing, whereas IBM field data acquisition is faster but must be processed afterwards. However, Castillo et al. (2012) found that IBM methods were faster than TLS in both field and processing times in gully erosion surveys. A weakness of the IBM approach is that only surface models can be created, equivalent to first-return digital terrain models generated by lidar methods; however, this is not usually an issue for the fresh exposures studied in paleoseismic trenches, and users can employ automatic point cloud classification and manual editing to remove unwanted points.
Finally, the IBM technique facilitates creation of a 3D digital representation of the trench (Fig. S1), which may be used to measure fault offsets and the thickness of stratigraphic units, to better visualize reconstructions of progressive deformation, and to convey interpretations more effectively. Detailed 3D models and associated photomosaics provide a way to permanently preserve ephemeral paleoseismic exposures. Although IBM models do not capture the often subtle variations in unit character (e.g., soil texture, hardness, and friability) that paleoseismologists rely on to map trench exposures, they are a substantial improvement on the existing photomosaic basemaps that typically accompany trench studies.
Conclusions
IBM 3D models and 2D photomosaics provide faster, seamless, more accurate, and more detailed records of paleoseismic trench investigations than conventional (manual) photomosaic methods. We demonstrate the use of IBM in documenting paleoseismic trench exposures in a case-study trench along the Wasatch fault, compare the IBM and conventional photomosaics constructed for the site, and evaluate geometric accuracy of check points from the SFM sparse point cloud. The method presented here demonstrates that the rapid, semiautomated creation of high-resolution, seamless, georeferenced paleoseismic trench photomosaics can be accomplished approximately twice as fast with 15%–20% of the user input time as compared with manual photomosaic methods. Check point accuracy increases with the number of control points implemented, and rmse of <2 cm can be achieved with as few as six control points on a typical benched trench exposure, provided the control points are spatially distributed throughout the target area. We find diminishing gains in accuracy beyond 12 control points. IBM models also provide enhanced visualization, archival, and educational benefits over conventional photomosaics and are a faster, more cost-effective alternative to TLS to make similar products. The methodology, workflow, and error analysis presented here should aid geologists in any tectonic setting in planning and implementing future investigations.

Finally, we advocate that the IBM method, or a similar approach, become standard practice for paleoseismic trench studies. Because of their intrinsically ephemeral nature, paleoseismic trench exposures should be preserved as 3D models with their corresponding high-resolution photomosaics. IBM models and photomosaics provide substantial data preservation with relatively little effort.
Data and Resources
Real-Time Kinematic Global Positioning System data were postprocessed using corrections from the National Geodetic Survey's Online Positioning User Service available at http://www.ngs.noaa.gov/OPUS/ (last accessed May 2014). Model processing was completed using Agisoft PhotoScan Professional Edition v.1.1. The Agisoft PhotoScan User Manual: Professional Edition, v.1.1, was downloaded from http://www.agisoft.com/downloads/user-manuals/ (last accessed January 2015). Useful tips on image capture ("How to Get an Image Dataset that Meets PhotoScan Requirements?") are available from http://www.agisoft.com/pdf/tips_and_tricks/Image%20Capture%20Tips%20-%20Equipment%20and%20Shooting%20Scenarios.pdf (last accessed July 2015). The TOPCON Pulse GPT-7500 total station user manual by TOPCON Corporation (2007) was downloaded from http://www.topptopo.dk/uploads/media/manualer/Totalstation/IM_GPT-7500Eng.pdf (last accessed January 2015).
Acknowledgments
Conversations with Edwin Nissen, Kendra Johnson, Adam McKean,
and Steve Bowman guided development of the workflow in trench settings.
Joshua DeVore and Adam Hiscock provided valuable field assistance. We
thank the Agisoft Support Team for providing the export script and
answering technical questions. Reviews by Steve Personius, Mike James,
and Sean Bemis significantly improved this manuscript. The U.S. Geologi-
cal Survey Earthquake Hazards Program supported this work. Any use of
trade, product, or firm names is for descriptive purposes only and does
not imply endorsement by the U.S. Government.
References
Bemis, S., S. Micklethwaite, D. Turner, M. R. James, S. Akciz, S. Thiele, and H. A. Bangash (2014). Ground-based and UAV-based photogrammetry: A multi-scale, high resolution mapping tool for structural geology and paleoseismology, J. Struct. Geol. 69, 163–178, doi: 10.1016/j.jsg.2014.10.007.
Bemis, S., L. A. Walker, C. Burkett, and J. R. DeVore (2013). Use of 3D models derived from handheld photography in paleoseismology, Geol. Soc. Am. Abstr. Progr. 45, no. 7, 147.
Bennett, S. E. K., C. B. DuRoss, R. D. Gold, R. W. Briggs, S. F. Personius, and S. A. Mahan (2014). Preliminary paleoseismic trenching results from the Flat Canyon site, southern Provo segment, Wasatch fault zone: Testing Holocene fault-segmentation at the Provo-Nephi segment boundary, Seismological Society of America Annual Meeting, Anchorage, Alaska, 30 April–2 May 2014.
Bennett, S. E. K., C. B. DuRoss, R. D. Gold, R. W. Briggs, S. F. Personius, N. G. Reitman, A. I. Hiscock, J. D. DeVore, H. J. Gray, and S. A. Mahan (2015). History of six surface-faulting Holocene earthquakes at the Alpine trench site, northern Provo segment, Wasatch fault zone, Utah, Seismological Society of America Annual Meeting, Pasadena, California, 21–23 April 2015.
Castillo, C., R. Perez, M. R. James, J. N. Quinton, and J. A. Gomez (2012). Comparing the accuracy of several field methods for measuring gully erosion, Soil Sci. Soc. Am. J. 76, 1319–1332, doi: 10.2136/sssaj2011.0390.
Fonstad, M. A., J. T. Dietrich, B. C. Courville, J. L. Jensen, and P. E. Carbonneau (2013). Topographic structure from motion: A new development in photogrammetric measurement, Earth Surf. Process. Landf. 38, 421–430, doi: 10.1002/esp.3366.
Haddad, D. E., S. O. Akciz, R. A. Arrowsmith, D. D. Rhodes, J. S. Oldow, O. Zielke, N. A. Toke, A. G. Haddad, J. Mauer, and P. Shilpakar (2012). Applications of airborne and terrestrial laser scanning to paleoseismology, Geosphere 8, no. 4, 771–786, doi: 10.1130/GES00701.1.
Harwin, S., and A. Lucieer (2012). Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery, Remote Sens. 4, 1573–1599, doi: 10.3390/rs4061573.
James, M. R., and S. Robson (2012). Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application, J. Geophys. Res. 117, no. F03017, doi: 10.1029/2011JF002289.
James, M. R., and S. Robson (2014). Mitigating systematic error in topographic models derived from UAV and ground-based image networks, Earth Surf. Process. Landf. 39, 1413–1420, doi: 10.1002/esp.3609.
Javernick, L., J. Brasington, and B. Caruso (2014). Modeling the topography of shallow braided rivers using structure-from-motion photogrammetry, Geomorphology 213, 166–182.
Johnson, K., E. Nissen, S. Saripalli, J. R. Arrowsmith, P. McGarey, K. Scharer, P. Williams, and K. Blisniuk (2014). Rapid mapping of ultrafine fault zone topography with structure from motion, Geosphere 10, no. 5, doi: 10.1130/GES01017.1.
Kaiser, A., F. Neugirg, G. Rock, C. Müller, F. Haas, J. Ries, and J. Schmidt (2014). Small-scale surface reconstruction and volume calculation of soil erosion in complex Moroccan gully morphology using structure from motion, Remote Sens. 6, no. 8, 7050–7080, doi: 10.3390/rs6087050.
Lucieer, A., S. M. de Jong, and D. Turner (2013). Mapping landslide displacements using structure from motion (SfM) and image correlation of multi-temporal UAV photography, Progress Phys. Geogr. 38, no. 1, 97–116, doi: 10.1177/0309133313515293.
McCalpin, J. P. (Editor) (2009). Paleoseismology, Second Ed., R. Dmowska, D. Hartmann, and H. Thomas Rossby (Series Editors), Vol. 95, International Geophysics Series, Elsevier Academic Press, New York, New York.
Minisini, D., M. Wang, S. C. Bergman, and C. Aiken (2014). Geological data extraction from LiDAR 3-D photorealistic models: A case study in an organic-rich mudstone, Eagle Ford Formation, Texas, Geosphere 10, no. 3, 610–626, doi: 10.1130/GES00937.1.
Personius, S. F., A. J. Crone, M. N. Machette, D. J. Lidke, L.-A. Bradley, and S. A. Mahan (2007). Logs and scarp data from a paleoseismic investigation of the Surprise Valley fault zone, Modoc County, California, version 1.1, U.S. Geol. Surv. Scientific Investigations Map 2983, 2 sheets.
Scharer, K., R. Weldon, A. Streig, and T. Fumal (2014). Paleoearthquakes at Frazier Mountain, California delimit extent and frequency of past San Andreas fault ruptures along 1857 trace, Geophys. Res. Lett. 41, no. 13, 4527–4534, doi: 10.1002/2014GL060318.
Snavely, N., R. Garg, S. M. Seitz, and R. Szeliski (2008). Modeling the world from Internet photo collections, Int. J. Comput. Vis. 80, no. 2, 189–210, doi: 10.1007/s11263-007-0107-3.
Snavely, N., S. M. Seitz, and R. Szeliski (2006). Photo tourism: Exploring photo collections in 3D, ACM Trans. Graph. 25, no. 3, 835–846, doi: 10.1145/1179352.1141964.
Snavely, N., S. M. Seitz, and R. Szeliski (2008). Finding paths through the world's photos, ACM Trans. Graph. 27, no. 3, 11–21, doi: 10.1145/1360612.1360614.
Tavani, S., P. Granado, A. Corradetti, M. Girundo, A. Iannace, P. Arbués, J. A. Muñoz, and S. Mazzoli (2014). Building a virtual outcrop, extracting geological information from it, and sharing the results in Google Earth via OpenPlot and Photoscan: An example from the Khaviz anticline (Iran), Comput. Geosci. 63, 44–53.
Verhoeven, G. (2011). Taking computer vision aloft: Archaeological three-dimensional reconstructions from aerial photographs with PhotoScan, Archaeol. Prospect. 18, 67–73, doi: 10.1002/arp.399.
Westoby, M. J., J. Brasington, N. F. Glasser, M. J. Hambrey, and J. M. Reynolds (2012). "Structure-from-Motion" photogrammetry: A low-cost, effective tool for geoscience applications, Geomorphology 179, 300–314, doi: 10.1016/j.geomorph.2012.08.021.
Geologic Hazards Science Center
U.S. Geological Survey
M.S. 966
PO Box 25046
Denver, Colorado 80225
nreitman@usgs.gov
Manuscript received 4 February 2015
High-Resolution Trench Photomosaics from Image-Based Modeling: Workflow and Error Analysis 13
... In recent years, trench-wall diagrams are increasingly constructed using Structure-from-Motion (SfM) algorithms to produce high-detail orthophoto mosaics of trench walls (Haddad et al., 2012;Bemis et al., 2014;Reitman et al., 2015;Delano et al., 2021). SfM relies on photogrammetric principles and many overlapping photographs taken from different positions to reproduce the 3D geometry and camera locations of a scene. ...
... Prior methods of accurately scaling photomosaics of paleoseismic trenches using SfM have relied on control points surveyed with either a total station or dGPS (Reitman et al., 2015) or by placing printed scale bars throughout the trench exposure (e.g., Delano et al., 2021). The total station and dGPS methods are timeconsuming and require expensive instruments, and in the case of dGPS, may not produce the necessary accuracy in a forested or urban environment. ...
... The methods presented in this manuscript build on those of Reitman et al. (2015) by substituting the dGPS or total station for the more readily available, easy to use, and cheap iOS (or other) laser scanner. Our method quickly surveys dozens of control points without the ...
Article
Full-text available
Measuring displacements of strike-slip paleoearthquakes from trenching excavations requires detailed 3D trenching excavations. Here a new methodology utilizing an iOS based laser scanner and structure-from-motion is used to reconstruct stratigraphy and trace a displaced fluvial channel sequence across the Dog Valley fault in Northeastern California. The Dog Valley fault is a left-lateral strike slip fault in the northern Walker Lane. The northern Walker Lane accommodates ~5-7 mm/yr of dextral shear; however, the relative rates of deformation and earthquake history of the fault have not been previously assessed. Here, we present geomorphic mapping observations and preliminary paleoseismic trenching results from the Dog Valley fault. Lidar data reveal a clear east-northeast striking fault trace that extends about ~25 km from the Prosser Creek drainage west of the Polaris Fault near Highway 89 to the northwest flank of Peavine Mountain. The main trace of the fault appears to project through Stampede dam. Youthful fault scarps are visible along much of the fault, with alternating northwest- and southeast-facing scarps. Clear lateral displacements are largely absent along the fault, however right-stepping fault strands, sidehill benches, linear valleys and ridges, and alternating scarp facing directions are all consistent with left-lateral strike slip displacement. Stratigraphic and structural relations exposed in the Dog Valley fault trench show clear truncations and tilting of bedded fluvial and peat deposits and provide evidence for the occurrence of two Holocene earthquakes: the most recent earthquake postdates ~8 ka, and an earlier earthquake is inferred to have occurred between 8491-8345 cal. ybp. Based on 3D excavations of a prominent channel margin, the most recent earthquake was associated with ~ 115 cm of left-lateral displacement, corresponding to an M6.7 earthquake.
... On the cleaned walls a 1x1 m reference grid was constructed, and the grid points were measured by dGNSS ( Figure 3.5). The trench walls were then photographed to reproduce an image-based, seamless and orthorectified (combined with dGNSS) photomosaic (see Reitman et al., 2015;Patyniak et al., 2017). ...
Thesis
Full-text available
The Pamir Frontal Thrust (PFT) located in the Trans Alai range in Central Asia is the principal active fault of the intracontinental India-Eurasia convergence zone and constitutes the northernmost boundary of the Pamir orogen at the NW edge of this collision zone. Frequent seismic activity and ongoing crustal shortening reflect the northward propagation of the Pamir into the intermontane Alai Valley. Quaternary deposits are being deformed and uplifted by the advancing thrust front of the Trans Alai range. The Alai Valley separates the Pamir range front from the Tien Shan mountains in the north; the Alai Valley is the vestige of a formerly contiguous basin that linked the Tadjik Depression in the west with the Tarim Basin in the east. GNSS measurements across the Central Pamir document a shortening rate of ~25 mm/yr, with a dramatic decrease of ~10-15 mm over a short distance across the northernmost Trans Alai range. This suggests that almost half of the shortening in the greater Pamir – Tien Shan collision zone is absorbed along the PFT. The short-term (geodetic) and long-term (geologic) shortening rates across the northern Pamir appear to be at odds with an apparent slip-rate discrepancy along the frontal fault system of the Pamir. Moreover, the present-day seismicity and historical records have not revealed great Mw > 7 earthquakes that might be expected with such a significant slip accommodation. In contrast, recent and historic earthquakes exhibit complex rupture patterns within and across seismotectonic segments bounding the Pamir mountain front, challenging our understanding of fault interaction and the seismogenic potential of this area, and leaving the relationships between seismicity and the geometry of the thrust front not well understood. In this dissertation I employ different approaches to assess the seismogenic behavior along the PFT. 
Firstly, I provide paleoseismic data from five trenches across the central PFT segment (cPFT) and compute a segment-wide earthquake chronology over the past 16 kyr. This novel dataset provides important insights into the recurrence, magnitude, and rupture extent of past earthquakes along the cPFT. I interpret five, possibly six paleoearthquakes that have ruptured the Pamir mountain front since ∼7 ka and 16 ka, respectively. My results indicate that at least three major earthquakes ruptured the full-segment length and possibly crossed segment boundaries with a recurrence interval of ∼1.9 kyr and potential magnitudes of up to Mw 7.4. Importantly, I did not find evidence for great (i.e., Mw ≥8) earthquakes. Secondly, I combine my paleoseimic results with morphometric analyses to establish a segment-wide distribution of the cumulative vertical separation along offset fluvial terraces and I model a long-term slip rate for the cPFT. My investigations reveal discrepancies between the extents of slip and rupture during apparent partial segment ruptures in the western half of the cPFT. Combined with significantly higher fault scarp offsets in this sector of the cPFT, the observations indicate a more mature fault section with a potential for future fault linkage. I estimate an average rate of horizontal motion for the cPFT of 4.1 ± 1.5 mm/yr during the past ∼5 kyr, which does not fully match the GNSS-derived present-day shortening rate of ∼10 mm/yr. This suggests a complex distribution of strain accumulation and potential slip partitioning between the cPFT and additional faults and folds within the Pamir that may be associated with a partially locked regional décollement. The third part of the thesis provides new insights regarding the surface rupture of the 2008 Mw 6.6 Nura earthquake that ruptured along the eastern PFT sector. 
I explore this rupture in the context of its structural complexity by combining extensive field observations with high-resolution digital surface models. I provide a map of the rupture extent, net slip measurements, and updated regional geological observations. Based on these data I propose a tectonic model for this area in which secondary flexural-slip faulting along the steeply dipping bedding of folded Paleogene sedimentary strata is related to deformation along a deeper blind thrust. Here, strain release appears to be transferred from the PFT toward older inherited basement structures within the zone of advanced Pamir-Tien Shan collision. The research in my dissertation results in a paleoseismic database spanning the past ∼16 kyr, which contributes to the understanding of the seismogenic behavior of the PFT, and also to that of segmented thrust-fault systems in active collisional settings more generally. My observations underscore the importance of combining different methodological approaches in the geosciences, especially in structurally complex tectonic settings like the northern Pamir. The discrepancy between GNSS-derived present-day deformation rates and those from different geological archives in the central part, as well as the widespread distribution of deformation due to earthquake-triggered strain transfer in the eastern part, reveals the complexity of this collision zone and calls for future studies involving multi-temporal and interdisciplinary approaches.
... All trench walls were logged at a scale of 1:20, and we also correlated our logs with photomosaics of the trench walls. Photomosaics were produced by following the procedure suggested by Reitman et al. (2015). Samples from critical horizons were collected for radiocarbon (14C) and optically stimulated luminescence (OSL) dating. ...
Article
Full-text available
The Milas Fault (MF) is a poorly understood active fault located between the Büyük Menderes graben to the north and the Gökova graben to the south within the Anatolian–Aegean Region, SW Türkiye. This dextral strike-slip fault has a length of 55 km between Bafa Lake in the northwest and Çamlıca village in the southeast, with a general strike of N60°W, and its surface trace displays two separate geometric segments. We mapped the geomorphological and geological features of the MF using Google Earth© images, digital elevation models (DEMs), and field observations. The surface traces and kinematic characteristics of the MF were defined by slickenlines on partly altered fault planes, morphological lineaments, and offset streams, all of which suggest dominantly horizontal deformation on this tectonic structure. Moreover, we excavated three palaeoseismological trenches to expose evidence of palaeoearthquakes on the MF and to evaluate its seismic hazard potential. Evidence of three palaeoearthquakes was revealed in the trenches based on the stratigraphic and structural relationships of the exposed strata. The modelled age limits for these earthquakes yielded 2913–2117 BC, 7680–7043 BC, and before 8354 BC, from youngest to oldest. Based on these findings, the MF has produced surface-rupturing earthquakes in the Holocene epoch. Although there are no constrained dates to propose a recurrence interval, combined data from field observations, morphology, seismic records, and palaeoseismology indicate that the Milas Fault is an active structure with the potential to produce an earthquake of magnitude Mw 6.6–7.1 in the future.
... The exposed walls in the trench were cleaned and photographed with a digital camera (Figure 8). We used the structure-from-motion method to mosaic the photos (e.g., Bemis et al., 2014; Reitman et al., 2015) before mapping the deformed and offset sedimentary units on the orthophotos. ...
Article
Full-text available
Understanding the three‐dimensional structure, segmentation, and kinematics of complex fault systems is essential to assessing the size of potential earthquakes and related seismic hazards. The Danghe Nan Shan thrust, a major splay of the Altyn Tagh fault (ATF) in north Tibet, is one of these complex fault junctions. Near the town of Subei, the western Danghe Nan Shan thrust composes two left‐stepping faults outlined by fault scarps in front of folded and uplifted alluvial fans and terraces. Age constraints and 2D reconstructions of the accumulated slip above a transient base level of four terraces standing 7–60 m above the present stream bed yield shortening and vertical uplift rates of 0.5 ± 0.1 and 1.1 ± 0.3 mm/yr, respectively, over the last 130 ka on the southern thrust. Along the northern thrust, vertical terrace offsets of 1.5–3.6 m and horizontal slip of 4.5 m documented in a paleoseismological trench occurred after 12 ± 4 ka, constraining coeval rates of 0.3 ± 0.1 mm/yr for uplift and shortening. Overall, 1.4 ± 0.4 mm/yr terrace uplift and 0.8 ± 0.2 mm/yr shortening rates are determined, in agreement with late Miocene long‐term exhumation rate estimates. Our fault mapping and geomorphic and structural observations imply that the western Danghe Nan Shan thrust accommodates slip transfer from the ATF to the west to thrusting and shortening farther east in the Qilian Shan region. Considering the scarp sizes, their lateral extent, the geometry of the faults at depth, and their slip‐rate, we suggest the possible occurrence of Mw 7+ earthquakes near Subei.
... The success of SfM-based topographic products is largely dependent on the photo set quality, coverage, and resolution. An ideal photo set for SfM techniques has detailed image resolution to produce accurate photo tie points (e.g., Westoby et al., 2012), ample (∼60%) overlap between photos (Abdullah et al., 2013; Bakker & Lane, 2017; Krauss, 1993), an extent larger than the area of interest (e.g., Reitman et al., 2015), and is taken with the same camera and specifications with no changes in lighting or the subject (e.g., Bemis et al., 2014). ...
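The ∼60% overlap guideline translates into a simple camera-spacing rule. A minimal sketch, assuming a flat subject (such as a trench wall) photographed square-on; the field-of-view and distance values are illustrative, not taken from the cited studies:

```python
import math

def photo_spacing(distance_m, hfov_deg, overlap=0.60):
    """Distance to move the camera between shots to keep a target forward
    overlap, assuming a flat subject photographed square-on.
    The parameter values used in the example below are illustrative only."""
    # horizontal ground (wall) footprint of one photo
    footprint = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    # only the non-overlapping fraction of the footprint is new coverage
    return footprint * (1.0 - overlap)

# e.g., 2 m from the wall with a ~65 degree horizontal field of view:
# photo_spacing(2.0, 65.0)  -> ~1.02 m between photos for 60% overlap
```

Tighter overlap (smaller spacing) generally improves tie-point matching at the cost of more photos to process.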
Article
Full-text available
Earthquake surface deformation provides key constraints on the geometry, kinematics, and displacements of fault rupture. However, deriving these characteristics from past earthquakes is complicated by insufficient knowledge of the pre‐event landscape and its post‐event modification. The 1987 Mw 6.5 Edgecumbe earthquake in the northern Taupō volcanic zone (TVZ) in New Zealand represents a moderate‐magnitude earthquake with distributed surface rupture that occurred before widespread high‐resolution topographic data were available. We use historical aerial photos to build pre‐ and post‐earthquake digital surface models (DSMs) using structure‐from‐motion techniques. We measured discrete and distributed deformation from differenced DSMs and compared the effectiveness of the technique to traditional field‐ and lidar‐based studies. We identified most fault traces recognized by 1987 field mapping, mapped newly identified traces, and made dense remote slip measurements with a vertical separation resolution of ∼0.3 m. Our maximum and average vertical separation measurements on the Edgecumbe fault trace (2.5 ± 0.3 and 1.2 m, respectively), are similar to field‐based values of 2.4 and 1.1 m, respectively. Importantly, this technique can discern between new and pre‐existing fault scarps better than field techniques or post‐earthquake lidar‐based measurements alone. Our surface displacement results are used to refine subsurface fault geometries and slip distributions at depth, which are further used to investigate potential magmatic‐tectonic stress interactions in the northern TVZ. Our results suggest the Edgecumbe fault dips more gently at depth than at the surface, hosted shallow slip in 1987, and may be advanced toward failure by interactions with nearby magma bodies.
... To this end, a first methodology was applied, consisting of systematically taking multiple photographs of the complete walls and then generating a 3D photomosaic processed with the Agisoft Metashape Pro software (https://www.agisoft.com), following the workflow defined by Reitman et al. (2015). The results of this method are shown in Annex 1. ...
Thesis
Full-text available
The Alhama de Murcia Fault (AMF) is one of the main seismogenic faults in the Eastern Betics Shear Zone (EBSZ). Between Lorca and Totana, the fault splits into multiple branches that are considered to join at depth and, therefore, make their own contributions to the seismic history and the slip rate of the main fault. A previous study constrained most of the activity of the fault by carrying out a paleoseismic analysis in four of the five most important branches in this sector, thereby producing the first paleoseismic transect analysis in the area. However, an additional branch was not included in the transect due to the lack of adequate paleoseismic sites: the N2a-AMF. This work has focused on this unexplored branch to analyze its seismic potential and obtain new data to complete the paleoseismic transect previously carried out in this area (El Saltador-La Hoya), with the aim of contributing to a more realistic seismic hazard model for the region. We conducted a detailed geomorphological study to accurately map the N2a-AMF and to select a suitable location for the excavation of a palaeoseismic trench. We also refined the mapping of the N2b-AMF, which had been previously analysed, to better understand the relationship between the two branches and the push-up they bound. In the new trench, we observed robust evidence of recurrent deformation (a minimum of three morphogenetic events) in Upper Pleistocene sediments, which implies that the N2a-AMF has been active at least during that period (dating is in progress). These events would have taken place in the estimated time interval between 82.4 kyr and 39.2 kyr, giving a minimum recurrence of 14.4 kyr. Furthermore, we calculated a minimum vertical slip rate of between 0.010 and 0.011 mm/yr for the last 82.4 kyr.
... To date, geological surface reconstructions have enjoyed diverse applications within numerous geoscientific disciplines, including structural geology (e.g., [24,30]), sedimentology (e.g., [31,32]), stratigraphy (e.g., [33]), volcanology (e.g., [34]), geomorphology (e.g., [35,36]), and applications in slope stability analysis and landslide monitoring (e.g., [37][38][39][40]). Such models are also routinely employed within geo-heritage site documentation (e.g., [41,42]), as well as for the documentation of excavations (e.g., [30,43]). In recent years, geological surface reconstructions have also been leveraged as pedagogical tools to enhance contextual understanding and 3D thinking within the classroom [44][45][46][47], and to deliver virtual geological field trips to geoscience students, industry practitioners and the wider public (e.g., [48][49][50][51]). ...
Article
Full-text available
We are witnessing a digital revolution in geoscientific field data collection and data sharing, driven by the availability of low-cost sensory platforms capable of generating accurate surface reconstructions as well as the proliferation of apps and repositories which can leverage their data products. Whilst the wider proliferation of 3D close-range remote sensing applications is welcome, improved accessibility is often at the expense of model accuracy. To test the accuracy of consumer-grade close-range 3D model acquisition platforms commonly employed for geo-documentation, we have mapped a 20-m-wide trench using aerial and terrestrial photogrammetry, as well as iOS LiDAR. The latter was used to map the trench using both the 3D Scanner App and PIX4Dcatch applications. Comparative analysis suggests that only in optimal scenarios can geotagged field-based photographs alone result in models with acceptable scaling errors, though even in these cases, the orientation of the transformed model is not sufficiently accurate for most geoscientific applications requiring structural metric data. The apps tested for iOS LiDAR acquisition were able to produce accurately scaled models, though surface deformations caused by simultaneous localization and mapping (SLAM) errors are present. Finally, of the tested apps, PIX4Dcatch is the iOS LiDAR acquisition tool able to produce correctly oriented models.
Article
On rocky tectonic coasts, data from Holocene marine terraces may constrain the timing of coseismic uplift and help identify the causative faults. Challenges in marine terrace investigations include: 1) identifying the uplift datums; 2) obtaining ages that tightly constrain the timing of uplift; 3) distinguishing tsunami deposits from beach deposits on terraces; and 4) identifying missing terraces and hence earthquakes. We address some of these challenges through comparing modern beach sediments and radiocarbon ages with those from a trench excavated across three terraces at Aramoana, central Hikurangi Subduction Margin, New Zealand. Sedimentary analyses identified beach and dune deposits on terraces but could not differentiate specific environments within them. Modern beach shells yielded modern radiocarbon ages, regardless of position or species, showing age inheritance and habitat is likely not an issue when dating shells on these terraces. By integrating terrace mapping, stratigraphy, morphology, and radiocarbon ages we develop a conceptual model of coastal uplift and terrace formation following at least two, possibly three, earthquakes at 5490–5070, 2620–2180, and 950–650 cal. yr BP. A high step and time gap between the upper two terraces raises the possibility that at least one intervening terrace is completely eroded. The trench exposure also showed that terrace stratigraphy may differ from that inferred from surface geomorphology, with apparent beach ridges being of composite origin and draping of younger beach deposits on the outer edge of a previous terrace. Dislocation modelling and comparison of marine terrace and earthquake ages from ~4 km south and ≤73 km north confirms that the most likely earthquake source is the nearshore, landward‐dipping, Kairakau Fault. 
Alternative sources, such as multi‐fault ruptures of the Kairakau‐Waimārama faults or Hikurangi subduction earthquakes, and/or a combination of the two are also possible and should be examined in future studies.
Article
During the reconstruction of digital outcrop models (DOMs) by photogrammetry, the geodetic coordinate systems of the models can be established through either the camera positions or the control points. This paper develops a procedure for constructing a virtual digital outcrop model (VDOM) and proposes a numerical simulation procedure for photogrammetry to compare the measurement accuracy of these two methods. A physical model test validated the proposed procedure. In addition, numerous numerical experiments on the accuracy of the camera positions and the control points were designed. The experimental results show that when control points are used to georeference the DOMs, it is recommended to measure the coordinates of the control points with a device of high positioning accuracy, such as a total station or RTK equipment. If a device with high positioning accuracy is unavailable, better measurement accuracy can be obtained using the camera positions.
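Georeferencing a model to control points, as discussed above, amounts to fitting a seven-parameter similarity transform (scale, rotation, translation) by least squares and checking the residuals. A minimal sketch of the standard Umeyama/Procrustes solution, with illustrative names; this is not the workflow of any particular photogrammetry package:

```python
import numpy as np

def georef_from_gcps(model_xyz, gcp_xyz):
    """Fit scale s, rotation R, translation t so that s*R@x + t maps model
    coordinates onto surveyed control points, and report control-point RMSE.
    Standard Umeyama/Procrustes least-squares solution (illustrative sketch)."""
    X = np.asarray(model_xyz, float)   # n x 3 points in the arbitrary model frame
    Y = np.asarray(gcp_xyz, float)     # the same n points in the geodetic frame
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    A, B = X - mx, Y - my              # centered coordinates
    U, S, Vt = np.linalg.svd(B.T @ A / len(X))
    d = np.ones(3)
    d[-1] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflections
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() / ((A * A).sum() / len(X))
    t = my - s * R @ mx
    resid = s * X @ R.T + t - Y
    rmse = np.sqrt((resid ** 2).sum(axis=1).mean())
    return s, R, t, rmse
```

The RMSE returned here is the control-point misfit; with noisy GCP coordinates it quantifies how much positioning error the survey device propagates into the georeferenced model.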
Article
Full-text available
This study presents a computer vision application of the structure from motion (SfM) technique for three-dimensional, high-resolution gully monitoring in southern Morocco. Because terrestrial Light Detection and Ranging (LiDAR) is impractical in difficult-to-access gully systems, the inexpensive SfM approach is a promising tool for analyzing and monitoring soil loss, gully head retreat, and plunge pool development following heavy rain events. Objects with known dimensions were placed around the gully scenes for scaling purposes as a workaround for ground control point (GCP) placement. Additionally, the free scaling with objects was compared to terrestrial laser scanner (TLS) data in a field laboratory in Germany. Results of the latter showed discrepancies of 5.6% in volume difference for erosion and 1.7% for accumulation between SfM and TLS. In the Moroccan research area, soil loss varied between 0.58 t in an 18.65 m2 narrowly stretched gully incision and 5.25 t for 17.45 m2 in a widely expanded headcut area following two heavy rain events. Different techniques of data preparation were applied, and the advantages of SfM for soil erosion monitoring under complex surface conditions were demonstrated.
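Erosion and accumulation figures like those quoted above come from a cell-wise budget of a DEM of difference (DoD). A hedged sketch of that bookkeeping; the function name and the default bulk density are placeholders, not values from the study:

```python
import numpy as np

def dod_budget(dem_before, dem_after, cell_size_m, bulk_density_t_m3=1.4):
    """Cell-wise erosion/accumulation budget from a DEM of difference (DoD).

    Returns (erosion_m3, accumulation_m3, eroded_mass_t). The default bulk
    density of 1.4 t/m^3 is a generic placeholder for dry soil, not a value
    from any particular study.
    """
    dz = np.asarray(dem_after, float) - np.asarray(dem_before, float)
    cell_area = cell_size_m ** 2
    erosion_m3 = -dz[dz < 0].sum() * cell_area       # surface lowering -> volume lost
    accumulation_m3 = dz[dz > 0].sum() * cell_area   # surface raising -> volume gained
    return erosion_m3, accumulation_m3, erosion_m3 * bulk_density_t_m3
```

In practice a minimum level-of-detection threshold is usually applied to dz first, so that survey noise is not counted as erosion or deposition.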
Conference Paper
Full-text available
Paleoseismic data near fault segment boundaries provide direct information about fault rupture segmentation. The 350 km-long Wasatch fault zone (WFZ), the archetype of segmented normal faults, consists of 10 structural segments. Abundant paleoseismic data support a history of segmented Holocene surface ruptures, but recent findings document at least one case where an earthquake ruptured across a WFZ structural segment boundary. The extent and frequency of ruptures that span segment boundaries remains poorly known, in part because most paleoseismic studies are sited in segment interiors, adding uncertainty to seismic hazard models for this heavily populated region of Utah. To address these unknowns and reduce this uncertainty we have begun a paleoseismic trenching campaign targeting WFZ structural segment boundaries. We excavated a trench at Flat Canyon (Salem, UT), near the southern end of the Provo segment in the complex Provo-Nephi segment boundary, a 5–8 km-wide right stepover. Alluvial fan deposits at the site are displaced across a 13 m-high scarp. We document a minimum of four and maximum of seven earthquakes within a 17–20 m-wide fault zone, consisting of two graben systems within the scarp. The lower graben preserves evidence for four earthquakes and ≥5 m of vertical throw. The upper graben preserves evidence for two to three earthquakes with ~1 m of total throw. Ongoing optically stimulated luminescence and radiocarbon analyses will provide earthquake timing constraints and allow us to correlate earthquakes between the grabens. Our goal is to determine whether earthquakes at the Flat Canyon site correspond with earthquakes at several paleoseismic sites farther north on the Provo segment and/or with earthquakes at the northernmost sites on the adjacent Nephi segment to the south. Comparison of these earthquake chronologies will allow us to test whether Holocene earthquakes have ruptured across the Provo-Nephi segment boundary.
Article
Full-text available
Structure from Motion (SfM) generates high-resolution topography and coregistered texture (color) from an unstructured set of overlapping photographs taken from varying viewpoints, overcoming many of the cost, time, and logistical limitations of Light Detection and Ranging (LiDAR) and other topographic surveying methods. This paper provides the first investigation of SfM as a tool for mapping fault zone topography in areas of sparse or low-lying vegetation. First, we present a simple, affordable SfM workflow, based on an unmanned helium balloon or motorized glider, an inexpensive camera, and semiautomated software. Second, we illustrate the system at two sites on southern California faults covered by existing airborne or terrestrial LiDAR, enabling a comparative assessment of SfM topography resolution and precision. At the first site, an ~0.1 km2 alluvial fan on the San Andreas fault, a colored point cloud of density mostly >700 points/m2 and a 3 cm digital elevation model (DEM) and orthophoto were produced from 233 photos collected ~50 m above ground level. When a few global positioning system ground control points are incorporated, closest point vertical distances to the much sparser (~4 points/m2) airborne LiDAR point cloud are mostly <3 cm. The second site spans an ~1 km section of the 1992 Landers earthquake scarp. A colored point cloud of density mostly >530 points/m2 and a 2 cm DEM and orthophoto were produced from 450 photos taken from ~60 m above ground level. Closest point vertical distances to existing terrestrial LiDAR data of comparable density are mostly <6 cm. Each SfM survey took ~2 h to complete and several hours to generate the scene topography and texture. SfM greatly facilitates the imaging of subtle geomorphic offsets related to past earthquakes as well as rapid response mapping or long-term monitoring of faulted landscapes.
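The closest-point vertical distances used above to compare SfM and LiDAR clouds can be sketched as a horizontal nearest-neighbor search followed by a z-difference. A minimal brute-force version with illustrative names; for clouds of hundreds of points per square meter a spatial index (e.g., a KD-tree) would replace the brute-force step:

```python
import numpy as np

def vertical_distances(cloud_a, cloud_b):
    """Signed vertical offset from each point in cloud_a to the horizontally
    nearest point in cloud_b (both n x 3 arrays with columns x, y, z).
    Brute-force O(n*m) search, fine for small clouds; use a KD-tree for
    dense survey data."""
    a = np.asarray(cloud_a, float)
    b = np.asarray(cloud_b, float)
    # squared horizontal (x-y) distances between every pair of points
    d2 = ((a[:, None, :2] - b[None, :, :2]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)        # nearest cloud_b point in the x-y plane
    return a[:, 2] - b[idx, 2]     # signed vertical difference

# dz = vertical_distances(sfm_points, lidar_points)
# np.percentile(np.abs(dz), 90)   # summary in the spirit of "mostly < 3 cm"
```

Comparisons of this kind implicitly assume both clouds are in the same coordinate frame, which is why the ground control points mentioned above matter.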
Article
When a scene is photographed many times by different people, the viewpoints often cluster along certain paths. These paths are largely specific to the scene being photographed, and follow interesting regions and viewpoints. We seek to discover a range of such paths and turn them into controls for image-based rendering. Our approach takes as input a large set of community or personal photos, reconstructs camera viewpoints, and automatically computes orbits, panoramas, canonical views, and optimal paths between views. The scene can then be interactively browsed in 3D using these controls or with six degree-of-freedom free-viewpoint control. As the user browses the scene, nearby views are continuously selected and transformed, using control-adaptive reprojection techniques.
Conference Paper
The 350-km-long Wasatch fault zone (WFZ) consists of west-dipping normal fault segments at the eastern boundary of the Basin and Range Province, Utah. Paleoseismic trench data generally support single-segment surface ruptures during large (M≥7) Holocene (<11 ka) earthquakes, but also permit longer ruptures that span structural segment boundaries. To improve rupture length estimates and evaluate the persistence of Holocene rupture termination at central WFZ segment boundaries, we investigated sites ~1 km north and ~1 km south of the boundary between the Salt Lake City (SLCS) and Provo (PS) segments, a ~7 km-long transfer fault. At the Alpine site, located on the northern PS, we excavated a 33-m-long trench across an 8-m-high fault scarp and exposed evidence of faulting in stratified sandy to gravelly alluvial fan deposits. In this trench, the WFZ is expressed as a 5- to 40-cm-wide shear zone that dips ~70° SW. A 2- to 4-m-wide antithetic fault zone with <0.5 m of displacement is observed ~10 m outboard of the primary shear zone. We document evidence for six surface-faulting earthquakes based on colluvial-wedge stratigraphy, fault terminations, and soils that formed during periods of scarp slope stability between earthquakes. Individual colluvial wedges are up to ~0.5- to 0.8-m-thick, suggesting ~1–2 m of displacement per event. A distinctive charcoal-rich sand bed is vertically separated ~7 m across the fault zone, similar to the separation of the ground surface. Ages from 14 radiocarbon and 18 luminescence samples will provide constraints on the timing of individual earthquakes, and facilitate comparison of our new data to existing paleoseismic histories of the SLCS and PS. These data will help resolve the timing and northern extent of PS ruptures and determine whether multi-segment (SLCS-PS) ruptures or spillover ruptures from the SLCS have occurred. These findings will permit a more accurate characterization of the earthquake hazard in the Wasatch Front region.
Conference Paper
We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.
Article
Large earthquakes are infrequent along a single fault, and therefore historic, well-characterized earthquakes exert a strong influence on fault behavior models. This is true of the 1857 Fort Tejon earthquake (estimated M7.7-7.9) on the southern San Andreas Fault (SSAF), but an outstanding question is whether the 330-km long rupture was typical. New paleoseismic data for 6-7 ground-rupturing earthquakes on the Big Bend of the SSAF restrict the pattern of possible ruptures on the 1857 stretch of the fault. In conjunction with existing sites, we show that over the last ~650 years, at least 75% of the surface ruptures are shorter than the 1857 earthquake, with estimated rupture lengths of 100 to <300 km. These results suggest the 1857 rupture was unusual, perhaps leading to the long open interval, and that a return to pre-1857 behavior would increase the rate of M7.3-M7.7 earthquakes.
Article
The production of topographic datasets is of increasing interest and application throughout the geomorphic sciences, and river science is no exception. Consequently, a wide range of topographic measurement methods have evolved. Despite the range of available methods, the production of high resolution, high quality digital elevation models (DEMs) requires a significant investment in personnel time, hardware and/or software. However, image-based methods such as digital photogrammetry have been decreasing in costs. Developed for the purpose of rapid, inexpensive and easy three-dimensional surveys of buildings or small objects, the ‘structure from motion’ photogrammetric approach (SfM) is an image-based method which could deliver a methodological leap if transferred to geomorphic applications, requires little training and is extremely inexpensive. Using an online SfM program, we created high-resolution digital elevation models of a river environment from ordinary photographs produced from a workflow that takes advantage of free and open source software. This process reconstructs real world scenes from SfM algorithms based on the derived positions of the photographs in three-dimensional space. The basic product of the SfM process is a point cloud of identifiable features present in the input photographs. This point cloud can be georeferenced from a small number of ground control points collected in the field or from measurements of camera positions at the time of image acquisition. The georeferenced point cloud can then be used to create a variety of digital elevation products. We examine the applicability of SfM in the Pedernales River in Texas (USA), where several hundred images taken from a hand-held helikite are used to produce DEMs of the fluvial topographic environment. 
This test shows that SfM and low-altitude platforms can produce point clouds with point densities comparable with airborne LiDAR, with horizontal and vertical precision in the centimeter range, and with very low capital and labor costs and low expertise levels.