Figure 1 - uploaded by Michael Bunds
Context in source publication
Context 1
... document accompanies high-resolution topography and an orthomosaic of a portion of the central San Andreas Fault, California, USA. The data set was generated from optical photographs taken with a small uncrewed aerial system (sUAS) and differential global navigation satellite system (dGNSS) measurements using structure-from-motion (SfM) processing, and covers ~3 km of strike length of the fault (Figure 1). We acquired this data set to measure tectonic deformation near the fault (Scott et al., in review). ...
Similar publications
The frictional behaviour of a series of numerical 2D granular mass flows down a model topography is analysed. Effective friction coefficients estimated from final deposits are compared with data from documented natural geophysical flows, and show a consistent behaviour as far as run-out distances are concerned. The latter is used to estimate effect...
Citations
... The datasets have been either collected by us or kindly made available by OpenTopography. We used four point clouds (see Figure 8): 1) Alhambra (100 million points), 2) solar plant (500 million points), 3) San Andreas Fault (subsampled to 1 billion points) [39], and 4) San Simeon and Cambria faults (subsampled to 2 billion points) [40]. ...
Remote sensing technologies, such as LiDAR, produce billions of points that commonly exceed the storage capacity of the GPU, hindering their processing and rendering stages. Level-of-detail (LOD) techniques have been widely investigated, but arranging the LOD structures is also time-consuming. In this study, we propose a GPU-driven culling system focused on determining the number of points visible in every frame. It can manipulate point clouds of any arbitrary size while maintaining a low memory footprint in both the CPU and GPU. Instead of organizing point clouds into hierarchical data structures, these are split into meshlets sorted using the Hilbert curve. This alternative encoding alleviates the occurrence of anomalous groups found in Morton curves. Instead of keeping the entire point cloud in the GPU, points are transferred on demand to ensure real-time capability. Accordingly, our solution can manipulate huge point clouds even in commodity hardware with low memory capacities, such as edge computing systems. Moreover, hole filling is implemented to fill the gaps inherent in point cloud rendering. Notably, our proposal can handle point clouds of 2 billion points maintaining more than 150 frames per second (FPS) on average without any perceptive quality loss. We evaluate our approach through numerous experiments conducted over real-world data by comparing it with state-of-the-art methods.
Figure 1. Performance of our work against state-of-the-art methods over 1 billion points. The average performance remains high regardless of the point cloud size, while the VRAM usage is considerably lower. In addition, point sorting generates more compact meshlets with a reduced computational complexity.
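The meshlet construction described in the abstract can be sketched in a few lines. The Python snippet below is an illustrative sketch, not the paper's CUDA implementation: `xy2d` is the standard Hilbert-index mapping for a 2^k × 2^k grid, and the grid and meshlet sizes are arbitrary choices made here for demonstration.

```python
def xy2d(n, x, y):
    """Map a cell (x, y) on an n-by-n grid (n a power of two) to its
    distance d along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so lower bits are indexed consistently.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def build_meshlets(points, grid=4, meshlet_size=4):
    """Sort quantized (x, y) points along the Hilbert curve, then chunk the
    ordered list into fixed-size meshlets."""
    ordered = sorted(points, key=lambda p: xy2d(grid, p[0], p[1]))
    return [ordered[i:i + meshlet_size]
            for i in range(0, len(ordered), meshlet_size)]
```

Because consecutive Hilbert indices always land in spatially adjacent cells, the resulting meshlets are more compact than Morton-ordered ones, whose index sequence can jump across the grid (the "anomalous groups" the abstract refers to).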
... The number of wavelets scales so that, after the wavelet decomposition, the extracted component of maximum period is ~300 days (Torrence & Compo, 1998). We obtained the denoised time series by adaptively filtering the wavelet coefficients at each level of the decomposition (Gao et al., 2010) and then reconstructing the signal with the inverse wavelet transform. ...
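The decompose-filter-reconstruct pipeline in the excerpt can be illustrated with a minimal Haar-wavelet sketch. This is not the adaptive filter of Gao et al. (2010), nor the Torrence & Compo wavelet; soft thresholding is used here only as a simple stand-in for the per-level coefficient filtering:

```python
import math

def haar_decompose(signal):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = 0.5 ** 0.5
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one level of the Haar transform."""
    s = 0.5 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t (a stand-in for adaptive filtering)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, levels=2, t=0.5):
    """Decompose, threshold the detail coefficients at each level,
    and reconstruct. Length must be divisible by 2**levels."""
    approx, details = list(signal), []
    for _ in range(levels):
        approx, d = haar_decompose(approx)
        details.append(soft_threshold(d, t))
    for d in reversed(details):
        approx = haar_reconstruct(approx, d)
    return approx
```

With `t=0.0` the round trip is exact (perfect reconstruction); increasing `t` suppresses small, high-frequency detail coefficients while leaving the low-frequency approximation intact.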
... The ~50 × 5 m² orthomosaic was georeferenced using ground control points measured with rapid-static GNSS. We did not observe tectonic ground fractures in the 2011 and 2012 summers when conducting paleoseismic studies or in October 2017 when collecting the sUAS data set (Bunds et al., 2020). ...
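Georeferencing against ground control points is normally done inside the SfM software, but the underlying idea (fit a transform mapping local coordinates onto surveyed GNSS coordinates) can be sketched with a closed-form least-squares 2D similarity (Helmert) fit. The function names and the four-parameter model are illustrative assumptions, not the authors' workflow:

```python
def fit_similarity(src, dst):
    """Least-squares 2D similarity (Helmert) transform fitting dst ≈ s·R·src + t.

    Returns (a, b, tx, ty) for the model
        X = a*x - b*y + tx,   Y = b*x + a*y + ty,
    where a = s*cos(θ), b = s*sin(θ). Closed form: center both point sets,
    then a and b are ratios of cross-sums over the source spread.
    """
    n = len(src)
    xc = sum(p[0] for p in src) / n
    yc = sum(p[1] for p in src) / n
    Xc = sum(p[0] for p in dst) / n
    Yc = sum(p[1] for p in dst) / n
    num_a = num_b = den = 0.0
    for (x, y), (X, Y) in zip(src, dst):
        dx, dy, dX, dY = x - xc, y - yc, X - Xc, Y - Yc
        num_a += dx * dX + dy * dY   # projects onto rotation's cosine part
        num_b += dx * dY - dy * dX   # projects onto rotation's sine part
        den += dx * dx + dy * dy
    a, b = num_a / den, num_b / den
    tx = Xc - a * xc + b * yc
    ty = Yc - b * xc - a * yc
    return a, b, tx, ty

def apply_similarity(params, p):
    """Apply a fitted (a, b, tx, ty) transform to a point (x, y)."""
    a, b, tx, ty = params
    x, y = p
    return a * x - b * y + tx, b * x + a * y + ty
```

A full GCP workflow would typically also report residuals at each control point as a check on the registration quality; with noise-free input, the closed form recovers the transform exactly.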
Imaging tectonic creep along active faults is critical for measuring strain accumulation and ultimately understanding the physical processes that guide creep and the potential for seismicity. We image tectonic deformation along the central creeping section of the San Andreas Fault at the Dry Lake Valley paleoseismic site (36.468°N, 121.055°W) using three data sets with varying spatial and temporal scales: (1) an Interferometric Synthetic Aperture Radar (InSAR) velocity field with an ~100-km footprint produced from Sentinel-1 satellite imagery, (2) light detection and ranging (lidar) and structure-from-motion 3-D topographic differencing that resolves a decade of deformation over a 1-km aperture, and (3) surface fractures that formed over the 3- to 4-m-wide fault zone during a drought from late 2012 to 2014. The InSAR velocity map shows that shallow deformation is localized to the San Andreas Fault. We demonstrate a novel approach for differencing airborne lidar and structure-from-motion topography that facilitates resolving deformation along and adjacent to the San Andreas Fault. The 40-m resolution topographic differencing resolves a 2.5 ± 0.2 cm/yr slip rate localized to the fault. The opening-mode fractures accommodate 2.2 (+0.8/−0.6) cm/yr of fault slip. 90% ± 30% of the 1-km aperture deformation is accommodated over the several-meter-wide surface trace of the San Andreas Fault. The extension direction inferred from the opening-mode fractures and topographic differencing is 40°–48° from the local trend of the San Andreas Fault. The localization of deformation likely reflects the well-oriented and mature fault.
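The geometric link between the opening-mode fractures, the extension direction, and the fault trend amounts to resolving an opening-rate vector into fault-parallel and fault-normal components. A minimal sketch (the function and its interface are hypothetical illustrations, not the paper's analysis):

```python
import math

def resolve_opening(rate_cm_yr, opening_azimuth_deg, fault_trend_deg):
    """Resolve an opening-rate vector into components parallel and
    perpendicular to the fault trend. Azimuths are in degrees."""
    theta = math.radians(opening_azimuth_deg - fault_trend_deg)
    parallel = rate_cm_yr * math.cos(theta)       # contributes to fault slip
    perpendicular = rate_cm_yr * math.sin(theta)  # fault-normal opening
    return parallel, perpendicular
```

For an extension direction 40°–48° from the fault trend, as inferred in the abstract, roughly cos(40°–48°) ≈ 0.67–0.77 of the opening vector projects onto the fault-parallel direction; the exact partitioning in the paper depends on the fracture kinematic model used there.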
LOD construction is typically implemented as a preprocessing step that requires users to wait before they can view the results in real time. We propose an incremental LOD generation approach for point clouds that allows us to simultaneously load points from disk, update an octree-based level-of-detail representation, and render the intermediate results in real time while additional points are still being loaded from disk. LOD construction and rendering are both implemented in CUDA and share the GPU's processing power, but each incremental update is lightweight enough to maintain real-time frame rates.
Our approach is able to stream points from an SSD and update the octree on the GPU at rates of up to 580 million points per second (~9.3 GB/s) on an RTX 4090 and a PCIe 5.0 SSD. Depending on the data set, our approach spends an average of about 1 to 2 ms to incrementally insert 1 million points into the octree, allowing us to insert several million points per frame into the LOD structure and render the intermediate results within the same frame.
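The incremental-insertion idea can be illustrated with a toy point octree: each node holds a small bucket of points and splits into eight children when the bucket overflows, so new points can be inserted at any time while the structure remains renderable/queryable between batches. This is only a CPU sketch of the concept; the paper's CUDA structure and batching are far more elaborate.

```python
class Octree:
    """Minimal point octree with bucket splitting (a toy analogue of
    incremental LOD insertion). Node covers center ± half on each axis."""

    def __init__(self, center, half, capacity=4):
        self.center, self.half, self.capacity = center, half, capacity
        self.points, self.children = [], None

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        for p in self.points:          # redistribute the full bucket
            self.children[self._child_index(p)].insert(p)
        self.points = []

    def insert(self, p):
        if self.children is not None:
            self.children[self._child_index(p)].insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.capacity:
            self._split()

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)
```

Because every insert touches only one root-to-leaf path, each update is cheap and the tree can be interleaved with rendering, which is the property the CUDA implementation exploits at much larger scale.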