Fraction of time each patch is “on” during the inversion. This indicates that slip is required from the surface to around 12 km depth. Many other patches are “off” for nearly the entire inversion (the least-used patch is on for only 230 of 2,000,000 iterations), indicating that slip there is not needed to fit the data, and by the end of the burn‐in these patches have been switched off.
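As a loose illustration of how such an “on” fraction can be tallied from a trans-dimensional chain, the sketch below uses a synthetic on/off history; the iteration counts, patch numbers, and random stand-in chain are assumptions for illustration only, not the inversion shown here.

```python
import numpy as np

# Hypothetical example: boolean record of which patches were "on" at each saved
# iteration of a trans-dimensional chain (rows: saved iterations, columns: patches).
rng = np.random.default_rng(0)
n_saved, n_patch = 10_000, 100                      # illustrative sizes, not the paper's
on_history = rng.random((n_saved, n_patch)) < 0.3   # random stand-in for a real chain

burn_in = 2_000                                     # discard samples taken before convergence
frac_on = on_history[burn_in:].mean(axis=0)         # fraction of post-burn-in iterations "on"

# Patches with frac_on near 1 are required by the data; values near 0 mean slip there
# is not needed to fit the data, and such patches end up switched off.
print(frac_on.round(2))
```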

Source publication
Article
Full-text available
Many earthquake properties, including slip, show self‐similar (fractal) features. We can incorporate self‐similarity into Bayesian slip inversions via von Karman correlation, so that the regularization applied is representative of observed fault features. In von Karman regularization, each slip patch has a relationship to every other patch. This me...

Similar publications

Article
Full-text available
The possibility of increased landslide activity as a result of climate change has often been suggested, but few studies quantify this connection. Here, we present and utilize a workflow for the first time solely using publicly available data to assess the impact of future changes in landslide dynamic conditioning factors on landslide movement. In o...
Article
Full-text available
Exploration of territories not previously analyzed by landslide experts provides interesting findings. The Chgega landslide, in northern Tunisia, represents a paradigmatic mass movement. It can be classified as a complex landslide, or more specifically as vast rock spreading that evolved into a block slide. It involves a great block of limestone-ab...
Article
Full-text available
In view of the limitations of traditional InSAR technology in selecting stable target points for orbit refining and surface subsidence inversion in complicated mining areas, this paper proposes a time-series InSAR mining-area subsidence monitoring method based on the fusion of multi-threshold targets. On the basis of the traditional technology, the d...
Article
Full-text available
Plain Language Summary Hampton Roads in coastal Virginia is among the regions experiencing high rates of relative sea level rise. This rate exceeds the global average primarily due to ongoing land subsidence. In part to reduce this subsidence, Hampton Roads has begun injecting treated wastewater into the underlying aquifer. However, the rate of sub...
Article
Full-text available
The Hooskanaden landslide is a large (~600 m wide × 1,300 m long), deep (~30 – 45 m) slide located in southwestern Oregon. Since 1958, it has had five moderate/major movements that catastrophically damaged the intersecting U.S. Highway 101, along with persistent slow wet‐season movements and a long‐term accelerating trend due to coastal erosion. Mu...

Citations

... In the inversion, von Karman autocorrelation is used as a prior probability to solve for a slip distribution that both fits the geodetic data and shows self-similarity (full details of the method in Amey et al. 2018). Additionally, we solve for slip using a trans-dimensional inversion scheme in which we solve for the location of a slipping area as well as the slip magnitude (full details of the method are given in Amey et al. 2019). ...
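For readers unfamiliar with this kind of prior, a minimal sketch of a von Karman (Matérn-type) correlation matrix between slip patches is shown below; the patch layout, correlation length, and Hurst exponent are illustrative assumptions, not values from the cited papers.

```python
import numpy as np
from scipy.special import gamma, kv

def von_karman_corr(dist, a=5.0, H=0.75):
    """Von Karman (Matern-type) correlation for inter-patch distances.
    a: correlation length in km, H: Hurst exponent -- both assumed here."""
    r = np.asarray(dist, dtype=float)
    c = np.ones_like(r)                      # correlation tends to 1 as r -> 0
    nz = r > 0
    x = r[nz] / a
    c[nz] = (2.0 ** (1.0 - H) / gamma(H)) * x ** H * kv(H, x)
    return c

# Illustrative patch centroids on a planar fault (along-strike x, down-dip z), in km.
centroids = np.array([[x, z] for z in range(0, 15, 3) for x in range(0, 20, 4)], float)
d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
C = von_karman_corr(d)                       # prior correlation matrix between patches
print(C.shape, C[0, :5].round(3))
```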
Article
Full-text available
This study investigates the distribution of coseismic slip of the 19th August 2018 Mw 7.2 Lombok earthquakes, Indonesia, using Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) data. Two ascending look directions of Sentinel-1 SAR data, with maximum displacements of 27 cm and 35 cm, and one descending, with a maximum displacement of 12 cm, are used. In addition, static offsets from the GPS stations, located in the westernmost and northern parts of the island, detect ~ 2 cm and ~ 5 cm of coseismic displacement due to the earthquake. Using the combined InSAR and GPS data, this study estimates the fault location, fault geometry and the coseismic slip distribution by a joint inversion using a trans-dimensional Bayesian method. This method solves for the contiguous area of the fault that is allowed to slip in the inversion whilst also solving for the slip magnitude. This dampens the spurious smoothing that can occur in distributed slip solutions, in particular with far-field geodetic data or deep sources. The maximum slip of ~ 3.5 m is located on a deeper portion of the fault, at ~ 21 km depth, adjacent to the epicenter of the earthquake. This study demonstrates that the coseismic slip of the 19th August 2018 earthquakes occurred on a structure further south towards Lombok, a fault parallel to the Flores back-arc thrust.
... However, this method also involves a complex calculation process. Amey et al. (2018, 2019) proposed a von Karman Bayesian slip distribution inversion method based on the self-similar characteristics of faults. This method also refers to the research of Bagnardi and Hooper (2018) and processes the parameters with an adaptive step size, which improves the calculation efficiency and accuracy to a certain extent. ...
Article
Full-text available
For slip distribution inversion with Bayesian theory, traditionally, the Markov Chain Monte Carlo (MCMC) method is well applied to generate a posterior probability density function with a sampling strategy. However, its computational cost may be expensive, and it fails to accommodate large-volume data sets and estimate higher-dimensional parameters of interest. In this study, we introduce variational inference theory into the study of coseismic slip distribution, and present a variational Bayesian slip distribution inversion approach. Furthermore, synthetic tests show that the variational Bayesian approach can efficiently and accurately invert the designed slip distribution; therefore, we conclude that the proposed algorithm is appropriate to invert the slip distribution parameters, which might be superior to MCMC sampling due to its excellent convergence speed and low computational burden. Taking the Dingri earthquake on March 20, 2020, as an example, we further verify the practicability of the variational Bayesian method in actual earthquakes. Additionally, the inversion results show that the main fault slip region of the Dingri earthquake occurs at depths of 2–8 km below the surface, the maximum slip is 0.54 m, and the released coseismic seismic moment is 5.58 × 10¹⁷ Nm, corresponding to a moment magnitude of Mw 5.79.
... Estimation of seismic locking models at subduction interfaces is of great importance to reduce the epistemic uncertainty of seismic and tsunami hazard along subduction zones. The inclusion of a priori (or prior) knowledge regarding the physical process of rupture into the development of inverse methods can potentially reduce considerably the uncertainty of plate locking models (e.g., Amey et al., 2019). However, incomplete displacement-field observations, which are also buried within noise originating from multiple sources, lead to non-uniqueness in the solution and uncertain insight regarding the inferred model. ...
... A less restrictive Bayesian approach consists of drawing samples directly from the posterior distribution, through a sampling algorithm such as the Markov Chain Monte Carlo (MCMC; e.g., Amey et al., 2019; Fukuda & Johnson, 2008; Minson et al., 2013, 2014; Sambridge et al., 2006). The posterior PDF is then characterized from its samples. ...
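As a rough illustration of how a posterior can be characterized from its samples, here is a generic random-walk Metropolis sketch on a toy one-parameter target; the target, proposal width, and chain length are arbitrary assumptions and not any of the cited implementations.

```python
import numpy as np

def log_post(m):
    # Toy log-posterior (standard normal target); stands in for data misfit + prior.
    return -0.5 * m ** 2

rng = np.random.default_rng(1)
m, chain = 0.0, []
for _ in range(50_000):
    proposal = m + 0.5 * rng.normal()                    # random-walk proposal
    if np.log(rng.random()) < log_post(proposal) - log_post(m):
        m = proposal                                     # accept; otherwise keep current model
    chain.append(m)

samples = np.array(chain[5_000:])                        # discard burn-in
print(samples.mean().round(3), samples.std().round(3))   # posterior summarized from samples
```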
... But the unknown backslip model parameters belong in a log-space domain, rather than a linear domain; it therefore becomes more difficult to incorporate physical observations into the prior. Amey et al. (2019) argue that physical heterogeneous slip spatial-correlation features should be incorporated as prior information instead of the commonly employed Laplacian operator. ...
Article
Full-text available
Inversions of geodetic data are regularly used to estimate interseismic locking in subduction zones. However, the ill‐posed nature of these problems motivates us to include prior information, physically consistent with processes of the subduction seismic cycle. To deal with model instabilities, we present an inversion method to estimate both plate‐locking and model uncertainties by inverting Global Navigation Satellite System derived velocities based on a Bayesian model selection scheme. Our method allows us to impose positivity constraints via a multivariate folded‐normal distribution, with a specified covariance matrix. Model spatial correlations are explored and ranked to find the models that best explain the observed data and to better understand locking models. This approach searches for hyperparameters of the prior joint multivariate probability density function (PDF) of model parameters that minimize the Akaike Bayesian Information Criterion (ABIC). To validate our approach, we invert synthetic displacements from analytic models, yielding satisfactory results. We then apply the method to estimate the plate‐locking in Central Chile (28°–39°S) and its relation to the coseismic slip distribution of earthquakes with magnitudes Mw > 8.0 on the subduction zone since 2010. We also search among different prior PDFs for a single ductile‐fragile limit depth. Our results confirm a spatial correlation between locked asperities and the 2010 Mw 8.8 Maule and 2015 Mw 8.3 Illapel earthquake rupture zones. The robustness of our locking model shows potential to improve future seismic and tsunami hazard estimations.
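As a loose illustration of the folded-normal positivity idea described above (not the authors' implementation), drawing correlated normal samples and folding them about zero yields non-negative parameter values with a specified covariance structure; the dimensions and covariance below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                                    # illustrative number of parameters
idx = np.arange(n)
# Assumed covariance with exponential decay between neighbouring parameters.
cov = 0.2 ** 2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)

normal_draws = rng.multivariate_normal(np.zeros(n), cov, size=10_000)
folded = np.abs(normal_draws)                            # folded normal: every sample >= 0
print(folded.min(), folded.mean(axis=0).round(3))
```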
... Although we focused on the estimation using a weakly informative prior PDF for the slip distribution, the accurate consideration of model uncertainty that the method allows for should also be effective in estimations introducing strong prior PDFs. Moreover, by taking a fully Bayesian approach, the method can be flexibly combined not only with widely used constraints such as the smoothing approach but also with recently proposed sophisticated implicit (e.g., trans-dimensional inversion: Dettmer et al., 2014) and explicit (e.g., von Karman regularization: Amey et al., 2018, 2019) regularization schemes, which is expected to increase the quality of the estimation. The approach of generating multiple models of the underground structure can be improved further by focusing on the genuine estimation errors of underground structure models, which remains an important future challenge. ...
Article
Full-text available
We consider a Bayesian multi‐model fault slip estimation (BMMFSE), which incorporates many candidates of the underground structure (Earth structure and plate boundary geometry) model characterized by a prior probability density function (PDF). The technique is used to study long‐term slow slip events (L‐SSEs) that occurred beneath the Bungo Channel, southwest Japan, in around 2010 and 2018. We here focus on the two advantages of BMMFSE: First, it allows for estimating slip distribution without introducing relatively strong prior information such as smoothing constraints, by combining a fully Bayesian inference and better consideration of model uncertainty to avoid overfitting. Second, the posterior PDF for the underground structure is also obtained during the fault slip estimation, which can be used as priors for the estimation of slip distribution for recurring events. The estimated slip distribution obtained using BMMFSE agreed better with the distribution of deep tectonic tremors at the down‐dip side of the main rupture area than those based on stronger prior constraints when the corresponding Coulomb failure stress changes are compared. This finding suggests a mechanical relationship between the L‐SSE and the synchronized tremors. The use of the posterior PDF of the underground structure estimated for the 2010 L‐SSE as prior PDF for the 2018 event resulted in more consistent estimation with the data, indicated by a smaller value of an information criterion.
... The Mj 6.6 (Mw 6.2) Central Tottori earthquake occurred on October 21, 2016, in the western part of Honshu, Japan (Fig. 2b). The associated damage was small compared with the Kumamoto earthquake sequence and, fortunately, resulted in no deaths (Amey et al. 2019). The largest displacement observed by the GNSS CORS network for this event was approximately 7 cm at ~ 13 km north of the epicenter. ...
... The largest displacement observed by the GNSS CORS network for this event was approximately 7 cm at ~ 13 km north of the epicenter. As Fig. 2 shows, both regions are covered with dense vegetation; therefore, C-band SAR data tend to be decorrelated, while the L-band provides sufficient coherence (Jiang et al. 2017; Funning and Garcia 2018; Amey et al. 2019; Morishita 2019; He et al. 2019b). For both earthquake events, abundant pre- and post-earthquake ALOS-2 data are available including left-looking and right-looking types. ...
Article
Full-text available
Three-dimensional (3D) surface deformation data with high accuracy and resolution can help reveal the complex mechanisms and sources of subsurface deformation, both tectonic and anthropogenic. Detailed 3D deformation data are also beneficial for maintaining the position coordinates of existing ground features, which is critical for developing and advancing global positioning technologies and their applications. In seismically active regions, large earthquakes have repeatedly caused significant ground deformation and widespread damage to human society. However, the delay in updating position coordinates following deformation can hamper disaster recovery. Synthetic aperture radar (SAR) data allow high-accuracy and high-resolution 3D deformation measurements. Three analysis methods are currently available to measure 1D or 2D deformation: SAR interferometry (InSAR), split-bandwidth interferometry (SBI), and the pixel offset method. In this paper, we propose an approach to derive 3D deformation by integrating deformation data from the three methods. The theoretical uncertainty of the derived 3D deformations was also estimated using observed deformation data for each of these methods and the weighted least square (WLS) approach. Furthermore, we describe two case studies (the 2016 Kumamoto earthquake sequence and the 2016 Central Tottori earthquake in Japan) using L-band Advanced Land Observing Satellite 2 (ALOS-2) data. The case studies demonstrate that the proposed approach successfully retrieved 3D coseismic deformation with the standard error of ~ 1, ~ 4, and ~ 1 cm in the east–west, north–south, and vertical components, respectively, with sufficient InSAR data. SBI and the pixel offset method filled the gaps of the InSAR data in large deformation areas in the order of 10 cm accuracy. The derived standard errors for each pixel are also useful for subsequent applications, such as updating position coordinates and deformation source modeling. The proposed approach is also applicable to other SAR datasets. In particular, next-generation L-band SAR satellites, such as ALOS-4 and NASA-ISRO SAR (NISAR), which have a wider swath width, more frequent observation capabilities than the former L-band satellites, and exclusive main look directions (i.e., right and left) will greatly enhance the applicability of 3D deformation derivation and support the quick recovery from disasters with significant deformation.
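A minimal sketch of the weighted-least-squares step that such a 1D/2D-to-3D integration relies on is given below, for a single pixel; the look vectors, displacements, and uncertainties are made-up numbers, not values from the study.

```python
import numpy as np

# One pixel: each row is the (east, north, up) sensitivity vector of one measurement
# (e.g., ascending/descending InSAR LOS, an azimuth offset). All numbers are made up.
G = np.array([
    [-0.62, -0.11, 0.78],    # ascending LOS
    [ 0.60, -0.12, 0.79],    # descending LOS
    [ 0.10,  0.99, 0.05],    # along-track offset, mostly sensitive to north-south
    [-0.65, -0.10, 0.75],    # another LOS observation
])
d = np.array([0.031, -0.012, -0.040, 0.028])     # observed displacements (m)
sigma = np.array([0.005, 0.005, 0.030, 0.010])   # 1-sigma observation errors (m)

W = np.diag(1.0 / sigma ** 2)                    # weights from observation variances
cov_m = np.linalg.inv(G.T @ W @ G)               # formal covariance of the 3D solution
m = cov_m @ (G.T @ W @ d)                        # east, north, up displacement estimate
print(m.round(4), np.sqrt(np.diag(cov_m)).round(4))
```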
... Numerical approaches using McMC sampling (Gilks et al. 1995; Denison et al. 2002; Sambridge & Mosegaard 2002; Brooks et al. 2011) are often applied to estimate the PPD in Bayesian inversion. McMC methods have been applied to many geophysical data, including geoacoustic data (Belcourt et al. 2019), resistivity data (Galetti & Curtis 2018), seismic data (Ray et al. 2018), fault rupture data (Shi et al. 2018; Amey et al. 2019) and gravity and magnetic data (Bosch et al. 2006). McMC sampling generates a sequence (chain) of models called samples. ...
Article
Gravity and magnetic data resolve the Earth with variable spatial resolution, and Earth structure exhibits both discontinuous and gradual features. Therefore, model parametrization complexity should be able to address such variability by locally adapting to the resolving power of the data. The reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm provides variable spatial resolution that is consistent with data information. To address the prevalent non-uniqueness in joint inversion of potential field data, we employ a novel spatial partitioning with nested Voronoi cells that is explored by rjMcMC sampling. The nested Voronoi parametrization partitions the subsurface in terms of rock types, such as sedimentary, salt and basement rocks. Therefore, meaningful prior information can be specified for each type which reduces non-uniqueness. We apply nonoverlapping prior distributions for density contrast and susceptibility between rock types. In addition, the choice of noise parametrization can lead to significant trade-offs with model resolution and complexity. We adopt an empirical estimation of full data covariance matrices that include theory and observational errors to account for spatially correlated noise. The method is applied to 2D gravity and magnetic data to study salt and basement structures. We demonstrate that meaningful partitioning of the subsurface into sediment, salt, and basement structures is achieved by these advances without requiring regularization. Multiple simulated- and field-data examples are presented. Simulation results show clear delineation of salt and basement structures while resolving variable length scales. The field data show results that are consistent with observations made in the simulations. In particular, we resolve geologically plausible structures with varying length scales and clearly differentiate salt structure and basement topography.
... From this model, we calculated 1D surface displacements as InSAR ascending and descending observations would detect and added Gaussian noise (Text S5), which we used in the Bayesian estimation (Figures 14b and 14c). To avoid slip artifacts related to smoothing constraints that can arise when the fault dimensions are large compared to the actual slip (Amey et al., 2019), we subjectively choose the smoothing parameter from a misfit L-curve. The misfit trade-off curve between the misfit function and average smoothness is obtained for the most probable non-planar fault geometry at stage 0 of our sampling technique. ...
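For readers unfamiliar with picking a smoothing weight from a misfit L-curve, the sketch below computes a misfit-versus-roughness trade-off curve for a toy linear problem; the Green's-function matrix, noise level, and range of weights are all assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 60, 20
G = rng.normal(size=(n_obs, n_par))              # toy Green's-function matrix
m_true = np.convolve(rng.normal(size=n_par), np.ones(5) / 5, mode="same")
d = G @ m_true + 0.05 * rng.normal(size=n_obs)   # synthetic data with noise

L = np.diff(np.eye(n_par), n=2, axis=0)          # second-difference (smoothness) operator

alphas = np.logspace(-3, 2, 20)                  # candidate smoothing weights
for a in alphas:
    A = np.vstack([G, a * L])                    # augmented system: fit data + penalize roughness
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"alpha={a:8.3g}  misfit={np.linalg.norm(G @ m - d):6.3f}"
          f"  roughness={np.linalg.norm(L @ m):6.3f}")
# The smoothing weight is then picked at the "corner" of the misfit-roughness curve, often by eye.
```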
Article
Full-text available
Large earthquakes are usually modeled with simple planar fault surfaces or a combination of several planar fault segments. However, in general, earthquakes occur on faults that are non‐planar and exhibit significant geometrical variations in both the along‐strike and down‐dip directions at all spatial scales. Mapping of surface fault ruptures and high‐resolution geodetic observations are increasingly revealing complex fault geometries near the surface, and accurate locations of aftershocks often indicate geometrical complexities at depth. With better geodetic data and observations of fault ruptures, more details of complex fault geometries can be estimated, resulting in more realistic fault models of large earthquakes. To address this topic, we here parametrize non‐planar fault geometries with a set of polynomial parameters that allow for both along‐strike and down‐dip variations in the fault geometry. Our methodology uses Bayesian inference to estimate the non‐planar fault parameters from geodetic data, yielding an ensemble of plausible models that characterize the uncertainties of the non‐planar fault geometry and the fault slip. The method is demonstrated using synthetic tests considering slip spatially distributed on a single continuous finite non‐planar fault surface with varying dip and strike angles in both the down‐dip and along‐strike directions. The results show that fault‐slip estimations can be biased when a simple planar fault geometry is assumed in the presence of significant non‐planar geometrical variations. Our method can help to model earthquake fault sources in a more realistic way and may be extended to include multiple non‐planar fault segments or other geometrical fault complexities.
... However, there are few applications of the approach to estimate fault slip distributions; for example, Dettmer et al. (2014) and Hallo and Gallovič (2020) investigated a fault slip distribution using seismic wave data. Amey et al. (2019) estimated a fault slip distribution using geodetic observational data, utilizing the rj-MCMC technique and a von Karman regularization to restrict the number of sub-faults with non-zero slip. This is one of the few applications of the trans-dimensional approach to geodetic data, and thus the application has not been thoroughly tested and there is plenty of scope for further investigation. ...
Article
Full-text available
Geodetic fault slip inversions have generally been performed by employing a least squares method with a spatially uniform smoothing constraint. However, this conventional method has various problems: difficulty in strictly estimating non‐negative solutions, the assumption that unknowns follow Gaussian distributions, unsuitability for expressing spatially non‐uniform slip distributions, and the high calculation cost of optimizing many hyper‐parameters. Here, we have developed a trans‐dimensional geodetic slip inversion method using the reversible‐jump Markov chain Monte Carlo (rj‐MCMC) technique to overcome these problems. Because sub‐fault locations were parameterized by the Voronoi partition and were optimized in our approach, we can estimate a slip distribution without the need for spatially uniform smoothing constraints. Moreover, we introduced scaling factors for the observational errors. We applied the method to synthetic data and to the actual geodetic observational data associated with the 2011 Tohoku‐oki earthquake and found that the method successfully reproduced the target slip distributions, including a spatially non‐uniform slip distribution. The method provided posterior probability distributions for the unknowns, which can express a non‐Gaussian distribution such as large slip with low probability. The estimated scaling factors properly adjusted the initial observational errors and provided a reasonable slip distribution. Additionally, we found that checkerboard resolution tests were useful for assessing the sensitivity of the observational data when performing the rj‐MCMC method. It is concluded that the developed method is a powerful technique for solving the problems of the conventional inversion method and for flexibly expressing fault‐slip distributions while considering their complicated uncertainties.
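As a rough illustration of the Voronoi parameterization (a sketch of the general idea, not the authors' code), each fine sub-fault patch inherits the slip of its nearest Voronoi nucleus; the patch grid, nucleus count, and slip values below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fine grid of sub-fault patch centers (along-strike x, down-dip z), in km.
xs, zs = np.meshgrid(np.arange(0.0, 40.0, 2.0), np.arange(0.0, 20.0, 2.0))
patches = np.column_stack([xs.ravel(), zs.ravel()])

# One state of the chain: a (variable) number of Voronoi nuclei, each carrying a slip value.
nuclei = rng.uniform([0.0, 0.0], [40.0, 20.0], size=(6, 2))
slip_at_nuclei = rng.uniform(0.0, 3.0, size=len(nuclei))   # slip in metres, invented

# Each patch takes the slip of its nearest nucleus -- the Voronoi partition of the fault.
nearest = np.argmin(np.linalg.norm(patches[:, None, :] - nuclei[None, :, :], axis=-1), axis=1)
slip_field = slip_at_nuclei[nearest].reshape(xs.shape)
print(slip_field.round(2))
```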
... Several models can contribute to the data fit and the associated uncertainties can be considered simultaneously (Sambridge, 2013; Sambridge et al., 2013). Transdimensional methods have been applied to geoscientific problems in general (Sambridge, 2013; Bodin et al., 2012) and to slip in finite fault modeling in particular (Dettmer et al., 2014; Amey et al., 2019), where they are employed to circumvent a priori regularization. Transdimensional models adapt according to Bayesian parsimony locally to the data (Bodin et al., 2012), reducing the need for any subjective choices on the model complexity or regularizations (Dettmer et al., 2014). ...
Thesis
Full-text available
Earthquake source modeling has emerged from the need to be able to describe and quantify the mechanism and physical properties of earthquakes. Investigations of earthquake rupture and fault geometry require the testing of a large number of potential earthquake source models. Earthquakes often rupture across more than one fault segment. If such rupture segmentation occurs on a significant scale, a simple model may not represent the rupture process well. This thesis focuses on the data-driven inclusion of earthquake rupture segmentation into earthquake source modeling. The developed tools and the modeling are based on the joint use of far-field seismological waveforms and near-field geodetic Interferometric Synthetic Aperture Radar surface displacement maps to characterise earthquake sources robustly with rigorous consideration of data and modeling errors. A strategy based on information theory is developed to determine the appropriate model complexity to represent the available observations in a data-driven way. This is done in consideration of the uncertainties in the determined source mechanisms by investigating the inferences of the full Bayesian model ensemble. Application to the datasets of four earthquakes indicated that the inferred source parameters are systematically biased by the choice of model complexity. This might have effects on follow-up analyses, e.g., regional stress field inversions and seismic hazard assessments. Further, two methods were developed to provide data-driven, model-independent constraints to inform a kinematic earthquake source optimization about earthquake source parameter prior estimates. The first method is a time-domain multi-array backprojection of teleseismic data with empirical traveltime corrections to infer the spatio-temporal evolution of the rupture. This enables detection of potential rupture segmentation based on the occurrence of coherent high-frequency sources during the rupture process. The second developed method uses image analysis methods on satellite-radar-measured surface displacement maps to infer modeling constraints on rupture characteristics (e.g. strike and length) and the number of potential segments. These two methods provide model-independent constraints on fault location, dimension, orientation and rupture timing. The inferred source parameter constraints are used to constrain an inversion for the source mechanism of the 2016 Muji Mw 6.6 earthquake, a segmented and bilateral strike-slip earthquake. As a case study to further investigate a depth-segmented fault system and the occurrence of co-seismic rupture segmentation in such a system, the 2008-2009 Qaidam sequence, with co-seismic and post-seismic displacements, is investigated. The Qaidam 2008-2009 earthquake sequence in northeast Tibet involved two reverse-thrust earthquakes and a postseismic signal of the 2008 earthquake. The 2008 Qaidam earthquake is modeled as a deep, shallow-dipping earthquake with no indication of rupture segmentation. The 2009 Qaidam earthquake is modeled on three distinct south-dipping high-angle thrusts, with a bilateral and segmented rupture process. A good agreement between co-seismic surface displacement measurements and coherent seismic energy emission in the backprojection results is determined. Finally, a combined framework is proposed which applies all the developed methods and tools in an informed parallel modeling of several earthquake source model complexities.
This framework allows for improved routine earthquake source modeling that accounts for rupture segmentation. Overall, this thesis provides an improvement for earthquake source analyses and the development of modeling standards for the robust determination of second-order earthquake source parameters.
... Bayesian statistical methods are widely applied in earth science and geochronology in order to incorporate prior information and calculate the posterior distribution for a set of parameters given quantitative measurements, using a mathematical model (Bronk Ramsey, 2009; Montoya-Noguera & Wang, 2017). Bayesian inversions can also be transdimensional, meaning that the number of model parameters ("unknowns") for which we solve is allowed to vary, increasing or decreasing the complexity of the model depending on what is required by the data (Amey et al., 2019; Bodin & Sambridge, 2009; Dettmer et al., 2010; Green, 1995; Sambridge et al., 2006). Changes in slip rate can be added or removed, and the number of changes in slip rate is a hyperparameter, because it varies the number of model parameters. ...
... The number of slip rate changes is limited by a reversible-jump algorithm that favors simple solutions (Sambridge et al., 2006). Bayesian techniques are often applied to deal with uncertainty associated with limited data (Amey et al., 2019; Bronk Ramsey, 2009; Montoya-Noguera & Wang, 2017). Several different Bayesian MCMC approaches have been developed for modeling cosmogenic data from fault scarps (Beck et al., 2018; Tesson & Benedetti, 2019; Tikhomirov et al., 2011). ...
Article
Full-text available
Cosmogenic exposure data can be used to calculate time‐varying fault slip rates on normal faults with exposed bedrock scarps. The method relies on assumptions related to how the scarp is preserved, which should be consistent at multiple locations along the same fault. Previous work commonly relied on cosmogenic data from a single sample locality to determine the slip rate of a fault. Here we show that by applying strict sampling criteria and using geologically informed modeling parameters in a Bayesian‐inference Markov chain Monte Carlo method, similar patterns of slip rate changes can be modeled at multiple sites on the same fault. Consequently, cosmogenic data can be used to resolve along‐strike fault activity. We present cosmogenic ³⁶Cl concentrations from seven sites on two faults in the Italian Apennines. The average slip rate varies between sites on the Campo Felice Fault (0.84 ± 0.23 to 1.61 ± 0.27 mm yr⁻¹), and all sites experienced a period of higher than average slip rate between 0.5 and 2 ka and a period of lower than average slip rate before 3 ka. On the Roccapreturo fault, slip rate in the center of the fault is 0.55 ± 0.11 and 0.35 ± 0.05 mm yr⁻¹ at the fault tip near a relay zone. The estimated time since the last earthquake is the same at each site along the same fault (631 ± 620 years at Campo Felice and 2,603 ± 1,355 years at Roccapreturo). These results highlight the potential for cosmogenic exposure data to reveal the detailed millennial history of earthquake slip on active normal faults.