A simulation-based approach to forecasting the next great San Francisco earthquake.

Center for Computational Science and Engineering, and Department of Geology, University of California-Davis, Davis, CA 95616, USA.
Proceedings of the National Academy of Sciences (Impact Factor: 9.81). 11/2005; 102(43):15363-7. DOI: 10.1073/pnas.0507528102
Source: PubMed

ABSTRACT In 1906 the great San Francisco earthquake and fire destroyed much of the city. As we approach the 100-year anniversary of that event, a critical concern is the hazard posed by another such earthquake. In this article, we examine the assumptions presently used to compute the probability of occurrence of these earthquakes. We also present the results of a numerical simulation of interacting faults on the San Andreas system. Called Virtual California, this simulation can be used to compute the times, locations, and magnitudes of simulated earthquakes on the San Andreas fault in the vicinity of San Francisco. Of particular importance are results for the statistical distribution of recurrence times between great earthquakes, results that are difficult or impossible to obtain from a purely field-based approach.
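The abstract's central idea is that a simulator yields an empirical distribution of recurrence times, from which conditional earthquake probabilities can be computed. The paper's actual Virtual California output is not reproduced here, so the sketch below draws hypothetical recurrence intervals from an assumed Weibull distribution (the parameters and the 100-year elapsed time are illustrative assumptions, not values from the paper) and estimates a 30-year conditional probability empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Virtual California output: a long list of
# simulated inter-event (recurrence) times, in years, for one fault segment.
# Drawn here from an assumed Weibull distribution purely for illustration.
recurrence = 200.0 * rng.weibull(2.0, size=100_000)

def conditional_probability(intervals, elapsed, horizon):
    """P(next event within `horizon` years | `elapsed` years since the last),
    estimated empirically from the simulated recurrence intervals."""
    survived = intervals[intervals > elapsed]   # cycles that lasted at least `elapsed` years
    if survived.size == 0:
        return float("nan")
    return float(np.mean(survived <= elapsed + horizon))

# Example: probability of a great earthquake in the next 30 years,
# given roughly 100 years since the last one (illustrative numbers).
p = conditional_probability(recurrence, elapsed=100.0, horizon=30.0)
print(f"30-yr conditional probability: {p:.2f}")
```

This is the advantage the abstract emphasizes: a simulation produces thousands of synthetic earthquake cycles, so the recurrence-time distribution can be "measured" directly, whereas the field record near San Francisco contains only a handful of great events.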

  • ABSTRACT: As we approach the hundredth anniversary of the great San Francisco earthquake, a timely question is the extent of the hazard posed by another such event, and how this hazard may be estimated. We present an analysis of this problem based upon a numerical simulation, Virtual California, that includes many of the physical processes known to be important in earthquake dynamics. Virtual California is a "backslip model", meaning that the long-term rate of slip on each fault segment in the model is matched to the observed rate. The faults in the model interact by means of quasistatic elasticity, and frictional dynamics are based on laboratory friction experiments. Constraints on the input parameters for these models originate from field data and typically include realistic fault system topologies, realistic long-term slip rates, and realistic frictional parameters. Outputs from the simulations include synthetic earthquake sequences and space-time patterns, together with associated surface deformation and strain patterns similar to those seen in nature. Our simulations can be used to compute, or "measure", empirical statistical distributions (probability density functions: PDFs) that characterize the activity. Examples include PDFs for recurrence intervals on selected faults. These PDFs can be used to construct probabilistic seismic forecasts for selected faults or groups of faults. The major difference between the simulation-based method and current statistical approaches lies in the way in which inter-event times and probabilities for joint failure of multiple segments are computed. In our simulation-based approach, these times and probabilities come from the modelling of fault interactions and laboratory-based friction laws. Space-time patterns of activity can be defined based upon Karhunen-Loeve expansions (Principal Component Analysis), which lead to a deeper understanding of fundamental patterns of correlated activity in the fault system. An example of this type of result is our discovery that the two most significant modes of activity represent coordinated events on 1) the northern San Andreas-Hayward-Calaveras system and 2) the Big Bend region of the San Andreas together with the Garlock fault. We also find that the creeping section tends to decouple activity in northern and southern California.
  • ABSTRACT: Numerical simulations are routinely used for weather forecasting, and it is clearly desirable to develop analogous simulation models for regional seismicity. One model developed for this purpose is the Virtual California (VC) simulation. In order to better understand the behaviour of seismicity simulations, we apply VC to three relatively simple problems involving a straight strike-slip fault. In problem I, we divide the fault into two segments with different mean earthquake interval times. In problem II, we add a strong central (asperity) segment, and in problem III we change this to a weak central segment. In all cases we observe limit-cycle behaviour with a wide range of periods. We also show that the historical sequence of 13 great earthquakes along the Nankai Trough, Japan, exhibits limit-cycle behaviour very similar to that of our asperity model.
    Geophysical Journal International 02/2010; 180(2):734-742. · 2.85 Impact Factor
  • ABSTRACT: We discuss the long-standing question of whether the probability of large earthquake occurrence (magnitudes m > 6.0) is highest during periods of smaller-event activation or during periods of smaller-event quiescence. The physics of the activation model are based on an idea from the theory of nucleation: that a small-magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model are based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced by increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion: here, the activation forecast model is preferred to the quiescence model, presumably because the BASS component of the model is essentially a model for activated seismicity. These results lead to the (weak) conclusion that California seismicity may be characterized more by quiescence than by activation, and that BASS-ETAS models may not be robustly applicable to the real data.
    Geophysical Journal International 01/2011; 187:225-236. · 2.85 Impact Factor
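The first related abstract above identifies correlated fault activity via a Karhunen-Loeve expansion, which for discretely sampled data is equivalent to Principal Component Analysis: the eigenvectors of the covariance matrix of the space-time activity record are the spatial modes, ranked by eigenvalue. The sketch below is not the paper's analysis; it builds a hypothetical activity matrix (time windows by fault segments) with two assumed groups of co-moving segments, standing in for simulator output, and recovers the modes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for simulator output: activity (e.g. moment release)
# on 10 fault segments over 500 time windows. Two groups of segments are
# driven by shared underlying signals, mimicking correlated fault activity.
n_t, n_seg = 500, 10
drivers = rng.normal(size=(n_t, 2))
mixing = np.zeros((2, n_seg))
mixing[0, :5] = 1.5        # segments 0-4 move together (stronger mode)
mixing[1, 5:] = 1.0        # segments 5-9 move together (weaker mode)
activity = drivers @ mixing + 0.1 * rng.normal(size=(n_t, n_seg))

# Karhunen-Loeve expansion = PCA: eigenvectors of the covariance matrix of
# the mean-removed activity are the spatial modes; eigenvalues rank how much
# correlated activity each mode explains.
centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / (n_t - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("fraction of variance in the two leading modes:", explained[:2].sum())
```

In the paper's application the analogous leading modes turned out to correspond to the northern San Andreas-Hayward-Calaveras system and to the Big Bend/Garlock region; here the two planted segment groups play that role.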

