A simulation-based approach to forecasting the next
great San Francisco earthquake
J. B. Rundle*†‡, P. B. Rundle*, A. Donnellan†, D. L. Turcotte‡§, R. Shcherbakov*, P. Li¶, B. D. Malamud∥, L. B. Grant**,
G. C. Fox††, D. McLeod‡‡, G. Yakovlev*, J. Parker¶, W. Klein§§, and K. F. Tiampo¶¶
*Center for Computational Science and Engineering, and§Department of Geology, University of California, Davis, CA 95616;†Earth Space Science Division,
and ¶Exploration Systems Autonomy Section, Jet Propulsion Laboratory, Pasadena, CA 91125; ∥Department of Geography, King's College London,
London WC2R 2LS, United Kingdom; **Department of Environmental Health, Science, and Policy, University of California, Irvine, CA 92697;
††Departments of Computer Science and Physics, Indiana University, Bloomington, IN 47405;‡‡Department of Computer Science, University of Southern
California, Los Angeles, CA 90089;§§Department of Physics, Boston University, Boston, MA 02215; and¶¶Departments of Earth Science and Biological and
Geological Sciences, University of Western Ontario, London, ON, Canada N6A 5B7
Contributed by D. L. Turcotte, August 31, 2005
In 1906 the great San Francisco earthquake and fire destroyed
much of the city. As we approach the 100-year anniversary of that
event, a critical concern is the hazard posed by another such
earthquake. In this article, we examine the assumptions presently
used to compute the probability of occurrence of these earth-
quakes. We also present the results of a numerical simulation of
interacting faults on the San Andreas system. Called Virtual Cali-
fornia, this simulation can be used to compute the times, locations,
and magnitudes of simulated earthquakes on the San Andreas
fault in the vicinity of San Francisco. Of particular importance are
results for the statistical distribution of recurrence times between
great earthquakes, results that are difficult or impossible to obtain
from a purely field-based approach.
hazards | Weibull distribution
The great San Francisco earthquake (April 18, 1906) and subsequent fires killed ≈3,000 persons and destroyed much of the city, leaving 225,000 of 400,000 inhabitants homeless. The
1906 earthquake occurred on a 470-km segment of the San
Andreas fault that runs from San Juan Bautista north to Cape
Mendocino (Fig. 1) and is estimated to have had a moment
magnitude m ≈ 7.9 (1). Observations of surface displacements
across the fault ranged from 2.0 to 5.0 m (2). As we approach the
100th anniversary of the great San Francisco earthquake, timely questions are the extent of the hazard posed by another such event and how this hazard can be estimated.
The San Andreas fault is the major boundary between the
Pacific and North American plates, which move past each other
at an average rate of 49 mm/yr (3), implying that to accumulate
2.0–5.0 m of displacement 40–100 years are needed. One of the
simplest hypotheses for the recurrence of great earthquakes in
the San Francisco area is that they will occur at approximately equal intervals. Because nearly a century has elapsed since 1906, the next earthquake may be imminent. However, there are two
problems with this simple ‘‘periodic’’ hypothesis. The first is that
it is now recognized that only a fraction of the relative displace-
ment between the plates occurs on the San Andreas fault proper.
The remaining displacement occurs on other faults in the San
Andreas system. Hall et al. (4) concluded that the mean dis-
placement rate on just the northern part of the San Andreas
Fault is closer to 24 mm/yr. With the periodic hypothesis this
would imply recurrence intervals of 80–200 years.
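The interval arithmetic above is simple to check (a sketch; the rates and slips are the values quoted in the text):

```python
# Back-of-envelope check of the recurrence intervals quoted above:
# time required to accumulate 2.0-5.0 m of slip at a given long-term rate.
def interval_years(slip_m, rate_mm_per_yr):
    return slip_m * 1000.0 / rate_mm_per_yr

for rate in (49.0, 24.0):   # full plate rate vs. northern San Andreas proper
    lo = interval_years(2.0, rate)
    hi = interval_years(5.0, rate)
    print(f"{rate} mm/yr: {lo:.0f}-{hi:.0f} years")
# 49 mm/yr gives ~41-102 years (quoted as 40-100);
# 24 mm/yr gives ~83-208 years (quoted as 80-200).
```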
The second and more serious problem with the periodic
hypothesis involves the existence of complex interactions be-
tween the San Andreas Fault and other adjacent faults. It is now
recognized (5–7) that these interactions lead to chaotic and complex nonperiodic behavior, so that exact predictions of future earthquakes are not possible; only probabilistic hazard forecasts can be made. For the past 15 years a purely
statistical approach has been used by the Working Group on
California Earthquake Probabilities (WGCEP) (8–11) to make
risk assessments for northern California. Its statistical approach
is a complex, collaborative process that uses observational data
describing earthquake slips, lengths, creep rates, and other
information on regional faults as inputs to a San Francisco Bay
Regional fault model. Using its forecast algorithm, the WGCEP
(11) found that the conditional probability for the occurrence of
an earthquake having m ≥ 6.7 during the 30-year period
2002–2031 is 18.2%.
As described in the WGCEP report (11), the critical assump-
tion in computing the hazard probability is the choice of a
probability distribution, or renewal model. The WGCEP study
used the Brownian passage time (BPT) distribution. Previous
studies used the log normal (LN) (8–10) and the Weibull
distributions. The means and standard deviations of the distri-
butions for event times on the fault segments were constrained
by geological and seismological observations.
In this article, we present the results of a topologically realistic
numerical simulation of earthquake occurrence on the San
Andreas fault in the vicinity of San Francisco. This simulation,
called Virtual California, includes fault system physics, such as
friction laws developed with insights from laboratory experi-
ments and field data. Simulation-based approaches to forecast-
ing and prediction of natural phenomena have been used with
considerable success for weather and climate. When carried out
on a global scale these simulations are referred to as general
circulation models (12, 13). Turbulent phenomena are repre-
sented by parameterizations of the dynamics, and the equations
are typically solved over spatial grids having length scales of tens
to hundreds of kilometers. Although even simple forms of the
fluid dynamics equations are known to display chaotic behavior
(5), general circulation models have repeatedly shown their
value. In many cases ensemble forecasts are carried out, which
use simulations computed with multiple models to test the
robustness of the forecasts.
The Virtual California simulation, originally developed by
Rundle (14), includes stress accumulation and release, as well as
stress interactions between the San Andreas and other adjacent
faults. The model is based on a set of mapped faults with
estimated slip rates, prescribed long-term rates of fault slip,
parameterizations of friction laws based on laboratory experi-
ments and historic earthquake occurrence, and elastic interactions. An updated version of Virtual California (15–17) is used in this article.

Abbreviations: WGCEP, Working Group on California Earthquake Probabilities; BPT, Brownian passage time; LN, log normal.
‡To whom correspondence may be addressed. E-mail: email@example.com.
© 2005 by The National Academy of Sciences of the USA. PNAS, October 25, 2005, vol. 102, no. 43.

The faults in the model are those that have been active in recent geologic history. Earthquake activity data and
slip rates on these model faults are obtained from geologic
databases of earthquake activity on the northern San Andreas
fault. A similar type of simulation has been developed by Ward
and Goes (18) and Ward (19). A consequence of the size of the
fault segments used in this version of Virtual California is that
the simulations do not generate earthquakes having magnitudes
less than about m = 5.8. Stress accumulation on a model fault
segment occurs because of the accumulation of a slip deficit at
the prescribed slip rate of the segment. The vertical rectangular
fault segments interact elastically, and the interaction coeffi-
cients are computed by means of boundary element methods
(20). Segment slip and earthquake initiation are controlled by a
friction law that has its basis in laboratory-derived physics (17).
Onset of initial instability is controlled by a static coefficient of
friction. Segment sliding, once begun, continues until a residual
stress is reached, plus or minus a random overshoot or under-
shoot of typically 10%. To prescribe the friction coefficients we
use historical displacements in earthquakes having moment
magnitudes m ≥ 5.0 in California during the last 200 years (17).
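The loading and friction rules just described can be sketched as a toy backslip simulator. This is an illustrative construction, not the Virtual California code: the segment count, stress thresholds, loading rates, and interaction coefficients are all arbitrary placeholder values.

```python
import numpy as np

# Toy backslip model: segments accumulate stress at a prescribed loading
# rate, interact elastically, fail at a static threshold, and slide back
# to a residual stress with ~10% random over/undershoot.
rng = np.random.default_rng(0)
N = 50                                   # number of fault segments (arbitrary)
static = np.full(N, 10.0)                # static failure stress (arb. units)
residual = np.full(N, 5.0)               # residual stress after sliding
load_rate = rng.uniform(0.05, 0.15, N)   # per-year backslip loading rate

# Stand-in for boundary-element interaction coefficients: a failing
# segment transfers 20% of its stress drop to each nearest neighbour.
K = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            K[i, j] = 0.2

def run(years):
    stress = rng.uniform(residual, static)   # random initial state
    events = []                              # (year, number of segments failed)
    for year in range(years):
        stress += load_rate                  # steady tectonic loading
        failed = set()
        while True:                          # cascade until nothing fails
            failing = np.where(stress >= static)[0]
            if failing.size == 0:
                break
            for i in failing:
                # slide to residual stress with ~10% over/undershoot
                drop = (stress[i] - residual[i]) * rng.uniform(0.9, 1.1)
                stress[i] -= drop
                stress += K[i] * drop        # elastic stress transfer
                failed.add(i)
        if failed:
            events.append((year, len(failed)))
    return events

events = run(2000)   # 2,000 years of synthetic seismicity
```

Multi-segment events emerge from the stress-transfer cascade; in Virtual California the interaction coefficients come from boundary-element calculations (20) rather than this nearest-neighbour stand-in.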
The topology of Virtual California is shown in Fig. 1 super-
imposed on a LandSat image. The 650 fault segments are
represented by lighter and darker lines. The lighter lines repre-
sent the San Andreas fault, stretching from the Salton trough in
the south to Cape Mendocino in the north. The ‘‘San Francisco
section’’ of the San Andreas fault, ≈250 km in length, is the
section of the fault whose rupture would be strongly felt in San
Francisco and is considered here. Using standard seismological
relationships (21), we estimate that an earthquake having mSF = 7.0 with an average slip of 4 m and a depth of 15 km would
rupture a 20-km length of fault. Earthquakes like these would
produce considerable damage, destruction, and injury in San Francisco.
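The magnitude estimate above follows from the standard relations (21): seismic moment M0 = μAD and the Hanks–Kanamori moment-magnitude formula. A sketch; the shear modulus is a typical crustal value assumed here, not given in the text:

```python
import math

# Seismic moment M0 = mu * A * D (shear modulus x rupture area x slip),
# converted to moment magnitude via Mw = (2/3) * (log10(M0) - 9.05), M0 in N*m.
mu = 3.0e10          # shear modulus, Pa (typical crustal value; assumed)
length = 20e3        # rupture length, m (from the text)
depth = 15e3         # rupture depth, m (from the text)
slip = 4.0           # average slip, m (from the text)

M0 = mu * (length * depth) * slip
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.05)
print(round(Mw, 1))  # ≈ 7.0, consistent with the estimate in the text
```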
Our goal is to forecast waiting times until the next great
earthquake on the yellow section in Fig. 1 of the fault for a
minimum magnitude mSF ≥ 7.0. Using Virtual California, we advance our model in 1-year increments and simulate 40,000 years of earthquake history. We note that although the average slip on a fault segment and the
average recurrence intervals are tuned to match the observed
averages, the variability in the simulations is a result of the fault
interactions. Slip events in the simulations display highly com-
plex behavior, with no obvious regularities or predictability. In
Fig. 2, we show examples of the distribution of earthquakes on
the San Francisco section of the San Andreas fault for a
3,000-year period. Fig. 2 Left shows the slip in each earthquake
as a function of distance along the fault from Fort Ross in the
north to San Juan Bautista in the south. Fig. 2 Right shows the
moment magnitudes of each of the earthquakes.
One output of our simulations is the distribution of surface
displacements caused by each model earthquake. Synthetic
aperture radar interferometry is routinely used to obtain the
coseismic displacements that occur after earthquakes (22). The
displacements associated with two sets of our model earthquakes
are illustrated in Fig. 3 as interferometric patterns. Each inter-
ferometric fringe corresponds to a displacement along the line
of sight to the hypothetical spacecraft of 56 mm.
A quantitative output of our simulations is the statistical distri-
bution of recurrence times t between successive great earth-
quakes on a given fault. For the northern section of the San
Andreas fault near San Francisco, this distribution is required if
the risk of future earthquakes on the fault is to be specified. We
associate the properties of this distribution directly with the
elastic interactions between faults, which are an essential feature of our simulations. Ideally, observed earthquake sequences would be used to establish the statistics of time intervals. However, Savage (23) has argued
convincingly that actual sequences of earthquakes on specified
faults are not long enough to establish the statistics of recurrence
times with the required reliability. We argue that it is preferable
to use numerical simulations to obtain applicable statistics. We
illustrate this approach by using numerical simulations to obtain
recurrence statistics for synthetic earthquakes on the San Fran-
cisco section of the San Andreas fault over 40,000 years.
We consider earthquakes on the section of the northern San
Andreas fault shown in yellow in Fig. 1. Over the 40,000-year
simulation, we obtained 395 simulated mSF ≥ 7.0 events having
an average recurrence interval of 101 years. From the simula-
tions, we measured the distribution of recurrence times t between great earthquakes.
Fig. 1 (caption fragment). The segments are each ≈10 km in length along strike and 15 km in depth. The yellow segments make up the San Francisco section of the San Andreas fault.

Fig. 2. Illustration of simulated earthquakes on the San Francisco section of the San Andreas fault. (Left) Slip along the fault for each earthquake over a 3,000-year period. FR, Fort Ross; SJB, San Juan Bautista. (Right) The corresponding moment magnitude of each of the earthquakes.

www.pnas.org/cgi/doi/10.1073/pnas.0507528102 | Rundle et al.
t is defined as the recurrence time between two successive great earthquakes.
A second important distribution that we will consider is the distribution of waiting times Δt until the next great earthquake, given that the time elapsed since the most recent great earthquake is t0. If we take the time of the last great earthquake to be 1906 and the present to be 2005, we find for San Francisco t0 = 99 years. The waiting time Δt is measured from the present; thus t = t0 + Δt. We will express our results in terms of the cumulative conditional probability P(t, t0) that an earthquake will have occurred by a time t, given that the elapsed time since the last great earthquake is t0 (25).
A probability distribution that has often been applied to
recurrence statistics is the Weibull distribution (26–29), and it is
used here for reasons that we will describe. For the Weibull
distribution, the fraction of the recurrence times P(t) that are less than t can be expressed as

P(t) = 1 − exp[−(t/τ)^β],   [1]

where τ and β are fitting parameters. Sieh et al. (30) fit this
distribution to the recurrence times of great earthquakes on the
southern San Andreas fault obtained from paleoseismic studies
with τ = 166 ± 44.5 years and β = 1.5 ± 0.8. In its extension to
the cumulative conditional probability the Weibull distribution
is given by ref. 31 as

P(t, t0) = 1 − exp[(t0/τ)^β − (t/τ)^β].   [2]
Eq. 2 specifies the cumulative conditional probability that an
earthquake will have occurred at a time t after the last earthquake if the earthquake has not occurred by a time t0 after the last earthquake.
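Eqs. 1 and 2 are easy to evaluate numerically. A sketch, using the Sieh et al. (30) parameters as an example; note that Eq. 2 reduces to Eq. 1 when t0 = 0:

```python
import math

def weibull_cdf(t, tau, beta):
    # Eq. 1: fraction of recurrence times shorter than t
    return 1.0 - math.exp(-((t / tau) ** beta))

def weibull_conditional(t, t0, tau, beta):
    # Eq. 2: probability the event has occurred by time t,
    # given survival to time t0 (reduces to Eq. 1 when t0 = 0)
    return 1.0 - math.exp((t0 / tau) ** beta - (t / tau) ** beta)

tau, beta = 166.0, 1.5   # Sieh et al. (30) fit, southern San Andreas
assert weibull_conditional(120.0, 0.0, tau, beta) == weibull_cdf(120.0, tau, beta)
# for beta > 1, survival to t0 lowers the probability of failure by a fixed date t:
print(weibull_cdf(150.0, tau, beta), weibull_conditional(150.0, 100.0, tau, beta))
```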
Fig. 3. Interferometric patterns of the coseismic deformations associated with two sets of model earthquakes. Each interferometric fringe corresponds to a displacement of 56 mm. (A) First set. (B) Second set.

Fig. 4. Probabilities of occurrence of mSF ≥ 7.0 earthquakes. (a) The wiggly line is the simulation-based cumulative probability P(t) that a great mSF ≥ 7.0 earthquake will have occurred on the San Andreas fault near San Francisco at a recurrence time t years after the last great earthquake with mSF ≥ 7.0. For comparison, we plot three cumulative probability distributions having the same mean μ = 101 years and standard deviation σ = 61 years as the simulation data. The solid line is the best-fitting Weibull distribution; the dashed line is the BPT distribution; and the dotted line is the LN distribution. (b) The wiggly line is the conditional probability P(t0 + 30, t0) that a magnitude mSF ≥ 7.0 event will occur in the next 30 years, given that it has not occurred by a time t0 since the last such event. The solid line is the corresponding conditional probability for the Weibull distribution; the dashed line is for the BPT; and the dotted line is for the LN.
We first consider the type of statistical forecast described in
the WGCEP report (9). In Fig. 4a, the wiggly line is the
cumulative probability P(t) that a simulated great mSF ≥ 7.0 earthquake will have occurred on the San Andreas fault near San Francisco, at a time t after the last such great earthquake. For comparison, we plot three cumulative probability distributions having the same mean μ = 101 years and standard deviation σ = 61 years as the simulation data. In Fig. 4a the solid line is the best-fitting Weibull distribution; the dashed line is the BPT distribution; and the dotted line is the LN distribution. For the Weibull distribution, these values of mean and standard deviation correspond to β = 1.67 and τ = 114 years.
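The correspondence between (mean, standard deviation) and (τ, β) can be checked with the Weibull moment relations μ = τΓ(1 + 1/β) and σ² = τ²[Γ(1 + 2/β) − Γ²(1 + 1/β)], solving for β by bisection on the coefficient of variation (a sketch using only the standard library):

```python
import math

def weibull_params(mean, std):
    """Invert the Weibull moment relations for (tau, beta)."""
    target_cv = std / mean
    def cv(beta):
        g1 = math.gamma(1.0 + 1.0 / beta)
        g2 = math.gamma(1.0 + 2.0 / beta)
        return math.sqrt(g2 - g1 * g1) / g1   # coefficient of variation
    lo, hi = 0.2, 20.0               # cv(beta) is decreasing on this bracket
    for _ in range(100):             # bisection on beta
        mid = 0.5 * (lo + hi)
        if cv(mid) > target_cv:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    tau = mean / math.gamma(1.0 + 1.0 / beta)
    return tau, beta

tau, beta = weibull_params(101.0, 61.0)
# yields tau close to 114 years and beta close to 1.67, the values in the text
```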
In Fig. 4b we show the same type of conditional probability forecast, based on the distributions in Fig. 4a. In Fig. 4b the wiggly line is the simulation-based conditional probability P(t0 + 30, t0) that a magnitude mSF ≥ 7.0 event will occur in the next 30 years, given that it has not occurred during the time t0 since the last such event. For comparison, in Fig. 4b the solid line is the corresponding conditional probability for the Weibull distribution; the dashed line is for the BPT; and the dotted line is for the LN.
It is evident from Fig. 4 that the Weibull distribution describes the simulation data substantially better
than either the BPT or LN distributions. At least in Virtual
California, we can conclude that among these three statistical
distributions, the Weibull distribution is the preferred distribu-
tion to describe the failure of a group of fault segments inter-
acting by means of elastic stress transfer.
The corresponding cumulative conditional distributions of waiting times Δt from our simulations are given in Fig. 5. These are the cumulative conditional probabilities that an earthquake will have occurred at a time t = t0 + Δt after the last earthquake if it has not occurred at a time t0. We remove recurrence times that are less than or equal to t0 and plot the cumulative distribution of the remaining recurrence times. The left-most curve, for t0 = 0 years, is the cumulative distribution of recurrence times P(t) given in Fig. 4a. Cumulative conditional distributions P(t, t0) are given in Fig. 5 for t0 = 25, 50, 75, 100, 125, and 150 years. With the fitting parameters τ and β used to fit Eq. 1 to the cumulative distribution of recurrence times P(t), we again compare the predictions of the Weibull distribution for P(t, t0) from Eq. 2, the smooth curves, with data from our simulations in Fig. 5, the irregular curves. Again good agreement is found.
The data given in Fig. 5 can also be used to determine the waiting times to the next great earthquake Δt = t − t0 corresponding to a specified probability of occurrence, as a function of the time t0 since the last great earthquake. This dependence is given in Fig. 6. The small stars in Fig. 6 are the median waiting times Δt, defined by P(t0 + Δt, t0) = 0.5, to the next great earthquake as a function of the time t0 since the last great earthquake. These stars are the intersections of the dashed line at P = 0.5 in Fig. 5 with the simulation curves. The waiting times given as circles in Fig. 6 are those for P(t, t0) = 0.25 (lower limit of the gray band) and P(t, t0) = 0.75 (upper limit of the gray band). The dashed lines in Fig. 6 are the forecasts of risk based on the Weibull distribution from Eq. 2.
Immediately after a great earthquake, e.g., in 1906, we have t0 = 0 years. At that time, Figs. 5 and 6 indicate that there was a 50% chance of having an earthquake with mSF ≥ 7.0 in the next t = 90 years, i.e., by 1996. In 2006 it will have been 100 years since the last great earthquake occurred in 1906. The cumulative conditional distribution appropriate to the present is that for t0 = 100 years. We see from Figs. 5 and 6 that there is a 50% chance of having a great earthquake (mSF ≥ 7.0) in the next Δt = 45 years (t = 145 years). This is indicated by the large star in Fig. 6. It can also be seen that there is a 25% chance of such an earthquake in the next Δt = 20 years (t = 120 years), and a 75% chance of having such an earthquake in the next Δt = 80 years (t = 180 years). During each year in this period, to a good approximation, there is a 1% chance of having such an earthquake. These estimates are consistent with the information in Fig. 4b, which indicates a 30% chance of an mSF ≥ 7.0 earthquake during the next 30 years.
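The waiting times quoted above can be reproduced by inverting Eq. 2 in closed form for the fitted Weibull parameters (a sketch; τ = 114 years and β = 1.67 are the values given earlier in the text):

```python
import math

# Invert the conditional Weibull (Eq. 2) to get the waiting time dt at
# which P(t0 + dt, t0) reaches a target probability p.
TAU, BETA = 114.0, 1.67   # fitted parameters from the text

def waiting_time(p, t0):
    # from Eq. 2: (t/tau)^beta = (t0/tau)^beta - ln(1 - p)
    t = TAU * ((t0 / TAU) ** BETA - math.log(1.0 - p)) ** (1.0 / BETA)
    return t - t0

for p in (0.25, 0.50, 0.75):
    print(p, round(waiting_time(p, 100.0)))
# gives ~20, 45, and 82 years for t0 = 100 years, close to the
# 20-, 45-, and 80-year values read off Fig. 6 in the text
```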
We see from Figs. 3–6 that the Weibull distribution that fits the distribution of interval times also does an excellent job of fitting the conditional distributions of waiting times. Both in our simulations and in our Weibull fit, the median waiting times
systematically decrease with increases in the time since the last
great earthquake. This is not the case for other distributions that
provide a good fit to interval times (9). Our results therefore
support the use of Weibull distributions to carry out probabilistic
hazard analyses of earthquake occurrences.
Fig. 5. Cumulative conditional probabilities P(t, t0) that a great earthquake will occur on the San Andreas fault near San Francisco at a time t = t0 + Δt years after the last great earthquake, if the last great earthquake occurred t0 years in the past. Results are given for t0 = 0, 25, 50, 75, 100, 125, and 150 years. Also included are fits to the data of the Weibull distribution (Eq. 2).

Fig. 6. Waiting times to the next great earthquake. The small stars (corresponding to the 50% probability of the distributions in Fig. 5) indicate the median waiting times Δt until the next great earthquake as a function of the time t0 since the last great earthquake. The large star indicates the median waiting time (50% probability) from today. The shaded band represents waiting times with 25% probability (lower edge of shaded band) to 75% probability (upper edge of shaded band). The dashed lines are the forecasts using the Weibull distribution in Eq. 2.

There are major differences between the simulation-based forecasts presented here and the statistical forecasts of the WGCEP (11). In our approach, it is not necessary to prescribe a probability distribution of recurrence times. The distribution of recurrence times is obtained directly from simulations, which
include the physics of fault interactions and frictional physics.
Because both methods use the same database for mean fault slip
on fault segments, they give approximately equal mean recur-
rence times. The major difference between the two methods lies
in the way in which recurrence times and probabilities for joint
failure of multiple segments are computed. In our simulation
approach, these times and probabilities come from the modeling
of fault interactions through the inclusion of basic dynamical
processes in a topologically realistic model. In the WGCEP
statistical approach (11), times and probabilities are embedded
in the choice of an applicable probability distribution function,
as well as choices associated with a variety of other statistical
weighting factors describing joint probabilities for multisegment ruptures.
It should be remarked that Fig. 2 indicates that there is a
difference between measurements of ‘‘earthquake recurrence of
a certain magnitude’’ on a fault, and ‘‘earthquake recurrence at
a site’’ on a fault. Specifically, the latter is the quantity that is
measured by paleoseismologists, who would observe very dif-
ferent statistics on the earthquakes shown in Fig. 2 Left if they
made observations at the locations of 50, 100, and 150 km from the northern end of the fault (Fig. 2 Left). In contrast, our statistics apply to the failure of any set of segments on the given section of fault.
A measure of the variability of recurrence times is the coefficient of variation cv of the distribution of values. The coefficient of variation is the ratio of the standard deviation to the mean, cv = σ/μ. For periodic earthquakes, we have σ = cv = 0; for a random (Poisson) distribution of interval times, we have σ = μ and cv = 1. For our simulations of great earthquakes on the San Francisco section of the San Andreas fault, we find cv = 0.6 for earthquakes having mSF ≥ 7.0. These numbers apply to any earthquakes on the fault between Fort Ross and San Juan Bautista, rather than at a point on the fault. As mentioned
previously, Ward and Goes (18) also simulated earthquakes on
the San Andreas fault system. Although the statistics of the
simulated earthquakes produced by their standard physical earth
model (SPEM) are similar to those produced by Virtual California, there are important differences between the two simulation codes. Whereas Virtual California involves rectangular fault segments in an elastic half space, SPEM is a plane-strain computation in an elastic plate of thickness H. The friction laws used in the two simulations are also entirely different. Ward and Goes (18) obtained the statistical properties of earthquake recurrence times for the San Francisco section of the San Andreas fault and found cv = 0.54 for earthquakes with mSF ≥ 7.5. It is also of interest to compare the simulation results with the available statistical distributions of recurrence times for the San Andreas fault. Paleoseismic studies of m ≈ 7 and larger earthquakes on the southern San Andreas fault at Pallett Creek by Sieh et al. (30) indicate seven intervals with μ = 155 years and σ = 109 years, hence cv = 0.70.
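The coefficients of variation discussed here are easy to collect for comparison; the Pallett Creek value is recomputed from Sieh et al.'s mean and standard deviation:

```python
# cv = sigma/mu: cv = 0 is strictly periodic, cv = 1 is Poisson (random).
mu_pc, sigma_pc = 155.0, 109.0   # Pallett Creek intervals, Sieh et al. (30)
cv = {
    "Virtual California (mSF >= 7.0)": 0.60,
    "SPEM, Ward & Goes (mSF >= 7.5)": 0.54,
    "Pallett Creek paleoseismic": round(sigma_pc / mu_pc, 2),   # 0.70
}
for name, value in cv.items():
    print(f"{name}: cv = {value}")
```

All three estimates fall between the periodic and Poisson limits, with the simulated faults somewhat more regular than the paleoseismic record.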
In this article we have examined the statistics of great earth-
quake occurrence on the northern San Andreas fault in the San
Francisco Bay region by using numerical simulations. Previous hazard estimates have been purely statistical. Our approach is analogous to the simulations used
to forecast the weather. An example of the type of statement that
can be made about the seismic hazard is: There exists a 5%
chance of an earthquake with magnitude m ≥ 7.0 occurring on
the San Andreas fault near San Francisco before 2009 and a 55%
chance by 2054. The practical use of statements like this for
hazard estimation using numerical simulations must be validated
by more computations and observations.
This work was supported by Department of Energy, Office of Basic
Energy Sciences Grant DE-FG02-04ER15568 to the University of
California, Davis (to J.B.R. and P.B.R.); National Science Foundation
Grant ATM 0327558 (to D.L.T. and R.S.); and grants from the Com-
putational Technologies Program of the National Aeronautics and Space
Administration’s Earth-Sun System Technology Office to the Jet Pro-
pulsion Laboratory (to J.B.R. and A.D.), the University of California,
Davis (to J.B.R. and P.B.R.), and the University of Indiana (to G.C.F.).
1. Engdahl, E. R. & Villasenor, A. (2002) in International Handbook of Earth-
quake and Engineering Seismology, eds. Lee, W. H. K., Kanamori, H., Jennings,
P. C. & Kisslinger, C. (Academic, Amsterdam), pp. 665–690.
2. Thatcher, W. (1975) J. Geophys. Res. 80, 4862–4872.
3. DeMets, C., Gordon, R. G., Argus, D. F. & Stein, S. (1994) Geophys. Res. Lett.
4. Hall, N. T., Wright, R. H. & Clahan, K. B. (1999) J. Geophys. Res. 104,
5. Lorenz, E. N. (1963) J. Atmos. Sci. 20, 130–141.
6. Rundle, J. B. (1988) J. Geophys. Res. 93, 6237–6254.
7. Turcotte, D. L. (1997) Fractals and Chaos in Geology and Geophysics (Cam-
bridge Univ. Press, Cambridge, U.K.), 2nd Ed.
8. Working Group on California Earthquake Probabilities (1988) Probabilities of
Large Earthquakes Occurring in California on the San Andreas Fault (U.S.
Geologic Survey, Denver), USGS Open File Report 88-398.
9. Working Group on California Earthquake Probabilities (1990) Probabilities of
Large Earthquakes in the San Francisco Bay Region, California (U.S. Geologic
Survey, Denver), USGS Circular 1053.
10. Working Group on California Earthquake Probabilities (1995) Bull. Seism. Soc. Am. 85, 379–439.
11. Working Group on California Earthquake Probabilities (2003) Earthquake
Probabilities in the San Francisco Bay Region (U.S. Geologic Survey, Denver),
USGS Open File Report 03-214.
12. Kodera, K., Matthes, K., Shibata, K., Langematz, U. & Kuroda, Y. (2003)
Geophys. Res. Lett. 30, 1315.
13. Covey, C., Achutarao, K. M., Gleckler, P. J., Phillips, T. J., Taylor, K. E. &
Wehner, M. F. (2004) Global Planet. Change 41, 1–14.
14. Rundle, J. B. (1988) J. Geophys. Res. 93, 6255–6271.
15. Rundle, J. B., Rundle, P. B., Klein, W., Martins, J. S. S., Tiampo, K. F.,
Donnellan, A. & Kellogg, L. H. (2002) Pure Appl. Geophys. 159, 2357–2381.
16. Rundle, P. B., Rundle, J. B., Tiampo, K. F., Martins, J. S. S., McGinnis, S. &
Klein, W. (2001) Phys. Rev. Lett. 8714, 148501.
17. Rundle, J. B., Rundle, P. B., Donnellan, A. & Fox, G. (2004) Earth Planets
Space 56, 761–771.
18. Ward, S. N. & Goes, S. D. B. (1993) Geophys. Res. Lett. 20, 2131–2134.
19. Ward, S. N. (2000) Bull. Seismol. Soc. Am. 90, 370–386.
20. Brebbia, C. A., Tadeu, A. & Popov, V., eds. (2002) Boundary Elements XXIV,
24th International Conference on Boundary Element Methods (WIT Press, Southampton, U.K.).
21. Kanamori, H. & Anderson, D. L. (1975) Bull. Seism. Soc. Am. 65, 1073–1096.
22. Massonnet, D., Rossi, M., Carmona, C., Adragna, F., Peltzer, G. & Feigl, K.
(1993) Nature 364, 138–142.
23. Savage, J. C. (1994) Bull. Seism. Soc. Am. 84, 219–221.
24. Matthews, M. V., Ellsworth, W. L. & Reasenberg, P. A. (2002) Bull. Seism. Soc.
Am. 92, 2233–2250.
25. Wesnousky, S. G., Scholz, C. H., Shimazaki, K. & Matsuda, T. (1984) Bull.
Seismol. Soc. Am. 74, 687–708.
26. Hagiwara, Y. (1974) Tectonophysics 23, 313–318.
27. Rikitake, T. (1976) Tectonophysics 35, 335–362.
28. Rikitake, T. (1982) Earthquake Forecasting and Warning (Reidel, Dordrecht, The Netherlands).
29. Utsu, T. (1984) Bull. Earthquake Res. Inst. 59, 53–66.
30. Sieh, K., Stuiver, M. & Brillinger, D. (1989) J. Geophys. Res. 94, 603–623.
31. Sornette, D. & Knopoff, L. (1997) Bull. Seismol. Soc. Am. 87, 789–798.