Time-dependent earthquake forecasts depend on the frequency and number of past events and on the time since the last event. Unfortunately, only a few past events are historically documented along subduction zones, where forecasting relies mostly on paleoseismic catalogs. We address the effects of dating uncertainty and catalog completeness on probabilistic estimates of forthcoming earthquakes using a 3.6-ka-long catalog that includes 11 paleoseismic and 1 historic (Mw ≥ 8.6) earthquakes preceding the great 1960 Chile earthquake. We set the clock to 1940 and estimate the conditional probability of a future event using five different recurrence models. We find that the Weibull model predicts the highest forecasting probabilities: 44% and 72% in the next 50 and 100 yr, respectively. Uncertainties in earthquake chronologies due to missing events and dating uncertainties may change forecast probabilities by up to 50%. Our study provides a framework for using paleoseismic records in seismic hazard assessments, including epistemic uncertainties.
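
The time-dependent forecast described above amounts to a conditional probability computed from a fitted interevent-time distribution. A minimal sketch for the Weibull case follows; the shape and scale parameters and the elapsed time are illustrative assumptions, not the paper's fitted values.

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull CDF: F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def conditional_probability(elapsed, window, shape, scale):
    """P(event in [elapsed, elapsed + window] | no event by `elapsed`)."""
    survival = 1.0 - weibull_cdf(elapsed, shape, scale)
    return (weibull_cdf(elapsed + window, shape, scale)
            - weibull_cdf(elapsed, shape, scale)) / survival

# Illustrative parameters only: mean recurrence of a few centuries,
# shape > 1 so that the hazard rate grows with elapsed time.
shape, scale = 2.0, 300.0
elapsed = 123.0  # hypothetical years of quiescence at the chosen clock-start
print(conditional_probability(elapsed, 50.0, shape, scale))
print(conditional_probability(elapsed, 100.0, shape, scale))
```

With a shape parameter above 1 the conditional probability increases with the forecast window and with elapsed quiescence, which is the behavior that distinguishes renewal models from a memoryless Poisson forecast.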

... We investigate the properties of the frequency-size scaling of earthquake magnitudes and the exponent of the Gutenberg-Richter law in particular using the maximum likelihood approach with correction for binned data [20]. We also analyze the cumulative stress drop on the fault over time (sum of the stress drops of each event at each step of the simulation), the average stress on the interface and its standard deviation over time, the cumulative magnitude (in terms of the total size of different ruptures occurring at each step of the simulation) and the interevent time distribution of large events (an arbitrary threshold for the size of the smallest considered earthquake is chosen), which is expected to be overall well fitted by a Weibull distribution [21,22]. This approach enables us to study the effect of the spatial scale on simulated seismic activity: we can indeed change the size of the grid, L, so that we characterize the properties of interfaces depending on their extension and the implications on physical parameters. ...

An accurate assessment of seismic hazards requires a combination of earthquake physics and statistical analysis. Because of the limits in the investigation of the seismogenic source and of the short temporal intervals covered by earthquake catalogs, laboratory experiments have been playing a crucial role in improving our understanding of earthquake phenomena. However, differences are observed between acoustic emissions in the lab, events in small, regulated systems (e.g., mines) and natural seismicity. One of the most pressing issues concerns the role of mechanical parameters and how they affect seismic activity depending on boundary conditions and on the spatio-temporal scales. Here, we focus on fault friction. There is evidence inferred from geodesy that most large faults are weak, characterized by very low static friction coefficients that are not compatible with those of smaller faults and laboratory experiments. We propose a possible explanation: static friction decreases with fault size, depending also on a few physical properties (e.g., faulting fractal dimension), while dynamic coefficients are not affected by the spatial scale. Mathematical derivations are grounded on hypotheses validated using a simple model for earthquake occurrence based on fracture mechanics and able to reproduce the fundamental statistical properties of seismicity.

An accurate assessment of seismic hazards requires a combination of earthquake physics and statistical analysis. Because of the limits in the investigation of the seismogenic source and of the short temporal intervals covered by earthquake catalogs, laboratory experiments have been playing a crucial role in improving our understanding of earthquake phenomena. However, differences are observed between acoustic emissions in the lab and seismicity. One of the most pressing issues concerns the role of mechanical parameters (e.g., fault friction) and how they affect seismic activity depending on boundary conditions and on the spatial and temporal scales. Here, we propose a model for earthquake occurrence based on seismological evidence in agreement with fracture mechanics. We show that, although extremely simple, it not only reproduces the fundamental properties of seismicity like other models, but can also explain some peculiar phenomena in observational and statistical seismology (e.g., low stress drops and the origin of characteristic earthquakes) while being more coherent with earthquake physics. It also provides hints for multiscale modeling of physical parameters. Static and dynamic friction coefficients are investigated. While the latter is not affected by the spatial scale, static frictional resistance decreases with fault size, converging to the dynamic value.

A new history of great earthquakes (and their tsunamis) for the central and southern Cascadia subduction zone shows more frequent (17 in the past 6700 yr) megathrust ruptures than previous coastal chronologies. The history is based on along-strike correlations of Bayesian age models derived from evaluation of 554 radiocarbon ages that date earthquake evidence at 14 coastal sites. We reconstruct a history that accounts for all dated stratigraphic evidence with the fewest possible ruptures by evaluating the sequence of age models for earthquake or tsunami contacts at each site, comparing the degree of temporal overlap of correlated site age models, considering evidence for closely spaced earthquakes at four sites, and hypothesizing only maximum-length megathrust ruptures. For the past 6700 yr, recurrence for all earthquakes is 370–420 yr. But correlations suggest that ruptures at ∼1.5 ka and ∼1.1 ka were of limited extent (<400 km). If so, post-3-ka recurrence for ruptures extending throughout central and southern Cascadia is 510–540 yr. But the range in the times between earthquakes is large: two instances may be ∼50 yr, whereas the longest are ∼550 and ∼850 yr. The closely spaced ruptures about 1.6 ka may illustrate a pattern common at subduction zones of a long gap ending with a great earthquake rupturing much of the subduction zone, shortly followed by a rupture of more limited extent. The ruptures of limited extent support the continued inclusion of magnitude-8 earthquakes, with longer ruptures near magnitude 9, in assessments of seismic hazard in the region.

The Cascadia subduction zone (CSZ) is an exceptional geologic environment for recording evidence of land-level changes, tsunamis, and ground motion that reveals at least 19 great megathrust earthquakes over the past 10 kyr. Such earthquakes are among the most impactful natural hazards on Earth, transcend national boundaries, and can have global impact. Reducing the societal impacts of future events in the US Pacific Northwest and coastal British Columbia, Canada, requires improved scientific understanding of megathrust earthquake rupture, recurrence, and corresponding hazards. Despite substantial knowledge gained from decades of research, large uncertainties remain about the characteristics and frequencies of past CSZ earthquakes. In this review, we summarize geological, geophysical, and instrumental evidence relevant to understanding megathrust earthquakes along the CSZ and associated uncertainties. We discuss how the evidence constrains various models of great megathrust earthquake recurrence in Cascadia and identify potential paths forward for the earthquake science community.

- Despite outstanding geologic records of past megathrust events, large uncertainty of the magnitude and frequency of CSZ earthquakes remains.
- This review outlines current knowledge and promising future directions to address outstanding questions on CSZ rupture characteristics and recurrence.
- Integration of diverse data sets with attention to the geologic processes that create different records has potential to lead to major progress.
Expected final online publication date for the Annual Review of Earth and Planetary Sciences, Volume 49 is May 2021.

Elastic rebound theory forms the basis of the standard earthquake cycle model and predicts large earthquakes to recur regularly through cycles of strain accumulation and release. Yet few individual earthquake records are sufficiently long to test the theory. Here we characterize the distribution of earthquake interevent times from a global compilation of 80 long-term records. We find that large earthquakes recur more regularly than a random Poisson process on individual fault segments. The majority of Earth's well-studied faults shows weakly periodic and uncorrelated large earthquake recurrence, consistent with the expectations of elastic rebound theory. However, many low activity-rate (annual occurrence rates < 2 × 10⁻⁴) faults show random or clustered earthquake recurrence, which cannot be explained by elastic rebound theory. Plain Language Summary To help society prepare for earthquakes, we use the record of previous earthquakes to forecast the chance of future large earthquakes. A key question is whether large earthquakes on a particular fault recur regularly, randomly, or cluster together in time. Here we compile records from 80 different studies of prehistoric earthquakes. We show that large earthquakes on a particular fault recur more regularly than random, except in regions that experience few earthquakes. This means that for regions that experience many earthquakes, we can forecast future large earthquake occurrence with some degree of skill. In contrast, forecasting large earthquake occurrence in places that experience few earthquakes, and where we expect them least, remains exceptionally difficult.
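
The regularity classes used in compilations like this are commonly summarized by the coefficient of variation (CoV) of interevent times: well below 1 suggests quasi-periodic recurrence, near 1 is Poisson-like, and above 1 indicates clustering. A minimal sketch, with hypothetical event dates:

```python
import statistics

def coefficient_of_variation(event_times):
    """CoV of interevent times: < 1 is more regular than Poisson,
    ~1 is Poisson-like, > 1 indicates temporal clustering."""
    intervals = [t1 - t0 for t0, t1 in zip(event_times, event_times[1:])]
    return statistics.pstdev(intervals) / statistics.fmean(intervals)

# Hypothetical paleoseismic event dates (years) for illustration only:
quasi_periodic = [0, 290, 610, 900, 1210, 1500, 1810]
print(coefficient_of_variation(quasi_periodic))  # well below 1
```

With only a handful of intervals the sample CoV is itself quite uncertain, which is why studies of this kind emphasize record length before classifying a fault's behavior.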

The Hikurangi subduction margin, New Zealand, has not produced large subduction earthquakes within the short written historic period (~180 years) and the potential of the plate interface to host large (M > 7) to great (M > 8) earthquakes and tsunamis is poorly constrained. The geological record of past subduction earthquakes offers a method for assessing the location, frequency and approximate magnitude of subduction earthquakes to underpin seismic and tsunami hazard assessments. We review evidence of Holocene coseismic coastal deformation and tsunamis at 22 locations along the margin. A consistent approach to radiocarbon age modelling is used and earthquake and tsunami evidence is ranked using a systematic assessment of the quality of age control and the certainty that the event in question is an earthquake. To identify possible subduction earthquakes, we use temporal correlation of earthquakes, combined with the type of earthquake evidence, likely primary fault source and the earthquake certainty ranking. We identify 10 past possible subduction earthquakes over the past 7000 years along the Hikurangi margin. The last subduction earthquake occurred at 520–470 years BP in the southern Hikurangi margin and the strongest evidence for a full margin rupture is at 870–815 years BP. There are no apparent persistent rupture patches, suggesting segmentation of the margin is not strong. In the southern margin, the type of geological deformation preserved generally matches that expected due to rupture of the interseismically locked portion of the subduction interface but the southern termination of past subduction ruptures remains unresolved. The pattern of geological deformation on the central margin suggests that the region of the interface that currently hosts slow slip events also undergoes rupture in large earthquakes, demonstrating different modes of slip behaviour occur on the central Hikurangi margin. 
Evidence for subduction earthquakes on the northern margin has not been identified because deformation signals from upper plate faults dominate the geological record. Large uncertainties remain in regard to evidence of past subduction earthquakes on the Hikurangi margin, with the greatest challenges presented by temporal correlation of earthquake evidence when working within the uncertainties of radiocarbon ages, and the presence of upper plate faults capable of producing deformation and tsunamis similar to that expected for subduction earthquakes. However, areas of priority research such as improving the paleotsunami record and integration of submarine turbidite records should produce significant advances in the future.

After more than 100 years of earthquake research, earthquake forecasting, which relies on knowledge of past fault rupture patterns, has become the foundation for societal defense against seismic natural disasters. A concept that has come into focus more recently is that rupture segmentation and cyclicity can be complex, and that a characteristic earthquake model is too simple to adequately describe much of fault behavior. Nevertheless, recognizable patterns in earthquake recurrence emerge from long, high resolution, spatially distributed chronologies. Researchers now seek to discover the maximum, minimum, and typical rupture areas; the distribution, variability, and spatial applicability of recurrence intervals; and patterns of earthquake clustering in space and time. The term “supercycle” has been used to describe repeating longer periods of elastic strain accumulation and release that involve multiple fault ruptures. However, this term has become very broadly applied, lumping together several distinct phenomena that likely have disparate underlying causes. We divide earthquake cycle behavior into four major classes that have different implications for seismic hazard and fault mechanics: 1) quasi-periodic similar ruptures, 2) clustered similar ruptures, 3) clustered complementary ruptures/rupture cascades, and 4) superimposed cycles. “Segmentation” is likewise an ambiguous term; we identify “master segments” and “asperities” as defined by barriers to fault rupture. These barriers may be persistent (rarely or never traversed), frequent (occasionally traversed), or ephemeral (changing location from cycle to cycle). We compile a catalog of the historical and paleoseismic evidence that currently exists for each of these types of behavior on major well-studied faults worldwide. 
Due to the unique level of paleoseismic and paleogeodetic detail provided by the coral microatoll technique, the Sumatran Sunda megathrust provides one of the most complete records over multiple earthquake rupture cycles. Long historical records of earthquakes along the South American and Japanese subduction zones are also vital contributors to our catalog, along with additional data compiled from subduction zones in Cascadia, Alaska, and Middle America, as well as the North Anatolian and Dead Sea strike-slip faults in the Middle East. We find that persistent and frequent barriers, rupture cascades, superimposed cycles, and quasi-periodic similar ruptures are common features of most major faults. Clustered similar ruptures do not appear to be common, but broad overlap zones between neighboring segments do occur. Barrier regions accommodate slip through reduced interseismic coupling, slow slip events, and/or smaller more localized ruptures, and are frequently associated with structural features such as subducting seafloor relief or fault trace discontinuities. This catalog of observations provides a basis for exploring and modeling root causes of rupture segmentation and cycle behavior. We expect that researchers will recognize similar behavior styles on other major faults around the world.

Plain Language Summary
Do large earthquakes occur at regular intervals through time? For over 100 years, this question has been at the center of efforts to determine earthquake hazards in seismically active regions that are home to millions of people around the world. Answering it requires extended observation of recurrent earthquakes on individual faults. These faults, however, produce relatively few large earthquakes over human time scales. Geologists have solved this problem by using radiometric dating to determine the age of “fossil” earthquakes recorded in the geologic record, leading to records containing as many as 30 events recorded over tens or hundreds of thousands of years. Few of these records, however, have been tested statistically to determine whether they record periodic (versus random) earthquake recurrence. Here we employ a statistical analysis which answers a simple question, “what is the probability that a given series of earthquakes was produced by random behavior?” By applying this analysis to a catalog of 31 previously published large‐earthquake records, we find that the majority (58%) of the cases have less than a 10% chance of having been produced through random action. Our results suggest that quasiperiodic earthquake recurrence is likely normal behavior for seismogenic faults in the Earth's crust.
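
A generic version of such a test (not necessarily the authors' exact procedure) can be run by Monte Carlo: simulate many random event series with the same number of events and the same time span as the observed record, and ask how often a random series is at least as regular, measured by the coefficient of variation, as the observed one.

```python
import random
import statistics

def cov(intervals):
    return statistics.pstdev(intervals) / statistics.fmean(intervals)

def poisson_p_value(event_times, n_sim=5000, seed=42):
    """Monte Carlo probability that a random (Poisson-like) process
    produces interevent times at least as regular (low CoV) as observed.
    Interior event times are drawn uniformly over the record span,
    conditioning on the first and last events at the endpoints."""
    rng = random.Random(seed)
    observed = cov([b - a for a, b in zip(event_times, event_times[1:])])
    n, span = len(event_times), event_times[-1] - event_times[0]
    count = 0
    for _ in range(n_sim):
        interior = sorted(rng.uniform(0, span) for _ in range(n - 2))
        sim_times = [0.0] + interior + [float(span)]
        sim_cov = cov([b - a for a, b in zip(sim_times, sim_times[1:])])
        if sim_cov <= observed:
            count += 1
    return count / n_sim

events = [0, 290, 610, 900, 1210, 1500, 1810]  # hypothetical, quite regular
print(poisson_p_value(events))  # small: randomness is an unlikely explanation
```

A small p-value here plays the same role as in the study's question: it quantifies how implausible it is that the observed regularity arose from random occurrence alone.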

In earthquake fault systems, active faults release elastic strain energy in a near-repetitive manner. Earthquake forecasting, which refers to the assessment of earthquake hazards via probability estimates, is crucial for strategic and engineering planning. As the need across the sciences for conceptualization, abstraction, and application grows, comparing lifetime probability distributions and understanding their physical significance become fundamental concerns in statistical seismology. Using various characteristic measures derived from the density function, the hazard rate function, and the mean residual life function with its asymptotic (limiting) behavior, the present study examines the similitude of the two most versatile distributions in earthquake forecasting: the inverse Gaussian and the lognormal. We consider three homogeneous and complete seismic catalogs from northeast India, northwest Himalaya, and the Kachchh (western India) region for illustration. We employ maximum likelihood and moment methods for parameter estimation, and Fisher information for uncertainty valuation. Using three performance tests based on the Akaike information criterion, the Kolmogorov-Smirnov criterion, and the Anderson-Darling test, we show that the heavy-tailed lognormal distribution performs relatively better in terms of its fit to the observed data. We envisage that the ubiquitous heavy-tailed property of the lognormal distribution helps in capturing desired characteristics of seismicity dynamics, providing better insights into long-term earthquake forecasting in a seismically active region.

Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent, tsunamigenic great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (~60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence > 110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.

New documentary findings and available paleoseismological evidence provide both new insights into the historical seismic sequence that ended with the giant 1960 south-central Chile earthquake and relevant information about the region’s seismogenic zone. According to the few available written records, this region was previously struck by earthquakes of varying size in 1575, 1737, and 1837. We expanded the existing compilations of the effects of the latter two using unpublished first-hand accounts found in archives in Chile, Peru, Spain, and New England. We further investigated their sources by comparing the newly unearthed historical data and available paleoseismological evidence with the effects predicted by hypothetical dislocations. The results reveal significant differences in the along-strike and depth distribution of the ruptures in 1737, 1837, and 1960. While the 1737 rupture likely occurred in the northern half of the 1960 region, on a narrow and deep portion of the megathrust, the 1837 rupture occurred mainly in the southern half and slipped over a wide range of depth. Such a wide rupture in 1837 disagrees with the narrow and shallow seismogenic zone currently inferred along this region. If in fact there is now a narrow zone where 200 years ago there was a wider one, it means that the seismogenic zone changes with time, perhaps between seismic cycles. Such change probably explains the evident variability in both size and location of the great earthquakes that have struck this region over the last centuries, as evidenced by written history, and through millennia, as inferred from paleoseismology.

Along a subduction zone, great megathrust earthquakes recur either after long seismic gaps lasting several decades to centuries or over much shorter periods lasting hours to a few years when cascading successions of earthquakes rupture nearby segments of the fault. We analyze a decade of continuous Global Positioning System observations along the South American continent to estimate changes in deformation rates between the 2010 Maule (M8.8) and 2015 Illapel (M8.3) Chilean earthquakes. We find that surface velocities increased after the 2010 earthquake, in response to continental-scale viscoelastic mantle relaxation and to regional-scale increased degree of interplate locking. We propose that increased locking occurs transiently during a super-interseismic phase in segments adjacent to a megathrust rupture, responding to bending of both plates caused by coseismic slip and subsequent afterslip. Enhanced strain rates during a super-interseismic phase may therefore bring a megathrust segment closer to failure and possibly triggered the 2015 event.

Time-dependent models for seismic hazard and earthquake probabilities are at the leading edge of research nowadays. In the framework of a 2-year national Italian project (2005–2007), we have applied the Brownian passage time (BPT) renewal model to the recently released Database of Individual Seismogenic Sources (DISS) to compute earthquake probability in the period 2007–2036. Observed interevent times on faults in Italy are absolutely insufficient to characterize the recurrence time. We, therefore, derived mean recurrence intervals indirectly. To estimate the uncertainty of the results, we resorted to the theory of error propagation with respect to the main parameters: magnitude and slip rate. The main issue concerned the high variability of slip rate, which could hardly be reduced by exploiting geodetic constraints. We did some validation tests, and interesting considerations were derived from seismic moment budgeting on the historical earthquake catalog. In a time-dependent perspective, i.e., when the date of the last event is known, only 10–15% of the 115 sources exhibit a probability of a characteristic earthquake in the next 30 years higher than the equivalent Poissonian probabilities. If we accept the Japanese conventional choice of a probability threshold greater than 3% in 30 years to define "highly probable sources," mainly intermediate earthquake faults with characteristic M < 6, having an elapsed time of 0.7–1.2 times the recurrence interval, are the most "prone" sources. The number of highly probable sources rises by increasing the aperiodicity coefficient (from 14 sources in the case of variable α ranging between 0.22 and 0.36 to 31 sources out of 115 in the case of an α value fixed at 0.7). On the other hand, in stationary time-independent approaches, more than two thirds of all sources are considered probabilistically prone to an impending earthquake.
The performed tests show the influence of the variability of the aperiodicity factor in the BPT renewal model on the absolute probability values. However, the influence on the relative ranking of sources is small. Future developments should give priority to a more accurate determination of the date of the last seismic event for a few seismogenic sources of the DISS catalog and to a careful check on the applicability of a purely characteristic model.

It is commonly thought that the longer the time since the last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended.

An uncommon coastal sedimentary record combines evidence for seismic shaking and coincident tsunami inundation since AD 1000 in the region of the largest earthquake recorded instrumentally: the giant 1960 southern Chile earthquake (Mw 9.5). The record reveals significant variability in the size and recurrence of megathrust earthquakes and ensuing tsunamis along this part of the Nazca-South American plate boundary. A 500-m long coastal outcrop on Isla Chiloé, midway along the 1960 rupture, provides continuous exposure of soil horizons buried locally by debris-flow diamicts and extensively by tsunami sand sheets. The diamicts flattened plants that yield geologically precise ages to correlate with well-dated evidence elsewhere. The 1960 event was preceded by three earthquakes that probably resembled it in their effects, in AD 898–1128, 1300–1398 and 1575, and by five relatively smaller intervening earthquakes. Earthquakes and tsunamis recurred exceptionally often between AD 1300 and 1575. Their average recurrence interval of 85 years only slightly exceeds the time already elapsed since 1960. This inference is of serious concern because no earthquake has been anticipated in the region so soon after the 1960 event, and current plate locking suggests that some segments of the boundary are already capable of producing large earthquakes. This long-term earthquake and tsunami history of one of the world's most seismically active subduction zones provides an example of variable rupture mode, in which earthquake size and recurrence interval vary from one earthquake to the next.

Accurate probabilistic seismic hazard analysis requires a good knowledge of the recurrence parameters of the strongest earthquakes in a region. Due to the typical short temporal span of instrumental and historical data, it is often unclear whether one should adopt a time-dependent or time-independent (Poissonian) recurrence model, the choice of which has large repercussions on the probability estimates for new strong events. The rapidly-growing discipline of lacustrine paleoseismology aims at producing long continuous records of strong seismic shaking, which integrate the activity of all significant seismic sources in a region and allow a reliable determination of recurrence patterns. The typical continuous sedimentation regime in lakes can lead to complete, sensitive paleoseismic records and a reduced temporal uncertainty in recurrence intervals. Here, I present a worldwide compilation of published long lacustrine paleoseismic records grouped per tectonic domain, and statistically explore the variability of their recurrence intervals expressed by the coefficient of variation (CoV). A CoV of ~1 indicates a time-independent process, whereas a CoV of <0.5 or of 0.5–1 is interpreted as a quasi-periodic or weakly-periodic process, respectively. By resampling the data and applying different statistical tests, it is found that generally at least ~10 intervals are needed to allocate a paleoseismic record to the main recurrence models, and ~15 intervals are required to confidently reject the possibility of Poissonian behavior, if applicable. The compilation shows a wide range of CoVs (0.32–1.48), which do not seem to be controlled (on a global scale) by the mean interval, record time span, sedimentation rate, seismic intensity threshold or record type. Plate boundary settings generally exhibit a quasi-periodic to weakly-periodic recurrence behavior, characterized by a rising hazard function with elapsed time since the last event.
In contrast, intraplate settings are characterized by a Poissonian or clustered model and either a constant hazard function with time or an enhanced hazard function shortly after an event. This general pattern seems to be modulated by the local distribution of seismic sources, where a CoV of 0.3–0.4 can be interpreted as caused by a simple, isolated seismic source, and a CoV of ~1 may indicate the additive effect of several seismic sources capable of leaving a sedimentary fingerprint in the lake. Most lacustrine records at subduction zones show a CoV of ~0.4–0.8, representing a mixture of a dominant megathrust seismic cycle and other secondary sources. In contrast, transform settings present a larger variability in recurrence parameters, with CoV ranging between ~0.4 and ~1.4. A clustered recurrence (CoV >1) may be related to changes in the sensitivity of the lacustrine paleoseismograph or to real earthquake clustering due to e.g. stress transfer between neighboring faults. The most useful lacustrine paleoseismic records can be retrieved in high-seismicity settings where many paleoseismic events can be recorded in a relatively short time span (i.e. <10 kyr). Such records yield sufficient intervals to reach a stable CoV and to allow probability distribution fitting, but avoid large changes in seismicity and sediment dynamics, which can be caused by e.g. the direct and indirect effects of regional deglaciation.

Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a “clock change” model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be tied to accessible seismological quantities. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
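
The Brownian passage time distribution used above is an inverse Gaussian in the mean/aperiodicity parameterization, and its conditional forecast probability is straightforward to sketch. The parameter values below are illustrative only, and the mapping from a Gutenberg-Richter b value to the aperiodicity is deliberately left out.

```python
import math

def bpt_cdf(t, mean, alpha):
    """Brownian passage time (inverse Gaussian) CDF with mean recurrence
    `mean` and aperiodicity `alpha` (Matthews-style parameterization)."""
    if t <= 0:
        return 0.0
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    u1 = (math.sqrt(t / mean) - math.sqrt(mean / t)) / alpha
    u2 = (math.sqrt(t / mean) + math.sqrt(mean / t)) / alpha
    return phi(u1) + math.exp(2.0 / alpha ** 2) * phi(-u2)

def conditional_prob(elapsed, window, mean, alpha):
    """P(event within `window` | quiescence of length `elapsed`)."""
    survival = 1.0 - bpt_cdf(elapsed, mean, alpha)
    return (bpt_cdf(elapsed + window, mean, alpha)
            - bpt_cdf(elapsed, mean, alpha)) / survival

# Illustrative parameters: 300-yr mean recurrence, moderate aperiodicity.
print(conditional_prob(200.0, 50.0, 300.0, 0.5))
```

A low aperiodicity yields a sharply rising hazard as elapsed time approaches the mean recurrence, which is exactly the behavior the clock-change argument is meant to constrain via instrumental seismicity.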

Historical and paleoseismic records in south-central Chile indicate that giant earthquakes on the subduction megathrust – such as in AD 1960 (Mw 9.5) – reoccur on average every ∼300 yr. Based on geodetic calculations of the interseismic moment accumulation since AD 1960, it was postulated that the area already has the potential for a Mw 8 earthquake. However, to estimate the probability of such a great earthquake taking place in the short term, one needs to frame this hypothesis within the long-term recurrence pattern of megathrust earthquakes in south-central Chile. Here we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yr. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥VII½ vs. ≥VI½) for the generation of turbidites at the coring sites. The strongest earthquakes (≥VII½) have longer recurrence intervals (292 ± 93 yr) than earthquakes with intensity of ≥VI½ (139 ± 69 yr). Moreover, distribution fitting and the coefficient of variation (CoV) of inter-event times indicate that the stronger earthquakes recur in a more periodic way (CoV: 0.32 vs. 0.5). Regional correlation of our multi-threshold shaking records with coastal paleoseismic data of complementary nature (tsunami, coseismic subsidence) suggests that the intensity ≥VII½ events repeatedly ruptured the same part of the megathrust over a distance of at least ∼300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking – identified by geodetic studies and large slip in AD 1960 – acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. Our paleo-records indicate that Poissonian recurrence models are inadequate to describe large megathrust earthquake recurrence in south-central Chile. Moreover, they show an enhanced probability for a Mw 7.7–8.5 earthquake during the next 110 years, whereas the probability for a Mw ≥ 8.6 (AD 1960-like) earthquake remains low in this period.
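The kind of time-dependent forecast described above follows from a renewal model's survival function; for a Weibull model the conditional probability has a closed form. A sketch with illustrative parameter values (the scale and shape below are assumptions, not the paper's fitted values):

```python
import math

def weibull_conditional_prob(elapsed, window, scale, shape):
    """P(event in (elapsed, elapsed + window] | quiet for `elapsed` yr)
    under a Weibull renewal model with F(t) = 1 - exp(-(t/scale)**shape)."""
    a = (elapsed / scale) ** shape
    b = ((elapsed + window) / scale) ** shape
    return 1.0 - math.exp(a - b)

# Illustrative values: mean recurrence near 300 yr, quasi-periodic shape
p = weibull_conditional_prob(elapsed=80, window=50, scale=300, shape=3.0)
```

A quick sanity check: with shape = 1 the expression collapses to the memoryless exponential case, so the elapsed time drops out entirely.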

We present an exceptionally long and continuous coastal lacustrine record of ∼5500 years from Lake Huelde on the west coast of Chiloé Island in south central Chile. The study area is located within the rupture zone of the giant 1960 CE Great Chilean Earthquake (MW 9.5). The subsequent earthquake-induced tsunami inundated Lake Huelde and deposited mud rip-up clasts, massive sand and a mud cap in the lake. Long sediment cores from 8 core sites within Lake Huelde reveal 16 additional sandy layers in the 5500 year long record. The sandy layers share sedimentological similarities with the deposit of the 1960 CE tsunami and other coastal lake tsunami deposits elsewhere. On the basis of general and site-specific criteria we interpret the sandy layers as tsunami deposits. Age-control is provided by four different methods, 1) ²¹⁰Pb-dating, 2) the identification of the ¹³⁷Cs-peak, 3) an infrared stimulated luminescence (IRSL) date and 4) 22 radiocarbon dates. The ages of each tsunami deposit are modelled using the Bayesian statistic tools of OxCal and Bacon. The record from Lake Huelde matches the 8 regionally known tsunami deposits from documented history and geological evidence from the last ∼2000 years without over- or underrepresentation. We extend the existing tsunami history by 9 tsunami deposits. We discuss the advantages and disadvantages of various sedimentary environments for tsunami deposition and preservation, e.g. we find that Lake Huelde is 2–3 times less sensitive to relative sea-level change in comparison to coastal marshes in the same region.

Statistical properties of earthquake interevent times have long been the topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study on the temporal statistics of earthquake interoccurrence times of the seismically active Kachchh peninsula (western India) from thirteen probability distributions. Those distributions are exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull distributions. Statistical inferences of the scale and shape parameters of these distributions are discussed from the maximum likelihood estimations and the Fisher information matrices. The latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. The results were found on the basis of two goodness-of-fit tests: the maximum likelihood criterion with its modification to Akaike information criterion (AIC) and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8-0.9 by 2027-2036 and 2034-2043, respectively. These values have significant implications in a variety of practical applications including earthquake insurance, seismic zonation, location identification of lifeline structures, and revision of building codes.
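The AIC-based model ranking used above can be illustrated with the two candidate distributions that admit closed-form maximum likelihood estimates. A hedged sketch comparing an exponential against a lognormal fit (function names and the sample catalog are illustrative):

```python
import math

def aic_exponential(intervals):
    """AIC for an exponential fit; the MLE rate is 1/mean (k = 1 parameter)."""
    n = len(intervals)
    mean = sum(intervals) / n
    loglik = -n * math.log(mean) - n
    return 2 * 1 - 2 * loglik

def aic_lognormal(intervals):
    """AIC for a lognormal fit via the closed-form MLE (k = 2 parameters)."""
    n = len(intervals)
    logs = [math.log(x) for x in intervals]
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n  # MLE (biased) variance
    loglik = -sum(logs) - 0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)
    return 2 * 2 - 2 * loglik

# Quasi-periodic intervals strongly favor the lognormal (lower AIC)
quasi = [280, 300, 310, 295, 305, 290, 315, 285]
```

The lower AIC wins; for a Poisson-like catalog the one-parameter exponential model would instead be rewarded for its parsimony.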

The Alpine fault in south Westland, New Zealand, releases strains of Pacific-Australian relative plate motion in large earthquakes with an average inter-event spacing of ∼330 years. A new record of earthquake recurrence has been developed at Hokuri Creek, with evidence for 22 events. The youngest Hokuri Creek earthquake overlaps in time and is believed to be the same as the oldest of another site about 100 km to the northwest near Haast. The combined record spans the last 7900 years and includes 24 events. We study the recurrence rate and conditional probability of ground ruptures from this record using a new likelihood-based approach for estimation of recurrence model parameters. Paleoseismic parameter estimation includes both dating and natural recurrence uncertainties. Lognormal and Brownian passage time (BPT) models are considered. The likelihood surface has distribution location and width parameters as axes: the mean and standard deviation of the log recurrence for the lognormal, and the mean and coefficient of variation for the BPT. The maximum-likelihood (ML) point gives the parameters most likely to have given rise to the data. The ML-point 50-year conditional probabilities of a ground-rupturing earthquake are 26.8% and 26.1% for the lognormal and BPT models, respectively. Contours of equal likelihood track the parameter pairs that are equally probable to have given rise to the observed data. Conditional probabilities on the lognormal 95% boundary around the ML point range from 18.2% to 35.8%. An empirical distribution model completely based on past recurrence times gives a similar conditional probability of 27.1% (9.6%–50.2%). In contrast, the time-independent conditional probability estimate of 13.6% (8.8%–19.1%) is about half that of the time-dependent models. A nonparametric test of earthquake recurrence at Hokuri Creek indicates that time-dependent recurrence models best represent the southern Alpine fault of the South Island, New Zealand.
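A minimal version of this workflow, fitting a lognormal recurrence model to a set of paleo-intervals and evaluating a 50-year conditional probability, can be sketched as follows. The interval data and elapsed time below are illustrative, not the Hokuri Creek values:

```python
import math

def fit_lognormal(intervals):
    """Closed-form MLE of the lognormal parameters: mean and standard
    deviation of the log recurrence intervals."""
    logs = [math.log(x) for x in intervals]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
    return mu, sigma

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(elapsed, window, mu, sigma):
    """P(rupture within `window` yr | open interval already `elapsed` yr)."""
    survive = 1.0 - lognormal_cdf(elapsed, mu, sigma)
    return (lognormal_cdf(elapsed + window, mu, sigma)
            - lognormal_cdf(elapsed, mu, sigma)) / survive

# Illustrative paleo-intervals in years
mu, sigma = fit_lognormal([260, 300, 330, 360, 400])
p50 = conditional_prob(elapsed=295, window=50, mu=mu, sigma=sigma)
```

The full likelihood-surface treatment in the paper additionally propagates dating uncertainty; this sketch uses point estimates only.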

This paper investigates the suitability of a three-parameter (scale, shape, and location) Weibull distribution in probabilistic assessment of earthquake hazards. The performance is also compared with two other popular models from the same Weibull family, namely the two-parameter Weibull model and the inverse Weibull model. A complete and homogeneous earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331–1342, 2010) of 20 events (M ≥ 7.0), spanning the period 1846 to 1995 from north-east India and its surrounding region (20°–32°N and 87°–100°E), is used to perform this study. The model parameters are initially estimated from graphical plots and later confirmed from statistical estimations such as maximum likelihood estimation (MLE) and method of moments (MoM). The asymptotic variance–covariance matrix for the MLE estimated parameters is further calculated on the basis of the Fisher information matrix (FIM). The model suitability is appraised using different statistical goodness-of-fit tests. For the study area, the estimated conditional probability for an earthquake within a decade comes out to be very high (≥0.90) for an elapsed time of 18 years (i.e., 2013). The study also reveals that the use of the location parameter provides more flexibility to the three-parameter Weibull model in comparison to the two-parameter Weibull model. Therefore, it is suggested that the three-parameter Weibull model has high importance in empirical modeling of earthquake recurrence and seismic hazard assessment.

Understanding the long-term earthquake recurrence pattern at subduction zones requires continuous paleoseismic records with excellent temporal and spatial resolution and stable threshold conditions. South central Chilean lakes are typically characterized by laminated sediments providing a quasi-annual resolution. Our sedimentary data show that lacustrine turbidite sequences accurately reflect the historical record of large interplate earthquakes (among others the 2010 and 1960 events). Furthermore, we found that a turbidite’s spatial extent and thickness are a function of the local seismic intensity and can be used for reconstructing paleo-intensities. Consequently, our multilake turbidite record aids in pinpointing magnitudes, rupture locations, and extent of past subduction earthquakes in south central Chile. Comparison of the lacustrine turbidite records with historical reports, a paleotsunami/subsidence record, and a marine megaturbidite record demonstrates that the Valdivia Segment is characterized by a variable rupture mode over the last 900 years, including (i) full ruptures (Mw ~9.5: 1960, 1575, 1319 ± 9, 1127 ± 44), (ii) ruptures covering half of the Valdivia Segment (Mw ~9: 1837), and (iii) partial ruptures of much smaller coseismic slip and extent (Mw ~7.5–8: 1737, 1466 ± 4). Also, distant or smaller local earthquakes can leave a specific sedimentary imprint which may resolve subtle differences in seismic intensity values. For instance, the 2010 event at the Maule Segment produced higher seismic intensities toward southeastern localities compared to previous megathrust ruptures of similar size and extent near Concepción.

Goldfinger et al. (2012) interpreted a 10,000-year-old sequence of deep sea turbidites at the Cascadia subduction zone (CSZ) as a record of clusters of plate-boundary great earthquakes separated by gaps of many hundreds of years. We performed statistical analyses on this inferred earthquake record to test the temporal clustering model and calculate time-dependent recurrence intervals and probabilities. We used a Monte Carlo simulation to determine if the turbidite recurrence intervals follow an exponential distribution consistent with a Poisson (memoryless) process. The latter was rejected at a statistical significance level of 0.05. We performed a “cluster analysis” on 20 randomly simulated catalogs of 18 events (event T2 excluded), using ages with uncertainties from the turbidite dataset. Results indicate 13 catalogs exhibit statistically significant clustering behavior, yielding a probability of clustering of 13/20 or 0.65. Most (70%) of the 20 catalogs contain 2 or 3 closed clusters (a sequence that contains the same or nearly the same number of events) and the current cluster T1 to T5 appears consistently in all catalogs. Analysis of the 13 catalogs that manifest clustering indicates that the probability that at least one more event will occur in the current cluster is 0.82. Given that the current cluster may not be closed yet, the probabilities of a M 9 earthquake during the next 50 and 100 years were estimated to be 0.17 and 0.25, respectively. We also analyzed the sensitivity of results to including event T2, whose status as a full-length rupture event is in doubt. The inclusion of T2 did not change the probability of clustering behavior in the CSZ turbidite data, but did significantly reduce the probability that the current cluster would extend to one more event. Based on the statistical analysis, time-independent and time-dependent recurrence intervals were calculated.
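A Monte Carlo test of the Poisson (exponential) hypothesis like the one described can be sketched by comparing the observed Kolmogorov-Smirnov distance against distances from synthetic exponential catalogs of the same size. All names and the example catalog below are illustrative, not the Cascadia data:

```python
import math
import random

def ks_stat_exponential(intervals):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    intervals and an exponential CDF using the sample mean."""
    xs = sorted(intervals)
    n = len(xs)
    mean = sum(xs) / n
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-x / mean)
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

def monte_carlo_p_value(intervals, trials=2000, seed=1):
    """Fraction of same-size synthetic exponential catalogs whose K-S
    distance is at least the observed one (small value -> reject Poisson)."""
    rng = random.Random(seed)
    n = len(intervals)
    mean = sum(intervals) / n
    d_obs = ks_stat_exponential(intervals)
    worse = sum(
        ks_stat_exponential([rng.expovariate(1.0 / mean) for _ in range(n)]) >= d_obs
        for _ in range(trials)
    )
    return worse / trials
```

Because the exponential rate is estimated from the data, the Monte Carlo reference distribution is the appropriate benchmark rather than standard K-S tables.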

Central Switzerland lies tectonically in an intraplate area and recurrence rates of strong earthquakes exceed the time span covered by historic chronicles. However, many lakes are present in the area that act as natural seismographs: their continuous, datable and high‐resolution sediment succession allows extension of the earthquake catalogue to pre‐historic times. This study reviews and compiles available data sets and results from more than 10 years of lacustrine palaeoseismological research in lakes of northern and Central Switzerland. The concept of using lacustrine mass‐movement event stratigraphy to identify palaeo‐earthquakes is showcased by presenting new data and results from Lake Zurich. The Late Glacial to Holocene mass‐movement units in this lake document a complex history of varying tectonic and environmental impacts. Results include sedimentary evidence of three major and three minor, simultaneously triggered basin‐wide lateral slope failure events interpreted as the fingerprints of palaeoseismic activity. A refined earthquake catalogue, which includes results from previous lake studies, reveals a non‐uniform temporal distribution of earthquakes in northern and Central Switzerland. A higher frequency of earthquakes in the Late Glacial and Late Holocene period documents two different phases of neotectonic activity; they are interpreted to be related to isostatic post‐glacial rebound and relatively recent (re‐)activation of seismogenic zones, respectively. Magnitudes and epicentre reconstructions for the largest identified earthquakes provide evidence for two possible earthquake sources: (i) a source area in the region of the Alpine or Sub‐Alpine Front due to release of accumulated north‐west/south‐east compressional stress related to an active basal thrust beneath the Aar massif; and (ii) a source area beneath the Alpine foreland due to reactivation of deep‐seated strike‐slip faults. Such activity has been repeatedly observed instrumentally, for example, during the most recent magnitude 4.2 and 3.5 earthquakes of February 2012, near Zug. The combined lacustrine record from northern and Central Switzerland indicates that at least one of these potential sources has been capable of producing magnitude 6.2 to 6.7 events in the past.

The time‐varying hazard of rupture of the Alpine Fault is estimated using a renewal process model and a statistical method that takes account of uncertainties in data and parameter values. Four different recurrence‐time distributions are considered. The central and southern sections of the fault are treated separately. Data inputs are based on estimates of the long‐term slip rate, the average single‐event displacement, and the dates of earthquakes that have occurred in the last 1000 yr from previous studies of fault traces, landslide and terrace records, and forest ages and times of disturbance. Using these data and associated uncertainties, the current hazard of rupture on the central section of the fault is estimated to be 0.0051, 0.010, 0.012, and 0.0073 events per year under the exponential, lognormal, Weibull, and inverse Gaussian recurrence‐time distributions, respectively. The corresponding probabilities of rupture in the next 20 yr are 10, 18, 21, and 14%, respectively. The current hazard on the southern section of the fault is estimated to be 0.0033, 0.0075, 0.0070, and 0.0053 events per year for the four models, and the 20 yr probabilities 6, 14, 13, and 10%, respectively. Increased precision in the date of the second to last event on the southern section of the fault would result in only small changes to these rates and probabilities. The indicated hazard under the lognormal model is about double the long‐term average rate but less than half of that estimated in previous studies that did not take account of all the uncertainties. Dating additional prehistoric ruptures is likely to have a greater effect on the hazard estimates than improved precision in the existing data.
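The quoted 20-yr probabilities follow from the hazard rates by holding each model's current hazard constant over the forecast window, P = 1 − exp(−h·Δt). The sketch below reproduces the central-section numbers to within rounding:

```python
import math

def prob_in_window(hazard_rate, years):
    """P(at least one rupture in `years`), treating the current hazard
    rate (events/yr) as constant over the forecast window."""
    return 1.0 - math.exp(-hazard_rate * years)

# Central-section hazard rates quoted for the four renewal models
models = {"exponential": 0.0051, "lognormal": 0.010,
          "Weibull": 0.012, "inverse Gaussian": 0.0073}
for name, h in models.items():
    print(f"{name}: {100.0 * prob_in_window(h, 20):.0f}%")
# -> 10%, 18%, 21%, 14%, matching the quoted probabilities
```

The constant-hazard approximation is good over windows short compared with the mean recurrence interval; over longer windows the hazard itself evolves and must be integrated.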

Regularity of seismic slip along a 9 km segment of the Calaveras fault zone is believed to result from steady-state loading of a creeping fault to generate stresses on an isolated stuck patch which moves in a stick-slip event in the magnitude range 3 to 4 whenever a critical threshold is reached. The patch behavior can be described by a simple model similar to the spring-driven frictional models used in laboratory simulations of stick-slip. The (M ≥ 3) recurrence time for this model is directly proportional to the seismic slip (computed from magnitudes) since the last time the threshold was reached. If the model is correct, an earthquake (3 ≤ M ≤ 4) should occur at 37° 17′ ± 2′ N, 121° 39′ ± 2′ W within 48 days of January 1, 1977.

A compilation of the larger earthquakes of Chile (assumed magnitudes greater than 7.5) is presented. The compilation is based on a punch-card file of Chilean earthquakes with more than 15,000 entries. For each earthquake, the effects including tsunami observations are described, and estimates of the epicenter location and magnitude are given. Larger earthquakes occur in Chile in only a few seismic regions, which are aligned linearly offshore and along the faults between the Coastal Range and the Central Valley. In central Chile, between Valparaíso and Concepción, larger earthquakes occur mainly inland; south of Concepción the larger epicenters lie offshore. Each source region produces predictable seismic and tsunami effects.

This paper is an update of a 1970 publication, “Major Earthquakes and Tsunamis in Chile” (Lomnitz, 1970), which appeared in the Geologische Rundschau, now International Journal of Earth Sciences. The reference has always been hard to find and in recent years has become almost impossible to locate. Additionally, the database was overdue for revision in light of more recent results.
The earlier conclusion of the paper, that “Chile emerges as perhaps the most highly seismic region in the world, with the possible exception of Japan”, still stands. One might add that Chilean earthquakes have provided data for important historical advances in the Earth sciences. For example, the 1835 earthquake in southern Chile was described by Darwin (1845) and has provided the earliest reliable observations of geodetic uplift along a subduction zone. The 1960 Chile earthquake ( Mw 9.5), possibly the largest earthquake in world history, continues to provide invaluable data and challenges for research. Tectonic changes in the coastal morphology of Chile may well be unique (Lomnitz, 1969). As part of a 5,000-km subduction system and with a subduction rate of more than 7 cm/year, Chile is located atop one of the most highly active subduction zones in the world.
Available information on the history of earthquakes in Chile is limited. This is partly due to the nature of the terrain. The northern part of Chile is a desert, where rain falls about once every 25 years. The southern part is extremely rainy and is covered with a dense rain forest. Farther south along the coast one finds glaciers. The country is also wedged between the Pacific Ocean and the Andes Mountains. With the exception of a few scattered ports, most of the population lives in a narrow longitudinal valley between latitudes 32° and 45° south.
I have experienced …

We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture.

The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of “interaction” effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the “clock change” method and characteristically decay inversely with elapsed time after the perturbation.
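The Brownian passage-time distribution is the inverse Gaussian, here parameterized by mean mu and aperiodicity (coefficient of variation) alpha. A sketch of its CDF and a numerical hazard rate, illustrating the rise-then-plateau behavior of property (2) (the parameterization and function names are this sketch's choice):

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian passage-time (inverse Gaussian) distribution,
    parameterized by mean mu and aperiodicity (CoV) alpha."""
    lam = mu / alpha ** 2  # inverse-Gaussian shape parameter
    u = math.sqrt(lam / t)
    return (normal_cdf(u * (t / mu - 1.0))
            + math.exp(2.0 * lam / mu) * normal_cdf(-u * (t / mu + 1.0)))

def bpt_hazard(t, mu, alpha, dt=1e-4):
    """Numerical hazard rate h(t) = f(t) / (1 - F(t)) via central difference."""
    f = (bpt_cdf(t + dt, mu, alpha) - bpt_cdf(t - dt, mu, alpha)) / (2.0 * dt)
    return f / (1.0 - bpt_cdf(t, mu, alpha))
```

The hazard starts at zero, peaks near the mean, and settles toward the quasi-stationary level 1/(2·mu·alpha²), which equals the mean rate 1/mu exactly when alpha = 1/√2.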

Statistics of ultimate strain of the earth's crust are obtained on the basis of levelling and triangulation data over earthquake areas. The mean value of ultimate strain e0 is obtained as 5.3 × 10⁻⁵ with a standard deviation σ amounting to 3.3 × 10⁻⁵ on the assumption that the deviation from the mean value is described by a Gaussian distribution. Assuming that crustal strain increases linearly with time t from an approximately zero value immediately after a large earthquake, which occurred at t = 0, the probability of having a crustal rupture or an earthquake occurrence during a time interval from 0 to t can be calculated from e0 and σ along with the data for strain accumulation over the area concerned as brought out by repetitions of geodetic surveys. Applying the above theory to an area southwest of Tokyo, where an earthquake of magnitude 7.9 took place in 1923, the probabilities for repetition of an earthquake there are estimated as 0.2, 0.5 and 0.8 respectively for the periods 1925–1980, 1925–2030, and 1925–2080. Similar studies are made for the areas off eastern Hokkaido and the Tokai district in Central Japan. No geodetic data over focal regions are available in these cases because observations are made only on land more than 100 km distant from the epicentral area off the coast. In the circumstances, theoretical land deformations caused by plate subduction, which is believed to be taking place at the trench axis, are compared to the deformations actually detected by repeated surveys. Although the reliability of a probability calculated on the basis of such processes may be substantially lower than one based on data taken in an area immediately covering a focal region, it is striking that the probabilities of reoccurrence of a large earthquake for a time interval from the last shock to the present are so high that they exceed 0.8–0.9 for reasonable values of the parameters involved.
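The Gaussian ultimate-strain model translates accumulated strain directly into a rupture probability. A sketch using the quoted mean and standard deviation of e0, with an assumed linear strain rate of ~5 × 10⁻⁷ per year chosen only for illustration (the rate is not stated in the abstract):

```python
import math

def rupture_probability(strain, mean_ultimate=5.3e-5, sigma=3.3e-5):
    """P(accumulated strain has reached the ultimate strain), with the
    ultimate strain e0 Gaussian-distributed (quoted mean and sigma)."""
    z = (strain - mean_ultimate) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Assumed linear accumulation at ~5e-7 per yr since the 1923 earthquake
rate = 5.0e-7
probs = {year: rupture_probability(rate * (year - 1923))
         for year in (1980, 2030, 2080)}
```

With this assumed rate the three probabilities come out near 0.23, 0.51 and 0.78, close to the quoted 0.2, 0.5 and 0.8 for the same horizons.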

Probability of a large-scale earthquake occurrence is estimated from crustal strain geodetically detected over an earthquake area. The Weibull distribution function, which is widely applied to quality-control research, is made use of in this paper in the probabilistic treatments of crustal strain. Using the table of ultimate strain presented by Rikitake (1974), a Weibull model representing a statistical distribution of crustal-rupture occurrence time is determined on the assumption that the crust is strained at a constant rate. In the case of the South Kanto District, the associated probability-density function has a maximum at about 84 years after the time when the strain energy accumulation starts.
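For a two-parameter Weibull with shape k > 1 the density peaks at scale·((k − 1)/k)^(1/k), so the quoted peak time constrains the parameter pair. The parameters below are assumptions chosen only to place the peak near the quoted ~84 years, not values from the paper:

```python
def weibull_mode(scale, shape):
    """Time of the Weibull probability-density maximum (valid for shape >= 1;
    for shape = 1, the exponential case, the density peaks at t = 0)."""
    return scale * ((shape - 1.0) / shape) ** (1.0 / shape)

# Assumed illustrative parameters: scale 119 yr, shape 2 -> peak near 84 yr
print(weibull_mode(scale=119.0, shape=2.0))
```

Inverting this relation (given an independently estimated shape) is one quick way to recover a scale parameter from a reported density maximum.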

Lake sediments as natural seismographs: earthquake-related deformations (seismites) in central Canadian lakes (Doughty)