Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake
P. Shebalin¹ and S. Baranov²
Abstract—We study aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare the results with those obtained for other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and the corresponding periods of data completeness and consistency. To avoid overestimating the expected rates of large aftershocks, we presume a break of slope of the magnitude–frequency relation in the aftershock sequences and compare two models, with and without the break of slope. Comparing the estimates to the actual number of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that long-delayed aftershocks may reflect larger-scale processes, including the interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the EAST algorithm, which uses statistics of early aftershocks, to the case of secondary aftershocks within the aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect long-delayed aftershocks in advance, in both the time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone, in Cook Strait.
1. Introduction
A major earthquake of M7.8 occurred near the coast of New Zealand on 13 November 2016. The earthquake initiated a significant tsunami with an amplitude of more than 4 m. The earthquake fault was located about 100 km from the fault zone of the Canterbury earthquake sequence. The Canterbury sequence started on 3 September 2010 (Mw7.1) near Darfield. An earthquake of Mw6.2 occurred in the eastern part of its fault zone about 6 months later, on 21 February 2011. The epicenter was located in Christchurch, the second most populous city in New Zealand, and the earthquake caused huge damage, including about 200 casualties. Further large events with magnitude 5.5 and higher occurred in the area on 13 June 2011, on 22 December 2011, and on 14 February 2016. The proximity of the 2016 M7.8 and 2010 M7.1 fault zones raises a question: should we expect a similar scenario, with a set of successive large long-delayed aftershocks?
Various methods have been developed recently aimed at the operational forecasting of aftershocks (Gerstenberger et al. 2005; Omi et al. 2013, 2016; Steacy et al. 2014; Cattania et al. 2014). Some aftershock forecasting models are being tested in real time in the New Zealand earthquake forecast testing center (Gerstenberger and Rhoades 2010). Most of those models are based on the idea of independently combining the well-known Gutenberg–Richter and Omori–Utsu relations, first proposed by Reasenberg and Jones (1989). All those models are designed to assess the expected rates of seismic events in specified space–time–magnitude volumes. An important direction in the development of earthquake rate models is the ETAS model (Ogata 1983) and its modifications, including applications in New Zealand (Harte 2014). Another important recent trend is the combination of different models, including hybrid statistical and physics-based models (Rhoades 2013; Rhoades et al. 2014, 2016; Shebalin et al. 2014; Cattania et al. 2014).
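As an illustration of this combination, the sketch below evaluates the Reasenberg–Jones rate λ(t, M) = 10^(a + b(Mm − M)) / (t + c)^p and integrates it over a forecast window. This is a minimal reading of the model, not any of the cited implementations; the parameter values are the "generic California" estimates of Reasenberg and Jones (1989) and serve only as placeholders.

```python
from scipy.integrate import quad

def rj_rate(t, m, m_main, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones aftershock rate (events/day) of magnitude >= m
    at time t (days) after a main shock of magnitude m_main.  The
    defaults are the 'generic California' values of Reasenberg and
    Jones (1989), used here purely as placeholders."""
    return 10.0 ** (a + b * (m_main - m)) / (t + c) ** p

# Expected number of M5+ events between 75 days and 3 years after an
# M7.8 main shock under these generic parameters (illustrative only,
# not the paper's estimate for Kaikoura).
n_expected, _ = quad(lambda t: rj_rate(t, 5.0, 7.8), 75.0, 3 * 365.25)
print(f"expected M5+ events in (75 d, 3 yr): {n_expected:.1f}")
```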
The Canterbury earthquake sequence has strongly revived interest in the problem of forecasting aftershocks. Recently, a retrospective analysis of
¹ Institute of Earthquake Prediction Theory and Mathematical Geophysics, Russian Academy of Sciences, Moscow, Russia. E-mail: p.n.shebalin@gmail.com
² Kola Branch of the Geophysical Survey of Russian Academy of Sciences, Apatity, Russia.
Pure Appl. Geophys. 174 (2017), 3751–3764
© 2017 Springer International Publishing AG
DOI 10.1007/s00024-017-1608-9
... This behavior was observed in laboratory experiments (Smirnov et al. 2010, 2019; Sobolev et al. 1996), as well as in real and synthetic aftershock sequences (Knopoff et al. 1982; Helmstetter and Sornette 2002; Ogata and Katsura 2014; Rodkin and Tikhonov 2016; Tamaribuchi et al. 2018). However, the effect referred to may be due to catalog incompleteness, which depends both on magnitude and on time (Helmstetter et al. 2006; Hainzl 2016; Shebalin and Baranov 2017; Baranov et al. 2019c). Gulia et al. (2018), on the contrary, observed a positive step of the b value after the main shock and its subsequent decrease. ...
... The estimate of the b value in the Gutenberg–Richter law for relative magnitudes has the following advantage. It is known that the completeness magnitude for early aftershocks depends on the magnitude of the main shock, while the relative completeness magnitude does not (Helmstetter et al. 2006; Hainzl 2016; Shebalin and Baranov 2017). ...
... They estimated b = 1.0 using the procedure described by Bender (1983) in the interval [−2, −0.5]. Limiting the estimate on the right at −0.5 was introduced to exclude a possible effect of finite volumes (Romanowicz 1992) or a break of slope in the magnitude–frequency distribution caused by postseismic creep (Shebalin and Baranov 2017). Both effects are expressed in a deficit of larger events, which may lead to significant overestimation of the b value (Marzocchi et al. 2020) when the completeness magnitude is 2 units below the maximum magnitude. ...
Article
We provide an overview of the basic models of aftershock processes and advanced methods used to predict postseismic hazard. We consider the physical mechanisms of aftershock generation, models of aftershocks, and time-dependent models of aftershock processes. In particular, we provide a validation of the aftershock process using a superposition of the Gutenberg–Richter and Omori–Utsu laws. We show that the key role in the assessment of postseismic hazard is played by earthquake productivity, which characterizes the ability of earthquakes to produce subsequent shocks. We discuss the recently established exponential law of earthquake productivity and show that the exponential form is invariant under variations in magnitude and focal depth. Being in discordance with the popular epidemic-type aftershock sequence (ETAS) model, the law makes it possible to build a corrected model. We study versions of theoretical validation for the Båth law, which specifies the mean difference between the magnitudes of the main shock and the largest aftershock, and we also consider the time-dependent Båth law. We provide a detailed review of modern approaches and methods for estimating the magnitude of the largest aftershock, and we review the problem of estimating the duration of the hazardous period, during which aftershocks with magnitudes equal to or greater than a specified value are expected.
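The exponential productivity law mentioned in this overview has a direct statistical consequence that is easy to demonstrate: if the mean number of direct aftershocks is itself exponentially distributed, the observed counts acquire a much heavier tail than a fixed-rate Poisson model predicts. A minimal sketch, with an arbitrary illustrative mean productivity:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_prod = 2.0      # illustrative mean number of direct aftershocks
n = 100_000

# Common assumption: productivity fixed, counts Poisson-distributed.
poisson_counts = rng.poisson(mean_prod, size=n)

# Exponential productivity law: the Poisson mean is itself an
# exponential random variable, which makes the marginal count
# geometric, with a much heavier tail.
lam = rng.exponential(mean_prod, size=n)
exp_counts = rng.poisson(lam)

for k in (0, 5, 10):
    print(f"P(count >= {k:2d}):"
          f"  Poisson {np.mean(poisson_counts >= k):.4f}"
          f"  exponential-productivity {np.mean(exp_counts >= k):.4f}")
```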
... First, the magnitude of completeness for early aftershocks depends on the main shock magnitude, while the relative completeness magnitude does not have such a dependence (Helmstetter et al., 2006; Hainzl, 2016). Second, the b value is estimated for all series using large magnitudes, thereby minimizing the possible effect of a break of slope in the frequency–magnitude distribution due to possible post-seismic deformations in the earthquake source (Vorobieva et al., 2016; Shebalin and Baranov, 2017). Third, the use of large magnitudes for estimating the b value eliminates the problem of incomplete detection of early weak aftershocks. ...
Article
The paper considers the distribution of magnitudes of the strongest aftershocks, depending on the time after the main shock, that occur during the extraction of minerals in tectonically loaded rock massifs. The study is based on the data of long-term seismological observations at the apatite-nepheline deposits of the Khibiny Massif on the Kola Peninsula. The article demonstrates that the distribution of the difference between the magnitudes of the strongest aftershock and the main shock is described by the dynamic Båth law, previously obtained by the authors in a study of the regularities of aftershock processes of tectonic earthquakes.
... Early aftershocks can also help constrain the geometry of a seismogenic fault (Chang et al., 2007; Peng and Zhao, 2009; Wu et al., 2017; Yin et al., 2018). However, the standard catalogs of early aftershocks are usually incomplete: due to the extremely intense flow of earthquakes and high noise levels, many events, including quite strong ones, can be missed (Kagan, 2004; Helmstetter et al., 2006; Shebalin and Baranov, 2017). To improve the completeness of catalogs of early aftershocks, special techniques for processing station data are being actively developed: high-pass filtering of seismograms following the main shock (Enescu et al., 2007; Peng et al., 2007) and the waveform matched-filter technique (Gibbons and Ringdal, 2006; Shelly et al., 2007; Yang et al., 2009; Wu et al., 2017; Yin et al., 2018). ...
Article
Early aftershocks contain important information about the physics of earthquake occurrence and postseismic relaxation processes. However, the standard catalogs of early aftershocks are usually incomplete: because of the extremely high noise level in the main shock coda, many events can be missed, some of them quite strong. Under these conditions, the process of event identification becomes largely stochastic. Because of different network configurations and record processing methods, different agencies may register or miss different events, so merging catalogs can improve the completeness of an aftershock sequence. When merging catalogs, the problem of identifying duplicates (records related to the same seismic event) arises. The main difficulty is discriminating between aftershocks and duplicates, since both are events close in space and time. The problem is analogous to the problem of discriminating between aftershocks and independent events, and the solution methods are usually similar too. In this paper, we apply the nearest-neighbor method, which has become widespread in recent years for identifying aftershocks, modified for our problem, together with a probabilistic metric in the space of network errors in the epicenters and origin times of seismic events. It is applied to the automatic identification of duplicates when merging catalogs of aftershocks of the Tohoku earthquake. An analysis of the space-time structure of duplicates and aftershocks shows a significant difference between them, which makes it possible to solve the problem successfully. In a sample from the global Advanced National Seismic System (ANSS) catalog (M > 4), more than 700 events were found that had been missed by the Japan Meteorological Agency (JMA) seismic network, one of the best in the world. Among the misses are several events with M > 6 in the first hours after the main shock. Duplicate identification reliability is >97%. The method can be used to improve the completeness of aftershock sequences. Reliable identification of duplicates also allows one to study the correspondence between the magnitudes determined by different agencies. The present method is therefore an effective tool for creating merged earthquake catalogs with a uniform magnitude.
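A simplified, deterministic analogue of such duplicate screening can be sketched as follows. The hard thresholds below are illustrative stand-ins for the probabilistic metric in network-error space described above, and the helper function is hypothetical:

```python
import numpy as np

def likely_duplicates(cat_a, cat_b, dt_max=10.0, dr_max=30.0, dm_max=0.6):
    """Flag probable duplicates between two catalogs given as arrays of
    rows (time_s, lat, lon, mag).  The hard thresholds are illustrative
    stand-ins for the probabilistic metric in network-error space used
    in the paper; real location/origin-time errors should set them."""
    pairs = []
    for i, (t1, la1, lo1, m1) in enumerate(cat_a):
        for j, (t2, la2, lo2, m2) in enumerate(cat_b):
            # Rough epicentral distance in km (small-angle approximation).
            dr = 111.0 * np.hypot(la1 - la2,
                                  (lo1 - lo2) * np.cos(np.radians(la1)))
            if abs(t1 - t2) < dt_max and dr < dr_max and abs(m1 - m2) < dm_max:
                pairs.append((i, j))
    return pairs

cat_a = np.array([[100.0, 38.30, 142.40, 6.1]])   # agency A record
cat_b = np.array([[103.5, 38.35, 142.50, 5.9]])   # same event, agency B
print(likely_duplicates(cat_a, cat_b))            # -> [(0, 0)]
```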
... The catalogue completeness usually decreases after large earthquakes (Helmstetter et al., 2006; Hainzl, 2016; Shebalin and Baranov, 2017). For each aftershock sequence we began by estimating the magnitude of completeness Mc using data in the interval (tc, 1 month) after the main shock by the MBS method (Cao and Gao, 2002; Wossner and Wiemer, 2005), with tc = 1 h. ...
Article
The differential probability gain approach is used to estimate quantitatively the change in aftershock rate at various levels of ocean tides relative to the average rate model. Aftershock sequences are analyzed from two regions with high ocean tides, Kamchatka and New Zealand. The Omori–Utsu law is used to model the decay over time, hypothesizing an invariable spatial distribution. Ocean tide heights are considered rather than phases. A total of 16 sequences of M ≥ 6 aftershocks off Kamchatka and 15 sequences of M ≥ 6 aftershocks off New Zealand are examined. The heights of the ocean tides at various locations were modeled using FES 2004. Vertical stress changes due to ocean tides are here about 10–20 kPa, that is, at least several times greater than the effect due to Earth tides. The aftershock rate is observed to increase by more than a factor of two at high water after main M ≥ 6 shocks in Kamchatka, with a slightly less pronounced effect for the earthquakes of M7.8, December 15, 1971, and M7.8, December 5, 1997. For those two earthquakes, the maximum of the differential probability gain function is also observed at low water. For New Zealand, we also observed an increase in aftershock rate at high water after thrust-type main shocks with M ≥ 6. After normal-faulting main shocks there was a tendency for the rate to increase at low water. For the aftershocks of strike-slip main shocks we observed a less evident impact of the ocean tides on their rate. This suggests two main mechanisms for the impact of ocean tides on seismicity rate: an increase in pore pressure at high water, or a decrease in normal stress at low water, both resulting in a decrease of the effective friction in the fault zone.
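The differential probability gain in this abstract can be sketched as the ratio of the observed number of aftershocks occurring at a given tide height to the number expected from the Omori–Utsu rate alone. The sketch below is a simplified analogue using a synthetic catalog and illustrative parameters, not the FES 2004 tide model or the study's data:

```python
import numpy as np

def omori_rate(t, K=50.0, c=0.05, p=1.1):
    """Omori-Utsu aftershock rate (events/day); parameters illustrative."""
    return K / (t + c) ** p

def differential_probability_gain(h_events, t_grid, h_grid, dt, lo, hi):
    """Observed vs. model-expected number of aftershocks occurring while
    the ocean tide height is in [lo, hi) -- a simplified analogue of the
    differential probability gain used in the paper."""
    n_obs = np.count_nonzero((h_events >= lo) & (h_events < hi))
    mask = (h_grid >= lo) & (h_grid < hi)
    n_model = omori_rate(t_grid[mask]).sum() * dt
    return n_obs / n_model if n_model > 0 else np.nan

# Synthetic check: a 12-h tide sampled on a uniform 100-day grid.
dt = 0.001
t_grid = np.arange(dt, 100.0, dt)
h_grid = np.sin(2 * np.pi * t_grid / 0.5)   # tide height, arbitrary units

# Draw an Omori-consistent catalog with no tidal modulation ...
rng = np.random.default_rng(2)
weights = omori_rate(t_grid)
n_total = rng.poisson(weights.sum() * dt)
t_events = rng.choice(t_grid, size=n_total, p=weights / weights.sum())
h_events = np.sin(2 * np.pi * t_events / 0.5)

# ... so the gain should be close to 1 in any height bin; values well
# above 1 would indicate rate enhancement at that tide level.
print(differential_probability_gain(h_events, t_grid, h_grid, dt, 0.5, 1.0))
```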
... Reliable identification of the MLT anomaly became possible thanks to the use of very heavy averaging. Note that the existence of the MLT anomaly as a tendency can also be discerned in the data of other authors (e.g., Shebalin and Baranov, 2017), where the same tendency is revealed with smaller statistics against a background of large fluctuations, but there it does not look entirely convincing (and in the case of early aftershocks it may even be an artifact produced by their incomplete registration). It is undoubtedly of interest, however, to test the adequacy of the OOSZ method on independent examples. Having obtained such additional support, we will then discuss the prospects of using the MLT anomaly in earthquake prediction. ...
... Firstly, when estimating the parameters, the local incompleteness of the catalog immediately after the earthquake is not taken into account. The larger the difference in magnitude between the initiating shock and the aftershocks under consideration, the longer the interval during which the catalog is incomplete (Helmstetter et al., 2006; Shebalin and Baranov, 2017). ...
Article
Continuing the series of publications on aftershock hazard assessment, we consider the problem of estimating the time interval after a strong earthquake that is prone to aftershocks which may pose an independent hazard. A model of the distribution of this quantity, which depends on the three parameters of the Omori–Utsu law, is constructed. With appropriately averaged parameter estimates, the model fairly closely fits the real (empirical) distributions of this quantity on the global and regional scales. A key parameter in the model is the expected number of aftershocks of a given magnitude. This number varies broadly from earthquake to earthquake, which leads to wide confidence intervals for estimates based on the averaged parameters. Therefore, for forecasting the duration of the hazardous aftershock-prone period, we propose to use two variants of the estimates. The first variant is based only on the averaged parameter estimates for the region under study and on the magnitude of the earthquake; it is applicable immediately after a strong earthquake. The second variant employs information about the aftershocks that occurred during the first few hours after the earthquake, which improves the forecast considerably.
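One simple reading of such a hazardous period is the time at which the Omori–Utsu intensity of aftershocks above the chosen magnitude falls to an acceptable floor. The sketch below solves for that time; this construction and all parameter values are assumptions for illustration, not the paper's exact model:

```python
from scipy.optimize import brentq

def hazard_duration(K=20.0, c=0.05, p=1.1, rate_floor=0.01):
    """Time (days) at which the Omori-Utsu intensity K/(t+c)^p of
    aftershocks above the chosen magnitude drops to rate_floor
    (events/day).  K scales with the expected number of aftershocks of
    a given magnitude, the key parameter noted in the abstract; all
    values here are illustrative."""
    f = lambda t: K / (t + c) ** p - rate_floor
    return brentq(f, 1e-6, 1e7)   # bracket the single root of f

print(f"hazardous period: about {hazard_duration():.0f} days")
```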
Chapter
This chapter uses worldwide and regional earthquake catalogs to show that productivity obeys the exponential distribution rather than the Poisson distribution, as commonly assumed. Productivity determines the number of events within a space–time interval; hence it is a key parameter in statistical seismology and is of critical importance for assessing the hazard due to aftershocks. The chapter shows that aftershock times and magnitudes are independent and finds an analytical expression for the time-dependent shift that affects the distribution. It focuses on a general property of seismicity that is broader than the mainshock–aftershocks paradigm. The chapter uses two options for the estimates in predicting the hazardous period. One option is based on average parameter estimates alone for the area of interest and on the earthquake magnitude. The other option makes use of information on the aftershocks that have occurred during the few hours after the mainshock, thus making the forecast much more precise.
Article
This paper considers the global statistics of the times of largest aftershocks relative to the times of the corresponding main shocks. A large data set was used to show that the time-dependent distribution of the largest aftershocks obeys a power law, analogous to the Omori law for the sequence of all aftershocks. It is also shown that the times of the second largest and subsequent largest aftershocks obey the same distribution. Thereby, we have confirmed the hypothesis that the times and magnitudes in an aftershock sequence are independent, and we make a good case for the Reasenberg–Jones representation of the aftershock process as a superposition of the Omori–Utsu law and the Gutenberg–Richter relation. Events that are smaller than the largest in an aftershock sequence show no delay relative to the largest event; this rejects the idea of the aftershock process as a direct failure cascade involving gradual transitions from larger to lesser scales, which imposes certain restrictions on the widely popular stochastic models of aftershock generation as branching processes. The above result is important in practice for the prediction of aftershock activity and for assessing the hazard of large aftershocks.
Article
In this paper, we consider the problem of forecasting the magnitude of the strongest aftershock starting from a certain instant of time in the future. This problem is topical because strong aftershocks that occur late, against a background of less frequent shocks, are less expected and thus pose an independent hazard. At the same time, the magnitudes of the strongest aftershocks decrease with time after the main shock. The purpose of accurately forecasting them is to minimize the underestimation or overestimation of the magnitude of future risks. In this study, the aftershock process is represented by the superposition of the Gutenberg–Richter and Omori–Utsu laws, whose parameters are estimated by the Bayesian method using the data on the aftershocks that have already occurred up to a given time point and a priori information about the probable values of the parameters. This significantly improves the forecast compared to estimates based on the magnitude of the main shock alone. The quality of the forecast is estimated relative to Båth's dynamic law with the use of two independent criteria: the first is based on similarity estimates, and the second on the error diagram.
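Under the Reasenberg–Jones superposition described above, and assuming Poisson occurrence, the distribution of the largest aftershock magnitude in a future window follows from the expected counts: P(Mmax < m) = exp(−N(≥ m; t0, t1)). The sketch below uses generic parameter values rather than the paper's Bayesian estimates:

```python
import numpy as np

def prob_max_below(m, t0, t1, m_main, a=-1.67, b=0.91, c=0.05, p=1.08):
    """P(largest aftershock in (t0, t1) days has magnitude < m),
    assuming Poisson occurrence with the Reasenberg-Jones rate.
    Generic parameter defaults, not the paper's Bayesian estimates."""
    if p == 1.0:
        integral = np.log((t1 + c) / (t0 + c))
    else:
        integral = ((t1 + c) ** (1 - p) - (t0 + c) ** (1 - p)) / (1 - p)
    n = 10.0 ** (a + b * (m_main - m)) * integral   # expected count >= m
    return np.exp(-n)

# Median largest-aftershock magnitude 30-365 days after an M7.8 event:
grid = np.arange(3.0, 8.0, 0.01)
probs = np.array([prob_max_below(m, 30.0, 365.0, 7.8) for m in grid])
print(f"median M_max ~ {grid[np.searchsorted(probs, 0.5)]:.1f}")
```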
Article
Real-time aftershock forecasting is important to reduce seismic risks after a damaging earthquake. The main challenge is to prepare forecasts based on the data available in real time, in which many events, including large ones, are missing and large hypocenter determination errors are present, because earthquakes are detected automatically before operator inspection and manual compilation. Despite its practical importance, the skill of aftershock forecasts based on such real-time data is still at a developmental stage. Here, we conduct a forecast test of large inland aftershock sequences in Japan using real-time data from the High Sensitivity Seismograph Network (Hi-net) automatic hypocenter catalog (Hi-net catalog), in which earthquakes are detected and located automatically in real time. Employing the Omori–Utsu and Gutenberg–Richter models, we find that the proposed probability forecast estimated from the Hi-net catalog outperforms the generic model with fixed parameter values for standard aftershock activity in Japan. Therefore, the real-time aftershock data from the Hi-net catalog can be effectively used to tailor forecast models to a target aftershock sequence. We also find that the probability forecast based on the Hi-net catalog is comparable in performance to one based on the latest version of the manually compiled hypocenter catalog of the Japan Meteorological Agency when forecasting large aftershocks with M > 3.95, despite the apparent inferiority of the automatically determined Hi-net catalog. These results demonstrate the practical usefulness of our forecasting procedure and the Hi-net automatic catalog for real-time aftershock forecasting in Japan.
Article
Crustal faults accommodate slip either by a succession of earthquakes or by continuous slip, and in most instances both these seismic and aseismic processes coexist. Recorded seismicity and geodetic measurements are therefore two complementary data sets that together document ongoing deformation along active tectonic structures. Here we study the influence of stable sliding on earthquake statistics. We show that creep along the San Andreas Fault is responsible for a break of slope in the earthquake size distribution. This slope increases with increasing creep rate for larger magnitude ranges, whereas it shows no systematic dependence on creep rate for smaller magnitude ranges. This is interpreted as a deficit of large events under conditions of faster creep, where seismic ruptures are less likely to propagate. These results suggest that the earthquake size distribution depends not only on the level of stress but also on the type of deformation.
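A break of slope of this kind can be probed by comparing maximum-likelihood b-values over two magnitude ranges. The sketch below does this on a synthetic catalog with a built-in break; the break magnitude and both b-values are illustrative choices, not fitted values from the study:

```python
import numpy as np

def b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_min.
    (Real, binned catalogs also need Utsu's dm/2 binning correction.)"""
    m = mags[mags >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

rng = np.random.default_rng(1)
n = 500_000
# Synthetic Gutenberg-Richter catalog with a break of slope at M5:
# b = 1.0 below the break and b = 1.5 above it (a deficit of large
# events, as on creeping fault sections).  All values are illustrative.
low = 2.0 + rng.exponential(1 / (1.0 * np.log(10)), size=n)
high = 5.0 + rng.exponential(1 / (1.5 * np.log(10)), size=n)
mags = np.where(low < 5.0, low, high)

print(f"b below the break: {b_value(mags[mags < 5.0], 2.0):.2f}")  # ~1.0
print(f"b above the break: {b_value(mags, 5.0):.2f}")              # ~1.5
```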
Article
We present highlights from the first decade of operation of the New Zealand Earthquake Forecast Testing Center of the Collaboratory for the Study of Earthquake Predictability (CSEP). Most results are based on reprocessing using the best available catalog, because the testing center did not consistently capture the complete real-time catalog. Tests of models with daily updating show that aftershock models incorporating Omori–Utsu decay can outperform long-term smoothed seismicity models, with probability gains up to 1000 during major aftershock sequences. Tests of models with 3-month updating show that several versions of the Every Earthquake a Precursor According to Scale (EEPAS) model, which incorporates the precursory scale increase phenomenon without Omori–Utsu decay, and the double-branching model, with both Omori–Utsu and exponential decay in time, outperformed a regularly updated smoothed seismicity model. In tests of 5-yr models over 10 yrs without updating, a smoothed seismicity model outperformed the earthquake source model of the New Zealand National Seismic Hazard Model. The performance of 3-month and 5-yr models was strongly affected by the Canterbury earthquake sequence, which occurred in a region of previously low seismicity. Smoothed seismicity models were shown to perform better with more frequent updating. CSEP models were a useful resource for the development of hybrid time-varying models for practical forecasting after major earthquakes in the Canterbury and Kaikoura regions.
Article
The technique for forecasting the spatial domain where fairly intense aftershocks should be expected after a strong earthquake is considered. The paper addresses the task of estimating the area prone to strong future aftershocks using the data from the first 12 h after the main shock. The existing aftershock identification techniques are inapplicable to this task because they either analyze the distributions of the epicenters of an aftershock process that has already been completed, or consider only the parameters of the main shock and provide only rough estimates. Using the developed criteria for estimating the quality of the prediction, we quantitatively compared quite a few candidates, including the main known techniques and the modifications of them that we suggest. In these modifications, we took into account the results of recent studies on the dynamics of the aftershock process. This enabled us to select the optimal procedure, which demonstrated the best results in quantitative tests on more than 120 aftershock sequences with magnitudes starting from 6.5 all over the world. This procedure can be used in seismological monitoring centers for forecasting the area prone to aftershock activity after a strong earthquake based on the data of operational processing.
Article
The method for forecasting the intensity of aftershock processes after strong earthquakes in different magnitude intervals is considered. The method is based on the joint use of a time model of the aftershock process and the Gutenberg–Richter law. The time model serves to estimate the intensity of the aftershock flow with magnitudes larger than or equal to the magnitude of completeness; the Gutenberg–Richter law is used for magnitude scaling. The suggested approach implements successive refinement of the parameters of both components of the method, which is the main novelty distinguishing it from previous ones. This approach, to a significant extent, takes into account the variations in the parameters of the frequency–magnitude distribution, which often show themselves in a decreasing fraction of stronger aftershocks with time. Testing the method on eight aftershock sequences in regions with different patterns of seismicity demonstrates a high probability of successful forecasts. The suggested technique can be employed in seismological monitoring centers for forecasting the aftershock activity of a strong earthquake based on the results of operational processing.
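The magnitude-scaling step follows directly from the Gutenberg–Richter law: a rate measured at the completeness magnitude is scaled down by 10^(−b(m − Mc)). A minimal sketch with illustrative numbers:

```python
def scale_rate_to_magnitude(rate_mc, m_c, m_target, b=1.0):
    """Scale an aftershock rate measured at the completeness magnitude
    m_c to a target magnitude via the Gutenberg-Richter law:
    N(>=m) = N(>=m_c) * 10**(-b * (m - m_c)).  b = 1.0 is a placeholder."""
    return rate_mc * 10.0 ** (-b * (m_target - m_c))

# For example, 40 events/day at M >= 3 imply ~0.4 events/day at M >= 5:
print(scale_rate_to_magnitude(40.0, 3.0, 5.0))
```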
Article
Important information about the earthquake generation process can be gained from instrumental earthquake catalogs, but this requires complete recordings to avoid biased results. The local completeness magnitude Mc is known to depend on general conditions such as the seismographic network and the environmental noise, which generally limit the possibility of detecting small events. The detectability can be additionally reduced by an earthquake-induced increase of the noise level, leading to short-term variations of Mc that cannot be resolved by traditional methods relying on the analysis of the frequency–magnitude distribution. Based on simple assumptions, I propose a new method to estimate such temporal excursions of Mc solely from the estimated earthquake rate, resulting in a high temporal resolution of Mc. The approach is shown to be in agreement with the apparent decrease of the estimated Gutenberg–Richter b-value in high-activity phases of recorded data sets and the observed incompleteness periods after mainshocks. Furthermore, an algorithm to estimate temporal changes of Mc is introduced and applied to empirical aftershock and swarm sequences from California and central Europe, indicating that observed b-value fluctuations are often related to rate-dependent incompleteness of the earthquake catalogs.
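A widely used rule of thumb for such earthquake-induced short-term incompleteness is the empirical relation of Helmstetter et al. (2006), Mc(t) = M − 4.5 − 0.75 log10(t/days); the network-baseline floor in the sketch below is an assumed value:

```python
import numpy as np

def mc_after_mainshock(t_days, m_main):
    """Short-term completeness magnitude after a main shock, using the
    empirical rule of Helmstetter et al. (2006):
    Mc(t) = M - 4.5 - 0.75*log10(t/days),
    floored at an assumed network baseline of 2.0."""
    mc = m_main - 4.5 - 0.75 * np.log10(t_days)
    return np.maximum(mc, 2.0)

for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:5} d  ->  Mc = {mc_after_mainshock(t, 7.8):.2f}")
```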
Article
The Regional Earthquake Likelihood Models experiment in California tested the performance of earthquake likelihood models over a five-year period. First-order analysis showed a smoothed-seismicity model by Helmstetter et al. (2007) to be the best model. We construct optimal multiplicative hybrids involving the best individual model as a baseline and one or more conjugate models. Conjugate models are transformed using an order-preserving function. Two parameters for each conjugate model and an overall normalizing constant are fitted to optimize the hybrid model. Many two-model hybrids have an appreciable information gain (log probability gain) per earthquake relative to the best individual model. For the whole of California, the Bird and Liu (2007) Neokinema and Holliday et al. (2007) pattern informatics (PI) models both give gains close to 0.25. For southern California, the Shen et al. (2007) geodetic model gives a gain of more than 0.5, and several others give gains of about 0.2. The best three-model hybrid for the whole region has the Neokinema and PI models as conjugates. The best three-model hybrid for southern California has the Shen et al. (2007) and PI models as conjugates. The information gains of the best multiplicative hybrids are greater than those of additive hybrids constructed from the same set of models. The gains tend to be larger when the contributing models involve markedly different concepts or data. These results need to be confirmed by further prospective tests. Multiplicative hybrids will be useful for assimilating other earthquake-related observations into forecasting models and for combining forecasting models at all timescales.