Article

The SCEC Southern California Reference Three-Dimensional Seismic Velocity Model Version 2


Abstract

We describe Version 2 of the three-dimensional (3D) seismic velocity model of southern California developed by the Southern California Earthquake Center and designed to serve as a reference model for multidisciplinary research activities in the area. The model consists of detailed, rule-based representations of the major southern California basins (Los Angeles basin, Ventura basin, San Gabriel Valley, San Fernando Valley, Chino basin, San Bernardino Valley, and the Salton Trough), embedded in a 3D crust over a variable-depth Moho. Outside of the basins, the model crust is based on regional tomographic results. The model Moho is represented by a surface with depths determined by the receiver-function technique. Shallow basin sediment velocities are constrained by geotechnical data. The model is implemented in a computer code that generates any specified 3D mesh of seismic velocity and density values. This parameterization is convenient to store, transfer, and update as new information and verification results become available.
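The mesh-generation idea in the abstract can be illustrated with a toy query function: given coordinates, return velocities and density from simple depth rules, then evaluate the function over a user-specified grid. Every rule below (the basin footprint, both velocity gradients, the fixed Vp/Vs ratio, and the Gardner-type density relation) is an illustrative placeholder, not the SCEC model's actual parameterization.

```python
import numpy as np

def query_velocity(x_km, y_km, z_km):
    """Toy rule-based query: a basin sediment rule embedded in a background
    crust. The basin footprint and all coefficients are invented."""
    in_basin = (0.0 <= x_km <= 10.0) and (0.0 <= y_km <= 10.0) and z_km < 3.0
    if in_basin:
        vp = 1.5 + 0.8 * z_km          # compaction-style sediment rule (km/s)
    else:
        vp = 5.5 + 0.1 * z_km          # smooth background crustal gradient
    vs = vp / 1.73                     # fixed Vp/Vs ratio for the sketch
    rho = 1.74 * vp ** 0.25            # Gardner-type density relation (g/cm^3)
    return vp, vs, rho

# fill a small regular mesh of Vp values, as the model code fills user meshes
xs, zs = np.linspace(0.0, 20.0, 5), np.linspace(0.0, 5.0, 6)
vp_mesh = np.array([[query_velocity(x, 5.0, z)[0] for z in zs] for x in xs])
```

Because the model is a set of rules rather than a stored grid, any mesh spacing can be generated on demand, which is what makes the parameterization cheap to store and update.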


... In the Next Generation Attenuation project (NGA-West2, Table 1), Abrahamson et al. (2014; hereafter, ASK14), Boore et al. (2014; hereafter, BSSA14), and Chiou and Youngs (2014; hereafter, CY14) used Z1.0 in their site terms, whereas Campbell and Bozorgnia (2014; hereafter, CB14) opted for Z2.5. Most of these depth parameters were obtained from regional velocity structure models (Graves and Aagaard, 2011; Ancheta et al., 2014), that is, the Community Velocity Model-S4 (CVM-S4) (Magistrale et al., 2000) and CVM-H1.1.0 (Süss and Shaw, 2003) basin models for southern California, the 3D velocity model of the Bay Area (Boatwright et al., 2004) for northern California, and the Japan Seismic Hazard Information Station (J-SHIS) model (Fujiwara et al., 2009) for Japan. ...
... Quality of site data is of great significance for empirical studies of site effects. Stewart et al. (2005) investigated the bias of southern California basin depth parameters (Magistrale et al., 2000), which were found to have an underestimation bias near the basin margin and an overestimation bias near the middle of the basin. During the NGA-West2 project, the J-SHIS velocity model was queried to establish a depth database (depths to the 1.0 and 2.5 km/s isosurfaces) for sites in Japan without depth measurements. ...
... In contrast, there exists a negative depth difference when Z_x from J-SHIS is relatively large. Similar problems were also identified in the southern California basin model (Magistrale et al., 2000) by Stewart et al. (2005). For the southern California basin model, the mean and standard deviation of residuals between inferred and measured Z1.5 are 107 and 408 m, respectively, whereas they are 8.88 and 78.71 m for the J-SHIS model. ...
Article
In the Next Generation Attenuation (NGA-West2) project, a three-dimensional subsurface structure model (Japan Seismic Hazard Information Station, J-SHIS) was queried to establish depths to the 1.0 and 2.5 km/s velocity isosurfaces for sites without depth measurements in Japan. In this paper, we evaluate the depth parameters in the J-SHIS velocity model by comparing them to their corresponding site-specific depth measurements derived from selected KiK-net velocity profiles. The comparison indicates that the J-SHIS model underestimates site depths at shallow sites and overestimates depths at deep sites. Similar issues were also identified in the Southern California Basin Model. In addition, our results show that these under- and over-estimations have a potentially significant impact on ground-motion prediction using NGA-West2 ground motion models (GMMs). Site resonant period may be considered as an alternative to the depth parameter in the site term of a GMM.
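The under- and over-estimation described above is quantified through residual statistics between model-inferred and site-measured depths (the mean giving the bias, the standard deviation the scatter). A minimal sketch, with made-up paired depths:

```python
import numpy as np

# hypothetical paired depths (m): model-inferred vs. site-measured
z_inferred = np.array([120.0, 340.0, 80.0, 560.0, 900.0])
z_measured = np.array([100.0, 400.0, 60.0, 700.0, 750.0])

residuals = z_inferred - z_measured
bias = residuals.mean()            # negative => underestimation on average
spread = residuals.std(ddof=1)     # sample standard deviation of residuals
```

With real data, plotting the residuals against measured depth exposes the depth-dependent pattern (underestimation at shallow sites, overestimation at deep sites) reported in the paper.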
... Extracting the average basin edge gradient from 11.25 to 13.25 km along profile A-A' in Figure 12 gives a dip angle of 72-73°. The SCEC CVMs have evolved from the original models of Magistrale et al. (1996, 2000). For the Los Angeles basin, an empirically determined velocity law for compacted sediments is used (Faust, 1951). ...
... Indeed, the northeastern components of the CSN operate within the surface expression of the lower Puente and Topanga units of the LA basin stratigraphic column, which were assembled early within the LA basin sequence and support a shallow sequence of basin rocks toward the right of profile A-A' (Yerkes et al., 2005). In the Supporting Information S1, we further discuss these two main features in the context of fitting the rule-based CVM1 (Magistrale et al., 1996, 2000) to profile A-A'. By perturbing the locations of the loosely constrained geological contacts that define the CVM1, we analyze the outcomes of our fully 3D inversion in terms of geological structure, and find that the steep basin sidewall is consistent with recently (≤4 Ma) active deformation. ...
Article
Full-text available
The proliferation of dense arrays promises to improve our ability to image geological structures at the scales necessary for accurate assessment of seismic hazard. However, combining the resulting local high-resolution tomography with existing regional models presents an ongoing challenge. We developed a framework based on the level-set method that infers where local data provide meaningful constraints beyond those found in regional models, for example the Community Velocity Models (CVMs) of southern California. This technique defines a volume within which updates are made to a reference CVM, with the boundary of the volume being part of the inversion rather than explicitly defined. By penalizing the complexity of the boundary, a minimal update that sufficiently explains the data is achieved. To test this framework, we use data from the Community Seismic Network, a dense permanent urban deployment. We inverted Love wave dispersion and amplification data from the Mw 6.4 and 7.1 2019 Ridgecrest earthquakes for an update to CVM-S4.26 using the Tikhonov Ensemble Sampling scheme, a highly efficient derivative-free approximate Bayesian method. We find the data are best explained by a deepening of the Los Angeles Basin with its deepest part south of downtown Los Angeles, along with a steeper northeastern basin wall. This result offers new progress toward the parsimonious incorporation of detailed local basin models within regional reference models utilizing an objective framework and highlights the importance of accurate basin models when accounting for the amplification of surface waves in the high-rise building response band.
... The structure of these workflow applications is shown in Fig. 6. These workflows are frequently studied in the context of workflow scheduling [12,27], and they are diverse in the types of tasks: CyberShake is a CPU-intensive application that generates synthetic seismograms to characterize earthquake hazards [36]; Epigenomics is a bioinformatics application used to automate the execution of diverse genome-sequence operations and requires high CPU and low I/O utilization [37]; LIGO is utilized to ...
Article
Full-text available
Nowadays, heterogeneous cloud resources are charged by cloud providers according to the pay-as-you-go pricing model. To execute workflow applications in clouds under deadline constraints, cloud resources have to be utilized appropriately and judiciously, challenging traditional workflow scheduling algorithms, which are either inapplicable to the cloud environment or fail to fully exploit the features of the scheduling problem for cost optimization. In this paper, we propose a heuristic algorithm, CSDW, and a meta-heuristic algorithm, N-WOA, to minimize the execution cost of a given workflow subject to a deadline constraint in clouds. CSDW first assigns a sub-deadline to each task based on the modified probabilistic upward rank; tasks are then sorted and mapped to appropriate instances; finally, an instance-type upgrading and downgrading method is adopted to further accelerate workflow execution and reduce the total cost. N-WOA employs the whale optimization algorithm for deadline-constrained cost optimization by refining the task-ordering step in CSDW. Simulation experiments on scientific workflows against existing algorithms demonstrate the capability of the proposed algorithms to meet deadlines and reduce execution costs; CSDW is highly competitive, and N-WOA achieves the best performance in all cases.
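The upward-rank ordering that CSDW modifies can be sketched in its classic (HEFT-style) form: a task's rank is its own cost plus the maximum, over its successors, of communication cost plus the successor's rank, and tasks are scheduled in decreasing rank. The DAG, costs, and communication times below are invented for illustration; the paper's probabilistic variant differs.

```python
# Classic upward rank on a toy DAG (the paper's variant is probabilistic;
# this is the standard definition it modifies).
def upward_rank(task, succ, cost, comm, memo=None):
    if memo is None:
        memo = {}
    if task in memo:
        return memo[task]
    children = succ.get(task, [])
    tail = max((comm.get((task, c), 0) + upward_rank(c, succ, cost, comm, memo)
                for c in children), default=0)
    memo[task] = cost[task] + tail
    return memo[task]

# diamond-shaped workflow: A fans out to B and C, which join at D
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 2, "B": 3, "C": 1, "D": 2}
comm = {("A", "B"): 1, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 1}
order = sorted(cost, key=lambda t: -upward_rank(t, succ, cost, comm))
```

Because the rank is a longest-path-to-exit measure, sorting by decreasing rank always yields a valid topological order, which is why it is a common priority rule in list scheduling.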
... Over the last two decades, there has been growing interest in the development of velocity models that can be used in a wide range of studies, including earthquake relocation, seismic hazards, tomography, earth structure dynamics, and ground motion parameters (e.g., Magistrale et al. 2000; Chang et al. 2014; Gao and Shen 2014b; Molinari et al. 2015; Taborda et al. 2016; Riaño et al. 2017; Yang and Ma 2019). There exist a number of global velocity models (e.g., FRS 1939; Jeffreys 1939; Jeffreys and Bullen 1940; Bullen 1985, 2012; Dziewonski and Anderson 1981; Kennett and Engdahl 1991; Morelli and Dziewonski 1993; Kennett et al. 1995) and local velocity models for different regions across the world (e.g., Flanagan et al. 2000; Kohler et al. 2003; Brocher et al. 2006; Koketsu et al. 2008; Mandal and Chadha 2008; Fujiwara et al. 2009; Chaljub et al. 2010; Manakou et al. 2010), including Pakistan (e.g., Reiter et al. 2001, 2005; Johnson and Vincent 2002; Shehzad and Yokoi 2013; Soomro et al. 2022). With different velocity models developed using different techniques and data, the selection of a velocity model should be based on geological and seismic data (e.g., Gao and Shen 2014b; Szwillus et al. 2019). ...
Article
Fatehjang is a low-seismicity area located in Northern Punjab, Pakistan, where a relatively stable intraplate environment has mostly produced only minor earthquakes. This study considers the only moderate earthquake (Mw 4.1) of the last 10 years, which occurred on August 28, 2020. A full-waveform inversion method was applied using four different velocity models. The variance reduction and condition number were used to compare the solutions calculated from the four velocity models. The large variance reduction and small condition number of the local velocity model indicate that it is more suitable than the three regional and global velocity models. The obtained solution depicts thrust faulting with a slight contribution of strike-slip movement at shallow focal depths. Results using the selected local velocity model are in line with the subsurface geological structure interpreted from seismic reflection profiles of the Ratana oil field, some 48 km from the epicenter of the seismic event. The source parameters were also estimated from P-wave displacement spectra using Brune's source model, and the calculated moment magnitude was found to be consistent with the waveform inversion results.
... However, to accurately assess the physical effects of local site conditions on surface ground motions, it is important to carefully estimate the structure deeper than the top 30 meters, preferably down to the bedrock layer. Particularly for deterministic hazard assessments, which require analyses beyond empirical ground motion models, information on deeper structure via 1D, 2D, or even 3D velocity models at sites or regions of interest becomes crucial (e.g., Magistrale et al. 2000; Asten et al., 2014; Askan et al., 2015). The backbone of multi-dimensional models is 1D profiles that are well resolved both spatially and in depth. ...
Preprint
Full-text available
The study used data acquired by the ESG6 Blind Prediction Step BP1 Working Group for the purpose of facilitating a comparison of interpretation methods for obtaining shear-wave velocity (Vs) profiles from array observations of microtremor (passive seismic) noise. This work uses the direct-fitting MMSPAC method and the krSPAC method on passive seismic data supplied from four seven-station nested triangular arrays with apertures ranging from 1 m to 962 m, located within Kumamoto City, Japan. The data allow a useful frequency range of 38 Hz down to 0.3 Hz, giving depth sensitivities from 2 m to >1000 m. Results are presented as a seven-layer model with time-averaged shear wave velocities for the top 30 m and 300 m of Vs30 = 189 m/s and Vs300 = 584 m/s, respectively. HVSR spectra show two significant peaks, at 1.2 and 0.35 Hz, which are indicative of major Vs contrasts at depths of 26 m and 750 m. The MMSPAC method (and its krSPAC variant) also proved viable on one asymmetric array where four of the seven stations were corrupted by incoherent low-frequency noise. Indications of a lateral variation in Vs could be detected from the non-concentric geometry of the four arrays, and also from variations in HVSR spectra at stations of the largest array. Further analysis in step 4 of the blind trials, making use of geological data and a Preferred model supplied to participants, showed apparent discrepancies between the Preferred model and our BP1 model for the upper 40 m, where a supplied PS log appears to be inconsistent with both the geological data and the blind BP1 model. At low frequencies (0.5–2.5 Hz), dispersion data and the BP1 model suggest that use of the Rayleigh effective mode is superior to use of the fundamental mode in deducing the Vs model at depths below 100 m.
The direct fitting of model and observed SPAC spectra used in MMSPAC also enabled use of a 0.5–38 Hz bandwidth for interpretation, wider than that achieved by other participants using passive seismic data alone.
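The time-averaged velocities Vs30 and Vs300 quoted above follow the standard travel-time average: the target depth divided by the summed vertical shear-wave travel time through the layers above it. A sketch with an invented layer profile (not the BP1 model):

```python
def time_averaged_vs(layers, depth):
    """Travel-time-averaged shear-wave velocity over the top `depth` metres:
    depth divided by the summed vertical travel time through the layers.
    layers: list of (thickness_m, vs_m_per_s); last layer extends downward."""
    t, z = 0.0, 0.0
    for h, vs in layers:
        use = min(h, depth - z)
        t += use / vs
        z += use
        if z >= depth:
            break
    if z < depth:                      # pad with the deepest layer
        t += (depth - z) / layers[-1][1]
    return depth / t

profile = [(5.0, 150.0), (20.0, 300.0), (100.0, 600.0)]   # invented, not BP1
vs30 = time_averaged_vs(profile, 30.0)
```

Because the average is harmonic (travel-time weighted), slow near-surface layers dominate Vs30, which is why Vs300 for the same column is always at least as large when velocity increases with depth.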
... Hence, multi-level solutions are utilized to guide the optimizer to the global minimum by solving the inverse problem on increasingly finer meshes (Bunks et al. 1995). Askan et al. (2007) applied the numerical algorithm described above to reconstruct a (synthetic) 2D Vs target distribution representing a vertical cross-section of the Los Angeles basin in the San Fernando Valley (Magistrale et al. 2000). The KKT system is discretized with Galerkin-type finite elements in space and finite differences in time. ...
Article
Full-text available
Seismic site characterization attempts to quantify seismic wave behavior at a specific location based on near-surface geophysical properties, for the purpose of mitigating damage caused by earthquakes. In recent years, techniques for estimating near-surface properties for site characterization using geophysical observations recorded at the surface have become an increasingly popular alternative to invasive methods. These observations include surface-wave phenomenology such as dispersion (velocity-frequency relationship) as well as, more recently, full seismic waveforms. Models of near-surface geophysical properties are estimated from these data via inversion, such that they reproduce the observed seismic observations. A wide range of inverse problems have been considered in site characterization, applying a variety of mathematical techniques for estimating the inverse solution. These problems vary with respect to seismic data type, algorithmic complexity, computational expense, physical dimension, and the ability to quantitatively estimate the uncertainty in the inverse solution. This paper presents a review of the common inversion strategies applied in seismic site characterization studies, with a focus on associated advantages/disadvantages as well as recent advancements.
... Basin extents and boundaries were constrained using isostatic gravity anomaly data. CVM-S4 also includes a geotechnical layer (GTL) to describe the material in the uppermost 300 m of the model (Magistrale et al., 2000). The GTL for CVM-S4 is constrained by V p and V s borehole measurements and is implemented for a particular location using a weighted average of nearby profiles and a mean profile corresponding to the NEHRP category (as implemented by Wills et al., 2000) at that site. ...
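The GTL interpolation described above (a weighted average of nearby borehole profiles and a mean profile for the site's NEHRP category) might be sketched as follows. The inverse-distance weighting and the 50/50 blend with the category mean are assumptions for illustration, not the published scheme.

```python
import numpy as np

def gtl_vs(site_xy, profiles, nehrp_mean, w_mean=0.5):
    """Blend nearby borehole Vs profiles (inverse-distance weights) with a mean
    profile for the site's NEHRP category. All weights here are assumptions."""
    d = np.array([np.hypot(site_xy[0] - x, site_xy[1] - y)
                  for (x, y), _ in profiles])
    w = 1.0 / np.maximum(d, 1e-6)               # inverse-distance weights
    w /= w.sum()
    nearby = sum(wi * np.asarray(vs) for wi, (_, vs) in zip(w, profiles))
    return (1.0 - w_mean) * nearby + w_mean * np.asarray(nehrp_mean)

# two hypothetical boreholes bracketing the site, Vs at two depths (m/s)
site_vs = gtl_vs((0.0, 1.0),
                 [((0.0, 0.0), [100.0, 200.0]),
                  ((0.0, 2.0), [300.0, 400.0])],
                 nehrp_mean=[180.0, 280.0])
```

Blending toward a category mean keeps the GTL stable where borehole coverage is sparse, at the cost of smoothing genuine local variability.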
Article
We introduce procedures to validate site response in sedimentary basins as predicted using ground motion simulations. These procedures aim to isolate contributions of site response to computed intensity measures relative to those from seismic source and path effects. In one of the validation procedures, simulated motions are analyzed in the same manner as earthquake recordings to derive non-ergodic site terms. This procedure compares the scaling with sediment isosurface depth of simulated versus empirical site terms (the latter having been derived in a separate study). A second validation procedure utilizes two sets of simulations, one that considers three-dimensional (3D) basin structure and a second that utilizes a one-dimensional (1D) representation of the crustal structure. Identical sources are used in both procedures, and after correcting for variable path effects, differences in ground motions are used to estimate site amplification in 3D basins. Such site responses are compared to those derived empirically to validate both the absolute levels and the depth scaling of site response from 3D simulations. We apply both procedures to southern California in a manner that is consistent between the simulated and empirical data (i.e., by using similar event locations and magnitudes). The results show that the 3D simulations overpredict the depth scaling and absolute levels of site amplification in basins. However, overall patterns of site amplification with depth are similar, suggesting that future calibration may be able to remove observed biases.
... For these reasons, we selected CVM-Si as the basis for the 3D velocity structure used for the simulations. However, one issue with CVM-Si is the unrealistically high surface velocities (Vs of about 2500 m/s) despite the inclusion of the near-surface geotechnical layer (Magistrale et al., 2000; Taborda et al., 2016). To address this, we have applied a velocity taper in the upper 100 m of CVM-Si, which is matched to the Vs30 values of Wills et al. (2015) at the surface. ...
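A velocity taper of this kind can be sketched as a blend from the mapped surface Vs30 value up to the model velocity at the taper depth. The linear blend below is an assumption for illustration only; the published taper may use a different functional form.

```python
def tapered_vs(z_m, vs_model, vs30_surface, taper_depth=100.0):
    """Blend from the mapped Vs30 value at the surface to the model velocity
    at `taper_depth`. The linear form is an assumption, not the published
    taper."""
    if z_m >= taper_depth:
        return vs_model                # below the taper: model untouched
    f = z_m / taper_depth              # 0 at the surface -> 1 at taper_depth
    return (1.0 - f) * vs30_surface + f * vs_model
```

For example, with a model surface value of 2500 m/s and a mapped Vs30 of 400 m/s, the taper returns 400 m/s at the surface and recovers the model value at 100 m depth.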
Article
Full-text available
The main objective of this study is to develop physics-based constraints on the spatiotemporal variation of the slip-rate function using a simplified dynamic rupture model. First, we performed dynamic rupture modeling of the 2019 Mw 7.1 Ridgecrest, California, earthquake, to analyze the effects of depth-dependent stress and material friction on slip rate. Then, we used our modeling results to guide refinements to the slip-rate function that were implemented in the Graves–Pitarka kinematic rupture generation technique. The dynamic ruptures were computed on a surface-rupturing, planar strike-slip fault that includes a weak (negative to low-stress-drop) zone in the upper 4 km of the crust. Below the weak zone, we placed high-stress-drop patches designed to mirror the large-slip areas seen in various rupture model inversions of the event. The locations of the high-stress-drop patches and the hypocenter were varied in multiple realizations to investigate how changing the dynamic conditions affected the resulting rupture kinematics, in particular, the slip rate. From these simulations, we observed a systematic change in the shape of the slip-rate function from Kostrov type below the weak zone to a predominantly symmetric shape within the weak zone, along with a depth-dependent reduction of peak slip rate. We generalized these shallow rupture features into a depth-dependent parametric variation of the slip-rate function and implemented it in the Graves–Pitarka kinematic rupture model generator. The performance of the updated kinematic approach was then verified in 0–4 Hz simulations of the Mw 7.1 Ridgecrest earthquake, which showed that incorporating the depth-dependent variation in the shape of the slip-rate function improves the fit to the observed near-fault ground motions in the 0.5–3 s period range.
... CyberShake: this workflow is used by the Southern California Earthquake Center to classify earthquake alarms [40]. Epigenomics: this workflow uses DNA sequence lanes to generate multiple lanes of DNA sequences [41]. Montage: this workflow, created by NASA/IPAC, stitches together multiple input images to create custom mosaics of the sky [42]. Inspiral: this workflow is used to generate and analyze gravitational waveforms from the data collected during the coalescing of compact binary systems [43]. Sipht: this workflow is used in bioinformatics to search for small nontranslated bacterial regulatory RNAs [44]. The mentioned datasets are processed to meet the current requirements after obtaining them from external storage. ...
Article
Full-text available
Fairness plays a vital role in crowd computing by attracting its workers. The power of crowd computing stems from a large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Requester-centric task allocation algorithms aim to maximize the completion quality of the entire workflow and minimize its total cost, which is discriminatory toward workers. The crowdsourcing workflow needs to balance two objectives, namely, fairness and cost. In this study, we propose an alternative greedy approach with four heuristic strategies to address this issue. In particular, the proposed approach monitors the current status of workflow execution and uses heuristic strategies to adjust the parameters of task allocation. We design a two-phase allocation model to accurately match tasks with workers. F-Aware allocates each task to the worker that maximizes fairness and minimizes cost. We conduct extensive experiments to quantitatively evaluate the proposed algorithms in terms of running time, fairness, and cost, using a custom objective function on WorkflowSim, a well-known cloud simulation tool. Experimental results based on real-world workflows show that F-Aware, which is 1% better than the best competing algorithm, outperforms other optimal solutions in finding the tradeoff between fairness and cost.
... Substantial work has been done to improve the accuracy of the velocity models of southern California. Examples of such efforts include the development of sophisticated imaging methods (e.g., Muir & Tsai, 2020; Tape et al., 2009; Zhong & Zhan, 2020), the incorporation of more seismic data (e.g., Kohler et al., 2003; Lee et al., 2014; Magistrale et al., 2000), and the deployment of temporary nodal arrays that allow us to explore the complex architecture of the Los Angeles basin in refined detail (e.g., Liu et al., 2018). In particular, the introduction of oil-industry surveys into the field of crustal geophysics has dramatically increased the resolution of regional velocity models and, with it, improved the prediction of several seismic observables (Jia & Clayton, 2021; Lin et al., 2013). ...
Article
Full-text available
The metropolitan Los Angeles region represents a zone of high seismic risk due to its proximity to several fault systems, including the San Andreas fault. Adding to this problem is the fact that Los Angeles and its surrounding cities are built on top of soft sediments that tend to trap and amplify seismic waves generated by earthquakes. In this study, we use three dense petroleum-industry surveys deployed in a 16 x 16 km area at Long Beach, California, to produce a high-resolution model of the top kilometer of the crust and investigate the influence of its structural variations on the amplification of seismic waves. Our velocity estimates reveal substantial lateral contrasts and correlate remarkably well with the geological background of the area, illuminating features such as the Newport-Inglewood fault, the Silverado aquifer, and the San Gabriel river. We then use computational modeling to show that the presence of these small-scale structures has a clear impact on the intensity of the expected shaking and can cause ground-motion acceleration to change by several factors over a sub-kilometer horizontal scale. These results shed light on the scale of variations that can be expected in this type of tectonic setting and highlight the importance of resolution in modern-day seismic hazard estimates.
... Basin-scale models that provide better characterization of sedimentary structures have also been inserted into regional models to improve them. For example, an earlier version of CVM-S4.26 embedded detailed rule-based basin models in irregular domains (Magistrale et al., 2000) into a regional travel-time tomographic model (Hauksson, 2000). The predecessor of CVM-H 15.1 also embedded high-resolution basin models (Komatitsch et al., 2004; J. ...
Article
Full-text available
Updating Earth models used by the scientific community in geologic studies and hazard assessment has a significant societal impact but is computationally prohibitive due to the large spatial scale. The advent of urban seismology allowed rapid development of local high-resolution models using short-term dense seismic arrays to become conventional. To incorporate the details in these local models in community models, we developed a technique for constructing window taper functions like the cosine taper in arbitrarily shaped spatial domains on regular grids. We apply our algorithm to the problem of low-frequency ground shaking estimation near the southernmost San Andreas fault by creating two hybrid models. These models consist of basin-scale (top 10 km or less) high-resolution models developed using controlled source data embedded into two popular Southern California Earthquake Center community models. We evaluate the models by computing long period (6–30 s) wavefield energy misfits using 11 earthquakes with moment magnitudes between 3.5 and 5.5 not used in developing any of the models under consideration. One of the hybrid models produces an ∼24% decrease while the other has an ∼0.6% increase in the overall median misfit relative to their original community models. The overlapping misfit values between the models and variability in waveform fit for different events and stations emphasize the difficulties in model validation. Our approach can merge any type of gridded multiscale and multidimensional datasets, and represents a valuable tool for modeling in the computational sciences.
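The window-taper construction described in the abstract might be sketched as follows: a cosine (Hann-edge) weight driven by each grid cell's distance to the boundary of an arbitrarily shaped domain, so the local model blends smoothly into the regional model. The brute-force Manhattan distance below is purely for clarity and is not the paper's algorithm.

```python
import numpy as np

def cosine_taper_weights(mask, width):
    """Cosine-edge taper over an arbitrary region on a regular grid.
    mask: boolean array, True inside the local model; width: taper width in
    cells. Weight rises from 0 at the boundary to 1 in the interior."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    w = np.zeros(mask.shape)
    for ij in inside:
        # Manhattan distance to the nearest outside cell (brute force)
        d = np.min(np.abs(outside - ij).sum(axis=1)) if len(outside) else width
        t = min(d, width) / width
        w[tuple(ij)] = 0.5 * (1.0 - np.cos(np.pi * t))
    return w

mask = np.zeros((7, 7), bool)
mask[1:6, 1:6] = True                  # a square local-model footprint
w = cosine_taper_weights(mask, width=2)
```

The hybrid model is then `w * local + (1 - w) * regional` at each grid point, which guarantees continuity of the merged field across the domain boundary.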
... Withers et al., 2015, 2019. The most commonly used regional seismic velocity models for this purpose are the two Southern California Earthquake Center (SCEC) community velocity models (CVMs): CVM-S (Kohler et al., 2003; Lee et al., 2014b; Magistrale et al., 2000) and CVM-H (Shaw et al., 2015; Süss and Shaw, 2003). The differences between the two CVMs lead to different simulation results and performance (Lee et al., 2014a; Bielak, 2013, 2014). ...
Article
We study ground-motion response in urban Los Angeles during the two largest events (M7.1 and M6.4) of the 2019 Ridgecrest earthquake sequence using recordings from multiple regional seismic networks as well as a subset of 350 stations from the much denser Community Seismic Network. In the first part of our study, we examine the observed response spectral (pseudo) accelerations for a selection of periods of engineering significance (1, 3, 6, and 8 s). Significant ground-motion amplification is present and reproducible between the two events. For the longer periods, coherent spectral acceleration patterns are visible throughout the Los Angeles Basin, while for the shorter periods, the motions are less spatially coherent. However, coherence is still observable at smaller length scales due to the high spatial density of the measurements. Examining possible correlations of the computed response spectral accelerations with basement depth and Vs30, we find the correlations to be stronger for the longer periods. In the second part of the study, we test the performance of two state-of-the-art methods for estimating ground motions for the largest event of the Ridgecrest earthquake sequence, namely three-dimensional (3D) finite-difference simulations and ground motion prediction equations. For the simulations, we are interested in the performance of the two Southern California Earthquake Center 3D community velocity models (CVM-S and CVM-H). For the ground motion prediction equations, we consider four of the 2014 Next Generation Attenuation-West2 Project equations. For some cases, the methods match the observations reasonably well; however, neither approach is able to reproduce the specific locations of the maximum response spectral accelerations or match the details of the observed amplification patterns.
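The response spectral (pseudo) accelerations analyzed above are obtained by driving a damped single-degree-of-freedom oscillator of the chosen period with the ground acceleration and scaling the peak relative displacement by the squared natural frequency. A standard Newmark average-acceleration sketch (5% damping by default):

```python
import numpy as np

def psa(acc, dt, period, damping=0.05):
    """Pseudo-spectral acceleration of a damped SDOF oscillator via the
    Newmark constant-average-acceleration scheme. acc: ground acceleration."""
    wn = 2.0 * np.pi / period
    k, c, m = wn**2, 2.0 * damping * wn, 1.0    # unit-mass oscillator
    u, v, a = 0.0, 0.0, -acc[0]                 # initial state at rest
    umax = 0.0
    keff = k + 2.0 * c / dt + 4.0 * m / dt**2
    for ag in acc[1:]:
        p = -ag + m * (4.0 * u / dt**2 + 4.0 * v / dt + a) + c * (2.0 * u / dt + v)
        unew = p / keff
        vnew = 2.0 * (unew - u) / dt - v
        anew = 4.0 * (unew - u) / dt**2 - 4.0 * v / dt - a
        u, v, a = unew, vnew, anew
        umax = max(umax, abs(u))
    return umax * wn**2                          # PSA = wn^2 * peak |u|
```

As a sanity check, a long resonant sine of unit amplitude at 5% damping should return roughly 1/(2 x 0.05) = 10 times the input amplitude.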
... Here, c_epi(x_3) and c_station(x_3) denote the values of the parameter under consideration at depth |x_3| for the coordinates of the epicenter and station, respectively. The values of c_epi(x_3) and c_station(x_3) are taken to be the same as those obtained from the three-dimensional reference velocity model (Version 4) of southern California [56] by inputting the depth |x_3| and the coordinates of the epicenter (as in Table 1) and the station (as in Table 2). Thus, the take-off seismic rays encounter the stratum underlying the epicenter near their point of origin and the stratum underlying the recording station near their point of incidence on the free surface. ...
Article
It is convenient to synthesize the Fourier spectra of the rocking ground motions at a station from the recorded translational motions at the same station. The conventional models in this approach assume (1) the seismic source to be a point source, (2) the medium to be a horizontally stratified elastic half-space, and (3) the translational motions to be caused primarily by the planar wavefronts of the body waves. An improved ‘planar wavefront’ model is proposed under this approach such that (1) the model applies also to near the epicenter, with the hypocentral distance of the station assumed to be much larger than the wavelengths of the incident waves, (2) the information on the underlying focal mechanism is accounted for, and (3) both in-plane and out-of-plane rocking spectra can be estimated. The proposed model assumes the material properties of the stratified medium to vary smoothly with depth. Further, it considers the spatial variations of both the amplitudes and the incidence angles of the incoming waves. The spatial variation of the amplitudes is formulated by considering the physics of (1) the radiation of body waves emitted by a kinematic shear dislocation point source and (2) the propagation of those waves through the stratified medium. A numerical study shows that the proposed model leads to more accurate rocking spectra over a large range of fault parameters and frequencies. Further, a few example near-epicenter records of translational ground motions illustrate the disparities between the time-domain estimates of the proposed model with those of the conventional models as well as between the responses of simple structures subjected to the two sets of motions.
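The conventional plane-wavefront assumption mentioned above ties rocking to the horizontal gradient of vertical motion, so in the frequency domain the rocking amplitude is the vertical displacement spectrum scaled by omega over the apparent propagation velocity. A sketch of that conventional estimate (not the paper's improved model):

```python
import numpy as np

def rocking_spectrum(vert_disp, dt, c_app):
    """Conventional plane-wave estimate of the rocking Fourier amplitude
    spectrum from a vertical displacement record:
        |R(f)| = (2*pi*f / c_app) * |U_z(f)|
    c_app is the apparent horizontal propagation velocity (m/s)."""
    n = len(vert_disp)
    f = np.fft.rfftfreq(n, dt)
    Uz = np.fft.rfft(vert_disp) * dt            # discrete -> Fourier amplitude
    return f, (2.0 * np.pi * f / c_app) * np.abs(Uz)

# example: a 1 Hz vertical displacement wave crossing at c_app = 1000 m/s
n, dt = 1000, 0.01
t = np.arange(n) * dt
f, R = rocking_spectrum(np.sin(2.0 * np.pi * 1.0 * t), dt, 1000.0)
```

The f-proportional scaling means high-frequency translational energy maps into disproportionately large rocking, which is one reason the choice of apparent velocity and incidence angle matters in the improved model.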
... • There is a large body of work, spanning several decades, to develop seismic velocity models for the region's sedimentary basin structures (i.e., Magistrale et al. 2000; other documents cited in Table 2.1). ...
Technical Report
Seismic site response can be influenced by a variety of physical mechanisms that include amplification due to resonance, nonlinearity, topographic effects, impedance contrasts, and contributions from two- or three-dimensional wave propagation in sedimentary basins. Current ground motion models use ergodic procedures that average these effects over many sites globally by conditioning on the time-averaged shear wave velocity in the upper 30 m (VS30), and in some cases, on the depth to a shear wave velocity isosurface (zx) that is also known as the basin depth parameter. Current site amplification models conditioned on VS30 reflect, in an average sense, most of the aforementioned physical mechanisms, including basin effects. The site response contributions from basin effects are associated with a differential depth parameter taken as the difference between depth for a particular site and average basin depth conditioned on VS30. The basin amplification models are “centered”, in the sense that they predict changes in ground motion amplification for non-zero differential depths. The changes in ground motion amplification are positive and negative for sites with positive and negative differential depths, respectively. The models predicting this behavior are derived using data from both northern and southern California, and for sites situated within sedimentary basin structures but also other geomorphic provinces (e.g., sedimentary structures of different scales and sites with shallow soil overlying rock). We investigate the benefits of regionalizing basin response in ergodic ground motion models. 
Using southern California data, we consider the following questions: (1) how should basin and non-basin locations be classified?; (2) how does mean site response and the associated ground motion variability differ for basins compared to non-basin geomorphic provinces?; and (3) what are the variations in basin response between different basin structures and how can this be modelled for predicting ground motion intensity measures? We recommend a site classification scheme that distinguishes basins, basin edges, valleys (sedimentary structures smaller in scale than basins), and mountain/hill areas. Moreover, we distinguish basins in southern California based on their geologic origin: coastal basins with varied depositional histories and large depths (e.g., Los Angeles Basin); inland, fault-bounded basins with relatively shallow sediments derived from neighboring mountains (e.g., Chino Basin); Imperial Valley, a basin on the transform fault plate boundary at a graben in the step-over between the San Andreas and Imperial Faults. To address the second and third questions, we compile a large ground motion database for southern California that significantly expands upon the data available in the NGA-West2 project, and which has the benefit of significantly increasing the number of recorded events per site. We verify that an NGA-West2 ground motion model has unbiased source and path terms relative to the dataset, and we make minor modifications to the global VS30-scaling function to fit the mean of southern California data. Using the slightly adjusted ground motion model, we compute site terms for 670 sites, which approximately represent the mean difference between the actual site response and the ground motion model prediction.
We find that the combined data (i.e., site terms) from all sites exhibit trends with differential depth that are qualitatively similar to those in NGA-West2 models (de-amplification for negative differential depth, amplification for positive). We find basin and basin edge categories to be similar to each other, but different from the combined data set in the sense that de-amplification at negative differential depths is generally absent. Moreover, the depth-invariant mean amplification for this condition is positive, indicating under-prediction from the VS30-scaling model. We find the valley and mountain/hill categories to exhibit similar trends in which amplification scales with differential depths and has both positive and negative values. The depth-invariant mean amplification for these conditions is negative. Among basin sites, we find differences for coastal and inland basins. The response of coastal basins essentially matches that for the overall basin category (amplification scales up with increasing differential depth). A similar pattern is followed by the Imperial Valley Basin. However, the response of other inland basins is different, with no appreciable dependence on differential depth, but apparent uniform shifts that are specific to individual basins (but which are poorly constrained by the data). Models are proposed to capture the mean behavior of the recommended groups: coastal basins, inland basins, and Imperial Valley Basin. Site-to-site standard deviation terms (S2S) are found to vary strongly across geomorphic provinces, with basins and valleys having notably lower dispersions than mountain/hill sites and the reference ergodic model.
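The centered differential-depth formulation described in this abstract can be illustrated with a short sketch. The centering function and coefficients below are hypothetical stand-ins, not the published NGA-West2 or report models; only the structure follows the text, namely depth minus a VS30-conditioned average depth, with amplification positive for deeper-than-average sites and negative for shallower ones.

```python
import numpy as np

def differential_depth(z1p0_m, vs30_ms, mu_lnz1):
    """Differential basin depth: site depth minus the VS30-conditioned
    average depth. `mu_lnz1` is a centering function returning
    E[ln z1.0 | VS30]; the example below uses a hypothetical form,
    not the published relations."""
    return z1p0_m - np.exp(mu_lnz1(vs30_ms))

def centered_basin_amplification(dz_m, c_slope=2e-4, c_cap=0.25):
    """'Centered' amplification term in natural-log units: positive for
    positive differential depth, negative for negative, with a
    saturation cap. Coefficients are illustrative only."""
    return np.clip(c_slope * dz_m, -c_cap, c_cap)

# Hypothetical centering model for demonstration purposes only.
mu = lambda vs30: 7.0 - 2.0 * np.log(vs30 / 300.0)

dz_deep = differential_depth(2000.0, 300.0, mu)     # deeper than average
dz_shallow = differential_depth(200.0, 300.0, mu)   # shallower than average
f_deep = centered_basin_amplification(dz_deep)
f_shallow = centered_basin_amplification(dz_shallow)
```

The sign behavior (f_deep > 0, f_shallow < 0) is the "centered" property the abstract describes; the report's finding is that real basins often deviate from it.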
... The low-frequency ground motion waveforms are generated using numerical methods incorporating the 3-D seismic wave-speed structure of the earth. Seismologists have created 3-D earth models [17, 27–35] of seismic wave speeds and density, and now can study 3-D global and regional seismic wave propagation using approaches based, for instance, on the finite element and the finite difference methods (e.g., [4, 6, 36–42]). Here, to numerically propagate seismic waves, we use the open-source package SPECFEM3D (V2.0 SESAME) that is based on the spectral-element method [16, 43]. ...
Article
Full-text available
Seismic wave-propagation simulations are limited in their frequency content by two main factors: (1) the resolution of the seismic wave-speed structure of the region in which the seismic waves are propagated through; and (2) the extent of our understanding of the rupture process, mainly on the short length scales. For this reason, high-frequency content in the ground motion must be simulated through other means. Toward this end, we adopt a variant of the classical empirical Green’s function (EGF) approach of summing, with suitable time shift, recorded seismograms from small earthquakes in the past to generate high-frequency seismograms (0.5–5.0 Hz) for engineering applications. We superimpose these seismograms on low-frequency seismograms, computed from kinematic source models using the spectral element method, to produce broadband seismograms. The non-uniform time-shift scheme used in this work alleviates the over-estimation of high-frequency content of the ground motions observed. We validate the methodology by simulating broadband motions from the 1999 Hector Mine and the 2006 Parkfield earthquakes and comparing them against recorded seismograms.
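The hybrid broadband scheme above can be sketched minimally: delayed-and-summed empirical Green's functions supply the high-frequency band, which is spliced onto a long-period numerical synthetic at a crossover frequency. The brick-wall matched filters and uniform time shifts below are simplifications assumed for illustration; the paper's non-uniform time-shift scheme and filtering are more elaborate.

```python
import numpy as np

def broadband_from_egf(lowfreq_syn, egf_traces, time_shifts, dt, f_cross=0.5):
    """Hybrid broadband sketch: sum small-event records (EGFs) with
    individual time shifts to build the high-frequency part, then
    combine with a long-period numerical synthetic using complementary
    brick-wall filters at the crossover frequency f_cross (Hz)."""
    n = len(lowfreq_syn)
    hf = np.zeros(n)
    for trace, shift in zip(egf_traces, time_shifts):
        k = int(round(shift / dt))            # time shift in samples
        m = min(len(trace), n - k)
        if m > 0:
            hf[k:k + m] += trace[:m]          # delayed-and-summed EGF
    freqs = np.fft.rfftfreq(n, d=dt)
    lo = np.fft.rfft(lowfreq_syn) * (freqs < f_cross)   # keep long periods
    hi = np.fft.rfft(hf) * (freqs >= f_cross)           # keep short periods
    return np.fft.irfft(lo + hi, n)

# Toy example: a 2 Hz EGF burst superimposed on a 0.1 Hz synthetic.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
syn = np.sin(2.0 * np.pi * 0.1 * t)
egf = np.sin(2.0 * np.pi * 2.0 * np.arange(0.0, 1.0, dt)) * np.hanning(100)
bb = broadband_from_egf(syn, [egf, egf], [2.0, 2.5], dt)
```

Below the crossover the hybrid trace matches the numerical synthetic exactly, while the EGF sum contributes only above it.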
... The SCEC CFM is widely used in crustal deformation modeling, wave propagation simulations, earthquake simulators, and probabilistic seismic hazards assessment (e.g., Uniform California Earthquake Rupture Forecast,version 3 [v.3]). The CFM also directly contributes to other property modeling efforts, such as the development of 3D seismic-wavespeed models (e.g., Community Velocity Models; Magistrale et al., 2000;Süss and Shaw, 2003;Shaw et al., 2015) that are used in many aspects of seismology, including strong ground motion prediction. ...
Article
We present new 3D source fault representations for the 2019 M 6.4 and M 7.1 Ridgecrest earthquake sequence. These representations are based on relocated hypocenter catalogs expanded by template matching and focal mechanisms for M 4 and larger events. Following the approach of Riesner et al. (2017), we generate reproducible 3D fault geometries by integrating hypocenter, nodal plane, and surface rupture trace constraints. We used the southwest–northeast-striking nodal plane of the 4 July 2019 M 6.4 event to constrain the initial representation of the southern Little Lake fault (SLLF), both in terms of location and orientation. The eastern Little Lake fault (ELLF) was constrained by the 5 July 2019 M 7.1 hypocenter and nodal planes of M 4 and larger aftershocks aligned with the main trend of the fault. The approach follows a defined workflow that assigns weights to a variety of geometric constraints. These main constraints have a high weight relative to that of individual hypocenters, ensuring that small aftershocks are applied as weaker constraints. The resulting fault planes can be considered averages of the hypocentral locations respecting nodal plane orientations. For the final representation we added detailed, field-mapped rupture traces as strong constraints. The resulting fault representations are generally smooth but nonplanar and dip steeply. The SLLF and ELLF intersect at nearly right angles and cross on another. The ELLF representation is truncated at the Airport Lake fault to the north and the Garlock fault to the south, consistent with the aftershock pattern. The terminations of the SLLF representation are controlled by aftershock distribution. These new 3D fault representations are available as triangulated surface representations, and are being added to a Community Fault Model (CFM; Plesch et al., 2007, 2019; Nicholson et al., 2019) for wider use and to derived products such as a CFM trace map and viewer (Su et al., 2019).
... Eisner and Clayton [2001,2005] used a finite difference method to calculate reciprocal Green's functions to build 300 source scenarios for five major Southern California faults in the 3-D heterogeneous crustal model of Magistrale et al. [2000]. Liu et al. [2004] used a spectral-element method to calculate strain Green's tensors for CMT inversion in a 3-D crustal model of Southern California [Süss and Shaw, 2003]. ...
Article
Full-text available
Although both earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green’s functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, and distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green’s functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG in comparison to the global CMT shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depth shallower than 15 km and Mw < 6, which contributes to accurate tectonic interpretation of the region.
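Because the data are linear in the six moment-tensor components once the Green's functions are fixed, the CMT inversion with a grid search over candidate centroid locations reduces to a small linear-algebra exercise. The sketch below uses synthetic Green's functions; in the study they come from 3-D spectral-element simulations, and the "locations" index a 3-D grid rather than labels.

```python
import numpy as np

def invert_cmt(data, green_library):
    """Grid-search CMT inversion sketch: at each trial centroid location
    solve d = G m by least squares for the six moment-tensor components
    m, and keep the location with the smallest residual misfit.
    `green_library[loc]` is an (n_samples, 6) Green's function matrix."""
    best = None
    for loc, G in green_library.items():
        m, *_ = np.linalg.lstsq(G, data, rcond=None)
        misfit = np.linalg.norm(data - G @ m)
        if best is None or misfit < best[2]:
            best = (loc, m, misfit)
    return best

# Synthetic demonstration: data generated at location "B" are recovered.
rng = np.random.default_rng(0)
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])   # toy moment tensor
lib = {loc: rng.standard_normal((200, 6)) for loc in ["A", "B", "C"]}
d = lib["B"] @ m_true                                  # noise-free data
loc, m_est, misfit = invert_cmt(d, lib)
```

With noise-free data the search recovers both the generating location and the moment tensor; with real data the misfit surface over the 3-D grid is what localizes the centroid.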
... We use the 3-D finite element software FaultMod (Barall, 2009), which performed well in dynamic rupture code verification exercises (Harris et al., 2018), to conduct dynamic simulations of the M6.4 and M7.1 Ridgecrest earthquakes. We embed the faults in an elastic 3-D medium with material properties from the SCEC Community Velocity Model (Magistrale et al., 2000) and implement linear slip-weakening friction (Ida, 1972;Andrews, 1976), using frictional coefficients within a range of values commonly used in dynamic rupture simulations (e.g., Harris et al., 2018). We nucleate both ruptures at the locations in our 3-D fault model closest to their SCSN hypocenters, by raising shear stress to 10% above yield stress and forcing rupture over an area larger than the critical patch required for self-sustained rupture (Day, 1982). ...
Article
Full-text available
Plain Language Summary The M6.4 and M7.1 Ridgecrest, California, earthquakes of July 2019 occurred 34 hr apart, on two faults that cross each other. We used physics‐based computer simulations of the earthquake process to investigate why both faults did not move together in one bigger earthquake and whether the second earthquake only happened due to effects from the first. We found that the fault movement in the first earthquake compressed the second fault, which prevented it from moving at the same time. We also found that the second fault could have had a M7.1 earthquake on its own, without the influence of the M6.4 on the previous day but that the first earthquake affected the details of the second and likely made the second one happen sooner than it would have otherwise. This has meaning both for understanding why the Ridgecrest earthquakes happened this way and also for understanding possible earthquake behaviors on other crossing faults. We also looked at whether the Ridgecrest earthquakes brought the nearby Garlock Fault, which is capable of a M8 earthquake, closer to having a big earthquake, and we found that this is possible but not certain.
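The linear slip-weakening friction law used in these dynamic rupture simulations (cited in the context snippet above as Ida, 1972, and Andrews, 1976) can be written in a few lines; the coefficient values in the example are illustrative, not those used in the study.

```python
import numpy as np

def slip_weakening_friction(slip, mu_s, mu_d, d_c):
    """Linear slip-weakening friction: the friction coefficient drops
    linearly from the static value mu_s to the dynamic value mu_d over
    a critical slip distance d_c, then stays at mu_d."""
    return np.where(slip < d_c,
                    mu_s - (mu_s - mu_d) * slip / d_c,
                    mu_d)

# Example with illustrative coefficients: mu_s=0.6, mu_d=0.3, d_c=0.4 m.
slip = np.array([0.0, 0.2, 0.4, 1.0])   # slip, m
mu = slip_weakening_friction(slip, mu_s=0.6, mu_d=0.3, d_c=0.4)
```

Multiplying this coefficient by the fault-normal stress gives the frictional shear strength, which is why the coseismic normal-stress changes discussed above can clamp or unclamp the second fault.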
... Additionally, subsurface geology is better characterized for this region. For example, Magistrale et al. [290,291] presented a detailed three-dimensional seismic velocity model of major southern California basins, including Los Angeles basin, Chino basin, Ventura basin, San Bernardino Valley, San Fernando Valley, San Gabriel Valley, and the Salton Trough. ...
Article
Full-text available
We review key ingredients that are needed for high-fidelity simulation of earthquake-generated seismic waves in large regions. Specifically, we survey existing literature on wave-discretization methods, topography effects, seismic source modeling, domain truncation methods, and large-scale seismic wave simulations. We comment on key limitations of widely-used techniques, and speculate about future trends.
... The application of seismic tomography to the rich datasets of the Southern California Seismic Network (SCSN) has produced 3D crustal models of increasing accuracy and spatial resolution (Hauksson, 2000;Chen et al., 2007;Lin et al., 2010;Tape et al., 2010;Allam & Ben-Zion, 2012;Lee, Chen, Jordan, Maechling et al., 2014;Zigone et al., 2015). This imaging has been augmented by high-resolution, active-source experiments (Fuis et al., 2017; Lutter et al., 1999, 2004), as well as by systematic efforts to assimilate exploration and well-log information into the community velocity models (CVMs) of the Southern California Earthquake Center (SCEC) (Kohler et al., 2003;Magistrale et al., 2000;Shaw et al., 2015;Süss & Shaw, 2003). ...
Article
Full-text available
We map crustal regions in Southern California that have similar depth variations in seismic velocities by applying cluster analysis to 1.5 million P and S velocity profiles from the three‐dimensional tomographic model CVM‐S4.26. We use a K‐means algorithm to partition the profiles into K sets that minimize the intra‐cluster variance. The regionalizations for K ≤ 10 generate a coherent sequence of structural refinements: each increment of K introduces a new region typically by partitioning a larger region into two smaller regions or by occupying a transition zone between two regions. The results for K ≤ 7 are insensitive to initialization and trimming of the model periphery; nearly identical results are obtained if the P and S velocity profiles are treated separately or jointly. The regions for K = 7 can be associated with major physiographic provinces and geologic areas with recognized tectonic affinities, including the Continental Borderland, Great Valley, Salton Trough, and Mojave Desert. The regionalization splits the Sierra Nevada and Peninsular Range batholiths into the western and eastern zones consistent with geological, geochemical, and potential‐field mapping. Three of the regions define a geographic domain comprising almost all of the upper crust derived from continental lithosphere. Well‐resolved regional boundaries coincide with major faults, topographic fronts, and/or geochemical transitions mapped at the surface. The consistent alignment of these surface features with deeper transitions in the crustal velocity profiles indicates that regional boundaries are typically narrow, high‐angle structures separating regions with characteristic crustal columns that reflect different compositions and tectonic histories.
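The K-means regionalization described above can be sketched compactly. The toy profiles below are assumed basin-like and basement-like velocity-depth curves, standing in for the 1.5 million CVM-S4.26 profiles used in the study.

```python
import numpy as np

def kmeans_profiles(profiles, k, n_iter=50, seed=0):
    """Minimal K-means sketch: partition velocity-depth profiles (rows
    of `profiles`) into k clusters by iteratively assigning each
    profile to its nearest center and recomputing centers as cluster
    means, which minimizes within-cluster variance."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each profile to its nearest center (squared distance)
        d2 = ((profiles[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # recompute centers as cluster means, keeping old center if empty
        centers = np.array([profiles[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Two synthetic "regions": slow basin-like and fast basement-like profiles.
depth = np.linspace(0.0, 10.0, 20)                     # km
basin = 1.5 + 0.1 * depth                              # Vs, km/s
rock = 3.5 + 0.1 * depth
profiles = np.vstack([basin + 0.05 * i for i in range(5)] +
                     [rock + 0.05 * i for i in range(5)])
labels, centers = kmeans_profiles(profiles, k=2)
```

With well-separated profile families the two clusters recover the basin/basement split; the study's result is that such clusters align with mapped physiographic provinces.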
... Lin et al. 2013;Nakata et al. 2015). Magistrale et al. (2000) used a combination of receiver functions (Zhu & Kanamori 2000), geotechnical data (Magistrale et al. 1996) and tomography (Hauksson 2000) to produce the first Community Velocity Model, known as CVM-S. To determine the shape of the sedimentary section of the LA Basin, Süss & Shaw (2003) used P-wave velocity measurements derived from stacking velocities obtained from reflection surveys and calibrated them with numerous borehole sonic logs. ...
Article
We use broadband stations of the ’Los Angeles Syncline Seismic Interferometry Experiment’ (LASSIE) to perform a joint inversion of the Horizontal to Vertical spectral ratios (H/V) and multimode dispersion curves (phase and group velocity) for both Rayleigh and Love waves at each station of a dense line of sensors. The H/V of the auto-correlated signal at a seismic station is proportional to the ratio of the imaginary parts of the Green’s function. The presence of low-frequency peaks (∼0.2 Hz) in H/V allows us to constrain the structure of the basin with high confidence to a depth of 6 km. The velocity models we obtain are broadly consistent with the SCEC CVM-H community model and agree well with known geological features. Because our approach differs substantially from previous modeling of crustal velocities in southern California, this research validates both the utility of the diffuse field H/V measurements for deep structural characterization and the predictive value of the CVM-H community velocity model in the Los Angeles region. We also analyze a lower frequency peak (∼0.03 Hz) in H/V and suggest it could be the signature of the Moho. Finally, we show that the independent comparison of the H and V components with their corresponding theoretical counterparts gives information about the degree of diffusivity of the ambient seismic field.
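A simplified H/V measurement can be sketched as a smoothed spectral ratio of horizontal to vertical ambient-noise power. Note that the paper's diffuse-field measurement uses autocorrelations and the imaginary part of the Green's function, so this is only a schematic analogue; the synthetic below plants an assumed 0.2 Hz horizontal resonance to mimic the low-frequency basin peak.

```python
import numpy as np

def hv_ratio(e, n, z, dt, smooth=11):
    """Schematic H/V estimate: square root of the ratio of combined
    horizontal power (E + N) to vertical power (Z), with a moving-
    average smoothing of the power spectra. Simplified stand-in for
    the diffuse-field measurement described in the abstract."""
    def power(x):
        p = np.abs(np.fft.rfft(x)) ** 2
        kernel = np.ones(smooth) / smooth
        return np.convolve(p, kernel, mode="same")   # smoothed power
    freqs = np.fft.rfftfreq(len(z), d=dt)
    h = power(e) + power(n)
    return freqs, np.sqrt(h / power(z))

# Synthetic noise with an assumed 0.2 Hz horizontal resonance.
rng = np.random.default_rng(1)
dt = 0.05
t = np.arange(0.0, 200.0, dt)
noise = lambda: rng.standard_normal(t.size)
e = noise() + 5.0 * np.sin(2.0 * np.pi * 0.2 * t)
n_comp = noise() + 5.0 * np.cos(2.0 * np.pi * 0.2 * t)
z = noise()
freqs, hv = hv_ratio(e, n_comp, z, dt)
band = (freqs > 0.05) & (freqs < 5.0)
peak_f = freqs[band][np.argmax(hv[band])]
```

The H/V curve peaks near the planted 0.2 Hz resonance, analogous to the low-frequency peak that constrains basin depth in the study.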
... As reported by Dhakal and Yamanaka (2013), there are evident discrepancies between the J-SHIS model and other subsurface models for the same region. For regions outside Japan, the 3D Community Velocity Model CVM (Version 2, Magistrale et al. 2000) for southern California was also found to have a significant bias and, as pointed out by Graves and Aagaard (2011), further refinement to the CVM was needed. ...
Article
Full-text available
This study aims to identify the best-performing site characterization proxy alternative and complementary to the conventional 30 m average shear-wave velocity VS30, as well as the optimal combination of proxies in characterizing linear site response. Investigated proxies include T0 (site fundamental period obtained from earthquake horizontal-to-vertical spectral ratios), VSz (measured average shear-wave velocities to depth z, z = 5, 10, 20 and 30 m), Z0.8 and Z1.0 (measured site depths to layers having shear-wave velocity 0.8 and 1.0 km/s, respectively), as well as Zx-infer (inferred site depths from a regional velocity model, x = 0.8, 1.0, 1.5 and 2.5 km/s). To evaluate the performance of a site proxy or a combination, a total of 1840 surface-borehole recordings is selected from KiK-net database. Site amplifications are derived using surface-to-borehole response-, Fourier- and cross-spectral ratio techniques and then are compared across approaches. Next, the efficacies of 7 single-proxies and 11 proxy-pairs are quantified based on the site-to-site standard deviation of amplification residuals of observation about prediction using the proxy or the pair. Our results show that T0 is the best-performing single-proxy among T0, Z0.8, Z1.0 and VSz. Meanwhile, T0 is also the best-performing proxy among T0, Z0.8, Z1.0 and Zx-infer complementary to VS30 in accounting for the residual amplification after VS30-correction. Besides, T0 alone can capture most of the site effects and should be utilized as the primary site indicator. Though (T0, VS30) is the best-performing proxy pair among (VS30, T0), (VS30, Z0.8), (VS30, Z1.0), (VS30, Zx-infer) and (T0, VSz), it is only slightly better than (T0, VS20). Considering both efficacy and engineering utility, the combination of T0 (primary) and VS20 (secondary) is recommended. Further study is needed to test the performances of various proxies on sites in deep sedimentary basins.
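The proxy-efficacy metric above (site-to-site standard deviation of amplification residuals after conditioning on a proxy) can be sketched as a one-parameter regression. The synthetic data below assume a T0-controlled amplification purely for illustration; the study's regressions and residual definitions are more detailed.

```python
import numpy as np

def proxy_efficacy(ln_amp, proxy_values):
    """Efficacy sketch: regress ln site amplification on ln(proxy) and
    return the standard deviation of the residuals. A smaller value
    means the proxy explains more of the site-to-site variability."""
    x = np.log(proxy_values)
    A = np.vstack([np.ones_like(x), x]).T      # intercept + slope design
    coef, *_ = np.linalg.lstsq(A, ln_amp, rcond=None)
    resid = ln_amp - A @ coef
    return resid.std(ddof=2)

# Synthetic sites whose amplification is (by construction) driven by T0.
rng = np.random.default_rng(2)
t0 = rng.uniform(0.1, 2.0, 300)                    # site periods, s
amp = 0.6 * np.log(t0) + 0.05 * rng.standard_normal(300)
vs30 = np.exp(rng.normal(6.0, 0.3, 300))           # uncorrelated proxy
sigma_t0 = proxy_efficacy(amp, t0)
sigma_vs30 = proxy_efficacy(amp, vs30)
```

Here sigma_t0 is much smaller than sigma_vs30, which is the pattern the study uses to rank T0 ahead of the other single proxies.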
... We compute the station-to-station Green's functions in the CVM-S4.26 (Lee et al., 2014), which was obtained by full 3-D tomography and iteratively improving the CVM-S4 (Kohler et al., 2003;Magistrale et al., 2000) using 38,000 earthquake seismograms and 12,000 ambient field correlograms. We will also extract ambient field Green's functions by using 1 year of ambient seismic data recorded at 115 stations in the Southern California Seismic Network. ...
Article
Full-text available
We test the amplitude of deconvolution‐based ambient field Green's functions in 3‐D numerical simulations of seismic wave propagation. We first simulate strongly scattered waves in a hypothetical 3‐D sedimentary basin with small‐scale heterogeneities, which provides an ideally random environment to test various approaches of extracting the amplitude of Green's functions, as the deterministic station‐to‐station Green's functions can be computed in the given velocity structure. Our second model computes the station‐to‐station Green's functions in the Community Velocity Model (S4.26) for Southern California and compares them with the Green's functions extracted from 1 year of ambient noise data. In both models, remarkable waveform similarity among different Green's functions is obtained. In the hypothetical basin model where the wavefield is nearly random, both the correlation‐ and deconvolution‐based Green's functions contain robust amplitude information. However, large‐amplitude biases are observed in the Green's functions extracted from ambient noise in Southern California, showing a strong azimuthal dependence. The deconvolution approach in general overestimates the amplitude along the direction of noise propagation but underestimates it in other azimuths. The correlation approach with temporal normalization and spectral whitening generates similar amplitude to the deconvolution in a wide azimuthal range. Our results corroborate that the inhomogeneous distribution of noise sources biases the amplitude of Green's functions. Carefully reducing these biases is necessary to use these ambient field Green's functions in the virtual earthquake approach.
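The correlation- and deconvolution-based Green's function estimates compared in this study differ by a spectral normalization, sketched below. The water-level stabilization is a common choice assumed here for illustration, not a detail taken from the paper.

```python
import numpy as np

def ambient_gf(u_a, u_b, water_level=1e-3):
    """Interstation Green's function estimates in the frequency domain.
    Correlation: U_a * conj(U_b). Deconvolution: the same quantity
    divided by the source-station power |U_b|^2 (stabilized by a water
    level); that normalization is what makes the deconvolution
    amplitude sensitive to noise directionality."""
    Ua, Ub = np.fft.rfft(u_a), np.fft.rfft(u_b)
    corr = Ua * np.conj(Ub)
    power = np.abs(Ub) ** 2
    power = np.maximum(power, water_level * power.max())   # water level
    deconv = corr / power
    n = len(u_a)
    return np.fft.irfft(corr, n), np.fft.irfft(deconv, n)

# Toy check: station A records the same random field 37 samples later
# (circular shift, so the FFT-based lag recovery is exact).
rng = np.random.default_rng(3)
src = rng.standard_normal(1024)
lag = 37
u_b = src
u_a = np.roll(src, lag)
c, d = ambient_gf(u_a, u_b)
```

Both estimates peak at the true 37-sample lag; in the paper's setting the interesting question is how their amplitudes (not arrival times) are biased by the noise-source distribution.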
... Previous studies in the basins of southern California [Magistrale et al., 2000] and the Lower Rhine Embayment [Ewald et al., 2006] showed the importance of 3D basin geometry in modeling the characteristic basin propagation of seismic waves. Results from 3D simulations of wave propagation include wave-front focusing due to low-velocity zones or edge-diffracted waves [Ewald et al., 2006], basin resonance [Castellaro et al., 2014], and local-scale multi-scattering and prolonged ground motion [Olsen et al., 2006; Furumura & Hayakawa, 2007; Denolle et al., 2014a,b; Cruz-Atienza et al., 2016; Viens et al., 2016]. ...
Thesis
Full-text available
Earthquakes are among the most costly, devastating and deadly natural hazards. The extent of the seismic hazard is often influenced by factors like the source location and site characteristics, while the susceptibility of assets is influenced by the population density, building design, infrastructure and urban planning. A comprehensive knowledge of the nature of the source and local geology enables the establishment of effective urban planning that takes into account the potential seismic hazard, which in turn may reduce the degree of vulnerability. The first probabilistic seismic hazard assessment (PSHA) incorporating the effects of local site characteristics for the island of Sulawesi in Indonesia has been conducted. Most of the island, with the exception of South Sulawesi, is undergoing rapid deformation. This leads to high hazard in most regions (such that PGA > 0.4 g at a 500 year return period including site effects) and extremely high hazard (PGA > 0.8 g at a 500 year return period) along fast-slipping crustal faults. On the other hand, a site distant from a fault might suffer higher ground motion if that site is composed of soft soil. This research has shown that near-surface physical properties, here represented by VS30 and surface geology, contribute significantly to ground motions and, consequently, to potential building damage. The Sulawesi PSHA study led us to move further and investigate the effect of deep structure on seismic waves. Jakarta was chosen for its location on a poorly known deep sedimentary basin and for its economic and political importance. A dense portable seismic broadband network, comprising 96 stations, was operated over four months covering Jakarta. The seismic network sampled broadband seismic noise mostly originating from ocean waves and anthropogenic activity.
We used Horizontal-to-Vertical Spectral Ratio (HVSR) measurements of the ambient seismic noise to estimate the fundamental-mode Rayleigh wave ellipticity curves, which were used to infer the seismic velocity structure of the Jakarta Basin. By mapping and modeling the spatial variation of low-frequency (0.124–0.249 Hz) HVSR peaks, this study reveals variations in the depth to the Miocene basement. To map these velocity profiles of unknown complexity, we employ a transdimensional-Bayesian framework for the inversion of HVSR curves for 1D profiles of velocity and density beneath each station. The inverted velocity profiles show a sudden change of basement depth from 400 to 1350 m along a N-S profile through the center of the city, with an otherwise gentle increase in basin depth from south to north. Seismic wave modeling conducted afterward shows that for the very deep Jakarta Basin, available ground motion prediction equations (GMPEs) are insufficient in capturing the effect of basin geometry on seismic waves. Earthquake scenario modeling using SPECFEM2D is performed to understand the effect of the deep basin on ground motions. This modeling reveals that the city may experience high peak ground velocity (PGV) during a large megathrust earthquake. The complexity of the basin is responsible for magnifying the ground motions observed in the basin.
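The link between the mapped low-frequency HVSR peaks and basement depth can be illustrated with the quarter-wavelength rule of thumb h ≈ Vs / (4 f0). The 700 m/s average sediment velocity below is an assumed value chosen for illustration, not a result of the thesis, which inverts full ellipticity curves rather than applying this rule.

```python
def basement_depth_from_f0(f0_hz, vs_avg_ms):
    """Quarter-wavelength rule of thumb linking the fundamental HVSR
    peak frequency f0 to sediment thickness: h ~= Vs / (4 * f0).
    Rough scaling only; it ignores velocity gradients in the basin."""
    return vs_avg_ms / (4.0 * f0_hz)

# The mapped peak-frequency range (0.124-0.249 Hz) with an assumed
# 700 m/s average sediment shear velocity.
h_shallow = basement_depth_from_f0(0.249, 700.0)   # ~703 m
h_deep = basement_depth_from_f0(0.124, 700.0)      # ~1411 m
```

For a plausible average sediment velocity, the observed peak-frequency range maps to depths of several hundred meters up to roughly 1.4 km, consistent in order of magnitude with the 400–1350 m basement depths recovered by the full inversion.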
... Southern California Earthquake Center (SCEC) researchers have developed two regional velocity models, including the Community Velocity Model -Harvard (CVM-H) and Community Velocity Model -SCEC (CVM-S). Previous versions of CVM-S and CVM-H included a regional background seismic velocity model from Hauksson (2000) and Magistrale et al. (2000) travel time tomography. CVM-H includes a near-surface geotechnical layer from Ely et al. (2010), crust-mantle interface from Yan and Clayton (2007) receiver function studies, upper mantle velocity model from Prindle and Tanimoto (2006) finite-frequency tomography, and high resolution seismic velocity models of Southern California basins, developed by Süss and Shaw (2003), using oil industry well logs and seismic reflection data. ...
Article
Full-text available
The Coachella Valley in the northern Salton Trough is known to produce destructive earthquakes, making it a high seismic hazard area. Knowledge of the seismic velocity structure and geometry of the sedimentary basins and fault zones is required to improve earthquake hazard estimates in this region. We simultaneously inverted first P wave travel times from the Southern California Seismic Network (39,998 local earthquakes) and explosions (251 land/sea shots) from the 2011 Salton Seismic Imaging Project to obtain a 3‐D seismic velocity model. Earthquakes with focal depths ≤10 km were selected to focus on the upper crustal structure. Strong lateral velocity contrasts in the top ~3 km correlate well with the surface geology, including the low‐velocity (<5 km/s) sedimentary basin and the high‐velocity crystalline basement rocks outside the valley. Sediment thickness is ~4 km in the southeastern valley near the Salton Sea and decreases to <2 km at the northwestern end of the valley. Eastward thickening of sediments toward the San Andreas fault within the valley defines Coachella Valley basin asymmetry. In the Peninsular Ranges, zones of relatively high seismic velocities (~6.4 km/s) between 2 to 4 km depth may be related to Late Cretaceous mylonite rocks or older inherited basement structures. Other high‐velocity domains exist in the model down to 9 km depth and help define crustal heterogeneity. We identify a potential fault zone in Lost Horse Valley unassociated with mapped faults in Southern California from the combined interpretation of surface geology, seismicity, and lateral velocity changes in the model.
... Propagation of these waves and its dependence on ground characteristics are well studied in the context of seismicity. Phenomenological location-dependent velocity models [4,5] as well as global theoretical models [6] are in common use. For the purpose of this study it suffices to note that, as long as a uniform medium is assumed, ground characteristics affect propagation via the equation of state (EOS). ...
Chapter
Full-text available
Blast waves are formed by aerial, surface, or subsurface explosions. The waves propagate in the surrounding media – air and/or ground. In cases of aerial or surface explosions, blast waves form in the ambient air. The waves hit the ground and form a ground shock wave. Ground contraction due to high pressure from above, followed by rapid expansion, may produce a new shock wave going upward from the ground surface. The propagation of the original wave may be affected as well. Ground movements may also cause fracturing and fragmentation, thus contributing to dust lofting. Suspended dust mitigates aerial wave propagation. All these phenomena, accompanied by ground deformation, make the ground an active agent affecting the aerial shock propagation.
... Inclusion of ground-motion simulations in hazard estimates provides critical information on source rupture directivity, site amplification, basin-generated surface waves, and other 3D basin effects. Yet, one of the most critical steps for successful ground-motion simulations is the development of detailed 3D P- and S-wave velocity models that represent the geologic subsurface (e.g., Magistrale et al., 2000; Stephenson, 2007). Predicted ground motion can only be as accurate as the resolution and validity of the geophysical and geological model used in the simulation. ...
... • There is a large body of work, spanning several decades, to develop seismic velocity models for the region's sedimentary basin structures (i.e., Magistrale et al. 2000; other documents cited in Table 2.1). ...
Conference Paper
We investigate benefits of regionalizing basin response in ergodic ground motion models. Using southern California data, we find that average responses differ between basin structures, even when the primary site variables used in ground motion models (VS30 and depth parameters) are controlled for. For example, the average site response in relatively modestly sized sedimentary structures (such as Simi Valley) is under-predicted at short periods by current models, whereas under-prediction occurs at long periods for larger sedimentary structures. Moreover, site-to-site within-event standard deviations vary appreciably between large basins, basin edges, smaller valleys, and non-basin (mountainous) locations. Such variations can appreciably impact aleatory variability.
... Previous studies in the basins of southern California (Magistrale et al. 2000) and the Lower Rhine Embayment (Ewald et al. 2006) showed the importance of 3-D basin geometry in modelling the characteristic basin propagation of seismic waves. Data from the 3-D simulations of wave propagation are available including wave front focusing due to low-velocity zones or edge-diffracted waves (Ewald et al. 2006), basin resonance (Castellaro et al. 2014) and local-scale multiscattering and prolonged ground motion (Olsen et al. 2006;Furumura & Hayakawa 2007;Denolle et al. 2014a;Cruz-Atienza et al. 2016;Viens et al. 2016). ...
Article
Full-text available
Characterizing the interior structure of the Jakarta Basin, Indonesia, is important for the improvement of seismic hazard assessment there. A dense portable seismic broad-band network, comprising 96 stations, was operated between October 2013 and February 2014 covering the city of Jakarta. The seismic network sampled broad-band seismic noise mostly originating from ocean waves and anthropogenic activity. We used horizontal-to-vertical spectral ratio (HVSR) measurements of the ambient seismic noise to estimate fundamental-mode Rayleigh wave ellipticity curves, which were used to infer the seismic velocity structure of the Jakarta Basin. By mapping and modelling the spatial variation of low-frequency (0.124–0.249 Hz) HVSR peaks, this study reveals variations in the depth to the Miocene basement. These variations include a sudden change of basement depth from 500 to 1000 m along a N–S profile through the centre of the city, with an otherwise gentle increase in basin depth from south to north. Higher frequency (2–4 Hz) HVSR peaks appear to reflect complicated structure in the top 100 m of the soil profile, possibly related to sediment compaction and transitions among different sedimentary sequences. In order to map these velocity profiles of unknown complexity, we employ a trans-dimensional Bayesian framework for the inversion of HVSR curves for 1-D profiles of velocity and density beneath each station. Results show that very low-velocity sediments (<240 m s−1) extending up to 100 m in depth cover the city in its northern to central part, where alluvial fan material is deposited. These low seismic velocities and the very thick sediments in the Jakarta Basin will potentially contribute to seismic amplification and basin resonance, especially during giant megathrust earthquakes or large earthquakes with epicentres close to Jakarta. Results show good correlation with previous ambient seismic noise tomography and microtremor studies.
We use the 1-D profiles to create a pseudo-3-D model of the basin structure which can be used for earthquake hazard analyses of Jakarta, a megacity in which highly variable construction practices may give rise to high vulnerability. The methodology discussed can be applied to any other populated city situated in a thick sedimentary basin.
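The HVSR measurement underlying the study above can be sketched in a few lines. This is an illustrative sketch only, not the authors' processing chain: real workflows window the noise, reject transients, smooth the spectra, and average over many windows, all of which are omitted here.

```python
import numpy as np

def hvsr(z, n, e, fs, nfft=None):
    """Horizontal-to-vertical spectral ratio from three-component noise.

    z, n, e : vertical, north, east time series (same length)
    fs      : sampling rate in Hz
    Returns (freqs, ratio), where the horizontal spectrum is the
    geometric mean of the two horizontal component spectra.
    """
    nfft = nfft or len(z)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    Z = np.abs(np.fft.rfft(z, nfft))
    H = np.sqrt(np.abs(np.fft.rfft(n, nfft)) * np.abs(np.fft.rfft(e, nfft)))
    # guard against division by zero in the vertical spectrum
    ratio = H / np.maximum(Z, 1e-20)
    return freqs, ratio
```

The frequency of the dominant HVSR peak relates to the resonance of the soft sediment column, which is the quantity the study maps across the basin.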
... • CyberShake [55]: used by the Southern California Earthquake Center to characterize earthquake hazards. • Epigenomics [56]: processes multiple lanes of DNA sequence data through various genome sequencing operations. ...
Article
Cloud computing is emerging with growing popularity in workflow scheduling, especially for scientific workflows. Deploying data-intensive workflows in the cloud brings new factors to be considered during specification and scheduling. Failure to establish intermediate data security may cause information leakage or data alteration in the cloud environment. Existing scheduling algorithms for the cloud disregard the interaction among tasks and its effects on application security requirements. To address this issue, we design a new systematic method that considers both task security demands and task interactions in secure task placement in the cloud. In order to respect both security and performance, we formulate a model for task scheduling and propose a heuristic algorithm based on tasks' completion times and security requirements. In addition, we present a new attack response approach to reduce certain security threats in the cloud. To do so, we introduce a task security sensitivity measurement to quantify task security requirements. We conduct extensive experiments to quantitatively evaluate the performance of our approach, using WorkflowSim, a well-known cloud simulation tool. Experimental results based on real-world workflows show that, compared with existing algorithms, our proposed solution can improve overall system security in terms of quality of security and security risk under a wide range of workload characteristics. Additionally, our results demonstrate that the proposed attack response algorithm can effectively reduce cloud environment threats.
... Development of detailed three-dimensional (3D) seismic velocity models for ground-motion simulations has been carried out, for example, in Japan (e.g., Fujiwara et al. 2009;Koketsu et al. 2012) and in the USA (e.g., Magistrale et al. 2000;Shaw et al. 2015) in order to achieve more reliable simulations. Still, the velocity models are generally much simpler than the realistic Earth structure, which contains heterogeneity on various scales. ...
Article
Full-text available
The effects of the short-wavelength heterogeneity of the crustal structure on predicted ground motion at periods of 1 s and longer were investigated by conducting three-dimensional finite-difference simulations using a detailed realistic velocity model of the Kanto region of Japan. The short-wavelength heterogeneity of the media within the upper crust was randomly modeled using the exponential-type autocorrelation function where the velocity standard deviation was set to 5%. Combinations of heterogeneous media with various correlation lengths and point source models with various depths and durations were considered in order to investigate the variability of the predicted ground motion and the sensitivity to the parameters. The effects of random heterogeneity on the simulated ground motion appeared as a result of changes in the amplitude and phase of both the direct waves and later phases, as well as the spectral peaks of the response spectra. Ground-motion variability, in terms of peak ground velocity (PGV) and velocity response spectra (Sv), was evaluated using the residual, or the difference in logarithm of PGV and Sv between the ground motion computed with and without random heterogeneity. While the residual averaged over the surface of the computed area was almost negligible in the studied period range, the variability of the residual was found to increase with distance and to be larger for point sources with shorter durations.
Article
The near-surface seismic structure (to a depth of about 1000 m), particularly the shear-wave velocity (VS), can strongly affect the propagation of seismic waves, and therefore must be accurately calibrated for ground motion simulations and seismic hazard assessment. The VS of the top (<300 m) crust is often well-characterized from borehole studies, geotechnical measurements, and water and oil wells, while the velocities of material deeper than about 1000 m are typically determined by tomography studies. However, in depth ranges lacking information on shallow lithological stratification, typically at rock sites outside the sedimentary basins, the material parameters between these two regions are poorly characterized due to resolution limits of seismic tomography. When such geological constraints are not available, models such as the Southern California Earthquake Center (SCEC) Community Velocity Models (CVMs) default to regional tomographic estimates that do not resolve the uppermost VS values, and therefore deliver unrealistically high shallow VS estimates. The SCEC Unified Community Velocity Model (UCVM) software includes a method to incorporate the near-surface earth structure by applying a generic overlay based on measurements of time-averaged VS in the top 30 m (VS30) to taper the upper part of the model to merge with tomography at a depth of 350 m, which can be applied to any of the velocity models accessible through UCVM. However, our 3D simulations of the 2014 Mw 5.1 La Habra earthquake in the Los Angeles area using the CVM-S4.26.M01 model significantly underpredict low-frequency (<1 Hz) ground motions at sites where the material properties in the top 350 m are significantly modified by the generic overlay (“taper”).
On the other hand, extending the VS30-based taper of the shallow velocities down to a depth of about 1000 meters improves the fit between our synthetics and seismic data at those sites, without compromising the fit at well constrained sites. We explore various tapering depths, demonstrating increasing amplification as the tapering depth increases, and the model with 1000 m tapering depth yields overall favorable results. Effects of varying anelastic attenuation are small compared to effects of velocity tapering and do not significantly bias the estimated tapering depth. Although a uniform tapering depth is adopted in the models, we observe some spatial variabilities that may further improve our method.
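The core idea of a VS30-based taper can be illustrated with a toy blending function: near the surface the velocity is anchored to a VS30-derived value, and it merges with the tomographic value at the taper depth. The square-root interpolant and parameter names below are assumptions for illustration only; the actual UCVM geotechnical-layer taper is more elaborate.

```python
import numpy as np

def tapered_vs(z, vs30, vs_tomo, z_taper=350.0):
    """Blend a VS30-anchored near-surface velocity into a tomographic value.

    z       : depth in metres (scalar or array)
    vs30    : time-averaged Vs of the top 30 m (m/s)
    vs_tomo : tomography Vs at and below the taper depth (m/s)
    z_taper : depth at which the profile merges with tomography (m)

    The square-root depth dependence is an assumed interpolant,
    not the actual UCVM taper formula.
    """
    z = np.asarray(z, dtype=float)
    w = np.clip(np.sqrt(z / z_taper), 0.0, 1.0)  # 0 at surface, 1 at z_taper
    return vs30 + w * (vs_tomo - vs30)
```

Extending the taper to ~1000 m, as the study does, amounts to raising `z_taper`, which lowers shallow velocities over a thicker depth range and thereby increases simulated low-frequency amplification.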
Article
We have simulated 0-5 Hz deterministic wave propagation for a suite of 17 models of the 2014 Mw 5.1 La Habra, CA, earthquake with the Southern California Earthquake Center Community Velocity Model Version S4.26-M01 using a finite-fault source. Strong motion data at 259 sites within a 148 km by 140 km area are used to validate our simulations. Our simulations quantify the effects of statistical distributions of small-scale crustal heterogeneities (SSHs), frequency-dependent attenuation Q(f), surface topography, and near-surface low velocity material (via a 1D approximation) on the resulting ground motion synthetics. The shear wave quality factor QS(f) is parameterized as QS,0 and QS,0fγ for frequencies less than and higher than 1 Hz, respectively. We find the most favorable fit to data for models using ratios of QS,0 to shear-wave velocity VS of 0.075-1.0 and γ values less than 0.6, with the best-fitting amplitude drop-off for the higher frequencies obtained for γ values of 0.2-0.4. Models including topography and a realistic near-surface weathering layer tend to increase peak velocities at mountain peaks and ridges, with a corresponding decrease behind the peaks and ridges in the direction of wave propagation. We find a clear negative correlation between the effects on peak ground velocity amplification and duration lengthening, suggesting that topography redistributes seismic energy from the large-amplitude first arrivals to the adjacent coda waves. A weathering layer with realistic near-surface low velocities is found to enhance the amplification at mountain peaks and ridges, and may partly explain the underprediction of the effects of topography on ground motions found in models. Our models including topography tend to improve the fit to data, as compared to models with a flat free surface, while our distributions of SSHs with constraints from borehole data fail to significantly improve the fit.
Accuracy of the velocity model, particularly the near-surface low velocities, as well as the source description, controls the resolution with which the anelastic attenuation can be determined. Our results demonstrate that it is feasible to use fully deterministic physics-based simulations to estimate ground motions for seismic hazard analysis up to 5 Hz. Here, the effects of, and trade-offs with, near-surface low-velocity material, topography, SSHs and Q(f) become increasingly important as frequencies increase toward 5 Hz, and should be included in the calculations. Future improvement in community velocity models, wider access to computational resources, more efficient numerical codes and guidance from this study are bound to further constrain the ground motion models, leading to more accurate seismic hazard analysis.
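The piecewise Q(f) parameterization used in the study above (constant below 1 Hz, power-law above) is simple enough to state directly; the helper below is a minimal sketch of that rule, with parameter values left to the caller.

```python
def qs_of_f(f, qs0, gamma):
    """Shear quality factor QS(f) following the study's parameterization:
    QS(f) = QS0             for f < 1 Hz
    QS(f) = QS0 * f**gamma  for f >= 1 Hz
    (continuous at f = 1 Hz, since 1**gamma == 1).
    """
    return qs0 if f < 1.0 else qs0 * f ** gamma
```

Larger γ gives weaker attenuation (larger Q) at high frequencies; the study reports γ of 0.2-0.4 as best fitting the observed high-frequency amplitude drop-off.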
Article
Full-text available
As computing power and ground motion studies using numerical simulation methods have continuously improved, velocity model accuracy has become the bottleneck for the accuracy of simulation results. The characteristics of the hydrodynamic sedimentary environment in the Yuxi Basin are used as a classification standard, and a refined velocity model that accounts for the lateral heterogeneity of the sediments is constructed. The results of the laterally heterogeneous model are compared with those of the laterally uniform model, and the distributions of the displacement peak ratio, focusing effect, and edge effect change due to the refined basin structure. For the 4% of the surface area in the region with the largest amplification factor, the difference between the two models is greater than 20%, and the difference for nearly half of the surface area is greater than 5%.
Article
We investigate near-surface 3D structure giving rise to P-to-Rayleigh wave conversions from teleseismic P waves recorded by the Long Beach array, southern California. Our previous study demonstrated that a local Rayleigh wave having circular wavefronts with phase velocities of ∼1 km/s arises from Signal Hill in array P wave data from a large Fiji Islands earthquake. A group of high-spatial-frequency, low-velocity (0.7–0.9 km/s) Rayleigh waves having linear wavefronts also propagates away from strands of the Newport–Inglewood fault zone (NIFZ), suggesting that P-to-Rayleigh wave conversions from fault damage zones can also be observed. We compute synthetic waveforms using 3D finite differences to show that the topography of Signal Hill accounts for much of the circular P-to-Rayleigh wave conversions. The NIFZ is best modeled by low-velocity, vertical tabular features above a depth of 500 m, with a width of 100–120 m and a ∼15% reduction in VP and VS compared with the background model. We observe that structure from the northwestern part of the inferred southwest boundary fault of the Signal Hill anticline dominates the scattering from fault damage zones. It is remarkable that the combination of low near-surface velocity with relatively small-scale heterogeneity can significantly affect the signature of long-horizontal-wavelength teleseismic P waves, suggesting additional complexities in interpreting receiver functions for stations on deep sedimentary basins or in areas of significant topography.
Chapter
In this paper, two scheduling algorithms are presented: a time-constrained early-deadline cost-effective algorithm (TECA), which schedules time-sensitive workflows at the lowest cost, and a versatile time-cost algorithm (VTCA), which considers both time and cost constraints; these algorithms considerably enhance earlier algorithms. TECA schedules activities to be completed as soon as possible and optimizes the costs in resource provisioning. VTCA supports quality of service (QoS)-based scheduling, keeping a balance between completion time and cost for the selected QoS level. Both algorithms schedule tasks of the same height within the minimum completion time (using the Max-Min algorithm). Tasks are scheduled on new resources only when their completion times exceed the calculated minimum completion times for the given resource. CloudSim-based results show that our algorithms minimize completion time better than other popular algorithms, in addition to reducing costs. The cost modeling satisfies the criterion of earliest completion time.
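The Max-Min heuristic that both algorithms build on is easy to sketch: at each step, the unscheduled task with the largest best-case completion time is committed to the resource that achieves that minimum. The following is a generic illustration with hypothetical task and resource identifiers, not the TECA/VTCA code.

```python
def max_min_schedule(task_costs, n_resources):
    """Max-Min heuristic: among the unscheduled tasks, repeatedly pick the
    one whose best (minimum) completion time is largest, and assign it to
    the resource that achieves that minimum.

    task_costs : {task: {resource: execution_time}}
    Returns ({task: resource}, {resource: finish_time}).
    """
    ready = {r: 0.0 for r in range(n_resources)}  # resource availability times
    assignment = {}
    unscheduled = set(task_costs)
    while unscheduled:
        best = None  # (min completion time, task, resource)
        for t in unscheduled:
            # completion time of t on its best resource, if started now
            ct, r = min((ready[r] + task_costs[t][r], r) for r in ready)
            if best is None or ct > best[0]:
                best = (ct, t, r)
        ct, t, r = best
        assignment[t] = r
        ready[r] = ct
        unscheduled.remove(t)
    return assignment, ready
```

Prioritizing the "hardest" task first tends to keep long tasks off already-busy resources, which is why Max-Min often yields shorter makespans than scheduling the easiest tasks first.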
Article
Task allocation plays a vital role in crowd computing by determining its performance. The power of crowd computing stems from the large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Task allocation aims to maximize the completion quality of the entire workflow and minimize its total cost. Trust can affect the quality of the produced results and the costs: selecting workers with high levels of trust could provide a better solution for the workflow but increase the budget. Crowdsourcing workflows need to balance these two conflicting objectives. In this paper, we propose an alternative greedy approach with four heuristic strategies to address the issue. In particular, the proposed approach monitors the current status of workflow execution and uses heuristic strategies to adjust the parameters of task allocation. We design a two-phase allocation model to accurately match tasks with workers. T-Aware allocates each task to the worker that maximizes the trust level while minimizing the cost. We conduct extensive experiments to quantitatively evaluate the proposed algorithms in terms of running time, task failure ratio, trust, and cost using a custom objective function on WorkflowSim, a well-known cloud simulation tool. Experimental results based on real-world workflows show that T-Aware outperforms other solutions in finding the tradeoff between trust and cost, performing 3 to 6% better than the best competing algorithm.
Chapter
This chapter explains the “effect of propagation” on ground motion and the interaction between wave propagation phenomena and the “effect of the earthquake source”. One-dimensional (1-D) velocity structures are commonly used for evaluating the “effect of propagation” (Sect. 3.1). The methods used for this evaluation, such as the “propagator matrix”, “reflection/transmission matrix”, “wavenumber integration”, “surface wave”, “teleseismic body wave”, and “crustal deformation”, are then reviewed. The “discontinuity vector” is explained as a representation of the “effect of the earthquake source”. In Sect. 3.2, three-dimensional (3-D) velocity structures are introduced as more realistic models for the effect of propagation. The methods of evaluation in a 3-D velocity structure, such as the “ray theory”, “ray tracing”, “finite difference method”, “finite element method”, and “Aki-Larner method”, are then reviewed. Finally, Sect. 3.3 explains various methods and models for the analysis of propagation, such as “long-period ground motion”, “microtremors”, “seismic interferometry”, and “seismic tomography”.
Conference Paper
In this study, we invert the receiver function beneath the GSI seismic station to estimate the crustal structure beneath Nias Island, Sumatra. The receiver functions are computed from 24 teleseismic events using a time-domain deconvolution technique and divided into three groups of observations with similar back azimuths, namely GSI_NW, GSI_NE, and GSI_SE. The resulting receiver functions are then stacked to increase the signal-to-noise ratio. The shear wave velocity structure of the crust is obtained by inverting the receiver functions using linearized iterative inversion with AK135 as the initial model. The inversion result shows a thick sediment layer ranging from 1 to 3 km with velocities around 2.4 km/s near the surface. The Moho and the top of the subducting oceanic slab are observed at around 22-24 km and 32 km depth, respectively. The low shear wave velocity above the transition to the subducting slab is suggested to be a signature of serpentinization.
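Time-domain deconvolution of the kind mentioned above is often done iteratively, building the receiver function as a sparse spike train (in the spirit of the Ligorría-Ammon approach). The sketch below is a bare-bones illustration, omitting the Gaussian low-pass filtering and convergence checks a real implementation would include, and is not the processing used in the study.

```python
import numpy as np

def itdecon(numer, denom, n_iter=50):
    """Iterative time-domain deconvolution sketch: repeatedly find the lag
    at which the denominator (e.g. vertical component) best correlates with
    the remaining numerator (e.g. radial component) residual, add a spike
    there, and subtract its predicted contribution.
    """
    n = len(numer)
    rf = np.zeros(n)
    resid = numer.astype(float).copy()
    denom = denom.astype(float)
    power = np.dot(denom, denom)
    for _ in range(n_iter):
        # cross-correlation at non-negative lags only
        xc = np.correlate(resid, denom, mode="full")[n - 1:]
        lag = int(np.argmax(np.abs(xc)))
        amp = xc[lag] / power
        rf[lag] += amp
        # subtract this spike's predicted contribution from the residual
        pred = np.zeros(n)
        pred[lag:] = amp * denom[: n - lag]
        resid -= pred
    return rf
```

For a denominator wavelet that reappears in the numerator delayed and scaled, the recovered receiver function is a spike at that delay with the corresponding amplitude.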
Article
We simulate 0–2.5 Hz deterministic wave propagation in 3-D velocity models for the 2008 Chino Hills, CA, earthquake using a finite-fault source model and frequency-dependent anelastic attenuation. Small-scale heterogeneities are modeled as 3-D random fields defined using an elliptically anisotropic von Kármán autocorrelation function with its parameters constrained using Los Angeles basin borehole data. We superimpose the heterogeneity models on a leading deterministic community velocity model (CVM) of southern California. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.5 Hz, with ensemble median values of various ground motion metrics varying up to ±50 per cent compared to those computed using the deterministic CVM only. In addition, we show that frequency-independent values of the shear-wave quality factor (Qs0) parametrized as Qs0 = 150Vs (Vs in km s–1) provides the best agreement with data when assuming the published moment magnitude (Mw) of 5.4 (M0 = 1.6 × 1017 Nm) for the finite-fault source model. This model for Qs0 trades off with Qs0 = 100Vs assuming Mw = 5.5 (M0 = 2.2 × 1017 Nm), which represents an upper bound of the Mw estimates for this event. We find the addition of small-scale heterogeneities provides limited overall improvement to the misfit between simulations and data for the considered ground motion metrics, because the primary sources of misfit originate from the deterministic CVM and/or the finite-fault source description.
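A 1-D analogue of the von Kármán perturbation fields described above can be generated by shaping white noise in the wavenumber domain. This is a schematic sketch only (1-D, with brute-force rescaling to the target standard deviation), not the anisotropic 3-D fields constrained by borehole data in the study.

```python
import numpy as np

def von_karman_1d(n, dx, sigma, a, hurst, seed=0):
    """Generate a 1-D random velocity-perturbation profile whose power
    spectrum follows a von Karman-type decay, (1 + k^2 a^2)^-(H + 1/2).

    n     : number of samples
    dx    : sample spacing (same length units as a)
    sigma : target standard deviation of the fractional perturbation
    a     : correlation length
    hurst : Hurst exponent H
    """
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    # amplitude spectrum = sqrt of the target power spectrum
    amp = (1.0 + (k * a) ** 2) ** (-(hurst + 0.5) / 2.0)
    spec = amp * (rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k)))
    field = np.fft.irfft(spec, n)
    field *= sigma / field.std()  # rescale to the target sigma
    return field
```

Multiplying a background model by `1 + field` then yields a perturbed velocity profile with the prescribed correlation length and roughness.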
Article
This paper presents a generalised velocity model construction methodology, its computational implementation, and application in the construction of a New Zealand Velocity Model (NZVM2.0) for use in physics-based broadband ground motion simulation. The methodology utilises multiple datasets spanning different length scales, which is enabled via the use of modular subregions, geologic surfaces, and parametric representations of crustal velocity. A number of efficiency-related workflows to decrease the overall computational construction time are employed, while maintaining the flexibility and extensibility to incorporate additional datasets and refined velocity parameterisations as they become available. The methodology and computational implementation processes are then applied for development of a New Zealand Velocity Model (NZVM2.0) for use in broadband ground motion simulation. The model comprises explicit representations of the Canterbury, Wellington, Nelson-Tasman, Kaikōura, Marlborough, Waiau, Hanmer and Cheviot sedimentary basins embedded within an existing regional travel-time tomography-based velocity model for the shallow crust.
Article
Full-text available
In this paper, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California under plausible San Andreas fault earthquakes in the next 30 years is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for 60 scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0–8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next 30 years is evaluated.
Article
With the increasing popularity of the Internet of Things (IoT), edge computing has become the key driving force providing computing resources, storage, and network services closer to the edge on the basis of cloud computing. Workflow scheduling in such a distributed environment is regarded as an NP-hard problem, and existing approaches may not work well for task scheduling with multiple optimization goals in complex applications. As an intelligent algorithm, particle swarm optimization (PSO) has the advantages of few parameters, algorithmic simplicity, and fast convergence, and is widely applied to workflow scheduling. However, it also has shortcomings, such as easily falling into local optima and sometimes failing to obtain the true optimal solution. To address this issue, the scheduling problem for workflow applications and an objective function based on two optimization factors are first formalized, providing a theoretical foundation for the workflow scheduling strategy. This paper then proposes a novel directional and non-local-convergent particle swarm optimization (DNCPSO) that employs a non-linear inertia weight with selection and mutation operations in a directional search process, which can reduce the makespan and cost dramatically and obtain a good compromise solution. The results of simulation experiments based on various real and random workflow examples show that DNCPSO achieves better performance than other classical and improved algorithms, sufficiently demonstrating its effectiveness and efficiency.
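As a point of reference for the inertia-weight idea, here is a plain PSO sketch with a non-linearly (quadratically) decreasing inertia weight, minimizing a test function. The directional search, selection, and mutation operations that distinguish DNCPSO are deliberately omitted, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal PSO with a non-linear (quadratically decreasing) inertia
    weight -- a simplified sketch of the ingredient DNCPSO builds on.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                  # global best
    c1 = c2 = 2.0
    for it in range(iters):
        w = 0.9 - 0.5 * (it / iters) ** 2            # non-linear inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -2.0, 2.0)                    # velocity clamp for stability
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

Decreasing the inertia weight over the run shifts the swarm from global exploration early on toward local exploitation near the end, which is the behaviour DNCPSO refines with its directional and mutation operators.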
Article
We model deterministic broadband (0-7.5 Hz) ground motion from an Mw 7.1 bilateral strike-slip earthquake scenario with dynamic rupture propagation along a rough-fault topography embedded in a medium including small-scale velocity and density perturbations. Spectral accelerations (SAs) at periods 0.2-3 s and Arias intensity durations show a similar distance decay (at the level of 1-2 interevent standard deviations above the median) when compared to Next Generation Attenuation-West2 (NGA-West2) ground-motion prediction equations (GMPEs) using a Q(f) power-law exponent of 0.6-0.8 above 1 Hz in models with a minimum VS of 750 m/s. With a trade-off from Q(f), the median ground motion is slightly increased by scattering from statistical models of small-scale heterogeneity with standard deviation (σ) of the perturbations at the lower end of the observed range (5%) but reduced by scattering attenuation at the upper end (10%) when using a realistic 3D background velocity model. The ground-motion variability is strongly affected by the addition of small-scale media heterogeneity, reducing otherwise large values of intraevent standard deviation closer to those of empirical observations. These simulations generally have intraevent standard deviations for SAs lower than the GMPEs for the modeled bandwidth, with an increasing trend with distance (most pronounced in low-to-moderate scattering media) near the level of observations at distances greater than 35 km from the fault. Durations for the models follow the same increasing trend with distance, in which σ = 5% produces the best match to GMPE values. We find that a 3D background-velocity model reduces the pulse period into the expected range by breaking up coherent waves from directivity, generating a lognormal distribution of ground-motion residuals. These results indicate that a strongly heterogeneous medium is needed to produce realistic deterministic broadband ground motions.
Finally, the addition of a thin surficial layer with low, frequency-independent Q in the model (with a minimum VS of 750 m/s) controls the high-frequency decay in energy, as measured by the parameter κ, which may be necessary to include as simulations continue to extend to higher frequencies. Electronic Supplement: Verification of the two-step procedure (converting slip-rate output from dynamic rupture propagation to a kinematic source) by comparing the Support Operator Rupture Dynamics (SORD) and anelastic wave-propagation (AWP) synthetics; misfit of synthetic ground motion modeled with AWP, including small-scale media heterogeneity as compared with frequency-wavenumber (f-k), as well as SW4; histograms and q-q plots analyzing the lognormality of ground-motion residuals; figures of ground motion for additional simulations not shown in the main article; and an animation of wave propagation for simulations, including rough-fault topography with and without small-scale media heterogeneity.
Article
Full-text available
We use teleseismic receiver functions computed from an ∼35‐day nodal dataset recorded along three profiles in the northern basins of Los Angeles, California, to map the depth and shape of the sediment–basement interface and to identify possible deep fault offsets. The results show the Moho discontinuity, the bottom of the basement, and intermediary sedimentary layers. There are also indications of midcrustal offsets along strike of the Red Hill and Raymond faults. The results are compared with receiver functions from nearby permanent broadband stations and the 1993 Los Angeles Region Seismic Experiment (LARSE) profile. The images show that dense deployments of node‐type sensors can be used to characterize basin structure in a noisy urban environment.
Conference Paper
The analysis of the risk caused by natural phenomena has as a main objective to assess the implications and potential impacts of natural events such as earthquakes, storms, floods and other hazards on the population, productive activities and infrastructure. Within the framework of structural engineering, the study of seismic hazard is a topic of great importance; this is due to the level of impact that seismic events can have on the structural integrity of buildings, which is associated with the magnitude of the loads generated by these events. In order to perform a risk analysis is necessary to know the magnitude of the expected events, and the level of the ground motions that these events can generate. Nonetheless, in areas where we have few, if any
Article
Full-text available
The Los Angeles basin is one of the Neogene basins along the California continental margin that was formed by extension related to complex wrench-fault mechanisms. For the last 60 years, the biostratigraphic framework for correlation between the oil fields of the Los Angeles basin has been based on the benthic foraminiferal zones and divisions. With the utilization of other microfossil disciplines (e.g., siliceous and calcareous planktonic microfossils), a more refined biostratigraphic scheme has been developed. Correlation of plankton biostratigraphies with the radiometric time scale has in turn allowed correlation and calibration of benthic foraminiferal zonations. Three regional cross sections were constructed based on the thickness of the Neogene sedimentary package and on the chronostratigraphic relationships between the benthic foraminiferal zones and the other fossil groups.
Article
Full-text available
The east Ventura basin originated in the middle Miocene as a rift system bounded on one side by the Oak Ridge–Simi Hills structural shelf and on the other side by a granitic ridge parallel to the San Gabriel fault. This fault began accumulating right slip 10-12 m.y. ago at a rate of 4.5-9 mm/yr slowing to about 1 mm/yr in the Quaternary. A change to contractile tectonics occurred in the Pliocene with deposition of the Fernando Formation, when the Newhall-Potrero anticline developed as a monocline above a blind reverse fault; the Pico anticline to the southeast and the Temescal and Hopper Ranch-Modelo anticlines to the northwest may have a similar origin. Tectonic inversion and displacement on the southwest-verging Santa Susana fault began about 0.5 Ma. Northeast-trending discontinuities and structures divide the present deformation zone into four segments. -from Authors
Article
Full-text available
The analysis of seismograms from 32 aftershocks recorded by 98 seismic stations installed after the Northridge earthquake in the San Fernando Valley, the Santa Monica Mountains, and Santa Monica, California, indicates that the enhanced damage in Santa Monica is explained in the main by focusing due to a lens structure at a depth of several kilometers beneath the surface and having a finite lateral extent. The diagnosis was made from the observation of late-arriving S phases with large amplitudes, localized in the zones of large damage. The azimuths and angles of incidence of the seismic rays that give rise to the greatest focusing effects correspond to radiation that would have emerged from the lower part of the rupture surface of the mainshock. Thus the focusing and, hence, the large damage in Santa Monica were highly dependent on the location of the Northridge event, and an earthquake of similar size, located as little as one source dimension away, would not be likely to repeat this pattern. We show from coda wave analysis that the influence of surface geology as well as site effects on damage in Santa Monica is significantly smaller than are the focusing effects.
Article
Full-text available
This study determines site-response factors that can be applied as corrections to a rock-attenuation relationship for use in probabilistic seismic-hazard analysis. The site-response factors are amplitude and site-class dependent. These amplification factors are determined by averaging ratios between observed and predicted ground motions for peak ground acceleration (PGA) and for 5% damped response spectral acceleration at 0.3, 1.0, and 3.0 sec oscillator periods. The observations come from the strong-motion database of the Southern California Earthquake Center (SCEC), and the predictions are based on the Sadigh (1993) rock-attenuation relation. When separated and averaged according to surface geology, significantly different site-response factors are found for Quaternary and Mesozoic units, but a subclassification of Quaternary is generally not justified by the data. The low-input-motion amplification factors are consistent with those obtained from independent aftershock studies at PGA and the 0.3-sec period. An observed trend of decreasing Quaternary site amplification with higher input motion is consistent with nonlinear soil behavior; however, the trend exists for Mesozoic sites as well, implying that this may be an artifact of the Sadigh relationship. There is a correlation between larger site-response factors and lower average shear-wave velocity in the upper 30 m for low predicted PGA input motions, with an increase in the correlation with increasing period. The 0.3-sec site-response factors for Quaternary data in southern California determined in this study are consistent with 0.3-sec NEHRP site-response correction factors; however, at 1.0-sec period some inconsistencies remain. A trend is also seen with respect to sediment basin depth, where deeper sites have higher average site-response factors.
These results constitute a customized attenuation relationship for southern California. The implications of these customized attenuation models for probabilistic hazard analysis are examined in Field and Petersen (2000).
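The averaging step described above can be sketched as a geometric mean of observed-to-predicted ground-motion ratios grouped by site class. The records, class labels, and values below are hypothetical illustrations, not data from the study:

```python
import math
from collections import defaultdict

def site_response_factors(records):
    """Geometric-mean ratio of observed to predicted PGA per site class.

    records: iterable of (site_class, observed_pga, predicted_rock_pga).
    Returns {site_class: average amplification factor}."""
    log_sums = defaultdict(float)
    counts = defaultdict(int)
    for site_class, obs, pred in records:
        log_sums[site_class] += math.log(obs / pred)  # average in log space
        counts[site_class] += 1
    return {c: math.exp(log_sums[c] / counts[c]) for c in log_sums}

# Hypothetical records: Quaternary ("Q") sites amplify, Mesozoic ("Mz") near unity.
data = [("Q", 0.30, 0.20), ("Q", 0.24, 0.20), ("Mz", 0.21, 0.20)]
factors = site_response_factors(data)
```

Averaging in log space (a geometric mean) is the usual choice for ground-motion ratios, since residuals are approximately lognormal.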
Article
Full-text available
This paper presents data and modelling results from a crustal and upper mantle wide-angle seismic transect across the Salton Trough region in southeast California. The Salton Trough is a unique part of the Basin and Range province where mid-ocean ridge/transform spreading in the Gulf of California has evolved northward into the continent. In 1992, the U.S. Geological Survey (USGS) conducted the final leg of the Pacific to Arizona Crustal Experiment (PACE). Two perpendicular models of the crust and upper mantle were fit to wide-angle reflection and refraction travel times, seismic amplitudes, and Bouguer gravity anomalies. The first profile crossed the Salton Trough from the southwest to the northeast, and the second was a strike line that paralleled the Salton Sea along its western edge. We found thin crust (∼21–22 km thick) beneath the axis of the Salton Trough (Imperial Valley) and locally thicker crust (∼27 km) beneath the Chocolate Mountains to the northeast. We modelled a slight thinning of the crust further to the northeast beneath the Colorado River (∼24 km) and subsequent thickening beneath the metamorphic core complex belt northeast of the Colorado River. There is a deep, apparently young basin (∼5–6 km unmetamorphosed sediments) beneath the Imperial Valley and a shallower (∼2–3 km) basin beneath the Colorado River. A regional 6.9-km/s layer (between ∼15-km depth and the Moho) underlies the Salton Trough as well as the Chocolate Mountains where it pinches out at the Moho. This lower crustal layer is spatially associated with a low-velocity (7.6–7.7 km/s) upper mantle. We found that our crustal model is locally compatible with the previously suggested notion that the crust of the Salton Trough has formed almost entirely from magmatism in the lower crust and sedimentation in the upper crust. 
However, we observe an apparently magmatically emplaced lower crust to the northeast, outside of the Salton Trough, and propose that this layer in part predates Salton Trough rifting. It may also in part result from migration of magmatic spreading centers associated with the southern San Andreas fault system. These spreading centers may have existed east of their current locations in the past and may have influenced the lower crust and upper mantle to the east of the current Salton Trough.
Article
Full-text available
Local site effects have an enormous influence on the character of ground motions. Currently, soil categories and site factors used in building codes for seismic design are generally based on, or at least correlated with, the seismic velocity of the surface layer. We note, however, that the upper 30 m (a typical depth of investigation) would almost never represent more than 1% of the distance from the source; 0.1% to 0.2% would be more typical of situations where motion is damaging. We investigate the influence of this thin skin on the high-frequency properties of seismograms. We examine properties of seismograms consisting of vertically propagating S waves through an arbitrarily complex stack of flat, solid, elastic layers, where the properties of the lowermost layer (taken at 5 km depth) and a surface layer (thickness 30 m) are constrained. Input at the bottom of the stack is an impulse. We find that the character of the seismograms, and the peak spectral frequencies, are strongly influenced by the properties of the intervening layers. However, for infinite Q, the integral of amplitude squared at the surface (which determines energy if the input and output are regarded as velocity, or Arias intensity if the input and output are regarded as acceleration) is independent of the intervening layers. Also, the peak amplitude of the seismogram at the surface is relatively independent of the intervening properties. For finite, frequency-independent Q, the integral of amplitude squared and peak amplitude decrease as t* increases. There is some scatter that depends on the intervening layers, but it is surprisingly small. These calculations suggest that the surficial geology has a greater influence on ground motions than might be expected based on its thickness alone. They suggest that variable influences of Q along the entire path have a comparable importance for predictions of ground motions.
Finally, they suggest that detailed characterization of deeper velocity structure in regions where a 1D model is appropriate gives only a limited amount of added information. Based on our 1D numerical results, we propose a new method to characterize these properties as site factors that could be used in building codes. Full three-dimensional synthetics are tested and give a similar conclusion.
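The vertically propagating SH-wave response of such a layer stack can be sketched with a standard propagator-matrix recursion for elastic (infinite-Q) layers; the single-layer example and all material properties below are illustrative, not values from the study:

```python
import math

def sh_amplification(layers, rho_b, vs_b, freq):
    """|Transfer function| for vertically incident SH waves: surface
    displacement relative to outcrop motion (twice the incident amplitude).

    layers: (thickness_m, vs_m_per_s, density_kg_m3), top first, resting
    on a halfspace of density rho_b and velocity vs_b.  Elastic, so the
    propagator matrix of each layer stays real."""
    w = 2.0 * math.pi * freq
    u, s = 1.0, 0.0                      # displacement, shear stress at surface
    for h, vs, rho in layers:
        k, z = w / vs, rho * vs          # wavenumber, shear impedance
        c, sn = math.cos(k * h), math.sin(k * h)
        # Propagate the (displacement, stress) state vector down one layer.
        u, s = c * u + sn / (w * z) * s, -w * z * sn * u + c * s
    # Decompose the basement state into up/downgoing halfspace waves.
    return 1.0 / math.hypot(u, s / (w * rho_b * vs_b))

# Single soft layer over basement: resonance at f0 = vs/(4h) = 2.5 Hz,
# where the amplification equals the impedance ratio (16 here).
peak = sh_amplification([(30.0, 300.0, 1800.0)], 2700.0, 3200.0, 2.5)
```

At low frequency the function tends to 1, and at the quarter-wavelength resonance it recovers the classical impedance-ratio amplification, consistent with the infinite-Q results summarized in the abstract.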
Article
Full-text available
Seismic hazard studies of the Los Angeles basin area require a realistic seismic-velocity model. We use geologic information about depth to crystalline basement, depths to sedimentary horizons, uplift of sediments, and surface geology in a velocity-depth-age function to construct a three-dimensional velocity model. In earthquake location tests, the model predicts travel times satisfactorily, and in earthquake ground-motion simulations, the model correctly determines the timing and amplitude of late-arriving waves.
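A velocity-depth-age rule of the kind described is in the spirit of Faust's relation, in which velocity grows with a fractional power of the depth-age product. A minimal sketch, using the classical one-sixth exponent and an illustrative constant rather than the published coefficients:

```python
def sediment_velocity(depth_km, age_ma, k=1.7):
    """Faust-type velocity-depth-age rule: V grows with the sixth root
    of the depth-age product.  k (km/s) is an illustrative constant,
    not a calibrated coefficient."""
    return k * (depth_km * age_ma) ** (1.0 / 6.0)

# Velocity increases smoothly with burial depth for 5-Ma-old sediments:
profile = [sediment_velocity(z, 5.0) for z in (0.5, 1.0, 2.0, 4.0)]
```

Such a rule lets a basin model assign a velocity to every point from just the local depth below a geologic horizon and the age of the sediment, which is what makes the representation compact to store and update.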
Article
Full-text available
The number of broadband three-component seismic stations in southern California has more than tripled recently. In this study we use the teleseismic receiver function technique to determine the crustal thicknesses and VP/VS ratios for these stations and map out the lateral variation of Moho depth under southern California. It is shown that a receiver function can provide a very good point measurement of crustal thickness under a broadband station and is not sensitive to crustal P velocity. However, the crustal thickness estimated only from the delay time of the Moho P-to-S converted phase trades off strongly with the crustal VP/VS ratio. The ambiguity can be reduced significantly by incorporating the later multiple converted phases, namely, the PpPs and PpSs + PsPs. We propose a stacking algorithm which sums the amplitudes of the receiver function at the predicted arrival times of these phases for different crustal thicknesses H and VP/VS ratios. This transforms the time domain receiver functions directly i…
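The stacking algorithm described can be sketched as a grid search over crustal thickness H and VP/VS ratio κ, summing receiver-function amplitudes at the predicted moveout times of the Ps, PpPs, and PpSs + PsPs phases. The phase weights, grids, and synthetic spike train below are illustrative choices, not the paper's calibration:

```python
import math

def hk_stack(rf, dt, p, vp, H_grid, k_grid, w=(0.7, 0.2, 0.1)):
    """Grid search over crustal thickness H (km) and Vp/Vs ratio kappa.

    rf: receiver-function samples; dt: sample interval (s);
    p: ray parameter (s/km); vp: average crustal P velocity (km/s).
    Returns (best_H, best_kappa).  Weights w are illustrative."""
    def amp(t):
        i = int(round(t / dt))
        return rf[i] if 0 <= i < len(rf) else 0.0

    best = (None, None, -math.inf)
    for H in H_grid:
        for k in k_grid:
            qs = math.sqrt(1.0 / (vp / k) ** 2 - p * p)   # S slowness term
            qp = math.sqrt(1.0 / vp ** 2 - p * p)         # P slowness term
            s = (w[0] * amp(H * (qs - qp))          # Ps
                 + w[1] * amp(H * (qs + qp))        # PpPs
                 - w[2] * amp(2.0 * H * qs))        # PpSs+PsPs (negative polarity)
            if s > best[2]:
                best = (H, k, s)
    return best[0], best[1]

# Synthetic check: spikes placed at the predicted times for H=30 km, kappa=1.8.
dt, p, vp = 0.05, 0.06, 6.3
qs = math.sqrt(1.0 / (vp / 1.8) ** 2 - p * p)
qp = math.sqrt(1.0 / vp ** 2 - p * p)
rf = [0.0] * 500
rf[round(30 * (qs - qp) / dt)] = 1.0
rf[round(30 * (qs + qp) / dt)] = 1.0
rf[round(60 * qs / dt)] = -1.0
H, kappa = hk_stack(rf, dt, p, vp, range(25, 36), [1.6, 1.7, 1.8, 1.9, 2.0])
```

Stacking the three phases at once is what breaks the H-κ trade-off: each phase traces a different curve through (H, κ) space, and only the true pair lines up on all three.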
Article
Full-text available
When creating camera-ready figures, most scientists are familiar with the sequence of raw data --> processing --> final illustration and with the spending of large sums of money to finalize papers for submission to scientific journals, prepare proposals, and create overheads and slides for various presentations. This process can be tedious and is often done manually, since available commercial or in-house software usually can do only part of the job. To expedite this process, we introduce the Generic Mapping Tools (GMT), which is a free, public domain software package that can be used to manipulate columns of tabular data, time series, and gridded data sets and to display these data in a variety of forms ranging from simple x-y plots to maps and color, perspective, and shaded-relief illustrations. GMT uses the PostScript page description language, which can create arbitrarily complex images in gray tones or 24-bit true color by superimposing multiple plot files. Line drawings, bitmapped images, and text can be easily combined in one illustration. PostScript plot files are device-independent, meaning the same file can be printed at 300 dots per inch (dpi) on an ordinary laserwriter or at 2470 dpi on a phototypesetter when ultimate quality is needed. GMT software is written as a set of UNIX tools and is totally self-contained and fully documented. The system is offered free of charge to federal agencies and nonprofit educational organizations worldwide and is distributed over the computer network Internet.
Article
We examine the extent to which the response of a perfectly elastic half-space to an SH-wave incident from below can be characterized when knowledge about the elastic structure is limited to the near surface. Elastic properties are modeled as piecewise continuous functions of the depth coordinate. It is found that the site amplification function can be determined with a frequency resolution that depends inversely on the depth to which the elastic structure is known. Specifically, certain spectral averages of the site amplification function, concentrated over bandwidth Δƒ, depend only on the elastic structure down to a two-way travel-time depth of 1/Δƒ. These spectral averages are entirely independent of the elastic properties at greater depth. Equivalently, when the incident motion has a bandlimited white power spectrum of bandwidth Δƒ, the site amplification of the root mean square (rms) ground motion depends only on the elastic structure down to a two-way travel-time depth of 1/Δƒ. When the bandwidth is sufficiently large, the following corollary applies: the rms surface ground motion equals the rms incident motion multiplied by 2√(Ib/I0), where I0 and Ib are shear impedances at the ground surface and basement depth, respectively. This result provides justification for a procedure conventionally used to correct stochastic estimates of earthquake ground motion to account for local site effects. The analysis also clarifies the limitations of that conventional procedure. The results define specific site-response parameters that can be computed from knowledge of shallow structure alone and may thereby contribute to improved understanding of the physical basis for, and limitations of, site classification schemes that are based on average S-wave velocity at shallow depth. While the analytical results are rigorous only for infinite Q, numerical experiments indicate that similar results apply to models with finite, frequency-independent Q.
The practical utility of the results is likely to be limited primarily by the degree of lateral heterogeneity present near sites of interest and the degree to which the sites respond nonlinearly to incident ground motion.
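The broadband corollary above reduces to a one-line calculation. A minimal numeric sketch, with illustrative (not the paper's) density and velocity values:

```python
import math

def broadband_site_amplification(rho0, vs0, rho_b, vs_b):
    """Broadband rms amplification for vertically incident SH waves:
    2 * sqrt(basement impedance / surface impedance).  The factor 2 is
    the free-surface effect; shear impedance is rho * Vs."""
    i0 = rho0 * vs0          # surface shear impedance
    ib = rho_b * vs_b        # basement shear impedance
    return 2.0 * math.sqrt(ib / i0)

# Illustrative values: soft surficial layer over crystalline basement.
amp = broadband_site_amplification(rho0=1800.0, vs0=300.0,
                                   rho_b=2700.0, vs_b=3200.0)
```

Note that only the surface and basement impedances enter; per the result above, the intervening structure drops out of the broadband rms amplification entirely.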
Article
The major, historically active San Jacinto and San Andreas fault systems pass through the San Bernardino Valley area of southern California. An array of six portable, high-gain seismographs was operated for five 2-week recording sessions during the summer of 1972 and winter and spring of 1973 in order to detail the microseismicity of the region. A crustal model for the Valley, modified after Gutenberg, was established using a 6-km reversed seismic refraction profile and a series of monitored quarry blasts. Fifty-five microearthquakes were used to establish a magnitude scale (1.5 to 3.3) based on coda lengths recorded by instruments peaked at 20 Hz. Forty-five hypocenters from the analysis of over 6,000 hr of low-noise records define two northeast trending lineations within the western portion of the Valley. A composite first-motion plot of 22 microearthquakes from these lineations indicates left-lateral strike-slip faulting. Fluctuations in microseismicity appear to reflect rapid changes in the stress patterns of southern California. Minor activity along the strike of the San Jacinto fault zone suggests a purely right-lateral strike-slip motion. Only minimal strain release was observed along the San Andreas fault zone.
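Coda-length magnitude scales of the kind established above are conventionally duration magnitudes of the form M = a + b·log10(duration). The coefficients below are illustrative placeholders, not the study's calibration:

```python
import math

def coda_magnitude(duration_s, a=-0.87, b=2.0):
    """Duration magnitude from coda length: M = a + b*log10(duration).
    Coefficients a and b are illustrative, not the study's values;
    in practice they are fit to events with known magnitudes."""
    return a + b * math.log10(duration_s)
```

With these placeholder coefficients, codas of a few tens to a few hundreds of seconds map into roughly the M 1.5-3.3 range the study reports.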
Article
This article presents an overview of the Southern California Earthquake Center (SCEC) Phase-III effort to determine the extent to which probabilistic seismic hazard analysis (PSHA) can be improved by accounting for site effects. The contributions made in this endeavor are represented in the various articles that compose this special issue of BSSA. Given the somewhat arbitrary nature of the site-effect distinction, it must be carefully defined in any given context. With respect to PSHA, we define the site effect as the response, relative to an attenuation relationship, averaged over all damaging earthquakes in the region. A diligent effort has been made to identify any attributes that predispose a site to greater or lower levels of shaking. The most detailed maps of Quaternary geology are not found to be helpful; either they are overly detailed in terms of distinguishing different amplification factors or present southern California strong-motion observations are inadequate to reveal their superiority. A map based on the average shear-wave velocity in the upper 30 m, however, is found to delineate significantly different amplification factors. A correlation of amplification with basin depth is also found to be significant, implying up to a factor of two difference between the shallowest and deepest parts of the Los Angeles basin. In fact, for peak acceleration the basin-depth correction is more influential than the 30-m shear-wave velocity. Questions remain, however, as to whether basin depth is a proxy for some other site attribute. In spite of these significant and important site effects, the standard deviation of an attenuation relationship (the prediction error) is not significantly reduced by making such corrections. That is, given the influence of basin-edge-induced waves, subsurface focusing, and scattering in general, any model that attempts to predict ground motion with only a few parameters will have a substantial intrinsic variability. 
Our best hope for reducing such uncertainties is via waveform modeling based on first principles of physics. Finally, questions remain with respect to the overall reliability of attenuation relationships at large magnitudes and short distances. Current discrepancies between viable models produce up to a factor of 3 difference among predicted 10%-in-50-yr exceedance levels, part of which results from the uncertain influence of sediment nonlinearity.
Article
One simple way of accounting for site conditions in calculating seismic hazards is to use the shear-wave velocity in the shallow subsurface to classify materials. The average shear-wave velocity to 30 m (Vs30) has been used to develop site categories that can be used for modifying a calculated ground motion to account for site conditions. We have prepared a site-category map of California by first classifying the geologic units shown on 1:250,000 scale geologic maps. Our classification of geologic units is based on Vs30 measured in 556 profiles and geological similarities between units for which we have Vs data and the vast majority of units for which we have no data. We then digitized the geologic boundaries from those maps that separated units with different site classifications. Vs data for California show that several widespread geologic units have ranges of Vs30 values that cross the boundaries between NEHRP-UBC site categories. The Franciscan Complex has Vs30 values across UBC categories B and C with a mean value near the boundary between those two categories. Older alluvium and late Tertiary bedrock have Vs30 values that range from about 300 to about 450 m/sec, across the boundary between categories C and D. To accommodate these units we have created intermediate categories, which we informally call BC and CD. Geologic units that have, or are interpreted to have, Vs30 values near the boundary of the UBC categories are placed in these intermediate units. In testing our map against the available Vs30 measurements, we have found that 74% of the measured Vs30 values fall within the range assigned to the Vs category where they fall on the map. This ratio is quite good considering the inherent problems in plotting site-specific data on a regional map and the variability of physical properties in geologic units.
We have also calculated the mean and distribution of Vs30 for each of our map units and prepared composite profiles, showing the variation of Vs in the upper 100 m from the available Vs data. These data show that the map categories that we have defined based on geologic units have different Vs properties that can be taken into account in calculating seismic hazards.
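Vs30 itself is a time-averaged (not arithmetic-mean) velocity: 30 m divided by the vertical shear-wave travel time through the top 30 m. A sketch using the standard NEHRP boundary values (180, 360, 760, 1500 m/s); the example profile is hypothetical:

```python
def vs30(layers):
    """Time-averaged shear-wave velocity over the top 30 m.

    layers: list of (thickness_m, vs_m_per_s), top first.  Material below
    30 m is ignored; the deepest layer is extended if the profile is
    shallower than 30 m."""
    remaining, travel_time = 30.0, 0.0
    for thickness, vs in layers:
        h = min(thickness, remaining)
        travel_time += h / vs
        remaining -= h
        if remaining <= 0.0:
            break
    if remaining > 0.0:                 # extend the deepest layer to 30 m
        travel_time += remaining / layers[-1][1]
    return 30.0 / travel_time

def nehrp_class(v):
    """NEHRP site class from Vs30 (boundaries in m/s)."""
    for limit, label in ((180, "E"), (360, "D"), (760, "C"), (1500, "B")):
        if v < limit:
            return label
    return "A"

# Hypothetical profile: 5 m at 200 m/s, 10 m at 350 m/s, 15 m at 500 m/s.
v = vs30([(5.0, 200.0), (10.0, 350.0), (15.0, 500.0)])
```

Because slow layers dominate the travel time, the harmonic-style average sits well below the arithmetic mean of the layer velocities, which is why thin soft layers can pull a site across a category boundary.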
Article
Three-dimensional finite difference simulations of elastic waves in the San Bernardino Valley were performed for two hypothetical earthquakes on the San Andreas fault: a point source with moment magnitude M5 and an extended rupture with M6.5. A method is presented for incorporating a source with arbitrary focal mechanism in the grid. Synthetics from the 3-D simulations are compared with those derived from 2-D (vertical cross section) and 1-D (flat-layered) models. The synthetic seismograms from the 3-D and 2-D simulations exhibit large surface waves produced by conversion of incident S waves at the edge of the basin. Seismograms from the flat-layered model do not contain these converted surface waves and underestimate the duration of shaking. Maps of maximum ground velocity show that the largest velocities occur in localized portions of the basin. The location of the largest velocities changes with the rupture propagation direction. Contours of maximum shaking are also dependent on asperity positions and radiation pattern. -from Author
Article
Aftershocks of the 1994 Northridge (Mw = 6.7) earthquake provide insights into the geometry of the seismic source faults in the San Fernando Valley and the east Ventura Basin and allow the calculation of deformation rates for the region. The Northridge thrust and Santa Susana faults dip in opposite directions, with the Northridge thrust entirely beneath the Santa Susana fault. These opposing reverse faults interact, resulting in a folded active Santa Susana fault and uplift in the footwall block of that reverse fault. Two balanced cross sections suggest thick-skinned deformation of the western Transverse Ranges. The western section, across the Modelo lobe segment of the north-dipping San Cayetano fault and the easternmost surface trace of the south-dipping Oak Ridge fault, is west of any aftershocks of the Northridge earthquake and has been termed the Hopper Canyon segment of the deformation belt. Structural modeling predicts a dip of 46° S on the Oak Ridge fault at seismogenic depths. Horizontal shortening rates are calculated by adding the products of the dip-slip displacements and the cosines of the dips of both faults. The eastern cross section shows the Northridge mainshock, with a 45° south-dipping nodal plane at a depth of 18 km. Aftershocks reach a depth of 20 km. In a thin-skinned paradigm, a hinge should occur at the surface near the Santa Monica Mountains due to rocks moving from a decollement at the brittle-plastic transition and changing dip as they move up the ramp. No hinge of that magnitude occurs there. Calculation of horizontal shortening rates across this part of the western Transverse Ranges must take into account the displacement on both the Northridge thrust (eastern extension of the Oak Ridge fault) and the Santa Susana fault (Placerita segment).
Horizontal shortening rates are 8.2 ± 2.4 mm/yr across the Modelo lobe segment of the San Cayetano fault and the Oak Ridge fault and 5.7 ± 2.5 mm/yr across the Northridge thrust and the Santa Susana fault. These rates are consistent with those based on tectonic geodesy using GPS. Dip-slip displacement rates on the faults are 1.7 mm/yr for the Northridge thrust since 2.3 Ma, 4.1 ± 0.4 mm/yr for the Oak Ridge fault since 500 ka, 5.9 +3.9/−3.8 mm/yr for the Santa Susana fault since 600 to 2300 ka, and 7.4 ± 3.0 mm/yr for the Modelo lobe segment of the San Cayetano fault since 500 ka. This indicates that the slip rates on the north-dipping, the eastern San Cayetano, and the Santa Susana faults are comparable, but the slip rate on the south-dipping faults decreases eastward; the slip rate on the Oak Ridge fault in the Ventura basin is more than double that of the Northridge thrust.
Article
The attenuation relationship presented by Boore et al. (1997) has been evaluated and customized with respect to southern California strong-motion data (for peak ground acceleration (PGA) and 0.3-, 1.0-, and 3.0-sec period spectral acceleration). This study was motivated by the recent availability of a new site-classification map by Wills et al. (2000), which distinguishes seven different site categories for California based on the 1994 NEHRP classification. With few exceptions, each of the five site types represented in the southern California strong-motion database exhibit distinct amplification factors, supporting use of the Wills et al. (2000) map for microzonation purposes. Following other studies, a basin-depth term was also found to be significant and therefore added to the relationship. Sites near the center of the LA Basin exhibit shaking levels up to a factor of 2 greater, on average, than otherwise equivalent sites near the edge. Relative to Boore et al. (1997), the other primary difference here is that PGA exhibits less variation among the Wills et al. (2000) site types. In fact, the PGA amplification implied by the basin-depth effect is greater than that implied by site classification. The model does not explicitly account for nonlinear sediment effects, which, if important, will most likely influence rock-site PGA predictions the most. Evidence for a magnitude-dependent variability, or prediction uncertainty, is also found and included as an option.
Article
It is well established that sedimentary basins can significantly amplify earthquake ground motion. However, the amplification at any given site can vary with earthquake location. To account for basin response in probabilistic seismic hazard analysis, therefore, we need to know the average amplification and intrinsic variability (standard deviation) at each site, given all earthquakes of concern in the region. Due to a dearth of empirical ground-motion observations, theoretical simulations constitute our best hope of addressing this issue. Here, 0-0.5 Hz finite-difference, finite-fault simulations are used to estimate the three-dimensional (3D) response of the Los Angeles basin to nine different earthquake scenarios. Amplification is quantified as the peak velocity obtained from the 3D simulation divided by that predicted using a regional one-dimensional (1D) crustal model. Average amplification factors are up to a factor of 4, with the values from individual scenarios typically differing by as much as a factor of 2.5. The average amplification correlates with basin depth, with values near unity at sites above sediments with thickness less than 2 km, and up to factors near 6 above the deepest (≈9 km) and steepest-dipping parts of the basin. There is also some indication that amplification factors are greater for events located farther from the basin edge. If the 3D amplification factors are divided by the 1D vertical SH-wave amplification below each site, they are lowered by up to a factor of 1.7. The duration of shaking in the 3D model is found to be longer, by up to more than 60 seconds, relative to the 1D basin response. The simulation of the 1994 Northridge earthquake reproduces recorded 0-0.5 Hz particle velocities relatively well, in particular at near-source stations. The synthetic and observed peak velocities agree within a factor of two and the log standard deviation of the residuals is 0.36.
This is a reduction of 54% and 51% compared to the values obtained for the regional 1D model and a 1D model defined by the velocity and density profile below a site in the middle of the basin (DOW), respectively. This result suggests that long-period ground-motion estimation can be improved considerably by including the 3D basin structure. However, there are uncertainties concerning the accuracy of the basin model, model resolution, the omission of material with shear velocities lower than 1 km/s, and the fact that only nine scenarios have been considered. Therefore, the amplification factors reported here should be used with caution until they can be further tested against observations. However, the results do serve as a guide to what should be expected, particularly with respect to increased amplification factors at sites located above the deeper parts of the basin.
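The amplification measure used here, 3D peak velocity divided by the 1D prediction, averaged over scenarios at each site, can be sketched as follows; the scenario names, site labels, and peak-velocity values are hypothetical:

```python
def average_amplification(pgv_3d, pgv_1d):
    """Scenario-averaged amplification factor at each site.

    pgv_3d: {scenario: {site: peak velocity from the 3D simulation}}
    pgv_1d: {scenario: {site: peak velocity from the 1D crustal model}}
    Returns {site: mean 3D/1D ratio over all scenarios}."""
    sums, counts = {}, {}
    for scenario, sites in pgv_3d.items():
        for site, v3d in sites.items():
            ratio = v3d / pgv_1d[scenario][site]
            sums[site] = sums.get(site, 0.0) + ratio
            counts[site] = counts.get(site, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

# Hypothetical: a deep-basin site amplifies strongly; a basin-edge site does not.
amp = average_amplification(
    {"eq1": {"deep": 40.0, "edge": 11.0}, "eq2": {"deep": 25.0, "edge": 9.0}},
    {"eq1": {"deep": 10.0, "edge": 10.0}, "eq2": {"deep": 10.0, "edge": 10.0}},
)
```

Keeping per-scenario ratios before averaging also allows the intrinsic variability (the scatter across scenarios at each site) to be computed alongside the mean, which is what the hazard application requires.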
Article
The rootless Ventura Avenue, San Miguelito, and Rincon anticlines (Ventura fold belt) in Pliocene-Pleistocene turbidites are fault-propagation folds related to south-dipping reverse faults rising from a decollement in Miocene shale. To the east, the Sulphur Mountain anticlinorium overlies and is cut by the Sisar, Big Canyon, and Lion south-dipping thrusts that merge downward into the Sisar decollement in lower Miocene shale. Shortening of the Miocene and younger sequence is ~3 km greater than that of underlying competent Paleogene strata in the Ventura fold belt and ~7 km greater farther east at Sulphur Mountain. Cross-section balancing requires that this difference be taken up by the Paleogene sequence at the Oak Ridge fault to the south. Convergence is northeast to north-northeast on the basis of earthquake focal mechanisms, borehole breakouts, and piercing-point offset of the South Mountain seaknoll by the Oak Ridge fault. A northeast-trending line connecting the west end of Oak Ridge and the east end of the Sisar fault separates an eastern domain where late Quaternary displacement is taken up entirely on the Oak Ridge fault and a western domain where displacement is transferred to the Sisar decollement and its overlying rootless folds. This implies that (1) the Oak Ridge fault near the coast presents as much seismic risk as it does farther east, despite negligible near-surface late Quaternary movement; (2) ground-rupture hazard is high for the Sisar fault set in the upper Ojai Valley; and (3) the decollement itself could produce an earthquake analogous to the 1987 Whittier Narrows event in Los Angeles.
Article
This article evaluates the possibility of improving a particular ground-motion relationship for predicting peak acceleration (PGA) and absolute response spectral accelerations at the periods of 0.3, 1.0, and 3.0 sec in southern California. We use the attenuation model of Abrahamson and Silva (1997), which Lee et al. (2000) found satisfactory for this region. We examine differences between observed and predicted values (residuals) as a function of several site attributes to determine whether corrections can be made to improve the predictions. This study differs from that of Steidl (2000) in that we use an attenuation model that accounts for sediment amplification and nonlinearity (Steidl used only a rock-site relationship). Residuals are significantly correlated with basin depth. Depending on the specific frequencies considered, ground motions at the deepest part of the basin average 30% to 80% higher than at the edge of the basin. Residuals are also significantly correlated with the estimates of the average amplification of peak velocity due to the basin velocity structure (Olsen, 2000). Since the basin depth is correlated with the average basin amplification, there is no need to correct the ground-motion model for both effects. Detailed geology is generally found unhelpful in improving ground-motion predictions. Overall, correcting the ground-motion relation reduces the standard error of ground-motion predictions by about 5%. Whether the ground-motion relation modifications suggested here are significant in terms of the implied seismic hazard is evaluated in Field and Petersen (2000). The weak correlation of residuals with respect to site parameters motivated us to apply the test proposed by Lee et al. (1998). This involves plotting residuals versus residuals for stations that have recorded more than one earthquake.
To the extent that systematic site effects cause the misfit between observations and the ground-motion model, such a plot will show correlation among the residuals. Correlation coefficients are, surprisingly, very low, ranging from 0.16 for PGA residuals to 0.26 for 3-sec response spectra. Thus it seems that it will be very difficult to refine ground-motion prediction equations beyond the very general categories now in use; improved physical understanding of the site, source, and path contributions must play a major role in any future efforts to reduce the uncertainty in the ground-motion predictions.
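The residual-vs-residual test described above is straightforward to sketch: for stations that recorded two earthquakes, a repeatable site effect should make the two sets of residuals correlate. A minimal illustration follows, using a plain Pearson correlation on hypothetical residual values (not the exact Lee et al. procedure or any real data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical PGA residuals (ln observed - ln predicted) at five stations
# that each recorded two different earthquakes.
res_eq1 = [0.21, -0.05, 0.34, -0.12, 0.08]
res_eq2 = [0.18, 0.02, 0.25, -0.20, 0.11]

# A value near 1 would indicate a strong, repeatable site term; the low
# values reported in the abstract (0.16-0.26) suggest the opposite.
r = pearson(res_eq1, res_eq2)
```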
Article
Velocity data are compiled from measurements of nearly 1,000,000 feet of section in 500 well surveys in the United States and Canada. Average velocities for shale and sandstone sections are arranged by depth and geologic age. Deviations from the mean values are attributed to lithologic variations. The variations of velocity with depth and time are studied independently in order to develop a quantitative relationship. It is concluded that the velocity for an average shale and sand section is given by the equation V = 125.3 (ZT)^(1/6), where V is velocity in feet per second, Z is depth in feet, and T is age in years. Velocities in limestone show less definite evidence of increase with age and depth.
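Faust's empirical relation is simple enough to evaluate directly. A small sketch, using the units stated in the abstract (the example depth and age below are hypothetical, chosen only for illustration):

```python
def faust_velocity(depth_ft, age_yr):
    """Average P-wave velocity (ft/s) for a shale/sand section from
    Faust's empirical relation V = 125.3 * (Z * T)**(1/6),
    with Z the depth in feet and T the age in years."""
    return 125.3 * (depth_ft * age_yr) ** (1.0 / 6.0)

# Hypothetical example: a section 10,000 ft deep and 100 million years old.
# Z * T = 1e12, and (1e12)**(1/6) = 100, so V = 12,530 ft/s.
v = faust_velocity(10_000.0, 100e6)
```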
Article
Travel-time data obtained from both natural and artificial events occurring in southern California indicate a major, lateral crustal transition within the Transverse Range Province. The eastern crust is very similar to the adjacent Mojave region, where a crustal velocity of 6.2 km/sec is typically observed. The western ranges are dominated by an extensive 6.7 km/sec layer. Pn velocity beneath the western Mojave, Transverse Ranges, and northern Peninsular Ranges is 7.8 km/sec. The crustal thickness of these provinces is 30 to 35 km. The Transverse Ranges do not have a distinct crustal root. Unlike other provinces within southern California, the Transverse Ranges are underlain at a depth of 40 km by a refractor with a P-velocity of 8.3 km/sec. P-delays from a vertically incident, well-recorded teleseism suggest that this velocity anomaly extends to a depth of 100 km. These data indicate that this high-velocity, ridge-like structure is coincident with much of the areal extent of the geomorphic Transverse Ranges and is not offset by the San Andreas fault. Four hypotheses are advanced to explain the continuity of this feature across the plate boundary: (1) dynamic phase change; (2) a coincidental alignment of crust or mantle anomalies; (3) the lithosphere is restricted to the crust; (4) the plate boundary at depth is displaced from the San Andreas fault at the surface. Within the context of the last model, we suggest the plate boundary at depth is at the eastern end of the velocity anomaly, in the vicinity of the active Helendale-Lenwood-Camprock faults. The regionally observed 7.8 km/sec layer is suggested as a zone of decoupling necessary to accommodate the horizontal shear that must result from the divergence of the crust and upper mantle plate boundaries. The geomorphic Transverse Ranges are viewed as crustal buckling caused by the enhanced coupling between the crust and upper mantle which is suggested by the locally thin, 7.8 km/sec layer.
Article
Long-period (1–10 sec) ground motions recorded in the Los Angeles basin region during the Northridge earthquake show complex waveforms, extended durations and multiple sets of arrivals which cannot be attributed solely to source processes or wave propagation within a plane-layered medium. These features suggest a strong interaction between the propagating seismic wave field and the laterally varying subsurface geologic structure of this region. Recorded motions at a hard rock site in the Santa Monica Mountains (scrs) are smaller by nearly a factor of three in peak velocity compared to recordings at more distant sites in the adjacent, northwest portion of the Los Angeles basin. In addition, although the rock site recording has a relatively simple wave shape, the basin site recordings are dominated by late arriving, large amplitude pulses of energy. We interpret these arrivals to be surface waves, which are generated by body waves interacting with the thickening margin of the basin. A preliminary modeling analysis of these data indicates that a combination of both large-scale (deep basin) and small-scale (shallow micro-basin) structures are needed to explain the observed responses.
Article
Three cross sections are balanced and retrodeformed to the top of the Pleistocene Saugus Formation and to a horizon close to the base of the Jaramillo subchron to yield crustal shortening and shortening rates for the western Transverse Ranges of California. The cross sections compare the shortening that occurs along a transfer zone in which displacement is transferred eastward from a surface reverse fault (Red Mountain fault) to a blind thrust and from that to a combination of both a surface reverse fault (San Cayetano fault) and the blind thrust. Using an age of 250 +/- 50 ka for the top of the Saugus Formation based on amino-acid racemization of fossil mollusks, crustal shortening rates appear to have abruptly accelerated through time from 3(+4/-3) to 5 +/- 4 mm/yr between 250 and 975 ka to 20 +/- 6 to 28 +/- 1 mm/yr since 250 +/- 50 ka. Using an age of 500 ka for the top of the Saugus Formation based on paleomagnetic stratigraphy, there is little increase in deformation rates through time, ranging from 5 +/- 5 to 8 +/- 6 mm/yr between 975-500 ka and from 10 +/- 3 to 14 +/- 1 mm/yr since 500 ka. Crustal convergence rates determined by Global Positioning System surveys, taken over 4.6 years, indicate a shortening rate of 7-10 mm/yr across the basin. This is consistent with the slower rates of deformation calculated using an age of 500 ka for the top of the Saugus Formation.
Article
A 3D P wave velocity model of the southern California crust is constructed by combining existing 1D models, each describing a region defined by surface geology. The model is calibrated with travel times from three explosions. The technique developed by Roecker (1981) is used to invert, starting with the forward model, about 21,000 P wave arrivals from earthquakes for hypocenters and block slownesses. The variance of these P wave travel time residuals decreases 47 percent during the inversion. Many of the blocks representing the upper crust and midcrust are well sampled and well resolved. The resulting model is useful both for locating earthquakes and for comparing the geologies of the different regions.
Article
Using strong-motion data recorded in the Los Angeles region from the 1992 (Mw 7.3) Landers earthquake, we have tested the accuracy of existing three-dimensional (3D) velocity models on the simulation of long-period (≧2 sec) ground motions in the Los Angeles basin and surrounding San Fernando and San Gabriel Valleys. First, the overall pattern and degree of long-period excitation of the basins were identified in the observations. Within the Los Angeles basin, the recorded amplitudes are about three to four times larger than at sites outside the basins; amplitudes within the San Fernando and San Gabriel Valleys are nearly a factor of 3 greater than surrounding bedrock sites. Then, using a 3D finite-difference numerical modeling approach, we analyzed how variations in 3D earth structure affect simulated waveforms, amplitudes, and the fit to the observed patterns of amplification. Significant differences exist in the 3D velocity models of southern California that we tested (Magistrale et al., 1996; Graves, 1996a; Hauksson and Haase, 1997). Major differences in the models include the velocity of the assumed background models; the depth of the Los Angeles basin; and the depth, location, and geometry of smaller basins. The largest disparities in the response of the models are seen for the San Fernando Valley and the deepest portion of the Los Angeles basin. These arise in large part from variations in the structure of the basins, particularly the effective depth extent, which is mainly due to alternative assumptions about the nature of the basin sediment fill. The general ground-motion characteristics are matched by the 3D model simulations, validating the use of 3D modeling with geologically based velocity-structure models. However, significant shortcomings exist in the overall patterns of amplification and the duration of the long-period response. 
The successes and limitations of the models in reproducing the recorded ground motions, as discussed, provide the basis and direction for necessary improvements to earth structure models, whether geologically or tomographically derived. The differences in the response of the earth models tested also translate into variable success in modeling the data and add uncertainty to estimates of the basin response for a given input "scenario" earthquake source model.
Article
New three-dimensional (3-D) VP and VP/VS models are determined for southern California using P and S-P travel times from local earthquakes and controlled sources. These models confirm existing tectonic interpretations and provide new insights into the configuration of geological structures at the Pacific-North America plate boundary. The models extend from the U.S.-Mexico border in the south to the southernmost Coast Ranges and Sierra Nevada in the north and have a 15-km horizontal grid spacing and an average vertical grid spacing of 4 km, down to 22 km depth. The heterogeneity of the crustal structure as imaged by VP and VP/VS models is larger within the Pacific plate than the North American plate. Similarly, the relocated seismicity deepens and shows more complex 3-D distribution in areas of the Pacific plate exhibiting compressional tectonics. The models reflect mapped changes in the lithology across major geological terranes such as the Mojave Desert, the Peninsular Ranges, and the Transverse Ranges. The interface between the shallow Moho of the Continental Borderland and the deep Moho of onshore California forms a broad zone to the north beneath the western Transverse Ranges, Ventura basin, and the Los Angeles basin and a narrow zone to the south, along the Peninsular Ranges. The near-surface increase in velocity, from the surface to up to 8 km depth, is rapid and has a logarithmic shape for stable blocks and mountain ranges but is slow with a linear shape for sedimentary basins. At midcrustal depths a rapid increase in VP is imaged beneath the sediments of the large sedimentary basins, while beneath the adjacent mountain ranges the increase is small or absent.
Article
The San Fernando Valley lies above the north-dipping 1971 Sylmar and south-dipping 1994 Northridge earthquake faults. To understand the tectonic setting of these two earthquakes, we mapped subsurface geology of the San Fernando Valley down to a depth of ~3 km, using industry oil-well and seismic data. The 1994 Northridge earthquake did not rupture the surface, and the south-dipping aftershock zone terminated against the north-dipping 1971 aftershock zone at a depth of 5-8 km. However, the blind Northridge fault has a near-surface geologic expression; fault-propagation folding related to the Northridge fault has preserved a thick forelimb sequence of Plio-Pleistocene Saugus Formation in the Sylmar basin and Merrick syncline, which are located on the hanging wall side of north-dipping reverse faults. The north-dipping Mission Hills, Verdugo, and Northridge Hills reverse faults are interpreted to be potential seismic sources because fault-propagation folds above these faults have tectonic geomorphic expression. These north-dipping reverse faults were initiated during the deposition of the Saugus Formation between 2.3 and 0.5 Ma and have minimum dip-slip rates of 0.35 to 1.1 mm/yr based on the oldest possible age of the initiation of faulting. The Northridge Hills and Mission Hills faults are interpreted to merge at depth and are located at the updip extension of the 1971 aftershock zone, even though these faults did not rupture during the 1971 earthquake. Surface breaks appeared north of these faults mostly along north-dipping bedding planes and are interpreted as secondary features related to flexural-slip folding rather than a direct extension of the 1971 seismogenic fault. Surface and subsurface geology, together with seismological data of the 1971 and 1994 earthquakes, suggests that the north- and south-dipping deformation zones in the San Fernando Valley are divided into multiple segments separated by northeast-trending structural discontinuities.
Article
The physiographic basin is underlain by a deep structural depression; the buried basement surface has relief of as much as 4.5 miles over a distance of 8 miles. Parts of this depression have been the sites of discontinuous deposition since late Cretaceous time and of continuous subsidence and deposition since middle Miocene time. In middle Miocene time this depositional basin extended well beyond the margins of the present-day physiographic basin. The term "Los Angeles basin" refers herein to the larger area. The evolution of the basin is interpreted in five major phases, each of which is represented by a distinctive rock assemblage: the predepositional phase and basement rocks, the prebasin phase of deposition and upper Cretaceous to lower Miocene rocks, the basin-inception phase and middle Miocene rocks, the principal phase of subsidence and deposition and upper Miocene to lower Pleistocene rocks, and the basin-disruption phase and upper Pleistocene to Recent deposits. The Los Angeles basin is California's most prolific oil-producing district in proportion to its size: at the end of 1961, its cumulative production (5.035 billion bbl) was nearly half of California's total.
Conference Paper
Oil in the Los Angeles basin has accumulated principally in anticlinal folds associated with strong Neogene faulting and compression. The still-continuing Pasadenan orogeny, involving northwest-trending right-lateral wrench faults related to the San Andreas system, has been the dominant influence. But that deformation has been overprinted on structures formed during two discrete earlier orogenic episodes: (1) the major widespread block faulting of the middle Miocene and (2) a less-obvious phase of compressive deformation that occurred during latest Miocene and earliest Pliocene time, involved left-lateral faulting (and related folding) on more westerly trends, and was especially significant along the northern margin of the basin. The resulting present-day structural pattern, therefore, is not a simple pattern of classical right-lateral wrench-fault deformation. It is a pattern which reflects the radical Neogene evolution of the regional stress field, and the interplay of wrench faulting with the hinge lines and strong sedimentary wedge belts formed by earlier vertical movements.
Article
Simulation of 2 minutes of long-period ground motion in the Los Angeles area with the use of a three-dimensional finite-difference method on a parallel supercomputer provides an estimate of the seismic hazard from a magnitude 7.75 earthquake along the 170-kilometer section of the San Andreas fault between Tejon Pass and San Bernardino. Maximum ground velocities are predicted to occur near the fault (2.5 meters per second) and in the Los Angeles basin (1.4 meters per second) where large amplitude surface waves prolong shaking for more than 60 seconds. Simulated spectral amplitudes for some regions within the Los Angeles basin are up to 10 times larger than those at sites outside the basin at similar distances from the San Andreas fault.
Article
The unconsolidated sediments that blanket the ocean floor are of widely varying thickness but seismic observations indicate that 200 to 400 meters in the Pacific and one kilometer in the Atlantic are fairly typical values for deep water. At present direct observation of these sediments is limited to such samples as may be recovered by dredging or coring operations, for drilling has been carried out only in the shallow waters of the coastal shelves. Knowledge of the physical properties of the great bulk of the sediments deeper than the few tens of feet reached by coring equipment is thus necessarily derived from geophysical observations.
A master station (MS) method is presented in this paper to rapidly determine hypocenters in three-dimensional (3-D) heterogeneous velocities. An equal differential time (EDT) surface is defined as the collection of all spatial points that satisfy the time difference between the two arrivals, which can be two picks at two stations or two different phase picks at one station. The EDT surface is independent of the origin time and will contain the hypocenter. For an event with J arrivals, there are (J-1) independent EDT surfaces. The MS method determines the hypocenter that satisfies the two types of constraints: to be traversed by most EDT surfaces and to yield minimal travel time residual statistics. The statistics include both the residual variance and the amplitude of the origin time error. The combined use of the EDT surfaces and residual statistics allows for a unique determination of the hypocenter and origin time using different types of phase arrivals. An illustration of the MS method is given for 27 small events that occurred in southern California, using a P wave velocity model modified from that of Magistrale et al. (1992). The average misfit between the bulletin hypocenters and the new solutions is 3.8 km. If a 3-D velocity model is accurate, the MS method can be a viable means of hypocenter determination.
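The equal-differential-time idea can be sketched with a brute-force grid search. For illustration, the sketch below assumes straight rays in a homogeneous medium (a constant velocity v, in km/s) instead of the 3-D velocity model used in the paper; because only pick differences are matched, the unknown origin time cancels, which is the key property of the EDT formulation:

```python
import itertools
import math

def travel_time(src, sta, v=6.0):
    """Straight-ray travel time (s) between two (x, y, z) points in km,
    for an assumed homogeneous medium with velocity v km/s."""
    return math.dist(src, sta) / v

def edt_locate(stations, picks, grid, v=6.0):
    """Grid-search hypocenter minimizing the equal-differential-time misfit.

    For each station pair (i, j), the observed arrival-time difference
    picks[i] - picks[j] is compared with the predicted travel-time
    difference; the origin time cancels out of every pair, so it is
    not an unknown in the search.
    """
    best, best_misfit = None, float("inf")
    for x in grid:
        misfit = 0.0
        for i, j in itertools.combinations(range(len(stations)), 2):
            pred = travel_time(x, stations[i], v) - travel_time(x, stations[j], v)
            obs = picks[i] - picks[j]
            misfit += (obs - pred) ** 2
        if misfit < best_misfit:
            best, best_misfit = x, misfit
    return best
```

With synthetic picks generated from a known source, the search recovers the true grid node regardless of the (discarded) origin time, since every pair difference removes it.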
Article
This is the third edition of this book, but it has been so extensively revised that it can almost be described as a new book rather than a revision. The book covers the whole range of applied geophysical techniques but has a strong emphasis on seismology. Here, properties of seismic waves, instruments, data acquisition on land and sea, data processing, and geological interpretation are all covered in separate chapters. Three chapters are devoted to gravity (principles and instruments; field measurements and reductions; interpretation) and three to magnetics (principles, surveying techniques, interpretation). All electrical methods are in one chapter. A particular change in this edition is the extensive coverage of the processing of seismic data and the use of elementary calculations to present the basic principles. - P.N. Chroston
Namson, J., and T. Davis (1992). Late Cenozoic thrust ramps of southern California, Final Report to the Southern California Earthquake Center for 1991 Contract, Davis & Namson Consulting Geologists, Valencia, California.
Silva, W., S. Li, R. Darragh, and N. Gregor (1999). Surface geology based strong motion amplification factors for the San Francisco Bay and Los Angeles areas, P.G. & E. PEER Task 5.B Final Report, Pacific Engineering and Analysis, El Cerrito, California, 109 pp.
Liu, P., and R. Archuleta (1999). Source parameter inversion using 3D Green's functions: Application to the 1994 Northridge, California, earthquake, EOS Trans. AGU. 80, p. F709.
Mooney, W. D., and G. A. McMechan (1982). Synthetic seismogram modeling for the laterally varying structure in the Imperial Valley, in The Imperial Valley, California, Earthquake of October 15, 1979, U.S. Geol. Surv. Profess. Paper 1254, 101-108.
Martin, G., Proceedings of the NCEER/SEAOC/BSSC Workshop on site response during earthquakes and seismic code provisions.