Questions related to Earthquake Seismology
We can calculate the dimensions of a fault plane for a particular magnitude earthquake using the Wells and Coppersmith (1994) relations. But how can we locate it on a map?
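For the dimension part, here is a minimal sketch of the Wells & Coppersmith (1994) "all slip types" regressions, with coefficients as commonly quoted (please verify against the paper before use). Locating the plane on a map additionally requires the epicenter plus an assumed strike and dip; the code below only gives the rupture dimensions:

```python
def wells_coppersmith_all(mw):
    """Subsurface rupture length and downdip width (km) from Mw, using the
    'all slip types' regressions of Wells & Coppersmith (1994).
    Coefficients are as commonly quoted -- verify against the paper."""
    rld = 10 ** (-2.44 + 0.59 * mw)   # subsurface rupture length, km
    rw = 10 ** (-1.01 + 0.32 * mw)    # downdip rupture width, km
    return rld, rw

length_km, width_km = wells_coppersmith_all(7.0)
print(round(length_km, 1), round(width_km, 1))  # roughly 49 km by 17 km for Mw 7
```

With these dimensions, the rectangle can be drawn on the map centered on (or anchored to) the epicenter along the assumed fault strike.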
I need the digitized data (Acceleration time history) of 1968 Tokachi-Oki Earthquake (Hachinohe harbor).
Here you can see the graph of this record (acceleration time history) that was used in this article:
And also an old report of this event is available in Port and Airport Research Institute of Japan website: [pari.go.jp] (Strong-Motion Earthquake Records on The 1968 Tokachi-Oki Earthquake And Its Aftershocks)
I wonder if someone can help me find this record. Thanks.
This topic is open to a wide range of researchers, here and outside RG, with open access to write and read related comments.
I encourage researchers with a deterministic view of earthquake nature to be proactive and use the resources of this space as much as possible, advancing our understanding of the phenomenon by challenging their forecast models through forecast tests.
For a forecast, please state the method, the data, and the forecast time window, together with the magnitude range and the probability of occurrence. If the location of the future event can be specified, please do so. This is a test, and also an opportunity to write about our successful forecasts and our correct models.
Also, please be as concise as possible.
In addition, I suggest making references to your own research or other sources, to keep transparency on sensitive questions such as author rights, originality, and other aspects. For ideas that are not yet published, I suggest publishing first and then commenting here with a reference to an official, publicly accessible source (article).
N.B. Forecasts based on statistical methods, and on the view that earthquakes are random in nature, are also welcome.
What is your opinion about the criterion recommended in seismic codes for determining the scaling period used to scale ground motion records?
As you know, this criterion is the period of the structure's dominant mode, i.e., the mode with the largest modal participating mass ratio (usually the first vibration mode). Hence, the period of the mode with the second-largest modal participating mass ratio is not considered in the scaling process. Consequently, although this criterion usually yields the largest value of the scaling period, it is not a logical one.
This is especially important when a Tuned Mass Damper (TMD) or a base-isolation system is used, since these change the modal properties of the structure.
I have used a new criterion based on the weighted mean value of the periods for structures equipped with TMDs.
Have you used any criteria other than the criterion mentioned in the seismic codes?
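As a purely hypothetical illustration (not a code provision, and not necessarily the exact formulation from my study), a weighted-mean scaling period could be computed with the modal participating mass ratios as weights:

```python
def weighted_scaling_period(periods, mass_ratios):
    """Hypothetical criterion: scaling period as the modal-mass-weighted
    mean of the dominant periods. This is an illustration, NOT a provision
    of any seismic code."""
    num = sum(t * g for t, g in zip(periods, mass_ratios))
    den = sum(mass_ratios)
    return num / den

# e.g. a TMD-equipped frame with two closely spaced dominant modes (made-up values)
T = [2.1, 1.8, 0.6]          # modal periods, s
gamma = [0.45, 0.35, 0.10]   # modal participating mass ratios
print(round(weighted_scaling_period(T, gamma), 3))  # 1.817
```

The point of the weighting is that a second mode carrying a comparable mass ratio (as with a TMD) pulls the scaling period away from the first-mode period alone.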
Actually, I have done it, but the result only shows the tsunami generated by the landslide. Somehow the earthquake just disappeared in the model.
The purpose of earthquake engineering is not to build strong, earthquake-resistant buildings that do not experience the slightest damage in rare and severe earthquakes. The cost of such structures, for the vast majority of users, would have no economic justification.
Instead, engineers focus on buildings that resist earthquakes' effects and do not collapse, even in severe external excitations. It is the most important goal of international standards in the seismic design of buildings.
Below I have mentioned some crucial points for reducing the seismic demand in reinforced concrete structures. If anything is missing from the list, feel free to append it:
1. Selecting a construction site with soil conditions suited to the seismic design
2. Avoiding unnecessary masses in the building
3. Using simple structural arrangements with minimal torsional effects
4. Avoiding sudden changes in strength and stiffness over the building height
5. Preventing the formation of a soft story
6. Providing sufficient lateral restraint to control drift, e.g., through shear walls
7. Preventing non-structural components from disturbing the lateral behavior of the structure
The summary can be accessed from the link below:
Please let me know whether the information gathered in the summary file is informative.
If the summary description is not clear enough, any suggestions are welcome.
The provisions of ASCE 7-10 state that the New Next Generation Attenuation Relationships (NNGAR) were used in the Probabilistic Seismic Hazard Analysis (PSHA) carried out to prepare the seismic hazard maps provided by the United States Geological Survey (USGS).
Now, I want to know: what are these next generation attenuation relationships, and how do they differ from other typical attenuation relationships such as Campbell, Douglas, Godrati, BJF, etc.?
The Risk-Adjusted Maximum Considered Earthquake (MCER) ground motion, which accounts for the structural collapse risk at each site in seismic hazard analyses, is used to prepare the seismic hazard maps provided by the United States Geological Survey 2010 (USGS 2010).
This collapse risk is incorporated into the MCER through risk coefficients applied to the Maximum Considered Earthquake Geometric Mean (MCEG) ground motion.
How are these coefficients calculated and applied to seismic hazard maps?
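As I understand ASCE 7-10 Section 21.2 (please verify against the standard), the application side is just a multiplication of the uniform-hazard ordinates by the mapped risk coefficients; the coefficients themselves are derived by iterating the site hazard curve against an assumed lognormal collapse fragility so that a uniform collapse risk (1% in 50 years) is achieved. A sketch of the application step only, with placeholder values:

```python
def mcer_ordinates(ss_uh, s1_uh, crs, cr1):
    """Probabilistic MCER per the ASCE 7-10 'Method 1' idea (Sec. 21.2.1.1):
    multiply the 2%-in-50-year uniform-hazard spectral ordinates by the
    mapped risk coefficients CRS (short period) and CR1 (1 s).
    All numeric inputs here are placeholders, not mapped values."""
    return crs * ss_uh, cr1 * s1_uh

ss_mcer, s1_mcer = mcer_ordinates(1.5, 0.6, 0.9, 0.95)
```

The derivation of CRS and CR1 themselves (the risk integral over the hazard curve and fragility) is documented in Luco et al. (2007) and the ASCE 7-10 commentary.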
I am doing my MSc thesis on base isolation of structures with variable curvature friction pendulum systems. I have downloaded the benchmark problem for base-isolated buildings and everything is fine. However, the file containing the nonlinear analysis algorithm of the building is in DLL format, and I cannot open or edit it for my new bearing type.
My MATLAB version is R2013b. I also tried some older versions, but that did not work.
Please let me know how I can solve this problem. I really appreciate your assistance.
I went through FEMA-356 and FEMA-440 for the target displacement and came across the formula, but it wasn't helpful since I'm not able to understand it completely and I wasn't able to find all of the coefficients. It would be great if I could get a solution for that.
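For reference, the FEMA-356 coefficient method (Eq. 3-15) is a simple product once the coefficients are known; the sketch below uses purely illustrative coefficient values (C0 through C3 must be taken from the FEMA-356 tables and equations for your specific structure):

```python
import math

def target_displacement(c0, c1, c2, c3, sa_g, te):
    """FEMA-356 Eq. 3-15: delta_t = C0*C1*C2*C3 * Sa * (Te/2pi)^2 * g.
    sa_g is spectral acceleration in units of g at the effective period Te (s);
    result is in metres. Coefficient values must come from FEMA-356 tables."""
    g = 9.81  # m/s^2
    return c0 * c1 * c2 * c3 * sa_g * g * (te / (2.0 * math.pi)) ** 2

# illustrative values only: C0=1.3, C1=C2=C3=1.0, Sa=0.8 g, Te=1.2 s
print(round(target_displacement(1.3, 1.0, 1.0, 1.0, 0.8, 1.2), 4))  # ~0.37 m
```

Roughly: C0 maps spectral to roof displacement, C1 corrects for inelastic vs. elastic displacement, C2 for hysteretic degradation, and C3 for P-delta effects.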
I need a small clarification on artificial and synthetic accelerograms. To my knowledge, synthetic accelerograms are derived from seismological parameters, while artificial accelerograms are generated to be compatible with a target spectrum. If my explanation is wrong, kindly correct me.
Another clarification on synthetic accelerograms: why do we multiply the source, path, and site terms when generating a synthetic accelerogram? To my knowledge, the source term is related to magnitude, the path term to hypocentral distance, and the site term to local geology. If my explanation is wrong, kindly provide detailed information.
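A sketch of why the terms multiply: in the stochastic (point-source) method the Fourier amplitude spectrum is modeled as a product of a source spectrum, a path attenuation term, and a site term, because each acts as a filter on the previous one in the frequency domain. The constants below are illustrative, not calibrated (see e.g. Boore's SMSIM papers for proper parameterizations):

```python
import numpy as np

def fourier_amplitude_spectrum(f, m0, fc, r_km, q=600.0, beta=3.5, kappa=0.03):
    """Point-source stochastic model sketch: FAS = source * path * site.
    m0 = seismic moment, fc = corner frequency (Hz), r_km = distance.
    Q, beta, kappa and the lumped constant c are ASSUMED illustrative values."""
    c = 1.0  # lumps radiation pattern, free-surface, density terms (assumed)
    source = c * m0 * f**2 / (1.0 + (f / fc) ** 2)        # Brune omega-square source
    path = np.exp(-np.pi * f * r_km / (q * beta)) / r_km  # anelastic + geometric decay
    site = np.exp(-np.pi * kappa * f)                     # near-surface kappa decay
    return source * path * site
```

Multiplication in the frequency domain corresponds to convolution in the time domain: the path and site act as linear filters applied to the radiated source signal.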
I am using ocean bottom seismometer (OBS) data for ambient-noise H/V with a target depth of nearly 5 km. To account for the water-layer effect, it is important to understand the theoretical basis of the P-wave contribution to ambient-noise H/V peaks. The frequency range I use is 0.03 to 2 Hz.
I was wondering if someone could help me with the following question:
I am trying to perform an IDA and have downloaded the 22 far-field records recommended by FEMA P695. The problem is that the PGA of the downloaded records differs from the FEMA table. FEMA states: "After this work was completed, version 7.3 of the database was released, and some ground motion parameters differ slightly in the new version, such as PGV. Even so, the variations are small and have no tangible impact on the FEMA P695 Methodology".
So how can I be sure I am using the correct records? Do the downloaded records need any filtering or baseline correction to match the records used by FEMA?
I'm currently working on pushover analysis of multi-story structures in ETABS 17.1. The calculation took more than 96 hours, and that was for a building of 20 floors. I have many models to run, some with as many as 80 stories. Can someone suggest ways to reduce the calculation time?
Geotechnical earthquake engineering can be defined as that subspecialty within the field of geotechnical engineering that deals with the design and construction of projects in order to resist the effects of earthquakes.
“While many cases of soil effects had been observed and reported for many years, it was not until a series of catastrophic failures, involving landslides at Anchorage, Valdez and Seward in the 1964 Alaska earthquake, and extensive liquefaction in Niigata, Japan, during the earthquake in 1964, caused geotechnical engineers to become far more aware of, and eventually engaged in understanding, these phenomena.” (I. M. Idriss, 2002)
Geotechnical Earthquake Engineering deals with the following geotechnical engineering subjects and problems:
- Site specific seismic hazard assessment
- Local site effects
- Design ground motions
- Design spectra and response spectra
- Seismic slope stability
- Seismic design of retaining walls
- Seismic design of dams
- Soil improvement to mitigate seismic hazards
- Seismic risk analysis
- Foundation performance
Geotechnical Earthquake Engineering and Soil Dynamics, as well as their interface with Engineering Seismology, Geophysics and Seismology, have all made remarkable progress over the past 20 years, mainly due to the development of instrumented large-scale experimental facilities, to the increase in the quantity and quality of recorded earthquake data, to the numerous well-documented case studies from recent strong earthquakes, as well as to enhanced computer capabilities. One of the major factors contributing to this progress is the increasing social need for a safe urban environment, large infrastructures and essential facilities. Researchers in the fields of geotechnical engineering, geology, and seismology have all contributed to the developments in the area of earthquake geotechnical engineering, seismology and soil dynamics.
Large earthquakes are infrequent and unrepeatable, but they can be devastating. All of these factors make it difficult to obtain the required data to study their effects by post-earthquake field investigations. Instrumentation of full-scale structures is expensive to maintain over the long periods of time that may elapse between major temblors, and the instrumentation may not be placed in the most scientifically useful locations. Even if engineers are lucky enough to obtain timely recordings of data from real failures, there is no guarantee that the instrumentation is providing repeatable data. In addition, scientifically educational failures from real earthquakes come at the expense of the safety of the public. Understandably, after a real earthquake, most of the interesting data is rapidly cleared away before engineers have an opportunity to adequately study the failure modes.
Centrifuge modeling is a valuable tool for studying the effects of ground shaking on critical structures without risking the safety of the public. The efficacy of alternative designs or seismic retrofitting techniques can be compared in a repeatable scientific series of tests.
Geotechnical centrifuge modeling is a technique for testing physical scale models of geotechnical engineering systems such as natural and man-made slopes and earth retaining structures and building or bridge foundations.
The scale model is typically constructed in the laboratory and then loaded onto the end of the centrifuge, which is typically between 0.2 and 10 metres (0.7 and 32.8 ft) in radius. The purpose of spinning the models on the centrifuge is to increase the g-forces on the model so that stresses in the model are equal to stresses in the prototype. For example, the stress beneath a 0.1-metre-deep (0.3 ft) layer of model soil spun at a centrifugal acceleration of 50 g produces stresses equivalent to those beneath a 5-metre-deep (16 ft) prototype layer of soil in earth's gravity.
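The stress-equivalence scaling in the example above can be written out explicitly (the soil density below is an assumed value; use your soil's actual density):

```python
def prototype_stress_equivalent(model_depth_m, g_level, rho=1800.0, g=9.81):
    """Vertical stress under a soil layer in a centrifuge model spun at N*g,
    and the prototype depth it represents under standard 1:N length scaling.
    rho (kg/m^3) is an ASSUMED density -- adjust for the soil being modeled."""
    sigma_v = rho * g_level * g * model_depth_m  # Pa, in flight at N*g
    prototype_depth = model_depth_m * g_level    # m, equivalent prototype depth
    return sigma_v, prototype_depth

sigma, depth = prototype_stress_equivalent(0.1, 50)
print(round(sigma), depth)  # ~88 kPa, equivalent to 5.0 m of prototype soil
```

This is exactly the example in the text: a 0.1 m model layer at 50 g carries the same vertical stress as a 5 m prototype layer at 1 g.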
Is there any relation between the shape of the elliptical earth and the prediction of the epicenter of earthquakes?
There are many causes of earthquakes. Tectonic plates and geodynamic conditions control earthquake occurrence in active regions. Thus, monitoring subsurface conditions clearly helps us predict an earthquake epicenter. It is well known that energy is released in weak regions, fractured areas, or the parts of the earth that break up most easily.
A professor remarked that, a couple of thousand kilometers from the impact, the wavelength of a Rayleigh wave would be in the 200-meter range. Can anyone verify this and perhaps discuss their size when they are initially formed?
Let me briefly go through the problem I am facing.
Currently, I have ground acceleration data obtained from a seismic accelerograph system placed in the basement of the building; the plot is shown below. According to the plot, it shows a random waveform up to a certain time and then starts decaying (damping occurs). However, it picks up another, sinusoidal waveform (as shown in the figure) after 300 s. This looks very unusual to me. I suspect the sinusoidal part is a building response, but I cannot decide whether my assumption is valid.
So, my questions are:
- Is there anything (books/journals/published or unpublished theses/lecture notes) that discusses the limitations on the time window we should use when plotting ground motion data?
- Are there any specific guidelines or rules of thumb to determine whether a certain waveform comes from the earthquake motion or is a building response? Normally, I consider the random waveform an "earthquake response" and a sinusoidal waveform a "building response". Is that the correct way, or is there another way to look at it?
- My confusion arises because I see a portion of sinusoidal wave before the damping occurs, shown in the orange box in the figure. So, is it acceptable to state that the presence of the sinusoidal wave along with the random wave is due to the sensors recording both the earthquake and the building response at the same time?
- If no, how can it be justified? If yes, how do I correct this problem?
Thank you so much.
When you look at a waveform on a seismograph you will see many oscillations, so identifying the waveform of an earthquake is a signal-processing problem. Doing this job automatically is even more interesting. So how can we do that?
I am looking for examples of how to start local earthquake tomography using neural-network nonlinear optimization. I have found some previous works on the topic, but I am not clear on how to apply the methodology to this problem. How do we define the parameters? How do we formulate the objective function for the tomography data? I am mainly interested in local-earthquake P- and S-wave travel-time data, inverted for a seismic velocity model. Is anyone working on this topic? Any help regarding my problem would be appreciated.
When we face a large database of continuous waveforms, we have to use suitable algorithms to extract events automatically.
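The classic starting point for automatic event extraction is an STA/LTA trigger; below is a minimal NumPy sketch (production picking would normally use a tested implementation such as ObsPy's classic_sta_lta or recursive_sta_lta):

```python
import numpy as np

def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
    """Classic STA/LTA characteristic function on the squared trace.
    Returns the ratio of short-term to long-term average energy, with the
    two windows aligned so both end at the same sample. Trigger an event
    where the ratio exceeds a threshold (commonly ~3-5)."""
    nsta = int(sta_win * fs)
    nlta = int(lta_win * fs)
    e = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term average energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term average energy
    n = min(len(sta), len(lta))
    return sta[-n:] / np.maximum(lta[:n], 1e-20)
```

On quiet noise the ratio hovers near 1; at an impulsive arrival the short window reacts much faster than the long one and the ratio spikes, marking a candidate event.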
I'm trying to generate synthetic seismograms for an observed seismic event.
I compared the amplitudes in the time domain and applied the FFT to both the observed and synthetic waveforms.
Although all the signal energy is below 0.3 Hz, I am asking about the tolerance range in the frequency domain:
how close do the spectra need to be in the FFT to say that my two signals are similar?
Thank you in advance
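There is no universal tolerance; waveform goodness-of-fit schemes (e.g. Anderson, 2004) score several measures rather than a single number. One simple choice, sketched below, is a normalized amplitude-spectrum misfit restricted to the band of interest (the 0.3 Hz limit and the acceptance level are for you to choose):

```python
import numpy as np

def spectral_misfit(obs, syn, dt, fmax=0.3):
    """Normalized amplitude-spectrum misfit below fmax.
    0 = identical amplitude spectra in the band; larger = worse fit.
    This is one possible measure, not a standard acceptance criterion."""
    n = len(obs)
    f = np.fft.rfftfreq(n, dt)
    a_obs = np.abs(np.fft.rfft(obs))
    a_syn = np.abs(np.fft.rfft(syn))
    band = (f > 0) & (f <= fmax)
    return np.sum(np.abs(a_obs[band] - a_syn[band])) / np.sum(a_obs[band])

# identical signals give zero misfit
t = np.arange(0, 100, 0.1)
s = np.sin(2 * np.pi * 0.1 * t)
print(spectral_misfit(s, s, 0.1))  # 0.0
```

Note this compares amplitude only; phase misfit (e.g. cross-correlation or coherence in the same band) should be checked separately before declaring two signals similar.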
About the HVSR method, I am interested to know:
1) The depth coverage for depicting regional structure
2) The window length, frequency range, etc. for such works.
3) Should the data be broadband, or would short-period data do?
4) I also wish to know whether it is really possible using data from surface seismometers.
Is there a GMT script for plotting ternary diagram of Global CMT psmeca input (https://www.globalcmt.org/CMTsearch.html)? Like the one shown in the first figure.
I've been using FMC (https://josealvarezgomez.wordpress.com/2014/04/22/fmc-a-python-program-to-manage-classify-and-plot-focal-mechanism-data/), but I can't customize it so that it also shows the depth of each event.
Dear all, please kindly let me know where I can get information about earthquakes that occurred in the Kabul region, Afghanistan, in recent years.
Thanks in advance.
Is it possible to infer the background stress tensor from the moment tensors of tensile earthquakes, which have a significant non double-couple part?
Can I apply techniques used for stress inversion of pure double-couple moment tensors to the double-couple part of my moment tensors? (i.e. using methods which assume the slip vector is within the fault plane, for example Michael (1984) method and variants of.)
If none of the above: what/how can I learn about regional stress conditions from such data?
I would like to know if "2008 NYC DOT seismic design guidelines for bridges considering local site conditions" is implemented in practice or not. If anyone has some information regarding this it will be really helpful.
My primary goal is to determine Rayleigh-wave dispersion from an array of seismometers over several hundreds of meters, to resolve shear-wave velocity layers beneath the ocean bottom.
I am a beginner with programming.
I have acceleration data from an earthquake in the N-S direction that I wish to impose on a structure. I have selected the domains and applied a boundary load. But what function should I use to define the movement with the data I have (acceleration and time)?
I'm going to perform a seismic analysis in LUSAS, an FEM software package. I downloaded ground motion data (accelerograms) from the CESM database in the form of a text file. I have no clue how to move forward; I just know that I need two columns in a spreadsheet, one for time and the other for acceleration.
Any help would be appreciated!
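Assuming a simple layout (a few header lines, then one acceleration sample per line at a constant sampling interval; check the actual CESM header for the dt, units, and number of header lines), the conversion to two columns can be sketched as:

```python
import numpy as np

def read_accelerogram(path, dt, skip_header=4):
    """Read a plain-text accelerogram (ASSUMED layout: 'skip_header' header
    lines, then one value per line at constant dt) and return a 2-column
    array: time [s], acceleration [file units]."""
    acc = np.loadtxt(path, skiprows=skip_header).ravel()
    t = np.arange(len(acc)) * dt
    return np.column_stack([t, acc])

# Then export for the spreadsheet, e.g.:
# np.savetxt("record.csv", read_accelerogram("record.txt", 0.01),
#            delimiter=",", header="time_s,acc", comments="")
```

If the file instead stores several values per line, `.ravel()` already flattens them in reading order; only `dt` and the header count need to match your file.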
I want to understand how the dominant frequency changes from one site to another and what controls this change. Does the thickness of the sediments affect the frequency, and if so, what is the relation between them?
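Sediment thickness does matter: for a single soft layer over stiff bedrock, the classic quarter-wavelength estimate is f0 = Vs/(4H), so thicker or softer sediments lower the dominant frequency. A tiny illustration (Vs and H values are made up):

```python
def fundamental_frequency(vs_mps, thickness_m):
    """Resonant frequency of a soft layer over stiff bedrock: f0 = Vs/(4H).
    A first-order estimate only; impedance contrast and layering also matter."""
    return vs_mps / (4.0 * thickness_m)

print(fundamental_frequency(200.0, 50.0))   # 1.0 Hz
print(fundamental_frequency(200.0, 100.0))  # 0.5 Hz: doubling H halves f0
```

In real profiles the impedance contrast with bedrock and any velocity gradient in the sediments shift this estimate, but the inverse dependence on thickness is the controlling trend.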
The design response spectrum provides a general procedure for estimating the expected dynamic load on a structure, expressed as a function of natural period. Thus, knowing the period of the structure, the design load can be calculated. It is well known that deterministic (DSHA) and probabilistic (PSHA) seismic hazard analyses predict peak ground acceleration and ground motions for a specific site. As per the NEHRP guidelines, the design response spectrum is developed within the PSHA framework. The 2% or 10%-in-50-years hazard level can be used for developing design response spectra, which corresponds to an MCE-level condition.
The accuracy in determining PSA is very important for calculating the final shear load. Could you explain how to estimate this value for a given site?
How does one calculate the spectral acceleration (design acceleration) for each site class?
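For the ASCE 7/NEHRP-style spectrum, once the site-class-adjusted design values SDS and SD1 are known (SDS = (2/3)·Fa·Ss, SD1 = (2/3)·Fv·S1, with Fa and Fv from the site-class tables), the spectral ordinate at any period follows the standard piecewise shape. A sketch (verify the branch limits and TL against your code edition):

```python
def design_spectrum(t, sds, sd1, tl=8.0):
    """ASCE 7-style design response spectrum ordinate (in g) at period t (s).
    sds, sd1 are the site-class-adjusted design values; tl (long-period
    transition) is a mapped value -- 8.0 s here is just a placeholder."""
    t0 = 0.2 * sd1 / sds
    ts = sd1 / sds
    if t < t0:
        return sds * (0.4 + 0.6 * t / t0)  # rising branch from 0.4*SDS
    if t <= ts:
        return sds                          # constant-acceleration plateau
    if t <= tl:
        return sd1 / t                      # constant-velocity branch
    return sd1 * tl / t ** 2                # long-period branch

print(design_spectrum(0.0, 1.0, 0.6))  # 0.4 (plateau start at 0.4*SDS)
print(design_spectrum(0.3, 1.0, 0.6))  # 1.0 (constant-acceleration branch)
print(design_spectrum(2.0, 1.0, 0.6))  # 0.3 (SD1/T branch)
```

The site class enters only through Fa and Fv: softer sites raise SDS and SD1 (and lengthen Ts), which is how the same mapped Ss and S1 produce different design accelerations per site class.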
As you know, Mw is related to the energy released by a specific earthquake, so why do the recorded accelerations vary, for the same distance and Mw, between different seismotectonic regions?
It is very important to study the acceleration time histories for all seismotectonic units worldwide, to establish new types of ground motion attenuation relations and to ground-truth the data.
I would like to know the types of filters and any other guidelines for collecting and processing microtremor data. I would appreciate it if someone could help me.
In statistical and probability analyses we are asked to compute error and uncertainty curves and values.
Do you think there are absolute, fixed errors or uncertainties? Have we reached the point where we can say what error exists in our answers?
As we know, there are many software packages for 1D site response analysis, namely SHAKE, DEEPSOIL, etc.
Now, I need a software for 3D site response analysis. Is there any commercial software for 3D analysis of basin response to ground motion?
Can we use ANSYS or ABAQUS or PLAXIS for 3D analysis of basin response to long-period ground motion?
It is well known that SAR interferometry is based on Synthetic Aperture Radar (SAR) technology. How can SAR detect deformation in three dimensions after an earthquake happens? Is it possible to measure the slip rate and fault parameters with SAR?
It is clear that both systems preserve the general shape of the signal. But are there differences in how the same signal is saved in the two systems?
Digital systems save it in binary mode, so what are the effects of the sampling rate and the digital recording system on how the signal is saved?
Does anyone know of an available program to calculate synthetic receiver functions?
I tried to use the MATLAB toolbox FuncLab, written by Kevin C. Eagar, but I think it only provides functions for RF analysis...
Can someone help me, please?
Thanks in advance,
I found in a preliminary study that there is a relationship between focal depth and slip in a local study of the Fiji Islands region.
I want to study the dynamic behavior of some structures that are sensitive to vertical seismic loads. All I have found are the lateral ground motions; I could not find any time histories of the vertical components of earthquakes. Any help will be appreciated.
Strong-motion data for an earthquake give us the acceleration time history at the recording station. How do we calculate the ground displacement from these records? I have tried a few numerical integration techniques but am not satisfied with the values.
For the Poisson model the addition of fault sources to the smoothed seismicity raises the hazard by 50% at locations where the smoothed seismicity contributes the highest hazard and up to 100% at locations where the hazard from smoothed seismicity is low. For the strongest aperiodicity parameter (smallest α), the hazard may further increase 60%–80% or more or may decrease by as much as 20% depending on the recency of the last event on the fault that dominates the hazard at a given site.
What is the effect of a time-dependent source model using the Brownian passage time recurrence model on the probabilistic seismic hazard?
In Time Dependent Model, the definition of foreshock, mainshock and aftershock is not necessary. In this model, every event is potentially triggered by all the previous events and every event can trigger subsequent events according to their relative time-space distance. What do you say about Stochastic models of earthquake clustering?
What are stochastic models of earthquake clustering?
When we study the seismicity of any region we need to build a reliable catalog.
Could you please explain these optimal parameters of earthquake clustering?
How do we build on these clustered seismic sources?
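One widely used starting point for building a declustered catalog is the Gardner-Knopoff (1974) space-time window method; the coefficients below are those commonly quoted in implementations such as ZMAP, so please verify them against your reference before production use:

```python
def gardner_knopoff_window(m):
    """Gardner & Knopoff (1974) aftershock windows, as commonly implemented
    (coefficients quoted from standard implementations -- verify before use).
    Events within (distance_km, time_days) of a larger shock are flagged
    as dependent events. Returns (distance_km, time_days)."""
    d_km = 10 ** (0.1238 * m + 0.983)
    if m >= 6.5:
        t_days = 10 ** (0.032 * m + 2.7389)
    else:
        t_days = 10 ** (0.5409 * m - 0.547)
    return d_km, t_days

d, t = gardner_knopoff_window(6.0)  # roughly 53 km and 500 days
```

Scanning the catalog from the largest event down and removing everything inside each window leaves an approximately Poissonian mainshock catalog, which is what PSHA source models usually assume.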
I am interested in estimating the strain released by particular earthquakes (say x earthquake) and the approximate time required to accommodate same amount of strain in that area.
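A rough back-of-the-envelope route is: convert the magnitude to scalar moment, then apply a Kostrov-type average over an assumed seismogenic volume; dividing the resulting strain by a tectonic strain rate then gives an order-of-magnitude reaccumulation time. All numeric values below (shear modulus, volume, magnitude) are assumptions for illustration:

```python
def seismic_moment(mw):
    """Scalar seismic moment in N*m from Mw (Hanks-Kanamori,
    with the IASPEI constant 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def kostrov_strain(total_moment, volume_m3, mu=3.0e10):
    """Kostrov-type average strain released in a crustal volume.
    mu = shear modulus (ASSUMED 30 GPa); volume = seismogenic box (m^3)."""
    return total_moment / (2.0 * mu * volume_m3)

m0 = seismic_moment(7.0)                         # ~4e19 N*m
eps = kostrov_strain(m0, 100e3 * 100e3 * 15e3)   # 100x100x15 km box (assumed)
```

For the time estimate: with, say, an assumed regional strain rate of 1e-7 per year, a released strain of a few times 1e-6 would take on the order of decades to reaccumulate; the answer scales directly with the volume and strain rate you assume.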
I want to investigate the linear as well as nonlinear responses to earthquake events; in this regard, are there any other available tools besides NERA and EERA?
There are different kinds of earthquake magnitude scales: (1) local magnitude (ML), commonly referred to as "Richter magnitude"; (2) surface-wave magnitude (Ms); (3) body-wave magnitude (Mb); and (4) moment magnitude (Mw). Scales 1-3 have limited range and applicability and do not satisfactorily measure the size of the largest earthquakes. The moment magnitude (Mw) scale, based on the concept of seismic moment, is uniformly applicable to all sizes of earthquakes but is more difficult to compute than the other types. All magnitude scales should yield approximately the same value for any given earthquake, but we always find some differences.
In reliability analysis of seismic structures, to determine the probability of failure we first need to define a limit state function (LSF). The LSF can be a function of structural responses such as inter-story drift, displacement, and acceleration. An LSF containing multiple responses leads to a multiple limit state function (MLSF). Which combination of structural responses is most appropriate for defining MLSFs? How can we select the value of the response capacity? Is there any suggestion in the codes?
The question is designed to trigger a scientific discussion about this important issue and to correct some misconceptions concerning the whole spectrum of the area's seismicity (instrumental, historical, paleo- and archaeo-seismicity).
As far as I know, in seismology we usually assume the dislocation on the fault is unidirectional, so we only see a positive pulse on the displacement record. However, I have seen quite a few traces with a large negative part. Does it mean the dislocation reverses its direction? Thanks!
Previous evaluations of regional liquefaction hazards identify several geologic and hydrologic factors that influence the susceptibility of a deposit to liquefaction, including (1) the age and depositional environment of the deposit; (2) the relative consolidation of sands and silts; and (3) the local depth to groundwater.
Can you outline these procedures for preparing a liquefaction hazard map, stating clearly any relationships you have developed at each stage?
What is the procedure for preparing a liquefaction susceptibility map?
I read that there is a direct relation between pressure variations and seismic noise: high pressure corresponds to an increase in noise, low pressure to a decrease.
Actually, I want to pick the coda window from the seismograms, but I do not know the origin time (OT). Using visual inspection, however, I can pick the P- and S-wave onsets.
I designed an 8-story rigid-frame building using the equivalent static seismic load as the lateral load.
When I then ran a linear time-history analysis as the lateral load (the equivalent static load no longer being applied), some of the members were overstressed.
Somebody told me that the equivalent static load is always larger than the THA results, and that it is impossible for my members to be overstressed.
Is it possible for the THA to produce larger member internal forces than the equivalent static analysis?
I'm pretty sure my procedure is based on ASCE 7-10.
I am looking for an accelerogram database to apply to my models, including graphs of spectral acceleration versus period. I can find the JPG files, but I can't download the detailed data behind those graphs. I've tried these websites as well.
I need help deconvolving simulated earthquake data (created from the response spectrum of IS 1893-2002 as the target spectrum).
I am applying the input motion at the surface and recording the time history at bedrock level, in both STRATA and DEEPSOIL.
1) I am not sure whether to specify the input motion as a WITHIN or an OUTCROP motion.
2) If I re-convolve the time history obtained at bedrock back to the surface level, the recorded surface time history does not match the target spectrum.
I am trying to fix an error that occurs during a pushover run on a masonry-infill model. After the initial run, hinges formed in the struts, but then the analysis suddenly stopped, and I am not able to understand why. Can somebody please help me? Your effort will be highly appreciated. I have attached two models herewith; please give me a solution.
Some parts of the study area have no seismic activity. In this area without earthquake activity, more than 200 km of two-dimensional seismic profiles have been acquired, and the seismic sections have been structurally interpreted. How can the focal mechanism solutions be combined with the structural interpretation of the 2-D reflection seismic sections to invert for the regional tectonic stress field, and to further explore the regional geodynamic background?
Old seismic codes provide seismic maps with peak ground acceleration (PGA) values having a 10% probability of exceedance in 50 years. The new codes provide the seismic coefficients Ss and S1. What is the difference between them, or what is the relation between them?
What could be happening when seismic can see the fault but gravity cannot? The seismic and gravity profiles have the same interval (10 m station spacing in gravity and 10 m geophone spacing in seismic). Any suggestions?
I would like to use strong-motion data for site effects and the estimation of the predominant frequency; could anyone suggest a methodology?
I am very thankful to you.
If a source is distant and shallow, then the wave incident on a basin will consist mainly of surface waves. When surface waves reach the basin, part of the incident wave energy is reflected back, but the rest impinges on the basin. This means the energy is decreased significantly, yet the wave still has a large amplitude.
What are the causes of larger amplitude of basin-transduced surface waves?
Dear RG members
I want to know, from seismic or earthquake data, the relative activity of these faults, whether thrusts or strike-slip faults.
With best regards,
Most great earthquakes leave in their wake stories of unusual animal behavior that preceded the main event, often by an interval long enough to be useful for warning purposes. Although much of this reported behavior has perhaps been classified as unusual only with the benefit of hindsight, a residue of convincing stories remains and it does seem possible that animals sometimes respond to certain geophysical changes associated with the approach of rock failure, that are not monitored by instruments. Anomalous animal behavior is difficult to interpret, because it may arise from various causes. But this is a defect shared with other short-term precursors of great earthquakes, so study of the subject as a possible part of an imminent danger warning scheme should not be neglected on that ground alone. Mankind`s past success in exploiting animal sensitivities, even before learning to understand them fully, encourages the hope that useful results will emerge.
I want to develop a soil-structure model in ANSYS APDL, and then develop the response spectrum using an earthquake as the dynamic loading.