Article

An automatic phase picker for local and teleseismic events

Authors: M. Baer and U. Kradolfer

Abstract

An automatic detection algorithm has been developed which is capable of timing the P-phase arrivals of both local and teleseismic earthquakes while rejecting noise bursts and transient events. For each signal trace, the envelope function is calculated and passed through a nonlinear amplifier. The resulting signal is then subjected to a statistical analysis to yield the arrival time, the first motion, and a measure of the reliability of the P-arrival pick. An incorporated dynamic threshold makes the algorithm very sensitive; thus, even weak signals are timed precisely. During an extended performance evaluation on a data set comprising 789 P phases of local events and 1857 P phases of teleseismic events picked by an analyst, the automatic picker selected 66 per cent of the local phases and 90 per cent of the teleseismic phases. The accuracy of the automatic picks was "ideal" (i.e., could not be improved by the analyst) for 60 per cent of the local events and 63 per cent of the teleseismic events.
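The scheme outlined in the abstract (envelope function, nonlinear amplification relative to the noise level, dynamic threshold, minimum duration) can be sketched as below. This is an illustrative reconstruction, not the authors' published algorithm; the threshold level `thr`, the minimum duration `min_dur`, the smoothing constants, and the preset noise window `n0` are all assumptions.

```python
import numpy as np

def pick_p_onset(trace, dt, thr=10.0, min_dur=0.05, n0=100):
    """Illustrative envelope-based P picker (NOT the published
    Baer-Kradolfer code). trace: 1-D seismogram, dt: sample interval
    [s], thr: dynamic-threshold level, min_dur: time the CF must stay
    high [s], n0: presumed noise-only preset samples."""
    # Envelope function: signal plus its time derivative, so that both
    # amplitude and frequency changes raise the characteristic function.
    d = np.gradient(trace, dt)
    env = trace ** 2 + d ** 2

    # "Nonlinear amplifier": scale by running noise statistics, so weak
    # onsets in quiet noise are still amplified above the threshold.
    mean = env[:n0].mean()
    var = env[:n0].var() + 1e-12
    cf = np.zeros(env.size)
    for i in range(n0, env.size):
        cf[i] = (env[i] - mean) / np.sqrt(var)
        if cf[i] < thr:  # update noise statistics outside events only
            mean += 0.01 * (env[i] - mean)
            var += 0.01 * ((env[i] - mean) ** 2 - var)

    # Dynamic-threshold pick: CF must stay above thr for min_dur.
    need = max(1, int(min_dur / dt))
    run = 0
    for i in range(n0, env.size):
        run = run + 1 if cf[i] >= thr else 0
        if run == need:
            return (i - need + 1) * dt  # onset time [s]
    return None
```

Because the threshold is applied to the noise-normalized characteristic function rather than to raw amplitude, the same settings respond to weak onsets in quiet noise and strong onsets in loud noise alike.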


... For P onset time picking we include a traditional picker, the Baer-Kradolfer picker (Baer & Kradolfer, 1987), as baseline. The Baer-Kradolfer picker depends on four parameters: a minimum required time to declare an event, a maximum time allowed below a threshold for event detection, and two thresholds. ...
... The Baer-Kradolfer picker depends on four parameters: a minimum required time to declare an event, a maximum time allowed below a threshold for event detection, and two thresholds. For details on the parameters, we refer to Baer & Kradolfer (1987) or Kueperkoch et al. (2012). We set the second threshold to half of the first threshold to reduce the number of parameters. ...
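A minimal state machine captures the four-parameter logic described in the snippet above: the upper threshold raises a provisional pick, the characteristic function may dip below the lower threshold for at most a fixed dropout time, and the pick is accepted once it survives a minimum duration. The parameter names and the implementation are illustrative, not the picker's original code or API.

```python
def declare_event(cf, dt, thr1, tupevent, tdownmax, thr2=None):
    """Illustrative four-parameter trigger: thr1 raises a provisional
    pick, the CF may dip below thr2 for at most tdownmax seconds, and
    the pick is accepted once it has survived tupevent seconds."""
    if thr2 is None:
        thr2 = thr1 / 2.0  # the simplification used in the cited study
    trig = None
    dur = below = 0.0
    for i, v in enumerate(cf):
        if trig is None:
            if v >= thr1:                  # provisional pick
                trig, dur, below = i, 0.0, 0.0
        else:
            dur += dt
            if v < thr2:                   # dropout below lower threshold
                below += dt
                if below > tdownmax:       # dropout too long: abandon
                    trig = None
            else:
                below = 0.0
            if trig is not None and dur >= tupevent:
                return trig                # index of the accepted pick
    return None
```

With this structure, a short noise burst fails the `tupevent` duration test, while a genuine onset with brief dips in its characteristic function survives thanks to the `tdownmax` tolerance.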
Preprint
preprint: https://arxiv.org/abs/2110.13671 preprint of SEISBENCH manuscript: https://arxiv.org/abs/2111.00786 SEISBENCH on github here: https://github.com/orgs/seisbench Abstract: Seismic event detection and phase picking are the base of many seismological workflows. In recent years, several publications demonstrated that deep learning approaches significantly outperform classical approaches and even achieve human-like performance under certain circumstances. However, as most studies differ in the datasets and exact evaluation tasks studied, it is yet unclear how the different approaches compare to each other. Furthermore, there are no systematic studies how the models perform in a cross-domain scenario, i.e., when applied to data with different characteristics. Here, we address these questions by conducting a large-scale benchmark study. We compare six previously published deep learning models on eight datasets covering local to teleseismic distances and on three tasks: event detection, phase identification and onset time picking. Furthermore, we compare the results to a classical Baer-Kradolfer picker. Overall, we observe the best performance for EQTransformer, GPD and PhaseNet, with EQTransformer having a small advantage for teleseismic data. Furthermore, we conduct a cross-domain study, in which we analyze model performance on datasets they were not trained on. We show that trained models can be transferred between regions with only mild performance degradation, but not from regional to teleseismic data or vice versa. As deep learning for detection and picking is a rapidly evolving field, we ensured extensibility of our benchmark by building our code on standardized frameworks and making it openly accessible. This allows model developers to easily compare new models or evaluate performance on new datasets, beyond those presented here. 
Furthermore, we make all trained models available through the SeisBench framework, giving end-users an easy way to apply these models in seismological analysis.
... Several application examples showed that these methods were feasible and reliable (Küperkoch et al. 2012; Allen 1978, 1982; Baer and Kradolfer 1987; Saragiotis et al. 2002). However, S-wave identification is more difficult than P-wave identification because the S wave propagates more slowly and is therefore always contaminated by the P-wave coda and by converted waves such as PmP and PS waves (Diehl et al. 2009; Crampin 1977; Thurber and Atre 1993). ...
... However, we can distinguish the S wave from the P-wave coda by its different properties, which include its linear polarization. A brief review of the related parameters, such as the short-time average zero-crossing rate (related to frequency) and polarization attributes such as the deflection angle, the degree of polarization, and the ratio between transverse and total energy, is given in the next section. In this paper, before carrying out S-wave identification, we start from the assumption that the P wave has already been identified by other methods; the approach proposed by Baer and Kradolfer (1987) is recommended. ...
Article
Full-text available
S-wave identification is an indispensable step in deriving the dynamic parameters of rock masses, which is of great significance in guiding the construction and design of hydraulic engineering. However, it is difficult to identify the S wave accurately because its onset overlaps the P-wave coda and because S and P waves do not separate completely over the short propagation distances of blasting seismic waves. In the present study, a modified method is proposed to accurately identify the S wave using blasting vibration signals at engineering scale. The method combines the short-time average zero-crossing rate with polarization analysis. The short-time average zero-crossing rate effectively shortens computation time and improves calculation efficiency, while polarization analysis improves the accuracy of the method. A comparison of numerical identification results and theoretical results shows that the improved method identifies the S wave with errors of less than 2%. Finally, applying the method to vibration signals measured at the Fengning pumped-storage power station, we demonstrate its capability and accuracy by comparing the S-wave velocities obtained from first-arrival times with the suggested S-wave velocity value.
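The short-time average zero-crossing rate used in this abstract can be computed with a simple sliding window. This is a generic sketch; the window length and the per-sample normalization are our choices, not the paper's.

```python
import numpy as np

def st_zero_crossing_rate(x, win):
    """Average number of sign changes per sample in a sliding window of
    `win` samples. Lower-frequency S energy yields a lower rate than
    the higher-frequency P-wave coda."""
    sign_change = np.abs(np.diff(np.signbit(x).astype(int)))
    kernel = np.ones(win) / win
    return np.convolve(sign_change, kernel, mode="same")
```

A drop in this rate along the record is one indicator that the dominant energy has shifted from P coda to the lower-frequency S arrival.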
... Concerning data processing, we initially eliminated clipped waveforms by visually inspecting event-station (velocimeter) pairs with magnitudes of 4.5 or greater and epicentral distances of at most 30.0 km. Stations with available data but no manually determined P arrivals were picked automatically, after filtering the signal between 1 and 20 Hz (Baer and Kradolfer, 1987). Then, using TauP (Crotwell et al., 1999), the P travel-time difference was calculated with a regional velocity model (Karakonstantis, 2017). ...
... After applying the necessary selection criteria, the final dataset consisted of 631 suitable observations for tw=1 s, 431 for tw=2 s and 333 for tw=3 s, amongst the six stations. On average, 65% of the total picks were automatically determined by the Baer and Kradolfer (1987) algorithm. As our study was conducted during the evolution of the sequence, we expected a large number of unpicked (but suitable) arrivals. ...
Article
Full-text available
The main goal of an Earthquake Early Warning System (EEWS) is to estimate the expected peak ground motion of the destructive S-waves using the first few seconds of P-waves, thus becoming an operational tool for real-time seismic risk management in a short timescale. EEWSs are based on the use of scaling relations between parameters measured on the initial portion of the seismic signal, after the arrival of the first wave. Herein, using the abundant seismicity that followed the 3 March 2021 Mw=6.3 earthquake in Thessaly, we propose scaling relations for PGA, from data recorded by local permanent stations, as a function of the integral of the squared velocity (IV2p). The IV2p parameter was estimated directly from the first few seconds-long signal window (tw) after the P-wave arrival. Scaling laws are extrapolated for both individual and across sites (i.e., between a near-source reference instrument and a station located close to a target). The latter approach is newly investigated, as local site effects could have a significant impact on recorded data. Considering that further study on the behavior of IV2p is necessary, there are indications that this parameter could be used in future on-site single‐station earthquake early warning operations for areas affected by earthquakes located in Thessaly, as it presents significant stability.
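The IV2p proxy defined in this abstract, the integral of squared velocity over the first tw seconds after the P arrival, reduces to a one-liner. This sketch uses rectangle-rule integration and ignores the authors' pre-processing (instrument correction, filtering), which is not described in the snippet.

```python
import numpy as np

def iv2(velocity, dt, p_index, tw):
    """Integral of squared velocity over tw seconds after the P pick.
    velocity: 1-D trace, dt: sample interval [s], p_index: P-arrival
    sample, tw: window length [s]."""
    n = int(tw / dt)
    seg = velocity[p_index:p_index + n]
    return float(np.sum(seg ** 2) * dt)
```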
... For P onset time picking we include a traditional picker, the Baer-Kradolfer picker (Baer & Kradolfer, 1987), as baseline. The Baer-Kradolfer picker depends on four parameters: a minimum required time to declare an event, a maximum time allowed below a threshold for event detection, and two thresholds. ...
... The Baer-Kradolfer picker depends on four parameters: a minimum required time to declare an event, a maximum time allowed below a threshold for event detection, and two thresholds. For details on the parameters, we refer to Baer & Kradolfer (1987) or Kueperkoch et al. (2012). We set the second threshold to half of the first threshold to reduce the number of parameters. ...
Article
Full-text available
Seismic event detection and phase picking are the base of many seismological workflows. In recent years, several publications demonstrated that deep learning approaches significantly outperform classical approaches, achieving human‐like performance under certain circumstances. However, as studies differ in the datasets and evaluation tasks, it is unclear how the different approaches compare to each other. Furthermore, there are no systematic studies about model performance in cross‐domain scenarios, that is, when applied to data with different characteristics. Here, we address these questions by conducting a large‐scale benchmark. We compare six previously published deep learning models on eight data sets covering local to teleseismic distances and on three tasks: event detection, phase identification and onset time picking. Furthermore, we compare the results to a classical Baer‐Kradolfer picker. Overall, we observe the best performance for EQTransformer, GPD and PhaseNet, with a small advantage for EQTransformer on teleseismic data. Furthermore, we conduct a cross‐domain study, analyzing model performance on data sets they were not trained on. We show that trained models can be transferred between regions with only mild performance degradation, but models trained on regional data do not transfer well to teleseismic data. As deep learning for detection and picking is a rapidly evolving field, we ensured extensibility of our benchmark by building our code on standardized frameworks and making it openly accessible. This allows model developers to easily evaluate new models or performance on new data sets. Furthermore, we make all trained models available through the SeisBench framework, giving end‐users an easy way to apply these models.
... The large number of stations included in the AASN, however, required several modifications in the procedure (details on the configuration and procedures used for the AASN are provided in text T1 in the supplementary material). Initially, an STA/LTA-based P-phase detector was combined with the Baer-Kradolfer (BK) picker algorithm (Baer & Kradolfer 1987) in SC3's scautopick module (see supplementary text T1 for details on this procedure), which was applied to vertical components of all stations (Fig. 1), subdivided into several ...
... ADAPT also allows a multi-slicing approach where the user may select different picking-time windows for each picker separately. Analogous to other picking-algorithm developments (e.g., Baer & Kradolfer 1987; Alderson 2004; Diehl et al. 2009b), we tune the individual picking algorithms using a manually picked reference dataset composed of 11 events with SC3-MLv ≥ 3.0 and a total of 1,373 P phases. ...
Article
We take advantage of the new large AlpArray Seismic Network (AASN), part of the AlpArray research initiative (www.alparray.ethz.ch), to establish a consistent seismicity catalogue for the greater Alpine region (GAR) for the time period January 1st, 2016–December 31st, 2019. We use data from 1103 stations, including the AASN backbone composed of 352 permanent and 276 temporary broadband stations (including 30 OBS; network code Z3). Although characterized by a moderate seismic hazard, the European Alps and surrounding regions carry a higher seismic risk due to the high concentration of people and assets. For these reasons, the GAR's seismicity is monitored and routinely reported in catalogues by 11 national and 2 regional seismic observatories. The heterogeneity of these datasets limits the possibility of extracting consistent information by simply merging them to investigate the GAR's seismicity as a whole. The uniformly spaced and dense AASN provides, for the first time, a unique opportunity to calculate high-precision hypocentre locations and consistent magnitude estimates with uniform uncertainty across the GAR. We present a new, multi-step, semi-automatic method to process ∼50 TB of seismic signals, combining three different software packages. We used SeisComP3 for the initial earthquake detection, a newly developed Python library, ADAPT, for high-quality repicking, and the well-established VELEST algorithm for both filtering and final location purposes. Moreover, we computed new local magnitudes based on the final high-precision hypocentre locations and a re-evaluation of the amplitude observations. The final catalogue contains 3293 seismic events, is complete down to local magnitude 2.4, and is regionally consistent with the magnitude 3+ events of the national catalogues for the same time period.
Despite covering only 4 years of seismicity, our catalogue delineates the main fault systems and orogen fronts in the region, which are documented as seismically active by the EPOS-EMSC manually revised regional bulletin for the same time period. Additionally, we jointly inverted for a new regional minimum 1D P-wave velocity model for the GAR and for station delays for both the permanent station networks and the temporary arrays. These results provide the basis for a future re-evaluation of past decades of seismicity, and for the analysis of future seismicity, eventually improving seismic-hazard studies in the region. Moreover, we provide a unique, consistent seismic dataset fundamental for further investigation of this complex and seismically active area. The catalogue, the minimum 1D P-wave velocity model, and the associated station delays are openly shared and distributed with a permanent DOI listed in the Data Availability section.
... We use these tools to detect the seismic sources and, where possible, characterize azimuths using a suite of automated and interactive seismic operator techniques. To detect low-frequency (LF) regional and teleseismic events and HF local events, we use the vertical component of seismic data in an STA/LTA approach (Baer & Kradolfer, 1987; Withers et al., 1998), and when attempting to detect nearly identical VHF signals, we implement a template detector that takes advantage of cross-correlation between waveforms to make detections (Forghani-Arani et al., 2013; Poli, 2017). The template approach is only used for events that produce source time functions that are highly correlated with one another. ...
... The STA/LTA method is used to automate the detection of P- and S-wave onsets and to identify seismic events (Baer & Kradolfer, 1987; Withers et al., 1998). The method computes the short-term variance of a seismogram and divides it by the variance averaged over a longer time window, yielding a ratio. ...
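The ratio described in these snippets is straightforward to compute with cumulative sums. A generic sketch follows; the window lengths are user choices, and operational detectors add recursive updating and trigger/de-trigger thresholds on top of this.

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """STA/LTA characteristic function: mean energy (variance about
    zero) in a short window divided by that in a long window, both
    windows ending at the same sample."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.cumsum(np.concatenate(([0.0], e)))
    sta = (c[nsta:] - c[:-nsta]) / nsta   # windows end at nsta-1 .. N-1
    lta = (c[nlta:] - c[:-nlta]) / nlta   # windows end at nlta-1 .. N-1
    m = min(sta.size, lta.size)           # align window end samples
    return sta[-m:] / np.maximum(lta[-m:], 1e-20)
```

An event is declared where the ratio exceeds a trigger threshold; as the citing text notes, choosing that threshold trades false triggers against missed weak events.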
Article
Full-text available
Future missions carrying seismometer payloads to icy ocean worlds will measure global and local seismicity to determine where the ice shell is seismically active. We use two locations, a seismically active site on Gulkana Glacier, Alaska, and a more seismically quiet site on the northwestern Greenland Ice Sheet, as geophysical analogs. We compare the performance of a single‐station seismometer against a small‐aperture seismic array to detect both high (>1 Hz) and low (<0.1 Hz) frequency events at each site. We created catalogs of high frequency (HF) and low frequency (LF) seismicity at each location using an automated short‐term average/long‐term average technique. We find that with a 1‐m small‐aperture seismic array, our detection rate increased (9% for Alaska and 46% for Greenland) over the single‐station approach. At Gulkana, we recorded an order of magnitude more HF events than at the Greenland site. We ascribe the HF event sources to a combination of icequakes, rockfalls, and ice‐water interactions, while very HF events are determined to result from bamboo poles that were used to secure gear. We further find that local environmental noise reduces the ability to detect LF global tectonic events. Based upon this study, we recommend that (a) future missions consider the value of the expanded capability of a small array compared to a single station, (b) detection algorithms be designed to accommodate variable environmental noise, and (c) potential landing sites be assessed for sources of local environmental noise that may limit detection of global events.
... Duration time refers to the time elapsed between the time the wave height first exceeds the threshold voltage and the time the wave height is less than the threshold voltage from the same AE event. To apply the AE signal analysis method to real manufacturing processes, it is essential to distinguish the noise or ripple voltage signal from the AE signal [16][17][18][19]. Hence, a trigger algorithm using the threshold voltage value [20][21][22][23] and a low-pass filter that removes high-frequency noise [24] have been widely applied. ...
... Hence, an appropriate threshold voltage is set for a typically processed signal, as described in the introduction. It is common to analyze the aforementioned AE parameters based on the threshold voltage [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24]. Figure 9 illustrates the measurement results when the press is in a non-operative state during the hole expansion test. However, because the press was not operating, only the noise and ripple voltage signals were measured. ...
Article
Full-text available
In this study, an acoustic emission (AE) sensor was utilized to predict fractures that occur in a product during the sheet metal forming process. An AE activity was analyzed, presuming that AE occurs when plastic deformation and fracturing of metallic materials occur. For the analysis, a threshold voltage is set to distinguish the AE signal from the ripple voltage signal and noise. If the amplitude of the AE signal is small, it is difficult to distinguish the AE signal from the ripple voltage signal and the noise signal. Hence, there is a limitation in predicting fractures using the AE sensor. To overcome this limitation, the Kalman filter was used in this study to remove the ripple voltage signal and noise signal and then analyze the activity. However, it was difficult to filter out the ripple voltage signal using a conventional low-pass filter or Kalman filter because the ripple voltage signal is a high-frequency component governed by the switch-mode of the power supply. Therefore, a Kalman filter that has a low Kalman gain was designed to extract only the ripple voltage signal. Based on the KF-RV algorithm, the measured ripple voltage and noise signal were reduced by 97.3% on average. Subsequently, the AE signal was extracted appropriately using the difference between the measured value and the extracted ripple voltage signal. The activity of the extracted AE signal was analyzed using the ring-down count among various AE parameters to determine if there was a fracture in the test specimen.
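The low-gain Kalman idea described in this abstract can be illustrated with a constant-gain scalar tracker: with a deliberately small gain, the state follows only the persistent ripple/offset component and barely reacts to short AE bursts, so the burst survives in the residual. This is a simplified stand-in for the paper's KF-RV algorithm; the gain value and the static state model are our assumptions.

```python
def low_gain_kalman(measurements, gain=0.005, x0=0.0):
    """Constant-gain Kalman-style tracker of the slowly varying
    ripple/offset component. A small gain makes the estimate nearly
    blind to short transients such as AE bursts."""
    estimates = []
    x = x0
    for z in measurements:
        x = x + gain * (z - x)  # static state model + measurement update
        estimates.append(x)
    return estimates
```

The AE signal is then recovered as the residual `z - estimate`, which stays large during a burst while the tracker keeps hugging the ripple level.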
... A large class of inverse problems in imaging aims at recovering locations of sources of waves from sensor measurements of the wavefield radiated by these sources. Many applications for locating sources exist in the literature, in various fields such as acoustics, geophysics, non-destructive evaluation and more [1,2,3,4,5,6,7,8,9,10]. ...
Preprint
Full-text available
Inverse source problems are central to many applications in acoustics, geophysics, non-destructive testing, and more. Traditional imaging methods suffer from the resolution limit, preventing distinction of sources separated by less than the emitted wavelength. In this work we propose a method based on physics-informed neural networks for solving the source refocusing problem, constructing a novel loss term which promotes super-resolving capabilities of the network and is based on the physics of wave propagation. We demonstrate the approach in the setup of imaging an a priori unknown number of point sources in a two-dimensional rectangular waveguide from measurements of wavefield recordings along a vertical cross-section. The results show the ability of the method to approximate the locations of sources with high accuracy, even when they are placed close to each other.
... Among these, the STA/LTA method (Allen 1978), a relatively simple technique, remains broadly accepted: the ratio of average energy between two consecutive moving time windows, i.e., a short-term window and its subsequent long-term window, is used for detection. However, this method also has disadvantages (Baer and Kradolfer 1987; Earle and Shearer 1994; Xiantai et al. 2011; Lomax et al. 2012; Vassallo et al. 2012; Velasco et al. 2016). For example, it requires careful selection of the trigger threshold: a low threshold leads to many false triggers, while a high threshold may miss weak events. ...
Article
Full-text available
The goal of this work is to propose a new strategy for automatically quantifying the high-pass cut-off frequency of digital ground motion (GM) records. In previous studies, the high-pass filter cut-off frequency of GM records was often identified by visual inspection. To this end, this article proposes a simple and efficient method to detect the correct cut-off frequency, at which the displacement waveform does not change significantly. The displacement change is delineated as the rate of change of the displacement at the end of the padded record with respect to the cut-off frequency. Based on the Short-Term-Average to Long-Term-Average (STA/LTA) method, which detects when the ratio of average energy in two consecutive moving time windows exceeds a threshold, a critical frequency at which the ending displacement starts deviating from zero can be identified. Nevertheless, the cut-offs may be affected by GM intensities even when the window length and trigger threshold are set meticulously. In this paper, a new characteristic function (CF) based on a modified cumulative envelope function is proposed. It has the appealing advantage of detecting the critical cut-off frequency from the CF shape alone, independent of the intensity differences among GM records. Finally, cut-off frequencies of GMs from the 2008 Wenchuan mainshock calculated by the proposed method are compared with those derived from two traditional methods. The influence of the cut-off frequency on ground motion displacement and on elastic and inelastic spectra is also studied.
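The core selection rule in this abstract can be caricatured as a sweep over candidate corner frequencies: keep lowering the corner while the ending displacement of the padded record stays near zero, and stop where it starts deviating. The sketch below assumes the per-corner ending displacements have already been computed; the tolerance `tol` is an illustrative stand-in for the paper's CF-based criterion.

```python
def critical_cutoff(fcs_desc, end_disp, tol):
    """fcs_desc: candidate high-pass corner frequencies in descending
    order; end_disp: ending displacement of the padded record for each
    corner. Returns the lowest corner whose ending displacement still
    stays within tol of zero."""
    last_ok = None
    for fc, d in zip(fcs_desc, end_disp):
        if abs(d) > tol:   # ending displacement deviates from zero
            break
        last_ok = fc
    return last_ok
```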
... Conventional seismic methods involve computations and statistical observations to detect earthquakes in a waveform [17]. Many algorithms have been proposed in the field of seismology, but they are rarely used and only a few of them are applicable in real-time networks [18]-[20]. Moreover, the precision of these methods is moderate, and in a noisy environment they cannot be used as the main detection method. ...
Article
Full-text available
With the recent increase in the number of earthquakes in Korea, research efforts have been directed toward the real-time detection of earthquakes and the formulation of evacuation plans. Traditional seismometers can precisely record earthquakes but are incapable of processing them on-site to initiate an alert and response mechanism. By contrast, internet of things (IoT) devices equipped with accelerometers and CPUs can record and detect earthquake signals in real time and send out alert messages to nearby users. However, the signals recorded on IoT devices are noisy because of two main factors: the urban buildings and structures these devices are installed in and their cost–quality trade-off. Therefore, in this work, we provide an effective mechanism to deal with the problem of false alarms in IoT devices. We test our previously proposed artificial neural network (ANN) with different feature window sizes ranging from 2 s to 6 s and with various earthquake intensities. We find that setting the size of the feature window to a certain interval (i.e., 4–5 s) can improve model performance. Moreover, an evacuation route guidance platform that considers user location is proposed. The proposed platform provides and visualizes information to user devices in real time through the communication between server and user devices. In the event of a disaster, safe shelters are selected on the basis of the information entered from the server, and pedestrian paths are provided. Its application can reduce the direct and secondary damage caused by earthquakes.
... To process the data, we first ensured that no clipped recordings were included by visually inspecting suspicious event-station pairs (i.e., seismographs located within 30.0 km of earthquakes with a magnitude of at least 4.5). Then, we automatically picked the arrival of the P-phase with the algorithm proposed by [48]. To estimate any EEWS parameter, we used three time windows starting from the P arrival and extending to 3 s, 4 s, or 5 s (tw). ...
Article
Full-text available
The main goal of an Earthquake Early Warning System (EEWS) is to alert before the arrival of damaging waves using the first seismic arrival as a proxy, thus becoming an important operational tool for real-time seismic risk management on a short timescale. EEWSs are based on the use of scaling relations between parameters measured on the initial portion of the seismic signal after the arrival of the first wave. To explore the plausibility of EEWSs around the Eastern Gulf of Corinth and Western Attica, amplitude and frequency-based parameters, such as peak displacement (Pd), the integral of squared velocity (IV2) and the characteristic period (τc), were analyzed. All parameters were estimated directly from the initial 3 s, 4 s, and 5 s signal windows (tw) after the P arrival. While further study is required on the behavior of the proxy quantities, we propose that the IV2 parameter and the peak amplitudes of the first seconds of the P waves present significant stability and introduce the possibility of a future on-site EEWS for areas affected by earthquakes located in the Eastern Gulf of Corinth and Western Attica. Parameters related to regional-based EEWS need to be further evaluated.
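The characteristic period mentioned here is commonly computed from the ratio of velocity to displacement energy in the early P window (the standard Kanamori-style definition τc = 2π·sqrt(∫u² dt / ∫v² dt)). The sketch below follows that reading of the parameter; the authors' exact windowing and filtering are not reproduced.

```python
import numpy as np

def tau_c(disp, vel, dt, n):
    """Characteristic period over the first n samples after the P
    arrival: tau_c = 2*pi / sqrt(int v^2 dt / int u^2 dt).
    disp/vel: displacement and velocity traces starting at the P pick."""
    r = np.sum(vel[:n] ** 2) / np.sum(disp[:n] ** 2)  # dt cancels
    return 2.0 * np.pi / np.sqrt(r)
```

For a monochromatic wave of period T, the ratio r equals the squared angular frequency, so τc recovers T, which is the sanity check used below.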
... Subsequently, Küperkoch et al. [6] further improved the Saragiotis method, achieving better picking performance through weighted averaging. However, given the real-time requirements of seismic monitoring, the schemes most widely used in practice are still based on the STA/LTA method and its variants, such as the Allen picker [8] and BK87 [26] [9,27]. Among these, the FilterPicker algorithm proposed in [9] shows higher sensitivity to seismic events, with good robustness and efficiency. ...
Article
Full-text available
Identifying the arrival times of seismic P-phases plays a significant role in real-time seismic monitoring, which provides critical guidance for emergency response activities. While considerable research has been conducted on this topic, efficiently capturing the arrival times of seismic P-phases hidden within intensively distributed and noisy seismic waves, such as those generated by the aftershocks of destructive earthquakes, remains a real challenge since most common existing methods in seismology rely on laborious expert supervision. To this end, in this paper, we present a machine learning-enhanced framework based on ensemble learning strategy, EL-Picker, for the automatic identification of seismic P-phase arrivals on continuous and massive waveforms. More specifically, EL-Picker consists of three modules, namely, Trigger, Classifier, and Refiner, and an ensemble learning strategy is exploited to integrate several machine learning classifiers. An evaluation of the aftershocks following the Ms 8.0 Wenchuan earthquake demonstrates that EL-Picker can not only achieve the best identification performance but also identify 120% more seismic P-phase arrivals as complementary data. Meanwhile, experimental results also reveal both the applicability of different machine learning models for waveforms collected from different seismic stations and the regularities of seismic P-phase arrivals that might be neglected during the manual inspection. These findings clearly validate the effectiveness, efficiency, flexibility, and stability of EL-Picker.
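The ensemble-integration step behind the Classifier module described above can be illustrated with a weighted soft vote over the member classifiers' outputs. This is a generic sketch of ensemble averaging; EL-Picker's actual modules, weights, and decision cut-off are not reproduced here.

```python
def ensemble_vote(probabilities, weights=None, cutoff=0.5):
    """Weighted soft vote over several classifiers' P-arrival
    probabilities. Returns (is_arrival, ensemble_score)."""
    if weights is None:
        weights = [1.0] * len(probabilities)  # unweighted by default
    score = sum(w * p for w, p in zip(weights, probabilities)) / sum(weights)
    return score >= cutoff, score
```

Averaging the probabilities rather than the hard labels lets a confident minority classifier outvote uncertain ones, which is one reason soft voting is a common ensemble choice.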
... The CF rises as soon as a signal with a higher amplitude than the preceding noise is encountered in the short time average window. Baer and Kradolfer (1987) developed an automatic phase picker by modifying Allen's characteristic function and implementing a dynamic threshold. The algorithm developed by Küperkoch et al. (2010) modifies and applies the scheme of Saragiotis et al. (2002). ...
Article
Full-text available
We present an extensive dataset of highly accurate absolute travel times and travel-time residuals of teleseismic P waves recorded by the AlpArray Seismic Network and complementary field experiments in the years from 2015 to 2019. The dataset is intended to serve as the basis for teleseismic travel-time tomography of the upper mantle below the greater Alpine region. In addition, the data may be used as constraints in full-waveform inversion of AlpArray recordings. The dataset comprises about 170 000 onsets derived from records filtered to an upper-corner frequency of 0.5 Hz and 214 000 onsets from records filtered to an upper-corner frequency of 0.1 Hz. The high accuracy of absolute and residual travel times was obtained by applying a specially designed combination of automatic picking, waveform cross-correlation and beamforming. Taking travel-time data for individual events, we are able to visualise in detail the wave fronts of teleseismic P waves as they propagate across AlpArray. Variations of distances between isochrons indicate structural perturbations in the mantle below. Travel-time residuals for individual events exhibit spatially coherent patterns that prove to be stable if events of similar epicentral distance and azimuth are considered. When residuals for all available events are stacked, conspicuous areas of negative residuals emerge that indicate the lateral location of subducting slabs beneath the Apennines and the western, central and eastern Alps. Stacking residuals for events from 90∘ wide azimuthal sectors results in lateral distributions of negative and positive residuals that are generally consistent but differ in detail due to the differing direction of illumination of mantle structures by the incident P waves. Uncertainties of travel-time residuals are estimated from the peak width of the cross-correlation function and its maximum value. 
The median uncertainty is 0.15 s at 0.5 Hz and 0.18 s at 0.1 Hz, which is more than 10 times lower than the typical travel-time residuals of up to ±2 s. Uncertainties display a regional dependence caused by quality differences between temporary and permanent stations as well as site-specific noise conditions.
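The waveform cross-correlation step underlying the pick refinement and the uncertainty estimate can be sketched as plain normalized cross-correlation between two station records; the beamforming stage and the mapping from peak width to uncertainty are not reproduced here.

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Relative time shift of trace a with respect to trace b from the
    peak of the normalized cross-correlation. Returns (delay_s, peak);
    a peak near 1 indicates a reliable measurement."""
    a = (a - a.mean()) / (a.std() * a.size)
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")
    k = int(np.argmax(cc))
    lag = k - (b.size - 1)        # positive: a is delayed relative to b
    return lag * dt, float(cc[k])
```

Applying this between a reference station and every other station converts consistent waveform shapes into relative onset times far more precise than independent picks.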
... The first one is FilterPicker (FP) for automatic, real-time phase picking (Vassallo et al., 2012). FP is designed on the basis of the classical short-term average/long-term average (STA/LTA) algorithms (Allen, 1982; Baer and Kradolfer, 1987) and can realize real-time phase picking from continuous data streams with high efficiency and accuracy. FP adopts two picking thresholds, S1 and S2, and a pick is declared when the value of a characteristic function exceeds S1 and, at the same time, the integral of the characteristic function exceeds S2. ...
... We define the characteristic function (CF) for each channel in the network as the kurtosis of a sliding 5 s window of the continuous seismograms. We dynamically define the threshold (e.g., Baer and Kradolfer, 1987) by calculating it for the running kurtosis window as the mean of the CF plus the standard deviation (st. dev.) of the CF multiplied by a scaling factor (SCL) [kurtosis threshold = mean (CF) + st. ...
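A minimal sketch of such a sliding-window kurtosis CF with a mean-plus-scaled-standard-deviation threshold; the window length, SCL value, noise segment, and synthetic trace below are illustrative assumptions, not the cited paper's settings:

```python
import numpy as np

def kurtosis_cf(x, win):
    """Excess kurtosis of a sliding window as a characteristic function:
    near 0 on Gaussian noise, sharply positive once an impulsive onset
    enters the window."""
    cf = np.zeros(len(x))
    for i in range(win, len(x) + 1):
        w = x[i - win:i]
        d = w - w.mean()
        m2 = (d ** 2).mean()
        cf[i - 1] = (d ** 4).mean() / m2 ** 2 - 3.0
    return cf

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 600)
x[400] += 20.0                         # impulsive P arrival

win = 100                              # e.g. a 5 s window at 20 Hz
cf = kurtosis_cf(x, win)
noise_cf = cf[win - 1:300]             # CF over an assumed noise-only stretch
scl = 8.0                              # scaling factor, "SCL" in the text
thr = noise_cf.mean() + scl * noise_cf.std()   # dynamic threshold
trigger = int(np.argmax(cf > thr))
```

Because kurtosis is normalized by the window variance, the threshold adapts to the noise level rather than to absolute amplitudes.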
Article
Full-text available
Stretching nearly the extent of the Canadian Cordillera, the Rocky Mountain trench (RMT) forms one of the longest valleys on Earth. Yet, the level of seismicity, and style of faulting, on the RMT remains poorly known. We assess earthquakes in the southern RMT using a temporary network of seismometers around Valemount, British Columbia, and identify active structures using a probabilistic earthquake catalog spanning from September 2017 to August 2018. Together with results from earlier geological and seismic studies, our new earthquake catalog provides a constraint on the geometry of subsurface faults and their level of activity during a year of recording. The tectonic analysis presented here benefits from the catalog of 47 earthquakes, including robust horizontal and vertical uncertainty quantification. The westward dip of the southern RMT fault is one of the prominent subsurface structures that we observe. The seismicity observed here occurs on smaller surrounding faults away from the RMT and shifts from the east to the west of the trench from north to south of Valemount. The change in distribution of earthquakes follows changes in the style of deformation along the length of the RMT. Focal mechanisms calculated for two earthquakes with particularly clear waveforms reveal northeast–southwest-oriented thrusting. The seismicity reveals a change in the pattern of deformation from narrowly focused transpression north of Valemount to more broadly distributed activity in an area characterized by normal faulting to the south. Six sets of repeating events detected here produce similar waveforms whose P waves exhibit correlation coefficients that exceed 0.7 and may result from the migration of fluids through the fractured crust.
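The repeating-event criterion at the end of this abstract (waveform pairs with correlation coefficients exceeding 0.7) can be illustrated with a maximum normalized cross-correlation on synthetic wavelets; the wavelet and time shift below are arbitrary:

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Maximum normalized cross-correlation over all lags; values near
    1 indicate near-identical waveforms, and repeating events are
    commonly declared above a threshold such as 0.7."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.correlate(a, b, mode="full").max())

t = np.linspace(0.0, 1.0, 200)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # synthetic wavelet
w2 = np.roll(w1, 10)                              # same wavelet, time-shifted
cc = max_norm_xcorr(w1, w2)                       # well above 0.7
```

Taking the maximum over all lags makes the measure insensitive to small origin-time or pick differences between the two events.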
... The ratio of the average energies of the short- and long-time windows increases at the first-arrival time. Allen [9,10], Baer [11], and Shearer [12] calculated the STA/LTA using different features of the target signal. Energy-based methods are efficient, but the threshold has to be selected manually, and the methods tend to fail at low signal-to-noise ratios (SNRs). ...
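The STA/LTA principle described in the snippet can be sketched as follows; the window lengths and trigger threshold are arbitrary illustrative choices:

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """Classic STA/LTA characteristic function: ratio of the mean
    squared amplitude in a short trailing window to that in a long
    trailing window; left at zero until the long window is filled."""
    energy = signal ** 2
    ratio = np.zeros(len(signal))
    for i in range(n_lta, len(signal)):
        sta = energy[i - n_sta:i].mean()
        lta = energy[i - n_lta:i].mean()
        ratio[i] = sta / lta
    return ratio

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 1000)
trace[500:] += rng.normal(0.0, 5.0, 500)   # arrival at sample 500

cf = sta_lta(trace, n_sta=20, n_lta=200)
onset = int(np.argmax(cf > 4.0))           # first sample exceeding threshold
```

On this synthetic trace the ratio trips within a few samples of the true onset; the manual threshold and window tuning mentioned in the snippet is exactly the choice of `n_sta`, `n_lta`, and the trigger level.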
Article
Ultrasound sound-speed tomography (USST) has shown great prospects for breast cancer diagnosis due to its advantages of non-radiation, low cost, three-dimensional (3D) breast images, and quantitative indicators. However, the reconstruction quality of USST is highly dependent on the first-arrival picking of the transmission wave. Traditional first-arrival picking methods have low accuracy and noise robustness. To improve the accuracy and robustness, we introduced a self-attention mechanism into the Bidirectional Long Short-Term Memory (BLSTM) network and proposed the self-attention BLSTM (SAT-BLSTM) network. The proposed method predicts the probability of the first-arrival time and selects the time with maximum probability. A numerical simulation and prototype experiment were conducted. In the numerical simulation, the proposed SAT-BLSTM showed the best results. For signal-to-noise ratios (SNRs) of 50, 30, and 15 dB, the mean absolute errors (MAEs) were 48, 49, and 76 ns, respectively. The BLSTM had the second-best results, with MAEs of 55, 56, and 85 ns, respectively. The MAEs of the Akaike Information Criterion (AIC) method were 57, 296, and 489 ns, respectively. In the prototype experiment, the MAEs of the SAT-BLSTM, the BLSTM, and the AIC were 94, 111, and 410 ns, respectively.
... Most of the existing detectors/pickers are based on tracking abrupt changes in signal characteristics, such as amplitude, energy, frequency, or higher-order statistics, either in the original domain or a transformed domain, upon the arrival of a seismic event [1][2][3][4][10][11][12][13][14]. Among these techniques, STA/LTA (short-term average/long-term average) [2] and its variants [15,16] are widely used for their ease of computation and online implementation. The working principle of STA/LTA methods is based on comparing the ratio of energies in a short and a long window with a threshold. ...
Article
Onset detection of P-wave in seismic signals is of vital importance to seismologists because it is not only crucial to the development of early warning systems but it also aids in estimating the seismic source parameters. All the existing P-wave onset detection methods are based on a combination of statistical signal processing and time-series modeling ideas. However, these methods do not adequately accommodate some advanced ideas that exist in fault detection literature, especially those based on predictive analytics. When combined with a time-frequency (t-f) / temporal-spectral localization method, the effectiveness of such methods is enhanced significantly. This work proposes a novel real-time automatic P-wave detector and picker in the prediction framework with a time-frequency localization feature. The proposed approach brings a diverse set of capabilities in accurately detecting the P-wave onset, especially in low signal-to-noise ratio (SNR) conditions in which all the existing methods struggle. The core idea is to monitor the difference in squared magnitudes of one-step-ahead predictions and measurements in the time-frequency bands with a statistically determined threshold. The proposed framework essentially accommodates any suitable prediction methodology and time-frequency transformation. We demonstrate the proposed framework by deploying auto-regressive integrated moving average (ARIMA) models for predictions and the well-known maximal overlap discrete wavelet packet transform (MODWPT) for the t-f projection of measurements. The ability and efficacy of the proposed method, especially in detecting P-waves embedded in low SNR measurements, is illustrated on a synthetic data set and 200 real-time data sets spanning four different geographical regions.
A comparison with three prominently used detectors, namely, STA/LTA, AIC, and DWT-AIC, shows improved detection rate for low SNR events, better accuracy of detection and picking, decreased false alarm rate, and robustness to outliers in data. Specifically, the proposed method yields a detection rate of 89% and a false alarm rate of 11.11%, which are significantly better than those of existing methods.
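The core idea, comparing squared one-step-ahead prediction errors against a statistically determined threshold, can be sketched with a plain least-squares AR model standing in for the paper's ARIMA-plus-MODWPT pipeline; the model order, window lengths, and threshold factor below are illustrative assumptions:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: predict x[t] from the p previous samples."""
    cols = [x[p - k - 1:len(x) - k - 1] for k in range(p)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), x[p:], rcond=None)
    return coef

def one_step_sq_residuals(x, coef):
    """Squared one-step-ahead prediction errors over the whole trace."""
    p = len(coef)
    preds = np.array([coef @ x[t - p:t][::-1] for t in range(p, len(x))])
    return (x[p:] - preds) ** 2

rng = np.random.default_rng(2)
# colored noise (so the AR model has predictable structure) ...
x = np.convolve(rng.normal(0, 1, 1000), np.ones(5) / 5, mode="same")
x[600:] += rng.normal(0, 2.0, 400)   # ... plus an unpredictable "signal"

p, win = 4, 20
coef = fit_ar(x[:500], p)            # train on the noise-only segment
res = one_step_sq_residuals(x, coef)
ma = np.convolve(res, np.ones(win) / win, mode="valid")  # smoothed residuals
thr = 4.0 * ma[:400].mean()          # threshold from noise-level residuals
onset = p + win - 1 + int(np.argmax(ma > thr))
```

The model predicts the noise well, so residuals stay small until the unpredictable signal arrives; the paper applies this monitoring per time-frequency band rather than on the raw trace.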
... P-wave arrival times and first-motion polarities were determined by the algorithm presented by Baer and Kradolfer (1987), as implemented in the ObsPy Python package. Each phase arrival time was then reviewed by an analyst and adjusted when necessary. ...
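ObsPy exposes this picker as obspy.signal.trigger.pk_baer. As a rough illustration of the envelope idea behind it, here is a pure-NumPy sketch; it is not the published algorithm, which raises the envelope to the fourth power, standardizes it with running noise statistics, and includes first-motion logic:

```python
import numpy as np

def envelope_cf(y, dt):
    """Envelope in the spirit of Baer & Kradolfer (1987): squared
    amplitude plus weighted squared time derivative, lightly smoothed.
    A simplified sketch only, not the published characteristic
    function."""
    dy = np.gradient(y, dt)
    weight = (y ** 2).sum() / (dy ** 2).sum()   # balance the two terms
    e = y ** 2 + weight * dy ** 2
    return np.convolve(e, np.ones(10) / 10, mode="same")

dt = 0.01
t = np.arange(0.0, 10.0, dt)
rng = np.random.default_rng(3)
y = rng.normal(0.0, 0.1, t.size)
y[500:] += np.sin(2 * np.pi * 5 * t[500:]) * np.exp(-(t[500:] - 5.0))

e = envelope_cf(y, dt)
noise = e[:400]                                 # assumed noise-only stretch
pick = int(np.argmax(e > noise.mean() + 10 * noise.std()))
```

Including the derivative term makes the envelope respond to a change in frequency content as well as amplitude, which is why such pickers can time emergent onsets that a pure amplitude trigger misses.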
Conference Paper
Full-text available
Monitoring mining-induced seismicity can provide valuable insights into the rock mass response to mining. There are many approaches to monitoring seismicity in mining depending on the mining method, mining geometry, data quality requirements, and acceptable cost of monitoring. One flexible, inexpensive monitoring method is a temporary surface seismic deployment. The National Institute for Occupational Safety and Health has conducted temporary deployments above longwall panels at two longwall coal mines in the western United States. This study evaluates the effectiveness of these deployments in meeting basic monitoring objectives and examines the seismicity recorded at each mine. A total of 901 events were detected at the first mine and 30 events were detected at the second mine. Event magnitudes ranged from 0.1 to 1.6 for one mine and from 0.4 to 0.7 at the other. The two deployments were successful in their goals; however, the results highlight the importance of well-designed arrays and accounting for seismic velocity changes caused by mining. Although the deployments only lasted a few weeks, notable seismic features of each panel were observed. The two mines exhibit starkly different responses to similar mining methods, quantified by the rates, magnitudes, and locations of the events.
... Conventional algorithms for automatic seismic phase picking often rely upon detecting abrupt changes in the time series using statistical methods. A commonly adopted approach is to transform the seismogram into a time-dependent characteristic function that is more sensitive to abrupt changes, such as the ratio of short-term and long-term averages (STAs/LTAs; Allen, 1982), envelope functions (Baer and Kradolfer, 1987), autoregressive Akaike information criterion (AR-AIC; Sleeman and Van Eck, 1999), kurtosis (Saragiotis et al., 2002; Baillard et al., 2014), skewness (Nippress et al., 2010; Ross and Ben-Zion, 2014), filtering (Lomax et al., 2012), and particle-motion polarization (Jurkevics, 1988; Cichowicz, 1993; Baillard et al., 2014). After applying some postprocessing steps on the characteristic function to reduce the false-detection rate, the arrival times of seismic phases are then picked automatically, based upon the temporal positions of local extrema of the characteristic function. ...
Article
Full-text available
Seismograms are convolution results between seismic sources and the media that seismic waves propagate through and are, therefore, the primary observations for studying seismic source parameters and the Earth's interior. Routine earthquake location and travel-time tomography rely on accurate seismic phase picks (e.g., P and S arrivals). As data volumes increase, reliable automated seismic phase-picking methods are needed to analyze data and provide timely earthquake information. However, most traditional autopickers suffer from low signal-to-noise ratios and usually require additional effort to tune hyperparameters for each case. In this study, we proposed a deep-learning approach that adapted soft attention gates (AGs) and recurrent-residual convolution units (RRCUs) into the backbone U-Net for seismic phase picking. The attention mechanism was implemented to suppress responses from waveforms irrelevant to seismic phases, and the cooperating RRCUs further enhanced temporal connections of seismograms at multiple scales. We used numerous earthquake recordings in Taiwan, with diverse focal mechanisms and wide depth and magnitude distributions, to train and test our model. Setting the picking errors within 0.1 s and the predicted probability over 0.5, the AG with recurrent-residual convolution unit (ARRU) phase picker achieved an F1 score of 98.62% for P arrivals and 95.16% for S arrivals, and picking rates were 96.72% for P waves and 90.07% for S waves. The ARRU phase picker also showed great generalization capability when handling unseen data. When the model trained with Taiwan data was applied to southern California data, the ARRU phase picker showed no performance degradation. Compared with manual picks, the arrival times determined by the ARRU phase picker showed higher consistency, as evaluated with a set of repeating earthquakes. Arrival picks with less human error could benefit studies such as earthquake location and seismic tomography.
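The F1 scores quoted here combine precision and recall of picks declared correct within a 0.1 s tolerance. A small sketch of such a computation follows; the greedy one-to-one matching convention is an assumption for illustration, not necessarily the paper's exact evaluation procedure:

```python
def pick_f1(pred, true, tol=0.1):
    """Precision, recall, and F1 for phase picks: a predicted pick is a
    true positive if it lies within `tol` seconds of a still-unmatched
    reference pick (greedy one-to-one matching)."""
    unmatched = sorted(true)
    tp = 0
    for pick in sorted(pred):
        hit = next((ref for ref in unmatched if abs(pick - ref) <= tol), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# two of three predictions match within 0.1 s; one reference pick is missed
p, r, f1 = pick_f1(pred=[1.02, 5.50, 9.00], true=[1.00, 5.45, 7.30])
```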
... We used the standard STA/LTA subroutine provided in the ObsPy library [36] for already triggered events. There are more precise algorithms to pick the first arrival [37][38][39][40]. However, a simple STA/LTA algorithm is sufficient for the purpose of dividing the data into signal and noise. ...
Article
Full-text available
It is necessary to monitor, acquire, preprocess, and classify microseismic data to understand active faults or other causes of earthquakes, thereby facilitating the preparation of early-warning earthquake systems. Accordingly, this study proposes the application of machine learning for signal–noise classification of microseismic data from Pohang, South Korea. For the first time, unique microseismic data were obtained from the monitoring system of the borehole station PHBS8 located in Yongcheon-ri, Pohang region, while hydraulic stimulation was being conducted. The collected data were properly preprocessed and utilized as training and test data for supervised and unsupervised learning methods: random forest, convolutional neural network, and K-medoids clustering with fast Fourier transform. The supervised learning methods showed 100% and 97.4% of accuracy for the training and test data, respectively. The unsupervised method showed 97.0% accuracy. Consequently, the results from machine learning validated that automation based on the proposed supervised and unsupervised learning applications can classify the acquired microseismic data in real time.
... We automatically picked the first arrivals of the SV signal (Baer and Kradolfer, 1987) using data with a signal-to-noise ratio greater than 6 and corrected for the sign of the onset. The magnitude of the events was only considered for copying data from the archives. ...
Article
Full-text available
In the frame of the AlpArray project we analyse teleseismic data from permanent and temporary stations of the Alpine region to study seismic discontinuities down to about 140 km depth. We average broadband teleseismic S-waveform data to retrieve S-to-P converted signals from below the seismic stations. In order to avoid processing artefacts, no deconvolution or filtering is applied, and S arrival times are used as reference for stacking. We show a number of north–south and east–west profiles through the Alpine area. The Moho signals are always seen very clearly, and negative velocity gradients below the Moho depth are also visible in a number of profiles. A Moho depression is visible along larger parts of the Alpine chain. It reaches its largest depth of 60 km beneath the Tauern Window. However, the Moho depression ends abruptly near about 13° E below the eastern Tauern Window. This Moho depression may represent the crustal trench, where the Eurasian lithosphere is subducted below the Adriatic lithosphere. East of 13° E an important along-strike change occurs; the image of the Moho changes completely. No Moho deepening is found in this easterly region; instead the Moho bends up along the contact between the European and the Adriatic lithosphere all the way to the Pannonian Basin. An important along-strike change was also detected in the upper mantle structure at about 14° E. There, the lateral disappearance of a zone of negative velocity gradient in the uppermost mantle indicates that the S-dipping European slab laterally terminates east of the Tauern Window in the axial zone of the Alps. The area east of about 13° E is known to have been affected by severe late-stage modifications of the structure of crust and uppermost mantle during the Miocene when the ALCAPA (Alpine, Carpathian, Pannonian) block was subject to E-directed lateral extrusion.
... First-arrival picking is one of the most time-consuming tasks in seismic data processing. The short-term average/long-term average (STA/LTA) is the most widely used method in first-arrival picking (Baer and Kradolfer, 1987). However, STA/LTA is sensitive to the time window used to compute the average energy ratio of seismic amplitudes. ...
... Mainly, methods based on both a sliding window and a threshold value are considered traditional event-recognition algorithms. Some commonly used methods are the STA/LTA (short-term average to long-term average ratio) algorithm [9][10][11], as well as multi-window techniques [12] and the modified energy ratio method [13]. This kind of method, with an extremely fast operation speed, is an ordinary discrimination process for detecting the first arrival of a seismic phase [7]. ...
Article
Full-text available
The technology of microseismic monitoring, the first step of which is event recognition, provides an effective method for giving early warning of dynamic disasters in coal mines, especially mining water hazards, but signals with a low signal-to-noise ratio (SNR) usually cannot be recognized effectively by systematic methods. This paper proposes a wavelet scattering decomposition (WSD) transform and support vector machine (SVM) algorithm for discriminating events in microseismic signals with a low SNR. Firstly, a method of signal feature extraction based on the WSD transform is presented by studying the matrix constructed from the scattering decomposition coefficients. Secondly, an intelligent microseismic-event recognition model is outlined, built by computing WSD coefficients for the acquired raw vibration signals and shaping them into a feature-vector matrix. Finally, a comparative analysis of microseismic events and noise signals in the experiment verifies that the discriminative features of the two can be accurately expressed using wavelet scattering coefficients. The artificial-intelligence recognition model developed based on both SVM and WSD not only provides a fast method with a high classification accuracy rate, but also fits the online feature extraction of microseismic monitoring signals. We establish that the proposed method improves the efficiency and the accuracy of microseismic signal processing for monitoring rock instability and seismicity.
... In terms of methods, again there is a wide variety of techniques proposed. Automated picking algorithms encompass traditional characteristic-function-based approaches, applied for decades in real-time detection pipelines (Allen 1978, 1982; Baer & Kradolfer 1987; Lomax et al. 2012), but more recently, deep-learning-based pickers have emerged as the leading automated picking method (Ross et al. 2018; Mousavi et al. 2020; Soto & Schurr 2021). Deep-learning routines are typically trained on millions of labelled phase examples to automatically infer the characteristic properties of seismic phase onsets. ...
Preprint
Full-text available
Machine Learning (ML) methods have demonstrated exceptional performance in recent years when applied to the task of seismic event detection. With numerous ML techniques now available for detecting seismicity, applying these methods in practice can help further highlight their advantages over more traditional approaches. Constructing such workflows also enables benchmarking comparisons of the latest algorithms on practical data. We combine the latest methods in seismic event detection to analyse an 18-day period of aftershock seismicity for the Mw 6.4 2019 Durrës earthquake in Albania. We test two phase association-based event detection methods, the EarthQuake Transformer (EQT; Mousavi et al., 2020) end-to-end seismic detection workflow, and the PhaseNet (Zhu & Beroza, 2019) picker with the Hyperbolic Event eXtractor (Woollam et al., 2020) associator. Both ML approaches are benchmarked against a data set compiled by two independently operating seismic experts who processed a subset of events of this 18-day period. In total, PhaseNet & HEX identifies 3,551 events, and EQT detects 1,110 events with the larger catalog (PhaseNet & HEX) achieving a magnitude of completeness of ~1. By relocating the derived catalogs with the same minimum 1D velocity model, we calculate statistics on the resulting hypocentral locations and phase picks. We find that the ML-methods yield results consistent with manual pickers, with bias that is no larger than that between different pickers. The achieved fit after relocation is comparable to that of the manual picks but the increased number of picks per event for the ML pickers, especially PhaseNet, yields smaller hypocentral errors. The number of associated events per hour increases for seismically quiet times of the day, and the smallest magnitude events are detected throughout these periods, which we interpret to be indicative of true event associations.
... This method is efficient, often effective, but susceptible to noise and has low accuracy for arrival times, particularly for shear waves. Baer and Kradolfer (1987) improved the STA/LTA method using the envelope as the characteristic function. Sleeman and Van Eck (1999) applied joint autoregressive (AR) modeling of the noise and seismic signal and used the Akaike Information Criterion (AIC) to determine the onset of a seismic signal. ...
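The AIC idea mentioned here can be illustrated with a simple variance-split picker in the spirit of Maeda's formulation: the onset is the sample that best divides a window into two stationary segments. This is a sketch, not the full AR-AIC of Sleeman and Van Eck (1999):

```python
import numpy as np

def aic_pick(x):
    """AIC-style picker on a window assumed to contain one onset:
    AIC(k) = k*log(var(x[:k])) + (N-k)*log(var(x[k:])); the onset is
    the sample minimizing AIC, i.e. the best split of the window into
    a noise segment and a noise-plus-signal segment."""
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):          # avoid degenerate variances
        aic[k] = (k * np.log(x[:k].var()) +
                  (n - k) * np.log(x[k:].var()))
    return int(np.argmin(aic))

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 400)
x[250:] += rng.normal(0, 5, 150)       # variance jump marks the onset
onset = aic_pick(x)
```

Because the AIC minimum locates a change in variance rather than a threshold crossing, such pickers typically refine a coarse detection (e.g. from STA/LTA) rather than scan continuous data.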
Thesis
Seismic waveforms contain valuable information about earthquakes and earth structure. Dense seismic monitoring networks are deployed across the world and collect massive amounts of observational data; however, the sheer amount of this data poses a challenge for seismic data processing and analysis. Developing effective and efficient algorithms and models for seismic data analysis is thus important for studying earthquake physics, improving earthquake forecasting, and mitigating earthquake hazards. In the work presented in this thesis, I have explored a promising approach to advancing earthquake detection and inversion using deep learning. Deep learning has in recent years achieved super-human performance in solving many challenging problems, such as image recognition, protein folding, and playing Go or Atari games. In contrast to conventional algorithms that rely on expert-designed features and decision rules, deep neural networks can automatically learn characteristic features and statistical criteria from large training data sets accompanied by manual labels. The huge amount of archived seismic data collected in the past few decades provides excellent training resources for deep learning, making it a very promising approach to studying seismic signals and addressing research challenges, such as detecting hidden small earthquakes whose numbers dominate earthquake catalogs. However, at the time I started my PhD research, there was limited work on deep learning applications in seismology. To explore the potential of deep learning in seismology, I focused on two directions: First, I developed modular deep learning algorithms to improve earthquake monitoring including signal denoising, phase picking, phase association, and earthquake detection. 
The results of my work show that these deep learning algorithms significantly improve earthquake monitoring by detecting up to orders of magnitude more small earthquakes than are detected in standard catalogs, and by doing so reveal a far more detailed picture of earthquake sequences and fault structures. In addition, I utilized cloud computing to scale up our detection workflow to solve the big data challenge in mining large archived data sets. Second, I studied the connection between deep learning optimization and conventional seismic inversion, such as full-waveform inversion. I developed a new inversion approach to solving seismic inverse problems using automatic differentiation and proposed a new regularization method by parameterizing inversion targets using neural networks. The results show that the rapid development of deep learning frameworks and neural network architectures can improve seismic inversion to constrain physical parameters of interest from detected seismic waveforms. In all, these applications of deep learning to both earthquake monitoring and inverting underlying parameters demonstrate that deep learning is an effective tool to improve the extraction of useful information from seismic data and that it holds great promise for future developments in seismology.
... Among the early earthquake detection methods proposed by seismologists, the most representative ones can be categorized into three types: (1) amplitude- and energy-ratio-based, such as short/long time windows (Allen, 1978; Withers et al., 1998; Baer and Kradolfer, 1987); (2) waveform-similarity-based, such as the template matching algorithm (Peng ZG and Zhao P, 2009; Gibbons and Ringdal, 2006; Shelly et al., 2007) and the fingerprint and similarity thresholding (FAST) method (Yoon et al., 2015); and (3) traditional machine-learning-based (Wang J and Teng TL, 1997; Gentili and Michelini, 2006; Dai HC and MacBeth, 2007). In these traditional methods, features such as waveform polarization and sudden changes in amplitude and frequency are utilized to pick up seismic signals. ...
... This algorithm is still widely used because of its effectiveness and ease of application to seismic data compared to other methods (Trnkoczy, 2009). Other traditional pickers follow a similar approach to detect earthquakes: define a function to represent the features of incoming earthquake signals and use a preexisting threshold to detect and pick the phase (Baer and Kradolfer, 1987;Baillard et al., 2014;Cichowicz, 1993;Saragiotis et al., 2002;Sleeman and Van Eck, 1999). ...
Article
Full-text available
We applied automatic detection and picking of P and S waves to one year of continuous raw seismic data from 17 seismic stations in the Muong Te area, northwestern Vietnam. The deep-learning picker Earthquake Transformer performed automatic picking of P and S waves and phase association; we then located the earthquakes using the Hypoinverse and NonLinLoc programs. The newly derived catalog consists of 893 events, significantly more than in the manual catalog. From this new catalog, we can observe more earthquakes related to the Muong Te ML 4.9 earthquake on June 16, 2020, as well as earthquake activity on other faults such as the Dien Bien Phu and Muong Nhe faults. The extended catalog can support further study of the seismogenesis and the seismic velocity structure of the crust beneath northwestern Vietnam.
... Substantial progress in the automatic detection of seismic events has been achieved with traditional seismic detection algorithms, such as the short-term average/long-term average method (Allen, 1978; Baer and Kradolfer, 1987), the autoregressive model (Sleeman and Van Eck, 1999), and the approach based on higher-order statistics (Saragiotis et al., 2002). However, because picking methods use only some of the waveform characteristics, the picking accuracy is lower than that of manual processing, and these methods often suffer from false alerts. ...
Article
Full-text available
The accurate and reliable discrimination of earthquakes from background noise is a primary task of earthquake early warning (EEW); however, ubiquitous and complex microtremor signals substantially complicate this task. To mitigate this problem, a generative adversarial network (GAN) is adopted to distinguish between earthquakes and microtremors in this study. We train a GAN based on 52,537 K-NET and KiK-net strong ground motion records from Japan, and use the well-trained discriminator to identify 5373 P waves and 5373 microtremors in the testing set. The results indicate that this algorithm can correctly identify 99.89% of P waves and 99.24% of microtremors with high confidence. In addition, a verification of the proposed algorithm on data from the Great East Japan earthquake confirms that this model can achieve robust results for local records of large events and ultimately discriminate earthquakes from microtremors. This algorithm is an exploratory test of a GAN for identifying earthquake P waves. Though the GAN uses only P waves for training (There are no microtremors in the input data.), it has extensive potential in seismological and EEW applications.
... Another widely used picking algorithm is the one proposed by Baer and Kradolfer (1987). This algorithm is frequently applied, e.g. by 'Programmable Interactive Toolbox for Seismological Analysis' [PITSA, Scherbaum (1992)] and the picking system MannekenPix (Aldersons 2004). ...
Article
Full-text available
In this paper, we propose a novel picking algorithm for automatic P- and S-wave onset-time determination. Our algorithm is based on variance piecewise-constant models of the earthquake waveforms. The effectiveness and robustness of our picking algorithm are tested both on synthetic seismograms and on real data. We simulate seismic events with different magnitudes (between 2 and 5) recorded at different epicentral distances (between 10 and 250 km). For the application to real data, we analyse waveforms from the 2009 seismic sequence of L'Aquila (Italy). The obtained results are compared with those obtained by applying the classic STA/LTA picking algorithm. Although the two algorithms lead to similar results in the simulated scenarios, the proposed algorithm offers greater flexibility and automation capacity, as shown in the real-data analysis. Indeed, our proposed algorithm does not require testing and optimization phases, making it potentially very useful in routine earthquake analysis for new seismic networks or in regions whose earthquake characteristics are unknown.
... In this way, short-duration signal disturbances with comparatively large amplitudes are prevented from being detected as first arrivals. Evaluation methods of this kind use various filter/characteristic functions or detection criteria (BAER & KRADOLFER 1987, EARLE & SHEARER 1994). Since the noise and the useful signal of ultrasonic measurements often lie in the same frequency range, the application of such methods to the non-destructive testing of concrete is possible only to a limited extent (KURZ et al. 2005). An application of automatic evaluation methods to acoustic-emission analysis and the non-destructive testing of fresh concrete is given in ), REINHARDT et. ...
Thesis
Full-text available
In this work, structural changes of cement-bound building materials are characterized by two methods of non-destructive testing (NDT) with mechanical waves, both based on the ultrasound transmission technique. For the continuous characterization of the setting and hardening of fresh cement-bound systems, a measuring system based on ultrasonic sensors for longitudinal and shear waves, together with associated data-evaluation procedures, is designed, characterized, and applied. Compared with the hitherto common assessment of solidification solely by indirect ultrasonic parameters such as propagation velocity, signal energy, or the frequency content of the longitudinal wave, this enables a direct, sensitive recording of the dynamic elastic properties developing during structure formation on the basis of primary physical material parameters. Shear waves and the dynamic shear modulus in particular are suited to recording the gradual transition to a solid upon crossing the percolation threshold, sensitively and independently of the air content. The temporal evolution of the dynamic elastic properties, the structure-formation rates, and the discrete result parameters extracted from them enable a comparative quantitative characterization of the structure formation of cement-bound building materials from a mechanical point of view. Typical, often unavoidable differences in the composition of the test mixtures can thereby be taken into account. The use of laser-based methods for the excitation and detection of mechanical waves, and their combination into laser ultrasound, aims to eliminate the disadvantages associated with the conventional ultrasound transmission technique. These result from the sensor geometry, the mechanical coupling, and, for a large number of surface points, a high testing effort.
Compared with ultrasonic sensors, the laser-based interferometric detection of mechanical waves is noisy and relatively insensitive. As an essential prerequisite for the scanning application of laser ultrasound to cement-bound building materials, systematic experimental investigations of laser-induced ablative excitation are carried out. These are intended to contribute to understanding the excitation mechanism directly on the surfaces of cement-bound building materials, aggregates, and metallic materials, to identify relevant influencing factors arising from the characteristic material properties, to obtain suitable process parameters, and to reveal the limits of the method. Using longitudinal waves, laser ultrasound is applied to the time- and space-resolved characterization of the structure formation and homogeneity of fresh as well as hardened specimens of cement-bound building materials. During structure formation, simultaneous contactless detection of longitudinal and shear waves is performed for the first time. Using tomographic methods (2D travel-time tomography), overlap-free information on the spatial distribution of structural changes in the microstructure is obtained from the longitudinal propagation velocity and the relative dynamic modulus of elasticity within virtual cross-sections of damaged specimens. The combined freeze-thaw/de-icing-salt attack and the alkali-silica reaction (ASR) serve as exemplary concrete-damaging mechanisms. The non-destructive testing methods developed in this work offer extended possibilities for characterizing cement-bound building materials and their structural changes and can be employed in a targeted manner in materials development, in quality assurance, and in the analysis of damage processes and their causes.
... The first group comprises classical methods, consisting of statistical and rule-based approaches. Short Term Average (STA)/Long Term Average (LTA) algorithms [1,6,25] and heuristic pattern matching against previous earthquake waveforms [15,38] are among the classical methods. While heuristic pattern matching methods, applied to detect patterns similar to past earthquakes, achieve good sensitivity scores, they are prone to noise and slow on large-scale data. ...
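For orientation, the STA/LTA idea referenced throughout these entries can be sketched in a few lines: a short-window average of signal energy divided by a long-window average, triggering when the ratio exceeds a threshold. The window lengths and threshold below are illustrative defaults, not values taken from any cited study:

```python
import numpy as np

def sta_lta(x, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA characteristic function on a 1-D trace.

    x: waveform samples; fs: sampling rate (Hz).
    Both windows trail the current sample; the returned ratio[i]
    corresponds to the window ending at sample (nlta - 1) + i.
    """
    nsta = int(sta_win * fs)
    nlta = int(lta_win * fs)
    energy = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term mean energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term mean energy
    n = len(lta)                                # align both to the same end sample
    return sta[-n:] / np.maximum(lta, 1e-12)

def trigger_onset(ratio, on=4.0):
    """Index of the first sample where the ratio exceeds `on`, or -1."""
    idx = int(np.argmax(ratio > on))
    return idx if ratio[idx] > on else -1
```

Production pickers add detriggering thresholds, recursive averaging, and dynamic sensitivity (as in the Baer-Kradolfer picker this page concerns); this sketch only shows the core ratio test.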
Article
Full-text available
The earthquake prediction problem can be defined as: given a minimum Richter magnitude and a specified geographic region, predict the possibility of an earthquake in that region within a time interval. This problem has been studied for a long time, but little progress was made until the last decade. With the advancement of computational systems and deep learning models, significant results have been achieved. In this study, we introduce novel models using the structural recurrent neural network (SRNN) that capture spatial proximity and structural properties such as the existence of faults in regions. Experiments are carried out on two distinct regions, Turkey and China, whose scales and earthquake zones differ greatly. SRNN models achieve better performance than the baseline and state-of-the-art models. Especially the SRNNClass_near model, which captures the first-order spatial neighborhood and structural classification based on fault lines, results in the highest F1 score.
Conference Paper
Full-text available
The acoustic emission (AE) method has been widely used for investigating damage in materials and structures. It allows monitoring of damage progress within the material during loading. In conventional parameter-based AE analysis, only AE parameters are recorded and analyzed to clarify fracture behavior in materials. Quantitative analysis methods, in which the waveform itself is stored and analyzed, allow for a comprehensive fracture characterization of materials. A large number of research and case studies reveal that for civil infrastructure monitoring, parameter-based AE analyses are preferred over quantitative analyses. This is because concrete, the material most used for constructing civil structures, has a very complex nature, and without proper and adequate instrumentation, set-up, and algorithms, it is difficult to properly characterize fracture mechanisms with quantitative techniques. Still, it is very important to obtain fracture characteristics such as the time a crack developed, the location of the crack, and the type of the crack, to better understand the fracture behavior of a structural element under consideration, such as how this element behaves under certain load levels and what leads to its failure. There have been numerous challenges and solutions in the application of quantitative AE techniques to civil infrastructure, from reliable AE source localization to defect type identification. In this study, these applications and solution methods have been reviewed thoroughly and presented.
Article
With the wide application of high-density, high-productivity acquisition technology in complex oil-field areas, first-break picking of massive low signal-to-noise data is a challenging job. Conventional automatic first-break picking methods (the Akaike Information Criterion method, energy-ratio method, correlation method, boundary-detection method, etc.) require many manual adjustments due to their poor noise resistance, and these adjustments affect the accuracy and efficiency of picking. First-break picking takes up about one-third of the whole processing cycle, which severely restricts the progress of petroleum exploration and development. To overcome this shortcoming, this paper proposes an automatic first-break picking technique based on semantic segmentation. First, a time window for the primary wave is designed and a certain quantity of first breaks is picked from newly acquired data in different zones of the exploration area, using the common Akaike Information Criterion method with interactive adjustments; the data within the time window are then pre-processed to extract and enhance multiple first-break attribute features, yielding multidimensional feature data blocks, and the first breaks are labeled. Second, a UNet-like encoder-decoder network implements end-to-end feature learning from primary-wave attribute data to first-break labels, with the encoding and decoding stages fusing feature extraction and feature positioning. Each layer is normalized and the ReLU function serves as the nonlinearity, improving the generalization and sensitivity of the network model to low signal-to-noise primary waves. Finally, an optimized deep network model predicts the first breaks to improve the accuracy and efficiency of picking.
This method innovatively fuses multiple conventional automatic picking methods (the Akaike Information Criterion method, energy-ratio method, correlation method, boundary-detection method, etc.) to extract multiple attribute features of the primary wave, and improves first-break detection accuracy through the improved UNet-like encoder-decoder network. The feasibility of the new method is demonstrated on model data. A comparative test between the new method and the Akaike Information Criterion method on real data verifies that our method achieves higher picking accuracy and stable first-break processing for low signal-to-noise data. It shows a significant advantage on low signal-to-noise seismic records from high-productivity acquisition and can meet the accuracy and efficiency demands of near-surface model building and static calculation for massive data.
Preprint
Full-text available
In the frame of the AlpArray project we analyze teleseismic data from permanent and temporary stations of the greater Alpine region to study seismic discontinuities down to about 140 km depth. We average broadband teleseismic S waveform data to retrieve S-to-P converted signals from below the seismic stations. In order to avoid processing artefacts, no deconvolution or filtering is applied and S arrival times are used as reference. We show a number of north-south and east-west profiles through the greater Alpine area. The Moho signals are always seen very clearly, and also negative velocity gradients below the Moho are visible in a number of profiles. A Moho depression is visible along larger parts of the Alpine chain. It reaches its largest depth of 60 km beneath the Tauern Window. The Moho depression ends however abruptly near about 13° E below the eastern Tauern Window. The Moho depression may represent the mantle trench, where the Eurasian lithosphere is subducted below the Adriatic lithosphere. East of 13° E an important along-strike change occurs; the image of the Moho changes completely. No Moho deepening is found in this easterly region; instead the Moho is updoming along the contact between the European and the Adriatic lithosphere all the way into the Pannonian Basin. An important along strike change was also detected in the upper mantle structure at about 14° E. There, the lateral disappearance of a zone of negative P-wave velocity gradient indicates that the S-dipping European slab laterally terminates east of the Tauern Window in the axial zone of the Alps. The area east of about 13° E is known to have been affected by severe late-stage modifications of the structure of crust and uppermost mantle during the Miocene when the ALCAPA (Alpine, Carpathian, Pannonian) block was subject to E-directed lateral extrusion.
Article
We here present one lightweight phase picking network (LPPN) to pick P/S phases from continuous seismic recordings. It first classifies the phase type for a segment of waveform, and then performs regression to get accurate phase arrival time. The network is optimized using deep separable convolution to reduce the number of trainable parameters and improve its computation efficiency. Experiments using the STanford EArthquake Dataset (STEAD) show that the precision of LPPN can reach 95.2% and 83.7% with the recalls 94.4% and 84.7% for P and S phases, respectively. The classification–regression approach shows comparable performance to traditional point-to-point methods with lower computation cost. LPPN can be configured to have different model size and run on a wide range of devices. It is open-source and can support phase picking for large-scale dataset or in other speed sensitive scenarios.
Article
Impact-induced anomalies such as travelling waves can be found in many devices or structures. This article provides an experimental method to localize a point of impact in three-dimensional structures using an array of redundant sensors. Concurrent measurement of vibration waveforms in moving measurement points and stationary reference allows determining the times of arrival of the travelling wave and, more importantly, times of arrival relative to the reference. These relative times of arrival allow for employing more measurement points than channels in the data acquisition system. This work proposes a modified method to estimate the times of arrival by combining continuous wavelet transform with optimal interval partitioning. The work also considers that the path between the impact and the measurement point depends on the properties of the structure, causing apparent wave speeds to differ across the measurement points. Therefore, regular triangulation and multilateration methods, which assume equal wave speeds, offer a reduced accuracy. The localization is solved as a constraint optimization problem considering variable apparent speeds.
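The triangulation/multilateration step that the article above revisits can be sketched as a least-squares inversion of arrival times for source position and origin time, under the equal-wave-speed assumption the article identifies as a limitation. The sensor layout, wave speed, and starting guess below are illustrative, not taken from the study:

```python
import numpy as np

def locate_from_arrivals(sensors, t_arr, c, x0, iters=50):
    """Gauss-Newton source localization from absolute arrival times.

    sensors: (n, 2) array of known 2-D sensor positions.
    t_arr:   (n,) observed arrival times.
    c:       assumed single wave speed.
    x0:      starting guess (x, y); must not coincide with a sensor.
    Returns estimated (x, y) position and origin time t0.
    """
    p = np.array([x0[0], x0[1], 0.0])  # unknowns: x, y, origin time
    for _ in range(iters):
        d = np.linalg.norm(sensors - p[:2], axis=1)
        r = t_arr - (p[2] + d / c)          # travel-time residuals
        # Jacobian of predicted arrival time w.r.t. (x, y, t0)
        J = np.column_stack([
            (p[0] - sensors[:, 0]) / (c * d),
            (p[1] - sensors[:, 1]) / (c * d),
            np.ones(len(t_arr)),
        ])
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < 1e-12:
            break
    return p[:2], p[2]
```

With three unknowns, at least three sensors are needed; redundant sensors (as in the article) make the system overdetermined and improve robustness to timing errors.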
Preprint
Full-text available
Seismic data obtained from seismic stations are the major source of the information used to forecast earthquakes. With the growth in the number of seismic stations, the size of the dataset has also increased. Traditionally, STA/LTA and AIC method have been applied to process seismic data. However, the enormous size of the dataset reduces accuracy and increases the rate of missed detection of the P and S wave phase when using these traditional methods. To tackle these issues, we introduce the novel U-net-Bidirectional Long-Term Memory Deep Network (UBDN) which can automatically and accurately identify the P and S wave phases from seismic data. The U-net based UBDN strongly maintains the U-net’s high accuracy in edge detection for extracting seismic phase features. Meanwhile, it also reduces the missed detection rate by applying the Bidirectional Long Short-Term Memory (Bi-LSTM) mode that processes timing signals to establish the relationship between seismic phase features. Experimental results using the Stanford University seismic dataset and data from the 2008 Wenchuan earthquake aftershock confirm that the proposed UBDN method is very accurate and has a lower rate of missed phase detection, outperforming solutions that adapt traditional methods by an order of magnitude in terms of error percentage.
Article
Full-text available
Early detection of seismic events makes it possible to reduce material damage and the number of people affected, and even to save lives. Seismic activity in Ecuador is particularly high, since the country lies on the so-called Pacific Ring of Fire. In this context, this article aims to compare algorithms for the automatic detection of seismic events. The comparison addresses the functionality and the configuration of the parameters required by each algorithm. In addition, the implementation is carried out on a single-board computer (SBC) platform in order to obtain a portable, scalable, inexpensive tool with low computational cost. The methods compared are: Classic STA/LTA, Recursive STA/LTA, Delayed STA/LTA, Z-detector, Baer and Kradolfer picker, and AR-AIC (Autoregressive Akaike Information Criterion picker). For the evaluation and comparison, multiple experiments are carried out using real seismic records provided by the Red Sísmica del Austro (RSA) as input to the algorithms. The results show that the Classic STA/LTA algorithm performs best: of the 58 real events, only one went undetected. In addition, 6 false negatives were obtained, achieving 98.2% precision. The software used for the comparison of seismic event detection algorithms is freely available.
Article
The structure of a high-performance engine is becoming more and more complex, so it is very important to accurately and quickly locate the faults to ensure its operation safety. The acoustic emission (AE) signal which is caused by the structural damage of engines contains important structural integrity information; however, the accuracy of existing AE location methods is affected by complex structures, such as stiffeners, holes, variable wall thickness, and interface coupling. Therefore, this paper proposes a new AE source location method that combines the two-step Akaike information criterion (AIC) based on the dispersion curve and the time difference matrix (TDM). This method can precisely locate the faults of complex structures without considering the wave velocity. Through theoretical calculation and numerical simulation, this paper intends to show that the proposed method is superior to the traditional AIC and can track the arrival time of the AE signal more accurately. In addition, experiments on different structures illustrate that the proposed method has higher location accuracy in complex structures. This paper also analyzes the location sensitivity to the number and array of sensors in a complex structure and puts forward an optimal scheme of sensor layout on the condition of high location accuracy. The results show that the proposed method can be used as a reliable tool for AE source location and fault monitoring of complex structures. It will have a wide application prospect in the engine and other fields.
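The AIC arrival-time criterion that the entry above extends (and that several other entries use as a baseline) can be sketched with the generic variance-based AIC picker: the trace is split at each candidate sample, and the split minimizing the combined log-variance of the two segments marks the onset. This is a sketch of the standard criterion, not the two-step dispersion-curve method of the article:

```python
import numpy as np

def aic_pick(x):
    """Variance-based AIC onset pick on a windowed trace.

    AIC(k) = k * log(var(x[:k])) + (N - k - 1) * log(var(x[k:]))
    The global minimum of AIC(k) marks the most likely change point
    from noise to signal. Endpoints are excluded to keep both
    segment variances well defined.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1 = np.var(x[:k])   # noise-segment variance
        v2 = np.var(x[k:])   # signal-segment variance
        if v1 > 0.0 and v2 > 0.0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return int(np.argmin(aic))
```

In practice the window is first narrowed by a coarse detector (e.g. STA/LTA), since the AIC minimum is only meaningful when exactly one onset lies inside the window.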
Chapter
Quantitative methods in acoustic emission (AE) analysis require localization techniques to estimate the source coordinates of the AE events as accurately as possible. There are a number of different ways to localize AE sources in practice, i.e. to obtain the desired point estimate in one, two, or three dimensions. This chapter starts with approaches for automated onset detection, since travel time information is one of the most critical input parameters for most localization approaches. In general, most localization methods presented in this chapter have in common that the travel time information from source to receiver is used for localizing an AE source. Most of the methods of AE localization discussed here were developed in the framework of earthquake seismology and GPS techniques. Array-type approaches, which were designed especially for plate-like structures, are also discussed. Different techniques for one-, two-, and three-dimensional source localization are described. Approaches based on numerical inversion as well as grid-search and array localization approaches are discussed. Further concepts developed or adapted for the AE localization problem presented in this chapter use, e.g., neural networks, probabilistic approaches, or direct algebraic methods from GPS technology. Localization accuracy is influenced by various factors. Therefore, how to determine localization errors, and some measures to ensure high localization accuracy, are also listed and discussed.
Article
The Swiss Seismological Service (SED) at ETH has been developing methods and open-source software for Earthquake Early Warning (EEW) for more than a decade and has been using SeisComP for earthquake monitoring since 2012. The SED has built a comprehensive set of SeisComP modules that can provide EEW solutions in a quick and transparent manner by any seismic service operating SeisComP. To date, implementations of the Virtual Seismologist (VS) and Finite-Fault Rupture Detector (FinDer) EEW algorithms are available. VS provides rapid EEW magnitudes building on existing SeisComP detection and location modules for point-source origins. FinDer matches growing patterns of observed high-frequency seismic acceleration amplitudes with modeled templates to identify rupture extent, and hence can infer on-going finite-fault rupture in real-time. Together these methods can provide EEW for all event dimensions from moderate to great, if a high quality, EEW-ready, seismic network is available. In this paper, we benchmark the performance of this SeisComP-based EEW system using recent seismicity in Switzerland. Both algorithms are observed to be similarly fast and can often produce first EEW alerts within 4–6 s of origin time. In real time performance, the median delay for the first VS alert is 8.7 s after origin time (56 earthquakes since 2014, from M2.7 to M4.6), and 7 s for FinDer (10 earthquakes since 2017, from M2.7 to M4.3). The median value for the travel time of the P waves from event origin to the fourth station accounts for 3.5 s of delay, with an additional 1.4 s for real-time data sample delays. We demonstrate that operating two independent algorithms provides redundancy and tolerance to failures of a single algorithm. This is documented with the case of a moderate M3.9 event that occurred seconds after a quarry blast, where picks from both events produced a 4 s delay in the pick-based VS, while FinDer performed as expected.
Operating on the Swiss Seismic Network, that is being continuously optimised for EEW, the SED-ETHZ SeisComP EEW system is achieving performance that is comparable to operational EEW systems around the world.
Article
Full-text available
We developed an automatic seismic wave and phase detection software based on PhaseNet, an efficient and highly generalized deep learning neural network for P- and S-wave phase picking. The software organically combines multiple modules including an application terminal interface, docker container, data visualization, SSH protocol data transmission, and other auxiliary modules. Characterized by a series of technologically powerful functions, the software is highly convenient for all users. To obtain the P- and S-wave picks, one only needs to prepare three-component seismic data as input and customize some parameters in the interface. In particular, the software can automatically identify complex waveforms (i.e. continuous or truncated waves) and supports multiple types of input data such as SAC, MSEED, NumPy array, etc. A test on the dataset of the Wenchuan aftershocks shows the generalization ability and detection accuracy of the software. The software is expected to improve the efficiency and reduce the subjectivity of manually processing large amounts of seismic data, thereby providing convenience to regional network monitoring staff and researchers in the study of Earth's interior.
Preprint
Full-text available
PhaseNet and EQTransformer are two state-of-the-art earthquake detection methods that have been increasingly applied worldwide. To evaluate the generalization ability of the two models and provide insights for the development of new models, this study took the sequences of the Yunnan Yangbi M 6.4 earthquake and Qinghai Maduo M 7.4 earthquake as examples to compare the earthquake detection effects of the two abovementioned models as well as their abilities to process dense seismic sequences. It has been demonstrated from the corresponding research that due to the differences in seismic waveforms found in different geographical regions, the picking performance is reduced when the two models are applied directly to the detection of the Yangbi and Maduo earthquakes. PhaseNet has a higher recall than EQTransformer, but the recall of both models is reduced by 13–56% when compared with the results reported in the original papers. The analysis results indicate that neural networks with deeper layers and complex structures may not necessarily enhance earthquake detection performance. In designing earthquake detection models, attention should be paid to not only the balance of depth, width, and architecture but also to the quality and quantity of the training datasets. In addition, noise datasets should be incorporated during training. According to the continuous waveforms detected 21 days before the Yangbi and Maduo earthquakes, the Yangbi earthquake exhibited foreshock, while the Maduo earthquake showed no foreshock activity, indicating that the two earthquakes’ nucleation processes were different.
Article
Full-text available
The increase of available seismic data prompts the need for automatic processing procedures to fully exploit them. A good example is aftershock sequences recorded by temporary seismic networks, whose thorough analysis is challenging because of the high seismicity rate and station density. Here, we test the performance of two recent Deep Learning algorithms, the Generalized Phase Detection and Earthquake Transformer, for automatic seismic phases identification. We use data from the December 2019 Mugello basin (Northern Apennines, Italy) swarm, recorded on 13 permanent and nine temporary stations, applying these automatic procedures under different network configurations. As a benchmark, we use a catalog of 279 manually repicked earthquakes reported by the Italian National Seismic Network. Due to the ability of deep learning techniques to identify earthquakes under poor signal‐to‐noise‐ratio (SNR) conditions, we obtain: (a) a factor 3 increase in the number of locations with respect to INGV bulletin and (b) a factor 4 increase when stations from the temporary network are added. Comparison between deep learning and manually picked arrival times shows a mean difference of 0.02–0.04 s and a variance in the range 0.02–0.07 s. The improvement in magnitude completeness is ∼0.5 units. The deep learning algorithms were originally trained using data sets from different regions of the world: our results indicate that these can be successfully applied in our case, without any significant modification. Deep learning algorithms are efficient and accurate tools for data reprocessing in order to better understand the space‐time evolution of earthquake sequences.
Article
On 3 May 2020, an ML 3.1 earthquake occurred in Haenam, southwestern Korea, in an area devoid of recorded seismicity since instrumental observations began in 1978. Careful examination of the temporal occurrence of seismicity, and the magnitude distribution of the sequence before and after the ML 3.1 earthquake, indicates typical swarm-like behavior. The earthquake swarm started with an ML 0.6 event on 26 April 2020, intensified up to 3 May 2020, and abruptly terminated with an ML 1.0 event on 9 May 2020. The Pusan National University Geophysics Laboratory (PNUGL) deployed a temporary seismic array with eight three-component short-period instruments to monitor the short-lived bursts of seismicity. During the monitoring campaign, we detected > 700 microearthquakes by applying a matched-filter technique to the combined dataset produced by PNUGL, the Korea Meteorological Administration, and the Korea Institute of Ocean Science and Technology. We determined earthquake parameters for 299 earthquakes that were detected at four or more seismic stations. We also determined the focal mechanism solutions of the 10 largest earthquakes in the swarm using first-motion polarities with S/P ratios. The focal mechanism, hypocentral depth, and stress orientation of the largest earthquake in the sequence were also determined using waveform inversions. The distribution of earthquake hypocenters, together with focal mechanism solutions, indicates that the earthquake swarm activated deeply-buried faults (~20 km) oriented either NNE-SSW or WNW-ESE. We also report details of the temporary seismic monitoring network, including the instrumentation, detection of microearthquakes, and variations in event-detection threshold influenced by anthropogenic and natural noise fluctuations. We also discuss the limitations associated with lowering the detection threshold of microearthquakes by increasing the number of seismic stations or by adopting advanced event-detection techniques.