Article

Suppression of multiple reflections using the Radon transform


Abstract

Multiple suppression using a variant of the Radon transform is discussed. This transform differs from the classical Radon transform in that the integration surfaces are hyperbolic rather than planar. This specific hyperbolic surface is equivalent to parabolae in terms of computational expense but more accurately distinguishes multiples from primary reflections. The forward transform separates seismic arrivals by their differences in traveltime moveout. Multiples can be suppressed by an inverse transform of only part of the data. Examples show that multiples are effectively attenuated in prestack and stacked seismograms.
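The forward transform described in the abstract can be illustrated with a minimal time-domain stacking (adjoint) operator that sums amplitudes along hyperbolic paths t = sqrt(τ² + (x/v)²). This is a generic numpy sketch, not the authors' implementation; the grids and the single-spike gather are invented for the demonstration:

```python
import numpy as np

def hyperbolic_radon_adjoint(d, t, x, tau, v):
    """Stack data along hyperbolic paths t = sqrt(tau^2 + (x/v)^2).

    This is only the adjoint (stacking) operator; the paper inverts the
    transform, which this sketch does not attempt."""
    dt = t[1] - t[0]
    nt = len(t)
    m = np.zeros((len(tau), len(v)))
    for i, t0 in enumerate(tau):
        for j, vj in enumerate(v):
            th = np.sqrt(t0 ** 2 + (x / vj) ** 2)   # traveltime per offset
            it = np.round(th / dt).astype(int)      # nearest time sample
            ok = it < nt
            m[i, j] = d[it[ok], np.nonzero(ok)[0]].sum()
    return m

# Synthetic CMP gather with one hyperbolic event (tau0 = 0.4 s, v0 = 2000 m/s)
t = np.arange(0.0, 1.0, 0.004)
x = np.arange(0.0, 1000.0, 50.0)
d = np.zeros((len(t), len(x)))
tau0, v0 = 0.4, 2000.0
for k, xk in enumerate(x):
    d[int(round(np.sqrt(tau0 ** 2 + (xk / v0) ** 2) / 0.004)), k] = 1.0

tau = np.arange(0.2, 0.6, 0.02)
v = np.arange(1500.0, 2500.0, 100.0)
m = hyperbolic_radon_adjoint(d, t, x, tau, v)
i, j = np.unravel_index(np.argmax(m), m.shape)
```

The stack focuses at the event's (τ, v), which is the property the transform exploits to separate events by their moveout differences.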


... Multiple attenuation in the CMP domain is still a key component, as multiples can mislead reflector positioning in migration and corrupt quantitative seismic amplitude analysis and interpretation [1,2]. Most multiple attenuation methods in the CMP domain are based on the periodicity of multiples or on velocity-stacking differences between multiples and primaries, such as predictive deconvolution, the Radon transform [3][4][5] and velocity stacking [6][7][8], which work at near or far offsets for surface-related or interbed multiples, respectively. In addition, Ref. [9] proposed a modification of the inverse data space method, first introduced by [10], for internal multiple attenuation in CMP gathers with dramatically reduced computational overhead compared with shot gathers. ...
... 0.0, 0.1) s/km are also extracted and plotted in Figure 6d-f, respectively. All prediction errors in both the shot and the CMP domain are zero at dipping angle α = 0° in Figure 6d-f, which agrees with Equation (3). However, at fixed horizontal slowness, the prediction errors for the shot and the CMP gathers do not vary linearly with dipping angle, and only limited advantages are found in the CMP domain. ...
... (a) The shot gather created using the velocity model shown in Figure 7 with the source location indicated as the red dot. (b) The internal multiple prediction using Equation (3). (c) The shot gather after the least-squares matching subtraction, i.e., c = a - factor*b. ...
Article
Full-text available
Internal multiple prediction remains a high-priority problem in seismic data processing, such as subsurface imaging and quantitative amplitude analysis and inversion, particularly in the common-midpoint (CMP) gathers, which contain multicoverage reflection information of the subsurface. Internal multiples, generated by unknown reflectors in complex environments, can be reconstructed with certain combinations of seismic reflection events using the inverse scattering series internal multiple prediction algorithm, which is usually applied to shot records in source–receiver coordinates. The computational overhead is one of the major challenges limiting the strength of the multidimensional implementation of the prediction algorithm, even in the coupled plane-wave domain. In this paper, we first comprehensively review the plane-wave domain inverse scattering series internal multiple prediction algorithm, and we propose a new scheme of achieving 2D multiple attenuation using a 1.5D prediction algorithm in the CMP domain, which significantly reduces the computational burden. Moreover, we quantify the difference in behavior of the 1.5D prediction algorithm for the shot/receiver and the CMP gathers on tilted strata. Numerical analysis of prediction errors shows that the 1.5D algorithm is more capable of handling dipping generators in the CMP domain than in the shot/receiver gathers, and it is able to predict the accredited traveltimes of internal multiples caused by dipping reflectors with small inclinations. For more complex cases with large inclination, using the 1.5D prediction algorithm, internal multiple predictions fail both in the CMP domain and in the shot/receiver gathers, which require the full 2D prediction algorithm. To attenuate internal multiples in the CMP gathers generated by large-dipping strata, a modified version is proposed based on the full 2D plane-wave domain internal multiple prediction algorithm. 
The results show that the traveltimes of internal multiples caused by dipping generators seen in the simple benchmark example are correctly predicted in the CMP domain using the modified 2D prediction algorithm.
... 2. Predict multiple energy and then subtract it from the seismic data (Dragoset & Jeričević, 1998). Other authors classified MAM into three classes (Wang et al., 2023): filtering-based methods (Treitel, 1969; Taner, 1980; Weglein et al., 1997); transformation methods in a particular domain (Foster & Mosher, 1992; Yilmaz, 2001; Schonewille & Aaron, 2007); and machine learning (ML) techniques (Tao et al., 2022), including deep neural networks (Li and Gao, 2020; Li et al., 2021; Wang et al., 2022). ...
... Depending on differences in NMO velocity (VNMO), multiples and primaries can be transformed into specific domains where they can be separated more effectively than in the time domain (t-x domain). Common methods that exploit different VNMO are f-k filtering (Sengbush, 1983), cluster filtering, and Radon transform-based methods (Hampson, 1986; Foster & Mosher, 1992; Sacchi and Ulrych, 1995; Hargreaves and Cooper, 2001; Trad, 2003; Li & Lu, 2014; Li and Yue, 2017). ...
Article
This article presents the pre-stack attenuation of multiple noise in a 2-D land seismic survey acquired on the Iraq-Kuwait border in southern Iraq. The processing workflow, performed using CGG's Geovation software, employs a de-aliased high-resolution Radon transform with a non-iterative process for seismic reflection data. First, we apply noise attenuation, surface (predictive) deconvolution, and normal moveout (NMO) correction, executed using the root-mean-square (RMS) velocity to flatten the primary events, and the common-midpoint (CMP) gathers are converted into the Radon domain using a least-squares parabolic Radon transform. Several parameter tests were performed to predict the multiple events using a high-resolution de-aliased multiple attenuation (RAMUR) algorithm. The RAMUR algorithm computes a model of primary and multiple events based on data decomposition into user-defined parabolas, performed with a high-resolution, de-aliased least-squares method. The Radon transform separates primary and multiple reflection events in the (τ-q) domain based on moveout differences between them, multiples being characterized by slower velocities. Reflection events along parabolas with higher curvature are counted as multiples, whereas events along parabolas with smaller curvature are counted as primaries. The RAMUR algorithm subtracts the model of multiples from the input gathers. The workflow effectively attenuates reflection-generated multiple events, improves the overall seismic response after imaging, and enhances correlation with well information.
... The parabolic Radon transform, which is arguably the most widely used technique in the industry, was successful in removing multiples provided the velocities are estimated correctly (Foster and Mosher, 1992; Russell et al., 1990; Sacchi and Porsani, 1999). Input to Radon entails a perfect transformation of CDP gathers from the t-x domain into the τ-p domain. ...
... Multiples are muted in the τ-q domain and the muted gathers are transformed back to the time-space domain to obtain multiple-free CDP gathers. The parabolic Radon transform is reasonably successful in multiple elimination if correct primary velocities are used (Foster and Mosher, 1992; Russell et al., 1990; Sacchi and Porsani, 1999). Advancements in Radon algorithms have been made to improve computational performance and efficiency (B. ...
... Many methods have been developed, including stacking, FK filtering, the Radon transform, deconvolution and the feedback loop. They make statistical assumptions, assume move-out differences, or require knowledge of the subsurface and the generators of the multiples (e.g., Foster and Mosher, 1992; Verschuur et al., 1992; Berkhout and Verschuur, 1997; Jakubowicz, 1998; Robinson and Treitel, 2008; Wu and Wang, 2011; Meles et al., 2015; da Costa Filho et al., 2017; Lomas and Curtis, 2019). As the industry moved to deep water and more complex onshore and offshore plays, these methods bumped up against their assumptions. ...
Article
Full-text available
Multiple removal is a longstanding problem in exploration seismology. Many methods have been developed, including stacking, FK filtering, the Radon transform, deconvolution and the feedback loop. They make statistical assumptions, assume move-out differences, or require knowledge of the subsurface and the generators of the multiples (e.g., Foster and Mosher, 1992; Verschuur et al.; da Costa Filho et al., 2017; Lomas and Curtis, 2019). As the industry moved to deep water and more complex onshore and offshore plays, these methods bumped up against their assumptions. The Inverse Scattering Series (ISS) internal-multiple-attenuation algorithm (Araújo et al., 1994; Weglein et al., 1997; Weglein et al., 2003) makes none of the assumptions of the previous methods listed above, stands alone, and is unique in its effectiveness when the subsurface and the generators are complicated and unknown. It is the only multi-dimensional internal-multiple-removal method that can predict all internal multiples with exact arrival time and approximate amplitude without requiring any subsurface information. The ISS internal-multiple-attenuation algorithm is usually combined with an energy-minimization adaptive subtraction to remove internal multiples; for isolated internal multiples, this combination is successful and effective. However, when internal multiples are proximal to and/or interfering with primaries or other events, the criteria behind energy-minimization adaptive subtraction can fail (e.g., the energy can increase rather than decrease when a multiple is removed from a destructively interfering primary and multiple). With interfering events, energy-minimization adaptive subtraction can damage the target primary, which is the worst possible outcome.
In this paper, we provide the first multi-dimensional ISS internal-multiple-elimination algorithm that can predict both the correct time and amplitude of internal multiples. This is an important part of a three-pronged strategy proposed by Weglein at the 2013 SEG International Conference (Weglein, 2014). Herrera and Weglein (2012) proposed a 1D ISS internal-multiple-elimination algorithm for all first-order internal multiples generated at the shallowest reflector. Y. Zou and Weglein (2014) then went further and developed and illustrated an elimination algorithm that can eliminate all first-order internal multiples generated by all reflectors for a 1D earth. In this paper we provide the first multi-dimensional ISS internal-multiple-elimination method that can remove internal multiples interfering with primaries, without subsurface information and without damaging the primary. We also compare the ISS elimination result with ISS attenuation plus energy-minimization adaptive subtraction for an interfering primary and internal multiple. This ISS internal-multiple-elimination algorithm is more effective and more compute-intensive than the current most capable ISS attenuation-plus-adaptive-subtraction method. We provide it as a new capability in the multiple-removal toolbox and a new option for circumstances when this type of capability is called for, indicated and necessary; that can frequently occur in offshore and onshore conventional and unconventional plays. We are exploring methods to reduce the computational cost of these ISS attenuation and elimination algorithms without compromising effectiveness.
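For reference, the 1D normal-incidence form of the ISS internal-multiple-attenuation algorithm cited above (Araújo et al., 1994; Weglein et al., 1997) is usually written as follows, with $b_1$ the constant-velocity-migrated data as a function of pseudo-depth $z$ and $\varepsilon$ a small positive parameter:

```latex
b_3(k) = \int_{-\infty}^{\infty} \mathrm{d}z_1\, e^{ikz_1} b_1(z_1)
         \int_{-\infty}^{z_1-\varepsilon} \mathrm{d}z_2\, e^{-ikz_2} b_1(z_2)
         \int_{z_2+\varepsilon}^{\infty} \mathrm{d}z_3\, e^{ikz_3} b_1(z_3)
```

The lower-higher-lower pseudo-depth ordering $z_1 > z_2 < z_3$ selects exactly the subevent combinations that build first-order internal multiples, which is why the algorithm needs no subsurface information.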
... The unmigrated synthetic seismic forward model performed with constant interval velocities (Fig. 9A) is in itself a very useful tool for studying the subsurface in the absence of data (Anselmetti et al., 1997; Duchesne et al., 2006; Falivene et al., 2010; Mascolo and Lecomte, 2021; Tomassi et al., 2022) or for calibrating well data (Morozov and Ma, 2009). However, this seismic forward model is affected by artifacts such as multiple reflections (Claerbout, 1971; Foster and Mosher, 1992; Berkhout and Verschuur, 2006) and by shallow high-amplitude reflectors or topography-generated diffractions that hinder clear interpretation. These signals are difficult to decode and interpret as geologic horizons. ...
Article
Carbonate ramp systems present significant seismic interpretation challenges due to their pronounced facies heterogeneity, which frequently results in chaotic seismic outputs that obscure underlying geological structures. The Porto Badisco Calcarenite in Salento, Southern Italy, an Oligocene carbonate ramp, serves as the case study for this research, offering an analogue for understanding similar geological systems. By integrating fieldwork, laboratory analysis, and Matlab modelling, this study pioneers the use of detailed petrophysical data to construct innovative velocity models based on the velocity ranges of the different lithofacies analysed. These models distinctly illustrate the impact of facies heterogeneity on seismic velocities, providing fresh insights into acoustic impedance and variable propagation velocities across different facies constituting the carbonate ramp. Through advanced high-resolution synthetic seismic modelling conducted on carefully fine-tuned unmigrated stack sections, the research demonstrates how variations in petrophysical characteristics within measured ranges reflecting carbonate textures can dramatically alter seismic imaging. The innovative models based on propagation velocity ranges, not only deepen the understanding of the seismic representation of lithofacies but also act as a potent tool for probing the subsurface architecture of complex carbonate systems providing an interpretative key for the analysis of seismic images. This approach signifies a substantial advancement in seismic modelling, aimed at refining interpretations and enhancing exploration strategies in carbonate ramp environments globally.
... Transforms that focus seismic signals can be used as powerful processing tools. Radon transforms have therefore been widely used in seismic data processing for many applications such as interpolation (Kabir and Verschuur, 1995; Sacchi and Ulrych, 1995a; Trad et al., 2002), multiple separation (Hampson, 1986a; Foster and Mosher, 1992; Landa and Baina, 2015), noise removal (Russell et al., 1990a,b) and microseismic signal detection (Sabbione et al., 2013). ...
Preprint
The advent of high-density 3D wide-azimuth survey configurations has greatly increased the cost of seismic acquisition. Simultaneous source acquisition presents an opportunity to decrease costs by reducing the survey time. Source time delays are typically long enough for seismic reflection energy to decay to negligible levels before firing another source. Simultaneous source acquisition abandons this minimum time restriction and allows interference between seismic sources to compress the survey time. Seismic data processing methods must address the interference introduced by simultaneous overlapping sources. Simultaneous source data are characterized by high-amplitude interference artefacts that may be stronger than the primary signal. These large amplitudes are due to the time delay between sources and the rapid decay of seismic energy with arrival time. Therefore, source interference will appear as outliers in denoising algorithms that make use of a Radon transform. This will reduce the accuracy of Radon transform denoising, especially for weak signals. Formulating the Radon transform as an inverse problem with an L1 misfit makes it robust to outliers caused by source interference. This provides the ability to attenuate strong source interference while preserving weak underlying signal. In order to improve coherent signal focusing, an apex-shifted hyperbolic Radon transform (ASHRT) is used to remove source interference. ASHRT basis functions are tailored to match the traveltime hyperbolas of reflections in common receiver gathers. However, the ASHRT has a high computational cost due to the extension of the model dimensions by scanning for apex locations. By reformulating the ASHRT operator using a Stolt migration/demigration kernel that exploits the Fast Fourier Transform (FFT), the computational efficiency of the operator is drastically improved.
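The L1-misfit formulation described above is commonly solved with iteratively reweighted least squares (IRLS). The sketch below applies that generic idea to a small dense system standing in for the Radon operator; it illustrates the robustness argument, and is not the preprint's algorithm:

```python
import numpy as np

def irls_l1(L, d, n_iter=20, eps=1e-6):
    """Minimize ||L m - d||_1 via iteratively reweighted least squares.
    Large residuals (outliers such as source interference) receive small
    weights, so they cannot dominate the fit the way they do under L2."""
    m = np.linalg.lstsq(L, d, rcond=None)[0]       # L2 starting point
    for _ in range(n_iter):
        r = L @ m - d
        w = 1.0 / np.maximum(np.abs(r), eps)       # L1 reweighting
        Lw = L * w[:, None]                        # row-weighted operator
        m = np.linalg.solve(L.T @ Lw, Lw.T @ d)    # weighted normal equations
    return m

rng = np.random.default_rng(0)
L = rng.normal(size=(200, 10))     # stand-in for a Radon operator
m_true = rng.normal(size=10)
d = L @ m_true
d[::20] += 50.0                    # ten strong "interference" spikes
m_l2 = np.linalg.lstsq(L, d, rcond=None)[0]
m_l1 = irls_l1(L, d)
```

The L2 fit is biased by the ten spikes, while the reweighted solve down-weights them and recovers the model almost exactly.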
... Mapping based on hyperbolic arrivals allows for a precise and sparse representation of seismic data as seismic signals show up as hyperbolic events on standard seismic surveys. The Radon transform has been employed in various applications, including velocity analysis [15], multiple suppression [16], interpolation in the frequency or time domain [17], and deblending via denoising [8]. ...
Article
Full-text available
We implement an inversion-based deblending method in the common-midpoint (CMP) gathers as an alternative to the standard common receiver gather (CRG) domain methods. The primary advantage of deblending in the CMP domain is that reflections from dipping layers are centred around zero offset. As a result, CMP gathers exhibit a simpler structure compared to CRGs, where these reflections are apex-shifted. Consequently, we can employ a zero-offset hyperbolic Radon operator to process CMP gathers. This operator is a computationally more efficient alternative to the apex-shifted hyperbolic Radon required for processing CRG gathers. Sparse transforms, such as the Radon transform, can stack reflections and produce sparse models capable of separating blended sources. We utilize the Radon operator to develop an inversion-based deblending framework that incorporates a sparse model constraint. The inclusion of a sparsity constraint in the inversion process enhances the focusing of the transform and improves data recovery. Inversion-based deblending enables us to account for all observed data by incorporating the blending operator into the cost function. Our synthetic and field data examples demonstrate that inversion-based deblending in the CMP domain can effectively separate blended sources.
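The sparsity-constrained inversion described in the abstract can be sketched with a generic iterative shrinkage-thresholding algorithm (ISTA) on a toy operator. This is not the authors' deblending code; the operator, coefficients and regularization weight are invented:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(L, d, lam, n_iter=500):
    """ISTA for min 0.5*||L m - d||^2 + lam*||m||_1: the generic
    sparsity-constrained inversion that focuses a Radon-style model."""
    alpha = 1.0 / np.linalg.norm(L, 2) ** 2        # step from the spectral norm
    m = np.zeros(L.shape[1])
    for _ in range(n_iter):
        m = soft(m + alpha * (L.T @ (d - L @ m)), alpha * lam)
    return m

rng = np.random.default_rng(1)
L = rng.normal(size=(100, 50)) / 10.0              # toy stand-in operator
m_true = np.zeros(50)
m_true[[5, 17, 33]] = [2.0, -1.5, 1.0]             # three focused events
d = L @ m_true + 0.01 * rng.normal(size=100)       # noisy observed data
m_hat = ista(L, d, lam=0.05)
```

The sparsity constraint drives all but the few true coefficients to zero, which is exactly the focusing that lets a sparse Radon model separate blended sources.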
... F-K filtering-based processing [5] can suppress coherent noise to some extent, but it may damage the effective signal. Suppressing coherent noise in the Radon domain has also been tried several times [6][7][8], but low resolution remains a problem. Rabiner et al. [9] created a median-filtering denoising method, from which many derivative algorithms were later produced [10,11]. ...
Article
Full-text available
In passive seismic exploration, the number and location of underground sources are very random, and there may be few passive sources or an uneven spatial distribution. The random distribution of seismic sources can cause the virtual shot recordings to produce artifacts and coherent noise. These artifacts and coherent noise interfere with the valid information in the virtual shot record, making the virtual shot record a poorer presentation of subsurface information. In this paper, we utilize the powerful learning and data processing abilities of convolutional neural networks to process virtual shot recordings of sources in undesirable situations. We add an adaptive attention mechanism to the network so that it can automatically lock the positions that need special attention and processing in the virtual shot records. After testing, the trained network can eliminate coherent noise and artifacts and restore real reflected waves. Protecting valid signals means restoring valid signals with waveform anomalies to a reasonable shape.
... For example, the Radon transform algorithm fails in applications where the common-depth-point spacing is too large. Under these conditions, spatial aliasing occurs, which prevents the algorithm from successfully removing multiples (Foster and Mosher, 1992; Zhang et al., 2019). ...
Article
Full-text available
The seismic data interpolation method has been widely used to increase the fold coverage in seismic data processing. This technique can be applied to convert multiple 2D lines into pseudo-3D data, an alternative to acquiring a 3D seismic volume given the relatively high acquisition cost. However, the quality of the seismic interpolation results is not the same as that of real 3D seismic acquisition, and this study carefully analyzed these differences to understand how accurate the results were. Two interpolation methods, Unaliased f-k trace interpolation (UFKI) and Regularized Interpolation Nonstationary Autoregression (RNA), are applied to 2D pre-stack data to increase the fold coverage and to 3D data to convert multiple 2D lines into pseudo-3D. The interpolation results on the pre-stack data are then evaluated on the 2D and 3D data, and amplitude changes are analyzed to test whether the amplitudes of the interpolated seismic data are relatively preserved, based on changes in the AVO response. The results show that the interpolation process in the receiver and shot gather domains (UFKI and RNA) can increase the fold coverage while preserving relative amplitudes and the AVO response.
... where η = λ/ξ. Finally, the optimization problem of Equation (7) can be solved through several iterations and a sparse Radon model m can be obtained using Equation (13). It is important to note that only a single pseudoinverse matrix (LᵀL + (2β + ξ)I)⁻¹ needs to be calculated in the procedure. ...
Article
Full-text available
Multiple reflections are a common type of interference in offshore oil and gas exploration, and Radon-based filtering is a frequently used approach for multiple removal. However, the filtering parameter setting is crucial in multiple suppression and relies heavily on the experience of processors. To reduce the dependence on human intervention, we introduce the geometric mode decomposition (GMD) and develop a novel processing flow that can automatically separate primaries and multiples and then accomplish the suppression of multiples. GMD leverages the principle of Wiener filtering to iteratively decompose the data into modes of varying curvature and intercept. By exploiting the differences in curvature, GMD can separate primary modes and multiple modes. Then, we propose a novel sparse Radon transform (RT) constrained with the elastic half (EH) norm. The EH norm contains an l1/2 norm and a scaled l2 norm, the latter added to overcome the numerical oscillation problem of the l1/2 norm. With the help of the EH norm, the estimated Radon model can reach a remarkable level of sparsity. To solve the optimization problem of the proposed sparse RT, an efficient alternating multiplier iteration algorithm is employed. Leveraging the high sparsity of the Radon model obtained from the proposed transform, we improve the GMD-based multiple removal framework. The high-sparsity Radon model can not only simplify the separation of primary and multiple modes but also accelerate the convergence of GMD, thus improving the processing efficiency of the GMD method. The performance of the proposed GMD-based framework in multiple elimination is validated through synthetic and field data tests.
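The point quoted above, that a single regularized pseudoinverse can be computed once and reused across all iterations, is the standard structure of ADMM-style sparse Radon solvers. Below is a hedged sketch of that generic structure with an invented dense operator; it is not the paper's GMD/EH-norm algorithm (an l1 proximal step stands in for the EH norm):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_radon_admm(L, d, lam=0.05, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||L m - d||^2 + lam*||m||_1.
    The regularized pseudoinverse (L^T L + rho I)^{-1} is computed once,
    outside the loop, and reused at every iteration."""
    n = L.shape[1]
    P = np.linalg.inv(L.T @ L + rho * np.eye(n))   # single pseudoinverse
    Ltd = L.T @ d
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        m = P @ (Ltd + rho * (z - u))   # quadratic (data-fit) subproblem
        z = soft(m + u, lam / rho)      # sparsity-promoting proximal step
        u += m - z                      # dual update
    return z

rng = np.random.default_rng(2)
L = rng.normal(size=(80, 40)) / 10.0
m_true = np.zeros(40)
m_true[[4, 20]] = [1.5, -2.0]
d = L @ m_true
m_hat = sparse_radon_admm(L, d)
```

Because the per-iteration cost is a matrix-vector product plus a thresholding, amortizing the one matrix inversion is what makes such solvers efficient.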
... High-resolution linear Radon transform (LRT) methods have been widely used for wavefield separation of seismic data (Foster and Mosher, 1992; Trad et al., 2003; Nowak and Imhof, 2006). High-resolution LRTs are quasi-reversible transforms, and high-resolution forward and inverse LRTs can effectively preserve the amplitude and phase information of the signal, maintaining the characteristics of the original signal (Hu et al., 2016). ...
Article
As the primary source of noise in multicomponent seismic data, ground roll substantially contaminates the effective signals and reduces the signal-to-noise ratio of the seismic data. Therefore, ground-roll attenuation is an essential step in multicomponent seismic data processing. In this study, we develop a filtering method to attenuate ground roll that preserves the polarization characteristics of the multicomponent seismic data. In this method, the radial and vertical components of the multicomponent seismic data are combined into complex vector seismic data and then transformed from the t-x domain into the t-f-v domain using S- and Radon transforms. The reciprocal ellipticity is calculated in the t-f-v domain, and a polarization filter is designed according to the parameters of time, frequency, apparent velocity, and reciprocal ellipticity. Through polarization filtering, the ground roll of the multicomponent seismic data can be effectively attenuated. After performing the inverse Radon and S-transforms, the radial and vertical components of the seismic data are obtained in the t-x domain without the ground roll. Tests on synthetic and field seismic data show that the proposed method effectively attenuates the ground roll from the multicomponent data and provides better results than established conventional methods.
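The reciprocal-ellipticity attribute can be illustrated in its simplest form: the minor-to-major axis ratio of the particle-motion ellipse over a two-component window. This t-x-domain toy (via SVD) omits the paper's S- and Radon-transform machinery, and the signals are synthetic:

```python
import numpy as np

def reciprocal_ellipticity(radial, vertical):
    """Minor-to-major axis ratio of the particle-motion ellipse for one
    two-component window, via SVD: near 0 for rectilinear (body-wave)
    motion, toward 1 for elliptical (ground-roll-like) motion."""
    X = np.vstack([radial, vertical])
    X = X - X.mean(axis=1, keepdims=True)   # remove DC before fitting the ellipse
    s = np.linalg.svd(X, compute_uv=False)
    return s[1] / s[0]

t = np.linspace(0.0, 1.0, 500)
# Rectilinear motion: radial and vertical components in phase
lin = reciprocal_ellipticity(np.cos(20 * np.pi * t), 0.5 * np.cos(20 * np.pi * t))
# Elliptical motion: 90-degree phase shift between components
ell = reciprocal_ellipticity(np.cos(10 * np.pi * t), np.sin(10 * np.pi * t))
```

A filter keyed to this attribute passes windows with small ellipticity (body waves) and rejects elliptical ground-roll motion, which is the principle the paper applies per (t, f, v) cell.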
... Multiples are muted in the τ-q domain and the muted gathers are transformed back to the t-x domain to obtain CDP gathers free of multiples. The parabolic Radon transform is reasonably successful in removing multiples provided the primary velocities are estimated correctly (Hampson, 1986; Russell et al., 1990; Foster and Mosher, 1992; Sacchi and Porsani, 1999; Sava and Guitton, 2005). Advancements in Radon algorithms have been made to improve computational performance and efficiency (Ursin et al., 2009; Gholami and Sacchi, 2017). ...
Article
Full-text available
Abbasi, S. and Ismail, A., 2021. Elimination of multiples from marine seismic data using the primary-multiple intermediate velocities in the τ-q domain. Journal of Seismic Exploration, 30: 85-100. Removing seismic multiples is one of the essential steps in seismic data processing and is often carried out using the Radon transform (intercept time (τ) and curvature (q) domain). In this method, the CDP gathers, NMO-corrected using the primary (signal) velocity, are transformed into the τ-q domain, where multiples can be separated from primaries based on their curvatures and muted. A drawback of using the primary velocity for NMO correction is that primaries and multiples often exhibit similar curvature in the τ-q domain, particularly at near offsets. We propose using a velocity function intermediate between those of primaries and multiples for the NMO correction of the CDP gathers input to the τ-q domain, to enhance primary-multiple separation. The primary-multiple intermediate velocity approach is applied to synthetic and real short-streamer marine seismic data. A semblance-weighted Radon transform is used to reduce smearing in the Radon space. The results show improved primary-multiple separation and better multiple removal.
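The NMO-correction step that feeds the τ-q transform can be sketched as a nearest-neighbour time mapping (no stretch mute). Correcting with a velocity other than the event's own, e.g. the intermediate velocity proposed above, would leave residual moveout in the gather. A generic illustration, not the authors' code; all grids are invented:

```python
import numpy as np

def nmo_correct(d, t, x, v_nmo):
    """Nearest-neighbour NMO correction (no stretch mute): each output
    sample (t0, x) is read from input time t = sqrt(t0^2 + (x/v_nmo)^2)."""
    dt = t[1] - t[0]
    out = np.zeros_like(d)
    for j, xj in enumerate(x):
        tx = np.sqrt(t ** 2 + (xj / v_nmo) ** 2)   # v_nmo may vary with t0
        it = np.round(tx / dt).astype(int)
        ok = it < len(t)
        out[ok, j] = d[it[ok], j]
    return out

# One hyperbolic event; correcting with its own velocity flattens it
t = np.arange(0.0, 1.0, 0.004)
x = np.arange(0.0, 1000.0, 100.0)
d = np.zeros((len(t), len(x)))
tau0, v0 = 0.5, 2000.0
for j, xj in enumerate(x):
    d[int(round(np.sqrt(tau0 ** 2 + (xj / v0) ** 2) / 0.004)), j] = 1.0
corrected = nmo_correct(d, t, x, np.full(len(t), v0))
```

After a correct NMO, primaries map to near-zero curvature in τ-q while under-corrected multiples retain residual parabolic moveout, which is what the mute exploits.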
... However, it suffers from insufficient energy concentration, resulting in severe "tail" phenomena that adversely affect the suppression of multiples. Foster [3] improved on the parabolic Radon transform and proposed the hyperbolic Radon transform to further suppress multiples. Wood et al. [4] proposed a high-resolution Radon transform based on the hybrid domain, which further improved the sparsity and resolution of the solutions in the Radon domain by exploiting the high resolution of the time domain and the decoupling of the frequency domain. ...
Article
Full-text available
The existence of multiples in seismic data has a substantial impact on seismic imaging, inversion, and interpretation results. As a result, multiples are often suppressed as noise in prestack seismic data processing. Currently, there are two primary approaches for suppressing multiple reflections in seismic data: filtering techniques based on geometric differences between seismic events, and wave-equation-based methods. The Radon transform belongs to the class of filtering techniques based on geometric differences. In seismic data processing, it effectively separates multiple reflections from the data by utilizing the characteristic differences between primary and multiple reflections, thereby attenuating noise and improving the signal-to-noise ratio. This article reviews the current state of Radon-transform-based methods for suppressing multiple reflections, both domestically and internationally. By introducing the various forms and fundamental principles of the Radon transform, this study analyzes and compares the adaptability, advantages, and drawbacks of each type. The high-precision Radon transform can effectively address the sawtooth phenomenon and energy dispersion issues that arise in the least-squares Radon transform. The article also highlights the remaining challenges of multiple suppression with the Radon transform and provides a glimpse into future developments.
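The Radon variants compared in such reviews differ only in their integration paths. A compact reference of the three traveltime parameterizations (the third argument plays the role of slowness p, curvature q, or velocity v depending on the variant):

```python
import numpy as np

def radon_traveltime(kind, tau, x, p):
    """Integration paths of the common Radon-transform variants:
    linear     t = tau + p*x            (slant stack; p is slowness)
    parabolic  t = tau + q*x**2         (p plays the role of q)
    hyperbolic t = sqrt(tau**2 + (x/v)**2)  (p plays the role of v)"""
    if kind == "linear":
        return tau + p * x
    if kind == "parabolic":
        return tau + p * x ** 2
    if kind == "hyperbolic":
        return np.sqrt(tau ** 2 + (x / p) ** 2)
    raise ValueError(kind)

x = np.linspace(0.0, 1000.0, 5)
t_lin = radon_traveltime("linear", 0.5, x, 1.0e-4)
t_par = radon_traveltime("parabolic", 0.5, x, 1.0e-7)
t_hyp = radon_traveltime("hyperbolic", 0.5, x, 2000.0)
```

The hyperbolic path matches true reflection moveout most closely, which is the accuracy advantage the original Foster and Mosher transform trades against the parabolic form's computational convenience.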
... Various multiple elimination approaches have been developed in the past few decades that can be classified into three classes in general. The first class is filtering-based methods, such as the predictive deconvolution method relying on the periodicity difference between primaries and multiples (Peacock and Treitel 1969;Taner 1980) and the transformation methods founded on the separability of primaries and multiples in a specific domain (Foster and Mosher 1992;Schonewille and Aaron 2007;Yilmaz 2001). Unlike the first class of techniques, the second class is wave-equation-based methods provided with higher precision and a wider range of applications. ...
Article
Full-text available
Seismic surface-related multiples have become a hot topic of great significance due to the buried geological information provided by broader illumination areas than primaries. In recent years, researchers attempt to extract the hidden hint of multiples rather than treating them as noise and eliminating them directly. The elimination methods, e.g., the surface-related multiple elimination (SRME) and the inverse scattering series free-surface multiple elimination (ISS-FSME), may be affected by the overlapping or proximity of primaries and multiples. Typical imaging methods, e.g., the reverse time migration (RTM) and the least-square reverse time migration (LSRTM), suffer severe crosstalk artifacts from multiples of inappropriate order and smooth migration velocities. To study the characteristics of primaries and surface-related multiples, whether for elimination or imaging, we propose a forward modeling method of acoustic surface-related order-separated multiples established on the areal/virtual source assumption. The free surface is replaced with an absorbing surface under the dipole source approximation and the ghost creation approach. We present two reflection operators to approximate the reflection at the free surface and apply them to the areal source to obtain ideal results. Numerical experiments on three models prove the effectiveness of the proposed forward modeling method of acoustic surface-related order-separated multiples.
... This performance demonstrates the influence of the free surface. To further improve the applicability of FWIRLG, other efficient techniques such as multiple suppression [51] can be utilized. ...
Article
Full-text available
Full-waveform inversion (FWI) is a powerful technique for building high-quality subsurface geological structures. It is known to suffer from local minima problems when a good starting model is unavailable. To obtain a desirable solution, regularization constraints are needed to impose suitable priors, in particular for salt models. Recent studies have incorporated denoising operators as specific priors into optimization algorithms to effectively solve various image reconstruction tasks. Inspired by the promising performance of regularization by denoising (RED), we propose a flexible unsupervised learning FWI framework in which the regularizer is an extended variant of RED powered by a learned gradient for addressing the inversion of salt models. To further mitigate local minima issues, the Wasserstein distance induced by optimal transport theory, with a new preprocessing transformation, is applied as a measure in the data domain. Integrating the physical constraints featured by the wave equation and the model priors featured by the deep convolutional neural network, our method is able to produce measurement-consistent and high-resolution results. We experimentally compare the proposed method with traditional total-variation-regularized FWI and RED-regularized FWI on well-known geological models. The numerical results demonstrate the effectiveness of our method for handling the inversion of high-contrast media in the presence of a free surface. Moreover, our framework can adaptively restrict the solutions by leveraging the regularizer with a learned gradient, whose training does not need any geological images, thus paving the way to develop FWI algorithms with state-of-the-art deep learning techniques in interdisciplinary research.
... For coherent noise, scholars of seismic exploration have presented many denoising approaches. The traditional noise removal methods mainly suppress coherent noise according to the differences between the effective signal and coherent noise in apparent velocity and frequency, such as the f-k filter [4,5], the K-L transform [6,7], the Radon transform [8,9], the radial trace transform [10,11], empirical mode decomposition (EMD) [12,13], and so on. The wavelet transform [14,15] is an effective tool for seismic data processing with great advantages in one-dimensional data processing, though this advantage cannot be simply extended to two-dimensional or three-dimensional data. ...
Article
Full-text available
The source-array technique based on the vibroseis can provide a strong seismic wavefield, which better meets the needs of seismic exploration. Seismic coherent noise reduces the signal-to-noise ratio (SNR) of the source-array seismic data and affects seismic data processing. Traditional coherent noise removal methods often damage the effective signal while suppressing coherent noise, or cannot suppress the interference wave effectively at all. Based on the multi-scale and multi-direction properties of the non-subsampled Shearlet transform (NSST) and its simple mathematical structure, a coherent noise removal method for source-array seismic data in the NSST domain is proposed. The method is applied to both synthetic and field seismic data. After processing with this method, the coherent noise is largely removed and the effective signal is well preserved. The analysis of the results demonstrates the effectiveness and practicability of the proposed method for coherent noise attenuation.
... To reduce the artifacts caused by multiples in imaging, many multiple removal methods have been proposed. Previously, multiple removal methods used the difference in periodicity and travel time between the primary and multiple waves; these methods included the normal moveout (NMO) method, common midpoint stack, Wiener filtering, predictive deconvolution, f-k filter, and Radon transformation [1][2][3][4][5][6][7][8]. Currently, the main internal multiple suppressing methods are based on the wave equation. ...
Article
Full-text available
Multiples can cause artifacts in imaging; however, they contain information about underground structures. If the internal multiples are removed as noise, the information they contain will also be removed, causing the loss of some useful structures in the image. If the multiples and the primary can be separated from the recorded seismic data for imaging, the information contained in the multiples can be used and the artifacts can be attenuated. Here we developed a method to separate the primary and internal multiples and use them in least-squares reverse time migration (LSRTM). This method first separates the primary and the internal multiples in the data residual and predicts the wavefield of the primary and internal multiples in a forward-propagated wavefield. We use the high-order Born modeling method to predict the internal multiples, which can be obtained with three forward-modeling runs in the time domain. In the internal multiple prediction process, we obtain the wavefield of the primary and internal multiples in the forward-propagated wavefield. Then, by introducing a weighting matrix, we establish objective functions for imaging the primary and the internal multiples separately. In the gradient calculation, we correlate the primary in the forward-propagated wavefield with the backward-propagated wavefield of the primary in the data residual, and the internal multiples in the forward-propagated wavefield with the backward-propagated wavefield of the internal multiples in the data residual. In this method, the multiple prediction process provides the internal multiples to suppress the artifacts, and LSRTM constructs the model for the multiple prediction process.
Finally, we performed numerical tests using synthetic data. The results indicated that LSRTM without the internal multiples suppresses not only the artifacts of internal multiples but also some useful structures below the salt dome, whereas LSRTM with primary and internal multiples suppresses the artifacts of internal multiples while the useful structures below the salt dome are compensated in the image.
... Several attenuation approaches have been developed and some are based on exploiting moveout differences between primary and multiple events [Berkhout and Verschuur, 2006]. For instance, Radon transform converts seismic data from offset-time domain to speed-intercept time domain, in which multiple and primary events are clearly distinguishable by their moveouts, allowing attenuation of multiples without affecting primaries [Durrani and Bisset, 1984;Foster and Mosher, 1992;Gholami, 2017;Ortiz-Alemán et al., 2019;Sacchi and Ulrych, 1995]. ...
Thesis
Finding salt geological structures is an important economic driver for exploration worldwide because they constitute a natural trap for various resources such as oil, natural gas, and water, and the salt itself can be exploitable. However, the imaging of these structures is a great challenge. Due to the properties of salt, with propagation velocities much higher than the adjacent strata, seismic waves are trapped within these structures, producing a large number of spurious numerical artifacts, such as multiples. This interferes with the primary seismic signal, making it impossible to see clearly what lies underneath the salt structures (salt domes, for instance). Among all the geophysical exploration methods, the Reverse Time Migration method (RTM), which belongs to the family of methods that solve the complete seismic waveform, is a very powerful imaging tool, even in regions of complex geology. In this work we use the adjoint-based RTM method, which basically consists of three stages: the solution of the wave equation (forward problem), the solution of the adjoint wave equation (adjoint problem), and the imaging condition, which consists of the correlation of the forward and adjoint wavefields. This work can be divided into two case studies: the first consists of a two-dimensional synthetic model of a salt dome, taken from the final migration of a real survey in the Gulf of Mexico. The second consists of an experimental three-dimensional model (WAVES), elaborated by the LMA laboratory in Marseille (France), which simulates a salt structure (with surrounding sedimentary structures) and a basement. The model was immersed in water to recreate a realistic marine survey. Two different data types were obtained in this experiment: zero-offset and multi-offset data. To compute the adjoint-based RTM method we use fourth-order finite differences in both cases.
Furthermore, in the second case we used the UniSolver code, which solves the adjoint-based RTM method using fourth-order finite differences and MPI-based parallelism. It was also necessary to implement the viscoelastic equations to simulate the effect of attenuation. Because of this, the Checkpointing scheme is introduced to calculate the imaging condition and ensures physical and numerical stability in the migration procedure. In the first case study we analyze the recovery of the salt dome image that different sensitivity kernels produce. We calculate these kernels using different parametrizations (density - P velocity), (density - Lamé constants), or (density - P impedance) for an acoustic rheology. We also study how the use of different a priori models affects the final image depending on the kind of kernel computed. Using the results obtained previously in 2D, we calculate synthetic three-dimensional kernels using an elastic rheology. In the second case (the realistic/experimental case), we perform a calibration of the model properties for zero-offset data, and once the synthetic and real data fit well, we calculate the three-dimensional kernels. [...]
... The filtering methods primarily use the signal filtering theory to suppress multiples through the characteristic differences between primaries and multiples in a particular domain (Kennet 1979;Morley & Claerbout 1983;Lokshtanov 1999;Luo et al. 2003), such as the predictive deconvolution method (Robinson & Treitel 2000), the F-K filtering method (Yilmaz 2001), the Radon transform method (Foster & Mosher 1992;Wang 2003a) and the beam-forming filtering method (White 1988). However, if the characteristics between primaries and multiples are ambiguous or complicated, the filtering methods cannot achieve ideal results. ...
Article
Full-text available
In this paper, we comprehensively compare the application effects of five least-squares adaptive matched filtering methods (the single-channel, the multichannel, the equipoise multichannel, the pseudo multichannel and the equipoise pseudo multichannel) for multiple suppression in three representative datasets with different degrees of orthogonality. By introducing an error function, we can quantitatively analyse the influence of the five methods for multiple suppression in terms of the filter length, the normalized regularization factor, the number of matched channels, the iteration number, the amplitude ratio and noise immunity. In addition, we provide the corresponding optimal parameters or their selection principles. The comparison results show that: (i) the dependence on orthogonality is not the same for these five methods; only the equipoise multichannel and the equipoise pseudo multichannel methods can effectively reduce the dependence on orthogonality; (ii) the single-channel method is relatively balanced in all aspects; (iii) the pseudo multichannel and the equipoise pseudo multichannel methods have a stronger shaping ability but generate larger errors; (iv) the multichannel method requires a higher degree of orthogonality and (v) the optimal parameters derived from the three datasets will be better reference values for complex models or field data for the multiple suppression.
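The single-channel least-squares matched filter that these adaptive subtraction methods build on can be sketched as follows. This is a toy NumPy illustration, not the paper's code: the spike positions, filter length and the normalized regularization factor are assumed values. A short filter absorbs the timing and amplitude error of the predicted multiple before subtraction.

```python
import numpy as np

def ls_matched_filter(pred, data, nf, eps=1e-6):
    """Single-channel least-squares matched filter:
    find f (length nf) minimizing ||data - conv(pred, f)||^2 + eps*||f||^2."""
    n = len(data)
    P = np.zeros((n, nf))                 # convolution matrix of the prediction
    for j in range(nf):
        P[j:, j] = pred[:n - j]
    A = P.T @ P + eps * np.eye(nf)        # normalized regularization factor eps
    f = np.linalg.solve(A, P.T @ data)
    return f, data - P @ f                # filter and primaries estimate

primary = np.zeros(200); primary[60] = 1.0
true_mult = np.zeros(200); true_mult[120] = 0.8
data = primary + true_mult
pred = np.zeros(200); pred[118] = 1.0     # prediction with a 2-sample timing error
f, prim_est = ls_matched_filter(pred, data, nf=5)
# the filter absorbs the shift and amplitude error; the multiple is removed
```

The multichannel and pseudo-multichannel variants compared in the paper extend this idea by fitting several adjacent traces (or auxiliary records) jointly.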
... In a common shot gather after applying the preliminary processing, strong WB multiples were identified, as shown in Fig. 5(a) (refer to the arrows). In this study, a combination of surface-related multiple elimination (SRME, Verschuur, 1992), surface-related wave equation multiple rejection (SRWEMR), predictive deconvolution (Peacock and Treitel, 1969), and a parabolic Radon filter (Foster and Mosher, 1992) was applied to effectively attenuate short-period and long-period multiples included in our seismic data. The detailed procedures for signal processing are shown in Fig. 2(a). ...
Article
Full-text available
Time- and depth-domain processing procedures were applied to seismic data acquired from shallow coastal areas using limited source-receiver offset ranges. A signal-to-noise ratio improvement strategy was applied in time-domain processing, and migration velocity analysis (MVA) with tomographic inversion was performed in every common image gather domain. Although the source-receiver offset was limited, the seismic image generated with the velocity from MVA shows more transparent reflections than the image from time processing alone, likely because the shallow water depth of the survey area still yields a usable range of offsets. The processing procedure applied in this case also focused on improving high-frequency content and removing as many multiples as possible in the time domain. Several small-scale faults and horizons of sedimentary units were well defined by this method. The improvement of the image with an accurate velocity was demonstrated by the flattened events in the common image gathers and by the velocity differences between time and depth processing. The final prestack depth migration image generated by our approach provides refined information to interpreters that reduces uncertainties in geohazard assessments.
... The multiple reflections, however, have a residual moveout due to their undercorrection, and plot with positive values of T. This segregation of primary and multiple energy in the P-T domain allows a mute to be designed to remove the multiples. The Radon transform tool (Foster and Mosher, 1992) in ProMAX was used to do this. ...
Thesis
This thesis focuses on geophysical data collected over the Laxmi Ridge margin. A 480 km wide-angle velocity model is presented, along with coincident normal-incidence reflection data and gravity and magnetic modelling. The wide-angle model is broadly divided into four areas of crustal provenance: the southernmost crust is Chron 27 oceanic crust, which is around 5 km thick and reaches velocities of 7.4 km s-1 at its base. Laxmi Ridge is adjacent to this crust, and is a 130 km wide, 9 km thick section of thinned continental crust. It is underlain by 11 km of high-velocity material, whose P-wave velocities reach 7.70 km s-1 at the base. Laxmi Ridge abuts Gop Rift, which is a 55 km wide basin with crust up to 13 km thick. North of Gop Rift is the continental rise of India, which is interpreted as stretched continental crust. The Laxmi Ridge margin has features usually diagnostic of amagmatic rifted margins, including thin (5 km) first-formed oceanic crust south of Laxmi Ridge and weak seaward-dipping reflectors at the ocean-continent boundary. However, thick oceanic crust is observed in Gop Rift, an isolated asymmetric basin landward of Laxmi Ridge. Gop Rift is flanked by two ~100 km wide, almost 13 km thick bodies in the deep crust, whose P-wave velocities reach 7.70 km s-1. These bodies are interpreted to be magmatic underplate associated with rifting over a thermal anomaly. This apparent disparity between magmatic and amagmatic features on the same margin is resolved if the magmatic features are attributed to a prior phase of spreading in Gop Rift, most likely Chron 29 in age, with the magmatic material supplied by enhanced melting over the Deccan plume. This spreading ceased once the thermal anomaly cooled. The final breakup of India and the Seychelles then occurred within the weakened underplated lithosphere, and was relatively amagmatic despite the rapid extension.
... Based on the different NMO velocities, multiples and primaries can be transformed into domains where they can be separated more effectively than the time domain. The conventional methods that use the second characteristic are F-K filtering (Sengbush, 1983), cluster filtering, and RT-based methods (Hampson, 1986;Foster and Mosher, 1992;Hargreaves et al., 2001;Sacchi and Ulrych, 1995;Trad et al., 2003;Nowak and Imhof, 2006;Lu, 2013;Li and Lu, 2014;Li and Yue, 2016;Jiang et al., 2020). The filtering methods mentioned above have a simple principle and low computational cost. ...
Article
Multiple attenuation is an important issue in seismic exploration. The high-resolution parabolic Radon transform (RT) is a widely used multiple attenuation method owing to its simple principle and low computational cost. In the Radon domain, however, the curvatures of primaries and multiples are difficult to focus, sometimes lying close to or even overlapping each other, which makes their separation difficult and ultimately leaves multiple leakage in the attenuation results. To eliminate the multiple leakage caused by the high-resolution parabolic RT, we propose a new multiple attenuation method that combines the high-resolution parabolic RT with connected-component analysis. The high-resolution parabolic RT is used to obtain the estimated multiples in the input data. Based on the estimated multiples, connected-component analysis can fully exploit the continuity of multiples in the input data to eliminate the multiple leakage. Both the proposed method and the high-resolution parabolic RT method are validated on industrial model data. Comparing their attenuation results, we conclude that the proposed method not only achieves better performance even under complex geological structures, but is also suitable for common-midpoint gathers before normal moveout.
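The continuity idea behind the connected-component step can be illustrated with a small flood-fill labeller: a laterally continuous (connected) event survives a size threshold, while isolated leakage samples are discarded. This is only a schematic with an invented toy section; the paper's analysis operates on the multiples actually estimated by the high-resolution parabolic RT.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """Label 8-connected components of a boolean image by flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            a, b = queue.popleft()
            for da in (-1, 0, 1):
                for db in (-1, 0, 1):
                    na, nb = a + da, b + db
                    if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                            and mask[na, nb] and not labels[na, nb]):
                        labels[na, nb] = current
                        queue.append((na, nb))
    return labels, current

# toy "estimated multiples" section: one continuous event plus isolated leakage
sec = np.zeros((50, 30))
for ix in range(30):
    sec[20 + ix // 3, ix] = 1.0        # continuous, laterally coherent multiple
sec[5, 4] = 1.0                        # isolated spurious samples ("leakage")
sec[40, 25] = 1.0

labels, n = connected_components(sec > 0.5)
sizes = np.bincount(labels.ravel())[1:]                # size of each component
keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 10))
cleaned = sec * keep                   # only the continuous event survives
```

The size threshold (here 10 samples) stands in for whatever continuity criterion the authors use to distinguish genuine multiples from leakage residue.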
... Several attenuation approaches have been developed and some are based on exploiting moveout differences between primary and multiple events (Berkhout and Verschuur 2006). For instance, Radon transform converts seismic data from offset time domain to speed intercept time domain, in which multiple and primary events are clearly distinguishable by their moveouts, allowing attenuation of multiples without affecting primaries (Durrani and Bisset 1984;Foster and Mosher 1992;Sacchi and Ulrych 1995;Gholami 2017;Ortiz-Alemán et al. 2019). ...
Article
Full-text available
It is of particular importance for structural geology, geophysical exploration, and obvious economic reasons to retrieve structures possibly hidden below salt domes, since these domes can trap hydrocarbons or gas. We thus propose a sensitivity analysis of seismic data in salt tectonic areas to identify the different wavelengths associated with the geological structures under study involving salt domes. The wavelengths associated with the density or seismic velocities of the medium can give us information about the localization of shallow or deep geological structures surrounding salt domes in offshore contexts. Seismic data can be more sensitive to density or to seismic velocities. Depending on the wavelengths associated with those two different properties, the dome shape and the different interfaces can be located and recovered at different depths. In a first approach, using velocity and density models from a salt tectonic region in the Gulf of Mexico, we simulate a two-dimensional seismic data acquisition. Using these synthetic data, we aim at retrieving the salt dome shape as well as the surrounding and deepest geological layers. For this purpose, we propose to compute better imaging conditions by attenuating free-surface multiples and introducing an adjoint theory-based reverse time migration (RTM) method, enhancing the limits of salt bodies and also the layers under salt structures. To obtain these imaging conditions, we compute the compressional and density sensitivity kernels Kλ and Kρ using seismic sources activated separately. To attenuate the free-surface multiples, the synthetic “observed” data computed with the free surface are introduced as adjoint sources, and we replace the free-surface condition by PML absorbing conditions in the forward, backward and adjoint simulations needed to compute the kernels.
We compare the quality of the kernels by applying different strategies related to the normalization of kernels by the forward or adjoint energy, and different property parametrizations were tested to improve the imaging conditions. The specific wavelengths associated with the different (shallow to deep) interfaces are obtained using signal-to-noise ratios (SNRs) applied to both density and seismic velocity kernels. In some cases, density or seismic velocity kernels are more suited to retrieve the interfaces at different depths.
Article
Ultrasonic logging is a widely used technology to assess the cement bond condition in cased holes. Advanced ultrasonic pitch-catch measurements can capture reflections at the cement-formation interface (the Third-Interface Echo, or TIE), which enables the evaluation of the bond condition, casing eccentering, and the velocity of material in the cement annulus. However, its usage is limited by the difficulty of TIE arrival picking: low amplitude resulting from the eccentered tool and casing, overlap with tool-casing multiple reflections, and interference from the primary zero-order antisymmetric Lamb wave (A0). We develop a two-step method to enhance and accurately pick the TIE arrivals. We first employ a Mathematical Morphological Filtering with Neural Network (MMF-NN) framework to mitigate the influence of A0. Then we utilize local-window Multichannel Singular-Spectrum Analysis (MSSA) to suppress the effect of tool-casing multiples on the TIE. In the second step, we propose a sinusoid-search method with a path post-correction algorithm to automatically track the TIE arrivals from the enhanced TIE waveforms. Synthetic and field data results show that our TIE processing can effectively eliminate the A0 mode and suppress tool-casing multiples. Additionally, the picked TIE arrivals are consistent with actual TIE curves.
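The rank-reduction idea behind singular-spectrum analysis (sketched here in its single-channel form, as a simplified stand-in for the local-window MSSA step above) embeds a trace in a Hankel matrix, truncates the SVD, and averages the anti-diagonals. The window length, rank and noise level below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def ssa_denoise(x, L, rank):
    """Singular-spectrum analysis: embed the trace in a Hankel matrix,
    keep the leading singular values, and average the anti-diagonals."""
    N = len(x)
    K = N - L + 1
    H = np.column_stack([x[k:k + L] for k in range(K)])   # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # low-rank approximation
    out = np.zeros(N)                                     # anti-diagonal averaging
    cnt = np.zeros(N)
    for k in range(K):
        out[k:k + L] += Hr[:, k]
        cnt[k:k + L] += 1
    return out / cnt

rng = np.random.default_rng(1)
n = 400
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 25.0)                      # coherent arrival
noisy = clean + 0.5 * rng.standard_normal(n)
den = ssa_denoise(noisy, L=60, rank=2)                    # a pure sinusoid has rank 2
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(den - clean)
```

The multichannel version applies the same rank reduction to a block Hankel matrix built from several adjacent waveforms, which is what separates the coherent TIE from the tool-casing multiples.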
Article
The paper develops a multiple matching attenuation method based on extended filtering in the curvelet domain, which combines the traditional Wiener filtering method with matching attenuation in the curvelet domain. Firstly, the method uses the predicted multiple data to generate the Hilbert transform records, the time-derivative records, and the time-derivative records of the Hilbert transform. Then, these records are transformed into the curvelet domain and multiple matching attenuation based on least-squares extended filtering is performed. Finally, the attenuation results are transformed back into the time-space domain. Tests on model data and field data show that the proposed method effectively suppresses the multiples while preserving the primaries well. Furthermore, it has higher accuracy in eliminating multiple reflections, making it more suitable for multiple attenuation in areas with complex structures than the time-space-domain extended filtering method and the conventional curvelet transform method.
Article
Multiple reflections are among the most challenging noises to suppress in seismic data, as they differ from effective waves only in apparent velocity. The Radon transform, an essential technique for attenuating multiple reflections, has been widely incorporated into various commercial software packages. This study introduces a 3D Radon transform method based on the LP‒1 norm to enhance the sparsity-constraining capability in the transform domain, leveraging high-resolution Radon transform techniques. Specifically, an iteratively reweighted least squares (IRLS) algorithm is employed to obtain the transformed data in the Radon domain. Given that the LP‒1 norm is applied to seismic data processing for the first time, this paper theoretically demonstrates its powerful sparsity-constraining capability. Indeed, the proposed strategy enhances energy concentration in the Radon transform domain, better separating primaries from multiples and ultimately suppressing the multiples. Both model tests and real data indicate that the 3D Radon transform constrained by the LP‒1 norm outperforms existing sparsity-constrained high-resolution Radon transform methods in terms of energy concentration and effectiveness in multiple reflection attenuation.
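The IRLS machinery used by sparsity-constrained Radon transforms can be sketched on a generic linear system: a diagonal reweighting with exponent p approximates an Lp penalty and concentrates energy in a few model coefficients. This is a hedged toy example with a random operator and assumed damping (`mu`) and smoothing (`eps`) parameters, not the paper's 3D LP‒1 implementation.

```python
import numpy as np

def irls_sparse(L, d, p=1.0, niter=30, eps=1e-3, mu=1e-2):
    """Sparsity-promoting inversion of d = L m by iteratively reweighted
    least squares: weights w_i = (m_i^2 + eps^2)**(p/2 - 1) approximate
    an Lp penalty (here p = 1) on the model."""
    m = L.T @ d                                   # start from the adjoint
    for _ in range(niter):
        w = (m**2 + eps**2) ** (p / 2.0 - 1.0)    # reweighting promotes sparsity
        m = np.linalg.solve(L.T @ L + mu * np.diag(w), L.T @ d)
    return m

rng = np.random.default_rng(2)
L = rng.standard_normal((40, 100))                # generic underdetermined operator
m_true = np.zeros(100)
m_true[[10, 70]] = [2.0, -1.5]                    # sparse model ("Radon panel")
d = L @ m_true
m = irls_sparse(L, d)
top2 = set(np.argsort(np.abs(m))[-2:])            # the two true spikes dominate
```

In the Radon setting, L would be the (3D) Radon operator and the sparse model the focused (tau, q) panel; the LP‒1 norm of the paper corresponds to a different choice of weight function within the same IRLS loop.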
Article
Full-text available
Sparse representation and inversion have been widely used in the acquisition and processing of geophysical data. In particular, the low-rank representation of seismic signals shows that they can be determined by a few elementary modes with predominantly large singular values. We review global and local low-rank representation for seismic reflectivity models and then apply it to least-squares migration (LSM) in acoustic and viscoacoustic media. In the global singular value decomposition (SVD), the elementary modes determined by singular vectors represent horizontal and vertical stratigraphic segments sorted from low to high wavenumbers, and the corresponding singular values reflect the contribution of these basic modes to form a broadband reflectivity model. In contrast, local SVD for grouped patch matrices can capture nonlocal similarity and thus accurately represent the reflectivity model with fewer ranks than the global SVD method. Taking advantage of this favorable sparsity, we introduce a local low-rank regularization into LSM to estimate subsurface reflectivity models. A two-step algorithm is developed to solve this low-rank constrained inverse problem: the first step is for least-squares data fitting and the second is for weighted nuclear-norm minimization. Numerical experiments for synthetic and field data demonstrate that the low-rank constraint outperforms conventional shaping and total-variation regularizations, and can produce high-quality reflectivity images for complicated structures and low signal-to-noise data.
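The favorable sparsity of the low-rank representation described above can be demonstrated with a global truncated SVD on a toy section: a model built from a few rank-one modes is captured almost entirely by its leading singular values. The sizes, rank and noise level are assumed values for illustration, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
# a toy "reflectivity" section made of a few rank-one modes plus weak noise
nz, nx, true_rank = 80, 60, 3
model = sum(np.outer(rng.standard_normal(nz), rng.standard_normal(nx))
            for _ in range(true_rank))
noisy = model + 0.1 * rng.standard_normal((nz, nx))

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
low_rank = (U[:, :true_rank] * s[:true_rank]) @ Vt[:true_rank]

# the leading singular values dominate: a few modes capture almost all energy
energy_ratio = (s[:true_rank]**2).sum() / (s**2).sum()
err = np.linalg.norm(low_rank - model) / np.linalg.norm(model)
```

The local variant in the paper applies the same truncation to grouped patch matrices, which captures nonlocal similarity and therefore needs even fewer ranks than this global decomposition.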
Article
In shallow-water ocean-bottom cable (OBC) seismic data, conventional surface-related multiple elimination (SRME) methods are ineffective because of poor seabed records. This research utilizes the seismic wavefield received by multiple cables from a single shot gather to predict shallow-water multiple models for that shot gather. Initially, the seismic data within a finite aperture around a seismic trace in the time-domain shot gather is treated as the known seismic wavefield. This wavefield is then extrapolated along the water layer to this seismic trace, following the Fresnel diffraction principle. The extrapolated data becomes the shallow-water multiple model for this trace. This process is repeated for each trace in the shot gather to obtain the shallow-water multiple model of the entire shot gather. Forward modeling tests have shown that smaller data apertures can effectively avoid the impact of spatial aliasing on multiple model prediction. To address the overlap of primary waves and shallow-water multiples in deep seismic data, which have lower dominant frequencies, the multiple model data is used as a known seismic wavefield and extrapolated along the water layer again. This produces second-order and higher-order multiple models. Applying these models to suppress multiples can minimize primary-wave loss. This entirely data-driven approach requires only water-depth information and imposes no additional conditions. Both forward modeling and real seismic data tests validate the efficacy of this method in shallow water.
Article
In marine seismic exploration, towed-streamer seismic data acquisition stands out as a prevalent and economical approach. This data records scalar pressure and captures P waves including those that are converted from sub-seabed S waves. Theoretically, towed-streamer seismic data can be used to reconstruct S-wave fields for elastic wave imaging via backward extrapolation. Our research applies an acoustic-elastic coupled equation that includes a pressure term to perform elastic reverse time migration, using the pressure seismic data as the boundary condition for backward extrapolation. The feasibility of employing towed-streamer seismic data for elastic wave imaging of complex subsurface structures is confirmed through synthetic and field data examples. The investigation demonstrates how the smoothness of velocity and density models impacts the quality of elastic reverse time migration using towed-streamer seismic data. We find that using smooth velocity and density models for elastic wave imaging can enhance imaging quality by utilizing transmitted S waves. Furthermore, certain nonphysical S waves generated during backward extrapolation can contribute to PS images, as evidenced by a synthetic data example. Overall, the results indicate that the towed-streamer seismic data for elastic wave imaging has potential in marine seismic exploration. Moving forward, it is crucial to focus on refining the quality of PS images and reducing artifacts caused by the backward extrapolated nonphysical waves.
Article
In passive-source seismic surveys, signal continuity and signal-to-noise ratios have always tended to be low. On the one hand, since passive-source seismic surveys are often used for large-scale illumination of subsurface formations, the distances between receivers and the sampling-point intervals tend to be large. On the other hand, interference from coherent noise and spurious events is unavoidable in passive-source reconstruction records because the signal originates from noise in the subsurface. These problems mean that the continuity and signal-to-noise ratio of virtual shot gathers reconstructed from passive-source surveys cannot be guaranteed, which affects further processing and seriously limits the application of passive-source seismic surveys. Traditional interpolation reconstruction methods cannot take noise suppression into account, or require additional operations to achieve both interpolation reconstruction and denoising. Based on this, this paper utilizes the powerful data processing ability of convolutional neural networks to design a global multi-scale fusion residual shrinkage network (GMF-RS) to solve the above problems in passive-source seismic exploration. Tests show that the trained network not only eliminates coherent noise and false events but also improves continuity in the horizontal and vertical directions, enhances and extracts the effective signals, and provides better virtual shot records for subsequent seismic data processing. In addition, we designed a dual-input network and introduced active-source seismic records as a complement to the passive-source virtual seismic records, so that the processed waveforms show better detail.
Article
Multiple removal is a crucial step in seismic data processing prior to velocity model building and imaging. After the prediction, adaptive multiple subtraction is employed to suppress multiples (considered noise) in seismic data, thereby highlighting primaries (considered signal). In practice, conventional adaptive subtraction methods fit the predicted and recorded multiples in the least-squares sense using a sliding window, formulating a localized adaptive matched filter. Subsequently, the filter is applied to the prediction to remove multiples from the recorded data. However, such a strategy runs the risk of over-attenuating the useful primaries under the energy-minimization constraint. To avoid damage to valuable signals, we propose a novel approach that replaces the conventional matched filter with a structure-oriented version. From the predicted multiples, we extract the structural information to be used in the derivation of the adaptive matched filter. The proposed structure-oriented matched filter emphasizes the structures of predicted multiples, which helps to better preserve primaries during the subtraction. Synthetic and field data examples demonstrate the efficacy of the proposed structure-oriented adaptive subtraction approach, highlighting its superior performance in multiple removal and primary preservation compared to conventional methods on 2D regularly sampled data.
Conference Paper
Full-text available
In order to overcome the low efficiency and low accuracy of the present manual demarcation method for sub-bottom layers, an automatic demarcation and extraction method based on the sediment quality factor and the peaks and troughs of the echo energy loss level curve was proposed. First, the original SEGY file from the sub-bottom profiler (SBP) was decoded and the amplitude data of each trace were transformed to echo intensity data, so that the original sub-bottom profile image could be obtained. Second, to eliminate the impact of abnormal observations, trace repair, denoising, and multiple elimination were carried out. Third, the sediment quality factor was extracted by the spectral-ratio method, and the peaks and troughs were extracted from the echo energy loss level curve. The locations corresponding to the sediment quality factor and the peaks were then recorded and combined into control peaks by a weighted-average method. On the basis of the control peaks, the typical peaks corresponding to the actual layer interfaces were identified. Finally, considering the gradual and continuous nature of the sub-bottom sediment structure, the discrete layer interfaces were connected into continuous sub-bottom layer interfaces by means of mode smoothing. The experimental results show that the relative error of demarcation for sub-bottom layers compared with in-situ coring data was about 2%; the proposed method realizes automatic extraction of sub-bottom layers and achieves the same accuracy as coring sample data.
Article
Correlation- and convolution-based internal multiple prediction is one of the most common methods to predict and subsequently attenuate internal multiples. This technique requires the convolution of two seismic events from deep in the section followed by correlation with an event from the shallow part of the section to satisfy the so-called deep-shallow-deep (late-early-late) condition; otherwise, nonphysical seismic events can be predicted. To comply with this condition, seismic data are separated into shallow and deep portions according to a given phantom horizon. Internal multiples whose ray paths pass through the phantom horizon four times are predicted in a single pass, so only a portion of the internal multiples are predicted. To address this limitation, we introduce a novel algorithm termed the generalized internal multiple prediction (GIMP) algorithm. This algorithm is founded on adapting the deep-shallow correlation and deep-deep convolution techniques, constrained by specific integration boundaries. The GIMP method effectively accommodates the deep-shallow-deep (late-early-late) mode without segmenting the seismic dataset into shallow and deep parts via a user-defined phantom horizon. Notably, this approach is comprehensive and can predict all potential internal multiples concurrently. Considering the computational cost, the current implementation focus of GIMP is primarily 1D or 1.5D; however, GIMP is applicable to both 2D and 3D scenarios.
Article
Multiple suppression is a very important step in seismic data processing. To suppress surface-related multiples, we propose a self-supervised deep neural network method based on a local wavefield characteristic loss function (SDNN-LWCLF). The first and second input data and the output data of the self-supervised deep neural network (SDNN) are the predicted surface-related multiples, the full-wavefield data, and the estimated true surface-related multiples, respectively. The role of the SDNN is to replace the convolutional filter part of adaptive subtraction. Although there are differences in amplitudes and phases between the predicted and true surface-related multiples, the predicted surface-related multiples correspond kinematically to the true surface-related multiples and can be mapped to the estimated true surface-related multiples by the SDNN. The SDNN-LWCLF uses a local wavefield characteristic (LWC) loss function with physical properties to constrain the nonlinear optimization process. The LWC loss function is composed of the mean-absolute-error (MAE) and local normalized cross-correlation (LNCC) loss functions. LNCC can measure the local similarity between the estimated multiples and the estimated primaries. By minimizing the LWC loss function, the MAE loss function corrects amplitudes and phases of the predicted surface-related multiples to their true values, and the LNCC loss function automatically checks and reduces the leaked multiples and residual primaries in the estimated true surface-related multiples. Our proposed SDNN-LWCLF method does not need label data, such as true primaries and true surface-related multiples, which are usually unavailable in real-world applications. Therefore, the SDNN-LWCLF solves the problem of missing training data. 
Synthetic and field data examples demonstrate that our proposed method effectively suppresses surface-related multiples, and its suppression effect is better than both the traditional L1-norm adaptive subtraction method and the SDNN method based only on the MAE loss function (SDNN-MAELF).
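The local-similarity term of such a loss can be made concrete. Below is a minimal numpy sketch of a local normalized cross-correlation measure (the window length and non-overlapping windowing are illustrative assumptions, not the authors' exact LWC/LNCC formulation):

```python
import numpy as np

def lncc(a, b, win=32, eps=1e-12):
    """Mean local normalized cross-correlation of two traces, computed in
    non-overlapping windows of length `win`; values lie in [-1, 1].
    A high score means the two signals are locally similar in shape."""
    n = (len(a) // win) * win
    A = a[:n].reshape(-1, win)
    B = b[:n].reshape(-1, win)
    # Remove the local mean in each window before correlating.
    A = A - A.mean(axis=1, keepdims=True)
    B = B - B.mean(axis=1, keepdims=True)
    num = (A * B).sum(axis=1)
    den = np.sqrt((A ** 2).sum(axis=1) * (B ** 2).sum(axis=1)) + eps
    return float(np.mean(num / den))
```

Because the measure is amplitude-invariant within each window, it penalizes leaked multiples and residual primaries by shape mismatch, complementing the amplitude-sensitive MAE term.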
Article
Marine Vertical Cable Seismic (VCS) probes targets near the seafloor using a vertical hydrophone array suspended in deep water. Although imaging of primary reflections from VCS data can provide high-resolution seismic profiles, its subsea illumination is much narrower than that of conventional towed-streamer seismic owing to the irregular geometry. Given that imaging of multiples can provide better subsurface illumination at fine resolution, we present a strategy for imaging receiver ghost reflections from VCS data, based on seismic interferometry by deconvolution. This method converts first-order receiver ghosts into virtual primaries without requiring a velocity model or source-receiver positions. We illustrate deconvolution interferometry for receiver ghost reflections on real VCS data from the Shenhu area, South China Sea, and obtain virtual primary reflections. Finally, we use these virtual primaries to construct a stacked image and a post-stack migration image with a conventional seismic processing workflow. The stacked and migrated images significantly extend the subsea illumination at higher resolution, which is useful in recognizing the characteristics of hydrate-bearing sediments and free gas.
Article
Seismic data often contain noise that can disturb or mask effective information. Noise elimination is an important and challenging task in seismic signal processing. Considering the high amplitude continuity of seismic events in the shot domain, this article proposes a structure-oriented denoising method that can enhance the effective events and suppress disturbing noise, including both incoherent and coherent noise. Based on the common-reflection-surface (CRS) travel time, the local slope of seismic events in the shot domain is deduced and estimated to provide structural information for plane-wave prediction. The proposed CRS-based slope depends on fewer parameters (two in 2D) than the conventional full CRS travel time (three in 2D), making it computationally efficient. Using the local slope, the third dimension is created using the plane-wave differential equation to predict the current trace from its neighbor traces and trimmed mean filtering (TMF) is applied in this dimension. The added dimension can be regarded as flattening the seismic events within a neighboring window and collapsing after application of the TMF. Synthetic and field datasets are employed to demonstrate the effectiveness of the proposed structure-oriented TMF. Compared with wavelet and plane-wave destruction methods, the proposed method can preserve more useful information with greater continuity in amplitude.
Article
Surface-related multiples are generally removed as noise. To suppress surface-related multiples, we propose an unsupervised deep neural network approach based on ensemble learning (UDNNEL). The unsupervised deep neural network (UDNN) has excellent nonlinear mapping ability, which maps the predicted surface-related multiples to the true surface-related multiples, thereby completing the separation and estimation of multiples and primaries. The UDNN consists of three deep neural networks (DNNs), one input, six outputs, and six pseudo-labels (PLs). In practical use, the input is the predicted surface-related multiples, the PLs consist of the full-wavefield data and zero matrices, and the outputs are the desired results of the estimated true surface-related multiples and the differences between these desired results. The input, one DNN, and the corresponding output are combined into a single base learner. Each base learner corrects the amplitudes and phases of the predicted surface-related multiples and maps them to the true surface-related multiples under minimization of the total loss function. This principle ensures that our UDNNEL method needs neither true primaries nor true multiples as training data, solving the problem of missing training datasets. Ensemble learning combines the advantages of the three base learners and integrates the nonlinear optimization capabilities of the three DNNs to achieve better multiple suppression than a single base learner. Therefore, UDNNEL outperforms a UDNN based on a single base learner (UDNNSBL). Two synthetic data examples verify that our proposed method has good surface-related multiple suppression effectiveness. Another field data example demonstrates that our proposed method can efficiently suppress multiples under complex conditions.
Article
The hyperbolic Radon transform (RT) is a widely used demultiple method in seismic data processing, but it faces two major defects. The limited acquisition aperture leads to scissor-like diffusion in the Radon domain, which makes separating primaries from multiples difficult. In addition, the large matrix inversions involved in the hyperbolic RT reduce processing efficiency. In this letter, a specific convolutional neural network (CNN) is designed to conduct a fast sparse hyperbolic Radon transform (FSHRT). Two techniques are incorporated into the CNN to find the sparse solution. One is the encoding-decoding structure, which captures the sparse features of the Radon parameters. The other is a soft-threshold activation function at the end of the network, which suppresses small parameters and further improves sparsity. The network thus realizes a direct mapping between the adjoint solution and the sparse solution. Synthetic and field demultiple experiments demonstrate the rapidity and effectiveness of the proposed method.
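The role of the soft-threshold activation is easiest to see in the classical sparse solver the network emulates. Below is a hedged numpy sketch (my own illustration, not the paper's CNN): soft thresholding is the proximal operator of the L1 norm, and iterating it inside ISTA yields the sparse Radon-domain solution that the network learns to map to directly from the adjoint.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-threshold operator (L1 proximal operator): shrinks small
    coefficients to zero, promoting sparsity in the Radon domain."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_sparse_radon(L, d, lam=0.1, niter=50, step=None):
    """ISTA for min_m 0.5*||L m - d||^2 + lam*||m||_1, where L is a
    (forward Radon) operator in matrix form.  This is the conventional
    iterative route to a sparse solution."""
    if step is None:
        step = 1.0 / np.linalg.norm(L, 2) ** 2  # 1 / Lipschitz constant of L^T L
    m = np.zeros(L.shape[1])
    for _ in range(niter):
        # Gradient step on the data misfit, then shrink toward sparsity.
        m = soft_threshold(m + step * L.T @ (d - L @ m), step * lam)
    return m
```

Each ISTA iteration is a linear step followed by a pointwise shrinkage, which is why a network ending in a soft-threshold activation can imitate the mapping from adjoint solution to sparse solution in a single forward pass.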
Article
Multiples have longer propagation paths and smaller reflection angles than primaries for the same source-receiver combination, so they cover a larger illumination area. Therefore, multiples can be used to image shadow zones of primaries. Least-squares reverse time migration of multiples can produce high-quality images with fewer artefacts, high resolution, and balanced amplitudes. However, viscoelasticity exists widely in the earth, especially in the deep-sea environment, and the influence of Q attenuation on multiples is much more serious than on primaries because multiples have longer paths. To compensate for the Q attenuation of multiples, Q-compensated least-squares reverse time migration of different-order multiples is proposed by deriving viscoacoustic Born modeling operators, adjoint operators, and demigration operators for different-order multiples. Based on inversion theory, this method compensates for Q attenuation along all the propagation paths of the multiples. Examples on a simple four-layer model, a modified attenuating Sigsbee2B model, and a field data set suggest that the proposed method produces better imaging results than Q-compensated least-squares reverse time migration of primaries and regular least-squares reverse time migration of multiples.
Article
Detecting the top and base of subsea permafrost from 2D seismic reflection data in shallow marine settings is a non-trivial task due to the occurrence of strong free-surface multiples. The potential to accurately detect permafrost layers on conventional 2D seismic reflection data is assessed through viscoelastic modeling. Reflection imaging of permafrost layers is examined by evaluating specific characteristics of the subsurface and the acquisition parameters and their impact. Results show that the limitations are related to the principles of the method, the intrinsic nature of the permafrost layers, and the acquisition geometry. The biggest challenge is the occurrence of free-surface multiples that overprint the base permafrost reflection, with the worst case being a thin layer of ice-bonded sand. Wedge models suggest that if the base permafrost is dipping, it would intersect internal and free-surface multiples of the seafloor and the top permafrost and could be detected. Also, the amplitude ratio of the base permafrost reflection to the multiples decreases with increasing permafrost thickness; therefore, the crosscutting relationship between the base permafrost reflection and the multiples might not be enough to detect the base permafrost for thicker permafrost layers. Finally, the experimental results show that for partially ice-bonded layers, the attenuation combined with the low reflectivity of the basal interface limits the likelihood of resolving the base permafrost, especially for thick permafrost layers.
Article
In recent years, a variety of deep learning (DL) models for seismic phase picking have attracted considerable attention and are widely adopted in many earthquake monitoring projects. However, most current DL models pick P and S arrivals trace by trace without simultaneously considering the spatial coherence of seismic phases among different stations in a seismic array. In this study, we develop a generalized neural network named CubeNet based on 3D U-Net to properly consider the spatial correlation of individual picks at different stations and thus improve the picking accuracy. To deal with data acquired by irregularly distributed stations, seismic data are first regularized into data cubes, which are then fed into CubeNet to calculate probability distributions of P arrivals, S arrivals, and noise. In addition, a variable trace resampling method for optimizing the differential sampling points between P and S arrivals in a trace for varying array apertures is also proposed to further improve the picking accuracy. CubeNet is trained by 47,000 microseismic data cubes and then tested by three data sets from different arrays with varying apertures and station intervals. It is found that CubeNet is rather resilient to impulsive noise and can avoid misidentifying most of the abnormal picks, which are challenging for single-trace-based phase-picking methods such as PhaseNet. We believe the newly proposed CubeNet is especially suitable for processing seismic data collected by large-N arrays.
Article
In least-squares migration (LSM), multiples are usually treated as a type of noise. Although they contain information about subsurface structures, they also cause artifacts in imaging. Therefore, multiple attenuation is an important way to reduce these artifacts in LSM images. Reweighted least-squares reverse time migration (RWLSRTM) can use the weighting matrix and the predicted multiples to eliminate artifacts. Because LSM provides a high-resolution model, we can predict the internal multiples by using high-order Born modeling. The method is based on the inverse scattering series (ISS); the difference is that it performs forward modeling of the internal multiples in the time domain, with the model constructed by the RWLSRTM. Because this method does not require as many Fourier transforms as the ISS method, it requires less computation. We have applied the predicted multiples in the RWLSRTM to remove the artifacts caused by the multiples. The RWLSRTM image can also serve as a parameter of the multiple prediction and make the prediction results more accurate. Numerical tests on synthetic data show that this method removes the artifacts of internal multiples well. A comparison with the ISS method shows that our method reduces the computation.
Article
Compared with first-order surface-related multiples in marine data, onshore internal multiples are weaker and are often combined with a hazy and occasionally strong interference pattern. It is usually difficult to discriminate these events from complex targets and highly scattering overburdens, especially when the primary energy from deep layers is weaker than that from shallow layers. Internal multiple elimination is even more challenging because the velocity and energy differences between primary reflections and internal multiples are tiny. In this study, we propose an improved method that formulates the elimination of internal multiples as an optimization problem and develops a convolution factor T. The internal multiples generated at all interfaces are obtained using the convolution factor T through iterative inversion of the initial multiple model. The predicted internal multiples are removed from the seismic data through subtraction. Finally, several synthetic experiments are conducted to validate the effectiveness of our approach. The results indicate that, compared with the traditional virtual events method, the improved method simplifies the multiple prediction process, in which internal multiples generated from each interface are built through iterative inversion, thus reducing the calculation cost, improving the accuracy, and enhancing the adaptability to field data.
Article
Seismic internal multiples are a key factor affecting the accuracy and reliability of velocity analysis and migration, and their removal remains challenging. To effectively remove internal multiples from seismic data, we propose an unsupervised deep neural network (DNN) combined with the adaptive virtual events (AVE) method. First, we use the AVE method to obtain the predicted internal multiples, which calibrate the true internal multiples in the original data, also called the full-wavefield data. Second, unsupervised learning with the DNN is used as a nonlinear operator to minimize the difference between the estimated internal multiples and the original data. The trained DNN obtains the estimated internal multiples from the predicted internal multiples, thereby completing the suppression of the internal multiples. Since our proposed unsupervised learning is essentially an optimization process, it does not require true primaries as label data in the training process. Therefore, our proposed method addresses the lack of a training set and has practical value at low computational cost. The effectiveness and efficiency of our proposed method are verified through two sets of synthetic data and one land field data example.
Article
The removal of free-surface-related multiples plays an important role in seismic data processing. We propose a novel scheme for predicting free-surface-related multiples by combining the revised Marchenko equations with free surface effects and convolutional seismic interferometry. This data-driven method can create the free-surface-related multiple prediction using only the reflection response recorded at the free surface and a macro-velocity model estimated in the first layer. By setting only one reference boundary in the first layer, we can stably and efficiently retrieve the one-way upgoing and scattered downgoing Green’s functions at virtual focusing points and then predict all orders of free-surface-related multiples. Numerical experiments on a 2D model and a subset of the Marmousi model show that this method can accurately and efficiently predict the travel times of all orders of free-surface-related multiples. After applying least-squares matched filtering, all orders of the surface-related multiples can be effectively eliminated from the raw shot gathers.
Article
Sub-bottom sediment classifications have been widely used in marine science and engineering to obtain high-resolution information on types of sediments; however, these are often plagued by inaccuracies. Classification difficulties arise from the inability to effectively filter multiple reflections, extract representative lithology characteristic parameters, identify sub-bottom layer interfaces, extract image samples, control sample quality, optimise characteristic parameters, etc. To generate a highly accurate sub-bottom profile sediment map, a five-step classification method that considers two key lithology characteristic parameters of sub-bottom profile acoustic data was proposed. First, multiple reflections were filtered from the sea surface and sub-bottom layer interfaces of the primary signal. Second, two key characteristic parameters (relative backscattering intensity difference and attenuation compensation residual) were calculated. These reflect the relative differences in backscattering intensity and the attenuation compensation between adjacent interfaces based on the sound intensity attenuation model of a sub-bottom profile. Third, a combined method based on the sediment quality factor and peak trough of the echo signal loss level curve was employed to identify the actual interfaces between layers. An additional technique was proposed to determine the image sample width and preferred characteristic parameters. The resulting high-quality image samples and preferred characteristic parameters not only resulted in a faster convergence rate and increased ability of self-aggregation and identification, but also ensured that the training results met the convergence accuracy requirement. Ultimately, the preferred image samples were trained to classify the overall sub-bottom map of a selected test area of approximately 36 km² in Bahai Bay, China. Compared with traditional methods, a considerably higher sediment identification accuracy was obtained. 
The experimental results indicate that the contribution rate of the two key lithology characteristic parameters was 65.49% according to principal component analysis, and the internal and external compatibilities were 97.98% and 84.76% for the training image samples, respectively. The total identification accuracy for the sub-bottom profile map was 98.2%. The two key characteristic parameters accurately captured the acoustic characteristics of sub-bottom sediments, significantly improving sediment classification. These results show that this method could be used to help refine the distributional estimates of submarine mineral resources.
Article
Normal moveout (NMO) and stacking, an important step in the analysis of reflection seismic data, involves summation of seismic data over paths represented by a family of hyperbolic curves. This summation process is a linear transformation that maps the data into what might be called a velocity space: a 2-D set of points indexed by time and velocity. Examination of data in velocity space is used for analyzing subsurface velocities and filtering undesired coherent events (e.g., multiples), but the filtering step is useful only if an approximate inverse to the NMO-and-stack operation is available.
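The hyperbolic summation described above can be sketched directly. Below is a minimal numpy illustration (the nearest-sample interpolation and discretization choices are my assumptions for brevity): each velocity-space sample (t0, v) is the sum of gather amplitudes along the hyperbola t(x) = sqrt(t0^2 + x^2 / v^2).

```python
import numpy as np

def velocity_stack(gather, offsets, dt, velocities):
    """Forward hyperbolic summation: map a CMP gather (nt x nx) into
    velocity space (nt x nv) by summing amplitudes along hyperbolae
    t(x) = sqrt(t0**2 + (x / v)**2).  Nearest-sample interpolation."""
    nt, nx = gather.shape
    out = np.zeros((nt, len(velocities)))
    for iv, v in enumerate(velocities):
        for it0 in range(nt):
            t0 = it0 * dt
            for ix, x in enumerate(offsets):
                t = np.sqrt(t0 ** 2 + (x / v) ** 2)
                it = int(round(t / dt))
                if it < nt:
                    out[it0, iv] += gather[it, ix]
    return out
```

An event whose moveout matches a trial velocity stacks coherently into a focused peak in velocity space; events with different moveout (such as multiples, which stack at lower velocities) land elsewhere, which is what makes muting-and-inverting this space a filtering tool.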
Article
This paper describes the discrete Radon transform (DRT) and the exact inversion algorithm for it. Similar to the discrete Fourier transform (DFT), the DRT is defined for periodic vector-sequences and studied as a transform in its own right. Casting the forward transform as a matrix-vector multiplication, the key observation is that the matrix, although very large, has a block-circulant structure. This observation allows construction of fast direct and inverse transforms. Moreover, we show that the DRT can be used to compute various generalizations of the classical Radon transform (RT) and, in particular, the generalization where straight lines are replaced by curves and weight functions are introduced into the integrals along these curves. In fact, we describe not a single transform, but a class of transforms, representatives of which correspond in one way or another to discrete versions of the RT and its generalizations. An interesting observation is that the exact inversion algorithm cannot be obtained directly from Radon's inversion formula. Given the fact that the RT has no nontrivial one-dimensional analog, exact invertibility makes the DRT a useful tool geared specifically for multidimensional digital signal processing. Exact invertibility of the DRT, flexibility in its definition, and fast computational algorithm affect present applications and open possibilities for new ones. Some of these applications are discussed in the paper.
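The computational payoff of circulant structure is easy to demonstrate in one dimension. Below is a minimal numpy sketch (illustrative only, not the paper's DRT implementation): a circulant matrix-vector product reduces to element-wise multiplication in the Fourier domain, the same trick the block-circulant DRT matrix admits block-wise.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply a circulant matrix, defined by its first column c, with a
    vector x in O(n log n) via the FFT instead of O(n^2).  The product
    C @ x equals the circular convolution of c and x, which the FFT
    diagonalizes."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```

Inversion works the same way: dividing by the eigenvalues `np.fft.fft(c)` (when none vanish) inverts the matrix at the same O(n log n) cost, which is the mechanism behind fast exact DRT inversion.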
Hampson, D., 1986, Inverse velocity stacking for multiple elimination: 56th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 422-424.