Article

Independent Simultaneous Sweeping – a method to increase the productivity of land seismic crews

Authors:
  • TNG Geophysics Ltd

... Simultaneous source acquisition significantly reduces the time and economic cost of conventional seismic acquisition. However, the data acquired with simultaneous sources contain blended wavefields and crosstalk noise from the interfering seismic sources (Beasley et al., 1998; Howe et al., 2008; Abma et al., 2015). Hence, separation of the multi-source wavefields is necessary before further processing and imaging. ...
... In independent simultaneous source acquisition (Howe et al., 2008), the recorded data from different shots are blended spatially and temporally according to the sources' positions and dither times. If we assume the seismic record is composed of multiple independent sources, the blended data can be represented by the equation below. ...
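The equation mentioned in the excerpt above did not survive extraction. For context, a standard formulation of the blended-data model used throughout this literature (after Berkhout, 2008) can be sketched as follows; the symbols here are the conventional ones and not necessarily those of the cited work:

```latex
% Continuous blended record as a superposition of N independently fired sources
d(t) \;=\; \sum_{i=1}^{N} s_i\!\left(t - \tau_i\right),
\qquad \text{or, in operator form,} \qquad
\mathbf{d} = \boldsymbol{\Gamma}\,\mathbf{s},
```

where $s_i$ is the wavefield generated by source $i$, $\tau_i$ is its (dithered) firing time, and $\boldsymbol{\Gamma}$ is the blending operator that encodes the firing times and source positions.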
... Therefore, simultaneous source acquisition requires extra processing, called source separation or deblending. This step estimates the dataset one would have acquired via conventional acquisition (Beasley et al., 1998; Howe et al., 2008; Abma and Foster, 2020). ...
... For the pilot program, the 2D source line was 14.4 km long with a shot-point interval of 25 m (576 SPs). Three pilot surveys were acquired on the same 2D line: blended acquisition with independent simultaneous sourcing (ISS, Howe et al., 2008), S-DSA and M-DSA, where the sources fired independently and simultaneously with no attempt to synchronize their activity. Figure 1a shows a conceptual plot of the source and receiver spread for the ISS survey. ...
... The seismic acquisition industry has witnessed the emergence and thriving of simultaneous-source technology in the past fifteen years, because of its capability of greatly improving survey efficiency to save time and costs (Berkhout, 2008; Beasley, 2008; Abma et al., 2012). Many pioneering field efforts were focused on land and OBS surveys (Howe et al., 2008; Wolfarth et al., 2016), where one can achieve a large randomization of shot-time dithering that makes the interfering energy from simultaneous shooting appear random in certain domains. This large randomization allows interference energy to be discriminated from coherent energy of the current source and, therefore, enables the subsequent deblending step to yield good source-separation performance. ...
... A plethora of associated acronyms bear witness to the rapid development of the technology over recent years. Notable methods are slip-sweep (Rozemond, 1996), HFVS (Allen et al., 1998), DSSS (Bouska, 2010), ISS (Howe et al., 2008) and DSS (Bagaini and Ji, 2010). A useful summary was provided by Bagaini (2010). ...
... Indeed, for best wavefield separation, the shooting times of different sources should be dithered with respect to each other, while real-time communication and synchronization of the sources in the field, as proposed by Vaage (2005), appeared quite complicated. For this reason, BP tested in 2006 a new approach, Independent Simultaneous Sourcing (ISS®), in which no effort is made to synchronize the sources (Howe, 2008), and the only constraint is on the receiver side: the recording has to be continuous. ...
Thesis
Simultaneous-source seismic data acquisition has recently attracted great attention both in the oil and gas industry and in academia, thanks to its capacity to save data acquisition time. Despite the evident time-saving advantage, the simultaneous-source method has a considerable drawback: the sources interfere with each other, creating cross-talk in the data, which leads to a significant increase in processing complexity and potential loss of subsurface image quality. Recent advances in processing and imaging allow acceptable handling of the cross-talk; however, specific processing methods adapted for blended data still need to be improved. Many of the currently proposed separation methods need some preprocessing of the data, e.g., surface-wave suppression. In this thesis, we propose to use a data-driven seismic event model in a greedy decomposition to obtain a separation suitable for raw data without any preprocessing. The proposed method is based on identifying coherent features in the data and classifying them according to their source of origin. We use two nested applications of Orthogonal Matching Pursuit, whose dictionaries are constituted of data-driven models of seismic events and wavelets. Thanks to several optimization steps and appropriate initial conditions, we are able to effectively maximize a non-concave objective function and achieve a satisfactory separation quality, which we demonstrate on synthetic and real simultaneous-source signals.
... Simultaneous source acquisition, or blended acquisition, can be used to reduce acquisition time by allowing several seismic sources to fire at short random time intervals (Beasley et al., 1998; Howe et al., 2008). Responses from each source are recorded by the same receivers. ...
... See e.g. independent simultaneous sourcing (ISS, Howe et al., 2008); pseudo-random sweeping (Dean, 2014). This allows non-patterned shooting along the time dimension for each shot in the blended-source array. ...
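The ISS-style blending described above (sources firing at independent, randomly dithered times into a continuously recording spread) can be illustrated with a minimal NumPy sketch. All sizes and the random records are hypothetical stand-ins, not field parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_samples = 4, 500   # hypothetical: 4 shots, 500 samples each

# Stand-in single-shot records (one trace per shot for simplicity).
shots = rng.standard_normal((n_sources, n_samples))

# Each source fires at its own random (dithered) time while the receiver
# records continuously -- no synchronization between sources.
fire_samples = np.sort(rng.integers(0, 2000, size=n_sources))

# The continuous blended record is the linear superposition of the
# time-shifted shot wavefields.
blended = np.zeros(fire_samples[-1] + n_samples)
for shot, t0 in zip(shots, fire_samples):
    blended[t0:t0 + n_samples] += shot
```

Because the dithers are random, the interfering shots appear incoherent once the data are re-sorted into, e.g., the common-receiver domain, which is what the subsequent deblending step exploits.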
Conference Paper
Full-text available
Recently, we established a blended-acquisition method: a temporally signatured and/or modulated and spatially dispersed source array, namely S-/M-DSA, that jointly uses various signaturing and/or modulation in the time dimension and a dispersed source array in the space dimension. Our S-/M-DSA method makes the blended-acquisition encoding and operations, as well as the deblending processing, significantly simple and robust. In this paper, we discuss how this method enhances acquisition productivity, in addition to deblending performance, in both the time and space dimensions. We show that S-/M-DSA enhances acquisition productivity by decreasing the sweep length by a factor of the number of frequency bands in the time dimension, and by decreasing the number of simultaneous shots using non-uniform, even-but-random and multi-scale sampling in the space dimension. We also show that S-DSA attains the best acquisition productivity whereas M-DSA attains the best deblending performance compared to conventional blending methods.
... (Ishiyama, Mercado and Belaid, 2012); independent simultaneous sourcing (Howe et al., 2008); managed sources and spread (Bagaini, Daly and Moore, 2012). There are plenty of examples using some concepts of this methodology; however, they operate under constraints such as large distance separation among shot locations and large time shifts among shot times, and thereby do not yet fully enjoy the benefits of this methodology. ...
Article
We introduce a concept of generalized blending and deblending, develop its models, and accordingly establish a method of deblended-data reconstruction using these models. The generalized models can handle real situations by including random encoding into the generalized operators in both the space and time domains, and at both the source and receiver sides. We consider an iterative optimization scheme using a closed-loop approach with the generalized blending and deblending models, in which the former performs the forward modelling and the latter the inverse modelling in the closed loop. We applied our method to existing real data acquired in Abu Dhabi. The results show that our method succeeded in fully reconstructing deblended data even from fully generalized, and thus quite complicated, blended data. We discuss the effect of the complexity of blending properties on deblending performance. In addition, we discuss the applicability to time-lapse seismic monitoring, as the method ensures high repeatability of the surveys. We conclude that one can acquire blended data and reconstruct deblended data without serious problems, while enjoying the benefits of blended acquisition. This article is protected by copyright. All rights reserved
... See e.g. distance-separated simultaneous sweeping or shooting (DSSS, Bouska, 2010; Ishiyama et al., 2012); independent simultaneous sourcing (ISS, Howe et al., 2008); managed sources and spread (MSS, Bagaini et al., 2012). These conventional methods require certain constraints in the encoding, such as large distance separation among shot locations and large time shifts among shot times in the blended-source array, so that the shot-generated wavefields do not overlap spatially and temporally with each other, at least around the offset-time window of interest. ...
Conference Paper
Recently, we established a generalized blending model, which can explain any method of blended acquisition by including the encoding into the generalized operators. With this highly flexible and tolerant model, we confront a challenging question: what constitutes an optimal blended-acquisition design, and how can it be found among the many concepts of blended acquisition as the design most suitable for deblended-data reconstruction? In this paper, we introduce a method of blended-acquisition encoding: a temporally modulated and spatially dispersed source array, namely M-DSA, that jointly uses modulation sequencing in the time dimension and a dispersed source array in the space dimension. This allows quite straightforward deblending by filtering and physically separating frequency channels in the frequency domain. We run our blended-acquisition design based on the deblending performance for several scenarios of blended acquisition. These examples show that M-DSA attains the best deblending performance, and that it has fewer constraints in the encoding and more operational flexibility than other methods being developed in the industry today. Indeed, this method requires only simple signaturing in the encoding: merely frequency-banded and modulated signatures in the time dimension for each shot in the blended-source array. This could even render other blending properties, such as distance separation among shot locations and time shifts among shot times, unnecessary. There might be no limitation on the number of sources, and thus no limitation on the blending fold, in order to secure successful deblending. Furthermore, this method allows random sampling: randomly distributed sources in the space dimension in the blended-source array. Consequently, this method makes the blended-acquisition encoding and operations, as well as the deblending processing, significantly simple and robust.
We believe that our M-DSA method should be one of the best methods of blended acquisition.
... Independent Simultaneous Sweeping (ISS, also known as blended acquisition) was first introduced by Howe et al. (2008). The source points are divided into separate areas, each with a fleet (usually containing a single vibrator), as shown in Figure 9. ...
... The wavefields resulting from this category of simultaneous acquisition are characterized by incoherent interference in certain domains, e.g., the common-receiver domain. This can be achieved by means of either (1) a randomized time delay between concurrent sources (Abma, 2012), (2) a randomized distance between concurrent sources for each shot (Monk and Bahorich, 2012), (3) a unique (often random) sweep signal for each source, or (4) a combination of the above, as in independent simultaneous sweeping (Howe et al., 2008). ...
Article
Full-text available
In an effort to reduce acquisition costs or increase (source) sampling density, we have developed a coherent simultaneous-source scheme. Different from most existing simultaneous acquisition, our scheme enforces the received signal to remain coherent in all sorting domains, even in the common-receiver domain. A major benefit of the enforced signal coherency is that it enables multidomain preprocessing prior to source separation. At the same time, it poses a challenge to the source separation itself. Based on the observation that the proposed coherent simultaneous-source scheme is equivalent to the traditional source array, we have developed a novel source separation method that comprises (1) interpolating the observed signal in the space domain and (2) removing the source-array effect. In practice, the source-array effect cannot be perfectly removed in the presence of notches. This fact can, however, be deliberately leveraged for noise attenuation.
Article
Compressive sensing introduces novel perspectives on non-uniform sampling, leading to substantial reductions in acquisition cost and cycle time compared to current seismic exploration practices. Non-uniform spatial sampling, achieved through source and/or receiver areal distributions, and non-uniform temporal sampling, facilitated by simultaneous-source acquisition schemes, enable compression and/or reduction of seismic data acquisition time and cost. However, acquiring seismic data using compressive sensing may encounter challenges such as an extremely low signal-to-noise ratio and the generation of interference noise from adjacent sources. A significant challenge to this innovative approach is to demonstrate the translation of theoretical gains in sampling efficiency into operational efficiency in the field. In this study, we propose a spatial compression scheme based on compressive sensing theory, aiming to obtain an undersampled survey geometry by minimizing the mutual coherence of a spatial sampling operator. Building upon an optimised spatial compression geometry, we subsequently consider temporal compression through a simultaneous-source acquisition scheme. To address challenges arising from the recorded compressed seismic data in the non-uniform temporal and spatial domains, such as missing traces and crosstalk noise, we present a joint deblending and reconstruction algorithm. Our proposed algorithm employs the iterative shrinkage-thresholding method to solve an ℓ2-ℓ1 optimization problem in the frequency-wavenumber-wavenumber (ω-kx-ky) domain. Numerical experiments demonstrate that the proposed algorithm produces excellent deblending and reconstruction results, preserving data quality and reliability. These results are compared with non-blended and uniformly acquired data from the same location, illustrating the robustness of the application.
This study exemplifies how the theoretical improvements based on compressive sensing principles can significantly impact seismic data acquisition in terms of spatial and temporal sampling efficiency.
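The iterative shrinkage-thresholding method mentioned in this abstract can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' implementation: the blending/sampling operator is taken as identity, a 2D FFT stands in for the ω-kx-ky transform, and all names and sizes are hypothetical:

```python
import numpy as np

def soft_threshold(x, lam):
    """Complex soft-thresholding: shrink coefficient magnitudes toward zero,
    zeroing anything below the threshold lam (promotes sparsity)."""
    mag = np.abs(x)
    return np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)

def ista_step(model, data, lam, step=1.0):
    """One iterative shrinkage-thresholding (ISTA) update.
    A gradient step on the data misfit is followed by soft-thresholding
    in the transform (here: 2D Fourier) domain. A real implementation
    would wrap the residual in the forward/adjoint blending operators."""
    residual = data - model          # data misfit (identity operator assumed)
    updated = model + step * residual
    spec = np.fft.fft2(updated)      # move to the sparsifying domain
    spec = soft_threshold(spec, lam) # enforce the l1 penalty
    return np.real(np.fft.ifft2(spec))
```

With the identity operator, repeated `ista_step` calls drive the model toward a sparsified version of the data; in the deblending setting, the thresholding suppresses the incoherent crosstalk while the gradient step keeps the estimate consistent with the recorded blended data.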
Article
Full-text available
ABSTRACT In the coastal zone of Benin, the mangrove is more developed in the central and western sectors than in the eastern one, with Rhizophora racemosa as the dominant species. To it are added the species Avicennia germinans, Conocarpus erectus and Laguncularia racemosa, which are absent from the eastern sector. Natural factors and harmful anthropogenic activities have driven the regression of mangrove areas in the study region. The aim of this research is to analyze the constraints that block the development of the mangrove in order to better define approaches for its sustainable use. Data were collected using materials such as GPS, topographic maps, observation guides and interview guides, together with socio-anthropological surveys (semi-structured interviews, focus groups, direct interviews) of the target population (fishers, salt producers, etc.), and were analyzed with the SEPO model (Successes, Failures, Potentialities and Obstacles). This model highlights the successes, failures, potentialities and obstacles related to mangrove management in Benin. The results show that various afforestation activities carried out by the institutions in charge of mangrove protection in the coastal zone of Benin (ABE, DFRN, PPL, CeRPA, FAO, etc.) failed because of several constraints. To limit these constraints and transform them into opportunities, an evaluation of sustainable mangrove management strategies makes it possible to create an adequate framework for perpetuating the functions and services of this ecosystem.
Article
We introduce a blended-acquisition method: a temporally signatured and/or modulated and spatially dispersed source array. The former, the signatured and dispersed source array, has far fewer constraints in the encoding and more operational flexibility, allowing non-uniform sampling and non-patterned shooting in both the space and time dimensions. The latter, the modulated and dispersed source array, allows straightforward deblending by filtering and physically separating frequency channels in the frequency domain. We demonstrate our method by synthesizing the blended acquisition followed by deblended-data-reconstruction processing in order to discuss its virtues. The examples show that this method makes the blended-acquisition encoding and operations, as well as the deblending processing, simple and robust.
Conference Paper
The high-productivity blended acquisition technique greatly improves acquisition efficiency. However, this technology also introduces serious interference noise from adjacent sources. An efficient data deblending method based on sparse inversion is used to separate the simultaneous sources for offshore 3D DAS VSP data. Owing to the difference in coherence between the main source and the blended noise on time-space common-receiver-point gathers, which can be expressed as sparsity in the FKK domain, we can solve for the effective signal by imposing additional constraints on the unknown signal model. A signal model with gradually improved signal-to-noise ratio can be obtained by subtracting the interference noise from the simultaneous sources. The method has proven efficient and useful in separating the main source from the interference noise using offshore 3D DAS VSP data. It should be pointed out that this method is not applicable to data acquired in areas with an irregular spatial distribution of shot points, because it requires a 3D FKK transformation of common-receiver-point data.
Article
Iterative rank-reduction implemented via Multichannel Singular Spectrum Analysis (MSSA) filtering has been proposed for data deblending. The original algorithm is based on the projected gradient descent method with a projection given by the MSSA filter. Unfortunately, MSSA filters operate on data deployed on a regular grid. We propose to adopt a recently proposed modification to MSSA, Interpolated-MSSA (I-MSSA), to deblend and reconstruct sources in situations where the acquired blended data correspond to sources with arbitrary irregular-grid coordinates. In essence, we propose an iterative rank-reduction deblending method that can honor true source coordinates. In addition, we show how the technique can also be used for source regularization and interpolation. We compare the proposed algorithm with traditional iterative rank reduction that adopts a regular source grid and ignores errors associated with allocating off-the-grid source coordinates to the desired output grid. Synthetic and field data examples show how the proposed method can deblend and reconstruct sources simultaneously.
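The rank-reduction projection at the heart of MSSA-style deblending, as summarized above, can be illustrated with a truncated SVD on a data window. This is a generic sketch of the rank-reduction idea, not the I-MSSA algorithm itself (which additionally builds Hankel matrices and handles irregular coordinates):

```python
import numpy as np

def rank_reduce(window, rank):
    """Project a data window onto its best rank-`rank` approximation.
    Coherent signal concentrates in the leading singular vectors, while
    incoherent blending noise spreads across all of them, so truncation
    acts as a coherence-pass (deblending) projection."""
    u, s, vt = np.linalg.svd(window, full_matrices=False)
    s[rank:] = 0.0              # discard noise-dominated singular values
    return (u * s) @ vt         # recombine the retained components
```

In a projected-gradient deblending loop, this projection is applied to each window of the current estimate after every gradient update, progressively suppressing the source interference.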
Article
The simultaneous source data obtained by simultaneous source acquisition contain crosstalk noise and cannot be used directly in conventional data processing procedures. Therefore, it is necessary to deblend the blended wavefield to obtain the equivalent of conventionally acquired single-shot recordings. In this study, we propose an iterative inversion method based on an unsupervised deep neural network (UDNN) to deblend simultaneous source data from a denser shot coverage survey (DSCS). In the common receiver gather (CRG), the coherent effective signals in the blended data of the primary and secondary sources are similar. We exploit the excellent nonlinear optimization capability of the U-net network to extract similar coherent signals from the blended data of the primary and secondary sources by minimizing the total loss function. The proposed UDNN method does not need the raw unblended data as label data, which solves the problem of missing labels and makes it suitable for deblending simultaneous source data in different work areas with complex underground structures. One synthetic and one field data example are used to show that the proposed method can suppress crosstalk noise, protect weak effective signals, and achieve good separation of simultaneous source data.
Article
In the quest for denser, nimbler, and lower-cost seismic surveys, the industry is seeing a revolution in the miniaturization of seismic equipment, with autonomous nodes approaching the size of a geophone and sources becoming portable by crews on foot. This has created a paradigm shift in the way seismic is acquired in difficult terrains, making zero-environmental-footprint surveys a reality while reducing cost and health, safety, and environmental risk. The simplification of survey operation and the new entry price of seismic surveys unlocked by these technologies are already benefiting industries beyond oil and gas exploration. High-trace-density seismic has become accessible to industries playing a key role in the net-zero era, such as geothermal and carbon capture, utilization, and storage (CCUS), for which a good understanding of the subsurface geology is crucial. We describe these benefits as observed during an ultra-high-density seismic survey acquired in June 2020 through a partnership between STRYDE, Explor, and Carbon Management Canada over the Containment and Monitoring Institute site. The smallest and lightest source and receiver equipment in the industry were used to achieve a trace density of 257 million traces/km² over this test site dedicated to CCUS studies. We discuss the operational efficiency of the seismic acquisition, innovative techniques for data transfer and surveying, and preliminary results of the seismic data processing with a focus on the near-surface model and fast-track time migration.
Article
Seismic blended source acquisition, also referred to as simultaneous source acquisition, is a cost-effective technology that achieves a significant reduction in acquisition cycle time and increases seismic crew field productivity. The dispersed source array is a blended acquisition field technique that simultaneously employs sources emitting different types of sweeps (i.e., multi-sweep), in terms of frequency bandwidth and length, which ultimately yields full-broadband seismic data. In this paper, deblending of 3D multi-sweep blended seismic data and the subsequent merging of data volumes having different frequency bandwidths are discussed. In specific data domains where the signal component is coherent, interference shots (i.e., blending noise) are randomly distributed in the data space according to their shot firing times. Therefore, the deblending process, which separates interference shots from the signal component, becomes a noise attenuation problem. A sparse inversion methodology is applied in the frequency-wavenumber-wavenumber (f-kx-ky) domain to attenuate blending noise. By applying this deblending methodology to both the dispersed source array's low- and mid-high-frequency bandwidths, we obtained high-quality deblending results. For both frequency bandwidths of the deblended dispersed source array data, additional effort was made to combine the two datasets into a single broadband data volume. Consequently, deblending and merging of the dispersed source array blended data generated a broadband, deblended and well-balanced seismic volume suitable for further processing and reservoir characterization applications.
Article
We solve the simultaneous source separation problem by adopting the projected gradient descent (PGD) method to iteratively estimate the data one would acquire via a conventional seismic acquisition. The projection operator is a windowed robust singular spectrum analysis (SSA) filter that suppresses source interferences in the f-x (frequency-space) domain. We reformulate the SSA filter as a robust optimization problem solved via a bifactored gradient descent (BFGD) algorithm. Robustness becomes achievable by adopting Tukey's biweight loss function for the design of the robust SSA filter. The SSA filter requires breaking down common-receiver gathers or common-offset gathers into small overlapping windows. The traditional SSA method needs the filter rank as an input parameter, which can vary from window to window; this has been a shortcoming for the application of classical SSA filtering to complex seismic data processing. The proposed robust SSA filter is less sensitive to rank selection, making it appealing for deblending applications that require windowing. Additionally, the robust SSA projection provides effective attenuation of random source interferences during the initial iterations of the PGD method. Comparing classical and robust SSA filters, we also report an acceleration of PGD convergence when we adopt the robust SSA filter. Finally, we provide synthetic and real data examples, and discuss heuristic strategies for parameter selection.
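The robustness ingredient in the abstract above is Tukey's biweight loss. A small sketch of the loss and the IRLS-style weights it induces (the constant c = 4.685 is the commonly quoted tuning value; the cited work may use a different one):

```python
import numpy as np

def tukey_biweight_loss(r, c=4.685):
    """Tukey's biweight loss: quadratic-like near zero, constant for
    |r| > c, so large (erratic) residuals stop influencing the fit."""
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= c
    loss = np.full_like(r, c**2 / 6.0)          # saturated value outside [-c, c]
    loss[inside] = (c**2 / 6.0) * (1 - (1 - (r[inside] / c) ** 2) ** 3)
    return loss

def tukey_weights(r, c=4.685):
    """Weights for iteratively reweighted least squares derived from the
    biweight loss: w(r) = (1 - (r/c)^2)^2 inside [-c, c], zero outside,
    so outliers (e.g. strong blending noise) are rejected entirely."""
    r = np.asarray(r, dtype=float)
    w = np.zeros_like(r)
    inside = np.abs(r) <= c
    w[inside] = (1 - (r[inside] / c) ** 2) ** 2
    return w
```

Compared with a least-squares loss, whose influence grows without bound, the biweight's hard redescending behaviour is what lets the robust SSA filter ignore erratic source interference in the early PGD iterations.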
Article
Blended source acquisition has drawn great attention in industry due to its increased efficiency and reduced overall cost for acquiring seismic data. It eliminates the requirement of a minimum time (usually determined by record length) between adjacent shots and allows multiple sources to be activated simultaneously and independently. Conventional processing simply converts continuous records into fixed-length records using the source excitation times and then applies traditional denoising techniques to the fixed-length records; the extracted records are the equivalent of traditional synchronous recording. Here, we elaborate on the usage of continuous records for land noise attenuation. Compared to the conventional common shot/receiver/midpoint/offset domains, continuous records represent the data in the naturally recorded domain. This domain offers flexible and much longer record lengths to work with and, moreover, enables exploiting the characteristics of noise prior to correlation, shot slicing, or other preprocessing. We limit our discussion to techniques and methods for attenuating coherent environmental and source-generated noise on vibroseis data. We have found that incoherent noise can be handled effectively by traditional noise suppression methods after deblending. We illustrate the effectiveness of noise attenuation in the continuously recorded domain for three different types of noise, using field examples from the North Slope of Alaska and the Permian Basin.
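The conventional first step described above (converting a continuous record into fixed-length shot records at the source excitation times) can be sketched as follows; array sizes and the zero-padding choice at the record's end are illustrative assumptions:

```python
import numpy as np

def slice_continuous(record, fire_samples, record_len):
    """Cut a continuously recorded trace into fixed-length shot records,
    one per source excitation sample. Blending noise from neighbouring
    shots remains inside each slice and must be removed later by
    deblending."""
    out = []
    for t0 in fire_samples:
        segment = record[t0:t0 + record_len]
        if segment.size < record_len:
            # zero-pad if the continuous record ends before the slice does
            segment = np.pad(segment, (0, record_len - segment.size))
        out.append(segment)
    return np.vstack(out)   # shape: (n_shots, record_len)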
Conference Paper
We recently introduced a blended-acquisition method: a temporally signatured and/or modulated, and spatially dispersed source array, so-called S-/M-DSA. This method jointly uses various signaturing and/or modulation along with a dispersed source array in the time and space dimensions, respectively. We have just acquired and processed the first pilot survey with S-/M-DSA onshore Abu Dhabi. In this paper, we review the resulting acquisition-productivity enhancement in the time dimension, and discuss it in the space dimension as well. We then review the deblended-data-reconstruction processing and discuss its results and performance. Finally, we establish a relationship between acquisition productivity and deblended-data-reconstruction performance. We found that S-/M-DSA significantly enhances acquisition productivity compared to conventional blending methods. For the deblended-data reconstruction, the deblended data can be properly reconstructed, with high repeatability over the full frequency band, from the blended data. The method exhibits a clear relationship: deblended-data-reconstruction performance generally increases with acquisition time (inversely proportional to acquisition productivity), up to a plateau reached by conventional blending methods with much longer acquisition times.
Conference Paper
We have developed a new iterative method for simultaneous source separation (deblending). The proposed technique adopts the robust sparse Radon transform to define a coherence-pass operator which, used in conjunction with the steepest descent method, guarantees solutions that honor the simultaneous source records. We show that a substantial improvement in convergence is attainable when we adopt a robust coherence-pass projection, such as the robust sparse Radon transform solved via the ADMM method. The improvement is a consequence of having an iterative deblending algorithm that applies intense denoising to erratic blending noise in its initial iterations. The coherence-pass robust Radon operator acts as a data projection operator that conserves coherent signals and annihilates incoherent blending noise right from the start of the iterative process. We compare the algorithm with its non-robust version and show that a coherence-pass robust Radon operator can attain high-quality results for both synthetic and real data examples.
Article
The Tangguh gas fields in Eastern Indonesia are overlain by a complex overburden, including a thick, heavily faulted, and intensely karstified carbonate interval that tends to scatter and attenuate seismic energy. Development drilling is challenging, with the potential for pack-offs and stuck pipe when drilling into unstable, partially collapsed caves or karstified fault planes while on total losses. Ideally, these karst features are to be avoided when planning and drilling wells, but avoiding them depends on having a well-resolved seismic image. Historical towed-streamer and sparse ocean-bottom cable seismic is low fold and does not give a satisfactory image for well planning. Advances in ocean-bottom node technology, computer processing, and capacity coupled with efficient survey design and blended acquisition utilizing multiple source vessels allowed a step change in data density. This provided a new high-quality seismic image to support future development activities. The advantages of densely sampled, full-azimuth data include rapid delivery of fast-track products (because high-quality images can be constructed with relatively simple processing flows), greatly improved overburden imaging, and a corresponding uplift in deeper imaging leading to enhanced reservoir characterization.
Article
Numerous field acquisition examples and case studies have demonstrated the importance of recording, processing, and interpreting broadband land data. In most seismic acquisition surveys, three main objectives should be considered: (1) dense spatial source and receiver locations to achieve optimum subsurface illumination and wavefield sampling; (2) coverage of the full frequency spectrum, i.e., broadband acquisition; and (3) cost efficiency. Consequently, an effort has been made to improve the manufacturing of seismic vibratory sources by providing the ability to emit both lower (approximately 1.5 Hz) and higher frequencies (approximately 120 Hz) and of receivers by utilizing single, denser, and lighter digital sensors. All these developments achieve both operational (i.e., weight, optimized power consumption) and geophysical benefits (i.e., amplitude and phase response, vector fidelity, tilt detection). As part of the effort to reduce the acquisition cycle time, increase productivity, and improve seismic imaging and resolution while optimizing costs, a novel seismic acquisition survey was conducted employing 24 vibrators generating two different types of sweeps in a 3D unconstrained decentralized and dispersed source array field configuration. During this novel blended acquisition design, the crew reached a maximum of 65,000 vibrator points during 24 hours of continuous recording, which represents significantly higher productivity than a conventional seismic crew operating in the same area using a nonblended centralized source mode. Applying novel and newly developed deblending algorithms, high-resolution images were obtained. In addition, two data sets (i.e., low-frequency and medium-high-frequency sources) were merged to obtain full-bandwidth broadband seismic images. 
Data comparisons between the distributed blended and nonblended conventional surveys, acquired by the same crew during the same time over the same area, showed that the two data sets are very similar in the poststack and prestack domains.
Article
The high-density acquisition technique can improve subsurface imaging accuracy. However, it rapidly increases production cost, which limits its wide application in practice. To address this issue, high-productivity blended acquisition has emerged as a promising way to significantly increase the efficiency of seismic acquisition and reduce production cost. The great challenge of blended acquisition lies in the severe interference noise of simultaneous sources. Therefore, its success relies heavily on the effectiveness of separating effective energy from the blending noise. We propose a blending-noise suppression approach that combines a median filter, normal moveout (NMO) correction, and the complex curvelet transform (CCT). First, a median filter is applied to the original data after NMO correction. Second, a CCT-based thresholding denoising method is used to extract the remaining effective energy from the median-filtered data to obtain a preliminary deblended result. Next, the updated data are obtained by subtracting the pseudo-deblended data of the deblended result from the original data, and the process iterates. Last, the final deblended result is obtained by accumulating the retrieved energy at each iteration until the signal-to-noise ratio reaches the desired level. We demonstrate the effectiveness of the proposed approach on simulated synthetic and field data examples.
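The estimate-subtract-iterate loop in this abstract can be sketched in a few lines. The sketch below substitutes a crude spectral threshold for the paper's NMO + median + CCT chain, so it illustrates only the structure of the iteration, not the actual filter:

```python
import numpy as np

def coherence_pass(d, keep=0.1):
    """Crude stand-in for the NMO + median + CCT chain: keep only the
    strongest spectral components, which favours coherent energy."""
    D = np.fft.rfft(d, axis=-1)
    thresh = np.quantile(np.abs(D), 1 - keep)
    D[np.abs(D) < thresh] = 0
    return np.fft.irfft(D, n=d.shape[-1], axis=-1)

def iterative_deblend(pseudo, n_iter=10):
    """Estimate-subtract-iterate loop: extract coherent energy from the
    current residual, accumulate it, and repeat on what remains."""
    estimate = np.zeros_like(pseudo)
    residual = pseudo.copy()
    for _ in range(n_iter):
        update = coherence_pass(residual)
        estimate += update   # accumulate retrieved signal
        residual -= update   # what remains is (mostly) blending noise
    return estimate
```

In the real method the iteration stops once the residual reaches the desired signal-to-noise ratio rather than after a fixed count.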
Article
We propose a deblending workflow comprising rank reduction filtering followed by a signal-enhancing process. This methodology preserves coherent subsurface reflections while removing incoherent interference noise. In pseudo-deblended data, the blending noise exhibits coherent events, whereas in any other data domain (i.e., common receiver, common midpoint, and common offset) it appears incoherent and is regarded as an outlier. To perform signal deblending, a robust implementation of rank reduction filtering, referred to as a joint sparse and low-rank approximation, is employed to eliminate the blending noise. Deblending via rank reduction filtering gives a reasonable result with a sufficient signal-to-noise ratio (SNR). However, for land data acquired using unconstrained simultaneous shooting, rank-reduction-based deblending alone does not completely attenuate the interference noise: a considerable amount of signal leakage is observed in the residual component, which can affect further data processing and analyses. In this study, we propose a deblending workflow via a rank reduction filter followed by post-processing steps comprising a nonlinear masking filter and local orthogonalization weight (LOW) applications. Although each application alone leaves a few footprints of leaked signal energy, the proposed combined workflow restores the signal energy from the residual component, achieving significant SNR enhancement. These hierarchical schemes were applied to land simultaneous-shooting data sets and produced cleaner, more reliable deblended data ready for further processing.
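The rank reduction idea underlying this workflow can be illustrated with a plain truncated SVD: coherent reflections concentrate in a few leading singular vectors of a data panel, while incoherent blending noise spreads across all of them. This sketch is the basic (non-robust, non-sparse) version only; the paper's joint sparse and low-rank approximation is considerably more involved:

```python
import numpy as np

def rank_reduce(panel, rank):
    """Low-rank approximation of a data panel (e.g. a common-receiver
    gather) by truncating its singular value decomposition."""
    u, s, vt = np.linalg.svd(panel, full_matrices=False)
    s[rank:] = 0                      # discard the noise-dominated part
    return (u * s) @ vt

# Hypothetical panel: one coherent (rank-1) event plus random crosstalk.
rng = np.random.default_rng(0)
event = np.outer(np.ones(50), np.sin(np.linspace(0, 8, 60)))
noisy = event + 0.5 * rng.standard_normal(event.shape)
cleaned = rank_reduce(noisy, rank=1)
```

Choosing the truncation rank is the delicate part in practice; too low a rank causes exactly the signal leakage into the residual that the abstract's post-processing steps are designed to recover.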
Article
We present Matrioshka orthogonal matching pursuit (OMP), a method consisting of two nested OMPs for separating seismic sources at an early stage of the signal processing chain. Matrioshka OMP is based on models of sensor signals that place nonrestrictive assumptions on the seismic survey using simultaneous sources. Our seismic event model is based on the spatial coherence of signals, which results in a straight or slightly curved feature in the trace representation of the data with a specific wavelet, whose magnitude can linearly vary according to the offset. We demonstrate the effectiveness of the approach on synthetic and real data.
Article
We have developed an iterative method for simultaneous source separation (deblending) suitable for data acquired with a high blending factor. The proposed technique adopts the robust sparse Radon transform to define a coherence pass operator that is used in conjunction with the steepest descent method to guarantee solutions that honor simultaneous source records. We show that an important improvement in convergence is attainable when the coherence pass projection is derived from a robust sparse Radon transform. This is a consequence of having an iterative deblending algorithm that applies intense denoising to erratic blending noise in its initial iterations. The coherence pass robust Radon operator acts as a data projection operator that preserves coherent signals and annihilates incoherent blending noise right from the start of the iterative process. We compare the algorithm with its non-robust version and show that a coherence pass non-robust Radon operator will only achieve high-quality results for acquisitions with a moderate blending factor.
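The coherence-pass idea rests on the focusing property of the Radon transform. Below is a deliberately naive shift-and-sum slant stack (all sizes hypothetical): a linear event stacks constructively at its own slope, while incoherent blending noise does not. The paper's robust sparse Radon transform is an inversion that sharpens this focusing and suppresses erratic noise; this forward transform only gestures at the principle:

```python
import numpy as np

def slant_stack(data, slopes, dx=1.0, dt=1.0):
    """Naive linear (tau-p) Radon transform by shift-and-sum."""
    n_x, n_t = data.shape
    taup = np.zeros((len(slopes), n_t))
    for ip, p in enumerate(slopes):
        for ix in range(n_x):
            shift = int(round(p * ix * dx / dt))   # moveout in samples
            taup[ip] += np.roll(data[ix], -shift)  # align, then stack
    return taup / n_x

# Hypothetical gather: a single linear event with slope 2 samples/trace.
gather = np.zeros((8, 64))
for ix in range(8):
    gather[ix, 5 + 2 * ix] = 1.0

panel = slant_stack(gather, slopes=[0.0, 1.0, 2.0, 3.0])
```

In the tau-p panel the event collapses to full amplitude at slope 2, while at wrong slopes its energy is spread thin, which is what lets a threshold in that domain act as a coherence-pass operator.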
Article
Blended acquisition can improve seismic data quality or enhance acquisition efficiency. However, blended seismic data must first be separated for the subsequent traditional seismic data processing steps. The signal is coherent in the common receiver domain, whereas the blending noise appears random when the blending operator is constructed using a random time-delay series. Seismic data can be characterized sparsely by the curvelet transform, which can be used for deblending; however, it has a high computational cost, especially for large data volumes. The spectrum of seismic data is band-limited with the conjugate-symmetry property, so the principal frequency components characterize the signal accurately at no more than half the size. Thus, we propose to apply the curvelet transform to data in the principal frequency-wavenumber (PFK) domain instead of the time-space (TX) domain. Because the PFK-domain data are at most half the size of the TX-domain data, the deblending efficiency improves accordingly. The related formulae are fully derived, and a detailed efficiency analysis is provided. One synthetic and two artificially blended field data examples demonstrate the validity and flexibility of the proposed method, both in efficiency improvement and in deblending performance. The separated gathers can benefit subsequent traditional seismic data processing procedures.
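The conjugate-symmetry argument that motivates the PFK domain is easy to verify numerically: for real-valued data, the non-negative ("principal") frequencies carry all the information at roughly half the size, and the original data are recovered exactly from that half. A small numpy demonstration with hypothetical gather dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
gather = rng.standard_normal((32, 128))        # traces x time samples

# Real data have a conjugate-symmetric spectrum; the principal half
# suffices. fft keeps all 128 complex samples per trace, rfft only 65.
full_spec = np.fft.fft(gather, axis=-1)
principal = np.fft.rfft(gather, axis=-1)
assert full_spec.shape[-1] == 128
assert principal.shape[-1] == gather.shape[-1] // 2 + 1   # 65

# The original gather is recovered exactly from the principal half alone.
recovered = np.fft.irfft(principal, n=gather.shape[-1], axis=-1)
assert np.allclose(recovered, gather)
```

Running the curvelet transform on the roughly half-size principal-component data is what yields the paper's efficiency gain.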
Article
The availability of reliable simultaneous-source deblending routines in seismic data processing allows 3D survey designers to consider using more than the traditional two sources of marine towed-streamer 3D surveys. Crossline sampling with sources rather than streamers provides opportunities to produce denser sampling for the same vessel sail-line effort, the same sampling with fewer streamers, or the same sampling with wider streamer separations, resulting in a considerable uplift in vessel efficiency and reductions in cost, operational risk, and crew safety exposure.
Article
Compared with conventional seismic acquisition methods, simultaneous-source acquisition utilizes independent shooting that allows for source interference, which reduces the time and cost of acquisition. However, additional processing is required to separate the interfering sources. Here, we present an inversion-based deblending method, which distinguishes signal from blending noise based on coherency differences in 3D receiver gathers. We first transform the seismic data into the frequency-wavenumber-wavenumber domain and impose a sparse constraint to estimate the coherent signal. We then subtract the estimated signal from the original input to predict the interference noise. Driven by data residuals, the signal is updated iteratively with shrinking thresholds until the signal and noise fully separate. We test our presented method on two 3D field data sets to demonstrate how the method proficiently separates interfering vibroseis sources with high fidelity.
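The shrinking-threshold iteration described here can be sketched compactly. The sketch below uses a 2D FFT as a stand-in for the frequency-wavenumber-wavenumber transform of a 3D receiver gather, and a simple geometric threshold schedule (both are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def shrinking_threshold_deblend(gather, n_iter=8):
    """Iterative sparse estimation: transform the residual, keep only
    coefficients above a threshold that shrinks each iteration (strong
    coherent signal accepted first, weaker signal later), accumulate."""
    signal = np.zeros_like(gather, dtype=float)
    for it in range(n_iter):
        residual = gather - signal
        spec = np.fft.fft2(residual)                  # stand-in for f-k-k
        level = np.abs(spec).max() * 0.5 ** (it + 1)  # shrinking threshold
        spec[np.abs(spec) <= level] = 0               # sparsity constraint
        signal = signal + np.real(np.fft.ifft2(spec))
    return signal
```

Driven by the data residual, the estimate absorbs progressively weaker coherent energy, leaving the incoherent interference behind, which mirrors the abstract's signal/noise separation loop.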
Article
A goal of simultaneous shooting is to acquire high-quality seismic data more efficiently, reducing operational costs. Effective sampling and deblending techniques are essential to achieve this goal. Inspired by compressive sensing (CS), we formulate deblending as an analysis-based sparse inversion problem. We solve the inversion problem with an algorithm derived from the classic alternating direction method (ADM), combined with variable-splitting and nonmonotone line-search techniques. In our testing, the analysis-based formulation together with the nonmonotone ADM algorithm outperforms synthesis-based approaches. A major issue for all deblending approaches is how to deal with real-world variations in seismic data caused by static shifts and amplitude imbalances. We evaluate the concept of including static and amplitude corrections obtained from surface-consistent solutions in the deblending formulation. We implement solutions that use a multistage inversion scheme to overcome the practical issues embedded in field-blended data, such as strong coherent noise, statics, and shot-amplitude variations. The combination of these techniques gives high-fidelity deblending results for marine and land data. We use two field-data examples acquired with simultaneous sources to demonstrate the effectiveness of the proposed approach. Imaging and quantitative amplitude-variation-with-offset analysis indicate the amplitude-preserving character of data deblended with this methodology.
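The workhorse of ADM/ISTA-style sparse inversion solvers is the soft-thresholding (shrinkage) step. The toy below solves a generic L1-regularized least-squares problem with plain ISTA on a random matrix standing in for the blending/sampling operator; the paper's ADM with variable splitting and nonmonotone line search is a more sophisticated relative of this basic iteration, and all sizes and the regularization weight here are arbitrary:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: the shrinkage step at the core
    of ADM/ISTA-style sparse inversion solvers."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy problem: min ||Ax - b||^2 + lam * ||x||_1 with a sparse true model.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz bound
x = np.zeros(60)
for _ in range(500):
    grad = A.T @ (A @ x - b)             # gradient of the data-misfit term
    x = soft_threshold(x - step * grad, step * 0.1)
```

Despite the system being underdetermined (30 equations, 60 unknowns), the L1 penalty recovers the sparse model, which is the compressive-sensing insight the abstract invokes.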
Article
Simultaneous-source acquisition, also referred to as "blended acquisition", involves recording two or more shots simultaneously. It allows denser spatial sampling and can greatly speed up field data acquisition, so it has the potential to improve seismic data quality and reduce acquisition cost. To realize these benefits, a deblending procedure is necessary: it attenuates the interference and thus improves the resolution of the prestack time migration image. In this paper, we propose an efficient deblending method that applies frequency-varying median and mean filters to cross-spread azimuth-offset (XSPR-AO) gathers. The method can use variable window sizes according to the characteristics of the interference. Its effectiveness is validated by a field data example.
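The principle of a frequency-varying median filter can be sketched as follows: spiky interference from other sources is an outlier across neighbouring traces, so a trace-direction median rejects it, and the window can vary per frequency slice. The linear wide-to-narrow window schedule and the XSPR-AO-free 2D geometry below are simplifying assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def freq_varying_median(gather, win_low=9, win_high=3):
    """Median filter across traces, applied per frequency slice, with a
    window that narrows from win_low at DC to win_high at Nyquist."""
    n_traces, n_t = gather.shape
    spec = np.fft.rfft(gather, axis=1)
    n_f = spec.shape[1]
    out = np.empty_like(spec)
    for f in range(n_f):
        frac = f / max(n_f - 1, 1)
        win = int(round(win_low + frac * (win_high - win_low))) | 1  # odd
        half = win // 2
        padded = np.pad(spec[:, f], half, mode="edge")
        for i in range(n_traces):
            seg = padded[i:i + win]
            # median of complex values via real/imag parts separately
            out[i, f] = np.median(seg.real) + 1j * np.median(seg.imag)
    return np.fft.irfft(out, n=n_t, axis=1)
```

A median leaves smooth trace-to-trace variation (signal) nearly untouched while removing isolated outlier traces, which is why it suits the incoherent-looking blending noise.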
Article
Although narrow-azimuth towed-streamer data provide good image quality for structural interpretation, it is generally accepted that for wide-azimuth marine surveys seabed receivers deliver superior seismic reflection measurements and seismically derived reservoir attributes. However, seabed surveys are not widely used because of their higher acquisition costs compared with streamer acquisition. In recent years, there have been significant engineering efforts to automate receiver deployment and retrieval in order to minimize the cost differential and conduct cost-efficient seabed receiver seismic surveys. These efforts include industrially engineered nodes, nodes-on-a-rope deployment schemes, and even robotic nodes that swim to and from the deployment location. This move to automation is inevitable, leading to robotization of seismic data acquisition for exploration and development activities in the oil and gas industry. We are developing a robotic technology that uses autonomous underwater vehicles as seismic sensors without the need for a remotely operated vehicle for deployment and retrieval. In this paper, we describe the evolution of the autonomous underwater vehicle throughout the project years, from initial heavy and bulky nodes to fully autonomous, light, and flexible underwater receivers. Results from two field pilot tests using different generations of autonomous underwater vehicles indicate that the seismic coupling and the navigation based on underwater acoustics are very reliable and robust.
Conference Paper
Full-text available
Two important developments in seismic acquisition in the last decade were the introduction of simultaneous shooting and the reconstruction of the complete seismic wavefield via inversion. As the seismic wavefield is typically undersampled in at least one of the four spatial coordinates, both developments can contribute to improved sampling and, in addition, to increased acquisition efficiency. Herein, we look at the impact of seismic noise when simultaneous shooting and seismic wavefield reconstruction are implemented for a land or ocean-bottom seismic survey.