Geophysics

Published by Society of Exploration Geophysicists
Online ISSN: 0016-8033
Conference Paper
We have developed a computer-graphic-photographic system that uses color mimicry to display the frequency spectra of seismic events simultaneously with their time-varying waveforms. Mimicking the visible light spectrum, we have used red for the low frequencies and violet for the highs. The output of our system is a variable-area wiggle-trace seismic cross-section. The waveforms are the same as those on a conventional section; however, the variable-area part of the section appears in color. The color represents the frequency spectrum of the wavelets. Lateral changes in rock attenuation show up as color shifts on this type of display. Faults often stand out as interrupted color bands. Fault diffractions sometimes have a characteristic color signature. The cancellation of high frequencies due to misalignment of events on constant-velocity stacks can show up in color. Loss of high frequencies due to slight lateral changes in moveout velocity, and consequent trace misalignment, is often indicated by a shift toward red on a color seismic section.
 
Conference Paper
A formulation of the semblance coefficient in terms of the eigenstructure of the data covariance matrix is given. The conventionally used semblance procedure is thereby tied to eigenstructure, giving the seismic signal analyst a way to relate the various displays of velocity spectra by more than visual appearance. After writing semblance in eigenstructure form, comparisons are made to a number of well-known eigenstructure formulations that separate the signal and noise (vector) subspaces. It is shown qualitatively that semblance should have neither the resolving power nor the computational efficiency of the newer methods.
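The tie between semblance and the covariance eigenstructure can be sketched numerically: for a data matrix X of M traces, the conventional semblance ratio equals (1ᵀC1)/(M·tr C), where C = XXᵀ is the trace covariance matrix. A minimal check on a hypothetical gather (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gather: M traces, N samples; a common signal plus noise.
M, N = 8, 200
t = np.arange(N)
signal = np.sin(2 * np.pi * 0.02 * t)
X = np.tile(signal, (M, 1)) + 0.3 * rng.standard_normal((M, N))

# Conventional semblance: stacked energy over total energy.
semblance = np.sum(X.sum(axis=0) ** 2) / (M * np.sum(X ** 2))

# Eigenstructure form: with C = X X^T (trace covariance),
# semblance = (1^T C 1) / (M * trace(C)); the quadratic form 1^T C 1
# expands over the eigenpairs of C, which is what links semblance to
# signal/noise subspace methods.
C = X @ X.T
ones = np.ones(M)
semblance_eig = (ones @ C @ ones) / (M * np.trace(C))

assert np.isclose(semblance, semblance_eig)
```

Both expressions are algebraically identical; the eigenstructure form simply exposes how the stack projects onto the dominant eigenvector of C.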
 
Conference Paper
An electromagnetic sounder developed for an archaeological application in Egypt has been successfully tested in a California dolomite mine. Chambers in the mine 100 to 130 ft from the surface gave intense, well‐defined echoes consistent with an average attenuation coefficient of 0.6 dB/m and a relative dielectric constant of 11. By moving the transmitter and receiver units on the hillside above the underground chambers, various characteristics of the propagation could be observed such as dispersion, chamber aspect sensitivity, and cross‐section. The transmitted pulse was one to one‐and‐a‐half radio‐frequency cycles long at a peak power of 0.2 MW. Frequencies employed were 16 to 50 MHz. The light weight, highly portable, battery‐powered equipment is potentially suited to other underground sounding applications.
 
Conference Paper
A method of estimating c<sub>66</sub>, one of the shear moduli of a transversely isotropic rock, using a guided wave generated during acoustic logging is described. The symmetry axis of the anisotropy is assumed to be parallel to the borehole. The inversion for c<sub>66</sub> is based on a cost function with three terms: a measure of the misfit between the observed and predicted wavenumbers of the guided wave (tube wave), a measure of the misfit between the current estimate for c<sub>66</sub> and its most likely value, and penalty functions that constrain the estimate for c<sub>66</sub> to physically acceptable values. Estimates for c<sub>66</sub> from synthetic data are almost always within 1% of their correct value. Estimates for c<sub>66</sub> from field data collected in a formation with a high clay content are typical of transversely isotropic rocks.
 
Article
Adaptive subtraction is a key element in predictive multiple-suppression methods. It minimizes misalignments and amplitude differences between modeled and actual multiples, and thus reduces multiple contamination in the dataset after subtraction. Due to the high cross-correlation between their waveforms, the main challenge resides in attenuating multiples without distorting primaries. As they overlap over a wide frequency range, we split this wide-band problem into a set of more tractable narrow-band filter designs, using a 1D complex wavelet frame. This decomposition enables a single-pass adaptive subtraction via complex, single-sample (unary) Wiener filters, consistently estimated on overlapping windows in a complex wavelet transformed domain. Each unary filter compensates amplitude differences within its frequency support, and can correct small and large misalignment errors through phase and integer delay corrections. This approach greatly simplifies the matching filter estimation and, despite its simplicity, narrows the gap between 1D and standard adaptive 2D methods on field data.
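The core idea of a single-sample complex matching filter can be illustrated in a much cruder transform than the paper's complex wavelet frame. The sketch below (an assumption for illustration, using plain FFT bins rather than the wavelet frame, and synthetic traces rather than field data) fits one complex coefficient per frequency, so each band gets its own amplitude scaling and phase (hence small delay) correction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical traces: d contains a "multiple" that differs from the
# modeled multiple m by an amplitude factor and a small delay.
n = 256
t = np.arange(n)
multiple = np.exp(-0.5 * ((t - 100) / 6.0) ** 2) * np.cos(0.4 * t)
d = 1.5 * np.roll(multiple, 2) + 0.05 * rng.standard_normal(n)
m = multiple

# Single-sample (unary) complex Wiener coefficient per frequency:
# a(f) = D(f) M*(f) / (|M(f)|^2 + eps).  A complex scalar per band
# absorbs amplitude and phase differences; eps regularizes bands
# where the model has no energy.
D, Mf = np.fft.rfft(d), np.fft.rfft(m)
eps = 1e-3 * np.max(np.abs(Mf)) ** 2
a = (D * np.conj(Mf)) / (np.abs(Mf) ** 2 + eps)
residual = d - np.fft.irfft(a * Mf, n)

# The adapted model removes far more energy than direct subtraction.
assert np.linalg.norm(residual) < 0.5 * np.linalg.norm(d - m)
```

The paper's windowed wavelet-domain estimation additionally keeps the filters local in time, which a global FFT cannot do; this sketch only shows why one complex sample per band already handles scale and phase errors.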
 
Article
Solutions to the problem of radiation of dipole antennas in the presence of stratified anisotropic media are facilitated by decomposing a general wave field into transverse magnetic (TM) and transverse electric (TE) modes. Employing the propagation matrices, wave amplitudes in any region are related to those in any other region. The reflection coefficients, which embed all the information about the geometrical configuration and the physical constituents of the medium, are obtained in closed form. In view of the general formulation, various special cases are discussed.
 
Article
We use the Discrete Element Method (DEM) to understand the underlying attenuation mechanism in granular media, with special applicability to the measurements of the so-called effective mass developed earlier. We consider that the particles interact via Hertz-Mindlin elastic contact forces and that the damping is describable as a force proportional to the velocity difference of contacting grains. We determine the behavior of the complex-valued normal mode frequencies using 1) DEM, 2) direct diagonalization of the relevant matrix, and 3) a numerical search for the zeros of the relevant determinant. All three methods are in strong agreement with each other. The real and the imaginary parts of each normal mode frequency characterize the elastic and the dissipative properties, respectively, of the granular medium. We demonstrate that, as the interparticle damping, $\xi$, increases, the normal modes exhibit nearly circular trajectories in the complex frequency plane and that for a given value of $\xi$ they all lie on or near a circle of radius $R$ centered on the point $-iR$ in the complex plane, where $R\propto 1/\xi$. We show that each normal mode becomes critically damped at a value of the damping parameter $\xi \approx 1/\omega_n^0$, where $\omega_n^0$ is the (real-valued) frequency when there is no damping. The strong indication is that these conclusions carry over to the properties of real granular media whose dissipation is dominated by the relative motion of contacting grains. For example, compressional or shear waves in unconsolidated dry sediments can be expected to become overdamped beyond a critical frequency, depending upon the strength of the intergranular damping constant.
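The complex-frequency behavior described above can be illustrated with the simplest possible analogue: a single damped mode x″ + ξx′ + ω₀²x = 0 with damping force proportional to velocity. Note this is only a one-oscillator sketch with its own normalization of ξ, not the paper's many-body normal-mode analysis (there, critical damping occurs near ξ ≈ 1/ω₀ in the authors' units):

```python
import numpy as np

# Single damped mode x'' + xi*x' + w0^2 x = 0 as a minimal analogue of one
# normal mode of a damped contact network (the damping parameter here is
# NOT normalized like the paper's xi).  With x ~ exp(-i w t), the complex
# frequency w solves w^2 + i*xi*w - w0^2 = 0.
w0 = 2.0
for xi in [0.5, 2.0, 5.0]:
    roots = np.roots([1.0, 1j * xi, -w0 ** 2])
    for w in roots:
        if xi < 2 * w0:
            # Underdamped: |w| = w0, i.e. the mode sits on a circle in the
            # complex plane as xi varies.
            assert np.isclose(abs(w), w0)
        else:
            # At and beyond critical damping: purely imaginary (overdamped).
            assert np.isclose(w.real, 0.0)
```

The real part gives the oscillation rate (elastic behavior) and the negative imaginary part the decay rate (dissipation), mirroring the paper's interpretation of its normal-mode frequencies.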
 
Article
An evaluation was performed on SWIR (2000-2400 nm) data from two airborne remote sensing systems for discriminating and identifying alteration minerals at Cuprite, Nevada. The data were acquired by the NASA Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS) and the GEOSCAN Mk II multispectral scanner. The evaluation involved comparison of processed imagery and image-derived spectra with existing alteration maps and laboratory spectra of rock samples from Cuprite. Results indicate that both the AVIRIS and GEOSCAN data permit the discrimination of areas of alunite, buddingtonite, kaolinite, and silicification using color composite images formed from three SWIR bands processed with either the decorrelation stretch or a log residual algorithm. The laboratory spectral features of alunite, kaolinite, and buddingtonite could be seen clearly only in the log residual processed AVIRIS data. However, this does not preclude their identification with the GEOSCAN data.
 
Article
The NASA airborne Thermal Infrared Multispectral Scanner (TIMS) was flown over Death Valley, California in July 1983. Using the day and night surface temperature data and Landsat TM reflectance data, an apparent thermal inertia image has been produced. This image allows separation of some bedrock units and separation of bedrock from alluvium. The temperature images allow inferences about the soil moisture and/or soil conditions on some of the alluvial fans.
 
Article
The three electromagnetic properties appearing in Maxwell's equations are dielectric permittivity, electrical conductivity and magnetic permeability. The study of point diffractors in a homogeneous, isotropic, linear medium suggests the use of logarithms to describe the variations of electromagnetic properties in the earth. A small anomaly in electrical properties (permittivity and conductivity) responds to an incident electromagnetic field as an electric dipole, whereas a small anomaly in the magnetic property responds as a magnetic dipole. Neither property variation can be neglected without justification. Considering radiation patterns of the different diffracting points, diagnostic interpretation of electric and magnetic variations is theoretically feasible but is not an easy task using Ground Penetrating Radar. However, using an effective electromagnetic impedance and an effective electromagnetic velocity to describe a medium, the radiation patterns of a small anomaly behave completely differently with source-receiver offset. Zero-offset reflection data give a direct image of impedance variations while large-offset reflection data contain information on velocity variations.
 
Article
The National Aeronautics and Space Administration is supporting research in those areas of air- and spaceborne remote sensing which are related to the study of natural and cultural resources. Resources which can be studied in this manner include mineral districts, uncharted petroleum basins, little known coastal areas, soils and soil moisture, crops, timber, land and land use in both agricultural and metropolitan areas, watershed areas, transportation networks, sea state, marine near-surface life, and so forth. Remote sensing instruments in earth-orbital spacecraft possess a number of unique advantages, among which are: rapidity and continuity of observations, synoptic imagery for regional analyses, reduced data acquisition time, relative freedom from weather disturbances, reduced costs for those features requiring periodic measurements, and better quality data of some types. The progress being made by NASA in these areas is discussed.
 
Example 2: matching noisy synthetic traces. A synthetic trace is generated using the Marmousi velocity model and the other trace is created by applying a known mapping. The black arrows in the top figure connect corresponding peaks.
Spectra of signals transformed via LFA transformations (6a)-(6c). (a) u(t) and its envelope U<sub>e</sub>(t) = |u(t) + iH[u(t)]|, (b) Comparison of a wavelet u(t) and its LFA transformation U<sub>h</sub>(t) obtained using the Hilbert transform (6a), (c) Comparison of spectra of the wavelet u(t) and of the envelope signal U<sub>e</sub>(t), (d) Comparison of spectra of the wavelet u(t) and of the LFA signal U<sub>h</sub>(t), (e) Comparison of spectra of the wavelet u(t) and of the LFA signal U<sub>s</sub>(t) = u<sup>2</sup>(t), (f) Comparison of spectra of the wavelet u(t) and of the LFA signal U<sub>a</sub>(t) = |u(t)|. The blue solid (red dashed) lines in (c)-(f) respectively correspond to spectra of the wavelet u(t) (of the LFA signals).
Comparison of convergence among seismogram registration using three LFA signals: U<sub>h</sub>, U<sub>s</sub>, and U<sub>a</sub>. The black solid (red marked, blue dashed) line corresponds to the L<sub>2</sub> data misfit using U<sub>h</sub> (U<sub>s</sub>, U<sub>a</sub>), respectively.
Article
This paper introduces an iterative scheme for acoustic model inversion where the notion of proximity of two traces is not the usual least-squares distance, but instead involves registration as in image processing. Observed data are matched to predicted waveforms via piecewise-polynomial warpings, obtained by solving a nonconvex optimization problem in a multiscale fashion from low to high frequencies. This multiscale process requires defining low-frequency augmented signals in order to seed the frequency sweep at zero frequency. Custom adjoint sources are then defined from the warped waveforms. The proposed velocity updates are obtained as the migration of these adjoint sources, and cannot be interpreted as the negative gradient of any given objective function. The new method, referred to as RGLS, is successfully applied to a few scenarios of model velocity estimation in the transmission setting. We show that the new method can converge to the correct model in situations where conventional least-squares inversion suffers from cycle-skipping and converges to a spurious model.
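The need for low-frequency augmented signals can be made concrete: a band-limited wavelet has essentially no energy at zero frequency, so a sweep seeded there has nothing to work with, whereas its envelope |u + iH[u]| does. A minimal numerical check on a hypothetical Ricker wavelet (the paper's actual augmentations and parameters may differ):

```python
import numpy as np

# Hypothetical Ricker wavelet, band-limited away from zero frequency.
n, dt, f0 = 512, 0.004, 25.0
t = (np.arange(n) - n // 2) * dt
u = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

def envelope(x):
    """Envelope |x + iH[x]| via the FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0], h[n // 2] = 1, 1   # n is even here
    h[1:n // 2] = 2
    return np.abs(np.fft.ifft(X * h))

U = np.abs(np.fft.rfft(u))
Ue = np.abs(np.fft.rfft(envelope(u)))

# The wavelet has almost no energy at DC; its envelope does, which is
# what allows a frequency sweep to start from zero frequency.
assert U[0] < 1e-3 * U.max()
assert Ue[0] > 0.1 * Ue.max()
```

This is the property exploited by the U<sub>e</sub>, U<sub>s</sub>, and U<sub>a</sub> signals in the figures above: all three are nonnegative (or squared) transforms, so their spectra extend down to DC.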
 
Article
The coupling of seismic energy under vacuum conditions, such as the moon, using an untamped surface charge is different from coupling in air. In vacuum, the explosive gas blast and the detonation products continuously expand outward and interact with the solid surface. A series of model experiments was performed to investigate the effect of vacuum on coupling seismic energy. HNS charges of 0.2 gm each were detonated in contact with a plate and block of acrylic plastic in vacuum and in air. The amplitudes of the first and second arrivals (longitudinal and shear plate wave) are about 50 percent greater in vacuum than in air because the plate velocity (∼2.4 km/sec) more closely matches the gas‐blast velocity (∼3.5 to 7.5 km/sec) than the sound‐wave velocity (∼0.35 km/sec). When the charges are detonated in contact with the block to generate direct body waves, little difference is noted in the first arrival amplitudes in air and vacuum; suspending the charge one charge‐diameter above the surface produces about 25 percent lower first amplitudes in a vacuum. Large scale experiments were also performed in air to examine the effect of the detonation configuration on seismic coupling.
 
Article
A new experimental satellite has provided, for the first time, thermal data that should be useful in reconnaissance geologic exploration. Thermal inertia, a property of geologic materials, can be mapped from these data by applying an algorithm that has been developed using a new thermal model. A simple registration procedure was used on a pair of day and night images of the Powder River basin, Wyoming, to illustrate the method. Preliminary assessment of these satellite data suggests that they will be of significant use for resource exploration when used in conjunction with other geologic, geophysical, and geochemical data.
 
Article
Seismic data are usually irregularly and sparsely sampled along the spatial coordinates, which yields suboptimal imaging results. For time‐lapse data, differences in the spatial sampling of base and monitor surveys lead to undesired differences between the images of the surveys that are not due to differences in the subsurface. In this paper, an efficient two‐step reconstruction method is proposed for irregularly and sparsely sampled 3D seismic data recorded with a dominant azimuth for large offsets. The first step is reconstruction along the receiver lines such that the midpoints are exactly on crosslines. The second step is a 2D reconstruction in the crossline midpoint‐offset domain using a least‐squares estimation of Fourier components. Sparse sampling can be handled by optimizing the parameterization of the least‐squares estimation of the Fourier coefficients, and consequently it is possible to reconstruct data that would be aliased when considering single common‐midpoint gathers. The method can be used to do a geometry transformation of the data to any desired spatial grid. Since the method yields consistent results independent of the actual sampling, it is very well suited for 4D processing.
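The second step's core operation, least-squares estimation of Fourier components from irregular samples followed by resampling on a regular grid, can be sketched in one dimension. This is a toy stand-in (real harmonic basis, noise-free data, hypothetical parameters), not the paper's optimized parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical band-limited signal sampled at irregular positions x.
L, K = 1.0, 8                        # domain length, number of harmonics
x = np.sort(rng.uniform(0, L, 60))   # irregular sample locations
coef_true = rng.standard_normal(2 * K + 1)

def basis(x):
    """Real Fourier basis: DC, cosines, sines up to harmonic K."""
    k = np.arange(1, K + 1)
    return np.hstack([np.ones((len(x), 1)),
                      np.cos(2 * np.pi * np.outer(x, k) / L),
                      np.sin(2 * np.pi * np.outer(x, k) / L)])

d = basis(x) @ coef_true             # irregularly sampled data

# Least-squares estimate of the Fourier coefficients, then resampling
# onto a regular grid (a geometry transformation, as in the paper).
coef, *_ = np.linalg.lstsq(basis(x), d, rcond=None)
x_reg = np.linspace(0, L, 100, endpoint=False)
d_reg = basis(x_reg) @ coef

assert np.allclose(coef, coef_true, atol=1e-6)
```

Because the coefficients, not the samples, are the unknowns, the reconstruction is consistent regardless of where the input samples happen to lie, which is the property that makes the method attractive for 4D repeatability.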
 
Article
The understanding of the source term in the one-way wave equation is essential if one wishes to use this equation for modeling seismic reflection data. A careful introduction of the source term and of the surface boundary condition in the one-way wave equations allows one to recover with accuracy, using depth extrapolation, a synthetic field generated using the (two-way) acoustic wave equation and initial time conditions. 1 Introduction: One-way depth extrapolation is widely used in many seismic imaging techniques, in particular in migration (Claerbout, 1985; Berkhout, 1985; Stolt and Benson, 1986). In modeling of a seismic reflection experiment, sources and surface conditions are usually introduced in an ad hoc way. For instance, the source term is represented as a boundary condition, and boundary conditions are replaced by an approximation admitting only upgoing energy. These approximations are due to the fact that the one-way wave equations are either derived without a source term, or bounda...
 
Article
Recently developed classification and regression methods are applied to extract geological information from acoustic well-logging waveforms. First, acoustic waveforms are classified into those propagated through sandstones and those propagated through shale using the local discriminant basis (LDB) method. Next, the volume fractions of minerals and fluids (e.g., quartz and gas) are estimated at each depth using the local regression basis (LRB) method. These methods first analyze the waveforms by decomposing them into a redundant set of time-frequency wavelets, i.e., orthogonal wiggle traces localized in both time and frequency. Then, they automatically extract the local waveform features useful for such classification and estimation or regression. Finally, these features are fed into conventional classifiers or predictors. Because these extracted features are localized in time and frequency, they allow intuitive interpretation. Using the field data set, we found that it...
 
Fixed-grid eikonal solver: a constant velocity model 
Article
The point source traveltime field has an upwind singularity at the source point. Consequently, all formally high-order finite-difference eikonal solvers exhibit first-order convergence and relatively large errors. Adaptive upwind finite-difference methods based on high-order Weighted Essentially Non-Oscillatory (WENO) Runge-Kutta difference schemes for the paraxial eikonal equation overcome this difficulty. The method controls error by automatic grid refinement and coarsening based on an a posteriori error estimation. It achieves prescribed accuracy at far lower cost than does the fixed-grid method. Moreover, the achieved high accuracy of traveltimes yields reliable estimates of auxiliary quantities such as takeoff angles and geometrical spreading factors.
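The fixed-grid baseline being improved upon can be sketched directly: a first-order upwind (Godunov) eikonal solver on a constant-velocity model, iterated with alternating Gauss-Seidel sweeps. This is not the paper's adaptive WENO scheme, only the simple solver whose O(h) error around the source singularity motivates it; grid, velocity, and sweep counts below are illustrative:

```python
import numpy as np

n, h, c = 101, 0.01, 2.0
s = 1.0 / c                                   # slowness
src = (n // 2, n // 2)
T = np.full((n, n), np.inf)
T[src] = 0.0

for _ in range(4):                            # repeated alternating sweeps
    for di in (1, -1):
        for dj in (1, -1):
            for i in range(n)[::di]:
                for j in range(n)[::dj]:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < n - 1 else np.inf)
                    if not np.isfinite(min(a, b)):
                        continue              # wavefront has not arrived yet
                    if abs(a - b) >= s * h:   # causal one-sided update
                        t_new = min(a, b) + s * h
                    else:                     # two-sided quadratic update
                        t_new = 0.5 * (a + b +
                                       np.sqrt(2 * (s * h) ** 2 - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)

# Compare with the exact traveltime r/c from the source.
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
exact = s * h * np.hypot(ii - src[0], jj - src[1])
err = np.abs(T - exact).max()
assert err < 0.1 * exact.max()
```

Even for this trivially smooth model, the largest errors cluster along directions diagonal to the grid near the source, which is exactly the singularity-driven error the adaptive refinement targets.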
 
The conventional view of inverse problems: find the model that predicts the measurements.
An improved view of inverse problems.
The inverse problem as part of a decision-making process.
Article
Consequently, there must be elements of the model space that have no influence on the data. This lack of uniqueness is apparent even for problems involving idealized, noise-free measurements. The problem only becomes worse when the uncertainties of real measurements are taken into account. Although the uniqueness question is a hotly debated issue in the mathematical literature on inverse problems, it is largely irrelevant for practical inverse problems because they invariably have a non-unique solution (if by solution we mean an Earth model). It is this non-uniqueness that makes Figure 1 deceptive, because the arrow pointing from data space to model space suggests that a unique model corresponds to every data set. A more realistic scheme of inverse problems is shown in Figure 2. Given a model m, the physics of the problem determines the predicted dat...
 
Article
Common image gathers (CIGs) in the offset and surface azimuth domain are used extensively in migration velocity analysis and amplitude variation with offset (AVO) studies. If the geology is complex and the ray field becomes multipathed, the quality of the CIGs deteriorates. To overcome these problems, the CIGs are generated as a function of scattering angle and azimuth at the image point. The CIGs are generated using an algorithm based on the inverse generalized Radon transform (GRT), stacking only over migration dip angles. Including only dips in the vicinity of the geological dip, or focusing in dip, suppresses artifacts and results in an improved signal‐to‐noise ratio on the CIGs. Migration velocity analysis can be based upon the differential semblance criterion. The analysis is then carried out by minimizing a functional of the derivative of the CIGs with respect to horizontal coordinates (offset/azimuth or scattering‐angle/azimuth), but AVO/amplitude variation with angle (AVA) effects will degrade the performance of the velocity analysis. We overcome this problem by computing an inverse GRT modified to compensate for AVA effects. The resulting CIGs can be used for velocity analysis based upon differential semblance, while they can be stacked to produce improved images. The algorithms are developed for inhomogeneous anisotropic elastic media, but they have so far only been tested on imaging‐inversion of PP and PS reflected waves in an isotropic elastic medium. This was done on two synthetic datasets generated by finite‐difference modeling and ocean‐bottom seismic (OBS) data from the Valhall field. We show that by performing the imaging of the real OBS data in the angle domain, it is possible to construct a well‐focused PP image of the Valhall reservoir directly beneath the “gas cloud” in the overburden.
 
The p<sub>3</sub> components of outward normals at the two intersections on the convex slowness surface have opposite signs.
Article
The first-arrival quasi-P wave traveltime field in an anisotropic elastic solid solves a first-order nonlinear partial differential equation, the qP eikonal equation. The difficulty in solving this eikonal equation by a finite-difference method is that for anisotropic media the ray (group) velocity direction is not the same as the direction of traveltime gradient, so that the traveltime gradient can no longer serve as an indicator of the group velocity direction in extrapolating the traveltime field. However, establishing an explicit relation between the ray velocity vector and the phase velocity vector overcomes this difficulty. Furthermore, the solution of the paraxial qP eikonal equation, an evolution equation in depth, gives the first-arrival traveltime along downward propagating rays. A second-order upwind finite-difference scheme solves this paraxial eikonal equation in O(N) floating point operations, where N is the number of grid points. Numerical experiments using 2-D and 3-D transversely isotropic models demonstrate the accuracy of the scheme.
 
Tapering the trace cumulant. (a) Fourth-order moment function of a 25-point Ricker wavelet. (b) Fourth-order cumulant function of a 250-point trace generated by convolving the Ricker wavelet with a sparse reflectivity series. (c) The same cumulant function as in (b) after the application of a 3-D Parzen window. (d) Fourth-order cumulant of the trace using 5000 points. All diagrams correspond to −12 ≤ τ<sub>1</sub>, τ<sub>2</sub> ≤ 12, and τ<sub>3</sub> = 0.
Linearized versus VFSA solutions. (a) Synthetic trace generated by convolving a Berlage wavelet with a sparse reflectivity series (plus 10% by amplitude Gaussian noise). (b) Actual mixed-phase Berlage wavelet. (c) Wavelet estimate after the CM using the linearized algorithm. (d) Wavelet estimate after the CM using the VFSA algorithm.
Sensitivity to wavelet phase. (a) Sensitivity of the wavelet moment for lag (0, 0, 0) (i.e., kurtosis). (b) Sensitivity of the wavelet moment for lag (3, 3, 3). Below a critical bandwidth relative to the central frequency (B ≤ f<sub>0</sub>), the fourth-order moment is insensitive to wavelet phase. Maximum sensitivity is attained when the frequency content extends to zero frequency (B ≥ 2f<sub>0</sub>).
Non-Gaussian assumption. From top to bottom, subsequent rows correspond to Bernoulli-Gaussian, Laplace mixture, and Gaussian-γ models. (a), (b), and (c) Normalized cost function for the different models. This quantity expresses the matching between the true wavelet moment and the corresponding trace moment. (d), (e), and (f) Correlation between each wavelet estimate and the actual wavelet for the different models. (g), (h), and (i) Correlation versus kurtosis for the different models.
Field data example: Input data. (a) Original data section used for a trace-by-trace CM wavelet estimation. (b) Same section as in (a) after removing the wavelet phase using the average estimate.
Article
The fourth-order cumulant matching method has been developed recently for estimating a mixed-phase wavelet from a convolutional process. Matching between the trace cumulant and the wavelet moment is done in a minimum mean-squared error sense under the assumption of a non-Gaussian, stationary, and statistically independent reflectivity series. This leads to a highly nonlinear optimization problem, usually solved by techniques that require a certain degree of linearization, and that invariably converge to the minimum closest to the initial model. Alternatively, we propose a hybrid strategy that makes use of a simulated annealing algorithm to provide reliability of the numerical solutions by reducing the risk of being trapped in local minima. Beyond the numerical aspect, the reliability of the derived wavelets depends strongly on the amount of data available. However, by using a multidimensional taper to smooth the trace cumulant, we show that the method can be used even ...
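The non-Gaussianity assumption is the lever that makes cumulant matching work: fourth-order cumulants vanish for Gaussian series but not for a sparse reflectivity. A minimal check at zero lag (the kurtosis referred to in the figure captions above), on hypothetical series rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def cum4_zero_lag(x):
    """Fourth-order cumulant at zero lag for a zero-mean series:
    c4 = E[x^4] - 3 (E[x^2])^2, which vanishes for Gaussian data."""
    x = x - x.mean()
    return np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2

n = 100_000
gaussian = rng.standard_normal(n)
# Sparse (Bernoulli-Gaussian) reflectivity: heavy-tailed, non-Gaussian.
sparse = rng.standard_normal(n) * (rng.uniform(size=n) < 0.05)

assert abs(cum4_zero_lag(gaussian)) < 0.1   # ~0 for Gaussian input
assert cum4_zero_lag(sparse) > 0.1          # clearly nonzero when sparse
```

The same contrast at nonzero lags (τ<sub>1</sub>, τ<sub>2</sub>, τ<sub>3</sub>) is what carries the wavelet's phase information that second-order statistics cannot see.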
 
Article
We describe a global optimization method called mean field annealing (MFA) and its application to two basic problems in seismic data processing: seismic deconvolution and surface-related multiple attenuation. MFA replaces the stochastic nature of the simulated annealing method with a set of deterministic update rules that act on the average value of the variables rather than on the variables themselves, based on the mean field approximation. As the temperature is lowered, the MFA rules update the variables in terms of their values at a previous temperature. By minimizing averages, it is possible to converge to an equilibrium state considerably faster than a standard simulated annealing method. The update rules are dependent on the form of the cost function and are obtained easily when the cost function resembles the energy function of a Hopfield network. The mapping of a problem onto a Hopfield network is not a precondition for using MFA, but it makes analytic calculati...
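The deterministic update rule can be sketched on the Hopfield-type energy the abstract mentions, E(s) = −½ sᵀWs with spins s ∈ {−1, +1}: the mean value m of each spin obeys m = tanh((Wm)/T), relaxed at each temperature of a decreasing schedule. The network, schedule, and sizes below are illustrative assumptions, not the seismic cost functions of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# One stored pattern in a small Hopfield network (Hebbian weights).
n = 12
pattern = rng.choice([-1.0, 1.0], size=n)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# Mean field annealing: deterministic updates of the mean spin values m,
# while the temperature T is lowered geometrically.
m = 0.01 * rng.standard_normal(n)      # means start near zero
for T in np.geomspace(5.0, 0.05, 60):
    for _ in range(20):                # relax to equilibrium at this T
        m = np.tanh((W @ m) / T)

s = np.sign(m)
# The annealed means converge to the stored pattern (or its negation).
assert np.array_equal(s, pattern) or np.array_equal(s, -pattern)
```

No random acceptance step appears anywhere, which is why MFA reaches an equilibrium state in far fewer updates than stochastic simulated annealing.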
 
Article
An iterative Born imaging scheme is employed to analyze the resolution properties of crosswell electromagnetic tomography. The imaging scheme assumes a cylindrical symmetry about a vertical magnetic dipole source and employs approximate forward modeling at each iteration to update the internal electric fields. Estimation of the anomalous conductivity is accomplished through least-squares inversion. Much of the mathematical formulation of this diffusion process appears similar to the analysis of wavefield solutions, but the attenuation implicit in the complex propagation constant invalidates many of the accepted wavefield criteria for resolution. Images of illustrative models show that vertical resolution improves with increasing frequency and with increased spatial sampling density. In addition, greater conductivity contrasts between the target and the background can result in better resolution. The horizontal resolution depends on the maximum aperture that is employed and with increas...
 
Article
Attenuating random noise with a prediction filter in the time-space domain generally produces results similar to those of predictions done in the frequency-space domain. However, in the presence of moderate- to high-amplitude noise, time-space, or t-x, prediction passes less random noise than does frequency-space, or f-x, prediction. The f-x prediction may also produce false events in the presence of strong parallel events where t-x prediction does not. These advantages of t-x prediction are the result of its ability to control the length of the prediction filter in time. An f-x prediction produces an effective t-x domain filter that is as long in time as the input data. Gulunay's f-x domain prediction, also referred to as FXDECON, tends to bias the predictions toward the traces nearest the output trace, allowing more noise to be passed, but this bias may be overcome by modifying the system of equations used to calculate the filter. The three-dimensional extension to the two-d...
 
Article
We demonstrate that automatic layered inversion of plane-wave electromagnetic data can be carried out by modifying standard least-squares inversion schemes. The modifications include a logarithmic reparameterization of the unknown model parameters, whereby all layer parameters are forced to remain within given bounds. However, the most important modification to help the optimization process find the best model and to avoid local minima is to split the data into several subbands, starting from the highest frequencies. By this stripping procedure, the shallower part of the model becomes well estimated first. As more data are introduced, more layers may be required to improve the data fit. The new inversion procedure has been applied to many sets of theoretical data representing increasingly complicated layering that is often found in near-surface studies. The main result of these simulations is that there is a very strong coupling between the resolution power of plane-wa...
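The logarithmic reparameterization can be shown on a one-parameter toy problem: inverting for p = log(ρ) instead of ρ itself guarantees ρ = eᵖ stays positive at every Gauss-Newton step, no matter how aggressive the update. The forward model below is a hypothetical scalar function, not a plane-wave EM response:

```python
import numpy as np

# Toy forward model (hypothetical): d(rho) = rho / (rho + 1).
def forward(rho):
    return rho / (rho + 1.0)

rho_true = 50.0
d_obs = forward(rho_true)

p = np.log(1.0)                          # start far from the truth, rho = 1
for _ in range(50):
    rho = np.exp(p)
    # Chain rule: d(forward)/dp = d(forward)/drho * drho/dp, drho/dp = rho.
    jac = (1.0 / (rho + 1.0) ** 2) * rho
    p += (d_obs - forward(rho)) / jac    # Gauss-Newton update in log space
    # rho = exp(p) can never become negative, unlike an update in rho.

assert np.isclose(np.exp(p), rho_true, rtol=1e-6)
```

The same trick generalizes to bounded parameters (e.g., logit-type maps), which is how "all layer parameters are forced to remain within given bounds" above can be realized.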
 
Article
We describe a new method of automatic normal moveout (NMO) correction and velocity analysis that combines a feedforward neural network (FNN) with a simulated annealing technique known as very fast simulated annealing (VFSA). The task of the FNN is to map common midpoint (CMP) gathers at control locations along a 2-D seismic line into seismic velocities within predefined velocity search limits. The network is trained while the velocity analysis is performed at the selected control locations. The method minimizes a cost function defined in terms of the NMO-corrected data. Network weights are updated at each iteration of the optimization process using VFSA. Once the control CMP gathers have been properly NMO corrected, the derived weights are used to interpolate results at the intermediate CMP locations. In practical situations in which lateral velocity variations are expected, the method is applied in spatial data windows, each window being defined by a separate FNN. The...
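The operation the FNN-derived velocities feed into is standard hyperbolic NMO: t(x) = √(t₀² + x²/v²), after which each trace is mapped back to t₀ by interpolation. A minimal sketch on a synthetic single-event gather (offsets, times, and velocity below are illustrative, not from the paper):

```python
import numpy as np

nt, dt = 500, 0.004
offsets = np.array([0.0, 500.0, 1000.0, 1500.0])   # m (hypothetical)
t0, v = 0.8, 2000.0                                # s, m/s

# Build a CMP gather with one hyperbolic reflection event.
t = np.arange(nt) * dt
gather = np.zeros((len(offsets), nt))
for k, x in enumerate(offsets):
    tx = np.sqrt(t0 ** 2 + (x / v) ** 2)           # arrival time at offset x
    gather[k] = np.exp(-0.5 * ((t - tx) / 0.01) ** 2)

def nmo_correct(gather, offsets, t, v):
    """Map each output sample at time t back to its uncorrected time."""
    out = np.zeros_like(gather)
    for k, x in enumerate(offsets):
        t_src = np.sqrt(t ** 2 + (x / v) ** 2)
        out[k] = np.interp(t_src, t, gather[k])
    return out

corrected = nmo_correct(gather, offsets, t, v)
# With the correct velocity, the event flattens: all traces peak at t0.
peaks = t[np.argmax(corrected, axis=1)]
assert np.allclose(peaks, t0, atol=2 * dt)
```

A cost function rewarding exactly this flatness of the corrected gather is what the VFSA-trained network optimizes over the velocity search limits.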
 
Article
We describe algorithms for automating the process of picking seismic events in prestack migrated common depth image gathers. The approach uses supervised learning and statistical classification algorithms along with advanced signal/image processing algorithms. No model assumption is made such as hyperbolic moveout. We train a probabilistic neural network for voxel classification using event times, subsurface points and offsets (ground truth information) picked manually by expert interpreters. The key to success is using effective features that capture the important behavior of the measured signals. We test a variety of features calculated in a local neighborhood about the voxel under analysis. Feature selection algorithms are used to ensure that we use only the features that maximize class separability. This event picking algorithm has the potential to reduce significantly the cycle time and cost of 3D prestack depth migration, while making the velocity model inversion more robust.
 
Article
S-wave velocity and density information is crucial for hydrocarbon detection, because they help in the discrimination of pore filling fluids. Unfortunately, these two parameters cannot be accurately resolved from conventional P-wave marine data. Recent developments in ocean-bottom seismic (OBS) technology make it possible to acquire high quality S-wave data in marine environments. The use of S-waves for amplitude variation with offset (AVO) analysis can give better estimates of S-wave velocity and density contrasts. Like P-wave AVO, S-wave AVO is sensitive to various types of noise. We investigate numerically and analytically the sensitivity of AVO inversion to random noise and errors in angles of incidence. Synthetic examples show that random noise and angle errors can strongly bias the parameter estimation. The use of singular value decomposition offers a simple stabilization scheme to solve for the elastic parameters. The AVO inversion is applied to an OBS data se...
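The singular value decomposition stabilization mentioned above can be sketched on a deliberately ill-conditioned two-parameter linear problem. This toy system stands in for the AVO forward operator (the matrix, model, and noise realization are hypothetical):

```python
import numpy as np

# Ill-conditioned problem d = G m + n: nearly collinear columns mimic
# poorly separable elastic parameters.
G = np.array([[1.0, 1.00],
              [1.0, 1.01],
              [1.0, 1.02]])
m_true = np.array([0.2, -0.1])
noise = 1e-3 * np.array([1.0, 0.0, -1.0])   # fixed noise realization
d = G @ m_true + noise

# Truncated-SVD solution: drop singular values below 10% of the largest.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 0.1 * s[0]
m_tsvd = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

m_lsq = np.linalg.lstsq(G, d, rcond=None)[0]

# The full least-squares solution is blown up by the tiny singular value;
# the truncated solution stays bounded (at the cost of losing the poorly
# resolved parameter combination).
assert np.linalg.norm(m_tsvd) < np.linalg.norm(m_lsq)
```

The trade-off is bias for variance: truncation sacrifices the parameter combination the data cannot resolve in exchange for an estimate that is insensitive to noise and angle errors.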
 
Article
We introduce a new partial prestack-migration operator, named Azimuth MoveOut (AMO), that rotates the azimuth and modifies the offset of 3-D prestack data. AMO can improve the accuracy and reduce the computational cost of 3-D prestack imaging. We have successfully applied AMO to the partial stacking of a 3-D marine data set over a range of offsets and azimuths. Our results show that when AMO is included in the processing flow, the high-frequency steeply-dipping energy is better preserved during partial stacking than when conventional partial-stacking methodologies are used. Because the test data set requires 3-D prestack depth migration to handle strong lateral variations in velocity, the results of our tests support the applicability of AMO to prestack depth-imaging problems.
 
Article
Because of its computational efficiency, prestack Kirchhoff depth migration is currently the method of choice in both 2-D and 3-D imaging of seismic data. The most algorithmically complex component of the Kirchhoff family of algorithms is the calculation and manipulation of accurate traveltime tables for each source and receiver point. Once calculated, we sum the seismic energy over all possible ray paths, allowing us to accurately image both specular and nonspecular scattered energy. Any seismic events that fall within the velocity passband, including reflected and diffracted signal, mode conversions, multiples, head waves, and aliases of surface waves, are imaged in depth. The transformation of time gathers to depth gathers can be quite complicated and nonintuitive to all but the seasoned imaging expert. In particular, easily recognized head-wave events on common-shot gathers are often difficult to differentiate from undermigrated coherent reflections on common-refl...
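The traveltime-table-plus-summation structure of Kirchhoff migration can be sketched for the simplest case. The toy below assumes a constant velocity (so straight-ray traveltimes replace the precomputed tables) and zero-offset data; all function and variable names are ours, not the paper's:

```python
import math

def kirchhoff_image(traces, xs, dt, v, xi, zi):
    """Diffraction-stack value at image point (xi, zi).

    traces : list of sampled traces, one per coincident source/receiver at xs[k]
    dt     : sample interval (s); v : constant velocity (m/s)
    For zero-offset data the two-way straight-ray traveltime is 2*r/v,
    where r is the distance from surface position to image point.
    """
    total = 0.0
    for trace, x in zip(traces, xs):
        r = math.hypot(x - xi, zi)
        t = 2.0 * r / v                # two-way traveltime
        i = int(round(t / dt))         # nearest-sample lookup
        if 0 <= i < len(trace):
            total += trace[i]          # sum energy along the diffraction curve
    return total
```

Summing along the diffraction hyperbola in this way is what lets the method image both specular and nonspecular scattered energy; a real implementation replaces the straight-ray formula with interpolated traveltime tables.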
 
Article
INTRODUCTION Conventional seismic time-delay estimation relies on crosscorrelation, which quantifies the similarities between two measurements in the second-order time domain. When the noise correlation in the measurements is considerable, the correlation peak can be substantially distorted, resulting in imprecise and even biased estimation of the time delay. The synthetic data computed by Ikelle et al. (1993) and Ikelle and Yung (1994) in their studies of wave propagation through random media provide a good example of data with considerable noise correlation. In picking the arrival times in this data set, we found that the crosscorrelation technique suffers both from the severely restricted signal bandwidth and from the presence of coda. Here we present an alternative approach involving high-resolution nonparametric time-delay estimation in the third-order domain. THIRD-ORDER BICOHERENCE-CORRELATION Given three zero-mean, real random signals
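The second-order baseline that the abstract contrasts with is easy to state concretely: pick the lag that maximizes the crosscorrelation sum. A minimal sketch (names are ours; the paper's contribution is the third-order alternative, not this baseline):

```python
def crosscorr_delay(x, y, max_lag):
    """Estimate the delay of y relative to x as the integer lag that
    maximizes the crosscorrelation sum_t x[t] * y[t + lag]."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[t] * y[t + lag]
                  for t in range(len(x))
                  if 0 <= t + lag < len(y))
        if val > best_val:
            best_val, best_lag = val, lag
    return best_lag
```

When correlated noise or coda distorts the correlation peak, this estimator is exactly what becomes biased, motivating the move to the third-order (bicoherence) domain.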
 
Article
Traveltime calculation is a crucial part of seismic migration schemes, especially prestack migration. There are many different ways to compute traveltimes. These methods can be divided into three categories: (1) Ray tracing (Julian and Gubbins, 1977; Červený et al., 1977). These treat the problem as an initial value problem by shooting rays from the source to the receivers. Alternatively, they can treat the problem as a two-point boundary value problem, in which an initial raypath is bent using perturbation theory until Fermat's principle is satisfied. Nichols (1994) also computed traveltimes with attached amplitude information in two dimensions. (2) Finite-difference methods (Reshel and Kosloff, 1986; Vidale, 1988; van Trier and Symes, 1991). These solve the eikonal equation directly by using different numerical schemes such as the Runge-Kutta method, wavefront expansion, or upwind finite differences. (3) Graph theory (Moser, 1991; Fisher and Lees, 1993; Meng et al., 1994). This method recasts the traveltime problem into a shortest path search over a network, which is constructed from the velocity model. This method is guaranteed to find a stable minimum traveltime with any velocity model.
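The graph-theory approach (category 3) amounts to Dijkstra's shortest-path search over a network built from the velocity model. A minimal sketch on a 2-D grid with 4-connected nodes; the edge weighting (average slowness times link length) is our simplification, not Moser's full template scheme:

```python
import heapq

def grid_traveltimes(slowness, src, dx=1.0):
    """Dijkstra shortest-path first-arrival times on a 2-D grid.

    slowness : 2-D list of 1/velocity values, one per node
    src      : (row, col) index of the source node
    Edge weight between neighbors = average slowness * dx.
    """
    nz, nx = len(slowness), len(slowness[0])
    times = [[float("inf")] * nx for _ in range(nz)]
    times[src[0]][src[1]] = 0.0
    heap = [(0.0, src[0], src[1])]
    while heap:
        t, i, j = heapq.heappop(heap)
        if t > times[i][j]:
            continue  # stale heap entry, already relaxed
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nz and 0 <= nj < nx:
                w = 0.5 * (slowness[i][j] + slowness[ni][nj]) * dx
                if t + w < times[ni][nj]:
                    times[ni][nj] = t + w
                    heapq.heappush(heap, (t + w, ni, nj))
    return times
```

The guarantee cited in the abstract (a stable minimum traveltime for any velocity model) is inherited directly from Dijkstra's algorithm; practical schemes enlarge the neighbor stencil to reduce the angular discretization error of the 4-connected graph.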
 
Article
We have used crosscorrelation, semblance, and eigenstructure algorithms to estimate coherency. The first two algorithms calculate coherency over a multiplicity of trial time lags or dips, with the dip having the highest coherency corresponding to the local dip of the reflector. Partially because of its greater computational cost, our original eigenstructure algorithm calculated coherency along an implicitly flat horizon. Although generalizing the eigenstructure algorithm to search over a range of test dips allowed us to image coherency in the presence of steeply dipping structures, we were somewhat surprised that this generalization concomitantly degraded the quality of the fault images in flatter dip areas. Because it is a local estimation of reflector dip (including as few as five traces), the multidip coherency estimate provides an algorithmically correct, but interpretationally undesirable, estimate of the best apparent dip that explained the offset reflectors ...
 
Article
Using crosswell data collected at a depth of about 3000 ft (900 m) in West Texas carbonates, one of the first well-to-well reflection images of an oil reservoir was produced. The P and S brute stack reflection images created after wavefield separation tied the sonic logs and exhibited a vertical resolution that was comparable to well log resolution. Both brute stacks demonstrated continuity of several reflectors known to be continuous from log control and also imaged an angular unconformity that was not detected in log correlations or in surface seismic profiling. The brute stacks, particularly the S-wave reflection image, also exhibited imaging artifacts. We found that multichannel wavefield separation filters that attenuated interfering wavemodes were a critical component in producing high-resolution reflection images. In this study, the most important elements for an effective wavefield separation were the time-alignment of seismic arrivals prior to filter application and the implem...
 
Article
We have collected low-noise crosswell data in a high-velocity carbonate environment with a spatial sampling interval of 2.5 ft (0.76 m). This sampling reveals a variety of coherent events not previously identified in coarsely sampled gathers. Nearly every event in our field record can be explained using simple approximations for the geology, source, and receivers without accounting for the presence of the boreholes. We have used synthetic records as a guide in a moveout-based analysis of the field data. Our analysis shows that much of the full-wavefield energy in our data, i.e., scattered waves, consists of modes converted from the direct P- and S-waves. This observation suggests that for crosswell reflection imaging, the focus of acquisition and wavefield separation techniques should be on the suppression of once-converted modes. INTRODUCTION Until recently, most crosswell imaging experiments have used only the direct-arrival P- and/or S-wave traveltimes to image the region between tw...
 
Article
Wave propagation in fluid-filled porous media is governed by Biot's equations of poroelasticity. Gassmann's relation gives an exact formula for the poroelastic parameters when the porous medium contains only one type of solid constituent. The present paper generalizes Gassmann's relation and derives exact formulas for two elastic parameters needed to describe wave propagation in a conglomerate of two porous phases. The parameters were first introduced by Brown and Korringa when they derived a generalized form of Gassmann's equation for conglomerates. These elastic parameters are the bulk modulus K s associated with changes in the overall volume of the conglomerate and the bulk modulus K associated with the pore volume when the fluid pressure (p f ) and confining pressure (p) are increased, keeping the differential pressure (p d = p − p f ) fixed. These moduli are properties of the composite solid frame (drained of fluid) and are shown here to be completely determined in terms of the bulk modul...
 
Article
Subsurface rock properties are manifested in seismic records as variations in traveltimes, amplitudes, and waveforms. It is commonly acknowledged that traveltimes are sensitive to the long wavelength part of the velocity, whereas amplitudes are sensitive to the short wavelength part of the velocity. The inherent sensitivity of seismic velocity at different wavelengths suggests an approach that decomposes the waveform data into traveltime and amplitude components. Therefore we propose a divide-and-conquer approach to the elastic waveform inversion problem. We first estimate the smoothly varying background velocity from the traveltime and the rapidly changing perturbations from the amplitude by amplitude variation with offset (AVO) inversion based on linearized reflection coefficients. Then we combine the perturbation with the background to obtain a starting model to be used in the final waveform inversion that models all converted waves and internal multiples assuming a ...
 
1: Q as a function of frequency for two different values of each of the two relaxation parameters. Solid: 4.6212 × 10^-2 and 1.5915 × 10^-2, corresponding to 10 Hz. Dashed: 4.6212 × 10^-2 and 1.5915 × 10^-3, corresponding to 100 Hz. Dash-dotted: 2.3106 × 10^-2 and 1.5915 × 10^-2, corresponding to 10 Hz.
1: Approximations to a constant Q of 20 between 2 and 25 Hz. Q 0 = 18 in the algorithm. Solid: desired Q. Dashed: τ-method using two relaxation mechanisms. Dotted: Padé approximant method using two relaxation mechanisms. Dash-dotted: single relaxation mechanism approximant.
5: Seismograms collected at 3000 m offset in the model with a constant-Q approximation of 20 between 2 and 25 Hz. Solid: analytical solution. Dashed: τ-method using two relaxation mechanisms. Dotted: Padé approximant method using two relaxation mechanisms. Dash-dotted: single relaxation mechanism approximant.
Article
Linear anelastic phenomena in wave propagation problems can be well modeled through a viscoelastic mechanical model consisting of standard linear solids. In this paper we present a method for modeling a constant Q as a function of frequency, based on an explicit closed formula for calculating the parameter fields. The proposed method enables substantial savings in computation and memory requirements. Experiments show that the new method also yields higher accuracy in the modeling of Q than, e.g., the Padé approximant method [7]. 1 Introduction Earth media attenuate and disperse propagating waves. Several modeling methods for wave propagation, which take attenuating and dispersive effects into account, have been presented (e.g., Robertsson et al. [16], Carcione et al. [5], and Martinez and McMechan [11]) as have inversion methods developed for viscoelastic media (e.g., Blanch and Symes [1] and Martinez and McMechan [12]). Attenuating effects also have a large impact on AVO re...
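For a single standard linear solid with strain and stress relaxation times τ_ε and τ_σ, the quality factor has the classical form Q(ω) = (1 + ω²τ_ε τ_σ) / (ω(τ_ε − τ_σ)). A quick numeric check of how flat Q stays over a band; the relaxation-time parameterization below is the standard viscoelastic-literature one, not the paper's closed formula:

```python
import math

def q_sls(freq_hz, tau_eps, tau_sig):
    """Quality factor of one standard linear solid at frequency freq_hz.

    tau_eps : strain relaxation time (s), tau_sig : stress relaxation
    time (s), with tau_eps > tau_sig for an attenuating medium.
    """
    w = 2.0 * math.pi * freq_hz
    return (1.0 + w * w * tau_eps * tau_sig) / (w * (tau_eps - tau_sig))
```

Choosing τ_ε and τ_σ so that the Q minimum sits at a reference frequency gives Q exactly the target value there and larger values away from it, which is why several mechanisms must be superposed to hold Q roughly constant across a seismic band.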
 
Entropy norm versus iterations for the noise-free example.
Article
A method for reconstructing the reflectivity spectrum using the minimum entropy criterion is presented. The algorithm described (FMED) is compared with classical minimum entropy deconvolution (MED) as well as with the linear programming (LP) and autoregressive (AR) approaches. The MED is performed by maximizing an entropy norm with respect to the coefficients of a linear operator that deconvolves the seismic trace. By comparison, the approach presented here maximizes the norm with respect to the missing frequencies of the reflectivity series spectrum. This procedure reduces to a nonlinear algorithm that is able to carry out the deconvolution of band-limited data, avoiding the inherent limitations of linear operators. The proposed method is illustrated with a variety of synthetic examples. Field data are also used to test the algorithm. The results show that the proposed method is an effective way to process band-limited data. The FMED and the LP arise from similar conceptions. Bot...
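The entropy norm at the heart of MED rewards spiky deconvolution outputs. A minimal version of the norm itself, in its common varimax form (the optimization over operator coefficients or missing frequencies that the abstract describes is omitted here):

```python
def varimax_norm(y):
    """Varimax/entropy norm V = sum(y_i^4) / (sum(y_i^2))^2.

    Larger values indicate a spikier, lower-entropy sequence;
    maximizing V drives the deconvolved trace toward isolated spikes.
    """
    s2 = sum(v * v for v in y)
    s4 = sum(v ** 4 for v in y)
    return s4 / (s2 * s2)
```

A single unit spike scores V = 1, the maximum for a unit-energy sequence, while energy spread evenly over n samples scores only 1/n, which is exactly the preference for sparse reflectivity that both MED and FMED exploit.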
 
Site locations for the Papua New Guinea Kube Kabe survey
Frequency-independent Groom-Bailey decompositions for PNG sites 121-122
Misfit error and strike azimuth from multi-site decomposition for PNG sites 101-108 constrained over frequency bands of half a decade.
Article
Accurate interpretation of magnetotelluric data requires an understanding of the directionality and dimensionality inherent in the data, and valid implementation of an appropriate method for removing the effects of shallow, small-scale galvanic scatterers on the data to yield responses representative of regional-scale structures. The galvanic distortion analysis approach advocated by Groom and Bailey has become the most adopted method, and rightly so given that the approach decomposes the magnetotelluric impedance tensor into determinable and indeterminable parts, and tests statistically the validity of the galvanic distortion assumption. As proposed by Groom and Bailey, one must determine the appropriate frequency-independent telluric distortion parameters and geoelectric strike by fitting the seven-parameter model on a frequency-by-frequency and site-by-site basis independently. Whilst this approach has the attraction that one gains a more intimate understanding of the data set, it i...
 
Article
The split-step Fourier method is used here to prestack migrate two synthetic borehole-to-surface shot gathers. Model structures in the zone of specular illumination beneath the shot are reconstructed by using the split-step Fourier method both to back-propagate the recorded wavefield and to forward propagate the source wavelet. The overburden is vertically and laterally inhomogeneous. Each depth interval is treated as a homogeneous strip with the mean velocity plus an inhomogeneity correction term. The inhomogeneity correction term is split and spatially multiplied with each spectral component of the wavefield on its entry to and upon its exit from each strip. Propagation through each strip is effected by multiplication in the spatial frequency domain. The split-step Fourier method offers a valuable alternative to finite-difference migration for machines with limited memory. Three imaging methods are compared for two signal-to-noise ratios. They are: image extract...
 
Article
The three electromagnetic properties appearing in Maxwell's equations are the electric permittivity, the electric conductivity, and the magnetic permeability. The study of point diffractors in a homogeneous, isotropic, and linear medium suggests the use of logarithms to describe the variations of electromagnetic properties in the earth. A small anomaly in electric properties (permittivity and conductivity) responds to an incident electromagnetic field as an electric dipole, whereas a small anomaly in the magnetic property responds as a magnetic dipole. No property variation can be neglected compared to the others. Furthermore, considering radiation patterns of the different diffraction points, differentiating electric and magnetic variations is not an easy task using Ground Penetrating Radar. But using an effective electromagnetic impedance and an effective electromagnetic velocity to describe a medium, the corresponding radiation patterns behave completely differently with the source-receiver offset. Zero-offset reflection data give a direct image of impedance variations with depth while large-offset reflection data contain information on velocity variations.
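In the lossless limit, the effective impedance and velocity the abstract works with reduce to the familiar plane-wave expressions Z = sqrt(μ/ε) and v = 1/sqrt(με). A quick numeric sketch under that lossless simplification (conductivity neglected; names are ours):

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability (H/m)
EPS0 = 8.8541878128e-12       # vacuum permittivity (F/m)

def em_impedance(eps_r, mu_r=1.0):
    """Plane-wave impedance sqrt(mu/eps) in ohms for a lossless medium
    with relative permittivity eps_r and relative permeability mu_r."""
    return math.sqrt(MU0 * mu_r / (EPS0 * eps_r))

def em_velocity(eps_r, mu_r=1.0):
    """Phase velocity 1/sqrt(mu*eps) in m/s for the same lossless medium."""
    return 1.0 / math.sqrt(MU0 * mu_r * EPS0 * eps_r)
```

Because impedance depends on the ratio μ/ε while velocity depends on the product με, the two quantities separate permittivity from permeability contrasts, which is the basis for the zero-offset versus large-offset distinction drawn in the abstract.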
 
Article
The linear inverse method developed by Backus and Gilbert (1968) relates model estimates to actual earth models by use of a resolving kernel. Seismic source wavelet deconvolution can be treated within the framework of the Backus and Gilbert (1968) inverse theory as presented in Oldenburg (1981) and Treitel and Lines (1982). The model of the Backus and Gilbert theory is the ground impulse response, the mapping kernel is the source wavelet, and the resolving kernel is the convolution between the source wavelet and the shaping filter. The Backus and Gilbert formalism introduces several measures for the resolving kernel.
 
Article
In this work, an analysis method is developed for the robust and efficient estimation of 3-D seismic local structural entropy, which is a measure of local discontinuity. This method avoids the computation of large covariance matrices and eigenvalues, associated with the eigenstructure-based and semblance-based coherency estimates. We introduce a number of local discontinuity measures, based on the relations between subvolumes (quadrants) of the analysis cube. The scale of the analysis is determined by the type of geological feature that is of interest to the interpreter. By combining local structural entropy volumes using various scales, we obtain a higher lateral resolution and better discrimination between incoherent and coherent seismic events. Furthermore, the method developed is computationally much more efficient than the eigenstructure-based coherency method. Its robustness is demonstrated by synthetic and real data examples.
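The semblance-based coherency that the structural-entropy method is benchmarked against can be written compactly. A reference implementation over a small trace window (a baseline only, not the structural-entropy measure itself):

```python
def semblance(traces):
    """Semblance coherency over N traces in a time window:

        S = sum_t (sum_i a[i][t])^2 / (N * sum_t sum_i a[i][t]^2)

    S equals 1 for identical traces and is small for incoherent ones.
    """
    n = len(traces)
    num = sum(sum(tr[t] for tr in traces) ** 2
              for t in range(len(traces[0])))
    den = n * sum(v * v for tr in traces for v in tr)
    return num / den if den > 0 else 0.0
```

Computing this over sliding analysis cubes is the expensive step the structural-entropy method is designed to avoid, since semblance (and even more so eigenstructure coherency) repeats such window sums, or covariance eigendecompositions, at every output sample.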
 
Block diagram of the model adopted for impedance tensor estimation. The two subsystems denote two symmetrical tensor element estimation problems (e.g., dashed line) in a linear system with two inputs and one output.  
Block diagram of the adaptive time-domain technique for impedance tensor estimation. Dashed line represents one of the two symmetric unknown subsystems derived from the model in Figure 1. The adaptation algorithm adjusts the estimated PR vectors so as to reduce the MS error between the model and the measured E data, given the same H measurements.
Histogram of the overall residual normalized to the root-mean-square value of one component of E data after MT tensor estimation. The mathematically appealing Gaussian residual is not appropriate because the tails of the specified normal mixture pdf decay at rates lower than the rate of decay of a Gaussian pdf.
Comparison between the apparent resistivity estimate that results from remote reference robust spectral analysis of 10 weeks of EMSLAB data (solid) and the one that is obtained from the single-site adaptive time-domain processing (dots) of overlapped subsequences of 5 days of data extracted from the entire EMSLAB data (Jones et al., 1989). The biases in the time-domain estimates below 200 s are caused by the low SNR of the H data.  
Article
The spectral analysis of magnetotelluric (MT) data for impedance tensor estimation requires the stationarity of measured magnetic (H) and electric (E) fields. However, it is well known that noise biases time-domain tensor estimates obtained via an iterative search by a descent algorithm to determine the least-mean-square residual between measured and estimated E data obtained from H data. To limit the noise that slows down, or even prevents, convergence, the steepest-descent step size is based upon the statistics of the residual (Bayes' estimation). With respect to uncorrelated noise, the time-domain technique is more robust than frequency-domain techniques. Furthermore, the technique requires only short-time stationarity. The time-domain technique is applied to data sets (Lincoln Line sites) from the EMSLAB Juan de Fuca project (Electromagnetic Sounding of the Lithosphere and Asthenosphere Beneath the Juan de Fuca Plate), as well as to data from a southern Italian site. The results...
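The descent search described here is, in essence, an LMS-style adaptation of the model coefficients against the residual. A scalar-channel sketch with a fixed step size (a drastic simplification of the two-input, one-output tensor subsystem in the block diagrams; all names are ours):

```python
def lms_identify(x, d, mu=0.05):
    """Adapt a single real weight w so that w*x[t] tracks d[t],
    by stochastic steepest descent on the squared residual.

    x : input samples (stands in for an H component)
    d : desired samples (stands in for the measured E component)
    mu: step size controlling convergence speed and noise sensitivity
    """
    w = 0.0
    for xt, dt in zip(x, d):
        e = dt - w * xt          # residual between data and model
        w += mu * e * xt         # descent step toward smaller MS error
    return w
```

The practical points in the abstract map directly onto this loop: the step size mu is what gets tuned from the residual statistics, and the running update is why only short-time stationarity of the fields is needed.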
 
Article
A general linear theory describes the extension of the convolutional method to nonstationary processes. This theory can apply any linear, nonstationary filter, with arbitrary time and frequency variation, in the time, Fourier, or mixed domains. The filter application equations and the expressions to move the filter between domains are all ordinary Fourier transforms or generalized convolutional integrals. Nonstationary transforms such as the wavelet transform are not required. There are many possible applications of this theory including: the one-way propagation of waves through complex media, time migration, normal moveout removal, time-variant filtering, and forward and inverse Q filtering. Two complementary nonstationary filters are developed by generalizing the stationary convolution integral. The first, called nonstationary convolution, corresponds to the linear superposition of scaled impulse responses of a nonstationary filter. The second, called nonstationary ...
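Nonstationary convolution, as defined here, superposes impulse responses that vary with output time. A direct time-domain sketch in which h[t] is the impulse response in effect at output time t (reducing to ordinary convolution when h[t] is the same for all t):

```python
def nonstationary_convolve(h, x):
    """y[t] = sum_tau h[t][t - tau] * x[tau].

    h : list of impulse responses, one per output sample t
    x : input samples
    Each output sample is a scaled, linear superposition of the
    filter's time-varying impulse responses.
    """
    n = len(x)
    y = [0.0] * n
    for t in range(n):
        for tau in range(n):
            k = t - tau
            if 0 <= k < len(h[t]):
                y[t] += h[t][k] * x[tau]
    return y
```

Indexing the response by output time t gives the "nonstationary convolution" of the text; indexing it by input time tau instead would give the complementary "nonstationary combination" form, and the two coincide only in the stationary case.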
 
(A) Effective fluid coefficient (EFC) nomograph of [(∂K pore /K pore )/(∂K fluid /K fluid )] versus α K /φ for estimating the effect of fluid-modulus changes on the fluid-filled pore-space modulus. The curves are for values of K fluid /K solid . (B) Close-up view of the useful EFC sector of (A), where 0 ≤ φ/α K ≤ 1.
Saturated-rock bulk modulus (K sat ) versus porosity for various modulus–porosity models: Voigt (V), Reuss (R), Wood (W), Hashin-Shtrikman upper (HS+) and lower (HS−) bounds, modified Voigt (MV), and modified Hashin-Shtrikman upper (MHS+) bound. The solid-grain moduli are K solid = 38 GPa, µ solid = 44 GPa, and ν solid = 0.08. The fluid modulus K fluid is 2.25 GPa. The critical porosity φ c is 0.4.  
Fluid-filled pore-space bulk modulus (K pore ) versus porosity for various modulus–porosity models: Voigt (V), Reuss (R), Wood (W), Hashin-Shtrikman upper (HS+) and lower (HS−) bounds, modified Voigt (MV), and modified Hashin-Shtrikman upper (MHS+) bound. The solid-grain moduli are K solid = 38 GPa, µ solid = 44 GPa, and ν solid = 0.08. The fluid modulus K fluid is 2.25 GPa. The critical porosity φ c is 0.4.  
Saturated-rock shear modulus (µ sat ) versus porosity for various modulus–porosity models: Voigt (V), Reuss (R), Wood (W), Hashin-Shtrikman upper (HS+) and lower (HS−) bounds, modified Voigt (MV), and modified Hashin-Shtrikman upper (MHS+) bound. The solid-grain moduli are K solid = 38 GPa, µ solid = 44 GPa, and ν solid = 0.08. The fluid modulus K fluid is 2.25 GPa. The critical porosity φ c is 0.4.  
Plot of α K versus porosity (φ) for various modulus–porosity models: Voigt (V), Reuss (R), Wood (W), and Hashin-Shtrikman upper (HS+) and lower (HS−) bounds, and ν solid is the solid-grain Poisson's ratio. For critical-porosity models, substitute φ/φ c for φ, modified Voigt (MV) for V, and modified Hashin-Shtrikman upper (MHS+) for HS+.  
Article
Gassmann's equations relate the low-frequency drained and undrained elastic-wave response to fluids. This tutorial explores how different modulus–porosity relationships affect predictions of the low-frequency elastic-wave response to fluids based on Gassmann's equations. I take different modulus–porosity relations and substitute them into Gassmann's equations through the framework moduli. The results illustrate the range of responses to fluids and can be summarized in a nomograph of the effective fluid coefficient, which quantifies the change in the pore-space modulus (∂K pore /K pore ) in response to a change in fluid modulus (∂K fluid /K fluid ). Two ratios control the effective fluid coefficient: the ratio of the fluid modulus to the solid-grain modulus (K fluid /K solid ) and the ratio of the Biot coefficient to porosity (α K /φ). The effective fluid coefficient nomograph is a convenient tool for estimating how low-frequency elastic-wave properties will respond to changes in reservoir fluids.
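Gassmann's fluid substitution itself, into which the modulus–porosity relations are substituted, has a standard closed form. A minimal implementation; the grain and fluid moduli in the test are the tutorial's figure values (K solid = 38 GPa, K fluid = 2.25 GPa), while the dry-frame modulus is an illustrative value of ours:

```python
def gassmann_ksat(k_dry, k_solid, k_fluid, phi):
    """Low-frequency saturated bulk modulus from Gassmann's equation:

        K_sat = K_dry + (1 - K_dry/K_s)^2 /
                (phi/K_f + (1 - phi)/K_s - K_dry/K_s^2)

    The shear modulus is unchanged by the pore fluid at low frequency.
    """
    b = 1.0 - k_dry / k_solid                 # Biot coefficient
    denom = phi / k_fluid + (1.0 - phi) / k_solid - k_dry / k_solid ** 2
    return k_dry + b * b / denom
```

The limiting cases are useful sanity checks: as K_dry approaches K_solid the Biot coefficient vanishes and the fluid has no effect, and as K_fluid approaches zero K_sat collapses back to K_dry.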
 
Article
The phase-shift method of wavefield extrapolation applies a phase shift in the Fourier domain to deduce a scalar wavefield at one depth level given its value at another. The phase-shift operator varies with frequency and wavenumber, and assumes constant velocity across the extrapolation step. We use nonstationary filter theory to generalize this method to nonstationary phase shift (NSPS), which allows the phase shift to vary laterally depending upon the local propagation velocity. For comparison, we derive an analytic form for the popular phase shift plus interpolation (PSPI) method in the limit of an exhaustive set of reference velocities. NSPS and this limiting form of PSPI can be written as generalized Fourier integrals which reduce to ordinary phase shift in the constant velocity limit. In the (x, ω) domain, these processes are the transpose of each other; however, only NSPS has the physical interpretation of forming the scaled, linear superposition of laterally-var...
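Ordinary phase shift, the constant-velocity limit to which both NSPS and PSPI reduce, multiplies each (ω, k_x) component by exp(i k_z Δz) with k_z = sqrt(ω²/v² − k_x²). A single-frequency sketch (the discrete wavenumber handling and the zeroing of evanescent components are our simplifications):

```python
import cmath
import math

def phase_shift_step(spectrum, kx_values, omega, v, dz):
    """Extrapolate one depth step: multiply each spatial-frequency
    component by exp(i*kz*dz), kz = sqrt(omega^2/v^2 - kx^2).

    Evanescent components (kx^2 > omega^2/v^2) are suppressed here;
    propagating components keep unit amplitude, as a pure phase shift.
    """
    out = []
    for s, kx in zip(spectrum, kx_values):
        kz2 = (omega / v) ** 2 - kx * kx
        if kz2 > 0.0:
            out.append(s * cmath.exp(1j * math.sqrt(kz2) * dz))
        else:
            out.append(0.0 * s)   # zero the evanescent part
    return out
```

NSPS replaces the single velocity v with a laterally varying v(x) and applies the operator as a superposition in space, which is why it no longer factors into one multiplication per wavenumber.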
 
Article
The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(N(−ln ε)^L + (∏_ν M_ν) Σ_ν log M_ν) operations, where M_ν is the number of Fourier components along dimension ν (ν = 1, …, L), N is the number of irregularly spaced samples, and ε is the required accuracy. This is a dramatic improvement over the O(N ∏_ν M_ν) operations required for the direct evaluation (NDFT). The performance of the NFFT depends on the lowpass filter used in the algorithm. A truncated Gauss pulse, proposed in the literature, is optimized. A newly proposed filter, a Gauss pulse tapered with a Hanning window, performs better than the truncated Gauss pulse and the B-spline, also proposed in the literature. For small filter length, a numerically optimized filter shows the best results. Numerical experiments for 1-D and 2-D implementations confirm the theoretically predicted ...
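The direct NDFT that the NFFT accelerates is just an explicit sum, and a naive O(N·M) implementation is useful as a correctness reference when validating a fast version. A 1-D sketch (sample positions assumed normalized to [0, 1); names are ours):

```python
import cmath

def ndft(samples, positions, m_values):
    """Direct 1-D nonuniform DFT:

        F(m) = sum_n f_n * exp(-2*pi*i*m*x_n)

    samples   : values f_n at irregular positions x_n in [0, 1)
    m_values  : integer frequencies at which to evaluate F
    Costs O(N*M); the NFFT approximates the same sums much faster.
    """
    return [sum(f * cmath.exp(-2j * cmath.pi * m * x)
                for f, x in zip(samples, positions))
            for m in m_values]
```

With uniformly spaced positions x_n = n/N this reduces exactly to the ordinary DFT, which gives a convenient unit test for any fast approximation.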
 
Top-cited authors
Jean Virieux
  • Université Grenoble Alpes
Sergey Fomel
  • University of Texas at Austin
Douglas W. Oldenburg
  • University of British Columbia - Vancouver
George A. Mcmechan
  • University of Texas at Dallas
Leon Thomsen
  • University of Houston