Conference Paper

Unsupervised machine learning for seismic facies classification using a 3D grid approach

References

Conference Paper
Convolutional neural networks (CNNs) are a type of supervised learning technique that can be applied directly to amplitude data for seismic data classification. The high flexibility of CNN architectures enables researchers to design different models for specific problems. In this study, I introduce an encoder-decoder CNN model for seismic facies classification, which classifies all samples in a seismic line simultaneously and provides superior facies quality compared with traditional patch-based CNN methods. I compare the encoder-decoder model with a traditional patch-based model to assess the usability of both CNN architectures.
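
The abstract does not give the network details and the full text is unavailable, so the sketch below is only a minimal illustration of the encoder-decoder idea in PyTorch, classifying every sample of a 2D section at once. Layer widths, depth, and the number of facies classes are assumptions, not the author's architecture.

# Minimal encoder-decoder sketch for per-sample facies classification.
# Shapes, widths, and n_classes are illustrative assumptions.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, n_classes=6):                 # n_classes is assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample x2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample x4 total
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),  # per-sample logits
        )

    def forward(self, x):                            # x: (batch, 1, time, trace)
        return self.decoder(self.encoder(x))         # (batch, n_classes, time, trace)

section = torch.randn(1, 1, 128, 128)                # synthetic amplitude section
facies = EncoderDecoder()(section).argmax(dim=1)     # one facies label per sample

Unlike a patch-based model, which predicts one label per extracted window, this layout labels the whole section in a single forward pass, which is the efficiency the abstract highlights.
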
Article
A new graphical display is proposed for partitioning techniques. Each cluster is represented by a so-called silhouette, which is based on the comparison of its tightness and separation. This silhouette shows which objects lie well within their cluster, and which ones are merely somewhere in between clusters. The entire clustering is displayed by combining the silhouettes into a single plot, allowing an appreciation of the relative quality of the clusters and an overview of the data configuration. The average silhouette width provides an evaluation of clustering validity, and might be used to select an ‘appropriate’ number of clusters.
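
As a minimal sketch of this diagnostic with scikit-learn (the synthetic data and candidate cluster counts are assumptions for illustration):

# Average silhouette width across candidate k; the largest value suggests
# an 'appropriate' number of clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))

# Per-object silhouettes: values near 1 lie well within their cluster,
# values near 0 sit between clusters.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
s = silhouette_samples(X, labels)
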
Conference Paper
As one of the most powerful and popular seismic facies analysis methods, the self-organizing map (SOM) projects multiple attributes into a lower-dimensional (usually 2D) latent space. Because it lacks geological time information, however, the resulting facies classification is sometimes problematic or unreliable. In this study, we briefly introduce a stratigraphic constraint, derived from a seismic decomposition method, into SOM facies analysis. After describing the principle of the method, we show an example of an improved SOM that uses sedimentary-cycle information derived from variational mode decomposition (VMD) of the seismic amplitude data. In an unconventional shale application, we observe that the constrained SOM facies map reveals layers that are easily overlooked on a traditional unconstrained SOM facies map.
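
The VMD-derived stratigraphic constraint is specific to this study and is not available in standard libraries; the sketch below shows only the unconstrained SOM baseline it improves on, using the MiniSom package on synthetic attribute vectors. Map size, attribute count, and training length are assumptions.

# Unconstrained SOM facies mapping (baseline only; no stratigraphic constraint).
import numpy as np
from minisom import MiniSom

attrs = np.random.rand(5000, 4)        # 5000 samples x 4 attributes (synthetic)

som = MiniSom(8, 8, input_len=4, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(attrs)
som.train_random(attrs, num_iteration=10000)

# Each sample maps to a node of the 2D latent map; the winning node index
# serves as the facies label.
facies = np.array([som.winner(v) for v in attrs])   # (row, col) per sample
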
Conference Paper
Seismic facies analysis is commonly carried out by classifying seismic waveforms based on their shapes in an interval of interest. It is also carried out by using different seismic attributes, reducing the dimensionality of the input data volumes using Kohonen's self-organizing maps (SOM), and organizing the result into clusters on a 2D map. Such methods are computationally fast and inexpensive. However, they have shortcomings: there are no definite criteria for selecting the search radius and the learning rate, as these parameters depend on the input data, and no cost function is defined and optimized, so the method is usually deficient in providing a measure of confidence that could be assigned to the results. Generative topographic mapping (GTM) has been shown to address the shortcomings of the SOM method and has been suggested as an alternative to it. We demonstrate the application of GTM to a dataset from central Alberta, Canada, and show that its performance is more encouraging than that of simplistic waveform classification or the SOM multiattribute approach.
Conference Paper
In this study, we use an example from a Barnett Shale play to demonstrate how supervised and unsupervised machine learning techniques provide the right leverage for seismic interpreters. By analyzing the seismic facies map generated by an unsupervised self-organizing map, the gamma ray estimated by an artificial neural network, and the brittleness index estimated by a supervised proximal support vector machine, we arrive at a frackability and lithofacies interpretation of the Lower Barnett Shale. We find strong agreement between the interpreted frackability in the Lower Barnett Shale and microseismic events.
Article
Interpretation of seismic reflection data routinely involves powerful multiple-central-processing-unit computers, advanced visualization techniques, and generation of numerous seismic data types and attributes. Even with these technologies at the disposal of interpreters, there are additional techniques to derive even more useful information from our data. Over the last few years, there have been efforts to distill numerous seismic attributes into volumes that are easily evaluated for their geologic significance and improve seismic interpretation. Seismic attributes are any measurable property of seismic data. Commonly used categories of seismic attributes include instantaneous, geometric, amplitude-accentuating, amplitude variation with offset, spectral decomposition, and inversion. Principal component analysis (PCA), a linear quantitative technique, has proven to be an excellent approach for understanding which seismic attributes, or combinations of seismic attributes, have interpretive significance. PCA reduces a large set of seismic attributes to a few components that capture the principal variations in the data, which often relate to geologic features of interest. PCA, as a tool used in an interpretation workflow, can help determine meaningful seismic attributes. In turn, these attributes are input to self-organizing-map (SOM) training. The SOM, a form of unsupervised neural network, has proven to take many of these seismic attributes and produce meaningful and easily interpretable results. SOM analysis reveals the natural clustering and patterns in data and has been beneficial in defining stratigraphy, seismic facies, direct-hydrocarbon-indicator features, and aspects of shale plays such as fault/fracture trends and sweet spots. With modern visualization capabilities and the application of 2D color maps, SOM routinely identifies meaningful geologic patterns. Recent work using SOM and PCA has revealed geologic features that were not previously identified or easily interpreted from the seismic data. The ultimate goal of this multiattribute analysis is to enable the geoscientist to produce a more accurate interpretation and reduce exploration and development risk.
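
As a minimal sketch of the PCA step in such a workflow (scikit-learn on a synthetic attribute matrix; attribute and component counts are illustrative assumptions), whose retained components would then feed SOM training:

# Reduce a stack of seismic attributes to a few principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attrs = np.random.rand(10000, 12)      # 10000 samples x 12 attributes (synthetic)

pca = PCA(n_components=4)
scores = pca.fit_transform(StandardScaler().fit_transform(attrs))

# Explained-variance ratios show how much variation each component retains;
# the loadings (pca.components_) show which attributes dominate each one.
print(pca.explained_variance_ratio_)
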
Article
Synonyms: GMM; mixture model; Gaussian mixture density. A Gaussian mixture model (GMM) is a parametric probability density function represented as a weighted sum of Gaussian component densities. GMMs are commonly used as a parametric model of the probability distribution of continuous measurements or features in a biometric system, such as vocal-tract-related spectral features in a speaker recognition system. GMM parameters are estimated from training data using the iterative expectation-maximization (EM) algorithm or maximum a posteriori (MAP) estimation from a well-trained prior model.
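
A minimal sketch of fitting a GMM by EM with scikit-learn (component count, covariance type, and data are assumptions):

# Fit a two-component GMM by expectation-maximization.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.concatenate([np.random.normal(0, 1, (200, 2)),
                    np.random.normal(4, 1, (200, 2))])

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(X)                             # EM estimates weights, means, covariances

print(gmm.weights_, gmm.means_)        # mixture weights and component means
probs = gmm.predict_proba(X)           # soft (posterior) assignment per sample
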
Article
Organizing data into sensible groupings is one of the most fundamental modes of understanding and learning. As an example, a common scheme of scientific classification puts organisms into a system of ranked taxa: domain, kingdom, phylum, class, etc. Cluster analysis is the formal study of methods and algorithms for grouping, or clustering, objects according to measured or perceived intrinsic characteristics or similarity. Cluster analysis does not use category labels that tag objects with prior identifiers, i.e., class labels. The absence of category information distinguishes data clustering (unsupervised learning) from classification or discriminant analysis (supervised learning). The aim of clustering is to find structure in data and is therefore exploratory in nature. Clustering has a long and rich history in a variety of scientific fields. One of the most popular and simple clustering algorithms, K-means, was first published in 1955. In spite of the fact that K-means was proposed over 50 years ago and thousands of clustering algorithms have been published since then, K-means is still widely used. This speaks to the difficulty in designing a general purpose clustering algorithm and the ill-posed problem of clustering. We provide a brief overview of clustering, summarize well known clustering methods, discuss the major challenges and key issues in designing clustering algorithms, and point out some of the emerging and useful research directions, including semi-supervised clustering, ensemble clustering, simultaneous feature selection during data clustering, and large scale data clustering.
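
Given its continued ubiquity, a from-scratch sketch of the Lloyd iteration behind K-means may help; the data, k, and initialization here are illustrative assumptions, and empty-cluster handling is omitted for brevity.

# Lloyd's K-means: alternate nearest-center assignment and center update.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random data points as seeds
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Update step: each center moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):           # converged
            break
        centers = new_centers
    return labels, centers

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
labels, centers = kmeans(X, k=2)
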
Article
This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with the goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms, such as image segmentation, object recognition, and information retrieval.
Article
Techniques for partitioning objects into optimally homogeneous groups on the basis of empirical measures of similarity among those objects have received increasing attention in several different fields. This paper develops a useful correspondence between any hierarchical system of such clusters, and a particular type of distance measure. The correspondence gives rise to two methods of clustering that are computationally rapid and invariant under monotonic transformations of the data. In an explicitly defined sense, one method forms clusters that are optimally “connected,” while the other forms clusters that are optimally “compact.”
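
A minimal sketch contrasting the two criteria with SciPy, single linkage yielding the optimally 'connected' clusters and complete linkage the optimally 'compact' ones (the data and cluster count are synthetic assumptions):

# Single vs. complete linkage on the same data, each cut into two clusters.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 6])

for method in ('single', 'complete'):
    Z = linkage(X, method=method)                    # hierarchical merge tree
    labels = fcluster(Z, t=2, criterion='maxclust')  # cut the tree at 2 clusters
    print(method, np.bincount(labels)[1:])           # cluster sizes
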