
Neural Network Analysis for Hazardous Waste Characterization


Abstract

This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig-face characterization experimental cell at INEL to determine the depth and nature of buried objects. The project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of the approach. The resulting system correctly determined the nature of buried objects 87% of the time and located a buried object to within an average error of 0.8 feet. These statistics were gathered on a large test set and so can be considered...
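To make step (1) concrete, the sketch below fills gaps in a gridded sensor reading with a simple iterative Laplacian-smoothing scheme. The abstract only says that several geophysical interpolation schemes were evaluated, so the method, the function name fill_missing, and the iteration count here are assumptions for illustration rather than the paper's actual procedure.

import numpy as np

def fill_missing(grid, n_iter=500):
    # Iteratively relax NaN cells toward the mean of their four neighbours
    # while holding known readings fixed. This is only a simple stand-in for
    # the geophysical interpolation schemes the paper evaluated.
    g = np.asarray(grid, dtype=float).copy()
    known = ~np.isnan(g)
    g[~known] = np.nanmean(g)                      # crude initial guess
    for _ in range(n_iter):
        padded = np.pad(g, 1, mode="edge")
        neighbours = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                             + padded[1:-1, :-2] + padded[1:-1, 2:])
        g[~known] = neighbours[~known]             # update only the gaps
    return g

# Example: a synthetic 32 x 32 sensor grid with a block of dropped readings.
rng = np.random.default_rng(0)
grid = rng.normal(size=(32, 32))
grid[10:14, 20:25] = np.nan
print(np.isnan(fill_missing(grid)).any())          # False

Known readings stay fixed while empty cells relax toward the mean of their neighbours, which mimics the smooth surfaces that minimum-curvature style gridding produces for geophysical survey data.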
[Figure: Network input. A multi-resolution scanning window is built for each of the 6 sensors: a 4 x 4 raw data window, a 16 x 16 window compressed to 4 x 4, and a 32 x 32 window compressed to 4 x 4.]
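The figure can be read as a recipe for assembling the network input: for each of the 6 sensors, a 4 x 4 raw window plus 16 x 16 and 32 x 32 windows compressed down to 4 x 4, or 6 x 3 x 16 = 288 values per grid position. The sketch below uses a Haar-style block average as the compression step; the paper compared several wavelet schemes, so this particular transform, the function names, and the synthetic data are assumptions for illustration only.

import numpy as np

def compress_to_4x4(window):
    # Haar-style approximation: repeatedly average 2 x 2 blocks until the
    # window is 4 x 4. A 4 x 4 raw window passes through unchanged.
    w = np.asarray(window, dtype=float)
    while w.shape[0] > 4:
        w = 0.25 * (w[0::2, 0::2] + w[1::2, 0::2]
                    + w[0::2, 1::2] + w[1::2, 1::2])
    return w

def multi_resolution_input(sensor_grids, i, j):
    # Build the input vector for grid position (i, j) from the six sensor
    # grids: per sensor, a 4 x 4 raw window plus 16 x 16 and 32 x 32 windows
    # compressed to 4 x 4. Assumes (i, j) lies at least 16 cells from every
    # grid edge; edge handling is not addressed here.
    features = []
    for grid in sensor_grids:
        for size in (4, 16, 32):
            half = size // 2
            window = grid[i - half:i + half, j - half:j + half]
            features.append(compress_to_4x4(window).ravel())
    return np.concatenate(features)    # 6 sensors x 3 windows x 16 = 288 values

# Example with synthetic data for the six sensors.
rng = np.random.default_rng(1)
sensor_grids = [rng.normal(size=(64, 64)) for _ in range(6)]
print(multi_resolution_input(sensor_grids, 32, 32).shape)   # (288,)

A 4 x 4 window passes through compress_to_4x4 unchanged, so the raw window and the two compressed windows each contribute 16 values per sensor.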