About

84 Publications · 6,559 Reads
821 Citations
Publications (84)
A successive interference cancellation (SIC) algorithm is developed for the interception of signals transmitted on a CDMA forward link. The system is modelled after the IS-95 standard but the results are also applicable to other CDMA systems such as the wideband CDMA (W-CDMA) and IS-2000 standards. In realistic systems, the different forward link c...
With current concerns over the effects of destabilization of levees in the San Joaquin-Sacramento Bay Delta due to tree root incursion, we performed an assessment of the size and distribution of trees using tree crown detection algorithms applied to LIDAR data collected across the entirety of the legal Delta. Using local maximum heights as seeds, w...
Maximum likelihood detector algorithms are developed for the matrix of transmitted symbols in a multiuser system in which the received signal is the sum of K cochannel continuous phase modulated (CPM) signals and additive white Gaussian noise. We illustrate that the maximum likelihood matrix detector, which provides optimum detector performance, co...
An approach to automatic target cueing (ATC) in hyperspectral images, referred to as K-means reclustering, is introduced. The objective is to extract spatial clusters of spectrally related pixels having specified and distinctive spatial characteristics. K-means reclustering has three steps: spectral cluster initialization, spectral clustering and s...
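The spectral-clustering step named in this abstract can be illustrated with plain K-means over pixel spectra. This is a minimal sketch under assumed inputs (a `pixels` array with one spectrum per row); the function name is illustrative, and the seeding and spatial reclustering steps of the published method are not reproduced.

```python
import numpy as np

def kmeans_spectral(pixels, k, iters=20, seed=0):
    """Plain K-means on pixel spectra (rows of `pixels`): a sketch of the
    spectral-clustering step only; cluster initialization and spatial
    reclustering from the published method are omitted."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel spectrum to the nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean spectrum of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```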
The design of linear image filters based on properties of human visual perception has been shown to require the minimization of criterion functions in both the spatial and frequency domains. In this correspondence, we extend...
In this paper, we describe a new method for designing two-dimensional filters with the McClellan transform. It is well known that McClellan transform can simplify the two-dimensional filter design problem into a one dimensional problem.
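As background, the one-dimensional-to-two-dimensional substitution at the heart of the McClellan transform can be sketched as follows. The nearly circular transformation used here is the classic one from the literature, not necessarily the one designed in this paper; the function name is illustrative.

```python
import numpy as np

def mcclellan_response(a, w1, w2):
    """2-D zero-phase frequency response obtained by substituting the
    classic nearly circular McClellan transformation
    F = -0.5 + 0.5*cos(w1) + 0.5*cos(w2) + 0.5*cos(w1)*cos(w2)
    for cos(w) in a 1-D response H(w) = sum_n a[n] * T_n(cos w),
    where `a` are the Chebyshev-form coefficients of the 1-D prototype."""
    F = -0.5 + 0.5 * np.cos(w1) + 0.5 * np.cos(w2) + 0.5 * np.cos(w1) * np.cos(w2)
    # Chebyshev recursion: T_0 = 1, T_1 = F, T_n = 2*F*T_{n-1} - T_{n-2}
    t_prev, t_curr = np.ones_like(F), F
    H = a[0] * t_prev
    if len(a) > 1:
        H = H + a[1] * t_curr
    for n in range(2, len(a)):
        t_prev, t_curr = t_curr, 2 * F * t_curr - t_prev
        H = H + a[n] * t_curr
    return H
```

Because F(0, 0) = 1 = cos(0) and F(π, π) = -1 = cos(π), the 2-D response inherits the 1-D prototype's values at DC and at the corner of the frequency plane.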
We develop and evaluate receiver signal processing algorithms for the detection of signals transmitted via the forward link of a cell in a cellular system modeled after the IS-95 standard for direct-sequence spread-spectrum code-division multiple-access (CDMA) communications. Multiuser detectors on board airborne and terrestrial mobile interceptors...
We describe algorithms for automating the process of picking seismic events in prestack migrated common depth image gathers. The approach uses supervised learning and statistical classification algorithms along with advanced signal/image processing algorithms. No model assumption is made such as hyperbolic moveout. We train a probabilistic neural n...
We present an optimum multiuser receiver for detecting K cochannel asynchronous partial-response pulse amplitude modulated (PAM) signals. The receiver generalizes the optimum multiuser detector developed by Verdu (1983, 1986) for asynchronous full-response PAM signals and the optimum single-user receiver developed by Ungerboeck (1974) for the detec...
Through a combination of multiplexing and pipelining it is possible to implement in FPGAs a high-order tunable IIR notch filter using a new digital heterodyne technique. The notch center frequency can be tuned from DC to the Nyquist frequency and the characteristics of the IIR generated notch filter can be reprogrammed for specific applications. A...
Continuous-phase modulation (CPM) schemes are widely used in the wireless communications infrastructure, due in part to the ability to use inexpensive transmitters and demodulators. However, as with other radio signals, these signals are subject to transmission through sometimes severe multipath channels. We use the Laurent (1986) representation to devel...
Two single-chip designs implement in FPGAs a high-order tunable IIR notch filter using a new digital heterodyne technique. The notch center frequency can be tuned from DC to the Nyquist frequency and the characteristics of the IIR generated notch filter can be re-programmed for specific applications. The first chip is a single-chip version of a fi...
We develop and evaluate receiver algorithms for the detection of signals transmitted via the downlink of a cellular system modeled after the IS-95 standard for code-division multiple-access (CDMA) communications. Multiuser detectors on board airborne and terrestrial mobile signal monitors attempt the simultaneous detection, in a single receiver, of...
Simple hardware implementations of three key elements, (1) a fixed-coefficient prototype filter, (2) a digital up-converter, and (3) a digital down-converter, are proposed for realization in FPGAs or ASICs. Through layout and simulation the feasibility of a tunable heterodyne notch filter is established in which three up-converters, four notch fil...
Mean curvature diffusion is shown to be a position vector diffusion, tending to scalar diffusion as a flat image region is approached, and providing noise removal by steepest descent surface minimization. At edges, it switches to a nondiffusion state due to two factors: the Laplacian of position vanishes and the magnitude of the surface normal atta...
In this paper, we develop optimum and suboptimum receivers for jointly detecting two cochannel continuous phase modulated (CPM) signals. These receivers are based upon Laurent's representation of binary CPM as the sum of a finite number of pulse amplitude modulated signals. We also provide a review of the Laurent representation and its application...
A receiver is developed for MAP symbol detection of a burst (i.e., finite-length) CPM signal received in additive white Gaussian noise. This minimum probability of symbol error receiver requires the entire burst of data and involves the use of both forward and backward recursions. Performance results are provided comparing the MAP symbol detector t...
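The forward and backward recursions of such a MAP symbol detector can be sketched generically. This toy implementation operates on precomputed branch likelihoods over an abstract trellis and is not the paper's CPM-specific receiver; the array layout and uniform initial prior are assumptions.

```python
import numpy as np

def map_symbol_posteriors(gamma):
    """Forward-backward (BCJR-style) recursion over a finite trellis, a
    sketch of MAP detection for a burst. gamma[t, i, j] is the
    (unnormalized) likelihood of the transition from state i to state j
    at step t. Returns the posterior probability of each transition."""
    T, S, _ = gamma.shape
    alpha = np.zeros((T + 1, S))
    beta = np.zeros((T + 1, S))
    alpha[0] = 1.0 / S                      # uniform prior on initial state
    beta[T] = 1.0                           # burst may end in any state
    for t in range(T):                      # forward recursion
        alpha[t + 1] = alpha[t] @ gamma[t]
        alpha[t + 1] /= alpha[t + 1].sum()  # normalize to avoid underflow
    for t in range(T - 1, -1, -1):          # backward recursion
        beta[t] = gamma[t] @ beta[t + 1]
        beta[t] /= beta[t].sum()
    # transition posteriors: alpha(i) * gamma(i, j) * beta(j), normalized
    post = alpha[:T, :, None] * gamma * beta[1:, None, :]
    post /= post.sum(axis=(1, 2), keepdims=True)
    return post
```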
Representing the image as a surface, an inhomogeneous diffusion algorithm is developed, evolving the surface at a speed proportional to its mean curvature, reducing noise while preserving image structure. An adaptive scaling parameter increases the speed of the diffusion. The properties of a discrete algorithm are demonstrated experimentally
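The evolution described can be sketched with central finite differences. This is a minimal illustration of mean-curvature motion of the graph surface z = I(x, y) (up to a constant factor), with replicated boundaries and without the paper's adaptive scaling parameter.

```python
import numpy as np

def mean_curvature_diffusion(img, steps=10, dt=0.1):
    """Evolve the image surface z = I(x, y) at a speed proportional to
    its mean curvature, a finite-difference sketch of the algorithm
    described above (boundary handling and scaling are simplified)."""
    I = img.astype(float).copy()
    for _ in range(steps):
        P = np.pad(I, 1, mode="edge")
        Ix = (P[1:-1, 2:] - P[1:-1, :-2]) / 2.0
        Iy = (P[2:, 1:-1] - P[:-2, 1:-1]) / 2.0
        Ixx = P[1:-1, 2:] - 2 * I + P[1:-1, :-2]
        Iyy = P[2:, 1:-1] - 2 * I + P[:-2, 1:-1]
        Ixy = (P[2:, 2:] - P[2:, :-2] - P[:-2, 2:] + P[:-2, :-2]) / 4.0
        # mean-curvature numerator and metric of the graph surface
        num = Ixx * (1 + Iy**2) - 2 * Ix * Iy * Ixy + Iyy * (1 + Ix**2)
        I = I + dt * num / (1 + Ix**2 + Iy**2)
    return I
```

On a flat region the update reduces to scalar (heat) diffusion, while large gradients suppress the step, consistent with the edge-preserving behavior described in the abstract above.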
The iterative optimizing quantization technique (IOQT) is a novel method in reconstructing three-dimensional (3D) images from a limited number of 2D projections. IOQT can reduce the artifacts and image distortion due to a limited number of projections and limited range of viewing angles. Equivalently, by reducing the number of projections required...
Most of the recent work on inhomogeneous diffusion in image filtering focuses on diffusing the isophote curves. We present a less familiar approach to the development of inhomogeneous diffusion algorithms in which the image is regarded as a surface in three-space. The magnitude of the surface normal controls a diffusion that evolves the image surface...
We describe the properties of algorithms based on methods of inhomogeneous diffusion and their application to image and video coding. Filtering by inhomogeneous diffusion has the desirable property of reducing noise and other artifacts while preserving the image structure, maintaining or improving image quality. As a preprocessor, this filtering re...
We have developed a computer-aided system (Bony Parts) to analyze periodic bands in fish otoliths (or other structures) for age estimation. The image analysis program first scans the image of a thin otolith section, perpendicular to the bands specified by the user. Adjacent scans are averaged and filtered with Fourier transformation or spatial doma...
A unified eigenfilter approach is proposed for determining the mean-square-optimal coefficients of the McClellan transformation. The approach applies to all filter shapes without the use of prior knowledge of the properties of the coefficients. Several design examples for arbitrarily shaped and oriented 2-D fan, elliptical, and diamond filters are...
We describe algorithms for automating the process of picking seismic events in pre-stack migrated gathers. The approach uses supervised learning and statistical classification algorithms along with advanced signal-image processing algorithms. We train a probabilistic neural network (PNN) for pixel classification using event times and offsets (groun...
Cochannel interference, which occurs when two or more signals share the same spectral and temporal channels, is a major problem in frequency- and time-division multiple access systems. In this paper, we focus on the development and evaluation of a joint maximum likelihood algorithm for simultaneously detecting two cochannel continuous phase modulate...
The encoding of images at high quality is important in a number of applications. We have developed an approach to coding that produces no visible degradation and that we denote as perceptually transparent. Such a technique achieves a modest compression, but still significantly higher than error free codes. Maintaining image quality is not important...
Coding techniques, such as JPEG and MPEG, result in visibly degraded images at high compression. The coding artifacts are strongly correlated with image features and result in objectionable structured errors. Among structured errors, the reduction of the end of block effect in JPEG encoding has received recent attention, with advantage being taken...
In previous work, we reported on the benefits of noise reduction prior to coding of very high quality images. Perceptual transparency can be achieved with a significant improvement in compression as compared to error free codes. In this paper, we examine the benefits of preprocessing when the quality requirements are not very high, and perceptible...
We have recently proposed the use of geometry in image processing by representing an image as a surface in 3-space. The linear variations in intensity (edges) were shown to have a nondivergent surface normal. Exploiting this feature we introduced a nonlinear adaptive filter that only averages the divergence in the direction of the surface normal. T...
Cochannel interference, which occurs when two or more signals share the same spectral and temporal channels, is a major problem in frequency- and time-division multiple access systems. We focus on the development and evaluation of a joint maximum likelihood algorithm for simultaneously detecting two cochannel continuous phase modulated (CPM) signal...
In the perceptually transparent coding of images, we use representation and quantization strategies that exploit properties of human perception to obtain an approximate digital image indistinguishable from the original. This image is then encoded in an error free manner. The resulting coders have better performance than error free coding for a comp...
A network model is proposed for invariant object recognition. The overall complexity of the weight connection is lower than that of several established networks. The performance of the proposed model is comparable to that obtained with Zernike moments.
A new search method over (x, y, θ), called position-orientation masking, is introduced. It is applied to vertices that are allowed to be separated into different bands of acuteness. Position-orientation masking yields exactly one θ value for each (x, y) that it considers to be the location of a possible occurrence of an object. Deta...
A network is described for separating binary image objects from the background, to be used as a preprocessor for a recognition network which assumes that only one object is present in the image plane. The network has a simpler architecture compared with existing approaches, while retaining reasonable performance.
The inadequacy of the classic linear approach to edge detection and scale space filtering lies in the spatial averaging of the Laplacian. The Laplacian is the divergence of the gradient and thus is the divergence of both magnitude and direction. The divergence in magnitude characterizes edges and this divergence must not be averaged if the image st...
A new formulation for inhomogeneous image diffusion is presented in which the image is regarded as a surface in 3-space. The evolution of this surface under diffusion is analyzed by classical methods of differential geometry. A nonlinear filtering theory is introduced in which only the divergence of the direction of the surface gradient is averaged...
Image interpolation is an important image operation. It is commonly used in image enlargement to obtain a close-up view of the detail of an image. From sampling theory, an ideal low-pass filter can be used for image interpolation. However, ripples appear around image edges which are annoying to a human viewer. The authors introduce a new FIR image...
Proposes a perceptual Wiener filter for image restoration, a linear space-variant filter which accounts for the human visual system response to image details and noise in the vicinity of an edge. This filter provides a reduction in the ringing artifact observed in the vicinity of edges, when compared to the response of the widely used linear space-...
We introduce a new theory relating the magnitude of the image surface normal to an inhomogeneous diffusion that solely diffuses (averages) the mean curvature of the image surface. We discuss the remarkable properties of this diffusion stressing the regularity it imposes on regions and boundaries while preserving the locality of edges and lines. Exp...
This paper describes a simple network for a selective attention mechanism for multiple binary objects. The network is based on a concept similar to the region growing approach and is able to focus attention on one object at a time from multiple objects for a recognition process. The network consists of a growing network and an attention network. Th...
This paper proposes a network architecture for invariant object recognition and rotation angle estimation. The model has four stages. The first stage is a network implementation of the Radon transform, which is used to separate rotation and translation of the input object into translations on the θ-axis and s-axis, respectively. The second stage pr...
A number of new approaches to image coding are being developed to meet the increasing need for a broader range of quality and usage environments for image compression. Several of these new approaches, such as subband coders, are intended to provide higher quality images at the same bit rate as compared to the JPEG standard, because they are not sub...
A method is developed for the synthesis of a nonlinear adaptive filter based on solutions to the inhomogeneous diffusion equation. The approach is based on the specification of the first derivative of the signal in time (scale). A general solution is derived and is then specialized to the scale invariance case, in which the diffusion coefficient is...
The encoding of Super High Definition Images presents new problems with regard to the effect of noise on the quality of images and on coding performance. Although the information content of images decreases with increasing resolution, the noise introduced in the image acquisition or scanning process, remains at a high level, independently of resolu...
An application of neural networks is the recognition of objects under translation, rotation, and scale change. Most existing networks for invariant object recognition require a huge number of connections and/or processing units. In this paper, we propose a new connectionist model for invariant object recognition for binary images with a reasonable...
The authors propose a new connectionist model for invariant object recognition. The model has four stages. The first stage obtains projections from the input image plane. The projection features can separate translation and rotation into two independent translations in s and θ, where s and θ are the axes of the transformed domain. The fi...
The encoding of high quality and super high definition images requires new approaches to the coding problem. The nature of such images and the applications in which they are used prohibits the introduction of perceptible degradation by the coding process. In this paper, we discuss techniques for the perceptually transparent coding of images. Althou...
The authors describe a human visual perception approach and report some results for this problem. The approach is based on the differential quantization of images, in which smooth approximations are subtracted from the image prior to quantization. They consider two such approximations. The first one is an approximation by splines obtained from a sp...
In image processing operations involving changes in the sampling grid, including increases or decreases in resolution, care must be taken to preserve image structure. Structure includes regions of high contrast, such as edges, streaks, or corners, all indicated by large gradients. The authors consider the use of local spatial analysis for both samp...
In the encoding of high quality images beyond current standards, a reexamination of issues in the representation, processing and encoding problems is needed. The fundamental reason for that change of emphasis is that the image representation, sampling density, color and motion parameters are no longer given by accepted practices or standards and, t...
A new approach is developed for detection of image objects and their orientations, based on distance transforms of intermediate level edge information(i.e., edge segments and vertices). Objects are modeled with edge segments and these edges are matched to edges extracted from an image by correlating spatially transformed versions of one with a dist...
The goal of this research is to develop interpolation techniques which preserve or enhance the local structure critical to image quality. Preliminary results are presented which exploit either the properties of vision or the properties of the image in order to achieve the goals. Directional image interpolation is considered which is based on a loca...
The amount of data generated by computed tomography (CT) scanners is enormous, making the image reconstruction operation slow, especially for 3-D and limited-data scans requiring iterative algorithms. The inverse Radon transform, commonly used for CT image reconstructions from projections, and the forward Radon transform are computationally burdens...
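The forward projection operation whose cost motivates this work can be sketched in a few lines; this nearest-neighbor version is only a behavioral illustration of the discrete Radon transform, not the fast hardware architecture the abstract refers to.

```python
import numpy as np

def radon_nearest(img, angles):
    """Discrete Radon transform of a square image by nearest-neighbor
    grid rotation and summation: one projection per angle. A behavioral
    sketch of the computationally burdensome operation discussed above."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    x, y = xx - c, yy - c
    sino = np.zeros((len(angles), n))
    for a, th in enumerate(angles):
        # rotate the sampling grid by -th, then sum along each column
        xr = np.cos(th) * x + np.sin(th) * y
        yr = -np.sin(th) * x + np.cos(th) * y
        ix = np.clip(np.rint(xr + c).astype(int), 0, n - 1)
        iy = np.clip(np.rint(yr + c).astype(int), 0, n - 1)
        sino[a] = img[iy, ix].sum(axis=0)
    return sino
```

Each output sample requires O(n) additions and there are n samples per angle over many angles, which is the n³-style cost that fast Radon/backprojection hardware aims to reduce.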
Some preliminary results are presented on a simple progressive code for scanned 300 dpi documents. The effectiveness of the standard CCITT codes as a function of document resolution is determined. Extensions to the technique of document resolution are examined. For higher-resolution scanning, e.g. 300 dpi, a pyramid of the information source which...
The application of novel anisotropic filter design techniques based on properties of human vision to the processing of luminance and chrominance components of color images is considered. Applied independently, these anisotropic filters can be used for the sequential digital representation of images by subsampling. By using them with two-dimensional...
Advances in computer networking have opened opportunities for remote image processing, allowing processing tasks to be distributed among specialized nodes. The integrated programming environment, Davis interactive system (DAISY) provides a control panel interface on a workstation to a library of image processing applications that run on specialized...
We have developed over the past few years new linear filtering methods for images which incorporate the use of properties of human vision in design. These methods apply to noise removal, image enhancement and image restoration. One interesting extension of our work is to the use of the anisotropy of vision to achieve better visual quality. In this...
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transfo...
The quantitative use of remote sensing satellite images in many applications requires that the geometric distortion inherent in these images be corrected, or rectified, to a desired map projection. The most widely used technique relies on ground control points to empirically determine a mathematical coordinate transformation to correct the geometry...
High-speed second-order digital filters with maximum input-sampling frequencies of 2.86 MHz can be realized with a single multiplier-accumulator IC using the filter architecture presented. Extensions to Nth-order digital filters are also presented.
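A second-order section needs only a handful of multiply-accumulate operations per sample, which is why a single multiplier-accumulator IC suffices. The behavioral model below is only an illustration of that operation count (direct form II is an assumption, not necessarily the paper's architecture).

```python
def biquad_df2(x, b, a):
    """Direct form II second-order digital filter: five
    multiply-accumulate operations per output sample, all of which can
    be time-multiplexed onto one multiplier-accumulator.
    b = (b0, b1, b2); a = (a0, a1, a2) with a0 normalized to 1."""
    w1 = w2 = 0.0                                 # state (delay) registers
    y = []
    for xn in x:
        w0 = xn - a[1] * w1 - a[2] * w2           # feedback MACs
        yn = b[0] * w0 + b[1] * w1 + b[2] * w2    # feedforward MACs
        w2, w1 = w1, w0                           # shift the delay line
        y.append(yn)
    return y
```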
The estimation of corneal endothelium mean cell area (and, hence, mean cell density) is an important problem in clinical ophthalmology. Mitotic division of these cells is not known to occur, and cell deaths are followed by the enlargement of adjacent cells. As a consequence, cell-area distributions change drastically as functions of age and disease...
In this paper, we report on a complete operational procedure designed for use by the U.S. Army Corps of Engineers for the determination of land-use information for hydrologic planning purposes. The procedure combines photo interpretation techniques and the batch-mode computer analysis of Landsat digital data. Since the operational constraints preclu...
Images acquired by remote sensing contain radiometric errors caused by variations in the sensor response. In this note, we present a unified treatment of the correction of periodic or nonperiodic errors, which provides some insight into the relation of correction algorithms to the type of radiometric degradation. Successful correction of a NOAA Ver...
The use of image quality measures in the design of processing algorithms and equipment is a difficult task. Realistic and useful images are complex and far from the threshold conditions under which psychophysical measurements and models are obtained. For a class of processing algorithms, the image distortion is actually proportional to the image co...
We measured central corneal endothelial cell density and area from contact specular photomicrographs of ten normal and ten abnormal corneas, comparing the precision, cost, and speed of four methods: a rectangle, planimeter, digitizer, and cell sizer. The rectangle, planimeter, and digitizer gave results that differed less than 10% from each other;...
A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interac...
In this paper, we report on an operational procedure for use by the Corps of Engineers to acquire land use information for hydrologic planning purposes. The operational constraints preclude the use of dedicated, interactive image processing facilities. The procedure, which is summarized in detail, combines manual interpretation techniques and the b...
Satellite digital data from Landsat and NOAA satellites is often marred by striping or streaking errors due to variations in the response of the radiometric sensors. In this paper, we discuss the equalization of the digital data as a preprocessing step, prior to image enhancement or automatic classification. The methods described make use of statis...
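One common statistical equalization of this kind matches the first- and second-order statistics of each detector's lines to the global image statistics. The sketch below assumes a fixed striping period equal to the number of detectors; the paper's exact estimators are not reproduced.

```python
import numpy as np

def destripe(img, period):
    """Equalize detector striping by matching the mean and standard
    deviation of every `period`-th line to the global statistics: a
    sketch of statistical equalization as a preprocessing step."""
    out = img.astype(float).copy()
    m, s = out.mean(), out.std()
    for d in range(period):
        rows = out[d::period]                 # lines from detector d
        sd = rows.std()
        if sd > 0:
            out[d::period] = (rows - rows.mean()) * (s / sd) + m
    return out
```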
A new algorithm is presented for the determination of the locations of major edges in a noisy image based, in part, on the approach of maximum-likelihood estimation.
The problem of edge extraction in sampled images is formulated as a statistical estimation and detection problem. Known methods of statistical inference are applied to the problem, providing insight into the nature of edge extraction. Practical edge extraction algorithms are designed and applied to several images. Employing a formal approach to...
Using pacemakers implanted in canines with surgically induced atrioventricular blocks, the effects of the microwave-oven frequencies (915 and 2450 MHz) and two radar frequencies (2810 and 3050 MHz) were evaluated. Quantitative evaluation of these fields with respect to complete inhibition of pacemakers can be made. A narrow zone of inhibition durin...
The Iterative Optimizing Quantization Technique (IOQT) is a novel method in reconstructing three-dimensional images from a limited number of two-dimensional projections. IOQT reduces the artifacts and image distortion due to a limited number of projections and limited range of viewing angles. IOQT, which reduces the number of projections required f...
In this final report, we summarize some of our results from September 1989 to October 1990. The design, construction, and testing of a four-processor prototype multi-processor (RTP) board using TI TMS320C25 DSP chips has been completed. We are now finishing the extensive detailed final documentation of the RTP hardware and software. This extensive...
Abstract: A Study of Physical and Circuit Models of the Human Pinnae, by Patrick Satarzadeh, Master of Science in Electrical and Computer Engineering, UNIVERSITY OF CALIFORNIA, DAVIS, V. Ralph Algazi, Chair. The importance of the pinna in sound localization has been known for many years. This has led to a continued interest in understanding the relation...
The overall goal of this project is to evaluate the potential of extremely fast hardware architectures for image processing. Primary focus is upon realizations of the Radon transform and backprojection because of their applicability to computed tomography. In this report, we discuss our results thus far in the design, simulation, and realization of...