September 2010

Published by Elsevier BV

Online ISSN: 1095-4333 · Print ISSN: 1051-2004

The spectrum of the convolution of two continuous functions can be determined as the continuous Fourier transform of the cross-correlation function. The same holds for the convolution of two infinite discrete sequences, whose spectrum can be determined as the discrete-time Fourier transform of the cross-correlation function of the two sequences. In current digital signal processing, the spectra of the continuous Fourier transform and the discrete-time Fourier transform are approximated by numerical integration or by densely sampling the discrete Fourier transform. It has been shown that all three transforms share many analogous properties. In this paper we show another useful property: the spectrum terms of the convolution of two finite-length sequences can be determined from the discrete Fourier transform of the modified cross-correlation function. In addition, two properties of the magnitude terms of orthogonal wavelet scaling functions are developed. These properties are used as constraints for an exhaustive search to determine a robust lower bound on the conjoint localization of orthogonal scaling functions.
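As a quick numerical illustration of the finite-length case, the sketch below (plain NumPy, verifying the standard circular-convolution and circular-cross-correlation theorems, not the paper's modified cross-correlation construction) checks that the DFT of a circular convolution is the pointwise product of the individual DFTs, and that the DFT of a circular cross-correlation of real sequences is conj(X)·Y:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 0.0, 1.0, 3.0])
n = len(x)

# Circular convolution and circular cross-correlation in the time domain
conv = np.array([sum(x[k] * y[(m - k) % n] for k in range(n)) for m in range(n)])
xcorr = np.array([sum(x[k] * y[(k + m) % n] for k in range(n)) for m in range(n)])

X, Y = np.fft.fft(x), np.fft.fft(y)

# DFT of the circular convolution equals the pointwise product X * Y;
# DFT of the circular cross-correlation equals conj(X) * Y (x, y real)
assert np.allclose(np.fft.fft(conv), X * Y)
assert np.allclose(np.fft.fft(xcorr), np.conj(X) * Y)
```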

June 1997

Signal cancellation is a serious problem in adaptive nulling. The problem arises when the actual direction of arrival of the signal is slightly off the assumed direction of arrival: the adaptive algorithms treat the actual signal as a jammer because its direction of arrival is not exactly specified. It is shown how to prevent signal cancellation when the direction of arrival is not known exactly, using multiple look-direction constraints. This paper outlines the principles and illustrates how they can be incorporated in an adaptive nulling situation using a deterministic direct data domain approach.
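A minimal sketch of the multiple-constraint idea: a linearly constrained minimum-variance (LCMV) beamformer with unit-gain constraints at the assumed direction and at two nearby offsets, so that a slightly mispointed signal is not cancelled. The white-noise covariance, array size, and angles are illustrative assumptions, not the paper's direct data domain formulation.

```python
import numpy as np

def steering(n, theta):
    # Steering vector of an n-element uniform linear array, half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def lcmv_weights(R, C, f):
    # LCMV beamformer: minimize w^H R w subject to C^H w = f
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

n = 8
R = np.eye(n, dtype=complex)              # white-noise covariance (illustrative)
thetas = np.deg2rad([-2.0, 0.0, 2.0])     # assumed direction plus +/- 2 deg offsets
C = np.column_stack([steering(n, t) for t in thetas])
f = np.ones(3, dtype=complex)             # unit gain at all three look directions
w = lcmv_weights(R, C, f)
```

Because the constraint C^H w = f is enforced exactly, a signal arriving anywhere within the small constrained sector keeps near-unit gain instead of being nulled as a jammer.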

December 1996

A new distributed Bragg reflector (DBR) laser with continuously and arbitrarily chirped gratings is theoretically analysed. The chirped gratings are defined by bent waveguides on homogeneous grating fields. The influence of the chirped gratings on the tunability of multielectrode DBR lasers is presented via the study of different bending functions. It is theoretically shown that the tunability of these components can be improved using an appropriate chirping function.

January 1994

This paper describes a parallel implementation of a Hidden Markov Model (HMM) for spoken language recognition on the MasPar MP-1. By exploiting the massive parallelism of explicit duration HMMs, we can develop more complex models for real-time speech recognition. Implementation issues such as the choice of data structures, method of communication, and utilization of parallel functions are explored. The results of our experiments show that the parallelism in HMMs can be effectively exploited by the MP-1. Training that used to take nearly a week can now be completed in about an hour. The system can recognize the phones of a test utterance in a fraction of a second.
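The core likelihood computation being parallelized can be illustrated with a minimal serial forward-algorithm evaluation (a standard discrete HMM with made-up probabilities, not the explicit-duration variant run on the MP-1):

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    # Forward algorithm: alpha[i] = P(o_1..o_t, state_t = i), updated per symbol
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])                    # initial state probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])       # state transition matrix
B = np.array([[0.5, 0.5], [0.1, 0.9]])       # emission probabilities
p = forward_likelihood(pi, A, B, [0, 1, 1])  # likelihood of observing 0, 1, 1
```

Each time step is a matrix-vector product followed by an elementwise scaling, which is exactly the structure that maps well onto a massively parallel machine.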

May 1993

Summary form only given. The Rutgers University Center for Computer Aids for Industrial Productivity (CAIP) is one of several tightly focused advanced technology centers chartered by the New Jersey Commission on Science and Technology. CAIP's focus is the industrial application of leading-edge computing technology. The Center is an industrial consortium whose budget supports about 70 researchers. Each member company has a representative on the Center's Board of Directors. The Board meets quarterly to review the research program, to advise on industry needs, to ready paths to application for research, and to become familiar with graduate students completing their degree programs. The Center also works closely with new start-up companies to provide laboratory, computing, and consulting assistance in their initial phases. In its sixth year, the Center has placed over 60 advanced-degree graduates in its affiliated industries, has helped to attract funding for new start-up businesses, and continually moves relevant research results, some of which are covered by patents, to its member companies. The author reviews the structure and research activities of the Center, and describes several in-progress transfers of technology to industry.

December 1996

We have studied the locking characteristics of semiconductor lasers through numerical calculation of the output intensity and the change in carrier density of the slave laser during injection locking. We have also obtained the dynamic locking range by examining the roots of the secular determinant of the perturbed system. The lower boundaries of the static and dynamic locking ranges coincide, but the upper boundaries do not. Both the static and dynamic locking ranges are asymmetrical about zero detuning and depend on the injection ratio, linewidth enhancement factor and biasing condition. The upper boundary of the dynamically stable region exhibits an abrupt bend at a very low injection level. Unlike previous work, the locking characteristics at both low and high injection levels have been carefully studied.

March 2004

Image enhancement is one of the most important issues in low-level image processing. Enhancement methods can be broadly classified into two classes: global and local methods. In this paper, multi-peak generalized histogram equalization (multi-peak GHE) is proposed. In this method, global histogram equalization is improved by using multi-peak histogram equalization combined with local information. In our experiments, different kinds of local information are employed. Experimental results demonstrate that the proposed method can enhance images effectively.
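For reference, the global baseline that the multi-peak method improves on, plain histogram equalization, can be sketched as follows (the tiny test image is made up; the multi-peak and local refinements are not reproduced):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    # Map each gray level through the normalized cumulative histogram (CDF),
    # spreading the occupied levels across the full dynamic range
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.round(cdf[img] * (levels - 1)).astype(np.uint8)

img = np.array([[52, 55, 61],
                [59, 79, 61],
                [85, 91, 96]], dtype=np.uint8)   # tiny illustrative image
out = histogram_equalize(img)                    # gray levels stretched toward [0, 255]
```

The multi-peak GHE idea replaces this single global mapping with mappings built around the individual peaks of the histogram, guided by local information.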

January 2000

Schmidt-Nielsen, Astrid, and Crystal, Thomas H., Speaker Verification by Human Listeners: Experiments Comparing Human and Machine Performance Using the NIST 1998 Speaker Evaluation Data, Digital Signal Processing 10 (2000), 249–266. The speaker verification performance of human listeners was compared to that of computer algorithms/systems. Listening protocols were developed to emulate as closely as possible the 1998 algorithm evaluation run by the U.S. National Institute of Standards and Technology (NIST), while taking into account human memory limitations. A subset of the target speakers and test samples from the same telephone conversation data was used. Ways of combining listener data to arrive at a group decision were explored, and the group mean worked well. The human results were very competitive with the best computer algorithms in the same-handset condition. For same-number testing, with 3-s samples, listener panels and the best algorithm had the same equal-error rate (EER) of 8%. Listeners were better than typical algorithms. For different-number testing, EERs increased, but humans had a 40% lower equal-error rate. Human performance in general seemed relatively robust to degradation.

January 2003

Leandro Farias Estrozi · Luiz Gonzaga Rios-Filho · Andrea Gomes Campos Bianchi · [...] · Luciano da F. Costa

A careful comparison of three numeric techniques for estimating the curvature along spatially quantized contours is reported. Two of the considered techniques are based on the Fourier transform (operating over 1D and 2D signals) and the Gaussian regularization required to attenuate the spatial quantization noise. While the 1D approach has been reported before and used in a series of applications, the 2D Fourier transform-based method is reported in this article for the first time. The third approach, based on splines, represents a more traditional alternative. Three classes of parametric curves are investigated: analytical, B-splines, and synthesized in the Fourier domain. Four quantization schemes are considered: grid-intersect quantization, square-box quantization, a table scanner, and a video camera. The performances of the methods are evaluated in terms of their execution speed, curvature error, and sensitivity to the involved parameters. The spline approach was the fastest but implied larger errors; the Fourier methods allowed higher accuracy and were robust to parameter configurations. The 2D Fourier method provides the curvature values along the whole image, but exhibits interference in some situations. Such results are important not only for characterizing the relative performance of the considered methods, but also for providing practical guidelines for those interested in applying those techniques to real problems.
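For intuition, the quantity all three techniques estimate is the parametric curvature κ = (x′y″ − y′x″)/(x′² + y′²)^(3/2). A naive centered finite-difference sketch (not one of the compared methods, and with no Gaussian regularization) recovers κ = 1/r on a cleanly sampled circle:

```python
import numpy as np

def curvature(x, y):
    # Centered-difference estimate of the parametric curvature along a contour;
    # the formula is invariant to the (uniform) parameter spacing
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x, y = 5.0 * np.cos(t), 5.0 * np.sin(t)   # circle of radius 5
k = curvature(x, y)                        # interior values close to 1/5
```

On spatially quantized contours this naive estimate degrades badly, which is exactly why the paper's methods smooth (Gaussian regularization, splines) before differentiating.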

March 2011

This paper considers the identification problem of Hammerstein nonlinear systems. Projection and stochastic gradient (SG) identification algorithms are presented for Hammerstein nonlinear systems using the gradient search method. Since the projection algorithm is sensitive to noise and the SG algorithm has a slow convergence rate, Newton recursive and Newton iterative identification algorithms are derived using the Newton (Newton–Raphson) method, in order to reduce the sensitivity of the projection algorithm to noise and to improve the convergence rate of the SG algorithm. Furthermore, the performances of these approaches are analyzed and compared using a numerical example, including the parameter estimation errors, the stationarity and convergence rates of the parameter estimates, and the computational efficiency.
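A minimal sketch of the projection algorithm on a simulated linear-in-parameters toy model (the true parameters, noise level, and regressors are illustrative assumptions; the Hammerstein parameterization itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7])

# Simulated linear-in-parameters data: y = theta^T phi + small noise
N = 2000
Phi = rng.standard_normal((N, 2))
y = Phi @ theta_true + 0.01 * rng.standard_normal(N)

# Projection algorithm: correct the estimate along the current regressor
# direction, normalized by the regressor energy (small constant avoids /0)
theta = np.zeros(2)
for phi_t, y_t in zip(Phi, y):
    theta += phi_t * (y_t - phi_t @ theta) / (1e-6 + phi_t @ phi_t)
```

The full normalization step is what makes the projection algorithm fast but noise-sensitive; the SG algorithm replaces it with a decaying gain, trading noise sensitivity for convergence speed, which motivates the paper's Newton variants.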

May 2009

This paper proposes a computationally efficient method for estimating the angle-of-arrival and polarization parameters of multiple farfield narrowband diversely polarized electromagnetic sources, using arbitrarily spaced electromagnetic vector sensors at unknown locations. The electromagnetic vector sensor is six-component in composition, consisting of three orthogonal electric dipoles plus three orthogonal magnetic loops, collocated in space. The presented method is based on an estimation method named the propagator, which requires only linear operations but no eigenvalue decomposition or singular value decomposition into the signal and noise subspaces, to estimate the scaled electromagnetic vector sensors' steering vectors and then the azimuth arrival angle, the elevation arrival angle, and the polarization parameters. Compared with its ESPRIT counterpart [K.T. Wong, M.D. Zoltowski, Closed-form direction finding and polarization estimation with arbitrarily spaced electromagnetic vector-sensors at unknown locations, IEEE Trans. Antennas Propagat. 48 (5) (2000) 671–681], the propagator method has its computational complexity reduced by the ratio of the number of sources to six times the number of vector sensors. Simulation results show that at high and medium signal-to-noise ratios, the proposed propagator method's estimation accuracy is similar to that of its ESPRIT counterpart.

May 2005

We introduce a new shrinkage scheme, hyper-trim, which generalizes the hard and soft shrinkage proposed by Donoho and Johnstone (1994). The new adaptive denoising method presented is based on Stein's unbiased risk estimation (SURE) and on a new class of shrinkage functions with a continuous derivative. The shrinkage function is simulated and tested in MATLAB with ECG signals corrupted by standard Gaussian noise. This method gives better mean square error (MSE) performance than conventional wavelet shrinkage methodologies.
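For context, the hard and soft shrinkage rules of Donoho and Johnstone that hyper-trim generalizes can be sketched as follows (hyper-trim itself is not reproduced here; the coefficient values are illustrative):

```python
import numpy as np

def hard_shrink(x, t):
    # Keep coefficients whose magnitude exceeds the threshold; zero the rest
    return np.where(np.abs(x) > t, x, 0.0)

def soft_shrink(x, t):
    # Pull every surviving coefficient toward zero by the threshold amount
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.0, 2.5])
h = hard_shrink(x, 1.0)   # keeps -3.0 and 2.5, zeros the rest
s = soft_shrink(x, 1.0)   # shrinks them to -2.0 and 1.5
```

Hard shrinkage is discontinuous and soft shrinkage is continuous but not differentiable at ±t; a shrinkage family with a continuous derivative, as proposed here, is what makes SURE-based threshold optimization well behaved.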

January 2002

Mamic, G., and Bennamoun, M., Representation and Recognition of 3D Free-Form Objects, Digital Signal Processing 12 (2002) 47–76. The problem of 3D object recognition has been one that has perplexed the computer vision community for the past two decades. This paper describes and analyzes techniques which have been developed for object representation and recognition. A set of specifications, which all object recognition systems should strive to meet, forms the basis upon which this critical review has been formulated. The literature indicates that there is a strong requirement for a precise and accurate representation that is simultaneously concise in nature. Such a representation must be relatively inexpensive and provide a means for determining the error in the surface fit, such that the effects of error propagation may be analyzed in the system and appropriate confidence bounds determined in the subsequent pose estimation.

March 2008

Quality of service is a critical consideration in the design of mobile systems, since it allows the user to receive high quality services. Therefore, in 3GPP systems, the quality of service requirements of a particular service, in terms of performance and latency, have to be satisfied. Turbo code features include parallel code concatenation, recursive convolutional encoding, nonuniform interleaving and an associated iterative decoding algorithm. Exploiting the quality of service classification according to the priority of latency or performance, possible service scenarios are examined for flat Rayleigh fading channels, with emphasis on the turbo decoding algorithm. In particular, comparing the SOVA and log-MAP algorithms (chosen for their data-flow similarities) in two operating environments, this paper shows that SOVA is clearly preferable for most real-time applications, whereas log-MAP is preferred for non-real-time applications with low data rates and small frames. Using the optimum algorithm in each scenario results in a more efficient turbo decoder: applications that would otherwise have failed can now be realised.

April 2003

AC power lines have been considered as a convenient and low-cost medium for intra-building automation systems. In this paper, we investigate the problem of estimating the channel order and root mean squared (RMS) delay spread associated with the power lines, which are channel parameters that provide important information for determining the data transmission rate and designing appropriate equalization techniques for power line communications (PLC). We start by showing that the key to the RMS delay spread estimation problem is the determination of the channel order, i.e., the effective duration of the channel impulse response. We next discuss various ways to estimate the impulse response length from a noise-corrupted channel estimate. In particular, four different methods, namely a signal energy estimation (SEE) technique, a generalized Akaike information criterion (GAIC) based test, a generalized likelihood ratio test (GLRT), and a modified GLRT, are derived for determining the effective length of a signal contaminated by noise. These methods are compared with one another using both simulated and experimentally measured power line data. The experimental data was collected for power line characterization in frequencies between 1 and 60 MHz.
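Once the channel order is fixed, the RMS delay spread is just the power-weighted second central moment of the impulse response delays; a short sketch (the three-tap response and sampling interval are made-up illustrative values, not measured power-line data):

```python
import numpy as np

def rms_delay_spread(h, ts):
    # Power-weighted mean delay, then the standard deviation of delay
    # about it (the RMS delay spread), for a tap-spaced response h
    p = np.abs(h) ** 2
    t = np.arange(len(h)) * ts
    mean_delay = np.sum(t * p) / np.sum(p)
    return np.sqrt(np.sum((t - mean_delay) ** 2 * p) / np.sum(p))

h = np.array([1.0, 0.5, 0.25])          # hypothetical 3-tap impulse response
tau = rms_delay_spread(h, ts=1e-6)      # seconds, about half a microsecond here
```

Overestimating the channel order pulls noise-only taps into the tail of h and inflates tau, which is why the paper's order-selection tests matter.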

January 2011

The filtering of accelerometer and gyroscope signals is discussed in this paper. A triple-axis accelerometer and three single-axis gyroscopes form the measuring head of a strapdown system. Effective noise filtration affects measured-signal reliability and the precision with which a moving object's position and orientation can be computed. Investigations were carried out to apply a Kalman filter in a real-time application for filtering acceleration and angular-rate signals. Adjusting the filter parameters is the most important task of the investigation, because the accuracy of the measuring head is unknown and no precisely known model of the system and the measurement is available. The calculation results presented in the paper describe the relation between the filter parameters and two assumed criteria of filtering quality: output-signal noise level and filter response rate. The aim of the investigation was to find parameter values that make the Kalman filter useful in this real-time application.
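A scalar sketch of the filter being tuned: a random-walk Kalman filter on one noisy channel, where q and r play the roles of the adjusted parameters. The signal here is simulated, not a real sensor trace.

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    # Scalar Kalman filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k,
    # with process-noise variance q and measurement-noise variance r
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # correct with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = np.ones(200)                           # constant signal (illustrative)
z = truth + 0.5 * rng.standard_normal(200)     # noisy sensor readings
est = kalman_1d(z, q=1e-4, r=0.25)
```

Larger q tracks fast changes but passes more noise; smaller q smooths harder but responds slowly. That is exactly the trade-off between the two quality criteria (output noise level versus response rate) studied in the paper.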

November 2006

This paper presents a novel method for the differential diagnosis of erythemato-squamous disease. The proposed method is based on fuzzy weighted pre-processing, k-NN (nearest neighbor) based weighted pre-processing, and a decision tree classifier, and consists of three parts. In the first part, we use a decision tree classifier to diagnose erythemato-squamous disease. In the second part, fuzzy weighted pre-processing, a new method improved by us, is applied to the inputs of the erythemato-squamous disease dataset, and the weighted inputs are then classified using the decision tree classifier. In the third part, k-NN based weighted pre-processing, also a new method improved by us, is applied to the inputs of the dataset, and the weighted inputs are then classified via the decision tree classifier. The decision tree classifier, the fuzzy weighted pre-processing decision tree classifier, and the k-NN based weighted pre-processing decision tree classifier reached classification accuracies of 86.18%, 97.57%, and 99.00%, respectively, using 20-fold cross-validation.

March 2010

In this paper, we introduce a new approach to non-parametric adaptive spectral analysis based on the Amplitude and Phase Estimation (APES) method that takes into account the small-sample errors of the sample covariance matrix. This approach is referred to as the Adaptive Tuning Amplitude and Phase Estimation (ATAPES) method. The main advantage of the ATAPES algorithm is its elimination of the bias that exists in the APES method, namely the biased peak location and the corresponding biased amplitude estimation. The ATAPES method provides more accurate peak location and amplitude estimation, with higher resolution, than the APES method.

September 2010

In MIMO systems, channel estimation is important for distinguishing the signals transmitted from multiple transmit antennas. When MIMO systems are introduced in cellular systems, we have to measure the received power from all connectable base stations (BSs), as well as distinguish the channel state information (CSI) for each combination of transmitter and receiver antenna elements. One of the most typical channel estimation schemes for MIMO in a cellular system is to employ a code division multiplexing (CDM) scheme in which a unique spreading code is assigned to distinguish both BS and MS antenna elements. However, as the number of transmit antenna elements increases, long spreading codes and many pilot symbols are required to estimate an accurate CSI. To mitigate this problem, in this paper we propose high time resolution carrier interferometry (HTRCI) for MIMO/OFDM to achieve an accurate CSI without increasing the number of pilot symbols.

September 2008

The main motivations for using an acoustic vector-sensor in direction-of-arrival (DOA) estimation applications have been its unambiguous two-dimensional directivity, its insensitivity to the range of sources, and its independence of signal frequency. The main objections lie in its lack of geometric redundancy and its limited degrees of freedom. Four challenging tasks thus emerge; this paper describes the corresponding solutions, which resort to the redundancies in the nonvanishing conjugate correlations of noncircular signals: (1) fulfilling source decorrelation in a multipath propagation environment; (2) enhancing processing capacity to accommodate more signals; (3) suppressing colored noise with unknown covariance structure; and (4) deriving closed-form approaches that avoid iteration and manifold storage. Simulation experiments are carried out to examine the associated DOA estimators, termed: (1) phase-smoothing MUSIC (multiple signal classification); (2) virtual-MUSIC; (3) conjugate-MUSIC; and (4) noncircular-ESPRIT (estimation of signal parameters via rotational invariance techniques), respectively.

May 2009

This paper proposes a new underwater acoustic 2-D direction finding algorithm using two identically oriented vector hydrophones at unknown locations in non-Gaussian impulsive noise. The two vector hydrophones are four-component, identically oriented in space with an arbitrary and possibly unknown displacement. Each vector hydrophone has three spatially co-located but orthogonally oriented velocity hydrophones plus a pressure hydrophone. The proposed algorithm employs the spatial invariance between the two vector hydrophones, but requires no a priori information on the vector hydrophones' spatial factors or the impinging sources' temporal forms. We apply ESPRIT to estimate the vector hydrophone manifold, then automatically pair the x-axis direction cosines with the y-axis direction cosines and yield azimuth and elevation angle estimates. We also consider the additive noise to be non-Gaussian impulsive, as often encountered in underwater acoustics applications. Two typical impulsive noise models, Gaussian-mixture noise and symmetric α-stable (SαS) noise, are adopted. Instead of using the conventional second-order correlation of the array output data, we define the vector hydrophone array sign covariance matrix (VSCM) for Gaussian-mixture noise and the vector hydrophone array fractional lower order moment (VFLOM) matrix for SαS noise with 1<α⩽2. These matrices may readily substitute for the customary vector hydrophone array covariance matrix in 2-D direction finding in impulsive noise.

May 2007

A non-destructive, real time device was developed to detect insect damage, sprout damage, and scab damage in kernels of wheat. Kernels are impacted onto a steel plate and the resulting acoustic signal is analyzed to detect damage. The acoustic signal was processed using four different methods: modeling of the signal in the time domain, computing time-domain signal variances and maximums in short-time windows, analysis of the frequency spectrum magnitudes, and analysis of a derivative spectrum. Features were used as inputs to a stepwise discriminant analysis routine, which selected a small subset of features for accurate classification using a neural network. For a network presented with only insect-damaged kernels (IDK) with exit holes and undamaged kernels, 87% of the former and 98% of the latter were correctly classified. It was also possible to distinguish undamaged, IDK, sprout-damaged, and scab-damaged kernels.

July 2010

In this study, new neural network models with adaptive activation functions (NNAAF) were implemented to classify ECG arrhythmias. Our NNAAF models comprised three types, named NNAAF-1, NNAAF-2 and NNAAF-3. Activation functions with adjustable free parameters were used in the hidden neurons of these models to improve on the classical MLP network. In addition, these three NNAAF models were compared with an MLP model implemented under similar conditions. Ten different types of ECG arrhythmia were selected from the MIT–BIH ECG Arrhythmia Database to train the NNAAF and MLP models. Moreover, all models were tested with the ECG signals of 92 patients (40 males and 52 females, mean age 39.75 ± 19.06 years). The average accuracy rate of all models in the training phase was found to be 99.92%. The average accuracy rate of all models in the test phase was 98.19%.

September 2007

Pharmacological FMRI in humans involves BOLD signal acquisition before, during and after the administration of a drug, and often results in a heterogeneous pattern of drug-induced hemodynamic responses in the brain. Exploratory techniques, including blind source separation, can be useful for BOLD data that contains patterns of cross-dependencies. Bayesian source separation (BSS) is a multivariate technique used to calculate the presence of unobserved signal sources in measured FMRI data, as well as the covariance between data voxels and between reference waveforms. Unlike conventional univariate regression analysis, BSS does not assume independence between voxel time series or source components. In this study, BOLD measurement of the acute effect of an intravenous dose of cocaine, a substance shown previously to engage multiple sites within the orbitofrontal cortex, was processed with BSS. The utility of BSS in pharmacological FMRI applications was demonstrated in multiple examples featuring single-ROI, multiple-ROI and whole-slice data. The flexibility of the BSS technique was shown by choosing different modeling strategies to form the prior reference functions, including approximating the pharmacokinetics of cocaine, interpolating simultaneously measured behavioral data and using observed BOLD responses from known subcortical afferents to the cortex of interest.

January 2005

In nonparametric local polynomial regression, the adaptive selection of the scale parameter (window size/bandwidth) is a key problem. Recently, new efficient algorithms based on Lepski's approach have been proposed in mathematical statistics for spatially adaptive varying-scale denoising. A common feature of these algorithms is that they form test-estimates differing by the scale h ∈ H, and special statistical rules are exploited to select the estimate with the best pointwise varying scale. In this paper a novel multiresolution (MR) local polynomial regression is proposed. Instead of selecting the estimate with the best scale h, a nonlinear estimate is built using all of the test-estimates. The adaptive estimation consists of two steps. The first step transforms the data into noisy spectrum coefficients (MR analysis). In the second step, this noisy spectrum is filtered by a thresholding procedure and used for estimation (MR synthesis).
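A minimal fixed-scale building block, one windowed local polynomial fit evaluated at a point, can be sketched as follows (the test function, noise level, and rectangular window are illustrative assumptions; the adaptive scale selection and MR thresholding of the paper are not reproduced):

```python
import numpy as np

def local_poly_estimate(x, y, x0, h, degree=1):
    # Least-squares polynomial fit inside a window of half-width h around x0;
    # the fitted intercept is the regression estimate at x0
    w = np.abs(x - x0) <= h
    if np.count_nonzero(w) <= degree:
        return np.nan          # not enough points in the window to fit
    coeffs = np.polyfit(x[w] - x0, y[w], degree)
    return coeffs[-1]

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(200)
est = local_poly_estimate(x, y, x0=0.25, h=0.05)   # true value is sin(pi/2) = 1
```

The MR scheme forms such estimates for every h in a scale set H and combines them all through the analysis/thresholding/synthesis steps, rather than picking a single best h per point.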