IET Signal Processing (IET SIGNAL PROCESS)

Publisher: Institution of Engineering and Technology


IET Signal Processing publishes novel contributions in signal processing, including:

  • advances in single and multi-dimensional filter design and implementation;
  • linear and nonlinear, fixed and adaptive digital filters and multirate filter banks;
  • statistical signal processing techniques and analysis;
  • classical, parametric and higher order spectral analysis;
  • signal transformation and compression techniques, including time-frequency analysis;
  • system modelling and adaptive identification techniques;
  • machine learning based approaches to signal processing;
  • Bayesian methods for signal processing, including Markov chain Monte Carlo and particle filtering techniques;
  • theory and application of blind and semi-blind signal separation techniques;
  • signal processing techniques for the analysis, enhancement, coding, synthesis and recognition of speech signals;
  • direction-finding and beamforming techniques for audio and electromagnetic signals;
  • analysis techniques for biomedical signals;
  • baseband signal processing techniques for transmission and reception of communication signals;
  • signal processing techniques for data hiding and audio watermarking.

  • Website
    IET Signal Processing website
  • Other titles
    Signal processing
  • Material type
    Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publications in this journal

  • ABSTRACT: The diagonal loading (DL) technique is the most widely used method for improving the robustness of the Capon beamformer in the presence of imprecise knowledge of the covariance matrix and the desired signal's steering vector. The selection of the DL level is challenging in practice and may depend on user-defined parameters that are hard to determine. A fully automatic and training-free method for DL level selection is presented here to extract the desired signal with constant modulus, a common feature of communication signals. Simulation results demonstrate the efficacy of the proposed method.
    IET Signal Processing 10/2014; 8(8):823-830.
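To make the diagonal loading step concrete, here is a minimal pure-Python sketch of a Capon beamformer with a fixed, hand-picked DL level for a two-element array. The automatic, constant-modulus-based selection of the loading level described in the abstract is not reproduced; the loading value, array geometry and signal model below are illustrative assumptions only.

```python
import cmath
import random

def solve2(A, b):
    # Cramer's rule for a 2x2 complex linear system A x = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

def capon_dl(snapshots, steer, loading):
    # Sample covariance R = (1/N) sum x x^H for a 2-element array
    N = len(snapshots)
    R = [[0j, 0j], [0j, 0j]]
    for x in snapshots:
        for i in range(2):
            for j in range(2):
                R[i][j] += x[i] * x[j].conjugate() / N
    # Diagonal loading: R_dl = R + loading * I
    for i in range(2):
        R[i][i] += loading
    # Capon weights: w = R_dl^{-1} a / (a^H R_dl^{-1} a)
    Ra = solve2(R, steer)
    denom = sum(steer[i].conjugate() * Ra[i] for i in range(2))
    return [Ra[i] / denom for i in range(2)]

random.seed(0)
theta = 0.3  # electrical angle of the desired signal (made-up value)
a = [1 + 0j, cmath.exp(1j * theta)]
snaps = []
for _ in range(200):
    s = random.gauss(0, 1)  # desired signal amplitude
    n = [complex(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(2)]
    snaps.append([s * a[k] + n[k] for k in range(2)])
w = capon_dl(snaps, a, loading=0.01)
resp = sum(w[i].conjugate() * a[i] for i in range(2))
print(abs(resp))  # ≈ 1: the distortionless constraint toward the steering vector holds
```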
  • ABSTRACT: The authors apply the maximum empirical likelihood method to the problem of estimating the time delay of a measured digital signal when the signal can be seen as an instance of a stationary random process with additive independently and identically distributed (i.i.d.) noise. It is shown that, under these assumptions, an approximate log-likelihood function can be estimated from the measured data itself, and therefore a maximum likelihood estimate can be obtained without prior knowledge of the signal's likelihood function. The Cramér-Rao lower bounds (CRLB) for two additive noise models (mixed-Gaussian and generalised normal distribution) are derived. The authors also show that the error produced by the maximum log-likelihood estimates (when the likelihood function is estimated from the measured data) approximates the CRLB better than other estimators for noise models other than Gaussian or Laplacian (a special case of the generalised normal).
    IET Signal Processing 09/2014; 8(7):720-728.
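As background for the time-delay estimation problem, the sketch below shows a plain cross-correlation delay estimator, a common baseline rather than the authors' empirical-likelihood method; the signal, noise level and delay are made-up test values.

```python
import math
import random

def xcorr_delay(ref, meas, max_lag):
    # Estimate an integer delay by maximising the cross-correlation
    # between a reference signal and a delayed, noisy measurement.
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = 0.0
        for n in range(len(ref)):
            m = n + lag
            if 0 <= m < len(meas):
                val += ref[n] * meas[m]
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

random.seed(1)
sig = [math.sin(0.2 * n) for n in range(200)]
true_delay = 7
noisy = [0.0] * true_delay + [s + random.gauss(0, 0.05) for s in sig]
est = xcorr_delay(sig, noisy, 20)
print(est)  # → 7
```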
  • ABSTRACT: The problem of speech enhancement is considered in an interference environment typical of applications such as hands-free voice communication and multi-party conferencing. In the proposed system, a directional microphone placed at each interference is used to estimate the power spectral density (PSD) of that interference, and the quantised PSD estimate is transmitted over a wireless link to an omnidirectional primary microphone that observes the corrupted source speech signal. At the primary microphone, the received observations are fused to obtain an estimate of the PSD of the desired speech signal. The problem of minimising transmitted power is considered subject to constraints on total transmission rate and on keeping the mean-squared error in the estimated speech signal PSD below a prescribed limit, under narrowband and broadband signal models. The optimisation problem is solved by determining the optimum rate for encoding the signal PSDs. The proposed strategy is analysed using sample speech and music signals.
    IET Signal Processing 09/2014; 8(7):792-799.
  • ABSTRACT: Linear prediction is a mathematical operation that estimates future values of a discrete-time signal from a linear function of previous samples. When applied to predictive coding of waveforms such as speech and audio, a common issue that plagues compression performance is the non-stationary character of the prediction residuals around the starting point of random access frames. This is because dependencies between the prediction residuals and the historical waveform are interrupted to satisfy the random access requirement. In such cases, the dynamic range of the prediction residuals fluctuates dramatically in these frames, substantially degrading the performance of the subsequent entropy coder. In this study, the authors develop a solution to this long-standing issue by establishing a theoretical relationship between the energy envelope of the linear prediction residuals in the random access frames and the prediction coefficients. Using this relationship, an adaptive normalisation method is formulated as a preprocessor to the entropy coder to mitigate the poor coding performance in the random access frames. Simulation results confirm the superiority of the proposed method over existing solutions in terms of coding efficiency.
    IET Signal Processing 09/2014; 8(7):710-719.
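The notion of linear prediction residuals used above can be illustrated with a minimal sketch: a 2-tap predictor fitted by least squares, whose residual energy is far below the signal energy for a predictable (sinusoidal) input. This is a generic illustration, not the authors' normalisation method.

```python
import math

def lp2_residual(x):
    # Fit a 2-tap linear predictor x[n] ≈ a1*x[n-1] + a2*x[n-2]
    # by solving the 2x2 normal equations, then return the residuals.
    r = [[0.0, 0.0], [0.0, 0.0]]
    p = [0.0, 0.0]
    for n in range(2, len(x)):
        past = [x[n - 1], x[n - 2]]
        for i in range(2):
            p[i] += past[i] * x[n]
            for j in range(2):
                r[i][j] += past[i] * past[j]
    det = r[0][0] * r[1][1] - r[0][1] * r[1][0]
    a1 = (p[0] * r[1][1] - r[0][1] * p[1]) / det
    a2 = (r[0][0] * p[1] - p[0] * r[1][0]) / det
    return [x[n] - a1 * x[n - 1] - a2 * x[n - 2] for n in range(2, len(x))]

x = [math.sin(0.3 * n) for n in range(100)]
res = lp2_residual(x)
energy_in = sum(v * v for v in x[2:])
energy_out = sum(v * v for v in res)
print(energy_out / energy_in)  # far below 1: the predictor removes almost all energy
```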
  • ABSTRACT: This study provides a performance analysis for the quantised innovation Kalman filter (QIKF). The covariance matrix of the estimation error is analysed, accounting for the correlation between the measurement error and the quantisation error. By treating the quantisation errors as a random perturbation in the observation system, an equivalent state-observation system is obtained. Accordingly, the QIKF for the original system is equivalent to a Kalman-like filter for the equivalent state-observation system. The boundedness of the error covariance matrix of the QIKF is established under some weak conditions, and a sufficient condition for the stability of the QIKF is obtained in the general vector case. The relationship between standard Kalman filtering and the QIKF for the original system is then discussed. Based on the stability analysis, the choice of the number of quantisation levels is discussed, and the relationship between filtering performance and the number of quantisation levels is given. Finally, the validity of these results is demonstrated by numerical simulations.
    IET Signal Processing 09/2014; 8(7):759-773.
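A scalar sketch of the quantised-innovation idea: a Kalman-like filter that sees only a uniformly quantised innovation, with the quantisation treated as extra observation noise of variance step²/12. This is an illustrative simplification with made-up model parameters, not the authors' exact QIKF formulation.

```python
import random

def quantise(v, step):
    # uniform mid-tread quantiser applied to the innovation
    return step * round(v / step)

def qikf(measurements, a, c, q, r, step):
    # Scalar Kalman-like filter driven by a quantised innovation.
    xhat, P = 0.0, 1.0
    for y in measurements:
        # predict
        xhat, P = a * xhat, a * a * P + q
        # update with the quantised innovation
        innov = quantise(y - c * xhat, step)
        S = c * c * P + r + step * step / 12.0  # add quantisation noise variance
        K = P * c / S
        xhat += K * innov
        P = (1 - K * c) * P
    return xhat

random.seed(2)
a, c, q, r, step = 0.95, 1.0, 0.01, 0.04, 0.1
x, ys = 1.0, []
for _ in range(300):
    x = a * x + random.gauss(0, q ** 0.5)      # true state evolution
    ys.append(c * x + random.gauss(0, r ** 0.5))  # noisy measurement
est = qikf(ys, a, c, q, r, step)
print(abs(est - x))  # small tracking error despite the quantised innovation
```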
  • ABSTRACT: The recently introduced theory of compressed sensing (CS) enables the recovery of sparse or compressible signals from a small set of non-adaptive measurements, and it holds promise for substantially improved performance by leveraging signal structures that go beyond simple sparsity. In this study, the authors study the weighted l1 minimisation problem for CS reconstruction when partial support information is available. First, they focus on coherence-based performance guarantees and show that if an estimated support can be obtained with accuracy and relative size satisfying certain coherence-related conditions, the weighted l1 minimisation is stable and robust under weaker sufficient conditions than those of the analogous standard l1 optimisation, and better upper bounds on the reconstruction error can be achieved. In addition, a novel adaptive alternating direction method of multipliers with iterative support detection is outlined to solve the weighted l1 minimisation problem. Simulation results show that the authors' method achieves good convergence and improved reconstruction performance in comparison with conventional methods.
    IET Signal Processing 09/2014; 8(7):749-758.
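The weighted l1 minimisation can be sketched with a simple proximal-gradient (ISTA) loop in which entries believed to lie in the estimated support receive a smaller weight. The problem sizes, weights and step size below are illustrative assumptions; the authors' adaptive ADMM with iterative support detection is not reproduced.

```python
import random

def soft(v, t):
    # scalar soft-thresholding operator
    return v - t if v > t else v + t if v < -t else 0.0

def weighted_ista(A, y, w, lam, tau, iters):
    # Proximal gradient for min 0.5*||Ax - y||^2 + lam * sum_j w[j]*|x_j|;
    # entries believed to be in the support get a smaller weight w[j].
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - tau * g[j], tau * lam * w[j]) for j in range(n)]
    return x

random.seed(3)
m, n = 8, 12
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[2], x_true[7] = 1.5, -1.0                     # sparse ground truth
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
w = [0.1 if j in (2, 7) else 1.0 for j in range(n)]  # partial support information
x_hat = weighted_ista(A, y, w, lam=0.05, tau=0.05, iters=1000)
err = sum((x_hat[j] - x_true[j]) ** 2 for j in range(n)) ** 0.5
print(err)  # small reconstruction error
```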
  • ABSTRACT: The electrocardiogram (ECG) signal is one of the most important tools in clinical practice for assessing the cardiac status of patients. In this study, an improved QRS (Q wave, R wave, S wave) complex detection algorithm is proposed based on multiresolution wavelet analysis. In the first step, high-frequency noise and baseline wander are distinguished from the ECG data based on their specific frequency contents, so removing the corresponding detail coefficients enhances the performance of the detection algorithm. The authors' method then uses the power spectrum of the decomposition signals to select the detail coefficients corresponding to the frequency band of the QRS complex. The authors propose a function g as a combination of the selected detail coefficients using two parameters λ1 and λ2, which correspond to the proportion of the frequency ranges of the selected details relative to the frequency range of the QRS complex. The proposed algorithm is evaluated on the whole arrhythmia database and shows considerable capability in cases of low signal-to-noise ratio, high baseline wander and abnormal morphologies. The evaluation results show good detection performance: a global sensitivity of 99.87%, a positive predictivity of 99.79% and a percentage error of 0.34%.
    IET Signal Processing 09/2014; 8(7):774-782.
  • ABSTRACT: A general form of compressive sensing (CS)-based higher-order time-frequency distributions (TFDs) is proposed. Non-linear time-varying spectrum analysis requires higher-order TFDs, but these cannot produce efficient results in the presence of strong noisy pulses; consequently, the time-frequency analysis needs to be combined with the L-statistics. When applied to the higher-order local autocorrelation function, the L-statistics removes all possibly corrupted samples, leaving only a small number of samples for distribution calculation. In the proposed approach, the discarded information can be completely recovered using CS reconstruction. Owing to the use of the higher-order local autocorrelation function, the signal becomes locally sparse in the transform domain. The idea, then, is to cast all noisy samples as missing ones, reconstruct the entire information and produce a highly concentrated representation in the transform domain. The proposed CS-based distribution form provides significantly improved performance compared to the existing standard and L-estimate forms, as demonstrated by various experiments.
    IET Signal Processing 09/2014; 8(7):738-748.
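The core L-statistics step, discarding the samples most likely corrupted by strong pulses before averaging, can be sketched in a few lines (a generic trimmed estimator, not the full higher-order TFD computation):

```python
def l_trimmed_mean(samples, discard):
    # L-statistics idea: order the samples, discard the `discard`
    # largest-magnitude ones (most likely impulsive-noise corrupted),
    # and average the remainder.
    kept = sorted(samples, key=abs)[:len(samples) - discard]
    return sum(kept) / len(kept)

clean = [1.0] * 10
corrupted = clean[:8] + [50.0, -40.0]   # two strong noisy pulses
print(l_trimmed_mean(corrupted, 2))     # → 1.0, the pulses are rejected
```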
  • ABSTRACT: In this study, the authors propose a blind spot reduction method for wavelet transform-based time-frequency domain reflectometry (WTFDR) that uses a Gaussian chirp as the mother wavelet. The blind spot is one of the intrinsic weak points of reflectometry: it is the range over which the reference and reflected signals overlap when a fault occurs at close distance, making close-range faults difficult to localise. Many researchers therefore study the blind spot in various cables, such as aircraft electrical cables, network cables and power cables. In this study, two methods are used to reduce the blind spot. First, using the linearity of the complex wavelet transform, the reference signal overlapping the measured signal is separated, and the blind spot is reduced by taking the difference of the moduli of the wavelet coefficients of the reference and reflected signals. Second, using a Gaussian chirp mother wavelet designed around the characteristics of the cable, the wavelet analysis and the resolution of the WTFDR are improved. Finally, computer simulations and real experiments confirm the effectiveness and accuracy of the proposed method.
    IET Signal Processing 09/2014; 8(7):703-709.
  • ABSTRACT: The amplitude-and-phase error (APE) between phased array channels is notorious in radar signal processing. This error can cause an inaccurate estimate of the unknown steering vector of the target echo signal and eventually result in amplitude-phase discontinuity of the phased array output. Handling the APE is therefore a meaningful problem, particularly in varying jamming environments where the signal-to-noise ratio is very low. In this study, the authors develop a new method to obtain real-time amplitude and phase differences between two consecutive weight update periods based on accurate estimation of the steering vector. These differences can be used to obtain a real-time weight vector with negligible amplitude-phase distortion, hence improving phased array signal-processing performance. The proposed method is very flexible: it works well in different array configurations, such as linear, rectangular and Y-shaped arrays, and can be efficiently implemented in any eigenstructure-based direction-of-arrival system.
    IET Signal Processing 09/2014; 8(7):729-737.
  • ABSTRACT: The robust compressed sensing problem subject to a bounded and structured perturbation in the sensing matrix is solved in two steps. The alternating direction method of multipliers (ADMM) is first applied to obtain a robust support set. Unlike existing robust signal recovery solutions, the proposed optimisation problem is convex, and it is solved with an ADMM algorithm in which every subproblem has a global minimum. Then, the standard robust regularised least-squares problem restricted to the support is solved to reduce the recovery error. Numerical tests show that the proposed approach provides a robust estimate of the support set, although it is conservative in recovering signal magnitudes as a result of minimising the worst-case data error across all bounded perturbations.
    IET Signal Processing 09/2014; 8(7):783-791.
  • ABSTRACT: In many wireless communication systems, data can be divided into different importance levels. For these systems, unequal error protection (UEP) techniques are used to ensure a lower bit error rate for the more important classes. Moreover, if the precise characteristics of the channel are known, UEP can be used to correctly recover the more important classes even under severe receiving conditions. In this study, a UEP scheme based on compressed sensing via a linear program is proposed. The discrete wavelet transform (DWT) is chosen as the sparsifying basis, and the DWT-coded information is divided into two layered coded streams, each transmitted differently by applying an unequal number of information bits in linear codes according to the time-varying characteristics of the corrupted channel. In the proposed transmission scheme, the more important information is guaranteed error-free transmission. At the decoder, the signal is simply reconstructed via the l1-minimisation algorithm. Simulation results show that the proposed scheme achieves a higher peak signal-to-noise ratio (PSNR) and clearly improved error resilience compared to the equal error protection scheme and other UEP methods. More importantly, as the channel corruption ratio increases, the PSNR drops much more slowly than with other solutions, indicating that the proposed method is more robust to severe channel conditions.
    IET Signal Processing 09/2014; 8(7):800-808.
  • ABSTRACT: This paper describes a new multilevel decomposition method for the separation of convolutive image mixtures. The proposed method uses an Adaptive Quincunx Lifting Scheme (AQLS) based on wavelet decomposition to preprocess the input data, followed by a Non-Negative Matrix Factorization whose role is to unmix the decomposed images. The unmixed images are thereafter reconstructed using the inverse AQLS transform. Experiments carried out on images of various origins showed that the proposed method yields better results than many widely used blind source separation algorithms.
    IET Signal Processing 07/2014;
  • ABSTRACT: H.264/AVC is the most widely used recent video coding standard. It provides high encoding efficiency but also has high computational complexity, and the block mode decision for motion estimation is its most time-consuming procedure. A complexity reduction method for the block mode decision procedure is proposed. To reduce complexity, all block modes are divided into several candidate block mode groups. The sum of absolute differences (SAD) value, including the motion cost of each mode, is used as a classification feature to divide the block modes into groups. A refinement method using a Bayesian model based on the average SAD value is also proposed. For B-slices, a differential block mode allocation method is suggested: different numbers of candidate modes are allocated to the lists (list 0, list 1) based on the SAD value of each list after 16 × 16 block motion estimation. The proposed method achieves an average saving in total encoding time of 65% for the IPPP structure and 66.01% for the hierarchical-B structure.
    IET Signal Processing 07/2014; 8(5):530-539.
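The SAD classification feature used above is straightforward to sketch; the 2 × 2 blocks below are toy values:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equal-sized pixel blocks,
    # the basic matching cost used in block-mode motion estimation.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

cur = [[10, 12], [14, 16]]
ref = [[11, 12], [13, 18]]
print(sad(cur, ref))  # |10-11| + |12-12| + |14-13| + |16-18| = 4
```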
  • ABSTRACT: The success of coherent optical code-division multiple-access (OCDMA) systems depends strongly on the optical encoder/decoder technology and on the selection of the correct OCDMA codes/sequences. For this reason, in this study, the authors present a method to implement perfect sequences with Super-Structured Fibre Bragg Gratings (SSFBGs). A new SSFBG power reflection model is presented, and a property is derived that explains why SSFBGs should use codes derived from m-sequences. Usually, OCDMA researchers try many different codes in SSFBGs in order to select the SSFBG encoders that yield the lowest error probability. In this work, the authors show that an SSFBG can be considered a perfect sequence encoder; for this reason, the codes written into the SSFBGs should be selected based on the new property, which permits quick design and selection of the correct codes with low power contrast ratios. In addition, a new error probability upper bound, which is a function of the code family and of its power contrast ratio, is presented. With this bound, it is not necessary to use an optical simulator to estimate the maximum bit error rate of an OCDMA system if some power contrast ratios of the selected SSFBG code set are known.
    IET Signal Processing 06/2014; 8(4):421-428.
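The m-sequences singled out by the derived property are generated by a linear feedback shift register (LFSR) with a primitive feedback polynomial. A minimal sketch, where the degree-3 tap positions below are one illustrative choice that yields a maximal-length sequence:

```python
def m_sequence(taps, state, length):
    # Binary m-sequence from a Fibonacci-style linear feedback shift
    # register: output the last register bit, feed back the XOR of the
    # tapped positions. A primitive feedback polynomial of degree n
    # gives the maximal period 2^n - 1.
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Degree-3 register: period 2^3 - 1 = 7, with 4 ones per period
seq = m_sequence(taps=[0, 2], state=[1, 0, 0], length=14)
print(seq[:7] == seq[7:])  # → True (the sequence repeats with period 7)
```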
  • ABSTRACT: This study addresses the problem of spectrum trading in a cognitive radio network with multiple primary users (PUs) competing to sell spectrum to a secondary user (SU). The spectrum trading process is modelled using a 'Cournot game model' of competition in which the PUs set the size of the spectrum to sell. In this study, the spectrum requirements for the PUs' services are not fixed but time-varying, and the spectrum trading process is carried out before the realisation of these values. If the spectrum retained by a PU after selling is less than the spectrum requirement for the PU's service, a cost must be charged to the PU. The Nash equilibrium (NE) for a static game, in which the PUs have complete knowledge of the utility functions of the other PUs, is studied first. A dynamic game, in which the players adaptively change their strategies to reach the NE, is discussed subsequently. Finally, the trading problem is extended to a scenario involving multiple SUs.
    IET Signal Processing 06/2014; 8(4):410-420.
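The convergence of the dynamic game to the NE can be sketched with textbook Cournot best-response dynamics for two symmetric sellers under a linear price function; the demand and cost parameters below are illustrative, not from the paper:

```python
def best_response(q_other, A, B, c):
    # Profit q*(A - B*(q + q_other)) - c*q is maximised at
    # q = (A - c - B*q_other) / (2B), clipped at zero.
    return max(0.0, (A - c - B * q_other) / (2 * B))

A, B, c = 10.0, 1.0, 1.0
q1 = q2 = 0.0
for _ in range(50):  # adaptive play: each seller best-responds in turn
    q1 = best_response(q2, A, B, c)
    q2 = best_response(q1, A, B, c)
print(round(q1, 4), round(q2, 4))  # both converge to the NE (A-c)/(3B) = 3.0
```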
  • ABSTRACT: In previous decades, it has been observed that many physical systems are well characterised by fractional order models, so their identification is attracting increasing interest from the scientific community. However, such systems pose a more difficult identification problem, requiring not only the estimation of model coefficients but also the determination of fractional orders, with the tedious calculation of fractional order derivatives. This study focuses on a time-domain identification scheme for dynamic systems described by linear fractional order differential equations. The proposed identification method is based on the recursive least squares algorithm applied to an ARX structure derived from the linear fractional order differential equation using adjustable fractional order differentiators. The basic ideas and the derived formulations of the identification scheme are presented, and illustrative examples validate the proposed linear fractional order system identification approach.
    IET Signal Processing 06/2014; 8(4):398-409.
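The recursive least squares core of such an identification scheme can be sketched for an ordinary (integer-order) two-parameter ARX regression; the fractional order differentiators of the authors' method are omitted, and the system parameters below are made-up test values:

```python
import random

def rls(phi_seq, y_seq, lam=1.0):
    # Recursive least squares for a 2-parameter regression y ≈ phi · theta,
    # with forgetting factor lam and a large initial covariance P.
    theta = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]
    for phi, y in zip(phi_seq, y_seq):
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
        err = y - (phi[0] * theta[0] + phi[1] * theta[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta

random.seed(4)
a_true, b_true = 0.7, 0.5   # ARX model: y[n+1] = a*y[n] + b*u[n] + noise
y, u, phis, ys = 0.0, 0.0, [], []
for _ in range(200):
    u_new = random.gauss(0, 1)
    y_new = a_true * y + b_true * u + random.gauss(0, 0.01)
    phis.append([y, u])
    ys.append(y_new)
    y, u = y_new, u_new
theta = rls(phis, ys)
print(theta)  # close to [0.7, 0.5]
```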
  • ABSTRACT: Unlike synchronous processing, asynchronous processing is more efficient in biomedical and sensing-network applications: it is free from aliasing constraints and amplitude quantization error, allows continuous-time processing and, more importantly, acquires data only in significant parts of the signal. We consider signal decomposers based on the asynchronous sigma delta modulator (ASDM), a non-linear feedback system that maps the signal amplitude into the zero-crossings of a binary output signal. The input, the zero-crossings and the ASDM parameters are related by an integral equation, making signal reconstruction difficult to implement. Modifying the model of the ASDM, we obtain a recursive equation that yields the non-uniform samples from the zero-crossing times. By latticing the joint time-frequency space into defined frequency bands, and time windows that depend on the scale parameter, different decompositions are possible. We present two cascade low- and high-frequency decomposers, and a bank-of-filters parallel decomposer. This last decomposer, using the modified ASDM, behaves like an asynchronous analog-to-digital converter, and an interpolator based on Prolate Spheroidal Wave functions allows reconstruction of the original signal. The asynchronous approaches proposed here are well suited to processing signals that are sparse in time and to low-power applications.
    IET Signal Processing 05/2014;
  • ABSTRACT: An analysis of robust estimation theory in the light of sparse signal reconstruction is considered. This approach is motivated by the compressive sensing (CS) concept, which aims to recover a complete signal from a small, randomly chosen set of its samples. In order to recover the missing samples, the authors define a new reconstruction algorithm. It is based on the property that the sum of generalised deviations of estimation errors, obtained from robust transform formulations, behaves differently at signal and non-signal frequencies. Additionally, the algorithm establishes a connection between robust estimation theory and CS. The effectiveness of the proposed approach is demonstrated through examples.
    IET Signal Processing 05/2014; 8(3):223-229.