M.G. Jafari

Gheorghe Asachi Technical University of Iasi, Socola, Iaşi, Romania

Publications (45) · 36.87 Total Impact Points

  • ABSTRACT: We propose a denoising and segmentation technique for the second heart sound (S2). For denoising, Matching Pursuit (MP) is applied using a set of non-linear chirp signals as atoms. We show that the proposed method can segment the phonocardiogram of the second heart sound into its two clinically meaningful components: the aortic (A2) and pulmonary (P2) components.
    Conference proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference 08/2012; 2012:3440-3.
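The denoising step described in the abstract above can be sketched with a generic matching pursuit loop over a chirp dictionary. This is a minimal NumPy illustration, not the paper's implementation: the atoms here are linear chirps with made-up frequency sweeps, standing in for the non-linear chirp atoms the authors use.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy MP: repeatedly pick the dictionary atom (column of D,
    assumed unit-norm) most correlated with the current residual."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual            # correlation with every atom
        k = np.argmax(np.abs(corr))      # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]    # subtract its contribution
    return coeffs, residual

# Toy dictionary of linear-chirp atoms: instantaneous frequency
# sweeps from f0 to f1 over the frame (illustrative values only).
n = 256
t = np.arange(n) / n
atoms = []
for f0 in (5, 15, 30):
    for f1 in (40, 60, 80):
        a = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2))
        atoms.append(a / np.linalg.norm(a))
D = np.column_stack(atoms)

# A noisy signal built from one chirp atom; MP should find it.
x = 2.0 * D[:, 4] + 0.05 * np.random.default_rng(0).standard_normal(n)
coeffs, residual = matching_pursuit(x, D, n_iter=5)
```

After a few iterations the coefficient vector is dominated by the atom that generated the signal, and the residual energy has dropped, which is the behaviour the denoising step relies on.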
  • ABSTRACT: We propose the audio inpainting framework that recovers portions of audio data distorted by impairments such as impulsive noise, clipping, and packet loss. In this framework, the distorted data are treated as missing and their location is assumed to be known. The signal is decomposed into overlapping time-domain frames, and the restoration problem is then formulated as an inverse problem per audio frame. Sparse representation modeling is employed per frame, and each inverse problem is solved using the Orthogonal Matching Pursuit algorithm together with a discrete cosine or a Gabor dictionary. The Signal-to-Noise Ratio performance of this algorithm is shown to be comparable to or better than state-of-the-art methods when blocks of samples of variable durations are missing. We also demonstrate that the size of the block of missing samples, rather than the overall number of missing samples, is a crucial parameter for high-quality signal restoration. We further introduce a constrained Matching Pursuit approach for the special case of audio declipping that exploits the sign pattern of clipped audio samples and their maximal absolute value, while also allowing the user to specify the maximum amplitude of the signal. This approach is shown to outperform state-of-the-art and commercially available methods for audio declipping in terms of Signal-to-Noise Ratio.
    IEEE Transactions on Audio Speech and Language Processing 04/2012; · 1.68 Impact Factor
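The per-frame inverse problem described above can be illustrated with a small NumPy sketch: a frame that is sparse in a DCT dictionary loses a block of samples, and Orthogonal Matching Pursuit run on the observed rows alone recovers the coefficients and fills the gap. Frame length, sparsity level, and gap position are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A,
    re-fitting all selected coefficients by least squares each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# One audio frame, sparse in a DCT dictionary; a block of samples is
# "missing" (e.g. a dropped packet) and its location is known.
n = 128
D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)  # DCT-II atoms
D /= np.linalg.norm(D, axis=0)

truth = 1.5 * D[:, 7] + 0.8 * D[:, 20]          # 2-sparse frame
mask = np.ones(n, dtype=bool)
mask[50:70] = False                              # missing block

coef = omp(D[mask], truth[mask], k=2)            # solve on observed rows only
restored = D @ coef                              # synthesise the full frame
err = np.linalg.norm(restored[~mask] - truth[~mask])
```

Because the frame is exactly sparse in the dictionary, the gap is recovered essentially perfectly; with real audio the recovery is approximate and, as the abstract notes, degrades as the missing block grows.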
  • Maria Jafari, Mark D. Plumbley
    Audio Engineering Society Convention 128. 01/2012;
  • ABSTRACT: We present a robust method for the detection of the first and second heart sounds (S1 and S2), without an ECG reference, based on a music beat-tracking algorithm. An intermediate representation of the input signal is first calculated using an onset detection function based on the complex spectral difference. A music beat-tracking algorithm is then used to determine the location of the first heart sound. The beat tracker works in two steps: it first calculates the beat period and then finds the temporal beat alignment. Once the first sound is detected, inverse Gaussian weights are applied to the onset function at the detected positions and the algorithm is run again to find the second heart sound. In the last step, S1 and S2 labels are attributed to the detected sounds. The algorithm was evaluated in terms of location accuracy as well as sensitivity and specificity, and performed well even in the presence of murmurs or noisy signals.
    Electronics and Telecommunications (ISETC), 2012 10th International Symposium on; 01/2012
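The complex spectral difference onset detection function mentioned above can be sketched as follows. Window size, hop, and the synthetic "heart sound" test signal are illustrative assumptions; the paper's beat-tracking stage is not reproduced here.

```python
import numpy as np

def complex_spectral_difference(x, win=256, hop=128):
    """Onset detection function: per frame, the summed magnitude of the
    difference between the observed spectrum and a prediction assuming
    constant magnitude and linearly evolving phase."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    X = np.array([np.fft.rfft(f) for f in frames])
    odf = np.zeros(len(X))
    for n in range(2, len(X)):
        # predicted phase: previous phase advanced by the last phase step
        phi = 2 * np.angle(X[n - 1]) - np.angle(X[n - 2])
        predicted = np.abs(X[n - 1]) * np.exp(1j * phi)
        odf[n] = np.sum(np.abs(X[n] - predicted))
    return odf

# Synthetic stand-in for a phonocardiogram: quiet background with two
# short decaying tone bursts playing the role of heart sounds.
fs = 2000
sig = 0.001 * np.random.default_rng(0).standard_normal(fs * 2)
for onset in (int(0.3 * fs), int(1.1 * fs)):
    t = np.arange(int(0.05 * fs))
    sig[onset:onset + len(t)] += np.sin(2 * np.pi * 60 * t / fs) * np.exp(-t / 30)

odf = complex_spectral_difference(sig)
# frames with large odf values mark candidate heart-sound onsets
```

The detection function spikes at the frames containing each burst; the beat tracker then operates on this intermediate representation rather than on the raw waveform.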
  • N. Cleju, M.G. Jafari, M.D. Plumbley
    ABSTRACT: Analysis based reconstruction has recently been introduced as an alternative to the well-known synthesis sparsity model used in a variety of signal processing areas. In this paper we convert the analysis exact-sparse reconstruction problem to an equivalent synthesis recovery problem with a set of additional constraints. We are therefore able to use existing synthesis-based algorithms for analysis-based exact-sparse recovery. We call this the Analysis-By-Synthesis (ABS) approach. We evaluate our proposed approach by comparing it against the recent Greedy Analysis Pursuit (GAP) analysis-based recovery algorithm. The results show that our approach is a viable option for analysis-based reconstruction, while at the same time allowing many algorithms that have been developed for synthesis reconstruction to be directly applied for analysis reconstruction as well.
    Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on; 01/2012
  • N. Cleju, M.G. Jafari, M.D. Plumbley
    ABSTRACT: The analysis sparsity model is a recently introduced alternative to the standard synthesis sparsity model frequently used in signal processing. However, the exact conditions when analysis-based recovery is better than synthesis recovery are still not known. This paper constitutes an initial investigation into determining when one model is better than the other, under similar conditions. We perform separate analysis and synthesis recovery on a large number of randomly generated signals that are simultaneously sparse in both models and we compare the average reconstruction errors with both recovery methods. The results show that analysis-based recovery is the better option for a large number of signals, but it is less robust with signals that are only approximately sparse or when fewer measurements are available.
    Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European; 01/2012
  • M.G. Jafari, M.D. Plumbley
    ABSTRACT: For dictionary-based decompositions of certain types, it has been observed that there might be a link between sparsity in the dictionary and sparsity in the decomposition. Sparsity in the dictionary has also been associated with the derivation of fast and efficient dictionary learning algorithms. Therefore, in this paper we present a greedy adaptive dictionary learning algorithm that sets out to find sparse atoms for speech signals. The algorithm learns the dictionary atoms on data frames taken from a speech signal. It iteratively extracts the data frame with minimum sparsity index, and adds this to the dictionary matrix. The contribution of this atom to the data frames is then removed, and the process is repeated. The algorithm is found to yield a sparse signal decomposition, supporting the hypothesis of a link between sparsity in the decomposition and dictionary. The algorithm is applied to the problem of speech representation and speech denoising, and its performance is compared to other existing methods. The method is shown to find dictionary atoms that are sparser than their time-domain waveform, and also to result in a sparser speech representation. In the presence of noise, the algorithm is found to have similar performance to the well established principal component analysis.
    IEEE Journal of Selected Topics in Signal Processing 10/2011; · 3.30 Impact Factor
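The greedy loop described above can be sketched in a few lines of NumPy. This is a simplified reading of the algorithm, taking the sparsity index of a frame as its l1/l2 norm ratio, and using toy data rather than speech frames.

```python
import numpy as np

def gad(X, n_atoms):
    """Greedy adaptive dictionary sketch: pick the residual frame with
    the smallest sparsity index ||f||_1 / ||f||_2, normalise it into an
    atom, remove its contribution from every frame, and repeat."""
    R = X.astype(float).copy()            # columns are signal frames
    atoms = []
    for _ in range(n_atoms):
        l1 = np.abs(R).sum(axis=0)
        l2 = np.linalg.norm(R, axis=0)
        ratio = np.full(R.shape[1], np.inf)
        np.divide(l1, l2, out=ratio, where=l2 > 1e-12)
        k = int(np.argmin(ratio))         # sparsest remaining frame
        a = R[:, k] / np.linalg.norm(R[:, k])
        atoms.append(a)
        R -= np.outer(a, a @ R)           # deflate the chosen atom
    return np.column_stack(atoms)

# Toy data: small noise frames plus one frame dominated by a single
# large sample, which the sparsity index should select first.
rng = np.random.default_rng(0)
X = 0.1 * rng.standard_normal((64, 200))
X[5, 10] = 4.0
D = gad(X, n_atoms=8)
```

The first atom learned is essentially the spike frame, illustrating how minimising the sparsity index drives the dictionary toward sparse atoms.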
  • ABSTRACT: We consider the separation of sources when only one movable sensor is available to record a set of mixtures at distinct locations. A single mixture signal is acquired and first segmented. Then, based on the assumption that the underlying sources are temporally periodic, we align the resulting signals and form a measurement vector on which source separation can be performed. We demonstrate that this approach can successfully recover the original sources, both on simulated data and for a real problem of heart sound separation.
    2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA); 09/2011
  • ABSTRACT: In this paper, we consider the problem of separating a set of independent components when only one movable sensor is available to record the mixtures. We propose to exploit the quasi-periodicity of the heart signals to transform the signal from this single moving sensor into a set of measurements, as if from a virtual array of sensors. We then use ICA to perform source separation. We show that this technique can be applied to heart sounds and to electrocardiograms.
    Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic; 01/2011
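The "virtual array" idea above can be sketched as follows: consecutive period-long segments of the single-channel recording are stacked as rows, each acting as one sensor of a virtual array, and a standard ICA routine would then be applied to the stacked matrix. The two periodic sources, the period, and the drifting mixing weights below are synthetic assumptions for illustration.

```python
import numpy as np

def virtual_array(x, period, n_segments):
    """Slice a quasi-periodic single-channel recording into consecutive
    period-long segments and stack them as rows: each segment acts like
    one sensor of a virtual array observing the same periodic sources."""
    segs = [x[i * period:(i + 1) * period] for i in range(n_segments)]
    return np.vstack(segs)

# Two periodic "sources" observed by one moving sensor; the mixing
# weights drift between cycles because the sensor moves between beats.
period, n_cycles = 200, 6
t = np.arange(period)
s1 = np.sin(2 * np.pi * 3 * t / period)
s2 = np.sign(np.sin(2 * np.pi * 7 * t / period))
x = np.concatenate([(0.2 + 0.1 * c) * s1 + (1.0 - 0.1 * c) * s2
                    for c in range(n_cycles)])

X = virtual_array(x, period, n_cycles)   # (virtual sensors, samples)
# X can now be passed to any standard ICA routine to separate s1 and
# s2, exactly as a matrix of real multichannel recordings would be.
```

Because each row is a different linear mixture of the same two sources, the stacked matrix has rank two, which is what makes separation by ICA possible.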
  • ABSTRACT: We present a novel sparse-representation-based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem per audio frame. This problem is then solved by a constrained matching pursuit algorithm that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation on a collection of music and speech signals demonstrates superior results compared to existing algorithms over a wide range of clipping levels.
    Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on; 01/2011 · 4.63 Impact Factor
  • Machine Audition: Principles, Algorithms and Systems. 01/2010;
  • ABSTRACT: The method of "sparse representations," based on the idea that observations should be represented by only a few items chosen from a large number of possible items, has emerged recently as an interesting approach to the analysis of images and audio. New theoretical advances and practical algorithms mean that the sparse representations approach is becoming a potentially powerful signal processing and analysis method. Some of the key concepts in sparse representations are introduced, including algorithms to find sparse representations of data. An overview of some applications of sparse representations in audio is given, including automatic music transcription and audio source separation, and pointers are given to possible future directions in this area. [This work has been supported by grants and studentships from the UK Engineering and Physical Sciences Research Council.]
    The Journal of the Acoustical Society of America 11/2008; 124(4):2570. · 1.65 Impact Factor
  • M.G. Jafari, M.D. Plumbley, M.E. Davies
    ABSTRACT: We present a greedy adaptive algorithm that builds a sparse orthogonal dictionary from the observed data. In this paper, the algorithm is used to separate stereo speech signals, and the phase information that is inherent to the extracted atom pairs is used for clustering and identification of the original sources. The performance of the algorithm is compared to that of the adaptive stereo basis algorithm, when the sources are mixed in echoic and anechoic environments. We find that the algorithm correctly separates the sources, and can do this even with a relatively small number of atoms.
    Hands-Free Speech Communication and Microphone Arrays, 2008. HSCMA 2008; 06/2008
  • M.G. Jafari, M.D. Plumbley
    ABSTRACT: In this paper we consider the problem of representing a speech signal with an adaptive transform that captures the main features of the data. The transform is orthogonal by construction, and is found to give a sparse representation of the data being analysed. The orthogonality property implies that evaluation of both the forward and inverse transform involve a simple matrix multiplication. The proposed dictionary learning algorithm is compared to the K singular value decomposition (K-SVD) method, which is found to yield very sparse representations, at the cost of a high approximation error. The proposed algorithm is shown to have a much lower computational complexity than K-SVD, while the resulting signal representation remains relatively sparse.
    Communications, Control and Signal Processing, 2008. ISCCSP 2008. 3rd International Symposium on; 04/2008
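The orthogonality property highlighted above is easy to demonstrate: with an orthogonal transform, both the forward and inverse transforms reduce to a single matrix multiplication, the inverse being the transpose. A small NumPy illustration, with a random orthogonal matrix standing in for the learned dictionary:

```python
import numpy as np

# A random orthogonal matrix stands in for the learned dictionary.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))

x = rng.standard_normal(32)   # a signal frame (here just random data)
c = Q @ x                     # forward transform: one matrix multiply
x_rec = Q.T @ c               # inverse transform: just the transpose
```

This is why evaluating the transform and its inverse is cheap compared with dictionaries that require a pseudo-inverse or an iterative solver for reconstruction.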
  • ABSTRACT: This paper describes a method for solving the permutation problem in blind source separation (BSS) by frequency-domain independent component analysis (FD-ICA). FD-ICA is a well-known method for BSS of convolutive mixtures. However, FD-ICA has a source permutation problem, where estimated source components can become swapped at different frequencies. Many researchers have suggested methods to solve the source permutation problem, including using correlation between adjacent frequencies. In this paper, we discuss a new method for solving the permutation problem, based on the linearity of the phase response of the FD-ICA de-mixing matrix, and a combination of the proposed phase linearity method and the inter-frequency correlation method. Initial results indicate that our methods can provide an almost perfect solution to the permutation problem in an anechoic environment, and better performance than the inter-frequency correlation method alone in an echoic environment.
    01/2008;
  • Maria G Jafari, Mark D Plumbley
    ABSTRACT: We address the problem of source separation in echoic and anechoic environments with a new algorithm which adaptively learns a set of sparse stereo dictionary elements, which are then clustered to identify the original sources. The atom pairs learned by the algorithm are found to capture information about the direction of arrival of the source signals, which allows the clusters to be determined. A similar approach is also used here to extend the dictionary-learning K-singular value decomposition (K-SVD) algorithm to address the source separation problem, and results from the two methods are compared. Computer simulations indicate that the proposed adaptive sparse stereo dictionary (ASSD) algorithm yields good performance in both anechoic and echoic environments.
    01/2008;
  • ABSTRACT: We consider the problem of convolutive blind source separation of stereo mixtures, where a pair of microphones records mixtures of sound sources that are convolved with the impulse response between each source and sensor. We propose an adaptive stereo basis (ASB) source separation method for such convolutive mixtures, using an adaptive transform basis which is learned from the stereo mixture pair. The stereo basis vector pairs of the transform are grouped according to the estimated relative delay between the left and right channels for each basis, and the sources are then extracted by projecting the transformed signal onto the subspace corresponding to each group of basis vector pairs. The performance of the proposed algorithm is compared with FD-ICA and DUET under different reverberation and noise conditions, using both objective distortion measures and formal listening tests. The results indicate that the proposed stereo coding method is competitive with both these algorithms at short and intermediate reverberation times, and offers significantly improved performance at low noise and short reverberation times.
    Neurocomputing. 01/2008;
  • ABSTRACT: This paper describes a method for solving the permutation problem in blind source separation (BSS) by frequency-domain independent component analysis (FD-ICA). FD-ICA is a well-known method for BSS of convolutive mixtures. However, FD-ICA has a source permutation problem, where estimated source components can become swapped at different frequencies. Many researchers have suggested methods to solve the source permutation problem, including using correlation between adjacent frequencies. In this paper, we discuss a new method for solving the permutation problem, based on the linearity of the phase response of the FD-ICA de-mixing matrix. Initial results indicate that our method can provide an almost perfect solution to the permutation problem in an anechoic environment, and better performance than the method based on correlation between adjacent frequencies in an echoic environment.
    01/2008;
  • Maria G. Jafari, Mark D. Plumbley
    ABSTRACT: In this paper, we investigate the importance of the high frequencies in the problem of convolutive blind source separation (BSS) of speech signals. In particular, we focus on frequency domain blind source separation (FD-BSS), and show that when separation is performed in the low frequency bins only, the recovered signals are similar in quality to those extracted when all frequencies are taken into account. The methods are compared through informal listening tests, as well as using an objective measure.
    Independent Component Analysis and Signal Separation, 7th International Conference, ICA 2007, London, UK, September 9-12, 2007.; 01/2007
  • ABSTRACT: We consider the problem of convolutive blind source separation (BSS). This is usually tackled through either multichannel blind deconvolution (MCBD) or frequency-domain independent component analysis (FD-ICA). Here, instead of using a fixed time or frequency basis to solve the convolutive blind source separation problem, we propose learning an adaptive spatial–temporal transform directly from the speech mixture. Most of the learnt space–time basis vectors exhibit properties suggesting that they represent the components of individual sources as they are observed at the microphones. Source separation can then be performed by projection onto the appropriate group of basis vectors. We go on to show that both MCBD and FD-ICA techniques can be considered as particular forms of this general separation method with certain constraints. While our space–time approach involves considerable additional computation, it is also enlightening as to the nature of the problem and has the potential for performance benefits in terms of separation and de-noising.
    12/2006: pages 79-99;

Publication Stats

197 Citations
36.87 Total Impact Points

Institutions

  • 2012
    • Gheorghe Asachi Technical University of Iasi
      Socola, Iaşi, Romania
    • Technion - Israel Institute of Technology
      Haifa, Haifa District, Israel
  • 2005–2011
    • Queen Mary, University of London
      • School of Electronic Engineering and Computer Science
      London, England, United Kingdom
  • 2002–2004
    • King's College London
      • Centre for Digital Signal Processing Research
      London, ENG, United Kingdom
  • 2001
    • Imperial College London
      • Department of Electrical and Electronic Engineering
      London, ENG, United Kingdom