Joint Multi-Pitch Detection Using Harmonic Envelope Estimation for Polyphonic Music Transcription

Centre for Digital Music, Queen Mary University of London, London, UK
IEEE Journal of Selected Topics in Signal Processing, 11/2011; 5(6):1111-1123. DOI: 10.1109/JSTSP.2011.2162394
Source: IEEE Xplore


In this paper, a method for automatic transcription of music signals based on joint multiple-F0 estimation is proposed. As a time-frequency representation, the constant-Q resonator time-frequency image is employed, and a novel noise suppression technique based on a pink-noise assumption is applied as a preprocessing step. In the multiple-F0 estimation stage, the optimal tuning and inharmonicity parameters are computed, and a salience function is proposed for selecting pitch candidates. For each combination of pitch candidates, an overlapping-partial treatment procedure is applied, based on a novel spectral envelope estimation procedure for the log-frequency domain, in order to compute the harmonic envelope of the candidate pitches. To select the optimal pitch combination for each time frame, a score function is proposed that combines spectral and temporal characteristics of the candidate pitches and also aims to suppress harmonic errors. For postprocessing, hidden Markov models (HMMs) and conditional random fields (CRFs) trained on MIDI data are employed to boost transcription accuracy. The system was trained on isolated piano sounds from the MAPS database and tested on classical and jazz recordings from the RWC database, as well as on recordings from a Disklavier piano. A comparison with several state-of-the-art systems, using a variety of error metrics, indicates encouraging results.
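The salience-based candidate selection described above can be illustrated with a minimal harmonic-summation sketch (a toy under assumed bin sizes and partial counts, not the paper's exact salience function): partial positions follow the stiff-string inharmonicity law f_k = k * f0 * sqrt(1 + B * k^2), and the salience of a pitch candidate is the summed spectral magnitude near those positions.

```python
import numpy as np

def salience(f0, spectrum, freqs, n_harmonics=10, B=1e-4):
    """Harmonic-summation salience for one pitch candidate (toy sketch).

    Partial frequencies follow the stiff-string inharmonicity law
    f_k = k * f0 * sqrt(1 + B * k**2); the salience is the sum of
    spectral magnitudes at the bins nearest to those partials.
    """
    s = 0.0
    for k in range(1, n_harmonics + 1):
        fk = k * f0 * np.sqrt(1.0 + B * k * k)  # inharmonic partial position
        idx = np.argmin(np.abs(freqs - fk))     # nearest spectral bin
        s += spectrum[idx]
    return s

# Toy usage: a synthetic magnitude spectrum of a 220 Hz harmonic tone,
# 1 Hz bins, partial amplitudes decaying as 1/k.
freqs = np.linspace(0.0, 4000.0, 4001)
spectrum = np.zeros_like(freqs)
for k in range(1, 11):
    spectrum[int(round(k * 220))] = 1.0 / k
best = max(np.arange(100.0, 1000.0, 1.0),
           key=lambda f: salience(f, spectrum, freqs, B=0.0))
```

On this toy spectrum the summation peaks at the true fundamental (220 Hz) rather than at sub- or super-octave candidates, which is the kind of harmonic error the paper's score function is designed to suppress.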

Available from: Emmanouil Benetos, Mar 10, 2014
  • Source
    • "For example, in [4], the (B, F 0 ) parameters are learned on some single note recordings and interpolated on the tessitura. In [5] [6], they are jointly, roughly estimated during a preprocessing step. "
    ABSTRACT: This paper presents a method for estimating the tuning and the inharmonicity coefficient of piano tones, from single notes or chord recordings. It is based on the Non-negative Matrix Factorization (NMF) framework, with a parametric model for the dictionary atoms. The key point here is to include as a relaxed constraint the inharmonicity law modelling the frequencies of transverse vibrations for stiff strings. Applications show that this can be used to finely estimate the tuning and the inharmonicity coefficient of several notes, even in the case of high polyphony. The use of NMF makes this method relevant when tasks like music transcription or source/note separation are targeted.
    Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European; 08/2012
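The inharmonicity law referenced in this abstract, f_k = k * f0 * sqrt(1 + B * k^2) for stiff strings, can also be fitted directly when partial frequencies have already been measured. The following sketch (a deliberate simplification, not the paper's NMF parametric-dictionary method) rearranges the law into the linear model (f_k / k)^2 = f0^2 + (f0^2 * B) * k^2 and solves it by least squares:

```python
import numpy as np

def estimate_tuning_inharmonicity(partials):
    """Fit (f0, B) to measured partial frequencies f_k, k = 1..K.

    Rearranging f_k = k*f0*sqrt(1 + B*k**2) gives the linear model
    (f_k/k)**2 = f0**2 + (f0**2 * B) * k**2, so a degree-1 polyfit of
    (f_k/k)**2 against k**2 yields intercept = f0**2, slope = f0**2 * B.
    """
    k = np.arange(1, len(partials) + 1, dtype=float)
    y = (np.asarray(partials, dtype=float) / k) ** 2
    slope, intercept = np.polyfit(k ** 2, y, 1)
    f0 = np.sqrt(intercept)
    B = slope / intercept
    return f0, B

# Toy usage: partials synthesised with f0 = 220 Hz, B = 5e-4
true_f0, true_B = 220.0, 5e-4
k = np.arange(1, 11, dtype=float)
partials = k * true_f0 * np.sqrt(1.0 + true_B * k ** 2)
f0, B = estimate_tuning_inharmonicity(partials)
```

On noise-free synthetic partials the fit recovers (f0, B) essentially exactly; the NMF formulation in the cited paper exists precisely because real chord recordings do not hand you clean per-note partial tracks.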
  • Source
    ABSTRACT: ASO [1] is an adaptive embedding scheme that has proved more efficient than the HUGO [2] algorithm. It is based on a detectability map that is correlated with the security of the embedding process. The detectability map is computed using Kodovský's ensemble classifiers [3] as an oracle, which preserves the distribution of the cover image and of the sender's database. In this article, we present the technical details of ASO: we detail the computation of the detectability map, and then study the security of the communication phase of ASO through the paradigm of steganography by database. Since this paradigm allows the sender to choose the most secure stego image(s) for transmitting his message, we propose security metrics that help him distinguish secure from insecure images, thereby significantly increasing the security of ASO.
    Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European; 08/2012
  • ABSTRACT: In this work, a probabilistic model for multiple-instrument automatic music transcription is proposed. The model extends the shift-invariant probabilistic latent component analysis method, which is used for spectrogram factorization. Proposed extensions support the use of multiple spectral templates per pitch and per instrument source, as well as a time-varying pitch contribution for each source. Thus, this method can effectively be used for multiple-instrument automatic transcription. In addition, the shift-invariant aspect of the method can be exploited for detecting tuning changes and frequency modulations, as well as for visualizing pitch content. For note tracking and smoothing, pitch-wise hidden Markov models are used. For training, pitch templates from eight orchestral instruments were extracted, covering their complete note range. The transcription system was tested on multiple-instrument polyphonic recordings from the RWC database, a Disklavier data set, and the MIREX 2007 multi-F0 data set. Results demonstrate that the proposed method outperforms leading approaches from the transcription literature, using several error metrics.
    Computer Music Journal, 12/2012; 36(4):81-94. DOI: 10.2307/41819549
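The shift-invariance this abstract relies on can be illustrated with a toy sketch (hypothetical bin counts and template, not the paper's PLCA model): in a log-frequency representation a pitch change is approximately a vertical shift of the spectral pattern, so a single template per instrument can be shared across its whole note range.

```python
import numpy as np

def reconstruct(template, activations):
    """Shift-invariant reconstruction of one log-frequency spectrum.

    The spectrum is modelled as a sum of vertically shifted copies of a
    single spectral template, weighted by per-pitch activations; in
    log-frequency, a shift of the template corresponds to a pitch change.
    """
    out = np.zeros_like(template)
    for shift, a in enumerate(activations):
        if a > 0:
            out += a * np.roll(template, shift)  # shift = pitch offset in bins
    return out

# Toy usage: a 3-partial template activated at two simultaneous pitches
template = np.zeros(48)
template[[0, 12, 19]] = [1.0, 0.6, 0.4]      # fundamental plus two partials
activations = np.zeros(48)
activations[5], activations[10] = 1.0, 0.5   # two notes, 5 bins apart
spectrum = reconstruct(template, activations)
```

Note that `np.roll` wraps around at the top of the frequency axis; a faithful model would zero-pad instead, and the actual PLCA method additionally learns the templates and activations by expectation-maximization rather than assuming them.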