Extraction of Text under Complex Background Using Wavelet Transform and Support Vector Machine
ABSTRACT: A method based on the wavelet transform and a support vector machine (SVM) for detecting text under a complex background is proposed. First, the image is decomposed by the wavelet transform; then the texture characteristics of text are extracted by applying an SVM to the low-frequency approximation sub-space and the high-frequency energy sub-space. Combining the wavelet transform with the SVM not only reduces the number of input training samples but also speeds up the SVM's learning and classification. The method exploits the fact that SVMs are well suited to high-dimensional feature spaces, improving the efficiency of text extraction. Experimental results show that the proposed method correctly and effectively locates text regions in digital images.
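As a rough sketch of the kind of pipeline this abstract describes, the code below computes wavelet sub-band texture features per image block and classifies them with an SVM. The Haar wavelet, the 16×16 block size, the particular feature vector (approximation mean plus detail-band energies), and the random toy data are all assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def block_features(block):
    # Single-level 2-D wavelet decomposition of an image block.
    # cA: low-frequency approximation; cH/cV/cD: high-frequency detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(block, "haar")
    # Texture feature vector: mean of the approximation plus the energy
    # (mean of squares) of each detail sub-band.
    return np.array([cA.mean(), (cH**2).mean(), (cV**2).mean(), (cD**2).mean()])

# Toy training data: random 16x16 blocks with dummy text/non-text labels.
rng = np.random.default_rng(0)
blocks = rng.standard_normal((40, 16, 16))
labels = rng.integers(0, 2, size=40)

X = np.array([block_features(b) for b in blocks])
clf = SVC(kernel="rbf").fit(X, labels)
pred = clf.predict(X[:5])
```

In a real detector, the blocks would be tiled over the image and the SVM's per-block decisions merged into candidate text regions.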
- Source: wseas.us
ABSTRACT: This paper presents a multiple-frame integration approach to detect and localize static caption text in news videos. Exploiting the temporal information of video, the algorithm combines robust text features with a non-text line deletion technique and yields precise, tight localization of detected text regions. The Canny edge detector is first applied to reference frames, followed by a logical AND to suppress edges caused by background variation, including scrolling text. Next, rough text candidate regions are determined by counting black-white transitions (BWT). Finally, the text regions are refined by the non-text line deletion technique. The proposed algorithm is applicable to multiple languages and robust to text polarity, alignment, and character size (from 10×10 to 30×30). According to experimental results on various multilingual video sequences, the proposed algorithm achieves 96% or better in recall, precision, and quality of bounding preciseness. 01/2010.
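The multi-frame integration step can be sketched as follows. A simple gradient-magnitude threshold stands in for the Canny detector, and the synthetic frames (a static bright rectangle over noise that changes every frame) are a made-up example, not the paper's data.

```python
import numpy as np

def edge_map(frame, thresh=50):
    # Gradient-magnitude edge map; stands in here for the Canny detector.
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy) > thresh

def stable_edges(frames):
    # Logical AND across reference frames: edges of the moving background
    # (e.g. scrolling text) cancel out, while static caption edges survive.
    acc = edge_map(frames[0])
    for f in frames[1:]:
        acc &= edge_map(f)
    return acc

def bwt_count(row):
    # Black-white transitions along one row of the binary edge map; a high
    # count is characteristic of the dense strokes of caption text.
    return int(np.count_nonzero(row[1:] != row[:-1]))

# Toy sequence: a static bright rectangle (the "caption") over low-level
# noise, so only the rectangle's edges persist under the logical AND.
rng = np.random.default_rng(1)
frames = []
for _ in range(3):
    f = rng.integers(0, 30, size=(60, 80)).astype(np.uint8)
    f[20:40, 10:70] = 255
    frames.append(f)

edges = stable_edges(frames)
transitions = [bwt_count(edges[r]) for r in range(edges.shape[0])]
```

Rows whose transition count exceeds a threshold would then be grouped into rough text candidate regions.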
-
ABSTRACT: In this paper, we propose a simple yet powerful video text location scheme. First, an edge-based background classification is applied to the input video frames, which are thereby classified into three categories: simple, normal, and complex. A different text location method is then adopted for each type of frame: for simple backgrounds, a stroke-based text location scheme is used; for normal backgrounds, a variant of morphology called conditional morphology is incorporated to remove non-text noise; for complex backgrounds, after a location routine based on stroke analysis and conditional morphology, an SVM text detector is trained to reduce false alarms. Experimental results show that our approach performs well on various videos with high speed and precision. IJDAR. 01/2010; 13:173-186.
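The abstract's "conditional morphology" variant is not specified here; as a generic stand-in, the sketch below uses a plain morphological opening and closing (via scipy, an assumed dependency) to delete isolated non-text noise from a binary stroke map while preserving a connected text-like band.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

# Binary stroke map: one dense text-like band plus isolated noise pixels.
mask = np.zeros((40, 60), dtype=bool)
mask[15:22, 5:55] = True             # connected strokes of a caption line
mask[3, 10] = mask[30, 40] = True    # isolated non-text noise

# Opening removes structures smaller than the 3x3 element (the noise dots);
# closing then fills small gaps inside the surviving band.
elem = np.ones((3, 3), dtype=bool)
cleaned = binary_closing(binary_opening(mask, elem), elem)
```

A conditional variant would additionally gate each morphological step on local evidence (e.g. stroke statistics) rather than applying it uniformly.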
-
ABSTRACT: This paper presents a robust and efficient text detection algorithm for news video. The proposed algorithm uses the temporal information of the video and a logical AND operation to remove most of the irrelevant background. A window-based method that counts black-and-white transitions is then applied to the resulting edge map to obtain rough text blobs. A line deletion technique is applied twice to refine the text blocks. The proposed algorithm is applicable to multiple languages (English, Japanese, and Chinese), robust to text polarity (positive or negative), various character sizes (from 4×7 to 30×30), and text alignments (horizontal or vertical). Three metrics, recall (R), precision (P), and quality of bounding preciseness (Q), are adopted to measure the efficacy of text detection algorithms. According to experimental results on various multilingual video sequences, the proposed algorithm achieves 96% or better in all three metrics. Compared to existing methods, it performs especially well in the quality of bounding preciseness, which is crucial to the later binarization process. WSEAS Transactions on Signal Processing 01/2010; 6(4).
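The recall and precision figures quoted above can be computed, for example, from bounding-box overlap. The IoU matching threshold of 0.5 and the toy boxes below are assumptions, and the paper's bounding-preciseness measure Q is not reproduced here.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Toy ground truth (two caption boxes) and one detection that overlaps
# the first box well.
truth = [(10, 10, 50, 30), (60, 60, 100, 80)]
found = [(12, 11, 49, 29)]

matched_truth = sum(any(iou(t, d) > 0.5 for d in found) for t in truth)
matched_found = sum(any(iou(t, d) > 0.5 for t in truth) for d in found)
recall = matched_truth / len(truth)       # fraction of captions detected
precision = matched_found / len(found)    # fraction of detections correct
```

With one of two ground-truth boxes matched, recall is 0.5 and precision is 1.0 in this toy case.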