Yi-Chong Zeng

Academia Sinica, Taipei, Taiwan

Publications (20) · 5.18 total impact

  • Yi-Chong Zeng, Jing Fung Chen
    ABSTRACT: In this paper, we propose an adaptive template-matching method to recognize low-resolution license numbers. The first step is to manually extract the license plate from the low-resolution image. The license plate is divided into several blocks, each corresponding to one license character. Subsequently, we estimate an adaptive filter via the pseudoinverse operation and apply it to the template to generate a filtered template that resembles the divided character. Eventually, the similarity between the license character and the filtered template is measured, and the proposed method identifies the license number accordingly. Moreover, our method outputs several possible candidates for the license number, which is helpful to police in image forensics. The advantage of our approach is that no training process is needed. A quantized rank histogram is measured to evaluate the recognition performance. The experimental results demonstrate that the proposed method is capable of coping with license-number recognition in low-resolution images.
    Computer Symposium (ICS), 2010 International; 01/2011
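The filter-estimation step in the abstract above can be sketched as a least-squares problem: model filtering as a linear operator built from the template, solve for the filter with the pseudoinverse, and score the candidate by SSD. This is an illustrative 1-D reconstruction under assumed details (filter length, zero padding); `sliding_windows` and `match_score` are hypothetical names, not the authors' code.

```python
import numpy as np

def sliding_windows(t, k):
    """Stack length-k windows of signal t (zero-padded), one per sample."""
    p = k // 2
    tp = np.pad(t, (p, p))
    return np.stack([tp[i:i + k] for i in range(len(t))])

def match_score(template, observed, k=5):
    """Estimate a k-tap filter via the pseudoinverse so the filtered
    template approximates the observed character, then return the SSD
    between the filtered template and the observation (lower = better)."""
    T = sliding_windows(template, k)      # linear model: filtered = T @ h
    h = np.linalg.pinv(T) @ observed      # least-squares filter estimate
    return float(np.sum((T @ h - observed) ** 2))
```

Ranking all character templates by this score yields the candidate license numbers the abstract mentions; a correct template reproduces a blurred, dimmed observation almost exactly, so its SSD is far lower than a wrong template's.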
  • Yi-Chong Zeng
    ABSTRACT: The objective of this work is to propose a new template-matching scheme that deals with the recognition issue under rotation. The proposed scheme, rotation-invariant filter-driven template matching (RI-FTM), first transforms a Cartesian-coordinate pattern into a polar-coordinate pattern. We then put our emphasis on estimating an appropriate filter to establish the connection between the query pattern and the reference pattern, after which the similarity between the two patterns is computed via the sum of squared differences (SSD). In addition, the proposed method can shorten the filter width to reduce computing time. The experimental results demonstrate that our method is capable of recognizing license-plate characters and commercial logos.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
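The Cartesian-to-polar step can be illustrated with a nearest-neighbour resampling; the interpolation scheme is an assumption, and `to_polar` is a hypothetical name. Once a pattern lives on an (r, θ) grid, rotating it only shifts the columns cyclically, which is what lets a filter-driven matcher handle rotation.

```python
import numpy as np

def to_polar(img, n_r=16, n_theta=36):
    """Resample a square patch onto an (r, theta) grid, nearest neighbour.
    A rotation of the patch becomes a cyclic shift along the theta axis."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rs = np.linspace(0, min(cy, cx), n_r)
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_r, n_theta))
    for i, r in enumerate(rs):
        for j, t in enumerate(ts):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out
```

For a 31×31 patch, a 90° rotation shifts the polar image by a quarter of the θ axis, up to a few nearest-neighbour rounding differences.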
  • Yi-Chong Zeng, Chiou-Ting Hsu
    ABSTRACT: Aiming at both compression and content protection, we propose a context intra-coding scheme for securing surveillance videos. We also develop methods to detect moving objects, skin color, and human faces in the compression domain. In the proposed context intra-coding scheme, every DCT block is identified as either a parent block or a child block. Each child block is then represented in terms of its corresponding parent block and a residual. A signature block is further employed to encrypt the residual. Finally, the parent blocks and the encrypted residuals are coded using conventional intra-coding. The frame content is well protected and can be accurately decoded only with the correct secret key. Experimental results demonstrate that our scheme is capable of protecting video content while also detecting significant objects in the compression domain without violating personal privacy.
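A minimal sketch of the parent/child idea, with assumed details: the child block is predicted from its parent by a least-squares scale factor, and the quantized residual is XOR-ed with a key-driven signature block (here a seeded pseudo-random byte block stands in for the paper's signature block). The names are hypothetical.

```python
import numpy as np

def encode_child(child, parent, key):
    """Predict the child block from its parent with a least-squares scale,
    quantize the residual to bytes, and XOR it with a key-seeded signature
    block (a stand-in for the paper's signature-block encryption)."""
    scale = float(parent.ravel() @ child.ravel() /
                  (parent.ravel() @ parent.ravel() + 1e-12))
    residual = child - scale * parent
    q = np.clip(np.round(residual) + 128, 0, 255).astype(np.uint8)
    signature = np.random.default_rng(key).integers(0, 256, q.shape,
                                                    dtype=np.uint8)
    return scale, q ^ signature

def decode_child(parent, scale, enc, key):
    """Recover the child block; only the correct key regenerates the
    signature block and hence the residual."""
    signature = np.random.default_rng(key).integers(0, 256, enc.shape,
                                                    dtype=np.uint8)
    return scale * parent + (enc ^ signature).astype(float) - 128.0
```

Decoding with the wrong key yields a scrambled residual, so the child block cannot be recovered, which is the content-protection property the abstract describes.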
  • Yi-Chong Zeng, Fay Huang, Hong-Yuan Mark Liao
    ABSTRACT: The objective of this research is to design a new JPEG-based compression scheme that simultaneously considers the security issue. Our method starts by dividing the image into non-overlapping 8×8 blocks. Among these blocks, some are used as reference blocks and the rest as query blocks. A query block is represented as the combination of a residual and a filtered reference block. We put our emphasis on estimating an appropriate filter and then using it as part of a secret key. With both the reference blocks and the residuals of the query blocks, one is able to encode secured images using a correct secret key. The experimental results demonstrate how different secret keys control the quality of the restored image according to the priority of authority.
    18th IEEE International Conference on Image Processing, ICIP 2011, Brussels, Belgium, September 11-14, 2011; 01/2011
  • Yi-Chong Zeng
    ABSTRACT: In this paper, we propose a new scheme to recognize old-film scenes and modern scenes. The proposed method automatically divides a video into several scenes. An adaptive histogram transform is performed on the frames to yield flicker-free frames, and intensity flicker is defined and measured as the difference between an original frame and its flicker-free counterpart. A quantitative measurement of intensity flicker is computed as the feature. Moreover, using a color feature and the results of scene recognition, the proposed method identifies the characteristics of a video and estimates the era of a film as one of three periods: early, middle, and modern. The experimental results demonstrate that the proposed method not only recognizes old-film scenes and modern scenes, but also handles video identification and era estimation of films and videos.
    Image Processing (ICIP), 2010 17th IEEE International Conference on; 10/2010
  • ABSTRACT: In this paper, we propose a new method to automatically remove intensity flicker in digitized old films. We assume that the intensity of two consecutive frames should not change too much, and thus the histograms of the two frames are similar. Under these circumstances, we can fix a corrupted frame by substituting its content with that of a neighboring frame. The major contribution of this work is that we apply a global histogram transform and a local histogram transform simultaneously to preserve the quality of frame intensity across consecutive frames; the local histogram transform removes local intensity flicker. Performance of the proposed method is evaluated by both local and global measurements: the global measurement uses the mean and standard deviation of frame intensity, while the local measurement uses the average absolute intensity difference between blocks. Experimental results show that the proposed method removes intensity flicker in digitized old films effectively.
    Advances in Multimedia Information Processing - PCM 2009, 10th Pacific Rim Conference on Multimedia, Bangkok, Thailand, December 15-18, 2009 Proceedings; 01/2009
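The global histogram transform at the core of both flicker papers amounts to classical histogram specification: remap each frame's intensities so its histogram matches a reference (for example, a neighboring) frame. A minimal sketch, with `match_histogram` a hypothetical name; the local variant would apply the same mapping per block.

```python
import numpy as np

def match_histogram(frame, reference):
    """Remap frame intensities so the frame's histogram matches the
    reference frame's histogram (classical histogram specification)."""
    f = frame.ravel()
    _, inverse, counts = np.unique(f, return_inverse=True,
                                   return_counts=True)
    src_cdf = np.cumsum(counts) / f.size                 # CDF of the frame
    ref_sorted = np.sort(reference.ravel())
    ref_cdf = np.arange(1, ref_sorted.size + 1) / ref_sorted.size
    mapped = np.interp(src_cdf, ref_cdf, ref_sorted)     # quantile mapping
    return mapped[inverse].reshape(frame.shape)
```

A dimmed copy of the same scene is restored exactly by this mapping because its pixel ranks are unchanged; real flicker also perturbs ranks, which is why the paper combines global and local transforms.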
  • Yi-Chong Zeng
    ABSTRACT: This paper presents adaptive histogram adjustment (AHA) to improve image contrast. The proposed method is based on the concepts of weighted histogram separation and gray-level grouping. It not only improves the contrast of local detail but also alleviates the group-density and blocky effects that arise from the under-quantization problem of weighted histogram separation. Moreover, adaptive histogram adjustment prevents the contrast over-enhancement of conventional adaptive approaches. The experimental results show that AHA achieves good contrast sensitivity in comparison with five existing approaches: histogram equalization, adaptive histogram equalization, weighted histogram separation, adaptive weighted histogram separation, and gray-level grouping.
    Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, ICME 2009, June 28 - July 2, 2009, New York City, NY, USA; 01/2009
  • ABSTRACT: This paper presents a novel people-counting system in which a stationary camera counts, in real time, the number of people watching a TV-wall advertisement or an electronic billboard without counting repetitions in the video stream. The people actually watching an advertisement are identified via frontal face detection techniques. To count the number of people precisely, a complementary set of features is extracted from the torso of a human subject, as that part of the body contains relatively richer information than the face. In addition, for robust people recognition, an online classifier trained by Fisher's Linear Discriminant (FLD) strategy is developed. Our experimental results demonstrate the efficacy of the proposed system for the people-counting task.
    Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, ICME 2009, June 28 - July 2, 2009, New York City, NY, USA; 01/2009
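Fisher's Linear Discriminant, the training strategy named in the abstract, projects features onto the direction that maximizes between-class scatter relative to within-class scatter. A minimal batch sketch follows; the paper's classifier is online, and the names here are hypothetical.

```python
import numpy as np

def fld_direction(X0, X1):
    """Fisher's Linear Discriminant: w = Sw^{-1} (m1 - m0), the projection
    maximizing between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

def fld_classify(X, X0, X1):
    """Label samples by thresholding the projection at the midpoint of the
    projected class means (1 = class of X1)."""
    w = fld_direction(X0, X1)
    threshold = (X0.mean(axis=0) + X1.mean(axis=0)) @ w / 2.0
    return (X @ w > threshold).astype(int)
```

In the counting setting, the two classes would be torso feature vectors of an already-counted person versus others, so repeats project to the known side of the threshold.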
  • Yi-Chong Zeng, Hong-Yuan Mark Liao
    ABSTRACT: We propose a new algorithm to improve the visual appearance of compressed videos derived and reproduced from old films. The new technique is especially useful for handling the digitized old films of digital archives. Two main techniques, saturation adjustment and contrast enhancement, are implemented to improve the chrominance and luminance of video frames in the HSL color space. For contrast enhancement, we use weighted histogram separation (WHS) to improve the luminance. Subsequently, a saturation-ratio transfer function is proposed to adjust the saturation level of every frame. Furthermore, a new inter-frame error calculation is adopted to avoid incorrect enhancement results and the intensive computation consumed by motion estimation. The experimental results demonstrate that the proposed method outperforms existing approaches in terms of video quality.
    International Symposium on Circuits and Systems (ISCAS 2008), 18-21 May 2008, Sheraton Seattle Hotel, Seattle, Washington, USA; 01/2008
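The saturation-adjustment step can be sketched per pixel with the standard library's colorsys module (which calls the color space HLS). The fixed `ratio` below is a stand-in for the paper's saturation-ratio transfer function, which derives the ratio per frame.

```python
import colorsys

def adjust_saturation(rgb, ratio):
    """Scale the HSL saturation of an (r, g, b) triple (components in
    [0, 1]) by `ratio`, clamped to the valid range."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, min(1.0, s * ratio))
```

A ratio of 0 collapses a color to its gray level, while a ratio of 1 leaves it unchanged; intermediate ratios desaturate faded-film colors smoothly.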
  • Yi-Chong Zeng, Soo-Chang Pei
    ABSTRACT: This paper proposes a novel automatic video-diagnosing method. The purpose of this study is to detect attacks applied to an authorized video and to identify the attack category. The video is embedded in advance with crypto-watermarks using a dual-domain quaternary watermarking algorithm. The crypto-watermarks, which are generated using visual cryptography, have different capabilities against various attacks. We extract the watermarks from the suspected video, measure the bit-error rate between the extracted and specified crypto-watermarks, and then analyze the bit-error rates to determine what kind of attack was applied to the video. The experimental results demonstrate that the proposed method can identify not only single attacks but also composite attacks, and can detect corrupted frames. Even if a video is not embedded with crypto-watermarks, we can differentiate it from the authorized videos.
    International Symposium on Circuits and Systems (ISCAS 2008), 18-21 May 2008, Sheraton Seattle Hotel, Seattle, Washington, USA; 01/2008
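The diagnosis step reduces to comparing an observed bit-error-rate (BER) vector, one BER per crypto-watermark, against expected per-attack profiles. A sketch with invented profile numbers; real profiles would be calibrated per watermark and per attack.

```python
def bit_error_rate(a, b):
    """Fraction of positions where two equal-length bit sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def identify_attack(observed_bers, attack_profiles):
    """Return the attack whose expected BER profile is closest (L1
    distance) to the observed per-watermark BERs."""
    return min(attack_profiles, key=lambda name: sum(
        abs(o - e) for o, e in zip(observed_bers, attack_profiles[name])))
```

Because each crypto-watermark degrades differently under different attacks, the BER vector acts as a fingerprint of the attack category.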
  • Soo-Chang Pei, Yi-Chong Zeng
    ABSTRACT: This paper presents a method to simultaneously localize and recover tampered areas in a watermarked image. First, the proposed watermarking algorithm embeds a bi-watermark, a binary halftone of the downscaled host image, into the host image. When the watermarked image is tampered with without authorization, the extracted bi-watermark can be exploited to localize the tampered areas automatically, without the initial bi-watermark. Subsequently, a Gaussian lowpass filter and quadratic programming are used to restore the bi-watermark to a gray-scale image in an inverse halftoning procedure. Eventually, the restored gray-scale image is used to recover the tampered areas.
    Image Processing, 2007. ICIP 2007. IEEE International Conference on; 01/2007
  • Soo-Chang Pei, Yi-Chong Zeng
    ABSTRACT: A novel image-recovery algorithm for removing visible watermarks is presented. Independent component analysis (ICA) is utilized to separate source images from watermarked and reference images. Three independent component analysis approaches are examined in the proposed algorithm: joint approximate diagonalization of eigenmatrices, second-order blind identification, and FastICA. Moreover, five different visible watermarking methods that embed uniform and linear-gradient watermarks are implemented. The experimental results show that visible watermarks are successfully removed, and that the proposed algorithm is independent of both the adopted ICA approach and the visible watermarking method. In the final experiment, several public-domain images sourced from various websites are tested. The results of this study demonstrate that the proposed algorithm can blindly and successfully remove visible watermarks without knowing the watermarking methods in advance.
    IEEE Transactions on Information Forensics and Security 01/2007; DOI:10.1109/TIFS.2006.885031 · 2.07 Impact Factor
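All three ICA variants share one core idea: whiten the mixtures, then find the rotation that maximizes non-Gaussianity. The toy two-mixture separator below illustrates that principle with a kurtosis grid search; it is a pedagogical stand-in, not JADE, SOBI, or FastICA themselves.

```python
import numpy as np

def separate_two(X):
    """Minimal 2-source ICA: whiten the two mixtures (rows of X), then
    grid-search the rotation maximizing squared excess kurtosis."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)) @ E.T @ X          # whitening: cov(Z) ~ identity
    best, best_y = -1.0, None
    for theta in np.linspace(0, np.pi / 2, 181):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        Y = R @ Z
        k = ((Y ** 4).mean(axis=1) - 3.0) ** 2   # squared excess kurtosis
        if k.sum() > best:
            best, best_y = k.sum(), Y
    return best_y
```

In the watermark-removal setting, the watermarked and reference images play the role of the two mixtures, and the separated components correspond to the host image and the watermark (up to scale and order, the usual ICA ambiguities).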
  • Yi-Chong Zeng, Soo-Chang Pei
    ABSTRACT: A 2.5-domain tri-watermarking algorithm (2.5D-TW) is presented in this paper. The proposed algorithm integrates a dual-domain bi-watermarking algorithm with visual cryptography, and the embedded tri-watermark is composed of three binary watermarks. The tri-watermark is embedded into the encoded blocks in the discrete cosine transform (DCT) domain; however, it can be extracted both from the DCT blocks and from the decoded frames in the spatial domain. Moreover, two types of the third binary watermark are developed, namely the intra-watermark and the inter-watermark, which exhibit different robustness against various attacks. The proposed method attempts to localize tampered areas and to detect temporal attacks.
    Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007, July 2-5, 2007, Beijing, China; 01/2007
  • Yi-Chong Zeng, Soo-Chang Pei, Jian-Jiun Ding
    ABSTRACT: A dual-domain bi-watermarking algorithm is proposed in this paper. The algorithm embeds a bi-watermark in the DCT domain, but the bi-watermark can be extracted in both the spatial and DCT domains; therefore, it can operate on DCT-based compressed images and frames. The dual-domain bi-watermarking algorithm is an extension of the spatial-domain bi-watermarking algorithm, which implements quantization index modulation with two quantization step sizes. These step sizes construct non-uniform quantization intervals. Additionally, the luminance quantization table of JPEG compression is considered in the algorithm. In the experimental results, the two extracted watermarks demonstrate robustness at various compression rates, and they also reveal different robustness against global and regional attacks.
    Proceedings of the International Conference on Image Processing, ICIP 2006, October 8-11, Atlanta, Georgia, USA; 01/2006
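Quantization index modulation, the mechanism underlying the bi-watermarking algorithm, embeds a bit by snapping a coefficient to one of two interleaved quantizer lattices. The sketch below shows the single-step core; the paper's scheme uses two step sizes to form non-uniform intervals and four symbols.

```python
def qim_embed(c, bit, step):
    """Embed a bit by snapping coefficient c to the nearest multiple of
    step whose index parity equals the bit (0 -> even, 1 -> odd)."""
    q = round(c / step)
    if q % 2 != bit:
        q += 1 if c / step >= q else -1   # nearest correct-parity neighbour
    return q * step

def qim_extract(c, step):
    """Read the embedded bit back from the parity of the nearest multiple."""
    return round(c / step) % 2
```

Extraction survives any perturbation smaller than half a step, which is what allows the watermark to be read from both the DCT coefficients and the decoded (perturbed) spatial domain.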
  • Soo-Chang Pei, Yi-Chong Zeng, Jian-Jiun Ding
    ABSTRACT: This paper presents a modified approach to the successive mean quantization transform, called Weighted Histogram Separation (WHS), for the enhancement of color images. The behavior of WHS lies between the successive mean quantization transform and histogram equalization. In addition, the approach is further applied to local enhancement, similar to adaptive histogram equalization, and is termed Adaptive Weighted Histogram Separation (AWHS). A comparison with the successive mean quantization transform and histogram equalization is performed in the experiments.
    Proceedings of the International Conference on Image Processing, ICIP 2006, October 8-11, Atlanta, Georgia, USA; 01/2006
  • Soo-Chang Pei, Yi-Chong Zeng
    ABSTRACT: We propose a novel semi-fragile multiple-watermarking algorithm based on quantization index modulation. The algorithm utilizes two quantization steps to yield non-uniform intervals on the real-number axis. Each interval corresponds to one of four binary symbols: stable-zero (S0), unstable-zero (U0), stable-one (S1), and unstable-one (U1). In addition, visual cryptography is integrated with the watermarking algorithm to increase the watermark capacity. Multiple watermarks are thus embedded in the host image, and we then extract the watermarks from a corrupted image. According to the extracted watermarks, the algorithm achieves tamper proofing and attack identification. The experimental results show that single and multiple tampered areas are detected, and demonstrate that the number of test images does not influence the accuracy of attack identification.
    Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, ASIACCS 2006, Taipei, Taiwan, March 21-24, 2006; 01/2006
  • Soo-Chang Pei, Yi-Chong Zeng, Ching-Hua Chang
    ABSTRACT: This work presents a novel algorithm using color contrast enhancement and lacuna texture synthesis for the virtual restoration of ancient Chinese paintings. Color contrast enhancement based on saturation and de-saturation is performed in the u'v'Y color space to change the saturation value in the chromaticity diagram, and adaptive histogram equalization is then adopted to adjust the luminance component. Additionally, this work presents a new patching method using a Markov Random Field (MRF) model of texture synthesis. Eliminating undesirable aged-painting patterns, such as stains, crevices, and artifacts, and then filling the lacuna regions with appropriate textures is simple and efficient. The synthesis procedure integrates three key approaches with neighborhood searching: weighted mask, annular scan, and auxiliary. These approaches maintain a complete shape and prevent edge disconnection in the final results. Moreover, the boundary between the original and synthesized paintings is seamless, making it difficult to distinguish where the undesirable patterns originally appeared.
    IEEE Transactions on Image Processing 04/2004; 13(3):416-29. DOI:10.1109/TIP.2003.821347 · 3.11 Impact Factor
  • Soo-Chang Pei, Yi-Chong Zeng
    ABSTRACT: We present a new histogram-based data-hiding algorithm in which secret data are embedded in the least significant bit of the histogram value. By changing pixel values, it alters the histogram to accomplish the data hiding. The proposed algorithm is able to perform data hiding on a one-dimensional histogram, a two-dimensional histogram map, and a three-dimensional histogram cube. In addition, hiding multiple secret data in various combinations of histogram spaces is successfully demonstrated in our experimental results, where both natural and limited-color images are tested.
    Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on; 01/2004
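A minimal 1-D-histogram version of the idea: a bit is hidden in the parity (least significant bit) of a chosen bin's count, and the parity is fixed by moving one pixel into a neighboring bin. The bin spacing and the ±1 shift are assumed implementation details, and the names are hypothetical.

```python
import numpy as np

def embed_bits(img, bits, bins):
    """Hide each bit in the parity (LSB) of the pixel count of one gray
    level: if the parity is wrong, move one pixel to the next gray level.
    Message bins should be spaced apart so the +1 shifts don't collide."""
    img = img.copy()
    for bit, v in zip(bits, bins):
        if int(np.sum(img == v)) % 2 != bit:
            ys, xs = np.nonzero(img == v)
            img[ys[0], xs[0]] += 1    # moves one pixel from bin v to v + 1
    return img

def extract_bits(img, bins):
    """Read the hidden bits back from the bin-count parities."""
    return [int(np.sum(img == v)) % 2 for v in bins]
```

At most one pixel changes by one gray level per message bit, and extraction only needs the agreed bin positions.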
  • Soo-Chang Pei, Yi-Chong Zeng, Chiao-Fen Hung
    ABSTRACT: In this paper we propose a moving-object segmentation algorithm whose segmented results are applied to various applications, including object elimination, object supplement, background estimation, discrimination between moving and static objects, and video editing. The important issue is to segment the moving object accurately. In the proposed algorithm, the conventional 3-D spatio-temporal data set is replaced with 2-D spatio-temporal ones: given an x-y-t video sequence, we slice it into a series of x-t images along the y-axis, and then separate object from background in each slice. The advantage of our algorithm is that it is simple, fast, and efficient at achieving moving-object segmentation. The experimental results show various kinds of video editing that use the segmented results.
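The slicing idea is direct to express: a (T, H, W) video yields H spatio-temporal x-t images, one per row y. In such a slice, static background traces straight horizontal streaks while a moving object traces a slanted track, which is what makes the 2-D separation tractable. The helper name below is hypothetical.

```python
import numpy as np

def xt_slices(video):
    """Return the H x-t slices of a (T, H, W) video, one per row y.
    Each slice has shape (T, W); moving objects appear as slanted tracks."""
    T, H, W = video.shape
    return [video[:, y, :] for y in range(H)]
```

For example, a dot moving one pixel per frame along x in row y=2 shows up as a diagonal line in slice 2 and leaves every other slice untouched.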