Sicheng Zhao

Harbin Institute of Technology, Harbin, Heilongjiang, China

Publications (4)

  • Sicheng Zhao, Hongxun Yao, Xiaoshuai Sun
    ABSTRACT: Most previous work on video classification and recommendation has been based only on video content, without considering affective analysis of the viewers. In this paper, we present a novel method to classify and recommend videos based on affective analysis, mainly facial expression recognition of viewers, by fusing spatio-temporal features. For spatial features, we integrate Haar-like features into compositional ones according to the features' correlation and train a mid-level classifier; this process is then embedded into an improved AdaBoost learning algorithm to obtain the spatial features. For temporal feature fusion, we adopt hidden dynamic conditional random fields (HDCRFs), which extend HCRFs with a time-dimension variable. The spatial features are embedded into the HDCRFs to recognize facial expressions. Experiments on the Cohn-Kanade database show that the proposed method achieves promising performance. Viewers' changing facial expressions are then collected frame by frame from a camera while they watch videos. Finally, we draw affective curves that describe how viewers' affective states change over time (an illustrative sketch of this step follows the entry). From these curves, we segment each video into affective sections, classify videos into categories, and compute recommendation scores. Experimental results on our collected database show that most subjects are satisfied with the classification and recommendation results.
    Neurocomputing. 11/2013; 119:101-110.
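    A hedged, illustrative sketch (not the paper's implementation): one way a per-frame expression sequence could be turned into an affective curve, segmented into affective sections, and reduced to a recommendation score. The valence table, smoothing window, and thresholds are assumptions made only for illustration.

    ```python
    # Illustrative sketch: affective curve from per-frame expression labels.
    # Valence values, window size, and thresholds are assumptions, not the paper's.
    import numpy as np

    # Hypothetical valence assigned to each recognized expression label.
    VALENCE = {"happy": 1.0, "surprise": 0.5, "neutral": 0.0,
               "sad": -0.5, "fear": -0.8, "anger": -1.0, "disgust": -1.0}

    def affective_curve(frame_labels, window=25):
        """Map per-frame labels to valence and smooth with a moving average."""
        values = np.array([VALENCE[label] for label in frame_labels], dtype=float)
        kernel = np.ones(window) / window
        return np.convolve(values, kernel, mode="same")

    def segment_curve(curve, threshold=0.3):
        """Split the curve into contiguous positive/neutral/negative sections."""
        def tag(v):
            return "positive" if v > threshold else "negative" if v < -threshold else "neutral"
        sections, start = [], 0
        for i in range(1, len(curve) + 1):
            if i == len(curve) or tag(curve[i]) != tag(curve[start]):
                sections.append((start, i, tag(curve[start])))
                start = i
        return sections

    def recommendation_score(curve):
        """One simple choice: mean valence rescaled from [-1, 1] to [0, 100]."""
        return float(50.0 * (curve.mean() + 1.0))
    ```

    Running the labels gathered while one viewer watches one video through these functions gives that video's curve, its affective sections, and a single score; how the paper actually aggregates viewers is described in the abstract above, not here.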
  • ABSTRACT: A new man-in-the-middle (MitM) attack called SSLStrip poses a serious threat to the security of the Secure Sockets Layer (SSL) protocol. Although several schemes have been proposed to resist this attack, no practical countermeasure has been available until now. To withstand the SSLStrip attack, in this paper we propose a scheme named Cookie-Proxy, comprising a secure cookie protocol and a new topology structure composed of a proxy pattern and a reverse-proxy pattern. Experimental results and a formal security proof using SVO logic show that our scheme is effective in preventing the SSLStrip attack. Moreover, the scheme adds little extra time and communication cost compared with previous secure cookie protocols. (An illustrative sketch of the channel-binding idea follows this entry.)
    Proceedings of the 14th International Conference on Information and Communications Security; 10/2012
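    A hedged, illustrative sketch (not the Cookie-Proxy protocol itself): one way a proxy could bind a session cookie to the HTTPS channel with an HMAC, so that a request arriving over plain HTTP, as it would after SSLStrip downgrades the connection, fails verification. Key handling, expiry, and replay protection are omitted; all names are illustrative.

    ```python
    # Illustrative channel-bound cookie check; not the paper's protocol.
    import hashlib
    import hmac
    import secrets

    SERVER_KEY = secrets.token_bytes(32)  # assumption: one proxy-side secret key

    def issue_cookie(session_id: str) -> str:
        """Issued over HTTPS: the tag covers the session id and the scheme."""
        tag = hmac.new(SERVER_KEY, f"{session_id}|https".encode(),
                       hashlib.sha256).hexdigest()
        return f"{session_id}|{tag}"

    def verify_cookie(cookie: str, request_scheme: str) -> bool:
        """Recompute the tag with the scheme the request actually used, so a
        stripped (http) request carrying a valid cookie is rejected."""
        try:
            session_id, tag = cookie.split("|")
        except ValueError:
            return False
        expected = hmac.new(SERVER_KEY, f"{session_id}|{request_scheme}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)
    ```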
  • Sicheng Zhao, Hongxun Yao, Xiaoshuai Sun
    ABSTRACT: In this paper, we propose a novel affective video classification method based on facial expression recognition, learning a spatio-temporal feature fusion of actors' and viewers' facial expressions. For spatial features, we integrate Haar-like features into compositional ones according to the features' correlation and train a mid-level classifier in the process; this process is then embedded into an improved AdaBoost learning algorithm (a sketch of a plain AdaBoost loop follows this entry) to obtain the spatial features. For temporal feature fusion, we adopt hidden dynamic conditional random fields (HDCRFs), which extend hidden conditional random fields (HCRFs) with a time-dimension variable. Finally, the spatial features are embedded into the HDCRFs to recognize facial expressions. Experiments on the well-known Cohn-Kanade database show that the proposed method achieves promising recognition performance, and affective classification experiments on our own videos show that most subjects are satisfied with the classification results.
    2011 Sixth International Conference on Image and Graphics (ICIG); 09/2011
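    A hedged, illustrative sketch (not the paper's improved AdaBoost): a plain discrete AdaBoost loop showing where mid-level weak classifiers built from compositional Haar-like features would plug in. The weak classifiers are assumed to be supplied as callables h(x) -> -1 or +1.

    ```python
    # Plain discrete AdaBoost over caller-supplied weak classifiers (illustrative).
    import numpy as np

    def adaboost(X, y, weak_classifiers, rounds=50):
        """X: list of samples, y: labels in {-1, +1},
        weak_classifiers: list of callables h(x) -> -1 or +1."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        w = np.full(n, 1.0 / n)                      # sample weights
        ensemble = []                                # (alpha, weak classifier) pairs
        # Predictions of every weak classifier on every sample (computed once).
        preds = [np.array([h(x) for x in X], dtype=float) for h in weak_classifiers]
        for _ in range(rounds):
            # Pick the weak classifier with the lowest weighted error.
            errors = [float(np.sum(w[p != y])) for p in preds]
            best = int(np.argmin(errors))
            err = max(errors[best], 1e-10)
            if err >= 0.5:                           # no better than chance: stop
                break
            alpha = 0.5 * np.log((1.0 - err) / err)
            w *= np.exp(-alpha * y * preds[best])    # misclassified samples gain weight
            w /= w.sum()
            ensemble.append((alpha, weak_classifiers[best]))
        return lambda x: int(np.sign(sum(a * h(x) for a, h in ensemble)))
    ```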
  • ABSTRACT: Most previous work on video indexing and recommendation has been based only on the content of the video itself, without considering affective analysis of the viewers, which is an efficient and important way to capture viewers' attitudes, feelings, and evaluations of videos. In this paper, we propose a novel method to index and recommend videos based on affective analysis, mainly facial expression recognition of viewers. We first build a facial expression recognition classifier by embedding the construction of compositional Haar-like features into hidden conditional random fields (HCRFs). We then extract viewers' facial expressions frame by frame from camera footage collected while they watch the videos, to obtain their affective states (a capture sketch follows this entry). Finally, we draw the affective curve, which describes how the viewers' affect changes over time; from this curve we segment each video into affective sections, produce the indexing result, and list recommendation points from the viewers' perspective. Experiments on our database collected from the web show that the proposed method has promising performance.
    Proceedings of the 19th ACM International Conference on Multimedia 2011, Scottsdale, AZ, USA, November 28 - December 1, 2011; 01/2011
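    A hedged, illustrative sketch (assumptions flagged in the comments): collecting a viewer's per-frame expression labels from a webcam with OpenCV's stock Haar-cascade face detector and a caller-supplied classifier; the paper's HCRF-based recognizer would take that classifier's place.

    ```python
    # Illustrative per-frame capture of viewer expressions; the expression
    # classifier is supplied by the caller and is hypothetical here.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def collect_expressions(predict_expression, max_frames=1000):
        """predict_expression: callable taking a grayscale face crop, returning a label."""
        cap = cv2.VideoCapture(0)                 # default webcam
        labels = []
        while cap.isOpened() and len(labels) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                labels.append(predict_expression(gray[y:y + h, x:x + w]))
            else:
                labels.append("neutral")          # assumption: no detected face counts as neutral
        cap.release()
        return labels
    ```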