Conference Paper

Personalized portraits ranking

DOI: 10.1145/2072298.2071993 Conference: Proceedings of the 19th International Conference on Multimedia 2011, Scottsdale, AZ, USA, November 28 - December 1, 2011
Source: DBLP


Portraits, i.e., images of people, constitute an important part of consumer photos. Existing methods manage portraits based either on explicit objectives, e.g., a specified person or event, or on aesthetics, i.e., the aesthetic quality of portraits. This paper presents a novel system for personalized portrait ranking. First, four kinds of personalized features, i.e., composition, clothing style, affection, and social relationship, are proposed to quantify users' intent. Then, example-based and sketch-based user interfaces (UIs) are developed, which capture personal intent that is difficult to express through queries or aesthetic criteria alone. Finally, portrait ranking is implemented by combining these features with the developed user interfaces. Experimental results show that the system performs well in providing personalized preferences and that the proposed features are effective for portrait ranking. A user study further confirms these promising results.
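As a rough illustration of the final combination step described above, a ranking score could be formed as a weighted sum of the four personalized feature scores, with the weights reflecting the user's intent. This is only a minimal sketch: the feature names, weights, and scoring function below are hypothetical stand-ins, not the paper's actual combination scheme.

```python
# Hypothetical sketch: rank portraits by a weighted sum of the four
# personalized feature scores (composition, clothing style, affection,
# social relationship). In the real system, the weights would be
# derived from the user's example- or sketch-based UI input.
FEATURES = ("composition", "clothing", "affection", "social")

def rank_portraits(portraits, weights):
    """Return portraits sorted by descending combined score.

    portraits: list of dicts mapping each feature name to a score in [0, 1].
    weights:   dict mapping each feature name to its user-intent weight.
    """
    def score(p):
        return sum(weights[f] * p[f] for f in FEATURES)
    return sorted(portraits, key=score, reverse=True)

portraits = [
    {"composition": 0.9, "clothing": 0.2, "affection": 0.5, "social": 0.1},
    {"composition": 0.3, "clothing": 0.8, "affection": 0.6, "social": 0.7},
]
# A user whose intent emphasises social relationship and clothing style:
weights = {"composition": 0.1, "clothing": 0.4, "affection": 0.1, "social": 0.4}
ranked = rank_portraits(portraits, weights)
```

With these weights the second portrait (strong clothing and social scores) outranks the first, even though the first has better composition.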

  • ABSTRACT: The field of genealogy has embraced digitisation, with increasingly large quantities of historical photographs being digitised in an effort both to preserve them and to share them with a wider audience. Genealogy software is prevalent, but while many programs support photograph management, none use face recognition to assist in the identification and tagging of individuals. Genealogy is in the unique position of possessing a rich source of context in the form of a family tree, from which a face recognition engine can draw information. We aim to improve the accuracy of face recognition within a family photograph album through a filter that uses available contextual information from a given family tree. We also use measures of co-occurrence, recurrence, and relative physical distance of individuals within the album to predict the identity of individuals accurately. This novel use of genealogical data as context has provided encouraging results, with a 26% improvement in accuracy at hit-list size 1 and a 21% improvement at size 5 over face recognition alone, when identifying 348 faces against a database of 523 faces from a challenging dataset of 173 family photographs.
    Conference Paper · Nov 2012
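The genealogy abstract above describes filtering face-recognition results with context from a family tree. A minimal sketch of that idea, assuming the filter acts as a multiplicative prior over candidate identities (the actual paper combines co-occurrence, recurrence, and physical-distance measures; the names and combination rule here are illustrative):

```python
# Hypothetical sketch: re-rank face-recognition candidates with a
# context prior derived from the family tree, e.g. a prior of 0 for a
# person not yet born when the photo was taken.
def apply_context_filter(candidates, context_prior):
    """candidates: list of (person_id, face_score) from the recogniser.
    context_prior: dict person_id -> prior in [0, 1].
    Returns candidates re-ranked by face_score * prior."""
    rescored = [(pid, s * context_prior.get(pid, 0.0)) for pid, s in candidates]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

candidates = [("uncle_bob", 0.80), ("grandma_ann", 0.75), ("cousin_sue", 0.60)]
# The tree says the photo predates uncle_bob's birth, so his prior is 0.
prior = {"uncle_bob": 0.0, "grandma_ann": 0.9, "cousin_sue": 0.8}
hit_list = apply_context_filter(candidates, prior)
```

Here the highest raw face score is suppressed by the genealogical context, so "grandma_ann" tops the hit list instead.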
  • ABSTRACT: Clothing style analysis is a critical step in understanding images of people. Automatically identifying the style of clothing that people wear is challenging due to the varied poses of people and the large variation even within a single clothing category. The suit, as one clothing style, is a key element in many important activities. In this paper, we propose a novel suit detection method for images of people in unconstrained environments. To cope with varied human poses, human pose estimation is incorporated. By analyzing the style of clothing, we propose color features, shape features, and statistical features for suit detection. Experiments with four popular classifiers demonstrate that the proposed features are effective and robust. Comparative experiments with the Bag of Words (BoW) method, a popular approach to object detection, show that the proposed features are superior. The proposed method achieves promising performance on our dataset, a challenging web image set with varied human poses and diverse styles of clothing.
    Article · Oct 2014 · Journal of Visual Communication and Image Representation
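The suits-detection abstract above proposes three feature groups (color, shape, statistical) that are then fed to a classifier. A minimal sketch of that pipeline shape, with stub extractors standing in for the paper's actual pose-aligned feature computations (all values and function bodies below are hypothetical):

```python
# Hypothetical sketch: assemble a suit-detection feature vector by
# concatenating the three proposed feature groups. The extractors are
# stand-ins; the paper computes them on pose-estimated body regions.
def color_features(region):    # e.g. a colour histogram of the torso
    return [0.1, 0.7, 0.2]

def shape_features(region):    # e.g. lapel/collar edge statistics
    return [0.4, 0.6]

def stat_features(region):     # e.g. texture moments
    return [0.3]

def suit_feature_vector(region):
    """Concatenate the three feature groups into one vector, which
    would then be passed to a binary suit/non-suit classifier (the
    paper evaluates four popular classifiers)."""
    return color_features(region) + shape_features(region) + stat_features(region)

vec = suit_feature_vector(region=None)
```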
  • ABSTRACT: Human skin detection in images is desirable in many practical applications, e.g., human-computer interaction and adult-content filtering. However, existing methods mainly suffer from confusing backgrounds in real-world images. In this paper, we address the issue by exploring and combining several properties of human skin, i.e., color, texture, and region. First, images are divided into superpixels, and robust skin seeds and background seeds are acquired from the color and texture properties of skin. Then we combine the color, region, and texture properties through a novel skin color and texture based graph cuts (SCTGC) formulation to obtain the final skin detection results. Comprehensive comparative experiments show that the proposed method achieves promising performance and outperforms many state-of-the-art methods on publicly available challenging datasets that include a large proportion of hard images.
    Article · Feb 2015 · Journal of Visual Communication and Image Representation
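The skin-detection abstract above acquires skin and background seeds per superpixel before running the graph cut. A minimal sketch of that seeding step, assuming a loose RGB skin-colour rule and a texture-smoothness check (the thresholds and rule are illustrative; the paper's actual seed criteria and the SCTGC graph-cut stage are not reproduced here):

```python
# Hypothetical sketch: label each superpixel as a robust skin seed, a
# background seed, or unknown (left for the graph cut to decide).
def is_skin_colour(r, g, b):
    # A classic loose RGB skin rule (one of several in the literature).
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

def label_superpixels(superpixels, texture_threshold=0.2):
    """superpixels: list of dicts with a mean 'rgb' triple and a
    'texture' score (highly textured regions are unlikely to be smooth
    skin). Returns one of 'skin', 'background', 'unknown' per superpixel."""
    labels = []
    for sp in superpixels:
        r, g, b = sp["rgb"]
        if is_skin_colour(r, g, b) and sp["texture"] < texture_threshold:
            labels.append("skin")        # confident skin seed
        elif not is_skin_colour(r, g, b):
            labels.append("background")  # confident background seed
        else:
            labels.append("unknown")     # ambiguous: resolve via graph cut
    return labels

sps = [
    {"rgb": (200, 140, 120), "texture": 0.05},  # smooth, skin-coloured
    {"rgb": (40, 60, 200),   "texture": 0.10},  # blue background
    {"rgb": (210, 150, 130), "texture": 0.60},  # skin-coloured but highly textured
]
seeds = label_superpixels(sps)
```

In the full method, the 'unknown' superpixels would be assigned by the graph cut, with the seeds anchoring the data term.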