Article PDF Available

Abstract and Figures

An increasingly strong demand for advanced multimedia search engines is arising as huge amounts of digital visual content become available. The contribution of this paper is the introduction of a hybrid multimedia retrieval model, accompanied by the presentation of a search engine that is capable of retrieving visual content from cultural heritage multimedia libraries in three modes: (i) based on semantic annotation with the help of an ontology; (ii) based on visual features, with a view to finding similar content; and (iii) based on the combination of these two strategies in order to produce recommendations. To achieve this, the retrieval model is composed of two parts: low-level visual feature analysis and retrieval, and a high-level ontology infrastructure. The main novelty is the way in which these two co-operate transparently during the evaluation of a single query in a hybrid fashion, making recommendations to the user and retrieving content that is both visually and semantically similar. A search engine implementing this model has been developed that is capable of searching through digital libraries of cultural heritage collections, and indicative examples are discussed, along with insights into its performance.
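As an illustration of the hybrid idea described in the abstract, the sketch below ranks items by a weighted combination of a visual-similarity score and a semantic overlap against ontology annotations. All names, weights and data are hypothetical stand-ins, not the paper's actual model.

```python
# Illustrative hybrid retrieval: items are ranked by a weighted combination
# of a low-level visual-similarity score and a semantic overlap between
# ontology annotations. All identifiers and values here are hypothetical.

def euclidean_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hybrid_rank(query_vec, query_concepts, items, alpha=0.5):
    """Rank items by alpha * visual similarity + (1 - alpha) * semantic overlap."""
    scored = []
    for item in items:
        # Visual part: turn a feature-space distance into a similarity in (0, 1].
        visual = 1.0 / (1.0 + euclidean_distance(query_vec, item["features"]))
        # Semantic part: Jaccard overlap between annotated ontology concepts.
        a, b = set(query_concepts), set(item["concepts"])
        semantic = len(a & b) / len(a | b) if (a | b) else 0.0
        scored.append((alpha * visual + (1 - alpha) * semantic, item["id"]))
    return [item_id for _, item_id in sorted(scored, reverse=True)]

items = [
    {"id": "vase-01", "features": [0.1, 0.9], "concepts": {"Vase", "Ceramic"}},
    {"id": "coin-07", "features": [0.8, 0.2], "concepts": {"Coin", "Bronze"}},
]
print(hybrid_rank([0.1, 0.8], {"Ceramic"}, items))  # → ['vase-01', 'coin-07']
```

A query that is both visually and semantically closer to an item pushes it up the ranking, which is the behaviour the hybrid mode aims for; the weight alpha trades the two signals off.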
... The main idea of the hybrid semantic search is to link concepts hidden in the keyword descriptions of textually rich attributes with semantically meaningful ontology instances present in the knowledge base. In the same direction, the work by Vrochidis et al. (2009) proposed a hybrid ontology retrieval model, accompanied by a search engine capable of retrieving visual content from cultural heritage multimedia libraries in three modes: semantic retrieval based on an ontology, visual-feature retrieval for finding similar content, and a combination of these two strategies to produce recommendations. ...
Article
Full-text available
Machine-processable data that describe digital or non-digital resources are termed metadata. Different metadata standards exist for describing various types of digital objects, and several studies have reported on how to address issues related to accessing metadata resources. Most studies on metadata involve the cultural heritage domain, an indication of the importance of this domain in metadata research and development. Research on metadata in cultural heritage mainly revolves around three fundamental issues: (1) lack of quality in metadata contents in most cases; (2) difficulty in accessing metadata contents, due largely to users' limited knowledge of the content of the metadata; and (3) heterogeneity of the data at the schema level, which makes access even more difficult. The lack of quality in metadata makes it difficult for users to retrieve and explore information that satisfies their needs. So, in order to make its contents more accessible, the metadata content must be enhanced, especially for cultural heritage collections, which consist of digital objects (structured documents) described by a variety of metadata schemas. This paper presents issues and challenges in enhancing access to metadata by reviewing existing approaches, with a particular emphasis on cultural heritage collections. Firstly, we look at the classification of metadata into two categories, namely data retrieval and information retrieval. Then, we present our analysis, findings and suggestions on how to address issues in enhancing access to metadata contents, especially in cultural heritage collections. A detailed comparison is given between information retrieval and data retrieval, focusing on the applicability of one approach over the other. A framework that aims to improve the effectiveness of retrieval when searching metadata is also proposed and tested.

The proposed framework consists of approaches and methods that are expected to enhance access to metadata, especially in cultural heritage collections, and to be useful for those with limited knowledge of cultural heritage. The experiments were conducted on CHiC2013, a cultural heritage collection. The results show a considerable enhancement over other IR approaches that use expansion methods.
... In [53] and [54] computer vision techniques are employed in order to automatically classify archaeological pottery sherds. Lastly, a Computer Vision technique is also used in [55] where the authors present a search engine for retrieving cultural heritage multimedia content. ...
Chapter
Full-text available
Intangible cultural heritage (ICH) is a relatively recent term coined to represent living cultural expressions and practices, which are recognised by communities as distinct aspects of identity. The safeguarding of ICH has become a topic of international concern primarily through the work of United Nations Educational, Scientific and Cultural Organization (UNESCO). However, little research has been done on the role of new technologies in the preservation and transmission of intangible heritage. This chapter examines resources, projects and technologies providing access to ICH and identifies gaps and constraints. It draws on research conducted within the scope of the collaborative research project, i-Treasures. In doing so, it covers the state of the art in technologies that could be employed for access, capture and analysis of ICH in order to highlight how specific new technologies can contribute to the transmission and safeguarding of ICH.
... Ubiquitous computing is becoming the next step of the ICT evolution. While cloud computing, the most recent paradigm to emerge, promises reliable services delivered through next-generation data centers based on virtualized storage technologies [36], the underlying IoT networking infrastructures, which are also the most critical part of ubiquitous computing, have to cope with all the difficulties of directly interfacing with the environment. Besides the inherited ordinary computer network issues related to MAC and routing protocols, security, and all other aspects of the interfacing network, the design should be resource-aware (computational, communication, memory) and energy-aware. ...
Article
Full-text available
The computer communication paradigm is moving towards ubiquitous computing and the Internet of Things (IoT). Small, autonomous, wirelessly networked devices are becoming more and more present in monitoring and automating every human interaction with the environment, as well as in collecting various other information from the physical world. Applications such as remote health monitoring, intelligent homes, early fire, volcano and earthquake detection, and traffic congestion prevention are already present and all share a similar networking philosophy. An additional challenge for the scientific and engineering world is the suitability of such networks for deployment in inaccessible regions. These scenarios are typical in environmental and habitat monitoring and in military surveillance. Due to the environmental conditions, these networks can often only be deployed in some quasi-random way. This makes the application design challenging in terms of coverage, connectivity, network lifetime and data dissemination. For densely deployed networks, random geometric graphs are often used to model the networking topology. This paper surveys some of the most important approaches and possibilities in modeling and improving coverage and connectivity in randomly deployed networks, with an accent on using mobility to improve network functionality.
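The random geometric graph model mentioned in the abstract can be sketched in a few lines: nodes are dropped uniformly in a unit square and linked whenever they lie within a fixed radio range, after which connectivity can be checked by graph traversal. The parameter values below are illustrative only, not drawn from the surveyed literature.

```python
# Minimal sketch of modelling a randomly deployed sensor network as a
# random geometric graph. Node count and radio range are illustrative.
import random

def random_geometric_graph(n, radius, seed=0):
    """Drop n nodes uniformly in the unit square; link pairs within `radius`."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n)]
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = points[i][0] - points[j][0], points[i][1] - points[j][1]
            if dx * dx + dy * dy <= radius * radius:
                edges[i].add(j)
                edges[j].add(i)
    return points, edges

def is_connected(edges):
    """Breadth-first search from node 0; connected iff every node is reached."""
    seen, frontier = {0}, [0]
    while frontier:
        node = frontier.pop()
        for nbr in edges[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return len(seen) == len(edges)

_, g = random_geometric_graph(100, 0.2)
print("connected:", is_connected(g))
```

Repeating the experiment over many seeds and radii gives an empirical estimate of the connectivity probability as a function of radio range, which is the kind of question such models are used to answer.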
... In the context of artwork and cultural heritage retrieval, different strategies are proposed in the literature, such as in [Tsai, 2007; Jiang et al., 2005]. For cultural heritage retrieval, [Vrochidis et al., 2009] proposed a hybrid multimedia retrieval model which combines retrieval based on low-level visual features with retrieval based on semantic annotation for finding similar images. Reference [Yen et al., 2006] developed an image retrieval system based on AdaBoost [Freund and Schapire, 1997] and relevance feedback for painting image retrieval. ...
Thesis
Content-Based Image Retrieval (CBIR) is a discipline of Computer Science which aims at automatically structuring image collections according to visual criteria. The functionalities offered include efficient access to images in large databases and the identification of their content through object detection and recognition tools. They impact a wide range of fields which manipulate this kind of data, such as multimedia, culture, security, health and scientific research. Indexing an image by its visual content first requires producing a visual summary of that content for a given use, which serves as the index of the image in the database. The literature on image descriptors is now very rich: several families of descriptors exist, and within each family many approaches coexist. Many descriptors do not describe the same information and do not have the same properties, so it is often relevant to combine several of them to better describe the image content. The combination can be implemented differently depending on the descriptors involved and on the application. In this thesis, we focus on the family of local descriptors, applied to query-by-example image and object retrieval in image collections. Their attractive properties make them very popular for retrieval, recognition and categorization of objects and scenes. Two directions of research are investigated. Feature combination applied to query-by-example image retrieval: the core of the thesis is a model for combining low-level, generic descriptors in order to obtain a descriptor that is richer and adapted to a given use case while remaining generic enough to index different types of visual content. Since the target application is query-by-example, another major difficulty is computational complexity, as retrieval times must stay low even on large datasets.
To meet these goals, we propose an approach based on the fusion of inverted indices, which represents the content better while being associated with an efficient access method. Complementarity of the descriptors: we evaluate the complementarity of existing local descriptors by proposing statistical criteria for analyzing their spatial distribution. This work highlights a synergy between some of these techniques when they are judged sufficiently complementary. The spatial criteria are employed within a regression-based prediction model which can select suitable feature combinations globally for a dataset but, most importantly, for each image. The approach is evaluated within the fusion-of-inverted-indices search engine, where it shows its relevance and also highlights that the optimal combination of features may vary from one image to another. Additionally, we exploit the two previous proposals to address the problem of cross-domain image retrieval, where images are matched across different domains, including multi-source and multi-date content. Two applications of cross-domain matching are explored. First, cross-domain image retrieval is applied to the digitized photographic collections of a museum, where it demonstrates its effectiveness for the exploration and promotion of these contents at different levels, from their archiving up to their exhibition in or ex situ. Second, we explore cross-domain image localization, where the pose of a landmark is estimated by retrieving visually similar geo-referenced images for the query images.
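The fusion-of-inverted-indices idea can be pictured with a toy example: each descriptor type keeps its own inverted index mapping visual words to posting lists of image ids, and a query is answered by accumulating votes across indices. This is a hypothetical sketch of the general technique, not the thesis's actual data structures.

```python
# Hypothetical fusion of inverted indices: one index per descriptor type
# (visual word -> posting list of image ids); a query sums per-index votes.
from collections import Counter

def fuse_and_search(indices, query_words_per_index, top_k=3):
    """indices: list of dicts {visual_word: [image_id, ...]}, one per descriptor."""
    votes = Counter()
    for index, query_words in zip(indices, query_words_per_index):
        for word in query_words:
            # Every posting under a matching visual word casts one vote.
            for image_id in index.get(word, []):
                votes[image_id] += 1
    return [image_id for image_id, _ in votes.most_common(top_k)]

# Toy indices for two descriptor types (e.g. a shape and a color vocabulary).
shape_index = {"w1": ["imgA", "imgB"], "w2": ["imgA"]}
color_index = {"c5": ["imgB"], "c9": ["imgA", "imgC"]}
print(fuse_and_search([shape_index, color_index], [["w1", "w2"], ["c9"]]))
```

An image matching the query in several descriptor spaces accumulates more votes than one matching in a single space, which is the intuition behind fusing the indices rather than querying them separately.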
... Substantial research efforts have been invested in the automatic classification of paintings. Computers have demonstrated the ability to classify paintings by their creating artist (Kammerer et al., 2007; Keren, 2002; Shamir et al., 2010b; Johnson Jr et al., 2008; Shen, 2009; Cetinic and Grgic, 2013), emotion (Zhang et al., 2011, 2013), artistic movement (Zujovic et al., 2009; Culjak et al., 2011; Condorovici et al., 2013), or by keywords associated with the paintings (Barnard et al., 2001; Lewis et al., 2004; Vrochidis et al., 2008). In addition to classification, computational methods can also estimate the similarities between different paintings (Garces et al., 2014), or similarities between painters by analyzing the visual content of their art (Shamir et al., 2010b; Shamir and Tarakhovsky, 2012; Shamir, 2012b; Kim and Kim, 2013; Wang and Takatsuka, 2012). ...
Article
In the past few years, computational methods have become increasingly prevalent in art history. Such methods can reveal new knowledge about art and provide a novel approach to the study of art history based on quantitative evidence. Here we used computational analysis to study the artistic style of Pablo Picasso and its change through time. Experimental results show a strong correlation between the visual content and the time of painting, evidenced by the ability of the computer to estimate the time of creation by analyzing the visual content of the painting. The analysis also showed that some paintings were estimated by the algorithm to be artistically related to a different time period than the one in which they were painted. The most significant numerical image content descriptor for estimating the time of creation is the fractal structure of the painting, which changed during Picasso's career in a nearly linear fashion. The software used in the experiment is publicly available, along with detailed instructions for using it.
... While early work in computer-based analysis of art was based on retrieval and analysis of captions and metadata (Mattison, 2004; Tsai, 2007), other studies attempted to perform automatic analysis of the visual content of the art (Postma et al., 2007; Stork, 2009b; Hurtut, 2010). Automatic analysis of visual art has mainly focused on classification of paintings by their creating artists (van den Herik and Postma, 2000; Keren, 2002; Widjaja et al., 2003; Johnson et al., 2008; Shen, 2009; Zujovic et al., 2009) or automatic association of paintings with captions and keywords (Barnard and Forsyth, 2001; Lewis et al., 2004; Vrochidis et al., 2008). More recent work also showed that computers can assess the similarities between the artistic styles of painters, and thus automatically associate painters that share similar artistic styles or are associated with the same schools of art (Shamir et al., 2010). ...
Article
Jackson Pollock introduced a revolutionary artistic style of dripping paint on a horizontal canvas. Here we study Pollock's unique artistic style by using computational methods to characterize the low-level numerical differences between original Pollock drip paintings and drip paintings by other painters who attempted to mimic his signature style. Four thousand and twenty-four numerical image content descriptors were extracted from each painting and compared using weighted nearest neighbor classification, with the Fisher discriminant scores of the content descriptors used as weights. In 93% of the cases, the computer analysis was able to differentiate between an original and a non-original Pollock drip painting. The most discriminative image content descriptors unique to the work of Pollock were the fractal features, but other numerical image content descriptors such as Zernike polynomials, Haralick textures and Chebyshev statistics also show substantial differences between original and non-original Pollock drip paintings. These experiments show that the uniqueness of Pollock's drip painting style is not reflected merely by fractals, but also by other numerical image content descriptors that reflect the visual content. The code and software used for the experiment are publicly available and can be used to study the work of other artists.
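The weighted nearest-neighbor scheme described above can be sketched as follows: each feature's contribution to the distance is scaled by a discriminability weight, standing in for the Fisher discriminant scores. The data and weights below are toy values for illustration, not actual painting features.

```python
# Illustrative weighted nearest-neighbour classification: feature distances
# are scaled by per-feature discriminability weights (a stand-in for Fisher
# discriminant scores). All data and weights here are toy values.

def weighted_distance(a, b, weights):
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5

def classify(sample, training_set, weights):
    """Return the label of the training example nearest under the weighted metric."""
    return min(training_set, key=lambda ex: weighted_distance(sample, ex[0], weights))[1]

training = [([1.0, 0.2], "original"), ([0.2, 0.9], "imitation")]
weights = [2.0, 0.5]  # the first feature is treated as the more discriminative one
print(classify([0.9, 0.5], training, weights))  # → original
```

Giving discriminative features larger weights means that differences along them dominate the metric, which is how the classifier exploits descriptors, such as the fractal features, that separate the two classes well.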
... The ultimate goal of this work is to foster archaeological research and documentation. Moreover, a computer vision technique is also used in (Vrochidis et al. 2008), where the authors present a search engine for retrieving cultural heritage multimedia content. Their technique is based on the semantic annotation of multimedia content, made feasible with the help of an ontology, and on low-level visual features. ...
Technical Report
Full-text available
The i-Treasures project deals with the preservation and transmission of ICH (Intangible Cultural Heritage); its primary aim is to develop an open and extendable platform to provide access to ICH resources, enable knowledge exchange between researchers and contribute to the transmission of rare know-how from Living Human Treasures to apprentices. The main purpose of this document is to define the system and user requirements of the i-Treasures platform. The requirements definition process was based on a participatory approach in which experts, performers and users were actively involved, through surveys and interviews, in the complex tasks of identifying the specificities of rare traditional know-how, discovering existing teaching and learning practices, and identifying the most cutting-edge technologies able to support innovative learning approaches to ICH. The document therefore contains a state-of-the-art review of the field, as well as an analysis of the artistic expressions (i.e. intangible heritages) identified by the project as use cases, namely: 1) rare traditional songs, 2) rare dance interactions, 3) traditional craftsmanship and 4) contemporary music composition.
Chapter
Full-text available
This volume collects the research and development cooperation activities conducted during the second year of the “3D Bethlehem” project. The technological survey and analysis of the urban fabric of Bethlehem, aimed at producing a knowledge and management tool for the historic city, concerned the census and cataloging of buildings for the structuring of a reliable three-dimensional database. During this year, researchers, students and teachers of the University of Pavia, together with the other project partners and the municipality of Bethlehem, worked to build a representative model of the city. The study of the historical architecture and urban fabric of the old city was also an occasion to consolidate a cultural, social and human relationship, useful for mutual knowledge, from which a common understanding could emerge to shape the forms and contents of critical drawings whose purpose is to convey the Architecture.
Conference Paper
This paper proposes a crisp and fuzzy ontology model for reducing the semantic gap between user requirements and the system model, in order to provide better image classification and retrieval. The approach is based on building an ontology of natural scenes using Protégé, allowing queries to be posed in a more natural way, and then integrating fuzzy logic to improve image retrieval.
Book
Full-text available
Conventional topographic databases, obtained by capture from aerial or satellite images, provide a simplified 3D model of our urban environment that answers the needs of numerous applications (development, risk prevention, mobility management, etc.). However, when we have to represent and analyze more complex sites (monuments, civil engineering works, archeological sites, etc.), these models no longer suffice, and other acquisition and processing means have to be implemented. This book focuses on the study of surveying techniques suited to “notable buildings”. The methods tackled in this book cover lasergrammetry and the current techniques of dense image-based correlation using conventional photogrammetry.
Conference Paper
Full-text available
In this paper we discuss the use of knowledge for the automatic extraction of semantic metadata from multimedia content. For the representation of knowledge we extended and enriched current general-purpose ontologies to include low-level visual features. More specifically, we implemented a tool that links MPEG-7 visual descriptors to high-level, domain-specific concepts. For the exploitation of this knowledge infrastructure we developed an experimentation platform that allows us to analyze multimedia content and automatically create the associated semantic metadata, as well as to test, validate and refine the ontologies built. We pursued a tight and functional integration of the knowledge base and the analysis modules, putting them in a loop of constant interaction rather than one being just a pre- or post-processing step of the other.
Article
Full-text available
This paper presents the methodology that has been successfully employed over the past seven years by an interdisciplinary team to create the CIDOC Conceptual Reference Model (CRM), a high-level ontology that enables information integration for cultural heritage data and their correlation with library and archive information. The CIDOC CRM is now in the process of becoming an ISO standard. This paper justifies the methodology and design in detail through functional requirements and gives examples of its contents. The CIDOC CRM analyses the common conceptualizations behind data and metadata structures to support data transformation, mediation and merging. It is argued that such ontologies are property-centric, in contrast to terminological systems, and should be built with different methodologies. It is demonstrated that ontological and epistemological arguments are equally important for an effective design, in particular when dealing with knowledge from the past in any domain. It is assumed that the presented methodology and the upper level of the ontology are applicable in a far wider domain.
Conference Paper
Adaptive hypermedia systems have sought to support users by anticipating users' information requirements for a particular context and rendering the appropriate version of the content and hypermedia links. Adaptable hypermedia, on the other hand, takes the approach that there are times when adaptive approaches may not be feasible or available, and that it would still be appropriate to facilitate user-determined access to information. For instance, users may come to a hypermedia system without a well-defined goal in mind. Similarly, their goals may change throughout exploration, or their expertise may change from one section to another. To support these shifting conditions, we may need affordances on the content that a solely adaptive approach cannot best support. In this paper we present an interaction design to support user-determined adaptable content and describe three techniques which support the interaction: preview cues, dimensional sorting and spatial context. We call the combined approach mSpace. We present the preliminary task analysis that led us to our interaction design, describe the three techniques, give an overview of the architecture of our prototype, and consider next steps for generalizing deployment.
Article
To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects—is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages. This allows researchers to share and reuse ontologies, while retaining the computational benefits of specialized implementations. We discuss how the translation approach to portability addresses several technical problems. One problem is how to accommodate the stylistic and organizational differences among representations while preserving declarative content. Another is how to translate from a very expressive language into restricted languages, remaining system-independent while preserving the computational efficiency of implemented systems. We describe how these problems are addressed by basing Ontolingua itself on an ontology of domain-independent, representational idioms.
Article
The design of a search system and its interface may best be served (and executed) by scrutinizing usability studies.
Conference Paper
Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.
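The combination of semantic and keyword-based scores described in this abstract can be pictured with a toy scoring function: weighted ontology annotations contribute a semantic score, and keyword matching provides tolerance when annotations are incomplete. The weighting scheme below is a simple stand-in, not the authors' actual algorithms.

```python
# Toy combination of an annotation-based semantic score with a keyword
# score, as a sketch of tolerance to knowledge-base incompleteness.
# The scoring functions are illustrative stand-ins.

def semantic_score(doc_annotations, query_concepts):
    # Each annotation carries a weight, e.g. the concept's importance in the doc.
    return sum(w for concept, w in doc_annotations.items() if concept in query_concepts)

def keyword_score(doc_text, query_terms):
    # Simple term-frequency score over the raw text.
    words = doc_text.lower().split()
    return sum(words.count(term) for term in query_terms) / (len(words) or 1)

def combined_score(doc, query_concepts, query_terms, alpha=0.7):
    """Blend the two signals; alpha controls how much the ontology is trusted."""
    return alpha * semantic_score(doc["annotations"], query_concepts) + \
           (1 - alpha) * keyword_score(doc["text"], query_terms)

doc = {"annotations": {"Painting": 0.8, "Museum": 0.3},
       "text": "a painting in the museum collection"}
print(round(combined_score(doc, {"Painting"}, ["museum"]), 3))  # → 0.61
```

When a document lacks annotations for a query concept, the keyword term keeps its score from collapsing to zero, which is the tolerance to knowledge-base incompleteness the abstract refers to.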
Conference Paper
This paper presents a semantic portal, MuseumFinland, for publishing heterogeneous museum collections on the Semantic Web. The application is presented from the viewpoints of the end-user and of the museums providing the contents. With Semantic Web techniques, it is possible to make collections semantically interoperable and to provide museum visitors with intelligent content-based search and browsing services over the global collection base. Using the MuseumFinland approach, museums with semantically rich and interrelated collection content can together create consolidated semantic collection portals on the Web.