ABSTRACT: The current trends for the future evolution of the Web are without doubt the Semantic Web and Web 2.0. A common perception is that these two visions are competing. Nevertheless, it is becoming increasingly obvious that the two concepts are complementary. Semantic Web technologies have been considered a bridge for the technological evolution from Web 2.0 to Web 3.0, the Web of recommendation and personalisation. Towards this perspective, in this work we introduce a framework based on a three-tier architecture that illustrates the potential for combining Web 2.0 mashups and Semantic Web technologies. Based on this framework, we present an application for searching books from Amazon and Half eBay, with a focus on personalisation. The implementation relies on developing an ontology, writing rules for personalisation, and creating a mashup with the aid of web APIs. However, several open issues must be addressed before such applications can become commonplace. The aim of this work is to be a step towards supporting the development of applications that combine the two trends and thus give substance to the term Web 3.0, used to describe the next-generation Web.
International Journal of Knowledge and Web Intelligence. 09/2013; 4(2/3):142-165.
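The personalisation layer described in the abstract above rests on rules applied to data merged from several bookstore APIs. A minimal sketch of that idea, in plain Python rather than Semantic Web rules: the book fields, sample rules, and weights are illustrative assumptions, not the paper's actual ontology or rule set.

```python
# Sketch of rule-based personalisation over mashed-up book data.
# All names and weights are hypothetical, for illustration only.

def rule_preferred_subject(book, profile):
    """Boost books whose subject matches a user's preferred subjects."""
    return 2.0 if book["subject"] in profile["preferred_subjects"] else 0.0

def rule_price_ceiling(book, profile):
    """Penalise books priced above the user's ceiling."""
    return -5.0 if book["price"] > profile["max_price"] else 0.0

def personalise(books, profile, rules):
    """Score each book with every rule and return books best-first."""
    scored = [(sum(r(b, profile) for r in rules), b) for b in books]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [b for _, b in scored]

# Hypothetical results merged from two bookstore APIs (the "mashup").
books = [
    {"title": "OWL Primer", "subject": "semantic-web", "price": 30.0},
    {"title": "Cheap Thriller", "subject": "fiction", "price": 5.0},
    {"title": "Pricey Atlas", "subject": "geography", "price": 120.0},
]
profile = {"preferred_subjects": {"semantic-web"}, "max_price": 50.0}

ranked = personalise(books, profile,
                     [rule_preferred_subject, rule_price_ceiling])
print([b["title"] for b in ranked])
```

In the paper the preferences are modelled as Semantic Web rules over an ontology; the plain predicates above only mirror the control flow of scoring and ranking.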
ABSTRACT: The added value the Semantic Web has to offer can find fertile ground in querying collections with rich metadata, such as those often found in digital libraries and repositories. One such effort is the semantic search service for the popular DSpace digital repository system. Semantic Search v2 introduces a structured query mechanism that makes query construction easier, as well as several improvements in system design, performance and extensibility. Queries are targeted at the dynamically created DSpace ontology, which contains constructs that enable knowledge acquisition from the available metadata. Both an empirical and a quantitative evaluation suggest that the system can bring semantic search closer to inexperienced users and make its benefits, such as new querying dimensions, evident in the context of digital repositories, thus forming a paradigm that production services can build upon.
International Journal of Metadata Semantics and Ontologies 05/2013; 8(1):46-55.
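A structured query mechanism of the kind sketched in the abstract above typically assembles user-selected constraints into a formal query over the ontology. The snippet below illustrates one plausible shape of such a translation into SPARQL; the namespace and property names are invented placeholders, not the actual DSpace ontology vocabulary.

```python
# Illustrative translation of (property, value) constraints into SPARQL.
# The dspace: prefix and property names are hypothetical.

PREFIX = "PREFIX dspace: <http://example.org/dspace#>"

def build_query(constraints):
    """constraints: list of (property, literal) pairs, ANDed together."""
    patterns = [
        '?item dspace:%s "%s" .' % (prop, value.replace('"', '\\"'))
        for prop, value in constraints
    ]
    return "%s\nSELECT ?item WHERE {\n  %s\n}" % (PREFIX, "\n  ".join(patterns))

q = build_query([("creator", "Smith, J."), ("subject", "Semantic Web")])
print(q)
```

Guiding users through picking (property, value) pairs and generating the query text for them is what makes structured construction easier for inexperienced users than writing SPARQL by hand.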
ABSTRACT: This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve ...
ABSTRACT: The growing availability of Linked Data and other structured information on the Web does not keep pace with the rich semantic descriptions and conceptual associations that would be necessary for the direct deployment of user-tailored services. At the same time, the more complex descriptions become, the harder it is to reason about them. To show the efficacy of a potential compromise between the two, in this paper we propose an intelligent and scalable personalization service built upon the idea of combining Linked Data with Semantic Web rules. The service mashes up information from different bookstores and presents users with personalized data according to their preferences, which in turn are modeled by a set of Semantic Web rules. This information is made available as Linked Data, thus enabling third-party recipients to consume knowledge-enhanced information.
Signal-Image Technology and Internet-Based Systems (SITIS), 2011 Seventh International Conference on; 01/2011
ABSTRACT: The current trends for the future evolution of the Web are without doubt the Semantic Web and Web 2.0. A common perception is that these two visions are competing. Nevertheless, it is becoming increasingly obvious that the two concepts are complementary. Towards this perspective, in this work we introduce an application based on a 3-tier architecture that illustrates the potential for combining Web 2.0 and Semantic Web technologies. The application constitutes a framework for searching books from Amazon and Half eBay. The implementation's backbone is the development of the underlying ontology, the writing of a set of personalisation rules, and the creation of a mashup with the use of Web APIs.
Semantic Computing (ICSC), 2010 IEEE Fourth International Conference on; 10/2010
ABSTRACT: Thesauri are concept schemes that help in efficiently characterizing and retrieving items from digital libraries. SKOS is a data model that provides a standardized way to represent thesauri (and controlled vocabularies in general) using the Resource Description Framework. DSpace is a digital repository system that can inherently ingest and handle thesauri, although not in SKOS format. SKOS support in DSpace is implemented through an add-on provided by the University of Minho. Our initial objective was to apply this add-on to a running DSpace instance. We then tested the updated installation with a real vocabulary, the Thesaurus of Greek Terms, which we took on the task of converting to SKOS. As a final step, we tackled the problems that arose and proposed solutions, mostly based on Semantic Web techniques.
Semantic Computing (ICSC), 2010 IEEE Fourth International Conference on; 10/2010
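To make concrete what converting a thesaurus to SKOS means in the abstract above: each term becomes a skos:Concept with labels and broader/narrower links. The toy serializer below emits Turtle by plain string formatting (a real conversion would use an RDF library); the example concepts and URIs are invented.

```python
# Minimal illustration of SKOS: a term becomes a skos:Concept carrying a
# language-tagged prefLabel and, optionally, a skos:broader link.
# URIs and terms below are hypothetical examples.

def concept_to_turtle(uri, pref_label, lang, broader=None):
    lines = [
        "<%s> a skos:Concept ;" % uri,
        '    skos:prefLabel "%s"@%s' % (pref_label, lang),
    ]
    if broader:
        lines[-1] += " ;"
        lines.append("    skos:broader <%s>" % broader)
    lines[-1] += " ."           # terminate the final predicate
    return "\n".join(lines)

turtle = "\n\n".join([
    concept_to_turtle("http://example.org/t/science", "Science", "en"),
    concept_to_turtle("http://example.org/t/physics", "Physics", "en",
                      broader="http://example.org/t/science"),
])
print(turtle)
```

The broader/narrower hierarchy is precisely what a flat controlled-vocabulary import into DSpace loses, and what SKOS support restores.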
ABSTRACT: The current work is focused on the implementation of a robust multimedia application for watermarking digital images, based on an innovative spread spectrum analysis algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. Existing highly robust watermarking algorithms apply “detectable watermarks”, for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the detection algorithm to apply all the keys to all the images, which is inefficient for very large image databases. On the other hand, “readable” watermarks may prove weaker but are easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both “detectable” and “readable” watermarks. The result is a fast and robust multimedia application that can cast readable multibit watermarks into digital images. The application is capable of hiding 2^14 different keys in digital images and casting multiple zero-bit watermarks onto the same coefficient area while maintaining a sufficient level of robustness.
Keywords: Watermarking; Content-based image retrieval; Spread spectrum analysis; Wavelet domain; Subband-DCT; Digital images
Multimedia Tools and Applications 05/2010; 47(3):581-597. · 1.01 Impact Factor
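The key-based detection described in the abstract above can be sketched in a few lines: each candidate key seeds a pseudo-random ±1 spreading sequence added to transform coefficients, and detection correlates the coefficients against a key's sequence. This is a generic spread-spectrum illustration, not the paper's algorithm; sequence length, strength, and threshold are invented parameters.

```python
# Generic spread-spectrum watermark sketch: embed a key-seeded +/-1 sequence
# into coefficients; detect via normalized correlation against a candidate
# key. All parameters are illustrative.

import random

N = 1024        # number of coefficients carrying the mark
STRENGTH = 2.0  # embedding strength

def key_sequence(key):
    rng = random.Random(key)  # the key deterministically seeds the PRNG
    return [rng.choice((-1.0, 1.0)) for _ in range(N)]

def embed(coeffs, key):
    seq = key_sequence(key)
    return [c + STRENGTH * s for c, s in zip(coeffs, seq)]

def detect(coeffs, key, threshold=1.0):
    seq = key_sequence(key)
    correlation = sum(c * s for c, s in zip(coeffs, seq)) / len(coeffs)
    return correlation > threshold

rng = random.Random(0)
original = [rng.gauss(0.0, 5.0) for _ in range(N)]  # stand-in for DCT/wavelet coefficients
marked = embed(original, key=0x2A7)

print(detect(marked, 0x2A7))   # correct key: high correlation
print(detect(marked, 0x123))   # wrong key: correlation near zero
```

The exhaustive-search cost the abstract criticizes is visible here: a purely detectable scheme must call detect() once per candidate key per image, whereas a readable watermark decodes the key directly.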
ABSTRACT: For Semantic Web applications to be successful, a key component should be their ability to take advantage of rich content descriptions in meaningful ways. Reasoning constitutes a key part of this process and consequently appears at the core of the Semantic Web architecture stack. From a practical point of view, however, it is not always clear how applications may take advantage of the knowledge discovery capabilities that reasoning can offer to the Semantic Web. In this paper we present and survey current methods that can be used to integrate inference-based services with such applications. We argue that an important decision is to have reasoning tasks logically and physically distributed. To this end, we discuss relevant protocols and languages such as DIG and SPARQL and give an overview of our Knowledge Discovery Interface. Further, we describe the lessons learned from remotely invoking reasoning services through the OWL API.
24th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2010, Perth, Australia, 20-23 April 2010; 01/2010
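One concrete way to physically distribute reasoning, as discussed in the abstract above, is to ship a SPARQL query over HTTP to an endpoint that answers under entailment. The sketch below only builds such a request without sending it; the endpoint URL and ontology namespace are placeholders.

```python
# Sketch of invoking a remote reasoning service via SPARQL-over-HTTP.
# The endpoint and namespace are hypothetical; the request is constructed
# but deliberately not sent.

import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/sparql"  # placeholder reasoner endpoint

query = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/onto#>
SELECT ?x WHERE { ?x rdf:type ex:Publication }
"""

data = urllib.parse.urlencode({"query": query}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=data,  # supplying a body makes this a POST request
    headers={"Accept": "application/sparql-results+json"},
)
# urllib.request.urlopen(request) would return the (possibly inferred) bindings.
print(request.get_method(), request.full_url)
```

Under entailment-aware evaluation, ?x would also bind to resources whose rdf:type is only inferred, not asserted; that is the added value of placing the reasoner behind the endpoint.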
ABSTRACT: Emerging parallel architectures provide the means to efficiently handle larger numbers of more fine-grained parallel tasks. However, software for parallel programming still does not take full advantage of these new possibilities, retaining the high cost associated with managing large numbers of threads. A significant percentage of this overhead can be attributed to operations on queues. In this paper, we present a methodology to efficiently create and enqueue large numbers of threads for execution. In combination with advances in computer architecture, this reduces the cost of handling parallelism and allows applications to express their inherent parallelism in a more fine-grained manner. Our methodology is based on the notion of Batches of Threads: teams of threads that are used to insert and extract more than one object simultaneously from queues. Thus, the cost of operations on queues is amortized among all members of a batch. We define an API, present its implementation in the NthLib threading library, and demonstrate how it can be used in real applications. Our experimental evaluation clearly demonstrates that the performance of queue operations improves significantly.
2009 International Conference on Parallel Computing (ParCo 2009), Lyon, France; 09/2009
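The amortization idea in the abstract above can be shown with a toy queue: acquire the lock once per batch instead of once per item. This is only an illustration of the principle, not the NthLib API; the class and counter below are invented for the demonstration.

```python
# Toy demonstration of batched queue operations: one lock acquisition
# covers a whole batch, amortizing synchronization cost over its members.

import threading
from collections import deque

class BatchQueue:
    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()
        self.lock_acquisitions = 0   # instrumentation for the comparison below

    def put_batch(self, batch):
        with self._lock:             # one acquisition for the whole batch
            self.lock_acquisitions += 1
            self._items.extend(batch)

    def get_batch(self, n):
        with self._lock:             # one acquisition extracts up to n items
            self.lock_acquisitions += 1
            return [self._items.popleft()
                    for _ in range(min(n, len(self._items)))]

q = BatchQueue()
q.put_batch(range(100))   # 1 lock acquisition instead of 100
batch = q.get_batch(32)   # 1 lock acquisition instead of 32
print(len(batch), q.lock_acquisitions)
```

With per-item operations the same workload would take 132 acquisitions; batching reduces it to 2, which is the effect the paper measures at the threading-library level.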
ABSTRACT: As a general and effective protection measure against the copyright violations that occur with the use of digital technologies, including peer-to-peer (P2P) networks, copyright owners often use digital watermarking techniques to embed copyright information into the content, or otherwise restrict or even block access to the digital content through the Internet and the P2P infrastructure. This paper claims that DRM and P2P can be quite complementary. Specifically, a P2P infrastructure is presented that allows broad digital content exchange while at the same time supporting copyright protection and management through watermarking technologies for digital images.
Digital Signal Processing, 2009 16th International Conference on; 08/2009
ABSTRACT: The unified transform method of A. S. Fokas has led to important new developments regarding the analysis and solution of various types of linear and nonlinear PDE problems. In this work we use these developments to obtain the solution of time-dependent problems in a straightforward manner and with an accuracy that cannot be reached within reasonable time by the existing numerical methods. More specifically, an integral representation of the solution is obtained by use of the Fokas approach, which provides the value of the solution at any point without requiring the solution of linear systems or any other calculation at intermediate time levels, and without raising any stability problems. For instance, the solution of the initial boundary value problem for the non-homogeneous heat equation is obtained with an accuracy of 10^-15, while the well-established Crank–Nicolson scheme requires 2048 time steps to reach an accuracy of 10^-8.
Journal of Computational and Applied Mathematics 05/2009; 227(1):75–82. · 0.99 Impact Factor
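For orientation, the kind of integral representation the unified transform method yields can be illustrated on the homogeneous heat equation $u_t = u_{xx}$ on the half-line $x>0$, with initial data $u(x,0)=u_0(x)$ and Dirichlet boundary data $u(0,t)=g_0(t)$. This is a representative textbook instance of the method, not necessarily the exact problem treated in the paper:

```latex
u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx - k^2 t}\,\hat{u}_0(k)\,dk
       - \frac{1}{2\pi}\int_{\partial D^+} e^{ikx - k^2 t}
         \left[\hat{u}_0(-k) + 2ik\,\tilde{g}_0(k^2,t)\right]dk,
```

where $\hat{u}_0(k)=\int_0^\infty e^{-ikx}u_0(x)\,dx$, $\tilde{g}_0(k^2,t)=\int_0^t e^{k^2 s}g_0(s)\,ds$, and $\partial D^+$ bounds the region $D^+=\{k:\operatorname{Im}k>0,\ \operatorname{Re}k^2<0\}$. Evaluating this formula at any $(x,t)$ requires only numerical quadrature along fixed contours, which is why no intermediate time levels, linear solves, or stability restrictions arise.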
ABSTRACT: Metadata applications have evolved over time into highly structured “islands of information” about digital resources, often bearing a strong semantic interpretation. Scarcely, however, are these semantics communicated in machine-readable and understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this paper we take up the well-established Dublin Core metadata standard and suggest a proper Semantic Web OWL ontology, coping in novel ways with the discrepancies and incompatibilities indicative of such attempts. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual Dublin Core metadata originating from the live DSpace installation of the University of Patras institutional repository.
Innovations in Information Technology, 2008. IIT 2008. International Conference on; 01/2009
ABSTRACT: In many digital repository implementations, resources are described against some flavor of metadata schema, popularly the Dublin Core Element Set (DCMES), as is the case with the DSpace system. However, such an approach cannot capture richer semantic relations that exist or may be implied, in the sense of a Semantic Web ontology. Therefore, we first suggest a method to semantically intensify the underlying data model and develop an automatic translation of the flatly organized metadata information to this new ontology. We then propose an implementation that provides inference-based knowledge discovery, retrieval and navigation on top of digital repositories, based on this ontology. We apply this technique to real information stored in the University of Patras institutional repository, which is based on DSpace, and confirm that more powerful, inference-based queries can indeed be performed.
4th International Conference on Open Repositories, DSpace User Group Presentations session; 2009-05-21
ABSTRACT: Digital collections often foster a large number of digital resources that need to be efficiently managed, described and disseminated. Metadata play a key role in these tasks, as they offer the basis upon which more advanced services can be built. However, it is not always the case that such collections' metadata expose explicit or even well-structured semantics. Ways to bridge this "semantic gap" are increasingly being sought, as our review of the current state of the art reveals. Most importantly, in this paper we comment on two metadata standards popular in cultural heritage applications, namely CIDOC-CRM and Dublin Core; as diverse as their scope may be, we show how applications in these domains can benefit from a transition to explicit semantic structures, in a way that is as painless as possible and conformant to Semantic Web standards. We conclude by presenting a concrete prototype implementation that serves as a proof of concept for the ideas argued for.
Journal of Digital Information; Vol 10, No 6 (2009): Information Access to Cultural Heritage. 01/2009;
ABSTRACT: Digital repositories and digital libraries are today among the most common tools for managing and disseminating digital object collections of cultural, educational, and other kinds of content over the Web. However, it is often the case that the descriptive information about these assets, known as metadata, is only semi-structured from a semantics point of view; implicit knowledge about this content may exist that cannot always be represented in metadata implementations and thus is not always discoverable.
To this end, in this article we propose a method and a practical implementation that could allow traditional metadata-intensive
repositories to benefit from Semantic Web ideas and techniques. In particular, we show how, starting with a semi-structured
knowledge model (like the one offered by DSpace), we can end up with inference-based knowledge discovery, retrieval, and navigation
among the repository contents. Our methodology and results are applied to the University of Patras institutional repository.
The resulting prototype is also available as a plug-in, although it can fit, in principle, any other kind of digital repository.
International Journal on Digital Libraries 01/2009; 10:179-199.
ABSTRACT: Information management, description and discovery, as implemented today in digital repository and digital library systems, can surely benefit from the stack of Semantic Web technologies. Most importantly, the ability to infer implied information over declared facts and assertions, based on their rich descriptions and associations, can open up new possibilities in how stored assets are accessed, searched and discovered. In this paper we propose a process and an implementation that provide inference-based knowledge discovery, retrieval and navigation on top of digital repositories, based on existing metadata and other semi-structured information. We show that it is possible to produce added-value, meaningful results even when the existing descriptions are only flatly organized, and we achieve this with little manual intervention. Our work and results are based on real-world data and applied to the official University of Patras institutional repository, which is based on DSpace.
KMIS 2009 - Proceedings of the International Conference on Knowledge Management and Information Sharing, Funchal - Madeira, Portugal, October 6-8, 2009; 01/2009
ABSTRACT: The current work focuses on the implementation of a robust multibit watermarking algorithm for digital images, based on an innovative spread spectrum analysis technique. The paper presents the watermark embedding and detection algorithms, which use both wavelets and the Discrete Cosine Transform, and analyzes the arising issues.
Journal of Computational and Applied Mathematics 01/2009; · 0.99 Impact Factor