Theodore S. Papatheodorou

University of Patras, Rhion, West Greece, Greece

Publications (159) · 56.59 Total Impact

  • Source
    Aikaterini K. Kalou · Dimitrios A. Koutsomitropoulos · Theodore S. Papatheodorou
    ABSTRACT: The current trends for the future evolution of the web are without doubt the Semantic Web and Web 2.0. A common perception of these two visions is that they are competing; nevertheless, it is becoming increasingly obvious that they are complementary. Semantic Web technologies have been considered a bridge in the technological evolution from Web 2.0 to Web 3.0, the web of recommendation and personalisation. Towards this perspective, in this work we introduce a framework based on a three-tier architecture that illustrates the potential for combining Web 2.0 mashups and Semantic Web technologies. Based on this framework, we present an application for searching books from Amazon and Half eBay with a focus on personalisation. The implementation rests on ontology development, rules written for personalisation, and a mashup created with the aid of web APIs. However, several open issues must be addressed before such applications can become commonplace. The aim of this work is to be a step towards supporting the development of applications that combine the two trends and thus contribute to Web 3.0, the term used to describe the next-generation web.
    Full-text · Article · Sep 2013
  • Source
    Dimitrios A. Koutsomitropoulos · Georgia D. Solomou · Theodore S. Papatheodorou
    ABSTRACT: The added value the Semantic Web has to offer can find fertile ground in querying collections with rich metadata, such as those often occurring in digital libraries and repositories. One such effort is the semantic search service for the popular DSpace digital repository system. Semantic Search v2 introduces a structured query mechanism that makes query construction easier, as well as several improvements in system design, performance and extensibility. Queries are targeted at the dynamically created DSpace ontology, containing constructs that enable knowledge acquisition among available metadata. Both an empirical and a quantitative evaluation suggest that the system can bring semantic search closer to inexperienced users and make its benefits, such as new querying dimensions, evident in the context of digital repositories, thus forming a paradigm that production services can build upon.
    Full-text · Article · May 2013 · International Journal of Metadata Semantics and Ontologies
  • Source
    Andreas Gizas · Sotiris Christodoulou · Theodore Papatheodorou
    ABSTRACT: For web programmers, it is important to choose a JavaScript framework that not only serves their current project needs, but also provides code of high quality and good performance. The scope of this work is to provide a thorough quality and performance evaluation of the most popular JavaScript frameworks, taking into account well-established software quality factors and performance tests. The major outcome is that we highlight the pros and cons of the frameworks in various areas of interest and identify the problematic points in their code that likely need to be improved in future versions.
    Preview · Article · Jan 2012
  • Source
    Dimitrios A. Koutsomitropoulos · Eero Hyvönen · Theodore S. Papatheodorou
    ABSTRACT: This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve ...
    Full-text · Article · Jan 2012 · Semantic Web
  • Source
    ABSTRACT: The growing availability of Linked Data and other structured information on the Web does not keep pace with the rich semantic descriptions and conceptual associations that would be necessary for the direct deployment of user-tailored services. In contrast, the more complex descriptions become, the harder it is to reason about them. To show the efficacy of a potential compromise between the two, in this paper we propose an intelligent and scalable personalisation service built upon the idea of combining Linked Data with Semantic Web rules. The service mashes up information from different bookstores and presents users with personalised suggestions according to their preferences, which in turn are modelled by a set of Semantic Web rules. This information is made available as Linked Data, enabling third-party recipients to consume knowledge-enhanced information.
    Full-text · Conference Paper · Nov 2011
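    The rule-driven personalisation described in this abstract can be illustrated with a minimal sketch. Note this is plain Python, not the paper's actual ontology or Semantic Web rule language; all record fields and rules below are invented for illustration:

    ```python
    # Illustrative sketch: user preferences as predicate rules applied to
    # aggregated bookstore records. Field names and rules are hypothetical.
    books = [
        {"title": "Semantic Web Primer", "price": 35.0, "subject": "semantic web"},
        {"title": "Learning SPARQL", "price": 55.0, "subject": "semantic web"},
        {"title": "Gardening Basics", "price": 12.0, "subject": "gardening"},
    ]

    # A user profile is a list of rules; each rule is a predicate over a record.
    preference_rules = [
        lambda b: b["subject"] == "semantic web",  # topical interest
        lambda b: b["price"] <= 40.0,              # budget constraint
    ]

    def personalise(records, rules):
        """Keep only the records that satisfy every preference rule."""
        return [r for r in records if all(rule(r) for rule in rules)]

    print([b["title"] for b in personalise(books, preference_rules)])
    # → ['Semantic Web Primer']
    ```

    In the paper's setting the rules are expressed as Semantic Web rules over Linked Data rather than Python predicates, but the filtering idea is the same.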
  • A. Gizas · S.P. Christodoulou · T.S. Papatheodorou
    ABSTRACT: The JavaScript programming language is widely used for web programming and, increasingly, for general-purpose computing. Since the growth of its popularity at the beginning of the Web 2.0 era, many JavaScript frameworks have become available for programming rich client-side interactions in web applications. The most popular and widely used is jQuery; the jQuery project and its community today serve a major part of web programmers. The scope of this paper is to provide a thorough quality and performance assessment of this framework, taking into account well-established software quality factors and performance tests. The main outcome is to highlight the pros and cons of jQuery in various areas of interest and identify the weak points of its code.
    No preview · Article · Jan 2011
  • Source
    ABSTRACT: The current trends for the future evolution of the Web are without doubt the Semantic Web and Web 2.0. A common perception of these two visions is that they are competing; nevertheless, it is becoming increasingly obvious that they are complementary. Towards this perspective, in this work we introduce an application based on a 3-tier architecture that illustrates the potential for combining Web 2.0 and Semantic Web technologies. The application constitutes a framework for searching books from Amazon and Half eBay. The implementation's backbone is the development of the underlying ontology, a set of rules for personalisation, and a mashup built with Web APIs.
    Full-text · Conference Paper · Oct 2010
  • Source
    Georgia Solomou · Theodore Papatheodorou
    ABSTRACT: Thesauri are concept schemes that help in efficiently characterising and retrieving items from digital libraries. SKOS is a data model that provides a standardised way to represent thesauri (and controlled vocabularies in general) using the Resource Description Framework. DSpace is a digital repository system that can inherently ingest and handle thesauri, although not in SKOS format. SKOS support in DSpace is implemented through an add-on provided by the University of Minho. Our initial objective was to apply this add-on to a running DSpace instance. We then tested the updated installation with a real vocabulary, the Thesaurus of Greek Terms, which we took on the task of converting to SKOS. As a final step, we tackled the problems that arose and proposed solutions, mostly based on Semantic Web techniques.
    Full-text · Conference Paper · Oct 2010
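    To make the SKOS idea concrete, here is a minimal sketch of a concept scheme held as RDF-style triples, with a transitive walk up the hierarchy. `skos:prefLabel` and `skos:broader` are real SKOS properties, but the concepts and labels below are invented examples, not drawn from the Thesaurus of Greek Terms:

    ```python
    # Illustrative SKOS-style triples; concepts are hypothetical examples.
    triples = [
        ("ex:Animals", "skos:prefLabel", "Animals"),
        ("ex:Mammals", "skos:prefLabel", "Mammals"),
        ("ex:Cats",    "skos:prefLabel", "Cats"),
        ("ex:Mammals", "skos:broader",   "ex:Animals"),
        ("ex:Cats",    "skos:broader",   "ex:Mammals"),
    ]

    def broader_transitive(concept):
        """Follow skos:broader links upward, collecting every ancestor."""
        ancestors = []
        frontier = [concept]
        while frontier:
            node = frontier.pop()
            for s, p, o in triples:
                if s == node and p == "skos:broader":
                    ancestors.append(o)
                    frontier.append(o)
        return ancestors

    print(broader_transitive("ex:Cats"))
    # → ['ex:Mammals', 'ex:Animals']
    ```

    A production system would use an RDF library and SPARQL rather than a hand-rolled triple list, but the broader/narrower navigation it enables is exactly what makes SKOS thesauri useful for retrieval.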
  • Source
    Dimitrios K. Tsolis · Spyros Sioutas · Theodore S. Papatheodorou
    ABSTRACT: The current work is focused on the implementation of a robust multimedia application for watermarking digital images, based on an innovative spread spectrum analysis algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. Existing highly robust watermarking algorithms apply "detectable watermarks", for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the detection algorithm to apply all the keys to the images, which is inefficient for very large image databases. On the other hand, "readable" watermarks may prove weaker but are easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both "detectable" and "readable" watermarks. The result is a fast and robust multimedia application with the ability to cast readable multibit watermarks into digital images. The application is capable of hiding 2^14 different keys in digital images and casting multiple zero-bit watermarks onto the same coefficient area while maintaining a sufficient level of robustness. Keywords: Watermarking · Content-based image retrieval · Spread spectrum analysis · Wavelet domain · Subband-DCT · Digital images
    Full-text · Article · May 2010 · Multimedia Tools and Applications
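    A toy sketch of the underlying spread-spectrum principle: a key seeds a pseudo-random ±1 sequence that is added to transform coefficients, and detection correlates against that sequence. This is the generic textbook scheme on a 1-D coefficient list, not the paper's actual wavelet/subband-DCT algorithm, and all parameters are illustrative:

    ```python
    import random

    # Toy additive spread-spectrum watermarking on a list of coefficients.
    def pn_sequence(key, length):
        """Pseudo-random +/-1 sequence derived from a watermarking key."""
        rng = random.Random(key)
        return [rng.choice((-1, 1)) for _ in range(length)]

    def embed(coeffs, key, strength=2.0):
        """Add the key's sequence, scaled by an embedding strength."""
        w = pn_sequence(key, len(coeffs))
        return [c + strength * wi for c, wi in zip(coeffs, w)]

    def detect(coeffs, key):
        """Correlate against the key's sequence; a high value suggests presence."""
        w = pn_sequence(key, len(coeffs))
        return sum(c * wi for c, wi in zip(coeffs, w)) / len(coeffs)

    rng = random.Random(0)
    coeffs = [rng.gauss(0, 10) for _ in range(1000)]  # stand-in for image coefficients
    marked = embed(coeffs, key=42)
    # The correct key should correlate noticeably higher than a wrong one:
    print(detect(marked, 42), detect(marked, 7))
    ```

    The search-cost problem the abstract describes follows directly: with detectable watermarks, every candidate key must be run through `detect` for every image in the library.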
  • C. T. Panagiotakopoulos · T. S. Papatheodorou · G. D. Styliaras
    ABSTRACT: In this study the design issues of a web-based application named Istopolis are described. Istopolis is an integrated system aiming to support History and Culture courses in primary and secondary education in Greece. This hypermedia application system can be installed on a server allowing access to both students and teachers. Authorized teachers are able to modify or add modules with text, virtual material and video, and also to modify or add scenarios towards an optimal exploitation of the initially specified educational modules. Furthermore, results of a summative evaluation obtained from a sample of active teachers are presented, showing that problems such as being "lost in cyberspace" have been avoided and that, in general, educators are rather positive towards the developed application in terms of usability, quality of media and aesthetics.
    No preview · Article · Jan 2010
  •
    ABSTRACT: The aim of this work is to help cultural web application developers benefit from the latest achievements in Web research. The authors introduce a 3-tier architecture that combines Web 2.0 principles, especially those that focus on usability, community and collaboration, with the powerful Semantic Web infrastructure, which facilitates information sharing among applications. Moreover, they present a development methodology, based on this architecture, especially tailored for the cultural heritage domain. Cultural developers can exploit this architecture and methodology to construct Web 2.0-powered cultural applications with rich content and responsive user interfaces. Furthermore, they outline some indicative applications to illustrate the features of the proposed architecture and show that it can be applied today to support modern cultural web applications.
    No preview · Article · Jan 2010
  •
    ABSTRACT: The wide availability of educational resources is a common objective for universities, libraries, archives and other knowledge-intensive institutions. Although generic metadata specifications (such as Dublin Core) seem to fulfill the need for documenting web-distributed objects, educational resources demand more specialized treatment and characterization. In this article we focus on the use of learning-object-specific metadata in digital repositories, as primarily embodied in the LOM (Learning Object Metadata) standard. We review relevant standards and practices, especially noting the importance of application profiling paradigms. A widespread institutional repository platform is DSpace. We discuss our implementation of LOM metadata in this system as well as our interoperability extensions. To this end, we propose a potential LOM to DC mapping that we have put into use in DSpace. Finally, we introduce our implementation of an LOM ontology as a basis for delivering Semantic Web services over educational resources. © 2010 Dimitrios A. Koutsomitropoulos, Andreas D. Alexopoulos, Georgia D. Solomou and Theodore S. Papatheodorou.
    No preview · Article · Jan 2010 · D-Lib Magazine
  •
    ABSTRACT: In this chapter the authors present the basic characteristics of some existing educational metadata schemata and application profiles. They focus on the widely adopted IEEE LOM standard and give a brief analysis of its structure. With a view to the utilization of educational metadata schemata by digital repositories preserving educational and research resources, they concentrate on a considerably popular system for this purpose, DSpace. The authors show how the IEEE LOM metadata set can be incorporated into DSpace's default qualified Dublin Core metadata schema, introducing enhancements to the existing University of Patras live installation. For this reason, they document a potential LOM to Dublin Core metadata mapping and reveal the possible gains from such an attempt. Further, they propose an ontological model for the repository's metadata that also takes into account the educational characteristics of resources. In this way, they show how a semantic level of interoperability between educational applications can be achieved.
    No preview · Article · Jan 2010
  •
    ABSTRACT: For Semantic Web applications to be successful, a key component should be their ability to take advantage of rich content descriptions in meaningful ways. Reasoning constitutes a key part of this process and consequently appears at the core of the Semantic Web architecture stack. From a practical point of view, however, it is not always clear how applications may take advantage of the knowledge-discovery capabilities that reasoning can offer to the Semantic Web. In this paper we present and survey current methods that can be used to integrate inference-based services with such applications. We argue that an important decision is to have reasoning tasks logically and physically distributed. To this end, we discuss relevant protocols and languages such as DIG and SPARQL and give an overview of our Knowledge Discovery Interface. Further, we describe the lessons learned from remotely invoking reasoning services through the OWL API.
    No preview · Conference Paper · Jan 2010
  • Source
    ABSTRACT: Digital repositories and digital libraries are today among the most common tools for managing and disseminating digital object collections of cultural, educational, and other kinds of content over the Web. However, it is often the case that descriptive information about these assets, known as metadata, is only semi-structured from a semantics point of view; implicit knowledge about the content may exist that cannot always be represented in metadata implementations and is thus not always discoverable. To this end, in this article we propose a method and a practical implementation that could allow traditional metadata-intensive repositories to benefit from Semantic Web ideas and techniques. In particular, we show how, starting with a semi-structured knowledge model (like the one offered by DSpace), we can end up with inference-based knowledge discovery, retrieval, and navigation among the repository contents. Our methodology and results are applied to the University of Patras institutional repository. The resulting prototype is also available as a plug-in, although it can fit, in principle, any other kind of digital repository.
    Full-text · Article · Dec 2009 · International Journal on Digital Libraries
  • Source
    Ioannis E Venetis · Theodore S Papatheodorou
    ABSTRACT: Emerging parallel architectures provide the means to efficiently handle more fine-grained and larger numbers of parallel tasks. However, software for parallel programming still does not take full advantage of these new possibilities, retaining the high cost associated with managing large numbers of threads. A significant percentage of this overhead can be attributed to operations on queues. In this paper, we present a methodology to efficiently create and enqueue large numbers of threads for execution. In combination with advances in computer architecture, this reduces the cost of handling parallelism and allows applications to express their inherent parallelism in a more fine-grained manner. Our methodology is based on the notion of Batches of Threads: teams of threads that are inserted into and extracted from queues as a unit, multiple objects at a time. The cost of queue operations is thus amortized among all members of a batch. We define an API, present its implementation in the NthLib threading library and demonstrate how it can be used in real applications. Our experimental evaluation clearly demonstrates that the performance of queue operations improves significantly.
    Full-text · Conference Paper · Sep 2009
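    The amortization idea behind batches of threads can be sketched in a few lines: instead of paying one lock acquisition per task, a whole batch is enqueued or dequeued under a single acquisition. This is a minimal Python illustration, not the actual NthLib API, and the class and method names are invented:

    ```python
    import threading
    from collections import deque

    class BatchQueue:
        """Toy ready-queue where lock cost is amortized over a batch of tasks."""

        def __init__(self):
            self._items = deque()
            self._lock = threading.Lock()

        def enqueue_batch(self, batch):
            # One lock acquisition covers the whole batch, not one per task.
            with self._lock:
                self._items.extend(batch)

        def dequeue_batch(self, n):
            # Likewise, extract up to n tasks under a single acquisition.
            with self._lock:
                return [self._items.popleft()
                        for _ in range(min(n, len(self._items)))]

    q = BatchQueue()
    q.enqueue_batch([f"task-{i}" for i in range(8)])
    print(q.dequeue_batch(3))
    # → ['task-0', 'task-1', 'task-2']
    ```

    With B tasks per batch, the per-task synchronization cost drops by roughly a factor of B, which is the effect the paper measures at the threading-library level.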
  • Dimitrios Tsolis · Spyros Sioutas · Theodore Papatheodorou
    ABSTRACT: As a general and effective protection measure against the copyright violations that occur with the use of digital technologies, including peer-to-peer (P2P) networks, copyright owners often use digital watermarking techniques to embed copyright information into the content, or otherwise restrict or even block access to the digital content through the Internet and the P2P infrastructure. This paper claims that DRM and P2P can be quite complementary. Specifically, a P2P infrastructure is presented that allows broad digital content exchange while at the same time supporting copyright protection and management through watermarking technologies for digital images.
    No preview · Conference Paper · Aug 2009
  • Source
    Theodore S. Papatheodorou · Anastasia N. Kandili
    ABSTRACT: The unified transform method of A. S. Fokas has led to important new developments regarding the analysis and solution of various types of linear and nonlinear PDE problems. In this work we use these developments to obtain the solution of time-dependent problems in a straightforward manner and with accuracy that cannot be reached within reasonable time by the existing numerical methods. More specifically, an integral representation of the solution is obtained via the Fokas approach, which provides the value of the solution at any point without requiring the solution of linear systems or any other calculation at intermediate time levels, and without raising any stability problems. For instance, the solution of the initial boundary value problem for the non-homogeneous heat equation is obtained with accuracy 10^-15, while the well-established Crank–Nicolson scheme requires 2048 time steps to reach 10^-8 accuracy.
    Preview · Article · May 2009 · Journal of Computational and Applied Mathematics
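    For context, the Crank–Nicolson scheme used above as the baseline comparison discretises the non-homogeneous heat equation $u_t = u_{xx} + f$ by averaging the second difference over two time levels (standard textbook form, not reproduced from the paper):

    ```latex
    \frac{u_j^{n+1} - u_j^{n}}{\Delta t}
      = \frac{1}{2}\left(
          \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{(\Delta x)^2}
        + \frac{u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}}{(\Delta x)^2}
        \right) + f_j^{\,n+1/2}
    ```

    Each step requires solving a tridiagonal linear system for the values $u_j^{n+1}$, which is precisely the intermediate-time-level work the integral representation avoids.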
  • Source
    ABSTRACT: The current work focuses on the implementation of a robust multibit watermarking algorithm for digital images, based on an innovative spread spectrum analysis technique. The paper presents the watermark embedding and detection algorithms, which use both wavelets and the Discrete Cosine Transform, and analyzes the arising issues.
    Full-text · Article · May 2009 · Journal of Computational and Applied Mathematics
  • Source
    D.A. Koutsomitropoulos · G.D. Solomou · T.S. Papatheodorou
    ABSTRACT: Metadata applications have evolved over time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Scarcely, however, are these semantics communicated in machine-readable and understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this paper we take up the well-established Dublin Core metadata standard and suggest a proper Semantic Web OWL ontology, coping in novel ways with the discrepancies and incompatibilities typical of such attempts. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual Dublin Core metadata originating from the live DSpace installation of the University of Patras institutional repository.
    Full-text · Conference Paper · Jan 2009

Publication Stats

913 Citations
56.59 Total Impact Points

Institutions

  • 1987-2012
    • University of Patras
      • • Department of Computer Engineering and Informatics
      • • Laboratory for High Performance Information Systems
      Rhion, West Greece, Greece
  • 2004
    • University of California, Riverside
      • Department of Computer Science and Engineering
      Riverside, CA, United States
  • 1978-1983
    • Clarkson College
      Potsdam, New York, United States