Chapter

Infrastructural Approach to Modern Digital Library and Repository Management Systems


Abstract

Traditional digital library management systems (DLMS) were usually implemented as either monolithic or distributed applications. The paper presents a modern approach in which a modular environment provides an infrastructure of components upon which DLMS applications are built. A case study of the open SYNAT Software Platform is presented together with its sample applications, and the key benefits of the approach are discussed.


... JSON or XML-based) could be processed to obtain higher-level classifications so as to create a cyberphysical digital library. This library could be used not only to access the SOs according to catalogs, as is commonly done with the digital documents/objects of digital libraries [16] [4], but also to support (i) the development process of SOs, specifically the design phase, and (ii) the analysis of SOs, i.e. all live and historical information produced and/or recorded by SOs, through ad-hoc defined GUIs. ...
Conference Paper
Full-text available
The vision of the Internet of Things (IoT) based on Smart Objects (SOs) promotes a high-level architectural organization of the future IoT designed around the basic concept of SO. An SO is an autonomous, cyberphysical object augmented with sensing/actuation, processing, storing, and networking capabilities. An important issue in supporting future SO-based IoT systems is how to classify SOs. Classification of SOs is an important activity directly influencing the definition of effective SO discovery services and management systems. In particular, the discovery service is a fundamental middleware component of the IoT as it allows SOs and their users to dynamically discover distributed SOs and, specifically, the services, operations, and data that they provide. This paper aims at proposing a reference taxonomy for SOs that is highly functional for an SO discovery service and, more generally, for an SO management system. The taxonomy is based on a metadata model that is able to describe all the cyberphysical characteristics (geophysical, functional, and non-functional) of an SO.
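For illustration only, such a metadata model could be captured as a simple descriptor that groups the geophysical, functional, and non-functional characteristics of an SO. The class and field names below are assumptions made for this sketch, not the taxonomy actually defined in the cited paper:

    // Hypothetical sketch of an SO metadata descriptor; names are illustrative,
    // not the taxonomy proposed in the cited paper.
    import java.util.List;

    public class SmartObjectDescriptor {
        // Geophysical characteristics: where the object is and what it is attached to.
        public record GeoInfo(double latitude, double longitude, String hostEntity) {}

        // Functional characteristics: what the object can sense, actuate, and provide.
        public record FunctionalInfo(List<String> sensors, List<String> actuators,
                                     List<String> services) {}

        // Non-functional characteristics: quality attributes relevant to discovery.
        public record NonFunctionalInfo(String powerSource, long storageBytes,
                                        String networkInterface) {}

        private final String id;
        private final GeoInfo geo;
        private final FunctionalInfo functional;
        private final NonFunctionalInfo nonFunctional;

        public SmartObjectDescriptor(String id, GeoInfo geo, FunctionalInfo functional,
                                     NonFunctionalInfo nonFunctional) {
            this.id = id;
            this.geo = geo;
            this.functional = functional;
            this.nonFunctional = nonFunctional;
        }

        // A discovery service could match queries against fields such as this one.
        public boolean providesService(String service) {
            return functional.services().contains(service);
        }
    }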
... Typically, such software systems implement modules supporting general-purpose functional patterns for data collection, processing, storage and provision in order to allow developers to build ADIs by re-using, customising, and pipelining functionalities into workflows to meet specific community needs. Examples of such systems, focusing on metadata collection, are: SYNAT (Mazurek et al., 2013; Rosiek et al., 2013), CORE (Knoth and Zdrahal, 2012) and MoRe, i.e. the Monument Repository (Gavrilis et al., 2013). SYNAT and CORE offer advanced and configurable services for the construction of ADIs for scholarly communication, i.e. aggregation and curation of metadata collected from heterogeneous publication repositories. ...
Article
Full-text available
The Cultural Heritage (CH) community is one of the most active in the realisation of Aggregative Data Infrastructures (ADIs). ADIs provide tools to integrate data sources to form uniform and richer information spaces. The realisation of ADIs for CH must be based on technology capable of coping with complex interoperability and sustainability issues. In this paper, we present the D-NET software toolkit framework and services, devised for the realisation of sustainable and customisable ADIs. In particular, we demonstrate the effectiveness of D-NET in the CH scenario by describing its usage in the realisation of a real-case ADI for the EC project Heritage of the People's Europe (HOPE). The HOPE ADI uses D-NET to implement a two-phase metadata conversion methodology that addresses data interoperability issues while facilitating sustainability by encouraging participation of data sources.
Article
Full-text available
Purpose – The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, configure them to handle data according to given data models, and construct autonomic workflows to obtain personalized aggregative infrastructures. Design/methodology/approach – The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-case examples. It then describes the limits of current solutions, which lie in the realization and maintenance costs of such complex software. Finally, it proposes D-NET as an optimal solution for designers and developers willing to realize aggregative infrastructures. The D-NET architecture and services are presented, drawing a parallel with those of aggregative infrastructures, and real cases of D-NET use are presented to illustrate this claim. Findings – The D-NET software toolkit is a general-purpose service-oriented framework in which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures in diverse application domains, and other organizations are inquiring about or experimenting with its adoption. Its customizability and extensibility make D-NET a suitable candidate for creating aggregative infrastructures mediating between different scientific domains and therefore supporting multi-disciplinary research. Originality/value – D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software design and maintenance.
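The general idea behind such toolkits, configurable data management steps pipelined into a workflow, can be sketched generically as a chain of transformation stages. The sketch below is a hypothetical illustration only and is not the D-NET API; all names are invented:

    // Generic, hypothetical sketch of an aggregation workflow: a pipeline of
    // configurable stages (collect -> transform -> store). This is NOT the D-NET API.
    import java.util.List;
    import java.util.function.UnaryOperator;
    import java.util.stream.Stream;

    public class AggregationWorkflow {
        // Each stage transforms a metadata record (kept as a plain string here).
        private final List<UnaryOperator<String>> stages;

        public AggregationWorkflow(List<UnaryOperator<String>> stages) {
            this.stages = stages;
        }

        // Run every harvested record through the configured stages in order.
        public Stream<String> run(Stream<String> harvestedRecords) {
            return harvestedRecords.map(record -> {
                String current = record;
                for (UnaryOperator<String> stage : stages) {
                    current = stage.apply(current);
                }
                return current;
            });
        }

        public static void main(String[] args) {
            // Example configuration: normalise whitespace, then wrap in a stub target schema.
            AggregationWorkflow workflow = new AggregationWorkflow(List.of(
                r -> r.trim().replaceAll("\\s+", " "),
                r -> "<record schema=\"target\">" + r + "</record>"
            ));
            workflow.run(Stream.of("  raw   metadata  record  "))
                    .forEach(System.out::println);
        }
    }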
Article
Full-text available
The term Linked Data refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article we present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. We describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
Article
Full-text available
The term "Linked Data" refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions-the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
Article
Full-text available
This paper charts a research agenda on systems-oriented issues in digital libraries. It focuses on the most central and generic system issues, including system architecture, user-level functionality, and the overall operational environment. With respect to user-level functionality, in particular, it abstracts the overall information lifecycle in digital libraries to five major stages and identifies key research problems that require solution in each stage. Finally, it recommends an explicit set of activities that would help achieve the research goals outlined and identifies several dimensions along which progress of the digital library field can be evaluated.
Conference Paper
Full-text available
This paper proposes to enhance the dynamism and the flexibility of Java Enterprise Edition (EE) servers by introducing a Service-Oriented Architecture (SOA) inside them. The purpose is to ease deployment and to offer dynamic server configuration and reconfiguration. Such an approach limits consumed resources and is capable of context adaptation. After defining the properties that must be verified for the service platform, we propose to use OSGi technology as the basis for the architecture. We have experimented with integrating OSGi into Java EE servers. Moreover, this architecture has been chosen for the next generation of JOnAS, ObjectWeb's open source Java EE implementation.
Conference Paper
Full-text available
The Open Services Gateway initiative (OSGi) defines and promotes open specifications for the delivery of managed services into networked environments. A key element of this initiative is the OSGi framework, a lightweight framework for deploying and executing service-oriented applications. This paper focuses on the OSGi framework by first discussing some implementation details of an open source implementation of the framework, called Oscar. The paper then presents issues that arose, or whose importance was magnified, through implementing and/or using the OSGi framework.
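As a minimal illustration of the programming model such a framework offers, a bundle can publish a service in the framework's service registry from its activator. The activator and context types are standard OSGi; the GreetingService interface and its behaviour are invented for this sketch:

    // Minimal OSGi bundle activator; BundleActivator/BundleContext are standard OSGi,
    // GreetingService is an invented example service.
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    public class Activator implements BundleActivator {

        public interface GreetingService {
            String greet(String name);
        }

        private ServiceRegistration<GreetingService> registration;

        @Override
        public void start(BundleContext context) {
            // Publish the service so other bundles can discover and use it.
            registration = context.registerService(
                    GreetingService.class,
                    name -> "Hello, " + name,
                    null);
        }

        @Override
        public void stop(BundleContext context) {
            // Withdraw the service when the bundle is stopped.
            registration.unregister();
        }
    }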
Article
Remote procedure call (RPC) systems have been around since 1984, when they were first proposed (A.D. Birrell and B.J. Nelson, 1984). During the intervening 15 years, numerous evolutionary improvements have occurred in the basic RPC system, leading to improved systems, such as NCS (T.H. Dineen et al., 1987), that offer programmers more functionality or greater simplicity. The Common Object Request Broker Architecture (CORBA) from the Object Management Group and Microsoft's Distributed Component Object Model (DCOM) are this evolutionary process's latest outgrowths. With the introduction of the Java Developer's Kit release 1.1, a third alternative for creating distributed applications has emerged. The Java Remote Method Invocation (RMI) system has many of the same features as other RPC systems, letting an object running in one Java virtual machine make a method call on an object running in another, perhaps on a different physical machine. On the surface, the RMI system is just another RPC mechanism, much like CORBA and DCOM. But on closer inspection, RMI represents a very different evolutionary progression, one that results in a system that differs not just in detail but in the very set of assumptions made about the distributed systems in which it operates. These differences lead to differences in the programming model, capabilities, and the way the mechanisms interact with the code that implements and builds distributed systems.
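A minimal sketch of that programming model follows; the Echo interface and names are invented for illustration:

    // Minimal Java RMI sketch; Echo/EchoServer are invented names for illustration.
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    public class EchoServer {

        // The remote interface: callers must handle RemoteException on every method.
        public interface Echo extends Remote {
            String echo(String message) throws RemoteException;
        }

        // The implementation runs in the server JVM.
        public static class EchoImpl implements Echo {
            @Override
            public String echo(String message) {
                return "echo: " + message;
            }
        }

        public static void main(String[] args) throws Exception {
            // Export the object so it can receive calls from other JVMs,
            // then register its stub under a well-known name.
            Echo stub = (Echo) UnicastRemoteObject.exportObject(new EchoImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("echo", stub);

            // A client in another JVM would obtain the stub and call it as if local:
            //   Echo remote = (Echo) LocateRegistry.getRegistry("localhost", 1099).lookup("echo");
            //   System.out.println(remote.echo("hello"));
        }
    }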
Article
The Open Archives Initiative (OAI) develops and promotes interoperability solutions that aim to facilitate the efficient dissemination of content. The roots of the OAI lie in the E-Print community. Over the last year its focus has been extended to include all content providers. This paper describes the recent history of the OAI: its origins in promoting E-Prints, the broadening of its focus, the details of its technical standard for metadata harvesting, the applications of this standard, and future plans.
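Concretely, the OAI-PMH metadata harvesting standard is a small set of HTTP verbs returning XML; a harvester issues requests such as the one sketched below. The repository base URL is a placeholder, while the verb and metadataPrefix parameters are defined by the protocol:

    // Minimal OAI-PMH harvesting request using the standard JDK HTTP client (Java 11+).
    // The base URL is a placeholder; verb and metadataPrefix come from OAI-PMH.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OaiHarvester {
        public static void main(String[] args) throws Exception {
            String baseUrl = "https://repository.example.org/oai";

            // ListRecords returns metadata records in the requested format (Dublin Core here).
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create(baseUrl + "?verb=ListRecords&metadataPrefix=oai_dc"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The XML body contains <record> elements and, for large result sets,
            // a <resumptionToken> used to request the next page.
            System.out.println(response.body());
        }
    }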