Chapter

Evolvable and Machine-Actionable Modular Reports for Service-Oriented Architecture


Abstract

Independent and preferably atomic services that exchange messages are a significant application of the Separation of Concerns principle. Standardised formats and protocols already exist that enable easy implementation. In this paper, we go deeper and introduce evolvable and machine-actionable reports that can be sent between services. This is not just a way of encoding reports and composing them together; it also links semantics using technologies from the Semantic Web and ontology engineering, mainly JSON-LD and Schema.org. We demonstrate our design on the Data Stewardship Wizard project, where evaluation reports are a crucial functionality, but thanks to its versatility and extensibility, it can be used in any message-oriented software system or subsystem.
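The idea can be illustrated with a minimal sketch. The field names and report structure below are illustrative assumptions, not the actual Data Stewardship Wizard format: a modular report is encoded as plain JSON-LD whose `@context` maps local field names to Schema.org terms, so any JSON-capable service can consume the message while its semantics stay machine-actionable.

```python
import json

# A minimal, illustrative modular report: plain JSON that becomes
# machine-actionable through the JSON-LD @context, which maps local
# field names to Schema.org terms. (Field names are assumptions for
# illustration, not the actual DSW report schema.)
report = {
    "@context": {
        "@vocab": "https://schema.org/",
        "chapters": "hasPart",       # sub-reports compose the report
        "score": "ratingValue",
    },
    "@type": "Report",
    "name": "Evaluation report",
    "chapters": [
        {"@type": "Report", "name": "FAIRness check", "score": 0.8},
        {"@type": "Report", "name": "Storage costs", "score": 0.5},
    ],
}

# Any service can read the message with an ordinary JSON parser ...
payload = json.dumps(report)
received = json.loads(payload)

# ... and, if it knows JSON-LD, resolve local names to shared semantics.
context = received["@context"]
print(context["chapters"])        # Schema.org term behind "chapters"
print(len(received["chapters"]))  # number of composed sub-reports
```

Because the semantics live in the `@context` rather than in the field names, the report format can evolve (fields renamed, sub-reports added) without breaking consumers that resolve terms through the context.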


... The modularization was based on deep knowledge of the domain. Another example is the work of Slifka and Suchánek (2019), who aimed to enhance the evolvability of reports within the context of service-oriented architectures. ...
Article
Full-text available
Along with the ongoing digitalization of society, we witness a strong movement to make scientific data FAIR, machine-actionable, and available in the form of knowledge graphs. On the other hand, converting machine-actionable data from knowledge graphs back into human-oriented formats, including documents, graphical, or voice user interfaces, poses significant challenges. The solutions often build on various templates tailored to specific platforms on top of the shared underlying data. These templates suffer from limited reusability, making their adaptations difficult. Moreover, the continuous evolution of data or technological advancements requires substantial efforts to maintain these templates over time. In general, these challenges increase software development costs and are error-prone. In this paper, we propose a solution based on Normalized Systems Theory to address this challenge with the aim of achieving evolvability and sustainability in the transformation process of knowledge graphs into human-oriented formats with broad applicability across domains and technologies. We explain the theoretical foundation and design theorems used in our solution and outline the approach and implementation details. We theoretically evaluate our solution by comparing it to the traditional approach, where the systems are crafted manually. The evaluation shows that our solution is more efficient and effective on a large scale, reducing the human labor required to maintain various templates and supported target platforms. Next, we demonstrate the technical feasibility of our solution on a proof-of-concept implementation in a domain of data management planning that may also serve as a basis for future development.
Conference Paper
Full-text available
Every year, the amount of data (in science) grows significantly as information technologies are used more intensively in various domains of human activity. Biologists, chemists, linguists, and others are not data experts but often just regular users who need to capture and process huge amounts of data. This is where serious problems emerge: bad data management leads to losing important data, producing unverifiable results, wasting funds, and so on. Thousands of qualified data stewards will be needed in the following years to deal with these issues. At the Faculty of Information Technology, CTU in Prague, we participate in the European platform ELIXIR, in which we work on the Data Stewardship Wizard to help researchers and data stewards build high-quality FAIR data management plans that are accurate and helpful to their projects. We cooperate on this challenging project with our colleagues from other ELIXIR nodes.
Article
Full-text available
This report presents outputs of the International Digital Curation Conference 2017 workshop on machine-actionable data management plans. It contains community-generated use cases covering eight broad topics that reflect the needs of various stakeholders. It also articulates a consensus about the need for a common standard for machine-actionable data management plans to enable future work in this area.
Article
Full-text available
There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.
Conference Paper
Schema.org is a way to add machine-understandable information to web pages that is processed by the major search engines to improve search performance. The definition of schema.org is provided as a set of web pages plus a partial mapping into RDF triples with unusual properties, and is incomplete in a number of places. This analysis of and formal semantics for schema.org provides a complete basis for a plausible version of what schema.org should be.
Article
Big data makes common schemas even more necessary.
Conference Paper
Normalized Systems (NS) theory has recently been proposed as an approach to develop agile and evolvable software by defining theorems and design patterns for software architectures. In this paper we discuss the NS development process, which is illustrated by means of an elaborate description of a case regarding a budget management application developed according to the theory. Advantages of the NS approach, such as swift application development through code expansion and the transfer of additional NS design knowledge to new applications, are equally discussed.
Article
As the amount of data and devices on the Web experiences exponential growth, the question of how to integrate such hugely heterogeneous components into a scalable system becomes increasingly important. REST has proven to be a viable solution for such large-scale information systems. It provides a set of architectural constraints that, when applied as a whole, result in benefits in terms of loose coupling, maintainability, evolvability, and scalability. Unfortunately, some of REST's constraints, such as the ones that demand self-descriptive messages or require the use of hypermedia as the engine of application state, are rarely implemented correctly. This results in tightly coupled and thus brittle systems. To solve these and other issues, we present JSON-LD, a community effort to standardize a media type targeted to machine-to-machine communication with inherent hypermedia support and rich semantics. Since JSON-LD is 100% compatible with traditional JSON, developers can continue to use their existing tools and libraries. As we show in the paper, JSON-LD can be used to build truly RESTful services that, at the same time, integrate the exposed data into the Semantic Web. The required additional design costs are significantly outweighed by the achievable benefits in terms of loose coupling, evolvability, scalability, self-descriptiveness, and maintainability.
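The 100% JSON compatibility claimed above can be demonstrated with a short sketch. The resource and IRIs are hypothetical examples, not taken from the cited paper: a JSON-LD response parses with an ordinary JSON parser, while a linked-data-aware client additionally reads `@id` values as IRIs, i.e. as hypermedia links driving application state.

```python
import json

# Hypothetical REST response body (an assumed example): valid JSON-LD
# is valid JSON, so existing clients need no new parser.
body = """
{
  "@context": {
    "name": "https://schema.org/name",
    "knows": {"@id": "https://schema.org/knows", "@type": "@id"}
  },
  "@id": "https://example.org/people/alice",
  "name": "Alice",
  "knows": "https://example.org/people/bob"
}
"""

doc = json.loads(body)  # a plain JSON client just sees key/value pairs
print(doc["name"])

# A JSON-LD-aware client, guided by the @context, interprets the value
# of "knows" as an IRI: a hypermedia link it can follow (HATEOAS).
next_link = doc["knows"]
print(next_link)
```

The same payload thus serves both audiences: legacy JSON tooling keeps working, while semantic clients get self-descriptive, linked data for free.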
Service-Oriented Architecture: Analysis and Design for Services and Microservices
  • T. Erl
Building Microservices: Designing Fine-Grained Systems
  • S. Newman