Ontology Engineering in a Networked World
Abstract
The Semantic Web is characterized by the existence of a very large number of distributed semantic resources, which together define a network of ontologies. These ontologies in turn are interlinked through a variety of meta-relationships such as versioning and inclusion, among many others. This scenario is radically different from the relatively narrow contexts in which ontologies have been traditionally developed and applied, and thus calls for new methods and tools to effectively support the development of novel network-oriented semantic applications.
This book by Suárez-Figueroa et al. provides the necessary methodological and technological support for the development and use of ontology networks, which ontology developers need in this distributed environment. After an introduction, in its second part the authors describe the NeOn Methodology framework. The book’s third part details the key activities relevant to the ontology engineering life cycle. For each activity, a general introduction, methodological guidelines, and practical examples are provided. The fourth part then presents a detailed overview of the NeOn Toolkit and its plug-ins. Lastly, case studies from the pharmaceutical and the fishery domain round out the work.
The book primarily addresses two main audiences: students (and their lecturers) who need a textbook for advanced undergraduate or graduate courses on ontology engineering, and practitioners who need to develop ontologies in particular, or Semantic Web-based applications in general. Its educational value is maximized by its structured approach to explaining guidelines and by combining them with case studies and numerous examples. The description of the open source NeOn Toolkit provides an additional asset, as it allows readers to easily evaluate and apply the ideas presented.
Chapters (17)
While ontology engineering is rapidly entering the mainstream, expert ontology engineers are a scarce resource. Hence, there is a need for practical methodologies and technologies, which can assist a variety of user types with ontology development tasks. To address this need, this book presents a scenario-based methodology, the NeOn Methodology, which provides guidance for all main activities in ontology engineering. The context in which we consider these activities is that of a networked world, where reuse of existing resources is commonplace, ontologies are developed collaboratively, and managing relationships between ontologies becomes an essential aspect of the ontological engineering process. The description of both the methodology and the ontology engineering activities is grounded in a comprehensive software environment, the NeOn Toolkit and its plugins, which provides integrated support for all the activities described in the book. Here we provide an introduction for the whole book, while the rest of the content is organized into four parts: (1) the NeOn Methodology Framework, (2) the set of ontology engineering activities, (3) the NeOn Toolkit and plugins, and (4) three use cases. The primary goals of this book are (a) to disseminate the results from the NeOn project in a structured and comprehensive form, (b) to make it easier for students and practitioners to adopt ontology engineering methods and tools, and (c) to provide a textbook for undergraduate and postgraduate courses on ontology engineering.
In contrast to other approaches that provide methodological guidance for ontology engineering, the NeOn Methodology does not prescribe a rigid workflow, but instead it suggests a variety of pathways for developing ontologies. The nine scenarios proposed in the methodology cover commonly occurring situations, for example, when available ontologies need to be re-engineered, aligned, modularized, localized to support different languages and cultures, and integrated with ontology design patterns and non-ontological resources, such as folksonomies or thesauri. In addition, the NeOn Methodology framework provides (a) a glossary of processes and activities involved in the development of ontologies, (b) two ontology life cycle models, and (c) a set of methodological guidelines for different processes and activities, which are described (i) functionally, in terms of goals, inputs, outputs, and relevant constraints; (ii) procedurally, by means of workflow specifications; and (iii) empirically, through a set of illustrative examples.
In this chapter, we present ontology design patterns (ODPs), which are reusable modeling solutions that encode modeling best practices. ODPs are the main tool for performing pattern-based design of ontologies, which is an approach to ontology development that emphasizes reuse and promotes the development of a common “language” for sharing knowledge about ontology design best practices. We put specific focus on content ODPs (CPs) and show how they can be used within a particular methodology. CPs are domain-dependent patterns, the requirements of which are expressed by means of competency questions, contextual statements, and reasoning requirements. The eXtreme Design (XD) methodology is an iterative and incremental process, which is characterized by a test-driven and collaborative development approach. In this chapter, we exemplify the XD methodology for the specific case of CP reuse. The XD methodology is also supported by a set of software components named XD Tools, compatible with the NeOn Toolkit, which assist users in the process of pattern-based design.
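To make this concrete, here is a minimal sketch (ours, not taken from the book) of a toy agent-role content pattern built with Python's rdflib, where the competency question doubles as a SPARQL test query over data instantiating the pattern; all URIs, names, and example data are invented for illustration.

```python
# Toy content-pattern sketch (illustrative; not a pattern from the book).
# The competency question "Which roles does an agent play?" is encoded as
# a SPARQL query and run against test data instantiating the pattern.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/agentrole#")  # invented namespace

g = Graph()
g.bind("ex", EX)

# Pattern vocabulary: two classes and one relation.
g.add((EX.Agent, RDF.type, OWL.Class))
g.add((EX.Role, RDF.type, OWL.Class))
g.add((EX.playsRole, RDF.type, OWL.ObjectProperty))
g.add((EX.playsRole, RDFS.domain, EX.Agent))
g.add((EX.playsRole, RDFS.range, EX.Role))

# Test data instantiating the pattern.
g.add((EX.alice, RDF.type, EX.Agent))
g.add((EX.reviewer, RDF.type, EX.Role))
g.add((EX.alice, EX.playsRole, EX.reviewer))

# Competency question as a unit test for the pattern.
cq = "SELECT ?agent ?role WHERE { ?agent a ex:Agent ; ex:playsRole ?role . }"
for agent, role in g.query(cq, initNs={"ex": EX}):
    print(agent, "plays", role)
```

This mirrors the test-driven spirit of XD: each competency question becomes an executable check against the pattern's test data.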
Interoperability on multiple levels, concerning both the ontologies themselves and their engineering activities, is a key requirement for ontology networks to be efficient, with minimal redundancy and high reuse. This requirement bears directly on software tools: they can support some interoperability levels, yet be hindered by a lack of shared models and vocabularies describing the resources to be handled, as well as the ways of handling them. Here, three examples of metalevel vocabularies are proposed, each covering at least one distinct interoperability aspect: OMV for modeling the artifacts themselves, LIR for managing a multilingual layer on top of them, and C-ODO Light for modeling collaboration-supportive life cycle management tasks and processes. All of these models lend themselves to handling by dedicated software tools and are all being employed within NeOn products.
The goal of the ontology requirements specification activity is to state why the ontology is being built, what its intended uses are, who the end users are, and which requirements the ontology should fulfill. This chapter presents detailed methodological guidelines for specifying ontology requirements efficiently. These guidelines will help ontology engineers to capture ontology requirements and produce the ontology requirements specification document (ORSD). The ORSD plays a key role during the ontology development process because it facilitates, among other activities, (1) the search and reuse of existing knowledge resources with the aim of re-engineering them into ontologies, (2) the search and reuse of ontological resources (ontologies, ontology modules, ontology statements, as well as ontology design patterns), and (3) the verification of the ontology throughout its development.
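As a rough, hedged illustration of the kinds of slots an ORSD captures (the field names below are ours, not the official template), the document can be sketched as a small record whose functional requirements are competency questions; the fisheries content echoes the book's FAO case study.

```python
# Illustrative ORSD sketch; field names are ours, not the official template.
from dataclasses import dataclass, field

@dataclass
class ORSD:
    purpose: str
    scope: str
    intended_users: list[str]
    intended_uses: list[str]
    # Functional requirements are typically phrased as competency questions.
    competency_questions: list[str]
    non_functional_requirements: list[str] = field(default_factory=list)

orsd = ORSD(
    purpose="Represent fish stocks and related time series for reporting.",
    scope="Fisheries: species, water areas, stocks, catch time series.",
    intended_users=["data managers", "fisheries analysts"],
    intended_uses=["document indexing", "time-series integration"],
    competency_questions=[
        "Of which species is a given stock composed?",
        "In which water area is a stock observed?",
    ],
    non_functional_requirements=["Reuse the FAO geopolitical ontology"],
)
print(len(orsd.competency_questions), "competency questions captured")
```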
With the goal of speeding up the ontology development process, ontology developers are reusing, as much as possible, available ontological and non-ontological resources, such as classification schemes, thesauri, lexicons, and folksonomies, that have already reached some consensus. The reuse of such non-ontological resources necessarily involves their re-engineering into ontologies. In line with this trend, this chapter presents a general method for re-engineering non-ontological resources into ontologies, taking into account that non-ontological resources are highly heterogeneous in their data models and contents. The method is based on so-called re-engineering patterns, which define a procedure that transforms the non-ontological resource components into ontology representational primitives. This chapter also describes a software library that implements the transformations suggested by the patterns. Finally, the chapter presents an evaluation of the method.
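A minimal sketch (ours) of one such transformation pattern, assuming a toy classification-scheme export with term/broader-term rows: the scheme's broader-term links become rdfs:subClassOf axioms. URIs and data are invented.

```python
# Sketch of a re-engineering pattern: classification scheme -> class hierarchy.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/scheme#")  # invented namespace
g = Graph()
g.bind("ex", EX)

# (term, broader term) rows as they might come from a thesaurus export.
rows = [("Tuna", "Fish"), ("Salmon", "Fish"), ("Fish", None)]

for term, broader in rows:
    cls = EX[term]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(term)))
    if broader:
        # Pattern core: "broader term" is re-engineered as rdfs:subClassOf.
        g.add((cls, RDFS.subClassOf, EX[broader]))

print(g.serialize(format="turtle"))
```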
This chapter presents methodological guidelines that allow engineers to reuse generic ontologies. This kind of ontology represents notions that are generic across many fields (is part of, temporal interval, etc.). The guidelines help the developer (a) to identify the type of generic ontology to be reused, (b) to find the axioms and definitions that should be reused, and (c) to adapt and integrate the selected generic ontology into the domain ontology to be developed. For each task of the methodology, a set of heuristics with examples is presented. We hope that after reading this chapter, you will have acquired some basic ideas on how to take advantage of the wealth of well-founded explicit knowledge that formalizes generic notions such as time concepts and the part-of relation.
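For instance, reusing a generic part-of ontology largely means importing its axioms, most importantly transitivity, so a reasoner can propagate parthood. A hedged sketch with rdflib and the owlrl OWL-RL reasoner follows (URIs invented).

```python
# Sketch: a reused, transitive "part of" relation propagated by a reasoner.
from rdflib import Graph, Namespace, RDF, OWL
import owlrl  # OWL-RL forward-chaining reasoner for rdflib

EX = Namespace("http://example.org/parts#")  # invented namespace
g = Graph()
g.add((EX.partOf, RDF.type, OWL.TransitiveProperty))
g.add((EX.piston, EX.partOf, EX.engine))
g.add((EX.engine, EX.partOf, EX.car))

# Materialize OWL-RL inferences, including transitivity of ex:partOf.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
print((EX.piston, EX.partOf, EX.car) in g)  # True: inferred, not asserted
```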
As large monolithic ontologies are difficult to handle and maintain, the activity of modularizing an ontology consists of identifying components (modules) of the ontology that can be considered separately while remaining interlinked with other modules. Depending on the particular application or scenario, the benefit of modularizing an ontology can be (a) to improve performance by enabling distributed or targeted processing, (b) to facilitate the development and maintenance of the ontology by dividing it into loosely coupled, self-contained components, or (c) to facilitate the reuse of parts of the ontology. In this chapter, we present a brief introduction to the field of ontology modularization. We detail the approach taken as a guideline to modularize existing ontologies and the tools available to carry out this activity.
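On the mechanical side, modules are commonly interlinked with OWL's standard owl:imports mechanism, so applications can load only the modules they need; a minimal sketch with invented module URIs:

```python
# Sketch: two modules wired into a network via owl:imports (URIs invented).
from rdflib import Graph, RDF, OWL, URIRef

CORE = URIRef("http://example.org/fisheries/core")
SPECIES = URIRef("http://example.org/fisheries/species")

species = Graph()  # a self-contained module
species.add((SPECIES, RDF.type, OWL.Ontology))

core = Graph()     # the core module declares its dependency on the other
core.add((CORE, RDF.type, OWL.Ontology))
core.add((CORE, OWL.imports, SPECIES))

print(core.serialize(format="turtle"))
```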
Ontologies are dynamic entities that evolve over time. There are several challenges associated with the management of ontology dynamics, from the adequate control of ontology changes to the identification and administration of ontology versions. Moreover, ontologies are increasingly becoming part of a network of complex relationships and dependencies, where they reuse and extend other ontologies, have associated metadata to ease sharing and reuse, are used to integrate heterogeneous knowledge bases, etc. Under these circumstances, a change in an ontology not only affects the ontology itself but may also have consequences for all its related artifacts. In this chapter, we propose methodological guidelines for carrying out the ontology evolution activity. We target different scenarios, supporting users in the ontology evolution process both from a generic perspective and in using tools that semi-automatically assist them in discovering, evaluating, and integrating domain changes to evolve ontologies. To illustrate their applicability, we describe how these guidelines have been used in real example applications.
The NeOn Toolkit is one of the major results of the NeOn project. It is a state-of-the-art, open-source, multiplatform ontology engineering environment, which provides comprehensive support for the ontology engineering life cycle of networked ontologies. It is based on an open and modular plugin architecture that allows additional plugins to be added, realizing more advanced features that support more complex ontology engineering activities. A substantial number of plugins have been developed within and outside the NeOn consortium and are available at the NeOn Toolkit homepage. The NeOn Toolkit supports the Web Ontology Language OWL 2, the ontology language specified by the W3C, and features basic editing and visualization functionality. Its user interface, especially the presentation of class restrictions, makes the NeOn Toolkit accessible to users who do not have long experience with ontologies but who know the object-oriented modeling paradigm. In this chapter, we present the feature set of the NeOn Toolkit and how to use it. A second part explains some architecture and implementation background and how new plugins can be integrated into the common platform.
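To illustrate the kind of axiom meant by "class restrictions", here is a hedged sketch (ours, built with rdflib rather than the Toolkit; all names invented) of an OWL existential restriction, which an object-oriented modeler can read roughly as a mandatory typed field.

```python
# OO reading: "class Stock { Species species; }"
# OWL axiom:  ex:Stock rdfs:subClassOf (ex:hasSpecies some ex:Species)
from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL

EX = Namespace("http://example.org/onto#")  # invented namespace
g = Graph()
g.bind("ex", EX)

restriction = BNode()  # anonymous class standing for the restriction
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.hasSpecies))
g.add((restriction, OWL.someValuesFrom, EX.Species))
g.add((EX.Stock, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```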
To manage ontology development projects properly in complex settings and to apply the NeOn Methodology correctly, it is crucial to have knowledge of the entire ontology development life cycle before starting development. The ontology project plan and schedule help the ontology development team to acquire this knowledge and to monitor project execution. To facilitate the planning and scheduling of ontology development projects, the NeOn Toolkit plugin gOntt has been developed. gOntt is a tool that supports the scheduling of ontology network development projects and helps to execute them. In addition, prescriptive methodological guidelines for scheduling ontology development projects using gOntt are provided.
This chapter presents the Kali-ma NeOn Toolkit plugin, which exploits the versatility of the C-ODO Light model to assist ontology engineers and project managers in locating, selecting, and accessing other plugins through a unified, shared interaction mode. Kali-ma offers reasoning methods for classifying and categorizing ontology design tools with a variety of criteria, including collaborative aspects of ontology engineering and activities that follow the NeOn Methodology. Furthermore, it provides means for storing selections of tools and associating them directly to development projects so that they can be shared and ported across systems involved in common engineering tasks. In order to boost Kali-ma support for third-party plugins, we are also offering an online service for the semiautomatic generation of C-ODO Light–based plugin descriptions.
There is empirical evidence that current user interfaces for ontology engineering are still inadequate in their ability to reduce task complexity for users, especially non-expert ones. Here we present a novel tool for visualizing and navigating ontologies, KC-Viz, which exploits an innovative ontology summarization method to support a 'middle-out ontology browsing' approach, where it becomes possible to navigate ontologies starting from the most information-rich nodes (i.e., key concepts). This approach is similar to map-based visualization and navigation in Geographical Information Systems, where, e.g., major cities are displayed more prominently than others, depending on the current level of granularity. Building on its powerful and empirically validated ontology summarization algorithm, KC-Viz provides a rich set of navigation and visualization mechanisms, including flexible zooming into and hiding of specific parts of an ontology, visualization of the most salient nodes, history browsing, saving and loading of customized ontology views, as well as essential interface support, such as graphical zooming, font manipulation, tree layout customization, and other functionalities.
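The toy sketch below is emphatically not the KC-Viz algorithm (which combines several empirically weighted criteria such as density and coverage); it only conveys the flavor of ranking "information-rich" nodes, here crudely approximated by subtree size in the subclass hierarchy.

```python
# Toy stand-in for ontology summarization: rank classes by subtree size.
from collections import defaultdict
from rdflib import Graph, RDFS

def key_concepts(g: Graph, top_n: int = 5):
    children = defaultdict(set)
    for sub, _, sup in g.triples((None, RDFS.subClassOf, None)):
        children[sup].add(sub)

    def subtree_size(cls, seen):
        total = 0
        for child in children.get(cls, ()):
            if child not in seen:
                seen.add(child)
                total += 1 + subtree_size(child, seen)
        return total

    scored = [(subtree_size(c, set()), c) for c in children]
    return sorted(scored, reverse=True)[:top_n]

g = Graph()
g.parse(data="""
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex: <http://example.org/> .
ex:Dog rdfs:subClassOf ex:Mammal .
ex:Cat rdfs:subClassOf ex:Mammal .
ex:Mammal rdfs:subClassOf ex:Animal .
""", format="turtle")
print(key_concepts(g))  # ex:Animal scores highest
```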
This chapter covers basic functionality pertaining to reasoning with ontologies. We first introduce general methods for detecting and resolving inconsistencies and then present three plugins that provide reasoning and query functionality: the reasoning plugin, which allows for standard reasoning tasks, such as materializing inferences and checking the consistency of ontologies; the RaDON plugin, which provides functionality for diagnosing and resolving inconsistencies in networked ontologies; and the query plugin, which allows users to query ontologies in the NeOn Toolkit via the RDF query language SPARQL.
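Outside the Toolkit, both functions can be approximated with rdflib plus the owlrl library (our sketch; URIs invented): first materialize inferences, then retrieve them with SPARQL.

```python
# Sketch: materialize inferences, then query via SPARQL (URIs invented).
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/zoo#")
g = Graph()
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))
g.add((EX.rex, RDF.type, EX.Dog))

# Materialization makes "rex is an Animal" an explicit triple.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

results = g.query("SELECT ?x WHERE { ?x a ex:Animal }", initNs={"ex": EX})
for (x,) in results:
    print(x)  # includes ex:rex thanks to the materialized inference
```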
In this chapter, we illustrate the work conducted at the Food and Agriculture Organization of the United Nations (FAO) on the creation of a network of ontologies about fisheries, developed with NeOn technologies and methodologies. The network covered the main thematic areas needed to talk about fish stocks (often referred to as aquatic resources) and included data sources of various types: reference data for time series, thesauri for document indexing, actual time series, and an existing well-known ontology maintained by FAO (the geopolitical ontology), which was reused. This network of ontologies was also used within a prototypical web-based application. After describing the methodologies used to create the network, and its contents and features, we draw some conclusions and highlight the lessons learned during the process.
Since the use of electronic invoicing in business transactions was approved by the EU back in 2002, its application in Europe has grown considerably. However, despite the existence of standards like EDIFACT (http://www.unece.org/trade/untdid/welcome.htm) or UBL (http://www.oasis-open.org/committees/ubl), widespread take-up of electronic invoicing has been hindered by the enormous heterogeneity of proprietary solutions. In this chapter, we describe an approach toward addressing the interoperability problem in electronic invoice exchange. We especially focus on networked ontologies as the main enablers of such an approach, where networked ontologies serve as a semantic gateway for the transformation of invoice data between different formats and models.
In recent years, increased attention has been paid to what is called semantic interoperability in eHealth, with the interoperable identification and description of drugs at its very core. In spite of efforts toward having a common way to describe drugs, there is no universal nomenclature, but rather several attempts like SNOMED CT (http://www.ihtsdo.org/snomed-ct/) or the biomedical ontologies in the OBO Foundry (http://www.obofoundry.org/) and BioPortal (http://bioportal.bioontology.org/). This chapter describes an approach that applies NeOn technology to bridge the gap between different ontologies describing pharmaceutical products.
... Especially for a complex domain (such as HCI), representing its knowledge as a single ontology results in a large and monolithic ontology that is hard to manipulate, use, and maintain [Suárez-Figueroa et al., 2012]. On the other hand, representing each sub-domain in isolation is a costly task that leads to a very fragmented solution that is again hard to handle [Ruy et al., 2016]. In such cases, building an ontology network (ON) is an adequate solution [Suárez-Figueroa et al., 2012; Ruy et al., 2016]. ...
... An ON is a collection of ontologies that are related through a variety of relationships, such as modularization, alignment, and dependency, and that share concepts and relations with other ontologies. Accordingly, a networked ontology is an ontology included in the network, sharing relationships with a potentially large number of other ontologies and thus forming a network of interlinked semantic resources [Suárez-Figueroa et al., 2012; Costa et al., 2020]. ...
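Some of these meta-relationships already have standard OWL vocabulary (owl:imports for dependency/inclusion, owl:priorVersion for versioning); alignment links are usually expressed with dedicated mapping vocabularies, so the alignedWith property below is invented purely for illustration, as are the ontology URIs.

```python
# Sketch: stating ontology-network meta-relationships in RDF.
from rdflib import Graph, Namespace, RDF, OWL

EX = Namespace("http://example.org/net#")  # invented URIs throughout
g = Graph()

core, core_v2, geo = EX.core, EX.core_v2, EX.geopolitical
for onto in (core, core_v2, geo):
    g.add((onto, RDF.type, OWL.Ontology))

g.add((core_v2, OWL.priorVersion, core))  # versioning link
g.add((core_v2, OWL.imports, geo))        # dependency/inclusion link
g.add((core_v2, EX.alignedWith, geo))     # alignment link (made-up property)
```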
Immersive technologies have emerged as a new type of interactive system that aims to provide users with immersive experiences. They have been adopted in various fields and are gradually becoming part of our lives. UX is a key quality attribute for evaluating or modeling such experiences. However, when it comes to immersive experiences, evaluating UX is particularly challenging because the user should not be interrupted to provide feedback. As a step toward addressing this issue, we have explored using ontologies from an ontology network to support the evaluation of immersive experiences. In this work, we adopted the Human-Computer Interaction Ontology Network (HCI-ON) and used an extract containing concepts from some of its networked ontologies to develop the User eXperience evaluation based on Ontology Network (UXON), an ontology-based tool that supports UX experts in evaluating immersive experiences based on data recorded in interaction logs. HCI-ON is a framework for organizing knowledge of the HCI domain, offering a general understanding of the field, regardless of specific solutions. UXON was used to evaluate the UX of Compomus, an immersive application that supports collaborative music composition. UXON extracts data from the application interaction logs, calculates UX metrics, and provides consolidated data and information in graphs and tables. We conducted a study and collected feedback from the UXON developer and three UX experts who used the tool. Results showed that using networked ontologies to develop a tool to support UX evaluation is feasible and valuable. In summary, the ontologies helped at the conceptual level by offering a basis to define the system's structural model, and at the implementation level by assigning semantics to data to make inferences about UX. Based on the UX experts' perceptions, UXON was considered a promising system: beneficial, helpful, and easy to use. The conceptualization used to develop UXON was evaluated by HCI experts and was considered adequate and understandable, with the potential to be used by other people to solve HCI evaluation problems.
... In addition, the knowledge context contains our experience in the use of gUFO, a lightweight implementation of the Unified Foundational Ontology suitable for Semantic Web OWL 2 DL applications. We also use our experience in design science research (Hevner and Chatterjee, 2010), NeOn (Suárez-Figueroa et al., 2012), a methodology for building ontology networks, and SABiO (Almeida Falbo, 2014), a systematic approach for building ontologies, to conduct the project activities. ...
... To perform the design cycle activities, we follow a customized version of the NeOn methodology (Suárez-Figueroa et al., 2012) for building ontology networks, combined with a customized version of the SABiO methodology (Almeida Falbo, 2014) for building ontologies, proposed in (Sales, 2019). We provide a detailed description of SABiO and NeOn in sections 1.3.1 and 1.3.2, ...
... To design and implement each networked ontology proposed in this thesis, we follow the knowledge acquisition and ontology formalization steps of the SABiO methodology (Almeida Falbo, 2014) (see section 1.3.1). To integrate the ontologies into OntoFINE, we use a customized version of the NeOn methodology (Suárez-Figueroa et al., 2012) for building ontology networks (see section 1.3.2), following the best-suited scenario for each context and set of needs (see section 3.1 of chapter 3). ...
Over the past few years, the financial industry has been disrupted by innovations that have shaken up the world of money, payments, and economic exchanges. These advances, which include blockchain technologies, cryptocurrencies, programmable money, and the tokenization of assets, create new forms of trust, money, and innovative models for economic exchanges. Although they have the potential to contribute to the evolution and efficiency in the provision of financial products and services, they pose significant challenges regarding the safeguarding of financial stability, the definition of regulatory frameworks and proper governance models. The lack of conceptual clarity in the financial sector domain is arguably one of the reasons behind these problems. An insufficient level of understanding and agreement about a domain harms the communication among the different actors, and consequently, the definition of laws, regulations, and proper governance models. Furthermore, it hinders the ability to properly integrate information and provide semantic interoperability. Consequently, it becomes more difficult to manage risks.
In this thesis, we address these issues by means of ontologically well-founded reference conceptual models to make the nature of the conceptualizations explicit, as well as to safely establish the correct relations between them, thereby supporting semantic interoperability. We present the Ontology Network in Finance and Economics (OntoFINE), a network of reference ontologies, grounded in the Unified Foundational Ontology, which provides ontological foundations on money, trust, value, risk, and economic exchanges. We demonstrate the usability and relevance of OntoFINE by applying it to solve real-world problems in the areas of decentralized finance, requirements engineering, enterprise modeling, and game theory.
... According to (Suárez-Figueroa et al. 2012), there exist three different possibilities when ontologies are created: ...
... We applied the NeOn methodology (Suárez-Figueroa et al. 2012), which specifies nine scenarios that incorporate commonly occurring circumstances in the ontological development process, e.g., when available ontologies need to be re-engineered, aligned, modularized, or integrated with non-ontological resources, putting a singular emphasis on re-engineering and reusing knowledge resources (ontological and non-ontological). Further details about this methodology and its related scenarios are described in (Suárez-Figueroa et al. 2012). Next, we show details of the different modules that define our ontology network. ...
Multiple efforts have been made worldwide around diverse aspects of land administration. However, the notorious heterogeneity of land administration data and systems remains a longstanding challenge to developing a harmonized vision. Adoption of traditional Spatial Data Infrastructures is not enough to overcome this challenge, since the heterogeneity of data sources creates needs related to harmonization, interoperability, sharing, and integration in land administration development. This paper proposes a graph-based representation of knowledge for integrating multiple heterogeneous data sources (tables, shapefiles, geodatabases, and WFS services) belonging to two Colombian agencies within a decentralized land administration scenario. These knowledge graphs are built on an ontology-based knowledge representation using national and international standards for land administration. Our approach aims to prevent data isolation, enable cross-dataset integration, produce machine-processable data, and facilitate the reuse and exploitation of multi-jurisdictional datasets in a single approach. A real case study demonstrates the applicability of the deployed land administration data cycle.
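A minimal sketch (ours; the vocabulary URIs are placeholders rather than the land-administration standards the paper uses) of the kind of lifting involved: one tabular record becomes RDF resources that datasets from other agencies can link to.

```python
# Sketch: lifting a tabular land-administration record into RDF.
from rdflib import Graph, Namespace, Literal, RDF

LA = Namespace("http://example.org/landadmin#")  # placeholder vocabulary
g = Graph()

record = {"parcel_id": "P-001", "owner": "Ana", "municipality": "Bogotá"}

parcel = LA["parcel/" + record["parcel_id"]]
g.add((parcel, RDF.type, LA.Parcel))
g.add((parcel, LA.ownedBy, LA["party/" + record["owner"]]))
g.add((parcel, LA.locatedIn, Literal(record["municipality"])))

print(g.serialize(format="turtle"))
```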
... For that purpose, a dedicated research design was divided into three phases with respective methods (Figure 1). Phase 1 is based on qualitative data collection from desk research, which, together with the outcomes of interviews with four industry professionals, acts as the process input for a Systems Engineering process (Lightsey, 2001) in Phase 2. During Phase 2, a novel assessment framework was assembled based on Ontology Engineering guidelines (Suárez-Figueroa et al., 2012) to develop a new ontology for building circularity assessment. Finally, the usability of the prototype is evaluated in Phase 3. ...
... In this sub-section we propose a novel Building Circularity Assessment Ontology following the Ontology Engineering guidelines presented by Suárez-Figueroa et al. (2012) and the Linked Open Terms (LOT) methodology. The purpose of the ontology, as well as its scope, implementation language, and functional requirements, have been defined during the ontology requirement specification stage. ...
... The ontology implementation stage comprises three phases: conceptualization, encoding, and evaluation. Conceptualization refers to organizing and structuring the information obtained during the acquisition process into meaningful models at the knowledge level, according to the ontology requirements specification document (Suárez-Figueroa et al., 2012). ...
The construction industry is a significant source of pollution and consumer of natural resources. As the damage to the environment is rapidly growing, the criticism towards the linear economy model is increasing. Circular Economy is perceived as an environmentally friendly alternative. However, Circular Economy implementation in the industry is still in its infancy. Researchers agree that the early design phase plays a significant role in building circularity, but early-stage circularity assessment is not a common practice. Therefore, this paper investigates the technical needs for early circularity assessment and proposes a novel assessment framework relying on Semantic Web and Linked Building Data technologies. A new Building Circularity Assessment Ontology (BCAO) is proposed to structure the scattered heterogenous manufacturer product data needed for the assessment. The ontology and the application framework are evaluated in a use case that reveals the potential in guiding the design decisions to more circular alternatives.
... This approach offers nine different scenarios to build an ontology model. The scenarios are: (1) From specification to implementation, (2) Reusing and reengineering non-ontological resources, (3) Reusing ontological resources, (4) Reusing and reengineering ontological resources, (5) Reusing and merging ontological resources, (6) Reusing, merging, and reengineering ontological resources, (7) Reusing ontology design patterns, (8) Restructuring ontological resources, and (9) Localizing ontological resources [17]. ...
... Phase 3: Design research to build a new domain ontology model. Referring to the NeOn methodology framework, this study combines Scenario 2 and Scenario 4 [17]. This phase includes four steps: − Create a mapping from the necessary database contents (Phase 1) to the existing ontologies (Phase 2). ...
The ACM/IEEE Computing Curricula 2020 includes the study of relational databases in four of its six disciplines. However, a domain ontology model of a multidisciplinary database course does not exist. Therefore, the current study aims to build a domain ontology model for the multidisciplinary database course. The research process comprises three phases: a review of database course contents based on the ACM/IEEE Computing Curricula 2020, a literature review of relevant domain ontology models, and a design research phase using the NeOn methodology framework. The ontology building involves the reuse and re-engineering of existing models, along with the construction of some classes from a non-ontological resource. The approach to ontology reuse and re-engineering demonstrates ontology reusability. The final domain ontology model is then evaluated using two ontology syntactic metrics: Relationship Richness and Information Richness. These metrics reflect the diversity of relationships and the breadth of knowledge in the model, respectively. In conclusion, the current research contributes to the Computing Curricula by providing an ontology model for a multidisciplinary database course. The model, developed through ontology reuse and re-engineering and the integration of non-ontological resources, exhibits more diverse relationships and represents a broader range of knowledge.
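For orientation, such schema metrics are simple ratios; for instance, Relationship Richness is commonly defined (e.g., in the OntoQA metric suite; the paper's exact definition may differ, and Information Richness is less standardized, so it is not reproduced here) as the share of non-inheritance relationships among all relationships:

```latex
% Relationship Richness, as in the OntoQA suite (assumption: the paper's
% definition matches); |P| = non-inheritance relationships, |SC| =
% subclass (is-a) relationships.
\mathrm{RR} = \frac{|P|}{|SC| + |P|}
```

A value near 1 indicates that knowledge is conveyed through diverse relationships rather than through the class hierarchy alone.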
... The NeOn Project makes it possible to manage multiple ontologies in a collaborative, dynamic, and evolutionary way. Instead of proposing a single path for developing an ontology, NeOn proposes nine scenarios that can be used individually or in combination to cover common problems occurring in ontology engineering, as described by [Suárez-Figueroa et al. 2012], namely:
... In this phase, the reuse of existing ontological resources was assessed, mainly those with a high level of granularity and development quality. The classification by type and level of granularity was specified by [Suárez-Figueroa et al. 2012] as: (i) General/Common: dealing with diverse or common subjects; (ii) Domain: establishing concepts of specific issues or contexts. As established by the NeOn methodology, some tasks can be grouped to fulfill this phase. ...
Brazilian municipalities face many difficulties in obtaining the strategic information needed for decision-making, owing to a decentralized management model and the lack of interoperable systems that would enable complex analyses. With the novel coronavirus (COVID-19) pandemic, obtaining data quickly and reliably became even more necessary, given the escalation of the disease and the need for rapid interventions. Therefore, through the construction of Semantic Knowledge Graphs (GCS) from the integration of the SIM (Mortality Information System) and e-SUS (Notifica) databases, the main objective of this work was to mitigate the deficiencies in extracting important indicators on deaths, vaccinations, and hospitalizations. The NeOn methodology and the OBDI (Ontology-Based Data Integration) approach were used in the construction of ONTOVID. To assess the accuracy of this automatic approach, the data obtained in the integration were presented to and validated by managers of the Camaçari Health Department, who identified some divergences that had not been detected by the manual information-collection process previously performed by the team.
... Ontology Development (RQ4): According to the publications, the development of only three (8.57%) out of 35 ontologies followed an ontology engineering method (see Figure 6): #25 adopted Methontology (Fernández-López et al., 1997), #28 used Methontology (Fernández-López et al., 1997) and NeOn (Gómez-Pérez and Suárez-Figueroa, 2009; Suárez-Figueroa et al., 2012), and #30 applied NeOn (Gómez-Pérez and Suárez-Figueroa, 2009; Suárez-Figueroa et al., 2012). Although other publications (#02, #11, #12, #13 and #18) mention the use of some guidelines to develop the ontologies, the methods are not clearly presented or referred to. ...
Human-Computer Interaction (HCI) is a multidisciplinary area that involves a diverse body of knowledge and a complex landscape of concepts, which can lead to semantic problems, hampering communication and knowledge transfer. Ontologies have been successfully used to solve semantic and knowledge-related problems in several domains. This paper presents a systematic literature review that investigated the use of ontologies in the HCI domain. The main goal was to find out how HCI ontologies have been used and developed. 35 ontologies were identified. As a result, we noticed that they cover different HCI aspects, such as user interface, the interaction phenomenon, pervasive computing, user modeling / profile, HCI design, interaction experience, and adaptive interactive systems. Although there are overlaps, we did not identify reuse among the 35 analyzed ontologies. The ontologies have been used mainly to support knowledge representation and reasoning. Although ontologies have been used in HCI for more than 25 years, their use became more frequent in the last decade, when ontologies came to address a higher number of HCI aspects and to be represented as both conceptual and computational models. Concerning how ontologies have been developed, we noticed that some good practices of ontology engineering have not been followed. Considering that the quality of an ontology directly influences the quality of the solution built on it, we believe that there is an opportunity for HCI and ontology engineering professionals to get closer to build better and more effective ontologies, as well as ontology-based solutions.
... After analyzing our corpus, we identified that current works use different procedures for handling semantic conflation. To classify them, we adapted the glossary proposed in (Suárez-Figueroa et al., 2012) to establish the following source-independent definitions for the activities identified in our sample of articles. ...
... More specifically, Closa et al. (2017) presented the application of the W3C PROV ontology for describing provenance at the dataset, feature, and attribute levels in a geospatial data conflation Web Processing Service (WPS) instance. Wiemann and Bernard (2016) considered employing a set of ontologies and vocabularies, as well as endpoints, for harmonizing the description and usage of feature relations in spatial data fusion. Giannopoulos et al. (2014) mentioned RDF vocabularies for representing geospatial features, and Gomez-Perez et al. (2008) developed an ontology to conceptualize geospatial features and used it to discover automatic mappings. Moreover, the contributions described by Du et al. (2017), Alonso and Datcu (2015), Suárez-Figueroa et al. (2012), Tong et al. (2009), and Brodeur and Bédard ...
Manifold providers from a wide range of initiatives (private organizations, volunteered efforts, social media, etc.) offer enormous amounts of data with geospatial characteristics. These efforts of many data providers entail multiple data scenarios and imply many viewpoints about the same feature, involving different representations, accuracy, models, vocabularies, etc. Various techniques and processes are employed within the conflation research area to deal with these heterogeneity problems related to diverse data sources. However, semantic conflation has not been addressed widely in the literature, unlike geometrical conflation. Hence, it is unclear what issues semantic conflation tries to solve and what activities, methods, metrics, and techniques have been used in existing GIScience investigations. In this article, we carry out a systematic review of approaches that focus on semantic aspects of geospatial data conflation. In addition, we analyze a wide selection of contributions following different criteria to depict a detailed status of semantic conflation in GIScience. Our contributions are: (i) an overview of semantic conflation application domains, (ii) a characterization of semantic issues within these domains, (iii) the recognition of gaps and weaknesses in the collected research, and (iv) several open challenges and opportunities for next steps in this GIScience research area.
... Requirements for ontology engineering must be defined to ensure the expected quality and suitability for the application case [27] and are a fundamental part of established methodologies for ontology engineering [84,187]. Dedicated frameworks such as the Ontology Requirements Specification Document (ORSD) [219] support requirements analysis. The requirements contain the use case specification, the data exchange identification, the purpose and scope of the ontology, as well as FRs and NFRs. ...
Effective collaboration among stakeholders in construction projects is vital to maintain consistency and accurate interpretation of information across multiple systems. In particular, integrating and exchanging heterogeneous and distributed data is a challenge in the Architecture, Engineering, and Construction (AEC) domain. With Building Information Modeling (BIM), semantic and geometrical data is streamlined into building models using the Industry Foundation Classes (IFC). The exchange of IFC building models can be validated using requirements specified in Information Delivery Specifications (IDS). Approaches towards integrating other data with the IFC models are provided with information containers and Semantic Web technologies. Information containers are persistent collections of information on arbitrary aggregation levels that are addressable in a storage system. One specific container for storing and exchanging heterogeneous distributed building data is the standardized Information Container for linked Document Delivery (ICDD). ICDD enables storing payload in arbitrary formats within a container or referencing documents from external sources. The container standard of ICDD employs Semantic Web technologies to bundle and link these data, achieving a context-aware exchange. However, requirement definitions and validation mechanisms, such as the IDS for IFC files, do not exist for heterogeneous data in ICDD containers. Consequently, this thesis focuses on exchanging and validating not only specific data but the entire ICDD container to ensure information consistency and coherence across multiple stakeholders over the complete building's life cycle, achieving a reliable BIM information exchange. The introductory part of this thesis is structured as follows. First, the existing literature on data exchange in the AEC industry is reviewed. Particular emphasis is placed on investigating Semantic Web technologies for facilitating collaboration. Second, the ICDD, a possible solution for data exchange employing Semantic Web technologies, is introduced and described in detail. Third, data validation approaches in the construction industry are reviewed, and the Semantic Web constraint language Shapes Constraint Language (SHACL) is presented. Based on the introductory part, the main part of the thesis provides a concept for a web-based ICDD platform for exchanging and validating ICDD containers. The ICDD containers are embedded into the information delivery process using the developed Information Delivery Processes Ontology (IDPO). This ontology has been designed with an established ontology engineering methodology after analyzing the information delivery in AEC based on the standards' framework. The IDPO defines the data exchange by the stakeholders and the requirements a container has to fulfill, based on defined validation strategies for the conformance, structure, content, and linking of ICDD containers. The requirements are encoded into SHACL shapes that can be executed in a SHACL processor on the implemented platform. A concept for storing and executing SHACL shapes on the platform is designed, and further necessary technical extensions of the ICDD data model are provided. The primary deliverables of the thesis are the IDPO ontology and the platform prototype for maintaining ICDD containers. The results of the conceptualization and implementation are demonstrated and discussed in two use cases (UCs) from the design-to-construction phase transition (UC1) and the operation phase (UC2).
UC1 presents a BIM-based scheduling procedure for a building, while UC2 demonstrates operating an infrastructure asset management procedure integrating model data and historical data from inspections for the maintenance planning of a bridge. The two distinct use cases are discussed to demonstrate the feasibility of using ICDD in different life cycle stages of buildings and structures and to provide a proof of concept for the developed IDPO ontology and the implemented ICDD platform prototype. Finally, conclusions are drawn from the research. The results indicate that the concept of ICDD is viable for establishing a reliable container-based BIM information exchange. They also elucidate how the current state of standardization of ICDD hinders its implementation by practitioners in the AEC industry.
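As a small, hedged illustration of the validation mechanism (ours, using the pyshacl library rather than the thesis's platform; the shape and data are invented, not the IDPO requirements), a SHACL shape can be executed against container metadata as follows.

```python
# Sketch: validating container content against a SHACL shape with pyshacl.
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/icdd#> .

ex:DocumentShape a sh:NodeShape ;
    sh:targetClass ex:Document ;
    sh:property [
        sh:path ex:filename ;
        sh:minCount 1 ;   # every document must declare a filename
    ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.org/icdd#> .
ex:doc1 a ex:Document .   # missing ex:filename -> violation expected
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False
print(report)    # human-readable violation report
```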
... VHBIEO extended EXPO (Soldatova & King, 2006) and reused terms and concepts from spatial-temporal event-driven modeling (STED) (Saeidi et al., 2018), the ontology to represent energy-related occupant behavior in buildings (DNAs) (Hong et al., 2015), the ifcOWL ontology (Pauwels & Terkaj, 2019), the semantic sensor network ontology (SSN) (Compton et al., 2012), the survey ontology (SUR) (Scandolari et al., 2021), and the units of measurement ontology (UO) (Gkoutos et al., 2012). The development of VHBIEO followed well-defined ontology development approaches, namely ONTOLOGIES (Uschold & Gruninger, 1996), METHONTOLOGY (Fernández et al., 1997), Ontology Development 101 (Noy & McGuinness, 2001), and NeOn (Suárez-Figueroa et al., 2012). It comprised three major steps: initiation, construction, and evaluation. ...
Virtual reality (VR) offers promise as a tool for building performance simulations, especially when considering human-building interactions in buildings or spaces still under design. However, the absence of standardized data protocols impedes the consistent sharing of VR-related experiments and findings. This makes advancing VR experimentation as a reliable method for studying human-building dynamics challenging. The authors introduced the Virtual Human-Building Interaction Experimentation Ontology (VHBIEO) to address the challenge. VHBIEO seeks to standardize experimentation details as a domain-specific ontology, enhancing their interoperability. It includes essential experimentation concepts and employs semantic web technologies to ensure machine readability. Moreover, it integrates an application view (APV) to tailor details to specific experiments. Using VHBIEO-based metadata, this paper presents a case study aiming to standardize experiments that validate thermal sensations in immersive virtual environments (IVE), encompassing experimental protocol, variables, design, and data gathering. By exploring the main characteristics of VHBIEO-based metadata, the authors discuss its potential to improve the reliability of human-building interaction research.
... In addition to the LOT framework, guidance from Ontology Development 101 by Noy & McGuinness [30] is applied to develop class definitions, hierarchies, properties, and class instantiations. Methods for knowledge acquisition follow the NeOn Methodology [31]. ...
This paper presents the Hydrogen Ontology (HOLY), a domain ontology modeling the complex and dynamic structures of hydrogen-based markets. The hydrogen economy has become a politically and economically crucial sector for the transition to renewable energy, accelerating technological and socio-economic innovations. However, the attainment of market insights requires a large variety of informational concepts which are predominantly found in unstructured text data. HOLY provides the necessary structure for the representation of these concepts. Through a top-down approach, HOLY defines taxonomies based on a hierarchical structure of products and applications. In addition, to ensure reusability, the ontology incorporates components from established ontologies in its structure. As a result, HOLY consists of over 100 classes defining information about organizations, projects, components, products, applications, markets, and indicators. Hence, our work contributes to the systemic modeling of the hydrogen domain with a focus on its value chain. Formally, we represent and validate the ontology with Semantic Web technologies. HOLY includes lexical-semantic information (e.g., synonyms, hyponyms, definitions, and examples) to simplify data integration into knowledge acquisition systems. Therefore, we provide a foundation for the retrieval, storage, and delivery of market insights. A first application based on HOLY at the Fraunhofer IIS offers an up-to-date market overview of developments in the fuel cell environment.
... For large and complex domains, such as Software Engineering, ontologies can be organized in an Ontology Network (ON), which consists of a set of ontologies (the networked ontologies) connected to each other through relationships in such a way to provide a comprehensive and consistent conceptualization [31]. In this work, we use SEON [23], an ON that describes various subdomains of Software Engineering and organizes its ontologies according to the aforementioned layers. ...
Organizations have adopted Continuous Software Engineering (CSE) practices aiming at making software development faster, iterative, integrated, continuous, and aligned with the business. In this context, they often use different applications (e.g., project management tools, source repositories, and quality assessment tools) that store valuable data to support daily activities and decision-making. However, data items often remain spread in different applications that adopt different data and behavioral models, posing a barrier to integrated data usage. As a consequence, data-driven software development is uncommon, missing valuable opportunities for product and process improvement. In this paper, we explore an ontology network addressing CSE aspects to develop a data integration solution in which networked ontologies are the basis to build reusable and autonomous software components that work together in a system federation to provide meaningful integrated data. We achieve a comprehensive and flexible solution that can be used as a whole or partially, by extracting only the components related to the subdomains of interest.
... For large and complex domains, ontologies can be organized in an ontology network (ON), which consists of a set of ontologies connected to each other through relationships in such a way to provide a comprehensive and consistent conceptualization [47]. Thus, an ON contributes to representing and growing knowledge of a domain by connecting several subdomains inside the ON or different domains, in a network of ONs. ...
Developing interactive systems is a challenging task that involves concerns related to human-computer interaction (HCI), such as usability and user experience. Therefore, HCI design is a core issue to the quality of such systems. HCI design often involves people with different backgrounds (e.g., Arts, Software Engineering, Design). This makes knowledge transfer a challenging issue due to the lack of a common conceptualization about HCI design, leading to semantic interoperability problems, such as ambiguity and imprecision when interpreting shared information. Ontologies have been acknowledged as a successful approach to represent domain knowledge and support knowledge-based solutions. Hence, in this work, we propose to explore the use of ontologies to represent structured knowledge of HCI design and improve knowledge sharing in this context. We developed the Human-Computer Interaction Design Ontology (HCIDO), which is part of the Human-Computer Interaction Ontology Network (HCI-ON) and is connected to the Software Engineering Ontology Network (SEON). By making knowledge related to the HCI design domain explicit and structured, HCIDO helped us to develop KTID, a tool that aims to support capturing and sharing knowledge to aid in HCI design by allowing HCI designers to annotate information about design choices in design artifacts shared with HCI design stakeholders. Preliminary results indicate that the tool can be particularly useful for novice HCI designers.
... Therefore, several methodologies were analyzed [17,24,25]. The development of the ontology presented in this research was guided by the methodology proposed by Noy and McGuinness, which has been widely adopted [26]. This is the most used or cited methodology for designing an ontology [27]. ...
The COVID-19 pandemic has caused the deaths of millions of people around the world. The scientific community faces a tough struggle to reduce the effects of this pandemic. Several investigations dealing with different perspectives have been carried out. However, it is not easy to find studies focused on COVID-19 contagion chains. A deep analysis of contagion chains may contribute new findings that can be used to reduce the effects of COVID-19. For example, some interesting chains with specific behaviors could be identified and more in-depth analyses could be performed to investigate the reasons for such behaviors. To represent, validate and analyze the information of contagion chains, we adopted an ontological approach. Ontologies are artificial intelligence techniques that have become widely accepted solutions for the representation of knowledge and corresponding analyses. The semantic representation of information by means of ontologies enables the consistency of the information to be checked, as well as automatic reasoning to infer new knowledge. The ontology was implemented in the Web Ontology Language (OWL), which is a formal language based on description logics. This approach could have a special impact on smart cities, which are characterized as using information to enhance the quality of basic services for citizens. In particular, health services could take advantage of this approach to reduce the effects of COVID-19.
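As an illustration of the kind of chain analysis such a representation enables (our sketch, not the paper's ontology; URIs and data invented), a SPARQL 1.1 property path can traverse a contagion chain of arbitrary length.

```python
# Sketch: traversing a contagion chain with a SPARQL 1.1 property path.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/covid#")  # invented namespace
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/covid#> .
ex:bob   ex:infectedBy ex:alice .
ex:carol ex:infectedBy ex:bob .
ex:dave  ex:infectedBy ex:carol .
""", format="turtle")

# '+' follows ex:infectedBy over one or more hops.
q = "SELECT ?p WHERE { ?p ex:infectedBy+ ex:alice }"
for (p,) in g.query(q, initNs={"ex": EX}):
    print(p)  # ex:bob, ex:carol, ex:dave
```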
... (1) ontological resources and (2) non-ontological resources. To facilitate future research collaborations, various scenarios could be used, including the following examples: reusing and merging ontological resources; reusing, merging, and re-engineering ontological resources; reusing ontology design patterns; restructuring ontological resources; or localizing ontological resources (Suárez-Figueroa et al., 2012). ...
Ontology-based and knowledge-based systems typically underpin e-learning recommender systems. However, ontology use in such systems has not been studied in systematic detail. Therefore, this research examines the development and evaluation of ontology-based recommender systems. The study also discusses technical ontology use and the recommendation process. We identified multidisciplinary ontology-based recommender systems in 28 journal articles. These systems combined ontology with artificial intelligence, computing technology, education, educational psychology, and the social sciences. Student models and learning objects remain the primary ontology uses, followed by feedback, assessments, and context data. Currently, the most popular recommendation item is the learning object, but learning paths, feedback, and learning devices could be future considerations. The recommendation process is reciprocal and can be initiated either by the system or by students. Standard ontology languages are commonly used, but standards for student profiles and learning object metadata are rarely adopted. Moreover, ontology-based recommender systems seldom follow a methodology for building ontologies and hardly use other ontology methodologies. Similarly, none of the primary studies described ontology evaluation methodologies; instead, the systems are evaluated with simulated (non-real) students, algorithmic performance tests, statistics, questionnaires, and qualitative observations. In conclusion, the findings support the implementation of ontology methodologies and the integration of ontology-based recommendations into existing learning technologies. The study also promotes the use of recommender systems in social science and humanities courses, non-higher education, and open learning environments.
... However, producing a single large monolithic HCI ontology is unfeasible. Thus, we advocate that ontologies should be organized in an ontology network (Suárez-Figueroa et al., 2012). This way, knowledge is better structured and ontologies reuse concepts from one another, keeping shared concepts consistent and decreasing overlap problems. ...
Human-Computer Interaction (HCI) is a complex communication phenomenon involving human beings and computer systems that gained large attention from industry and academia with the advent of new types of interactive systems (mobile applications, smart cities, smart homes, ubiquitous systems and so on). Despite of its importance, there is still a lack of formal and explicit representations of what the HCI phenomenon is. In this paper, we intend to clarify the main notions involved in the HCI phenomenon, by establishing an explicit conceptualization of it. To do so, we need to understand what interactive computer systems are, which types of actions users perform when interacting with an interactive computer system, and finally what human–computer interaction itself is. The conceptualization is presented as a core reference ontology, called HCIO (HCI Ontology), which is grounded in the Unified Foundational Ontology (UFO). HCIO was evaluated using ontology verification and validation techniques and has been used as core ontology of an HCI ontology network.
... For large and complex domains, ontologies can be organized in an ontology network (ON), which consists of a set of ontologies connected to each other through relationships in such a way as to provide a comprehensive and consistent conceptualization [30]. The ontology presented in this paper (HCIDO) reuses concepts from (and, thus, is connected to) two ONs: the Human-Computer Interaction Ontology Network (HCI-ON) [9], which contains ontologies addressing HCI sub-domains (e.g., HCI phenomenon, HCI Evaluation), and the Software Engineering Ontology Network (SEON) [24], which includes ontologies addressing SE sub-domains (e.g., Software Requirement, Software Process, System and Software). ...
Developing interactive systems is a challenging task that involves concerns related to human-computer interaction (HCI), such as usability and user experience. Therefore, HCI design is a core issue when developing such systems. It often involves people with different backgrounds (e.g., Arts, Software Engineering, Design), which makes knowledge transfer challenging. Ontologies have been acknowledged as a successful approach to representing domain knowledge and supporting knowledge-based solutions. Hence, in this work, we propose to explore ontologies to represent structured knowledge and improve knowledge sharing in HCI design. We briefly present the Human-Computer Interaction Design Ontology (HCIDO), a reference ontology that addresses HCI design aspects connecting HCI and Software Engineering concerns. By making knowledge related to the HCI design domain explicit and structured, HCIDO has helped us develop KTID, a tool that aims to support capturing and sharing useful knowledge to aid in HCI design. Preliminary results indicate that the tool may be particularly useful for novice HCI designers.
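To make the networking mechanism mentioned above concrete, the following is a minimal sketch, with purely hypothetical IRIs, of how an ontology such as HCIDO could declare its connections to modules of two ontology networks via owl:imports (shown here with rdflib):

```python
# Minimal sketch (purely hypothetical IRIs): an ontology declares its links
# to modules of two ontology networks via owl:imports, so reused concepts
# resolve to their networked definitions.
from rdflib import Graph, URIRef, RDF
from rdflib.namespace import OWL

HCIDO = URIRef("http://example.org/hcido")          # hypothetical ontology IRI
HCI_ON = URIRef("http://example.org/hci-on/hcio")   # hypothetical HCI-ON module
SEON = URIRef("http://example.org/seon/process")    # hypothetical SEON module

g = Graph()
g.add((HCIDO, RDF.type, OWL.Ontology))
g.add((HCIDO, OWL.imports, HCI_ON))  # reuse HCI concepts
g.add((HCIDO, OWL.imports, SEON))    # reuse software engineering concepts
print(g.serialize(format="turtle"))
```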
... EBTOnto can therefore be considered an application ontology [21], whose domain and usage scenario are well defined. Its construction was carried out following the NeOn methodology [22]. This methodology defines nine possible scenarios, each of which captures a possible part of the ontology development process: ...
The acquisition and maintenance of non-technical skills by pilots are fundamental factors in the prevention of aviation accidents. Aviation authorities are promoting air crew training carried out through simulator sessions using scenarios specifically designed to develop and assess pilots' global performance in such skills. When designing custom flight training scenarios, choosing the correct events and conditions from the myriad of possible combinations, with respect to their potential utility in training specific competencies, is a costly task that depends entirely on highly specialized expert knowledge. In this paper, we present EBTOnto, an OWL DL ontology that makes it possible to formalize this knowledge and other useful data from real cases, laying the foundations for a semantic knowledge base of scenarios for airline pilot training. Previous advances in this matter and possible applications of this system are reviewed. EBTOnto is built on top of a source validated by experts, the Evidence-Based Training Implementation Guide by the International Air Transport Association, and then checked using an automatic reasoner and a database of 37,568 aviation safety incidents extracted from the widely regarded Aviation Safety Reporting System by the U.S. National Aeronautics and Space Administration. The results suggest that it is possible to classify real aviation scenarios in terms of non-technical competencies and to filter useful incident reports for the design and enrichment of these training scenarios. EBTOnto opens up new possibilities for interoperability between incident databases and training organizations, and smooths the path to representing, sharing, and generating custom simulation training scenarios for pilots based on real data.
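As an illustration of the reasoner check mentioned above, here is a minimal sketch, assuming a hypothetical local copy of the ontology file, of loading an OWL DL ontology with owlready2 and classifying it with the bundled HermiT reasoner:

```python
# Minimal sketch: load an OWL DL ontology and classify it with an automated
# reasoner. The file path is a hypothetical local copy, not the published
# EBTOnto artifact; owlready2 bundles the HermiT reasoner.
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file:///data/ebtonto.owl").load()  # hypothetical path

with onto:
    sync_reasoner()  # classify; errors here signal logical problems

# Unsatisfiable classes (equivalent to owl:Nothing) indicate modeling faults.
print(list(default_world.inconsistent_classes()))
```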
... Following Figure 7, one can define three main approaches to realizing the model: implementing it in Java as an object model, ontology-based solutions [42], and knowledge-graph-based solutions [43,44]. Java-based solutions achieve minimal delays, but this approach is rather complex from a programming point of view. ...
The article deals with the use of context-sensitive policies in the building of data acquisition systems in large-scale distributed cyber-physical systems built on fog computing platforms. It is pointed out that the distinctive features of modern cyber-physical systems are their high complexity and constantly changing structure and behavior, which complicates the data acquisition procedure. To solve this problem, it is proposed to use an approach according to which the data acquisition procedure is divided into two phases, model construction and data acquisition, which allows these procedures to be realized in parallel. A distinctive feature of the developed approach is that the models are built automatically at runtime. As a top-level model, a multi-level relative finite state operational automaton is used. The automaton state is described using a multi-level structural-behavioral model, which is a superposition of four graphs: the workflow graph, the data flow graph, the request flow graph, and the resource graph. To implement the data acquisition procedure using the model, the context-sensitive policy mechanism is used. The article discusses possible approaches to the implementation of the suggested mechanisms and describes an example of their application.
The significance of the work lies in the increasingly cross-disciplinary nature of emerging research problems, which necessitates the collaboration of experts from various domains to address them. To achieve this, it is essential for all research contributors to align their perspectives on the processes involved. The use of semantic technologies such as ontology-driven modeling ensures concept alignment and the structuring of data and knowledge. The estimation of the cross-sectoral component of the price elasticity coefficients of electricity demand relies on an array of mathematical models, each built by different experts. To solve this problem, it is crucial to provide data exchange between models whose outputs complement each other. To this end, we perform an ontological analysis of the information flows between models and provide examples of the graphical ontologies thus designed. The semantic analysis of information flows and the system of ontologies allow these models to be used not only by their original creators but also by a broader community of researchers.
In the evolving landscape of knowledge representation, the convergence of taxonomies and ontologies has become increasingly important. Traditionally, domain ontologies have been designed for machine readability, characterized by their rigorous formalism and intricate relational structures. These ontologies are optimized for back-end processes, where precision and semantic depth are paramount. Conversely, taxonomies, often implemented using SKOS (Simple Knowledge Organization System), prioritize simplicity and usability, making them ideal for front-end applications where human interaction is key, even if they might have back-end applications as well.
This document presents a set of heuristics tailored for the design of hybrid taxonomies that incorporate ontological elements. These hybrid structures are intended to serve both front-end and back-end needs, integrating the ontological richness required for automated processes with the intuitive clarity necessary for human users. The heuristics outlined here are crafted to guide the development of taxonomies that introduce some ontological rigor while remaining navigable.
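One way to picture such a hybrid structure: the same IRI can be published both as a skos:Concept for front-end navigation and as an owl:Class for back-end reasoning (OWL 2 punning), pairing the human-facing skos:broader hierarchy with formal subclass semantics. A minimal sketch with hypothetical IRIs, using rdflib:

```python
# Minimal sketch of a hybrid taxonomy node: one IRI typed both as a
# skos:Concept (front-end navigation) and an owl:Class (back-end reasoning).
# All IRIs are hypothetical placeholders.
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL, SKOS

EX = Namespace("http://example.org/taxo#")
g = Graph()
g.bind("skos", SKOS)

for node in (EX.Vehicle, EX.Bicycle):
    g.add((node, RDF.type, OWL.Class))
    g.add((node, RDF.type, SKOS.Concept))

g.add((EX.Vehicle, SKOS.prefLabel, Literal("Vehicle", lang="en")))
g.add((EX.Bicycle, SKOS.prefLabel, Literal("Bicycle", lang="en")))

# Pair the human-facing hierarchy (skos:broader) with the formal one.
g.add((EX.Bicycle, SKOS.broader, EX.Vehicle))
g.add((EX.Bicycle, RDFS.subClassOf, EX.Vehicle))

print(g.serialize(format="turtle"))
```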
Introduction
Modern forestry increasingly relies on the management of large datasets, such as forest inventories and land cover maps. Governments are typically in charge of publishing these datasets, but they often employ disparate data formats (sometimes proprietary ones), and published datasets are commonly disconnected from other sources, including previous versions of the same datasets. As a result, using forestry data is very challenging, especially when multiple datasets need to be combined.
Methods and results
Semantic Web technologies, standardized by the World Wide Web Consortium (W3C), have emerged in the last decades as a solution for publishing heterogeneous data in an interoperable way. They enable the publication of self-describing data that can easily interlink with other sources. The concepts and the relationships between them are described using ontologies, and the data can be published as Linked Data on the Web, where they can be downloaded or queried online. National and international agencies promote the publication of governmental data as Linked Open Data, and research fields such as the biosciences or cultural heritage make extensive use of Semantic Web technologies. In this study, we present the results of the European Cross-Forest project, addressing the integration and publication of national forest inventories and land cover maps from Spain and Portugal using Semantic Web technologies. We used a bottom-up methodology to design the ontologies, with the goal of being generalizable to other countries and forestry datasets. First, we created an ontology for each dataset to describe the concepts (plots, trees, positions, measures, and so on) and the relationships between the data in detail. We then converted the source data into Linked Open Data by using the ontologies to annotate the data, including species taxonomies. As a result, all the datasets are integrated into one place, the Cross-Forest dataset, and are available for querying and analysis through a SPARQL endpoint. These data have been used in real-world use cases such as (1) providing a graphical representation of all the data, (2) combining it with spatial planning data to reveal the forestry resources under the management of Spanish municipalities, and (3) facilitating data selection and ingestion to predict the evolution of forest inventories and simulate how different actions and conditions impact this evolution.
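As an illustration of the kind of access this enables, here is a minimal sketch of querying a forestry SPARQL endpoint with SPARQLWrapper; the endpoint URL and the vocabulary are hypothetical placeholders, not the actual Cross-Forest terms:

```python
# Minimal sketch: count trees per plot and species from a SPARQL endpoint.
# Endpoint URL and property names are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/cross-forest/sparql")  # hypothetical
sparql.setQuery("""
    PREFIX ex: <http://example.org/forest#>
    SELECT ?plot ?species (COUNT(?tree) AS ?n)
    WHERE {
        ?tree ex:belongsToPlot ?plot ;
              ex:hasSpecies ?species .
    }
    GROUP BY ?plot ?species
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["plot"]["value"], row["species"]["value"], row["n"]["value"])
```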
Discussion
The work started in the Cross-Forest project continues in current lines of research, including the addition of the temporal dimension to the data, aligning the ontologies and data with additional well-known vocabularies and datasets, and incorporating additional forestry resources.
The burgeoning significance of environmental, social, and governance (ESG) metrics in realms such as investment decision making, corporate reporting, and risk management underscores the imperative for a robust, comprehensive solution capable of effectively capturing, representing, and analysing the multifaceted and intricate ESG data landscape. Facing the challenge of aligning with diverse standards and utilising complex datasets, organisations require robust systems for the integration of ESG metrics with traditional financial reporting. Amidst this, the evolving regulatory landscape and the demand for transparency and stakeholder engagement present significant challenges, given the lack of standardized ESG metrics in certain areas. Recently, the use of ontology-driven architectures has gained attention for their ability to encapsulate domain knowledge and facilitate integration with decision-support systems. This paper proposes a knowledge graph in the ESG metric domain to assist corporations in cataloguing and navigating ESG reporting requirements, standards, and associated data. Employing a design science methodology, we developed an ontology that serves as both a conceptual foundation and a semantic layer, fostering the creation of an interoperable ESG Metrics Knowledge Graph (ESGMKG) and its integration within operational layers. This ontology-driven approach promises seamless integration with diverse ESG data sources and reporting frameworks, while addressing the critical challenges of metric selection, alignment, and data verification, supporting the dynamic nature of ESG metrics. The utility and effectiveness of the proposed ontology were demonstrated through a case study centred on the International Financial Reporting Standards (IFRS) framework that is widely used within the banking industry.
In recent years, plenty of proposals have been created to guarantee the success of projects. However, the rate of failed projects is still high. In that sense, it has been demonstrated that adopting a standard that guides the development of all the stages of a project is a factor that contributes considerably to its success. However, the fact that information about standards is provided only in text documents (expressed in natural language) hinders its automatic analysis. For example, choosing a standard according to the features of a project might become a complex task. Moreover, it is not easy to detect inconsistencies and ambiguities using natural language. On the other hand, ontologies represented in the Web Ontology Language (OWL) have demonstrated good results in representing knowledge. OWL ontologies can be analyzed by a reasoner in order to automatically validate and check the consistency of the specifications. Considering the significant advantages of ontologies, the main aim of this paper is to present an ontological model to represent and analyze standards or guides for project management. This ontology can be used to represent the knowledge of several project management standards as well as information about their implementation in particular projects. In general, this ontology can support organizations in making decisions, reduce the ambiguity of descriptions in project management guides or standards, and enhance their comprehensibility.
The software quality assurance discipline is necessary in all domains where software is employed, principally where risks may involve the loss of human life, as in the aerospace industry. Ontologies exist for a plethora of domains, including those within software development, such as requirements elicitation and testing. These ontologies have found use in a variety of applications, most notably in assuring the traceability of software requirements and in test automation. However, no ontology of software quality assurance is currently in use. This paper presents a state-of-the-art literature review of ontologies applied to the software engineering lifecycle, where modern aerospace projects reside. A proposal for an ontology of software quality assurance is also presented and framed with respect to the aerospace industry.
The widespread use of AI tools in recent years has put even more emphasis on the interaction between humans and computers in terms of information exchange and knowledge representation. Appropriate knowledge representation is important in the context of human-computer interaction because knowledge has to be represented in a manner that is understandable to humans while still being useful for computers. This paper describes an approach to knowledge representation using ontologies and provides a use case example of applying this approach in medical diagnosis.
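To make the approach tangible, here is a purely illustrative sketch (all class and property names hypothetical) of representing disease-symptom knowledge with rdflib and querying it for candidate diagnoses:

```python
# Purely illustrative sketch (hypothetical names): diseases linked to their
# symptoms, queried for candidate diagnoses given an observed symptom.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/med#")
g = Graph()
for disease, symptom in [(EX.Influenza, EX.Fever),
                         (EX.Influenza, EX.Cough),
                         (EX.Migraine, EX.Headache)]:
    g.add((disease, RDF.type, EX.Disease))
    g.add((disease, EX.hasSymptom, symptom))

# Which diseases are consistent with a reported fever?
q = """PREFIX ex: <http://example.org/med#>
       SELECT ?disease WHERE { ?disease ex:hasSymptom ex:Fever . }"""
for (disease,) in g.query(q):
    print(disease)  # -> http://example.org/med#Influenza
```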
With the increasing amount of data collected by IoT devices, detecting complex events in real time has become a challenging task. To overcome this challenge, we propose the utilisation of semantic web technologies to create ontologies that structure background knowledge about the complex event processing (CEP) framework in a way that machines can easily comprehend. Our ontology focuses on Indoor Air Quality (IAQ) data, asthma patients' activities and symptoms, and how IAQ can be related to asthma symptoms and daily activities. Our goal is to detect complex events within the stream of events and accurately determine pollution levels and symptoms of asthma attacks based on daily activities. We conducted thorough testing of our enhanced CEP framework with a real dataset, and the results indicate that it outperforms traditional CEP across various evaluation metrics, such as accuracy, precision, recall, and F1-score.
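The following is a minimal sketch of the kind of complex-event rule such a framework evaluates; the thresholds, activities, and event format are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: background knowledge links an activity to the pollutant it
# emits; a complex event fires when a reading exceeds its threshold during
# that activity. All values and names are illustrative assumptions.
IAQ_THRESHOLDS = {"PM2.5": 35.0, "CO2": 1000.0}
ACTIVITY_EMITS = {"cooking": "PM2.5", "sleeping": "CO2"}

def detect_complex_events(stream):
    """Yield (time, activity, pollutant, value) when a threshold is crossed
    during an activity known to emit that pollutant."""
    current_activity = None
    for event in stream:
        if event["kind"] == "activity":
            current_activity = event["name"]
        elif event["kind"] == "reading" and current_activity:
            pollutant = ACTIVITY_EMITS.get(current_activity)
            if pollutant == event["sensor"] and event["value"] > IAQ_THRESHOLDS[pollutant]:
                yield (event["t"], current_activity, pollutant, event["value"])

stream = [
    {"kind": "activity", "name": "cooking", "t": 0},
    {"kind": "reading", "sensor": "PM2.5", "value": 48.2, "t": 5},
]
print(list(detect_complex_events(stream)))
```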
As quantum computing matures, organizations must engage early with the technology and eventually adopt it in their business operations to achieve a competitive edge. At the same time, quantum computing experts (e.g., researchers and technology providers) expect extensive input and collaboration with potential adopters to explore new application areas. However, the inherently counter-intuitive and complex theoretical principles of quantum theory discourage non-expert adopters of the technology from engaging in research and development. As a result, an increasing knowledge gap emerges. This paper proposes the QuantumShare ontology to capture and share quantum computing knowledge to support the collaboration between quantum experts and non-expert adopters, thereby bridging the present knowledge gap. We used the NeOn methodology to create QuantumShare systematically. We evaluated QuantumShare by applying it to usage scenarios extracted from the literature and from end users.
The challenge of integrating data from many data sources has persisted as an issue in several industry areas. With the evolution of technology in the upstream petroleum sector (i.e., exploration and production), the petroleum business must contend with technological silos from diverse service providers and suffers from the associated time wasted locating data and information throughout siloed databases. Based on a thorough compilation of industry-oriented requirements in the form of use cases and competency questions, this document defines a domain ontology for describing entities in offshore petroleum production plants. The objective is to develop a uniform and clearly defined reference vocabulary to aid engineers and information technology professionals in labeling and relating production plant monitoring, simulation measures, and facilities. BFO is the top-level ontology, while GeoCore and an evolving version of the core ontology developed by the Industry Ontology Foundry (IOF) constitute the middle-level ontologies. We have studied and combined several other resources to build the ontology, such as glossaries from the industry and related ontologies. The research resulted in a well-founded domain ontology that provides universals, defined classes, and relations that can be useful in several types of applications in the domain. We have demonstrated the utility of the ontology within an actual scenario in an offshore petroleum field in Brazil, where we conceived and applied the domain ontology. This study is a component of the PeTWIN project, which investigates the best approaches for creating digital twins of offshore petroleum plants.
Industry 4.0 is helping to unleash a new age of digitalization across industries, leading to a data-driven, interoperable, and decentralized production process. To achieve this major transformation, one of the main requirements is to achieve interoperability across various systems and multiple devices. Ontologies have been used in numerous industrial projects to tackle the interoperability challenge in digital manufacturing. However, there is currently no semantic model in the literature that can represent the industrial production workflow comprehensively while also integrating digitalized information from a variety of systems and contexts. To fill this gap, this paper proposes the industrial production workflow ontologies (InPro) for formalizing and integrating production process information. We implemented the 5M model (manpower, machine, material, method, and measurement) for InPro partitioning and module extraction. InPro comprises seven main domain ontology modules: Entities, Agents, Machines, Materials, Methods, Measurements, and Production Processes. The Machines ontology module was developed leveraging the OPC Unified Architecture (OPC UA) information model. The InPro ontology was further evaluated by a hybrid combination of approaches. Additionally, it was applied in practical use cases to support production planning and failure analysis by retrieving relevant information via SPARQL queries. The validation results also demonstrated that the proposed InPro ontology allows for efficiently formalizing, integrating, and retrieving information within the industrial production process context.
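As an illustration of the SPARQL-based retrieval mentioned above, here is a minimal sketch with a hypothetical InPro-style namespace, traversing from a production process to a machine and its measurement:

```python
# Minimal sketch (hypothetical namespace and property names): retrieve
# measurements of machines used in production processes for failure analysis.
from rdflib import Graph, Namespace, Literal, RDF

INPRO = Namespace("http://example.org/inpro#")
g = Graph()
g.add((INPRO.Welding01, RDF.type, INPRO.ProductionProcess))
g.add((INPRO.Welding01, INPRO.usesMachine, INPRO.RobotA))
g.add((INPRO.RobotA, INPRO.hasMeasurement, INPRO.Temp42))
g.add((INPRO.Temp42, INPRO.hasValue, Literal(87.5)))

q = """PREFIX inpro: <http://example.org/inpro#>
       SELECT ?process ?machine ?value WHERE {
           ?process a inpro:ProductionProcess ;
                    inpro:usesMachine ?machine .
           ?machine inpro:hasMeasurement ?m .
           ?m inpro:hasValue ?value .
       }"""
for process, machine, value in g.query(q):
    print(process, machine, value)
```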
A significant portion of the energy used in building operations is wasted due to faults and poor operation. Despite volumes of research, the real-world use of analytics applications utilizing the data available from building systems is limited. Mapping the data points from building systems to analytics applications outside the building systems and automation requires expert labor and is often done in point-to-point integrations. This study proposes a novel method for using queryable information models to connect data points of building systems to a centralized analytics platform without requiring a particular modeling technology. The method is explained in detail through a software architecture and is further demonstrated by walking through the implementation of an example rule from a rule-based fault detection method for air handling units. In the demonstration, an air handling unit is modeled with two different approaches, and the example analytic is connected to both. The method is shown to support reusing analytic implementations between building systems modeled with different approaches, with limited assumptions about the information models.
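A minimal sketch of the pattern described, assuming a hypothetical point vocabulary: the analytic first queries the information model for the point identifiers it needs, then evaluates a simplified APAR-style rule on the mapped values:

```python
# Minimal sketch (hypothetical vocabulary): resolve points by querying the
# information model, then evaluate a simplified APAR-style rule for an AHU.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/bldg#")
g = Graph()
g.add((EX.ahu1_sat, RDF.type, EX.SupplyAirTempSensor))
g.add((EX.ahu1_mat, RDF.type, EX.MixedAirTempSensor))
g.add((EX.ahu1_sat, EX.isPointOf, EX.AHU1))
g.add((EX.ahu1_mat, EX.isPointOf, EX.AHU1))

def point_of(role):
    """Find the point with a given role for AHU1, whatever model lies behind."""
    q = f"""PREFIX ex: <http://example.org/bldg#>
            SELECT ?p WHERE {{ ?p a ex:{role} ; ex:isPointOf ex:AHU1 . }}"""
    return next(iter(g.query(q)))[0]

latest = {point_of("SupplyAirTempSensor"): 19.0,   # °C, illustrative readings
          point_of("MixedAirTempSensor"): 24.5}

# Rule: during mechanical cooling, supply air must not be warmer than mixed air.
sat = latest[point_of("SupplyAirTempSensor")]
mat = latest[point_of("MixedAirTempSensor")]
print("fault detected" if sat > mat else "ok")
```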
We are witnessing an acceleration of the global drive to converge consumption and production patterns towards a more circular and sustainable approach to the food system. To address the challenge of reconnecting agriculture, environment, food, and health, collections of large datasets must be exploited. However, building high-capacity data-sharing networks means unlocking the information silos that are caused by a multiplicity of local data dictionaries. To solve the data harmonization problem, we proposed an ontology on food, feed, bioproducts, and biowastes engineering for data integration in a circular bioeconomy and nexus-oriented approach. This ontology is based on a core model representing a generic process, the Process and Observation Ontology (PO2), which has been specialized to provide the vocabulary necessary to describe any biomass transformation process and to characterize the food, bioproducts, and wastes derived from these processes. Much of this vocabulary comes from transforming authoritative references such as the European food classification system (FoodEx2), the European Waste Catalogue, and other international nomenclatures into a semantic, World Wide Web Consortium (W3C) format that provides system interoperability and software-driven intelligence. We showed the relevance of this new domain ontology, PO2/TransformON, through several concrete use cases in the fields of process engineering, bio-based composite making, and food ecodesign, and through its relations to consumers' perceptions and preferences. Further work will aim to align it with other ontologies to create an ontology network for bridging the gap between upstream and downstream processes in the food system.
The perception in the Conceptual Modeling (CM) community that semantic interoperability cannot be achieved without the support of an ontology-driven approach has become increasingly consensual. Moreover, the more complex and extensive the domain of application of conceptual models, the harder it is to achieve semantic consensus. Therefore, the perception has emerged that ontologies built to describe complex domains should not be overly large or used in isolation. Ontology networks arose to address this issue. The community had to deal with issues such as different ontologies of the network using the same concept with different meanings, or the same term being used to designate distinct concepts. We developed a framework for classifying ontologies that provides a stable and homogeneous environment to facilitate the ontological analysis process by dealing simultaneously with the ontological and domain perspectives. This article presents our proposal, in which conceptualization is used to identify the relationships among the evaluated ontologies. Our goal is to facilitate semantic consensus, providing guidelines and best practices supported by a stable, homogeneous, and repeatable environment.
Keywords: Conceptual Modeling, Ontology Classification, Ontologies Network, Ontology
The article deals with the problem of building Digital Twins and Smart Digital Twins for control and management in power systems. The energy system is understood as a set of energy resources of all types, methods for their production (extraction), transformation, distribution, and use, as well as the technical means and organizational complexes that ensure the supply of consumers with all types of energy. Integrated intelligent energy systems are analyzed as one of the important trends in the Russian energy sector, and the main directions of digitization of the energy sector are considered. The concept of "digital twins" in technical fields is considered as one of the main digitalization trends; an ontological approach to building digital twins and semantic models for building smart digital twins are proposed. It is proposed to use a fractal approach when performing ontological engineering, which makes it possible to formalize the concepts of the subject area and allows building ontologies of different scales using ontology metalevels. Formalized models of digital twins and smart digital twins are presented. The developed approaches are illustrated by the example of the construction of digital twins of a solar power plant and smart digital twins of a fuel and energy complex. The approach described in the article makes it possible to integrate different levels of digital and smart digital twins into a single digital solution when modeling energy facilities and power systems.
In recent years, with the advancement of semantic technologies, sharing and publishing data online has become necessary to improve research and development in all fields. While many datasets are publicly available in the social and economic domains, most lack standardization. Unlike the medical field, where terms and concepts are well defined using controlled vocabularies and ontologies, social datasets are not. Experts such as the National Consortium for the Study of Terrorism and Responses to Terrorism (START) collect data on global incidents and publish them in the Global Terrorism Database (GTD). However, the data are deficient in the technical modeling of their metadata. In this paper, we propose the GTD ontology (GTDOnto) to organize and model knowledge about global incidents, targets, perpetrators, weapons, and other related information. Based on the NeOn methodology, the goal is to build on the effort of START and present controlled vocabularies in a machine-readable format that is interoperable and can be reused to describe potential incidents in the future. GTDOnto was implemented in the Web Ontology Language (OWL) using the Protégé editor and evaluated by answering competency questions, consulting domain experts' opinions, and running examples of GTDOnto representing actual incidents. GTDOnto can further be used to publish the GTD as a knowledge graph that visualizes related incidents and to build further applications that enrich its content.
Knowledge management in a structured system is a complicated task that requires common, standardized methods that are acceptable to all actors in a system. Ontology, in this regard, is a primary element and plays a central role in knowledge management, interoperability between various departments, and better decision making. Ontology construction for structured systems involves logical and structural complications. Researchers have already proposed a variety of domain ontology construction schemes. However, these schemes do not involve some important phases of ontology construction that make ontologies more collaborative. Furthermore, these schemes do not provide details of the activities and methods involved in the construction of an ontology, which may cause difficulty in implementing the ontology. The major objectives of this research were to provide a comparison between some existing ontology construction schemes and to propose an enhanced ecological and confined domain ontology construction (EC-DOC) scheme for structured knowledge management. The proposed scheme introduces five important phases to construct an ontology, with a major focus on the conceptualization and clustering of domain concepts. In the conceptualization phase, a glossary of domain-related concepts and their properties is maintained, and a Fuzzy C-Means soft clustering mechanism is used to form clusters of these concepts. In addition, the localization of concepts is performed immediately after the conceptualization phase, and a translation file of localized concepts is created. The EC-DOC scheme can provide accurate concepts for the terms of a specific domain, and these concepts can be made available in a preferred local language.
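To illustrate the clustering step, here is a minimal sketch of fuzzy C-means over toy concept feature vectors using scikit-fuzzy; the terms and features are invented for illustration. Unlike hard clustering, each concept receives a degree of membership in every cluster:

```python
# Minimal sketch: fuzzy C-means assigns each domain concept a degree of
# membership in every cluster. Feature vectors are toy 2-D values, purely
# illustrative of embedded concept features.
import numpy as np
import skfuzzy as fuzz

concepts = ["tree", "leaf", "soil", "root", "rock", "sand"]
features = np.array([[0.9, 0.8, 0.1, 0.7, 0.1, 0.2],   # "plant-ness"
                     [0.1, 0.2, 0.9, 0.4, 0.8, 0.9]])  # "ground-ness"

# skfuzzy expects data shaped (n_features, n_samples).
cntr, u, *_ = fuzz.cluster.cmeans(features, c=2, m=2.0, error=1e-4, maxiter=1000)

for i, name in enumerate(concepts):
    print(name, u[:, i].round(2))  # membership degrees in each cluster
```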
Similar to managing software packages, managing the ontology life cycle involves multiple complex workflows, such as preparing releases, continuous quality control checking, and dependency management. To manage these processes, a diverse set of tools is required, from command-line utilities to powerful ontology-engineering environments. Particularly in the biomedical domain, which has developed a set of highly diverse yet inter-dependent ontologies, standardizing release practices and metadata and establishing shared quality standards are crucial to enable interoperability. The Ontology Development Kit (ODK) provides a set of standardized, customizable, and automatically executable workflows, and packages all required tooling in a single Docker image. In this paper, we provide an overview of how the ODK works, show how it is used in practice, and describe how we envision it driving standardization efforts in our community.
Database URL: https://github.com/INCATools/ontology-development-kit
The article presents the building of an event-centric model for the computational representation of crisis events using an ontology encoded in the Web Ontology Language (OWL). The work presented here is done in collaboration with the Leaders and Crisis Management in Ancient Literature: A Comparative Approach (LACRIMALit) project (2022–2025), hosted at the Institute for Mediterranean Studies/Foundation for Research and Technology (IMS-FORTH). A key outcome of the project is the LACRIMALit ontology, which aims principally at the semantic annotation of ancient Greek historiographical texts available in open access via the Perseus Digital Library. The ontology will facilitate reasoning on and across these documents and enable their semantic querying. The tagset of annotations, concepts, relations, and terms of the ontology will be both human- and machine-readable, extensible, and reusable. The annotated corpus of texts to be produced will be available for sophisticated queries based on the concepts and relations defined by the ontologies. This will considerably improve on the string-based querying of the texts in their present digital format. This article presents the principles of the conceptualization of the domain in three dimensions: domain knowledge (mainly classes, illustrated with some individuals), the linguistic dimension (terms, proper names, definite descriptions), and references.
The advent of the Semantic Web and the Linked Data initiative has contributed new perspectives and opportunities regarding cultural heritage conservation. Museums in China have extensive collections of Chinese ceramic vases. Although some data sources have been digitized, the vision of cultural heritage institutions is not only to display objects and simple descriptions (drawn from metadata), but also to allow for understanding relationships between objects (created by semantically interrelated metadata). The key to achieving this goal is to utilize the technologies of the Semantic Web, whose core is ontology. The focus of this paper is to describe the construction of the TAO CI (“ceramics” in Chinese) ontology and terminology for the domain of ceramic vases of the Ming (1368–1644) and Qing (1644–1911) dynasties. The theoretical framework relies on the notion of essential characteristics. This notion is compliant with the ISO principles on terminology (ISO 1087 and 704), according to which a concept is defined as a combination of essential characteristics, and with the Aristotelian definition in terms of genus and differentia. This approach is intuitive for domain experts and requires identifying essential characteristics, combining them into concepts, and translating the result into a Semantic Web language. This article proposes an approach based on a morphological analysis of the Chinese terms for vases to identify essential characteristics and a term-guided method for defining concepts. Such a term-and-characteristic-guided approach makes ontology engineering less dependent on formal languages and does not require a background in Description Logics. The research presented in this article aims to publish the resulting structured data on the Semantic Web for the use of anybody interested, including museums hosting collections of these vessels, and to enrich existing domain ontology building methodologies. To our knowledge, there are no comprehensive ontologies for Chinese ceramic vases. The TAO CI ontology remedies this gap and provides a reference for ontology building in other domains of Chinese cultural heritage. The TAO CI ontology is openly accessible here: http://www.dh.ketrc.com/otcontainer/data/otc.owl.
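As an illustration of a genus-differentia definition in OWL, in the spirit of the approach described, here is a minimal sketch with hypothetical IRIs: a vase concept defined as the genus Vase combined with a differentiating characteristic on neck shape:

```python
# Minimal sketch (hypothetical IRIs): a genus-differentia definition in OWL,
# built with rdflib as an owl:equivalentClass over an intersection.
from rdflib import Graph, Namespace, BNode, RDF
from rdflib.namespace import OWL
from rdflib.collection import Collection

EX = Namespace("http://example.org/taoci#")
g = Graph()

# Differentia: a restriction on the neck-shape characteristic.
r = BNode()
g.add((r, RDF.type, OWL.Restriction))
g.add((r, OWL.onProperty, EX.hasNeckShape))
g.add((r, OWL.someValuesFrom, EX.GarlicShapedNeck))

# Definition: GarlicNeckVase == Vase (genus) AND the differentia above.
members = BNode()
Collection(g, members, [EX.Vase, r])
defn = BNode()
g.add((defn, RDF.type, OWL.Class))
g.add((defn, OWL.intersectionOf, members))
g.add((EX.GarlicNeckVase, RDF.type, OWL.Class))
g.add((EX.GarlicNeckVase, OWL.equivalentClass, defn))

print(g.serialize(format="turtle"))
```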
Recently, digital innovation has revolutionized the world of payments and settlement services. Innovative technologies, such as the tokenization of assets, as well as new forms of digital payments, have challenged both current business models and the existing models of regulation. In this scenario, semantic transparency is fundamental not only to adapting regulation frameworks but also to supporting information integration and semantic interoperability. In this paper, we deal with these issues by proposing an ontology-based approach for the modeling of payments and linked obligation settlements that reuses reference ontologies to create ontology-based modeling patterns, which are applied to model the domain-related concepts.
Keywords: Economic exchanges, Delivery versus payment, OntoUML, gUFO
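A minimal sketch of the reuse pattern the paper describes, with hypothetical domain class names: payment-domain concepts are grounded by specializing classes of gUFO, the OWL implementation of the Unified Foundational Ontology:

```python
# Minimal sketch (hypothetical class names): grounding payment-domain
# concepts by specializing gUFO classes.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

GUFO = Namespace("http://purl.org/nemo/gufo#")
EX = Namespace("http://example.org/pay#")
g = Graph()
g.bind("gufo", GUFO)

# A payment is modeled as a kind of event; the obligation it settles as a
# relator binding payer and payee.
g.add((EX.Payment, RDF.type, OWL.Class))
g.add((EX.Payment, RDFS.subClassOf, GUFO.Event))
g.add((EX.Obligation, RDF.type, OWL.Class))
g.add((EX.Obligation, RDFS.subClassOf, GUFO.Relator))

print(g.serialize(format="turtle"))
```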
The problem of extracting saturated term-sets for learning domain ontologies from professional texts describing a subject domain of interest appeared to be under-researched. Therefore, a broader-scale systematic review of the related work was undertaken to collect different bits of relevant knowledge about the state of the art in various fields. These ranged from Ontology Engineering, through Ontology Learning from Texts and Information Science, to Qualitative Research in the Social and Medical Sciences. The analysis of these bits of knowledge helped us better understand the research gaps in our field of study. With the aim of narrowing these research gaps, a vision of an approach for terminology saturation measurement and detection was proposed. This vision allowed us to formulate the research questions that needed to be answered in order to transform this visionary approach into a method, implement it in a software pipeline, systematically evaluate it, and make it ready for technology transfer.
Knowledge graphs (KGs) are used in a wide range of applications. The automation of KG generation is highly desirable due to the volume and variety of data in industry. One important approach to KG generation is to map raw data to a given KG schema, namely a domain ontology, and construct the entities and properties according to the ontology. However, automatically generating such an ontology is demanding, and existing solutions are often not satisfactory. An important challenge is the trade-off between two principles of ontology engineering: knowledge orientation and data orientation. The former prescribes that an ontology should model the general knowledge of a domain, while the latter emphasizes reflecting the specificities of the data to ensure good usability. We address this challenge with our method of ontology reshaping, which automates the process of converting a given domain ontology into a smaller ontology that serves as the KG schema. The domain ontology can thus be designed to be knowledge-oriented, while the KG schema covers the data specificities. In addition, our approach allows the option of including user preferences in the loop. We demonstrate our ongoing research on ontology reshaping and present an evaluation using real industrial data, with promising results.
Background
Medical experts in the domain of Diabetes Mellitus (DM) acquire specific knowledge from diabetic patients through monitoring and interaction. This allows them to know the disease as well as information about other conditions or comorbidities, treatments, and typical consequences in the Mexican population. This indicates that an expert in a domain knows technical information about the domain as well as the contextual factors that interact with it in the real world, contributing to the generation of new knowledge. Capturing and managing information about DM requires designing and implementing techniques and methods that allow: determining the most relevant conceptual dimensions and their correct organization, integrating existing medical and clinical information from different resources, and generating structures that represent the doctor's deduction process. An Ontology Network is a collection of ontologies from diverse knowledge domains that can be interconnected by meta-relations. This article describes an Ontology Network for representing DM in Mexico, designed using a proposed methodology. The information used for building the Ontology Network includes the reuse of ontological resources, the transformation of non-ontological resources for ontology design, and ontology extension through natural language processing techniques. These resources comprise medical information extracted from vocabularies, taxonomies, medical dictionaries, ontologies, and other sources. Additionally, a set of semantic rules has been defined within the Ontology Network to derive new knowledge.
Results
An Ontology Network for DM in Mexico has been built from six well-defined domains, resulting in new classes and using ontological and non-ontological resources to offer a semantic structure for assisting in the medical diagnosis process. The network comprises 1367 classes, 20 object properties, 63 data properties, and 4268 individuals from seven different ontologies. The evaluation of the Ontology Network was carried out by verifying the purpose of its design and several quality criteria.
Conclusions
The composition of the Ontology Network offers a set of well-defined ontological modules, facilitating the reuse of one or more of them. The inclusion of international vocabularies such as SNOMED CT and ICD-10 reinforces the representation with international standards and increases the semantic interoperability of the network, providing the opportunity to integrate other ontologies that use the same vocabularies. The ontology network design methodology offers ontology developers a guide on how to use ontological and non-ontological resources in order to exploit the maximum information and knowledge from a set of domains, whether or not they share information.
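As an illustration of deriving new knowledge with semantic rules like those mentioned in the Background, here is a minimal sketch using owlready2's SWRL support; the class names and the threshold are illustrative assumptions, not the network's actual rules:

```python
# Minimal sketch (illustrative names and threshold, not the network's actual
# rules): a SWRL rule derives that a patient is at risk, using owlready2.
from owlready2 import get_ontology, Thing, Imp, sync_reasoner_pellet

onto = get_ontology("http://example.org/dm#")  # hypothetical IRI

with onto:
    class Patient(Thing): pass
    class AtRiskPatient(Patient): pass
    class hasFastingGlucose(Patient >> float): pass

    rule = Imp()
    rule.set_as_rule("Patient(?p), hasFastingGlucose(?p, ?g), "
                     "greaterThan(?g, 126.0) -> AtRiskPatient(?p)")

    p1 = Patient("p1")
    p1.hasFastingGlucose = [130.0]

# Pellet evaluates the SWRL rule and infers the new class membership.
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
print(list(onto.AtRiskPatient.instances()))  # p1 is derived to be at risk
```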
In the last decades, Knowledge Graph (KG) empowered analytics have been used to extract advanced insights from data. Several companies have integrated legacy relational databases with semantic technologies using Ontology-Based Data Access (OBDA). In practice, this approach enables analysts to write SPARQL queries over both KGs and SQL relational data sources while making most of the implementation details transparent. However, the volume of data is continuously increasing, and a growing number of companies are adopting distributed storage platforms and distributed computing engines, so a gap has opened between big data and semantic technologies. Ontop, one of the reference OBDA systems, is limited to legacy relational databases, and compatibility with the big data analytics engine Apache Spark is still missing. This paper introduces Chimera, an open-source software suite that aims at filling this gap. Chimera enables a new type of round-tripping data science pipeline: data scientists can query data stored in a data lake using SPARQL through Ontop and SparkSQL, while saving the semantic results of such analyses back into the data lake. This new type of pipeline semantically enriches data from Spark before saving it back.
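To picture such a round-tripping pipeline, here is a minimal sketch in which SPARQL results from a (hypothetical) Ontop endpoint are written back to the data lake with Spark; the endpoint, vocabulary, and path are placeholders, not Chimera's actual configuration:

```python
# Minimal sketch of a round-tripping pipeline: query an OBDA SPARQL endpoint,
# then persist the semantic results back to the data lake with Spark.
# Endpoint URL, predicate IRI, and output path are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON
from pyspark.sql import SparkSession

sparql = SPARQLWrapper("http://localhost:8080/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?machine ?failures
    WHERE { ?machine <http://example.org/failures> ?failures . }
""")
sparql.setReturnFormat(JSON)
rows = [(b["machine"]["value"], int(b["failures"]["value"]))
        for b in sparql.query().convert()["results"]["bindings"]]

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(rows, ["machine", "failures"])
df.write.mode("overwrite").parquet("/datalake/enriched/machine_failures")
```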