Article

A Translation Approach to Portable Ontology Specifications

... In order to include external knowledge in the artefact model, we have chosen to integrate the BIM representation environment with a knowledge management system based on ontologies. An ontology, as defined in the ICT field, is an explicit specification of concepts that includes within the same descriptive system the concepts of a knowledge domain and the relations between them (Gruber, 1993); only recently have those tools been proposed as a method to formalize immaterial knowledge about cultural heritage (Pauwels, 2008). In the proposed model, ontologies play the key role of structuring and managing concepts related to the different domains that are needed to fully comprehend the historical artefact, and any kind of data and documentation that can be useful for its interpretation, such as links to textual sources, images, modelled objects, bibliographic references, etc. ...
Chapter
The presented research aims at extending the current Building Information Modelling (BIM) approach to built cultural and historical heritage in order to provide a modelling environment where all the knowledge related to the artefact can be integrated, formalized, managed and made available to the different professionals involved in the investigation and restoration processes. The representation of the knowledge related to the artefact deeply influences all the activities concerning restoration, conservation and maintenance. While digital approaches in this area have focused mainly on the development of virtual reconstructions of built heritage objects, little research has explored the value of BIM in the management of heritage buildings. Similarly to its impact on the AEC sector, the introduction of BIM in the built heritage field can lead to interesting benefits related to the object representation by integrating in a single modelling environment its three-dimensional representation with non-geometric knowledge provided and used by the various actors involved in the process. The proposed system aims at improving the availability and accessibility of any information related to the archaeological artefact, making it easier to interpret its nature, monitor its changes and document any investigation and intervention activity. To represent all the knowledge related to the object, we chose to integrate the BIM environment with a knowledge management system based on ontologies to reach a higher level of semantic representation. In order to test its effectiveness, the proposed model has been applied to the process of investigation and documentation of the Roman temple of Castor and Pollux in Cori (Italy).
... Section 4 describes that these levels now form the basis for the new memory and neural models. An ontology [24] was also part of the original architecture and that is replaced by the new middle level. The memory model uses a statistical clustering process, rather than semantics and rules. ...
... The original cognitive model was based on a 3-level architecture of increasing complexity, which included an ontology that would be available to all the levels. Ontologies [24] describe the relations between concepts in a very structured and formal way. They are themselves high-level structures and it is not clear how they could be built simply from statistical processes. ...
Preprint
Full-text available
This paper describes the E-Sense Artificial Intelligence system. It comprises a memory model with 2 levels of information and then a more neural layer above that. The lower memory level stores source data in a Markov (n-gram) structure that is unweighted. Then a middle ontology level is created from a further 3 phases of aggregating source information. Each phase re-structures from an ensemble to a tree, where the information transposition may be from horizontal set-based sequences into more vertical, type-based clusters. The base memory is essentially neutral, where any weighted constraints or preferences should be stored in the calling module. The success of the ontology typing is open to question, but results produced answers based more on use and context. The third level is more functional, where each function can represent a subset of the base data and learn how to transpose across it. The functional structures are shown to be quite orthogonal, or separate, and are made from nodes with a progressive type of capability, including unordered to ordered. Comparisons with the columnar structure of the neural cortex can be made, and the idea of ordinal learning, or just learning relative positions, is introduced. While this is still a work in progress, it offers a different architecture from the current favourites and may be able to give different views of the data from what they can provide.
... An ontology is an explicit specification of a conceptualization [15]. A conceptualization is an abstract view of the world that we hope to represent for some purpose and it encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. ...
... Considering the use of ontologies as a semantic model, we consider that an ontology can directly contribute to achieving this CHG inventory. Some reasons for developing an ontology include sharing a common understanding of information structure between people or software agents, reusing and analyzing domain knowledge [22], [23] and defining a common vocabulary for those who need to share information [15]. In our study, we go one step further in proposing an axiomatically rich ontology, which allows the discovery of new implicit relationships between data. ...
Article
Full-text available
Global warming and climate change have been subjects of great interest in recent years. They are understood to be related to greenhouse gas (GHG) emissions. Although agriculture suffers the consequences of these changes, it is one of the top global emitters of GHG. While it is complex in environmental, social, and economic aspects, there is a need to advance solutions for more sustainable agriculture. In the farm environment, an important step is the generation of GHG inventories. A GHG inventory is a systematic process for measuring and recording gas emissions and sequestration. However, generating inventories on rural properties presents many challenges due to the many variables involved, such as land use, animal husbandry, use of electricity, and fuels. One of the main challenges is to deal with this data heterogeneity. The data landscape presents obstacles such as managing source diversity, handling massive data volumes, adapting to various data formats, and ensuring real-time integration for decision-making. In this complex data landscape, ontologies emerge as a potential solution. An ontology is defined as a formal representation of a set of concepts within a domain and the relationships between those concepts. This study proposes an ontological model called CarbOnto for the syntactic and semantic integration of heterogeneous databases. Using an ontology, we intend to contribute to the standardization and interpretation of domain concepts and the addition of semantic information to generate complete GHG inventories. CarbOnto provides the means to generate farm inventories, identify imbalances, and search for solutions to neutralize gas emissions. We report a Case Study and argue that using this ontology can support balancing these gases.
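The "axiomatically rich" ontology described in the excerpt above allows the discovery of implicit relationships between data. A minimal sketch in plain Python of one such axiom, a transitive property, whose closure yields triples that were never asserted (the names `partOf`, `Farm`, `Pasture`, `Paddock` are illustrative, not taken from CarbOnto):

```python
# Derive implied triples from a property declared transitive:
# if (a, p, b) and (b, p, c) hold, infer (a, p, c).

def transitive_closure(triples, prop):
    """Return the asserted triples plus all (s, prop, o) they imply."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (s1, p1, o1) in list(inferred):
            for (s2, p2, o2) in list(inferred):
                if p1 == p2 == prop and o1 == s2 and (s1, prop, o2) not in inferred:
                    inferred.add((s1, prop, o2))
                    changed = True
    return inferred

asserted = {
    ("Pasture", "partOf", "Farm"),
    ("Paddock", "partOf", "Pasture"),
}
derived = transitive_closure(asserted, "partOf")
# ("Paddock", "partOf", "Farm") is now derivable though never asserted:
print(("Paddock", "partOf", "Farm") in derived)  # → True
```

In a real OWL ontology this inference would be performed by a reasoner once the property carries a `TransitiveProperty` axiom; the loop above only sketches the idea.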
... The term ontology has different definitions depending on the application domain. We refer to ontologies as defined by Gruber [50] in the domain of computer science. "A common ontology defines the vocabulary with which queries and assertions are exchanged among agents." ...
... "A common ontology defines the vocabulary with which queries and assertions are exchanged among agents." [50] Therefore, we focus on the communicative aspect of an ontology as a common and formally expressed set of terms in a knowledge base. ...
Preprint
Vehicles in public traffic that are equipped with Automated Driving Systems are subject to a number of expectations: Among other aspects, their behavior should be safe, conforming to the rules of the road and provide mobility to their users. This poses challenges for the developers of such systems: Developers are responsible for specifying this behavior, for example, in terms of requirements at system design time. As we will discuss in the article, this specification always involves the need for assumptions and trade-offs. As a result, insufficiencies in such a behavior specification can occur that can potentially lead to unsafe system behavior. In order to support the identification of specification insufficiencies, requirements and respective assumptions need to be made explicit. In this article, we propose the Semantic Norm Behavior Analysis as an ontology-based approach to specify the behavior for an Automated Driving System equipped vehicle. We use ontologies to formally represent specified behavior for a targeted operational environment, and to establish traceability between specified behavior and the addressed stakeholder needs. Furthermore, we illustrate the application of the Semantic Norm Behavior Analysis in two example scenarios and evaluate our results.
... A major approach for enabling semantic interoperability is developing ontologies. While studied in the field of philosophy as the nature of being, ontologies in geospatial semantics are closer to those in computer science and bioinformatics, which serve the function of formalizing the meaning of concepts in a machine-understandable manner (Bittner, Donnelly, & Winter, 2005; Couclelis, 2009; Gruber, 1993; Guarino, 1998; Stevens, Goble, & Bechhofer, 2000). From a data structure perspective, an ontology can be considered a graph with concepts as nodes and relations as edges. ...
Preprint
Geospatial semantics is a broad field that involves a variety of research areas. The term semantics refers to the meaning of things, and is in contrast with the term syntactics. Accordingly, studies on geospatial semantics usually focus on understanding the meaning of geographic entities as well as their counterparts in the cognitive and digital world, such as cognitive geographic concepts and digital gazetteers. Geospatial semantics can also facilitate the design of geographic information systems (GIS) by enhancing the interoperability of distributed systems and developing more intelligent interfaces for user interactions. During the past years, a lot of research has been conducted, approaching geospatial semantics from different perspectives, using a variety of methods, and targeting different problems. Meanwhile, the arrival of big geo data, especially the large amount of unstructured text data on the Web, and the fast development of natural language processing methods enable new research directions in geospatial semantics. This chapter, therefore, provides a systematic review on the existing geospatial semantic research. Six major research areas are identified and discussed, including semantic interoperability, digital gazetteers, geographic information retrieval, geospatial Semantic Web, place semantics, and cognitive geographic concepts.
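The excerpt above notes that, structurally, an ontology can be viewed as a graph with concepts as nodes and relations as edges. A minimal sketch of that view in plain Python (the concept names `WaterBody`, `River`, `Stream`, `Lake` are illustrative, not drawn from any published geospatial ontology):

```python
# An ontology as a labelled graph: edges are (subject, relation, object)
# triples, and taxonomic queries walk the 'isA' edges.

class Ontology:
    def __init__(self):
        self.edges = []  # list of (subject, relation, object)

    def add(self, s, rel, o):
        self.edges.append((s, rel, o))

    def subclasses_of(self, concept):
        """All concepts reachable via chains of 'isA' edges into `concept`."""
        direct = {s for (s, r, o) in self.edges if r == "isA" and o == concept}
        return direct | {c for d in direct for c in self.subclasses_of(d)}

geo = Ontology()
geo.add("River", "isA", "WaterBody")
geo.add("Stream", "isA", "River")
geo.add("Lake", "isA", "WaterBody")

print(sorted(geo.subclasses_of("WaterBody")))  # → ['Lake', 'River', 'Stream']
```

The same structure is what RDF/OWL tooling serializes and reasons over; the class above only makes the graph view concrete.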
... Issues related to knowledge management and ontology are two central elements of this approach, as they address a large part of the mentioned challenges. According to Gruber (1993), "an ontology is an explicit, formal specification of a shared conceptualization". Thus, an ontology makes it possible to represent the knowledge generated during an investigation (knowledge about footprints, events, objects, etc.). ...
Preprint
Having a clear view of events that occurred over time is a difficult objective to achieve in digital investigations (DI). Event reconstruction, which allows investigators to understand the timeline of a crime, is one of the most important steps of a DI process. This complex task requires exploration of a large amount of events due to the pervasiveness of new technologies nowadays. Any evidence produced at the end of the investigative process must also meet the requirements of the courts, such as reproducibility, verifiability, validation, etc. For this purpose, we propose a new methodology, supported by theoretical concepts, that can assist investigators through the whole process, including the construction and the interpretation of the events describing the case. The proposed approach is based on a model which integrates knowledge of experts from the fields of digital forensics and software development to allow a semantically rich representation of events related to the incident. The main purpose of this model is to allow the analysis of these events in an automatic and efficient way. This paper describes the approach and then focuses on the main conceptual and formal aspects: a formal incident modelization and operators for timeline reconstruction and analysis.
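The timeline-reconstruction step described above can be illustrated with a toy example: traces from heterogeneous sources are merged and ordered into a single chronological timeline. The source names and event fields below are hypothetical, not the paper's formal model:

```python
# Merge event lists from several forensic sources and order them by time.

from datetime import datetime

def build_timeline(*sources):
    """Flatten several per-source event lists into one chronological timeline."""
    merged = [e for src in sources for e in src]
    return sorted(merged, key=lambda e: e["time"])

browser = [{"time": datetime(2024, 5, 1, 10, 2), "event": "URL visited"}]
syslog  = [{"time": datetime(2024, 5, 1, 10, 0), "event": "USB device mounted"},
           {"time": datetime(2024, 5, 1, 10, 5), "event": "file deleted"}]

timeline = build_timeline(browser, syslog)
print([e["event"] for e in timeline])
# → ['USB device mounted', 'URL visited', 'file deleted']
```

The paper's operators go well beyond sorting (semantic correlation, validation for court requirements); this sketch only shows the basic normalize-merge-order shape of the task.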
... Concomitantly, the classic definition of a computational ontology (Gruber 1993) is that it is "an explicit specification of a conceptualization." In ontology-based enterprise modeling, the conceptualization is the set of ontologies required to ensure common interpretation of data from one or more enterprises' shared databases. ...
Preprint
An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies promises better supply chain provenance. We are particularly interested in the blockchain, as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies, as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.
... Specifically, the study focuses on creating an ontology [Gruber, 1993] to provide specific means for student data and map new relationships, using SWRL rules [World Wide Web Consortium 2012] to discover patterns in these data. These patterns offer insights into the reasons for course dropout, mapping previously unknown relationships and directing the study to consider specific dropout patterns. ...
Conference Paper
Student dropout from higher education is still a challenge, imposing a financial and human burden and denying students the opportunity to learn. Brazil witnessed a university dropout rate of almost 55%. This work aims to analyze the factors that lead to student dropout from Information System courses, exploring the profile of students using intelligent techniques. The information obtained can help reduce the evasion rate and identify key actions to control the problem. We used the Design Science Research methodology to conduct our study. An analysis with data from a university, conducted in compliance with the LGPD, was carried out to verify the proposal's feasibility. Our results show that the solution can help identify key factors that lead to dropping out.
... A fundamental first step in establishing a semantics is the process of defining an ontological model, that is, a formal and explicit specification of a shared conceptualization for a given domain [39]. The various research groups working on ontological modelling have moved toward increasingly effective approaches to building formalized semantic networks, taking careful account of the choices made in the initial phase of analysing the requirements the model itself must satisfy. ...
Conference Paper
The recognition of past memory evidence and identity through a critical reconstruction is an essential development driver in the field of industrial heritage. The complex network of relationships established in a site between humans, factories, cities, landscapes, and daily life needs the support of digital tools and technologies to preserve material and immaterial knowledge and to manage intervention and valorization activities. In order to handle such a complex network, it is crucial to promote a deeper understanding and a better knowledge-based structure for general problems and, specifically, for the technological production elements defined during the artefact design phase. Within the usual digital tools, there is a fragmentation of representations that does not allow a complete comprehension of the heritage knowledge description and its semantic relations. In this context, the use of ontological models for data and knowledge definition, for reducing inconsistencies, and for enriching data sources with external information certainly represents the main means of obtaining a heterogeneous data model capable of fully expressing the objects' values. Therefore, the proposed framework shows how to exploit digital technologies to make data available in an open and standards-compliant format to provide the correct interpretative background through correlations with other concepts, information and knowledge. The expected outcome is to improve both the computable knowledge representation and the reasoning automatization system by ontologies to obtain a deeper comprehension, new value recognition and better management to exploit archaeological and industrial assets.
... For instance, TERN harmonized plot-based ecology using EcoPlots (ecoplots.tern.org.au), a semantic data integration system that maps each data source to TERN's Plot Ontology. The term 'ontology' refers to a structured framework that defines the relationships between concepts within a specific domain, providing a shared vocabulary for that domain [12,28]. OWL (Web Ontology Language) [4] is a formal language used to create and share these ontologies on the web, enabling better data interoperability. ...
Article
Full-text available
Open Data Observatories refer to online platforms that provide real-time and historical data for a particular application context, e.g., urban/non-urban environments or a specific application domain. They are generally developed to facilitate collaboration within one or more communities through reusable datasets, analysis tools, and interactive visualizations. Open Data Observatories collect and integrate various data from multiple disparate data sources—some providing mechanisms to support real-time data capture and ingest. Data types can include sensor data (soil, weather, traffic, pollution levels) and satellite imagery. Data sources can include Open Data providers, interconnected devices, and services offered through the Internet of Things. The continually increasing volume and variety of such data require timely integration, management, and analysis, yet presented in a way that end-users can easily understand. Data released for open access preserve their value and enable a more in-depth understanding of real-world choices. This survey compares thirteen Open Data Observatories and their data management approaches - investigating their aims, design, and types of data. We conclude with research challenges that influence the implementation of these observatories, outlining some strengths and limitations for each one and recommending areas for improvement. Our goal is to identify best practices learned from the selected observatories to aid the development of new Open Data Observatories.
... This model categorizes related assets into three levels, A, B, and C, based on their importance. It is widely used in single-criterion classification tasks such as inventory management, spare parts management, and human resource management [7][8][9]. Gruber [10] introduced ontology into the field of computer science and technology and realized the sharing and reuse of information among AI systems by identifying and classifying data. This model played a positive role in the classification, analysis, processing, and service of massive information in the network environment [11,12]. Alzubi [13] put forward an optimal classifier ensemble design (Coalition-based Ensemble Design) based on cooperative game theory, which takes advantage of diverse and accurate classifier ensembles. ...
Article
Full-text available
Purpose/Significance With the extensive adoption of cloud computing, big data, artificial intelligence, the Internet of Things, and other novel information technologies in the industrial field, the data flow in industrial companies is rapidly increasing, leading to an explosion in the total volume of data. Ensuring effective data security has become a critical concern for both national and industrial entities. Method/Process To tackle the challenges of classification management of industrial big data, this study proposed an Information Security Triad Assessment‐Support Vector Machine (AIC‐ASVM) model according to information security principles. Building on national policy requirements, FIPS 199 standards, and the ABC grading method, a comprehensive classification framework for industrial data, termed “two‐layer classification, three‐dimensional grading,” was developed. By integrating the concept of Data Protection Impact Assessment (DPIA) from the GDPR, the classification of large industrial data sets was accomplished using a Support Vector Machine (SVM) algorithm. Result/Conclusion Simulations conducted using MATLAB yielded a classification accuracy of 96.67%. Furthermore, comparisons with decision tree and random forest models demonstrated that AIC‐ASVM outperforms these alternatives, significantly improving the efficiency of big data classification and the quality of security management.
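The ABC grading method mentioned in the excerpt above ranks items by value and assigns classes by cumulative share: roughly the top 70–80% of total value is class A, the next 10–20% class B, and the remainder class C. A minimal sketch with common textbook cut-offs (0.7 and 0.9; these are illustrative defaults, not the paper's exact parameters):

```python
# Classic ABC classification: rank assets by value, walk the cumulative
# share of total value, and grade by threshold.

def abc_classify(assets, a_cut=0.7, b_cut=0.9):
    """assets: dict name -> value. Returns dict name -> 'A' | 'B' | 'C'."""
    total = sum(assets.values())
    ranked = sorted(assets, key=assets.get, reverse=True)
    grades, running = {}, 0.0
    for name in ranked:
        running += assets[name] / total
        grades[name] = "A" if running <= a_cut else ("B" if running <= b_cut else "C")
    return grades

data = {"customer_db": 50, "design_docs": 25, "logs": 15, "temp_files": 10}
print(abc_classify(data))
# → {'customer_db': 'A', 'design_docs': 'B', 'logs': 'B', 'temp_files': 'C'}
```

The paper combines this single-criterion grading with multi-dimensional assessment and an SVM; the function above only shows the underlying ABC step.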
... To address this challenge, we present a hybrid methodology that integrates deep learning with domain expert knowledge using ontologies [3,13] and answer set programming (ASP) [12]. Our approach involves embedding domain-specific constraints and logical rules directly into the loss function of the DL model. ...
Preprint
Full-text available
This paper presents a hybrid methodology that enhances the training process of deep learning (DL) models by embedding domain expert knowledge using ontologies and answer set programming (ASP). By integrating these symbolic AI methods, we encode domain-specific constraints, rules, and logical reasoning directly into the model's learning process, thereby improving both performance and trustworthiness. The proposed approach is flexible and applicable to both regression and classification tasks, demonstrating generalizability across various fields such as healthcare, autonomous systems, engineering, and battery manufacturing applications. Unlike other state-of-the-art methods, the strength of our approach lies in its scalability across different domains. The design allows for the automation of the loss function by simply updating the ASP rules, making the system highly scalable and user-friendly. This facilitates seamless adaptation to new domains without significant redesign, offering a practical solution for integrating expert knowledge into DL models in industrial settings such as battery manufacturing.
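The core idea above, embedding domain rules into the loss function, can be sketched in a few lines: a constraint violation becomes a penalty term added to the task loss, so the model is pushed toward rule-conforming outputs. In the paper the penalties are derived automatically from ASP rules; here one rule ("predictions must be non-negative") is hand-coded, and the weight is an illustrative choice:

```python
# Hybrid loss sketch: task loss (MSE) plus a weighted penalty for
# violating a domain constraint (pred >= lower_bound).

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def rule_penalty(preds, lower_bound=0.0):
    """Quadratic penalty for predictions below the domain's lower bound."""
    return sum(max(0.0, lower_bound - p) ** 2 for p in preds)

def hybrid_loss(preds, targets, weight=10.0):
    return mse(preds, targets) + weight * rule_penalty(preds)

preds, targets = [2.0, -1.0], [2.0, 0.5]
print(hybrid_loss(preds, targets))  # → 11.125 (MSE 1.125 + 10 * 1.0 penalty)
```

In a real training loop this function would be differentiated by the DL framework; the pure-Python version only demonstrates how a symbolic rule reshapes the objective.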
... A core component of symbolic AI is knowledge representation, where symbols denote real-world entities and their relationships. Representations in symbolic AI often employ hierarchical or graph-based structures such as semantic networks or ontologies [1,15]. In contrast, LLMs such as GPT-4 [22] and BERT [6], while highly effective in generating contextually relevant responses, face challenges in offering the same level of explainability due to their "black box" nature [32]. ...
Preprint
Full-text available
Growing concerns over the lack of transparency in AI, particularly in high-stakes fields like healthcare and finance, drive the need for explainable and trustworthy systems. While Large Language Models (LLMs) perform exceptionally well in generating accurate outputs, their "black box" nature poses significant challenges to transparency and trust. To address this, the paper proposes the TranspNet pipeline, which integrates symbolic AI with LLMs. By leveraging domain expert knowledge, retrieval-augmented generation (RAG), and formal reasoning frameworks like Answer Set Programming (ASP), TranspNet enhances LLM outputs with structured reasoning and verification. This approach ensures that AI systems deliver not only accurate but also explainable and trustworthy results, meeting regulatory demands for transparency and accountability. TranspNet provides a comprehensive solution for developing AI systems that are reliable and interpretable, making it suitable for real-world applications where trust is critical.
... The term ontology has different definitions depending on the application domain. We refer to ontologies as defined by Gruber [59] in the domain of computer science. "A common ontology defines the vocabulary with which queries and assertions are exchanged among agents" [59, p. 201]. ...
Article
Full-text available
Automated vehicles in public traffic are subject to a number of expectations: Among other aspects, their behavior should be safe, conforming to the rules of the road and provide mobility to their users [1], [2]. This poses challenges for the developers of such systems: Developers are responsible for specifying this behavior, for example, in terms of requirements at system design time. As we will discuss in the article, this specification always involves the need for assumptions and trade-offs. As a result, insufficiencies in such a behavior specification can occur that can potentially lead to unsafe system behavior. In order to support the identification of specification insufficiencies, requirements and respective assumptions need to be made explicit. In this article, we propose the Semantic Norm Behavior Analysis as an ontology-based approach to specify the behavior for an Automated Driving System equipped vehicle. We use ontologies to formally represent specified behavior for a targeted operational environment, and to establish traceability between specified behavior and the addressed stakeholder needs. Furthermore, we illustrate the application of the Semantic Norm Behavior Analysis in a German legal context with two example scenarios and evaluate our results. Our evaluation shows that the explicit documentation of assumptions in the behavior specification supports both the identification of specification insufficiencies and their treatment. Therefore, this article provides requirements, terminology and an according methodology to facilitate ontology-based behavior specifications in automated driving.
... It searches out results using the conceptual meaning of the input query instead of keyword matching. It gives a clear description of the concepts of a shared domain [7]. This paper is motivated by the need to eradicate some of the challenges encountered in existing e-learning platforms. ...
Article
Full-text available
This paper is focused on developing a Semantic Web-Ontology E-Learning Platform, a system that combines semantic web and ontology technology to guarantee a sophisticated learning environment that provides learners with adaptable and customized learning resources based on their knowledge requirements. With this system, learners can log in from their comfort zone at any time to receive their online lesson as provided by their tutor. The system has the added advantage of providing personalized learning to students through the creation of an intelligent search engine and an ontology backbone consisting of learning data and their metadata. The learner, through this search engine, searches the ontology semantically for the learning materials that suit his or her profile. The system also has the capability of filtering the search results by matching them with the profile of a particular learner using an inference engine, such that the result best suited for the user's academic need is presented. This work will not only promote self-directed learning but will also facilitate quick search of learning materials by narrowing the search based on the specified learner's interest. The methodology adopted for this work is Object-Oriented Analysis and Design Methodology (OOADM), and the programming languages used are PHP-MySQL and JavaScript. The system will be of great benefit to schools, other learning institutions and organizations seeking to educate their manpower.
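The retrieval idea above, matching on concepts rather than keywords and filtering by learner profile, can be sketched briefly. The query is expanded through the ontology to its subconcepts before matching; all concept names and the profile schema below are hypothetical:

```python
# Concept-based retrieval: expand the query concept to its subconcepts,
# then filter annotated materials against the learner's level.

SUBCONCEPTS = {"Programming": {"Python", "Java"}, "Python": set(), "Java": set()}

MATERIALS = [
    {"title": "Intro to Python", "concept": "Python", "level": "beginner"},
    {"title": "Advanced Java", "concept": "Java", "level": "advanced"},
]

def expand(concept):
    """A concept matches itself and, recursively, all its subconcepts."""
    return {concept} | {c for sub in SUBCONCEPTS.get(concept, set())
                        for c in expand(sub)}

def semantic_search(query_concept, learner_level):
    wanted = expand(query_concept)
    return [m["title"] for m in MATERIALS
            if m["concept"] in wanted and m["level"] == learner_level]

print(semantic_search("Programming", "beginner"))  # → ['Intro to Python']
```

A keyword search for "Programming" would match neither title; the ontology expansion is what retrieves the Python material, which the profile filter then keeps.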
... Ontologies, as another part of the Semantic Web, model a domain of interest in terms of concepts, properties, and relationships. An ontology is a formal, explicit specification of a shared conceptualization [6] expressed in languages like RDFS and the Web Ontology Language (OWL) to add more meaning and reasoning power to RDF graphs. By using RDF and ontologies it is possible to represent so-called knowledge graphs (KG). ...
... An ontology, as defined by Gruber in [9], is a "formal specification of a shared conceptualization" that allows one to define concepts, individuals, and relationships. One of the strengths of such a knowledge representation is the ability to infer new knowledge via Description Logic (DL) based reasoning. ...
Conference Paper
Full-text available
In human-robot interaction, the collaboration of agents stems from what each agent is capable of doing to successfully complete the objective. Thus, for a robot to efficiently interact with its environment, it is crucial for it to have knowledge of the actions it is capable of performing. This paper presents a step toward the consideration of agentive aspects in the representation of action possibilities for an agent, i.e. affordances. Building upon the dispositional theory, this work aims at matching the dispositions of entities with the capabilities of the agents. Using an ontological representation and inference mechanisms, this contribution aims to autonomously infer affordance relations linking an agent to entities in a reactive way. As affordance relations stem from the match between the dispositions of entities and the capabilities of the agents, we propose a reasoner which links the corresponding entities given each affordance's requirements. Furthermore, as certain actions may necessitate the combination of multiple entities, this work also addresses the representation and inference of affordances for such combinations. These inferences provide the agent with a self-awareness capability about which actions are feasible with respect to the entities in the environment, while also leveraging changes that may occur in the environment.
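The disposition-capability matching described above can be sketched as a simple lookup: an affordance is inferred whenever an entity's disposition pairs with a matching capability of the agent. The affordance table and the entity/capability names are illustrative, not the paper's ontology:

```python
# Infer affordances by matching entity dispositions with agent capabilities.

AFFORDANCE_RULES = {
    # affordance: (required disposition of entity, required capability of agent)
    "graspable": ("graspability", "grasping"),
    "openable": ("openability", "manipulation"),
}

def infer_affordances(agent_capabilities, entity_dispositions):
    """Return the affordances whose disposition/capability pair is satisfied."""
    return {aff for aff, (disp, cap) in AFFORDANCE_RULES.items()
            if disp in entity_dispositions and cap in agent_capabilities}

robot = {"grasping", "manipulation"}
drawer = {"openability", "graspability"}
print(sorted(infer_affordances(robot, drawer)))  # → ['graspable', 'openable']
```

In the paper this matching is performed by a DL reasoner over the ontology, which also re-evaluates reactively as the environment changes; the dictionary above only makes the matching rule explicit.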
... To address the challenges posed by evolving safety regulations, we propose integrating ontology with computer vision approaches. Ontology provides a formal, conceptualized framework of knowledge, offering a simplified representation of a domain by describing objects, concepts, and their relationships [50]. Its purpose is to enable computer applications to represent and reason about knowledge efficiently. ...
Preprint
Full-text available
The growing deployment of computer vision in industrial processes contributes significantly to strengthening the manufacturing sector in terms of productivity and worker safety. Manufacturing workers often operate in hazardous environments, handling dangerous equipment and putting their lives on the line every day. Work accidents are reminders that companies must make efforts to reduce their occurrence and their adverse impact on the lives of workers. Due to the inherent complexities of construction sites, safety management remains a persistent challenge and a critical responsibility. The primary objective of construction safety management is to guarantee the project's successful and secure completion, prioritize the well-being of workers, and mitigate construction-related accidents and their associated costs. The advent of computer vision (CV) technology has revolutionized traditional approaches to construction safety management. This paper examines the transformative impact of CV technology on construction site safety and management efficiency from an application-oriented perspective. Initially, the paper provides an overview of the fundamental principles and methodologies underpinning CV technology, accompanied by a detailed description of the literature analysis methodology employed.
Subsequently, the paper delves into the diverse applications of CV technology, encompassing real-time construction site monitoring, worker safety management, equipment behavior tracking, and material quality control. Furthermore, the paper highlights the substantial potential of CV technology in mitigating accidents, enhancing safety performance, and offers valuable insights into future research directions. In conclusion, this paper presents a comprehensive overview of the construction industry's pursuit of leveraging CV technology to elevate safety management practices, serving as both an informative and instructive resource.
... What is ontology? In theory, ontology is "a formal, explicit specification of a shared conceptualisation" (Gruber, 1993). In practice, ontology is a logical model applied within a certain knowledge domain. ...
Technical Report
Full-text available
Explores technologies to model the semantic relationships between objects described by TEI-encoded texts of Johann Friedrich Blumenbach and the metadata of these items stored in several University collections
... Our research takes place within the context of the eDEM-CONNECT project 3 , which aims to develop a chatbot-based communication system [Bo24] for dealing with the agitation of PwD and to ease the stress on family members, promoting the stability of home-based care arrangements. In this chatbot, we represented knowledge as an ontology, a formal description of vocabularies that permits us to express complex structures and new relationships between vocabulary terms and components of the classes we describe [Gr93]. We developed the eDEM-CONNECT ontology based on literature reviews, caregiver interviews, and expert workshops; the ontology is publicly available at the NCBO IOBPortal 4 , which is integrated into the eDEM-Connect chatbot [Bo24]. ...
Conference Paper
Full-text available
As the aging population grows, the incidence of dementia is rising sharply, necessitating the extraction of domain-specific information from texts to gain valuable insights into the condition. Training Natural Language Processing (NLP) models for this purpose requires substantial amounts of annotated data, which is typically produced by human annotators. While human annotation is precise, it is also labor-intensive and costly. Large Language Models (LLMs) present a promising alternative that could potentially streamline and economize the annotation process. However, LLMs may struggle with complex, domain-specific contexts, potentially leading to inaccuracies. This paper investigates the effectiveness of LLMs in annotating words and phrases in ambiguous dementia-related texts by comparing LLM-generated annotations with those produced by human annotators. We followed a specific annotation scheme and had both the LLM and human raters annotate a corpus of informal texts from forums of family carers of people with dementia. The results indicate a moderate overlap in inter-rater agreement between LLM and expert annotators, with the LLM identifying nearly twice as many instances as the human raters. Although LLMs can partially automate the annotation process, they are not yet fully reliable for complex domains. By refining LLM-generated data through expert review, it is possible to reduce the burden on human raters and accelerate the creation of annotated datasets.
... As a discipline, knowledge engineering more widely targets ontologies: shared conceptualizations of a target domain expressed in a machine-processable form [29]. These conceptualization artifacts contain terminological knowledge, but they also include relationships attributed (in most cases by humans) to the target domain: e.g. ...
Preprint
With the digitalization of society, the interest, the debates and the research efforts concerning "code", "law", "artificial intelligence", and their various relationships have been widely increasing. Yet, most arguments primarily focus on contemporary computational methods and artifacts (inferential models constructed via machine-learning methods, rule-based systems, smart contracts, ...), rather than attempting to identify more fundamental mechanisms. Aiming to go beyond this conceptual limitation, this paper introduces and elaborates on "normware" as an explicit additional stance, complementary to software and hardware, for the interpretation and the design of artificial devices. By means of a few examples, we argue that normware-centred views provide a more adequate abstraction to study and design interactions between computational systems and human institutions, and may help with the design and development of technical interventions within wider socio-technical views.
... In the literature there are different definitions, among which stand out the reference definition proposed by Gruber [180], who defines ontologies as "formal and explicit specifications of shared conceptualizations", and that of Guarino [61], who indicates that they refer to "engineering artifacts, constituted by a specific vocabulary used to describe a certain reality, together with a set of explicit assumptions concerning the meaning of the vocabulary terms". Depending on the level of generalization of the concepts, [61] proposes the following types of ontologies: upper level, domain, task, and application (see Table 2.1). ...
Chapter
Full-text available
Chapter 6 provides a general overview of clustering methods in medicine. Their application in the health sciences and biomedicine is of critical importance, since they provide an effective means of grouping patients into distinct categories by extracting knowledge from datasets, thereby improving medical care and supporting the development of personalized treatments that respond to the unique challenges of each patient.
... An ontology is "an explicit specification of a conceptualization" (p. 199, [29]) that allows machine-readable knowledge to be shared between humans in a common vocabulary. While the combination of ontologies and assurance cases is not entirely novel -Gallina et al. [30] propose such a framework for assuring AI conformance with the EU Machinery Regulation -we note that continuously formalizing, assuring and reasoning about LLM security is a novel proposition. ...
Preprint
Full-text available
Despite the impressive adaptability of large language models (LLMs), challenges remain in ensuring their security, transparency, and interpretability. Given their susceptibility to adversarial attacks, LLMs need to be defended with an evolving combination of adversarial training and guardrails. However, managing the implicit and heterogeneous knowledge for continuously assuring robustness is difficult. We introduce a novel approach for assurance of the adversarial robustness of LLMs based on formal argumentation. Using ontologies for formalization, we structure state-of-the-art attacks and defenses, facilitating the creation of a human-readable assurance case, and a machine-readable representation. We demonstrate its application with examples in English language and code translation tasks, and provide implications for theory and practice, by targeting engineers, data scientists, users, and auditors.
... To clarify how knowledge can be represented, we turn to the ontology of artificial intelligence in computer science [6,7], sometimes referred to as applied ontology. In philosophy and in information science, ontology is the attempt to represent entities, including their properties and relations, according to a system of taxonomies. ...
Article
Full-text available
We are confronted with the concept of intelligence every day, ranging from human intelligence to artificial intelligence. Some animals are also attested to be intelligent based on the specific problems they solve. We also come across terms such as swarm intelligence, emotional intelligence, or even physical intelligence. But there is still no clear definition of what intelligence actually is and, in particular, how it could be measured. Intelligence tests that provide quantitative information have so far only been available from psychology, and only for people. There is a lack of criteria for what makes a system an intelligent system. This became particularly clear with the question of whether generative AI, such as ChatGPT, can be considered intelligent at all. So can intelligence be derived from the cognitive abilities of a system, or is it ultimately decisive how these abilities come about? This paper suggests a definition of the term intelligence, offers an explanation of what constitutes intelligence and to what extent intelligence is required to gain knowledge, and finally questions whether artificial systems are intelligent and have any knowledge at all.
... As mentioned above, one of the most popular definitions of ontology states that ontologies represent the entities that exist in a relevant portion of reality, their properties, and interrelations 38 . Other definitions, capturing rather the technical-philosophical characteristics of ontologies, refer to them pragmatically as formal specifications of conceptualizations 39 or as logical theories that are interpreted as propositions about a domain 40 . More descriptive definitions may also include that ontologies are computational structures that allow for machine-actionable knowledge representations which can be employed for specific application purposes (see also Ref. 41 ). ...
Thesis
Full-text available
The work presented in this doctoral thesis pursued the objective to conceptualize an expressive and flexible knowledge representation system that can capture the complexity of cognitive neuroscience research. An adequate system needs to be able to represent the interactions and dependencies between (1) theories as natural delimiters of conceptual space, (2) latent, only indirectly observable cognitive concepts as hypothetical stages of cognition, (3) terminological ambiguity resulting from imprecise community-specific terminology use, and (4) the heterogeneous and complex types of data resulting from (neural) signal measurements that are assumed to proxy ongoing cognition. Crucially, a knowledge representation system that satisfies these requirements makes it possible to semantically enrich the continuously growing amounts of empirical neurocognitive data with domain-specific contextual (i.e., experimental and conceptual) meta-data. Ontologies as highly expressive and complex knowledge representation structures are, most generally, specifications of conceptualizations of reality that have a particular scope and purpose. Specifically, an ontology for the domain of cognitive neuroscience as conceptualized throughout this work may be regarded as a computational artifact that encodes domain-specific knowledge in a machine-actionable format to make it available to information systems that manage (neurocognitive) data. On these terms, knowledge valorization and FAIR data operations in cognitive neuroscience are expected to be significantly facilitated through a shared and systematically structured vocabulary (e.g., mitigating terminological ambiguity and elevating the quality of meta-data annotations) and a machine-actionable architecture (allowing knowledge and data to be accessed programmatically and reasoned with, e.g., for identifying potential inconsistencies).
However, perhaps the most valuable contribution of a representationally adequate ontology for cognitive neuroscience may unfold through explicating implicit knowledge: making it tangible, assessable, and thereby potentially improvable.
... There are two definitions presented here: Gruber's definition and the definition provided by the BFO (Basic Formal Ontology) developers. Gruber [9] defines ontology as 'an explicit specification of a conceptualization'. He implies that an ontology serves as a representative vocabulary capturing the objects within a domain of knowledge and their relationships to each other. ...
Article
The performing arts domain, addressing the ephemeral nature of its main study subject, the performance, calls upon Digital Humanities' techniques to document, record and analyse the performative event, thus offering a new perspective to research and education. In this context, many digital projects have been completed, including digitising art archives, creating databases of art productions and developing metadata schemas and conceptual models. The study of these efforts revealed a need for a universal and comprehensive way of documenting performance. This article, which benefitted from the experience gained thus far in the performing arts in the digital world, attempts to unify the domain's acquired knowledge and organise it in an ontology. While completing such a project proved to be beyond the scope of this research, we propose a core ontology for the performing arts, which has the potential to evolve in the future.
... An ontology is a formally specified agreed description (conceptualization) of a subject domain (in T. Gruber's sense [6,7]) that is developed by a group of experts and interpreted by both machines and people. In other words, an ontology is a formalized description of expert-agreed concepts in a particular subject domain, developed to be unambiguously understood by people and machines. ...
Article
Full-text available
This paper describes the approaches underlying ISAND, an information system for scientific activity analysis in the field of control theory and its applications. ISAND is being developed at the Trapeznikov Institute of Control Sciences, the Russian Academy of Sciences. The ISAND ontology is oriented toward the representation and collection of knowledge in the field of control theory and its applications, namely, scientific knowledge (the ontology of control theory) and knowledge related to the scientific activity of agents (organizations, journals, conferences, and individual researchers) in this field. Based on this ontology, the ISAND architecture is a complex program system to collect, store, and analyze publications and their metadata from external sources. The ISAND algorithm for building the thematic profiles of scientific objects (publications, researchers, organizations, journals, and conferences), as well as ISAND text processing and network analysis capabilities, are presented. Finally, the main possibilities of using ISAND are considered.
... Ontology is a formal method of knowledge representation in the field of computer science, used to describe entities in the real world and the relationships between them [14,15]. The goal of an ontology is to provide a shared and consistent conceptual framework, enabling computer programs to understand and process information, thereby facilitating information sharing and semantic interoperability [16]. The ontology types for mountainous rack railways depend on specific ontology modeling requirements and usage scenarios. ...
Article
Full-text available
In response to the practical demands for data sharing and exchange in the field of rack railway systems engineering, as well as to address the gaps in the rack railway domain within the framework of the IFC4 standard, we extend and define the rack railway domain through entity extension and custom attribute sets. By utilizing the ongoing construction of the Dujiangyan to Mount Siguniang Railway as a case study, we validate the utility of this IFC extension and modeling approach. Leveraging IfcOpenShell, we incorporate the extended data content into the generated IFC file. We present a process for extension tailored to the characteristics of rack railway engineering. This study aims to provide broader information support for the digital construction of track structures in the design phase of rack railway engineering and to facilitate more efficient data exchange and sharing.
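The extension mechanism described above, augmenting standard entities with custom attribute sets, can be illustrated schematically. The class and property names below are invented for illustration; this is not the IFC4 schema and not the IfcOpenShell API, which the authors use to write the actual IFC files:

```python
from dataclasses import dataclass, field

# Schematic sketch of "entity extension + custom attribute sets".
# Names (RackRailElement, Pset_RackRailCommon, ToothPitch_mm) are hypothetical.
@dataclass
class PropertySet:
    name: str
    properties: dict

@dataclass
class RackRailElement:
    global_id: str
    psets: list = field(default_factory=list)

rack = RackRailElement(global_id="rack-segment-001")
rack.psets.append(PropertySet(
    name="Pset_RackRailCommon",  # invented property set name
    properties={"ToothPitch_mm": 100, "RackSystem": "example"},
))
print(rack.psets[0].properties["ToothPitch_mm"])  # → 100
```

The real workflow attaches such property sets to IFC entities inside an IFC4 file so that downstream tools can read the domain-specific attributes through the standard exchange format.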
... An ontology is a formal specification of an agreed description (conceptualization) of a subject domain (in T. Gruber's sense [6,7]), developed by a group of experts and interpreted by both machines and humans. In other words, an ontology is a formalized description of expert-agreed concepts in a particular subject domain, developed to be unambiguously understood by both people and machines. ...
Article
Full-text available
This paper describes the approaches underlying ISAND, an information system for scientific activity analysis in the field of control theory and its applications. ISAND is being developed at the Trapeznikov Institute of Control Sciences, the Russian Academy of Sciences. The ISAND ontology is oriented toward the representation and collection of knowledge in the field of control theory and its applications, namely, scientific knowledge (the ontology of control theory) and knowledge related to the scientific activity of agents (organizations, journals, conferences, and individual researchers) in this field. Based on this ontology, the ISAND architecture is a complex program system to collect, store, and analyze publications and their metadata from external sources. The ISAND algorithm for building the thematic profiles of scientific objects (publications, researchers, organizations, journals, and conferences), as well as ISAND text processing and network analysis capabilities, are presented.
Finally, the main possibilities of using ISAND are considered.
... Unlike PM, which learns a model from given log data, SPMs are used to discover frequent subsequences in a sequence dataset such as web logs (Mabroukeh & Ezeife, 2010). Ontologies, defined by Gruber (1993) as "a formal explicit specification of conceptualization", are a mechanism for modeling knowledge in which concepts and the relations among them are explicitly defined in a machine-readable manner. They offer a reasoning mechanism that infers new knowledge from the already existing one. ...
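In its simplest form, the sequential pattern mining mentioned above reduces to counting how many sequences in a log contain a candidate subsequence (its support). A stdlib-only sketch, with an invented toy log of learning activities:

```python
def contains(seq, pattern):
    """True if pattern occurs in seq as a (not necessarily contiguous) subsequence."""
    it = iter(seq)
    # `x in it` advances the iterator, so matches must occur in order.
    return all(item in it for item in pattern)

def support(log, pattern):
    """Fraction of sequences in the log that contain the pattern."""
    return sum(contains(s, pattern) for s in log) / len(log)

# Hypothetical learner sessions, not the paper's Moodle data.
log = [
    ["intro", "video", "quiz"],
    ["intro", "quiz"],
    ["video", "intro", "quiz"],
]
print(support(log, ["intro", "quiz"]))  # → 1.0
```

Algorithms such as GSP or PrefixSpan organize this counting efficiently over large logs; frequent patterns (support above a threshold) can then feed the ontology-based recommendation of learning paths.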
Article
Full-text available
In education, e-learning is highly adopted to improve the learning experience and increase learning efficiency and engagement. Yet, an explosion of online learning materials has overwhelmed learners, especially when trying to achieve their learning goals. In this scope, recommender systems are used to guide learners in their learning process by filtering out the available resources to best match their needs, i.e. to offer personalized content and learning paths. Concurrently, process mining has emerged as a valuable tool for comprehending learner behavior during the learning journey. To synergize these disciplines and optimize learning outcomes, our paper introduces an ontology-based framework that aims to recommend an adaptive learning path, driven by a learner’s learning objective, personalized to his learning style, and enriched by the past learning experience of other learners extracted via process mining. The learning path considers pedagogical standards by employing Bloom’s taxonomy within its structure. The framework establishes an Ontological Foundation, to model the Learner, Domain, and Learning Path. Choosing Computer Science as a domain, we construct a knowledge base using synthesized data. For past learning experience, we analyze Moodle log data from 2018 to 2022, encompassing 471 students in the Computer Science and Engineering Department at Frederick University, Cyprus.
... The core function of ontology is to describe entities and their interrelationships in a domain through terms and relationships that can be understood by both humans and machines, thereby providing a unified view of the essence of things in that domain [12-14]. Various studies have explained approaches for adopting ontologies in diseases [5,15-18]. The construction of an Integrated TCM and WM CPG ontology is the basis and a key step in solving the problem of CPG data sharing and achievement integration. ...
Article
Full-text available
Objective Clinical practice guidelines (CPGs) for Integrated Traditional Chinese and Western Medicine (TCM and WM) are important medical documents used to assist medical decision‐making and are of great significance for standardizing clinical pathways. However, due to the constraints of text format, it is difficult for Integrated TCM and WM CPGs to play a real role in medical practice. In addition, how to standardize the structure and semantic relationships between Integrated TCM and WM CPG knowledge, and realize the construction of computable, sharable and reliable CPGs, remains an urgent issue to be addressed. Therefore, we are proposing an ontology of CPGs for Integrated TCM and WM. Methods We first initialized domain concepts and relationships to ensure the accuracy of the ontology knowledge structure. We then screened CPGs that meet the standards for Integrated TCM and WM, analyzed and classified the contents, and extracted the common structures. Based on the seven‐step ontology construction method combined with inference‐complement, referring to the representation methods and hierarchical relationships of terms and concepts in MeSH, ICD‐10, SNOMED‐CT, and other ontologies and terminology sets, we formed the concept structure and semantic relationship tables for the ontology. We also achieved the matching and mapping between the ontology and reference ontologies and term sets. Next, we defined the aspects and constraints of properties, selected multiple Integrated TCM and WM CPGs as instances to populate, and used ontology reasoning tools and formulated defined inference rules to reason and extend the ontology. Finally, we evaluated the performance of the ontology. Results The content of the Integrated TCM and WM CPGs is divided into nine parts: basic information, background, development method, clinical question, recommendation, evidence, conclusion, result, and reason for recommendations. 
The Integrated TCM and WM CPG ontology has 152 classes and defines 90 object properties and 114 data properties, with a maximum classification depth of 4 layers. The terms of disease, drug and examination item names in the ontology have been standardized. Conclusions This study proposes an Integrated TCM and WM CPG ontology. The ontology adopts a modular design, which has both sharing and scaling ability, and can express rich guideline knowledge. It provides important support for the semantic processing and computational application of guideline documents.
... This paper explores an automated method for creating service ontologies (SO) integrated into a layered architecture to aid in web service discovery (Kaouan et al., 2017). The idea of automatic ontology construction is based on Gruber's definition (Gruber, 1993), which defined ontology as an explicit specification of a conceptualization. By convention, the conceptualization is recognized as a grouping of entities, just as in software engineering, where objects with the same semantics must be grouped within the same class. ...
Article
Full-text available
Ontologies have played a crucial role in addressing challenges related to knowledge modeling, artificial intelligence, clustering, and classification. Within the realm of web services, ontologies primarily serve to describe web service interfaces, facilitating the automation of functional processes, particularly by guiding registries in discovering services. Enriching this model with a shared ontology (knowledge base) can improve the efficiency of web services technology by guiding web service processes. This shared ontology plays the role of intermediary between queries and registries and can be shared among different registries. However, manually constructing ontologies is a labor-intensive and expensive endeavor, necessitating domain expertise. Automatic ontology construction approaches present compelling alternatives, offering advantages in terms of both time and financial cost. In this article, we advocate the use of the Hierarchical Clustering algorithm to generate an ontological network (Service Ontology (SO)) from a semantic web services corpus. Experimental validations are presented to demonstrate the effectiveness of the proposed methodology.
... Within the semantic web paradigm, formal ontologies serve the purpose of defining the concepts and relationships between concepts and are employed in constructing knowledge bases specific to various domains [23,6]. Studer [24] described ontologies as "a formal, explicit specification of a shared conceptualization"; where "formal" indicates that it should be readable by machines, "explicit" implies that the concepts employed and the constraints on their usage are clearly defined, while "shared" suggests that it represents consensus knowledge agreed upon by a group. ...
Conference Paper
The integration of semantic web technologies in the Architecture, Engineering, and Construction (AEC) industry has gained momentum in recent years, with promising studies being conducted for various use cases. Furthermore, its necessity has been implied in the Building Information Modelling (BIM) Maturity Level 3, which focuses on fully integrated collaboration and open data standards and transfer. The main drivers of implementing semantic web technologies in a multi-data environment like the AEC industry are to overcome interoperability issues, link across domains, and facilitate logical inference and proof. The construction execution phase encompasses multiple stakeholders delivering data from multiple domains in heterogeneous formats, which needs to be put to better use for optimal project delivery. Thus, this study explores how semantic web technology can be applied for BIM-based project management and control during the execution phase through a comprehensive review of the literature to identify existing semantic web applications and their limitations to determine the gaps in knowledge. The development of ifcOWL ontology has spurred attention on developing domain ontologies to be integrated with the ifcOWL for the seamless automation of construction activities. The study identified several ontology-based semantic web applications for project control in a BIM-based environment, for example, 4Dcollab ontology for Synchronous Collaboration Sessions for decision making, LinkOnt for lookahead planning enabled construction constraint checking, CPPC ontology for construction product control, etc. Furthermore, many studies were recorded related to semantic web technology-based 4D BIM use cases, such as automated schedule generation, automated construction sequencing, collaboration, and constraint checking. 
A consistent idea across these studies is the emphasis on the need for knowledge modelling leveraging data accumulated through both pre-construction and construction stages for project monitoring and controlling processes and integrating data streams through IoT and sensing devices for prompt decision-making.
... Well-established approaches for multi-robot knowledge sharing often utilize highly flexible and robust ontologies. In ontologies, knowledge is represented as concepts connected through relationships and constraints and manipulated through operations like merging, unification, and inheritance [42]. Several ontological frameworks have been developed to facilitate knowledge representation and sharing in multirobot teams. ...
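The ontology operations named above (merging, inheritance) can be sketched over dict-based concept hierarchies; the robot-domain concept names below are invented for illustration:

```python
# Each ontology maps a concept to the set of its direct parent concepts.
def merge(onto_a, onto_b):
    """Union two concept hierarchies, combining parent sets for shared concepts."""
    merged = {c: set(ps) for c, ps in onto_a.items()}
    for c, ps in onto_b.items():
        merged.setdefault(c, set()).update(ps)
    return merged

def ancestors(onto, concept):
    """Inherited superconcepts: transitive closure over parent links."""
    seen = set()
    stack = [concept]
    while stack:
        for parent in onto.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

robot_a = {"Gripper": {"Effector"}, "Effector": {"Component"}}
robot_b = {"Camera": {"Sensor"}, "Sensor": {"Component"}}
shared = merge(robot_a, robot_b)
print(ancestors(shared, "Gripper"))  # → {'Effector', 'Component'}
```

Production multi-robot frameworks perform these operations on full OWL ontologies with consistency checking; the sketch only shows why merged hierarchies let one robot inherit and reuse another's conceptual knowledge.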
Preprint
Full-text available
Multi-agent and multi-robot systems (MRS) often rely on direct communication for information sharing. This work explores an alternative approach inspired by eavesdropping mechanisms in nature that involves casual observation of agent interactions to enhance decentralized knowledge dissemination. We achieve this through a novel IKT-BT framework tailored for a behavior-based MRS, encapsulating knowledge and control actions in Behavior Trees (BT). We present two new BT-based modalities, eavesdrop-update (EU) and eavesdrop-buffer-update (EBU), incorporating unique eavesdropping strategies and efficient episodic memory management suited for resource-limited swarm robots. We theoretically analyze the IKT-BT framework for an MRS and validate the performance of the proposed modalities through extensive experiments simulating a search and rescue mission. Our results reveal improvements in both global mission performance outcomes and agent-level knowledge dissemination with a reduced need for direct communication.
... (Birks & Mills, 2023). According to Gruber (1993), ontology is an abstract representation of the world conceptualized explicitly or implicitly. Differences in how researchers experience existence and reality often lead to choices between qualitative and quantitative research settings. ...
Thesis
Full-text available
This thesis is about the experience of social work management and leadership in Finland, and the implications for pedagogy. Leadership and management in social work has been researched over the years, yet a unanimous understanding of appropriate social work leadership and management has not been reached. This research provides new perspectives on social work leadership by presenting the views of Finnish social work leaders and social work field employees as narratives on social work leadership and management. These narratives provide insight into experiences of good social work leadership practices and help reveal the phenomenon of leadership and management in a holistic light, from the grassroots of everyday life in the social work profession. The narratives were gathered through interviews and a grounded theory research process and reviewed in light of theoretical knowledge constructed through a literature review. The theoretical findings generated through the literature review describe styles and approaches of leadership from both the business management and social work traditions. In addition, social work as a context, particularly in Finland, is discussed, as are themes of professional values and interdisciplinary work. Empirical data for this research was collected through interviews with social work leaders and workers in the social care field. The findings confirm the relevance of much of the pre-existing research (Lawler & Bilson, 2010; Juuti, 2013; Pekkarinen, 2010; Peters, 2018), particularly the importance of values. In addition, three clear narratives generated the three roles of "caregiver", "understander" and "designer", which all contribute to well-functioning leadership and management in an organisation through different strengths. These empirical findings present "relational understandings" as capacities that are important for a leader to reflect on in order to increase their self-awareness as a leader.
These involve considering one's relationship to power; the ability to understand trust as a dynamic phenomenon influencing the atmosphere in a work setting; and dialogue as a tool to affect the state of trust. Knowledge from both empirical interview data and theoretical sources was combined to provide an example of how a social work management and leadership curriculum might be developed. The intended learning outcomes describe the recommended objectives for social work leadership and management training. The example narratives illustrate experiences that connect to the themes regarded as essential in social work leadership and management. This research contributes to the professional body of knowledge by confirming the usability of the transformational, compassionate, servant and distributed leadership approaches, alongside highlighting the importance of strategic objectives and evidence-based practice. For these existing approaches, this research presents original initiatives for their further development by connecting them to tools originating in business management, such as lean management and multiple-criteria decision making. This integration should be made through social work ethics and social work professional values. A step forward is made in highlighting that psychodynamic leadership should be better acknowledged, particularly through its relevance to understanding emotions in community interaction and their connection to trust. This research recognized four tensions, which were later in the process combined into three. All of these deserve further attention in research: the tensions between social work ethical values and financial tools rooted in business management, between individualism and distributed leadership, and between trust and mistrust in the community should each be examined further.
... It is worth mentioning that the true benefit of the proposed workflow can be harnessed only if the buildings share vocabularies such as the Brick ontology, SSN, and the proposed new ontologies. By definition, an ontology should be an explicit, formal specification of a shared conceptualization (Gruber, 2008), and agreement upon these ontologies is important to make the proposed workflow adaptable. Future work focuses on implementing and validating the proposed RDF service to support the MPC and other data-driven applications. ...
Conference Paper
Full-text available
A smart building relies on heterogeneous information systems, with data originating from different sources and represented in various formats. Semantic Web technologies allow connecting these disparate data sources by standardizing metadata with ontologies and enriching information with logical relationships. The latter enables not only shared understanding and interpretability by machines but also querying and reasoning capabilities. Brick and SOSA/SSN are two such widely used ontologies in the smart building domain. These ontologies are currently used to semantically describe physical or virtual assets in a building. Building on them, we propose an ontology to describe a Model Predictive Controller (MPC) system. This MPC is designed as a demand-side management strategy for an office micro-grid system. The proposed ontology complements the use of Semantic Web technologies in the smart building: it allows the designed MPC application to be seamlessly adapted to a different building, for both simulation and experimental purposes, by communicating with the necessary data sources using the knowledge embedded in the semantic graphs.
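The portability claim in this abstract, that the same MPC can move to another building because it discovers its data points from a semantic graph rather than hard-coded point names, can be sketched as follows. The class strings only mimic Brick-style naming; all identifiers here are hypothetical, and a real system would use an RDF store and SPARQL rather than Python lists.

```python
# Sketch: the controller binds its inputs by ontology class, not by point
# name, so re-deploying to a new building only requires swapping the graph.
# Class strings mimic Brick-style naming; all identifiers are hypothetical.

building_graph = [
    ("pt:zone1_temp",  "rdf:type", "brick:Zone_Air_Temperature_Sensor"),
    ("pt:zone1_setpt", "rdf:type", "brick:Zone_Air_Temperature_Setpoint"),
    ("pt:meter_main",  "rdf:type", "brick:Electrical_Meter"),
]

def points_of_class(graph, cls):
    """Find all points declared as instances of a given ontology class."""
    return [s for s, p, o in graph if p == "rdf:type" and o == cls]

# The MPC discovers its temperature input from the graph at deploy time:
mpc_inputs = points_of_class(building_graph,
                             "brick:Zone_Air_Temperature_Sensor")
print(mpc_inputs)  # ['pt:zone1_temp']
```

Pointing the same lookup at a second building's graph would bind the controller to that building's sensors with no change to the MPC code, which is the adaptability the shared vocabulary buys.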
... For instance, in autonomous vehicles, ontologies help the AI system understand and categorize objects in the environment, such as distinguishing between pedestrians, vehicles, and road signs. This ontological framework is essential for the AI to navigate and make decisions in real-time, ensuring safety and efficiency (Gruber, 1993). ...
Article
Full-text available
This paper delves into the philosophy of informatics, exploring its critical role in shaping the development and application of information technologies. By examining informatics through the intertwined lenses of ontology, epistemology, ethics, and theoretical frameworks, the study underscores the importance of philosophical inquiry in understanding the complexities of the digital age. Ontological considerations provide the foundation for defining and structuring information, which influences how data is categorized and interpreted. Epistemology extends this by addressing how information is transformed into reliable knowledge and the processes by which this knowledge is validated. Ethics is a central concern, guiding the responsible design and use of information systems, particularly in addressing challenges such as data privacy, algorithmic bias, and the ethical implications of artificial intelligence. Theoretical frameworks, including Socio-Technical Systems (STS) theory, Actor-Network Theory (ANT), and Critical Theory, offer valuable insights into the socio-technical nature of informatics, emphasizing the need for systems that are not only technically sound but also socially responsible. The paper concludes that a comprehensive understanding of the philosophy of informatics is essential for developing technologies that are aligned with human values and that contribute positively to society.
Preprint
Full-text available
This study applies Natural Language Processing (NLP) and Machine Learning (ML) to investigate global historical trends in food security. Using USAID’s Famine Early Warning Systems Network’s (FEWS NET) comprehensive reports spanning over two dozen countries, it explores prevalent dimensions such as shocks, outcomes, and coping capacities, offering insights into long-term food security conditions. Results highlight the prevalence of market and climate impacts across the countries and period considered. Based on results from the topic classification, ML models were applied to determine the most important factors that predict food insecurity. The analysis confirmed market shocks as the main predictors of food insecurity, globally. The approach demonstrates the potential for extracting valuable insights from narrative sources that can support decision-making and strategic planning. This integrated approach not only enhances understanding of food security but also presents a versatile tool applicable beyond the context of humanitarian aid.
Article
The problem of the energy sector's impact on the environment, as one of the anthropogenic factors, is now evident. As part of research on assessing the impact of the energy sector on the geoecology of the region, the development of a Web-oriented information system is planned, integrating mathematical and semantic methods, software tools, and a database and knowledge base for the subject domain under consideration. The article discusses the proposed ontological approach to structuring and integrating the information required for this research. It is proposed to treat ontologies as a prototype of the infological data model when designing the database and knowledge base.
Article
The article presents an original methodological approach to building a system of ontologies for storing knowledge about energy pipeline systems, their properties, the modelling problems associated with these systems, and the software used. This system of ontologies consists of a meta-ontology and applied ontologies, which include an ontology of pipeline systems of various types, an ontology of problems, and an ontology of software. The use of ontologies in an integrated graphical environment makes it possible to automate the stages of software construction, populate the user interface with information, and ensure effective work with a computer model of a pipeline system.
Article
This article presents a literature review of the domain of dance research that exploits ontology artifacts to manage domain knowledge. Any dance form around the world is rich in knowledge because of its historical and geographical diversity, movement rules, and interpretive aspects. Researchers have found various approaches to managing this knowledge base, among which the ontology development process is preferred by most owing to its straightforward and easy-to-manage characteristics. However, the heterogeneous use of ontology across different aspects of dance research demands an organized study to understand its contributions to the domain. Our survey approach towards this objective starts with a systematic literature selection, further grouping the selected works into four categories based on ontology involvement. Second, we discuss each group of articles by their contributions and the level of ontology involvement. Third, a novel evaluation framework is proposed, which assesses each selected article on nineteen attributes from ontology quality, development, and application perspectives. We rank each article into three qualitative measures, i.e., High (H), Medium (M), and Low (L), for our attribute set based on our understanding. Finally, we comprehensively analyze the outcomes of our qualitative assessment to present the current research status and limitations in the candidate domain. This review aspires to be a cornerstone resource, enlightening researchers about the current landscape and future prospects of ontological involvement in dance research.
Presentation
Presentation of the paper at the XIV Seminar on Ontology Research in Brazil (ONTOBRAS 2021) and the V Doctoral and Master’s Consortium on Ontologies (WTDO 2021)
Presentation
Master's dissertation defense – ROBERTO MONTEIRO DIAS. Dissertation title: "Ontologia para o Gerenciamento de Segurança da Informação em Sistemas-de-Sistemas" ("Ontology for Information Security Management in Systems-of-Systems"). Committee – UNIRIO: RODRIGO PEREIRA DOS SANTOS (advisor), TADEU MOREIRA DE CLASSE. External members: JOEL LUÍS CARBONERA. UNIRIO alternates: BRUNA DIIRR GONÇALVES DA SILVA. External alternates: MARK DOUGLAS DE AZEVEDO JACYNTHO. Date: 18/04/2022 at 14:00
Article
There are two main approaches to the development of AI systems: data-driven approaches primarily using machine learning, and knowledge-based approaches using knowledge processing technologies. In knowledge-based AI, various knowledge graphs are developed and used as core technologies. A knowledge graph represents connections between various pieces of knowledge in a network-like graph structure, and technologies such as Linked Data and ontologies are widely used for its publication and utilization. In this paper, the foundational technologies involved in the development of AI systems using knowledge graphs are explained.
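The "connections between pieces of knowledge in a network-like graph structure" described here can be made concrete with a small sketch: labelled edges plus a traversal that follows a chain of predicates, in the Linked Data style. The entities and predicates are illustrative examples only.

```python
# Sketch of a knowledge graph as labelled directed edges, with a simple
# traversal that follows connections between pieces of knowledge.
# Entities and predicates are illustrative.
from collections import defaultdict

edges = [
    ("Tokyo", "capitalOf", "Japan"),
    ("Japan", "locatedIn", "Asia"),
    ("Tokyo", "population", "37M"),
]

# Index outgoing edges by subject for fast traversal.
graph = defaultdict(list)
for s, p, o in edges:
    graph[s].append((p, o))

def follow(start, *predicates):
    """Follow a chain of predicates from a starting entity."""
    node = start
    for pred in predicates:
        node = next(o for p, o in graph[node] if p == pred)
    return node

# Chain two links: Tokyo -capitalOf-> Japan -locatedIn-> Asia
print(follow("Tokyo", "capitalOf", "locatedIn"))  # 'Asia'
```

Chaining edges like this is the basic operation that ontologies and Linked Data standardize at web scale, with URIs as node identifiers and SPARQL as the query language.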
Article
Full-text available
Abstract This study discusses the philosophy of informatics, exploring its critical role in shaping the development and application of information technology. By examining informatics through the intertwined lenses of ontology, epistemology, ethics, and theoretical frameworks, the study emphasizes the importance of philosophical inquiry in understanding the complexities of the digital age. Ontological considerations provide the foundation for defining and structuring information, which influences how data is categorized and interpreted. Epistemology extends this by addressing how information is transformed into reliable knowledge and the processes by which that knowledge is validated. Ethics is a central concern, guiding the responsible design and use of information systems, particularly in addressing challenges such as data privacy, algorithmic bias, and the ethical implications of artificial intelligence. Theoretical frameworks, including Socio-Technical Systems (STS) theory, Actor-Network Theory (ANT), and Critical Theory, offer valuable insights into the socio-technical nature of informatics, emphasizing the need for systems that are not only technically sound but also socially responsible. The paper concludes that a comprehensive understanding of the philosophy of informatics is essential for developing technologies that align with human values and contribute positively to society. Introduction Informatics, often regarded as a technical discipline, has deep philosophical dimensions that are essential to understanding its impact on knowledge and society. The discipline involves the systematic study of information and its processing, usually by computational means.
Much attention has been devoted to the practical and technological aspects of informatics, making the need to explore its philosophical foundations increasingly apparent, especially as informatics plays an ever more important role in fields ranging from healthcare to social media, and from artificial intelligence to cybersecurity. The philosophy of informatics addresses several fundamental questions: What is the nature of information? How does it relate to knowledge? What are the ethical implications of processing and disseminating information? Understanding these questions requires tracing the origins of informatics from a philosophical perspective, a topic often neglected in mainstream discussion. By examining the philosophical foundations of informatics, we gain an understanding of the conceptual frameworks underlying its development and application, so that