Article

Abstract

Data on observed and forecasted environmental conditions, such as weather, air quality and pollen, are offered in great variety on the Web and serve as a basis for decisions taken by a wide range of the population. However, the value of these data is limited because their quality varies considerably and because the burden of interpreting them in the light of a specific context and of the specific needs of a user is left to the user herself. To remove this burden from the user, we propose an environmental Decision Support System (DSS) model with an ontology-based knowledge base as its integrative core. The availability of an ontological knowledge representation allows us to encode in a uniform format all knowledge that is involved (environmental background knowledge, the characteristic features of the profile of the user, the formal description of the user request, measured or forecasted environmental data, etc.) and to apply advanced reasoning techniques to it. The result is an advanced DSS that provides high-quality environmental information for personalized decision support.
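The central idea of the abstract, encoding heterogeneous knowledge (user profile, user request, environmental observations) in one uniform ontological format that can be queried and reasoned over, can be illustrated with a minimal sketch using the rdflib library; the namespace, class and property names below are hypothetical placeholders, not the vocabulary of the actual system.

```python
# Minimal sketch (not the actual PESCaDO vocabulary): user profile, request and
# an observation are all encoded as triples in one RDF graph and queried uniformly.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/edss#")   # hypothetical namespace
g = Graph()

# User profile
g.add((EX.anna, RDF.type, EX.User))
g.add((EX.anna, EX.hasCondition, EX.PollenAllergy))

# User request (the decision support question)
g.add((EX.req1, RDF.type, EX.Request))
g.add((EX.req1, EX.issuedBy, EX.anna))
g.add((EX.req1, EX.concernsActivity, EX.OutdoorJogging))

# Measured / forecasted environmental data
g.add((EX.obs1, RDF.type, EX.PollenObservation))
g.add((EX.obs1, EX.pollenLevel, Literal("high")))

# A single SPARQL query can now combine all three kinds of knowledge.
q = """
PREFIX ex: <http://example.org/edss#>
SELECT ?user ?level WHERE {
  ?req ex:issuedBy ?user .
  ?user ex:hasCondition ex:PollenAllergy .
  ?obs a ex:PollenObservation ; ex:pollenLevel ?level .
}
"""
for user, level in g.query(q):
    print(f"{user} is pollen-sensitive; current pollen level: {level}")
```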


... Several studies have been conducted to investigate the development of frameworks based on SWT for different problem domains. For example, Wanner et al. (2015) investigated whether the ontologies in SWT can be exploited as the core of a DSS, in the sense that all functions of the system operate on ontologies designed to serve all of its modules. To answer this question, the authors proposed an environmental DSS model with an ontology-based knowledge base as its integrative core. ...
... These activities are domain-specific and required for decision-making models; however, they might not be mandatory in other domains. The newly derived information is inserted directly into the knowledge base to be used in exploration and decision-making activities (Wanner et al., 2015). In the problem-domain use-case scenario, the companies' performance rates are calculated from other existing numeric rates and inserted into the knowledge base as appropriate. ...
Article
Full-text available
The availability of online documents that describe domain-specific information provides an opportunity to employ a knowledge-based approach in extracting information from web data. This research proposes a novel comprehensive semantic knowledge-based framework that helps to transform unstructured data so that it can be easily exploited by data scientists. The resulting semantic knowledge base is reasoned over to infer new facts and classify events that might be of importance to end users. The target use case for the framework implementation was the financial domain, which represents an important class of dynamic applications that require the modelling of non-binary relations. Such complex relations are becoming increasingly common in the era of linked open data. This research in modelling and reasoning upon such relations is a further contribution of the proposed semantic framework, where non-binary relations are semantically modelled by adapting the reasoning axioms to the intermediate resources required by N-ary relations.
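The N-ary modelling pattern described here (a single binary triple cannot relate a company, a metric, a value and a date at once, so an intermediate resource is introduced) can be sketched as follows; the namespace and all terms are illustrative assumptions, not the framework's actual ontology.

```python
# Sketch of an N-ary relation: a 'RatingEvent' intermediate resource relates a
# company, a metric, a numeric value and a date. Vocabulary is hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

FIN = Namespace("http://example.org/finance#")
g = Graph()

g.add((FIN.event42, RDF.type, FIN.RatingEvent))      # the intermediate resource
g.add((FIN.event42, FIN.ratedCompany, FIN.AcmeCorp))
g.add((FIN.event42, FIN.metric, FIN.PerformanceRate))
g.add((FIN.event42, FIN.value, Literal(0.83, datatype=XSD.decimal)))
g.add((FIN.event42, FIN.reportedOn, Literal("2015-06-30", datatype=XSD.date)))

# Reasoning can then be attached to the intermediate resource, e.g. a query
# that flags high-performing companies:
q = """
PREFIX fin: <http://example.org/finance#>
SELECT ?company WHERE {
  ?e a fin:RatingEvent ; fin:ratedCompany ?company ;
     fin:metric fin:PerformanceRate ; fin:value ?v .
  FILTER(?v > 0.8)
}
"""
for (company,) in g.query(q):
    print("High performer:", company)
```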
... In order to achieve this, they propose a semiautomatic method of constructing multilingual ontologies, as well as a semantic searching mechanism based on concept similarity. In another approach [11] the authors present a system that provides high quality environmental information for personalized decision support based on reasoning. ...
... Finally, a detailed analysis of tools implemented in DSS to support individuals in their financial management and investment decisions is provided in [14]. This paper, inspired by ontology-based decision support systems such as [9] and [11], presents a knowledge-driven DSS for SME internationalisation based on semantic integration of heterogeneous internet data. ...
Conference Paper
Given the current economic situation and the financial crisis in many European countries, Small and Medium Enterprises (SMEs) have identified internationalisation and the exportation of their products as the main way out of this crisis. In this paper, we provide a decision support system that semantically aggregates information from many heterogeneous web resources and provides guidance to SMEs for their potential investments. The main contributions of this paper are the introduction of SME internationalisation indicators that can be considered for such decisions, as well as the novel decision support system for SME internationalisation based on inference over semantically integrated data from heterogeneous web resources. The system is evaluated by SME experts in realistic scenarios in the sector of dairy products.
... Also, SWRL-based rules were developed to assess rollover risk and obtain suggested measures. In [23], the authors proposed an ontology-based environmental decision support system that integrates data from different Web data sources and assesses these fused data for a given time and location. In this way, users can interpret information and make a decision. ...
Article
Full-text available
In the agricultural context, there is a great diversity of insects and diseases that affect crops. Moreover, the amount of data available on sources such as the Web regarding these topics increases every day. This fact can represent a problem when farmers want to make decisions based on this large and dynamic amount of information. This work presents AgriEnt, a knowledge-based Web platform focused on supporting farmers in the decision-making process concerning crop insect pest diagnosis and management. AgriEnt relies on a layered functional architecture comprising four layers: the data layer, the semantic layer, the web services layer, and the presentation layer. This platform takes advantage of ontologies to formally and explicitly describe agricultural entomology experts' knowledge and to perform insect pest diagnosis. Finally, to validate the AgriEnt platform, we describe a case study on diagnosing the insect pest affecting a crop. The results show that AgriEnt, through the use of the ontology, produces answers similar to the professional advice given by the entomology experts involved in the evaluation process. Therefore, this platform can guide farmers to make better decisions concerning crop insect pest diagnosis and management.
... García's approach did not use structured storage, such as an ontological knowledge base, to store the information about the domain. Wanner et al. [39] proposed an approach that overcomes this drawback and uses an ontology-based knowledge base as the main data structure for storing information about the environmental domain, used to design an expert system offering personalized support to citizens in questions related to the environmental conditions in their habitat. Information collected from the user and environmental data are stored in the ontological knowledge base in a uniform format. ...
Article
Full-text available
Knowledge-based recommendation systems use knowledge about users and products to make recommendations. Knowledge-based recommendations do not depend on ratings, nor do they have to gather information about a particular user to give recommendations. Knowledge acquisition is the most important task in constructing a knowledge-based recommendation system. Acquired knowledge must be represented in some structured machine-readable form, e.g., as an ontology, to support reasoning about which products meet the user's requirements. In the Semantic Web, knowledge is represented in the form of ontologies. The representation of knowledge in the structured form of ontologies on the Semantic Web makes the application of knowledge-based recommendation systems much easier, as there is no need to construct a knowledge base from scratch. The performance of knowledge-based recommendation systems can be enhanced by exploiting ontology reasoning capabilities. This paper explores different techniques used to generate knowledge-based recommendations, highlighting the advantages of knowledge-based recommendation systems over other recommendation techniques.
... A web-based geospatial problem-solving environment for earthquakes (Jung et al., 2013) provides formal definitions that web service providers present and integrate with geographic information services using domain-expert-approved ontologies. Wanner et al. (2015) introduced an ontology-based decision support system that acquires data from various sources on the web to provide personalized environmental information, such as pollen levels under certain wind, temperature, and air conditions. ...
... Several initiatives, including projects (PESCaDO (http://pescado-project.upf.edu/) [36]) and applications (AirForU (http://newsroom.ucla.edu/releases/new-app-lets-you-check-air-quality-aseasily-as-checking-the-weather), Clean Air Nation (https://play.google.com/store/apps/details? id=io.gonative.android.robzl&hl=en), ...
Article
Full-text available
Although air pollution is one of the most significant environmental factors posing a threat to human health worldwide, air quality data are scarce or not easily accessible in most European countries. The current work aims to develop a centralized air quality data hub that enables citizens to contribute to air quality monitoring. In this work, data from official air quality monitoring stations are combined with air pollution estimates from sky-depicting photos and from low-cost sensing devices that citizens build on their own so that citizens receive improved information about the quality of the air they breathe. Additionally, a data fusion algorithm merges air quality information from various sources to provide information in areas where no air quality measurements exist.
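The fusion algorithm itself is not specified in the abstract; a minimal sketch of one common approach, inverse-variance weighting of estimates from the three source types it mentions, is shown below. All values and uncertainties are invented for illustration, and the actual algorithm in the paper may differ.

```python
# Illustrative fusion of NO2 estimates (µg/m3) from heterogeneous sources.
# Values and uncertainties are invented; the paper's actual algorithm may differ.
sources = {
    "official_station": (38.0, 2.0),   # (estimate, assumed standard deviation)
    "sky_photo":        (45.0, 8.0),
    "low_cost_sensor":  (41.0, 5.0),
}

# Inverse-variance weighting: more uncertain sources get smaller weights.
weights = {name: 1.0 / sigma**2 for name, (_, sigma) in sources.items()}
total = sum(weights.values())
fused = sum(weights[name] * value for name, (value, _) in sources.items()) / total
fused_sigma = (1.0 / total) ** 0.5

print(f"Fused NO2 estimate: {fused:.1f} ± {fused_sigma:.1f} µg/m3")
```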
... Similarly to our approach, within the context of the PESCaDO EU project, ontologies were used as the backbone of the proposed EDSS, supporting all phases of the decision making process [16]; nevertheless, its rules along with the reasoning module are hardcoded in the source code, resulting in a highly inflexible approach. On the other hand, our implementation pushes the usage of ontologies one step further: both the domain knowledge and the experts' rules are developed at the ontology level, with the use of the OWL language and SPIN rules (see next section). ...
Chapter
As urban atmospheric conditions are tightly connected to citizens’ quality of life, the concept of efficient environmental decision support systems becomes highly relevant. However, the scale and heterogeneity of the involved data, together with the need for associating environmental information with physical reality, increase the complexity of the problem. In this work, we capitalize on the semantic expressiveness of ontologies to build a framework that uniformly covers all phases of the decision making process: from structuring and integration of data, to inference of new knowledge. We define a simplified ontology schema for representing the status of the environment and its impact on citizens’ health and actions. We also implement a novel ontology- and rule-based reasoning mechanism for generating personalized recommendations, capable of treating differently individuals with diverse levels of vulnerability under poor air quality conditions. The overall framework is easily adaptable to new sources and needs.
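The core mechanism described here, rules that treat users differently according to their vulnerability level under poor air quality, can be sketched schematically. The actual work expresses such rules in OWL and SPIN; the snippet below merely applies one rule of that kind in Python over an rdflib graph, with an invented namespace and invented terms.

```python
# Schematic rule: IF a user is highly vulnerable AND the air quality index at their
# location is 'poor' THEN recommend staying indoors. Vocabulary is hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

AQ = Namespace("http://example.org/airquality#")
g = Graph()
g.add((AQ.maria, RDF.type, AQ.HighlyVulnerableUser))
g.add((AQ.maria, AQ.locatedIn, AQ.CityCentre))
g.add((AQ.CityCentre, AQ.aqiCategory, Literal("poor")))

rule = """
PREFIX aq: <http://example.org/airquality#>
SELECT ?user WHERE {
  ?user a aq:HighlyVulnerableUser ; aq:locatedIn ?area .
  ?area aq:aqiCategory "poor" .
}
"""
for (user,) in g.query(rule):
    # Materialize the derived recommendation back into the knowledge base.
    g.add((user, AQ.hasRecommendation, AQ.StayIndoors))
    print(f"Recommendation for {user}: stay indoors")
```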
Article
Full-text available
The paper presents the design of an occupational health and safety (OHS) ontology developed on the basis of the PN-N-18001 standard "Occupational health and safety management systems. Requirements." The ontology was defined in OWL (Web Ontology Language). Class diagrams, object property diagrams and data property diagrams are presented, together with corresponding visualizations of the ontology and examples of defining concepts and relations.
Chapter
A smart home is a home based on the Internet of Things (IoT), enabling the control and remote monitoring of home devices and allowing the user to adapt the system to his or her desires and needs. This paper presents an approach to implementing a smart home system using the IoT, Web services, and an Android app. The proposed model focuses on (1) an Arduino Uno Wi-Fi platform for interoperability among sensors, actuators, and communication protocols; (2) a REST framework that makes the home appliances accessible and connected and improves data exchange; and (3) an Android app providing several functionalities through which the user can control home devices from anywhere. We present the smart home architecture and its application in a use case. Our goal is to provide an effective, low-cost smart home system which can be controlled easily from anywhere.
Chapter
With the development of information technology, personalization is recognized as one of the emerging technologies in the research field. Web page personalization processes the user's query and retrieves the search results that correspond to the user's interests. In some cases, it is difficult to identify the desired result when users with different backgrounds issue the same query. The proposed work overcomes this: web page personalization is done through query formulation and profiling based on the WordNet ontology. Initially, the required data are collected from web sources and clustered using the presented Weighted Clustering (WC) algorithm. WC clusters the web pages according to their domains, and the clusters are then learned by the user learning module. With the help of four similarity measures, data similarity is evaluated between the generated WordNet profile and the trained dataset. From these, the data with maximum similarity are retrieved by the proposed algorithm, called Oppositional-based FireFly Optimization (OFFO). The results demonstrate that WC-OFFO attains a precision of 89.16, a recall of 78.09 and an F-measure of 83.26, which are higher than those of existing algorithms.
Conference Paper
An ontology-based knowledge resource for the tropical region is very important for Indonesia, one of the tropical countries. Climate change often has an impact on humans and the environment. Weather monitoring is needed to minimize this impact, so that supervision, appropriate measures and rescue operations can be performed. Weather monitoring is carried out by placing sensors at weather monitoring stations. The monitoring data are stored in a data center, and the use of these data as a source of knowledge is very useful for a decision support system that can predict weather conditions in the tropics. The result of this research is an ontology dedicated to tropical regions. The tropical weather ontology was built from both ontological and non-ontological knowledge resources. Modularization of the ontology facilitates the reuse of its modules for the development and maintenance of the system.
Article
Full-text available
There is a large amount of meteorological and air quality data available online. Often, different sources provide deviating and even contradicting data for the same geographical area and time. This implies that users need to evaluate the relative reliability of the information and then trust one of the sources. We present a novel data fusion method that merges the data from different sources for a given area and time, ensuring the best data quality. The method is a unique combination of land-use regression techniques, statistical air quality modelling and a well-known data fusion algorithm. We show experiments where a fused temperature forecast outperforms individual temperature forecasts from several providers. Also, we demonstrate that the local hourly NO2 concentration can be estimated accurately with our fusion method while a more conventional extrapolation method falls short. The method forms part of the prototype web-based service PESCaDO, designed to deliver personalized environmental information to users.
Article
Full-text available
Environmental data analysis and information provision are considered of great importance for people, since environmental conditions are strongly related to health issues and directly affect a variety of everyday activities. Nowadays, there are several free web-based services that provide environmental information in several formats, with map images being the most commonly used to present air quality and pollen forecasts. This format, despite being intuitive for humans, complicates the extraction and processing of the underlying data. Typical examples of this case are chemical weather forecasts, which are usually encoded as heatmaps (i.e. graphical representations of matrix data with colors), while the forecasted numerical pollutant concentrations are commonly unavailable. This work presents a model for the semi-automatic extraction of such information based on a template configuration tool, on methodologies for data reconstruction from images, as well as on text processing and Optical Character Recognition (OCR). The aforementioned modules are integrated in a standalone framework, which is extensively evaluated by comparing data extracted from a variety of chemical weather heatmaps against the real numerical values produced by chemical weather forecasting models. The results demonstrate a satisfactory performance in terms of data recovery and positional accuracy.
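One step mentioned in the abstract, reconstructing numerical values from heatmap colours, can be sketched as a nearest-colour lookup against the map legend; the legend entries and pixel values below are invented, and the paper's actual reconstruction pipeline is more involved.

```python
# Recover approximate pollutant concentrations from heatmap pixels by matching
# each pixel to the nearest colour in the legend. Legend and pixels are invented.
legend = {          # RGB colour -> concentration (µg/m3) read off the map legend
    (0, 0, 255): 10.0,
    (0, 255, 0): 30.0,
    (255, 255, 0): 60.0,
    (255, 0, 0): 100.0,
}

def pixel_to_value(rgb):
    """Return the concentration of the legend colour closest to `rgb`."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c))
    nearest = min(legend, key=dist)
    return legend[nearest]

pixels = [(10, 20, 240), (250, 240, 5), (200, 30, 20)]
print([pixel_to_value(p) for p in pixels])   # -> [10.0, 60.0, 100.0]
```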
Conference Paper
Full-text available
Environmental conditions play a very important role in human life. Nowadays, environmental data and measurements are freely made available through dedicated web sites, services and portals. This work deals with the problem of discovering such web resources by proposing an interactive domain-specific search engine, which is built on top of a general purpose search engine, employing supervised machine learning and advanced interactive visualization techniques. Our experiments and the evaluation show that interactive classification based on visualization improves the performance of the system.
Conference Paper
Full-text available
An increasing number of information systems integrate semantic data stores for managing ontologies. To access these knowledge bases, most of the available implementations provide application programming interfaces (APIs). The implementations of these APIs normally do not support any kind of network protocol or service interface. This works fine as long as a monolithic system is developed. If the need arises to integrate such a knowledge base into a service-oriented architecture, a different approach is needed. In this paper we propose an architecture to address this issue. A first demonstrator was fully implemented in the European project PESCaDO. Several services access and work on a central knowledge base access service which supports multi-threaded access for ontologies instantiated in parallel.
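The idea of a central knowledge-base access service shared by several services with multi-threaded access can be sketched very simply as a service object that serializes access to an ontology store with a lock. This is a minimal illustration under that assumption, using rdflib as a stand-in store; it is not the PESCaDO implementation.

```python
# Minimal sketch of a shared knowledge-base access service: all clients go through
# one object that guards the ontology store with a lock. Not the PESCaDO code.
import threading
from rdflib import Graph

class KnowledgeBaseService:
    def __init__(self):
        self._graph = Graph()
        self._lock = threading.Lock()

    def add_triples(self, triples):
        with self._lock:                      # serialize concurrent writes
            for t in triples:
                self._graph.add(t)

    def query(self, sparql):
        with self._lock:                      # consistent reads while others write
            return list(self._graph.query(sparql))

# Several services (data ingestion, reasoning, presentation) would share one instance:
kb = KnowledgeBaseService()
```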
Article
Full-text available
Rhetorical Structure Theory is a descriptive theory of a major aspect of the organization of natural text. It is a linguistically useful method for describing natural texts, characterizing their structure primarily in terms of relations that hold between parts of the text. This paper establishes a new definitional foundation for RST. The paper also examines three claims of RST: the predominance of nucleus/satellite structural patterns, the functional basis of hierarchy, and the communicative role of text structure.
Article
Full-text available
The paper gives ontologies in the Web Ontology Language (OWL) for Legal Case-based Reasoning (LCBR) systems, giving explicit, formal, and general specifications of a conceptualisation of LCBR. Ontologies for different systems allow comparison and contrast between them. OWL ontologies are standardised, machine-readable formats that support automated processing with Semantic Web applications. Intermediate concepts, concepts between base-level concepts and higher-level concepts, are central in LCBR. The main issues and their relevance to ontological reasoning and to LCBR are discussed. Two LCBR systems (AS-CATO, which is based on CATO, and IBP) are analysed in terms of basic and intermediate concepts. Central components of the OWL ontologies for these systems are presented, pointing out differences and similarities. The main novelty of the paper is the ontological analysis and representation in OWL of LCBR systems. The paper also emphasises the important issues concerning the representation of, and reasoning with, intermediate concepts.
Article
Full-text available
Effective presentation of data for decision support is a major issue when large volumes of data are generated as happens in the Intensive Care Unit (ICU). Although the most common approach is to present the data graphically, it has been shown that textual summarisation can lead to improved decision making. As part of the BabyTalk project, we present a prototype, called BT-45, which generates textual summaries of about 45 minutes of continuous physiological signals and discrete events (e.g.: equipment settings and drug administration). Its architecture brings together techniques from the different areas of signal processing, medical reasoning, knowledge engineering, and natural language generation. A clinical off-ward experiment in a Neonatal ICU (NICU) showed that human expert textual descriptions of NICU data lead to better decision making than classical graphical visualisation, whereas texts generated by BT-45 lead to similar quality decision-making as visualisations. Textual analysis showed that BT-45 texts were inferior to human expert texts in a number of ways, including not reporting temporal information as well and not producing good narratives. Despite these deficiencies, our work shows that it is possible for computer systems to generate effective textual summaries of complex continuous and discrete temporal clinical data.
Conference Paper
Full-text available
We present a wiki-based collaborative environment for the semi-automatic incremental building of ontologies. The system relies on an existing platform, which has been extended with a component for terminology extraction from domain-specific textual corpora and with a further step aimed at matching the extracted concepts with pre-existing structured and semi-structured information. The system stands on the shoulders of a well-established user-friendly wiki architecture and it enables knowledge engineers and domain experts to collaborate in the ontology building process. We have performed a task-oriented evaluation of the tool in a real use case for incrementally constructing the missing part of an environmental ontology. The tool effectively supported the users in the task, thus showing its usefulness for knowledge extraction and ontology engineering.
Conference Paper
Full-text available
The paper presents an approach to the development of a system that supports decision-making for tasks aimed at reducing the power consumption of an oil-and-gas production enterprise and at enhancing the environmental safety of oil and natural gas production. The architecture and operating principles of this system, as well as the classes of tasks to be solved, are discussed. Adjustment of the system to the subject domain and the types of tasks is provided by explicitly including models of the subject and problem domains in the system.
Article
Full-text available
Air pollution has a major influence on health. It is thus not surprising that air quality (AQ) increasingly becomes a central issue in environmental information policy worldwide. The most common way to deliver AQ information is in terms of graphics, tables, pictograms, or color scales that display either the concentrations of the pollutant substances or the corresponding AQ indices. However, all of these presentation modes lack an explanatory dimension; nor can they be easily tailored to the needs of individual users. MARQUIS is an AQ information generation service that produces user-tailored multilingual bulletins on the major measured and forecasted air pollution substances and their relevance to human health in five European regions. It incorporates modules for the assessment of pollutant time series episodes with respect to their relevance to a given addressee, for planning the discourse structure of the bulletins and selecting the adequate presentation mode, and for generation proper. The positive evaluation of the bulletins produced by MARQUIS by users shows that the use of automatic text generation techniques in such a complex and sensitive application is feasible.
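A small sketch of the episode-assessment step described here: scanning a pollutant time series for stretches above a threshold that is relevant for a given addressee. The threshold and the concentration values are invented for illustration, not MARQUIS data or rules.

```python
# Find 'episodes' (consecutive hours above a user-relevant threshold) in an hourly
# pollutant series. Threshold and concentrations are invented for illustration.
def find_episodes(series, threshold):
    """Return (start_index, end_index_inclusive) of runs above threshold."""
    episodes, start = [], None
    for i, value in enumerate(series):
        if value > threshold and start is None:
            start = i
        elif value <= threshold and start is not None:
            episodes.append((start, i - 1))
            start = None
    if start is not None:
        episodes.append((start, len(series) - 1))
    return episodes

hourly_no2 = [35, 42, 95, 110, 120, 80, 40, 150, 160, 90]
print(find_episodes(hourly_no2, threshold=100))   # -> [(3, 4), (7, 8)]
```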
Article
Full-text available
Many sensor networks have been deployed to monitor Earth's environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement.
Article
Full-text available
A key problem in current sensor network technology is the heterogeneity of the available software and hardware platforms which makes deployment and application development a tedious and time consuming task. To minimize the unnecessary and repetitive implementation of identical functionalities for different platforms, we present our Global Sensor Networks (GSN) middleware which supports the flexible integration and discovery of sensor networks and sensor data, enables fast deployment and addition of new platforms, provides distributed querying, filtering, and combination of sensor data, and supports the dynamic adaption of the system configuration during operation. In this demonstration, we specifically focus on the deployment aspects and allow users to dynamically reconfigure the running system, to add new sensor networks on the fly, and to monitor the effects of the changes via a graphical interface.
Article
Full-text available
In this paper we investigate some basic properties of the multi-model ensemble systems, which can be deduced from a general characteristic of statistical distributions of the ensemble members with the help of mathematical tools. In particular we show how to find optimal linear combination of model results, which minimizes the mean square error both in the case of uncorrelated and correlated models. By proving basic estimations we try to deduce general properties describing multi-model ensemble systems. We show also how mathematical formalism can be used for investigation of the characteristics of such systems.
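For the uncorrelated case discussed in the abstract, the standard minimum-mean-square-error linear combination of unbiased ensemble members can be written as follows (a textbook formulation, not necessarily the paper's notation):

\[
\hat{x} = \sum_{i=1}^{N} w_i\, x_i, \qquad \sum_{i=1}^{N} w_i = 1, \qquad
w_i = \frac{\sigma_i^{-2}}{\sum_{j=1}^{N} \sigma_j^{-2}},
\]

where \(x_i\) is the forecast of model \(i\) with error variance \(\sigma_i^2\); the resulting error variance is \(\bigl(\sum_j \sigma_j^{-2}\bigr)^{-1}\), which never exceeds the smallest individual \(\sigma_i^2\). For correlated errors the weights generalize to \(w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1})\), with \(\Sigma\) the error covariance matrix of the ensemble members.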
Chapter
The term fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc. The truth-value set, ℐ, of FL is assumed to be generated by a context-free grammar, with a semantic rule providing a means of computing the meaning of each linguistic truth-value in ℐ as a fuzzy subset of [0, 1]. Since ℐ is not closed under the operations of negation, conjunction, disjunction and implication, the result of an operation on truth-values in ℐ requires, in general, a linguistic approximation by a truth-value in ℐ. As a consequence, the truth tables and the rules of inference in fuzzy logic are (i) inexact and (ii) dependent on the meaning associated with the primary truth-value true as well as the modifiers very, quite, more or less, etc. Approximate reasoning is viewed as a process of approximate solution of a system of relational assignment equations. This process is formulated as a compositional rule of inference which subsumes modus ponens as a special case. A characteristic feature of approximate reasoning is the fuzziness and nonuniqueness of consequents of fuzzy premisses. Simple examples of approximate reasoning are: (a) Most men are vain; Socrates is a man; therefore, it is very likely that Socrates is vain. (b) x is small; x and y are approximately equal; therefore y is more or less small, where italicized words are labels of fuzzy sets.
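The compositional rule of inference mentioned here has a standard sup-min formulation, given below for reference (textbook form, not a quotation from the chapter):

\[
\mu_{B'}(y) \;=\; \sup_{x}\,\min\!\bigl(\mu_{A'}(x),\, \mu_R(x, y)\bigr),
\]

where \(A'\) is the fuzzy premise about \(x\), \(R\) the fuzzy relation linking \(x\) and \(y\) (e.g. "\(x\) and \(y\) are approximately equal"), and \(B'\) the inferred fuzzy conclusion about \(y\); modus ponens is recovered as the special case in which \(A'\) and \(R\) are crisp.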
Article
We present the PESCaDO Ontology, a modular application ontology exploited for personalized environmental decision support, which makes it possible to formally describe (i) the user decision support request, (ii) the environmental data relevant to processing the request, as well as (iii) the decisions and conclusions to be produced. The PESCaDO Ontology was thoroughly developed following state-of-the-art best practices, and it is accompanied by comprehensive and detailed documentation.
Article
Environmental and meteorological conditions are of utmost importance for the population, as they are strongly related to the quality of life. Citizens are increasingly aware of this importance. This awareness results in an increasing demand for environmental information tailored to their specific needs and background. We present an environmental information platform that supports submission of user queries related to environmental conditions and orchestrates results from complementary services to generate personalized suggestions. The system discovers and processes reliable data in the Web in order to convert them into knowledge. At runtime, this information is transferred into an ontology-structured knowledge base, from which then information relevant to the specific user is deduced and communicated in the language of their preference. The platform is demonstrated with real world use cases in the south area of Finland, showing the impact it can have on the quality of everyday life.
Conference Paper
Environmental data are considered of utmost importance for human life, since weather conditions, air quality and pollen are strongly related to health issues and affect everyday activities. This paper addresses the problem of discovery of air quality and pollen forecast Web resources, which are usually presented in the form of heatmaps (i.e. graphical representation of matrix data with colors). Towards the solution of this problem, we propose a discovery methodology, which builds upon a general purpose search engine and a novel post processing heatmap recognition layer. The first step involves generation of domain-specific queries, which are submitted to the search engine, while the second involves an image classification step based on visual low level features to identify Web sites including heatmaps. Experimental results comparing various visual features combinations show that relevant environmental sites can be efficiently recognized and retrieved.
Conference Paper
This paper proposes a new approach for identifying situations from sensor data by using a perception-based mechanism borrowed from humans: sensation, perception and cognition. The proposed approach is based on two phases: low-level perception and high-level perception. The first is realized by means of semantic technologies and allows more abstract information to be generated from raw sensor data while also considering knowledge about the environment. The second is realized by means of Fuzzy Formal Concept Analysis and allows the abstract information coming from the first phase to be organized and classified by generating a knowledge representation structure, namely a lattice, that can be traversed to obtain information about the occurring situation and augment human perception. The work also presents a sample scenario executed in the context of an early experiment.
Conference Paper
Resilience is the capability of a system to absorb and mitigate unexpected faults and risks. This paper describes the definition of a resilient middleware for sensor network management in dynamic environments for supporting Situation Awareness processes in security scenarios. The middleware proposes an approach based on Quality of Service of sensors for identifying faults, disturbances or incompleteness and uncertainty in raw data. Moreover, Dempster-Shafer Theory is employed to aggregate sensor data and abstract it to obtain coherent observations about the monitored environment. Lastly, machine learning techniques are used to discover association rules in order to handle the absence of significant observations.
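The Dempster-Shafer aggregation step referred to in the abstract combines two basic probability assignments \(m_1\) and \(m_2\) with Dempster's rule of combination, which in its standard textbook form reads:

\[
m_{1\oplus 2}(A) \;=\; \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \quad A \neq \varnothing,
\qquad K = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C),
\]

where \(K\) measures the conflict between the two sensor-derived evidence sources and \(m_{1\oplus 2}(\varnothing) = 0\).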
Article
Contents: Preface; Introduction; 2. Discourse structure; 3. Focusing in discourse; 4. TEXT system implementation; 5. Discourse history; 6. Related generation research; 7. Summary and conclusions; Appendices; Bibliography; Index.
Article
An electronic issue management system, alternatively known as a help desk system, refers to a computer application that can be used to electronically automate the process of managing business issues, including problems, defects, tasks, changes or new requests. The difficulties encountered in using such a system often stem from the lack of expertise needed to resolve the issues stored by the system. This paper proposes to use ontology and case-based reasoning to better provide structured information and enable the capturing of tacit knowledge of experts for issue management.
Article
During the past several years, the emergence of expert systems as a field of considerable practical as well as theoretical importance within AI has provided a strong impetus for the development of theories of approximate reasoning and credibility assessment of inference processes in knowledge-based systems. The approach to approximate reasoning described in this paper is based on a fuzzy logic, FL, in which the truth-values and quantifiers are defined as possibility distributions which carry linguistic labels such as true, quite true, not very true, many, not very many, several, almost all, etc. Based on the concept of a possibility distribution, a set of translation and inference rules is developed and their application to inference from imprecise premises is illustrated by examples.
Conference Paper
Analysis and processing of environmental information is considered of utmost importance for humanity. This article addresses the problem of discovery of web resources that provide environmental measurements. Towards the solution of this domain-specific search problem, we combine state-of-the-art search techniques together with advanced textual processing and supervised machine learning. Specifically, we generate domain-specific queries using empirical information and machine learning driven query expansion in order to enhance the initial queries with domain-specific terms. Multiple variations of these queries are submitted to a general-purpose web search engine in order to achieve a high recall performance and we employ a post processing module based on supervised machine learning to improve the precision of the final results. In this work, we focus on the discovery of weather forecast websites and we evaluate our technique by discovering weather nodes for south Finland.
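The query-generation step described here can be sketched very simply: combine seed queries with domain-specific expansion terms and keep, after retrieval, only candidates accepted by a post-filter. In the sketch below a naive keyword score stands in for the paper's supervised classifier, and all queries, terms and snippets are illustrative assumptions.

```python
# Illustrative domain-specific query generation and a naive post-filter.
# The real system uses a general-purpose search engine plus a supervised classifier;
# here a keyword score is used as a stand-in and all terms are invented.
from itertools import product

seed_queries = ["weather forecast"]
expansion_terms = ["Helsinki", "temperature", "wind speed", "site:fi"]

queries = [f"{seed} {term}" for seed, term in product(seed_queries, expansion_terms)]
print(queries)

DOMAIN_KEYWORDS = {"forecast", "temperature", "wind", "humidity", "precipitation"}

def accept(snippet, min_hits=2):
    """Keep a search result whose snippet mentions enough domain keywords."""
    words = set(snippet.lower().split())
    return len(words & DOMAIN_KEYWORDS) >= min_hits

candidates = [
    "Hourly temperature and wind forecast for southern Finland",
    "Buy cheap flights to Helsinki today",
]
print([c for c in candidates if accept(c)])
```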
Article
The term fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc. The truth-value set, ℐ, of FL is assumed to be generated by a context-free grammar, with a semantic rule providing a means of computing the meaning of each linguistic truth-value in ℐ as a fuzzy subset of [0, 1]. Since ℐ is not closed under the operations of negation, conjunction, disjunction and implication, the result of an operation on truth-values in ℐ requires, in general, a linguistic approximation by a truth-value in ℐ. As a consequence, the truth tables and the rules of inference in fuzzy logic are (i) inexact and (ii) dependent on the meaning associated with the primary truth-value true as well as the modifiers very, quite, more or less, etc. Approximate reasoning is viewed as a process of approximate solution of a system of relational assignment equations. This process is formulated as a compositional rule of inference which subsumes modus ponens as a special case. A characteristic feature of approximate reasoning is the fuzziness and nonuniqueness of consequents of fuzzy premisses. Simple examples of approximate reasoning are: (a) Most men are vain; Socrates is a man; therefore, it is very likely that Socrates is vain. (b) x is small; x and y are approximately equal; therefore y is more or less small, where italicized words are labels of fuzzy sets.
Article
This paper characterizes part of an interdisciplinary research effort on AI techniques applied to environmental decision-support systems. The architectural design of the OntoWEDSS decision-support system for wastewater management is presented. This system augments classic rule-based reasoning and case-based reasoning with a domain ontology, which provides a more flexible management capability to OntoWEDSS. The construction of the decision-support system is based on a specific case study. But the system is also of general interest, given that its ontology-underpinned architecture can be applied to any wastewater treatment plant and, at an appropriate level of abstraction, to other environmental domains. The OntoWEDSS system helps improve the diagnosis of faulty states of a treatment plant, provides support for complex problem-solving and facilitates knowledge modeling and reuse. In particular, the following issues are dealt with: (1) modeling information about wastewater treatment processes, (2) clarifying part of the existing terminological confusion in the domain, (3) incorporating ontology-modeled microbiologic knowledge related to the treatment process into the reasoning process and (4) creating a decision-support system that combines information through a novel integration between knowledge-based systems and ontologies.
Conference Paper
In this paper our original methodology of applying ontology-based logic to decision support for security management in heterogeneous networks is presented. Such a decision support approach is used by the off-network layer of security and resiliency mechanisms developed in the INTERSECTION Project. The decision support application uses knowledge about network vulnerabilities to support the off-network operator in managing and controlling in-network components such as probes, intrusion detection systems, the Complex Event Processor, and Reaction and Remediation. Both IVO (Intersection Vulnerability Ontology) and PIVOT, a decision support system based on the vulnerability ontology, are presented.
Article
Natural Language Generation (NLG) can be used to generate textual summaries of numeric data sets. In this paper we develop an architecture for generating short (a few sentences) summaries of large (100KB or more) time-series data sets. The architecture integrates pattern recognition, pattern abstraction, selection of the most significant patterns, microplanning (especially aggregation), and realisation. We also describe and evaluate SumTime-Turbine, a prototype system which uses this architecture to generate textual summaries of sensor data from gas turbines.
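The pipeline described here (pattern recognition, selection of the most significant patterns, realisation) can be caricatured in a few lines; the spike detector, thresholds and wording below are invented placeholders, not SumTime-Turbine's actual modules.

```python
# Toy pipeline in the spirit of the described architecture: detect simple patterns
# in a numeric series, keep the most significant one, and realise it as text.
# Thresholds, wording and data are invented.
def detect_spikes(series, jump=50):
    """Pattern recognition: indices where the value jumps by more than `jump`."""
    return [(i, series[i] - series[i - 1])
            for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > jump]

def realise(spike, series):
    """Realisation: turn the selected pattern into a short sentence."""
    i, delta = spike
    direction = "rose sharply" if delta > 0 else "dropped sharply"
    return f"The signal {direction} at sample {i}, reaching {series[i]}."

readings = [300, 305, 310, 430, 425, 290, 288]
spikes = detect_spikes(readings)
if spikes:
    most_significant = max(spikes, key=lambda s: abs(s[1]))   # pattern selection
    print(realise(most_significant, readings))
```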
Article
The use of Magnetic Resonance (MR) as a supporting tool in the diagnosis and monitoring of multiple sclerosis (MS) and in the assessment of treatment effects requires the accurate determination of cerebral white matter lesion (WML) volumes. In order to automatically support neuroradiologists in the classification of WMLs, an ontology-based fuzzy decision support system (DSS) has been devised and implemented. The DSS encodes high-level, specialized medical knowledge in terms of ontologies and fuzzy rules and applies this knowledge in conjunction with a fuzzy inference engine to classify WMLs and to obtain a measure of their volumes. The performance of the DSS has been quantitatively evaluated on 120 patients affected by MS. Specifically, binary classification results have been first obtained by applying thresholds on fuzzy outputs and then evaluated, by means of ROC curves, in terms of trade-off between sensitivity and specificity. Similarity measures of WMLs have been also computed for a further quantitative analysis. Moreover, a statistical analysis has been carried out for appraising the DSS influence on the diagnostic tasks of physicians. The evaluation has shown that the DSS offers an innovative and valuable way to perform automated WML classification in real clinical settings.
Conference Paper
Applications in pervasive computing environments exploit information about the context of use, such as the location, tasks and preferences of the user, in order to adapt their behavior in response to changing operating environments and user requirements. By utilizing context with the aid of ontologies in data and models, decision support systems can provide better and more desirable support to their users. The effects and advantages of exploiting ontologies in user modeling and DSS are discussed. We propose a framework for an ontology-based decision support system (O2DSS), including ontologies in the model base and database, and their advantages.
Rospocher, M., & Serafini, L. (2012). An ontological framework for decision support. In Proceedings of the 2nd Joint International Semantic Technology Conference.
OGC (2011). SOS 2.0 tutorial. Retrieved from <http://www.ogcnetwork.net/SOS_2_0/tutorial>.
Mel'čuk, I. (1988). Dependency syntax: Theory and practice. SUNY Press.