Citizens are increasingly aware of the influence of environmental and meteorological conditions on their quality of life. A consequence of this awareness is the demand for personalized environmental information, i.e., information tailored to their specific context and background. The EU-funded project PESCaDO addresses this demand in its full complexity. It aims to develop a service that supports users with questions related to environmental conditions: it searches the web for reliable data, processes these data to deduce the relevant information, and communicates this information to users in the language of their preference. In this paper, we describe the requirements and the working service-based realization of the infrastructure of the service.
... This requires a quality-oriented service design process when developing a service-oriented geographical information system. In the context of the project Personalized Environmental Service Configuration and Delivery Orchestration (PESCaDO) [3, 4] of the European Commission, a service-oriented geographical information system has to be developed in cooperation with the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation. This system provides personalized information based on the user's personal profile and the environmental conditions. ...
... Since each individual needs specific information about the environment affecting them and their life, information personalization plays a major role. The PESCaDO project of the European Commission [3, 4] takes up this issue and aims at developing a platform that delivers personalized information based on an individual's profile, such as health status, preferred mode of presentation, or language, and that also takes the individual's intention into consideration. PESCaDO covers the discovery of services providing the data, their orchestration, the processing of the data, and the delivery of the resulting information. ...
Because they use distributed information, such as sensor data, geographical information systems are designed according to service-oriented principles. The development of new solutions in this context therefore requires the design of the necessary services. These services have to satisfy certain quality attributes that have emerged as important for services, such as loose coupling and autonomy. In this paper, a quality-oriented design process is considered, and its applicability and effectiveness are demonstrated within the Personalized Environmental Service Configuration and Delivery Orchestration project of the European Commission.
... We briefly presented a three-layer ontology framework and have shown how this framework benefits NLTG. The ontologies, their user query-based population, and the mentioned NLTG modules are implemented in a service-based architecture described in [24, 11]. The proposed multilayered OWL ontology-driven NLTG has the advantage that it supports dynamic population of application-neutral ontologies, a clean separation of domain, domain communication, and communication knowledge, and the codification of text planning-relevant aspects as part of the (communication) ontology. ...
Natural Language Generation (NLG) from knowledge bases (KBs) has repeatedly been the subject of research. However, most proposals have in common that they start from KBs of limited size that either already contain linguistically oriented knowledge structures or whose structures are explicitly assigned different ways of realization. To avoid these limitations, we propose a three-layer OWL-based ontology framework in which domain, domain communication, and linguistic knowledge structures are clearly separated, and we show how a large-scale instantiation of this framework in the environmental domain serves multilingual NLG.
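The layered separation described in the abstract can be conveyed with a toy sketch: a domain-layer fact, a communication-layer message about that fact, and language-specific realizations at the linguistic layer. All layer contents, names, and templates below are invented for illustration; the actual framework encodes each layer as an OWL ontology rather than Python dictionaries.

```python
# Toy sketch of the three-layer separation (contents are hypothetical;
# the real framework uses OWL ontologies, not dicts and templates).

# domain layer: an application-neutral fact
domain_fact = {"pollutant": "NO2", "value": 150, "unit": "ug/m3"}

# domain-communication layer: which message type to convey about the fact
message = {"type": "ExceedanceWarning", "about": domain_fact}

# linguistic layer: per-language realization of that message type
templates = {
    "en": "Warning: {pollutant} reached {value} {unit}.",
    "fi": "Varoitus: {pollutant} saavutti arvon {value} {unit}.",
}

def realize(msg, lang):
    """Render a communication-layer message in the requested language."""
    return templates[lang].format(**msg["about"])

print(realize(message, "en"))
```

Because the domain fact never mentions wording and the templates never mention the application, each layer can be populated or swapped independently, which is the point of the separation.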
... In this context, especially the compliance with naming conventions and the usage of a common domain model for functional naming are of interest. Additionally, the metrics are applied to the Personalized Environmental Service Configuration and Delivery Orchestration (PESCaDO) project [40, 41], a project co-funded by the European Commission. In this case, too, service designs are to be created that verifiably fulfill the four introduced quality attributes. ...
In the context of service-oriented architectures, quality attributes that services should fulfill, such as loose coupling and autonomy, have been identified. In order to influence services with regard to these quality attributes, an evaluation is necessary at an early development stage, i.e., during design time. Existing work mostly focuses on textual descriptions of desired quality attributes, formalizes metrics that require more information than is available at design time, or is based on a theoretical model that hampers practical applicability. In this article, quality indicators for unique categorization, loose coupling, discoverability, and autonomy are identified. For each quality indicator, formalized metrics are provided that enable its measurement and application to service candidates and service designs based on the Service oriented architecture Modeling Language (SoaML), the standardized language for modeling service-oriented architectures. To illustrate the metrics and verify their validity, service candidates and service designs of a campus guide system developed at the Karlsruhe Institute of Technology are evaluated.
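To make the idea of a design-time metric concrete, here is a minimal sketch of one possible loose-coupling indicator computed over a service design modeled as a dependency graph. The service names and the normalization formula are illustrative assumptions, not the formal SoaML-based metrics defined in the article.

```python
# Illustrative design-time metric sketch (hypothetical services and
# formula, not the article's formal SoaML metrics): coupling measured
# as outgoing dependencies normalized by the number of other services.

# service -> set of other services its interface depends on
dependencies = {
    "WeatherDataService":    {"SensorRegistry"},
    "PollenForecastService": {"SensorRegistry", "WeatherDataService"},
    "SensorRegistry":        set(),
}

def coupling_degree(service, deps):
    """0.0 = fully autonomous, 1.0 = coupled to every other service."""
    others = len(deps) - 1
    return len(deps[service]) / others if others else 0.0

for svc in sorted(dependencies):
    print(f"{svc}: coupling degree {coupling_degree(svc, dependencies):.2f}")
```

Evaluating such a measure on service candidates before any implementation exists is what allows the design to be steered toward loose coupling while change is still cheap.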
Early warning systems serve to provide information about an emerging or occurring hazard as early as possible, so that persons and organizations are given the opportunity to react accordingly. Designing an early warning system poses complex challenges for system architects; this work contributes a framework for the architecture of future early warning systems.
The often-cited information explosion is not limited to volatile network traffic and massive multimedia capture data. Structured, high-quality data from diverse fields of study are becoming easily and freely available, too, thanks to crowd-sourced data collections, better sharing infrastructure, and, more generally, the user-generated content of the Web 2.0 and the popular transparency and open data movements. Yet while data generation is shifting to everyday casual users, data analysis often remains reserved to large companies specialized in content analysis and distribution, such as today's internet giants Amazon, Google, and Facebook. There, fully automatic algorithms analyze metadata and content to infer the interests and beliefs of their users and present only matching navigation suggestions and advertisements. Besides creating a filter bubble, in which users never see conflicting information due to the reinforcing nature of history-based navigation suggestions, fully automatic approaches have inherent problems, e.g., being unable to find the unexpected and to adapt to changes, which led to the introduction of the Visual Analytics (VA) agenda.
Users who intend to perform their own analysis on the available data are often faced with either generic toolkits that cover a broad range of domains and features or specialized VA systems that focus on one domain. Neither is suited to support casual users in their analysis, as they do not match the users' goals and capabilities. The former tend to be complex and targeted at analysis professionals due to their large range of supported features and programmable visualization techniques. The latter trade general flexibility for improved ease of use and interaction optimized for a specific domain requirement. This work describes two approaches building on interactive visualization to reduce this gap between generic toolkits and domain-specific systems.
The first approach builds upon the idea that most data relevant to casual users are collections of entities with attributes. This least common denominator is commonly employed in faceted browsing scenarios and filter/flow environments. Thinking in sets of entities is natural, allows very direct visual interaction with the analysis subject, and provides common ground for adding analysis functionality to domain-specific visualization software. Encapsulating the interaction with sets of entities in a filter/flow graph component makes it possible to record analysis steps and intermediate results in an explicit structure that supports collaboration, reporting, and the reuse of filters and result sets. This generic analysis functionality is provided as a plug-in component and was integrated into several domain-specific data visualization and analysis prototypes. This way, the plug-in benefits from the implicit domain knowledge of the host system (e.g., selection semantics and domain-specific visualizations) while being used to structure and record the user's analysis process.
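The core idea, named filters over sets of entities whose intermediate results are recorded for later reuse, can be sketched in a few lines. The entities, attributes, and predicates below are invented for illustration; the thesis's component additionally supports a full graph with branching and merging rather than the linear chain shown here.

```python
# Minimal sketch of the filter/flow idea (entities and predicates are
# hypothetical): filters over collections of attribute-bearing entities,
# each recording its intermediate result for reporting and reuse.

class FilterNode:
    """A named filter step whose intermediate result is recorded."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate
        self.result = None  # recorded after apply(), enabling reuse

    def apply(self, entities):
        self.result = [e for e in entities if self.predicate(e)]
        return self.result

# entities as attribute collections -- the "least common denominator"
cities = [
    {"name": "Helsinki",  "pm10": 14, "country": "FI"},
    {"name": "Milan",     "pm10": 41, "country": "IT"},
    {"name": "Barcelona", "pm10": 28, "country": "ES"},
    {"name": "Katowice",  "pm10": 55, "country": "PL"},
]

high_pm = FilterNode("high_pm10", lambda e: e["pm10"] > 20)
southern = FilterNode("southern_eu", lambda e: e["country"] in {"IT", "ES"})

# a simple linear flow; a full filter/flow graph also allows branching
final = southern.apply(high_pm.apply(cities))
print([e["name"] for e in final])           # final result set
print([e["name"] for e in high_pm.result])  # recorded intermediate step
```

Because every node keeps its own result, an earlier step can be inspected, shared, or re-run with a different downstream filter without repeating the whole analysis.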
The second approach directly exploits encoded domain knowledge in order to help casual users interact with very specific domain data. By observing the interrelations in the ontology, the user interface can automatically be adjusted to indicate problems with invalid user input, and the system's output can be transformed to explain its relation to the user. Here, the domain-related visualizations are personalized and orchestrated for each user based on user profiles and ontology information.
In conclusion, this thesis introduces novel approaches at the boundary between generic analysis tools and their domain-specific context, extending the use of visual analytics to casual users by exploiting domain knowledge for analysis support, input validation, and personalized information visualization.
A large amount of meteorological and air quality data is available online. Often, different sources provide deviating and even contradicting data for the same geographical area and time. This means that users need to assess the relative reliability of the information and then trust one of the sources. We present a novel data fusion method that merges the data from different sources for a given area and time, ensuring the best data quality. The method is a unique combination of land-use regression techniques, statistical air quality modelling, and a well-known data fusion algorithm. We show experiments in which a fused temperature forecast outperforms the individual temperature forecasts from several providers. We also demonstrate that local hourly NO2 concentrations can be estimated accurately with our fusion method, whereas a more conventional extrapolation method falls short. The method forms part of the prototype web-based service PESCaDO, designed to deliver personalized environmental information to users.
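To illustrate how merging deviating forecasts can beat any single provider, here is a sketch of inverse-variance weighting, a standard fusion scheme. This is only meant to convey the general idea; the paper's method is different and richer, additionally combining land-use regression and statistical air quality modelling. The provider values and variances are made up.

```python
# Standard inverse-variance fusion sketch (NOT the paper's algorithm;
# provider values and error variances below are hypothetical).

def fuse(forecasts):
    """Fuse (value, error_variance) pairs from several providers.

    Each forecast is weighted by the inverse of its error variance,
    so more reliable providers contribute more to the fused value.
    """
    weights = [1.0 / var for _, var in forecasts]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, forecasts)) / total
    variance = 1.0 / total  # fused estimate is tighter than any input
    return value, variance

# three temperature forecasts (deg C) with provider-specific variances
providers = [(18.0, 1.0), (20.0, 4.0), (19.0, 2.0)]
temp, var = fuse(providers)
print(f"fused forecast: {temp:.2f} C (variance {var:.2f})")
```

Note that the fused variance is smaller than the best individual provider's, which is the formal sense in which fusion "ensures the best data quality" when the error estimates are trustworthy.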
The PESCaDO project (http://www.pescado-project.eu/) aims at providing tailored environmental information to EU citizens. For this purpose, PESCaDO delivers personalized environmental information by coordinating the data flow from multiple sources. After the necessary discovery, indexing, and parsing of those sources, data are harmonized and retrieved through Node Orchestration, and unified, accurate responses to user queries are created by the Fusion service, which assimilates the input data into a coherent data block according to their imprecision and relevance with respect to the user-defined query. Environmental nodes are selected from open-access web resources of various types and from the direct usage of data from monitoring stations. Model forecasts are made available through the synergy with the AirMerge image parsing engine and its chemical weather database. In this paper, elements of the general architecture of AirMerge and of the Fusion service of PESCaDO are presented as an example of the modus operandi of environmental information fusion for the atmospheric environment.
Air pollution has a major influence on health. It is thus not surprising that air quality (AQ) is increasingly becoming a central issue in environmental information policy worldwide. The most common way to deliver AQ information is in terms of graphics, tables, pictograms, or color scales that display either the concentrations of the pollutant substances or the corresponding AQ indices. However, all of these presentation modes lack an explanatory dimension, nor can they easily be tailored to the needs of individual users. MARQUIS is an AQ information generation service that produces user-tailored multilingual bulletins on the major measured and forecasted air pollutants and their relevance to human health in five European regions. It incorporates modules for assessing pollutant time series episodes with respect to their relevance to a given addressee, for planning the discourse structure of the bulletins and selecting the adequate presentation mode, and for generation proper. The positive evaluation by users of the bulletins produced by MARQUIS shows that the use of automatic text generation techniques in such a complex and sensitive application is feasible.
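The kind of rule that turns a raw pollutant reading into a user-tailored statement can be sketched as follows: map a concentration to an index band, then phrase it differently for a health-sensitive addressee. The band thresholds, wording, and sensitivity rule are invented for illustration and are not MARQUIS's actual assessment or generation logic.

```python
# Illustrative sketch only: hypothetical NO2 bands and wording, not
# MARQUIS's assessment rules or its discourse-planning/generation modules.

NO2_BANDS = [  # (upper bound in ug/m3, band label)
    (40, "low"),
    (100, "moderate"),
    (200, "high"),
    (float("inf"), "very high"),
]

def no2_band(concentration):
    """Map an NO2 concentration to its (hypothetical) index band."""
    for upper, label in NO2_BANDS:
        if concentration <= upper:
            return label

def bulletin_sentence(concentration, sensitive=False):
    """Phrase the reading, adding advice for sensitive addressees."""
    band = no2_band(concentration)
    advice = (" Sensitive individuals should limit outdoor exertion."
              if sensitive and band in {"high", "very high"} else "")
    return f"NO2 levels are {band} ({concentration:.0f} ug/m3).{advice}"

print(bulletin_sentence(150, sensitive=True))
```

Tailoring here means the same underlying datum yields different text per addressee profile; the full service extends this idea to discourse structure, presentation mode, and language choice.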