Book

Web Application Interoperability

Conference Paper
Ontology generation involves retrieving ontological structures that capture the structure, relationships, semantics and other essential meta-information of web applications, using an inferential approach. Tasks such as web application identification, information mining, ontology generation, ontology matching, and query translation are just some of the techniques that must be combined in order to achieve a minimum of user intervention. The conventional Chaotic Associative Memory (composed of classical neurons) yields only a linear mapping. In this paper, we propose a Quantum Chaotic Associative Memory (composed of quantum neurons) for non-linear mapping, together with a gradient-descent-based algorithm to train the quantum neuron. The quantum neuron has the same convergence property as a conventional network but can be trained faster. Because the neural model has weights and non-linear activation functions, the proposed model can perform quantum learning and can also simulate conventional models. The quantum approach eliminates the need to build a whole network of neurons to obtain a non-linear mapping.
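As a classical point of comparison for the training procedure described above (a hypothetical sketch with toy data, not the paper's quantum model), gradient descent on a single sigmoid neuron under a mean-squared-error loss looks like this:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, lr=0.5, epochs=2000):
    """Gradient descent on one sigmoid neuron, loss L = 0.5*(p - y)^2."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = (p - y) * p * (1 - p)  # dL/dz via the chain rule
            w -= lr * grad * x            # dz/dw = x
            b -= lr * grad                # dz/db = 1
    return w, b

# Toy separable mapping: negative inputs -> 0, positive inputs -> 1
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_neuron(data)
preds = [round(sigmoid(w * x + b)) for x, _ in data]
print(preds)  # recovers the 0/1 labels
```

The same loop generalizes to a full network by backpropagating `grad` through additional layers; the paper's claim is that the quantum neuron avoids that extra machinery.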
Article
Full-text available
A multiple-robot control architecture including a plurality of robotic agricultural machines, including first and second robotic agricultural machines. Each robotic agricultural machine includes at least one controller configured to implement a plurality of finite state machines within an individual robot control architecture (IRCA), and a global information module (GIM) communicatively coupled to the IRCA. The GIMs of the first and second robotic agricultural machines are configured to cooperate to cause the first and second robotic agricultural machines to perform at least one agricultural task.
Article
Full-text available
eServices, based on automated data exchange in distributed technological and organizational environments, are an effective way to build cross-border, controlled information services. The processes of creation, integration, management, reuse, discovery and composition of eServices are not very efficient without understanding the meaning of information resources. The creation and management of human- and machine-readable semantics of heterogeneous and distributed information resources are more complicated than a coordinated documentation process, and require new interoperability principles, architecture and infrastructure. This paper outlines the idea and architecture of the Estonian semantic interoperability initiative in the public sector. The paper presents a collaborative ontology engineering toolset and repository, built with Semantic MediaWiki, as part of the interoperability infrastructure to manage the semantics of information resources.
Conference Paper
Full-text available
Configuration and customization choices arise due to the heterogeneous and scalable nature of the cloud computing paradigm. To avoid being restricted to a given cloud and to ensure application requirements are met, using several clouds to deploy a multi-cloud configuration is recommended, but this introduces several challenges due to the number of providers and their intrinsic variability. In this paper, we present a model-driven approach based on Feature Models (FMs) originating from Software Product Lines (SPL) to handle cloud variability and then manage and create cloud configurations. We combine it with ontologies, used to model the various semantics of cloud systems. The approach takes into consideration application technical requirements as well as non-functional ones to provide a set of valid cloud or multi-cloud configurations, and is implemented in a framework named SALOON.
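The validity check that a feature-model approach performs can be illustrated with a minimal sketch (hypothetical feature names, not SALOON's actual model): a configuration is a set of selected features, and it is valid only if every feature's parent is also selected and no cross-tree exclusion is violated.

```python
# Toy feature model: child -> required parent, plus cross-tree exclusions
requires = {"mysql": "database", "database": "cloud", "autoscaling": "cloud"}
excludes = {("free_tier", "autoscaling")}

def is_valid(config):
    """Check a configuration (a set of features) against the model."""
    for feat in config:
        parent = requires.get(feat)
        if parent and parent not in config:
            return False  # selected feature without its parent
    for a, b in excludes:
        if a in config and b in config:
            return False  # mutually exclusive features selected together
    return True

print(is_valid({"cloud", "database", "mysql"}))         # True
print(is_valid({"mysql"}))                              # False: parent missing
print(is_valid({"cloud", "free_tier", "autoscaling"}))  # False: exclusion violated
```

Real SPL tooling encodes these constraints as propositional formulas and hands them to a SAT solver, but the semantics are the same.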
Article
Full-text available
Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan that promotes a balanced approach to exercise helps improve fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating Health Level Seven International (HL7) standard data on physical fitness and health screening. The generated plans are exposed as Representational State Transfer (REST)-style web services, which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the knowledge encapsulated in the system and used as a basis for inference is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) system will not only improve health-related fitness by generating personalized exercise plans, but also help users avoid inappropriate workouts.
Article
Full-text available
Many working memory (WM) models propose that the focus of attention (or primary memory) has a capacity limit of one to four items, and therefore, that performance on WM tasks involves retrieving some items from long-term (or secondary) memory (LTM). In the present study, we present evidence suggesting that recall of even one item on a WM task can involve retrieving it from LTM. The WM task required participants to make a deep (living/nonliving) or shallow ("e"/no "e") level-of-processing (LOP) judgment on one word and to recall the word after a 10-s delay on each trial. During the delay, participants either rehearsed the word or performed an easy or a hard math task. When the to-be-remembered item could be rehearsed, recall was fast and accurate. When it was followed by a math task, recall was slower, error-prone, and benefited from a deeper LOP at encoding, especially for the hard math condition. The authors suggest that a covert-retrieval mechanism may have refreshed the item during easy math, and that the hard math condition shows that even a single item cannot be reliably held in WM during a sufficiently distracting task-therefore, recalling the item involved retrieving it from LTM. Additionally, performance on a final free recall (LTM) test was better for items recalled following math than following rehearsal, suggesting that initial recall following math involved elaborative retrieval from LTM, whereas rehearsal did not. The authors suggest that the extent to which performance on WM tasks involves retrieval from LTM depends on the amounts of disruption to both rehearsal and covert-retrieval/refreshing maintenance mechanisms.
Conference Paper
Full-text available
The Semantic Web migrates the Internet into a web of machine-understandable data. However, users should still be able to understand the data and the knowledge associated with it. Using the LCIM we evaluate a set of visual artifacts based on today's standards and software that help to solve this apparent conflict.
Article
Full-text available
This paper is based on the findings of a literature review of the field of the World Wide Web. A list of key publications and explicitly provided literature is briefly included in this paper, addressing the importance of the field of the Semantic Web and describing its contribution as the mechanism for structuring information over the web in a format that machines can understand in its semantic context. Highlighting the technological innovation of the Semantic Web, this paper presents ontology and some domain-specific languages for ontology construction: eXtensible Markup Language (XML), Resource Description Framework (RDF) and Web Ontology Language (OWL), which offer different ways of explicitly structuring and richly annotating web pages. Furthermore, this paper discusses how ontology contributes to semantics-based web system development, gives some examples of real-time applications, and concludes with some existing limitations that need to be overcome.
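The core data model those languages share, RDF's subject-predicate-object triple, can be shown with a minimal in-memory sketch (hypothetical `ex:` terms chosen for illustration):

```python
# Toy triple store illustrating RDF's subject-predicate-object model
triples = [
    ("ex:SemanticWeb", "rdf:type", "ex:ResearchField"),
    ("ex:OWL", "ex:usedFor", "ex:OntologyConstruction"),
    ("ex:RDF", "ex:usedFor", "ex:OntologyConstruction"),
]

def query(s=None, p=None, o=None):
    """Return triples matching the pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which resources are used for ontology construction?
langs = [s for s, _, _ in query(p="ex:usedFor", o="ex:OntologyConstruction")]
print(sorted(langs))  # ['ex:OWL', 'ex:RDF']
```

SPARQL queries over a real RDF graph follow exactly this wildcard-pattern shape, just over persisted, IRI-identified data.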
Conference Paper
Full-text available
Interoperability of systems is not a cookie-cutter function. There are various levels of interoperability between two systems, ranging from no interoperability to full interoperability. In the technical domain, various models for levels of interoperability already exist and are used successfully to determine the degree of interoperability between information technology systems. However, such models are not yet established in the domain of conceptual modeling. This paper introduces a general model dealing with various levels of conceptual interoperability that goes beyond the technical reference models for interoperable solutions. The model is intended to become a bridge between the conceptual design and the technical design for implementation, integration, or federation. It should also contribute to the standardization of V&V procedures as well as to the documentation of systems that are designed to be federated. It is furthermore a framework to determine in the early stages of the federation development process whether meaningful interoperability between systems is possible. To this end, the scope of the model goes beyond the implementation level of actual standards, which focus on the exchange of data using standardized formats and interfaces. Another practical application of the model is that it enhances the only recently published DoD Net-Centric Data Strategy for the Global Information Grid (GIG) and is directly applicable to deriving the metadata necessary to reach the DoD data goal to "enable Data to be understandable."
Article
Full-text available
Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the Read/Write Web coupled with the increasing demand for ontologies to power the Semantic Web have made (semi-)automatic ontology learning from text a very promising research area. This together with the advanced state in related areas, such as natural language processing, have fueled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
Article
Full-text available
Location-based services have evolved significantly during the last years and are reaching a maturity phase, relying primarily on the experience gained and the utilization of recent technologies. This paper identifies main fields of research and key challenges for the proliferation of such services and proposes a framework for building location-aware applications that use Semantic Web technologies and advanced interactive interfaces. The approach, focusing on indoor environments, is structured around an RDFS ontology for capturing and formally modelling contextual information, and also utilizes user-centered design for developing interactive map-based presentations for accommodating pedestrians. To test and demonstrate the approach, a prototype has been developed, and a number of further extensions are analyzed that document the flexibility of the design.
Article
Full-text available
Cloud Computing is a paradigm shift in the field of computing. It is moving at an incredibly fast pace and is one of the fastest-evolving domains of computer science today. It consists of a set of technology and service models that concentrate on the Internet-based use and delivery of IT applications, processing capability, storage and memory space. There is a shift from traditional in-house servers and applications to the next generation of cloud computing applications. With many of the computer giants like Google, Microsoft, etc. entering the cloud computing arena, there will be thousands of applications running on the cloud. There are several cloud environments available in the market today which support a huge consumer base. Eventually this will lead to a multitude of standards, technologies and products being provided on the cloud. Consumers will need a certain degree of flexibility to use the cloud applications/services of their choice and, at the same time, will need these applications/services to communicate with each other. This paper emphasizes cloud computing and provides a solution for achieving interoperability in the form of Web Services. The paper also provides a live case study where interoperability comes into play: connecting Google App Engine and the Microsoft Windows Azure Platform, two of the leading cloud platforms available today. GAE and WAP are two cloud frameworks which have very little in common, making interoperability an absolute necessity.
Article
Full-text available
A rarely studied issue with using persistent computational models is whether the underlying computational mechanisms scale as knowledge is accumulated through learning. In this paper we evaluate the declarative memories of Soar (working memory, semantic memory, and episodic memory) using a detailed simulation of a mobile robot running for one hour of real time. Our results indicate that our implementation is sufficient for tasks of this length. Moreover, our system executes orders of magnitude faster than real time, with relatively modest storage requirements. We also project the computational resources required for extended operations.
Article
Full-text available
The question of whether and to what extent semantic or conceptual representations are integrated across languages in bilinguals has led cognitive psychologists to generate over 100 empirical reports. The terms semantic and conceptual are compared, contrasted, and distinguished from other levels of representation, and terms used to describe language integration are clarified. The existing literature addressing bilingual episodic and semantic memory at the level of semantic systems and at the level of the translation-equivalent word pair is summarized. This evidence strongly favors shared semantic systems, and shared semantic/conceptual representation for translation equivalents. Translation equivalents appear to have a different and closer cognitive status than within-language synonyms. Important directions in future cognitive research on bilingualism include neuroscientific and developmental approaches. Bilingual semantic and conceptual organization has been a topic of interest to cognitive researchers because of the fundamental cognitive question of redundancy versus efficiency of representation. Solutions to the redundancy/efficiency question will be important, for example, in understanding how two languages can be used competently within a single mind, and perhaps in understanding how second languages are acquired. For the purposes of this chapter, bilinguals will be considered to be all people who regularly use at least two languages (Grosjean, 1992). This definition implies spoken communicative competence, but encompasses people with a broad range of relative proficiencies in their languages. To pursue this topic, several terminology clarifications are necessary and will be discussed in the subsequent sections before turning to the results of research on bilingual representation.
Chapter
Full-text available
A number of technologies are mentioned under the rubric of “The Semantic Web”, but good overviews of these technologies with an eye toward practical applications are scarce. Moreover, much of the early focus in this field has been on the development of representation languages for static conceptual information, while there has been less emphasis on how to make semantic web applications practically useful in the context of knowledge work. To achieve this, a better coupling is needed between ontology, service descriptions and workflow modeling. This paper reviews all the basic technologies involved in this, and outlines what can be achieved by merging them in the context of real world workflow descriptions.
Chapter
Full-text available
Large, industry-wide interoperability projects use syntax-based standards approaches to accomplish interoperable data exchange among enterprise applications. We are investigating Semantic Web to advance these approaches. In this paper, we describe an architecture for Semantic Enterprise Application Integration Standards as a basis for experimental assessment of the Semantic Web technologies to enhance these standards approaches. The architecture relies on automated translation of the XML Schema-based representation of business document content models into an OWL-based ontology. Based on this architecture, we use Semantic Web representation and reasoning mechanisms to support consistency checking of ontological constructs and constraints specified within the ontology. The proposed architecture is relevant (1) when managing multiple enterprise ontologies derived from, and dependent on, a common ontology and (2) when dealing with model-driven integration using customizable interface models and testing of such integration efforts.
Conference Paper
Full-text available
Advances in the middleware paradigm have enabled applications to be integrated together, thus enabling more reliable distributed systems. Although every middleware tries to solve interoperability issues among a given set of applications, there still remain interoperability challenges across various middlewares. Interoperability enables diverse systems to work in accordance and extends the scope of services that are provided by individual systems. During an interoperability process, it is imperative to interpret the information exchanged in a correct and accurate manner in order to maintain coherence of data. Hence, the aim of this paper is to tackle this issue of semantic interoperability through an experimental approach using the domain of vehicular ad-hoc networked systems. Keywords: Interoperability, Ontology, Vehicular Ad-Hoc Networks
Conference Paper
Full-text available
Appropriately handling noise and outliers is an important issue in data mining. In this paper we examine how noise and outliers are handled by learning algorithms. We introduce a filtering method called PRISM that identifies and removes instances that should be misclassified. We refer to the set of removed instances as ISMs (instances that should be misclassified). We examine PRISM and compare it against 3 existing outlier detection methods and 1 noise reduction technique on 48 data sets using 9 learning algorithms. Using PRISM, the classification accuracy increases from 78.5% to 79.8% on a set of 53 data sets and is statistically significant. In addition, the accuracy on the non-outlier instances increases from 82.8% to 84.7%. PRISM achieves a higher classification accuracy than the outlier detection methods and compares favorably with the noise reduction method.
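The filtering idea can be illustrated with a hedged sketch in the spirit of PRISM (not the paper's exact algorithm, and with invented toy data): flag each training instance that a simple classifier misclassifies under leave-one-out, remove it, and keep the rest for training.

```python
def knn_predict(train, x, k=3):
    """Majority vote of the k nearest neighbours on 1-D (value, label) points."""
    neighbours = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)

# Two clean clusters; the last instance is mislabelled noise
data = [(1.0, 0), (1.2, 0), (0.9, 0), (5.0, 1), (5.2, 1), (4.9, 1), (1.1, 1)]

# Keep only instances that are classified correctly when held out
filtered = [
    (x, y) for i, (x, y) in enumerate(data)
    if knn_predict(data[:i] + data[i + 1:], x) == y
]
print(filtered)  # the noisy (1.1, 1) instance has been removed
```

A subsequent classifier is then trained on `filtered` instead of `data`; the paper's contribution lies in how the instances to remove are identified, not in this generic retraining step.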
Article
Full-text available
The pattern of brain atrophy in semantic dementia and its associated cognitive effects have attracted a considerable body of research, but the nature of core impairments remains disputed. A key issue is whether the disease encompasses one neurocognitive network (semantics) or two (language and semantics). In order to address these conflicting perspectives, we conducted a longitudinal investigation of two semantic dementia patients, in which behavioural performance across a range of measures of language and semantic performance was assessed and interpreted in the context of annually acquired MRI scans. Our results indicated a core semantic impairment in early stages of the disease, associated with atrophy of the inferior, anterior temporal cortex. Linguistic impairments emerged later, and were contingent on atrophy having spread into areas widely believed to subserve core language processes (left posterior perisylvian, inferior frontal and insular cortex). We claim, therefore, that phonological, syntactic and morphological processing deficits in semantic dementia reflect damage to core language areas. Further, we propose that much of the current controversy over the nature of deficits in semantic dementia reflect a tendency in the literature to adopt a static perspective on what is a progressive disease. An approach in which the relationship between progressive neural changes and behavioural change over time is carefully mapped, offers a more constraining data-set from which to draw inferences about the relationship between language, semantics and the brain.
Conference Paper
Full-text available
We introduce the OWL Plugin, a Semantic Web extension of the Protégé ontology development platform. The OWL Plugin can be used to edit ontologies in the Web Ontology Language (OWL), to access description logic reasoners, and to acquire instances for semantic markup. In many of these features, the OWL Plugin has created and facilitated new practices for building Semantic Web contents, often driven by the needs of and feedback from our users. Furthermore, Protégé's flexible open-source platform means that it is easy to integrate custom-tailored components to build real-world applications. This document describes the architecture of the OWL Plugin, walks through its most important features, and discusses some of our design decisions.
Conference Paper
Full-text available
The paper presents a framework for introducing design patterns that facilitate or improve the techniques used during the ontology lifecycle. Some distinctions are drawn between kinds of ontology design patterns. Some content-oriented patterns are presented in order to illustrate their utility at different degrees of abstraction, and how they can be specialized or composed. The proposed framework and the initial set of patterns are designed to function as a pipeline connecting domain modelling, user requirements, and ontology-driven tasks/queries to be executed.
Conference Paper
Full-text available
The emergence of handheld devices associated with wireless technologies has introduced new challenges for middleware. First, mobility is becoming a key characteristic; mobile devices may move around different areas, have to interact with different types of networks and services, and may be exposed to new communication paradigms. Second, the increasing number and diversity of devices, as in particular witnessed in the home environment, lead to the advertisement of supported services according to different service discovery protocols as they come from various manufacturers. Thus, if networked services are advertised with protocols different from those supported by client devices, the latter are unable to discover their environment and are consequently isolated. This paper presents a system based on event-based parsing techniques to provide full service discovery interoperability to any existing middleware. Our system is transparent to applications, which are not aware of the existence of our interoperable system, which adapts itself to both its environment across time and its host to offer interoperability anytime, anywhere. A prototype implementation of our system is further presented, enabling us to demonstrate that our approach is both lightweight in terms of resource usage and efficient in terms of response time.
Conference Paper
Full-text available
We explore the design patterns and architectural trade-offs for achieving interoperability across communication middleware platforms, and describe uMiddle, a bridging framework for universal interoperability that enables seamless device interaction over diverse platforms. The proliferation of middleware platforms that cater to specific devices has created isolated islands of devices with no uniform protocol for interoperability across these islands. This void makes it difficult to rapidly prototype pervasive computing applications spanning a wide variety of devices. We discuss the design space of architectural solutions that can address this void, and detail the trade-offs that must be faced when trying to achieve cross-platform interoperability. uMiddle is a framework for achieving such interoperability, and serves as a powerful platform for creating applications that are independent of specific underlying communication platforms.
Conference Paper
Full-text available
The common vision of pervasive computing environments requires a very large range of devices and software components to interoperate seamlessly. From the assumption that these devices and associated software permeate the fabric of everyday life, a massive increase looms in the number of software developers deploying functionality into pervasive computing environments. This poses a very large interoperability problem for which solutions reliant solely on interoperability standards will not scale. An interoperability problem of a similar scale is presented by the desire for a Semantic Web supporting autonomous machine communication over the WWW. Here, solutions based on service-oriented architectures and ontologies are being actively researched, and we examine how such an approach could be used to address pervasive computing's interoperability problem. The paper outlines the potential role that semantic techniques offer in solving some key challenges, including candidate service discovery, intelligent matching, service adaptation and service composition. In particular the paper addresses the resulting requirement of semantic interoperability outlining initial results in dynamic gateway generation. In addition the paper proposes a roadmap identifying the different scenarios in which semantic techniques will contribute to the engineering and operation of pervasive computing systems.
Conference Paper
Full-text available
A layered architecture for the Semantic Web that adheres to software engineering principles and the fundamental aspects of layered architectures will assist in the development of Semantic Web specifications and applications. The most well-known versions of the layered architecture that exist within literature have been proposed by Berners-Lee. It is possible to indicate inconsistencies and discrepancies in the different versions of the architecture, leading to confusion, as well as conflicting proposals and adoptions by the Semantic Web community. A more recent version of a Semantic Web layered architecture, namely the CFL architecture, was proposed in 2007 by Gerber, van der Merwe and Barnard [23], which adheres to software engineering principles and addresses several of the concerns evident from previous versions of the architecture. In this paper we evaluate this recent architecture, both by scrutinising the shortcomings of previous architectures and evaluating the approach used for the development of the latest architecture. Furthermore, the architecture is applied to usage scenarios to evaluate the usefulness thereof.
Article
The goal of effective Web-based e-learning is to bridge the gap between the currently popular approach to Web-based education, which is centered on learning management systems (LMS), and the powerful but underused technologies of intelligent tutoring and adaptive hypermedia. Ontologies have become a key technology for enabling semantics-driven resource management. We present a sub-ontology-based approach for resource reuse using an evolutionary algorithm.
Article
The Web has developed into the biggest source of information and entertainment in the world. By its size, its adaptability and its flexibility, it has challenged our current paradigms on information sharing in several areas. By offering everybody the opportunity to release their own content in a fast and cheap way, the Web has already led to a revolution of the traditional publishing world, and just now it commences to change the perspective on advertisements. With the possibility to adapt the contents displayed on a page dynamically based on the viewer's context, campaigns launched to target rough customer groups will become an element of the past. However, this new ecosystem, which relates advertisements to the user, heavily relies on the quality of the underlying user profile. This profile has to be able to model any combination of user characteristics, the relations between its composing elements, and the uncertainty that stems from the automated processing of real-world data. The work at hand describes the beginnings of a PhD project that aims to tackle those issues using a combination of data analysis, ontology engineering and processing of big data resources provided by an industrial partner. The final goal is to automatically construct and populate a profile ontology for each user identified by the system. This allows these users to be associated with high-value audience segments in order to drive digital marketing.
Article
Ontologies capture the structure, relationships, semantics and other essential meta-information of an application. This thesis describes a framework to automate application interoperability by using dynamically generated ontologies. We propose a set of techniques to extract ontologies from data accessible on the Web in the form of semi-structured HTML pages. Ontologies retrieved from similar applications are matched together to create a general ontology describing the application domain. Information retrieval and graph matching techniques are used to match and measure the usefulness of the ontologies created. Matching algorithms are combined to produce global ontologies based on the local ontologies inherently present in Web applications. We present a system called OntoBuilder that allows users to drive the ontology creation process using a user-friendly and intuitive interface. We also present experiments for a well-known case study: car-rental applications. We achieve 90% accuracy for ontology extraction and 70% accuracy for ontology matching.
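One of the simplest matchers that can be combined in such a pipeline is string similarity over term labels. A hedged sketch (invented car-rental labels, not OntoBuilder's actual algorithms) using token-set Jaccard similarity:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two term labels."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Terms from a local ontology vs. candidate terms from a general one
local = ["pickup location", "car type", "rental price"]
general = ["pickup place", "vehicle type", "price per day"]

# Match each local term to its most similar general term
matches = {term: max(general, key=lambda g: jaccard(term, g)) for term in local}
print(matches["car type"])  # 'vehicle type'
```

Real matching systems combine several such similarity measures (lexical, structural, graph-based) and threshold the combined scores rather than always taking the maximum.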
Conference Paper
Recommender systems aim to provide users with personalized suggestions about information items, products or services that are likely to be of interest to them. Traditional syntax-based recommender systems suffer from a number of shortcomings, for the information available on the Internet has been designed to be readable only by humans, and computer systems cannot effectively process or interpret the data present in it. As a Semantic Web technology, ontology facilitates knowledge sharing, reuse, communication, collaboration and the construction of knowledge-rich and knowledge-intensive systems. Adding semantically empowered techniques to recommender systems can significantly improve the overall quality of recommendations. In this paper, an ontology-based method for personalized recommendation of knowledge in heterogeneous environments is presented, which provides users with an autonomous tool that is able to minimize repetitive and tedious information retrieval. It constructs a domain ontology by integrating multi-resource and heterogeneous data, and generates a user's interest ontology by analyzing the user's demographic characteristics and personal preferences. Based on the matching of the domain ontology, the user's query requests and the interest ontology, the recommender system can suggest appropriate information to a user who is likely to be interested in the related topics. Finally, a qualitative evaluation is carried out in order to demonstrate the effectiveness of the proposed method.
Article
After years of research on ontology matching, it is reasonable to consider several questions: is the field of ontology matching still making progress? Is this progress significant enough to pursue further research? If so, what are the particularly promising directions? To answer these questions, we review the state of the art of ontology matching and analyze the results of recent ontology matching evaluations. These results show a measurable improvement in the field, although the pace of that improvement is slowing down. We conjecture that significant improvements can be obtained only by addressing important challenges for ontology matching. We present such challenges with insights on how to approach them, thereby aiming to direct research into the most promising tracks and to facilitate the progress of the field.
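The evaluations mentioned above typically score a system's alignment against a reference alignment with precision, recall and F-measure. A minimal sketch (with invented correspondence pairs):

```python
def evaluate(found, reference):
    """Precision, recall and F1 of a found alignment vs. a reference alignment."""
    tp = len(found & reference)          # correspondences the system got right
    precision = tp / len(found)          # fraction of found pairs that are correct
    recall = tp / len(reference)         # fraction of reference pairs recovered
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

found = {("car", "vehicle"), ("price", "cost"), ("date", "color")}
reference = {("car", "vehicle"), ("price", "cost"), ("location", "place")}
p, r, f = evaluate(found, reference)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Campaigns such as the OAEI track these scores across years, which is what allows statements like "measurable but slowing improvement" to be quantified.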
Article
This paper describes our experience in the rapid prototyping of a food ontology, oriented to the nutrition and health-care domain, that is used to share knowledge among the different stakeholders involved in the PIPS project.
Conference Paper
With the development of parallel computing, distributed computing and grid computing, a new computing model has appeared. The concept of cloud computing derives from grid computing, utility computing and SaaS; it is a new way of sharing basic infrastructure. The basic principle of cloud computing is to distribute computation across a large number of distributed computers rather than local computers or remote servers, so that an enterprise's data center runs much like the Internet. This lets the enterprise allocate resources to the applications that need them, and access computing and storage systems on demand. This article introduces the background, principles, characteristics, styles and current state of cloud computing. It also surveys its application areas and merits: users do not need high-end equipment, which reduces their costs; secure and dependable data storage centers relieve users of chores such as storing data and guarding against viruses, since such tasks can be handled by professionals; and data can be shared across different devices. The article analyses some open questions and hidden risks, puts forward some solutions, and discusses the future of cloud computing. Cloud computing is a computing style that provides IT capabilities as a service: users can enjoy the service even if they know nothing about cloud computing technology, the professional knowledge in this field, or the machinery that controls it.
Conference Paper
Recently, ontology has become a key technique for annotating semantics and providing a common foundation for various complex resources on the Semantic Web. Efficient registration is the fundamental basis of other ontology-based applications, but extant registry mechanisms are insufficient to register semantic information and enable interoperation based on it. In this paper, MFI4Onto is therefore proposed as a common framework for registering ontologies and their evolution information, supporting semantic interoperation between ontologies and the information systems based on them. Furthermore, MFI4Onto can also be extended to coordinate with ebXML and enhance its capability for complex resources. Under the guidance of MFI4Onto, it is feasible to perform semantic interoperation between heterogeneous registries on the Semantic Web.
Article
Biodiversity research requires associating data about living beings and their habitats, constructing sophisticated models and correlating all kinds of information. The data handled are inherently heterogeneous, being provided by distinct (and distributed) research groups, which collect them using different vocabularies, assumptions, methodologies and goals, and under varying spatio-temporal frames. Ontologies are being adopted as one means of alleviating these heterogeneity problems, thus helping cooperation among researchers. While ontology toolkits offer a wide range of operations, they are self-contained and cannot be accessed by external applications. Thus, the many proposals for adopting ontologies to enhance interoperability in application development are based either on ontology servers or on ontology frameworks. The latter support many functions but impose application recoding whenever ontologies change, whereas the former support ontology evolution but only for a limited set of functions. This paper presents Aondê, a Web service geared towards the biodiversity domain that combines the advantages of frameworks and servers, supporting ontology sharing and management on the Web. By clearly separating storage concerns from semantic issues, the service provides independence between ontology evolution and the applications that need the ontologies. The service provides a wide range of basic operations to create, store, manage, analyze and integrate multiple ontologies; these operations can be repeatedly invoked by client applications to construct more complex manipulations. Aondê has been validated in real biodiversity case studies.
Article
The service-oriented computing paradigm is transforming traditional workflow management from a closed, centralized control system into a worldwide dynamic business process. A complete workflow serving inter-enterprise collaboration should include both internal processes and ad hoc external processes. This paper presents an agent-based workflow model to address this challenge. In the proposed model, agent-based technology provides workflow coordination at both the inter- and intra-enterprise levels, while Web service-based technology provides the infrastructure for messaging, service description and workflow enactment. A proof-of-concept prototype system simulating the order entry, partner search and selection, and contracting in a virtual enterprise creation scenario is implemented to demonstrate dynamic workflow definition and execution for inter-enterprise collaboration.
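The coordination flow described in that scenario (order entry, partner search, contracting) can be caricatured as agents exchanging messages over a shared bus. Everything here is invented for illustration: the agent names, the message tuples and the single-queue "bus" stand in for the paper's Web service messaging infrastructure.

```python
import queue

# A shared message bus standing in for Web service messaging (illustrative)
bus = queue.Queue()

def order_agent():
    # order entry: publish a request for partner search
    bus.put(("partner_search", {"order": "1000 units"}))

def broker_agent(msg):
    # partner search and selection: pick a supplier, request contracting
    return ("contract", {**msg, "partner": "SupplierA"})

def contract_agent(msg):
    # contracting: finalize the agreement
    return {"status": "contracted", **msg}

order_agent()
kind, payload = bus.get()
assert kind == "partner_search"
kind, payload = broker_agent(payload)
result = contract_agent(payload)
print(result)
```

Each agent only knows the message kinds it consumes and emits, which is what lets the workflow be defined and recomposed dynamically rather than hard-wired end to end.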
Article
High quality domain ontologies are essential for the successful employment of semantic Web services. However, their acquisition is difficult and costly, thus hampering the development of this field. In this paper we report on the first stage of research that aims to develop (semi-)automatic ontology learning tools in the context of Web services that can support domain experts in the ontology building task. The goal of this first stage was to get a better understanding of the problem at hand and to determine which techniques might be feasible to use. To this end, we developed a framework for (semi-)automatic ontology learning from textual sources attached to Web services. The framework exploits the fact that these sources are expressed in a specific sublanguage, making them amenable to automatic analysis. We implement two methods in this framework, which differ in the complexity of the linguistic analysis employed. We evaluate the methods in two different domains, verifying the quality of the extracted ontologies against high quality hand-built ontologies of these domains. Our evaluation led to a set of valuable conclusions on which further work can be based. First, it appears that our method, while tailored for the Web services context, might be applicable across different domains. Second, we concluded that deeper linguistic analysis is likely to lead to better results. Finally, the evaluation metrics indicate that good results can be achieved using only relatively simple, off-the-shelf techniques. Indeed, the novelty of our work lies not in the natural language processing methods used but in the way they are put together in a generic framework specialized for the context of Web services.
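As a flavour of the "relatively simple, off-the-shelf" end of the spectrum mentioned above, the sketch below extracts candidate ontology concepts from Web service descriptions by plain frequency counting over a stoplist-filtered vocabulary. The sample descriptions, stoplist and threshold are invented; the paper's actual methods are not reproduced here.

```python
import re
from collections import Counter

# Hypothetical textual sources attached to Web services
descriptions = [
    "This service books a flight and returns a booking confirmation.",
    "The service cancels a flight booking for the given customer.",
    "Returns the price of a flight for a customer and a date.",
]

stopwords = {"this", "the", "a", "an", "and", "for", "of", "given"}

def candidate_concepts(texts, min_freq=2):
    tokens = []
    for t in texts:
        tokens += [w for w in re.findall(r"[a-z]+", t.lower()) if w not in stopwords]
    counts = Counter(tokens)
    # terms recurring across descriptions are taken as candidate concepts
    return [w for w, c in counts.most_common() if c >= min_freq]

print(candidate_concepts(descriptions))
```

Note that a verb such as "returns" survives this shallow analysis as a spurious concept, which illustrates the paper's conclusion that deeper linguistic analysis (e.g. part-of-speech filtering) is likely to improve results.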
Article
Bilinguals’ lexical mappings for their two languages have been found to converge toward a common naming pattern. The present paper investigates in more detail how semantic convergence is manifested in bilingual lexical knowledge. We examined how semantic convergence affects the centers and boundaries of lexical categories for common household objects for Dutch–French bilinguals. We found evidence for converging category centers for bilinguals: (1) correlations were higher between their typicality ratings for roughly corresponding categories in the two languages than between typicality ratings of monolinguals in each language, and (2) in a geometrical representation, category centers derived from their naming data in the two languages were situated closer to each other than were the corresponding monolingual category centers. We also found evidence for less complex category boundaries for bilinguals: (1) bilinguals needed fewer dimensions than monolinguals to separate their categories linearly and (2) fewer violations of similarity-based naming were observed for bilinguals than for monolinguals. Implications for theories of the bilingual lexicon are discussed.
Article
The Web Services world consists of loosely coupled distributed systems which adapt to changes through service descriptions that enable ad hoc, opportunistic service discovery and reuse. At present, these service descriptions are semantically impoverished, being concerned with describing the functional signature of the services rather than characterising their meaning. In the Semantic Web community, the DAML Services effort attempts to rectify this by providing a more expressive way of describing Web Services using ontologies. However, this approach does not separate the domain-neutral communicative intent of a message (considered in terms of speech acts) from its domain-specific content, unlike similar developments from the multi-agent systems community. We describe our experiences of designing and building an ontologically motivated Web Services system for situational awareness and information triage in a simulated humanitarian aid scenario. In particular, we discuss the merits of using techniques from the multi-agent systems community for separating the intentional force of messages from their content, and the implementation of these techniques within the DAML Services model.
Article
The paper defines and clarifies basic concepts of enterprise architectures. Then an overview of architectures for enterprise integration developed since the mid-1980s is presented. The main part of the paper focuses on recent developments in architectures for enterprise interoperability. The main initiatives and existing works are presented. Future trends and some research issues are discussed, and conclusions are given at the end of the paper.
Conference Paper
Summary form only given. In this article, we present a new architecture for an intelligent tutoring system (ITS) and propose an original method for recognizing the expression on the learner's face during exercises. It helps evaluate the learner's affective state in an emotional system, in order to determine its influence on his responses. Accordingly, we must first detect the face and extract the important features that convey the state of its expression (characteristic features: eyes and mouth); we then analyse their configuration and characteristics in order to recognize, describe and interpret the expression. Our architecture is based on observing the learner's behaviour and detecting engagement signs, so as to identify affective responses that may be manifestations of interest, excitement or confusion. From the observation and identification of the learner's emotional state, the tutor can undertake actions that influence the quality of learning and its execution (for example, well-placed remarks may reduce the learner's feeling of failure or avoid the risk of him abandoning his work as soon as he feels bored). A scientific, and especially emotional, analysis is necessary to evaluate and help the learner during exercises.