International Journal of Cooperative Information Systems

Published by World Scientific Publishing
Online ISSN: 0218-8430
Publications
Conference Paper
The engineering data of a large enterprise is typically distributed over a wide area and archived in a variety of databases and file systems. Access to such information is crucial to a team member, particularly in a concurrent engineering setting. However, this is not easy, because (1) a model of the relevant information is not available, and (2) there is no simple way to access the information without being knowledgeable about various computer data formats, file systems, and networks. The authors have developed a system called the Information Sharing System (ISS) to enable access to diverse and distributed information within a corporation. Such data could be stored in different repositories such as databases and file systems, including those that contain multiple media. The paper describes the methodology of the ISS, the details of the implementation, and extensions planned for the future.
 
Conference Paper
The World Wide Web is serving as a leading vehicle for information dissemination by offering information services, such as product information, group interactions, or sales transactions. Three major factors affect the performance and reliability of information services for the Web: the distribution of information which has resulted from the globalization of information systems, the heterogeneity of information sources, and the sources' instability caused by their autonomous evolution. This paper focuses on integrating existing information sources, available via the Web, in the delivery of information services. The primary objective of the paper is to provide mechanisms for structuring and maintaining a domain model for Web applications. These mechanisms are based on conceptual modeling techniques, where concepts are being defined and refined within a meta-data repository through the use of instantiation, generalization and attribution. Also, active database techniques are exploited to provide robust mechanisms for maintaining a consistent domain model in a rapidly evolving environment, such as the Web.
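As a concrete illustration of the active-database idea in this abstract, here is a minimal event-condition-action (ECA) sketch in Python: a meta-data repository reacts to the disappearance of a Web source by detaching it from every concept that used it. All names (Rule, Repository, source_removed) are invented for illustration and are not the paper's actual design.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    event: str                          # e.g. "source_removed"
    condition: Callable[[dict], bool]   # guard evaluated over the event payload
    action: Callable[[dict], None]      # repository update to perform

@dataclass
class Repository:
    concepts: dict = field(default_factory=dict)   # concept -> set of source ids
    rules: list = field(default_factory=list)

    def notify(self, event: str, payload: dict) -> None:
        """Fire every rule whose event matches and whose condition holds."""
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                rule.action(payload)

repo = Repository(concepts={"Product": {"siteA", "siteB"}})

def detach_source(payload: dict) -> None:
    # Keep the domain model consistent: forget the vanished source everywhere.
    for sources in repo.concepts.values():
        sources.discard(payload["source"])

repo.rules.append(Rule("source_removed", lambda p: True, detach_source))
repo.notify("source_removed", {"source": "siteB"})
print(repo.concepts)                     # {'Product': {'siteA'}}
```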
 
Conference Paper
The explosive growth in global networking infrastructures has created the opportunity to construct systems involving large numbers of independent and widely-distributed computational components. Administrative and operational autonomy considerations imply that the actual establishment of agreements regarding all aspects of component interaction must be explicitly declared and effectively formed. Moreover, since agreements may evolve over time, their representation needs to be highly tailorable. Design autonomy considerations imply the need to interoperate between pre-existing components, yet not enforce a fixed interoperability standard. The HADAS system addresses both concerns by providing a model and a corresponding programmable interface to component interoperability. Specifically, it provides an integration framework in which components “live”, a peer-based configuration model for forming agreements and interconnections between components, and a coordination language for explicitly programming the actual desired distributed computation using these components. The framework rests on an underlying reflective object model that supports mutability and mobility, and an infrastructure that provides object interconnectivity, security and persistence. HADAS is fully implemented in Java and comes with a full programming environment for developing and executing network-centric applications.
 
Conference Paper
The self-triggering approach presented in this paper is a novel architecture for active database systems. It provides a high-level reflective rule language, rich modelling tools and a clear semantics of dependencies among data elements. Under this approach, rules that are data-driven by nature are handled by a mechanism that utilizes their semantic properties, while rules that are event-driven by nature are handled with extended modelling facilities. The self-triggering approach has been employed by the PARDES project that is described throughout this paper. This approach is contrasted with the “event-control-action” method, the leading architecture in current active database models.
 
Conference Paper
We propose an object-oriented logical formalism to conceptually model applications in an interoperable environment. Such an environment consists of heterogeneous and autonomous local database systems. Applications in such an environment use several resources and services. Their conceptual modelling involves re-specification of existing systems in terms of homogeneous views, modelling of behavior and system dynamics, modelling of logically distributed components in an open environment and the modelling of communication relationships and dependencies between components. We introduce a formal object-oriented language capable of dealing with these requirements and illustrate its use to model applications in an interoperable environment.
 
Article
Artifact-centric modeling is a promising approach for modeling business processes based on so-called business artifacts - key entities that drive the company's operations and whose lifecycles define the overall business process. While artifact-centric modeling shows significant advantages, the overwhelming majority of existing process mining methods cannot be applied (directly) as they are tailored to discover monolithic process models. This paper addresses the problem by proposing a chain of methods that can be applied to discover artifact lifecycle models in the Guard-Stage-Milestone notation. We decompose the problem in such a way that a wide range of existing (non-artifact-centric) process discovery and analysis methods can be reused in a flexible manner. The methods presented in this paper are implemented as software plug-ins for ProM, a generic open-source framework and architecture for implementing process mining tools.
 
Conference Paper
With the increasing size and complexity of Grids, manual diagnosis of individual application faults becomes impractical and time-consuming. Quick and accurate identification of the root cause of failures is an important prerequisite for building reliable systems. We describe a pragmatic model-based technique for application-specific fault diagnosis based on indicators, symptoms and rules. Customized wrapper services then apply this knowledge to reason about root causes of failures. In addition to user-provided diagnosis models, we show that, given a set of past classified fault events, it is possible to extract new models through learning that are able to diagnose new faults. We investigated and compared algorithms of supervised classification learning and cluster analysis. Our approach was implemented as part of the Otho Toolkit, which 'service-enables' legacy applications based on the synthesis of wrapper services.
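To make the indicator/symptom/rule pipeline concrete, here is a toy diagnosis sketch in Python. The indicators, symptoms, rules and fault names are assumptions for illustration; the Otho Toolkit's real diagnosis models are richer.

```python
# Rules map a set of required symptoms to a diagnosed root cause.
RULES = [
    ({"exit_code_nonzero", "output_missing"}, "application crash"),
    ({"exit_code_zero", "output_missing"}, "misconfigured output path"),
    ({"no_heartbeat"}, "node or network failure"),
]

def symptoms_from_indicators(ind: dict) -> set:
    """Abstract raw indicators (exit codes, files, heartbeats) into symptoms."""
    s = {"exit_code_nonzero" if ind["exit_code"] != 0 else "exit_code_zero"}
    if not ind["output_files"]:
        s.add("output_missing")
    if ind["seconds_since_heartbeat"] > 60:
        s.add("no_heartbeat")
    return s

def diagnose(ind: dict) -> str:
    observed = symptoms_from_indicators(ind)
    for required, cause in RULES:        # first rule whose symptoms all hold
        if required <= observed:
            return cause
    return "unknown fault"

print(diagnose({"exit_code": 1, "output_files": [],
                "seconds_since_heartbeat": 5}))   # -> application crash
```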
 
Conference Paper
We consider distributed information systems that are open, dynamic and provide access to large numbers of distributed, heterogeneous, autonomous information sources. Most of the work in data mediator systems has dealt with the problem of finding relevant information providers for a request. However, finding relevant requests for information providers is another important side of the mediation problem which has not received much attention. In this paper, we address these two sides of the problem with a flexible mediation process. Once the qualified information providers are identified, our process allows them to express their interest in requests via a bidding mechanism. It also requires setting up a requisition policy, because a request must always be answered if there are qualified providers. This work does not concern pure market mechanisms, because we counter-balance the providers' bids by considering their quality with respect to a request. We validated our process on a set of simulations. The results show that the mediation process selects providers in line with user expectations, even if the providers are sometimes imposed.
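A rough sketch of the quality-counter-balanced bidding described above, under an assumed linear scoring formula; the function names, weights and the simple requisition fallback are illustrative, not the paper's actual mechanism.

```python
def rank_providers(bids: dict, quality: dict, alpha: float = 0.5) -> list:
    """bids: provider -> bid size (interest); quality: provider -> [0, 1].
    Score mixes normalized interest with quality to counter-balance the bids."""
    top = max(bids.values()) or 1.0
    score = {p: alpha * b / top + (1 - alpha) * quality[p]
             for p, b in bids.items()}
    return sorted(score, key=score.get, reverse=True)

bids = {"providerA": 10.0, "providerB": 2.0, "providerC": 7.0}
quality = {"providerA": 0.3, "providerB": 0.9, "providerC": 0.8}
ranking = rank_providers(bids, quality)
# Requisition policy: if nobody bids, the best qualified provider is imposed
# so that the request is still answered.
selected = ranking[0] if any(bids.values()) else max(quality, key=quality.get)
print(ranking, "->", selected)  # ['providerC', 'providerA', 'providerB'] -> providerC
```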
 
Chapter
This paper presents market-based workflow management, a novel approach to workflow specification and execution which regards the activities contained in a workflow as goods traded on an electronic market. Information about expected cost and execution time is considered for activity specifications, and is used at runtime to execute workflows such that actual cost and execution times are balanced and optimized. To that end, task assignment uses a bidding protocol in which each eligible processing entity specifies at which price and in which time interval he/she can execute the activity. The winner of a specific bidding process is requested to execute the activity, and earns the amount specified in the corresponding bid. Market-based workflow management thus not only makes it possible to optimize workflow executions with respect to execution time and overall cost; the trading of activities also represents an incentive for processing entities to engage in a workflow.
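A minimal sketch of the bidding step: each eligible processing entity submits a (price, time) bid for an activity and the assigner picks the bid with the best weighted trade-off. The Bid/award names and the normalization weights are assumptions, not the chapter's actual protocol.

```python
from typing import NamedTuple

class Bid(NamedTuple):
    bidder: str
    price: float   # amount the entity earns if it wins
    hours: float   # promised execution time

def award(bids: list, w_cost: float = 0.5) -> Bid:
    """Normalize price and time against the worst bid, pick the lowest mix."""
    max_p = max(b.price for b in bids)
    max_h = max(b.hours for b in bids)
    return min(bids, key=lambda b: w_cost * b.price / max_p
                                   + (1 - w_cost) * b.hours / max_h)

bids = [Bid("alice", 100.0, 10.0), Bid("bob", 90.0, 20.0), Bid("carol", 150.0, 4.0)]
winner = award(bids)   # the winner executes the activity and earns its bid
print(winner.bidder, winner.price)   # alice 100.0
```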
 
Article
Given the undeniable popularity of the Web, providing efficient and secure access to remote databases using a Web browser is crucial for the emerging cooperative information systems and applications. In this paper, we evaluate all currently available Java-based approaches that support persistent connections between Web clients and database servers. These approaches include Java applets, Java Sockets, Servlets, Remote Method Invocation, CORBA, and mobile agents technology. Our comparison is along the important parameters of performance and programmability. 1 Introduction Providing efficient and secure access to remote databases using a Web browser [2,6] is crucial for the emerging cooperative information systems, such as Virtual Enterprises. A number of methods for Web database connectivity and integration have been proposed such as CGI scripts, active pages, databases speaking http, external viewers or plug-ins, and HyperWave [9]. These methods enhance the Web server capabil...
 
A Strategic Dependency model of a goods acquisition process
A Strategic Rationale model showing alternative ways of accomplishing "having an item"
An illustration of some of the features of ¢ £ for supporting reengineering
A partial schema, showing task decomposition links and some classes of dependency links
Article
As information systems are increasingly being called upon to play vital roles in organizations, conceptual modelling techniques need to be extended to relate information structures and processes to business and organizational objectives. We propose a framework which focuses on the modelling of strategic actor relationships ("A-R") for a richer conceptual model of business processes in their organizational settings. Organizations are viewed as being made up of social actors who are intentional -- have motivations, wants, and beliefs -- and strategic -- they evaluate their relationships to each other in terms of opportunities and vulnerabilities. The framework supports formal modelling of the network of dependency relationships among actors, and the systematic exploration and assessment of alternative process designs in reengineering. The semantics of the modelling concepts are axiomatically characterized. By embedding the framework in the Telos language, the framework can also potentia...
 
Article
This paper discusses an example of the application of a high-level modelling framework which enables both the specification and implementation of a system's conceptual design. This framework, DESIRE (framework for DEsign and Specification of Interacting REasoning components), explicitly models the knowledge, interaction, and coordination of complex tasks and reasoning capabilities in agent systems. For the application domain addressed in this paper, an operational multi-agent system which manages an electricity transportation network for a Spanish electricity utility, a comprehensible specification is presented. Keywords: Multi-agent system; Modelling framework; Compositional modelling 1. Introduction As multi-agent technology begins to emerge as a viable solution for large-scale industrial and commercial applications, there is an increasing need to ensure that the systems being developed are robust, reliable and fit for purpose. To this end, it is important that the basic principles...
 
Article
The explosive growth in genomic (and soon, expression and proteomic) data, exemplified by the Human Genome Project, is a fertile domain for the application of multi-agent information gathering technologies. Furthermore, hundreds of smaller-profile, yet still economically important organisms are being studied that require the efficient and inexpensive automated analysis tools that multi-agent approaches can provide. In this paper we discuss the use of DECAF, a multi-agent system toolkit based on RETSINA and TAEMS, to build reusable information gathering systems for bioinformatics. We will cover why bioinformatics is a classic application for information gathering, how DECAF supports it, and several extensions that support new analysis paths for genomic information.
 
Article
In this paper we focus on the solutions we are providing for the outer layer of the architecture. They are embedded into a domain-independent COOrdination Language (COOL) that provides services for defining distributed agent configurations, managing communication, defining and managing structured interactions amongst agents, external software integration, and in-context acquisition and debugging of coordination knowledge. As these solutions impact the way agents manage change by information distribution and conflict resolution, we also address these aspects, showing how the coordination service supports these tasks. The paper is structured as follows. In section 2 we review the work in Distributed Artificial Intelligence from several perspectives and define our research goals. As the subsequent presentation of our tools is carried out in the context of our main application, the agent-based integration of the supply chain of manufacturing enterprises, we continue in section 3 with presenting this application domain. Section 4 deals with the main subject of the paper, the components of the coordination language. We illustrate the language throughout with examples from the supply chain. Section 5 then deals with the coordination knowledge acquisition service that allows users to extend and debug coordination knowledge on-line. To show how the coordination system is integrated with other reasoning tasks in the Agent Building Shell, in section 6 we review two other services of the architecture that make use of the coordination framework, cooperative information distribution and cooperative conflict management. In the end, we discuss some related approaches and provide concluding remarks.
 
Article
Exploration is a central issue for autonomous agents which must carry out navigation tasks in environments of which a description is not known a priori. In our approach the environment is described, from a symbolic point of view, by means of a graph; clustering techniques allow for further levels of abstraction to be defined, leading to a multi-layered representation. In this work we propose an unsupervised exploration algorithm in which several agents cooperate to acquire knowledge of the environment at the different abstraction levels. All agents are equal and pursue the same local exploration strategy; nevertheless, the existence of multiple levels of abstraction in the environment representation allows for the agents' behaviour to differ. Agents carry out exploration at different abstraction levels, aimed at reproducing an ideal exploration profile; each agent dynamically selects its exploration level, based on the current demand. Inter-agent communication allows for the agents to ...
 
Article
This paper discusses a distributed architecture for integrating engineering tools in an open design environment, organized as a population of asynchronous cognitive agents. Before introducing the general architecture and the communication protocol, issues about an agent architecture and inter-agent communications are discussed. A prototype of such an environment, with a number of independent agents located on several workstations, is then presented and demonstrated on an example of a small mechanical design.
 
Article
The use of mobile agent technology has been proposed for various fault-sensitive application areas, including electronic commerce and system management. A prerequisite for the use of mobile agents in these environments is that agents have to be executed reliably, independent of communication and node failures. In this article, we present two approaches improving the level of fault-tolerance in agent execution. The introduction of an itinerary concept makes it possible to specify an agent's travel plan flexibly and provides the agent system with the possibility to postpone the visit of currently unavailable nodes or to choose alternative nodes in case of node failures. The second approach is a recently proposed fault-tolerant protocol that ensures the exactly-once execution of an agent. With this protocol, agents are executed in stages. Each stage consists of a number of nodes. One of these nodes executes the agent while the others monitor the execution. After a summary of this protocol, we focus on the construction of stages. In particular, we investigate how the number of nodes per stage influences the probability that an agent is blocked due to failures, and which nodes should be selected when forming a stage to minimize the protocol overhead.
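The stage-construction question admits a simple back-of-the-envelope model: if each node fails independently with probability f, a stage of k nodes blocks only when all k fail, so an agent blocks somewhere along s stages with probability 1 - (1 - f^k)^s. The independence assumption and the numbers below are ours, not the article's.

```python
def p_blocked(f: float, k: int, stages: int) -> float:
    """Probability that the agent blocks at some stage of its itinerary."""
    per_stage_ok = 1.0 - f ** k          # at least one of the k nodes is up
    return 1.0 - per_stage_ok ** stages

# Node failure probability 5%, itinerary of 10 stages:
for k in (1, 2, 3):
    print(k, round(p_blocked(f=0.05, k=k, stages=10), 6))
# 1 0.401263   a single executor blocks often
# 2 0.024721   one monitor node already helps a lot
# 3 0.001249
```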
 
Article
The exchange of goods and services between bargaining software agents requires new forms of brokering mechanisms which achieve consensus between conflicting parties. Such mechanisms have to be designed in a way that they give rational self-interested agents no incentives for insincere behavior. We introduce an arbiter as a third party that resolves conflicting bargaining situations between the agents. To achieve non-manipulative agent behavior, we investigate three arbitration protocols that avoid different forms of manipulations and show how each trades net efficiency for robustness against manipulations. We describe the applicability of the protocols in bilateral bargaining situations and analyze their robustness against manipulations analytically and by simulations. We compare the protocols with Nash's arbitration [1] and the Groves-Clarke tax [2] and characterize situations in which our protocols are superior. Keywords: Arbitration, Negotiation, Cooperation, Agents, Protocol...
 
Article
We are investigating techniques for developing distributed and adaptive collections of information agents that coordinate to retrieve, filter and fuse information relevant to the user, task and situation, as well as anticipate the user's information needs. In our system of agents, information gathering is seamlessly integrated with decision support. The task for which particular information is requested of the agents does not remain in the user's head but is explicitly represented and supported through agent collaboration. In this paper we present the distributed system architecture, agent collaboration interactions, and a reusable set of software components for structuring agents. The system architecture has three types of agents: Interface agents interact with the user, receiving user specifications and delivering results. They acquire, model, and utilize user preferences to guide system coordination in support of the user's tasks. Task agents help users perform tasks by formulating problem solving plans and carrying out these plans through querying and exchanging information with other software agents. Information agents provide intelligent access to a heterogeneous collection of information sources. We have implemented this system framework and are developing collaborating agents in diverse complex real world tasks, such as organizational decision making, investment counseling, health care and electronic commerce.
 
Article
Ontobroker applies Artificial Intelligence techniques to improve access to heterogeneous, distributed and semistructured information sources as they are presented in the World Wide Web or organization-wide intranets. It relies on the use of ontologies to annotate web pages, formulate queries and derive answers. In this paper we will briefly sketch Ontobroker. Then we will discuss its main shortcomings, i.e. we will share the lessons we learned from our exercise. We will also show how On2broker overcomes these limitations. Most important is the separation of the query and inference engines and the integration of new web standards like XML and RDF. Keywords: World Wide Web, Internet, Information retrieval, Ontologies, Semantics 1. Introduction The World Wide Web (WWW) currently contains around 300 million static objects providing a broad variety of information sources (cf. [Bharat & Broder, 1998]). The early question of whether a certain piece of information is on the Web has be...
 
Article
There is currently great interest in building information mediators that can integrate information from multiple data sources such as databases or Web sources. The query response time for such mediators is typically quite high, mainly due to the time spent in retrieving data from remote sources. We present an approach for optimizing the performance of information mediators by selectively materializing data. We first present our overall framework for materialization in a mediator environment. The data is materialized selectively. We outline the factors that are considered in selecting data to materialize. We present an algorithm for identifying classes of data to materialize by analyzing one of the factors, which is the distribution of user queries. We present results with an implemented version of our optimization system for the Ariadne information mediator, which show the effectiveness of our algorithm in extracting patterns of frequently accessed classes from user queries. We also demonstrate the effectiveness of the approach in optimizing mediator performance by materializing such classes.
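An illustrative version of the query-distribution analysis mentioned above: count how often each class of data appears in a query log and materialize the most frequently accessed classes. The log, class names and fixed budget are invented; Ariadne's actual selection algorithm weighs more factors.

```python
from collections import Counter

# A (hypothetical) log of which class/attribute each user query touched.
query_log = [
    ("Country", "population"), ("Country", "capital"), ("Airport", "code"),
    ("Country", "population"), ("City", "mayor"), ("Country", "capital"),
]

def classes_to_materialize(log, budget: int = 2) -> list:
    """Materialize the `budget` most frequently queried classes."""
    freq = Counter(cls for cls, _attr in log)
    return [cls for cls, _n in freq.most_common(budget)]

print(classes_to_materialize(query_log))   # ['Country', 'Airport']
```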
 
Article
The support for automatic interoperation of software components can reduce cost and provide greater functionality. This paper describes a novel approach to software interoperation based on specification sharing. Software components, called agents, provide machine processable descriptions of their capabilities and needs. Agents can be realized in different programming languages, and they can run in different processes on different machines. In addition, agents can be dynamic -- at run time agents can join the system or leave. The system uses the declarative agent specifications to automatically coordinate their interoperation. The architecture supports anonymous interoperation of agents, where each agent has the illusion that the capabilities of all the other agents are provided directly by the system. The distinctive feature of this approach is the expressiveness of the declarative specification language, which enables sophisticated agent interoperation, e.g., decomposing complex reque...
 
Article
In this paper, we propose the PWM (Personal Web Map), a small personal database of Web pages interesting to a user, and develop a method to construct it under the user's control of multiple Web robots. Though general search engines with large databases, such as Yahoo!, AltaVista, and MetaCrawler, are useful, it is important that a user construct a small personal database of Web pages relevant to his/her interests, like Bookmarks. For such a Web page database, we propose the PWM: a personal database of Web pages interesting to a user, whose construction he/she can control. First, a user gives the system keywords indicating his/her interests, and it constructs a PWM concerned with the keywords. For building a useful PWM, it is necessary that a user can interrupt the construction of a PWM at any time and indicate a sub-field in which the PWM should be expanded further. For this function, we develop an anytime-control algorithm for multiple Web robots. A density distribution blackboard is used, and a...
 
Article
Much work has been done in the last decade in the related areas of object-oriented programming languages and object-oriented databases. Researchers from both areas now seem to be working toward a common end, that of an object management system, or OMS. An OMS is constructed similarly to an OODB but provides a general purpose concurrent object-oriented programming language as well, complementing the OODB query facilities. In this paper, we will define several different types of object systems (object servers, persistent OOPL's, OODB's and OMS's) in terms of their interfaces and capabilities from the viewpoint of how these support the requirements of cooperative information systems. We will examine the distinguishing features and general architecture of systems of each type in the light of a general model of OMS architecture. Copyright 1992 Steven S. Popovich and Gail E. Kaiser Keywords: concurrency control, locking, storage management, transactions, type management 1. Introductio...
 
Article
We present the systematic design and development of a distributed query scheduling service (DQS) in the context of DIOM, a distributed and interoperable query mediation system [26]. DQS consists of an extensible architecture for distributed query processing, a three-phase optimization algorithm for generating efficient query execution schedules, and a prototype implementation. Functionally, two important execution models of distributed queries, namely moving query to data or moving data to query, are supported and combined into a unified framework, allowing the data sources with limited search and filtering capabilities to be incorporated through wrappers into the distributed query scheduling process. Algorithmically, conventional optimization factors (such as join order) are considered separately from and refined by distributed system factors (such as data distribution, execution location, heterogeneous host capabilities), allowing for stepwise refinement through three optimization phases: compilation, parallelization, site selection and execution. A subset of DQS algorithms has been implemented in Java to demonstrate the practicality of the architecture and the usefulness of the distributed query scheduling algorithm in optimizing execution schedules for inter-site queries.
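The choice between the two execution models can be pictured with a deliberately crude cost model that counts only bytes moved; DQS's three-phase optimizer is far more elaborate. The function and parameter names below are assumptions.

```python
def pick_execution_model(query_size: int, est_result_size: int,
                         table_size: int, source_can_filter: bool) -> str:
    """Compare bytes moved by each execution model and pick the cheaper one."""
    if source_can_filter:
        # Shipping the query costs the query text plus the filtered result.
        ship_query_cost = query_size + est_result_size
    else:
        # A source without filtering returns the raw table anyway, so the
        # query gains nothing by travelling; wrappers mitigate exactly this.
        ship_query_cost = query_size + table_size
    ship_data_cost = table_size          # pull the table, evaluate locally
    return ("move query to data" if ship_query_cost <= ship_data_cost
            else "move data to query")

print(pick_execution_model(query_size=200, est_result_size=10_000,
                           table_size=5_000_000, source_can_filter=True))
# -> move query to data
```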
 
Article
Systems composed of multiple interacting problem solvers are becoming increasingly pervasive and have been championed in some quarters as the basis of the next generation of intelligent information systems. If this technology is to fulfill its true potential then it is important that the systems which are developed have a sound theoretical grounding. One aspect of this foundation, namely the model of collaborative problem solving, is examined in this paper. A synergistic review of existing models of cooperation is presented, their weaknesses are highlighted and a new model (called joint responsibility) is introduced. Joint responsibility is then used to specify a novel high-level agent architecture for cooperative problem solving in which the mentalistic notions of belief, desire, intention and joint intention play a central role in guiding an individual's and the group's problem solving behaviour. An implementation of this high-level architecture is then discussed and its util...
 
Article
In this paper we illustrate an approach aimed at solving this problem. The basic idea is to add an explicit level of abstraction to the traditional data warehousing framework. The new level, which we call "logical," serves to describe in abstract terms the multidimensional aspects of an OLAP application, and to guarantee independence of the application from the physical storage structure of the data warehouse. This is similar to what happens with relational technology, in which the property of data independence allows users and applications to manipulate tables and views ignoring implementation details.
 
Article
We address the design of architectures and abstractions needed to implement multimedia scientific manipulation systems in a Concurrent Engineering setting, where experts in a cooperating group communicate and interact to solve problems. We propose a model for the integration of software tools into a multiuser, distributed and collaborative environment on the multimedia desktop, and describe a prototype CSCW infrastructure which we have used to implement scientific problem solving tools. Finally, we briefly describe a prototype CE system built on this infrastructure. SHASTRA presents a unified prototype for some crucial enabling technologies for Concurrent Engineering -- Multimedia Communication, Framework Integration, Coordination, and Enterprise Integration. 1 Overview This section introduces SHASTRA in the context of related work. Section 2 describes the architecture and highlights the main features of the system. Section 3 introduces SHASTRA Multimedia Services and Section 4 brief...
 
Our general framework is as follows. We assume that a user building an application has identified a set of semistructured Web sources he or she wants to integrate. These might be publicly available sources as well as a user's personal sources. For each source, the developer uses Ariadne to generate a wrapper for extracting information from that source. The source is then linked into a global, unified domain model. Once the mediator is constructed, users can query the mediator as if the sources were all in a single database. Ariadne will efficiently retrieve the requested information, hiding the planning and retrieval process details from the user.
Article
The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to do this is to build specialized applications, which are time-consuming to develop and difficult to maintain. We have addressed this problem by creating the technology and tools for rapidly constructing information agents that extract, query, and integrate data from web sources. Our approach is based on a simple, uniform representation that makes it simple and efficient to integrate multiple sources. Instead of building specialized algorithms for handling web sources, we have developed methods for mapping web sources into this uniform representation. This approach builds on work from knowledge representation, databases, machine learning and automated planning. The resulting system, called Ariadne, makes it fast and easy to build new information agents that access existing web sources. Ariadne also makes it easy to maintain these agents and incorporate new sources as they become available.
 
Article
A new model of hypertext, in which text is augmented with a fine-grained semantic net representation of the text, solves several problems found in traditional hypertext models. In the new model, hypertext links are paths that originate in the text, move across to the semantic net, traverse a subpath through the semantic net, then return to a different point in the text. Benefits of the model include a strong semantics for links, dynamic discovery of links, link reusability, and automatic creation of links. The SNITCH hypertext system, which is based on this model, allows a user to access data in ways never foreseen by the hypertext author. Keywords: hypertext; semantic net; dynamic links; complex link types; hypertext browser 1. Introduction Hypertext is text to which connections between related phrases, sentences, or paragraphs have been added. While hypertext is usually associated with a particular class of point-and-click interface systems, the idea of hypertext is independent of...
 
Article
The large quantity and often questionable quality of available information in the information age provides a shaky foundation for decision making by individuals and organizations alike. This has created a tremendous demand for information services which can access, filter, process and present information on an as-needed basis. However, two factors complicate the design of such information services, namely the distributed and the autonomous nature of data sources. This paper reports on the design and implementation of a generic architecture for supporting information services, which meets the above challenge. The architecture adopts concepts from conceptual modeling to offer a transparent description of the information sources' setting and uses active database techniques to offer a declarative, event-based language for defining coordination rules for integrating distributed information services. Accordingly, the proposed architecture supports two of the most prominent utilities of information services, namely the pre-designed flow of operations and the reactive provision of information. In addition to describing the architecture and illustrating its features with an example, the paper presents a prototype implementation and reports on some experimental performance results.
 
Article
An approach to accommodating semantic heterogeneity in a federation of interoperable, autonomous, heterogeneous databases is presented. A mechanism is described for identifying and resolving semantic heterogeneity while at the same time honoring the autonomy of the database components that participate in the federation. A minimal, common data model is introduced as the basis for describing sharable information, and a three-pronged facility for determining the relationships between information units (objects) is developed. Our approach serves as a basis for the sharing of related concepts through (partial) schema unification without the need for a global view of the data that is stored in the different components. The mechanism presented here can be seen in contrast with more traditional approaches such as "integrated databases" or "distributed databases". An experimental prototype implementation has been constructed within the framework of the Remote-Exchange experimental system. Keyw...
 
Article
In this paper, they show a novel approach to interoperation between software components (agents): the declarative agent specifications are used to automatically coordinate their interoperation. It is their thesis that more effective software interoperation is made possible by agreeing to a shared declarative vocabulary than by agreeing to procedural interface specifications that do not address the semantics of the software component. The third paper is "Heterogeneous Cooperative Problem-Solving System Helios and its Cooperation Mechanism" by Akira Aiba, et al. The authors, who had been members of ICOT (Institute for New Generation Computer Technology), have been engaged in the Helios project for constructing heterogeneous distributed cooperative problem solvers. In this paper, they give an overview of Helios and propose a new cooperation/negotiation protocol based on transactions: that is, embedding cooperation messages into nested transactions. In their system, a negotiation strategy that suits the given negotiation protocol can be defined in each agent. The fourth paper is "COBRA: Integration of Heterogeneous Knowledge-Bases in Medical Domain" by Shusaku Tsumoto, et al. As in the title, the authors model medical knowledge-bases as a heterogeneous system. As medical data consist of many kinds of data: natural language data, sound data, numerical data, time-series data, and medical images, they point out that medical databases should be modeled as multidatabases. In this paper, they report a system called COBRA (Computer-Operated Birth-defect Recognition Aid), which supports diagnosis and information retrieval of congenital malformation diseases, and which also integrates natural language data, sound data, numerical data, and medical images into multidatabases on...
 
Article
In information bases following semantic and object-oriented data models, logical names are used for the external identification of objects. Yet the naming schemes employed are not "natural" enough, and several problems often arise: logical names can be ambiguous, excessively long, unrelated to, or unable to follow the changes of, the environment of the named object. In natural language, similar problems are resolved by the context within which words are used. An approach to introducing a notion of context in an information base is to provide structuring mechanisms for decomposing it into possibly overlapping parts. This paper focuses on developing a context mechanism for an information base and, in particular, on exploiting this mechanism for naming purposes. Rules are developed for generating meaningful names for objects by taking their context into account. This context-based naming enhances name readability, resolves name ambiguities, eliminates a lot of redundant name substrings, and localizes and thus facilitates consistency checking, query processing and update operations. In modeling, it supports systematic naming of objects, and thus enhances cooperation between the designers and the end-users in the sense that the contents of the information base are more understandable by both of them.
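A toy rendering of context-based naming in Python: names are resolved step by step relative to a context, so a short name suffices inside its context while a qualified name disambiguates from outside. The context tree and object identifiers are invented for illustration.

```python
# Each context maps short names either to objects or to nested contexts.
contexts = {
    "university": {"staff": "ctx_staff", "students": "ctx_students"},
    "ctx_staff": {"john": "obj_17"},
    "ctx_students": {"john": "obj_42", "mary": "obj_43"},
}

def resolve(name: str, context: str) -> str:
    """Resolve a dot-separated name step by step within nested contexts."""
    node = context
    for part in name.split("."):
        node = contexts[node][part]
    return node

# Inside ctx_students the short name "john" suffices; from the top-level
# context, the qualified name disambiguates the two Johns.
print(resolve("john", "ctx_students"))        # obj_42
print(resolve("staff.john", "university"))    # obj_17
```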
 
Article
All auctions announce their commencement and the item(s) for sale, as well as a possible reserve price and minimum bid increment. A sealed auction has a publicly announced deadline, and will make no information about the current bids available to any future bidders until the auction is over, at which time the winner(s) is (are) announced. An open auction will make information about current bids available to any future bidders until the auction is over, at which time the winner(s) is (are) announced. One can open an auction with the e-broker operation startAuction(). Currently this can be either an English, a Vickrey or a First Price Sealed Bid Auction, which are described in Table 1. Through polymorphic messaging, the operations of the appropriate type of auction are triggered afterwards. For example, different auctions, closed via stopAuction(), will compute the winners in different manners. Moreover, an English Auction has additional operations...
 
Article
A main problem for electronic commerce, particularly for business-to-business applications, lies in the need for the involved information systems to meaningfully exchange information. Domain-specific standards may be used to define the semantics of common terms. However, in practice it is not easy to find domain-specific standards that are detailed and stable enough to allow for real interoperability. Therefore, we propose an architecture that allows for incremental construction of a shared repository including a multilingual thesaurus, which is used in a business communication language. Communicating information systems then refer to the common thesaurus while exchanging messages. Our emphasis is on separating semantics (in the thesaurus) and syntax (in XML). Therefore, our extensibility is not only that of XML, but also that of the semantics modeled in the shared repository. We present the business communication language XLBC and show how it can be used in electronic commerce applications. XLBC message patterns and conversation protocols are stored in the shared repository as well.
 
Article
In this paper, we develop techniques for interoperable query processing between object and relational schemas. The objective is to pose a query against a local object schema and be able to share information transparently from target relational databases, which have equivalent schema. Our approach is a mapping approach (as opposed to a global schema approach) and is based on using canonical representations (CR). We use one CR for resolving heterogeneity based on the object and relational query languages. We use a second parameterized CR to resolve representational heterogeneity between object and relational schema, and to build a mapping knowledge dictionary. There is also a set of mapping rules, based on the CR, which defines the appropriate mapping between schemas. A query posed against the local object schema is first represented in the CR for queries, and then transformed by the mapping rules, to an appropriate query for the target relational schema, using relevant information from ...
 
Article
The SHARE project seeks to apply information technologies in helping design teams gather, organize, re-access, and communicate both informal and formal design information to establish a "shared understanding" of the design and design process. This paper presents the visions of SHARE, along with the research and strategies undertaken to build an infrastructure toward its realization. A preliminary prototype environment is being used by designers working on a variety of industry-sponsored design projects. This testbed continues to inform and guide the development of NoteMail, MovieMail, and Xshare, as well as other components of the next-generation SHARE environment that will help distributed design teams work together more effectively. 1 Introduction The SHARE project is broadly concerned with how information technology can help engineers develop products. Increasingly, product development involves teams of engineers from multiple organizations working together over networks, suppo...
 
Article
This paper presents a formal framework for the combination of document representations based on evidential reasoning. Each indexing method is modelled by an agent referred to as an indexer. Indexing elements are modelled as sentences which are used to describe the content of a document. The modelling of the indexing and its uncertainty provides the document representation. The combination of document representations is expressed as the combination of the indexing and uncertainty as provided by two or more indexers. The resulting indexer is referred to as the combined indexer. The proposed framework allows the capture of the semantics of the indexing vocabularies associated with the indexers and the aggregation of the uncertainty associated with the indexing. Keywords: evidential reasoning, information retrieval, indexing, uncertainty, document representation, combination of evidence, aggregation of uncertainty 1. Introduction Information retrieval (IR) [18] is the science...
 
Article
Based on the specific characteristics and requirements for adequate electronic commerce system support, this article gives an overview of the respective distributed systems technologies which are available for open and heterogeneous electronic commerce applications. Abstracting from basic communication mechanisms such as (transactionally secure) remote procedure calls and remote database access mechanisms, this includes service trading and brokerage functions as well as security aspects such as notary and non-repudiation functions. Further important elements of a system infrastructure for electronic commerce applications are: common middleware infrastructures, componentware techniques, distributed and mobile agent technologies, etc. As electronic transactions enter the phase of performance, increasingly new and important functions are required. Among these are: negotiation protocols to support both the settlement and the fulfillment of electronic contracts, as well as ad-hoc workflow management support for compound and distributed services in electronic commerce applications. In addition to an overview of the state of the art of the respective technology, the article briefly presents some related projects conducted by the authors jointly with international partners in order to realize some of the important new functions of a system infrastructure for open distributed electronic commerce applications.
 
Article
Cooperative information agents need mechanisms that enable them to work together effectively while solving common problems. We investigate the use of commitment by agents to proposed actions as a mechanism that allows agents to work concurrently on interdependent problems. Judicious use of commitment can not only increase the throughput of cooperative information systems, but also allow them to deal flexibly with dynamically changing environments. We use the domain of distributed scheduling to demonstrate that static commitment strategies are ineffective. Results from simulated experiments are used to identify the environmental features on which an adaptive commitment strategy should be predicated.
 
Article
Modeling the environment is essential for agents to cooperate with each other in a distributed system. In this paper, we propose two strategies for selecting agents' communication structures in a cooperative search, using their local histories as a model of their computational environment. Under the assumption of homogeneity of agents, an agent can select a proper communication structure by using a history of local computation, and the utility of communication always matches its cost. Simulations using the traveling salesman problem show that the strategies produce high performance. We also describe an extension of these strategies to other areas, and the means to separate them from application programs using meta-object programming in Object-Oriented Programming Languages (OOPL). Keywords: Distributed problem solving, cooperative search, self-organization, computational reflection 1. Introduction Cooperation is a kind of meta-level computation which controls problem-level computati...
 
Article
This paper focuses on market-like coordination mechanisms in multi-agent systems, with applications to business planning. Several fundamental criteria are derived in order to evaluate market-like coordination mechanisms. The central criterion is the efficient allocation of jobs to agents. Assuming a relationship between classes of operational planning problems and certain coordination mechanisms, business planning problems are classified on the basis of their relevant attributes. Coordination mechanisms for each of the classes are then introduced on the basis of auction theory and investigated with respect to the trade-off between efficiency and computational tractability. All of the mechanisms prove to have a common basis: the Vickrey Auction.
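For concreteness, here is the Vickrey (second-price, sealed-bid) auction in its procurement form: the lowest-cost agent wins the job but is paid the second-lowest bid, which makes truthful bidding a dominant strategy. Agent names and prices are illustrative.

```python
def vickrey_award(bids: dict) -> tuple:
    """bids: agent -> asking price for the job (lower is better).
    Returns (winner, payment), payment being the second-lowest bid."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ordered[0][0]
    payment = ordered[1][1]              # second-best price sets the payment
    return winner, payment

bids = {"agent1": 40.0, "agent2": 55.0, "agent3": 48.0}
print(vickrey_award(bids))               # ('agent1', 48.0)
```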
 
Article
A compositional method is presented for the verification of multi-agent systems. The advantages of the method are the well-structuredness of the proofs and the reusability of parts of these proofs in relation to reuse of components. The method is illustrated for an example multi-agent system, consisting of cooperative information gathering agents. This application of the verification method results in a formal analysis of pro-activeness and reactiveness of agents, and shows which combinations of pro-activeness and reactiveness in a specific type of information agents lead to a successful cooperation.
 
Article
The problem of integrating information from conflicting sources comes up in many current applications, such as cooperative information systems, heterogeneous databases, and multi-agent systems. We model this by the operation of merging first-order theories. We propose a formal semantics for this operation and show that it has desirable properties, including abiding by majority rule in case of conflict and syntax independence. We apply our semantics to the special case when the theories to be merged represent relational databases under integrity constraints. We then present a way of merging databases that have different or conflicting schemas caused by problems such as synonyms, homonyms or type conflicts mentioned in the schema integration literature. 1 Introduction Being able to share information from multiple sources has become increasingly important. Considerable efforts have been made in both academia and industry to develop global information sharing systems such as fede...
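A small illustration of the majority-rule property on relational data: keep every fact asserted by a strict majority of the sources. The relation and tuples are invented; the paper's semantics handles full first-order theories and integrity constraints.

```python
from collections import Counter

# Three sources that disagree on a 'capital' relation.
src1 = {("capital", "nl", "amsterdam"), ("capital", "fr", "paris")}
src2 = {("capital", "nl", "amsterdam"), ("capital", "fr", "lyon")}
src3 = {("capital", "fr", "paris")}

def majority_merge(*sources):
    """Keep every fact asserted by a strict majority of the sources."""
    votes = Counter(fact for s in sources for fact in s)
    return {fact for fact, n in votes.items() if n > len(sources) / 2}

print(majority_merge(src1, src2, src3))
# {('capital', 'nl', 'amsterdam'), ('capital', 'fr', 'paris')}
```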
 
Article
When we build a database, many assumptions made within the intended user community about the scope, accuracy, timeliness, completeness and meaning of the data items are only implicit in the data. If the data are to be made available to a wider community where different assumptions are made, then what was implicit must be made explicit. This paper describes a way of doing this. We describe a formalism for qualifying data with assumptions and show how this formalism may be used by a system of brokers to distribute information amongst users who make different assumptions about data.
 
Article
We introduce the language LOTOS/TM for the formal specification of a network of cooperating agents with a shared data repository and private local data. LOTOS/TM is the orthogonal integration of the process-algebraic protocol specification language LOTOS and the functional, object-oriented database specification language TM. The specified world consists of a number of interacting LOTOS processes — describing the cooperating agents — and a special LOTOS process representing the shared data repository, which is modeled as a TM database. The data repository's functionality is made available to the other, cooperating processes through one or more external database gates. Interaction at such a gate corresponds to a method invocation in the database. In addition to shared persistent data, the TM language is used to specify the data encapsulated locally within processes, and the transient data communicated over gates. Some features of LOTOS/TM are inherently suitable for describing cooperation, such as combinators for synchronization on specific methods. These features are illustrated by examples showing navigation events on a shared graph structure that resembles a hypertext. Emphasis in the examples is placed on coordination aspects of the scenario. LOTOS/TM serves as a formalism for a more user-friendly specification language by the name of CoCoA that is currently under construction.
 
Article
When a query fails, it is more cooperative to identify the cause of failure, rather than just to report the empty answer set. When there is not a cause per se for the query's failure, it is then worthwhile to report the part of the query which failed. To identify a Minimal Failing Subquery (MFS) of the query is the best way to do this. (This MFS is not unique; there may be many of them.) Likewise, to identify a Maximal Succeeding Subquery (XSS) can help a user to recast a new query that leads to a non-empty answer set. Database systems do not provide the functionality of these types of cooperative responses. This may be, in part, because algorithmic approaches to finding the MFSs and the XSSs to a failing query are not obvious. The search space of subqueries is large. Despite work on MFSs in the past, the algorithmic complexity of these identification problems had remained uncharted. This paper shows the complexity profile of MFS and XSS identification. It is shown that there exists a simple algorithm for finding an MFS or an XSS by asking N subsequent queries, where N is the length of the query. To find more MFSs (or XSSs) can be hard. It is shown that to find N MFSs (or XSSs) is NP-hard. To find k MFSs (or XSSs), for a fixed k, remains polynomial. An optimal algorithm for enumerating MFSs and XSSs, ISHMAEL, is developed and presented. The algorithm has ideal performance in enumeration, finding the first answers quickly, and only decaying toward intractability in a predictable manner as further answers are found. The complexity results and the algorithmic approaches given in this paper should allow for the construction of cooperative facilities which identify MFSs and XSSs for database systems. These results are relevant to a number of problems outside of databases too, and may find further application.
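The simple N-query algorithm mentioned above is easy to state: starting from the failing query, drop each condition in turn and keep it out whenever the remainder still fails; what is left is one MFS (assuming, as usual, that subqueries of succeeding queries succeed). The toy fails predicate and the query below are invented stand-ins for real database probes.

```python
def find_mfs(conditions: list, fails) -> list:
    """Return one minimal failing subquery using len(conditions) probes."""
    mfs = list(conditions)
    for cond in conditions:              # one database probe per condition
        trial = [c for c in mfs if c != cond]
        if fails(trial):                 # still fails without it: drop it
            mfs = trial
    return mfs

# Toy database: the query fails exactly when it combines these two conditions
# (say, no cheap 'acme' products exist).
def fails(conds) -> bool:
    return {"price < 10", "brand = 'acme'"} <= set(conds)

query = ["color = 'red'", "price < 10", "in_stock", "brand = 'acme'"]
print(find_mfs(query, fails))   # ["price < 10", "brand = 'acme'"]
```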
 
Article
Managing interschema knowledge is an essential task when dealing with cooperative information systems. We propose a logical approach to the problem of both expressing interschema knowledge, and reasoning about it. In particular, we set up a structured representation language for expressing semantic interdependencies between classes belonging to different database schemas, and present a method for reasoning over such interdependencies. The language and the associated reasoning technique make it possible to build a logic-based module that can draw useful inferences whenever the need arises to compare and combine the knowledge represented in the various schemas. Notable examples of such inferences include checking the coherence of interschema knowledge, and providing integrated access to a cooperative information system.
 
Conference Paper
This paper presents a method for extracting a conceptual schema from a relational database. The method is based on an analysis of data manipulation statements in the code of an application using a relational DBMS. Attributes representing references between tables in the relational schema, and possible keys, are determined by an analysis of join conditions in queries and view definitions. Knowledge about which attributes link tables is used to investigate the database extension in a selective manner. When the keys cannot be unambiguously determined, possible solutions are generated by the system under guidance of the user. The approach makes it possible to efficiently construct a conceptual schema from only rudimentary information. 1 Introduction The current rapid progress in the telecommunications domain will allow geographically distributed computers to interact more closely than today. However, this evolution is slowed down by existing information systems (IS) based on old-fashione...
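A rough sketch of the join-condition analysis step in Python: scan equality joins in application queries and count which attribute pairs link which tables, as foreign-key candidates. The regex, the SQL snippets and the use of table aliases instead of resolved table names are simplifications; the paper's analysis is considerably more thorough.

```python
import re
from collections import Counter

queries = [
    "SELECT * FROM orders o, customers c WHERE o.cust_id = c.id",
    "SELECT * FROM orders o, customers c WHERE c.id = o.cust_id AND o.total > 5",
    "SELECT * FROM items i, orders o WHERE i.order_id = o.id",
]

# Matches equality joins of the form alias.attr = alias.attr
JOIN = re.compile(r"(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)")

def candidate_links(sql_statements):
    """Count attribute pairs appearing in equality joins across all queries."""
    links = Counter()
    for sql in sql_statements:
        for t1, a1, t2, a2 in JOIN.findall(sql):
            # Sort the pair so o.cust_id = c.id and c.id = o.cust_id coincide.
            links[tuple(sorted([(t1, a1), (t2, a2)]))] += 1
    return links

for (left, right), n in candidate_links(queries).most_common():
    print(f"{left[0]}.{left[1]} <-> {right[0]}.{right[1]}  ({n} join(s))")
```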
 
Top-cited authors
Frank Leymann
  • Universität Stuttgart
Mike Papazoglou
  • Tilburg University
Schahram Dustdar
Wil Van der Aalst
  • RWTH Aachen University
Paolo Traverso
  • Fondazione Bruno Kessler