Over the last two decades, qualitative reasoning (QR) has become an important domain in Artificial Intelligence. QDE (Qualitative Differential Equation) model learning (QML), as a branch of QR, has also received an increasing amount of attention; many systems have been proposed to solve various significant problems in this field. QML has been applied to a wide range of fields, including physics, biology and medical science. In this paper, we first identify the scope of this review by distinguishing QML systems from other model-learning systems, and then review all the noteworthy QML systems within this scope. The applications of QML in several domains are also introduced briefly. Finally, future directions for QML are explored from different perspectives.
Recent theoretical results have justified the use of potential-based reward shaping as a way to improve the performance of multi-agent reinforcement learning (MARL). However, the question remains of how to generate a useful potential function.
Previous research demonstrated the use of STRIPS operator knowledge to automatically generate a potential function for single-agent reinforcement learning. Following up on this work, we investigate the use of STRIPS planning knowledge in the context of MARL.
Our results show that a potential function based on joint or individual plan knowledge can significantly improve MARL performance compared with no shaping. In addition, we investigate the limitations of individual plan knowledge as a source of reward shaping in cases where the combination of individual agent plans causes conflict.
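The plan-based shaping scheme described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the plan representation and all names are assumptions. It uses the standard potential-based form F(s, s') = γΦ(s') − Φ(s), with the potential taken as an agent's progress through its STRIPS plan, so shaping rewards plan progress without changing the optimal policy.

```python
# Hypothetical sketch: potential-based reward shaping from plan knowledge.
# Phi(s) is taken to be the number of consecutive plan steps already achieved,
# so F(s, s') = gamma * Phi(s') - Phi(s) rewards progress along the plan.

GAMMA = 0.99

def phi(state, plan):
    """Potential = number of leading plan steps whose facts hold in `state`."""
    progress = 0
    for step in plan:
        if step in state:   # a plan step is 'achieved' if its fact holds
            progress += 1
        else:
            break
    return progress

def shaped_reward(base_reward, state, next_state, plan):
    """Environment reward plus the potential-based shaping term."""
    return base_reward + GAMMA * phi(next_state, plan) - phi(state, plan)

# Toy example: an agent whose individual plan is to establish facts a, b, c.
plan = ["a", "b", "c"]
s, s_next = {"a"}, {"a", "b"}
r = shaped_reward(0.0, s, s_next, plan)  # positive: the step advanced the plan
```

In a joint-plan variant, `plan` would be the joint plan projected onto each agent; the conflict cases discussed above arise when individual plans assign incompatible facts.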
Multi-agent systems consist of a number of interacting autonomous agents, each of which is capable of sensing its environment
(including other agents) and deciding to act in order to achieve its own objectives. In order to guarantee the overall design
objectives of multi-agent systems, the behavior of individual agents and their interactions need to be regulated and coordinated
[23,29,30]. The development of multi-agent systems therefore requires programming languages that facilitate the implementation
of individual agents as well as mechanisms that control and regulate individual agents’ behaviors. It also requires computational
tools to test and verify programs that implement multi-agent systems.
In a nutshell, agent-based models (ABM) are models, i.e. abstract representations of reality, in which (i) a multitude of objects interact with each other and with the environment, and (ii) the objects are autonomous, i.e. there is no central, or "top down", control over their behavior. Paper prepared for the First European PhD Complexity School: Agent-Based Studies.
This paper provides a survey on studies that analyze the macroeconomic effects of intellectual property rights (IPR). The first part of this paper introduces different patent policy instruments and reviews their effects on R&D and economic growth. This part also discusses the distortionary effects and distributional consequences of IPR protection as well as empirical evidence on the effects of patent rights. Then, the second part considers the international aspects of IPR protection. In summary, this paper draws the following conclusions from the literature. Firstly, different patent policy instruments have different effects on R&D and growth. Secondly, there is empirical evidence supporting a positive relationship between IPR protection and innovation, but the evidence is stronger for developed countries than for developing countries. Thirdly, the optimal level of IPR protection should trade off the social benefits of enhanced innovation against the social costs of multiple distortions and income inequality. Finally, in an open economy, achieving the globally optimal level of protection requires international coordination (rather than harmonization) of IPR protection.
One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary using only knowledge of the positive class. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper we present a unified view of the general problem of OCC via a taxonomy of OCC studies, based on the availability of training data, the algorithms used and the application domains addressed. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques and methodologies, with a focus on their significance, limitations and applications. We conclude our paper by discussing some open research problems in the field of OCC and presenting our vision for future research.
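As a hypothetical illustration of the one-class setting described above, the following sketch fits a decision boundary from positive examples alone (here, simply a centroid plus a covering radius; the method and names are illustrative assumptions, not one of the surveyed algorithms) and flags anything outside that boundary as an outlier or novelty.

```python
# Hypothetical sketch of one-class classification: learn a boundary around the
# positive class only, with no negative examples, then label points outside
# the boundary as outliers (-1) and points inside as positives (+1).

import math

class CentroidOneClass:
    def fit(self, points):
        n, dim = len(points), len(points[0])
        self.center = [sum(p[i] for p in points) / n for i in range(dim)]
        # radius = largest distance from the centroid to any training point
        self.radius = max(self._dist(p) for p in points)
        return self

    def _dist(self, p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, self.center)))

    def predict(self, p):
        """+1 = consistent with the positive class, -1 = outlier/novelty."""
        return 1 if self._dist(p) <= self.radius else -1

clf = CentroidOneClass().fit([(0, 0), (1, 0), (0, 1), (1, 1)])
clf.predict((0.5, 0.5))  # inside the learned boundary -> +1
clf.predict((5.0, 5.0))  # far outside -> -1
```

Real OCC methods surveyed in the literature (e.g. one-class SVMs, autoencoder-based detectors) learn far more flexible boundaries, but the structure of the problem is the same: only the positive class informs the fit.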
Finance is a challenging yet appropriate domain for model-based reasoning, an area of research otherwise grounded in classical physics. Among the many features that suggest a model-based approach are that firms have formal internal structures, business entities have idealizable behaviours, and there is a history of formal analysis of business problems. This article discusses the motivations and foundations of the model-based approach, and surveys several existing artificial intelligence programs that exploit its advantages. The survey shows that there are ample opportunities for useful systems and significant research in this area. However, accomplishing either of these goals depends crucially upon moving beyond qualitative models based only on accounting information, which tend not to capture the actual complexities of the domain.
This document provides the specification of the Process Interchange Format (PIF) version 1.1. The goal of this work is to develop an interchange format to help automatically exchange process descriptions among a wide variety of business process modeling and support systems such as workflow software, flow charting tools, planners, process simulation systems, and process repositories. Instead of having to write ad hoc translators for each pair of such systems, each system will only need to have a single translator for converting process descriptions in that system into and out of the common PIF format. Then any system will be able to automatically exchange basic process descriptions with any other system. This document describes the PIF-CORE 1.1, i.e. the core set of object types (such as activities, agents, and prerequisite relations) that can be used to describe the basic elements of any process. The document also describes a framework for extending the core set of object types to include additional information needed in specific applications. These extended descriptions are exchanged in such a way that the common elements are interpretable by any PIF translator and the additional elements are interpretable by any translator that knows about the extensions.
This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to effective communication among people, organisations, and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an 'ontology') in a given subject area can improve such communication, which in turn can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purposes they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We revisit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field, considering various case studies, software tools for ontology development, key research issues and future prospects.
The use of logic to model information access makes it possible to obtain models that are more general than previous ones. Some of these logical models represent, within a uniform framework, various features of information retrieval systems such as hypermedia links, multimedia content, users' knowledge and performance. Logic also provides a common approach to the integration of different information access systems, and allows one to reason about a model and its properties. However, logic by itself is not sufficient to model the information access process. This is expected to remain an active line of research, and more effective systems for information access are likely to be built on this basis.
Knowledge acquisition research supports the generation of knowledge-based systems through the development of principles, techniques, methodologies and tools. What differentiates knowledge-based system development from conventional system development is the emphasis on in-depth understanding and formalization of the relations between the conceptual structures underlying expert performance and the computational structures capable of emulating that performance.
Personal construct psychology is a theory of individual and group psychological and social processes that has been used extensively in knowledge acquisition research to model the cognitive processes of human experts. The psychology takes a constructivist position appropriate to the modelling of human knowledge processes, but develops this through the characterization of human conceptual structures in axiomatic terms that translate directly to computational form. In particular, there is a close correspondence between the intensional logics of knowledge, belief and action developed in personal construct psychology, and the intensional logics for formal knowledge representation developed in artificial intelligence research as term subsumption, or KL-ONE-like, systems.
This paper gives an overview of personal construct psychology and its expression as an intensional logic describing the cognitive processes of anticipatory agents, and uses this to survey knowledge acquisition tools deriving from personal construct psychology.
to relevant agent literature, without claiming these to be either exhaustive or the most authoritative references. Scale: It is almost a platitude to mention the size of the Internet in general, and of the Web in particular. Google now indexes well over a billion pages, and the number of connected hosts runs into the millions. These numbers are orders of magnitude larger than any traditional single knowledge base for which much of current KR technology has been designed [Turner and Jennings, 2000]. Change rate: Many portions of the Internet display a very high change rate, with information changing on the timescale of days (e.g. news sites), hours (e.g. auction sites), or even minutes (e.g. stock markets). On the other hand, KR techniques, such as those from knowledge engineering, have typically been designed for update rates in the order of months, or even slower (e.g. [Schut and Wooldridge, 2000]). Lack of referential integrity: One of the major departures that the Web took from trad
As computer scientists, our goals are motivated by the desire to improve computer systems in some way: making them easier to design and implement, more robust and less prone to error, easier to use, faster, cheaper, and so on. In the field of multi-agent systems, our goal is to build systems capable of flexible autonomous decision making, with societies of such systems cooperating with one another. There is a lot of formal theory in the area, but it is often not obvious what such theories should represent and what role the theory is intended to play. Theories of agents are often abstract and abstruse, and not related to concrete computational models.
This report is the result of a panel discussion at the First UK Workshop on Foundations of Multi-Agent Systems (held at the University of Warwick on Oct. 23rd 1996). The three panellists and the chairman are the authors of this document and they are listed in alphabetical order.
In systems composed of multiple autonomous agents, negotiation is a key form of interaction that enables groups of agents to arrive at a mutual agreement regarding some belief, goal or plan, for example. Particularly because the agents are autonomous and cannot be assumed to be benevolent, agents must influence others to convince them to act in certain ways, and negotiation is thus critical for managing such inter-agent dependencies. The process of negotiation may take many different forms, such as auctions, protocols in the style of the contract net, and argumentation, but it is unclear just how sophisticated the agents or the protocols for interaction must be for successful negotiation in different contexts. All these issues were raised in the panel session on negotiation. As a prelude to the discussion, Jennings identified three broad topics for research on negotiation, which serve to organise the issues under consideration. First, negotiation protocols are the set of rules that govern the interaction. This covers the permissible types of participants (e.g., the negotiators and relevant third parties), the negotiation states (e.g., accepting bids, negotiation closed), the events that cause state transitions (e.g., no more bidders, bid accepted), and the valid actions of the participants in particular states (e.g., which messages can be sent by whom, to whom, and when). Second, negotiation objects are the range of issues over which agreement must be reached. These may be single issues, such
Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings.
Telecommunications infrastructures are a natural application domain for the distributed Software Agent paradigm. The authors clarify the potential application of software agent technology in legacy and future communications systems, and provide an overview of publicly available research on software agents as used for network management. The authors focus on the so called "stationary intelligent agent" type of software agent, although the paper also reviews the reasons why mobile agents have made an impact in this domain. The authors' objective is to describe some of the intricacies of using the software agent approach in the management of communications systems. The paper is in four main sections. The first section provides a brief introduction to software agent technology. The second section considers general problems of network management and the reasons why software agents may provide a suitable solution. The third section reviews some selected research on agents in a telec...
Introduction Continuing the series of workshops begun in 1996 (Luck, 1997; Doran et al., 1997; d'Inverno et al., 1997; Fisher et al., 1997) and continued in each of the two years since (Luck et al., 1998; Aylett et al., 1998; Binmore et al., 1998; Decker et al., 1999; Beer et al., 1999), the 1999 workshop of the UK Special Interest Group on Multi Agent Systems (UKMAS'99) took place in Bristol in December. Chaired and organised by Chris Preist of Hewlett Packard Laboratories, with support from both HP and BT Laboratories, the workshop brought together a diverse range of participants from the agent community in both the UK and abroad, to discuss and present work spanning all areas of agent research. Although dominated by computer scientists, also present at the meeting were electronics engineers, computational biologists, philosophers, sociologists
In spite of the rapid spread of agent technology, there is, as yet, little evidence of an engineering approach to the development of agent-based systems. In particular, development methods for these systems are relatively rare. One of the key reasons for this is the inadequacy of standard software development approaches for these new, and fundamentally different, agent-based systems. Traditional software development methods often lack the flexibility to handle high-level concepts such as an agent's dynamic control of its own behaviour, its ability to represent cooperative interactions, and its mechanisms for representing internal change, assumptions, objectives, and the uncertainty inherent in its interactions with the real-world.
In this report, we summarise the other contributions to the workshop through paper presentations and invited talks, which cover a wide range of relevant topics. The structure of the report reflects the organisation of the workshop.
The rapid development of the field of agent-based systems offers a new and exciting paradigm for the development of sophisticated programs in dynamic and open environments, particularly in distributed domains such as web-based systems of various kinds and electronic commerce. However, the speed of progress has been such that it has also brought with it a new set of problems. This paper reviews the current state of research into agent-based systems, considering reasons for the way the field has grown and pointing at the way it might continue to progress. It pays particular attention to problems with defining the nature of agents, the technologies that have enabled the rapid progress to date, and ways in which work can be consolidated through the development of large-scale applications, and the integration with theoretical foundations. 1 Introduction While it may be difficult to identify the critical point at which work on agent-based systems became a distinct and recognisable area of ...
A multi-agent system architecture for coordination of just-in-time production and distribution is presented. The problem to solve is two-fold: first the right amount of resources at the right time should be produced, and then these resources should be distributed to the right consumers. In order to solve the first problem, which is hard when the production and/or distribution time is relatively long, each consumer is equipped with an agent that makes predictions of future needs that it sends to a production agent. The second part of the problem is approached by forming clusters of consumers within which it is possible to redistribute resources fast and at a low cost in order to cope with discrepancies between predicted and actual consumption. Reallocation agents are introduced (one for each cluster) to manage the redistribution of resources. The suggested architecture is evaluated in a case study concerning management of district heating systems. Results from a preliminary simulation study show that the suggested approach makes it possible to control the trade-off between quality-of-service and degree of surplus production. We also compare the suggested approach to a reference control scheme (approximately corresponding to the current approach to district heating management), and conclude that it is possible to reduce the amount of resources produced while maintaining the quality of service. Finally, we describe a simulation experiment where the relation between the size of the clusters and the quality of service was studied.
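The two-stage scheme described above can be sketched as follows. All names and numbers are invented for illustration; this is not the authors' architecture, only a minimal rendering of its logic: consumer agents predict demand, a production agent produces the aggregate prediction plus a surplus margin, and a per-cluster reallocation agent redistributes the produced resources when actual consumption diverges from the predictions.

```python
# Hypothetical sketch of just-in-time production plus within-cluster
# redistribution. Consumer agents supply `predicted` demands in advance;
# the reallocation agent later shares what was produced according to
# `actual` demands.

def produce(predictions, surplus_margin=0.1):
    """Production agent: produce aggregate predicted demand plus a margin.
    The margin controls the trade-off between quality of service and
    surplus production mentioned in the abstract."""
    return sum(predictions) * (1 + surplus_margin)

def reallocate(produced, actual_demand):
    """Reallocation agent: serve each consumer fully if supply allows;
    otherwise ration proportionally to actual demand within the cluster."""
    total = sum(actual_demand)
    if total <= produced:
        return list(actual_demand)             # everyone fully served
    scale = produced / total
    return [d * scale for d in actual_demand]  # proportional rationing

predicted = [10, 20, 30]
actual = [12, 18, 35]         # discrepancies between prediction and use
supply = produce(predicted)   # aggregate prediction plus 10% surplus
served = reallocate(supply, actual)
```

Shrinking `surplus_margin` reduces surplus production at the cost of more frequent rationing, which is the trade-off the simulation study in the abstract explores; larger clusters give the reallocation agent more room to absorb individual prediction errors.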
The issues involved in applying machine learning algorithms to multi-agent systems were discussed. Issues specific to multi-agent learning were presented, including the differences between single-agent and multi-agent learning, on-line and off-line learning methods, and mechanisms for social learning. The main design options were also presented, namely on-line versus off-line learning, reactive versus logic-based learning algorithms, and social learning algorithms inspired by animal learning. It was found that logic-based agents have the advantage of being able to naturally incorporate domain knowledge in the learning process, while artificial life approaches can build on evidence from biology.
Introduction One of the main reasons for the sustained activity and interest in the field of agent-based systems, apart from the obvious recognition of its value as a natural and intuitive way of understanding the world, is its reach into very many different and distinct fields of investigation. Indeed, the notions of agents and multi-agent systems are relevant to fields ranging from economics to robotics, in contributing to the foundations of the field, in being influenced by ongoing research, and in providing many domains of application. While these various disciplines constitute a rich and diverse environment for agent research, the way in which they may have been linked by it is a much less considered issue. The purpose of this panel was to examine just this concern: the relationships between different areas that have resulted from agent research. Informed by the experience of the participants in the areas of robotics, social simulation, economics, computer science and art