We study the use of reroutable assignment for multipoint video
conferences in a high-speed network. A conference model is constructed
and conference calls are classified. A conference of a particular type
can ride on different route-configurations. According to the location of
the current speaker, a conference has different modes of operation. Two
network management functions are discussed: call admission ensures a
preset QOS requirement by blocking new calls that would cause congestion;
route-configuration assignment determines the multicast tree for
distributing the video of the current speaker. The reroutable
route-configuration assignment is introduced. It allows a change of
route-configuration when there is a change of speaker. Two reroutable
assignment schemes are studied. In the normal scheme, a conference is
always rerouted to the least congested route-configuration, while in the
sticky scheme, a conference is only rerouted when the current
route-configuration is congested. The video freeze probability,
rerouting probability and the extended capacity space are derived. An
example shows that the video freeze probabilities of the two schemes do
not differ significantly. The sticky scheme, however, is superior as it
gives a much smaller rerouting probability than the normal scheme.
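As an illustration only (the scheme names are the paper's; the code and its helper names are assumptions), the two rerouting rules can be sketched in Python, where load maps each candidate route-configuration to a congestion measure and congested is a hypothetical admission-style predicate:

    def normal_reroute(current, candidates, load):
        # Normal scheme: on every speaker change, move to the least
        # congested route-configuration.
        return min(candidates, key=lambda rc: load[rc])

    def sticky_reroute(current, candidates, load, congested):
        # Sticky scheme: keep the current route-configuration unless it
        # is congested; only then move to the least congested one.
        if not congested(current, load):
            return current
        return min(candidates, key=lambda rc: load[rc])

The sketch makes the conclusion plausible: the sticky rule changes route-configuration only when forced to, so it reroutes far less often.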
Electronic, web-based commerce enables and demands the application
of intelligent methods to analyze information collected from consumer
web sessions. We propose a method of increasing the granularity of the
user session analysis by isolating useful subsessions within web page
access sessions, where each subsession represents a frequently traversed
path indicating high-level user activity. The subsession approximates
user state information as well as anticipated user activity, and as a
result is useful for personalization and pre-caching.
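A minimal sketch of one way such subsessions could be isolated (the function and its parameters are assumptions, not the paper's algorithm): count every contiguous subpath of page accesses across sessions and keep those traversed frequently enough.

    from collections import Counter

    def frequent_subsessions(sessions, min_len=2, max_len=5, min_support=10):
        # Each session is a list of page identifiers; every contiguous
        # subpath above the support threshold is a candidate subsession.
        counts = Counter()
        for session in sessions:
            for n in range(min_len, max_len + 1):
                for i in range(len(session) - n + 1):
                    counts[tuple(session[i:i + n])] += 1
        return {path: c for path, c in counts.items() if c >= min_support}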
The paper presents the concept and development of a prototype diagnostic decision support system for real-time control and monitoring of dynamical processes. This decision support system, known as Diagnostic Evaluation and Corrective Action (DECA), employs qualitative reasoning, in conjunction with quantitative models, for monitoring and diagnosis of malfunctions and disruptions in dynamical processes under routine operations and emergency situations. DECA is especially suited for application to time-constrained environments where an immediate action is needed to avoid catastrophic failure(s). DECA is written in Common Lisp and has been implemented on a Symbolics 3670 machine; its efficacy has been verified using data from the Three Mile Island No. 2 Nuclear Reactor Accident.
The direct fuzzification of a standard layered feedforward neural network, where the signals and weights are fuzzy sets, is discussed. A fuzzified delta rule is presented for learning. Three applications are given: modeling a fuzzy expert system; performing fuzzy hierarchical analysis based on data from a group of experts; and modeling a fuzzy system. Further applications depend on proving that this fuzzy neural network can approximate a continuous fuzzy function to any degree of accuracy on a compact set.
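To make the fuzzification concrete, here is a minimal sketch (an illustration under assumptions, not the paper's formulation) of one neuron evaluated at a single alpha-cut, where fuzzy signals and weights reduce to intervals and the monotone activation is applied endpoint-wise:

    import math

    def imul(a, b):
        # Interval product: min/max over endpoint products,
        # since weights may be negative.
        p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(p), max(p))

    def iadd(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def sigmoid(t):
        return 1.0 / (1.0 + math.exp(-t))

    def fuzzy_neuron(inputs, weights):
        # inputs, weights: lists of (lo, hi) intervals for one alpha-cut.
        acc = (0.0, 0.0)
        for x, w in zip(inputs, weights):
            acc = iadd(acc, imul(x, w))
        # A monotone increasing activation maps an interval endpoint-wise.
        return (sigmoid(acc[0]), sigmoid(acc[1]))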
Uncertain relations between temporal points are represented by means of possibility distributions over the three basic relations "smaller than", "equal to", and "greater than". Operations for computing inverse relations, for composing relations, for combining relations coming from different sources and pertaining to the same temporal points, or for representing negative information, are defined. An illustrative example of representing and reasoning with uncertain temporal relations is given. This paper shows how possibilistic temporal uncertainty can be handled in the setting of point algebra. Moreover, the paper emphasizes the advantages of the possibilistic approach over a probabilistic approach previously proposed. This work does for the temporal point algebra what the authors previously did for the temporal interval algebra.
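A minimal sketch of this point algebra under possibility (the max-min extension is the standard one; the code itself is an assumption for illustration): a relation between two points is a possibility degree for each of '<', '=', '>'.

    REL = ('<', '=', '>')
    INV = {'<': '>', '=': '=', '>': '<'}
    # Point-algebra composition table: COMP[r1][r2] is the set of basic
    # relations possible between x and z when x r1 y and y r2 z.
    COMP = {
        '<': {'<': {'<'}, '=': {'<'}, '>': {'<', '=', '>'}},
        '=': {'<': {'<'}, '=': {'='}, '>': {'>'}},
        '>': {'<': {'<', '=', '>'}, '=': {'>'}, '>': {'>'}},
    }

    def inverse(pi):
        return {INV[r]: pi[r] for r in REL}

    def combine(pi1, pi2):
        # Conjunctive combination of two sources on the same pair of points.
        return {r: min(pi1[r], pi2[r]) for r in REL}

    def compose(pi_xy, pi_yz):
        # Max-min extension of composition to possibility distributions.
        return {r: max((min(pi_xy[r1], pi_yz[r2])
                        for r1 in REL for r2 in REL if r in COMP[r1][r2]),
                       default=0.0)
                for r in REL}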
The formalization of agents attitudes, and belief in particular, has been investigated in the past by the authors of this paper, along two different but related streams. Giunchiglia and Giunchiglia investigate the properties of contexts for the formal specification of agents mutual beliefs, combining extensional specification with (finite) presentation by means of contexts. Cimatti and Serafini address the representational and implementational implications of the use of contexts for representing prepositional attitudes by tackling a paradigmatic case study. The goal of this paper is to show how these two streams are actually complementary, i.e. how the methodology proposed in the former can be successfully applied to formally specify the case study discussed in the latter. In order to achieve this goal, the formal framework is extended to take into account some relevant aspects of the case study, the specification of which is then worked out in detail.
The problem of assessing the value of a candidate is viewed here as a
multiple combination problem. On the one hand a candidate can be evaluated
according to different criteria, and on the other hand several experts are
supposed to assess the value of candidates according to each criterion.
Criteria are not equally important, and experts are not equally competent or
reliable. Moreover, levels of satisfaction of criteria, or levels of confidence,
are only assumed to take their values in qualitative scales which are just
linearly ordered. The problem is discussed within two frameworks, the
transferable belief model and qualitative possibility theory. They
respectively offer a quantitative and a qualitative setting for handling the
problem, thus providing a way to compare the nature of the underlying assumptions.
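For the qualitative side, one standard possibilistic aggregation (given here as a hedged illustration; the paper's exact construction may differ) is the prioritized minimum on a finite linearly ordered scale 0..L, under which an unimportant criterion cannot drag the global rating down:

    L = 4  # top of the qualitative scale (an assumed 5-level scale)

    def neg(a):
        # Order-reversing map of the scale.
        return L - a

    def weighted_min(importances, satisfactions):
        # Global rating: min over criteria of max(neg(importance), satisfaction).
        return min(max(neg(w), s) for w, s in zip(importances, satisfactions))

    # A fully important criterion (w = L) passes its satisfaction level
    # through; a totally unimportant one (w = 0) contributes the top value.
    print(weighted_min([4, 2, 0], [1, 3, 0]))  # -> 1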
This article presents a knowledge-based system methodology for developing operator assistant (OA) systems in dynamic and interactive environments. Operator assistance is a problem of both training and design; design is the subject of this article. Design includes both design of the system to be controlled and design of procedures for operating this system. A specific knowledge representation is proposed for representing the corresponding system and operational knowledge. This representation is based on the situation recognition and analytical reasoning paradigm. It tries to make explicit common factors involved in both human and machine intelligence, including perception and reasoning. An OA system based on this representation has been developed for space telerobotics. Simulations have been carried out with astronauts, and the resulting protocols have been analyzed. The results show the relevance of the approach and have been used to improve the knowledge representation and the OA architecture.
Monotonicity with respect to all arguments is fundamental to the definition
of aggregation functions. It is also a limiting property that results in many
important non-monotonic averaging functions being excluded from the theoretical
framework. This work proposes a definition for weakly monotonic averaging
functions, studies some properties of this class of functions and proves that
several families of important non-monotonic means are actually weakly monotonic
averaging functions. Specifically we provide sufficient conditions for weak
monotonicity of the Lehmer mean and generalised mixture operators. We establish
weak monotonicity of several robust estimators of location and conditions for
weak monotonicity of a large class of penalty-based aggregation functions.
These results permit a proof of the weak monotonicity of the class of
spatial-tonal filters that include important members such as the bilateral
filter and anisotropic diffusion. Our concept of weak monotonicity provides a
sound theoretical and practical basis by which (monotone) aggregation functions
and non-monotone averaging functions can be related within the same framework,
allowing us to bridge the gap between these previously disparate areas of research.
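Weak monotonicity is usually formalized as f(x + a·1) ≥ f(x) for all a > 0, i.e., monotonicity along the diagonal rather than in each argument. Below is a small numerical spot check against the Lehmer mean (illustrative only; the paper's sufficient conditions, not this check, establish the property):

    def lehmer_mean(x, p):
        # L_p(x) = sum(x_i^p) / sum(x_i^(p-1)); not monotonic in each
        # argument in general, yet weakly monotonic under conditions of
        # the kind the paper provides.
        return sum(v ** p for v in x) / sum(v ** (p - 1) for v in x)

    def weakly_monotonic_at(f, x, shifts):
        # Check f(x + a*1) >= f(x) for the given positive shifts a.
        fx = f(x)
        return all(f([v + a for v in x]) >= fx - 1e-12 for a in shifts)

    x = [0.2, 0.9, 0.4]
    print(weakly_monotonic_at(lambda v: lehmer_mean(v, 2), x, [0.1, 0.5, 1.0]))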
We extend the notion of belief function to the case where the underlying structure is no longer the Boolean lattice of subsets of some universal set, but any lattice, which we endow with a minimal set of properties according to our needs. We show that all classical constructions and definitions (e.g., mass allocation, commonality function, plausibility functions, necessity measures with nested focal elements, possibility distributions, Dempster's rule of combination, decomposition w.r.t. simple support functions, etc.) remain valid in this general setting. Moreover, our proof of the decomposition of belief functions into simple support functions is much simpler and more general than the original one by Shafer.
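For reference, the classical constructions the abstract says carry over take the same form in the lattice setting: given a mass allocation m on the elements of a lattice (L, ≤),

\[
\mathrm{bel}(x) = \sum_{y \le x} m(y), \qquad q(x) = \sum_{y \ge x} m(y),
\]

with m nonnegative and summing to one; only the subset ordering of the Boolean case is replaced by the lattice order.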
In this paper the framework DESIRE for the design of compositional reasoning systems and multi-agent systems is applied to build a generic nonmonotonic reasoning system. The outcome is a general reasoning system that can be used to model different nonmonotonic reasoning formalisms and that can be executed by a generic execution mechanism. The main advantages of using DESIRE (for example, compared to a direct implementation in a programming language such as PROLOG) are that the design is generic and has a transparent compositional structure, and that both the static and dynamic aspects of the nonmonotonic reasoning processes, including their control, are specified explicitly and declaratively.
Empirical research has shown that in some situations subjects tend to assign a probability to a conjunction of two events that is larger than the probability they assign to each of these two events. This empirical phenomenon is traditionally called the conjunction fallacy. One of the best-known experiments used to demonstrate the conjunction fallacy is the Linda problem introduced by Tversky and Kahneman in 1982. They explain the “fallacious behavior” by their so-called judgemental heuristics. These heuristics have been heavily criticized by Gigerenzer (1996) as being far “too vague to count as explanations”. In this paper, it is shown that the “fallacious behavior” in the Linda problem can be explained by the so-called Theory of Hints developed by Kohlas and Monney in 1995.
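The probabilistic constraint violated in the Linda problem is the conjunction rule: for any events A and B,

\[
P(A \cap B) \le \min\bigl(P(A),\, P(B)\bigr),
\]

so rating the conjunction “bank teller and active feminist” as more probable than “bank teller” alone is inconsistent with any probability assignment.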
The article presented an edited collection of eleven papers presented at the first five workshops (1988-1992) on Verification, Validation and Testing of Intelligent Systems conducted by the American Association for Artificial Intelligence (AAAI). These workshops have been actively attended by V&V researchers, tool developers, and practitioners who benefit most from the dissemination of major new results and systems.
This paper proposes a fuzzy abductive inference with degrees of manifestation. Fuzzy logic is applied to Peng and Reggia's abductive inference for handling the manifestation degrees. This method infers irredundant combinations of candidates with degrees of belief for the manifestations. A learning algorithm for updating the fuzzy causations and the t-conorm parameter is also presented in this paper. An application of the new method to a diagnostic problem is shown and the effectiveness of the proposed method is demonstrated. 1 Introduction Abduction is one of the methods of inference for medical diagnostic systems. D. Poole defined abduction as follows: when a background theory Σ, hypotheses H, and a goal G are given, an explanation E of elements of H is defined such that Σ ∪ E ⊨ G and Σ ∪ E ⊭ false. Peng and Reggia developed association-based abductive inference. This abductive inference uses knowledge suitable for fault/medical diagnoses, and has an efficient method...
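A brute-force sketch of the crisp core of Peng and Reggia's scheme (names and representation are assumptions for illustration): enumerate disorder combinations by increasing size and keep the irredundant covers of the observed manifestations.

    from itertools import combinations

    def irredundant_covers(effects, observed):
        # effects: disorder -> set of manifestations it can cause;
        # observed: set of manifestations to explain. A cover explains
        # every observed manifestation; it is irredundant if no proper
        # subset of it is also a cover.
        disorders = list(effects)
        covers = []
        for size in range(1, len(disorders) + 1):
            for combo in combinations(disorders, size):
                caused = set().union(*(effects[d] for d in combo))
                if (observed <= caused
                        and not any(set(c) < set(combo) for c in covers)):
                    covers.append(combo)
        return covers

The paper's fuzzy version would additionally grade each cover with a degree of belief derived from the manifestation degrees and the fuzzy causations.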
This paper investigates two different activities that involve making assumptions: predicting what one expects to be true and explaining observations. In a companion paper, an architecture for both prediction and explanation is proposed and an implementation is outlined. In this paper, we show how such a hypothetical reasoning system can be used to solve recognition, diagnostic and prediction problems. Part of this is the assumption that the default reasoner must be "programmed" to get the right answer; it is not just a matter of "stating what is true" and hoping the system will magically find the right answer. A number of distinctions have been found in practice to be important: between predicting whether something is expected to be true versus explaining why it is true; and between conventional defaults (assumptions as a communication convention), normality defaults (assumed for expediency) and conjectures (assumed only if there is evidence). The effects of these distinctions on...
In this note we examine the question of assigning a probabilistic valuation to a statement such as "Tweety (a particular bird) is able to fly." Namely, we suggest that a natural way to proceed is to rewrite it as "a (randomly chosen) bird with the same observable properties as Tweety is able to fly," and consequently to assume that the probability of "Tweety is able to fly" is equal to the percentage of the past observed birds similar to Tweety that are able to fly.
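Read literally, the proposal amounts to a reference-class estimate; a minimal sketch (the data layout is an assumption):

    def prob_able_to_fly(tweety_properties, observed_birds):
        # observed_birds: list of (properties, able_to_fly) pairs.
        # The valuation is the proportion of past birds sharing
        # Tweety's observable properties that were able to fly.
        similar = [flies for props, flies in observed_birds
                   if props == tweety_properties]
        return sum(similar) / len(similar) if similar else None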
Diagnostic reasoning at multiple levels of abstraction is an efficient problem-solving strategy. It enables diagnostic problem-solvers (human or automated) to efficiently form plausible high-level diagnostic hypotheses while avoiding the explicit consideration of unnecessary details. This article describes a domain-independent inference mechanism for diagnostic reasoning at multiple levels of abstraction. The inference mechanism uses the causal knowledge representation framework described in an earlier companion article. This inference strategy has been tested through the implementation of a prototype diagnostic system with encouraging results.
A significant body of causal knowledge for diagnostic problem-solving is organized at multiple levels of abstraction. By this we mean that causal relations are specified in terms of disorder and manifestation classes that can be further refined, as well as in terms of specific, unrefinable disorders and manifestations. Such knowledge enables diagnostic problem-solvers (human or automated) to efficiently form initial, high-level diagnostic hypotheses while avoiding the explicit consideration of unnecessary details. This article develops a knowledge representation framework to precisely yet naturally capture causal relations at multiple levels of abstraction. Different interpretations of high-level causal associations are precisely defined and systematically tabulated. Rules to infer implicit causal relations from explicitly declared causal relations are also identified. These ideas have been implemented in a working system for medical diagnosis. The results presented in this article also offer a new perspective on studying the semantics of knowledge representation in general.
In this article we explore the issue of domain-specificity in language learning. The point to be argued here is that although language acquisition requires substantial domain-intensive knowledge, some of the mechanisms used in concept acquisition can be seen as special cases of more general learning strategies; that is, domain-independent strategies operating within domain-dependent constraints. We present a computational model of concept acquisition making use of these strategies, operating within a model of lexical organization called Constraint Semantics. This is a rich lexical semantics embedded within a “markedness theory,” constraining how semantic functions relate to one another. Constraint Semantics is a restrictive calculus limiting the search space of possible word meanings for the language learner. This, in effect, acts as a set of well-formedness conditions, defining the constraints for what possible logical decompositions a word might contain. The general approach taken here is based on the supposition that predicates from the perceptual domain are the primitives for more abstract relations. We then describe an implementation of this model, TULLY, which mirrors the stages of lexical acquisition for children. Examples are given showing how hierarchical structure for concepts is acquired, as well as the development of polysemy relations for verbs.
Globally coherent behavior is essential for a distributed problem solving network. It is a characteristic of the whole problem solving process. We discuss in this article different forms of cooperation at different phases of the problem solving process, which have to be considered to increase global coherence. The connection problem and the timing problem are key issues for distributed problem solving. The “perceive-plan-act” loop is introduced for each problem solving node. It means that a node has to perceive the network state and plan its near-future activities before it takes an action. Different approaches to network perception are discussed. The experimental results show significant improvement of system performance.
Strong deficiencies are present in symbolic models for action representation and planning, regarding mainly the difficulty of coping with real, complex environments. These deficiencies can be attributed to several problems, such as the inadequacy in coping with incompletely structured situations, the difficulty of interacting with visual and motorial aspects, the difficulty in representing low-level knowledge, the need to specify the problem at a high level of detail, and so on. Besides the purely symbolic approaches, several nonsymbolic models have been developed, such as the recent class of subsymbolic techniques. A promising paradigm for the modeling of reasoning, which combines features of both symbolic and analogical approaches, is based on the construction of analogical models of the reference for the internal representations, as introduced by Johnson-Laird. In this work, we propose a similar approach to the problem of knowledge representation and reasoning about actions and plans. We propose a hybrid approach, symbolic and analogical, in which the inferences are partially devolved to measurements on analogical models generated starting from the symbolic representation. The interaction between the symbolic and the analogical level comes from the fact that procedures are connected to some symbols, allowing the mental model to be generated, updated, and verified. The hybrid model utilizes, for the symbolic component, a representation system based on the distinction between terminological and assertional knowledge. The terminological component adopts a SI-Net formalism, extended by temporal primitives. The assertional component is a subset of first-order logic. The analogical representation is a set of concurrent procedures modeling parts of the world, action processes, simulations, and metaphors based on force-field concepts. A particular case study, regarding the problem of the assembly of a complex object from parts, is taken as an experimental paradigm.
IDSCA, an intelligent system, is developed for the direction selection of a controller's action in a multiloop control system. In the design of a controller, the selections of both the valve type and the controller's action direction are important tasks, which directly affect the operation and safety of production. Traditional design can hardly solve the problem. Programmed in OPS5, IDSCA can perform heuristic inference and make intelligent decisions. A significant result from IDSCA is the fact that a new design criterion is developed, which may complement the knowledge of controller design technique. The other important investigation is that the Adaptive Feedback Testing System (AFTS) is developed to provide high reliability of the design results. These two investigations indicate that the development of intelligent systems can stimulate and help the development of both AI and related prototype problems. Moreover, IDSCA has some additional important features: its knowledge base can be modified and new production rules can be created in the running process to solve special problems; and the hierarchy of meta-level control strategy provides the means to manage the knowledge base of IDSCA efficiently. In this article, the principle of building intelligent systems is discussed. As an example, the cascade control system of a polymerizer is used to illustrate the use of IDSCA.
In this paper we introduce the use of contextual transformation functions to adjust membership functions in fuzzy systems. We address both linear and nonlinear functions to perform linear or nonlinear context adaptation, respectively. The key issue is to encode knowledge in a standard frame of reference, and have its meaning tuned to the situation by means of an adequate transformation reflecting the influence of context on the interpretation of a concept. Linear context adaptation is simple and fast. Nonlinear context adaptation is more computationally expensive, but due to its nonlinear characteristic, different parts of the base membership functions can be stretched or compressed to best fit the desired shape. Here we use a genetic algorithm to find a nonlinear transformation function, given the base membership functions and a set of data extracted from the environment and classified by means of fuzzy concepts.
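A minimal sketch of the linear case (function names and the example are assumptions, not the paper's code): a base membership function lives on the standard frame [0, 1] and is composed with a linear rescaling from the context interval.

    def to_standard_frame(x, lo, hi):
        # Linear context transformation: map the context interval
        # [lo, hi] onto the standard frame of reference [0, 1].
        return (x - lo) / (hi - lo)

    def contextual_membership(base_mu, lo, hi):
        # Lift a base membership function defined on [0, 1] to a context.
        return lambda x: base_mu(min(1.0, max(0.0, to_standard_frame(x, lo, hi))))

    # Hypothetical example: the same base concept "tall" reinterpreted
    # for two populations by changing only the context interval (in cm).
    tall_base = lambda u: max(0.0, min(1.0, (u - 0.5) / 0.3))
    tall_adult = contextual_membership(tall_base, 150.0, 200.0)
    tall_child = contextual_membership(tall_base, 90.0, 150.0)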
This article describes ongoing research on content-sensitive recombination operators for genetic algorithms. A motivation behind this line of inquiry stems from the observation that biological chromosomes appear to contain special nucleotide sequences whose job is to influence the recombination of the expressible genes. We think of these as punctuation marks telling the recombination operators how to do their job. Furthermore, we assume that the distribution of these marks (part of the representation) in a gene pool is determined by the same survival-of-the-fittest and genetic recombination mechanisms that account for the distribution of the expressible genes (the knowledge). A goal of this project is to devise such mechanisms for genetic algorithms and thereby to link the adaptation of a representation to the adaptation of its contents. We hope to do so in a way that capitalizes on the intrinsically parallel behavior of the traditional genetic algorithm. We anticipate benefits of this for machine learning. We describe one mechanism we have devised and present some empirical evidence that suggests it may be as good as or better than a traditional genetic algorithm across a range of search problems. We attempt to show that its action does successfully adapt the search mechanics to the problem space, and we provide the beginnings of a theory to explain its good performance.
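One way such punctuation marks can steer recombination (an illustrative sketch, not necessarily the mechanism the article devises): marks travel with the genes, and crossover may switch the copying source only at marked positions, so selection on offspring also adapts the marks.

    import random

    def punctuated_crossover(p1, p2):
        # Each parent is (genes, marks); marks[i] flags a punctuation
        # mark after position i. The child inherits genes and marks
        # together, so the representation adapts with its contents.
        genes, marks = [], []
        src = 0  # index of the parent currently being copied
        parents = (p1, p2)
        for i in range(len(p1[0])):
            g, m = parents[src]
            genes.append(g[i])
            marks.append(m[i])
            if m[i] and random.random() < 0.5:
                src = 1 - src  # crossover allowed only at a mark
        return genes, marks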
This paper explores a multimodular architecture of an intelligent information system and proposes a method for adaptation. The method is based on evaluating which of the modules need to be adapted based on the performance of the whole system on new data. These modules are then trained selectively on the new data until they improve their performance and the performance of the whole system. The modules are fuzzy neural networks, especially designed to facilitate adaptive training and knowledge discovery, and spatial temporal maps. A particular case study of spoken language recognition is presented along with some preliminary experimental results of an adaptive speech recognition system.
There is a compromise between noise removal and texture preservation in image enhancement. It is difficult to perform image enhancement, using only one simple filter, for a real-world image which may consist of many different regions. This article studies the intelligent aspect of filtering algorithms and describes a multi-threshold adaptive filter (MTA filter) for solving this problem. The MTA filter uses a generalized gradient function and a local variance function, which provide the local contextual information as evidence to determine the nature of the filtering for each local neighborhood. A knowledge-based presegmentation procedure is presented. It applies a threshold operation to extract the local evidence. A belief function is used to combine different evidence and to determine the local filtering strategies. In this way, several simple filters can be combined to form a more efficient and more flexible context-dependent filter. As a result, specific filtering is only applied to the region for which it is suitable. Thus, a balanced texture-preserving and noise-removal effect can be achieved.
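An illustrative two-filter reduction of this idea (parameter names and thresholds are assumptions; the actual MTA filter combines several filters with a belief function): smooth only where the local variance gives little evidence of texture.

    import numpy as np

    def variance_gated_smooth(img, k=5, var_thresh=100.0):
        # Mean-filter flat neighborhoods (noise removal) and leave
        # high-variance neighborhoods untouched (texture preservation).
        pad = k // 2
        out = img.astype(float).copy()
        padded = np.pad(img.astype(float), pad, mode='reflect')
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                win = padded[y:y + k, x:x + k]
                if win.var() < var_thresh:
                    out[y, x] = win.mean()
        return out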
One of the most crucial problems in any computer system that involves representing the world is the representation of time. This includes applications such as databases, simulation, expert systems and applications of Artificial Intelligence in general. In this brief paper, I will give a survey of the basic techniques available for representing time, and then talk about temporal reasoning in a general setting as needed in AI applications. Quite different representations of time are usable depending on the assumptions that can be made about the temporal information to be represented. The most crucial issue is the degree of certainty one can assume. Can one assume that a time stamp can be assigned to each event, or barring that, that the events are fully ordered? Or can we only assume that a partial ordering of events is known? Can events be simultaneous? Can they overlap in time and yet not be simultaneous? If they are not instantaneous, do we know the durations of events? Different answers to each of these questions allow very different representations of time.
In the field of distributed artificial intelligence, the cooperation among intelligent agents is a matter of growing importance. We propose a new machine, called an agency, which is devoted to solving complex problems by means of cooperation among agents, where each agent is able to perform inferential activities. The aim of this paper is to give rigorous and formal descriptions of the agency and, using these descriptions, to define and prove some interesting properties. The descriptions are based on three formalisms: multilanguage systems, directed hypergraphs, and ER Petri nets. The work is a step in the direction of building a methodology for the design and development of systems operating in real-world applications. We give a theoretical background on which new techniques can be implemented for testing the requirements of distributed artificial intelligence systems such as agencies. The fundamental formalism in describing agencies is the multilanguage system; starting from it, we capture some particular issues (i.e., the structure and evolution of an agency) by means of hypergraphs and ER Petri nets. The formalisms support the definition and proof of properties (such as fairness of cooperation among agents).