International Journal of Intelligent Systems

Published by Wiley
Online ISSN: 1098-111X
Publications
Conference Paper
We study the use of reroutable assignment for multipoint video conferences in a high-speed network. A conference model is constructed and conference calls are classified. A conference of a particular type can ride on different route-configurations. According to the location of the current speaker, a conference has different modes of operation. Two network management functions are discussed: call admission ensures a preset QoS requirement by blocking new calls that cause congestion; route-configuration assignment determines the multicast tree for distributing the video of the current speaker. The reroutable route-configuration assignment is introduced. It allows a change of route-configuration when there is a change of speaker. Two reroutable assignment schemes are studied. In the normal scheme, a conference is always rerouted to the least congested route-configuration, while in the sticky scheme, a conference is only rerouted when the current route-configuration is congested. The video freeze probability, rerouting probability, and the extended capacity space are derived. An example shows that the video freeze probabilities of the two schemes do not differ significantly. The sticky scheme, however, is superior as it gives a much smaller rerouting probability than the normal scheme.
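The behavioural difference between the two schemes can be sketched in a few lines (hypothetical data structures; congestion is reduced here to a scalar load per route-configuration, which is an assumption for illustration, not the paper's model):

    # Minimal sketch of the two reroutable assignment policies. Assumed model:
    # each candidate route-configuration has a scalar congestion level, and a
    # configuration is "congested" when its level exceeds a threshold.

    def normal_scheme(current, candidates, congestion):
        # Always move the conference to the least congested route-configuration.
        return min(candidates, key=lambda r: congestion[r])

    def sticky_scheme(current, candidates, congestion, threshold):
        # Keep the current route-configuration unless it is congested; only
        # then move to the least congested alternative.
        if congestion[current] <= threshold:
            return current
        return min(candidates, key=lambda r: congestion[r])

    # The sticky scheme avoids a rerouting that the normal scheme performs.
    congestion = {"A": 0.4, "B": 0.3, "C": 0.9}
    print(normal_scheme("A", ["A", "B", "C"], congestion))       # -> B
    print(sticky_scheme("A", ["A", "B", "C"], congestion, 0.7))  # -> A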
 
Conference Paper
Electronic, web-based commerce enables and demands the application of intelligent methods to analyze information collected from consumer web sessions. We propose a method of increasing the granularity of user session analysis by isolating useful subsessions within web page access sessions, where each subsession represents a frequently traversed path indicating high-level user activity. The subsession approximates user state information as well as anticipated user activity, and as a result is useful for personalization and pre-caching.
 
Conference Paper
The paper presents the concept and development of a prototype diagnostic decision support system for real-time control and monitoring of dynamical processes. This decision support system, known as Diagnostic Evaluation and Corrective Action (DECA), employs qualitative reasoning, in conjunction with quantitative models, for monitoring and diagnosis of malfunctions and disruptions in dynamical processes under routine operations and emergency situations. DECA is especially suited for application to time-constrained environments where an immediate action is needed to avoid catastrophic failure(s). DECA is written in common Lisp and has been implemented on a Symbolics 3670 machine; its efficacy has been verified using the data from the Three Mile Island No. 2 Nuclear Reactor Accident.
 
Conference Paper
The direct fuzzification of a standard layered feedforward neural network where the signals and weights are fuzzy sets is discussed. A fuzzified delta rule is presented for learning. Three applications are given, including modeling a fuzzy expert system; performing fuzzy hierarchical analysis based on data from a group of experts; and modeling a fuzzy system. Further applications depend on proving that this fuzzy neural network can approximate a continuous fuzzy function to any degree of accuracy on a compact set.
 
Conference Paper
Uncertain relations between temporal points are represented by means of possibility distributions over the three basic relations "smaller than", "equal to", and "greater than". Operations for computing inverse relations, for composing relations, for combining relations coming from different sources and pertaining to the same temporal points, or for representing negative information, are defined. An illustrative example of representing and reasoning with uncertain temporal relations is given. This paper shows how possibilistic temporal uncertainty can be handled in the setting of point algebra. Moreover, the paper emphasizes the advantages of the possibilistic approach over a probabilistic approach previously proposed. This work does for the temporal point algebra what the authors previously did for the temporal interval algebra.
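As a rough illustration of the idea (my own encoding, not the authors' notation), an uncertain point relation can be stored as possibility degrees over the three basic relations and composed with a max-min rule over the crisp composition table of point algebra:

    # Sketch: possibility distributions over {"<", "=", ">"} composed via
    # max-min over the crisp point-algebra composition table (assumed encoding).

    COMP = {  # basic relations possible between x and z, given x?y and y?z
        ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
        ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
        (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
    }

    def compose(pi_xy, pi_yz):
        """Possibility degree of each basic relation between x and z."""
        out = {"<": 0.0, "=": 0.0, ">": 0.0}
        for r1, a in pi_xy.items():
            for r2, b in pi_yz.items():
                for r in COMP[(r1, r2)]:
                    out[r] = max(out[r], min(a, b))
        return out

    # x is almost certainly before y; y is before or equal to z.
    print(compose({"<": 1.0, "=": 0.2, ">": 0.0},
                  {"<": 1.0, "=": 1.0, ">": 0.0}))  # {"<": 1.0, "=": 0.2, ">": 0.0}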
 
Chapter
The formalization of agents' attitudes, and belief in particular, has been investigated in the past by the authors of this paper, along two different but related streams. Giunchiglia and Giunchiglia investigate the properties of contexts for the formal specification of agents' mutual beliefs, combining extensional specification with (finite) presentation by means of contexts. Cimatti and Serafini address the representational and implementational implications of the use of contexts for representing propositional attitudes by tackling a paradigmatic case study. The goal of this paper is to show how these two streams are actually complementary, i.e., how the methodology proposed in the former can be successfully applied to formally specify the case study discussed in the latter. In order to achieve this goal, the formal framework is extended to take into account some relevant aspects of the case study, the specification of which is then worked out in detail.
 
Article
The problem of assessing the value of a candidate is viewed here as a multiple combination problem. On the one hand a candidate can be evaluated according to different criteria, and on the other hand several experts are supposed to assess the value of candidates according to each criterion. Criteria are not equally important, experts are not equally competent or reliable. Moreover, levels of satisfaction of criteria, or levels of confidence, are only assumed to take their values in qualitative scales which are just linearly ordered. The problem is discussed within two frameworks, the transferable belief model and qualitative possibility theory. They respectively offer a quantitative and a qualitative setting for handling the problem, thus providing a way to compare the nature of the underlying assumptions.
 
Article
This article presents a knowledge-based system methodology for developing operator assistant (OA) systems in dynamic and interactive environments. This is a problem both of training and of design; the latter is the subject of this article. Design includes both design of the system to be controlled and design of procedures for operating this system. A specific knowledge representation is proposed for representing the corresponding system and operational knowledge. This representation is based on the situation recognition and analytical reasoning paradigm. It tries to make explicit common factors involved in both human and machine intelligence, including perception and reasoning. An OA system based on this representation has been developed for space telerobotics. Simulations have been carried out with astronauts and the resulting protocols have been analyzed. Results show the relevance of the approach and have been used for improving the knowledge representation and the OA architecture.
 
Article
Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions. It is also a limiting property that results in many important non-monotonic averaging functions being excluded from the theoretical framework. This work proposes a definition for weakly monotonic averaging functions, studies some properties of this class of functions and proves that several families of important non-monotonic means are actually weakly monotonic averaging functions. Specifically we provide sufficient conditions for weak monotonicity of the Lehmer mean and generalised mixture operators. We establish weak monotonicity of several robust estimators of location and conditions for weak monotonicity of a large class of penalty-based aggregation functions. These results permit a proof of the weak monotonicity of the class of spatial-tonal filters that include important members such as the bilateral filter and anisotropic diffusion. Our concept of weak monotonicity provides a sound theoretical and practical basis by which (monotone) aggregation functions and non-monotone averaging functions can be related within the same framework, allowing us to bridge the gap between these previously disparate areas of research.
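For reference, the two central objects can be stated compactly (standard definitions in my own notation; the article's exact formulation may differ in detail). The Lehmer mean and the shift-based weak monotonicity condition are:

$$L_q(x_1,\dots,x_n) = \frac{\sum_{i=1}^{n} x_i^{\,q}}{\sum_{i=1}^{n} x_i^{\,q-1}}, \qquad F(x_1+t,\dots,x_n+t) \ge F(x_1,\dots,x_n) \quad \text{for all } t>0,$$

the latter saying that adding the same positive amount to every argument never decreases the output, even if increasing a single argument might.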
 
Article
We extend the notion of belief function to the case where the underlying structure is no longer the Boolean lattice of subsets of some universal set, but any lattice, which we endow with a minimal set of properties according to our needs. We show that all classical constructions and definitions (e.g., mass allocation, commonality function, plausibility functions, necessity measures with nested focal elements, possibility distributions, the Dempster rule of combination, decomposition w.r.t. simple support functions, etc.) remain valid in this general setting. Moreover, our proof of the decomposition of belief functions into simple support functions is much simpler and more general than the original one by Shafer.
 
[Figure captions: information exchange at the top level of the system; information exchange within "generate possible continuations"; information exchange within "determine combinations"; user interaction.]
Chapter
In this paper the framework DESIRE for the design of compositional reasoning systems and multi-agent systems is applied to build a generic nonmonotonic reasoning system. The outcome is a general reasoning system that can be used to model different nonmonotonic reasoning formalisms and that can be executed by a generic execution mechanism. The main advantages of using DESIRE (for example, compared to a direct implementation in a programming language such as PROLOG) are that the design is generic, with a transparent compositional structure, and that both the static and dynamic aspects of the nonmonotonic reasoning processes, including their control, are specified explicitly and declaratively. (C) 2003 Wiley Periodicals, Inc.
 
Article
Empirical research has shown that in some situations subjects tend to assign a probability to a conjunction of two events that is larger than the probability they assign to each of these two events. This empirical phenomenon is traditionally called the conjunction fallacy. One of the best-known experiments used to demonstrate the conjunction fallacy is the Linda problem introduced by Tversky and Kahneman in 1982. They explain the "fallacious behavior" by their so-called judgemental heuristics. These heuristics have been heavily criticized by Gigerenzer (1996) as being far "too vague to count as explanations". In this paper, it is shown that the "fallacious behavior" in the Linda problem can be explained by the so-called Theory of Hints developed by Kohlas and Monney in 1995.
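The normative constraint violated in such judgements is the elementary monotonicity of probability (a textbook fact, added here only for context):

$$P(A \cap B) \le \min\bigl(P(A),\,P(B)\bigr),$$

so rating the conjunction of two events as more probable than either event alone is inconsistent with any probability measure.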
 
Article
In this article, the self-organizing map (SOM) is employed to analyze data describing the 24-hour blood pressure and heart-rate variability of human subjects. The number of observations varies widely over different subjects, and therefore a direct statistical analysis of the data is not feasible without extensive pre-processing and interpolation for normalization purposes. The SOM network operates directly on the data set, without any pre-processing, determines several important data set characteristics, and allows their visualization on a two-dimensional plot. The SOM results are very similar to those obtained using classic statistical methods, indicating the effectiveness of the SOM method in accurately extracting the main characteristics from the data set and displaying them in a readily understandable manner. In this article, the relation is studied between the representation of each subject on the SOM, and his blood pressure and pulse-rate measurements. Finally, some indications are included regarding how the SOM can be used by the medical community to assist in diagnosis tasks. © 2002 John Wiley & Sons, Inc.
 
Article
Knowledge acquisition is a constructive modeling process, not simply a matter of "expertise transfer." Consistent with this perspective, we advocate knowledge acquisition practices and tools that facilitate active collaboration between expert and knowledge engineer, that exploit a serviceable theory in their application, and that support knowledge-based system development from a life-cycle perspective. A constructivist theory of knowledge is offered as a plausible theoretical foundation for knowledge acquisition and as an effective practical approach to the dynamics of modeling. In this view, human experts construct knowledge from their own personal experiences while interacting with their social constituencies (e.g., supervisors, colleagues, clients, patients) in their niche of expertise. Knowledge acquisition is presented as a cooperative enterprise in which the knowledge engineer and expert collaborate in constructing an explicit model of problem solving in a specific domain. From this perspective, the agenda for the knowledge acquisition research community includes developing tools and methods to aid experts in their efforts to express, elaborate, and improve their models of the domain. This functional view of expertise helps account for several problems that typically arise in practical knowledge acquisition projects, many of which stem directly from the inadequacies of representations used at various stages of system development. To counter these problems, we emphasize the use of mediating representations as a means of communication between expert and knowledge engineer, and intermediate representations to help bridge the gap between the mediating representations themselves, as well as between the mediating representations and a particular implementation formalism. © 1993 John Wiley & Sons, Inc.
 
Article
This article presents an edited collection of eleven papers from the first five workshops (1988-1992) on Verification, Validation and Testing of Intelligent Systems conducted by the American Association for Artificial Intelligence (AAAI). These workshops have been actively attended by V&V researchers, tool developers, and practitioners, who benefit most from the dissemination of major new results and systems.
 
Article
For some time, researchers have become increasingly aware that some aspects of natural language processing can be viewed as abductive inference. This article describes knowledge representation in dual-route parsimonious covering theory, based on an existing diagnostic abductive inference model, extended to address issues specific to logical form generation. The two routes of covering deal with syntactic and semantic aspects of language, and are integrated by attributing both syntactic and semantic facets to each "open class" concept. Such extensions reflect some fundamental differences between the two task domains. The syntactic aspect of covering is described to show the differences, and some interesting properties are established. The semantic associations are characterized in terms of how they can be used in an abductive model. A major significance of this work is that it paves the way for a nondeductive inference method for word sense disambiguation and logical form generation, exploiting the associative linguistic knowledge. This approach sharply contrasts with others, where knowledge has usually been laboriously encoded into pattern-action rules that are hard to modify. Further, this work represents yet another application for the general principle of parsimonious covering. © 1994 John Wiley & Sons, Inc.
 
Article
Abductive inferences are commonplace during natural language processing. Having identified some limitations of an existing parsimonious covering model of abductive diagnostic inference, we developed an extended, dual-route version to address issues in word sense disambiguation and logical form generation. The details of representing knowledge in this framework and the syntactic route of covering are described in a companion article [V. Dasigi, Int. J. Intell. Syst., 9, 571-608 (1994)]. Here, we describe the semantic covering process in detail. A dual-route algorithm that integrates syntactic and semantic covering is given. Taking advantage of the "transitivity" of irredundant syntactic covering, plausible semantic covers are searched for, based on some heuristics, in the space of irredundant syntactic covers. Syntactic covering identifies all possible candidates for semantic covering, which in turn helps focus syntactic covering. Attributing both syntactic and semantic facets to "open-class" linguistic concepts makes this integration possible. An experimental prototype has been developed to provide a proof-of-concept for these ideas in the context of expert system interfaces. The prototype has at least some ability to handle ungrammatical sentences, to perform some nonmonotonic inferences, etc. We believe this work provides a starting point for a nondeductive inference method for logical form generation, exploiting the associative linguistic knowledge. © 1994 John Wiley & Sons, Inc.
 
Article
This paper proposes a fuzzy abductive inference with degrees of manifestation. Fuzzy logic is applied to Peng and Reggia's abductive inference for handling the manifestation degrees. This method infers irredundant combinations of candidates with degrees of belief for the manifestations. A learning algorithm for updating the fuzzy causations and the t-conorm parameter is also presented in this paper. An application of the new method to a diagnostic problem is shown and the effectiveness of the proposed method is demonstrated. 1 Introduction Abduction is one of the methods of inference for medical diagnostic systems [1]. D. Poole [2] defined abduction as follows: given a background theory $\Sigma$, hypotheses $H$, and a goal $G$, an explanation $E$ consisting of elements of $H$ is defined such that $\Sigma \cup E \models G$ and $\Sigma \cup E \not\models \mathrm{false}$. Peng and Reggia developed association-based abductive inference [7]. This abductive inference used knowledge suitable for fault/medical diagnoses, and had an efficient method...
 
Article
This paper investigates two different activities that involve making assumptions: predicting what one expects to be true and explaining observations. In a companion paper, an architecture for both prediction and explanation is proposed and an implementation is outlined. In this paper, we show how such a hypothetical reasoning system can be used to solve recognition, diagnostic and prediction problems. Part of this is the assumption that the default reasoner must be "programmed" to get the right answer; it is not just a matter of "stating what is true" and hoping the system will magically find the right answer. A number of distinctions have been found in practice to be important: between predicting whether something is expected to be true versus explaining why it is true; and between conventional defaults (assumptions as a communication convention), normality defaults (assumed for expediency) and conjectures (assumed only if there is evidence). The effects of these distinctions on...
 
Article
In this note we examine the question of assigning a probabilistic valuation to a statement such as "Tweety (a particular bird) is able to fly." Namely, we suggest that a natural way to proceed is to rewrite it as "a (randomly chosen) bird with the same observable properties as Tweety is able to fly," and consequently to assume that the probability of "Tweety is able to fly" is equal to the percentage of the past observed birds similar to Tweety that are able to fly. (C) 1994 John Wiley & Sons, Inc.
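In symbols, the proposed valuation is a relative frequency over the reference class of similar birds (my paraphrase of the idea, not the authors' notation):

$$P(\mathrm{fly}(\mathrm{Tweety})) = \frac{\bigl|\{\, b \in \mathcal{B} : \mathrm{similar}(b,\mathrm{Tweety}) \wedge \mathrm{fly}(b) \,\}\bigr|}{\bigl|\{\, b \in \mathcal{B} : \mathrm{similar}(b,\mathrm{Tweety}) \,\}\bigr|},$$

where $\mathcal{B}$ is the set of past observed birds.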
 
Article
Diagnostic reasoning at multiple levels of abstraction is an efficient problem-solving strategy. It enables diagnostic problem-solvers (human or automated) to efficiently form plausible high-level diagnostic hypotheses while avoiding the explicit consideration of unnecessary details. This article describes a domain-independent inference mechanism for diagnostic reasoning at multiple levels of abstraction. The inference mechanism uses the causal knowledge representation framework described in an earlier companion article [1]. This inference strategy has been tested through the implementation of a prototype diagnostic system with encouraging results.
 
Article
A significant body of causal knowledge for diagnostic problem-solving is organized at multiple levels of abstraction. By this we mean that causal relations are specified in terms of disorder and manifestation classes that can be further refined, as well as in terms of specific, unrefinable disorders and manifestations. Such knowledge enables diagnostic problem-solvers (human or automated) to efficiently form initial, high-level diagnostic hypotheses while avoiding the explicit consideration of unnecessary details. This article develops a knowledge representation framework to precisely yet naturally capture causal relations at multiple levels of abstraction. Different interpretations of high-level causal associations are precisely defined and systematically tabulated. Rules to infer implicit causal relations from explicitly declared causal relations are also identified. These ideas have been implemented in a working system for medical diagnosis. The results presented in this article also offer a new perspective on studying the semantics of knowledge representation in general.
 
Article
In this article we explore the issue of domain-specificity in language learning. The point to be argued here is that although language acquisition requires substantial domain-intensive knowledge, some of the mechanisms used in concept acquisition can be seen as special cases of more general learning strategies; that is, domain-independent strategies operating within domain-dependent constraints. We present a computational model of concept acquisition making use of these strategies, operating within a model of lexical organization, called Constraint Semantics. This is a rich lexical semantics embedded within a "markedness theory," constraining how semantic functions relate to one another. Constraint Semantics is a restrictive calculus limiting the search space of possible word meanings for the language learner. This, in effect, acts as a set of well-formedness conditions, defining the constraints on what possible logical decompositions a word might contain. The general approach taken here is based on the supposition that predicates from the perceptual domain are the primitives for more abstract relations. We then describe an implementation of this model, TULLY, which mirrors the stages of lexical acquisition for children. Examples are given showing how hierarchical structure for concepts is acquired, as well as the development of polysemy relations for verbs.
 
Article
Expert systems have been successfully applied to a wide variety of application domains. To achieve better performance, researchers have tried to apply fuzzy logic in the development of expert systems. However, as fuzzy rules and membership functions are difficult to define, most of the existing tools and environments for expert systems do not support fuzzy representation and reasoning. Thus, it is time-consuming to develop fuzzy expert systems. In this article we propose a new approach to elicit expertise and to generate knowledge bases for fuzzy expert systems. A knowledge acquisition system based upon the approach is also presented, which can help knowledge engineers to create, adjust, debug, and execute fuzzy expert systems. Some control techniques are employed in the knowledge acquisition system so that the concepts of fuzzy logic can be directly applied to conventional expert system shells; moreover, a graphic user interface is provided to facilitate the adjustment of membership functions and the display of outputs. The knowledge acquisition system has been integrated with a popular expert system shell, CLIPS, to offer a complete development environment for knowledge engineers. With the help of this environment, the development of fuzzy expert systems becomes much more convenient and efficient. © 1995 John Wiley & Sons, Inc.
 
Article
When the purpose of a knowledge acquisition (KA) system is to acquire the knowledge needed to build an analytic model of a complex system, the structure of the model can be used to guide and streamline the KA process. Constraints on a system's structure can be used to generate an "intelligent questioning" sequence of requests for descriptive facts to minimize the burden on the expert or model-builder supplying the program with information. Moreover, general knowledge about the system domain can be supplied as "meta-knowledge" by an expert and used by the KA program to guide the search for specific knowledge ("facts") about a particular system from a less expert user. This article describes a KA methodology and program developed to streamline the acquisition of descriptive information about complex reliability systems (e.g., telecommunication networks, computer systems, industrial processes, etc.). The methodology treats knowledge acquisition and knowledge representation as two inseparable parts of an integrated process of model building. The goal of the KA dialogue is formulated as minimizing the effort needed for the user and the machine to achieve a shared model of the system to be analyzed. Models are built by specializing and instantiating templates constructed from background "meta-knowledge." This perspective has several implications for dialogue-based KA shells that support modeling of complex systems in limited domains. © 1993 John Wiley & Sons, Inc.
 
Article
A major bottleneck in developing knowledge-based systems is the acquisition of knowledge. Machine learning is an area concerned with the automation of this process of knowledge acquisition. Neural networks generally represent their knowledge at the lower level, while knowledge-based systems use higher-level knowledge representations. The method we propose here provides a technique that automatically allows us to extract conjunctive rules from the lower-level representation used by neural networks. The strength of neural networks in dealing with noise has enabled us to produce correct rules in a noisy domain. Thus we propose a method that uses neural networks as the basis for the automation of knowledge acquisition and can be applied to noisy, real-world domains. © 1993 John Wiley & Sons, Inc.
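One common way to extract conjunctive rules from a trained unit is a subset search over its weights (a generic sketch under assumed Boolean inputs and a threshold unit; the article's actual extraction method may differ):

    # Sketch: extract conjunctive rules from one threshold unit of a trained
    # network. Assumption: Boolean inputs, the unit fires when sum(w_i*x_i) > theta.
    from itertools import combinations

    def conjunctive_rules(weights, theta, names):
        rules = []
        pos = [i for i, w in enumerate(weights) if w > 0]
        for k in range(1, len(pos) + 1):
            for subset in combinations(pos, k):
                # If these inputs alone guarantee firing (even when every other
                # positive input is 0 and every negative input is 1), emit a rule.
                worst = sum(weights[i] for i in subset) + sum(w for w in weights if w < 0)
                if worst > theta:
                    rules.append("IF " + " AND ".join(names[i] for i in subset)
                                 + " THEN concept")
        return rules

    print(conjunctive_rules([2.0, 1.5, -0.5], theta=1.0,
                            names=["fever", "cough", "vaccinated"]))

In practice the rule set would also be pruned of subsumed rules (e.g., a rule whose antecedent contains another rule's antecedent).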
 
Article
Knowledge acquisition has been identified as the bottleneck for knowledge engineering. One of the reasons is the lack of an integrated methodology that is able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods that address this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. The tool and the framework, together, can thus enhance the verification, validation, and maintenance of expert systems through their life cycles. © 1994 John Wiley & Sons, Inc.
 
Article
Globally coherent behavior is essential for a distributed problem solving network. It is a characteristic of the whole problem solving process. We discuss in this article different forms of cooperation at different phases of the problem solving process, which have to be considered to increase global coherence. The connection problem and the timing problem are key issues for distributed problem solving. The "perceive-plan-act" loop is introduced for each problem solving node. It means that a node has to perceive the network state and plan its near-future activities before it takes an action. Different approaches to network perception are discussed. The experimental results show significant improvement of system performance.
 
Article
Strong deficiencies are present in symbolic models for action representation and planning, regarding mainly the difficulty of coping with real, complex environments. These deficiencies can be attributed to several problems, such as the inadequacy in coping with incompletely structured situations, the difficulty of interacting with visual and motorial aspects, the difficulty in representing low-level knowledge, the need to specify the problem at a high level of detail, and so on. Besides the purely symbolic approaches, several nonsymbolic models have been developed, such as the recent class of subsymbolic techniques. A promising paradigm for the modeling of reasoning, which combines features of both symbolic and analogical approaches, is based on the construction of analogical models of the reference for the internal representations, as introduced by Johnson-Laird. In this work, we propose a similar approach to the problem of knowledge representation and reasoning about actions and plans. We propose a hybrid approach, symbolic and analogical, in which the inferences are partially devolved to measurements on analogical models generated starting from the symbolic representation. The interaction between the symbolic and the analogical level is due to the fact that procedures are connected to some symbols, allowing generating, updating, and verifying the mental model. The hybrid model utilizes, for the symbolic component, a representation system based on the distinction between terminological and assertional knowledge. The terminological component adopts a SI-Net formalism, extended by temporal primitives. The assertional component is a subset of first-order logics. The analogical representation is a set of concurrent procedures modeling parts of the world, action processes, simulations, and metaphors based on force fields concepts. A particular case study, regarding the problem of the assembly of a complex object from parts, is taken as an experimental paradigm.
 
Article
Action rules assume that attributes in a database are divided into two groups: stable and flexible. In general, an action rule can be constructed from two rules extracted earlier from the same database. Furthermore, we assume that these two rules describe two different decision classes and our goal is to reclassify objects from one of these classes into the other one. Flexible attributes are essential in achieving that goal because they provide a tool for making hints to a user about what changes within some values of flexible attributes are needed for a given group of objects to reclassify them into a new decision class. A new subclass of attributes called semi-stable attributes is introduced. Semi-stable attributes are typically a function of time and undergo deterministic changes (e.g., attribute age or height). So, the set of conditional attributes is partitioned into stable, semi-stable, and flexible. Depending on the semantics of attributes, some semi-stable attributes can be treated as flexible, and new action rules can then be constructed in the same way. These new action rules are usually built to replace some existing action rules whose confidence is too low to be of any interest to a user. The confidence of new action rules is always higher than the confidence of the rules they replace. Additionally, the notion of the cost and feasibility of an action rule is introduced in this article. A heuristic strategy for constructing feasible action rules that have high confidence and possibly the lowest cost is proposed. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 719–736, 2005.
 
Article
IDSCA, an intelligent system, is developed for the direction selection of a controller's action in a multiloop control system. In the design of a controller, the selections of both the valve type and the controller's action direction are important tasks, which directly affect the operation and safety of production. Traditional design can hardly solve the problem. Programmed in OPS5, IDSCA can perform heuristic inference and make intelligent decisions. A significant result from IDSCA is the fact that a new design criterion is developed, which may complement the knowledge of controller design technique. The other important investigation is that the Adaptive Feedback Testing System (AFTS) is developed to provide high reliability of the design results. These two investigations indicate that the development of intelligent systems can stimulate and help the development of both AI and related prototype problems. Moreover, IDSCA has some additional important features: its knowledge base can be modified and new production rules can be created in the running process to solve special problems; and the hierarchy of meta-level control strategy provides the means to manage the knowledge base of IDSCA efficiently. In this article, the principle of building intelligent systems is discussed. As an example, the cascade control system of a polymerizer is applied to illustrate the use of IDSCA.
 
Article
In this paper we introduce the use of contextual transformation functions to adjust membership functions in fuzzy systems. We address both linear and nonlinear functions to perform linear or nonlinear context adaptation, respectively. The key issue is to encode knowledge in a standard frame of reference, and have its meaning tuned to the situation by means of an adequate transformation reflecting the influence of context in the interpretation of a concept. Linear context adaptation is simple and fast. Nonlinear context adaptation is more computationally expensive, but due to its nonlinear characteristic, different parts of the base membership functions can be stretched or expanded to best fit the desired format. Here we use a genetic algorithm to find a nonlinear transformation function, given the base membership functions and a set of data extracted from the environment and classified by means of fuzzy concepts.
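A minimal sketch of linear context adaptation (my own illustration; the function names and the triangular base membership are assumptions): a membership function defined on a standard universe [0, 1] is reinterpreted in a context interval [c, d] by mapping each context value back to the standard frame.

    # Sketch: linear context adaptation of a base membership function defined
    # on the standard universe [0, 1]. The context [c, d] rescales its argument.

    def tri(x, a, b, c):
        """Triangular membership function on the standard frame."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def contextualize(mu, c, d):
        """Return mu reinterpreted on the context interval [c, d]."""
        return lambda x: mu((x - c) / (d - c))

    # "warm" on the standard frame, then read in a context of 10..40 degrees C.
    warm = lambda x: tri(x, 0.3, 0.5, 0.7)
    warm_celsius = contextualize(warm, 10.0, 40.0)
    print(warm_celsius(25.0))   # 25 C maps to 0.5 on the standard frame -> 1.0

Nonlinear context adaptation would replace the linear rescaling with a monotone nonlinear map, which is what the genetic algorithm in the paper searches for.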
 
Article
This article describes ongoing research on content sensitive recombination operators for genetic algorithms. A motivation behind this line of inquiry stems from the observation that biological chromosomes appear to contain special nucleotide sequences whose job is to influence the recombination of the expressible genes. We think of these as punctuation marks telling the recombination operators how to do their job. Furthermore, we assume that the distribution of these marks (part of the representation) in a gene pool is determined by the same survival-of-the-fittest and genetic recombination mechanisms that account for the distribution of the expressible genes (the knowledge). A goal of this project is to devise such mechanisms for genetic algorithms and thereby to link the adaptation of a representation to the adaptation of its contents. We hope to do so in a way that capitalizes on the intrinsically parallel behavior of the traditional genetic algorithm. We anticipate benefits of this for machine learning. We describe one mechanism we have devised and present some empirical evidence that suggests it may be as good as or better than a traditional genetic algorithm across a range of search problems. We attempt to show that its action does successfully adapt the search mechanics to the problem space and provide the beginnings of a theory to explain its good performance.
 
Article
The important properties and applications of the adaptive weighted fuzzy mean (AWFM) filter are presented in this paper. AWFM is an extension of the weighted fuzzy mean (WFM) filter to overcome the drawback of WFM in fine signal preservation. It not only preserves the high performance of WFM on heavy additive impulse noise, but also improves the efficiency of WFM on removing light additive impulse noise. Some deterministic and statistical properties of the AWFM filter are analyzed, and the main characteristic of the AWFM filter that maps the input signal space into a root signal space, where a root signal is an invariant signal to the filter, is also discussed. Compared with the other filters, AWFM exhibits better performance in the criteria of mean absolute error and mean square error. On the subjective evaluation of those filtered images, AWFM also results in a higher quality of global restoration. ©1999 John Wiley & Sons, Inc.
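A rough sketch of the weighted-fuzzy-mean idea that AWFM builds on (the specific membership functions and the adaptive weighting of AWFM are not reproduced here; this is only a generic weighted fuzzy mean over a filter window, with an assumed triangular membership around the window median):

    # Sketch: generic weighted fuzzy mean over a 1-D filter window. The weight
    # of each sample is its membership in an assumed fuzzy set centered at the
    # window median (triangular, half-width `spread`); AWFM adapts such weights.

    def weighted_fuzzy_mean(window, spread):
        center = sorted(window)[len(window) // 2]        # window median
        weights = [max(0.0, 1.0 - abs(x - center) / spread) for x in window]
        total = sum(weights)
        return sum(w * x for w, x in zip(weights, window)) / total if total else center

    # An impulse (200) in an otherwise smooth window barely affects the output.
    print(weighted_fuzzy_mean([10, 12, 200, 11, 13], spread=20.0))  # about 11.5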
 
Article
Fundamental developments in feedforward artificial neural networks from the past 30 years are reviewed. The central theme of this article is a description of the history, origination, operating characteristics, and basic theory of several supervised neural network training algorithms, including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation technique. These methods were developed independently, but with the perspective of history they can all be related to each other. The concept which underlies these algorithms is the "minimal disturbance principle," which suggests that during training it is advisable to inject new information into a network in a manner which disturbs stored information to the smallest extent possible. In the utilization of present-day rule-based expert systems, decision rules must always be known for the application of interest. Sometimes there are no rules, however: the rules are either not explicit or they simply do not exist. For such applications, trainable expert systems might be usable. Rather than working with decision rules, an adaptive expert system might observe the decisions made by a human expert. Looking over the expert's shoulder, an adaptive system can learn to make decisions similar to those of the human. Trainable expert systems have been used in the laboratory for real-time control of a "broom-balancing system." © 1993 John Wiley & Sons, Inc.
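As a concrete instance of the minimal disturbance principle, the LMS (Widrow-Hoff) rule adjusts the weight vector just enough, along the direction of the current input, to reduce the present error (standard textbook form, included here for reference):

$$\epsilon_k = d_k - \mathbf{w}_k^{\mathsf T}\mathbf{x}_k, \qquad \mathbf{w}_{k+1} = \mathbf{w}_k + 2\mu\,\epsilon_k\,\mathbf{x}_k,$$

where $d_k$ is the desired response, $\mathbf{x}_k$ the input pattern, and $\mu$ the learning-rate parameter.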
 
Article
According to many authors, neural networks and adaptive expert systems may provide the foundations of sixth-generation computers. Neural networks use lower, hardware-like concepts and are based on continuous and numeric computation. On the other hand, adaptive expert systems use inference rules and perform high-level symbolic computations. The approaches may seem to be totally different, but they do exhibit similar properties: learning, flexibility, parallel search, generalization, and association. This article takes up the problem of the design of a common model for neural networks and adaptive expert systems. For this purpose the Calculus of Self-Modifiable Algorithms, a general tool for problem solving, is used. This joint approach to expert systems and neural networks emphasizes their analogies, rather than their differences. © 1993 John Wiley & Sons, Inc.
 
Article
This paper explores a multimodular architecture of an intelligent information system and proposes a method for adaptation. The method is based on evaluating which of the modules need to be adapted based on the performance of the whole system on new data. These modules are then trained selectively on the new data until they improve their performance and the performance of the whole system. The modules are fuzzy neural networks, especially designed to facilitate adaptive training and knowledge discovery, and spatial temporal maps. A particular case study of spoken language recognition is presented along with some preliminary experimental results of an adaptive speech recognition system. © 1998 John Wiley & Sons, Inc.
 
Article
There is a compromise between noise removal and texture preservation in image enhancement. It is difficult to perform image enhancement, using only one simple filter, for a real world image which may consist of many different regions. This article studies the intelligent aspect of filtering algorithms and describes a multi-threshold adaptive filter (MTA filter) for solving this problem. The MTA filter uses a generalized gradient function and a local variance function, which provide the local contextual information as evidence to determine the nature of the filtering for each local neighborhood. A knowledge-based presegmentation procedure is presented. It applies a threshold operation to extract the local evidence. A belief function is used to combine different evidence and to determine the local filtering strategies. In this way, several simple filters can be combined to form a more efficient and more flexible context dependent filter. As a result, specific filtering is only applied to the region for which it is suitable. Thus, a balanced texture preserving and noise removal effect can be simultaneously achieved.
 
[Figure and table captions: graphic representation using the classical operator and the majority operator; values and results for the third row of Tables III and IV.]
Article
A problem that we had encountered in the aggregation process is how to aggregate the elements that have cardinality greater than one. The purpose of this article is to present a new aggregation operator of linguistic labels that uses the cardinality of these elements, the linguistic aggregation of majority additive (LAMA) operator. We also present an extension of the LAMA operator under the two-tuple fuzzy linguistic representation model. © 2003 Wiley Periodicals, Inc.
 
Article
Evidence Aggregation Networks based on multiplicative fuzzy hybrid operators were introduced by Krishnapuram and Lee. They have been used for image segmentation, pattern recognition, and general multicriteria decision making. One of the drawbacks to these networks is that the training is complex and quite time consuming. In this article, we modify these aggregation networks to implement additive fuzzy hybrid connectives. We study the theoretical properties of two classes of such aggregation operators, one where the union and intersection components are based on multiplication, and the other where these components are derived from Yager connectives. These new networks have the same excellent properties, such as backpropagation training and node interpretability for decision making under uncertainty, as their multiplicative precursors. They also have the advantage that training is easier, since the derivatives of the additive hybrid operators are not as complex in form. The appropriate training algorithms are derived, and several examples are given to illustrate the properties of the networks. © 1994 John Wiley & Sons, Inc.
 
Article
A problem that we had encountered in the aggregation process is how to aggregate the elements that have cardinality greater than one. The most common operators used in the aggregation process produce reasonable results, but, at the same time, when the items to aggregate have cardinality greater than one, they may produce distributed problems. The purpose of this article is to present a new neat ordered weighting averaging (OWA) operator that uses the cardinality of these elements to calculate their weights. © 2003 Wiley Periodicals, Inc.
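For context, the neat, cardinality-based operator proposed here is a variant of the standard OWA operator (standard definition; the article's particular weight construction from cardinalities is not reproduced here):

$$\mathrm{OWA}_{\mathbf{w}}(a_1,\dots,a_n) = \sum_{j=1}^{n} w_j\, b_j, \qquad w_j \ge 0,\quad \sum_{j=1}^{n} w_j = 1,$$

where $b_j$ is the $j$-th largest of the $a_i$; in a neat OWA operator the weights are computed from the aggregated values themselves rather than fixed in advance.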
 
Article
We describe a new paradigm for the inclusion of advertisements on the WWW. This paradigm takes advantage of the internet's great ability for instantaneous online processing of information in real time. A methodology is described for the use of intelligent agents to help in the determination of the appropriateness of displaying a given advertisement to a visitor to a site using very specific information about potential customers. Use is made of fuzzy systems modeling for the construction of these agents. © 1997 John Wiley & Sons, Inc.
 
Article
There is a great need to better understand the sources, dynamics, and compositions of atmospheric aerosols. The traditional approach for particle measurement, collecting bulk samples of particulates on filters, is not adequate for studying particle dynamics and real-time correlations. This has led to the development of a new generation of real-time instruments that provide continuous or semicontinuous streams of data about certain aerosol properties. However, these instruments have added a significant level of complexity to atmospheric aerosol data and dramatically increased the amounts of data to be collected, managed, and analyzed. Our ability to integrate the data from all of these new and complex instruments now lags far behind our data-collection capabilities, and severely limits our ability to understand the data and act upon it in a timely manner. In this article, we present an overview of EDAM (Exploratory Data Analysis and Management), a joint project between researchers in Atmospheric Chemistry and Computer Science. Important objectives include environmental monitoring and data quality assurance, and real-time data mining offers great potential. While atmospheric aerosol analysis is an important and challenging domain, our objective is to develop techniques that have broader applicability. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 759–787, 2005.
 
Article
One of the most crucial problems in any computer system that involves representing the world is the representation of time. This includes applications such as databases, simulation, expert systems and applications of Artificial Intelligence in general. In this brief paper, I will give a survey of the basic techniques available for representing time, and then talk about temporal reasoning in a general setting as needed in AI applications. Quite different representations of time are usable depending on the assumptions that can be made about the temporal information to be represented. The most crucial issue is the degree of certainty one can assume. Can one assume that a time stamp can be assigned to each event, or barring that, that the events are fully ordered? Or can we only assume that a partial ordering of events is known? Can events be simultaneous? Can they overlap in time and yet not be simultaneous? If they are not instantaneous, do we know the durations of events? Different answers to each of these questions allow very different representations of time.
 
Article
In the field of distributed artificial intelligence, the cooperation among intelligent agents is a matter of growing importance. We propose a new machine, called an agency, which is devoted to solving complex problems by means of cooperation among agents, where each agent is able to perform inferential activities. The aim of this paper is to give rigorous and formal descriptions of the agency and, using these descriptions, to define and prove some interesting properties. The descriptions are based on three formalisms: multilanguage systems, directed hypergraphs, and ER Petri nets. The work is a step in the direction of building a methodology for the design and development of systems operating in real-world applications. We give a theoretical background on which new techniques can be implemented for testing the requirements of distributed artificial intelligence systems such as agencies. The fundamental formalism in describing agencies is the multilanguage system; starting from it we capture some particular issues (i.e., the structure and evolution of an agency) by means of hypergraphs and ER Petri nets. The formalisms support the definition and proof of properties (such as fairness of cooperation among agents).
 
Article
The use of distributed artificial intelligence (DAI) techniques, particularly multiagent systems theory, in a decentralized architecture is proposed to cooperatively manage all sensor tasks in a network of (air) surveillance radars with capabilities for autonomous operation. At the multisensor data fusion (DF) center, the fusion agent will periodically deliver to sensor agents a list with the system-level tasks that need to be fulfilled. For each system task, indications about its system-level priority are included (inferred global necessity of fulfilling the task) as well as the performance objectives that are required, expressed in different terms depending on the type of task (sector surveillance, target tracking, target identification, etc.). Periodically, the local manager at each sensor (the sensor agent) will decide on the list of sensor-level tasks to be executed by its sensor, providing also the sensor-level priority and performance objectives for each task. The problem of sensor(s)-to-task(s) assignment (including decomposition of system-level tasks into sensor-level tasks and translation of system-level performance requirements to sensor-level performance objectives) is the result of a negotiation process performed among sensor agents, initiated with the information sent to them by the fusion agent. In both types of agents, a symbolic bottom-up fuzzy reasoning process is performed that considers the available fused or local target tracks, surveillance sectors data, and (external) intelligence information. As a result of these reasoning processes, performed at each agent planning level, the priorities of system-level and sensor-level tasks will be inferred and applied during the negotiation process. © 2003 Wiley Periodicals, Inc.
 
Top-cited authors
Zeshui Xu
  • Sichuan University
Ronald R. Yager
Vicenç Torra
  • Umeå University
Francisco Herrera
  • University of Granada
Henri Prade
  • Paul Sabatier University - Toulouse III