Knowledge-Based Systems

Published by Elsevier BV

Print ISSN: 0950-7051

Articles


Emotion recognition and its application to computer agents with spontaneous interactive capabilities
Conference Paper · February 1999 · 118 Reads · J. Nicholson

Nonverbal information, such as emotion, plays an essential role in human communication. Unfortunately, communication between humans and computer agents has so far focused only on its verbal aspect. In the future, however, computer agents are expected to have nonverbal as well as verbal communication capabilities. In this paper, we first study the recognition of the emotions involved in human speech. We then apply this emotion recognition algorithm to a computer agent that plays a character role in the interactive movie system that we are developing.

An Ontology-Based Knowledge System for Supporting Position and Classification of Co-Branding Strategy

May 2008 · 67 Reads

As many companies seek growth through the development of new products, co-branding strategy provides one way to develop them. Utilizing two or more brand names in the process of introducing new products offers competitive advantages; however, combining two brands may cause brand meaning to transfer in ways that were never intended. The present paper advances research on co-branding strategies by proposing a knowledge management system for co-branding built on an ontology with three concepts: co-branding aim, category, and effect. The ontology-based knowledge system not only provides a roadmap of co-branding strategies but also illuminates issues related to co-branding for related research.

Figure 1. System architecture 
Mining Web Logs to Improve Hit Ratios of Prefetching and Caching
Conference Paper · Full-text available · September 2005 · 62 Reads

On the Internet, proxy servers play a key role between users and Web sites: they can reduce the response time of user requests and save network bandwidth. To do so, an efficient buffer manager should be built into a proxy server to cache frequently accessed documents, thereby achieving better response times. In this paper, we develop an access sequence miner that mines popular surfing 2-sequences, together with their conditional probabilities, from the proxy log and stores them in a rule table. Then, according to the buffer contents and the rule table, a prediction-based buffer manager, also developed here, takes appropriate actions such as document caching, document prefetching, and even cache/prefetch buffer size adjustment to achieve better buffer utilization. Through simulation, we found that our approach outperforms the others in quantitative measures such as hit ratios and byte hit ratios of accessed documents.
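The core mining step described above can be sketched briefly. The snippet below is a minimal illustration of mining 2-sequences (consecutive page pairs) and their conditional probabilities from session data; the session format, the `min_support` threshold, and the function name are assumptions for illustration, not the paper's actual implementation.

```python
from collections import Counter

def mine_2_sequences(sessions, min_support=2):
    """Count consecutive page pairs (2-sequences) across user sessions
    and estimate P(next | current) for the sufficiently popular ones."""
    pair_counts = Counter()
    page_counts = Counter()
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            pair_counts[(cur, nxt)] += 1
            page_counts[cur] += 1
    # keep only pairs seen at least min_support times
    return {pair: c / page_counts[pair[0]]
            for pair, c in pair_counts.items() if c >= min_support}

sessions = [
    ["index", "news", "sports"],
    ["index", "news", "weather"],
    ["index", "news", "sports"],
]
rules = mine_2_sequences(sessions)
# ("index", "news") follows every visit to "index" -> probability 1.0
```

A prefetcher would consult such a rule table after each request and fetch the successors whose conditional probability exceeds a threshold.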

Diagnostic reasoning based on means-end models: experiences and future prospects

February 1999 · 42 Reads

Multilevel flow models (MFM) are graphical models of the goals and functions of technical systems. MFM provides a good basis for computer-based supervision and diagnosis, especially in real-time applications, where fast execution and guaranteed worst-case response times are essential. The expressive power of MFM is similar to that of rule-based expert systems, while the explicit representation of means-end knowledge and the graphical nature of the models make the knowledge engineering effort smaller and the execution efficiency higher than in standard expert systems. The paper gives an overview of existing MFM algorithms and of the MFM projects that have been completed or are currently in progress.

Figure 1. PFNET Scaling with Papers under Same Factors Close to Each Other
Table 1. Factors and Loading
Revealing the research themes and trends in Knowledge Management studies
Knowledge management (KM) has come to encompass a wide range of studies. It is also a new discipline with great growth potential, since knowledge acquisition and assimilation have become among the most important ingredients in modern business practice. KM-related research within science and engineering could help provide the theoretical and infrastructural support needed by practitioners and researchers in this new field. The study of the intellectual structure of a discipline was pioneered by researchers in information science in the early eighties. The intellectual structure of KM had been studied earlier by researchers in the information systems (IS) field, but their findings are idiosyncratically inclined toward IS-related research. Our study draws on the CiteSeer citation index, which is primarily a computer engineering and information science citation database. The intellectual structure of KM derived from this predominantly science and engineering oriented index is quite different from what has been provided by IS researchers. Our results reveal subareas that appear to form the conceptual groundwork of KM. Current research themes of KM were also explored, further identifying research trends; it was found that the study of semantic-based peer-to-peer computing is one of the most important of these trends.

Development of a knowledge-based design support system

March 1992 · 55 Reads · Tim Smithers · Nils Tomes · [...] · Edward Hodgkin

A notion of design has been developed that is fundamentally different from others in the field. The focus is creative design across a number of domains, and design is modelled as an exploratory activity rather than as a form of search. The exploration of a design problem's characteristics is an activity that creates and bounds the space within which possible design solutions can be located. Seeing design as an exploration and mapping of parameter space highlights the inherent complexity of the creative design process, and it has implications for the specification of knowledge-based design systems. The resulting design philosophy places the human designer at the heart of the exploration process, with the computer system, using integrated AI techniques, acting to support the designer throughout the design process. The Castlemaine project has adopted this philosophy of design support and evaluates the model of design exploration within the domain of pharmaceutical small-molecule design through the specification of a knowledge-based design-support system. The paper describes the background to the Castlemaine project, the research programme, and the status of the project in 1991.

Data mining and knowledge discovery in proton nuclear magnetic resonance (1H-NMR) spectra using frequency to information transformation (FIT)

May 2002 · 26 Reads

Recent rapid development of research in the fields of structural genomics and bioinformatics has stressed the need for effective methods of data mining and knowledge extraction from complex and convoluted signals. In this paper we introduce frequency to information transformation (FIT) as a novel method of extracting the information content of complex signals. Because FIT uses a priori knowledge and is a comparative technique, it is well suited for data mining and knowledge discovery from complex data. We introduce FIT and compare it to established methods used in automated conditioning and knowledge discovery in proton nuclear magnetic resonance (1H-NMR) spectra. The FIT transformation was applied to a collection of 80 one-dimensional (1D) 1H-NMR spectra of 23 N-linked oligosaccharides. Three classification methods, namely cluster analysis, Bayesian analysis and artificial neural networks (ANN), were used to demonstrate the advantages of FIT in information and knowledge extraction in comparison with classical methods such as frequency-based filtering, nonlinear and piecewise linear curve fitting, and correlation coefficient analysis.

Towards a fuzzy-logic programming system: a 1st-order fuzzy logic

June 1992 · 12 Reads

Traditional logic and logic programming languages cannot handle uncertainty. Fuzzy logic can, but nobody has yet devised a readily computable form. One possible way to achieve this is to define a propositional fuzzy logic, extend this to a 1st-order form, convert it to Horn-clause form, and, finally, to devise a theorem prover to manipulate the Horn clauses. The authors of the paper have already achieved the first step. The paper formally develops the second step, namely a type of 1st-order fuzzy logic that incorporates a complete set of quantifiers, qualifiers and modifiers. The fuzzy entities that represent the language are described, and a 1st-order theory is introduced that consists of an alphabet, a syntax and a set of semantics for the language.
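For readers unfamiliar with how a propositional fuzzy logic becomes computable, the sketch below shows the standard Zadeh min/max connectives, with a Kleene-Dienes implication. The abstract does not specify which connectives the authors adopted, so these operators are illustrative assumptions only.

```python
# Zadeh-style fuzzy connectives over truth values in [0, 1]
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_implies(a, b):
    # Kleene-Dienes implication: not(a) or b
    return max(1.0 - a, b)

# example truth degrees for two fuzzy propositions
hot, humid = 0.8, 0.4
```

Unlike classical logic, conjunction here returns the weaker of the two truth degrees (`f_and(0.8, 0.4)` is `0.4`), which is what lets a fuzzy theorem prover propagate partial truth through Horn clauses.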

Xiao, Z.: Data analysis approaches of soft sets under incomplete information. Knowledge-Based Syst. 21(8), 941-945

December 2008 · 169 Reads

In view of the particularity of the value domains of mapping functions in soft sets, this paper presents data analysis approaches for soft sets under incomplete information. For standard soft sets, the decision value of an object with incomplete information is calculated as a weighted average of all possible choice values of the object, and the weight of each possible choice value is decided by the distribution of the other objects. For fuzzy soft sets, incomplete data are predicted based on the method of average probability. Comparison results show that, relative to other approaches for dealing with incomplete data, the approaches presented in this paper better reflect the actual states of incomplete data in soft sets. Finally, an example is provided to illustrate the practicability and validity of the data analysis approach for soft sets under incomplete information.
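A minimal sketch of the weighted-average idea for standard soft sets: an unknown entry is filled with the proportion of the other objects for which that parameter holds, and the decision value is the row sum. The table encoding and function name are assumptions for illustration; the paper's exact weighting may differ.

```python
def decision_values(table):
    """table[i][j] is 1, 0, or None (unknown).
    An unknown entry is replaced by the probability that the parameter
    holds, estimated from the other objects (the 'distribution of
    other objects' weighting described in the abstract)."""
    n_rows = len(table)
    values = []
    for i, row in enumerate(table):
        total = 0.0
        for j, v in enumerate(row):
            if v is None:
                known = [table[k][j] for k in range(n_rows)
                         if k != i and table[k][j] is not None]
                v = sum(known) / len(known) if known else 0.5
            total += v
        values.append(total)
    return values

table = [
    [1, 0, 1],
    [1, None, 1],   # unknown value in the middle parameter
    [0, 1, 0],
]
```

For the middle object, the unknown entry becomes 0.5 (one of the two other objects has the parameter), giving a decision value of 2.5.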

Liao, S.S.: Discovering Original Motifs with Different Lengths from Time Series. Knowledge-Based Systems 21, 666-671

October 2008 · 49 Reads

Finding previously unknown patterns in a time series has received much attention in recent years. Of the associated algorithms, the k-motif algorithm is one of the most effective and efficient. It is also widely used as a time series preprocessing routine for many other data mining tasks. However, the k-motif algorithm depends on predefining the parameter w, the length of the pattern. This paper introduces a novel k-motif-based algorithm that removes this limitation and, moreover, provides a way to generate the original patterns by summarizing the discovered motifs.
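To make the role of the parameter w concrete, here is a toy 1-motif finder over a discretized series with a fixed window length w. Real k-motif algorithms match subsequences under a distance threshold rather than exactly, so this is only an illustrative simplification.

```python
from collections import Counter

def k_motif_1(series, w):
    """Return the most frequent length-w subsequence of a discretized
    series and its occurrence count (a simplified 1-motif)."""
    windows = [tuple(series[i:i + w]) for i in range(len(series) - w + 1)]
    motif, count = Counter(windows).most_common(1)[0]
    return motif, count

series = [1, 2, 3, 1, 2, 3, 1, 2]
# with w=2, the pattern (1, 2) occurs three times
```

Note how the answer depends entirely on the chosen w; with w=3 the dominant pattern would instead be (1, 2, 3). Removing this dependence is exactly the problem the paper addresses.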

Large scale knowledge based systems for airborne decision support. Knowl-Based Syst 12(5-6):215-222

October 1999 · 15 Reads

At ES 96, the keynote address forcefully made the point that software houses have contributed little to the advancement of KBS. In the defence area, especially that of aerospace systems, extensive use has been made of the expertise of software and system houses in developing validation methodologies (VORTEX), real-time (MUSE) and multi-agent (D-MUSE) software and, together with universities, a knowledge acquisition toolkit (PC PACK). In the UK, within the Airborne Decision Support Group, Air Sector, at DERA Farnborough, this software and these tools have been developed and applied to problems in building decision support systems for maritime air applications. The demanding aircrew tasks are characterised by the need to assimilate and interpret multi-sensor data to devise tactical responses in real time, based on prevailing tactical doctrine and aircrew experience. The applications include decision support for Anti-Submarine Warfare (ASW), Anti-Surface Warfare (ASuW) and Airborne Early Warning (AEW), together with the ASW/ASuW and proposed AEW technology demonstrators. Currently the transition is being made from laboratory concept demonstrators to large-scale technology demonstrator programmes as a risk reduction exercise prior to specification for airborne use. The proposed AEW TDP includes extensive modularity to support extensibility and component reuse.

Intelligent multi-shot 3D visualization interfaces

December 1999 · 14 Reads

In next-generation virtual 3D simulation, training, and entertainment environments, intelligent visualization interfaces must respond to user-specified viewing requests so users can follow salient points of the action and monitor the relative locations of objects. Users should be able to indicate which object(s) to view, how each should be viewed, what cinematic style and pace to employ, and how to respond when a single satisfactory view is not possible. When constraints fail, weak constraints can be relaxed or multi-shot solutions can be displayed in sequence or as composite shots with simultaneous viewports. To address these issues, we have developed ConstraintCam, a real-time camera visualization interface for dynamic 3D worlds.

A.D.: Detecting mismatches among experts’ ontologies acquired through knowledge elicitation. Knowl.-Based Syst

July 2002 · 44 Reads

We have constructed a set of ontologies modelled on conceptual structures elicited from several domain experts. Protocols were collected from various experts, who advise on the selection/specification and purchase of personal computers. These protocols were analysed from the perspective of both the processes and the domain knowledge to reflect each expert's inherent conceptualisation of the domain. We are particularly interested in analysing discrepancies within and among such experts' ontologies, and have identified a range of ontology mismatches. A systematic approach to the analysis has been developed; subsequently, we shall develop software tools to support this process.

Integrating multiple and diverse abstract knowledge types in real-time embedded systems

November 1996 · 8 Reads

Designers of large-scale real-time systems are increasingly turning to knowledge-based techniques to solve complex problems. This paper identifies three essential needs in supporting the implementation of these systems: first, a variety of knowledge-based components that can model the diverse expert domains being encountered; second, a means for the user to create multiple independent instances of the knowledge-based components; and third, an integrating environment in which the knowledge-based instances can be controlled. Drawing on the concept of abstract data types, the paper recommends constructing a library of diverse knowledge-based components, called abstract knowledge types, and integrating and controlling multiple instances of them using a blackboard architecture. A prototype component library and a blackboard have been implemented in Ada to take advantage of a real-time language that supports software engineering principles through a well-defined and enforced standard. The use of abstract knowledge types gives a uniform, software-engineered approach to the development and integration of both conventional and knowledge-based components.
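The blackboard idea can be illustrated compactly. The Python sketch below (the project itself used Ada) shows independent knowledge-source instances communicating only through a shared blackboard under a simple control loop; all class and method names are hypothetical.

```python
class Blackboard:
    """Shared store through which independent knowledge-source
    instances communicate; they never call each other directly."""
    def __init__(self):
        self.data = {}
        self.sources = []

    def register(self, source):
        self.sources.append(source)

    def run(self):
        # a simple control loop: fire every source that can contribute
        for source in self.sources:
            if source.can_contribute(self.data):
                source.contribute(self.data)

class FahrenheitConverter:
    """One toy knowledge source: derives temp_f once temp_c is posted."""
    def can_contribute(self, data):
        return "temp_c" in data and "temp_f" not in data

    def contribute(self, data):
        data["temp_f"] = data["temp_c"] * 9 / 5 + 32

bb = Blackboard()
bb.register(FahrenheitConverter())
bb.data["temp_c"] = 100
bb.run()
```

Because each source only inspects and posts to the shared store, new knowledge-based components can be added without modifying existing ones, which is the integration property the paper's abstract knowledge types rely on.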

Multi-documents Automatic Abstracting based on text clustering and semantic analysis

August 2009 · 56 Reads

A method for multi-document automatic abstracting based on text clustering and semantic analysis is proposed, aimed at overcoming the shortcomings of some current multi-document methods. The method makes use of semantic analysis to realize automatic abstracting of multiple documents. An algorithm of twice word segmentation, based on the title and the first sentences of paragraphs, is also presented; its precision and recall are above 95%. For a specific domain (plastics), an automatic abstracting system named TCAAS was implemented, with multi-document abstracting precision and recall above 75%. The experiments prove that it is feasible to use this method to develop a domain automatic abstracting system, which is valuable for further, more in-depth study.

Knowledge-based clustering approach for data abstraction

June 1994 · 24 Reads

Clustering techniques have been used for data abstraction, which has many applications in the context of databases. Conceptual models are used to bridge the gap between the user's view of a database and the physical view of the database. Semantic models evolved to overcome the limitations of classical data models such as network and relational models. The paper uses a knowledge-based clustering algorithm to extend the abstractions, such as classification and association, which are employed in the semantic modeling of databases. The complexity of the proposed clustering algorithm is analysed. The extended semantic model can be used to design databases in which useful and interesting queries can be answered. The efficacy of the proposed knowledge-based clustering approach is examined in the context of a library database.

Discovering user access patterns on the World Wide Web

May 1998 · 46 Reads

The World Wide Web provides its users with almost unlimited access to documents on the Internet. The use of intelligent agents is suggested to assist users in locating documents related to their interests, instead of browsing the Web via primitive search engines. A number of key components in such intelligent systems are identified and a system architecture is proposed. In particular, a learning agent is designed, along with the underlying algorithms, for the discovery of areas of interest from user access logs. The discovered topics can be used to improve the efficiency of information retrieval by prefetching documents for the users and storing them in a document database in the system. A prototype system has also been implemented to illustrate the various concepts. Experiments are performed which show that the areas of interest discovered can in fact be used to improve the efficiency of information retrieval on a distributed information system such as the Internet.

Information access in context

March 2001 · 78 Reads

Our central claim is that user interactions with productivity applications (e.g. word processors, Web browsers, etc.) provide rich contextual information that can be leveraged to support just-in-time access to task-relevant information. As evidence for our claim, we present Watson, a system which gathers contextual information in the form of the text of the document the user is manipulating, in order to proactively retrieve documents from distributed information repositories related to the task at hand, as well as to process explicit requests in the context of this task. We close by describing the results of several experiments with Watson, which show it consistently provides useful information to its users. The experiments also suggest that, contrary to the assumptions of many system designers, similar documents are not necessarily useful documents in the context of a particular task.

The Selection Recognition Agent: instant access to relevant information and operations

March 1998 · 47 Reads

We present the Selection Recognition Agent (SRA), an application for Windows-based personal computers. The SRA recognizes meaningful words and phrases in highlighted text, and enables useful operations on them. The SRA includes seven recognition modules, for geographic names, dates, e-mail addresses, phone numbers, Usenet newsgroup name components, Microsoft Outlook 97 contact records, and URLs, as well as a module that enables useful operations on text in general. We describe the architecture and design of the SRA. Our experiments demonstrate that the SRA significantly reduces the time and effort users must expend in performing common tasks.
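Recognition modules of this kind are commonly built from pattern matchers. The sketch below shows hypothetical regex-based recognizers in the spirit of the SRA's modules; the patterns and the module set are illustrative assumptions, not the SRA's actual implementation.

```python
import re

# hypothetical recognizers, keyed by category name
RECOGNIZERS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "url": re.compile(r"https?://\S+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def recognize(selection):
    """Return the categories whose pattern matches the highlighted text;
    each match would enable category-specific operations (send mail,
    open browser, dial, ...)."""
    return [name for name, rx in RECOGNIZERS.items() if rx.search(selection)]
```

A dispatcher could then map each recognized category to its operations, so that highlighting an e-mail address offers "compose message" while highlighting a URL offers "open in browser".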

Neuro-fuzzy modelling in support of knowledge management in social regulation of access to cigarettes by minors

January 2004 · 21 Reads

In this paper a neuro-fuzzy modelling approach is proposed to support knowledge management in social regulation. The neuro-fuzzy learning process is based on tacit knowledge, in order to highlight what specific steps local government should undertake to reach the desired outcome of increased compliance. An example is given to demonstrate the validity of the approach, and empirical results show the dependability of the proposed techniques.

Smart Task Support through Proactive Access to Organizational Memory

October 2000 · 49 Reads

We describe an approach to integrating the semantics of semi-structured documents with task support for (weakly structured) business processes and the proactive inferencing capabilities of a desk support agent. The mechanism of our proactive inferencing agent is motivated by the requirements posed by (weakly structured) business processes performed by a typical knowledge worker, and by experience gained from a first trial with a reactive agent support scheme. Our reactive scheme is an innovative approach to smart task support that links knowledge from an organizational memory to business tasks. The scheme is extended with proactive inferencing capabilities in order to improve user-friendliness and to facilitate modeling of actual agent support. In particular, the improved scheme copes with varying precision of the knowledge found in the organizational memory, and it reasons proactively about what might interest the user and what might be due in the user's next step.

An ontological account of action in processes and plans

October 2005 · 22 Reads

This paper formalises the constraints governing the relationship between actions and their preconditions and effects in processes and plans. By providing axiomatisations and a model theory, we establish a sound basis for both deductive and constraint satisfaction-based reasoning. The constraints we present are expressed in a common ontology of classes and relations that is the basis of process and plan representations.

Knowledge acquisition for expert systems in accounting and financial problem domains

November 2002 · 677 Reads

Since the mid-1980s, expert systems have been developed for a variety of problems in accounting and finance. The most commonly cited problems in developing these systems are the unavailability of the experts and knowledge engineers and difficulties with the rule extraction process. Within the field of artificial intelligence, this has been called the ‘knowledge acquisition’ (KA) problem and has been identified as a major bottleneck in the expert system development process. Recent empirical research reveals that certain KA techniques are significantly more efficient than others in helping to extract certain types of knowledge within specific problem domains. This paper presents a mapping between these empirical studies and a generic taxonomy of expert system problem domains. To accomplish this, we first examine the range of problem domains and suggest a mapping of accounting and finance tasks to a generic problem domain taxonomy. We then identify and describe the most prominent KA techniques employed in developing expert systems in accounting and finance. After examining and summarizing the existing empirical KA work, we conclude by showing how the empirical KA research in the various problem domains can be used to provide guidance to developers of expert systems in the fields of accounting and finance.

Accumulation of object representations utilising interaction of robot action and perception

January 2002 · 19 Reads

We introduce a robotic-vision system which is able to extract object representations autonomously, utilising a tight interaction of visual perception and robotic action within a perception-action cycle [Ecological Psychology 4 (1992) 121; Algebraic Frames for the Perception and Action Cycle, 1997, 1]. Controlled movement of the object grasped by the robot enables us to compute the transformations of entities which are used to represent aspects of objects and to find correspondences of entities within an image sequence. A general accumulation scheme allows us to acquire robust information from the partly missing information extracted from single frames of an image sequence. Here we use this scheme with a preprocessing stage in which 3D line segments are extracted from stereo images; however, the accumulation scheme can be used with any kind of preprocessing, as long as the entities used to represent objects can be brought into correspondence by certain equivalence relations such as 'rigid body motion'. We show that an accumulated representation can be applied within a tracking algorithm. The accumulation scheme is an important module of a vision-based robot system on which we are currently working. In this system, objects will be represented by different visual and tactile entities, and the object representations will be learned autonomously. We discuss the accumulation scheme in the context of this project.

The effect of principal component analysis on machine learning accuracy with high-dimensional spectral data

September 2006 · 181 Reads

This paper presents the results of an investigation into the use of machine learning methods for the identification of narcotics from Raman spectra. The classification of spectral data and other high-dimensional data, such as images and gene-expression data, poses an interesting challenge to machine learning, as the presence of high numbers of redundant or highly correlated attributes can seriously degrade classification accuracy. This paper investigates the use of principal component analysis (PCA) to reduce high-dimensional spectral data and to improve the predictive performance of some well-known machine learning methods. Experiments are carried out on a high-dimensional spectral dataset, employing the NIPALS (Non-linear Iterative Partial Least Squares) PCA method, which has been used in the field of chemometrics for spectral classification and is a more efficient alternative to the widely used eigenvector decomposition approach. The experiments show that the use of this PCA method can improve the performance of machine learning in the classification of high-dimensional data.
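For reference, the NIPALS iteration for the first principal component is short enough to sketch in full: alternate between estimating the loading vector from the current scores and recomputing the scores, until the scores stabilise. This pure-Python version is a minimal illustration of the method, not the experimental code used in the paper.

```python
def nipals_first_pc(X, tol=1e-10, max_iter=500):
    """First principal component of X (list of rows) via NIPALS.
    Returns (loadings p, scores t) for the mean-centred data."""
    n, m = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(m)]
    Xc = [[row[j] - means[j] for j in range(m)] for row in X]
    t = [row[0] for row in Xc]                  # initial score vector
    for _ in range(max_iter):
        tt = sum(v * v for v in t)
        # loading estimate p = Xc^T t / (t^T t), then normalise
        p = [sum(Xc[i][j] * t[i] for i in range(n)) / tt for j in range(m)]
        norm = sum(v * v for v in p) ** 0.5
        p = [v / norm for v in p]
        # new scores t = Xc p
        t_new = [sum(Xc[i][j] * p[j] for j in range(m)) for i in range(n)]
        if sum((a - b) ** 2 for a, b in zip(t_new, t)) < tol:
            return p, t_new
        t = t_new
    return p, t
```

Subsequent components are obtained by deflating X (subtracting the outer product of t and p) and repeating, which is why NIPALS is efficient when only a few components of a very wide spectral matrix are needed.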
