Over time, data warehouse (DW) systems have become more difficult to develop because of the growing heterogeneity of data sources. Despite advances in research and technology, DW projects are still too slow to generate pragmatic results. Here, we address the following question: how can the complexity of DW development for integrating heterogeneous transactional information systems be reduced? To answer it, we propose methodological guidelines, based on cycles of conceptual modeling and data analysis, that drive the construction of a modular DW system. Applying these guidelines to the blood donation domain successfully reduced the complexity of DW development.
The way information is presented influences human decision making and is consequently highly relevant to electronically supported negotiations. The present study analyzes in a controlled laboratory experiment how information presentation in three alternative formats (table, history graph and dance graph) influences the negotiators' behavior and negotiation outcomes. The results show that graphical information presentation supports integrative behavior and the use of non-compensatory strategies. Furthermore, information about the opponents' preferences increases the quality of outcomes but decreases post-negotiation satisfaction of negotiators. The implications for system designers are discussed.
This paper presents and discusses a logical apparatus which may be used to support machine-based inferencing and automatic creation of hypertext links in what we call hypermedia-based argumentation decision support systems (HADSS). This logical approach has important advantages over other sorts of argument representation, found in the current literature. We present and discuss a prototype implementation in the context of three examples. We also present an exploratory experiment indicating that graph-based logical representations can materially help people make better inferences.
Internet auctions have become an integral part of electronic
commerce (EC) and a promising field for applying agent technologies.
Although the Internet provides an excellent infrastructure for
large-scale auctions, we must consider the possibility of a new type of
cheating, i.e., a bidder trying to profit from submitting several bids
under fictitious names (false-name bids). Double auctions are an
important subclass of auction protocols that permit multiple buyers and
sellers to bid to exchange a good, and have been widely used in stock,
bond, and foreign exchange markets. If no false-name bids exist, a
double auction protocol called the PMD protocol has been proven to be
dominant-strategy incentive compatible. On the other hand, if we
consider the possibility of false-name bids, the PMD protocol is no
longer dominant-strategy incentive compatible. We develop a new double
auction protocol called the Threshold Price Double auction (TPD)
protocol, which is dominant strategy incentive compatible even if
participants can submit false-name bids. The characteristic of the TPD
protocol is that the number of trades and the prices of exchange are
controlled by the threshold price. Simulation results show that this
protocol can achieve a social surplus that is very close to being Pareto efficient.
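The threshold-price idea can be illustrated with a simplified sketch. This is my own posted-price simplification, not the authors' exact TPD rules: buyers bidding at least a threshold price r trade with sellers asking at most r, and every trade clears at r itself, so the price does not depend on any individual bid.

```python
def threshold_price_double_auction(buy_bids, sell_asks, r):
    """Simplified threshold-price double auction (illustrative only).

    Buyers bidding at least r and sellers asking at most r are
    eligible; the number of trades is the size of the smaller group,
    and every trade clears at the threshold price r itself.
    """
    buyers = [b for b in buy_bids if b >= r]
    sellers = [s for s in sell_asks if s <= r]
    return min(len(buyers), len(sellers)), r

# Two buyers bid >= 8 and three sellers ask <= 8, so two trades at 8.
print(threshold_price_double_auction([12, 9, 7, 5], [4, 6, 8, 11], r=8))
# → (2, 8)
```

Because the clearing price is fixed in advance, shading a bid or adding fictitious bids cannot move the price, which conveys the intuition behind threshold-based incentive compatibility.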
A production rule of a knowledge base is called essential if it is present in every prime knowledge base that is logically equivalent to the given one. Identifying the essential rules indicates the degree of freedom we have in constructing logically equivalent transformations of the base, by specifying the set of rules that must remain in place and implicitly showing which rules could be replaced by others. A prime rule is called redundant if it is not present in any irredundant prime knowledge base that is logically equivalent to the given one. Recognizing the redundancy of a particular prime rule eliminates that rule from consideration in any future simplification of the knowledge base. The paper provides combinatorial characterizations and computational recognition procedures for the essentiality and redundancy of prime rules of Horn knowledge bases.
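The underlying notion of one Horn rule being implied by the rest of the base can be made concrete with forward chaining over definite Horn rules. The sketch below is illustrative (the function names are mine, not the paper's): a rule's head is checked for derivability from its body using only the remaining rules.

```python
def closure(facts, rules):
    """Forward-chaining closure of a fact set under definite Horn rules,
    where each rule is a (body_set, head) pair."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def implied_by_rest(rule, kb):
    """A definite Horn rule is logically implied by the other rules of
    the base iff its head is derivable from its body using only them."""
    body, head = rule
    others = [r for r in kb if r is not rule]
    return head in closure(body, others)

kb = [({"a"}, "b"), ({"b"}, "c"), ({"a"}, "c")]
# a -> c follows from a -> b and b -> c; a -> b does not follow
# from the rest, so it must stay in any equivalent base.
print(implied_by_rest(kb[2], kb), implied_by_rest(kb[0], kb))  # True False
```

This captures only the implication test; the paper's combinatorial characterizations of essentiality and redundancy go further.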
Constrained transmission lines are known to be able to
economically isolate submarkets from the competition of players located
elsewhere on the network. The paper examines the type of oligopolistic
competition that is likely to take place in these submarkets. It shows,
using simple models, how static or intertemporal Nash equilibria can
arise in a framework of price or supply function competition, which is
found to be more realistic than Cournot models in the particular case of
short-term competition in the electric power market. The paper also shows how
transmission constraints can play a direct role in the outcome of the
oligopolistic competition and encourage strategic behavior by the
generators. Transmission lines that would not be constrained if the
players did not know of their thermal limits may be strategically
driven to operate at these limits in order to maximize the profits of
the players who have market power, leaving the others to cope with the
consequences of such behavior.
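The effect of a binding line limit can be seen in a toy residual-demand calculation. The numbers and the closed form below are my own illustration, not the paper's models: when imports into a submarket are capped at K, the local generator optimizes against the residual inverse demand p = a - b(q + K).

```python
def residual_monopoly_output(a, b, c, K):
    """Output of a local generator facing inverse demand p = a - b*q_total,
    marginal cost c, and imports capped at K by the line limit, so it
    optimizes against residual demand p = a - b*(q + K).
    First-order condition: a - b*K - 2*b*q - c = 0.
    """
    q = (a - b * K - c) / (2 * b)
    price = a - b * (q + K)
    return q, price

q, p = residual_monopoly_output(a=100, b=1.0, c=20, K=30)
print(q, p)  # 25.0 45.0
```

A tighter cap K raises the price the constrained submarket pays, which is why a player with market power may prefer to drive the line to its limit.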
Tendering processes are widely used by government and the private sector, and they need improvement. One such improvement is to use Web services technology as a medium for processing all tenders submitted by interested contractors. Web-based application and decision support concepts are integrated in order to develop an efficient Web application for prequalification tendering processes. PreQTender is a tool that helps the decision maker (DM) select the best contractors in Malaysia. PreQTender processes all tender documents and generates a shortlist of qualified contractors. These eligible contractors are then considered in the next phase, the evaluation phase.
A description is given of PDM, a knowledge-based tool designed to help nonexpert users construct linear programming (LP) models of production, distribution, and inventory (PDI) planning problems. PDM interactively aids users in defining a logic model of their planning problem and uses it to generate problem-specific inferences and as input to a model-building component that mechanically constructs the algebraic schema of the appropriate LP model. Interesting features of PDM include the application of domain knowledge to guide user interaction, the use of syntactic knowledge of the problem representation language to effect model revision, and the use of a small set of primitive modeling rules in model construction.
In a deregulated environment, independent generators and utility
generators may or may not participate in the load frequency control of
the system. For the purpose of evaluating the performance of such a
system, a flexible method has been developed and implemented. The method
assumes that load frequency control is performed by an ISO based on
parameters defined by the participating generating units. The
participating units comprise utility generators and independent power
producers. The utilities define the units which will be under load
frequency control, while the independent power producers may or may not
participate in the load frequency control. For all the units which
participate in the load frequency control, the generator owner defines:
(a) generation limits, (b) rate of change and (c) economic participation
factor. This information is transmitted to the ISO. This scheme allows
the utilities to economically dispatch their own system, while at the
same time permitting the ISO to control the interconnected system.
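The per-unit data the scheme transmits to the ISO suggest a simple allocation rule. The sketch below is an assumption of mine, not the paper's implementation: a regulation requirement is split by economic participation factor, each unit's share is capped by its generation limit and rate of change, and any shortfall is reallocated to the uncapped units.

```python
def allocate_regulation(delta_p, units, dt):
    """Split a regulation requirement delta_p (MW) among participating
    units in proportion to their economic participation factors, capping
    each share by the unit's headroom to its generation limit and by its
    rate of change over the dispatch interval dt (minutes)."""
    caps = [min(u["headroom"], u["ramp"] * dt) for u in units]
    shares = [0.0] * len(units)
    active = [i for i in range(len(units)) if caps[i] > 0]
    remaining = delta_p
    while remaining > 1e-9 and active:
        total_pf = sum(units[i]["pf"] for i in active)
        still_active = []
        for i in active:
            want = remaining * units[i]["pf"] / total_pf
            room = caps[i] - shares[i]
            if want >= room:
                shares[i] += room          # unit hits its cap
            else:
                shares[i] += want
                still_active.append(i)
        remaining = delta_p - sum(shares)
        if still_active == active:         # nobody capped: fully allocated
            break
        active = still_active
    return shares

shares = allocate_regulation(50.0, [
    {"pf": 0.5, "headroom": 10, "ramp": 100},   # caps out at 10 MW
    {"pf": 0.5, "headroom": 100, "ramp": 100},  # absorbs the remaining 40 MW
], dt=1.0)
```

The reallocation loop is what lets utilities keep their own dispatch economic while the ISO still meets the system-wide regulation target.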
The electricity market has changed rapidly in the Northern
European countries. Harmonisation of the legislation and trading methods
widens the market area beyond national borders. Vertical integration
among electricity companies is changing the traditional structure of the
inner market. Successful business in this new more volatile market
requires sophisticated techniques for identifying new market
opportunities and managing the increased risks. We study the new market
from the perspectives of regional distributors and power pools. We
analyse the trading possibilities and profitability of different kinds
of power pools, when a spot market and several new contract structures
are available along with existing capacity based long-term contracts.
Different policies for allocating the common benefits of a power pool
among its members are compared and a new booking algorithm for balance
settlement is introduced. To analyse the operation of different kinds of
pools, we use simulation and optimisation techniques.
The Search Space Toolkit (SST) is a suite of tools for
investigating the properties of the continuous search spaces which arise
in designing complex engineering artifacts whose evaluation requires
significant computation by a numerical simulator. SST has been developed
as part of NDA, a computational environment for (semi-)automated design
of jet engine exhaust nozzles for supersonic aircraft, which was
developed in a collaboration between computer scientists at Rutgers
University and design engineers at General Electric and Lockheed. The
search spaces which SST explores differ significantly from the discrete
search spaces that typically arise in artificial intelligence research,
and properly searching such spaces is a fundamental AI research area. By
promoting the design space to be a first class entity, rather than a
“black box” buried in the interface between an optimizer and
a simulator, SST allows a more principled approach to automated design.
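The idea of promoting the design space to a first-class entity can be sketched as a small class. This is an illustrative assumption of mine, not SST's actual API: the space owns its bounds and evaluation history and can be probed directly, rather than being hidden in an optimizer/simulator interface.

```python
import random

class SearchSpace:
    """A continuous design space as a first-class object: it owns its
    bounds, records every simulator evaluation, and can be probed
    directly instead of being buried inside an optimizer loop."""

    def __init__(self, bounds, simulator):
        self.bounds = bounds        # list of (low, high) per dimension
        self.simulator = simulator  # stand-in for the expensive simulation
        self.history = []           # (point, value) pairs, in order

    def sample(self, rng):
        return [rng.uniform(lo, hi) for lo, hi in self.bounds]

    def evaluate(self, point):
        value = self.simulator(point)
        self.history.append((point, value))
        return value

    def best(self):
        return min(self.history, key=lambda pv: pv[1])

# Probe a 1-D space with a cheap stand-in objective.
space = SearchSpace([(0.0, 2.0)], simulator=lambda x: (x[0] - 1.0) ** 2)
rng = random.Random(0)
for _ in range(100):
    space.evaluate(space.sample(rng))
point, value = space.best()
```

Keeping the history on the space itself is what makes post-hoc analysis of the landscape possible without rerunning the simulator.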
In law enforcement applications, there is a critical need for new tools that can facilitate efficient and effective collaboration. Through a field study, we observe that crime analysis, a critical component of law enforcement operations, is knowledge intensive and often involves collaborative efforts from multiple law enforcement officers within and across agencies. To better facilitate such knowledge intensive collaboration and thereby improve law enforcement agencies' crime-fighting capabilities, we propose a novel methodology based on modeling and implementation techniques from workflow management and information retrieval. This paper presents this process-driven collaboration methodology and its prototype implementation as part of an integrated law enforcement information management environment called COPLINK.
The paper presents the modelling possibilities of neural networks on a complex real-world problem, namely municipal credit rating modelling. First, current approaches in credit rating modelling are introduced. Second, previous studies on municipal credit rating modelling are analyzed. Based on this analysis, a model is designed to classify US municipalities (located in the State of Connecticut) into rating classes. The model includes data pre-processing, the selection of input variables, and the design of various neural network structures for classification. The selection of input variables is realized using genetic algorithms. The input variables are extracted from financial statements and statistical reports in line with previous studies. These variables represent the inputs of the neural networks, while the rating classes from Moody's rating agency stand for the outputs. In addition to the exact rating classes, data are also labelled by four basic rating classes. As a result, the classification accuracies and the contributions of the input variables are studied for different numbers of classes. The results show that the rating classes assigned to bond issuers can be classified with a high accuracy rate using a limited subset of input variables.
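Genetic-algorithm input selection of the kind described can be sketched as follows. This is a generic toy version with made-up data, not the paper's setup: each chromosome is a bitmask over candidate variables, and fitness is the accuracy of a simple nearest-centroid classifier (standing in for the neural network, to keep the sketch self-contained) using only the selected variables.

```python
import random

def nearest_centroid_accuracy(X, y, mask):
    """Fitness: accuracy of a nearest-centroid classifier restricted to
    the input variables selected by the bitmask."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    hits = 0
    for x, lab in zip(X, y):
        pred = min(cents, key=lambda c: sum(
            (x[i] - cents[c][j]) ** 2 for j, i in enumerate(feats)))
        hits += pred == lab
    return hits / len(y)

def ga_select(X, y, n_feats, pop=20, gens=30, seed=0):
    """Genetic algorithm over bitmasks: elitist selection, one-point
    crossover, and occasional bit-flip mutation."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: -nearest_centroid_accuracy(X, y, m))
        popn = popn[:pop // 2]              # keep the fitter half
        while len(popn) < pop:
            a, b = rng.sample(popn[:pop // 2], 2)
            cut = rng.randrange(1, n_feats)
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.2:          # bit-flip mutation
                child[rng.randrange(n_feats)] ^= 1
            popn.append(child)
    return max(popn, key=lambda m: nearest_centroid_accuracy(X, y, m))

# Toy data: only variable 0 separates the two classes; 3 noise variables.
data_rng = random.Random(1)
y = [0, 1] * 20
X = [[(5.0 if lab else 0.0) + data_rng.gauss(0, 0.5)]
     + [data_rng.gauss(0, 1) for _ in range(3)] for lab in y]
mask = ga_select(X, y, n_feats=4)
```

On this toy problem the GA reliably retains the informative variable, mirroring the paper's finding that a limited subset of inputs suffices for high accuracy.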
This paper deals with the sensitivity analysis of general decision systems originating from the Bridgman model. It is shown that these problems are equivalent to the optimization of linear fractional functions over rectangles. By exploiting the special structure of decision problems, an O(n log n) algorithm is elaborated for solving such problems. Computational experience is also reported.
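The reduction to linear-fractional optimization over a rectangle admits a compact parametric solution. The sketch below is a generic Dinkelbach-style bisection, not the paper's O(n log n) procedure: it maximizes (aᵀx + a0)/(cᵀx + c0) over a box under the assumption that the denominator stays positive.

```python
def max_linear_fractional(a, a0, c, c0, lo, hi, tol=1e-9):
    """Maximize (a.x + a0) / (c.x + c0) over the box lo <= x <= hi,
    assuming the denominator is positive on the box.  For a fixed
    guess lam, max (a - lam*c).x + (a0 - lam*c0) separates by
    coordinate; the optimal ratio is the root of that auxiliary
    maximum, which is decreasing in lam."""
    def g(lam):
        val = a0 - lam * c0
        for ai, ci, l, h in zip(a, c, lo, hi):
            coef = ai - lam * ci
            val += coef * (h if coef > 0 else l)
        return val

    num = sum(ai * l for ai, l in zip(a, lo)) + a0
    den = sum(ci * l for ci, l in zip(c, lo)) + c0
    lam_lo = num / den              # ratio at a feasible point: g >= 0
    step = 1.0
    while g(lam_lo + step) > 0:     # grow the bracket upward
        step *= 2.0
    lam_hi = lam_lo + step
    while lam_hi - lam_lo > tol:
        mid = (lam_lo + lam_hi) / 2.0
        if g(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
    return (lam_lo + lam_hi) / 2.0

# max x/(x+1) on [0, 2] is attained at x = 2 with value 2/3.
print(max_linear_fractional([1.0], 0.0, [1.0], 1.0, [0.0], [2.0]))
```

The coordinatewise separation inside g is the same structural property a sorting-based O(n log n) method exploits; the bisection simply trades the sort for repeated evaluations.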
In this paper, we introduce a knowledge-based meta-model which serves as a unified resource model (URM) for integrating characteristics of the major types of objects appearing in software development models (SDMs). The URM consists of resource classes and a web of relations that link the different types of resources found in different kinds of models of software development. The URM includes specialized models for software systems, documents, agents, tools, and development processes. It has served as the basis for integrating and interoperating a number of process-centered CASE environments. The major benefit of the URM is twofold. First, it forms a higher level of abstraction supporting SDM formulation that subsumes many typical models of software development objects; hence, it enables a higher level of reusability for the existing support mechanisms of these models. Second, it provides a basis for complex reasoning mechanisms that address issues across different types of software objects. To explore these features, we describe the URM both formally and with a detailed example, followed by a characterization of the process of SDM composition, and then by a characterization of the life cycle of activities involved in an overall model formulation process.
This paper identifies the most influential contributors in the DSS area in the U.S., examines their contributions, and reviews the institutional publishing records of the leading U.S. universities that are actively publishing DSS research. To measure the influence/contributions of leading universities and contributors, we used the bibliographic citations of the publications on specific DSS applications. The critical assumption of this study is that “bibliographic citations are an acceptable surrogate for the actual influence of various information sources” (M.J. Culnan, Management Science 32, 2, Feb. 1986, 156–172). This paper identifies thirty-two leading U.S. universities with eighty-one of their affiliated members and the twenty-three most influential researchers. Among the leading U.S. universities identified, two are truly outstanding: The University of Texas-Austin and MIT. Regardless of the yardstick applied to measure their contributions, these two universities may be recognized as centers of excellence in DSS research in the U.S.A. in terms of the number of research publications, the number of total citation frequencies, and the number of active researchers in DSS-related areas.
This study applies factor analysis to an author cocitation frequency matrix derived from a database file that consists of a total of 23,768 cited reference records taken from 944 citing articles. Factor analysis extracted eleven factors consisting of six major areas of DSS research (group DSS, foundations, model management, interface systems, multicriteria DSS, and implementation) and five contributing disciplines (multiple criteria decision making, cognitive science, organizational science, artificial intelligence, and systems science). This research provides hard evidence that the decision support system field has made meaningful progress over the past two decades and is in the process of solidifying its domain and demarcating its reference disciplines. In particular, much progress has been made in the subareas of model management, such as representation, model base processing, model integration, and the application of artificial intelligence to model management, leading towards the development of a theory of models. To facilitate the transition from the pre- to post-paradigm period in DSS research, this study has completed important groundwork.
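The first step of such an author cocitation analysis, building the cocitation frequency matrix that is subsequently factored, can be sketched as follows (with toy data, not the study's corpus):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(citing_papers):
    """citing_papers: iterable of sets of cited authors.  Returns a
    Counter mapping sorted author pairs to the number of citing papers
    that reference both, i.e. the raw author cocitation matrix."""
    counts = Counter()
    for cited in citing_papers:
        for pair in combinations(sorted(cited), 2):
            counts[pair] += 1
    return counts

papers = [{"Simon", "Keen", "Sprague"},
          {"Keen", "Sprague"},
          {"Simon", "Newell"}]
m = cocitation_counts(papers)
print(m[("Keen", "Sprague")])  # 2 papers cite both Keen and Sprague
```

Factor analysis is then applied to the (typically correlation-normalized) matrix these counts define; that second step is omitted here.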
This paper presents the findings of a survey of software tools built to assist in the verification and validation of knowledge-based systems. The tools were identified from literature sources from the period 1985–1995. The tool builders were contacted and asked to complete and return a survey that identified which testing and analysis techniques were utilised and covered by their tool. From these survey results it is possible to identify trends in tool development, technique coverage and areas for future research.
The external business environment has become more turbulent over the last several years, and this is likely to continue. At the same time, the functionality and cost of information technology have been improving steadily. The combination of these two forces can give rise to a quite different new organization: the flexible organization. Research has identified the inhibiting factors that have caused this transformation to occur in only some organizations.
Knowledge management (KM) research has been evolving for more than a decade, yet little is known about KM theoretical perspectives, research paradigms, and research methods. This paper explores KM research in influential journals for the period 2000–2004. A total of 160 KM articles in ten top-tier information systems and management journals are analyzed. Articles that may serve as useful exemplars of KM research from positivist, interpretivist, and critical pluralist paradigms are selected. We find that KM research in information systems journals differs from that in management journals, but neither makes a balanced use of positivist and non-positivist research approaches.
The research literature on Radio Frequency Identification (RFID) has grown exponentially in recent years. In a domain where new concepts and techniques are constantly being introduced, it is of interest to analyze recent trends in this literature. Although some attempts have been made in the past to review this stream of research, there has been no attempt to assess the contributions to this literature by individuals and institutions. This study assesses the contributions of individual researchers and institutions from 2004 to 2008, based on their publications in SCI- or SSCI-indexed journals. The findings of this study offer researchers a unique view of this field and some directions for future research.
Modern Internet applications run on top of complex system infrastructures where several runtime management algorithms must guarantee high performance, scalability, and availability. This paper aims to support runtime algorithms that must take decisions on the basis of historical and predicted load conditions of internal system resources. We propose a new class of moving filtering techniques and adaptive prediction models that are specifically designed for runtime, short-term forecasting of time series originating from monitors of system resources of Internet-based servers. A large set of experiments confirms that the proposed models improve prediction accuracy with respect to existing algorithms and show stable results for different workload scenarios.
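One concrete instance of the combination described, a smoothing filter feeding a short-term predictor, is double exponential (Holt) smoothing. This generic sketch is not the paper's proposed models, only an illustration of the filter-then-forecast pattern:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Double exponential (Holt) smoothing: maintain a level and a
    trend estimate, update both per sample, and forecast by
    extrapolating the trend `horizon` steps ahead."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

load = [100 + 2 * t for t in range(20)]  # steadily rising resource load
print(round(holt_forecast(load, horizon=1), 1))  # ≈ 140.0
```

On a linearly rising load the filter locks onto the trend, so the one-step forecast extrapolates it exactly; real monitor traces would require tuning alpha and beta, or the adaptive schemes the paper proposes.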
Retrieving knowledge from a knowledge repository includes both the process of finding information of interest and the process of converting incoming information into a person's own knowledge. This paper explores the application of 3D interfaces in supporting the retrieval of spatial knowledge by presenting the development and evaluation of a geo-referenced knowledge repository system. As the computer screen becomes crowded with the high volume of information available, the 3D interface is a promising candidate for making better use of screen space. A 3D interface is also more similar to the 3D terrain surface it represents than its 2D counterpart. However, almost no previous empirical study found supportive evidence for the application of 3D interfaces. Observing that those studies provided one static interface that required users to view the 3D object from a fixed perspective, we developed 3D interfaces with interactive animation, which allow users to control how a visual object is displayed. The empirical study demonstrated that this is a promising approach to facilitating spatial knowledge retrieval.
Knowledge mapping can provide comprehensive depictions of rapidly evolving scientific domains. Taking the design science approach, we developed a Web-based knowledge mapping system (i.e., Nano Mapper) that provides interactive search and analysis of various scientific document sources in nanotechnology. We conducted multiple studies to evaluate Nano Mapper's search and analysis functionalities, respectively. The search functionality appears more effective than that of the benchmark systems, and subjects exhibit favorable satisfaction with the analysis functionality. Our study addresses several gaps in knowledge mapping for nanotechnology and illustrates the desirability of using the design science approach to design, implement, and evaluate an advanced information system.