Decision Support Systems

Published by Elsevier
Print ISSN: 0167-9236
Publications
Over time, data warehouse (DW) systems have become more difficult to develop because of the growing heterogeneity of data sources. Despite advances in research and technology, DW projects are still too slow for pragmatic results to be generated. Here, we address the following question: how can the complexity of DW development for the integration of heterogeneous transactional information systems be reduced? To answer this, we propose methodological guidelines, based on cycles of conceptual modeling and data analysis, that drive the construction of a modular DW system. These guidelines were applied to the blood donation domain, successfully reducing the complexity of DW development.
 
The way information is presented influences human decision making and is consequently highly relevant to electronically supported negotiations. The present study analyzes in a controlled laboratory experiment how information presentation in three alternative formats (table, history graph and dance graph) influences the negotiators' behavior and negotiation outcomes. The results show that graphical information presentation supports integrative behavior and the use of non-compensatory strategies. Furthermore, information about the opponents' preferences increases the quality of outcomes but decreases post-negotiation satisfaction of negotiators. The implications for system designers are discussed.
 
This paper presents and discusses a logical apparatus which may be used to support machine-based inferencing and automatic creation of hypertext links in what we call hypermedia-based argumentation decision support systems (HADSS). This logical approach has important advantages over other sorts of argument representation, found in the current literature. We present and discuss a prototype implementation in the context of three examples. We also present an exploratory experiment indicating that graph-based logical representations can materially help people make better inferences.
 
Internet auctions have become an integral part of electronic commerce (EC) and a promising field for applying agent technologies. Although the Internet provides an excellent infrastructure for large-scale auctions, we must consider the possibility of a new type of cheating, i.e., a bidder trying to profit from submitting several bids under fictitious names (false-name bids). Double auctions are an important subclass of auction protocols that permit multiple buyers and sellers to bid to exchange a good, and have been widely used in stock, bond, and foreign exchange markets. If no false-name bids exist, a double auction protocol called the PMD protocol has been proven to be dominant-strategy incentive compatible. On the other hand, if we consider the possibility of false-name bids, the PMD protocol is no longer dominant-strategy incentive compatible. We develop a new double auction protocol called the Threshold Price Double auction (TPD) protocol, which is dominant-strategy incentive compatible even if participants can submit false-name bids. A key characteristic of the TPD protocol is that the number of trades and the exchange prices are controlled by the threshold price. Simulation results show that this protocol can achieve a social surplus that is very close to being Pareto efficient.
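As an illustration of how a threshold price can decouple the price a trader faces from her own bid, the following Python sketch clears a one-shot double auction at a fixed threshold r. The function name, the pairing by sorted order, and the choice to clear every executed trade exactly at r are assumptions made for the example; they are not necessarily the authors' exact TPD rule, which also governs how r is chosen and how marginal traders are excluded.

```python
# Minimal sketch of a threshold-price double auction clearing step.
# Assumptions (not from the paper): a fixed threshold price r is given,
# buyers whose bids are at least r trade at price r with sellers whose
# asks are at most r, paired in sorted order.

def clear_threshold_double_auction(bids, asks, r):
    """Return a list of (buy_bid, sell_ask, price) trades at threshold r."""
    eligible_buyers = sorted([b for b in bids if b >= r], reverse=True)
    eligible_sellers = sorted([a for a in asks if a <= r])
    n_trades = min(len(eligible_buyers), len(eligible_sellers))
    # Every executed trade clears at the threshold price r, so the price a
    # trader faces does not depend on her own bid or ask.
    return [(eligible_buyers[i], eligible_sellers[i], r) for i in range(n_trades)]

if __name__ == "__main__":
    bids = [12, 9, 7, 5]      # buyers' declared values
    asks = [4, 6, 8, 11]      # sellers' declared costs
    print(clear_threshold_double_auction(bids, asks, r=7))
    # -> [(12, 4, 7), (9, 6, 7)]
```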
 
A production rule of a knowledge base is called essential if it is present in every prime knowledge base which is logically equivalent to the given one. The identification of essential rules indicates the degree of freedom we have in constructing logically equivalent transformations of the base, by specifying the set of rules which must remain in place and implicitly showing which rules could be replaced by other ones. A prime rule is called redundant if it is not present in any irredundant prime knowledge base which is logically equivalent to the given one. The recognition of redundancy of a particular prime rule in a knowledge base eliminates such a rule from consideration in any future simplifications of the knowledge base. The paper provides combinatorial characterizations and computational recognition procedures for essentiality and redundancy of prime rules of Horn knowledge bases.
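To make the underlying inference step concrete, here is a small Python sketch that tests whether one Horn rule is a logical consequence of the others by forward chaining (unit propagation) from its body. The rule encoding and function names are illustrative; the paper's notions of essential and redundant prime rules are finer-grained than plain consequence checking.

```python
# Sketch: Horn rules as (frozenset(body), head). A rule is a logical
# consequence of the others iff forward chaining from its body, using the
# remaining rules, derives its head. This only illustrates the basic Horn
# inference step, not the paper's full characterizations.

def forward_chain(facts, rules):
    """Close a set of facts under Horn rules (body -> head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def is_consequence(rule, other_rules):
    body, head = rule
    return head in forward_chain(body, other_rules)

if __name__ == "__main__":
    kb = [(frozenset({"a"}), "b"),
          (frozenset({"b"}), "c"),
          (frozenset({"a"}), "c")]      # implied by the first two rules
    r = kb[2]
    print(is_consequence(r, [x for x in kb if x != r]))  # True
```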
 
Constrained transmission lines are known to be able to economically isolate submarkets from the competition of players located elsewhere on the network. The paper examines the type of oligopolistic competition that is likely to take place in these submarkets. It shows, using simple models, how static or intertemporal Nash equilibria can arise in a framework of price or supply function competition, which is found to be more realistic than Cournot models in the particular case of short-term competition in the electric power market. The paper also shows how transmission constraints can play a direct role in the outcome of the oligopolistic competition and encourage strategic behavior by the generators. Transmission lines that would not be constrained if the players did not know of their thermal limits may be strategically driven to operate at these limits in order to maximize the profits of the players who have market power, leaving the others to cope with the consequences of such behavior.
 
Tendering processes need improvement, as they are widely used by government and the private sector. One such improvement is to use Web services technology as a medium for processing all tenders submitted by interested contractors. Web-based application and decision support concepts are integrated to develop an efficient Web application for prequalification tendering processes. PreQTender is used as a tool to help the decision maker (DM) select the best contractors in Malaysia. PreQTender processes all tender documents and generates a shortlist of qualified contractors. These eligible contractors are then considered in the next phase, the evaluation phase.
 
A description is given of PDM, a knowledge-based tool designed to help nonexpert users construct linear programming (LP) models of production, distribution, and inventory (PDI) planning problems. PDM interactively aids users in defining a logic model of their planning problem and uses it to generate problem-specific inferences and as input to a model building component that mechanically constructs the algebraic schema of the appropriate LP model. Interesting features of PDM include the application of domain knowledge to guide user interaction, the use of syntactic knowledge of the problem representation language to effect model revision, and the use of a small set of primitive modeling rules in model construction.
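PDM itself is a knowledge-based tool, but the kind of algebraic LP schema it targets can be pictured with a tiny hand-written production/distribution model. The sketch below, which assumes SciPy's linprog and made-up costs, capacities, and demands, is only an example of the target formulation, not PDM's output.

```python
# Illustrative production/distribution LP of the kind a tool like PDM might
# construct. All numbers (costs, capacities, demands) are made up for the
# example; this is not PDM's actual model schema.
from scipy.optimize import linprog

# Two plants ship to two markets: x = [x11, x12, x21, x22] are shipped quantities.
cost = [4, 6, 5, 3]            # unit production + shipping cost
A_ub = [[1, 1, 0, 0],          # plant 1 capacity
        [0, 0, 1, 1]]          # plant 2 capacity
b_ub = [80, 70]
A_eq = [[1, 0, 1, 0],          # market 1 demand met exactly
        [0, 1, 0, 1]]          # market 2 demand met exactly
b_eq = [60, 50]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)          # optimal shipments and total cost
```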
 
The electricity market has changed rapidly in the Northern European countries. Harmonisation of the legislation and trading methods widens the market area outside national limits. Vertical integration among electricity companies is changing the traditional structure of the inner market. Successful business in this new more volatile market requires sophisticated techniques for identifying new market opportunities and managing the increased risks. We study the new market from the perspectives of regional distributors and power pools. We analyse the trading possibilities and profitability of different kinds of power pools, when a spot market and several new contract structures are available along with existing capacity based long-term contracts. Different policies for allocating the common benefits of a power pool among its members are compared and a new booking algorithm for balance settlement is introduced. To analyse the operation of different kinds of pools, we use simulation and optimisation techniques.
 
In a deregulated environment, independent generators and utility generators may or may not participate in the load frequency control of the system. For the purpose of evaluating the performance of such a system, a flexible method has been developed and implemented. The method assumes that load frequency control is performed by an ISO based on parameters defined by the participating generating units. The participating units comprise utility generators and independent power producers. The utilities define the units which will be under load frequency control, while the independent power producers may or may not participate in the load frequency control. For all the units which participate in the load frequency control, the generator owner defines: (a) generation limits, (b) rate of change, and (c) economic participation factor. This information is transmitted to the ISO. This scheme allows the utilities to economically dispatch their own system, while at the same time permitting the ISO to control the interconnected system operation.
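As a rough illustration of how the owner-supplied parameters could be used, the sketch below splits a required MW correction across participating units by economic participation factor, clipped by ramp-rate and generation limits. The single-pass allocation rule and all numbers are assumptions for the example, not the scheme implemented in the paper.

```python
# Sketch: split a required MW correction across participating units using the
# owner-supplied data named in the abstract (limits, rate of change, economic
# participation factor). Numbers and the allocation rule are illustrative.

def allocate_correction(units, delta_mw, interval_min=1.0):
    """Return {unit: MW change}; any slack simply remains unallocated."""
    total_pf = sum(u["pf"] for u in units.values())
    out = {}
    for name, u in units.items():
        share = delta_mw * u["pf"] / total_pf
        ramp_cap = u["rate_mw_per_min"] * interval_min
        headroom = (u["p_max"] - u["p"]) if delta_mw >= 0 else (u["p_min"] - u["p"])
        # Clip the share by both the ramp-rate cap and the generation limits.
        bound = min(abs(share), ramp_cap, abs(headroom))
        out[name] = bound if delta_mw >= 0 else -bound
    return out

if __name__ == "__main__":
    units = {
        "utility_1": {"p": 300, "p_min": 100, "p_max": 400, "rate_mw_per_min": 5, "pf": 0.6},
        "ipp_1":     {"p": 150, "p_min": 50,  "p_max": 200, "rate_mw_per_min": 2, "pf": 0.4},
    }
    print(allocate_correction(units, delta_mw=12.0))
    # utility_1 -> 5.0 (ramp-limited), ipp_1 -> 2.0 (ramp-limited)
```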
 
The Search Space Toolkit (SST) is a suite of tools for investigating the properties of the continuous search spaces which arise in designing complex engineering artifacts whose evaluation requires significant computation by a numerical simulator. SST has been developed as part of NDA, a computational environment for (semi-)automated design of jet engine exhaust nozzles for supersonic aircraft, which was developed in a collaboration between computer scientists at Rutgers University and design engineers at General Electric and Lockheed. The search spaces which SST explores differ significantly from the discrete search spaces that typically arise in artificial intelligence research, and properly searching such spaces is a fundamental AI research area. By promoting the design space to be a first class entity, rather than a “black box” buried in the interface between an optimizer and a simulator, SST allows a more principled approach to automated design.
 
In law enforcement applications, there is a critical need for new tools that can facilitate efficient and effective collaboration. Through a field study, we observe that crime analysis, a critical component of law enforcement operations, is knowledge intensive and often involves collaborative efforts from multiple law enforcement officers within and across agencies. To better facilitate such knowledge intensive collaboration and thereby improve law enforcement agencies' crime-fighting capabilities, we propose a novel methodology based on modeling and implementation techniques from workflow management and information retrieval. This paper presents this process-driven collaboration methodology and its prototype implementation as part of an integrated law enforcement information management environment called COPLINK.
 
The paper presents the modelling possibilities of neural networks on a complex real-world problem, i.e. municipal credit rating modelling. First, current approaches in credit rating modelling are introduced. Second, previous studies on municipal credit rating modelling are analyzed. Based on this analysis, the model is designed to classify US municipalities (located in the State of Connecticut) into rating classes. The model includes data pre-processing, the selection process of input variables, and the design of various neural network structures for classification. The selection of input variables is realized using genetic algorithms. The input variables are extracted from financial statements and statistical reports in line with previous studies. These variables represent the inputs of neural networks, while the rating classes from Moody's rating agency stand for the outputs. In addition to exact rating classes, data are also labelled by four basic rating classes. As a result, the classification accuracies and the contributions of input variables are studied for different numbers of classes. The results show that the rating classes assigned to bond issuers can be classified with a high accuracy rate using a limited subset of input variables.
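A compact way to picture the input-selection step is a genetic algorithm searching over boolean feature masks, with a small neural network evaluated by cross-validation as the fitness function. The sketch below uses scikit-learn and synthetic data as stand-ins; the paper's municipal data, network designs, and GA settings differ.

```python
# Sketch of GA-based input selection wrapped around a neural-network
# classifier, in the spirit of the abstract. Synthetic data and scikit-learn
# are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=12, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def genetic_select(pop_size=12, generations=8, p_mut=0.1):
    pop = rng.random((pop_size, X.shape[1])) < 0.5       # random feature masks
    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])             # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < p_mut       # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

mask, acc = genetic_select()
print("selected inputs:", np.flatnonzero(mask), "cv accuracy: %.3f" % acc)
```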
 
The sensitivity analysis of general decision systems originating from the Bridgman model is dealt with. It is shown that these problems are equivalent to the optimization of linear fractional functions over rectangles. By exploiting the special properties of decision problems, an O(n log n) algorithm is elaborated for solving such problems. Computational experience is also given.
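One standard way to optimize a linear fractional function over a box, shown below for intuition, is Dinkelbach's parametric iteration, whose subproblem separates coordinatewise over the rectangle. This is not the paper's O(n log n) algorithm; it only illustrates the problem class, and it assumes the denominator stays positive on the box.

```python
# Illustration: maximizing (c.x + c0)/(d.x + d0) over a box [lo, hi] with
# Dinkelbach's parametric iteration. A standard alternative method, not the
# O(n log n) algorithm of the paper; assumes a positive denominator on the box.
import numpy as np

def maximize_linear_fractional(c, c0, d, d0, lo, hi, tol=1e-10):
    x = lo.copy()
    lam = (c @ x + c0) / (d @ x + d0)
    while True:
        w = c - lam * d
        # The parametric subproblem max (c - lam*d).x is separable over the box.
        x = np.where(w > 0, hi, lo)
        new_lam = (c @ x + c0) / (d @ x + d0)
        if abs(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam

if __name__ == "__main__":
    c = np.array([3.0, -1.0]); d = np.array([1.0, 2.0])
    lo = np.array([0.0, 0.0]); hi = np.array([1.0, 1.0])
    x_star, val = maximize_linear_fractional(c, 1.0, d, 4.0, lo, hi)
    print(x_star, val)   # -> [1. 0.] 0.8
```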
 
In this paper, we introduce a knowledge-based meta-model which serves as a unified resource model (URM) for integrating characteristics of the major types of objects appearing in software development models (SDMs). The URM consists of resource classes and a web of relations that link the different types of resources found in different kinds of models of software development. The URM includes specialized models for software systems, documents, agents, tools, and development processes. The URM has served as the basis for integrating and interoperating a number of process-centered CASE environments. The major benefit of the URM is twofold: First, it forms a higher level of abstraction supporting SDM formulation that subsumes many typical models of software development objects. Hence, it enables a higher level of reusability for existing support mechanisms of these models. Second, it provides a basis to support complex reasoning mechanisms that address issues across different types of software objects. To explore these features, we describe the URM both formally and with a detailed example, followed by a characterization of the process of SDM composition, and then by a characterization of the life cycle of activities involved in an overall model formulation process.
 
This study applies factor analysis to an author cocitation frequency matrix derived from a database file that consists of a total of 23,768 cited reference records taken from 944 citing articles. Factor analysis extracted eleven factors consisting of six major areas of DSS research (group DSS, foundations, model management, interface systems, multicriteria DSS, and implementation) and five contributing disciplines (multiple criteria decision making, cognitive science, organizational science, artificial intelligence, and systems science). This research provides hard evidence that DSS research has made meaningful progress over the past two decades and is in the process of solidifying its domain and demarcating its reference disciplines. In particular, much progress has been made in subareas of model management such as representation, model base processing, model integration, and the application of artificial intelligence to model management, leading towards the development of a theory of models. This study thus provides important groundwork for facilitating the transition from the pre-paradigm to the post-paradigm period in DSS research.
 
This paper identifies the most influential contributors in the DSS area in the U.S., examines their contributions, and reviews the institutional publishing records at the leading U.S. universities that are actively publishing DSS research. To measure the influence and contributions of leading universities and contributors, we used the bibliographic citations of the publications on specific DSS applications. The critical assumption of this study is that “bibliographic citations are an acceptable surrogate for the actual influence of various information sources” (M.J. Culnan, Management Science 32 (2), Feb. 1986, 156–172). This paper identifies thirty-two leading U.S. universities, eighty-one of their affiliated members, and the twenty-three most influential researchers. Among the leading U.S. universities identified, two are truly outstanding: The University of Texas-Austin and MIT. Regardless of the yardstick applied to measure their contributions, these two universities may be recognized as centers of excellence in DSS research in the U.S.A. in terms of the number of research publications, the number of total citation frequencies, and the number of active researchers in DSS-related areas.
 
This paper presents the findings of a survey of software tools built to assist in the verification and validation of knowledge-based systems. The tools were identified from literature sources from the period 1985–1995. The tool builders were contacted and asked to complete and return a survey that identified which testing and analysis techniques were utilised and covered by their tool. From these survey results it is possible to identify trends in tool development, technique coverage and areas for future research.
 
The external business environment has become more turbulent over the last several years, and this is likely to continue. At the same time, the functionality and cost of information technology have been improving steadily. The combination of these two forces can give rise to a quite different new organization: the flexible organization. Research has shown the nature of the inhibiting factors that have caused this transformation to occur in only some organizations.
 
Knowledge management (KM) research has been evolving for more than a decade, yet little is known about KM theoretical perspectives, research paradigms, and research methods. This paper explores KM research in influential journals for the period 2000–2004. A total of 160 KM articles in ten top-tier information systems and management journals are analyzed. Articles that may serve as useful exemplars of KM research from positivist, interpretivist, and critical pluralist paradigms are selected. We find that KM research in information systems journals differs from that in management journals, but neither makes a balanced use of positivist and non-positivist research approaches.
 
The research literature on Radio Frequency Identification (RFID) has grown exponentially in recent years. In a domain where new concepts and techniques are constantly being introduced, it is of interest to analyze recent trends in this literature. Although some attempts have been made in the past to review this stream of research, there has been no attempt to assess the contributions to this literature by individuals and institutions. This study assesses the contributions of individual researchers and institutions from 2004 to 2008, based on their publications in SCI- or SSCI-indexed journals. The findings of this study offer researchers a unique view of this field and some directions for future research.
 
Modern Internet applications run on top of complex system infrastructures where several runtime management algorithms have to guarantee high performance, scalability and availability. This paper aims to offer a support to runtime algorithms that must take decisions on the basis of historical and predicted load conditions of the internal system resources. We propose a new class of moving filtering techniques and of adaptive prediction models that are specifically designed to deal with runtime and short-term forecast of time series which originate from monitors of system resources of Internet-based servers. A large set of experiments confirm that the proposed models improve the prediction accuracy with respect to existing algorithms and they show stable results for different workload scenarios.
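For readers unfamiliar with this class of techniques, the sketch below combines a moving-median pre-filter (to damp spurious spikes in a resource-utilization series) with Holt's double exponential smoothing for a short-horizon forecast. Both are generic stand-ins, not the specific filtering and adaptive prediction models proposed in the paper.

```python
# Sketch: a moving-median filter to damp spikes in a resource-utilization
# series, followed by Holt's double exponential smoothing for a short-term
# forecast. Generic stand-ins for illustration only.
import numpy as np

def moving_median(series, window=5):
    s = np.asarray(series, dtype=float)
    out = np.empty_like(s)
    for i in range(len(s)):
        lo = max(0, i - window + 1)
        out[i] = np.median(s[lo:i + 1])
    return out

def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + k * trend for k in range(1, horizon + 1)]

if __name__ == "__main__":
    cpu = [20, 22, 21, 80, 23, 25, 27, 28, 30, 31]   # one spurious spike
    smoothed = moving_median(cpu, window=3)
    print(holt_forecast(smoothed, horizon=3))
```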
 
Retrieving knowledge from a knowledge repository includes both the process of finding information of interest and the process of converting incoming information to a person's own knowledge. This paper explores the application of 3D interfaces in supporting the retrieval of spatial knowledge by presenting the development and the evaluation of a geo-referenced knowledge repository system. As the computer screen becomes crowded with the high volume of information available, a 3D interface becomes a promising candidate for making better use of screen space. A 3D interface is also more similar to the 3D terrain surface it represents than its 2D counterpart. However, almost all previous empirical studies found no supportive evidence for the application of 3D interfaces. Realizing that those studies required users to observe the 3D object from a given perspective through a single static interface, we developed 3D interfaces with interactive animation, which allow users to control how a visual object is displayed. The empirical study demonstrated that this is a promising approach to facilitating spatial knowledge retrieval.
 
Knowledge mapping can provide comprehensive depictions of rapidly evolving scientific domains. Taking the design science approach, we developed a Web-based knowledge mapping system (i.e., Nano Mapper) that provides interactive search and analysis on various scientific document sources in nanotechnology. We conducted multiple studies to evaluate Nano Mapper's search and analysis functionality respectively. The search functionality appears more effective than that of the benchmark systems. Subjects exhibit favorable satisfaction with the analysis functionality. Our study addresses several gaps in knowledge mapping for nanotechnology and illustrates the desirability of using the design science approach to design, implement, and evaluate an advanced information system.
 
Application software has been developed for analyzing and understanding a dynamic price change in the US wholesale power market. Traders can use the software as an effective decision-making tool by modeling and simulating a power market. The software uses different features of a decision support system by creating a framework for assessing new trading strategies in a competitive electricity trading environment. The practicality of the software is confirmed by comparing its estimation accuracy with those of other methods (e.g., neural network and genetic algorithm). The software has been applied to a data set regarding the California electricity crisis in order to examine whether the learning (convergence) speed of traders is different between the two periods (before and during the crisis). Such an application confirms the validity of the proposed software.
 
Decision making in process-aware information systems involves build-time and run-time decisions. At build-time, idealized process models are designed based on the organization's objectives, infrastructure, context, constraints, etc. At run-time, this idealized view is often broken. In particular, process models generally assume that planned activities happen within a certain period. When such assumptions are not fulfilled, users must make decisions regarding alternative arrangements to achieve the goal of completing the process within its expected time frame or to minimize tardiness. We refer to the required decisions as escalations. This paper proposes a framework for escalations that draws on established principles from the workflow management field. The paper identifies and classifies a number of escalation mechanisms such as changing the routing of work, changing the work distribution or changing the requirements with respect to available data. A case study and a simulation experiment are used to illustrate and evaluate these mechanisms.
 
This paper integrates a number of strands of a long-term project that is critically analysing the academic field of decision support systems (DSS). The project is based on the content analysis of 1093 DSS articles published in 14 major journals from 1990 to 2004. An examination of the findings of each part of the project yields eight key issues that the DSS field should address for it to continue to play an important part in information systems scholarship. These eight issues are: the relevance of DSS research, DSS research methods and paradigms, the judgement and decision-making theoretical foundations of DSS research, the role of the IT artifact in DSS research, the funding of DSS research, inertia and conservatism of DSS research agendas, DSS exposure in general “A” journals, and discipline coherence. The discussion of each issue is based on the data derived from the article content analysis. A number of suggestions are made for the improvement of DSS research. These relate to case study research, design science, professional relevance, industry funding, theoretical foundations, data warehousing, and business intelligence. The suggestions should help DSS researchers construct high quality research agendas that are relevant and rigorous.
 
This study presents a hybrid AI (artificial intelligence) approach to the implementation of trading strategies in the S&P 500 stock index futures market. The hybrid AI approach integrates the rule-based systems technique and the neural networks technique to accurately predict the direction of daily price changes in S&P 500 stock index futures. By highlighting the advantages and overcoming the limitations of both the neural networks technique and rule-based systems technique, the hybrid approach can facilitate the development of more reliable intelligent systems to model expert thinking and to support the decision-making processes. Our methodology differs from other studies in two respects. First, the rule-based systems approach is applied to provide neural networks with training examples. Second, we employ Reasoning Neural Networks (RN) instead of Back Propagation Networks. Empirical results demonstrate that RN outperforms the other two ANN models (Back Propagation Networks and Perceptron). Based upon this hybrid AI approach, the integrated futures trading system (IFTS) is established and employed to trade the S&P 500 stock index futures contracts. Empirical results also confirm that IFTS outperformed the passive buy-and-hold investment strategy during the 6-year testing period from 1988 to 1993.
 
A bureaucracy can be viewed as a set of policies that governs the activities of its people. The purpose of these policies is to improve operational effectiveness and efficiency. However, manual administration of these policies is a tedious and often overwhelming task because it is too cognitively demanding to keep track of the complex relationships between the policies. As a result, these policies often consist of many inconsistencies (conflicts) as they evolve because there is no automated means to aid the administrators in detecting inconsistencies. In this paper, we present an approach that uses abductive logic programming for building a decision support system for the administration of bureaucratic policies. The system will help administrators decide the consistency of a policy with respect to the current set of policies and hence, prevent the introduction of inconsistent policies.
 
Many decisions involve uncertainty, yet most decision support systems (DSSs) and expert systems (ESs) are poorly equipped to deal with such problems. This paper employs an abductive perspective to propose a prototype framework for tackling uncertainty handling in DSS and ES. Five major features of such a framework are presented and developed: a user-friendly dialogue, a case-base, a knowledge-base, approximate reasoning, and a fuzzification/defuzzification mechanism. These are supported by an extended example.
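To make the fuzzification/defuzzification feature concrete, here is a minimal sketch using triangular membership functions and centroid defuzzification. The linguistic labels, breakpoints, and 0-10 scale are invented for the example; the framework in the paper is more general.

```python
# Minimal fuzzification/defuzzification sketch with triangular membership
# functions and centroid defuzzification. Labels and breakpoints are invented.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

SETS = {"low": (-5, 0, 5), "medium": (0, 5, 10), "high": (5, 10, 15)}

def fuzzify(x):
    return {label: float(tri(x, *abc)) for label, abc in SETS.items()}

def defuzzify_centroid(strengths, universe=np.linspace(0, 10, 101)):
    """Clip each output set by its strength, aggregate by max, take the centroid."""
    agg = np.zeros_like(universe)
    for label, s in strengths.items():
        agg = np.maximum(agg, np.minimum(s, tri(universe, *SETS[label])))
    return float((universe * agg).sum() / agg.sum())

if __name__ == "__main__":
    memberships = fuzzify(7.0)              # e.g. a risk level of 7 on a 0-10 scale
    print(memberships)                      # {'low': 0.0, 'medium': 0.6, 'high': 0.4}
    print(defuzzify_centroid(memberships))  # crisp output, roughly 5.4
```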
 
In this case study in knowledge engineering, data mining, and behavioral finance, we implement a variation of the bull flag stock charting heuristic using a template matching technique from pattern recognition to identify abrupt increases in volume in the New York Stock Exchange Composite Index. Such volume increases are found to signal subsequent increases in price under certain conditions during the period from 1981 to 1999, the Great Bull Market. A 120-trading-day history of price and volume is used to forecast price movement at horizons from 20 to 100 trading days.
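The matching step can be pictured as correlating a normalized volume window against an idealized "flat, then abrupt rise" template, as in the sketch below. The template shape, window construction, and scoring are simplifications; the paper's bull flag heuristic uses a weighted price/volume template grid.

```python
# Sketch of the template-matching idea: score a 120-day volume window by its
# correlation with an idealized "flat, then abrupt increase" template. The
# actual charting heuristic in the paper is richer than this illustration.
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def volume_spike_score(volume_window, spike_len=10):
    """Correlation of the window with a template that is flat, then steps up."""
    n = len(volume_window)
    template = np.concatenate([np.zeros(n - spike_len), np.ones(spike_len)])
    return float(np.corrcoef(zscore(volume_window), zscore(template))[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(100, 5, 110)
    spike = rng.normal(160, 5, 10)            # abrupt volume increase at the end
    window = np.concatenate([base, spike])    # a 120-trading-day history
    print(round(volume_spike_score(window), 2))   # close to 1 for a clear spike
```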
 
Electronic commerce has added a new complex issue to international trade. It is based upon the assumption that buyers and sellers conduct business with very little information about each other. This paper examines the importance and development of trust and reputation in electronic commerce. The importance of these assets in commercial relations is discussed. The paper describes how reputation is protected as a legal asset and how laws or legal principles support trust relationships in trade. Finally, the importance of developing legal guidelines for trust and reputation as a counterbalance to the lack of morality on the Internet is discussed.
 
With the explosion in the quantity of on-line text and multimedia information in recent years, there has been a renewed interest in the automated extraction of knowledge and information in various disciplines. In this paper, we provide a novel quantitative model for the creation of a summary by extracting a set of sentences that represent the most salient content of a text. The model is based on a shallow linguistic extraction technique. What distinguishes it from previous research is that it does not work on the detection of specific keywords or cue-phrases to evaluate the relevance of the sentence concerned. Instead, the attention is focused on the identification of the main factors in the textual continuity. Simulation experiments suggest that this technique is useful because it moves away from a purely keyword-based method of textual information extraction and its associated limitations.
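A toy version of continuity-driven extraction, shown below, scores each sentence by its word overlap with neighbouring sentences and keeps the top-scoring ones in document order. This is only a crude proxy for the paper's shallow linguistic model of textual continuity.

```python
# Toy extraction summarizer: score each sentence by its word overlap with its
# neighbours (a crude proxy for textual continuity) and keep the top-scoring
# sentences in document order. The paper's model is more sophisticated.
import re

def sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    return set(re.findall(r"[a-z']+", sentence.lower()))

def summarize(text, k=2):
    sents = sentences(text)
    bags = [words(s) for s in sents]
    scores = []
    for i, bag in enumerate(bags):
        neighbours = bags[max(0, i - 1):i] + bags[i + 1:i + 2]
        scores.append(sum(len(bag & nb) for nb in neighbours))
    keep = sorted(sorted(range(len(sents)), key=lambda i: scores[i], reverse=True)[:k])
    return " ".join(sents[i] for i in keep)

if __name__ == "__main__":
    doc = ("The plant reduced output in March. Output fell because a boiler failed. "
           "The boiler repair took two weeks. Unrelatedly, the cafeteria menu changed.")
    print(summarize(doc, k=2))
```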
 
What dimensions can be identified in the trust formation processes in Business-to-Consumer (B-to-C) electronic commerce (e-commerce)? How do these differ in importance between academia and practitioners? The purpose of this research is to build a model of multidimensional trust formation for online exchanges in B-to-C electronic commerce. Further, to study the relative importance of the dimensions between two expert groups (academics and practitioners), two semantic network and content analyses are conducted: one for academia's perspectives and another for practitioners' perspectives of trust in B-to-C electronic commerce. The results show that the two perspectives are divergent in some ways and complementary in other ways. We believe that the two need to be combined to represent meaningful trust-building mechanisms in websites.
 
This paper presents a review of, and classification scheme for, the literature on the application of data mining techniques for the detection of financial fraud. Although financial fraud detection (FFD) is an emerging topic of great importance, a comprehensive literature review of the subject has yet to be carried out. This paper thus represents the first systematic, identifiable and comprehensive academic literature review of the data mining techniques that have been applied to FFD. Forty-nine journal articles on the subject, published between 1997 and 2008, were analyzed and classified into four categories of financial fraud (bank fraud, insurance fraud, securities and commodities fraud, and other related financial fraud) and six classes of data mining techniques (classification, regression, clustering, prediction, outlier detection, and visualization). The findings of this review clearly show that data mining techniques have been applied most extensively to the detection of insurance fraud, although corporate fraud and credit card fraud have also attracted a great deal of attention in recent years. In contrast, we find a distinct lack of research on mortgage fraud, money laundering, and securities and commodities fraud. The main data mining techniques used for FFD are logistic models, neural networks, the Bayesian belief network, and decision trees, all of which provide primary solutions to the problems inherent in the detection and classification of fraudulent data. This paper also addresses the gaps between FFD research and the needs of the industry to encourage additional research on neglected topics, and concludes with several suggestions for further FFD research.
 
Web delays are a persistent and highly publicized problem. Long delays have been shown to reduce information search, but less is known about the impact of more modest “acceptable” delays — delays that do not substantially reduce user satisfaction. Prior research suggests that as the time and effort required to complete a task increases, decision-makers tend to reduce information search at the expense of decision quality. In this study, the effects of an acceptable time delay (seven seconds) on information search behavior were examined. Results showed that increased time and effort caused by acceptable delays provoked increased information search.
 
Inspired by ever evolving information technologies and the myriad of successful business cases that reap the benefit of new technologies, many governments around the world have jumped on the bandwagon of electronic government (e-Gov). However, there has been little academic research regarding the types and conditions of e-Gov services that are acceptable to the public. This paper synthesizes a model of e-Gov compliance services acceptance by critically integrating prior research along with the distinctive characteristics of the online government services context. The study posits that different levels of task complexity involved in various e-Gov compliance processes can lead citizens to use different decision criteria and empirically examines the differing acceptance decision patterns of potential e-Government service users in two compliance service domains. The results reveal that citizens do adopt different decision criteria for different levels of task complexity, suggesting that functional usefulness of e-Gov services becomes a more important criterion for online services that involve difficult tasks. In contrast, the service provider's competence in online operations becomes a more important factor for simple tasks. Several other findings and future research directions are also discussed.
 
Electronic commerce is gaining much attention from researchers and practitioners. Although increasing numbers of products are being marketed on the web, little effort has been spent on studying what product is more suitable for marketing electronically and why. In this research, a model based on the transaction cost theory is developed to tackle the problem. It is assumed that customers will go with a channel that has lower transaction costs. In other words, whether a customer would buy a product electronically is determined by the transaction cost of the channel. The transaction cost of a product on the web is determined by the uncertainty and asset specificity. An empirical study involving eighty-six Internet users was conducted to test the model. Five products with different characteristics (book, shoes, toothpaste, microwave oven, and flower) were used in the study. The results indicate that (1) different products do have different customer acceptance on the electronic market, (2) the customer acceptance is determined by the transaction cost, which is in turn determined by the uncertainty and asset specificity, and (3) experienced shoppers are concerned more about the uncertainty in electronic shopping, whereas inexperienced shoppers are concerned with both.
 
Figures: research model; PLS results for the relationship between perceived risk and its eight first-order risk perception factors (* significant at the 0.05 level; ** significant at the 0.01 level).
The factors affecting rejection or acceptance of an emerging IT artifact such as mobile banking have piqued interest among IS researchers but remain largely unknown, due in part to consumers' trust and risk perceptions of the wireless platform. This study extends this line of research by conjointly examining multi-dimensional trust and multi-faceted risk perceptions in the initial adoption stage of the wireless Internet platform. Results of this study indicate that risk perception, derived from eight different facets, is a salient antecedent to innovative technology acceptance. Beyond prior studies, the results also provide empirical support for employing personal trait factors in analyzing acceptance of emerging IT artifacts.
 
Tables: mean, standard deviation, square root of average variance extracted, and correlations; path coefficients and t values for the EMR and CDS data, respectively.
Physician acceptance of clinical information technology (IT) is important for its successful implementation. We propose that perceived threat to professional autonomy is a salient outcome belief affecting physician acceptance of an IT. In addition, level of knowledge codification of an IT is an important technological context affecting physician acceptance. Data from a sample of U.S. physicians were collected to test the hypotheses using partial least squares analysis. Results show that perceived threat to professional autonomy has a significant, negative direct influence on perceived usefulness of an IT and on intention to use that IT. Level of knowledge codification is also an important variable. The effect of perceived threat to professional autonomy is larger for clinical decision support systems than for electronic medical records systems. Awareness of these results would help managers better manage IT implementation in health care settings.
 
Internet self-efficacy (ISE), or the beliefs in one's capabilities to organize and execute courses of Internet actions required to produce given attainments, is a potentially important factor to explain the consumers' decisions in e-commerce use, such as e-service. In this study, we introduce two types of ISE (i.e., general Internet self-efficacy and Web-specific self-efficacy) as new factors that reflect the user's behavioral control beliefs in e-service acceptance. Using these two constructs as behavioral control factors, we extend and empirically validate the Theory of Planned Behavior (TPB) for the World Wide Web (WWW) context.
 
We expanded Davis et al.'s technology acceptance model (TAM) by considering both the affective and the cognitive dimensions of attitude and the hypothesized internal hierarchy among beliefs, cognitive attitude, affective attitude and information systems (IS) use. While many of the earlier findings in TAM research were confirmed, the mediating role of affective attitude between cognitive attitude and IS use was not supported. Our results cast doubts on the use of the affective attitude construct in explaining IS use. Meanwhile, we found that cognitive attitude is an important variable to consider in explaining IS usage behaviors. Our results suggest that attitude deserves more attention in IS research for its considerable influence on the individual and organizational usage of IS.
 
While the technology acceptance model (TAM) is generally robust it does not always adequately explain user behavior. Recent studies argue that including individual characteristics in TAM can improve our understanding of those conditions under which TAM is not adequate for explaining acceptance behavior. Using this argument, we examine the effects of positive mood, one individual characteristic that significantly affects an individual's cognition and behavior, on acceptance of a DSS that supports uncertain tasks. Our results show that positive mood has a significant influence on DSS acceptance and that its influence on users' behavior is not due to a halo effect.
 
The technology acceptance model (TAM) proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also demonstrate that (1) ease of understanding and ease of finding predict ease of use, and that (2) information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users' decisions to revisit sites relevant to their jobs.
 
Figure: determinants of acceptance of agile methodologies.
It is widely believed that systems development methodologies (SDMs) can help improve the software development process. Nevertheless, their deployment often encounters resistance from systems developers. Agile methodologies, the latest batch of SDMs that are most suitable in dealing with volatile business requirements, are likely to face the same challenge as they require developers to drastically change their work habits and acquire new skills. This paper addresses what can be done to overcome the challenge to agile methodologies acceptance. We provide a critical review of the extant literature on the acceptance of traditional SDMs and agile methodologies, and develop a conceptual framework for agile methodologies acceptance based on a knowledge management perspective. This framework can provide guidance for future research into acceptance of agile methodologies, and has implications for practitioners concerned with the effective deployment of agile methodologies.
 
Many factors have led to explosive growth in the use of geographic information system (GIS) technology to support managerial decision making. Despite their power, utility, and popularity, however, GIS require a significant amount of specialized knowledge for effective use. This paper describes a GIS-based decision support system (DSS) design approach that embeds much of this knowledge in well-structured metadata and presents it to the decision maker through an appropriate interface or software agents, thereby decreasing system learning costs and improving effectiveness. The metadata design from a spatial decision support system (SDSS) is presented along with illustrations showing how the design addresses specific knowledge management (KM) problems. The paper then discusses how the knowledge management design approach can be generalized to other SDSS, to DSS in general, and to data warehouses.
 
Role-based access control (RBAC) provides flexibility to security management over the traditional approach of using user and group identifiers. In RBAC, access privileges are given to roles rather than to individual users. Users acquire the corresponding permissions when playing different roles. Roles can be defined simply as labels, but such an approach lacks the support to allow users to automatically change roles under different contexts; using a static method also adds administrative overheads in role assignment. In electronic commerce (E-Commerce) and other cooperative computing environments, access to shared resources has to be controlled in the context of the entire business process; it is therefore necessary to model dynamic roles as a function of resource attributes and contextual information. In this paper, an object-oriented organizational model, Organization Modeling and Management (OMM), is presented as an underlying model to support dynamic role definition and role resolution in E-Commerce solutions. The paper describes the OMM reference model and shows how it can be applied flexibly to capture the different classes of resources within a corporation, and to maintain the complex and dynamic roles and relationships between the resource objects. Administrative tools use the role model in OMM to define security policies for role definition and role assignment. At runtime, the E-Commerce application and the underlying resource manager query the OMM system to resolve roles in order to authorize any access attempts. Contrary to traditional approaches, OMM separates the organization model from the applications; thus, it allows independent and flexible role modeling to realistically support the dynamic authorization requirements in a rapidly changing business world.
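The idea of roles resolved dynamically from resource attributes and context, rather than assigned statically, can be sketched as predicates evaluated at authorization time. The role names, attributes, and permission sets below are invented for illustration and do not reflect OMM's actual reference model.

```python
# Sketch of dynamic role resolution: a role is a predicate over the user, the
# resource's attributes, and the request context, rather than a static label.
# All names here are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Resource:
    owner: str
    department: str
    attributes: dict = field(default_factory=dict)

ROLE_RULES = {
    "owner":         lambda user, res, ctx: user == res.owner,
    "dept_reviewer": lambda user, res, ctx: ctx.get("department") == res.department,
    "auditor":       lambda user, res, ctx: ctx.get("purpose") == "audit",
}

PERMISSIONS = {"owner": {"read", "write"}, "dept_reviewer": {"read"}, "auditor": {"read"}}

def resolve_roles(user, resource, context):
    return {role for role, rule in ROLE_RULES.items() if rule(user, resource, context)}

def authorize(user, resource, context, action):
    roles = resolve_roles(user, resource, context)
    return any(action in PERMISSIONS[r] for r in roles)

if __name__ == "__main__":
    order = Resource(owner="alice", department="sales", attributes={"value": 1200})
    print(authorize("bob", order, {"department": "sales"}, "read"))    # True
    print(authorize("bob", order, {"department": "sales"}, "write"))   # False
```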
 
As part of the operation of an Expert System, a deductive component accesses a database of facts to help simulate the behavior of a human expert in a particular problem domain. The nature of this access is examined, and four access strategies are identified. Features of each of these strategies are addressed within the framework of a logic-based deductive component and the relational model of data.
 
Corporate collaboration allows organizations to improve the efficiency and quality of their business activities. It may occur as a workflow collaboration, a supply chain collaboration, or as collaborative commerce. Collaborative commerce uses information technology to achieve a closer integration and better management of business relationships between internal and external parties. There are many emerging issues in collaborative commerce and one of them is access control. To implement collaborative commerce, interfaces between the system elements of the organizations that are involved in the collaboration are needed. However, access control policies are often inconsistent from interface to interface, and therefore conflict resolution should be considered to resolve multilevel access control policy problems. Many studies propose different rules for the resolution of the conflict between access control policies, but little attention has been given to the relationship between the groups or subject classes that represent the different types of corporate collaboration. In this paper, the format of corporate collaboration is considered, and the conflicts between the access control policies of interfaces are addressed. Some general guidelines, other than those that relate to minimum privilege on duty and maximum privilege on sharing, are proposed.
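One generic way to resolve conflicting decisions across collaboration interfaces is a most-restrictive ("deny overrides") combining rule, sketched below. This is a common baseline for illustration, not the specific guidelines proposed in the paper.

```python
# Sketch of combining access-control decisions across collaboration interfaces
# with a most-restrictive ("deny overrides") rule. A generic combining
# strategy, shown only to make the conflict-resolution problem concrete.

def combine_deny_overrides(decisions):
    """decisions: iterable of 'permit', 'deny', or 'not_applicable'."""
    decisions = list(decisions)
    if "deny" in decisions:
        return "deny"
    if "permit" in decisions:
        return "permit"
    return "not_applicable"

def decide(request, interface_policies):
    per_interface = [policy(request) for policy in interface_policies]
    return combine_deny_overrides(per_interface)

if __name__ == "__main__":
    def supplier_policy(req):
        return "permit" if req["role"] == "planner" else "deny"

    def buyer_policy(req):
        return "permit" if req["resource"] == "forecast" else "not_applicable"

    print(decide({"role": "planner", "resource": "forecast"},
                 [supplier_policy, buyer_policy]))        # 'permit'
    print(decide({"role": "guest", "resource": "forecast"},
                 [supplier_policy, buyer_policy]))        # 'deny'
```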
 
Top-cited authors
Raghav Rao
  • University of Texas at San Antonio
Dan J. Kim
  • University of North Texas
Chao-Min Chiu
  • National Sun Yat-sen University
Meng-Hsiang Hsu
  • National Kaohsiung University of Science and Technology (NKUST)
Salvatore T. March
  • Vanderbilt University