Information Systems Research

Published by INFORMS
Online ISSN: 1526-5536
Print ISSN: 1047-7047
We study how firm boundaries are affected by the reduction in search costs when business-to-business electronic markets are adopted. Our paper analyzes a multi-tier industry in which upstream parts suppliers incur procurement search costs, and downstream manufacturers incur incentive contracting costs with these parts suppliers. We develop a model that integrates search theory into the hidden-action principal-agent model and characterize the optimal contract, showing that the delegation of search results in an outcome analogous to an effective increase in the search cost of the intermediary, reflected in the magnitude of the cutoff price in the second-best stopping rule. This contract is used to specify the manufacturer's make versus buy decision, and to analyze how the technological changes associated with electronic markets affect vertical organizational scope. Our main results show that when search is information-intensive, electronic markets will result in constant decreases in search costs that reduce the vertical scope of organizations. In contrast, when search is communication-intensive, electronic markets will result in proportionate reductions in search costs that lead to an increase in vertical integration; the latter outcome also occurs if search costs converge. We also discuss the implications of our results for the general problem of designing contracts that optimally delegate costly search to an intermediary.
In this paper, we analyze a model of usage pricing for digital products with discontinuous supply functions. This model characterizes a number of information technology-based products and services for which variable increases in demand are fulfilled by the addition of blocks of computing or network infrastructure. Such goods are often modeled as information goods with zero variable costs; in fact, the actual cost structure resembles a mixture of zero marginal costs and positive periodic fixed costs. This paper discusses the properties of a general solution for the optimal nonlinear pricing of such digital goods. We show that the discontinuous cost structure can be accrued as a virtual constant variable cost. This paper applies the general solution to solve two related extensions by first investigating the optimal technology capacity planning when the cost function is both discontinuous and declining over time, and then characterizing the optimal costing for the discontinuous supply when it is shared by several business profit centers. Our findings suggest that the widely adopted full cost recovery policies are typically suboptimal.
Given that Information Technology (IT) security has emerged as an important issue in the last few years, the subject of security information sharing among firms, as a tool to minimize security breaches, has gained the interest of practitioners and academics. To promote the disclosure and sharing of cyber-security information among firms, the US federal government has encouraged the establishment of many industry-based Information Sharing & Analysis Centers (ISACs) under Presidential Decision Directive 63. Sharing security vulnerabilities and technological solutions related to methods for preventing, detecting, and correcting security breaches is the fundamental goal of the ISACs. However, there are a number of interesting economic issues that will affect the achievement of this goal. Using game theory, we develop an analytical framework to investigate the competitive implications of sharing security information and investments in security technologies. We find that security technology investments and security information sharing act as "strategic complements" in equilibrium. Our results suggest that information sharing is more valuable when product substitutability is higher, implying that such sharing alliances yield greater benefits in more competitive industries. We also highlight that the benefits from such information sharing alliances increase with the size of the firm. We compare the levels of information sharing and technology investments obtained when firms behave independently (Bertrand-Nash) to those selected by an ISAC which maximizes social welfare or joint industry profits. Our results help us predict the consequences of establishing organizations such as ISACs, CERT, or InfraGard by the federal government.
We study the impact of employment quotas on firms' demand for disabled workers. The Austrian Disabled Persons Employment Act (DPEA) requires firms to provide at least one job to a disabled worker per 25 non-disabled workers, a rule which is strictly enforced by non-compliance taxation. We find that, as a result of the discontinuous nature of the non-compliance tax, firms exactly at the quota threshold employ 0.05 (20% in relative terms) more disabled workers than firms just below the threshold, an effect that is unlikely driven by purposeful selection below the threshold. The flat-rate nature of the non-compliance tax generates strong employment effects for low-wage firms and weak effects for high-wage firms. We also find that growing firms passing the quota threshold react with a substantial time lag, but the magnitude of the long-run effect is similar to the one found in cross-section contrasts.
Hypothesis 1 Summary
Hypothesis 2
Hypothesis 2 Summary
Large amounts of resources have been and continue to be invested in information technology (IT). Much of this investment is made on the basis of faith that returns will occur. This study presents the results of an empirical test of the performance effects of IT investment in the manufacturing sector. Six years of historical data on IT investment and performance were collected for 33 valve manufacturing firms from the CEO, the controller and the production manager in each firm. Investment was perceptually categorized by management objective (i.e., strategic, informational and transactional) and tested against four measures of performance (sales growth, return on assets, and two measures of labor productivity). Heavy use of transactional IT investment was found to be significantly and consistently associated with strong firm performance over the six years studied. Heavy use of strategic IT was found to be neutral in the long term and associated only with relatively poorly performing firms in the short term. This study suggests that early adopters of strategic IT could have spectacular success but once the technology becomes common, the competitive advantage is lost. In addition, the context of the firm was included in the analysis. Conversion effectiveness, which measures the quality of the firm-wide management and commitment to IT, was found to be a significant moderator between strategic IT investment and firm performance.
In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two-year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives. A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, self-organizing, and learning.
Introduction
Visualizing information to promote ease of comprehension and dissection is a primary goal of user interface research. Information---often very abstract---must be transformed into coherent visual representations. These visualizations are customarily read-only; amplification of information-based work processes can be obtained, however, by allowing direct manipulation of the visualization itself. Treemaps graphically represent hierarchical information via a two-dimensional rectangular map, providing compact visual representations of complex data spaces through both area and color [9]. Their efficiency for particular data searching tasks has been tested through controlled studies [12, 13], with primary benefits seen for two types of tasks: location of outliers in large hierarchies and identification of cause-effect relationships within hierarchies. By extending the treemap into a "read/write" graphic through direct manipulation tools, the user is given the capability
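As a rough illustration (not the paper's own implementation), the classic "slice-and-dice" treemap layout can be sketched in a few lines: each leaf receives a rectangle whose area is proportional to its size, and the slicing direction alternates with depth in the hierarchy.

```python
def size(node):
    """Size of a node: a leaf's value, or the sum of its children's sizes."""
    return node if isinstance(node, (int, float)) else sum(size(c) for c in node)

def treemap(node, x, y, w, h, depth=0):
    """Lay out a nested-list hierarchy inside the rectangle (x, y, w, h).

    Returns one (x, y, w, h) rectangle per leaf, with area proportional
    to the leaf's share of the total size.
    """
    if isinstance(node, (int, float)):        # leaf: occupy the whole rectangle
        return [(x, y, w, h)]
    rects, offset, total = [], 0.0, size(node)
    for child in node:
        frac = size(child) / total
        if depth % 2 == 0:                    # slice horizontally at even depths
            rects += treemap(child, x + offset, y, w * frac, h, depth + 1)
            offset += w * frac
        else:                                 # slice vertically at odd depths
            rects += treemap(child, x, y + offset, w, h * frac, depth + 1)
            offset += h * frac
    return rects
```

For example, `treemap([[3, 1], [2, 2]], 0, 0, 100, 100)` splits the 100x100 canvas in half between the two subtrees, then subdivides each half vertically among its leaves.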
Top-Ten Future Research Topics in Flexible and Distributed IS Development
Process flexibility and globally distributed development are two major current trends in software and information systems development (ISD). Agile development methods are examples of apparently major success stories that seem to have run counter to the prevailing wisdom in information systems (IS) and software engineering. The papers in the special issue are organized in a logical sequence. Agility in ISD is a central theme of the special issue, and its definition is at the core of many other papers in the issue. Although the topics identified in this study partly mirror the content of the special issue, they also advance the state of the art by providing a fresh view of an emerging research area. Although we believe that this special issue has indeed advanced the field considerably, there is clearly a lot more that needs to be done.
Innovation researchers have known for some time that a new information technology may be widely acquired, but then only sparsely deployed among acquiring firms. When this happens, the observed pattern of cumulative adoptions will vary depending on which event in the assimilation process (i.e., acquisition or deployment) is treated as the adoption event. Instead of mirroring one another, a widening gap---termed here an assimilation gap---will exist between the cumulative adoption curves associated with the alternatively conceived adoption events. When a pronounced assimilation gap exists, the common practice of using cumulative purchases or acquisitions as the basis for diffusion modeling can present an illusory picture of the diffusion process---leading to potentially erroneous judgments about the robustness of the diffusion process already observed, and of the technology's future prospects. Researchers may draw inappropriate theoretical inferences about the forces driving diffusion. Practitioners may commit to a technology based on a mistaken belief that pervasive adoption is inevitable, when it is not. This study introduces the assimilation gap concept, and develops a general operational measure derived from the difference between the cumulative acquisition and deployment patterns. It describes how two characteristics---increasing returns to adoption and knowledge barriers impeding adoption---separately and in combination may serve to predispose a technology to exhibit a pronounced gap. It develops techniques for measuring assimilation gaps, for establishing whether two gaps are significantly different from each other, and for establishing whether a particular gap is absolutely large enough to be of substantive interest. Finally, it demonstrates these techniques in an a...
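A hypothetical numeric sketch of the idea (the counts below are invented, and the paper's actual operational measure may be defined differently): the gap can be read as the accumulated difference between the cumulative acquisition curve and the cumulative deployment curve.

```python
# Invented cumulative adopter counts by period: acquisitions run ahead of
# deployments, producing a widening assimilation gap.
acquisitions = [5, 12, 20, 31, 40, 52]
deployments = [1, 3, 7, 12, 18, 25]

def assimilation_gap(acq, dep):
    """Total area between the two cumulative curves,
    summed as per-period differences (adopter-periods)."""
    return sum(a - d for a, d in zip(acq, dep))

gap = assimilation_gap(acquisitions, deployments)
```

A gap near zero would indicate that deployment tracks acquisition closely; a large positive gap flags a technology that is bought far more readily than it is put to use.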
A statechart representation of a work process
Information acts classified by type: top two levels
The "write the msg" work process
A statechart representation of a partially completed work process
Relations among uses of information acts
In this paper we discuss a new kind of information system that helps people be ready for information work and locate documents. This system differs from a traditional information retrieval system by relying extensively on descriptions of both how a document is used and the purposes it is used for. A formal language represents this information.
1 Introduction
The discipline of information retrieval is focused on locating needed information. After several decades of research, the field has made many advances in both commercial and laboratory settings. Predominantly, these efforts have centered on describing the content of documents, using keywords or other representational systems; to a lesser degree, other efforts have concentrated on a document's contextual information, such as its author or publisher. We claim that document descriptions can be strengthened and retrieval improved by paying attention to how documents are used and what their purposes are. We describe a formal languag...
We employ the theory of incomplete contracts to examine the relationship between ownership and investment in electronic networks such as the Internet and interorganizational information systems. Electronic networks represent an institutional structure that has resulted from the introduction of information technology in industrial and consumer markets. Ownership of electronic networks is important because it affects the level of network-specific investments, which in turn determine the profitability and in some cases the viability of these networks. In our analysis we define an electronic network as a set of participants and a portfolio of assets. The salient concept in this perspective is the degree to which network participants are indispensable in making network assets productive. We derive three main results: First, if one or more assets are essential to all network participants, then all the assets should be owned together. Second, participants that are indispensable to an asset essential to all participants should own all network assets. Third and most important, in the absence of an indispensable participant, and as long as the cooperation of at least two participants is necessary to create value, sole ownership is never the best form of ownership for an electronic network. This latter result implies that as the leading network participants become more dispensable, we should see an evolution towards forms of joint ownership. Copyright 1997 by Institute for Operations Research and the Management Sciences. Keywords: incomplete contracts, Internet ownership, investment externalities, network externalities, network investment, network ownership
The application of fundamental option pricing models (OPMs), such as the binomial and the Black-Scholes models, to problems in information technology (IT) investment decision making has been the subject of some debate in the last few years. Prior research, for example, has made the case that pricing "real options" in real world operational and strategic settings offers the potential for useful insights in the evaluation of irreversible investments under uncertainty. However, most authors in the IS literature have made their cases using illustrative, rather than actual real world examples, and have always concluded with caveats and questions for future research about the applicability of such methods in practice. This paper makes three important contributions in this context: (1) it provides a formal theoretical grounding for the validity of the Black-Scholes option pricing model in the context of the spectrum of capital budgeting methods that might be employed to assess IT investments; (2) it shows why the assumptions of both the Black-Scholes and the binomial option pricing models place constraints on the range of IT investment situations that one can evaluate that are similar to those implied by traditional capital budgeting methods such as discounted cash flow analysis; and (3) it presents the first application of the Black-Scholes model that uses a real world business situation involving IT as its test bed. Our application focuses on an analysis of the timing of the deployment of point-of-sale (POS) debit services by the Yankee 24 shared electronic banking network of New England. This application enables us to make the case for the generalizability of the approach we discuss to four IT investment settings.
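For readers unfamiliar with the model the abstract refers to, the standard Black-Scholes value of a European call option can be computed in a few lines. In the real-options framing, one reads S as the present value of the project's payoffs, K as the investment cost, T as the deferral horizon, and sigma as the volatility of the payoff value; the numbers below are purely illustrative, not drawn from the Yankee 24 case.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes value of a European call option.

    S: current value of the underlying (PV of project payoffs)
    K: strike price (investment cost)
    r: continuously compounded risk-free rate
    sigma: annualized volatility of the underlying
    T: time to maturity in years (deferral horizon)
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative at-the-money option: S = K = 100, r = 5%, sigma = 20%, T = 1 year.
value = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)  # about 10.45
```

The positive value of the at-the-money option captures the intuition behind real-options analysis: even when a project's payoff only just covers its cost today, the right to defer an irreversible investment under uncertainty has worth.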
The dynamic environment of an organization makes management difficult. This is especially true in information systems (IS) departments. A new theoretical model, based on structured interviews with IS executives, proposes ways that some dimensions of the environment create problems for IS management. IS managers develop coping mechanisms to attenuate or prevent these problems. They use these coping mechanisms either to resolve the problems directly or to influence the environment to prevent the problems. The new theoretical model facilitates an organized study of the relationship between the IS department and its dynamic environment.
The Technology Acceptance Model and two variations of the Theory of Planned Behavior were compared to assess which model best helps to understand usage of information technology. The models were compared using student data collected from 786 potential users of a computer resource center. Behavior data was based on monitoring 3,780 visits to the resource center over a 12-week period. Weighted least squares estimation revealed that all three models performed well in terms of fit and were roughly equivalent in terms of their ability to explain behavior. Decomposing the belief structures in the Theory of Planned Behavior provided a moderate increase in the explanation of behavioral intention. Overall, the results indicate that the decomposed Theory of Planned Behavior provides a fuller understanding of behavioral intention by focusing on the factors that are likely to influence systems use through the application of both design and implementation strategies. Copyright © 1995 Institute for Operations Research and the Management Sciences.
The first British census was taken in 1801 and was processed by a handful of clerks in a tiny office. By the mid-1800s, the census had evolved into an elaborate Victorian data-processing operation involving over a hundred clerks, each of whom had a specialized information-processing role. In 1911 the census was mechanized and the routine data processing was taken over by punched-card machines. This paper explores the changes in information technology within the census over a period of more than a century, and the resulting organizational changes. A contrast is drawn with the U.S. census, which mechanized in 1890, on the adoption of new technology.
The flow of manuscripts through the editorial offices of an academic journal can provide valuable information both about the performance of the journal as an instrument of its field and about the structure and evolution of the field itself. We undertook an analysis of the manuscripts submitted to the journal Information Systems Research (ISR) during its start-up years, 1987 through 1992, in an effort to provide a foundation for examining the performance of the journal, and to open a window on to the information systems (IS) field during that period. We identified the primary research question for each of 397 submissions to ISR, and then categorized the research questions using an iterative classification procedure. Ambiguities in classification were exploited to identify relationships among the categories, and some overarching themes were exposed in order to reveal levels of structure in the journal's submissions stream. We also examined the distribution of submissions across categories and over the years of the study period, and compared the structures of the submissions stream and the publication stream. We present the results with the goal of broadening the perspectives which individual members of the IS research community have of ISR and to help fuel community discourse about the nature and proper direction of the field. We provide some guidelines to assist readers in this interpretive task, and offer some observations and speculations to help launch the discussion.
Interactions Between Competitive Actions and Capabilities
Survival Model Results
This study examines why firms fail or survive in the volatile software industry. We provide a novel perspective by considering how software firms' capabilities and their competitive actions affect their ultimate survival. Drawing on the resource-based view (RBV), we conceptualize capabilities as a firm's ability to efficiently transform input resources into outputs, relative to its peers. We define three critical capabilities of software producing firms: Research and Development (RD), Marketing (MK) and
We are pleased that our effort has received such interest, and thank the editor for allowing us the opportunity to provide our thoughts regarding the questions raised by Allport and Kerler (A&K) in their research note (2003). A&K's paper has triggered a response on two fronts. The first concerns technical issues specific to our study (Salisbury et al. 2002), and the second is a more general question that would seem implicit in their paper: What is the appropriate balance between theory and data in scale development?
For more than a decade NEC dominated the Japanese PC market with its PC-98 architecture, which was incompatible both with its major Japanese rivals and the global PC standard. However, NEC was powerless to prevent the introduction of Japanese versions of Windows 3.1 and 95 that ran on its competitors' architectures as well as on the PC-98, unifying the Japanese PC market and creating a common set of application programming interfaces for all Intel-based Japanese PCs. The introduction of Windows rendered obsolete the large DOS-based software library that had provided strong positive externalities for the NEC architecture. Absent those advantages, the market share of the PC-98 standard fell from 60% to 33% in five years, and NEC finally abandoned the PC-98 in favor of the global standard. An examination of the unusual rise and fall of the PC-98 shows how victory in a standards competition can be negated by the introduction of a new architectural layer that spans two or more previously incompatible architectures. (Standards Competition; Computer Architecture; Application Programming Interface; Network Externalities; Personal Computers; Japan)
In most institutions faculty members are expected to teach, research, and perform community service. The emphasis placed on each activity is expected to vary considerably between institutions and departments. To examine this expectation, a nationwide survey was made of both American Assembly of Collegiate Schools of Business (AACSB) institutions and non-AACSB institutions. Participants rated 80 publications for their value in reviews of research performance, and responded to a series of questions pertaining to the importance of publication types in the merit compensation, promotion, and tenure processes. These results were made available to the IS community, and approximately 150 comments were obtained. The survey results and the comments suggest that there might be some convergence in expectations of academic performance across institutions, as research-oriented institutions require better performance in teaching, teaching-oriented institutions require better performance in research, and all institutions impose greater service demands on IS faculty.
The five-factor model (FFM) of personality has been used to great effect in management and psychology research to predict attitudes, cognitions, and behaviors, but has largely been ignored in the IS field. We demonstrate the potential utility of incorporating this model into IS research by using the FFM personality factors in the context of technology acceptance. We propose a dispositional perspective to understanding user attitudes and beliefs, and examine the effect of user personality---captured using the FFM's big five factors---on both the perceived usefulness of and subjective norms toward the acceptance and use of technology. Using logged usage data from 180 new users of a collaborative technology, we found general support for our hypotheses that the FFM personality dimensions can be useful predictors of users' attitudes and beliefs. We also found strong support for the relationships between intention to use and system use.
In general, perceptions of information systems (IS) success have been investigated within two primary research streams--the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).
In this study, we consider the online consumer as both a shopper and a computer user. We test constructs from information systems (Technology Acceptance Model), marketing (Consumer Behavior), and psychology (Flow and Environmental Psychology) in an integrated theoretical framework of online consumer behavior. Specifically, we examine how emotional and cognitive responses to visiting a Web-based store for the first time can influence online consumers' intention to return and their likelihood to make unplanned purchases. The instrumentation shows reasonably good measurement properties and the constructs are validated as a nomological network. A questionnaire-based empirical study is used to test this nomological network. Results confirm the double identity of the online consumer as a shopper and a computer user because both shopping enjoyment and perceived usefulness of the site strongly predict intention to return. Our results on unplanned purchases are not conclusive. We also test some individual and Web site factors that can affect the consumer's emotional and cognitive responses. Product involvement, Web skills, challenges, and use of value-added search mechanisms all have a significant impact on the Web consumer. The study provides a more rounded, albeit partial, view of the online consumer and is a significant step towards a better understanding of consumer behavior on the Web. The validated metrics should be of use to researchers and practitioners alike. (TAM; Flow Theory; Nomological Validity; Web Skills; Value-Added Search Mechanisms; Online Consumer Behavior)
Theoretical Framework for the Determinants of Perceived Ease of Use
(b) Reliability and Discriminant Validity Coefficients at T2
Much previous research has established that perceived ease of use is an important factor influencing user acceptance and usage behavior of information technologies. However, very little research has been conducted to understand how that perception forms and changes over time. The current work presents and tests an anchoring and adjustment-based theoretical model of the determinants of system-specific perceived ease of use. The model proposes control (internal and external, conceptualized as computer self-efficacy and facilitating conditions, respectively), intrinsic motivation (conceptualized as computer playfulness), and emotion (conceptualized as computer anxiety) as anchors that determine early perceptions about the ease of use of a new system. With increasing experience, it is expected that system-specific perceived ease of use, while still anchored to the general beliefs regarding computers and computer use, will adjust to reflect objective usability, perceptions of external control specific to the new system environment, and system-specific perceived enjoyment. The proposed model was tested in three different organizations among 246 employees using three measurements taken over a three-month period. The proposed model was strongly supported at all points of measurement, and explained up to 60% of the variance in system-specific perceived ease of use, twice as much as is explained by existing models. Important theoretical and practical implications of these findings are discussed.
Information systems (IS) cannot be effective unless they are used. However, people sometimes do not use systems that could potentially increase their performance. This study compares two models that predict an individual's intention to use an IS: the technology acceptance model (TAM) and the theory of planned behavior (TPB). The comparison was designed to be as fair as possible, not favoring one model over the other. Both TAM and TPB predicted intention to use an IS quite well, with TAM having a slight empirical advantage. TAM is easier to apply, but only supplies very general information on users' opinions about a system. TPB provides more specific information that can better guide development.
This paper evaluates the free-access policy as a control mechanism for internal networks. We derive the optimal message pricing scheme, compare it to the free-access policy, and study the associated net-value loss. We derive uniform upper bounds on this value loss, and apply our results to the polar implementations of Ethernet and token ring networks. The results show that the free-access policy is often attractive.
We analyze a large-scale custom software effort, the Worm Community System (WCS), a collaborative system designed for a geographically dispersed community of geneticists. There were complex challenges in creating this infrastructural tool, ranging from simple lack of resources to complex organizational and intellectual communication failures and tradeoffs. Despite high user satisfaction with the system and interface, and extensive user needs assessment, feedback, and analysis, many users experienced difficulties in signing on and use. The study was conducted during a time of unprecedented growth in the Internet and its utilities (1991-1994), and many respondents turned to the World Wide Web for their information exchange. Using Bateson's model of levels of learning, we analyze the levels of infrastructural complexity involved in system access and designer-user communication. We analyze the connection between systems development aimed at supporting specific forms of collaborative knowledge work, local organizational transformation, and large-scale infrastructural change.
Although much contemporary thought considers advanced information technologies as either determinants or enablers of radical organizational change, empirical studies have revealed inconsistent findings to support the deterministic logic implicit in such arguments. This paper reviews the contradictory empirical findings both across studies and within studies, and proposes the use of theories employing a logic of opposition to study the organizational consequences of information technology. In contrast to a logic of determination, a logic of opposition explains organizational change by identifying forces both promoting change and impeding change. Four specific theories are considered: organizational politics, organizational culture, institutional theory, and organizational learning. Each theory is briefly described to illustrate its usefulness to the problem of explaining information technology's role in organizational change. Four methodological implications of using these theories are also discussed: empirical identification of opposing forces, statement of opposing hypotheses, process research, and employing multiple interpretations.
The need to ensure reliability of data in information systems has long been recognized. However, recent accounting scandals and the subsequent requirements enacted in the Sarbanes-Oxley Act have made data reliability assessment of critical importance to organizations, particularly for accounting data. Using the accounting functions of management information systems as a context, this paper develops an interdisciplinary approach to data reliability assessment. Our work builds on the literature in accounting and auditing, where reliability assessment has been a topic of study for a number of years. While formal probabilistic approaches have been developed in this literature, they are rarely used in practice. The research reported in this paper attempts to strike a balance between the informal, heuristic-based approaches used by auditors and formal, probabilistic reliability assessment methods. We develop a formal, process-oriented ontology of an accounting information system that defines its components and semantic constraints. We use the ontology to specify data reliability assessment requirements and develop mathematical-model-based decision support methods to implement these requirements. We provide preliminary empirical evidence that the use of our approach improves the efficiency and effectiveness of reliability assessments. Finally, given the recent trend toward specifying information systems using executable business process models (e.g., business process execution language), we discuss opportunities for integrating our process-oriented data reliability assessment approach, developed in the accounting context, in other IS application contexts.
It is well known, of course, that the assessment of this month's economic activity will improve with the passage of time. The same situation exists for many of the inputs to managerial and strategic decision processes. Information regarding some situation or activity at a fixed point in time becomes better with the passage of time. However, as a consequence of the dynamic nature of many environments, the information also becomes less relevant over time. This balance between using current but inaccurate information or accurate but outdated information we call the accuracy-timeliness tradeoff. Through analysis of a generic family of environments, procedures are suggested for reducing the negative consequences of this tradeoff. In many of these situations, rather general knowledge concerning relative weights and shapes of functions is sufficient to determine optimizing strategies. Copyright © 1995 Institute for Operations Research and the Management Sciences.
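The accuracy-timeliness tradeoff described above lends itself to a toy illustration: error about a fixed point in time shrinks as the report is delayed, while the information's relevance decays with age, so an intermediate delay can minimize total loss. The functional forms and weights below are assumed for illustration only; they are not the paper's model.

```python
# Toy sketch of the accuracy-timeliness tradeoff (assumed functional
# forms, not the paper's model): inaccuracy falls with reporting delay,
# irrelevance grows with it, and we pick the delay minimizing the sum.
def total_loss(delay, a=1.0, b=0.2):
    inaccuracy = a / (1 + delay)   # estimates improve as time passes
    irrelevance = b * delay        # but the information ages
    return inaccuracy + irrelevance

# With these weights, the minimum lies at a small positive delay:
# waiting beats both reporting immediately and waiting indefinitely.
best_delay = min(range(50), key=total_loss)
```

Under these assumed weights, waiting one period beats reporting immediately, illustrating why "current but inaccurate" is not always the better choice.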
The sharing of databases either within or across organizations raises the possibility of unintentionally revealing sensitive relationships contained in them. Recent advances in data-mining technology have increased the chances of such disclosure. Consequently, firms that share their databases might choose to hide these sensitive relationships prior to sharing. Ideally, the approach used to hide relationships should be impervious to as many data-mining techniques as possible, while minimizing the resulting distortion to the database. This paper focuses on frequent item sets, the identification of which forms a critical initial step in a variety of data-mining tasks. It presents an optimal approach for hiding sensitive item sets, while keeping the number of modified transactions to a minimum. The approach is particularly attractive as it easily handles databases with millions of transactions. Results from extensive tests conducted on publicly available real data and data generated using IBM's synthetic data generator indicate that the approach presented is very effective, optimally solving problems involving millions of transactions in a few seconds.
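The core idea, lowering a sensitive itemset's support below the mining threshold while altering as few transactions as possible, can be sketched greedily. This is a simplified sketch (the function names and the arbitrary item choice are illustrative, not the paper's optimal formulation):

```python
# Hypothetical sketch of itemset hiding: push the support of a sensitive
# itemset below min_support by deleting one of its items from the minimum
# number of supporting transactions. The paper's approach optimizes which
# transactions and items to modify; here the choice is arbitrary.

def support(transactions, itemset):
    """Count transactions that contain every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def hide_itemset(transactions, sensitive, min_support):
    """Return modified copies of the transactions such that the
    sensitive itemset's support falls below min_support."""
    txns = [set(t) for t in transactions]
    supporting = [t for t in txns if sensitive <= t]
    # Number of supporting transactions that must stop supporting it.
    excess = len(supporting) - (min_support - 1)
    for t in supporting[:max(excess, 0)]:
        # Drop one item of the sensitive itemset (arbitrary pick here;
        # an optimal method would minimize distortion to other patterns).
        t.discard(next(iter(sensitive)))
    return txns
```

For example, if three of four transactions contain {a, b} and the mining threshold is 2, modifying two of them suffices to hide the pattern.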
We would like to thank the many reviewers who devoted their time, energy, and expertise to Information Systems Research, and whose dedicated efforts play a key role in ensuring the quality of the journal. The following individuals reviewed manuscripts in 2009.
The determination of information requirements is one of the most crucial stages in the software design and development process (Montezemi 1988). It is during this stage that the greatest degree of interaction occurs between the analyst and the user (Lauer et al. 1992). Regardless of the system development method employed, the functional success of many aspects of requirements determination ultimately rests on how well the user(s) and analyst(s) communicate (Holtzblatt and Beyer 1995). The purpose of this paper is to report the results obtained from a laboratory experiment that investigated the effects of a semantic structuring process of inquiry on the process of interview-derived information acquisition and the subsequent overall correctness of the logical representation of the facts obtained. The study focused on the specific question types used by systems analysts and the role their semantic construction played in representing the information flows in a business system. Three underlying semantic patterns of questions emerged from the analysis. The results showed that certain question types were associated with increased accuracy of logical representations regardless of analyst experience level. Further, the semantic and process patterns that emerged were also directly related to accurate representation of facts and demonstrated an experience-level independence. The results indicate that disciplined questioning strategies are not necessarily learned from practice and that they can be improved via structured training. Each of the patterns provides insight into the questioning process employed and the effectiveness of different strategies of inquiry. Implications for both the practitioner and the academic research communities with regard to analyst interview behavior are discussed.
Retrieval of a set of cases similar to a new case is a problem common to a number of machine learning approaches such as nearest-neighbor algorithms, conceptual clustering, and case-based reasoning. A limitation of most case retrieval algorithms is their lack of attention to information acquisition costs. When information acquisition costs are considered, cost reduction is hampered by the practice of separating concept formation and retrieval strategy formation. To demonstrate the above claim, we examine two approaches. The first approach separates concept formation and retrieval strategy formation. To form a retrieval strategy in this approach, we develop the CRlc (case retrieval loss criterion) algorithm, which selects attributes in ascending order of expected loss. The second approach jointly optimizes concept formation and retrieval strategy formation using a cost-based variant of the ID3 algorithm (ID3c). ID3c builds a decision tree wherein attributes are selected using entropy reduction per unit information acquisition cost. Experiments with four data sets are described in which algorithm, attribute cost coefficient of variation, and matching threshold are factors. The experimental results demonstrate that (i) jointly optimizing concept formation and retrieval strategy formation has substantial benefits, and (ii) using cost considerations can significantly reduce information acquisition costs, even if concept formation and retrieval strategy formation are separated.
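The ID3c selection criterion, entropy reduction per unit information acquisition cost, can be sketched as follows. This is a minimal illustration of the criterion only (the data and attribute names are hypothetical), not a full decision-tree builder:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Entropy reduction from partitioning the rows on one attribute."""
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    n = len(rows)
    return entropy(labels) - sum(len(ys) / n * entropy(ys)
                                 for ys in by_value.values())

def id3c_select(rows, labels, costs):
    """ID3c-style choice: maximize entropy reduction per unit
    information acquisition cost (sketch of the criterion only)."""
    return max(costs, key=lambda a: info_gain(rows, labels, a) / costs[a])
```

Dividing gain by cost means a cheap, moderately informative attribute can be tested before an expensive, highly informative one, which is what drives the acquisition-cost savings the abstract reports.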
This paper investigates a problem of resource allocation in which a manager allocates discrete resources among multiple agents in a team in a socially optimal manner. In making this allocation, the manager needs to understand the agents' preference orders over the discrete resources. The manager does this by adopting an information acquisition policy. Three different information acquisition policies are investigated here. For each policy, the trade-off between the amount of information elicited and the costs involved is studied.
Computer skills are key to organizational performance, and past research indicates that behavior modeling is a highly effective form of computer skill training. The present research develops and tests a new theoretical model of the underlying observational learning processes by which modeling-based training interventions influence computer task performance. Observational learning processes are represented as a second-order construct with four dimensions (attention, retention, production, and motivation). New measures for these dimensions were developed and shown to have strong psychometric properties. The proposed model controls for two pretraining individual differences (motivation to learn and self-efficacy) and specifies the relationships among three training outcomes (declarative knowledge, post-training self-efficacy, and task performance). The model was tested using PLS on data from an experiment (N = 95) on computer spreadsheet training. As hypothesized, observational learning processes significantly influenced training outcomes. A representative modeling-based training intervention (retention enhancement) significantly improved task performance through its specific effects on the retention processes dimension of observational learning. The new model provides a more complete theoretical account of the mechanisms by which modeling-based interventions affect training outcomes, which should enable future research to systematically evaluate the effectiveness of a wide range of modeling-based training interventions. Further, the new instruments can be used by practitioners to refine ongoing training programs.
Recent empirical work by Compeau and Higgins (1995) investigated the role of behavioral modeling training in the development of computer skills. Their efforts have provided insight into our understanding of the role of computer self-efficacy (CSE) and behavioral modeling (BM) techniques with regard to training effectiveness. Contrary to their expectations, however, several of the hypothesized relationships were not supported, especially those relating to outcome expectancy. In this paper, an empirically derived model of the CSE construct proposed by Marakas, Yi, and Johnson (1998) is offered to highlight potential theoretical, methodological, and measurement issues which may have contributed to or exacerbated the unexpected results obtained in the Compeau and Higgins study. The empirical work contained herein is intended to both replicate and extend the work of Compeau and Higgins and to assist in resolving several key issues left unsettled by their seminal work in this area.
A cognitive learning perspective is used to develop and test a model of the relationship between information acquisition and learning in the executive support systems (ESS) context. The model proposes two types of learning: mental model maintenance, in which new information fits into existing mental models and confirms them; and mental model building, in which mental models are changed to accommodate new information. It also proposes that information acquisition objectives determine the type of learning that is possible. When ESS are used to answer specific questions or solve well-defined problems, they help to fine-tune operations and verify assumptions; in other words, they help to maintain current mental models. However, ESS may be able to challenge fundamental assumptions and help to build new mental models if executives scan through them to help formulate problems and foster creativity. Thirty-six interviews with executive ESS users at seven organizations and a survey of 361 users at 18 additional organizations are used to develop scales to measure the model's constructs and provide support for its relationships. These results support the model's prediction that mental model building is more likely with scanning than with focused search. ESS also appear to contribute to mental model maintenance much more often than they do to mental model building. Without a clear focus on mental model building, it seems that business as usual is the more likely outcome.
This research suggests that providing decision support systems to satisfy individual managers' desires will not have a large effect on either the efficiency or the effectiveness of problem solving. Designers should, instead, concentrate on determining the characteristics of the tasks that problem solvers must address, and on supporting those tasks with the appropriate problem representations and support tools. Sufficient evidence now exists to suggest that the notion of cognitive fit may be one aspect of a general theory of problem solving. Suggestions are made for extending the notion of fit to more complex problem-solving environments.
As the IT workforce becomes global, it is increasingly important to understand the factors affecting IT workers' compensation in labor markets distributed across the globe. Ang et al. (2002) published the first in-depth analysis of compensation for IT professionals juxtaposing human capital endowment (education and experience) and institutional determinants (firm's size, industry, and sector) of compensation in the Singaporean economy. In this paper, we explore the influence of particular national economies on IT workers' compensation. We draw on research into the roots of wage differentials in labor economics and build on the Ang et al. research to present a multilevel analysis of IT workers' compensation in the United States, analyzing the U.S. Bureau of Labor Statistics' Current Population Survey (CPS) data for 1997, 2001, and 2003. We find that, while institutional differences in Singapore mattered only in conjunction with individual factors, in the U.S. institutional differences had a direct effect on IT workers' wages. As tightness of IT labor supply decreased in the United States in the early 2000s, the influence of a firm's size on wages became more pronounced. Also, female IT workers and workers without a college degree fared worse than their male and college-educated counterparts as the IT job market slowed down. We suggest that factors such as presence of job search friction, diversity in the educational system, geographical differences in cost of living, labor mobility, and shortages in IT labor supply vis-à-vis demand help explain the differences among countries. We conclude by outlining the implications of these findings for IT workers, firms, and policy makers.
Researchers in competitive dynamics have demonstrated that firms that carry out intense, complex, and heterogeneous competitive actions exhibit better performance. However, there is a need to understand factors that enable firms to undertake competitive actions. In this study, we focus on two antecedents of competitive behavior of firms: (1) access to network resources and (2) use of information technology (IT). We argue that while network structure provides firms with the opportunity to tap into external resources, the extent to which they are actually exploited depends on firms' IT-enabled capability. We develop a theoretical model that examines the relationships between IT-enabled capability, network structure, and competitive action. We test the model using secondary data on 12 major automakers over 16 years, from 1988 to 2003. We find that network structure rich in structural holes has a positive direct effect on firms' ability to introduce a greater number and a wider range of competitive actions. However, the effect of dense network structure is contingent on firms' IT-enabled capability. Firms benefit from dense network structure only when they develop a strong IT-enabled capability. Our results suggest that IT-enabled capability plays both a substitutive role, when firms do not have advantageous access to brokerage opportunities, and a complementary role, when firms are embedded in dense network structure, in the relationship between network structure and competitive actions.
Growth of web-based applications has drawn a great number of diverse stakeholders and specialists into the IS Development (ISD) practice. Marketing, strategy, and graphic design professionals have joined technical developers, business managers, and users in the development of web-based applications. Often, these specialists work for different organizations with distinct histories and cultures. A longitudinal, qualitative field study of a web-based application development project was undertaken so as to develop an in-depth understanding of the collaborative practices that unfold among diverse professionals in ISD projects. The paper proposes that multi-party collaborative practice can be understood as constituting a "collective reflection-in-action" cycle through which an Information Systems (IS) design emerges as a result of agents producing, sharing, and reflecting upon explicit objects. Depending on their control over the various economic and cultural (intellectual) resources brought to the project and developed on the project, agents from diverse backgrounds influence the design in distinctive ways. Their diverse sources of power shape whether collaborators "add to," "ignore," or "challenge" the work produced by others. Which of these modes of collective reflection-in-action are enacted on the project influences whose expertise will be reflected in the final design. Implications for the study of boundary objects, multi-party collaboration, and organizational learning in contemporary ISD are drawn.
Using an interpretive grounded theory research approach, we investigate the utilization of organization-wide information systems in the competitive actions and responses undertaken by top managers to sustain their firms' leading competitive position. Our central contribution is a model that explicates the role of information systems in the process by which competitive actions or responses are conceived, enacted, and executed, and resulting impacts on firm performance—issues that have been largely missing from contemporary research in both the information systems and competitive dynamics domains. This study has important implications for both research and practice. Specifically, researchers should consider organizational context; the intentions and actions of key players; and the process of conceiving, enacting, and executing competitive actions or responses carried out by the organization to account for the impact of information systems on firm performance. Findings suggest that when managers envision information systems as a resource that provides opportunities for competitive actions rather than viewing information systems in a service role, competitive advantages will evolve. Furthermore, practitioners will be better able to leverage information systems investments if they recognize the embedded role of information systems within the competitive actions or responses a firm undertakes to maintain or improve relative performance.
This paper examines two important questions in the context of the social networking services (SNS) firms: what kind of competitive moves do SNS firms undertake and to what extent do the competitive moves impact firm performance? We blend the literature streams on information systems (IS) and strategic management and argue that given the unique characteristics of this nascent industry, SNS firms' competitive moves are likely to focus on value cocreation, as well as enhancement of the repertoire of their moves. We propose a conceptual model by blending value cocreation perspectives from the IS literature and repertoire of competitive actions from the competitive dynamics literature, and test our hypotheses using archival data. Results show that firms that emphasize value cocreation actions through the engagement of codevelopers in their technology platform and formation of strategic alliances enhance their performance. Furthermore, firms that undertake complex action repertoires achieve better performance. This study provides unique insights about the ways in which firms compete in the industry and has several implications for future research.
We explore how Internet browsing behavior varies between mobile phones and personal computers. Smaller screen sizes on mobile phones increase the cost to the user of browsing for information. In addition, a wider range of offline locations for mobile Internet usage suggests that local activities are particularly important. Using data on user behavior at a (Twitter-like) microblogging service, we exploit exogenous variation in the ranking mechanism of posts to identify the ranking effects. We show that (1) ranking effects are stronger on mobile phones, suggesting higher search costs: links that appear at the top of the screen are especially likely to be clicked on mobile phones; and (2) the benefit of browsing for geographically close matches is higher on mobile phones: stores located in close proximity to a user's home are much more likely to be clicked on mobile phones. Thus, the mobile Internet is somewhat less "Internet-like": search costs are higher and distance matters more. We speculate on how these changes may affect the future direction of Internet commerce.
Prior research has generated considerable knowledge about the design of effective IT organizational architectures. Today, however, increasing signs have accumulated that this wisdom might be inadequate in shaping appropriate insights for contemporary practice. This essay seeks to direct research attention toward the following question: How should firms organize their IT activities in order to manage the imperatives of the business and technological environments in the digital economy? We articulate the platform logic as a conceptual framework both for viewing the organizing of IT management activities and for framing important questions for future research. In articulating this logic, we aim to shift thinking away from the traditional focus on governance structures (i.e., choice of centralized, decentralized, or federal forms) and sourcing structures (i.e., insourcing, outsourcing) and toward more complex structures that are reflective of contemporary practice. These structures are designed around important IT capabilities and network architectures.
Preservation of organizational memory becomes increasingly important to organizations as it is recognized that experiential knowledge is a key to competitiveness. With the development and widespread availability of advanced information technologies (IT), information systems become a vital part of this memory. We analyze existing conceptualizations and task-specific instances of IT-supported organizational memory. We then develop a model for an organizational memory information system (OMIS) that is rooted in the construct of organizational effectiveness. The framework offers four subsystems that support activities leading to organizational effectiveness. These subsystems rest on the foundation of five mnemonic functions that provide for acquisition, retention, maintenance, search, and retrieval of information. We then identify the factors that will limit the success of OMIS implementation, although full treatment of this issue is outside the scope of the paper. To initiate a research agenda on OMIS, we propose an initial contingency framework for OMIS development depending on the organization's environment and its life-cycle stage, and discuss the relationships between an OMIS and organizational learning and decision making.
Top-cited authors
Paul Pavlou
  • Temple University
Omar A. El Sawy
  • University of Southern California
Marius Florin Niculescu
  • Georgia Institute of Technology
Hock Hai Teo
  • National University of Singapore
Andrew Burton-Jones
  • The University of Queensland