Information Systems Research (Inform Syst Res)

Publisher: Institute for Operations Research and the Management Sciences (INFORMS)

Journal description

ISR (Information Systems Research) is a leading international journal of theory, research, and intellectual development, focused on information systems in organizations, institutions, the economy, and society.

RG Journal Impact: 1.67 *

*This value is calculated using ResearchGate data and is based on average citation counts from work published in this journal. The data used in the calculation may not be exhaustive.

RG Journal impact history

2019: Available summer 2020
2018: 1.67
2017: 3.45
2016: 3.47
2015: 4.19
2014: 3.18

Additional details

Cited half-life: 9.90
Immediacy index: 0.23
Eigenfactor: 0.01
Article influence: 2.03
Website: http://isr.journal.informs.org/
Website description: Information Systems Research website
Other titles: Information systems research (Online), Information systems research, ISR
ISSN: 1526-5536
OCLC: 42287409
Material type: Document, Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper

Publications in this journal

Wireless telecommunications have become over time a ubiquitous tool that not only sustains our increasing need for flexibility and efficiency, but also provides new ways to access and experience both utilitarian and hedonic information goods and services. This paper explores the parallel market evolution of the two main categories of wireless services - voice and data - in leading technology markets, inspecting the differences and complex interactions between the associated adoption processes. We propose a model that addresses specific individual characteristics of these two services and the stand-alone/add-on relationship between them. In particular, we acknowledge the distinction between the nonoverlapping classes of basic consumers, who only subscribe to voice plans, and sophisticated consumers, who adopt both services. We also account for the fact that, unlike voice services, data services rapidly evolved over time due to factors such as interface improvement, gradual technological advances in data transmission speed and security, and the increase in volume and diversity of the content and services ported to mobile Internet. Moreover, we consider the time gap between the market introduction of these services and allow for different corresponding consumer learning curves. We test our model on the Japanese wireless market. The empirical analysis reveals several interesting results. In addition to an expected one-way effect of voice on data adoption at the market potential level, we do find two-way codiffusion effects at the speed of adoption level. We also observe that basic consumers impact the adoption of wireless voice services in a stronger way compared to sophisticated consumers. This, in turn, leads to a decreasing average marginal network effect of voice subscribers on the adoption of wireless voice services. Furthermore, we find that the willingness of voice consumers to consider adopting data services is positively related to both time and penetration of 3G-capable handsets among voice subscribers.
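The adoption dynamics described in this abstract are in the spirit of coupled Bass-type diffusion models. The sketch below only illustrates that structure, with hypothetical parameters and a deliberately simplified voice-to-data linkage; it is not the authors' estimated model.

```python
# A minimal, illustrative Bass-style codiffusion sketch (hypothetical
# parameters; not the paper's estimated specification). Voice adoption V
# follows a standard Bass process; data adoption D launches after a lag
# and draws only on the installed base of voice subscribers.
p_v, q_v = 0.010, 0.40    # voice: innovation / imitation coefficients
p_d, q_d = 0.005, 0.30    # data: innovation / imitation coefficients
M_v, M_d = 100.0, 80.0    # market potentials (millions of subscribers)
lag = 24                  # months between voice and data launch

V, D = 0.0, 0.0
for t in range(181):      # simulate 15 years in monthly steps
    if t % 36 == 0:
        print(f"month {t:3d}: voice = {V:6.1f}M   data = {D:5.1f}M")
    V += (p_v + q_v * V / M_v) * (M_v - V)
    if t >= lag:
        # Potential data adopters are current voice subscribers (capped at M_d).
        D += (p_d + q_d * D / M_d) * max(min(V, M_d) - D, 0.0)
```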
The digital divide has loomed as a public policy issue for over a decade. Yet, a theoretical account for the effects of the digital divide is currently lacking. This study examines three levels of the digital divide. The digital access divide (the first-level digital divide) is the inequality of access to information technology (IT) in homes and schools. The digital capability divide (the second-level digital divide) is the inequality of the capability to exploit IT arising from the first-level digital divide and other contextual factors. The digital outcome divide (the third-level digital divide) is the inequality of outcomes (e.g., learning and productivity) of exploiting IT arising from the second-level digital divide and other contextual factors. Drawing on social cognitive theory and the computer self-efficacy literature, we developed a model to show how the digital access divide affects the digital capability divide and the digital outcome divide among students. The digital access divide focuses on computer ownership and usage in homes and schools. The digital capability divide and the digital outcome divide focus on computer self-efficacy and learning outcomes, respectively. This model was tested using data collected from over 4,000 students in Singapore. The results generate insights into the relationships among the three levels of the digital divide and provide a theoretical account for the effects of the digital divide. While school computing environments help to increase computer self-efficacy for all students, these factors do not eliminate the knowledge gap between students with and without home computers. Implications for theory and practice are discussed.
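In estimation terms, the three levels form a mediation chain from access through capability to outcome. A minimal regression sketch on simulated data follows; the variable names (home_pc, self_eff, learning) and coefficients are hypothetical stand-ins, not the study's measurement model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the survey; variable names are hypothetical.
rng = np.random.default_rng(0)
n = 1000
home_pc = rng.integers(0, 2, n)                                   # access divide
self_eff = 0.5 * home_pc + rng.normal(0, 1, n)                    # capability divide
learning = 0.6 * self_eff + 0.1 * home_pc + rng.normal(0, 1, n)   # outcome divide
df = pd.DataFrame({"home_pc": home_pc, "self_eff": self_eff, "learning": learning})

# Path a: access -> capability; paths b and c': capability and access -> outcome.
print(smf.ols("self_eff ~ home_pc", df).fit().params)
print(smf.ols("learning ~ self_eff + home_pc", df).fit().params)
```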
Open source software is becoming increasingly prominent, and the economic structure of open source development is changing. In recent years, firms that are motivated by revenues from software services markets have become primary contributors to open source development. In this paper, we explore firms' economic incentives to foster open source software initiatives in lieu of proprietary ones and the role of services in software development and value generation. We present an economic model that jointly analyzes software originators' and subsequent contributors' investments in software development as well as the pricing of software and services under competition. We find that, despite the benefits the originator obtains from an external contributor in improving software quality under an open source strategy, increased contributor development efficiency may encourage the originator to choose a proprietary strategy. Due to strategic interaction between the originator and contributors, in certain cases, an increase in originator development efficiency can lead to increased contributor profits, while an increase in contributor development efficiency can reduce social welfare. Under an open source regime, increased service costs can increase both originator and contributor profits. Further, increased service costs can result in the software originator's choosing an open source strategy and increase welfare, implying that taxes on service revenues imposed by a regulator may in certain cases prove beneficial. Exploring open source license selection, we identify conditions that determine an originator's choice of license restrictiveness. We show that more restrictive licenses may increase development investments and software quality, but providing government subsidies for less restrictive licenses can improve welfare.
Firms nowadays are increasingly proactive in trying to strategically capitalize on consumer networks and social interactions. In this paper, we complement an emerging body of research on the engineering of word-of-mouth (WOM) effects by exploring a different angle through which firms can strategically exploit the value-generation potential of the user network. Namely, we consider how software firms should optimize the strength of network effects at the utility level by adjusting the level of embedded social media features in tandem with the right market seeding and pricing strategies, in the presence of seeding disutility. We explore two opposing seeding cost models where seeding-induced disutility can be either positively or negatively correlated with customer type. We consider both complete and incomplete information scenarios for the firm. Under complete information, we uncover a complementarity relationship between seeding and building social media features which holds for both disutility models. When the cost of either of these actions increases, rather than compensating with a stronger action on the other dimension in order to restore the overall level of network effects, the firm will actually scale back on the other initiative as well. Under incomplete information, this complementarity holds when seeding disutility is negatively correlated with customer type but may not always hold in the other disutility model, potentially leading to fundamentally different optimal strategies. We also discuss how our insights apply to asymmetric networks.
The Internet facilitates information flow between sex workers and buyers, making it easier to set up paid sexual transactions online. Despite the illegality of selling sexual services online, Section 230 of the Communications Decency Act shields websites from liability for unlawful postings by third parties. Consequently, websites like Craigslist have become a haven for prostitution-related ads. With an increasing number of prostitution-related sites launched over time, it is imperative to understand the link between these sites and prostitution trends. Specifically, in this paper, we quantify the economic impact of Craigslist’s entry on prostitution incidence and identify potential pathways through which the website affects the sex industry. Using national panel data for 1,796 U.S. counties from 1999 to 2008, our analyses suggest that the entry of Craigslist into a county leads to a 17.58 percent increase in prostitution cases. In addition, the analyses reveal that a majority of prostitution activity on Craigslist is induced by organized vice groups, in addition to voluntary participation by a smaller set of independent providers. Further, we find site entry has a stronger impact in counties with a past history of prostitution and produces spillover effects in neighboring locations that are not directly served by Craigslist. Sex workers providing niche sexual services are found to increase with site entry. We find that the increase in prostitution arrests does not keep pace with the growth in prostitution brought about by Craigslist. Finally, we find complementarity effects between erotic and casual sex ads in leading to the increase in prostitution. Our results contribute broadly to the emerging literature on the societal challenges associated with online intermediaries and Internet penetration, and serve to provide guidelines for policy makers in regulating the sex industry in the Internet era.
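The quantification of Craigslist's entry effect rests on a county-year panel with staggered entry. A minimal two-way fixed-effects sketch on simulated data follows; variable names are hypothetical, and the paper's specification includes additional controls and robustness checks.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated county-year panel; 'post_entry' flags years after Craigslist entry.
rng = np.random.default_rng(1)
rows = []
for county in range(200):
    entry_year = rng.integers(2001, 2009)          # hypothetical entry year
    for year in range(1999, 2009):
        post = int(year >= entry_year)
        log_cases = 2.0 + 0.16 * post + rng.normal(0, 0.3)
        rows.append({"county": county, "year": year,
                     "post_entry": post, "log_cases": log_cases})
df = pd.DataFrame(rows)

# County and year dummies absorb level differences; the coefficient on
# post_entry approximates the entry effect (~0.16 => roughly a 17% increase).
fit = smf.ols("log_cases ~ post_entry + C(county) + C(year)", df).fit()
print(fit.params["post_entry"])
```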
Product recommendation agents (PRAs) are used extensively to give consumers advice on product selection. Prior research suggests that anthropomorphic PRAs with a human-like interface (avatar) can enhance interpersonal interaction with users. However, human-like interfaces highlight the role of social demographics, such as ethnicity and gender, in the design of PRAs, and the literature has shown that anthropomorphic PRAs that match their users’ ethnicity and gender are more likely to be adopted, particularly among female users. This paper seeks to offer a deeper explanation of these behavioral findings by exploring the neurological origins of the adoption of PRAs. In the spirit of triangulation, a functional magnetic resonance imaging (fMRI) study explored how users evaluated PRAs that are similar or dissimilar to them in terms of ethnicity and gender. Users who varied in their ethnicity (Asian/Caucasian) and gender so as to either match or mismatch the ethnicity and gender of four anthropomorphic PRAs were asked to evaluate these PRAs on three constructs that have been shown in the IS literature to predict the adoption and use of PRAs – perceived usefulness, trust, and social presence. Functional brain activations were captured in an fMRI scanner while users simultaneously self-reported their responses on these three key constructs for the ethnicity- and gender-matched or mismatched PRAs. The exploratory fMRI results help triangulate and provide deeper explanations of prior behavioral studies by showing that the brain activations for the three key constructs (perceived usefulness, trust, social presence) were evident mainly in women, challenging theories on PRA adoption that are applied uniformly across gender and calling for new theories that explicitly include the moderating role of gender. Our findings indicate that demographic similarity in the context of anthropomorphic PRAs is only important for women and mainly in the case of ethnicity-based similarity. Moreover, certain brain activations were elicited mostly by ethnicity match while others mostly by gender match, helping explain the idiosyncratic effects that ethnicity and gender play in the adoption of online PRAs based on underlying neurological processes. Besides supplementing earlier behavioral studies by shedding light on the neurological origins of key constructs that explain the adoption of PRAs in an exploratory fashion, the study offers practical implications for the design of online human-like PRAs.
This article analytically and experimentally investigates how firms can best capture the business value of information technology (IT) investments through IT contract design. Using a small sample of outsourcing contracts for enterprise information technology (EIT) projects in several industries, coupled with reviews of contracts used by a major enterprise software maker, the authors determine the common provisions and structural characteristics of EIT contracts. The authors use these characteristics to develop an analytical model of optimal contract design with principal-agent techniques. The model captures a set of key characteristics of EIT contracts, including a staged, multiperiod project structure; learning; probabilistic binary outcomes; variable fee structures; possibly risk-averse agents; and implementation risks. The model characterizes conditions under which multistage contracts enable clients to create and capture greater project value than single-stage projects, and shows how project staging enables firms to reduce project risks, capture learning benefits, and increase development effort. Finally, the authors use controlled laboratory experiments to complement their analytical approaches and demonstrate the robustness of their key findings.
Enabled by recent advances in information and communications technologies, open innovation contests in online markets allow employers of any size to access a global pool of skilled labor (termed the “human cloud”), creating an efficient alternative to innovation with significantly lower cost and risk. While there is a long stream of research on contests (or tournaments) in the economics literature, most studies are theoretical and focus mainly on the optimal design of the prize structure for innovation contests. However, key features of online innovation contests make them very different from traditional contests. Notably, the widely used feedback system in online innovation contests, which allows innovation seekers (employers) to provide feedback on submitted solutions during the contest process, significantly changes the nature of open innovation contests. This paper provides a comprehensive study of open innovation contests in online markets and makes several unique contributions to enhance open innovation contests. First, we theoretically show that, with the effective use of a feedback system, the number of contestants can serve as a reliable proxy for open innovation contest performance. Second, we examine the factors that affect contest performance based on a large-scale dataset from an online open innovation contest market. We propose a model including three categories of factors that predict contest performance: (1) contest design parameters (i.e., prize, description length, and duration), (2) project intrinsic characteristics (i.e., project complexity and required skills), and (3) market environment factors (i.e., competition intensity and market price). While several of these factors were ignored in the literature, they are shown to be significant predictors of the performance of open innovation contests. We discuss how our theoretical model and proposed measures allow future research to study open innovation performance more systematically, and how our empirical study renders useful insights for facilitating performance and for designing better open innovation contests. We conclude by discussing the implications of open innovation contests for the changing nature of labor.
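The proposed prediction model lends itself to a count-data regression, with the number of contestants as the performance proxy. The sketch below uses simulated contests and a reduced set of hypothetical predictors drawn from the three factor categories; it is not the paper's estimated model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated contests; predictor names mirror the three factor categories
# (design, project, market) but are hypothetical stand-ins.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "prize": rng.uniform(50, 500, n),            # contest design parameter
    "duration": rng.integers(3, 30, n),          # contest design parameter
    "complexity": rng.uniform(0, 1, n),          # project characteristic
    "competition": rng.uniform(0, 1, n),         # market environment factor
})
lam = np.exp(0.5 + 0.003 * df.prize - 0.8 * df.complexity - 0.5 * df.competition)
df["n_contestants"] = rng.poisson(lam)

# Count model: number of contestants as the contest-performance proxy.
fit = smf.poisson("n_contestants ~ prize + duration + complexity + competition",
                  df).fit(disp=0)
print(fit.params)
```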
Today, few firms could survive for very long without their computer systems. IT has permeated every corner of firms. Firms have reached the current state in their use of IT because IT has provided myriad opportunities for firms to improve performance, and firms have availed themselves of these opportunities. Some have argued, however, that the opportunities for firms to improve their performance through new uses of IT have been declining. Are the opportunities to use IT to improve firm performance diminishing? We sought to answer this question. In this study, we develop a theory and explain the logic behind our empirical analysis, an analysis that employs a different type of event study. Using the volatility of firms' stock prices in response to news signaling a change in economic conditions, we compare the stock price behavior of firms in the IT industry to firms in the utility and transportation and freight industries. Our analysis of the IT industry as a whole indicates that the opportunities for firms to use IT to improve their performance are not diminishing. However, there are sectors within the IT industry that no longer provide value-enhancing opportunities for firms. We also find that IT products that provided opportunities for firms to create value at one point in time later become necessities for staying in business. Our results support the key assumption in our work.
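The empirical design compares how strongly stock prices react to economy-wide news across industries. A minimal sketch of that volatility comparison on simulated return series follows; portfolio construction, event selection, and inference in the paper are considerably more involved.

```python
import numpy as np
import pandas as pd

# Daily returns for two hypothetical industry portfolios and a list of
# macro-news event dates (all simulated; names are illustrative only).
rng = np.random.default_rng(3)
dates = pd.bdate_range("2000-01-03", periods=2000)
returns = pd.DataFrame({
    "it": rng.normal(0, 0.02, len(dates)),
    "utility": rng.normal(0, 0.01, len(dates)),
}, index=dates)
events = dates[rng.choice(len(dates), 20, replace=False)]

def event_window_vol(series, events, half_width=5):
    """Std. dev. of returns within +/- half_width days of each event, pooled."""
    mask = pd.Series(False, index=series.index)
    for e in events:
        mask |= (series.index >= e - pd.Timedelta(days=half_width)) & \
                (series.index <= e + pd.Timedelta(days=half_width))
    return series[mask].std()

for col in returns:
    ratio = event_window_vol(returns[col], events) / returns[col].std()
    print(col, round(ratio, 3))   # >1 suggests a stronger reaction to news
```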
In this research commentary we show that the discipline of information systems (IS) has much to learn from the history of the discipline of medicine. We argue that as interest in historical studies of information systems grows, there are important historical lessons to be drawn from disciplines other than IS, with the medical discipline providing fertile ground. Of particular interest are the circumstances that surrounded the practice of the medical craft in the 1800s—circumstances that drove a process of unification and specialization resulting in the modern conceptualization of medical education, research, and practice. In analyzing the history of the field of medicine, with its long-established methods for general practice, specialization, and sub-specialization, we find that it serves as an example of a discipline that has dealt effectively with its initial establishment as a scientific discipline and with the exponential growth of knowledge and ensuing diversity of practice over centuries, and that has much to say with regard to a number of discipline-wide debates in IS. Our objective is to isolate the key factors that can be observed from the writings of leading medical historians, and examine those factors from the perspective of the information systems discipline today. Through our analysis we identify the primary factors and structural changes that preceded a modern medical discipline characterized by unification and specialization. We identify these same historic factors within the present-day information systems milieu and discuss the implications of following a unification and specialization strategy for the future of the information systems discipline.
External financing is critical to ventures that do not have a revenue source but need to recruit employees, develop products, pay suppliers, and market their products/services. There is an increasing belief among entrepreneurs that electronic word-of-mouth (eWOM), specifically blog coverage, can aid in achieving venture capital financing. Conflicting findings reported by past studies examining eWOM make it unclear what to make of such beliefs of entrepreneurs. Even if there were generally agreed-upon results, a stream of literature indicates that because of the differences in traits between the prior investigated contexts and venture capital financing, the findings from the prior studies cannot be generalized to venture capital financing. Extant studies also fall short in examining the role of time and the status of entities generating eWOM in determining the influence of eWOM on decision making. To address this dearth of literature in a context that attracts billions of dollars every year, we investigate the effect of eWOM on venture capital financing. This study entails the challenging task of gathering data from hundreds of ventures along with other sources including VentureXpert, surveys, Google Blogsearch, Lexis-Nexis, and Archive.org. The key findings of our econometric analysis are that the impact of negative eWOM is greater than is the impact of positive eWOM and that the effect of eWOM on financing decreases with the progress through the financing stages. We also find that the eWOM of popular bloggers helps ventures in getting higher funding amounts and valuations. The empirical model used in this work accounts for inherent selection biases of entrepreneurs and venture capitalists, and we conduct numerous robustness checks for potential issues of endogeneity, selection bias, nonlinearities, and popularity cutoff for blogs. The findings have important implications for entrepreneurs and suggest ways by which entrepreneurs can take advantage of eWOM.
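The two headline results (negative eWOM outweighing positive eWOM, and attenuation across financing stages) can be pictured as an interaction regression. The sketch below uses simulated rounds and hypothetical variable names; it omits the selection corrections and robustness checks the paper relies on.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated venture-round data; variable names are hypothetical stand-ins
# for the paper's constructs (this is not the authors' selection-corrected model).
rng = np.random.default_rng(4)
n = 800
df = pd.DataFrame({
    "pos_ewom": rng.poisson(3, n),           # count of positive blog mentions
    "neg_ewom": rng.poisson(1, n),           # count of negative blog mentions
    "stage": rng.integers(1, 5, n),          # financing round number
})
df["log_funding"] = (14 + 0.05 * df.pos_ewom - 0.12 * df.neg_ewom
                     - 0.02 * df.pos_ewom * df.stage
                     + rng.normal(0, 0.5, n))

# Interaction with stage captures the attenuating effect over later rounds.
fit = smf.ols("log_funding ~ (pos_ewom + neg_ewom) * stage", df).fit()
print(fit.params)
```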
Because of the need to go beyond cross-sectional models, explore longitudinal phenomena, and test theories over time, this paper presents and extends Latent Growth Modeling (LGM) as a complementary method for analyzing longitudinal data, understanding the process of change over time, and testing time-centric hypotheses toward building longitudinal theories. We first describe the basic tenets of LGM and offer guidelines for applying LGM in IS research, specifically how to pose research questions that focus on change over time and how to implement LGM models to test time-centric hypotheses. Second and more importantly, we extend LGM by proposing a model validation criterion, d-separation, to assess LGM. We also conduct extensive simulations to examine factors that influence the performance of LGM. Finally, we apply LGM to empirically model the dynamic relationship between word of mouth communication and book sales on Amazon. The paper concludes by discussing implications for IS research by using LGM to develop and test longitudinal theories.
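LGM describes repeated measures through latent intercept and slope factors; a closely related random-coefficients formulation can be sketched with a linear mixed model. The following uses simulated book-sales trajectories and hypothetical variable names; it approximates a linear growth curve rather than reproducing the paper's LGM or its d-separation criterion.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# A linear growth curve approximated with a mixed model (random intercept
# and slope per unit); a full LGM would typically be fit with SEM software.
# Data here are simulated weekly sales trajectories for hypothetical books.
rng = np.random.default_rng(5)
rows = []
for book in range(200):
    b0 = rng.normal(5, 1)          # latent intercept
    b1 = rng.normal(0.2, 0.1)      # latent slope (growth rate)
    for week in range(8):
        rows.append({"book": book, "week": week,
                     "log_sales": b0 + b1 * week + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Random effects for intercept and slope mirror the latent growth factors.
fit = smf.mixedlm("log_sales ~ week", df, groups=df["book"],
                  re_formula="~week").fit()
print(fit.summary())
```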
Considerable attention is currently devoted to reducing electrical energy consumption in seaside facilities. Existing intelligent energy-management technologies do not fully allow the achievable reduction in electrical energy consumption to be substantiated and quantified for the facilities of a dock-side electrotechnical complex. This paper proposes one way to address this shortcoming: an algorithm for reducing electrical energy consumption based on the facility-level and system-level potential for electrical power saving. The main stages of the algorithm are: building an up-to-date database of electrical energy consumption, analyzing the adopted power-saving strategy, calculating the facility-level potential, identifying the leading parameters for groups of facilities, calculating the system-level potential for power saving, and assessing the efficiency of the achieved outcomes. The algorithm has been implemented in a hardware and software complex for managing electrical energy consumption in the Kaliningrad region and makes it possible to set reduction targets for electrical energy consumption individually for each facility, taking the chosen strategy into account.
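As an illustration of the facility- and system-level potential calculation described above, the sketch below uses assumed column names and a simple best-in-group benchmark rule; the actual hardware/software complex is not described at this level of detail.

```python
import pandas as pd

# Illustrative consumption records for a dock-side complex; the facility
# names, groups, and benchmark rule are assumptions for this sketch.
records = pd.DataFrame({
    "facility": ["crane_1", "crane_2", "pump_1", "pump_2"],
    "group":    ["crane",   "crane",   "pump",   "pump"],
    "kwh":      [1200.0,    1500.0,    800.0,    950.0],
})

# Facility-level potential: consumption above the best performer in its group.
best = records.groupby("group")["kwh"].transform("min")
records["facility_potential_kwh"] = records["kwh"] - best

# System-level potential: sum of facility-level potentials across the complex.
system_potential = records["facility_potential_kwh"].sum()
print(records)
print("system-level potential, kWh:", system_potential)
```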

Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed.