Article

Classifying the Ideational Impact of Information Systems Review Articles: A Content-Enriched Deep Learning Approach


Abstract

Ideational impact refers to the uptake of a paper's ideas and concepts by subsequent research. It is defined in stark contrast to total citation impact, a measure predominantly used in research evaluation that assumes that all citations are equal. Understanding ideational impact is critical for evaluating research impact and understanding how scientific disciplines build a cumulative tradition. Research has only recently developed automated citation classification techniques to distinguish between different types of citations and generally does not emphasize the conceptual content of the citations and its ideational impact. To address this problem, we develop Deep Content-enriched Ideational Impact Classification (Deep-CENIC) as the first automated approach for ideational impact classification to support researchers' literature search practices. We evaluate Deep-CENIC on 1,256 papers citing 24 information systems review articles from the IT business value domain. We show that Deep-CENIC significantly outperforms state-of-the-art benchmark models. We contribute to information systems research by operationalizing the concept of ideational impact, designing a recommender system for academic papers based on deep learning techniques, and empirically exploring the ideational impact of the IT business value domain.
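As a rough illustration of the ideational-vs-perfunctory distinction the abstract draws (not the Deep-CENIC architecture itself, which uses deep learning on citation content), a minimal keyword-lexicon sketch might look like this; the concept lexicon and the citation contexts are invented for the example:

```python
# Hypothetical concept lexicon drawn from a cited review (illustrative only).
CONCEPTS = {"it business value", "resource-based view", "complementarity"}

def classify_context(context: str) -> str:
    """Label a citation context as ideational (conceptual uptake) or perfunctory."""
    text = context.lower()
    if any(concept in text for concept in CONCEPTS):
        return "ideational"
    return "perfunctory"

contexts = [
    "Building on the IT business value framework [1], we model ...",
    "Several studies exist in this area [1], [2], [3].",
]
labels = [classify_context(c) for c in contexts]
print(labels)  # ['ideational', 'perfunctory']
```

A real classifier would of course learn such distinctions from labeled citation contexts rather than a fixed lexicon; the sketch only shows the input/output shape of the task.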




... To support the iterative cycles of the literature review process in a context of scientific information overload and the systematic synthesis of large bodies of literature, many different techniques are increasingly deployed, though none eliminates the need for researcher reflexivity when conducting a literature review. These techniques include linguistic and machine-learning techniques (Larsen & Bong, 2016; Larsen et al., 2019; Prester et al., 2021), bibliometric techniques (Zupić & Čater, 2015) and, more broadly, computational techniques. They allow essential review tasks to be performed automatically with little or no direct human input (Antons et al., 2021), leading to what has been termed AI-based literature reviews (Wagner et al., 2021). ...
Article
This research was not funded. However, the development of the ARTIREV platform is currently supported by a "French Tech Emergence" grant, Dossier No. DOS0194751/00, from BPI France. In the current context of scientific information overload, researchers and practitioners could benefit from integrated bibliometric software to help them conduct reviews of the existing literature. Using a design-science research approach and two bibliometric techniques (co-citation analysis of cited references and bibliographic coupling of citing documents), we propose a detailed workflow for conducting literature reviews and an integrated software package named ARTIREV (from the French for ARTificial Intelligence and literature REViews), which we evaluate in the fields of management and medicine. We show that ARTIREV solves three problems identified in existing tools: (1) the need for in-depth bibliometric knowledge to use them effectively; (2) insufficient cleaning of the bibliographic data they provide, which prevents reliable results; and (3) graphical representations that are visually pleasing but often difficult to interpret. The resulting software could support the conduct of literature reviews for all types of potential users: researchers and practitioners, bibliometrics experts and novices alike.
... Combining systematicity (notably, the quest for comprehensiveness and, possibly, quality assessment of sources depending on review type) and iterative cycles in the review process can be a daunting task (Okoli & Schabram, 2010), particularly when the review is transdisciplinary (Rowe, 2014). Numerous techniques have been used to improve the efficiency of the iterative cycles of literature reviews -these include linguistic and machine-learning techniques (Larsen & Bong, 2016;Larsen et al., 2019;Prester et al., 2021) and bibliometric techniques (Zupic & Cater, 2015) which we focus on here. ...
... It is important that flexibility and creativity are built into a methodological approach such as the one proposed in this article to accommodate the essence of the iterative process of a literature review and to leave room for future methodological innovation and evolution. Some machine-learning and natural-language-processing techniques have been proposed as ways to improve data treatment in literature reviews (e.g., Abdel-Karim et al., 2020; Larsen et al., 2019; Prester et al., 2021). Combining these techniques with BIBGT could be interesting to further support the review process. ...
Article
In the current context of scientific information overload, we propose a method combining bibliometrics and grounded theory to conduct literature reviews that have a descriptive, understanding or explanatory purpose. This overall inductive combined method, which we name BIBGT (BIB = Bibliometrics; GT = Grounded Theory) provides a powerful instrument for researchers. This instrument has been made more readily accessible by recent technological developments, and scientific advances in the field of bibliometrics. Notably, BIBGT helps grounded theorists identify colleges of thought of the field being reviewed, and representative texts to be reviewed in depth, in each college; it improves the trustworthiness and efficiency of reviews. BIBGT helps bibliometricians achieve in-depth analysis and interpretation of their findings. For all researchers, BIBGT can nurture the emergence of novel insights towards in-depth description, understanding or explanation of a core concept, theme, or research domain. We detail the four steps of this flexible method, providing essential elements and some guidelines as well as an illustration, to help other researchers who wish to apply it.
... , as well as tools that perform syntactic translations of search queries for different scientific databases (Sturm & Sunyaev, 2019). Others have conceptualized the ideational impact of a research article as the uptake of a given paper's ideas by subsequent research and developed an automatic ideational impact classification approach that employs citation content analysis based on NLP and deep learning techniques to support researchers' literature search practices (Prester et al., 2021). Additionally, other research has focused on building approaches for screening potential articles using ML algorithms combined with citation-network approaches (Larsen et al., 2019). ...
Chapter
Full-text available
... In the literature, it is equated with an information technology (IT) audit [10]. In some research work, the performance of the IS has been limited to the success of the IS [11] and the impact of the IS [12]. Zhou et al. [13] have pointed out the difference between the performance and the value of the IS. ...
Article
Full-text available
The main objective of this paper is to study the correlation between investment in information technologies, especially information systems, and information system success, based on data collection and a multi-criteria decision-making approach using the technique for order preference by similarity to ideal solution (TOPSIS) and the analytical hierarchy process (AHP). The criteria of the hierarchical model for evaluating information system success are drawn from the DeLone and McLean information systems (IS) success model. The proposed approach has been implemented in three sectors recognized for their variation in the use of information systems: the financial sector, the service companies sector, and the construction industry sector. The results of this implementation show that massive investment in information systems does not always guarantee good information system success, and information system success is not always the result of massive investment in the information system.
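As a sketch of how the TOPSIS step described above works, here is a minimal pure-Python implementation; the decision matrix, weights, and criteria are invented examples, not the paper's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = list(zip(*v))
    # Ideal and anti-ideal points (max is best for benefit criteria).
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Invented example: three sectors rated on two benefit criteria.
scores = topsis([[8, 7], [6, 9], [4, 3]], [0.5, 0.5], [True, True])
```

In a full AHP+TOPSIS pipeline, the weights would come from pairwise-comparison matrices rather than being set by hand as here.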
... The task of classifying text fragments according to whether they contain references is methodologically similar to sentiment analysis, in which texts are automatically classified as (mainly) positive or negative according to their emotional characteristics. Beyond classifying fragments as positive or negative, the sentiment-analysis principle is used to distinguish other classes, including determining citation significance [8][9][10][11]. The task of detecting missing or unnecessary references in a text can also be treated analogously to sentiment analysis, where the sentiment sought is the author's need to support a stated claim. ...
Article
The paper proposes various strategies for sampling text data when performing automatic sentence classification for the purpose of detecting missing bibliographic links. We construct samples based on sentences as semantic units of the text and add their immediate context which consists of several neighbouring sentences. We examine a number of sampling strategies that differ in context size and position. The experiment is carried out on the collection of STEM scientific papers. Including the context of sentences into samples improves the result of their classification. We automatically determine the optimal sampling strategy for a given text collection by implementing an ensemble voting when classifying the same data sampled in different ways. Sampling strategy taking into account the sentence context with hard voting procedure leads to the classification accuracy of 98% (F1-score). This method of detecting missing bibliographic links can be used in recommendation engines of applied intelligent information systems. Keywords: text sampling, sampling strategy, citation analysis, bibliographic link prediction, sentence classification.
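The sampling idea described above (a target sentence plus a window of neighbouring sentences, with a hard vote across sampling strategies) can be sketched as follows; the sentences and the stub predictions are illustrative, not the paper's data:

```python
def sample_with_context(sentences, index, before=1, after=1):
    """Return the target sentence joined with its neighbouring context."""
    lo = max(0, index - before)
    hi = min(len(sentences), index + after + 1)
    return " ".join(sentences[lo:hi])

def hard_vote(labels):
    """Majority vote over predictions from differently sampled variants."""
    return max(set(labels), key=labels.count)

sentences = ["A result was reported.", "It matches earlier findings.", "No source is given."]
# One sampling strategy: one sentence of context on each side of sentence 1.
sample = sample_with_context(sentences, 1, before=1, after=1)
# Hypothetical predictions for the same sentence under three strategies:
vote = hard_vote(["link_needed", "link_needed", "no_link"])
```

The ensemble step is what lets the approach pick a good strategy automatically: when differently sampled variants of the same sentence disagree, the majority label wins.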
... The most common measures are precision, recall, their harmonic mean (F-measure), or the area under the receiver operating characteristic curve (AUC), which captures the diagnostic ability of a classifier based on varying discrimination thresholds (O'Mara-Eves et al., 2015). These measures may be used to assess the validity of AILR findings (Larsen and Bong, 2016; Prester et al., 2020). However, a recent review of design validities, including AI-based validities, concluded that we continue to lack the specialized design science validities needed to establish common principles of rigor when designing and applying such artifacts as ML and NLP. ...
Article
Full-text available
Artificial intelligence (AI) is beginning to transform traditional research practices in many areas. In this context, literature reviews stand out because they operate on large and rapidly growing volumes of documents, that is, partially structured (meta)data, and pervade almost every type of paper published in information systems research or related social science disciplines. To familiarize researchers with some of the recent trends in this area, we outline how AI can expedite individual steps of the literature review process. Considering that the use of AI in this context is in an early stage of development, we propose a comprehensive research agenda for AI-based literature reviews (AILRs) in our field. With this agenda, we would like to encourage design science research and a broader constructive discourse on shaping the future of AILRs in research.
... Research articles were included if they presented their own results, had been evaluated by scientific peers, and followed the typical structure of introduction, materials and methods, results, discussion, and references (Kerans et al., 2020). In addition, review articles that mainly focus on synthesizing existing knowledge were considered (Kerans et al., 2020; Prester et al., 2021). ...
Article
Full-text available
The knowledge of the tendencies of the drinking water treatments was changing through the previous decades and it is necessary to improve it for the benefit of the human beings. In this sense, the purpose of the study was to develop a scientometric study about the drinking water treatments in the period 2010–2020 for providing the state of art of the studies about the drinking water treatments in diverse knowledge areas and new orientations for future research. For this purpose, a search of the information was performed both in the Web of Science (WoS) and Scopus databases, and all articles and reviews related to the field of water treatment or chemistry were included. The results showed that China, the USA and the Netherlands have the majority of the most cited publications and various related multidisciplinary topics, such as infrastructure, technologies and pollution. Therefore, the study allows concluding that there is a need for research on different technologies that contribute positively to obtaining quality water for consumption and for the use of routine activities, being the combination and integration of the different treatment processes a challenge for future studies.
... We measure the dependent variable using citation rates as commonly suggested in the extant literature [16,23,54]. By focusing on overall citation rates, we do not distinguish different types of citations, such as confirmative vs. negational [65], ideational vs. perfunctory citations [66] or plagiaristic citations [67]. ...
Article
Review papers are essential for knowledge development in IS. While some are cited twice a day, others accumulate single-digit citations over a decade. The magnitude of these differences prompts us to analyze what distinguishes those reviews that have proven to be integral to scientific progress from those that might be considered less impactful. Our results highlight differences between reviews aimed at describing, understanding, explaining, and theory testing. Beyond the control variables, they demonstrate the importance of methodological transparency and the development of research agendas. These insights inform all stakeholders involved in the development and publication of review papers.
Article
Classical methods for mapping domain knowledge structures, namely bibliographic coupling (BC) and co-citation (CC) analyses, rely on co-reference or CC counts, which may lack precision and reliability. While full-text mining can enhance BC and CC strength, there is limited comparative analysis on the impact of different full-text citation features. This study explores the optimisation effects of four full-text citation features: citation content, sentiment, position and mention frequency. Enhanced strength algorithms for BC and CC relationships were designed based on these features, and a comparative experiment was conducted in the field of oncology. Deep learning techniques were employed to extract various citation features, which were then used in the proposed models and control groups. These full-text citation features were assessed for their effectiveness and characteristics in discovering domain knowledge structure. The study revealed that including full-text citation features improved the traditional methods, aligning more closely with expert knowledge. These features offered distinct insights but also introduced potential drawbacks. The research results hold insights for gaining a deeper understanding regarding the optimisation effects of full-text citation features on traditional bibliometric methods.
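A toy sketch of the classical counts discussed above, plus one of the full-text enhancements (mention frequency), might look like this; the papers, reference sets, and mention counts are invented:

```python
# refs: citing paper -> set of cited works (invented toy data).
refs = {
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R1", "R4"},
}

def coupling_strength(refs, a, b):
    """Bibliographic coupling: number of references shared by two citing papers."""
    return len(refs[a] & refs[b])

def cocitation_strength(refs, x, y):
    """Co-citation: number of papers that cite both x and y."""
    return sum(1 for cited in refs.values() if x in cited and y in cited)

# mentions: citing paper -> {cited work: in-text mention count}.
mentions = {"P1": {"R1": 3, "R2": 1}, "P2": {"R1": 2, "R3": 5}}

def weighted_coupling(mentions, a, b):
    """Coupling strength weighted by how often each shared reference is mentioned in-text."""
    shared = mentions[a].keys() & mentions[b].keys()
    return sum(min(mentions[a][r], mentions[b][r]) for r in shared)
```

The study's other features (citation content, sentiment, position) would enter the strength formula in a similar way; mention frequency is simply the easiest to show in a few lines.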
Article
The fit between a government program as implemented and its stated objectives is an interesting subject of study. Government at the village level has become a development hub supporting the nation's growth, so village development, in the form of both physical and non-physical programs, is increasingly centered in the village. Villages, with all their human resources, adjust their readiness to manage programs prepared by higher levels of government, whether administered by the district, the province, or the central government. Village programs are packaged with titles that best fit the goal of sustaining village development. The village Sustainable Development Goals (SDGs) are an integrated effort for the villages of this country to grow fairly and equitably, so every village-focused program should align with those goals. This study analyzes whether the activities of programs implemented in the village fit the village SDGs by analyzing the ideational meaning in the titles of tourism-village program implementations. The research uses a mixed-methods design with Systemic Functional Linguistics as its analytical lens. The object of study is the tourism-village programs of Domba Adu Rncabango, Garut, over 24 months covering the 2020-2021 and 2021-2022 periods. The results show that SDG point 3 is represented most often (21.5%), while SDG points 7, 13, 14, 15 and 17 are represented least (0%).
Article
Citations play a significant role in the evaluation of scientific literature and researchers. Citation intent analysis is essential for academic literature understanding. Meanwhile, it is useful for enriching semantic information representation for the citation intent classification task because of the rapid growth of publicly accessible full-text literature. However, some useful information that is readily available in citation context and facilitates citation intent analysis has not been fully explored. Furthermore, some deep learning models may not be able to learn relevant features effectively due to insufficient training samples of citation intent analysis tasks. Multi-task learning aims to exploit useful information between multiple tasks to help improve learning performance and exhibits promising results on many natural language processing tasks. In this paper, we propose a joint semantic representation model, which consists of pretrained language models and heterogeneous features of citation intent texts. Considering the correlation between citation intents, citation section and citation worthiness classification tasks, we build a multi-task citation classification framework with soft parameter sharing constraint and construct independent models for multiple tasks to improve the performance of citation intent classification. The experimental results demonstrate that the heterogeneous features and the multi-task framework with soft parameter sharing constraint proposed in this paper enhance the overall citation intent classification performance.
Preprint
Full-text available
The paper proposes various strategies for sampling text data when performing automatic sentence classification for the purpose of detecting missing bibliographic links. We construct samples based on sentences as semantic units of the text and add their immediate context which consists of several neighboring sentences. We examine a number of sampling strategies that differ in context size and position. The experiment is carried out on the collection of STEM scientific papers. Including the context of sentences into samples improves the result of their classification. We automatically determine the optimal sampling strategy for a given text collection by implementing an ensemble voting when classifying the same data sampled in different ways. Sampling strategy taking into account the sentence context with hard voting procedure leads to the classification accuracy of 98% (F1-score). This method of detecting missing bibliographic links can be used in recommendation engines of applied intelligent information systems.
Article
Full-text available
The use of machine learning technologies by the world's most profitable companies to personalise their offerings is commonplace. However, not all companies using machine learning technologies succeed in creating and capturing value. Academic research has studied value creation through the use of information technologies, but this field of research tends to consider information technology as a homogeneous phenomenon, not considering the unique characteristics of machine learning technologies. This literature review aims to study the extent to which value creation and value capture through machine learning technologies are being investigated in the field of information systems. Evidence is found of a paucity of publications focusing on value creation through the use of ML in the enterprise, and none on value capture. This study's contribution is to provide a better understanding of the use of machine learning technologies in information systems as a social and business practice.
Article
Full-text available
The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique, the innovation of which is the consideration of the context of the link, taking into account the boundaries of the fragment, which most strongly affects the probability that a bibliographic link is present in it. The training set was formed of automatically labeled samples, namely fragments of three sentences with the class labels «without link» and «with link» that satisfy the requirement of contrast: samples of different classes are distanced in the source text. The feature space was built automatically from term occurrence statistics and was expanded with additional features: entities (names, numbers, quotes and abbreviations) recognized in the text. A series of experiments was carried out on the archives of the scientific journals «Law enforcement review» (273 articles) and «Journal Infectology» (684 articles). The classification was carried out by Nearest Neighbors, RBF SVM, Random Forest, and Multilayer Perceptron models, with optimal hyperparameters selected for each classifier. The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%), which is, however, not as fast as the linear one, which also showed high accuracy with contrast resampling (91-94%). These values are superior to those reported for NER and sentiment analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online. © 2021 Fedor V. Krasnov, Irina S. Smaznevich, Elena N. Baskakova
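The contrast rule (keeping «without link» fragments only when they are distanced from any link in the source text) can be sketched as follows; the link-detection regex, the gap value, and the sentences are simplifying assumptions, not the paper's implementation:

```python
import re

LINK = re.compile(r"\[\d+\]")  # assumed numeric-bracket citation style

def contrast_fragments(sentences, gap=2):
    """Build 3-sentence samples; keep a 'without link' sample only when no
    link occurs within `gap` sentences of its centre (the contrast rule)."""
    linked = [bool(LINK.search(s)) for s in sentences]
    samples = []
    for i in range(1, len(sentences) - 1):
        frag = " ".join(sentences[i - 1 : i + 2])
        if linked[i]:
            samples.append((frag, "with link"))
        elif not any(linked[max(0, i - gap) : i + gap + 1]):
            samples.append((frag, "without link"))
    return samples

samples = contrast_fragments(["A.", "B [1].", "C.", "D.", "E.", "F."])
```

Note how the sentences adjacent to the link ("C." and "D." as centres) yield no sample at all: distancing the classes is what makes the automatic labels trustworthy.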
Conference Paper
Full-text available
The goal of this study is to advance conceptual development and the growth of knowledge in the information systems (IS) field by placing the spotlight on a component of theory that is rarely discussed: the native IS concept. Beginning with the assertion that concepts are not the same as constructs, we build the argument that concepts, which are observable sets of ideas, should take priority over constructs, which are unobservable fictions and hypothetical entities. Using natural language processing (NLP) based principles and techniques, we extract a sample of the most important concepts in the IS field from a corpus of 245 highly cited IS review articles and 1,293 citing articles from the Senior Scholars' Basket of Journals to illustrate the extent to which the field agrees on their usage, their clarity and distinctiveness, and how the field can move forward in enhancing its conceptual formation.
Article
Full-text available
In recent years, the full text of papers are increasingly available electronically which opens up the possibility of quantitatively investigating citation contexts in more detail. In this study, we introduce a new form of citation analysis, which we call citation concept analysis (CCA). CCA is intended to reveal the cognitive impact certain concepts—published in a highly-cited landmark publication—have on the citing authors. It counts the number of times the concepts are mentioned (cited) in the citation context of citing publications. We demonstrate the method using three classical highly cited books: (1) The structure of scientific revolutions by Thomas S. Kuhn, (2) The logic of scientific discovery—Logik der Forschung: Zur Erkenntnistheorie der modernen Naturwissenschaft in German—, and (3) Conjectures and refutations: the growth of scientific knowledge by Karl R. Popper. It is not surprising—as our results show—that Kuhn’s “paradigm” concept seems to have had a significant impact. What is surprising is that our results indicate a much larger impact of the concept “paradigm” than Kuhn’s other concepts, e.g., “scientific revolution”. The paradigm concept accounts for about 40% of the concept-related citations to Kuhn’s work, and its impact is resilient across all disciplines and over time. With respect to Popper, “falsification” is the most used concept derived from his books. Falsification is the cornerstone of Popper’s critical rationalism.
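The counting step of citation concept analysis can be sketched in a few lines; the citation contexts and concept list below are invented examples, not data from the study:

```python
from collections import Counter

def concept_counts(contexts, concepts):
    """Count how often each concept is mentioned across citation contexts."""
    counts = Counter()
    for ctx in contexts:
        text = ctx.lower()
        for concept in concepts:
            counts[concept] += text.count(concept)
    return counts

contexts = [
    "Kuhn's notion of a paradigm shift reframed the debate.",
    "Each paradigm defines its own puzzles (Kuhn, 1962).",
    "Normal science precedes a scientific revolution.",
]
counts = concept_counts(contexts, ["paradigm", "scientific revolution"])
```

Dividing each concept's count by the total then gives the concept's share of concept-related citations, the quantity behind the roughly 40% figure reported for "paradigm" above.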
Article
Full-text available
Reviewing a body of work presents unique opportunities for making a theoretical contribution. Review articles can make readers think theoretically differently about a given field or phenomenon. Yet, review articles that advance theory have been historically under‐represented in Journal of Management Studies. Accordingly, the purpose of this editorial is to propose a multi‐faceted approach for fashioning theoretical contributions in review articles, which we hope will inspire more authors to develop and submit innovative, original, and high‐quality theory‐building review articles. We argue that advancing theory with review articles requires an integrative and generative approach. We propose a non‐exhaustive set of avenues for developing theory with a review article: exposing emerging perspectives, analyzing assumptions, clarifying constructs, establishing boundary conditions, testing new theory, theorizing with systems theory, and theorizing with mechanisms. As a journal, Journal of Management Studies is a journal of ideas – new ideas; ideas drawn from reflections on extant theory and ideas with potential to change the way we understand and interpret theory. With this in mind, we think that advancing theory with review articles is an untapped source of new ideas.
Article
Full-text available
Fake news, malicious rumors, fabricated reviews, generated images and videos, are today spread at an unprecedented rate, making the task of manually assessing data veracity for decision-making purposes a daunting task. Hence, it is urgent to explore possibilities to perform automatic veracity assessment. In this work we review the literature in search for methods and techniques representing state of the art with regard to computerized veracity assessment. We study what others have done within the area of veracity assessment, especially targeted towards social media and open source data, to understand research trends and determine needs for future research. The most common veracity assessment method among the studied set of papers is to perform text analysis using supervised learning. Regarding methods for machine learning much has happened in the last couple of years related to the advancements made in deep learning. However, very few papers make use of these advancements. Also, the papers in general tend to have a narrow scope, as they focus on solving a small task with only one type of data from one main source. The overall veracity assessment problem is complex, requiring a combination of data sources, data types, indicators, and methods. Only a few papers take on such a broad scope, thus, demonstrating the relative immaturity of the veracity assessment domain.
Article
Full-text available
Purpose The purpose of this paper is to sensitize researchers to qualitative citation patterns that characterize original research, contribute toward the growth of knowledge and, ultimately, promote scientific progress. Design/methodology/approach This study describes how ideas are intertextually inserted into citing works to create new concepts and theories, thereby contributing to the growth of knowledge. By combining existing perspectives and dimensions of citations with Foucauldian theory, this study develops a typology of qualitative citation patterns for the growth of knowledge and uses examples from two classic works to illustrate how these citation patterns can be identified and applied. Findings A clearer understanding of the motivations behind citations becomes possible by focusing on the qualitative patterns of citations rather than on their quantitative features. The proposed typology includes the following patterns: original, conceptual, organic, juxtapositional, peripheral, persuasive, acknowledgment, perfunctory, inconsistent and plagiaristic. Originality/value In contrast to quantitative evaluations of the role and value of citations, this study focuses on the qualitative characteristics of citations, in the form of specific patterns of citations that engender original and novel research and those that may not. By integrating Foucauldian analysis of discourse with existing theories of citations, this study offers a more nuanced and refined typology of citations that can be used by researchers to gain a deeper semantic understanding of citations.
Article
Full-text available
Citations play a pivotal role in indicating various aspects of scientific literature. Quantitative citation analysis approaches have been used over the decades to measure the impact factor of journals, to rank researchers or institutions, to discover evolving research topics, etc. Researchers have doubted purely quantitative citation analysis approaches and argued that not all citations are equally important; citation reasons must be considered while counting. In the recent past, researchers have focused on identifying important citation reasons by classifying them into important and non-important classes rather than individually classifying each reason. Most contemporary citation classification techniques either rely on the full content of articles or are dominated by content-based features. However, content is often not freely available, as various journal publishers do not provide open access to articles. This paper presents a binary citation classification scheme dominated by metadata-based parameters. The study demonstrates the significance of metadata-based and content-based parameters in varying scenarios. The experiments are performed on two annotated data sets, evaluated by employing SVM, KLR, and Random Forest machine-learning classifiers. The results are compared with a contemporary study that performed similar classification employing a rich list of content-based features. The comparisons revealed that the proposed model attained an improved precision value (0.68) just by relying on freely available metadata. We claim that the proposed approach can serve as the best alternative in scenarios wherein content is unavailable.
Article
Full-text available
The goal of a review article is to present the current state of knowledge in a research area. Two important initial steps in writing a review article are boundary identification (identifying a body of potentially relevant past research) and corpus construction (selecting research manuscripts to include in the review). We present a theory-as-discourse approach that (a) creates a theory ecosystem of potentially relevant prior research using a citation-network approach to boundary identification; and (b) identifies manuscripts for consideration using machine learning or random selection. We demonstrate an instantiation of the theory-as-discourse approach through a proof-of-concept, which we call the Automated Detection of Implicit Theory (ADIT) technique. ADIT improves performance over the conventional approach as practiced in past Technology Acceptance Model reviews (i.e., keyword search, sometimes manual citation chaining); it identifies a set of research manuscripts that is more comprehensive and at least as precise. Our analysis shows that the conventional approach failed to identify a majority of past research. Like the three blind men examining the elephant, the conventional approach distorts the totality of the phenomenon. ADIT also enables researchers to statistically estimate the number of relevant manuscripts which were excluded from the resulting review article, thus enabling an assessment of the review article's representativeness.
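The citation-network boundary identification step described above can be sketched as a bounded traversal of a citation graph, expanding outward from seed review articles along both outgoing and incoming citation links. The toy graph, paper IDs, and depth limit below are hypothetical; ADIT's actual procedure may differ.

```python
# Sketch: build a "theory ecosystem" by following citations in both
# directions from seed papers, up to a fixed depth. Paper IDs are toy data.
from collections import deque

# citation graph: paper -> papers it cites (hypothetical IDs)
cites = {
    "review_A": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3", "p4"],
    "p3": [],
    "p4": [],
    "p5": ["review_A"],   # p5 cites the seed review
}

def cited_by(paper):
    """Papers whose reference lists include `paper`."""
    return [p for p, refs in cites.items() if paper in refs]

def theory_ecosystem(seeds, max_depth=2):
    seen, queue = set(seeds), deque((s, 0) for s in seeds)
    while queue:
        paper, depth = queue.popleft()
        if depth == max_depth:
            continue  # boundary: stop expanding at the depth limit
        for nxt in cites.get(paper, []) + cited_by(paper):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen

ecosystem = theory_ecosystem(["review_A"])
```

Corpus construction would then select manuscripts from this ecosystem, e.g., by a trained classifier or random sampling.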
Article
Full-text available
Researchers tend to cite highly cited articles, but how these highly cited articles influence the citing articles has been underexplored. This paper investigates how one highly cited essay, Hirsch’s “h-index” article (H-article) published in 2005, has been cited by other articles. Content-based citation analysis is applied to trace the dynamics of the article’s impact changes from 2006 to 2014. The findings confirm that citation context captures the changing impact of the H-article over time in several ways. In the first two years, average citation mentions of the H-article increased, but thereafter declined with fluctuation until 2014. In contrast with citation mentions, average citation counts stayed the same. The distribution of citation location over time also indicates three phases of the H-article's impact, which we term “Discussion,” “Reputation,” and “Adoption” in this study. Based on their locations in the citing articles and their roles in different periods, topics of citation context shifted gradually as an increasing number of other articles were co-mentioned with the H-article in the same sentences. These outcomes show that the impact of the H-article manifests in various ways within the content of the citing articles and continued to shift over nine years, dynamics that are not captured by traditional citation analysis, which does not weigh citation impact over time.
Conference Paper
Full-text available
Standalone literature reviews are fundamental in every scientific discipline. Their value is reflected by a profound scientific impact in terms of citations. Although previous empirical research has shown that this impact has a large variance, it is largely unknown which specific factors influence the impact of literature reviews. Against this background, the purpose of our study is to shed light on the driving factors that make a difference in the scientific impact of literature reviews. Our analysis of an exhaustive set of 214 IS literature reviews reveals that factors on the author level (e.g., expertise, collaboration, and conceptual feedback) and on the article level (e.g., methodological rigor) are significant and robust predictors of scientific impact over and above journal level factors. These insights enhance our understanding of what distinguishes highly cited literature reviews. In so doing, our study informs future guidelines on literature reviews and provides insights for prospective authors.
Article
Full-text available
Research in the information systems (IS) field has often been characterized as fragmented. This paper builds on a belief that for the field to move forward and have an impact on practitioners and other academic fields, the existing work must be examined and systematized. It is particularly important to systematize research on the factors that underlie success of organizational IS. The goal here is to conceptualize the IS success antecedents (ISSA) area of research through surveying, synthesizing, and explicating the work in the domain. Using a combination of qualitative and quantitative research methods, a taxonomy of 12 general categories is created, and existing research within each category is examined. Important lacunae in the direction of work have been determined. It is found that little work has been conducted on the macro-level independent variables, the most difficult variables to assess, although these variables may be the most important to understanding the ultimate value of IS to organizations. Similarly, ISSA research on success variables of consequence to organizations was found severely lacking. Variable analysis research on organizational-level success variables was found to be literally nonexistent in the IS field, whereas research in the organizational studies field was found to provide useful directions for IS researchers. The specifics of the 12 taxonomy areas are analyzed and directions for research in each of them provided. Thus, researchers and practitioners are directed toward available research and receive suggestions for future work to bring ISSA research toward an organized and cohesive future.
Article
Full-text available
The problem of detecting whether two behavioral constructs reference the same real world phenomenon has existed for over 100 years; we term discordant naming of constructs the Construct Identity Fallacy (CIF). We designed and evaluated the Construct Identity Detector (CID), the first tool with large-scale construct identity detection properties and the first tool that does not require respondent data. Through the adaptation and combination of different natural language processing (NLP) algorithms, six designs were created and evaluated against human expert decisions. All six designs were found capable of detecting construct identity, and a design combining two existing algorithms significantly outperformed the other approaches. A set of follow-up studies suggests the tool is valuable as a supplement to expert efforts in literature review and meta-analysis. Beyond Design Science contributions, this article has important implications related to the taxonomic structure of social- and behavioral-science constructs, for the jingle and jangle fallacy, the core of the Information Systems (IS) nomological network, and the inaccessibility of social and behavioral science knowledge. In sum, CID represents an important, albeit tentative, step towards discipline-wide identification of construct identities.
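One simple NLP signal that a CID-style tool can build on is lexical similarity between construct definitions, for instance cosine similarity over bag-of-words vectors. The definitions and threshold below are hypothetical illustrations; the actual CID designs combine and adapt several NLP algorithms.

```python
# Sketch: flag candidate construct-identity pairs by cosine similarity of
# bag-of-words vectors over construct definitions. Toy definitions only.
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

d1 = "perceived usefulness of the system for job performance"
d2 = "perceived usefulness of the system for job tasks"
d3 = "intention to discontinue using social media"

same = cosine(d1, d2)   # high similarity -> candidate identical constructs
diff = cosine(d1, d3)   # low similarity -> likely distinct constructs
```

A bag-of-words signal alone misses synonymy (the jingle-jangle problem), which is why combining algorithms, as the paper's best design does, matters.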
Article
Full-text available
The agility literature suggests a positive relationship between IT-investments, agility, and performance for firms operating in turbulent contexts. However, agility studies have primarily focused on conceptual concerns, leaving these relationships empirically unexplored. In addition, the literature has focused on for-profit firms operating in commercial markets, thereby leaving other important organizational types unexamined; one such type is the social enterprise (SE). SEs are entrepreneurial organizations with a mission to improve complex social challenges (e.g., healthcare, hunger, and education) rather than profit maximization. This void leaves SEs in the dark as to how they can leverage IT to become more agile and improve performance. We draw on the agility perspective to examine how one exemplary SE operating in the context of pediatric global health utilized IT to enhance its agility and improve performance. We identify how the SE's IT-investment decisions resulted in an IT platform that facilitated increased agility in launching new products aimed at improving survival rates of children. Specifically, we analyze how the SE's IT platform positively impacted customer, partnering, and operational agility, and demonstrate how this led to dramatic improvements in performance. Finally, we offer evidence to support positive relationships between IT, agility, and performance in social sector contexts.
Article
Full-text available
When an enterprise system is introduced, system users often experience a performance dip as they struggle with the unfamiliar system. Appropriately managing this phase, which we term the swift response phase (SRP), is vital given its prominent impact on the eventual success of the system. Yet, there is a glaring lack of studies that examine the SRP. Drawing on sensemaking theory and early postadoptive literature, this study seeks to propose a theory-driven model to understand how different support structures facilitate different forms of use-related activities to induce a positive performance in the SRP. The model was tested through a two-stage survey involving 329 nurses. The results demonstrated the discriminating alignment between information system (IS) use-related activity and support structures in enhancing system users' work performance in the SRP. Specifically, suitability of impersonal support moderated the effects of standardized system use and individual adaption on performance, whereas availability of personal support only moderated the effect of nonstandardized system use on performance. Regarding the moderating role of personal support, IS specialist support had a lower influence than peer-champion support and peer-user support. This study contributes to the extant literature by (1) conceptualizing the turbulent SRP, (2) applying sensemaking theory to the initial postadoptive stage, (3) adding to the theoretical debate on the value of system use, and (4) unveiling the distinct roles of support structures under different types of use activities. Practical suggestions are provided for organizational management and policy makers to deal with the complexities in the SRP.
Article
Full-text available
Enterprise Architecture Management (EAM) is discussed in academia and industry as a vehicle to guide IT implementations, alignment, compliance assessment, or technology management. Still, a lack of knowledge prevails about how EAM can be successfully used, and how positive impact can be realized from EAM. To determine these factors, we identify EAM success factors and measures through literature reviews and exploratory interviews and propose a theoretical model that explains key factors and measures of EAM success. We test our model with data collected from a cross-sectional survey of 133 EAM practitioners. The results confirm the existence of an impact of four distinct EAM success factors, 'EAM product quality', 'EAM infrastructure quality', 'EAM service delivery quality', and 'EAM organizational anchoring', and two important EAM success measures, 'intentions to use EAM' and 'Organizational and Project Benefits' in a confirmatory analysis of the model. We found the construct 'EAM organizational anchoring' to be a core focal concept that mediated the effect of success factors such as 'EAM infrastructure quality' and 'EAM service quality' on the success measures. We also found that 'EAM satisfaction' was irrelevant to determining or measuring success. We discuss implications for theory and EAM practice.
Article
Full-text available
We conceive of information technology (IT) innovation posture-profile misalignment as a condition that exists when a firm's innovation posture (the extent to which a firm leads with IT innovation) does not match up with its innovation resource profile (the firm's stock of resources conducive to effective innovation). We theorize that firms with a posture-profile misalignment will see diminished returns from IT adoption because they will be less likely to possess (and be less effective at exploiting) crucial innovation resources when they need them most. We demonstrate how misalignment conditions the link between IT innovation adoption and organizational performance using a data set comprising electronic networking technologies in over 25,000 U.S. manufacturing plants. Productivity regression estimations reveal a consistent pattern that the association between IT innovation adoption and productivity is substantially diminished among misaligned firms. These empirical results provide initial confirmation of the theoretical value of innovation posture, innovation resource profile, and innovation posture-profile misalignment. We consider the implications for research on business value and innovation as well as for the practice of management.
Article
Full-text available
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the "laws" of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
Article
Full-text available
Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation’s value by interpreting each based on its context at both the syntactic and semantic level. This paper provides a comprehensive overview of CCA research in terms of its theoretical foundations, methodical approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.
Article
Full-text available
With the proliferation of available electronic service channels for IS users such as mobile or Intranet services in companies, service interactions between IS users and IS professionals have become an increasingly important factor for organizational business-IT alignment. Despite the increasing relevance of such interactions, the implications of agreement or disagreement on the fulfillment of critical service quality factors for successful alignment and higher user satisfaction are far from being well understood. While prior research has extensively studied the question of matching different viewpoints on IS service quality in organizations, little or no attention has been paid to the role of perceptual congruence or incongruence in the dyadic relationship between IS professionals and users in forming user satisfaction with the IS function. Drawing on cognitive dissonance theory, prospect theory and perceptual congruence research, our study examines survey responses from 169 matching pairs of IS professionals and users in different organizations and explains how perceptual fit patterns affect user satisfaction with the IS function. We demonstrate that perceptual congruence can, in and of itself, have an impact on user satisfaction which goes beyond what was found before. Moreover, our results reveal the relevance of nonlinear and asymmetric effect mechanisms arising from perceptual (in-)congruence that may affect user satisfaction. This study extends our theoretical understanding of the role of perceptual alignment or misalignment on IS service quality factors in forming user satisfaction, and lays the foundation for further study of the interplay between perceptions in the dyadic relationship between IS professionals and IS users. Managers who seek to encourage particular behaviors by the IS staff or IS users may use our results to reconcile the oftentimes troubled business-IT relationship.
Article
Full-text available
What value does information technology (IT) create in governments and how does it do so? While business value of IT has been extensively studied in the information systems field, this has not been the case for public value. This is in part due to a lack of theoretical bases for investigating IT value in the public sector. To address this issue, we present a conceptual model on the mechanism by which IT resources contribute to value creation in the public-sector organizations. We propose that the relationship between IT resources and organizational performance in governments is mediated by organizational capabilities and develop a theoretical model that delineates the paths from IT resources to organizational performance, drawing upon public-value management theory. This theory asserts that public managers, on behalf of the public, should actively strive to generate greater public value, as managers in the private sector seek to achieve greater private business value. On the basis of the review of public-value management literature, we suggest that the following five organizational capabilities mediate the relationship between IT resources and public value – public service delivery capability, public engagement capability, co-production capability, resource-building capability, and public-sector innovation capability. We argue that IT resources in public organizations can enable public managers to advance public-value frontiers by cultivating these five organizational capabilities and to overcome conflicts among competing values.
Article
Full-text available
Absorptive capacity is a firm's ability to identify, assimilate, transform, and apply valuable external knowledge. It is considered an imperative for business success. Modern information technologies perform a critical role in the development and maintenance of a firm's absorptive capacity. We provide an assessment of absorptive capacity in the information systems literature. IS scholars have used the absorptive capacity construct in diverse and often contradictory ways. Confusion surrounds how absorptive capacity should be conceptualized, its appropriate level of analysis, and how it can be measured. Our aim in reviewing this construct is to reduce such confusion by improving our understanding of absorptive capacity and guiding its effective use in IS research. We trace the evolution of the absorptive capacity construct in the broader organizational literature and pay special attention to its conceptualization, assumptions, and relationship to organizational learning. Following this, we investigate how absorptive capacity has been conceptualized, measured, and used in IS research. We also examine how absorptive capacity fits into distinct IS themes and facilitates understanding of various IS phenomena. Based on our analysis, we provide a framework through which IS researchers can more fully leverage the rich aspects of absorptive capacity when investigating the role of information technology in organizations.
Article
Full-text available
This paper examines two ways to create business value of information technology (BVIT): resource structuring and capability building. We develop a research model positing that IT resources and IT capabilities enhance a firm's performance by providing support to its competitive strategies and core competencies, and the strengths of these supports vary in accord with environmental dynamism. The model is empirically tested using data collected from 296 firms in China. It is found that IT resources generate more business effects in stable environments than in dynamic environments, while IT capabilities generate more business effects in dynamic environments than in stable environments. The results suggest that the BVIT creation mechanism in stable environments is primarily resource structuring while the mechanism in dynamic environments is primarily capability building.
Article
Literature reviews (LRs) play an important role in the development of domain knowledge in all fields. Yet, we observe a lack of insights into the activities with which LRs actually develop knowledge. To address this important gap, we (1) derive knowledge building activities from the extant literature on LRs, (2) suggest a knowledge-based typology of LRs that complements existing typologies, and (3) apply the suggested typology in an empirical study that explores how LRs with different goals and methodologies have contributed to knowledge development. The analysis of 240 LRs published in 40 renowned IS journals between 2000 and 2014 allows us to draw a detailed picture of knowledge development achieved by one of the most important genres in the IS field. An overarching contribution of our work is to unify extant conceptualizations of LRs by clarifying and illustrating how LRs apply different methodologies in a range of knowledge building activities to achieve their goals with respect to theory.
Article
With the rapid proliferation of images on e-commerce platforms today, embracing and integrating versatile information sources have become increasingly important in recommender systems. Owing to the heterogeneity in information sources and consumers, it is necessary and meaningful to consider the potential synergy between visual and textual content as well as consumers’ different cognitive styles. This paper proposes a multi-view model, namely, Deep Multi-view Information iNtEgration (Deep-MINE), to take multiple sources of content (i.e., product images, descriptions and review texts) into account and design an end-to-end recommendation model. In doing so, stacked auto-encoder networks are deployed to map multi-view information into a unified latent space, a cognition layer is added to depict consumers’ heterogeneous cognition styles and an integration module is introduced to reflect the interaction of multi-view latent representations. Extensive experiments on real world data demonstrate that Deep-MINE achieves high accuracy in product ranking, especially in the cold-start case. In addition, Deep-MINE is able to boost overall model performance compared with models taking a single view, further verifying the proposed model's effectiveness on information integration.
Article
Citations have long been used to characterize the state of a scientific field and to identify influential works. However, writers use citations for different purposes, and this varied purpose influences uptake by future scholars. Unfortunately, our understanding of how scholars use and frame citations has been limited to small-scale manual citation analysis of individual papers. We perform the largest behavioral study of citations to date, analyzing how scientific works frame their contributions through different types of citations and how this framing affects the field as a whole. We introduce a new dataset of nearly 2,000 citations annotated for their function, and use it to develop a state-of-the-art classifier and label the papers of an entire field: Natural Language Processing. We then show how differences in framing affect scientific uptake and reveal the evolution of the publication venues and the field as a whole. We demonstrate that authors are sensitive to discourse structure and publication venue when citing, and that how a paper frames its work through citations is predictive of the citation count it will receive. Finally, we use changes in citation framing to show that the field of NLP is undergoing a significant increase in consensus.
Article
Information retrieval systems for scholarly literature rely heavily not only on text matching but also on semantic- and context-based features. Readers nowadays are deeply interested in how important an article is, its purpose and how influential it is in follow-up research work. Numerous techniques to tap the power of machine learning and artificial intelligence have been developed to enhance retrieval of the most influential scientific literature. In this paper, we compare and improve on four existing state-of-the-art techniques designed to identify influential citations. We consider 450 citations from the Association for Computational Linguistics corpus, classified by experts as either important or unimportant, and further extract 64 features based on the methodology of four state-of-the-art techniques. We apply the Extra-Trees classifier to select the 29 best features and apply the Random Forest and Support Vector Machine classifiers to all selected techniques. Using the Random Forest classifier, our supervised model improves on the state-of-the-art method by 11.25%, with 89% Precision-Recall area under the curve. Finally, we present our deep-learning model, the Long Short-Term Memory network, that uses all 64 features to distinguish important and unimportant citations with 92.57% accuracy.
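The two-stage pipeline this abstract describes (Extra-Trees feature selection followed by a Random Forest classifier) can be sketched as below. The synthetic data stands in for the study's 64 hand-crafted citation features, and the selection threshold is scikit-learn's default, not necessarily the one the authors used.

```python
# Sketch: rank features with Extra-Trees, keep the informative ones, then
# train a Random Forest on the reduced feature set. Data is synthetic.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only 2 of 10 features are informative

# Stage 1: Extra-Trees importance ranking + selection (default: mean threshold)
selector = SelectFromModel(
    ExtraTreesClassifier(n_estimators=100, random_state=0)
).fit(X, y)
X_sel = selector.transform(X)

# Stage 2: Random Forest on the selected features
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sel, y)
acc = clf.score(X_sel, y)  # training accuracy, for illustration only
```

A faithful replication would report 10-fold cross-validated Precision-Recall AUC rather than training accuracy.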
Article
We measure the knowledge flows between countries by analysing publication and citation data, arguing that not all citations are equally important. Therefore, in contrast to existing techniques that utilize absolute citation counts to quantify knowledge flows between different entities, our model employs a citation context analysis technique, using a machine-learning approach to distinguish between important and non-important citations. We use 14 novel features (including context-based, cue words-based and text-based) to train a Support Vector Machine (SVM) and Random Forest classifier on an annotated dataset of 20,527 publications downloaded from the Association for Computational Linguistics anthology (http://allenai.org/data.html). Our machine-learning models outperform existing state-of-the-art citation context approaches, with the SVM model reaching up to 61% and the Random Forest model up to a very encouraging 90% Precision–Recall Area Under the Curve, with 10-fold cross-validation. Finally, we present a case study to explain our deployed method for datasets of PLoS ONE full-text publications in the field of Computer and Information Sciences. Our results show that a significant volume of the knowledge flowing from the United States, based on important citations, is consumed by the international scientific community. Of the total knowledge flow from China, we find a relatively smaller proportion (only 4.11%) falling into the category of knowledge flow based on important citations, while The Netherlands and Germany show the highest proportions of knowledge flows based on important citations, at 9.06% and 7.35%, respectively. Among the institutions, interestingly, the findings show that at the University of Malaya more than 10% of the knowledge produced falls into the important category. We believe that such analyses are helpful to understand the dynamics of the relevant knowledge flows across nations and institutions.
Article
Automobile insurance fraud represents a pivotal percentage of property insurance companies' costs and affects the companies' pricing strategies and social economic benefits in the long term. Automobile insurance fraud detection has become critically important for reducing the costs of insurance companies. Previous studies on automobile insurance fraud detection examined various numeric factors, such as the time of the claim and the brand of the insured car. However, the textual information in the claims has rarely been studied to analyze insurance fraud. This paper proposes a novel deep learning model for automobile insurance fraud detection that uses Latent Dirichlet Allocation (LDA)-based text analytics. In our proposed method, LDA is first used to extract the text features hiding in the text descriptions of the accidents appearing in the claims, and deep neural networks are then trained on the data, which include the text features and traditional numeric features, for detecting fraudulent claims. Based on a real-world insurance fraud dataset, our experimental results reveal that the proposed text analytics-based framework outperforms a traditional one. Furthermore, the experimental results show that the deep neural networks outperform widely used machine learning models, such as random forests and support vector machines. Therefore, our proposed framework that combines deep neural networks and LDA is a suitable potential tool for automobile insurance fraud detection.
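The LDA-plus-neural-network pipeline described above can be sketched in miniature: topic proportions extracted from claim texts are concatenated with traditional numeric features and fed to a small neural network. All claim texts, numeric features, labels, and dimensions below are invented illustrations, and a shallow MLP stands in for the paper's deep network.

```python
# Sketch: LDA topic features from claim descriptions + numeric features,
# combined as input to a neural network fraud classifier. Toy data only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neural_network import MLPClassifier

claims = [
    "rear end collision at night minor bumper damage",
    "vehicle stolen from parking lot no witnesses",
    "total loss fire shortly after policy purchase",
    "windshield cracked by road debris on highway",
]
numeric = np.array([[2, 0], [10, 1], [30, 1], [1, 0]])  # e.g. [claim_amount_k, prior_claims]
labels = np.array([0, 1, 1, 0])                          # 1 = fraudulent (toy labels)

# Step 1: LDA topic proportions as text features
bow = CountVectorizer().fit_transform(claims)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(bow)

# Step 2: concatenate text and numeric features, train the network
X = np.hstack([topics, numeric])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
```

In practice the topic count, network depth, and class imbalance handling would be tuned on the real claims dataset.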
Article
Investment appraisal techniques are an integral part of many traditional capital budgeting processes. However, the adoption of Information Systems (IS) and the development of resulting infrastructures are being increasingly viewed on the basis of consumption. Consequently, decision-makers are now moving away from the confines of rigid capital budgeting processes, which have traditionally compared IS with non-IS-related investments. With this in mind, the authors seek to dissect investment appraisal from the broader capital budgeting process to allow a deeper understanding of the mechanics involved with IS justification. This analysis presents conflicting perspectives surrounding the scope and sensitivity of traditional appraisal methods. In contributing to this debate, the authors present taxonomies of IS benefit types and associated natures, and discuss the resulting implications of using traditional appraisal techniques during the IS planning and decision-making process. A frame of reference that can be used to navigate through the variety of appraisal methods available to decision-makers is presented and discussed. Taxonomies of appraisal techniques that are classified by their respective characteristics are also presented. Perspectives surrounding the degree of involvement that financial appraisal should play during decision making and the limitations surrounding investment appraisal techniques are identified.
Article
Businesses continue to make large investments in information technology (IT) resources, and it is crucial for them to implement effective management strategies to better leverage these resources. Modern organizations are increasingly dependent on IT to remain agile and competitive in a rapidly changing market, but there remain gaps in understanding how IT resources support IT agility. Recent IT strategy research highlights the role of IT service climate in driving positive IT service quality, and we extend this work in the form of a theoretical model that relates an organization’s internal IT service perceptions to IT agility. We hypothesize a partially mediated relationship wherein internal IT service perceptions positively affect IT agility, both directly and indirectly, through facilitating positive IT service quality, highlighting the crucial role of IT personnel and their service orientation in provisioning services to enable IT agility. We test our model with an unmatched survey of 400 full-time IT managers and professionals and find strong support for our hypotheses. Our results have important implications for future research and practice, as the IT community continues to seek to adopt effective strategies for managing and leveraging its expensive resources.
Article
Despite the importance of investing in information technology, research on business value of information technology (BVIT) shows contradictory results, raising questions about the reasons for divergence. Kohli and Devaraj (2003) provided valuable insights into this issue based on a meta-analysis of 66 BVIT studies. This paper extends Kohli and Devaraj by examining the influences on BVIT through a meta-analysis of 303 studies published between 1990 and 2013. We found that BVIT increases when the study does not consider IT investment, does not use a profitability measure of value, and employs primary data sources, fewer IT-related antecedents, and larger sample sizes. Considerations of IT alignment, IT adoption and use, and interorganizational IT strengthen the relationship between IT investment and BVIT, whereas the focus on environmental theories dampens the same relationship. However, the use of productivity measures of value, the number of dependent variables, the economic region, the consideration of IT assets and IT infrastructure or capability, and the consideration of IT sophistication do not affect BVIT. Finally, BVIT increases over time with IT progress. Implications for future research and practice are discussed.
Article
Although scientometrics is seeing increasing use in Information Systems (IS) research, in particular for evaluating research efforts and measuring scholarly influence, scientometric IS studies have historically focused primarily on ranking authors, journals, or institutions. Notwithstanding the usefulness of ranking studies for evaluating the productivity of the IS field’s formal communication channels and its scholars, the IS field has yet to exploit the full potential that scientometrics offers, especially towards its progress as a discipline. This study makes a contribution by raising the discourse surrounding the value of scientometric research in IS, and proposes a framework that uncovers the multi-dimensional bases for citation behaviour and its epistemological implications for the creation, transfer, and growth of IS knowledge. Having identified 112 empirical research evaluation studies in IS, we select 44 substantive scientometric IS studies for in-depth content analysis. The findings from this review allow us to map an engaging future for scientometric research, especially towards enhancing the IS field’s conceptual and theoretical development.
Article
The perceived lack of benefits resulting from investments in information technology (IT), that is, the 'productivity paradox', is first described. The literature is divided into studies supported by variance theory and those stemming from process theory. Within the first group, there are studies refuting the productivity paradox, based on results showing a positive impact of IT investments on firm performance. The second group of papers, however, consists of studies that found no evidence of a favorable impact. Such conflicting results have caused researchers to turn to process theory, which focuses on how IT affects processes, rather than overall output statistics of the firm.
Article
Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also begun to be applied to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.
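The tutorial's core pipeline, encoding text into a fixed-length vector and feeding it through a feed-forward layer, can be illustrated with a minimal sketch. The example below trains a single-unit network (equivalent to logistic regression) with plain gradient descent; the vocabulary, documents, and labels are all hypothetical toy data, not from the tutorial itself.

```python
import math

VOCAB = ["good", "bad"]  # hypothetical toy vocabulary

def bow(text):
    """Encode text as a bag-of-words count vector over VOCAB."""
    tokens = text.lower().split()
    return [tokens.count(w) for w in VOCAB]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(docs, labels, epochs=200, lr=0.5):
    """Fit a single-unit feed-forward network by stochastic gradient descent."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, y in zip(docs, labels):
            x = bow(text)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = p - y  # derivative of the log loss w.r.t. the pre-activation
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, text):
    """Return the predicted probability of the positive class."""
    x = bow(text)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(["good good", "good", "bad", "bad bad"], [1, 1, 0, 0])
```

Larger vocabularies, embeddings, and the convolutional and recurrent architectures the tutorial covers all build on this same encode-then-transform pattern.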
Article
This study develops an instrument that may be used as an information systems (IS) functional scorecard (ISFS). It is based on a theoretical input-output model of the IS function's role in supporting business process effectiveness and organizational performance. The research model consists of three system output dimensions - systems performance, information effectiveness, and service performance. The "updated paradigm" for instrument development was followed to develop and validate the ISFS instrument. Construct validation of the instrument was conducted using responses from 346 systems users in 149 organizations by a combination of exploratory factor analysis and structural equation modeling using LISREL. The process resulted in an instrument that measures 18 unidimensional factors within the three ISFS dimensions. Moreover, a sample of 120 matched-paired responses of separate CIO and user responses was used for nomological validation. The results showed that the ISFS measure reflected by the instrument was positively related to improvements in business processes effectiveness and organizational performance. Consequently, the instrument may be used for assessing IS performance, for guiding information technology investment and sourcing decisions, and as a basis for further research and instrument development.
Article
Previous research has proposed different types of information technology (IT) governance and contingency factors affecting it. Yet, in spite of this valuable work, it is still unclear through what mechanisms IT governance affects organizational performance. We make a detailed argument for the mediation of strategic alignment in this process. Strategic alignment remains a top priority for business and IT executives, but theory-based empirical research on the relative importance of the factors affecting strategic alignment is still lagging. By consolidating strategic alignment and IT governance models, this research proposes a nomological model showing how organizational value is created through IT governance mechanisms. Our research model draws upon the resource-based view of the firm and provides guidance on how strategic alignment can mediate the effectiveness of IT governance on organizational performance. As such, it contributes to the knowledge bases of both the alignment and IT governance literatures. Using dyadic data collected from 131 Taiwanese companies (cross-validated with archival data from 72 firms), we uncover a positive, significant, and impactful linkage between IT governance mechanisms and strategic alignment and, further, between strategic alignment and organizational performance. We also show that the effect of IT governance mechanisms on organizational performance is fully mediated by strategic alignment. Besides contributing constructs and measurement items in this domain, this research contributes to the theory base by integrating and extending the literature on IT governance and strategic alignment, both of which have long been recognized as critical for achieving organizational goals.
Article
The hypercompetitive aspects of modern business environments have drawn organizational attention toward agility as a strategic capability. Information technologies are expected to be an important competency in the development of organizational agility. This research proposes two distinct roles to understand how information technology competencies shape organizational agility and firm performance. In their enabling role, IT competencies are expected to directly enhance entrepreneurial and adaptive organizational agility. In their facilitating role, IT competencies should enhance firm performance by helping the implementation of requisite entrepreneurial and adaptive actions. Furthermore, we argue that the effects of the dual roles of IT competencies are moderated by multiple contingencies arising from environmental dynamism and other sources. We test our model and hypotheses through a latent class regression analysis on data from a sample of 109 business-to-business electronic marketplaces. The results provide support for the enabling and facilitating roles of IT competencies. Moreover, we find that these dual effects vary according to environmental dynamism. The results suggest that managers should account for (multiple) contingencies (observed and unobserved) while assessing the effects of IT competencies on organizational agility and firm performance.
Conference Paper
Citations are a valuable resource for characterizing scientific publications that has already been used in applications such as summarization and information retrieval. These applications could be even better served by expanding citation information. We aim to achieve this by extracting and classifying citation information from the text, so that subsequent applications may make use of it. We make three contributions to the advancement of fine-grained citation classification. First, our work uses a standard classification scheme for citations that was developed independently of automatic classification and therefore is not bound to any particular citation application. Second, to address the lack of available annotated corpora and reproducible results for citation classification, we are making available a manually-annotated corpus as a benchmark for further citation classification research. Third, we introduce new features designed for citation classification and compare them experimentally with previously proposed citation features, showing that these new features improve classification accuracy.
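The kind of lexical evidence such citation classifiers draw on can be sketched as cue-phrase counts over a citation's surrounding sentence. The categories and cue lists below are illustrative placeholders, not the paper's actual annotation scheme or feature set.

```python
# Illustrative categories and cue phrases; the paper's real scheme differs.
CUE_PHRASES = {
    "uses": ["extend", "adopt", "based on", "following", "apply"],
    "contrasts": ["in contrast", "unlike", "however", "outperform"],
    "background": ["has been studied", "prior work", "previous research"],
}

def extract_features(context):
    """Count cue-phrase hits per category in a citation-context sentence."""
    text = context.lower()
    return {cat: sum(text.count(cue) for cue in cues)
            for cat, cues in CUE_PHRASES.items()}

def classify(context):
    """Assign the category with the most cue hits; default to background."""
    feats = extract_features(context)
    best = max(feats, key=feats.get)
    return best if feats[best] > 0 else "background"
```

In practice these counts would be one feature group among many (position, section, self-citation flags) fed to a supervised classifier rather than used as hard rules.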
Article
It is widely acknowledged that IT and business resources need to be well aligned to achieve organizational goals. Yet, year after year, chief information officers (CIOs) still name business-IT alignment a key challenge for IT executives. While alignment research has matured, we still lack a sound theoretical foundation for alignment. Transcending the predominantly strategic executive level focus, we develop a model of 'operational alignment' and IT business value that combines a social perspective of IT and business linkage with a view of interaction between business and IT at non-strategic levels, such as in daily business operations involving regular staff. Drawing on social capital theory to explain how alignment affects organizational performance, we examine why common suggestions like "communicate more" are insufficient to strengthen alignment and disclose how social capital between IT and business units drives alignment and ultimately IT business value. Empirical data from 136 firms confirms the profound impact of operational business-IT alignment, composed of social capital and business understanding of IT, on IT flexibility, IT utilization, and organizational performance. The results show that social capital theory is a useful theoretical foundation for understanding how business-IT alignment works. The findings suggest that operational alignment is at least as important as strategic alignment for IT service quality, that managers need to focus on operational aspects of alignment beyond communication by fostering knowledge, trust and respect, and that IT utilization and flexibility are appropriate intermediate goals for business-IT alignment governance.
Article
This paper reviews key concepts from the resource-based theory (RBT) of the firm, including evidence of “empirical support” for RBT. However, the paper then turns the conventional logic of empirical testing of RBT on its head, and argues that all that empirical testing does is to show researchers’ success in identifying valuable, rare, inimitable, and non-substitutable (VRIN) resources. Examining the IS literature from this perspective, the paper identifies a number of resources that really do seem to have been sources of competitive advantage. It concludes with recommendations on how RBT should be used in future strategic IS research.
Article
Despite polarizing arguments on the strategic potential of information technology (IT), academic research has yet to demonstrate clearly that information systems initiatives can lead to sustained competitive performance (CP). We investigate this question using data from 165 hotels affiliated with two brands of an international lodging chain. We study the effect of successful use and unreliability of an incremental IT-enabled self-service channel on overall CP. We find that the effect of the incremental service channel depends on the firm’s organizational resources. We also show that different organizations experience significantly different use and unreliability rates. Further, we find that the positive association between the use of an IT-enabled self-service channel and CP endures over a 2-year period, despite competitors’ widespread adoption of the technology enabling the incremental service channel (self-service kiosks). Our findings corroborate research on the strategic role of IT resources when appropriately coupled with complementary resources. They lead us to question the notion that IT is a strategic commodity. Indeed, the findings suggest that IT-dependent strategic initiatives have the potential to generate sustained CP, even when the technology that enables them appears ‘simple’. These findings suggest the need for a theoretical explanation of the complementarities and interaction among the elements of IT-dependent strategic initiatives.
Article
In this paper, we examine how the competitive industry environment shapes the way that digital strategic posture (defined as a focal firm's degree of engagement in a particular class of digital business practices relative to the industry norm) influences firms' realized digital business strategy. We focus on two forms of digital strategy: general IT investment and IT outsourcing investment. Drawing from prior literature on determinants of IT activity and competitive dynamics, we argue that three elements of the industry environment determine whether digital strategic posture has an increasingly convergent or divergent influence on digital business strategy. By divergent influence, we mean an influence that leads to spending substantially more or less on a particular strategic activity than industry norms. We predict that a digital strategic posture (difference from the industry mean) has an increasingly divergent effect on digital business strategy under higher industry turbulence, while having an increasingly convergent effect on digital business strategy under higher industry concentration and higher industry growth. The study uses archival data for 400 U.S.-based firms from 1999 to 2006. Our findings imply that digital business strategy is not solely a matter of optimizing firm operations internally or of responding to one or two focal competitors, but also arises strikingly from awareness and responsiveness to the digital business competitive environment. Collectively, the findings provide insights on how strategic posture and industry environment influence firms' digital business strategy.
Article
To date, most research on information technology (IT) outsourcing concludes that firms decide to outsource IT services because they believe that outside vendors possess production cost advantages. Yet it is not clear whether vendors can provide production ...
Article
Information technology matters to business success because it directly affects the mechanisms through which firms create and capture value to earn a profit: IT is thus integral to a firm's business-level strategy. Much of the extant research on the IT/strategy relationship, however, inaccurately frames IT as only a functional-level strategy. This widespread under-appreciation of the business-level role of IT indicates a need for substantial retheorizing of its role in strategy and its complex and interdependent relationship with the mechanisms through which firms generate profit. Using a comprehensive framework of potential profit mechanisms, we argue that while IT activities remain integral to the functional-level strategies of the firm, they also play several significant roles in business strategy, with substantial performance implications. IT affects industry structure and the set of business-level strategic alternatives and value-creation opportunities that a firm may pursue. Along with complementary organizational changes, IT both enhances the firm's current (ordinary) capabilities and enables new (dynamic) capabilities, including the flexibility to focus on rapidly changing opportunities or to abandon losing initiatives while salvaging substantial asset value. Such digitally attributable capabilities also determine how much of this value, once created, can be captured by the firm, and how much will be dissipated through competition or through the power of value chain partners, the governance of which itself depends on IT. We explore these business-level strategic roles of IT and discuss several provocative implications and future research directions in the converging information systems and strategy domains.
Article
Literature citation analysis plays a very important role in bibliometrics and scientometrics, underpinning measures such as the Science Citation Index (SCI) impact factor and the h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important: some citations are more important than others. We use a strength value to assess the importance of each citation and propose a regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values achieve good correlation with human-labeled values. We further apply the estimated citation strength values to evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.
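The regression step the authors describe can be sketched in its simplest possible form: ordinary least squares on a single illustrative feature (say, how often a reference is mentioned in the citing paper's text) against human-labeled strength values. The feature choice and the data below are hypothetical, not the paper's.

```python
def fit_ols(xs, ys):
    """Closed-form simple linear regression: strength ~ a + b * feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))  # slope from covariance / variance
    a = my - b * mx                          # intercept through the means
    return a, b

# hypothetical data: in-text mention counts vs. human-labeled strength values
mentions = [1, 1, 2, 3, 5]
strength = [0.2, 0.3, 0.4, 0.6, 0.9]
a, b = fit_ols(mentions, strength)
```

A positive fitted slope would mirror the paper's intuition that more heavily mentioned references carry more ideational weight; the real model would combine several such features in a multivariate regression.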
Article
The implementation of enterprise systems has yielded mixed and unpredictable outcomes in organizations. Although the focus of prior research has been on training and individual self-efficacy as important enablers, we examine the roles that the social network structures of employees, and the organizational units where they work, play in influencing the postimplementation success. Data were gathered across several units within a large organization: immediately after the implementation, six months after the implementation, and one year after the implementation. Social network analysis was used to understand the effects of network structures, and hierarchical linear modeling was used to capture the multilevel effects at unit and individual levels. At the unit level of analysis, we found that centralized structures inhibit implementation success. At the individual level of analysis, employees with high in-degree and betweenness centrality reported high task impact and information quality. We also found a cross-level effect such that central employees in centralized units reported implementation success. This suggests that individual-level success can occur even within a unit structure that is detrimental to unit-level success. Our research has significant implications for the implementation of enterprise systems in large organizations.
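The two individual-level network measures the study relies on, in-degree and betweenness centrality, can be computed from scratch on a small directed graph. The sketch below uses Brandes' well-known accumulation scheme for unweighted betweenness; the toy advice network is hypothetical and stands in for the employee networks the study analyzes.

```python
from collections import deque

def in_degree(graph):
    """Count incoming edges for each node in a directed adjacency dict."""
    deg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            deg[w] += 1
    return deg

def betweenness(graph):
    """Unweighted betweenness centrality via Brandes' algorithm."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, pred = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}   # number of shortest paths from s
        sigma[s] = 1
        dist = {v: -1 for v in graph}
        dist[s] = 0
        queue = deque([s])
        while queue:                    # BFS from the source s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                    # back-propagate path dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# hypothetical advice network: who asks whom for help with the new system
net = {"ann": ["bob"], "bob": ["carl"], "carl": []}
```

Here "bob" brokers the only path from "ann" to "carl", so his betweenness is highest; the study's finding that such central employees report higher task impact would be tested against measures computed exactly this way.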