Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

There is no escaping the Big Data hype. Vendors are peddling Big Data solutions; consulting firms employ Big Data specialists; Big Data conferences are aplenty. There is a rush to extract golden nuggets (of insight) from mountains (of data). By focusing merely on the mountain (of Big Data), these adventurers are overlooking the source of the revolution, namely the many digital data streams (DDSs) that create Big Data, and the opportunity to improve real-time decision making. This article discusses the characteristics of DDSs, describes their common structure, and offers guidelines to enable firms to profit from their untapped potential.
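To make the idea of a DDS concrete, the following minimal sketch models a single stream element as a time-stamped record; the field names (who, what, when, where, payload) and the example values are illustrative assumptions, not the authors' exact formalism.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict

@dataclass
class DDSEvent:
    """One element of a digital data stream (DDS): a time-stamped,
    machine-readable record of a single real-world event."""
    who: str                      # entity generating the event (customer, device, ...)
    what: str                     # kind of event (purchase, check-in, sensor reading, ...)
    when: datetime                # event timestamp
    where: str                    # location (GPS fix, store id, URL, ...)
    payload: Dict[str, Any] = field(default_factory=dict)  # event-specific details

# Example: a single point-of-sale transaction flowing into a DDS
event = DDSEvent(
    who="customer-4711",
    what="credit_card_purchase",
    when=datetime.now(timezone.utc),
    where="store-nyc-05",
    payload={"amount": 42.50, "currency": "USD"},
)
print(event)
```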


... Digital data, such as online customers' feedback or transaction records, have become central to a firm's value creation, either enabling new value propositions or empowering existing products and services. Pigni et al. (2016) have advanced a taxonomy of the value propositions firms leverage to extract value from the increasing flow of digital data generated by an increasingly pervasive use of digital devices. This taxonomy aims to guide practitioners' actions in extracting value from big data. ...
... Value archetypes represent generalized categories of the ways firms uniquely combine products, services and DDSs to create customer value. Five archetypes were identified by Pigni et al. (2016): ...
... We also wanted to foster the impact of research by allowing practitioners to contextualize it, while using collaboration to support engagement and discussion for better learning (Prince, 2004). Thus, we transformed recently published research (Pigni et al., 2016) into a playful experience. ...
Full-text available
Article
Purpose Practitioners, despite competing in a difficult environment, struggle to understand or implement researchers’ findings that may support the development of sustainable competitive advantage. Following design science research using a gamification framework, the purpose of this study is to develop Game of Streams, a boundary object fostering practitioners’ capabilities to generate IT-dependent strategic initiatives. The Game of Streams method is available under a Creative Commons license and has two benefits for practitioners. First, it allows practitioners to ideate IT-dependent strategic initiatives with big data fitting their context. Second, it supports the understanding of a taxonomy originating in academic research about big data, specifically Digital Data Streams. Design/methodology/approach Through the design science research methodology, the author investigates the research/practice gap. This study created Game of Streams, a boundary object using gamification, with and for firms. The author tested this boundary object with organizations ranging from small- and medium-sized enterprises to multinationals and demonstrated its effectiveness in generating IT-dependent strategic initiatives. Findings Game of Streams enhances practitioners’ use of research conclusions from the academic literature. This study demonstrates that academic literature can impact practice more effectively when boundary objects and gamification are used. Originality/value The gamification of research to bridge the research/practice gap is an emerging subject in the literature. This study offers an approach that allows practitioners to actively participate while manipulating research concepts in their context to generate IT-dependent strategic initiatives.
... (b1) Data access, collection, and ownership Data collection, access rights, and clarity about ownership are central issues discussed in the reviewed literature (e.g., Demirkan et al. 2015;Nino et al. 2015;Rymaszewska et al. 2017) and the interviews. Access to data can be restricted because of (a) technical issues; (b) the unwillingness of actors to share data on their problems or failures; or (c) outdated systems and irregular routines, where data download and exchange are not automated (Grubic and Peppard 2016;Grubic and Jennions 2017;Pigni et al. 2016). It is suggested that organizations need to be able to specify the essential data in advance and to ensure the exchange (including extraction and transmission) of those data (Kamp et al. 2016). ...
... Organizations can offer special development courses or training to educate employees internally (Lerch and Gotsch 2015;Bullinger et al. 2015;Cenamor et al. 2017) or externally (Pigni et al. 2016). IT, technical skills, and knowledge of the business demand social skills that support the sharing of employee competences in the network (Troilo et al. 2017;Bullinger et al. 2015;Aho 2015). ...
... An aspect that our analysis adds to the non-DDSI literature is the recommendation to establish a data-oriented culture for a business to capture data's value. In particular, the SLR confirms that reliable insights from data-rather than gut feelings, instincts, or intuition-could be the basis for decision making (Troilo et al. 2017;Pigni et al. 2016). Both the SLR and the interviews indicate the need for organizations to provide employees with a clear strategy for DDSs, taking into account issues such as data access and usage and relating this to the organizations' overall strategy (e.g., Schüritz et al. 2017a, b;Aho 2015;Sanders 2016). ...
Full-text available
Article
Data collected from interconnected devices offer wide-ranging opportunities for data-driven service innovation that delivers additional or new value to organizations’ customers and clients. While previous studies have focused on traditional service innovation and servitization, few scholarly works have examined the influence of data on these two concepts. With the aim of deepening the understanding of data as a key resource for service innovation and overcoming challenges for a broader application, this study combines a systematic literature review and expert interviews. This study (a) synthesizes the various existing definitions of a data-driven service, (b) investigates attributes of data-driven service innovation, and (c) explores the corresponding organizational capabilities. The goal is to examine the repercussions of data utilization for service provision. The findings indicate that the use of data makes service innovation more complex. Data add new attributes, including a data-oriented culture; issues of data access, data ownership, privacy, and standardization; as well as the potential for new revenue models. The paper contributes to current discussions by providing an aligned perspective of theory and practice in data-driven service innovation and recommending that managers implement a culture and strategy that embraces the specifics of data usage.
... According to Panetta (2021), Gartner defines data literacy as "the ability to read, write and communicate data in context, including an understanding of data sources and constructs, analytical methods and techniques applied, and the ability to describe the use case, application and resulting value." Without employees possessing the necessary data, skills, tools, and motivation to take on projects that generate business value from data, organizations lack the environmental readiness needed for successful data monetization (Pigni et al., 2016). ...
... To help senior leaders define their data performance objectives, this paper draws its balanced scorecard inspiration from Pigni et al.'s (2016) article on generating value from Digital Data Streams. In their review of how firms innovate using digital data streams, Pigni et al. (2016) described five opportunities for how companies derive value from data, along with four "readiness components" needed to create an environment conducive to pursuing those opportunities. The results of their analysis provide a sound basis for the perspectives of a balanced scorecard for guiding organizations on how to derive value from their data assets. ...
... The results of their analysis provide a sound basis for the perspectives of a balanced scorecard for guiding organizations on how to derive value from their data assets. Table 2 shows how the perspectives of the balanced data scorecard map to the Pigni et al. (2016) Data Streams Value Framework. These data-driven perspectives correlate well with the perspectives of the classic balanced scorecard: Data Monetization to Financial, Data Consumer to Customer, Data Governance to Internal Business, and Data Readiness to Innovation and Learning. ...
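The correspondence stated in this excerpt can be written down compactly. The following snippet simply restates that mapping as a lookup structure; it is an illustration of the stated correspondence, not a reproduction of the cited paper's Table 2.

```python
# Correspondence between the proposed balanced data scorecard perspectives
# and the classic balanced scorecard perspectives, as stated in the excerpt.
SCORECARD_MAPPING = {
    "Data Monetization": "Financial",
    "Data Consumer": "Customer",
    "Data Governance": "Internal Business",
    "Data Readiness": "Innovation and Learning",
}

for data_perspective, classic_perspective in SCORECARD_MAPPING.items():
    print(f"{data_perspective} -> {classic_perspective}")
```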
Full-text available
Article
A good performance monitoring system is crucial to knowing whether an organization's efforts are making its data capabilities better, the same, or worse. However, comprehensive performance measurements are costly. Organizations need to expend time, resources, and personnel to design the metrics, to gather evidence for the metrics, to assess the metrics' value, and to determine if any actions should be taken as a result of those metrics. Consequently, organizations need to be strategic in selecting their portfolio of performance indicators for evaluating how well their data initiatives are producing value to the organization. This paper proposes a balanced scorecard approach to aid organizations in designing a set of meaningful and coordinated metrics for maximizing the potential of their data assets. This paper also discusses implementation challenges and the need for further research in this area.
... Scholars have also paid attention to the effects that IoT solutions have on reducing costs, improving efficiency and increasing the overall competitive advantage of a company (Opresnik and Taisch, 2015; Matthias et al., 2017; Luo et al., 2016; Pigni et al., 2016; Kiel et al., 2017; O'Connor and Kelly, 2017; Grover et al., 2018; Müller et al., 2018). Grover et al. (2018) examined the case of United Parcel Service, which has applied telematic sensors to over 50,000 delivery trucks to acquire daily performance data and reduce fuel consumption, emissions and maintenance costs while improving customer service and driver safety. ...
... case-based reasoning and problem-solving capabilities) and digital technologies, such as artificial intelligence, augmented reality, big data analytics, the cloud, smart manufacturing systems, digital platforms and wearable technologies. The majority of papers in this area have investigated the crucial role of big data analytics to exploit knowledge and achieve high levels of operational efficiency and effectiveness (Akter et al., 2016; Mahmood and Mubarik, 2020), to support decision-making in several business areas (Corte-Real et al., 2017; Sumbal et al., 2017; Xie et al., 2016) and to achieve an improved performance and a sustainable competitive advantage (Griva et al., 2018; He et al., 2017; Khan and Vorley, 2017; Pigni et al., 2016; Zeng and Glaister, 2018). Moreover, Big Data Analytics is becoming more and more important for internal supply chain finance integration, business model innovation, testing new products before they are launched on the market and innovation (Ciampi et al., 2021; Mariani and Fosso Wamba, 2020; Yu et al., 2021). ...
... (Akter et al., 2016; Bradlow et al., 2017; Dutta and Bose, 2015; Edwards et al., 2017; Hoornaert et al., 2017; Martinez and Walton, 2014; McAfee and Brynjolfsson, 2012; Müller et al., 2018; Nimmagadda et al., 2018; Pigni et al., 2016; Talon-Ballestero, 2018; Torres et al., 2018; Fosso Wamba et al., 2017; Wang and Byrd, 2017; Wang et al., 2018; Dremel et al., 2020), cloud solutions (Akter et al., 2016; Ooi et al., 2018; Schniederjans et al., 2016; Uden and He, 2017; Wang et al., 2016), cyber security (McLeod and Dolezel, 2018), the IoT (Buhalis and Foerste, 2015; Kumar and Rajasekaran, 2016; Uden and He, 2017), smart manufacturing (Choe, 2004), social media/digital platforms (Randhawa et al., 2017) and augmented reality (Dacko, 2017; Olya et al., 2020). A body of constructive research has emerged in recent years around the role of big data analytics in supporting the process of information assimilation to achieve decision-making effectiveness (Edwards et al., 2017; McAfee and Brynjolfsson, 2012; Pigni et al., 2016; Wang and Byrd, 2017; Wang et al., 2018) and develop a customer analytics-driven value creation capability (Hossain et al., 2021). ...
Article
Purpose The effect of the transition toward digital technologies on today’s businesses (i.e. Industry 4.0 transition) is becoming increasingly relevant, and the number of studies that have examined this phenomenon has grown rapidly. However, systematizing the existing findings is still a challenge, from both a theoretical and a managerial point of view. In such a setting, the knowledge management (KM) discipline can provide guidance to address such a gap. Indeed, the implementation of fundamental digital technologies is reshaping how firms manage knowledge. Thus, this study aims to critically review the existing literature on Industry 4.0 from a KM perspective. Design/methodology/approach First, the authors defined a structuring framework to highlight the role of Industry 4.0 transition along with absorptive capacity (ACAP) processes (acquisition, assimilation, transformation and exploitation), while specifying what is being managed, that is data, information and/or (actual) knowledge, according to the data-information-knowledge (DIK) hierarchy. The authors then followed the systematic literature review methodology, which involves the use of explicit criteria to select publications to review and outline the stages a process has to follow to provide a transparent and replicable review and to analyze the existing literature according to the theoretical framework. This procedure yielded a final list of 150 papers. Findings By providing a clear picture of what scholars have studied so far on Industry 4.0 transition, in terms of KM, this literature review highlights that among all the studied digital technologies, the big data analytics technology is the one that has been explored the most in each phase of the ACAP process. A constructive body of research has also emerged in recent years around the role played by the internet of things, especially to explain the acquisition of data. On the other hand, some digital technologies, such as cyber security and smart manufacturing, have largely remained unaddressed. An explanation of the role of these technologies has been provided, from a KM perspective, together with the business implications. Originality/value This study is one of the first attempts to revise the literature on Industry 4.0 transition from a KM perspective, and it proposes a novel framework to read existing studies and on which to base new ones. Furthermore, the synthesis makes two main contributions. First, it provides a clear picture of the different digital technologies that support the four ACAP phases in relation to the DIK hierarchy. Accordingly, these results can emphasize what the literature has looked at so far, as well as which digital technologies have gained the most attention and their impacts in terms of KM. Second, the synthesis provides prescriptive considerations on the development of future research avenues, according to the proposed research framework.
... Among the Industry 4.0 technologies that connect the physical world with the digital world, new digitization technologies (DT), such as the Internet of Things or RFID, allow operational data to be automatically captured in digital form at their inception (Pigni et al., 2016). ...
... The structuredness of an event is defined by its analyzability and ambiguity. The more a decision problem or activity benefits from the use of computational, objective rules and procedures, as opposed to personal judgment and experience (Flores-Garcia et al., 2019), the more it is analyzable, i.e., detectable, measurable, and interpretable (Pigni et al., 2016). On the other hand, ambiguity (or equivocality) is the degree to which there are multiple and conflicting interpretations of an event, and it is associated with such problems as confusion and a lack of consensus or understanding. ...
Article
With the diffusion of information systems and new technologies for the real-time capturing of data, especially in rapid technological and managerial innovation contexts such as the automotive industry, data-driven decision-making (DDM) now has the potential to generate dramatic improvements in the performance of manufacturing firms. However, there is still a lack of evidence in the literature on whether these technologies can actually enhance the effectiveness of data-driven approaches. The aim of this article is to investigate the impact of DDM on operational performance moderated by two main dimensions of digitalization: data integration and the breadth of new digitization technologies. The results of a cross-country survey of 138 Italian and U.S. auto-supplier firms, which was supported by plant visits and interviews, suggest that an interplay between the two dimensions exists. Higher degrees of data integration in information systems increase the positive effect of DDM on the probability of cost reductions. On the other hand, introducing multiple emerging digitization technologies leads to worse DDM results in terms of cost performance. The conclusion that can be drawn is that the operational employees of auto-supplier firms are now facing difficulties in successfully combining real-time operational data from various sources and in exploiting them for decision-making. Managers and workers need to align their intuition, experience and analytical capabilities to initiate the digitalization process. The challenge, in the medium term, is to limit the difficulties of implementing new digitization technologies and integrating their data, to embrace DDM and fully grasp the potential of data analytics in operations.
... In recent years the term has regained popularity as a result of the increased rate at which information is produced, transmitted, collected and handled, attracting the attention of many disciplines and industries seeking to take advantage of the benefits of its implementation (Tamiminia et al., 2020). Many studies (Caesarius & Hohenthal, 2018; Pigni, Piccoli, & Watson, 2016; The Economist, 2012) agree that the adoption of BD results in positive outcomes such as improved financial performance, business optimization and innovation. Raguseo's (2018) work highlights the "high revenue" promise as a main driver. ...
... The need to manage the increasing data volumes being produced today is the major driver for many industries that have already been making inroads in the adoption of Big Data and data management technology for years (Tamiminia et al., 2020; You & Wu, 2019), a phenomenon that undoubtedly also occurs within the construction industry. Plenty of literature exists reflecting on the positive results of adopting BD in other disciplines (Caesarius & Hohenthal, 2018; Pigni, Piccoli, & Watson, 2016; Raguseo, 2018; Tamiminia et al., 2020; The Economist, 2012), while limited sources exist about Big Data adoption within the construction industry, indicating the presence of a gap. Still, areas such as project waste management, energy efficiency and project planning are already benefiting from BD implementation, driven by the use of technologies and trends such as BIM and Construction 4.0, which also contribute to the growth of BD datasets through the promotion of cloud storage and the use of data-generating equipment in construction (Berger, 2016; Burguer, 2019; Wood, 2018). ...
Full-text available
Conference Paper
The construction industry, despite being one of the main actors in the ever-growing demand for technology developments, sometimes falls short of other industries in terms of implementation. The adoption of Big Data (BD) in industries like health and retail has had positive impacts on aspects such as decision-making processes and forecasting trends that allow some future business moves to be planned in advance. Hence the question of whether these results can be recreated in the construction industry. This paper therefore addresses the level of awareness, identified as the first step towards implementation of the BD concept, within the construction industry of the Dominican Republic (DR). Since little to no information exists on the subject, the selected approach was qualitative: twenty-one semi-structured interviews were studied using content analysis. Four levels of awareness were developed based on the Endsley situation awareness model. The results showed that nearly ninety-five percent of the interviewees had either no knowledge, a very basic awareness of the BD requirements, or intermediate awareness, while only five percent had actually applied BD in the construction industry. This paper establishes the level of awareness of BD in the DR construction industry and provides evidence of the need for continuous professional development programmes for construction professionals and for an update of the curriculum in construction-related education.
... Formulation of a digital strategy and its incorporation into higher-level strategies (Kane et al., 2016); desirable establishment of DT governance structures (Chanias et al., 2019). Structure: bearing in mind the creation of a more agile structure (Kane et al., 2016, p. 15); identifying the existing "skillset" as the organizational resource management capability required to create and deliver value through DT (e.g., for Big Data in Pigni et al., 2016). Processes: defining existing processes before implementing DT; support for process digitization (e.g., implementation of ERP and CRM systems); preparation for process optimization (e.g., using Big Data) (Hess et al., 2016) ...
... Preparation for DT by building a "digital culture"; the problem of the absence of a culture of experimentation (Kane et al., 2016, p. 14); the existing "mindset", i.e., the organization's willingness to invest in DT initiatives (e.g., Pigni et al., 2016). For RQ1 (Are there impacts of digital transformation technologies on all components of an organization's design?), the answer is positive. ...
Full-text available
Conference Paper
When defining and explaining the phenomenon of digital transformation, a considerable part of the research is focused on technologies that characterize such projects or initiatives, while a relatively smaller body of work addresses the change or transformation in organizational terms. The broader context in which the digital transformation of an organization should be considered is the organization's design, i.e., redesign framework. Several established models of organizational design emphasize the connection between key components-strategy, structure, processes, human resources, leadership, and organizational culture. Digital transformation occurs in general within all the aforementioned aspects of an organization, with all the respective changes being interrelated. This paper provides an overview of the results of selected previous research in the field of digital transformation under the framework of organizational design and redesign.
... In recent years, the term has regained popularity as a result of the increased rate at which information is produced, transmitted, collected and handled, thus attracting the attention of many disciplines and industries seeking to take advantage of the benefits of its implementation (Tamiminia et al., 2020). Many studies (Caesarius and Hohenthal, 2018; Pigni et al., 2016; The Economist, 2012) agree that the adoption of BD results in positive outcomes such as improved financial performance, business optimisation and innovation. Raguseo's (2017) work highlights the "high revenue" promise as the main driver. ...
... The need to manage the increasing data volumes produced today is the major driver for many industries that have already been making inroads in the adoption of BD and data management technology for years (Chen et al., 2020 and Tamiminia et al., 2020), a phenomenon that undoubtedly also occurs within the construction industry. Plenty of literature exists reflecting on the positive results of adopting BD in other disciplines (Tamiminia et al., 2020; Caesarius and Hohenthal, 2018; Pigni et al., 2016; The Economist, 2012; Raguseo, 2017), whereas limited sources exist about BD adoption within the construction industry, indicating the presence of a gap. ...
Full-text available
Article
Purpose The construction industry, despite being one of the main activities in the ever-growing demand for technology developments, sometimes falls short of other industries in terms of implementation. The adoption of Big Data (BD) in industries such as health and retail has had positive impacts on aspects such as decision-making processes and forecasting trends that allow some future business moves to be planned. Hence the question of whether these results can be imitated in the construction industry. Therefore, this paper aims to address the level of awareness, identified as a first step towards implementation of the BD concept, within the construction industry in the Dominican Republic (DR). Design/methodology/approach As little to no information exists on the subject, the selected approach was a qualitative methodology; 21 semi-structured interviews were studied through the lens of situational awareness. Four levels of awareness were developed based on Endsley’s Situation Awareness model. Findings The results showed that nearly 95% of the interviewees had either no knowledge, very basic awareness of the BD requirements, or intermediate awareness, and only 5% had applied BD concepts in the construction industry. Originality/value This study shows the gaps that exist in the understanding and implementation of BD concepts in the DR construction industry. It establishes the need to develop continuous professional development programmes for construction professionals and to update the curriculum in construction-related education.
... As a result, digitalization is viewed as an entrepreneurial process [85,86], in which firms pursuing digital transformation render formerly successful BMs obsolete [87,88] through business model innovation (BMI), which is revolutionizing many industries. Firms adopting digital technologies, for example, regard data streams as critical and assign them a central role in supporting their digital transformation strategies [89], in contrast to traditional BM frameworks [90]. This is an important finding because it demonstrates that the impact of digital on business model innovation remains hazy [1] and that a digital conundrum prevails in the literature where key concepts lack construct clarity [17]. ...
Full-text available
Chapter
An imperative contemporary management dilemma, at a moment of rapid evolution in the ongoing digital transformation of business and society in general, is recognizing these adjustments and trying to translate them into digital business model innovation (DBMI). Academia has plenty to offer in assisting with this managerial problem, but studies in the field still seem hazy in terms of what DBMI is, its present state, its future, and its vision. Therefore, this article aimed to review the present situation of DBMI, its future, and its vision in the general context. Secondary databases were used to collect the relevant articles, and the study found that DBMI has attained prolonged growth across different businesses, especially during the COVID-19 period. This scenario is unlikely to change in the future because of the increasing impact of digital on many businesses. Therefore, it is recommended that all types of businesses adopt digital business model innovation to attain competitive advantage.
... Owing to these limitations, especially (2), data stream processing, or Data Stream Mining (DSM), has become an emerging topic within the Big Data field (Bifet & Read, 2018; Ramírez-Gallego, Krawczyk, García, Wozniak, & Herrera, 2017). A data stream is a digital representation and continuous transmission of data describing a related class of events (Pigni, Piccoli, & Watson, 2016). By processing it, decisions can be made in real time, that is, as events occur. ...
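As a rough illustration of the real-time decision making that data stream processing enables, the sketch below consumes a stream of readings one event at a time with bounded memory and emits a decision as soon as a rolling statistic crosses a threshold; the function name, window size, and alert rule are hypothetical, not taken from the cited work.

```python
from collections import deque
from statistics import mean
from typing import Iterable, Iterator

def monitor_stream(readings: Iterable[float], window: int = 10,
                   threshold: float = 85.0) -> Iterator[str]:
    """Process a (potentially unbounded) stream of readings one event at a
    time and emit a decision when the rolling average crosses a threshold."""
    recent = deque(maxlen=window)        # bounded memory: only the last `window` events
    for value in readings:
        recent.append(value)
        if len(recent) == window and mean(recent) > threshold:
            yield f"ALERT: rolling average {mean(recent):.1f} exceeds {threshold}"

# Example usage on a finite stand-in for a live stream
if __name__ == "__main__":
    simulated_stream = [70, 72, 75, 78, 81, 83, 85, 88, 90, 92, 95, 97]
    for decision in monitor_stream(simulated_stream, window=5, threshold=85.0):
        print(decision)
```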
Article
Business processes demand rapid decisions to constantly adapt to change in order to improve performance and seize opportunities. It is therefore key to have analytics that transform data into knowledge for decision making. This paper introduces a line of research focused on prescriptive analytics, capable of computing actions to be executed immediately (operational decisions) or in the future (tactical and/or strategic decisions) to achieve a desired objective in vehicle routing problems (VRP), and presents the progress and results obtained. Computing these actions involves processing the stream of business events in the form of data streams, applying techniques and algorithms from Soft Computing and Computational Intelligence (in particular Reinforcement Learning) and, given the need for low response times, using High-Performance Computing.
... Big data, resources, and capabilities: At its origin, big data is about dealing with a large amount of data [volume] (Grover et al., 2018), from diverse sources, structured to a greater or lesser extent [variety] (Chen et al., 2012), and with a high data flow [velocity] (Pigni et al., 2016). Today, this original definition is complemented by an appreciation of the evolution of data and their flows over time [variability] (Janssen et al., 2017), by the valuation of the data [value] (Lycett, 2013), and by the reliability and authenticity of the data [veracity] (Baesens et al., 2014). ...
Article
Purpose This article aims to examine the factors that influence how managers approach data analytics. Design/methodology/approach The authors draw on content analysis of 34 in-depth interviews with managers in various sectors in France. Findings Using Resource Orchestration Theory as the theoretical lens, the findings show that an understanding of the importance of data analytics, having the skills to effectively use data analytics and the capability to integrate data analytics throughout organizations impact the approach adopted by managers. Based on these interrelated factors, a typology of four different approaches is identified: buyer-users, segmenters, promoters and implementers. Research limitations/implications The authors' study reflects results from multiple industries instead of one particular sector. Delving deeper into the practices of distinct sectors with respect to the authors' typology would be of interest. Practical implications The study points to the role of managers and more specifically managers' perception of the opportunities and challenges related to data analytics. These perceptions emerge in managers' skills and capacity to understand and integrate dimensions of data analytics that go beyond one's areas of expertise in order to create capabilities towards an organization's advantage. Originality/value The authors contribute by revealing three interrelated factors influencing how managers approach data analytics in managers' organizations. The authors address the need expressed by practitioners to better identify factors responsible for adoption and effective use of data analytics.
... The World Economic Forum (2018) expects a continuous, two-digit annual growth rate for firms' emphasis on digital technologies in the future. As a consequence of these advances and of global social developments, the rules of competition and collaboration have changed in many industries (Dodgson et al., 2015; Pigni et al., 2016; Weill and Woerner, 2018). While digital innovation may lead to new wealth for some organisations and societies (Nambisan et al., 2017), exploiting the emerging opportunities remains a challenging endeavour. ...
Article
The increasingly digital business landscape has created manifold novel opportunities as well as threats to traditional business models. In consequence, a broad variety of digital business models has emerged. Powerful tools and managerial guidance on how to shape digital strategies in this volatile and uncertain terrain are sought-after, but remain rare. Building on an analysis of the world’s top 1,000 venture-funded technology startups over the last decade, we identify 49 novel business model types that describe firms as vendors of digitally enabled products and services, as providers of resources and capabilities for digital business, and as facilitators of intermediation. Furthermore, we identify the novelties of these digital business model types in their components, i.e., the value proposition as well as the value creation, delivery, and capture processes. The result is a recipe collection of novel mechanisms to guide and inspire other firms when commercialising digital technologies in their business models.
... The value created by big data is reflected in the effective transformation of data information into knowledge in the feature database. Compared with the process of big data identification, collection and storage, big data analytics can better reflect the technical tool and resource transformation process of big data generating value (Akter et al., 2016;Pigni et al., 2016). Therefore, researchers began to use big data analytics capability to indicate the proficiency of firms in using big data to achieve goals and acquire new knowledge (Gupta & George, 2016a). ...
Full-text available
Article
Data-driven innovation enables firms to design products that are more responsive to market needs, which greatly reduces the risk of innovation. Customer data in the same supply chain has certain commonality, but data separation makes it difficult to maximize data value. The selection of an appropriate mode for cooperative innovation should be based on the particular big data analytics capability of the firms. This paper focuses on the influence of big data analytics capability on the choice of cooperation mode, and the influence of their matching relationship on cooperation performance. Specifically, using game-theoretic models, we discuss two cooperation modes, in which data analytics is implemented either individually (loose cooperation) by either firm or jointly (tight cooperation) by both firms, and further discuss the addition of coordination contracts under the loose mode. Several important conclusions are obtained. Firstly, both firms’ big data capabilities have positive effects on the selection of the tight cooperation mode. Secondly, with the improvement of big data capability, the firms’ innovative performance gaps between the loose and tight modes will increase significantly. Finally, when the capability meets certain conditions, the cost subsidy contract can alleviate the gap between the two cooperation modes.
... As told by more business-oriented accounts of this development, corporations of all kinds extract economic value from the real-time flow of big data. Since digital data streams are generated through anything from credit card purchases and digital check-ins to social-media posts and mobile self-tracking, if a corporation is able to conjoin several data streams (for example a credit card transaction and an Instagram post from the same evening), so-called splicing, it can harvest more information about potential customers and even reconstruct entire episodes related to particular events (Pigni, Piccoli, & Watson, 2016). As such, there is also a continuously growing demand for more big data, necessitating more devices, more platforms and more sensors installed in the spaces where people dwell (e.g., Andrejevic & Volcic, 2019;Iveson & Maalsen, 2019). ...
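A minimal sketch of the splicing idea described above, i.e., conjoining two digital data streams on a shared identifier within a time window; the record layout, user identifiers, and two-hour tolerance are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Two illustrative data streams, each a list of (user_id, timestamp, detail) records
card_transactions = [
    ("alice", datetime(2024, 5, 3, 19, 42), {"merchant": "Bistro 21", "amount": 63.0}),
]
social_posts = [
    ("alice", datetime(2024, 5, 3, 20, 15), {"caption": "Great dinner tonight!"}),
    ("bob",   datetime(2024, 5, 3, 20, 30), {"caption": "Movie night"}),
]

def splice(stream_a, stream_b, tolerance=timedelta(hours=2)):
    """Pair records from two streams that share a user id and occurred
    within `tolerance` of each other (a naive nested-loop join for clarity)."""
    for user_a, time_a, detail_a in stream_a:
        for user_b, time_b, detail_b in stream_b:
            if user_a == user_b and abs(time_a - time_b) <= tolerance:
                yield {"user": user_a, "transaction": detail_a, "post": detail_b}

for episode in splice(card_transactions, social_posts):
    print(episode)  # reconstructs an 'evening out' episode for user alice
```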
Full-text available
Article
Logistics is a relatively hidden subject in tourism studies. This theoretical article advances a logistical approach to the study of tourism in the platform economy. It is argued that the platform economy rests on logistical accumulation, which means that human practices are not just predicted but ultimately steered in order to generate profitable digital data streams. At the same time, “smart”, mobile media platforms provide unprecedented logistical affordances to people to navigate and manage various flows. Tourism is thus taken as a logistical intersection, where the steering mechanisms of the platform economy entangle with the needs and capacities for orientation, coordination and orchestration among travellers. The social expansion of logistical accumulation raises questions of human agency, especially in relation to tourism, as well as a need to study how the basic tension between “steering” and “being steered” unfolds in different sociocultural settings. The article provides a critical account of the logistical frictions, conflicts and inequalities characterizing digital tourism geographies. It also actualizes the need for further exchanges between media studies, tourism studies, and critical geographical research on logistics.
... In an IoT network, many physical devices seek to collect and exchange data, thus providing potentially vast datasets (Lo & Campos, 2018; Pigni et al., 2016). Traditional data processing methods may not be able to deal with such large datasets. ...
Chapter
The many potential uses and benefits of the Internet of Things (IoT) have spurred great interest from practitioners and researchers to investigate IoT applications in supply chains and logistics systems. IoT-based supply chains are enabled by technologies such as Radio Frequency Identification, Wireless Sensor Networks, Machine-to-Machine systems, and mobile apps. IoT may allow reduced human intervention in decision-making processes through controlling, optimizing, planning, and monitoring of the supply chain virtually. This chapter provides an overview of IoT and investigates IoT applications and challenges in the context of supply chains. Different IoT technologies are being actively trialed in a wide range of supply chain applications including cold chains, perishable products, agriculture and crops, and some manufacturing supply chains. However, the application of IoT technologies in supply chains is challenging. Security and data privacy, standards and naming services, technology adoption, and big data generation are some of the critical issues. We present a framework—the IoT Adopter—for organizations to critically assess IoT adoption and implementation in supply chain applications. The framework identifies four fundamental stages that should be considered in deploying IoT across a supply chain: adoption rate calculation, profitability computation, architecture design, and continuous improvement.
... With event-driven architectures and the integration of IIoT devices, it becomes possible to improve the control of processes or obtain information that is important for predictive maintenance [24]. Event streaming is the practice of capturing data in real time [25]. Event streaming thus ensures a continuous flow and interpretation of data. ...
Full-text available
Article
Today, Industrial Internet of Things (IIoT) devices are very often used to collect manufacturing process data. The integration of industrial data is increasingly being promoted by the Open Platform Communications United Architecture (OPC UA). However, available IIoT devices are limited by the features they provide; therefore, we decided to design an IIoT device taking advantage of the benefits arising from OPC UA. The design procedure was based on the creation of sequences of steps resulting in a workflow that was transformed into a finite state machine (FSM) model. The FSM model was transformed into an OPC UA object, which was implemented in the proposed IIoT. The OPC UA object makes it possible to monitor events and provide important information based on a client’s criteria. The result was the design and implementation of an IIoT device that provides improved monitoring and data acquisition, enabling improved control of the manufacturing process.
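As a rough illustration of the workflow-to-FSM step described in this abstract, the sketch below encodes a small finite state machine as a transition table; the states and events are assumptions for a hypothetical manufacturing step, and the subsequent mapping onto an OPC UA object is not shown.

```python
# A minimal finite state machine (FSM) for a hypothetical manufacturing step.
# States, events, and transitions are illustrative assumptions, not the
# cited design's actual workflow.
TRANSITIONS = {
    ("Idle",    "start"):  "Running",
    ("Running", "pause"):  "Paused",
    ("Paused",  "resume"): "Running",
    ("Running", "finish"): "Idle",
    ("Running", "fault"):  "Error",
    ("Error",   "reset"):  "Idle",
}

class MachineFSM:
    def __init__(self, state: str = "Idle"):
        self.state = state

    def handle(self, event: str) -> str:
        """Apply an event; raise if it is not allowed in the current state."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state

fsm = MachineFSM()
for evt in ["start", "pause", "resume", "finish"]:
    print(evt, "->", fsm.handle(evt))
```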
... Data provide information such as GPS location. Thus, big data entails greater access to data, using automated algorithms to support decision-making processes and innovative business models (Lombardi et al., 2020b; McAfee and Brynjolfsson, 2012; Pigni et al., 2016). Big data supports and improves decision-making by organisations and institutions (Lombardi et al., 2014) and allows them to make intelligent decisions based on processed data (Wang et al., 2016; La Torre et al., 2018), increasing high-quality results. ...
... As AI becomes an integral part of the daily operation of tourism firms, it inevitably will impact the competitiveness of that firm. Tourism firms are using big data, AI and robotics to improve their speed of operation and create value in the consumer's mind (Mariana et al., 2013;Pauleen and Wang, 2017;Pigni et al., 2016;Mariani, 2019). Competitiveness is a complex word and can be measured using multiple variables. ...
Full-text available
Article
Purpose This study explores the variables that drive the impact of artificial intelligence (AI) on the competitiveness of a tourism firm. The relationship between the variables is established using the modified total interpretive structural modelling (m-TISM) methodology. The factors are identified through literature review and expert opinion. This study investigates the hierarchical relationship between these variables. Design/methodology/approach The modified total interpretive structural modelling (m-TISM) method is used to develop a hierarchical interrelationship among variables that display direct and indirect impact. The competitiveness of a tourism firm is measured by investigating the effect of variables on the firm's financial performance. Findings The study identifies ten key factors essential for analysing the impact of AI on a firm's competitiveness. The m-TISM methodology gave us the hierarchical relationship between the factors and their interpretation. A theoretical TISM model has been constructed based on the hierarchy and relationship of the elements. The elements that fall in Level V are “AI Skilled Workforce”, “Infrastructure” and “Policies and Regulations”. Level IV includes the elements “AI Readiness”, “AI-Enabled Technologies” and “Digital Platforms”. Elements that fall under Level III are “Productivity” and “AI Innovation”. Level II and Level I comprise “Tourist Satisfaction” and “Financial Performance”, respectively. The levels indicate the elements' hierarchical level, with Level I the highest and Level V the lowest. Research limitations/implications Tourism and AI scholars can analyse the given variables by including the transitive links and incorporate new variables depending upon future research. The m-TISM model constructed from literature review and expert opinion can act as a theoretical base for future studies to be conducted by researchers. Practical implications Management/Practitioners can focus on the available characteristics and capitalise on them while working on the factors lacking in their organisation to enhance their competitiveness. Entrepreneurs starting their own business can utilise the elements in understanding the ecosystem of strengthening a firm's competitiveness. They can work to improve on the aspects which are crucial and trigger the impact on competitiveness. The government and management can devise policies and strategies that encompass the essential factors that positively impact the competitiveness of the firms. The approach can then be looked at with a holistic approach to cater to the other related components of the tourism industry. Originality/value This study is the first of its kind to use the modified TISM methodology to understand the impact of AI on the competitiveness of tourism firms.
... Under environmental uncertainty, analytics is a dynamic capability for simulating and predicting compelling insights to achieve operational performance. By following a three-step process (manage data, perform analytics and drive decisions), analytics helps companies design their SCDO more swiftly and effectively (Agarwal & Dhar, 2014; Pigni et al., 2016) so that managers can make evidence-based decisions about developing even more significant SCDO (Shamim et al., 2019). In uncertain circumstances, ACO can help a firm's supply chain in terms of demand and visibility, procurement decisions, and seamless running of operations, which contribute to SCR (Bateman & Bonanni, 2019). ...
Full-text available
Article
This study examines the relationship between the Analytics Capability of an Organization (ACO) and both Supply Chain Disruption Orientation (SCDO) and Supply Chain Resilience (SCR) in order to achieve adequate operational performance in an era of environmental uncertainty. In total, three hypotheses (seven sub-hypotheses) are tested using a survey of 405 respondents collected via a pre-tested instrument. Results indicated the influence of ACO on both SCDO and SCR to achieve the desired degree of operational performance. However, under the moderation of environmental uncertainty, the link between ACO and SCDO was not supported, although the link between ACO and SCR was supported and this further enhanced operational performance. Further investigation of the unsupported hypotheses using statistical analysis was conducted to gain deeper insights. It is explained how ACO impacted dynamic capabilities to influence operational performance. The theoretical contribution of this study lies in explaining the role of dynamic capabilities that emerge from analytics as compared to the traditional view of supply chain classification. Furthermore, the influence of environmental uncertainty on positioning dynamic capabilities strategically to address disruption in supply chains is discussed in the present study.
... The technologies that make a major contribution to digital transformation, and will have a tendency to increase their impact in the next decade, are the following: the Internet of Things (IoT) and connected devices, artificial intelligence (AI), big data analysis and cloud, custom manufacturing and 3D printing, robots and drones, pervasive computing, biotechnology, machine learning, nanotechnology, social media and platforms and autonomous vehicles (Ben-Ner & Siemsen, 2017; Fisher, 2017; Pigni, Piccoli, & Watson, 2016). According to a recent study (Segars, 2018), combining the capabilities of these technologies will give rise to even more powerful super-technologies that will open a new digital frontier. ...
... To benefit from new digital technologies, companies need to acquire competences that help them exploit the opportunities offered by new technologies (Pigni, Piccoli, and Watson 2016). Technical knowledge and skills alone are insufficient, as digital transformation induces significant changes in companies' overall business processes and models (de Mauro et al. 2018; Vial 2019). ...
Article
Digital transformation brings significant challenges for managing competences. Exploiting digital technologies requires new competences, which need to be combined with existing competences to increase business efficiency and introduce digital innovations. However, competence combination for digital transformation is particularly challenging due to the scale of needed organisational change, conflicts between new and existing operating logics, and employee stress and resistance. Nonetheless, earlier research has neglected the question of how companies approach competence combination for digital transformation. Therefore, we investigated the views of top managers from ten companies in the Finnish machinery and metal products industry for this purpose. Based on an inductive analysis, we identified triggers for digital transformation and developed a managerial framework as a response. We argue that companies can create new transformative competence combinations by engaging in three activities: developing new competencies, promoting competence combination, and enhancing transformational leadership.
... Nowadays, a huge amount of data is produced periodically at an unparalleled speed from diverse and composite origins such as social media, sensors, telecommunications, financial transactions, etc. [1,2]. Such acceleration in data generation has given rise to the concept of big data, which can be attributed to the multitude and dynamism of technological advancements. ...
Full-text available
Article
Anomaly detection in high dimensional data is a critical research issue with serious implications in real-world problems. Many issues in this field are still unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposed a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiment, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second issue is the poor performance, which can be solved using one of the optimization algorithms. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam and Adamax. However, the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect the anomalies by efficiently reducing the high dimensionality of the dataset with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) of 0.9649, and a minimal loss function during the hybrid model training.
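A self-contained sketch of the general approach this abstract describes, i.e., an autoencoder trained to reconstruct normal data with the Adamax optimizer and anomalies flagged by reconstruction error; the layer sizes, synthetic data, and percentile threshold below are assumptions, not the study's actual configuration (which also used SMOTE for class imbalance).

```python
import numpy as np
import tensorflow as tf

# Train on (mostly) normal operating data so the autoencoder learns to
# reconstruct normal behaviour; anomalies then show large reconstruction error.
rng = np.random.default_rng(0)
x_train = rng.normal(0.0, 1.0, size=(1000, 20)).astype("float32")   # stand-in for sensor features
x_test = np.vstack([
    rng.normal(0.0, 1.0, size=(95, 20)),      # normal samples
    rng.normal(6.0, 1.0, size=(5, 20)),       # injected anomalies
]).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),     # bottleneck
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(20, activation="linear"),  # reconstruct the input
])
autoencoder.compile(optimizer="adamax", loss="mse")  # Adamax, as in the cited study
autoencoder.fit(x_train, x_train, epochs=20, batch_size=32, verbose=0)

# Flag samples whose reconstruction error exceeds a percentile-based threshold.
train_errors = np.mean((autoencoder.predict(x_train, verbose=0) - x_train) ** 2, axis=1)
test_errors = np.mean((autoencoder.predict(x_test, verbose=0) - x_test) ** 2, axis=1)
threshold = np.percentile(train_errors, 99)
print("anomalous sample indices:", np.where(test_errors > threshold)[0])
```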
Full-text available
Chapter
The experience of recent years shows that companies frequently and systematically overestimate the value of their own data as well as their own ability to generate revenue from these data. In practice, very few companies are actually able to establish a sustainable data-based business model. There are many reasons for this: too small a volume of available data, a lack of uniformity and comparability of the data, insufficient means of interpreting these data to produce relevant information or recommendations for action, and a failure to monetize the results. To provide a better understanding of which fundamental aspects must be considered when developing data-based business models and of what a possible implementation can look like, this contribution describes the characteristics and the different types of data-driven business models and the companies for which they are particularly suitable, points out specific opportunities and challenges, and presents the systematic development of data-based business models.
Full-text available
Preprint
This theoretical-reflective article aims to discuss and provoke reflection on what is understood as "digital" in the marketing context, questioning the segregation of allegedly digital consumers and discussing the impact of the benefits and conveniences brought by connectivity beyond the digital channel/medium as triggers of deeper psychosocial changes, consequently shaping the expectations of the consumer referred to in some contexts as post-digital, in which those transformative technologies, innovative between the 1980s and 2000s, now form part of the normality of interpersonal and consumption relations, yet are responsible for changes in the expectations of such consumers at any point of contact with brands and companies.
Article
Digital technologies are omnipresent in our professional and personal lives. While they provide numerous opportunities, they also cause tensions. Many of these tensions are paradoxical. They confront us with conflicting, yet synergetic and interdependent, alternatives that persist over time-such as benefiting from the increasing availability and access to information at the risk of information overload and technostress. So far, we know little about the specific paradoxes caused by digital technologies in the workplace and about how managers perceive and cope with them. This article presents how middle managers from several incumbent European firms experience the occurrence of individual paradoxes (autonomy, information, and interaction paradoxes) and meta-level paradoxes that combine a wide variety of tensions within an overarching theme (opportunity, and engagement paradoxes). This research offers a comprehensive perspective on the multiplicity and interrelatedness of paradoxes in the digital, white-collar workplace and suggests how managers can develop effective coping mechanisms for convergent change and transforming work practices in paradoxical environments.
Article
Big data are a prominent source of value capable of generating competitive advantage and superior business performance. This paper represents the first empirical investigation of the theoretical model proposed by Grover et al. (2018), considering the mediating effects of four value creation mechanisms on the relationship between big data analytics capabilities (BDAC) and four value targets. The four value creation mechanisms investigated (the source of the value being pursued) are transparency, access, discovery, and proactive adaptation, while the four value targets (the impacts of the value creation process) are organization performance, business process improvement, customer experience and market enhancement, and product and service innovation. The proposed empirical validation of Grover et al.’s (2018) model adopts an econometric analysis applied to data gathered through a survey involving 256 BDA experts. The results reveal that transparency mediates the relationship for all the value targets, while access and proactive adaptation mediate only in case of some value targets, and discovery does not have any mediating effect. Theoretical and practical implications are discussed at the end of the paper.
Article
The rapid expansion of digitalization and of the volume of available data constitutes a major driver toward the circular economy. In the textile industry, with its vast quantities of waste and huge environmental impact, transformation toward such circularity is necessary but challenging. To explore how the use of data could support building sustainability-aligned pathways to a circular economy of textiles, a study employing a two-round disaggregative Delphi approach (engaging 33 experts in the first round, in May 2021, and 26 in the second, in June 2021) articulated alternative images of the future. The three images, dubbed Transparency, Conflicting Interests, and Sustainable Textiles, imply that the role for data is intertwined with sustainability aspirations. The results highlight that exploiting data in pursuit of the circular economy is a collaborative effort involving business value networks that include consumers and regulators. Availability and sharing of accountability-affording, meaningful data on textiles' life cycle and value network function as a key enabler. By working with the images developed, actors can better assess their circular-economy commitments, planned actions, and the consequences of these. Furthermore, the images provide a tool for mutual discussion of the development desired and of related responsibilities and uncertainties.
Full-text available
Chapter
The science of Earth observation uses satellites and other sensors to monitor our planet, e.g., for mitigating the effects of climate change. Earth observation data collected by satellites is a paradigmatic case of big data. Due to programs such as Copernicus in Europe and Landsat in the United States, Earth observation data is open and free today. Users that want to develop an application using this data typically search within the relevant archives, discover the needed data, process it to extract information and knowledge, and integrate this information and knowledge into their applications. In this chapter, we argue that if Earth observation data, information and knowledge are published on the Web using the linked data paradigm, then data discovery, information and knowledge discovery, data integration and the development of applications become much easier. To demonstrate this, we present a data science pipeline that starts with data in a satellite archive and ends up with a complete application using this data. We show how to support the various stages of the data science pipeline using software that has been developed in various FP7 and Horizon 2020 projects. As a concrete example, our initial data come from the Sentinel-2, Sentinel-3 and Sentinel-5P satellite archives and are used in developing the Green City use case.
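To make the linked data approach tangible, the following minimal Python sketch (using the open-source rdflib library) publishes two toy catalogue entries as RDF triples and runs a SPARQL query over them, mimicking the data discovery step of the pipeline described above. The eo: vocabulary, product identifiers, and property names are invented for illustration and are not the actual Copernicus or Sentinel ontologies.

    # Illustrative sketch only: a hypothetical eo: vocabulary, not the real
    # Sentinel/Copernicus ontologies.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    EO = Namespace("http://example.org/eo#")  # hypothetical namespace
    g = Graph()

    # Two toy catalogue entries standing in for Sentinel-2 products.
    for pid, date, cloud in [("S2A_0001", "2021-06-01", 5), ("S2A_0002", "2021-06-03", 60)]:
        product = URIRef(f"http://example.org/product/{pid}")
        g.add((product, RDF.type, EO.Product))
        g.add((product, EO.sensingDate, Literal(date, datatype=XSD.date)))
        g.add((product, EO.cloudCover, Literal(cloud, datatype=XSD.integer)))

    # Discovery step: find low-cloud products with a SPARQL query.
    query = """
    PREFIX eo: <http://example.org/eo#>
    SELECT ?p ?date WHERE {
      ?p a eo:Product ;
         eo:sensingDate ?date ;
         eo:cloudCover ?cc .
      FILTER (?cc < 20)
    }
    """
    for row in g.query(query):
        print(row.p, row.date)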
Full-text available
Chapter
Traditional usage models of Supercomputing centres have been extended by High-Throughput Computing (HTC), High-Performance Data Analytics (HPDA) and Cloud Computing. The complexity of current compute platforms calls for solutions to simplify usage and conveniently orchestrate computing tasks. These also enable non-expert users to efficiently execute Big Data workflows. In this context, the LEXIS project (‘Large-scale EXecution for Industry and Society’, H2020 GA 825532, https://lexis-project.eu) sets up an orchestration platform for compute- and data-intensive workflows. Its main objective is to implement a front-end and interfaces/APIs for distributed data management and workflow orchestration. The platform uses an open-source Identity and Access Management solution and a custom billing system. The data management API allows data ingestion and staging between various infrastructures. The orchestration API allows execution of workflows specified in extended TOSCA. LEXIS uses innovative technologies like YORC and Alien4Cloud for orchestration and iRODS/EUDAT-B2SAFE for data management, accelerated by Burst Buffers. Three pilot use cases from Aeronautics Engineering, Earthquake/Tsunami Analysis, and Weather and Climate Prediction are used to test the services. On the road towards longer-term sustainability, we are expanding this user base and aiming at the integration of more Supercomputing centres within the platform.
Full-text available
Chapter
As institutions increasingly shift to distributed and containerized application deployments on remote heterogeneous cloud/cluster infrastructures, the cost and difficulty of efficiently managing and maintaining data-intensive applications have risen. A new emerging solution to this issue is Data-Driven Infrastructure Management (DDIM), where the decisions regarding the management of resources are taken based on data aspects and operations (both on the infrastructure and on the application levels). This chapter will introduce readers to the core concepts underpinning DDIM, based on experience gained from development of the Kubernetes-based BigDataStack DDIM platform ( https://bigdatastack.eu/ ). This chapter involves multiple important BDV topics, including development, deployment, and operations for cluster/cloud-based big data applications, as well as data-driven analytics and artificial intelligence for smart automated infrastructure self-management. Readers will gain important insights into how next-generation DDIM platforms function, as well as how they can be used in practical deployments to improve quality of service for Big Data Applications. This chapter relates to the technical priority Data Processing Architectures of the European Big Data Value Strategic Research & Innovation Agenda [33], as well as the Data Processing Architectures horizontal and Engineering and DevOps for building Big Data Value vertical concerns. The chapter relates to the Reasoning and Decision Making cross-sectorial technology enablers of the AI, Data and Robotics Strategic Research, Innovation & Deployment Agenda [34].
Full-text available
Chapter
With the rising complexity of modern products and a trend from single products to Systems of Systems (SoS), where the produced system consists of multiple subsystems and the integration of multiple domains is a mandatory step, new approaches for development are demanded. This chapter explores how Model-Based Systems Engineering (MBSE) can benefit from big data technologies to implement smarter engineering processes. The chapter presents the Boost 4.0 Testbed, which demonstrates how digital twin continuity and the digital thread can be realized across service engineering, production, product performance, and behavior monitoring. The Boost 4.0 testbed demonstrates the technical feasibility of an interconnected operation of digital twin design, ZDM subtractive manufacturing, IoT product monitoring, and spare part 3D printing services. It shows how the IDSA reference model for data sovereignty, blockchain technologies, and FIWARE open-source technology can be jointly used for breaking silos, providing a seamless and controlled exchange of data across digital twins based on open international standards (ProStep, QIF) and allowing companies to dramatically improve cost, quality, timeliness, and business results.
Full-text available
Chapter
Serverless computing has become very popular today since it largely simplifies cloud programming. Developers no longer need to worry about provisioning or operating servers, and they pay only for the compute resources used when their code is run. This new cloud paradigm suits many applications well, and researchers have already begun investigating the feasibility of serverless computing for data analytics. Unfortunately, today's serverless computing presents important limitations that make it really difficult to support all sorts of analytics workloads. This chapter starts by analyzing three fundamental trade-offs of today's serverless computing model and their relationship with data analytics. It studies how, by relaxing disaggregation, isolation, and simple scheduling, it is possible to increase the overall computing performance, but at the expense of essential aspects of the model such as elasticity, security, or sub-second activations, respectively. The consequence of these trade-offs is that analytics applications may well end up embracing hybrid systems composed of serverless and serverful components, which we call ServerMix in this chapter. We review the existing related work to show that most applications can actually be categorized as ServerMix.
Full-text available
Chapter
This chapter describes an actual smart city use-case application for advanced mobility and intelligent traffic management, implemented in the city of Modena, Italy. This use case is developed in the context of the European Union's Horizon 2020 project CLASS [4] (Edge and Cloud Computation: A Highly Distributed Software for Big Data Analytics). This use case requires both real-time data processing (data in motion) for driving assistance and online city-wide monitoring, as well as large-scale offline processing of big data sets collected from sensors (data at rest). As such, it demonstrates the advanced capabilities of the CLASS software architecture to coordinate edge and cloud for big data analytics. Concretely, the CLASS smart city use case includes a range of mobility-related applications, including extended car awareness for collision avoidance, air pollution monitoring, and digital traffic sign management. These applications serve to improve the quality of road traffic in terms of safety, sustainability, and efficiency. This chapter shows the big data analytics methods and algorithms for implementing these applications efficiently.
Full-text available
Chapter
Data enrichment is a critical task in the data preparation process in which a dataset is extended with additional information from various sources to perform analyses or add meaningful context. Existing solutions offer only limited support for helping data workers design the enrichment process and for executing it on large datasets. Harnessing semantics at scale can be a crucial factor in effectively addressing this challenge. This chapter presents a comprehensive approach covering both design- and run-time aspects of tabular data enrichment and discusses our experience in making this process scalable. We illustrate how data enrichment steps of a Big Data pipeline can be implemented via tabular transformations exploiting semantic table annotation methods and discuss techniques devised to support the enactment of the resulting process on large tabular datasets. Furthermore, we present results from experimental evaluations in which we tested the scalability and run-time efficiency of the proposed cloud-based approach, enriching massive datasets with promising performance.
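As a rough illustration of the kind of enrichment step described above, here is a minimal Python/pandas sketch in which a "semantic annotation" is simulated by reconciling a messy column against canonical identifiers and then joining in reference attributes. The table contents, identifiers, and mapping are invented; a real system would delegate reconciliation to a semantic table annotation service rather than a hand-written dictionary.

    import pandas as pd

    # Toy dataset to be enriched (values are invented).
    sales = pd.DataFrame({"city": ["Milano", "milan", "Paris"], "revenue": [120, 80, 200]})

    # Reference table standing in for a knowledge base.
    reference = pd.DataFrame({
        "city_id": ["Q490", "Q90"],           # hypothetical knowledge-base IDs
        "country": ["Italy", "France"],
        "population": [1_352_000, 2_161_000],
    })

    # Reconciliation step: map surface forms to canonical IDs.
    surface_to_id = {"milano": "Q490", "milan": "Q490", "paris": "Q90"}
    sales["city_id"] = sales["city"].str.lower().map(surface_to_id)

    # Extension step: join the reference attributes onto the original table.
    enriched = sales.merge(reference, on="city_id", how="left")
    print(enriched)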
Full-text available
Chapter
The continuous and significant growth of data, together with improved access to data and the availability of powerful computing infrastructure, has led to intensified activities around Big Data Value (BDV) and data-driven Artificial Intelligence (AI). Powerful data techniques and tools allow collecting, storing, analysing, processing and visualising vast amounts of data, enabling data-driven disruptive innovation within our work, business, life, industry and society. The adoption of big data technology within industrial sectors facilitates organisations to gain a competitive advantage. Driving adoption is a two-sided coin. On one side, organisations need to master the technology necessary to extract value from big data. On the other side, they need to use the insights extracted to drive their digital transformation with new applications and processes that deliver real value. This book has been structured to help you understand both sides of this coin and bring together technologies and applications for Big Data Value. This chapter defines the notion of big data value, introduces the Big Data Value Public-Private Partnership (PPP) and gives some background on the Big Data Value Association (BDVA)—the private side of the PPP. It then moves on to structure the contributions of the book in terms of three key lenses: the BDV Reference Model, the Big Data and AI Pipeline, and the AI, Data and Robotics Framework.
Full-text available
Chapter
Big Data and AI Pipeline patterns provide a good foundation for the analysis and selection of technical architectures for Big Data and AI systems. Experience from many projects in the Big Data PPP program has shown that a number of projects use similar architectural patterns, with variations only in the choice of technology components within the same pattern. The DataBench project has developed a Big Data and AI Pipeline Framework, which is used for the description of pipeline steps in Big Data and AI projects and supports the classification of benchmarks. This includes the four pipeline steps of Data Acquisition/Collection and Storage, Data Preparation and Curation, Data Analytics with AI/Machine Learning, and Action and Interaction, including Data Visualization and User Interaction as well as API Access. The project has also created a toolbox that supports the identification and use of existing benchmarks according to these steps, in addition to all of the different technical areas and data types in the BDV Reference Model. An observatory, a tool accessed via the toolbox for tracking the popularity, importance, and visibility of topic terms related to Artificial Intelligence and Big Data technologies, has also been developed and is described in this chapter.
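For readers unfamiliar with the four steps, the toy Python sketch below strings them together as composable functions; the step names follow the chapter, while the data, the cleaning rule, and the "analytics" are invented placeholders.

    from statistics import mean

    def acquire():                      # 1. Data Acquisition/Collection and Storage
        return [{"sensor": "s1", "temp": 21.0}, {"sensor": "s1", "temp": None},
                {"sensor": "s2", "temp": 24.5}]

    def prepare(records):               # 2. Data Preparation and Curation
        return [r for r in records if r["temp"] is not None]

    def analyse(records):               # 3. Data Analytics with AI/Machine Learning
        return {"mean_temp": mean(r["temp"] for r in records)}

    def act(result):                    # 4. Action and Interaction (visualisation, API)
        print(f"Dashboard update: mean temperature = {result['mean_temp']:.1f}")

    act(analyse(prepare(acquire())))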
Full-text available
Chapter
The airfreight industry of shipping goods with special handling needs, also known as special cargo, suffers from nontransparent shipping processes, resulting in inefficiency. The LARA project (Lane Analysis and Route Advisor) aims at addressing these limitations and bringing innovation to special cargo route planning so as to remedy operational deficiencies and improve customer services. In this chapter, we discuss the elicitation of special cargo domain knowledge and its modeling into an ontology. We also present research into cargo incidents, namely, automatic classification of incidents in free-text reports and experiments in detecting significant features associated with specific cargo incident types. Our work mainly addresses two of the main technical priority areas defined by the European Big Data Value (BDV) Strategic Research and Innovation Agenda, namely, the application of data analytics to improve data understanding and the provision of optimized architectures for analytics of data-at-rest and data-in-motion; the overall goal is to develop technologies contributing to the data value chain in the logistics sector. It addresses the horizontal concerns Data Analytics, Data Processing Architectures, and Data Management of the BDV Reference Model. It also addresses the vertical dimension Big Data Types and Semantics.
Full-text available
Chapter
This chapter describes a software architecture for processing big-data analytics considering the complete compute continuum, from the edge to the cloud. The new generation of smart systems requires processing a vast amount of diverse information from distributed data sources. The software architecture presented in this chapter addresses two main challenges. On the one hand, a new elasticity concept enables smart systems to satisfy the performance requirements of extreme-scale analytics workloads. By extending the elasticity concept (known on the cloud side) across the compute continuum in a fog computing environment, combined with the use of advanced heterogeneous hardware architectures on the edge side, the capabilities of extreme-scale analytics can be significantly increased, integrating both responsive data-in-motion and latent data-at-rest analytics into a single solution. On the other hand, the software architecture also focuses on the fulfilment of the non-functional properties inherited from smart systems, such as real-time, energy-efficiency, communication quality and security, that are of paramount importance for many application domains such as smart cities, smart mobility and smart manufacturing.
Full-text available
Chapter
In modern societies, the rampant growth of data management technologies, which have access to data sources from a plethora of heterogeneous systems, enables data analysts to leverage their advantages in new areas and critical infrastructures. However, there is no global reference standard for data platform technology. Data platform scenarios are characterized by a high degree of heterogeneity at all levels (middleware, application service, data/semantics, scalability, and governance), preventing deployment, federation, and interoperability of existing solutions. Although many initiatives are dealing with developing data platform architectures in diversified application domains, not many projects have addressed integration in port environments with the possibility of including cognitive services. Unlike other cases, the port environment is a complex system that consists of multiple heterogeneous critical infrastructures, which are connected and dependent on each other. The key pillar is to define the design of a secure interoperable system facilitating the exchange of data through standardized data models, based on common semantics, and offering advanced interconnection capabilities leading to cooperation between different IT/IoT/Objects platforms. This contribution deals with scalability, interoperability, and standardization features of data platforms from a business point of view in a smart and cognitive port case study. The main goal is to design an innovative platform, named DataPorts, which will overcome these obstacles and provide an ecosystem where port authorities, external data platforms, and transportation and logistics companies can cooperate and create the basis to offer cognitive services. The chapter relates to the knowledge and learning as well as the systems, methodologies, hardware, and tools cross-sectorial technology enablers of the AI, Data and Robotics Strategic Research, Innovation & Deployment Agenda (Milano et al., Strategic research, innovation and deployment agenda - AI, data and robotics partnership. Third release. Big Data Value Association, 2020).
Full-text available
Chapter
In the last few years, the potential impact of big data on the manufacturing industry has received enormous attention. This chapter details two large-scale trials that have been implemented in the context of the lighthouse project Boost 4.0. The chapter introduces the Boost 4.0 Reference Model, which adapts the more generic BDVA big data reference architectures to the needs of Industry 4.0. The Boost 4.0 reference model includes a reference architecture for the design and implementation of advanced big data pipelines and the digital factory service development reference architecture. The engineering and management of business network track and trace processes in high-end textile supply are explored with a focus on the assurance of Preferential Certification of Origin (PCO). Finally, the main findings from these two large-scale piloting activities in the area of service engineering are discussed.
Full-text available
Chapter
3D personal data is a type of data that contains useful information for product design, online sale services, medical research and patient follow-up. Currently, hospitals store and grow massive collections of 3D data that are not accessible by researchers, professionals or companies. About 2.7 petabytes a year are stored in the EU26. In parallel to the advances made in the healthcare sector, a new, low-cost 3D body-surface scanning technology has been developed for the consumer goods sector, namely, apparel, animation and art. It is estimated that currently one person is scanned every 15 minutes in the USA and Europe, and the number is growing. The 3D data of the healthcare sector can be used by designers and manufacturers in the consumer goods sector. At the same time, although 3D body-surface scanners have been developed primarily for the garment industry, their low cost, non-invasive character and ease of use make them appealing for widespread clinical applications and large-scale epidemiological surveys. However, companies and professionals of the consumer goods sector cannot easily access the 3D data of the healthcare sector, and vice versa. Even exchanging information between data owners in the same sector is a big problem today. It is necessary to overcome problems related to data privacy and the processing of huge 3D datasets. To break these silos and foster the exchange of data between the two sectors, the BodyPass project has developed: (1) processes to harmonize 3D databases; (2) tools able to aggregate 3D data from different huge datasets; (3) tools for exchanging data and assuring anonymization and data protection (based on blockchain technology and distributed query engines); (4) services and visualization tools adapted to the necessities of the healthcare sector and the garment sector. These developments have been applied in practical cases by hospitals and companies in the garment sector.
Full-text available
Chapter
The quality of a machine learning model depends on the volume of data used during the training process. To prevent low-accuracy models, one needs to generate more training data or add external data sources of the same kind. If the first option is not feasible, the second one requires the adoption of a federated learning approach, where different devices can collaboratively learn a shared prediction model. However, access to data can be hindered by privacy restrictions. Training machine learning algorithms using data collected from different data providers while mitigating privacy concerns is a challenging problem. In this chapter, we first introduce the general approach of federated machine learning and the H2020 MUSKETEER project, which aims to create a federated, privacy-preserving machine learning Industrial Data Platform. Then, we describe the Privacy Operations Modes designed in MUSKETEER as an answer to differing privacy requirements, before looking at the platform and its operation under these different Privacy Operations Modes. We eventually present an efficiency assessment of the federated approach using the MUSKETEER platform. This chapter concludes with the description of a real use case of MUSKETEER in the manufacturing domain.
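To give a feel for the federated approach (not the MUSKETEER platform's actual API), the sketch below runs a schematic federated-averaging loop in Python/NumPy: each provider trains on its own private data, and only model parameters, never raw records, are shared with the aggregator. The data and model are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_provider(n=50):
        """One provider's private dataset; the true coefficient is 3."""
        X = rng.normal(size=(n, 1))
        y = 3 * X[:, 0] + rng.normal(scale=0.1, size=n)
        return X, y

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """Local training: plain gradient descent on a linear model."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    providers = [make_provider() for _ in range(3)]
    global_w = np.zeros(1)
    for _ in range(10):                                  # communication rounds
        local_ws = [local_update(global_w, X, y) for X, y in providers]
        global_w = np.mean(local_ws, axis=0)             # aggregation (FedAvg-style)

    print("Federated estimate of the coefficient:", global_w)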
Full-text available
Chapter
Computer systems deployed in hospital environments, particularly physiological and biochemical real-time monitoring of patients in an Intensive Care Unit (ICU) environment, routinely collect a large volume of data that can hold very useful information. However, the vast majority of these data are either not stored, and thus lost forever, or stored in digital archives and seldom re-examined. In recent years, researchers have carried out extensive work utilizing Machine Learning (ML) and Artificial Intelligence (AI) techniques on these data streams to predict and prevent disease states. Such work aims to improve patient outcomes, to decrease mortality rates and hospital stays, and, more generally, to decrease healthcare costs. This chapter reviews the state of the art in that field and reports on our own current research, with practicing clinicians, on improving ventilator weaning protocols and lung protective ventilation, using ML and AI methodologies for decision support, including but not limited to Neural Networks and Decision Trees. The chapter considers both the clinical and Computer Science aspects of the field. In addition, we look to the future and report how physiological data holds clinically important information to aid in decision support in the wider hospital environment.
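Purely as an illustration of the decision-tree style of decision support mentioned above, the sketch below fits a small tree on synthetic "physiological" features; the feature names, thresholds, and labels are invented and carry no clinical meaning.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(42)
    n = 500
    # Synthetic vitals (arbitrary units): respiratory rate, SpO2, tidal volume.
    X = np.column_stack([
        rng.normal(20, 5, n),
        rng.normal(95, 3, n),
        rng.normal(450, 80, n),
    ])
    # Invented labelling rule standing in for a clinical outcome.
    y = ((X[:, 0] < 25) & (X[:, 1] > 92)).astype(int)

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(clf, feature_names=["resp_rate", "spo2", "tidal_vol"]))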
Full-text available
Chapter
Manufacturing processes are highly complex. Production lines have several robots and digital tools, generating massive amounts of data. Unstructured, noisy and incomplete data have to be collected, aggregated, pre-processed and transformed into structured messages of a common, unified format in order to be analysed not only for the monitoring of the processes but also for increasing their robustness and efficiency. This chapter describes the solution, best practices, lessons learned and guidelines for Big Data analytics in two manufacturing scenarios defined by CRF within the I-BiDaaS project, namely ‘Production process of aluminium die-casting’ and ‘Maintenance and monitoring of production assets’. First, it reports on the retrieval of useful data from real processes, taking into consideration the privacy policies for industrial data, and on the definition of the corresponding technical and business KPIs. It then describes the solution in terms of architecture, data analytics and visualizations and assesses its impact with respect to the quality of the processes and products.
Chapter
Real-time management of Artificial Intelligence (AI) becomes a central enabling function for coping with rapid market changes and the increasing demands of stakeholders. But diverse sensing of real time makes it tricky for enterprises to adjust business processes towards a real-time-based era and build the temporal conditions needed for deploying AI in a humanistic manner. This chapter therefore introduces the concept of Fast & Flow in an AI engagement context. Fast & Flow encompasses two ideas: one considers time as a monetary asset that helps to increase value; the second does not seek to control time and does not define it on the clock scale but rather describes the sense of presence. By introducing the components of Fast & Flow interaction, we provide a cognitive psychology dimension to the management of AI and explore the balancing of Fast & Flow in three possible AI scenarios. The first scenario is ‘business as usual’, but faster and more complex; the second scenario is more focused on consumers and is based on an ideal combination of AI and Fast & Flow management; in the third scenario, there is an overflow of technology: AI is too fast and people are unable to control it. Finally, we ask what Fast & Flow management can do for a humanistic deployment of AI in enterprises and societies.
Preprint
Based on resource-based theory, the current study examines the relationship between competitive strategies and strategic alliance performance. Furthermore, big data predictive analytics is treated as a boundary condition between competitive strategies and strategic alliance performance. Big data predictive analytics in operations and industrial management has been a focal point in the current era, yet little attention has been paid to its influence on competitive strategies and strategic alliance performance, especially in developing countries such as Pakistan. A survey instrument was used to record the responses of 331 employees of telecom sector companies operating in Pakistan. The findings show that competitive strategies have a positive and significant relationship with strategic alliance performance, and that big data predictive analytics moderates the relationship between competitive strategies and strategic alliance performance. The study adds a new perspective and contribution to the literature on big data predictive analytics, strategic alliance performance, and competitive strategies in Pakistan's telecom sector. Further, the results suggest that big data analytics is the lifeblood of companies in the current era: through its efficient and effective use, companies can raise their standing in a competitive environment.
Full-text available
Chapter
The growing complexity of the business environment forces companies to be able to make decisions rapidly and effectively. This requires knowing how to manage internal processes and making sure that data support decisions. The strategic use of data not only supports cost reduction and increased efficiency but also allows us to reveal new opportunities by facilitating the emergence of hidden or unknown paths. For example, the analysis of hundreds of demographic and health variables may help predict the risk associated with hospital admission (Valentini, 2017) or prevent injuries in professional footballers (Davenport, 2006). Broadly speaking, the fields of possible applications of big data (BD) and business analytics (BA) are practically immeasurable.
Article
Business analytics applications are becoming a significant component of decision making in today's organizations, and the powerful changes brought by such applications to both centralized and distributed organizations have led decision makers to revise the way they capture, process, and analyze both structured and unstructured data and make decisions. This study discusses how business analytics tools can supply distributed organizations with a new operating model, process, and outlet to disseminate knowledge, and provides a framework for building a business analytics platform that may be employed by decision makers and managers to realize the full potential of a comprehensive decision-making platform in a distributed organizational setting.
Chapter
Forecasts of the future are often wildly wrong, so this chapter is short to avoid making too many errors.
Full-text available
Conference Paper
Quarterly Earnings Calls by public companies represent an example of a rich digital data stream. In this study, we intercept this data stream and use it to create a new business confidence index, the P&W Index™. This index correlates with existing confidence indices, both consumer and business, and with US GDP percentage change. The main advantage is that it can be updated as Earnings Calls are released, and there are typically multiple such releases in a week. In contrast, traditional measures are computed on a monthly or quarterly basis. Further, the index can be computed by industry or region. Initial evidence supports further research to validate and refine the proposed business confidence index.
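The sketch below shows, in spirit only, how a text-derived confidence score could be updated each time a new earnings call is released; the word lists, weighting, and rolling window are invented and are not the P&W Index methodology.

    POSITIVE = {"growth", "strong", "record", "confident", "improved"}
    NEGATIVE = {"decline", "weak", "uncertain", "headwinds", "loss"}

    def call_score(transcript: str) -> float:
        """Net tone of one call: (positive - negative) / matched words."""
        words = transcript.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

    def rolling_index(scores, window=20):
        """Recompute the index whenever a new call is released."""
        recent = scores[-window:]
        return sum(recent) / len(recent)

    calls = [
        "We saw strong growth and record revenue and we are confident",
        "Demand remains weak amid uncertain macro headwinds",
    ]
    scores = [call_score(c) for c in calls]
    print("Index after latest release:", rolling_index(scores))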
Full-text available
Article
The authors reflect on the management of big data by organizations. They comment on service level agreements (SLAs), which define the nature and quality of information technology services, and note that big data-sharing agreements tend to be poorly structured and informal. They reflect on methodologies for analyzing big data and state that it is easy to obtain false correlations when using typical statistical tools to analyze big data. They also discuss the use of big data in management and behavior research.
Full-text available
Article
Large-scale data sets of human behavior have the potential to fundamentally transform the way we fight diseases, design cities, or perform research. Metadata, however, contain sensitive information. Understanding the privacy of these data sets is key to their broad use and, ultimately, their impact. We study 3 months of credit card records for 1.1 million people and show that four spatiotemporal points are enough to uniquely reidentify 90% of individuals. We show that knowing the price of a transaction increases the risk of reidentification by 22%, on average. Finally, we show that even data sets that provide coarse information at any or all of the dimensions provide little anonymity and that women are more reidentifiable than men in credit card metadata.
Full-text available
Article
In recent years, chief information officers have begun to report exponential increases in the amounts of raw data captured and retained across the organization. Managing extreme amounts of data can be complex and challenging at a time when information is increasingly viewed as a strategic resource. Since the dominant focus of the information technology (IT) governance literature has been on how firms govern physical IT artifacts (hardware, software, networks), the goal of this study is to extend the theory of IT governance by uncovering the structures and practices used to govern information artifacts. Through detailed interviews with 37 executives in 30 organizations across 17 industries, we discover a range of structural, procedural, and relational practices used to govern information within a nomological net that includes the antecedents of these practices and their effects on firm performance. While some antecedents enable the speedy adoption of information governance, others can delay or limit the adoption of information governance practices. Once adopted, however, information governance can help to boost firm performance. By incorporating these results into an extended theory of IT governance, we note how information governance practices can unlock value from the ever-expanding mountains of data currently held within organizations.
Full-text available
Article
A recurrent problem in information-systems development (ISD) is that many design shortcomings are not detected during development, but only after the system has been delivered and implemented in its intended environment. Pilot implementations appear to promise a way to extend prototyping from the laboratory to the field, thereby allowing users to experience a system design under realistic conditions and developers to get feedback from realistic use while the design is still malleable. We characterize pilot implementation, contrast it with prototyping, propose a five-element model of pilot implementation, and provide three empirical illustrations of our model. We conclude that pilot implementation has much merit as an ISD technique when system performance is contingent on context. But we also warn developers that, despite their seductive conceptual simplicity, pilot implementations can be difficult to plan and conduct. It is sometimes assumed that pilot implementations are less complicated and risky than ordinary implementations. Pilot implementations are, however, neither prototyping nor small-scale versions of full-scale implementations; they are fundamentally different and have their own challenges, which are enumerated and discussed in this article.
Full-text available
Article
We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.
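A back-of-the-envelope way to see why so few points suffice is to estimate unicity empirically. The Python sketch below does so on synthetic traces whose size, spatial resolution, and uniform randomness are purely illustrative; real traces are more structured, and the study's 95% figure refers to real data, not this toy.

    import random

    random.seed(1)
    N_PEOPLE, N_HOURS, N_CELLS, P_POINTS = 500, 100, 50, 4

    # Each synthetic trace: the antenna cell observed at each hour.
    traces = [tuple(random.randrange(N_CELLS) for _ in range(N_HOURS))
              for _ in range(N_PEOPLE)]

    def is_unique(person, p=P_POINTS):
        """Does knowing p random (hour, cell) points single this person out?"""
        hours = random.sample(range(N_HOURS), p)
        known = [(h, traces[person][h]) for h in hours]   # the outside information
        matches = sum(all(t[h] == c for h, c in known) for t in traces)
        return matches == 1

    unicity = sum(is_unique(i) for i in range(N_PEOPLE)) / N_PEOPLE
    print(f"Share of traces uniquely identified by {P_POINTS} points: {unicity:.0%}")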
Full-text available
Conference Paper
As more data (especially scientific data) is digitized and put on the Web, it is desirable to make provenance metadata easy to access, reuse, integrate and reason over. Ontologies can be used to encode expectations and agreements concerning provenance metadata representation and computation. This paper analyzes a selection of popular Semantic Web provenance ontologies such as the Open Provenance Model (OPM), Dublin Core (DC) and the Proof Markup Language (PML). Selected initial findings are reported in this paper: (i) concept coverage analysis – we analyze the coverage, similarities and differences among primitive concepts from different provenance ontologies, based on identified themes; and (ii) concept modeling analysis – we analyze how Semantic Web language features were used to support computational provenance semantics. We expect the outcome of this work to provide guidance for understanding, aligning and evolving existing provenance ontologies.
Full-text available
Article
Information systems researchers have a long tradition of drawing on theories from disciplines such as economics, computer science, psychology, and general management and using them in their own research. Because of this, the information systems field has become a rich tapestry of theoretical and conceptual foundations. As new theories are brought into the field, particularly theories that have become dominant in other areas, there may be a benefit in pausing to assess their use and contribution in an IS context. The purpose of this paper is to explore and critically evaluate use of the resource-based view of the firm (RBV) by IS researchers. The paper provides a brief review of resource-based theory and then suggests extensions to make the RBV more useful for empirical IS research. First, a typology of key IS resources is presented, and these are then described using six traditional resource attributes. Second, we emphasize the particular importance of looking at both resource complementarity and moderating factors when studying IS resource effects on firm performance. Finally, we discuss three considerations that IS researchers need to address when using the RBV empirically. Eight sets of propositions are advanced to help guide future research.
Full-text available
Article
Enterprises need Data Quality Management (DQM) to respond to strategic and operational challenges demanding high-quality corporate data. Hitherto, companies have mostly assigned accountabilities for DQM to Information Technology (IT) departments. They have thereby neglected the organizational issues critical to successful DQM. With data governance, however, companies may implement corporate-wide accountabilities for DQM that encompass professionals from business and IT departments. This research aims at starting a scientific discussion on data governance by transferring concepts from IT governance and organizational theory to the previously largely ignored field of data governance. The article presents the first results of a community action research project on data governance comprising six international companies from various industries. It outlines a data governance model that consists of three components (data quality roles, decision areas, and responsibilities), which together form a responsibility assignment matrix. The data governance model documents data quality roles and their type of interaction with DQM activities. In addition, the article describes a data governance contingency model and demonstrates the influence of performance strategy, diversification breadth, organization structure, competitive strategy, degree of process harmonization, degree of market regulation, and decision-making style on data governance. Based on these findings, companies can structure their specific data governance model.
Full-text available
Article
This paper aims to clarify the concept of business models, its usages, and its roles in the Information Systems domain. A review of the literature shows a broad diversity of understandings, usages, and places in the firm. The paper identifies the terminology or ontology used to describe a business model, and compares this terminology with previous work. Then the general usages, roles and potential of the concept are outlined. Finally, the connection between the business model concept and Information Systems is described in the form of eight propositions to be analyzed in future work.
Full-text available
Article
Events are real-world occurrences that unfold over space and time. Event mining from multimedia streams improves the access and reuse of large media collections, and it has been an active area of research with notable progress. This paper contains a survey on the problems and solutions in event mining, approached from three aspects: event description, event-modeling components, and current event mining systems. We present a general characterization of multimedia events, motivated by the maxim of five "W"s and one "H" for reporting real-world events in journalism: when, where, who, what, why, and how. We discuss the causes for semantic variability in real-world descriptions, including multilevel event semantics, implicit semantics facets, and the influence of context. We discuss five main aspects of an event detection system. These aspects are: the variants of tasks and event definitions that constrain system design, the media capture setup that collectively defines the available data and necessary domain assumptions, the feature extraction step that converts the captured data into perceptually significant numeric or symbolic forms, statistical models that map the feature representations to richer semantic descriptions, and applications that use event metadata to help in different information-seeking tasks. We review current event-mining systems in detail, grouping them by problem formulations and approaches. The review includes detection of events and actions in one or more continuous sequences, events in edited video streams, unsupervised event discovery, events in a collection of media objects, and a discussion of ongoing benchmark activities. These problems span a wide range of multimedia domains such as surveillance, meetings, broadcast news, sports, documentary, and films, as well as personal and online media collections. We conclude this survey with a brief outlook on open research directions.
Full-text available
Article
Introduction: The task of deciding when and how to innovate is not an easy one. Consider the following managerial quandaries. A CIO has joined a firm that lags in the adoption of emerging information technologies. He wonders: just how innovative should this firm be going forward, and what can be done to position it to be more willing and able to assume the challenge of early adoption? A VP of marketing resides in a firm that generally leads in IT innovation, and must decide whether to endorse the immediate adoption of a particular innovation with major implications for marketing strategy. She wonders: are her firm's needs in this area and "readiness" to adopt sufficient to justify taking the lead with this specific innovation? If so, how should the assimilation process be managed? A product manager must design a deployment strategy for an innovative software development tool. He wonders: how fast can this technology diffuse…
Article
Dynamic product demand, competitor innovations, and competitive pressures on costs all pose strategic challenges to companies. Many firms are responding to these challenges by modifying the way they organize their operations. During the 1980s, they have tended to externalize transactions by contracting out to the market or by engaging in longer-term contractual relationships with other firms. To preserve adequate control and coordination over this system of transactions, several organizational forms are employed which overlay market-contracting relations with integrative arrangements. Information technology enables the strategic benefits of externalization to be secured with considerably less risk of losing operational control, and thus promises to facilitate a major evolution in organizational design.
Article
Data remains one of our most abundant yet under-utilized resources. This article provides a holistic framework that will help companies maximize this resource. It outlines the elements necessary to transform data into knowledge and then into business results. Managers must understand that human elements (strategy, skills, culture) need to be attended to in addition to technology. This article examines the experiences of over 20 companies that were successful in their data-to-knowledge efforts. It identifies the critical success factors that must be present in any data-to-knowledge initiative and offers advice for companies seeking to build a robust analytic capability.
Article
Aggregate, industry, and firm level studies all point to a strong connection between information technology (IT) and the U.S. productivity revival in the late 1990s. At the aggregate level, growth accounting studies show a large and growing contribution to productivity growth from both the production and the use of IT. At the industry level, industries that produce or use IT most intensively have shown the largest increases in productivity growth after 1995. At the firm level, IT-intensive firms show better performance than their peers, and several specific case studies show how IT improves real business practices. This accumulation of evidence from a variety of studies suggests a real productivity impact from IT.
Conference Paper
Real-time business intelligence (BI) plays an important role in enabling the "real-time enterprise," and as such has received a lot of attention in the practitioner literature in recent years. However, academic research on real-time BI and its role in improving overall organizational agility is scarce today. Most research on the real-time phenomenon has focused on technological, as opposed to organizational, issues. Using practitioner models of information value as a starting point, we draw from theories on individual and organizational decision making to create a model of the components of latency that impact an organization's ability to both sense and respond to business events in real time. Failure to take all the antecedents of these latency components into account when implementing a real-time BI system can have serious consequences on a firm's ability to optimize benefits from conversion to real-time BI systems. We close with suggestions for future IS research on this important emerging topic.
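As a simple worked illustration of why organizational latency matters in the sense-and-respond framing above, the sketch below sums hypothetical components of event-to-action time; the component names and figures are invented for illustration, not drawn from the paper.

    # Invented figures, for illustration only.
    latency_minutes = {
        "data capture": 1,
        "analysis": 10,
        "decision": 120,
        "action": 30,
    }
    total = sum(latency_minutes.values())
    print(f"Event-to-action time: {total} minutes")
    for component, minutes in latency_minutes.items():
        print(f"  {component}: {minutes} min ({minutes / total:.0%} of total)")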
Article
The role of information systems in the creation and appropriation of economic value has a long tradition of research, within which falls the literature on the sustainability of IT-dependent competitive advantage. In this article, we formally define the notion of IT-dependent strategic initiative and use it to frame a review of the literature on the sustainability of competitive advantage rooted in information systems use. We offer a framework that articulates both the dynamic approach to IT-dependent strategic advantage currently receiving attention in the literature and the underlying drivers of sustainability. This framework models how and why the characteristics of the IT-dependent strategic initiative enable sustained competitive advantage, and how the determinants of sustainability are developed and strengthened over time. Such explanation facilitates the pre-implementation analysis of planned initiatives by innovators, as well as the post-implementation evaluation of existing initiatives so as to identify the basis of their sustainability. In carrying out this study, we examined the interdisciplinary literature on strategic information systems. Using a structured methodology, we reviewed the titles and abstracts of 648 articles drawn from information systems, strategic management, and marketing literature. We then examined and individually coded a relevant subset of 117 articles. The literature has identified four barriers to erosion of competitive advantage for IT-dependent strategic initiatives and has surfaced the structural determinants of their magnitude. Previous work has also begun to theorize about the process by which these barriers to erosion evolve over time. Our review reveals that significant exploratory research and theoretical development have occurred in this area, but there is a paucity of research providing rigorous tests of theoretical propositions. Our work makes three principal contributions. First, it formalizes the definition of IT-dependent strategic initiative. Second, it organizes the extant interdisciplinary research around an integrative framework that should prove useful to both research and practice. This framework offers an explanation of how and why IT-dependent strategic initiatives contribute to sustained competitive advantage, and explains the process by which they evolve over time. Finally, our review and analysis of the literature offers the basis for future research directions.
Article
Recent rapid advances in Information and Communication Technologies (ICTs) have highlighted the rising importance of the Business Model (BM) concept in the field of Information Systems (IS). Despite agreement on its importance to an organization’s success, the concept is still fuzzy and vague, and there is little consensus regarding its compositional facets. Identifying the fundamental concepts, modeling principles, practical functions, and reach of the BM relevant to IS and other business concepts is by no means complete. This paper, following a comprehensive review of the literature, principally employs the content analysis method and utilizes a deductive reasoning approach to provide a hierarchical taxonomy of the BM concepts from which to develop a more comprehensive framework. This framework comprises four fundamental aspects. First, it identifies four primary BM dimensions along with their constituent elements forming a complete ontological structure of the concept. Second, it cohesively organizes the BM modeling principles, that is, guidelines and features. Third, it explains the reach of the concept showing its interactions and intersections with strategy, business processes, and IS so as to place the BM within the world of digital business. Finally, the framework explores three major functions of BMs within digital organizations to shed light on the practical significance of the concept. Hence, this paper links the BM facets in a novel manner offering an intact definition. In doing so, this paper provides a unified conceptual framework for the BM concept that we argue is comprehensive and appropriate to the complex nature of businesses today. This leads to fruitful implications for theory and practice and also enables us to suggest a research agenda using our conceptual framework.
Article
We model and experimentally examine the board structure-performance relationship. We examine single-tiered boards, two-tiered boards, insider-controlled boards, and outsider-controlled boards. We find that even insider-controlled boards frequently adopt institutionally preferred rather than self-interested policies. Two-tiered boards adopt institutionally preferred policies more frequently but tend to destroy value by being too conservative, frequently rejecting good projects. Outsider-controlled single-tiered boards, both when they have multiple insiders and only a single insider, adopt institutionally preferred policies most frequently. In those board designs where the efficient Nash equilibrium produces strictly higher payoffs to all agents than the coalition-proof equilibria, agents tend to select the efficient Nash equilibria. Copyright 2008, Oxford University Press.
California Management Review, Vol. 58, No. 3, pp. 5-25. ISSN 0008-1256, eISSN 2162-8564. © 2016 by The Regents of the University of California. All rights reserved. Request permission to photocopy or reproduce article content at the University of California Press's Reprints and Permissions web page, http://www.ucpress.edu/journals.php?p=reprints. DOI: 10.1525/cmr.2016.58.3.5.