Globally, Mobile Network Operators (MNOs) incur considerable capital investments towards the acquisition of spectrum, deployment of mobile networks, and marketing and advertising of their mobile services to potential mobile subscribers. The extant literature, which is mainly conceptual, suggests that such capital investments impact the individual mobile subscriber base of MNOs. However, it lacks a quantitative explanation of such impacts. We address this lacuna by proposing an empirical framework using a novel panel dataset of the four largest MNOs of India during the years 2009–2017. We find that capital investments in spectrum (both contemporaneous and lagged) and mobile networks (lagged) positively impact the mobile subscriber base of MNOs in India. We observe that a “triggering effect,” such as the market rollout of 4G (fourth generation) services, leads to an initial slump in the mobile subscriber base of MNOs, which is counterintuitive and signifies the importance of early network-preparedness on the part of MNOs. We also find that, in the event of the aforementioned market triggers, MNOs’ firm size and capacity to invest in spectrum, in addition to network-preparedness, are crucial for their survival.
The proliferation of rumors on social media has become a major concern due to its ability to create a devastating impact. Manually assessing the veracity of social media messages is a very time-consuming task that can be greatly aided by machine learning. Most message veracity verification methods only exploit textual contents and metadata. Very few take both textual and visual contents, and more particularly images, into account. Moreover, prior works have used many classical machine learning models to detect rumors. However, although recent studies have proven the effectiveness of ensemble machine learning approaches, such models have seldom been applied. Thus, in this paper, we propose a set of advanced image features inspired by the field of image quality assessment, and introduce the Multimodal fusiON framework to assess message veracIty in social neTwORks (MONITOR), which exploits all message features by exploring various machine learning models. Moreover, we demonstrate the effectiveness of ensemble learning algorithms for rumor detection by using five metalearning models. Eventually, we conduct extensive experiments on two real-world datasets. Results show that MONITOR outperforms state-of-the-art machine learning baselines and that ensemble models significantly increase MONITOR’s performance.
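To make the multimodal-ensemble idea concrete, the following is a minimal, hedged sketch — not the MONITOR implementation — of late-fusion voting over hypothetical base classifiers that each score a message from a different feature family (text, image quality, metadata). All model outputs and weights below are illustrative assumptions.

```python
# Illustrative sketch only: weighted soft-voting over base classifiers,
# one per feature family. Real metalearners (stacking) would instead train
# a second-level model on these base outputs.

def soft_vote(prob_lists, weights=None):
    """Average per-class probabilities from several base classifiers.

    prob_lists: list of [p_rumor, p_non_rumor] outputs, one per model.
    weights:    optional per-model weights (e.g. validation accuracy).
    """
    n = len(prob_lists)
    weights = weights or [1.0] * n
    total = sum(weights)
    fused = [0.0, 0.0]
    for probs, w in zip(prob_lists, weights):
        for k, p in enumerate(probs):
            fused[k] += w * p / total
    return fused

# Three hypothetical base models scoring one message:
text_model  = [0.70, 0.30]   # textual features
image_model = [0.55, 0.45]   # image-quality features
meta_model  = [0.80, 0.20]   # user/metadata features

fused = soft_vote([text_model, image_model, meta_model], weights=[2, 1, 1])
label = "rumor" if fused[0] >= 0.5 else "non-rumor"
```

A stacking metalearner generalizes this fixed weighting by learning how to combine the base outputs from validation data.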
Cutting-edge technologies like big data analytics (BDA), artificial intelligence (AI), quantum computing, blockchain, and digital twins have a profound impact on the sustainability of the production system. In addition, it is argued that turbulence in technology could negatively impact the adoption of these technologies and adversely impact the sustainability of the production system of the firm. The present study has demonstrated that the role of technological turbulence as a moderator could impact the relationships between the sustainability of the production system and its predictors. The study further analyses the mediating role of operational sustainability, which could impact firm performance. A theoretical model has been developed that is underpinned by dynamic capability view (DCV) theory and firm absorptive capacity theory. This model was verified by PLS-SEM with 412 responses from various manufacturing firms in India. AI and the other cutting-edge technologies exert a positive and significant influence on keeping the production system sustainable.
Curiosity, a motivational state of exploratory behavior, is conducive to innovation diffusion by encouraging users’ exploration on open innovation platforms. Yet, despite its importance, there is a scarcity of research investigating the mechanism for piquing users’ curiosity. Accordingly, we advance a research model to unravel how platform service quality, in the form of service content quality and service delivery quality, affects users’ epistemic and perceptual curiosity via inducing their trust and distrust in a platform. Taking mobile app stores as our empirical context, we collected data from 431 users to validate our hypothesized relationships. Analytical results indicate that both dimensions of platform service quality positively influence users’ trust in platform, whereas only service delivery quality negatively influences users’ distrust in platform. Furthermore, trust in platform directly triggers curiosity whereas distrust in platform positively influences users’ feeling-of-deprivation, which in turn triggers curiosity. In this sense, our analytical results reveal the mediating roles of distrust in platform and feeling-of-deprivation in the relationship between service delivery quality and curiosity.
Competition on e-commerce platforms is becoming increasingly fierce, due to the ease of online searching for comparing products and services. We examine how the sequential browsing behavior of consumers can enable targeted marketing strategies on e-commerce platforms, by using clickstream data from one of the largest e-commerce platforms in Asia. We deploy duration analysis to i) explore how path dependence can better explain consumers’ sequential browsing behavior in different product categories, and ii) characterize the sequential browsing behavior of heterogeneous consumer groups. The findings of our work showcase i) the high accuracy of using sequential browsing path dependence to explain consumer behavior, ii) the patterns of their behavioral intentions, and iii) the spell lengths of the behavior of heterogeneous consumer groups. Our findings provide nuanced implications for strategically managing branding, marketing, and customer relations on e-commerce platforms. We discuss the implications of our findings for both research and practice, and we delineate an agenda for future research on the topic.
One of the core challenges in digital marketing is that business conditions continuously change, which impacts the reception of campaigns. A winning campaign strategy can become unfavored over time, while an old strategy can gain new traction. In data-driven digital marketing and web analytics, A/B testing is the prevalent method of comparing digital campaigns, choosing the winning ad, and deciding targeting strategy. A/B testing is suitable when testing variations on similar solutions and having one or more metrics that are clear indicators of success or failure. However, when faced with a complex problem or working on future topics, A/B testing falls short, and achieving long-term impact from experimentation is demanding and resource-intensive. This study proposes a reinforcement learning-based model and demonstrates its application to digital marketing campaigns. We argue, and validate with real-world data, that reinforcement learning can help overcome some of the critical challenges that A/B testing and popular machine learning methods currently used in digital marketing campaigns face. We demonstrate the effectiveness of the proposed technique on real data for a digital marketing campaign collected from a firm.
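A common reinforcement-learning alternative to A/B testing is a multi-armed bandit. The sketch below is a generic Thompson-sampling bandit under assumed conditions (it is not the paper's specific model): each ad variant keeps a Beta posterior over its click-through rate, and traffic gradually shifts to the variant whose sampled rate is highest, so the "test" and the "rollout" happen simultaneously.

```python
import random

# Illustrative Thompson-sampling bandit for choosing among ad variants.
# Unlike a fixed A/B split, exploration shrinks as evidence accumulates.

class ThompsonBandit:
    def __init__(self, n_variants):
        self.wins = [1] * n_variants     # Beta(1, 1) uniform priors
        self.losses = [1] * n_variants

    def choose(self):
        # Sample a plausible CTR for each variant; play the best sample.
        samples = [random.betavariate(w, l)
                   for w, l in zip(self.wins, self.losses)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, variant, clicked):
        if clicked:
            self.wins[variant] += 1
        else:
            self.losses[variant] += 1

# Simulate 2000 impressions over three ads with assumed true CTRs.
random.seed(0)
true_ctr = [0.02, 0.05, 0.08]
bandit = ThompsonBandit(3)
pulls = [0, 0, 0]
for _ in range(2000):
    arm = bandit.choose()
    pulls[arm] += 1
    bandit.update(arm, random.random() < true_ctr[arm])
```

Over time the pull counts concentrate on the highest-CTR variant, which is the sense in which such methods adapt to changing campaign conditions.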
In the context of distributed machine learning, the concept of federated learning (FL) has emerged as a solution to the privacy concerns that users have about sharing their own data with a third-party server. FL allows a group of users (often referred to as clients) to locally train a single machine learning model on their devices without sharing their raw data. One of the main challenges in FL is how to select the most appropriate clients to participate in the training of a certain task. In this paper, we address this challenge and propose a trust-based deep reinforcement learning approach to select the most adequate clients in terms of resource consumption and training time. On top of the client selection mechanism, we embed a transfer learning approach to handle the scarcity of data in some regions and compensate for the potential lack of learning at some servers. We apply our solution in the healthcare domain in a COVID-19 detection scenario over IoT devices. In the considered scenario, edge servers collaborate with IoT devices to train a COVID-19 detection model using FL without having to share any raw confidential data. Experiments conducted on a real-world COVID-19 dataset reveal that our solution achieves a good trade-off between detection accuracy and model execution time compared to existing approaches.
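The trade-off the selection mechanism targets can be illustrated with a deliberately simplified greedy stand-in (the paper's actual approach is deep reinforcement learning; all field names and weights below are assumptions): rank clients by a score that rewards trust and penalizes resource consumption and training time, then take the top-k for the round.

```python
# Simplified, illustrative client selection for an FL round.
# Higher trust is better; lower resource use and training time are better.

def select_clients(clients, k, w_trust=0.5, w_res=0.25, w_time=0.25):
    """Rank clients by a trust/cost/time score and pick the top-k."""
    def score(c):
        return (w_trust * c["trust"]
                - w_res * c["resource_cost"]
                - w_time * c["train_time"])
    return sorted(clients, key=score, reverse=True)[:k]

# Hypothetical clients with normalized attributes in [0, 1]:
clients = [
    {"id": "c1", "trust": 0.9, "resource_cost": 0.3, "train_time": 0.4},
    {"id": "c2", "trust": 0.6, "resource_cost": 0.1, "train_time": 0.2},
    {"id": "c3", "trust": 0.8, "resource_cost": 0.7, "train_time": 0.9},
    {"id": "c4", "trust": 0.4, "resource_cost": 0.2, "train_time": 0.3},
]
selected = [c["id"] for c in select_clients(clients, k=2)]
```

A reinforcement-learning selector differs in that the weighting is implicit and learned from feedback across rounds rather than fixed in advance.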
During a disaster, a large number of disaster-related social media posts are widely disseminated. Only a small percentage of disaster-related information is posted by eyewitnesses. The post of a disaster eyewitness offers an accurate depiction of the disaster. Therefore, the information posted by the eyewitness is preferred over other sources of information, as it is more effective at helping organize rescue and relief operations and potentially saving lives. In this work, we propose a multi-channel convolutional neural network (MCNN) that uses three different word-embedding vectors together to classify disaster-related tweets into eyewitness, non-eyewitness, and don't-know classes. We compared the performance of the proposed multi-channel convolutional neural network with several deep-learning models (recurrent neural network, gated recurrent unit, long short-term memory, convolutional neural network, and attention-based models) and conventional machine-learning models (logistic regression, support vector machine, and gradient boosting). The proposed multi-channel convolutional neural network achieved an F1-score of 0.84, 0.88, 0.84, and 0.86 with four disaster-related datasets of floods, earthquakes, hurricanes, and wildfires, respectively. The experimental results show that training the MCNN model with different word embeddings together performs better than the conventional machine-learning models and several other deep-learning models.
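The multi-channel input can be sketched as follows — a toy illustration, not the paper's architecture, with made-up tables and dimensions: the same tokenized tweet is looked up in three different embedding tables, producing three parallel input channels that a CNN would convolve separately before merging.

```python
# Illustrative sketch of building three embedding channels for one tweet.
# In the real setting the tables would be pretrained embeddings
# (e.g. Word2Vec, GloVe, fastText); here they are tiny toy dictionaries.

def embed(tokens, table, dim):
    # Unknown tokens map to a zero vector.
    return [table.get(t, [0.0] * dim) for t in tokens]

def multi_channel_input(tokens, tables, dim=2):
    return [embed(tokens, table, dim) for table in tables]

w2v   = {"flood": [0.9, 0.1], "here": [0.2, 0.3]}
glove = {"flood": [0.8, 0.2], "here": [0.1, 0.4]}
ft    = {"flood": [0.7, 0.3], "here": [0.3, 0.2]}

channels = multi_channel_input(["flood", "here", "now"], [w2v, glove, ft])
# channels has shape [3 channels][3 tokens][2 dims]
```

The intuition is that each embedding space captures different regularities, so convolving over all three channels gives the classifier complementary views of the same tweet.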
Industry 4.0 is revolutionizing manufacturing processes and has a powerful impact on globalization by changing the workforce and increasing access to new skills and knowledge. The World Economic Forum estimates that, by 2025, 50% of all employees will need reskilling due to adopting new technology. Five years from now, over two-thirds of skills considered important in today’s job requirements will change. A third of the essential skills in 2025 will consist of technology competencies not yet regarded as crucial to today's job requirements. In this study, we focus our discussion on the reskilling and upskilling of the future-ready workforce in the era of Industry 4.0 and beyond. We have delineated the top skills sought by the industry to realize Industry 4.0 and presented a blueprint as a reference for people to learn and acquire new skills and knowledge. The findings of the study suggest that life-long learning should be part of an organization’s strategic goals. Both individuals and companies need to commit to reskilling and upskilling and make career development an essential phase of the future workforce. Great efforts should be made to render these learning opportunities, such as reskilling and upskilling, accessible, available, and affordable to the workforce. This paper provides a unique perspective regarding a future-ready learning society as an integral part of the vision of Industry 4.0.
Today’s complex problems call for multidisciplinary analytics teams comprising both analytics experts and non-technical domain (i.e. subject matter) experts. Recognizing the difference between data visualisation (DV) (i.e. static visual outputs) and visual analytics (VA) (i.e. a process of interactive visual data exploration, guided by the user’s domain and contextual knowledge), this paper focuses on VA for non-technical domain experts. By seeking to understand knowledge sharing from VA experts to non-technical users of VA in a multidisciplinary team, we aim to explore how these domain experts learn to use VA as a thinking tool, guided by their knowing-in-practice. The research described in this paper was conducted in the context of a long-term industry-wide research project called the ‘Visual Historical Atlas of the Australian Co-operatives’, led by a multidisciplinary VA team who faced the challenge tackled by this research. Using Action Design Research (ADR) and the combined theoretical lens of boundary objects and secondary design, the paper theorises a three-phase method for knowledge transfer, translation and transformation from VA experts to domain experts using different types of VA-related boundary objects. Together with the proposed set of design principles, the three-phase model advances the well-established stream of research on organizational use of analytics, extending it to the emerging area of visual analytics for non-technical decision makers.
Text-to-GQL (Text2GQL) is a task that converts the user's questions into GQL (Graph Query Language) when a graph database is given. It is a semantic-parsing task that transforms natural language problems into logical expressions, which will bring more efficient direct communication between humans and machines. The existing related work mainly focuses on Text-to-SQL tasks, and no semantic parsing method or dataset is available for graph databases. In order to fill the gaps in this field and better serve medical Human–Robot Interactions (HRI), we propose this task and a pipeline solution for the Text2GQL task. This solution uses an Adapter pre-trained on “the linking of GQL schemas and the corresponding utterances” as an external knowledge-introduction plug-in. By inserting the Adapter into the language model, the mapping between logical language and natural language can be introduced faster and more directly to better realize the end-to-end human–machine language translation task. In the study, the proposed Text2GQL task model is mainly constructed based on an improved pipeline composed of a Language Model, the Pre-trained Adapter plug-in, and a Pointer Network. This enables the model to copy objects' tokens from utterances, generate corresponding GQL statements for graph database retrieval, and build an adjustment mechanism to improve the final output. Experiments show that our proposed method is competitive on the counterpart datasets (Spider, ATIS, GeoQuery, and 39.net) converted from the Text2SQL task, and that it is also practical in medical scenarios.
This paper reflects on what differentiates AI ethics issues from concerns raised by all IS applications. AI ethics issues can be viewed in three distinct categories. One can view AI as another IS application like any other. We examine this category of AI applications focusing primarily on Mason’s (MIS Quarterly, 10, 5–12, 1986) PAPA framework as a way to position AI ethics within the IS domain. One can also view AI as adding a generative capacity producing outputs that cannot be pre-determined from inputs and code. We examine this by adding “inference” to the informational pyramid and exploring its implications. AI can also be viewed as a basis for reexamining questions of the nature of mental phenomena such as reasoning and imagination. At this time, AI-based systems seem far from replicating or replacing human capabilities. However, if/when such abilities emerge as computing machinery continues growing in capacity and capability, it will be helpful to have anticipated arising ethical issues and developed plans for avoiding, detecting, and resolving them to the extent possible.
Artificial Intelligence (AI) implementation incorporates challenges that are unique to the context of AI, such as dealing with probabilistic outputs. To address these challenges, recent research suggests that organizations should develop specific capabilities for AI implementation. Currently, we lack a thorough understanding of how certain capabilities facilitate AI implementation. It remains unclear how they help organizations to cope with AI’s unique characteristics. To address this research gap, we employ a qualitative research approach and conduct 25 explorative interviews with experts on AI implementation. We derive four organizational capabilities for AI implementation: AI Project Planning and Co-Development help to cope with the inscrutability in AI, which complicates the planning of AI projects and communication between different stakeholders. Data Management and AI Model Lifecycle Management help to cope with the data dependency in AI, which challenges organizations to provide the proper data foundation and continuously adjust AI systems as the data evolves. We contribute to our understanding of the sociotechnical implications of AI’s characteristics and further develop the concept of organizational capabilities as an important success factor for AI implementation. For practice, we provide actionable recommendations to develop organizational capabilities for AI implementation.
The rapid expansion of the Internet of Things (IoT) has led to the emergence of new computing paradigms, such as mist and fog computing, in order to tackle the problem of transferring vast volumes of data to remote cloud data centers. In this paper, we propose a security-, cost- and energy-aware scheduling heuristic for real-time workflow jobs that process IoT data with various security requirements. The environment under study is a four-tier architecture, consisting of IoT, mist, fog and cloud layers. The resources in the mist, fog and cloud tiers are considered to be heterogeneous. The proposed scheduling approach is compared to a baseline strategy, which is security aware, but not cost and energy aware. The performance of both heuristics is evaluated through extensive simulation experiments, under different values of security level probabilities for the initial IoT input data of the entry tasks of the workflow jobs. The simulation results reveal that the proposed approach not only provides a better Quality of Service (QoS) compared to the baseline strategy, but also achieves monetary cost and energy savings.
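The core placement decision such a heuristic makes can be illustrated with a hedged, simplified sketch (all attributes, weights and the selection rule below are assumptions, not the paper's algorithm): filter out resources that cannot satisfy a task's security level or deadline, then pick the feasible resource minimizing a weighted sum of monetary cost and energy.

```python
# Illustrative greedy placement of one task across mist/fog/cloud tiers.
# Security levels: higher number = stronger guarantees (assumed encoding).

def place_task(task, resources, w_cost=0.5, w_energy=0.5):
    feasible = [r for r in resources
                if r["security"] >= task["security"]
                and r["finish_time"] <= task["deadline"]]
    if not feasible:
        return None  # security level or deadline cannot be met
    return min(feasible,
               key=lambda r: w_cost * r["cost"] + w_energy * r["energy"])

# Hypothetical per-tier estimates for a single task:
resources = [
    {"tier": "mist",  "security": 1, "cost": 1.0, "energy": 2.0, "finish_time": 5.0},
    {"tier": "fog",   "security": 2, "cost": 2.0, "energy": 3.0, "finish_time": 3.0},
    {"tier": "cloud", "security": 3, "cost": 4.0, "energy": 6.0, "finish_time": 2.0},
]
task = {"security": 2, "deadline": 4.0}
chosen = place_task(task, resources)
```

Here mist is excluded on security and the slower-but-cheaper fog tier beats the cloud on the cost/energy score, which mirrors the paper's trade-off between QoS, monetary cost, and energy.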
This paper engages with the emerging field of Artificial Intelligence (AI) governance, aiming to contribute to the relevant literature from three angles grounded in international human rights law, Law and Technology, Science and Technology Studies (STS) and theories of technology. First, focusing on the shift from ethics to governance, it offers a bird's-eye overview of the developments in AI governance, focusing on the comparison between ethical principles and binding rules for the governance of AI, and critically reviewing the latest regulatory developments. Secondly, focusing on the role of human rights, it takes the argument that human rights offer a more robust and effective framework a step further, arguing for the necessity to extend human rights obligations to also directly apply to private actors in the context of AI governance. Finally, it offers insights for AI governance borrowing from Internet Governance history and the broader technology governance field.
Information technology (IT) is a critical resource and asset for a firm’s internal operations and external interactions with outside stakeholders. This paper explores IT capabilities that enable a retailer’s value chain activities and interfaces with suppliers and customers. IT capabilities that enable the value chain activities of logistics and operations as well as those of marketing and sales are identified to examine how they influence a retailer’s implementation of value chain interfaces. Through the lens of organizational information processing theory (OIPT), we find that both IT capabilities and IT infrastructure play a pivotal role in implementing value chain interfaces but in distinctive ways. On the supplier-facing side, a retailer’s IT infrastructure and IT capabilities enabling logistics and operations are essential to its vendor-managed inventory (VMI) system. On the customer-facing side, a retailer’s e-commerce website requires IT infrastructure and IT capabilities that enable marketing and sales. Moreover, synergies are found downstream for the e-commerce website, which benefits from the integration of IT infrastructure and IT capabilities enabling logistics and operations, while such synergies are not found upstream for VMI. This partial complementary effect is explicated by the sequential asymmetric information dependency of value chain activities. Finally, we discuss the implications of our findings for the research and practice of IT enablement and integration.
This study extends our understanding of what makes an online review useful by examining the effects of review quality (i.e., as a composite variable of review comprehensiveness and review topic consistency) on review usefulness, and the moderating effects of source credibility on the relationship between review quality and review usefulness. The Elaboration Likelihood Model, convergence theory, and cueing effect literature are used to define the variables of review comprehensiveness and review topic consistency. Analyses of 27,517 restaurant reviews from Yelp show that review topic consistency has a positive effect on review usefulness, but, contrary to our hypothesis, review comprehensiveness has a negative effect on review usefulness. We also found source credibility positively moderates the effect of review comprehensiveness on review usefulness, but negatively moderates the effect of review topic consistency on review usefulness. Theoretical and practical implications are discussed.
Service availability is a key construct in Service Level Agreements (SLA) between a cloud service provider and a client. The provider typically allocates backup resources to mitigate the risk of violating the SLA-specified uptime guarantee. However, initial backups may need to be adjusted in response to real-time failure and recovery events. In this study, we first develop a recurrent intervention at fixed intervals (RIFI) strategy that allows the provider to adjust the allocation of backup resources such that the expected total cost is minimized. Next, we focus on limiting the number of interventions, starting from a single-intervention strategy, as frequent reallocations may be operationally disruptive. In particular, we provide a cost minimization approach to guide service providers in their virtual resource management, and a specific downtime minimization approach for more mission-critical applications as a more aggressive alternative. We present computational results exploring the impact of intervention on the likelihood of SLA violation for the rest of the contract period, and evaluate parameters such as the time and quantum of resource level adjustment, penalty levels desired by clients, and their influences on the backup resource provisioning strategies. We also validate our models through the analysis of use cases from Amazon Elastic Compute Cloud. Finally, we summarize this study by providing key practical managerial implications for resource deployment in the availability-aware cloud.
In higher education, low teacher-student ratios can make it difficult for students to receive immediate and interactive help. Chatbots, increasingly used in various scenarios such as customer service, work productivity, and healthcare, might be one way of helping instructors better meet student needs. However, few empirical studies in the field of Information Systems (IS) have investigated pedagogical chatbot efficacy in higher education, and fewer still discuss their potential challenges and drawbacks. In this research, we address this gap in the IS literature by exploring the opportunities, challenges, efficacy, and ethical concerns of using chatbots as pedagogical tools in business education. In this two-study project, we conducted a chatbot-guided interview with 215 undergraduate students to understand student attitudes regarding the potential benefits and challenges of using chatbots as intelligent student assistants. Our findings revealed the potential for chatbots to help students learn basic content in a responsive, interactive, and confidential way. Findings also provided insights into student learning needs, which we then used to design and develop a new, experimental chatbot assistant to teach basic AI concepts to 195 students. Results of this second study suggest chatbots can be engaging and responsive conversational learning tools for teaching basic concepts and for providing educational resources. Herein, we provide the results of both studies and discuss promising opportunities and ethical implications of using chatbots to support inclusive learning.
Misinformation on social media has become a horrendous problem in our society. Fact-checks on information often fall behind the diffusion of misinformation, which can lead to negative impacts on society. This research studies how different factors may affect the spread of fact-checks over the internet. We collected a dataset of fact-checks in a six-month period and analyzed how they spread on Twitter. The spread of fact-checks is measured by the total retweet count. The factors/variables include the truthfulness rating, topic of information, source credibility, etc. The research identifies truthfulness rating as a significant factor: conclusive fact-checks (either true or false) tend to be shared more than others. In addition, the source credibility, political leaning, and the sharing count also affect the spread of fact-checks. The findings of this research provide practical insights into accelerating the spread of the truth in the battle against misinformation online.
Enterprise architecture (EA) initiatives consist of functions, processes, tools, instruments, and principles to guide the design of IT and its alignment with business. EA is often presented as a silver bullet to ensure that IT contributes to business. Yet many EA initiatives do not work out or even fail, and this area is undertheorized in the literature. This study aims to understand the factors influencing the failure of EA initiatives. We identified 15 factors and invited 8 EA experts to evaluate the factors and their influence based on an approach combining grey systems theory, the Decision-Making Trial and Evaluation Laboratory (DEMATEL), and Interpretive Structural Modeling (ISM). The findings indicate that the factors are correlated and interwoven in complex causal chains. This study reveals the root factor and suggests enhancing high-level managers' EA knowledge and ensuring communication and leadership skills of enterprise architects as the starting point to avoid EA failure. Only later does organizing the EA function become important.
Of all emerging technologies, Artificial Intelligence (AI) is perhaps the most debated topic in contemporary society because it promises to redefine and disrupt several sectors. At the same time, AI poses challenges for policymakers and decision-makers, particularly regarding formulating strategies and regulations to address their stakeholders’ needs and perceptions. This paper explores stakeholder perceptions as expressed through their participation in the formulation of Europe's AI strategy and sheds light on the challenges of AI in Europe and the expectations for the future. Our analysis reveals six dimensions towards an AI strategy: ecosystems, education, liability, data availability sufficiency & protection, governance, and autonomy. It draws on these dimensions to construct a desires-realities framework for AI strategy in Europe and provide a research agenda for addressing existing realities. Our findings contribute to understanding stakeholder desires on AI and hold important implications for research, practice and policymaking.
Technology can support multi-criteria decision-making processes, allowing managers to identify efficient solutions to complex problems in a structured and rational way. Especially in times of crisis, the use of a Decision Support System (DSS) is useful, since these situations demand greater accuracy in the decision-making process. Therefore, this study shows the usefulness of the Decision Support System constructed for the FITradeoff method in a practical context involving decision-making in a time of crisis. In particular, this study discusses the applicability of the FITradeoff DSS to solve an important problem involving a Brazilian company. The FITradeoff DSS was employed for a compliance-program problem, in which a company sought to improve its performance in relation to the program. This problem is particularly significant in Brazil, where the search for compliance programs has been increasing since the adoption of the anticorruption law. Thus, twenty-eight alternatives were created, and these alternatives were evaluated against five criteria. As a result, most of the alternatives at the top of the ranking are related to the Internal Communication aspect. Hence, the DM considered that these alternatives are sufficient to direct the efforts to execute the Compliance Program, and that this theme in particular can be the focus in this company. Furthermore, in view of recurring crises around the world, companies must identify ways to ensure their internal processes support the sustainability of their business. For decision making in times of crisis, the DSS of the FITradeoff method is an effective tool allowing decision makers to handle complex decisions.
The online review has become an important pillar in the decision-making process for purchasing experience products, especially durable goods with relatively high prices. Using a rich data set for automobiles, we quantify the sentiment tendency expressed in textual reviews, and empirically examine the nonlinear, inverted U-shaped relationship between customer satisfaction and sentiment tendency. We then investigate the nonlinear influences of review sentiment and depth on helpfulness. Furthermore, we study the relationship between numerical rating and text contents, i.e., sentiment tendency and review depth, in promoting review helpfulness, and quantitatively identify the complementary effect of sentiment tendency. Our results indicate that both numerical ratings and sentiments expressed in text contents contribute to an increase in review helpfulness. Compared with polarized reviews, neutral ones are more beneficial to helpfulness and customer satisfaction. We also find that reviews with moderate depth are more helpful. Based on the empirical findings, we discuss several managerial implications for review system designers and consumers in the durable product market.
Machine learning and artificial intelligence (ML/AI) promise higher degrees of personalization and enhanced efficiency in marketing communication. The paper focuses on causal ML/AI models for campaign targeting. Such models estimate the change in customer behavior due to a marketing action, known as the individual treatment effect (ITE) or uplift. ITE estimates capture the value of a marketing action when applied to a specific customer and facilitate effective and efficient targeting. We consolidate uplift models for multiple treatments and continuous outcomes and perform a benchmarking study to demonstrate their potential to target promotional monetary campaigns. In this use case, the new models facilitate selecting the optimal discount amount to offer to a customer. A large-scale analysis based on eight marketing data sets from leading B2C retailers confirms significant gains in the campaign return on marketing when using the new models compared to relevant model benchmarks and conventional marketing practices.
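The multiple-treatment uplift idea can be sketched with a deliberately minimal estimator (an assumption-laden stand-in, not the paper's models): with enough customers per segment, the uplift of a discount level is the treated-group response rate minus the control response rate, and the discount with the highest estimated uplift is offered.

```python
# Illustrative multiple-treatment uplift estimate from hypothetical
# randomized-experiment data. Real uplift models condition on customer
# features; this sketch only shows the aggregate uplift logic.

def response_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def best_discount(control_outcomes, treated_by_discount):
    """Pick the discount whose estimated uplift over control is largest."""
    base = response_rate(control_outcomes)
    uplifts = {d: response_rate(y) - base
               for d, y in treated_by_discount.items()}
    return max(uplifts, key=uplifts.get), uplifts

# Toy binary purchase outcomes (1 = bought), by assigned discount:
control = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]            # no offer: 20% buy
treated = {
    "5%":  [0, 1, 0, 1, 0, 0, 0, 1, 0, 0],          # 30% buy
    "10%": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],          # 50% buy
    "20%": [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],          # 40% buy
}
choice, uplifts = best_discount(control, treated)
```

Note the 20% discount is not chosen despite a decent response rate: uplift targeting values the incremental effect over control, not the raw response, which is what distinguishes causal targeting models from conventional response models.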
Multinational Enterprises (MNEs) use social media to reach a global audience. Simultaneously, Corporate Social Responsibility (CSR) has become an important feature of MNEs’ communications with stakeholders via social media. It is, therefore, important to understand the country and industry level differences in how stakeholders engage with CSR communications of MNEs via social media. We examine this across four countries and three industries by focusing on stakeholders’ engagement with CSR (vs. non-CSR) posts on Twitter. We find significant differences across industries within countries for three separate aspects of behavioural engagement (likes, retweets and replies). In addition, CSR posts have a positive effect on stakeholder engagement based on likes and retweets at the industry-within-country level. Moreover, CSR posts are not fully effective in developed countries. Hence, achieving legitimacy through CSR on social media is a complex challenge, requiring a nuanced understanding of stakeholder reactions based on specific industry and country contexts.
Diligent compliance with Information Security Policies (ISPs) can effectively deter threats but can also adversely impact organizational productivity, impeding task completion during extreme events. This paper examines employees’ job performance during extreme events. We use conservation of resources (COR) theory to examine how psychological resources (individual resilience, job meaningfulness, and self-efficacy) and organizational resources (incident command leadership, information availability, and perceived effectiveness of security and privacy controls) influence ISP compliance decisions and job performance during extreme events. The results show that a one-size-fits-all approach to ISPs is not ideal during extreme events; ISPs can distract employees from critical job tasks. We also observed that under certain conditions, some psychological resources, such as individual resilience, are reserved for job performance, while others, such as self-efficacy, are reserved for ISP compliance. We also conducted a post hoc analysis of data from respondents who experienced strain during a real extreme event while at work. Our discussion provides recommendations on how security and privacy policies can be designed to reflect disaster conditions by relaxing some policy provisions.
Detecting and responding to information security threats quickly and effectively is becoming increasingly crucial as modern attackers continue to engineer their attacks to operate covertly to maintain long-term access to victims’ systems after the initial penetration. We conducted an experiment to investigate various aspects of decision makers’ behavior in monitoring for threats in systems that potentially have been compromised by intrusions. In checking for threats, decision makers showed a recency effect: they deviated from optimal monitoring behavior by altering their checking pattern in response to recent random incidents. Decision makers’ monitoring behavior was also adversely affected when there was an increase in security, exhibiting a risk compensating behavior through which heightened security leads to debilitated security behaviors. Although the magnitude of the risk compensating behavior was significant, it was not enough to fully offset the benefits from added security. We discuss implications for theory and practice of information security.
Artificial intelligence (AI) is transitioning from a merely adopted technology to fueling everyday decision-making systems, from medication to navigation. Given this integration of AI into decision-making systems (ADMS), the present study explores how text-based user data from social media can help organize users’ perspectives on ADMS. To investigate our research questions, we used a framework consisting of three phases: exploratory, confirmatory, and validatory. We applied hierarchical clustering and topic modeling in the exploratory phase, hypothesis building and empirical analysis in the confirmatory phase, and a support vector machine (SVM) in the validatory phase. Our findings suggest that users are primarily concerned about the risk involved in using ADMS. Factors such as accountability, self-efficacy, knowledge of ADMS, and individuals’ attitudes towards ADMS shape how individuals perceive ADMS. The study’s theoretical and practical implications have broad scope, as ADMS are still at an early stage.
Approximately one billion individuals suffer from mental health disorders, such as depression, bipolar disorder, schizophrenia, and anxiety. Mental health professionals use various assessment tools to detect and diagnose these disorders. However, these tools are complex, contain an excessive number of questions, and require a significant amount of time to administer, leading to low participation and completion rates. Additionally, the results obtained from these tools must be analyzed and interpreted manually by mental health professionals, which may yield inaccurate diagnoses. To this end, this research utilizes advanced analytics and artificial intelligence to develop a decision support system (DSS) that can efficiently detect and diagnose various mental disorders. As part of the DSS development process, the Network Pattern Recognition (NEPAR) algorithm is first utilized to build the assessment tool and identify the questions that participants need to answer. Then, various machine learning models are trained using participants’ answers to these questions and other historical data as inputs to predict the existence and the type of their mental disorder. The results show that the proposed DSS can automatically diagnose mental disorders using only 28 questions without any human input, to an accuracy level of 89%. Furthermore, the proposed mental disorder diagnostic tool has significantly fewer questions than its counterparts; hence, it provides higher participation and completion rates. Therefore, mental health professionals can use this proposed DSS and its accompanying assessment tool for improved clinical decision-making and diagnostic accuracy.
The field of artificial intelligence (AI) is advancing quickly, and systems can increasingly perform a multitude of tasks that previously required human intelligence. Information systems can facilitate collaboration between humans and AI systems such that their individual capabilities complement each other. However, there is a lack of consolidated design guidelines for information systems facilitating the collaboration between humans and AI systems. This work examines how agent transparency affects trust and task outcomes in the context of human-AI collaboration. Drawing on the 3-Gap framework, we study agent transparency as a means to reduce the information asymmetry between humans and the AI. Following the Design Science Research paradigm, we formulate testable propositions, derive design requirements, and synthesize design principles. We instantiate two design principles as design features of an information system utilized in the hospitality industry. Further, we conduct two case studies to evaluate the effects of agent transparency: We find that trust increases when the AI system provides information on its reasoning, while trust decreases when the AI system provides information on sources of uncertainty. Additionally, we observe that agent transparency improves task outcomes as it enhances the accuracy of judgemental forecast adjustments.
Anecdotal evidence suggests that artificial intelligence (AI) technologies are highly effective in digital marketing and rapidly growing in popularity in the context of business-to-business (B2B) marketing. Yet empirical research on AI-powered B2B marketing, and particularly on the socio-technical aspects of its use, is sparse. This study uses Activity Theory (AT) as a theoretical lens to examine AI-powered B2B marketing as a collective activity system, and to illuminate the contradictions that emerge when adopting and implementing AI into traditional B2B marketing practices. AT is appropriate for this study because it frames contradictions as a motor for change that leads to transformation, rather than as tensions that threaten a premature abandonment of the adoption and implementation of AI in B2B marketing. Based on eighteen interviews with industry and academic experts, the study identifies contradictions with which marketing researchers and practitioners must contend. We show that these contradictions can be culturally or politically challenging to confront, and even when resolved, can have both intended and unintended consequences.
Virtual Reality (VR) is becoming an increasingly important technology in a host of industries, including tourism. VR can provide virtual experiences before, during, or in lieu of real-world visits to tourism sites. Hence, providing authentic experiences is essential to satisfy guests with the site and technology. This study analyzes survey data using PLS to identify the determinants of satisfaction with non-immersive VR experiences of heritage and non-heritage tourism sites. Results from 193 subjects reveal the linkages between system quality, object-related authenticity, activity-related authenticity, and presence, as well as their relationship with satisfaction.
The debate on the pros and cons of employee attachment to social networking sites (SNS) has led to social media policy paralysis in many organizations, and often a prohibition on employee use of SNS. This paper examines corporate users’ attachment to SNS. An analysis of 316 survey responses showed that corporate users’ socialization in large public SNS was steeped in perceived work-related benefits, which in turn nourished their SNS attachment. Social use outperformed informational use in generating perceived work-related benefits from SNS. Weak ties in large heterogeneous networks resulted in strategic and operational benefits, whereas the effects of strong bonding in homogenous networks were limited to operational benefits. The paper contributes to research on SNS use by corporate users and the debate on the effect of SNS use for work. The findings will benefit SNS strategists of organizations and policymakers to exploit the benefit potential of public SNS.
Small and medium-sized enterprises (SMEs) organize themselves into clusters by sharing a set of limited resources to achieve the holistic success of the cluster. However, these SMEs often face conflicts and deadlock situations that hinder the fundamental operational dynamics of the cluster for varied reasons, including lack of trust and transparency in interactions, lack of common consensus, and lack of accountability and non-repudiation. Blockchain technology brings trust, transparency, and traceability to systems, as demonstrated by previous research and practice. In this paper, we explore the role of blockchain technology in building a trustworthy yet collaborative environment in SME clusters through the principles of community self-governance based on the work of Nobel Laureate Elinor Ostrom. We develop and present a blockchain commons governance framework for three main dimensions, i.e., interaction, autonomy, and control, based on the theoretical premise of equivalence mapping and qualitative analysis. This paper examines the role of blockchain technology as a guiding mechanism that supports the smooth functioning of SMEs for their holistic good. The study focuses on the sustainability and productivity of SMEs operating in clusters under public and private partnership. This is the first study to address the operational challenges faced by SMEs in clusters by highlighting the dimensions of blockchain commons governance.
The literature notes that firms are keen to develop big data analytics capability (BDAC, e.g. big data analytics (BDA) management and technology capability) to improve their competitive performance (e.g. financial performance and growth performance). Unfortunately, the extant literature offers limited understanding of the mechanisms by which firms’ BDAC affects their competitive performance, especially in the context of small and medium-sized enterprises (SMEs). Using resource capability as the theoretical lens, this paper specifically examines how BDAC influences SMEs’ competitive performance via the mediating role of business models (BMs). This study also explores the moderating effect of COVID-19 on the relationship between BDAC and BMs. Supported by Partial Least Squares-Structural Equation Modelling (PLS-SEM) and data from 242 SMEs in China, this study finds that the infrastructure and value attributes of BMs mediate the effect of BDAC on competitive performance. Furthermore, the improvement in financial performance comes from matching BDA management capability with the infrastructure attributes of BMs, while the improvement in growth comes from matching BDA management capability and BDA technology capability with the value attributes of BMs. The results also confirm the positive moderating effect of COVID-19 on the relationship between BDA management capability and the value attributes of BMs. This study enriches the integration of the BDAC and BMs literature by showing that the match between BDAC and BMs is vital to achieving competitive performance, and it helps managers adopt an informed BDA strategy to promote widespread use of BDA and BMs.
Cyberattacks can be considered one of the fundamental challenges that paralyze the progress of digital payment usage (DPU) among citizens, as consumers shy away from using digital banking services due to increased concern over information security. National Cybersecurity Commitment (NCSC) has emerged as a preventive cybersecurity mechanism for countries to tackle such cybersecurity threats. Previous studies have shown that a country's NCSC positively impacts its business and economy. This study examines the effect of NCSC on DPU across nations by grounding our discussion in institutional trust theory. As trusting belief in security measures is a culturally embedded characteristic, we also examine the moderating role of national culture through Hofstede’s cultural dimensions. We use multilevel models to analyze publicly available archives of repeated cross-sectional data covering 76 countries to test the proposed relationships. Our findings indicate that NCSC has a positive influence on DPU. Further, our results highlight that the relationship between NCSC and DPU in a country is contingent on cultural dimensions. Overall, the evidence suggests that a competent cybersecurity environment compatible with cultural values can influence the speedy diffusion of digital payments in a country. Implications of our findings for research and practice are also discussed.
This study aims to investigate the role of artificial intelligence (AI) driven facial recognition in enhancing a value proposition by influencing different areas of services in the travel and tourism industry. We adopted semi-structured interviews to derive insights from 26 respondents. Thematic analysis reveals four main themes (personalization, data-driven service offering, security and safety, and seamless payments). Further, we mapped the impact of AI-driven facial recognition on enhancing value and experience for corporate guests. Findings indicate that AI-based facial recognition can help the travel and tourism industry understand travelers’ needs, optimize service offers, and deliver value-based services, whereas data-driven services can be realized in the form of customized trip planning, email and calendar integration, and quick bill summarization. This contributes to strengthening the tourism literature through the lens of organizational information processing theory.
The Covid-19 pandemic illustrates that we are never far away from situations whose scale and impact are difficult to predict. Positioned at the intersection of crisis management and resilience, this insider case study provides the opportunity for a more complete understanding of the organisation-adversity relationship (Williams et al., 2017) by focusing on the third Covid-19 wave in Ireland (Dec 2020) and the resulting response by an Intensive Care Unit (ICU) crisis team. The study examines the evolution of seven data supply chains that were developed to support the ICU crisis team through the surge of cases that put the highest level of strain on the Irish health system since the pandemic began. The study focuses on 289 data reviews, which triggered 63 changes, each requiring a new iteration of a data supply chain. Incorporating Organisational Mindfulness as the theoretical framework, the study provides insight into the realities of data management during a crisis and a rich awareness of the complexities of data management that often go unrecognised. In doing so, the study contributes the concept of ‘mindful data’, which helps managers understand the key characteristics of resilient data supply chains. The study also provides a rare first-hand insight into how mindful data was constructed, presented, and evolved into an essential element within the critical care environment.
Today's companies rely heavily on in-company information technology standards (ICITS) to reduce costs, ensure flexibility, and facilitate the planning, implementation, and operation of IT systems. Steering and managing ICITS has proven to be challenging, revealing the need for efficient governance mechanisms. But even though prior research demonstrates the challenges of ICITS, viable advice on how to implement ICITS is scarce. In this paper, we develop an organizational design theory for the management of ICITS based on the framework of organizational control theory. We conducted a critical case study to identify the basic goals, constitutive elements, and fundamental mechanisms of a working ICITS management. The resulting design goals and principles were then evaluated and further refined in the light of additional expert interviews. With our work, we wish to extend the body of theoretical knowledge on the management of ICITS and help practitioners master the various challenges occurring in this domain.
Designing theory-driven social recommender systems (SRSs) has been a significant research challenge for over a decade. This study aims to identify behavioural factors that could improve the persuasiveness and quality of recommendations made by SRSs. Given both research streams’ striking similarity, it uses the recent yet rich research on social media influencers (SMI) to inform SRS research. Drawing on 72 publications, we classified 52 independent variables into 12 categories regrouped into three broad categories that characterise the relationships between the consumer and (i) the recommender system, (ii) the product or brand, and (iii) the advert. The meta-analysis results determined the relative importance of each category in predicting purchase intentions, placing recommender credibility and attitude towards the recommended product or brand at the top of the charts. Our findings are expected to facilitate more refined theory-building efforts and theory-driven designs in SRS research and practice.
Customer relationship management (CRM) is a strategic approach to managing an organization’s interaction with current and potential customers. Artificial Intelligence (AI) can analyze huge volumes of data without human intervention. The integration of AI with existing legacy CRM systems in business-to-customer (B2C) relationships makes sense given the massive growth potential of AI-integrated CRM systems. Depending on how its implementation is planned, AI-CRM technology could lead some organizations to success and others to failure. Contingency theory states that organizations cannot make decisions without a contingency plan and that the optimal course of action depends on internal and external circumstances. The Dynamic Capability View theory emphasizes the organizational ability to react adequately and in a timely manner to external changes, combining multiple capabilities of the organization, including organizational CRM and AI capabilities. Against this background, the purpose of this study is to examine the success and failure of the implementation of AI-integrated CRM systems in organizations from a B2C perspective using Contingency theory and Dynamic Capability View theory. The study finds that information quality, system fit, and organizational fit significantly and positively impact the implementation of AI-CRM for B2C relationship management. There is also a moderating impact of technology turbulence on both the acceptance and failure of AI-CRM capability in the organization.
Responsible Artificial Intelligence (AI) has gained a lot of attention, especially in the last few years. Scholars have conducted systematic literature reviews to gain more knowledge about responsible AI. However, no study has collected and evaluated the most significant barriers to responsible AI. We filled this gap in the literature by identifying eleven barriers and categorizing them, using the Technology-Organization-Environment framework, into three categories. We collected data from seven experts and used the analytic hierarchy process to evaluate the importance of the barriers. The results indicated that technology, as a category, is the most important. The findings also indicated that data quality is the most critical of all eleven barriers. We offered eleven propositions as a theoretical contribution for future researchers in terms of conceptual development. We discussed the implications of the findings for research and practice.
In this study, we explore prominent contemporary technology trajectories in the software industry and how they are expected to influence work in the software industry. To this end, we build on cultural lag theory to analyze how technological changes affect work in software development. We present the results from a series of expert interviews that were analyzed using the Gioia method. We identify a set of technology trends pertinent to software development, from which we derive four main changes affecting the future of work in software development: (1) a shift toward scalable solutions, (2) increased emphasis on data, (3) convergence of IT and non-IT industries, and (4) the cloud as the dominant computing paradigm. Accordingly, this study contains insights into how technology (as an element of material culture) influences non-material culture, as exemplified by the work involved in software development.
Large numbers of incomplete, unclear, and unspecific submissions on idea platforms hinder organizations from exploiting the full potential of open innovation initiatives, as idea selection is cumbersome. In a design science research project, we develop a design for a conversational agent (CA) based on artificial intelligence to support contributors in generating elaborate ideas on idea platforms where human facilitation is not scalable. We derive prescriptive design knowledge in the form of design principles, instantiate the CA, and evaluate it in two successive evaluation episodes. The design principles contribute to the current research stream on automated facilitation and can guide providers of idea platforms in enhancing idea generation and subsequent idea selection processes. Results indicate that CA-based facilitation is engaging for contributors and yields well-structured and elaborated ideas.
To create competitive advantages, companies are leaning towards business analytics (BA) to make data-driven decisions. Nevertheless, user acceptance and effective usage of BA are key elements for its success. Around the globe, organizations are increasingly adopting BA; however, a paucity of research examining the drivers of BA adoption and its continuance is noticeable in the literature. This is especially evident in developing countries, where a large number of systems and software development projects are outsourced. This is the first study to examine BA continuance in the context of software and systems development projects from the perspective of Pakistani software professionals. The data were collected from 186 Pakistani software professionals working in software and systems development projects and analyzed using partial least squares-structural equation modelling (PLS-SEM) techniques. Our structural model explains 45% of the variance in BA continuance intention, 69% of the variance in technological compatibility, and 59% of the variance in perceived usefulness. Our results show that confirmation has a direct impact on BA continuance intention in software and systems projects. The study has both theoretical and practical implications for professionals in the field of business analytics.