Business & Information Systems Engineering

Published by Springer Nature
Online ISSN: 1867-0202
Print ISSN: 2363-7005
Recent publications
  • Jella Pfeiffer
  • Julia Gutschow
  • Christian Haas
  • [...]
  • Suzana Alpsancar
Many software companies are adapting their traditional development processes to incorporate agile practices. In this context, expert knowledge is needed to evaluate different agile practices and configure them according to project needs. However, such expertise is scarce, difficult to validate, and time-consuming to apply manually. As a solution, the paper presents a model-driven approach, called SIAM, which automatically generates guidelines for the adoption of agile practices through the combination of different development methods. SIAM is supported by a meta-model architecture that implements a knowledge repository characterizing method configuration decisions, which can be reused in different development projects. SIAM has been implemented in a tool suite that facilitates the specification of models and the identification of issues during the definition of development processes. The approach has been successfully applied to reconfigure an industrial development process with agile methods, showing that the effort required to tailor agile practices to organizational standards is considerably reduced.
 
Augmented reality (AR) is widely acknowledged to be beneficial for safety-critical services with exceptionally high requirements regarding knowledge and the number of tasks to be performed simultaneously. This study explores the user-centered requirements for an AR cognitive assistant in the operations of a large European maritime logistics hub. Specifically, it deals with the safety-critical service process of soil sounding. Based on fourteen think-aloud sessions during service delivery, two expert interviews, and two expert workshops, five core requirements for AR cognitive assistants in soil sounding are derived, namely (1) real-time overlay, (2) variety in displaying information, (3) multi-dimensional tracking, (4) collaboration, and (5) interaction. The study is the first on the applicability and feasibility of AR in the maritime industry and identifies requirements that inform further research on AR use in safety-critical environments.
 
Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data, so their practical use in organizations with heterogeneous data sources is limited. The paper proposes a method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method's utility for organizations.
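As a rough illustration of the multi-headed idea mentioned in the abstract (not the paper's actual architecture, which is not reproduced here), the sketch below shows a shared sequence encoder over activity prefixes with one output head per monitored target; all layer sizes, head names, and data are assumptions.

```python
# Hedged sketch: a shared sequence encoder with several prediction heads,
# one per (hypothetical) monitoring target. Illustrative only.
import torch
import torch.nn as nn

class MultiHeadProcessMonitor(nn.Module):
    def __init__(self, n_activities, n_targets_per_head, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_activities + 1, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        # One linear head per target, e.g., {"delay": 2, "next_step": n_activities}
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n_out) for name, n_out in n_targets_per_head.items()}
        )

    def forward(self, activity_seqs):
        x = self.embed(activity_seqs)        # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.encoder(x)        # final hidden state summarizes the prefix
        h = h_n[-1]                          # (batch, hidden)
        return {name: head(h) for name, head in self.heads.items()}

# Usage: two heads sharing one encoder, trained with a summed loss.
model = MultiHeadProcessMonitor(n_activities=20,
                                n_targets_per_head={"delay": 2, "next_step": 20})
batch = torch.randint(1, 21, (8, 15))        # 8 padded activity prefixes of length 15
outputs = model(batch)                       # dict of logits, one entry per head
```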
 
This paper reports on a design science research (DSR) study that develops design principles for “green” – more environmentally sustainable – data mining processes. Grounded in the Cross Industry Standard Process for Data Mining (CRISP-DM) and on a review of relevant literature on data mining methods, Green IT, and Green IS, the study identifies eight design principles that fall into the three categories of reuse, reduce, and support. The paper develops an evaluation strategy and provides empirical evidence for the principles’ utility. It suggests that the results can inform the development of a more general approach towards Green Data Science and provide a suitable lens to study sustainable computing.
 
Research presentation according to the DSR grid (Vom Brocke and Maedche 2019)
Breakdown of the references used to support DSR in the sample of 114 papers
Besides increasing transparency and demonstrating the authors' awareness, self-reported limitations enable other researchers to effectively learn from, build on, validate, and extend the original work. However, this topic is understudied in information systems design science research (IS DSR). The study assessed 243 IS DSR papers published in the period 2013–2022 and built a typology of the 19 most relevant limitations, organized into four categories: (1) Input Knowledge and Technology, (2) Research Process, (3) Resulting Artifact, and (4) Design Knowledge. Further, the contribution suggests actions to mitigate each type of limitation throughout the entire IS DSR project lifecycle. The authors also created guidelines for reporting limitations in a way that is useful for knowledge accumulation. The proposed typology and guidelines enable reviewers and editors to better frame self-reported limitations, assess rigor and relevance more systematically, and provide more precise feedback. Moreover, the contribution may help design researchers identify, mitigate, and effectively communicate the uncertainties inherent to all scientific advances.
 
Flow of research and process
Proposed theoretical model
Proposal of a theoretical model
Analysis grid
The family business literature has not addressed the role of information systems (IS) in the development of trust in family businesses. Through an in-depth analysis of a Chinese industrial family business in Qingdao, this study shows how several IS contribute to trust within the organization. Trust is conceptualized along three dimensions, namely interpersonal trust, competence trust, and systems trust. Three main IS have been identified in the organization, namely WeChat, DingTalk, and the Enterprise Resource Planning system (ERP). This exploratory study analyzed how eight departments use these IS to understand which institutional logic is embedded within each IS. Each information system is conceptualized as embedded in a specific institutional logic which is not neutral in terms of trust building. These findings highlight the fact that Chinese executives use specific information systems to develop trust. ERP (here SAP) has a specific inherent institutional logic, namely rational managerialism, which contributes to systems trust. Social media such as WeChat and DingTalk are embedded in their own institutional logics, which make them better suited to specific activities. Unlike rational managerialism, the institutional logic associated with WeChat includes a strong focus on interpersonal communication, cooperation, and problem-solving. WeChat is associated with the development of interpersonal trust, whereas rational managerialism is rather associated with transparency and formality, and is thus unsuitable for developing interpersonal trust. Chinese executives use WeChat to create an informal and dynamic social space which promotes the development of stronger social ties. DingTalk is associated with another logic which promotes formal information sharing, reliability, and internal management. This information system contributes to the development of another type of trust, namely competence trust. The two social media contribute to sustaining interpersonal trust and competence-based trust, which are critical in the development stage of a family business. Findings also show that family members need to create a forum from which they themselves are absent so that employees can exchange freely, thus creating a space in which trust can blossom. This paper concludes with theoretical contributions and implications for practitioners.
 
Changes in liquidity and volatility due to the interruption of HFT (minute-wise regressions). This figure illustrates the β₁-coefficient from Eq. (6), i.e., the minute-wise cross-sectional differences between treatment (DAX30) and control group (CAC40), for each minute of the first hour of trading on non-event days (minutes 1 to 60) and the event day (minutes 61 to 120), separated by the red line. Dependent variables are Spread, L1-Volume, Depth(10), Order Imbalance, S.D. Price, and S.D. Midpoint. The purple line represents the median coefficient for non-event days, and the dotted lines represent the upper and lower bounds of the 95%-confidence interval
High-frequency traders account for a significant part of overall price formation and liquidity provision in modern securities markets. In order to react within microseconds, high-frequency traders depend on specialized low latency infrastructure and fast connections to exchanges, which require significant IT investments. The paper investigates a technical failure of this infrastructure at a major exchange that prevents high-frequency traders from trading at low latency. This event provides a unique opportunity to analyze the impact of high-frequency trading on securities markets. The analysis clearly shows that although the impact on trading volume and the number of trades is marginal, the effects on liquidity and to a lesser extent on price volatility are substantial when high-frequency trading is interrupted. Thus, investments in high-frequency trading technology provide positive economic spillovers to the overall market since they reduce transaction costs not only for those who invest in this technology but for all market participants by enhancing the quality of securities markets.
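The figure caption above refers to a β₁-coefficient from Eq. (6), which is not reproduced in this listing. As a hedged illustration only, a minute-wise cross-sectional regression of this kind could take the following form; the symbols are assumptions, not the paper's notation.

```latex
% Illustrative only; not the paper's Eq. (6).
% For each minute t, the market-quality variable Y_{i,t} (e.g., Spread) of
% stock i is regressed cross-sectionally on a treatment indicator:
Y_{i,t} = \beta_0 + \beta_1 D_i + \varepsilon_{i,t},
% where D_i = 1 if stock i belongs to the DAX30 (treatment) and 0 if it
% belongs to the CAC40 (control), so that \beta_1 captures, minute by
% minute, the difference between the two groups.
```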
 
Research model
Purchase page in the experiment
Experimental procedure
Scarcity cues, which are increasingly implemented on e-commerce platforms, are known to impair cognitive processes and influence consumers’ decision-making by increasing perceived product value and purchase intention. Another feature present on e-commerce platforms is online consumer reviews (OCRs), which have become one of the most important information sources on e-commerce platforms in the last two decades. Nevertheless, little is known about how the presence of scarcity cues affects consumers’ processing of textual review information. Consequently, it is unclear whether OCRs can counteract the effects of scarcity or whether OCRs are neglected due to scarcity cues. To address this gap, this study examines the effects of limited-quantity scarcity cues on online purchase decisions when participants have the possibility to evaluate textual review information. The results of the experimental study indicate that scarcity lowers participants’ processing of textual review information. This in turn increases perceived product value and has considerable negative consequences for the final purchase decision if the scarcity cue is displayed next to a low-quality product. The study’s findings provide relevant insights and implications for e-commerce platforms and policymakers alike. In particular, they highlight that e-commerce platforms can easily (ab)use scarcity cues to reduce consumers’ processing of textual review information in order to increase the demand for low-quality products. Consequently, policymakers should be aware of this mechanism and consider potential countermeasures to protect consumers.
 
IoT-enhanced Business Processes (BPs) make use of sensors and actuators to carry out process tasks and achieve a specific goal. One of the most important difficulties in the development of IoT-enhanced BPs is the interdisciplinarity demanded by this type of project. The objective is to define an interdisciplinary, tool-supported development approach that facilitates the collaboration of different professionals, with a special focus on three main facets: business process requirements, interoperability between IoT devices and BPs, and low-level data processing. The study followed a Design Science Research methodology for information systems that consists of a six-step process: (1) problem identification and motivation; (2) definition of the objectives for a solution; (3) design and development; (4) demonstration; (5) evaluation; and (6) communication. The paper presents an interdisciplinary development process to support the creation of IoT-enhanced BPs by applying the Separation of Concerns principle. A collaborative development environment is built to provide each professional with the tools required to accomplish her/his development responsibilities. The approach is validated through a case-study evaluation, which shows that the proposed development process and the supporting development environment effectively address the interdisciplinary nature of IoT-enhanced BPs.
 
In the past decade, crowdworking on online labor market platforms has become an important source of income for a growing number of people worldwide. This development has led to increasing political and scholarly interest in the wages people can earn on such platforms. This study extends the literature, which is often based on a single platform, region, or category of crowdworking, through a meta-analysis of prevalent hourly wages. After a systematic literature search, the paper considers 22 primary empirical studies, including 105 wages and 76,765 data points from 22 platforms, eight different countries, and 10 years. It is found that, on average, microtasks result in an hourly wage of less than $6. This wage is significantly lower than the mean wage of online freelancers, which is roughly three times higher when not factoring in unpaid work. Hourly wages accounting for unpaid work, such as searching for tasks and communicating with requesters, tend to be significantly lower than wages not considering unpaid work. Legislators and researchers evaluating wages in crowdworking need to be aware of this bias when assessing hourly wages, given that the majority of literature does not account for the effect of unpaid work time on crowdworking wages. To foster the comparability of different research results, the article suggests that scholars consider a wage correction factor to account for unpaid work. Finally, researchers should be aware that remuneration and work processes on crowdworking platforms can systematically affect the data collection method and inclusion of unpaid work.
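As a hedged illustration of the wage adjustment discussed above (the article's exact correction factor is not reproduced, and all numbers are made up), the arithmetic for an hourly wage that includes unpaid work can be sketched as follows.

```python
# Hedged sketch: adjusting an hourly wage for unpaid work time.
# Numbers are illustrative only, not data from the meta-analysis.

def effective_hourly_wage(earnings, paid_hours, unpaid_hours):
    """Wage per hour when unpaid time (task search, communication) is included."""
    return earnings / (paid_hours + unpaid_hours)

# Example: $6/h on paid microtask time, plus 20 minutes of unpaid work per paid hour.
earnings = 6.00 * 5                      # five paid hours
paid, unpaid = 5.0, 5 * (20 / 60)
print(round(effective_hourly_wage(earnings, paid, unpaid), 2))  # ≈ 4.5 instead of 6.0
```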
 
Emerging technologies in healthcare such as wearables, robotics, nanotech, connected health, and genomics technologies produce increasing amounts of data, which fuel artificial intelligence-powered algorithms to actively react to, predict, and prevent diseases and to steer scarce healthcare resources. While digitalization in the healthcare sector currently differs across European countries and worldwide, we see increasing advances that promise highly personalized, predictive, closed-loop, preventive healthcare solutions. These solutions present a chance to increase the quality of healthcare services, empower their users, make healthcare processes more efficient, and create more inclusive healthcare services for disadvantaged communities and minority groups. Business & Information Systems Engineering (BISE) has followed the adoption of important healthcare technologies such as electronic health records and digital health apps for more than a decade, beginning with a special issue on these topics (1/2013). This special issue of BISE builds on these foundations and is dedicated to emerging technologies in digital health that transform healthcare delivery, with important implications for patient value. In this special issue, we strive for inclusivity. We are open to all work (qualitative, quantitative, computational, design) in relation to reimagining digital health, and encourage work from different regions worldwide. Submitted manuscripts should be well-grounded in theory and need to persuasively demonstrate both practical relevance and substantial contributions to the scientific knowledge base. The scope of the special issue covers the full cycle "from bench to bedside": we encourage work ranging from the potentials of emerging technologies that are on the cusp of revolutionizing healthcare, to the implementation of those technologies in real-world healthcare settings, to studies investigating how the increasing amount of healthcare data can be made usable to unleash data-driven research. While we generally accept manuscripts from a broad thematic range at the intersection of healthcare and information systems, we see two thematic foci at the core of this special issue.
 
Theoretical perspective on social comparison and its consequences
Research model
Research results (Note: **P < 0.01; ***P < 0.001). Initially, we included age, gender, profession and the trait social comparison orientation as control variables in our research model. As these do not influence the variables included in the research model, indicated by non-significant relationships, we removed them for the sake of parsimony
Focus of this study and future research directions needed to provide generalizable findings
Telework became a necessary work arrangement during the global COVID-19 pandemic. However, practical evidence even before the pandemic also suggests that telework can adversely affect teleworkers’ colleagues working in the office. Those regular office workers may experience negative emotions such as envy which, in turn, can impact work performance and turnover intention. In order to assess the adverse effects of telework on regular office workers, the study applies social comparison theory and suggests telework disparity as a new theoretical concept. From the perspective of regular office workers, perceived telework disparity is the extent to which they compare their office working situation with their colleagues’ teleworking situation and conclude that their teleworking colleagues are better off than themselves. Based on social comparison theory, a model of how perceived disparity associated with telework causes negative emotions and adverse behaviors among regular office workers was developed. The data were collected in one organization with telework arrangements (N = 269). The results show that perceived telework disparity from the perspective of regular office workers increases their feelings of envy toward teleworkers and their job dissatisfaction, which is associated with higher turnover intentions and worse job performance. This study contributes to telework research by conceptualizing telework disparity and revealing a dark side of telework, with negative consequences for employees and organizations. For practice, the paper recommends making telework practices and policies as transparent as possible to realize the maximum benefits of telework.
 
This research discusses IT governance at Bank XCZ, particularly for its financial information systems. Bank XCZ is the largest private bank in Indonesia. Founded in 1957, it has continued to grow into what it is today. Most of Bank XCZ's main operational activities, including finance, are run online. The financial information system is expected to produce accurate, reliable, timely, and accountable financial reports and thus constitute an excellent financial information system. A capability assessment can be performed using the COBIT 5 framework. COBIT 5 is a framework frequently used by auditors, especially information system auditors, because it provides a comprehensive framework that can be used as a tool to create better IT governance for a company. This research uses the COBIT framework to review the financial application process in the banking industry and to assess its capability levels. The focus domains used in this research are EDM02, EDM03, and DSS01. The results show that EDM02 reaches capability level 3 (established process); EDM03 has an average capability score of 2.4, so its capability level is 2 (managed process); DSS01 has an average capability score of 2.6, so its capability level is 3 (established process); and DSS03 reaches capability level 3 (established process).
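As a hedged illustration of the capability-level arithmetic reported above (rounding an average assessment score to the nearest COBIT 5 level), the following sketch uses invented scores that happen to average 2.4 and 2.6; it is not the study's assessment data.

```python
# Hedged sketch: mapping averaged assessment scores to a discrete
# COBIT 5 capability level by rounding to the nearest integer.
# The per-attribute scores below are illustrative, not the study's data.

def capability_level(scores):
    avg = sum(scores) / len(scores)
    return round(avg)  # e.g., 2.4 -> level 2 (managed), 2.6 -> level 3 (established)

print(capability_level([2, 2, 3, 2, 3]))  # average 2.4 -> level 2
print(capability_level([3, 2, 3, 2, 3]))  # average 2.6 -> level 3
```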
 
Theoretical framework
Empirical model
Decision support systems are increasingly being adopted by various digital platforms. However, prior research has shown that certain contexts can induce algorithm aversion, leading people to reject their decision support. This paper investigates how and why the context in which users are making decisions (for-profit versus prosocial microlending decisions) affects their degree of algorithm aversion and ultimately their preference for more human-like (versus computer-like) decision support systems. The study proposes that contexts vary in their affordances for self-humanization. Specifically, people perceive prosocial decisions as more relevant to self-humanization than for-profit contexts, and, in consequence, they ascribe more importance to empathy and autonomy while making decisions in prosocial contexts. This increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization. The results from an online experiment support the theorizing. The paper discusses both theoretical and design implications, especially for the potential of anthropomorphized conversational agents on platforms for prosocial decision-making.
 
Sensemaking processes during confirmation and disconfirmation from AI systems and effects on usage
Overview of emerging disconfirmation codes in clinical practice: first-order codes of evaluating disconfirming AI advice and emerging second-order categories
Associations between AI usage patterns and radiologists' diagnostic self-efficacy
While diagnostic AI systems are implemented in medical practice, it is still unclear how physicians embed them in diagnostic decision making. This study examines how radiologists come to use diagnostic AI systems in different ways and what role AI assessments play in this process if they confirm or disconfirm radiologists’ own judgment. The study draws on rich qualitative data from a revelatory case study of an AI system for stroke diagnosis at a University Hospital to elaborate how three sensemaking processes revolve around confirming and disconfirming AI assessments. Through context-specific sensedemanding, sensegiving, and sensebreaking, radiologists develop distinct usage patterns of AI systems. The study reveals that diagnostic self-efficacy influences which of the three sensemaking processes radiologists engage in. In deriving six propositions, the account of sensemaking and usage of diagnostic AI systems in medical practice paves the way for future research.
 
Overview of the esport ecosystem and stakeholders
Esport, or competitive video gaming, is on the rise as events attract millions of viewers. Prior literature presents a vivid debate about esport. Proponents highlight esport for two reasons: first, as a means to self-actualization and satisfaction through a desire to win and a preference for difficult tasks; second, for its entertainment and value creation. Adversaries do not acknowledge esport as an official sport, citing intellectual property concerns and possible addiction. Information systems research can contribute to the debate on this digitally-enabled phenomenon that crosses multiple fields of research. Based on a review of the esport ecosystem and the current state of the art in research, the article proposes an esport research agenda for information systems research.
 
Research process
Yearly publications about digital twins from 2012 to 2020 (Scopus 2020)
Creating a taxonomy following Nickerson et al. (2013)
Conceptual model of a digital twin
Currently, Digital Twins receive considerable attention from practitioners and researchers. A Digital Twin describes a concept that connects physical and virtual objects through a data linkage. However, Digital Twins are highly dependent on their individual use case, which leads to a plethora of Digital Twin configurations. Based on a thorough literature analysis and two interview series with experts from various electrical and mechanical engineering companies, this paper proposes a set of archetypes of Digital Twins for individual use cases. It also delimits Digital Twins from related concepts, e.g., Digital Threads. The paper delivers profound insights into the domain of Digital Twins and thus helps the reader to identify the different archetypical patterns. The Version of Record is available online at http://dx.doi.org/10.1007/s12599-021-00727-7 under the CC BY licence.
 
Mapping design requirements to design principles and design features
Smart City metamodel extension
Netanya citizens' interactions per neighborhood for the waste management service
Future state: Solution diagram for Netanya waste management city service modeled using the ArchiMate extension
Description of the Smart City concepts and their graphical notation
The rapid increase and adoption of new Information Technologies (IT) in Smart Cities make the provision of public services more efficient. However, various municipalities and cities deal with challenges to transform and digitize city services. Smart Cities have a high degree of complexity where offered city services must respond to the concerns and goals of multiple stakeholders. These city services must also involve diverse data sources, multi-domain applications, and heterogeneous systems and technologies. Enterprise Architecture (EA) is an instrument to deal with complexity in both private and public organizations. The paper defines the concepts for modeling Smart Cities in ArchiMate, guided by a design-oriented research approach. Particularly, the focus of this paper is on the concepts for modeling city services and underlying information systems which are added to the EA metamodel. The metamodel is demonstrated in a real-world case and validated by Smart City domain experts. The findings suggest that these concepts are essential to achieve the Smart City strategy (e.g., city goals and objectives), as well as to meet the needs of different city stakeholders. Furthermore, an extension mechanism allows addressing the alignment of business and IT in complex environments such as Smart Cities, by adjusting EA metamodels and notations. This can help cities to design, visualize, and communicate architecture decisions when managing the transformation and digitalization of public services.
 
Consumer trust in complementor by rating score; (a) cases (1), (2), and (3); (b) cases (2) and (4)
a Estimated import threshold; b Estimated reputation resetting threshold. These thresholds have to be taken with caution as they depend on factors such as the employed sample, the ratings’ scales, as well as consumers’ perceptions of the rating distributions on the respective platforms
Parameter estimates; OLS regressions; standard errors in parentheses. DV: Trust in complementor
Complementors accumulate reputation on an ever-increasing number of online platforms. While the effects of reputation within individual platforms are well-understood, its potential effectiveness across platform boundaries has received much less attention. This research note considers complementors’ ability to increase their trustworthiness in the eyes of prospective consumers by importing reputational data from another platform. The study evaluates this potential lever by means of an online experiment, during which specific combinations of on-site and imported rating scores are tested. Results reveal that importing reputation can be advantageous – but also detrimental, depending on ratings’ values. Implications for complementors, platform operators, and regulatory bodies concerned with online reputation are considered.
 
The accelerated pace of digital technology development and adoption and the ensuing digital disruption challenge established business models at many levels, particularly by invalidating traditional value proposition logics. Therefore, processes of technology and information system (IS) adoption and implementation are crucial to organizations striving to survive in complex digitalized environments. In these circumstances, organizations should be aware of and minimize the risk that implemented IS remain unused. The user involvement perspective may help organizations face this issue. Involving users in IS implementation through activities, agreements, and behavior during system development activities (what the literature refers to as situational involvement) may be an effective way to increase user psychological identification with the system, achieving what the literature describes as intrinsic involvement, a state that ultimately helps to increase the adoption rate. Nevertheless, it is still necessary to understand the influence of situational involvement on intrinsic involvement. Thus, the paper explores how situational involvement and intrinsic involvement relate through a fractional factorial experiment with engineering undergraduate students. The resulting model explains 57.79% of intrinsic involvement and supports the importance of the theoretical premise that including users in activities that nurture a sense of responsibility contributes toward system implementation success. To practitioners, the authors suggest that convenient and low-cost hands-on activities may contribute significantly to IS implementation success in organizations. The study also contributes to adoption and diffusion theory by exploring the concept of user involvement, usually recognized as necessary for an IS adoption but not entirely contemplated in the key adoption and diffusion models.
 
The variable importance for the predictions of the default classifier based on boosting is shown for female (left) and male users (right). On the vertical axis, the clickstream and user attributes are listed. The horizontal axis shows the absolute average SHAP value, indicating the impact of the attribute on the prediction (Lundberg and Lee 2017)
Descriptive statistics by gender
Performance metrics for statistical parity
Performance metrics for equalized odds
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these, technologies for so-called “AI fairness” have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements, while reducing the overall prediction performance only slightly. However, they find that AI fairness also results in an increase in financial cost. In this way, the paper’s findings contribute to designing information systems on the basis of AI fairness.
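As a hedged illustration of the two fairness criteria referenced above (statistical parity and equalized odds), the following sketch computes standard group-fairness gaps on invented predictions; it is not the paper's implementation, and the data and group encoding are assumptions.

```python
# Hedged sketch: standard group-fairness metrics, computed on made-up data.
import numpy as np

def statistical_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equalized_odds_diff(y_true, y_pred, group):
    """Max gap in true-positive and false-positive rates across the two groups."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        r0 = y_pred[(group == 0) & (y_true == label)].mean()
        r1 = y_pred[(group == 1) & (y_true == label)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # e.g., a sensitive attribute encoded 0/1
print(statistical_parity_diff(y_pred, group))   # 0.0 -> equal positive rates
print(equalized_odds_diff(y_true, y_pred, group))
```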
 
Research model
Excerpts from an exemplary chatbot conversation during the experiment
Moderated mediation model with path coefficients
Interaction effect between chatbot response time and prior chatbot experience on social presence (H2)
Interaction effect between social presence and prior chatbot experience on intention to use (H4). Note: Shaded area represents levels of social presence where the difference between novice and experienced users is significant at the .05 level
Research has shown that employing social cues (e.g., name, human-like avatar) in chatbot design enhances users’ social presence perceptions and their chatbot usage intentions. However, the picture is less clear for the social cue of chatbot response time. While some researchers argue that instant responses make chatbots appear unhuman-like, others suggest that delayed responses are perceived less positively. Drawing on social response theory and expectancy violations theory, this study investigates whether users’ prior experience with chatbots clarifies the inconsistencies in the literature. In a lab experiment (N = 202), participants interacted with a chatbot that responded either instantly or with a delay. The results reveal that a delayed response time has opposing effects on social presence and usage intentions and shed light on the differences between novice users and experienced users – that is, those who have not interacted with a chatbot before vs. those who have. This study contributes to information systems literature by identifying prior experience as a key moderating factor that shapes users’ social responses to chatbots and by reconciling inconsistencies in the literature regarding the role of chatbot response time. For practitioners, this study points out a drawback of the widely adopted “one-design-fits-all” approach to chatbot design.
 
A Correction to this paper has been published: 10.1007/s12599-022-00743-1
 
Digital Twins offer considerable potential for cross-company networks. Recent research primarily focuses on using Digital Twins within the limits of a single organization. However, Shared Digital Twins extend application boundaries to cross-company utilization through their ability to act as a hub to share data. This results in the need to consider additional design dimensions which help practitioners design Digital Twins tailored for inter-company use. The article addresses precisely that issue as it investigates how Shared Digital Twins should be designed to achieve business success. For this purpose, the article proposes a set of design principles for Shared Digital Twins stemming from a qualitative interview study with 18 industry experts. The interview study is the primary data source for formulating and evaluating the design principles.
 
Predicting the final outcome of an ongoing process instance is a key problem in many real-life contexts. This problem has been addressed mainly by discovering a prediction model by using traditional machine learning methods and, more recently, deep learning methods, exploiting the supervision coming from outcome-class labels associated with historical log traces. However, a supervised learning strategy is unsuitable for important application scenarios where the outcome labels are known only for a small fraction of log traces. In order to address these challenging scenarios, a semi-supervised learning approach is proposed here, which leverages a multi-target DNN model supporting both outcome prediction and the additional auxiliary task of next-activity prediction. The latter task helps the DNN model avoid spurious trace embeddings and overfitting behaviors. In extensive experimentation, this approach is shown to outperform both fully-supervised and semi-supervised discovery methods using similar DNN architectures across different real-life datasets and label-scarce settings.
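As a hedged illustration of the semi-supervised training idea described above (not the paper's actual architecture), the sketch below masks the outcome loss for traces whose outcome label is unknown while keeping the auxiliary next-activity loss for all traces; shapes and tensors are invented.

```python
# Hedged sketch: joint loss for a multi-target model where only a fraction
# of traces carries an outcome label. All tensors are illustrative.
import torch
import torch.nn as nn

outcome_logits = torch.randn(16, 2)          # head 1: outcome classes
next_act_logits = torch.randn(16, 20)        # head 2: next activity
outcome_labels = torch.randint(0, 2, (16,))
next_act_labels = torch.randint(0, 20, (16,))
has_outcome = torch.rand(16) < 0.25          # only a fraction of traces is labeled

ce = nn.CrossEntropyLoss(reduction="none")
loss_next = ce(next_act_logits, next_act_labels).mean()   # auxiliary task: all traces
loss_outcome = (ce(outcome_logits, outcome_labels) * has_outcome).sum() \
               / has_outcome.sum().clamp(min=1)           # only labeled traces
loss = loss_outcome + loss_next              # joint objective over both heads
```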
 
With the ever-increasing societal dependence on electricity, one of the critical tasks in power supply is maintaining the power line infrastructure. In the process of making informed, cost-effective, and timely decisions, maintenance engineers must rely on human-created, heterogeneous, structured, and also largely unstructured information. The maturing research on vision-based power line inspection driven by advancements in deep learning offers first possibilities to move towards more holistic, automated, and safe decision-making. However, current research focuses solely on the extraction of information rather than its implementation in decision-making processes. The paper addresses this shortcoming by designing, instantiating, and evaluating a holistic deep-learning-enabled image-based decision support system artifact for power line maintenance at a distribution system operator in southern Germany. Following the design science research paradigm, two main components of the artifact are designed: a deep-learning-based model component responsible for automatic fault detection of power line parts, and a user-oriented interface responsible for presenting the captured information in a way that enables more informed decisions. As a basis for both components, preliminary design requirements are derived from literature and the application field. Drawing on justificatory knowledge from deep learning as well as decision support systems, tentative design principles are derived. Based on these design principles, a prototype of the artifact is implemented that allows for rigorous evaluation of the design knowledge in multiple evaluation episodes, covering different angles. Through a technical experiment, the artifact’s capability to detect selected faults (regarding insulators and safety pins) in unmanned aerial vehicle (UAV) image data (model component) is validated. Subsequent interviews, surveys, and workshops in a natural environment confirm the usefulness of the model as well as the user interface component. The evaluation provides evidence that (1) the image processing approach addresses the gap in power line component inspection and (2) the proposed holistic design knowledge for image-based decision support systems enables more informed decision-making. The paper therefore contributes to research and practice in three ways. First, the technical feasibility of detecting certain maintenance-intensive parts of power lines with the help of unique UAV image data is shown. Second, the distribution system operator’s specific problem is solved by supporting maintenance decisions with the proposed image-based decision support system. Third, precise design knowledge for image-based decision support systems is formulated that can inform future system designs of a similar nature.
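As a hedged illustration of the kind of detection component described above (the paper's trained model, classes, and weights are not reproduced; the file name and fault classes below are hypothetical), a generic torchvision detector could be applied to a UAV frame as follows.

```python
# Hedged sketch: running a generic torchvision detector over a UAV image
# and keeping confident detections for a decision-support UI. The fault
# classes and file paths are assumptions, not taken from the paper.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

FAULT_CLASSES = {1: "insulator_damaged", 2: "safety_pin_missing"}   # hypothetical

model = fasterrcnn_resnet50_fpn(num_classes=len(FAULT_CLASSES) + 1)  # +1 for background
# model.load_state_dict(torch.load("powerline_faults.pt"))  # hypothetical fine-tuned weights
model.eval()

image = convert_image_dtype(read_image("uav_frame.jpg"), torch.float)  # hypothetical file
with torch.no_grad():
    detections = model([image])[0]           # dict with "boxes", "labels", "scores"

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:                          # keep confident findings only
        print(FAULT_CLASSES.get(int(label), "unknown"),
              [round(v) for v in box.tolist()], float(score))
```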
 
Overview of the Delphi study procedure and statistics
Distribution of median values along opportunities and challenges
Process Mining is an active research domain and has been applied to understand and improve business processes. While significant research has been conducted on the development and improvement of algorithms, evidence on the application of Process Mining in organisations has been far more limited. In particular, there is limited understanding of the opportunities and challenges of using Process Mining in organisations. Such an understanding has the potential to guide research by highlighting barriers for Process Mining adoption and, thus, can contribute to successful Process Mining initiatives in practice. In this respect, this paper provides a holistic view of opportunities and challenges for Process Mining in organisations identified in a Delphi study with 40 international experts from academia and industry. Besides proposing a set of 30 opportunities and 32 challenges, the paper conveys insights into the comparative relevance of individual items, as well as differences in the perceived relevance between academics and practitioners. Therefore, the study contributes to the future development of Process Mining, both as a research field and regarding its application in organisations.
 
BPM culture spider web diagram based on all participants’ data
Dimension comparisons between the participating municipalities
Public administration institutions increasingly use business process management (BPM) to innovate internal operations, increase process performance and improve their services. Research on private sector companies has shown that organizational culture may impact an organization's BPM; this culture is often referred to as BPM culture. However, similar research on public administration is still missing. Thus, this article assesses BPM culture in Germany’s municipal administration. A total of 733 online survey responses were gathered and analyzed using MANOVA and follow-up discriminant analyses to identify possible determinants of public administration’s BPM culture. The results indicate that the employees’ professional experience and their responsibility influence the assessment of BPM culture, as does the size of a municipality. Based on these findings, the article proposes testable relationships and an agenda for further research on BPM culture in public administration.
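As a hedged illustration of the reported analysis approach (a MANOVA over several BPM-culture dimensions with a grouping factor such as municipality size), the following sketch uses invented dimension names and simulated survey data; it is not the study's dataset or model.

```python
# Hedged sketch: MANOVA of several (hypothetical) BPM-culture dimensions
# against municipality size, on simulated data.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "customer_orientation": rng.normal(4, 1, 300),   # hypothetical culture dimensions
    "excellence": rng.normal(4, 1, 300),
    "responsibility": rng.normal(4, 1, 300),
    "municipality_size": rng.choice(["small", "medium", "large"], 300),
})

maov = MANOVA.from_formula(
    "customer_orientation + excellence + responsibility ~ municipality_size", data=df
)
print(maov.mv_test())   # Wilks' lambda, Pillai's trace, etc. per factor
```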
 
Problem definition: Data errors in business processes can be a source of exceptions and hamper business outcomes. Relevance: The paper proposes a method for analyzing data inaccuracy issues already at process design time, in order to support process designers by identifying process parts where data errors might remain unrecognized, so that decisions could be taken based on inaccurate data. Methodology: The paper follows design science, developing a method as an artifact. The conceptual basis is the notion of data inaccuracy awareness – the ability to tell whether potential discrepancies between real and IS values may exist. Results: The method was implemented on top of a Petri net modeling tool and validated in a case study performed at a large manufacturer of safety-critical systems. Managerial implications: Anticipating the consequences of data inaccuracy already during process design can help avoid them at runtime.
 
Top-cited authors
Thomas Hess
  • Ludwig-Maximilians-University of Munich
Peter Fettke
  • Deutsches Forschungszentrum für Künstliche Intelligenz
Hans-Georg Kemper
  • Universität Stuttgart
Heiner Lasi
  • Ferdinand-Steinbeis-Institut
Alexander Benlian
  • Technische Universität Darmstadt