Design Science Methodology for Information Systems and Software Engineering
Abstract
This book provides guidelines for practicing design science in the fields of information systems and software engineering research. A design process usually iterates over two activities: first designing an artifact that improves something for stakeholders and subsequently empirically investigating the performance of that artifact in its context. This validation in context is a key feature of the book: since an artifact is designed for a context, it should also be validated in this context.
Chapters (20)
To do a design science project, you have to understand its major components, namely, its object of study and its two major activities. The object of study is an artifact in context (Sect. 1.1), and its two major activities are designing and investigating this artifact in context (Sect. 1.2). For the design activity, it is important to know the social context of stakeholders and goals of the project, as this is the source of the research budget as well as the destination of useful research results. For the investigative activity, it is important to be familiar with the knowledge context of the project, as you will use this knowledge and also contribute to it. Jointly, the two major activities and the two contexts form a framework for design science that I describe in Sect. 1.3. In Sect. 1.4, I show why in design science the knowledge that we use and produce is not universal but has middle-range scope.
To frame a research project, you have to specify its research goal (Sect. 2.1). Because a design science project iterates over designing and investigating, its research goal can be refined into design goals and knowledge goals. We give a template for design problems in Sect. 2.2 and a classification of different kinds of knowledge goals in Sect. 2.3.
A design science project iterates over the activities of designing and investigating. The design task itself is decomposed into three tasks, namely, problem investigation, treatment design, and treatment validation. We call this set of three tasks the design cycle, because researchers iterate over these tasks many times in a design science research project.
Design science research projects take place in a normative context of laws, regulations, constraints, ethics, human values, desires, and goals. In this chapter, we discuss goals. In utility-driven projects, there are stakeholders with goals to which the research project must contribute. In exploratory projects, potential stakeholders may not know that they are potential stakeholders, and it may not be clear what their goals are. Nevertheless, or precisely because of that, even in exploratory projects it is useful to think about who might be interested in the project results and, importantly, who would sponsor the project. After all, design research should produce potentially useful knowledge. We therefore discuss possible stakeholders in Sect. 4.1 and the structure of stakeholder desires and goals in Sect. 4.2. In Sect. 4.3, we classify possible conflicts among stakeholder desires that may need to be resolved by the project.
Treatments are designed to be used in the real world, in the original problem context. Once a treatment is implemented in the original problem context, the implementation becomes an important source of information about the properties of the artifact and about the treatment that it provides. This may or may not trigger a new iteration through the engineering cycle.
In design science projects, there may be uncertainty about stakeholders and their goals, and so treatment requirements may be very uncertain. It nevertheless pays off to spend some time thinking about the desired properties of a treatment before designing one. The requirements that we specify provide useful guidance for the search for possible treatments.
To validate a treatment is to justify that it would contribute to stakeholder goals when implemented in the problem context. If the requirements for the treatment are specified and justified, then we can validate a treatment by showing that it satisfies its requirements. The central problem of treatment validation is that no real-world implementation is available to investigate whether the treatment contributes to stakeholder goals. Still, we want to predict what will happen if the treatment is implemented. This problem is explained in Sect. 7.1. To solve it, design researchers build validation models of the artifact in context and investigate these models (Sect. 7.2). Based on these modeling studies, researchers develop a design theory of the artifact in context and use this theory to predict the effects of an implemented artifact in the real world (Sect. 7.3). We review some of the research methods to develop and test design theories in Sect. 7.4. These methods play a role in the process of scaling up an artifact from the idealized conditions of the laboratory to the real-world conditions of practice. This is explained in Sect. 7.5.
When we design and investigate an artifact in context, we need a conceptual framework to define structures in the artifact and its context. In Sect. 8.1, we look at two different kinds of conceptual structures, namely, architectural and statistical structures. In information systems and software engineering research, the context of the artifact often contains people, and researchers usually share concepts with them. This creates a reflective conceptual structure that is typical of social research, discussed in Sect. 8.2. Conceptual frameworks are tools for the mind, and the functions of conceptual frameworks are discussed in Sect. 8.3. In order to measure constructs, we have to operationalize them. This is subject to the requirements of construct validity, discussed in Sect. 8.4.
Like all scientific research, design science aims to develop scientific theories. As explained earlier in Fig. 1.3, a design science project starts from a knowledge context consisting of scientific theories, design specifications, useful facts, practical knowledge, and common sense. This is called prior knowledge.
The set of scientific theories used as prior knowledge in a design research project is loosely called its theoretical framework.
When it is finished, a design science project should have produced additional knowledge, called posterior knowledge.
Our primary aim in design science is to produce posterior knowledge in the form of a contribution to a scientific theory. In this chapter, we discuss the nature, structure, and function of scientific theories in, respectively, Sects. 9.1, 9.2, and 9.3.
We now turn to the empirical cycle, which is a rational way to answer scientific knowledge questions. It is structured as a checklist of issues to decide when a researcher designs a research setup and wants to reason about the data produced by this setup.
Figure 11.1 shows again the architecture of the empirical research setup. In this chapter, we discuss the design of each of the components of the research setup, namely, of the object of study (Sect. 11.1), sample (Sect. 11.2), treatment (Sect. 11.3), and measurement (Sect. 11.4).
Fig. 11.1
Empirical research setup, repeated from Fig. 10.2. In observational research, there is no treatment.
Descriptive inference summarizes the data into descriptions of phenomena (Fig. 12.1). This requires data preparation (Sect. 12.1). Any symbolic data must be interpreted (Sect. 12.2), and quantitative data can be summarized in descriptive statistics (Sect. 12.3). The descriptions produced this way are to be treated as facts, and so ideally there should not be any amplification in descriptive inference. But in practice there may be, and descriptive validity requires that any addition of information to the data be defensible beyond reasonable doubt (Sect. 12.4).
Fig. 12.1
Descriptive inference produces descriptions of phenomena from measurement data
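To make the notion of descriptive statistics concrete, here is a minimal Python sketch (my own illustration, not an example from the book) that summarizes a small, invented sample of measured task-completion times; the variable names and values are hypothetical.

```python
import statistics

# Hypothetical measurements: task completion times (minutes) from one study.
completion_times = [12.4, 15.1, 9.8, 14.2, 11.5, 13.9, 10.7, 16.3]

# Descriptive statistics summarize the data without generalizing beyond them.
summary = {
    "n": len(completion_times),
    "mean": statistics.mean(completion_times),
    "median": statistics.median(completion_times),
    "stdev": statistics.stdev(completion_times),   # sample standard deviation
    "min": min(completion_times),
    "max": max(completion_times),
}

for name, value in summary.items():
    print(f"{name}: {value:.2f}" if isinstance(value, float) else f"{name}: {value}")
```

Such summaries stay at the level of the data themselves; any step beyond them belongs to the inferences discussed in the following chapters.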
Statistical inference is the inference of properties of the distribution of variables of a population from a sample selected from that population (Fig. 13.1). To do statistical inference, your conceptual research framework should define the relevant statistical structures, namely, a population and one or more random variables (Chap. 8, Conceptual Frameworks). The probability distributions of the variables over the population are usually unknown. This chapter is required for Chap. 20 on statistical difference-making experiments, but not for the other chapters that follow.
Fig. 13.1
Statistical inference is the inference of properties of the probability distribution of variables
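As a concrete anchor (an illustration of the idea, not material from the book), the following Python sketch infers a property of a population distribution, its mean, from a small invented sample by computing a 95% confidence interval; the data and the approximate-normality assumption are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of a measured variable (e.g., defect counts per module).
sample = np.array([3, 5, 2, 6, 4, 7, 3, 5, 4, 6])

mean = sample.mean()
sem = stats.sem(sample)          # standard error of the mean
df = len(sample) - 1

# 95% confidence interval for the population mean, assuming approximate normality.
low, high = stats.t.interval(0.95, df, loc=mean, scale=sem)
print(f"sample mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```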
Abductive inference is inference to the best explanation(s). The traditional definition of abduction is that it traverses deduction in the backward direction: From p → q and q, we may tentatively conclude that p. We know that fire implies smoke, we see smoke, and we conclude that there is fire. There is no deductively certain support for this, and there may be other explanations of the occurrence of smoke. Perhaps a Humvee is laying a smoke screen? Douven (Abduction, in The Stanford Encyclopedia of Philosophy, ed. by E.N. Zalta, Spring 2011 Edition, 2011) gives a good introduction to abduction as a form of reasoning, and Schurz (Synthese 164:201–234, 2008) provides an interesting overview of historical uses of abduction in science, with examples.
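Schematically, and in my own rendering rather than the book's notation, the contrast between deduction and abduction over the fire-and-smoke example can be written as:

```latex
% Deduction (modus ponens): logically certain
\frac{\mathit{fire} \rightarrow \mathit{smoke} \qquad \mathit{fire}}{\mathit{smoke}}
\qquad\qquad
% Abduction: tentative inference to an explanation
\frac{\mathit{fire} \rightarrow \mathit{smoke} \qquad \mathit{smoke}}{\text{possibly } \mathit{fire}}
```

The abductive conclusion is defeasible: other explanations, such as the smoke screen, remain possible until they are ruled out.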
Analogic inference is generalization by similarity. In our schema of inferences (Fig. 15.1), analogic inference is done after abductive inference. What we generalize about by analogy is not a description of phenomena, nor a statistical model of a population, but an explanation. In Sect. 15.1, we show that it can be used in case-based and in sample-based research. In Sect. 15.2, we contrast feature-based similarity with architectural similarity and show that architectural similarity gives a better basis for generalization than feature-based similarity. Analogic generalization is done by induction over a series of positive and negative cases, called analytical induction (Sect. 15.3). We discuss the validity of analogic generalizations in Sect. 15.4 and generalize the concept of generalization to that of a theory of similitude in Sect. 15.5.
The road map of this book was shown in outline in the Preface and is shown here in more detail in Fig. 16.1 (Research Goals and Research Questions). As stated in the Introduction, design science research iterates over solving design problems and answering knowledge questions. Design problems that need novel treatments are dealt with rationally by the design cycle, which has been treated in Part II. Knowledge questions that require empirical research to answer are dealt with rationally by the empirical cycle, which has been treated in Part IV. Design and empirical research both require theoretical knowledge in the form of conceptual frameworks and theoretical generalizations, which enhance our capability to describe, explain, and predict phenomena, and to design artifacts that produce these phenomena. Theoretical frameworks have been treated in Part III.
An observational case study is a study of a real-world case without performing an intervention. Measurement may influence the measured phenomena, but as in all forms of research, the researcher tries to restrict this to a minimum.
A single-case mechanism experiment is a test of a mechanism in a single object of study with a known architecture. The research goal is to describe and explain the cause-effect behavior of the object of study. This can be used in implementation evaluation and problem investigation, where we do real-world research. It can also be used in validation research, where we test validation models. In this chapter, we restrict ourselves to validation research, and in the checklist and examples, the object of study is a validation model.
Technical action research (TAR) is the use of an experimental artifact to help a client and to learn about its effects in practice. The artifact is experimental, which means that it is still under development and has not yet been transferred to the original problem context. A TAR study is a way to validate the artifact in the field. It is the last stage in the process of scaling up from the conditions of the laboratory to the unprotected conditions of practice.
In a statistical difference-making experiment, two or more experimental treatments are compared on samples of population elements to see if they make a difference, on the average, for a measured variable. More than two treatments may be compared, and more than one outcome measure may be used. Different treatments may be applied to different objects of study in parallel or to the same object of study in sequence.
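For readers unfamiliar with this design, the following Python sketch (invented data, not an example from the book) compares an outcome measure under two treatments with Welch's two-sample t-test, the simplest instance of a statistical difference-making analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome measure (e.g., task time in minutes) under two treatments,
# each applied to a different random sample of population elements.
treatment_a = rng.normal(loc=30.0, scale=5.0, size=25)
treatment_b = rng.normal(loc=27.0, scale=5.0, size=25)

# Welch's t-test: do the treatments make a difference, on average?
t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b, equal_var=False)
print(f"mean A = {treatment_a.mean():.1f}, mean B = {treatment_b.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```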
... In this paper, we discuss how TTL can address the previously mentioned challenges. Moreover, we report on a validation study [12] that we conducted with domain experts to explore the feasibility of TTL. The idea of TTL is not bound to any particular type of artifact. ...
... The purpose of the validation study [12] is to better understand the practical challenges that may arise when implementing the idea of TTL. By involving practitioners, we also aim to validate the idea in a realistic setting and collect data that can help us further develop the approach. ...
... We want to emphasize that we do not conduct an evaluation study, as such a study would require the assessment of the proposed solution with stakeholders in a natural setting, to improve the solution [12]. This objective is outside the scope of this paper. ...
Traceability greatly supports knowledge-intensive tasks, e.g., coverage checks and impact analysis. Despite its clear benefits, the practical implementation of traceability poses significant challenges, leading to a reduced focus on the creation and maintenance of trace links. We propose a new approach, Taxonomic Trace Links (TTL), which rethinks traceability and its benefits. With TTL, trace links are created indirectly through a domain-specific taxonomy, a simplified version of a domain model. TTL has the potential to address key traceability challenges, such as the granularity of trace links, the lack of a common data structure among software development artifacts, and unclear responsibility for traceability. We explain how TTL addresses these challenges and perform an initial validation with practitioners. We identified six challenges associated with TTL implementation that need to be addressed. Finally, we propose a research roadmap to further develop and evaluate the technical solution of TTL. TTL appears to be particularly feasible in practice where a domain taxonomy is already established.
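The abstract above does not specify how the indirect links are computed; purely as a hypothetical illustration of the idea of linking artifacts through shared taxonomy terms (not the authors' implementation), one could imagine something like the following Python sketch, in which the artifact IDs and taxonomy terms are invented.

```python
from itertools import combinations

# Hypothetical artifacts tagged with terms from a domain taxonomy.
taxonomy_tags = {
    "REQ-12": {"braking", "sensor"},
    "DES-03": {"braking", "controller"},
    "TEST-7": {"sensor"},
}

# Derive candidate trace links between artifacts that share at least one taxonomy term.
links = [
    (a, b, taxonomy_tags[a] & taxonomy_tags[b])
    for a, b in combinations(taxonomy_tags, 2)
    if taxonomy_tags[a] & taxonomy_tags[b]
]

for a, b, shared in links:
    print(f"{a} <-> {b} via {sorted(shared)}")
```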
... This study adopts a design science research (DSR) methodology (Wieringa, 2014) to develop a BIM-based circularity assessment tool (B-CAT) that can deal with variations in information availabilities across project phases. DSR is particularly suitable for changing and improving a real-world problem in a systematic manner (Venable et al., 2017). ...
... Several DSR methodology variants have been developed (see Blessing and Chakrabarti, 2009; Hevner et al., 2010; Peffers et al., 2007; Wieringa, 2014). We selected the one by Wieringa (2014) because it emphasizes extending the problem context (within which a design artefact is to be developed) to include both social and knowledge dimensions. This broader perspective is essential for achieving a deeper understanding of the problem, particularly when developing artefacts aimed at addressing real-world issues. ...
... Scholars following similar paradigms have significantly influenced our framework [15,17] while contrasting approaches prioritizing logical consistency over adaptability have helped refine our position [18]. Although iterative empirical validation across different contexts (e.g., different business ecosystems) remains necessary to validate our framework [19], these philosophical foundations and other scholarly works strengthen our framework's foundations. ...
... BEAR methodologically integrates design science principles, particularly the work of Roel J. Wieringa [19], which demonstrates how guiding questions can iteratively refine our theoretical understanding of the domain and its problems. This iterative process reflects Peircean fallibilism [17], allowing the framework to update its ontological assertions when new data falsifies or refines prior versions. ...
... These questions should either improve theoretical understanding or solve pragmatic problems, fundamentally shaping the scope of our inquiry within the domain of interest. This shaped scope of inquiry, directed by the guiding question, acts as the primer for our framework, guiding which aspects of the domain are explored, modeled, and ultimately visualized, drawing inspiration from design science research [19]. ...
Traditional analytical frameworks often struggle to capture the complexity of business ecosystems, leading to ecosystem blindspots and missed opportunities. Following a semantic approach, we introduce the Business Ecosystem Analysis & Representation (BEAR) framework to uncover these blindspots. The approach leverages domain and seed ontologies, together with empirical data, to construct insightful knowledge graphs and context-driven visualizations, enabling question-driven analysis. Furthermore, we applied BEAR to the wind energy ecosystem to demonstrate its value, using data from 35 companies extracted from WindEnergy Hamburg 2024. Guided by questions co-developed with industry experts from a leading manufacturer, our analysis revealed BEAR's ability to map organizational positioning, interdependencies, and previously hidden wind energy ecosystem supply chain dynamics. These preliminary results demonstrate BEAR's effectiveness in unlocking deeper ecosystem understanding beyond syntactic methods, offering a scalable, semantic toolset that promises to advance strategic planning and ecosystem knowledge representation in business ecosystem analysis.
... To evaluate the contribution, we have conducted two representative single-case mechanism experiments to investigate the proposed approach's validity. We follow Wieringa's definition of a single-case mechanism experiment that defines it as a study in which the researcher intervenes and observes the impact of a new artifact or technology [77]. In the two cases, the approach allows the modeling of 16 different elasticity realizations. ...
... The proposed view type for elasticity modeling and simulation stems from following the design science research method [77]. Wieringa defines design science as the investigation and improvement of artefacts in a predefined context [77]. ...
The cloud computing model enables the on-demand provisioning of computing resources, reducing manual management, increasing efficiency, and improving environmental impact. Software architects now play a strategic role in designing and deploying elasticity policies for automated resource management. However, creating policies that meet performance and cost objectives is complex. Existing approaches, often relying on formal models like Queueing Theory, require advanced skills and lack specific methods for representing elasticity within architectural models. This paper introduces an architectural view type for modeling and simulating elasticity, supported by the Scaling Policy Definition (SPD) modeling language, a visual notation, and precise simulation semantics. The view type is integrated into the Palladio ecosystem, providing both conceptual and tool-based support. We evaluate the approach through two single-case experiments and a user study. In the first experiment, simulations of elasticity policies demonstrate sufficient accuracy when compared to load tests, showing the utility of simulations for evaluating elasticity. The second experiment confirms feasibility for larger applications, though with increased simulation times. The user study shows that participants completed 90% of tasks, rated the usability at 71%, and achieved an average score of 76% in nearly half the allocated time. However, the empirical evidence suggests that modeling with this architectural view requires more time than modeling control flow, resource environments, or usage profiles, despite its benefits for elasticity policy design and evaluation.
... The Design Science Research Methodology (DSRM) [64], [65] guides the development of CollabProg, which helps define the research problem and supports the creation, evaluation, and evolution of the tool. DSRM bridges the gap between knowledge and practice [65] and is widely adopted by researchers for developing educational artifacts [66]. ...
... In DSRM, a practical problem drives the investigation, generating new research questions and challenges that expand existing knowledge [61]. According to [65], the initial phase of the research focuses on understanding the problem without proposing immediate solutions. ...
Background: Teaching programming is a challenging task, as it requires instructors to guide students in developing complex skills such as real-world abstraction, problem-solving, and logical reasoning. However, the traditional teaching approach is often ineffective in achieving these objectives. Evidence suggests that Active Learning Methodologies (ALMs) can provide a more conducive environment for skill and competency development. Nonetheless, instructors’ adoption rate of ALMs remains relatively low due to various barriers and factors, particularly in programming education. Goal: To assist instructors in facing this challenge, we present in this article CollabProg, an open collaborative repository designed to support instructors in identifying and selecting the appropriate ALMs for their teaching context and specific classroom needs. Additionally, CollabProg provides a set of practical guidelines, offering a step-by-step guide to assist instructors in adopting ALMs. Method: We adopted the Design Science Research Methodology (DSRM) to systematically address the research problem and guide the development, evaluation, and evolution of CollabProg. Furthermore, we present two case studies to evaluate the acceptance and feasibility of using CollabProg from the perspective of instructors at different educational institutions in Brazil. Findings: The evidence demonstrates that CollabProg effectively supports instructors in adopting active learning methodologies while identifying limitations and opportunities for improvement. We also found that CollabProg helped instructors identify and choose suitable ALMs for their teaching context to meet their specific classroom needs. The guidelines provided by the repository were useful and highly practical for lesson planning in adopting ALMs. Implications: The use of CollabProg underscores the need for effective strategies to support instructors in teaching programming and motivating students to learn. This is particularly crucial in collaborative learning contexts, where social interaction is key. CollabProg’s versatility in supporting such contexts is a significant factor for successful instruction.
... To this end, we implement a basic prototype of a case-based reasoning (CBR) system within the CbyA model. The design science methodology (Wieringa 2014) was followed to showcase the proposed approach, specifically focusing on the case of pipeline safety technologies. These technologies protect energy pipelines against damage from third-party interventions, such as heavy equipment. ...
... This methodology uses collaborative design processes to develop effective information systems that solve organizational problems (Hevner et al. 2004). Wieringa (2014) conceptualizes design science as an iterative process comprising three phases: understanding a universal problem in a locally observable case setting; designing a solution as a treatment to that problem; and validating whether the treatment solves the problem sufficiently. First, our problem investigation phase aimed to understand which pipeline technologies the industry considers, what expert knowledge decision-makers lack to assess the effectiveness of those technologies, and which factors they use to assess the feasibility of implementing the alternative within an organization. ...
... Third, the treatment validation aimed to evaluate whether the prototype adequately supported the pipeline safety technology selection process by real stakeholders. This design science process is visualized using the model of Wieringa (2014) in Fig. 1 and explained below. ...
Choosing by advantages (CbyA) is increasingly used in multicriteria decision contexts to anchor group decisions to facts and ensure sound decision processes. However, limitations may arise in highly uncertain decision contexts that require extensive expert knowledge. For example, in the adoption of innovative technologies to ensure safety, decision-makers are challenged by the complexity and variety of information regarding novel technologies that they can mobilize to avoid major safety hazards. To overcome this problem, we propose extending the CbyA decision-making model with a case-based reasoning expert system that captures encoded technical expert knowledge. Using design science, we empirically investigated the use of this extended model in a case where safety engineers jointly review and select an innovative pipeline safety technology. We used interviews and reviewed the technological innovation literature to define the decision problem and relevant decision factors for this case. In subsequent design iterations, we created a prototype system and validated it through three rounds of user workshops. The designed prototype guides the selection of effective technologies and anchors this selection to the implementation advantages of the technologies. By prescribing a sequence of decision steps, this study further complements the innovation literature with a procedural model that guides innovation adoption decisions in practice. The proposed model is the first step in automating CbyA decision-making, thereby indicating how the integration of expert systems facilitates complex group decisions. This study encourages broader use of CbyA in highly uncertain contexts by demonstrating the applicability of this new decision paradigm.
... Hence, we performed three design cycles formed by three activities: problem investigation (what phenomena must be improved?), treatment design (how to design an artifact that could treat the problem?), and treatment validation (would these designs treat the problem?), as proposed by Wieringa [18]. ...
... Research Method: This research aimed at answering the following research question: what are the main challenges for developing an AI system focused on enhancing the understanding of legal decisions? To address this question, we employed Design Science Research (DSR) [18], which involved the gradual ...
... In line with design science research, a system design activity was conducted [80,81] (see Table 4). The components that make up the system and software architecture were meticulously crafted, using a design-pattern-based method. ...
... Criteria for evaluation of the design work by the design science research methodology: research activities [80]. ...
IT (Information Technology) support plays a major role in CPSs (cyber-physical systems). More and more IT solutions and CIS (complex information system) modules are being developed to help engineering systems reach a higher level of efficiency. The different specificities of different technological environments require very different IT approaches. Increasing the efficiency of different manufacturing processes requires an appropriate architecture. The Zachman framework guidelines were applied to design a suitable framework architecture for the welding process. A literature search was conducted to explore the conditions for matching components to a complex information system in which advanced data management and data protection are important. To manage the standards effectively, a dedicated module needs to be created that can be integrated into the MES-ERP (Manufacturing Execution System-Enterprise Resource Planning) architecture. The result of the study is the creation of business UML (Unified Modeling Language) and BPMN (Business Process Model and Notation) diagrams and a roadmap to start a concrete application development. The paper concludes with an example to illustrate ideas for the way forward.
... We designed and refined the maintainability challenge design, its implementation, and our KC model using a mixed-method approach framed by design science research [23]. Fig. 1 provides a visual abstract [24] of our method, which is detailed in Section III. ...
... We created the conceptual model and ITS implementation through design science research [23]. Fig. 1 summarizes our methodology with a visual abstract [24]. ...
Software engineers are tasked with writing functionally correct code of high quality. Maintainability is a crucial code quality attribute that determines the ease of analyzing, modifying, reusing, and testing a software component. This quality attribute significantly affects the software's lifetime cost, contributing to developer productivity and other quality attributes. Consequently, academia and industry emphasize the need to train software engineers to build maintainable software code. Unfortunately, code maintainability is an ill-defined domain and is challenging to teach and learn. This problem is aggravated by a rising number of software engineering students and a lack of capable instructors. Existing instructors rely on scalable one-size-fits-all teaching methods that are ineffective. Advances in e-learning technologies can alleviate these issues. Our primary contribution is the design of a novel assessment item type, the maintainability challenge. It integrates into the standard intelligent tutoring system (ITS) architecture to develop skills for analyzing and refactoring high-level code maintainability issues. Our secondary contributions include the code maintainability knowledge component model and an implementation of an ITS that supports the maintainability challenge for the C# programming language. We designed, developed, and evaluated the ITS over two years of working with undergraduate students using a mixed-method approach anchored in design science. The empirical evaluations culminated with a field study with 59 undergraduate students. We report on the evaluation results that showcase the utility of our contributions. Our contributions support software engineering instructors in developing the code maintainability skills of their students at scale.
... Developing an enterprise architecture is comparable to developing an information systems artefact. Therefore, we use the engineering cycle for information systems and software engineering as a basis (Wieringa, 2014). The engineering cycle is a specialization of the more generic Design Science Research methodology. ...
... It focuses on creating an artefact which should be validated (i.e., checked on valid assumptions in a theoretical setting) and evaluated (i.e., implemented and analysed in practice). The engineering cycle consists of the phases: problem investigation, treatment design, treatment validation, treatment implementation and implementation evaluation (Wieringa, 2014). For this study, we limit ourselves to developing the CLCT from problem investigation until and including treatment validation (see Thompson, 2017, and Appelbaum, 1997, for the technology typology). ...
Construction companies have issues meeting building demands, and the promises of supply chain management are not always fully realized in practice. This paper investigates an IT artefact called the Construction Logistics Control Tower (CLCT). A CLCT is a control tower artefact specifically focusing on optimizing construction logistics activities across the supply chain. We distinguish four potential construction logistics application fields and, therefore, describe four potential variants of the CLCT. We design and narrow down these alternatives by applying a form of co-creation in which stakeholders design and set requirements for the artefact of interest. Our goal is to develop a reference architecture for the strategic and operational form in Enterprise Architecture. We focus on a transportation-based CLCT, which has a strategic component, i.e., it predicts and manages long-term logistics activities regarding construction, and an operational one, i.e., it operationalizes and executes daily transportation processes to support construction activities. Our work provides a core enterprise architecture diagram describing this CLCT variant's main functionalities. Next, we find that three key technologies need to be combined to realize such a system: Building Information Modelling, Geographic Information System and Transportation Management System. We discuss potential hurdles in the integration process and reflect on potential solutions. In the end, we envision that the construction of such a CLCT takes both a bottom-up and top-down approach but at least should be supported by a large consortium of stakeholders, constructing and supporting the system from their interests.
... This work adopted the Design Science Research Methodology (DSRM), a structured approach grounded in the epistemological paradigm of Design Science Research (DSR). The main objective of DSR is the design and evaluation of innovative artifacts capable of solving practical problems while also contributing to the advancement of scientific knowledge [Hevner et al. 2004; Hevner and Chatterjee 2010; Wieringa 2014]. To this end, the methodology integrates theoretical rigor and practical relevance, reducing the gap between theory and application [Dresch et al. 2020]. ...
The expansion of digital technologies, such as the internet and social networks, has broadened the opportunities for citizen participation, but their integration into participatory processes still faces challenges. To overcome these barriers, this research, grounded in Design Science Research, proposes a model to support the formulation of engagement strategies for urban development plans in small and medium-sized cities. Developed and refined over three iterative cycles, the model was evaluated through qualitative analysis and application to real cases. The results demonstrate its replicability and adaptability to different urban contexts, offering input for improving public policies.
... The research method adopted in this work is qualitative, exploratory, and based on Design Science Research (DSR), which focuses on solving practical problems in specific contexts through artifacts while generating new scientific knowledge [Wieringa 2014]. This work followed the iterative DSR cycle, which included surveying the theoretical background and publishing the results (Rigor Cycle), identifying gaps and defining the objectives of the contribution (Relevance Cycle), and developing the artifact and evaluating it through a proof of concept (Design Cycle). ...
This paper presents a set of guidelines for the systematic extraction and specification of fault-tolerance requirements in Systems-of-Systems (SoS), based on Business Process-of-Processes (PoP) models in BPMN. The extracted requirements aim to achieve reliability during the interoperability of the SoS that automates the corresponding PoP, as well as alignment between the technical and business levels. The guidelines were evaluated through a case study. The results indicate that the guidelines support the systematic extraction of fault-tolerance requirements from relevant business-level information.
... The methodology adopted in this work is qualitative, exploratory, and based on Design Science Research (DSR), which focuses on solving practical problems in specific contexts through artifacts while generating new scientific knowledge [Wieringa 2014]. The elaboration of the abstract scenarios followed examples of exception-handling elements from the BPMN specification, considering the context of a directed PoP (i.e., constituent processes automated by the systems that make up the SoS are coordinated by a central authority, namely a dominant process responsible for achieving the goals of the PoP and, consequently, of the SoS). In addition, the abstract scenarios were inspired by the knowledge of the authors' research group in modeling concrete scenarios of real PoPs in several domains, such as education (the UFMS institutional repository PoP), agribusiness (the Embrapa Beef Cattle productivity and animal welfare monitoring PoP), healthcare (a public health PoP [Cagnin and Nakagawa 2021]), and emergency response (rescue) [Andrews et al. 2013]. ...
Failures can occur during the interoperability of Systems-of-Systems (SoS), affecting their operation and reliability. This work defines a scenario-based approach using the BPMN (Business Process Model and Notation) notation to systematically and automatically extract SoS fault-tolerance requirements for the interoperability among constituent systems, from useful information in the Business Process-of-Processes (PoP) of alliances of organizations. This makes it possible to achieve alignment between the technical and business levels, which can favor the competitiveness and profitability of organizational alliances.
... The development followed a Design Science Research approach by Wieringa (2014), which encompasses three phases: problem investigation, treatment design, and treatment validation. The problem investigation phase began with interviews of healthcare practitioners across multiple domains, including hospital materials logistics, pharmacy operations, nursing staff, and medical professionals, to understand their existing processes and experiences with supply shortages. ...
This paper presents the design and implementation of a unidirectional data interface connecting epidemiological forecasting models with hospital resource management tools, developed within the PROGNOSIS project to enhance hospital preparedness for pandemics and epidemics. We demonstrate the practical application of this interface through HosNetSim, a novel agent-based simulation tool specifically designed to support hospital and hospital group administrators, as well as public health authorities in hospital supply chain decision-making. The interface provides standardized, geographically specific, and daily updated forecasts of hospital burden across multiple respiratory diseases and care levels, enabling what-if scenario analysis. HosNetSim utilizes these forecasts to evaluate and visualize critical supply chain decisions, including inventory management policies, inventory pooling, and transshipments. By simulating realistic operational scenarios, HosNetSim illustrates the trade-offs between service levels and associated costs from the perspectives of individual hospitals, hospital groups, and regional networks.
... RAVEN leverages statically defined personas to dynamically generate contextualized requirements at runtime that guide decisions associated with safety, ethics, and regulatory compliance. To design and validate RAVEN, we adopted Wieringa's Design Science approach [5], which includes identifying the problem context, designing an intervention (our advocate framework), validating it against defined requirements, and evaluating its impact through empirical or simulated deployment. We conducted a proof-of-concept study involving au- ...
Fig. 2: Comparison of traditional static personas used for design-time requirements specification with dynamic advocate personas supporting runtime decision-making.
Complex systems, such as small Uncrewed Aerial Systems (sUAS) swarms dispatched for emergency response, often require dynamic reconfiguration at runtime under the supervision of human operators. This introduces human-on-the-loop requirements, where evolving needs shape ongoing system functionality and behaviors. While traditional personas support upfront, static requirements elicitation, we propose a persona-based advocate framework for runtime requirements engineering to provide ethically informed, safety-driven, and regulatory-aware decision support. Our approach extends standard personas into event-driven personas. When triggered by events such as adverse environmental conditions, evolving mission state, or operational constraints, the framework updates the sUAS operator's view of the personas, ensuring relevance to current conditions. We create three key advocate personas, namely Safety Controller, Ethical Governor, and Regulatory Auditor, to manage trade-offs among risk, ethical considerations, and regulatory compliance. We perform a proof-of-concept validation in an emergency response scenario using sUAS, showing how our advocate personas provide context-aware guidance grounded in safety, regulatory, and ethical constraints. By evolving static, design-time personas into adaptive, event-driven advocates, the framework surfaces mission-critical runtime requirements in response to changing conditions. These requirements shape operator decisions in real time, aligning actions with the operational demands of the moment.
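To make the idea of event-driven advocate personas more tangible, here is a purely hypothetical Python sketch of a possible underlying data structure; the class, field names, and trigger events are my own illustration and are not taken from RAVEN.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AdvocatePersona:
    name: str                      # e.g., "Safety Controller"
    concern: str                   # e.g., "risk", "ethics", "regulatory compliance"
    triggers: list[str]            # events that activate this persona
    advise: Callable[[dict], str]  # maps current mission state to guidance text

def on_event(event: str, state: dict, personas: list[AdvocatePersona]) -> list[str]:
    """Collect guidance from every persona whose triggers match the event."""
    return [f"[{p.name}] {p.advise(state)}" for p in personas if event in p.triggers]

safety = AdvocatePersona(
    name="Safety Controller",
    concern="risk",
    triggers=["high_wind", "low_battery"],
    advise=lambda s: f"Reduce altitude; wind speed {s['wind_kts']} kts exceeds threshold.",
)

print(on_event("high_wind", {"wind_kts": 28}, [safety]))
```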
... Our methodology for answering two research questions outlined in Section 1 is grounded in the empirical cycle of design science [13] and consists of the following phases: ...
Teamwork is crucial in software engineering. However, recent literature concludes that software engineering graduates have underdeveloped teamwork skills. Instructors wishing to develop teamwork skills are faced with many teamwork models and a lack of empirical studies that examine their utility in higher education. We conducted an exploratory study to examine teamwork components of attitudinal, behavioral, and cognitive psychological facets (collectively called ABCs) in eight undergraduate software engineering teams composed of 14 to 17 members. We answered two research questions: Which teamwork components were the least developed at the beginning of the project, and Which teamwork components remained underdeveloped by the end of the project. For each team, we conducted two focus group discussions, and analyzed four sprint retrospective reports during the semester. We synthesize the gathered data to define each team by the presence of teamwork components, development of teamwork components, experienced challenges, average course grades, and project results. Teamwork components that were initially underdeveloped were Mutual performance monitoring, Shared cognitions, Leadership, Communication, Psychological safety and Trust. Two teamwork components that remained underdeveloped were Shared cognitions and Mutual performance monitoring. The answer to our first question highlights teamwork components that instructors should pay attention to in their teamwork-oriented courses. For the second question, we explain the mechanisms that we applied to develop teamwork components and highlight which components were the most challenging to develop, as well as in which period instructors should provide the most support to students. Engineering education researchers might benefit from our methodological design, measurement instruments, and raw data to conduct studies in their contexts.
... The development of the innovative artifact follows the DSR methodological framework established by Wieringa [10], which consists of four stages: problem investigation, solution design, design validation, and design implementation. At the time of writing this document, the first stage has been reached. ...
The growing volume of data has led to the extensive use of Big Data technologies, such as Data Lakes. However, the protection of personal and sensitive data has become a critical challenge, driven by the need to comply with regulations such as Chilean Law No. 19.628.
This paper investigates encryption techniques to safeguard data in Big Data and Data Lake environments, focusing on the applicable encryption solutions according to the Chilean legal framework. Using a systematic mapping of the literature and the Design Science Research methodology, we identify the main encryption techniques, such as homomorphic encryption and AES, and propose an artifact that employs Format Preserving Encryption.
This artifact allows for the protection of data without compromising its readability or original format, facilitating its secure use in distributed cloud computing platforms.
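To illustrate what "format preserving" means in this context, here is a deliberately simplified Python toy (a keyed digit substitution, emphatically not a secure FPE scheme such as NIST FF1/FF3-1, and not the artifact proposed in the paper): the output keeps the length and digit-only alphabet of the input.

```python
import hashlib
import hmac
import random

def toy_fpe_digits(value: str, key: bytes) -> str:
    """Toy illustration of format preservation: digits map to digits, length is kept.
    NOT cryptographically secure; real deployments use standardized FPE (e.g., FF1)."""
    seed = hmac.new(key, b"digit-permutation", hashlib.sha256).digest()
    digits = list("0123456789")
    random.Random(seed).shuffle(digits)          # keyed permutation of the digit alphabet
    table = str.maketrans("0123456789", "".join(digits))
    return value.translate(table)

# A fabricated 8-digit identifier keeps its length and digit-only format.
print(toy_fpe_digits("12345678", b"demo-key"))
```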
... Our work contributes to the definition of TwinArch, the Digital Twin Reference Architecture. To design TwinArch, we adopted the design science methodology, a structured process comprising three primary phases: context awareness, solution synthesis, and solution validation [39,40]. As shown in Figure 1, which illustrates our methodology steps, we adopted an iterative process organized into three cycles, each encompassing the design science phases to understand the problem context, devise a solution, and validate it. ...
Background. Digital Twins (DTs) are dynamic virtual representations of physical systems, enabled by seamless, bidirectional communication between the physical and digital realms. Among the challenges impeding the widespread adoption of DTs is the absence of a universally accepted definition and a standardized DT Reference Architecture (RA). Existing state-of-the-art architectures remain largely domain-specific, primarily emphasizing aspects like modeling and simulation. Furthermore, they often combine structural and dynamic elements into unified, all-in-one diagrams, which adds to the ambiguity and confusion surrounding the concept of Digital Twins. Objective. To address these challenges, this work aims to contribute a domain-independent, multi-view Digital Twin Reference Architecture that can help practitioners in architecting and engineering their DTs. Method. We adopted the design science methodology, structured into three cycles: (i) an initial investigation conducting a Systematic Literature Review to identify key architectural elements, (ii) preliminary design refined via feedback from practitioners, and (iii) final artifact development, integrating knowledge from widely adopted DT development platforms and validated through an expert survey of 20 participants. Results. The proposed Digital Twin Reference Architecture is named TwinArch. It is documented using the Views and Beyond methodology by the Software Engineering Institute. TwinArch website and replication package: https://alessandrasomma28.github.io/twinarch/ Conclusion. TwinArch offers practitioners practical artifacts that can be utilized for designing and developing new DT systems across various domains. It enables customization and tailoring to specific use cases while also supporting the documentation of existing DT systems.
... The development of this artifact used repeated design and empirical cycles as part of the technical action research (TAR) methodology. TAR is an interactive and iterative process in which the outputs of the researcher's engineering cycle are used to help a client in the client's own engineering cycle (Wieringa, 2014). While the implementation of the artifact as part of a TAR client engineering cycle is beyond this scope, iterative development yielded cycles of generating microdata streams protected with ε-differential privacy (ε-DP) with increasing concurrency and realism in a synthesis model and cycles of developing an adversarial artifact that attempted to reassociate item-level events. ...
As continued data breaches allow state-level threat actors to assemble expansive dossiers on populations to carry out information warfare objectives, protecting personal privacy in published data sets and internal data stores is increasingly essential to civilian and societal safety. At the same time, the explosion of high-resolution, high-accuracy microdata streams, such as timestamped geolocation coordinates collected simultaneously by hardware platforms, operating systems, and a multitude of on-device applications and sites establishes a layered, highly-correlated pattern of life that can uniquely identify individuals and allow for targeted information warfare actions. Differential privacy (DP) is an advanced but highly effective technique in protecting sensitive data streams. This robust approach preserves privacy in published data sets through additive statistical noise sampled from Gaussian or Laplacian probability distributions. Data sets that contain highly correlated event-based data require specialized techniques to preserve mathematical DP guarantees in microdata streams beyond "user-level" applications available in most off-the-shelf approaches. Because practitioners need more tools to assess the robustness of differentially private outputs in microdata streams, application errors may result in future reidentification and privacy loss for data subjects. This research yields an artifact that can reassociate events in microdata streams when insufficient naive approaches are used. It also serves as a tool for implementers to validate their approaches in highly correlated event data.
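As background for the additive-noise approach mentioned above, the following Python sketch shows the basic Laplace mechanism for a single count query; it illustrates the general DP technique only and is not the artifact or the correlated-stream setting studied in the paper.

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count with epsilon-DP by adding Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Smaller epsilon means stronger privacy and noisier answers.
rng = np.random.default_rng(7)
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {laplace_count(1000, eps, rng=rng):.1f}")
```

As the abstract notes, highly correlated event-level microdata streams require more specialized techniques than this user-level illustration.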
... Since this field is still emerging, our goal is to establish a general framework that provides a structured foundation while remaining adaptable to future developments. To answer the research goal, we applied a design science approach based on the work of Wieringa [22]. We must first (1) select an adequate sample of DAO constitutions. ...
Decentralised Autonomous Organisations (DAOs) are organisations whose operations are written down in smart contracts and blockchain technology. DAOs can use a constitution to codify the fundamental principles on which they operate. Despite their increasing prevalence, the contents of DAO constitutions and their role in governing these organisations remain relatively unexplored. This study aims to fill this gap by investigating the contents of existing DAO constitutions. To investigate the content of constitutions, we collected a sample of 27 constitutional documents from active DAOs. These documents were systematically coded using a grounded theory approach, resulting in a framework of unified concepts that populate DAO constitutions. Our findings reveal several thematic areas in DAO constitutions, including technology, governance, finance, and community. After its creation, expert interviews validated the framework, confirming its relevance and suggesting refinements. This research contributes to a deeper understanding of the governance mechanisms within DAOs and provides a foundational framework for future studies in this area.
... This section discusses threats to the validity of this research based on the guidelines created by Wieringa et al. [27]. ...
Developing software with the source code open to the public is prevalent; however, similar to its closed counterpart, open-source software has quality problems, which cause functional failures, such as program breakdowns, and non-functional failures, such as long response times. Previous researchers have revealed when, where, how, and what developers contribute to projects and how these aspects impact software quality. However, there has been little work on how different categories of commits impact software quality. To improve open-source software, we conducted this preliminary study to categorize commits, train prediction models to automate the classification, and investigate how software quality is impacted by commits of different purposes. By identifying these impacts, we will establish a new set of guidelines for committing changes that will improve the quality.
... In this initial proposal the objectives were the same; however, a methodology still needed to be identified that would allow an environment to be developed, through studies, to meet the expectations of the target audience. Accordingly, this version of the research relies on the Design Science Research methodology presented by Wieringa [25]. Through its cycles, it will help answer the main research question: "How can Requirements Engineering instructors be supported in finding and selecting games to use as a pedagogical resource?". ...
Requirements instructors are responsible for training the future professionals who will handle the specification of software to be developed by industry. The literature on requirements education usually presents studies from the students' perspective but rarely explores the instructor's difficulties, such as locating resources to use in the classroom. This research aims to understand instructors' difficulties in locating educational games and to build an environment that supports requirements instructors in finding games that can be used to teach the subject. The research uses the Design Science methodology, which allows, over research cycles, the development of an artifact that in this study will be the proposed environment. These cycles will comprise the following studies: 1) a survey to identify requirements instructors, 2) a literature review to initially populate the proposed environment, 3) a field study to capture the environment's requirements, 4) prototype proposals and their evaluations, and 5) multiple case studies to test the use of the environment in practice. The expected contribution at the end of this study is an environment that helps Requirements Engineering instructors identify games that can be used as pedagogical resources in their courses.
... Analogously to the challenges, these are presented in the following. Since this work is situated in a research context, the phases described in [80] for implementing and evaluating the solution approach at a higher technology readiness level in broad series application are not carried out. ...
Production sites in high-wage regions such as Europe or North America are coming under increasing pressure from various factors in the global economy. One way of counteracting this pressure and continuing to operate such production sites economically is through increasing automation and digitalization. The Digital Twin is a key component on the way to digital and highly automated production systems. A consistently available digital representation of physical assets can save costs and time in the design, development, commissioning, and operation of production facilities. This also applies to component manufacturers, who often develop,
manufacture, and sell highly specialized components and systems for production facilities. For component manufacturers, the behavior models in process-relevant modeling depth from the Digital Twin are of central importance.
However, the creation of behavior models is very time-consuming and often requires the expertise of simulation experts with many years of training. This represents a major obstacle to the consistent use of behavior models and Digital Twins by component manufacturers. The aim of this work is to develop a concept for the consistent automated creation of behavior models for components and systems in process-relevant modeling depth.
The developed concept enables the automated creation of behavior models for components and systems based on suitable input information. The behavior models can either be created in detail or at a lower modeling depth in a fully automated process. For this purpose, a behavior model library with behavior models of the relevant components in great modeling depth is used. To simplify the creation of these, the concept has been expanded to include the option of assisted creation of the behavior model library from basic building blocks. The structural information can
come from various sources, including fluid circuit diagrams in either paper or digital form.
The concept is initially implemented in the vacuum handling technology domain. For a complete realization and evaluation of the concept, basic building blocks of all relevant components of the domain are required. As part of this work, the previously unavailable behavior models of the basic components vacuum generator and vacuum suction cup are developed and then evaluated against measurement data. A very good agreement was found between the simulated and measured behavior of the developed basic building blocks.
The presented artifacts of the concept are implemented in the form of an assistance system. This enables the low-effort creation of the behavior model library and the automated creation of behavior models of components and systems in process-relevant modeling depth. It also enables the automated execution of the behavior models. An evaluation was conducted using two case studies from vacuum handling technology. These case studies demonstrate the use of behavior models in virtual product design and optimization, as well as virtual commissioning. The objective
is to design better and more efficient systems in a faster and more cost-effective way. The results indicate that, depending on the modeling depth, the automatically generated behavior models match the measured curves with high accuracy. To quantify the time savings, an expert benchmark was conducted based on the systems of the two case studies. In comparison to the times required for the creation, parameterization, and abstraction processes as outlined in the expert benchmark, the assistance system is capable of delivering time savings of up to a factor of 54.
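To make the idea of behavior models composed from basic building blocks more concrete, the following minimal sketch (not taken from the work summarized above) simulates a toy vacuum gripping system built from two hypothetical building blocks, a vacuum generator and a suction cup. All class names, parameters, and values are assumptions chosen only so the example runs.

# Minimal illustrative sketch: composing a toy behavior model of a vacuum
# gripping system from two hypothetical basic building blocks and simulating
# it with explicit Euler integration. Parameters are assumed, not measured.

from dataclasses import dataclass

P_ATM = 101_325.0  # ambient pressure [Pa]

@dataclass
class VacuumGenerator:
    """Toy ejector: evacuates toward an ultimate pressure with a fixed time constant."""
    p_ultimate: float = 15_000.0   # lowest reachable absolute pressure [Pa] (assumption)
    tau: float = 0.05              # evacuation time constant [s] (assumption)

    def dp_dt(self, p: float) -> float:
        # First-order approach to the ultimate pressure
        return -(p - self.p_ultimate) / self.tau

@dataclass
class SuctionCup:
    """Toy suction cup: holding force from pressure difference over the effective area."""
    area: float = 5e-4             # effective sealing area [m^2] (assumption)

    def holding_force(self, p: float) -> float:
        return (P_ATM - p) * self.area  # [N]

def simulate(generator: VacuumGenerator, cup: SuctionCup,
             t_end: float = 0.3, dt: float = 1e-3):
    """Integrate the cup pressure over time and record the resulting holding force."""
    p, t, trace = P_ATM, 0.0, []
    while t <= t_end:
        trace.append((t, p, cup.holding_force(p)))
        p += generator.dp_dt(p) * dt   # explicit Euler step
        t += dt
    return trace

if __name__ == "__main__":
    t, p, f = simulate(VacuumGenerator(), SuctionCup())[-1]
    print(f"t={t:.3f}s  p={p:.0f} Pa  holding force={f:.1f} N")

In a real behavior model library, each building block would of course carry validated, measurement-based parameters rather than the assumed first-order dynamics used here.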
In my commentary, I argue for positioning media didactics as a design-oriented technology science that connects both to didactic theory development and to the international education-technology discourse, without losing sight of the fundamental goals of educational science and pedagogy. To this end, I address questions of (a) the thematic core, (b) the type of science, (c) the reference disciplines, (d) the particularities of the subject matter, (e) the research goals, and (f) the methodological requirements and consequences for research.
Background: Poor communication of requirements between clients and suppliers contributes to project overruns in both software and infrastructure projects. Existing literature offers limited insight into the communication challenges at this interface. Aim: Our research aims to explore the processes, and the associated challenges, of requirements activities that involve client-supplier interaction and communication. Method: We study requirements validation, communication, and digital asset verification processes through two case studies in the road and railway sectors, involving interviews with ten experts across three companies. Results: We identify 13 challenges, along with their causes and consequences, and suggest solution areas from existing literature. Conclusion: Interestingly, the challenges in infrastructure projects mirror those found in software engineering, highlighting a need for further research to validate potential solutions.
Agrivoltaics, which integrate solar photovoltaics with diverse agricultural activities on shared land, can play a pivotal role in advancing global decarbonization and agricultural innovation. Several European Union (EU) countries, states in the United States (US), and Asia Pacific nations are increasingly targeting the development of agrivoltaics. This includes Italy’s €1.7 billion investment to deploy 1.04 gigawatts (GW) of agrivoltaics and the US allocation of USD 75 million to agrivoltaics market incentives. In Australia, large-scale agrivoltaics are currently hindered by policy inertia, legal gaps, and absent market incentives to address emerging tensions between agricultural land use and renewable energy developments. In New South Wales (NSW), Australia, the NSW Electricity Infrastructure Roadmap aims to develop 12 GW of new renewable energy capacity and 2 GW of long-duration storage by establishing Renewable Energy Zones, primarily situated within rural areas. In response, the potential for agricultural land alienation and fragmentation has prompted several planning and community engagement inquiries between 2022 and 2024. When regulated effectively, agrivoltaics presents a solution to clarify, protect, and enable agricultural landholder rights, stimulate planning policy innovation, and activate new energy market mechanisms. As a nascent socio-technical practice in NSW, agrivoltaics projects are developing iteratively due to the absence of agrivoltaic-specific planning policy, regulation, market incentives, and legal frameworks. This structural failure creates barriers to agrivoltaics scaling and may undermine social acceptance. This study conducts the first scaling readiness analysis of agrivoltaics in NSW, an Australian state where agrivoltaic grazing practices are emerging, examining the policy, regulation, market settings, and legal agreements creating obstacles and uncertainties. It presents key regulatory and legal reform recommendations to support scaling a commercially viable agrivoltaics sector that promotes good grazing practices and enhances social outcomes.
JECT.AI is a research-based digital product designed to support news editors and journalists to be more creative when discovering new angles and voices to incorporate into the content of news stories under development. It implements different artificial intelligence (AI) algorithms to deliver interactive and intelligent support to individual journalists and editors in response to topics of interest that they enter or highlight. Whilst studies demonstrated the potential effectiveness of the product in newsrooms, discovering how to deliver its AI in newsrooms for longer-term use has proved to be more challenging. This chapter summarises three sequential pivots in the design and positioning of JECT.AI that were undertaken to overcome barriers to the regular use of the product’s AI technologies in newsrooms. Although each version of JECT.AI was offered to journalists and newsrooms, different barriers to its uptake were encountered. The chapter then describes the most recent version of the product, which seeks to deliver creative insights about news stories relevant to journalists in different ways. Each experience and subsequent pivot have the potential to inform the uptake of other co-creative AI technologies, and especially generative AI technologies accessed via chatbots, in newsrooms and other workplaces that require regular creative thinking.
This approach offers a structured method for organizations to develop and articulate their ethical frameworks, particularly in areas where legal guidance is limited or nonexistent. Problem: This study investigates establishing core values in a legal vacuum, where research, design, or implementation of an invention or innovation is feasible but not yet regulated. We leverage Large Language Models (LLMs) to analyze codes of conduct from 1000 organizations (profit and not-for-profit) to identify core values, using accuracy, bias, completeness, consistency, and relevance as metrics to validate the performance of the LLMs in this context. From 493 non-profit organizations and companies on the Fortune 500 list, a total of 8646 core values, including variations, were found across 89 sectors. Applying the evaluation metrics to the LLM results reduces the number of core values to 362. The research employs a ten-step decision-making process to guide ethical decision-making when clear rules, laws, or regulations are absent. The framework shows how objectivity can be maintained without losing personal values. This research contributes to understanding how core values are established and applied in the absence of formal regulations.
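As a rough illustration of how such an LLM-assisted extraction and reduction step could look, the sketch below (not the study's actual pipeline) stubs out the LLM call with canned output and reduces the extracted values with a simple frequency-based consistency filter. The function llm_extract_values, its canned responses, and the threshold are all hypothetical.

# Minimal hypothetical sketch: extract candidate core values per organization,
# then keep only values that recur across organizations as a crude stand-in
# for the consistency/relevance checks described in the study.

from collections import Counter
from typing import Iterable

def llm_extract_values(code_of_conduct: str) -> list[str]:
    # Placeholder for an LLM call; stubbed with canned output so the example runs.
    stub = {
        "We act with integrity and respect ...": ["integrity", "respect"],
        "Innovation and customer focus guide us ...": ["innovation", "customer focus", "integrity"],
    }
    return stub.get(code_of_conduct, [])

def consolidate(values_per_org: Iterable[list[str]], min_support: int = 2) -> list[str]:
    """Normalize case and keep values named by at least `min_support` organizations."""
    counts = Counter(v.strip().lower() for values in values_per_org for v in values)
    return sorted(v for v, n in counts.items() if n >= min_support)

if __name__ == "__main__":
    docs = ["We act with integrity and respect ...",
            "Innovation and customer focus guide us ..."]
    extracted = [llm_extract_values(d) for d in docs]
    print(consolidate(extracted))   # -> ['integrity']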
Hyper-personalization has emerged as a transformative approach in Business Process Management (BPM), leveraging predictive analytics to deliver highly tailored customer experiences. This article explores how Pega Systems harnesses extensive customer data to implement hyper-personalized BPM solutions, significantly enhancing customer retention rates in the retail and banking sectors. By utilizing advanced predictive models, Pega's BPM systems anticipate customer needs and behaviors, enabling organizations to proactively address individual preferences and requirements. The study delves into the methodologies employed to integrate predictive analytics within BPM frameworks, highlighting the seamless orchestration of personalized interactions across various customer touchpoints. Case studies from leading retail and banking institutions illustrate the tangible benefits of hyper-personalized BPM, including increased customer satisfaction, loyalty, and lifetime value. Additionally, the article examines the challenges and limitations associated with data integration, privacy concerns, and the scalability of hyper-personalized solutions. The findings underscore the pivotal role of predictive analytics in driving exceptional customer experiences and offer insights into best practices for organizations seeking to adopt hyper-personalized BPM strategies. Ultimately, this research demonstrates that hyper-personalization, underpinned by robust BPM systems, is essential for businesses aiming to thrive in increasingly competitive and customer-centric markets.
Event-driven architectures have fundamentally transformed Business Process Management (BPM) by facilitating real-time responsiveness to a myriad of external and internal triggers. This paradigm shift is particularly evident in the logistics and supply chain sectors, where dynamic and unpredictable events frequently impact operations. This article explores the implementation of Pega’s event-driven BPM systems and their efficacy in reducing delays and optimizing operations within these industries. Through a comprehensive analysis of case studies and empirical data, the study demonstrates how real-time event processing enhances decision-making processes, improves operational efficiency, and fosters greater adaptability to changing market conditions. The integration of Pega’s BPM solutions enables organizations to automate responses to events such as shipment delays, inventory shortages, and demand fluctuations, thereby minimizing downtime and maximizing resource utilization. Furthermore, the article discusses the methodological approaches employed to assess the performance improvements achieved through event-driven BPM, highlighting key metrics such as response time reduction, process throughput, and overall system reliability. The findings underscore the pivotal role of event-driven architectures in modern BPM, offering a robust framework for organizations aiming to achieve heightened responsiveness and operational excellence in an increasingly volatile business environment.
Context and motivation: End-user development focuses on enabling non-professional programmers to create or extend software applications on their own. However, before beginning the development process, software engineering best practices recommend performing requirements engineering (RE) activities, including requirements modelling. Question/problem: There is limited research on how end-users can model system requirements. Principal ideas/results: In this experience report, we investigate the problem of end-user requirements modelling in an EU-funded project about agricultural digitalisation. Specifically, a team of agronomists was directly involved in the creation of UML, iStar, and BPMN diagrams to model the transformation of socio-technical processes in four different concrete scenarios. They followed a formalisation procedure proposed within an RE method designed to help stakeholders evaluate the impact of agricultural digitalisation. Starting from textual reports including a description of the process as-is and the process-to-be, they followed step-by-step guidelines for model creation. Contribution: This paper reports insights from the experience from the viewpoint of the agronomists and software engineers involved. We identify eight key lessons that highlight the added value of end-user requirements modelling for achieving a shared and in-depth understanding of the socio-technical processes under analysis.
Context: Digital Transformation (DT) is an evolutionary process that uses the capabilities of digital technologies to allow organizations to re-evaluate their values, culture, behaviors, operations, services, business models, management strategies, practices, and relationships. This process aims to bring about significant changes in their properties (GEORGE and HOWARD, 2023). When applied correctly, DT offers benefits such as greater administrative efficiency and social value, driving innovation and increasing agility, transparency, productivity, and security, as well as improving decision-making and accountability (HANELT et al., 2021). Motivation: According to Mariani and Bianchi (2023), for organizations and their public servants to have a clear roadmap for conceiving and implementing DT, careful planning is recommended, together with assessing the organization's Digital Maturity (DM) over time. Nasiri, Saunila and Ukko (2022) state that DM reflects the status quo of an organization's capabilities, describing what has already been achieved in terms of DT efforts and how the institution prepares itself to adapt to an increasingly digital environment. In the public sector, research shows that DM is still incipient, since most investigations focus on guidance specific to the private sector (Lima et al., 2023). According to Reis and Melão (2023), it is essential to investigate how to promote DM in the public sector so that it effectively meets its real needs. Objective: Against this background, this work aims to develop a digital maturity model, called MaturityGOV, that takes into account the context, reality, and particularities of the public sector. Methodology: The methodological structure of this study is based on the empirical Design Science Research method proposed by Wieringa (2014). Initially, a tertiary literature review was carried out to understand the current research landscape of DT in the public sector. In parallel, a diagnosis was performed in a large public organization through technical visits with members of the organization's general management of technology and communication. In addition, an exploratory literature review was conducted to identify fundamental aspects for building a common body of knowledge for the maturity model. To evaluate the proposal, focus group sessions will be held with experts, in which the preliminary version of the model will be presented, discussed, and assessed. In subsequent design cycles, the model will be applied, evaluated, and validated through a case study. Preliminary Results: The partial results revealed the DT process in the public sector, highlighting the activities and resources required, the areas where DT is taking place, the main factors motivating this transformation, and the benefits generated by DT. The perceptions and collective reflections gathered in the diagnosis were aligned with the situational factors identified in the exploratory review, leading to the execution of the first design cycle. This cycle resulted in the Alpha version of the maturity model, which is grounded in four strategic axes and comprises seven dimensions and six maturity levels. Conclusion: Through the validation of the maturity model, MaturityGOV is expected to provide guidance on how public organizations can approach DT in a way that is aligned with their real needs.
The model should map out paths for these organizations to carry out their transformation process, indicating a potential, anticipated, or typical development trajectory toward reaching the desired state.
Enabling student mobility within higher education involves complex, interconnected processes that are supported by various digital tools and applications. This study focuses on these processes at the University of Zagreb, Faculty of Organization and Informatics, conducting a detailed case study to analyse the existing digital services that facilitate student mobility. It aims to define and discuss the application architecture of an integrated system and explore how these services can be further developed by integrating legacy components, open services, and open-source elements. The proposed application architecture should support basic transactions, workflows, and data storage and analysis needs, thereby enhancing system interoperability and simplifying processes for all stakeholders involved, including students, international relations officers, and academic advisors. By proposing further digitalisation, this paper contributes to simplifying and improving the mobility process, making it more accessible and efficient for the international education community.
Distributed Ledger Technology (DLT), including blockchain, is increasingly used within industry ecosystems to create a high-integrity, single source of truth of shared data and business processes across diverse parties. A major challenge in adopting DLT is the conflicting demands on data transparency for improved integrity, against hiding commercially sensitive data. To address this, some industry ecosystems use multiple ledgers shared only between relevant parties rather than using a single distributed ledger across the entire ecosystem. The design problem for this is: What parties should share which ledgers, and what data should be on those ledgers? In this paper, we propose a method employing a Design Structure Matrix (DSM) and Domain Mapping Matrix (DMM) to derive candidate shared ledger combinations for industry ecosystems. The method also indicates that, for certain data, centralized web services or point-to-point messages may be more suitable than shared ledgers. We discuss our experiences with applying this method while developing a prototype for an agricultural traceability platform. We also present a genetic-algorithm-based DSM and DMM clustering technique to derive candidate shared ledger combinations.
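As an illustration of the clustering idea only (not the method proposed in the paper, which also uses a Domain Mapping Matrix), the sketch below clusters a small, made-up party-to-party DSM with a toy genetic algorithm and prints each resulting cluster as a candidate shared ledger. The matrix values, penalty weight, and GA settings are assumptions.

# Toy genetic algorithm: assign parties to clusters so that strong data-sharing
# needs fall inside a cluster, while penalizing large clusters (over-exposure
# of commercially sensitive data). Each cluster is read as a candidate ledger.

import random

random.seed(42)

PARTIES = ["grower", "processor", "transporter", "retailer", "auditor"]
# Symmetric DSM: how strongly two parties need to share data (0 = none).
DSM = [
    [0, 3, 1, 0, 2],
    [3, 0, 2, 1, 2],
    [1, 2, 0, 3, 0],
    [0, 1, 3, 0, 1],
    [2, 2, 0, 1, 0],
]
N = len(PARTIES)

def fitness(labels):
    """Reward intra-cluster sharing, penalize cluster size (confidentiality proxy)."""
    inside = sum(DSM[i][j] for i in range(N) for j in range(i + 1, N)
                 if labels[i] == labels[j])
    sizes = [labels.count(c) for c in set(labels)]
    exposure_penalty = 0.5 * sum(s * (s - 1) / 2 for s in sizes)
    return inside - exposure_penalty

def mutate(labels):
    child = labels[:]
    child[random.randrange(N)] = random.randrange(N)  # reassign one party
    return child

def evolve(pop_size=30, generations=200):
    population = [[random.randrange(N) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    ledgers = {}
    for party, label in zip(PARTIES, best):
        ledgers.setdefault(label, []).append(party)
    for members in ledgers.values():
        print("candidate shared ledger:", members)

The size penalty here is only a crude stand-in for the confidentiality concern the paper addresses; a realistic fitness function would weigh which specific data items each party may see, which is where the DMM comes in.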
The use of embedded systems has increased significantly over the last decade with the proliferation of Internet of Things technology, automotive and healthcare innovations and the use of smart home appliances and consumer electronics. With this increase, the need for higher quality embedded systems has increased. There are various guidelines and standards, such as ISO/IEC 9126 and ISO/IEC 25010, for product quality evaluation. However, these guidelines cannot be directly applied to embedded systems due to the nature of these systems. Applying traditional quality standards or guidelines on these systems without modification may degrade the performance of the system, increase memory usage or energy consumption, or affect other critical physical metrics adversely. Consequently, several models and approaches have either been introduced or have adopted existing guidelines to produce high-quality embedded systems. With this motivation, to understand the state of the art, and to identify the research directions in the field, we conducted a systematic literature review (SLR). In our research, we have investigated studies published from 1980 to 2024 and provided a comprehensive review of the scientific literature on quality models, quality attributes, employed practices, and the challenges, gaps, and pitfalls in the field.
Before synthesis can take place, large quantities of data are gathered during the exploration phase, using research techniques such as interviews, observations, and surveys, as well as secondary research from peer-reviewed journal articles. Data from various sources can be involved, including existing theories, empirical studies, and practical insights. These data need to be reduced and synthesized into meaningful and actionable chunks through inductive as well as abductive sensemaking (for abduction, see Sect. 2.2.3) to inform the further design process. This reduction is the main activity of the synthesis step and enables the formulation of design requirements and initial design propositions. The outcome of the synthesis phase is therefore the set of design requirements, which can play a role when scoping the initial design propositions. [The initial design propositions are the outcome of the literature synthesis and are formulated according to the CAMO logic. During the evaluation phase of the design science cycle, when the solution concept is evaluated, they can be adjusted based on the evaluation findings and become design propositions, which can be added to the specific body of knowledge.]
The paper has been accepted for publication in the Computer Science journal: http://journals.agh.edu.pl/csci
Software engineering (SE) research often involves creating software, either as a primary research output (e.g. in design science research) or as a supporting tool for the traditional research process. Ensuring software quality is essential, as it influences both the research process and the credibility of findings. Integrating software testing methods into SE research can streamline efforts by addressing the goals of both research and development processes simultaneously. This paper highlights the advantages of incorporating software testing in SE research, particularly for research evaluation. Through qualitative analysis of software artifacts and insights from two PhD projects, we present ten lessons learned. These experiences demonstrate that, when effectively integrated, software testing offers significant benefits for both the research process and its results.
With the increased interest in qualitative research, some questions have arisen regarding methodological issues. In particular, sample size and validity are the most often queried aspects of qualitative research. This paper aims to provide a review of the concepts of validity in qualitative research.
Design Methods for Reactive Systems describes methods and techniques for the design of software systems, particularly reactive software systems that engage in stimulus-response behavior. Such systems, which include information systems, workflow management systems, systems for e-commerce, production control systems, and embedded software, increasingly combine design aspects such as complex information processing, non-trivial behavior, and communication between different components, aspects traditionally treated separately by classic software design methodologies. But, as this book illustrates, the software designer is better served by the ability to intelligently pick and choose from among a variety of techniques according to the particular demands and properties of the system under development. Design Methods for Reactive Systems helps the software designer meet today's increasingly complex challenges by bringing together specification techniques and guidelines proven useful in the design of a wide range of software systems, allowing the designer to evaluate and adapt different techniques for different projects. Written in an exceptionally clear and insightful style, Design Methods for Reactive Systems is a book that students, engineers, teachers, and researchers will undoubtedly find of great value. It shows how the techniques and design approaches of the three most popular design methods can be combined in a flexible, problem-driven manner. Pedagogical features include summaries, rehearsal questions, exercises, discussion questions, and numerous case studies, with additional examples on the companion Web site.
The computing literature contains much advice on new methods and technologies to improve the productivity and quality of the computing organization. But there is little research-generated evaluative evidence to support this advice, and the result is that the responsibility for evaluating new technology at present rests with those who will use it. Practitioner organizations tend to use pilot studies to perform this evaluation. But there is surprisingly little written on how to conduct such studies, and those studies that have been done are all too frequently either biased or inadequate, suggesting there is need for guidelines for the conduct of pilot studies. This article presents such guidelines, in the form of a detailed set of steps to be performed during a pilot study, with a rationale for why the steps are important and the consequences if they are omitted. The result of an evaluation of the guidelines on over two dozen government pilot studies is presented, showing that although the guidelines may be too rigorous, they form at least a foundation from which a useful set of steps for a particular pilot project might be tailored.