Book

Design Patterns: Elements of Reusable Object-Oriented Software

Authors:
... In particular, we use a Software Engineering design pattern that promotes the dynamic composition of behaviors, enabling the creation of modular, reusable components that can be easily combined to enhance functionality. This paper shows how the Decorator pattern [Gamma et al., 1995] was applied over the RDEVS design to include event tracking in the models without changing the expected behavior during simulation. We preserve the routing functionality as part of the models and the suitability of DEVS simulators as execution engines, allowing models to collect event flow data dynamically. ...
... A design pattern is the reusable form of a solution to a design problem [Alexander, 2018]. Design patterns are general, reusable solutions defined from the study of commonly occurring problems within a given context, and they help ensure the success of the modeling task (e.g., OO design patterns capture the intent behind a design by identifying objects, their collaborations, and the distribution of responsibilities [Gamma et al., 1995]). ...
... The Decorator design pattern is one of the twenty-three well-known design patterns proposed by [Gamma et al., 1995]. This structural pattern defines a flexible approach to enclosing a component in another object that adds a "border", with the intent of attaching additional responsibilities dynamically. ...
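The mechanics described above can be sketched in a few lines of Python. This is an illustrative toy, not the RDEVS implementation (which is in Java); all class names here are invented for the example:

```python
from abc import ABC, abstractmethod

class Component(ABC):
    @abstractmethod
    def operation(self) -> str: ...

class ConcreteComponent(Component):
    def operation(self) -> str:
        return "core behavior"

class Decorator(Component):
    """The "border": wraps a component and forwards calls to it."""
    def __init__(self, wrapped: Component):
        self._wrapped = wrapped
    def operation(self) -> str:
        return self._wrapped.operation()

class TrackingDecorator(Decorator):
    """Attaches an extra responsibility (event tracking) dynamically."""
    def __init__(self, wrapped: Component):
        super().__init__(wrapped)
        self.events: list[str] = []
    def operation(self) -> str:
        self.events.append("operation called")  # the added responsibility
        return super().operation()              # original behavior preserved

tracked = TrackingDecorator(ConcreteComponent())
print(tracked.operation())  # -> core behavior
print(tracked.events)       # -> ['operation called']
```

Because the decorator satisfies the same interface as the component it wraps, clients (here, a simulator engine) can use the decorated object without any change to their code, which is precisely why the pattern leaves simulation behavior untouched.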
Article
Full-text available
The Routed Discrete Event System Specification (RDEVS) is a modular and hierarchical Modeling and Simulation (M&S) formalism based on the Discrete Event System Specification (DEVS) formalism that provides a set of design models for dealing with routing problems over DEVS. At the formal level, RDEVS models (as DEVS models themselves) are defined mathematically. However, software implementations of both formalisms are based on an object-oriented paradigm. Furthermore, at the implementation design level, the RDEVS formalism is represented by a conceptual model that uses DEVS simulators as execution engines. Even though RDEVS models can be executed with DEVS simulators, the resulting data (obtained as execution outputs) remains DEVS-based, restricting the study of event flows between models influenced by routing policies. This paper shows how the RDEVS formalism design was enhanced to include event tracking in the models without altering their expected behavior during simulation. Such an improvement is based on adding new features to existing RDEVS components. These features are defined as trackers, which are responsible for getting structured data from events exchanged during RDEVS executions. The proposed solution employs the Decorator pattern as a software engineering option to achieve the required goal. It was deployed as a Java package attached to the RDEVS library, devoted to collecting structured event flow data using JavaScript Object Notation (JSON). The results highlight the modeling benefits of adding event tracking to the original capabilities of the RDEVS formalism. For the M&S community, the novel contribution is an advance in understanding how best modeling practices of software engineering can be used to enhance their software tools in general and the RDEVS formalism in particular.
... The Document-import Subsystem is responsible for parsing and converting documents from various formats into the BioC format, while the Corpus-import Subsystem handles similar tasks for existing corpora. The Controller components, DocumentImporterController and CorpusImporterController, employ the Strategy Pattern [49] (p. 315) in the form of a REST interface to ensure the interchangeability of components as microservices. ...
... It provides a Spring Boot @RestController, which delegates requests to the DocumentImporter Controller and CorpusImporter Controller (Figure 2). These controllers employ the Strategy Pattern [49] (p. 315), enabling flexible and extensible data processing. ...
... The DataProvision Controller (Figure 2) is responsible for converting annotated BioC documents into the formats required by different NER frameworks. This is accomplished by its DataConversion Controller sub-component, which applies the Strategy Pattern [49] (p. 315). ...
Article
Full-text available
This paper presents a cloud-based system that builds upon the FIT4NER framework to support medical experts in training machine learning models for named entity recognition (NER) using Microsoft Azure. The system is designed to simplify complex cloud configurations while providing an intuitive interface for managing and converting large-scale training and evaluation datasets across formats such as PDF, DOCX, TXT, BioC, spaCyJSON, and CoNLL-2003. It also enables the configuration of transformer-based spaCy pipelines and orchestrates Azure cloud services for scalable and efficient NER model training. Following the structured Nunamaker research methodology, the paper introduces the research context, surveys the state of the art, and highlights key challenges faced by medical professionals in cloud-based NER. It then details the modeling, implementation, and integration of the system. Evaluation results—both qualitative and quantitative—demonstrate enhanced usability, scalability, and accessibility for non-technical users in medical domains. The paper concludes with insights gained and outlines directions for future work.
... In a sense, the statechart is the behavioral pattern adopted by ASEME. Behavioral patterns are applied when an object's behavior depends on its state, which also changes at run-time [19]. ...
... Solution: To remedy this situation we rely on the well-known by practitioners factory design pattern [19]. According to this pattern, we use a factory method that returns an instance of a method respecting an API which can be dynamically selected, even at run-time. ...
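A minimal Python sketch of such a factory method follows; the registry keys and the solver functions are hypothetical stand-ins for implementations that respect a common API and can be selected at run-time:

```python
from typing import Callable

# Two interchangeable implementations sharing the same call API
# (names are illustrative, not from the cited work).
def fast_solver(x: int) -> int:
    return x * 2

def safe_solver(x: int) -> int:
    return max(x * 2, 0)  # clamps negative results

_SOLVERS: dict[str, Callable[[int], int]] = {
    "fast": fast_solver,
    "safe": safe_solver,
}

def make_solver(kind: str) -> Callable[[int], int]:
    """Factory method: returns an instance respecting the API,
    with the concrete implementation chosen dynamically."""
    try:
        return _SOLVERS[kind]
    except KeyError:
        raise ValueError(f"unknown solver kind: {kind!r}")

solver = make_solver("fast")  # selection could come from config or user input
print(solver(21))  # -> 42
```

Callers depend only on the returned callable's signature, so new implementations can be registered without touching client code.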
... Solution: This pattern composes activities into tree-like structures (the data structure used to represent the statechart is a tree [40]) to represent part-whole hierarchies, following the Composite pattern paradigm [19]. Put differently, during the design phase, the intra-agent control model developer may change the basic state type to an OR-state type and add sub-states as they see fit, even other protocols. ...
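The tree-like composition of basic states and OR-states can be illustrated with a small Python sketch; the class and state names are invented for the example, not taken from ASEME:

```python
class State:
    """Leaf: a basic state."""
    def __init__(self, name: str):
        self.name = name
    def leaves(self) -> list:
        return [self.name]

class OrState(State):
    """Composite: an OR-state holding sub-states.
    Clients treat leaves and composites uniformly via the State interface."""
    def __init__(self, name: str, children: list):
        super().__init__(name)
        self.children = list(children)
    def add(self, child: State) -> None:
        self.children.append(child)
    def leaves(self) -> list:
        return [leaf for c in self.children for leaf in c.leaves()]

# A basic state can later be replaced by an OR-state with sub-states,
# as the snippet above describes.
root = OrState("root", [State("idle")])
root.add(OrState("active", [State("send"), State("receive")]))
print(root.leaves())  # -> ['idle', 'send', 'receive']
```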
... Python binds their values in the returned lambda object. Thunking via lambda: resembles the Command design pattern from Object-Oriented programming (Gamma et al., 1994). ...
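The point about binding values in the returned lambda object can be shown concretely in Python; each thunk, invoked later, plays the role of a Command object's execute():

```python
# Classic pitfall: a bare lambda in a loop captures the *variable*, not its value.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # -> [2, 2, 2]  (all see the final value of i)

# Creating the lambda inside a function binds the value in the returned
# lambda object at thunk-creation time.
def make_thunk(i):
    return lambda: i          # i is bound for this particular thunk

early = [make_thunk(i) for i in range(3)]
print([f() for f in early])  # -> [0, 1, 2]
```

Each element of `early` is a deferred, parameterless computation that can be stored, passed around, and executed later, which is exactly the role Command objects play in object-oriented designs.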
Article
Full-text available
After some reminiscing, I describe the Olympiad trap and then delve into a technique to eliminate recursion by trampolining with continuations.
... The second version was created in the bachelor thesis of Daniel Lawand. It applies multiple design patterns [7] such as Chain of Responsibility, Strategy, and Template Method to decouple different responsibilities within the pipeline. Moreover, the application logic was decoupled from external dependencies with the Ports and Adapters pattern [11]. ...
... The system applies multiple design patterns [7] to increase the flexibility of the code. Some examples include: Chain of Responsibility, to dynamically structure the steps of the pipeline depending on the experiment to be executed; Strategy, to allow choosing different data preprocessing, feature engineering, and model evaluation techniques; and Template Method, to allow reusing generic code snippets that can be extended for future experiments (often in combination with the Strategy pattern). ...
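A compact Python sketch of a Chain of Responsibility pipeline along these lines follows; the concrete steps (normalization, rounding) are placeholders for illustration, not SPIRA's actual preprocessing:

```python
from abc import ABC, abstractmethod

class PipelineStep(ABC):
    """Each handler processes the data and forwards the result to the next
    step; the chain itself is assembled dynamically at run-time."""
    def __init__(self):
        self._next = None
    def then(self, step: "PipelineStep") -> "PipelineStep":
        self._next = step
        return step           # returning the tail allows fluent chaining
    def handle(self, data):
        data = self.process(data)
        return self._next.handle(data) if self._next else data
    @abstractmethod
    def process(self, data): ...

class Normalize(PipelineStep):
    def process(self, data):
        peak = max(data)
        return [x / peak for x in data]

class Round2(PipelineStep):
    def process(self, data):
        return [round(x, 2) for x in data]

head = Normalize()
head.then(Round2())           # reorder or swap steps per experiment
print(head.handle([1, 2, 4])) # -> [0.25, 0.5, 1.0]
```

Because each step only knows its successor, an experiment can rewire the chain (add, drop, or reorder steps) without modifying any step's code.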
Preprint
Full-text available
Deploying a Machine Learning (ML) training pipeline into production requires robust software engineering practices. This differs significantly from experimental workflows. This experience report investigates this challenge in SPIRA, a project whose goal is to create an ML-Enabled System (MLES) to pre-diagnose respiratory insufficiency via speech analysis. The first version of SPIRA's training pipeline lacked critical software quality attributes. This paper presents an overview of the MLES, then compares three versions of the architecture of the Continuous Training subsystem, which evolved from a Big Ball of Mud, to a Modular Monolith, towards Microservices, adopting different design principles and patterns to enhance its maintainability, robustness, and extensibility. In this way, the paper seeks to offer insights for both ML Engineers tasked to productionize ML training pipelines and Data Scientists seeking to adopt MLOps practices.
... Key goals are to automatically assess the architectural soundness of those principles when applying them today (utilizing modern tools like Kotlin and Jetpack components), to gauge their effect on code quality and team productivity, and to uncover major roadblocks and ways to circumvent them. Furthermore, an issue facing software editors is the development of a framework of implementable solutions with potential for use by large numbers of development teams with successful practical applications [5]. This method is designed to help mobile software developers improve their design skills. ...
... This trade-off between long-term benefits and short-term costs is a recurring theme in the literature, with additional evidence from Meta's engineering reports suggesting that cultural resistance within development teams often hampers consistent application of these principles [9]. Moreover, Gamma, Helm, Johnson, and Vlissides (2023) caution against over-engineering through excessive adherence to design patterns, a risk particularly acute in Android where simplicity often trumps complexity due to performance constraints [5]. These observations suggest that while SOLID principles offer a compelling blueprint for quality, their implementation must be tempered by situational awareness and pragmatic decision-making. ...
Article
Full-text available
Developing and maintaining large-scale applications has become a daunting task with the rapid evolution of the Android ecosystem. This research examines the application of SOLID (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion) principles in contemporary Android development. Through a case study of Meta and an analysis of applications at top tech companies, this research shows how SOLID principles can improve product quality, maintainability, and team outcomes. The study is based on a mixed methodology, combining qualitative and quantitative analysis of the source code of 25 enterprise-grade Android applications, in-depth interviews with 50 senior professionals from top-tier technology companies, and code-metrics data collected over 24 months. We implemented it in Kotlin, taking advantage of the modern Android Jetpack ecosystem. The results demonstrate dramatic improvements across all aspects of software development, including a 45% reduction in technical debt, an 89% increase in test coverage, and a 30% reduction in bug rate. A qualitative analysis indicates that teams report easier code maintenance and faster ramp-up of new team members. The research also highlights some of the barriers to applying SOLID: a steep learning curve and the challenge of convincing team members to adopt a SOLID mindset. Our research contributes (1) a SOLID implementation framework for Android, empirically validated in four case studies, (2) metrics and tools for measuring adherence to SOLID principles, and (3) recommendations for resolving issues encountered during the implementation of these principles. These results have significant practical implications for mobile software industry practitioners and researchers.
... This blueprint can be adapted for corporate membership programs, ensuring that they can operate efficiently and respond to user needs without constant human intervention [11]. Gamma (1995) discussed design patterns for reusable object-oriented software, which can be applied in the development of self-adaptive systems. Design patterns provide a standardized approach to solving common software development problems, facilitating the implementation and maintenance of adaptive features in corporate membership programs [12]. ...
... Gamma (1995) discussed design patterns for reusable object-oriented software, which can be applied in the development of self-adaptive systems. Design patterns provide a standardized approach to solving common software development problems, facilitating the implementation and maintenance of adaptive features in corporate membership programs [12]. Cheng et al. (2006) explored architecture-based adaptation in the presence of multiple objectives. ...
Article
Full-text available
The current study explores the acceptance of XYZ Company’s digital membership program in light of the Technology Acceptance Model (TAM). The objective is to identify key factors that predict user adoption, focusing mainly on Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) as predictors of behavioral intention (BI). Data was collected from 378 respondents and analyzed using Structural Equation Modeling (SEM). The data provides evidence for the significant effects of PU and PEOU on BI as expected, underlining how important these constructs are in influencing user acceptance. The study extends the TAM with other variables such as trust, data privacy, and user experience (UX) to provide a broader understanding. These variables are important, especially in the case of a digital membership program operated by a company, where the technology must satisfy multiple customers and match their needs with industry constraints. The study findings reveal a significant mediating effect of UX on the relationship between PEOU and PU, as well as a moderating impact of trust and data privacy on the relationship between PEOU and PU, which in turn creates a higher level of assurance and satisfaction. The findings indicate, in concrete terms, what Company XYZ can do from a management perspective to improve the user experience and enhance the benefits of its digital membership program, user adoption, and engagement. In line with the academic literature, this study places TAM in the context of corporate digital membership, offering relevant knowledge and practical recommendations for organizations seeking to replicate similar initiatives. The strategic implications of these results demonstrate the importance for companies of designing their digital solutions according to user expectations and needs, to consistently deliver optimal customer value and satisfaction in a highly competitive sales environment.
... For performance and scalability reasons, all message subscribers are implemented as a multi-threaded process using a configurable thread-pool. Message processors are implemented using factory and strategy design patterns [29] to maximize extensibility, while the messages themselves are implemented using the Data Transfer Object (DTO) [30] enterprise design pattern. ...
... It is anticipated that the design strategy for entity resolution will evolve over time and perhaps even include multiple strategies depending on sensor modalities. To support this scenario, the DFE applies the Strategy Pattern [29] to determine the most effective implementation of entity resolution based on the type of observation message. The DFE allows for the configuration of multiple strategies and the association of these strategies for event source types (or sensor types). ...
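The type-keyed strategy selection described above can be sketched in Python; the sensor types, observation fields, and resolution functions below are all invented for illustration, not the DFE's actual implementation:

```python
from typing import Callable

# Interchangeable entity-resolution strategies (illustrative names).
def resolve_by_position(obs: dict) -> str:
    return f"entity@{obs['lat']:.2f},{obs['lon']:.2f}"

def resolve_by_signature(obs: dict) -> str:
    return f"entity#{obs['signature']}"

# Configuration associating event source types with strategies;
# new sensor modalities only require a new entry here.
STRATEGIES: dict[str, Callable[[dict], str]] = {
    "radar": resolve_by_position,
    "rf":    resolve_by_signature,
}

def resolve(obs: dict) -> str:
    """Pick the strategy from the observation's sensor type at run-time."""
    strategy = STRATEGIES[obs["sensor_type"]]
    return strategy(obs)

print(resolve({"sensor_type": "radar", "lat": 51.477, "lon": 0.0}))
# -> entity@51.48,0.00
```

The caller never branches on the sensor type itself; evolving the resolution approach means adding or replacing a strategy function, leaving the dispatch code untouched.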
Preprint
Full-text available
Scientific investigation of Unidentified Anomalous Phenomena (UAP) is limited by poor data quality and a lack of transparency. Existing data are often fragmented, uncalibrated, and missing critical metadata. To address these limitations, the authors present the Observatory Class Integrated Computing Platform (OCICP), a system designed for the systematic and scientific study of UAPs. OCICP employs multiple sensors to collect and analyze data on aerial phenomena. The OCICP system consists of two subsystems. The first is the Edge Computing Subsystem which is located within the observatory site. This subsystem performs real-time data acquisition, sensor optimization, and data provenance management. The second is the Post-Processing Subsystem which resides outside the observatory. This subsystem supports data analysis workflows, including commissioning, census operations, science operations, and system effectiveness monitoring. This design and implementation paper describes the system lifecycle, associated processes, design, implementation, and preliminary results of OCICP, emphasizing the ability of the system to collect comprehensive, calibrated, and scientifically sound data.
... We formalize these structures as design patterns for agentic visualization that balance automation and analytical control. Design patterns, originating in architecture [1] and software engineering [5], provide a structured vocabulary for describing reusable solutions to common challenges. This approach aligns with human-centered AI principles that emphasize human values, interpretability, and agency in AI-augmented systems [2], [13]. ...
... Following long-standing design pattern praxis [1], [5], each pattern in our catalog has a descriptive title capturing the pattern's essence; the specific problem the pattern addresses; the reusable solution that resolves the problem; and the consequences of implementing the pattern, including benefits, limitations, and tradeoffs. Our analysis revealed three distinct categories of design patterns for agentic visualization: 1) Agent Role Patterns: These patterns define the fundamental roles and responsibilities agents can assume within a visualization system. ...
Preprint
Full-text available
Autonomous agents powered by Large Language Models are transforming AI, creating an imperative for the visualization field to embrace agentic frameworks. However, our field's focus on a human in the sensemaking loop raises critical questions about autonomy, delegation, and coordination for such agentic visualization that preserves human agency while amplifying analytical capabilities. This paper addresses these questions by reinterpreting existing visualization systems with semi-automated or fully automatic AI components through an agentic lens. Based on this analysis, we extract a collection of design patterns for agentic visualization, including agentic roles, communication and coordination. These patterns provide a foundation for future agentic visualization systems that effectively harness AI agents while maintaining human insight and control.
... Typical examples include studies on discovering recurring usage patterns, which may indicate good or bad modeling practices, continuous deviations from the established language, or the repeated use of a particular representation to solve a problem in a specific domain. In this regard, along with seminal works like those in [43,44], there is a wealth of research [45][46][47] connected with diverse fields such as software code [48], databases [49], educational processes [50], and business processes [51]. ...
... Another challenge is the time-consuming and labor-intensive nature of analysis, which has led to the exploration of automatic techniques to support heuristic analysis, see for instance the works in [58][59][60]. In some cases, however, such as in the work presented in [43], valuable insights are gathered without relying on a large number of models for analysis, but just focusing on some key pre-selected case studies. Additionally, it is crucial to categorize and qualify the information obtained through these consulting heuristics. ...
... Fig. 13 provides a visual overview of the core design. The greatest emphasis is placed on the strategy and adapter design patterns (Gamma et al., 1994). Simply put, the strategy pattern decouples the source code for algorithm selection at runtime into separate classes. ...
... To address this, JobShopLib employs the Observer design pattern (Gamma et al., 1995). In this pattern, the Dispatcher acts as the "Subject" (or "Observable"), and other objects that need to react to its changes act as "Observers". ...
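The Subject/Observer relationship described above can be sketched in Python; the class names below are simplified stand-ins for illustration, not JobShopLib's actual API:

```python
class Dispatcher:
    """Subject (Observable): notifies registered observers whenever
    an operation is dispatched."""
    def __init__(self):
        self._observers = []
    def subscribe(self, observer) -> None:
        self._observers.append(observer)
    def dispatch(self, operation: str) -> None:
        # ...scheduling state would be updated here...
        for obs in self._observers:
            obs.update(operation)

class OperationLogger:
    """Observer: reacts to dispatch events; the Dispatcher knows nothing
    about what observers do with the notification."""
    def __init__(self):
        self.log = []
    def update(self, operation: str) -> None:
        self.log.append(operation)

dispatcher = Dispatcher()
logger = OperationLogger()
dispatcher.subscribe(logger)
dispatcher.dispatch("job1-op1")
dispatcher.dispatch("job2-op1")
print(logger.log)  # -> ['job1-op1', 'job2-op1']
```

New reactions (feature extractors, reward trackers, renderers) can be added by subscribing further observers, without modifying the Dispatcher.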
Preprint
Full-text available
The job shop scheduling problem is an NP-hard combinatorial optimization problem relevant to manufacturing and timetabling. Traditional approaches use priority dispatching rules based on simple heuristics. Recent work has attempted to replace these with deep learning models, particularly graph neural networks (GNNs), that learn to assign priorities from data. However, training such models requires customizing numerous factors: graph representation, node features, action space, and reward functions. The lack of modular libraries for experimentation makes this research time-consuming. This work introduces JobShopLib, a modular library that allows customizing these factors and creating new components with its reinforcement learning environment. We trained several dispatchers through imitation learning to demonstrate the environment's utility. One model outperformed various graph-based dispatchers using only individual operation features, highlighting the importance of feature customization. Our GNN model achieved near state-of-the-art results on large-scale problems. These results suggest significant room for improvement in developing such models. JobShopLib provides the necessary tools for future experimentation.
... Patterns are successfully adopted in several other domains. Well-known examples are the work by the 'Gang of Four' [19] and by Fowler [18] on patterns related to software design. Closer to the business process community are patterns in workflows [1] and event logs [35]. ...
Preprint
Full-text available
With the advent of digital transformation, organisations are increasingly generating large volumes of data through the execution of various processes across disparate systems. By integrating data from these heterogeneous sources, it becomes possible to derive new insights essential for tasks such as monitoring and analysing process performance. Typically, this information is extracted during a data pre-processing or engineering phase. However, this step is often performed in an ad-hoc manner and is time-consuming and labour-intensive. To streamline this process, we introduce a reference model and a collection of patterns designed to enrich production event data. The reference model provides a standard way for storing and extracting production event data. The patterns describe common information extraction tasks and how such tasks can be automated effectively. The reference model is developed by combining the ISA-95 industry standard with the Event Knowledge Graph formalism. The patterns are developed based on empirical observations from event data sets originating in manufacturing processes and are formalised using the reference model. We evaluate the relevance and applicability of these patterns by demonstrating their application to use cases.
... We found that even just studying similarities and differences across several disciplines is useful for those disciplines, because they have to make their concepts explicit. The comparison of concepts and approaches between disciplines can also offer new points of view for the separate disciplines; a well-known example is software engineering, which learned from architecture a way of thinking in design patterns (Gamma, Helm, and Johnson 1995). Our model may be used as a basic representation of a design process both in design practice and design research. ...
... See Gamma et al. (1995), for example. As an interesting side note, Alexander's (1977) pattern languages likewise formed part of the experience that led to Ward Cunningham's (2001) invention of the WikiWikiWeb. ...
... Alexander suggested that patterns provide a common framework for designers to solve problems and maintain consistency and usability (Alexander, 1977). This idea was later adapted into software engineering by the Gang of Four (GoF), Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, who established design patterns as reusable solutions to common coding problems to offer developers a blueprint for solving design challenges efficiently and enhance software maintainability and scalability (Gamma et al., 1994). ...
Thesis
Full-text available
In today’s world of digital product design, data plays a critical role in shaping how user interfaces are developed. Instead of focusing on usability and creating user-friendly interfaces, companies are shifting towards manipulative designs that influence user behavior in ways that benefit the business. This often leads to users spending more money, sharing more personal data, or making unintended decisions. These tactics, known as dark patterns, take advantage of cognitive biases in an attempt to boost short-term engagement at the expense of the user’s autonomy. This thesis conducts an extensive literature review to trace the origins of dark patterns, their taxonomies, and their effects on engagement and trust. To investigate these effects further, a controlled online experiment was carried out using a prototype of a nutrition-tracking app. The study involved 107 participants, who were randomly assigned to one of three different interface versions: a transparent control version, a mildly manipulative version, and a highly manipulative version that used aggressive dark patterns. The results revealed that both mild and extreme dark patterns significantly boosted user engagement across various metrics, such as agreeing to share data, signing up for newsletters, and accepting free trials. However, this increased engagement came with a potential downside. Participants who encountered dark patterns reported feeling less trust, especially regarding emotional and transactional commitments. While the drop in trust wasn’t statistically significant, it does reflect trends seen in previous studies. Interestingly, digital competence seemed to influence how susceptible users were to dark patterns, with those more skilled in digital literacy showing lower engagement rates in manipulative scenarios.
... A UML component diagram shows the high-level structure of the PyGrossone library and the relationships between its main components. Given the intricacy of constructing and managing a grossNumber, which entails dealing with grossTokens, the library employs a systematic, step-by-step methodology founded on the Builder design pattern (see Gamma et al. 1995). The Builder is a creational design pattern that focuses on constructing complex objects. ...
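As an illustrative sketch (not PyGrossone's actual API), a Builder for a toy grossone-style number might look like this in Python; the class names and term representation are invented for the example:

```python
class GrossNumber:
    """Toy value object: a sum of digit * (grossone ** power) terms."""
    def __init__(self, terms):
        self.terms = terms  # list of (digit, power) pairs
    def __repr__(self):
        return " + ".join(f"{d}*①^{p}" for d, p in self.terms)

class GrossNumberBuilder:
    """Builder: assembles the complex object step by step,
    then yields the finished product via build()."""
    def __init__(self):
        self._terms = []
    def add_term(self, digit, power) -> "GrossNumberBuilder":
        self._terms.append((digit, power))
        return self  # fluent interface: calls can be chained
    def build(self) -> GrossNumber:
        # Normalization (here, sorting by descending power) happens once,
        # at construction time, so clients cannot create malformed numbers.
        return GrossNumber(sorted(self._terms, key=lambda t: -t[1]))

n = GrossNumberBuilder().add_term(3, 0).add_term(2, 1).build()
print(n)  # -> 2*①^1 + 3*①^0
```

Separating the step-by-step assembly from the finished object keeps the construction rules (ordering, validation of tokens) in one place, which is the Builder pattern's main payoff for intricate objects like these.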
Article
Full-text available
The paper presents PyGrossone, a general-purpose and domain-independent Python library for the Infinity Computer arithmetic allowing one to work numerically with different infinite and infinitesimal numbers. PyGrossone offers a set of arithmetic, elementary, trigonometric, and differentiation modules to perform computations with ①-based numbers (where ① is a new infinite number introduced in a series of papers by the last author) with exact precision up to the machine one, since the computations on the Infinity Computer are numeric, not symbolic. Experiments have been conducted to evaluate the validity and performance of the proposed library. The availability of a Python-based implementation of the Infinity Computer arithmetic enables its exploitation in further research fields, such as artificial intelligence, scientific computing, and machine learning.
... The resulting BSS encode unique structural, behavioral, and communicational properties, with an emphasis on the behavioral and communicational attributes that distinguish each design pattern. To demonstrate the feasibility of the approach proposed in this study, a confusion matrix based on the Levenshtein distance [15] is calculated between the BSS of eight predefined design patterns from the Gang of Four (GoF) list [16]. This allows assessing the uniqueness of the sequences. ...
Article
Full-text available
The process of Automatic Design Pattern Recovery (ADPR) aims to identify and document design patterns within software systems by analyzing code for recurring signatures indicative of predefined patterns. Despite the development of numerous techniques to address this challenge, the varying implementations of design patterns across different programming languages, combined with the lack of contextual analysis, hinder the effectiveness of these methods. Building on prior research, this paper introduces an innovative approach to ADPR that integrates structural and behavioral analysis of source code. This approach leverages the characteristic that design patterns tend to exhibit predictable behaviors as software architecture scales or when the behavior of its constituent objects is filtered. The core contribution of this work is the introduction of Behavio-Structural Sequences (BSS), which encapsulate key features for identifying the intrinsic nature of design patterns. These sequences are filtered based on architectural and behavioral properties, transformed into text strings, and classified using the Levenshtein distance metric. The proposed classifier is evaluated on a diverse set of source codes, achieving an 89% accuracy rate with a 0.09 miss rate.
... We consider maintainability as the primary concern of refactoring since this is the major aim of code refactoring [53]. Robustness can be improved by restructuring modules to integrate patterns that improve error handling (e.g., Chain of Responsibility [35]). The performance can be improved by eliminating code redundancies (via refactoring) and fixing the sub-optimal distribution of code entities to reduce execution time [67]. ...
Preprint
Full-text available
Refactoring is a practice widely adopted during software maintenance and evolution. Due to its importance, there is extensive work on the effectiveness of refactoring in achieving code quality. However, developers' intentions are usually overlooked. A more recent area of study involves the concept of self-affirmed refactoring (SAR), where developers explicitly state their intent to refactor. While studies on SAR have made valuable contributions, they provide little insight into refactoring complexity and effectiveness, as well as the refactorings' relations to specific non-functional requirements. A study by Soares et al. addressed such aspects, but it relied on a quite small sample of studied subject systems and refactoring instances. Following the empirical method of replication, we expanded the scope of Soares et al.'s study by doubling the number of projects analyzed and a significantly larger set of validated refactorings (8,408). Our findings only partially align with the original study. We observed that when developers explicitly state their refactoring intent, the resulting changes typically involve a combination of different refactoring types, making them more complex. Additionally, we confirmed that such complex refactorings positively impact code's internal quality attributes. While refactorings aimed at non-functional requirements tend to improve code quality, our findings only partially align with the original study and contradict it in several ways. Notably, SARs often result in fewer negative impacts on internal quality attributes despite their frequent complexity. These insights suggest the importance of simplifying refactorings where possible and explicitly stating their goals, as clear intent helps shape more effective and targeted refactoring strategies.
... • Applying Relevant Design Patterns: Automatically incorporating established design patterns that promote flexibility, reusability, and understandability [18,[21][22][23]. For instance, a vibe emphasizing adaptability might trigger the use of Strategy or Observer patterns, while a vibe focused on object creation might leverage Factory or Builder patterns. ...
Preprint
Full-text available
This paper addresses the emerging concept of "vibe-coding"-the translation of intuitive feelings or high-level intentions directly into software code. While potentially accelerating development or enabling novel forms of creation, such code inherently risks being opaque, poorly understood, and difficult to maintain. We propose the Intent-Driven Explicable Architecture (IDEA) framework, a novel approach designed to ensure that vibe-coded software remains maintainable, understandable, and supportable throughout its lifecycle. IDEA integrates techniques for formalizing intuitive inputs, constraining code generation using established software engineering principles, automatically generating explanations linking code to intent, and incorporating rigorous human-in-the-loop validation. We argue that by structuring the translation process and embedding traceability and explicability, IDEA mitigates the primary risks associated with intuition-driven code generation, paving the way for its responsible exploration.
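The excerpt above names the Strategy pattern as a candidate for "vibes" emphasizing adaptability. As a minimal illustration of why the pattern suits that intent, the following Java sketch (all class and method names are hypothetical, not taken from the cited work) composes a context object with an interchangeable behavior that can be swapped at runtime:

```java
public class StrategyDemo {
    // Strategy interface: a family of interchangeable behaviors.
    interface TextStrategy {
        String apply(String input);
    }

    // Two concrete strategies (hypothetical examples).
    static class UpperCase implements TextStrategy {
        public String apply(String input) { return input.toUpperCase(); }
    }

    static class Reverse implements TextStrategy {
        public String apply(String input) {
            return new StringBuilder(input).reverse().toString();
        }
    }

    // Context: composed with a strategy at construction time; the
    // behavior can later be replaced without touching the context's code.
    static class TextProcessor {
        private TextStrategy strategy;
        TextProcessor(TextStrategy strategy) { this.strategy = strategy; }
        void setStrategy(TextStrategy strategy) { this.strategy = strategy; }
        String process(String input) { return strategy.apply(input); }
    }

    public static void main(String[] args) {
        TextProcessor p = new TextProcessor(new UpperCase());
        System.out.println(p.process("vibe")); // VIBE
        p.setStrategy(new Reverse());
        System.out.println(p.process("vibe")); // ebiv
    }
}
```

Because the context depends only on the interface, new behaviors can be added without modifying existing code, which is the kind of adaptability the excerpt associates with this pattern.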
Article
Agent-oriented technologies make it possible to perform complex computations, solve multi-level problems, implement integrated control, and simulate real processes, so they are of great applied and practical significance. The second part of this survey examines various approaches to modeling multi-agent systems and current directions in their design, and gives examples of development tools. Considerable attention is paid to existing applications of multi-agent systems. A drawback of the classical modeling approach is its "rigid" models and predefined agent communication protocols, which prevent the full realization of such properties of agent systems as self-organization, adaptation, and the capacity for learning and self-learning. The evolutionary approach is based on organizing computation through interactions, but the structures that emerge require additional analysis. Developing agent applications requires solving the following core tasks: analyzing the subject domain and formalizing it; choosing a multi-agent system model and forming its architecture; choosing an agent model and specifying its properties and behavior; and defining schemes of interaction among agents, and between agents and users.
Preprint
Full-text available
This study discusses the application of Large Language Models (LLMs) to improve automation in the software design process. The research is motivated by industry demand for software development that is faster, more efficient, and more adaptive to changing business needs, where the traditional, still-manual design process is often the main bottleneck. Using an LLM, system requirement specifications written in natural language can be automatically transformed into design artifacts such as UML diagrams, technical specifications, and design pattern recommendations. The research methods include requirements analysis, automatic transformation of specifications into design artifacts using an LLM, measurement of the efficiency and quality of the resulting designs, and a comparative evaluation of manual versus automated approaches. A case study on a smart agriculture system shows that the LLM can produce software architecture designs with significant time savings, on average 60-70% faster than the manual method, and with fairly high technical accuracy for the event-driven architecture paradigm. However, the study also identifies several limitations, such as the LLM's tendency to produce over-engineered solutions, its inability to perform iterative refinement, and its difficulty handling non-functional requirements and complex system integration. A hybrid approach that combines the efficiency of an LLM with human expertise is therefore recommended to balance speed, quality, and relevance in software design. This study is intended as a reference for the future development of software design automation.
Keywords: automation, software design, Large Language Model, software architecture, efficiency, artificial intelligence
Chapter
Reusability in blockchain focuses on reusing pre-built components, such as smart contracts, algorithms, and protocols, across multiple dApps. This reduces development time, cuts costs, and enhances system efficiency. The chapter introduces Smart Contract as a Service (SCaaS), which provides reusable smart contract templates to developers. Reusability aligns with blockchain sustainability by lowering energy consumption and promoting long-term system scalability. By understanding the characteristics of reusable software components, blockchain developers can design and deploy flexible, interoperable, and secure systems, ultimately accelerating blockchain innovation.
Chapter
This chapter introduces the Secure and Sustainable Software Engineering Framework for Blockchain Applications (S3EF-BC), designed to address the unique challenges of blockchain technology, such as scalability, energy efficiency, security, and privacy. The S3EF-BC framework emphasizes secure coding practices, energy-efficient operations, and reusability of blockchain components. The chapter discusses the Agile methods suitable for blockchain development in healthcare and explores strategies for managing cybersecurity attacks and privacy issues. The chapter concludes with a critical evaluation of the S3EF-HBCAs framework, which is tailored to healthcare applications, integrating secure requirements engineering, smart contract validation, and energy-efficient consensus mechanisms.
Conference Paper
Energy research software (ERS) is a central cornerstone to facilitate energy research. However, ERS is developed by researchers who, in many cases, lack formal training in software engineering. This reduces the quality of ERS, leading to limited reproducibility and reusability. To address these issues, we developed ten central recommendations for the development of ERS, covering areas such as conceptualization, development, testing, and publication of ERS. The recommendations are based on the outcomes of two workshops with a diverse group of energy researchers and aim to improve the awareness of research software engineering in the energy domain. The recommendations should enhance the quality of ERS and, therefore, the reproducibility of energy research.
Article
Full-text available
Enterprise systems, such as enterprise resource planning, customer relationship management, and supply chain management systems, are widely used in corporate sectors and are notorious for being large, inflexible and monolithic. Their many application-specific methods are challenging to decouple manually because they manage asynchronous, user-driven business processes and business objects having complex structural relationships. We present an automated technique for identifying parts of enterprise systems that can run separately as fine-grained microservices in flexible and scalable Cloud systems. Our remodularization technique uses both semantic properties of enterprise systems, i.e., domain-level business object and method relationships, together with syntactic features of the methods’ code, e.g., their call patterns and structural similarity. Semantically, business objects derived from databases form the basis for prospective clustering of those methods that act on them as modules, while on a syntactic level, structural and interaction details between the methods themselves provide further insights into module dependencies for grouping, based on K-means clustering and optimization. Our technique was prototyped and validated using two open-source enterprise customer relationship management systems, SugarCRM and ChurchCRM. The empirical results demonstrate improved feasibility of remodularizing enterprise systems, inclusive of coded business objects and methods, compared to microservices constructed using class-level decoupling of business objects only. Furthermore, the microservices recommended, integrated with “backend” enterprise systems, demonstrate improvements in execution efficiency, scalability, and availability.
Chapter
Full-text available
Agility embraces changes in the functional and non-functional requirements. When the latter happens, the architecture needs to evolve, putting architectural refactoring in evidence. Microservices is an architectural style that enables more agility in a system’s architecture, as it favors the evolution of the system by adding new operations. But it also has its liabilities: the number of services can explode, with similar ones being created. Ultimately, that harms the system’s evolution and maintenance. This work addresses these challenges by proposing a catalog of architecture refactorings to promote reusability in Microservices. These refactorings target patterns that embrace data heterogeneity in the APIs and employ metadata to enhance messages and guide processing. We evaluated the catalog with case studies of three real-world applications and conducted change impact analysis in two scenarios: adding a new data provider, and adding a new processing algorithm. The results showed that embracing heterogeneous data in the API enables a more seamless addition of new data providers, and using metadata can strongly decouple the processing algorithms from the data they use. Furthermore, the results point to other improvements in observability, scalability, and infrastructure.
Article
Open access: https://e-publishing.cern.ch/index.php/CIJ/article/view/1582/1423 - This is a methodological note on design principles for sociotechnical artifacts. It is written for CERN IdeaSquare, a broader audience beyond the Information Systems and Design Science Research communities. At its core is the enthusiasm for experimentation-driven innovation research! :-)
Preprint
Full-text available
1. Introduction • Background Legacy code is an important asset for many organizations, yet it often comes with significant challenges. Over time, code can become increasingly complex, hard to understand, and expensive to maintain or modify. This high complexity is often caused by a suboptimal initial design, poorly accommodated changes in business requirements, or inadequate documentation. As a result, developers spend significant time and effort just understanding the code's workflow before they can fix bugs or add new features. This negatively affects team productivity, software quality, and the organization's ability to adapt to market changes. One promising approach to this problem is refactoring: restructuring internal code without changing its external behavior. Furthermore, refactoring guided by design patterns can provide structured, proven solutions to common design problems, potentially improving code quality significantly. Design patterns offer blueprints for code structures that are better organized, more readable, and easier to maintain. However, the effectiveness of design-pattern-based refactoring techniques, especially on highly complex legacy code, still requires in-depth empirical evaluation to understand their quantitative and qualitative impact on readability and maintainability. • Problem Statement Based on the background above, the research questions of this study are: 1. To what extent can design-pattern-based refactoring improve the readability of highly complex legacy code? 2.
How does applying design-pattern-based refactoring affect the maintainability of highly complex legacy code? 3. Which design patterns are most effective at improving readability and maintainability for the common design problems found in highly complex legacy code? 4. What are the practical challenges and considerations when applying design-pattern-based refactoring to complex legacy code? • Research Objectives The objectives of this study are to: 1. Quantitatively and qualitatively evaluate the improvement in readability of highly complex legacy code after applying design-pattern-based refactoring. 2. Analyze the impact of design-pattern-based refactoring on maintainability metrics for highly complex legacy code. 3. Identify and recommend the design patterns most suitable and effective for refactoring legacy code, based on its complexity characteristics and specific design problems.
Article
Full-text available
Common practice in agile delivery planning is based on user requirements-related artifacts. However, an aspect of business process alignment to software product functions comes into focus in the phase of inception of enterprise-aware disciplined agile software projects. This research proposes a method for mapping business process model elements to sets of semantically coupled and prioritized software functions to obtain ordered software product backlog, i.e., work items list. These software functions are derived from primitive business processes and software functional patterns. The mapping table enables assignment of primitive business processes to categorized software functions. Derived and prioritized software functions are related to a work item list pattern according to selected technology implementation. This way, a prioritized work items list is formulated, which enables development iteration planning. This method could be useful in software functional design alternatives comparison, change management, multi-project integration of software modules to support business processes in information systems, etc. Feasibility of the proposed method has been demonstrated with a case study, related to the development of a billing and reporting software utilized in a private hospital. This case study shows usability of the proposed method in the case of two related development projects that enable software functionality enhancement.