September 2024
·
4 Reads
July 2024
·
7 Reads
Software products have many configurations to meet different environments and diverse needs. Building software with multiple configurations typically incurs high costs in terms of build time and computing resources. Incremental builds can reuse intermediate artifacts if configuration settings affect only a portion of the build artifacts. The efficiency gains depend on the strategic ordering of the incremental builds, as the order influences which build artifacts can be reused. Deriving an efficient order is challenging and an open problem, since it is infeasible to reliably determine the degree of reuse and time savings before an actual build. In this paper, we propose an approach, called BUDDI (BUild Declaration DIstance), for C-based and Make-based projects to derive an efficient order for incremental builds from the static information provided by the build scripts (i.e., Makefiles). The core strategy of BUDDI is to measure the distance between the build declarations of configurations and to predict the build size of a configuration from the build targets and build commands in each configuration, since some artifacts can be reused in subsequent builds when the build scripts of different configurations are close in distance. We implemented BUDDI as an automated tool called BuddiPlanner and evaluated it on 20 popular open-source projects, comparing it to a baseline that randomly selects a build order. The experimental results show that the order created by BuddiPlanner outperforms 96.5% (193/200) of the random build orders in terms of build time and reduces the build time by an average of 305.94s (26%) compared to the random build orders, with a median saving of 64.88s (28%). BuddiPlanner thus demonstrates its potential to relieve practitioners of the excessive build times and computational resource burdens caused by building multiple software configurations.
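The ordering heuristic lends itself to a compact illustration. The following is a minimal Python sketch assuming a Jaccard distance over Makefile declaration lines and greedy nearest-neighbor chaining; the function names and the distance choice are illustrative assumptions, not the authors' implementation of BUDDI.

def parse_declarations(makefile_text):
    # Collect "target: prerequisites" declaration lines as a crude feature set.
    return {line.strip() for line in makefile_text.splitlines()
            if ":" in line and not line.lstrip().startswith("#")}

def jaccard_distance(a, b):
    # 0 means identical declaration sets; 1 means nothing shared.
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def greedy_build_order(configs):
    # configs maps a configuration name to its Makefile text.
    feats = {name: parse_declarations(text) for name, text in configs.items()}
    remaining = set(feats)
    current = max(remaining, key=lambda n: len(feats[n]))  # largest build first
    order = [current]
    remaining.remove(current)
    while remaining:
        # Always build the configuration closest to the one just built,
        # so the next incremental build can reuse the most artifacts.
        current = min(remaining,
                      key=lambda n: jaccard_distance(feats[order[-1]], feats[n]))
        order.append(current)
        remaining.remove(current)
    return order

Chaining by declaration distance follows the intuition stated in the abstract: the closer two configurations' build scripts are, the more intermediate artifacts the second build can reuse.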
July 2024
·
30 Reads
·
1 Citation
Journal of Software: Evolution and Process
Software process simulation (SPS) has become an effective tool for software process management and improvement. However, its adoption in industry falls short of what the research community expected, due to the burden of measurement cost and the high demand for domain knowledge. The difficulty of extracting appropriate metrics with real data from process enactment is one of the great challenges. We aim to provide evidence-based support of the process metrics for software process (simulation) modeling. A systematic literature review was performed by extending our previous review series to draw a comprehensive understanding of the metrics for process modeling, following our proposed ontology of metrics in SPS. We identified 131 process modeling studies that collectively involve 1975 raw metrics and classified the metrics into 21 categories using the coding technique. We found that product and process external metrics are used infrequently in SPS modeling, whereas resource external metrics are widely used. We also analyzed the causal relationships between metrics and found that the models exhibit significant diversity, as no pairwise relationship between metrics accounts for more than 10% of SPS models. We identified 17 data issues that may be encountered in measurement and 10 coping strategies. The results of this study provide process modelers with an evidence-based reference for the identification and use of metrics in SPS modeling, and further contribute to the development of the body of knowledge on software metrics in the context of process modeling. Furthermore, this study is not limited to process simulation but can be extended to software process modeling in general. Taking simulation metrics as standards and references can further motivate and guide software developers to improve the collection, governance, and application of process data in practice.
July 2024
·
7 Reads
June 2024
·
7 Reads
Journal of Software: Evolution and Process
Microservice architecture (MSA) is a mainstream architectural style due to its high maintainability and scalability. In practice, an appropriate microservice-oriented decomposition is the foundation for a system to enjoy the benefits of MSA. In terms of decomposing monolithic systems into microservices, researchers have explored many optimization objectives, of which modularity is the predominantly focused quality attribute. Security is also a critical quality attribute, which measures the extent to which a system protects data from malicious access or use by attackers. Considering security in microservice-oriented decomposition can help avoid the risk of leaking critical data and other unexpected software security issues. However, few researchers consider the security objective during microservice-oriented decomposition, because the measurement of security and its trade-off with other objectives are challenging in reality. To bridge this research gap, we propose a security-optimized approach for microservice-oriented decomposition (So4MoD). In this approach, we adapt five metrics from previous studies to measure the data security of candidate microservices. A multi-objective optimization algorithm based on NSGA-II is designed to search for microservices with optimized security and modularity. To validate the effectiveness of the proposed So4MoD, we performed several experiments on eight open-source projects and compared the decomposition results to three other state-of-the-art approaches, namely FoSCI, CO-GCN, and MSExtractor. The experimental results show that our approach achieves at least an 11.5% improvement in terms of security metrics. Moreover, the decomposition results of So4MoD outperform the other approaches in four modularity metrics, demonstrating that So4MoD can optimize data security while pursuing a well-modularized MSA.
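At the core of an NSGA-II style search is Pareto ranking over the competing objectives. The Python sketch below is a simplified illustration rather than So4MoD's code: candidate decompositions scored on (security, modularity) are filtered to the non-dominated front, and the scores are made-up placeholders.

def dominates(a, b):
    # a dominates b if it is no worse on every objective (both maximized)
    # and strictly better on at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    # Indices of the non-dominated candidates (NSGA-II's first front).
    return [i for i, a in enumerate(candidates)
            if not any(dominates(b, a) for j, b in enumerate(candidates) if j != i)]

# Each tuple: (security score, modularity score) of one candidate decomposition.
scores = [(0.70, 0.40), (0.55, 0.62), (0.70, 0.35), (0.40, 0.65)]
print(pareto_front(scores))  # -> [0, 1, 3]; candidate 2 is dominated by candidate 0

NSGA-II iterates this ranking (together with crowding distance and genetic operators) to evolve a population of decompositions toward the security/modularity trade-off surface.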
June 2024
Journal of Software: Evolution and Process
“Shift To Left” is the cornerstone of the successful implementation of DevSecOps. By testing projects for vulnerabilities in the early stages of development, teams can save overall costs before security issues reach the build phase. As one of the popular practices in “Shift To Left,” the Software Composition Analysis (SCA) system aims to leverage the Software Bill of Materials (SBOM) to enhance software supply chain security. However, the SBOM lacks mature generation and distribution mechanisms, requiring incentive measures to drive industry consensus. Additionally, the data and tools associated with the SBOM lack effective record-keeping and monitoring, making it challenging to ensure data integrity and tool security. Traditional SCA systems treat the SBOM as a regular data format for external service provision, yet fail to solve problems such as the lack of shared platforms, the inability to guarantee data integrity and tool security, and poor interoperability. This paper introduces blockchain technology into the SCA system, utilizing smart contracts to provide core SBOM tool services and microservices to improve the operational efficiency of smart contract deployment and maintenance. The proposed SCA system effectively provides a shared platform for the SBOM with reliable data integrity, guaranteed tool security, and good interoperability.
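The integrity guarantee can be pictured with a toy, off-chain mock: an append-only ledger of SBOM digests standing in for the smart contract. Everything below is an illustrative assumption in Python, not the paper's system.

import hashlib, json, time

class SbomLedger:
    def __init__(self):
        self._entries = []  # append-only, mimicking immutable chain state

    def _digest(self, sbom):
        return hashlib.sha256(json.dumps(sbom, sort_keys=True).encode()).hexdigest()

    def register(self, sbom):
        # Record the SBOM's digest so later tampering is detectable.
        digest = self._digest(sbom)
        self._entries.append({"digest": digest, "timestamp": time.time()})
        return digest

    def verify(self, sbom):
        # An SBOM is intact iff its digest was previously registered.
        return any(e["digest"] == self._digest(sbom) for e in self._entries)

ledger = SbomLedger()
ledger.register({"name": "demo-app", "components": ["openssl@3.0.13"]})
print(ledger.verify({"name": "demo-app", "components": ["openssl@3.0.13"]}))  # True
print(ledger.verify({"name": "demo-app", "components": ["openssl@1.1.1"]}))   # False

On a real blockchain the register/verify logic would live in a smart contract, so that no single party can rewrite the recorded digests.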
June 2024
·
91 Reads
·
5 Citations
IEEE Transactions on Software Engineering
MicroService Architecture (MSA), a predominant architectural style in recent years, still faces the arduous task of identifying the boundaries of microservices. Domain-Driven Design (DDD) is regarded as one of the major design methods for addressing this task in practice; it aims to iteratively build domain models using a series of patterns, principles, and practices. The adoption of DDD for MSA (DDD4M in short) can, however, present considerable challenges in terms of a sufficient understanding of the methodological requirements and the application domains. It is imperative to establish a systematic understanding of the various aspects of employing DDD4M and to provide effective guidance. This study reports an empirical inquiry that integrates a systematic literature review and a confirmatory survey. By reviewing 34 scientific studies and consulting 63 practitioners, this study reveals several distinctive findings with regard to the state and challenges of, as well as possible solutions for, DDD4M applications, from the 5W1H perspectives: When, Where, Why, Who, What, and How. The analysis and synthesis of evidence show a wide variation in understanding of domain modeling artifacts. The status quo indicates the need for further methodological support in terms of the application process, domain model design and implementation, and domain knowledge acquisition and management. To advance the state of the practice, our findings were organized into a preliminary checklist intended to assist practitioners by illuminating a DDD4M application process and the specific key considerations along the way.
December 2023
·
24 Reads
·
3 Citations
IEEE Transactions on Software Engineering
Continuous Integration (CI) enables developers to detect defects early and thus reduce lead time. However, the high frequency and long duration of executing CI have a detrimental effect on this practice. Existing studies have focused on using CI outcome predictors to reduce the frequency of builds. Since no reported project uses predictive CI, it is difficult to evaluate its economic impact. This research aims to investigate predictive CI from a process perspective, including why and when to adopt predictors, what predictors to use, and how to practice predictive CI in real projects. We innovatively employ Software Process Simulation to simulate a predictive CI process with a Discrete-Event Simulation (DES) model and conduct simulation-based experiments. We develop the Rollback-based Identification of Defective Commits (RIDEC) method to account for the negative effects of false predictions in simulations. Experimental results show that: 1) using predictive CI generally improves the effectiveness of CI, reducing time costs by up to 36.8% and the average waiting time before executing CI by 90.5%; 2) the time savings vary across projects, with higher-commit-frequency projects benefiting more; and 3) predictor performance does not strongly correlate with time savings, but more attention should be paid to the precision of both failed and passed predictions. Simulation-based evaluation helps identify overlooked aspects in existing research. Predictive CI saves time and resources, but improved prediction performance has limited cost-saving benefits. The primary value of predictive CI lies in providing accurate and quick feedback to developers, aligning with the goal of CI.
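The simulation idea can be sketched in a few lines of Python. The toy discrete-event model below is an assumption-laden simplification (single executor, exponential inter-arrival times, a flat predictor accuracy), not the paper's RIDEC/DES model; it only shows how skipping predicted-pass builds reduces queueing before CI.

import random

def simulate(n_commits=1000, interarrival=5.0, build_time=12.0,
             fail_rate=0.15, predictor_accuracy=0.9, predictive=True):
    random.seed(0)
    executor_free_at = 0.0   # time when the single CI executor becomes idle
    total_wait, builds_run, t = 0.0, 0, 0.0
    for _ in range(n_commits):
        t += random.expovariate(1.0 / interarrival)      # next commit arrives
        fails = random.random() < fail_rate
        predicted_fail = fails if random.random() < predictor_accuracy else not fails
        if predictive and not predicted_fail:
            continue  # skip the full build; feedback is immediate (risking misses)
        start = max(t, executor_free_at)                 # wait for the executor
        total_wait += start - t
        executor_free_at = start + build_time
        builds_run += 1
    return builds_run, total_wait / max(builds_run, 1)

for mode in (False, True):
    builds, wait = simulate(predictive=mode)
    print(f"predictive={mode}: {builds} builds, avg wait {wait:.1f}s")

False predictions are the crux: a missed failing commit propagates defects until a later build catches them, which is the rollback cost that RIDEC is designed to account for.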
October 2023
·
22 Reads
·
4 Citations
Software Practice and Experience
As a predominant design method for microservices architecture (MSA), domain-driven design (DDD) utilizes a series of standard patterns in both models and implementations to effectively support the design of architectural elements. However, an implementation may deviate from the original domain model that uses certain patterns. The deviation between a domain model and its implementation is a type of architectural drift, which needs to be detected promptly. This paper proposes an approach, namely DOMICO, to check the conformance between a domain model and its implementation, in which conformance is formalized by defining eight common structural patterns of domain modeling and their representations in both models and the corresponding source code. Based on this formalization, our approach can not only identify discrepancies (e.g., divergence, absence, and modification) with respect to pattern elements, but also detect possible violations of 24 compliance rules imposed by the patterns. To validate DOMICO, we performed a case study to investigate its use in a supply chain project and its performance. The results show that DOMICO accurately identified 100% of the inconsistency issues in the cases examined. As the first conformance checking approach for DDD, DOMICO can be integrated into the regular domain modeling process and help ensure the conformity of microservice implementations to their models.
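The discrepancy categories in the abstract map naturally onto a set comparison between model and code. The Python sketch below assumes both sides have already been reduced to {element: pattern} maps; the names and the reduction step are illustrative, not DOMICO's actual representation.

def check_conformance(model, code):
    # model and code map element names to the structural pattern they use,
    # e.g. "Order" -> "AggregateRoot".
    report = {"absence": [], "modification": [], "divergence": []}
    for element, pattern in model.items():
        if element not in code:
            report["absence"].append(element)        # modeled, never implemented
        elif code[element] != pattern:
            report["modification"].append(element)   # implemented with another pattern
    report["divergence"] = [e for e in code if e not in model]  # code-only elements
    return report

model = {"Order": "AggregateRoot", "Address": "ValueObject"}
code = {"Order": "Entity", "OrderCache": "Service"}
print(check_conformance(model, code))
# {'absence': ['Address'], 'modification': ['Order'], 'divergence': ['OrderCache']}

DOMICO's rule checking goes further than this sketch, validating 24 compliance rules over how pattern elements may reference one another.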
August 2023
·
40 Reads
·
4 Citations
CCF Transactions on High Performance Computing
To interconnect the large number of coexisting blockchains, such as Bitcoin, Ethereum, and other types of blockchains, more and more scholars have paid attention to cross-chain technology in recent years. However, the cross-chain exchange of transactions also imposes stricter requirements on the concurrent running speed of blockchains, which indirectly affects the performance and security of cross-chain systems. Therefore, evaluating and optimizing cross-chain technologies is of great significance for forming new internet value models. Queuing theory has been widely used to model various blockchain transaction processes and provide replicable performance evaluation results. However, existing research has overlooked the limitations of cross-chain systems: many works focus on modeling, simulating, and analyzing the performance of traditional blockchain systems rather than cross-chain processes. To fill this gap, our study takes Cosmos, a typical cross-chain system implemented through a relay mode, as an example and proposes a queuing theory model based on finite space. Several performance indicators, such as the average queue length, transaction rejection probability, and system throughput, are obtained through a three-dimensional continuous-time Markov process. Finally, we simulate the analytical solutions of the relevant performance indicators through experiments to verify the proposed model's effectiveness. This analysis method can be extended to other blockchain systems with similar cross-chain processes.
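For intuition, a one-dimensional relative of the paper's model is the finite-capacity M/M/1/K queue, whose steady-state distribution yields the same kinds of indicators (average queue length, rejection probability, throughput). The Python snippet below works that textbook case; the paper's three-dimensional Markov process is substantially richer.

def mm1k_metrics(lam, mu, K):
    # Steady-state probabilities p[n] of n pending transactions, n = 0..K,
    # from the birth-death balance equations: p[n] proportional to (lam/mu)^n.
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    Z = sum(weights)
    p = [w / Z for w in weights]
    avg_queue_len = sum(n * p[n] for n in range(K + 1))
    rejection_prob = p[K]                    # arrivals are blocked when full
    throughput = lam * (1 - rejection_prob)  # accepted-transaction rate
    return avg_queue_len, rejection_prob, throughput

L, pK, X = mm1k_metrics(lam=8.0, mu=10.0, K=20)
print(f"avg queue length {L:.2f}, rejection prob {pK:.4f}, throughput {X:.2f}")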
... Another study shows that a particular difficulty lies in translating domain concepts into appropriate microservice structures, which can manifest in services that are too coarse-grained or too fine-grained (Zhong et al. 2024). In line with these ideas, our data show, in addition to the correlations reported in this article, significant positive associations between microservice architecture and task interdependence (r = 0.31, p = 0.002, measured by three items) and coordination problems (r = 0.21, p = 0.04, measured by three items), which likewise suggests that the attempt to design autonomous, independently deployable services does not always succeed in practice. ...
June 2024
IEEE Transactions on Software Engineering
... If num(ea*) ≤ 2 then ... The substructure of a real-life business process can contain various nested types [41]. Four common nested substructures are described in Figure 7. ...
October 2023
Software Practice and Experience
... In contrast, a prior study reported that block time in Cosmos adheres to the Erlang distribution [14]. Nonetheless, the study was based on the assumption that the verification time required for transaction incorporation into a block in PoS-based blockchains, which do not rely on proof of work (PoW), follows an exponential distribution. ...
August 2023
CCF Transactions on High Performance Computing
... Future work should focus on optimizing the design and implementation of channels in blockchain systems for food testing. This includes exploring how to balance the need for data privacy with the requirement for transparency in food supply chains [89], and investigating how channel configurations can be optimized to enhance system performance and scalability while maintaining high levels of security [90]. • Information sharing: The proof-of-stake protocols in blockchain consistently result in nodes with more information gaining additional data and being selected for mining, leading to an imbalance in information sharing among the participating nodes in the blockchain network [10]. ...
July 2023
Parallel Computing
... By combining Equations (7)–(14), (19)–(24), and (46)–(49), we can form a system of modified equations. This system can be used to obtain a 2(N − b + 1) × 2(N − b + 1) minimum generator matrix. ...
January 2023
... Wang et al. [10] proposed a Lyapunov-based scheduling algorithm to optimize the performance of Hyperledger Fabric. They addressed security and efficiency concerns arising from potential malicious nodes. ...
December 2022
... Our focus in cloud development is on the design and implementation of design patterns, since these two phases may largely impact the quality of cloud-hosted applications [8]. There exist some approaches in the literature that focus on specific quality attributes; for instance, the performance evaluation of microservice-based architectures is gaining increasing attention from the research community, e.g., [9,10,11,12]. However, to the best of our knowledge, existing works lack a systematic and experimental evaluation that quantitatively estimates whether the design patterns address the quality (performance) issues that practitioners experience. ...
October 2022
Journal of Systems and Software
... Likewise, Zhong et al. [78] analyze the impacts, causes, and solutions of ASs as possible root causes of ATD accumulation in MSAs. According to their findings, ASs can arise either inadvertently or intentionally, often due to business aspects and responsibility management. ...
August 2022
Software Practice and Experience
... Other tools like Mythril [9], DefectChecker [25], Manticore [66] and GasChecker [67] also utilize symbolic execution to detect contract vulnerabilities. Symbolic execution involves replacing program variables with symbols and performing symbolic computations during code traversal, generating path conditions for each execution path, which are crucial for verifying path feasibility and detecting potential issues [68]. Osiris [6] combines symbolic execution with taint analysis to identify integer bugs, while Sereum [24] uses taint analysis to uncover vulnerabilities. ...
June 2022
... Other works focused on modeling general performance metrics of the Hyperledger Fabric platform specifically [16][17][18][19]. Jiang et al. [17] developed a hierarchical model for the Hyperledger Fabric 1.4 transaction process and conducted numerical analyses of throughput, discard rate, and mean response time. ...
June 2022