Conference Paper

Darwinian Data Structure Selection


Abstract

Data structure selection and tuning is laborious but can vastly improve an application’s performance and memory footprint. Some data structures share a common interface and enjoy multiple implementations. We call them Darwinian Data Structures (DDS), since we can subject their implementations to survival of the fittest. We introduce ARTEMIS, a multi-objective, cloud-based, search-based optimisation framework that automatically finds optimal, tuned DDS modulo a test suite, then changes an application to use that DDS. ARTEMIS achieves substantial performance improvements for every project in a corpus of 43 Java projects: 5 from the DaCapo benchmark suite, 8 popular GitHub projects, and 30 projects uniformly sampled from GitHub. For execution time, CPU usage, and memory consumption, ARTEMIS finds at least one solution that improves all three measures for 86% (37/43) of the projects. The median improvement across the best solutions is 4.8% for runtime, 10.1% for memory, and 5.1% for CPU usage. These aggregate results understate ARTEMIS’s potential impact, because some of the benchmarks it improves are libraries or utility programs. Two examples are gson, a ubiquitous Java serialization framework, and xalan, Apache’s XML transformation tool. ARTEMIS improves gson by 16.5%, 1%, and 2.2% for memory, runtime, and CPU, and improves xalan’s memory consumption by 23.5%. Every client of these projects will benefit from these performance improvements.
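The kind of change described above can be illustrated with a minimal Java sketch (illustrative only, not code produced by ARTEMIS): because the variable is declared against the shared List interface, only the allocation site changes, and a tuned constructor argument can be explored at the same time.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a Darwinian Data Structure substitution: the variable
// keeps its List interface, so the rest of the program is untouched.
public class DdsExample {
    // Original declaration, e.g. new LinkedList<>(), replaced by the search with
    // an ArrayList variant whose initial capacity is also tuned:
    List<Integer> samples = new ArrayList<>(1024);

    long sum(int n) {
        for (int i = 0; i < n; i++) {
            samples.add(i);
        }
        long total = 0;
        for (int i = 0; i < samples.size(); i++) {  // O(1) get() on ArrayList
            total += samples.get(i);
        }
        return total;
    }
}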


... [4]) and memory consumption (e.g. [5]), among others. Changes evolved by GI have been incorporated into development [6], and GI-based repair has been incorporated into the software development process [7]. ...
... Much less work has been devoted to operators for improvement of non-functional properties [13]. The only exceptions are work on exchanging Java Collections [5], [14] and on tuning parameters embedded in code (so-called deep parameter tuning [15]). ...
... However, a question arises whether more effective mutation operators exist. Several researchers have looked into this issue by proposing to tune parameters embedded in code [7], [15], or to replace Java Collections [5], [14]. In the related field of automated program repair (APR), more targeted mutations have been proposed, for example, borrowing from human-evolved patches [9], [11], or programming-language-specific ones [12]. ...
Conference Paper
Genetic Improvement of software applies search methods to existing software to improve the target program in some way. Impressive results have been achieved, including substantial speedups, using simple operations that replace, swap and delete lines or statements within the code. Often this is achieved by specialising code, removing parts that are unnecessary for particular use-cases. Previous work has shown that there is a great deal of potential in targeting more specialised operations that modify the code to achieve the same functionality in a different way. We propose six new edit types for Genetic Improvement of Java software, based on the insertion of break, continue and return statements. The idea is to add shortcuts that allow parts of the program to be skipped in order to speed it up. 10,000 randomly generated instances of each edit were applied to three open-source applications taken from GitHub. The key findings are: (1) compilation rates for inserted statements without surrounding “if” statements are 1.3-18.3%; (2) edits where the inserted statement is embedded within an “if” have compilation rates of 3.2-55.8%; (3) of those that compiled, all 6 edits have a high rate of passing tests (Neutral Variant Rate), >60% in all but one case, and so have the potential to be performance-improving edits. Finally, a preliminary experiment based on local search shows how these edits might be used in practice.
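A hedged before/after sketch in Java of the kind of edit the paper proposes (the method and data are hypothetical, not taken from the study): a break statement guarded by an if is inserted so that the remaining loop iterations are skipped once the result can no longer change; whether such an edit compiles and passes the tests is exactly what the reported rates measure.

// Hypothetical target method with one inserted, guarded shortcut edit.
public class ShortcutEditExample {
    static boolean containsNegative(int[] values) {
        boolean found = false;
        for (int v : values) {
            if (v < 0) {
                found = true;
            }
            if (found) {   // inserted edit: guarded break skips the rest of the loop
                break;
            }
        }
        return found;
    }
}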
... This tool, Sosiefier, is open source and available online. The analysis and transformation of the JAVA AST mostly rely on another open source library called Spoon [27]. ...
... All this work leverages the existence of code plasticity, and the performance of the search process can be improved with targeted program transformations. In particular, our results with the swap subtype transformation show that changing a library is very effective for generating neutral variants, and that this transformation is a key enabler for improving performance [3]. ...
... Shacham et al. [35] and, more recently, Basios et al. [3] investigate source code transformations to replace libraries and data structures, in a similar way to the swap subtype transformation. This corroborates the idea of a certain plasticity around these data structures, and the notion of interface. ...
Article
Full-text available
Neutral program variants are alternative implementations of a program, yet equivalent with respect to the test suite. Techniques such as approximate computing or genetic improvement share the intuition that potential for enhancements lies in these acceptable behavioral differences (e.g., enhanced performance or reliability). Yet, the automatic synthesis of neutral program variants, through program transformations remains a key challenge. This work aims at characterizing plastic code regions in Java programs, i.e., the code regions that are modifiable while maintaining functional correctness, according to a test suite. Our empirical study relies on automatic variations of 6 real-world Java programs. First, we transform these programs with three state-of-the-art program transformations: add, replace and delete statements. We get a pool of 23,445 neutral variants, from which we gather the following novel insights: developers naturally write code that supports fine-grain behavioral changes; statement deletion is a surprisingly effective program transformation; high-level design decisions, such as the choice of a data structure, are natural points that can evolve while keeping functionality. Second, we design 3 novel program transformations, targeted at specific plastic regions. New experiments reveal that respectively 60%, 58% and 73% of the synthesized variants (175,688 in total) are neutral and exhibit execution traces that are different from the original.
... For example, note that all dynamic approaches support representation switching while most static approaches do not. De Wael et al. [13] and Xu [41] both allow the user to specify the data and switching criteria with varying levels of compiler support, while [34] ... [The rest of this excerpt is Table 2 of the citing work, an overview of related work covering, among others, PetaBricks [3], Brainy [10,17], CoCo [41], Late Data Layout [38], SimpleL [6], JitDS [13], Artemis [4,5], DBFlex [35], Cres [40], CT+ [27,28], and SETL [33]; a non-quantitative challenge from its Section 1 is marked if a work mentions it explicitly or strongly implies support for it.] ...
... Section 4.1) where implementations may mutate or read properties, and representations are chosen based on observable effect on these properties. Brainy [10,17] builds a machine learning model to select representations, while Artemis [4,5] uses a genetic algorithm combined with benchmarking. PetaBricks [3] focuses on operations and implementations, permitting far greater granularity in composition, e.g., automatically switching implementations depending on input size, including in recursive calls from other implementations, or decomposing the input and running separate implementations on each part. ...
Preprint
Full-text available
The choice of how to represent an abstract type can have a major impact on the performance of a program, yet mainstream compilers cannot perform optimizations at such a high level. When dealing with optimizations of data type representations, an important feature is having extensible representation-flexible data types: the ability for a programmer to add new abstract types and operations, as well as concrete implementations of these, without modifying the compiler or a previously defined library. Many research projects support high-level optimizations through static analysis, instrumentation, or benchmarking, but they are all restricted in at least one aspect of extensibility. This paper presents a new approach to representation-flexible data types without such restrictions and which still finds efficient optimizations. Our approach centers around a single built-in type repr and function overloading with cost annotations for operation implementations. We evaluate our approach (i) by defining a universal collection type as a library, a single type for all conventional collections, and (ii) by designing and implementing a representation-flexible graph library. Programs using repr types are typically faster than programs with idiomatic representation choices -- sometimes dramatically so -- as long as the compiler finds good implementations for all operations. Our compiler performs the analysis efficiently by finding optimized solutions quickly and by reusing previous results to avoid recomputations.
... Time is the concern addressed in the vast majority of papers, with 34 papers considering execution time [1, 2, 7, 8, 10, 14, 15, 17, 24, 32, 35, 39, 41-44, 47-50, 55, 58-63, 68, 70-72, 75, 77, 87, 88], number of CPU or bytecode instructions [4,11,12,21,22,85], or also loading time [23]. Other NFPs include code size [25,38,90,91], energy consumption [13,18,19,27], memory usage [7,8,88], accuracy of the underlying algorithm [30,31,59,60,62,81], readability [73], or other application-specific NFPs [37, 40, 45, 46, 51-53, 64, 65]. A summary is presented in Figure 1. ...
... A summary is presented in Figure 1. Furthermore, a few pieces of work considered multiple NFPs [7,8,27,59,60,62,88]. ...
Conference Paper
Genetic improvement (GI) improves both functional properties of software, such as bug repair, and non-functional properties, such as execution time, energy consumption, or source code size. There are studies summarising and comparing GI tools for improving functional properties of software; however, there is no such study for improvement of its non-functional properties using GI. Therefore, this research aims to survey and report on the existing GI tools for improvement of non-functional properties of software. We conducted a literature review of available GI tools, and ran multiple experiments on the found open-source tools to examine their usability. We applied a cross-testing strategy to check whether the available tools can work on different programs. Overall, we found 63 GI papers that use a GI tool to improve non-functional properties of software, within which 31 are accompanied with open-source code. We were able to successfully run eight GI tools, and found that ultimately only two, Gin and PyGGI, can be readily applied to new general software.
... The natural emergence of functionally diverse implementations of the same features has been harnessed in several ways in the past, to reduce common failure [52]. For example, collection libraries exist in many different implementations, which can be selected according to application-specific performance requirements, either statically [53] or dynamically [16], [53]. We have previously harnessed the natural diversity of Java decompilers to improve the overall precision of decompilation [54]. ...
Preprint
Full-text available
Despite its obvious benefits, the increased adoption of package managers to automate the reuse of libraries has opened the door to a new class of hazards: supply chain attacks. By injecting malicious code in one library, an attacker may compromise all instances of all applications that depend on the library. To mitigate the impact of supply chain attacks, we propose the concept of a Library Substitution Framework. This novel concept leverages one key observation: when an application depends on a library, it is very likely that other libraries exist that provide similar features. The key objective of a Library Substitution Framework is to enable the developers of an application to harness this diversity of libraries in their supply chain. The framework lets them generate a population of application variants, each depending on a different alternative library that provides similar functionalities. To investigate the relevance of this concept, we develop ARGO, a proof-of-concept implementation of this framework that harnesses the diversity of JSON suppliers. We study the feasibility of library substitution and its impact on a set of 368 clients. Our empirical results show that for 195 of the 368 Java applications tested, we can replace the original JSON library used by the client with at least 15 other JSON libraries without modifying the client's code. These results show the capacity of a Library Substitution Framework to diversify the supply chain of the client applications of the libraries it targets.
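A minimal sketch of the facade idea behind such a framework, assuming Gson and Jackson as two interchangeable JSON suppliers (the interface and adapter names are illustrative, not ARGO's actual API): client code depends only on the small facade, and each application variant binds a different adapter behind it.

import java.util.Map;
import com.google.gson.Gson;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch of a library-substitution facade: variants differ only in which
// adapter is instantiated, so the client code never changes.
interface JsonFacade {
    Map<String, Object> parse(String json) throws Exception;
}

class GsonAdapter implements JsonFacade {
    private final Gson gson = new Gson();
    @SuppressWarnings("unchecked")
    public Map<String, Object> parse(String json) {
        return gson.fromJson(json, Map.class);
    }
}

class JacksonAdapter implements JsonFacade {
    private final ObjectMapper mapper = new ObjectMapper();
    @SuppressWarnings("unchecked")
    public Map<String, Object> parse(String json) throws Exception {
        return mapper.readValue(json, Map.class);
    }
}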
... All these works leverage the existence of code plasticity, and the performance of the search process can be improved with targeted speculative transformations. In particular, our results with the swap subtype transformation show that changing a library is very effective for generating neutral variants, and that this transformation is a key enabler for improving performance [2]. ...
... Shacham and colleagues [28] and, more recently, Basios and colleagues [2] investigate source code transformations to replace libraries and data structures, in a similar way to the swap subtype transformation. This corroborates the idea of a certain plasticity around these data structures, and the notion of interface. ...
Preprint
Neutral program variants are functionally similar to an original program, yet implement slightly different behaviors. Techniques such as approximate computing or genetic improvement share the intuition that potential for enhancements lies in these acceptable behavioral differences (e.g., enhanced performance or reliability). Yet, the automatic synthesis of neutral program variants, through speculative transformations remains a key challenge. This work aims at characterizing plastic code regions in Java programs, i.e., the areas that are prone to the synthesis of neutral program variants. Our empirical study relies on automatic variations of 6 real-world Java programs. First, we transform these programs with three state-of-the-art speculative transformations: add, replace and delete statements. We get a pool of 23,445 neutral variants, from which we gather the following novel insights: developers naturally write code that supports fine-grain behavioral changes; statement deletion is a surprisingly effective speculative transformation; high-level design decisions, such as the choice of a data structure, are natural points that can evolve while keeping functionality. Second, we design 3 novel speculative transformations, targeted at specific plastic regions. New experiments reveal that respectively 60%, 58% and 73% of the synthesized variants (175,688 in total) are neutral and exhibit execution traces that are different from the original.
... and logical operators (such as ||/&&) [31]. Basios et al. proposed Darwinian evolution [10,11] which evolves software's data structures. Their study on Java software identified optimal implementations of List containers for specific variables (e.g., ArrayList, LinkedList). ...
Article
Performance is one of the most important qualities of software. Several techniques have thus been proposed to improve it, such as program transformations, optimisation of software parameters, or compiler flags. Many automated software improvement approaches use similar search strategies to explore the space of possible improvements, yet available tooling only focuses on one approach at a time. This makes comparisons and exploration of interactions of the various types of improvement impractical. We propose MAGPIE, a unified software improvement framework. It provides a common edit sequence based representation that isolates the search process from the specific improvement technique, enabling a much simplified synergistic workflow. We provide a case study using a basic local search to compare compiler optimisation, algorithm configuration, and genetic improvement. We chose running time as our efficiency measure and evaluated our approach on four real-world software systems, written in C, C++, and Java. Our results show that, used independently, all techniques find significant running time improvements: up to 25% for compiler optimisation, 97% for algorithm configuration, and 61% for evolving source code using genetic improvement. We also show that up to 10% further increase in performance can be obtained with partial combinations of the variants found by the different techniques. Furthermore, the common representation also enables simultaneous exploration of all techniques, providing a competitive alternative to using each technique individually.
... Model Code Optimisation: Using evoML, users can identify the optimal model and further enhance the speed and efficiency of the model. This is achieved through the use of a variety of techniques including lower level source code optimisation techniques [4] and modifications to the internal representation of the model [5]. This component enables users to improve the speed and efficiency of the model by optimising the way in which it processes data and makes predictions. ...
Preprint
Full-text available
Machine learning model development and optimisation can be a rather cumbersome and resource-intensive process. Custom models are often more difficult to build and deploy, and they require infrastructure and expertise which are often costly to acquire and maintain. Machine learning product development lifecycle must take into account the need to navigate the difficulties of developing and deploying machine learning models. evoML is an AI-powered tool that provides automated functionalities in machine learning model development, optimisation, and model code optimisation. Core functionalities of evoML include data cleaning, exploratory analysis, feature analysis and generation, model optimisation, model evaluation, model code optimisation, and model deployment. Additionally, a key feature of evoML is that it embeds code and model optimisation into the model development process, and includes multi-objective optimisation capabilities.
... By combining both the preliminary and the systematic repository search we thus construct a very rich and diverse corpus of 386 relevant publications that is, by construction, relevant in terms of coverage of the different aspects. [The rest of this excerpt is a garbled fragment of the citing survey's per-category paper listings (benchmark, report, search, static) and per-repository counts.] Despite this, the corpus contains two papers simultaneously returned by ACM, IEEE, Scopus, and our manual search [328,375]. ...
Preprint
Full-text available
Performance is a key quality of modern software. Although recent years have seen a spike in research on automated improvement of software's execution time, energy, memory consumption, etc., there is a noticeable lack of standard benchmarks for such work. It is also unclear how representative such benchmarks are of current software. Furthermore, non-functional properties of software are frequently targeted for improvement one at a time, neglecting potential negative impact on other properties. In order to facilitate more research on automated improvement of non-functional properties of software, we conducted a survey gathering benchmarks used in previous work. We considered 5 major online repositories of software engineering work: ACM Digital Library, IEEE Xplore, Scopus, Google Scholar, and ArXiV. We gathered 5000 publications (3749 unique), which were systematically reviewed to identify work that empirically improves non-functional properties of software. We identified 386 relevant papers. We find that execution time is the most frequently targeted property for improvement (in 62% of relevant papers), while multi-objective improvement is rarely considered (5%). Static approaches are prevalent (in 53% of papers), with exploratory approaches (evolutionary in 18% and non-evolutionary in 14% of papers) increasingly popular in the last 10 years. Only 40% of the 386 papers describe work that uses benchmark suites, rather than single software; of those, SPEC is the most popular (covered in 33 papers). We also provide recommendations for the choice of benchmarks in future work, noting, e.g., the lack of work that covers Python or JavaScript. We provide all programs found in the 386 papers on our dedicated webpage at https://bloa.github.io/nfunc_survey/ and we hope that this effort will facilitate more research on the topic of automated improvement of software's non-functional properties.
... and logical operators (such as ||/&&) [31]. Basios et al. proposed Darwinian evolution [10,11] which evolves software's data structures. Their study on Java software identified optimal implementations of List containers for specific variables (e.g., ArrayList, LinkedList). ...
Preprint
Full-text available
Performance is one of the most important qualities of software. Several techniques have thus been proposed to improve it, such as program transformations, optimisation of software parameters, or compiler flags. Many automated software improvement approaches use similar search strategies to explore the space of possible improvements, yet available tooling only focuses on one approach at a time. This makes comparisons and exploration of interactions of the various types of improvement impractical. We propose MAGPIE, a unified software improvement framework. It provides a common edit sequence based representation that isolates the search process from the specific improvement technique, enabling a much simplified synergistic workflow. We provide a case study using a basic local search to compare compiler optimisation, algorithm configuration, and genetic improvement. We chose running time as our efficiency measure and evaluated our approach on four real-world software systems, written in C, C++, and Java. Our results show that, used independently, all techniques find significant running time improvements: up to 25% for compiler optimisation, 97% for algorithm configuration, and 61% for evolving source code using genetic improvement. We also show that up to 10% further increase in performance can be obtained with partial combinations of the variants found by the different techniques. Furthermore, the common representation also enables simultaneous exploration of all techniques, providing a competitive alternative to using each technique individually.
... In the future, we would like to extend this study to investigate in detail what is considered a clone from an industrial point of view, and how we could adapt such a view to guide Clone Detection algorithms to be more practical in industrial cases. Furthermore, we want to demonstrate that Clone Detection techniques can be used in various software optimisation studies [31]-[33], as an additional restriction or even as an optimisation objective during the optimisation process. ...
Preprint
Full-text available
Code clones are identical or similar code segments. The wide existence of code clones can increase the cost of maintenance and jeopardise the quality of software. The research community has developed many techniques to detect code clones; however, there is little evidence of how these techniques may perform in industrial use cases. In this paper, we aim to uncover the differences when such techniques are applied in industrial use cases. We conducted large-scale experimental research on the performance of two state-of-the-art code clone detection techniques, SourcererCC and AutoenCODE, on both open source projects and an industrial project written in the Scala language. Our results reveal that both algorithms perform differently on the industrial project, with the largest drop in precision being 30.7%, and the largest increase in recall being 32.4%. By having its developers manually label samples of the industrial project, we discovered that there are substantially fewer Type-3 clones in the aforementioned project than in the open source projects.
... Additionally, many code bases lack the proper testing and performance benchmarks on which many GI techniques rely. For instance, when we wanted to apply Artemis [34], a GI tool that automatically optimises code by selecting better data structures, to a performance-critical component of a system, we realised that the project did not contain a proper performance benchmark and that relying on the test cases did not reflect realistic application behaviour, thus making the optimisations impractical. Additionally, building a performance benchmark for the specific project would take a long time, and was thus considered a lower priority on the project manager's list of features. ...
Conference Paper
Following Prof. Mark Harman of Facebook's keynote and formal presentations (which are recorded in the proceedings) there was a wide-ranging discussion at the eighth international Genetic Improvement workshop, GI-2020 @ ICSE (held as part of the 42nd ACM/IEEE International Conference on Software Engineering on Friday 3rd July 2020). Topics included industry take-up, human factors, explainability (explainability, justifiability, exploitability) and GI benchmarks. We also contrast various recent online approaches (e.g. SBST 2020) to holding virtual computer science conferences and workshops via the WWW on the Internet without face-to-face interaction. Finally we speculate on how the Coronavirus Covid-19 Pandemic will affect research next year and into the future.
... Muralidharan et al. [37] leverage code variants to adapt performance in the context of GPU code. Basios [38] and Shacham and colleagues [12] exploit the diversity of data structure implementations to tailor the selection according to the application that uses a data structure. Sondhi and colleagues [13] leverage similarities between library implementations to reuse test cases from one to test another. ...
Preprint
Full-text available
JSON is a popular file and data format that is precisely specified by the IETF in RFC 8259. Yet, this specification implicitly and explicitly leaves room for many design choices when it comes to parsing and generating JSON. This yields the opportunity of diverse behavior among independent implementations of JSON libraries. A thorough analysis of this diversity can be used by developers to choose one implementation or to design a resilient multi-version architecture. We present the first systematic analysis and comparison of the input / output behavior of 20 JSON libraries, in Java. We analyze the diversity of architectural choices among libraries, and we execute each library with well-formed and ill-formed JSON files to assess their behavior. We first find that the data structure selected to represent JSON objects and the encoding of numbers are the main design differences, which influence the behavior of the libraries. Second, we observe that the libraries behave in a similar way with regular, well-formed JSON files. However, there is a remarkable behavioral diversity with ill-formed files, or corner cases such as large numbers or duplicate data.
... GI navigates the search space of mutated program variants in order to find one that improves the desired property. This technique has been successfully used to fix bugs [1,14], add additional features [3,19], improve runtime [12] and energy consumption [7], and reduce memory consumption [5,23]. ...
Conference Paper
Full-text available
Genetic Improvement (GI) uses automated search to improve existing software. It can be used to improve runtime, energy consumption, fix bugs, and any other software property, provided that such property can be encoded into a fitness function. GI usually relies on testing to check whether the changes disrupt the intended functionality of the software, which makes test suites important artefacts for the overall success of GI. The objective of this work is to establish which characteristics of the test suites correlate with the effectiveness of GI. We hypothesise that different test suite properties may have different levels of correlation to the ratio between overfitting and non-overfitting patches generated by the GI algorithm. In order to test our hypothesis, we perform a set of experiments with automatically generated test suites using EvoSuite and 4 popular coverage criteria. We used these test suites as input to a GI process and collected the patches generated throughout such a process. We find that while test suite coverage has an impact on the ability of GI to produce correct patches, with branch coverage leading to least overfitting, the overfitting rate was still significant. We also compared automatically generated tests with manual, developer-written ones and found that while manual tests had lower coverage, the GI runs with manual tests led to less overfitting than in the case of automatically generated tests. Finally, we did not observe enough statistically significant correlations between the coverage metrics and overfitting ratios of patches, i.e., the coverage of test suites cannot be used as a linear predictor for the level of overfitting of the generated patches.
... Additionally, many code bases lack the proper testing and performance benchmarks on which many GI techniques rely. For instance, when we wanted to apply Artemis [34], a GI tool that automatically optimises code by selecting better data structures, to a performance-critical component of a system, we realised that the project did not contain a proper performance benchmark and that relying on the test cases did not reflect realistic application behaviour, thus making the optimisations impractical. Additionally, building a performance benchmark for the specific project would take a long time, and was thus considered a lower priority on the project manager's list of features. ...
Article
Following Prof. Mark Harman of Facebook's keynote and formal presentations (which are recorded in the proceedings) there was a wide-ranging discussion at the eighth international Genetic Improvement workshop, GI-2020 @ ICSE (held as part of the International Conference on Software Engineering on Friday 3rd July 2020). Topics included industry take-up, human factors, explainability (explainability, justifiability, exploitability) and GI benchmarks. We also contrast various recent online approaches (e.g. SBST 2020) to holding virtual computer science conferences and workshops via the WWW on the Internet without face-to-face interaction. Finally we speculate on how the Coronavirus Covid-19 Pandemic will affect research next year and into the future.
... GI navigates the search space of mutated program variants in order to find one that improves the desired property. This technique has been successfully used to fix bugs [1,14], add additional features [3,19], improve runtime [12] and energy consumption [7], and reduce memory consumption [5,23]. ...
Chapter
Full-text available
Genetic Improvement (GI) uses automated search to improve existing software. It can be used to improve runtime, energy consumption, fix bugs, and any other software property, provided that such property can be encoded into a fitness function. GI usually relies on testing to check whether the changes disrupt the intended functionality of the software, which makes test suites important artefacts for the overall success of GI. The objective of this work is to establish which characteristics of the test suites correlate with the effectiveness of GI. We hypothesise that different test suite properties may have different levels of correlation to the ratio between overfitting and non-overfitting patches generated by the GI algorithm. In order to test our hypothesis, we perform a set of experiments with automatically generated test suites using EvoSuite and 4 popular coverage criteria. We used these test suites as input to a GI process and collected the patches generated throughout such a process. We find that while test suite coverage has an impact on the ability of GI to produce correct patches, with branch coverage leading to least overfitting, the overfitting rate was still significant. We also compared automatically generated tests with manual, developer-written ones and found that while manual tests had lower coverage, the GI runs with manual tests led to less overfitting than in the case of automatically generated tests. Finally, we did not observe enough statistically significant correlations between the coverage metrics and overfitting ratios of patches, i.e., the coverage of test suites cannot be used as a linear predictor for the level of overfitting of the generated patches.
... Additionally, many code bases lack the proper testing and performance benchmarks on which many GI techniques rely. For instance, when we wanted to apply Artemis [34], a GI tool that automatically optimises code by selecting better data structures, to a performance-critical component of a system, we realised that the project did not contain a proper performance benchmark and that relying on the test cases did not reflect realistic application behaviour, thus making the optimisations impractical. Additionally, building a performance benchmark for the specific project would take a long time, and was thus considered a lower priority on the project manager's list of features. ...
Preprint
Full-text available
Following Prof. Mark Harman of Facebook's keynote and formal presentations (which are recorded in the proceedings) there was a wide-ranging discussion at the eighth international Genetic Improvement workshop, GI-2020 @ ICSE (held as part of the 42nd ACM/IEEE International Conference on Software Engineering on Friday 3rd July 2020). Topics included industry take-up, human factors, explainability (explainability, justifiability, exploitability) and GI benchmarks. We also contrast various recent online approaches (e.g. SBST 2020) to holding virtual computer science conferences and workshops via the WWW on the Internet without face-to-face interaction. Finally we speculate on how the Coronavirus Covid-19 Pandemic will affect research next year and into the future.
Article
Despite a recent increase in research on improvement of non-functional properties of software, such as energy usage or program size, there is a lack of standard benchmarks for such work. This absence hinders progress in the field, and raises questions about how representative current benchmarks are of real-world software. To address these issues and facilitate further research on improvement of non-functional properties of software, we conducted a comprehensive survey of the benchmarks used in the field thus far. We searched five major online repositories of research work, collecting 5499 publications (4066 unique), and systematically identified relevant papers to construct a rich and diverse corpus of 425 relevant studies. We find that execution time is the most frequently improved property in research work (63%), while multi-objective improvement is rarely considered (7%). Static approaches for improvement of non-functional software properties are prevalent (51%), with exploratory approaches (18% evolutionary and 15% non-evolutionary) increasingly popular in the last 10 years. Only 39% of the 425 papers describe work that uses benchmark suites, rather than single software; of those, SPEC is the most popular (63 papers). We also provide recommendations for future work, noting, for instance, the lack of benchmarks for non-functional improvement that cover Python, JavaScript, or mobile devices. All the details regarding the 425 identified papers are available on our dedicated webpage: https://bloa.github.io/nfunc_survey.
Article
The GI @ ICSE 2024 workshop, held 16 April, in addition to presentations, contained a keynote on how to use Genetic Improvement to control deep AI large language models in software engineering and a tutorial on a language independent tool for GI research. We summarise these, the papers, people, prizes, acknowledgements, discussions and hopes for the future.
Preprint
Full-text available
GitHub, renowned for facilitating collaborative code version control and software production in software teams, expanded its services in 2017 by introducing GitHub Marketplace. This online platform hosts automation tools to assist developers with the production of their GitHub-hosted projects, and it has become a valuable source of information on the tools used in the Open Source Software (OSS) community. In this exploratory study, we introduce GitHub Marketplace as a software marketplace by comprehensively exploring the platform's characteristics, features, and policies and identifying common themes in production automation. Further, we explore popular tools among practitioners and researchers and highlight disparities in the approach to these tools between industry and academia. We adopted the conceptual framework of software app stores from previous studies to examine 8,318 automated production tools (440 Apps and 7,878 Actions) across 32 categories on GitHub Marketplace. We explored and described the policies of this marketplace as a unique platform where developers share production tools for the use of other developers. Furthermore, we systematically mapped 515 research papers published from 2000 to 2021 and compared open-source academic production tools with those available in the marketplace. We found that although some of the automation topics in literature are widely used in practice, they have yet to align with the state of practice for automated production. We discovered that practitioners often use automation tools for tasks like "Continuous Integration" and "Utilities," while researchers tend to focus more on "Code Quality" and "Testing". Our study illuminates the landscape of open-source tools for automation production in industry and research.
Article
Following the formal presentations, which included keynotes by Prof. Myra B. Cohen of Iowa State University and Dr. Sebastian Baltes of SAP as well as six papers (which are recorded in the proceedings), there was a wide-ranging discussion at the twelfth international Genetic Improvement workshop, GI-2023 @ ICSE, held on Saturday 20th May 2023 in Melbourne and online via Zoom. Topics included GI to improve testing and remove unpleasant surprises in cloud computing costs, incorporating novelty search, large language models (LLM ANN) and GI benchmarks.
Article
Computing the differences between two versions of the same program is an essential task for software development and software evolution research. AST differencing is the most advanced way of doing so, and an active research area. Yet, AST differencing algorithms rely on configuration parameters that may have a strong impact on their effectiveness. In this paper, we present a novel approach named DAT (Diff Auto Tuning) for hyperparameter optimization of AST differencing. We thoroughly state the problem of hyper-configuration for AST differencing. We evaluate our data-driven approach DAT to optimize the edit-scripts generated by the state-of-the-art AST differencing algorithm named GumTree in different scenarios. DAT is able to find a new configuration for GumTree that improves the edit-scripts in 21.8% of the evaluated cases.
Chapter
Automated multi-objective software optimisation offers an attractive solution to software developers wanting to balance often conflicting objectives, such as memory consumption and execution time. Work on using multi-objective search-based approaches to optimise for such non-functional software behaviour has so far been scarce, with tooling unavailable for use. To fill this gap we extended an existing generalist, open source, genetic improvement tool, Gin, with a multi-objective search strategy, NSGA-II. We ran our implementation on mature, large software to show its use. In particular, we chose EvoSuite, a tool for automatic test case generation for Java. We use our multi-objective extension of Gin to improve both the execution time and memory usage of EvoSuite. We find improvements to execution time of up to 77.8% and improvements to memory of up to 9.2% on our test set. We also release our code, providing the first open source multi-objective genetic improvement tooling for improvement of memory and runtime for Java. Keywords: Genetic improvement, Multi-objective optimisation, Search-based software engineering.
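As a rough illustration of the search's core comparison (an assumed sketch, not code from the Gin extension), the following Java fragment shows the Pareto-dominance test used by NSGA-II style algorithms for two minimisation objectives, execution time and memory.

// Hedged sketch: a variant dominates another if it is no worse on both
// objectives and strictly better on at least one (lower is better here).
public class ParetoExample {
    record Variant(String patch, double runtimeMs, double memoryBytes) {}

    static boolean dominates(Variant a, Variant b) {
        boolean noWorse = a.runtimeMs() <= b.runtimeMs()
                && a.memoryBytes() <= b.memoryBytes();
        boolean strictlyBetter = a.runtimeMs() < b.runtimeMs()
                || a.memoryBytes() < b.memoryBytes();
        return noWorse && strictlyBetter;
    }
}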
Article
Containers are ubiquitous data structures that support a variety of manipulations on the elements, inducing indirect value flows in the program. Tracking value flows through containers is stunningly difficult because it depends on container memory layouts, which are expensive to discover. This work presents a fast and precise value-flow analysis framework called Anchor for programs using containers. We introduce the notion of anchored containers and propose the memory orientation analysis to construct a precise value-flow graph. Specifically, we establish a combined domain to identify anchored containers and apply strong updates to container memory layouts. Anchor finally conducts a demand-driven reachability analysis in the value-flow graph for a client. Experiments show that it removes 17.1% of spurious statements from thin slices and discovers 20 null pointer exceptions with a false-positive ratio of 9.1%, while the smashing-based analysis reports 66.7% false positives. Anchor scales to millions of lines of code and checks a program with around 5.12 MLoC within 5 hours.
Article
Full-text available
Nowadays there is increased pressure on mobile app developers to take non-functional properties into account. An app that is too slow or uses much bandwidth will decrease user satisfaction, and thus can lead to users simply abandoning the app. Although automated software improvement techniques exist for traditional software, these are not as prevalent in the mobile domain. Moreover, it is yet unknown whether the same software changes would be as effective. With that in mind, we mined 100 Android repositories to find out how developers improve execution time, memory consumption, bandwidth usage and frame rate of mobile apps. We categorised non-functional property (NFP) improving commits related to performance to see how existing automated software improvement techniques can be improved. Our results show that although NFP-improving commits related to performance are rare, such improvements appear throughout the development lifecycle. We found altogether 560 NFP commits out of a total of 74,408 commits analysed. Memory consumption is sacrificed most often when improving execution time or bandwidth usage, although similar types of changes can improve multiple non-functional properties at once. Code deletion is the most frequently utilised strategy except for frame rate, where an increase in concurrency is the dominant strategy. We find that automated software improvement techniques for the mobile domain can benefit from the addition of SQL query improvement, caching and asset manipulation. Moreover, we provide a classifier which can drastically reduce the manual effort needed to analyse NFP-improving commits.
Article
Containers, such as lists and maps, are fundamental data structures in modern programming languages. However, improper choice of container types may lead to significant performance issues. This paper presents Cres, an approach that automatically synthesizes container replacements to improve runtime performance. The synthesis algorithm works with static analysis techniques to identify how containers are utilized in the program, and attempts to select a method with lower time complexity for each container method call. Our approach can preserve program behavior and seize the opportunity of reducing execution time effectively for general inputs. We implement Cres and evaluate it on 12 real-world Java projects. It is shown that Cres synthesizes container replacements for the projects with 384.2 KLoC in 14 minutes and discovers six categories of container replacements, which can achieve an average performance improvement of 8.1%.
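A hedged Java sketch of the kind of replacement such a tool targets (an illustrative example, not one of Cres's synthesised replacements): repeated membership tests on a List cost O(n) each, so a HashSet with expected O(1) lookups is substituted while the observable result is preserved.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Before: dictionary.contains(q) scanned the whole list for every query.
// After: the lookups go through a HashSet built once from the same elements.
public class ContainerReplacementExample {
    static long countHits(List<String> queries, List<String> dictionary) {
        Set<String> fastLookup = new HashSet<>(dictionary);  // replacement container
        long hits = 0;
        for (String q : queries) {
            if (fastLookup.contains(q)) {   // expected O(1) per lookup
                hits++;
            }
        }
        return hits;
    }
}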
Chapter
Full-text available
Java 8 marked a shift in the Java development landscape by introducing functional-like concepts in its stream library. Java developers can now rely on stream pipelines to simplify data processing, reduce verbosity, easily enable parallel processing and increase the expressiveness of their code. While streams have seemingly positive effects in Java development, little is known about the extent to which Java developers have incorporated streams into their programs and about the degree to which the Java community has adopted individual stream features.
Article
The k-nearest-neighbors (kNN) graph is a popular and powerful data structure that is used in various areas of Data Science, but the high computational cost of obtaining it hinders its use on large datasets. Approximate solutions have been described in the literature using diverse techniques, among which Locality-sensitive Hashing (LSH) is a promising alternative that still has unsolved problems. We present Variable Resolution Locality-sensitive Hashing, an algorithm that addresses these problems to obtain an approximate kNN graph at a significantly reduced computational cost. Its usability is greatly enhanced by its capacity to automatically find adequate hyperparameter values, a common hindrance to LSH-based methods. Moreover, we provide an implementation in the distributed computing framework Apache Spark that takes advantage of the structure of the algorithm to efficiently distribute the computational load across multiple machines, enabling practitioners to apply this solution to very large datasets. Experimental results show that our method offers significant improvements over the state-of-the-art in the field and shows very good scalability as more machines are added to the computation.
Conference Paper
Full-text available
Selecting collection data structures for a given application is a crucial aspect of software development. Inefficient usage of collections has been credited as a major cause of performance bloat in applications written in Java, C++ and C#. Furthermore, a single implementation might not be optimal throughout the entire program execution. This demands an adaptive solution that adjusts collection implementations at runtime to varying workloads. We present CollectionSwitch, an application-level framework for efficient collection adaptation. It selects collection implementations at runtime in order to optimize the execution and memory performance of an application. Unlike previous works, we use workload data on the level of collection allocation sites to guide the optimization process. Our framework identifies allocation sites which instantiate suboptimal collection variants, and selects optimized variants for future instantiations. As a further contribution we propose adaptive collection implementations which switch their underlying data structures according to the size of the collection. We implement this framework in Java, and demonstrate improvements in terms of time and memory behavior across a range of benchmarks. To our knowledge, it is the first approach capable of runtime performance optimization of Java collections with very low overhead.
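The switching idea can be sketched in Java as follows (a simplified, assumed illustration, not CollectionSwitch's implementation): a small collection is kept in a compact array-backed list, and its backing store is migrated to a HashSet once a size threshold is crossed.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

// Size-adaptive set sketch: array-backed while small, hash-backed when large.
public class AdaptiveSet<E> {
    private static final int SWITCH_THRESHOLD = 16;  // illustrative value
    private ArrayList<E> small = new ArrayList<>();
    private Set<E> large;                            // created on switch

    public boolean add(E element) {
        if (large != null) {
            return large.add(element);
        }
        if (small.contains(element)) {
            return false;
        }
        small.add(element);
        if (small.size() > SWITCH_THRESHOLD) {       // migrate representation
            large = new HashSet<>(small);
            small = null;
        }
        return true;
    }

    public boolean contains(E element) {
        return large != null ? large.contains(element) : small.contains(element);
    }

    public int size() {
        return large != null ? large.size() : small.size();
    }
}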
Conference Paper
Full-text available
Data structure selection and tuning is laborious but can vastly improve application performance and memory footprint. In this paper, we demonstrate how ARTEMIS, a multi-objective, cloud-based optimisation framework, can automatically find optimal, tuned data structures, and how it is used for optimising the Guava library. Of the proposed solutions that ARTEMIS found, 27.45% improve all measures (execution time, CPU usage, and memory consumption). More specifically, ARTEMIS managed to improve the memory consumption of Guava by up to 13%, execution time by up to 9%, and CPU usage by 4%.
Article
Full-text available
Genetic improvement uses automated search to find improved versions of existing software. We present a comprehensive survey of this nascent field of research with a focus on the core papers in the area published between 1995 and 2015. We identified core publications including empirical studies, 96% of which use evolutionary algorithms (genetic programming in particular). Although we can trace the foundations of genetic improvement back to the origins of computer science itself, our analysis reveals a significant upsurge in activity since 2012. Genetic improvement has resulted in dramatic performance improvements for a diverse set of properties such as execution time, energy and memory consumption, as well as results for fixing and extending existing system functionality. Moreover, we present examples of research work that lies on the boundary between genetic improvement and other areas, such as program transformation, approximate computing, and software repair, with the intention of encouraging further exchange of ideas between researchers in these fields.
Article
Full-text available
Dynamically typed language implementations often use more memory and execute slower than their statically typed cousins, in part because operations on collections of elements are unoptimised. This paper describes storage strategies, which dynamically optimise collections whose elements are instances of the same primitive type. We implement storage strategies in the PyPy virtual machine, giving a performance increase of 18% on wide-ranging benchmarks of real Python programs. We show that storage strategies are simple to implement, needing only 1500LoC in PyPy, and have applicability to a wide range of virtual machines.
Article
Full-text available
Uncertainty is characterised by incomplete understanding. It is inevitable in the early phase of requirements engineering, and can lead to unsound requirement decisions. Inappropriate requirement choices may result in products that fail to satisfy stakeholders’ needs, and might cause loss of revenue. To overcome uncertainty, requirements engineering decision support needs uncertainty management. In this research, we develop a decision support framework METRO for the Next Release Problem (NRP) to manage algorithmic uncertainty and requirements uncertainty. An exact NRP solver (NSGDP) lies at the heart of METRO. NSGDP’s exactness eliminates interference caused by approximate existing NRP solvers. We apply NSGDP to three NRP instances, derived from a real world NRP instance, RALIC, and compare with NSGA-II, a widely-used approximate (inexact) technique. We find the randomness of NSGA-II results in decision makers missing up to 99.95% of the optimal solutions and obtaining up to 36.48% inexact requirement selection decisions. The chance of getting an inexact decision using existing approximate approaches is negatively correlated with the implementation cost of a requirement (Spearman up to -0.72). Compared to the inexact existing approach, NSGDP saves 15.21% lost revenue, on average, for the RALIC dataset.
Conference Paper
Full-text available
Recent work on genetic-programming-based approaches to automatic program patching have relied on the insight that the content of new code can often be assembled out of fragments of code that already exist in the code base. This insight has been dubbed the plastic surgery hypothesis; successful, well-known automatic repair tools such as GenProg rest on this hypothesis, but it has never been validated. We formalize and validate the plastic surgery hypothesis and empirically measure the extent to which raw material for changes actually already exists in projects. In this paper, we mount a large-scale study of several large Java projects, and examine a history of 15,723 commits to determine the extent to which these commits are graftable, i.e., can be reconstituted from existing code, and find an encouraging degree of graftability, surprisingly independent of commit size and type of commit. For example, we find that changes are 43% graftable from the exact version of the software being changed. With a view to investigating the difficulty of finding these grafts, we study the abundance of such grafts in three possible sources: the immediately previous version, prior history, and other projects. We also examine the contiguity or chunking of these grafts, and the degree to which grafts can be found in the same file. Our results are quite promising and suggest an optimistic future for automatic program patching methods that search for raw material in already extant code in the project being patched.
Conference Paper
Full-text available
Determining which functional components should be integrated into a large system is a challenging task when hardware constraints, such as available memory, are taken into account. We formulate this problem as a multi-objective component selection problem, which searches for feature subsets that balance the provision of maximal functionality at minimal memory resource cost. We developed a search-based component selection tool, and applied it to the KDE-based application, Kate, to find a set of Kate instantiations that balance functionalities and memory consumption. Our results report that, compared to the best attainment of random search, our approach can reduce memory consumption by at most 23.70% with respect to the same number of components. Compared to greedy search, the memory reduction can be up to 19.04%. SBSelector finds an instantiation of Kate that provides 16 more components, while only increasing memory by 1.7%.
Conference Paper
Full-text available
Today, software engineering practices focus on finding the single "right" data representation for a program. The "right" data representation, however, might not exist: changing the representation of an object during program execution can be better in terms of performance. To this end we introduce Just-in-Time Data Structures, which enable representation changes at runtime, based on declarative input from a performance expert programmer. Just-in-Time Data Structures are an attempt to shift the focus from finding the "right" data structure to finding the "right" sequence of data representations. We present JitDS, a programming language to develop such Just-in-Time Data Structures. Further, we show two example programs that benefit from changing the representation at runtime.
Article
Full-text available
Uncertainty is inevitable in real world requirement engineering. It has a significant impact on the feasibility of proposed solutions and thus brings risks to the software release plan. This paper proposes a multi-objective optimization technique, augmented with Monte-Carlo Simulation, that optimizes requirement choices for the three objectives of cost, revenue, and uncertainty. The paper reports the results of an empirical study over four data sets derived from a single real world data set. The results show that the robust optimal solutions obtained by our approach are conservative compared to their corresponding optimal solutions produced by traditional Multi-Objective Next Release Problem. We obtain a robustness improvement of at least 18% at a small cost (a maximum 0.0285 shift in the 2D Pareto-front in the unit space). Surprisingly we found that, though a requirement's cost is correlated with inclusion on the Pareto-front, a requirement's expected revenue is not.
Conference Paper
Full-text available
Dynamically typed language implementations often use more memory and execute slower than their statically typed cousins, in part because operations on collections of elements are unoptimised. This paper describes storage strategies, which dynamically optimise collections whose elements are instances of the same primitive type. We implement storage strategies in the PyPy virtual machine, giving a performance increase of 18% on wide-ranging benchmarks of real Python programs. We show that storage strategies are simple to implement, needing only 1,500 LoC in PyPy, and have applicability to a wide range of virtual machines.
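PyPy implements storage strategies inside the virtual machine, so the following Java sketch is only a rough analogue of the idea under stated assumptions: keep an unboxed int[] while every element happens to be an Integer, and fall back to a generic Object[] the first time a non-integer element is added. All names are illustrative.

import java.util.Arrays;

// Rough, illustrative analogue of a storage strategy for homogeneous collections.
public class StrategyList {
    private int[] ints = new int[8];
    private Object[] objects = null;   // non-null once the list has left the "all ints" strategy
    private int size = 0;

    public void add(Object value) {
        if (objects == null && value instanceof Integer) {
            if (size == ints.length) ints = Arrays.copyOf(ints, size * 2);
            ints[size++] = (Integer) value;          // unboxed storage while homogeneous
            return;
        }
        if (objects == null) {                       // first non-int element: switch strategy
            objects = new Object[Math.max(8, size * 2)];
            for (int i = 0; i < size; i++) objects[i] = ints[i];
        }
        if (size == objects.length) objects = Arrays.copyOf(objects, size * 2);
        objects[size++] = value;
    }

    public Object get(int i) {
        return objects == null ? (Object) ints[i] : objects[i];
    }

    public int size() {
        return size;
    }
}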
Conference Paper
Full-text available
Experimental evaluation is key to systems research. Because modern systems are complex and non-deterministic, good experimental methodology demands that researchers account for uncertainty. To obtain valid results, they are expected to run many iterations of benchmarks, invoke virtual machines (VMs) several times, or even rebuild VM or benchmark binaries more than once. All this repetition costs time to complete experiments. Currently, many evaluations give up on sufficient repetition or rigorous statistical methods, or even run benchmarks only in training sizes. The results reported often lack proper variation estimates and, when a small difference between two systems is reported, some are simply unreliable. In contrast, we provide a statistically rigorous methodology for repetition and summarising results that makes efficient use of experimentation time. Time efficiency comes from two key observations. First, a given benchmark on a given platform is typically prone to much less non-determinism than the common worst-case of published corner-case studies. Second, repetition is most needed where most uncertainty arises (whether between builds, between executions or between iterations). We capture experimentation cost with a novel mathematical model, which we use to identify the number of repetitions at each level of an experiment necessary and sufficient to obtain a given level of precision. We present our methodology as a cookbook that guides researchers on the number of repetitions they should run to obtain reliable results. We also show how to present results with an effect size confidence interval. As an example, we show how to use our methodology to conduct throughput experiments with the DaCapo and SPEC CPU benchmarks on three recent platforms.
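The practical upshot of such a methodology is to report a measure of uncertainty alongside every benchmark number rather than a single figure. The Java snippet below is a minimal illustration under simplifying assumptions (one level of repetition, a normal-quantile approximation, hypothetical measurements); the paper's cookbook additionally covers multi-level repetition and effect-size confidence intervals, which this sketch does not.

import java.util.Arrays;

// Report a mean with an approximate 95% confidence interval for repeated measurements.
public class ConfidenceInterval {
    public static void main(String[] args) {
        double[] runtimesMs = {102.4, 98.7, 101.1, 99.8, 100.5, 103.0, 97.9, 100.2}; // hypothetical data

        double mean = Arrays.stream(runtimesMs).average().orElse(Double.NaN);
        double variance = Arrays.stream(runtimesMs)
                                .map(x -> (x - mean) * (x - mean))
                                .sum() / (runtimesMs.length - 1);
        double stderr = Math.sqrt(variance / runtimesMs.length);
        double halfWidth = 1.96 * stderr; // normal approximation; use a t-quantile for small samples

        System.out.printf("runtime = %.1f ms +/- %.1f ms (95%% CI)%n", mean, halfWidth);
    }
}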
Article
Full-text available
Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran which stress (4) much less, and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.
Conference Paper
Full-text available
Delivering increasingly complex software-reliant systems demands better ways to manage the long-term effects of short-term expedients. The technical debt metaphor is gaining significant traction in the agile development community as a way to understand and communicate such issues. The idea is that developers sometimes accept compromises in a system in one dimension (e.g., modularity) to meet an urgent demand in some other dimension (e.g., a deadline), and that such compromises incur a "debt": on which "interest" has to be paid and which the "principal" should be repaid at some point for the long-term health of the project. We argue that the software engineering research community has an opportunity to study and improve this concept. We can offer software engineers a foundation for managing such trade-offs based on models of their economic impacts. Therefore, we propose managing technical debt as a part of the future research agenda for the software engineering field.
Conference Paper
Full-text available
Languages such as Java and C#, as well as scripting languages like Python and Ruby, make extensive use of Collection classes. A collection implementation represents a fixed choice in the dimensions of operation time, space utilization, and synchronization. Using the collection in a manner not consistent with this fixed choice can cause significant performance degradation. In this paper, we present CHAMELEON, a low-overhead automatic tool that assists the programmer in choosing the appropriate collection implementation for her application. During program execution, CHAMELEON computes elaborate trace and heap-based metrics on collection behavior. These metrics are consumed on-the-fly by a rules engine which outputs a list of suggested collection adaptation strategies. The tool can apply these corrective strategies automatically or present them to the programmer. We have implemented CHAMELEON on top of IBM's J9 production JVM, and evaluated it over a small set of benchmarks. We show that for some applications, using CHAMELEON leads to a significant improvement of the memory footprint of the application.
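To make the "rules engine over recorded metrics" idea concrete, here is a minimal Java sketch of the kind of rule such a tool might apply. The metric shape, the thresholds, and the rule itself are hypothetical illustrations, not CHAMELEON's actual rule set.

// Illustrative selection rule over profiled collection metrics; thresholds are arbitrary.
public class CollectionAdvisor {
    /** Metrics a profiler might record per collection allocation site (hypothetical shape). */
    public static class CollectionStats {
        long containsCalls;
        long addCalls;
        long maxObservedSize;
    }

    /** Suggest an implementation for a List-typed site based on observed usage. */
    public static String suggestListImpl(CollectionStats s) {
        boolean lookupHeavy = s.containsCalls > 10 * Math.max(1, s.addCalls);
        if (lookupHeavy && s.maxObservedSize > 100) {
            return "LinkedHashSet";                   // constant-time contains, keeps insertion order
        }
        if (s.maxObservedSize <= 10) {
            return "ArrayList(initialCapacity=10)";   // small and avoids resizing overhead
        }
        return "ArrayList";                           // default: compact and cache friendly
    }
}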
Article
Full-text available
In a mathematical approach to hypothesis tests, we start with a clearly defined set of hypotheses and choose the test with the best properties for those hypotheses. In practice, we often start with less precise hypotheses. For example, often a researcher wants to know which of two groups generally has the larger responses, and either a t-test or a Wilcoxon-Mann-Whitney (WMW) test could be acceptable. Although both t-tests and WMW tests are usually associated with quite different hypotheses, the decision rule and p-value from either test could be associated with many different sets of assumptions, which we call perspectives. It is useful to have many of the different perspectives to which a decision rule may be applied collected in one place, since each perspective allows a different interpretation of the associated p-value. Here we collect many such perspectives for the two-sample t-test, the WMW test and other related tests. We discuss validity and consistency under each perspective and discuss recommendations between the tests in light of these many different perspectives. Finally, we briefly discuss a decision rule for testing genetic neutrality where knowledge of the many perspectives is vital to the proper interpretation of the decision rule.
Conference Paper
Full-text available
This paper describes work on the application of optimization techniques in software engineering. These optimization techniques come from the operations research and metaheuristic computation research communities. The paper briefly reviews widely used optimization techniques and the key ingredients required for their successful application to software engineering, providing an overview of existing results in eight software engineering application domains. The paper also describes the benefits that are likely to accrue from the growing body of work in this area and provides a set of open problems, challenges and areas for future work.
Conference Paper
Requirements engineering is the prerequisite of software engineering, and plays a critically strategic role in the success of software development. Insufficient management of uncertainty in the requirements engineering process has been recognised as a key reason for software project failure. The essence of uncertainty may arise from partially observable, stochastic environments, or ignorance. To ease the impact of uncertainty in the software development process, it is important to provide techniques that explicitly manage uncertainty in requirements selection and optimisation. This thesis presents a decision support framework to exactly address the uncertainty in requirements selection and optimisation. Three types of uncertainty are managed: requirements uncertainty, algorithmic uncertainty, and uncertainty of resource constraints. Firstly, a probabilistic robust optimisation model is introduced to enable the manageability of requirements uncertainty. Requirements uncertainty is probabilistically simulated by Monte-Carlo Simulation and then formulated as one of the optimisation objectives. Secondly, a probabilistic uncertainty analysis and a quantitative analysis sub-framework, METRO, is designed to cater for requirements selection decision support under uncertainty. An exact Non-dominated Sorting Conflict Graph based Dynamic Programming algorithm lies at the heart of METRO to guarantee the elimination of algorithmic uncertainty and the discovery of guaranteed optimal solutions. Consequently, any information loss due to algorithmic uncertainty can be completely avoided. Moreover, a data analytic approach is integrated in METRO to help the decision maker to understand the remaining requirements uncertainty propagation throughout the requirements selection process, and to interpret the analysis results. Finally, a more generic exact multi-objective integrated release and schedule planning approach, iRASPA, is introduced to holistically manage the uncertainty of resource constraints for requirements selection and optimisation. Software release and schedule plans are integrated into a single activity and solved simultaneously. Accordingly, a more advanced globally optimal result can be produced by accommodating and managing the inherent additional uncertainty due to resource constraints as well as that due to requirements. To settle the algorithmic uncertainty problem and guarantee the exactness of results, an ε-constraint Quadratic Programming approach is used in iRASPA.
Conference Paper
We present a search based testing system that automatically explores the space of all possible GUI event interleavings. Search guides our system to novel crashing sequences using Levenshtein distance and minimises the resulting fault-revealing UI sequences in a post-processing hill climb. We report on the application of our system to the SSBSE 2014 challenge program, Pidgin. Overall, our Pidgin Crasher found 20 different events that caused 2 distinct kinds of bugs, while the event sequences that caused them were reduced by 84% on average using our minimisation post processor.
Article
Data structure selection is one of the most critical aspects of developing effective applications. By analyzing data structures' behavior and their interaction with the rest of the application on the underlying architecture, tools can make suggestions for alternative data structures better suited for the program input on which the application runs. Consequently, developers can optimize their data structure usage to make the application conscious of an underlying architecture and a particular program input. This paper presents the design and evaluation of Brainy, a new program analysis tool that automatically selects the best data structure for a given program and its input on a specific microarchitecture. The data structure's interface functions are instrumented to dynamically monitor how the data structure interacts with the application for a given input. The instrumentation records traces of various runtime characteristics including underlying architecture-specific events. These generated traces are analyzed and fed into an offline model, constructed using machine learning, to select the best data structure. That is, Brainy exploits runtime feedback of data structures to model the situation an application runs on, and selects the best data structure for a given application/input/architecture combination based on the constructed model. The empirical evaluation shows that this technique is highly accurate across several real-world applications with various program input sets on two different state-of-the-art microarchitectures. Consequently, Brainy achieved an average performance improvement of 27% and 33% on both microarchitectures, respectively.
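Brainy's instrumentation of a data structure's interface functions can be sketched, purely for illustration, with the Java wrapper below: every interface call updates counters that could later be emitted as a feature vector for an offline, machine-learned selection model. The class and the feature format are assumptions for this sketch; Brainy itself additionally records architecture-specific hardware events, which are omitted here.

import java.util.HashSet;
import java.util.Set;

// Illustrative instrumentation wrapper: counts interface calls for an offline model.
public class InstrumentedSet<E> {
    private final Set<E> delegate = new HashSet<>();
    private long adds, lookups, removes;

    public boolean add(E e)           { adds++;    return delegate.add(e); }
    public boolean contains(Object o) { lookups++; return delegate.contains(o); }
    public boolean remove(Object o)   { removes++; return delegate.remove(o); }
    public int size()                 { return delegate.size(); }

    /** Feature vector to be consumed by an offline model (format is illustrative). */
    public String featureVector() {
        return adds + "," + lookups + "," + removes + "," + delegate.size();
    }
}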
Article
Many opportunities for easy, big-win, program optimizations are missed by compilers. This is especially true in highly layered Java applications. Often at the heart of these missed optimization opportunities lie computations that, with great expense, produce data values that have little impact on the program's final output. Constructing a new date formatter to format every date, or populating a large set full of expensively constructed structures only to check its size: these involve costs that are out of line with the benefits gained. This disparity between the formation costs and accrued benefits of data structures is at the heart of much runtime bloat. We introduce a run-time analysis to discover these low-utility data structures. The analysis employs dynamic thin slicing, which naturally associates costs with value flows rather than raw data flows. It constructs a model of the incremental, hop-to-hop, costs and benefits of each data structure. The analysis then identifies suspicious structures based on imbalances of their incremental costs and benefits. To decrease the memory requirements of slicing, we introduce abstract dynamic thin slicing, which performs thin slicing over bounded abstract domains. We have modified the IBM J9 commercial JVM to implement this approach. We demonstrate two client analyses: one that finds objects that are expensive to construct but are not necessary for the forward execution, and a second that pinpoints ultimately-dead values. We have successfully applied them to large-scale and long-running Java applications. We show that these analyses are effective at detecting operations that have unbalanced costs and benefits.
Conference Paper
In software engineering, determining the set of requirements to implement in the next release is a critical foundation for the success of a project. Inappropriately including or excluding requirements may result in products that fail to satisfy stakeholders' needs, and might cause loss of revenue. In the meantime, uncertainty is characterised by incomplete understanding. It is inevitable in the early phase of requirements engineering, and could lead to unsound requirement decisions. To ease the impact of uncertainty in the software development process, it is important to provide techniques that explicitly manage uncertainty in requirements analysis and optimisation. This proposed research aims to provide a decision support framework for analysing uncertainty in requirements selection and optimisation. The proposed research involves three stages. Firstly, a simulation optimisation technique is introduced to model requirements uncertainty in requirements optimisation. Then, an exact technique is designed to eliminate the algorithmic uncertainty. Lastly, a probabilistic uncertainty analysis is applied to help the decision maker to understand requirement uncertainty propagation and the characteristics of requirements in the requirements selection process.
Article
Many large-scale Java applications suffer from runtime bloat. They execute large volumes of methods, and create many temporary objects, all to execute relatively simple operations. There are large opportunities for performance optimizations in these applications, but most are being missed by existing optimization and tooling technology. While JIT optimizations struggle for a few percent, performance experts analyze deployed applications and regularly find gains of 2x or more. Finding such big gains is difficult, for both humans and compilers, because of the diffuse nature of runtime bloat. Time is spread thinly across calling contexts, making it difficult to judge how to improve performance. Bloat results from a pile-up of seemingly harmless decisions. Each adds temporary objects and method calls, and often copies values between those temporary objects. While data copies are not the entirety of bloat, we have observed that they are excellent indicators of regions of excessive activity. By optimizing copies, one is likely to remove the objects that carry copied values, and the method calls that allocate and populate them. We introduce copy profiling, a technique that summarizes runtime activity in terms of chains of data copies. A flat copy profile counts copies by method. We show how flat profiles alone can be helpful. In many cases, diagnosing a problem requires data flow context. Tracking and making sense of raw copy chains does not scale, so we introduce a summarizing abstraction called the copy graph. We implement three client analyses that, using the copy graph, expose common patterns of bloat, such as finding hot copy chains and discovering temporary data structures. We demonstrate, with examples from a large-scale commercial application and several benchmarks, that copy profiling can be used by a programmer to quickly find opportunities for large performance gains.
Conference Paper
We introduce a mutation-based approach to automatically discover and expose 'deep' (previously unavailable) parameters that affect a program's runtime costs. These discovered parameters, together with existing ('shallow') parameters, form a search space that we tune using search-based optimisation in a bi-objective formulation that optimises both time and memory consumption. We implemented our approach and evaluated it on four real-world programs. The results show that we can improve execution time by 12% or achieve a 21% memory consumption reduction in the best cases. In three subjects, our deep parameter tuning results in a significant improvement over the baseline of shallow parameter tuning, demonstrating the potential value of our deep parameter extraction approach.
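A 'deep' parameter is, in essence, a constant buried in the code that is lifted into the tuning search space. The Java sketch below is only an illustrative assumption of what such an exposed parameter might look like (the property name and default are invented): a hard-coded buffer size becomes externally settable, so a search can evaluate candidate values against both time and memory.

// Illustrative sketch: a hidden constant exposed as a tunable "deep" parameter.
public class BufferedCopier {
    // Before: private static final int BUFFER_SIZE = 8192;  (hidden, untunable)
    // After: exposed with the old constant as the default value.
    private static final int BUFFER_SIZE =
            Integer.getInteger("deep.bufferSize", 8192);

    public static long copy(java.io.InputStream in, java.io.OutputStream out)
            throws java.io.IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) > 0) {   // a search then evaluates each candidate
            out.write(buffer, 0, n);          // value for both runtime and memory
            total += n;
        }
        return total;
    }
}

A tuner would then run the test suite with, for example, -Ddeep.bufferSize=4096 per candidate and record the measured objectives.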
Conference Paper
This paper presents a brief outline of a higher-order mutation-based framework for Genetic Improvement (GI). We argue that search-based higher-order mutation testing can be used to implement a form of genetic programming (GP) to increase the search granularity and testability of GI.
Article
Context: Three decades of mutation testing development have given software testers a rich set of mutation operators, yet relatively few operators can target memory faults (as we demonstrate in this paper). Objective: To address this shortcoming, we introduce Memory Mutation Testing, proposing 9 Memory Mutation Operators, each of which targets common forms of memory fault. We compare Memory Mutation Operators with traditional Mutation Operators, while handling equivalent and duplicate mutants. Method: We extend our previous workshop paper, which introduced Memory Mutation Testing, with a more extensive and precise analysis of 18 open source programs, including 2 large real-world programs, all of which come with well-designed unit test suites. Specifically, our empirical study makes use of recent results on Trivial Compiler Equivalence (TCE) to identify both equivalent and duplicate mutants. Though the literature on mutation testing has previously deployed various techniques to cater for equivalent mutants, no previous study has catered for duplicate mutants. Results: Catering for such extraneous mutants improves the precision with which claims about mutation scores can be interpreted. We also report the results of a new empirical study that compares Memory Mutation Testing with traditional Mutation Testing, providing evidence to support the claim that traditional mutation testing inadequately captures memory faults; 2% of the memory mutants are TCE-duplicates of traditional mutants and average test suite effectiveness drops by 44% when the target shifts from traditional mutants to memory mutants. Conclusions: Introducing Memory Mutation Operators will cost only a small portion of the overall testing effort, yet generates higher quality mutants compared with traditional operators. Moreover, the TCE technique not only helps to reduce testing effort, but also improves the precision with which test quality is assessed, and should therefore be considered in other Mutation Testing studies.
Conference Paper
Though mutation operators have been designed for a wide range of programming languages in the last three decades, only a few operators are able to simulate memory faults. This paper introduces 9 Memory Mutation Operators targeting common memory faults. We report the results of an empirical study using 16 open source programs, which come with well designed unit test suites. We find only 44% of the new memory mutants introduced are captured by the traditional strong mutation killing criterion. We thus further introduce two new killing criteria, the Memory Fault Detection and the Control Flow Deviation killing criteria to augment the traditional strong mutation testing criterion. Our results show that the two new killing criteria are more effective at detecting memory mutants, killing between 10% and 75% of those mutants left unkilled by the traditional criterion.
Article
It has been observed that component-based applications exhibit object churn , the excessive creation of short-lived objects, often caused by trading performance for modularity. Because churned objects are short-lived, they appear to be good candidates for stack allocation. Unfortunately, most churned objects escape their allocating function, making escape analysis ineffective. We reduce object churn with three contributions. First, we formalize two measures of churn, capture and control (15). Second, we develop lightweight dynamic analyses for measuring both capture and control . Third, we develop an algorithm that uses capture and control to inline portions of the call graph to make churned objects non-escaping, enabling churn optimization via escape analysis. JOLT is a lightweight dynamic churn optimizer that uses our algorithms. We embedded JOLT in the JIT compiler of the IBM J9 commercial JVM, and evaluated JOLT on large application frameworks, including Eclipse and JBoss. We found that JOLT eliminates over 4 times as many allocations as a state-of-the-art escape analysis alone.
Conference Paper
Genetic Improvement (GI) is an area of Search Based Software Engineering which seeks to improve software's non-functional properties by treating program code as if it were genetic material which is then evolved to produce more optimal solutions. Hitherto, the majority of focus has been on optimising a program's execution time which, though important, is only one of many non-functional targets. The growth in mobile computing, cloud computing infrastructure, and ecological concerns are forcing developers to focus on the energy their software consumes. We report on investigations into using GI to automatically find more energy efficient versions of the MiniSAT Boolean satisfiability solver when specialising for three downstream applications. Our results find that GI can successfully be used to reduce energy consumption by up to 25%.
Conference Paper
Genetic Improvement (GI) is a form of Genetic Programming that improves an existing program. We use GI to evolve a faster version of a C++ program, a Boolean satisfiability (SAT) solver called MiniSAT, specialising it for a particular problem class, namely Combinatorial Interaction Testing (CIT), using automated code transplantation. Our GI-evolved solver achieves overall 17% improvement, making it comparable with average expert human performance. Additionally, this automatically evolved solver is faster than any of the human-improved solvers for the CIT problem.
Article
Genetic Improvement (GI) is shown to optimise, in some cases by more than 35%, a critical component of healthcare industry software across a diverse range of six nVidia graphics processing units (GPUs). GP and other search based software engineering techniques can automatically optimise the current rate-limiting CUDA parallel function in the NiftyReg open source C++ project used to align or register high resolution nuclear magnetic resonance (NMRI) and other diagnostic NIfTI images. Future neurosurgery techniques will require hardware acceleration, such as GPGPU, to enable real time comparison of three dimensional in-theatre images with earlier patient images and reference data. With millimetre resolution brain scan measurements comprising more than ten million voxels, the modified kernel can process in excess of 3 billion active voxels per second.
Article
Reducing the energy usage of software is becoming more important in many environments, in particular, battery-powered mobile devices, embedded systems and data centers. Recent empirical studies indicate that software engineers can support the goal of reducing energy usage by making design and implementation decisions in ways that take into consideration how such decisions impact the energy usage of an application. However, the large number of possible choices and the lack of feedback and information available to software engineers necessitates some form of automated decision-making support. This paper describes the first known automated support for systematically optimizing the energy usage of applications by making code-level changes. It is effective at reducing energy usage while freeing developers from needing to deal with the low-level, tedious tasks of applying changes and monitoring the resulting impacts to the energy usage of their application. We present a general framework, SEEDS, as well as an instantiation of the framework that automatically optimizes Java applications by selecting the most energy-efficient library implementations for Java's Collections API. Our empirical evaluation of the framework and instantiation show that it is possible to improve the energy usage of an application in a fully automated manner for a reasonable cost.
Book
More than ever, mission-critical and business-critical applications depend on object-oriented (OO) software. Testing techniques tailored to the unique challenges of OO technology are necessary to achieve high reliability and quality. Testing Object-Oriented Systems: Models, Patterns, and Tools is an authoritative guide to designing and automating test suites for OO applications. This comprehensive book explains why testing must be model-based and provides in-depth coverage of techniques to develop testable models from state machines, combinational logic, and the Unified Modeling Language (UML). It introduces the test design pattern and presents 37 patterns that explain how to design responsibility-based test suites, how to tailor integration and regression testing for OO code, how to test reusable components and frameworks, and how to develop highly effective test suites from use cases. Effective testing must be automated and must leverage object technology. The author describes how to design and code specification-based assertions to offset testability losses due to inheritance and polymorphism. Fifteen micro-patterns present oracle strategies--practical solutions for one of the hardest problems in test design. Seventeen design patterns explain how to automate your test suites with a coherent OO test harness framework. The author provides thorough coverage of testing issues such as:
* The bug hazards of OO programming and differences from testing procedural code
* How to design responsibility-based tests for classes, clusters, and subsystems using class invariants, interface data flow models, hierarchic state machines, class associations, and scenario analysis
* How to support reuse by effective testing of abstract classes, generic classes, components, and frameworks
* How to choose an integration strategy that supports iterative and incremental development
* How to achieve comprehensive system testing with testable use cases
* How to choose a regression test approach
* How to develop expected test results and evaluate the post-test state of an object
* How to automate testing with assertions, OO test drivers, stubs, and test frameworks
Real-world experience, world-class best practices, and the latest research in object-oriented testing are included. Practical examples illustrate test design and test automation for Ada 95, C++, Eiffel, Java, Objective-C, and Smalltalk. The UML is used throughout, but the test design patterns apply to systems developed with any OO language or methodology.
Article
There are more bugs in real-world programs than human programmers can realistically address. This paper evaluates two research questions: “What fraction of bugs can be repaired automatically?” and “How much does it cost to repair a bug automatically?” In previous work, we presented GenProg, which uses genetic programming to repair defects in off-the-shelf C programs. To answer these questions, we: (1) propose novel algorithmic improvements to GenProg that allow it to scale to large programs and find repairs 68% more often, (2) exploit GenProg's inherent parallelism using cloud computing resources to provide grounded, human-competitive cost measurements, and (3) generate a large, indicative benchmark set to use for systematic evaluations. We evaluate GenProg on 105 defects from 8 open-source programs totaling 5.1 million lines of code and involving 10,193 test cases. GenProg automatically repairs 55 of those 105 defects. To our knowledge, this evaluation is the largest available of its kind, and is often two orders of magnitude larger than previous work in terms of code or test suite size or defect count. Public cloud computing prices allow our 105 runs to be reproduced for $403; a successful repair completes in 96 minutes and costs $7.32, on average.
Conference Paper
Framework-intensive applications (e.g., Web applications) heavily use temporary data structures, often resulting in performance bottlenecks. This paper presents an optimized blended escape analysis to approximate object lifetimes and thus, to identify these temporaries and their uses. Empirical results show that this optimized analysis on average prunes 37% of the basic blocks in our benchmarks, and achieves a speedup of up to 29 times compared to the original analysis. Newly defined metrics quantify key properties of temporary data structures and their uses. A detailed empirical evaluation offers the first characterization of temporaries in framework-intensive applications. The results show that temporary data structures can include up to 12 distinct object types and can traverse through as many as 14 method invocations before being captured.
Conference Paper
A memory leak in a Java program occurs when object references that are no longer needed are unnecessarily maintained. Such leaks are difficult to detect because static analysis typically cannot precisely identify these redundant references, and existing dynamic leak detection tools track and report fine-grained information about individual objects, producing results that are usually hard to interpret and lack precision. In this article we introduce a novel container-based heap-tracking technique, based on the fact that many memory leaks in Java programs occur due to incorrect uses of containers, leading to containers that keep references to unused data entries. The novelty of the described work is twofold: (1) instead of tracking arbitrary objects and finding leaks by analyzing references to unused objects, the technique tracks only containers and directly identifies the source of the leak, and (2) the technique computes a confidence value for each container based on a combination of its memory consumption and its elements' staleness (time since last retrieval), while previous approaches do not consider such combined metrics. Our experimental results show that the reports generated by the proposed technique can be very precise: for two bugs reported by Sun, a known bug in SPECjbb 2000, and an example bug from IBM developerWorks, the top containers in the reports include the containers that leak memory.
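The abstract describes a per-container confidence value combining memory consumption and staleness but does not give the formula, so the Java sketch below is a hypothetical illustration of such a combination, not the paper's actual metric. All names and the weighting are assumptions.

// Illustrative confidence score for a tracked container; the formula is invented for this sketch.
public class LeakConfidence {
    /**
     * @param bytesRetained   memory retained by the tracked container
     * @param heapBytes       current heap size
     * @param stalenessMillis time since an element of the container was last retrieved
     * @param uptimeMillis    time the application has been running
     * @return a value in [0,1]; higher suggests the container is more likely leaking
     */
    public static double confidence(long bytesRetained, long heapBytes,
                                    long stalenessMillis, long uptimeMillis) {
        double memoryShare = (double) bytesRetained / Math.max(1, heapBytes);
        double staleShare  = (double) stalenessMillis / Math.max(1, uptimeMillis);
        return Math.min(1.0, memoryShare * staleShare * 4.0); // weighting is arbitrary
    }
}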
Conference Paper
Our research group has analyzed many industrial, framework-based applications. In these applications, simple functionality often requires excessive runtime activity. It is increasingly difficult to assess if and how inefficiencies can be fixed. Much of this activity involves the transformation of information, due to framework couplings. We present an approach to modeling and quantifying behavior in terms of what transformations accomplish. We structure activity into dataflow diagrams that capture the flow between transformations. Across disparate implementations, we observe commonalities in how transformations use and change their inputs. We introduce a vocabulary of common phenomena of use and change, and four ways to classify data and transformations using this vocabulary. The structuring and classification enable evaluation and comparison in terms abstracted from implementation specifics. We introduce metrics of complexity and cost, including behavior signatures that attribute measures to phenomena. We demonstrate the approach on a benchmark, a library, and two industrial applications.
Conference Paper
Applications often have large runtime memory requirements. In some cases, large memory footprint helps accomplish an important functional, performance, or engineering requirement. A large cache, for example, may ameliorate a pernicious performance problem. In general, however, finding a good balance between memory consumption and other requirements is quite challenging. To do so, the development team must distinguish effective from excessive use of memory. We introduce health signatures to enable these distinctions. Using data from dozens of applications and benchmarks, we show that they provide concise and application-neutral summaries of footprint. We show how to use them to form value judgments about whether a design or implementation choice is good or bad. We show how being independent of any application eases comparison across disparate implementations. We demonstrate the asymptotic nature of memory health: certain designs are limited in the health they can achieve, no matter how much the data size scales up. Finally, we show how to use health signatures to automatically generate formulas that predict this asymptotic behavior, and show how they enable powerful limit studies on memory health.
Article
This study focuses largely on two issues: (a) improved syntax for iterations and error exits, making it possible to write a larger class of programs clearly and efficiently without "go to" statements; (b) a methodology of program design, beginning with readable and correct, but possibly inefficient programs that are systematically transformed if necessary into efficient and correct, but possibly less readable code. The discussion brings out opposing points of view about whether or not "go to" statements should be abolished; some merit is found on both sides of this question. Finally, an attempt is made to define the true nature of structured programming, and to recommend fruitful directions for further study.
Article
This paper claims that a new field of Software Engineering research and practice is emerging: Search-Based Software Engineering. The paper argues that Software Engineering is ideal for the application of metaheuristic search techniques, such as genetic algorithms, simulated annealing and tabu search. Such search-based techniques could provide solutions to the difficult problems of balancing competing (and sometimes inconsistent) constraints and may suggest ways of finding acceptable solutions in situations where perfect solutions are either theoretically impossible or practically infeasible.
Book
Design patterns are a form of documentation that proposes solutions to recurring object-oriented software design problems. Design patterns became popular in software engineering thanks to the book published in 1995 by the Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Since the publication of the book Design Patterns: Elements of Reusable Object-Oriented Software, design patterns have been used to design programs and ease their maintenance, to teach object-oriented concepts and related “good” practices in classrooms, and to assess quality and help program comprehension in research. However, design patterns may also lead to overengineered programs and may negatively impact quality. We recall the history of design patterns and present some recent development characterizing the advantages and disadvantages of design patterns.
Article
Contents: Abstract specification techniques; Analysis of algorithms; Towards more generalization in algorithms; Unstructured data types; Semi-structured data types; Linearly structured data types; Binary trees; Binary search trees; Multiway search trees; Directed graphs (digraphs); Undirected graphs and complexities; Generalized lists; Memory management.
Article
An important issue in multiobjective optimization is the quantitative comparison of the performance of different algorithms. In the case of multiobjective evolutionary algorithms, the outcome is usually an approximation of the Pareto-optimal set, which is denoted as an approximation set, and therefore the question arises of how to evaluate the quality of approximation sets. Most popular are methods that assign each approximation set a vector of real numbers that reflect different aspects of the quality. Sometimes, pairs of approximation sets are also considered. In this study, we provide a rigorous analysis of the limitations underlying this type of quality assessment. To this end, a mathematical framework is developed which allows one to classify and discuss existing techniques.
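One concrete example of a binary quality indicator treated in this line of work is the additive epsilon indicator: the smallest amount by which every point of approximation set A must be shifted so that A weakly dominates B. The Java sketch below assumes minimisation and represents objective vectors as plain double arrays; it is an illustration of the standard definition, not code from the paper.

// Additive epsilon indicator I_eps+(A, B) for minimisation problems.
public class EpsilonIndicator {
    public static double additiveEpsilon(double[][] a, double[][] b) {
        double eps = Double.NEGATIVE_INFINITY;
        for (double[] pointB : b) {
            double best = Double.POSITIVE_INFINITY;       // smallest shift any a-point needs for pointB
            for (double[] pointA : a) {
                double worstObjective = Double.NEGATIVE_INFINITY;
                for (int i = 0; i < pointB.length; i++) {
                    worstObjective = Math.max(worstObjective, pointA[i] - pointB[i]);
                }
                best = Math.min(best, worstObjective);
            }
            eps = Math.max(eps, best);
        }
        return eps;   // a value <= 0 means A weakly dominates B
    }
}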
Article
Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN^3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN^2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
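Non-dominated sorting rests on the Pareto dominance test. The Java sketch below shows that test and the extraction of the first non-dominated front for a population of objective vectors, assuming minimisation; NSGA-II's full fast sorting, crowding distance, and selection are not reproduced here, so this is only a minimal illustration of the underlying notion.

import java.util.ArrayList;
import java.util.List;

// Dominance test and first-front extraction (minimisation assumed); illustrative only.
public class ParetoFront {
    /** true iff a dominates b: no worse in all objectives and strictly better in at least one. */
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    /** Extract the non-dominated solutions (the first front) of a population. */
    static List<double[]> firstFront(List<double[]> population) {
        List<double[]> front = new ArrayList<>();
        for (double[] candidate : population) {
            boolean dominated = false;
            for (double[] other : population) {
                if (other != candidate && dominates(other, candidate)) {
                    dominated = true;
                    break;
                }
            }
            if (!dominated) front.add(candidate);
        }
        return front;
    }
}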
Article
This paper describes the implementation of an online feedback-directed optimization system. The system is fully automatic; it requires no prior (offline) profiling run. It uses a previously developed low-overhead instrumentation sampling framework to collect control flow graph edge profiles. This profile information is used to drive several traditional optimizations, as well as a novel algorithm for performing feedback-directed control flow graph node splitting. We empirically evaluate this system and demonstrate improvements in peak performance of up to 17% while keeping overhead low, with no individual execution being degraded by more than 2% because of instrumentation.
A. Georges, D. Buytaert, and L. Eeckhout. Statistically rigorous Java performance evaluation. ACM SIGPLAN Notices, 42(10):57-76, 2007.