Fig 3 - uploaded by Carlos Cetina
Overview of the evaluation process to answer each research question

Contexts in source publication

Context 1
... 3: How much is the performance influenced, in terms of solution quality, when using HaFF compared to the baselines? Fig. 3 presents an overview of the process planned to answer each research question. The upper part of the figure shows the data set from the industrial case study, which was provided by our industrial partner CAF. CAF uses software models to generate the firmware that controls the trains it has manufactured over the years. The ...
Context 2
... feature (also assuming that only 1 second is needed to consider a property of a model element). Therefore, we use a Single-Objective Evolutionary Algorithm (SOEA) as a means of efficiently exploring the huge search space. The objective of the algorithm is to find the model fragment that best realizes the feature being located. The lower part of Fig. 3 shows the two baselines and the two HaFF variants for FLiM that are used to answer the research questions. All of the variants initialize the model fragment population from the feature seed. The options for the genetic operations are the mask crossover operation plus the random mutation operation, or the replacement reformulation ...
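The mask crossover and random mutation operations mentioned above can be sketched roughly as follows. This is a minimal illustration only: it assumes model fragments are encoded as bitmasks over the model's elements, which is a common SOEA encoding but is an assumption here, not the paper's documented representation.

```python
import random

def mask_crossover(parent_a, parent_b):
    """Combine two parent fragments element-wise using a random binary mask.

    Where the mask is 1, the child takes the element from parent_a;
    otherwise it takes the element from parent_b. (Assumed encoding:
    each bit marks whether a model element belongs to the fragment.)
    """
    mask = [random.random() < 0.5 for _ in parent_a]
    return [a if m else b for m, a, b in zip(mask, parent_a, parent_b)]

def random_mutation(fragment, rate=0.05):
    """Flip each bit (include/exclude a model element) with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in fragment]

# Toy fragments over a five-element model
a = [1, 1, 0, 0, 1]
b = [0, 1, 1, 0, 0]
child = random_mutation(mask_crossover(a, b))
print(child)  # a bitmask of the same length as the parents
```

The mutation rate and mask probability are illustrative defaults, not values reported in the study.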
Context 3
... RQ 2: To assess the performance, in terms of solution quality, of the reformulation operation in the baseline and in HaFF, we use the variants in Column 4 (identified as Baseline R) and Column 5 (identified as HaFF R) of the table shown in the lower part of Fig. 3. To answer this question, we executed 1798 independent runs of the evolutionary algorithm: 58 (features) x 1 (Baseline R) x 30 repetitions (as suggested by Arcuri and Fraser [63]) + 58 (features) x 1 (HaFF ...


In industry, software projects may span decades, and many engineers join or leave the company over time. For these reasons, no single engineer has all of the knowledge needed when maintenance tasks such as Traceability Link Recovery (TLR), Bug Localization (BL), and Feature Location (FL) are performed. Collaboration therefore has the potential to boost the quality of maintenance tasks, since one engineer's solution might be enhanced with contributions from other engineers' solutions. However, assembling a team of software engineers to collaborate may not be as intuitive as we might think. In the context of a worldwide industrial supplier of railway solutions, this work evaluates how the quality of TLR, BL, and FL is affected by the criteria used to select engineers for collaboration. The collaboration criteria draw on engineers' profile information to select the set of search queries that are involved in the maintenance task. Collaboration is achieved by applying automatic query reformulation, and the location relies on an evolutionary algorithm. Our work uncovers how software engineers who might be seen as not being relevant to the collaboration can lead to significantly better results. A focus group confirmed the relevance of the findings.
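The idea of automatic query reformulation across engineers can be sketched in miniature: terms that one engineer's query lacks are borrowed from another engineer's query. The function name, the term-merging rule, and the example queries below are all hypothetical simplifications, not the operator actually used in the study.

```python
def reformulate(base_query, other_query):
    """Expand base_query with terms it lacks from another engineer's query.

    Assumed behavior: whitespace tokenization, case-insensitive matching,
    and simple append of the missing terms (a toy stand-in for the
    paper's reformulation operator).
    """
    base_terms = base_query.lower().split()
    extra = [t for t in other_query.lower().split() if t not in base_terms]
    return " ".join(base_terms + extra)

# Two engineers describe the same feature in different vocabulary
q = reformulate("door control firmware", "door sensor diagnostics")
print(q)  # -> "door control firmware sensor diagnostics"
```

Even this toy version shows why a seemingly less relevant engineer can help: their query contributes terms ("sensor", "diagnostics") absent from the original query.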