Alessandro Garcia

Federal University of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil

Publications (167) · 22.74 Total Impact

  • ABSTRACT: A domain-specific language (DSL) aims to support software development by offering abstractions tailored to a particular domain. DSLs are expected to improve the maintainability of artifacts otherwise produced with general-purpose languages. However, the maintainability of DSL artifacts, and hence their adoption in mainstream development, depends largely on the usability of the language itself. Unfortunately, it is often hard to identify a DSL's usability strengths and weaknesses early, as there is no guidance on how to reveal them objectively. Usability is a multi-faceted quality characteristic, which DSL stakeholders find challenging to quantify beforehand. There is even less support for quantitatively evaluating the usability of DSLs used in maintenance tasks. In this context, this paper reports a study comparing the usability of textual DSLs from the perspective of software maintenance. A usability measurement framework was developed based on the Cognitive Dimensions of Notations. The framework was evaluated both qualitatively and quantitatively using two DSLs in the context of two evolving object-oriented systems. The results suggest that the proposed metrics were useful: (1) to identify DSL usability limitations early, (2) to reveal specific DSL features favoring maintenance tasks, and (3) to analyze eight critical DSL usability dimensions.
    Journal of Systems and Software 12/2014; 101. DOI:10.1016/j.jss.2014.11.051 · 1.25 Impact Factor
  • ABSTRACT: Design patterns often need to be blended (or composed) when they are instantiated in a software system. The composition of design patterns consists of assigning multiple pattern elements to overlapping sets of classes in a software system. Whenever the modularity of each design pattern is not preserved in the source code, their implementations become tangled with each other and with the classes' core responsibilities. As a consequence, changing or removing each design pattern becomes costly or prohibitive as the software system evolves. In fact, composing design patterns is much harder than instantiating them in isolation. Previous studies have found that design pattern implementations are naturally crosscutting in object-oriented systems, thereby making it difficult to compose them in a modular way. Therefore, aspect-oriented programming (AOP) has been pointed out as a natural alternative for modularizing and blending design patterns. However, there is little empirical knowledge on how AOP models influence the composability of widely used design patterns. This paper investigates the influence of AOP models on the composition of the Gang-of-Four design patterns. Our study categorizes different forms of pattern composition and studies the benefits and drawbacks of AOP in these contexts. We assessed several pair-wise compositions taken from three medium-sized systems implemented in Java and two AOP models, namely AspectJ and Compose*. We also considered complex situations where more than two patterns were involved in each composition, and where the patterns interacted with aspects implementing other crosscutting concerns of the system. In general, we observed two dominant factors impacting pattern composability with AOP: (i) the category of the pattern composition, and (ii) the AspectJ idioms used to implement the design patterns taking part in the composition. (An illustrative sketch of pattern-role overlap appears after the publication list.)
    Journal of Systems and Software 12/2014; 98:117-139. DOI:10.1016/j.jss.2014.08.041 · 1.25 Impact Factor
  • Camila Nunes, Alessandro Garcia, Carlos Lucena, Jaejoon Lee
    ABSTRACT: Establishing explicit mappings between features and their implementation elements in code is one of the critical factors in maintaining and evolving software systems successfully. This is especially important when developers have to evolve program families, which have grown from a single core system into similar but distinct systems to accommodate various customer requirements. Many techniques and tools have emerged to assist developers in the feature mapping activity. However, existing techniques and tools for feature mapping are limited, as they operate on a single program version at a time. Additionally, existing approaches only recover features on demand; that is, developers have to run the tools for each family member version individually. In this paper, we propose a cohesive suite of five mapping heuristics addressing these two limitations. These heuristics explore the evolution history of the family members in order to expand feature mappings in evolving program families. Expansion refers to automatically generating the feature mappings for each family member version by systematically considering its previous change history. The mapping expansion starts from seed mappings and continually tracks the features of the program family, thus eliminating the need for on-demand recovery. Additionally, we present the MapHist tool, which supports the application of the proposed heuristics. We evaluate the accuracy of our heuristics on two evolving program families from our industrial partners. (A simplified co-change sketch appears after the publication list.)
    Software Practice and Experience 11/2014; 44(11). DOI:10.1002/spe.2200 · 1.15 Impact Factor
  • ABSTRACT: Mainstream programming languages provide built-in exception handling mechanisms to support robust and maintainable implementation of exception handling in software systems. Most modern languages, such as C#, Ruby, Python and many others, are often claimed to have more appropriate exception handling mechanisms. They reduce programming constraints on exception handling to favor agile changes in the source code. These languages provide what we call maintenance-driven exception handling mechanisms. The adoption of these mechanisms is expected to improve software maintainability without hindering software robustness. However, there is still little empirical knowledge about the impact that adopting these mechanisms has on software robustness. This paper addresses this gap through an empirical study aimed at understanding the relationship between changes in C# programs and their robustness. In particular, we evaluated how changes in the normal and exceptional code were related to exception handling faults. We applied a change impact analysis and a control flow analysis to 119 versions of 16 C# programs. The results showed that: (i) most of the problems hindering software robustness in those programs are caused by changes in the normal code, (ii) many potential faults were introduced even when improving exception handling in C# code, and (iii) faults are often facilitated by the maintenance-driven flexibility of the exception handling mechanism. Moreover, we present a series of change scenarios that decrease program robustness. (A transposed Java sketch of such a scenario appears after the publication list.)
  • ABSTRACT: As software systems are maintained, their architecture often degrades through the processes of architectural drift and erosion. These processes are often intertwined, and the same modules in the code become the locus of both drift and erosion symptoms. Thus, architects should elaborate architecture rules for detecting occurrences of both degradation symptoms. While the specification of such rules is time-consuming, they are similar across software projects adhering to similar architecture decompositions. Unfortunately, existing anti-degradation techniques are limited, as they focus only on detecting either drift or erosion symptoms. They also do not support the reuse of recurring anti-degradation rules. In this context, the contribution of this paper is twofold. First, it presents TamDera, a domain-specific language for: (i) specifying rule-based strategies to detect both erosion and drift symptoms, and (ii) promoting the hierarchical and compositional reuse of design rules across multiple projects. The language was designed with common programming language concepts in mind, such as inheritance and modularization. Second, we evaluated to what extent developers would benefit from the definition and reuse of hybrid rules. Our study involved 21 versions of 5 software projects and more than 600 rules. On average, 45% of the classes that had drift symptoms in earlier versions presented inter-related erosion problems in later versions, or vice versa. Also, up to 72% of all the TamDera rules in a project came from a pre-defined library of reusable rules. These reusable rules were responsible for detecting, on average, 73% of the inter-related degradation symptoms across the projects.
    Proceedings of the 13th international conference on Modularity; 04/2014
  • Kleinner Farias, Alessandro Garcia, Jon Whittle
    ABSTRACT: Model composition is a common operation used in many software development activities, for example, reconciling models developed in parallel by different development teams, or merging models of new features with existing model artifacts. Unfortunately, both commercial and academic model composition tools suffer from the composition conflict problem. That is, models to be composed may conflict with each other, and these conflicts must be resolved. In practice, detecting and resolving conflicts is a highly intensive manual activity. In this paper, we investigate whether aspect-orientation reduces conflict resolution effort, as improved modularization may better localize conflicts. The main goal of the paper is to conduct an exploratory study analyzing the impact of aspects on conflict resolution. In particular, model compositions are used to express the evolution of architectural models along six releases of a software product line. Well-known composition algorithms, such as override, merge and union, are applied and compared on both AO and non-AO models in terms of their conflict rate and the effort to resolve the identified conflicts. Our findings identify specific scenarios where aspect-orientation properties, such as obliviousness and quantification, result in a lower (or higher) composition effort. (A toy sketch of the override and merge algorithms appears after the publication list.)
  • ABSTRACT: This paper proposes an initial quality model for model composition effort, which serves as a frame of reference for developers and researchers to plan and perform qualitative and quantitative investigations, as well as to replicate and reproduce empirical studies. A series of empirical studies supports the proposed quality model, including five industrial case studies, two controlled experiments, three quasi-experiments, interviews and seven observational studies. Moreover, these studies have systematically demonstrated the benefits of using a frame of reference to enable learning about model composition effort from experimentation.
    SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing; 03/2014
  • Everton Guimaraes, Alessandro Garcia, Kleinner Farias
    ABSTRACT: Code anomalies are structural problems in a program. Even though they might represent symptoms of architecture degradation, several code anomalies do not contribute to this process. Source code inspection by developers might not support time-effective detection of architecturally-relevant anomalies in a program. Hence, developers usually rely on multiple software metrics known to effectively detect code anomalies. However, there is still no empirical knowledge about the time effectiveness of metric-based strategies for detecting architecturally-relevant anomalies. Given the longitudinal nature of this activity, we performed a first exploratory case study to address this gap. We compare metrics-based strategies with manual inspections made by the actual software developers. The study was conducted in the context of a legacy software system with 30K lines of code, 415 architectural elements, 210 versions, and a reengineering effort spanning almost 3 years. Effectiveness was assessed in terms of several quantitative and qualitative indicators. To measure the effort, we computed the amount of time spent in several activities required to identify architecturally-relevant code anomalies. The results of our study shed light on potential effort reductions and effectiveness improvements of metrics-based strategies.
    Software and Systems Modeling 01/2014; DOI:10.1007/s10270-014-0408-2 · 0.82 Impact Factor
  • ABSTRACT: Code anomalies are symptoms of software maintainability problems, and they are particularly harmful when they contribute to architectural degradation. Despite the existence of many automated techniques for code anomaly detection, identifying the code anomalies that are more likely to cause architecture problems remains a challenging task. Even when there is tool support for detecting code anomalies, developers often invest a considerable amount of time refactoring those that are not related to architectural problems. In this paper we present and evaluate four different heuristics for helping developers prioritize code anomalies based on their potential contribution to software architecture degradation. These heuristics exploit different characteristics of a software project, such as change-density and error-density, to automatically rank the code elements that should be refactored first according to their potential architectural relevance. Our evaluation revealed that software maintainers could benefit from the recommended rankings to identify which code anomalies are harming the architecture the most, helping them invest their refactoring efforts in solving architecturally relevant problems. (A toy ranking sketch appears after the publication list.)
    2013 27th Brazilian Symposium on Software Engineering (SBES); 10/2013
  • ABSTRACT: Software metrics have traditionally been used to evaluate the modularity of software systems and to detect code smells, such as God Method. Code smells are symptoms that may indicate something wrong in the system code. A God Method is a method that has grown too much and tends to centralize the functionality of a class. Recently, concern metrics have also been proposed to evaluate software maintainability. While traditional metrics quantify properties of software modules, concern metrics quantify properties of concerns, such as scattering and tangling. Despite being increasingly used in empirical studies, there is a lack of empirical knowledge about the usefulness of concern metrics for detecting code smells. The goal of this paper is to report the results of an exploratory study investigating whether concern metrics provide useful indicators for detecting God Method. In this study, 47 subjects from two institutions analyzed traditional and concern metrics aiming to detect instances of this code smell in a system. The study results indicate that an elaborate joint analysis of both traditional and concern metrics is often required to detect God Method. We conclude that new, focused metrics may be required to support the detection of smelly methods. (A hedged detection-strategy sketch appears after the publication list.)
    7th Latin-American Workshop on Aspect-Oriented Software Development (LA-WASP), co-located with CBSoft, Brasília, DF, Brazil; 09/2013
  • Alessandro Garcia
    Journal of Systems and Software 04/2013; 86(4):869–871. DOI:10.1016/j.jss.2013.01.058 · 1.25 Impact Factor
  • ABSTRACT: A considerable part of software design is dedicated to the composition of two or more modules. The implication is that changes made later in the implementation often require some reasoning about module composition properties. However, these properties are often not explicitly specified in design artefacts, and they cannot easily be inferred from the source code either. As a result, implicit composition properties may represent a major source of software maintenance complexity. This is particularly true with the advent of post-object-oriented techniques, which increasingly provide advanced mechanisms for flexible module composition. However, there is little empirical knowledge on how design models with explicitly-specified composition properties can improve software maintenance tasks. This paper reports an experiment that analyses the impact of design models enriched with composition properties on system maintenance. Explicit composition modelling was achieved in the experiment through a conservative approach, i.e., a specific set of additional UML stereotypes dedicated to modelling composition properties. The experiment involved 28 participants, who were asked to realize four maintenance tasks using UML design models with different levels of module composition detail. Our findings suggest that explicit composition modelling contributed to better results in the realization of program change tasks, regardless of the developers' expertise. Users of composition-enhanced models consistently yielded better results than users of plain UML models. The use of explicit composition specifications also led to an average increase of 44.7% in the quality of changes produced by the participants.
    Proceedings of the 12th annual international conference on Aspect-oriented software development; 03/2013
  • ABSTRACT: Checking a system's structural dependencies is a line of research on methods and tools for analyzing, modeling and checking the conformance of source code with respect to specifications of its intended static structure. Existing approaches have focused on the correctness of the specification, the impact of the approaches on software quality, and the expressiveness of the modeling languages. However, large specifications become unmaintainable as the software evolves unless there are means to modularize them. We present Vespucci, a novel approach and tool that partitions a specification of the expected and allowed dependencies into a set of cohesive slices. This facilitates modular reasoning and helps in maintaining each slice individually. Our approach is suited for modeling high-level as well as detailed low-level decisions related to the static structure and combines both in a single modeling formalism. To evaluate our approach we conducted an extensive study spanning nine years of evolution of the architecture of the object-relational mapping framework Hibernate.
    Proceedings of the 12th annual international conference on Aspect-oriented software development; 03/2013
  • ABSTRACT: When reflecting upon driving system requirements such as security and availability, software architects often face decisions that have a broadly scoped impact on the software architecture. These decisions are the core of the architecting process because they typically have implications intertwined in a multitude of architectural elements and across multiple views. Without a modular representation and management of those crucial choices, architects cannot properly communicate, assess and reason about their crosscutting effects. The result is a number of architectural breakdowns, such as misinformed architectural evaluation, time-consuming trade-off analysis and unmanageable traceability. This paper presents an architectural documentation approach in which aspects are exploited as a natural way to capture widely-scoped design decisions in a modular fashion. The approach consists of a simple high-level notation to describe crosscutting decisions, and a supplementary language that allows architects to formally define how such architectural decisions affect the final architectural decomposition according to different views. On the basis of two case studies, we have systematically assessed to what extent our approach: (i) supports the description of heterogeneous forms of crosscutting architecture decisions, (ii) improves the support for architecture modularity analysis, and (iii) enhances upstream and downstream traceability of crosscutting architectural decisions.
    Software Practice and Experience 03/2013; 43(3). DOI:10.1002/spe.2113 · 1.15 Impact Factor
  • ABSTRACT: Research has shown that code anomalies may be related to problems in the architecture design. Without proper mechanisms to support the identification of architecturally-relevant code anomalies, software systems will degrade and might be discontinued as a consequence. Nowadays, metrics-based detection strategies are the most common mechanism for identifying code anomalies. However, these strategies often fail to detect architecturally-relevant code anomalies. A key limitation is that they solely exploit measurable static properties of the source code. This paper proposes and evaluates a suite of architecture-sensitive detection strategies. These strategies exploit information related to how fully-modularized and widely-scoped architectural concerns are realized by the code elements. The accuracy of the proposed detection strategies is assessed on a sample of nearly 3500 architecturally-relevant code anomalies and 950 architectural problems distributed across 6 software systems. Our findings show that more than 60% of the code anomalies detected by the proposed strategies were related to architectural problems. Additionally, the proposed strategies identified on average 50% more architecturally-relevant code anomalies than those gathered using conventional strategies.
    Software Maintenance and Reengineering (CSMR), 2013 17th European Conference on; 01/2013
  • ABSTRACT: Program comprehension is an essential activity in software maintenance and evolution. Comprehension often encompasses the analysis of individual logical units, called features, which are often scattered across many program modules. Understanding how the feature code is implemented along the software evolution history is essential, for instance, to perform refactoring activities. However, existing tools do not provide means to comprehend the evolution of feature code. To overcome this shortcoming, this paper presents a tool called Source Miner Evolution (SME) that provides multiple interactive and coordinated views for comprehending feature code evolution. SME implements a feature-sensitive comparison of multiple program versions. Our usability assessment with experienced developers indicated that SME allows them to efficiently perform recurring comprehension tasks on evolving feature code. The developers' performance was influenced by the combination of visual SME mechanisms, such as colors, tool tips and menu-popup interactions over the features' code elements.
    Software Maintenance (ICSM), 2013 29th IEEE International Conference on; 01/2013
  • ABSTRACT: The importance of model composition in model-centric software development is well recognized by researchers and practitioners. However, little is known about the critical factors influencing the effort that developers invest to combine design models and to detect and resolve inconsistencies in practice. This paper therefore reports on five industrial case studies where model composition was used to evolve and reconcile large-scale design models. These studies aim at: (1) gathering empirical evidence about the extent of composition effort when realizing different categories of changes, and (2) identifying and analyzing its influential factors. A series of 297 evolution scenarios was performed on the target systems, leading to more than 2 million compositions of model elements. Our findings suggest that the inconsistency resolution effort is much higher than the upfront effort to apply the composition technique and detect inconsistencies; that the developer's reputation significantly influences the resolution of conflicting changes; and that evolutions dominated by additions required less effort.
    Model-Driven Engineering Languages and Systems, 01/2013: pages 639-655;
  • ABSTRACT: The lack of empirical knowledge about the effects of model composition techniques on developers' effort is a key impediment to their widespread adoption in practice. This problem applies to both existing categories of model composition techniques, i.e. specification-based (e.g. Epsilon) and heuristic-based (e.g. IBM RSA) techniques. This paper reports on a controlled experiment that investigates the effort to: (1) apply both categories of model composition techniques, and (2) detect and resolve inconsistencies in the output composed models. The techniques are investigated in 144 evolution scenarios, in which 2304 compositions of class diagram elements were produced. The results suggest that: (1) the employed heuristic-based techniques require less effort to produce the intended model than the chosen specification-based technique, (2) the correctness of the output composed models generated by the techniques is not significantly different, and (3) manual heuristics for model composition outperform their automated counterparts.
    Proceedings of the 15th international conference on Model Driven Engineering Languages and Systems; 09/2012
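
Illustrative Code Sketches

The design pattern composition study above (Journal of Systems and Software, 12/2014) describes how multiple pattern elements end up assigned to overlapping sets of classes. The short Java sketch below is a hypothetical shopping-cart example, not code from the three studied systems; it shows the kind of tangling that arises when one class plays roles in two Gang-of-Four patterns (Observer and Mediator) at once. The AspectJ and Compose* variants examined in the paper are not reproduced here.

    import java.util.ArrayList;
    import java.util.List;

    // Observer role: the listener interface that observed classes notify.
    interface CartListener {
        void cartChanged(ShoppingCart cart);
    }

    // One class plays the Observer "Subject" role and the Mediator role at once,
    // so pattern code is tangled with its core responsibility (managing items).
    class ShoppingCart {
        private final List<String> items = new ArrayList<>();
        private final List<CartListener> listeners = new ArrayList<>(); // Observer state
        private PaymentPanel paymentPanel;                               // Mediator state

        void register(CartListener l) { listeners.add(l); }             // Observer logic
        void setPaymentPanel(PaymentPanel p) { paymentPanel = p; }      // Mediator wiring

        void addItem(String item) {                      // core responsibility...
            items.add(item);
            listeners.forEach(l -> l.cartChanged(this)); // ...tangled with Observer logic
            if (paymentPanel != null) {
                paymentPanel.enableCheckout(!items.isEmpty()); // ...and with Mediator logic
            }
        }

        int size() { return items.size(); }
    }

    // Mediator role: a colleague coordinated by the cart.
    class PaymentPanel {
        void enableCheckout(boolean enabled) {
            System.out.println("Checkout enabled: " + enabled);
        }
    }

    public class PatternCompositionDemo {
        public static void main(String[] args) {
            ShoppingCart cart = new ShoppingCart();
            cart.register(c -> System.out.println("Cart now has " + c.size() + " item(s)"));
            cart.setPaymentPanel(new PaymentPanel());
            cart.addItem("book"); // one change point exercises the code of both patterns
        }
    }

Removing either pattern from ShoppingCart requires editing the same method that implements its core responsibility, which is the composability cost the study quantifies.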
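
The feature mapping work above (Software Practice and Experience, 11/2014) expands seed mappings by exploiting the change history of program family members. The Java sketch below is a deliberately simplified co-change heuristic written only to illustrate the idea of expansion; it is not one of the five MapHist heuristics, and the element names are invented.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class MappingExpansionSketch {

        // Simplified rule: if a commit modifies an element already mapped to a feature,
        // the other elements touched by that commit are added to the same mapping.
        static Map<String, Set<String>> expand(Map<String, Set<String>> seedMappings,
                                               List<Set<String>> commitChangeSets) {
            Map<String, Set<String>> expanded = new HashMap<>();
            seedMappings.forEach((feature, elements) -> expanded.put(feature, new HashSet<>(elements)));

            for (Set<String> changedElements : commitChangeSets) {
                for (Map.Entry<String, Set<String>> mapping : expanded.entrySet()) {
                    boolean touchesFeature =
                            changedElements.stream().anyMatch(mapping.getValue()::contains);
                    if (touchesFeature) {
                        mapping.getValue().addAll(changedElements); // expand the mapping
                    }
                }
            }
            return expanded;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> seeds = Map.of("checkout", Set.of("Cart.total()"));
            List<Set<String>> commits = List.of(
                    Set.of("Cart.total()", "TaxCalculator.apply()"), // co-change with the seed
                    Set.of("Login.validate()"));                     // unrelated commit
            // Prints the checkout feature mapped to both co-changed elements
            // (element order may vary).
            System.out.println(expand(seeds, commits));
        }
    }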
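
The robustness study above analyzes C# programs; the Java fragment below merely transposes, under that caveat, the general shape of an exception handling fault introduced by a change in the normal code: an existing handler stops covering a new failure mode even though the try/catch block itself was never edited. The method and values are invented for illustration.

    public class RobustnessDriftSketch {

        // Original normal code: reads a numeric setting and falls back to a default.
        static int readTimeout(String rawValue) {
            try {
                return Integer.parseInt(rawValue.trim());
            } catch (NumberFormatException e) { // handler written for malformed numbers only
                return 30;                      // documented fallback
            }
        }

        public static void main(String[] args) {
            System.out.println(readTimeout("45"));   // original usage: fine
            System.out.println(readTimeout("oops")); // original failure mode: handled, prints 30

            // A later change in the normal code starts passing a possibly missing setting.
            // The existing handler does not cover NullPointerException, so the new failure
            // mode escapes and crashes the caller, even though the try/catch was untouched.
            String missingSetting = null;
            System.out.println(readTimeout(missingSetting)); // throws NullPointerException
        }
    }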
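
The prioritization paper above (SBES 2013) ranks code elements by characteristics such as change-density and error-density. The sketch below shows only the general idea of scoring and ranking anomalous elements; the weights, the score formula and the element names are assumptions made for this example and do not reproduce the four heuristics evaluated in the paper.

    import java.util.Comparator;
    import java.util.List;

    public class AnomalyPrioritizationSketch {

        // Per-element history data; in practice it would come from the version repository
        // and the issue tracker.
        record AnomalousElement(String name, int changesInHistory, int errorsReported) {}

        // Hypothetical score: elements that change often and appear in many reported errors
        // are assumed to be more architecturally relevant.
        static double score(AnomalousElement e) {
            return 0.6 * e.changesInHistory() + 0.4 * e.errorsReported();
        }

        static List<AnomalousElement> rank(List<AnomalousElement> elements) {
            Comparator<AnomalousElement> byScore =
                    Comparator.comparingDouble(AnomalyPrioritizationSketch::score);
            return elements.stream().sorted(byScore.reversed()).toList();
        }

        public static void main(String[] args) {
            List<AnomalousElement> candidates = List.of(
                    new AnomalousElement("PaymentFacade", 42, 9),
                    new AnomalousElement("ReportHelper", 3, 0),
                    new AnomalousElement("OrderController", 25, 15));
            rank(candidates).forEach(e -> System.out.printf("%.1f  %s%n", score(e), e.name()));
        }
    }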
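
The God Method study above (LA-WASP 2013) concludes that a joint analysis of traditional and concern metrics is often needed. The sketch below illustrates what such a metrics-based detection strategy can look like; the thresholds and the concernCount metric are illustrative assumptions, not the metrics or cut-offs analyzed by the subjects of the study.

    public class GodMethodDetectionSketch {

        // Minimal per-method measurements; in practice they would come from a metrics tool,
        // and concernCount from concern mapping data (scattering/tangling).
        record MethodMetrics(String name, int linesOfCode, int cyclomaticComplexity,
                             int accessedAttributes, int concernCount) {}

        // Joint strategy: flag a method only when it is long, complex, data-hungry
        // AND tangles several concerns at once.
        static boolean looksLikeGodMethod(MethodMetrics m) {
            return m.linesOfCode() > 60
                    && m.cyclomaticComplexity() > 10
                    && m.accessedAttributes() > 8
                    && m.concernCount() >= 3;
        }

        public static void main(String[] args) {
            MethodMetrics suspicious = new MethodMetrics("Order.process", 120, 18, 12, 4);
            MethodMetrics harmless = new MethodMetrics("Order.getId", 3, 1, 1, 1);
            System.out.println(looksLikeGodMethod(suspicious)); // true
            System.out.println(looksLikeGodMethod(harmless));   // false
        }
    }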
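
Several of the model composition papers above, including the exploratory study by Farias, Garcia and Whittle and the 2012 controlled experiment, refer to composition algorithms such as override, merge and union. The toy Java sketch below reduces models to maps from element names to descriptions in order to show how override and merge differ and where conflicts surface; real UML composition operates on full model elements and is considerably more involved.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ModelCompositionSketch {

        // override: elements of the delta model replace same-named elements of the base model.
        static Map<String, String> override(Map<String, String> base, Map<String, String> delta) {
            Map<String, String> result = new LinkedHashMap<>(base);
            result.putAll(delta);
            return result;
        }

        // merge: same-named elements are combined; the naive combination below is exactly
        // where conflicts surface in practice and demand manual resolution.
        static Map<String, String> merge(Map<String, String> base, Map<String, String> delta) {
            Map<String, String> result = new LinkedHashMap<>(base);
            delta.forEach((name, description) -> result.merge(name, description,
                    (a, b) -> a.equals(b) ? a : a + " | CONFLICT: " + b));
            return result;
        }

        public static void main(String[] args) {
            Map<String, String> base = Map.of("Customer", "attrs: id, name",
                                              "Order", "attrs: id, total");
            Map<String, String> delta = Map.of("Order", "attrs: id, total, discount",
                                               "Invoice", "attrs: id, dueDate");
            System.out.println(override(base, delta)); // delta's Order wins silently
            System.out.println(merge(base, delta));    // Order is marked as a conflict
        }
    }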

Publication Stats

2k Citations
22.74 Total Impact Points

Institutions

  • 2009–2014
    • Federal University of Rio de Janeiro
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 2013
    • Universidade Federal da Bahia
      Bahia, Bahia, Brazil
  • 2010–2012
    • Tecgraf / PUC-Rio
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 2002–2010
    • Pontifícia Universidade Católica do Rio de Janeiro
      • Department of Informatics (INF)
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 1970–2009
    • Lancaster University
      • School of Computing and Communications
      Lancaster, England, United Kingdom
  • 2007
    • Federal University of Pernambuco
      • Center of Informatics (CIn)
      Recife, Estado de Pernambuco, Brazil