Alessandro Garcia

Federal University of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil

Publications (187) · 26.62 Total Impact Points

  • Luciano Sampaio · Alessandro Garcia
    ABSTRACT: Secure programming is the practice of writing programs that are resistant to attacks by malicious people or programs. Programmers of secure software have to be continuously aware of security vulnerabilities when writing their program statements. In order to improve programmers’ awareness, static analysis techniques have been devised to find vulnerabilities in the source code. However, most of these techniques are built to encourage vulnerability detection a posteriori, only when developers have already fully produced (and compiled) one or more modules of a program. Therefore, this approach, also known as late detection, does not support secure programming but rather encourages posterior security analysis. The lateness of vulnerability detection is also influenced by the high rate of false positives yielded by pattern matching, the underlying mechanism used by existing static analysis techniques. The goal of this paper is twofold. First, we propose to perform continuous detection of security vulnerabilities while the developer is editing each program statement, also known as early detection. Early detection can leverage the developer's knowledge of the context of the code being created, contrary to late detection, where developers struggle to recall and fix the intricacies of vulnerable code they produced hours to weeks earlier. Second, we explore context-sensitive data flow analysis (DFA) to improve vulnerability detection and mitigate the limitations of pattern matching. DFA may be suitable for determining whether an object has a vulnerable path. To this end, we have implemented a proof-of-concept Eclipse plugin for continuous DFA-based detection of vulnerabilities in Java programs. We also performed two empirical studies based on several industry-strength systems to evaluate whether code security can be improved through DFA and early vulnerability detection. Our studies confirmed that: (i) the use of context-sensitive DFA significantly reduces the rate of false positives when compared to existing techniques, without being detrimental to the detector performance, and (ii) early detection improves awareness among developers and encourages programmers to fix security vulnerabilities promptly.
    No preview · Article · Dec 2015
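
The vulnerable-path idea is easy to picture in code. The Java fragment below is a minimal sketch of our own (not code from the paper or its Eclipse plugin): it contrasts a tainted source-to-sink flow that a context-sensitive data-flow detector would flag as SQL injection with a parameterized variant the detector would accept.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {

    // Vulnerable path: 'name' flows unchanged from the caller into the
    // SQL string. A context-sensitive DFA tracks 'name' from its source
    // to this sink and reports the statement.
    ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        String query = "SELECT * FROM users WHERE name = '" + name + "'";
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(query); // tainted sink
    }

    // Safe path: the value is bound as a parameter, so no tainted data
    // reaches a string-built query and the detector stays silent.
    ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```

In the early-detection setting the paper argues for, the first method would be reported while the developer is still typing the concatenated query, rather than after the module is compiled.
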
  • Source
    Kleinner Farias · Alessandro Garcia · Jon Whittle

    Full-text · Dataset · Oct 2015
  • Source
    ABSTRACT: [Context] Collaborative programming is achieved when two or more programmers develop software together. Pair Programming and Coding Dojo Randori are two increasingly adopted practices for collaborative programming. While the former encourages collaboration in pairs, the latter promotes collaboration in groups. However, there is no broad understanding of the impact of these practices on the acquisition of programming skills. [Goal] In this study, we empirically compare the influence of both collaborative practices on two essential aspects of skill acquisition: motivation and learning. [Method] We conducted a controlled experiment with novice programmers applying solo programming and both collaborative practices to three different programming exercises using a crossed design. [Results] Our results showed that, while both practices outperformed solo programming, they also presented complementary benefits for acquiring programming skills. For instance, the programmers inserted fewer code anomalies in Coding Dojo Randori sessions than in Pair Programming sessions. On the other hand, motivation was often considered to be stronger in the latter than in the former. [Conclusions] Our results suggest that the use of collaborative practices is particularly promising for acquiring programming skills when programmers have little or no practical experience with software development.
    Full-text · Conference Paper · Sep 2015
  • ABSTRACT: A Software Product Line (SPL) is a set of software systems that share common functionalities, so-called features. When features are related, we consider this relation a feature dependency. Whenever a new feature is added, the presence of feature dependencies in the source code may increase the maintenance effort. In particular, during the maintenance of an SPL implementation, added features may induce changes in other features, the so-called change propagation: the set of ripple changes required in other features whenever a particular feature is added or changed.
    No preview · Article · Sep 2015 · Information and Software Technology
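
To make the notion of a feature dependency concrete, here is a toy Java sketch (ours, with invented class and feature names, not an example from the article): the PAYMENT feature calls code owned by the ORDER feature, so a change made on behalf of a new DISCOUNT feature ripples across both.

```java
// Hypothetical SPL fragment: Order belongs to the ORDER feature,
// Payment to the PAYMENT feature, and Payment depends on Order.total().
class Order {
    double total(double unitPrice, int quantity) {
        // A new DISCOUNT feature would have to edit this computation...
        return unitPrice * quantity;
    }
}

class Payment {
    double charge(Order order, double unitPrice, int quantity) {
        // ...and, through the feature dependency, the change propagates
        // here: Payment must decide whether the discounted or the full
        // amount is charged.
        return order.total(unitPrice, quantity);
    }
}
```
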
  • ABSTRACT: A domain-specific language (DSL) aims to support software development by offering abstractions for a particular domain. It is expected that DSLs improve the maintainability of artifacts otherwise produced with general-purpose languages. However, the maintainability of DSL artifacts and, hence, their adoption in mainstream development, is largely dependent on the usability of the language itself. Unfortunately, it is often hard to identify a DSL's usability strengths and weaknesses early, as there is no guidance on how to objectively reveal them. Usability is a multi-faceted quality characteristic, which is challenging for DSL stakeholders to quantify beforehand. There is even less support on how to quantitatively evaluate the usability of DSLs used in maintenance tasks. In this context, this paper reports a study comparing the usability of textual DSLs from the perspective of software maintenance. A usability measurement framework was developed based on the Cognitive Dimensions of Notations. The framework was evaluated both qualitatively and quantitatively using two DSLs in the context of two evolving object-oriented systems. The results suggested that the proposed metrics were useful: (1) to identify DSL usability limitations early, (2) to reveal specific DSL features favoring maintenance tasks, and (3) to successfully analyze eight critical DSL usability dimensions.
    No preview · Article · Mar 2015 · Journal of Systems and Software
  • Eiji Barbosa · Alessandro Garcia · Martin Robillard · Benjamin Jakobus

    No preview · Article · Jan 2015 · IEEE Transactions on Software Engineering
  • ABSTRACT: Design patterns often need to be blended (or composed) when they are instantiated in a software system. The composition of design patterns consists of assigning multiple pattern elements to overlapping sets of classes in a software system. Whenever the modularity of each design pattern is not preserved in the source code, their implementations become tangled with each other and with the classes' core responsibilities. As a consequence, changing or removing each design pattern becomes costly or prohibitive as the software system evolves. In fact, composing design patterns is much harder than instantiating them in an isolated manner. Previous studies have found that design pattern implementations are naturally crosscutting in object-oriented systems, thereby making it difficult to compose them modularly. Therefore, aspect-oriented programming (AOP) has been pointed out as a natural alternative for modularizing and blending design patterns. However, there is little empirical knowledge on how AOP models influence the composability of widely used design patterns. This paper investigates the influence of using AOP models for composing the Gang-of-Four design patterns. Our study categorizes different forms of pattern composition and studies the benefits and drawbacks of AOP in these contexts. We performed assessments of several pair-wise compositions taken from three medium-sized systems implemented in Java and two AOP models, namely AspectJ and Compose*. We also considered complex situations where more than two patterns were involved in each composition, and where the patterns interacted with other aspects implementing other crosscutting concerns of the system. In general, we observed two dominant factors impacting pattern composability with AOP: (i) the category of the pattern composition, and (ii) the AspectJ idioms used to implement the design patterns taking part in the composition.
    No preview · Article · Dec 2014 · Journal of Systems and Software
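
The overlap that makes pattern composition hard can be shown in a few lines of plain Java. The sketch below is our own toy illustration (none of the study's systems or AOP solutions): one class hierarchy is assigned roles in both the Composite and Observer patterns, so the logic of the two patterns and the core drawing responsibility end up tangled in the same classes.

```java
import java.util.ArrayList;
import java.util.List;

// Figure plays the Observer-subject role and, below, the Composite
// component role; neither pattern is modularized.
abstract class Figure {
    private final List<Runnable> observers = new ArrayList<>(); // Observer role

    void subscribe(Runnable observer) { observers.add(observer); }

    protected void notifyObservers() { observers.forEach(Runnable::run); }

    abstract void draw(); // core responsibility
}

class CompositeFigure extends Figure { // Composite role
    private final List<Figure> children = new ArrayList<>();

    void add(Figure child) {
        children.add(child);
        notifyObservers(); // pattern code intertwined: a Composite
                           // operation triggers Observer behavior
    }

    @Override
    void draw() { children.forEach(Figure::draw); }
}
```

Removing either pattern later means editing the other's code as well, which is the kind of composability cost the study measures under AspectJ and Compose*.
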
  • Source
    Camila Nunes · Alessandro Garcia · Carlos Lucena · Jaejoon Lee
    ABSTRACT: Establishing explicit mappings between features and their implementation elements in code is one of the critical factors for maintaining and evolving software systems successfully. This is especially important when developers have to evolve program families, which have evolved from a single core system into similar but distinct systems to accommodate various customer requirements. Many techniques and tools have emerged to assist developers in the feature mapping activity. However, existing techniques and tools for feature mapping are limited as they operate on a single program version individually. Additionally, existing approaches are limited to recovering features on demand; that is, developers have to run the tools for each family member version individually. In this paper, we propose a cohesive suite of five mapping heuristics addressing those two limitations. These heuristics explore the evolution history of the family members in order to expand feature mappings in evolving program families. The expansion refers to the action of automatically generating the feature mappings for each family member version by systematically considering its previous change history. The mapping expansion starts from seed mappings and continually tracks the features of the program family, thus eliminating the need for on-demand algorithms. Additionally, we present the MapHist tool, which supports the application of the proposed heuristics. We evaluate the accuracy of our heuristics on two evolving program families from our industrial partners.
    Preview · Article · Nov 2014 · Software Practice and Experience
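
One way to read the expansion step is sketched below in Java. The data structure and the simple co-change rule are assumptions made for illustration; they are not MapHist's actual heuristics, which the paper defines in detail.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy mapping expander: starting from seed mappings, each change set of
// a new version is attributed to a feature whenever it touches elements
// already mapped to that feature.
class FeatureMapper {
    // feature name -> code elements (e.g., fully qualified method names)
    private final Map<String, Set<String>> mappings = new HashMap<>();

    void seed(String feature, Set<String> elements) {
        mappings.computeIfAbsent(feature, f -> new HashSet<>()).addAll(elements);
    }

    // Expansion over one change set of the next version.
    void expand(Set<String> changedElements) {
        mappings.forEach((feature, elements) -> {
            boolean touchesFeature =
                    changedElements.stream().anyMatch(elements::contains);
            if (touchesFeature) {
                elements.addAll(changedElements); // grow the mapping
            }
        });
    }

    Map<String, Set<String>> current() { return mappings; }
}
```
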
  • Everton Guimaraes · Alessandro Garcia · Yuanfang Cai
    ABSTRACT: The progressive insertion of code anomalies in evolving programs may lead to architecture degradation symptoms. Several approaches have been proposed to detect code anomalies, such as God Class and Shotgun Surgery, in the source code. However, most of them fail to assist developers in prioritizing the code anomalies that are harmful to the software architecture. These approaches often rely on source code analysis alone and do not provide developers with useful information for prioritizing the anomalies that impact the architectural design. In this context, this paper presents a controlled experiment investigating how developers, when supported by architecture blueprints, are able to prioritize different types of code anomalies in terms of their architectural relevance. Our contributions include: (i) quantitative indicators of how the use of blueprints may improve the process of prioritizing code anomalies, (ii) a discussion of how blueprints may help in the prioritization process, (iii) an analysis of whether and to what extent the use of blueprints affects the time for revealing architecturally relevant code anomalies, and (iv) a discussion of the main characteristics of false positives and false negatives observed by the actual developers.
    No preview · Article · Sep 2014
  • ABSTRACT: Even though exception handling mechanisms have been proposed as a means to improve software robustness, empirical evidence suggests that exception handling code is still poorly implemented in industrial systems. Moreover, it is often claimed that the poor quality of exception handling code can be a source of faults in a software system. However, there is still a gap in the literature in terms of better understanding exceptional faults, i.e., faults whose causes relate to exception handling. In particular, there is still little empirical knowledge about the specific causes of exceptional faults in software systems. In this paper we start to fill this gap by presenting a categorization of the causes of exceptional faults observed in two mainstream open source projects. We observed ten different categories of exceptional faults, most of which have never been reported before in the literature. Our results pinpoint that current verification and validation mechanisms for exception handling code are still not properly addressing these categories of exceptional faults.
    No preview · Conference Paper · Sep 2014
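
The paper's ten categories are its own contribution and are not reproduced here, but the general shape of an exception handling fault is easy to show. The Java sketch below is a generic illustration of ours: a swallowed exception that lets callers proceed with a half-initialized object, next to a variant that propagates the failure with its cause intact.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class ConfigLoader {

    // Fault-prone: the catch block swallows the failure, so callers
    // silently receive an empty Properties object and the root cause
    // of the fault is lost.
    Properties loadSwallowing(String path) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        } catch (IOException ignored) {
            // silently ignored
        }
        return props;
    }

    // More robust: the exception is propagated, letting the caller
    // decide how to recover or report.
    Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}
```
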
  • ABSTRACT: Mainstream programming languages provide built-in exception handling mechanisms to support robust and maintainable implementation of exception handling in software systems. Most of these modern languages, such as C#, Ruby, Python and many others, are often claimed to have more appropriate exception handling mechanisms. They reduce programming constraints on exception handling to favor agile changes in the source code. These languages provide what we call maintenance-driven exception handling mechanisms. It is expected that the adoption of these mechanisms improves software maintainability without hindering software robustness. However, there is still little empirical knowledge about the impact that adopting these mechanisms has on software robustness. This paper addresses this gap with an empirical study aimed at understanding the relationship between changes in C# programs and their robustness. In particular, we evaluated how changes in the normal and exceptional code were related to exception handling faults. We applied a change impact analysis and a control flow analysis to 119 versions of 16 C# programs. The results showed that: (i) most of the problems hindering software robustness in those programs are caused by changes in the normal code, (ii) many potential faults were introduced even when improving exception handling in C# code, and (iii) faults are often facilitated by the maintenance-driven flexibility of the exception handling mechanism. Moreover, we present a series of change scenarios that decrease program robustness.
    No preview · Article · May 2014
  • ABSTRACT: As software systems are maintained, their architecture often degrades through the processes of architectural drift and erosion. These processes are often intertwined, and the same modules in the code become the locus of both drift and erosion symptoms. Thus, architects should elaborate architecture rules for detecting occurrences of both degradation symptoms. While the specification of such rules is time-consuming, they are similar across software projects adhering to similar architecture decompositions. Unfortunately, existing anti-degradation techniques are limited as they focus on detecting either drift or erosion symptoms, but not both. They also do not support the reuse of recurring anti-degradation rules. In this context, the contribution of this paper is twofold. First, it presents TamDera, a domain-specific language for: (i) specifying rule-based strategies to detect both erosion and drift symptoms, and (ii) promoting the hierarchical and compositional reuse of design rules across multiple projects. The language was designed with usual programming language concepts in mind, such as inheritance and modularization. Second, we evaluated to what extent developers would benefit from the definition and reuse of hybrid rules. Our study involved 21 versions pertaining to 5 software projects, and more than 600 rules. On average, 45% of classes that had drift symptoms in early versions presented inter-related erosion problems in later versions, or vice-versa. Also, up to 72% of all the TamDera rules in a project came from a pre-defined library of reusable rules. These rules were responsible for detecting, on average, 73% of the inter-related degradation symptoms across the projects.
    No preview · Conference Paper · Apr 2014
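
TamDera's concrete syntax is defined in the paper and is not reproduced here. As a rough analogue, the Java sketch below (entirely ours, with invented names and thresholds) illustrates the reuse idea the abstract describes: a library rule capturing a drift symptom is specialized by a project-specific hybrid rule that tightens the threshold and adds an erosion check.

```java
// Facts an analysis tool might extract per class (hypothetical).
record ClassFacts(int fanOut, boolean accessesForbiddenLayer) {}

abstract class DegradationRule {
    abstract boolean violatedBy(ClassFacts facts);
}

// Reusable library rule for a drift symptom: excessive coupling.
class HighCouplingRule extends DegradationRule {
    protected int threshold() { return 20; } // overridable per project

    @Override
    boolean violatedBy(ClassFacts facts) {
        return facts.fanOut() > threshold();
    }
}

// Project-specific hybrid rule: inherits the drift check, tightens its
// threshold, and composes it with an erosion check.
class ProjectHybridRule extends HighCouplingRule {
    @Override
    protected int threshold() { return 15; }

    @Override
    boolean violatedBy(ClassFacts facts) {
        return super.violatedBy(facts) || facts.accessesForbiddenLayer();
    }
}
```
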
  • Source
    Kleinner Farias · Alessandro Garcia · Jon Whittle
    ABSTRACT: Model composition is a common operation used in many software development activities, for example, reconciling models developed in parallel by different development teams, or merging models of new features with existing model artifacts. Unfortunately, both commercial and academic model composition tools suffer from the composition conflict problem. That is, models to be composed may conflict with each other, and these conflicts must be resolved. In practice, detecting and resolving conflicts is a highly intensive manual activity. In this paper, we investigate whether aspect-orientation reduces conflict resolution effort, as improved modularization may better localize conflicts. The main goal of the paper is to conduct an exploratory study to analyze the impact of aspects on conflict resolution. In particular, model compositions are used to express the evolution of architectural models along six releases of a software product line. Well-known composition algorithms, such as override, merge and union, are applied and compared on both AO and non-AO models in terms of their conflict rate and the effort to solve the identified conflicts. Our findings identify specific scenarios where aspect-orientation properties, such as obliviousness and quantification, result in a lower (or higher) composition effort.
    Full-text · Dataset · Apr 2014
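
The override and merge algorithms named in the abstract can be sketched compactly. The Java fragment below is a deliberately simplified model of ours, reducing a model to a map from element name to description, only to show how the two algorithms behave when the same element diverges in both inputs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ModelComposer {

    // override: on a name clash, the element from the delta model wins.
    static Map<String, String> override(Map<String, String> base,
                                        Map<String, String> delta) {
        Map<String, String> out = new LinkedHashMap<>(base);
        out.putAll(delta);
        return out;
    }

    // merge: equal elements are kept once; diverging elements are
    // marked as conflicts that a developer must resolve, which is the
    // manual effort the study measures.
    static Map<String, String> merge(Map<String, String> base,
                                     Map<String, String> delta) {
        Map<String, String> out = new LinkedHashMap<>(base);
        delta.forEach((name, desc) -> out.merge(name, desc,
                (a, b) -> a.equals(b) ? a : "<<conflict: " + a + " | " + b + ">>"));
        return out;
    }
}
```
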
  • Everton Guimaraes · Alessandro Garcia · Kleinner Farias
    ABSTRACT: Researchers and practitioners advocate that design properties, such as obliviousness and quantification, can improve the modularity of software systems, thereby reducing the effort of composing design models. However, there is no empirical knowledge about how these design properties impact model composition effort. This paper, therefore, performs an empirical study to understand this impact. The main contributions are: (i) quantitative indicators to evaluate to what extent such design properties impact model composition effort; (ii) an objective evaluation of the impact of such modularity properties in 26 versions of two software projects by using statistical tests; and (iii) lessons learned on whether (and how) modularity anomalies related to misuse of quantification and obliviousness in the input models can significantly increase model composition effort.
    No preview · Article · Mar 2014
  • Source
    ABSTRACT: This paper proposes an initial quality model for model composition effort, which serves as a frame of reference for developers and researchers to plan and perform qualitative and quantitative investigations, as well as to replicate and reproduce empirical studies. A series of empirical studies supports the proposed quality model, including five industrial case studies, two controlled experiments, three quasi-experiments, interviews, and seven observational studies. Moreover, these studies have systematically demonstrated the real benefits of using a frame of reference to enable learning about model composition effort from experimentation.
    Full-text · Conference Paper · Mar 2014
  • ABSTRACT: Code anomalies are structural problems in a program. Even though they might represent symptoms of architecture degradation, several code anomalies do not contribute to this process. Source code inspection by developers might not support time-effective detection of architecturally-relevant anomalies in a program. Hence, developers usually rely on multiple software metrics known to effectively detect code anomalies. However, there is still no empirical knowledge about the time effectiveness of metric-based strategies for detecting architecturally-relevant anomalies. Given the longitudinal nature of this activity, we performed a first exploratory case study to address this gap. We compare metric-based strategies with manual inspections made by the actual software developers. The study was conducted in the context of a legacy software system with 30K lines of code, 415 architectural elements, and 210 versions, spanning a reengineering effort of almost 3 years. Effectiveness was assessed in terms of several quantitative and qualitative indicators. To measure effort, we computed the amount of time used in the activities required to identify architecturally-relevant code anomalies. The results of our study shed light on potential effort reductions and effectiveness improvements of metric-based strategies.
    No preview · Article · Mar 2014
  • ABSTRACT: Model composition plays a key role in many tasks in model-centric software development, e.g., evolving UML diagrams to add new features or reconciling models developed in parallel by different software development teams. However, based on our experience in previous empirical studies, one of the main impediments to the widespread adoption of composition techniques is the lack of empirical knowledge about their effects on developers’ effort. This problem applies to both existing categories of model composition techniques, i.e., specification-based (e.g., Epsilon) and heuristic-based techniques (e.g., IBM RSA). This paper, therefore, reports on a controlled experiment that investigates the effort of (1) applying both categories of model composition techniques and (2) detecting and resolving inconsistencies in the output composed models. We evaluate the techniques in 144 evolution scenarios, where 2,304 compositions of elements of UML class diagrams were produced. The main results suggest that (1) the employed heuristic-based techniques require less effort to produce the intended model than the chosen specification-based technique, (2) there is no significant difference in the correctness of the output composed models generated by these techniques, and (3) the use of manual heuristics for model composition outperforms their automated counterparts.
    No preview · Article · Jan 2014 · Software and Systems Modeling
  • ABSTRACT: To prevent quality decay, detection strategies are reused to identify symptoms of maintainability problems in an entire program. A detection strategy is a heuristic composed of the following elements: software metrics, thresholds, and logical operators combining them. The adoption of detection strategies is largely dependent on their reuse across the portfolio of an organization's software projects. If developers need to define or tailor those strategy elements for each project, their use becomes time-consuming and tends to be neglected. Nevertheless, there is no evidence about efficient reuse of detection strategies across multiple software projects. Therefore, we conducted an industry multi-project study to evaluate the reusability of detection strategies in a critical domain. We assessed the degree of accurate reuse of previously-proposed detection strategies based on the judgment of domain specialists. The study revealed that even though the reuse of strategies in a specific domain should be encouraged, their accuracy is still limited when they are holistically applied to all the modules of a program. However, accuracy and reuse were both significantly improved when the metrics, thresholds and logical operators were tailored to each recurring concern of the domain.
    No preview · Conference Paper · Oct 2013
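
Concretely, a detection strategy has the shape sketched below. The Java fragment is our own illustration; the metrics and thresholds are loosely inspired by published God Class strategies, not the values calibrated in this study.

```java
// metrics + thresholds + logical operators = detection strategy
class GodClassStrategy {
    boolean matches(int linesOfCode, int weightedMethods, double cohesion) {
        boolean tooLarge    = linesOfCode > 500;      // metric vs. threshold
        boolean tooComplex  = weightedMethods >= 47;  // metric vs. threshold
        boolean lowCohesion = cohesion < 0.33;        // metric vs. threshold
        return tooLarge && tooComplex && lowCohesion; // logical operators
    }
}
```

The study's finding is about exactly these three kinds of elements: reusing them as-is across a domain is encouraged, but accuracy recovers only when they are tailored to each recurring concern.
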
  • ABSTRACT: Code anomalies are symptoms of software maintainability problems, and they are particularly harmful when they contribute to architectural degradation. Despite the existence of many automated techniques for code anomaly detection, identifying the code anomalies that are more likely to cause architecture problems remains a challenging task. Even when there is tool support for detecting code anomalies, developers often invest a considerable amount of time refactoring those that are not related to architectural problems. In this paper we present and evaluate four different heuristics for helping developers to prioritize code anomalies, based on their potential contribution to software architecture degradation. These heuristics exploit different characteristics of a software project, such as change-density and error-density, to automatically rank the code elements that should be refactored most promptly according to their potential architectural relevance. Our evaluation revealed that software maintainers could benefit from the recommended rankings for identifying which code anomalies harm the architecture most, helping them invest their refactoring effort in solving architecturally relevant problems.
    No preview · Conference Paper · Oct 2013
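
A minimal sketch of such a history-sensitive ranking, in Java. The record fields, the equal weights, and the score formula are all assumptions made for illustration; the paper's four heuristics are defined differently.

```java
import java.util.Comparator;
import java.util.List;

// Score each anomalous code element by project-history signals and
// rank descending, so the most architecturally relevant candidates
// surface first in the refactoring queue.
record AnomalousElement(String name, double changeDensity, double errorDensity) {
    double architecturalRelevance() {
        return 0.5 * changeDensity + 0.5 * errorDensity; // assumed weights
    }
}

class AnomalyRanker {
    static List<AnomalousElement> rank(List<AnomalousElement> anomalies) {
        return anomalies.stream()
                .sorted(Comparator
                        .comparingDouble(AnomalousElement::architecturalRelevance)
                        .reversed())
                .toList();
    }
}
```
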
  • Source
    ABSTRACT: Software metrics have been traditionally used to evaluate the modularity of software systems and to detect code smells, such as God Method. Code smells are symptoms that may indicate something wrong in the system code. A God Method represents a method that has grown too much; it tends to centralize the functionality of a class. Recently, concern metrics have also been proposed to evaluate software maintainability. While traditional metrics quantify properties of software modules, concern metrics quantify properties of concerns, such as scattering and tangling. Despite being increasingly used in empirical studies, there is a lack of empirical knowledge about the usefulness of concern metrics for detecting code smells. The goal of this paper is to report the results of an exploratory study investigating whether concern metrics provide useful indicators for detecting God Method. In this study, 47 subjects from two institutions analyzed traditional and concern metrics aiming to detect instances of this code smell in a system. The study results indicate that an elaborate joint analysis of both traditional and concern metrics is often required to detect God Method. We conclude that new, focused metrics may be required to support detection of smelly methods.
    Full-text · Conference Paper · Sep 2013
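
The "joint analysis" conclusion can be phrased as one more strategy-style check. In the Java sketch below (ours, with invented thresholds), a method is flagged as a God Method candidate only when a traditional size metric and a concern-level tangling metric agree.

```java
class GodMethodIndicator {
    boolean isCandidate(int methodLoc, int tangledConcerns) {
        boolean traditionalSmell = methodLoc > 60;       // traditional metric: size
        boolean concernSmell     = tangledConcerns >= 3; // concern metric: tangling
        return traditionalSmell && concernSmell;         // joint analysis
    }
}
```
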

Publication Stats

2k Citations
26.62 Total Impact Points

Institutions

  • 2009-2015
    • Federal University of Rio de Janeiro
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 2002-2014
    • Pontifícia Universidade Católica do Rio de Janeiro
      • Department of Informatics (INF)
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 2009-2013
    • Universidade Federal da Bahia
      Bahia, Bahia, Brazil
  • 2009-2012
    • Tecgraf / PUC-Rio
      Rio de Janeiro, Rio de Janeiro, Brazil
  • 1970-2009
    • Lancaster University
      • School of Computing and Communications
      Lancaster, England, United Kingdom
  • 2007
    • Federal University of Pernambuco
      • Center of Informatics (CIn)
      Recife, Estado de Pernambuco, Brazil