CADV: A software visualization approach for code
annotations distribution
Phyllipe Lima (a,b), Jorge Melegati (d), Everaldo Gomes (e), Nathalya Stefhany Pereira (c), Eduardo Guerra (d), Paulo Meirelles (e)
(a) Federal University of Itajubá - IMC - UNIFEI
(b) National Institute for Space Research - INPE
(c) National Institute for Telecommunications - Inatel
(d) Free University of Bozen-Bolzano - UniBZ
(e) Federal University of ABC - CMCC-UFABC
Email addresses: phyllipe@unifei.edu.br (Phyllipe Lima), jorge@jmelegati.com (Jorge Melegati), everaldogjr@gmail.com (Everaldo Gomes), nathalya.stefhany@gec.inatel.br (Nathalya Stefhany Pereira), eduardo.guerra@unibz.it (Eduardo Guerra), paulo.meirelles@ufabc.edu.br (Paulo Meirelles)
Abstract
Context: Code annotations are a widely used feature in Java systems to configure
custom metadata on programming elements. Their increasing presence creates
the need for approaches to assess and comprehend their usage and distribution.
In this context, software visualization has been studied and researched to im-
prove program comprehension in different aspects.
Objectives: This study aimed at designing a software visualization approach
that graphically displays how code annotations are distributed and organized in
a software system and developing a tool, as a reference implementation of the
approach, to generate views and interact with users.
Methods: We conducted an empirical evaluation through questionnaires and
interviews to evaluate our visualization approach considering four aspects: (i)
effectiveness for program comprehension, (ii) perceived usefulness, (iii) perceived ease of use, and (iv) suitability for the intended audience. The resulting data
was used to perform a qualitative and quantitative analysis.
Results: The tool identifies package responsibilities providing visual information
about their annotations at different levels. Using the developed tool, the par-
ticipants achieved a high correctness rate in the program comprehension tasks
and performed very well in questions about the overview of the system under
analysis. Finally, participants perceived that the tool is suitable to visualize the
distribution of code annotations.
Conclusions: The results show that the visualization approach using the de-
veloped tool is effective in program comprehension tasks related to code anno-
tations, which can also be used to identify responsibilities in the application
packages. Moreover, it was evaluated as suitable for newcomers to overview the
usage of annotations in the system and for architects to perform a deep analy-
sis that can potentially detect misplaced annotations and abnormal growth in their usage.
Keywords: Code Annotations, Circle Packing, Empirical Evaluation, Software
Visualization
1. Introduction
Code annotations were introduced in version 5 of Java to configure custom
metadata directly on programming elements, such as methods and classes. Tools
or frameworks usually consume this metadata to gather additional information
about the software, allowing the execution of more specific behavior. Since
annotations are inserted directly into the source code, they are a convenient
and quick alternative to configure metadata. Core enterprise Java APIs make
extensive use of code annotations, making them a relevant feature used daily
by developers.
A recent study performed in Java open source projects [1] identified at least
one code annotation in 78% of the classes. Another study demonstrated that
code annotations are actively maintained, and most code annotation changes are consistent with other code changes [2]. This evidence suggests that code annotations are a feature with a non-negligible impact on software maintenance.
According to Rajlich [3], software evolution and maintenance are the most costly and challenging activities in the software development life cycle. While the
comprehension of a single module depends directly on the code quality, it is not
straightforward to understand one aspect across the whole source code. Given
that developers spend most of their time comprehending the software to be able
to add and maintain features [4], the increasing presence of code annotations
creates the need for approaches that can help in understanding how this feature
is used through the system components.
Software visualization has been increasingly used to support developers in software comprehension [5, 6]. Some approaches aim to rep-
resent software as a known environment, such as a city [7, 8] or a forest [9].
Another technique is to create what is known as a polymetric view, defined as
a lightweight visualization enriched with software metrics [10].
To aid the software comprehension of code annotations usage in software
systems, we propose a visualization approach named Code Annotations Distri-
bution Visualization (CADV). This visualization uses code annotation metrics
from our previous work [1] as input and the circle packing approach [11] to
draw the system under analysis. The CADV is composed of three different views: System View, Package View, and Class View. They complement
each other and display code annotations information within different scopes and
granularity. The System View displays annotations distributed by packages,
the Package View displays annotations distributed in the classes of a single
package, and the Class View displays the distribution of annotations in the
code elements within a class. The views use colors to represent the annotation
schemas and circle size to represent metric values. As a reference implemen-
tation of the CADV, we developed an open-source web tool called Annotation Visualizer (AVisualizer, available at https://github.com/metaisbeta/avisualizer).
We conducted an empirical evaluation through questionnaires with students
and interviews with professional developers to assess CADV using AVisualizer.
We built the survey to evaluate its effectiveness in program comprehension tasks,
its suitability for the intended target audience of the tool, and how useful and easy to use the respondents found the approach. Combining the results from the
questionnaire and interview, we conducted a qualitative and quantitative anal-
ysis to reach our conclusions about the usage of the proposed approach in the
mentioned aspects.
Based on these studies, we observed a high correctness rate in questions
related to a general view of the system. Furthermore, participants could visu-
alize package responsibilities by associating them with annotations usage. The
strategy applied to display code annotation also allowed participants to detect
potentially misplaced annotations. The interview with developers also suggests
that the AVisualizer may outperform code inspection using standard IDE code
search tools in code comprehension tasks related to annotations. Finally, the circle packing approach was shown to be suitable for displaying the hierarchical structure
of the analyzed source code.
The remainder of this paper is organized as follows: Section 2 presents our
background and motivations for this work; Section 3 presents related works that
explored software visualization to improve software comprehension; Section 4
describes the research design used to propose and evaluate the CADV; Section 5
defines the three views from the CADV approach and their implementation in
the AVisualizer tool; Section 6 describes the design of the study conducted to
evaluate the CADV; Section 7 presents the study results and discussions about
them; finally, in Section 8, we conclude the paper, highlighting the contributions
and pointing some directions for future studies.
2. Background and Motivation
In this section, we introduce code annotations and how they are used to
configure custom metadata in programming elements (Section 2.1). Then, in
Section 2.2, we present the suite of software metrics for code annotations pro-
posed and defined in [1]. These metrics aim to measure the characteristics
of code annotations we use as input for the software visualization approach
(CADV) proposed in this paper. Finally, in Section 2.3, we present other works
that have assessed code annotations and how their results reinforce the rele-
vance and impact of code annotations, motivating our work of developing an
approach to visualize code annotations and their distribution.
2.1. Code Annotations and Metadata Configuration
The term “metadata” is used in various contexts in Computer Science. In all
of them, it means data referring to the data itself. When discussing databases,
while “data” refers to the domain information persisted, “metadata” refers to
their description, i.e., the table’s structure. In the object-oriented context, the
data are the instances, and the metadata is their description, i.e., information
describing the class. As such, fields, methods, super-classes, and interfaces are
all metadata of a class instance. A class field, in turn, has its type, access
modifiers, and name as its metadata. When a developer uses reflection, they are manipulating the metadata of a program and using it to work with previously unknown classes [12].
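As a minimal illustration (our own example, not taken from the cited work), the following Java snippet uses reflection to read a class's metadata at runtime:

import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Prints the metadata (fields and methods) of a class at runtime.
// Any Class object works here, including previously unknown classes.
public class MetadataDemo {
    public static void main(String[] args) {
        Class<?> clazz = String.class;
        System.out.println("Class: " + clazz.getName());
        for (Field f : clazz.getDeclaredFields()) {
            System.out.println("Field: " + f.getType().getSimpleName() + " " + f.getName());
        }
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println("Method: " + m.getName());
        }
    }
}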
Some programming languages provide features that allow custom metadata
to be defined and included directly on programming elements. This feature is supported in languages such as Java and C#, where it is referred to as code annotations and attributes, respectively. Annotations were integrated into the Java language in
version 1.5, becoming popular in the development community. Some well-known
and established APIs and frameworks such as EJB (Enterprise Java Beans), JPA
(Java Persistence API), JUnit, and Spring Boot use annotations extensively.
This native annotation support encourages many Java frameworks and API
developers to adopt the metadata-based approach in their solutions.
Two recent studies that measured their occurrence in the source code high-
light the importance of studying code annotations. One of them evaluated the
top-ranked Java projects hosted on GitHub and observed that, on average, 76%
of classes have at least one annotation [1]. The other study included 1,094
projects in the analysis and found that the median value of annotations per
project is 1,707 [2]. Both studies detected possible abuses in the usage of anno-
tations, finding a high number of annotations in individual classes and projects.
This evidence shows that annotations are a popular feature in Java projects and
consequently can influence several aspects of the code and software projects as
a whole.
When discussing annotation-based APIs in the context of metadata-based frameworks, such as JPA, JUnit, and Spring, an important concept is “annotation schema” (in this work, we also refer to it simply as “schema”). Since a single annotation is rarely enough to represent the information needed in a metadata-based API [13], a group of annotations is usually combined in the same package to fulfill that requirement. So, an “annotation schema” can be defined as the group of annotations used to represent the metadata structure of a metadata-based API.
Since these annotations, which are part of the same metadata structure, are
often part of the same package, this information is used as a heuristic to identify
an annotation schema [1]. In other words, annotations from the same package
are considered part of the same schema. Consequently, a single metadata-
based framework may contain several schemas, usually representing distinct
data structures responsible for representing metadata for different framework
features.
To better illustrate the definition of “schema”, consider the code in Figure 1
that presents a simple class responsible for executing unit tests using the JUnit
4 framework. The annotations @Test, @After, and @Before, which belong to the package org.junit, are used in combination to define how the framework should execute the tests, being part of the same annotation-based API. In this case,
we refer to this annotation set as part of the org.junit annotation schema.
In practical terms, an automated approach for extracting the schemas used in
classes can identify each package from the imported annotations as a separate
schema [14].
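As a sketch of this heuristic (the helper below is ours, for illustration only), the schema of each annotation visible at runtime can be derived from its package name. Note that reflection only sees annotations with runtime retention; a static-analysis tool such as the one in [14] inspects the import declarations in the source code instead.

import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical helper: groups the annotations of a class by package name,
// following the heuristic that one package corresponds to one schema.
public class SchemaExtractor {
    public static Set<String> schemasOf(Class<?> clazz) {
        Set<String> schemas = new TreeSet<>();
        for (Annotation a : clazz.getDeclaredAnnotations()) {
            schemas.add(a.annotationType().getPackageName());
        }
        for (Method m : clazz.getDeclaredMethods()) {
            for (Annotation a : m.getDeclaredAnnotations()) {
                schemas.add(a.annotationType().getPackageName());
            }
        }
        return schemas; // e.g., [org.junit] for the class in Figure 1
    }
}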
The code example in Figure 1 makes clear the strong relationship between
the presence of annotations of a given schema and code responsibilities. It is
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestClass {
    @Before
    public void setUp() {
        // initializations
    }

    @Test
    public void testMethod() {
        // execute tests
    }

    @After
    public void cleanTest() {
        // clear resources allocated during initialization
    }
}
Figure 1: Example with org.junit schema.
clear that a class with annotations from the org.junit schema is responsible
for testing. Therefore, detecting their presence in software systems may help
understand how developers organize packages’ responsibilities. As another ex-
ample, if the schema javax.persistence is imported in a class, we can infer
that this class is performing actions related to object-relational mapping and
dealing with a database. We also would expect that these classes would be
grouped in the same package, which may help identify the responsibility of the
package instead of a single class, giving a broader understanding of the system
under analysis. In other words, the presence of classes with annotations of a
given schema in a package might indicate their role in the system architecture.
Some studies confirmed this possibility by performing Java source code static
analysis using code annotations to identify the classes’ architectural role [15, 16].
Finally, this information also helps in understanding the coupling of applications to specific annotation-based APIs or metadata-based frameworks. In the
first example (org.junit) the class is coupled to the JUnit framework, and in the
second example (javax.persistence), the class is coupled to the JPA.
The software visualization that we are proposing and discussing in Sec-
tion 5 was highly motivated by this relationship between annotation schema
and class/package responsibilities. The visualization aims to display the used
schemas so developers may visualize the responsibilities of classes and packages
in the target system.
2.2. Measuring Code Annotations
The visualization approach we propose, Code Annotations Distribution Visualization (CADV), uses source code metric values as input. Source code
metrics help summarize particular aspects of software elements, detecting out-
liers in large amounts of code. They are valuable in software engineering since
they aid in monitoring code quality and controlling complexity [17], making
developers aware of possible abnormal growth of specific system characteristics.
The metrics used in CADV are a suite dedicated to measuring code annota-
tions proposed in our previous work [1]. This subsection describes these metrics
briefly to clarify how they are used in our visualization. The description focuses
only on the metrics used in CADV. The code presented in Figure 2 is used as
an example of how the metrics’ values would be collected.
Annotations in Class (AC): It represents the number of annotations de-
clared on all code elements in a class, including nested annotations. In
our code example, the value of AC is equal to 10.
Annotations Schemas in Class (ASC): It counts the number of different
schemas present in a class, measuring how coupled a class is to differ-
ent frameworks. This value is obtained by tracking the imports used for
the annotations. In the code example, the ASC value is 2. The import
javax.persistence is a schema provided by the JPA API, and the import
javax.ejb is provided by the EJB API.
import javax.persistence.AssociationOverrides;
import javax.persistence.AssociationOverride;
import javax.persistence.JoinColumn;
import javax.persistence.NamedQuery;
import javax.persistence.DiscriminatorColumn;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;

@AssociationOverrides(value = {
    @AssociationOverride(name = "ex",
        joinColumns = @JoinColumn(name = "EX_ID")),
    @AssociationOverride(name = "other",
        joinColumns = @JoinColumn(name = "O_ID"))})
@NamedQuery(name = "findByName",
    query = "SELECT c " +
            "FROM Country c " +
            "WHERE c.name = :name")
@Stateless
public class Example {...

    @TransactionAttribute(SUPPORTS)
    @DiscriminatorColumn(name = "type", discriminatorType = STRING)
    public String exampleMethodA() {...}

    @TransactionAttribute(SUPPORTS)
    public String exampleMethodB() {...}

}
Figure 2: Code to exemplify annotation metrics extracted from [1]
Arguments in Annotations (AA): It counts the number of arguments con-
tained in an annotation. Each annotation has its own value for the AA
metric. For instance, the @AssociationOverrides has only one argu-
ment named value, so AA has the value of 1. As another example,
@AssociationOverride, contains two arguments, name and joinColumns,
so the value for the AA metric is 2.
Annotations in Element Declaration (AED): It counts how many annota-
tions are declared in a code element, including nested annotations. In the
code example, the method exampleMethodA has an AED value of 2, as it
is annotated by @TransactionAttribute and @DiscriminatorColumn.
LOC in Annotation Declaration (LOCAD): LOC (Lines of Code) is a well-
known metric that counts the number of code lines in a given context.
This metric is a variant of LOC that counts the number of lines used in
an annotation declaration. As examples, @AssociationOverrides has a
LOCAD value of 5, while @NamedQuery has LOCAD equals 4.
With this suite of metrics, it is possible to characterize the usage of code
annotations in a given system, capturing their size and complexity and the
coupling generated by them. In our software visualization approach, these values
will be used to render the system. Therefore, these metrics can be considered
the basis for the CADV approach.
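To illustrate how some of these values could be collected, the sketch below approximates AC, ASC, and the AED of each element for a loaded class via reflection. This is our own minimal sketch under stated assumptions, not the extraction tooling from [1]: reflection only sees annotations with runtime retention and does not count annotations nested inside arguments, whereas the metric suite is defined over the source code.

import java.lang.annotation.Annotation;
import java.lang.reflect.AnnotatedElement;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Approximates AC (annotations in class), ASC (distinct schemas), and
// AED (annotations per element declaration) for a loaded class.
public class AnnotationMetrics {
    public static void report(Class<?> clazz) {
        List<AnnotatedElement> elements = new ArrayList<>();
        elements.add(clazz); // the class declaration itself
        elements.addAll(List.of(clazz.getDeclaredFields()));
        elements.addAll(List.of(clazz.getDeclaredMethods()));

        int ac = 0;
        Set<String> schemas = new HashSet<>();
        for (AnnotatedElement e : elements) {
            Annotation[] declared = e.getDeclaredAnnotations();
            ac += declared.length; // declared.length is the AED of element e
            for (Annotation a : declared) {
                schemas.add(a.annotationType().getPackageName());
            }
        }
        System.out.printf("AC=%d, ASC=%d%n", ac, schemas.size());
    }
}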
2.3. Code Annotations Assessment
In our previous work [1], we defined a novel suite of software metrics (as
described in Section 2.2). That work also describes an exploratory data analysis that aimed to understand how these metric values are distributed in open-source projects, collecting the values from 25 selected projects. It
was identified that 90% of the classes use up to 11 annotations, often declared
using one line of code and one argument. Finally, they are usually not nested,
being rare to find a two-level nesting. However, some outlier values found in this
study indicated potential problems that at least deserve a closer inspection. For
instance, we found classes with more than 700 annotations and one annotation
taking 58 lines of code. Also, we found that one single code element had 27
annotations configuring it. These are extremely high values for annotation metrics that, like other instances of large metric values such as an extensive method or a high number of parameters [18], might negatively influence code maintenance.
Yu et al. [2] performed a large-scale empirical study on code annotations
usage, evolution, and impact. The authors collected data from 1,094 open-source Java projects and conducted a historical analysis to assess changes in
code annotations. The study found very high values for the number of annotations at the level of individual code elements, as a proportion of lines of code, and for projects as a whole. For instance, the authors detected that some code elements had 41 annotations configuring them, which they considered extremely high. These results indicate the existence of abuses of this feature
and confirm the result from Lima et al. [1]. Another finding revealed that code
annotations are constantly changed during project evolution, suggesting that some developers use annotations subjectively and arbitrarily, introducing code problems. They also demonstrated that code annotations are deleted because they are redundant (15.4%) or incorrectly placed (24.5%).
Concerning annotation repetition, Teixeira et al. [19] performed a study that
investigated the source code of a web application by searching for annotations
repeated throughout the source code. The findings revealed that some annota-
tions were repeated around 100 times in code elements with shared characteris-
tics in the target system. The repetition of annotations is also mentioned in Yu
et al. [2], but no quantitative evidence was presented. The study suggested that
more general definitions, such as application-specific code conventions, could
significantly reduce the number of configurations.
Based on these studies, it is possible to state that code annotations are a
relevant language feature widely used in Java projects. Therefore, they can
positively or negatively influence those systems’ maintenance and evolution.
Furthermore, these works also showed that code annotations might be misused
in several ways, such as being misplaced [2], duplicated [19, 2], and overused
[1, 2]. Consequently, developers might benefit from tools and approaches that
aid in comprehending how code annotations are distributed in systems, and
software visualization can be used for such purposes.
3. Related Work
Researchers have explored software visualization to represent a target soft-
ware system and improve code comprehension. These approaches usually use
software metrics as the input data to create a graphical system representation.
Both 2D and 3D approaches have been used, as well as virtual reality-based visualizations. However, to the best of our knowledge, a software visualization approach focusing on code annotations has not been studied before. For this rea-
son, in this section, we focus on presenting the similarities between our design
decisions and how other authors evaluated their approaches.
Lanza and Ducasse [10] introduce the concept of a polymetric view, a lightweight
software visualization enriched with software metrics. This visualization is based
on shapes, such as rectangles, using metrics values to determine their size, color,
and position. The polymetric view was also designed to represent relationships
between entities. The represented entities in the software might have an edge
connecting them to another entity, and the color and thickness of this edge
might also represent some metric. This approach for visualization can support
five metrics on each rectangle and two metrics on each edge. For the color, usu-
ally, the darker the color, the higher the metric value being represented. They
developed a tool named CodeCrawler to create a polymetric view for a given
system [20], but it is currently unavailable. Unlike these approaches, ours uses only circles to represent the source code information, since we adopted the circle packing approach to aid in displaying the source code hierarchical structure.
Francese et al. [5] propose a polymetric view for traditional and object-
oriented metrics. Their work aims to provide an overview of the observed soft-
ware in size, complexity, and structure. The view is based not only on static
information but also on how classes exchange messages. This view is created as
a directed graph where each basic unit is a “type definition” (class, interface, enum) that composes the graph as a node. Each node is drawn as an ellipse and a rectangle, and their measures and colors represent several source code metrics. This visualization was implemented in an Eclipse plugin called MetricAttitude [21]. To evaluate how their approach aided in Java software compre-
hension, they conducted a questionnaire-based experiment with 18 participants,
ten undergraduate students and eight professional developers. The target software used was JLine (https://github.com/jline/jline3). Participants filled out a comprehension questionnaire with 16 open-ended questions about the target software system. Afterward,
they filled out a perception questionnaire about their impressions and opinions
of the visualization tool. According to the authors’ report, the participants found the tool easy to understand and expressed an overall favorable judgment. As for comprehension, the average correctness of answers was between 0.70 and 0.87.
Comparing the experiments carried out by Francese et al. [5] with our approach to evaluating the CADV, we also conducted a questionnaire to assess perceived usefulness and ease of use. However, since our approach focuses on visualizing code annotations, we did not find other software visualization tools or approaches to compare against. As such, we compared our tool with manual code inspection aided by the search tools present in IDEs.
Nevertheless, we also conducted interviews with the developers and performed qualitative data analysis (as described in Section 4.3).
Another approach used to represent software systems is to create a syn-
thetic natural environment (SNE). The goal is to create familiar environments
artificially using metaphors, such as a city [7], a forest [9], and the solar sys-
tem [22]. Then the system maps software characteristics, mainly through met-
rics, to this environment. Possibly the most well-known SNE used for software
visualization is the city metaphor popularized by Wettel and Lanza [23], who developed a tool called CodeCity [7] to implement this metaphor. The city metaphor represents types (classes, interfaces, enums) as buildings (parallelepipeds). In turn, these buildings are located inside districts, which
represent packages. The visual properties of each parallelepiped represent the
software metrics of the class. One aspect these SNEs have in common is that they were all designed to use traditional source code metrics to display the system, in contrast with ours, which uses code annotation metrics.
Wettel et al. [7] performed an experiment to assess the CodeCity and city
metaphor. They elaborated nine questions that the participants should answer using CodeCity. For instance, one question was to “Locate all the unit test classes”, and another to “Find three classes with the highest number of methods (NOM)”. The authors obtained an increase in task correctness (+24%) and a decrease in completion time (-12%) compared to traditional tools such as IDEs. CodeCity
overview of the system. Comparing these results with our evaluation (discussed
in Section 7), we also observed good performance of questions related to a
general view of the system.
The city metaphor gained a Virtual Reality (VR) version [24]: CityVR, an interactive visualization tool that uses virtual reality to implement the city metaphor. The authors conducted a qualitative empirical evaluation using semi-structured interviews with six experienced developers.
From their results, developers felt curious and spent considerable interaction
time navigating and selecting elements. Finally, the participants were willing
to spend more time using CityVR to solve software comprehension tasks. Com-
pared with our work, we conducted a similar evaluation by interviewing six
developers. Nevertheless, we also conducted a study with students to obtain a
complementary result.
More recently, a new tool called Code2City was developed to support VR
and flatscreen visualization [8]. The authors conducted experiments with 42
participants comparing three different approaches: Code2City displayed on (i)
a regular computer screen, (ii) the VR version, and (iii) a plugin for the Eclipse
IDE named Metrics and Smells that collects metrics and detects bad smells. The
authors concluded that the city metaphor increases software comprehension and that participants using the VR version completed the experiment tasks more quickly and were more satisfied.
Finally, Merino et al. [6] note that many software visualization works have failed in their evaluations and therefore provide little evidence of the effectiveness of their approaches. They mention that 62% of the proposed approaches do not include any evaluation, or only a weak one, such as anecdotal evidence from simple usage scenarios. They conclude that software visualization research should use more surveys with the target audience (to extract helpful information), conduct interviews, and perform experiments using real-world software systems and controlled settings with practical tasks. These are viable options to evaluate and assess a software visualization approach. To overcome this, as mentioned, we conducted an empirical evaluation with students and developers to support our findings.
In conclusion, the visualizations we found in the literature focus on displaying size, complexity, and cohesion, with types (classes, interfaces, enums) as the main represented elements. Even though annotations are a relevant code element in Java, we did not find any software visualization approaches focusing on them. Since most visualizations focus on class structure metrics, they do not display much information about class responsibilities. For instance, in the
study performed with CodeCity [7], to complete the task of locating unit test
classes, the user should search for classes (or packages) containing the word
“test”. On the other hand, code annotations are usually tied to a specific re-
sponsibility or architectural role [15, 16]. Therefore, a visualization approach
that allows identifying the annotation schemas present in classes and packages
provides valuable information to identify how the responsibilities are organized
in the system under analysis.
4. Research Design
This section presents the research design to propose and evaluate the CADV
approach. To build our visualization, we follow best practices and guidelines
applied in previous works about software visualization [5, 6, 8]. The following
subsections present and discuss the goals for CADV, the steps to reach them,
the target audience, and our approach for empirical evaluation based on ques-
tionnaires and interviews. In short, we took the following steps to reach our
goals:
Proposed and designed the visualization approach for code annotation
distribution named CADV;
Developed a tool, named AVisualizer, that generates CADV for a given
system and provides navigability through the different views;
Applied questionnaires and conducted interviews to evaluate CADV through
the use of AVisualizer, aiming to assess its effectiveness for program comprehension tasks, its usefulness, and its ease of use;
Analyzed the data collected from the questionnaires and interviews, com-
bining quantitative and qualitative strategies to evaluate the proposed
approach, raising its applicability, strengths, and weaknesses.
4.1. Visualization Goals
The primary goal of CADV is to aid in the comprehension and provide
an overview of how code annotations are distributed in the analyzed software
system. Then, the visualization should also provide more details of code anno-
tations in specific classes and packages that the user wishes to explore further.
In the following, we describe the five goals.
(#G1) - Detect annotation schemas and how they are distributed in the packages: The user should be able to spot all schemas and quickly
identify where they are being used. For instance, if the code annotations
from a given schema are present in different packages, it might indicate
that the responsibilities related to that schema are distributed in these
packages. As another example, a concentration of an annotation schema
usage in a package enables the user to locate the classes that handle the
respective responsibility quickly. To reach this goal, the view should rep-
resent the annotations from each schema in the whole system and should
not overwhelm the user with details closer to the source code.
(#G2) - Detect how annotations are distributed per class in packages:
From the general view of the system, the user might want to investi-
gate specific packages to obtain more details about code annotations in
its classes. For instance, (i) how schemas are distributed inside classes,
(ii) if the classes are coupled to several schemas, and (iii) if there are
classes with a large number of annotations. An annotation size metric,
such as LOCAD (Lines of Code in Annotation Declaration), can represent
the annotations. To reach this goal, the view should present the code
annotations characterized by a size metric grouped by their classes.
(#G3) - Detect how annotations are distributed and grouped per code
elements inside the classes: Code annotations are placed on code ele-
ments, such as methods, members, and type definitions. Several annota-
tions can be added to the same code element. The user might be interested
in how the annotations are grouped inside these code elements to identify
(i) if a specific code element is overloaded with annotations, (ii) if several
code elements contain repeated annotations, and (iii) how annotations
from different schemas are distributed inside the class. The size of the
annotations should also be represented by a metric that can characterize
its size. The view should group the annotations by code elements inside
the classes to reach this goal.
(#G4) - Provide a navigation system between views with different gran-
ularity: For each goal presented previously, #G1, #G2, and #G3, we
are proposing a different view consistent with each of these three goals.
Therefore, our visualization approach should provide some mechanism to
navigate these different views.
(#G5) - Detect misconfigurations: Our goals are concerned with the distri-
bution of code annotations in the system rather than detecting problems
related to them. However, we might also detect potential problems if we
can visualize code annotations. According to Yu et al. [2], code annota-
tions are deleted because they are redundant (15.4%) or wrong (24.5%).
The same study also points out that 19.8% of the annotation replacements switch them to “same name” annotations from other libraries, revealing that an annotation from an unexpected schema was present. The
proposed approach should identify inconsistent annotations, enabling the
user to locate potentially misplaced annotations.
4.2. Target Audience
Software visualization approaches should clearly state the target audience,
which should be consistent with the goals. In the following, we describe the intended target audience of the visualization approach:
Newcomer Developer: According to Dagenais et al. [25], whenever developers approach a new software system, they feel like explorers who need to orient themselves within an unfamiliar landscape; this holds even for senior developers. In this work, the term “newcomer developer” refers to someone who has just joined a team, regardless of experience, since even a senior software developer can be a newcomer. The goal #G1 provides a general overview of the system that may help newcomers.
Student: Given that they are constantly learning, students are also part of the target audience. Like newcomer developers, they could use the visualization to better understand concepts related to code annotations and metadata-based framework usage. They can also identify aspects of
the software system and further study them using other methods.
Software Architect: Since he/she already has a firm grasp of the whole sys-
tem, this audience can use the visualization approach for a more detailed
analysis focusing on aspects such as: how to make this system better ad-
here to the proposed software architecture? Are annotations growing out
of control in some parts of the system? Are package responsibilities clearly
defined? Are there any misplaced or misconfigured code annotations? Are
there any packages coupled with different annotation schemas?
The target audience does not need to adhere strictly to these definitions. A newcomer can use the visualization to detect outliers, just as a seasoned developer can gain new insights about parts of the system they did not know.
4.3. Evaluation and Data Analysis Approach
The CADV, as further explained in Section 5, uses values extracted from the
suite of metrics dedicated to code annotations [1]. Given the novelty of these
metrics, we were unable to find a tool to compare with our solution. Therefore,
to evaluate the CADV, we conducted an empirical study using questionnaires
and interviews where participants had to use the AVisualizer, inspect a target
software system, and answer some questions. We assessed the effectiveness of
the tool in (i) program comprehension tasks related to code annotations, (ii) its
perceived usefulness, (iii) its perceived ease of use, and (iv) the audience that it
can potentially benefit.
The program comprehension tasks were elaborated based on real-world sce-
narios extracted from our previous experience [12, 13, 19, 1, 14], our visualization
goals (Section 4.1), and also tasks other researchers elaborated when evaluating
their software visualization approaches [7, 5, 8].
The correctness of the program comprehension tasks was defined by three
authors, all with previous experience with code annotations. They were not
involved in the development of the target system nor had any direct relationship
with the developers. To elaborate the tasks, they were granted access to the source code. After the interviews were carried out, all six developers agreed with the
correctness of the answers.
The categories perceived ease of use and perceived usefulness were
based on the TAM (Technology Acceptance Model) [26]. These two variables
characterize how users find the technology easy to use and how users find the
technology helpful. The questions we elaborated to measure them used the
Likert scale, ranging from strongly disagree to strongly agree, and a strategy
similar to the work of Choma et al. [27].
We generated the proposed view for a real-world system to perform the
evaluation, using a module from EMBRACE as the target software. It is a web
application whose goal is to show space weather information, developed by the National Institute for Space Research (INPE, https://www.gov.br/inpe/), a Brazilian public research institution active in the fields of meteorology and aerospace. INPE has the mission to foster scientific research, technological applications, and the qualification of personnel in space and atmospheric sciences, space engineering, and space technology. It plays a vital role in environmental protection, monitoring forest fires and deforestation.
The institute requires enterprise software systems that allow processing, per-
sisting, and making data available to the community. This work was developed at INPE, and using one of the institute’s production systems as the target of our study allowed us to apply our approach to a real-world application developed to meet the demands of scientific research institutions.
The EMBRACE web application is composed of several modules that fol-
low the same reference architecture [28] based on Java Enterprise APIs. The
application is used to process and make data publicly available. This software
system has 1314 classes, from which 837 (64%) contain at least one annotation.
The application comprises six modules spanning 94 components archived in in-
dependent deployment units. We selected the module SpaceWeatherTSI to
be used as the target software in the evaluation, given that it uses annotations
from different metadata-based frameworks. This attribute matches the characteristics of a software system that could benefit from the CADV approach, and the module is currently in production. Afterward, we conducted an interview study
with six developers involved in constructing the target software and a question-
naire study with 79 master’s and undergraduate students from three different institutions. The strategy of applying a questionnaire and interviews with different groups has been used in other Software Engineering research [29, 30]. We aim
to capture different perspectives from groups with different backgrounds and
experiences. The design of this study is described in more detail in Section 6.
Afterward, we conducted qualitative and quantitative analyses of the data
obtained from the abovementioned studies. With this information, we evalu-
ated the approach’s effectiveness based on the number of correct answers to the program comprehension questions and how useful and easy to use the tool was perceived to be. Finally, since the interview was semi-structured and the questionnaire
also provided one open-ended question, we conducted a qualitative analysis to
extract relevant points and insights about the proposed approach.
5. Code Annotations Distribution Visualization - CADV
This section presents the CADV proposal and definition. It comprises three
different circle packing views, where a leaf node size is calculated based on
an annotation-metric value. The tool that implements the CADV supports a
navigation system that allows the user to switch between these three views.
The CADV displays the annotations used in a given class or package context.
Given the hierarchical structure inherent to source code, we chose a circle packing approach as the basis for the visualization. According to Bostock [31], even though circle packing is not space-efficient, it better reveals the hierarchical structure.
Source code information such as packages, classes, and annotations is displayed as circles. We also use colors and outlines to differentiate them. Some
elements are only visible in specific views, and the hierarchical structure of the
source code also organizes the elements since code annotations are placed inside
a class and classes inside packages.
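To picture the kind of nested input such a view consumes, consider the hypothetical structure below; the record and field names are ours and do not reflect AVisualizer's actual internal format. Inner nodes are packages and classes, and leaf nodes are annotations sized by a metric value and colored by schema.

import java.util.List;

// Hypothetical tree node for a circle-packing layout: packages and classes
// are inner nodes; annotations are leaves carrying a metric value (the
// circle size) and a schema (the circle color).
public record Node(String name, String schema, double metricValue, List<Node> children) {
    public static void main(String[] args) {
        Node root = new Node("com.example.app", null, 0, List.of(
            new Node("PersonRepository", null, 0, List.of(
                new Node("@Entity", "javax.persistence", 1, List.of()),
                new Node("@NamedQuery", "javax.persistence", 4, List.of())))));
        // A circle-packing layout (e.g., d3.pack in D3.js) turns this tree
        // into nested circles, as in the three CADV views.
        System.out.println(root);
    }
}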
To guide the design process of the views, we used a GQM (Goal Question
Metric) Model [32]. From the five goals (Section 4.1), four questions were extracted to be used in the model. Table 1 presents the GQM model with the questions and the respective goal from which they were extracted. We deliberately left #G4 out of the GQM model since it is related to the navigation system, which became a requirement for the tool implemented later.
Table 1: GQM applied for the CADV Approach.

Goal: (Purpose) Visualize (Issue) the usage and distribution of (Object) annotated code (Viewpoint) from the software developer viewpoint.

Q1: How are annotation schemas distributed by packages? (Extracted from #G1)
View 1 (System View): provides a polymetric view that displays annotation schemas being used by packages.

Q2: How are annotation schemas distributed inside packages? (Extracted from #G2)
View 2 (Package View): provides a polymetric view that displays annotations being used in the classes of a package.

Q3: How are annotation schemas distributed inside classes? (Extracted from #G3)
View 3 (Class View): provides a polymetric view that displays annotations and how they are grouped in the class code elements.

Q4: How to detect potentially misplaced code annotations? (Extracted from #G5)
Views 1-3 (System, Package, and Class Views).
Every view may contribute to reaching goal #G5 differently and within its scope. For instance, the Package View seems suited to detecting an extensive annotation. In contrast, the System View does not help much since it does not display any information about annotation size. On the other hand, an annotation schema that is potentially misplaced might be quickly spotted in the System View. The Class View may contribute to detecting a specific code element overloaded with code annotations or annotations on unexpected code elements.
The CADV is a software visualization approach that different tools can im-
plement. We developed AVisualizer as a reference implementation for the pro-
posed approach, a web application that generates CADV, allowing users to
interact with the views and navigate between them. In the following sections,
we present CADV through the usage of AVisualizer, which makes it simple to explain the static structure of the views and their interaction with the user. This
tool was used in the studies performed to evaluate the proposed approach. The
D3.js library [33] was used as the basis to develop the AVisualizer.
The following subsection describes the basic layout of the AVisualizer tool,
and the subsequent subsections define the three views that compose the CADV.
Reference images of the views were generated by the AVisualizer tool using
the SpaceWeatherTSI as the target software (described in Section 4.3). Fol-
lowing the same guidelines as Lanza and Ducasse [10] and Romano et al. [8],
we recommend reading this work on a color screen. Our supplementary material [34] includes open-source projects used as examples for the AVisualizer. A demonstration of the AVisualizer is available at https://avisualizer.herokuapp.com/.
5.1. AVisualizer Layout
Figure 3 presents the initial page of the AVisualizer tool. The circle packing
view, drawing a given system under analysis, is presented on the left side. This
interface is constantly updated as the user navigates the tool. Above the view,
there is a small header with the following information:
Visualization: Informs which of the three views is currently rendered. It could be the System View, Package View, or Class View.
Annotation Metric: Informs the metric used to render the leaf nodes.
Package or Class: Informs what package or class is currently being dis-
played, represented by the outermost circle.
On the right of the visualization, the Annotation Schemas table lists all schemas found in the project, displaying the associated color and their total number of annotations. For instance, there are 168 annotations from the
Figure 3: AVisualizer tool with System View
javax.persistence schema and 19 from the org.junit schema. By associating a color with a schema, the user can spot its annotations by searching for circles of that color inside the circles representing packages or classes. There is also a check
box on the table that allows hiding specific annotations.
AVisualizer uses fixed colors for some common annotation schemas found
in open-source software [1]. This strategy helps to standardize the usage of
the AVisualizer since common schemas will receive the same color in the views
generated for different target systems. For instance, given that pink is used for javax.persistence regardless of the software being analyzed, if the user spots a pink circle, she will associate it with persistence. If a schema found in the system is not predetermined, then the AVisualizer randomly assigns a color to it. However, white and gray do not represent schemas since they already have other meanings within the CADV. The following is a list with some examples
of these predetermined schemas:
java.lang - Blue
javax.persistence and org.hibernate - Pink tones
org.springframework - Orange tones
org.junit and org.mockito - Purple
javax.ejb - Green
Related schemas will use a tone variation of the same color. For instance, javax.persistence and javax.persistence.metamodel will have different pink tonalities. Finally, the views do not display any textual information to create a
clean visualization. However, the user can hover the mouse over the circles to
reveal labels with more information.
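A minimal sketch of this color-assignment strategy follows; it is our own illustration, and the hex values and the random-hue fallback are assumptions rather than AVisualizer's actual palette.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Well-known schemas receive fixed colors; any other schema gets a random
// hue with fixed saturation/lightness, which is never white or gray.
public class SchemaColors {
    private static final Map<String, String> FIXED = Map.of(
        "java.lang", "#1f77b4",           // blue
        "javax.persistence", "#e377c2",   // pink tone
        "org.springframework", "#ff7f0e", // orange tone
        "org.junit", "#9467bd",           // purple
        "javax.ejb", "#2ca02c");          // green

    private final Map<String, String> assigned = new HashMap<>(FIXED);
    private final Random random = new Random();

    public String colorFor(String schema) {
        return assigned.computeIfAbsent(schema,
            s -> "hsl(" + random.nextInt(360) + ", 70%, 50%)");
    }
}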
5.2. System View
The System View, which is presented in Figure 3, is the initial view displayed to the user. It displays the whole project, allowing users to overview how code annotations are spread in the system under analysis. The System View
displays circles representing packages and a representation of the annotation
schemas usage. The light gray background color was chosen to remain as neu-
tral as possible. The following list presents the characteristics of these circles
and what information becomes available when the user hovers the mouse over
the circle representing the annotation.
Packages: Every circle representing a package has a dashed outline. The
outermost dashed circle represents the root package of the project. The
inner “dashed outlined circles” are other packages, which can be at the
same level or nested. In Figure 3, there are several packages in this system,
each represented by circles with a dashed outline. This approach displays
the hierarchical structure of the source code under analysis. The user can
also click on these circles (packages) to perform a zoom action. When
this happens, the outermost circle changes, and the header will reflect the
current package.
Annotation Schemas Usage: These are colored filled circles rendered inside
“dashed outline circles” (packages). They represent the occurrence of
annotations of the schema mapped to the circle color inside that package.
The Annotation Schemas table works as a legend for the annotation
schema color. The size of these circles is proportional to the number of
code annotations of that particular schema inside that package. The larger
the circles, the higher the number of annotations from that schema. For
instance, the System View on Figure 3 shows a large pink-toned circle,
meaning that this package has a high number of annotations from the
javax.persistence schema.
Label: When the user hovers the mouse over a colored circle, i.e., an anno-
tation, a label appears with the following information: (i) the name of the
schema, (ii) the name of the package, and (iii) the number of occurrences
of annotations from this schema in the package.
In the System View, the classes of the packages are not visible. It does not show how the annotations are divided among classes. To access this
information, the user should click on the package to navigate to the Package
View, described in the next section.
5.3. Package View
The Package View displays classes and individual code annotations inside a given package. Unlike the System View, which is designed to visualize the whole system, the Package View displays a specific package.
Figure 4a displays an example of the Package View. The circles are rendered
with the following characteristics:
Package: It has the same characteristics as in the System View.
Classes: Classes are rendered as white-filled circles. Their size depends
on the number of code annotations used inside the class. If a white circle appears larger than other white circles, it represents a class with more code
annotations.
Code Annotations: These are colored (any color besides white and gray)
filled circles rendered on top of white circles. They represent code anno-
tations being used inside a specific class. Their color matches the color
of their schema, present on the Annotations Schema table. The size of
(a) Package View (b) Class View
Figure 4: An example of the Package and Class View
these circles is proportional to their LOCAD value, i.e., the default metric
used in the Package View.
Label: When the user hovers the mouse over a colored circle, i.e., an
annotation, a label appears with the following information: (i) the name
of the package, (ii) the name of the class, (iii) the name of the annotation, and (iv) the metric used to determine the size of the circle. For the
Package View, the default metric is the LOCAD (LOC in Annotation
Declaration).
The Package View does not display how individual annotations are grouped
in a specific code element inside a class. To access this information, the user
should click on the class to navigate to the Class View. Compared with the System View, the Package View displays information at a finer granularity but for a more restricted context.
5.4. Class View
The Class View displays the individual code annotations inside the observed class. It shows how code annotations are distributed by code elements,
such as a method, a field, or the class itself. Figure 4b presents an example of
Class View. The circles are rendered according to the following rules:
Classes: Just as in the Package View, they are rendered as a white circle.
There is only one white circle enclosing all the others since the focus is on
a specific class.
Annotations: Annotations are represented individually by colored circles,
whose color represents the schema. The circle size is based on the AA
(Arguments in Annotations) metric.
Gray Circles: This color is also used to represent packages, but in the Class View gray circles represent code elements, such as methods and class members. The code annotations (colored circles) are rendered on top of these
gray circles. Colored circles rendered directly on top of a white circle rep-
resent code annotations configuring the class itself. The number of colored
circles rendered on the same gray circle represents the AED (Annotations
in Element Declaration) metric.
Label: When the user hovers the mouse over a colored circle, i.e., an
annotation, a label appears with the following information: (i) the name
of the package, (ii) the name of the class, (iii) the name and type of the
element the annotation is configuring, (iv) the name of the annotation, and (v) the metric value used to determine the size of the circle. For the
Class View, the default metric is the AA (Arguments in Annotation).
The gray-toned background used to represent code elements is the same one
used to represent a package background in the other two views. However, it
does not affect the Class View since it shows a single class, and no package is
displayed. Furthermore, the gray color provides a suitable neutral background.
Apart from white and gray, every other color represents a schema.
Inspecting the Class View, it is unclear whether a gray circle represents a method, an enum, or another code element. This design decision was made to
avoid overloading users with more information. Users, however, can access this
information by pointing the mouse to it. The Class View can display the
grouping of code annotations by code elements based on the size of these gray
circles. Furthermore, the schema information is always available since each one
has a unique color throughout the project.
The metric used to render the leaf nodes, i.e., the colored circles represent-
ing individual annotations, was the AA (Arguments in Annotations). In other
words, annotations with more arguments are larger. We decided to use a metric different from the one used in the Package View, which already uses LOCAD.
5.5. Code Annotations Distribution Visualization Summary
The CADV is a circle-packing-based visualization designed to aid in understanding
the code annotations used in software systems. It comprises three views that
display information at different granularity levels. The approach allows users
to switch between these views and analyze the parts of the software at the
desired granularity (a sketch of the underlying hierarchical model follows the list):
System View (SV): Related to goals G1 and G5, it displays the packages and
the schemas used. Each schema is assigned a color, and packages are gray
circles with a dashed outline.
Package View (PV): Related to goals G2 and G5, it displays a package, its
classes, and the code annotations used in the classes. Classes are white circles.
Code annotations are assigned the schema color and are rendered on top
of the white circles. Packages are gray circles with a dashed outline.
Class View (CV): Related to goals G3 and G5, it displays a class, its code
elements, and the code annotations grouped by code element. Classes are white
circles. Code elements are gray circles. Code annotations are assigned the
schema color and are rendered on top of either the white circles or the gray circles.
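To make the structure of these views concrete, the following minimal Java sketch
models the hierarchy that a circle-packing layout would render. It is an
illustrative sketch only: the type and field names are hypothetical and do not
mirror the AVisualizer code base.

import java.util.List;

// Minimal sketch of the hierarchical model behind the three CADV views.
// All names are hypothetical; they do not mirror the AVisualizer code base.
public class CadvModelSketch {

    sealed interface Node permits Pkg, Cls, Element, Annotation {}

    // Packages: gray circles with a dashed outline (System and Package Views).
    record Pkg(String name, List<Node> children) implements Node {}

    // Classes: white circles (Package and Class Views).
    record Cls(String name, List<Node> children) implements Node {}

    // Code elements such as methods and fields: gray circles (Class View).
    record Element(String name, String kind, List<Annotation> annotations) implements Node {}

    // Annotations: colored leaves; the color encodes the schema, and the size
    // derives from a metric (LOCAD in the Package View, AA in the Class View).
    record Annotation(String name, String schema, int metricValue) implements Node {}

    public static void main(String[] args) {
        Node system = new Pkg("com.example.app", List.of(
                new Cls("UserController", List.of(
                        new Annotation("RestController", "org.springframework", 0)))));
        System.out.println(system);
    }
}

In a real implementation, each view would prune this tree to the granularity it
displays.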
6. Empirical Evaluation Design
This section describes the evaluation approach conducted for CADV using
the AVisualizer tool. The following subsections present the steps of the inter-
view, the questionnaires, and the qualitative analysis.
6.1. Interview
We invited developers who worked on the SpaceWeatherTSI web application
(described in Section 4.3) to participate in the interview study. Members
involved in this project's construction could improve the assessment of the
visualization approach: given that they know the system, they could assess
whether the AVisualizer tool effectively displays a coherent view of the project
under analysis. They could also analyze whether the CADV brought new insights
and valuable information to an already familiar system.
6.1.1. Procedure
We first provided a link to a recorded video as a tutorial on using the
AVisualizer tool, sent to each participant one week before the interview. We
also sent a Google Forms link to collect personal data, programming background,
and consent to participate in the evaluation.
The interview occurred according to the following steps:
1. Video call: Using Google Meet.
2. AVisualizer Demo: We provided the link to the deployed demo of the
AVisualizer tool to the interviewee.
3. Recording: Initiated the recording using the OBS desktop application.
4. Training Session: The interviewer shared the screen with the interviewee
and provided another 15-minute training session. Thus, participants
received two training sessions on the AVisualizer tool.
5. AVisualizer Execution: The interviewees could use the tool themselves (on
their computer), but they preferred to have the interviewer manipulate it
for them (on the interviewer's computer, with the screen shared). The
interviewees issued all the commands while answering and discussing the
questions. In parts of the interview, the interviewees also interacted with
the tool on their computer while providing feedback. This approach was
natural since the demonstration tool was available as a web application
and was easily accessed from a browser.
6. Questions: The interviewer started asking the questions presented in Table 2
to guide the interview. However, the conversation was informal, so the
interviewees could freely answer and discuss other topics as the interview
went on.
On average, the interviews lasted 60 minutes: 15 minutes of training
and 45 minutes of questions and discussion.
6.1.2. Interview Questions
To guide the interview, we elaborated 15 questions presented in Table 2.
They were based on the four categories considered in the evaluation of the CADV
using AVisualizer (discussed in Section 4). Since this was a semi-structured
interview, we opted to elaborate questions that could tackle more than one
category at once, making the interview more fluid. For instance, the first five
questions directly ask the participants to execute program comprehension tasks
in which the perceived ease of use can also be evaluated. Similarly, the following
seven questions use the same strategy for the perceived usefulness.
As explained in Section 6.3, we performed a qualitative analysis of the in-
terview transcriptions to extract themes related to the target categories, such
as “perceived ease of use” and “perceived usefulness”. For the program com-
prehension questions, the participants’ answers were analyzed and classified as
“right” or “wrong”.
Table 2: Interview Questions.
ID Question Goal
Program Comprehension and Perceived Ease of Use
Q1 What annotation schemas are concentrated in fewer packages? G1
Q2 What annotation schemas are present in more packages? G1
Q3 Is the circle packing able to represent the hierarchical structure of packages adequately? G1
Q4 When changing to the Package View (any package), can you tell which class(es) contain the largest number of code annotations? G2/G4
Q5 When changing to the Class View (any class), can you tell which code elements contain the largest number of annotations? G3/G4
Program Comprehension and Perceived Usefulness
Q6 Which package(s) contain model classes mapped to databases? G2
Q7 Which package(s) contain web controller classes? G2
Q8 Which package(s) contain unit testing classes? Is there enough unit testing code? G2
Q9 Is it possible to identify packages with specific responsibilities by visualizing schemas? G2
Q10 Is it possible to detect potentially misplaced code annotations? Describe the steps. G5
Q11 Is it possible to detect code annotations potentially being used excessively? G5
Q12 Out of the three views, System, Package, and Class, which one did you prefer? G1-G5
Target Audience
Q13 Do you believe the tool eased the process of seeing how the code annotations are distributed in the system? G1-G5
Q14 Do you believe a newcomer developer could benefit from such a tool? G1-G5
Q15 What role in a software team do you think can better use the AVisualizer tool? G1-G5
6.2. Questionnaire
We invited students from Inatel (National Institute of Telecommunications,
https://inatel.br/home/) and IPT (Institute for Technological Research,
https://www.ipt.br/) in Brazil, and UniBZ (Free University of Bozen-Bolzano,
https://www.unibz.it/) in Italy to answer the questionnaire. Seventy-nine
students participated in the study; 70% were undergraduates in Computer
Science/Engineering courses, and the remaining were master's students. The
studies were carried out remotely over eight months. None of these students was
involved in developing the SpaceWeatherTSI software, so they provided a neutral
point of view. Using students to conduct experiments and evaluations is a viable
option to advance software engineering [35, 36]. The questionnaire was applied
in the context of courses that covered code annotation concepts. Participation
in the study was optional, no grade was associated with the answers, and we did
not keep any information that could identify the students.
6.2.1. Procedure
The questionnaire was carried out through a survey using Google Forms. As
soon as the students accessed the form, they went through the following items:
1. Informed Consent Term: A text explained the study's goals and requested
their participation. The students could choose not to participate.
2. Personal Information: The questionnaire started with questions that gath-
ered demographic information, experience with code annotations, current
role, and primary programming language.
3. Code Annotations: A text briefly explained code annotations and the code
annotation metrics used in the views (Section 2.2).
4. AVisualizer Tutorial: The questionnaire provided a textual explanation of
the AVisualizer tool and a link to a YouTube video tutorial of the tool.
The participants were free to watch the video during the whole evaluation
process. It was the same video used in the interview study.
5. Using the AVisualizer: The questionnaire provided a link to an instance of
the AVisualizer tool displaying the views for the SpaceWeatherTSI sys-
tem. While using the AVisualizer, the students had to answer 10 program
comprehension tasks about this web application.
6. Impressions of the AVisualizer: After answering the 10 program compre-
hension tasks, they were presented with eight questions using the Likert
Scale to measure the perceived ease of use.
7. Possible Usage Scenarios: To measure the perceived usefulness from
the students, we presented a question with four possible scenarios and
asked in which of them they would most likely use the tool.
8. Open-ended Question: Additionally, a final open-ended question asked
the students to describe their overall opinion and impressions of the
approach and the tool. The answers to this question were coded and used
in the qualitative analysis.
6.2.2. Program Comprehension
We elaborated ten closed-ended questions for the questionnaire, with five
alternatives each. These questions aimed to detect whether the participants
could identify characteristics of the code annotations in the target project
using the AVisualizer tool. The questions were elaborated based on the goals
proposed in Section 4.1 and can be found in our supplementary material [34].
6.2.3. Perceived Ease of Use and Perceived Usefulness
To measure the “perceived ease of use”, we elaborated eight statements that
measured the students’ impressions about the AVisualizer tool when navigat-
ing between the views and identifying packages and classes. These statements
used the Likert Scale, ranging from strongly disagree to strongly agree. Table 3
presents these statements.
To measure the "perceived usefulness", we presented the students with a
question listing four possible usage scenarios and asked in which of them they
would most likely use the tool: (i) Identify schemas to learn about them
elsewhere; (ii) Search for large or misplaced annotations; (iii) Analyze the
architecture; (iv) Familiarize with the system before adding new features.
Table 3: Perceived Ease of Use - Statements
ID Statement
SE1 I can easily identify Java packages with different responsibilities using the AVisualizer tool
SE2 I can easily see how code annotations are distributed in the system under analysis using the AVisualizer tool
SE3 Learning how to use the AVisualizer was easy for me
SE4 I can easily see how many annotation schemas are being used inside a Java class using the AVisualizer
SE5 I can easily see how many annotation schemas are being used inside a Java package using the AVisualizer
SE6 I can easily identify what Java package I am currently inspecting using the AVisualizer
SE7 I can easily identify the class I am inspecting in the AVisualizer tool
SE8 I can easily navigate to and from the packages and classes being analyzed with the AVisualizer tool
6.2.4. Target Audience
The students were also asked which roles in a software development team are
the audience for the AVisualizer. They could check multiple options among the
following roles: Developer, Architect, Tester, Quality Assurance, Project
Manager, Framework Developer, and Newcomer Developer.
6.3. Data Analysis
We gathered the data obtained from the interviews and questionnaires to
analyze and obtain the results considering the four categories mentioned in
Section 4: (i) Program Comprehension Tasks, (ii) Perceived Usefulness,
(iii) Perceived Ease of Use, and (iv) Target Audience.
The first item, Program Comprehension, can be analyzed by validating the
correctness of the answers. For the remaining three, we combined qualitative
and quantitative approaches. The quantitative data can be extracted from the
Likert Scale answers. The qualitative data collected in the interviews and in
the answers to the open-ended question allowed us to obtain insights into the
answers to the objective questions. To analyze this qualitative data, we
followed a thematic analysis, "a method for identifying, analyzing and reporting
patterns (themes) within data" [37]. In this process, pieces of text are coded
using labels that are later grouped into higher-order themes, increasing the
abstraction level until a defined saturation point, e.g., a model, could be
built. We therefore used this approach to label and categorize the data,
identifying the strengths and weaknesses of the approach most commonly
mentioned by the subjects.
Cruzes and Dybå [38] identify three ways a thematic analysis can be conducted.
First, in a deductive approach, researchers build an initial list of codes that
guides them in the coding process. Another option is an inductive approach,
where researchers do not have a starting list of codes and inspect the text
systematically, e.g., sentence by sentence, to let concepts emerge from the
data. Finally, the authors acknowledge an integrated approach that is a
"partway" between the previous two: researchers have a general scheme pointing
to some aspects, but the themes are built bottom-up. We followed an integrated
approach in our analysis since TAM already gives us two themes: perceived ease
of use and perceived usefulness. Our supplementary material provides an example
of the steps we used to extract some codes/themes as well as how they were
consolidated/grouped in Table 4 [34].
We hired an external service to transcribe the interviews. For the coding
process, we employed NVivo 12 (https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home).
A researcher coded the transcriptions of the interviews and the answers to the
open questions in the questionnaires. Then, another member of the authoring
team reviewed the labels. The two researchers discussed the disagreements until
they reached a consensus.
7. Results and Discussion
This section presents the results and discussions from the empirical evaluation
conducted with questionnaires and interviews. We follow the four categories
mentioned in Section 6 to analyze the data. Table 4 presents the consolidated
themes we obtained for Perceived Ease of Use and Perceived Usefulness
by coding the interviews and the answers to the single open-ended question in
the students' questionnaire. To differentiate the sources, the table has two
columns specifying the counts from the interviews and from the questionnaire.
We used the following labels: PES ("Perceived Ease of Use Strengths"), PEW
("Perceived Ease of Use Weaknesses"), PUS ("Perceived Usefulness Strengths"),
and PUW ("Perceived Usefulness Weaknesses").
Table 4: Themes
Perceived Ease of Use - Themes
Theme (Strengths) Interview Questionnaire Total
PES-1 Easy to see annotation's usage 0 6 6
PES-2 Easy to understand how to use the tool 2 9 11
PES-3 Easy to understand package hierarchy 5 0 5
PES-4 Quick learning curve 0 3 3
Theme (Weaknesses) Interview Questionnaire Total
PEW-1 Understand the concept of annotation schema 8 1 9
PEW-2 Similar colors may complicate the analysis 3 1 4
PEW-3 Confusion when switching metrics between views 16 1 17
PEW-4 Understand the grouping in Class View 1 2 3
PEW-5 Confusing how to trigger a new view 2 3 5
PEW-6 Headers sub-utilized 3 0 3
PEW-7 Tool requires training to use 3 5 8
PEW-8 Knowledge of Annotation Metrics 2 1 3
Perceived Usefulness - Themes
Theme (Strengths) Interview Questionnaire Total
PUS-1 Visualization is better than code inspection to search annotation-related information 10 1 11
PUS-2 More useful for medium and large projects 0 1 1
PUS-3 Useful to improve and visualize the architecture 4 6 10
PUS-4 Detection of frameworks that are interchangeable 1 0 1
PUS-5 Detection of Responsibilities Using Schemas 6 1 7
PUS-6 Color strategy helps identify potentially misplaced or inconsistent annotations 6 1 7
PUS-7 Update libraries and frameworks 2 0 2
PUS-8 Newcomer may familiarize with the system before contributing 3 1 4
Theme (Weaknesses) Interview Questionnaire Total
PUW-1 I can find large annotations, but this is not very useful isolated 12 0 12
PUW-2 Noise generated by less meaningful annotations 1 0 1
PUW-3 I can find test packages but cannot infer the coverage 2 0 2
PUW-4 Tool is more useful if the developer is familiar with schemas 4 1 5
7.1. Program Comprehension
The number of correct answers in the program comprehension questionnaire [34]
is graphically displayed in Figure 5. We used a bar chart to display the
correctness of each question across all participants of the questionnaire.
Questions Q2-Q5 had a high success rate, with roughly 90% of the participants
answering correctly. Questions Q1, Q6, and Q10 also performed well, with 80%,
75%, and 73% success rates, respectively.
Figure 5: Program Comprehension Questionnaire
Most of the questions with a high success rate are related to the System
View and the Package View. In other words, these are questions related to the
general view of the system rather than to a specific class or type. The
participants could answer them without navigating far in the tool. The
exception was Q4, which asked "What class contains the highest number of
javax.persistence annotations?" and required the participants to reach the
Class View; it still performed very well, with a 93% success rate. The coding
presented in Table 4 can be used to support this result. The themes PES-1
(Easy to see annotation's usage) and PUS-5 (Detection of Responsibilities
Using Schemas) suggest that, even in the Class View, the schema color
information is still present, which helps detect responsibilities. As seen in
Table 4, PUS-5 had seven mentions, the third most frequent theme. This result
helps confirm that there is a relationship between code annotations and
class/package responsibilities, as presented in Section 2. Finally, the CADV
allows the visualization of such annotations, which helps identify and detect
class/package responsibilities.
Questions Q8 and Q9, however, performed below the 70% correctness rate.
These questions required the user to investigate more deeply using the Package
View and the Class View. Furthermore, they require a good understanding of
the code annotation metrics, unlike Q4, which asked about responsibilities.
As Table 4 shows, the theme PEW-3 (Confusion when switching metrics between
views) appeared in 17 answers, and the theme PEW-5 (Confusing how to trigger a
new view) was mentioned five times. These results help explain the lower rate
of correct answers for questions Q8 and Q9, which required changing views and
understanding these metrics. The theme PEW-6 also helps explain these results
because it implies that users were not checking the header to see which
annotation metric was being used.
Our results are comparable to those of Francese et al. [5], who reported
correctness rates ranging from 70% to 87%. However, some of our questions
reached higher values (93%), while one performed lower (51%).
7.2. Perceived Ease of Use
To measure the perceived ease of use in the questionnaire with students, we
issued the eight statements presented in Table 3 using the Likert Scale.
Figure 6 displays a diverging bar chart with the results. Table 4 presents the
codes/themes, which were mostly extracted from the interviews with developers
but also from the open-ended question the students answered in the survey.
Figure 6 shows that the majority of the respondents agree or strongly agree
that, overall, the tool is easy to use and makes it easy to obtain information
about the target system.
The statement SE3, "Learning how to use the AVisualizer was easy", had the
highest "strongly disagree" value, suggesting the tool requires training to be
used appropriately. The theme PEW-7, which appeared in eight answers and states
that training is required to use the tool, supports this result.
The statement SE6 had the highest "strongly agree" value, suggesting it was
easy for users to locate which part of the software they were inspecting. This
may seem contradictory to the theme PEW-6, which suggests the headers were
sub-utilized. However, crossing this data with the Program Comprehension
results and the themes PEW-5 and PEW-3, the header most likely failed to inform
which metric was being used rather than which package or class the user was
inspecting. Furthermore, PES-3 reinforces that the circle packing provided a
good comprehension of the hierarchical structure, helping users locate
themselves while using the tool.
Figure 6: Perceived Ease of Use - Questionnaire
Among the strength themes, the most common, appearing in eleven answers, was
PES-2 (Easy to understand how to use the tool). Then, in six answers, the
respondents said it was easy to see the annotations' usage (PES-1). Three
respondents also mentioned that the tool has a quick learning curve (PES-4).
Among the weakness themes, we grouped codes concerning aspects that make the
tool harder to use. The most prominent was PEW-3 (Confusion when switching
metrics between views), with 17 answers, followed by PEW-1 (Understand the
concept of annotation schema). The code PEW-3 is also related to PEW-8
(Knowledge of Annotation Metrics): the issue is that most users are not
familiar with the code annotation metrics. PEW-1 suggests that even though code
annotations are very popular among Java programmers, some may not know the
concept of annotation schema. If users are familiar with schemas, the tool
becomes easier to use; if they lack this familiarity, it is harder to reason
about schemas and extract information about them.
7.3. Perceived Usefulness
To measure the perceived usefulness from the students' perspective, we
presented them with the four scenarios mentioned in Section 6.2.3. From the
results, 41% of the students would employ the tool in Scenario I, i.e., to
detect the schemas used in a given project and further learn about them.
Second, with 34.62%, was Scenario IV, "Familiarize with the system before
adding new features". In third place, with 12.82%, was Scenario III, "Analyze
the architecture". The least chosen was Scenario II, related to searching for
problems or misplaced annotations. These results suggest that the students
perceive the tool as useful to support a learning process.
Analyzing Table 4, the most common strength theme, identified in eleven
answers, is PUS-1 (Visualization is better than code inspection to search
annotation-related information), suggesting the AVisualizer may outperform
manual code inspection. As a respondent wrote: "It is also a great idea since
checking the code (going through dozens of packages) takes quite some time and
is not visual, so this tool helps [a lot]." Another interviewee mentioned:
"[...] when you put it visually, it is much easier for me to understand how
they are spread in the system [...]". Besides that, the theme PUS-3 suggests
that the tool can aid in refactoring and modularizing the system's
architecture. Theme PUS-4 mentions that the AVisualizer can detect the
simultaneous usage of frameworks with the same function; even though such
frameworks can usually work together without errors, using only one would make
the solution more consistent across different features of the architecture.
Finally, theme PUS-6 reinforces that the color strategy used to identify
schemas can also help identify potentially misplaced code annotations.
Detecting misplaced annotations may benefit developers when performing a
refactoring process. As mentioned, the work of Yu et al. [2] demonstrated that
19.8% of annotation replacements are related to "same name" changes. In other
words, the developers identified that the annotation name was correct; however,
it belonged to an unexpected schema. With the AVisualizer, these scenarios can
be spotted because an "unexpected" or "different" color circle will appear in
the wrong package. As stated by one interviewee: "[...] it is very simple to
detect potentially misplaced code annotations. If I spot a pink circle in a
package with only orange circles, I would say something is wrong. Why is a
javax.persistence code annotation alone in a package with only
org.springframework code annotations?"
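The hypothetical fragment below sketches the scenario the interviewee
describes: a javax.persistence annotation placed on a class in a package
otherwise dominated by org.springframework annotations. The class and
annotation choices are illustrative, not taken from the evaluated system.

import javax.persistence.Entity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A web controller mistakenly annotated with a persistence annotation.
// In the visualization, @Entity renders with the javax.persistence schema
// color inside a package whose circles are otherwise org.springframework.
@Entity          // misplaced: persistence schema in a controller package
@RestController
public class ReportController {

    @GetMapping("/reports")
    public String listReports() {
        return "reports";
    }
}

In the CADV, the lone circle with a different schema color would stand out
against the rest of the package.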
For the weaknesses, the most common code, present in 12 answers, was PUW-1
(I can find large annotations, but this is not very useful isolated). These
answers arose when discussing topics such as finding large annotations or
excessively used annotations. Even though the AVisualizer allows the detection
of large annotations, the respondents did not see much value in this
information in isolation, claiming that other design decisions outside the
scope of the AVisualizer should also be considered. This result differed from
detecting misplaced annotations, which was seen as very useful.
One interesting code was PUW-2, suggesting that annotations such as @Override
are trivial and probably pollute the visualization. Finally, present in five
answers was PUW-4 (Tool is more useful if the developer is familiar with
schemas). As one respondent stated: "The colors help detect these potentially
misplaced annotations. However, I argue that whoever is analyzing the system
should also be familiar with code annotations and schemas. This way, a better
decision is made to determine if the package or class requires further
investigation or code inspection."
7.4. Target Audience
To validate the intended target audience, we asked the students which roles
they found the tool was suited for, allowing them to select multiple options.
The students chose "Developer" as the preferred role for using the AVisualizer,
followed by "Newcomer Developer" and "Architect". The least chosen was "Project
Manager". Of the interviewees, three said "Architect" was the most suited role,
while three said "Every role in the team can benefit", including the
"Architect". One interviewee stated: "[...] the tool has different utilities
within a software development team. The architect is usually concerned with the
organization, and the tester may use it to detect packages that require more
test code. A developer is usually more concerned with going straight to the
code." Another interviewee stated: "Everyone that touches code can benefit from
this tool". However, one interviewee believes newcomers should consider other
aspects first, stating: "Newcomers should first study what design patterns are
being used in the project. Verifying the annotations should be second".
7.5. Visualization Goals Discussion
(#G1): To reach this goal, we designed the System View. The results show
that both students and developers found this view the most useful and
were able to get a general view of how code annotations were distributed
in the system.
(#G2): To reach this goal, we designed the Package View. The results were
similar to those of the System View: students and developers could
visualize the usage of code annotations within a specific package.
(#G3): To reach this goal, we designed the Class View. The results show that
this view did not reach the same correctness rate as the other two. In the
qualitative analysis, we identified that this result was related to switching
views and to code annotation metrics unfamiliar to respondents. However,
even in the worst scenario, it reached a correctness rate of 51%.
(#G4): The AVisualizer tool was developed with this feature, enabling users
to navigate between all three views. From the results, the users could
navigate between views after proper training. Although SE8 showed good
agreement about how easy navigation in the tool is, PEW-3 and PEW-5
showed that it could be improved.
(#G5): From the qualitative analysis, we found the tool useful for detecting
potentially misplaced annotations. As for detecting large or excessively
used annotations, the coding revealed that the AVisualizer tool allows their
detection, but respondents did not see much value in this. We argue that
future experiments and a proper study should investigate code annotation
bad smells and how these large annotations potentially impact software
maintenance.
7.6. Threats to Validity
This section discusses the threats to the validity of our study. We followed
the guidelines from Wohlin et al. [39], considering four aspects to evaluate in
a study: conclusion validity, internal validity, construct validity, and
external validity.
7.6.1. Conclusion Validity
This aspect concerns the relationship between the treatment and the outcome,
i.e., whether the treatment is responsible for the outcome. One threat is that
the respondents had five alternatives for each question in the questionnaire,
and some might have guessed the correct answer. For the interview, the
developers were familiar with the underlying system and may have answered
questions based on their previous knowledge of it. To mitigate this, we asked
the interviewees to describe how they used the tool to obtain their answers.
Finally, we conducted the study with two different groups of participants to
mitigate potential bias from a single group.
7.6.2. Internal Validity
In this step, the threat is identifying a causal relationship between two
aspects when, in reality, another aspect that was not considered is responsible
for the outcome. In our study, an issue might be that the tasks were
accomplished for reasons other than the tool.
All participants received the same training through video. However, the
interviewees had a second, live training session and therefore may have felt
more comfortable than the respondents to the questionnaire. To mitigate this
threat, the participants had access to the video during the evaluation and
could re-watch it. Furthermore, the questionnaire contained a guide explaining
the annotation metrics.
Two authors of this work were researchers belonging to INPE. However, there
was no direct relationship between them and the six interviewed developers of
the SpaceWeatherTSI application, nor did the authors have any involvement in
developing this application.
7.6.3. Construct Validity
This step concerns whether we are correctly assessing the constructs under
study. In our study, the key constructs are perceived ease of use and perceived
usefulness. To mitigate the threat of not correctly assessing them, we employed
instruments developed for TAM that have been extensively used in research. The
questionnaire was conducted remotely by the participants in their own
environment. They were also responsible for using the AVisualizer, watching the
video tutorials, and answering the questions truthfully. Some participants
might have had a more suitable environment or fewer interruptions during the
evaluation.
7.6.4. External Validity
This step concerns generalization, i.e., whether the results are valid beyond
the subjects who participated in the study. Evaluations carried out with
participants may suffer from the "evaluation apprehension" threat, in which
some participants might think they are being tested or evaluated and perform
poorly. To mitigate this, especially with the students, it was made clear that
their answers to the questionnaire would not influence their grades. Another
threat is that the chosen target software system displayed characteristics
aligned with the goals of the AVisualizer; not every Java software system is
suitable to be visualized with the CADV. To mitigate any concerns, we clarified
to participants that the AVisualizer was designed to visualize code
annotations. Finally, a threat to the CADV approach is its heavy use of colors,
which severely impacts its usage by colorblind users, since identifying
annotation schemas by color is at the core of the approach.
8. Concluding Remarks
We proposed Code Annotations Distribution Visualization (CADV), a software
visualization approach for code annotations. It is composed of three views
based on circle packing that represent the hierarchical structure of the system
under analysis. The leaf nodes represent information about code annotations,
and their size is calculated using a code annotation metric. The color strategy
used to represent annotation schemas allowed users to visualize how
responsibilities were distributed in the system under analysis.
As a reference implementation of the CADV, we developed a web-based tool
called AVisualizer. To evaluate our visualization approach, we conducted an
empirical study with students, through a questionnaire, and with professional
developers, through interviews. The data was analyzed using both quantitative
and qualitative approaches. From the questionnaire, we extracted the
correctness of program comprehension tasks, the perceived ease of use, and the
perceived usefulness, and we evaluated the suitability for the intended target
audience. The qualitative analysis was carried out through a thematic analysis
of the interview transcriptions and the questionnaire's open questions, which
supported the results of the quantitative analysis. Our findings suggest that
CADV is suitable for representing code annotations' distribution and
organization, identifying responsibilities associated with schemas, and
spotting potentially misplaced annotations. According to the participants, CADV
is more appropriate for these tasks than the alternatives currently available
to developers, which rely on code inspection aided by IDE code search features.
According to Merino et al. [6], it is not common for software visualization
studies to conduct empirical evaluations. Our findings demonstrate more gener-
ally that software visualization approaches can aid in the software comprehen-
sion process.
The current implementation of the AVisualizer supports only Java code
annotations. However, it can be adapted to display C# attributes by associating
the schema with the namespaces used to define them. As for the circle packing
approach, it is worth investigating how traditional source code metrics would
be displayed in this kind of visualization.
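As a sketch of the idea, assuming that a schema corresponds to the package
prefix of an annotation's fully qualified name (and, analogously, to the
namespace of a C# attribute), the adaptation could reuse logic such as the
following; the class and method names are hypothetical:

// Minimal sketch, assuming a schema is identified by the package prefix of an
// annotation's fully qualified name (e.g., javax.persistence.Entity belongs to
// the javax.persistence schema). All names here are hypothetical.
public final class SchemaResolver {

    private SchemaResolver() {}

    // Returns the schema of a fully qualified annotation (or attribute) name.
    public static String schemaOf(String fullyQualifiedAnnotation) {
        int lastDot = fullyQualifiedAnnotation.lastIndexOf('.');
        // The schema is everything before the simple annotation name.
        return lastDot > 0 ? fullyQualifiedAnnotation.substring(0, lastDot) : "";
    }

    public static void main(String[] args) {
        System.out.println(schemaOf("javax.persistence.Entity"));  // javax.persistence
        System.out.println(schemaOf("org.springframework.web.bind.annotation.GetMapping"));
    }
}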
Finally, the AVisualizer is a tool in constant evolution, and several new
features are currently being added, mainly for customization purposes and to
aid in identifying other potential annotation bad practices. At this point, a
plugin for the IntelliJ IDE has been developed
(https://github.com/metaisbeta/intelliJ-avisualizer-plugin) so developers can
use the AVisualizer embedded in an IDE. The demonstration can be downloaded
directly from the Marketplace
(https://plugins.jetbrains.com/plugin/18237-annotation-visualizer).
Furthermore, a feature to allow navigation from the tool to the source code is
currently a work in progress.
Acknowledgments
We would like to thank the support granted by the Brazilian funding agency
FAPESP (São Paulo Research Foundation), grant 2019/12743-4.
References
[1] P. Lima, E. Guerra, P. Meirelles, L. Kanashiro, H. Silva, F. Silveira, A Metrics Suite for code annotation assessment, Journal of Systems and Software 137 (2018) 163–183. doi:10.1016/j.jss.2017.11.024.
[2] Z. Yu, C. Bai, L. Seinturier, M. Monperrus, Characterizing the Usage, Evolution and Impact of Java Annotations in Practice, IEEE Transactions on Software Engineering 47 (5) (2021) 969–986. doi:10.1109/TSE.2019.2910516.
[3] V. Rajlich, Software Evolution and Maintenance, in: Future of Software Engineering Proceedings, FOSE 2014, Association for Computing Machinery, New York, NY, USA, 2014, pp. 133–144. doi:10.1145/2593882.2593893.
[4] W. Hasselbring, A. Krause, C. Zirkelbach, ExplorViz: research on software visualization, comprehension and collaboration, Software Impacts 6 (2020) 100034. doi:10.1016/j.simpa.2020.100034.
[5] R. Francese, M. Risi, G. Scanniello, G. Tortora, Proposing and assessing a software visualization approach based on polymetric views, Journal of Visual Languages & Computing 34-35 (2016) 11–24. doi:10.1016/j.jvlc.2016.05.001.
[6] L. Merino, M. Ghafari, C. Anslow, O. Nierstrasz, A systematic literature review of software visualization evaluation, Journal of Systems and Software 144 (2018) 165–180. doi:10.1016/j.jss.2018.06.027.
[7] R. Wettel, M. Lanza, R. Robbes, Software Systems as Cities: A Controlled Experiment, in: Proceedings of the 33rd International Conference on Software Engineering, ICSE '11, Association for Computing Machinery, New York, NY, USA, 2011, pp. 551–560. doi:10.1145/1985793.1985868.
[8] S. Romano, N. Capece, U. Erra, G. Scanniello, M. Lanza, On the use of virtual reality in software visualization: The case of the city metaphor, Information and Software Technology 114 (2019) 92–106. doi:10.1016/j.infsof.2019.06.007.
[9] U. Erra, G. Scanniello, Towards the Visualization of Software Systems as 3D Forests: The CodeTrees Environment, in: Proceedings of the 27th Annual ACM Symposium on Applied Computing, SAC '12, Association for Computing Machinery, New York, NY, USA, 2012, pp. 981–988. doi:10.1145/2245276.2245467.
[10] M. Lanza, S. Ducasse, Polymetric Views - A Lightweight Visual Approach to Reverse Engineering, IEEE Transactions on Software Engineering 29 (9) (2003) 782–795. doi:10.1109/TSE.2003.1232284.
[11] K. Stephenson, J. Cannon, W. Floyd, W. Parry, Introduction to circle packing: the theory of discrete analytic functions, The Mathematical Intelligencer 29 (2007) 63–66. doi:10.1007/BF02985693.
[12] E. M. Guerra, J. T. de Souza, C. T. Fernandes, A Pattern Language for Metadata-Based Frameworks, in: Proceedings of the 16th Conference on Pattern Languages of Programs, PLoP '09, Association for Computing Machinery, New York, NY, USA, 2009. doi:10.1145/1943226.1943230.
[13] E. Guerra, Design Patterns for Annotation-Based APIs, in: Proceedings of the 11th Latin-American Conference on Pattern Languages of Programming, SugarLoafPLoP '16, The Hillside Group, USA, 2016.
[14] P. Lima, E. Guerra, P. Meirelles, Annotation Sniffer: a tool to Extract Code Annotations Metrics, Journal of Open Source Software 5 (47) (2020) 1960. doi:10.21105/joss.01960.
[15] M. Aniche, G. Bavota, C. Treude, M. Gerosa, A. van Deursen, Code Smells for Model-View-Controller Architectures, Empirical Software Engineering Journal (EMSE). doi:10.1007/s10664-017-9540-2.
[16] M. Aniche, C. Treude, A. Zaidman, A. van Deursen, M. Gerosa, SATT: Tailoring Code Metric Thresholds for Different Software Architectures, in: Proceedings of the 16th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), 2016. doi:10.1109/SCAM.2016.19.
[17] M. Lanza, R. Marinescu, Object-oriented metrics in practice: using software metrics to characterize, evaluate, and improve the design of object-oriented systems, Springer, Heidelberg, 2006.
[18] M. Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley, Boston, MA, USA, 1999.
[19] R. Teixeira, E. Guerra, P. Lima, P. Meirelles, F. Kon, Does it make sense to have application-specific code conventions as a complementary approach to code annotations?, in: Proceedings of the 3rd ACM SIGPLAN International Workshop on Meta-Programming Techniques and Reflection, 2018, pp. 15–22.
[20] M. Lanza, CodeCrawler - Polymetric Views in Action, in: Proceedings of the 19th International Conference on Automated Software Engineering, 2004, pp. 394–395. doi:10.1109/ASE.2004.1342773.
[21] R. Francese, M. Risi, G. Scanniello, G. Tortora, Viewing Object-Oriented Software with MetricAttitude: An Empirical Evaluation, in: 2014 18th International Conference on Information Visualisation, 2014, pp. 59–64. doi:10.1109/IV.2014.42.
[22] H. Graham, H. Y. Yang, R. Berrigan, A Solar System Metaphor for 3D Visualisation of Object Oriented Software Metrics, in: Proceedings of the 2004 Australasian Symposium on Information Visualisation - Volume 35, APVis '04, Australian Computer Society, Inc., AUS, 2004, pp. 53–59.
[23] R. Wettel, M. Lanza, Visualizing Software Systems as Cities, in: 2007 4th IEEE International Workshop on Visualizing Software for Understanding and Analysis, 2007, pp. 92–99. doi:10.1109/VISSOF.2007.4290706.
[24] L. Merino, M. Ghafari, C. Anslow, O. Nierstrasz, CityVR: Gameful Software Visualization, in: 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2017, pp. 633–637. doi:10.1109/ICSME.2017.70.
[25] B. Dagenais, H. Ossher, R. K. E. Bellamy, M. P. Robillard, J. P. de Vries, Moving into a New Software Project Landscape, in: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, ICSE '10, Association for Computing Machinery, New York, NY, USA, 2010, pp. 275–284. doi:10.1145/1806799.1806842.
[26] F. D. Davis, Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology, MIS Quarterly 13 (3) (1989) 319–340. URL http://www.jstor.org/stable/249008
[27] J. Choma, E. M. Guerra, T. S. Da Silva, L. A. Zaina, F. F. Correia, Towards an artifact to support agile teams in software analytics activities, 2019, pp. 88–93. doi:10.18293/SEKE2019-146.
[28] N. Sant'Anna, E. Guerra, A. Ivo, F. Pereira, M. Moraes, V. Gomes, L. G. Veras, Modelo Arquitetural para Coleta, Processamento e Visualização de Informações de Clima Espacial, in: Proceedings of the Simpósio Brasileiro de Sistemas de Informação, SBC, Porto Alegre, RS, Brasil, 2014, pp. 125–136. doi:10.5753/sbsi.2014.6107.
[29] M. Wen, R. Siqueira, N. Lago, D. Camarinha, A. Terceiro, F. Kon, P. Meirelles, Leading successful government-academia collaborations using FLOSS and agile values, Journal of Systems and Software 164 (2020) 110548. doi:10.1016/j.jss.2020.110548.
[30] E. Murphy-Hill, T. Zimmermann, N. Nagappan, Cowboys, Ankle Sprains, and Keepers of Quality: How is Video Game Development Different from Software Development?, in: Proceedings of the 36th International Conference on Software Engineering, ICSE 2014, Association for Computing Machinery, New York, 2014, pp. 1–11.
[31] M. Bostock, Circle Packing (Dec. 2017). URL https://observablehq.com/@d3/circle-packing
[32] V. R. Basili, G. Caldiera, H. D. Rombach, The goal question metric approach, Encyclopedia of Software Engineering (1994) 528–532.
[33] M. Bostock, V. Ogievetsky, J. Heer, D3: Data-Driven Documents, IEEE Transactions on Visualization and Computer Graphics 17 (12) (2011) 2301–2309. doi:10.1109/TVCG.2011.185.
[34] P. Lima, J. Melegati, E. Gomes, N. S. Pereira, E. Guerra, P. Meirelles, Supplementary Material for the Manuscript "CADV: A software visualization approach for code annotations distribution" (Dec. 2021). doi:10.5281/zenodo.6660133.
[35] D. Falessi, N. Juristo, C. Wohlin, B. Turhan, J. Münch, A. Jedlitschka, M. Oivo, Empirical software engineering experts on the use of students and professionals in experiments, Empirical Software Engineering 23 (1) (2018) 452–489. doi:10.1007/s10664-017-9523-3.
[36] M. Höst, B. Regnell, C. Wohlin, Using Students as Subjects - A Comparative Study of Students and Professionals in Lead-Time Impact Assessment, Empirical Software Engineering 5 (3) (2000) 201–214. doi:10.1023/A:1026586415054.
[37] V. Braun, V. Clarke, Using thematic analysis in psychology, Qualitative Research in Psychology 3 (2) (2006) 77–101. doi:10.1191/1478088706qp063oa.
[38] D. S. Cruzes, T. Dybå, Recommended Steps for Thematic Synthesis in Software Engineering, in: 2011 International Symposium on Empirical Software Engineering and Measurement, IEEE, 2011, pp. 275–284. doi:10.1109/ESEM.2011.36.
[39] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, A. Wesslén, Planning, in: Experimentation in Software Engineering, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012, pp. 89–116. doi:10.1007/978-3-642-29044-2_8.