Challenges in Assessing Technical Debt based on
Dynamic Runtime Data
Marcus Ciolkowski
QAware GmbH
Aschauer Str. 32, 81549 München, Germany
Email: marcus.ciolkowski@qaware.de

Liliana Guzmán, Adam Trendowicz, Anna Maria Vollmer
Fraunhofer IESE
Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany
Email: {liliana.guzman, adam.trendowicz, anna-maria.vollmer}@iese.fraunhofer.de
Abstract—Existing definitions and metrics of technical debt (TD) tend to focus on static properties of software artifacts, in particular on code measurement. Our experience from software renovation projects is that dynamic aspects, i.e., runtime indicators of TD, often play a major role. In this position paper, we present insights and solution ideas gained from numerous software renovation projects at QAware and from a series of interviews held as part of the ProDebt research project. We interviewed ten practitioners from two German software companies in order to understand current requirements and potential solutions to current problems regarding TD. Based on the interview results, we motivate the need for measuring dynamic indicators of TD from the practitioners’ perspective, including current practical challenges. We found that the main challenges include a lack of production-ready measurement tools for runtime indicators, the definition of proper metrics and their thresholds, as well as the interpretation of these metrics in order to understand the actual debts and derive countermeasures. Measuring and interpreting dynamic indicators of TD is especially difficult to implement for companies because the related metrics are highly dependent on runtime context and thus difficult to generalize. We also sketch initial solution ideas by presenting examples of dynamic indicators for TD and outline directions for future work.
Index Terms—Technical debt, measurement, dynamic data,
runtime quality
I. INTRODUCTION
An important principle in software engineering is to avoid “broken windows” of software quality in any type of software project. That is, it is important to find and fix
quality deficits (e.g., bad design, wrong architectural decisions,
or poor code) as soon as they are detected to avoid signs of
neglect similar to a broken window in a building that invites
vandalism [1]. This principle has become particularly promi-
nent with the advent of agile software development [2]. Agile
software development facilitates flexible, rapid, and continuous
development by providing practitioners with iterative methods
that rely on extensive collaboration. To succeed in this context, organizations need tools for continuously analyzing software quality across short release cycles [3]. Otherwise, developers
tend to focus on functional requirements, neglecting quality
requirements [4]. Thus, continuous measurement of quality
deficits is crucial for ensuring software quality.
We define the following terms (see Ampatzoglou et al. [5, p.
63]): Principal denotes the effort that is required to address the
difference between the current and the target level of design-
time quality. Interest denotes the additional cost (e.g., effort) needed for maintaining and operating the software due to its decayed design-time quality.
Technical debt itself comprises the principal plus interest to
be paid within a certain time interval (e.g., the next year).
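Under these definitions, and as our own reading aid rather than a formula taken from [5], the relationship can be sketched for a planning interval \Delta t as

    TD(\Delta t) = \text{Principal} + \text{Interest}(\Delta t),

where \text{Interest}(\Delta t) denotes the additional maintenance and operation cost accrued within \Delta t due to the decayed design-time quality.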
The experience at QAware, a software development company with extensive experience in analyzing and renovating software systems, has shown that TD spreads throughout systems and affects code as well as non-code artifacts. Moreover, many forms of TD are difficult or even impossible to
spot statically and can only be observed dynamically; that is,
at system runtime.
In recent years, the research on TD has focused mainly on
static analysis of source code and test artifacts. For example,
Alves et al. [6, pp. 107–108] report that only two of 100 papers
mention dynamic properties. In both cases, it is unclear whether
this means an analysis at runtime or static code analysis. A
notable exception is TD related to testing, which is addressed
in 24 of 100 papers. However, all of these look at static
indicators of testing: Although they require running the system
(or, at least, executing the test cases), they ignore runtime data
such as test execution time. Thus, there is a research gap with respect to runtime or dynamic indicators of TD.
This paper is a position paper that is based on experi-
ence gained from numerous software renovation projects at
QAware, and on interview results from the ProDebt research
project on strategic planning of TD.
II. ON THE IMPORTANCE OF DYNAMIC ASPECTS OF TECHNICAL DEBT
The background of this position paper is that of QAware, an
independent software development and consultancy enterprise
that analyzes, renovates, invents, and implements software
systems for customers whose success heavily depends on IT.
One important field of business is system troubleshooting and
software renovation [7], [8]. System troubleshooting makes in-
tensive use of runtime analysis of distributed systems: QAware
experts install measurement infrastructure that measures fine-
grained data on the system and application level, and collects
and stores this data in one repository for analysis purposes.
From QAware’s experience, runtime aspects of system behav-
ior are critical in practice, contribute heavily to TD interest
and software bankruptcy, and are often difficult to detect.
Existing measurement support for TD focuses on static
aspects, such as component coupling or other code-based
metrics [9]. This is important, for instance, for managing debt
associated with software maintenance cost. Because many tools are available for measuring static aspects [10], static TD can be monitored and controlled to a certain degree: e.g., if a project follows a zero-violations policy (no violations may be carried across sprints or releases), there are no violations against the measured (static) TD metrics.
Dynamic aspects of TD, however, are often overlooked in
existing tool support and research [9], although they often con-
tribute significantly to the debt’s interest. Thus, measurement and interpretation of dynamic aspects lack ready-to-use tool support in spite of their importance: TD can create runtime
symptoms such as extensive (i.e., larger than necessary) con-
sumption of resources such as time, memory, or computing
power. Later on, during tests, dynamic symptoms of TD can
hinder automatic testing or increase the effort and complexity
of manual testing. Finally, during operation, TD may surface
as a decrease in performance, stability, or scalability, which in
turn affects operation costs and customer satisfaction.
Detecting runtime symptoms of TD is difficult, but this is
also where the tricky cases in maintenance are often hidden
(e.g., leaks or locks) [7]. Although expert tools are available to support runtime analysis (e.g., profilers, heap walkers, stack-tracing tools), these are typically difficult to use and not
suitable for continuous monitoring, in particular for distributed
systems. In addition, as the required metrics include system-
level as well as application-specific metrics, they stem from
many tools and have to be collected and stored in a common
repository in order to make them accessible for analysis
and interpretation. Moreover, they have to be measured and
collected from many components in distributed systems, often
over networks that are only partially secured.
However, measuring dynamic indicators of TD is important,
because some forms of TD are difficult or impossible to spot
statically and only surface at runtime. For example, problems
with the runtime architecture (i.e., actual call relationships
at runtime) are often difficult to spot in software code, and their detection is further obfuscated by the intensive use of inheritance, code injection, reflection, or soft links in modern information systems. Thus, they can only be measured reliably at runtime; yet corresponding measurement typically
has to be built from scratch [11].
III. IDEAS FOR DEFINING & MEASURING TECHNICAL DEBT
A. Context
The research project ProDebt aimed at developing an inno-
vative tool-supported method to support the strategic planning
of TD. At the beginning of the ProDebt project (http://www.prodebt.de/), we aimed at understanding the meaning, management, and problems
related to TD from the perspective of different stakeholders in
German software companies involved in agile software devel-
opment. Therefore, we designed and performed ten individual
semi-structured interviews with product owners and developers
in the software companies involved in the ProDebt project,
namely QAware and Insiders Technologies.
Each interview lasted up to 45 minutes and was performed
by two researchers. The collected data was analyzed using
thematic analysis. For a detailed description of QAware and
Insiders Technologies and the related use cases, refer to [3].
B. Indicators of Dynamic Aspects
Besides traditional code-related static indicators of TD,
the interviews identified a number of non-code and dynamic
indicators of TD. Table I summarizes the most prominent aspects and the associated measurement ideas from the perspective of product owners and developers.
C. Measurement and Challenges
The interviewees also discussed potential challenges of
measuring dynamic aspects of TD. The main measurement
challenges include defining and interpreting appropriate met-
rics, as such metrics are highly dependent on the context
in which TD and the associated dynamic software aspects
are considered. From the implementation viewpoint, one of
the main obstacles perceived was the lack of tools for au-
tomatic measurement of software behavior at runtime. The
interviewees noted that there are a number of opportunities
for collecting runtime data that can be utilized: On the one
hand, there are a number of software lifecycle stages where
software behavior can be observed. These stages include
various levels of testing (e.g., unit, integration, acceptance)
that are typically at least partially automated, as well as the software in production. On the other hand, a number of tools exist that may be utilized for collecting basic data on software behavior. Examples include tools that come with the operating system (e.g., Unix and Windows); within-application metric frameworks (e.g., JMX/Jolokia for Java) that offer many metrics out of the box, such as garbage collection statistics or thread counters; or external monitoring
tools (e.g., Nagios). Other ways of collecting runtime data
include embedding measurement mechanisms (e.g., writing
custom code that uses JMX/Jolokia) or analyzing log files
produced during runtime (e.g., using the ELK/Elastic stack).
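As an illustration of the second option, the following minimal Java sketch polls some of the mentioned out-of-the-box JMX metrics (garbage collection statistics, heap usage, thread count) via the standard platform MXBeans; the class name and output format are our own assumptions, not taken from any specific tool.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    // Minimal sketch: polling basic runtime indicators via the JMX platform MXBeans.
    public class RuntimeIndicatorProbe {
        public static void main(String[] args) {
            // Garbage collection statistics (cumulative since JVM start)
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC %s: collections=%d, time=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            // Current heap usage
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            System.out.printf("Heap used=%d bytes, max=%d bytes%n",
                    memory.getHeapMemoryUsage().getUsed(), memory.getHeapMemoryUsage().getMax());
            // Thread counter
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.printf("Live threads=%d%n", threads.getThreadCount());
        }
    }

In practice, such values would be exported periodically (e.g., via Jolokia or a metrics endpoint) and stored in a common repository rather than printed.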
From the perspective of the interviewees, one critical aspect
of measuring software at runtime is potential intrusion into the
system under measurement and the related impact on, e.g.,
performance. Non-invasive measurement is particularly important for systems in operation. In this context, highly
invasive measurement approaches, such as using profiling tools
or measurement through bytecode injection, are typically not
feasible for production systems but may be applicable in
testing scenarios [11].
Measurement data creates the basis for managing TD, but
to support decision making, the data must be interpreted properly.
TABLE I
EXCERPT OF MEASUREMENT IDEAS FOR DYNAMIC INDICATORS OF TECHNICAL DEBT

Indicator: Service usage
Meaning: Usage profile of the services provided by the software system. On the one hand, service usage is an indicator of the runtime load of a software system. On the other hand, it specifies the context for measuring other performance indicators in a comparable way: i.e., performance indicators are measured and compared under the same service usage profile.
Metric: Count of service calls per individual service.

Indicator: Appropriateness of test profile
Meaning: Correspondence between the test profile and the real usage profile of the software system. This factor is not a direct indicator of performance efficiency but determines the reliability of the performance indicators assessed during tests. The less a test profile corresponds to the real usage of the software application, the less reliable are the performance indicators assessed during such tests.
Metric: Difference between the test profile (e.g., using the Limbo tool) and the real usage profile (measured by analyzing the log files). Similarity can be defined through a common statistical distance metric between probability distributions, or through a distance metric between two points in an n-dimensional space, where the test profile and the usage profile represent the points and each of the n services represents one dimension.

Indicator: Service call duration
Meaning: Duration of a service/method call. With automation, the duration of use cases can be measured. Trends as well as outliers are valuable.
Metric: Service or method call duration from log files (ELK) or JMX/Jolokia for Java. Testing environments: test execution time, supported by most testing frameworks, or measured via bytecode injection.

Indicator: Intensity of DB calls
Meaning: Efficiency of DB operations (e.g., per service call or per unit/integration/performance test). For example, DB operations may be realized in a way that leads to excessive runtime when the system runs under high load (e.g., multiple small DB accesses in a short time instead of one large one).
Metric: Count of outgoing, opened DB connections initiated by the system at runtime and their status.

Indicator: Processor load
Meaning: Utilization of processor capacity (per component, service, system) over time. Processor capacity should be utilized as expected in a given scenario; in practice, the expectation can be minimal or maximal utilization, depending on the specific context.
Metric: Production: runtime data collection from operating system tools (e.g., ps under Unix) or metric frameworks such as JMX for Java.

Indicator: Memory load
Meaning: Utilization of memory (per component, service, system). Available memory should be utilized as expected in a given scenario; e.g., it should not increase continuously and/or should not exceed a specific threshold.
Metric: Production: runtime collection of data from operating system tools (e.g., ps under Unix) or metric frameworks such as JMX for Java. Testing: supported by most testing frameworks; results need to be stored afterwards (e.g., with SonarQube’s Surefire plugin).
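To make the “appropriateness of test profile” idea from Table I concrete, the following sketch computes a simple Euclidean distance between the relative call frequencies of a test profile and a usage profile, with each service as one dimension; the class name and example numbers are hypothetical.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Sketch: distance between a test profile and a real usage profile (cf. Table I).
    public class ProfileDistance {

        static double distance(Map<String, Long> testProfile, Map<String, Long> usageProfile) {
            Set<String> services = new HashSet<>(testProfile.keySet());
            services.addAll(usageProfile.keySet());
            double testTotal = testProfile.values().stream().mapToLong(Long::longValue).sum();
            double usageTotal = usageProfile.values().stream().mapToLong(Long::longValue).sum();
            double sum = 0.0;
            for (String service : services) {
                // compare relative call frequencies so profiles of different size stay comparable
                double pTest = testTotal == 0 ? 0 : testProfile.getOrDefault(service, 0L) / testTotal;
                double pUsage = usageTotal == 0 ? 0 : usageProfile.getOrDefault(service, 0L) / usageTotal;
                sum += (pTest - pUsage) * (pTest - pUsage);
            }
            return Math.sqrt(sum); // 0 = identical profiles; larger values = less representative tests
        }

        public static void main(String[] args) {
            Map<String, Long> test = Map.of("search", 100L, "checkout", 100L);
            Map<String, Long> usage = Map.of("search", 900L, "checkout", 100L);
            System.out.println(distance(test, usage)); // ~0.57: the test profile over-weights checkout
        }
    }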
Interpreting runtime measurement data and deriving judgments regarding the actual debt is the second major challenge of data-driven TD management. Unlike for code-based static software measurements, defining absolute thresholds is typically not possible for runtime behavior because such metrics are highly dependent on the runtime context and thus difficult to generalize. Although for some indicators a meaningful absolute threshold can be defined in specific contexts as an early indicator of potential debt, the best way
of assessing runtime behavior remains to observe trends over
time, from the short-term (e.g., sprints) as well as long-term
(e.g., releases) perspective. Typical interpretation patterns of
runtime data are thus:
Threshold: An issue is raised when values of interest approach, reach, exceed, or fall below a defined limit (see the sketch after this list). In such
cases, not only an instant value but also the frequency or
duration with which a certain limit is reached should raise
an issue. The following types of limit values are particularly
interesting in practice: hard limits (e.g., 100% CPU consump-
tion), soft limits (e.g., >90% CPU consumption is critical),
and empirical limits (significant outliers according to baseline
data and a given significance level).
Trend: An issue is raised when a TD indicator (e.g., consumption of resources) shows a continuous trend or even a tendency to go beyond a limit. Although an absolute limit
is difficult to define, TD can be reliably detected based on
patterns and trends.
Correlation: An issue is raised when observed values correlate with the values of an already known anomaly that was related to TD in the past. A typical example of such a pattern is CPU workload.
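A minimal sketch of how the threshold and trend patterns could be checked against collected samples is shown below; the limits, sample values, and class name are illustrative assumptions rather than recommendations.

    import java.util.List;

    // Sketch: simple checks for the threshold and trend interpretation patterns.
    public class RuntimeDataInterpreter {

        // Threshold pattern: raise an issue when a soft limit is breached too often.
        static boolean thresholdIssue(List<Double> samples, double softLimit, int maxBreaches) {
            long breaches = samples.stream().filter(v -> v > softLimit).count();
            return breaches > maxBreaches;
        }

        // Trend pattern: raise an issue when the metric grows monotonically over the window.
        static boolean trendIssue(List<Double> samples) {
            for (int i = 1; i < samples.size(); i++) {
                if (samples.get(i) < samples.get(i - 1)) {
                    return false; // not continuously increasing
                }
            }
            return samples.size() > 1;
        }

        public static void main(String[] args) {
            List<Double> cpuLoad = List.of(0.55, 0.93, 0.97, 0.95, 0.60);
            List<Double> heapAfterGc = List.of(210.0, 230.0, 245.0, 270.0);
            System.out.println(thresholdIssue(cpuLoad, 0.9, 2)); // true: >90% CPU reached three times
            System.out.println(trendIssue(heapAfterGc));         // true: heap after GC keeps growing
        }
    }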
D. Example Dynamic Indicators of Debt
This section discusses selected indicators of TD (Table I)
and how they can be measured and interpreted. The following
examples stem from QAware’s experience in troubleshooting
projects in many companies and domains. The setting in a
troubleshooting project is typically that a (distributed) system is unstable or suffers from performance problems, causing high costs. QAware experts analyze the system to provide quick
fixes. Typically, troubleshooting projects end with lists of
short-term fixes and mid- to long-term TDs to be removed.
1) Connections Quantity: Connections refer to runtime
input-output operations. This represents a typical situation for
saturation patterns: While it is difficult to define an absolute
threshold, a growing trend is always problematic.
For example, in one troubleshooting project there was a
distributed system that had been rolled out world-wide but
was too slow to be used, causing high follow-up costs. An
analysis of file handles showed an increase in the number of
handles used by the operating system. The operating system references each of its objects, such as threads, mutex objects, or files, via a handle. In this case, a missing call to Release() meant that COM objects were never released, which could only be detected with a debugger.
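The reported case concerned COM objects on Windows; as a related illustration, the sketch below observes the open file descriptor count of a JVM running on Unix via the HotSpot-specific com.sun.management extension, so that a growing trend can be spotted early. The class name and polling interval are arbitrary choices.

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    // Sketch: observing the trend of open file descriptors of the current JVM (Unix only).
    public class HandleTrendProbe {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                com.sun.management.UnixOperatingSystemMXBean unixOs =
                        (com.sun.management.UnixOperatingSystemMXBean) os;
                for (int i = 0; i < 5; i++) {
                    // a continuously growing count is the saturation pattern described above
                    System.out.printf("open file descriptors: %d (max %d)%n",
                            unixOs.getOpenFileDescriptorCount(), unixOs.getMaxFileDescriptorCount());
                    Thread.sleep(1000);
                }
            }
        }
    }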
2) Intensity of (DB) calls: A situation occurring often
in troubleshooting projects is that one incoming connection
causes many outgoing connections; for example, one incoming web request resulting in many database calls. An
indicator for TD is therefore the relation between incoming
and outgoing calls, and the trend of system-internal calls vs.
external calls. In one troubleshooting project, we found that an
incoming web request caused up to 2400 JavaScript calls, 30
service requests and 1600 database calls. Root causes are most
often programming errors: for example, inefficient algorithms or data structures, or inefficient database access consisting of many tiny queries, with joins performed in Java code instead of within the database. Another typical root cause is inappropriate usage of complex frameworks (e.g., Hibernate for database operations requires expert knowledge about its side effects) or, sometimes, defects in frameworks or in the Java virtual machine.
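One simple way to make this indicator observable is to count incoming requests and outgoing database calls and to expose their ratio as a metric; the following sketch is a hypothetical instrumentation helper, not code from any of the described projects.

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: tracking the ratio of outgoing DB calls to incoming requests.
    public class CallIntensityTracker {

        private final AtomicLong incomingRequests = new AtomicLong();
        private final AtomicLong outgoingDbCalls = new AtomicLong();

        void onIncomingRequest() { incomingRequests.incrementAndGet(); }

        void onDbCall() { outgoingDbCalls.incrementAndGet(); }

        // e.g., exposed via JMX or a metrics endpoint and observed as a trend over releases
        double dbCallsPerRequest() {
            long requests = incomingRequests.get();
            return requests == 0 ? 0.0 : (double) outgoingDbCalls.get() / requests;
        }
    }

Observed over time, a rising ratio indicates that a single incoming call triggers more and more outgoing work.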
3) Garbage Collection (GC): This indicator covers garbage collector activity and the amount of heap memory, which typically indicate TD (e.g., a memory leak) if the frequency of GC rises or if the maximum/minimum peaks of heap memory rise over the course of GC activity (see Figure 1).
Typical root causes for class or memory leaks include inef-
ficient programming (e.g., forgetting to release handles), but
often also defects in frameworks or side-effects in frameworks
that are known but not well-documented.
Fig. 1. Technical debt: Garbage Collection activity rises, min/max memory
rises. This hints at a memory leak.
IV. CONCLUSION
Current approaches for measuring TD focus mainly on
static aspects of code or near-code artifacts, measured at
development or build time. There is a need for extending
current approaches to address dynamic indicators—that is,
properties emerging at system runtime—which are often an
important cause of software bankruptcy. In this paper, we
presented initial insights on requirements and measurement
ideas gained during commercial software renovation projects
and a research project on TD management.
The presented examples of dynamic indicators of TD have in common that they frequently occur in practice, that they were difficult or even impossible to detect with static analysis techniques and only observable at system runtime, and that they resulted in quality deficits that need to be detected and fixed as soon as possible. This is why indicators
for dynamic aspects of TD are important in practice and need
ready-to-use tools for measuring and storing runtime data;
advances are also needed with regard to their interpretation.
Continuous monitoring of dynamic aspects of TD can
supplement static measurement. This is becoming increasingly important in the context of DevOps and cloud-native approaches. There are plenty of measurement opportunities in
such environments: Every build or nightly build can include,
for example, automated load tests; systems in operation can
use metrics (e.g., based on JMX) or log files (e.g., based on the
ELK stack) to collect dynamic indicators and thereby detect
anomalies as well as slow trends over time. Detecting these forms of TD early is important; otherwise, a long time may pass until an issue is detected, if it is detected at all (e.g., it may just
cause higher cost without being recognized as a defect). Tools for defining and, to some extent, collecting such data exist (e.g., the ELK stack, Prometheus). The challenge is to make these data
accessible for analysis and interpretation.
ACKNOWLEDGMENT
We would like to thank the members of QAware GmbH and
Insiders Technologies who participated in the interviews. Parts
of this work have been supported by the German Ministry of
Education and Research (BMBF) under grant no. 01IS15008D
and 01IS15008A.
REFERENCES
[1] A. Hunt and D. Thomas, “Zero-tolerance construction,” IEEE Software,
vol. 19, pp. 100–102, 2002.
[2] I. Inayat, S. S. Salim, S. Marczak, M. Daneva, and S. Shamshirband,
“A systematic literature review on agile requirements engineering
practices and challenges,” Computers in Human Behavior, vol. 51, Part B, pp. 915–929, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S074756321400569X
[3] L. Guzman, M. Oriol, P. Rodriguez, X. Franch, A. Jedlitschka, and
M. Oivo, “How can quality-awareness support rapid software develop-
ment? A vision paper,” in Proceedings of the 23rd Working Conference on
Requirements Engineering Foundation for Software Quality, ser. REFSQ
2017. Berlin, Heidelberg: Springer-Verlag, 2017, pp. 1–7.
[4] S. Wagner, Software product quality control. Springer Verlag, 2013.
[5] A. Ampatzoglou, A. Ampatzoglou, A. Chatzigeorgiou, and P. Avgeriou,
“The financial aspect of managing technical debt: A systematic
literature review,” vol. 64, pp. 52–73, 2015. [Online]. Available:
http://linkinghub.elsevier.com/retrieve/pii/S0950584915000762
[6] N. S. Alves, T. S. Mendes, M. G. de Mendonça, R. O. Spínola, F. Shull,
and C. Seaman, “Identification and management of technical debt: A
systematic mapping study,” vol. 70, pp. 100–121, 2016. [Online]. Avail-
able: http://linkinghub.elsevier.com/retrieve/pii/S0950584915001743
[7] J. Weigend, J. Siedersleben, and J. Adersberger, “Dynamische Analyse
mit dem Software-EKG,” Informatik-Spektrum, vol. 34, no. 5, pp. 484–
495, Oct. 2011.
[8] J. Adersberger and J. Weigend, “IT-Sanierung. Prävention statt Herzstillstand: Wie sich kranke IT-Systeme kurieren lassen,” Objekt-Spektrum,
no. 2015-01, pp. 54–55, Jan. 2015.
[9] N. S. Alves, L. F. Ribeiro, V. Caires, T. S. Mendes,
and R. O. Spínola, “Towards an Ontology of Terms
on Technical Debt,” IEEE, pp. 1–7. [Online]. Available:
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6974882
[10] Z. Li, P. Avgeriou, and P. Liang, “A systematic mapping study on tech-
nical debt and its management,” vol. 101, pp. 193–220. [Online]. Avail-
able: http://linkinghub.elsevier.com/retrieve/pii/S0164121214002854
[11] M. Ciolkowski, S. Faber, and S. von Mammen, “3-D visualization of
dynamic runtime structures.” Gothenburg: ACM Press, Oct. 2017, pp.
189–198.
L. Guzman, M. Oriol, P. Rodriguez, X. Franch, A. Jedlitschka, and M. Oivo, "How can quality-awareness support rapid software development? a vision paper," in Proceedings of the 23rd Working Conference on Requirements Engineering Foundation for Software Quality, ser. REFSQ 2017. Berlin, Heidelberg: Springer-Verlag, 2017, pp. 1-7.