©IEEE. PREPRINT. This is the author’s version of the work. It is posted here by permission of IEEE for your personal use.
Not for redistribution. The definitive version was published in the conference/workshop proceedings.
Refer to the paper using: to be updated once doi available
Behavior-Driven Dynamics in Agile Development:
The Effect of Fast Feedback on Teams
Fabian Kortum
Software Engineering Group
Leibniz University Hannover
Hannover, Germany
fabian.kortum@inf.uni-hannover.de
Jil Klünder
Software Engineering Group
Leibniz University Hannover
Hannover, Germany
jil.kluender@inf.uni-hannover.de
Kurt Schneider
Software Engineering Group
Leibniz University Hannover
Hannover, Germany
kurt.schneider@inf.uni-hannover.de
Abstract—Agile software development teams strive for fast and continuous feedback. Both the quality of the resulting software and the performance of the team require feedback. Team performance is often addressed in retrospectives, which are part of the Scrum framework but are also used beyond it. Reflecting on incidents during the last sprint helps the team to improve its performance, expressed by, e.g., efficiency and productivity. However, it is not only essential to identify volatile sprint performances, but also to characterize their primary causes in order to solve them. The main reasons for low performance are often not visible, particularly when they are related to socially driven team behavior such as communication structures, mood, or satisfaction. In this paper, we analyze whether automated team feedback about retrospective sprint behavior can help a team to increase its performance through additional awareness of the dynamic effects over time. In a comparative case study with 15 software projects and a total of 130 undergraduate students, we investigated the sustainable impact of feedback on human aspects. Our results indicate that automated feedback positively affects team performance – and customer satisfaction.
Index Terms—team dynamics, agile, retrospectives, pro-active
feedback, information transparency
I. INTRODUCTION
The performance of agile software development teams mainly rests on effective processes, a culture of team balance and trust, and constructive feedback on positive or negative progress occurrences [1], [2]. Productivity measures such as the velocity, i.e., the number of finished story points per hour, or the number of reported bugs are of particular importance when analyzing the development performance. Team-driven behavior is also of particular importance but can hardly be measured [3]. In general, the performance of agile development teams is usually reflected on during retrospectives at the end of each sprint. According to the results of a large-scale international survey in industry, three out of four projects use retrospectives (n = 769) [4]. Retrospectives help the team to talk about and to report challenges or incidents faced by individuals during the last sprint. The primary goal of retrospectives is to identify the root of a problem, for instance, to reveal incorrectly estimated sprints with respect to, e.g., targeted workloads or the complexity of issues. Based on these insights, the developer tasks can be improved concerning their effectiveness, so that every developer can perform as well as possible.
However, it is not only essential to identify weak performances, but also to characterize the primary cause in order to solve the underlying problems. In many of today's agile projects, potential blind spots regarding the behavior-driven dynamics in teams remain uncovered or are not sufficiently questioned. Alternating performances cannot always be explained by incorrectly estimated sprints. Changing performances can also be traced back to unknown or unmonitored influences, as is often the case when they are coherent with human factors. Some of these are central social factors in software development, commonly involving communication structures, meeting behavior, or the overall motivation and team behavior. For example, the productivity of a group can often be ensured through an effective communication structure. Meanwhile, goal-oriented meetings with clear resolutions reduce the impression of wasting time, thus increasing the motivation to participate actively [5]. Both improvements can have positive long-term effects on the overall satisfaction in the team, while some other factors might hold only until the next sprint [6].
In this paper, we introduce an approach that provides pro-active feedback about a team's behavioral dynamics during sprints. We strive towards evaluating the development and organizational team performances while analyzing potential dependencies with a focus on productivity and socially driven team factors. While the productivity metrics are easy to trace, e.g., by using issue and progress tracking software such as JIRA [7], the human factors are often difficult to analyze [8]. In an iterative process, we developed a self-assessment system with pro-active feedback support, which is realized as a JIRA plugin. The team-specific feedback covers visualization graphs about dependencies as well as textual implications according to the team characteristics and performances during the last sprints.
The research goal of this study is to investigate the sustainability of positive or negative effects on sprint performances caused by additionally provided feedback. To reach this goal, we conducted a comparative case study with 130 undergraduate students working on 15 software projects. Seven out of the 15 project teams volunteered to use the additional feedback solution in JIRA. The other project groups could only access the regular progress tracking functionalities available in JIRA. Our findings indicate that the supplementary feedback strategy can help a team to increase its overall performance due to additional awareness about the main reasons and factorial dependencies. Nevertheless, without measuring the human factors, there can be no improvement. It is therefore crucial that teams also bring the necessary motivation for giving and receiving the feedback that enables improvement.
The paper is structured as follows: In section II, we give a brief outline of previous results and related work with strong methodical relevance for this case study. Section III covers the methodology and highlights, besides the context of this study, the structure, data elicitation, and visual concepts of our ProDynamics JIRA plugin. We interpret the findings in section IV, followed by a discussion of the threats to validity in section V. Finally, we conclude our current work and provide a brief outlook on future work in section VI.
II. RELATED WORK
This study is related to work on human factors in software engineering in the agile context. In recent years, there has been an increasing interest in the effects of social aspects in software projects [9]. Study reports such as the one published by Cockburn and Highsmith [1] point out the central relevance of team communication, self-organization, and well-working relationships in agile teams. Besides, they highlight that people have different personalities, skills, and ambitions within a team, while there are many interdependencies between these factors [10].
In previous work, Kortum et al. [11] explored and reported on the interdependencies caused by those human factors in software projects. They also show how exploratory analyses can improve interpretations [11]. Other studies revealed relationships between, for instance, social conflicts, communication, and meeting behavior [12]–[14], as well as the impact of interactions during meetings on positive mood afterward [5]. While we conducted several studies in industry [3] and student software projects [5], [12], [14] in the past, the essential problem always lay in the effort required from participants to give and receive feedback during an ongoing project. Voluntary participants are likely to provide information. However, it would be highly desirable to enable data elicitation with a reduced effort for the respondents.
Indeed, feedback is essential because it can trigger meaningful changes within projects [15]. However, there are only a few publications that target, in particular, the effect of team feedback involving human behavior aspects. Moreover, most of these interpretations solely refer to self-management and productivity measures in combination with the performance outcome of a team, as in the case study of Moe et al. [16]. In response to this situation, we especially want to enlarge the focus on the effects of active team feedback with direct implications for human factors such as communication structures, meeting manners, and group atmosphere, while also taking the results of previous work about performances into account.
Many studies provide supportive strategies and concepts on how to give and receive feedback in the right way. Often, the reported theory lacks practicability and thus can only be insufficiently applied, especially in the case of human factors. Self-reports, as introduced by Ross [17], contribute to higher achievements and improved behavior of teams and thus form the fundamental basis of this study. In cooperation with psychology researchers from Technical University Braunschweig, we improved a self-assessment survey covering a minimized subset of relevant metrics on team behavior while the data still provides a maximum of information. Furthermore, our approach lowered the hurdle of giving feedback, which is highly essential to retain the motivation for giving and receiving feedback.
A central part of this study involves data visualization techniques that are later used to characterize structures and dependencies and to derive implications about the sprint behavior of a team. Lehtonen et al. [18] describe that the key to improvements in the agile context comes down to information visualization. A team can increase its awareness of the particular conditions causing problems when advanced cognitive perception is enabled beyond solely textual feedback.
Jermakovics et al. [19] show the benefits of analyzing and visualizing developer networks. The authors characterize the interactions of different team members based on data from repositories to identify and highlight the workloads and internal development structures. The authors' strategy of visualizing the team organization is a central focus of this study. However, in addition to purely textual feedback in JIRA, the teams in our study can access a visualization of the relationships to increase cognitive understanding.
III. METHODOLOGY
In this paper, we successively investigate behavior-driven dynamics in agile projects and the effects of fast feedback on team performances. In the following, we present the context of the comparative student software projects, give an overview of the ProDynamics plugin, and describe how the behavior-driven factors were elicited and analyzed and how implications were derived for the teams.
A. Context of Student Software Projects
At Leibniz University Hannover, we conducted a study with 130 undergraduate students working in 15 software projects. The customers came from industry, government, or public institutions. Each project had a duration of 15 weeks divided into four sprints. Each of the projects comprises functional requirements with an overall complexity of approximately 2,000 working hours. The software projects are ordinarily intended for undergraduate students in the fifth semester, mainly to gain experience and practical knowledge in applying software engineering methods and practices. Students do not receive any grades for their performance or participation in the project. Nonetheless, the course is compulsory within the syllabus. Teams were formed equally based on each student's self-assessed top three favored projects, his or her development abilities according to project-specific requirements, and experience in leadership. The size of the teams ranged from eight to nine students per project. None of the students participated in more than one project. For each project, two teams developed the software independently of each other (except for one project).
Fig. 1. Behavior-Driven Feedback Support in JIRA - ProDynamics
All project teams used Scrum as the development framework in combination with Planning Poker to estimate the complexity of the next sprint's issues [20]. The story points were estimated according to the Fibonacci number sequence. A team-elected Scrum master monitored the progress, the productivity, and the assignment of tasks. Teams could either work locally in provided rooms or decentrally at places of their choice. All teams had to meet at least weekly in the university environment. Accordingly, daily status meetings were replaced by weekly ones. Further meetings were self-managed by the groups according to their needs. Sessions with the customer took place at least weekly to report and communicate the sprint progress and to discuss changes.
For better coordination of development tasks and issues, the teams used the commonly applied Atlassian JIRA and GitLab systems to support the management of agile projects. Besides this, JIRA helps to organize, trace, and update the team's daily activities during the sprints. Code commits can also be linked with JIRA to change the progress status of issues automatically. Furthermore, JIRA natively allows the teams to generate, for example, burn-down charts or sprint reports to gain an overview of the ongoing productivity. All teams followed an equivalent project schedule involving four sprint phases. The timeline of all projects is shown in Fig. 2.
Fig. 2. Timeline of the projects with sprints, goals and self-assessments
The exploration sprint allows the team to intensively elicit and understand the customer demands for the targeted product. First drafts, mock-ups, or even prototypes are usually realized during this first sprint. However, it is crucial that at the end of the sprint, the backlog is primarily filled with story tasks to be developed during the next sprint. Changes and modifications can happen but should not dominate the initial customer demands for the software product.
The next sprints address primary goals, e.g., a basic software construct, followed by advanced features. The last sprint shall provide the customer with a complete product release to be used for the targeted purpose. During the project, we elicited team and customer reflections at different degrees and frequencies, as highlighted in Fig. 2.
B. JIRA Plugin – ProDynamics
In the following, we will describe how we extracted and
analyzed the team-driven data, and how we provided pro-active
feedback for teams.
The standard features of JIRA provide solid help for teams to track, report, and react to dynamic productivity changes in sprints [7]. Information gained during a sprint can subsequently be used retrospectively, together with previously experienced situations, to estimate future sprints. However, reasons for significant performance changes are not always easy to explain, in particular as the JIRA system only traces productivity measures without further information. To address this problem and to enable teams to access fast feedback, we developed a JIRA plugin called ProDynamics. The plugin periodically invites teams to self-assessments on team and sprint behavior during sprints and derives visualized retrospectives and implications based on dynamic changes over time. The ProDynamics plugin enables us to systematically elicit team behavior information, to analyze and visualize it, and to derive team characteristics automatically. At the same time, the effort for the team as well as for the analysts decreases compared with conventional concepts. The plugin was developed for the latest version of JIRA Server 7.9.0 and can be used by everyone with a full or trial license. An overview of the current ProDynamics features for retrospective sprint analyses is shown in Fig. 1. The functionality and usage of each sprint retrospective feature are described in detail in the following subsections.
C. Effect of Feedback on Behavior-Driven Factors
Our study was conducted with a project duration of approximately 15 weeks. During the observation, 15 student software project teams participated in the study, which aimed at studying the effects of behavior-driven factors on the sprint performances. Seven out of the 15 project teams signaled interest in the ProDynamics feedback functionalities. The other eight teams participated as control groups by solely using the standard version of JIRA. This way, we achieved an almost equally divided set of projects with direct comparability between those teams that could actively access the plugin and those that could not. The study characterizes whether there is a positive, a negative, or no effect for the projects with additionally provided behavior-based feedback compared with the groups that kept using the standard productivity measures for sprints.
The effect of the feedback support provided in ProDynamics is measured through statistical interpretation of positive or negative overall performance changes. This way, we differentiate whether an interdependency is weak, moderate, or strong. Nevertheless, only significant findings are considered. Team performance is resolved from each project's customer and Scrum master satisfaction with the organization (concerning, e.g., communication, structure, reporting, and information exchange), as well as the achieved development performance during each sprint (e.g., quality, progress, and sustainability). In the end, we can compare each team's performance changes over time, addressed by the above factors.
We elicited the satisfaction of the customer and the Scrum master three times during the active project (see Fig. 2). The data collection was performed as a self-assessment at the end of each sprint, starting from the active development phase of sprint 2. Additionally, but only once at the project's end, each of the 130 students from both the volunteering and the regular groups provided information on the overall satisfaction with their own team's performance and the final software product. These additional questions were collected with Likert scales and represent each developer's personal reflection on whether feedback was perceived to be useful for the project success of the group. In case of access to the additional feedback provided by ProDynamics, the developer also reported on his or her perceived effects on the development and team performances.
D. Elicitation of Human Factor Information
Behavior-driven team metrics can represent subjective or objective information about human factors, elicited from the center of focus, namely the developers. As stated by several studies [1], [21], agile projects can fail or succeed based on the communication and organization structure. For example, bad habits or dysfunctional communication behavior can affect the team's overall atmosphere or the motivation of single developers [5], [13]. Consequently, inappropriate behavior is a potential risk for the quality and productivity in software development teams [12]. It is well known that the customer's satisfaction with the team and the software product determines whether a project is successful or not [22]. In recent years, we developed a self-assessment questionnaire for agile teams covering the previously mentioned behavioral metrics with relevance for project success [1], [12]. As one result from previously conducted studies [12], we minimized the response effort for each developer. The behavioral factors are listed in Fig. 3.
Fig. 3. Overview of Considered Behavior-Driven Factors
The factors considered in this study fall into four categories involving the team's communication structures, emotions, meetings, and the perceived productivity during sprints [8]. Besides, all team factors are based on psychological measures with a focus on team organization and socially driven behavior [12], as well as on empirical studies in the agile development context [1], [23]. In particular, we consider communication through different media channels, as not all of them are equally appropriate for transporting different kinds of information [24]. Besides the perceived communication intensity, we can use an established measure for the distance between two persons based on their communication behavior, the so-called FLOW distance [12], [14]. Since meetings play a significant role in software projects [5], [25], we also considered the quantity, duration, and participation of team members for each meeting. The team members' mood is elicited in terms of positive and negative affect, as well as motivation and perceived pressure [26]. The productivity and workload result from sprint-wise estimated versus completed issues [3].
Fig. 4. Excerpt of Self-Assessment Survey integrated to JIRA
With the help of these behavioral factors, sprint tendencies can be calculated, as, e.g., balanced meeting behavior and organizational structures are essential for success [13]. For instance, early recognition of decreased meeting participation combined with an increase of maverick appearances can indicate a dysfunctional team structure, often ending in follow-up conflicts or in a loss of performance [13], [14]. The other elements rest on similar foundations; all factors show strong dependencies with the success of software projects [11]. We realized an integrated self-assessment solution in JIRA to access behavior information during sprints. A question excerpt on face-to-face communication is shown in Fig. 4.
The self-assessment solution for JIRA is based on a self-adjusting question set that reacts to the participant's agreement with the given answer options (Likert scale). Standard surveys often involve a static number of questions, regardless of their relevance for the participant. We address this problem by providing a dynamically changing number of items according to the information given by the participant. This way, the likelihood of giving feedback increases, since the responding person perceives a higher relevance when only being asked appropriate questions. For example, the self-assessment excerpt in Fig. 4 shows follow-up questions only for face-to-face communication with specific members, triggered by previous responses on internal team interactions. For instance, the duration of an interaction is only asked about when communication actually took place. The team members of the seven volunteering teams received such an automatically generated survey invitation in JIRA every week to report on their contact, meeting, and emotion-driven factors during the past week. For a single developer in a team of nine members, completing the self-assessment takes one to two minutes on average. Responses are incorporated into the weekly updated retrospective feedback of the system.
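To illustrate the branching behavior of such an adaptive question set, the following minimal Python sketch shows one possible implementation. The question texts, keys, and the branching predicates are illustrative assumptions and not taken from the ProDynamics implementation.

```python
# Minimal sketch of an adaptive (self-adjusting) survey flow.
# Question texts, keys, and branching rules are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Question:
    key: str
    text: str
    # A follow-up question is only shown if this predicate on the
    # answers collected so far evaluates to True.
    applies: Callable[[Dict[str, int]], bool] = lambda answers: True


QUESTIONS: List[Question] = [
    Question("f2f_contact", "Did you talk face to face with team members this week? (1=no ... 5=daily)"),
    Question("f2f_duration", "Roughly how long were these conversations? (1=minutes ... 5=hours)",
             applies=lambda a: a.get("f2f_contact", 1) > 1),
    Question("meeting_count", "How many team meetings did you attend this week? (1=none ... 5=five or more)"),
    Question("meeting_useful", "How useful were these meetings? (1=not at all ... 5=very)",
             applies=lambda a: a.get("meeting_count", 1) > 1),
]


def run_survey(respond: Callable[[Question], int]) -> Dict[str, int]:
    """Ask only the questions whose predicates hold for the answers so far."""
    answers: Dict[str, int] = {}
    for q in QUESTIONS:
        if q.applies(answers):
            answers[q.key] = respond(q)
    return answers


if __name__ == "__main__":
    # Simulated respondent: no face-to-face contact, but three meetings.
    canned = {"f2f_contact": 1, "meeting_count": 3, "meeting_useful": 4}
    result = run_survey(lambda q: canned.get(q.key, 3))
    print(result)  # "f2f_duration" is skipped because no contact was reported
```

The sketch keeps the survey short for respondents who report little activity, which mirrors the effort-reduction goal described above.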
Besides the effort reduction due to the adaptive question set, the second vital incentive to fill out the questionnaire is the survey platform. Studies like the one reported by Lee and Xia [27] describe the integrated use of surveys or information elicitation within software solutions that teams already use for their daily work. The present study follows this strategy by providing various functions and supportive add-ons in JIRA to better estimate and monitor sprint performance. In line with our approach of using team information from and in JIRA, we embedded our self-assessment survey on behavioral factors within ProDynamics. The integration allows us to elicit team characteristics, to perform analyses, to derive implications about behavioral interdependencies, and to grant the teams pro-active feedback about their ongoing performance using a single system. Whenever a new survey is active, each group receives a JIRA-internal pop-up notification; the project administrator can define the frequency.
E. Visualizing Team-Feedback on Behavioral Aspects
Our visualization concept strongly relies on related work and previous studies with a focus on data and information representation concepts [18], [19]. ProDynamics enables teams to access four different types of retrospective feedback, covering communication graphs, productivity-comparison charts, traces of positive and negative sprint moments, as well as an issue-chronology player for reviewing and tagging potential problems or conflicts. The communication graph grants each team insights into its communication structures and their changes, secondarily supported by textual implications. Meanwhile, each team member has the opportunity to explore his or her behavior over time with individually addressed tendencies. An exemplary team structure graph is shown in Fig. 5.
Fig. 5. Interactive Graph for the Communication and Structures in Teams
The size of a node indicates the centrality of a member, derived from his or her communication and meeting behavior. The overall connectivity and information exchange intensity in a team raise the awareness, e.g., of dysfunctional structures such as mavericks or separated subgroups. Furthermore, groups and single developers can gain insights into activities and organizational changes over time. The information is expressed through trend implications relative to the previous sprint and to the average trend of all project weeks completed so far.
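As an illustration of how such a graph can be derived, the following Python sketch builds a weighted graph from reported interactions and uses the weighted degree as a simple centrality proxy for the node size. The interaction data and the choice of weighted degree are illustrative assumptions, not the plugin's actual algorithm.

```python
# Sketch: derive node sizes for a team communication graph from
# self-reported interaction intensities (invented sample data).

import networkx as nx

# (member_a, member_b, reported communication intensity for the week)
interactions = [
    ("Alice", "Bob", 5),
    ("Alice", "Carol", 2),
    ("Bob", "Carol", 1),
    ("Dave", "Alice", 1),   # Dave only talks to Alice -> potential maverick
]

G = nx.Graph()
for a, b, weight in interactions:
    # Accumulate weights if the pair was reported more than once.
    if G.has_edge(a, b):
        G[a][b]["weight"] += weight
    else:
        G.add_edge(a, b, weight=weight)

# Weighted degree as a simple centrality proxy, scaled to a node size.
centrality = dict(G.degree(weight="weight"))
max_c = max(centrality.values())
node_size = {m: 300 + 700 * c / max_c for m, c in centrality.items()}

for member, size in sorted(node_size.items(), key=lambda x: -x[1]):
    print(f"{member:6s} centrality={centrality[member]:2d} node_size={size:.0f}")
```

Low weighted degree (here: Dave) would translate into a small, peripheral node and thus make maverick-like structures visible at a glance.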
Fig. 6. Retrospective Player with Annotation and Problem Tagging Support
The second available feedback option for teams is realized through a retrospective player. The viewer adopts, for instance, the applied Scrum board with the workflow design (task states) predefined by the team for their development process [7]. The retrospective progress review enables the teams to chronologically trace issue states and, therefore, progress changes over time. The retrospective player with a standard workflow visualization of the issue states "To Do", "In Progress", and "Done" is shown in Fig. 6. The player grants teams a summarizing review to discuss issues and to annotate, e.g., problems, conflicts, estimation gaps, or beneficial performance situations during a sprint. The player view is completely interactive. Besides, issues can be selected to see further information or to add annotations of interest.
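A minimal sketch of the underlying replay idea is given below: given a chronological list of issue status transitions, the board state at any point in time can be reconstructed and stepped through. The transition data is hypothetical; the actual plugin obtains this information from the JIRA issue history.

```python
# Sketch: replay issue status transitions chronologically to reconstruct
# the Scrum board at a given point in time (invented sample data).

from datetime import datetime
from typing import Dict, List, Tuple

# (timestamp, issue key, new status)
transitions: List[Tuple[datetime, str, str]] = [
    (datetime(2018, 11, 5, 9), "PRJ-1", "To Do"),
    (datetime(2018, 11, 5, 9), "PRJ-2", "To Do"),
    (datetime(2018, 11, 6, 14), "PRJ-1", "In Progress"),
    (datetime(2018, 11, 8, 11), "PRJ-1", "Done"),
    (datetime(2018, 11, 9, 10), "PRJ-2", "In Progress"),
]


def board_at(time: datetime) -> Dict[str, str]:
    """Return the latest known status of each issue at the given time."""
    state: Dict[str, str] = {}
    for ts, issue, status in sorted(transitions):
        if ts <= time:
            state[issue] = status
    return state


if __name__ == "__main__":
    for day in (7, 9):
        snapshot = board_at(datetime(2018, 11, day, 23, 59))
        print(f"Board on Nov {day}: {snapshot}")
```

Stepping the reference time forward produces the animation-like review described above, to which annotations can then be attached.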
The common way of monitoring the sprint development progress in JIRA is to use burn-down or velocity charts [7]. However, the information covered in both views involves quantitative numbers only, i.e., estimated versus so far completed issues. The productivity comparison viewer of ProDynamics for JIRA is shown in Fig. 7.
Fig. 7. Radial-Chart for Productivity and Workload Balance Comparison
The conventional view, therefore, gives a good overview of the current productivity in a sprint. However, workload balances, productivity comparisons within teams, and sprint-crossing trends are insufficiently covered or not provided at all by the native JIRA system. Consequently, we realized another sprint retrospective support focused on team-driven productivity measures. The idea of this chart is to extend and summarize productivity measures for each developer individually, as well as across members and sprints. All productivity information used for this chart is natively available in JIRA (single issue states and assignees), but not trivial to access as a summarized outcome. The data preprocessing and the radial chart representation allow a straightforward comparison of team members throughout all sprints. The productivity area in the radar chart also enables implications about the workload balance in the group, which should be equally distributed. Besides, the overview can characterize the team's level of reaching a steady development state, since it involves team-based comparability between the current sprint performance and the average of all sprints completed so far. The upper radial chart in Fig. 7 shows the development performance as well as the workload balance of a team for a particular sprint selection. In the example, the productivity measures show a perfect balance between the estimated sprint workload and the finally completed one. Therefore, the team seems to fulfill the sprint conditions arranged with the customer in time.
However, this radial chart also reveals that the workload within the group is suboptimally structured. The workloads of Bob and Alice are by far higher than those of the other members. With a view on the lower radial chart, the workload balance also reflects the fact that the team faced an extraordinarily high number of issues. The chart reveals that the productivity during the sprint Iteration II almost doubled compared with the previous velocities. The sprint goal was to provide a pre-release version of the software product. Besides, the productivity indicates that the team tried to keep up with its overall project schedule. The additionally provided productivity visualization covers simple measures, while especially the resulting workload balance offers an essential insight into the team structure.
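The workload balance behind this chart can be approximated with a few lines of code. The sketch below sums estimated and completed story points per member for one sprint and compares each member's share against an equal distribution; the figures for Alice, Bob, and their colleagues are invented for illustration and are not the study's data.

```python
# Sketch: per-member productivity and workload balance for one sprint.
# Story point figures are invented; ProDynamics aggregates such values
# from issue estimates and assignees stored in JIRA.

sprint = {
    # member: (estimated story points, completed story points)
    "Alice": (21, 20),
    "Bob":   (19, 19),
    "Carol": (8, 7),
    "Dave":  (6, 6),
}

total_est = sum(e for e, _ in sprint.values())
total_done = sum(d for _, d in sprint.values())
fair_share = total_est / len(sprint)  # equal distribution as reference

print(f"Sprint completion: {total_done}/{total_est} story points")
for member, (estimated, done) in sprint.items():
    imbalance = estimated / fair_share  # 1.0 means a perfectly fair share
    print(f"{member:6s} est={estimated:2d} done={done:2d} "
          f"workload={imbalance:.2f}x of an equal share")
```

In this invented example, the sprint as a whole is almost fully completed, yet Alice and Bob carry well above an equal share of the workload, which is exactly the kind of imbalance the radial chart is meant to expose.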
The timeline chart in Fig. 8 summarizes the weekly distribution of emotions and atmosphere within the team. The data is elicited during the weekly self-assessments by the developers. For example, the red-colored curve for the negative affective state has an upper boundary resulting from the 3rd quartile. The 2nd quartile, also known as the median, is the sharp line, and the 1st quartile presents the lower boundary. The team's emotional affect is described, e.g., with the PANAS scale [26]. The atmosphere in teams is not explicitly assessed and is primarily derived from the emotional trend. For example, the overall balance of both affects results from the weekly mood distributions, where a level of emotion = 3.0 would mean an ideal outcome. A strong contrary deviation of both affects, e.g., during the seventh week in Fig. 8, indicates an extraordinary situation. The team showed a nervous and fluctuating atmosphere right after the end of the second sprint.
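The weekly quartile bands of Fig. 8 can be reproduced conceptually as follows. The sketch assumes that each developer reports positive and negative affect on a 1–5 scale; the sample ratings are invented. For every week, the 1st quartile, the median, and the 3rd quartile of the reported scores form the lower boundary, the sharp line, and the upper boundary of the band.

```python
# Sketch: weekly quartile bands for team affect, as visualized in the
# timeline chart. The ratings are invented 1-5 PANAS-style team scores.

import numpy as np

# One list of individual ratings per week (e.g., negative affect).
weekly_negative_affect = [
    [1, 2, 1, 2, 1, 2, 2, 1, 1],   # week 1
    [2, 2, 3, 2, 1, 2, 3, 2, 2],   # week 2
    [3, 4, 3, 2, 4, 3, 3, 4, 3],   # week 3 (e.g., after a missed sprint goal)
]

for week, ratings in enumerate(weekly_negative_affect, start=1):
    q1, median, q3 = np.percentile(ratings, [25, 50, 75])
    print(f"week {week}: lower={q1:.1f} median={median:.1f} upper={q3:.1f}")
```

Because only these aggregated quartiles are plotted, individual ratings remain hidden, which matches the privacy consideration described below.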
Fig. 8. Retrospective Timeline-Chart for Mood-Affection and Satisfaction Highlighting
The productivity measures subsequently revealed that the team did not fulfill the targeted sprint goals. Therefore, the overall atmosphere turned downwards because the software version had not yet reached its expected state. Generally, the overall affective state and atmosphere in this example show a slight downward trend over the project lifetime.
The retrospective view also provides an interactive tagging feature to highlight special events or occurrences, e.g., the end of a sprint. When the customer or Scrum master completes the sprint-wise self-assessment on his or her satisfaction with the team performance and the software product outcome, the responses are allocated to an End of Sprint tag. The aggregated overall satisfaction scores of both customer and Scrum master are subsequently visualized by two satisfaction barometers. As the gauges in Fig. 8 reveal, it is possible that the customer has a more positive perception of the ongoing project, whereas the Scrum master, with his or her inside knowledge about the team, tends to be more critical. Out of respect for the privacy of each developer's emotions and thoughts, positive and negative affect are derived as an aggregated expression for the entire team. Therefore, it is impossible to draw conclusions about single members.
IV. INTERPRETATION OF PERFORMANCE RESULTS
In this section, we report on the statistical explanation for the varying performance of teams with and without access to our ProDynamics feedback solution. The following subsection describes the involvement of customer satisfaction as one reference measurement for the success of projects [21].
A. Applied Measures for Team and Development Performance
Communication and progress reports between agile teams and stakeholders have substantial relevance for keeping track of whether the sprint goals will be or have been reached. Through continuous information and status exchanges, the project progress and the customer's satisfaction allow tendencies about the success of the project to be determined.
For determining the potential performance changes during the projects as an effect of the pro-active feedback solution in ProDynamics, we used self-assessments on satisfaction reflections from the customer and the Scrum master at the end of each sprint. The self-assessment questionnaire for both interviewees elicits satisfaction perceptions on a 5-point Likert scale concerning the Team Performance and the Development Performance. The question set included a total of ten questions. Five questions were about representative organization measures (e.g., communication, structure, status reporting, information exchange, and the overall impression of the team behavior), while another five questions covered the satisfaction with the development progress and the sprint-wise product achievements. In particular, the customer and the Scrum master reflected on the product maturity, sustainability, usability, requirements fulfillment, and innovation. With this, the aggregated customer and Scrum master reflections were analyzed and compared with each team's performance changes over time. Sprint-wise differences in the Team Performance and the Development Performance were comparatively analyzed between teams with access to pro-active sprint feedback and those without. The detailed analyses and the interpretation of the results about the performance differences are described in the following subsection.
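To make the construction of the two scores concrete, the following sketch averages the five organization items into a Team Performance score and the five development items into a Development Performance score, and then averages the customer's and the Scrum master's ratings. The item keys and rating values are hypothetical placeholders; the exact aggregation is our assumption of a plausible implementation.

```python
# Sketch: aggregate the ten 5-point Likert items into the two sprint-wise
# performance scores. Item keys and values are hypothetical placeholders.

from statistics import mean

TEAM_ITEMS = ["communication", "structure", "reporting",
              "information_exchange", "overall_impression"]
DEV_ITEMS = ["maturity", "sustainability", "usability",
             "requirements", "innovation"]


def scores(responses):
    """Return (team performance, development performance) for one respondent."""
    team = mean(responses[item] for item in TEAM_ITEMS)
    dev = mean(responses[item] for item in DEV_ITEMS)
    return team, dev


customer = {**{i: 4 for i in TEAM_ITEMS}, **{i: 4 for i in DEV_ITEMS}}
scrum_master = {**{i: 3 for i in TEAM_ITEMS}, **{i: 4 for i in DEV_ITEMS}}

team_perf = mean([scores(customer)[0], scores(scrum_master)[0]])
dev_perf = mean([scores(customer)[1], scores(scrum_master)[1]])
print(f"Team Performance: {team_perf:.1f}, Development Performance: {dev_perf:.1f}")
```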
B. Analyzing the Sprint Performance Changes
As mentioned in the previous subsection, the sprint-wise assessed customer and Scrum master satisfaction reflections have substantial relevance for the performance rating of the teams. Both respondents provide a superordinate scoring, indicating how well a sprint was performed in terms of team and development performance. We applied Pearson correlation analyses together with paired two-sample t-tests to determine the sprint-wise performance gaps between the seven teams with access to the ProDynamics sprint feedback and the other eight teams without.
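A sketch of this analysis step is shown below. It assumes one aggregated score per team and sprint (all numbers are invented); scipy's pearsonr measures the trend of a group's scores over the sprint index, and a paired t-test compares the two groups' per-sprint means. The pairing over sprint means is our reading of the setup and is meant to mirror, not reproduce, the analysis reported in Fig. 9 and Fig. 10.

```python
# Sketch of the statistical step: correlate performance scores with the
# sprint index (trend over time) and compare the two groups sprint-wise.
# All scores are invented 1-5 satisfaction values, not the study's data.

import numpy as np
from scipy import stats

sprints = np.array([1, 2, 3, 4])

# Mean Development Performance per sprint, one row per team (invented).
with_feedback = np.array([
    [3, 3, 4, 4],
    [3, 4, 4, 5],
    [2, 3, 3, 4],
])
without_feedback = np.array([
    [3, 3, 3, 3],
    [4, 3, 4, 3],
    [3, 3, 2, 3],
])

# Trend over time for the feedback group: pool all (sprint, score) pairs.
scores = with_feedback.flatten()
sprint_idx = np.tile(sprints, with_feedback.shape[0])
r, p = stats.pearsonr(sprint_idx, scores)
print(f"trend of Development Performance over sprints: r={r:.2f}, p={p:.3f}")

# Sprint-wise comparison of the two groups, pairing the per-sprint means
# (an assumption about the pairing, not the paper's actual script).
t, p_t = stats.ttest_rel(with_feedback.mean(axis=0), without_feedback.mean(axis=0))
print(f"paired t-test on sprint means: t={t:.2f}, p={p_t:.3f}")
```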
Fig. 9. Development and Team Performance Affection with ProDynamics
The cross-correlation matrices in Fig. 9 and Fig. 10 show the sprint-wise Team Performances and Development Performances for the groups that received pro-active feedback in ProDynamics and those that did not. In the lower-left half of each matrix, the numeric performance scores are plotted and averaged with the help of a trendline. The five points represent the aggregated satisfaction reflection of customer and Scrum master after each sprint: 1 = very disappointed, 2 = disappointed, 3 = moderate, 4 = satisfied, 5 = very satisfied. The diagonal cells in Fig. 9 and Fig. 10 cover the distribution of each performance reflection through bars and an interpolated trend. The upper-right half of the matrices presents the Pearson correlation measures. The red asterisks indicate the level of significance (p-value < 0.05). For the teams with pro-active feedback support, the Pearson correlation measures on performance differences over time in Fig. 9 are interpreted as follows:
• The Development Performance shows a weak (r = 0.13) but significant (p < 0.05) increase over time.
• The Team Performance shows no significant changes over time, neither positive nor negative.
• The Development Performance and the Team Performance show a moderate (r = 0.41) and significant (p < 0.05) correlation over time.
The last correlation, between the Development Performance and the Team Performance, presents a trivial outcome, since team communication and the inner structure are essential factors for managing, tracking, and completing issues [1], [10]. However, potential positive or negative changes over time in organizational behavior and structures, expressed through the Team Performance satisfaction, were not detected during the case study. On the other hand, the Development Performance slightly improved, meaning that the outcome of the software product caused higher customer satisfaction, e.g., by fulfilling targeted sprint estimations more accurately over time.
Subsequently, we analyzed the changes over time in the Development Performance and the Team Performance for those teams without access to the pro-active feedback solution in JIRA. The correlation measures in Fig. 10 are interpreted as follows:
Fig. 10. Development and Team Performance Affection without ProDynamics
• The Development Performance shows no significant changes over time, neither positive nor negative.
• The Team Performance shows no significant changes over time, neither positive nor negative.
• The Development Performance and the Team Performance show a moderate (r = 0.43) and significant (p < 0.05) correlation over time.
Similar to the correlation interpretation of the Development Performance and Team Performance changes for the teams with pro-active feedback, the groups without additional feedback access in JIRA also show a significant relationship between both performances. However, the statistical measures for the teams without pro-active feedback also show that the Development Performance and the Team Performance underwent no significant changes or adjustments during the projects. The trendlines in Fig. 10 show a primarily constant satisfaction scoring for both performances, without significant correlations.
Besides the purely statistical interpretation of the self-assessed sprint performances above, we applied a subsequent self-assessment to all 130 developers at the end of the projects. This last assessment covers individual perceptions of the experienced need for, and the aims of, pro-active team and development feedback during the software project. With this self-assessment, we wanted to reveal the developers' possibly diverging views on the need for feedback during a running software project. Also, in case feedback was given, we wanted to know whether the developers recognized a positive or negative effect on follow-up sprints. In particular, we wanted to compare the individual responses of teams that actively used the ProDynamics plugin with those of teams that could not access the additional sprint feedback. The rating was done using a 5-point Likert scale:
• 68% of the 130 developers stated that sprint feedback about the development performance is important or very important.
• 45% of the 130 developers stated that sprint feedback about the team performance is important or very important.
• 77% of the 56 developers using ProDynamics stated a (very) positive effect on follow-up development performances.
• 59% of the 56 developers using ProDynamics stated a (very) positive effect on the team performances.
The overall statistical interpretation of the ProDynamics feedback effects indicates that teams with access used the additional team information to manage their Development Performance slightly more efficiently, sprint by sprint. This behavior can also be found in team building processes, since most teams learn to grow as a group within the first weeks of a new project [28]. However, compared with the teams that had no access to additional feedback, the Development Performance was significantly better when the plugin was used. Especially during the last sprint, the teams with access to ProDynamics were able to fulfill their sprint estimations with lower gap rates (targeted vs. completed issues). The productivity measures (velocity history) and the Development Performance as perceived by the customer and the Scrum master match these results. The team performance, however, tends to be only slightly better according to the customer perceptions, without a significant difference between both subject groups.
V. THREATS TO VALIDITY
Our results must not be overgeneralized. According to
Wohlin et al. [29], studies are subject to threats limiting the
conclusion, construct, internal and external validity.
a) Construct validity: We interpreted behavior-driven effects on team performance solely through statistics. No analysis was performed regarding potential independent variables, such as previous experiences of the students or the similarity of projects. However, for reasons of comparability, the teams were primarily formed with regard to each student's self-rated development and project skills and experience. Thus, performance improvements of teams could also be caused by independent factors. Findings with strong significance but only weak correlation measures are taken into account as well. Implications provided in ProDynamics are systematically derived based on the team's actual or past performances and relevant factors identified within previous studies [5], [11].
b) Internal validity: The interpretation relies on the self-assessed team records. The assessments were a voluntary activity of a few teams, whereas the use of the JIRA standard version was mandatory for everyone. We assume that people freely replied the truth when submitting information, in expectation of accessing supportive team implications. There were no parallel courses or gradings for the software projects that could bias the assessed data. Some information, e.g., about emotion-driven factors, is restricted due to privacy limitations, so the responses are interpreted as aggregated team values. However, this does not apply to the team communication and structural data, since responses of single members describing a communication interaction between two people require both parties to hold back; in this case, the connectivity between both is not considered public.
c) External validity: The results must not be overgeneralized since we studied student teams [17]. Even though the software projects rely on projects from industry, government, and public institutions, the results cannot be completely transferred to industry. Data from practitioners or other university software projects may lead to different effects when repeating the approach. Due to human factors and other unknown influences, the investigated interpretation of behavior cannot completely cover all real-world aspects.
d) Conclusion validity: All interpretations are reliable and statistically valid. When repeating the projects with the same participants, there might nevertheless be different self-assessment responses due to behavior-driven factors, emotions, or unregistered situations. The methodology, e.g., the statistical analyses and the derivation of implications, can be generalized to several other kinds of investigations that provide empirical data.
VI. CONCLUSION AND FUTURE WORK
In this study, we introduced a comparative case study with 15 student software projects to find out whether behavior-driven dynamics in teams show positive or negative tendencies when teams are given additional pro-active feedback about their organization and development performances. Each team project was scheduled for 15 weeks and involved four sprints. To determine the effect of additional feedback, we realized a JIRA plugin named ProDynamics that simplifies the elicitation of human factors through a self-adapting questionnaire and grants feedback implications for each sprint. The weekly gathered information about communication, meeting, and emotional metrics from seven out of the 15 projects was analyzed and made accessible to the teams as visualized and textually implied team feedback to enable additional insights into performances and team structure. After completing a sprint, the customers and Scrum masters gave retrospective reflections about their perceived team and development performances.
With the help of Pearson correlation statistics, we interpreted the sprint-wise trend changes of the team performance and the development performance of each team. We found statistically significant evidence that the groups with access to the additionally provided ProDynamics feedback showed a definite increase in the development performance throughout all sprints, while the teams without feedback access tended to have more problems in estimating sprint workloads and keeping up with the targeted goals. We could show that 68% of the 130 developers believe that pro-active feedback about team characteristics plays an essential role in agile projects, while 59% of the 56 ProDynamics users noticed that the feedback on team and development performances had a positive effect on their follow-up sprint planning and thus on their development performance.
We can conclude that the ProDynamics plugin could simplify the data elicitation, while some teams experienced a positive effect for most of their sprints by gaining better knowledge on how to estimate and assign tasks more optimally. The team performance, however, in terms of the communication structure and meeting behavior, could not be shown to benefit or suffer significantly from the plugin. This may be due to already well-organized structures, or because teams set their focus more on the development performance, which is also indicated by the subsequent feedback results of each student at the end of the projects.
In the future, we aim to extend the currently purely retrospective team feedback with more analytical features. Involving additional predictive analytics support could enable a team to preview future trends of their weekly or sprint-wise behavioral dynamics and development performances. Especially time series or interdependence models would give a team deeper insights into behavioral patterns appearing over time, while the consideration of a simulation model would allow the groups to plan sprints exploratorily with adjustable scenarios. Generally speaking, the integrated feedback solution presents, in our opinion, a solid basis for providing pro-active feedback to a team based on information that already exists within JIRA or that is easily assessed from motivated teams aiming to keep track of and continuously improve their development activities.
ACKNOWLEDGMENT
This work was funded by the German Research Foundation (DFG) under the project name Team Dynamics (2018–2020), grant number 263807701.
REFERENCES
[1] A. Cockburn and J. Highsmith, "Agile software development, the people factor," Computer, vol. 34, no. 11, pp. 131–133, Nov. 2001.
[2] N. B. Moe and T. Dingsøyr, "Scrum and team effectiveness: Theory and practice," in International Conference on Agile Processes and Extreme Programming in Software Engineering. Springer, 2008, pp. 11–20.
[3] J. Klünder, F. Kortum, T. Ziehm, and K. Schneider, "Helping teams to help themselves: An industrial case study on interdependencies during sprints," in Human-Centered Software Engineering. Cham: Springer International Publishing, 2019, pp. 31–50.
[4] M. Kuhrmann, P. Tell, J. Klünder, R. Hebig, S. Licorish, and S. MacDonell, Eds., HELENA Stage 2 Results. ResearchGate, 2018.
[5] K. Schneider, J. Klünder, F. Kortum, L. Handke, J. Straube, and S. Kauffeld, "Positive affect through interactions in meetings: The role of proactive and supportive statements," Journal of Systems and Software, vol. 143, pp. 59–70, 2018.
[6] N. Abbas, A. M. Gravell, and G. B. Wills, "Using factor analysis to generate clusters of agile practices (a guide for agile process improvement)," in 2010 Agile Conference. IEEE, 2010, pp. 11–20.
[7] P. Li, Jira Essentials. Packt Publishing Ltd, 2015.
[8] F. Kortum, J. Klünder, and K. Schneider, "Don't underestimate the human factors! Exploring team communication effects," in International Conference on Product-Focused Software Process Improvement. Springer, 2017, pp. 457–469.
[9] O. Salo and P. Abrahamsson, "Empirical evaluation of agile software development: The controlled case study approach," in International Conference on Product Focused Software Process Improvement. Springer, 2004, pp. 408–423.
[10] A. Cockburn, Agile Software Development. Addison-Wesley, Boston, 2002, vol. 177.
[11] F. Kortum, J. Klünder, and K. Schneider, "Characterizing relationships for system dynamics models supported by exploratory data analysis," in 29th International Conference on Software Engineering and Knowledge Engineering. KSI Research Inc., 2017.
[12] K. Schneider, O. Liskin, H. Paulsen, and S. Kauffeld, "Media, mood, and meetings: Related to project success?" ACM Transactions on Computing Education (TOCE), vol. 15, no. 4, p. 21, 2015.
[13] S. Kauffeld and N. Lehmann-Willenbrock, "Meetings matter: Effects of team meetings on team and organizational success," Small Group Research, vol. 43, no. 2, pp. 130–158, 2012.
[14] J. Klünder, K. Schneider, F. Kortum, J. Straube, L. Handke, and S. Kauffeld, "Communication in teams – an expression of social conflicts," in Human-Centered and Error-Resilient Systems Development. Springer, 2016, pp. 111–129.
[15] L. A. Williams and A. Cockburn, "Agile software development: It's about feedback and change," Computer, vol. 36, pp. 39–43, 2003.
[16] N. B. Moe, T. Dingsøyr, and T. Dybå, "A teamwork model for understanding an agile team: A case study of a Scrum project," Information and Software Technology, vol. 52, no. 5, pp. 480–491, May 2010. [Online]. Available: https://doi.org/10.1016/j.infsof.2009.11.004
[17] J. A. Ross, "The reliability, validity, and utility of self-assessment," 2006.
[18] T. Lehtonen, V.-P. Eloranta, M. Leppanen, and E. Isohanni, "Visualizations as a basis for agile software process improvement," in 2013 20th Asia-Pacific Software Engineering Conference (APSEC), vol. 1. IEEE, 2013, pp. 495–502.
[19] A. Jermakovics, A. Sillitti, and G. Succi, "Mining and visualizing developer networks from version control systems," in Proceedings of the 4th International Workshop on Cooperative and Human Aspects of Software Engineering. ACM, 2011, pp. 24–31.
[20] V. Mahnič and T. Hovelja, "On using planning poker for estimating user stories," Journal of Systems and Software, vol. 85, no. 9, pp. 2086–2095, 2012.
[21] R. C. Martin, Agile Software Development: Principles, Patterns, and Practices. Prentice Hall, 2002.
[22] N. Agarwal and U. Rathod, "Defining 'success' for software projects: An exploratory revelation," International Journal of Project Management, vol. 24, no. 4, pp. 358–370, 2006.
[23] E. Whitworth and R. Biddle, "The social nature of agile teams," in Agile 2007 (AGILE 2007). IEEE, 2007, pp. 26–36.
[24] J. Klünder, O. Karras, N. Prenner, and K. Schneider, "Modeling and analyzing information flow in development teams as a pipe system," in Third International Workshop on Human Factors in Modeling (HuFaMo 2018). CEUR-WS, 2018, pp. 3–10.
[25] O. Liskin, K. Schneider, S. Kiesling, and S. Kauffeld, "Meeting intensity as an indicator for project pressure: Exploring meeting profiles," in 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE). IEEE, 2013, pp. 153–156.
[26] D. Watson, L. A. Clark, and A. Tellegen, "Development and validation of brief measures of positive and negative affect: The PANAS scales," Journal of Personality and Social Psychology, vol. 54, no. 6, p. 1063, 1988.
[27] G. Lee and W. Xia, "Toward agile: An integrated analysis of quantitative and qualitative field data on software development agility," MIS Quarterly, vol. 34, no. 1, pp. 87–114, 2010.
[28] B. W. Tuckman and M. A. C. Jensen, "Stages of small-group development revisited," Group & Organization Studies, vol. 2, no. 4, pp. 419–427, 1977.
[29] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén, Experimentation in Software Engineering. Springer Science & Business Media, 2012.
... General Human Factors: Projects and their progress are strongly affected by human and social factors [78]. Forecasts can support teams with information required to improve their performance in future iterations [79]. Especially, findings 2 and 3 highlight the influence of the used methods and practices on human perception. ...
Preprint
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to used development methods and practices. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
... General Human Factors: Projects and their progress are strongly affected by human and social factors [78]. Forecasts can support teams with information required to improve their performance in future iterations [79]. Especially, findings 2 and 3 highlight the influence of the used methods and practices on human perception. ...
Article
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile In this article, we investigate the question: what makes a software development method agile We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to used development methods and practices. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
Book
Full-text available
The complexity of software projects and inherent customer demands is becoming increasingly challenging for developers and managers. Human factors in the development process are growing in importance. Consequently, understanding team dynamics is a central aspect of steady development planning and execution. Despite the many available management systems and development tools that are being continuously improved to support teams and managers with practical process information, the equally crucial sociological aspects have typically been addressed insufficiently or not at all. In people-focused agile software processes, a first socio-technical understanding can also be promoted by sharing positive and negative development experiences during specific team meetings (e.g., sprint Retrospectives). Nevertheless, there is still a lack of systematically recorded and processed socio-technical information in software projects, making it difficult for subsequent reviews by teams and managers to characterize and understand the sometimes volatile and complex team dynamics during the process. This thesis strives to support teams and managers in understanding and improving awareness of the team dynamics that occur in their agile software projects by introducing computer-aided sprint feedback. The concept builds on four information assets: (1) socio-technical data monitoring, (2) descriptive sprint feedback, (3) predictive sprint feedback, and (4) exploratory sprint planning. These assets unify interdisciplinary fundamentals, practical methods from software engineering, data science, organizational and social psychology. Using a design science research process for information systems, observations in several conducted studies (32 in academic project environments and three in industry) resulted in the foundations and methods for a practical feedback concept on the socio-technical aspects in sprint, prototypically realized for Jira. A practical evaluation involved two industry projects in an action research methodology that helped improve the concept’s usability and utility through practitioner reflections. The collaboration between industry and research resolved practical issues that did not arise during the design science process. Several beneficial outcomes based on the provided sprint feedback are reported and described in this study (e.g., the effect of team structures on development performance). Moreover, the reflections underscored the practical relevance of systematic feedback and the need to better understand human factors in the software development process.
Article
Background Metrics teams play an increasingly important role in handling data and information in modern software development organizations; they manage their companies’ measurement programs, collect and process data, and develop and distribute information products. Metrics teams can comprise several roles, and their set-up can differ between companies, as can the metrics maturity of host organizations. These differences impact the effectiveness and quality of a team’s measurement program. Objective Our objective was to design and evaluate a model to describe the characteristics of a mature metrics team, which efficiently designs, develops, maintains, and evolves its organization’s measurement program. Method We conducted an action research study on four metrics teams of four distinct companies. We designed and evaluated a domain-specific model for assessing the maturity of metrics teams – MeTeaM – and also assessed the four metrics teams per se. Results Our results were two-fold: the creation of the metrics team maturity model MeTeaM and a template to assess metrics teams. Our evaluation showed that the model captures the characteristics of successful metrics teams and quantifies the maturity status of both the metrics teams and their host organizations. Conclusions More mature metrics teams score higher in the MeTeaM model than less mature teams. The assessment provides less mature metrics teams with valuable insights on what factors to improve. Such insights can be shared with and acted upon successfully with their organizations.
Chapter
Agile Manifesto refers to Leadership and Team characteristics in three out of the twelve principles of Agile software. Clearly, the ‘people' in the organisation play a critical role in the success of Agile software development. Many studies have been done on the various aspects of the Leadership and Team characteristics in respect of the role, the behaviour, the process and the structure. While most of these studies are related to the role and behaviour aspects of the leadership and the teams, this chapter attempts to supplement them with those aspects that are related to structure, internal and external interfaces and interactions, to ensure that the Agile engagements are planned and executed successfully and consistently. The emphasis in this chapter is the Organisation Culture that must be conducive for sure success of Agile projects and Scrum teams.
Article
Context Over the last decade, Agile methods have changed the software development process in an unparalleled way and with the increasing popularity of Big Data, optimizing development cycles through data analytics is becoming a commodity. Objective Although a myriad of research exists on software analytics as well as on Agile software development (ASD) practice on itself, there exists no systematic overview of the research done on ASD from a data analytics perspective. Therefore, the objective of this work is to make progress by linking ASD with Big Data analytics (BDA). Method As the primary method to find relevant literature on the topic, we performed manual search and snowballing on papers published between 2011 and 2019. Results In total, 88 primary studies were selected and analyzed. Our results show that BDA is employed throughout the whole ASD lifecycle. The results reveal that data-driven software development is focused on the following areas: code repository analytics, defects/bug fixing, testing, project management analytics, and application usage analytics. Conclusions As BDA and ASD are fast-developing areas, improving the productivity of software development teams is one of the most important objectives BDA is facing in the industry. This study provides scholars with information about the state of software analytics research and the current trends as well as applications in the business environment. Whereas, thanks to this literature review, practitioners should be able to understand better how to obtain actionable insights from their software artifacts and on which aspects of data analytics to focus when investing in such initiatives.
Conference Paper
Full-text available
Team communication addresses a critical issue for software developments. Understanding human behavior and communication take an important role for cost optimized scheduling and adjustment of dysfunctional manner. But team phenomena are often not trivial to interpret. Empirical studies can disclose practical information. Many kinds of research with the focus on human factors justify findings solely through linear statistics. This leads to an estimation problem of formally interpreted effects, in particular for diagnosis models. In this paper, we investigate several team communication effects with data records from an empirical study with 34 academic software projects. In general, we want to increase the awareness for often insufficiently interpreted human factors. We apply conventional linear correlation statistics in comparison with the novel exploratory analysis MINE on three sample cases concerning team meetings and communication behavior. Both analyzing techniques approved to be capable in identifying the relevant team communication effects within the case studies, even though with different estimation of relevances. The study demonstrates how quickly e.g. group behavior and communication effects can be insufficiently interpreted with dangerous gaps for factor estimation in modeling approaches.
Article
Full-text available
This study follows the idea that the key to understanding team meeting effectiveness lies in uncovering the microlevel interaction processes throughout the meeting. Ninety-two regular team meetings were videotaped. Interaction data were coded and evaluated with the act4teams coding scheme and INTERACT software. Team and organizational success variables were gathered via questionnaires and telephone interviews. The results support the central function of interaction processes as posited in the traditional input-process-output model. Teams that showed more functional interaction, such as problem-solving interaction and action planning, were significantly more satisfied with their meetings. Better meetings were associated with higher team productivity. Moreover, constructive meeting interaction processes were related to organizational success 2.5 years after the meeting. Dysfunctional communication, such as criticizing others or complaining, showed significant negative relationships with these outcomes. These negative effects were even more pronounced than the positive effects of functional team meeting interaction. The results suggest that team meeting processes shape both team and organizational outcomes. The critical meeting behaviors identified here provide hints for group researchers and practitioners alike who aim to improve meeting success.
Article
Full-text available
Social network analysis has many applications in software engineering and is often performed through the use of visualizations. Discovery of these networks, however, presents a challenge since the relationships are initially not known. We present an approach for mining and visualizing networks of software developers from version control systems. It computes similarities among developers based on common file changes, constructs the network of collaborating developers and applies filtering techniques to improve the modularity of the network. We validate the approach on two projects from industry and demonstrate its use in a case study of an open-source project. Results indicate that the approach performs well in revealing the structure of development teams and improving the modularity in visualizations of developer networks.
Chapter
Software process improvement is a very important topic. Almost all companies and organizations face the necessity for improvement sooner or later. Sometimes, there is obvious potential for improvement (e.g., if the number of developers does not fit the project size). Nonetheless, fixing all obvious issues does not necessarily lead to a “perfect” project. There are a lot of interdependencies between project parameters that are difficult to detect – sometimes due to the influences of social aspects which can be hardly grasped.
Article
Software projects are dominated by meetings. For participants, not all meetings are useful and enjoyable. However, interaction within a meeting has an impact on individual and group affects. Group affect influences team performance and project success. Despite frequent yet vague dissatisfaction with some meetings, many software engineers are not aware of the crucial importance of their behavior in those meetings. This can set the tone for the entire project. By influencing group affect, meeting interaction influences success without participants even noticing. Due to this lack of awareness, it depends on good or bad luck whether software teams will adopt a promising meeting style. In a study of 32 student projects with 155 participants, we coded fine-grained interaction elements during the first internal meeting of each team. The analysis of resulting codes showed that constructive remarks had a positive impact on positive group affect tone (PGAT). However, this effect was only observed when constructive remarks were followed by supportive utterances. We were able to show a complete mediation of this statistically significant effect. Seemingly subtle behavior patterns influence group affect. Software projects could significantly benefit from supportive meeting behavior. We propose practical interventions to improve meeting quality.
Conference Paper
In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors; each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.
Article
Despite widespread use of self-assessment, teachers have doubts about the value and accuracy of the technique. This article reviews research evidence on student self-assessment, finding that (1) self-assessment produces consistent results across items, tasks, and short time periods; (2) self-assessment provides information about student achievement that corresponds only in part to the information generated by teacher assessments; (3) self-assessment contributes to higher student achievement and improved behavior. The central finding of this review is that (4) the strengths of self-assessment can be enhanced through training students how to assess their work and each of the weaknesses of the approach (including inflation of grades) can be reduced through teacher action.
Chapter
The experiment data from the operation is input to the analysis and interpretation. After collecting experimental data in the operation phase, we want to be able to draw conclusions based on this data. To be able to draw valid conclusions, we must interpret the experiment data.
Conference Paper
Meetings are hot spots of communication and collaboration in software development teams. Both distributed and co-located teams need to meet for coordination, communication, and collaboration. It is difficult to assess the quality of these three crucial aspects, or the social effectiveness and impact of a meeting: Personalities, psychological and professional aspects interact. It is, therefore, challenging to identify emerging communication problems or to improve collaboration by studying a wealth of interrelated details of project meetings. However, it is relatively easy to count meetings, and to measure when and how long they took place. This is objective information, does not violate privacy of participants, and the data might even be retrieved from project calendars automatically. In an exploratory study, we observed 14 student teams working on comparable four-month projects. Among many other aspects, we counted and measured meetings. In this contribution, we compare the meeting profiles qualitatively, and derive a number of hypotheses relevant for software projects.