Guest Editors' Introduction: TDD--The Art of Fearless Programming


Abstract

Test-driven development is a discipline of design and programming where every line of new code is written in response to a test the programmer writes just before coding. This special issue of IEEE Software includes seven feature articles on various aspects of TDD and a Point/Counterpoint debate on the use of mock objects in applying it. The articles demonstrate the ways TDD is being used in nontrivial situations (database development, embedded software development, GUI development, performance tuning), signifying an adoption level for the practice beyond the visionary phase and into the early mainstream. In this introduction to the special issue on TDD, the guest editors also summarize selected TDD empirical studies from industry and academia.
Ron Jeffries, independent consultant
Grigori Melnik, University of Calgary

Published by the IEEE Computer Society, IEEE Software, May/June 2007. 0740-7459/07/$25.00 © 2007 IEEE

Programmers! Cast out your guilt! Spend half of your time in joyous testing and debugging! Thrill to the excitement of the chase! Stalk bugs with care, and with method, and with reason. Build traps for them. Be more artful than those devious bugs and taste the joys of guiltless programming! —Boris Beizer, 1983

Test-driven development is a discipline of design and programming where every line of new code is written in response to a test the programmer writes just before coding. As TDD practitioners, we think of what small step in capability would be a good next addition to the program. We then write a test specifying just how the program should invoke that capability and what its result should be. The test fails, showing that the capability isn't already present. We implement the code that makes the test pass and then verify that all prior tests are still passing. Finally, we review the code as it now stands, improving the design as we go in an activity called refactoring. Then we repeat the process, devising another test for another small addition to the program.
As we follow this simple cycle, shown in figure 1, the program grows into being and the design evolves with it. At the beginning of every cycle, the intention is for all tests to pass except the new one, which is "driving" the new code development. At the end of the cycle, the programmer runs all the tests, ensuring that each one passes and hence that every planned feature of the code still works.
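In code, one turn of this cycle looks like the following minimal sketch. It is a hypothetical Python example, with plain assert statements standing in for a unit-testing framework such as JUnit, and the leading_digit function invented purely for illustration:

```python
# One turn of the TDD cycle, in miniature.

def test_leading_digit_of_single_digit():
    # A test from an earlier cycle; it must keep passing.
    assert leading_digit(7) == 7

def test_leading_digit_of_larger_number():
    # The new test, written first. It fails ("red") until
    # leading_digit below handles multi-digit numbers.
    assert leading_digit(473) == 4

def leading_digit(n):
    # Just enough code to make the current test pass ("green").
    while n >= 10:
        n //= 10
    return n

# End of the cycle: run every test, not only the new one.
for test in (test_leading_digit_of_single_digit,
             test_leading_digit_of_larger_number):
    test()
```

After the tests pass, the programmer would pause to refactor before picking the next small capability.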
TDD is a design and programming activity, not a testing activity per se. Its testing aspect is largely confirmatory, through the regression suite it produces. Professional testers must still perform investigative testing. (The potential for confusion is spawning new terms for the discipline, such as behavior-driven development [1] and example-driven development [2].)
This special issue of IEEE Software includes seven feature articles on various aspects of TDD and a Point/Counterpoint debate on the use of mock objects in applying it. Notably, these articles demonstrate the ways TDD is being used in nontrivial situations (database development, embedded software development, GUI development, performance tuning). This signifies an adoption level for the practice beyond the visionary phase and into the early mainstream.
Alleviating fear
In practice, of course, even the best programmers make mistakes. TDD's growing collection of comprehensive tests (the regression suite) tends to detect these problems. No scheme is perfect, but TDD practitioners seem to experience a reduction in defects shipped, plus much faster problem detection.
Anyone who's worked on legacy software recognizes the situation where a system continues to function but becomes more and more outdated until, at some point, it turns into a house of cards. No one wants to touch it because even a minor code change will likely lead to an undesired side effect. With TDD, developers organically develop a test suite while building their applications. This provides a safety net for the whole system, offering reasonable confidence that no part of the code is broken. As a result, TDD helps alleviate the fear of changing the code.
In the past, most developers programmed by first writing code and then testing it. We often performed the tests manually and often gave only a cursory look at whether we had broken any past tests. In spite of what now seems like careless work, we were always surprised when someone found a bug in our code. We might have chalked it up to "just a mistake" or vowed to try harder and be more careful next time. Those tricks rarely worked.
With TDD, things are different. Automated tests specify and constrain each functional bit of the program. While these tests tend to prevent errors and detect them when they do occur, when an error does come up, our best response is to write the test that was missing—the test that would have prevented the defect.
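That response (write the missing test first, then fix the code) might look like this in Python; word_count and the bug report are hypothetical, invented for illustration:

```python
# Bug report: word_count("") returned 1 instead of 0.

# Step 1: write the test that was missing, reproducing the defect.
def test_empty_text_has_zero_words():
    assert word_count("") == 0

# A test from an earlier cycle, which must keep passing.
def test_simple_sentence():
    assert word_count("to be or not to be") == 6

# Step 2: fix the code so the new test and all prior tests pass.
def word_count(text):
    # str.split() with no separator returns [] for empty or
    # all-whitespace input, which removes the off-by-one defect.
    return len(text.split())

test_empty_text_has_zero_words()
test_simple_sentence()
```

The reproducing test stays in the suite, so this particular defect can never silently return.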
Programmers using TDD become justly confident in the code. As we become more confident, we can relax more as we work. Less stressed, we can focus more on quality because we're keeping fewer balls in the air. We become practiced in thinking about what might not work, at testing whether it does, and at making it work.
Joyous programming
Figure 1. The test-step cycle: design a failing test, implement code to pass the test, and improve the design via refactoring.

The TDD approach extends the assertion Boris Beizer made in 1983: "The act of designing tests is one of the most effective bug preventers known." [3] As a practice, TDD first appeared as part of the Extreme Programming discipline, described in Kent Beck's Extreme Programming Explained, which came out in 1999. In 2002, Beck released Test-Driven Development: By Example, and Dave Astels followed soon after with Test-Driven Development: A Practical Guide. More books appeared, covering various aspects of the technique, specific tools, and project experiences (see the "Recommended Books" sidebar). TDD tools now exist for almost every computer language you can imagine, from C++ to Visual Basic, all the major scripting languages, and even some of the more exotic languages—current and past.
TDD has caught the attention of a large software development community that finds it to be a good, fast way to develop reliable code—and many of us find it to be an enjoyable way to work. It embodies elements of design, testing, and coding in a cyclical style based on one fundamental rule: never write a line of code except what's necessary to make the current test pass. The process might sound tedious in the telling, but the practice is rhythmic, quite pleasant, and productive. The swing from test to code to test occurs as frequently as every five or 10 minutes. It's been compared to a waltz, to the smooth grace of skating, and to the seemingly effortless movements of a yin-style martial art. As in all these analogous situations, the practitioner is fully engaged and concentrating, while the work just seems to flow. And like these other arts, the only way to really understand TDD is to practice it.
System-level TDD
Applied at a higher level, TDD is known as executable acceptance TDD [4] or storytest-driven development [5], and it helps with requirements discovery, clarification, and communication. Customers, domain experts, or analysts specify tests before the features are implemented. Once the code is written, the tests serve as executable acceptance criteria. In this issue, Jennitta Andrea makes a case for better acceptance testing tools in her article, "Envisioning the Next Generation of Functional Testing Tools" (pp. 58–66).
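An acceptance test of this kind reduces to customer-readable examples checked against the system. The sketch below imitates that style in plain Python; the discount rule, names, and numbers are all invented for illustration, and a real team would typically use a tool such as Fit so that customers can edit the example table directly:

```python
# Acceptance examples agreed on before implementation, in the
# spirit of a Fit table: (order total, member?, expected discount).
# The discount rule itself is hypothetical.
EXAMPLES = [
    (100.0, False, 0.0),   # non-members get no discount
    (100.0, True,  5.0),   # members get 5% off
    (500.0, True, 25.0),
]

def discount(total, is_member):
    # Implementation written after the examples were agreed on.
    return round(total * 0.05, 2) if is_member else 0.0

# Once the code exists, the examples serve as executable
# acceptance criteria for the feature.
for total, is_member, expected in EXAMPLES:
    assert discount(total, is_member) == expected
```

The point is the workflow, not the arithmetic: the examples come from the customer conversation first, and the code is written to satisfy them.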
Recommended Books

Kent Beck, TDD by Example, Addison-Wesley, 2002 (introductory): TDD primer with a basic example, idealized situation, baby-step demonstration. Appropriate for both academia and practitioners new to TDD.

David Astels, Test Driven Development: A Practical Guide, Prentice Hall, 2003 (intermediate): Relentlessly practical TDD "how-to" guide with real problems, real solutions, and real code (including building a GUI test-first). Also includes an excellent overview of tools and introduction to mock objects.

James Newkirk and Alexey Vorontzov, Test-Driven Development in Microsoft .NET, Microsoft Press, 2004 (intermediate): Extends TDD to more realistic scenarios (code with Web interfaces, databases, and so on).

Ron Jeffries, Extreme Programming Adventures in C#, Microsoft Press, 2004 (introductory): Trip through a software engineering project with an expert sitting beside you, sharing the triumphs and failures.

Rick Mugridge and Ward Cunningham, Fit for Developing Software: Framework for Integrated Tests, Prentice Hall, 2005 (introductory-intermediate): Primary resource on customer acceptance testing; half the book is written for business experts and the other half for developers.

Martin Fowler et al., Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999 (introductory): Fundamental introduction to refactoring.

Joshua Kerievsky, Refactoring to Patterns, Addison-Wesley, 2004 (intermediate): Guide to improving the design of existing code with pattern-directed refactorings.

Michael Feathers, Working Effectively with Legacy Code, Prentice Hall, 2004 (advanced): Start-to-finish strategies for working effectively with large, untested legacy code bases.

J.B. Rainsberger, JUnit Recipes, Manning Publications, 2004: Concrete, practical expert advice for writing good programmer tests.

Gerard Meszaros, XUnit Test Patterns, Addison-Wesley, 2007 (intermediate-advanced): Guide to refactoring both programmer and customer tests; includes a catalog of "smells" with root cause analysis and possible solutions.

Brian Marick, Everyday Scripting with Ruby: For Teams, Testers, and Users, Pragmatic Bookshelf, 2007 (introductory): Introduction to test-driving Ruby scripts and beyond.

Each test amounts to the first use of a planned new capability. This helps the developer focus on the code in actual use, not just as implemented. We can often improve a new capability's design when we have a chance to see what we're creating from the first user's viewpoint. Each test makes us think concretely about how the proposed new feature will behave. What are suitable inputs? What behavior will be executed? How will we know what happened? When we turn to writing the code a few minutes later, the concrete example helps us focus on what the code needs to do.
The process is self-correcting. If the tests are too simple, which is rare, the workflow will feel choppy and without challenge. This will encourage the developer to take larger bites. On the other hand, if the tests are too difficult, the longer time between successfully passed tests will alert us that we might be off track.
Once developers gain some skill in TDD, they commonly report less stress during development, better requirements understanding, lower defect insertion rates, less rework, and, as a result, faster production of higher-quality code. Once "test infected," as TDD aficionados are called, a developer rarely wants to go back to the old ways.
TDD practice has a special value as part of agile methods, which are all characterized by iterative delivery of increasingly capable system versions in short cycles—usually fixed lengths of a couple of weeks to a month. At the beginning, a simple architecture and simple design are sufficient to support the system's capability. As it grows, however, the architecture and design need continuous improvement. Agile practitioners might or might not have the complete design in mind from the beginning. Either way, to deliver working system versions every few weeks, they must grow the complete design incrementally, not all at once.
Improving the design incrementally is the refactoring step in figure 1. It brings the whole design back into alignment—now just a little bit bigger and better. Changing the design in continual small steps is a good thing, in that we can deliver tangible features as we go along. But frequent design changes also carry the risk that we'll break something that used to work.

The tests we write with TDD have been built, one by one, to cause some new software property to exist and to show that it works. So, as we refactor, we can run all the tests to verify that everything that should work, in fact, still does work. This makes TDD a powerful asset to incremental software development. It becomes a rule of software development hygiene. Robert C. Martin argues for that in his article, "Professionalism and Test-Driven Development," pp. 32–36.
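The interplay between refactoring and the regression suite fits in a tiny sketch (hypothetical Python; the helper and its tests are invented for illustration):

```python
# The behavior is pinned down by tests written in earlier cycles,
# so the body of the function can be restructured safely.

def unique_in_order(items):
    # Refactored body: dict.fromkeys preserves insertion order
    # (guaranteed since Python 3.7) and replaces a longer
    # hand-rolled loop with identical behavior.
    return list(dict.fromkeys(items))

def test_removes_duplicates_keeping_first_occurrence():
    assert unique_in_order([3, 1, 3, 2, 1]) == [3, 1, 2]

def test_empty_input():
    assert unique_in_order([]) == []

# After each small refactoring step, rerun the whole suite to
# verify that everything that should work still does.
test_removes_duplicates_keeping_first_occurrence()
test_empty_input()
```

If the refactoring had changed behavior, a test would fail within minutes of the mistake, not weeks later.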
The state of TDD research
TDD also intrigues the research community, and a growing number of studies have investigated its effects. Tables 1 and 2 reflect the current state of TDD research, summarizing the productivity and quality impacts of industry and academic work, respectively [6–23]. The results are sometimes controversial (more so in the academic studies). This is no surprise, given incomparable measurements and the difficulty in isolating TDD's effects from many other context variables. In addition, many studies don't have the statistical power to allow for generalizations. So, we advise readers to consider empirical findings within each study's context and environment. We also invite more researchers to methodically investigate TDD practice and report on its effects, both positive and negative.
All researchers seem to agree that TDD encourages better task focus and test coverage. The mere fact of more tests doesn't necessarily mean that software quality will be better, but the increased programmer attention to test design is nevertheless encouraging. If we view testing as sampling a very large population of potential behaviors, more tests mean a more thorough sample. To the extent that each test can find an important problem that none of the others can find, the tests are useful, especially if you can run them cheaply.
TDD is also making its way into university and college curricula. The IEEE/ACM 2004 guidelines for software engineering undergraduate programs include test-first as a desirable skill [24]. Educators report success stories when using TDD in computer science programming assignments. In this issue, Bas Vodde and Lasse Koskela describe an effective exercise for introducing TDD to novices—practitioners or students—in their article, "Learning Test-Driven Development by Counting Lines" (pp. 74–79).
Other articles in this special issue give you a taste of TDD's use in diverse and nontrivial contexts: control system design (see Thomas Dohmke and Henrik Gollee, "Test-Driven Development of a PID Controller," pp. 44–50), GUI development (Alex Ruiz and Yvonne Wang Price, "Test-Driven GUI Development with TestNG and Abbot," pp. 51–57), and database development (Scott W. Ambler, "Test-Driven Development of Relational Databases," pp. 37–43). In addition, Michael J. Johnson, Chih-Wei Ho, E. Michael Maximilien, and Laurie Williams inspect the aspect of incorporating performance testing in TDD (pp. 67–73). In Point/Counterpoint (pp. 80–83), Steve Freeman and Nat Pryce debate Joshua Kerievsky on the role of mock objects in test-driving code.

Table 1. A summary of selected empirical studies of test-driven development: industry participants*

| Family of studies | Type | Development time analyzed | Legacy project? | Organization studied | Software built | Software size | No. of participants | Language | Productivity effect | Quality effect |
| Sanchez et al. [6] | Case study | 5 years | Yes | IBM | Point-of-sale device | Medium | 9–17 | Java | Increased effort 19% | 40% |
| Bhat and Nagappan [7] | Case study | 4 months | No | Microsoft | Windows networking | Small | 6 | C/C++ | Increased effort 25–35% | 62% |
| Bhat and Nagappan [7] | Case study | 7 months | No | Microsoft | MSN Web services | Medium | 5–8 | C++/C# | Increased effort 15% | 76% |
| Canfora et al. [8] | Controlled experiment | 5 hours | No | Soluziona Software Factory | Text analyzer | Very small | 28 | Java | Increased effort by 65% | Inconclusive based on quality of test |
| Damm and Lundberg [9] | Multi-case study | 1–1.5 years | Yes | Ericsson | Components for a mobile network operator application | Medium | 100 | C++/Java | Total project cost increased by 5–6% | 5–30% decrease in fault-slip-through rate; 55% decrease in avoidable fault costs |
| Melis et al. [10] | Simulation | 49 days (simulated) | No | Calibrated using KlondikeTeam and Quinary data | Market information project | Medium | 4 | Smalltalk | Increased effort 17% | 36% reduction in residual defect density |
| Mann [11] | Case study | 8 months | Yes | PetroSleuth | Windows-based oil and gas project management with statistical modeling | Medium | 4–7 | C# | n/a | 81% customer and developers' perception of improved quality |
| Geras et al. [12] | Quasi-controlled experiment | 3 hours | No | Various companies | Simple database-backed business information system | Small | 14 | Java | No effect | Inconclusive based on failure rates; improved based on no. of tests and frequency of tests |
| George and Williams [13] | Quasi-controlled experiment | 4.75 hours | No | John Deere, Role Model Software | Bowling game | Very small | 24 | Java | Increased effort 16% | 18% |
| Ynchausti [14] | Case study | 8.5 hours | No | Monster Consulting | Coding exercises | Small | 5 | n/a | Increased effort | 38–267% |

*Green box = improvement; orange box = deterioration
†Reduction in the internal defect density
‡Simulated in 200 runs
§Reduction in external defect ratio (can't be solely attributed to TDD, but to a set of practices)
¶Increase in percentage of functional black-box tests passed (external quality)

Table 2. A summary of selected empirical studies of TDD: academic participants*

| Family of studies | Type | Development time analyzed | Legacy project? | Organization studied | Software built | Software size | No. of participants | Language | Productivity effect | Quality effect |
| Flohr and Schneider [15] | Quasi-controlled experiment | 40 hours | Yes | University of Hannover | Graphical workflow library | Small | 18 | Java | Improved productivity by 27% | Inconclusive |
| Abrahamsson et al. [16] | Case study | 30 days | No | VTT | Mobile application for global markets | Small | 4 | Java | Increased effort by 0% (iteration 5) to 30% (iteration 1) | No value perceived by developers |
| Erdogmus et al. [17] | Controlled experiment | 13 hours | No | Politecnico di Torino | Bowling game | Very small | 24 | Java | Improved normalized productivity by 22% | No difference |
| Madeyski [18] | Quasi-controlled experiment | 12 hours | No | Wroclaw University of Technology | Accounting application | Small | 188 | Java | n/a | –25 to –45% |
| Melnik and Maurer [19] | Multi-case study | 4-month projects over 3 years | No | University of Calgary/SAIT Polytechnic | Various Web-based systems (surveying, event scheduling, price consolidation, travel mapping) | Small | 240 | Java | n/a | 73% of respondents perceive TDD improves quality |
| Edwards [20] | Artifact analysis | 2–3 weeks | No | Virginia Tech | CS1 programming | Very small | 118 | Java | Increased effort 90% | 45% |
| Pančur et al. [21] | Controlled experiment | 4.5 months | No | University of Ljubljana | 4 programming assignments | Very small | 38 | Java | n/a | No difference |
| George [22] | Quasi-controlled experiment | 1.75 hours | No | North Carolina State | Bowling game | Very small | 138 | Java | Increased effort 16% | 16% |
| Müller and Hagner [23] | Quasi-controlled experiment | 10 hours | No | University of Karlsruhe | Graph library | Very small | 19 | Java | No effect | No effect, but better reuse and improved … |

*Green box = improvement; orange box = deterioration
†Increase in percentage of functional black-box tests passed (external quality)
TDD is becoming popular across all sizes and kinds of software development projects. A Cutter Consortium survey of companies about various software process improvement practices identified TDD as the practice with the second-highest impact on project success (after code inspections) [25]. Of course, like any other programming tool or technique, TDD is no silver bullet. However, it can help you become a more effective and disciplined developer—fearless and joyful, too.
Happy reading!
We thank the 32 groups of authors who responded to our call for papers. Selecting seven articles from this pool would have been impossible without reviewers who contributed their expertise and effort. Our profound gratitude goes to all of them.
References

1. D. Astels, "A New Look at Test-Driven Development."
2. B. Marick, "Driving Software Projects with Examples," 3 July 2004.
3. B. Beizer, Software Testing Techniques, Van Nostrand Reinhold, 1983.
4. F. Maurer and G. Melnik, "Driving Software Development with Executable Acceptance Tests," Cutter Consortium Report, vol. 7, no. 11, 2006, pp. 1–30.
5. T. Reppert, "Don't Just Break Software, Make Software: How Storytest Driven Development Is Changing the Way QA, Customers, and Developers Work," Better Software, vol. 9, no. 6, 2004, pp. 18–23.
6. J. Sanchez, L. Williams, and E.M. Maximilien, "A Longitudinal Study of the Use of a Test-Driven Development Practice in Industry," to appear in Proc. Agile 2007 Conf., IEEE CS Press, 2007.
7. T. Bhat and N. Nagappan, "Evaluating the Efficacy of Test-Driven Development: Industrial Case Studies," Proc. Int'l Symp. Empirical Software Eng. (ISESE 06), ACM Press, 2006, pp. 356–363.
8. A. Canfora et al., "Evaluating Advantages of Test Driven Development: A Controlled Experiment with Professionals," Proc. Int'l Symp. Empirical Software Eng. (ISESE 06), ACM Press, 2006, pp. 364–371.
9. L. Damm and L. Lundberg, "Results from Introducing Component-Level Test Automation and Test-Driven Development," J. Systems and Software, vol. 79, no. 7, 2006, pp. 1001–1014.
10. M. Melis et al., "Evaluating the Impact of Test-First Programming and Pair Programming through Software Process Simulation," J. Software Process Improvement and Practice, vol. 11, 2006, pp. 345–360.
11. C. Mann, "An Exploratory Longitudinal Case Study of Agile Methods in a Small Software Company," master's thesis, Dept. Computer Science, Univ. of Calgary, 2004.
12. A. Geras et al., "A Prototype Empirical Evaluation of Test Driven Development," Proc. 10th Int'l Symp. Software Metrics (METRICS 04), IEEE CS Press, 2004.
13. B. George and L. Williams, "An Initial Investigation of Test Driven Development in Industry," Proc. ACM Symp. Applied Computing, ACM Press, 2003.
14. R.A. Ynchausti, "Integrating Unit Testing into a Software Development Team's Process," Proc. 2nd Int'l Conf. Extreme Programming and Flexible Processes in Software Eng. (XP 01), 2001, pp. 79–83.
15. T. Flohr and T. Schneider, "Lessons Learned from an XP Experiment with Students: Test-First Needs More Teachings," Proc. 7th Int'l Conf. Product Focused Software Process Improvement (Profes 06), LNCS 4034, Springer, 2006, pp. 305–318.
16. P. Abrahamsson, A. Hanhineva, and J. Jäälinoja, "Improving Business Agility through Technical Solutions: A Case Study on Test-Driven Development in Mobile Software Development," Business Agility and Information Technology Diffusion, IFIP TC8 WG 8.6 Int'l Working Conf., Int'l Federation for Information Processing, 2005, pp. 1–17.
17. H. Erdogmus et al., "On the Effectiveness of the Test-First Approach to Programming," IEEE Trans. Software Eng., vol. 31, no. 3, 2005, pp. 226–237.
18. L. Madeyski, "Preliminary Analysis of the Effects of Pair Programming and Test-Driven Development on the External Code Quality," Software Engineering: Evolution and Emerging Technologies, K. Zielinski and T. Szmuc, eds., IOS Press, 2005, pp. 113–123.
19. G. Melnik and F. Maurer, "A Cross-Program Investigation of Students' Perceptions of Agile Methods," Proc. 27th Int'l Conf. Software Eng. (ICSE 05), ACM Press, 2005, pp. 481–489.
20. S. Edwards, "Using Software Testing to Move Students from Trial-and-Error to Reflection-in-Action," ACM SIGCSE Bull., 2004, pp. 26–30.
21. M. Pančur et al., "Towards Empirical Evaluation of Test-Driven Development in a University Environment," IEEE Region 8 Proc. EUROCON 2003, vol. 2, IEEE Press, 2003, pp. 83–86.
22. B. George, "Analysis and Quantification of Test Driven Development Approach," master's thesis, Dept. Computer Science, N. Carolina State Univ., 2002.
23. M. Müller and O. Hagner, "Experiment about Test-First Programming," IEE Proc. Software, vol. 149, no. 5, 2002, pp. 131–136.
24. Joint Task Force on Computing Curricula, Software Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering, tech. report, IEEE CS and ACM, 2004.
25. K. El Emam, "Evaluating ROI from Software Quality," Cutter Consortium Report, vol. 5, no. 1, 2004, p. 20.
About the Authors

Ron Jeffries is an independent Extreme Programming author, trainer, coach, and practitioner, and proprietor of one of the longest-running and certainly the largest one-person sites on XP, comprising over 200 articles at this time. His research interests center on agile software development. He's one of the 17 original authors and signatories of the Agile Manifesto. He has master's degrees in mathematics and in computer and communication science.

Grigori Melnik is a software engineer, researcher, and educator, currently affiliated with the University of Calgary and SAIT Polytechnic. His research interests include empirical evaluation of agile methods, executable acceptance-test-driven development, domain-driven design, e-business software engineering, the Semantic Web, and distributed cognition in software teams. He's the research chair of the Agile 2007 conference and the program academic chair of Agile 2008.
... [1]. Desta forma, a programação da aplicaçãoé realizada conforme a unidade de software passa ou reprova nos testes realizados [9]. A metodologia TDD pode ser empregada HIL, por ser um sistema preciso e eficiente no desenvolvimento de modelos de planta e produtos, permitindo a realização de testes e simulações com maior confiabilidade, a partir de um sistema digital que representa um sistema físico complexo. ...
... A eficiência do inversoré determinada através da relação entre a potência de entrada P in e a potência dissipada P diss durante a operação do inversor, dada pela soma das perdas de condução e comutação dos dispositivos semicondutores. A eficiência do inversoré dada por (9). ...
Conference Paper
This paper aims to validate the use of automatic test routines in HIL to evaluate performance of PMSM drivers. Therefore, a non-sinusoidal PMSM model is implemented, driven through six-steps commutation. In addition, the conduction and commutation losses of the inverter are determined, in order to use the efficiency as a comparison parameter to generate a report for benchmarking controllers. The six-step 120°strategy is used to drive the PMSM, which determines the order of commutations of a three-phase inverter through the information of the machine rotor position obtained by Hall effect sensors. The Test Driven Design methodology was used to compare the efficiency of different controllers. Therefore, the conduction and commutation losses of the inverter were determined, taking into account the junction temperature of the IGBTs and their thermal model. From this, the power dissipated in the inverter was used to define the efficiency of the system. An automated test routine was established to benchmark six current and speed controllers in terms of efficiency and rotor speed.
... Martin, 2009). As well as creating a comprehensive test suite for automated testing, such test-driven development encourages a more stringent and focussed programming style (Jeffries & Melnik, 2007). However, TDD is very timeconsuming to implement, and in terms of defect-finding, other techniques such as code reviews appear to be more effective (see Section 2.6). ...
Full-text available
1. Individual‐based models are doubly complex: as well as repressenting complex ecological systems, the software that implements them is complex in itself. Both forms of complexity must be managed in order to create reliable models. However, the ecological modelling literature to date has focussed almost exclusively on the biological complexity. Here, we discuss methods for containing software complexity. 2. Strategies for containing complexity include avoiding, subdividing, documenting, and reviewing it. Computer science has long‐established techniques for all of these strategies. We present some of these techniques and set them in the context of IBM development, giving examples from published models. 3. Techniques for avoiding software complexity are following best practices for coding style, choosing suitable programming languages and file formats, and setting up an automated workflow. Complex software systems can be made more tractable by encapsulating individual subsystems. Good documentation needs to take into account the perspectives of scientists, users, and developers. Code reviews are an effective way to check for errors, and can be used together with manual or automated unit and integration tests. 4. Ecological modellers can learn from computer scientists how to deal with complex software systems. Many techniques are readily available, but must be disseminated among modellers. There is a need for further work to adapt software development techniques to the requirements of academic research groups and individual‐based modelling.
... The earliest article, "TDD: The Art of Fearless Programming" [56], which was published in 2007 as a guest editors introduction, highlighted a mixed notion on TDDs impact on quality and productivity, which is, inter alia, attributed to incomparable measurements. However, despite it being no silver bullet, a positive effect on task focus, test coverage and an increased attention on test design by the developers are stated. ...
Conference Paper
Full-text available
Big data has evolved to a ubiquitous part of today’s society. However, despite its popularity, the development and testing of the corresponding applications are still very challenging tasks that are being actively researched in pursuit of ways for improvement. One newly introduced proposition is the application of test driven development (TDD) in the big data domain. To facilitate this concept, existing literature reviews on TDD have been analyzed to extract insights from those sources of aggregated knowledge, which can be applied to this new setting. After introducing the different studies, lessons for the application of TDD in the big data domain are deducted and discussed. Finally, avenues for future works are proposed.
... The end of a cycle allows a TDD practitioner to tackle a new piece of functionality, not yet implemented, so starting a new development cycle [1,2]. Advocates of TDD recommend ending a development cycle in few minutes (five or ten minutes [4]) and keeping the rhythm as uniform as possible over time [1,3]. The order with which unit tests interpose within the process underlying TDD-i.e., the writing of a test precedes the one of the corresponding production code-is known as test-first sequencing (or also test-first dynamic) [5]. ...
In this paper, we investigate the effect of TDD, as compared to a non-TDD approach, as well as its retainment (or retention) over a time span of (about) six months. To pursue these objectives, we conducted a (quantitative) longitudinal cohort study with 30 novice developers (i.e., third-year undergraduate students in Computer Science). We observed that TDD affects neither the external quality of software products nor developers' productivity. However, we observed that the participants applying TDD produced significantly more tests, with a higher fault-detection capability than those using a non-TDD approach. As for the retainment of TDD, we found that TDD is retained by novice developers for at least six months.
The research on the claimed effects of Test-Driven Development (TDD) on software quality and developers’ productivity has shown inconclusive results. Some researchers have ascribed such results to the negative affective reactions that TDD would provoke when developers apply it. In this paper, we studied whether and in which phases TDD influences the affective states of developers, who are new to this development approach. To that end, we conducted a baseline experiment and two replications, and analyzed the data from these experiments both individually and jointly. Also, we performed methodological triangulation by means of an explanatory survey, whose respondents were experienced with TDD. The results of the baseline experiment suggested that developers like TDD significantly less, compared to a non-TDD approach. Also, developers who apply TDD like implementing production code significantly less than those who apply a non-TDD approach, while testing production code makes TDD developers significantly less happy. These results were not confirmed in the replicated experiments. We found that the moderator that better explains these differences across experiments is experience (in months) with unit testing, practiced in a test-last manner. The higher the experience with unit testing, the more negative the affective reactions caused by TDD. The results from the survey seem to confirm the role of this moderator.
This teaching unit describes the stages of testing, the principles governing system and component testing, the testing strategies used, as well as the basic characteristics of the software tools used to support test automation. In this unit, the software development specialist will acquire the knowledge needed to properly test, verify, and validate the entire software product, from the requirements and design specifications down to the source code, so as to deliver the expected software system to the customer and the end users.
In this teaching unit, readers will learn about the various implementation approaches and programming techniques, as well as the automated tools they can use to turn design specifications into code.
This teaching unit describes the basic principles of dependability and security in software systems. It explains the fundamentals of fault avoidance, fault detection, and recovery that are used to build dependable and secure software systems. In addition, it describes software engineering techniques for developing dependable and secure systems, and presents metrics used to assure system reliability.
This open access book constitutes the proceedings of the 19th International Conference on Agile Software Development, XP 2018, held in Porto, Portugal, in May 2018. XP is the premier agile software development conference combining research and practice, and XP 2018 provided a playful and informal environment to learn and trigger discussions around its main theme – make, inspect, adapt. The 21 papers presented in this volume were carefully reviewed and selected from 62 submissions. They were organized in topical sections named: agile requirements; agile testing; agile transformation; scaling agile; human-centric agile; and continuous experimentation.
Introductory computer science students rely on a trial-and-error approach to fixing errors and debugging for too long. Moving to a reflection-in-action strategy can help students become more successful. Traditional programming assignments are usually assessed in a way that ignores the skills needed for reflection in action, but software testing promotes the hypothesis forming and experimental validation that are central to this mode of learning. By changing the way assignments are assessed, making students responsible for demonstrating correctness through testing and then assessing how well they achieve this goal, it is possible to reinforce the desired skills. Automated feedback can also play a valuable role in encouraging students while showing them where they can improve.
Test-driven development (TDD) is gaining interest among practitioners and researchers: it promises to increase the quality of the code. Even though TDD is considered a development practice, it relies on the use of unit testing. For this reason, it could be an alternative to testing after coding (TAC), the usual approach of writing and executing unit tests after the production code has been written. We wondered what the differences between the two practices are, from the standpoint of quality and productivity. To answer our research question, we carried out an experiment in a Spanish software house. The results suggest that TDD improves unit testing but slows down the overall process.
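The procedural difference between TDD and TAC in the study above is simply when the unit test is written. A minimal sketch using Python's `unittest` module (the `add` function and its test are hypothetical, invented only to illustrate the cycle; they are not taken from the study):

```python
import unittest

# Step 1 (red): in TDD, this test is written first and initially fails,
# because `add` does not exist yet.
# Step 2 (green): just enough production code is written to make it pass.
# In test-after coding (TAC), the same test would be written only after
# `add` was already complete.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_handles_negatives(self):
        self.assertEqual(add(-4, 4), 0)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run.
    unittest.main(exit=False)
```

Either way the final artifact is the same test suite; what the studies compare is the order of the steps and its effect on quality and productivity.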
Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.
For many software development organizations, it is crucial to reduce development costs while maintaining high product quality. Since testing commonly constitutes a significant part of development time, one way to increase efficiency is to find more faults early, when they are cheaper to pinpoint and remove. This paper presents empirical results from introducing a concept for early fault detection: an alternative approach to Test-Driven Development applied at the component level instead of the class/method level. The method selected for evaluating the introduction of the concept was based on an existing method for fault-based process assessment and proved practically useful for evaluating fault-reducing improvements. The evaluation covered two industrial projects, as well as different features within a project that implemented the concept only partly. The results demonstrated improvements in terms of decreased fault rates and return on investment (ROI); for example, total project cost was about 5–6% lower already in the first two studied projects.