Intelligent Test Automation

Authors:
  • Serious Quality, LLC
Intelligent
Test
Automation
ILLUSTRATIONS BY STEVE BJÖRKMAN
www.stqemagazine.com Software Testing &Quality Engineering September/October 2000
24
Tools & Automation
Tools & Automation
A model-based method for generating tests from a description of an application’s behavior

by Harry Robinson
Warning: The fairy tale you are about to read is a fib—but it’s short, and the moral is true.
Once upon a product cycle, there were four testers who set out on a quest to test software.

Tester 1 started hands-on testing immediately, and found some nice bugs. The development team happily fixed these bugs, and gave Tester 1 a fresh version of the software to test. More testing, more bugs, more fixes. Tester 1 felt productive, and was happy—at least for a while.

After several rounds of this find-and-fix cycle, he became bored and bleary-eyed from running virtually the same tests over and over again by hand. When Tester 1 finally ran out of enthusiasm—and then out of patience—the software was declared “ready to ship.” Customers found it too buggy and bought the competitor’s product.
QUICK LOOK
• Improving the efficiency of your automated testing through modeling
• Overcoming the limitations of hands-on and static automation testing
Tester 2 started testing by hand, but soon decided it made more sense to create test scripts that would perform the keystrokes automatically. After carefully figuring out tests that would exercise useful parts of the software, Tester 2 recorded the actions in scripts. These scripts soon numbered in the hundreds. At the push of a button, the scripts would spring to life and run the software through its paces. Tester 2 felt clever, and was happy—at least for a while.

The scripts required a lot of maintenance when the software changed. He spent weeks arguing with developers to stop changing the software because it broke the automated tests. Eventually, the scripts required so much maintenance that there was little time left to do testing. When the software was released, customers found lots of bugs that the scripts didn’t cover. They stopped buying the product and decided to wait for version 2.0.
Tester 3 didn’t want to maintain hundreds of automated test scripts. She wrote a test program that went around randomly clicking and pushing buttons in the application. This “random” test program was hypnotic to watch, and it found a lot of crashing bugs. Tester 3 enjoyed uncovering such dramatic defects, and was happy—at least for a while.

Since the random test program could only find bugs that crashed the application, Tester 3 still had to do a lot of hands-on testing, getting bored and bleary-eyed in the process. Customers found so many functional bugs in the software when it was released that they lost trust in the company and stopped buying its software.
Commentaries
These four scenes show some of the approaches available in software testing today.

Tester 1 is a typical hands-on tester, manually running all tests from the keyboard. Hands-on testing is common throughout the industry today—it provides immediate benefits, but in the long run it is tedious for the tester and expensive for the company.
“One of the saddest sights to me has always been a human at a keyboard doing something by hand that could be automated. It’s sad but hilarious.”
—Boris Beizer, Black-Box Testing: Techniques for Functional Testing of Software and Systems
Tester 2 practices what I call “static test automation.” Static automation scripts exercise the same sequence of commands in the same order every time. These scripts are costly to maintain when the application changes. The tests are repeatable; but since they always perform the same commands, they rarely find new bugs.
“Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.”
—James Bach, “Test Automation Snake Oil,” Windows Tech Journal, October 1996
Tester 3 operates closer to the cutting edge of automated testing. These types of “random” test programs are called dumb monkeys because they essentially bang on the keyboard aimlessly. They come up with unusual test action sequences and find many crashing bugs, but it’s hard to direct them to the specific parts of the application you want tested. Since they don’t know what they are doing, they miss obvious failures in the application.

“Monkey testing should not be your only testing. Monkeys don’t understand your application, and in their ignorance they miss many bugs.”
—Noel Nyman, “Using Monkey Test Tools,” STQE, January/February 2000

Tester 4 began with hands-on, exploratory testing to become familiar with the application—and used the knowledge gained during the hands-on testing to create a very simple behavioral model of the application. Tester 4 then used a test program to test the application’s behavior against what the model predicted. The behavioral model was much simpler than the application under test, so it was easy to create. Since the test program knew what the application was supposed to do, it could detect when the application was doing the wrong thing.

As the product cycle progressed, developers wrote new features for the application. Tester 4 quickly updated the model, and the tests continued running. The program ran day and night, constantly generating new test sequences. Tester 4 was able to run the tests on a dozen machines at once and get several days of testing done in a single night.

After several rounds of testing and bug fixes, Tester 4’s test generator began to find fewer bugs. Tester 4 upgraded the model to test for additional behaviors and continued testing. Tester 4 also did some hands-on testing and static automation for those parts of the application which were not yet worth modeling.

When Tester 4’s software was released, there were very few bugs to be found. The customers were happy. The stockholders were happy. And Tester 4 was happy.
Tester 4 combines the other testers’ approaches with a type of intelligent test automation called “model-based testing.”

Model-based testing doesn’t record test sequences verbatim like static test automation does, nor does it bang away at the keyboard blindly. Model-based tests use a description of the application’s behavior to determine what actions are possible and what outcome is expected. This automation generates new test sequences endlessly, adapts well to changes in the application, can be run on many machines at once, and can run day and night.

“[An artist] paints with his brain, not with his hands.”
—Michelangelo Buonarroti
The Moral of the Story
Tester 1’s method required that his hands always be at work on the keyboard. Eventually Tester 1’s brain and hands gave out.

Tester 2’s static scripts repeated keyboard actions that his hands had already performed.

Tester 3’s monkeys were essentially brainless hands banging on the keyboard.

Tester 4, on the other hand, supplemented the others’ techniques by:

• thinking about the application’s behavior,
• describing that behavior to a test generator, and
• letting the test generator create and run test cases.

By generating tests from a description of the application’s behavior, Tester 4 could perform tests that were not practical under the other test approaches.

The moral of the tale: Automate your brain, not just your hands.
Use Your Brain
Let’s look at an example of creating and using a behavioral model to test a software application.

Hands-on testing is a good way to start the test automation process. I call this phase “exploratory modeling” because it combines exploratory testing with the discovery of a model that can later be used to generate tests. As you begin to understand the behavior of each action, you can create rules that will help you model and test the application.

This is the essence of model-based testing: to describe the behavior you expect in a way that can be used to generate tests. Ask yourself the following two questions for every action you are going to test:

1. When is this action possible?
2. What is the outcome when this action is executed?

For instance, suppose you have been asked to test the behavior of files in a Windows folder. In particular, you are going to test the Create, Delete, and Invert Selection actions.
Modeling the “Create” Action

When is Create possible? This example is kept simple by limiting the number of files in the folder to 1 File. Therefore Create is only possible in this model when there are 0 Files in the folder.

What is the outcome when Create is executed? When you Create a new file in a folder, the number of files in the folder increases by one. The newly created file is initially Selected, so it appears highlighted in the folder. In fact, the new file is the only Selected file in the folder, no matter how many were Selected before the Create action.
Modeling the “Delete” Action

When is Delete possible? Delete is only possible in this model when there is at least 1 Selected File in the folder.

What is the outcome when Delete is executed? When you execute the Delete action, any Selected file disappears from the folder.
Modeling the “Invert Selection” Action

When is Invert Selection possible? Invert Selection is always possible in this model, even when there are 0 Files in the folder.

What is the outcome when Invert Selection is executed? When you execute the Invert Selection action, all Selected files in the folder become Unselected, and all Unselected files become Selected. When there are 0 Files in the folder, Invert Selection leaves the folder unchanged.
Creating a State Model

You can now construct what is called a “state model” of the system’s behavior, as shown in Figure 1. It incorporates all the behaviors described above. Note the way the Invert Selection action loops from the 0 Files State back to the 0 Files State. That models the way Invert Selection does nothing if there’s nothing to invert.

FIGURE 1 The state model (the states 0 Files, 1 Selected File, and 1 Unselected File, connected by the Create, Delete, and Invert Selection actions)
Very Pretty. So What?

Now that you understand how the application works, you could manually test these actions and verify whether the Windows folder behaves as you expect. However, because your understanding is being carried around inside your head, your results are limited by your time and your stamina.

On the other hand, if you could somehow communicate this state model directly from your brain to a computer, it could generate and execute tests on the system for you.
Fortunately, this model can be represented in a format known as a “state table” that the computer can read. Each row of the state table (see Table 1) shows the Ending State that will result when an action is applied to the application in the Starting State.

TABLE 1 A state table for behavior of files in a Windows folder

Starting State        Action             Ending State
0 Files               Invert Selection   0 Files
0 Files               Create             1 Selected File
1 Selected File       Invert Selection   1 Unselected File
1 Selected File       Delete             0 Files
1 Unselected File     Invert Selection   1 Selected File
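To make the idea concrete, here is a minimal sketch—my illustration, not code from the article—of how this state table might be written down as data for a test generator. The state and action names come straight from Table 1; the Python representation, the TRANSITIONS list, and the helper functions are assumptions introduced only for illustration.

# A hypothetical encoding of Table 1: each row is (starting state, action, ending state).
TRANSITIONS = [
    ("0 Files",           "Invert Selection", "0 Files"),
    ("0 Files",           "Create",           "1 Selected File"),
    ("1 Selected File",   "Invert Selection", "1 Unselected File"),
    ("1 Selected File",   "Delete",           "0 Files"),
    ("1 Unselected File", "Invert Selection", "1 Selected File"),
]

def actions_from(state):
    """Which actions does the model say are possible in this state?"""
    return [action for (start, action, end) in TRANSITIONS if start == state]

def next_state(state, action):
    """What ending state does the model predict for this action?"""
    for (start, act, end) in TRANSITIONS:
        if start == state and act == action:
            return end
    raise ValueError(f"Model does not allow {action!r} in state {state!r}")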
Use the Computer’s Brain, Too
Once we have put the state model into a state table that the computer can understand, what can the computer do for us? How can we exploit our information about the application’s behavior?

The computer can use the state table to generate sequences of tests to run against the application. As you will see in the following examples, these test sequences can be chosen for their novelty, their efficiency of testing, or their exhaustiveness. This test generation is a powerful way to apply your understanding—and this is what model-based testing is all about.
A Random Walk Through the State Model

One simple way to generate test actions is to randomly select any available action from the current state of the application. For example, if you are in the 0 Files Starting State, you can choose either of these two actions:

• Invert Selection (which leaves you in the 0 Files State)
• Create (which leaves you in the 1 Selected File State)

By choosing random actions in this way, you can generate many unusual sequences (just like Tester 3’s random “monkey test program”), and you will eventually exercise all of the actions in the model. Figure 2 shows a typical random walk.

FIGURE 2 A random walk (steps 1 through 5 correspond to the sequence listed below)

Notice that the random walk executed the same action (Invert Selection) four times in a row, but has so far left two other actions untouched. Such is the nature of random testing.
ACTION TO EXECUTE ENDING STATE
1. Create 1 Selected File
2. Invert Selection 1 Unselected File
3. Invert Selection 1 Selected File
4. Invert Selection 1 Unselected File
5. Invert Selection 1 Selected File
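As a sketch of how a generator might produce a walk like the one above—again my illustration rather than the article’s code—the following builds on the hypothetical TRANSITIONS table and helpers shown earlier:

import random

def random_walk(start_state, steps, seed=None):
    """Generate a random sequence of (action, ending state) pairs from the model."""
    rng = random.Random(seed)
    state, walk = start_state, []
    for _ in range(steps):
        action = rng.choice(actions_from(state))   # pick any action the model allows here
        state = next_state(state, action)          # the model predicts the ending state
        walk.append((action, state))
    return walk

# Example: a five-step walk starting with an empty folder.
for step, (action, state) in enumerate(random_walk("0 Files", 5), start=1):
    print(f"{step}. {action} -> {state}")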
An Efficient Walk Through the State Model: The Chinese Postman Walk

Random walks are inefficient at reaching all test actions when the model is large. How can we test each of the actions in the model efficiently?

This turns out to be the same problem a letter carrier faces when delivering mail. Imagine that each of the actions in the model is a street where mail must be delivered—and that each of the states in the model is an intersection where the letter carrier can change direction. Just as the letter carrier must travel down each street to deliver the mail, we must test each action in the model. And in both cases, we would like to minimize the amount of additional travel needed.

A Chinese mathematician named Kwan Mei-Ko formulated an elegant solution to this problem, and it is known as the Chinese Postman algorithm in his honor (see Figure 3). Kwan’s method generates a path through the state model that exercises every action in the model in the fewest number of steps. The test sequence listed below covers all five actions in the model in only five steps. This efficiency can be handy if you have a large application that you want to test quickly.

FIGURE 3 A Chinese Postman walk (five steps that cover every action in the state model)
ACTION TO EXECUTE ENDING STATE
1. Invert Selection 0 Files
2. Create 1 Selected File
3. Invert Selection 1 Unselected file
4. Invert Selection 1 Selected File
5. Delete 0 Files
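The real Chinese Postman algorithm solves a route-inspection problem on the graph; for a model this small, a brute-force breadth-first search over action sequences finds a shortest all-actions walk just as well. The sketch below takes that shortcut—it is my illustration, not Kwan’s algorithm or the article’s code—and reuses the hypothetical TRANSITIONS table from the earlier sketch:

from collections import deque

def shortest_all_actions_walk(start_state, transitions=None):
    """Brute-force a shortest walk that exercises every transition at least once.
    Fine for tiny models; a real Chinese Postman implementation scales far better."""
    transitions = transitions if transitions is not None else TRANSITIONS
    all_edges = frozenset(transitions)
    start = (start_state, frozenset())
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (state, covered), path = queue.popleft()
        if covered == all_edges:
            return path                      # list of (action, ending state) steps
        for edge in transitions:
            begin, action, end = edge
            if begin != state:
                continue
            nxt = (end, covered | {edge})
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(action, end)]))
    return None

for step, (action, state) in enumerate(shortest_all_actions_walk("0 Files"), start=1):
    print(f"{step}. {action} -> {state}")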
An Even More Efficient Walk: The State-Changing Chinese Postman Walk

Some actions in a model—such as hitting Invert Selection with 0 Files in the folder—do not change the state of the application. If you think that bugs are more likely to occur where the application changes state, you may want to prioritize your efforts by first testing the state-changing actions.

A simple way to do this is to filter out from the state table any actions that don’t change the state. In Table 1, that would remove the first action (Invert Selection). Running the Chinese Postman algorithm over this reduced state model generates a test sequence that covers all of the model’s state-changing actions in four steps—essentially removing the first step of the previous example:
ACTION TO EXECUTE ENDING STATE
1. Create 1 Selected File
2. Invert Selection 1 Unselected File
3. Invert Selection 1 Selected File
4. Delete 0 Files
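In the illustrative sketches above, filtering the self-loop transitions out of the hypothetical table before generating the walk is a one-liner:

# Keep only transitions whose ending state differs from the starting state.
state_changing = [t for t in TRANSITIONS if t[0] != t[2]]

# Reusing the brute-force generator on the reduced model yields the four-step sequence.
for step, (action, state) in enumerate(
        shortest_all_actions_walk("0 Files", state_changing), start=1):
    print(f"{step}. {action} -> {state}")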
Shortest Round Trips Back to the Starting State

Suppose you want to exhaustively test every sequence that takes the Windows folder from the 0 Files State back to the 0 Files State in a certain number of steps or less? Sequences like these that constantly generate new variations would be unthinkable for Tester 2’s static automation.

It is trivial for a computer to generate a list of such paths from the state model. You can generate sequences of increasing length as long as you have computer cycles to burn, probing deeper and deeper into the model.

Figure 4 shows all round trips that start at the 0 Files State and have a path length less than or equal to four steps.

FIGURE 4 All round trips in four steps or less (paths A, B, and C below)

Path A has a length of one step:
A1: Invert Selection

Path B has a length of two steps:
B1: Create
B2: Delete

Path C has a length of four steps:
C1: Create
C2: Invert Selection
C3: Invert Selection
C4: Delete
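A sketch of how such round trips might be enumerated from the hypothetical transition table (illustrative only; round_trips, its recursion, and the max_len parameter are my own names, not the article’s):

def round_trips(home_state, max_len):
    """Enumerate every action sequence that leaves home_state and returns to it
    in max_len steps or fewer."""
    results = []

    def extend(state, path):
        if path and state == home_state:
            results.append(path)
            return                      # stop at the first return, like paths A, B, and C
        if len(path) == max_len:
            return
        for start, action, end in TRANSITIONS:
            if start == state:
                extend(end, path + [action])

    extend(home_state, [])
    return results

for trip in round_trips("0 Files", 4):
    print(" -> ".join(trip))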
Use the Computer’s Hands

The output of each of these algorithms is a sequence of test actions to execute. What would be the best way to perform these actions? You could hand a human tester the list of actions to execute by hand—but this would be slow, tedious, and cruel. Who would want to spend their day executing lists of actions? Such repetitious work is the kind of mind-numbing scenario that caused poor Tester 1 such grief.
Instead, you can write a simple test execution program that will read the list and then execute test code for each action in that list.

For instance, in Visual Test, the code to implement the Create action is:

WToolbarButtonClick("@1","File")   ' Open the File menu
WMenuSelect("New")                 ' Select New File
WMenuSelect("Text Document")       ' Choose Text Document
Play "{Enter}"                     ' Accept the default filename
In typical static automation, this code would be embedded in a script—but in a model-based test program, this snippet of code is invoked whenever the list of test actions says to perform the Create action.
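One common way to wire a generated action list to test code like the snippet above is a small dispatch table. The sketch below is mine, in Python rather than Visual Test, with placeholder functions standing in for the real GUI automation calls:

def do_create():
    # Placeholder: a real harness would drive the GUI here,
    # e.g. open the File menu and create a new text document.
    print("executing Create")

def do_delete():
    print("executing Delete")

def do_invert_selection():
    print("executing Invert Selection")

# Map each model action name to the code that performs it.
ACTION_CODE = {
    "Create": do_create,
    "Delete": do_delete,
    "Invert Selection": do_invert_selection,
}

def execute(walk):
    """Run a generated list of (action, expected ending state) steps."""
    for action, expected_state in walk:
        ACTION_CODE[action]()   # perform the action on the application
        # ...here a test oracle would verify the application reached expected_state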
Use the Computer’s Eyes
Automating the test actions is only half the battle. You also need an automated method of determining if the application is working correctly.

This method—a function that determines if the application has behaved correctly in response to a test action—is called a test oracle. Some test methods, such as Tester 3’s random monkey test programs, must rely on crude test oracles such as whether the application has crashed.

Model-based testing gives the test program the ability to check for indicators of good behavior more subtle than “didn’t crash.” From the information in the state table, the model “knows” what actions should be available from each state and the expected outcome of each action. For instance, the model says that the test program should be able to execute the Delete action from the 1 Selected File State. The model also says that executing that Delete action should leave the application in the 0 Files State. This knowledge provides two ways to verify that the application has behaved correctly.

First, the test program can detect if an action is not available when it should be. If the Delete action is not available when the application is in the 1 Selected File State, the test program will report an error because the test code will fail when it finds no menu selection for Delete.

Second, the model is always aware of what state the application should be in. Knowing the expected ending state of each action means that we can create test oracle routines to check (at the conclusion of each action) that the appropriate number of files are present and selected in the folder. For instance, when the Delete action above is executed, the Ending State should have 0 Files in the folder (and of course, 0 Files Selected).
Programmatic test languages usually provide functions that allow the test program to verify aspects of the application. Two useful Visual Test functions for the current model are:

• WViewCount( ), which indicates the number of files in the folder, and
• WViewItemSelected( ), which tells how many files in the folder are Selected.

The test program can verify that the application is in the correct state, as shown in Table 2.

TABLE 2 A state table showing Visual Test functions WViewCount( ) and WViewItemSelected( )

Application State     Expected WViewCount( )   Expected WViewItemSelected( )
0 Files               0                        0
1 Selected File       1                        1
1 Unselected File     1                        0

The Delete action discussed above should leave the application in the 0 Files State. If WViewCount( ) returns a value other than 0, the test program oracle will report an error because the number of files in the folder is incorrect.
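An oracle routine built on Table 2 might look like the following sketch (mine, not the article’s; the EXPECTED table mirrors Table 2, and view_count/view_item_selected are assumed wrappers around the real WViewCount( ) and WViewItemSelected( ) calls):

# Expected (file count, selected count) for each model state, straight from Table 2.
EXPECTED = {
    "0 Files":           (0, 0),
    "1 Selected File":   (1, 1),
    "1 Unselected File": (1, 0),
}

def check_state(expected_state, view_count, view_item_selected):
    """Compare what the application reports against what the model predicts."""
    want_count, want_selected = EXPECTED[expected_state]
    got_count = view_count()                 # e.g. wraps WViewCount( )
    got_selected = view_item_selected()      # e.g. wraps WViewItemSelected( )
    if (got_count, got_selected) != (want_count, want_selected):
        raise AssertionError(
            f"Expected {expected_state}: {want_count} file(s), {want_selected} selected; "
            f"application reports {got_count} file(s), {got_selected} selected")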
How to Update Model-based Tests

Remember how Tester 2’s static test automation attempts were frustrated by application changes? Tester 4, in contrast, was able to adapt the model-based test automation quickly to changes in the application.

Incorporating New Actions into the Model

Suppose your development team tells you that they have implemented the Select All action. How should you update your tests for this new action? Simple—upgrade the state model to incorporate the Select All action, and regenerate the tests.

First, you model the Select All action by answering our two basic questions:
1. When is Select All possible? Select All is always possible in this model, even when there are 0 Files in the folder.

2. What is the outcome when Select All is executed? When you execute Select All, all the files in the folder become Selected. If there are 0 Files in the folder, Select All leaves the folder unchanged. This is indicated in the illustration below, where the Select All action loops from the 0 Files State back to the 0 Files State.

Figure 5 shows the new state model with the Select All action incorporated.

FIGURE 5 State model including Select All
Running the Chinese Postman algorithm on the updated model (see Figure 6, a Chinese Postman walk on the new state model) gives a nine-step test sequence—using the 0 Files Starting State—that exercises every action in the model, including the new Select All:
ACTION TO EXECUTE ENDING STATE
1. Invert Selection 0 Files
2. Create 1 Selected File
3. Invert Selection 1 Unselected File
4. Select All 1 Selected File
5. Invert Selection 1 Unselected File
6. Invert Selection 1 Selected File
7. Select All 1 Selected File
8. Delete 0 Files
9. Select All 0 Files
The next step would be to determine the code that is used to invoke the Select All action whenever it occurs in the test sequence. For Visual Test it would be as follows:
WToolbarButtonClick("@1","Edit")
WMenuSelect("Select All")
In Summary
It can take significant effort to understand and model an application. And it can be difficult to leave the easy path of hands-on testing or static automation long enough to invest time thinking about how to test that application—as we saw in the trials and tribulations of our fairy tale’s four testers.

The rewards, however, are great:

• Model-based testing creates flexible, useful test automation from practically the first day of development.
• Models are simple to modify, so model-based tests are economical to maintain over the life of a project.
• Models can generate innumerable test sequences tailored to your needs.
• Models allow you to get more testing accomplished in a shorter amount of time because a test generator can create and verify test sequences around the clock on multiple machines.
• Model-based testing can supplement other forms of testing, and can perform tests that aren’t practical under other test approaches.
You and I know that software testing is no fairy tale, and that happily-ever-afters are never guaranteed. But adding model-based intelligence to your testing is a powerful tool to help you find your way toward your own happy ending. STQE
Harry Robinson is software test lead with the Intelligent Search Group at Microsoft. He maintains the Model-Based Testing Home Page (www.model-based-testing.org), and is a long-time advocate and practitioner of model-based testing.