Article · PDF Available

Finite state model based testing on a shoestring

Authors:
  • Serious Quality, LLC

... When similar studies in the literature are examined, it can be seen that performance measurement is carried out using different model-based test tools [5,6,7], each of which takes a different model pattern as its parameter. Model-based test tools take usage scenarios, finite state diagrams [8], state diagrams, or Markov-chain models as input parameters and produce test scenarios or test modules. ...
... There exist numerous techniques (Neto et al. 2007) and tools (Dalal et al. 1999) proposed for this approach. These techniques and tools employ test models that are expressed in various formalisms, such as finite state machines (Robinson 1999), event sequence graphs (Belli 2001; Belli et al. 2006), event flow graphs (Memon et al. 2001), and Markov chains (Whittaker and Thomason 1994). In our approach, the test model is implicitly represented by an LSTM network. ...
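A Markov-chain usage model of the kind cited above (Whittaker and Thomason 1994, in spirit only) can be sketched in a few lines: the next event is chosen according to transition probabilities, so generated test cases follow the expected usage profile. The states, events, and probabilities below are invented for illustration and are not taken from any of the cited works.

```python
import random

# Hypothetical usage model: state -> list of (probability, event, next_state).
CHAIN = {
    "start": [(1.0, "launch", "main")],
    "main":  [(0.7, "browse", "main"), (0.3, "exit", "start")],
}

def generate(chain, state="start", steps=5, seed=None):
    """Produce one test case by a probability-weighted walk over the chain."""
    rng = random.Random(seed)
    events = []
    for _ in range(steps):
        options = chain[state]
        # Pick one outgoing transition, weighted by its probability.
        _, event, state = rng.choices(options, weights=[p for p, _, _ in options])[0]
        events.append(event)
    return events

print(generate(CHAIN, seed=7))  # five events, always beginning with "launch"
```

Because frequent usage paths receive more probability mass, repeated generation exercises common scenarios more often than rare ones, which is the usual rationale for Markov-chain test models.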
Article
Full-text available
We propose a novel technique based on recurrent artificial neural networks to generate test cases for black-box testing of reactive systems. We combine functional testing inputs that are automatically generated from a model together with manually-applied test cases for robustness testing. We use this combination to train a long short-term memory (LSTM) network. As a result, the network learns an implicit representation of the usage behavior that is liable to failures. We use this network to generate new event sequences as test cases. We applied our approach in the context of an industrial case study for the black-box testing of a digital TV system. LSTM-generated test cases were able to reveal several faults, including critical ones, that were not detected with existing automated or manual testing activities. Our approach is complementary to model-based and exploratory testing, and the combined approach outperforms random testing in terms of both fault coverage and execution time.
... There exists a large body of work on MBT techniques (Neto et al. 2007), tools (Dalal et al. 1999), and the different types of models employed, such as finite state machines (Robinson 1999; Chander et al. 2011) and Markov chains (Whittaker and Thomason 1994). There also exist surveys on MBT (Neto et al. 2007) and several case studies (Dalal et al. 1999; Keranen and Raty 2011) in which the effectiveness of MBT is evaluated. ...
Article
Full-text available
Model-based testing relies on models of the system under test to automatically generate test cases. Consequently, the effectiveness of the generated test cases depends on the models. In general, these models are created manually and, as such, are subject to errors such as the omission of certain system usage behavior. Such omitted behaviors are also omitted by the generated test cases. In practice, these faults are usually detected with exploratory testing. However, exploratory testing relies mainly on the knowledge and manual activities of experienced test engineers. In this paper, we introduce an approach and a toolset, ARME, for automatically refining system models based on the recorded testing activities of these engineers. ARME compares the recorded execution traces against the possible execution paths in the test models. These models are then automatically refined to incorporate any omitted system behavior and to update model parameters to focus on the most frequently executed scenarios. The refined models can be used for generating more effective test cases. We applied our approach in the context of three industrial case studies to improve the models for model-based testing of a digital TV system. In all of these case studies, several critical faults were detected after generating test cases based on the refined models. These faults were not detected by the initial set of test cases. They were also missed during the exploratory testing activities.
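The trace-to-model comparison described in the abstract can be illustrated with a minimal sketch. This is not the ARME toolset: the model shape (a dict mapping each state to its allowed events), the initial state, and all event names are hypothetical.

```python
# Detect (state, event) pairs observed in recorded exploratory-testing traces
# that the test model cannot follow -- i.e., candidate model omissions.

def find_omitted_transitions(model, traces):
    """Return (state, event) pairs seen in traces but absent from the model."""
    omissions = set()
    for trace in traces:
        state = "idle"  # assumed initial state of the model
        for event in trace:
            if event not in model.get(state, {}):
                omissions.add((state, event))
                break  # the model cannot follow this trace any further
            state = model[state][event]
    return omissions

# Hypothetical digital-TV model: state -> {event: next_state}.
model = {
    "idle":     {"power_on": "watching"},
    "watching": {"open_menu": "menu", "power_off": "idle"},
    "menu":     {"close_menu": "watching"},
}

# A recorded trace that uses an event the model omits.
traces = [["power_on", "open_menu", "volume_up"]]
print(find_omitted_transitions(model, traces))  # {('menu', 'volume_up')}
```

Each reported pair marks a place where a test engineer exercised behavior the model does not describe, which is exactly the kind of omission that, once added to the model, lets generated test cases cover it.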
... MBT techniques are extensively studied in the literature [3]. There exist tools applied in practice [4] and different formalisms for model specification, such as finite state machines [5], [6] and Markov chains [7]. There also exist case studies [4], [8] evaluating the effectiveness of MBT. ...
... There exists an extensive literature on MBT approaches [7], tools [8], and the different types of models used for MBT, such as finite state machines [9], [10] and Markov chains [5]. Existing studies focus on modeling approaches and test case generation methods. ...
... There exists an extensive literature on model-based testing approaches [10], tools [11], and the different types of models employed, such as finite state machines [12], [13] and Markov chains [14]. There also exist several case studies [10], [15] in which the effectiveness of model-based testing is evaluated. ...
Conference Paper
Model-based testing facilitates automatic generation of test cases by means of models of the system under test. Correctness and completeness of these models determine the effectiveness of the generated test cases. Critical faults can be missed due to omissions in the models, which are primarily created manually. In practice, these faults are usually detected with exploratory testing performed manually by experienced test engineers. In this paper, we propose an approach for refining system models based on the experience and domain knowledge of these test engineers. Our toolset analyzes the execution traces that are recorded during exploratory testing activities and identifies the omissions in system models. The identified omissions guide the refinement of models to be able to generate more effective test cases. We applied our approach in the context of an industrial case study to improve the models for model-based testing of a Digital TV system. After applying our approach, three critical faults were detected. These faults were not detected by the initial set of test cases and they were also missed during the exploratory testing activities.
... Model-based testing is a technique that generates software tests from explicit descriptions of the behavior of an application [Robinson, 1999]. One way of modeling a system is with finite state machines (FSMs) [Gill, 1962], which serve to specify the behavioral aspects of reactive systems. ...
Article
Full-text available
Models are a method of representing software behavior. Graph theory is an area of mathematics that can help us use this model information to test applications in many different ways. This paper describes several graph theory techniques, where they came from, and how they can be used to improve software testing.

What's Wrong with Traditional Software Testing?

Traditional software testing consists of the tester studying the software system and then writing and executing individual test scenarios that exercise the system. These scenarios are individually crafted and can then be executed either manually or by some form of capture/playback test tool. This method of creating and running tests faces at least two large challenges. First, these traditional tests suffer badly from the "pesticide paradox" (Beizer, 1990), in which tests become less and less useful at catching bugs because the bugs they were intended to catch have been caught and fixed. Second, handcrafted test scenarios are static and difficult to change, while the software under test is dynamically evolving as functions are added and changed. When new features change the appearance and behavior of the existing software, the tests must be modified to fit. If it is difficult to update the tests, it will be hard to justify the test maintenance costs. Model-based testing alleviates these challenges by generating tests from explicit descriptions of the application. It is therefore easier to generate and maintain useful, flexible tests.
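As a minimal illustration of the graph-based test generation described above, the sketch below walks a behavioral model at random, emitting one event sequence as a test case. The model, its states, and its events are invented for this example and are not drawn from the paper.

```python
import random

# Hypothetical behavioral model of a media player:
# state -> list of (event, next_state) edges.
GRAPH = {
    "stopped": [("play", "playing")],
    "playing": [("pause", "paused"), ("stop", "stopped")],
    "paused":  [("play", "playing"), ("stop", "stopped")],
}

def random_walk(graph, start, length, seed=None):
    """Generate one test case: a sequence of events from a random walk."""
    rng = random.Random(seed)
    state, events = start, []
    for _ in range(length):
        event, state = rng.choice(graph[state])  # follow one outgoing edge
        events.append(event)
    return events

test_case = random_walk(GRAPH, "stopped", 6, seed=1)
print(test_case)  # always begins with "play": "stopped" has a single edge
```

Unlike a fixed handcrafted scenario, regenerating walks from the model yields fresh event orderings each run, which is one way graph models sidestep the pesticide paradox.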