Conference Paper

A Tool Suite for Diagnosis and Testing of Software Design Specifications

Telcordia Technologies, Morristown, NJ
DOI: 10.1109/ICDSN.2000.857553 Conference: 2000 International Conference on Dependable Systems and Networks (DSN 2000) (formerly FTCS-30 and DCCA-8), 25-28 June 2000, New York, NY, USA
Source: DBLP

ABSTRACT Available statistical data shows that the cost of finding and
repairing software defects rises dramatically in later development
stages. Much research has applied verification and validation techniques
to prove correctness with respect to particular properties. Such
approaches and software testing are complementary: testing reveals some
errors that cannot easily be identified through verification, and vice
versa. Generating implementation code from design specifications is
another route to reliable software, provided the designs themselves are
highly reliable. This paper presents a dynamic slicing technology and an
accompanying tool suite for understanding, diagnosing, and testing
software design specifications. We apply state-of-the-art techniques for
coverage testing, diagnosis, and understanding of software source code
to software designs. We simulate the specifications to collect the
execution traces needed to compute coverage and slicing data. Our
technology first generates a flow diagram from a specification and then
automatically analyses the coverage features of the diagram. It collects
the corresponding flow data during simulation and maps it onto the flow
diagram. Coverage information for the original specification is then
obtained from the coverage information of the flow diagram. This
technology has been used for C, C++, and Java, and has proven effective.
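
The core mechanism described above, mapping simulated execution traces onto a flow diagram to derive coverage for the original specification, can be conveyed with a minimal sketch. The flow diagram, node names, and trace format below are illustrative assumptions, not the paper's actual data structures.

```python
# Minimal sketch of trace-based coverage on a flow diagram.
# The nodes, edges, and trace format are illustrative assumptions.

# Flow diagram extracted from a (hypothetical) specification:
# nodes are states/blocks, edges are possible transitions.
EDGES = {
    ("idle", "connect"),
    ("connect", "active"),
    ("connect", "error"),
    ("active", "idle"),
    ("error", "idle"),
}

def edge_coverage(traces):
    """Map simulated execution traces onto the flow diagram and
    report which edges were exercised and which were missed."""
    covered = set()
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            if (src, dst) in EDGES:
                covered.add((src, dst))
    return covered, EDGES - covered

# One simulation run that never reaches the error branch.
covered, missed = edge_coverage([["idle", "connect", "active", "idle"]])
print("covered:", sorted(covered))
print("missed :", sorted(missed))   # the error path is untested
```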

Cited by
  • ABSTRACT: Due to the increasing requirements imposed on fault-tolerant protocols, their complexity is growing steadily, which makes verifying the functionality of the fault-tolerance mechanisms harder to accomplish. This thesis provides a model-based approach for efficiently finding "loopholes" in the fault-tolerance properties of large protocols. The contributions include thinning out the state space, without missing behavior relevant to the validation goal, through a partial-ordering strategy based on single fault regions. Two algorithms for (partial) analysis are designed, implemented, and evaluated: the H-RAFT algorithm is based on the SDL elements constituting each transition and requires no user knowledge, while the Close-to-Failure algorithm is based purely on user-provided information. Combinations of the two algorithms are also investigated. All contributions exploit the fault-tolerant nature of the protocols. To compare the performance of the novel techniques against well-known algorithms, a tool was developed that allows easy integration of different algorithms. All contributions are investigated thoroughly through experiments totaling several CPU-months. The results show unambiguously the advantages of the developed methods and algorithms.
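
The Close-to-Failure algorithm in the entry above steers exploration using only user-provided information. A minimal sketch of that general idea, heuristic-guided best-first search over a protocol state space, follows; the toy model, heuristic, and failure predicate are assumptions for illustration, not the thesis's actual algorithm.

```python
# Minimal sketch in the spirit of the Close-to-Failure idea:
# states judged "closer" to a suspected failure are expanded first.
# The model, heuristic, and failure predicate are illustrative.

import heapq
import itertools

def guided_search(initial, successors, is_failure, closeness):
    """Best-first exploration; returns a path to a failure state
    or None if the explored space contains none."""
    counter = itertools.count()          # tie-breaker for the heap
    frontier = [(closeness(initial), next(counter), initial, [initial])]
    seen = {initial}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_failure(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier,
                    (closeness(nxt), next(counter), nxt, path + [nxt]),
                )
    return None

# Toy protocol: a counter of consecutively lost acknowledgements;
# three losses in a row count as a fault-tolerance violation.
path = guided_search(
    initial=0,
    successors=lambda s: [0, min(s + 1, 3)],   # ack arrives / is lost
    is_failure=lambda s: s >= 3,
    closeness=lambda s: 3 - s,                 # user-provided hint
)
print(path)   # [0, 1, 2, 3]
```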
  • ABSTRACT: Statistical data analysis shows that early fault detection can cut costs significantly. With improved technology for automatically generating code from architectural design specifications, it becomes even more important to have a highly reliable and dependable architectural design from the beginning. To ensure this, we must predict the “quality” of the system early in the development process. The use of traces or execution histories as an aid to testing and analysis is well established for programming languages like C and C++, but it is rarely applied to software design specifications. We propose a solution by applying our source-code-level technology to coverage testing of software designs represented in a high-level specification and description language such as SDL. Sophisticated dominator analysis provides hints on how to generate efficient test cases that increase, as much as possible with as few tests as possible, the control-flow- and data-flow-based coverage of the SDL specification under test. We extend source-code-based coverage testing to the software design specification level for specification validation. A coverage analysis tool, CATSDL, with user-friendly interfaces was developed to support our method. An illustration demonstrates the feasibility of using our method to validate architectural designs efficiently, in terms of higher testing coverage and lower cost.
    Computer Networks 06/2003; 42(3):359-374. DOI:10.1016/S1389-1286(03)00248-2
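
The dominator analysis that the CATSDL entry above uses to prioritize test targets rests on a simple observation: executing a node also executes every node that dominates it, so a test aimed at a node with many dominators covers the most ground. Below is a minimal sketch of dominator computation on a toy control-flow graph; the graph and ranking heuristic are illustrative assumptions, not CATSDL's actual representation.

```python
# Minimal sketch of the dominator idea behind coverage-test
# prioritization. The tiny CFG below is an illustrative assumption.

def dominators(cfg, entry):
    """Classic iterative dominator computation on a CFG given as
    {node: [successors]}; returns {node: set_of_dominators}."""
    nodes = set(cfg)
    preds = {n: set() for n in nodes}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

CFG = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
dom = dominators(CFG, "entry")
# Rank targets: hitting "d" implies covering "entry" and "a" as well.
for n in sorted(dom, key=lambda n: -len(dom[n])):
    print(n, "implies coverage of", sorted(dom[n] - {n}))
```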
  • ABSTRACT: Software metrics can provide an automated way for software practitioners to assess the quality of their software. The earlier in the software development lifecycle this information is available, the more valuable it is, since changes are much more expensive to make later in the lifecycle. Semantic metrics, introduced by Etzkorn and Delugach, assess software according to the meaning of the software's functionality in its domain. This is in contrast to traditional metrics, which use syntax measures to assess code. Because semantic metrics do not rely on the syntax or structure of code, they can be computed from requirements or design specifications before the system has been implemented. This paper focuses on using semantic metrics to assess systems that have not yet been implemented.
    Proceedings of the 42nd Annual Southeast Regional Conference, Huntsville, Alabama, USA, April 2-3, 2004.
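
Semantic metrics as introduced by Etzkorn and Delugach rely on a conceptual-graph knowledge base; the sketch below only conveys their flavor, scoring a design-level description against hand-picked domain keyword sets. The concepts, keywords, and scoring rule are all simplifying assumptions.

```python
# Very rough sketch of the flavor of a semantic metric: score a
# design-level component by how strongly its described
# responsibilities relate to a single domain concept. Real semantic
# metrics use a conceptual-graph knowledge base; the keyword model
# here is a simplifying assumption.

DOMAIN_CONCEPTS = {
    "billing":  {"invoice", "payment", "charge", "account"},
    "shipping": {"package", "route", "carrier", "delivery"},
}

def semantic_cohesion(description):
    """Fraction of matched terms that belong to the single
    best-matching domain concept (1.0 = all terms, one concept)."""
    words = set(description.lower().split())
    hits = {c: len(words & kw) for c, kw in DOMAIN_CONCEPTS.items()}
    total = sum(hits.values())
    return max(hits.values()) / total if total else 0.0

# Usable before any code exists -- straight from a design document.
print(semantic_cohesion("compute invoice charge per account"))   # 1.0
print(semantic_cohesion("route invoice package to carrier"))     # 0.75
```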