Conference Paper

Testability measures with concurrent good simulation

Authors:

Abstract

Methods designed to speed up the computation of controllability within an event-driven fault-free simulation environment are described. Input generation techniques are presented that reduce the activity of the simulator, thus achieving a considerable speed-up. Concurrent simulation of fault-free devices is shown to be very effective in this domain. Experimental results are reported for benchmark circuits. No general law has been derived, but a rich set of heuristics has been collected that may be useful in many other cases.
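The abstract does not spell out the authors' algorithm, so the following is only a minimal sketch of the idea it describes: an event-driven fault-free (good) simulator whose per-node signal statistics are turned into controllability estimates. The gate set, data structures, and counting rule below are assumptions made for illustration.

```python
# A minimal sketch (not the authors' implementation): event-driven
# fault-free (good) simulation of a combinational netlist, collecting
# per-node signal statistics that yield 1-controllability estimates.
from collections import defaultdict, deque

GATES = {
    "AND": lambda ins: int(all(ins)),
    "OR":  lambda ins: int(any(ins)),
    "NOT": lambda ins: int(not ins[0]),
    "BUF": lambda ins: ins[0],
}

def controllability(netlist, primary_inputs, vectors):
    """netlist: {node: (gate_type, [input_nodes])};
    vectors: list of {primary_input: 0/1}.  Returns the fraction of
    vectors driving each node to 1 (1-controllability estimate)."""
    fanout = defaultdict(list)
    for node, (_, ins) in netlist.items():
        for i in ins:
            fanout[i].append(node)

    value = {n: 0 for n in list(netlist) + list(primary_inputs)}
    one_count = defaultdict(int)
    first_vector = True

    for vec in vectors:
        queue = deque()
        for pi, v in vec.items():
            if value[pi] != v:              # schedule an event only on a change
                value[pi] = v
                queue.extend(fanout[pi])
        if first_vector:                    # settle every gate once at start-up
            queue.extend(netlist)
            first_vector = False
        while queue:                        # event-driven propagation
            node = queue.popleft()
            gate, ins = netlist[node]
            new = GATES[gate]([value[i] for i in ins])
            if new != value[node]:
                value[node] = new
                queue.extend(fanout[node])
        for n in value:                     # accumulate signal statistics
            one_count[n] += value[n]

    return {n: one_count[n] / len(vectors) for n in value}
```

Under this reading, the input generation techniques mentioned in the abstract would amount to choosing vectors that place few events on the queue; that interpretation is likewise an assumption.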

... concurrent good-machine simulation [8]. In both cases there is a reference machine and the others are represented only where they differ. TPDL* has been extended to allow the user to handle concurrent simulation experiments. ...
Article
SUMMARY This paper presents TPDL* (extended temporal profile description language), a general-purpose language to observe and condition dynamic systems by means of temporal and logical expressions. It describes how time is modelled in TPDL*, gives an overview of the language through its basic types, primitives and conditional constructs, and its use in computer-aided design of digital systems. The paper discusses TPDL*'s facilities to support the description of hardware behaviour, to define the environment in which devices operate, and to observe and control both circuits and environments. The characteristics of the language are demonstrated through some representative examples.
Article
This paper presents a new software package named ASIC2000TA, developed for design for test (DFT) with the aim of optimizing test logic. The software consists of two modules: a test analysis module and a DFT module. The test analysis module can examine a circuit's testability, generate test vectors and perform fault simulation, for which several algorithms are described. The DFT module automatically inserts test logic into the gate-level netlist, including full scan and partial scan, for which a greedy search algorithm is discussed. Electronic design interchange format (EDIF) acts as an interface between ASIC2000TA and Cadence. An experiment with ASIC2000TA is presented.
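The greedy search mentioned above is not specified in the abstract; the sketch below shows one generic shape such a partial-scan selection loop could take. The gain function and the scan budget are hypothetical placeholders, not ASIC2000TA's actual criteria.

```python
# Hedged sketch of a greedy partial-scan selection loop.  gain() is a
# hypothetical testability-improvement estimate; the abstract only
# states that a greedy search is employed.
def greedy_partial_scan(flip_flops, gain, budget):
    """flip_flops: candidate FFs; gain(ff, selected): estimated testability
    improvement from scanning ff given the set already selected;
    budget: maximum number of scan flip-flops to insert."""
    selected = set()
    while len(selected) < budget:
        candidates = [ff for ff in flip_flops if ff not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda ff: gain(ff, selected))
        if gain(best, selected) <= 0:   # stop when no candidate still helps
            break
        selected.add(best)
    return selected
```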
Conference Paper
The authors present a comparative approach to some testability analysis methods for application to VLSI devices. Using a common framework of implementations and test cases, they compared the results between analysis methods and with those provided by fault simulation or exact calculation where possible. The methods dealt with are the weighted averaging algorithm, COP, the cutting algorithm, STAFAN, and PREDICT.
Article
Full-text available
Experimental evidence shows that low testability in a typical circuit is much more likely due to poor observability than poor controllability. Thus, from theoretical and practical standpoints, it is important to develop an accurate model for observability computation. One such model, in terms of supergates, is proposed in the first part of this paper, thus complementing our earlier work. It is now possible to obtain exact random-pattern testability for each line in a circuit. The second part of the paper analyzes the supergate structure of a circuit from a graph theoretic viewpoint. Finding a supergate is related to determining the dominator tree in a modified circuit graph which provides an exact bound on the complexity of computation. The uniqueness of the cover extends to multiple-output circuits, and the complexity of finding it is shown to be quadratic in the number of nodes in the circuit graph. By prior determination of the maximal supergate cover, unnecessary duplication in computation of testability values can be avoided.
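The supergate construction itself is not reproduced in the abstract, but the dominator relation it is tied to can be illustrated with the standard iterative data-flow computation below. The graph orientation (primary output as root, edges followed toward the inputs) is an assumed modelling choice for illustration only.

```python
# Illustrative only: iterative computation of dominator sets on a DAG.
# preds maps each node to its predecessors on paths from the root; for a
# circuit graph the root would be the primary output (assumed modelling).
def dominators(nodes, preds, root):
    dom = {n: set(nodes) for n in nodes}
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == root:
                continue
            p_doms = [dom[p] for p in preds[n]]
            new = {n} | (set.intersection(*p_doms) if p_doms else set())
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Example: root -> a -> b and root -> b; b is dominated by root only,
# while a is dominated by {root, a}.
nodes = ["root", "a", "b"]
preds = {"root": [], "a": ["root"], "b": ["root", "a"]}
print(dominators(nodes, preds, "root"))
```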
Conference Paper
Full-text available
This is not a technical paper, but a reference to the magnetic tape containing the “ISCAS’85 combinational benchmark circuits”. The tape was distributed to the authors who contributed to the special session of ISCAS’85. Users were asked to cite the tape in any publication as follows: Franc Brglez and Hideo Fujiwara, "A neutral netlist of 10 combinational circuits and a targeted translator in FORTRAN," Special Session on Recent Algorithms for Gate-Level ATPG with Fault Simulation and Their Performance Assessment, 1985 IEEE Int. Symp. on Circuits and Systems, June 5-7, 1985, Kyoto, Japan.
Article
Full-text available
The complexity and density of digital circuits are straining the capacity of computers when solving two basic aspects of the testing problem: test pattern generation to verify correct operation of the device, and assessment of the test patterns' effectiveness with regard to the coverage of the postulated faults. The application of emerging design for testability (DFT) techniques can reduce the testing problem to one of testing large combinational modules. Still, even with a high number of on-chip generated random patterns, the question of fault coverage remains. This paper examines a testability algorithm that measures both the test pattern generation requirements and the susceptibility to random pattern testing of large combinational networks.
Article
Full-text available
Abstract: When test vectors are applied to a circuit, the fault coverage increases. The rate of increase, however, could be circuit dependent. A relation between the average fault coverage and circuit testability is developed in this paper. The statistical formulation allows computation of coverage for deterministic and random vectors. We discuss the following applications of this analysis: determination of circuit testability from fault simulation, coverage prediction from testability analysis, prediction of test length, and test generation by fault sampling. Index terms: fault coverage estimation, probabilistic testability, random pattern testability, statistical sampling, testability measures.
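The statistical relation itself is not quoted in the abstract; the expression below is a hedged reconstruction of the standard form used in this line of work, with d_i the per-vector detection probability of fault i in the fault set F and n the number of random vectors applied.

```latex
% Hedged reconstruction (notation assumed, not quoted from the paper):
% expected fault coverage after applying n random vectors.
\[
  \overline{C}(n) \;=\; 1 - \frac{1}{|F|} \sum_{i \in F} \left(1 - d_i\right)^{n}
\]
```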
Article
A technique called PRobabilistic Estimation of DIgital Circuit Testability (PREDICT) is presented. Node controllabilities and observabilities are defined in terms of signal probabilities. A graph approach is used to compute these probabilities exactly using Shannon's expansion. A proposed approximation of this procedure keeps the computational complexity within reasonable bounds while providing a tradeoff between accuracy and computational cost in terms of the value of an easily interpreted parameter. Experimental results are given to demonstrate the effectiveness of PREDICT.
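PREDICT's exact graph procedure is not reproduced here; the sketch below only illustrates the Shannon-expansion principle it builds on, using a plain Boolean-function representation rather than the paper's graph data structure. Its cost is exponential in the number of inputs, which is precisely what motivates the proposed approximation.

```python
# Illustration of signal probability via Shannon's expansion:
# P(f) = p_x * P(f | x=1) + (1 - p_x) * P(f | x=0), recursing on one
# input at a time.  Exponential in the worst case.
def signal_probability(f, input_probs):
    """f: function taking a dict {input: 0/1}; input_probs: {input: P(input=1)}."""
    def recurse(assign, remaining):
        if not remaining:
            return float(f(assign))
        x, rest = remaining[0], remaining[1:]
        p1 = recurse({**assign, x: 1}, rest)
        p0 = recurse({**assign, x: 0}, rest)
        return input_probs[x] * p1 + (1 - input_probs[x]) * p0
    return recurse({}, list(input_probs))

# Example: f = a AND (b OR c) with all inputs equally likely to be 1.
f = lambda v: v["a"] & (v["b"] | v["c"])
print(signal_probability(f, {"a": 0.5, "b": 0.5, "c": 0.5}))   # prints 0.375
```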
Article
SUMMARY Empirical observations are used to derive analytic formulae for test volumes, parallel fault simulation costs, deductive fault simulation costs, and minimum test pattern generation costs for LSSD logic structures. The formulae are significant in projecting growth trends for test volumes and various test generation costs with increasing gate count G. Empirical data is presented to support the thesis that test volume grows linearly with G for LSSD structures that cannot be partitioned into disjoint substructures. Such LSSD structures are referred to as "coupled" structures. Based on the empirical observation that the number of latches in an LSSD logic structure is proportional to the gate count G, it is shown that the logic test time for coupled structures grows as G². It is also shown that (i) parallel fault simulation costs grow as G³, (ii) deductive fault simulation costs grow as G², and (iii) the minimum test pattern generation costs grow as G².
Article
A major problem in self testing with random inputs is verification of the test quality, i.e., the computation of the fault coverage. The brute-force approach of using full fault simulation does not seem attractive because of the logic structure volume and the CPU time involved. A new approach is therefore necessary. This paper describes a new analytical method of computing the fault coverage that is fast compared with simulation. If the fault coverage falls below a certain threshold, it is possible to identify the "random-pattern-resistant" faults, modify the logic to make them easy to detect, and thus increase the fault coverage of the random test.
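The analytical method is not given in the abstract; the fragment below merely illustrates the kind of reasoning it enables, estimating coverage from per-fault detection probabilities and flagging faults unlikely to be caught within a given random test length. The names and the threshold rule are assumptions.

```python
# Hedged illustration: coverage estimate and identification of
# "random-pattern-resistant" faults from per-fault detection probabilities.
def coverage_and_resistant_faults(det_prob, n_vectors, min_detect_prob=0.95):
    """det_prob: {fault: probability a single random vector detects it}.
    A fault is flagged as random-pattern resistant if the chance of
    detecting it at least once in n_vectors falls below min_detect_prob."""
    resistant = []
    expected_detected = 0.0
    for fault, d in det_prob.items():
        p_detect = 1.0 - (1.0 - d) ** n_vectors
        expected_detected += p_detect
        if p_detect < min_detect_prob:
            resistant.append(fault)
    coverage = expected_detected / len(det_prob)
    return coverage, resistant
```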
Article
This paper describes the theory and application of a testability assessment program, called CAMELOT, which has been developed for British Telecom. The program is intended for use as an interactive design aid, the testability predictions produced allowing the designer to make the best use of the range of testability improvement techniques available. CAMELOT uses a model of the test generation process to calculate controllability and observability values for each circuit node. These values measure the ease with which the node concerned can be set to a desired logic level and the ease with which its state can be observed, respectively. The calculations take into account the possibility that a stored state device may have either an initialisation requirement or transient states. By combining the sets of values calculated, testability estimates can be produced for each circuit node and for the circuit as a whole. The paper describes the concepts of CAMELOT, relates it to other testability measures that have been described in the open literature and gives examples of its use.
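CAMELOT's published formulas are not reproduced in the abstract; the sketch below only shows the general shape of such a measure, combining per-node controllability and observability values into a node testability and averaging these into a circuit-level figure. The product rule and the averaging are assumptions for illustration, not CAMELOT's actual equations.

```python
# Illustration only: per-node controllability (cy) and observability (oy)
# values, each assumed to lie in [0, 1], combined into a node testability
# and an overall circuit figure.
def node_testability(cy, oy):
    return cy * oy

def circuit_testability(cy, oy):
    """cy, oy: dicts mapping every circuit node to its two values."""
    return sum(node_testability(cy[n], oy[n]) for n in cy) / len(cy)
```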
Conference Paper
The problem is presented in the context of some recent theoretical advances on a related problem, called random satisfiability. These recent results indicate the theoretical limitations inherent in the problem of computing signal probabilities. Such limitations exist even if one uses Monte Carlo techniques for estimating signal probabilities. Theoretical results indicate that any practical method devised to compute signal probabilities would have to be evaluated purely on an empirical basis. An improved algorithm is offered for estimating the signal probabilities that takes into account the first-order effects of reconvergent input leads. It is demonstrated that this algorithm is linear in the product of the size of the network and the number of inputs. Empirical evidence is given indicating the improved performance obtained using this method over the straightforward probability computations. The results are very good, and the algorithm is very fast and easy to implement.
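The first-order correction for reconvergent fan-out is not detailed in the abstract; the sketch below shows only the straightforward propagation it improves upon, in which each gate's output probability is computed from its input probabilities under an independence assumption.

```python
# Straightforward signal-probability propagation (independence assumed at
# every gate input), i.e. the baseline the improved algorithm refines.
def propagate(netlist, input_probs, order):
    """netlist: {node: (gate, [inputs])}; order: internal nodes in topological order."""
    p = dict(input_probs)
    for node in order:
        gate, ins = netlist[node]
        q = [p[i] for i in ins]
        if gate == "AND":
            prob = 1.0
            for x in q:
                prob *= x
        elif gate == "OR":
            prob = 1.0
            for x in q:
                prob *= (1.0 - x)
            prob = 1.0 - prob
        elif gate == "NOT":
            prob = 1.0 - q[0]
        else:                      # BUF / fanout stem
            prob = q[0]
        p[node] = prob
    return p
```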
Conference Paper
The testing of large logic networks with random patterns is examined. Previous work on single faults is extended to a class of multiple-fault situations. Not only is the problem of fault detection in the presence of nonmasking multiple faults treated, but the question of distinguishing between them is also examined. It is shown that a test that merely exposes each fault has a high probability of distinguishing between the faults. The relationships between test quality, diagnostic resolution, and random pattern test length are developed. The results have application to self-test schemes that use random patterns as stimuli.
Conference Paper
TMEAS is a program that implements a testability measure for digital circuits. In this paper important features of TMEAS are described, including the circuit models it assumes, the testability measure it applies, and the various commands for analyzing the information generated by the testability measure. In addition, a discussion is provided of possible use modes for TMEAS and some examples of its use so far.
Conference Paper
Empirical observations are used to derive analytic formulae for test volumes, parallel fault simulation costs, deductive fault simulation costs, and minimum test pattern generation costs for LSSD logic structures. The formulae are significant in projecting growth trends for test volumes and various test generation costs with increasing gate count G. Empirical data is presented to support the thesis that test volume grows linearly with G for LSSD structures that cannot be partitioned into disjoint substructures. Such LSSD structures are referred to as "coupled" structures. Based on the empirical observation that the number of latches in an LSSD logic structure is proportional to the gate count G, it is shown that the logic test time for coupled structures grows as G². It is also shown that (i) parallel fault simulation costs grow as G³, (ii) deductive fault simulation costs grow as G², and (iii) the minimum test pattern generation costs grow as G². Based on these projections some future testing problems become apparent.
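Restated compactly (the notation is assumed; proportionality constants are circuit dependent and not given), the projected growth trends with gate count G are:

```latex
% Projected growth trends with gate count G, as stated in the abstract.
\begin{align*}
  \text{test volume (coupled LSSD)}           &\;\propto\; G \\
  \text{logic test time}                      &\;\propto\; G^{2} \\
  \text{parallel fault simulation cost}       &\;\propto\; G^{3} \\
  \text{deductive fault simulation cost}      &\;\propto\; G^{2} \\
  \text{minimum test pattern generation cost} &\;\propto\; G^{2}
\end{align*}
```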
Article
Statistical fault analysis, or STAFAN, is proposed as an alternative to fault simulation of digital circuits. This method defines controllabilities and observabilities of circuit nodes as probabilities estimated from the signal statistics of fault-free simulation. Special procedures deal with these quantities at fanout and feedback nodes. The computed probabilities are used to derive unbiased estimates of fault detection probabilities and overall fault coverage for the given set of input vectors. Fault coverage and undetected-fault data obtained for actual circuits are shown to agree within five percent of fault simulator results, while CPU time and memory demands are far lower than those of fault simulation. The computational complexity added to a fault-free simulator by STAFAN grows only linearly with the number of circuit nodes.
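STAFAN's counting rules, including the special handling of fanout and feedback nodes, are not in the abstract; the sketch below illustrates only the basic idea of turning fault-free simulation statistics into controllability estimates and per-line detection probabilities. The simplified observability input and all names are assumptions.

```python
# Hedged illustration of the STAFAN idea: probabilities estimated from
# fault-free simulation statistics.  Fanout/feedback handling and the
# derivation of observability itself are omitted here.
def stafan_estimates(one_count, n_vectors, observability):
    """one_count[l]: number of vectors driving line l to 1;
    observability[l]: assumed externally supplied estimate in [0, 1]."""
    estimates = {}
    for line, ones in one_count.items():
        c1 = ones / n_vectors               # 1-controllability
        c0 = 1.0 - c1                       # 0-controllability
        # Stuck-at-0 on this line is detected when the line is driven to 1
        # and the fault effect is observable at an output (dually for s-a-1).
        d_sa0 = c1 * observability[line]
        d_sa1 = c0 * observability[line]
        estimates[line] = {"C0": c0, "C1": c1, "D_sa0": d_sa0, "D_sa1": d_sa1}
    return estimates
```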
Article
Fault simulation of circuits described at multiple levels of abstraction (RT, gate, switch) is a major problem in the area of CAD and testing. Although the concurrent paradigm is generally acknowledged as the most efficient, several techniques are crucial to successfully extend it to multilevel simulation of large circuits. In particular, based on multilist traversal, fraternal event processing, list events, and levelizing, advances are presented here in simulation speed, accuracy, and generality. For zero-delay elements, the simulation of irrelevant activity is avoided, but the accuracy of structural (interconnect) logic simulation is maintained. What is described here has been implemented in MOZART, and detailed experimental results are reported. Relative to the good machine, the average faulty machine is simulated 900 to 17 000 times faster. The approach presented is not restricted to fault simulation, and is thus applicable to the new area of concurrent case simulation.
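MOZART's multilist traversal and event handling are beyond this abstract; the sketch below only illustrates the concurrent paradigm it refers to, in which a reference (good) machine holds every node value and each concurrent machine stores just the nodes on which it diverges. The class and its methods are assumptions for illustration.

```python
# Hedged illustration of the concurrent-simulation representation: a
# reference (good) machine plus per-machine difference records.
class ConcurrentMachines:
    def __init__(self, reference_values):
        self.ref = dict(reference_values)   # node -> value in the reference machine
        self.diffs = {}                     # machine id -> {node: diverged value}

    def set_value(self, machine, node, value):
        if machine is None:                 # update the reference machine
            self.ref[node] = value
            for d in self.diffs.values():   # drop entries that have converged back
                if d.get(node) == value:
                    del d[node]
            return
        if value == self.ref[node]:
            self.diffs.setdefault(machine, {}).pop(node, None)  # converged
        else:
            self.diffs.setdefault(machine, {})[node] = value    # diverged

    def get_value(self, machine, node):
        if machine is None:
            return self.ref[node]
        return self.diffs.get(machine, {}).get(node, self.ref[node])
```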
Article
Algorithmic test generation for high fault coverage is an expensive and time-consuming process. As an alternative, circuits can be tested by applying pseudorandom patterns generated by a linear feedback shift register (LFSR). Although no fault simulation is needed, analysis of pseudorandom testing requires the circuit detectability profile. Measures of test quality are developed for pseudorandom testing. These include an exact expression and an approximation for the expected fault coverage. The influence of each fault on the expected fault coverage can then be evaluated. Relationships between test confidence, fault coverage, fault detectability, and test length are also examined. Previous analyses of pseudorandom testing have often used random testing as an approximation. It is shown that the random test model is not in general a good approximation. Finally, the analysis of the pseudorandom input vector model is extended to situations where the size of the test pattern generator is not equal to the number of inputs to the circuit.
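The coverage expressions themselves are not given in the abstract; the fragment below only shows the kind of pseudorandom source being analyzed, a maximal-length Fibonacci LFSR producing test patterns. The 16-bit width and polynomial are one well-known choice, not taken from the paper.

```python
# A 16-bit Fibonacci LFSR used as a pseudorandom test-pattern source,
# with feedback polynomial x^16 + x^14 + x^13 + x^11 + 1 (a known
# primitive polynomial, giving period 2**16 - 1).
def lfsr_patterns(seed=0xACE1, count=10):
    state = seed & 0xFFFF
    assert state != 0, "an all-zero seed locks the LFSR"
    for _ in range(count):
        yield state                         # one test pattern per state
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)

for pattern in lfsr_patterns(count=5):
    print(f"{pattern:016b}")
```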
Optimizing algorithms and data structures for concurrent simulation
  • G P Cabodi
Characteristics of Statistical Fault Analysis
  • S K Jain
  • D M Singer
Probabilistically guided test generation
  • V D Agrawal
  • S C Seth
  • C C Chuang