# Test sequencing problems arising in test planning and design for testability




IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 29, NO. 2, MARCH 1999

Test Sequencing Problems Arising in

Test Planning and Design for Testability

Vijaya Raghavan, Member, IEEE, Mojdeh Shakeri, Member, IEEE, and Krishna R. Pattipati, Fellow, IEEE

**Abstract**—In this paper, we consider four test sequencing problems that frequently arise in the Test Planning and Design for Testability (DFT) process. Specifically, we consider the following problems: 1) How to determine a test sequence that does not depend on the failure probability distribution? 2) How to determine a test sequence that minimizes the expected testing cost while not exceeding a given testing time? 3) How to determine a test sequence that does not utilize more than a given number of tests, while minimizing the average ambiguity group size? 4) How to determine a test sequence that minimizes the storage cost of tests in the diagnostic strategy? We present various solution approaches to solve the above problems and illustrate the usefulness of the proposed algorithms.

**Index Terms**—AND/OR graph search, heuristics, minimal storage testing, minimax test sequencing, multi-objective optimization, test planning.

I. PRELIMINARIES

BEFORE we consider the test sequencing problems arising in Test Planning and Design for Testability (DFT), we discuss the formulation of the basic test sequencing problem and the associated top-down algorithms based on AND/OR graph search. Once the basic notation and algorithms are explained, we go on to present the variations needed on these algorithms to solve the proposed problems.

The test sequencing problem, in its simplest form, consists of:

1) a set of $m+1$ system states $S = \{s_0, s_1, \ldots, s_m\}$ associated with the system, where $s_0$ denotes the fault-free state of the system and $s_i$ $(1 \le i \le m)$ denotes one of the $m$ potential faulty states in the system;

2) the prior conditional probabilities of the system states $p = [p(s_0), p(s_1), \ldots, p(s_m)]^T$, where $p(s_0)$ is the conditional probability that no fault exists in the system and $p(s_i)$ denotes the probability that $s_i$ has occurred;

3) a set of $n$ available tests $T = \{t_1, t_2, \ldots, t_n\}$ with an application cost vector $c = [c_1, c_2, \ldots, c_n]^T$, where $c_j$ denotes the usage cost of test $t_j$, measured in terms of time, manpower requirements, or other economic factors;

Manuscript received September 9, 1996; revised November 1, 1998.

V. Raghavan and M. Shakeri are with Mathworks, Inc., Natick, MA 01760-1500 USA (e-mail: mshakeri@mathworks.com).

K. R. Pattipati is with the Department of Electrical and Systems Engineering, University of Connecticut, Storrs, CT 06269-3157 USA (e-mail: krishna@sol.uconn.edu).

Publisher Item Identifier S 1083-4427(99)01450-2.

4) a diagnostic dictionary matrix $D = [d_{ij}]$, an $(m+1) \times n$ binary matrix such that $d_{ij}$ is 1 if test $t_j$ detects a failure state $s_i$, and 0 otherwise.

The problem is to design a test algorithm that is able to unambiguously identify the occurrence of any system state in $S$ using the tests in the test set $T$, and that minimizes the expected testing cost, $J$, given by

$$J = \sum_{i=0}^{m} p(s_i) \sum_{j=1}^{n} a_{ij} c_j \qquad (1)$$

where $a_{ij}$ is an indicator variable that is 1 if test $t_j$ is used in the path leading to the identification of system state $s_i$, and is zero otherwise.

This problem belongs to the class of binary identification problems that arise in medical diagnosis, nuclear power plant control, pattern recognition, and computerized banking. The optimal algorithms for this problem are based on dynamic programming (DP) and AND/OR graph search procedures [11]. The DP technique is a recursive algorithm that constructs the optimal decision tree from the leaves up by identifying successively larger subtrees until the optimal tree rooted at the initial node of complete ambiguity is generated.

Suppose we are given an ambiguity subset $x$, i.e., a suspected set of failure states $x \subseteq S$. Upon applying a test $t_j$, the ambiguity set can be reduced based on the outcome of test $t_j$ and the diagnostic dictionary matrix $D$. If the test $t_j$ fails, then our reduced ambiguity set $x_j^f$ would consist of those failure sources from $x$ which can be detected by $t_j$, i.e., $x_j^f = \{s_i \in x : d_{ij} = 1\}$. Similarly, if the test $t_j$ passes, then our reduced ambiguity subset $x_j^p$ would consist of those failures from $x$ which cannot be detected by $t_j$, i.e., $x_j^p = \{s_i \in x : d_{ij} = 0\}$. Let $h^*(x)$ denote the cost of the optimal decision strategy starting from an ambiguity set $x$. We can now write the DP recursion relating $h^*(x)$, $h^*(x_j^p)$, and $h^*(x_j^f)$ as follows:

$$h^*(x) = \min_{t_j \in A(x)} \left[\, c_j + p(x_j^p \mid x)\, h^*(x_j^p) + p(x_j^f \mid x)\, h^*(x_j^f) \,\right] \qquad (2)$$

where $A(x)$ denotes the set of admissible tests at $x$ (tests that split $x$ into two nonempty subsets), and the conditional probabilities of the ambiguity subsets $x_j^p$ and $x_j^f$ are given by

$$p(x_j^p \mid x) = \frac{1}{p(x)} \sum_{s_i \in x_j^p} p(s_i) \qquad (3)$$

$$p(x_j^f \mid x) = \frac{1}{p(x)} \sum_{s_i \in x_j^f} p(s_i) \qquad (4)$$

and

$$p(x) = \sum_{s_i \in x} p(s_i) \qquad (5)$$
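To make the recursion concrete, the following Python sketch (our own illustration, not the authors' implementation) evaluates $h^*(S)$ of (2)–(5) exactly by memoizing over ambiguity subsets; the encoding of states as indices and of $D$ as a nested 0/1 list is an assumption of the example.

```python
# Illustrative sketch of the DP recursion (2)-(5) for the basic
# test sequencing problem; exhaustive, so suitable only for tiny systems.
from functools import lru_cache

def optimal_expected_cost(p, c, D):
    """p[i]: prior of state s_i; c[j]: cost of test t_j;
    D[i][j] = 1 if test t_j detects state s_i.  Returns h*(S)."""
    n = len(c)

    @lru_cache(maxsize=None)
    def h(x):                      # x: frozenset of suspected state indices
        if len(x) <= 1:
            return 0.0             # terminal condition: state isolated
        px = sum(p[i] for i in x)  # Eq. (5)
        best = float("inf")
        for j in range(n):
            xf = frozenset(i for i in x if D[i][j] == 1)  # fail outcome
            xp = x - xf                                   # pass outcome
            if not xf or not xp:
                continue           # t_j does not split x: inadmissible
            pf = sum(p[i] for i in xf) / px               # Eq. (4)
            cost = c[j] + (1 - pf) * h(xp) + pf * h(xf)   # Eq. (2)
            best = min(best, cost)
        return best

    return h(frozenset(range(len(p))))
```

For systems of realistic size this exhaustive recursion is impractical, which is precisely why the top-down AND/OR graph search algorithms discussed next are needed.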

1083–4427/99$10.00 1999 IEEE


The DP recursion is initiated with the known terminal conditions $h^*(\{s_i\}) = 0$ for all $s_i \in S$. The DP technique has storage and computational requirements that grow exponentially in the number of failure states for the basic test sequencing problem [10].

Efficient top-down algorithms based on AND/OR graph search were developed in [11] to contain the computational explosion of DP. An AND/OR graph is a directed graph with a root (or initial) node and a nonempty set of terminal leaf (or goal) nodes. The initial node represents the given problem to be solved, while the terminal leaf nodes correspond to subproblems with known solutions. The intermediate nodes of the graph are of three types: OR, AND, and intermediate leaf. An OR node is solved if any one of its successor nodes is solved, but an AND node is solved only when all of its immediate successor nodes are solved. An intermediate leaf node has no successors and is unsolvable.

The AND/OR graph associated with the test sequencing problem has the following properties:

1) the initial node of complete ignorance $S$ represents the original test sequencing problem to be solved;

2) the intermediate nodes of residual ambiguity correspond to test sequencing subproblems that must be solved in order to obtain a solution to the original problem;

3) the goal nodes of zero ambiguity represent primitive subproblems with known solution (that is, system state identified) and zero cost;

4) if the solution tree contains an AND node, all its successors (representing the resulting ambiguity subsets) are also in the solution tree;

5) if an OR node is in the solution tree, then only one successor of the node is in the solution tree and represents the optimal test at that node.

Since the generation of an optimal test algorithm is an NP-complete problem [11], it is necessary to explore heuristic approaches for guiding the AND/OR graph search. These heuristic approaches use problem domain knowledge, in the form of a heuristic evaluation function (HEF), to avoid enumerating the entire set of potential solution trees. The HEF is an easily computable heuristic estimate $h(x)$ of the optimal cost-to-go, $h^*(x)$, from any node of ambiguity set $x$ to the goal nodes of zero ambiguity. Various HEFs based on Huffman and feasible codes were derived for the basic test sequencing problem in [11], [13].

II. MINIMAX TEST SEQUENCING

A common criterion that is minimized in most test sequencing problems is the expected cost of diagnosis. Minimization of the expected cost can sometimes result in inordinately expensive sequences of tests to isolate faults of very low probability of occurrence. This may not be acceptable, since the estimates of the mean times to failure (MTTFs) of components are often inaccurate. Typically, the theoretical estimates of MTTFs may be off by as much as a factor of 10 from the actual MTTFs under field conditions [1], [2]. In these cases, the dependence of the cost function on the underlying probability distribution can result in diagnostic strategies that are not truly optimal. For this problem, we consider the so-called Minimax (minimizing the maximum testing cost) criterion to construct robust diagnostic strategies.

Formally, the problem is to devise a sequential testing strategy that minimizes the maximum testing cost (i.e., diagnostic cost) defined by

$$J_{\max} = \max_{0 \le i \le m} \sum_{j=1}^{n} a_{ij} c_j \qquad (6)$$

where $a_{ij}$ is an indicator variable that is 1 if test $t_j$ is used in the path leading to the identification of system state $s_i$, and is zero otherwise.

Suppose we are given an ambiguity subset $x$, i.e., a suspected set of failure states $x \subseteq S$. Upon applying a test $t_j$, the ambiguity set can be reduced based on the outcome of test $t_j$ and the diagnostic dictionary matrix $D$: if the test $t_j$ fails, the reduced ambiguity set $x_j^f$ consists of those failure sources from $x$ which can be detected by $t_j$; if the test $t_j$ passes, the reduced ambiguity subset $x_j^p$ consists of those failures from $x$ which cannot be detected by $t_j$. Let $h^*(x)$ denote the cost of the optimal decision strategy starting from an ambiguity set $x$. We can now write the DP recursion relating $h^*(x)$, $h^*(x_j^p)$, and $h^*(x_j^f)$ as follows:

$$h^*(x) = \min_{t_j \in A(x)} \left\{\, c_j + \max\left[\, h^*(x_j^p),\; h^*(x_j^f) \,\right] \right\} \qquad (7)$$

The recursion is initialized with $h^*(\{s_i\}) = 0$ at the solution nodes (i.e., leaf nodes of no ambiguity). Note that the above DP recursion is very similar to that obtained for the basic test sequencing problem. Thus, we can use AO* to solve this problem, provided an admissible and consistent HEF can be found to approximate the optimal cost-to-go.
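A minimal Python sketch of the minimax recursion (7), again our own illustration rather than the authors' code: the only change from the expected-cost recursion is that the two successor costs are combined with a max instead of a probability-weighted sum, so the priors $p(s_i)$ drop out entirely.

```python
# Illustrative sketch of the minimax DP recursion (7); exhaustive,
# intended only to show the structure of the recursion.
from functools import lru_cache

def minimax_cost(c, D, m):
    """c[j]: cost of test t_j; D[i][j] = 1 if t_j detects s_i;
    states are s_0, ..., s_m.  Returns the minimax cost h*(S)."""
    n = len(c)

    @lru_cache(maxsize=None)
    def h(x):
        if len(x) <= 1:
            return 0.0                       # leaf node of no ambiguity
        best = float("inf")
        for j in range(n):
            xf = frozenset(i for i in x if D[i][j] == 1)
            xp = x - xf
            if not xf or not xp:
                continue                     # inadmissible test at x
            best = min(best, c[j] + max(h(xp), h(xf)))   # Eq. (7)
        return best

    return h(frozenset(range(m + 1)))
```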

A. Minimax Coding Problem

If all possible tests are available and the test costs are identical, then this test sequencing problem is identical to the Minimax coding problem, that is, the problem of generating a prefix-free binary code for a set of binary messages for transmission over a noiseless channel. The problem is to find the minimal maximum length of code for a set of $K$ binary messages. Note that there is no dependence on the prior probabilities of the messages. The analogy between Minimax test sequencing and noiseless Minimax coding is as follows: the system states correspond to the binary messages, the sequence of test results is similar to the message code word, and the maximum number of tests required to isolate a failure corresponds to the length of the longest code word. The only differences are that the generation of a test algorithm is constrained by the availability of tests, whereas no such constraint exists for the coding problem, and the tests may have unequal costs in the test-sequencing problem. Using the following lemmas, we can construct such a code for a given value of $K$.

Lemma 1: If $L < \lceil \log_2 K \rceil$, then there is no prefix-free code with maximum code word length less than or equal to $L$.


Proof: If the maximum code word length is less than or equal to $L$, then the maximum number of such distinct binary code words is $2^L < K$. Hence, such a code cannot exist for this message set.

Lemma 2: A prefix code exists with a maximum code word length of $\lceil \log_2 K \rceil$.

Proof: Let $K' = 2^{\lceil \log_2 K \rceil}$ and consider an expanded message set with $K' - K$ pseudomessages. This message set can be encoded by a fixed-length $\lceil \log_2 K \rceil$ binary code. By dropping the code words for the pseudomessages (in case $K' > K$), we then have a code for the original $K$ messages where all code words have length $\lceil \log_2 K \rceil$.

From the above lemmas, it is seen that the minimal maximum code word length for a message set of cardinality $K$ is given by $\lceil \log_2 K \rceil$, where $\lceil z \rceil$ denotes the smallest integer greater than or equal to $z$.

B. HEF for Minimax Test Sequencing

We derive the appropriate HEF for this problem by appealing to the analogy between the minimax test sequencing problem and the minimax coding problem discussed earlier. Let us denote the minimal maximum code word length $\Gamma(x)$ for any node of ambiguity subset $x$ as follows:

$$\Gamma(x) = \lceil \log_2 |x| \rceil \qquad (8)$$

where $|x|$ is the cardinality of the ambiguity set. The minimal maximum code word length $\Gamma(x)$ for any node of ambiguity subset $x$ provides a lower bound on the maximum length of any test algorithm rooted at $x$ (including the optimal test algorithm with maximum length). Formally,

$$L^*(x) = \max_{s_i \in x} \sum_{j=1}^{n} a_{ij} \ge \Gamma(x) \qquad (9)$$

where $a_{ij}$ is 1 if test $t_j$ is used by a test algorithm rooted at $x$ to identify the system state $s_i$, and is zero otherwise.

The above property of the Minimax code can be used to derive an admissible HEF, as shown in the following theorem.

Theorem 1: Assume, without loss of generality, that the test costs are in ascending order $c_1 \le c_2 \le \cdots \le c_n$. Then a lower bound $h(x)$ on the optimal cost-to-go $h^*(x)$ is given by

$$h(x) = \sum_{j=1}^{\Gamma(x)} c_j \qquad (10)$$

Proof: The cost of the optimal tree rooted at $x$ is given by

$$h^*(x) = \max_{s_i \in x} \sum_{j=1}^{n} c_j a_{ij} \qquad (11)$$

where $a_{ij}$ is 1 if test $t_j$ is used in the subtree of the optimal algorithm rooted at $x$ to identify $s_i$, and is zero otherwise. Let $l_i$ be the length of the code word for $s_i$ in the optimal test algorithm rooted at $x$. That is,

$$l_i = \sum_{j=1}^{n} a_{ij} \qquad (12)$$

Since the test costs are in ascending order, we have

$$\sum_{j=1}^{n} c_j a_{ij} \ge C(l_i) \qquad (13a)$$

where

$$C(k) = \sum_{j=1}^{k} c_j \qquad (13b)$$

That is, the cost of the optimal tree must be greater than the cost of a tree that uses the $l_i$ smallest test costs. Note that $C(k)$ is a monotone increasing, convex function of $k$. Now

$$h^*(x) = \max_{s_i \in x} \sum_{j=1}^{n} c_j a_{ij} \ge C\bigl(L^*(x)\bigr) \qquad (14)$$

where $L^*(x) = \max_{s_i \in x} l_i$ is the maximum length of the optimal test algorithm rooted at $x$. From the monotonicity of $C(\cdot)$ and the property of the Minimax code in (9), we have

$$h^*(x) \ge C\bigl(L^*(x)\bigr) \ge C\bigl(\Gamma(x)\bigr) = h(x) \qquad (15)$$

completing the proof of the theorem.
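The bound (10) is straightforward to compute; the sketch below (an illustration, not the paper's code) returns $h(x)$ from the sorted test costs and the cardinality of the ambiguity set alone, which is exactly what makes the Minimax HEF independent of the failure probability distribution.

```python
# Illustrative sketch of the Minimax HEF of Theorem 1:
# h(x) = sum of the Gamma(x) = ceil(log2 |x|) smallest test costs.
import math

def minimax_hef(costs, ambiguity_size):
    """Lower bound (10) on the minimax cost-to-go for an ambiguity
    set of the given cardinality."""
    if ambiguity_size <= 1:
        return 0.0                                 # goal node: no tests needed
    gamma = math.ceil(math.log2(ambiguity_size))   # Eq. (8)
    return sum(sorted(costs)[:gamma])              # Eq. (10)
```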

Thus, the HEF for minimax optimization does not depend on the probability distribution of failure sources in the ambiguity group $x$. It has been shown in [12] that the application of AO* in conjunction with an admissible HEF is sufficient to yield optimal solutions. We now show that the above HEF has the additional property of consistency, thus ensuring that the AO* algorithm with this HEF monotonically converges to an optimal solution. An HEF $h(\cdot)$ in an AND/OR graph is consistent if, for each node $x$ and each admissible test $t_j \in A(x)$,

$$h(x) \le c_j + \max\left[\, h(x_j^p),\; h(x_j^f) \,\right] \qquad (16)$$

First, we consider the case where the test costs are identical, to derive a key result which will then be used to prove the general case where the test costs are not identical. Consider the two minimax trees corresponding to $x_j^p$ and $x_j^f$, and let the lengths of the Minimax code words for each of these trees be denoted by $\Gamma(x_j^p)$ and $\Gamma(x_j^f)$, respectively. One can construct a prefix-free code (although not necessarily optimal) for the subset $x$ by connecting the two minimax trees by a binary link. Since the length of each code word is increased by 1 for each tree, the maximum code word length of the resulting tree, denoted by $\Gamma'(x)$, is

$$\Gamma'(x) = \max\left[\, \Gamma(x_j^p),\; \Gamma(x_j^f) \,\right] + 1 \ge \Gamma(x) \qquad (17)$$

Let us now denote $\Gamma_p = \Gamma(x_j^p)$ and $\Gamma_f = \Gamma(x_j^f)$. Without loss of generality, let us assume that $\Gamma_f \ge \Gamma_p$. Consider the following two cases.

• Case 1: $\Gamma(x) = \Gamma_f + 1$. Note that, if $t_j$ is an admissible test for the ambiguity set $x$ (i.e., it splits $x$ into two nonempty subsets $x_j^p$ and $x_j^f$), then it will not be an admissible test for $x_j^p$ and $x_j^f$.


Hence, $h(x_j^p)$ and $h(x_j^f)$ are computed using sorted test costs which do not include the cost $c_j$ corresponding to test $t_j$. Therefore

$$h(x_j^f) \ge C(\Gamma_f + 1) - c_j \qquad (18)$$

Hence

$$c_j + \max\left[\, h(x_j^p), h(x_j^f) \,\right] \ge c_j + h(x_j^f) \ge C(\Gamma_f + 1) = C\bigl(\Gamma(x)\bigr) = h(x) \qquad (19)$$

• Case 2: $\Gamma(x) \le \Gamma_f$. Since $h(x_j^f)$ is computed using the first $\Gamma_f$ tests sorted in the ascending order of costs, we have

$$c_j + h(x_j^f) \ge c_j + C(\Gamma_f) - c_j = C(\Gamma_f) \ge C\bigl(\Gamma(x)\bigr) = h(x) \qquad (20)$$

Thus, we see that

$$h(x) \le c_j + \max\left[\, h(x_j^p),\; h(x_j^f) \,\right] \qquad (21)$$

proving (16).

C. Simulation Results

In order to demonstrate the robustness of the diagnostic strategy computed via the Minimax approach, we generated several random diagnostic dictionary matrices of varying sizes ($m$ = number of faults, $n$ = number of tests) and computed the diagnostic strategies due to the Minimax HEF and the Huffman code based HEF¹ described in [11] (using a randomly generated prior probability distribution for failure sources). We then computed the following parameters for these strategies, averaged over a set of 1000 randomly generated probability distributions:

- projected expected cost of the AO* diagnostic strategy based on the initial probability distribution;
- projected maximum testing cost of the AO* diagnostic strategy based on the initial probability distribution;
- mean expected testing cost of the AO* diagnostic strategy averaged over 1000 random fault probability distributions;

¹Assume, without loss of generality, that the test costs are in ascending order $c_1 \le c_2 \le \cdots \le c_n$. The Huffman code based lower bound (HEF) for the optimal cost-to-go $h^*(x)$ at an ambiguity node $x$ is given by

$$h(x) = \sum_{s_i \in x} p(s_i \mid x)\left[\, C\bigl(\lfloor l(s_i) \rfloor\bigr) + \bigl(l(s_i) - \lfloor l(s_i) \rfloor\bigr)\, c_{\lfloor l(s_i) \rfloor + 1} \,\right] \qquad (22)$$

with $C(\cdot)$ as in (13b), where $l(s_i)$ is the Huffman code length computed using the normalized conditional probabilities of the failure sources belonging to the ambiguity group $x$, and $\lfloor l(s_i) \rfloor$ is the integer part of $l(s_i)$.

TABLE I
PERFORMANCE OF HUFFMAN CODE HEF VERSUS MINIMAX HEF

TABLE II
PERFORMANCE OF HUFFMAN CODE HEF VERSUS MINIMAX HEF

TABLE III
PERFORMANCE OF HUFFMAN CODE HEF VERSUS MINIMAX HEF

- standard deviation of the expected testing cost of the AO* diagnostic strategy over 1000 random fault probability distributions;
- projected expected cost of the Minimax diagnostic strategy based on the initial probability distribution;
- projected maximum testing cost of the Minimax diagnostic strategy based on the initial probability distribution;
- mean expected testing cost of the Minimax diagnostic strategy averaged over 1000 random fault probability distributions;
- standard deviation of the expected testing cost of the Minimax diagnostic strategy over 1000 random fault probability distributions.

Tables I–IV show the results for various system sizes ($m$ = number of faults, $n$ = number of tests). It is seen that the mean and standard deviation of the expected testing cost of the Minimax strategy are less than those of the AO* strategy for most cases. Also, the projected maximum testing cost for Minimax is very close to the actual testing cost for most of the cases; thus, with a Minimax strategy, we have a reliable estimate of the actual testing cost when the probability distribution is unknown.

III. CONSTRAINED OPTIMIZATION OF TEST SEQUENCE

Most reasonably complete formulations of real-world problems involve multiple, conflicting, and noncommensurate objectives. In the context of test sequencing, optimizing the diagnostic strategy with respect to the expected testing cost, subject to a constraint on the expected testing time, is a very important problem that has not been addressed so far. The


TABLE IV
PERFORMANCE OF HUFFMAN CODE HEF VERSUS MINIMAX HEF

problem of constrained optimization of the diagnostic strategy can be formally written as

$$\min \; J = \sum_{i=0}^{m} p(s_i) \sum_{j=1}^{n} a_{ij} c_j \qquad (23)$$

such that

$$\sum_{i=0}^{m} p(s_i) \sum_{j=1}^{n} a_{ij} \tau_j \le T_{\max} \qquad (24)$$

where $a_{ij}$ is an indicator variable that is 1 if test $t_j$ is used in the path leading to the identification of system state $s_i$, and is zero otherwise; $c_j$ are the test costs, $\tau_j$ are the test times, and $T_{\max}$ is the given threshold on the expected testing time.

The following argument shows that a variation of the AO* algorithm can be used to solve this problem optimally. Let $x$ be an ambiguity node in the AO* graph and $t_j$ be an admissible (information giving) test at $x$. Let $h_T$ be an admissible estimate of the cost-to-go (based on test times) at the root OR node before splitting the node $x$ using test $t_j$. When the ambiguity node $x$ is split using test $t_j$, the cost-to-go estimate (based on test times) at $x$ can be revised using the HEF values for the successors $x_j^p$ and $x_j^f$ of $x$. The admissibility and consistency of the HEF ensure that the cost-to-go estimate can only increase. This cost increase is propagated all the way up to the root OR node; suppose that this entails a cost estimate of $h_T'$ at the root OR node. If $h_T' > T_{\max}$, then it clearly indicates that the inclusion of test $t_j$ at the ambiguity node $x$ results in an infeasible solution. Thus, $t_j$ can be deemed inadmissible at $x$, and the search space can thus be reduced to avoid infeasible solutions. Note that more and more infeasible directions can be identified and pruned as the estimates become more accurate as node expansion proceeds. A nice feature of this approach is that the computational requirements of this algorithm can only be less than those of unconstrained AO*. Also note that, when all tests are pruned at an ambiguity node (that is not a leaf), this indicates that a feasible diagnostic strategy does not exist for the specified threshold on expected test time.

An important implementation issue in AO* algorithm variants is that of the node selection strategy for expansion. We considered the following node-selection strategies and evaluated their performance (in terms of the number of nodes expanded before the solution is obtained) while traversing the graph from the root OR node in search of an expandable node, when both children of the current node are unsolved:

- always pick the left child;
- always pick the right child;
- pick the child node with the minimum test cost based HEF;

TABLE V
PERFORMANCE OF VARIOUS NODE SELECTION HEURISTICS FOR $m = n = 15$

TABLE VI
PERFORMANCE OF VARIOUS NODE SELECTION HEURISTICS FOR $m = n = 20$

- pick the child node with the maximum test cost based HEF;
- pick the child node with the minimum test time based HEF;
- pick the child node with the maximum test time based HEF.

Table V shows the number of node expansions required for these strategies for various values of the test time constraint for a random system having 15 faults and 15 tests. Table VI shows the results for a system with 20 faults and 20 tests. We see that two of the strategies consistently resulted in minimal node expansions. This is in contrast to unconstrained AO*, where a different node-selection strategy is found to be the best.

Fig. 1 shows the set of nondominated solutions² obtained by invoking constrained AO* with various values of the test time constraint $T_{\max}$ for the system considered above. Note that it is possible to obtain solutions that linearly interpolate the given set of nondominated solutions by randomizing the test strategy selection over any two given points. In order to understand this concept, consider any two solution points $(T_1, J_1)$ and $(T_2, J_2)$ that are adjacent to each other in Fig. 1. Suppose we employ a randomized testing strategy that involves choosing the strategy corresponding to the first solution point with a probability $\lambda$ and choosing the strategy corresponding to the second solution point with a probability $1 - \lambda$. The expected testing time and testing cost of this strategy are $\lambda T_1 + (1-\lambda) T_2$ and $\lambda J_1 + (1-\lambda) J_2$. This is equivalent to the existence of a solution point that divides the line segment joining the original two solution points in a $\lambda : (1-\lambda)$ ratio.

²In multi-objective optimization, a nondominated solution is one for which no other solution exists that is better in terms of all the objective functions we are trying to optimize. In this case, since we are trying to optimize expected test cost and expected test time, a nondominated solution is a diagnostic strategy for which no other strategy exists that yields both a better expected test cost and a better expected test time.
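The interpolation argument above can be checked with a two-line computation; the tuple encoding of a solution point as (expected testing time, expected testing cost) is an assumption of this illustrative sketch.

```python
# Sketch of the randomized interpolation argument: mixing two adjacent
# nondominated strategies with probabilities lam and 1 - lam yields an
# operating point on the line segment joining the two points.
def mixed_operating_point(point1, point2, lam):
    """point = (expected testing time, expected testing cost)."""
    t1, j1 = point1
    t2, j2 = point2
    return (lam * t1 + (1 - lam) * t2, lam * j1 + (1 - lam) * j2)
```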


Fig. 1. Nondominated solution via constrained optimization.

IV. TEST SEQUENCING WITH A CONSTRAINT

ON THE NUMBER OF TESTS USED

An important problem that arises in Design for Testability is one of constructing a test sequence that utilizes less than a specified number of tests, while minimizing the average ambiguity group size. This is especially true for systems with limited accessibility to test points and/or very expensive tests. In this case, the system designer is burdened with the task of selecting the best set of tests that can be used to diagnose the system with minimal average ambiguity group size.

Formally, let $G$ represent the AND/OR graph denoting the diagnostic strategy, and let $y_j$ be a Boolean variable that represents the presence of test $t_j$ in $G$. That is, $y_j = 1$ if test $t_j$ is used in the diagnostic strategy $G$, and $y_j = 0$ otherwise. Let $F$ be the set of leaves in $G$ (a leaf node is one which is not split by a test, i.e., has no successors). The problem is to determine a diagnostic strategy minimizing

$$A(F) = \sum_{f \in F} p(f)\,|f| \quad \text{such that} \quad \sum_{j=1}^{n} y_j \le N_{\max} \qquad (25)$$

where $A(F)$ represents the average ambiguity group size for a diagnostic strategy resulting in the leaf set $F$, and $N_{\max}$ is the specified maximum number of distinct tests.

The solution approach to this problem consists of two steps. The first step involves constructing a complete diagnostic strategy based on a limited lookahead heuristic, without explicitly enforcing the constraint on the number of distinct tests used. The second step is to prune the diagnostic strategy, removing tests on an incremental basis, until the constraint on the number of distinct tests used is satisfied. The limited lookahead strategy in Step 1 to select the next best test at every unexpanded ambiguity node is based on a Figure of Merit (FOM). This

FOM represents the effectiveness of a test in producing a good diagnostic strategy in terms of the number of distinct tests used. Clearly, if a diagnostic tree is constructed in such a way that many tests are used more than once in the tree (i.e., in multiple branches), then such a strategy results in a low cardinality of the set of tests used. This observation enables us to define two different heuristic functions for limited lookahead strategies.

Let $x$ be the ambiguity node for which the next best test is to be chosen, and let $x_j^p$ and $x_j^f$ be the child ambiguity nodes of $x$ following the Pass and Fail outcomes of test $t_j$, respectively. The first limited lookahead strategy is to pick the test that minimizes the average ambiguity group size at the next step:

$$\mathrm{FOM1}(t_j) = p(x_j^p)\,|x_j^p| + p(x_j^f)\,|x_j^f| \qquad (26)$$

where $|x|$ represents the cardinality of the set $x$, and $p(\cdot)$ are the failure probabilities. This is a greedy heuristic that is expected to produce reasonable diagnostic strategies by locally optimizing the next test.

Another heuristic is to pick the test that maximizes

$$\mathrm{FOM2}(t_j) = \left| A(x_j^p) \cap A(x_j^f) \right| \qquad (27)$$

where $A(x)$ denotes the set of all admissible tests at an ambiguity node $x$. This heuristic is intuitively appealing because, by maximizing the cardinality of the set of common tests available to the successors of the ambiguity node $x$, we can expect the same tests to be used in multiple branches of the diagnostic strategy, thus utilizing a smaller number of distinct tests in the diagnostic tree. In addition to the above


TABLE VII
PERFORMANCE OF THE TWO FOMs FOR VARYING MAXIMUM NUMBER OF TESTS ALLOWED

heuristics, we can further ensure maximal reuse of tests in parallel branches of the diagnostic strategy by giving higher priority to tests that are already used in parallel branches.

Once a diagnostic tree is constructed using all the available tests without enforcing the constraint on the number of tests, as a post-processing step, we prune the tree, removing extra tests, in Step 2. Once again, we employ greedy heuristics to discard the tests in an incremental fashion. Note that pruning, i.e., discarding a test that splits an ambiguity node that is not a leaf, results in the removal of all tests that follow the Pass and Fail branches of that test. This entails a decrement in the number of distinct tests used by more than one, and, at the same time, it also results in a larger increment of the average ambiguity group size for the tree. Hence, we need to weigh these factors appropriately in the pruning choice.
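The two lookahead figures of merit (26) and (27) from Step 1 can be sketched as follows; this is an illustrative encoding of our own (states as indices, $D$ a nested 0/1 list), not the authors' code.

```python
# Illustrative sketch of FOM1 (26) and FOM2 (27) for lookahead test
# selection.  x is a frozenset of suspected state indices.
def split(x, D, j):
    """Split ambiguity set x on test t_j: (pass subset, fail subset)."""
    xf = frozenset(i for i in x if D[i][j] == 1)
    return x - xf, xf

def admissible(x, D, n):
    """Tests that split x into two nonempty subsets, cf. A(x)."""
    return {j for j in range(n) if 0 < sum(D[i][j] for i in x) < len(x)}

def fom1(x, D, j, p):
    """One-step average ambiguity group size, Eq. (26) (minimize)."""
    xp, xf = split(x, D, j)
    return sum(p[i] for i in xp) * len(xp) + sum(p[i] for i in xf) * len(xf)

def fom2(x, D, j, n):
    """Number of admissible tests common to both children, Eq. (27)
    (maximize)."""
    xp, xf = split(x, D, j)
    return len(admissible(xp, D, n) & admissible(xf, D, n))
```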

Consider an ambiguity node $x$ split by a test $t_j$ in the diagnostic tree. Let $K$ be the number of distinct tests to be removed from the diagnostic strategy in order to meet the constraint, and let $k_j$ be the number of tests that are discarded when test $t_j$ is pruned (the tests that follow the successor nodes of $x$). Let $F_j$ be the set of leaf nodes of the subtree rooted at $x$. Pruning $t_j$ results in an increment of the average ambiguity group size given by

$$\Delta A_j = p(x)\,|x| - \sum_{f \in F_j} p(f)\,|f| \qquad (28)$$

Given this, we can choose the next test to be discarded as the one that minimizes the incremental average ambiguity group size per pruned test, i.e.,

$$t^* = \arg\min_{t_j} \frac{\Delta A_j}{k_j} \qquad (29)$$

This incremental pruning process is continued until the constraint on the number of distinct tests used is met.
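The greedy pruning rule (29) reduces to a ratio test; in this sketch (ours, not the paper's), each candidate test is summarized by its increment $\Delta A_j$ from (28) and the number $k_j$ of distinct tests its removal discards.

```python
# Illustrative sketch of the greedy pruning step (28)-(29): among
# candidate tests, discard the one with the smallest increase in
# average ambiguity group size per removed test.
def best_prune(candidates):
    """candidates: list of (delta_A, k) pairs, where delta_A is the
    increment (28) caused by pruning a test and k > 0 is the number of
    distinct tests thereby removed.  Returns the index minimizing
    delta_A / k, Eq. (29)."""
    return min(range(len(candidates)),
               key=lambda i: candidates[i][0] / candidates[i][1])
```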

Tables VII–X show a comparison of the performance of the two FOMs for various random systems ($m$ = number of faults, $n$ = number of tests, $N_{\max}$ = maximum number of tests allowed; the tables report the average ambiguity group size obtained using FOM1 and the average ambiguity group size obtained using FOM2). FOM2 is seen to be clearly a better criterion than FOM1, since it resulted in lower ambiguity group sizes almost all the time.

TABLE VIII

PERFORMANCE OF THE TWO FOM’s FOR VARYING ????

TABLE IX

PERFORMANCE OF THE TWO FOM’s FOR VARYING ????

TABLE X

PERFORMANCE OF THE TWO FOM’s FOR VARYING ????

V. DIAGNOSTIC STRATEGIES WITH MINIMAL STORAGE COST

The problem of minimizing the storage (the number of tests

in the tree) required for a diagnostic strategy is of considerable

interest. In many situations, a decision algorithm residing in

the primary memory can perform more efficiently than an

algorithm whose components must continually be swapped

between the primary and the secondary storage. Pollack [3]

and Press [4] have given heuristics for computing storage

efficient decision trees for certain types of decision tables

(i.e., fault-test point dependency relationships). The General

Optimized Testing Algorithm (GOTA) of Hartmann et al.

[5] can also be applied to the storage problem. In addition,

Reinwald and Soland [6] have presented a branch and bound

algorithm for designing decision tables with minimum storage.

Recent work by Murphy and McCraw [7] considered this problem and presented a suboptimal heuristic approach that was shown to be faster and more efficient than the previous approaches, while achieving reasonably near-optimal solutions.


160 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 29, NO. 2, MARCH 1999

In the following, we present a formulation of the generalized minimal storage problem and derive an optimal AO*-based algorithm. The top-down nature of the AO* algorithm readily extends to produce a series of near-optimal solutions which provide a nice tradeoff between optimality and computational effort. We present extensive simulation results to demonstrate that our optimal and near-optimal approaches are superior to the heuristic approaches presented in [7].

A. Problem Formulation

1) Set of $m + 1$ system states $S = \{s_0, s_1, \ldots, s_m\}$ associated with the system, where $s_0$ denotes the fault-free state of the system and $s_i$ denotes one of the $m$ potential faulty states in the system.

2) Finite set of $k$ modules $M = \{M_0, M_1, \ldots, M_k\}$ comprising the system states, where $M_0 = \{s_0\}$ denotes the fault-free state of the system.

3) Mapping $f$ between the modules and the system states, where $f(s_i) = j$ if $s_i$ corresponds to a faulty state in module $M_j$.

4) Set of $n$ available tests $T = \{t_1, t_2, \ldots, t_n\}$ with a storage cost vector $\mathbf{c} = [c_1, c_2, \ldots, c_n]$, where $c_j$ denotes the cost of storing test $t_j$ in a diagnostic strategy.

5) Diagnostic dictionary matrix $D = [d_{ij}]$, where $d_{ij}$ is 1 if test $t_j$ detects a failure state $s_i$ and 0 otherwise.

The problem is to devise a sequential testing strategy that unambiguously identifies the failure state of the system such that the storage cost defined by

$$C = \sum_{j=1}^{n} c_j u_j \qquad (30)$$

is minimized, where $u_j$ is the number of times test $t_j$ is used in the diagnostic strategy.

If we denote by $x$ the ambiguity subset, and by $x_{jp}$ and $x_{jf}$ the resultant ambiguity sets based on the outcome (pass or fail) of test $t_j$, then the DP technique for the test sequencing problem employs the recursion

$$h^*(x) = \min_{t_j \in T(x)} \bigl\{ c_j + h^*(x_{jp}) + h^*(x_{jf}) \bigr\} \qquad (31)$$

where $h^*(x)$ is the optimal cost-to-go (i.e., the storage cost) for the diagnostic strategy rooted at the ambiguity group $x$. The recursion is initialized with $h^*(x) = 0$ at the solution nodes (i.e., leaf nodes of no ambiguity).

Noting that the above DP recursion is very similar to that obtained for the basic test sequencing problem, we can see that AO* can be used to solve this problem, provided an admissible and consistent heuristic evaluation function (HEF) can be found to approximate the optimal cost-to-go.
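The DP recursion (31) can be sketched with memoization over ambiguity subsets. The data model (a 0/1 dictionary matrix `D`, a storage cost vector `c`) and the function name are illustrative assumptions, not the paper's code:

```python
from functools import lru_cache

# Sketch of recursion (31): h(x) = min over usable tests t_j of
# c_j + h(x_pass) + h(x_fail), with h(x) = 0 at leaf nodes of no ambiguity.
# D[i][j] = 1 if test j detects failure state i; c[j] = storage cost of test j.

def min_storage_cost(D, c):
    n_states, n_tests = len(D), len(c)

    @lru_cache(maxsize=None)
    def h(x):                              # x: frozenset of candidate states
        if len(x) <= 1:                    # solution node: no ambiguity left
            return 0
        best = float("inf")
        for j in range(n_tests):
            fail = frozenset(i for i in x if D[i][j])
            passed = x - fail
            if fail and passed:            # test must split the ambiguity set
                best = min(best, c[j] + h(passed) + h(fail))
        return best

    return h(frozenset(range(n_states)))
```

Memoization makes each ambiguity subset cost $h^*(x)$ a one-time computation, which is the essence of the DP formulation (the state space is still exponential in the worst case, hence the interest in AO* with a good HEF).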

B. NP-Completeness of the Storage Problem

A simplified storage minimization problem would be to assume that all the storage costs $c_j$ are identical. In this case, the objective is to seek a strategy with the minimal number of tests. In the following, we show the NP-completeness of the simplified problem and then go on to consider the optimal solution procedure for the general problem.

Hyafil and Rivest [8] have shown that the problem of designing decision trees that minimize the processing time (average and worst case path length) is an NP-complete problem. It is shown in Comer and Sethi [9] that designing storage optimal trees is NP-complete for cases where each module is associated with a single failure source, provided that the tests may have ternary outcomes. The result does not hold for the binary case since, trivially, every decision algorithm that identifies each of the $m$ failure sources will require $m - 1$ tests. It is also known that designing storage optimal full binary decision trees is NP-complete [9].

The Decision Tree Storage Problem for the reachability matrix $D$ and the integer $K$ [denoted by DTS($D, K$)] is to determine whether there exists a decision tree for $D$ with storage cost less than or equal to $K$.

Theorem 2: DTS is NP-complete.

Proof: DTS is in NP since a nondeterministic Turing machine can guess the decision tree and then determine in polynomial time if it has storage cost less than or equal to $K$. To complete the proof, it will be shown that the problem Vertex Cover reduces to DTS. (The vertex cover problem asks if there is a subset $V'$ of the set of vertices $V$ such that every edge in the set $E$ is incident on at least one element of $V'$, for a given graph $G = (V, E)$.) For the graph $G = (V, E)$, we construct the testing problem as follows. Define the failure sources to be the fault-free source $s_0$ together with one source per edge of $G$. (Degenerate cases such as isolated vertices can be ignored, since this does not result in any loss of generality.) The set of modules will have cardinality 2, where $M_1 = \{s_0\}$ and $M_2$ contains the edge sources. There are $|V|$ tests, one for each vertex, where the test associated with the vertex $v$ detects exactly the edge sources incident on $v$. Note that the tests must be used to partition the failure source $s_0$ from the remaining failure sources (i.e., partition $M_1$ and $M_2$), which requires that the tests used collectively cover every one of the edges. Therefore, if $G$ has a vertex cover of size $k$, then there is a decision tree for the testing problem with $k$ tests that discriminates $s_0$ from every one of the edges. Alternatively, if there exists a decision algorithm with $k$ tests, then the $k$ vertices associated with those tests form a vertex cover for $G$.
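The construction in the proof can be written down directly. The helper below is a hypothetical illustration (the name `vc_to_dts` and the matrix layout are assumptions): it builds a diagnostic dictionary matrix whose rows are edge failure sources and whose columns are vertex tests:

```python
# Sketch of the Vertex Cover -> DTS construction: one failure source per edge,
# one test per vertex, where the test for vertex v detects exactly the edges
# incident on v. (The fault-free source s0 is detected by no test and is
# omitted from the matrix rows here.)

def vc_to_dts(vertices, edges):
    return [[1 if v in edge else 0 for v in vertices] for edge in edges]
```

With this layout, a set of columns whose union covers every row is exactly a vertex cover of the graph, which is the correspondence the proof exploits.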

An obvious corollary of Theorem 2 is that DTS remains NP-

complete if the set of failure sources is partitioned into two

modules or if the system has only a single module containing

more than one failure source. As noted, the set of instances

of DTS where there is a single module associated with each

action can be solved in polynomial time. Let DTS2 denote the set of instances of the problem DTS where there are no more than two objects associated with any action.

Theorem 3: DTS2 is NP-complete.

Proof: It shall be shown that the NP-complete problem CLIQUE reduces to DTS2. (The decision problem CLIQUE asks if there is a clique (complete subgraph) of size $k$ in a given graph $G = (V, E)$.) For a given instance of the problem CLIQUE, a corresponding instance of the problem DTS2 can be constructed in polynomial time as follows.

For each of the $n$ vertices in the set $V$, we shall add two failure sources to the set of system states; therefore, define $S = \{s_1, s_2, \ldots, s_{2n}\}$. The set of modules will have cardinality $n$, where module $M_i$ is defined by $M_i = \{s_{2i-1}, s_{2i}\}$ for $i = 1, \ldots, n$. This definition essentially assigns the two failure sources associated with the vertex $v_i$ to module $M_i$. There will be $3n$ tests associated with the decision table. For each failure source $s_j$, there will be an associated test that detects $s_j$ alone; these tests can be used exclusively to construct a testing algorithm. The remaining $n$ tests will be associated with the vertex set $V$: for each vertex $v_i$, there will be a test whose detection signature is such that it splits exactly the modules corresponding to vertices not adjacent to $v_i$.

Consider the decision problem DTS2 associated with the above construction. If the answer to DTS2 is yes, it implies that there are $k$ modules that have not been partitioned by the tests of some decision tree, which means that there are tests in the tree which do not split these actions. A test associated with vertex $v$ will split the actions corresponding to vertices not adjacent to $v$. This implies that the $k$ vertices associated with the unpartitioned modules in the decision tree form a clique of size $k$ in the corresponding graph. Alternatively, if $G$ has a clique of size $k$, then there is a decision algorithm with the required storage cost using the corresponding vertex tests. To see this, note that only a path constructed of vertex tests will leave $k$ modules undivided. The tests associated with each of the individual failure sources can then be used to discriminate the failures in divided modules from those in the undivided modules.

C. HEF for Minimal Storage Test Sequencing

Consider an ambiguity set $x$ whose cardinality in terms of modules is $\ell$. Clearly, the minimal number of tests required for isolation is $\ell - 1$. We need a lower bound on the storage cost of the optimal diagnostic tree rooted at $x$. Assume, without loss of generality, that the test costs are in ascending order $c_1 \le c_2 \le \cdots \le c_n$.

Theorem 4: The lower bound on the storage cost of the diagnostic tree rooted at $x$ is given by

$$h(x) = \sum_{j \ge 1} c_j\, n_{(j)} \qquad (32)$$

where $n_{(1)} \ge n_{(2)} \ge \cdots$ are the level sizes $2^{b-1}, 2^{b-2}, \ldots, 2, 1$, together with $\ell - 2^b$ when $\ell$ is not a power of two, sorted in descending order, and $b = \lfloor \log_2 \ell \rfloor$. (In closed form, an indicator function, which is 1 when its logical expression is true and 0 otherwise, selects between the two cases.) The position of the term $\ell - 2^b$ in this ordering is determined by the integer $r$ satisfying

$$2^{b-r-1} < \ell - 2^b \le 2^{b-r} \qquad (33)$$

Proof: Let us first consider the case $\ell = 2^b$. An obvious and crude lower bound would be to take $(\ell - 1)c_1$. However, this bound can be very weak, since test $t_1$ can be used at most $\ell/2$ times in a tree rooted at $x$. Such a tree is necessarily balanced, i.e., there are $2^j$ nodes at depth $j$ from the root. Now that we have used the least cost test $t_1$ at all nodes at level $b - 1$, the maximum number of times any other test can appear in this tree is $\ell/4$, which implies that it be used at all nodes at depth $b - 2$. Thus, by extending the above rule, we can assign the least cost tests to nodes at various depths for this diagnostic tree, resulting in a storage cost of

$$h(x) = \sum_{j=1}^{b} c_j\, 2^{\,b-j} \qquad (34)$$

Now consider the case where $\ell$ is not a power of two. The maximum number of times the least cost test can appear is the maximum of $2^{b-1}$ (depth $b - 1$) and $\ell - 2^b$ (depth $b$). Note that this tree is balanced only up to a depth of $b$. Now that the least cost test $t_1$ has been used at $\max(2^{b-1}, \ell - 2^b)$ nodes, tests $t_2, t_3, \ldots$ can be used at the remaining levels in descending order of their sizes, resulting in a storage cost given by (32), completing the proof of the theorem.
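Under the reconstruction above, the bound amounts to matching the cheapest tests with the largest levels of a balanced binary tree with $\ell$ leaves. The sketch below, including the name `storage_hef`, is an assumption-laden illustration of that pairing, not the paper's exact formula:

```python
import math

# Sketch of the HEF lower bound: a tree isolating ell modules has ell - 1
# internal nodes, arranged (at best) in levels of sizes 2^(b-1), ..., 2, 1,
# plus ell - 2^b when ell is not a power of two; pair the cheapest tests
# with the largest levels. `costs` must be sorted in ascending order.

def storage_hef(ell, costs):
    if ell <= 1:
        return 0
    b = int(math.log2(ell))
    levels = [2 ** j for j in range(b)]          # 1, 2, ..., 2^(b-1)
    if 2 ** b != ell:
        levels.append(ell - 2 ** b)              # extra partial level
    levels.sort(reverse=True)
    return sum(c * n for c, n in zip(costs, levels))
```

Note that the level sizes always sum to $\ell - 1$, so with identical unit costs the bound reduces to the crude bound $\ell - 1$.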

The above proof of admissibility of the HEF is sufficient for obtaining optimal solutions when used with the AO* algorithm. In addition, we also show that the above HEF is consistent, thus ensuring that the AO* algorithm with this HEF monotonically converges to an optimal solution. An HEF $h(\cdot)$ is consistent if, for each node $x$ in an AND/OR graph,

$$h(x) \le c_j + h(x_{jp}) + h(x_{jf}) \qquad (35)$$

Consider the two minimal storage trees corresponding to $x_{jp}$ and $x_{jf}$. Note that one can construct a valid diagnostic strategy (although not necessarily optimal) for the subset $x$ by connecting the two minimal storage trees with a binary link. The storage cost of this feasible tree at $x$ is given by

$$c_j + h^*(x_{jp}) + h^*(x_{jf}) \qquad (36)$$

for which $h(x)$ is clearly a lower bound, since $h$ is a lower bound for the optimal tree storage cost.

D. Heuristic Storage Measures

In the following, we briefly present the heuristic algorithms

considered in [7] for the minimal storage test sequencing

problem. Consider an ambiguity node

splits

intoandbased on its outcomes. Let

be the cardinality of

be the cardinality of

is defined as follows:

and a testwhich

be the cardinality of

. The heuristic storage measure

, and

if

otherwise

is root

Parent of

(37)

where

node. The single step look-ahead algorithm would select the

test maximizing the average storage measure per-unit cost of

is the test used to arrive atifis not the root

Page 10

162IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 29, NO. 2, MARCH 1999

TABLE XI

PERFORMANCE OF VARIOUS STORAGE MINIMIZING METHODS FOR ? ? ??

TABLE XII

PERFORMANCE OF VARIOUS STORAGE MINIMIZING METHODS FOR ? ? ??

test

(38)

A multistep look-ahead algorithm expands the node

partial tree up to a given depth (look-ahead value) for each

admissible test, and picks the test that maximizes the sum of

values over all nodes in the partial tree.

Tables XI and XII show the relative suboptimality of the solutions obtained by various limited search AO* algorithms (limited search AO* with a given number of best tests retained at every OR-node during the AO* cost propagation and pruning steps) and the Heuristic Storage Measure based algorithms (the heuristic algorithm of [7] with a given look-ahead depth). The AO* based algorithms are clearly far superior to the heuristic methods of [7]. A nice feature of the limited search AO* algorithms is that they result in monotonically better solutions, unlike the heuristic techniques where no monotonicity is guaranteed.

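One way to read "limited search AO*" is as the exact recursion (31) restricted, at every OR-node, to the $k$ most promising tests. The ranking score and all names below are assumptions for illustration, not the paper's algorithm:

```python
from functools import lru_cache

# Sketch: keep only the k best tests at each OR-node, ranked by a one-step
# score (storage cost plus an imbalance penalty). Larger k can only enlarge
# the searched space, so the returned cost is non-increasing in k.

def limited_search_cost(D, c, k):
    n_states, n_tests = len(D), len(c)

    @lru_cache(maxsize=None)
    def h(x):
        if len(x) <= 1:
            return 0
        splits = []
        for j in range(n_tests):
            fail = frozenset(i for i in x if D[i][j])
            passed = x - fail
            if fail and passed:
                score = c[j] + abs(len(passed) - len(fail))
                splits.append((score, j, passed, fail))
        splits.sort(key=lambda s: s[:2])
        return min((c[j] + h(passed) + h(fail)
                    for _, j, passed, fail in splits[:k]),
                   default=float("inf"))

    return h(frozenset(range(n_states)))
```

With $k$ at least the number of tests this reduces to the full recursion, which is one way to see the monotone improvement noted above.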

VI. CONCLUSION

In this paper, we considered a set of four test sequencing

problems arising in Design for Testability. Specifically, we

considered

1) robust test sequencing;

2) test sequencing with a constraint on one of the objec-

tives;

3) test sequencing with a constraint on the number of

distinct tests used;

4) the problem of minimal storage test sequencing.

We developed optimal and near-optimal solutions for these

problems based on AO* and other heuristic top-down graph

search techniques.

ACKNOWLEDGMENT

The authors would like to thank an anonymous reviewer for

valuable comments on an earlier version of this paper and V.

Rajan for help in the preparation of the final manuscript.

REFERENCES

[1] D. J. Klinger et al., AT&T Reliability Manual. New York: Van Nostrand Reinhold, 1990.

[2] D. P. Siewiorek and R. S. Swarz, The Theory and Practice of Reliable System Design. Bedford, MA: Digital Press, 1982.

[3] S. L. Pollack, “Conversion of decision tables to computer programs,”

Commun. ACM, vol. 8, pp. 677–682, 1965.

[4] L. I. Press, “Conversion of decision tables to computer programs,” Commun. ACM, vol. 8, pp. 385–390, 1965.

[5] C. R. Hartmann, P. K. Varshney, K. G. Mehrotra, and C. L. Gerberich,

“Application of information theory to the construction of efficient

decision trees,” IEEE Trans. Inform. Theory, vol. 28, pp. 565–577, July

1982.

[6] L. T. Reinwald and R. M. Soland, “Conversion of limited entry decision

tables to optimal computer programs II: Minimum storage requirement,”

J. ACM, vol. 14, pp. 742–755, 1967.

[7] O. J. Murphy and R. L. McCraw, “Designing storage efficient decision

trees,” IEEE Trans. Comput., vol. 40, Mar. 1991.

[8] L. Hyafil and R. L. Rivest, “Constructing optimal binary decision trees

is NP-complete,” Inform. Process. Lett., vol. 5, no. 1, pp. 15–17, 1976.

[9] D. Comer and R. Sethi, “The complexity of trie index construction,” J.

ACM, vol. 24, pp. 428–440, 1977.

[10] M. R. Garey, “Optimal binary identification procedures,” SIAM J. Appl.

Math., vol. 23, no. 2, pp. 173–186, 1972.

[11] K. R. Pattipati and M. G. Alexandridis, “Application of heuristic search

and information theory to sequential fault diagnosis,” IEEE Trans. Syst.,

Man, Cybern., vol. 20, pp. 872–887, July 1990.

[12] A. Mahanti and A. Bagchi, “Admissible heuristic search in AND/OR

graphs,” Theor. Comput. Sci., vol. 24, pp. 207–219, July 1983.

[13] V. Raghavan, M. Shakeri, and K. R. Pattipati, “Optimal and near-optimal

test sequencing algorithms with realistic test models,” IEEE Trans. Syst.,

Man, Cybern., vol. 29, pp. 11–26, Jan. 1999.


Vijaya Raghavan (M’88) received the B.E. degree

from Osmania University, India, in 1990, and the

Ph.D. degree in controls and communication sys-

tems from the University of Connecticut, Storrs, in

1996. His research interests as a doctoral student

included array signal processing, numerically robust

target tracking algorithms, and automated fault di-

agnosis algorithms.

He was with Qualtech Systems, Inc., Storrs, from

1995 to 1997, where his primary focus was to

develop and implement efficient, near-optimal al-

gorithms for a range of large-scale, computationally intensive problems in

the area of system health management and automated fault diagnosis. He

is currently with The Mathworks, Inc., Natick, MA. His main focus is

the application of graph algorithms for production quality code-generation

from hierarchical, directed-graph representations of reactive systems. He

is responsible for the code-generation module of the Stateflow product, a

software tool for the executable specification of control-flow algorithms, and

design, simulation, and implementation of complex, event-driven systems.

His current research interests include development of visual programming

languages, code-generation, coverage and automatic test-data generation for

hybrid (data-flow and control-flow) simulation systems.

Dr. Raghavan received the Best Technical Paper award and the Best Student

Paper award at the 1994 and 1995 IEEE Autotest Conferences.


Mojdeh Shakeri (M’97) received the B.S. and M.S.

degrees in electrical engineering from Tehran Uni-

versity, Tehran, Iran, in 1988 and 1990, respectively,

and the Ph.D. degree in control and communication

systems from the University of Connecticut, Storrs,

in 1997.

She was with Qualtech Systems, Inc., Storrs, from

1995 to 1997. Her primary focus was to develop

and implement efficient, near-optimal algorithms for

a range of large-scale, computationally intensive

problems in the area of system health management

and automated fault diagnosis. Currently, she is with The Mathworks, Inc.,

Natick, MA. She is working on the simulation engine of Simulink, an

interactive environment for modeling, analyzing, and simulating a wide variety

of dynamic systems. Her main focus is to develop efficient algorithms for

simulation of large-scale, hierarchical systems arising in the areas of controls

and digital signal processing. Her research interests include measurement

scheduling, automated testing, system fault diagnosis, system simulation, and

real-time applications.

Dr. Shakeri received the Best Technical Paper award and the Best Student

Paper award at the 1994 and 1995 IEEE Autotest Conferences.

Krishna R. Pattipati (S’77–M’80–SM’91–F’95)

received the B.Tech. degree in electrical engineering

with highest honors from the Indian Institute of

Technology, Kharagpur, in 1975, and M.S. and

Ph.D. degrees in control and communication sys-

tems from the University of Connecticut, Storrs, in

1977 and 1980, respectively.

He was with Alphatech, Inc., Burlington, MA,

from 1980 to 1986, where he supervised and per-

formed research on artificial intelligence and sys-

tems theory approaches to human decision model-

ing, multitarget tracking, and automated testing. He has served as a Consultant

to Alphatech, Inc. and the IBM Thomas J. Watson Research Center, Yorktown

Heights, NY. Since September 1986, he has been with the University of

Connecticut, where he is a Professor of Electrical and Systems Engineering.

He is also President of Qualtech Systems, Inc., Storrs, a small business

specializing in software tools and solutions for testability, maintainability,

and quality control.

Dr. Pattipati was selected by the IEEE Systems, Man, and Cybernetics So-

ciety as the Outstanding Young Engineer of 1984 and received the Centennial

Key to the Future Award. He won the Best Technical Paper awards at the 1985,

1990, and 1994 IEEE AUTOTEST Conferences and at the 1997 Command

and Control Symposium. He served as the Vice-Chairman for invited sessions

of the IEEE International Conference on Systems, Man, and Cybernetics,

Boston, MA, 1989. He is the Editor of the IEEE TRANSACTIONS ON SYSTEMS,

MAN, AND CYBERNETICS—PART B: CYBERNETICS and is the Vice President for

Technical Activities of the IEEE SMC Society (1998–2000).
