Edward J. McCluskey’s research while affiliated with Stanford University and other places


Publications (210)


Test Point Insertion For Non-Feedback Bridging Faults
  • Article

March 2012 · 28 Reads · Edward J. McCluskey

This paper studies pseudo-random pattern testing of bridging faults. Although bridging faults are generally more random pattern testable than stuck-at faults, examples are shown to illustrate that some bridging faults can be much less random pattern testable than stuck-at faults. A fast method for identifying these random-pattern-resistant bridging faults is described. It is shown that state-of-the-art test point insertion techniques, which are based on the stuck-at fault model, are inadequate. Data is presented which indicates that even after inserting test points that result in 100% single stuck-at fault coverage, many bridging faults are still not detected. A test point insertion procedure that targets both single stuck-at faults and non-feedback bridging faults is presented. It is shown that by considering both types of faults when selecting the locations for test points, higher fault coverage can be obtained with little or no increase in overhead. Thus, the test point insertion procedure described here is a low-cost way to improve the quality of built-in self-test. While this paper considers only non-feedback bridging faults, the techniques that are described can be applied to feedback bridging faults in a straightforward manner.
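
As a rough illustration of the random-pattern-resistance issue the abstract describes, the sketch below estimates per-pattern detection probabilities by Monte Carlo simulation on a toy circuit. The circuit structure, the AND-type bridge model observed only on the victim net, and all numbers are assumptions made for illustration, not the paper's benchmarks or its identification method.

    import random

    def nets(bits):
        # Fault-free values of the two bridged nets in a toy circuit:
        # v is an 8-input AND (rarely 1 under random patterns, p = 1/256),
        # a is a 4-input OR (rarely 0 under random patterns, p = 1/16).
        v = all(bits[0:8])
        a = any(bits[8:12])
        return v, a

    def monte_carlo(n=500_000):
        random.seed(0)
        bridge_hits = stuck_at_hits = 0
        for _ in range(n):
            bits = [random.getrandbits(1) for _ in range(12)]
            v, a = nets(bits)
            # AND-type bridge observed on v: the faulty value is v AND a, so an
            # error appears only when v = 1 and a = 0 (assuming v is directly
            # observable and the change on a does not propagate).
            if v and not a:
                bridge_hits += 1
            # For comparison, v stuck-at-0 is excited whenever v = 1.
            if v:
                stuck_at_hits += 1
        return bridge_hits / n, stuck_at_hits / n

    p_bridge, p_stuck_at = monte_carlo()
    print(f"bridge detection probability per pattern   ~ {p_bridge:.2e}")
    print(f"stuck-at detection probability per pattern ~ {p_stuck_at:.2e}")

Under these assumptions the bridge is roughly sixteen times harder to detect with random patterns than the hardest stuck-at fault on the same net, which is the kind of gap a bridging-aware test point is meant to close.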


Test Set Compression Through Alternation Between Deterministic and Pseudorandom Test Patterns

October 2010 · 13 Reads · 1 Citation

Journal of Electronic Testing

This paper presents a new reseeding technique that reduces the storage required for the seeds as well as the test application time by alternating between ATPG and reseeding to optimize the seed selection. The technique avoids loading a new seed into the PRPG whenever the PRPG can be placed in a state that generates test patterns without explicitly loading a seed. The ATPG process is tuned to target only undetected faults as the PRPG goes through its natural sequence which is maximally used to generate useful test patterns. The test application procedure is slightly modified to enable higher flexibility and more reduction in tester storage and test time. The results of applying the technique show up to 90% reduction in tester storage and 80% reduction in test time compared to classic reseeding. They also show 70% improvement in defect coverage when the technique is emulated on test chips with real defects.
Keywords: BIST, Built-in self-test, Test set compression, DFT, Design-for-testability, Reseeding, Scan testing, Pseudorandom test, Deterministic BIST
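
The core alternation idea can be sketched in a few lines: let the PRPG free-run and load one of the stored seeds only when the natural sequence stops contributing. A minimal sketch follows, assuming a 16-bit Fibonacci LFSR as the PRPG and a fault-simulation stub detects(); the taps, the idle threshold, and the pattern budget are illustrative choices, not the paper's parameters.

    class LFSR16:
        # 16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11),
        # standing in for the on-chip PRPG.
        def __init__(self, seed):
            self.state = seed & 0xFFFF

        def step(self):
            s = self.state
            bit = (s ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1
            self.state = (s >> 1) | (bit << 15)
            return self.state

    def apply_tests(seeds, detects, budget=10_000, max_idle=32):
        # seeds: deterministic seeds stored on the tester.
        # detects(pattern): stub returning the number of new faults the pattern
        # detects (in practice, a fault simulator targeting undetected faults).
        prpg = LFSR16(seeds[0])
        loaded = 1           # seeds actually shipped from the tester
        idle = 0
        for _ in range(budget):
            pattern = prpg.step()
            idle = 0 if detects(pattern) else idle + 1
            # Reseed only when the free-running sequence has stalled.
            if idle > max_idle and loaded < len(seeds):
                prpg.state = seeds[loaded] & 0xFFFF
                loaded += 1
                idle = 0
        return loaded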


Failing Frequency Signature Analysis

November 2008 · 32 Reads · 13 Citations

IEEE International Test Conference (TC)

A failing frequency signature is a collection of the maximum operating frequencies of each pattern in a pattern set. Analyzing the failing frequency signature can successfully detect small-delay defects, because even a small-delay defect can cause an outlier to appear in a failing frequency signature. Moreover, the failing frequency signature will be consistent even in the presence of process variations. Failing frequency signature analysis can effectively detect very hard-to-find defects that occur in No-Trouble-Found (NTF) devices, without the necessity of thorough test patterns. The analysis can also be used for characterization tests. In this paper, experimental results using the failing frequency signature to detect small-delay defects are presented using test chips fabricated in a 0.18 μm technology.
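
A minimal sketch of what a failing frequency signature looks like in code, assuming per-pattern pass/fail results from a frequency sweep are already available; the frequency grid and the simple median-minus-margin outlier rule are illustrative assumptions, not the paper's analysis.

    from statistics import median

    def failing_frequency_signature(pass_fail):
        # pass_fail[p][f] -> True if pattern p still passes at frequency f (MHz).
        # The signature records, per pattern, the highest passing frequency.
        return {p: max((f for f, ok in results.items() if ok), default=0.0)
                for p, results in pass_fail.items()}

    def outlier_patterns(signature, margin_mhz=50.0):
        # Patterns whose maximum passing frequency sits well below the rest:
        # even a small-delay defect on the paths they exercise shows up as such
        # an outlier, while process variation shifts all patterns together.
        m = median(signature.values())
        return [p for p, fmax in signature.items() if fmax < m - margin_mhz]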



How Many Test Patterns are Useless?

April 2008 · 40 Reads · 21 Citations

Previous studies using production test data reported that not all of the applied production test patterns detected defective chips. These studies found that 70% to 90% of their production test patterns seemed useless, because these patterns detected no defective chips and could therefore be removed without impacting test quality. Previous researchers qualitatively explained this finding by a lack of correlation between test metrics and defect coverage. Notwithstanding this lack of correlation, in this paper we develop a simple statistical model that relates the expected number of useless patterns to the production yield, the defect coverage characteristics, and the number of tested chips. This model demonstrates that for practical values of production yield, defect coverage, and number of chips tested, a significant fraction of test patterns will be useless. We validated this statistical model by comparing its results with actual production testing data.
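
A simple model in the spirit of the abstract can be written down directly: call a pattern useless for a production run if none of the defective chips tested happens to fail it. The sketch below assumes independent defective chips and an illustrative distribution of per-pattern detection probabilities; the paper's actual model and data may differ.

    def expected_useless_fraction(detect_probs, n_chips, yield_):
        # detect_probs[i]: probability that a defective chip fails pattern i.
        # With roughly n_chips * (1 - yield_) defective chips in the run, the
        # chance that pattern i catches none of them is (1 - p_i) ** n_defective.
        n_defective = n_chips * (1.0 - yield_)
        useless = sum((1.0 - p) ** n_defective for p in detect_probs)
        return useless / len(detect_probs)

    # 1,000 patterns whose marginal detection probabilities fall off quickly,
    # applied to 10,000 chips at 95% yield (all numbers are illustrative):
    probs = [0.5 / (i + 1) for i in range(1000)]
    print(f"expected useless fraction: "
          f"{expected_useless_fraction(probs, 10_000, 0.95):.2f}")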


Error Sequence Analysis

April 2008 · 24 Reads · 6 Citations

With increasing IC process variation and increased operating speed, it is more likely that even subtle defects will lead to the malfunctioning of a circuit. Various fault models, such as the transition fault model and the path-delay model, have been used to aid delay defect detection. However, these models are not efficient for small-delay defect coverage or for test pattern generation time. Error sequence analysis utilizes the order in which the errors occur during a frequency sweep of a transition test to identify small-delay defects that may escape the same test applied in the conventional way. Moreover, it can detect such defects even in the presence of inter-die process variations, such as lot-to-lot and wafer-to-wafer process variation. In addition, error sequence analysis is very effective in separating devices with delay defects from devices that have failed due to process variation.
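
The ordering idea can be sketched as follows, assuming the frequency sweep yields, for each pattern, the lowest frequency at which it starts to fail. The reference ordering (for example, from characterization of defect-free parts) and the rank-jump threshold are assumptions for illustration.

    def error_sequence(first_fail_freq):
        # first_fail_freq[p]: lowest frequency (MHz) at which pattern p fails.
        # Sweeping upward, patterns fail in ascending order of this value.
        return sorted(first_fail_freq, key=first_fail_freq.get)

    def suspicious_patterns(observed_seq, reference_seq, max_rank_jump=5):
        # Inter-die process variation shifts all failing frequencies together
        # and largely preserves the order; a localized small-delay defect
        # instead pushes the affected patterns toward the front of the sequence.
        ref_rank = {p: i for i, p in enumerate(reference_seq)}
        return [p for i, p in enumerate(observed_seq)
                if ref_rank.get(p, i) - i > max_rank_jump]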


Inconsistent Fail due to Limited Tester Timing Accuracy

April 2008 · 24 Reads · 4 Citations

Delay testing is a technique to determine if a chip will function correctly at a specified frequency. If a chip passes delay tests, it will presumably function at the specified frequency in the field. This paper presents experimental results that show how chips can pass very thorough delay tests and still fail in the field. It is shown that some chips sometimes pass and sometimes fail when the same delay test is applied multiple times under the same test conditions. These chips are called inconsistent fails. This paper shows how tester timing edge placement accuracy can cause inconsistent fails and suggests minimum guardband requirements that avoid such inconsistent test results.
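
The mechanism is easy to reproduce numerically. The toy simulation below applies the same delay test repeatedly while the strobe edge wanders within the tester's accuracy band; the path delay, strobe time, accuracy figure, and guardband value are illustrative numbers, not the paper's measurements.

    import random

    def run_delay_test(path_delay_ps, strobe_ps, edge_accuracy_ps, trials=20):
        outcomes = []
        for _ in range(trials):
            # The actual strobe placement varies within the tester's accuracy spec.
            actual = strobe_ps + random.uniform(-edge_accuracy_ps, edge_accuracy_ps)
            outcomes.append(path_delay_ps <= actual)
        return outcomes

    random.seed(1)
    # Path delay 5,010 ps, nominal strobe 5,000 ps, +/-50 ps edge accuracy:
    # the same test flips between pass and fail across applications.
    print(run_delay_test(5_010, 5_000, 50))
    # A guardband larger than the edge-placement error removes the ambiguity:
    # strobing 60 ps earlier makes this chip fail consistently.
    print(run_delay_test(5_010, 5_000 - 60, 50))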


California scan architecture for high quality and low power testing

November 2007 · 20 Reads · 32 Citations

IEEE International Test Conference (TC)

This paper presents a scan architecture - California scan - that achieves high quality and low power testing by modifying test patterns in the test application process. The architecture is feasible because most of the bits in the test patterns generated by ATPG tools are don't-care bits. Scan shift-in patterns have their don't-care bits assigned using the repeat-fill technique, reducing switching activity during the scan shift-in operation; the scan shift-in patterns are altered to toggle-fill patterns when they are applied to the combinational logic, improving defect coverage.
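
A minimal sketch of the two fills the abstract refers to, assuming 'X' marks a don't-care bit in the ATPG pattern; how the architecture converts the shift-in pattern into the applied pattern on-chip is not shown here.

    def repeat_fill(pattern):
        # Each X copies the previous bit: few transitions while shifting,
        # which keeps scan shift power low.
        out, prev = [], '0'
        for b in pattern:
            prev = prev if b == 'X' else b
            out.append(prev)
        return ''.join(out)

    def toggle_fill(pattern):
        # Each X inverts the previous bit: high toggle activity when the
        # pattern reaches the combinational logic, improving defect coverage.
        out, prev = [], '0'
        for b in pattern:
            prev = ('1' if prev == '0' else '0') if b == 'X' else b
            out.append(prev)
        return ''.join(out)

    cube = "1XXX0XX1X"
    print(repeat_fill(cube))  # 111100011
    print(toggle_fill(cube))  # 101001010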


Test Set Reordering Using the Gate Exhaustive Test Metric

June 2007 · 17 Reads · 6 Citations

When a test set size is larger than desired, some patterns must be dropped. This paper presents a systematic method to reduce test set size; the method reorders a test set using the gate exhaustive test metric and truncates the test set to the desired size. To determine the effectiveness of the method, test sets with 1,556 test patterns were applied to 140 defective Stanford ELF18 test cores. The original test set required 758 test patterns to detect all defective cores, while the test set reordered using the presented method required 286 test patterns. The method also reduces the test application time for defective cores.
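
One plausible reading of the reorder-then-truncate method is a greedy selection on incremental gate-exhaustive coverage, sketched below; the event extraction is stubbed out as events_of(), and the greedy loop is an assumption for illustration rather than the paper's exact algorithm.

    def reorder_by_coverage(patterns, events_of):
        # events_of(pattern) -> set of gate-exhaustive events (observed input
        # combinations at gate outputs) that the pattern covers.
        remaining, covered, ordered = list(patterns), set(), []
        while remaining:
            best = max(remaining, key=lambda p: len(events_of(p) - covered))
            if not events_of(best) - covered:
                ordered.extend(remaining)  # nothing adds new events; keep original order
                break
            remaining.remove(best)
            covered |= events_of(best)
            ordered.append(best)
        return ordered

    def truncate(ordered_patterns, budget):
        # Keep only as many patterns as the test budget allows.
        return ordered_patterns[:budget]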


Classifying Bad Chips and Ordering Test Sets

November 2006 · 20 Reads · 20 Citations

IEEE International Test Conference (TC)

This paper shows data related to choosing a pair of test sets for digital IC production test. The data demonstrate that the choice of the second set of the pair should take into account the test metric used for the first test set. An approach for making this choice by taking defect coverage and total test length into account is presented.


Citations (65)


... The basis for most LBIST approaches is pseudo-random tests that can easily be produced on-chip using linear-feedback shift registers (LFSRs) [1]. For increased fault coverage, pseudo-random tests are enhanced by one of several approaches: controlling the probabilities of 0s and 1s in the applied tests to obtain weighted pseudo-random tests [1], using bit-flipping or bit-fixing logic to assign specific values to certain inputs as in [2] or [11], or using multiple seeds for the LFSR [3]. Test points are also useful in increasing the fault coverage [1]. ...

Reference:

Storage and Counter Based Logic Built-In Self-Test
Bit-fixing in pseudorandom sequences for scan BIST
  • Citing Article
  • Full-text available
  • April 2001

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

... This paper is organized as follows. Section 2 briefly describes the relevant aspects of the Test Chip; more details can be found in [1,18]. Section 3 covers the test conditions and test sets applied, and the next sections focus on timing-related defects. ...

An Experimental Chip to Evaluate Test Techniques Part 1: Description of Experiment
  • Citing Article
  • December 1998

... The voltage and frequency scaling schemes presented in this work are intended only for the reduction of test time and should not interfere with the fault coverage of the test. It has been shown that while VDD does not affect stuck-open defects, it may affect the behavior of resistive opens [8,25]. Chang and McCluskey conclude from their experiments that low voltage testing captures defects that can cause early-life and intermittent failures and that these defects are undetected at nominal voltage [2,3]. ...

Testing for resistive opens and stuck opens
  • Citing Article
  • January 2001

... There is a large amount of literature on this classic topic [1,2,8,10,33]. However, detecting these soft errors will increase execution time, which means the degradation of system performance [9,13,21,22,24,27,31,37]. For example, error detection by diverse data and duplicated instructions (EDDDDI) improves the reliability, but brings a huge overhead of execution time [24]. ...

Fault-Tolerant Systems in a Space Environment: The CRC ARGOS project

... They are good for magnetic and optical storage, where a simple retransmit request to correct bit errors is feasible. Hudak et al. compared several fault-tolerant software techniques including Algorithm Based Fault Tolerance based on error rate [15]. The techniques were ranked based on reliability and cost. ...

Algorithm-Based Fault Tolerance: A Performance Perspective Based on Error Rate

... Nevertheless, some small delay faults can be propagated along short paths only, such that they are undetectable even by advanced timing aware ATPG. Faster-than-at-speed Test (FAST) targets these Hidden Delay Faults (HDFs) by overclocking the circuit, typically using several frequencies up to three times higher than the nominal frequency [14], [15], [16], [17]. Silicon experiments have already demonstrated the effectiveness of this strategy [18], [19]. ...

Failing Frequency Signature Analysis
  • Citing Conference Paper
  • November 2008

IEEE International Test Conference (TC)

... The complexity of today's ICs and shrinking process technologies are also leading to prohibitively high test-data volumes. For example, the volume for TDFs is two to five times higher than that for stuck-at faults [10], and it has been demonstrated recently that test patterns for such sequence and timing-dependent faults are more important for newer technologies [11]. The 2007 International Technology Roadmap for Semiconductors predicted that the test data volume for integrated circuits will be as much as 38 times larger and the test application time will be about 17 times longer in 2015 than it was in 2007 [12]. ...

Classifying Bad Chips and Ordering Test Sets
  • Citing Conference Paper
  • November 2006

IEEE International Test Conference (TC)

... Shorts between lines feeding the same gate are not included. Shorts between signal lines and the power/ground grid are not considered because they are more likely to behave as stuck-at or transition faults [72][73][74][75]. The bridge resistance is assumed to be uniformly distributed between 0 Ω and 40 kΩ [76]. ...

Pseudo-Random Pattern Testing of Bridging Faults.

... In this paper, time redundant execution and comparison of results are referred to as Temporal Error Detection (TED). TED can be applied on different levels such as the instruction level [4], procedure level [5] or at the task level [6]. Recent studies utilize modern technologies in processors to reduce the time overhead [7]. ...

Procedure Call Duplication: Minimization of Energy Consumption with Constrained Error Detection Latency.
  • Citing Conference Paper
  • January 2001