Testing in the Fourth Dimension


Testing
1
Problems of Ideal Tests




Ideal tests detect all defects produced in
the manufacturing process.
Ideal tests pass all functionally good
devices.
Very large numbers and varieties of
possible defects need to be tested.
Difficult to generate tests for some real
defects. Defect-oriented testing is an open
problem.
2
Real Tests




Based on analyzable fault models, which
may not map on real defects.
Incomplete coverage of modeled faults due
to high complexity.
Some good chips are rejected. The
fraction (or percentage) of such chips is
called the yield loss.
Some bad chips pass tests. The fraction
(or percentage) of bad chips among all
passing chips is called the defect level.
3
Testing as Filter Process
[Figure: testing as a filter. Fabricated chips are either good (Prob(good) = y) or defective (Prob(bad) = 1 - y); good chips pass the test with high probability and defective chips fail it with high probability, so the passing stream contains mostly good chips and the failing stream mostly bad chips.]
4
Costs of Testing



Design for testability (DFT)
  Chip area overhead and yield reduction
  Performance overhead
Software processes of test
  Test generation and fault simulation
  Test programming and debugging
Manufacturing test
  Automatic test equipment (ATE) capital cost
  Test center operational cost
5
Design for Testability (DFT)
DFT refers to hardware design styles or added
hardware that reduces test generation complexity.
Motivation: Test generation complexity increases
exponentially with the size of the circuit.
Example: Test hardware applies tests to blocks A
and B and to internal bus; avoids test generation
for combined A and B blocks.
[Figure: logic block A and logic block B connected by an internal bus between PI and PO; added test hardware provides a test input and test output so that block A, block B, and the internal bus can each be tested separately.]
6
Cost of Manufacturing
Testing in 2000AD



0.5-1.0 GHz, analog instruments, 1,024 digital pins: ATE purchase price
= $1.2M + 1,024 x $3,000 = $4.272M
Running cost (five-year linear depreciation)
= Depreciation + Maintenance + Operation
= $0.854M + $0.085M + $0.5M
= $1.439M/year
Test cost (24-hour ATE operation)
= $1.439M / (365 x 24 x 3,600)
= 4.5 cents/second
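The arithmetic above is easy to check; the minimal sketch below (Python) reproduces it, using the slide's year-2000 figures as assumptions.

# Sketch reproducing the slide's year-2000 ATE cost arithmetic (assumed figures).
purchase = 1.2e6 + 1024 * 3000           # base price + per-pin electronics = $4.272M
depreciation = purchase / 5               # five-year linear depreciation, per year
maintenance = 0.085e6                     # yearly maintenance (from the slide)
operation = 0.5e6                         # yearly operating cost (from the slide)
running_per_year = depreciation + maintenance + operation    # ~$1.439M/year
seconds_per_year = 365 * 24 * 3600        # 24-hour ATE operation
cost_per_second = running_per_year / seconds_per_year
print(f"purchase = ${purchase/1e6:.3f}M,"
      f" running cost = ${running_per_year/1e6:.3f}M/year,"
      f" test cost = {100 * cost_per_second:.2f} cents/second")   # ~4.5 cents/second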
7
Testing Principle
8
Automatic Test
Equipment Components

Consists of:
Powerful computer
Powerful 32-bit Digital Signal Processor
(DSP) for analog testing
Test Program (written in high-level
language) running on the computer
Probe Head (actually touches the bare
or packaged chip to perform fault
detection experiments)
Probe Card or Membrane Probe
(contains electronics to measure signals
on chip pin or pad)
9
ADVANTEST Model
T6682 ATE
10
T6682 ATE Block Diagram
11
LTX FUSION HF ATE
12
Verification Testing


Ferociously expensive
May comprise:
Scanning Electron Microscope tests
Bright-Lite detection of defects
Electron beam testing
Artificial intelligence (expert system)
methods
Repeated functional tests
13
Characterization Test


Worst-case test
Choose test that passes/fails chips
Select statistically significant sample of
chips
Repeat test for every combination of 2+
environmental variables
Plot results in Shmoo plot
Diagnose and correct design errors
Continue throughout production life of chips
to improve design and process to increase
yield
14
Manufacturing Test






Determines whether manufactured chip
meets specs
Must cover high % of modeled faults
Must minimize test time (to control cost)
No fault diagnosis
Tests every device on chip
Test at speed of application or speed
guaranteed by supplier
15
Burn-in or Stress Test


Process:
Subject chips to high temperature & overvoltage supply, while running production
tests
Catches:
Infant mortality cases – damaged chips that would fail in the first 2 days of operation; burn-in makes these bad devices fail before chips are shipped to customers
Freak failures – devices having the same failure mechanisms as reliable devices
16
Sub-types of Tests


Parametric – measures electrical
properties of pin electronics – delay,
voltages, currents, etc. – fast and cheap
Functional – used to cover very high % of
modeled faults – test every transistor and
wire in digital circuits – long and expensive
– main topic of tutorial
17
Fault Modeling




Why model faults?
Some real defects in VLSI and PCB
Common fault models
Stuck-at faults






Single stuck-at faults
Fault equivalence
Fault dominance and checkpoint theorem
Classes of stuck-at faults and multiple faults
Transistor faults
Summary
18
Some Real Defects in Chips




Processing defects
 Missing contact windows
 Parasitic transistors
 Oxide breakdown
 . . .
Material defects
 Bulk defects (cracks, crystal imperfections)
 Surface impurities (ion migration)
 . . .
Time-dependent failures
 Dielectric breakdown
 Electromigration
 . . .
Packaging failures
 Contact degradation
 Seal leaks
 . . .
Ref.: M. J. Howes and D. V. Morgan, Reliability and Degradation: Semiconductor Devices and Circuits, Wiley, 1981.
19
Observed PCB Defects
Defect class              Occurrence frequency (%)
Shorts                    51
Opens                      1
Missing components         6
Wrong components          13
Reversed components        6
Bent leads                 8
Analog specifications      5
Digital logic              5
Performance (timing)       5
Ref.: J. Bateson, In-Circuit Testing, Van Nostrand Reinhold, 1985.
20
Common Fault Models








Single stuck-at faults
Transistor open and short faults
Memory faults
PLA faults (stuck-at, cross-point, bridging)
Functional faults (processors)
Delay faults (transition, path)
Analog faults
etc.
21
Single Stuck-at Fault

Three properties define a single stuck-at fault




Only one line is faulty
The faulty line is permanently set to 0 or 1
The fault can be at an input or output of a gate
Example: XOR circuit has 12 fault sites and 24 single stuck-at faults
[Figure: XOR circuit with fault sites marked on lines a through k and output z; line h has an s-a-0 fault. Signal values are shown as good(faulty), e.g. 0(1) and 1(0), for the test vector that detects the h s-a-0 fault.]
22
Fault Equivalence




Number of fault sites in a Boolean gate circuit
= #PI + #gates + # (fanout branches).
Fault equivalence: Fault sets f1 and f2 are
equivalent if all tests that detect f1 also
detect f2 and vice versa.
If faults f1 and f2 are equivalent then the
corresponding faulty functions are identical.
Fault collapsing: All single faults of a logic
circuit can be divided into disjoint equivalence
subsets, where all faults in a subset are
mutually equivalent. A collapsed fault set
contains one fault from each equivalence
subset.
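To make the site count and the collapsing concrete, the sketch below applies the #PI + #gates + #fanout-branches formula and the per-gate equivalence-collapsing rule to a small made-up netlist; the netlist and signal names are purely illustrative.

# A minimal sketch of the fault-site formula and per-gate equivalence collapsing.
# The tiny netlist below is hypothetical, just to exercise the counting rules.
netlist = {                          # gate name -> (type, list of input signals)
    "g1": ("AND", ["a", "b"]),
    "g2": ("OR",  ["b", "c"]),
    "g3": ("AND", ["g1", "g2"]),
}
primary_inputs = ["a", "b", "c"]

# Fanout count of every signal (how many gate inputs it drives).
fanout = {}
for _, (_, ins) in netlist.items():
    for s in ins:
        fanout[s] = fanout.get(s, 0) + 1

# Fault sites = #PI + #gates + #fanout branches (branches exist where fanout > 1).
branches = sum(n for n in fanout.values() if n > 1)
sites = len(primary_inputs) + len(netlist) + branches
print("fault sites:", sites, "-> uncollapsed stuck-at faults:", 2 * sites)

# Equivalence collapsing for a single n-input AND/OR/NAND/NOR gate: the n input
# faults at the controlling value and the corresponding output fault form one
# equivalence class, so 2(n+1) faults collapse to n+2.
for name, (gtype, ins) in netlist.items():
    n = len(ins)
    print(f"{name} ({gtype}, {n} inputs): {2*(n+1)} stuck-at faults collapse to {n+2}")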
23
Equivalence Rules
[Figure: equivalence collapsing rules for the elementary gates.
AND: any input s-a-0 is equivalent to the output s-a-0.
OR: any input s-a-1 is equivalent to the output s-a-1.
NAND: any input s-a-0 is equivalent to the output s-a-1.
NOR: any input s-a-1 is equivalent to the output s-a-0.
NOT: input s-a-0 is equivalent to output s-a-1, and input s-a-1 to output s-a-0.
WIRE: faults at the two ends are pairwise equivalent.
FANOUT: stem faults are not equivalent to branch faults, so each branch keeps its own fault pair.]
24
Dominance Example
[Figure: example circuit with an sa0/sa1 fault pair marked on every line. Faults shown in red are removed by equivalence collapsing; faults shown in yellow are removed by dominance collapsing, leaving 15 of the original 32 faults.]
Collapse ratio = 15/32 ≈ 0.47
25
Fault Dominance





If all tests of some fault F1 detect another fault F2, then
F2 is said to dominate F1.
Dominance fault collapsing: If fault F2 dominates F1,
then F2 is removed from the fault list.
When dominance fault collapsing is used, it is sufficient
to consider only the input faults of Boolean gates. See
the next example.
In a tree circuit (without fanouts) PI faults form a
dominance collapsed fault set.
If two faults dominate each other then they are
equivalent.
26
Dominance Example
[Figure: three-input AND-gate example. The only test of fault F1 (an input s-a-1) is 011; the tests of F2 (output s-a-1) are 000, 001, 010, 011, 100, 101, and 110, so every test of F1 also detects F2 and F2 dominates F1. The dominance-collapsed fault set shown keeps the input s-a-1 faults and one s-a-0 fault.]
27
Checkpoints


Primary inputs and fanout branches of a combinational
circuit are called checkpoints.
Checkpoint theorem: A test set that detects all single
(multiple) stuck-at faults on all checkpoints of a
combinational circuit, also detects all single (multiple)
stuck-at faults in that circuit.
Example circuit: total fault sites = 16, checkpoints = 10
28
Transistor (Switch) Faults

MOS transistor is considered an ideal switch
and two types of faults are modeled:




Stuck-open -- a single transistor is permanently
stuck in the open state.
Stuck-short -- a single transistor is permanently
shorted irrespective of its gate voltage.
Detection of a stuck-open fault requires two
vectors.
Detection of a stuck-short fault requires the
measurement of quiescent current (IDDQ).
29
Stuck-Open Example
[Figure: two-input CMOS gate (pMOS and nMOS FETs) with one transistor stuck-open. Vector 1, the test for A s-a-0, initializes the output; Vector 2, the test for A s-a-1, then exposes the fault: the good circuit drives the output to 1 while the faulty output floats (Z) and retains its previous value. A two-vector stuck-open test can be constructed by ordering two stuck-at tests.]
30
Stuck-Short Example
[Figure: two-input CMOS gate with one transistor stuck-short. The test vector for A s-a-0 creates a conducting path from VDD to ground in the faulty circuit (the IDDQ path); the good circuit output is 0 while the faulty output is indeterminate (X), so the fault is detected by measuring the quiescent supply current IDDQ.]
31
Functional vs. Structural
ATPG
32
Carry Circuit
33
Functional vs. Structural
(Continued)



Functional ATPG – generate complete set of tests for circuit input-output combinations
129 inputs, 65 outputs:
2^129 = 680,564,733,841,876,926,926,749,214,863,536,422,912 patterns
Using 1 GHz ATE, would take 2.15 x 10^22 years
Structural test:
No redundant adder hardware, 64 bit slices
Each with 27 faults (using fault equivalence)
At most 64 x 27 = 1728 faults (tests)
Takes 0.000001728 s on 1 GHz ATE
Designer gives small set of functional tests – augment with structural tests to boost coverage to 98+%
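Both headline numbers are straightforward to reproduce; the sketch below redoes the arithmetic, assuming the slide's 1 GHz ATE and 27 collapsed faults per bit slice.

# Back-of-envelope check of the functional-vs-structural numbers above.
SECONDS_PER_YEAR = 365 * 24 * 3600

exhaustive_patterns = 2 ** 129                        # every input combination of the 129-input adder
years = exhaustive_patterns / 1e9 / SECONDS_PER_YEAR  # 1 GHz ATE = 1e9 patterns/s
print(f"exhaustive: {exhaustive_patterns:.3e} patterns, {years:.2e} years")   # ~2.2e22 years

structural_tests = 64 * 27                            # 64 bit slices x 27 collapsed faults
print(f"structural: {structural_tests} tests, {structural_tests / 1e9:.9f} s")  # 0.000001728 s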
34
Exhaustive Algorithm


For an n-input circuit, generate all 2^n input patterns
Infeasible, unless circuit is partitioned into cones of logic with ≤ 15 inputs
Perform exhaustive ATPG for each cone
Misses faults that require specific activation patterns for multiple cones to be tested
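For a cone that small, exhaustive ATPG amounts to trying every pattern and keeping those on which the faulty cone differs from the good one. The sketch below does this for a hypothetical 3-input cone with a stuck-at fault injected on an internal line; the cone and fault are invented for illustration.

from itertools import product

# Hypothetical 3-input cone: z = (a AND b) OR c, with internal line d = a AND b.
def cone(a, b, c, d_stuck_at=None):
    d = a & b
    if d_stuck_at is not None:      # inject a stuck-at fault on internal line d
        d = d_stuck_at
    return d | c

# Exhaustive ATPG: simulate every pattern on the good and faulty cones.
tests = []
for a, b, c in product((0, 1), repeat=3):
    if cone(a, b, c) != cone(a, b, c, d_stuck_at=0):    # target fault: d stuck-at-0
        tests.append((a, b, c))

print("patterns detecting d s-a-0:", tests)   # expect (1, 1, 0): d = 1 good, 0 faulty, c = 0 exposes it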
35
Random-Pattern Generation


[Figure: flow chart for the method.]
Use to get tests for 60-80% of faults, then switch to D-algorithm or other ATPG for the rest
36
History of Algorithm
Speedups
Algorithm               Est. speedup over D-ALG    Year
                        (normalized to D-ALG time)
D-ALG                        1                     1966
PODEM                        7                     1981
FAN                         23                     1983
TOPS                       292                     1987
SOCRATES                  1574 †                   1988
Waicukauski et al.        2189 †                   1990
EST                       8765 †                   1991
TRAN                      3005 †                   1993
Recursive learning         485                     1995
Tafertshofer et al.      25057                     1997
† ATPG system
37
Testability Measures




Definition
Controllability and observability
SCOAP measures
Combinational circuits
Sequential circuits
Summary
38
What are Testability Measures?


Approximate measures of:
Difficulty of setting internal circuit lines to 0 or 1
from primary inputs.
Difficulty of observing internal circuit lines at
primary outputs.
Applications:
Analysis of difficulty of testing internal circuit
parts – redesign or add special test hardware.
Guidance for algorithms computing test patterns
– avoid using hard-to-control lines.
39
Testability Analysis
 Determines testability measures
 Involves circuit topological analysis, but no test vectors (static analysis) and no search algorithm
 Linear computational complexity
 Otherwise, is pointless – might as well use automatic test-pattern generation and calculate:
  Exact fault coverage
  Exact test vectors
40
SCOAP Measures




SCOAP – Sandia Controllability and Observability Analysis Program
Combinational measures:
 CC0 – Difficulty of setting circuit line to logic 0
 CC1 – Difficulty of setting circuit line to logic 1
 CO – Difficulty of observing a circuit line
Sequential measures – analogous:
 SC0
 SC1
 SO
Ref.: L. H. Goldstein, “Controllability/Observability Analysis of Digital Circuits,” IEEE Trans. CAS, vol. CAS-26, no. 9, pp. 685-693, Sep. 1979.
41
Range of SCOAP Measures




Controllabilities – 1 (easiest) to infinity (hardest)
Observabilities – 0 (easiest) to infinity (hardest)
Combinational measures:
Roughly proportional to number of circuit lines that
must be set to control or observe given line.
Sequential measures:
Roughly proportional to number of times flip-flops
must be clocked to control or observe given line.
42
Combinational Controllability
43
Controllability Formulas
(Continued)
44
Combinational Observability
To observe a gate input: Observe output and make other input
values non-controlling.
45
Observability Formulas
(Continued)
Fanout stem: Observe through branch with best
observability.
46
Comb. Controllability
Circled numbers give level number. (CC0, CC1)
47
Controllability Through
Level 2
48
Final Combinational
Controllability
49
Combinational
Observability for Level 1
Number in square box is level from primary outputs (POs).
(CC0, CC1) CO
50
Combinational
Observabilities for Level 2
51
Final Combinational
Observabilities
52
Sequential Measures
(Comparison)
 Combinational
 Increment CC0, CC1, CO whenever you pass through
a gate, either forward or backward.
 Sequential
 Increment SC0, SC1, SO only when you pass through
a flip-flop, either forward or backward.
 Both
 Must iterate on feedback loops until controllabilities
stabilize.
53
D Flip-Flop Equations
 Assume a synchronous RESET line.
 SC1(Q) = SC1(D) + SC1(C) + SC0(C) + SC0(RESET) + 1
 SC0(Q) = min [SC1(RESET) + SC1(C) + SC0(C), SC0(D) + SC1(C) + SC0(C)] + 1
 SO(D) = SO(Q) + SC1(C) + SC0(C) + SC0(RESET)
54
D Flip-Flop Clock and Reset





CO(RESET) = CO(Q) + CC1(Q) + CC1(RESET) + CC1(C) + CC0(C)
SO(RESET) is analogous
Three ways to observe the clock line:
1. Set Q to 1 and clock in a 0 from D
2. Set the flip-flop and then reset it
3. Reset the flip-flop and clock in a 1 from D
CO(C) = min [CO(Q) + CC1(Q) + CC0(D) + CC1(C) + CC0(C),
             CO(Q) + CC1(Q) + CC1(RESET) + CC1(C) + CC0(C),
             CO(Q) + CC0(Q) + CC0(RESET) + CC1(D) + CC1(C) + CC0(C)]
SO(C) is analogous
55
Testability Computation
1. For all PIs, CC0 = CC1 = 1 and SC0 = SC1 = 0
2. For all other nodes, CC0 = CC1 = SC0 = SC1 = ∞
3. Go from PIs to POs, using CC and SC equations to get controllabilities -- iterate on loops until SC stabilizes -- convergence is guaranteed
4. Set CO = SO = 0 for POs, ∞ for all other lines
5. Work from POs to PIs, using CO, SO, and controllabilities to get observabilities
6. Fanout stem (CO, SO) = min over branches of (CO, SO)
7. If a CC or SC (CO or SO) is ∞, that node is uncontrollable (unobservable)
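As a rough illustration of the combinational part of this procedure, the sketch below computes (CC0, CC1) and CO for a tiny made-up circuit using the AND/OR rules summarized on the preceding slides (+1 per level, sum over inputs that must all be set, min over inputs when any one suffices); it omits fanout handling and the sequential measures.

# Minimal SCOAP sketch (combinational measures only) for a made-up circuit:
#   e = AND(a, b);  z = OR(e, c)    -- a, b, c are PIs, z is the PO.
gates = [("e", "AND", ["a", "b"]), ("z", "OR", ["e", "c"])]   # already in level order
pis, pos = ["a", "b", "c"], ["z"]

CC0 = {pi: 1 for pi in pis}
CC1 = {pi: 1 for pi in pis}
for out, gtype, ins in gates:                       # forward pass: controllabilities
    if gtype == "AND":
        CC1[out] = sum(CC1[i] for i in ins) + 1     # all inputs must be 1
        CC0[out] = min(CC0[i] for i in ins) + 1     # any single 0 input suffices
    elif gtype == "OR":
        CC0[out] = sum(CC0[i] for i in ins) + 1
        CC1[out] = min(CC1[i] for i in ins) + 1

CO = {po: 0 for po in pos}
for out, gtype, ins in reversed(gates):             # backward pass: observabilities
    noncontrolling = CC1 if gtype == "AND" else CC0   # cost of holding the other inputs
    for i in ins:
        others = [j for j in ins if j != i]
        CO[i] = CO[out] + sum(noncontrolling[j] for j in others) + 1

for line in ["a", "b", "c", "e", "z"]:
    print(f"{line}: (CC0,CC1)=({CC0[line]},{CC1[line]})  CO={CO[line]}")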
56
Sequential Example
Initialization
57
After 1 Iteration
58
After 2 Iterations
59
After 3 Iterations
60
Stable Sequential Measures
61
Final Sequential
Observabilities
62
Testability Measures are
Not Exact


Exact computation of measures is NP-Complete and impractical
Green (italicized) measures show correct (exact) values – SCOAP measures are in orange – written CC0,CC1 (CO)
[Figure: small circuit annotated with both the exact values and the SCOAP values of (CC0, CC1) and CO on each line; the SCOAP numbers differ from the exact ones, including some observabilities reported as ∞.]
63
Summary


Testability measures are approximate measures of:
Difficulty of setting circuit lines to 0 or 1
Difficulty of observing internal circuit lines
Applications:
Analysis of difficulty of testing internal circuit parts
 Redesign circuit hardware or add special test
hardware where measures show poor
controllability or observability.
Guidance for algorithms computing test patterns –
avoid using hard-to-control lines
64
Exercise
Compute (CC0, CC1) CO for all lines in the following circuit.
Questions:
1. Is observability of primary input correct?
2. Are controllabilities of primary outputs correct?
3. What do the observabilities of the input lines of
the AND gate indicate?
65
Major Combinational
Automatic Test-Pattern
Generation Algorithms



Definitions
D-Algorithm (Roth) – 1966
PODEM (Goel) -- 1981
66
Forward Implication


Results in logic gate inputs
that are significantly
labeled so that output is
uniquely determined
AND gate forward
implication table:
67
Backward Implication

Unique determination of all gate inputs when the
gate output and some of the inputs are given
68
Implication Stack

Push-down stack. Records:
Each signal set in circuit by ATPG
Whether alternate signal value already tried
Portion of binary search tree already searched
69
Implication Stack after
Backtrack
[Figure: binary decision tree for the implication stack after a backtrack. The stacked signals B, E, and F appear with their present assignments; for each decision, one branch is marked as searched and infeasible and the other as unexplored.]
70
Objectives and Backtracing
of ATPG Algorithm


Objective – desired signal value goal for ATPG
Guides it away from infeasible/hard solutions
Backtrace – Determines which primary input and
value to set to achieve objective
Use testability measures
71
D-Algorithm -- Roth
IBM
(1966)

Fundamental concepts invented:
First complete ATPG algorithm
D-Cube
D-Calculus
Implications – forward and backward
Implication stack
Backtrack
Test Search Space
72
Primitive D-Cube of
Failure

Models circuit faults:
  Stuck-at-0
  Stuck-at-1
  Bridging fault (short circuit)
  Arbitrary change in logic function
AND Output sa0: "1 1 D"
AND Output sa1: "0 X D̄" or "X 0 D̄"
Wire sa0: "D"
Propagation D-cube – models conditions under which fault effect propagates through gate
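Both D-drive and the consistency step repeatedly intersect cubes coordinate by coordinate. The sketch below implements a simplified intersection over {0, 1, X, D, D̄}: X absorbs anything, equal values agree, and any other pair is a conflict. Roth's full D-calculus has additional rules for mixing D with 0/1; this is only the common case, and the example cubes are assumed.

# Simplified cube intersection over the values {0, 1, X, D, D̄}.
def d_intersect(cube_a, cube_b):
    result = []
    for va, vb in zip(cube_a, cube_b):
        if va == "X":
            result.append(vb)
        elif vb == "X" or va == vb:
            result.append(va)
        else:
            return None                       # conflicting assignment -> intersection fails
    return result

# Example: AND-gate PDF for output s-a-0 ("1 1 D" on lines a, b, z)
# intersected with a test cube that already fixes a = 1.
pdf       = ["1", "1", "D"]
test_cube = ["1", "X", "X"]
print(d_intersect(pdf, test_cube))            # ['1', '1', 'D']
print(d_intersect(pdf, ["0", "X", "X"]))      # None: a = 0 conflicts with the PDF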
73
Implication Procedure
1. Model fault with appropriate primitive D-cube of failure (PDF)
2. Select propagation D-cubes to propagate fault effect to a circuit output (D-drive procedure)
3. Select singular cover cubes to justify internal circuit signals (Consistency procedure)
Put signal assignments in test cube
Regrettably, cubes are selected very arbitrarily by D-ALG
74
D-Algorithm – Top Level
1. Number all circuit lines in increasing level
order from PIs to POs;
2. Select a primitive D-cube of the fault to be
the test cube;
Put logic outputs with inputs labeled as D (D̄) onto the D-frontier;
3. D-drive ();
4. Consistency ();
5. return ();
75
D-Algorithm – D-drive
while (untried fault effects on D-frontier)
    select next untried D-frontier gate for propagation;
    while (untried fault effect fanouts exist)
        select next untried fault effect fanout;
        generate next untried propagation D-cube;
        D-intersect selected cube with test cube;
        if (intersection fails or is undefined) continue;
        if (all propagation D-cubes tried & failed) break;
        if (intersection succeeded)
            add propagation D-cube to test cube -- recreate D-frontier;
            find all forward & backward implications of assignment;
            save D-frontier, algorithm state, test cube, fanouts, fault;
            break;
        else if (intersection fails & D and D̄ in test cube) Backtrack ();
        else if (intersection fails) break;
if (all fault effects unpropagatable) Backtrack ();
76
D-Algorithm -- Consistency
g = coordinates of test cube with 1's & 0's;
if (g is only PIs) fault testable & stop;
for (each unjustified signal in g)
    select highest # unjustified signal z in g, not a PI;
    if (inputs to gate z are both D and D̄) break;
    while (untried singular covers of gate z)
        select next untried singular cover;
        if (no more singular covers)
            if (no more stack choices) fault untestable & stop;
            else if (untried alternatives in Consistency)
                pop implication stack -- try alternate assignment;
            else
                Backtrack ();
            D-drive ();
        if (singular cover D-intersects with z) delete z from g,
            add inputs of singular cover to g, find all forward and
            backward implications of new assignment, and break;
        if (intersection fails) mark singular cover as failed;
77
Backtrack
if (PO exists with fault effect) Consistency ();
else pop prior implication stack setting to try alternate assignment;
if (no untried choices in implication stack) fault untestable & stop;
else return;
78
Example 7.2 Fault A sa0

Step 1 – D-Drive – Set A = 1
79
Step 2 -- Example 7.2
Step 2 – D-Drive – Set f = 0
80
Step 3 -- Example 7.2
Step 3 – D-Drive – Set k = 1
81
Step 4 -- Example 7.2
Step 4 – Consistency – Set g = 1
82
Step 5 -- Example 7.2
Step 5 – Consistency – f = 0 already set
83
Step 6 -- Example 7.2
Step 6 – Consistency – Set c = 0, set e = 0
84
D-Chain Dies -- Example 7.2
Step 7 – Consistency – Set B = 0
D-chain dies
Test cube: A, B, C, D, e, f, g, h, k, L
85
Example 7.3 – Fault s sa1

Primitive D-cube of Failure
86
Example 7.3 – Step 2 s sa1
Propagation D-cube for v
87
Example 7.3 – Step 2 s sa1
Forward & Backward Implications
88
Example 7.3 – Step 3 s sa1
Propagation D-cube for Z – test found!
89
Example 7.3 – Fault u sa1
Primitive D-cube of Failure
90
Example 7.3 – Step 2 u sa1
Propagation D-cube for v
91
Example 7.3 – Step 2 u sa1
Forward and backward implications
92
Inconsistent
d = 0 and m = 1 cannot justify r = 1 (equivalence)
Backtrack
Remove B = 0 assignment
93
Example 7.3 – Backtrack
Need alternate propagation D-cube for v
94
Example 7.3 – Step 3 u sa1
Propagation D-cube for v
95
Example 7.3 – Step 4 u sa1
Propagation D-cube for Z
96
Example 7.3 – Step 4 u sa1
Propagation D-cube for Z and implications
97
PODEM -- Goel
IBM
(1981)

New concepts introduced:
Expand binary decision tree only
around primary inputs
Use X-PATH-CHECK to test whether
D-frontier still there
Objectives -- bring ATPG closer to propagating D (D̄) to PO
Backtracing
98
Motivation


IBM introduced semiconductor DRAM
memory into its mainframes – late 1970’s
Memory had error correction and
translation circuits – improved reliability
D-ALG unable to test these circuits
 Search too undirected
 Large XOR-gate trees
 Must set all external inputs to define
output
Needed a better ATPG tool
99
PODEM High-Level Flow
1. Assign binary value to unassigned PI
2. Determine implications of all PIs
3. Test Generated? If so, done.
4. Test possible with more assigned PIs? If
maybe, go to Step 1
5. Is there untried combination of values on
assigned PIs? If not, exit: untestable fault
6. Set untried combination of values on
assigned PIs using objectives and
backtrace. Then, go to Step 2
100
Example 7.3 Again

Select path s – Y for fault propagation
101
Example 7.3 -- Step 2 s sa1
Initial objective: Set r to 1 to sensitize fault
102
Example 7.3 -- Step 3 s sa1
Backtrace from r
103
Example 7.3 -- Step 4 s sa1
Set A = 0 in implication stack
104
Example 7.3 -- Step 5 s sa1
Forward implications: d = 0, X = 1
105
Example 7.3 -- Step 6 s sa1
Initial objective: set r to 1
106
Example 7.3 -- Step 7 s sa1
Backtrace from r again
107
Example 7.3 -- Step 8 s sa1
Set B to 1. Implications in stack: A = 0, B = 1
108
Example 7.3 -- Step 9 s sa1
Forward implications: k = 1, m = 0, r = 1, q = 1, Y = 1, s = D, u = D, v = D, Z = 1
109
Backtrack -- Step 10 s sa1
X-PATH-CHECK shows paths s – Y and s – u – v – Z blocked (D-frontier disappeared)
110
Step 11 -- s sa1
Set B = 0 (alternate assignment)
111
Backtrack -- s sa1
Forward implications: d = 0, X = 1, m = 1, r = 0, s = 1, q = 0, Y = 1, v = 0, Z = 1. Fault not sensitized.
112
Step 13 -- s sa1
Set A = 1 (alternate assignment)
113
Step 14 -- s sa1
Backtrace from r again
114
Step 15 -- s sa1
Set B = 0. Implications in stack: A = 1, B = 0
115
Backtrack -- s sa1
Forward implications: d = 0, X = 1, m = 1, r = 0. Conflict: fault not sensitized. Backtrack
116
Step 17 -- s sa1
Set B = 1 (alternate assignment)
117
Fault Tested -- Step 18 s sa1
Forward implications: d = 1, m = 1, r = 1, q = 0, s = D, v = D, X = 0, Y = D
118
Backtrace (s, vs)
Pseudo-Code
v = vs;
while (s is a gate output)
    if (s is NAND or INVERTER or NOR) v = v̄;
    if (objective requires setting all inputs)
        select unassigned input a of s with hardest controllability to value v;
    else
        select unassigned input a of s with easiest controllability to value v;
    s = a;
return (s, v) /* Gate and value to be assigned */;
119
Objective Selection Code
if (gate g is unassigned) return (g, v̄);
select a gate P from the D-frontier;
select an unassigned input l of P;
if (gate P has controlling value)
    c = controlling input value of P;
else if (0 value easier to get at input of XOR/EQUIV gate)
    c = 1;
else c = 0;
return (l, c̄);
120
PODEM Algorithm
while (no fault effect at POs)
    if (xpathcheck (D-frontier))
        (l, vl) = Objective (fault, vfault);
        (pi, vpi) = Backtrace (l, vl);
        Imply (pi, vpi);
        if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
        (pi, vpi) = Backtrack ();
        Imply (pi, vpi);
        if (PODEM (fault, vfault) == SUCCESS) return (SUCCESS);
        Imply (pi, "X");
        return (FAILURE);
    else if (implication stack exhausted)
        return (FAILURE);
    else Backtrack ();
return (SUCCESS);
121
FAN -- Fujiwara and
Shimono
(1983)

New concepts:
Immediate assignment of uniquely-
determined signals
Unique sensitization
Stop Backtrace at head lines
Multiple Backtrace
122
PODEM Fails to Determine
Unique Signals

Backtracing operation fails to set all 3
inputs of gate L to 1
Causes unnecessary search
123
FAN -- Early Determination
of Unique Signals

Determine all unique signals implied by
current decisions immediately
Avoids unnecessary search
124
PODEM Makes Unwise
Signal Assignments

Blocks fault propagation due to
assignment J = 0
125
Unique Sensitization of
FAN with No Search
Path over which fault is uniquely sensitized

FAN immediately sets necessary signals to
propagate fault
126
Headlines

Headlines H and J separate circuit into 3
parts, for which test generation can be
done independently
127
Contrasting Decision Trees
FAN decision tree
PODEM decision tree
128
Multiple Backtrace
FAN – breadth-first
passes –
1 time
PODEM –
depth-first
passes – 6 times
129
AND Gate Vote Propagation
[Figure: AND gate with output votes [5, 3]; the easiest-to-control input receives [5, 3] and the other inputs receive [0, 3].]
AND gate, easiest-to-control input:
  # 0's = OUTPUT # 0's
  # 1's = OUTPUT # 1's
All other inputs:
  # 0's = 0
  # 1's = OUTPUT # 1's
130
Multiple Backtrace
Fanout Stem Voting
[Figure: fanout stem with branch votes [5, 1], [1, 1], [3, 2], [4, 1], and [5, 1]; the stem receives their sum, [18, 6].]
Fanout stem:
  # 0's = Σ branch # 0's
  # 1's = Σ branch # 1's
131
Multiple Backtrace
Algorithm
repeat
    remove entry (s, vs) from current_objectives;
    if (s is head_objective) add (s, vs) to head_objectives;
    else if (s not fanout stem and not PI)
        vote on gate s inputs;
        if (gate s input I is fanout branch)
            vote on stem driving I;
            add stem driving I to stem_objectives;
        else add I to current_objectives;
132
Rest of Multiple Backtrace
if (stem_objectives not empty)
    (k, n0 (k), n1 (k)) = highest level stem from stem_objectives;
    if (n0 (k) > n1 (k)) vk = 0;
    else vk = 1;
    if ((n0 (k) != 0) && (n1 (k) != 0) && (k not in fault cone))
        return (k, vk);
    add (k, vk) to current_objectives;
    return (multiple_backtrace (current_objectives));
remove one objective (k, vk) from head_objectives;
return (k, vk);
133
Logic simulation and
fault simulation
134
True-Value Simulation
Algorithms

Compiled-code simulation





Applicable to zero-delay combinational logic
Also used for cycle-accurate synchronous sequential
circuits for logic verification
Efficient for highly active circuits, but inefficient for
low-activity circuits
High-level (e.g., C language) models can be used
Event-driven simulation




Only gates or modules with input events are
evaluated (event means a signal change)
Delays can be accurately simulated for timing
verification
Efficient for low-activity circuits
Can be extended for fault simulation
135
Compiled-Code Algorithm



Step 1: Levelize combinational logic and encode in a compilable programming language
Step 2: Initialize internal state variables (flip-flops)
Step 3: For each input vector
  Set primary input variables
  Repeat (until steady-state or max. iterations): execute compiled code
  Report or save computed variables
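A compiled-code simulator effectively turns the levelized netlist into straight-line code that is executed once per vector. The sketch below imitates that by generating and executing a Python function for a small hypothetical netlist; a real tool would emit C or machine code, and a sequential circuit would add the state initialization and iteration of Steps 2-3.

# Sketch of compiled-code true-value simulation for a tiny combinational block.
netlist = [
    ("d", "a & b"),       # statements already in level order (levelized netlist)
    ("e", "b | c"),
    ("z", "d ^ e"),
]

src = "def evaluate(a, b, c):\n"
for out, expr in netlist:
    src += f"    {out} = {expr}\n"
src += "    return z\n"

ns = {}
exec(src, ns)             # 'compile' the netlist into a callable
evaluate = ns["evaluate"]

for vector in [(0, 0, 0), (1, 1, 0), (1, 0, 1), (1, 1, 1)]:
    print(vector, "->", evaluate(*vector))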
136
Event-Driven Algorithm
(Example)
[Figure: event-driven simulation of a small circuit with gate delays. Input events on a and c at t = 0 put gates d and e on the activity list; their evaluations schedule the events d = 1 and e = 0, which in turn put f and g on the activity list. The time axis (t = 0 to 8, managed with a time stack) shows the resulting scheduled output events g = 1, g = 0, f = 1, and g = 1 at later times.]
137
Time Wheel (Circular Stack)
[Figure: the time wheel is a circular stack of time slots t = 0, 1, 2, ..., max. A current-time pointer indexes the slot being processed, and each slot holds a link-list of the events scheduled for that time.]
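A toy event-driven loop needs little more than the time wheel, a value table, and gate evaluation triggered by input events. The sketch below wires these together for a hypothetical two-gate circuit with gate delays of 2 and 1 time units; it illustrates the scheduling idea only, not a production simulator.

# Toy event-driven simulator: a time wheel of event lists, evaluated in time order.
# Hypothetical circuit: d = AND(a, b) with delay 2;  z = OR(d, c) with delay 1.
gates = {"d": ("AND", ["a", "b"], 2), "z": ("OR", ["d", "c"], 1)}
fanout = {"a": ["d"], "b": ["d"], "c": ["z"], "d": ["z"]}

value = {s: 0 for s in ["a", "b", "c", "d", "z"]}
WHEEL = 16
wheel = [[] for _ in range(WHEEL)]                 # circular stack of event lists

def schedule(t, signal, v):
    wheel[t % WHEEL].append((signal, v))

def evaluate(gate):
    kind, ins, _ = gates[gate]
    vals = [value[i] for i in ins]
    return min(vals) if kind == "AND" else max(vals)

# Input events: a and b rise at t = 0, c "falls" at t = 3 (no change, so no event).
schedule(0, "a", 1); schedule(0, "b", 1); schedule(3, "c", 0)

for t in range(WHEEL):                             # advance the current-time pointer
    for signal, v in wheel[t % WHEEL]:
        if v == value[signal]:
            continue                               # not a real event (no value change)
        value[signal] = v
        print(f"t={t}: {signal} -> {v}")
        for g in fanout.get(signal, []):           # only affected gates are evaluated
            schedule(t + gates[g][2], g, evaluate(g))
    wheel[t % WHEEL] = []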
138
Efficiency of Event-Driven Simulator
Simulates events (value changes) only
Speed up over compiled-code can be ten times or more; in large logic circuits about 0.1 to 10% of gates become active for an input change
[Figure: a 0-to-1 event at one input of a large logic block whose other inputs are steady at 0 causes no internal activity (no events), so the block is not evaluated.]
139
Fault Simulation
Algorithms





Serial
Parallel
Deductive
Concurrent
Differential
140
Serial Algorithm

Algorithm: Simulate fault-free circuit and save
responses. Repeat following steps for each
fault in the fault list:




Modify netlist by injecting one fault
Simulate modified netlist, vector by vector,
comparing responses with saved responses
If response differs, report fault detection and
suspend simulation of remaining vectors
Advantages:


Easy to implement; needs only a true-value
simulator, less memory
Most faults, including analog faults, can be
simulated
141
Serial Algorithm (Cont.)


Disadvantage: Much repeated computation;
CPU time prohibitive for VLSI circuits
Alternative: Simulate many faults together
[Figure: the test vectors drive the fault-free circuit and n faulty copies (circuit with fault f1, ..., circuit with fault fn); a comparator on each faulty copy's outputs against the fault-free outputs reports whether f1, f2, ..., fn are detected.]
142
Parallel Fault Simulation





Compiled-code method; best with two states (0, 1)
Exploits inherent bit-parallelism of logic operations on computer words
Storage: one word per line for two-state simulation
Multi-pass simulation: each pass simulates w-1 new faults, where w is the machine word length
Speed up over serial method ~ w-1
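The word-level trick can be shown in a few lines: pack the fault-free circuit and w-1 faulty circuits into the bits of one word, inject each fault by forcing its bit position with a mask, and evaluate the logic with ordinary bitwise operations. The circuit and the two faults in the sketch below are hypothetical.

# Bit-parallel fault simulation sketch (hypothetical 2-gate circuit: e = AND(a,b), z = OR(e,c)).
# Bit 0 = fault-free circuit, bit 1 = e s-a-0, bit 2 = c s-a-1.
W = 3                                   # number of circuits packed per word
ALL = (1 << W) - 1                      # 0b111

def broadcast(v):                       # replicate a scalar input value into all bit positions
    return ALL if v else 0

def inject(word, sa0_mask=0, sa1_mask=0):
    return (word & ~sa0_mask) | sa1_mask    # force faulty bit positions to 0 or 1

# One test vector: a = 1, b = 1, c = 0.
a, b, c = broadcast(1), broadcast(1), broadcast(0)
c = inject(c, sa1_mask=0b100)           # c s-a-1 in circuit 2
e = a & b
e = inject(e, sa0_mask=0b010)           # e s-a-0 in circuit 1
z = e | c

good = z & 1
for bit, fault in [(1, "e s-a-0"), (2, "c s-a-1")]:
    detected = ((z >> bit) & 1) != good
    print(f"{fault}: {'detected' if detected else 'not detected'} by a=1 b=1 c=0")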
143
Parallel Fault Sim. Example
[Figure: parallel simulation of the example circuit with three circuits packed per word. Bit 0: fault-free circuit; bit 1: circuit with c s-a-0; bit 2: circuit with f s-a-1. On the output line g the word differs in bit 1 from bit 0, so c s-a-0 is detected.]
144
Deductive Fault Simulation





One-pass simulation
Each line k contains a list Lk of faults
detectable on k
Following true-value simulation of each
vector, fault lists of all gate output lines
are updated using set-theoretic rules,
signal values, and gate input fault lists
PO fault lists provide detection data
Limitations:
Set-theoretic rules difficult to derive for non-Boolean gates
Gate delays are difficult to use
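The propagation rule for a simple AND/OR gate can be written down directly: with no input at the controlling value, the output list is the union of the input lists; otherwise it is the intersection of the lists on controlling inputs minus the union of the lists on non-controlling inputs, plus the output's own detectable stuck-at fault. The sketch below implements this rule; the gate types and fault lists are assumed for illustration (chosen to echo the example on the next slide).

# Deductive fault-list propagation for a simple AND/OR gate.
def propagate(gtype, in_vals, in_lists, out_name):
    c = 0 if gtype == "AND" else 1                 # controlling value of the gate
    ctrl = [L for v, L in zip(in_vals, in_lists) if v == c]
    nonctrl = [L for v, L in zip(in_vals, in_lists) if v != c]
    if not ctrl:                                   # no input at the controlling value
        out_list = set().union(*in_lists)
        out_val = 1 if gtype == "AND" else 0
    else:                                          # a fault must flip every controlling input
        out_list = set.intersection(*ctrl) - set().union(*nonctrl)
        out_val = c
    out_list |= {f"{out_name}{1 - out_val}"}       # output stuck at the complement of its good value
    return out_val, out_list

# Assumed lists in the slide's notation (kn = line k stuck-at-n):
La, Lc = {"a0"}, {"b0", "c0"}
val_e, Le = propagate("AND", [1, 1], [La, Lc], "e")
print(val_e, sorted(Le))      # 1 ['a0', 'b0', 'c0', 'e0']

Lf = {"b0", "d0", "f1"}
val_g, Lg = propagate("OR", [1, 0], [Le, Lf], "g")
print(val_g, sorted(Lg))      # 1 ['a0', 'c0', 'e0', 'g0']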
145
Deductive Fault Sim.
Example
Notation: Lk is the fault list for line k; kn is the s-a-n fault on line k
[Figure: example circuit with a fault list on each line: La = {a0}, Lb = {b0}, Lc = {b0, c0}, Ld = {b0, d0}, Lf = {b0, d0, f1}.]
Le = La ∪ Lc ∪ {e0} = {a0, b0, c0, e0}
Lg = (Le − Lf) ∪ {g0} = {a0, c0, e0, g0} -- the faults detected by the input vector
146
Concurrent Fault Simulation





Event-driven simulation of fault-free circuit and
only those parts of the faulty circuit that differ in
signal states from the fault-free circuit.
A list per gate containing copies of the gate from
all faulty circuits in which this gate differs. List
element contains fault ID, gate input and output
values and internal states, if any.
All events of fault-free and all faulty circuits are
implicitly simulated.
Faults can be simulated in any modeling style or
detail supported in true-value simulation (offers
most flexibility.)
Faster than other methods, but uses most
memory.
147
Conc. Fault Sim. Example
[Figure: concurrent fault simulation of the example circuit. Each gate carries a list of diverging copies of itself, one per faulty circuit (a0, b0, c0, d0, e0, f1) whose gate input/output values differ from the fault-free gate; only these copies are stored and updated as events occur.]
148
Fault Coverage and
Efficiency
Fault coverage = (# of detected faults) / (total # of faults)
Fault efficiency = (# of detected faults) / (total # of faults - # of undetectable faults)
149
Test Generation Systems
[Figure: test generation system. A circuit description and fault list feed SOCRATES (ATPG with built-in fault simulator); it produces test patterns that pass through a compacter, and reports aborted faults, undetected faults, redundant faults, and the backtrack distribution.]
150
Test Compaction
Example



t1 = 0 1 X,  t2 = 0 X 1,  t3 = 0 X 0,  t4 = X 0 1
Combine t1 and t3, then t2 and t4
Obtain: t13 = 0 1 0,  t24 = 0 0 1
Test length shortened from 4 to 2
151
Test Compaction

Fault simulate test patterns in reverse
order of generation
ATPG patterns go first
Randomly-generated patterns go last
(because they may have less coverage)
When coverage reaches 100%, drop
remaining patterns (which are the
useless random ones)
Significantly shortens test sequence –
economic cost reduction
152
Static and Dynamic
Compaction of Sequences

Static compaction
  ATPG should leave unassigned inputs as X
  Two patterns are compatible if they have no conflicting values on any PI
  Combine two tests ta and tb into one test tab = ta ∩ tb using D-intersection
  Detects union of faults detected by ta & tb
Dynamic compaction
  Process every partially-done ATPG vector immediately
  Assign 0 or 1 to PIs to test additional faults
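Static compaction by D-intersection is easy to sketch: two patterns are compatible when no position has conflicting 0/1 values, and compatible patterns merge position by position (X yields to the specified value). The sketch below applies this to the four patterns of the earlier example; note that which pairs are merged matters, which is why real tools use pairing heuristics.

# D-intersection of partially specified patterns: X yields to 0/1, equal values
# agree, and conflicting 0/1 means the patterns cannot be combined.
def intersect(p, q):
    out = []
    for a, b in zip(p, q):
        if a == "X": out.append(b)
        elif b == "X" or a == b: out.append(a)
        else: return None
    return "".join(out)

t1, t2, t3, t4 = "01X", "0X1", "0X0", "X01"
print(intersect(t1, t3))   # '010' -> t13
print(intersect(t2, t4))   # '001' -> t24
print(intersect(t1, t2))   # '011' -- also compatible: the chosen pairing matters
print(intersect(t3, t4))   # None  -- conflict in the last position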
153
Sequential Circuits


A sequential circuit has memory in addition
to combinational logic.
Test for a fault in a sequential circuit is a
sequence of vectors, which




Initializes the circuit to a known state
Activates the fault, and
Propagates the fault effect to a primary
output
Methods of sequential circuit ATPG


Time-frame expansion methods
Simulation-based methods
154
Example: A Serial Adder
[Figure: serial adder – a combinational logic block plus a flip-flop feeding the carry Cn back. With An = Bn = 1 the s-a-0 fault is activated and a D appears inside the combinational logic, but the previous carry Cn is unknown (X), so the sum Sn and next carry Cn+1 remain X and a single vector cannot detect the fault.]
155
Time-Frame Expansion
[Figure: two-time-frame expansion of the serial adder, with the s-a-0 fault placed in both copies of the combinational logic. Time-frame -1 applies An-1 = Bn-1 = 1 to set the carry feeding time-frame 0 to a known 1; time-frame 0 then applies An = Bn = 1, the fault produces D, and the fault effect reaches the sum output Sn.]
156
Concept of Time-Frames

If the test sequence for a single stuck-at fault contains n vectors,
  Replicate combinational logic block n times
  Place fault in each block
  Generate a test for the multiple stuck-at fault using combinational ATPG with 9-valued logic
[Figure: n copies of the combinational block, labeled time-frame -n+1 through time-frame 0. Each frame receives its own vector and a copy of the fault; state variables link adjacent frames, the initial state is unknown or given, each frame drives its primary outputs (PO -n+1, ..., PO 0), and the last frame produces the next state.]
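The unrolling idea can be mimicked directly: replicate the next-state/output function once per time frame, inject the fault in every copy, and let a plain combinational search look for an input sequence on which the good and faulty machines differ. The sketch below does this for a hypothetical 1-bit serial-adder-like block with a stuck-at-0 fault on an internal AND term; it only illustrates the unrolling, not a real sequential ATPG.

from itertools import product

# Hypothetical 1-bit block: sum = a XOR b XOR carry, next_carry = majority(a, b, carry).
def frame(a, b, carry, and_ab_sa0=False):
    and_ab = 0 if and_ab_sa0 else (a & b)          # target fault: (a AND b) stuck-at-0
    s = a ^ b ^ carry
    next_carry = and_ab | (a & carry) | (b & carry)
    return s, next_carry

def run(vectors, carry, faulty):
    outs = []
    for a, b in vectors:                           # one copy of the logic per time frame
        s, carry = frame(a, b, carry, and_ab_sa0=faulty)
        outs.append(s)
    return outs

# Search 2-vector sequences from the all-0 initial state (carry = 0).
for seq in product([(0, 0), (0, 1), (1, 0), (1, 1)], repeat=2):
    if run(seq, 0, faulty=False) != run(seq, 0, faulty=True):
        print("detecting sequence:", seq)
        break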
157
Example for Logic Systems
[Figure: small sequential circuit with inputs A and B, flip-flops FF1 and FF2, and a line with an s-a-1 fault, used in the two examples that follow.]
158
Five-Valued Logic (Roth)
Values: 0, 1, D, D̄, X
[Figure: the example circuit expanded into time-frame -1 and time-frame 0, with the s-a-1 fault present in both frames. Under five-valued logic the input A is set to 0 in both time frames; B stays X, and D/D̄ values propagate through FF1 and FF2.]
159
Nine-Valued Logic (Muth)
Values: 0, 1, 1/0, 0/1, 1/X, 0/X, X/0, X/1, X (each written as good-circuit value / faulty-circuit value)
[Figure: the same two-time-frame circuit analyzed with nine-valued logic. A is 0 in time-frame -1 but may remain X in time-frame 0; internal signals take values such as 0/1, 0/X, and X/1, so the test is less constrained than with five-valued logic.]
160