Concept to Product Design, Verification & Test Basics and Challenges in VLSI Test Virendra Singh and Kewal K.


Concept to Product
Design, Verification & Test
Basics and Challenges in VLSI
Test
Virendra Singh and Kewal K. Saluja
Overview
• Motivation for testing and sources for reference
• Introduction – Practicality, Cost, Differences
• Part I – Basics of VLSI Test
  – Fault Models, Fault Simulation, and Test Generation
  – DFT, BIST and ITRS Goals
• Part II – Challenges in VLSI Test
  – Test Data Volume
    • Input compression
    • Output data compaction
  – Test Power and Thermal Issues
    • Resource constraint scheduling
    • Power constraint scheduling
    • Thermal aware scheduling
  – Diagnosis
• Conclusion and Future Directions
2015/11/7
Motivation
• Complexity and growth
• Where do the manufacturing $ go?
• What is test on a chip?
  – Bottlenecks – pins, data volume, power, thermal, …
• Tutorial (Test) material information
  – References – a separate list
Motivation: Moore's Law
Complexity growth of VLSI circuits
(Source: Copp, Int. AOC EW Conf., 2002)
Microprocessor Cost per Transistor
Cost of testing will EXCEED cost of design/manufacturing
(Source: ITRS, 2002)
Thermal Effect of Moore's Law
• Moore's Law
  – Greater packaging densities
  – Higher power densities
  – Higher temperature
• Effects
  – Reliability
  – Performance
  – Power
  – Cooling cost
Challenges under deep submicron technologies
• Chip size decreases
• Power density increases (Source: Intel)
• Chip becomes hotter (Source: Wang et al., ISPD'03)
• Leakage power makes it worse
Introduction
• VLSI realization process
• Contract between design house and fab vendor
• Test vs. verification
• Ideal vs. real testing
• Levels of testing – rule of 10 (or 20) and doing business
• Cost of manufacturing and role of testing
VLSI Realization Process
Customer's need → Determine requirements → Write specifications → Design synthesis and verification → Test development → Fabrication → Manufacturing test → Chips to customer
Contract between a design house and a fab vendor
• Design is complete and checked (verified)
• Fab vendor: How will you test it?
• Design house: I have checked it and …
• Fab vendor: But, how would you test it?
• Design house: Why is that important? It is between me and my clients – it is none of your business.
• Fab vendor: Sorry, you can take your business somewhere else.
Complete the story and determine the reasons for the importance of test generation etc.
Contract between design …
Hence:
• "Test" must be comprehensive
• It must not be "too long"
Issues:
• Model possible defects in the process
• Understand the process
• Develop logic simulator and fault simulator
• Develop test generator
• Methods to quantify the test efficiency
Verification vs. Testing
Definitions
• Design synthesis: Given an I/O function, develop a procedure to manufacture a device using known materials and processes.
• Verification: Predictive analysis to ensure that the synthesized design, when manufactured, will perform the given I/O function.
• Test: A manufacturing step that ensures that the physical device, manufactured from the synthesized design, has no manufacturing defect.
Verification vs. Testing
Verification:
• Verifies correctness of design.
• Performed by simulation, hardware emulation, or formal methods.
• Performed once prior to manufacturing.
• Responsible for quality of design.
Testing:
• Verifies correctness of manufactured hardware.
• Two-part process:
  1. Test generation: software process executed once during design
  2. Test application: electrical tests applied to hardware
• Test application performed on every manufactured device.
• Responsible for quality of devices.
Part I – Basics of VLSI Test
Testing as Filter Process
[Figure] Fabricated chips split into two streams:
• Good chips, Prob(good) = y: Prob(pass test) = high → mostly good chips
• Defective chips, Prob(bad) = 1 − y: Prob(fail test) = high → mostly bad chips
Levels of testing (1)
• Levels
  – Chip – System-on-Chip (SoC)
  – Board
  – System – boards put together
  – System in field
• Cost – Rule of 10
  – It costs 10 times more to test a device as we move to a higher level in the product manufacturing process (not testing is more expensive from a business point of view)
Levels of testing (2)
• Other ways to define levels – these are important to develop correct "fault models" and "simulation models"
  – Transistor
  – Gate
  – RTL
  – Functional
  – Behavioral
  – Architecture
Cost of Testing
• Design for testability (DFT)
  – Chip area overhead and yield reduction
  – Performance overhead
• Software processes of test
  – Test generation and fault simulation
  – Test programming and debugging
• Manufacturing test
  – Automatic test equipment (ATE) capital cost
  – Test center operational cost
Roles of Testing
• Detection: Determination whether or not the device under test (DUT) has some fault.
• Diagnosis: Identification of a specific fault that is present on the DUT.
• Device characterization: Determination and correction of errors in design and/or test procedure.
• Failure mode analysis (FMA): Determination of manufacturing process errors that may have caused defects on the DUT.
Models and Testing
• Logic models
• Fault models
• Fault simulation
• Test generation
• Modeling various faults
  – Multiple faults
  – Transition faults
  – Clock line faults
Logic Modeling – Model types
• Behavior
  – External representation
  – System at I/O level
  – Timing info provided
  – Internal details missing
• Functional
  – Internal representation
  – DC behavior – no timing
• Structural
  – Gate level description
• Models are often described using a hierarchy
• Gate level models are most prevalent in logic testing
Netlist Format: An Example
ISCAS format: output = gate(inputs)
INPUT(G1)
INPUT(G2)
OUTPUT(G7)
G3 = NOT(G2)
G4 = NOT(G1)
G5 = AND(G1, G3)
G6 = AND(G2, G4)
G7 = OR(G5, G6)
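Read literally, the netlist above computes G7 = G1 XOR G2. A minimal Python sketch of how such a netlist can be stored and evaluated (the list-of-triples representation is our own assumption, not part of the ISCAS format):

```python
GATES = {
    "NOT": lambda a: 1 - a,
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
}

# (output, gate, inputs) triples, listed in topological order.
NETLIST = [
    ("G3", "NOT", ["G2"]),
    ("G4", "NOT", ["G1"]),
    ("G5", "AND", ["G1", "G3"]),
    ("G6", "AND", ["G2", "G4"]),
    ("G7", "OR",  ["G5", "G6"]),
]

def simulate(g1, g2):
    """Evaluate every net for primary inputs G1 = g1, G2 = g2."""
    values = {"G1": g1, "G2": g2}
    for out, gate, ins in NETLIST:
        values[out] = GATES[gate](*(values[i] for i in ins))
    return values
```

Evaluating all four input combinations confirms the XOR behavior.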
Common Fault Models
• Single stuck-at faults
• Transistor open and short faults
• Bridging (and, or, dominance, 4-way)
• Memory faults
• Functional faults (processors)
• Delay faults (gate delay, transition, path)
• Analog faults
• …
Stuck-at Faults
• Single stuck-at faults
  – What does it achieve in practice?
• Fault collapsing
  – Equivalence
  – Fault dominance and checkpoint faults
• Classes of stuck-at faults
  – Detectable, Redundant, Potentially detectable, Hyperactive, Initialization related
• Multiple faults
Fault Simulation
• Fault simulation problem: Given
  – A circuit
  – A sequence of test vectors
  – A fault model
  Determine
  – Fault coverage – fraction (or percentage) of modeled faults detected by test vectors
  – Set of undetected faults
• Motivation
  – Determine test quality and in turn product quality
  – Find undetected fault targets to improve tests
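The problem statement above can be made concrete with a toy serial fault simulator: simulate the good circuit and each faulty circuit one vector at a time, then report coverage. The circuit is the two-input XOR netlist from the earlier slide and the fault model is single stuck-at (both assumptions for illustration):

```python
GATES = {"NOT": lambda a: 1 - a,
         "AND": lambda a, b: a & b,
         "OR":  lambda a, b: a | b}
NETLIST = [("G3", "NOT", ["G2"]), ("G4", "NOT", ["G1"]),
           ("G5", "AND", ["G1", "G3"]), ("G6", "AND", ["G2", "G4"]),
           ("G7", "OR",  ["G5", "G6"])]
LINES = ["G1", "G2", "G3", "G4", "G5", "G6", "G7"]

def simulate(vector, fault=None):
    """Simulate one input vector; fault is (line, stuck_value) or None."""
    values = dict(vector)
    if fault and fault[0] in values:          # fault on a primary input
        values[fault[0]] = fault[1]
    for out, gate, ins in NETLIST:
        v = GATES[gate](*(values[i] for i in ins))
        values[out] = fault[1] if fault and fault[0] == out else v
    return values["G7"]

def fault_coverage(vectors):
    """Coverage and undetected set over all single stuck-at faults."""
    faults = {(line, v) for line in LINES for v in (0, 1)}
    detected = {f for f in faults
                if any(simulate(vec, f) != simulate(vec) for vec in vectors)}
    return len(detected) / len(faults), faults - detected
```

The two vectors 01 and 10 detect 9 of the 14 faults (coverage ≈ 64%); the exhaustive four-vector set detects all of them.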
Usages of Fault Simulators
• Test grading – as explained before
• Test generation
• Fault diagnosis
• Design for test (DFT) – identification of points that may help improve test quality
• Fault-tolerance – identification of the damage a fault can cause
Fault Simulator in a VLSI Design Process
[Figure] The verified design netlist, verification input stimuli, and the modeled fault list feed the fault simulator; tested faults are removed from the list. If fault coverage is low, a test generator adds vectors and a test compactor deletes redundant ones; once coverage is adequate, stop. The surviving vectors are the test vectors.
Fault Simulation Algorithms
• Serial
• Parallel
• Deductive
• Concurrent
• Others
  – Differential
  – Parallel pattern
  – etc.
Deductive Fault Simulation Example
Notation: Lk is the fault list for line k; kn denotes the s-a-n fault on line k.
Circuit and vector: inputs a = 1 and b = 1; b fans out to branches c and d; the internal values are e = 1, f = 0, g = 1.
  La = {a0}, Lb = {b0}, Lc = {b0, c0}, Ld = {b0, d0}
  Le = La ∪ Lc ∪ {e0} = {a0, b0, c0, e0}
  Lf = Ld ∪ {f1} = {b0, d0, f1}
  Lg = (Le − Lf) ∪ {g0} = {a0, c0, e0, g0}
Lg is the set of faults detected by the input vector.
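The fault-list algebra on this slide can be checked mechanically. The circuit structure is inferred from the lists themselves (b fans out to branches c and d, e = AND(a, c), f = NOT(d), g = OR(e, f), with a = b = 1), so treat it as an assumption:

```python
def deduce():
    # Fault lists: sets of (line, stuck_value) whose effect reaches each line.
    La = {("a", 0)}                    # a = 1, so only a s-a-0 is visible on a
    Lb = {("b", 0)}
    Lc = Lb | {("c", 0)}               # fanout branch of b, c = 1
    Ld = Lb | {("d", 0)}               # fanout branch of b, d = 1
    # AND with both inputs 1: any single input flip changes the output.
    Le = La | Lc | {("e", 0)}
    # Inverter: every fault on d propagates; f = 0 adds f s-a-1.
    Lf = Ld | {("f", 1)}
    # OR with e = 1, f = 0: a fault reaches g only if it flips e without
    # also flipping f -- hence the set difference.
    Lg = (Le - Lf) | {("g", 0)}
    return Lg
```

Running it reproduces the slide's result: Lg = {a0, c0, e0, g0} (b0 drops out because it flips both e and f, so g stays 1).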
Major ATPG algorithms
• D-Algorithm (Roth) – 1966
  – Enumeration at each gate in the circuit
• PODEM (Goel) – 1981
  – Implicit enumeration
  – Enumerates the input space
• FAN (Fujiwara and Shimono) – 1983
  – Combines the two approaches
Basic Features of ATPGs
• Objective – desired signal value goal for ATPG
  – Guides it away from infeasible/hard solutions
  – Uses heuristics
• Backtrace – determines which (primary) input and value to set to achieve the objective
  – Uses heuristics such as nearest PI
• Forward trace – determines the gate through which the fault effect should be sensitized
  – Uses heuristics such as the output that is closest to the present fault effect
Basic Features: Branch-and-Bound Search
• Efficiently searches the (binary) search tree
• Branching – at each tree level, selects which input variable to set to what value
• Bounding – avoids exploring large tree portions by artificially restricting search decision choices
  – Complete exploration is impractical
  – Uses heuristics
• Backtracking – when the search fails, undo some of the completed work and resume searching from a location where search options still exist
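A minimal sketch of the branch-and-bound idea, assuming the two-input XOR netlist from the netlist-format slide: branch on primary-input assignments, bound by pruning any subtree whose partially specified good and faulty outputs are already equal under three-valued (0/1/X) simulation, and backtrack when a branch fails:

```python
# Three-valued gate evaluation; None plays the role of X.
def NOT(a): return None if a is None else 1 - a
def AND(a, b):
    if a == 0 or b == 0: return 0
    return None if a is None or b is None else 1
def OR(a, b):
    if a == 1 or b == 1: return 1
    return None if a is None or b is None else 0

NETLIST = [("G3", NOT, ["G2"]), ("G4", NOT, ["G1"]),
           ("G5", AND, ["G1", "G3"]), ("G6", AND, ["G2", "G4"]),
           ("G7", OR,  ["G5", "G6"])]
INPUTS = ("G1", "G2")

def simulate(assign, fault=None):
    val = {i: assign.get(i) for i in INPUTS}      # unassigned inputs are X
    if fault and fault[0] in val:
        val[fault[0]] = fault[1]
    for out, gate, ins in NETLIST:
        v = gate(*(val[i] for i in ins))
        val[out] = fault[1] if fault and fault[0] == out else v
    return val["G7"]

def atpg(fault):
    """Return an input assignment detecting `fault`, or None if untestable."""
    def search(assign, depth):
        good, bad = simulate(assign), simulate(assign, fault)
        if good is not None and bad is not None:
            return dict(assign) if good != bad else None   # bound
        if depth == len(INPUTS):
            return None
        for v in (0, 1):                                   # branch
            assign[INPUTS[depth]] = v
            found = search(assign, depth + 1)
            if found is not None:
                return found
            del assign[INPUTS[depth]]                      # backtrack
        return None
    return search({}, 0)
```

For example, G3 s-a-1 requires G2 = 1 (to create the error) and G1 = 1 (to sensitize it through the AND gate), and the search finds exactly that vector.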
Sequential ATPG
• Mostly uses combinational ATPG algorithms
• Uses 9-value logic
• Time frame expansion for test generation
  – Forward time test generation
  – Reverse time test generation
  – Combining the two
• Integrates fault simulation and test generation
ATPG System
• Increase fault coverage
• Reduce overall effort (CPU time)
  – Reduce cost of fault simulation
    • Fault list reduction
    • Efficient and diverse fault simulation methods
    • Fault sampling method
  – Reduce cost of test generation
    • Two-phase approach to test generation
      – Phase 1: low-cost methods initially – random vectors
      – Phase 2: use methods that target specific faults till the desired fault coverage is reached
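The two-phase flow can be sketched end to end on a small NAND network (loosely modeled on the ISCAS-85 c17 benchmark; the structure, the vector counts, and the exhaustive search standing in for a deterministic ATPG are all assumptions):

```python
import random
from itertools import product

# Six 2-input NAND gates, loosely the c17 structure (an assumption).
NETLIST = [("G10", ("G1", "G3")), ("G11", ("G3", "G6")),
           ("G16", ("G2", "G11")), ("G19", ("G11", "G7")),
           ("G22", ("G10", "G16")), ("G23", ("G16", "G19"))]
INPUTS = ("G1", "G2", "G3", "G6", "G7")
OUTPUTS = ("G22", "G23")
LINES = INPUTS + tuple(g for g, _ in NETLIST)

def simulate(vec, fault=None):
    val = dict(zip(INPUTS, vec))
    if fault and fault[0] in val:
        val[fault[0]] = fault[1]
    for out, (a, b) in NETLIST:                # every gate is a 2-input NAND
        v = 1 - (val[a] & val[b])
        val[out] = fault[1] if fault and fault[0] == out else v
    return tuple(val[o] for o in OUTPUTS)

def detected(vec, fault):
    return simulate(vec) != simulate(vec, fault)

faults = {(line, v) for line in LINES for v in (0, 1)}
tests = []

# Phase 1: cheap random vectors; drop every fault a kept vector detects.
random.seed(0)
for _ in range(8):
    vec = tuple(random.randint(0, 1) for _ in INPUTS)
    hit = {f for f in faults if detected(vec, f)}
    if hit:
        tests.append(vec)
        faults -= hit

# Phase 2: target each surviving fault directly (exhaustive search stands
# in for a real deterministic ATPG), again dropping extra detections.
for fault in sorted(faults):
    if fault not in faults:
        continue
    for vec in product((0, 1), repeat=len(INPUTS)):
        if detected(vec, fault):
            tests.append(vec)
            faults -= {f for f in faults if detected(vec, f)}
            break
```

After phase 2 the fault list is empty: every modeled single stuck-at fault in this small circuit is detectable.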
ATPG Systems
• Phase 2 issues (during deterministic test generation):
  – Efficient heuristics for backtrace
  – What fault to pick next
  – Choice of backtrack limit
  – Switch heuristics
  – Interleave test generation and fault simulation
  – Fill in x's (test compaction)
  – Identify untestable faults by other methods
ATPG Systems
• Reduce cost of test application
  – Fill in x's (test compaction)
  – Test generator generates vectors with some inputs unspecified
    • Can fill these values with random (0, 1) values (often termed dynamic compaction)
    • More on compaction and compression (static, dynamic, etc.) later
ATPG Systems
[Figure] The circuit description and fault list feed a test generator with integrated fault simulation; its outputs are test patterns (through a compacter) plus lists of undetected, aborted, and redundant faults and a backtrack distribution.
Modeling non-stuck-at faults
• Multiple stuck-at faults can be modeled by single stuck-at faults for the purpose of test generation.
• Many non-stuck-at faults, for example transition faults, can be modeled as stuck-at faults for the purpose of test generation.
• Pros
  – Stuck-at test generator can be used
  – No need to develop specialized test generation tools
  – Offers the benefits of research in the area of stuck-at test generation
• Cons
  – Specialized tools may be more efficient for some applications
  – Modeling effort can be deceptive
Modeling Example: multiple stuck-at faults
[Figure] A circuit with inputs a–d and gates A–D: a multiple (4) stuck-at fault (two lines s-a-1, two lines s-a-0) and its equivalent single stuck-at fault.
Kim, Agrawal and Saluja, "Multiple Faults: Modeling, Simulation and Test", VLSI Design 2002
Modeling Example: stuck-open fault
Modeling Example: clock delay/skew fault
Design For Testability (DFT)
• Design for testability (DFT) refers to those design techniques that make test generation and test application cost-effective.
• DFT methods for digital circuits:
  – Ad-hoc methods
  – Structured methods:
    • Scan
    • Partial scan
    • Built-in self-test (BIST)
    • Boundary scan
DFT: Scan design
• Objectives
  – Simple read/write access to all or a subset of storage elements in a design.
  – Direct control of storage elements to an arbitrary value (0 or 1).
  – Direct observation of the state of storage elements and hence the internal state of the circuit.
• Key is enhanced controllability and observability.
• Most common method used in practice – serial scan in the form of single or multiple scan chains
DFT: Scan design
[Figure] A muxed-D scan cell: a MUX (the logic overhead) selects between the data input D (normal mode) and the scan data input SD (scan mode) under control of TC; the selected value drives the master and slave latches of the D flip-flop, clocked by CK. Waveforms show when the master and slave latches open in each mode.
Built-In Self-Test (BIST)
• Useful for field test and diagnosis (less expensive than a local automatic test equipment)
• Software tests for field test and diagnosis:
  – Low hardware fault coverage
  – Low diagnostic resolution
  – Slow to operate
• Hardware BIST benefits:
  – Lower system test effort
  – Improved system maintenance and repair
  – Improved component repair
  – Better diagnosis at component level
ITRS Test Goals
Part II – Challenges in VLSI Test
Selected topics
Test Data Volume
• Why compression
  – Test data increase
    • Increasing number of embedded cores
    • Increasing number of target fault models
  – Older ATE equipment
    • Fewer test channels
    • Not enough ATE memory
  – Reduce test cost
(Source: Mentor Graphics)
Test Data Compression
• Advantages
  – Complete set of ATPG test patterns can be applied
  – Compatible with the conventional design rules and test generation flow
• Benefits
  – Reduce amount of test data
  – Reduce test time
• Methods
  – Test stimulus compression
  – Test response compaction
Test stimulus compression
• Compress the test vectors losslessly (that is, reproduce all the care bits after decompression) to preserve fault coverage
• Three categories:
  – Code based schemes
  – Linear-decompression based schemes
  – Broadcast-scan based schemes
Code based test compression
• Run-length-based codes
  – A. Jas and N. A. Touba, "Test Vector Compression via Cyclical Scan Chains and Its Application to Testing Core-based Designs", ITC'98
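The idea behind run-length-based codes, shown here as plain run-length coding (a generic sketch, not the exact cyclical-scan-chain scheme of the cited paper): scan vectors with their don't-cares 0-filled contain long homogeneous runs, which compress well as (bit, length) pairs:

```python
def rle_encode(bits):
    """Collapse a 0/1 string into (bit, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]

def rle_decode(runs):
    """Expand the pairs back into the original bit string."""
    return "".join(b * n for b, n in runs)

# A test cube with its X's 0-filled produces long runs:
pattern = "0" * 7 + "1" + "0" * 14 + "11" + "0" * 7
encoded = rle_encode(pattern)   # 31 bits collapse into 5 runs
```

Decoding is exact, so all care bits are reproduced and fault coverage is preserved.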
Code based test compression
• Dictionary codes
  – S. M. Reddy et al., "On Test Data Volume Reduction for Multiple Scan Chain Designs", VTS'02
Code based test compression
• Statistical codes
  – A. Jas et al., "An Efficient Test Vector Compression Scheme Using Selective Huffman Coding", TCAD'03
Code based test compression
• Constructive codes
  – Z. Wang and K. Chakrabarty, "Test Data Compression for IP Embedded Cores Using Selective Encoding of Scan Slices", ITC'05
Linear-decompression-based test compression
• LFSR reseeding
  – B. Koenemann, "LFSR-Coded Test Patterns for Scan Designs", ETC'91
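LFSR reseeding relies on an on-chip LFSR expanding a short seed into a long scan sequence; the seed for a given test cube is found by solving linear equations over its care bits. Below is only the expansion half: a 4-bit Fibonacci LFSR whose tap choice (a maximal-length configuration) is our assumption:

```python
def lfsr_stream(seed, taps=(3, 2), nbits=15):
    """Expand a 4-bit seed (list of 0/1) into nbits pseudorandom bits."""
    state = list(seed)
    out = []
    for _ in range(nbits):
        out.append(state[-1])                  # output the last stage
        fb = state[taps[0]] ^ state[taps[1]]   # linear feedback
        state = [fb] + state[:-1]              # shift one position
    return out
```

With any nonzero seed this configuration cycles through all 15 nonzero states before repeating, so a 4-bit seed stands in for a 15-bit scan sequence.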
Linear-decompression-based test compression
• LFSR reseeding with run-length
  – C. Yao, K. Saluja and A. Sinkar, "WOR-BIST: A Complete Test Solution for Designs Meeting Power, Area and Performance Requirements", VLSID'09
Linear-decompression-based test compression
• Ring generator (EDT)
  – G. Mrugalski, J. Rajski and J. Tyszer, "Ring Generator – New Devices for Embedded Test Applications", TCAD'04
Broadcast-scan-based test compression
• Illinois scan architecture
  – I. Hamzaoglu and J. H. Patel, "Reducing Test Application Time for Full Scan Embedded Cores", FTCS'99
[Figure] (a) Broadcast mode: Scan In drives scan segments 1–4 in parallel and their outputs feed a MISR before Scan Out. (b) Serial chain mode: Scan In shifts through the single concatenated scan chain to Scan Out.
Broadcast-scan-based test compression
• Scan tree (scan forest)
  – K. Miyase, S. Kajihara and S. M. Reddy, "Multiple Scan Tree Design with Test Vector Modification", ATS'04
Test response compaction
• Compaction – drastically reduce the number of bits in the original circuit response – loses information
  – Space compaction: reduce the number of output bits
  – Time compaction: reduce the number of response patterns
  – Mixed
Test response compaction
• Transition count response compaction
  – Count the number of transitions as the signature
• MISR response compaction
  – Multiple Input Signature Register
  – Compact all outputs into one LFSR
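A minimal MISR model, assuming a 4-bit register and the same (assumed) feedback taps as a simple LFSR: each cycle the register shifts with feedback and XORs in one response bit from each of four scan-chain outputs, so an entire response stream collapses into a 4-bit signature:

```python
def misr(slices, width=4, taps=(3, 2)):
    """slices: one tuple of `width` response bits per clock cycle."""
    state = [0] * width
    for bits in slices:
        fb = state[taps[0]] ^ state[taps[1]]          # LFSR feedback
        state = [fb] + state[:-1]                     # shift
        state = [s ^ b for s, b in zip(state, bits)]  # fold in responses
    return tuple(state)

good = misr([(1, 0, 1, 1), (0, 0, 1, 0), (1, 1, 0, 0)])
bad  = misr([(1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 0)])  # one bit flipped
```

Because the state update is linear and invertible, any single-bit error is guaranteed to change the signature; for random multi-bit errors the aliasing probability is about 2 to the power of minus width.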
Test response compaction
• Aliasing problem
  – When the bad machine signature equals the good machine signature
Model: block compactor
[Figure] An input data block of n·d bits (n scan-out lines over d clock cycles) enters a linear compactor and leaves as an output data block of m·d bits, with m < n.
Linear space compactors
• Compactors implemented with XOR trees.
[Figure] An XOR network maps n·d inputs (in1–in8) to m·d outputs (out1–out4); each output is the parity of the inputs connected to it, so each row of the connection matrix has a weight (e.g., a row of weight 3, a row of weight 1).
Single weight compactors
• Construction: every row of the connection matrix is non-zero, distinct, and of identical, odd weight.
• Property in the absence of X values: one, two, or any odd number of errors are guaranteed to be detected.
[Figure: an example 0/1 connection matrix]
Test response compaction
• X propagation sources:
  – Uninitialized memory elements
  – Floating tri-states
  – Multi-cycle paths
• Problem:
  – X's corrupt the final signature
  – X's prevent observation of other scan cells
  – Output compaction ratios and ATPG results are degraded by the capture of unknown values
Test response compaction
• Handling X's
  – X-bounding (X-blocking)
    • Insert DFT to prevent X's from propagating to the output
  – X-tolerant compactor
    • Make the compactor resilient to one or several X's propagated to it
    • Non-uniform weight compactors
  – X-masking
    • Mask X's at the input to the compactor
    • Mask data required
Test response compaction
• N. A. Touba, "X-cancelling MISR – New Approach for X-Tolerant Output Compaction", ITC'07
Industrial Practice
• Mentor Graphics
  – TestKompress
  – Embedded Deterministic Test (EDT)
Industrial Practice
• Synopsys
  – Adaptive Scan (DFTMAX)
  – Broadcasting
Industrial Practice
• SynTest
  – Virtual Scan
  – Broadcasting
Fault Diagnosis
• An estimation or prediction as to what's wrong with the malfunctioning circuit
• Narrows the search for the physical root cause
• Makes inferences based on observed behavior
• Usually based on the logical operation of the circuit
Fault Diagnosis
[Figure] Tests applied to the defective circuit produce an observed behavior; a diagnosis algorithm maps that behavior to a fault or location, which then directs physical analysis.
Fault Diagnosis
• Two types of fault diagnosis:
  – Effect-cause diagnosis (circuit partitioning)
    • Identify fault-free or possibly-faulty portions
    • Identify suspect components, logic blocks, interconnects
  – Cause-effect diagnosis (model-based)
    • Assume one or more specific fault models
    • Compare behavior to results of fault simulations
Fault Diagnosis (effect-cause)
• Back cones method:
  – Identify failing flops or outputs
  – The input cone of logic is suspect
  – The intersection of multiple cones is highly suspect
Fault Diagnosis (cause-effect)
• Fault dictionary method:
  – A fault dictionary is a database of the simulated responses for all candidate faults in the fault list
  – Used by some diagnosis algorithms for convenience
    • Fast: no simulation at time of diagnosis
    • Self-contained: netlist, simulator, and test set not needed after dictionary creation
    • Dictionary can be very large, however!
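A fault dictionary in miniature, using the two-input XOR netlist from Part I as the circuit and an exhaustive four-vector test set (both assumptions for illustration): every candidate fault's response is pre-simulated once, and diagnosis is then a pure lookup.

```python
GATES = {"NOT": lambda a: 1 - a,
         "AND": lambda a, b: a & b,
         "OR":  lambda a, b: a | b}
NETLIST = [("G3", "NOT", ["G2"]), ("G4", "NOT", ["G1"]),
           ("G5", "AND", ["G1", "G3"]), ("G6", "AND", ["G2", "G4"]),
           ("G7", "OR",  ["G5", "G6"])]
LINES = ["G1", "G2", "G3", "G4", "G5", "G6", "G7"]
VECTORS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def simulate(vec, fault=None):
    values = {"G1": vec[0], "G2": vec[1]}
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]
    for out, gate, ins in NETLIST:
        v = GATES[gate](*(values[i] for i in ins))
        values[out] = fault[1] if fault and fault[0] == out else v
    return values["G7"]

# Build the dictionary once: fault -> response to the whole test set.
DICTIONARY = {(line, sv): tuple(simulate(v, (line, sv)) for v in VECTORS)
              for line in LINES for sv in (0, 1)}

def diagnose(observed):
    """All candidate faults whose stored response matches the tester data."""
    return sorted(f for f, resp in DICTIONARY.items() if resp == observed)
```

An observed all-ones response, for example, narrows the suspects to the equivalence class {G5 s-a-1, G6 s-a-1, G7 s-a-1}, which is the best resolution this test set can achieve.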
Fault Diagnosis
• Dynamic diagnosis:
  – Alternative to dictionary-based diagnosis
  – Fault simulation is only done for certain faults, based on test results
    • Only simulate faults in the input cones of failing flip-flops/outputs
    • The dictionary is eliminated, but the complete netlist and test pattern file are required
Fault Diagnosis
• Pareto analysis:
  – Gather data on the frequency of the causes of faults
  – Rank the causes from the most to the least important
  – Construct a bar chart based on the percentage of each case
Fault Diagnosis
• Ranking fault candidates:
  – Two common ranking methods
    • Match/mismatch points
    • Fault candidate probability
  – Other common rankings:
    • Hamming distance
    • Set intersection/overlap
    • Nearest neighbor
Fault Diagnosis
• Match/mismatch point ranking:
  – Award points for matching observed failures
  – Optionally deduct points for not predicting fails
    • Non-prediction: a behavior not predicted by the candidate
    • Mis-prediction: a prediction not fulfilled by the behavior
Fault Diagnosis
• Probabilistic ranking:
  – Probability rank based on matches, mismatches, and error assumptions
    • Weights for non- and mis-prediction
    • Different prediction probabilities for different fault candidates (bridges vs. stuck-at)
    • Usually normalized so that the total over all candidates equals 1.0
Fault Diagnosis
• W. Zou, W.-T. Cheng, S. M. Reddy and H. Tang, "Speeding up Effect-Cause Defect Diagnosis using a Small Dictionary", VTS'07
Fault Diagnosis
• Scan chain diagnosis:
  – Find the faulty scan cell in the faulty chain
  – In scan based designs, 10%–30% of defects are in scan chains.
  – Chips with faulty chains fail scan based testing
Fault Diagnosis
Scan chain fault taxonomy:
• Functional faults
  – Stuck-at
  – Bridging
• Timing faults
  – Setup-time violation faults (transition faults): slow-to-rise and slow-to-fall faults
  – Hold-time violation faults
Fault Diagnosis
• Scan chain diagnosis:
  – Hardware based methods:
    • Use special scan cells and additional scan circuitry to facilitate the scan chain diagnosis process
  – Fault simulation based methods:
    • Do not need any modification of the basic scan circuitry
    • Find a faulty circuit matching the syndromes
Fault Diagnosis
• O. Sinanoglu and P. Schremmer, "Scan Chain Hold-Time Violations: Can They Be Tolerated?", TVLSI'09
• Diagnosis of scan chain hold-time violations in a four-stage process:
  – Identification of the number of hold-time violations in each chain
  – Generation and application of a minimal set of diagnostic patterns
  – Pinpointing hold-time violator cells per pattern
  – Intersection of candidate scan cell sets
Fault Diagnosis
• Multiple fault diagnosis:
  – Multiple fault simulations
    • Partition the failing outputs
    • Incremental simulation-based approach to diagnose failures one at a time
  – Use n-detection tests
Fault Diagnosis
• H. Takahashi et al., "On Diagnosing Multiple Stuck-at Faults Using Multiple and Single Fault Simulation in Combinational Circuits", TCAD'02
  – Multiple-fault simulations are performed by injecting all suspected faults in the fault-free circuit.
  – For tests that lead to simulation results inconsistent with the testing responses, single-fault simulations are performed.
    • All simulated single faults that give results consistent with the observed responses are maintained in (or added to) the set of suspected faults.
    • Single faults that lead to inconsistency between the result of passing-test simulation and the observed responses are deduced as faults that may not be present in the circuit.
  – Diagnosis is effected by repeated removal and addition of faults to the set of suspected faults.
Test Scheduling
[Figure] An SoC with six cores, several equipped with BIST, connected to an external test bus.
Test scheduling: to reduce the test cost, the testing time must be minimized by carefully scheduling the tests for the cores at the system level, and the tests must be scheduled without violating any constraint.
Resource constrained test scheduling
Goal: Minimize total test time under resource constraints
[Figure] The six core tests t1–t6 of the SoC form a test compatibility graph (TCG), whose edges join tests that can run concurrently.
Resource constrained test scheduling
• Clique cover heuristic
  – A clique is a maximal complete subgraph of a graph.
  – Goal: cover all nodes by a minimum number of cliques.
  – Solution for the example TCG: (1,3,5), (2,6), (4)
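The clique cover heuristic can be sketched greedily (the TCG edge set below is an assumption chosen to reproduce the slide's solution (1,3,5), (2,6), (4); finding a minimum clique cover is NP-hard in general, hence the heuristic):

```python
# Assumed compatibility edges of the example TCG.
EDGES = {("t1", "t3"), ("t1", "t5"), ("t3", "t5"), ("t2", "t6")}
NODES = ["t1", "t2", "t3", "t4", "t5", "t6"]

def compatible(a, b):
    return (a, b) in EDGES or (b, a) in EDGES

def clique_cover(nodes):
    """Greedily grow one clique (test session) at a time."""
    remaining = list(nodes)
    sessions = []
    while remaining:
        clique = [remaining.pop(0)]
        for n in remaining[:]:
            # Add n only if it is compatible with every test already chosen.
            if all(compatible(n, m) for m in clique):
                clique.append(n)
                remaining.remove(n)
        sessions.append(clique)
    return sessions

sessions = clique_cover(NODES)
```

With these edges the greedy pass yields exactly the slide's three test sessions.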
Resource constrained test scheduling
• Test session
  – A subset of the test set such that all the tests in the test session are compatible.
  – The next test session can start only after the previous test session is completed.
[Figure] Gantt charts over test time compare a three-session schedule with equal-length tests, the same sessions with unequal-length tests, and a better three-session solution for unequal-length tests.
Power constrained test scheduling
Goal: Minimize total test time under resource and power constraints
• Solutions:
  – Chou et al. (TCAD'97): graph-based heuristic
  – Iyengar et al. (TCAD'02): mixed integer linear programming
  – Zhao et al. (TCAD'05): rectangle packing heuristic
[Figure] Tests t1 (power 2), t2 (2), t3 (1), t4 (4), t5 (3), t6 (2) packed as rectangles under the power limit Pmax = 5. (Source: Zhao et al., TCAD'05)
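A first-fit-decreasing sketch of power-constrained session formation, using the slide's power values and Pmax = 5 (test lengths and the compatibility graph are ignored here for brevity, an assumption):

```python
POWER = {"t1": 2, "t2": 2, "t3": 1, "t4": 4, "t5": 3, "t6": 2}
PMAX = 5

def schedule(power, pmax):
    """Pack tests into sessions whose summed power stays within pmax."""
    sessions = []
    for test in sorted(power, key=power.get, reverse=True):  # biggest first
        for s in sessions:
            if sum(power[t] for t in s) + power[test] <= pmax:
                s.append(test)        # first session with room
                break
        else:
            sessions.append([test])   # no session fits: open a new one
    return sessions

sessions = schedule(POWER, PMAX)
```

This is the classic bin-packing greedy: every test is scheduled exactly once and no session exceeds the power limit, though (as with the clique heuristic) the session count is not guaranteed minimal.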
Thermal constrained test scheduling
Goal: Minimize total test time under resource, power and thermal constraints
• Solutions:
  – Rosinger et al. (DATE'05, TCAD'06)
  – Liu et al. (DFT'05): generate power-constrained test schedules first, then modify them to meet the thermal constraint
  – He et al. (DFT'06, ITC'07, ATS'08): test set partitioning and interleaving; constraint logic programming
  – Bild et al. (ICCAD'08): simplified thermal model; MILP formulation; seed-based clustering heuristic
Thermal constrained test scheduling
• C. Yao, K. K. Saluja and P. Ramanathan, "Thermal-aware test scheduling using on-chip temperature sensors", VLSID'11; S. Millican and K. K. Saluja, "… Scheduling for 3-D Stacked ICs", ATS'12
  – A dynamic test scheduling scheme based on real-time temperature measurements from on-chip sensors
  – A test architecture that supports the dynamic test scheduling
  – Heuristics to generate initial static schedules for online adaptation
  – An optimal thermal scheduling formulation
Future test problems
• Difficult challenges
  – Test for yield learning
    • Critically essential for fab process and device learning below optical device dimensions
  – Detecting systemic defects
    • Testing for local non-uniformities, not just hard defects
    • Detecting symptoms and effects of line width variations, finite dopant distributions, systemic process defects
  – Screening for reliability
    • Effectiveness and implementation of burn-in, IDDQ, and Vstress testing
    • Detection of erratic, non-deterministic, and intermittent device behavior
[Source: ITRS]
Future test problems
• Present and near future testing problems (≤ 2015)
  – Test generation and fault simulation for traditional and non-traditional fault models for a single core
  – Test generation and test application cost (test data volume)
  – Reduction of test application power
  – Thermal constraint testing
  – Fault diagnosis for volume production
  – Test generation, DFT, BIST and test application problems for multi-core ICs
  – Test strategies for SoC and multi-core ICs
Future test problems
• Future testing problems (> 2015)
  – DFT and standardization challenges
  – Operational life failures
    • NBTI, PBTI and other such failures
  – Diagnosis for reconfiguration and tuning
  – Esoteric technologies
    • Esoteric and new failure modes
  – New issues – parametric, statistical, …
Thank you!