CS252 Graduate Computer Architecture
Lecture 16: Caches and Memory Systems
October 29, 2000
Computer Science 252
10/29/00 CS252/Kubiatowicz Lec 16.1
Review: Who Cares About the Memory Hierarchy?
• Processor Only Thus Far in Course:
  – CPU cost/performance, ISA, Pipelined Execution
[Chart: “Moore’s Law” vs. “Less’ Law?” — performance (log scale, 1 to 1000) vs. year, 1980 to 2000. µProc performance grows 60%/yr; DRAM performance grows 7%/yr; the processor-memory (CPU-DRAM) performance gap grows 50%/yr.]
• 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with a cache on chip)
What is a cache?
• Small, fast storage used to improve average access time to slow memory.
• Exploits spatial and temporal locality.
• In computer architecture, almost everything is a cache!
  – Registers: a cache on variables
  – First-level cache: a cache on second-level cache
  – Second-level cache: a cache on memory
  – Memory: a cache on disk (virtual memory)
  – TLB: a cache on page table
  – Branch-prediction: a cache on prediction information?
[Diagram: the memory hierarchy — Proc/Regs, L1-Cache, L2-Cache, Memory, Disk/Tape, etc.; levels get bigger going down and faster going up.]
Review: Cache performance
• Miss-oriented approach to memory access:
  CPUtime = IC × (CPI_Execution + MemAccess/Inst × MissRate × MissPenalty) × CycleTime
• Separating out the memory component entirely
  – AMAT = Average Memory Access Time
  CPUtime = IC × (CPI_AluOps + MemAccess/Inst × AMAT) × CycleTime
  AMAT = HitTime + MissRate × MissPenalty
       = (HitTime_Inst + MissRate_Inst × MissPenalty_Inst)
       + (HitTime_Data + MissRate_Data × MissPenalty_Data)
Example: Harvard Architecture
• Unified vs. Separate I&D (Harvard)
[Diagram: unified organization — Proc → Unified Cache-1 → Unified Cache-2; Harvard organization — Proc → separate I-Cache-1 and D-Cache-1 → Unified Cache-2.]
• Table on page 384:
  – 16KB I&D: Inst miss rate = 0.64%, Data miss rate = 6.47%
  – 32KB unified: Aggregate miss rate = 1.99%
• Which is better (ignore L2 cache)?
  – Assume 33% data ops => 75% of accesses are from instructions (1.0/1.33)
  – hit time = 1, miss time = 50
  – Note that a data hit has 1 extra stall for the unified cache (only one port)
  AMAT_Harvard = 75% × (1 + 0.64% × 50) + 25% × (1 + 6.47% × 50) = 2.05
  AMAT_Unified = 75% × (1 + 1.99% × 50) + 25% × (1 + 1 + 1.99% × 50) = 2.24
Review: Improving Cache
Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Review: Miss Classification
• Classifying Misses: 3 Cs
  – Compulsory—The first access to a block is not in the cache, so the block must be brought into the cache. Also called cold start misses or first reference misses. (Misses in even an Infinite Cache)
  – Capacity—If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses in Fully Associative Size X Cache)
  – Conflict—If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory & capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses. (Misses in N-way Associative, Size X Cache)
• More recent, 4th “C”:
  – Coherence—Misses caused by cache coherence.
1. Reduce Misses via Larger Block Size
[Chart: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K.]
2. Reduce Misses via Higher Associativity
• 2:1 Cache Rule:
  – Miss rate of a direct-mapped cache of size N ≈ miss rate of a 2-way set-associative cache of size N/2
• Beware: execution time is the only final measure!
  – Will clock cycle time increase?
  – Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%
Example: Avg. Memory Access Time vs. Miss Rate
• Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. CCT of direct mapped

  Cache Size (KB)   1-way   2-way   4-way   8-way
        1           2.33    2.15    2.07    2.01
        2           1.98    1.86    1.76    1.68
        4           1.72    1.67    1.61    1.53
        8           1.46    1.48    1.47    1.43
       16           1.29    1.32    1.32    1.32
       32           1.20    1.24    1.25    1.27
       64           1.14    1.20    1.21    1.23
      128           1.10    1.17    1.18    1.20

(Red means A.M.A.T. not improved by more associativity)
3. Reducing Misses via a “Victim Cache”
• How to combine the fast hit time of direct mapped yet still avoid conflict misses?
• Add a buffer to place data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct mapped data cache
• Used in Alpha, HP machines
[Diagram: the victim cache — four fully associative entries, each a tag-and-comparator plus one cache line of data — sits between the cache (TAGS/DATA) and the next lower level in the hierarchy.]
4. Reducing Misses via “Pseudo-Associativity”
• How to combine the fast hit time of direct mapped and the lower conflict misses of a 2-way SA cache?
• Divide cache: on a miss, check the other half of the cache to see if the data is there; if so, have a pseudo-hit (slow hit)

  Hit Time < Pseudo Hit Time < Miss Penalty (increasing time)

• Drawback: CPU pipeline design is hard if a hit takes 1 or 2 cycles
  – Better for caches not tied directly to the processor (L2)
  – Used in MIPS R10000 L2 cache; similar in UltraSPARC
5. Reducing Misses by Hardware Prefetching of Instructions & Data
• E.g., Instruction Prefetching
  – Alpha 21064 fetches 2 blocks on a miss
  – Extra block placed in “stream buffer”
  – On miss, check stream buffer
• Works with data blocks too:
  – Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 streams caught 43%
  – Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set associative caches
• Prefetching relies on having extra memory bandwidth that can be used without penalty
6. Reducing Misses by Software Prefetching Data
• Data Prefetch
  – Load data into register (HP PA-RISC loads)
  – Cache Prefetch: load into cache (MIPS IV, PowerPC, SPARC v. 9)
  – Special prefetching instructions cannot cause faults; a form of speculative execution
• Prefetching comes in two flavors:
  – Binding prefetch: Requests load directly into register.
    » Must be correct address and register!
  – Non-Binding prefetch: Load into cache.
    » Can be incorrect. Frees HW/SW to guess!
• Issuing prefetch instructions takes time
  – Is the cost of prefetch issues < the savings in reduced misses?
  – Wider superscalar reduces the difficulty of issue bandwidth
7. Reducing Misses by Compiler Optimizations
• McFarling [1989] reduced cache misses by 75% on an 8KB direct mapped cache, 4 byte blocks, in software
• Instructions
  – Reorder procedures in memory so as to reduce conflict misses
  – Profiling to look at conflicts (using tools they developed)
• Data
  – Merging Arrays: improve spatial locality by a single array of compound elements vs. 2 arrays
  – Loop Interchange: change nesting of loops to access data in the order stored in memory
  – Loop Fusion: combine 2 independent loops that have the same looping and some variables overlap
  – Blocking: improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows
Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key; improves spatial locality
Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality
Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }

2 misses per access to a & c vs. one miss per access; improves temporal locality (a & c are reused while still in cache)
Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k]*z[k][j];
    x[i][j] = r;
  }

• Two Inner Loops:
  – Read all N×N elements of z[]
  – Read N elements of 1 row of y[] repeatedly
  – Write N elements of 1 row of x[]
• Capacity misses are a function of N & cache size:
  – 2N³ + N² words accessed (assuming no conflicts; otherwise …)
• Idea: compute on a B×B submatrix that fits
Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B,N); k = k+1)
          r = r + y[i][k]*z[k][j];
        x[i][j] = x[i][j] + r;
      }

• B is called the Blocking Factor
• Capacity misses drop from 2N³ + N² to N³/B + 2N²
• Conflict misses too?
Reducing Conflict Misses by Blocking
[Chart: miss rate (0 to 0.1) vs. blocking factor (0 to 150); the direct mapped cache shows conflict misses that the fully associative cache avoids.]
• Conflict misses in caches that are not fully associative vs. blocking size
  – Lam et al [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache
Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
[Chart: performance improvement (1 to 3) from merged arrays, loop interchange, loop fusion, and blocking for vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress.]
Summary: Miss Rate Reduction

  CPUtime = IC × (CPI_Execution + MemoryAccesses/Instruction × MissRate × MissPenalty) × ClockCycleTime

• 3 Cs: Compulsory, Capacity, Conflict
1. Reduce Misses via Larger Block Size
2. Reduce Misses via Higher Associativity
3. Reducing Misses via Victim Cache
4. Reducing Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Misses by Compiler Optimizations
• Prefetching comes in two flavors:
  – Binding prefetch: Requests load directly into register.
    » Must be correct address and register!
  – Non-Binding prefetch: Load into cache.
    » Can be incorrect. Frees HW/SW to guess!
Administrative
• Final project descriptions due today by 5pm!
• Submit web site via email:
– Web site will contain all of your project results, etc.
– Minimum initial site: Cool title, link to proposal
• Anyone need resources?
– NOW: apparently can get account via web site
– Millennium: can get account via web site
– SimpleScalar: info on my web page
Improving Cache Performance
Continued
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
1. Reducing Miss Penalty: Read Priority over Write on Miss
[Diagram: the CPU's in/out paths connect through a write buffer to DRAM (or lower memory); writes drain from the buffer while the CPU continues.]
1. Reducing Miss Penalty: Read Priority over Write on Miss
• Write through with write buffers offers RAW conflicts with main memory reads on cache misses
  – If we simply wait for the write buffer to empty, we might increase the read miss penalty (old MIPS 1000 by 50%)
  – Check write buffer contents before read; if no conflicts, let the memory access continue
• Alternative: Write Back
  – Read miss replacing dirty block
  – Normal: write the dirty block to memory, and then do the read
  – Instead: copy the dirty block to a write buffer, then do the read, and then do the write
  – CPU stalls less since it restarts as soon as the read is done
2. Reduce Miss Penalty: Subblock Placement
• Don’t have to load the full block on a miss
• Have a valid bit per subblock to indicate validity
• (Originally invented to reduce tag storage)
[Diagram: cache lines divided into subblocks, each with its own valid bit.]
3. Reduce Miss Penalty: Early Restart and Critical Word First
• Don’t wait for the full block to be loaded before restarting the CPU
  – Early Restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  – Critical Word First—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
• Generally useful only with large blocks
• Spatial locality is a problem: the CPU tends to want the next sequential word, so it is not clear how much early restart benefits
4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
• A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
  – requires F/E bits on registers or out-of-order execution
  – requires multi-bank memories
• “hit under miss” reduces the effective miss penalty by working during the miss vs. ignoring CPU requests
• “hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
  – Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
  – Requires multiple memory banks (otherwise cannot support)
  – Pentium Pro allows 4 outstanding memory misses
Value of Hit Under Miss for SPEC
[Chart: normalized average memory access time under “hit under n misses” policies (0→1, 1→2, 2→64, base) for SPEC integer benchmarks (eqntott, espresso, xlisp, compress, mdljsp2, ear) and floating point benchmarks (fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora).]
• FP programs on average: AMAT = 0.68 → 0.52 → 0.34 → 0.26
• Int programs on average: AMAT = 0.24 → 0.20 → 0.19 → 0.19
• 8 KB Data Cache, Direct Mapped, 32B block, 16 cycle miss penalty
5. Second Level Cache
• L2 Equations:
  AMAT = HitTime_L1 + MissRate_L1 × MissPenalty_L1
  MissPenalty_L1 = HitTime_L2 + MissRate_L2 × MissPenalty_L2
  AMAT = HitTime_L1 + MissRate_L1 × (HitTime_L2 + MissRate_L2 × MissPenalty_L2)
• Definitions:
  – Local miss rate—misses in this cache divided by the total number of memory accesses to this cache (MissRate_L2)
  – Global miss rate—misses in this cache divided by the total number of memory accesses generated by the CPU (MissRate_L1 × MissRate_L2)
  – Global miss rate is what matters
Comparing Local and Global Miss Rates
• 32 KByte 1st level cache; increasing 2nd level cache
• Global miss rate is close to the single-level cache rate provided L2 >> L1
• Don’t use the local miss rate
• L2 not tied to CPU clock cycle!
• Cost & A.M.A.T.
• Generally fast hit times and fewer misses
• Since hits are few, target miss reduction
[Charts: local and global miss rates vs. L2 cache size, on linear and log cache-size scales.]
Reducing Misses: Which Apply to L2 Cache?
• Reducing Miss Rate
  1. Reduce Misses via Larger Block Size
  2. Reduce Conflict Misses via Higher Associativity
  3. Reducing Conflict Misses via Victim Cache
  4. Reducing Conflict Misses via Pseudo-Associativity
  5. Reducing Misses by HW Prefetching Instr, Data
  6. Reducing Misses by SW Prefetching Data
  7. Reducing Capacity/Conf. Misses by Compiler Optimizations
L2 Cache Block Size & A.M.A.T.
[Chart: relative CPU time vs. L2 block size — 16B: 1.95, 32B: 1.54, 64B: 1.36, 128B: 1.28, 256B: 1.27, 512B: 1.34.]
• 32KB L1, 8 byte path to memory
Reducing Miss Penalty Summary

  CPUtime = IC × (CPI_Execution + MemoryAccesses/Instruction × MissRate × MissPenalty) × ClockCycleTime

• Five techniques
  – Read priority over write on miss
  – Subblock placement
  – Early Restart and Critical Word First on miss
  – Non-blocking Caches (Hit under Miss, Miss under Miss)
  – Second Level Cache
• Can be applied recursively to Multilevel Caches
  – Danger is that time to DRAM will grow with multiple levels in between
  – First attempts at L2 caches can make things worse, since the increased worst case is worse
Main Memory Background
• Performance of Main Memory:
  – Latency: Cache Miss Penalty
    » Access Time: time between request and word arriving
    » Cycle Time: time between requests
  – Bandwidth: I/O & Large Block Miss Penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
  – Dynamic since it needs to be refreshed periodically (8 ms, 1% of time)
  – Addresses divided into 2 halves (memory as a 2D matrix):
    » RAS or Row Access Strobe
    » CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
  – No refresh (6 transistors/bit vs. 1 transistor)
  – Size: DRAM/SRAM is 4-8×; Cost & Cycle time: SRAM/DRAM is 8-16×
Main Memory Deep Background
• “Out-of-Core,” “In-Core,” “Core Dump”?
• “Core memory”?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 64Kbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns
DRAM Logical Organization (4 Mbit)
[Diagram: 11 address lines A0…A10 are multiplexed into the row and column decoders of a 2,048 × 2,048 memory array; sense amps & I/O connect the selected word line's storage cells to the D and Q data pins.]
• Square root of bits per RAS/CAS
4 Key DRAM Timing Parameters
• tRAC: minimum time from RAS line falling to the valid data output.
  – Quoted as the speed of a DRAM when you buy it (it is what appears on the purchase sheet)
  – A typical 4Mbit DRAM tRAC = 60 ns
• tRC: minimum time from the start of one row access to the start of the next.
  – tRC = 110 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from CAS line falling to valid data output.
  – 15 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one column access to the start of the next.
  – 35 ns for a 4Mbit DRAM with a tRAC of 60 ns
DRAM Performance
• A 60 ns (tRAC) DRAM can
– perform a row access only every 110 ns (tRC)
– perform column access (tCAC) in 15 ns, but time
between column accesses is at least 35 ns (tPC).
» In practice, external address delays and turning
around buses make it 40 to 50 ns
• These times do not include the time to drive
the addresses off the microprocessor nor
the memory controller overhead!
DRAM History
• DRAMs: capacity +60%/yr, cost –30%/yr
– 2.5X cells/area, 1.5X die size in 3 years
• ‘98 DRAM fab line costs $2B
– DRAM only: density, leakage v. speed
• Rely on increasing no. of computers & memory
per computer (60% market)
– SIMM or DIMM is replaceable unit
=> computers use any generation DRAM
• Commodity, second source industry
=> high volume, low profit, conservative
– Little organization innovation in 20 years
• Order of importance: 1) Cost/bit 2) Capacity
– First RAMBUS: 10X BW, +30% cost => little impact
DRAM Future: 1 Gbit DRAM (ISSCC ‘96; production ‘02?)

                 Mitsubishi     Samsung
  Blocks         512 × 2 Mbit   1024 × 1 Mbit
  Clock          200 MHz        250 MHz
  Data Pins      64             16
  Die Size       24 × 24 mm     31 × 21 mm
  Metal Layers   3              4
  Technology     0.15 micron    0.16 micron

  – Die sizes will be much smaller in production
Main Memory Performance
• Simple:
  – CPU, Cache, Bus, Memory same width (32 or 64 bits)
• Wide:
  – CPU/Mux 1 word; Mux/Cache, Bus, Memory N words (Alpha: 64 bits & 256 bits; UltraSPARC 512)
• Interleaved:
  – CPU, Cache, Bus 1 word; Memory N modules (4 modules); example is word interleaved
Main Memory Performance
• Timing model (word size is 32 bits)
  – 1 cycle to send the address
  – 6 cycles access time, 1 cycle to send data
  – Cache block is 4 words
• Simple M.P.      = 4 × (1+6+1) = 32
• Wide M.P.        = 1 + 6 + 1 = 8
• Interleaved M.P. = 1 + 6 + 4×1 = 11
Independent Memory Banks
• Memory banks for independent accesses vs. faster sequential accesses
  – Multiprocessor
  – I/O
  – CPU with Hit under n Misses, Non-blocking Cache
• Superbank: all memory active on one block transfer (or Bank)
• Bank: portion within a superbank that is word interleaved (or Subbank)
[Diagram: address fields — Superbank Number, then Superbank Offset, itself split into Bank Number and Bank Offset.]
Independent Memory Banks
• How many banks?
  number of banks ≥ number of clocks to access a word in a bank
  – For sequential accesses; otherwise we return to the original bank before it has the next word ready
  – (as in the vector case)
• Increasing DRAM capacity => fewer chips => harder to have banks
Minimum Memory Size: DRAMs per PC over Time

                          DRAM Generation
              ‘86    ‘89    ‘92    ‘96    ‘99    ‘02
              1 Mb   4 Mb   16 Mb  64 Mb  256 Mb 1 Gb
    4 MB       32     8
    8 MB             16      4
   16 MB                     8      2
   32 MB                            4      1
   64 MB                            8      2
  128 MB                                   4      1
  256 MB                                   8      2
Avoiding Bank Conflicts
• Lots of banks

int x[256][512];
for (j = 0; j < 512; j = j+1)
  for (i = 0; i < 256; i = i+1)
    x[i][j] = 2 * x[i][j];

• Even with 128 banks, since 512 is a multiple of 128, word accesses conflict
• SW: loop interchange, or declaring the array not a power of 2 (“array padding”)
• HW: prime number of banks
  – bank number = address mod number of banks
  – address within bank = address / number of words in bank
  – modulo & divide per memory access with a prime number of banks?
  – address within bank = address mod number of words in bank
  – bank number? easy if 2^N words per bank
Fast Bank Number
• Chinese Remainder Theorem: as long as two sets of integers ai and bi follow these rules
    bi = x mod ai,  0 ≤ bi < ai,  0 ≤ x < a0 × a1 × a2 × …
  and ai and aj are co-prime for i ≠ j, then the integer x has only one solution (unambiguous mapping):
  – bank number = b0, number of banks = a0 (= 3 in example)
  – address within bank = b1, number of words in bank = a1 (= 8 in example)
  – N-word address 0 to N-1, prime number of banks, words per bank a power of 2

                    Seq. Interleaved       Modulo Interleaved
  Bank Number:       0    1    2            0    1    2
  Address
  within Bank:  0    0    1    2            0   16    8
                1    3    4    5            9    1   17
                2    6    7    8           18   10    2
                3    9   10   11            3   19   11
                4   12   13   14           12    4   20
                5   15   16   17           21   13    5
                6   18   19   20            6   22   14
                7   21   22   23           15    7   23
Fast Memory Systems: DRAM specific
• Multiple CAS accesses: several names (page mode)
– Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address gap;
what will they cost, will they survive?
– RAMBUS: startup company; reinvent DRAM interface
» Each Chip a module vs. slice of memory
» Short bus between CPU and chips
» Does own refresh
» Variable amount of data returned
» 1 byte / 2 ns (500 MB/s per chip)
– Synchronous DRAM: 2 banks on chip, a clock signal to DRAM,
transfer synchronous to system clock (66 - 150 MHz)
– Intel claims RAMBUS Direct (16 b wide) is future PC memory
• Niche memory or main memory?
– e.g., Video RAM for frame buffers, DRAM + fast serial output
DRAM Latency >> BW
• More app bandwidth => cache misses => DRAM RAS/CAS
• Application BW => lower DRAM latency
• RAMBUS, Synch DRAM increase BW but have higher latency
• EDO DRAM < 5% in PC
[Diagram: Proc with I$/D$ and an L2$ connected over a bus to a row of DRAM chips.]
Potential
DRAM Crossroads?
• After 20 years of 4X every 3 years,
running into wall? (64Mb - 1 Gb)
• How can keep $1B fab lines full if buy
fewer DRAMs per computer?
• Cost/bit –30%/yr if stop 4X/3 yr?
• What will happen to $40B/yr DRAM
industry?
Main Memory Summary
• Wider Memory
• Interleaved Memory: for sequential or
independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM specific optimizations: page mode &
Specialty DRAM
• DRAM future less rosy?
Big storage (such as DRAM/DISK):
Potential for Errors!
• On board discussion of Parity and ECC.
Review: Improving Cache
Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
1. Fast Hit Times via Small and Simple Caches
• Why does the Alpha 21164 have 8KB instruction and 8KB data caches + a 96KB second level cache?
  – Small data cache => fast clock rate
• Direct mapped, on chip
2. Fast Hits by Avoiding Address Translation
• Send the virtual address to the cache? Called a Virtually Addressed Cache or just Virtual Cache, vs. a Physical Cache
  – Every time a process is switched, logically we must flush the cache; otherwise we get false hits
    » Cost is time to flush + “compulsory” misses from the empty cache
  – Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address
  – I/O must interact with the cache, so it needs virtual addresses
• Solution to aliases
  – HW guarantees that aliases agree in the index field (with direct mapping), so each block must be unique; called page coloring
• Solution to cache flush
  – Add a process identifier tag that identifies the process as well as the address within the process: can’t get a hit if wrong process
Virtually Addressed Caches
[Diagram, three organizations:
  – Conventional Organization: CPU issues VA → TB (translation buffer) → PA → cache ($) → PA → MEM
  – Virtually Addressed Cache: CPU issues VA → cache tags are virtual; translate only on a miss (TB → PA → MEM); synonym problem
  – Overlap cache access with VA translation: CPU sends VA to TB and cache in parallel; requires the cache index to remain invariant across translation; L2 $ accessed with PA]
2. Fast Cache Hits by Avoiding
Translation: Process ID impact
• Black is uniprocess
• Light Gray is multiprocess
when flush cache
• Dark Gray is multiprocess
when use Process ID tag
• Y axis: Miss Rates up to 20%
• X axis: Cache size from 2 KB
to 1024 KB
2. Fast Cache Hits by Avoiding Translation: Index with Physical Portion of Address
• If the index is in the physical part of the address (the page offset), can start tag access in parallel with translation, so the result can be compared against the physical tag

  | Page Address (Address Tag) | Page Offset (Index | Block Offset) |

• Limits cache to page size: what if we want bigger caches with the same trick?
  – Higher associativity moves the barrier to the right
  – Page coloring
3. Fast Hit Times via Pipelined Writes
• Pipeline Tag Check and Update Cache as separate stages: the current write checks the tag while the previous write updates the cache
• Only STORES are in the pipeline; it empties during a miss

  Store r2, (r1)    check r1
  Add               --
  Sub               --
  Store r4, (r3)    M[r1] <- r2 & check r3

• The shaded “Delayed Write Buffer” must be checked on reads; either complete the write or read from the buffer
4. Fast Writes on Misses Via
Small Subblocks
• If most writes are 1 word, subblock size is 1 word, &
write through then always write subblock & tag
immediately
– Tag match and valid bit already set: Writing the block was proper, &
nothing lost by setting valid bit on again.
– Tag match and valid bit not set: The tag match means that this is the
proper block; writing the data into the subblock makes it appropriate to
turn the valid bit on.
– Tag mismatch: This is a miss and will modify the data portion of the
block. Since write-through cache, no harm was done; memory still has
an up-to-date copy of the old value. Only the tag to the address of
the write and the valid bits of the other subblock need be changed
because the valid bit for this subblock has already been set
• Doesn’t work with write back due to last case
Cache Optimization Summary

  Technique                          MR   MP   HT   Complexity
  Larger Block Size                  +    –         0
  Higher Associativity               +         –    1
  Victim Caches                      +              2
  Pseudo-Associative Caches          +              2
  HW Prefetching of Instr/Data       +              2
  Compiler Controlled Prefetching    +              3
  Compiler Reduce Misses             +              0
  Priority to Read Misses                 +         1
  Subblock Placement                      +    +    1
  Early Restart & Critical Word 1st       +         2
  Non-Blocking Caches                     +         3
  Second Level Caches                     +         2
  Small & Simple Caches              –         +    0
  Avoiding Address Translation                 +    2
  Pipelining Writes                            +    1

  (MR = miss rate, MP = miss penalty, HT = hit time; + improves, – hurts)
What is the Impact of What You’ve Learned About Caches?
[Chart: performance (1 to 1000, log scale) vs. year, 1980 to 2000; the CPU curve pulls away from the DRAM curve.]
• 1960-1985: Speed = ƒ(no. operations)
• 1990:
  – Pipelined Execution & Fast Clock Rate
  – Out-of-Order execution
  – Superscalar Instruction Issue
• 1998: Speed = ƒ(non-cached memory accesses)
• What does this mean for
  – Compilers? Operating Systems? Algorithms? Data Structures?
Cache Cross Cutting Issues
• Superscalar CPU & number of cache ports must match: number of memory accesses/cycle?
• Speculative execution and the non-faulting option on memory/TLB
• Parallel execution vs. cache locality
  – Want far separation to find independent operations vs. want reuse of data accesses to avoid misses
• I/O and consistency: caches => multiple copies of data
  – Consistency
Alpha 21064
• Separate Instr & Data TLBs & Caches
• TLBs fully associative
• TLB updates in SW (“Priv Arch Libr”)
• Caches 8KB direct mapped, write through
• Critical 8 bytes first
• Prefetch instr. stream buffer
• 2 MB L2 cache, direct mapped, WB (off-chip)
• 256 bit path to main memory, 4 × 64-bit modules
• Victim Buffer: to give read priority over write
• 4 entry write buffer between D$ & L2$
[Diagram: instruction and data cache paths, with the stream buffer on the instruction side and the victim buffer and write buffer between the data cache and L2.]
Alpha Memory Performance: Miss Rates of SPEC92
[Chart: miss rates (log scale, 0.01% to 100%) of the 8K I$, 8K D$, and 2M L2 for AlphaSort, Li, Compress, Ear, Tomcatv, and Spice. Representative points — AlphaSort: I$ miss = 6%, D$ miss = 32%, L2 miss = 10%; the others range down to I$ miss = 2%, D$ miss = 13%, L2 miss = 0.6% and I$ miss = 1%, D$ miss = 21%, L2 miss = 0.3%.]
Alpha CPI Components
• Instruction stall: branch mispredict (green); Data cache (blue); Instruction cache (yellow); L2$ (pink); Other: compute + reg conflicts, structural conflicts
[Chart: CPI (0.00 to 5.00) broken into L2, I$, D$, I Stall, and Other components for AlphaSort, Espresso, Sc, Mdljsp2, Ear, Alvinn, and Mdljp2.]
Pitfall: Predicting Cache Performance from Different Programs (ISA, compiler, ...)
• 4KB data cache miss rate: 8%, 12%, or 28%?
• 1KB instr cache miss rate: 0%, 3%, or 10%?
• Alpha vs. MIPS for an 8KB data $: 17% vs. 10%
• Why 2X Alpha v. MIPS?
[Chart: miss rate (0% to 35%) vs. cache size (1 to 128 KB) for the D$ and I$ of tomcatv, gcc, and espresso — curves D: tomcatv, D: gcc, D: espresso, I: gcc, I: espresso, I: tomcatv.]
Pitfall: Simulating Too Small an Address Trace
[Chart: cumulative average memory access time (1 to 4.5) vs. instructions executed (1 to 12 billion); the measured AMAT keeps shifting for billions of instructions. Parameters: I$ = 4 KB, B=16B; D$ = 4 KB, B=16B; L2 = 512 KB, B=128B; MP = 12, 200.]
Main Memory Summary
• Wider Memory
• Interleaved Memory: for sequential or
independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM specific optimizations: page mode &
Specialty DRAM
• DRAM future less rosy?
Cache Optimization Summary

  Technique                          MR   MP   HT   Complexity
  Larger Block Size                  +    –         0
  Higher Associativity               +         –    1
  Victim Caches                      +              2
  Pseudo-Associative Caches          +              2
  HW Prefetching of Instr/Data       +              2
  Compiler Controlled Prefetching    +              3
  Compiler Reduce Misses             +              0
  Priority to Read Misses                 +         1
  Subblock Placement                      +    +    1
  Early Restart & Critical Word 1st       +         2
  Non-Blocking Caches                     +         3
  Second Level Caches                     +         2
  Small & Simple Caches              –         +    0
  Avoiding Address Translation                 +    2
  Pipelining Writes                            +    1

  (MR = miss rate, MP = miss penalty, HT = hit time; + improves, – hurts)
Practical Memory Hierarchy
• The issue is NOT inventing new mechanisms
• The issue is taste in selecting among many alternatives to put together a memory hierarchy that fits well together
  – e.g., L1 data cache write through, L2 write back
  – e.g., L1 small for fast hit time/clock cycle
  – e.g., L2 big enough to avoid going to DRAM?