Tomasulo, ILP, Branch prediction

1
Instruction Level Parallelism
Vincent H. Berk
October 15, 2008
Reading for today: A.7 – A.8
Reading for Friday: 2.1 – 2.5
Project Proposals Due Right NOW!
2
Instruction Level Parallelism
• Pipeline CPI = Ideal pipeline CPI + Structural stalls +
Data hazard stalls + Control stalls
• Reduce stalls, reduce CPI
• Reduce CPI, increase IPC
• Instruction-level parallelism (ILP) seeks to reduce stalls
• Loop-level parallelism is easiest to see:
for (i=1; i<100; i=i+1) {
    A[i] = B[i] + C[i];
    D[i] = E[i] + F[i];
}
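Plugging illustrative numbers (ours, not from the slides) into the CPI equation above: an ideal pipeline CPI of 1.0 plus 0.2 data-hazard stalls and 0.1 control stalls per instruction gives CPI = 1.0 + 0.2 + 0.1 = 1.3, i.e. IPC = 1/1.3 ≈ 0.77; eliminating the stalls recovers IPC = 1.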
3
Instruction Level Parallelism
• ILP in SW (static) or HW (dynamic)
• HW intensive ILP dominates desktop and server
markets
• SW compiler intensive approaches more likely seen in
embedded systems
4
Dependences
• Two instructions are parallel if they can execute
simultaneously in a pipeline without causing any stalls
(assuming no structural hazards) and can be reordered
• Two instructions that are dependent are not parallel and
cannot be reordered
• Types of dependences
– Data dependences
– Name dependences
– Control dependences
5
Dependences
• Dependences are properties of programs
• Hazards are properties of the pipeline organization
• Dependence indicates the potential for a hazard
• Compiler concerned about dependences in program,
whether or not a HW hazard occurs depends on a given
pipeline
6
Review of Hazards
Consider instructions i and j, where i occurs before j.
RAW (read after write) — j tries to read a source before i writes it,
so j gets the old value
WAW (write after write) — j tries to write an operand before it is written
by i (only possible in pipelines that write in more than one pipe stage
or allow an instruction to proceed even when a previous instruction is
stalled)
WAR (write after read) — j tries to write a destination before it is read by
i, so i incorrectly gets the new value (only possible when some
instructions can write results early in the pipeline and other instructions
can read sources late in the pipeline)
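A minimal C sketch of the three orderings (statement labels and variable names are ours; think of each statement as one instruction):

/* Each pair must keep its program order, or the named hazard
   delivers a wrong value. */
void hazards(double b, double c, double e, double f)
{
    double a, d;
    a = b + c;    /* S1: writes a                                 */
    d = a * e;    /* S2: reads a written by S1    -> RAW on a     */
    e = f + 1.0;  /* S3: writes e still read by S2 -> WAR on e    */
    a = d - f;    /* S4: rewrites a written by S1  -> WAW on a    */
    (void)a; (void)d; (void)e;
}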
7
Data Dependences
• (True) Data dependences (RAW if a hazard for HW)
– Instruction i produces a result used by instruction j, or
– Instruction j is data dependent on instruction k, and
instruction k is data dependent on instruction i.
• Easy to determine for registers (fixed names)
• Hard for memory:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
8
Name Dependences
• Another kind of dependence called name dependence:
two instructions use same name but don’t exchange data
• Antidependence (WAR if a hazard for HW)
– Instruction j writes a register or memory location that
instruction i reads from and instruction i is executed first
• Output dependence (WAW if a hazard for HW)
– Instruction i and instruction j write the same register or
memory location; ordering between instructions must be
preserved
9
Name Dependences
• Hard for memory accesses
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
• Example of renaming:
Before renaming:
DIV.D  F0, F2, F4
ADD.D  F6, F0, F8
S.D    F6, 0(R1)
SUB.D  F8, F10, F14
MUL.D  F6, F10, F8

After renaming (S and T are new registers):
DIV.D  F0, F2, F4
ADD.D  S, F0, F8
S.D    S, 0(R1)
SUB.D  T, F10, F14
MUL.D  F6, F10, T
10
Control Dependence
• Final kind of dependence called control dependence
• Example
if p1 {S1;}
if p2 {S2;}
S1 is control dependent on p1 and S2 is control dependent
on p2 but not on p1.
Note that S2 could be data dependent on S1.
11
Control Dependences
• Two (obvious) constraints on control dependences:
– An instruction that is control dependent on a branch
cannot be moved before the branch so that its execution
is no longer controlled by the branch
– An instruction that is not control dependent on a branch
cannot be moved to after the branch so that its
execution is controlled by the branch
• Control dependences often relaxed to get parallelism; get
same effect if we preserve order of exceptions and data
flow
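A minimal C illustration of why exception order matters when relaxing control dependences (variable names are ours):

if (b != 0)
    t = a / b;   /* the divide is control dependent on the test;
                    hoisting it above the branch may trap (e.g.,
                    integer divide by zero) when b == 0 */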
12
Basic Loop Unrolling
for (i=1000; i>0; i=i-1)
    x[i] = x[i] + s;

Loop:  LD    F0, 0(R1)    ; F0 = array element
       ADDD  F4, F0, F2   ; add scalar in F2
       SD    0(R1), F4    ; store result
       SUBI  R1, R1, #8   ; decrement pointer 8 bytes (DW)
       BNEZ  R1, Loop     ; branch if R1 != zero
       NOP                ; delayed branch slot
13
FP Loop Hazards
Loop:  LD    F0, 0(R1)    ; F0 = vector element
       ADDD  F4, F0, F2   ; add scalar in F2
       SD    0(R1), F4    ; store result
       SUBI  R1, R1, #8   ; decrement pointer 8 bytes (DW)
       BNEZ  R1, Loop     ; branch if R1 != zero
       NOP                ; delayed branch slot

Instruction producing result    Instruction using result    Latency in clock cycles
FP ALU op                       Another FP ALU op           3
FP ALU op                       Store double                2
Load double                     FP ALU op                   1
Load double                     Store double                0
Integer op                      Branch                      1

Where are the stalls? (Note: latencies are due to the pipeline organization)
14
FP Loop Showing Stalls
1  Loop: LD    F0, 0(R1)    ; F0 = vector element
2        stall
3        ADDD  F4, F0, F2   ; add scalar in F2
4        stall
5        stall
6        SD    0(R1), F4    ; store result
7        SUBI  R1, R1, #8   ; decrement pointer 8 bytes (DW)
8        stall              ; wait for result in R1
9        BNEZ  R1, Loop     ; branch if R1 != zero
10       stall              ; delayed branch slot

Instruction producing result    Instruction using result    Latency in clock cycles
FP ALU op                       Another FP ALU op           3
FP ALU op                       Store double                2
Load double                     FP ALU op                   1

Rewrite code to minimize stalls?
15
Revised FP Loop Minimizing Stalls
1  Loop: LD    F0, 0(R1)
2        SUBI  R1, R1, #8
3        ADDD  F4, F0, F2
4        stall
5        BNEZ  R1, Loop     ; delayed branch
6        SD    8(R1), F4    ; altered when moved past SUBI

Instruction producing result    Instruction using result    Latency in clock cycles
FP ALU op                       Another FP ALU op           3
FP ALU op                       Store double                2
Load double                     FP ALU op                   1
Can we unroll the loop to make it faster?
16
Loop Unrolling
A short loop body limits parallelism and induces significant overhead:
branches per instruction are high
Replicate the loop body several times and adjust the loop termination code
for (i = 0; i < 100; i = i + 4) {
    x[i]     = x[i]     + y[i];
    x[i + 1] = x[i + 1] + y[i + 1];
    x[i + 2] = x[i + 2] + y[i + 2];
    x[i + 3] = x[i + 3] + y[i + 3];
}
Improves scheduling since instructions from different iterations can be scheduled
together
This is done very early in the compilation process
All dependences have to be found beforehand
Need to use different registers for each iteration
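A hedged C sketch of the adjusted termination code when the trip count is not known to be a multiple of the unroll factor (function and parameter names are ours):

/* Unroll by 4; assumes x and y do not overlap. */
void add_arrays(double *x, const double *y, int n)
{
    int i;
    /* unrolled body: the four statements are independent,
       so they can be scheduled together */
    for (i = 0; i + 3 < n; i += 4) {
        x[i]     = x[i]     + y[i];
        x[i + 1] = x[i + 1] + y[i + 1];
        x[i + 2] = x[i + 2] + y[i + 2];
        x[i + 3] = x[i + 3] + y[i + 3];
    }
    /* cleanup loop: the adjusted loop termination code */
    for (; i < n; i++)
        x[i] = x[i] + y[i];
}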
17
Where are the control dependences?
1  Loop: LD    F0, 0(R1)
2        ADDD  F4, F0, F2
3        SD    0(R1), F4
4        SUBI  R1, R1, #8
5        BEQZ  R1, exit
6        LD    F0, 0(R1)
7        ADDD  F4, F0, F2
8        SD    0(R1), F4
9        SUBI  R1, R1, #8
10       BEQZ  R1, exit
11       LD    F0, 0(R1)
12       ADDD  F4, F0, F2
13       SD    0(R1), F4
14       SUBI  R1, R1, #8
15       BEQZ  R1, exit
....
18
Name Dependences
1  Loop: LD    F0, 0(R1)
2        ADDD  F4, F0, F2
3        SD    0(R1), F4      ; drop SUBI & BNEZ
4        LD    F0, -8(R1)
5        ADDD  F4, F0, F2
6        SD    -8(R1), F4     ; drop SUBI & BNEZ
7        LD    F0, -16(R1)
8        ADDD  F4, F0, F2
9        SD    -16(R1), F4    ; drop SUBI & BNEZ
10       LD    F0, -24(R1)
11       ADDD  F4, F0, F2
12       SD    -24(R1), F4
13       SUBI  R1, R1, #32    ; alter to 4*8
14       BNEZ  R1, LOOP
15       NOP
19
Name Dependences
Register renaming:

1  Loop: LD    F0, 0(R1)
2        ADDD  F4, F0, F2
3        SD    0(R1), F4       ; drop SUBI & BNEZ
4        LD    F6, -8(R1)      ; F0 becomes F6
5        ADDD  F8, F6, F2      ; F4 becomes F8
6        SD    -8(R1), F8      ; drop SUBI & BNEZ
7        LD    F10, -16(R1)    ; F0 becomes F10
8        ADDD  F12, F10, F2    ; F4 becomes F12
9        SD    -16(R1), F12    ; drop SUBI & BNEZ
10       LD    F14, -24(R1)    ; F0 becomes F14
11       ADDD  F16, F14, F2    ; F4 becomes F16
12       SD    -24(R1), F16
13       SUBI  R1, R1, #32     ; alter to 4*8
14       BNEZ  R1, LOOP
15       NOP
20
Reschedule code to minimize stalls
1  Loop: LD    F0, 0(R1)
2        ADDD  F4, F0, F2
3        SD    0(R1), F4       ; drop SUBI & BNEZ
4        LD    F6, -8(R1)
5        ADDD  F8, F6, F2
6        SD    -8(R1), F8      ; drop SUBI & BNEZ
7        LD    F10, -16(R1)
8        ADDD  F12, F10, F2
9        SD    -16(R1), F12    ; drop SUBI & BNEZ
10       LD    F14, -24(R1)
11       ADDD  F16, F14, F2
12       SD    -24(R1), F16
13       SUBI  R1, R1, #32     ; alter to 4*8
14       BNEZ  R1, Loop
15       NOP

15 + 4 * (1+2) + 1 = 28 clock cycles to initiate, or 7 per iteration
Assumes R1 is a multiple of 4
Rewrite loop to minimize stalls?
21
Unrolled Loop That Minimizes Stalls
1  Loop: LD    F0, 0(R1)
2        LD    F6, -8(R1)
3        LD    F10, -16(R1)
4        LD    F14, -24(R1)
5        ADDD  F4, F0, F2
6        ADDD  F8, F6, F2
7        ADDD  F12, F10, F2
8        ADDD  F16, F14, F2
9        SD    0(R1), F4
10       SD    -8(R1), F8
11       SD    -16(R1), F12
12       SUBI  R1, R1, #32
13       BNEZ  R1, LOOP
14       SD    8(R1), F16    ; 8 - 32 = -24

14 + 1 = 15 clock cycles, or 3.75 per iteration

• What assumptions were made when we moved code?
– OK to move store past SUBI even though SUBI changes the register
– OK to move loads before stores: do we get the right data?
– When is it safe for the compiler to do such changes?
Can we eliminate the remaining stall?
22
Compiler Loop Unrolling
Most important: Code Correctness
Unrolling produces larger code that might interfere with cache:
– Code sequence no longer fits in L1 cache
– Cache to memory bandwidth might not be wide enough
Compiler must understand hardware:
– Enough registers must be available OR
– Compiler must rely on hardware register renaming
Compiler must understand the code:
– Determine that loop iterations are independent
– Eliminate branch instructions while preserving correctness
– Determine that the LD and SD are independent over the loop (see the sketch below)
– Reschedule instructions and adjust the offsets
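A hedged C note on that LD/SD independence point (function name and signature are ours): if the source and destination arrays may alias, loads cannot be moved past stores and the unrolled iterations are not independent.

/* If x and y could overlap (say y == x + 1), the store to x[i]
   feeds the load of y[i+1] in the next iteration, and the
   reordering used above is illegal. C99 'restrict' is one way the
   programmer can assert independence to the compiler. */
void add_scalar(double *restrict x, const double *restrict y, int n)
{
    for (int i = 0; i < n; i++)
        x[i] = x[i] + y[i];   /* now safely unrollable/reschedulable */
}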
23
Multiple Issue Machines
Superscalar: multiple parallel dedicated pipelines:
– Varying number of instructions per cycle, scheduled by compiler
and/or by hardware (Tomasulo)
– IBM PowerPC, Sun UltraSparc, DEC Alpha, IA32 Pentium
VLIW (Very Long Instruction Word): multiple operations encoded in
instruction:
– Instructions have wide template (4-16 operations)
– IA-64 Itanium
24
Getting CPI < 1: Issuing Multiple Instructions/Cycle
Superscalar DLX: 2 instructions, 1 FP & 1 anything else
– Fetch 64-bits/clock cycle; integer on left, FP on right
– Can only issue 2nd instruction if 1st instruction issues
– More ports for FP registers to do FP load & FP op in a pair
Type              Pipe stages
Int. instruction  IF ID EX MEM WB
FP instruction    IF ID EX MEM WB
Int. instruction     IF ID EX MEM WB
FP instruction       IF ID EX MEM WB
Int. instruction        IF ID EX MEM WB
FP instruction          IF ID EX MEM WB
• 1 cycle load delay expands to 3 instructions in superscalar DLX
– Instruction in right half can’t use it, nor instructions in next slot
25
Superscalar Example
Superscalar:
– Our system can issue one floating point and one other (non-floating point)
instruction per cycle.
– Instructions are dynamically scheduled from the window
– Unroll the loop 5 times and reschedule to minimize cycles per iteration.
(WHY?)
While Integer/FP split is simple for the HW, get CPI of 0.5 only for
programs with:
– Exactly 50% FP operations
– No hazards
If more instructions issued at same time, greater difficulty in decode and
issue
– Even 2-way superscalar ⇒ examine 2 opcodes, 6 register specifiers, &
decide if 1 or 2 instructions can issue
26
Loop Unrolling in Superscalar
        Integer instruction     FP instruction         Clock cycle
Loop:   LD   F0, 0(R1)                                 1
        LD   F6, -8(R1)                                2
        LD   F10, -16(R1)       ADDD F4, F0, F2        3
        LD   F14, -24(R1)       ADDD F8, F6, F2        4
        LD   F18, -32(R1)       ADDD F12, F10, F2      5
        SD   0(R1), F4          ADDD F16, F14, F2      6
        SD   -8(R1), F8         ADDD F20, F18, F2      7
        SD   -16(R1), F12                              8
        SUBI R1, R1, #40                               9
        SD   16(R1), F16                               10
        BNEZ R1, Loop                                  11
        SD   8(R1), F20                                12

Unrolled 5 times to avoid delays (+ 1 due to SS)
12 clocks to initiate, or 2.4 clocks per iteration
27
VLIW Example
VLIW:
– 5 instructions in one very long instruction word.
• 2 FP, 2 Memory, 1 branch/integer
– Compiler avoids hazards
– Not all slots are always full
VLIW: tradeoff instruction space for simple decoding
– The long instruction word has room for many operations
– By definition, all the operations the compiler puts in the long instruction
word are independent ⇒ execute in parallel
– E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch ⇒ 16 to 24
bits per field ⇒ 7*16 or 112 bits to 7*24 or 168 bits wide
– Need compiling technique that schedules across several branches
28
Loop Unrolling in VLIW
Memory           Memory           FP                 FP                 Int. op/          Clock
reference 1      reference 2      operation 1        op. 2              branch
LD F0, 0(R1)     LD F6, -8(R1)                                                            1
LD F10, -16(R1)  LD F14, -24(R1)                                                          2
LD F18, -32(R1)  LD F22, -40(R1)  ADDD F4, F0, F2    ADDD F8, F6, F2                      3
LD F26, -48(R1)                   ADDD F12, F10, F2  ADDD F16, F14, F2                    4
                                  ADDD F20, F18, F2  ADDD F24, F22, F2                    5
SD 0(R1), F4     SD -8(R1), F8    ADDD F28, F26, F2                                       6
SD -16(R1), F12  SD -24(R1), F16                                        SUBI R1, R1, #48  7
SD -32(R1), F20  SD -40(R1), F24                                        BNEZ R1, LOOP     8
SD 0(R1), F28                                                                             9

Unrolled 7 times to avoid delays
9 clocks to initiate, or 1.3 clocks per iteration
Average: 2.5 ops per clock, 50% efficiency
Note: Need more registers in VLIW (15 vs. 6 in SS)
29
Limits to Multi-Issue Machines
Inherent limitations of instruction-level parallelism
– 1 branch in 5: How to keep a 5-way VLIW busy?
– Latencies of units: many operations must be scheduled
– Easy: More instruction bandwidth
– Easy: Duplicate functional units to get parallel execution
– Hard: Increase ports to register file (bandwidth)
• VLIW example needs 7 reads and 3 writes for integer registers
& 5 reads and 3 writes for FP registers
– Harder: Increase ports to memory (bandwidth)
– Pipelines in lockstep:
• One pipeline stall, stalls all others to avoid hazards
30
Limits to Multi-Issue Machines
Limitations specific to either superscalar or VLIW implementation
– Decode and issue in superscalar: how wide is practical?
– VLIW code size: unrolled loops + wasted fields in VLIW
• IA-64 compresses dependent instructions, but code is still larger
– VLIW lock step ⇒ 1 hazard & all instructions stall
• IA-64 not lock step? Dynamic pipeline?
– VLIW & binary compatibility: IA-64 promises binary
compatibility
31
Dependences
Two instructions are parallel if they can execute simultaneously in a pipeline
without causing any stalls (assuming no structural hazards) and can be
reordered (depending on code semantics)
Two instructions that are dependent are not parallel and cannot be reordered
Types of dependences
– Data dependences
– Name dependences
– Control dependences
Dependences are properties of programs
Hazards are properties of the pipeline organization
Dependence indicates the potential for a hazard
32
Compiler Perspectives on Code Movement
Hard for memory accesses
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Our example required the compiler to know that if R1 doesn’t change, then:
0(R1) ≠ -8(R1) ≠ -16(R1) ≠ -24(R1)
There were no dependences between some loads and stores, so they could
be moved past each other
33
Detecting Loop Level Dependences
for (i=1; i<=100; i=i+1) {
    A[i] = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];    /* S2 */
}
Loop-carried dependence:
• S1 relies on the S2 of the previous iteration
Within a single iteration there is no dependence between S1 and S2, so consider:
A[1] = A[1] + B[1];
for (i=1; i<=99; i=i+1) {
    B[i+1] = C[i] + D[i];
    A[i+1] = A[i+1] + B[i+1];
}
B[101] = C[100] + D[100];
34
Dependence Distance
for (i=6; i<=100; i=i+1)
    Y[i] = Y[i-5] + Y[i];
• Loop-carried dependence in the form of a recurrence on Y
• Dependence distance of 5
• A higher dependence distance allows for more ILP
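A hedged sketch of exploiting that distance, extending the slide's loop (the bounds give 95 iterations, a multiple of 5, so no cleanup loop is needed):

/* Iterations i..i+4 write Y[i..i+4] but read only Y[i-5..i-1],
   so any 5 consecutive iterations are mutually independent and
   can be unrolled or executed in parallel. */
for (i = 6; i <= 96; i += 5) {
    Y[i]   = Y[i-5] + Y[i];
    Y[i+1] = Y[i-4] + Y[i+1];
    Y[i+2] = Y[i-3] + Y[i+2];
    Y[i+3] = Y[i-2] + Y[i+3];
    Y[i+4] = Y[i-1] + Y[i+4];
}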
35
Greatest Common Divisor test
Affine array indices:
– All array indices depend DIRECTLY on the loop variable i
Assume the code properties:
– the for loop runs from n to m with index i
– the loop has an access pattern: X[a*i + b] = X[c*i + d] …
– two values of i, j and k, both between n and m
– a store indexed by j and a later load indexed by k, with:
  a*j + b = c*k + d
A loop-carried dependence can exist only if GCD(c,a) divides (d-b)

for (i=1; i<=100; i=i+1)
    X[2*i+3] = X[2*i] * 5.0;

a=2, b=3, c=2, d=0: GCD(a,c) = 2 and d-b = -3
There is no loop-carried dependence, since 2 does not divide -3
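A small C implementation of the test as stated (function names are ours; note the test is conservative: divisibility is necessary for a dependence, not sufficient):

#include <stdio.h>

static int gcd(int x, int y)
{
    if (x < 0) x = -x;
    if (y < 0) y = -y;
    while (y != 0) { int t = x % y; x = y; y = t; }
    return x;
}

/* Returns 1 if X[a*i+b] = X[c*i+d] MAY carry a dependence. */
static int gcd_test_may_depend(int a, int b, int c, int d)
{
    int g = gcd(a, c);
    if (g == 0) return (d - b) == 0;   /* both indices constant */
    return (d - b) % g == 0;
}

int main(void)
{
    /* X[2*i+3] = X[2*i] * 5.0  ->  a=2, b=3, c=2, d=0 */
    printf("%s\n", gcd_test_may_depend(2, 3, 2, 0)
                   ? "dependence possible" : "no dependence");
    return 0;
}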
36
Problem Cases
Reference by pointers instead of array indices
– partly eliminated by strict type checking
Sparse arrays with indexing through other arrays (similar to pointers)
When a dependence exists for values of the indices, but those values are
never reached
The loop-carried dependence has a distance far greater than what loop
unrolling would cover