Transcript Lecture 14

CS 152: Computer Architecture
and Engineering
Lecture 14
Advanced Pipelining/Compiler Scheduling
Randy H. Katz, Instructor
Satrajit Chatterjee, Teaching Assistant
George Porter, Teaching Assistant
CS 152
Lec 14.1
Review: Pipelining
• Key to pipelining: smooth flow
– Making all instructions the same length can increase
performance!
• Hazards limit performance
– Structural: need more HW resources
– Data: need forwarding, compiler scheduling
– Control: early evaluation & PC, delayed branch, prediction
• Data hazards must be handled carefully:
– RAW (Read-After-Write) data hazards handled by forwarding
– WAW (Write-After-Write) and WAR (Write-After-Read)
hazards don’t exist in 5-stage pipeline
• MIPS I instruction set architecture made pipeline
visible (delayed branch, delayed load)
– Change in programmer semantics to make hardware simpler
CS 152
Lec 14.2
Recap: Data Hazards
(Figure: pipeline timing diagrams illustrating a structural hazard, a control hazard on a jump, a RAW (read-after-write) data hazard, a WAW (write-after-write) data hazard, and a WAR (write-after-read) data hazard.)
CS 152
Lec 14.3
Recap: Data Stationary Control
• Main Control generates the control signals during Reg/Dec
– Control signals for Exec (ExtOp, ALUSrc, ...) are used 1 cycle later
– Control signals for Mem (MemWr, Branch) are used 2 cycles later
– Control signals for Wr (MemtoReg, RegWr) are used 3 cycles later
(A minimal C sketch of this data-stationary control word appears after the figure below.)
(Figure: Main Control in the Reg/Dec stage produces ExtOp, ALUSrc, ALUOp, RegDst, MemWr, Branch, MemtoReg, and RegWr; the not-yet-needed signals are carried along in the ID/Ex, Ex/Mem, and Mem/Wr pipeline registers until their stage consumes them.)
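The idea can be sketched in C. This is a minimal illustration with assumed struct and function names, not the lecture's datapath: the control word is generated once in Reg/Dec and then simply shifted down the pipeline registers, each stage reading only the fields it needs.

typedef struct {
    int ExtOp, ALUSrc, ALUOp, RegDst;   /* Exec-stage signals (used 1 cycle later)  */
    int MemWr, Branch;                  /* Mem-stage signals  (used 2 cycles later) */
    int MemtoReg, RegWr;                /* Wr-stage signals   (used 3 cycles later) */
} CtrlWord;

typedef struct {
    CtrlWord id_ex, ex_mem, mem_wr;     /* ID/Ex, Ex/Mem, Mem/Wr pipeline registers */
} CtrlPipe;

/* One clock edge: every control word advances one stage, and Main Control
 * supplies the word just decoded in Reg/Dec. */
void ctrl_pipe_tick(CtrlPipe *p, CtrlWord decoded)
{
    p->mem_wr = p->ex_mem;   /* Wr now uses MemtoReg/RegWr decoded 3 cycles ago */
    p->ex_mem = p->id_ex;    /* Mem now uses MemWr/Branch decoded 2 cycles ago  */
    p->id_ex  = decoded;     /* Exec will use ExtOp/ALUSrc/ALUOp/RegDst next    */
}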
CS 152
Lec 14.4
Review: Resolve RAW by “Forwarding” (or Bypassing)
(Figure: pipelined datapath with forwarding: the Forward mux in front of the ALU operand latches A and B can select the values held in the later pipeline registers (ALU result S, memory result m) instead of the register-file outputs.)
• Detect the nearest valid write of the operand register and forward it into the operand latches, bypassing the remainder of the pipe (a sketch of this selection follows below)
• Increase the muxes to add paths from the pipeline registers
• Data Forwarding = Data Bypassing
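A minimal sketch of the "nearest valid write" selection for one ALU operand; the struct and field names are assumptions for illustration, not the lecture's hardware.

/* Pick the nearest in-flight write of source register rs; fall back to the
 * register-file value when no later pipeline register is about to write rs. */
typedef struct {
    int reg_write;   /* will this instruction write a register?       */
    int rw;          /* its destination register number               */
    int value;       /* the result already sitting in this pipe stage */
} PipeStage;

int forward_operand(int rs, int regfile_value,
                    const PipeStage *ex_mem, const PipeStage *mem_wb)
{
    /* Nearest producer wins: EX/MEM is closer than MEM/WB. */
    if (ex_mem->reg_write && ex_mem->rw != 0 && ex_mem->rw == rs)
        return ex_mem->value;
    if (mem_wb->reg_write && mem_wb->rw != 0 && mem_wb->rw == rs)
        return mem_wb->value;
    return regfile_value;    /* no hazard: use the normal register read */
}

The rw != 0 check reflects that register 0 is hardwired to zero in MIPS and never needs forwarding.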
CS 152
Lec 14.5
Question: Critical Path???
• The bypass path is invariably trouble
• Options?
– Make the logic really fast
– Move forwarding after the muxes
» Problem: screws up branches that require forwarding!
» Use the same tricks as a "carry-skip" adder to fix this?
» This option may just push the delay around…!
– Insert an extra cycle for branches that need forwarding?
» Or: hit the common case of forwarding from the EX stage and stall for a forward from memory?
(Figure: datapath showing the Forward mux feeding both the ALU and the Equal comparator used for branch resolution.)
CS 152
Lec 14.6
What About Interrupts, Traps, Faults?
• External Interrupts:
– Allow pipeline to drain, Fill with NOPs
– Load PC with interrupt address
• Faults (within instruction, restartable)
– Force trap instruction into IF
– Disable writes till trap hits WB
– Must save multiple PCs or PC + state
• Recall: Precise Exceptions ⇒ state of the machine is preserved as if the program executed up to the offending instruction
– All previous instructions completed
– Offending instruction and all following instructions act as if
they have not even started
– Same system code will work on different implementations
CS 152
Lec 14.7
Exception/Interrupts: Implementation questions
5 instructions, executing in 5 different pipeline stages!
• Who caused the interrupt?
Stage   Problem interrupts occurring
IF      Page fault on instruction fetch; misaligned memory access; memory-protection violation
ID      Undefined or illegal opcode
EX      Arithmetic exception
MEM     Page fault on data fetch; misaligned memory access; memory-protection violation; memory error
• How do we stop the pipeline? How do we restart it?
• Do we interrupt immediately or wait?
• How do we sort all of this out to maintain preciseness?
CS 152
Lec 14.8
Exception Handling
(Figure: pipelined datapath (example instruction lw $2,20($5)) annotated with exception-detection points: bad instruction address detected at fetch, bad/illegal instruction at decode, overflow at the ALU, bad data address at the memory stage; the exception is allowed to take effect only at the end of the pipe.)
CS 152
Lec 14.9
Another Look at the Exception Problem
(Figure: four overlapped instructions in time; one takes an instruction TLB fault in IFetch, one decodes a bad instruction in Dcd, one overflows in Exec, and one takes a data TLB fault in Mem.)
• Use pipeline to sort this out!
– Pass exception status along with the instruction.
– Keep track of PCs for every instruction in the pipeline.
– Don't act on an exception until it reaches the WB stage
• Handle interrupts through a "faulting noop" in the IF stage
• When the instruction reaches the end of the MEM stage:
– Save PC → EPC, Interrupt vector addr → PC
– Turn all instructions in earlier stages into noops!
(A small C sketch of this bookkeeping follows.)
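A minimal sketch of that bookkeeping, with assumed names (the enum, struct, and the 0x80000080 vector address are illustrative, not course code): every pipeline slot carries its PC and exception status, and nothing is acted on until the oldest instruction leaves MEM.

enum Excp { EXC_NONE, EXC_IFETCH_FAULT, EXC_ILLEGAL, EXC_OVERFLOW, EXC_DFETCH_FAULT };

typedef struct {
    unsigned pc;       /* PC of this instruction, kept for a precise restart */
    enum Excp excp;    /* first exception this instruction raised, if any    */
    int valid;         /* cleared to turn the slot into a bubble (noop)      */
} PipeSlot;

#define INTERRUPT_VECTOR 0x80000080u   /* assumed handler address */

/* Called when the oldest instruction (pipe[4], the MEM/WB slot) is about to
 * write back: either it commits normally, or the trap is taken and all
 * younger instructions are squashed. */
void commit_or_trap(PipeSlot pipe[5], unsigned *pc, unsigned *epc)
{
    PipeSlot *oldest = &pipe[4];
    if (oldest->valid && oldest->excp != EXC_NONE) {
        *epc = oldest->pc;               /* Save PC -> EPC              */
        *pc  = INTERRUPT_VECTOR;         /* interrupt vector -> PC      */
        for (int i = 0; i < 4; i++)      /* earlier stages become noops */
            pipe[i].valid = 0;
    }
}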
CS 152
Lec 14.10
Resolution: Freeze Above & Bubble Below
(Figure: the same datapath with "freeze" applied to the PC and instruction fetch above the fault point and "bubble" injected into the stages below it.)
• Flush accomplished by setting the "invalid" bit in the pipeline
CS 152
Lec 14.11
FYI: MIPS R3000 clocking discipline
• 2-phase non-overlapping clocks (phi1, phi2)
• A pipeline stage is two (level-sensitive) latches, one clocked by phi1 and one by phi2, which together act like an edge-triggered register
(Figure: phi1/phi2 waveforms and the latch pair that forms one edge-triggered pipeline stage.)
CS 152
Lec 14.12
MIPS R3000 Instruction Pipeline
(Figure: R3000 pipeline stages: Inst Fetch (TLB, I-Cache), RF (Decode, Reg. Read), ALU / E.A. (Operation, E.A. TLB), Memory (D-Cache), Write Reg (WB), with a resource-usage chart showing when the TLB, I-cache, RF, ALU, D-cache, and register write port are busy.)
Write in phase 1, read in phase 2 => eliminates bypass from WB
CS 152
Lec 14.13
Recall: Data Hazard on r1
(Figure: pipeline diagram, time in clock cycles across and instruction order down: add r1,r2,r3 followed by sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11, each going Im, Reg, ALU, Dm, Reg. Every later instruction reads r1 while the add is still in flight.)
With MIPS R3000 pipeline, no need to forward from WB stage
CS 152
Lec 14.14
MIPS R3000 Multicycle Operations
• Use the control word of the local stage to step through a multicycle operation (e.g., mul Rd,Ra,Rb)
– Stall all stages above the multicycle operation in the pipeline
– Drain (bubble) the stages below it
• Alternatively, launch the multiply/divide to an autonomous unit and only stall the pipe on an attempt to get the result before it is ready
– This means stalling mflo/mfhi in the decode stage if a multiply/divide is still executing (see the sketch below)
– The extra credit in Lab 5 does this
Ex: Multiply, Divide, Cache Miss
(Figure: multicycle unit with operand registers A and B and result registers R and T feeding back to the register file.)
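A minimal C sketch of that decode-stage stall condition, using the standard MIPS-I encodings for mfhi/mflo; this is an illustration, not Lab 5 code.

/* mfhi and mflo are SPECIAL-opcode instructions (opcode 0) with funct codes
 * 0x10 and 0x12; they are the instructions that read HI/LO in decode. */
static int reads_hilo(unsigned instr)
{
    unsigned opcode = instr >> 26;
    unsigned funct  = instr & 0x3f;
    return opcode == 0x00 && (funct == 0x10 /* mfhi */ || funct == 0x12 /* mflo */);
}

/* Stall decode only while the in-flight multiply/divide has not finished. */
int decode_stall(unsigned id_instr, int multdiv_busy)
{
    return multdiv_busy && reads_hilo(id_instr);
}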
CS 152
Lec 14.15
Is CPI = 1 for our pipeline?
• Remember that CPI is an "average # cycles per instruction"
(Figure: four overlapped instructions, each going IFetch, Dcd, Exec, Mem, WB; one instruction completes every cycle.)
• CPI here is 1, since the average throughput is 1
instruction every cycle.
• What if there are stalls or multi-cycle execution?
• Usually CPI > 1. How close can we get to 1??
CS 152
Lec 14.16
Recall: Compute CPI?
• Start with Base CPI
• Add stalls
CPI = CPI_base + CPI_stall
CPI_stall = STALL_type1 × freq_type1 + STALL_type2 × freq_type2 + ...
• Suppose:
– CPI_base = 1
– freq_branch = 20%, freq_load = 30%
– Branches always cause a 1-cycle stall
– Loads cause a 100-cycle stall 1% of the time
• Then: CPI = 1 + (1 × 0.20) + (100 × 0.30 × 0.01) = 1.5
• Multicycle? Could treat as: CPI_stall = (CYCLES - CPI_base) × freq_inst
(The same arithmetic is worked out in the short program below.)
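A small self-contained C check of the numbers above (the stall counts and frequencies are the slide's assumed values, not measurements):

#include <stdio.h>

int main(void)
{
    double cpi_base   = 1.0;
    double cpi_branch = 1.0   * 0.20;         /* 1-cycle stall, 20% branches    */
    double cpi_load   = 100.0 * 0.30 * 0.01;  /* 100-cycle stall, 30% loads, 1% */
    printf("CPI = %.2f\n", cpi_base + cpi_branch + cpi_load);   /* prints CPI = 1.50 */
    return 0;
}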
CS 152
Lec 14.17
Administrivia
• Get moving on Lab 5!
– This lab is even harder than Lab 4.
– Much trickier to debug … !
– Start with unpipelined version?
» Interesting thought… May or may not help
• Dynamic scheduling techniques are discussed in the other Hennessy & Patterson book:
– "Computer Architecture: A Quantitative Approach"
– Chapter 3; a handout of the relevant section is provided for the HW that is due Friday
CS 152
Lec 14.18
Administrivia: Be Careful About Clock Edges in Lab5!
(Figure: Lab 5 pipelined datapath: PC/Next PC, Inst. Mem and IR, Reg File, Exec, Data Mem, the Dcd/Ex/Mem/WB control registers, and an "M" register after the memory access.)
• Since Register file has edge-triggered write:
– Must have everything set up at end of memory stage
– This means that “M” register here is not necessary!
CS 152
Lec 14.19
Case Study: MIPS R4000 (200 MHz)
• 8 Stage Pipeline:
– IF–first half of fetching of instruction; PC selection happens here
as well as initiation of instruction cache access.
– IS–second half of access to instruction cache.
– RF–instruction decode and register fetch, hazard checking and also
instruction cache hit detection.
– EX–execution, which includes effective address calculation, ALU
operation, and branch target computation and condition evaluation.
– DF–data fetch, first half of access to data cache.
– DS–second half of access to data cache.
– TC–tag check, determine whether the data cache access hit.
– WB–write back for loads and register-register operations.
• 8 stages: what is the impact on load delay? On branch delay? Why?
CS 152
Lec 14.20
Case Study: MIPS R4000
(Figure: overlapped IF IS RF EX DF DS TC WB diagrams for consecutive instructions, illustrating the TWO-cycle load latency and the THREE-cycle branch latency.)
• Load latency: two cycles
• Branch latency: three cycles (conditions evaluated during the EX phase)
– Delay slot plus two stalls
– Branch-likely cancels the delay slot if not taken
CS 152
Lec 14.21
MIPS R4000 Floating Point
• FP Adder, FP Multiplier, FP Divider
• Last step of FP Multiplier/Divider uses FP Adder HW
• 8 kinds of stages in FP units:
Stage  Functional unit  Description
A      FP adder         Mantissa ADD stage
D      FP divider       Divide pipeline stage
E      FP multiplier    Exception test stage
M      FP multiplier    First stage of multiplier
N      FP multiplier    Second stage of multiplier
R      FP adder         Rounding stage
S      FP adder         Operand shift stage
U                       Unpack FP numbers
CS 152
Lec 14.22
MIPS FP Pipe Stages
FP Instr         Stage sequence (cycle 1, 2, 3, ...)
Add, Subtract    U, S+A, A+R, R+S
Multiply         U, E+M, M, M, M, N, N+A, R
Divide           U, A, R, D^28, D+A, D+R, D+R, D+A, D+R, A, R   (D^28 = 28 cycles in the D stage)
Square root      U, E, (A+R)^108, A, R
Negate           U, S
Absolute value   U, S
FP compare       U, A, R

Stage legend: A = Mantissa ADD stage, D = Divide pipeline stage, E = Exception test stage, M = First stage of multiplier, N = Second stage of multiplier, R = Rounding stage, S = Operand shift stage, U = Unpack FP numbers
CS 152
Lec 14.23
R4000 Performance
Not ideal CPI of 1:
(Figure: stacked-bar chart of CPI, on a scale of 0 to 4.5, for eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, and tomcatv, broken into Base, Load stalls, Branch stalls, FP result stalls, and FP structural stalls.)
– Load stalls (1 or 2 clock cycles)
– Branch stalls (2 cycles + unfilled slots)
– FP result stalls: RAW data hazard (latency)
– FP structural stalls: not enough FP hardware (parallelism)
CS 152
Lec 14.24
Can We Somehow Make CPI Closer to 1?
• Let’s assume full pipelining:
– If we have a 4-cycle instruction, then we need 3 instructions
between a producing instruction and its use:
multf $F0,$F2,$F4
delay-1
delay-2
delay-3
addf $F6,$F10,$F0
(Figure: pipeline diagram of multf followed by three delay instructions and addf; the earliest forwarding point for a 4-cycle instruction is after Ex4, versus after Ex1 for a 1-cycle instruction, so addf decodes just as multf's result becomes available.)
CS 152
Lec 14.25
FP Loop: Where are the Hazards?
Loop: LD    F0,0(R1)   ;F0=vector element
      ADDD  F4,F0,F2   ;add scalar from F2
      SD    0(R1),F4   ;store result
      SUBI  R1,R1,8    ;decrement pointer 8B (DW)
      BNEZ  R1,Loop    ;branch R1!=zero
      NOP              ;delayed branch slot

Instruction producing result   Instruction using result   Latency in clock cycles
FP ALU op                      Another FP ALU op           3
FP ALU op                      Store double                2
Load double                    FP ALU op                   1
Load double                    Store double                0
Integer op                     Integer op                  0
Where are the stalls?
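For reference, a hedged C rendering of this loop (the array name x, scalar s, and count n are assumptions for illustration): each iteration adds the scalar held in F2 to one double-precision element while R1 walks the array downward.

void add_scalar(double *x, double s, int n)
{
    for (int i = n - 1; i >= 0; i--)   /* R1 walks from the last element toward 0 */
        x[i] = x[i] + s;               /* LD, ADDD, SD of one element             */
}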
CS 152
Lec 14.26
FP Loop Showing Stalls
1 Loop: LD    F0,0(R1)   ;F0=vector element
2       stall
3       ADDD  F4,F0,F2   ;add scalar in F2
4       stall
5       stall
6       SD    0(R1),F4   ;store result
7       SUBI  R1,R1,8    ;decrement pointer 8B (DW)
8       BNEZ  R1,Loop    ;branch R1!=zero
9       stall            ;delayed branch slot

Instruction producing result   Instruction using result   Latency in clock cycles
FP ALU op                      Another FP ALU op           3
FP ALU op                      Store double                2
Load double                    FP ALU op                   1
• 9 clocks: Rewrite code to minimize stalls?
CS 152
Lec 14.27
Revised FP Loop Minimizing Stalls
1 Loop: LD    F0,0(R1)
2       stall
3       ADDD  F4,F0,F2
4       SUBI  R1,R1,8
5       BNEZ  R1,Loop    ;delayed branch
6       SD    8(R1),F4   ;altered when move past SUBI

Swap BNEZ and SD by changing the address of SD

Instruction producing result   Instruction using result   Latency in clock cycles
FP ALU op                      Another FP ALU op           3
FP ALU op                      Store double                2
Load double                    FP ALU op                   1

6 clocks: unroll the loop 4 times to make it faster?
CS 152
Lec 14.28
Unroll Loop Four Times (Straightforward Way)
1 Loop: LD    F0,0(R1)
2       ADDD  F4,F0,F2
3       SD    0(R1),F4      ;drop SUBI & BNEZ
4       LD    F6,-8(R1)
5       ADDD  F8,F6,F2
6       SD    -8(R1),F8     ;drop SUBI & BNEZ
7       LD    F10,-16(R1)
8       ADDD  F12,F10,F2
9       SD    -16(R1),F12   ;drop SUBI & BNEZ
10      LD    F14,-24(R1)
11      ADDD  F16,F14,F2
12      SD    -24(R1),F16
13      SUBI  R1,R1,#32     ;alter to 4*8
14      BNEZ  R1,LOOP
15      NOP

Each LD feeding an ADDD still costs 1 stall cycle and each ADDD feeding an SD costs 2 stall cycles:
15 + 4 x (1+2) = 27 clock cycles, or 6.8 per iteration
Assumes the number of loop iterations is a multiple of 4

Rewrite the loop to minimize stalls?
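In C terms (again with assumed names x, s, n), the unrolled-by-4 loop looks like this; the loop overhead (SUBI/BNEZ) is now paid once per four elements.

void add_scalar_unrolled4(double *x, double s, int n)   /* assumes n is a multiple of 4 */
{
    for (int i = n - 1; i >= 0; i -= 4) {
        x[i]     += s;
        x[i - 1] += s;
        x[i - 2] += s;
        x[i - 3] += s;
    }
}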
CS 152
Lec 14.29
Unrolled Loop That Minimizes Stalls
1 Loop: LD    F0,0(R1)
2       LD    F6,-8(R1)
3       LD    F10,-16(R1)
4       LD    F14,-24(R1)
5       ADDD  F4,F0,F2
6       ADDD  F8,F6,F2
7       ADDD  F12,F10,F2
8       ADDD  F16,F14,F2
9       SD    0(R1),F4
10      SD    -8(R1),F8
11      SD    -16(R1),F12
12      SUBI  R1,R1,#32
13      BNEZ  R1,LOOP
14      SD    8(R1),F16    ; 8-32 = -24

• What assumptions were made when we moved the code?
– OK to move the store past SUBI even though it changes the register
– OK to move loads before stores: do we get the right data?
– When is it safe for the compiler to do such changes?

14 clock cycles, or 3.5 per iteration
When is it safe to move instructions?
CS 152
Lec 14.30
Getting CPI < 1: Issuing Multiple Instructions/Cycle
• Two main variations: Superscalar and VLIW
• Superscalar: varying no. instructions/cycle (1 to 6)
– Parallelism and dependencies determined/resolved by HW
– IBM PowerPC 604, Sun UltraSparc, DEC Alpha 21164, HP 7100
• Very Long Instruction Words (VLIW): fixed number of instructions (16); parallelism determined by the compiler
– Pipeline is exposed; compiler must schedule delays to get the right result
• Explicitly Parallel Instruction Computing (EPIC) / Intel
– 128-bit packets containing 3 instructions (can execute sequentially)
– Can link 128-bit packets together to allow more parallelism
– Compiler determines parallelism, HW checks dependencies and forwards/stalls
CS 152
Lec 14.31
Getting CPI < 1: Issuing Multiple Instructions/Cycle
• Superscalar DLX: 2 instructions, 1 FP & 1 anything else
– Fetch 64-bits/clock cycle; Int on left, FP on right
– Can only issue 2nd instruction if 1st instruction issues
– More ports for FP registers to do FP load & FP op in a pair
Type              Pipe stages
Int. instruction  IF  ID  EX  MEM WB
FP instruction    IF  ID  EX  MEM WB
Int. instruction      IF  ID  EX  MEM WB
FP instruction        IF  ID  EX  MEM WB
Int. instruction          IF  ID  EX  MEM WB
FP instruction            IF  ID  EX  MEM WB
• 1 cycle load delay expands to 3 instructions in SS
– instruction in right half can’t use it, nor instructions in next slot
CS 152
Lec 14.32
Loop Unrolling in Superscalar
Integer instruction        FP instruction        Clock cycle
Loop: LD   F0,0(R1)                              1
      LD   F6,-8(R1)                             2
      LD   F10,-16(R1)     ADDD F4,F0,F2         3
      LD   F14,-24(R1)     ADDD F8,F6,F2         4
      LD   F18,-32(R1)     ADDD F12,F10,F2       5
      SD   0(R1),F4        ADDD F16,F14,F2       6
      SD   -8(R1),F8       ADDD F20,F18,F2       7
      SD   -16(R1),F12                           8
      SD   -24(R1),F16                           9
      SUBI R1,R1,#40                             10
      BNEZ R1,LOOP                               11
      SD   -32(R1),F20                           12
• Unrolled 5 times to avoid delays (+1 due to SS)
• 12 clocks, or 2.4 clocks per iteration
CS 152
Lec 14.33
Limits of Superscalar
• While Integer/FP split is simple for the HW, get CPI of
0.5 only for programs with:
– Exactly 50% FP operations
– No hazards
• If more instructions issue at same time, greater
difficulty of decode and issue
– Even 2-scalar => examine 2 opcodes, 6 register specifiers, &
decide if 1 or 2 instructions can issue
• VLIW: tradeoff instruction space for simple decoding
– The long instruction word has room for many operations
– By definition, all the operations the compiler puts in the long
instruction word can execute in parallel
– E.g., 2 integer operations, 2 FP ops, 2 Memory refs, 1 branch
» 16 to 24 bits per field => 7*16 or 112 bits to 7*24 or 168 bits wide
– Need compiling technique that schedules across several
branches
CS 152
Lec 14.34
Loop Unrolling in VLIW
Memory reference 1   Memory reference 2   FP operation 1     FP op. 2          Int. op/branch    Clock
LD F0,0(R1)          LD F6,-8(R1)                                                                1
LD F10,-16(R1)       LD F14,-24(R1)                                                              2
LD F18,-32(R1)       LD F22,-40(R1)       ADDD F4,F0,F2      ADDD F8,F6,F2                       3
LD F26,-48(R1)                            ADDD F12,F10,F2    ADDD F16,F14,F2                     4
                                          ADDD F20,F18,F2    ADDD F24,F22,F2                     5
SD 0(R1),F4          SD -8(R1),F8         ADDD F28,F26,F2                                        6
SD -16(R1),F12       SD -24(R1),F16                                                              7
SD -32(R1),F20       SD -40(R1),F24                                            SUBI R1,R1,#48    8
SD -0(R1),F28                                                                  BNEZ R1,LOOP      9

Unrolled 7 times to avoid delays
7 results in 9 clocks, or 1.3 clocks per iteration
Need more registers in VLIW (EPIC => 128 int + 128 FP)
CS 152
Lec 14.35
Software Pipelining
• Observation: if iterations from loops are independent,
then can get more ILP by taking instructions from
different iterations
• Software pipelining: reorganizes loops so that each iteration is made from instructions chosen from different iterations of the original loop (≈ Tomasulo in SW)
(Figure: iterations 0 through 4 overlapped in time; the software-pipelined iteration is assembled from one slice of each.)
CS 152
Lec 14.36
Software Pipelining Example
Before: Unrolled 3 times
 1 LD    F0,0(R1)
 2 ADDD  F4,F0,F2
 3 SD    0(R1),F4
 4 LD    F6,-8(R1)
 5 ADDD  F8,F6,F2
 6 SD    -8(R1),F8
 7 LD    F10,-16(R1)
 8 ADDD  F12,F10,F2
 9 SD    -16(R1),F12
10 SUBI  R1,R1,#24
11 BNEZ  R1,LOOP

After: Software Pipelined
 1 SD    0(R1),F4    ; Stores M[i]
 2 ADDD  F4,F0,F2    ; Adds to M[i-1]
 3 LD    F0,-16(R1)  ; Loads M[i-2]
 4 SUBI  R1,R1,#8
 5 BNEZ  R1,LOOP

• Symbolic Loop Unrolling
– Maximize result-use distance
– Less code space than unrolling
– Fill & drain pipe only once per loop vs. once per each unrolled iteration in loop unrolling
(Figure: ops overlapped in time for the SW-pipelined loop vs. the unrolled loop.)
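A hedged C analogue of the software-pipelined loop (names x, s, n are assumptions): the steady-state body stores the result for element i, adds for element i-1, and loads element i-2, so a short prologue fills the pipe and a short epilogue drains it, mirroring the fill/drain point above.

void add_scalar_swp(double *x, double s, int n)   /* assumes n >= 2 */
{
    double loaded = x[n - 1];          /* prologue: load for iteration 0        */
    double summed = loaded + s;        /* prologue: add for iteration 0         */
    loaded = x[n - 2];                 /* prologue: load for iteration 1        */

    for (int i = n - 1; i >= 2; i--) { /* steady state: one result per trip     */
        x[i] = summed;                 /* SD   : store result of iteration i    */
        summed = loaded + s;           /* ADDD : compute for iteration i-1      */
        loaded = x[i - 2];             /* LD   : fetch element for iteration i-2 */
    }
    x[1] = summed;                     /* epilogue: finish the last two results */
    x[0] = loaded + s;
}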
CS 152
Lec 14.37
Software Pipelining with Loop Unrolling in VLIW
Memory reference 1   Memory reference 2   FP operation 1    FP op. 2   Int. op/branch    Clock
LD F0,-48(R1)        ST 0(R1),F4          ADDD F4,F0,F2                                  1
LD F6,-56(R1)        ST -8(R1),F8         ADDD F8,F6,F2                SUBI R1,R1,#24    2
LD F10,-40(R1)       ST 8(R1),F12         ADDD F12,F10,F2              BNEZ R1,LOOP      3

• Software pipelined across 9 iterations of the original loop
– In each iteration of the above loop, we:
» Store to m, m-8, m-16          (iterations I-3, I-2, I-1)
» Compute for m-24, m-32, m-40   (iterations I, I+1, I+2)
» Load from m-48, m-56, m-64     (iterations I+3, I+4, I+5)
• 9 results in 9 cycles, or 1 clock per iteration
• Average: 3.3 ops per clock, 66% efficiency
Note: Need fewer registers for software pipelining
(only using 7 registers here, was using 15)
CS 152
Lec 14.38
Can we use HW to get CPI closer to 1?
• Why in HW at run time?
– Works when can’t know real dependence at compile time
– Compiler simpler
– Code for one machine runs well on another
• Key idea: Allow instructions behind stall to proceed:
DIVD  F0,F2,F4
ADDD  F10,F0,F8
SUBD  F12,F8,F14
• Out-of-order execution => out-of-order completion.
• Disadvantages?
– Complexity
– Precise interrupts harder! (Talk about this next time)
CS 152
Lec 14.39
Problems?
• How do we prevent WAR and WAW hazards?
• How do we deal with variable latency?
– Forwarding for RAW hazards harder.
Clock cycles 1-17 (stage sequences per instruction; each instruction enters IF one cycle after the one before it):
LD    F6,34(R2)    IF ID EX MEM WB
LD    F2,45(R3)    IF ID EX MEM WB
MULTD F0,F2,F4     IF ID stall M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 MEM WB    (RAW on F2 from the LD)
SUBD  F8,F6,F2     IF ID A1 A2 MEM WB
DIVD  F10,F0,F6    IF ID stall stall stall stall stall stall stall stall stall D1 D2 ...
ADDD  F6,F8,F2     IF ID A1 A2 MEM WB                                   (WAR on F6 with DIVD)
CS 152
Lec 14.40
Summary
• Precise interrupts are easy to implement in a 5-stage pipeline:
– Mark exception on pipeline state rather than acting on it
– Handle exceptions at one place in pipeline
(memory-stage/beginning of writeback)
• Loop unrolling ⇒ multiple iterations of the loop in software:
– Amortizes loop overhead over several iterations
– Gives more opportunity for scheduling around stalls
• Software pipelining ⇒ take one instruction from each of several iterations of the loop
– Software overlapping of loop iterations
• Very Long Instruction Word machines (VLIW) ⇒ multiple operations coded in a single, long instruction
– Requires compiler to decide which ops can be done in parallel
– Trace scheduling ⇒ find the common path and schedule code as if branches didn't exist (+ add "fixup code")
CS 152
Lec 14.41