CS252 Graduate Computer Architecture Lecture 11 Vector (finished)

CS252
Graduate Computer Architecture
Lecture 11
Vector (finished)
Branch Prediction
October 6th, 2003
Prof. John Kubiatowicz
http://www.cs.berkeley.edu/~kubitron/courses/cs252-F03
Review: Vector Processing
• Vector processors have high-level operations that
work on linear arrays of numbers: "vectors"
[Figure: SCALAR (1 operation): add r3, r1, r2 adds two scalar registers r1 and
r2 into r3. VECTOR (N operations): add.vv v3, v1, v2 adds corresponding elements
of vector registers v1 and v2 into v3, where N is given by the vector length.]
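As a rough C sketch (my illustration, not from the slides), the single add.vv
instruction above is equivalent to the element-wise loop below, with vlr
standing in for the vector-length register:

    #include <stddef.h>

    /* Semantics of add.vv v3, v1, v2: one instruction, N independent adds. */
    void vadd_vv(double *v3, const double *v1, const double *v2, size_t vlr) {
        for (size_t i = 0; i < vlr; i++)   /* hardware may spread these across lanes */
            v3[i] = v1[i] + v2[i];
    }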
Review: Vector Processing
• Vector Processing represents an alternative to
complicated superscalar processors.
• Primitive operations on large vectors of data
• Load/store architecture:
– Data loaded into vector registers; computation is register to register.
– Memory system can take advantage of predictable access patterns:
» Unit stride, Non-unit stride, indexed (see the access-pattern sketch after this list)
• Vector processors exploit large amounts of parallelism
without data and control hazards:
– Every element is handled independently and possibly in parallel
– Same effect as scalar loop without the control hazards or complexity
of Tomasulo-style hardware
• Hardware parallelism can be varied across a wide
range by changing number of vector lanes in each
vector functional unit.
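A minimal C sketch (my own illustration, not from the slides) of the three
memory access patterns named above: unit stride, non-unit stride, and indexed
(gather):

    #include <stddef.h>

    /* Unit stride: consecutive addresses. */
    void unit_stride(double *dst, const double *a, size_t n) {
        for (size_t i = 0; i < n; i++) dst[i] = a[i];
    }

    /* Non-unit stride: every stride-th element, e.g. a column of a row-major matrix. */
    void nonunit_stride(double *dst, const double *a, size_t n, size_t stride) {
        for (size_t i = 0; i < n; i++) dst[i] = a[i * stride];
    }

    /* Indexed (gather): addresses come from an index vector. */
    void indexed_gather(double *dst, const double *a, const size_t *idx, size_t n) {
        for (size_t i = 0; i < n; i++) dst[i] = a[idx[i]];
    }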
Review: Vector Terminology
[Figure: a vector unit with 4 lanes and 2 vector functional units.]
Designing a Vector Processor
• Changes to scalar
• How Pick Vector Length?
• How Pick Number of Vector Registers?
• Context switch overhead
• Exception handling
• Masking and Flag Instructions
Changes to scalar processor to
run vector instructions
• Decode vector instructions
• Send scalar registers to vector unit
(vector-scalar ops)
• Synchronization for results back from
vector register, including exceptions
• Things that don’t run in vector don’t have
high ILP, so can make scalar CPU simple
How Pick Vector Length?
• Longer good because:
1) Hide vector startup
2) lower instruction bandwidth
3) tiled access to memory reduces scalar processor
memory bandwidth needs
4) if the max length of the app. is known to be < max vector length, no
strip-mining overhead (see the strip-mining sketch below)
5) Better spatial locality for memory access
• Longer not much help because:
1) diminishing returns on overhead savings as we keep
doubling the number of elements
2) need natural app. vector length to match physical
register length, or no help (lots of short vectors in
modern codes!)
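For reference, a hedged C sketch of strip mining (my illustration; MVL is an
assumed maximum vector length): a loop of arbitrary length n is processed in
chunks of at most MVL elements, which is exactly the overhead that item 4 above
avoids when the application length is known to fit in one vector register:

    #include <stddef.h>

    #define MVL 64   /* assumed maximum vector length of the machine */

    void daxpy_strip_mined(double *y, const double *x, double a, size_t n) {
        for (size_t start = 0; start < n; start += MVL) {
            size_t vl = (n - start < MVL) ? (n - start) : MVL;  /* set vector-length register */
            for (size_t i = 0; i < vl; i++)                     /* one vector instruction's worth of work */
                y[start + i] += a * x[start + i];
        }
    }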
How Pick Number of
Vector Registers?
• More Vector Registers:
1) Reduces vector register “spills” (save/restore)
» 20% reduction to 16 registers for su2cor and tomcatv
» 40% reduction to 32 registers for tomcatv
» others 10%-15%
2) Aggressive scheduling of vector instructions: better
compiling to take advantage of ILP
• Fewer:
1) Fewer bits in instruction format (usually 3 fields)
2) Easier implementation
Context switch overhead:
Huge amounts of state!
• Extra dirty bit per processor
– If vector registers not written, don’t need to save on
context switch
• Extra valid bit per vector register, cleared
on process start
– Don’t need to restore on context switch until needed
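A rough C sketch (assumed structure and hypothetical helper names, not a real
OS interface) of how the dirty and valid bits cut context-switch cost: save only
if something was written, and restore a register lazily on first use:

    #include <stdbool.h>

    #define NUM_VREGS 32

    struct vstate {
        bool dirty;              /* set when any vector register is written */
        bool valid[NUM_VREGS];   /* cleared on process start; set once restored or written */
        /* ... vector register contents ... */
    };

    void context_switch_out(struct vstate *s) {
        if (s->dirty) { /* save_vector_registers(s); (hypothetical) */ s->dirty = false; }
    }

    void on_first_vector_use(struct vstate *s, int vreg) {
        if (!s->valid[vreg]) { /* restore_vector_register(s, vreg); (hypothetical) */ s->valid[vreg] = true; }
    }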
Exception handling:
External Interrupts?
• If external exception, can just put a pseudo-op into the
pipeline and wait for all vector ops to complete
– Alternatively, can wait for scalar unit to complete and
begin working on exception code assuming that vector
unit will not cause exception and interrupt code does
not use vector unit
Exception handling:
Arithmetic Exceptions
• Arithmetic traps harder
• Precise interrupts => large performance loss!
• Alternative model: arithmetic exceptions set
vector flag registers, 1 flag bit per element
• Software inserts trap-barrier instructions
to check the flag bits as needed
• IEEE Floating Point requires 5 flag bits
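A hedged sketch in C (my illustration) of the software side: a trap-barrier
check reads the per-element flag bits after the vector operations and traps only
if some element actually faulted:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One flag register: 1 bit per element, set by hardware on an arithmetic exception. */
    void trap_barrier(uint64_t flag_reg) {
        if (flag_reg != 0) {                       /* at least one element raised an exception */
            int elem = __builtin_ctzll(flag_reg);  /* first faulting element (GCC/Clang builtin) */
            fprintf(stderr, "vector arithmetic exception at element %d\n", elem);
            abort();                               /* or invoke a software fix-up handler */
        }
    }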
Exception handling: Page Faults
• Page Faults must be precise
• Instruction Page Faults not a problem
– Could just wait for active instructions to drain
– Also, scalar core runs page-fault code anyway
• Data Page Faults harder
• Option 1: Save/restore internal vector unit
state
– Freeze pipeline, dump vector state
– perform needed ops
– Restore state and continue vector pipeline
Exception handling: Page Faults
• Option 2: expand memory pipeline to check addresses
before sending to memory + memory buffer between
address check and registers
– multiple queues to transfer from memory buffer to registers; check
last address in queues before loading 1st element from buffer.
– Per Address Instruction Queue (PAIQ), which sends to TLB and
memory while in parallel going to Address Check Instruction Queue
(ACIQ)
– When it passes checks, instruction goes to Committed Instruction
Queue (CIQ) to be there when data returns.
– On page fault, only save instructions in PAIQ and ACIQ
Masking and Flag Instructions
• Flags have multiple uses (conditionals, arithmetic
exceptions)
• Alternative is conditional move/merge
• Clear that fully masked is much more efficient than
conditional moves (see the sketch after this list)
– No extra instructions performed, and exceptions avoided
• Downside is:
1) extra bits in instruction to specify the flag register
2) extra interlock early in the pipeline for RAW
hazards on Flag registers
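A C sketch of that comparison (my own illustration): with a mask, unflagged
elements are simply skipped, so no extra work is done and no spurious exception
can be raised; with a conditional move, every element is computed and the
result merged afterwards:

    #include <stddef.h>
    #include <stdbool.h>

    /* Fully masked: only flagged elements execute; a[i] == 0 never reaches the divide. */
    void masked_div(double *c, const double *a, const double *b, const bool *flag, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (flag[i]) c[i] = b[i] / a[i];
    }

    /* Conditional move/merge: compute everywhere, then select. Extra work, possible exceptions. */
    void cmove_div(double *c, const double *a, const double *b, const bool *flag, size_t n) {
        for (size_t i = 0; i < n; i++) {
            double t = b[i] / a[i];     /* executed even where flag[i] is false */
            c[i] = flag[i] ? t : c[i];
        }
    }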
Flag Instruction Ops
• Do in scalar processor vs. in vector unit with vector ops?
• Disadvantages to using scalar processor to do flag
calculations (as in Cray):
1) if MVL > word size => multiple instructions;
also limits MVL in future
2) scalar exposes memory latency
3) vector produces flag bits 1/clock, but scalar consumes
64 per clock, so cannot chain together
• Proposal: separate Vector Flag Functional Units and
instructions in VU
Alternate use of Vectors –
Virtual Processor Vector Model:
Treat like SIMD multiprocessor
• Vector operations are SIMD
(single instruction multiple data) operations
– Each virtual processor has as many scalar “registers” as there are
vector registers
– There are as many virtual processors as current vector length.
– Each element is computed by a virtual processor (VP)
• This model can increase the domain of usefulness
Vector Architectural State
[Figure: vector architectural state, viewed as $vlr virtual processors
VP0 … VP$vlr-1, each with General Purpose Registers vr0 … vr31 ($vdw bits each)
and 32 Flag Registers vf0 … vf31 (1 bit each); plus Control Registers
vcr0 … vcr31 (32 bits each).]
“Vector” for Multimedia?
• Intel MMX: 57 additional 80x86 instructions (1st
since 386)
– similar to Intel 860, Mot. 88110, HP PA-7100LC, UltraSPARC
• 3 data types: 8 8-bit, 4 16-bit, 2 32-bit in 64 bits
– reuse 8 FP registers (FP and MMX cannot mix)
• short vector: load, add, store 8 8-bit operands
• Claim: overall speedup 1.5 to 2X for 2D/3D
graphics, audio, video, speech, comm., ...
– use in drivers or added to library routines; no compiler
MMX Instructions
• Move 32b, 64b
• Add, Subtract in parallel: 8 8b, 4 16b, 2 32b
– opt. signed/unsigned saturate (set to max) if overflow
• Shifts (sll,srl, sra), And, And Not, Or, Xor
in parallel: 8 8b, 4 16b, 2 32b
• Multiply, Multiply-Add in parallel: 4 16b
• Compare = , > in parallel: 8 8b, 4 16b, 2 32b
– sets field to 0s (false) or 1s (true); removes branches
• Pack/Unpack
– Convert 32b <-> 16b, 16b <-> 8b
– Pack saturates (set to max) if number is too large
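For concreteness, a C sketch (my own rendering, not Intel's specification) of
what a parallel unsigned 8x8-bit add with saturation does within one 64-bit
MMX register:

    #include <stdint.h>

    /* Add eight unsigned bytes packed into 64-bit words; saturate each lane at 255. */
    uint64_t padd_usb(uint64_t a, uint64_t b) {
        uint64_t result = 0;
        for (int lane = 0; lane < 8; lane++) {
            unsigned av  = (unsigned)(a >> (8 * lane)) & 0xFF;
            unsigned bv  = (unsigned)(b >> (8 * lane)) & 0xFF;
            unsigned sum = av + bv;
            if (sum > 0xFF) sum = 0xFF;            /* saturate instead of wrapping around */
            result |= (uint64_t)sum << (8 * lane);
        }
        return result;
    }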
CS252 Administrivia
• Exam:
Monday 10/13 ⇒ Monday 10/20??
Location: 277 Cory
TIME: 5:30 - 8:30
• Assignment due Monday 10/20
– Done in pairs. Put both names on papers.
• Select Project by Wednesday 10/22
– Need to have a partner for this. News group/email list?
– Web site will have a number of suggestions by tonight
– I am certainly open to other suggestions
» make one project fit two classes?
» Something close to your research?
Problem: “Fetch” unit
[Figure: Instruction Fetch with Branch Prediction delivers a stream of
instructions to execute to the Out-Of-Order Execution Unit, which returns
correctness feedback on branch results.]
• Instruction fetch decoupled from execution
• Often issue logic (+ rename) included with Fetch
Branches must be resolved
quickly for loop overlap!
• In our loop-unrolling example, we relied on the fact that branches
were under control of “fast” integer unit in order to get overlap!
Loop:  LD     F0, 0(R1)
       MULTD  F4, F0, F2
       SD     F4, 0(R1)
       SUBI   R1, R1, #8
       BNEZ   R1, Loop
• What happens if branch depends on result of multd??
– We completely lose all of our advantages!
– Need to be able to “predict” branch outcome.
– If we were to predict that branch was taken, this would be
right most of the time.
• Problem much worse for superscalar machines!
Prediction:
Branches, Dependencies, Data
• Prediction has become essential to getting good
performance from scalar instruction streams.
• We will discuss predicting branches. However,
architects are now predicting everything:
data dependencies, actual data, and results of
groups of instructions:
– At what point does computation become a probabilistic operation +
verification?
– We are pretty close with control hazards already…
• Why does prediction work?
– Underlying algorithm has regularities.
– Data that is being operated on has regularities.
– Instruction sequence has redundancies that are artifacts of way
that humans/compilers think about problems.
• Prediction ⇒ Compressible information streams?
Dynamic Branch Prediction
• Is dynamic branch prediction better than
static branch prediction?
– Seems to be. Still some debate to this effect
– Josh Fisher had good paper on “Predicting Conditional
Branch Directions from Previous Runs of a Program.”
ASPLOS ‘92. In general, good results if allowed to
run program for lots of data sets.
» How would this information be stored for later use?
» Still some difference between best possible static
prediction (using a run to predict itself) and
weighted average over many different data sets
– Paper by Young et al., “A Comparative Analysis of
Schemes for Correlated Branch Prediction,” notices that
there are a small number of important branches in
programs which have dynamic behavior.
Need Address
at Same Time as Prediction
• Branch Target Buffer (BTB): address of branch indexes the table to get
prediction AND branch target address (if taken)
– Note: must check for branch match now, since can’t use wrong branch address
(Figure 4.22, p. 273)
[Figure: the PC of the instruction in FETCH indexes the buffer; each entry
holds a Branch PC, compared (“=?”) against the fetch PC, plus the Predicted PC
and whether to predict taken or untaken.]
• Return instruction addresses predicted with stack
• Remember branch folding (Crisp processor)?
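A hedged C sketch (field names are my assumptions) of the check in the figure:
the fetch PC indexes the buffer, and the stored branch PC must match before the
predicted target may be used:

    #include <stdint.h>
    #include <stdbool.h>

    #define BTB_ENTRIES 512

    struct btb_entry {
        uint64_t branch_pc;     /* tag: PC of the branch this entry describes */
        uint64_t predicted_pc;  /* target to fetch next if predicted taken */
        bool     valid;
    };

    static struct btb_entry btb[BTB_ENTRIES];

    /* Returns true and the predicted next PC if this fetch PC hits in the BTB. */
    bool btb_lookup(uint64_t fetch_pc, uint64_t *next_pc) {
        struct btb_entry *e = &btb[(fetch_pc >> 2) % BTB_ENTRIES];
        if (e->valid && e->branch_pc == fetch_pc) {   /* must match, or we would use a wrong target */
            *next_pc = e->predicted_pc;
            return true;
        }
        return false;                                 /* not a known branch: fall through to PC + 4 */
    }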
Dynamic Branch Prediction
• Prediction could be “Static” (at compile time) or
“Dynamic” (at runtime)
– For our example, if we were to statically predict
“taken”, we would only be wrong once each pass
through loop
– Static information passed through bits in opcode
• Is dynamic branch prediction better than static
branch prediction?
– Seems to be. Still some debate to this effect
– Today, lots of hardware being devoted to dynamic
branch predictors.
• Does branch prediction make sense for 5-stage,
in-order pipeline? What about 8-stage pipeline?
– Perhaps: eliminate branch delay slots
– Then predict branches
Branch History Table
[Figure: the Branch PC indexes a table of predictors (Predictor 0 … Predictor 7).]
• BHT is a table of “Predictors”
– Usually 2-bit, saturating counters
– Indexed by PC address of Branch – without tags
• In Fetch stage of branch:
– BTB identifies branch
– Predictor from BHT used to make prediction
• When branch completes
– Update corresponding Predictor
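A minimal C sketch of such a BHT (my illustration): an untagged table of 2-bit
saturating counters indexed by low PC bits, consulted at fetch and updated when
the branch resolves:

    #include <stdint.h>
    #include <stdbool.h>

    #define BHT_ENTRIES 4096

    static uint8_t bht[BHT_ENTRIES];   /* 2-bit counters: 0,1 predict not taken; 2,3 predict taken */

    bool bht_predict(uint64_t branch_pc) {
        return bht[(branch_pc >> 2) % BHT_ENTRIES] >= 2;
    }

    void bht_update(uint64_t branch_pc, bool taken) {
        uint8_t *ctr = &bht[(branch_pc >> 2) % BHT_ENTRIES];
        if (taken  && *ctr < 3) (*ctr)++;   /* saturate at 3 */
        if (!taken && *ctr > 0) (*ctr)--;   /* saturate at 0 */
    }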
Dynamic Branch Prediction
(standard technologies)
• Combine Branch Target Buffer and History Tables
– Branch Target Buffer (BTB): identify branches and hold taken addresses
» Trick: identify branch before fetching instruction!
» Must be careful not to misidentify branches or destinations
– Branch History Table makes prediction
» Can be complex prediction mechanisms with long history
» No address check: Can be good, can be bad (aliasing)
• Simple 1-bit BHT: keep last direction of branch
• Problem: in a loop, 1-bit BHT will cause two mispredictions (avg
is 9 iterations before exit):
– End of loop case, when it exits instead of looping as before
– First time through loop on next time through code, when it predicts exit
instead of looping
• Performance = ƒ(accuracy, cost of misprediction)
– Misprediction ⇒ Flush Reorder Buffer
Dynamic Branch Prediction
(Jim Smith, 1981)
• Solution: 2-bit scheme where change prediction
only if get misprediction twice: (Figure 4.13, p.
264)
[Figure 4.13: 2-bit saturating-counter state diagram with two Predict Taken
states and two Predict Not Taken states; T/NT outcomes move between adjacent
states, so the prediction flips only after two consecutive mispredictions.
Green states = go (predict taken), red states = stop (predict not taken).]
• Adds hysteresis to the decision-making process
BHT Accuracy
• Mispredict because either:
– Wrong guess for that branch
– Got branch history of wrong branch when indexing the table
• 4096-entry table: programs vary from 1%
misprediction (nasa7, tomcatv) to 18% (eqntott),
with spice at 9% and gcc at 12%
• 4096 entries about as good as infinite table
(in Alpha 21164)
Correlating Branches
• Hypothesis: recent branches are correlated; that is, behavior of
recently executed branches affects prediction of current branch
• Two possibilities; the current branch depends on:
– Last m most recently executed branches anywhere in program
Produces a “GA” (for “global adaptive”) in the Yeh and Patt
classification (e.g. GAg)
– Last m most recent outcomes of same branch.
Produces a “PA” (for “per-address adaptive”) in same classification
(e.g. PAg)
• Idea: record m most recently executed branches as taken or not
taken, and use that pattern to select the proper branch history
table entry
– A single history table shared by all branches (appends a “g” at end),
indexed by history value.
– Address is used along with history to select table entry (appends a
“p” at end of classification)
– If only a portion of the address is used, often appends an “s” to indicate
“set-indexed” tables (i.e. GAs)
Correlating Branches
• For instance, consider global history, set-indexed
BHT. That gives us a GAs history table.
(2,2) GAs predictor
– First 2 means that we keep
two bits of history
– Second means that we have 2-bit
counters in each slot.
– Then behavior of recent
branches selects between,
say, four predictions of next
branch, updating just that
prediction
– Note that the original two-bit
counter solution would be a
(0,2) GAs predictor
– Note also that aliasing is
possible here...
[Figure: the branch address selects a row of 2-bits-per-branch predictors, and
a 2-bit global branch history register selects the slot within the row; each
slot is a 2-bit counter whose value gives the prediction.]
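A hedged C sketch of a (2,2) GAs predictor as described above (the exact index
hashing is my assumption): 2 bits of global history select one of four 2-bit
counters in a set, and low bits of the branch address select the set:

    #include <stdint.h>
    #include <stdbool.h>

    #define SETS 1024

    static uint8_t counters[SETS][4];   /* four 2-bit counters per set, one per history pattern */
    static uint8_t ghist = 0;           /* 2-bit global branch history register */

    bool gas_predict(uint64_t branch_pc) {
        return counters[(branch_pc >> 2) % SETS][ghist] >= 2;
    }

    void gas_update(uint64_t branch_pc, bool taken) {
        uint8_t *ctr = &counters[(branch_pc >> 2) % SETS][ghist];
        if (taken  && *ctr < 3) (*ctr)++;
        if (!taken && *ctr > 0) (*ctr)--;
        ghist = (uint8_t)(((ghist << 1) | (taken ? 1 : 0)) & 0x3);   /* shift in the outcome */
    }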
Discussion of Yeh and Patt
classification
• Paper Discussion of “Alternative Implementations of
Two-Level Adaptive Branch Prediction”
Accuracy of Different Schemes
(Figure 4.21, p. 272)
[Chart: frequency of mispredictions for SPEC89 benchmarks (nasa7, matrix300,
tomcatv, doduc, spice, fpppp, gcc, espresso, eqntott, li) under three schemes:
4,096 entries with 2 bits/entry, unlimited entries with 2 bits/entry, and
1,024 entries of a (2,2) predictor. Rates range from roughly 0–1% (e.g. nasa7,
tomcatv) up to 18% (eqntott).]
Re-evaluating Correlation
• Several of the SPEC benchmarks have less
than a dozen branches responsible for 90%
of taken branches:
program     branch %   static branches   # responsible for 90%
compress       14%            236                 13
eqntott        25%            494                  5
gcc            15%           9531               2020
mpeg           10%           5598                532
real gcc       13%          17361               3214
• Real programs + OS more like gcc
• Small benefits beyond benchmarks for
correlation? problems with branch aliases?
Predicated Execution
• Avoid branch prediction by turning branches
into conditionally executed instructions:
if (x) then A = B op C else NOP
– If false, then neither store result nor cause exception
– Expanded ISAs of Alpha, MIPS, PowerPC, SPARC have
conditional move; PA-RISC can annul any following
instr.
– IA-64: 64 1-bit condition (predicate) fields allow conditional
execution of any instruction
– This transformation is called “if-conversion”
• Drawbacks to conditional instructions
– Still takes a clock even if “annulled”
– Stall if condition evaluated late
– Complex conditions reduce effectiveness;
condition becomes known late in pipeline
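A small C sketch of if-conversion (my illustration): the branch on x disappears
and becomes a data dependence on a select, at the cost of always performing the
operation:

    int branch_version(int x, int A, int B, int C) {
        if (x) A = B + C;      /* control dependence: needs a predicted branch */
        return A;
    }

    int if_converted(int x, int A, int B, int C) {
        int t = B + C;         /* executed unconditionally ("still takes a clock even if annulled") */
        A = x ? t : A;         /* can compile to a conditional move: no branch to predict */
        return A;
    }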
Dynamic Branch Prediction
Summary
• Prediction becoming important part of scalar
execution.
– Prediction is exploiting “information compressibility” in execution
• Branch History Table: 2 bits for loop accuracy
• Correlation: Recently executed branches correlated
with next branch.
– Either different branches (GA)
– Or different executions of same branches (PA).
• Branch Target Buffer: include branch address &
prediction
• Predicated Execution can reduce number of
branches, number of mispredicted branches
CS252 Projects
• DynaCOMP related (or Introspective Computing)
• OceanStore related
• Smart Dust/NEST
• ROC Related Projects
• BRASS project related
• Others?
Summary #1
Dynamic Branch Prediction
• Prediction becoming important part of scalar
execution.
– Prediction is exploiting “information compressibility” in execution
• Branch History Table: 2 bits for loop accuracy
• Correlation: Recently executed branches correlated
with next branch.
– Either different branches (GA)
– Or different executions of same branches (PA).
• Branch Target Buffer: include branch address &
prediction
• Predicated Execution can reduce number of
branches, number of mispredicted branches
Summary #2
• Prediction, prediction, prediction!
– Over next couple of lectures, we will explore prediction of
everything! Branches, Dependencies, Data
• The high prediction accuracies will cause us to ask:
– Is the deterministic Von Neumann model the right one???