CMPE 49B Sp. Top. in CMPE: Multi
CMPE 478 Parallel Processing
[Picture: ASCI White, the most powerful computer in the world in 2001]
CMPE 4784
Von Neumann Architecture
[Diagram: CPU, RAM, and I/O devices connected by a common BUS]
• sequential computer
Memory Hierarchy
Registers (fastest)
Cache
Real Memory
Disk (slowest)
History of Computer Architecture
• 4 Generations (identified by logic technology):
1. Tubes
2. Transistors
3. Integrated Circuits
4. VLSI (very large scale integration)
PERFORMANCE TRENDS
• Traditional mainframe/supercomputer performance: 25% increase per year
• But … microprocessor performance: 50% increase per year since the mid 80’s
Moore’s Law
• “Transistor density doubles every 18 months”
• Moore is co-founder of Intel.
• 60% increase per year
• Exponential growth
• PC costs decline.
• PCs are building bricks of all future systems.
VLSI Generation
Bit Level Parallelism
(up to mid 80’s)
• 4-bit microprocessors replaced by 8-bit, 16-bit, 32-bit, etc.
• Doubling the width of the datapath reduces the number of cycles required to perform a full 32-bit operation
• Mid 80’s: reap the benefits of this kind of parallelism (full 32-bit word operations combined with the use of caches)
Instruction Level Parallelism
(mid 80’s to mid 90’s)
• Basic steps in instruction processing (instruction decode, integer arithmetic, address calculation) could be performed in a single cycle
• Pipelined instruction processing
• Reduced instruction set computing (RISC)
• Superscalar execution
• Branch prediction
Thread/Process Level Parallelism
(mid 90’s to present)
• On average, control transfers occur roughly once in every five instructions, so exploiting instruction level parallelism at a larger scale is not possible
• Use multiple independent “threads” or processes
• Concurrently running threads and processes
Evolution of the Infrastructure
• Electronic Accounting Machine Era: 1930 – 1950
• General Purpose Mainframe and Minicomputer Era: 1959 – Present
• Personal Computer Era: 1981 – Present
• Client/Server Era: 1983 – Present
• Enterprise Internet Computing Era: 1992 – Present
Sequential vs Parallel Processing
Sequential:
• physical limits reached
• easy to program
• expensive supercomputers
Parallel:
• “raw” power unlimited
• more memory, multiple caches
• made up of COTS, so cheap
• difficult to program
What is Multi-Core Programming ?
• Answer: It is basically parallel programming on a single computer box (e.g. a desktop, a notebook, a blade)
Amdahl’s Law
• The serial percentage of a program is fixed, so the speed-up obtained by employing parallel processing is bounded:

  Speedup = 1 / (s + (1 - s)/P)

• In the limit (as P → ∞): Speedup = 1/s
• Led to pessimism in the parallel processing community and prevented development of parallel machines for a long time.
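The bound is easy to check numerically. A minimal sketch (the function name is illustrative):

```python
def amdahl_speedup(s, p):
    """Amdahl's law: speedup of a program with serial fraction s on p processors."""
    return 1.0 / (s + (1.0 - s) / p)

# With a 10% serial fraction the speedup is capped at 1/s = 10,
# no matter how many processors are added.
print(amdahl_speedup(0.1, 16))     # modest speedup on 16 processors
print(amdahl_speedup(0.1, 10**6))  # just below the limit 1/s = 10
```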
Gustafson’s Law
• The serial percentage depends on the number of processors and on the input.
• Demonstrated achieving more than 1000-fold speedup using 1024 processors.
• Justified parallel processing.
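Gustafson's scaled speedup can be sketched the same way, assuming the usual form S = P − s(P − 1), where s is the serial fraction measured on the parallel run (names illustrative):

```python
def gustafson_speedup(s, p):
    """Gustafson's law: scaled speedup with serial fraction s on p processors."""
    return p - s * (p - 1)

# A 1% serial fraction on 1024 processors still yields roughly 1014-fold
# speedup, consistent with the >1000-fold results reported on 1024 processors.
print(gustafson_speedup(0.01, 1024))
```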
Grand Challenge Applications
• Important scientific & engineering problems identified by the U.S. High Performance Computing & Communications Program (’92)
Flynn’s Taxonomy
• classifies computer architectures according to:
1. Number of instruction streams it can process at a time
2. Number of data elements on which it can operate simultaneously

                           Data Streams
                           Single     Multiple
Instruction    Single      SISD       SIMD
Streams        Multiple    MISD       MIMD
SPMD Model
(Single Program Multiple Data)
• Each processor executes the same program asynchronously
• Synchronization takes place only when processors need to exchange data
• SPMD is an extension of SIMD (it relaxes synchronized instruction execution)
• SPMD is a restriction of MIMD (it uses only one source/object)
Parallel Processing Terminology
• Embarrassingly Parallel:
 – applications which are trivial to parallelize
 – large amounts of independent computation
 – little communication
• Data Parallelism:
 – model of parallel computing in which a single operation can be applied to all data elements simultaneously
 – amenable to SIMD or SPMD style of computation
• Control Parallelism:
 – many different operations may be executed concurrently
 – requires MIMD/SPMD style of computation
Parallel Processing Terminology
• Scalability:
 – If the size of the problem is increased, the number of processors that can be effectively used can be increased (i.e. there is no limit on parallelism).
 – The cost of a scalable algorithm grows slowly as the input size and the number of processors are increased.
 – Data parallel algorithms are more scalable than control parallel algorithms.
• Granularity:
 – fine grain machines: a massive number of weak processors, each with a small memory
 – coarse grain machines: a smaller number of powerful processors, each with large amounts of memory
Shared Memory Machines
Shared Address Space
[Diagram: several processes (threads) accessing one shared address space]
• Memory is globally shared, therefore processes (threads) see a single address space
• Coordination of accesses to locations is done by means of locks provided by thread libraries
• Example Machines: Sequent, Alliant, SUN Ultra, Dual/Quad Board Pentium PC
• Example Thread Libraries: POSIX threads, Linux threads
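As a concrete illustration of lock-based coordination, a minimal sketch using Python's threading module (variable names are illustrative):

```python
import threading

counter = 0                 # shared location in the single address space
lock = threading.Lock()     # lock provided by the thread library

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # coordinate access to the shared location
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock, increments could be lost
```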
Shared Memory Machines
• can be classified as:
 – UMA: uniform memory access
 – NUMA: nonuniform memory access
based on the amount of time a processor takes to access local and global memory.
[Diagram: (a) processors and memory modules connected by an interconnection network or bus (UMA); (b) and (c) processors with local memories connected by an interconnection network (NUMA)]
Distributed Memory Machines
[Diagram: processes, each with its own local memory M, communicating over a network]
• Each processor has its own local memory (not directly accessible by others)
• Processors communicate by passing messages to each other
• Example Machines: IBM SP2, Intel Paragon, COWs (clusters of workstations)
• Example Message Passing Libraries: PVM, MPI
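A minimal sketch of the message-passing style, simulated here with two Python threads that exchange data only through queues (the queues stand in for network sends/receives; names illustrative):

```python
import queue
import threading

def worker(inbox, outbox):
    # no shared variables: data arrives and leaves only as messages
    msg = inbox.get()      # blocking receive
    outbox.put(msg * 2)    # send the result back

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(21)              # send
result = outbox.get()      # receive
t.join()
print(result)  # 42
```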
Beowulf Clusters
• Use COTS: ordinary PCs and networking equipment
• Has the best price/performance ratio
[Picture: a PC cluster]
Multi-Core Computing
• A multi-core microprocessor is one which combines two or more independent processors into a single package, often a single integrated circuit.
• A dual-core device contains exactly two independent microprocessors.
Comparison of Different Architectures
• Single Core: one CPU state, one execution unit, one cache
• Multiprocessor: two separate processors, each with its own CPU state, execution unit and cache
• Hyper-Threading Technology: two CPU states sharing a single execution unit and cache
• Multi-Core: two cores, each with its own execution unit and cache
• Multi-Core with Shared Cache: two cores, each with its own execution unit, sharing a single cache
• Multi-Core with Hyper-Threading Technology: two cores, each with two CPU states and its own execution unit and cache
Top 500 Most Powerful Supercomputer Lists
• http://www.top500.org/
Grid Computing
• provides access to computing power and various resources just like accessing electrical power from the electrical grid
• allows coupling of geographically distributed resources
• provides inexpensive access to resources irrespective of their physical location or access point
• the Internet & dedicated networks can be used to interconnect distributed computational resources and present them as a single unified resource
• Resources: supercomputers, clusters, storage systems, data resources, special devices
Grid Computing
• the GRID is, in effect, a set of software tools which, when combined with hardware, let users tap processing power off the Internet as easily as electrical power can be drawn from the electricity grid.
• Examples of Grids:
 – TeraGrid (USA)
 – EGEE Grid (Europe)
 – TR-Grid (Turkey)
GRID COMPUTING
[Figure: the Power Grid as an analogy for the Compute Grid]

Application domains: Archeology, Astronomy, Astrophysics, Civil Protection, Comp. Chemistry, Earth Sciences, Finance, Fusion, Geophysics, High Energy Physics, Life Sciences, Multimedia, Material Sciences, …
• >250 sites
• 48 countries
• >50,000 CPUs
• >20 PetaBytes
• >10,000 users
• >150 VOs
• >150,000 jobs/day
Virtualization
• Virtualization is abstraction of computer resources.
• Makes a single physical resource, such as a server, an operating system, an application, or a storage device, appear to function as multiple logical resources
• It may also mean making multiple physical resources, such as storage devices or servers, appear as a single logical resource
• Server virtualization enables companies to run more than one operating system at the same time on a single machine
Advantages of Virtualization
• Most servers run at just 10-15% capacity – virtualization can increase server utilization to 70% or higher.
• Higher utilization means fewer computers are required to process the same amount of work. Fewer machines means less power consumption.
• Legacy applications can also be run on older versions of an operating system
• Other advantages: easier administration, fault tolerance, security
VMware Virtual Platform
[Diagram: two virtual machines (Apps 1 on OS 1, Apps 2 on OS 2), each presented with a virtual x86 machine (motherboard, disks, display, net), running on the VMware Virtual Platform over one real x86 machine]
• VMware is now a 17 billion dollar company!!
Cloud Computing
• Style of computing in which IT-related capabilities are provided “as a service”, allowing users to access technology-enabled services from the Internet (“in the cloud”) without knowledge of, expertise with, or control over the technology infrastructure that supports them.
• General concept that incorporates software as a service (SaaS), Web 2.0 and other recent, well-known technology trends, in which the common theme is reliance on the Internet for satisfying the computing needs of the users.
Cloud Computing
• Virtualisation provides separation between infrastructure and user runtime environment
• Users specify virtual images as their deployment building blocks
• Pay-as-you-go allows users to use the service when they want and only pay for what they use
• Elasticity of the cloud allows users to start simple and explore more complex deployment over time
• Simple interface allows easy integration with existing systems
Cloud: Unique Features
• Ease of use
 – REST and HTTP(S)
• Runtime environment
 – Hardware virtualisation
 – Gives users full control
• Elasticity
 – Pay-as-you-go
 – Cloud providers can buy hardware faster than you!
Cloud computing is about much more than technological capabilities. Technology is the mechanism, but, as in any shift in business, the driver is economics.
Nicholas Carr, the author of “The Big Switch”
Better Economics
We want to pay only for what we use, and we want to control it accurately.
Facing New Challenges
• Complexity of modern IT infrastructures: physical servers, virtual machines, clusters, Grids, geographical distribution
• Cost of electricity
• Credit crunch
• Further pressures to reduce costs
• Openness to the acceptable security concept
[Diagram: the application lifecycle – Develop, Test, Release; Install, Configure, Operate – repeated for each deployment]

Undifferentiated heavy lifting
• Hardware costs
• Software costs
• Maintenance
• Load balancing
• Scaling
• Utilization
• Idle machines
• Bandwidth management
• Server hosting
• Storage management
• High availability
The 70/30 Switch
Finding Solutions
• Improving utilisation rates through market based algorithms for resource allocation
• Accessing external infrastructures on-demand
• Using a single management platform for all computing resources
Cloud vs Grid
From the customers’/end users’ point of view, they are the same.
Grid/cloud market structure
[Diagram: the Customer accesses Applications running on Middleware, which runs either on Hardware (owned) or on Hardware (service) reached over the Network – the Grid/Cloud]
Advantages:
• Lower cost
• Access to larger infrastructure
 – Faster calculations
 – More storage
• Speed
 – Faster calculations
 – Easier provisioning
Disadvantages:
• Very complicated
• Security
• Lack of confidence
 – Trust
 – Compatibility
Improving Utilization
(from lowest to highest utilisation)
Hardware Servers:
(-) low utilisation rates, scalability problems
Enterprise/Departmental Grid:
(+) improves utilisation rates of physical servers, enables collaboration
(-) limited scalability, lack of interoperability between vendors, limited efficiency of policy based mechanisms
Virtual Servers:
(+) improved utilisation rates, better scalability, easy disaster recovery
(-) increased number of servers to manage, incompatible virtualization platforms
Cloud Computing:
(+) no need to own hardware, shared access, improved utilisation through pay-as-you-use
(-) incompatible platforms, ‘fair price’ is dubious to users
Grid and Clouds
Issue: Why we need it? (The Problem)
 – Classic Grid Computing: To enable the R&D community to achieve its research goals in reasonable time. Computation over large data sets, or of parallelizable compute-intensive applications.
 – Cloud Computing: Reduce IT costs. On-demand scalability for all applications, including research, development and business applications.
Issue: Main Target Market
 – Classic Grid Computing: First – academia; second – certain industries.
 – Cloud Computing: Mainly industry.
Issue: Business Model – where does the money come from?
 – Classic Grid Computing: Academia: sponsor-based (mainly government money). Industry: pays for internal implementations.
 – Cloud Computing: Hosted by commercial companies, paid for by users. Based on economies of scale and expertise. Only pay for what you need, when you need it (On-Demand + Pay per Use).
Competition
Key differentiators:
• Open source – no vendor lock-in
• Scalability
• Interfaces and market mechanisms
[Diagram (Constellation Technologies): Enterprise Grid and Enterprise Cloud stacks – hardware, operating system, virtualisation, interfaces; incompatible standards between clouds]
Example Cloud: Amazon Web Services
• EC2 (Elastic Computing Cloud) is the computing service of Amazon
 – Based on hardware virtualisation
 – Users request virtual machine instances, pointing to an image (public or private) stored in S3
 – Users have full control over each instance (e.g. access as root, if required)
 – Requests can be issued via SOAP and REST
Example Cloud: Amazon Web Services
• S3 (Simple Storage Service) is a service for storing and accessing data on the Amazon cloud
 – From a user’s point of view, S3 is independent from the other Amazon services
 – Data is organized in a hierarchical fashion, grouped in buckets (i.e. containers) and objects
 – Data is accessible via various protocols
• Elastic Block Store
 – Locally mounted storage
 – Highly available
Example Cloud: Amazon Web Services
• Other AWS services:
 – SQS (Simple Queue Service)
 – SimpleDB
 – Billing services: DevPay
 – Elastic IP (static IPs for dynamic cloud computing)
 – Multiple locations
Example Cloud: Amazon Web Services
• Pricing information: http://aws.amazon.com/ec2/
Challenges
• Security and Trust
• Customer SLA – compare Cost/Performance
• Dynamic VM migration – Unique Universal IP
• Clouds Interoperability
• Data Protection & Recovery
• Standards: Security
• Management Tools
• Integration with Internal Infrastructure
• Small compact economical applications
• Cost/Performance prediction and measurement
• Keep it Transparent and Simple
Cloud Market
"The future is about having a platform in the cloud," Microsoft Chief Steve Ballmer said of the trend in a July 2008 e-mail to employees.
Cloud Market
“By 2012, 80 percent of Fortune 1000 companies will pay for some cloud computing service, and 30 percent of them will pay for cloud computing infrastructure.”
Gartner, 2008
EC2 – “Google of the Clouds”
According to Vogels (Amazon CTO), 370,000 developers have registered for Amazon Web Services since their start in 2002, and the company now spends more bandwidth on the developers than it does on e-commerce.
http://www.theregister.co.uk/2008/06/26/amazon_trumpets_web_services/
• In the last two months of 2007, usage of Amazon Web Services grew by 40%
• $131 million revenues in Q1 from AWS
• 60,000 customers
• The majority of usage comes from banks, pharmaceuticals and other large corporations
Why Now? (Economy)
• CIOs -> Do more with Less (energy costs / recession will boost it)
• Lower cost for Scalability
• Enterprise IT budget – spending 80% on MAINTENANCE
• On average, we utilize only 15% of our computing resources capacity
• Peak Times economy
• The Enterprise IT is not its core business
• Psychology of Internet/Cloud trust (SalesForce, Gmail, Internet banking, etc.)
• Ideal for Developers
Why Now? (Benefits)
• Cost savings, leveraging economies of scale
• Pay only for what you use
• Resource flexibility
• Rapid prototyping and market testing
• Increased speed to market
• Improved service levels and availability
• Self-service deployment
• Reduced lock-in and switching costs
Clouds Types
• VM Based (EC2, GoGrid)
• Storage Based (EMC, S3)
• Customer Applications Based (Google)
• Cloud Applications Based (SalesForce)
• Grid Computing/HPC Applications
• Mobile Clouds (iPhone UI, Web Apps)
• Private Clouds
• Cloud of Clouds
Summary
• Cloud Computing – The New IT Economy
• Pay-per-Use for On-Demand Scalability
• All major vendors are investing in Clouds
• A Cloud Trading Market will evolve
• VMs will be mobile across clouds
• Mobile phones (iPhone) as cloud users
• International implications (Access to Data)
Models of Parallel Computers
1. Message Passing Model
 – Distributed memory
 – Multicomputer
2. Shared Memory Model
 – Multiprocessor
 – Multi-core
3. Theoretical Model
 – PRAM
• New architectures: combination of 1 and 2.
Theoretical PRAM Model
• Used by parallel algorithm designers
• Algorithm designers do not want to worry about low level details: they want to concentrate on algorithmic details
• Extends the classic RAM model
• Consists of:
 – a control unit (common clock); operation is synchronous
 – global shared memory
 – an unbounded set of processors, each with its own private memory
Theoretical PRAM Model
• Some characteristics:
 – Each processor has a unique identifier: mypid = 0, 1, 2, …
 – All processors operate synchronously under the control of a common clock
 – In each unit of time, each processor is allowed to execute an instruction or stay idle
Various PRAM Models
(models ordered from weakest to strongest by how write conflicts to the same memory location are handled)
• EREW (exclusive read / exclusive write)
• CREW (concurrent read / exclusive write)
• CRCW (concurrent read / concurrent write)
 – Common (all processors must write the same value)
 – Arbitrary (one processor is chosen arbitrarily)
 – Priority (the processor with the lowest index writes)
Algorithmic Performance Parameters
• Notation:
 – n: input size
 – T(n): time complexity of the best sequential algorithm
 – P: number of processors
 – T_P(n): time complexity of the parallel algorithm when run on P processors
 – T_1(n): time complexity of the parallel algorithm when run on 1 processor
Algorithmic Performance Parameters
• Speed-Up: S(P) = T(n) / T_P(n)
• Efficiency: E(P) = S(P) / P
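For example, with hypothetical timings (function names illustrative):

```python
def speedup(t_seq, t_par):
    """Speed-up: best sequential time divided by parallel time."""
    return t_seq / t_par

def efficiency(t_seq, t_par, p):
    """Efficiency: speed-up divided by the number of processors."""
    return speedup(t_seq, t_par) / p

# a job taking 100 s sequentially and 16 s on 8 processors
print(speedup(100, 16))        # 6.25
print(efficiency(100, 16, 8))  # 0.78125
```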
Algorithmic Performance Parameters
• Work = Processors × Time
 – Informally: how much time a parallel algorithm will take to simulate on a serial machine
 – Formally: W = P × T_P(n)
Algorithmic Performance Parameters
• Work Efficient:
 – Informally: a work efficient parallel algorithm does no more work than the best serial algorithm
 – Formally: a work efficient algorithm satisfies P × T_P(n) = O(T(n))
Algorithmic Performance Parameters
• Scalability:
 – Informally, scalability implies that if the size of the problem is increased, the number of processors effectively used can be increased (i.e. there is no limit on parallelism)
 – Formally, scalability means:
Algorithmic Performance Parameters
• Some remarks:
 – The cost of a scalable algorithm grows slowly as the input size and the number of processors are increased
 – The level of ‘control parallelism’ is usually a constant, independent of problem size
 – The level of ‘data parallelism’ is an increasing function of problem size
 – Data parallel algorithms are more scalable than control parallel algorithms
Goals in Designing Parallel Algorithms
• Scalability:
 – Algorithm cost grows slowly, preferably in a polylogarithmic manner
• Work Efficient:
 – We do not want to waste CPU cycles
 – May be an important point when we are worried about power consumption or the ‘money’ paid for CPU usage
Summing N numbers in Parallel
        x1, x2, x3, x4, x5, x6, x7, x8
step 1: x1+x2, x2, x3+x4, x4, x5+x6, x6, x7+x8, x8
step 2: x1+..+x4, x2, x3+x4, x4, x5+..+x8, x6, x7+x8, x8
step 3: x1+..+x8 (result), x2, x3+x4, x4, x5+..+x8, x6, x7+x8, x8

• An array of N numbers can be summed in log(N) steps using N/2 processors
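The log-step summation can be simulated directly; each loop iteration stands for one synchronous parallel step in which independent pairs are added (a sketch, not a real parallel implementation; the function name is illustrative):

```python
def parallel_sum(xs):
    """Tree reduction: ceil(log2(N)) rounds, with N/2 pair-additions per round."""
    xs = list(xs)
    while len(xs) > 1:
        if len(xs) % 2:
            xs.append(0)                     # pad odd-length rounds
        # one synchronous step: each pair is summed by an independent processor
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    return xs[0]

print(parallel_sum(range(1, 9)))  # 36, computed in 3 rounds
```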
Prefix Summing N numbers in Parallel
        x1, x2, x3, x4, x5, x6, x7, x8
step 1: x1+x2, x2+x3, x3+x4, x4+x5, x5+x6, x6+x7, x7+x8, x8
step 2: x1+..+x4, x2+..+x5, x3+..+x6, x4+..+x7, x5+..+x8, x6+..+x8, x7+x8, x8
step 3: x1+..+x8, x2+..+x8, x3+..+x8, x4+..+x8, x5+..+x8, x6+..+x8, x7+x8, x8

• Computing partial sums of an array of N numbers can be done in log(N) steps using N processors
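The doubling scheme can be simulated round by round (this version accumulates sums from the left; the slide accumulates toward the right, which is symmetric):

```python
def prefix_sums(xs):
    """Hillis-Steele inclusive scan: ceil(log2(N)) rounds, N adds per round."""
    xs = list(xs)
    d = 1
    while d < len(xs):
        # one synchronous step: every position i >= d adds the value d slots back
        xs = [x + (xs[i - d] if i >= d else 0) for i, x in enumerate(xs)]
        d *= 2
    return xs

print(prefix_sums([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 3, 6, 10, 15, 21, 28, 36]
```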
Prefix Paradigm for Parallel Algorithm Design
• Prefix computation forms a paradigm for parallel algorithm development, just like other well known paradigms such as divide and conquer, dynamic programming, etc.
• Prefix Paradigm:
 – If possible, transform your problem into a prefix type computation
 – Apply the efficient logarithmic prefix computation
• Examples of problems solved by the prefix paradigm:
 – Solving linear recurrence equations
 – Tridiagonal solvers
 – Problems on trees
 – Adaptive triangular mesh refinement
Solving Linear Recurrence Equations
• Given the linear recurrence equation:

  z_i = a_i z_{i-1} + b_i z_{i-2}

• we can rewrite it as:

  [ z_i     ]   [ a_i  b_i ] [ z_{i-1} ]
  [ z_{i-1} ] = [ 1    0   ] [ z_{i-2} ]

• if we expand it, we get the solution in terms of partial products of the coefficient matrices and the initial values z_1 and z_0:

  [ z_i     ]   [ a_i  b_i ] [ a_{i-1}  b_{i-1} ]       [ a_2  b_2 ] [ z_1 ]
  [ z_{i-1} ] = [ 1    0   ] [ 1        0       ]  ...  [ 1    0   ] [ z_0 ]

• use prefix computation to compute the partial products
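A sequential sketch of the matrix formulation (in a parallel setting the running matrix products would be produced by a prefix computation over the companion matrices; function names are illustrative):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def solve_recurrence(a, b, z0, z1, n):
    """z_i = a_i*z_{i-1} + b_i*z_{i-2}: multiply the companion matrices."""
    M = [[1, 0], [0, 1]]
    for i in range(2, n + 1):                  # prefix-computable in parallel
        M = matmul2([[a[i], b[i]], [1, 0]], M)
    return M[0][0] * z1 + M[0][1] * z0         # top row applied to (z1, z0)

# with all a_i = b_i = 1 and z0 = 0, z1 = 1 this is the Fibonacci sequence
coeff = {i: 1 for i in range(2, 11)}
print(solve_recurrence(coeff, coeff, 0, 1, 10))  # 55
```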
Pointer Jumping Technique
        x1, x2, x3, x4, x5, x6, x7, x8
step 1: x1+x2, x2+x3, x3+x4, x4+x5, x5+x6, x6+x7, x7+x8, x8
step 2: x1+..+x4, x2+..+x5, x3+..+x6, x4+..+x7, x5+..+x8, x6+..+x8, x7+x8, x8
step 3: x1+..+x8, x2+..+x8, x3+..+x8, x4+..+x8, x5+..+x8, x6+..+x8, x7+x8, x8

• A linked list of N numbers can be prefix-summed in log(N) steps using N processors
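Pointer jumping can be simulated round by round: every node adds the value of the node its pointer targets and then doubles the distance its pointer spans (a sketch; the function name is illustrative):

```python
def pointer_jump_sums(next_, vals):
    """List prefix sums by pointer jumping: ceil(log2(N)) synchronous rounds.
    next_[i] is the index of node i's successor (None at the list tail).
    On return, node i holds the sum of the values from node i to the tail."""
    vals, next_ = list(vals), list(next_)
    n = len(vals)
    for _ in range(max(1, n.bit_length())):
        # one synchronous round: read the old values/pointers, write new ones
        vals, next_ = (
            [vals[i] + (vals[next_[i]] if next_[i] is not None else 0)
             for i in range(n)],
            [next_[next_[i]] if next_[i] is not None else None
             for i in range(n)],
        )
    return vals

# list 0 -> 1 -> ... -> 7 with values 1..8; node 0 ends up holding 1+...+8
print(pointer_jump_sums([1, 2, 3, 4, 5, 6, 7, None], [1, 2, 3, 4, 5, 6, 7, 8]))
```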
Euler Tour Technique
[Figure: example tree – root a with children b, c, d; b’s children e, f, g; g’s children h, i]
Tree Problems:
• Preorder numbering
• Postorder numbering
• Number of descendants
• Level of each node
• To solve such problems, first transform the tree by linearizing it into a linked list (its Euler tour), then apply the prefix computation
Computing Level of Each Node by Euler Tour Technique
• weight assignment: w(<parent(v),v>) = 1 on the way down, w(<v,parent(v)>) = -1 on the way up
• level(v) = pw(<v,parent(v)>)
• level(root) = 0
[Worked example: the Euler tour of the example tree with the initial weights w(<u,v>) and their prefix sums pw(<u,v>)]
Computing Number of Descendants by Euler Tour Technique
• weight assignment: w(<parent(v),v>) = 0 on the way down, w(<v,parent(v)>) = 1 on the way up
• # of descendants(v) = pw(<parent(v),v>) - pw(<v,parent(v)>)
• # of descendants(root) = n
[Worked example: the Euler tour of the example tree with the initial weights w(<u,v>) and their prefix sums pw(<u,v>)]
Preorder Numbering by Euler Tour Technique
• weight assignment: w(<parent(v),v>) = 1 on the way down, w(<v,parent(v)>) = 0 on the way up
• preorder(v) = 1 + pw(<v,parent(v)>)
• preorder(root) = 1
[Worked example: the example tree with preorder numbers a=1, b=2, e=3, f=4, g=5, h=6, i=7, c=8, d=9, together with edge weights and prefix sums]
Postorder Numbering by Euler Tour Technique
• weight assignment: w(<parent(v),v>) = 0 on the way down, w(<v,parent(v)>) = 1 on the way up
• postorder(v) = pw(<parent(v),v>)
• postorder(root) = n
[Worked example: the example tree with postorder numbers, together with edge weights and prefix sums]
Brent’s Theorem
• Given a parallel algorithm with computation time (depth) D: if the parallel algorithm performs W operations in total, then P processors can execute the algorithm in time D + (W - D)/P
• For the proof: consider the DAG representation of the computation
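A minimal numeric check of the bound, applied to the earlier tree-summation example (names illustrative):

```python
def brent_time(depth, work, p):
    """Brent's theorem: p processors need at most depth + (work - depth)/p steps."""
    return depth + (work - depth) / p

# summing 1024 numbers: W = 1023 additions, D = log2(1024) = 10 levels
print(brent_time(10, 1023, 32))  # 41.65625 steps instead of 1023 sequential
```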