
Introduction to Parallel Programming using MPI
@ CS TechLunch
Amit Kumar
Source of the entire presentation: http://www.llnl.gov/computing/tutorials/parallel_comp/
► Why Parallel Computing
► Parallel Computer Memory Architectures
► Parallel Programming Models
► MPI
► MPI Examples
Why Parallel Computing
Primary reasons:
► Save time
► Solve larger problems
► Provide concurrency (do multiple things at the same time)
► My Motivation: Having Parallel Girl Friends
…
Parallel Computer Memory Architectures
► Shared Memory
 Uniform Memory Access (UMA)
 Non-Uniform Memory Access (NUMA)
Parallel Computer Memory Architectures …contd.
► Distributed Memory
Parallel Computer Memory Architectures …contd.
► Hybrid Distributed-Shared Memory
 The largest and fastest computers in the world today employ both shared and distributed memory architectures.
Comparison of Shared and Distributed Memory Architectures

UMA
 Examples: SMPs, Sun Vexx, DEC/Compaq, SGI Challenge, IBM POWER3
 Communications: MPI, Threads, OpenMP, shmem
 Scalability: to 10s of processors
 Drawbacks: Memory-CPU bandwidth
 Software Availability: many 1000s of ISVs

NUMA
 Examples: SGI Origin, Sequent, HP Exemplar, DEC/Compaq, IBM POWER4 (MCM)
 Communications: MPI, Threads, OpenMP, shmem
 Scalability: to 100s of processors
 Drawbacks: Memory-CPU bandwidth; non-uniform access times
 Software Availability: many 1000s of ISVs

Distributed
 Examples: Cray T3E, Maspar, IBM SP2
 Communications: MPI
 Scalability: to 1000s of processors
 Drawbacks: System administration; programming is hard to develop and maintain
 Software Availability: 100s of ISVs
Parallel Programming Models
There are several parallel programming models in common use:
► Shared Memory
 tasks share a common address space, which they read and write asynchronously.
► Threads
 a single process can have multiple, concurrent execution paths. Example implementations: POSIX threads & OpenMP.
► Message Passing
 tasks exchange data through communications by sending and receiving messages. Ex: MPI & MPI-2 specification.
► Data Parallel
 tasks perform the same operation on their partition of work. Ex: High Performance Fortran – supports data parallel constructs.
► Hybrid
Parallel Programming Models …contd.
► Parallel programming models exist as an abstraction above hardware and memory architectures.
► Which model to use is often a combination of what is available and personal choice. There is no "best" model, although there certainly are better implementations of some models over others.
MPI - Message Passing Interface
► MPI or MPI-1 is a library specification for message-passing.
► MPI-2: adds Parallel I/O, Dynamic Process Management, Remote Memory Operations, C++ & F90 extensions …
► MPI Standard: http://www-unix.mcs.anl.gov/mpi/standard.html
► MPI Standard 1.1 Index: http://www.mpi-forum.org/docs/mpi-11-html/node182.html
► MPI-2 Standard Index: http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-2.0/node306.htm
► MPI Forum Home Page: http://www.mpi-forum.org/index.html
MPI Examples …
A few MPI Implementations: MPICH-1, MPICH2, LAM/MPI …