Introduction to Parallel Programming with C and MPI at MCSR Part 2


Introduction to Parallel Programming with C and MPI at MCSR
Part 2: Broadcast/Reduce
Collective Message Passing
• Broadcast
– Sends a message from one to all processes in the group
• Scatter
– Distributes each element of a data array to a different process for computation
• Gather
– The reverse of scatter: retrieves data elements into an array from multiple processes
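A minimal sketch of these three operations follows; it is illustrative only and not one of the workshop files, and it assumes an array size N that divides evenly among the processes:

/* collectives_sketch.c - illustrative sketch only, not one of the workshop files.
 * Broadcast a parameter, scatter an array in equal slices, gather results back.
 * Assumes N is evenly divisible by the number of processes.
 */
#include <stdio.h>
#include <mpi.h>

#define N 8

int main(int argc, char *argv[])
{
    int myid, numprocs, i;
    int data[N], chunk[N], result[N];
    int factor = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    int per_rank = N / numprocs;

    if (myid == 0) {
        factor = 10;                      /* a value every process needs */
        for (i = 0; i < N; i++)
            data[i] = i;                  /* the full array lives only on the root */
    }

    /* Broadcast: send factor from the root (process 0) to every process */
    MPI_Bcast(&factor, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Scatter: each process receives its own slice of data[] */
    MPI_Scatter(data, per_rank, MPI_INT, chunk, per_rank, MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < per_rank; i++)        /* work on the local slice */
        chunk[i] *= factor;

    /* Gather: the root collects the slices back into result[] */
    MPI_Gather(chunk, per_rank, MPI_INT, result, per_rank, MPI_INT, 0, MPI_COMM_WORLD);

    if (myid == 0)
        for (i = 0; i < N; i++)
            printf("result[%d] = %d\n", i, result[i]);

    MPI_Finalize();
    return 0;
}

Note that every process in the communicator must make the collective call; the root argument (0 here) names the process that owns the full array.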
Collective Message Passing w/MPI
• MPI_Bcast() – Broadcast from root to all other processes
• MPI_Gather() – Gather values from a group of processes
• MPI_Scatter() – Scatters a buffer in parts to a group of processes
• MPI_Alltoall() – Sends data from all processes to all processes
• MPI_Reduce() – Combine values on all processes into a single value
• MPI_Reduce_scatter() – Combine values, then scatter the results to the group of processes
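MPI_Reduce_scatter() is the least familiar of these, so here is a small sketch of how it behaves: like a reduce followed by a scatter, it combines values element-wise across processes and then hands each process its share of the combined result. The buffer sizes and values below are illustrative only, not taken from the workshop files:

/* reduce_scatter_sketch.c - illustrative sketch only, not from the workshop files.
 * Each process contributes one value per process; MPI_Reduce_scatter() sums the
 * contributions element-wise and then hands each process its own summed element.
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, i, recvval;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    int *sendbuf    = malloc(numprocs * sizeof(int));
    int *recvcounts = malloc(numprocs * sizeof(int));

    for (i = 0; i < numprocs; i++) {
        sendbuf[i]    = myid + i;   /* this process's contribution to element i */
        recvcounts[i] = 1;          /* every process gets one summed element back */
    }

    /* combine (sum) element-wise across all processes, then scatter the results */
    MPI_Reduce_scatter(sendbuf, &recvval, recvcounts, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("process %d received %d\n", myid, recvval);

    free(sendbuf);
    free(recvcounts);
    MPI_Finalize();
    return 0;
}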
Log in to mimosa & get workshop files
A. Use secure shell to log in to mimosa using your assigned training account:
ssh your_training_account@mimosa
See lab instructor for password.
B. Copy workshop files into your home directory by running:
/usr/local/apps/ppro/prepare_mpi_workshop
Examine, compile, and execute add_mpi.c
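The actual add_mpi.c is among the copied workshop files and is not reproduced in this transcript. As a rough sketch of a program of this kind, assuming it broadcasts an array of MAXSIZE numbers, has each process sum its own slice, and reduces the partial sums on process 0 (the names and sizes here are illustrative, not the real file):

/* add_sketch.c - illustrative sketch only; the real add_mpi.c is in the copied
 * workshop files. Broadcast an array from process 0, let each process sum its
 * own slice, and reduce the partial sums back onto process 0.
 */
#include <stdio.h>
#include <mpi.h>

#define MAXSIZE 1000   /* assumed size; this sketch requires MAXSIZE % numprocs == 0 */

int main(int argc, char *argv[])
{
    int  myid, numprocs, i;
    long data[MAXSIZE];
    long mysum = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0)                           /* the root fills the array */
        for (i = 0; i < MAXSIZE; i++)
            data[i] = i + 1;

    /* broadcast the whole array so every process has the data */
    MPI_Bcast(data, MAXSIZE, MPI_LONG, 0, MPI_COMM_WORLD);

    /* each process sums its own contiguous slice */
    int chunk = MAXSIZE / numprocs;
    for (i = myid * chunk; i < (myid + 1) * chunk; i++)
        mysum += data[i];

    /* combine the partial sums into one total on process 0 */
    MPI_Reduce(&mysum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("sum = %ld\n", total);

    MPI_Finalize();
    return 0;
}

On a typical MPI installation the compile step is something like mpicc add_mpi.c -o add_mpi; on mimosa the resulting executable is then run through the PBS script rather than interactively.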
Examine add_mpi.pbs
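The actual add_mpi.pbs is among the copied files. A generic PBS job script for an MPI program looks roughly like the following; the resource requests, the launcher name, and the process counts are site specific, so check the real script on mimosa:

#!/bin/bash
# (illustrative only; the directives and launch line on mimosa may differ)
#PBS -N add_mpi
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:05:00
#PBS -j oe

cd $PBS_O_WORKDIR
mpirun -np 4 ./add_mpi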
Submit PBS Script: add_mpi.pbs
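On a PBS system the script is submitted with qsub add_mpi.pbs, and qstat shows the state of the queued or running job.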
Examine Output and Errors from add_mpi.c
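When the job finishes, PBS normally writes the job's standard output and standard error to files in the submission directory, named from the job name and job ID (or joined into one file if the script requests it); any error messages from the run end up there.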
Determine Speedup
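Speedup is usually computed as S(n) = T(1) / T(n): the wall-clock time of the one-process run divided by the wall-clock time on n processes.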
Determine Parallel Efficiency
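Parallel efficiency is then E(n) = S(n) / n. For example, a speedup of 3.2 on 4 processes gives an efficiency of 3.2 / 4 = 0.8, or 80%.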
How Could Speedup/Efficiency Improve?
What Happens to Results When MAXSIZE Is Not Evenly Divisible by n?
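If the code assigns each process MAXSIZE / n elements using integer division, as is common, the leftover MAXSIZE mod n elements are never summed by any process, so the total comes out short.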
Exercise 1: Change Code to Work When MAXSIZE Is Not Evenly Divisible by n
Exercise 2: Change Code to Improve Speedup