
Fall 2007
CS 668/788
Parallel Computing
Fred Annexstein
[email protected]
513-556-1807
Lecture 1: Welcome
• Goals of this course
• Syllabus, policies, grading
• Blackboard Resources
• LINC Linux cluster
• Introduction/Motivation for HPPC
• Scope of the Problems in Parallel Computing
Goals
• Primary:
– Provide an introduction to the computing systems,
programming approaches, some common numerical
and algorithmic methods used for high performance
parallel computing
• Secondary:
– Have a course meeting the competency requirements of
the RRSCS
– Provide hands-on parallel programming experience
• Official Syllabus Available on Blackboard
• Recommended Textbooks
– 1. Parallel Programming in C with MPI and
OpenMP, Michael J. Quinn
– 2. Parallel Programming with MPI, Peter Pacheco
– 3. Introduction to Parallel Computing: Design and
Analysis of Algorithms, Ananth Grama, Anshul
Gupta, George Karypis, Vipin Kumar
– 4. Using MPI - 2nd Edition: Portable Parallel
Programming with the Message Passing Interface
by William Gropp
Workload/Grading
• Exams (2)
– Worth 40% of the grade
• Written exercises (3-4)
– May/may not be graded
• Programming Assignments (3-4)
– May be done in Groups
– MPI programming, performance measurement
• Research papers (1)
– Discussion of research questions, strengths, weaknesses,
interesting points, and a contemporary bibliography
• Final project (1)
– Group programming project and report
Policies
• Missed Exams:
– Missed exams cannot be made up unless pre-approved. Please see the instructor as soon as
possible in the event of a conflict.
• Academic Misconduct:
– Plagiarism on assignments, quizzes or exams will not
be tolerated. See your student code of conduct
(http://www.uc.edu/conduct/Code_of_Conduct.html)
for more on the consequences of academic
misconduct. There are no “small” offenses.
Blackboard
• Syllabus and my contact info
• Announcements
• Lecture slides
• Assignment handouts
• Web resources relevant to the course
• Discussion board
• Grades
What is the Ralph Regula
School?
• The Ralph Regula School of Computational Science
is a statewide, virtual school focused on computational
science. It is a collaborative effort of the Ohio Board of
Regents, Ohio Supercomputer Center, Ohio Learning
Network and Ohio's colleges and universities. With
funding from NSF, the school acts as a coordinating
entity for a variety of computational science education
activities aimed at making education in computational
science available to students across Ohio, as well as to
workers seeking continuing education about this
technology.
• Website: http://www.rrscs.org
CS LINC Cluster
• Michal Kouril’s links
– http://www.ececs.uc.edu/~kourilm/clusters/
– See README file for instructions on running MPI
code on beowulf.linc.uc.edu
• Accounts
– ECE/CS students should already have an account
– I can request accounts for the non-ECE/CS students
• Access
– Remote access only, the cluster is in the ECE/CS
server/machine room on the 8th floor of Rhodes,
visible through windows in the 890’s hallway
Why HPPC?
• Who needs a roomful of computers anyway?
• My PC and XBOX run at GFLOP rates (Billion Floating Point Operations per second)
[Image: NCSA TeraGrid IA-64 Linux Cluster
(http://www.ncsa.uiuc.edu/UserInfo/Resources/Hardware/TGIA64LinuxCluster/)]
Needed by People who study
Science and Engineering
• Materials / Superconductivity
• Fluid Flow
• Weather/Climate
• Structural Deformation
• Genetics / Protein interactions
• Seismic
Many Research Projects in Natural Sciences and
Engineering cannot exist without HPPC
Why are the problems so large?
• 3-Dimensional
– If you want to increase the level of resolution by a factor
of 10, the problem size increases by 10^3
• Many Length Scales (both time and space)
– If you want to observe the interactions between very
small local phenomena and larger, more global
phenomena
• The number of relationships between data items
grows quadratically
– Example: the human genome has about 3.2 billion base
pairs, which means roughly 5 × 10^18 pairwise relations
(see the worked arithmetic below)
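As a rough check on that figure (my arithmetic, not from the slide), counting every unordered pair of base pairs as one relation gives

\binom{3.2 \times 10^{9}}{2} \;=\; \frac{3.2 \times 10^{9}\,(3.2 \times 10^{9} - 1)}{2} \;\approx\; 5.1 \times 10^{18}

i.e., on the order of five quintillion relations.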
How can you solve these
problems?
• Take advantage of parallelism
– Large problems generally have many operations
which can be performed concurrently
• Parallelism can be exploited at many levels by
the computer hardware
– Within the CPU core, multiple functional units,
pipelining
– Within the Chip, many cores
– On a node, multiple chips
– In a system, many nodes
However….
• Parallelism has overheads
– At the core and chip level the cost is
complexity and money
– Most applications get only a fraction of peak
performance (10%-20%)
– At the chip and node level, the memory bus can
get saturated if too many cores share it
– Between nodes, the communication
infrastructure is typically much slower than the
CPU
Necessity Yields Modest Success
• Power of CPUs keeps growing
exponentially
• Parallel programming environments are
changing very slowly – parallel programming remains
much harder than sequential programming
Two standards have emerged
• MPI library, for processes that do not share
memory
• OpenMP directives, for processes that do
share memory
Why MPI?
• MPI = “Message Passing Interface”
• Standard specification for message-passing libraries
• Very Portable
• Libraries available on virtually all parallel
computers
• Free libraries also available for networks
of workstations or commodity clusters
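A minimal MPI "hello world" in C gives the flavor of the library (the program itself is an illustrative sketch, not course material; the MPI calls shown are standard):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut the runtime down */
    return 0;
}

Each process runs the same executable; only the rank differs.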
Why OpenMP?
• OpenMP is an application programming
interface (API) for shared-memory
systems
• Based on a model of creating and
scheduling multi-threaded computations
• Supports higher-performance parallel
programming of symmetric
multiprocessors
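A correspondingly minimal OpenMP sketch in C (again illustrative, not from the slides): a single directive asks the compiler to split the loop across threads that share the array.

#include <stdio.h>
#include <omp.h>

int main(void) {
    double a[1000];
    #pragma omp parallel for          /* split iterations across threads */
    for (int i = 0; i < 1000; i++)
        a[i] = 2.0 * i;
    printf("a[999] = %.1f (up to %d threads)\n", a[999], omp_get_max_threads());
    return 0;
}

With GCC this is built with the -fopenmp flag; flags vary by compiler vendor.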
“All About the Benjamins”
Commercial Parallel Systems
• Relatively costly per processor
• Primitive programming environments
• Focus on commercial sales
• Scientists looked for alternatives
Beowulf Concept
• NASA (Sterling and Becker)
• Commodity processors
• Commodity interconnect
• Linux operating system
• Message Passing Interface (MPI) library
• High performance/$ for certain applications
Programmer Desperately
Seeking Concurrency
Task Dependence Graph
• Directed graph
• Vertices = tasks
• Edges = dependences
Data Parallelism
• Independent tasks apply the same operation to different elements of a
data set
• Okay to perform the operations concurrently (sketched below)
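A hedged C sketch of data parallelism (the function and array names are invented for illustration): every iteration applies the same operation to a different element, so the iterations could run concurrently.

/* same operation on each element; iterations are independent */
void add_arrays(double a[], const double b[], const double c[], int n) {
    for (int i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}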
Functional Parallelism
• Independent tasks apply different operations to different data
elements
• Example (sketched below): the first and second statements are
independent of each other
• So are the third and fourth statements
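The sketch below (my illustration; variable names invented) shows the idea: statements 1 and 2 can execute concurrently, and so can statements 3 and 4, even though each applies a different operation.

void functional_example(void) {
    double a, b, m, s;
    a = 2.0;              /* statement 1 */
    b = 3.0;              /* statement 2: independent of statement 1 */
    m = (a + b) / 2.0;    /* statement 3: needs a and b */
    s = a * a + b * b;    /* statement 4: needs a and b, but not m */
    (void)m; (void)s;     /* suppress unused-variable warnings */
}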
Pipelining
• Divide a process into stages
• Produce several items simultaneously
Why not just use a Compiler?
• Parallelizing compiler - Detects parallelism in a sequential program
• Produces a parallel executable program
Advantages
• Can leverage millions of lines of existing serial programs
• Saves time and labor - Requires no retraining of programmers
• Sequential programming easier than parallel programming
Disadvantages
• Parallelism may be irretrievably lost when programs are written in
sequential languages
• Simple example: Compute all partial sums in an array (see the
sketch after this list)
• Performance of parallelizing compilers on a broad range of
applications is still up in the air
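The partial-sums example mentioned above, written as ordinary sequential C (my sketch): the loop-carried dependence on sum[i-1] is exactly the kind of structure from which a parallelizing compiler struggles to recover parallelism, even though parallel prefix-sum algorithms are well known.

/* partial (prefix) sums: sum[i] = a[0] + a[1] + ... + a[i] */
void prefix_sums(const double a[], double sum[], int n) {
    sum[0] = a[0];
    for (int i = 1; i < n; i++)
        sum[i] = sum[i - 1] + a[i];   /* depends on the previous iteration */
}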
Or we could Extend Languages
The programmer can give directives or clues to the
compiler about how to parallelize
Advantages
• Easiest, quickest, and least expensive
• Allows existing compiler technology to be
leveraged
• New libraries can be ready soon after new
parallel computers are available
Disadvantages
• Lack of compiler support to catch errors
• Easy to write programs that are difficult to debug
Or Create New Parallel Languages
Advantages
• Allows programmer to communicate parallelism
to compiler directly
• Improves probability that executable will achieve
high performance
Disadvantages
• Requires development of new compilers
• New languages may not become standards
• Programmer resistance
Where are we in 2007?
• Low-level approach is most popular
• Augment existing language with low-level
parallel constructs and directives
• MPI and OpenMP are prime examples
Advantages
• Efficiency
• Portability
Disadvantages
• More difficult to program and debug
Programming Assignment #1
• Log into beowulf.linc.uc.edu and run
sample programs.
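A typical session looks something like the following (a sketch only; the file names are hypothetical, and the exact compiler wrapper, launcher, and any queueing steps on beowulf.linc.uc.edu are described in the README noted earlier):

    mpicc -o hello hello.c      (compile with the MPI compiler wrapper)
    mpirun -np 4 ./hello        (launch 4 processes; the launcher name may differ)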
Reading Assignment #1 on
Blackboard