CS 615: Design & Analysis of Algorithms


CS 615:
Design & Analysis of Algorithms
Chapter 2: Efficiency of Algorithms
Course Content
1. Introduction, Algorithmic Notation and Flowcharts (Brassard & Bratley, Chap. 3)
2. Efficiency of Algorithms (Brassard & Bratley, Chap. 2)
3. Basic Data Structures (Brassard & Bratley, Chap. 5)
4. Sorting (Weiss, Chap. 7)
5. Searching (Brassard & Bratley, Chap. 9)
6. Graph Algorithms (Weiss, Chap. 9)
7. Randomized Algorithms (Weiss, Chap. 10)
8. String Searching (Sedgewick, Chap. 19)
9. NP-Completeness (Sedgewick, Chap. 40)
17 July 2015
CS 615 Design & Analysis of Algorithms
2
Definitions
Problem:
A situation to be solved by an algorithm
Example:
Multiply two integers
Instance:
A particular case of the problem
Example:
Multiply(981, 1234)
An algorithm must work correctly
on every instance of the problem it claims to solve
To prove an algorithm is not correct:
find an instance that the algorithm cannot solve
correctly
Efficiency of Algorithms
To decide which algorithm to choose:
Empirical Approach
Program the competing algorithms
Run them on different instances on a computer
Resources:
Computing time
Storage space
Number of processors (for parallel algorithms)
Efficiency of Algorithms
To decide which algorithm to choose:
Theoretical Approach
Use formal methods to analyze the efficiency
Does not depend on the computer
No programming needed
Efficiency of Algorithms
To decide which algorithm to choose
Hybrid Approach
Describe the algorithm’s efficiency function theoretically
Empirically determine numerical parameters
for a particular machine
Predict the time an actual implementation will take
to solve an instance
Principle of Invariance
Two different implementations of an algorithm
Will not differ in efficiency
by more than some multiplicative constant
Example
If the constant is 5:
if the first implementation
takes 1 second to solve an instance
then a second implementation (possibly on a
different machine)
will not take more than 5 seconds
Principle of Invariance
For an instance of size n
Implementation 1:
takes time t1(n)
Implementation 2:
takes time t2(n)
There always exist positive constants c and d
such that
t1(n) ≤ c*t2(n) and
t2(n) ≤ d*t1(n)
for all sufficiently large n
Results of the Principle of Invariance
1. A change in the implementation of the same
algorithm
can only cause a constant factor of change in efficiency
2. The principle does not depend on
the computer we use
the compiler we use
the abilities of the person doing the coding
3. If we want a radical change in efficiency
we need to change the algorithm itself
Theoretical Efficiency
For a given function t,
an algorithm for some problem
takes a time in the order of t(n)
if there exists a positive constant c
such that the algorithm is capable of
solving every instance of size n
in not more than
c*t(n) seconds/hours/years.
For numerical problems
n may sometimes be the value
rather than the size of the instance
Algorithm Types
The time it takes to solve an instance of a
Linear algorithm is
never greater than c*n
Quadratic algorithm is
never greater than c*n^2
Cubic algorithm is
never greater than c*n^3
Polynomial algorithm is
never greater than n^k
Exponential algorithm is
never greater than c^n
where c and k are appropriate constants
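As a quick sketch of how these bounds separate, the Python snippet below evaluates each bound for a few values of n; the function name and the constants c = 2, k = 3 are illustrative choices of mine, not values from the slides.

```python
def bounds(n, c=2, k=3):
    # Evaluate each bound from the slide for an instance of size n.
    # c and k are illustrative constants, not prescribed values.
    return {
        "linear": c * n,          # c*n
        "quadratic": c * n ** 2,  # c*n^2
        "cubic": c * n ** 3,      # c*n^3
        "polynomial": n ** k,     # n^k
        "exponential": c ** n,    # c^n
    }

# Doubling n from 10 to 20 multiplies the quadratic bound by 4,
# but squares the exponential one.
for n in (10, 20):
    print(n, bounds(n))
```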
Worst Case Analysis
Analyse an algorithm by
considering only the cases that take
the maximum amount of time
If the algorithm solves instances of size n
in time t(n),
then the worst case takes no more than
c*t(n)
Useful when the algorithm is applied where
an upper bound on the running time must be known
Example:
Response time for a nuclear power plant.
Average Time Analysis
If the algorithm is going to be used many
times
it is useful to know
the average execution time
on instances of size n
It is harder to analyse the average case:
the distribution of the data must be known
Insertion sort’s average time is
in the order of n^2
Elementary Operation
Is one whose execution time is
bounded above by a constant
The constant does not depend on
The size
or other parameters of the instance considered
Example
Is x = y + w*z an elementary operation?
Suppose
ta : time to execute an addition (constant)
tm : time to execute a multiplication (constant)
ts : time to execute an assignment (constant)
t : time required to execute a additions,
m multiplications, and s assignments:
t ≤ a*ta + m*tm + s*ts, where a, m, s are constants
t ≤ max(ta, tm, ts) * (a + m + s)
Elementary Operation
A single line of a program
may correspond to a variable number of elementary
operations
x = min{T[i] | 1 ≤ i ≤ n}
The time required to compute min
increases with n:
min() is not an elementary operation !
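To see why, here is a minimal Python sketch (the comparison counter is my own addition): computing the minimum of n elements always takes n - 1 comparisons, so the cost grows with n rather than being bounded by a constant.

```python
def min_with_count(t):
    # Scan the array once, counting element comparisons.
    comparisons = 0
    best = t[0]
    for x in t[1:]:
        comparisons += 1
        if x < best:
            best = x
    return best, comparisons
```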
Elementary Operation
addition, multiplication:
Normally
Time required to compute
addition, multiplication
Depends on the length of the operands
But
it is reasonable to assume that
addition and multiplication are elementary operations
when the operands are of fixed length
Some Algorithm Examples
Calculating Determinants
Sorting
Multiplication of Large Integers
Calculating the Greatest Common Divisor
Calculating Fibonacci Sequences
Fourier Transforms
Calculating Determinants
Recursive algorithm (from the definition)
To compute the determinant of an n x n matrix
takes time proportional to n!
Worse than exponential time
Experiments:
5 x 5 matrix 20 sec.
10 x 10 matrix 10 min.
Estimation
20 x 20 matrix 10 million years.
Gauss-Jordan Elimination
To compute the determinant of an n x n matrix
takes time proportional to n^3
Experiments:
10 x 10 matrix 0.01 sec.
20 x 20 matrix 0.05 sec.
100 x 100 matrix 5.5 sec.
Sorting
Arranging n objects based on
the “ordering function” defined for these objects
No comparison-based sorting algorithm is
faster than the order of n*log n
Insertion sort
takes time proportional to n^2
Experiment:
sorting 1 000 elements takes 3 sec.
Estimation:
sorting 100 000 elements would take 9.5 hrs.
Selection sort
takes time proportional to n^2
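A Python sketch of insertion sort (a standard formulation, not copied from the course texts): each element is shifted left into the sorted prefix, giving up to n(n-1)/2 comparisons and hence time in the order of n^2.

```python
def insertion_sort(a):
    # Insert each element into the sorted prefix to its left.
    a = a[:]  # keep the input intact
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```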
Sorting
Heapsort
takes time proportional to n*log n
even in the worst case
Mergesort
takes time proportional to n*log n
even in the worst case
Quicksort
takes time proportional to n*log n on average
Experiment:
sorting 1 000 elements takes 0.2 sec.
sorting 100 000 elements takes 30 sec.
Multiplication of Large Integers
When multiplying large integers
Operands might become too large
to hold in a single word
Assume two large integers
of sizes m and n are to be multiplied
Classic algorithm:
multiply each digit of one integer
by each digit of the other
Takes time proportional to m x n
More efficient algorithm:
Divide-and-conquer:
takes time proportional to n x m^lg(3/2) ≈ n x m^0.59
where m is the size of the smaller integer
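The divide-and-conquer scheme referred to here is commonly known as Karatsuba’s algorithm. A minimal Python sketch (splitting by decimal digits for readability; the function name is my own): one multiplication of full-size numbers is replaced by three multiplications of half-size numbers.

```python
def karatsuba(x, y):
    # Three recursive multiplications of half-size numbers
    # instead of four.
    if x < 10 or y < 10:
        return x * y  # base case: a single-digit factor
    half = max(len(str(x)), len(str(y))) // 2
    xh, xl = divmod(x, 10 ** half)  # split x into high and low parts
    yh, yl = divmod(y, 10 ** half)
    low = karatsuba(xl, yl)
    high = karatsuba(xh, yh)
    mid = karatsuba(xl + xh, yl + yh) - low - high
    return high * 10 ** (2 * half) + mid * 10 ** half + low
```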
Calculating the Greatest Common Divisor
Denoted by gcd(m,n):
the largest integer that
divides both m and n exactly
gcd(6,15)=3
gcd(10,21)=1

function gcd(m,n)
  i = min(m,n) + 1
  repeat
    i = i - 1
  until i divides both m and n exactly
  return i

function Euclid(m,n)
  while m > 0 do
    t = m
    m = n mod m
    n = t
  return n

The naive gcd algorithm takes time
in the order of n
Euclid’s algorithm takes time
in the order of log n
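Both pseudocode algorithms transcribe directly into Python; this is a sketch in which only the function names differ from the slides.

```python
def gcd_naive(m, n):
    # Count down from min(m, n): time in the order of n in the worst case.
    i = min(m, n) + 1
    while True:
        i -= 1
        if m % i == 0 and n % i == 0:
            return i

def gcd_euclid(m, n):
    # Euclid's algorithm: order of log n division steps.
    while m > 0:
        m, n = n % m, m
    return n
```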
Calculating Fibonacci Sequences
Fibonacci Sequence:
f0 = 0; f1 = 1;
fn = fn-1 + fn-2

function Fibrec(n)
  if n < 2 then return n
  else return Fibrec(n-1) + Fibrec(n-2)

function Fibiter(n)
  i = 1
  j = 0
  for k = 1 to n do
    j = i + j
    i = j - i
  return j

The time of Fibrec is in the order of fn itself
The time of Fibiter is in the order of n
Comparison of running times:

n        10       20       30      50       100
Fibrec   8 ms     1 sec    2 min   21 days  10^9 years
Fibiter  0.17 ms  0.33 ms  0.5 ms  0.75 ms  1.5 ms
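The two Fibonacci programs transcribe directly into Python (names adapted, otherwise a direct transcription of the pseudocode), which makes the gap in the timing figures easy to reproduce for small n.

```python
def fib_rec(n):
    # Naive recursion: running time grows like f_n itself (exponential).
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # Iterative version: n additions, time in the order of n.
    i, j = 1, 0
    for _ in range(n):
        j = i + j
        i = j - i
    return j
```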
Fourier Transforms
One of the most useful algorithms in history
Used in
Optics
Acoustics
Quantum physics
Telecommunications
System theory
Signal processing
Speech processing
Example
Used to analyze data from
the 1964 Alaska earthquake
The classic algorithm took 26 minutes of computation
A newer algorithm (the fast Fourier transform) needed less than 2.5 seconds
End of Chapter 2
Efficiency of Algorithms