
Advanced Algorithms Analysis and Design Lecture 5 By Engr Huma Ayub Vine


Introduction: Elementary Algorithmics

We should be clear about the distinction between the terms process, program and algorithm.

A process is a sequence of activities actually being carried out, or executed, to solve a problem. An algorithm and a program, on the other hand, are only descriptions of a process in some notation. Further, a program is an algorithm expressed in a notation that can be understood and executed by a computer system.

We are constantly involved in solving problems. We may take an unsatisfactory, unacceptable or undesirable situation itself as a problem. One way of looking at a possible solution of a problem is as a sequence of activities (if such a sequence exists at all) that, if carried out using the allowed/available tools, leads us from the unsatisfactory (initial) position to an acceptable, satisfactory or desired position.

Problems and Instances

An instance of a problem is also called a question.

We know how to find the roots of a general quadratic equation. The issue of finding the roots of the general quadratic equation ax² + bx + c = 0, with a ≠ 0, is called a problem, whereas the issue of finding the roots of the particular equation 3x² + 4x + 1 = 0 is called a question, or an instance, of the (general) problem.

Problems and Instances

Problem: How to multiply two positive integers? (The domain of definition is important.)

Solution: A way to solve the given problem.

Algorithm: A sequence of steps that provides the solution.

Instances: Different pairs of numbers to be multiplied, as long as they are within the range of the defined data type.

620 x 800: an instance
111 x 222: an instance
-111 x 2.22: not an instance (-111 is not positive and 2.22 is not an integer)

Limits

• Size of the instances to be handled
• Storage size
• Constraints of the computing machines
• Constraints of the programming tools

Efficiency of Algorithms (Selection of Algorithms)

When solving a problem, several suitable algorithms may be available. Obviously we want to choose the best. How do we decide? If we have only one or two instances (and perhaps small ones), we simply choose the algorithm that is easiest to program, or use one for which a program already exists. Otherwise, we have to choose very carefully.

Efficiency of Algorithms The empirical (or a posteriori) Approach

The empirical (or a posteriori) approach to choosing an algorithm consists of programming the competing techniques and trying them on different instances with the help of a computer.

The theoretical (or a priori) Approach

The theoretical (or a priori) approach, which we favor, consists of determining mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered. The resources of most interest are computing time (the most critical) and storage space.


Throughout the class when we speak of the efficiency of an algorithm, we shall mean how fast it runs. Occasionally, we'll also be interested in an algorithm's storage requirements or any other resources (like number of processors required by a parallel algorithm).

Efficiency of Algorithms

The size of an instance corresponds formally to the number of bits needed to represent the instance on a computer. Less formally, in analysis we use "size" to mean any integer that in some way measures the number of components in an instance. Examples: in sorting, the size is the number of elements to be sorted; in graphs, the size is the number of nodes or edges (or both).

The advantage of the theoretical approach is that it doesn't depend on the computer used, nor the programming language, nor the skill of the programmer.

Hybrid approach

There is the hybrid approach where efficiency is determined theoretically, then any required numerical parameters are determined empirically for a particular program and a machine.

Efficiency of an Algorithm (Summary)

• Execution time
• Storage requirement
• Size of instances to be handled
• Type of language to be used for programming
• Type of machine for implementation

Average and worst-case Analysis

• Worst-case time is considered for implementation of an algorithm whose response time is critical.

• Average execution time is taken if an algorithm is to be used many times on many different instances.

• A useful analysis of the average behaviour of an algorithm requires a priori knowledge of the distribution of the instances to be solved, which is often an unrealistic requirement.

Sorting

Average and worst-case Analysis

Sorting by Insertion algorithm

The best-case input is an array that is already sorted. In this case insertion sort has a linear running time, i.e., Θ(n): during each iteration, the first remaining element of the input is compared only with the right-most element of the sorted subsection of the array.

The worst-case input is an array sorted in reverse order. In this case every iteration of the inner loop scans and shifts the entire sorted subsection of the array before inserting the next element, so insertion sort has a quadratic running time, i.e., O(n²).

The average case is also quadratic, which makes insertion sort impractical for sorting large arrays.

Insertion sort pseudocode (recall)

InsertionSort(A)                  ** sort A[1..n] in place
  for j ← 2 to n do
    key ← A[j]                    ** insert A[j] into the sorted sublist A[1..j − 1]
    i ← j − 1
    while (i > 0 and A[i] > key) do
      A[i+1] ← A[i]
      i ← i − 1
    A[i+1] ← key

Trace on the array 5 7 0 3 4 2 6 1 (the number of shifts made for each insertion is shown in parentheses):

5 7 0 3 4 2 6 1   (initial)
5 7 0 3 4 2 6 1   (0)
0 5 7 3 4 2 6 1   (2)
0 3 5 7 4 2 6 1   (2)
0 3 4 5 7 2 6 1   (2)
0 2 3 4 5 7 6 1   (4)
0 2 3 4 5 6 7 1   (1)
0 1 2 3 4 5 6 7   (6)
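For reference, a minimal runnable Python sketch of the same procedure (0-indexed, so the outer loop starts at index 1 rather than 2):

def insertion_sort(a):
    """Sort the list a in place and return it."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix that are greater than key.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 7, 0, 3, 4, 2, 6, 1]))   # [0, 1, 2, 3, 4, 5, 6, 7]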

Quick Analysis

Insertion sort has a quadratic running time, i.e., O(n²). We now look for a faster sorting algorithm.

Shell Sort

• Uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm.
• The algorithm sorts an array with n elements.

• The subsequences to be sorted are determined by the increments (h_t, h_t-1, ..., h_1).

• The number of subsequences is determined by the number of increments (h values).

• It provides better performance because, when the last passes are made using small increments, few elements are out of order, thanks to all the work that was done in earlier passes.

• The number of comparisons done by Shell sort is a function of the sequence of increments used, so a complete analysis is extremely difficult.

• It has been shown that the best choice of increment is approximately h = 1.72 · n^(1/3), and with this choice the average running time is proportional to n^(5/3).

• Average case: C(n) = O(n^(5/3)) when h = 1.72 · n^(1/3)
• Worst case: C(n) = O(n²) when h = 1
• Best case: C(n) = O(n (log n)²) if the increments are carefully chosen, for fairly large n

Shellsort

• Shell sort is also known as diminishing increment sort.
• The distance between compared elements decreases as the sorting algorithm runs, until the last phase, in which adjacent elements are compared.
• It is not stable.
• The advantage of Shellsort is that it is efficient for medium-size lists; for bigger lists the algorithm is not the best choice.
• It is about 5 times faster than bubble sort and a little over twice as fast as insertion sort, its closest competitor.

Shellsort

The disadvantage of Shellsort is that it is a more complex algorithm and not nearly as efficient as the merge, heap and quick sorts.

Shell sort is still significantly slower than the merge, heap and quick sorts, but its relatively simple algorithm makes it a good choice for sorting lists of fewer than about 5000 items, unless speed is important.

Shell Sort Algorithm

The Shell sort algorithm can be defined in two steps:

Step 1: Divide the original list into smaller sub-lists.

Step 2: Sort the individual sub-lists using any known sorting algorithm (such as bubble sort, insertion sort or selection sort).

Example: a list of 12 numbers: 7 19 24 13 31 8 82 18 44 63 5 29

The first increment is H = 1.72 · 12^(1/3) ≈ 4, followed by the increments 3, 2 and 1. In each pass the subsequences of elements that are H positions apart are sorted by insertion; after the final pass with H = 1 the list is fully sorted:

Sorted list: 5 7 8 13 18 19 24 29 31 44 63 82
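A minimal runnable Python sketch of Shell sort, assuming the simple decreasing increment sequence 4, 3, 2, 1 used in the example above (any decreasing sequence that ends in 1 works):

def shell_sort(a, increments=(4, 3, 2, 1)):
    """Sort the list a in place using insertion sort on h-spaced subsequences."""
    for h in increments:
        # Insertion-sort each subsequence of elements that are h positions apart.
        for j in range(h, len(a)):
            key = a[j]
            i = j - h
            while i >= 0 and a[i] > key:
                a[i + h] = a[i]
                i -= h
            a[i + h] = key
    return a

print(shell_sort([7, 19, 24, 13, 31, 8, 82, 18, 44, 63, 5, 29]))
# [5, 7, 8, 13, 18, 19, 24, 29, 31, 44, 63, 82]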

Radix Sort

• It is used to sort numbers, using 10 buckets for the decimal digits 0 to 9; it can also be used for words.

• The given numbers are first distributed according to their units digit, each number being placed in the corresponding bucket; the buckets are then combined in order before distributing according to the tens digit.

• This sorting process continues up to the last digit (i.e. the most significant digit), taking d passes for d-digit numbers.

• For 5-digit numbers, radix sort places the numbers into the buckets in 5 passes and combines the buckets five times.

• Radix sort is sometimes used to sort records of information that are keyed by multiple fields (e.g. to sort dates by year, month and day).

• Number of comparisons: C(n) = d × s × n, where
  d = number of possible digit values (d = 10 for decimal digits),
  s = number of digits in a number (s = 4 for the numbers 972, 8345 and 89, taking the longest),
  n = number of items (the given numbers to be sorted).

Radix Sort

Limit input to fixed-length numbers or words.

Represent symbols in some base b.

Each input has exactly d “digits”.

Sort the inputs d times, using one digit position as the key each time.

Sorting must proceed from the least-significant to the most-significant digit.

A "stable" sort must be used, keeping equal-keyed items in the same order.

Radix Sort Example

Input data (eight three-letter words): aba bac caa acb bab cca aac bba

Pass 1: looking at the rightmost position, place each word into pile a, b or c (keeping the order within each pile), then join the piles:
  pile a: aba caa cca bba   pile b: acb bab   pile c: bac aac
  joined: aba caa cca bba acb bab bac aac

Pass 2: looking at the middle position, place into piles, then join:
  pile a: caa bab bac aac   pile b: aba bba   pile c: cca acb
  joined: caa bab bac aac aba bba cca acb

Pass 3: looking at the leftmost position, place into piles, then join:
  pile a: aac aba acb   pile b: bab bac bba   pile c: caa cca
  joined: aac aba acb bab bac bba caa cca

The result is sorted: aac aba acb bab bac bba caa cca

Radix Sort Algorithm

Radix sort runs in O(d · n) time, where d is taken to be a constant.
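As an illustration, here is a minimal Python sketch of least-significant-digit radix sort for non-negative integers, bucketing stably on each decimal digit (the function name and the choice of base 10 are assumptions for this example):

def radix_sort(nums, base=10):
    """Sort non-negative integers with least-significant-digit radix sort."""
    if not nums:
        return nums
    digits = len(str(max(nums)))            # number of passes needed
    for p in range(digits):
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base ** p) % base].append(x)    # stable: keeps order
        nums = [x for bucket in buckets for x in bucket]  # join the piles in order
    return nums

print(radix_sort([620, 800, 111, 222, 45, 7]))   # [7, 45, 111, 222, 620, 800]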


Analyzing merge sort

MERGE-SORT A[1..n]
1. If n = 1, done.
2. Recursively sort A[1..⌈n/2⌉] and A[⌈n/2⌉+1..n].
3. "Merge" the two sorted lists.

The running time satisfies the recurrence
  T(n) = Θ(1) if n = 1
  T(n) = 2 T(n/2) + Θ(n) otherwise
(Θ(1) for the base case, 2 T(n/2) for the two recursive calls, Θ(n) for the merge).

The worst-case and average-case running time is O(n log₂ n).
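A minimal runnable Python sketch of the same divide-and-conquer scheme (the helper name merge is chosen for this example):

def merge(left, right):
    """Merge two sorted lists into one sorted list in linear time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Sort a list by recursively sorting each half and merging the results."""
    if len(a) <= 1:
        return a
    mid = (len(a) + 1) // 2          # ceiling of n/2, as in the pseudocode
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([20, 13, 7, 2, 12, 11, 9, 1]))   # [1, 2, 7, 9, 11, 12, 13, 20]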

Execution Example

Example input: 20 13 7 2 12 11 9 1

The array is repeatedly partitioned into halves (20 13 7 2 and 12 11 9 1, then 20 13, 7 2, 12 11 and 9 1, and finally single elements); the base cases are reached, and the sorted halves are merged back together:

20 13 → 13 20,  7 2 → 2 7,  12 11 → 11 12,  9 1 → 1 9
13 20 and 2 7 → 2 7 13 20,  11 12 and 1 9 → 1 9 11 12
2 7 13 20 and 1 9 11 12 → 1 2 7 9 11 12 13 20

Merging two sorted arrays

Example: merge the two sorted lists 2 7 13 20 and 1 9 11 12. At each step the smaller of the two front elements is removed and appended to the output:

output so far: 1, then 1 2, then 1 2 7, then 1 2 7 9, then 1 2 7 9 11, then 1 2 7 9 11 12; the remaining elements 13 and 20 are appended at the end.

Time = Θ(n) to merge a total of n elements (linear time).

Counting Sort

• Basic idea
  - For each input element x, determine the number of elements less than or equal to x.
  - For each integer i (0 ≤ i ≤ k), count how many elements have value i; then we know how many elements are less than or equal to i.
• Algorithm storage
  - A[1..n]: input elements
  - B[1..n]: sorted elements
  - C[0..k]: holds the number of elements less than or equal to i, where 0 ≤ i ≤ k

Counting Sort

• The first for loop initializes C[ ] to zero.

• The second for loop increments the values in C[], according to their frequencies in the data.

• The third for loop adds all previous values, making C[] contain a cumulative total.

• The fourth for loop writes out the sorted data into array B[].

• Running time: O(n+k)

Counting Sort

• The algorithm makes two passes over A and one pass over B. If the size of the range k is smaller than the size of the input n, the time complexity is O(n).

• Two of the for loops take O(k) time, and two take O(n) time.

• An important point to note is that counting sort is stable: all elements of the same value appear in the same order in the output array as they do in the input array.

COUNTING-SORT(A, B, k)
 1  for i = 0 to k
 2      do C[i] = 0
 3  for j = 1 to length[A]
 4      do C[A[j]] = C[A[j]] + 1
 5  // C[i] now contains the number of elements equal to i
 6  for i = 1 to k
 7      do C[i] = C[i] + C[i-1]
 8  // C[i] now contains the number of elements less than or equal to i
 9  for j = length[A] downto 1
10      do B[C[A[j]]] = A[j]
11         C[A[j]] = C[A[j]] - 1
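A minimal runnable Python version of the same procedure (0-indexed lists, but the same counting, cumulative-sum and stable placement passes):

def counting_sort(a, k):
    """Sort a list of integers in the range 0..k, returning a new sorted list."""
    c = [0] * (k + 1)
    for x in a:                      # count occurrences of each value
        c[x] += 1
    for i in range(1, k + 1):        # cumulative counts: number of elements <= i
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):            # place elements stably, from right to left
        c[x] -= 1
        b[c[x]] = x
    return b

a = [0, 2, 4, 5, 1, 1, 1, 8, 6, 1, 3, 9]
print(counting_sort(a, 9))           # [0, 1, 1, 1, 1, 2, 3, 4, 5, 6, 8, 9]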

Worked example (k = 9):

Input: A = [0, 2, 4, 5, 1, 1, 1, 8, 6, 1, 3, 9]  (indices 1..12)

After lines 1-4 (counting), C[0..9] holds the number of elements equal to each value:
  C = [1, 4, 1, 1, 1, 1, 1, 0, 1, 1]

After lines 6-7 (cumulative sums), C[i] holds the number of elements less than or equal to i:
  C = [1, 5, 6, 7, 8, 9, 10, 10, 11, 12]

Lines 9-11 then scan A from right to left, placing each element A[j] at position C[A[j]] of B and decrementing C[A[j]]. For example, for j = 12: B[C[A[12]]] = B[C[9]] = B[12] = 9, and C[9] becomes 11.

Final output: B = [0, 1, 1, 1, 1, 2, 3, 4, 5, 6, 8, 9]

Advanced Algorithms Analysis and Design Lecture 6 (Continuation of Lecture 5) By Engr Huma Ayub Vine

Bucket Sort

• Assumption: keys to be sorted are uniformly distributed over a known range (say 1 to m).
• Method:
  - Set up n buckets, where each bucket is responsible for an equal portion of the range.
  - Sort the items in each bucket using insertion sort.
  - Concatenate the sorted lists of items from the buckets to get the final sorted order.

Bucket Sort

• Bucket sort is a non-comparison-based sorting algorithm.

• Allocate one storage location for each item to be sorted.

• Assign each item to its corresponding bucket.

• In order to bucket sort n unique items in the range 1 through m, allocate m buckets and then iterate over the n items, assigning each one to the proper bucket. Finally, loop through the buckets and collect the items, putting them into their final order.

Bucket Sort

• Bucket sorts work well for data sets where the possible key values are known and relatively small and there are on average just a few elements per bucket.

• Time complexity:
  Best case: O(n)
  Average case: O(n)
  Worst case: O(n²) (from the insertion sorts within the buckets)

Bucket-Sort(A)
1  n = length[A]
2  for i = 1 to n
3      do insert A[i] into list B[⌊n · A[i]⌋]
4  for i = 0 to n - 1
5      do sort list B[i] with insertion sort
6  concatenate the lists B[0], B[1], ..., B[n-1] together in order
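A minimal runnable Python sketch of this procedure for keys in [0, 1), using the built-in sort in place of insertion sort inside each bucket (an assumption made for brevity):

def bucket_sort(a):
    """Sort a list of floats in [0, 1) by distributing them into n buckets."""
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)     # bucket index = floor(n * x)
    for b in buckets:
        b.sort()                          # insertion sort in the pseudocode; built-in here
    return [x for b in buckets for x in b]

a = [.78, .17, .39, .26, .72, .94, .21, .12, .23, .68]
print(bucket_sort(a))
# [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]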

Worked example:

Input (n = 10): A = [.78, .17, .39, .26, .72, .94, .21, .12, .23, .68]

Lines 1-3 insert each A[i] into list B[⌊10 · A[i]⌋], giving the buckets (in insertion order):

B[1]: .17 .12
B[2]: .26 .21 .23
B[3]: .39
B[6]: .68
B[7]: .78 .72
B[9]: .94
(the remaining buckets are empty)

Lines 4-5 sort each bucket with insertion sort:

B[1]: .12 .17
B[2]: .21 .23 .26
B[3]: .39
B[6]: .68
B[7]: .72 .78
B[9]: .94

Line 6 concatenates the buckets in order, giving the sorted output:

.12 .17 .21 .23 .26 .39 .68 .72 .78 .94

Algorithm Comparison (5 elements)

Algorithm Comparison (2K Elements)

Computation Time for 100,000 elements


Elementary Operations

• An elementary operation is one whose execution time can be bounded above by a constant depending only on the particular implementation used:
  - the type of computing machine
  - the type of programming language
• What matters for the analysis of an algorithm is the number of elementary operations executed:
  t ≤ a·t_a + m·t_m + s·t_s ≤ max(t_a, t_m, t_s) × (a + m + s)
  where t_a, t_m and t_s are time constants (in nanoseconds) for an addition, a multiplication and an assignment, and a, m and s are the numbers of addition, multiplication and assignment instructions, respectively.

• An elementary operation is executed at unit cost.

Elementary Operations

• The time required to compute x ← Σ {T[i] | 1 ≤ i ≤ n} increases with n.

• Some mathematical operations are too complex to be considered elementary, such as the factorial.

• Wilson's theorem determines whether a number is prime, provided division can be considered a unit-cost operation.

  - The factorial and a test for divisibility cannot be considered at unit cost, since the time needed to compute n! increases with the length of the operands.
  - They can only be taken as elementary operations as long as the operands are of a reasonable size in the instances expected to be encountered.

Function Wilson(n)
{Returns true if and only if n is prime, n > 1}
  if n divides (n − 1)! + 1 exactly
    then return true
    else return false

Wilson(6):  (5·4·3·2·1 + 1) / 6 = 121 / 6 = 20.16                     → false
Wilson(7):  (6·5·4·3·2·1 + 1) / 7 = 721 / 7 = 103                     → true
Wilson(11): (10·9·8·7·6·5·4·3·2·1 + 1) / 11 = 3628801 / 11 = 329891   → true
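A minimal runnable Python sketch of this test (only practical for small n, since n! grows very quickly):

from math import factorial

def wilson(n):
    """Return True iff n is prime, using Wilson's theorem (assumes n > 1)."""
    return (factorial(n - 1) + 1) % n == 0

print([k for k in range(2, 20) if wilson(k)])   # [2, 3, 5, 7, 11, 13, 17, 19]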

Function Sum(n)
{Calculates the sum of the integers from 1 to n}
  sum ← 0
  for i ← 1 to n do sum ← sum + i
  return sum

Sum(7): sum starts at 0; after the iterations i = 1..7 it takes the values 1, 3, 6, 10, 15, 21, 28.

Function Fibonacci(n)
{Calculates the n-th term of the Fibonacci sequence}
  i ← 1; j ← 0
  for k ← 1 to n do
    j ← i + j
    i ← j − i
  return j

Fibonacci(5): starting from i = 1, j = 0, the successive values for k = 1..5 are
  j = 1, 1, 2, 3, 5
  i = 0, 1, 1, 2, 3
so Fibonacci(5) returns 5.

• Addition and multiplication can be considered elementary operations as long as the operands are of reasonable size.
• The domain of application is necessary for the analysis of an algorithm, in order to avoid any overflow due to arithmetic operations.

Elementary operations are executed at unit cost for sufficiently small size of operands.

Looking for Efficiency

There are two ways to obtain answers faster: investment in computing equipment, or investment in algorithmics.

Investment in computing equipment
A computer solves a particular problem in 10^-4 × 2^n seconds, where n is the size of the instance:
- If n = 10: 10^-4 × 2^10 = 0.1024 sec
- If n = 20: 10^-4 × 2^20 = 104.85 sec
- If n = 30: 10^-4 × 2^30 = 107374 sec ≈ 29.82 hours
- If n = 38: 10^-4 × 2^38 ≈ 7635.49 hours ≈ 318 days

Suppose we purchase a new machine, 100 times faster, which solves the problem in 10^-6 × 2^n sec:
- If n = 38: 10^-6 × 2^38 ≈ 76.4 hours ≈ 3.2 days
- If n = 45: 10^-6 × 2^45 ≈ 9773 hours ≈ 1.1 years

Even with the faster machine, in roughly a year of computation we only move from instances of size 38 to instances of size 45: no big gain from the investment in the new machine.
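A small Python sketch reproducing these figures (the constants 10^-4 and 10^-6 are those assumed above):

def exp_time(n, c):
    """Running time in seconds of an exponential-time algorithm taking c * 2**n sec."""
    return c * 2 ** n

for n in (10, 20, 30, 38):
    print(n, exp_time(n, 1e-4), "sec on the old machine")
for n in (38, 45):
    print(n, exp_time(n, 1e-6) / 3600, "hours on the machine that is 100x faster")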

Looking for Efficiency

Investment in algorithmics
Suppose a cubic algorithm can solve the same problem. Even the old machine can then compute the result in 10^-2 × n³ sec:
- If n = 10: 10^-2 × 10³ = 10 sec
- If n = 20: 10^-2 × 20³ = 80 sec
- If n = 30: 10^-2 × 30³ = 270 sec = 4.5 minutes
- If n = 200: 10^-2 × 200³ = 80000 sec ≈ 1 day

The new algorithm offers a much greater improvement than the purchase of new computing equipment. Moreover, the new algorithm on the new computing machine provides results 100 times faster still and, in the same length of time, can solve instances 4 or 5 times bigger than the new algorithm alone.

Algorithmics versus Hardware

Graph: running time versus instance size (n = 5 to 40) for the exponential algorithms 10^-4 × 2^n and 10^-6 × 2^n and the cubic algorithms 10^-2 × n³ and 10^-4 × n³, with reference lines at 1 second, 1 minute, 1 hour and 1 day.

Calculating Determinants

Two methods can be used to compute determinants:

Recursive algorithm
- Takes a time proportional to n!
- Determinant of a 5 x 5 matrix: about 20 sec
- Determinant of a 10 x 10 matrix: about 10 min
- Determinant of a 20 x 20 matrix: about 10 million years

Gauss-Jordan algorithm
- Takes a time proportional to n³
- Determinant of a 10 x 10 matrix: 0.01 sec
- Determinant of a 20 x 20 matrix: 0.05 sec
- Determinant of a 100 x 100 matrix: 5.5 sec

Sorting Algorithms

• A large number of sorting algorithms have been designed to arrange n objects in ascending or descending order.
• Insertion and selection sorting take quadratic time both in the worst case and on average, but they are excellent as long as n is small.
• Other algorithms, such as Williams' heapsort, mergesort and quicksort, are much more efficient when n is large; they take a time in the order of n log n.
• Performance example:
  - When n is small, both quicksort and insertion sort give almost the same performance.
  - When n = 50: quicksort is almost 2 times as fast as insertion sort.
  - When n = 100: quicksort is almost 3 times as fast as insertion sort.
  - When n = 1000: quicksort is almost 15 times as fast as insertion sort.
  - When n = 5000: quicksort is almost 90 times as fast as insertion sort.
  - When n = 100000: quicksort is almost 1140 times as fast as insertion sort.

Procedure pigeonhole(T[1..n])
{Sorts integers between 1 and 10000; there must be a separate pigeon-hole for every possible element that might be found in T}
  array U[1..10000]               {an array of pigeon-holes}
  for k ← 1 to 10000 do U[k] ← 0
  for i ← 1 to n do
    k ← T[i]
    U[k] ← U[k] + 1
  i ← 0
  for k ← 1 to 10000 do
    while U[k] ≠ 0 do
      i ← i + 1
      T[i] ← k
      U[k] ← U[k] − 1

The time taken is in the order of n.
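A minimal runnable Python sketch of the same procedure, assuming keys in the range 1..10000 as in the pseudocode above:

def pigeonhole_sort(t, max_key=10000):
    """Sort a list of integers in the range 1..max_key in place."""
    u = [0] * (max_key + 1)          # one pigeon-hole per possible key
    for x in t:
        u[x] += 1
    i = 0
    for k in range(1, max_key + 1):  # write the keys back in increasing order
        for _ in range(u[k]):
            t[i] = k
            i += 1
    return t

print(pigeonhole_sort([620, 800, 111, 222, 111, 45]))   # [45, 111, 111, 222, 620, 800]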

COMPARISON

Sort Method    Worst           Average         Best
Bubble         O(n²)           O(n²)           O(n)
Quick          O(n²)           O(n log n)      O(n log n)
Heap           O(n log n)      O(n log n)      O(n log n)
Insertion      O(n²)           O(n²)           O(n)
Selection      O(n²)           O(n²)           O(n²)
Merge          O(n log n)      O(n log n)      O(n log n)
Radix          O(s·n)          O(s·n)          O(s·n)
Shell          O(n²)           O(n^(5/3))      O(n (log n)²)
Counting       O(n + k)        O(n + k)        O(n + k)
Bucket         O(n²)           O(n)            O(n)
Pigeonhole     O(n)            O(n)            O(n)

ALGORITHMS AND THEIR COMPLEXITY (Limits on Problem Size as Determined by Growth Rate)

Algo   Time Complexity   Maximum Problem Size (n)
                         1 sec       1 min       1 hour
A1     n                 1000        6 x 10^4    3.6 x 10^6
A2     n log n           140         4893        2.0 x 10^5
A3     n²                31          244         1897
A4     n³                10          39          153
A5     2^n               9           15          21

ALGORITHMS AND THEIR COMPLEXITY (Effect of Tenfold Speed-up)

Algo   Time Complexity   Maximum Problem Size   Maximum Problem Size
                         Before Speed-up        After Speed-up
A1     n                 s1                     10 · s1
A2     n log n           s2                     approximately 10 · s2 (for large s2)
A3     n²                s3                     3.16 · s3
A4     n³                s4                     2.15 · s4
A5     2^n               s5                     s5 + 3.3

Calculating the Greatest Common Divisor

Function gcd(m, n)
  i ← min(m, n) + 1
  repeat i ← i − 1 until i divides both m and n exactly
  return i

gcd(14, 21): i counts down from 14; the division test fails for i = 14, 13, 12, 11, 10, 9 and 8, and succeeds for i = 7, which is returned.

Takes a time in the order of n.

Calculating the Greatest Common Divisor

Function Euclid(m, n)
  while m > 0 do
    t ← m
    m ← n mod m
    n ← t
  return n

Euclid(14, 21): (m, n) goes from (14, 21) to (7, 14) to (0, 7); return 7.
Euclid(6, 15): (m, n) goes from (6, 15) to (3, 6) to (0, 3); return 3.

Takes a time in the order of the logarithm of its arguments.
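A minimal runnable Python version of both gcd procedures, for comparison:

def gcd_naive(m, n):
    """Try every candidate downward from min(m, n); time in the order of n."""
    i = min(m, n) + 1
    while True:
        i -= 1
        if m % i == 0 and n % i == 0:
            return i

def gcd_euclid(m, n):
    """Euclid's algorithm; time in the order of the logarithm of its arguments."""
    while m > 0:
        m, n = n % m, m
    return n

print(gcd_naive(14, 21), gcd_euclid(14, 21))   # 7 7
print(gcd_naive(6, 15), gcd_euclid(6, 15))     # 3 3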

Calculating the Fibonacci Sequence

f0 = 0, f1 = 1, fn = fn-1 + fn-2 for n ≥ 2

Function Fibrec(n)
  if n < 2 then return n
  else return Fibrec(n − 1) + Fibrec(n − 2)

f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11
0  1  1  2  3  5  8  13 21 34 55  89

• The time required by Fibrec to calculate fn is in the order of the value of fn itself; it is very inefficient compared with de Moivre's formula.
• It is better to use the Fibiter algorithm to calculate the Fibonacci sequence.

Function Fibiter(n)
{Calculates the n-th term of the Fibonacci sequence}
  i ← 1; j ← 0
  for k ← 1 to n do
    j ← i + j
    i ← j − i
  return j

Fibiter(5): starting from i = 1, j = 0, the successive values for k = 1..5 are
  j = 1, 1, 2, 3, 5
  i = 0, 1, 1, 2, 3

• The time required by Fibiter to calculate fn is in the order of n.
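A minimal runnable Python comparison of the two approaches (the timing table further below illustrates the same contrast):

def fibrec(n):
    """Recursive Fibonacci: time in the order of the value of f_n itself."""
    return n if n < 2 else fibrec(n - 1) + fibrec(n - 2)

def fibiter(n):
    """Iterative Fibonacci: time in the order of n."""
    i, j = 1, 0
    for _ in range(n):
        j = i + j
        i = j - i
    return j

print([fibiter(k) for k in range(12)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
print(fibrec(10) == fibiter(10))         # True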

Calculating the Fibonacci Sequence: de Moivre's formula

fn = (1/√5) [φ^n − (−φ)^−n], where φ = (1 + √5)/2 ≈ 1.61803

The time required by Fibrec to calculate fn is in the order of φ^n, that is, in the order of the value of fn itself.

Example:
(i)  f10 = f9 + f8 = 34 + 21 = 55
(ii) f10 = (1/√5) [φ^10 − (−φ)^−10]
         = (1/2.236) [122.98883 − 0.008131]
         = (1/2.236) (122.9807)
         = 55.0003 ≈ 55
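A short Python check of the closed form for small n (floating-point rounding is adequate in this range):

from math import sqrt

def fib_demoivre(n):
    """de Moivre's closed-form expression for the n-th Fibonacci number."""
    phi = (1 + sqrt(5)) / 2
    return round((phi ** n - (-phi) ** (-n)) / sqrt(5))

print([fib_demoivre(k) for k in range(12)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]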


Comparison of the two Fibonacci algorithms (working modulo 10^7)

n         10         20         30         50         100
Fibrec    8 msec     1 sec      2 min      21 days    10^9 years
Fibiter   1/6 msec   1/3 msec   1/2 msec   3/4 msec   1 1/2 msec

Important Points for an Algorithm

• Correctness of execution
• Execution time
• Storage needs
• Limitations of the computing equipment in supporting operations on the desired numbers with the required precision
• Efficient methodology for the specific task
• Programming / hardware implementation tools
• Average / worst-case performance