
Lecture 01: Introduction

Instructor: Dr. Gleb V. Tcheslavski
Contact: [email protected]
Office Hours: Room 2030
Class web site: http://ee.lamar.edu/gleb/dsp/index.htm

Based on ECE 4624 by Dr. A.A. (Louis) Beex, Virginia Tech

Syllabus overview

Pre-req.: ELEN 3313 Signals and Systems or their equivalent.

Required book: Sanjit K. Mitra, Digital Signal Processing: A Computer-Based Approach, McGraw-Hill Co., Third edition, 2004, ISBN: 0-07-286546-6.

Required software: The Mathworks, The Student Edition of MATLAB, Release 2006a or later.

Structure: Two 75-minute lectures per week; homework, three projects, one midterm exam, and the final examination.

Tests: The Midterm and Final exams will be closed book/notes. Your performance on the Projects will account for the bulk of your grade.

Honor System: Discussions on lecture subject material, to clarify your understanding, are highly encouraged. However, it is your personal understanding only that should be reflected in all work that you turn in. Any copyright violations (including copying articles and/or web pages to your reports) will be prosecuted!

Styles, notations, legends…

1. Colors: normal text and formulas; something more important (imho); important formulas and results; Very Important Formulas; miscellaneous.

2. Equation notation: (2.17.3) means Lecture 2, Slide 17, Formula 3.

Linear algebra overview

It is convenient in many applications to represent signals and system coefficients by vectors and matrices. Therefore, a brief overview of linear algebra is considered.

1. Vectors

A vector (denoted by a lowercase bold letter) is an array of real-valued or complex-valued numbers or functions. We will assume column vectors. The N-dimensional vector is:

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}    (1.4.1)

The transpose of a vector is a row vector:

x^T = [x_1 \; x_2 \; \cdots \; x_N]    (1.4.2)

Linear algebra overview

The Hermitian transpose is the complex conjugate of the transpose of x:

x^H = (x^T)^* = [x_1^* \; x_2^* \; \cdots \; x_N^*]    (1.5.1)

It might be convenient in some cases to consider a vector x(n) containing the signal values in a certain range:

x(n) = [x(n) \; x(n-1) \; \cdots \; x(n-N+1)]^T    (1.5.2)

Linear algebra overview

The measure of the "magnitude" of a vector is its norm.

The Euclidean or L_2 norm:    \|x\|_2 = \left( \sum_{i=1}^{N} |x_i|^2 \right)^{1/2}    (1.6.1)

The L_1 norm:    \|x\|_1 = \sum_{i=1}^{N} |x_i|    (1.6.2)

The L_\infty norm:    \|x\|_\infty = \max_i |x_i|    (1.6.3)

We will be using the second (L_2) norm unless stated otherwise.

If the vector has a non-zero norm, it can be normalized as follows:

v_x = \frac{x}{\|x\|}    (1.6.4)

The vector v_x is a unit norm vector that lies in the same direction as x.
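As an illustration (not part of the original slides), the norms and the normalization above are easy to check in MATLAB; the vector x below is an arbitrary example.

% Sketch: vector norms and normalization for an example vector
x = [3; -4; 12];                 % arbitrary example vector
L2   = sqrt(sum(abs(x).^2));     % Euclidean (L2) norm, same as norm(x,2)
L1   = sum(abs(x));              % L1 norm, same as norm(x,1)
Linf = max(abs(x));              % L-infinity norm, same as norm(x,inf)
v_x  = x / L2;                   % unit-norm vector in the direction of x
norm(v_x)                        % displays 1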

Linear algebra overview

The squared norm represents the energy in the signal:

\|x\|^2 = \sum_{n=0}^{N-1} |x(n)|^2    (1.7.1)

Lastly, the norm can be used to measure the distance between two vectors:

d(x, y) = \left( \sum_{i=1}^{N} |x_i - y_i|^2 \right)^{1/2} = \|x - y\|    (1.7.2)

For two complex vectors a and b of the same length, the inner product is a scalar:

\langle a, b \rangle = a^H b = \sum_{i=1}^{N} a_i^* b_i    (1.7.3)

For two real vectors, the inner product becomes:

\langle a, b \rangle = a^T b = \sum_{i=1}^{N} a_i b_i    (1.7.4)

Linear algebra overview

The inner product defines the geometrical relationship between two vectors:

\langle a, b \rangle = \|a\| \, \|b\| \cos\theta    (1.8.1)

where \theta is the angle between the two vectors. Therefore, two nonzero vectors a and b are orthogonal if their inner product is zero:

\langle a, b \rangle = 0    (1.8.2)

Two vectors that are orthogonal and have unit norms are called orthonormal.

Since |\cos\theta| \le 1, the inner product is bounded:

|\langle a, b \rangle| \le \|a\| \, \|b\|    (1.8.3)

(1.8.3) is known as the Cauchy-Schwarz inequality. The equality holds iff a and b are colinear, i.e.

a = \alpha b, \quad \alpha = const    (1.8.4)

Linear algebra overview

Another useful inequality is

2 |\langle a, b \rangle| \le \|a\|^2 + \|b\|^2    (1.9.1)

The output of an LTI FIR filter can be represented using the inner product. The filter output is the convolution of its unit pulse response and the input:

y(n) = \sum_{k=0}^{N-1} h(k) \, x(n-k)    (1.9.2)

Then, expressing x(n) as in (1.5.2) and representing the unit pulse response as the vector

h = [h(0) \; h(1) \; \cdots \; h(N-1)]^T    (1.9.3)

the filter output may be written as the inner product:

y(n) = h^T x(n)    (1.9.4)
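A minimal MATLAB sketch of (1.9.4) follows; the filter h and the input x are made-up examples, and the built-in filter function is used only as a cross-check.

h = [0.5; 0.3; 0.2];          % example unit pulse response, N = 3
x = [1 2 3 4 5];              % example input signal
n = 4;                        % compute y(n) for n = 4 (MATLAB indexing starts at 1)
xn  = x(n:-1:n-2).';          % x(n) = [x(n) x(n-1) x(n-2)]^T as in (1.5.2)
y_n = h.' * xn;               % inner product (1.9.4)
y   = filter(h, 1, x);        % FIR filtering of the whole signal
y(n)                          % matches y_n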

Linear algebra overview

A set of n vectors v_1, v_2, ..., v_n is linearly independent if

\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0    (1.10.1)

implies that \alpha_i = 0 for all i. If a set of nonzero \alpha_i can be found satisfying (1.10.1), the vectors are linearly dependent and at least one of the vectors, say v_1, can be expressed as a linear combination of the remaining vectors:

v_1 = \beta_2 v_2 + \beta_3 v_3 + \cdots + \beta_n v_n    (1.10.2)

for some set of scalars \beta_i.

For vectors of dimension N, no more than N vectors may be linearly independent. Therefore, any set of N-dimensional vectors containing more than N vectors will always be linearly dependent.

Linear algebra overview

For a set of N vectors v_1, v_2, ..., v_N,

V = \{ v_1, v_2, \ldots, v_N \}    (1.11.1)

consider the set of all vectors that may be formed from a linear combination of the vectors v_i:

v = \sum_{i=1}^{N} \alpha_i v_i    (1.11.2)

This set forms a vector space, and the vectors v_i are said to span the space V. Furthermore, if the vectors v_i are linearly independent, they form a basis for the space V, and the number of vectors in the basis, N, is the dimension of the space.

For instance, the set of all real vectors of the form

x = [x_1 \; x_2 \; \cdots \; x_N]^T    (1.11.3)

forms an N-dimensional vector space, denoted by R^N, that is spanned by the basis vectors:

Linear algebra overview

u_1 = [1 \; 0 \; 0 \; \cdots \; 0]^T, \quad u_2 = [0 \; 1 \; 0 \; \cdots \; 0]^T, \quad \ldots, \quad u_N = [0 \; 0 \; \cdots \; 0 \; 1]^T    (1.12.1)

In terms of this basis, any vector

v = [v_1 \; v_2 \; \cdots \; v_N]^T    (1.12.2)

may be uniquely decomposed as follows:

v = \sum_{i=1}^{N} v_i u_i    (1.12.3)

It should be pointed out, however, that the basis for a vector space is not unique.

Linear algebra overview

2. Matrices

An n x m matrix is an array of real or complex numbers or functions having n rows and m columns. An n x m matrix of numbers:

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1m} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2m} \\ \vdots & & & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nm} \end{bmatrix}    (1.13.1)

An n x m matrix of functions has the same form, with each entry being a function rather than a number.    (1.13.2)

If n = m, A is a square matrix.

Linear algebra overview

The output of an LTI FIR filter can be written in vector form as follows:

y(n) = h^T x(n) = x^T(n) h    (1.14.1)

If x(n) = 0 for n < 0, the filter output may be expressed for n \ge 0 as

y = X_0 h    (1.14.2)

where X_0 is the convolution matrix

X_0 = \begin{bmatrix} x(0) & 0 & 0 & \cdots & 0 \\ x(1) & x(0) & 0 & \cdots & 0 \\ x(2) & x(1) & x(0) & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ x(N-1) & x(N-2) & x(N-3) & \cdots & x(0) \\ \vdots & \vdots & \vdots & & \vdots \end{bmatrix}    (1.14.3)

and

y = [y(0) \; y(1) \; y(2) \; \cdots]^T    (1.14.4)

We observe that, in addition to its structure of having equal values along each of the diagonals, X_0 has N columns and an infinite number of rows.
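A small MATLAB sketch of the convolution matrix follows (the signals are made-up examples); for a finite-length input only a finite number of rows of X_0 are nonzero, so the matrix is truncated accordingly.

x = [1 2 3 4].';          % example input, assumed zero for n < 0
h = [1 -1 2].';           % example unit pulse response, N = 3
N = length(h);
xp = [x; zeros(N-1,1)];                    % zero-pad to get all nonzero output samples
X0 = toeplitz(xp, [xp(1) zeros(1,N-1)]);   % truncated convolution matrix with N columns
y  = X0*h;                % filter output in matrix form, (1.14.2)
y_check = conv(x, h);     % identical result from direct convolution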

Linear algebra overview

An n x m matrix can be represented as a set of m column vectors

A = [c_1 \; c_2 \; \cdots \; c_m]    (1.15.1)

or as a set of n row vectors

A = \begin{bmatrix} r_1^H \\ r_2^H \\ \vdots \\ r_n^H \end{bmatrix}    (1.15.2)

An n x m matrix may be partitioned into submatrices as follows:

A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}    (1.15.3)

where A_{11} is p x q, A_{12} is p x (m-q), A_{21} is (n-p) x q, and A_{22} is (n-p) x (m-q).

Linear algebra overview

If A is an n x m matrix, then the transpose A^T is the m x n matrix formed by interchanging the rows and columns of A. Therefore, the (i,j)-th element becomes the (j,i)-th element and vice versa. If the matrix is square, the transpose is formed by reflecting its elements with respect to the main diagonal.

A square matrix A is symmetric if

A = A^T    (1.16.1)

For complex matrices, the Hermitian transpose is defined as

A^H = (A^T)^*    (1.16.2)

A square complex matrix A is Hermitian if

A = A^H    (1.16.3)

Linear algebra overview

Useful properties of the Hermitian transpose are:

1. (A + B)^H = A^H + B^H
2. (A^H)^H = A
3. (AB)^H = B^H A^H    (1.17.1)

Replacing the Hermitian transpose with the ordinary transpose, we obtain:

1. (A + B)^T = A^T + B^T
2. (A^T)^T = A
3. (AB)^T = B^T A^T    (1.17.2)

Linear algebra overview

Let A be an n x m matrix partitioned into a set of m column vectors:

A = [c_1 \; c_2 \; \cdots \; c_m]    (1.18.1)

The rank of A, rank(A), is defined as the number of linearly independent columns in A, i.e., the number of linearly independent vectors in the set {c_1, c_2, ..., c_m}. One of the properties of the rank is that

rank(A) = rank(A^H)    (1.18.2)

Therefore, if A is partitioned into a set of n row vectors,

A = \begin{bmatrix} r_1^H \\ r_2^H \\ \vdots \\ r_n^H \end{bmatrix}    (1.18.3)

then the rank of A is also equal to the number of linearly independent row vectors.

Linear algebra overview

An important property of the rank is

rank(A) = rank(A A^H) = rank(A^H A)    (1.19.1)

Since the rank of a matrix is equal to the number of linearly independent rows and to the number of linearly independent columns, then, if A is an m x n matrix,

rank(A) \le \min(m, n)    (1.19.2)

If A is an m x n matrix and rank(A) = min(m, n), then A is said to be of full rank.

If A is a square matrix of full rank, then there exists a unique matrix A^{-1}, called the inverse of A, such that

A^{-1} A = A A^{-1} = I    (1.19.3)

Linear algebra overview

where

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}    (1.20.1)

is the identity matrix, with ones along the main diagonal and zeros everywhere else.

In this case, A is said to be invertible or nonsingular. If A is not of full rank, then it is noninvertible or singular and does not have an inverse.

If A and B are invertible, then:

1. (AB)^{-1} = B^{-1} A^{-1}    (1.20.2)
2. (A^H)^{-1} = (A^{-1})^H    (1.20.3)
3. (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}    (1.20.4)

Linear algebra overview

(1.20.4) is called the matrix inversion lemma. It is assumed that A is n x n, B is n x m, C is m x m, and D is m x n, with A and C nonsingular matrices.

A special case of this lemma occurs when C = 1, B = u, and D = v^H, where u and v are n-dimensional vectors. In this case

(A + u v^H)^{-1} = A^{-1} - \frac{A^{-1} u v^H A^{-1}}{1 + v^H A^{-1} u}    (1.21.1)

which is sometimes referred to as Woodbury's identity. As a special case, if A = I, (1.21.1) becomes

(I + u v^H)^{-1} = I - \frac{u v^H}{1 + v^H u}    (1.21.2)
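Woodbury's identity is easy to sanity-check numerically; a MATLAB sketch with randomly generated (and therefore almost surely nonsingular) example data:

n = 4;
A = eye(n) + randn(n);            % example nonsingular matrix
u = randn(n,1);  v = randn(n,1);  % example vectors
lhs = inv(A + u*v');              % direct inverse of the rank-one update
Ai  = inv(A);
rhs = Ai - (Ai*u*v'*Ai)/(1 + v'*Ai*u);   % right-hand side of (1.21.1)
max(abs(lhs(:) - rhs(:)))         % on the order of machine precision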

Linear algebra overview

If A = a_{11} is a 1 x 1 matrix, the determinant is defined as det(A) = a_{11}. For an n x n matrix, the determinant is defined recursively as a cofactor expansion, e.g., along the j-th column:

\det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij})    (1.22.1)

where A_{ij} is the (n-1) x (n-1) matrix formed by deleting the i-th row and the j-th column of A.

Property: An n x n matrix A is invertible iff

\det(A) \ne 0    (1.22.2)

Linear algebra overview

For A and B being n x n matrices, A invertible, and a constant \alpha, the following properties hold:

\det(AB) = \det(A)\det(B)    (1.23.1)
\det(A^T) = \det(A)    (1.23.2)
\det(\alpha A) = \alpha^n \det(A)    (1.23.3)
\det(A^{-1}) = \frac{1}{\det(A)}    (1.23.4)

For an n x n matrix A, the trace function is defined as the sum of the diagonal entries:

\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}    (1.23.5)

Linear algebra overview

3. Linear equations

Consider the following set of n linear equations in the m unknowns x_i, i = 1, 2, ..., m:

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1m} x_m = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2m} x_m = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nm} x_m = b_n    (1.24.1)

These equations may be written in matrix form as follows:

A x = b    (1.24.2)

where A is an n x m matrix with entries a_{ij}, x is an m-dimensional vector of unknowns x_i, and b is an n-dimensional vector with elements b_i.

Linear algebra overview

A convenient way to view (1.24.2) is as an expansion of the vector b in terms of a linear combination of the column vectors a_i of the matrix A:

b = \sum_{i=1}^{m} x_i a_i    (1.25.1)

Solving (1.25.1) depends on a number of factors, including the relative size of m and n, the rank of A, and the elements of b.

3.1. Square matrix: m = n.

The solution to (1.25.1) depends upon whether or not A is singular. If A is nonsingular, then the inverse A^{-1} exists and the solution is uniquely defined by

x = A^{-1} b    (1.25.2)

However, if A is singular, then there may be either no solution (the equations are inconsistent) or many solutions.

Linear algebra overview

For the case in which A is singular, the columns of A are linearly dependent and there exist nonzero solutions to the homogeneous equation

A z = 0    (1.26.1)

In fact, there will be k = n - rank(A) linearly independent solutions to the homogeneous equation. Therefore, if there is at least one vector x_0 that solves (1.24.2), then any vector of the form

x = x_0 + \alpha_1 z_1 + \cdots + \alpha_k z_k    (1.26.2)

will also be a solution, where z_i, i = 1, 2, ..., k, are linearly independent solutions to (1.26.1).

Linear algebra overview

3.2. Rectangular matrix: n < m.

In this situation, there are fewer equations than unknowns. Therefore, if the equations are not inconsistent, there are many vectors satisfying the equations: the solution is underdetermined or incompletely specified. A common approach to define a unique solution is to find the vector that satisfies the equations and has the minimum norm, i.e.

\min \|x\| \quad \text{subject to} \quad A x = b    (1.27.1)

If rank(A) = n (the rows of A are linearly independent), then the n x n matrix A A^H is invertible and the minimum norm solution is

x_0 = A^H (A A^H)^{-1} b    (1.27.2)

The matrix

A^{+} = A^H (A A^H)^{-1}    (1.27.3)

is called the pseudo-inverse of the matrix A for the underdetermined problem.

Linear algebra overview

3.3. Rectangular matrix: n > m.

In this situation, there are more equations than unknowns and, in general, no solution exists. The equations are inconsistent and the problem is overdetermined.

The case of 3 equations in 2 unknowns illustrates this problem: since an arbitrary vector b cannot be represented as a linear combination of the columns of A, the goal is to find the coefficients x_i producing the best approximation to b:

\hat{b} = \sum_{i=1}^{m} x_i a_i    (1.28.1)

Linear algebra overview

The approach commonly used in this situation is to find the least squares solution, i.e., the vector x minimizing the norm of the error e = b - Ax:

\|e\|^2 = \|b - A x\|^2    (1.29.1)

The least squares solution has the property that the error e is orthogonal to the column vectors of A. This orthogonality implies that

A^H e = 0    (1.29.2)

or

A^H (b - A x) = 0    (1.29.3)

so that

A^H A x = A^H b    (1.29.4)

which are known as the normal equations.

Linear algebra overview

If the columns of A are linearly independent (A has full rank), the matrix A^H A is invertible and the least squares solution is

x_0 = (A^H A)^{-1} A^H b    (1.30.1)

or

x_0 = A^{+} b    (1.30.2)

where

A^{+} = (A^H A)^{-1} A^H    (1.30.3)

is the pseudo-inverse of the matrix A for the overdetermined problem. Furthermore, the best approximation of b is given by the projection of the vector b onto the subspace spanned by the vectors a_i:

\hat{b} = A x_0 = A (A^H A)^{-1} A^H b    (1.30.4)

Linear algebra overview

or

\hat{b} = P_A b    (1.31.1)

where

P_A = A (A^H A)^{-1} A^H    (1.31.2)

is called the projection matrix. Finally, using the orthogonality condition, the minimum (least squares) error is

\min \|e\|^2 = b^H e = b^H b - b^H A x_0    (1.31.3)
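The least squares machinery above maps directly onto MATLAB; the sketch below uses a made-up overdetermined system (3 equations, 2 unknowns) and cross-checks the normal equations, the backslash operator, and the pseudo-inverse.

A = [1 0; 1 1; 1 2];        % example overdetermined system
b = [1; 2; 2];              % right-hand side outside the column space of A
x0 = (A'*A) \ (A'*b);       % normal equations, (1.29.4)/(1.30.1)
x_bs = A \ b;               % backslash returns the same least squares solution
x_pi = pinv(A)*b;           % pseudo-inverse form, (1.30.2)-(1.30.3)
bhat = A*x0;                % projection of b onto the column space, (1.30.4)
e = b - bhat;               % least squares error vector
A'*e                        % orthogonality check, (1.29.2): ~ zero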

Linear algebra overview

4. Special matrix forms

4.1. Diagonal matrix – a square matrix having all entries equal to zero except, possibly, those along the main diagonal:

A = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}    (1.32.1)

The diagonal matrix may be written as

A = \mathrm{diag}\{ a_{11}, a_{22}, \ldots, a_{nn} \}    (1.32.2)

Linear algebra overview

4.2. Identity matrix – a diagonal matrix with ones along the diagonal:

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}    (1.33.1)

4.3. Block diagonal matrix – a diagonal matrix whose entries along the diagonal are replaced with matrices:

A = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn} \end{bmatrix}    (1.33.2)

Linear algebra overview

4.4. Exchange matrix – a symmetric matrix with ones along the cross-diagonal and zeros everywhere else:

J = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ \vdots & & 1 & 0 \\ 0 & 1 & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix}    (1.34.1)

Since J^2 = I, J is its own inverse.

The effect of multiplying a vector v by the exchange matrix is to reverse the order of its entries, i.e.

J v = J \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} v_n \\ v_{n-1} \\ \vdots \\ v_1 \end{bmatrix}    (1.34.2)

Linear algebra overview

Similarly, if a matrix A is multiplied on the left by the exchange matrix, the effect is to reverse the order of each column. For example, if

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}    (1.35.1)

then

J^T A = \begin{bmatrix} a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \end{bmatrix}    (1.35.2)

Linear algebra overview

Similarly, if a matrix A is multiplied on the right by the exchange matrix, the effect is to reverse the order of each row:

A J = \begin{bmatrix} a_{13} & a_{12} & a_{11} \\ a_{23} & a_{22} & a_{21} \\ a_{33} & a_{32} & a_{31} \end{bmatrix}    (1.36.1)

Finally, the effect of forming the product J^T A J is to reverse the order of each row and column:

J^T A J = \begin{bmatrix} a_{33} & a_{32} & a_{31} \\ a_{23} & a_{22} & a_{21} \\ a_{13} & a_{12} & a_{11} \end{bmatrix}    (1.36.2)

thereby reflecting each element of A about the central element.

Linear algebra overview

4.5. Upper triangular matrix – a square matrix in which all entries below the diagonal are zero:

a_{ij} = 0 \quad \text{for } i > j    (1.37.1)

For example, for n = 4:

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a_{22} & a_{23} & a_{24} \\ 0 & 0 & a_{33} & a_{34} \\ 0 & 0 & 0 & a_{44} \end{bmatrix}    (1.37.2)

4.6. Lower triangular matrix – a square matrix in which all entries above the diagonal are zero:

a_{ij} = 0 \quad \text{for } i < j    (1.37.3)

Linear algebra overview

Some properties of lower and upper triangular matrices are:

1. The transpose of a lower (upper) triangular matrix is an upper (lower) triangular matrix;
2. The determinant of a lower or upper triangular matrix is the product of its diagonal entries:

\det(A) = \prod_{i=1}^{n} a_{ii}    (1.38.1)

3. The inverse of an upper (lower) triangular matrix is upper (lower) triangular;
4. The product of two upper (lower) triangular matrices is upper (lower) triangular.

Linear algebra overview

4.7. Toeplitz matrix – a square n x n matrix in which all entries along each of the diagonals have the same value:

a_{ij} = a_{i+1, j+1} \quad \text{for all } i, j < n    (1.39.1)

For example, for n = 4:

A = \begin{bmatrix} 1 & 3 & 5 & 7 \\ 2 & 1 & 3 & 5 \\ 4 & 2 & 1 & 3 \\ 6 & 4 & 2 & 1 \end{bmatrix}    (1.39.2)

Linear algebra overview

4.8. Hankel matrix – a square n x n matrix in which all entries along each of the cross-diagonals have the same value:

a_{ij} = a_{i+1, j-1} \quad \text{for } i < n, \; j > 1    (1.40.1)

For example, for n = 4:

A = \begin{bmatrix} 1 & 3 & 5 & 7 \\ 3 & 5 & 7 & 4 \\ 5 & 7 & 4 & 2 \\ 7 & 4 & 2 & 1 \end{bmatrix}    (1.40.2)

Another example of a Hankel matrix is the exchange matrix J.
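MATLAB builds both of these structured matrices directly; the sketch below reproduces the two 4 x 4 examples in (1.39.2) and (1.40.2).

c = [1 2 4 6];  r = [1 3 5 7];    % first column and first row
T = toeplitz(c, r)                % reproduces the Toeplitz example (1.39.2)
cH = [1 3 5 7];  rH = [7 4 2 1];  % first column and last row
H = hankel(cH, rH)                % reproduces the Hankel example (1.40.2)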

Linear algebra overview

Toeplitz matrices are a special case of matrices known as persymmetric.

4.9. Persymmetric matrix – a matrix symmetric about the cross-diagonal:

a_{ij} = a_{n-j+1, \, n-i+1}    (1.41.1)

For example, for n = 4, a persymmetric but non-Toeplitz matrix:

A = \begin{bmatrix} 1 & 3 & 5 & 7 \\ 2 & 2 & 4 & 5 \\ 4 & 4 & 2 & 3 \\ 6 & 4 & 2 & 1 \end{bmatrix}    (1.41.2)

If a Toeplitz matrix is symmetric (or Hermitian in the case of a complex matrix), we only need to specify the first row (or, equivalently, the first column) to completely determine the Toeplitz matrix.

Linear algebra overview

Symmetric Toeplitz matrices are a special case of matrices known as centrosymmetric.

4.10. Centrosymmetric matrix – a matrix that is both symmetric and persymmetric. For example, for n = 4, a centrosymmetric but non-Toeplitz matrix:

A = \begin{bmatrix} 1 & 3 & 5 & 6 \\ 3 & 2 & 4 & 5 \\ 5 & 4 & 2 & 3 \\ 6 & 5 & 3 & 1 \end{bmatrix}    (1.42.1)

If A is a symmetric Toeplitz matrix, then

J^T A J = A    (1.42.2)

If A is a Hermitian Toeplitz matrix, then

J^T A J = A^*    (1.42.3)

Linear algebra overview

Some important properties are:

1. The inverse of a symmetric matrix is symmetric;
2. The inverse of a persymmetric matrix is persymmetric;
3. The inverse of a Toeplitz matrix is not, in general, Toeplitz. However, since a Toeplitz matrix is persymmetric, its inverse will always be persymmetric. Furthermore, the inverse of a symmetric Toeplitz matrix will be centrosymmetric.

(The figure on this slide summarizes the relationship between the symmetries of a matrix and its inverse.)

Linear algebra overview

4.11. Orthogonal matrix – a real n x n matrix with orthogonal columns (and rows). If

A = [a_1 \; a_2 \; \cdots \; a_n]    (1.44.1)

and

a_i^T a_j = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}    (1.44.2)

then A is orthogonal. We observe that if A is orthogonal, then

A^T A = I    (1.44.3)

Therefore, the inverse of an orthogonal matrix is equal to its transpose:

A^{-1} = A^T    (1.44.4)

Linear algebra overview

4.12. Unitary matrix – a complex n x n matrix with orthogonal columns (rows):

a_i^H a_j = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}    (1.45.1)

Then

A^H A = I    (1.45.2)

The inverse of a unitary matrix equals its Hermitian transpose:

A^{-1} = A^H    (1.45.3)

Linear algebra overview

The quadratic form of a real symmetric n x n matrix A is the scalar defined by

Q_A(x) = x^T A x = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i a_{ij} x_j    (1.46.1)

where x is a vector of n real variables. Observe that the quadratic form is a quadratic function in the n variables x_1, x_2, ..., x_n. For example, the quadratic form of

A = \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}    (1.46.2)

is

Q_A(x) = x^T A x = 3 x_1^2 + 2 x_1 x_2 + 2 x_2^2    (1.46.3)

Linear algebra overview

Similarly, for a Hermitian matrix, the Hermitian form is

Q_A(x) = x^H A x = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i^* a_{ij} x_j    (1.47.1)

If the quadratic form of a matrix A is positive for all nonzero vectors x,

Q_A(x) > 0    (1.47.2)

then A is said to be positive definite, and we write A > 0. For example, the matrix

A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}    (1.47.3)

which has the quadratic form

Q_A(x) = [x_1 \; x_2] \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 2 x_1^2 + 3 x_2^2    (1.47.4)

is positive definite, since Q_A(x) > 0 for all x \ne 0.

Linear algebra overview

If the quadratic form of a matrix A is nonnegative for all nonzero vectors x,

Q_A(x) \ge 0    (1.48.1)

then A is said to be positive semidefinite.

If the quadratic form is negative for all nonzero vectors x,

Q_A(x) < 0    (1.48.2)

then A is said to be negative definite.

If the quadratic form is nonpositive for all nonzero vectors x,

Q_A(x) \le 0    (1.48.3)

then A is said to be negative semidefinite.

A matrix A that is none of the above is called indefinite.
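Definiteness is easy to probe numerically in MATLAB; the sketch below uses the example matrix from (1.47.3) (the eigenvalue criterion it relies on is formalized as Property 3 in the eigenvalue section that follows).

A = [2 0; 0 3];            % example matrix from (1.47.3)
eig(A)                     % eigenvalues 2 and 3: all positive, so A > 0
x = randn(2,1);            % arbitrary nonzero vector
Q = x'*A*x                 % quadratic form: positive for any nonzero x
[R, p] = chol(A);          % p == 0 is another indicator of positive definiteness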

Linear algebra overview

5. Eigenvalues and Eigenvectors

Let A be an n x n matrix and consider the following set of linear equations:

A v = \lambda v    (1.49.1)

where \lambda is a constant. Equivalently,

(A - \lambda I) v = 0    (1.49.2)

In order for a nonzero vector v to be a solution to this equation, it is necessary for the matrix A - \lambda I to be singular. Therefore, the determinant of A - \lambda I must be zero:

p(\lambda) = \det(A - \lambda I) = 0    (1.49.3)

where p(\lambda) is an n-th order polynomial in \lambda. This polynomial is called the characteristic polynomial of the matrix A, and its n roots \lambda_i, i = 1, 2, ..., n, are called the eigenvalues of A.

Linear algebra overview

For each eigenvalue \lambda_i, i = 1, 2, ..., n, the matrix A - \lambda_i I is singular, so there will be at least one nonzero vector v_i that solves (1.49.1), i.e.

A v_i = \lambda_i v_i    (1.50.1)

These vectors v_i are called the eigenvectors of A. For any eigenvector v_i, \alpha v_i will also be an eigenvector for any constant \alpha. Therefore, eigenvectors are often normalized to have unit norm.

Property 1: The nonzero eigenvectors v_1, v_2, ..., v_n corresponding to distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_n are linearly independent.

If A is an n x n singular matrix, then there are nonzero solutions to the homogeneous equation

A v_i = 0    (1.50.2)

and it follows that \lambda = 0 is an eigenvalue of A. There are n - rank(A) linearly independent solutions to (1.50.2). Therefore, A will have rank(A) nonzero eigenvalues and n - rank(A) eigenvalues that are equal to zero.

Linear algebra overview

Property 2: The eigenvalues of a Hermitian matrix are real.

Property 3: A Hermitian matrix A is positive definite, A > 0, iff the eigenvalues of A are positive: \lambda_k > 0.

The determinant of a matrix is related to its eigenvalues as

\det(A) = \prod_{i=1}^{n} \lambda_i    (1.51.1)

Property 4: The eigenvectors of a Hermitian matrix corresponding to distinct eigenvalues are orthogonal, i.e.

\text{if } \lambda_i \ne \lambda_j, \text{ then } \langle v_i, v_j \rangle = 0    (1.51.2)

Spectral Theorem: Any Hermitian matrix A can be decomposed as

A = V \Lambda V^H = \lambda_1 v_1 v_1^H + \lambda_2 v_2 v_2^H + \cdots + \lambda_n v_n v_n^H    (1.51.3)

where \lambda_i are the eigenvalues of A and v_i are a set of orthonormal eigenvectors.
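A quick MATLAB check of the spectral theorem on a small, made-up symmetric matrix:

A = [2 1; 1 3];                 % example real symmetric (hence Hermitian) matrix
[V, D] = eig(A);                % columns of V: orthonormal eigenvectors; diag(D): eigenvalues
A_rebuilt = V*D*V';             % spectral decomposition (1.51.3)
max(abs(A(:) - A_rebuilt(:)))   % round-off only
V'*V                            % ~ identity: the eigenvectors are orthonormal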

Linear algebra overview

Property 5: Let B be an n x n matrix with eigenvalues \lambda_i, and let A be a matrix that is related to B as follows:

A = B + \alpha I    (1.52.1)

Then A and B have the same eigenvectors, and the eigenvalues of A are \lambda_i + \alpha.

Property 6: For a symmetric positive definite matrix A, the equation

x^T A x = 1    (1.52.2)

defines an ellipse in n dimensions whose axes are in the directions of the eigenvectors v_j of A, with the half-length of these axes equal to 1/\sqrt{\lambda_j}.

Concepts of signals and systems

Signals play an important role in our daily life. Examples of signals are speech, music, pictures, video… A signal is a function of independent variables such as time, distance, temperature, etc.

For instance, a speech signal represents air pressure (or a voltage output from a microphone) as a function of time; a black-and-white picture is a representation of light intensity as a function of two spatial coordinates; video is a function of two spatial coordinates and time.

Concepts of signals and systems

Most signals are generated naturally; however, a signal can also be generated synthetically by a computer simulation.

A signal carries information, and one of the objectives of signal processing is to extract useful information from the signal. The method of information extraction depends on the type of signal and the nature of the information.


Concepts of signals and systems

The term system usually indicates a physical device used to generate or manipulate signals. Thus, we may consider a system as a mathematical function applied to a signal.

Alternatively, a system can be viewed as any process that results in the transformation of signals. Therefore, a system may have an input signal and an output signal that is related to the input through the system transformation.

For instance, an audio amplifier takes a recorded music signal, amplifies it and outputs to the speakers usually allowing us to control the amplification (i.e. “loudness”) and tone (amounts of “bass” and “treble”).


Why Digital?? Music recording and reproduction: some history

Past…

Music recording and reproduction

Still past… And (almost) present…

Music recording and reproduction: some physics behind

1. Recording

 Mechanical wave (sound) generates electrical signal in a microphone.

 Signal is amplified  And recorded (stored), which usually involves some conversion…

2. Reproduction

 A recorded signal is converted back to electrical current.

 Amplified  A mechanical wave (sound) is generated by a speaker (electrical oscillation is converted into mechanical one).

Music recording techniques can be divided into analog and digital according to the signal's representation at the storage stage.

Music recording and reproduction: The media

CD/DVD – digital recording: optical properties of the media are modified to encode a digital signal.

Magnetic tape – analog recording: magnetization of the media changes according to an analog electrical signal.

So, why DSP???

Most of the portable (and not only portable!) electronic items we are dealing with in our everyday life are digital… Digitally recorded data needs to be manipulated…

Sampling

Let us consider a sinusoid x(t) = A cos(\Omega t + \phi), where t is a continuous time variable, and take its samples every T_s seconds. By sampling, we can reduce storage requirements considerably!
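A small MATLAB sketch of uniform sampling (all numbers are made-up examples); the notation x(n) = x(nT_s) is formalized on the next slide.

f  = 1000;                 % sinusoid frequency, Hz (example value)
fs = 8000;                 % sampling frequency, Hz (example value)
Ts = 1/fs;                 % sampling period
n  = 0:31;                 % sample indices
x  = cos(2*pi*f*n*Ts);     % samples x(n) = x(n*Ts) of the sinusoid
stem(n, x); xlabel('n'); ylabel('x(n)');   % discrete-time picture of the samples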

Sampling: Ideal C/D converter

The ideal C/D (continuous-to-discrete) converter takes the continuous-time signal x(t) and keeps its values at the instants t = n T_s. Therefore, a sampled signal is

x(n) = x(n T_s)    (1.62.1)

The sampling frequency:

f_s = \frac{1}{T_s}    (1.62.2)

For a sinusoid:

x(n) = A \cos(\Omega n T_s + \phi) = A \cos(\omega n + \phi)    (1.62.3)

Normalized radial frequency:

\omega = \Omega T_s = 2\pi f / f_s    (1.62.4)

Fractional frequency:

f / f_s    (1.62.5)

Aliasing

Let us consider two discrete-time sinusoids:

x_1(n) = \cos(0.4\pi n), \qquad x_2(n) = \cos(2.4\pi n)

They are obviously at two different frequencies, BUT trigonometry shows that

x_2(n) = \cos(2.4\pi n) = \cos(2\pi n + 0.4\pi n) = \cos(0.4\pi n) = x_1(n)

(The figure plots x_1(t), x_2(t), and the common samples x_1(n).)

Result: we may reconstruct more than one sinusoid from the same samples! Aliasing (more than one signal represented by the same samples) is due to the 2\pi n periodicity of sin/cos.
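The aliasing identity above can be verified numerically; a short MATLAB check (the sequence length is arbitrary):

n  = 0:40;                  % sample index
x1 = cos(0.4*pi*n);         % x1(n)
x2 = cos(2.4*pi*n);         % x2(n)
max(abs(x1 - x2))           % ~ 1e-15: the two sequences are sample-by-sample identical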

Aliasing

How would we mitigate aliasing?

Restrict ourselves to signals at (normalized) frequencies within [-\pi, \pi]… which leads us to an appropriate selection of the sampling rate (sampling frequency)!

Intuitively, we would lower the storage requirements if the sampling frequency could be reduced… then we would collect fewer samples…

How many samples do we need to reconstruct the initial sinusoid?

Let us consider two periods of a sinusoid and uniform sampling with as many as 4 uniform samples per period. Can more than one sinusoid fit these samples? No – the sinusoid can be reconstructed!

How many samples do we need to reconstruct the initial sinusoid?

With as many as 2 uniform samples per period: can more than one sinusoid fit these samples? It still can be reconstructed!

How many samples do we need to reconstruct the initial sinusoid?

With only 1 sample per period: MORE than one sinusoid can be drawn through the samples, and the initial signal is lost.

The Sampling Theorem

We can conclude that at least two uniform signal samples per period are necessary to recover the initial sinusoidal signal.

(Pictured: Harry Nyquist, Claude Shannon, Kotelnikov.)

Nyquist-Shannon Theorem: a sinusoidal signal of frequency f_0 can be perfectly reconstructed from its uniformly sampled version if the samples were taken at a frequency of at least 2 f_0 (twice the signal's frequency).

The Sampling Theorem

More accurately: a band-limited signal can be perfectly reconstructed from its uniformly sampled version iff the samples were taken at a frequency at least twice the highest frequency in the signal's spectrum.

One half of the sampling frequency – the maximum frequency component that can be represented – is called the Nyquist frequency.

What about systems? Do they become discrete too?

Classification of sequences

A discrete-time signal can be classified into several types based on its specific characteristics…

1. Duration: a discrete-time signal may be a finite-length or an infinite-length sequence.

A finite-length (finite-duration, finite-extent) sequence is defined for a finite time interval

N_1 \le n \le N_2    (1.70.1)

where N_1 \le N_2. Therefore, the length (duration) N of the above finite-length sequence may be computed as

N = N_2 - N_1 + 1    (1.70.2)

A length-N discrete-time sequence consists of N samples.

Classification of sequences

There are three types of infinite-length sequences.

A right-sided sequence x(n) has zero-valued samples for n < N_1:

x(n) = 0 \quad \text{for } n < N_1    (1.71.1)

where N_1 is a finite integer that can be positive or negative. If N_1 \ge 0, a right-sided sequence is usually called a causal sequence.

A left-sided sequence x(n) has zero-valued samples for n > N_2:

x(n) = 0 \quad \text{for } n > N_2    (1.71.2)

where N_2 is a finite integer that can be positive or negative. If N_2 is negative, a left-sided sequence is usually called an anticausal sequence.

A general two-sided sequence x(n) is defined for both positive and negative values of n.

Classification of sequences

2. Symmetry with respect to the time index n = 0.

A sequence x(n) is called a conjugate-symmetric sequence if

x(n) = x^*(-n)    (1.72.1)

A real conjugate-symmetric sequence is called an even sequence.

A sequence x(n) is called a conjugate-antisymmetric sequence if

x(n) = -x^*(-n)    (1.72.2)

A real conjugate-antisymmetric sequence is called an odd sequence.

For a conjugate-antisymmetric sequence x(n), the sample value at n = 0 must be purely imaginary. For an odd sequence, x(0) = 0.

Classification of sequences

Any complex sequence x(n) can be expressed as a sum of its conjugate-symmetric part x_{cs}(n) and its conjugate-antisymmetric part x_{ca}(n):

x(n) = x_{cs}(n) + x_{ca}(n)    (1.73.1)

where

x_{cs}(n) = \frac{1}{2}\left[ x(n) + x^*(-n) \right]    (1.73.2)

x_{ca}(n) = \frac{1}{2}\left[ x(n) - x^*(-n) \right]    (1.73.3)

Therefore, the computation of the conjugate-symmetric and conjugate-antisymmetric parts of a sequence involves conjugation, time-reversal, addition, and multiplication operations.

Classification of sequences

Similarly, any real sequence x(n) can be expressed as a sum of its even part x_{ev}(n) and its odd part x_{od}(n):

x(n) = x_{ev}(n) + x_{od}(n)    (1.74.1)

where

x_{ev}(n) = \frac{1}{2}\left[ x(n) + x(-n) \right]    (1.74.2)

x_{od}(n) = \frac{1}{2}\left[ x(n) - x(-n) \right]    (1.74.3)

The symmetry properties of sequences often simplify their respective frequency-domain representations.
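A short MATLAB sketch of the even/odd decomposition (1.74.1)-(1.74.3); the sequence below is a made-up example defined on a symmetric index range so that x(-n) is available.

n   = -3:3;                       % symmetric time-index range
x   = [0 0 0 1 2 3 4];            % example real sequence, x(0) = 1
xr  = fliplr(x);                  % time-reversed sequence x(-n)
xev = 0.5*(x + xr);               % even part, (1.74.2)
xod = 0.5*(x - xr);               % odd part, (1.74.3)
max(abs(x - (xev + xod)))         % zero: the parts add back to x, (1.74.1)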

Classification of sequences

3. Periodicity.

A sequence x(n) satisfying

x(n) = x(n + kN) \quad \text{for all } n    (1.75.1)

is called a periodic sequence with a period N, where N is a positive integer and k is any integer. Otherwise, a sequence is called an aperiodic sequence. The fundamental period N_f of a periodic signal is the smallest value of N for which (1.75.1) holds.

A sum or product of two or more periodic sequences is also a periodic sequence. For instance, a sum of two periodic sequences x_a(n) and x_b(n) with fundamental periods N_a and N_b is a periodic sequence with fundamental period

N = \frac{N_a N_b}{\gcd(N_a, N_b)}    (1.75.2)

where \gcd(N_a, N_b) is the greatest common divisor of N_a and N_b.

Classification of sequences

4. Energy and Power signals.

The total energy of a sequence x(n) is defined as

E_x = \sum_{n=-\infty}^{\infty} |x(n)|^2    (1.76.1)

An infinite-length sequence with finite sample values may or may not have finite energy.

The average power of an aperiodic sequence x(n) is defined as

P_x = \lim_{K\to\infty} \frac{1}{2K+1} \sum_{n=-K}^{K} |x(n)|^2    (1.76.2)

Classification of sequences

The average power of a sequence can be related to its energy by defining its energy over a finite interval -K \le n \le K as

E_{x,K} = \sum_{n=-K}^{K} |x(n)|^2    (1.77.1)

Then

P_x = \lim_{K\to\infty} \frac{1}{2K+1} E_{x,K}    (1.77.2)

The average power of a periodic sequence x(n) with a period N is

P_x = \frac{1}{N} \sum_{n=0}^{N-1} |x(n)|^2    (1.77.3)

The average power of an infinite-length sequence may be finite or infinite.
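As a numerical illustration (the sequence is a made-up example), the average power of a periodic sequence can be computed from one period as in (1.77.3):

N  = 8;                       % period of an example periodic sequence
n  = 0:N-1;                   % one period of time indices
x  = cos(2*pi*n/N);           % one period of the sequence
Ex_period = sum(abs(x).^2);   % energy over one period
Px = Ex_period/N              % average power, (1.77.3): equals 0.5 here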

Classification of sequences

An infinite energy signal with finite average power is called a

power signal

.

A finite energy signal with zero average power is called an

energy signal

.

An example of a power signal is a periodic sequence that has a finite average power but infinite energy.

An example of an energy signal is a finite-length sequence which has finite energy but zero average power.


Classification of sequences

5. Other classifications.

1) A sequence x(n) is called bounded if each of its samples is of finite magnitude:

|x(n)| \le B_x < \infty    (1.79.1)

2) A sequence x(n) is called absolutely summable if

\sum_{n=-\infty}^{\infty} |x(n)| < \infty    (1.79.2)

3) A sequence x(n) is called square-summable if

\sum_{n=-\infty}^{\infty} |x(n)|^2 < \infty    (1.79.3)

Classification of sequences

Therefore, a square-summable sequence has finite energy and is an energy signal if it also has zero power.

An example of a sequence that is square-summable but not absolutely summable is

x_a(n) = \frac{\sin(\omega_c n)}{\pi n}, \quad -\infty < n < \infty    (1.80.1)

Examples of sequences that are neither absolutely summable nor square-summable are

x_b(n) = \sin(\omega_c n), \quad -\infty < n < \infty    (1.80.2)

x_c(n) = K, \quad -\infty < n < \infty    (1.80.3)

where K is a constant.

Important quantities in discrete time

Kronecker delta (unit sample, unit impulse) function:

\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}    (1.81.1)

Unit sample function shifted by k samples:

\delta(n - k) = \begin{cases} 1, & n = k \\ 0, & n \ne k \end{cases}    (1.81.2)

(The figure shows \delta(n - k) for k = 2.)

Important quantities in discrete time

The unit step function:

\mu(n) = \begin{cases} 1, & n \ge 0 \\ 0, & n < 0 \end{cases}    (1.82.1)

Unit step function shifted by k samples:

\mu(n - k) = \begin{cases} 1, & n \ge k \\ 0, & n < k \end{cases}    (1.82.2)

(The figure shows \mu(n - k) for k = -2.)

Important quantities in discrete time

The unit sample and unit step sequences are related as follows:

\mu(n) = \sum_{m=0}^{\infty} \delta(n - m) = \sum_{k=-\infty}^{n} \delta(k)    (1.83.1)

\delta(n) = \mu(n) - \mu(n - 1)    (1.83.2)

The real sinusoidal sequence with constant amplitude:

x(n) = A \cos(\omega_0 n + \phi), \quad -\infty < n < \infty    (1.83.3)

where A, \omega_0, and \phi are real numbers: the amplitude, the angular frequency, and the phase of the sinusoidal sequence x(n).

Important quantities in discrete time

The real sinusoidal sequence can also be written as

x(n) = x_i(n) \cos\phi - x_q(n) \sin\phi    (1.84.1)

where x_i(n) and x_q(n) are the in-phase and quadrature components of x(n), given by

x_i(n) = A \cos(\omega_0 n); \qquad x_q(n) = A \sin(\omega_0 n)    (1.84.2)

Important quantities in discrete time

The exponential sequence is

x(n) = A \alpha^n, \quad -\infty < n < \infty    (1.85.1)

where A and \alpha are real or complex numbers, which may be written as

\alpha = e^{(\sigma_0 + j\omega_0)}, \qquad A = |A| e^{j\phi}    (1.85.2)

Therefore,

x(n) = |A| e^{\sigma_0 n} e^{j(\omega_0 n + \phi)} = |A| e^{\sigma_0 n} \cos(\omega_0 n + \phi) + j |A| e^{\sigma_0 n} \sin(\omega_0 n + \phi)    (1.85.3)

If we write

x(n) = x_{re}(n) + j x_{im}(n)    (1.85.4)

then

x_{re}(n) = |A| e^{\sigma_0 n} \cos(\omega_0 n + \phi); \qquad x_{im}(n) = |A| e^{\sigma_0 n} \sin(\omega_0 n + \phi)    (1.85.5)

Important quantities in discrete time

With both A and \alpha real, (1.85.1) reduces to a real exponential sequence. For n \ge 0, such a sequence with |\alpha| < 1 decays exponentially as n increases, and with |\alpha| > 1 it grows exponentially as n increases.

Important quantities in discrete time

System’s impulse response:

h

n

 

k

b

k

Discrete convolution:

y n

x n n h n x n

l

  

x h l

All LTI systems satisfy the convolution equation where:

x

n is an input to the system,

y

n system’s impulse response.

is its output, and

h

n is the Convolution with an impulse:

x n

 

 0

x

 0 ELEN 4304/5346 Digital Signal Processing Fall 2008 (1.87.1) (1.87.2) (1.87.3) 87

Question time!

Remark: strictly speaking, to make a signal digital, in addition to discretization in time (sampling), we need to quantize the samples (make them discrete in amplitude), which will be discussed later.
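Returning to the convolution relations (1.87.2) and (1.87.3) for a moment, here is a quick numerical check in MATLAB (the sequences are made-up examples):

x = [1 2 3 0 0];              % example input sequence
h = [1 -1];                   % example impulse response (a first difference)
y = conv(x, h)                % discrete convolution, (1.87.2): y = [1 1 1 -3 0 0]
d2 = [0 0 1];                 % delta(n-2) for n = 0, 1, 2
conv(x, d2)                   % x delayed by two samples, as in (1.87.3)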

Introduction to Matlab

Matlab is an interactive, matrix-based software package for scientific and engineering computations. Interface: the main window.

Introduction to Matlab

Interface: the editor window.

Introduction to Matlab

Interface: figure windows. Whenever you plot results, they will be plotted in one of the figure windows.

Introduction to Matlab

We may say that there are two ways to use Matlab: • We can enter the data (manually or load from files or external sources), manipulate the data (filter in some manner) and plot (or save) the results through the main window.

• Or we can write a code (actually, the same sequence of commands we would enter one by one if we used the main window) and save it as an *.m file to be able to come back to it later. This is what the editor window is used for.


Introduction to Matlab

We will concentrate on the “command (main) window (line)” approach.

For help in Matlab: 1. Type

help commandname

where “commandname” stands for a name of a command.

Ex:

help plot

You will get all necessary information about the command: its syntax, arguments, etc. Pay attention to similar (or related) commands listed at the end.

2. If you don’t really know the name of a command you need, type

lookfor keyword

where “keyword” is a word corresponding to what you are looking for.

Ex:

lookfor matrix

You will get a list of names and brief descriptions of commands containing the “keyword”.


Introduction to Matlab

• There is no need to describe (pre-allocate) any variables, constants etc. before you use them.

• Matlab normally treats all the variables as matrices.

• For our purposes, we will usually work with 1D matrices – vectors.

• Matlab is case sensitive!

A

and

a

are two different variables.

• Matlab remembers your entries: use the up-arrow to call previous commands.

• Matlab is an expression language:

variable = expression.

If

variable

and = are omitted, a variable

ans

is automatically created, to which the result is assigned.

ELEN 4304/5346 Digital Signal Processing Fall 2008 94

Introduction to Matlab

To manually enter the 3 x 3 matrix A with rows (1 2 3), (4 5 6), (7 8 9) from the keyboard, type

A = [1,2,3;4,5,6;7,8,9]; or A = [1 2 3;4 5 6;7 8 9];

Commas (or spaces) divide entries within the same row; semicolons divide rows. All rows MUST have the same number of elements!

We can call (and modify if needed) matrix elements by their indexes: A(2,3) would bring 6 on the screen; A(2,3) = 21 will replace an entry 6 by 21.

The semicolon “;” at the end of the expression instructs not to show the result on the screen.

So, if you need to check what A equals to, just type A and press “Enter”!


Introduction to Matlab

You can merge together two or more matrices.

Ex: A = [1 2 3]; B = [4 5 6 7 8]; C = [A B A]; will create the vector C = [1 2 3 4 5 6 7 8 1 2 3].

Ex: A = [1 2 3]; A = [A;A.^2]; will modify A into the 2 x 3 matrix [1 2 3; 1 4 9].

Btw, we can call particular rows or columns of a matrix (a vector) or their combinations:
A(2,:) will return [1 4 9] – all entries of the second row;
A(:,2) will return [2; 4] – all entries of the second column;
C = A(1:end,2:end) is the matrix [2 3; 4 9].

Introduction to Matlab

There are numerous functions that generate some common matrices:

zeros(n,m) – n by m array of zeros
ones(n,m) – n by m array of ones
eye(n) – n by n identity matrix
rand(n,m) – n by m array of UNIFORMLY distributed pseudo-random numbers
randn(n,m) – n by m array of NORMALLY distributed pseudo-random numbers

Trick: If you need to create a vector A of linearly increasing (decreasing) numbers, type something like A = first:step:end; where "first" is the value of the first element, "step" is the increment (decrement), and "end" is the value of the last element.

Ex: n = 1:100;
Ex: a = 21:0.14:531.2;

Introduction to Matlab

Matrix operations: + addition, - subtraction, ' transpose, ^ power, * multiplication, / right division, \ left division.

Array operations (entry by entry): .^ power, .* multiplication, ./ right division, .\ left division.

Ex: A = [1 2; 3 4]; B = [5 6; 7 8]; then A*B = [19 22; 43 50] and A.*B = [5 12; 21 32].

Note: "*" denotes multiplication here only! In this class (in the equations), "*" implies convolution!

Introduction to Matlab

All data you enter during the current session is stored in the memory usually referred to as a

workspace

.

You may save your workspace by a command

save

.

You may erase (clear) any variable from the workspace by a command

clear variablename

, where “variablename” stands for a name of the variable you wish to erase. Command

clear all

erases all nonpermanent variables!

To see what is in the workspace, use the command

whos

.

99 ELEN 4304/5346 Digital Signal Processing Fall 2008

Introduction to Matlab

FOR, WHILE, IF

1. FOR – repeat the statement(s) a number of times

FOR variable = expression, statement, ..., statement END

Ex:
A = []; b = 4.35;
for ind = 1:5
    A = [A ind^3];
    B = b*A;
end

Better style:
A = []; b = 4.35;
for ind = 1:5
    A = [A ind^3];
end
B = b*A;

Equivalent:
A = []; b = 4.35;
for ind = 1:5, A = [A ind^3], end; B = b*A;

Introduction to Matlab

2. WHILE – repeat the statement(s) while the expression is true

WHILE expression, statement, ..., statement END

Ex:
A = []; b = 4.35; ind = 1;
while ind <= 5
    A = [A ind^3];
    ind = ind + 1;
end
B = b*A;

Introduction to Matlab

3. IF – execute the statement(s) if the expression is true

IF expression, statement, ..., statement END

Ex:
A = randn(1); B = 0.5;
if A < B
    A = A+B;
end

Introduction to Matlab

Relational operators: < less than, > greater than, <= less or equal, >= greater or equal, == equal to, ~= not equal to.

Logical operators: & and, | or, ~ not.

We may compare entire matrices (vectors).

Introduction to Matlab

Example: generate 100 samples of a sinusoid at the frequency 0.04\pi.

1. "Traditional approach":
for n = 1:100
    x(n) = sin(0.04*pi*n);
end

2. "Matlab approach":
x = sin(0.04*pi*[1:100]);

Much faster! Whenever data (or operations) can be vectorized, it speeds up your code.

Introduction to Matlab

Some built-in Matlab functions:

sin(x), cos(x), tan(x), asin(x), acos(x), atan(x) – "x" must be in radians
exp(x), log(x) – natural log; sqrt(x) – square root
abs(x) – absolute value (modulus for complex "x")
angle(x) – phase angle in radians
real(x) – real part of complex "x"; imag(x) – imaginary part of "x"
rem(x) – remainder; sign(x) – sign of x
round(x) – rounds towards the nearest integer
floor(x) – rounds towards minus infinity (down)
ceil(x) – rounds towards plus infinity (up)

Introduction to Matlab

Some built-in Matlab vector functions: max, min, sort, sum, prod, median, mean, std, var, any, all.

And some built-in constants: pi; eps – machine precision; j, i.

size(x) shows the size of "x" (number of rows and columns).

Introduction to Matlab

Figures and plotting stuff:

figure – creates a new figure
plot(x) – a planar plot of vector x on the current axes
polar(x) – a polar plot of vector x on the current axes
hold on; – tells Matlab to keep (hold) the existing plot so that the new plot will be added
hold off; – "unholds" the existing plot: it will be replaced by the new plot
xlabel(string) – puts the "string" as a label on the x-axis
ylabel(string) – puts the "string" as a label on the y-axis
title(string) – puts the string as a figure title
legend(string1,string2,…) – adds a legend to the current axes
grid – manipulates gridlines
subplot – allows placing multiple axes on a single figure
xlim(v) – sets the limits (given in the vector v) for the x-axis
ylim(v) – sets the limits (given in the vector v) for the y-axis

Introduction to Matlab

Example: the following code generates this figure:

x = sin(0.04*pi*[1:100]);
plot(x,'-','linewidth',2)
grid; ylim([-1.1,1.1])
xlabel('Time, sample')
ylabel('Amplitude')

To copy your figure to another application (such as MS Word), go to the Edit menu in the Figure window and select "Copy Figure". Now your figure is in the computer memory and can be pasted into another application.

ELEN 4304/5346 Digital Signal Processing Fall 2008 108

Matlab summary

Remember: 1. Use help! No point to memorize everything.

2. Think in “matrix/vector” terms.

3. If Matlab stops responding and says it’s Busy, “Ctrl”+”c” breaks the operation 4. If “Ctrl”+”c” did not help, “Ctrl”+”q” quits Matlab.


Considerations and Definitions

We need to make a distinction between a software model (or our paper/pencil work) and its hardware implementation.

Filter = System (continuous time or discrete time).

We may use time-, frequency-, and z-domain descriptions of a system: h(n) – impulse response; H(\omega) – frequency response (for BIBO-stable systems); H(z) – system function. These descriptions (if they exist) are equivalent.

Additionally, (x(n), y(n)) input/output pairs, {A, b, c, …} state matrices, and difference equations may be used.

Continuous vs. discrete time: sampling

from Mitra’s book: 111 Sampling frequency:

f s

1

T s

(1.111.1) In general, sampling may lead to information been lost… ELEN 4304/5346 Digital Signal Processing Fall 2008

System’s characteristics

1. Linearity: 2. Time-invariance: 3. Causality: 4. Stability (BIBO):

x x

1

n

x n n

1

n

 

n x y n

2

n

y

2

n

ax

1

n

bx

2

n

ay

1

n

by

2

n



x

1

n

, 

x

2

n

   0  

y k

 0  

n

0 

x n

B x

  

y n



B y

112

Important:

we don’t really know whether the system is stable: it is stable only for the particular (bounded) input!

Important:

use random input signals to test your system!

Important:

these properties are specified for relaxed systems only!

ELEN 4304/5346 Digital Signal Processing Fall 2008


Description of systems…

Any input sequence can be written as a weighted sum of shifted unit samples:

x(n) = \sum_{k=-\infty}^{\infty} x(k) \, \delta(n - k)    (1.113.1)

If the system T\{\cdot\} is linear and time-invariant and h(n) = T\{\delta(n)\} is its impulse response, then

y(n) = T\{x(n)\} = \sum_{k=-\infty}^{\infty} x(k) \, h(n - k) = \sum_{m=-\infty}^{\infty} h(m) \, x(n - m) = x(n) * h(n)    (1.113.2)

Therefore, h(n) is a complete I/O (input/output) description of an LTI system.

Notes on stability (BIBO) of LTI systems

|y(n)| = \left| \sum_{k} h(k) \, x(n - k) \right| \le \sum_{k} |h(k)| \, |x(n - k)|    (1.114.1)

For a bounded input, |x(n)| \le B_x, the output is bounded by

|y(n)| \le B_x \sum_{k} |h(k)|    (1.114.2)

Therefore, for BIBO stability it is sufficient that the impulse response is absolutely summable:

\sum_{k} |h(k)| = B_h < \infty    (1.114.3)

This condition is also necessary: for the bounded input

x(n_0 - k) = \mathrm{sign}\{ h(k) \}    (1.114.4)

the output sample at n_0 is

y(n_0) = \sum_{k} h(k) \, x(n_0 - k) = \sum_{k} |h(k)|    (1.114.5)

which is unbounded unless (1.114.3) holds.

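As a numerical illustration of the absolute summability test (1.114.3), the sketch below sums |h(n)| for a made-up causal exponential impulse response:

a = 0.9;                     % example pole; |a| < 1 gives a stable system
n = 0:200;                   % enough terms for the geometric sum to converge
h = a.^n;                    % causal exponential impulse response h(n) = a^n
Bh = sum(abs(h))             % approaches 1/(1-a) = 10: finite, so BIBO stable
% With |a| >= 1 the partial sums grow without bound and the system is not BIBO stable.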

Causality test (for LTI)

Consider two inputs x_1(n) and x_2(n) that are identical up to time n_0:

x_1(n) = x_2(n) \quad \text{for } n \le n_0, \qquad x_1(n) \ne x_2(n) \text{ for some } n > n_0    (1.115.1)

The outputs at n = n_0 are

y_1(n_0) = \sum_{k=-\infty}^{\infty} h(k) \, x_1(n_0 - k), \qquad y_2(n_0) = \sum_{k=-\infty}^{\infty} h(k) \, x_2(n_0 - k)

Causality requires y_1(n_0) = y_2(n_0) whenever (1.115.1) holds. Since x_1(n_0 - k) = x_2(n_0 - k) for all k \ge 0, the difference

y_1(n_0) - y_2(n_0) = \sum_{k < 0} h(k) \left[ x_1(n_0 - k) - x_2(n_0 - k) \right]    (1.115.2)

must vanish for arbitrary inputs that differ for n > n_0. Therefore, an LTI system is causal iff

h(k) = 0 \quad \text{for } k < 0