Introduction to Mobile Robotics


Probabilistic Robotics

Localization

KF, EKF

Bayes Filter Reminder

Prediction:

\bar{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}

Correction:

bel(x_t) = \eta \, p(z_t \mid x_t) \, \bar{bel}(x_t)
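As a concrete sketch of these two steps, the following Python snippet runs one prediction-correction cycle of a discrete (histogram) Bayes filter; the function and argument names are illustrative, not taken from the lecture.

```python
import numpy as np

def bayes_filter_step(bel, p_trans, p_obs_given_x):
    """One prediction + correction step of a discrete Bayes filter.

    bel           : (N,) prior belief over N discrete states
    p_trans       : (N, N) motion model, p_trans[j, i] = p(x_t = j | u_t, x_{t-1} = i)
    p_obs_given_x : (N,) measurement likelihood p(z_t | x_t = j) for the observed z_t
    """
    bel_bar = p_trans @ bel              # prediction: integrate (sum) over x_{t-1}
    bel_new = p_obs_given_x * bel_bar    # correction: weight by measurement likelihood
    return bel_new / bel_new.sum()       # eta normalizes the posterior
```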

Localization Taxonomy

• Local vs. Global localization.

• Static vs. Dynamic environments.

• Passive vs. Active approaches.

• Single-robot vs. Multirobot localization.


Review topics…

• KF/EKF tutorial on the Welch-Bishop website.

• Table 7.2: EKF with known correspondences.

• Table 7.3: EKF with unknown correspondences.

• Table 7.4: Unscented Kalman Filter.

• Read Chapters 7 and 8 very carefully!

Need volunteers to discuss topics in class…


Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation

x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t

with a measurement

z_t = C_t x_t + \delta_t
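For intuition, here is a minimal simulation of this model for a 1-D constant-velocity system; the specific matrices are illustrative assumptions, not values from the slides.

```python
import numpy as np

# State x = [position, velocity], control u = acceleration, dt = 1 (all assumed).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
C = np.array([[1.0, 0.0]])   # only position is measured
R = 0.01 * np.eye(2)         # process noise covariance
Q = np.array([[0.25]])       # measurement noise covariance

x = np.zeros(2)
u = np.array([0.1])
x = A @ x + B @ u + np.random.multivariate_normal(np.zeros(2), R)  # state transition
z = C @ x + np.random.multivariate_normal(np.zeros(1), Q)          # noisy measurement
```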

Components of a Kalman Filter

• A_t: matrix (n×n) that describes how the state evolves from t-1 to t without controls or noise.

• B_t: matrix (n×l) that describes how the control u_t changes the state from t-1 to t.

• C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.

• \varepsilon_t, \delta_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances R_t and Q_t, respectively.

Kalman Filter Algorithm

1. Algorithm Kalman_filter(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t):

Prediction:

2. \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t

3. \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t

Correction:

4. K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}

5. \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)

6. \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t

7. Return \mu_t, \Sigma_t
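A direct NumPy transcription of the algorithm can make the update cycle easier to follow; this is a sketch with illustrative argument names, not a reference implementation.

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + correction cycle of the Kalman filter."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

Each call performs exactly one prediction and one correction, mirroring the numbered steps above.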

The Prediction-Correction-Cycle

Prediction:

Univariate: \bar{bel}(x_t): \quad \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t, \qquad \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2

Multivariate: \bar{bel}(x_t): \quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t

Correction:

Univariate: bel(x_t): \quad \mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t), \qquad \sigma_t^2 = (1 - K_t)\bar{\sigma}_t^2, \qquad K_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}

Multivariate: bel(x_t): \quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \qquad \Sigma_t = (I - K_t C_t)\bar{\Sigma}_t, \qquad K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}

Nonlinear Dynamic Systems

• Most realistic robotic problems involve nonlinear functions

x_t = g(u_t, x_{t-1})

z_t = h(x_t)
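As an example of such a nonlinear g, consider a velocity motion model for a planar robot with state (x, y, θ) and control (v, ω); the time step dt and the noise-free form are simplifying assumptions for illustration.

```python
import numpy as np

def g(u, x_prev, dt=0.1):
    """Nonlinear motion model x_t = g(u_t, x_{t-1}) for a planar robot.
    State x = (x, y, theta), control u = (v, w); noise-free sketch."""
    v, w = u
    x, y, theta = x_prev
    if abs(w) > 1e-6:
        x_new = x - v / w * np.sin(theta) + v / w * np.sin(theta + w * dt)
        y_new = y + v / w * np.cos(theta) - v / w * np.cos(theta + w * dt)
    else:  # straight-line motion when the angular velocity is ~0
        x_new = x + v * dt * np.cos(theta)
        y_new = y + v * dt * np.sin(theta)
    return np.array([x_new, y_new, theta + w * dt])
```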

EKF Algorithm

1. Extended_Kalman_filter(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t):

Prediction:

2. \bar{\mu}_t = g(u_t, \mu_{t-1})

3. \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t

Correction:

4. K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}

5. \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))

6. \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t

7. Return \mu_t, \Sigma_t

with the Jacobians

G_t = \frac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}} \bigg|_{x_{t-1} = \mu_{t-1}}, \qquad H_t = \frac{\partial h(x_t)}{\partial x_t} \bigg|_{x_t = \bar{\mu}_t}

For comparison, the linear Kalman filter uses \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t, K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}, \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), and \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t in the corresponding lines.

Localization

“Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities.” [Cox ’91]

Given

• Map of the environment.

• Sequence of sensor measurements.

Wanted

• Estimate of the robot’s position.

Problem classes

• Position tracking.

• Global localization.

• Kidnapped robot problem (recovery).

Landmark-based Localization


1. EKF_localization(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m):

Prediction:

2. G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{pmatrix} \frac{\partial x'}{\partial \mu_{t-1,x}} & \frac{\partial x'}{\partial \mu_{t-1,y}} & \frac{\partial x'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial y'}{\partial \mu_{t-1,x}} & \frac{\partial y'}{\partial \mu_{t-1,y}} & \frac{\partial y'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial \theta'}{\partial \mu_{t-1,x}} & \frac{\partial \theta'}{\partial \mu_{t-1,y}} & \frac{\partial \theta'}{\partial \mu_{t-1,\theta}} \end{pmatrix}   (Jacobian of g w.r.t. location)

3. V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{pmatrix} \frac{\partial x'}{\partial v_t} & \frac{\partial x'}{\partial \omega_t} \\ \frac{\partial y'}{\partial v_t} & \frac{\partial y'}{\partial \omega_t} \\ \frac{\partial \theta'}{\partial v_t} & \frac{\partial \theta'}{\partial \omega_t} \end{pmatrix}   (Jacobian of g w.r.t. control)

4. M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}   (Motion noise)

5. \bar{\mu}_t = g(u_t, \mu_{t-1})   (Predicted mean)

6. \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + V_t M_t V_t^T   (Predicted covariance)
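A possible NumPy sketch of this prediction step with the velocity motion model is shown below; the time step dt, the noise parameters alphas, and the handling of ω ≈ 0 are assumptions on my part.

```python
import numpy as np

def ekf_localization_prediction(mu, Sigma, u, alphas, dt=0.1):
    """EKF localization prediction with a velocity motion model.
    alphas = (a1, a2, a3, a4) are motion-noise parameters."""
    v, w = u
    x, y, theta = mu
    w = w if abs(w) > 1e-6 else 1e-6          # avoid division by zero
    # Predicted mean: mu_bar = g(u, mu)
    mu_bar = mu + np.array([-v/w*np.sin(theta) + v/w*np.sin(theta + w*dt),
                             v/w*np.cos(theta) - v/w*np.cos(theta + w*dt),
                             w*dt])
    # G_t: Jacobian of g w.r.t. the state (x, y, theta)
    G = np.array([[1, 0, -v/w*np.cos(theta) + v/w*np.cos(theta + w*dt)],
                  [0, 1, -v/w*np.sin(theta) + v/w*np.sin(theta + w*dt)],
                  [0, 0, 1]])
    # M_t: control noise in (v, w) space
    a1, a2, a3, a4 = alphas
    M = np.diag([a1*v**2 + a2*w**2, a3*v**2 + a4*w**2])
    # V_t: Jacobian of g w.r.t. the control, mapping M into state space
    V = np.array([[(-np.sin(theta) + np.sin(theta + w*dt))/w,
                    v*(np.sin(theta) - np.sin(theta + w*dt))/w**2 + v*np.cos(theta + w*dt)*dt/w],
                  [( np.cos(theta) - np.cos(theta + w*dt))/w,
                   -v*(np.cos(theta) - np.cos(theta + w*dt))/w**2 + v*np.sin(theta + w*dt)*dt/w],
                  [0, dt]])
    Sigma_bar = G @ Sigma @ G.T + V @ M @ V.T
    return mu_bar, Sigma_bar
```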

1. EKF_localization(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m):

Correction:

2. \hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar{\mu}_{t,y},\; m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix}   (Predicted measurement mean)

3. H_t = \frac{\partial h(\bar{\mu}_t, m)}{\partial x_t} = \begin{pmatrix} -\frac{m_x - \bar{\mu}_{t,x}}{\hat{r}_t} & -\frac{m_y - \bar{\mu}_{t,y}}{\hat{r}_t} & 0 \\ \frac{m_y - \bar{\mu}_{t,y}}{\hat{r}_t^2} & -\frac{m_x - \bar{\mu}_{t,x}}{\hat{r}_t^2} & -1 \end{pmatrix}   (Jacobian of h w.r.t. location)

4. Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}, \qquad S_t = H_t \bar{\Sigma}_t H_t^T + Q_t   (Pred. measurement covariance)

5. K_t = \bar{\Sigma}_t H_t^T S_t^{-1}   (Kalman gain)

6. \mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)   (Updated mean)

7. \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t   (Updated covariance)

where \hat{r}_t denotes the predicted range, i.e. the first component of \hat{z}_t.
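The correction step for a single known range-bearing landmark might look as follows in NumPy; the argument names and the explicit bearing wrap-around are illustrative choices.

```python
import numpy as np

def ekf_localization_correction(mu_bar, Sigma_bar, z, landmark, sigma_r, sigma_phi):
    """EKF localization correction with one known landmark.
    z = (range, bearing), landmark = (m_x, m_y)."""
    mx, my = landmark
    dx, dy = mx - mu_bar[0], my - mu_bar[1]
    q = dx**2 + dy**2
    # Predicted measurement z_hat = h(mu_bar, m)
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
    # H_t: Jacobian of h w.r.t. the robot pose (x, y, theta)
    H = np.array([[-dx/np.sqrt(q), -dy/np.sqrt(q), 0.0],
                  [ dy/q,          -dx/q,         -1.0]])
    Q = np.diag([sigma_r**2, sigma_phi**2])          # measurement noise
    S = H @ Sigma_bar @ H.T + Q                      # predicted measurement covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)           # Kalman gain
    innovation = z - z_hat
    innovation[1] = np.arctan2(np.sin(innovation[1]), np.cos(innovation[1]))  # wrap bearing
    mu = mu_bar + K @ innovation
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar
    return mu, Sigma
```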

UKF_localization(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m):

Prediction:

M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}   (Motion noise)

Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}   (Measurement noise)

\mu_{t-1}^a = \begin{pmatrix} \mu_{t-1}^T & (0\;0) & (0\;0) \end{pmatrix}^T   (Augmented state mean)

\Sigma_{t-1}^a = \begin{pmatrix} \Sigma_{t-1} & 0 & 0 \\ 0 & M_t & 0 \\ 0 & 0 & Q_t \end{pmatrix}   (Augmented covariance)

\chi_{t-1}^a = \begin{pmatrix} \mu_{t-1}^a & \mu_{t-1}^a + \gamma \sqrt{\Sigma_{t-1}^a} & \mu_{t-1}^a - \gamma \sqrt{\Sigma_{t-1}^a} \end{pmatrix}   (Sigma points)

\bar{\chi}_t^x = g(u_t + \chi_t^u, \chi_{t-1}^x)   (Prediction of sigma points)

\bar{\mu}_t = \sum_{i=0}^{2n} w_m^{[i]} \bar{\chi}_t^{x[i]}   (Predicted mean)

\bar{\Sigma}_t = \sum_{i=0}^{2n} w_c^{[i]} (\bar{\chi}_t^{x[i]} - \bar{\mu}_t)(\bar{\chi}_t^{x[i]} - \bar{\mu}_t)^T   (Predicted covariance)

UKF_localization(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m):

Correction:

\bar{\chi}_t = \begin{pmatrix} \bar{\mu}_t & \bar{\mu}_t + \gamma \sqrt{\bar{\Sigma}_t} & \bar{\mu}_t - \gamma \sqrt{\bar{\Sigma}_t} \end{pmatrix}, \qquad \bar{Z}_t = h(\bar{\chi}_t)   (Measurement sigma points)

\hat{z}_t = \sum_{i=0}^{2n} w_m^{[i]} \bar{Z}_t^{[i]}   (Predicted measurement mean)

S_t = \sum_{i=0}^{2n} w_c^{[i]} (\bar{Z}_t^{[i]} - \hat{z}_t)(\bar{Z}_t^{[i]} - \hat{z}_t)^T + Q_t   (Pred. measurement covariance)

\Sigma_t^{x,z} = \sum_{i=0}^{2n} w_c^{[i]} (\bar{\chi}_t^{[i]} - \bar{\mu}_t)(\bar{Z}_t^{[i]} - \hat{z}_t)^T   (Cross-covariance)

K_t = \Sigma_t^{x,z} S_t^{-1}   (Kalman gain)

\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)   (Updated mean)

\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T   (Updated covariance)
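Building on the sigma-point helper above, a generic UKF correction step could be sketched as follows; h is the measurement model and all names are illustrative. For range-bearing measurements, the bearing component of the residuals would additionally need angle wrapping.

```python
import numpy as np

def ukf_correction(mu_bar, Sigma_bar, z, h, Q, sigma_points_fn):
    """UKF correction step using measurement sigma points."""
    X, wm, wc = sigma_points_fn(mu_bar, Sigma_bar)
    Z = np.array([h(x) for x in X])                   # measurement sigma points
    z_hat = wm @ Z                                    # predicted measurement mean
    dZ = Z - z_hat
    dX = X - mu_bar
    S = (wc[:, None] * dZ).T @ dZ + Q                 # predicted measurement covariance
    Sigma_xz = (wc[:, None] * dX).T @ dZ              # cross-covariance
    K = Sigma_xz @ np.linalg.inv(S)                   # Kalman gain
    mu = mu_bar + K @ (z - z_hat)                     # updated mean
    Sigma = Sigma_bar - K @ S @ K.T                   # updated covariance
    return mu, Sigma
```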

Prediction Quality

[Figure: comparison of the predicted beliefs of the EKF and the UKF]

Summary

Gaussian filters:

• Kalman filter: linearity assumption.
  • Robot systems are non-linear.
  • Works well in practice.

• Extended Kalman filter: linearization.
  • Tangent at the mean.

• Unscented Kalman filter: better linearization.
  • Sigma control points.
  • ReBEL Matlab software.

• Information filter (Section 3.5).