Department of Electrical Engineering, Southern Taiwan University
Application of a Hybrid Controller
to a Mobile Robot
Simulation Modelling Practice and Theory 16 (2008) 783–795
Juing-Shian Chiou, Kuo-Yang Wang
Robotic Interaction Learning Lab
Outline

• Abstract
• Introduction
• Using generalized predictive control to predict the goal position
• Using SVM to improve the angle followed by the mobile robot to reach the target
• Using a hybrid controller to improve the optimal velocity of the mobile robot
• Experiments
• Conclusions
Abstract

• This paper presents the application of a hybrid controller to the optimization of the movement of a mobile robot.
• Through the hybrid controller's processes, the optimal angle and velocity of a robot moving in a work space were determined, resulting in more effective movement.
• The experimental scenarios involved a five-versus-five soccer game and a MATLAB simulation, in which the proposed system dynamically assigned the robot to the target position.
Introduction

• The proposed hybrid controller includes a Support Vector Machine (SVM) and a Fuzzy Logic Controller (FLC). An SVM is a set of related supervised learning methods used for classification and regression; SVMs belong to the family of generalized linear classifiers.
• We used the robot soccer system as our test platform because it fully implements a multi-agent system.
Simulation platform
Fig. 1. Five-versus-five simulation platform.
System architecture
Fig. 2. System architecture.
Using generalized predictive control to predict the goal position (1/6)

• Adaptive predictive control machines include a classified on-line structure and control system.
• The design parameters of the GPC include the autoregressive exogenous classification of on-line model levels and the controlled weighting of the control force.
• These two design parameters have to be determined before the on-line implementation.
Using generalized predictive control to predict the goal position (2/6)

• We used the current position and sampling time of the target to predict the target's position at the next sampling time. The following steps illustrate the procedure.
• (I) Although this system is nonlinear, extremely short sampling times were used so the system could be treated as linear.
• (II) First, we gathered the useful conditions from the system:
Useful conditions

1) The position of the robot (Rx, Ry)
2) The former position of the target (GFx, GFy)
3) The current position of the target (Gx, Gy) (the former and current positions were determined on the basis of the sampling time T0, as shown in Fig. 3)

Fig. 3. Sampling time.
Using generalized predictive control to predict the goal position (3/6)

• (III) We calculated the distance between the former and current positions of the target, and determined the speed V0 of the target using the sampling time T0, as follows:

  d_0 = \sqrt{(G_{Fx} - G_x)^2 + (G_{Fy} - G_y)^2}

  V_0 = d_0 / T_0
Using generalized predictive control to predict the goal position (4/6)

• (IV) After calculating the velocity V0 of the target, we designed a GPC using V0, the direction of the target and the sampling time T0, which was then used to find the subsequent position of the target (GLx, GLy), calculated by means of the equations below and illustrated in Fig. 4:

  d = \sqrt{(G_x - G_{Lx})^2 + (G_y - G_{Ly})^2}

  \frac{G_{Fy} - G_y}{G_{Fx} - G_x} - \frac{G_y - G_{Ly}}{G_x - G_{Lx}} = 0
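The prediction in step (IV) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the target keeps moving along the line through its former and current positions and covers the same distance d0 again in the next sampling period, which satisfies the equal-slope condition above. All names are illustrative.

```python
def predict_next_position(gf, g):
    """Predict the subsequent target position (GLx, GLy), assuming the
    target moves one more step of the same length along the line from
    its former position GF to its current position G."""
    dx, dy = g[0] - gf[0], g[1] - gf[1]   # motion over the last period
    if dx == 0 and dy == 0:               # target did not move
        return g
    # The predicted point lies on the same line, so the slopes
    # (GFy-Gy)/(GFx-Gx) and (Gy-GLy)/(Gx-GLx) are equal.
    return (g[0] + dx, g[1] + dy)
```

For example, a target that moved from (0, 0) to (3, 4) in one sampling period is predicted to be at (6, 8) at the next one.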
Subsequent position of the target
Fig.4. Subsequent position of the target.
Using generalized predictive control to predict the goal position (5/6)

• (V) We then calculated the time T1 needed for the robot to reach the target's current position at its central speed Vc, based on the distance d that it had to cover, as shown in Fig. 1 (VL is the speed of the left wheel of the robot, VR is the speed of the right wheel):

  V_c = \frac{V_L + V_R}{2}, \quad T_1 = d / V_c
Using generalized predictive control to predict the goal position (6/6)

• (VI) If T1 is larger than T0, the robot proceeded to the predicted next position of the target; if T1 is smaller than T0, the robot proceeded to the current position of the target. By repeating steps I to VI, the target could be reached in less time. The sequence is shown in Fig. 5.
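The decision in steps (V)–(VI) can be sketched as follows; this is a hedged illustration, with argument names of my own choosing rather than the paper's code:

```python
def choose_chase_point(d, vl, vr, t0, current_pos, predicted_pos):
    """Steps (V)-(VI): pick which target position the robot should
    chase.  d is the robot-to-target distance, vl/vr the wheel speeds
    VL and VR, and t0 the sampling time T0."""
    vc = (vl + vr) / 2          # central speed Vc = (VL + VR) / 2
    t1 = d / vc                 # time T1 to reach the current position
    # If T1 > T0, the target will have moved on before the robot
    # arrives, so head for the predicted next position instead.
    return predicted_pos if t1 > t0 else current_pos
```

With d = 10, unit wheel speeds and T0 = 1, T1 = 10 > T0, so the robot chases the predicted position; with d = 0.5 it would chase the current one.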
GPC system flowchart
Fig.5. GPC system flowchart.
Using SVM to improve the angle followed by the mobile robot to reach the target (1/6)

• For a mobile robot like the one illustrated in Fig. 6, choosing a path is very important. We designed an SVM to help our robot reach the target point in the shortest time.
• The SVM technique stems from attempts to identify the optimal classifying hyperplane under conditions of linear division. The optimal hyperplane is the one that correctly separates the samples of the two categories with the maximum margin.
• The SVM is shown in Fig. 7.
Using SVM to improve the angle followed by the mobile robot to reach the target (2/6)

Fig. 6. Relationship between d and ψ.
Fig. 7. Support vector machine.
Using SVM to improve the angle followed by the mobile robot to reach the target (3/6)

• (I) Based on Fig. 6, we determined ψ:

  \psi = \tan^{-1}\frac{b_y - R_y}{b_x - R_x} - \theta_R

• (II) In this case, training patterns were calculated according to

  (\varphi'_1, y_1), \ldots, (\varphi'_l, y_l), \quad \varphi'_i \in \mathbb{R}^n, \; i = 1, 2, \ldots, l, \quad y_i \in \left\{-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right\}
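The angle ψ in step (I) can be computed directly. A minimal sketch, with illustrative names; it uses atan2 instead of the arctangent of the ratio so the quadrant is resolved correctly even when b_x = R_x:

```python
import math

def heading_error(robot, theta_r, b):
    """Angle psi between the robot's heading theta_R and the straight
    line from the robot (Rx, Ry) to the target point b (bx, by):
    psi = atan2(by - Ry, bx - Rx) - theta_R."""
    psi = math.atan2(b[1] - robot[1], b[0] - robot[0]) - theta_r
    # wrap the result into (-pi, pi]
    return math.atan2(math.sin(psi), math.cos(psi))
```

For a robot at the origin heading along the x-axis and a target at (1, 1), this gives ψ = π/4.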
Using SVM to improve the angle followed by the mobile robot to reach the target (4/6)

Consider also

  w \cdot \varphi'_i + b \ge \tfrac{\pi}{2} \;\text{ if } y_i = \tfrac{\pi}{2}, \qquad w \cdot \varphi'_i + b \le -\tfrac{\pi}{2} \;\text{ if } y_i = -\tfrac{\pi}{2}, \quad i = 1, 2, \ldots, 500

We had to solve a quadratic optimization problem. The constraints were

  y_i (w \cdot \varphi'_i + b) \ge \tfrac{\pi}{2}, \quad i = 1, 2, \ldots, 500

We also had to determine the minimum value of

  \Phi(w) = \tfrac{1}{2} \|w\|^2
Using SVM to improve the angle followed by the mobile robot to reach the target (5/6)

Because the equation above is quadratic with linear constraints, this is a typical quadratic optimization problem. We therefore used the Lagrange multiplier method to solve it, obtaining

  L(w, b, \alpha) = \tfrac{1}{2} \|w\|^2 - \sum_{i=1}^{500} \alpha_i \left[ y_i (w \cdot \varphi'_i + b) - \tfrac{\pi}{2} \right], \quad \alpha_i \ge 0
Using SVM to improve the angle followed by the mobile robot to reach the target (6/6)

• However, this form still did not directly produce the optimal solution. We dealt with this by addressing the dual problem:

  \frac{\partial L}{\partial b} = \sum_{i=1}^{500} \alpha_i y_i = 0, \qquad \frac{\partial L}{\partial w} = w - \sum_{i=1}^{500} \alpha_i y_i \varphi'_i = 0 \;\Rightarrow\; w = \sum_{i=1}^{500} \alpha_i y_i \varphi'_i

After performing the substitution, we were left with the new equation

  L_D = \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, \varphi'_i \cdot \varphi'_j

The following is the final decision function:

  f(\varphi') = \operatorname{sgn}\left( \sum_{i=1}^{500} \alpha_i y_i \, \varphi'_i \cdot \varphi' + b \right)
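Evaluating the final decision function is straightforward once the multipliers are known. The sketch below is illustrative: the multipliers α_i, labels y_i and bias b would come from the quadratic optimization above (they are not computed here), and the labels are simplified to ±1 rather than ±π/2:

```python
def svm_decision(alphas, ys, support_vecs, b, phi):
    """Evaluate f(phi) = sgn( sum_i alpha_i * y_i * <phi'_i, phi> + b ).
    alphas, ys and support_vecs are the multipliers, labels and
    training patterns produced by the quadratic optimization."""
    s = b
    for a, y, sv in zip(alphas, ys, support_vecs):
        s += a * y * sum(p * q for p, q in zip(sv, phi))  # dot product
    return 1 if s >= 0 else -1
```

The sign of the weighted sum of inner products with the support vectors classifies the new pattern phi.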
Using a hybrid controller to improve the optimal velocity of the mobile robot

• We describe a hybrid controller that combines an SVM and an FLC to search for the optimal velocity.
• First, we designed an FLC to control the velocity of the mobile robot's wheels. We then used the SVM to adjust the fuzzy rules produced by the FLC.

• The design of a fuzzy logic controller
• The state equation
• The support vector machine
• Modifying the output membership function
The design of a fuzzy logic controller

• We designed an FLC to generate the velocity of both wheels of the robot.

Fig. 8. The distance between the robot and the goal.
Fig. 9. The orientation of the robot with respect to the straight-line path to the goal.
Fuzzy rule table
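The rule table itself appears only as a figure, so the following is a toy Mamdani-style sketch of how such an FLC maps the distance d (Fig. 8) and heading error ψ (Fig. 9) to a wheel speed. The membership shapes, the rules and the output singletons (fast = 2.0, slow = 0.5) are my own illustrative stand-ins, not the paper's table:

```python
import math

def flc_wheel_speed(d, psi):
    """Toy fuzzy controller: fuzzify inputs, fire two rules, and
    defuzzify by a weighted average of singleton outputs."""
    # fuzzify distance: "near" peaks at d = 0 and vanishes at d >= 100
    near = max(0.0, 1.0 - d / 100.0)
    far = 1.0 - near
    # fuzzify heading error: "aligned" peaks at psi = 0
    aligned = max(0.0, 1.0 - abs(psi) / math.pi)
    off = 1.0 - aligned
    # rules: (far AND aligned) -> fast;  (near OR off-course) -> slow
    w_fast = min(far, aligned)
    w_slow = max(near, off)
    # defuzzify: weighted average of singletons (fast = 2.0, slow = 0.5)
    return (w_fast * 2.0 + w_slow * 0.5) / (w_fast + w_slow)
```

A far, well-aligned robot gets full speed; one at the goal or badly misaligned is slowed down.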
The state equation

• With the velocities generated by the FLC, we determined the maximum velocity of the robot.
• We defined the mathematical model of the robot's movements as follows:

  \dot{\theta} = \frac{r}{D} (\omega_r - \omega_l)

  \dot{x} = V_{l\_x}, \quad \dot{y} = V_{l\_y}, \quad \dot{m} = V_{r\_x}, \quad \dot{n} = V_{r\_y}

  \dot{V}_{l\_x} = r a_l \cos\theta, \quad \dot{V}_{l\_y} = r a_l \sin\theta

  \dot{V}_{r\_x} = r a_r \cos\theta, \quad \dot{V}_{r\_y} = r a_r \sin\theta

  \dot{\omega}_r = a_r, \quad \dot{\omega}_l = a_l
The state equation

• Having defined the state variables

  x_1 = x, \; x_2 = y, \; x_3 = m, \; x_4 = n, \; x_5 = V_{l\_x}, \; x_6 = V_{l\_y}, \; x_7 = V_{r\_x}, \; x_8 = V_{r\_y}, \; x_9 = \omega_r, \; x_{10} = \omega_l

we were able to determine the state equation:

  [\dot{x}_1 \; \dot{x}_2 \; \cdots \; \dot{x}_{10}]^T = [x_5 \; x_6 \; x_7 \; x_8 \; r a_l \cos\theta \; r a_l \sin\theta \; r a_r \cos\theta \; r a_r \sin\theta \; a_r \; a_l]^T

Consequently, we identified the optimal velocity vector of the left wheel as GV_l = x_5 + x_6, and that of the right wheel as GV_r = x_7 + x_8.
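The state equation above can be simulated numerically. This sketch uses plain Euler integration, which is an assumption of mine; the paper does not state which solver it used, and the function name is illustrative:

```python
import math

def step_state(x, theta, a_r, a_l, r, D, dt):
    """One Euler step of
    [x1..x10]' = [x5 x6 x7 x8  r*a_l*cos(theta)  r*a_l*sin(theta)
                  r*a_r*cos(theta)  r*a_r*sin(theta)  a_r  a_l]^T,
    with theta' = (r/D)*(x9 - x10), where x9 = w_r and x10 = w_l."""
    dx = [x[4], x[5], x[6], x[7],
          r * a_l * math.cos(theta), r * a_l * math.sin(theta),
          r * a_r * math.cos(theta), r * a_r * math.sin(theta),
          a_r, a_l]
    x_new = [xi + dt * di for xi, di in zip(x, dx)]
    theta_new = theta + dt * (r / D) * (x[8] - x[9])
    return x_new, theta_new
```

From rest, equal wheel accelerations produce equal left- and right-wheel velocity components and no rotation, as expected for straight-line motion.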
The support vector machine

• We converted the vector quantity into a scalar quantity and determined the optimal velocity of the left wheel of the mobile robot.
• We then used the same SVM method described earlier to determine the optimal velocity of the right wheel of the mobile robot.
Modifying the output membership function

• With the optimal velocity of the robot thus determined, the output membership function was modified.

Fig. 10. Adjustments to the fuzzy rule.
Experiments

• In order to show the effectiveness of the hybrid controller, three kinds of experiments were executed:
  • The first utilized the GPC to predict the target's next position.
  • The second utilized the SVM to determine the optimal heading angle.
  • The third utilized the hybrid controller to control the velocity of the mobile robot.
Using the GPC to predict the target's next position

Fig. 11(a). Before using the GPC to predict the next target position.
Fig. 11(b). Using the GPC to predict the next target position.
Using the SVM to determine the optimal heading angle

Fig. 12(a). Before using the SVM to determine the heading angle in a MATLAB simulation.
Fig. 12(b). Using the SVM to determine the optimal heading angle in a MATLAB simulation.
Using the SVM to determine the optimal heading angle

Fig. 13(a). Before using the SVM to determine the heading angle in a FIRA five-versus-five simulation platform.
Fig. 13(b). Using the SVM to determine the heading angle in a FIRA five-versus-five simulation platform.
Using the hybrid controller to control the velocity of the mobile robot

Fig. 14(a). Using the hybrid controller to control the velocity of the mobile robot in a MATLAB simulation.
Fig. 14(b). Using the traditional FLC to control the velocity of the mobile robot in a MATLAB simulation.
Using the hybrid controller to control the velocity of the mobile robot

Fig. 15(a). Using the hybrid controller to control the velocity of the mobile robot in a FIRA five-versus-five simulation platform.
Fig. 15(b). Using the traditional FLC to control the velocity of the mobile robot in a FIRA five-versus-five simulation platform.
Conclusions

• These three kinds of experiments confirm the effectiveness of using a hybrid controller with a mobile robot. The advantages and contributions of this study can be listed as follows:
  • An online learning system
  • Dynamic, rapid acquisition of data about the robot's circumstances
• In the future, the computational time required by our SVM and hybrid controller will be reduced to increase the speed of the mobile robot's response. The hybrid controller and the GPC will also be applied to other systems to achieve optimization.
Thanks for your attention!