Lecturer:
Prof. Hung-Yuan Chung, Ph.D.
Department of Electrical Engineering
National Central University
ADAPTIVE CONTROL
Text book :
‧K. J. Åström & B. Wittenmark: Adaptive Control, second edition, Addison-Wesley, 1995.
‧J.-J. Slotine & W. Li: Applied Nonlinear Control, Prentice-Hall, 1991.
CHAPTER 1
• WHAT IS ADAPTIVE CONTROL?
Figure 1.1 Block diagram of an adaptive system: a controller with adjustable parameters generates the control signal from the setpoint, and an outer parameter-adjustment loop updates the controller parameters from the plant output.
A Brief History
Figure 1.2 Several advanced flight control systems were tested on the X-15 experimental aircraft. (By courtesy of the Smithsonian Institution.)
Figure 1.3 Block diagram of a robust high-gain system: a feedforward block Gff shapes the command uc into ym, and a high-gain feedback block Gfb acts on the error ym − y around the process Gp.
1.2 LINEAR FEEDBACK
Robust High-Gain Control
The closed-loop transfer function is
T = Gp Gfb / (1 + Gp Gfb)
The relative sensitivity to process variations is
dT/T = (1 / (1 + Gp Gfb)) dGp/Gp
where L = Gp Gfb is the loop transfer function.
Judging Criticality of Process Variations
EXAMPLE 1.1 Different open-loop responses
G0(s) = 1 / ((s + 1)(s + a))
Figure 1.4 (a) Open-loop unit step responses for the process in Example 1.1 with a = −0.01, 0, and 0.01. (b) Closed-loop step responses for the same system, with the feedback u = uc − y. Notice the difference in time scales.
Figure 1.5 (a) Open-loop and (b) closed-loop Bode diagrams for the process in Example 1.1.
EXAMPLE 1.2 Similar open-loop responses
G0(s) = 400(1 − sT) / ((s + 1)(s + 20)(1 + sT))
Figure 1.6 (a) Open-loop unit step responses for the process in Example 1.2 with T = 0, 0.015, and 0.03. (b) Closed-loop step responses for the same system, with the feedback u = uc − y. Notice the difference in time scales.
Figure 1.7 Bode diagrams for the process in Example 1.2. (a) The open-loop
system; (b) The closed-loop system.
EXAMPLE 1.3 Integrator with unknown sign
G0(s) = kp / s
With a controller Ru = −Sy, the closed-loop characteristic polynomial is
P(s) = s R(s) + kp S(s)
1.3 EFFECTS OF PROCESS VARIATIONS
Nonlinear Actuators
EXAMPLE 1.4 Nonlinear valve
v = f(u) = u^4
PI controller:
GPI(s) = K (1 + 1/(Ti s))
Figure 1.8 Block diagram of a flow control loop with a PI controller and a nonlinear valve f(·) followed by the process G0(s).
Figure 1.9 Step responses for PI control of the simple flow loop in Example 1.4 at different operating levels. The parameters of the PI controller are K = 0.15, Ti = 1. The process characteristics are f(u) = u^4 and G0(s) = 1/(s + 1)^3.
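A minimal simulation sketch of this loop, assuming forward-Euler integration, a setpoint of 0.3, and the parameters quoted above (the function name and discretization step are illustrative choices, not from the text):

```python
# Sketch of Example 1.4: PI control of v = f(u) = u^4 followed by G0(s) = 1/(s+1)^3.
# Assumptions: Euler integration with dt = 0.01, valve opening clamped at u >= 0.

def simulate_flow_loop(setpoint, t_end=80.0, dt=0.01, K=0.15, Ti=1.0):
    x1 = x2 = x3 = 0.0    # states of the three cascaded first-order lags
    integral = 0.0        # integral state of the PI controller
    for _ in range(int(t_end / dt)):
        y = x3
        e = setpoint - y
        integral += e * dt
        u = max(K * (e + integral / Ti), 0.0)   # valve opening cannot be negative
        v = u ** 4                              # nonlinear valve characteristic
        # forward-Euler step of the linear process 1/(s+1)^3
        x1 += (v - x1) * dt
        x2 += (x1 - x2) * dt
        x3 += (x2 - x3) * dt
    return x3

y_final = simulate_flow_loop(0.3)
```

Because the local valve gain 4u^3 varies with the operating level, the same PI settings give very different transients at different setpoints, which is the point of Figure 1.9.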
Flow and Speed Variations
EXAMPLE 1.5 Concentration control
Vm dc(t)/dt = q(t)(cin(t − τ) − c(t))   (1.3)
where τ = Vd/q(t). Introduce
T = Vm/q(t)   (1.4)
Then the process has the transfer function
G0(s) = e^(−sτ) / (1 + sT)
Figure 1.10 Schematic diagram of
a concentration control system.
Figure 1.11 Change in reference value for different flows for the system in Example 1.5. (a) Output concentration c and reference concentration cr; (b) control signal.
Flight Control
EXAMPLE 1.6 Short-period aircraft dynamics
dx/dt = [ a11 a12 a13 ; a21 a22 a23 ; 0 0 −a ] x + [ b1 ; 0 ; a ] u   (1.6)
where x = (Nz  q  δe)^T is the state vector of normal acceleration, pitch rate, and elevator angle.
Figure 1.12 Schematic diagram of the aircraft in Example 1.6.
Figure 1.13 Flight envelope of the F4-E. Four different flight conditions are indicated. (From Ackermann (1983), courtesy of Springer-Verlag.)
Table 1.1 Parameters of the airplane state model of Eq.(1.6) for different flight
conditions (FC).
Variations in Disturbance Characteristics
EXAMPLE 1.7 Ship steering
Figure 1.14 Measurements and spectra of waves at different conditions at Hoburgen. (a) Wind speed 3-4 m/s. (b) Wind speed 18-20 m/s. (Courtesy of SSPA Maritime Consulting AB, Sweden.)
EXAMPLE 1.8 Regulation of a quality variable in process control
White noise is passed through second-order band-pass filters of the form (b0 s^2 + b1 s + b2)/(s^2 + 2ζωe s + ωe^2) to generate the disturbances, which act on the process 1/(s + 1).
Figure 1.15 Block diagram of the system with disturbances used in Example 1.8.
Figure 1.16 Illustrates performance of controllers that are tuned to the disturbance characteristics. Output error when (a) ωe = 0.1; (b) ω = 0.05, ωe = 0.1; (c) ωe = 0.05.
1.4 ADAPTIVE SCHEMES
Gain Scheduling
Figure 1.17 Block diagram of a system with gain scheduling: the operating condition is measured and a gain schedule maps it to controller parameters; the controller computes the control signal for the process from the command signal.
Model-Reference Adaptive Systems (MRAS)
The controller parameters θ are adjusted by the MIT rule
dθ/dt = −γ e (∂e/∂θ)   (1.7)
where e = y − ym is the error between the plant output and the output ym of a reference model driven by the command uc.
Figure 1.18 Block diagram of a model-reference adaptive system (MRAS).
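A small sketch of the MIT rule (1.7) for the classic feedforward-gain problem. All numbers here are illustrative assumptions (first-order dynamics G(s) = 1/(s + 1), unknown process gain kp = 2, model gain km = 1, square-wave command), not the book's simulation:

```python
# MIT-rule MRAS sketch: adjust a feedforward gain theta so that y = kp*G(s)*(theta*uc)
# tracks ym = km*G(s)*uc. For this structure de/dtheta is proportional to ym,
# so the MIT rule (1.7) reads dtheta/dt = -gamma * e * ym.

def mit_rule_demo(kp=2.0, km=1.0, gamma=1.0, dt=0.01, t_end=200.0):
    y = ym = 0.0
    theta = 0.0                                  # adjustable feedforward gain
    for k in range(int(t_end / dt)):
        uc = 1.0 if (k * dt) % 20 < 10 else -1.0  # square wave, period 20
        u = theta * uc
        y += (-y + kp * u) * dt                   # process: y' = -y + kp*u
        ym += (-ym + km * uc) * dt                # model:   ym' = -ym + km*uc
        e = y - ym
        theta += -gamma * e * ym * dt             # MIT rule update
    return theta

theta_final = mit_rule_demo()   # approaches km/kp = 0.5
```

At theta = km/kp the plant and model dynamics coincide, so e vanishes and the adjustment stops.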
Self-tuning Regulators (STR)
Figure 1.19 Block diagram of a self-tuning regulator (STR): an estimator determines the process parameters recursively, a controller-design block maps them (together with the specification) into controller parameters, and the controller computes the process input from the reference and the output.
Dual Control
The nonlinear control law computes u from the hyperstate (the conditional distribution of the state and the parameters), which is updated recursively from the process output. The control law minimizes a loss function such as
V = E{ G(z(T), u(T)) + ∫_0^T g(z, u) dt }   (1.8)
Figure 1.20 Block diagram of a dual controller.
1.5 THE ADAPTIVE CONTROL PROBLEM
Process Descriptions
dx/dt = Ax + Bu
y = Cx   (1.9)
Gp(s) = B(s)/A(s) = (b0 s^m + b1 s^(m−1) + ⋯ + bm) / (s^n + a1 s^(n−1) + ⋯ + an)   (1.10)
x(t+1) = Φx(t) + Γu(t)
y(t) = Cx(t)
Hp(z) = B(z)/A(z) = (b0 z^m + b1 z^(m−1) + ⋯ + bm) / (z^n + a1 z^(n−1) + ⋯ + an)   (1.11)
A Remark on Notation
y(t)=G(p)u(t)
qy(t)=y(t+1)
y(t)=H(q)u(t)
Gain Scheduling
Figure 1.21 Gain scheduling is an important ingredient in modern flight control systems. (By courtesy of Nawrocki Stock Photo, Inc., Neil Hargreave.)
Controller Structures
EXAMPLE 1.9 Adjustment of gains in a state feedback
u=-Lx
EXAMPLE 1.10 A general linear controller
R(s)U(s) = T(s)Uc(s) − S(s)Y(s)
EXAMPLE 1.11 Adjustment of a friction compensator
ufc = u⁺  if v > 0
ufc = u⁻  if v < 0
The Adaptive Control Problem
• Characterize the desired behavior of the closed-loop system.
• Determine a suitable control law with adjustable parameters.
• Find a mechanism for adjusting the parameters.
• Implement the control law.
1.6 APPLICATIONS
• Automatic Tuning
• Gain Scheduling
• Continuous Adaptation
• Abuses of Adaptive Control
• Industrial Products
Abuses of Adaptive Control
Process dynamics:
• Constant → use a controller with constant parameters.
• Varying, with predictable variations → use gain scheduling.
• Varying, with unpredictable variations → use an adaptive controller.
Figure 1.22 Procedure to decide what type of controller to use.
Industrial Products
EXAMPLE 1.12 An adaptive autopilot for ship steering
EXAMPLE 1.13 Novatune
1.7 CONCLUSIONS
• Variations in process dynamics,
• Variations in the character of the disturbances, and
• Engineering efficiency and ease of use.
CHAPTER 2
REAL-TIME PARAMETER ESTIMATION
2.1 INTRODUCTION
2.2 LEAST SQUARES AND REGRESSION MODELS
y(i) = φ1(i)θ1⁰ + φ2(i)θ2⁰ + ⋯ + φn(i)θn⁰ = φ^T(i)θ⁰   (2.1)
φ^T(i) = (φ1(i)  φ2(i)  ⋯  φn(i))
θ⁰ = (θ1⁰  θ2⁰  ⋯  θn⁰)^T
V(θ, t) = (1/2) Σ_{i=1}^{t} (y(i) − φ^T(i)θ)^2   (2.2)
Y(t) = (y(1)  y(2)  ⋯  y(t))^T
E(t) = (ε(1)  ε(2)  ⋯  ε(t))^T
Φ(t) is the t × n matrix with rows φ^T(1), …, φ^T(t), and
P(t) = (Φ^T(t)Φ(t))^{-1} = ( Σ_{i=1}^{t} φ(i)φ^T(i) )^{-1}   (2.3)
ε(i) = y(i) − ŷ(i) = y(i) − φ^T(i)θ
V(θ, t) = (1/2) Σ_{i=1}^{t} ε^2(i) = (1/2) E^T E = (1/2) ‖E‖^2
E = Y − Ŷ = Y − Φθ   (2.4)
THEOREM 2.1 Least-squares estimation
The loss function (2.2) is minimal for parameters θ̂ such that
Φ^T Φ θ̂ = Φ^T Y   (2.5)
If Φ^T Φ is nonsingular, the minimum is unique and given by
θ̂ = (Φ^T Φ)^{-1} Φ^T Y   (2.6)
Proof:
2V(θ, t) = E^T E = (Y − Φθ)^T (Y − Φθ) = Y^T Y − Y^T Φθ − θ^T Φ^T Y + θ^T Φ^T Φθ   (2.7)
Completing the square gives
2V(θ, t) = Y^T Y − Y^T Φ(Φ^T Φ)^{-1}Φ^T Y + (θ − (Φ^T Φ)^{-1}Φ^T Y)^T Φ^T Φ (θ − (Φ^T Φ)^{-1}Φ^T Y)   (2.8)
Since the last term is nonnegative and vanishes for θ = (Φ^T Φ)^{-1}Φ^T Y, the loss is minimized by
θ̂ = (Φ^T Φ)^{-1} Φ^T Y
Remark 1. Equation (2.5) is called the normal equation. Equation (2.6) can be written as
θ̂(t) = ( Σ_{i=1}^{t} φ(i)φ^T(i) )^{-1} Σ_{i=1}^{t} φ(i)y(i) = P(t) Σ_{i=1}^{t} φ(i)y(i)   (2.9)
Remark 2. The condition that the matrix Φ^T Φ is invertible is called an excitation condition.
Remark 3. The least-squares criterion weights all errors ε(i) equally, and this
corresponds to the assumption that all measurements have the same precision.
With a weighting matrix W, the criterion
V = (1/2) E^T W E   (2.10)
gives the weighted least-squares estimate
θ̂ = (Φ^T W Φ)^{-1} Φ^T W Y   (2.11)
EXAMPLE 2.1 Least-squares estimation of a static system
y(i) = b0 + b1 u(i) + b2 u^2(i) + e(i)
φ^T(i) = (1  u(i)  u^2(i))
θ^T = (b0  b1  b2)
Model 1: y(i) = b0
Model 2: y(i) = b0 + b1 u(i)
Model 3: y(i) = b0 + b1 u(i) + b2 u^2(i)
Model 4: y(i) = b0 + b1 u(i) + b2 u^2(i) + b3 u^3(i)
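The four model structures can be compared by solving the normal equations for each. A sketch with synthetic data (the data and coefficients are illustrative, not the book's):

```python
# Batch least squares (2.6) for Models 1-4 of Example 2.1 on noise-free
# data generated by a quadratic; Model 3 recovers the coefficients exactly,
# while Model 2 leaves a residual from the unmodeled u^2 term.
import numpy as np

u = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * u + 0.5 * u**2          # "true" static system (illustrative)

def fit(order):
    Phi = np.vander(u, order + 1, increasing=True)   # columns 1, u, u^2, ...
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # solves the normal equations
    resid = y - Phi @ theta
    return theta, resid @ resid

theta3, loss3 = fit(2)    # Model 3
theta2, loss2 = fit(1)    # Model 2
```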
Geometric Interpretation
The residual vector is
E = Y − θ1 φ1 − θ2 φ2 − ⋯ − θn φn
where φi denotes the i-th column of Φ. The length of E is minimal when E is orthogonal to all the columns φi:
φi^T (Y − θ1 φ1 − ⋯ − θn φn) = 0,   i = 1, …, n
which is identical to the normal equation (2.5).
Statistical Interpretation
y(i) = φ^T(i)θ⁰ + e(i)   (2.12)
or, in matrix form, Y = Φθ⁰ + E. The estimate then satisfies
θ̂ = (Φ^T Φ)^{-1} Φ^T Y = θ⁰ + (Φ^T Φ)^{-1} Φ^T E   (2.13)
THEOREM 2.2 Statistical properties of least-squares estimation
(i) E θ̂(t) = θ⁰
(ii) cov θ̂(t) = σ^2 (Φ^T Φ)^{-1}
(iii) s^2 = 2V(θ̂, t)/(t − n) is an unbiased estimate of σ^2
EXAMPLE 2.2 Decrease of variance
With regressors φ(k) = k^a the variance of the estimate is
cov θ̂ = σ^2 / Σ_{k=1}^{t} k^{2a}
where
Σ_{k=1}^{t} k^{2a} → constant,            a < −0.5
Σ_{k=1}^{t} k^{2a} ~ log t,               a = −0.5
Σ_{k=1}^{t} k^{2a} ~ t^{1+2a}/(1 + 2a),   a > −0.5
Recursive Computations
P^{-1}(t) = Φ^T(t)Φ(t) = Σ_{i=1}^{t} φ(i)φ^T(i) = Σ_{i=1}^{t−1} φ(i)φ^T(i) + φ(t)φ^T(t)
          = P^{-1}(t − 1) + φ(t)φ^T(t)
θ̂(t) = P(t) Σ_{i=1}^{t} φ(i)y(i) = P(t)( Σ_{i=1}^{t−1} φ(i)y(i) + φ(t)y(t) )
Σ_{i=1}^{t−1} φ(i)y(i) = P^{-1}(t − 1)θ̂(t − 1) = P^{-1}(t)θ̂(t − 1) − φ(t)φ^T(t)θ̂(t − 1)   (2.14)
Hence
θ̂(t) = θ̂(t − 1) − P(t)φ(t)φ^T(t)θ̂(t − 1) + P(t)φ(t)y(t)
      = θ̂(t − 1) + P(t)φ(t)( y(t) − φ^T(t)θ̂(t − 1) )
      = θ̂(t − 1) + K(t)ε(t)
K(t) = P(t)φ(t)
ε(t) = y(t) − φ^T(t)θ̂(t − 1)
LEMMA 2.1 Matrix inversion lemma
(A + BCD)^{-1} = A^{-1} − A^{-1}B( C^{-1} + DA^{-1}B )^{-1} DA^{-1}
Proof: Multiply the right-hand side by A + BCD:
(A + BCD)( A^{-1} − A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} )
= I + BCDA^{-1} − B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} − BCDA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}
= I + BCDA^{-1} − BC( C^{-1} + DA^{-1}B )(C^{-1} + DA^{-1}B)^{-1}DA^{-1}
= I + BCDA^{-1} − BCDA^{-1}
= I
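The lemma is easy to verify numerically. A sketch with arbitrary well-conditioned matrices (the dimensions and scaling are assumptions made only to keep all inverses well defined):

```python
# Numerical check of Lemma 2.1: (A+BCD)^-1 = A^-1 - A^-1 B (C^-1 + D A^-1 B)^-1 D A^-1
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # diagonally dominated -> invertible
B = 0.3 * rng.normal(size=(4, 2))
C = rng.normal(size=(2, 2)) + 3 * np.eye(2)
D = 0.3 * rng.normal(size=(2, 4))

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
```

The payoff in RLS is that C^{-1} + DA^{-1}B is only a 1×1 quantity, so no matrix inversion is needed at run time.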
Applying the lemma with A = P^{-1}(t − 1), B = φ(t), C = I, and D = φ^T(t) gives
P(t) = ( P^{-1}(t − 1) + φ(t)φ^T(t) )^{-1}
     = P(t − 1) − P(t − 1)φ(t)( I + φ^T(t)P(t − 1)φ(t) )^{-1} φ^T(t)P(t − 1)
K(t) = P(t)φ(t) = P(t − 1)φ(t)( I + φ^T(t)P(t − 1)φ(t) )^{-1}
THEOREM 2.3 Recursive least-squares estimation
θ̂(t) = θ̂(t − 1) + K(t)( y(t) − φ^T(t)θ̂(t − 1) )   (2.15)
K(t) = P(t)φ(t) = P(t − 1)φ(t)( I + φ^T(t)P(t − 1)φ(t) )^{-1}   (2.16)
P(t) = P(t − 1) − P(t − 1)φ(t)( I + φ^T(t)P(t − 1)φ(t) )^{-1} φ^T(t)P(t − 1)
     = ( I − K(t)φ^T(t) )P(t − 1)   (2.17)
Remark 1. Equation (2.15) has strong intuitive appeal: the estimate is corrected in proportion to the prediction error y(t) − φ^T(t)θ̂(t − 1).
Remark 2. The least-squares estimate can be interpreted as a Kalman filter for the process
θ(t + 1) = θ(t)
y(t) = φ^T(t)θ(t) + e(t)   (2.18)
The minimal loss satisfies the recursion
2V(θ̂, t) = 2V(θ̂, t − 1) + ε^2(t)/( 1 + φ^T(t)P(t − 1)φ(t) )   (2.19)
where ε(t) = y(t) − φ^T(t)θ̂(t − 1) is the prediction error; note that it differs from the residual y(t) − φ^T(t)θ̂(t), which uses the updated estimate. The recursion is started at a time t0 when Φ^T(t0)Φ(t0) is nonsingular, with
P(t0) = ( Φ^T(t0)Φ(t0) )^{-1}
θ̂(t0) = P(t0)Φ^T(t0)Y(t0)
Alternatively, the recursion is started at t = 0 with P(0) = P0; the recursive equations then give
P(t) = ( P0^{-1} + Φ^T(t)Φ(t) )^{-1}
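The effect of a large P(0) is that the recursive estimate essentially coincides with the batch solution. A sketch on synthetic data (signals and noise level are illustrative assumptions):

```python
# RLS (2.15)-(2.17) started with P(0) = 1e6*I, compared with the batch
# least-squares solution (2.6) computed on the same data.
import numpy as np

rng = np.random.default_rng(2)
n_data, n_par = 200, 3
Phi = rng.normal(size=(n_data, n_par))
theta_true = np.array([1.0, -0.5, 2.0])
Y = Phi @ theta_true + 0.1 * rng.normal(size=n_data)

theta = np.zeros(n_par)
P = 1e6 * np.eye(n_par)          # large P(0): little confidence in theta(0)
for phi, y in zip(Phi, Y):
    eps = y - phi @ theta                      # prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)        # gain (2.16); scalar denominator
    theta = theta + K * eps                    # update (2.15)
    P = P - np.outer(K, phi @ P)               # covariance update (2.17)

theta_batch, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
```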
Time-Varying Parameters
V(θ, t) = (1/2) Σ_{i=1}^{t} λ^{t−i} ( y(i) − φ^T(i)θ )^2   (2.20)
THEOREM 2.4 Recursive least squares with exponential forgetting
θ̂(t) = θ̂(t − 1) + K(t)( y(t) − φ^T(t)θ̂(t − 1) )
K(t) = P(t)φ(t) = P(t − 1)φ(t)( λI + φ^T(t)P(t − 1)φ(t) )^{-1}
P(t) = ( I − K(t)φ^T(t) )P(t − 1)/λ   (2.21)
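Forgetting lets the estimator track slowly or abruptly changing parameters. A minimal sketch with an abrupt parameter jump (λ, the jump time, and the signals are illustrative assumptions):

```python
# RLS with exponential forgetting (Theorem 2.4) tracking a parameter jump.
import numpy as np

rng = np.random.default_rng(3)
lam = 0.95
theta = np.zeros(2)
P = 100.0 * np.eye(2)
b_true = np.array([1.0, 0.5])
for t in range(600):
    if t == 300:
        b_true = np.array([-1.0, 2.0])       # parameters change abruptly
    phi = rng.normal(size=2)
    y = phi @ b_true                         # noise-free measurement
    eps = y - phi @ theta
    K = P @ phi / (lam + phi @ P @ phi)      # gain with forgetting
    theta = theta + K * eps
    P = (P - np.outer(K, phi @ P)) / lam     # old data discounted by lambda
```

With λ = 0.95 the data older than a few dozen samples carry negligible weight, so the estimate re-converges quickly after the jump.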
Simplified Algorithms
Consider the model
y(t) = φ^T(t)θ   (2.22)
Choose the new estimate to minimize
V = (1/2) ‖θ̂(t) − θ̂(t − 1)‖^2  subject to  y(t) = φ^T(t)θ̂(t)
Introducing a Lagrange multiplier and solving gives
θ̂(t) = θ̂(t − 1) + ( φ(t)/(φ^T(t)φ(t)) )( y(t) − φ^T(t)θ̂(t − 1) )   (2.23)
ALGORITHM 2.1 Projection algorithm
θ̂(t) = θ̂(t − 1) + ( γφ(t)/(α + φ^T(t)φ(t)) )( y(t) − φ^T(t)θ̂(t − 1) )   (2.24)
where α ≥ 0 avoids division by zero and 0 < γ < 2.
Remark 1. In some textbooks this is called the normalized projection algorithm.
Remark 2. The bound for the parameter γ is obtained from the following analysis. With the parameter error θ̃ = θ̂ − θ⁰,
θ̃(t) = A(t)θ̃(t − 1),   A(t) = I − γφ(t)φ^T(t)/(α + φ^T(t)φ(t))
which is a contraction for 0 < γ < 2.
A related simplification is the stochastic-approximation algorithm
θ̂(t) = θ̂(t − 1) + P(t)φ(t)( y(t) − φ^T(t)θ̂(t − 1) ),   P(t) = ( Σ_{i=1}^{t} φ^T(i)φ(i) )^{-1}   (2.25)
or, with a scalar gain γ(t),
θ̂(t) = θ̂(t − 1) + γ(t)φ(t)( y(t) − φ^T(t)θ̂(t − 1) )   (2.26)
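A sketch of the projection algorithm (2.24) on noise-free data; γ and α are illustrative values inside the stability bound:

```python
# Projection algorithm (2.24): each step moves the estimate toward the
# hyperplane { theta : y(t) = phi(t)^T theta } defined by the newest sample.
import numpy as np

rng = np.random.default_rng(4)
theta_true = np.array([0.8, -1.2])
theta = np.zeros(2)
gamma, alpha = 1.0, 0.01      # 0 < gamma < 2 required for convergence
for _ in range(500):
    phi = rng.normal(size=2)
    y = phi @ theta_true                       # noise-free measurement
    theta = theta + gamma * phi / (alpha + phi @ phi) * (y - phi @ theta)
err = np.linalg.norm(theta - theta_true)
```

With random regressor directions the error contracts geometrically; the algorithm needs no covariance matrix, which is its main appeal.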
Continuous-Time Models
V = ∫_0^t e^{−α(t−τ)} ( y(τ) − φ^T(τ)θ )^2 dτ   (2.27)
The minimizing estimate satisfies the normal equation
( ∫_0^t e^{−α(t−τ)} φ(τ)φ^T(τ) dτ ) θ̂(t) = ∫_0^t e^{−α(t−τ)} φ(τ)y(τ) dτ   (2.28)
R(t) = ∫_0^t e^{−α(t−τ)} φ(τ)φ^T(τ) dτ   (2.29)
Differentiating gives the recursive equations
dθ̂/dt = P(t)φ(t)e(t)   (2.30)
e(t) = y(t) − φ^T(t)θ̂(t)   (2.31)
dP/dt = αP(t) − P(t)φ(t)φ^T(t)P(t)   (2.32)
where P(t) = R^{-1}(t), since dR/dt = −αR + φφ^T.
2.3 ESTIMATING PARAMETERS IN DYNAMICAL SYSTEMS
Finite-Impulse Response (FIR) Models
y(t) = b1 u(t − 1) + b2 u(t − 2) + ⋯ + bn u(t − n)
y(t) = φ^T(t − 1)θ
θ^T = (b1 ⋯ bn)
φ^T(t − 1) = (u(t − 1) ⋯ u(t − n))
ŷ(t) = b̂1(t − 1)u(t − 1) + ⋯ + b̂n(t − 1)u(t − n)   (2.33)
Transfer Function Models
A(q)y(t) = B(q)u(t)   (2.34)
A(q) = q^n + a1 q^{n−1} + ⋯ + an
B(q) = b1 q^{m−1} + b2 q^{m−2} + ⋯ + bm
y(t) = −a1 y(t − 1) − ⋯ − an y(t − n) + b1 u(t + m − n − 1) + ⋯ + bm u(t − n)
θ^T = (a1 ⋯ an  b1 ⋯ bm)   (2.35)
φ^T(t − 1) = (−y(t − 1) ⋯ −y(t − n)  u(t + m − n − 1) ⋯ u(t − n))
y(t) = φ^T(t − 1)θ
The model thus has n + m parameters in θ and the regression vector φ(t − 1).
With a disturbance,
A(q)y(t) = B(q)u(t) + e(t + n)
the equation-error model can be written as
y(t) = ( B(q)/A(q) )u(t) + e(t)
An alternative is the output-error model, where the prediction is generated from inputs only:
ŷ(t) = −a1 ŷ(t − 1) − ⋯ − an ŷ(t − n) + b1 u(t + m − n − 1) + ⋯ + bm u(t − n)
that is, ŷ(t) = ( B(q)/A(q) )u(t). Minimizing Σ_k ( y(k) − ŷ(k) )^2 leads to the recursive scheme
θ̂(t) = θ̂(t − 1) + P(t)φ(t − 1)ε(t)
φ^T(t − 1) = (−ŷ(t − 1) ⋯ −ŷ(t − n)  u(t + m − n − 1) ⋯ u(t − n))
ε(t) = y(t) − φ^T(t − 1)θ̂(t − 1)
Continuous-Time Transfer Functions
d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + ⋯ + an y = b1 d^{m−1}u/dt^{m−1} + ⋯ + bm u
or
A(p)y(t) = B(p)u(t)   (2.36)
Since derivatives of the signals are not available, both sides are filtered by a stable filter Hf(p):
A(p)yf(t) = B(p)uf(t)   (2.37)
yf(t) = Hf(p)y(t)
uf(t) = Hf(p)u(t)
θ = (a1 ⋯ an  b1 ⋯ bm)^T
φ^T(t) = (−p^{n−1}yf ⋯ −yf  p^{m−1}uf ⋯ uf)
p^n yf(t) = p^n Hf(p)y(t) = φ^T(t)θ
Nonlinear Models
EXAMPLE 2.3 Nonlinear system
y(t) = −a y(t − 1) + b1 u(t − 1) + b2 sin u(t − 1)
θ = (a  b1  b2)^T
φ^T(t) = (−y(t)  u(t)  sin u(t))
y(t) = φ^T(t − 1)θ
Stochastic Models
A(q)y(t) = B(q)u(t) + C(q)e(t)   (2.38)
ε(t) = y(t) − φ^T(t − 1)θ̂(t − 1)
θ^T = (a1 ⋯ an  b1 ⋯ bn  c1 ⋯ cn)
φ^T(t − 1) = (−y(t − 1) ⋯ −y(t − n)  u(t − 1) ⋯ u(t − n)  ε(t − 1) ⋯ ε(t − n))
y(t) = φ^T(t − 1)θ + e(t)
θ̂(t) = θ̂(t − 1) + P(t)φ(t − 1)ε(t)
P^{-1}(t) = P^{-1}(t − 1) + φ(t − 1)φ^T(t − 1)   (2.39)
This is called extended least squares (ELS). A refinement filters the regressors by the estimated C polynomial:
Ĉ(q)ε(t) = Â(q)y(t) − B̂(q)u(t)   (2.40)
Ĉ(q)φf(t) = φ(t)   (2.41)
ε(t) = y(t) − φf^T(t − 1)θ̂(t − 1)
This algorithm is called the recursive maximum likelihood (RML) method. The posterior residual is
εp(t) = y(t) − φ^T(t − 1)θ̂(t)
More general disturbance models, such as
y(t) = ( B(q)/A(q) )u(t) + ( C(q)/D(q) )e(t)
can be treated similarly.
Unification
θ̂(t) = θ̂(t − 1) + P(t)φ(t − 1)ε(t)
P(t) = (1/λ)( P(t − 1) − P(t − 1)φ(t − 1)φ^T(t − 1)P(t − 1) / ( λ + φ^T(t − 1)P(t − 1)φ(t − 1) ) )
2.4 EXPERIMENTAL CONDITIONS
Persistent Excitation
Consider the matrix
(1/t) Σ_{k} φ(k)φ^T(k),   φ(t) = (u(t − 1)  u(t − 2)  ⋯  u(t − n))^T
whose (i, j) element is (1/t) Σ_{k} u(k − i)u(k − j)   (2.42)
Its limit is the Toeplitz matrix
Cn = lim_{t→∞} (1/t) Σ_{k=1}^{t} φ(k)φ^T(k) =
[ c(0)    c(1)    …  c(n−1) ]
[ c(1)    c(0)    …  c(n−2) ]
[  ⋮                  ⋮     ]
[ c(n−1)  c(n−2)  …  c(0)   ]   (2.43)
c(k) = lim_{t→∞} (1/t) Σ_{i=1}^{t} u(i)u(i − k)   (2.44)
DEFINITION 2.1 Persistent excitation
A signal u is called persistently exciting (PE) of order n if the matrix Cn is positive definite; equivalently, there exist α1, α2 > 0 and an integer m such that
α1 I ≤ Σ_{k=1}^{m} φ(k)φ^T(k) ≤ α2 I
over every window of length m.
THEOREM 2.7 Persistently exciting signals
A signal u is persistently exciting of order n if and only if
U = lim_{t→∞} (1/t) Σ_{k=1}^{t} ( A(q)u(k) )^2 > 0
for all nonzero polynomials A(q) = a0 q^{n−1} + a1 q^{n−2} + ⋯ + a_{n−1}, since
U = lim_{t→∞} (1/t) Σ_{k} ( a0 u(k + n − 1) + ⋯ + a_{n−1} u(k) )^2 = a^T Cn a   (2.45)
EXAMPLE 2.5 Step
A unit step, u(t) = 1 for t ≥ 0 and u(t) = 0 for t < 0, satisfies (q − 1)u(t) = 0 for t > 0, while
C1 = lim_{t→∞} (1/t) Σ_{k=1}^{t} u^2(k) = 1
so a step is persistently exciting of order 1 but not of order 2.
EXAMPLE 2.6 Sinusoid
A sinusoid satisfies (q^2 − 2q cos ω + 1)u(t) = 0, and
C2 = (1/2) [ 1      cos ω ]
           [ cos ω  1     ]
which is nonsingular for 0 < ω < π (det C2 = (1/4) sin^2 ω), so a sinusoid is persistently exciting of order 2 but not of order 3.
EXAMPLE 2.7 Periodic signal
A signal that is periodic with period n, u(t) = u(t − n), satisfies (q^n − 1)u(t) = 0 and is therefore persistently exciting of order at most n.
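The excitation order can be read off from the rank of Cn. A sketch for the sinusoid of Example 2.6, building Cn from the analytic covariances c(k) = (1/2) cos(ωk) (the frequency is an arbitrary illustrative choice):

```python
# Rank of the covariance matrix Cn (2.43) for a single sinusoid:
# C2 is positive definite (rank 2), while C3 is singular (also rank 2),
# so a sinusoid is PE of order 2 but not of order 3.
import numpy as np

omega = 0.5
c = lambda k: 0.5 * np.cos(omega * k)         # covariances of sin(omega*t)

def C(n):
    return np.array([[c(i - j) for j in range(n)] for i in range(n)])

rank2 = np.linalg.matrix_rank(C(2))
rank3 = np.linalg.matrix_rank(C(3))
```

This matches the frequency-domain picture: a sinusoid has spectral mass at only two points, so it can excite at most two parameter directions.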
THEOREM 2.8 Parseval’s theorem
Let
H(q^{-1}) = Σ_{k=0}^{∞} hk q^{-k}
G(q^{-1}) = Σ_{k=0}^{∞} gk q^{-k}
Then
Σ_{k=0}^{∞} hk gk = (1/2π) ∫_{−π}^{π} H(e^{iω}) G(e^{−iω}) dω
EXAMPLE 2.9 Frequency domain characterization
lim_{t→∞} (1/t) Σ_{k=1}^{t} ( A(q)u(k) )^2 = (1/2π) ∫_{−π}^{π} |A(e^{iω})|^2 φu(ω) dω   (2.46)
where φu(ω) is the spectral density of u. A signal is thus persistently exciting of order n if and only if its spectrum is nonzero in at least n points of the interval −π < ω ≤ π.
THEOREM 2.9 PE of filtered signals
v(t) = A(q)u(t),   u(t) = (1/A(q)) v(t)
It follows from (2.46) that filtering changes the excitation properties: v = A(q)u is persistently exciting whenever u is, provided that A(q) has no zeros on the unit circle at the frequencies where u has its spectral mass.
Transfer Function Models
A0(q)y(t) = B0(q)u(t) + e(t + n)   (2.47)
THEOREM 2.10 Transfer function estimation
The asymptotic loss is
V(θ) = lim_{t→∞} (1/t) Σ_{k} ( φ^T(k)θ̃ )^2
With the model polynomials A and B, the noise-free part of the residual is
v(t) = B(q)u(t) − ( A(q) − q^n )y(t)
Substituting y = ( B0/A0 )u gives
v(t) = (1/A0(q)) { A0(q)B(q) − ( A(q) − q^n )B0(q) } u(t)
so the loss attains its minimum only if this term vanishes, that is, only if
B(q)/A(q) = B0(q)/A0(q)
provided that u is persistently exciting of sufficiently high order.
Identification in Closed Loop
The regression matrix built from input-output data has rows of delayed outputs and inputs,
( −y(k − 1) ⋯ −y(k − n)  u(k − 1) ⋯ u(k − n) ),   k = n + 1, …, t   (2.48)
EXAMPLE 2.10 Loss of identifiability due to feedback
y(t + 1) = a y(t) + b u(t)   (2.49)
u(t) = −k y(t)   (2.50)
Since u(t) + k y(t) = 0, for any α the data are equally well described by
y(t + 1) = ( a + αk )y(t) + ( b + α )u(t)
so the estimates satisfy â = a + αk, b̂ = b + α, that is, only the combination
b̂ = b + (1/k)( â − a )   (2.51)
is determined; a and b cannot be identified separately. Identifiability can be recovered with a more complex feedback, for example
u(t) = −k1 y(t) − k2 y(t − 1)
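The loss of identifiability is visible directly in the data matrix: with the static feedback (2.50) the two regressor columns are exactly proportional. A sketch (the plant numbers are illustrative):

```python
# Example 2.10 illustrated: with u(t) = -k*y(t) the regressor matrix for
# y(t+1) = a*y(t) + b*u(t) has rank 1, so (a, b) cannot be separated.
import numpy as np

rng = np.random.default_rng(5)
a, b, k = 0.5, 1.0, 0.4
y = np.zeros(301)
u = np.zeros(300)
for t in range(300):
    u[t] = -k * y[t]                          # static output feedback
    y[t + 1] = a * y[t] + b * u[t] + rng.normal(scale=0.1)

Phi = np.column_stack([y[:300], u])           # columns are proportional: u = -k*y
rank = np.linalg.matrix_rank(Phi)
```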
EXAMPLE 2.11 Convergence rate
y(t) = a y(t − 1) + b u(t − 1) + e(t)
with the perturbed feedback
u(t) = −k( 1 + v(t) )y(t)
where v(t) is a small external perturbation. The closed-loop system is
y(t + 1) = ( a − bk − bk v(t) )y(t) + e(t + 1)
The information matrix is
[ Σ y^2(j)     Σ y(j)u(j) ]
[ Σ y(j)u(j)  Σ u^2(j)   ]   (2.52)
With u = −k(1 + v)y,
Σ y(j)u(j) = −k Σ y^2(j) − k Σ v(j)y^2(j) ≈ −k t σy^2
Σ u^2(j) ≈ k^2 Σ y^2(j) + k^2 Σ v^2(j)y^2(j) ≈ k^2 σy^2 ( t + σv^2 log t )
if the perturbation decays like v(t) ~ 1/√t, so that Σ v^2(j)y^2(j) ≈ σv^2 σy^2 log t. The combination â − k b̂, which determines the closed-loop pole a − bk, is therefore estimated at the usual rate,
cov( â − k b̂ ) ≈ σe^2 / ( t σy^2 )
while the complementary combination k â + b̂ converges only logarithmically,
cov( k â + b̂ ) ≈ σe^2 / ( k^2 σv^2 σy^2 log t )
2.5 SIMULATION OF RECURSIVE ESTIMATION
y(t) = a y(t − 1) + b u(t − 1) + e(t) + c e(t − 1)
θ = ( â  b̂ )^T,   φ(t − 1) = ( y(t − 1)  u(t − 1) )^T   (2.53)
EXAMPLE 2.12 Excitation
â(1000) = 0.976,   b̂(1000) = 0.511
P(1000) = 10^{-3} [ 0.550  1.114 ]
                  [ 1.114  3.258 ]
σâ = 0.5 √(0.550·10^{-3}) = 0.012
σb̂ = 0.5 √(3.258·10^{-3}) = 0.029
EXAMPLE 2.13 Model structure
â(1000) = 0.702,   b̂(1000) = 0.697
P(1000) = 10^{-3} [ 0.710  1.435 ]
                  [ 1.435  3.903 ]
σâ = 0.5 √(0.710·10^{-3}) = 0.013
σb̂ = 0.5 √(3.903·10^{-3}) = 0.031
EXAMPLE 2.14 Closed-loop estimation
EXAMPLE 2.15 Influence of forgetting factor
With forgetting factor λ, the estimator has an effective memory of roughly N = 2/(1 − λ) samples.
EXAMPLE 2.16 Different estimation methods
θ̂(t) = θ̂(t − 1) + P(t)φ(t)( y(t) − φ^T(t)θ̂(t − 1) )   (2.54)
where P(t) can be chosen in different ways, for example
P(t) = γ (gradient method),
P(t) = γ/( φ^T(t)φ(t) ) (projection, 0 < γ < 2),
P(t) = ( Σ_{i=1}^{t} φ^T(i)φ(i) )^{-1} (stochastic approximation).
EXAMPLE 2.17 Prior information in continuous time
G(s) = 1 / ( (1 + τ1 s)(1 + τ2 s) )
With prior information about the time constants, the model can be reduced to the first-order model
θ2 dy/dt + y = θ3 u   (2.55)
which has fewer parameters to estimate.
EXAMPLE 2.18 Prior information in sampled models
H(q) = ( b1 q + b2 ) / ( q^2 + a1 q + a2 )
When the model is obtained by sampling a continuous-time system with time constants τ1 and τ2 and sampling period h, the poles are
q^2 + a1 q + a2 = ( q − e^{−h/τ1} )( q − e^{−h/τ2} ) = ( q − γ1 )( q − γ2 )
and b1, b2 are given by expressions in e^{−h/τ1} and e^{−h/τ2}. For short sampling periods, series expansion gives
b1 ≈ b2 ≈ h^2 / ( 2 τ1 τ2 )
so the pulse transfer function is approximately
H(q) = k( q + 1 ) / ( ( q − γ1 )( q − γ2 ) )
This prior knowledge halves the number of numerator parameters to estimate.
EXAMPLE 2.19 Reparameterization
dx/dt = [ 0     1/C  ] x + [ 1/C ] u
        [ −1/L  −R/L ]     [ 0   ]
y = ( 0  R ) x
With θ1 = 1/C, θ2 = 1/L, θ3 = R, the transfer function
G(s) = θ1 θ2 θ3 / ( s^2 + θ2 θ3 s + θ1 θ2 )
is nonlinear in the physical parameters. Reparameterizing as
G(s) = k3 / ( s^2 + k1 s + k2 ),   k1 = θ2 θ3,  k2 = θ1 θ2,  k3 = θ1 θ2 θ3
gives a model that is linear in the new parameters k1, k2, k3.
Process Model
A(q)y(t) = B(q)( u(t) + v(t) )   (3.1)
In delay-operator form, with A*(q^{-1}) = q^{-n}A(q),
A*(q^{-1})y(t) = B*(q^{-1})( u(t − d0) + v(t − d0) )
A*(q^{-1}) = 1 + a1 q^{-1} + ⋯ + an q^{-n}
B*(q^{-1}) = b0 + b1 q^{-1} + ⋯ + bm q^{-m}
For brevity, write Ay(t) = B( u(t) + v(t) ). The controller is
Ru(t) = Tuc(t) − Sy(t)   (3.2)
The closed-loop system is then
y(t) = ( BT/(AR + BS) )uc(t) + ( BR/(AR + BS) )v(t)   (3.3)
u(t) = ( AT/(AR + BS) )uc(t) − ( BS/(AR + BS) )v(t)   (3.4)
with closed-loop characteristic polynomial AR + BS = Ac.
Model-Following
Am ym(t) = Bm uc(t)   (3.5)
Model-following requires
BT/(AR + BS) = BT/Ac = Bm/Am   (3.6)
Factor
B = B⁺B⁻   (3.7)
where B⁺ is monic and contains the well-damped zeros, which may be canceled. Then
R = R′B⁺   (3.8)
Bm = B⁻Bm′   (3.9)
Ac = A0 Am B⁺   (3.10)
AR′ + B⁻S = A0 Am = Ac′   (3.11)
T = A0 Bm′   (3.12)
Causality Conditions
deg S ≤ deg R
deg T ≤ deg R   (3.13)
The Diophantine equation (3.11) has many solutions; given one solution (R⁰, S⁰), all solutions are
R = R⁰ + QB⁻
S = S⁰ − QA
The minimum-degree solution gives
deg R = deg Ac − deg A   (3.14)
Causality requires
deg Ac ≥ 2 deg A − 1
deg Am − deg Bm ≥ deg A − deg B = d0   (3.15)
ALGORITHM 3.1 Minimum-degree pole placement (MDPP)
Data: Polynomials A, B.
Specifications: Polynomials Am, Bm, and A0.
Compatibility conditions:
deg Am = deg A
deg Bm = deg B
deg A0 = deg A − deg B⁺ − 1
Bm = B⁻Bm′
Step 1: Factor B as B = B⁺B⁻, with B⁺ monic.
Step 2: Solve the Diophantine equation AR′ + B⁻S = A0 Am with respect to R′ and S.
Step 3: Form R = R′B⁺ and T = A0 Bm′, and compute the control signal from Ru = Tuc − Sy.
The closed-loop characteristic polynomial is AR + BS = Ac = A0 Am B⁺.
EXAMPLE 3.1 Model-following with cancellation
G(s) = 1 / ( s(s + 1) )
The sampled process is
H(q) = B(q)/A(q) = ( b0 q + b1 )/( q^2 + a1 q + a2 ) = ( 0.1065q + 0.0902 )/( q^2 − 1.6065q + 0.6065 )   (3.16)
The desired model is
Hm(q) = Bm(q)/Am(q) = bm0 q/( q^2 + am1 q + am2 ) = 0.1761q/( q^2 − 1.3205q + 0.4966 )
The process zero is canceled:
B⁺(q) = q + b1/b0,  B⁻(q) = b0,  Bm′(q) = ( bm0/b0 )q,  A0(q) = 1
The Diophantine equation becomes
( q^2 + a1 q + a2 )·1 + b0( s0 q + s1 ) = q^2 + am1 q + am2
that is,
a1 + b0 s0 = am1
a2 + b0 s1 = am2
so
s0 = ( am1 − a1 )/b0
s1 = ( am2 − a2 )/b0   (3.17)
The controller polynomials are
R(q) = B⁺(q) = q + b1/b0
S(q) = s0 q + s1
T(q) = A0 Bm′ = ( bm0/b0 )q   (3.18)
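The controller coefficients of Example 3.1 can be recomputed directly from the quoted process and model polynomials; the values below reproduce the numbers cited later for this example (the variable names are illustrative):

```python
# MDPP design of Example 3.1: all design equations are closed-form here
# because A0 = 1 and the process zero is canceled.
a1, a2 = -1.6065, 0.6065        # A(q)  = q^2 - 1.6065 q + 0.6065
b0, b1 = 0.1065, 0.0902         # B(q)  = 0.1065 q + 0.0902
am1, am2 = -1.3205, 0.4966      # Am(q) = q^2 - 1.3205 q + 0.4966
bm0 = 0.1761                    # Bm(q) = 0.1761 q

s0 = (am1 - a1) / b0            # match q^1 coefficients of the Diophantine eq.
s1 = (am2 - a2) / b0            # match q^0 coefficients
r1 = b1 / b0                    # R(q) = q + b1/b0
t0 = bm0 / b0                   # T(q) = (bm0/b0) q
```

Note that s1 comes out negative; the slide text lists magnitudes only because minus signs were lost in extraction.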
EXAMPLE3.2 Model-following without zero cancellation
Now B⁺ = 1 and B⁻ = B = b0 q + b1, so the process zero appears in the model:
Hm(q) = ( bm0 q + bm1 )/( q^2 + am1 q + am2 ) = β( b0 q + b1 )/( q^2 + am1 q + am2 )
β = ( 1 + am1 + am2 )/( b0 + b1 )
so that the static gain is 1. With A0(q) = q + a0, the Diophantine equation is
( q^2 + a1 q + a2 )( q + r1 ) + ( b0 q + b1 )( s0 q + s1 ) = ( q^2 + am1 q + am2 )( q + a0 )   (3.19)
Matching coefficients of equal powers of q gives three linear equations for r1, s0, and s1; their solution (3.20) has the common denominator b1^2 − a1 b0 b1 + a2 b0^2. Finally, from (3.12),
T(q) = βA0(q) = β( q + a0 )   (3.21)
EXAMPLE 3.3 Continuous-time system
G(s) = b / ( s(s + a) )
A0(s) = s + a0
Bm(s)/Am(s) = ω^2 / ( s^2 + 2ζωs + ω^2 )
The Diophantine equation is
s(s + a)(s + r1) + b( s0 s + s1 ) = ( s^2 + 2ζωs + ω^2 )( s + a0 )
Matching coefficients gives
a + r1 = 2ζω + a0
a r1 + b s0 = ω^2 + 2ζω a0
b s1 = ω^2 a0
If b ≠ 0, these equations can be solved, and we get
r1 = 2ζω + a0 − a
s0 = ( ω^2 + 2ζω a0 − a r1 )/b
s1 = ω^2 a0 / b
Furthermore, we have B⁺ = 1, B⁻ = b, and Bm′ = ω^2/b. It then follows from Eq. (3.12) that
T(s) = Bm′ A0(s) = ( ω^2/b )( s + a0 )
An Interpretation of the Polynomial A0
Relations to Model-Following
Using T = A0 Bm′ and A0 Am = AR′ + B⁻S,
T uc = A0 Bm′ uc = ( (AR′ + B⁻S)Bm′/Am ) uc = ( ARBm/(BAm) ) uc + S ym
since B⁻Bm′ = Bm and R′B⁺ = R. The control law Ru = Tuc − Sy can therefore be rewritten as
u = ( ABm/(BAm) ) uc + ( S/R )( ym − y )
where ym = ( Bm/Am )uc is the model output. The first term is a feedforward (the reference model in series with the inverse process model); the second is a feedback acting on the model error ym − y.
Figure 3.3 Alternative representation of model-following based on output feedback.
3.3 INDIRECT SELF-TUNING REGULATORS
y(t) = −a1 y(t − 1) − a2 y(t − 2) − ⋯ − an y(t − n) + b0 u(t − d0) + ⋯ + bm u(t − d0 − m)
y(t) = φ^T(t − 1)θ
θ^T = ( a1  a2 ⋯ an  b0 ⋯ bm )
φ^T(t − 1) = ( −y(t − 1) ⋯ −y(t − n)  u(t − d0) ⋯ u(t − d0 − m) )
Recursive least squares:
θ̂(t) = θ̂(t − 1) + K(t)ε(t)
ε(t) = y(t) − φ^T(t − 1)θ̂(t − 1)
K(t) = P(t − 1)φ(t − 1)( λ + φ^T(t − 1)P(t − 1)φ(t − 1) )^{-1}
P(t) = ( I − K(t)φ^T(t − 1) )P(t − 1)/λ   (3.22)
The model has N = n + m + 1 parameters and requires max(n, m + d0) past data values.
An Indirect Self-Tuner
ALGORITHM 3.2 Indirect self-tuning regulator using RLS and MDPP
Step 1: Estimate the coefficients of A and B by recursive least squares, Eq. (3.22).
Step 2: Apply the MDPP design (Algorithm 3.1) with A and B replaced by their estimates to obtain R, S, and T.
Step 3: Compute the control signal from
Ru(t) = Tuc(t) − Sy(t)
Repeat Steps 1, 2, and 3 at each sampling period.
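A minimal indirect self-tuner in the spirit of Algorithm 3.2, for a first-order plant. The plant, the desired pole, and all numbers are illustrative assumptions, not the book's examples:

```python
# Indirect self-tuner sketch for y(t+1) = a*y(t) + b*u(t): RLS estimates
# (a, b); a certainty-equivalence design places the closed-loop pole at am
# with unit static gain, u = ((am - a_hat)*y + (1 - am)*uc) / b_hat.
import numpy as np

a, b = 0.9, 0.5                  # unknown "true" plant
am = 0.7                         # desired closed-loop pole
theta = np.array([0.5, 2.0])     # initial estimates (a_hat, b_hat)
P = 100.0 * np.eye(2)

y = 0.0
for t in range(200):
    uc = 1.0 if (t // 25) % 2 == 0 else -1.0      # square-wave command
    b_eff = max(theta[1], 0.1)                    # keep gain estimate away from 0
    u = ((am - theta[0]) * y + (1.0 - am) * uc) / b_eff
    y_next = a * y + b * u                        # plant step (noise-free)
    phi = np.array([y, u])                        # regressor for y(t+1)
    eps = y_next - phi @ theta
    K = P @ phi / (1.0 + phi @ P @ phi)           # RLS update, Eq. (3.22)
    theta = theta + K * eps
    P = P - np.outer(K, phi @ P)
    y = y_next
```

The guard on the estimated gain is a standard practical safeguard against division by a near-zero b̂ during the transient.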
EXAMPLE 3.4 Indirect self-tuner with cancellation of process zero
y(t) = −a1 y(t − 1) − a2 y(t − 2) + b0 u(t − 1) + b1 u(t − 2)   (3.23)
u(t) = −r1 u(t − 1) + t0 uc(t) − s0 y(t) − s1 y(t − 1)
Estimates at t = 100 (true values in parentheses):
â1(100) = 1.60 (1.6065),   â2(100) = 0.60 (0.6065)
b̂0(100) = 0.107 (0.1065),   b̂1(100) = 0.092 (0.0902)
r1(100) = 0.85 (0.8467),   t0(100) = 1.65 (1.6531)
s0(100) = 2.64 (2.6852),   s1(100) = 0.99 (1.0321)
EXAMPLE 3.5 Indirect self-tuner without cancellation of process zero
â1(100) = 1.57 (1.6065),   â2(100) = 0.57 (0.6065)
b̂0(100) = 0.092 (0.1065),   b̂1(100) = 0.112 (0.0902)
r1(100) = 0.114 (0.1111),   t0(100) = 0.86 (0.8951)
s0(100) = 1.44 (1.6422),   s1(100) = 0.58 (0.7471)
and at t = 20:
r1(20) = 0.090 (0.1111),   t0(20) = 0.83 (0.8951)
s0(20) = 1.13 (1.6422),   s1(20) = 0.29 (0.7471)
Note that the closed loop has the correct static gain even before the estimates converge: the design enforces
B̂(1)T(1) / ( Â(1)R(1) + B̂(1)S(1) ) = Bm(1)/Am(1) = 1
for the estimated model at every step.
3.4 CONTINUOUS-TIME SELF-TUNERS
A(p)y(t) = B(p)u(t)
A(p) = p^n + a1 p^{n−1} + ⋯ + an
B(p) = b1 p^{n−1} + ⋯ + bn
yf(t) = Hf(p)y(t)
uf(t) = Hf(p)u(t)
p^n yf(t) = φ^T(t)θ
φ(t) = ( −p^{n−1}yf ⋯ −yf  p^{n−1}uf ⋯ uf )^T
θ = ( a1 ⋯ an  b1 ⋯ bn )^T
dθ̂(t)/dt = P(t)φ(t)( p^n yf(t) − φ^T(t)θ̂(t) )
dP(t)/dt = αP(t) − P(t)φ(t)φ^T(t)P(t)
EXAMPLE 3.6 Continuous-time self-tuner
G(s) = b / ( s(s + a) )
Hf(s) = 1/Am(s)
The controller is
u(t) = ( t0( p + a0 )/( p + r1 ) )uc(t) − ( ( s0 p + s1 )/( p + r1 ) )y(t)
Gm(s) = ω^2 / ( s^2 + 2ζωs + ω^2 )
r1 = 2ζω + a0 − a
s0 = ( ω^2 + 2ζω a0 − a r1 )/b
s1 = ω^2 a0 / b
t0 = ω^2 / b
â(100) = 1.004 (1.0000)
b̂(100) = 1.001 (1.0000)
3.5 DIRECT SELF-TUNING REGULATOR
Ay(t) = Bu(t)
Am ym(t) = Bm uc(t)
A0 Am = AR′ + B⁻S
Multiplying by y(t) and using Ay = Bu:
A0 Am y(t) = AR′y(t) + B⁻Sy(t) = R′Bu(t) + B⁻Sy(t)
Since R′B = R′B⁺B⁻ = B⁻R,
A0 Am y(t) = B⁻( Ru(t) + Sy(t) )   (3.24)
This is a model of the process parameterized directly in the controller polynomials R and S.
Minimum-Phase Systems
If all process zeros are canceled, B⁻ = b0 and
Am A0 y(t) = b0( Ru(t) + Sy(t) ) = R̃u(t) + S̃y(t)
with R̃ = b0 R and S̃ = b0 S. The reference model is chosen as a pure delay with unit static gain, Bm = q^{−d0} Am(1). The model can then be written as
y(t) = ( 1/(A0 Am) )( R̃u(t) + S̃y(t) ) = R̃*(q^{-1})uf(t − d0) + S̃*(q^{-1})yf(t − d0)   (3.26)-(3.27)
uf(t) = ( 1/( A0*(q^{-1})Am*(q^{-1}) ) )u(t)
yf(t) = ( 1/( A0*(q^{-1})Am*(q^{-1}) ) )y(t)
θ^T = ( r0  r1 ⋯  s0  s1 ⋯ )
φ^T(t) = ( uf(t)  uf(t − 1) ⋯  yf(t)  yf(t − 1) ⋯ )
y(t) = φ^T(t − d0)θ   (3.28)
ALGORITHM 3.3 Simple direct self-tuner
Data: Given specifications in terms of Am, Bm, and A0 and the relative degree d0 of the system.
Step 1: Estimate the coefficients of the polynomials R and S in the model (3.27), that is,
y(t) = R*uf(t − d0) + S*yf(t − d0)
by recursive least squares, Eqs. (3.22).
Step 2: Compute the control signal from
R*u(t) = T*uc(t) − S*y(t)
where R and S are obtained from the estimates in Step 1 and
T* = A0* Am(1)   (3.29)
with deg A0 = d0 − 1. Repeat Steps 1 and 2 at each sampling period.
Equation (3.29) is obtained from the observation that the closed-loop transfer operator from command signal uc to process output is
TB/( AR + BS ) = T b0 B⁺/( b0 A0 Am B⁺ ) = T/( A0 Am )
Requiring that this equal the desired response q^{−d0} Am(1)/Am gives Eq. (3.29).
EXAMPLE 3.7 Direct self-tuner with d0 = 1
y(t) = r0 uf(t − 1) + r1 uf(t − 2) + s0 yf(t − 1) + s1 yf(t − 2)
uf(t) = −am1 uf(t − 1) − am2 uf(t − 2) + u(t)
yf(t) = −am1 yf(t − 1) − am2 yf(t − 2) + y(t)
The control law is
r̂0 u(t) = −r̂1 u(t − 1) + t̂0 uc(t) − ŝ0 y(t) − ŝ1 y(t − 1)
t̂0 = 1 + am1 + am2 = Am(1)   (3.30)
Estimated controller parameters, normalized by r̂0 (true values in parentheses):
r̂1(100)/r̂0(100) = 0.850 (0.8467)
t̂0(100)/r̂0(100) = 1.65 (1.6531)
ŝ0(100)/r̂0(100) = 2.68 (2.6852)
ŝ1(100)/r̂0(100) = 1.03 (1.0321)
EXAMPLE 3.8 Direct self-tuner with d0 = 2
r̂1(100)/r̂0(100) = 0.337
ŝ0(100)/r̂0(100) = 1.20
ŝ1(100)/r̂0(100) = 0.67
t̂0(100)/r̂0(100) = 0.52
Feedforward Control
With a measurable disturbance v, the model (3.24) generalizes to
y(t) = ( 1/(A0 Am) )( Ru(t) + Sy(t) + Uv(t) )   (3.31)
and the control law becomes
Ru(t) = Tuc(t) − Sy(t) − Uv(t)
The model error
e(t) = y(t) − ym(t),   ym(t) = ( Bm/Am )uc(t)
can be written as
e(t) = ( 1/(A0 Am) )( Ru(t) + Sy(t) − Tuc(t) )
     = R*uf(t − d0) + S*yf(t − d0) − T*ucf(t − d0)
ucf(t) = ( 1/( A0*(q^{-1})Am*(q^{-1}) ) )uc(t)   (3.32)
Non-minimum-Phase (NMP) Systems
The case in which process zeros cannot be canceled will now be discussed. Consider the transformed process model Eq. (3.24), that is,
A0 Am y(t) = B⁻( Ru(t) + Sy(t) )
where deg R = deg S = deg(A0 Am) − deg B⁻. If we introduce
R̄ = B⁻R  and  S̄ = B⁻S
the equation can be written as
y(t) = ( 1/(A0 Am) )( R̄u(t) + S̄y(t) ) = R̄*uf(t − d0) + S̄*yf(t − d0)   (3.33)
where uf and yf are the filtered inputs and outputs given by Eqs. (3.28). Notice that the polynomial R̄ is not monic. The polynomials R̄ and S̄ have the common factor B⁻, which represents poorly damped zeros. This factor should be canceled before the control law is calculated. The following direct adaptive control algorithm is then obtained.
ALGORITHM 3.4 Direct self-tuning regulator for NMP systems
Data: Given specifications in terms of Am, Bm, and A0 and the relative degree d0 of the system.
Step 1: Estimate the coefficients of the polynomials R̄ and S̄ in the model (3.33) by recursive least squares.
Step 2: Cancel possible common factors in R̄ and S̄ to obtain R and S.
Step 3: Calculate the control signal from Eq. (3.2), where R and S are those obtained in Step 2 and T is given by Eq. (3.12).
Repeat Steps 1, 2, and 3 at each sampling period.
Calculation of the polynomial T should be avoided. To do this, notice that
ym = ( B⁻Bm′/Am )uc
The error
e = y − ym
can then be written as
e(t) = ( B⁻/(A0 Am) )( Ru(t) + Sy(t) − Tuc(t) )
     = R̄*uf(t − d0) + S̄*yf(t − d0) − T̄*ucf(t − d0)   (3.34)
By basing parameter estimation on this equation, estimates of the polynomials R̄, S̄, and T̄ can be determined.
Mixed Direct and Indirect Algorithm
A0 Am y(t) = B( Ru(t) + Sy(t) )
y(t) = ( B/(A0 Am) )( Ru(t) + Sy(t) ) = B*(q^{-1})( R*uf(t − d0) + S*yf(t − d0) )   (3.35)
ALGORITHM 3.5 A hybrid self-tuner
The process model is Ay = Bu and the control law Ru = Tuc − Sy, with
T = t0 A0,   t0 = Am(1)/B(1)
The model error e(t) = y(t) − ym(t) can be written as
e(t) = ( B/(A0 Am) )( Ru(t) + Sy(t) − t0 A0 uc(t) )
     = B*(q^{-1})( R*uf(t − d0) + S*yf(t − d0) − t0 A0*ucf(t − d0) )   (3.36)
3.6 DISTURBANCES WITH KNOWN CHARACTERISTICS
Example 3.9 Effect of load disturbances
A Modified Design Procedure
Assume that the disturbance v satisfies
Ad v = e   (3.37)
where e is zero except at isolated time instants; typical cases are Ad(q) = q − 1 (piecewise-constant disturbances) and Ad(p) = p. The closed loop is then
y = ( BT/(AR + BS) )uc + ( BR/( Ad(AR + BS) ) )e
u = ( AT/(AR + BS) )uc − ( BS/( Ad(AR + BS) ) )e   (3.38)
Given one solution (R0, S0) of AR0 + BS0 = Ac0, all controllers with AR + BS = XAc0 are
R = XR0 + YB
S = XS0 − YA   (3.39)
Requiring R = Ad R′ gives
Ad R′ = XR0 + YB   (3.40)
Integral Action
With Ad = q − 1 and X = q + x0, Eq. (3.40) becomes
( q − 1 )R′ = ( q + x0 )R0 + y0 B   (3.41)
Putting q = 1 gives
y0 = −( 1 + x0 )R0(1)/B(1)
Modifications of the Estimator
Filtering the data with Ad,
Ad A y(t) = Ad B( u(t) + v(t) ) = Ad B u(t) + B e(t)
gives the model
A yf(t) = B( uf(t) + e(t) )   (3.42)
with yf = Ad y and uf = Ad u, in which the disturbance appears only as isolated pulses.
Example 3.10 Load disturbances: Modified estimator and controller
R0 = q + r1,   S0 = s0 q + s1
y0 = −( 1 + r1 )/( b0 + b1 )
R = q( q + r1 ) + y0( b0 q + b1 ) = ( q − 1 )( q − b1 y0 )
S = q( s0 q + s1 ) + y0( q^2 + a1 q + a2 ) = ( s0 + y0 )q^2 + ( s1 + a1 y0 )q + a2 y0
A Direct Self-tuner with Integral Action
A(q)y(t) = B(q)( u(t) + v(t) )
d = deg A(q) − deg B(q)   (3.43)
The desired response is a pure delay with unit static gain,
Am*(q^{-1})ym(t) = Am(1)uc(t − d)   (3.44)
All cancelable zeros are removed, and integral action is enforced by requiring a factor q − 1 in R:
AR + BS = B⁺A0 Am   (3.45)
R = R′B⁺,   R′ = ( q − 1 )R1′   (3.46)
AR′ + b0 S = A0 Am   (3.47)
Multiplying the process model through then gives
A0 Am y(t) = AR′y(t) + b0 Sy(t) = b0( R′u(t) + Sy(t) ) + b0 R′v(t)   (3.48)
or, in delay-operator form with ∇* = 1 − q^{-1},
A0*(q^{-1})Am*(q^{-1})y(t + d) = b0( R1′*(q^{-1})∇*u(t) + S*(q^{-1})y(t) )   (3.49)
Requiring unit static gain gives
b0 S*(1) = A0*(1)Am*(1) = A0(1)Am(1)   (3.50)
so b0 S* can be written as
b0 S*(q^{-1}) = A0(1)Am(1) + ( 1 − q^{-1} )S′*(q^{-1})
Substituting, with R̄* = b0 R1′* and S̄* = S′*,
A0*(q^{-1})Am*(q^{-1})y(t + d) = A0(1)Am(1)y(t) + R̄*(q^{-1})∇*u(t) + S̄*(q^{-1})∇*y(t)   (3.51)
and hence
y(t + d) = ( A0(1)Am(1)/( A0*(q^{-1})Am*(q^{-1}) ) )y(t) + R̄*(q^{-1})uf(t) + S̄*(q^{-1})yf(t)   (3.52)
uf(t) = ( ( 1 − q^{-1} )/( A0*(q^{-1})Am*(q^{-1}) ) )u(t)
yf(t) = ( ( 1 − q^{-1} )/( A0*(q^{-1})Am*(q^{-1}) ) )y(t)
The control law replaces y(t + d) by the desired response:
R̄*(q^{-1})∇*u(t) = A0*(q^{-1})Am(1)uc(t) − A0(1)Am(1)y(t) − S̄*(q^{-1})∇*y(t)
u(t) = sat( u(t) )   (3.53)
where the saturation provides anti-windup. Because the input and output enter only through their differences ∇*u and ∇*y, the controller has integral action.
ALGORITHM 3.6 A direct self-tuning algorithm
Step 1: Estimate the parameters in Eq. (3.52) by recursive least squares.
Step 2: Compute the control signal from Eqs. (3.53) by using the estimates
from Step 1.
This algorithm may be viewed as a practical version of Algorithm 3.3.