Linear Models


The Use of Mathematica in Control Engineering
Neil Munro
Control Systems Centre
UMIST
Manchester, England.
• Linear Model Descriptions
• Linear Model Transformations
• Linear System Analysis Tools
• Design/Synthesis Techniques
• Pole Assignment
• Model-Reference Optimal Control
• PID Controller
• Concluding Remarks
Linear Model Descriptions
The Control System Professional currently provides several ways of describing linear system models; e.g.
1. For systems described by the state-space equations
$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$
where y is a vector of the system outputs, u is a vector of the system inputs, and x is a vector of the system state-variables.
2. For systems described by transfer-function relationships
$$y(s) = G(s)\,u(s)$$
where s is the complex variable.
Examples:
$$\dot{x} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -ab & 0 & a+b & 0 \\ 0 & -ab & 0 & a+b \end{bmatrix} x + \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} u$$
$$y = \begin{bmatrix} -b & -a & 1 & 1 \end{bmatrix} x$$
ss = StateSpace[{{0, 0, 1, 0},{0, 0, 0, 1},
{-(a b), 0, a+b, 0},{0, -(a b), 0, a+b}},
{{0, 0},{0, 0},{1, 0},{0, 1}},{{-b, -a, 1, 1}}]
// ReviewForm
which displays the model with its matrices labelled:
$$A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -ab & 0 & a+b & 0 \\ 0 & -ab & 0 & a+b \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad C = \begin{bmatrix} -b & -a & 1 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \end{bmatrix}$$
The corresponding transfer-function description of the same system is
$$G(s) = \begin{bmatrix} \dfrac{1}{s-a} & \dfrac{1}{s-b} \end{bmatrix}$$
tf = TransferFunction[s,{{1/(s-a),1/(s-b)}}]
TransferFunction[s, {{1/(-a + s), 1/(-b + s)}}]
New Data Formats have been implemented for these objects, which are fully editable, as follows:
ss = StateSpace[{{0, 0, 1, 0},{0, 0, 0, 1},
{-(a b), 0, a+b, 0},{0, -(a b), 0, a+b}},
{{0, 0},{0, 0},{1, 0},{0, 1}},{{-b, -a, 1, 1}}]
now results in the composite data matrix
$$ss = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]_S = \left[\begin{array}{cccc|cc} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -ab & 0 & a+b & 0 & 1 & 0 \\ 0 & -ab & 0 & a+b & 0 & 1 \\ \hline -b & -a & 1 & 1 & 0 & 0 \end{array}\right]_S$$
tfsys = TransferFunction[s,
{{((s+2)(s+3))/(s+1)^2,1/(s+1)^2},
{(s+2)/(s+1)^2,(s+1)/((s+1)^2(s+3))},
{1/(s+2),1/(s+1)}}]
which now yields
$$\begin{bmatrix} \dfrac{(s+2)(s+3)}{(s+1)^2} & \dfrac{1}{(s+1)^2} \\[4pt] \dfrac{s+2}{(s+1)^2} & \dfrac{s+1}{(s+1)^2 (s+3)} \\[4pt] \dfrac{1}{s+2} & \dfrac{1}{s+1} \end{bmatrix}_T$$
New Data Objects
Four new data objects have been introduced;
namely,
1. Rosenbrock’s system matrix in
polynomial form
2. Rosenbrock’s system matrix in state-space form
3. The right matrix-fraction description of a
system
4. The left matrix-fraction description of a
system.
The system matrix in polynomial form
provides a compact description of a linear
dynamical system described by arbitrary
ordered differential equations and algebraic
relationships, after the application of the
Laplace transform with zero initial
conditions; namely
$$T(s)\,\xi = U(s)\,u, \qquad y = V(s)\,\xi + W(s)\,u$$
where ξ, u and y are vectors of the Laplace-transformed system variables. This set of equations can equally be written as
$$\begin{bmatrix} T(s) & U(s) \\ -V(s) & W(s) \end{bmatrix} \begin{bmatrix} \xi \\ -u \end{bmatrix} = \begin{bmatrix} 0 \\ -y \end{bmatrix}$$
The system matrix in polynomial form
is then defined as
$$P(s) = \begin{bmatrix} T(s) & U(s) \\ -V(s) & W(s) \end{bmatrix}$$
When a system matrix in polynomial form is
being created, it is important to note that the
dimension of the square matrix T(s) must be
adjusted to be r, where r is greater than or equal to the degree of Det[T(s)].
If the system description is known in state-space form, then a special form of the system matrix can be constructed, known as the system matrix in state-space form, as shown below:
$$P(s) = \begin{bmatrix} sI_n - A & B \\ -C & D \end{bmatrix}$$
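As a small numerical illustration (an example chosen here, not one from the talk): the state-space model A = {{0, 1}, {-2, -3}}, B = {{0}, {1}}, C = {{1, 0}}, D = {{0}} has the system matrix
$$P(s) = \begin{bmatrix} s & -1 & 0 \\ 2 & s+3 & 1 \\ -1 & 0 & 0 \end{bmatrix}.$$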
Matrix Fraction Forms
Given a system description in transfer-function matrix form G(s), for certain analysis and design purposes (e.g. the H∞ approach to robust control system design) it is often convenient to express this model in the form of a left or right matrix-fraction description; e.g.
1. A left matrix-fraction form of a given transfer-function matrix G(s) might be
$$G(s) = D_L^{-1}(s)\,N_L(s)$$
2. A right matrix-fraction form of a given transfer-function matrix G(s) might be
$$G(s) = N_R(s)\,D_R^{-1}(s)$$
Linear Model Transformations
[Diagram: the transformations provided between model forms]
• G(s) ↔ [A, B, C, D]
• G(s) ↔ D_L^{-1}(s) N_L(s) or N_R(s) D_R^{-1}(s)
• G(s) ↔ system matrix P(s) in polynomial form
• [A, B, C, D] ↔ system matrix P(s) in state-space form
• [T(s), U(s), V(s), W(s)] ↔ system matrix P(s) in polynomial form
• Least Order: P1(s) → P2(s)
New Data Transformations
All data formats are fully editable
tfsys = TransferFunction[s,
  {{((s+2)(s+3))/(s+1)^2, 1/(s+1)^2},
   {(s+2)/(s+1)^2, (s+1)/((s+1)^2 (s+3))},
   {1/(s+2), 1/(s+1)}}]
ss = StateSpace[tfsys]
[The result is the corresponding StateSpace object, displayed as a composite numerical [A B; C D]_S data matrix.]
rff = RightMatrixFraction[tfsys]
[The result is the right matrix-fraction description {N_R(s), D_R(s)} of tfsys, with G(s) = N_R(s) D_R(s)^{-1}.]
rff = RightMatrixFraction[ss, s]
[The same right matrix-fraction description, obtained directly from the StateSpace object.]
New Data Transformations
tfsys = TransferFunction[s,
  {{(s+1)/(s^2+2s+1)},{(s+2)/(s+1)}}]
$$\begin{bmatrix} \dfrac{s+1}{s^2+2s+1} \\[4pt] \dfrac{s+2}{s+1} \end{bmatrix}_T$$
ps = SystemMatrix[tfsys, TargetForm -> RightFraction]
[The system matrix of tfsys in the form appropriate to a right fraction is returned.]
rff = RightMatrixFraction[ps]
[The right matrix-fraction description is extracted from the system matrix.]
TransferFunction[%]
[The transfer-function matrix is recovered.]
ps = SystemMatrix[tfsys, TargetForm -> LeftFraction]
[The system matrix of tfsys in the form appropriate to a left fraction is returned.]
lf = LeftFractionForm[{{1 + s}, {2 + s}}, {{1 + 2 s + s^2, 0}, {0, 1 + s}}, s]
$$\begin{bmatrix} s^2+2s+1 & 0 \\ 0 & s+1 \end{bmatrix}^{-1} \begin{bmatrix} s+1 \\ s+2 \end{bmatrix}$$
New Data Transformations
tfsys = TransferFunction[s,
  {{(s+1)/(s^2+2s+1)},{(s+2)/(s+1)}}]
rff = RightMatrixFraction[tfsys]
$$\begin{bmatrix} s+1 \\ (s+1)(s+2) \end{bmatrix} \big[(s+1)^2\big]^{-1}$$
dt = ToDiscreteTime[tfsys, Sampled -> 20] // Simplify
$$\begin{bmatrix} \dfrac{e^{20}-1}{e^{20}z-1} \\[6pt] \dfrac{e^{20}z+e^{20}-2}{e^{20}z-1} \end{bmatrix}_T \qquad \text{(sampling period 20)}$$
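The first entry can be cross-checked against the textbook zero-order-hold formula; the short calculation below uses only built-in Mathematica and assumes ToDiscreteTime applied a zero-order hold:
T = 20;
gd[z_] = (1 - E^-T)/(z - E^-T);            (* ZOH equivalent of 1/(s + 1) with period T *)
Simplify[gd[z] == (E^T - 1)/(E^T z - 1)]   (* True: identical to the form shown above *)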
New Data Transformations
RightMatrixFraction[%]
$$\begin{bmatrix} e^{20}-1 \\ e^{20}z+e^{20}-2 \end{bmatrix} \big[\,e^{20}z-1\,\big]^{-1}$$
SystemMatrix[dt, TargetForm -> RightFraction]
$$\begin{bmatrix} e^{20}z-1 & 1 \\ -(e^{20}-1) & 0 \\ -(e^{20}z+e^{20}-2) & 0 \end{bmatrix}$$
SystemMatrix[dt]
[The system matrix of dt in polynomial form is returned.]
A Least-Order form of a System-Matrix
in polynomial form is one in which there
are no input-decoupling zeros and no
output-decoupling zeros, and would yield a
minimal state-space realization, when
directly converted to state-space form.
For example, the polynomial system matrix P(s) displayed on the slide (its T(s) and U(s) blocks contain entries such as s²(s+1), s³+s²+1, s(s+2), s²+3s+2 and s²+3s+1) is not least order, since T(s) and U(s) have 3 input-decoupling zeros at s = {0, 0, -1}; i.e. [T(s) U(s)] has rank ≤ 4 at these values of s. The transformed system matrices P1(s) and P2(s) displayed on the slide lead to the same transfer function, and hence
$$G(s) = V_1(s)\,T_1^{-1}(s)\,U_1(s) + W(s) = V_2(s)\,T_2^{-1}(s)\,U_2(s) + W(s) = \frac{1}{s^2}.$$
System Analysis
For a state-space model ss = [A, B, C, D]:
• Controllable[ss], Observable[ss]
For a system matrix in state-space form, P(s) = [sI − A, B; −C, D]:
• Controllable[ps], Observable[ps], decoupling zeros
For a system matrix in polynomial form, P(s) = [T(s), U(s); −V(s), W(s)]:
• Controllable[ps], Observable[ps], decoupling zeros
• SmithForm[T(s) U(s)], McMillanForm[G(s)]
• MatrixLeftGCD[T(s) U(s)], MatrixRightGCD[T(s) V(s)]
Controllability and
Observability
In the same way that the controllability and observability of a system described by a set of state-space equations can be determined in the Control System Professional by entering the commands
Controllable[ss] and Observable[ss]
where ss is a StateSpace object, these tests can now also be directly applied to a system-matrix object by entering the commands
Controllable[sm] and Observable[sm]
where sm is a SystemMatrix object in either polynomial form or state-space form.
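A minimal usage sketch (the two-state model is illustrative, and it is assumed the package is loaded through its standard ControlSystems` context):
Needs["ControlSystems`"]
ss = StateSpace[{{0, 1}, {-2, -3}}, {{0}, {1}}, {{1, 0}}];
Controllable[ss]   (* True *)
Observable[ss]     (* True *)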
Preliminary Analysis
• Reduction of State-Space Equations
• Given a system matrix in state-space form
$$P(s) = \begin{bmatrix} sI_n - A & B \\ -C & D \end{bmatrix}$$
• then an input-decoupling zeros algorithm, implemented in Mathematica, reduces P(s) to
$$P_1(s) = \begin{bmatrix} sI - A_{11} & 0 & 0 \\ -A_{21} & sI - A_{22} & B_2 \\ -C_1 & -C_2 & D \end{bmatrix}$$
• The completely controllable part is then given by
$$\{\, sI - A_{22},\; B_2 \,\}$$
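The idea behind the reduction can be sketched in plain Mathematica on a toy 3-state model whose third state cannot be reached from the input (illustrative values only; this is not the package's algorithm):
a = {{-1, 0, 0}, {1, -2, 0}, {0, 0, -3}};
b = {{1}, {0}, {0}};
ctrb = ArrayFlatten[{NestList[a . # &, b, 2]}];    (* [b, Ab, A^2 b] *)
MatrixRank[ctrb]                                   (* 2 < 3: one uncontrollable mode *)
MatrixRank[Join[-3 IdentityMatrix[3] - a, b, 2]]   (* rank drop at s = -3, an input-decoupling zero *)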
Gasifier
[Schematic of the gasifier, with variables PSINK, TGAS, PGAS, CVGAS, WAIR, WSTM, WCOL, WLS, MASS, WCHR.]
Model Format
$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$
A is 25 x 25, B is 25 x 6, C is 4 x 25, D is 4 x 6.
Inputs: 1 char; 2 air; 3 coal; 4 steam; 5 limestone; 6 upstream disturbance.
Outputs: 1 gas cv; 2 bed mass; 3 gas pressure; 4 gas temperature.
Preliminary Analysis
The original 25th order system is numerically very ill
conditioned.
The eigenvalues cover a significant range in the complex
plane, ranging from -0.00033 to -33.1252.
The condition number is 5.24 x 1019.
At w = 0 the maximum and minimum singular values are
147500 and 50, respectively.
The Kalman controllability and observability tests yield a
rank of 1, and the controllability and observability gramians
are :-
 1.07  1016 






5
 7.64  10 

7 
7
.
29

10


 4.63  1015 


 22
 8.08  10 
Wc  
3.17  10 31 


 33
1.59  10 

 43 
3
.
09

10


 9.35  10 69 


0




0
 4.47  1014 





4
 2.84  10 

0 
 4.49  10 
 5.29  10 2 


2
 3.29  10 
Wo  
2.29  10 2 


3
 6.15  10 

3 
1
.
78

10


1.22  10 3 

4 
7
.
02

10



5
 9.11 10 
Preliminary Analysis
Application of the decoupling zeros algorithm to [sI-A, B] yielded
Dimensions of [A, B; C, D] = (29, 31)
Dimensions of [Ar, Br; Cr, Dr] = (22, 24)
Dimensions of Az = (7, 7)
$$A_z = \mathrm{diag}(-0.05677,\; -0.05677,\; -0.05677,\; -0.0002426,\; -0.05677,\; -0.05677,\; -0.05677)$$
indicating that the system had 7 input-decoupling zeros,
which was confirmed by transforming A and B to spectral form.
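The spectral-form check mentioned above can be sketched in a few lines of plain Mathematica (toy 3-state example with one unreachable mode; not the gasifier data):
a = {{-1, 0, 0}, {1, -2, 0}, {0, 0, -3}}; b = {{1}, {0}, {0}};
{lam, v} = Eigensystem[a];
bModal = Inverse[Transpose[v]] . b;    (* input matrix expressed in the eigenvector basis *)
Pick[lam, Chop[Norm /@ bModal], 0]     (* eigenvalues whose modal input row vanishes: {-3} *)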
Coprime Factorizations
ps = SystemMatrix[t, u, v, w, s]
test = MatrixLeftGCD[s, t, u];
Print["L = ", MatrixForm[test[[1]]]];
Print["T now = ", MatrixForm[test[[2]]]];
Print["U now = ", MatrixForm[test[[3]]]];
Expand[test[[1]] . test[[2]]] === Expand[t]
Simplify[test[[1]] . test[[3]]] === u
[Output: the greatest common left divisor L(s), the reduced blocks "T now" and "U now", and the value True for both consistency checks.]
Smith and McMillan
Forms
The Smith form of a polynomial matrix and
the McMillan form of a rational polynomial
matrix are both important in control systems
analysis.
Consider an ℓ x m polynomial matrix N(s); then the Smith form of N(s) is defined as
$$S(s) = L(s)\,N(s)\,R(s)$$
where
$$S(s) = \big[\,\mathrm{diag}\{\epsilon_i(s)\} \;\; 0_{\ell,\,m-\ell}\,\big] \;(m > \ell), \qquad S(s) = \mathrm{diag}\{\epsilon_i(s)\} \;(m = \ell), \qquad S(s) = \begin{bmatrix} \mathrm{diag}\{\epsilon_i(s)\} \\ 0_{\ell-m,\,m} \end{bmatrix} \;(m < \ell)$$
and L(s) and R(s) are unimodular matrices.
Smith and McMillan
Forms
Consider now an ℓ x m rational polynomial matrix G(s), and let
$$G(s) = N(s)/d(s)$$
where d(s) is the monic least common denominator of G(s); then the McMillan form of G(s) is defined as
$$M(s) = \big[\,\mathrm{diag}\{\epsilon_i(s)/\psi_i(s)\} \;\; 0_{\ell,\,m-\ell}\,\big] \;(m > \ell), \qquad M(s) = \mathrm{diag}\{\epsilon_i(s)/\psi_i(s)\} \;(m = \ell), \qquad M(s) = \begin{bmatrix} \mathrm{diag}\{\epsilon_i(s)/\psi_i(s)\} \\ 0_{\ell-m,\,m} \end{bmatrix} \;(m < \ell)$$
where M(s) is the result of dividing the Smith form of N(s) by d(s), and cancelling out all common factors.
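As a small worked example (chosen here for illustration), take
$$G(s) = \begin{bmatrix} 1/s & 1/s^2 \\ 0 & 1/s \end{bmatrix}, \qquad d(s) = s^2, \qquad N(s) = d(s)\,G(s) = \begin{bmatrix} s & 1 \\ 0 & s \end{bmatrix}.$$
The invariant factors of N(s) are ε₁(s) = 1 and ε₂(s) = s², so S(s) = diag{1, s²} and
$$M(s) = \mathrm{diag}\{1/s^2,\; 1\},$$
giving a McMillan degree of 2.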
Synthesis Methods
[Design-methods overview: Pole Assignment, PID Controller, Optimal Control, Nyquist Array, Model-Reference Optimal Control, Robust Nyquist Array, Model-Order Reduction, Nonlinear Systems.]
Design/Synthesis Methods
Methods implemented are:
1. Pole Assignment - Some Observations
2. Model-Reference Optimal Control
3. The Systematic Design of PID Controllers
4. Uncertain Nonlinear Systems
5. Robust Direct Nyquist Array Design Method
6. Model-Order Reduction
Pole Assignment
We consider four main types of approaches
• ACKERMANN’S FORMULA
• SPECTRAL APPROACH
• MAPPING APPROACH
• EIGENVECTOR METHODS
Ackermann’s Formula
$$k = \begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix} \Gamma^{-1} p_c(A)$$
Here Γ is the controllability matrix of [A, b], and p_c(s) is the desired closed-loop system characteristic polynomial.
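A minimal sketch of the formula in plain Mathematica (the helper name ackermann and the numerical test case are illustrative, not Control System Professional functions):
ackermann[a_?MatrixQ, b_?VectorQ, poles_List] :=
  Module[{n = Length[a], s, ctrb, pc, pcA},
    ctrb = Transpose[NestList[a . # &, b, n - 1]];         (* [b, Ab, ..., A^(n-1) b] *)
    pc = CoefficientList[Expand[Times @@ (s - poles)], s];  (* desired characteristic polynomial *)
    pcA = Sum[pc[[i + 1]] MatrixPower[a, i], {i, 0, n}];    (* p_c(A) *)
    UnitVector[n, n] . Inverse[ctrb] . pcA];                (* k = [0 ... 0 1] Gamma^-1 p_c(A) *)
ackermann[{{0, 1}, {0, 0}}, {0, 1}, {-1, -2}]   (* {2, 3}; A - b.k then has poles at -1 and -2 *)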
Spectral Approach
$$k = \sum_{i=1}^{q} \alpha_i v_i, \qquad \alpha_i = \frac{\prod_{j=1}^{q} (\lambda_i - \mu_j)}{\beta_i \prod_{j=1,\, j \neq i}^{q} (\lambda_i - \lambda_j)}, \qquad \beta_i = v_i \cdot b$$
Here, λ_i and μ_i are the open-loop system and desired closed-loop system poles, respectively, and the v_i are the associated reciprocal eigenvectors.
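A numerical check of this dyadic construction on a 2-state example (all values illustrative; plain Mathematica rather than package code):
a = {{0, 1}, {-2, -3}}; b = {0, 1};        (* open-loop poles at -1 and -2 *)
mu = {-3, -4};                             (* desired closed-loop poles *)
{lambda, u} = Eigensystem[a];
v = Inverse[Transpose[u]];                 (* rows are the reciprocal (left) eigenvectors *)
beta = v . b;
alpha = Table[Product[lambda[[i]] - mu[[j]], {j, 2}]/
     (beta[[i]] Product[If[j == i, 1, lambda[[i]] - lambda[[j]]], {j, 2}]), {i, 2}];
k = Sum[alpha[[i]] v[[i]], {i, 2}];
Eigenvalues[a - Outer[Times, b, k]]        (* {-4, -3} *)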
Mapping Approach
The state-feedback matrix is given as
$$k = \delta\, X^{-1} \Gamma^{-1}$$
where Γ is the controllability matrix of [A, b],
$$\Gamma = \begin{bmatrix} b & Ab & \cdots & A^{n-1}b \end{bmatrix},$$
X is the triangular Toeplitz matrix with unit diagonal formed from the open-loop characteristic-polynomial coefficients a_1, ..., a_{n-1}, and δ is the row vector of coefficient differences γ_i − a_i. Here, the a_i and γ_i are the coefficients of the open-loop and closed-loop system characteristic polynomials, respectively.
Eigenvector Methods
It is also possible to determine the state-feedback pole-assignment compensator as
$$k = \begin{bmatrix} m_1 & m_2 & \cdots & m_n \end{bmatrix} \begin{bmatrix} u_{c1} & u_{c2} & \cdots & u_{cn} \end{bmatrix}^{-1}$$
where the m_i are randomly chosen scalars and the u_{ci} are the closed-loop system eigenvectors calculated from
$$u_{ci} = m_i \,(A - \mu_i I_n)^{-1} b.$$
Selecting m_i = 1, for example, the state-feedback compensator can be found as
$$k = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix} \begin{bmatrix} u_{c1} & u_{c2} & \cdots & u_{cn} \end{bmatrix}^{-1}.$$
Comparison of Dyadic Methods under Numeric Considerations
[Charts: assignment error (log scale) and computation time in seconds versus system order (2-20) for the Spectral, Mapping, Ackermann's and EVAssign methods.]
Comparison of Dyadic Methods under Symbolic Considerations
[Chart: computation time in seconds versus system order (2-14) for the Mapping, Ackermann's and EV Assign methods.]
Model-Reference Optimal Control
$$J = \int_0^{\infty} \big[\, e^{T} Q\, e + u^{T} R\, u \,\big]\, dt$$
$$e = y_M - y = C_M x_M - C x = [\,C_M \;\; -C\,]\,\tilde{x}$$
[Block diagram: the system and the reference model driven by the same input u, with error e = y_M − y (Model Reference LQR System).]
The resulting optimal feedback controller Ko can be partitioned as
$$K_o = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}$$
where K11 and K21 operate on the reference-model state vector x_M and K12 and K22 operate on the system state vector x.
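The set-up can be sketched for a scalar plant and a scalar reference model using the built-in RiccatiSolve (all numerical values, and the use of RiccatiSolve in place of the package's LQR tools, are illustrative assumptions):
(* augmented state xTilde = {xM, x}; the control acts only on the plant *)
aT = {{-1., 0.}, {0., -2.}};          (* block-diagonal: reference model and plant *)
bT = {{0.}, {1.}};                    (* control enters the plant only *)
cT = {{1., -1.}};                     (* e = yM - y = [CM, -C].xTilde *)
q = 10. Transpose[cT] . cT;           (* error weighting e' Q e *)
r = {{1.}};
p = RiccatiSolve[{aT, bT}, {q, r}];
ko = Inverse[r] . Transpose[bT] . p   (* 1 x 2 gain: first entry acts on xM, second on x *)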
[Block diagram: Model Reference LQR feedback paths, showing the plant (A, B, C) and the reference model (AM, BM, CM) with the gains K11, K12, K21 and K22.]
[Block diagram: Model Reference LQR System Closed-Loop.]
 1
 (1  4s)

 0.6

(1  5s)
G (s )  
 0.35

 (1  5s)
 0.2

 (1  5s)
 1
 (1  s )

 0

M (s )  
 0


 0

0.7
(1  5s)
0.3
(1  5s)
1
(1  4s)
0.4
(1  5s)
0.4
(1  5s)
1
(1  4s)
0.3
(1  5s)
0.7
(1  5s)
0.2 
(1  5s) 

0.35 

(1  5s) 
0.6 

(1  5s) 
1 

(1  4s) 
0
0
1
(1  0.5s )
0
0
1
(1  0.5s)
0
0



0 


0 

1 
(1  s ) 
The weighting matrices used were
$$Q = \mathrm{diag}(0.01,\; 0.01,\; 0.01,\; 0.01,\; 1000,\; 1000,\; 1000,\; 1000)$$
$$R = \mathrm{diag}(1000,\; 1000,\; 1000,\; 1000,\; 0.01,\; 0.01,\; 0.01,\; 0.01)$$
 0
1
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
1
2
3
4
5
Unit-step on reference input 1
1
2
3
4
5
Unit-step on reference input 2
1
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
1
1
2
3
4
2
3
4
5
5
Unit-step on reference input 3
Unit-step on reference input 4
The PID Controller
• In recent years, several researchers have been re-examining the PID controller to determine the limiting Kp, Ki, and Kd parameter values that guarantee a stable closed-loop system; namely,
• Keel and Bhattacharyya
• Ho, Datta, and Bhattacharyya
• Shafei and Shenton
• Astrom and Hagglund
• Munro and Soylemez
The PID Controller
[Block diagram: unity-feedback loop in which the PID controller u = Kp e + Ki ∫e dt + Kd de/dt acts on the plant.]
[Block diagram: test compensator arrangement, with a 2-D test compensator k and state feedback Kx around the plant.]
$$g(s) = \frac{s^3 + 6s^2 + 2s + 1}{s^5 + 3s^4 + 29s^3 + 15s^2 + 3s + 60}$$
[Plot: the test compensator space, Kp from -1.0 to +1.0 and Ki up to +1.0.]
[Plot: the Nyquist plot for Kp = 0.5 and Ki = 0.5.]
[Plot: the admissible PI compensator space.]
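The admissible region can also be reproduced approximately by brute force; the sketch below is plain Mathematica (not the 2-D stability test of the talk) and uses the plant g(s) as reconstructed above:
num = s^3 + 6 s^2 + 2 s + 1;
den = s^5 + 3 s^4 + 29 s^3 + 15 s^2 + 3 s + 60;
(* closed-loop characteristic polynomial for the PI controller Kp + Ki/s *)
stableQ[kp_?NumericQ, ki_?NumericQ] :=
  Max[Re[s /. NSolve[s den + (kp s + ki) num == 0, s]]] < 0;
RegionPlot[stableQ[kp, ki], {kp, -1, 1}, {ki, -1, 1}]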
For the plant
$$g(s) = \frac{27}{(s+1)(s+3)^3} = \frac{27}{s^4 + 10s^3 + 36s^2 + 54s + 27}$$
[3-D plot: the admissible (Kp, Ki, Kd) compensator space, with Kp up to 10, Ki up to 100 and Kd up to 10.]
Design Requirements
• Stability
• Performance
• Robustness
• Simplicity
• Transparency
(listed in order of increasing difficulty)
Acknowledgements
My thanks to Dr Igor Bakshee
of Wolfram Research for his
interest and help in carrying out
this work.
Control Systems Centre
UMIST
D-Stability
[Diagram: the D-stability region D in the left half of the complex plane, bounded by a sector of angle θ (with ζ = sin θ) and shifted a distance d from the imaginary axis.]
The Nyquist Plot Approach
• Here, we detect 5 axis crossings, (-2, +2, +2, -1, -1), where the last is due to the infinite arc on the right arising from the pole at the origin.
The Nyquist Plot Approach
The resulting stability boundary is
[Plot: the stability boundary in the (Kp, Ki) plane, Kp from 0 to 20 and Ki from 0 to 30.]
Note that the origin is not included in the region because the basic system is unstable.
The Nyquist Plot Approach
• Here, with Kp = 5 and Ki = 18 the system is stable, even with an additional gain of k = 1.3134, yielding closed-loop poles at
-0.2519 ± 5.4879i, -1.2320 ± 1.5258i, -0.0161 ± 0.4510i.
Diagonal Dominance Concepts
Various definitions of diagonal dominance exist, namely:
• Rosenbrock's row/column form (R)
• Limebeer's Generalised Diagonal Dominance (L)
• Bryant & Yeung's Fundamental Dominance (Y)
where the conservatism of the resulting dominance criterion reduces as Y < L < R.
Mathematica Code
tfsys = TransferFunction[s,
  {{3/(s+2), 1/(s+2), 1/(s+2)},
   {1/(s+1), 8/(s+4), 1/(s+2)},
   {1/(s+2), 3/(s+2), 10/(s+5)}}]
freqs = logList[0.01, 30, 50];
NyquistArray[tfsys, freqs,
  DominanceCriterion -> Column,
  CircleCriterion -> Ostrowski,
  DominanceSteps -> 1,
  FeedbackGains -> {1, 0.5, 2}]
Nyquist Array Example
$$\begin{bmatrix} \dfrac{3}{s+2} & \dfrac{1}{s+2} & \dfrac{1}{s+2} \\[4pt] \dfrac{1}{s+1} & \dfrac{8}{s+4} & \dfrac{1}{s+2} \\[4pt] \dfrac{1}{s+2} & \dfrac{3}{s+2} & \dfrac{10}{s+5} \end{bmatrix}_T$$
Ostrowski circles are shown in red for gains = {1, 0.5, 2}.
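As a side check on this example, Rosenbrock-style column dominance can be tested at a single frequency in a few lines (plain Mathematica written for illustration; it is not the package's dominance machinery):
gmat[s_] := {{3/(s + 2), 1/(s + 2), 1/(s + 2)},
   {1/(s + 1), 8/(s + 4), 1/(s + 2)},
   {1/(s + 2), 3/(s + 2), 10/(s + 5)}};
columnDominantQ[m_] :=
  And @@ Table[Abs[m[[i, i]]] > Total[Delete[Abs /@ m[[All, i]], i]], {i, Length[m]}];
columnDominantQ[N[gmat[2 I]]]   (* True: each diagonal entry dominates its column at s = 2j *)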
[Figure: the 3 x 3 Nyquist array with Ostrowski circles for the example system.]
Combined Sequential Loop Closing and
Diagonal Dominance Method
• This approach is a new combination of Bryant’s
Sequential Loop Closing Approach with MacFarlane
and Kouvaritakis’ ALIGN Algorithm, Edmunds’
Scaling and Normalization Technique, and
Rosenbrock’s Diagonal Dominance.
• It is particularly appropriate in cases where a simple
controller structure is desired.
• Advantages:
– It can be implemented by closing one loop at a
time.
– Usually, the resulting control scheme is quite
simple and can be easily realized in practice.
Achieving Diagonal Dominance
• Normalization :
– Generates the input-output scaling to be applied to the
system in order to minimize interaction.
– Determines the best input-output pairing for control
purposes.
– Produces good diagonal dominance properties at low and
intermediate frequencies.
– Results are obtained by using simple, wholly real
permutation matrices.
• High frequency decoupling :
– Aims at improving the transient response of the system.
– Emphasis is on frequencies close to the bandwidth,
around which interaction is most severe.
– Results are obtained by making use of wholly real matrices (a rough numerical sketch of such a constant decoupler follows this list).
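A very crude stand-in for that decoupling step (not the ALIGN algorithm itself): take the real part of the inverse frequency response at a chosen frequency and use it as a constant, wholly real compensator, then re-check dominance. Illustrative only, reusing the earlier 3 x 3 example:
gmat[s_] := {{3/(s + 2), 1/(s + 2), 1/(s + 2)},
   {1/(s + 1), 8/(s + 4), 1/(s + 2)},
   {1/(s + 2), 3/(s + 2), 10/(s + 5)}};
w0 = 1.;
k0 = Re[Inverse[gmat[I w0]]];      (* constant decoupler chosen at w0 *)
columnDominantQ[m_] :=
  And @@ Table[Abs[m[[i, i]]] > Total[Delete[Abs /@ m[[All, i]], i]], {i, Length[m]}];
columnDominantQ[gmat[I w0] . k0]   (* tests whether the compensated system is column dominant at w0 *)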
(* GEC PI Controller *)
kp = -0.003; ki = -0.00001;
pPlusi = Together[kp + ki/s]
ipcol = TakeColumns[b4, {1, 1}];
oprow = TakeRows[c, {2, 2}];
ss21 = StateSpace[a, ipcol, oprow];
clss = GenericConnect[TransferFunction[{{1}}], TransferFunction[s, pPlusi], ss21,
  {{2, 1, 3, Negative}, {3, 2}}, {1}, {3}]
tfrl = TransferFunction[s, pPlusi ghigh[[2, 1]]];
RootLocusPlot[tfrl, {k, 0, 0.25}, PlotPoints -> 1000,
  PlotRange -> {{-0.001, 0.0001}, {-0.0005, 0.0005}},
  PoleStyle -> PointSize[0.01],
  Epilog -> Line[{{0, 0}, {-zeta wn, wn Sqrt[1 - zeta^2]}}] /. {wn -> 0.002, zeta -> .690107}]
[Root-locus plot of the PI loop in the region Re(s) = -0.001 to 0.0001.]
(* GEC PI Controller *)
SimulationPlot[clss, UnitStep[t], {t, 4500},
  Sampled -> Period[5], PlotRange -> All,
  GridLines -> Automatic] // Timing
[Closed-loop unit-step response over 0-4500 seconds; the evaluation returned {7.481 Second, ...Graphics...}.]
(* Moving the PI zero closer to the origin + more gain *)
kp = -0.01; ki = -0.000002;
pPlusi = Together[kp + ki/s]
ipcol = TakeColumns[b4, {1, 1}]; oprow = TakeRows[c, {2, 2}];
ss21 = StateSpace[a, ipcol, oprow];
clss = GenericConnect[TransferFunction[{{1}}], TransferFunction[s, pPlusi], ss21,
  {{2, 1, 3, Negative}, {3, 2}}, {1}, {3}]
tfrl = TransferFunction[s, pPlusi ghigh[[2, 1]]];
RootLocusPlot[tfrl, {k, 0, 1}, PlotPoints -> 200,
  PlotRange -> {{-0.003, 0.0001}, {-0.0015, 0.0015}},
  PoleStyle -> PointSize[0.01],
  Epilog -> Line[{{0, 0}, {-zeta wn, wn Sqrt[1 - zeta^2]}}] /. {wn -> 0.002, zeta -> .690107}]
[Root-locus plot of the modified PI loop in the region Re(s) = -0.003 to 0.0001.]
(* Moving the PI zero closer to the origin *)
SimulationPlot[clss, UnitStep[t], {t, 4500},
  Sampled -> Period[5], PlotRange -> All,
  GridLines -> Automatic] // Timing
[Closed-loop unit-step response over 0-4500 seconds; the evaluation returned {7.871 Second, ...Graphics...}.]
Design Procedure - 1
• The Nyquist Array after an initial output scaling of diag{0.00001, 0.001, 0.001, 0.1} looks like:
Design Procedure - 2
• The Nyquist Array after swapping the first two outputs (calorific value of fuel gas and bedmass) and closing the bedmass/char off-take loop is:
Design Procedure - 3
• The Nyquist Array of the 3 x 3 subsystem after normalisation and high-frequency decoupling at w = 0.001 rad/sec is (where the outputs are pressure, temperature and calorific value of fuel gas):
Design Summary
• Implement PI controller on bedmass/char-extraction loop.
• Scale inputs and outputs, to normalize them.
• Use ALIGN Algorithm for the remaining 3-input 3-output subsystem.
• Design a PI controller for the fast Calorific Value Loop.
• Design a PI controller for the fast Pressure Loop.
• Design a Lag-Lead controller for the remaining slow Temperature Loop.
The control scheme resulting from this approach is as follows:
[Block diagram: the controller (bedmass PI, cv PI, pressure PI and temperature control, plus a constant decoupling block) acting on the gasifier through input scaling and output scaling.]
Constant Pre-compensator: block-diagonal, with a unit gain on the bed-mass loop and a 3 x 3 block whose entries have magnitudes
$$\begin{bmatrix} 0.65 & 0.54 & 0.66 \\ 0.75 & 1.19 & 3.22 \\ 0.06 & 0.78 & 1.29 \end{bmatrix}.$$
Constant Post-compensator
$$\mathrm{diag}(0.001,\; 0.0001,\; 0.00001,\; 0.05)$$
Dynamic Controller
$$\mathrm{diag}\!\left( 100 + \frac{0.0001}{s},\;\; 0.01 + \frac{1}{s},\;\; 0.02 + \frac{1}{s},\;\; \frac{0.0015\,(s + 0.0005)}{s\,(s + 0.001)} \right)$$
Model Simplification
[Plots: frequency responses of the original high-order model (green) and its reduced-order approximation.]
[Output: a reduced rational transfer function with a 7th-order numerator and 8th-order denominator, with coefficients spanning roughly 20 orders of magnitude; the reduction took about 0.99 seconds.]
Model Simplification
[Root-locus diagram of g1,1(s).]
Theorem: By using just the first input of a given MIMO system, it is almost always possible to arbitrarily assign ℓ1 self-conjugate poles of the system, and make these poles uncontrollable from the other inputs, provided that the system [A, b1, C] has ℓ1 controllable and observable poles, where b1 is the first column of the input matrix B, and
$$\ell_1 = \left\lceil \frac{n}{m} \right\rceil.$$
This result can be compared with a previous result developed by Munro and Novin-Hirbod (1979) for the case of dynamic output feedback, where the degree r of the necessary compensator is given by
$$r = \left\lceil \frac{n - (m + \ell - 1)}{\max(m, \ell)} \right\rceil.$$