Brief Review of Control Theory


E0397 Lecture.

Some slides based on a Columbia University course by the authors of the book Feedback Control of Computing Systems (Hellerstein, Diao, Parekh, Tilbury).

A Feedback Control System

[Block diagram: reference input → error → controller → control input → target system → measured output, fed back through a transducer]

Components
• Target system: what is controlled
• Controller: exercises control
• Transducer: translates measured outputs

Data
• Reference input: the objective
• Control input: manipulated to affect the output
• Disturbance input: other factors that affect the target system
• Transduced output: the result of manipulation

Given the target system and the transducer, control theory finds a controller that adjusts the control input to achieve a given measured output in the presence of disturbances.

Control System Examples

• AC temperature control
  – Controlled variable: temperature
  – Control variables: fan speed, time for which the compressor is on
• Car cruise control
  – Controlled variable: car speed
  – Control variables: accelerator pedal angle, braking

A Feedback Control System: Cruise Control

[Block diagram]
• Reference input: desired speed
• Error: desired speed minus measured speed
• Controller: embedded system that does cruise control; outputs a pedal angle setting
• Actuator: the entity that implements the control; here, a mechanical device that changes the pedal angle
• Target system: the car
• Disturbance inputs: road surface, gradient, random variation
• Sensor: measures the output; here, a speed calculator using distance travelled and time taken (car clock)
• Feedback: the measured speed is fed back to compute the error

A virtual machine as a FBCS

[Block diagram]
• Reference input: desired response time
• Error: desired response time minus smoothed measured response time
• Controller: a process in Domain 0 that periodically changes the CPU cap value of the guest domain
• Control input: CPU cap value
• Actuator: the entity that implements the control; here, the Xen hypervisor scheduler
• Target system: Xen VM (guest domain) with applications running
• Measured output: response time (of the applications)
• Disturbance inputs: other virtual machines doing interfering work, random variation
• Transducer: a smoothing "filter" that produces a smoothed response time
• Sensor: measures the output (the time taken)
• Feedback: the smoothed response time is fed back to compute the error


Control System Goals

• Reference tracking
  – Ensure that the measured output "follows" (tracks) a desired target level
• Disturbance rejection
  – Maintain the measured output at a given stable level even in the presence of "disturbances"
• Optimization
  – No reference input may be given; instead, maintain the measured output and control input at "optimal" levels
    · Minimize petrol consumption while maximizing speed (not jointly achievable in reality!)
    · Minimize power consumed by the CPU while maximizing performance

Control System: Basic Working

r(k) → e(k) → u(k) → x(k) → y(k)

• The "control loop" is executed every T time units
• y(k): value of the measured output at time instant k
• u(k): value of the control input at time instant k
• r(k): value of the "reference" at time instant k
• e(k): error between the reference and the measured value, e(k) = r(k) − y(k)
• The next value of the control input, u(k+1), is computed based on the feedback e(k)
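One iteration of this loop can be sketched in a few lines of code. The update rule and the gain below are illustrative assumptions (the slides derive actual control laws later), not part of the lecture:

```python
# Sketch of one generic control-loop iteration, executed every T time units:
# sample y(k), compute e(k) = r(k) - y(k), derive the next input u(k+1).
# The integral-style update and gain value are demo assumptions.

def control_step(r_k, y_k, u_k, gain=0.5):
    """One iteration of the feedback control loop."""
    e_k = r_k - y_k            # error between reference and measured output
    u_next = u_k + gain * e_k  # next control input, computed from the feedback
    return e_k, u_next

# Example: reference 10.0, measured output 8.0, current control input 1.0
e, u = control_step(10.0, 8.0, 1.0)
print(e, u)  # error 2.0, next control input 2.0
```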

Determining Change in Control Input

• The change in u should depend on the relationship between y and u
• The relationship should be known
  – If it is known, then why feedback? Why not:
    · y = f(u)  (car speed as a function of accelerator pedal angle)
    · u = f⁻¹(y): for a desired reference, just calculate the required control input using the inverse
  – This is called feed-forward


Feedforward (model based)

• Problems:
  – Needs an accurate model
    · If the model is wrong, the control input can be totally wrong (imagine a wrong model between accelerator pedal angle and car speed)
  – Does not take into account time-dependent behaviour
    · E.g. how long will the system take to attain the desired value? If the car is at speed S1 and we set the cruise control to speed S2, setting the pedal angle to f⁻¹(S2) will not immediately take it to speed S2
  – Cannot take care of "disturbances"
    · E.g. if environmental conditions change, the model may no longer be applicable. What if the car is climbing a slope (and the pedal-angle model was measured on flat ground)?

Feedback addresses many of these problems.
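The slope problem can be seen in a toy numerical sketch. All numbers and the linear "car model" here are hypothetical, chosen only to show pure feedforward failing under a disturbance the model does not capture:

```python
# Toy illustration: feedforward via model inversion. Assume the calibrated
# (flat-ground) model is speed = 2 * pedal_angle; all values are made up.

def model_inverse(target_speed):
    # u = f^-1(y): invert the flat-ground model to pick the pedal angle
    return target_speed / 2.0

def plant(pedal_angle, slope_drag=0.0):
    # The real car: a slope subtracts speed the model knows nothing about
    return 2.0 * pedal_angle - slope_drag

target = 60.0
u = model_inverse(target)

print(plant(u))                   # flat ground: 60.0, hits the target exactly
print(plant(u, slope_drag=10.0))  # on a slope: 50.0, falls short, no correction
```

Because there is no feedback, the controller never learns it is 10 units short; a feedback loop would see the error and keep pushing the pedal further.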

Feedback Control

• Rather than using only some offline model, also take into account the immediately measured effect of the value of your control variable
  – If the car is at S1 and we want to go to S2, with S1 < S2 (positive error), the pedal must be pushed further down
• We still need some idea of the relationship between "input" and "output"
  – Should the pedal be pushed or released? If pushed, by what angle?
    · Intuitively, for a larger S2 − S1, the angle of pushing must be larger
  – Another example: if e(k) = r(k) − y(k) is positive, should u(k) increase or decrease?
  – If u is CPU frequency and y is response time: if e(k) is positive, what should u do?
• Increase/decrease by how much?
  – If we increase/decrease too much, the measured output will keep oscillating above and below the reference: unstable behaviour

…Feedback control

• Since we want control to work in a timely manner (not only in "steady state"), the relationship should be time dependent
• The new value of y, i.e. y(k+1), will generally depend on y(k) and u(k)
  – The speed of the car at this instant depends on its speed at the beginning of the previous interval and on the pedal angle during the previous interval
  – Similarly, the queueing delay at this interval depends on the queueing delay at the previous instant and on the CPU frequency setting of the previous interval

System Modeling

• Need a model to capture this relationship over time
  – Can be first order (depending on 1 previous value) or higher order (depending on more than 1 previous value)
  – First-order linear model: y(k+1) = a·y(k) + b·u(k)
• Modeling can be done by "first principles" or "analytical modeling"
  – E.g. if y is the response time of a queueing system and u is the service time, it may be possible to find an equation relating y(k+1) to y(k) and u(k)
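The first-order model above is easy to simulate. A minimal sketch, with illustrative constants a = 0.8, b = 0.5 (not from the slides) and a constant input:

```python
# Simulating the first-order linear model y(k+1) = a*y(k) + b*u(k)
# under a constant control input u. The constants are demo assumptions.

def simulate(a, b, y0, u, steps):
    ys = [y0]
    for _ in range(steps):
        ys.append(a * ys[-1] + b * u)  # y(k+1) = a*y(k) + b*u(k)
    return ys

ys = simulate(0.8, 0.5, 0.0, 2.0, 30)
# With |a| < 1 the output settles toward the steady state b*u/(1 - a) = 5.0
print(ys[-1])
```

This also shows the time-dependent behaviour that feedforward ignores: the output approaches its steady-state value gradually rather than jumping there in one step.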

System Modeling

• If "first principles" modeling is difficult: build an empirical model
  – Run controlled experiments on the target system, record the values of y(k) and u(k), and run regression to find a and b
  – How to do this correctly is a field called "system identification"
• Main issues:
  – The relationship may not actually be linear
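The regression step can be sketched end to end: generate noisy (y, u) data from a known model, then recover a and b by ordinary least squares. The true values a = 0.8, b = 0.5 and the noise level are assumptions for the demo, and this is only the simplest possible identification procedure, not the full methodology of system identification:

```python
# Minimal system-identification sketch: fit y(k+1) = a*y(k) + b*u(k)
# to experimental data by ordinary least squares (pure Python).
import random

random.seed(0)
a_true, b_true = 0.8, 0.5  # "unknown" plant, assumed for the demo

# Controlled experiment: excite the system with random inputs, record y and u
ys, us = [0.0], []
for _ in range(200):
    u = random.uniform(0.0, 2.0)
    us.append(u)
    ys.append(a_true * ys[-1] + b_true * u + random.gauss(0.0, 0.01))

# Least squares: solve the 2x2 normal equations for (a, b)
Syy  = sum(y * y for y in ys[:-1])
Suu  = sum(u * u for u in us)
Syu  = sum(y * u for y, u in zip(ys[:-1], us))
Sy1y = sum(y1 * y for y1, y in zip(ys[1:], ys[:-1]))
Sy1u = sum(y1 * u for y1, u in zip(ys[1:], us))

det = Syy * Suu - Syu * Syu
a_hat = (Sy1y * Suu - Sy1u * Syu) / det
b_hat = (Sy1u * Syy - Sy1y * Syu) / det

print(a_hat, b_hat)  # close to the true 0.8 and 0.5
```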

Non-linear relationships

[Plots: number in system N vs buffer size K, for arrival/service rates λ, μ; response time vs utilization]
• E.g. number in a queueing system vs buffer size
• The relationship can be "linearized" in certain regions; the centre of such a region is called the operating point

Operating Point

• Operating point: the centre of the region in which the linearized model applies
• Operating range: the values of N, K for which the model applies
• Offset value: the difference from the operating point
• The linear relationship is defined between y and u as "offsets" from the operating point, e.g. y = N − N₀ and u = K − K₀ for an operating point (N₀, K₀)

Control “Laws”

• Once the system model is made, we need to design "control laws"
• A control law relates the error e(k) to the new value of the control input, i.e. it determines u(k)
• Controllers should have good SASO properties:
  – Stability – Accuracy – Settling time – Overshoot
• There is a deeply mathematical theory for deriving these laws:
  – Transfer functions, poles
  – Estimating the SASO properties of control laws
  – Using this understanding to design good control laws which have good SASO properties
• We cannot go into the details here

Properties of Control Systems

[Plots illustrating stability, accuracy, short settling time, small overshoot, and an unstable system]

Types of Controllers (Control Laws)

• Proportional control law
  – u(k) = Kp · e(k)  (Kp is a constant, called the controller gain)
  – Can be made stable, with short settling time and small overshoot, but may be inaccurate: a steady-state error can remain
• Integral control law
  – u(k) = u(k−1) + KI · e(k) = u(k−2) + KI · e(k−1) + KI · e(k) = u(0) + KI · [e(1) + e(2) + … + e(k)]  (KI is the controller gain)
  – Keeps reacting to previous errors as well
  – It can be shown that this law leads to zero steady-state error
  – Has a longer settling time
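The accuracy difference between the two laws can be demonstrated on the first-order plant from the modeling slides. The plant constants (a = 0.8, b = 0.5) and the gains Kp = 0.3, KI = 0.1 are illustrative assumptions chosen so both loops are stable:

```python
# Proportional vs integral control of the plant y(k+1) = a*y(k) + b*u(k).
# Constants and gains are demo assumptions, picked for stable loops.

def run(law, a=0.8, b=0.5, r=1.0, steps=200):
    y, u = 0.0, 0.0
    for _ in range(steps):
        y = a * y + b * u  # plant responds to the previous control input
        e = r - y          # e(k) = r(k) - y(k)
        u = law(e, u)      # control law picks the next input
    return r - y           # error remaining after the run

def proportional(e, u, Kp=0.3):
    return Kp * e          # u(k) = Kp * e(k)

def integral(e, u, Ki=0.1):
    return u + Ki * e      # u(k) = u(k-1) + Ki * e(k)

print(abs(run(proportional)))  # a large steady-state error remains
print(abs(run(integral)))      # error driven to (essentially) zero
```

This matches the slides' claims: the P law settles quickly but inaccurately, while the I law keeps accumulating past errors until the error itself is zero, at the cost of a slower response.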

Conclusion

• The Padala paper uses the integral control law in most places
• It uses empirical methods for generating the system model
• These slides gave just enough of the required background
• Control theory is a huge field with many books (mainly used in Electrical, Mechanical, and Chemical engineering)
• Read books/papers if further interested