
GRADIENT ALGORITHMS for COMMON LYAPUNOV FUNCTIONS

Daniel Liberzon

Univ. of Illinois at Urbana-Champaign, U.S.A.

Roberto Tempo

IEIIT-CNR, Politecnico di Torino, Italy

CDC ’03

PROBLEM

Given Hurwitz matrices A_1, …, A_N, find a matrix P = P^T > 0 such that

    A_i^T P + P A_i < 0,   i = 1, …, N

Motivation: stability of uncertain and switched systems

Analytical results:
• hard to come by (beyond special cases)
• require special structure

LMI methods:
• can handle large finite families
• provide limited insight

Our approach:
• gradient descent iterations
• handle inequalities sequentially

Goal: algorithmic approach with theoretical insight
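The feasibility condition is easy to check numerically. A minimal NumPy sketch (the helper `is_common_lyapunov` and the test matrices are our illustration, not from the slides):

```python
import numpy as np

def is_common_lyapunov(P, As, tol=1e-9):
    """Check whether P = P^T > 0 and A^T P + P A < 0 for every A in As.
    Illustrative helper, not part of the authors' algorithm."""
    P = np.asarray(P, dtype=float)
    if not np.allclose(P, P.T):
        return False
    if np.linalg.eigvalsh(P).min() <= tol:       # P must be positive definite
        return False
    for A in As:
        Q = A.T @ P + P @ A
        if np.linalg.eigvalsh(Q).max() >= -tol:  # each inequality must be strict
            return False
    return True

# P = I works whenever every A + A^T is negative definite (hypothetical data)
A1 = np.array([[-1.0, 0.2], [0.0, -1.0]])
A2 = np.array([[-2.0, 0.0], [0.3, -1.0]])
print(is_common_lyapunov(np.eye(2), [A1, A2]))  # prints True
```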

MOTIVATING EXAMPLE

In the special case when the matrices A_1, …, A_N commute pairwise, a quadratic common Lyapunov function exists and can be constructed by chaining Lyapunov equations (Narendra & Balakrishnan, 1994):

    A_1^T P_1 + P_1 A_1 = -I,    A_i^T P_i + P_i A_i = -P_{i-1} (i = 2, …, N),    P := P_N

Nonlinear extensions: Shim et al. (1998), Vu & L (2003)
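The commuting-case construction can be sketched numerically by chaining Lyapunov solves. A small NumPy illustration (the Kronecker-based solver and the diagonal test matrices are our assumptions, not the slides'):

```python
import numpy as np

def solve_lyap(A, Q):
    """Solve A^T P + P A = Q by vectorization (fine for small n): with
    row-major vec, vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P)."""
    n = A.shape[0]
    M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

def commuting_clf(As):
    """Narendra & Balakrishnan chain: P_1 from A_1^T P_1 + P_1 A_1 = -I,
    then A_i^T P_i + P_i A_i = -P_{i-1}; for pairwise commuting Hurwitz
    matrices the final P is a common Lyapunov matrix."""
    P = np.eye(As[0].shape[0])
    for A in As:
        P = solve_lyap(A, -P)
    return P

# diagonal matrices commute; purely illustrative data
A1, A2 = np.diag([-1.0, -2.0]), np.diag([-3.0, -0.5])
P = commuting_clf([A1, A2])
for A in (A1, A2):
    assert np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0
```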

ITERATIVE ALGORITHMS: PRIOR WORK

Algebraic inequalities: Agmon, Motzkin, Schoenberg (1954); Polyak (1964); Yakubovich (1966)
Matrix inequalities: Polyak & Tempo (2001); Calafiore & Polyak (2001)

GRADIENT ALGORITHMS: PRELIMINARIES

f – convex differentiable real-valued functional on the space of symmetric matrices P = P^T

Examples (A a fixed Hurwitz matrix):

1. f(P) = λ_max(A^T P + P A)   (need this to be a simple eigenvalue)

2. f(P) = ||Q − π(Q)||, Q := A^T P + P A   (||·|| is the Frobenius norm, π is the projection onto the matrices Q ≤ 0)
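Both example functionals can be evaluated with one eigendecomposition. A NumPy sketch (function names and the test matrix are our choices):

```python
import numpy as np

def f_eig(P, A):
    # Example 1: f(P) = lambda_max(A^T P + P A), convex in P
    return np.linalg.eigvalsh(A.T @ P + P @ A)[-1]

def f_dist(P, A):
    # Example 2: Frobenius distance from Q = A^T P + P A to the cone of
    # negative-semidefinite matrices; the projection pi clips the positive
    # eigenvalues of Q to zero
    Q = A.T @ P + P @ A
    w, V = np.linalg.eigh(Q)
    proj = V @ np.diag(np.minimum(w, 0.0)) @ V.T
    return np.linalg.norm(Q - proj)

# f_eig(P, A) < 0 iff A^T P + P A < 0; f_dist(P, A) = 0 iff A^T P + P A <= 0
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a Hurwitz matrix of our choosing
print(f_eig(np.eye(2), A), f_dist(np.eye(2), A))
```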

GRADIENT ALGORITHMS: PRELIMINARIES

f – convex differentiable real-valued functional on the space of symmetric matrices P = P^T, convex in P for each fixed A

Gradients for the two examples:

1. f(P) = λ_max(A^T P + P A):   ∇f(P) = A l l^T + l l^T A^T   (l is a unit eigenvector of A^T P + P A with eigenvalue λ_max)

2. f(P) = ||Q − π(Q)||, Q := A^T P + P A:   ∇f(P) = A G + G A^T,   G := (Q − π(Q)) / ||Q − π(Q)||
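The eigenvalue-gradient formula can be checked against a finite difference. A NumPy sketch (the matrix and the direction are our hypothetical data; it assumes the top eigenvalue is simple, as the slides require):

```python
import numpy as np

def grad_f_eig(P, A):
    """Gradient of f(P) = lambda_max(A^T P + P A) w.r.t. P:
    grad f = A l l^T + l l^T A^T, with l a unit eigenvector of
    A^T P + P A for the (simple) top eigenvalue."""
    w, V = np.linalg.eigh(A.T @ P + P @ A)
    l = V[:, -1:]
    return A @ l @ l.T + l @ l.T @ A.T

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz, top eigenvalue simple
P = np.eye(2)
H = np.array([[0.3, -0.1], [-0.1, 0.7]])   # a symmetric direction
f = lambda X: np.linalg.eigvalsh(A.T @ X + X @ A)[-1]
eps = 1e-6
fd = (f(P + eps * H) - f(P - eps * H)) / (2 * eps)
# the directional derivative <grad f, H> matches the central difference
print(fd, np.sum(grad_f_eig(P, A) * H))
```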

GRADIENT ALGORITHMS: DETERMINISTIC CASE

{A_1, …, A_N} – finite family of Hurwitz matrices
{i_k} – index sequence that visits each index infinitely many times
P_0 – arbitrary symmetric matrix

Gradient iteration (α_k – suitably chosen stepsize):

    P_{k+1} = P_k − α_k ∇f_{i_k}(P_k)   if f_{i_k}(P_k) > 0,   P_{k+1} = P_k otherwise

Theorem: A solution P, if it exists, is found in a finite number of steps

Idea of proof: the distance from P_k to the solution set decreases at each correction step
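The deterministic iteration can be sketched with the eigenvalue functional and a Polyak-type stepsize. In this NumPy sketch the round-robin index sequence cycles through the family; the margin, stepsize rule, starting point, and example matrices are our illustrative choices, not the paper's exact schedule:

```python
import numpy as np

def f_and_grad(P, A):
    # f(P) = lambda_max(A^T P + P A) and its gradient A l l^T + l l^T A^T
    w, V = np.linalg.eigh(A.T @ P + P @ A)
    l = V[:, -1:]
    return w[-1], A @ l @ l.T + l @ l.T @ A.T

def deterministic_gradient(As, P0, margin=0.1, max_iter=100000):
    """Cycle through the family; on a violated inequality take a
    Polyak-type correction step aiming at the level f = -margin."""
    P = P0.copy()
    for k in range(max_iter):
        A = As[k % len(As)]                 # visits each index infinitely often
        f, g = f_and_grad(P, A)
        if f > -margin:
            P = P - (f + margin) / np.linalg.norm(g) ** 2 * g
        elif all(f_and_grad(P, B)[0] <= -margin for B in As):
            return P                        # every inequality satisfied
    return P

A1 = np.array([[-1.0, 1.0], [0.0, -1.0]])   # hypothetical Hurwitz pair
A2 = np.array([[-1.0, 0.0], [1.0, -1.0]])
P = deterministic_gradient([A1, A2], np.diag([1.0, 100.0]))
assert all(f_and_grad(P, A)[0] < 0 for A in (A1, A2))
```

Note that once f_i(P) < 0 for one Hurwitz A_i, positive definiteness of P is automatic, so no separate P > 0 constraint is enforced.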

GRADIENT ALGORITHMS: PROBABILISTIC CASE

{A_q : q ∈ Q} – compact (possibly infinite) family
q_k – picked using a probability distribution on Q s.t. every relatively open subset has positive measure

Gradient iteration (randomized version):

    P_{k+1} = P_k − α_k ∇f_{q_k}(P_k)   if f_{q_k}(P_k) > 0,   P_{k+1} = P_k otherwise

Theorem: A solution P, if it exists, is found in a finite number of steps with probability 1

Idea of proof: still get closer to the solution set with each correction step; while a violated inequality remains, a correction step is executed with probability 1
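A randomized sketch over an interval family with one uncertain entry: each step samples a random member and corrects only if its inequality is violated. The family, the uniform sampling distribution, the margin, and the stepsize are our illustrative assumptions:

```python
import numpy as np

def randomized_gradient(sample_A, rng, P0, margin=0.5, max_iter=20000):
    """Randomized iteration: draw A_q at random (the distribution must put
    positive mass on every open subset of the parameter set) and take a
    Polyak-type correction step whenever f(P) = lambda_max(A^T P + P A)
    exceeds -margin."""
    P = P0.copy()
    for _ in range(max_iter):
        A = sample_A(rng)
        w, V = np.linalg.eigh(A.T @ P + P @ A)
        if w[-1] > -margin:
            l = V[:, -1:]
            g = A @ l @ l.T + l @ l.T @ A.T
            P = P - (w[-1] + margin) / np.linalg.norm(g) ** 2 * g
    return P

# interval family of upper-triangular Hurwitz matrices: the off-diagonal
# entry ranges over [0, 3] (a toy family of ours, sampled uniformly)
def sample_A(rng):
    return np.array([[-1.0, rng.uniform(0.0, 3.0)], [0.0, -1.0]])

rng = np.random.default_rng(1)
P = randomized_gradient(sample_A, rng, np.eye(2))
for c in (0.0, 3.0):                        # spot-check the two vertices
    A = np.array([[-1.0, c], [0.0, -1.0]])
    assert np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0
```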

SIMULATION EXAMPLE

Interval family of triangular Hurwitz matrices (the deterministic gradient runs over the vertex inequalities):
• smaller vertex set: 10,000 iterations (a few seconds)
• larger vertex set: 10,000,000 iterations (a few hours)

Randomized gradient gives faster convergence
Both are quite easy to program
Compare: quadstab (MATLAB) gets stuck as the problem size grows
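For an interval family, the deterministic run sweeps the vertex inequalities, and the number of vertices grows exponentially with the number of uncertain entries, which is consistent with the jump in iteration counts. A small enumeration sketch (the bounds are toy data of ours):

```python
import itertools
import numpy as np

def interval_vertices(lower, upper):
    """Yield the vertex matrices of an interval family: each uncertain
    entry set independently to its lower or upper bound (2^m vertices
    for m uncertain entries)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    idx = [(i, j) for i in range(lower.shape[0])
           for j in range(lower.shape[1]) if lower[i, j] != upper[i, j]]
    for bits in itertools.product((0, 1), repeat=len(idx)):
        V = lower.copy()
        for (i, j), b in zip(idx, bits):
            if b:
                V[i, j] = upper[i, j]
        yield V

# two uncertain entries in a lower-triangular family -> 4 vertices
low = np.array([[-1.0, 0.0], [0.0, -2.0]])
high = np.array([[-1.0, 0.0], [1.0, -1.0]])
print(len(list(interval_vertices(low, high))))  # prints 4
```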

CONCLUSIONS

Contribution:
• Gradient iteration algorithms for solving simultaneous Lyapunov matrix inequalities
• Deterministic convergence for finite families, probabilistic convergence for infinite families

Open issues:
• performance comparison of different methods
• addressing existence of solutions
• optimal schedule of iterations
• additional requirements on the solution