General Nonlinear Programming (NLP) Software

CAS 737 / CES 735
Kristin Davies, Hamid Ghaffari, Alberto Olvera-Salazar, Voicu Chis
January 12, 2006
Outline

- Intro to NLP
- Examination of:
  - IPOPT
  - PENNON
  - CONOPT
  - LOQO
  - KNITRO
- Comparison of Computational Results
- Conclusions
Intro to NLP

The general problem:

    min  f(x)
    s.t. h_i(x) = 0,  i ∈ I = {1, ..., p}
         g_j(x) ≤ 0,  j ∈ J = {1, ..., m}        (NLP)
         x ∈ C

where x ∈ R^n, C ⊆ R^n is a certain set, and f, h_i, g_j are functions defined on C.

Either the objective function or some of the constraints may be nonlinear.
Intro to NLP (cont’d…)

Recall:
- The feasible region of any LP is a convex set.
- If the LP has an optimal solution, there is an extreme point of the feasible set that is optimal.

However:
- Even if the feasible region of an NLP is a convex set, the optimal solution might not be an extreme point of the feasible region.
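A toy computation (my own example, not from the slides) makes this concrete:

```python
# Toy illustration: minimize f(x) = (x - 0.5)^2 over the convex feasible
# interval [0, 2].  Unlike an LP, the minimizer is the interior point
# x* = 0.5, not an extreme point (0 or 2) of the interval.
def f(x):
    return (x - 0.5) ** 2

# crude grid search over the feasible region
xs = [2.0 * i / 1000 for i in range(1001)]
x_star = min(xs, key=f)
print(x_star)  # 0.5, strictly inside the interval
```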
Intro to NLP (cont’d…)

Some major approaches for NLP:
- Interior Point Methods: use a log-barrier function.
- Penalty and Augmented Lagrange Methods: use the idea of a penalty to transform a constrained problem into a sequence of unconstrained problems.
- Generalized Reduced Gradient (GRG): uses a basic descent algorithm.
- Successive Quadratic Programming (SQP): solves a quadratic approximation at every iteration.
Summary of NLP Solvers

- Augmented Lagrangian Methods: PENNON
- Interior Point Methods: KNITRO (trust region); IPOPT, LOQO (line search)
- Reduced Gradient Methods: CONOPT
IPOPT SOLVER (Interior Point OPTimizer)

Creators:
- Andreas Wächter and L.T. Biegler at CMU (~2002)

Aims:
- Solver for large-scale nonlinear optimization problems

Applications:
- General nonlinear optimization
- Process engineering, DAE/PDE systems, process design and operations, nonlinear model predictive control, design under uncertainty
IPOPT SOLVER (cont’d…)

Input Format:
- Can be linked to Fortran and C code; MATLAB and AMPL interfaces.

Language / OS:
- Fortran 77; C++ (recent version IPOPT 3.x)
- Linux/UNIX platforms and Windows

Commercial/Free:
- Released as open-source code under the Common Public License (CPL).
- Available from the COIN-OR repository.
IPOPT SOLVER (cont’d…)

Key Claims:
- Global convergence by using a line search: finds a KKT point, or a point that minimizes infeasibility (locally).
- Exploits exact second derivatives: via AMPL (automatic differentiation); if not available, uses a quasi-Newton (BFGS) approximation.
- Exploits sparsity of the KKT matrix.
- IPOPT has a version to solve problems with MPEC constraints (IPOPT-C).
IPOPT SOLVER (cont’d…)

Algorithm:
- Interior point method with a novel line-search filter.

Optimization problem:

    min_{x ∈ R^n}  f(x)
    s.t.  c(x) = 0
          x ≥ 0

The bounds are replaced by a logarithmic barrier term:

    min_{x ∈ R^n}  φ_μ(x) = f(x) − μ Σ_i log(x^(i))
    s.t.  c(x) = 0

The method solves a sequence of barrier problems for decreasing values of μ (the outer loop).
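This outer loop can be sketched on a toy bound-constrained problem (my own hypothetical example; for this separable quadratic each barrier subproblem has a closed-form minimizer, which a real solver would instead find by Newton iterations):

```python
import math

# Toy problem:  min (x1 - 1)^2 + (x2 + 0.5)^2   s.t.  x1, x2 >= 0,
# with solution x* = (1, 0).  The barrier subproblem for one coordinate,
#   min (x - a)^2 - mu * log(x)  over x > 0,
# has the closed-form minimizer  x(mu) = (a + sqrt(a^2 + 2*mu)) / 2,
# obtained by setting the derivative 2(x - a) - mu/x to zero.
def barrier_minimizer(a, mu):
    return (a + math.sqrt(a * a + 2.0 * mu)) / 2.0

mu = 1.0
for _ in range(12):                 # outer loop: shrink the barrier parameter
    x = [barrier_minimizer(1.0, mu), barrier_minimizer(-0.5, mu)]
    mu *= 0.1
print(x)  # approaches (1, 0) as mu -> 0
```

The iterates trace the central path: they stay strictly positive for every μ > 0 and converge to the bound-constrained solution as μ → 0.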

IPOPT SOLVER (cont’d…)

Algorithm (for a fixed value of μ):
- Solve the barrier problem:
  - Search direction (primal-dual IP): use a Newton method to solve the primal-dual equations.
  - Hessian approximation (BFGS update).
- Line search (filter method).
- Feasibility restoration phase.
IPOPT SOLVER (cont’d…)

Inner loop: solve the barrier problem

    min_{x ∈ R^n}  φ_μ(x) = f(x) − μ Σ_i log(x^(i))
    s.t.  c(x) = 0

Optimality conditions of the barrier NLP:

    ∇f(x) + ∇c(x)λ − v = 0
    c(x) = 0
    XVe − μe = 0

At a Newton iteration (x_k, λ_k, v_k), the search direction is the solution of the linear system

    [ H_k         ∇c(x_k)   −I  ] [ d_k^x ]      [ ∇f(x_k) + ∇c(x_k)λ_k − v_k ]
    [ ∇c(x_k)^T   0          0  ] [ d_k^λ ]  = − [ c(x_k)                     ]
    [ V_k         0         X_k ] [ d_k^v ]      [ X_k V_k e − μe             ]

where H_k = ∇²_xx L(x_k, λ_k) and the dual variables satisfy v = μ X^{−1} e. The core of the algorithm is the solution of this linear system.
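One step of this inner-loop solve can be sketched on a toy problem (my own example, using plain dense Gaussian elimination rather than IPOPT's sparse symmetric factorization):

```python
# One primal-dual Newton step for the toy problem
#   min x1^2 + x2^2   s.t.  x1 + x2 - 2 = 0,  x >= 0,
# assembled from the KKT system above.  Here H = 2*I and grad c = (1, 1).
mu = 0.1
x = [0.5, 1.0]          # current strictly positive primal iterate
lam = 0.0               # equality multiplier lambda_k
v = [1.0, 1.0]          # bound multipliers v_k

H = [[2.0, 0.0], [0.0, 2.0]]
gc = [1.0, 1.0]                         # gradient of c(x) = x1 + x2 - 2
grad_f = [2.0 * x[0], 2.0 * x[1]]

# 5x5 Newton system in the unknown order (dx1, dx2, dlam, dv1, dv2)
A = [
    [H[0][0], H[0][1], gc[0], -1.0, 0.0],
    [H[1][0], H[1][1], gc[1], 0.0, -1.0],
    [gc[0],   gc[1],   0.0,   0.0,  0.0],
    [v[0],    0.0,     0.0,   x[0], 0.0],
    [0.0,     v[1],    0.0,   0.0,  x[1]],
]
rhs = [
    -(grad_f[0] + gc[0] * lam - v[0]),
    -(grad_f[1] + gc[1] * lam - v[1]),
    -(x[0] + x[1] - 2.0),
    -(x[0] * v[0] - mu),
    -(x[1] * v[1] - mu),
]

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][c] * sol[c] for c in range(r + 1, n))) / M[r][r]
    return sol

d = solve(A, rhs)
x_new = [x[i] + d[i] for i in range(2)]   # full Newton step, no line search
print(x_new)  # strictly positive, and x1 + x2 = 2 holds exactly (c is linear)
```

A real implementation would follow this step with the filter line search and a fraction-to-the-boundary rule to keep the iterates strictly positive.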
IPOPT SOLVER (cont’d…)

Algorithm (for a fixed value of μ):
- Line search (filter method): a trial point is accepted if it improves feasibility or if it improves the barrier function.

      x_{k+1} = x_k + α_k d_k^x
      v_{k+1} = v_k + α_k^v d_k^v

  Accept if θ(x_k(α_k)) ≤ θ(x_k) or φ_μ(x_k(α_k)) ≤ φ_μ(x_k), where θ measures the constraint violation and x_k(α_k) = x_k + α_k d_k^x.

- Assumes Newton directions are “good”, especially when using exact 2nd derivatives.
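In pseudocode form, the acceptance test reduces to the following (a simplified sketch of my own; IPOPT's actual filter also enforces sufficient-decrease margins and keeps a history of previously visited (θ, φ) pairs):

```python
# Simplified filter acceptance test: theta = constraint violation ||c(x)||,
# phi = barrier objective.  A trial step is accepted if it improves at
# least one of the two measures.
def acceptable(theta_trial, phi_trial, theta_current, phi_current):
    return theta_trial < theta_current or phi_trial < phi_current

print(acceptable(0.5, 3.0, 1.0, 2.0))  # True: feasibility improved
print(acceptable(2.0, 3.0, 1.0, 2.0))  # False: worse on both measures
```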
IPOPT SOLVER (cont’d…)

Line Search - Feasibility Restoration Phase:
- Invoked when a new trial point does not provide sufficient improvement.
- Restore feasibility (minimize the constraint violation):

      min_{x ∈ R^n}  ‖c(x)‖²₂
      s.t.  x ≥ 0

- Force a unique solution (find the closest feasible point) by adding a penalty term:

      min_{x ∈ R^n}  ‖x − x_k‖²₂
      s.t.  c(x) = 0,  x ≥ 0
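The first restoration problem can be sketched as follows (a toy example of my own; IPOPT's restoration phase is considerably more elaborate):

```python
# Restoration idea: from an infeasible point, decrease the constraint
# violation by gradient descent on (1/2) * c(x)^2, where the single
# constraint here is c(x) = x1^2 + x2^2 - 1 (the unit circle).
x = [2.0, 0.0]                       # infeasible starting point
eta = 0.02                           # step size, chosen small for stability
for _ in range(500):
    c = x[0] ** 2 + x[1] ** 2 - 1.0
    grad = [2.0 * c * x[0], 2.0 * c * x[1]]   # gradient of (1/2) * c(x)^2
    x = [x[i] - eta * grad[i] for i in range(2)]
print(x)  # approaches the closest feasible point (1, 0)
```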
IPOPT SOLVER (cont’d…)

The complexity of the problem increases when complementarity conditions are introduced:

    min_{x ∈ R^n, w, y ∈ R^m}  f(x, w, y)
    s.t.  c(x, w, y) = 0
          w^(i) y^(i) = 0,  i = 1, ..., m
          x, w, y ≥ 0

whose barrier reformulation is

    min  f(x, w, y) − μ [ Σ_{i=1}^{n} ln x^(i) + Σ_{i=1}^{m} ln w^(i) + Σ_{i=1}^{m} ln y^(i) ]
    s.t.  c(x, w, y) = 0
          w^(i) y^(i) = 0,  i = 1, ..., m

The interior point method for NLPs has been extended to handle complementarity problems (Raghunathan et al. 2003). The complementarity constraint w^(i) y^(i) = 0 is relaxed as

    w^(i) y^(i) + s^(i) = δμ,   s^(i) ≥ 0,   i.e.   w^(i) y^(i) ≤ δμ,   i = 1, ..., m
IPOPT SOLVER (cont’d…)

Additional:
- IPOPT 3.x is now programmed in C++.
- It is the primary NLP solver in an ongoing project for MINLP with IBM.

References:
- IPOPT homepage: http://www.coin-or.org/Ipopt/ipopt-fortran.html
- A. Wächter and L. T. Biegler, On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming, Research Report, IBM T. J. Watson Research Center, Yorktown, USA (March 2004; accepted for publication in Mathematical Programming).
PENNON (PENalty method for NONlinear & semidefinite programming)

Creators:
- Michal Kocvara & Michael Stingl (~2001)

Aims:
- NLP, semidefinite programming (SDP), linear & bilinear matrix inequalities (LMI & BMI), second-order conic programming (SOCP)

Applications:
- General-purpose nonlinear optimization, systems of equations, control theory, economics & finance, structural optimization, engineering
SDP (SemiDefinite Programming)

Minimization of a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite:

    min  c^T x
    s.t. F(x) ⪰ 0

where F(x) = F₀ + Σ_{i=1}^{m} x_i F_i is built from m + 1 symmetric matrices F₀, ..., F_m. The linear matrix inequality (LMI) F(x) ⪰ 0 defines a convex constraint on x.
SDP (cont’d…)
- There is always an optimal point on the boundary.
- The boundary consists of piecewise algebraic surfaces.
SOCP (Second-Order Conic Programming)

Minimization of a linear function subject to a second-order cone constraint:

    min  c^T x
    s.t. ‖A_i x + b_i‖ ≤ c_i^T x + d_i

Called a second-order cone constraint since the unit second-order cone of dimension k is defined as

    C_k = { (u, t) : u ∈ R^{k−1}, t ∈ R, ‖u‖ ≤ t }

which is also called the quadratic, ice-cream, or Lorentz cone.
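A membership test for this cone is straightforward (a small sketch; `in_cone` is my own helper name):

```python
import math

# Membership test for the second-order ("ice-cream") cone
# C_k = {(u, t) : u in R^{k-1}, t in R, ||u|| <= t}.
def in_cone(u, t):
    return math.sqrt(sum(ui * ui for ui in u)) <= t

print(in_cone([3.0, 4.0], 5.0))   # True:  ||u|| = 5 <= 5
print(in_cone([3.0, 4.0], 4.0))   # False: ||u|| = 5 > 4
```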
PENNON (cont’d…)

Input Format:
- MATLAB function, routine called from C or Fortran, or stand-alone program with AMPL

Language:
- Fortran 77

Commercial/Free:
- Variety of licenses, ranging from Academic, single user ($460 CDN), to Commercial, company ($40,500 CDN)
PENNON (cont’d…)

Key Claims:
- 1st available code for combined NLP, LMI, & BMI constraints
- Aimed at (very) large-scale problems
- Efficient treatment of different sparsity patterns in problem data
- Robust with respect to feasibility of the initial guess
- Particularly efficient for large convex problems
PENNON (cont’d…)

Algorithm:
- Generalized version of the Augmented Lagrangian method (originally by Ben-Tal & Zibulevsky)

Augmented problem:

    min  f(x)
    s.t. p_i φ_g(g_i(x) / p_i) ≤ 0,  i = 1, ..., m_g

Augmented Lagrangian:

    F(x, u, p) = f(x) + Σ_{i=1}^{m_g} u_i p_i φ_g(g_i(x) / p_i)

where m_g is the number of inequality constraints, p_i > 0 is a penalty parameter, φ_g is a penalty function, and u_i is a Lagrange multiplier.
PENNON (cont’d…)

The Algorithm:
- Consider only the inequality constraints from (NLP).
- Based on the choice of a penalty function φ_g that penalizes the inequality constraints.
- The penalty function must satisfy multiple properties such that the original (NLP) has the same solution as the following “augmented” problem:

      min  f(x),  x ∈ R^n
      s.t. p_i φ_g(g_i(x) / p_i) ≤ 0,  i = 1, ..., m_g        (NLPφ)

  with p_i > 0.

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm (Cont’d…):
- The Lagrangian of (NLPφ) can be viewed as a (generalized) augmented Lagrangian of (NLP):

      F(x, u, p) = f(x) + Σ_{i=1}^{m_g} u_i p_i φ_g(g_i(x) / p_i)

  with Lagrange multipliers u_i, penalty parameters p_i, inequality constraints g_i, and penalty function φ_g.

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm STEPS:

(0) Let x¹ and u_i¹ be given. Let p_i¹ > 0, i = 1, ..., m_g. For k = 1, 2, ... repeat until a stopping criterion is satisfied:

    (i)   Find x^{k+1} such that ‖∇_x F(x^{k+1}, u^k, p^k)‖ ≤ K
    (ii)  u_i^{k+1} = u_i^k φ'_g(g_i(x^{k+1}) / p_i^k),  i = 1, ..., m_g
    (iii) p_i^{k+1} < p_i^k,  i = 1, ..., m_g

[3] Kocvara & Stingl
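The three steps can be sketched end-to-end on a one-dimensional toy problem (my own example, not PENNON itself; as φ_g I use the exponential penalty φ(t) = eᵗ − 1, one admissible choice, plus a safeguard clamping the multiplier ratio to [0.5, 2]):

```python
import math

# Toy problem:  min (x - 2)^2  s.t.  g(x) = x - 1 <= 0,
# with solution x* = 1 and KKT multiplier u* = 2.
def grad_F(x, u, p):
    # gradient of F(x, u, p) = (x - 2)^2 + u * p * phi((x - 1) / p),
    # with phi(t) = exp(t) - 1, so phi'(t) = exp(t)
    t = min((x - 1.0) / p, 50.0)          # guard against overflow
    return 2.0 * (x - 2.0) + u * math.exp(t)

u, p = 1.0, 1.0
for k in range(1, 13):
    # step (i): inner minimization by bisection (grad_F is increasing in x)
    lo, hi = 0.0, 3.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if grad_F(mid, u, p) > 0.0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    # step (ii): multiplier update u <- u * phi'(g(x)/p), with the ratio
    # clamped to [lambda, 1/lambda] for lambda = 0.5
    ratio = math.exp(min((x - 1.0) / p, 50.0))
    u *= min(max(ratio, 0.5), 2.0)
    # step (iii): decrease the penalty parameter (held fixed for 3 iterations)
    if k > 3 and p > 1e-6:
        p *= 0.5
print(x, u)  # x approaches 1, u approaches the KKT multiplier 2
```

The inner bisection stands in for the Newton or trust-region minimization that PENNON actually uses.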
PENNON (cont’d…)

The Algorithm STEPS - Initialization:

(0) Let x¹ and u_i¹ be given. Let p_i¹ > 0, i = 1, ..., m_g.

- Can start from an arbitrary primal variable x; therefore, choose x¹ = 0.
- Calculate initial multiplier values u_i¹.
- Initial p_i¹ > 0, typically between 10 and 10000.

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm STEPS - Unconstrained Minimization:

(i) Find x^{k+1} such that ‖∇_x F(x^{k+1}, u^k, p^k)‖ ≤ K

- (Approximate) unconstrained minimization,

      x^{k+1} = arg min_x F(x, u^k, p^k)

  performed either by Newton with line search, or by trust region.
- Stopping criteria (the norm may also be taken in the H^{−1}-weighted sense ‖·‖²_{H^{−1}}):

      ‖∇_x F(x^{k+1}, u^k, p^k)‖ ≤ α    or
      ‖∇_x F(x^{k+1}, u^k, p^k)‖ ≤ 0.1 ‖u^k − û^k‖,   where û_i^k = u_i^k φ'_g(g_i(x^{k+1}) / p_i^k)

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm STEPS - Update of Multipliers:

(ii) u_i^{k+1} = u_i^k φ'_g(g_i(x^{k+1}) / p_i^k),  i = 1, ..., m_g

- Restricted in order to satisfy:

      λ ≤ u_i^{k+1} / u_i^k ≤ 1/λ

  with a positive λ < 1, typically 0.5.
- If the left side is violated, let u_i^new = λ u_i^k; if the right side is violated, let u_i^new = u_i^k / λ.

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm STEPS - Update of Penalty Parameter:

(iii) p_i^{k+1} < p_i^k,  i = 1, ..., m_g

- No update during the first 3 iterations.
- Afterwards, updated by a constant factor dependent on the initial penalty parameter.
- The penalty update is stopped once p_eps (10⁻⁶) is reached.

[3] Kocvara & Stingl
PENNON (cont’d…)

The Algorithm - Choice of Penalty Function:
- The most efficient penalty function for convex NLP is the quadratic-logarithmic function:

      φ_g(t) = c₁ (½ t²) + c₂ t + c₃        for t ≥ r
      φ_g(t) = c₄ log(t − c₅) + c₆          for t < r

  where r ∈ (−1, 1) and the constants c_i, i = 1, ..., 6, are chosen so that the required properties hold.

[4] Ben-Tal & Zibulevsky
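One concrete realization can be sketched as follows (the constants below are my own derivation for c₁ = c₂ = 1, c₃ = 0, with the log branch written as c₄ log(c₅ − t) + c₆ so that its domain covers all t < r; published versions differ in sign conventions):

```python
import math

# Quadratic-logarithmic penalty: quadratic for t >= r, logarithmic for
# t < r, with the constants matched so the function is twice continuously
# differentiable at the breakpoint and satisfies phi(0) = 0, phi'(0) = 1.
def make_penalty(r=-0.5):
    assert -1.0 < r < 1.0
    s = r + 1.0                        # distance from r to the log singularity
    c4 = -(s ** 2)
    c5 = 2.0 * r + 1.0
    c6 = 0.5 * r * r + r + s * s * math.log(s)

    def phi(t):
        if t >= r:
            return 0.5 * t * t + t     # quadratic branch: c1 = c2 = 1, c3 = 0
        return c4 * math.log(c5 - t) + c6
    return phi

phi = make_penalty()
print(phi(0.0))                        # 0.0: the feasible boundary costs nothing
```

The log branch grows only logarithmically as t → −∞, so far-feasible points are barely penalized, while infeasible points (t > 0) incur a rapidly growing quadratic cost.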
PENNON (cont’d…)

The Algorithm - Overall Stopping Criteria:

    |f(x^k) − F(x^k, u^k, p)| / (1 + |f(x^k)|) ≤ ε    or
    |f(x^k) − f(x^{k−1})| / (1 + |f(x^k)|) ≤ ε

where ε = 10⁻⁷.

[3] Kocvara & Stingl
PENNON (cont’d…)

Assumptions / Warnings:
- More tuning for nonconvex problems is still required.
- Slower at solving linear SDP problems, since the algorithm is generalized.
PENNON (cont’d…)

References:
- Kocvara, Michal & Michael Stingl. PENNON: A Code for Convex Nonlinear and Semidefinite Programming. Optimization Methods and Software, 18(3):317-333, 2003.
- Kocvara, Michal & Michael Stingl. PENNON-AMPL User’s Guide. www.penopt.com, August 2003.
- Ben-Tal, Aharon & Michael Zibulevsky. Penalty/Barrier Multiplier Methods for Convex Programming Problems. SIAM J. Optim., 7(2):347-366, 1997.
- PENNON Homepage. www.penopt.com/pennon.html. Available online January 2007.