Source
The following is a link to the source code for this example:
qp_interior.hpp
Purpose
This routine could be used to create a version of abs_min_linear
that solves quadratic programs (instead of linear programs).
Problem
We are given
@(@
C \in \B{R}^{m \times n}
@)@,
@(@
c \in \B{R}^m
@)@,
@(@
G \in \B{R}^{n \times n}
@)@,
@(@
g \in \B{R}^n
@)@,
where @(@
G
@)@ is positive semi-definite
and @(@
G + C^T C
@)@ is positive definite.
This routine solves the problem
@[@
\begin{array}{rl}
\R{minimize} &
\frac{1}{2} x^T G x + g^T x \; \R{w.r.t} \; x \in \B{R}^n
\\
\R{subject \; to} &
C x + c \leq 0
\end{array}
@]@
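As a minimal illustration (not part of the source), consider @(@
n = m = 1
@)@ with @(@
G = [2]
@)@, @(@
g = [0]
@)@, @(@
C = [1]
@)@, and @(@
c = [1]
@)@:
@[@
\R{minimize} \; x^2 \; \R{w.r.t} \; x \in \B{R}
\; \R{subject \; to} \; x + 1 \leq 0
@]@
The unconstrained minimizer @(@
x = 0
@)@ is infeasible, so the constraint is active:
the solution is @(@
x = -1
@)@ with slack @(@
s = 0
@)@ and dual variable @(@
y = 2
@)@, which makes the stationarity condition
@(@
G x + g + C^T y = -2 + 0 + 2 = 0
@)@ hold.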
Vector
The type
Vector
is a
simple vector with elements of type double.
level
This value is zero or one.
If
level == 0
,
no tracing is printed.
If
level == 1
,
a trace of the qp_interior optimization is printed.
G
This is a row-major vector representation
of the matrix @(@
G
@)@ in the problem.
epsilon
This argument is the convergence criterion;
see KKT conditions
below.
It must be greater than zero.
maxitr
This is the maximum number of Newton iterations to try before giving up
on convergence.
xin
This argument has size
n
and is the initial point for the algorithm.
It must strictly satisfy the constraints; i.e.,
@(@
C x + c < 0
@)@ for
x = xin
.
xout
This argument has size
n
and
the input value of its elements does not matter.
Upon return it contains the primal variables corresponding to the problem solution.
yout
This argument has size
m
and
the input value of its elements does not matter.
Upon return the components of
yout
are all positive
and it contains the dual variables corresponding to the problem solution.
sout
This argument has size
m
and
the input value of its elements does not matter.
Upon return the components of
sout
are all positive
and it contains the slack variables corresponding to the problem solution.
ok
If the return value
ok
is true, convergence is obtained; i.e.,
@[@
| F_0 (xout , yout, sout) |_\infty \leq epsilon
@]@
where @(@
| v |_\infty
@)@ is the maximum absolute element
of the vector @(@
v
@)@ and @(@
F_\mu (x, y, s)
@)@ is defined below.
KKT Conditions
Given a vector @(@
v \in \B{R}^m
@)@ we define
@(@
D(v) \in \B{R}^{m \times m}
@)@ as the corresponding diagonal matrix.
We also define @(@
1_m \in \B{R}^m
@)@ as the vector of ones.
We define
@(@
F_\mu : \B{R}^{n + m + m } \rightarrow \B{R}^{n + m + m}
@)@
by
@[@
F_\mu ( x , y , s )
=
\left(
\begin{array}{c}
g + G x + C^T y \\
C x + c + s \\
D(s) D(y) 1_m - \mu 1_m
\end{array}
\right)
@]@
The KKT conditions for a solution of this problem are
@(@
0 \leq y
@)@,
@(@
0 \leq s
@)@, and
@(@
F_0 (x , y, s) = 0
@)@.
Newton Step
The derivative of @(@
F_\mu
@)@ is given by
@[@
F_\mu^{(1)} (x, y, s) =
\left( \begin{array}{ccc}
G & C^T & 0_{n,m} \\
C & 0_{m,m} & I_{m,m} \\
0_{m,m} & D(s) & D(y)
\end{array} \right)
@]@
The Newton step solves the following equation for
@(@
\Delta x
@)@, @(@
\Delta y
@)@, and @(@
\Delta s
@)@
@[@
F_\mu^{(1)} (x, y, s)
\left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right)
=
- F_\mu (x, y, s)
@]@
To simplify notation, we define
@[@
\begin{array}{rcl}
r_x (x, y, s) & = & g + G x + C^T y \\
r_y (x, y, s) & = & C x + c + s \\
r_s (x, y, s) & = & D(s) D(y) 1_m - \mu 1_m
\end{array}
@]@
It follows that
@[@
\left( \begin{array}{ccc}
G & C^T & 0_{n,m} \\
C & 0_{m,m} & I_{m,m} \\
0_{m,m} & D(s) & D(y)
\end{array} \right)
\left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right)
=
-
\left( \begin{array}{c}
r_x (x, y, s) \\
r_y (x, y, s) \\
r_s (x, y, s)
\end{array} \right)
@]@
Elementary Row Reduction
Subtracting @(@
D(y)
@)@ times the second row from the third row
we obtain:
@[@
\left( \begin{array}{ccc}
G & C^T & 0_{n,m} \\
C & 0_{m,m} & I_{m,m} \\
- D(y) C & D(s) & 0_{m,m}
\end{array} \right)
\left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right)
=
-
\left( \begin{array}{c}
r_x (x, y, s) \\
r_y (x, y, s) \\
r_s (x, y, s) - D(y) r_y(x, y, s)
\end{array} \right)
@]@
Multiplying the third row by @(@
D(s)^{-1}
@)@ we obtain:
@[@
\left( \begin{array}{ccc}
G & C^T & 0_{n,m} \\
C & 0_{m,m} & I_{m,m} \\
- D(y/s) C & I_{m,m} & 0_{m,m}
\end{array} \right)
\left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right)
=
-
\left( \begin{array}{c}
r_x (x, y, s) \\
r_y (x, y, s) \\
D(s)^{-1} r_s (x, y, s) - D(y/s) r_y(x, y, s)
\end{array} \right)
@]@
where @(@
y/s
@)@ is the vector in @(@
\B{R}^m
@)@ defined by
@(@
(y/s)_i = y_i / s_i
@)@.
Subtracting @(@
C^T
@)@ times the third row from the first row we obtain:
@[@
\left( \begin{array}{ccc}
G + C^T D(y/s) C & 0_{n,m} & 0_{n,m} \\
C & 0_{m,m} & I_{m,m} \\
- D(y/s) C & I_{m,m} & 0_{m,m}
\end{array} \right)
\left( \begin{array}{c} \Delta x \\ \Delta y \\ \Delta s \end{array} \right)
=
-
\left( \begin{array}{c}
r_x (x, y, s)
- C^T D(s)^{-1} \left[ r_s (x, y, s) - D(y) r_y(x, y, s) \right] \\
r_y (x, y, s) \\
D(s)^{-1} r_s (x, y, s) - D(y/s) r_y(x, y, s)
\end{array} \right)
@]@
Solution
Since the components of @(@
y
@)@ and @(@
s
@)@ are positive and @(@
G + C^T C
@)@ is positive definite, the matrix @(@
G + C^T D(y/s) C
@)@ is positive definite, hence invertible, and
we can determine @(@
\Delta x
@)@ by solving the equation
@[@
[ G + C^T D(y/s) C ] \Delta x
=
C^T D(s)^{-1} \left[ r_s (x, y, s) - D(y) r_y(x, y, s) \right] - r_x (x, y, s)
@]@
Given @(@
\Delta x
@)@ we have that
@[@
\Delta s = - r_y (x, y, s) - C \Delta x
@]@
@[@
\Delta y =
D(s)^{-1}[ D(y) r_y(x, y, s) - r_s (x, y, s) + D(y) C \Delta x ]
@]@
Example
The file qp_interior.cpp
contains an example and test of
qp_interior.
Input File: example/abs_normal/qp_interior.hpp