Implementing a custom KKT system
This tutorial explains how to implement a custom AbstractKKTSystem in MadNLP.
Structure exploiting methods
MadNLP lets the user exploit the problem's structure at the linear algebra level when solving the KKT system at every Newton iteration. By default, the KKT system is factorized with a sparse linear solver (MUMPS or HSL ma27/ma57). A sparse linear solver analyzes the problem's algebraic structure when computing the symbolic factorization, using a heuristic to determine a good elimination tree. As an alternative, the problem's structure can be exploited directly by specifying the order of the pivots to perform (e.g., using a block elimination algorithm). Doing so usually leads to a significant speed-up in the algorithm.
We recall that the KKT system solved at each Newton iteration has the structure:
\[\overbrace{ \begin{pmatrix} W & J^\top & - I & I \\ J & 0 & 0 & 0 \\ -Z_\ell & 0 & X_\ell & 0 \\ Z_u & 0 & 0 & X_u \end{pmatrix}}^{K} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z_\ell \\ \Delta z_u \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{pmatrix}\]
with $W$ a sparse matrix storing the Hessian of the Lagrangian, and $J$ a sparse matrix storing the Jacobian of the constraints. We denote the diagonal matrices $Z_\ell = \mathrm{diag}(z_\ell)$, $Z_u = \mathrm{diag}(z_u)$, $X_\ell = \mathrm{diag}(x_\ell - x)$, $X_u = \mathrm{diag}(x - x_u)$.
In MadNLP, every linear system with the structure $K$ is implemented as an AbstractKKTSystem. By default, MadNLP represents a KKT system as a SparseKKTSystem:
nlp = HS15Model()
results = madnlp(nlp; kkt_system=MadNLP.SparseKKTSystem, linear_solver=LapackCPUSolver)
nothing
This is MadNLP version v0.8.7, running with Lapack-CPU (BUNCHKAUFMAN)
Number of nonzeros in constraint Jacobian............: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 1
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0000000e+00 1.01e+00 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.9758855e-01 1.00e+00 4.61e+01 -1.0 1.01e+00 - 4.29e-01 9.80e-03h 1
2 9.9664309e-01 1.00e+00 5.00e+02 -1.0 4.81e+00 - 1.00e+00 9.93e-05h 1
3 1.3615174e+00 9.99e-01 4.41e+02 -1.0 5.73e+02 - 9.98e-05 4.71e-04H 1
4 1.3742697e+00 9.99e-01 3.59e+02 -1.0 3.90e+01 - 2.30e-02 2.68e-05h 1
5 1.4692139e+00 9.99e-01 4.94e+02 -1.0 5.07e+01 - 2.76e-04 1.46e-04h 1
6 3.1727722e+00 9.97e-01 3.76e+02 -1.0 8.08e+01 - 1.88e-06 9.77e-04h 11
7 3.1746542e+00 9.96e-01 1.53e+02 -1.0 9.93e-01 4.0 9.96e-01 7.98e-04h 1
8 8.2322599e+00 9.85e-01 2.47e+02 -1.0 1.51e+01 - 1.43e-03 7.81e-03h 8
9 8.2886173e+00 9.84e-01 4.78e+02 -1.0 3.92e+00 - 1.00e+00 2.48e-04h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 4.0285546e+01 8.72e-01 4.57e+02 -1.0 4.94e+00 - 1.00e+00 6.25e-02h 5
11 2.8602303e+02 2.66e-01 4.91e+02 -1.0 1.16e+00 - 6.87e-01 5.00e-01h 2
12 3.9870638e+02 6.41e-03 4.21e+02 -1.0 1.89e-01 - 2.69e-01 1.00e+00h 1
13 3.4928113e+02 2.93e-02 2.62e+01 -1.0 5.07e-01 - 1.00e+00 1.00e+00h 1
14 3.5909050e+02 6.98e-03 1.66e+00 -1.0 2.60e-01 - 6.84e-01 1.00e+00h 1
15 3.6047026e+02 6.05e-05 1.44e-02 -1.0 2.91e-02 - 1.00e+00 1.00e+00h 1
16 3.6038251e+02 2.50e-07 7.99e-05 -2.5 1.37e-03 - 1.00e+00 1.00e+00h 1
17 3.6037976e+02 2.63e-10 7.93e-08 -5.7 4.63e-05 - 1.00e+00 1.00e+00h 1
18 3.6037976e+02 1.11e-16 5.96e-14 -8.6 3.06e-08 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 18
(scaled) (unscaled)
Objective...............: 3.6037976240508465e+02 3.6037976240508465e+02
Dual infeasibility......: 5.9563010060664213e-14 5.9563010060664213e-14
Constraint violation....: 1.1102230246251565e-16 1.1102230246251565e-16
Complementarity.........: 1.5755252336109936e-09 1.5755252336109936e-09
Overall NLP error.......: 1.5755252336109936e-09 1.5755252336109936e-09
Number of objective function evaluations = 46
Number of objective gradient evaluations = 19
Number of constraint evaluations = 46
Number of constraint Jacobian evaluations = 19
Number of Lagrangian Hessian evaluations = 18
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.827
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 3.827
EXIT: Optimal Solution Found (tol = 1.0e-08).
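The results object returned by madnlp can then be queried directly. A brief sketch follows; the field names are assumptions based on MadNLP's result structure (MadNLPExecutionStats):
results.status      # termination status
results.objective   # objective value at the solution
results.solution    # primal solution vector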
Solving an AbstractKKTSystem in MadNLP
The AbstractKKTSystem object is an abstraction used to solve the generic system $K x = b$. Depending on the implementation, the structure of the linear system is exploited in different ways. Solving a KKT system amounts to the following four operations:
1. Querying the current sensitivities to assemble the different blocks constituting the matrix $K$.
2. Assembling a reduced sparse matrix, condensing the sparse matrix $K$ into an equivalent smaller symmetric system.
3. Calling a linear solver to solve the condensed system.
4. Calling a routine to unpack the condensed solution and recover the original descent direction $(\Delta x, \Delta y, \Delta z_\ell, \Delta z_u)$.
Exploiting the problem's structure usually happens in steps (2) and (4). We now go through the four steps in more detail.
Getting the sensitivities
The KKT system requires the following information:
- the (approximated) Hessian of the Lagrangian $W$ ;
- the constraints' Jacobian $J$ ;
- the diagonal matrices $Z_\ell$, $Z_u$ and $X_\ell$, $X_u$.
The Hessian and the Jacobian are assumed sparse by default.
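For instance, with a model implementing the NLPModels API (such as the HS15Model used above), the sparsity patterns of $W$ and $J$ can be queried directly. This is only an illustrative sketch, independent of MadNLP's internal callbacks:
using NLPModels
nlp = HS15Model()
hrows, hcols = hess_structure(nlp)   # sparsity pattern of the Hessian W (lower triangle)
jrows, jcols = jac_structure(nlp)    # sparsity pattern of the Jacobian J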
At every IPM iteration, MadNLP automatically updates the values of $W$, $J$ and of the diagonal matrices $Z_\ell, Z_u, X_\ell, X_u$. By default, the following attributes are expected to be available in every instance kkt of an AbstractKKTSystem:
- kkt.hess: stores the nonzero values of the Hessian $W$;
- kkt.jac: stores the nonzero values of the Jacobian $J$;
- kkt.l_diag: stores the diagonal entries of $X_\ell$;
- kkt.u_diag: stores the diagonal entries of $X_u$;
- kkt.l_lower: stores the diagonal entries of $Z_\ell$;
- kkt.u_lower: stores the diagonal entries of $Z_u$.
The attributes kkt.hess and kkt.jac are accessed with the getters get_hessian and get_jacobian, respectively.
Every time MadNLP queries the Hessian and the Jacobian, it updates the nonzero values in kkt.hess and kkt.jac. Right after, MadNLP calls the functions compress_hessian! and compress_jacobian!, respectively, to update all the internal values in the KKT system kkt.
To recap, every time we evaluate the Hessian and the Jacobian, MadNLP automatically calls the functions:
hess = MadNLP.get_hessian(kkt)
MadNLP.compress_hessian!(kkt)
to update the values in the Hessian, and for the Jacobian:
jac = MadNLP.get_jacobian(kkt)
MadNLP.compress_jacobian!(kkt)
Assembling the KKT system
Once the sensitivities have been updated, we can assemble the KKT matrix $K$ and condense it into an equivalent system $K_{c}$ before factorizing it with a linear solver. The assembly of the KKT system is done in the function build_kkt!.
The system is usually stored in the attribute kkt.aug_com. Its dimension, and whether it is dense or sparse, depends on the condensation used. MadNLP uses the getter get_kkt to query the matrix kkt.aug_com stored in the KKT system kkt.
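For instance, the condensed matrix of a solver instance can be inspected as follows. This is a small sketch, assuming the solver object exposes its KKT system through the kkt field:
solver = MadNLP.MadNLPSolver(nlp; kkt_system=MadNLP.SparseKKTSystem, linear_solver=LapackCPUSolver)
MadNLP.solve!(solver)
K = MadNLP.get_kkt(solver.kkt)   # the matrix stored in kkt.aug_com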
Solving the system
Once the matrix $K_c$ is assembled, we pass it to the linear solver for factorization. The linear solver is stored internally in kkt: by default, it is stored in the attribute kkt.linear_solver. The factorization is handled internally by MadNLP.
Once the matrix is factorized, it remains to solve the linear system with a backsolve. The backsolve has to be implemented by the user in the function solve!. It reduces the right-hand side (RHS) to a form adapted to the condensed matrix $K_c$ and calls the linear solver to perform the backsolve. Then the condensed solution is unpacked to recover the full solution $(\Delta x, \Delta y, \Delta z_\ell, \Delta z_u)$.
To recap, MadNLP assembles and solves the KKT linear system using the following operations:
# Assemble
MadNLP.build_kkt!(kkt)
# Factorize the KKT system
MadNLP.factorize!(kkt.linear_solver)
# Backsolve
MadNLP.solve!(kkt, w)
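To summarize, implementing a custom KKT system amounts to providing the methods discussed above. In the example below, we implement create_kkt_system, get_kkt, get_jacobian, num_variables, compress_hessian!, compress_jacobian!, build_kkt!, solve!, jtprod! and LinearAlgebra.mul! for a new KKT type; this list is a recap of the functions appearing in this tutorial rather than an exhaustive interface specification.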
Example: Implementing a new KKT system
As an example, we detail how to implement a custom KKT system in MadNLP. Note that we consider this an advanced use of MadNLP. After this word of caution, let's dive into the details!
In this example, we want to approximate the Hessian of the Lagrangian $W$ as a diagonal matrix $D_w$ and solve the following KKT system at each IPM iteration:
\[\begin{pmatrix} D_w & J^\top & - I & I \\ J & 0 & 0 & 0 \\ -Z_\ell & 0 & X_\ell & 0 \\ Z_u & 0 & 0 & X_u \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z_\ell \\ \Delta z_u \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{pmatrix}\]
This new system is not equivalent to the original system $K$, but it is much easier to solve as it does not involve the generic Hessian $W$. If the diagonal values of $D_w$ are constant and equal to $\alpha$, the algorithm becomes equivalent to a gradient descent with step $\alpha^{-1}$.
Using the relations $\Delta z_\ell = X_\ell^{-1} (r_3 + Z_\ell \Delta x)$ and $\Delta z_u = X_u^{-1} (r_4 - Z_u \Delta x)$, we condense the matrix down to the reduced form:
\[\begin{pmatrix} D_w + \Sigma_x & J^\top \\ J & 0 \\ \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} r_1 + X_\ell^{-1} r_3 - X_u^{-1} r_4\\ r_2 \end{pmatrix}\]
with the diagonal matrix $\Sigma_x = -X_\ell^{-1} Z_\ell - X_u^{-1} Z_u$. The new system is symmetric indefinite, but much easier to solve than the original one.
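To make the reduction concrete, the following standalone snippet checks it numerically on small arbitrary data. It is only a sanity-check sketch, independent of MadNLP's internals (all matrices and right-hand sides below are made up):
using LinearAlgebra
n, m = 3, 2
Dw = Diagonal([2.0, 3.0, 4.0])
J  = [1.0 0.0 1.0; 0.0 1.0 2.0]
Zl, Zu = Diagonal([0.1, 0.2, 0.3]), Diagonal([0.4, 0.5, 0.6])
Xl, Xu = Diagonal([-1.0, -2.0, -1.5]), Diagonal([-0.5, -1.0, -2.0])
r1, r2, r3, r4 = randn(n), randn(m), randn(n), randn(n)
In = Matrix{Float64}(I, n, n)
# Full KKT matrix, following the block structure above
K = [Dw J' (-In) In;
     J zeros(m, m) zeros(m, n) zeros(m, n);
     (-Zl) zeros(n, m) Xl zeros(n, n);
     Zu zeros(n, m) zeros(n, n) Xu]
sol = K \ vcat(r1, r2, r3, r4)
# Condensed KKT system
Σx = -(Xl \ Zl) - (Xu \ Zu)
Kc = [(Dw + Σx) J'; J zeros(m, m)]
rc = vcat(r1 + (Xl \ r3) - (Xu \ r4), r2)
sol_c = Kc \ rc
Δx, Δy = sol_c[1:n], sol_c[n+1:end]
# Recover the bound multipliers from the condensed solution
Δzl = Xl \ (r3 + Zl * Δx)
Δzu = Xu \ (r4 - Zu * Δx)
@assert vcat(Δx, Δy, Δzl, Δzu) ≈ sol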
The previous reduction is standard in NLP solvers: MadNLP implements the reduced KKT system operating in the space $(\Delta x, \Delta y)$ using the abstraction AbstractReducedKKTSystem. If $D_w$ is replaced by the original Hessian matrix $W$, we recover exactly the SparseKKTSystem used by default in MadNLP.
Creating the KKT system
We create a new KKT system DiagonalHessianKKTSystem, inheriting from AbstractReducedKKTSystem. Using generic types, the structure DiagonalHessianKKTSystem is defined as:
struct DiagonalHessianKKTSystem{T, VT, MT, QN, LS, VI, VI32} <: MadNLP.AbstractReducedKKTSystem{T, VT, MT, QN}
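    # Type parameters (interpretation, based on the fields below): T is the scalar type,
    # VT the vector type, MT the matrix type of aug_com/jac_com, QN the quasi-Newton
    # approximation type, LS the linear solver type, and VI/VI32 integer index vectors.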
    # Nonzero values of the Hessian and the Jacobian
    hess::VT
    jac_callback::VT
    jac::VT
    # Diagonal matrices
    reg::VT
    pr_diag::VT
    du_diag::VT
    l_diag::VT
    u_diag::VT
    l_lower::VT
    u_lower::VT
    # Augmented system K
    aug_raw::MadNLP.SparseMatrixCOO{T, Int32, VT, VI32}
    aug_com::MT
    aug_csc_map::Union{Nothing, VI}
    # Diagonal of the Hessian
    diag_hess::VT
    # Jacobian
    jac_raw::MadNLP.SparseMatrixCOO{T, Int32, VT, VI32}
    jac_com::MT
    jac_csc_map::Union{Nothing, VI}
    # Linear solver
    linear_solver::LS
    # Info
    n_var::Int
    n_ineq::Int
    n_tot::Int
    ind_ineq::VI
    ind_lb::VI
    ind_ub::VI
    # Quasi-Newton approximation
    quasi_newton::QN
end
Here, we define DiagonalHessianKKTSystem as a subtype of AbstractReducedKKTSystem. Depending on the condensation, the following alternatives are available:
- AbstractUnreducedKKTSystem: no condensation is applied.
- AbstractCondensedKKTSystem: the reduced KKT system is condensed further by removing the blocks associated with the slack variables.
The attributes pr_diag and du_diag store respectively the primal regularization (terms on the diagonal of the (1, 1) block) and the dual regularization (terms on the diagonal of the (2, 2) block). By default, the dual regularization is kept equal to 0, whereas the primal regularization is set equal to $\Sigma_x$.
MadNLP instantiates a new KKT system with the function create_kkt_system, which has the following signature:
function MadNLP.create_kkt_system(
    ::Type{DiagonalHessianKKTSystem},
    cb::MadNLP.SparseCallback{T,VT},
    ind_cons,
    linear_solver::Type;
    opt_linear_solver=MadNLP.default_options(linear_solver),
    hessian_approximation=MadNLP.ExactHessian,
    qn_options=MadNLP.QuasiNewtonOptions(),
) where {T,VT}
We pass as arguments:
- the type of the KKT system to build (here, DiagonalHessianKKTSystem),
- the structure cb used to evaluate the callbacks,
- the indices of the constraints (ind_cons),
- a generic linear solver linear_solver.
This function instantiates all the data structures needed by DiagonalHessianKKTSystem. The most difficult part is assembling the sparse matrices aug_raw and jac_raw, stored here in COO format. This is done in four steps:
Step 1. We import the sparsity pattern of the Jacobian:
jac_sparsity_I = MadNLP.create_array(cb, Int32, cb.nnzj)
jac_sparsity_J = MadNLP.create_array(cb, Int32, cb.nnzj)
MadNLP._jac_sparsity_wrapper!(cb, jac_sparsity_I, jac_sparsity_J)
Step 2. We build the resulting KKT matrix aug_raw in COO format, knowing that $D_w$ is diagonal:
# System's dimensions (n: variables, m: constraints, n_slack: slack variables, n_tot: variables + slacks)
n_hess = n_tot # Diagonal Hessian!
n_jac = length(jac_sparsity_I)
aug_vec_length = n_tot+m
aug_mat_length = n_tot+m+n_hess+n_jac+n_slack
# Build vectors to store the COO coordinates
I = MadNLP.create_array(cb, Int32, aug_mat_length)
J = MadNLP.create_array(cb, Int32, aug_mat_length)
V = VT(undef, aug_mat_length)
fill!(V, 0.0) # Need to initialize V to avoid NaN
offset = n_tot+n_jac+n_slack+n_hess+m
# Primal regularization block
I[1:n_tot] .= 1:n_tot
J[1:n_tot] .= 1:n_tot
# Hessian block
I[n_tot+1:n_tot+n_hess] .= 1:n_tot # diagonal Hessian!
J[n_tot+1:n_tot+n_hess] .= 1:n_tot # diagonal Hessian!
# Jacobian block
I[n_tot+n_hess+1:n_tot+n_hess+n_jac] .= (jac_sparsity_I.+n_tot)
J[n_tot+n_hess+1:n_tot+n_hess+n_jac] .= jac_sparsity_J
# Slack block
I[n_tot+n_hess+n_jac+1:n_tot+n_hess+n_jac+n_slack] .= ind_ineq .+ n_tot
J[n_tot+n_hess+n_jac+1:n_tot+n_hess+n_jac+n_slack] .= (n+1:n+n_slack)
# Dual regularization block
I[n_tot+n_hess+n_jac+n_slack+1:offset] .= (n_tot+1:n_tot+m)
J[n_tot+n_hess+n_jac+n_slack+1:offset] .= (n_tot+1:n_tot+m)
aug_raw = MadNLP.SparseMatrixCOO(aug_vec_length, aug_vec_length, I, J, V)
Step 3. We convert aug_raw from COO to CSC format using the following utility:
aug_com, aug_csc_map = MadNLP.coo_to_csc(aug_raw)
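The Jacobian matrix jac_raw is assembled in a similar fashion. The snippet below is only a sketch of one possible construction, consistent with the layout assumed by compress_jacobian! later in this tutorial (the slack entries are stored last); note that, as explained below, MadNLP stores the Hessian and the Jacobian only once, so in practice jac would alias a segment of the value vector V rather than being a separate array:
# Sketch (assumption): Jacobian in COO format, with the -I block of the slack
# variables appended after the nnzj entries coming from the callback.
jac = VT(undef, n_jac + n_slack)
fill!(jac, 0.0)
jac_I = MadNLP.create_array(cb, Int32, n_jac + n_slack)
jac_J = MadNLP.create_array(cb, Int32, n_jac + n_slack)
jac_I[1:n_jac] .= jac_sparsity_I
jac_J[1:n_jac] .= jac_sparsity_J
jac_I[n_jac+1:end] .= ind_ineq           # rows of the inequality constraints
jac_J[n_jac+1:end] .= (n+1:n+n_slack)    # columns of the associated slack variables
jac_raw = MadNLP.SparseMatrixCOO(m, n_tot, jac_I, jac_J, jac)
jac_com, jac_csc_map = MadNLP.coo_to_csc(jac_raw)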
Step 4. We pass the matrix in CSC format to the linear solver:
_linear_solver = linear_solver(
    aug_com; opt = opt_linear_solver
)
Storing the Hessian and the Jacobian, even in sparse format, is expensive in terms of memory. For that reason, MadNLP stores the Hessian and the Jacobian only once in the KKT system.
Getting the sensitivities
MadNLP requires the following getters to update the sensitivities. As much as possible, we update the values in place in the matrix aug_raw. Our implementation of compress_jacobian! follows the default one in MadNLP:
function MadNLP.compress_jacobian!(kkt::DiagonalHessianKKTSystem)
    ns = length(kkt.ind_ineq)
    kkt.jac[end-ns+1:end] .= -1.0
    MadNLP.transfer!(kkt.jac_com, kkt.jac_raw, kkt.jac_csc_map)
    return
end
The term -1.0 accounts for the slack variables used to reformulate the inequality constraints as equality constraints.
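For reference, MadNLP reformulates each inequality constraint with a slack variable, which is why the Jacobian block associated with the slacks is $-I$:
\[g_\ell \leq g(x) \leq g_u \quad \Longrightarrow \quad g(x) - s = 0, \qquad g_\ell \leq s \leq g_u.\]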
For compress_hessian!, we set the values of the diagonal matrix $D_w$; in this example, $D_w$ is kept constant and equal to the identity:
function MadNLP.compress_hessian!(kkt::DiagonalHessianKKTSystem)
    kkt.diag_hess .= 1.0   # D_w is kept equal to the identity in this example
    return
end
MadNLP also needs the following basic functions to get the different matrices and the dimension of the linear system:
MadNLP.num_variables(kkt::DiagonalHessianKKTSystem) = length(kkt.diag_hess)
MadNLP.get_kkt(kkt::DiagonalHessianKKTSystem) = kkt.aug_com
MadNLP.get_jacobian(kkt::DiagonalHessianKKTSystem) = kkt.jac_callback
function MadNLP.jtprod!(y::AbstractVector, kkt::DiagonalHessianKKTSystem, x::AbstractVector)
    mul!(y, kkt.jac_com', x)
end
Assembling the KKT system
Once the sensitivities are updated, we assemble the new matrix $K_c$, first in COO format in kkt.aug_raw, before converting it to CSC format in kkt.aug_com using the utility MadNLP.transfer!:
function MadNLP.build_kkt!(kkt::DiagonalHessianKKTSystem)
    MadNLP.transfer!(kkt.aug_com, kkt.aug_raw, kkt.aug_csc_map)
end
Solving the system
It remains to implement the backsolve. For the reduced KKT formulation, the RHS $r_1 + X_\ell^{-1} r_3 - X_u^{-1} r_4$ is built automatically by the function MadNLP.reduce_rhs!. The backsolve solves for $(\Delta x, \Delta y)$. The dual descent directions $\Delta z_\ell$ and $\Delta z_u$ are recovered afterwards using the function MadNLP.finish_aug_solve!:
function MadNLP.solve!(kkt::DiagonalHessianKKTSystem, w::MadNLP.AbstractKKTVector)
    MadNLP.reduce_rhs!(w.xp_lr, dual_lb(w), kkt.l_diag, w.xp_ur, dual_ub(w), kkt.u_diag)
    MadNLP.solve!(kkt.linear_solver, primal_dual(w))
    MadNLP.finish_aug_solve!(kkt, w)
    return w
end
The function solve! takes as its second argument a vector w of type AbstractKKTVector. An AbstractKKTVector is a convenient data structure used in MadNLP to store and access the elements of the primal-dual vector $(\Delta x, \Delta y, \Delta z_\ell, \Delta z_u)$.
When calling solve!, the values in the vector w are updated in place. The vector w should be initialized with the RHS $(r_1, r_2, r_3, r_4)$ before calling the function solve!. The function modifies the values directly in the vector w to return the solution $(\Delta x, \Delta y, \Delta z_\ell, \Delta z_u)$.
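For reference, the views of an AbstractKKTVector used in this tutorial are accessed with the following functions (a brief recap of the calls appearing in the code above):
primal(w)        # primal component (variables and slacks)
dual(w)          # dual component associated with the constraints
primal_dual(w)   # concatenation of the primal and dual components
dual_lb(w)       # multipliers associated with the lower bounds (z_ℓ)
dual_ub(w)       # multipliers associated with the upper bounds (z_u)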
Lastly, MadNLP implements an iterative refinement method to obtain accurate descent directions in the final iterations. The iterative refinement algorithm is a Richardson method, which requires multiplying the KKT matrix $K$ by an arbitrary vector $w = (w_x, w_y, w_{z_\ell}, w_{z_u})$. This is provided in MadNLP by overloading the function LinearAlgebra.mul!:
function LinearAlgebra.mul!(
    w::MadNLP.AbstractKKTVector{T},
    kkt::DiagonalHessianKKTSystem,
    x::MadNLP.AbstractKKTVector{T},
    alpha = one(T),
    beta = zero(T),
) where {T}
    mul!(primal(w), Diagonal(kkt.diag_hess), primal(x), alpha, beta)
    mul!(primal(w), kkt.jac_com', dual(x), alpha, one(T))
    mul!(dual(w), kkt.jac_com, primal(x), alpha, beta)
    # Reduce KKT vector
    MadNLP._kktmul!(w, x, kkt.reg, kkt.du_diag, kkt.l_lower, kkt.u_lower, kkt.l_diag, kkt.u_diag, alpha, beta)
    return w
end
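As a reminder, each refinement step improves the current iterate $w^k$ by applying the backsolve to the residual evaluated with this matrix-vector product:
\[w^{k+1} = w^{k} + \hat{K}^{-1} \left(b - K w^{k}\right),\]
where $\hat{K}^{-1}$ denotes the (approximate) backsolve implemented in solve! and $b$ is the original right-hand side.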
Demonstration
We now have all the elements needed to solve the problem with the new KKT linear system DiagonalHessianKKTSystem. We just have to pass the KKT system to MadNLP using the option kkt_system:
nlp = HS15Model()
results = madnlp(nlp; kkt_system=DiagonalHessianKKTSystem, linear_solver=LapackCPUSolver)
nothing
This is MadNLP version v0.8.7, running with Lapack-CPU (BUNCHKAUFMAN)
Number of nonzeros in constraint Jacobian............: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 1
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0000000e+00 1.01e+00 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.9758678e-01 1.00e+00 4.59e+01 -1.0 1.01e+00 - 4.27e-01 9.80e-03h 1
2 1.3281589e+00 1.00e+00 5.00e+02 -1.0 3.42e+02 - 1.00e+00 1.68e-04h 1
3 1.3006439e+00 1.00e+00 5.00e+02 -1.0 1.64e+02 - 3.71e-03 7.01e-04h 6
4 1.0646881e+00 1.00e+00 5.00e+02 -1.0 4.07e+01 - 8.23e-02 6.76e-04h 2
5 9.7015669e-01 1.00e+00 4.74e+02 -1.0 8.53e+01 - 1.00e+00 3.80e-04h 7
6 9.8003254e-01 1.00e+00 4.80e+02 -1.0 6.56e+01 - 7.39e-01 1.22e-04h 14
7 1.0034592e+00 1.00e+00 4.84e+02 -1.0 6.66e+01 - 4.67e-01 1.22e-04h 14
8 1.0405590e+00 1.00e+00 4.83e+02 -1.0 6.75e+01 - 4.78e-03 1.22e-04h 14
9 6.5449345e+00 7.54e-01 4.76e+02 -1.0 2.55e+01 - 2.10e-03 1.99e-02h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 6.1668009e+00 7.53e-01 1.84e+01 -1.0 3.60e+00 - 1.00e+00 1.33e-03h 1
11 3.0904080e+02 9.13e-03 4.63e+02 -1.0 1.72e+00 - 2.97e-01 1.00e+00H 1
12 3.0697414e+02 8.63e-03 2.18e+02 -1.0 3.12e-01 - 1.00e+00 5.73e-02h 1
13 3.0670014e+02 6.87e-07 1.22e-01 -1.0 5.26e-03 - 1.00e+00 1.00e+00h 1
14 3.0650569e+02 2.50e-07 6.70e-02 -2.5 1.95e-03 - 1.00e+00 1.00e+00h 1
15 3.0650563e+02 1.42e-14 1.50e-05 -2.5 2.08e-07 - 1.00e+00 1.00e+00h 1
16 3.0649998e+02 2.11e-10 1.95e-03 -5.7 5.65e-05 - 1.00e+00 1.00e+00h 1
17 3.0649998e+02 1.11e-16 1.77e-08 -5.7 7.18e-10 - 1.00e+00 1.00e+00h 1
18 3.0649998e+02 8.88e-16 1.27e-06 -9.0 3.69e-08 - 1.00e+00 1.00e+00h 1
19 3.0649998e+02 0.00e+00 3.61e-14 -9.0 3.54e-16 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 19
(scaled) (unscaled)
Objective...............: 3.0649997549181882e+02 3.0649997549181882e+02
Dual infeasibility......: 3.6079608235885943e-14 3.6079608235885943e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.1127505952435713e-10 1.1127505952435713e-10
Overall NLP error.......: 1.1127505952435713e-10 1.1127505952435713e-10
Number of objective function evaluations = 78
Number of objective gradient evaluations = 20
Number of constraint evaluations = 78
Number of constraint Jacobian evaluations = 20
Number of Lagrangian Hessian evaluations = 19
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.193
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 3.194
EXIT: Optimal Solution Found (tol = 1.0e-08).