Warmstarting MadNLP
We use a parameterized version of the instance HS15 used in the introduction. This updated version of HS15Model stores the parameters of the model in an attribute nlp.params:
nlp = HS15Model()
println(nlp.params)
[100.0, 1.0]
By default, the parameters are set to [100.0, 1.0]. In a first solve, we find a solution associated with these parameters. We then want to warmstart MadNLP from the solution found in the first solve, after a small update of the problem's parameters.
It is well known that interior-point methods offer poor support for warmstarting, in contrast to active-set methods. However, if the parameter changes remain small and do not lead to significant changes in the active set, warmstarting the interior-point algorithm can significantly reduce the total number of barrier iterations in the second solve.
The warm-start described in this tutorial remains basic. Its main application is updating the solution of a parametric problem after a change in the parameters. The warm-start always assumes that the structure of the problem remains the same between two consecutive solves: MadNLP cannot be warm-started if variables or constraints are added to the problem.
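For reference, a parameterized HS15Model could be sketched as follows using the NLPModels.jl API. This is a hypothetical sketch, not the exact model from the introduction: the derivative callbacks required by MadNLP (NLPModels.grad!, jac_structure!, jac_coord!, hess_structure!, hess_coord!) are omitted for brevity, and the layout of params as [σ, c] is an assumption.

```julia
# Hypothetical sketch of a parameterized HS15 model (NLPModels.jl API).
using NLPModels

struct HS15Model{T} <: AbstractNLPModel{T, Vector{T}}
    meta::NLPModelMeta{T, Vector{T}}
    counters::Counters
    params::Vector{T}   # assumed layout [σ, c]: objective σ*(x2 - x1^2)^2 + (c - x1)^2
end

function HS15Model(; params = [100.0, 1.0], x0 = zeros(2))
    meta = NLPModelMeta(
        2;                     # number of variables
        ncon = 2,              # number of inequality constraints
        x0 = x0,
        lvar = [-Inf, -Inf],
        uvar = [0.5, Inf],     # x1 <= 0.5
        lcon = [1.0, 0.0],     # x1 * x2 >= 1  and  x1 + x2^2 >= 0
        ucon = [Inf, Inf],
        nnzj = 4,              # nonzeros in the constraint Jacobian
        nnzh = 3,              # nonzeros in the Lagrangian Hessian
    )
    return HS15Model(meta, Counters(), params)
end

function NLPModels.obj(nlp::HS15Model, x::AbstractVector)
    σ, c = nlp.params
    return σ * (x[2] - x[1]^2)^2 + (c - x[1])^2
end

function NLPModels.cons!(nlp::HS15Model, x::AbstractVector, cx::AbstractVector)
    cx[1] = x[1] * x[2]
    cx[2] = x[1] + x[2]^2
    return cx
end
```

The nnzj and nnzh values in the sketch match the solver output below (4 Jacobian and 3 Hessian nonzeros).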
Naive solution: starting from the previous solution
By default, MadNLP starts its interior-point algorithm at the primal values stored in nlp.meta.x0. We can access this attribute using the function get_x0:
x0 = NLPModels.get_x0(nlp)
2-element Vector{Float64}:
0.0
0.0
Here, we observe that the initial point is [0, 0].
We solve the problem using the function madnlp:
results = madnlp(nlp)
nothing
This is MadNLP version v0.8.5, running with umfpack
Number of nonzeros in constraint Jacobian............: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 1
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0000000e+00 1.01e+00 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.9758855e-01 1.00e+00 4.61e+01 -1.0 1.01e+00 - 4.29e-01 9.80e-03h 1
2 9.9664309e-01 1.00e+00 5.00e+02 -1.0 4.81e+00 - 1.00e+00 9.93e-05h 1
3 1.3615174e+00 9.99e-01 4.41e+02 -1.0 5.73e+02 - 9.98e-05 4.71e-04H 1
4 1.3742697e+00 9.99e-01 3.59e+02 -1.0 3.90e+01 - 2.30e-02 2.68e-05h 1
5 1.4692139e+00 9.99e-01 4.94e+02 -1.0 5.07e+01 - 2.76e-04 1.46e-04h 1
6 3.1727722e+00 9.97e-01 3.76e+02 -1.0 8.08e+01 - 1.88e-06 9.77e-04h 11
7 3.1726497e+00 9.97e-01 2.12e+02 -1.0 9.98e-01 - 1.00e+00 7.94e-04h 1
8 8.2350196e+00 9.85e-01 4.29e+02 -1.0 1.51e+01 - 1.44e-03 7.81e-03h 8
9 8.2918294e+00 9.84e-01 4.71e+02 -1.0 3.94e+00 - 2.51e-01 2.49e-04h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 4.0282504e+01 8.72e-01 4.57e+02 -1.0 4.94e+00 - 1.00e+00 6.25e-02h 5
11 2.8603735e+02 2.66e-01 4.85e+02 -1.0 1.16e+00 - 5.54e-01 5.00e-01h 2
12 3.9918132e+02 6.89e-03 2.90e+02 -1.0 1.90e-01 - 8.75e-01 1.00e+00h 1
13 3.9783737e+02 3.06e-04 2.86e+02 -1.0 4.68e-02 - 2.50e-02 1.00e+00h 1
14 3.5241265e+02 2.33e-02 2.70e+01 -1.0 4.59e-01 - 1.00e+00 1.00e+00h 1
15 3.5876922e+02 7.37e-03 4.63e+00 -1.0 2.66e-01 - 6.99e-01 1.00e+00h 1
16 3.6046938e+02 7.34e-05 8.17e-03 -1.0 3.14e-02 - 1.00e+00 1.00e+00h 1
17 3.6038250e+02 2.71e-07 8.49e-05 -2.5 1.42e-03 - 1.00e+00 1.00e+00h 1
18 3.6037976e+02 2.63e-10 7.95e-08 -5.7 4.63e-05 - 1.00e+00 1.00e+00h 1
19 3.6037976e+02 1.11e-16 5.96e-14 -8.6 3.06e-08 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 19
(scaled) (unscaled)
Objective...............: 3.6037976240508465e+02 3.6037976240508465e+02
Dual infeasibility......: 5.9563010060664200e-14 5.9563010060664200e-14
Constraint violation....: 1.1102230246251565e-16 1.1102230246251565e-16
Complementarity.........: 1.5755252497604069e-09 1.5755252497604069e-09
Overall NLP error.......: 1.5755252497604069e-09 1.5755252497604069e-09
Number of objective function evaluations = 47
Number of objective gradient evaluations = 20
Number of constraint evaluations = 47
Number of constraint Jacobian evaluations = 20
Number of Lagrangian Hessian evaluations = 19
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 3.583
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 3.584
EXIT: Optimal Solution Found (tol = 1.0e-08).
MadNLP converges in 19 barrier iterations. The solution is:
println("Objective: ", results.objective)
println("Solution: ", results.solution)
Objective: 360.37976240508465
Solution: [-0.7921232178470455, -1.2624298435831807]
Solution 1: updating the starting solution
We have found a solution to the problem. Now, what happens if we update the parameters inside nlp?
nlp.params .= [101.0, 1.1]
2-element Vector{Float64}:
101.0
1.1
As MadNLP starts its algorithm at nlp.meta.x0, we copy the previous solution into the initial vector:
copyto!(NLPModels.get_x0(nlp), results.solution)
2-element Vector{Float64}:
-0.7921232178470455
-1.2624298435831807
Solving the problem again with MadNLP, we observe that it now converges in only 6 iterations:
results_new = madnlp(nlp)
nothing
This is MadNLP version v0.8.5, running with umfpack
Number of nonzeros in constraint Jacobian............: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 1
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.6431987e+02 1.00e-02 5.30e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.5966077e+02 9.78e-03 3.35e+01 -1.0 9.46e-01 - 1.00e+00 1.97e-02h 1
2 3.6511632e+02 1.62e-04 3.18e-01 -1.0 3.10e-02 - 1.00e+00 1.00e+00h 1
3 3.6443966e+02 1.27e-05 3.40e-04 -1.7 1.00e-02 - 1.00e+00 1.00e+00h 1
4 3.6432064e+02 4.57e-07 4.25e-05 -3.8 1.92e-03 - 1.00e+00 1.00e+00h 1
5 3.6431987e+02 3.20e-11 3.56e-09 -5.7 1.61e-05 - 1.00e+00 1.00e+00h 1
6 3.6431986e+02 3.66e-15 3.55e-13 -9.0 1.78e-07 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 6
(scaled) (unscaled)
Objective...............: 5.9863692672462129e+01 3.6431986176592244e+02
Dual infeasibility......: 3.5527136788005009e-13 2.1621188045252371e-12
Constraint violation....: 3.6637359812630166e-15 3.6637359812630166e-15
Complementarity.........: 1.4944579942401076e-10 9.0950074338964201e-10
Overall NLP error.......: 9.0950074338964201e-10 9.0950074338964201e-10
Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of constraint evaluations = 7
Number of constraint Jacobian evaluations = 7
Number of Lagrangian Hessian evaluations = 6
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.001
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 0.001
EXIT: Optimal Solution Found (tol = 1.0e-08).
By decreasing the initial barrier parameter, we can reduce the total number of iterations to 5:
results_new = madnlp(nlp; mu_init=1e-7)
nothing
This is MadNLP version v0.8.5, running with umfpack
Number of nonzeros in constraint Jacobian............: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 1
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.6431987e+02 1.00e-02 5.30e+01 -7.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.5960153e+02 9.81e-03 3.36e+01 -7.0 1.10e+00 - 1.00e+00 1.74e-02h 1
2 3.6432961e+02 7.65e-05 5.35e-01 -7.0 1.97e-02 - 9.75e-01 1.00e+00h 1
3 3.6431986e+02 1.20e-08 1.48e-05 -7.0 2.52e-04 - 1.00e+00 1.00e+00h 1
4 3.6431986e+02 2.43e-14 9.52e-13 -7.0 4.87e-07 - 1.00e+00 1.00e+00h 1
5 3.6431986e+02 2.22e-16 2.84e-14 -9.0 9.55e-09 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 5
(scaled) (unscaled)
Objective...............: 5.9863692672462285e+01 3.6431986176592341e+02
Dual infeasibility......: 2.8421709430404007e-14 1.7296950436201898e-13
Constraint violation....: 2.2204460492503131e-16 2.2204460492503131e-16
Complementarity.........: 1.4937865084190974e-10 9.0909208897731444e-10
Overall NLP error.......: 9.0909208897731444e-10 9.0909208897731444e-10
Number of objective function evaluations = 6
Number of objective gradient evaluations = 6
Number of constraint evaluations = 6
Number of constraint Jacobian evaluations = 6
Number of Lagrangian Hessian evaluations = 5
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.001
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 0.001
EXIT: Optimal Solution Found (tol = 1.0e-08).
The final solution differs slightly from the previous one, as we have updated the parameters inside the model nlp:
results_new.solution
2-element Vector{Float64}:
-0.7920519043023881
-1.26254350829728
As with the primal solution, we can pass an initial dual solution to MadNLP using the function get_y0. We can overwrite the value of y0 in nlp using:
copyto!(NLPModels.get_y0(nlp), results.multipliers)
In our particular example, setting the dual multipliers has only a minor influence on the convergence of the algorithm.
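Putting the pieces together, the naive warm-start recipe used in this section condenses into a few lines (a sketch re-using only the calls shown above):

```julia
# Naive warm-start: re-use the primal-dual solution of a previous solve.
nlp.params .= [101.0, 1.1]                            # small parameter update
copyto!(NLPModels.get_x0(nlp), results.solution)      # primal warm-start
copyto!(NLPModels.get_y0(nlp), results.multipliers)   # dual warm-start
results_new = madnlp(nlp; mu_init=1e-7)               # start with a small barrier parameter
```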
Advanced solution: keeping the solver in memory
The previous solution works, but it wastes resources: each call to the function madnlp creates a new instance of MadNLPSolver, leading to a significant number of memory allocations. A better approach is to keep the solver in memory, giving more fine-grained control over the warm-start.
We start by creating a new model nlp and instantiating a new MadNLPSolver attached to this model:
nlp = HS15Model()
solver = MadNLP.MadNLPSolver(nlp)
Interior point solver
number of variables......................: 2
number of constraints....................: 2
number of nonzeros in lagrangian hessian.: 3
number of nonzeros in constraint jacobian: 4
status...................................: INITIAL
Note that
nlp === solver.nlp
true
Hence, updating the parameter values in nlp automatically updates the parameters used by the solver.
We solve the problem using the function solve!:
results = MadNLP.solve!(solver)
"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
Before warmstarting MadNLP, we proceed as before and update the parameters and the primal starting point in nlp:
nlp.params .= [101.0, 1.1]
copyto!(NLPModels.get_x0(nlp), results.solution)
2-element Vector{Float64}:
-0.7921232178470455
-1.2624298435831807
MadNLP stores in memory the dual solution computed during the first solve. One can access the (scaled) multipliers as
solver.y
2-element Vector{Float64}:
-477.17046873911977
-3.126200044003132e-9
and the multipliers of the bound constraints with
[solver.zl.values solver.zu.values]
4×2 Matrix{Float64}:
0.0 1.93936e-9
0.0 0.0
477.17 0.0
3.1262e-9 0.0
If we call the function solve! a second time, MadNLP uses the following rules:
- The initial primal solution is copied from NLPModels.get_x0(nlp).
- The initial dual solution is taken directly from the values stored in solver.y, solver.zl and solver.zu. (MadNLP does not use the values stored in nlp.meta.y0 in the second solve.)
As before, it is advisable to decrease the initial barrier parameter: if the initial point is close enough to the solution, this drastically reduces the total number of iterations. We solve the problem again using:
MadNLP.solve!(solver; mu_init=1e-7)
nothing
The options set during resolve may not have an effect
19 3.6431987e+02 1.11e-16 3.24e+00 -7.0 3.06e-08 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
20 3.6431986e+02 1.31e-08 2.85e-04 -7.0 3.61e-04 - 1.00e+00 1.00e+00h 1
21 3.6431986e+02 5.72e-13 9.59e-11 -7.0 2.38e-06 - 1.00e+00 1.00e+00h 1
22 3.6431986e+02 3.33e-16 2.95e-14 -9.0 1.57e-09 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 22
(scaled) (unscaled)
Objective...............: 3.6431986176130016e+02 3.6431986176130016e+02
Dual infeasibility......: 2.9483973221675962e-14 2.9483973221675962e-14
Constraint violation....: 3.3306690738754696e-16 3.3306690738754696e-16
Complementarity.........: 5.6584736895854676e-10 5.6584736895854676e-10
Overall NLP error.......: 5.6584736895854676e-10 5.6584736895854676e-10
Number of objective function evaluations = 51
Number of objective gradient evaluations = 24
Number of constraint evaluations = 51
Number of constraint Jacobian evaluations = 25
Number of Lagrangian Hessian evaluations = 23
Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.372
Total wall-clock secs in linear solver = 0.000
Total wall-clock secs in NLP function evaluations = 0.000
Total wall-clock secs = 0.372
EXIT: Optimal Solution Found (tol = 1.0e-08).
Three observations are in order:
- The iteration count starts directly from the previous count (as stored in solver.cnt.k).
- MadNLP converges in only 4 iterations.
- The factorization stored in solver is directly re-used, leading to significant savings. As a consequence, the warm-start does not work if the structure of the problem changes between the first and the second solve (e.g., if variables or constraints are added to the problem).
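A natural application of the in-memory solver is tracking the solution along a sweep of parameter values. The following sketch re-uses only the calls shown in this tutorial; sweeping the first parameter is purely illustrative:

```julia
# Track the solution of the parametric problem along a parameter sweep,
# keeping a single MadNLPSolver alive across solves.
nlp = HS15Model()
solver = MadNLP.MadNLPSolver(nlp)
results = MadNLP.solve!(solver)   # cold start for the initial parameters

for σ in 101.0:1.0:105.0
    nlp.params[1] = σ                                  # small parameter update
    copyto!(NLPModels.get_x0(nlp), results.solution)   # primal warm-start;
    # the duals in solver.y, solver.zl, solver.zu are re-used automatically
    results = MadNLP.solve!(solver; mu_init=1e-7)
    println("σ = $σ, objective = $(results.objective)")
end
```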