Running MadNLP in arbitrary precision
MadNLP is written in pure Julia, and as such supports solving optimization problems in arbitrary precision. By default, MadNLP adapts its precision to the NLPModel passed as input. Most models use Float64 (in fact, almost all optimization modelers are implemented in double precision), but for certain applications it can be useful to work in arbitrary precision to get a more accurate solution.
Several packages can be used to instantiate an optimization model in arbitrary precision in Julia. Most of them leverage the flexibility offered by NLPModels.jl. In particular, we recommend ExaModels.jl, which we use in the example below.
Defining a problem in arbitrary precision
As a demonstration, we implement the model airport from CUTEst using ExaModels. The code reads:
using ExaModels

function airport_model(T)
    N = 42
    # Data
    r = T[0.09, 0.3, 0.09, 0.45, 0.5, 0.04, 0.1, 0.02, 0.02, 0.07, 0.4, 0.045, 0.05, 0.056, 0.36, 0.08, 0.07, 0.36, 0.67, 0.38, 0.37, 0.05, 0.4, 0.66, 0.05, 0.07, 0.08, 0.3, 0.31, 0.49, 0.09, 0.46, 0.12, 0.07, 0.07, 0.09, 0.05, 0.13, 0.16, 0.46, 0.25, 0.1]
    cx = T[-6.3, -7.8, -9.0, -7.2, -5.7, -1.9, -3.5, -0.5, 1.4, 4.0, 2.1, 5.5, 5.7, 5.7, 3.8, 5.3, 4.7, 3.3, 0.0, -1.0, -0.4, 4.2, 3.2, 1.7, 3.3, 2.0, 0.7, 0.1, -0.1, -3.5, -4.0, -2.7, -0.5, -2.9, -1.2, -0.4, -0.1, -1.0, -1.7, -2.1, -1.8, 0.0]
    cy = T[8.0, 5.1, 2.0, 2.6, 5.5, 7.1, 5.9, 6.6, 6.1, 5.6, 4.9, 4.7, 4.3, 3.6, 4.1, 3.0, 2.4, 3.0, 4.7, 3.4, 2.3, 1.5, 0.5, -1.7, -2.0, -3.1, -3.5, -2.4, -1.3, 0.0, -1.7, -2.1, -0.4, -2.9, -3.4, -4.3, -5.2, -6.5, -7.5, -6.4, -5.1, 0.0]
    # Wrap all data in a single iterator for ExaModels
    data = [(i, cx[i], cy[i], r[i]) for i in 1:N]
    IJ = [(i, j) for i in 1:N-1 for j in i+1:N]
    # Write the model using ExaModels
    core = ExaModels.ExaCore(T)
    x = ExaModels.variable(core, 1:N, lvar = -10.0, uvar = 10.0)
    y = ExaModels.variable(core, 1:N, lvar = -10.0, uvar = 10.0)
    ExaModels.objective(
        core,
        ((x[i] - x[j])^2 + (y[i] - y[j])^2) for (i, j) in IJ
    )
    ExaModels.constraint(core, (x[i] - dcx)^2 + (y[i] - dcy)^2 - dr for (i, dcx, dcy, dr) in data; lcon = -Inf)
    return ExaModels.ExaModel(core)
end
airport_model (generic function with 1 method)
The function airport_model takes as input the type used to define the model in ExaModels. For example, ExaCore(Float64) instantiates a model with Float64, whereas ExaCore(Float32) instantiates a model using Float32. Thus, instantiating the instance airport using Float32 simply amounts to
nlp = airport_model(Float32)
An ExaModel{Float32, Vector{Float32}, ...}
Problem name: Generic
All variables: ████████████████████ 84 All constraints: ████████████████████ 42
free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ████████████████████ 42
low/upp: ████████████████████ 84 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: (-47.06% sparsity) 5250 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ████████████████████ 42
nnzj: ( 97.62% sparsity) 84
We verify that the model is correctly instantiated using Float32:
using NLPModels
x0 = NLPModels.get_x0(nlp)
println(typeof(x0))
Vector{Float32}
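We can push the verification one step further by evaluating the model callbacks at the starting point: the returned values inherit the element type of the model. This quick check only uses the generic NLPModels API (the variable names fval and cval are ours):
fval = NLPModels.obj(nlp, x0)   # objective value, expected to be a Float32
cval = NLPModels.cons(nlp, x0)  # constraint values, expected to be a Vector{Float32}
println(typeof(fval), " ", typeof(cval))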
Solving a problem in Float32
Now that we have defined our model in Float32, we solve it using MadNLP. As nlp uses Float32, MadNLP will automatically adjust its internal types to Float32 during the instantiation. By default, the convergence tolerance is also adjusted to the input type, such that tol = sqrt(eps(T)). Hence, in our case the tolerance is set automatically to
tol = sqrt(eps(Float32))
0.00034526698f0
We solve the problem using Lapack as the linear solver:
using MadNLP
results = madnlp(nlp; linear_solver=LapackCPUSolver)
"Execution stats: Optimal Solution Found (tol = 1.0e-03)."
Note that the distribution of Lapack shipped with Julia supports Float32, so here we do not have to worry about whether the type is supported by the linear solver. Almost all linear solvers shipped with MadNLP support Float32.
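For instance, LDLSolver, the pure-Julia LDL factorization we use for quadruple precision below, should also accept Float32 on this small instance (a sketch, not part of the original run):
results_ldl = madnlp(nlp; linear_solver=LDLSolver)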
The final solution is
results.solution
84-element Vector{Float32}:
-6.1031427
-7.3193717
-8.700601
-6.5470595
-5.1562366
-1.8462974
-3.324512
-0.49233565
1.357289
3.8221405
⋮
-2.6825128
-3.1404326
-3.998562
-4.9741974
-6.140154
-7.1049857
-5.742854
-4.6158533
0.31056255
and the objective is
results.objective
47925.777f0
For completeness, we compare with the solution returned when we solve the same problem using Float64:
nlp_64 = airport_model(Float64)
results_64 = madnlp(nlp_64; linear_solver=LapackCPUSolver)
"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
The final objective is now
results_64.objective
47952.701409727124
As expected when solving an optimization problem with Float32, the relative difference between the two solutions is far from being negligible:
rel_diff = abs(results.objective - results_64.objective) / results_64.objective
0.0005614713078846997
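The same observation holds for the primal solutions themselves. One way to quantify the discrepancy (output not shown; the name rel_err is ours) is to compare the two iterates in norm:
using LinearAlgebra
rel_err = norm(results.solution .- results_64.solution) / norm(results_64.solution)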
Solving a problem in Float128
Now, we go in the opposite direction and solve a problem using Float128 to get better accuracy. We start by loading the library Quadmath to work in quadruple precision:
using Quadmath
We can instantiate our problem using Float128 directly as:
nlp_128 = airport_model(Float128)
An ExaModel{Quadmath.Float128, Vector{Quadmath.Float128}, ...}
Problem name: Generic
All variables: ████████████████████ 84 All constraints: ████████████████████ 42
free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ████████████████████ 42
low/upp: ████████████████████ 84 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: (-47.06% sparsity) 5250 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ████████████████████ 42
nnzj: ( 97.62% sparsity) 84
Unfortunately, few linear solvers support Float128 out of the box. Currently, the only solver supporting quadruple precision in MadNLP is LDLSolver, which implements an LDL factorization in pure Julia. LDLSolver is not suited to large-scale nonconvex nonlinear programs, but it works if the problem is small enough (as is the case here). Replacing the linear solver with LDLSolver, solving the problem with MadNLP simply amounts to
results_128 = madnlp(nlp_128; linear_solver=LDLSolver)
"Execution stats: Optimal Solution Found (tol = 1.0e-17)."
Note that the final tolerance is much lower than before. We get the solution in quadruple precision:
results_128.solution
84-element Vector{Quadmath.Float128}:
-6.10422347863919356666749826573300073e+00
-7.32016752442977906900809171208160818e+00
-8.70225023173458012537896479556335666e+00
-6.54778301322462582207971287827160706e+00
-5.15677166601750991266181947550757393e+00
-1.84695049136416993178016283777128213e+00
-3.32537559819190676009880027960116339e+00
-4.92515252178894101681359752756705026e-01
1.35832328737832717498669874418824396e+00
3.82340249834614261935123940565134015e+00
⋮
-2.68405536653066915387202143374756106e+00
-3.14226951460080411136052431390010087e+00
-4.00022306673508456671657588093297198e+00
-4.97642234226336114863756460763420060e+00
-6.14153114492699394531224093340134921e+00
-7.10621612725666958169271928085987277e+00
-5.74357156698510078393620798929958382e+00
-4.61682283049559465153654287279013926e+00
3.09035770774407262389794499332043106e-01
as well as the final objective:
results_128.objective
4.79527016798316551722779579149440950e+04
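As a final check, we can compare the Float64 and Float128 objectives in the same way as before. The relative gap (output not shown; the name rel_diff_128 is ours) should now be tiny, on the order of the Float64 tolerance:
rel_diff_128 = abs(Float128(results_64.objective) - results_128.objective) / abs(results_128.objective)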