update doc with readme
tmigot committed Feb 15, 2020
1 parent 9c277a0 commit 6c594c8
Showing 2 changed files with 101 additions and 51 deletions.
27 changes: 23 additions & 4 deletions docs/src/api.md
# State
## Types
```@docs
Stopping.GenericStatemod
Stopping.NLPAtX
Stopping.LSAtT
```

## General Functions
```@docs
Stopping.update!
Stopping.reinit!
```

# Stopping
## Types
```@docs
Stopping.GenericStopping
Stopping.StoppingMeta
```

## General Functions
```@docs
Stopping.start!
Stopping.update_and_start!
Stopping.stop!
Stopping.update_and_stop!
Stopping.reinit!
Stopping.fill_in!
Stopping.status
```

## Nonlinear admissibility functions
```@docs
Stopping.KKT
Stopping.unconstrained_check
Stopping.unconstrained2nd_check
Stopping.optim_check_bounded
```

## Line search admissibility functions
```@docs
Stopping.armijo
Stopping.wolfe
Stopping.armijo_wolfe
Stopping.shamanskii_stop
Stopping.goldstein
```
125 changes: 78 additions & 47 deletions docs/src/index.md
Documentation for Stopping.jl

Tools to ease the uniformization of stopping criteria in iterative solvers.

When a solver is called on an optimization model, four outcomes may happen:

1. the approximate solution is obtained, the problem is considered solved
2. the problem is declared unsolvable (unboundedness, infeasibility ...)
3. the maximum available resources are not sufficient to compute the solution
4. some algorithm dependent failure happens

This tool eases the first three items above. It defines a type

    mutable struct GenericStopping <: AbstractStopping
        problem       :: Any                  # an arbitrary instance of a problem
        meta          :: AbstractStoppingMeta # contains the used parameters
        current_state :: AbstractState        # the current state
    end

The [StoppingMeta](https://github.com/Goysa2/Stopping.jl/blob/master/src/Stopping/StoppingMetamod.jl) provides default tolerances, maximum resources, ... as well as (boolean) information on the result.
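
For instance, these defaults can be adjusted when building the meta. A sketch, assuming the keyword names `atol`, `rtol`, `max_iter`, and `max_time` match the fields in StoppingMetamod.jl (check your installed version):

```julia
using Stopping

# A sketch of customizing tolerances and resource limits through the meta.
# The keyword names (atol, rtol, max_iter, max_time) are assumed to follow
# the fields of StoppingMeta; see StoppingMetamod.jl for your version.
meta = StoppingMeta(atol = 1e-6, rtol = 1e-8, max_iter = 50, max_time = 10.0)
```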

### Your Stopping your way

The GenericStopping (with GenericState) provides a complete structure to handle stopping criteria.
Then, depending on the problem structure, you can specialize a new Stopping by
redefining a State and some functions specific to your problem.

We provide some specialization of the GenericStopping for optimization:
* [NLPStopping](https://github.com/Goysa2/Stopping.jl/blob/master/src/Stopping/NLPStoppingmod.jl) with [NLPAtX](https://github.com/Goysa2/Stopping.jl/blob/master/src/State/NLPAtXmod.jl) as a specialized State: for non-linear programming (based on [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl));
* [LS_Stopping](https://github.com/Goysa2/Stopping.jl/blob/master/src/Stopping/LineSearchStoppingmod.jl) with [LSAtT](https://github.com/Goysa2/Stopping.jl/blob/master/src/State/LSAtTmod.jl) as a specialized State: for 1d optimization;
* more to come...
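
As a sketch of this specialization mechanism, here is a hypothetical Stopping for a fixed-point problem; the names `FixedPointStopping` and `fixed_point_check` are illustrative only and not part of the package:

```julia
using LinearAlgebra, Stopping

# Hypothetical specialization (illustrative, not part of the package):
# a Stopping for the fixed-point problem x = g(x).
mutable struct FixedPointStopping <: AbstractStopping
    problem       :: Any                  # the map g whose fixed point we seek
    meta          :: AbstractStoppingMeta # tolerances and resource limits
    current_state :: AbstractState        # the current iterate
end

# Optimality is then measured by the fixed-point residual ||g(x) - x||.
fixed_point_check(g, state) = norm(g(state.x) - state.x)
```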

In these examples, the function `optimality_residual`, which computes the residual of the optimality conditions, is an additional attribute of the types.

## Functions

The tool provides two main functions:
* `start!(stp :: AbstractStopping)` initializes the time and the tolerance at the starting point and checks whether the initial guess is optimal.
* `stop!(stp :: AbstractStopping)` checks optimality of the current guess, as well as failure of the system (unboundedness, for instance) and exhaustion of the maximum resources (number of function evaluations, elapsed time, ...).

Stopping uses the information furnished by the State to evaluate its functions. Communication between the two can be done through the following functions:
* `update_and_start!(stp :: AbstractStopping; kwargs...)` updates the State with the information furnished as kwargs and then calls start!.
* `update_and_stop!(stp :: AbstractStopping; kwargs...)` updates the State with the information furnished as kwargs and then calls stop!.
* `fill_in!(stp :: AbstractStopping, x :: Iterate)` fills in the State with all the information required to correctly evaluate the stopping functions. This can prove useful, for instance, if the user does not trust the information furnished by the algorithm in the State.
* `reinit!(stp :: AbstractStopping)` reinitializes the entries of the Stopping so it can be reused for another call.
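
Put together, a typical solver built on Stopping follows this skeleton; `compute_step` is a placeholder for the actual algorithmic step, not a function of the package:

```julia
using Stopping

function solve(stp :: AbstractStopping)
    x = stp.current_state.x
    #Check the initial guess and start the counters (time, evaluations, ...)
    OK = update_and_start!(stp, x = x)
    while !OK
        x = compute_step(stp.problem, x)  #hypothetical step computation
        #Update the State and test all the stopping criteria at once
        OK = update_and_stop!(stp, x = x)
    end
    return stp
end
```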

Consult the [HowTo tutorial](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/runhowto.jl) to learn more about the possibilities offered by Stopping.

You can also access other examples of algorithms in the [test/examples](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/) folder, which illustrate, for instance, the strength of Stopping with subproblems:
* Consult the [OptimSolver tutorial](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/run-optimsolver.jl) for more on how to use Stopping with nested algorithms.
* Check the [Benchmark tutorial](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/benchmark.jl) to see how Stopping can be combined with [SolverBenchmark.jl](https://juliasmoothoptimizers.github.io/SolverBenchmark.jl/).
* Stopping can be adapted to closed solvers via a buffer function as in [Buffer tutorial](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/buffer.jl) for an instance with [Ipopt](https://github.com/JuliaOpt/Ipopt.jl) via [NLPModelsIpopt](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl).

## How to install
Install and test the Stopping package with the Julia package manager:
```julia
pkg> add Stopping
pkg> test Stopping
```
You can access the most up-to-date version of the Stopping package using:
```julia
pkg> add https://github.com/Goysa2/Stopping.jl
pkg> test Stopping
```
## Example

As an example, a naive version of the Newton method is provided [here](https://github.com/Goysa2/Stopping.jl/blob/master/test/examples/newton.jl). First we import the packages:
```
using LinearAlgebra, NLPModels, Stopping, Test
```

We consider a quadratic test function, and create an unconstrained quadratic optimization problem using [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl):
```
A = rand(5, 5); Q = A' * A;
f(x) = 0.5 * x' * Q * x
nlp = ADNLPModel(f, ones(5))
```


We now initialize the NLPStopping. First, we create a State:
```
nlp_at_x = NLPAtX(ones(5))
```
We use [unconstrained_check](https://github.com/Goysa2/Stopping.jl/blob/master/src/Stopping/nlp_admissible_functions.jl) as an optimality function
```
stop_nlp = NLPStopping(nlp, unconstrained_check, nlp_at_x)
```
Note that, since we used a default State, an alternative would have been:
```
stop_nlp = NLPStopping(nlp)
```


Here is a basic version of Newton's method to illustrate how to use Stopping.
```
function newton(stp :: NLPStopping)
    #Notations
    pb = stp.pb; state = stp.current_state;
    #Initialization
    xt = state.x
    #First, call start! to check optimality and set an initial configuration
    #(start the time counter, set the relative error, ...)
    OK = update_and_start!(stp, x = xt, gx = grad(pb, xt), Hx = hess(pb, xt))
    while !OK
        #Compute the Newton direction (state.Hx only stores the lower triangle)
        d = (state.Hx + state.Hx' - diagm(0 => diag(state.Hx))) \ (- state.gx)
        #Update the iterate
        xt = xt + d
        #Update the State and call the Stopping with stop!
        OK = update_and_stop!(stp, x = xt, gx = grad(pb, xt), Hx = hess(pb, xt))
    end
    return stp
end
```
Finally, we can call the algorithm with our Stopping:
```
stop_nlp = newton(stop_nlp)
```

and consult the Stopping to know what happened
```
#We can then ask stop_nlp the final status
@test :Optimal in status(stop_nlp, list = true)
#Explore the final values in stop_nlp.current_state
printstyled("Final solution is $(stop_nlp.current_state.x)", color = :green)
```

We reached optimality! Thanks to the Stopping structure, this simple-looking
algorithm verified at each step that:
- the time limit has been respected;
- the number of evaluations of the problem is not excessive;
- the problem is not unbounded (w.r.t. x and f(x));
- there is no NaN in x, f(x), g(x), H(x);
- the maximum number of iterations (calls to stop!) is not exceeded.
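
These safeguards are configurable when the Stopping is created. A sketch, assuming the keyword arguments are forwarded to the underlying StoppingMeta (check the constructor in your installed version):

```julia
# Illustrative sketch: tighten the tolerance and cap the resources.
# The keywords are assumed to be passed on to the StoppingMeta.
stop_nlp = NLPStopping(nlp, unconstrained_check, NLPAtX(ones(5)),
                       atol = 1e-7, max_iter = 100, max_time = 5.0)
```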

## Long-Term Goals

Stopping is intended as a tool for improving the reusability and robustness of implementations of iterative algorithms. We warmly welcome any feedback or comments leading to potential improvements.

Future work will address more sophisticated problems, such as mixed-integer optimization problems and optimization under uncertainty. The list of suggested optimality functions will be enriched with state-of-the-art conditions.
