
Commit

Remove American spelling aliases and integrate examples into documentation
rafaelbailo committed Jun 6, 2024
1 parent 1441acc commit fbccd17
Showing 39 changed files with 112 additions and 157 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -3,6 +3,7 @@
*.jl.mem
/Manifest.toml
docs/build/
docs/parsed/
docs/Manifest.toml

build.*
Expand Down
49 changes: 46 additions & 3 deletions docs/make.jl
Original file line number Diff line number Diff line change
@@ -1,11 +1,53 @@
# using Pkg;
# Pkg.activate(".");

push!(LOAD_PATH, "../src/")

using ConsensusBasedX
using Documenter

extension(s) = split(s, ".")[end]
is_md(s) = extension(s) == "md"

const SRC_DIR = joinpath(@__DIR__, "src")
const PARSED_DIR = joinpath(@__DIR__, "parsed")
const EXAMPLE_DIR = joinpath(@__DIR__, "../examples")

rm(PARSED_DIR, force = true, recursive = true)
mkdir(PARSED_DIR)

function parse(source, target)
touch(target)
open(target, "w") do file
for line in readlines(source)
if first(line, 2) == "{{" && last(line, 2) == "}}"
example_name = line[3:(end - 2)]
example = joinpath(EXAMPLE_DIR, example_name)

println(file, "!!! details \"Full example\"")
println(file, "\t```julia")
for src_line in readlines(example)
print(file, "\t")
println(file, src_line)
end
println(file, "\t```")
else
println(file, line)
end
end
end
return nothing
end
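As an illustrative check of the placeholder convention handled by `parse` above (the filename is hypothetical), a line delimited by `{{` and `}}` yields the example path:

```julia
# Hypothetical placeholder line, as it would appear in a docs/src .md file.
line = "{{basic_usage/example.jl}}"

# Reproduces the detection and extraction logic of `parse` above.
@assert first(line, 2) == "{{" && last(line, 2) == "}}"
example_name = line[3:(end - 2)]
@assert example_name == "basic_usage/example.jl"
```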

for (root, dirs, files) in walkdir(SRC_DIR)
for file in files
if is_md(file)
source = joinpath(root, file)
target = replace(source, SRC_DIR => PARSED_DIR)
mkpath(dirname(target))
# cp(source, target)
parse(source, target)
end
end
end

DocMeta.setdocmeta!(
ConsensusBasedX,
:DocTestSetup,
Expand All @@ -26,6 +68,7 @@ makedocs(;
assets = String[],
footer = "Copyright © 2024 [Dr Rafael Bailo](https://rafaelbailo.com/) and [Purpose-Driven Interacting Particle Systems Group](https://github.com/PdIPS). [MIT License](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/LICENSE).",
),
source = "parsed",
pages = [
"Home" => "index.md",
"Mathematical background" => [
Expand Down
6 changes: 4 additions & 2 deletions docs/src/distribution_sampling.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,11 +9,12 @@ For instance, if `D = 2`, you can sample `exp(-αf)` by running:
out = sample(f, D = 2, extended_output=true)
out.sample
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/sample_with_keywords.jl).

!!! note
You must always provide `D`.

{{basic_usage/sample_with_keywords.jl}}


## Using a `config` object

Expand All @@ -23,11 +24,12 @@ config = (; D = 2, extended_output=true)
out = sample(f, config)
out.sample
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/sample_with_config.jl).

!!! note
If you pass a `Dict` instead, it will be converted to a `NamedTuple` automatically.

{{basic_usage/sample_with_config.jl}}


## Running on minimisation mode

Expand Down
2 changes: 0 additions & 2 deletions docs/src/extended_output.md
Original file line number Diff line number Diff line change
Expand Up @@ -11,5 +11,3 @@ The extended output is a `NamedTuple` which contains:
- `method_cache`, by default a `ConsensusBasedOptimisationCache` object;
- `particle_dynamic`, by default a `ParticleDynamic` object;
- `particle_dynamic_cache`, by default a `ParticleDynamicCache` object.

TODO: ref objects.
17 changes: 11 additions & 6 deletions docs/src/function_minimisation.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,11 +6,12 @@ For instance, if `D = 2`, you can minimise `f` by running:
```julia
minimise(f, D = 2)
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/minimise_with_keywords.jl).

!!! note
You must always provide `D`.

{{basic_usage/minimise_with_keywords.jl}}


## Using a `config` object

Expand All @@ -19,20 +20,22 @@ For more advanced usage, you will want to select several options. You can pass these as a `NamedTuple`:
config = (; D = 2)
minimise(f, config)
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/minimise_with_config.jl).

!!! note
If you pass a `Dict` instead, it will be converted to a `NamedTuple` automatically.

This is a version of the full-code example above, using `config` instead:
{{basic_usage/minimise_with_config.jl}}


## Aliases

ConsensusBasedX.jl also defines `minimize`, `optimise`, and `optimize`. These are all aliases of `minimise`.
ConsensusBasedX.jl also defines `optimise` as an alias of `minimise`.


## Maximisation

ConsensusBasedX.jl also defines `maximise` (and its alias, `maximize`) for convenience. If you call
ConsensusBasedX.jl also defines `maximise` for convenience. If you call
```julia
maximise(f, D = 2)
```
Expand All @@ -43,8 +46,10 @@ maximise(f, config)
```
`maximise` will attempt to define `g(x) = -f(x)` and call `minimise(g, config)`.


Full-code examples are provided for the [keyword](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/maximise_with_keywords.jl) and [config](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/maximise_with_config.jl) approaches.
These are full-code examples using keywords
{{basic_usage/maximise_with_keywords.jl}}
or using `config`
{{basic_usage/maximise_with_config.jl}}
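As a minimal sketch (toy objective assumed) of the negation described above:

```julia
# maximise(f, config) is described as defining g(x) = -f(x) and minimising g.
f(x) = -(x[1] - 1)^2  # toy objective, maximum value 0 at x = [1]
g(x) = -f(x)
@assert g([1.0]) == 0.0
@assert g([2.0]) == 1.0  # minimising g is the same as maximising f
```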


## Method reference
Expand Down
70 changes: 2 additions & 68 deletions docs/src/low_level_examples.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,75 +5,9 @@ We provide two low-level interface examples for the convenience of advanced users
## Manual method definition

This example bypasses the `minimise` interface, and defines the `ParticleDynamic` and `ConsensusBasedOptimisation` structs directly. However, [`ConsensusBasedX.construct_particle_dynamic_cache`](@ref) is used to construct the caches:
```julia
config =
(; D = 2, N = 20, M = 1, α = 10.0, λ = 1.0, σ = 1.0, Δt = 0.1, verbosity = 0)

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

X₀ = [[rand(config.D) for n in 1:(config.N)] for m in 1:(config.M)]

correction = HeavisideCorrection()
noise = IsotropicNoise
method =
ConsensusBasedOptimisation(f, correction, noise, config.α, config.λ, config.σ)

Δt = 0.1
particle_dynamic = ParticleDynamic(method, Δt)

particle_dynamic_cache =
construct_particle_dynamic_cache(config, X₀, particle_dynamic)

method_cache = particle_dynamic_cache.method_cache

initialise_particle_dynamic_cache!(X₀, particle_dynamic, particle_dynamic_cache)
initialise_dynamic!(particle_dynamic, particle_dynamic_cache)
compute_dynamic!(particle_dynamic, particle_dynamic_cache)
finalise_dynamic!(particle_dynamic, particle_dynamic_cache)

out = wrap_output(X₀, particle_dynamic, particle_dynamic_cache)

@show out.minimiser
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/low_level/low_level.jl).
{{low_level/low_level.jl}}

## Manual stepping

This bypasses the `compute_dynamic!` method, performing the stepping manually instead:
```julia
config =
(; D = 2, N = 20, M = 1, α = 10.0, λ = 1.0, σ = 1.0, Δt = 0.1, verbosity = 0)

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

X₀ = [[rand(config.D) for n in 1:(config.N)] for m in 1:(config.M)]

correction = HeavisideCorrection()
noise = IsotropicNoise
method =
ConsensusBasedOptimisation(f, correction, noise, config.α, config.λ, config.σ)

Δt = 0.1
particle_dynamic = ParticleDynamic(method, Δt)

particle_dynamic_cache =
construct_particle_dynamic_cache(config, X₀, particle_dynamic)

method_cache = particle_dynamic_cache.method_cache

initialise_particle_dynamic_cache!(X₀, particle_dynamic, particle_dynamic_cache)
initialise_dynamic!(particle_dynamic, particle_dynamic_cache)

for it in 1:100
for m in 1:(config.M)
compute_dynamic_step!(particle_dynamic, particle_dynamic_cache, m)
end
end

finalise_dynamic!(particle_dynamic, particle_dynamic_cache)

out = wrap_output(X₀, particle_dynamic, particle_dynamic_cache)

@show out.minimiser
```
[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/low_level/manual_stepping.jl).
{{low_level/manual_stepping.jl}}
2 changes: 1 addition & 1 deletion docs/src/method_parameters.md
Original file line number Diff line number Diff line change
Expand Up @@ -36,4 +36,4 @@ Consensus-based optimisation requires three parameters:
!!! tip
A low value of `σ` and a high value of `λ` make the particles converge towards the consensus point more directly; this is a good idea if you have a very good guess for the global minimiser (see [Particle initialisation](@ref)). A high value of `σ` and a low value of `λ` make the particles explore more of the landscape before converging, which is useful if your initial guess is bad. Similarly, a higher value of `α` biases the consensus point towards the current best particle, which is only desirable if your initial guess is good.

[Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/method_parameters.jl).
{{basic_usage/method_parameters.jl}}
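A hedged sketch of how the tip above might translate into parameter choices (the values are illustrative, not recommendations):

```julia
using ConsensusBasedX

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

# Exploitative settings: converge directly, for a trusted initial guess.
minimise(f, D = 2, σ = 0.5, λ = 2.0, α = 100.0, initial_guess = 1.0)

# Explorative settings: survey the landscape first, for a poor initial guess.
minimise(f, D = 2, σ = 2.0, λ = 0.5, α = 10.0)
```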
8 changes: 6 additions & 2 deletions docs/src/noise_types.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,7 +9,9 @@ By default, [Consensus-Based Optimisation](@ref) uses so-called *isotropic noise
+ \sqrt{2\sigma^2}
\left\| x_t^i - c_\alpha(x_t) \right\| \mathrm{d}B_t^i,
```
where ``B_t^i`` are independent Brownian motions in ``D`` dimensions. The intensity of the noise depends on the distance of each particle to the consensus point, ``\left\| x_t^i - c_\alpha(x_t) \right\|``. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/isotropic_noise.jl).
where ``B_t^i`` are independent Brownian motions in ``D`` dimensions. The intensity of the noise depends on the distance of each particle to the consensus point, ``\left\| x_t^i - c_\alpha(x_t) \right\|``.

{{advanced_usage/isotropic_noise.jl}}

## Anisotropic noise

Expand All @@ -20,4 +22,6 @@ ConsensusBasedX.jl also offers *anisotropic noise*, given by
+ \sqrt{2\sigma^2}
\operatorname*{diag} \left( x_t^i - c_\alpha(x_t) \right) \mathrm{d}B_t^i,
```
The intensity of the noise now varies along each dimension. This can be selected with the option `noise = :AnisotropicNoise`. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/anisotropic_noise.jl).
The intensity of the noise now varies along each dimension. This can be selected with the option `noise = :AnisotropicNoise`.

{{advanced_usage/anisotropic_noise.jl}}
6 changes: 5 additions & 1 deletion docs/src/output_visualisation.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,4 +8,8 @@ out = minimise(f, D = 2, extended_output = true)
using ConsensusBasedXPlots
plot_CBO(out)
```
Full-code examples in [one dimension](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/output_visualisation_1D.jl) and [two dimensions](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/output_visualisation_2D.jl).

Examples in one dimension
{{advanced_usage/output_visualisation_1D.jl}}
and in two dimensions
{{advanced_usage/output_visualisation_2D.jl}}
4 changes: 3 additions & 1 deletion docs/src/parallelisation.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,7 +2,9 @@

Consensus-based optimisation is often used to tackle minimisation problems where `f(x)` is expensive to evaluate (for instance, parameter estimation in a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation) model). Therefore, ConsensusBasedX.jl does not use parallelisation by default, as it assumes the implementation of `f` will be parallelised if possible.

However, you can enable parallelisation by passing the option `parallelisation=:EnsembleParallelisation`. With this option, ConsensusBasedX.jl will run each of the `M` particle ensembles in parallel. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/parallelisation.jl).
However, you can enable parallelisation by passing the option `parallelisation=:EnsembleParallelisation`. With this option, ConsensusBasedX.jl will run each of the `M` particle ensembles in parallel.

!!! warning
Parallelisation leads to memory allocations which cannot be avoided, as there is overhead associated with distributing the tasks. If you activate parallelisation, and then run `minimise` in benchmark mode (see [Performance and benchmarking](@ref)), you will detect some allocations, and this is expected.

{{advanced_usage/parallelisation.jl}}
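A brief sketch (the ensemble count `M = 4` is chosen arbitrarily) of enabling the option described above:

```julia
using ConsensusBasedX

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

# Each of the M = 4 particle ensembles runs in parallel; some allocations
# are expected, as noted in the warning above.
config = (; D = 2, M = 4, parallelisation = :EnsembleParallelisation)
minimise(f, config)
```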
20 changes: 15 additions & 5 deletions docs/src/particle_initialisation.md
Original file line number Diff line number Diff line change
Expand Up @@ -10,24 +10,34 @@ If no options are provided, ConsensusBasedX.jl initialises its particles by samp

## Initial guess

If you have an initial guess for the global minimiser of the function `f`, you can pass the option `initial_guess` (or `initial_mean`). This can be a `Real`, if you want to use the same value for each coordinate of the initial guess, or an `AbstractVector` of size `size(initial_guess) = (D,)`. The particles will be initialised by sampling a normal distribution with mean `initial_guess`/`initial_mean` and unit variance. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/initial_guess.jl).
If you have an initial guess for the global minimiser of the function `f`, you can pass the option `initial_guess` (or `initial_mean`). This can be a `Real`, if you want to use the same value for each coordinate of the initial guess, or an `AbstractVector` of size `size(initial_guess) = (D,)`. The particles will be initialised by sampling a normal distribution with mean `initial_guess`/`initial_mean` and unit variance.

{{basic_usage/initial_guess.jl}}


### Specify a normal distribution

If you want to specify the variance of the normal distribution sampled around `initial_guess`/`initial_mean`, you can pass the option `initial_variance` (or `initial_covariance`). This can be a `Real`, if you want an isotropic distribution, an `AbstractVector` of size `size(initial_variance) = (D,)`, if you want to specify the variance along each axis, or an `AbstractMatrix` of size `size(initial_variance) = (D, D)`, if you want a general [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution). [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/initial_variance.jl).
If you want to specify the variance of the normal distribution sampled around `initial_guess`/`initial_mean`, you can pass the option `initial_variance` (or `initial_covariance`). This can be a `Real`, if you want an isotropic distribution, an `AbstractVector` of size `size(initial_variance) = (D,)`, if you want to specify the variance along each axis, or an `AbstractMatrix` of size `size(initial_variance) = (D, D)`, if you want a general [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution).

{{basic_usage/initial_variance.jl}}


### Specify a uniform distribution

You can instead initialise the particles by sampling uniformly from a box around `initial_guess`/`initial_mean`. To do so, pass the option `initialisation = :uniform`. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/initialisation_uniform.jl).
You can instead initialise the particles by sampling uniformly from a box around `initial_guess`/`initial_mean`. To do so, pass the option `initialisation = :uniform`.

{{basic_usage/initialisation_uniform.jl}}

You can specify the radius of the box with the option `initial_radius`, or the diameter with `initial_diameter`. This can be a `Real`, if you want a hypercube, or an `AbstractVector` of size `size(initial_guess) = (D,)`, if you want a hyperbox with different dimensions along each axis. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/initial_radius.jl).
You can specify the radius of the box with the option `initial_radius`, or the diameter with `initial_diameter`. This can be a `Real`, if you want a hypercube, or an `AbstractVector` of size `size(initial_guess) = (D,)`, if you want a hyperbox with different dimensions along each axis.

{{basic_usage/initial_radius.jl}}


## Custom initialisation

You can provide the initial position of the particles directly by passing the option `initial_particles`. This must be an `AbstractArray{<:Real,3}` of size `(D, N, M)`. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/initial_particles.jl).
You can provide the initial position of the particles directly by passing the option `initial_particles`. This must be an `AbstractArray{<:Real,3}` of size `(D, N, M)`.

{{basic_usage/initial_particles.jl}}

!!! tip
If you are initialising the particles yourself, you might find the [Distributions.jl](https://juliastats.org/Distributions.jl/stable/) package useful.
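As a hedged sketch of the tip above (assumes the Distributions.jl package is installed; the array follows the `(D, N, M)` size convention stated earlier):

```julia
using ConsensusBasedX
using Distributions, LinearAlgebra

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

D, N, M = 2, 20, 1

# Sample N particles from a narrow normal around (1, 1), then shape to (D, N, M).
initial_particles = reshape(rand(MvNormal(ones(D), 0.1 * I), N), D, N, M)

minimise(f, D = D, N = N, M = M, initial_particles = initial_particles)
```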
4 changes: 3 additions & 1 deletion docs/src/performance_benchmarking.md
Original file line number Diff line number Diff line change
Expand Up @@ -19,7 +19,9 @@ minimise(f, config)
This will run the beginning of the `minimise` routine as normal, creating the required caches. However, instead of computing the full particle evolution, it will only calculate a few steps, printing the output of `@time` to console, and returning the output of `@timed`.

!!! tip
The benchmark mode reports zero allocations with all the [Example objectives](@ref) provided by ConsensusBasedX.jl. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/advanced_usage/benchmark.jl). Wherever possible, your function should also lead to zero allocations.
The benchmark mode reports zero allocations with all the [Example objectives](@ref) provided by ConsensusBasedX.jl. Wherever possible, your function should also lead to zero allocations.

{{advanced_usage/benchmark.jl}}

!!! warning
If you are running [Consensus-Based Sampling](@ref) by calling `sample`, allocations might occur whenever the `root = :SymmetricRoot` is automatically selected (see [Root-covariance types](@ref)). To have zero allocations, you must run with the option `root = :AsymmetricRoot`. Nevertheless, despite the allocations, the `root = :SymmetricRoot` option offers better performance when `N` is large (roughly if `N > 10 * D`).
19 changes: 14 additions & 5 deletions docs/src/stopping_criteria.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,25 +5,34 @@ You can apply any of these criteria by passing them as keywords to the `minimise

## Energy threshold

`energy_threshold::Real = -Inf` sets a stopping threshold for the value of `f(v)`, where `v` is the current consensus point. For each ensemble, if `f(v) < energy_threshold`, the minimisation stops. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/energy_threshold.jl).
`energy_threshold::Real = -Inf` sets a stopping threshold for the value of `f(v)`, where `v` is the current consensus point. For each ensemble, if `f(v) < energy_threshold`, the minimisation stops.

{{basic_usage/energy_threshold.jl}}


## Energy tolerance

`energy_tolerance::Real = 1e-8` dictates a tolerance for the change in `f(v)`, where `v` is the current consensus point. For each ensemble, if `abs(f(v) - f(v_prev)) < energy_tolerance`, where `v_prev` is the previous consensus point, the minimisation stops. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/energy_tolerance.jl).
`energy_tolerance::Real = 1e-8` dictates a tolerance for the change in `f(v)`, where `v` is the current consensus point. For each ensemble, if `abs(f(v) - f(v_prev)) < energy_tolerance`, where `v_prev` is the previous consensus point, the minimisation stops.

{{basic_usage/energy_tolerance.jl}}


## Max evaluations

`max_evaluations::Real = Inf` determines the maximum number of times `f` may be evaluated by the minimisation. If the value is exceeded, the minimisation stops. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/max_evaluations.jl).
`max_evaluations::Real = Inf` determines the maximum number of times `f` may be evaluated by the minimisation. If the value is exceeded, the minimisation stops.

{{basic_usage/max_evaluations.jl}}


## Max iterations

`max_iterations::Real = 1000` specifies the maximal number of iterations that the time integrator can perform. If the number is reached, the minimisation stops. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/max_iterations.jl).
`max_iterations::Real = 1000` specifies the maximal number of iterations that the time integrator can perform. If the number is reached, the minimisation stops.

{{basic_usage/max_iterations.jl}}


## Max time

`max_time::Real = Inf` determines the maximal simulation time. If the number of iterations times `Δt` surpasses this value, the minimisation stops. [Full-code example](https://github.com/PdIPS/ConsensusBasedX.jl/blob/main/examples/basic_usage/max_time.jl).
`max_time::Real = Inf` determines the maximal simulation time. If the number of iterations times `Δt` surpasses this value, the minimisation stops.

{{basic_usage/max_time.jl}}
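An illustrative sketch combining several of the criteria above (the values are arbitrary); whichever criterion is met first stops the run:

```julia
using ConsensusBasedX

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

config = (;
  D = 2,
  energy_tolerance = 1e-10,
  max_evaluations = 1e6,
  max_iterations = 500,
  max_time = 10.0,
)
minimise(f, config)
```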
2 changes: 0 additions & 2 deletions examples/advanced_usage/anisotropic_noise.jl
Original file line number Diff line number Diff line change
@@ -1,6 +1,4 @@
using ConsensusBasedX

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

config = (; D = 2, noise = :AnisotropicNoise)
minimise(f, config)
2 changes: 0 additions & 2 deletions examples/advanced_usage/benchmark.jl
Original file line number Diff line number Diff line change
@@ -1,6 +1,4 @@
using ConsensusBasedX

f(x) = ConsensusBasedX.Ackley(x, shift = 1)

config = (; D = 2, benchmark = true)
minimise(f, config)
