Merge pull request #17 from alan-turing-institute/dev
For a 0.1.4 release
ablaom authored Feb 25, 2020
2 parents 2b14fa6 + cf7ad01 commit 853b394
Showing 5 changed files with 15 additions and 13 deletions.
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
 name = "MLJTuning"
 uuid = "03970b2e-30c4-11ea-3135-d1576263f10f"
 authors = ["Anthony D. Blaom <[email protected]>"]
-version = "0.1.3"
+version = "0.1.4"

 [deps]
 ComputationalResources = "ed09eef8-17a6-5b46-8889-db040fac31e3"
@@ -12,7 +12,7 @@ RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"

 [compat]
 ComputationalResources = "^0.3"
-MLJBase = "^0.11.1"
+MLJBase = "^0.11"
 RecipesBase = "^0.8"
 julia = "^1"
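Editorial aside, not part of the commit: under Julia's Pkg compat conventions, `^0.11` admits any release in `[0.11.0, 0.12.0)`, whereas the previous `^0.11.1` excluded `0.11.0`; the new entry therefore slightly relaxes the lower bound on MLJBase.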

2 changes: 2 additions & 0 deletions README.md
@@ -69,6 +69,8 @@ This repository contains:
 - [ ] Latin hypercubes

 - [ ] random search
+
+- [ ] bandit

 - [ ] simulated annealing
10 changes: 5 additions & 5 deletions src/learning_curves.jl
@@ -20,10 +20,10 @@ has the following keys: `:parameter_name`, `:parameter_scale`,
 `:parameter_values`, `:measurements`.
 To generate multiple curves for a `model` with a random number
-generator (RNG) as a hyperparameter, specify the name of the (possibly
-nested) RNG field, and a vector `rngs` of RNG's, one for each
-curve. Alternatively, set `rngs` to the number of curves desired, in
-which case RNG's are automatically generated. The individual curve
+generator (RNG) as a hyperparameter, specify the name, `rng_name`, of
+the (possibly nested) RNG field, and a vector `rngs` of RNG's, one for
+each curve. Alternatively, set `rngs` to the number of curves desired,
+in which case RNG's are automatically generated. The individual curve
 computations can be distributed across multiple processes using
 `acceleration=CPUProcesses()`. See the second example below for a
 demonstration.
@@ -46,7 +46,7 @@ plot(curve.parameter_values,
 If using a `Holdout()` `resampling` strategy (with no shuffling) and
 if the specified hyperparameter is the number of iterations in some
 iterative model (and that model has an appropriately overloaded
-`MLJBase.update` method) then training is not restarted from scratch
+`MLJModelInterface.update` method) then training is not restarted from scratch
 for each increment of the parameter, ie the model is trained
 progressively.
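Editorial aside — a minimal sketch, not part of this commit, of the multiple-curve usage the revised docstring describes. `SomeIterativeModel`, its fields, and the data source are hypothetical; the keyword names (`range`, `resampling`, `measure`, `rng_name`, `rngs`, `acceleration`) follow the docstring above, though the exact calling form may differ between MLJ versions.

using MLJ, Plots

model = SomeIterativeModel()   # hypothetical model with an `rng` hyperparameter
X, y = make_some_data()        # hypothetical data source

r = range(model, :n_iterations, lower=10, upper=200)

curves = learning_curve(model, X, y;
                        range=r,
                        resampling=Holdout(),        # no shuffling: training is progressive
                        measure=rms,
                        rng_name=:rng,               # name of the (possibly nested) RNG field
                        rngs=4,                      # four curves; RNGs auto-generated
                        acceleration=CPUProcesses()) # distribute curves across processes

# As documented, the result exposes :parameter_name, :parameter_scale,
# :parameter_values and :measurements (one column per curve here):
plot(curves.parameter_values, curves.measurements, xlab=curves.parameter_name)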
6 changes: 3 additions & 3 deletions src/strategies/grid.jl
@@ -25,10 +25,10 @@ Example 2:
 [(range(model, :hyper1, lower=1, upper=10), 15),
   range(model, :hyper2, lower=2, upper=4),
-  range(model, :hyper3, values=[:ball, :tree]]
+  range(model, :hyper3, values=[:ball, :tree])]
 Note: All the `field` values of the `ParamRange` objects (`:hyper1`,
-`:hyper2`, `:hyper3` in the precedng example) must refer to field
+`:hyper2`, `:hyper3` in the preceding example) must refer to field
 names a of single model (the `model` specified during `TunedModel`
 construction).
@@ -46,7 +46,7 @@ resolutions apply.
 In all cases the models generated are shuffled using `rng`, unless
 `shuffle=false`.
-See also [TunedModel](@ref), [range](@ref).
+See also [`TunedModel`](@ref), [`range`](@ref).
 """
 mutable struct Grid <: TuningStrategy
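Editorial aside — a sketch, not part of this commit, of how ranges like those in Example 2 drive a Grid search. `model`, `:hyper1`, `:hyper2`, `:hyper3`, and the data `X, y` are the hypothetical names used above; the `resolution` and `shuffle` keywords follow the surrounding docstring and are assumptions about the constructor.

using MLJ

r1 = range(model, :hyper1, lower=1, upper=10)
r2 = range(model, :hyper2, lower=2, upper=4)
r3 = range(model, :hyper3, values=[:ball, :tree])

self_tuning = TunedModel(model=model,
                         tuning=Grid(resolution=15, shuffle=true),
                         range=[(r1, 15), r2, r3],  # pair overrides resolution for r1
                         resampling=CV(nfolds=6),
                         measure=rms)

mach = machine(self_tuning, X, y)
fit!(mach)   # evaluates every grid point, shuffled using `rng` unless shuffle=false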
6 changes: 3 additions & 3 deletions src/tuned_models.jl
@@ -66,7 +66,7 @@ Calling `fit!(mach)` on a machine `mach=machine(tuned_model, X, y)` or
 - Fit an internal machine, based on the optimal model
   `fitted_params(mach).best_model`, wrapping the optimal `model`
-  object in *all* the provided data `X, y` (or in `task`). Calling
+  object in *all* the provided data `X`, `y`(, `w`). Calling
   `predict(mach, Xnew)` then returns predictions on `Xnew` of this
   internal machine. The final train can be supressed by setting
   `train_best=false`.
@@ -90,9 +90,9 @@ every measure specified will be computed and reported in
 generated report.
 Specify `repeats > 1` for repeated resampling per model evaluation. See
-[`evaluate!](@ref) options for details.
+[`evaluate!`](@ref) options for details.
-*Important.* If a custom measure `measure` is used, and the measure is
+*Important.* If a custom `measure` is used, and the measure is
 a score, rather than a loss, be sure to check that
 `MLJ.orientation(measure) == :score` to ensure maximization of the
 measure, rather than minimization. Override an incorrect value with
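Editorial aside — a sketch, not part of this commit, of the `TunedModel` workflow the docstring describes. `my_model`, its `:lambda` range, and the data `X`, `y`, `Xnew` are hypothetical; `repeats` and `train_best` are the keywords documented above.

using MLJ

r = range(my_model, :lambda, lower=0.01, upper=10.0, scale=:log)

tuned_model = TunedModel(model=my_model,
                         tuning=Grid(resolution=20),
                         resampling=CV(nfolds=5),
                         repeats=2,        # repeated resampling per model evaluation
                         measure=rms,
                         train_best=true)  # refit the best model on *all* of X, y

mach = machine(tuned_model, X, y)
fit!(mach)                                 # runs the search, then the final train
fitted_params(mach).best_model             # the optimal model found
yhat = predict(mach, Xnew)                 # predictions from the internal machine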
