Merge pull request #3 from alan-turing-institute/dev
Minor code refactor
ablaom committed Jan 27, 2020
2 parents dab955d + eb8c9e0 commit 2cec09e
Showing 4 changed files with 50 additions and 46 deletions.
18 changes: 10 additions & 8 deletions README.md
@@ -22,11 +22,12 @@ MLJ user. Rather, MLJTuning is a dependency of the
learning platform, which allows MLJ users to perform a variety of
hyperparameter optimization tasks from there.

MLJTUning is the place for developers to integrate hyperparameter
MLJTuning is the place for developers to integrate hyperparameter
optimization algorithms (here called *tuning strategies*) into MLJ,
either by adding code to [/src/strategies](/src/strategies), or by
importing MLJTuning into a third-pary package and and implementing
MLJTuning's interface.
importing MLJTuning into a third-pary package and implementing
MLJTuning's [tuning strategy
interface](#implementing-a-new-tuning-strategy).

MLJTuning is a component of the [MLJ
stack](https://github.com/alan-turing-institute/MLJ.jl#the-mlj-universe)
@@ -49,9 +50,10 @@ This repository contains:
hyperparameters (using cross-validation or other resampling
strategy) before training the optimal model on all supplied data

- an abstract **tuning strategy interface** to allow developers to
conveniently implement common hyperparameter optimization
strategies, such as:
- an abstract **[tuning strategy
interface]((#implementing-a-new-tuning-strategy))** to allow
developers to conveniently implement common hyperparameter
optimization strategies, such as:

- [x] search a list of explicitly specified models `list = [model1,
model2, ...]`
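
To make the `TunedModel` wrapper and `Grid` strategy described in the list above concrete, here is a minimal sketch of a grid search over one hyperparameter. The data, the choice of `DecisionTreeRegressor`, and the `min_samples_split` field are assumptions for illustration only, and `@load` semantics vary between MLJ versions (recent versions return the model *type*):

```julia
using MLJ

# synthetic regression data (illustrative only)
X = (x1 = rand(100), x2 = rand(100), x3 = rand(100));
y = 2 .* X.x1 .+ 5 .* X.x2 .- 3 .* X.x3 .+ 0.2 .* rand(100);

# an assumed atomic model
Tree = @load DecisionTreeRegressor pkg=DecisionTree verbosity=0
tree = Tree()

r = range(tree, :min_samples_split, lower=2, upper=10)   # hyperparameter to optimize

self_tuning_tree = TunedModel(model=tree,
                              range=r,
                              tuning=Grid(resolution=9),
                              resampling=CV(nfolds=5),
                              measure=rms)

mach = machine(self_tuning_tree, X, y)
fit!(mach, verbosity=0)                 # grid search, then retrain the best model on all data
fitted_params(mach).best_model          # the optimal hyperparameter values found
```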
@@ -100,7 +102,7 @@ elaboration on those terms given in *italics*.

All tuning in MLJ is conceptualized as an iterative procedure, each
iteration corresponding to a performance *evaluation* of a single
*model*. Each such model is a mutation of a fixed *prototype*. In the
*model*. Each such model is a mutated clone of a fixed prototype. In the
general case, this prototype is a composite model, i.e., a model with
other models as hyperparameters, and while the type of the prototype
mutations is fixed, the types of the sub-models are allowed to vary.
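
Continuing with `tree`, `X` and `y` from the sketch above, here is a hedged illustration of one such iteration: the prototype is a composite model (an ensemble whose `atom` is itself a model) and a candidate is a clone of that prototype with one nested hyperparameter changed. The `atom` keyword and the mutated field name are assumptions:

```julia
# the fixed prototype: a composite model, with another model as a hyperparameter
prototype = EnsembleModel(atom=tree, n=20)

# one tuning "iteration": evaluate a mutated clone of the prototype
candidate = deepcopy(prototype)               # clone the prototype ...
candidate.atom.min_samples_split = 5          # ... and mutate a nested hyperparameter

evaluate(candidate, X, y, resampling=Holdout(), measure=rms)  # a single performance evaluation
```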
@@ -293,7 +295,7 @@ preferred "central value". These default to `(upper - lower)/2` and
`(upper + lower)/2`, respectively, in the bounded case (neither `upper
= Inf` nor `lower = -Inf`). The fields `origin` and `unit` are used in
generating grids for unbounded ranges (and could be used in other
strategies for fitting two-parameter probability distributions, for
strategies - for fitting two-parameter probability distributions, for
example).
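
A hedged numerical illustration of these defaults, using the `range` constructor re-exported by MLJ (the `:lambda` field name is arbitrary):

```julia
using MLJ

r = range(Float64, :lambda, lower=1.0, upper=9.0)   # a bounded numeric range

r.origin   # 5.0 == (upper + lower)/2, the preferred "central value"
r.unit     # 4.0 == (upper - lower)/2, the length scale

# for an unbounded range, supply them explicitly, e.g.:
r_unbounded = range(Float64, :lambda, lower=0.0, upper=Inf, origin=4.0, unit=2.0)
```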

A `ParamRange` object is always associated with the name of a
Empty file removed src/inhomogeneous_list.jl
Empty file.
70 changes: 32 additions & 38 deletions src/learning_curves.jl
@@ -57,29 +57,39 @@ plot!(curves.parameter_values,
curves.measurements,
xlab=curves.parameter_name,
ylab="Holdout estimate of RMS error")
```
learning_curve(model::Supervised, X, y; kwargs...)
learning_curve(model::Supervised, X, y, w; kwargs...)
Plot a learning curve (or curves) directly, without first constructing
a machine.
"""
function learning_curve(mach::Machine{<:Supervised};
resolution=30,
resampling=Holdout(),
weights=nothing,
measure=nothing,
operation=predict,
range::Union{Nothing,ParamRange}=nothing,
repeats=1,
acceleration=default_resource(),
acceleration_grid=CPU1(),
verbosity=1,
rngs=nothing,
rng_name=nothing,
check_measure=true)

if measure == nothing
measure = default_measure(mach.model)
verbosity < 1 ||
@info "No measure specified. Using measure=$measure. "
end
learning_curve(mach::Machine{<:Supervised}; kwargs...) =
learning_curve(mach.model, mach.args...; kwargs...)

# for backwards compatibility
learning_curve!(mach::Machine{<:Supervised}; kwargs...) =
learning_curve(mach; kwargs...)

function learning_curve(model::Supervised, args...;
resolution=30,
resampling=Holdout(),
weights=nothing,
measures=nothing,
measure=measures,
operation=predict,
ranges::Union{Nothing,ParamRange}=nothing,
range::Union{Nothing,ParamRange},
repeats=1,
acceleration=default_resource(),
acceleration_grid=CPU1(),
verbosity=1,
rngs=nothing,
rng_name=nothing,
check_measure=true)

range !== nothing || error("No param range specified. Use range=... ")

@@ -97,7 +107,7 @@ function learning_curve(mach::Machine{<:Supervised};
end
end

tuned_model = TunedModel(model=mach.model,
tuned_model = TunedModel(model=model,
range=range,
tuning=Grid(resolution=resolution,
shuffle=false),
@@ -109,15 +119,14 @@ function learning_curve(mach::Machine{<:Supervised};
repeats=repeats,
acceleration=acceleration_grid)

tuned = machine(tuned_model, mach.args...)
tuned = machine(tuned_model, args...)

results = _tuning_results(rngs, acceleration, tuned, rng_name, verbosity)

parameter_name=results.parameter_names[1]
parameter_scale=results.parameter_scales[1]
parameter_values=[results.parameter_values[:, 1]...]
measurements = results.measurements
measurements = (rngs == nothing) ? [measurements...] : measurements

return (parameter_name=parameter_name,
parameter_scale=parameter_scale,
@@ -176,18 +185,3 @@ function _tuning_results(rngs::AbstractVector, acceleration::CPUProcesses,
return ret
end

learning_curve!(machine::Machine, args...) =
learning_curve(machine, args...)

"""
learning_curve(model::Supervised, args...; kwargs...)
Plot a learning curve (or curves) without first constructing a
machine. Equivalent to `learing_curve(machine(model, args...);
kwargs...)
See [learning_curve](@ref)
"""
learning_curve(model::Supervised, args...; kwargs...) =
learning_curve(machine(model, args...); kwargs...)
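
To illustrate the new machine-free signature documented and implemented above, here is a hedged sketch. The data, the atomic model, and the ensemble's `n` field are assumptions; the keyword names (`range`, `resolution`, `resampling`, `measure`) and the fields of the returned named tuple follow the method definition in this diff:

```julia
using MLJ, Plots

X = (x1 = rand(100), x2 = rand(100), x3 = rand(100));
y = 2 .* X.x1 .+ 5 .* X.x2 .- 3 .* X.x3 .+ 0.2 .* rand(100);

Tree = @load DecisionTreeRegressor pkg=DecisionTree verbosity=0
forest = EnsembleModel(atom=Tree(), n=10)          # assumed composite model

r_n = range(forest, :n, lower=10, upper=100)       # curve over the ensemble size

curves = learning_curve(forest, X, y;              # no machine constructed first
                        range=r_n,
                        resolution=10,
                        resampling=Holdout(fraction_train=0.7),
                        measure=rms)

plot(curves.parameter_values,
     curves.measurements,
     xlab=curves.parameter_name,
     ylab="Holdout estimate of RMS error")
```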
8 changes: 8 additions & 0 deletions test/learning_curves.jl
@@ -66,6 +66,14 @@ y = 2*x1 .+ 5*x2 .- 3*x3 .+ 0.2*rand(100);
rng_name=:rng)
@test curves2.measurements ≈ curves.measurements

# alternative signature:
curves3 = learning_curve(ensemble, X, y; range=r_n, resolution=7,
acceleration=accel,
rngs = 3,
rng_name=:rng)

@test curves2.measurements ≈ curves3.measurements

end

end # module
