Releases: JuliaAI/MLJTuning.jl

v0.8.8

19 Jul 04:06
1421263

MLJTuning v0.8.8

Diff since v0.8.7

  • Change default logger from nothing to MLJBase.default_logger() (which can be reset with MLJBase.default_logger(new_logger)) (#221)
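A minimal sketch of the new behavior (the model, hyper-parameter range, and tracking-server URL are illustrative, not part of this release):

```julia
using MLJ       # re-exports MLJBase and MLJTuning
using MLJFlow   # provides an MLflow-backed logger

# Set the global default logger (URL is a placeholder for a local
# MLflow tracking server):
MLJBase.default_logger(MLJFlow.Logger("http://localhost:5000/api"))

# A TunedModel constructed without an explicit `logger` keyword now
# picks up the global default:
tree = (@load DecisionTreeClassifier pkg=DecisionTree)()
r = range(tree, :max_depth, lower=1, upper=5)
tuned = TunedModel(tree; range=r, measure=log_loss)

# Opt out of logging for a single wrapper:
quiet = TunedModel(tree; range=r, measure=log_loss, logger=nothing)
```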

Merged pull requests:

  • Make the global default_logger() the default logger in TunedModel(logger=...) (#221) (@ablaom)
  • For a 0.8.8 release (#222) (@ablaom)

Closed issues:

  • Use measures that are not of the form f(y, yhat) but f(fitresult) (#202)

v0.8.7

03 Jun 09:20
66581bc

MLJTuning v0.8.7

Diff since v0.8.6


v0.8.6

21 May 11:09
1af0202

MLJTuning v0.8.6

Diff since v0.8.5

  • (new feature) Add logger option to the TunedModel wrapper, for logging internal model evaluations to an ML tracking platform, such as MLflow via MLJFlow.jl. The default is nothing, for no logging (#193). The logger must support asynchronous messaging if TunedModel(model, ...) is given the option acceleration=CPUThreads() or acceleration=CPUProcesses(); MLJFlow.jl 0.4.3 supports asynchronous messaging.
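A sketch of passing a logger explicitly, together with multithreaded acceleration (the model, range, and server URL are illustrative):

```julia
using MLJ, MLJFlow

# An MLflow-backed logger pointed at a placeholder tracking server:
logger = MLJFlow.Logger("http://localhost:5000/api")

tree = (@load DecisionTreeClassifier pkg=DecisionTree)()
r = range(tree, :max_depth, lower=1, upper=5)

# With acceleration=CPUThreads() the per-model evaluations run
# concurrently, so the logger must support asynchronous messaging
# (MLJFlow.jl 0.4.3 or later does):
tuned = TunedModel(
    tree;
    range=r,
    measure=log_loss,
    logger=logger,
    acceleration=CPUThreads(),
)
```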


v0.8.5

06 May 23:09
bac0ac9

MLJTuning v0.8.5

Diff since v0.8.4

  • Write the PerformanceEvaluation objects computed for each model (hyper-parameter set) to the history, or write compact versions of the same (CompactPerformanceEvaluation objects) by passing TunedModel(...) the new option compact_history=true. The evaluation objects are accessed as evaluation = report(mach).history[index].evaluation, where mach is a machine associated with the TunedModel instance. For the differences between PerformanceEvaluation and CompactPerformanceEvaluation objects, refer to their document strings. (MLJTuning 0.8.3 and 0.8.4 already introduced PerformanceEvaluation objects to the history as an experimental feature, but with no option to write the compact form. In the current release, compact objects are written by default.)
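A sketch of retrieving the evaluations from the history (the dataset, model, and range are illustrative):

```julia
using MLJ

tree = (@load DecisionTreeClassifier pkg=DecisionTree)()
r = range(tree, :max_depth, lower=1, upper=5)

# Request compact evaluation objects in the history:
tuned = TunedModel(tree; range=r, measure=log_loss, compact_history=true)

X, y = @load_iris
mach = machine(tuned, X, y)
fit!(mach)

# Each history entry carries the evaluation of one hyper-parameter set:
evaluation = report(mach).history[1].evaluation
```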

Merged pull requests:

  • Create option to write CompactPerformanceEvaluation objects to history (#215) (@ablaom)
  • For a 0.8.5 release (#216) (@ablaom)

v0.8.4

24 Mar 21:56
9fe1f52

MLJTuning v0.8.4

Diff since v0.8.3

  • (enhancement) Implement feature importances that expose the feature importances of the optimal atomic model (#213)
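A hedged sketch, assuming the atomic model (here a random forest from DecisionTree.jl) itself reports feature importances; dataset and range are illustrative:

```julia
using MLJ

forest = (@load RandomForestClassifier pkg=DecisionTree)()
r = range(forest, :n_trees, lower=10, upper=100)
tuned = TunedModel(forest; range=r, measure=log_loss)

X, y = @load_iris
mach = machine(tuned, X, y)
fit!(mach)

# Delegates to the optimal atomic model found during tuning:
feature_importances(mach)
```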


v0.8.3

18 Mar 00:14
4f1dd71

MLJTuning v0.8.3

Diff since v0.8.2

  • Include full evaluation objects (key = :evaluation) in history entries (#210)
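Assuming mach is a fitted machine wrapping a TunedModel, the new history entries can be inspected like this (field access beyond :evaluation follows the PerformanceEvaluation docstring, not this release note):

```julia
entry = report(mach).history[1]   # one entry per evaluated model
entry.evaluation                  # the full PerformanceEvaluation object
entry.evaluation.measurement      # aggregated score(s) for that model
```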


v0.8.2

07 Mar 19:56
39d6cb4

MLJTuning v0.8.2

Diff since v0.8.1


Closed issues:

  • Overload save/restore to fix serialisation when atomic model has ephemeral fitresult (#207)

v0.8.1

23 Jan 20:20
60ad344

MLJTuning v0.8.1

Diff since v0.8.0


v0.8.0

26 Sep 00:29
7e4ae96

MLJTuning v0.8.0

Diff since v0.7.4

  • (breaking) Bump MLJBase compatibility to version 1. When using MLJTuning without MLJ, users may need to explicitly import StatisticalMeasures.jl. See also the MLJBase 1.0 migration guide (#194)
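A minimal sketch of usage without the MLJ umbrella package (the model interface package, model, and range are placeholders for whatever atomic model is being tuned):

```julia
using MLJTuning, MLJBase
import StatisticalMeasures: LogLoss   # measures no longer live in MLJBase
import MLJDecisionTreeInterface       # illustrative model interface package

tree = MLJDecisionTreeInterface.DecisionTreeClassifier()
r = range(tree, :max_depth, lower=1, upper=5)
tuned = TunedModel(tree; range=r, measure=LogLoss())
```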

Merged pull requests:

  • Get rid of test/Project.toml (#190) (@ablaom)
  • Fix some tests that use deprecated MLJBase code (#191) (@ablaom)
  • Update code and tests to address migration of measures MLJBase -> StatisticalMeasures (#194) (@ablaom)
  • For a 0.8 release (#195) (@ablaom)
  • add compat for julia (#196) (@ablaom)

Closed issues:

  • Are GridSearch using the update! method? (#82)
  • Improper loss functions silently accepted in training a TunedModel (#184)
  • Typo in error message for TunedModel missing arguments (#188)
  • Skipping parts of search space? (#189)

v0.7.4

07 Nov 06:29
094dbe8

MLJTuning v0.7.4

Diff since v0.7.3
