Releases: pytorch/botorch
Increased robustness to dimensionality with updated hyperparameter priors
[0.12.0] -- Sep 17, 2024
Major changes
- Update most models to use dimension-scaled log-normal hyperparameter priors by default, which makes performance much more robust to dimensionality. See discussion #2451 for details. The only models that are not changed are the fully Bayesian models and `PairwiseGP`; for models that utilize a composite kernel, such as multi-fidelity/task/context, this change only affects the base kernel (#2449, #2450, #2507).
- Use `Standardize` by default in all the models using the upgraded priors. In addition to reducing the amount of boilerplate needed to initialize a model, this change was motivated by the change to default priors, because the new priors work less well when data is not standardized. Users who do not want to use transforms should explicitly pass in `None` (#2458, #2532).
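To see why the new priors want standardized data: standardization maps outcomes to zero mean and unit variance, the scale the dimension-scaled priors are calibrated for. A minimal plain-Python sketch of the idea (illustrative only, not BoTorch's tensor-based `Standardize` transform):

```python
import statistics

def standardize(y):
    # Map outcomes to zero mean and unit (sample) variance -- the scale
    # the new default priors are calibrated for. Plain-Python sketch,
    # not BoTorch's Standardize, which operates on tensors with batching.
    mu = statistics.fmean(y)
    sigma = statistics.stdev(y)
    return [(v - mu) / sigma for v in y]

z = standardize([10.0, 20.0, 30.0, 40.0])
```

After the transform, the data has mean 0 and standard deviation 1 regardless of the original scale, so a single default prior can fit outcomes measured in any units.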
New features
- Introduce `PathwiseThompsonSampling` acquisition function (#2443).
- Enable `qBayesianActiveLearningByDisagreement` to accept a posterior transform, and improve its implementation (#2457).
- Enable `SaasPyroModel` to sample via NUTS when training data is empty (#2465).
- Add multi-objective `qBayesianActiveLearningByDisagreement` (#2475).
- Add input constructor for `qNegIntegratedPosteriorVariance` (#2477).
- Introduce `qLowerConfidenceBound` (#2517).
- Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (#2524).
- Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (#2531).
Bug fixes
- Fix `batch_shape` default in `OrthogonalAdditiveKernel` (#2473).
- Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (#2502).
- Fix duplicate logging in `generation/gen.py` (#2504).
- Raise exception if `X_pending` is set on the underlying `AcquisitionFunction` in prior-guided `AcquisitionFunction` (#2505).
- Make affine input transforms error with data of incorrect dimension, even in eval mode (#2510).
- Use fidelity-aware `current_value` in input constructor for `qMultiFidelityKnowledgeGradient` (#2519).
- Apply input transforms when computing MLL in model closures (#2527).
- Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks (#2529).
Documentation
- Clarify incompatibility of inter-point constraints with `get_polytope_samples` (#2469).
- Update tutorials to use the log variants of EI-family acquisition functions, don't make tutorials pass `Standardize` unnecessarily, and other simplifications and cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).
Deprecations
- Remove deprecated `FixedNoiseGP` (#2536).
Other changes
- More informative warnings about failure to standardize or normalize data (#2489).
- Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers (#2486).
- Cleaner `botorch/acquisition/multi_objective` directory structure (#2485).
- With `AffineInputTransform`, always require data to have at least two dimensions (#2518).
- Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and deprecated model `FixedNoiseMultiFidelityGP` (#2532).
- Raise an `OptimizationGradientError` when optimization produces NaN gradients (#2537).
- Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)` and `torch.exp(x) - 1` with `torch.special.expm1(x)` (#2539, #2540, #2541).
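The rationale for the `log1p`/`expm1` substitution is standard floating-point numerics: for x far below machine epsilon, `1 + x` rounds to exactly `1.0`, destroying all information before the log or exp is even taken. A small illustration using Python's `math` module (the BoTorch changes use the equivalent `torch` functions):

```python
import math

# For x far below machine epsilon (~2.2e-16), 1.0 + x rounds to exactly 1.0,
# so log(1 + x) and exp(x) - 1 both collapse to 0.0. log1p/expm1 work from
# x directly and retain full precision.
x = 1e-20
naive_log = math.log(1.0 + x)    # 1.0 + 1e-20 == 1.0, so this is 0.0
accurate_log = math.log1p(x)     # ~1e-20, correct to full precision
naive_exp = math.exp(x) - 1.0    # exp(1e-20) rounds to 1.0, so this is 0.0
accurate_exp = math.expm1(x)     # ~1e-20
```

In acquisition functions that operate in log space, this difference determines whether tiny improvement probabilities register at all or vanish to zero.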
Maintenance Release, I-BNN Kernel
New features
- Support evaluating posterior predictive in `MultiTaskGP` (#2375).
- Infinite width BNN kernel (#2366) and the corresponding tutorial (#2381).
- An improved elliptical slice sampling implementation (#2426).
- Add a helper for producing a `DeterministicModel` using a Matheron path (#2435).
Deprecations and Deletions
- Stop allowing some arguments to be ignored in acqf input constructors (#2356).
- Reap deprecated `**kwargs` argument from `optimize_acqf` variants (#2390).
- Delete `DeterministicPosterior` and `DeterministicSampler` (#2391, #2409, #2410).
- Removed deprecated `CachedCholeskyMCAcquisitionFunction` (#2399).
- Deprecate model conversion code (#2431).
- Deprecate `gp_sampling` module in favor of pathwise sampling (#2432).
Bug Fixes
- Fix observation noise shape for batched models (#2377).
- Fix `sample_all_priors` to not sample one value for all lengthscales (#2404).
- Make `(Log)NoisyExpectedImprovement` create a correct fantasy model with non-default `SingleTaskGP` (#2414).
Maintenance Release
New Features
- Implement `qLogNParEGO` (#2364).
- Support picking best of multiple fit attempts in `fit_gpytorch_mll` (#2373).
Deprecations
- Many functions that used to silently ignore arbitrary keyword arguments will now raise an exception when passed unsupported arguments (#2327, #2336).
- Remove `UnstandardizeMCMultiOutputObjective` and `UnstandardizePosteriorTransform` (#2362).
Bug Fixes
- Remove correlation between the step size and the step direction in `sample_polytope` (#2290).
- Fix pathwise sampler bug (#2337).
- Explicitly check timeout against `None` so that `0.0` isn't ignored (#2348).
- Fix boundary handling in `sample_polytope` (#2353).
- Avoid division by zero in `normalize` & `unnormalize` when lower & upper bounds are equal (#2363).
- Update `sample_all_priors` to support wider set of priors (#2371).
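The timeout fix (#2348) reflects a common Python pitfall: a falsy-but-valid value like `0.0` is swallowed by a truthiness check. A generic sketch of the corrected pattern (the function name here is hypothetical, not BoTorch's actual code):

```python
def describe_timeout(timeout_sec=None):
    # Checking `if timeout_sec:` would wrongly treat 0.0 as "no timeout";
    # an explicit comparison against None preserves falsy-but-valid values.
    # (Generic sketch of the pattern behind the fix, not BoTorch's code.)
    if timeout_sec is not None:
        return f"timeout after {timeout_sec}s"
    return "no timeout"
```

With the truthiness check, `describe_timeout(0.0)` would have fallen through to the "no timeout" branch; with the explicit `is not None` comparison, a zero-second timeout is honored.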
Other Changes
- Clarify `is_non_dominated` behavior with NaN (#2332).
- Add input constructor for `qEUBO` (#2335).
- Add `LogEI` as a baseline in the `TuRBO` tutorial (#2355).
- Update polytope sampling code and add thinning capability (#2358).
- Add initial objective values to initial state for sample efficiency (#2365).
- Clarify behavior on standard deviations with <1 degree of freedom (#2357).
Maintenance Release, SCoreBO
Compatibility
- Require Python >= 3.10 (#2293).
New Features
- SCoreBO and Bayesian Active Learning acquisition functions (#2163).
Bug Fixes
- Fix non-None constraint noise levels in some constrained test problems (#2241).
- Fix inverse cost-weighted utility behaviour for non-positive acquisition values (#2297).
Other Changes
- Don't allow unused keyword arguments in `Model.construct_inputs` (#2186).
- Re-map task values in MTGP if they are not contiguous integers starting from zero (#2230).
- Unify `ModelList` and `ModelListGP` `subset_output` behavior (#2231).
- Ensure `mean` and `interior_point` of `LinearEllipticalSliceSampler` have correct shapes (#2245).
- Speed up task covariance of `LCEMGP` (#2260).
- Improvements to `batch_cross_validation`, support for model init kwargs (#2269).
- Support custom `all_tasks` for MTGPs (#2271).
- Error out if scipy optimizer does not support bounds / constraints (#2282).
- Support diagonal covariance root with fixed indices for `LinearEllipticalSliceSampler` (#2283).
- Make `qNIPV` a subclass of `AcquisitionFunction` rather than `AnalyticAcquisitionFunction` (#2286).
- Increase code-sharing of `LCEMGP` & define `construct_inputs` (#2291).
Deprecations
- Remove deprecated args from base `MCSampler` (#2228).
- Remove deprecated `botorch/generation/gen/minimize` (#2229).
- Remove `fit_gpytorch_model` (#2250).
- Remove `requires_grad_ctx` (#2252).
- Remove `base_samples` argument of `GPyTorchPosterior.rsample` (#2254).
- Remove deprecated `mvn` argument to `GPyTorchPosterior` (#2255).
- Remove deprecated `Posterior.event_shape` (#2320).
- Remove `**kwargs` & deprecated `indices` argument of `Round` transform (#2321).
- Remove `Standardize.load_state_dict` (#2322).
- Remove `FixedNoiseMultiTaskGP` (#2323).
Maintenance Release, Updated Community Contributions
New Features
- Introduce updated guidelines and a new directory for community contributions (#2167).
- Add `qEUBO` preferential acquisition function (#2192).
- Add Multi Information Source Augmented GP (#2152).
Bug Fixes
- Fix `condition_on_observations` in fully Bayesian models (#2151).
- Fix for bug that occurs when splitting single-element bins; use default BoTorch kernel for BAxUS (#2165).
- Fix a bug when non-linear constraints are used with `q > 1` (#2168).
- Remove unsupported `X_pending` from `qMultiFidelityLowerBoundMaxValueEntropy` constructor (#2193).
- Don't allow `data_fidelities=[]` in `SingleTaskMultiFidelityGP` (#2195).
- Fix `EHVI`, `qEHVI`, and `qLogEHVI` input constructors (#2196).
- Fix input constructor for `qMultiFidelityMaxValueEntropy` (#2198).
- Add ability to not deduplicate points in `_is_non_dominated_loop` (#2203).
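For context on the non-dominated bookkeeping touched by #2203: a point is non-dominated if no other point is at least as good in every objective and strictly better in at least one; exact duplicates do not dominate each other, which is why deduplication is a separate choice. A generic pure-Python sketch for maximization (not BoTorch's `_is_non_dominated_loop`, which is a batched tensor implementation):

```python
def non_dominated(points):
    # Keep points not dominated by any other point (maximization):
    # q dominates p if q >= p in every objective and q > p in at least one.
    # Exact duplicates never dominate each other, so they are all kept
    # unless deduplication is applied separately.
    kept = []
    for i, p in enumerate(points):
        dominated = any(
            all(a >= b for a, b in zip(q, p)) and any(a > b for a, b in zip(q, p))
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            kept.append(p)
    return kept

front = non_dominated([(1, 3), (2, 2), (3, 1), (1, 1)])  # (1, 1) is dominated by (2, 2)
```

Note that `non_dominated([(1, 1), (1, 1)])` keeps both copies: neither point strictly improves on the other, so dominance filtering alone does not remove duplicates.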
Other Changes
- Minor improvements to `MVaR` risk measure (#2150).
- Add support for multitask models to `ModelListGP` (#2154).
- Support unspecified noise in `ContextualDataset` (#2155).
- Update `HVKG` sampler to reflect the number of model outputs (#2160).
- Release restriction in `OneHotToNumeric` that the categoricals are the trailing dimensions (#2166).
- Standardize broadcasting logic of `q(Log)EI`'s `best_f` and `compute_best_feasible_objective` (#2171).
- Use regular inheritance instead of dispatcher to special-case `PairwiseGP` logic (#2176).
- Support `PBO` in `EUBO`'s input constructor (#2178).
- Add `posterior_transform` to `qMaxValueEntropySearch`'s input constructor (#2181).
- Do not normalize or standardize a dimension if all values are equal (#2185).
- Reap deprecated support for objective with 1 arg in `GenericMCObjective` (#2199).
- Consistent signature for `get_objective_weights_transform` (#2200).
- Update context order handling in `ContextualDataset` (#2205).
- Update contextual models for use in MBM (#2206).
- Remove `(Identity)AnalyticMultiOutputObjective` (#2208).
- Reap deprecated support for `soft_eval_constraint` (#2223). Please use `botorch.utils.sigmoid` instead.
Compatibility
- Pin `mpmath <= 1.3.0` to avoid CI breakages due to removed modules in the latest alpha release (#2222).
Hypervolume Knowledge Gradient (HVKG)
New features
Hypervolume Knowledge Gradient (HVKG):
- Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning on the unknown observation(s) that would be received if X were evaluated (#1950, #1982, #2101).
- Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).

Other new features:
- Add `MultiOutputFixedCostModel`, which is useful for decoupled scenarios where the objectives have different costs (#2093).
- Enable `q > 1` in acquisition function optimization when nonlinear constraints are present (#1793).
- Support different noise levels for different outputs in test functions (#2136).
Bug fixes
- Fix fantasization with a `FixedNoiseGaussianLikelihood` when `noise` is known and `X` is empty (#2090).
- Make `LearnedObjective` compatible with constraints in acquisition functions regardless of `sample_shape` (#2111).
- Make input constructors for `qExpectedImprovement`, `qLogExpectedImprovement`, and `qProbabilityOfImprovement` compatible with `LearnedObjective` regardless of `sample_shape` (#2115).
- Fix handling of constraints in `qSimpleRegret` (#2141).
Other changes
- Increase default sample size for `LearnedObjective` (#2095).
- Allow passing in `X` with or without fidelity dimensions in `project_to_target_fidelity` (#2102).
- Use full-rank task covariance matrix by default in SAAS MTGP (#2104).
- Rename `FullyBayesianPosterior` to `GaussianMixturePosterior`; add `_is_ensemble` and `_is_fully_bayesian` attributes to `Model` (#2108).
- Various improvements to tutorials, including speedups, improved explanations, and compatibility with newer versions of libraries.
Bugfix release
Compatibility
- Re-establish compatibility with PyTorch 1.13.1 (#2083).
Multi-Objective "Log" acquisition functions
Highlights
- Additional "Log" acquisition functions for multi-objective optimization with better numerical behavior, which often leads to significantly improved BO performance over their non-"Log" counterparts.
- `FixedNoiseGP` and `FixedNoiseMultiFidelityGP` have been deprecated; their functionalities were merged into `SingleTaskGP` and `SingleTaskMultiFidelityGP`, respectively (#2052, #2053).
- Removed deprecated legacy model fitting functions: `numpy_converter`, `fit_gpytorch_scipy`, `fit_gpytorch_torch`, `_get_extra_mll_args` (#1995, #2050).
New Features
- Support multiple data fidelity dimensions in `SingleTaskMultiFidelityGP` and (deprecated) `FixedNoiseMultiFidelityGP` models (#1956).
- Add `logsumexp` and `fatmax` to handle infinities and control asymptotic behavior in "Log" acquisition functions (#1999).
- Add outcome and feature names to datasets, implement `MultiTaskDataset` (#2015, #2019).
- Add constrained Hartmann and constrained Gramacy synthetic test problems (#2022, #2026, #2027).
- Support observed noise in `MixedSingleTaskGP` (#2054).
- Add `PosteriorStandardDeviation` acquisition function (#2060).
Bug fixes
- Fix input constructors for `qMaxValueEntropy` and `qMultiFidelityKnowledgeGradient` (#1989).
- Fix precision issue that arises from inconsistent data types in `LearnedObjective` (#2006).
- Fix fantasization with `FixedNoiseGP` and outcome transforms and use `FantasizeMixin` (#2011).
- Fix `LearnedObjective` base sample shape (#2021).
- Apply constraints in `prune_inferior_points` (#2069).
- Support non-batch evaluation of `PenalizedMCObjective` (#2073).
- Fix `Dataset` equality checks (#2077).
Other changes
- Don't allow unused `**kwargs` in input_constructors except for a defined set of exceptions (#1872, #1985).
- Merge inferred and fixed noise LCE-M models (#1993).
- Fix import structure in `botorch.acquisition.utils` (#1986).
- Remove deprecated functionality: `weights` argument of `RiskMeasureMCObjective` and `squeeze_last_dim` (#1994).
- Make `X`, `Y`, `Yvar` into properties in datasets (#2004).
- Make synthetic constrained test functions subclass from `SyntheticTestFunction` (#2029).
- Add `construct_inputs` to contextual GP models `LCEAGP` and `SACGP` (#2057).
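The datasets change (#2004) follows the standard Python pattern of exposing stored data through read-only properties, so the arrays cannot be silently reassigned after construction. A generic sketch of the pattern (illustrative names only, not the real `SupervisedDataset` API):

```python
class SupervisedData:
    # Store data privately and expose it through read-only properties.
    # Without setters, attempts to reassign X or Y raise AttributeError.
    # (Illustrative class, not BoTorch's actual SupervisedDataset.)
    def __init__(self, X, Y):
        self._X = X
        self._Y = Y

    @property
    def X(self):
        return self._X

    @property
    def Y(self):
        return self._Y

d = SupervisedData(X=[[0.0], [1.0]], Y=[[1.5], [2.5]])
```

Reading `d.X` works like a plain attribute, but `d.X = ...` raises `AttributeError`, which is exactly the protection a property without a setter provides.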
Bug fix release
This release fixes bugs that affected Ax's modular `BotorchModel` and silently ignored outcome constraints due to naming mismatches.

Bug fixes
- Hot fix (#1973) for a few issues:
  - A naming mismatch between Ax's modular `BotorchModel` and BoTorch's acquisition input constructors, leading to outcome constraints in Ax not being used with single-objective acquisition functions in Ax's modular `BotorchModel`. The naming has been updated in Ax, and consistent naming is now used in input constructors for single- and multi-objective acquisition functions in BoTorch.
  - A naming mismatch in the acquisition input constructor `constraints` in `qNoisyLogExpectedImprovement`, which kept constraints from being used.
  - A bug in `compute_best_feasible_objective` that could lead to `-inf` incumbent values.
- Fix setting seed in `get_polytope_samples` (#1968).
Dependency fix release
This is a very minor release; the only change from v0.9.0 is that the `linear_operator` dependency was bumped to 0.5.1 (#1963). This was needed since a bug in `linear_operator` 0.5.0 caused failures with some BoTorch models.