Releases · pytorch/botorch
LogEI acquisition functions, L0 regularization & homotopy optimization, PiBO, orthogonal additive kernel, nonlinear constraints
Compatibility
- Require Python >= 3.9.0 (#1924).
- Require PyTorch >= 1.13.1 (#1960).
- Require linear_operator == 0.5.0 (#1961).
- Require GPyTorch == 1.11 (#1961).
Highlights
- Introduce `OrthogonalAdditiveKernel` (#1869).
- Speed up LCE-A kernel by over an order of magnitude (#1910).
- Introduce `optimize_acqf_homotopy`, for optimizing acquisition functions with homotopy (#1915).
- Introduce `PriorGuidedAcquisitionFunction` (PiBO) (#1920).
- Introduce `qLogExpectedImprovement`, which provides more accurate numerics than `qExpectedImprovement` and can lead to significant optimization improvements (#1936); see the sketch below.
- Similarly, introduce `qLogNoisyExpectedImprovement`, which is analogous to `qNoisyExpectedImprovement` (#1937).
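For illustration, a minimal `qLogExpectedImprovement` sketch; the toy data, model setup, and shapes are our own assumptions, not taken from the release notes:

```python
import torch
from botorch.acquisition.logei import qLogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: 10 points in [0, 1]^2 (BoTorch recommends double precision).
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = (2 * torch.pi * train_X).sin().sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=1))
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Drop-in replacement for qExpectedImprovement: it returns log-EI values,
# which stay finite and differentiable where plain EI underflows to zero.
acqf = qLogExpectedImprovement(model=model, best_f=train_Y.max())
X_cand = torch.rand(5, 3, 2, dtype=torch.double)  # 5 batches of q=3 candidates
log_ei = acqf(X_cand)  # shape: torch.Size([5])
```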
New Features
- Add constrained synthetic test functions `PressureVesselDesign`, `WeldedBeam`, `SpeedReducer`, and `TensionCompressionString` (#1832).
- Support decoupled fantasization (#1853) and decoupled evaluations in cost-aware utilities (#1949).
- Add `PairwiseBayesianActiveLearningByDisagreement`, an active learning acquisition function for PBO and BOPE (#1855).
- Support custom mean and likelihood in `MultiTaskGP` (#1909).
- Enable candidate generation (via `optimize_acqf`) with both `non_linear_constraints` and `fixed_features` (#1912); see the sketch below.
- Introduce `L0PenaltyApproxObjective` to support L0 regularization (#1916).
- Enable batching in `PriorGuidedAcquisitionFunction` (#1925).
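Continuing the sketch above, candidate generation with `fixed_features` might look as follows. The nonlinear-constraint path additionally takes a list of constraint callables (and, in this version, explicit starting points), so the call below is a simplified illustration rather than a full demonstration of #1912:

```python
import torch
from botorch.optim import optimize_acqf

# `acqf` is the qLogExpectedImprovement instance from the sketch above.
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)

candidate, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=8,
    raw_samples=64,
    fixed_features={1: 0.5},  # pin the second input dimension to 0.5
)
```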
Other changes
- Deprecate `FixedNoiseMultiTaskGP`; allow `train_Yvar` optionally in `MultiTaskGP` (#1818).
- Implement `load_state_dict` for SAAS multi-task GP (#1825).
- Improvements to `LinearEllipticalSliceSampler` (#1859, #1878, #1879, #1883).
- Allow passing in task features as part of X in `MTGP.posterior` (#1868).
- Improve numerical stability of log densities in pairwise GPs (#1919).
- Python 3.11 compliance (#1927).
- Enable using constraints with `SampleReducingMCAcquisitionFunction`s when using `input_constructor`s and `get_acquisition_function` (#1932).
- Enable use of `qLogExpectedImprovement` and `qLogNoisyExpectedImprovement` with Ax (#1941).
Bug Fixes
- Enable pathwise sampling modules to be converted to GPU (#1821).
- Allow `Standardize` modules to be loaded once trained (#1874).
- Fix memory leak in Inducing Point Allocators (#1890).
- Correct einsum computation in `LCEAKernel` (#1918).
- Properly whiten bounds in MVNXPB (#1933).
- Make `FixedFeatureAcquisitionFunction` convert floats to double-precision tensors rather than single-precision (#1944).
- Fix memory leak in `FullyBayesianPosterior` (#1951).
- Make `AnalyticExpectedUtilityOfBestOption` input constructor work correctly with multi-task GPs (#1955).
Maintenance Release
Compatibility
- Require GPyTorch == 1.10 and linear_operator == 0.4.0 (#1803).
New Features
- Polytope sampling for linear constraints along the q-dimension (#1757).
- Single-objective joint entropy search with additional conditioning, various improvements to entropy-based acquisition functions (#1738).
Other changes
- Various updates to improve numerical stability of `PairwiseGP` (#1754, #1755).
- Change batch range for `FullyBayesianPosterior` (1176a38, #1773).
- Make `gen_batch_initial_conditions` more flexible (#1779).
- Deprecate `objective` in favor of `posterior_transform` for `MultiObjectiveAnalyticAcquisitionFunction` (#1781).
- Use `prune_baseline=True` as default for `qNoisyExpectedImprovement` (#1796); see the sketch below.
- Add `batch_shape` property to `SingleTaskVariationalGP` (#1799).
- Change minimum inferred noise level for `SaasFullyBayesianSingleTaskGP` (#1800).
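For reference, opting out of the new `prune_baseline` default might look like this (the model and data are illustrative):

```python
import torch
from botorch.acquisition import qNoisyExpectedImprovement
from botorch.models import SingleTaskGP

train_X = torch.rand(8, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_X.sum(dim=-1, keepdim=True))

# prune_baseline=True (now the default) drops baseline points that are
# unlikely to yield the best observed value; pass False to keep them all.
acqf = qNoisyExpectedImprovement(
    model=model,
    X_baseline=train_X,
    prune_baseline=False,
)
```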
Bug fixes
Maintenance Release
New Features
- Add BAxUS tutorial (#1559).
Other changes
- Various improvements to tutorials (#1703, #1706, #1707, #1708, #1710, #1711, #1718, #1719, #1739, #1740, #1742).
- Allow tensor input for `integer_indices` in `Round` transform (#1709).
- Expose `cache_root` in qNEHVI input constructor (#1730).
- Add `get_init_args` helper to `Normalize` & `Round` transforms (#1731).
- Allow custom dimensionality and improved gradient stability in `ModifiedFixedSingleSampleModel` (#1732).
Bug fixes
Pathwise Sampling, Inducing Point Allocators, Ensemble Posterior, Bug Fixes & Improvements
[0.8.2] - Feb 23, 2023
Compatibility
- Require PyTorch >= 1.12 (#1699).
New Features
- Introduce pathwise sampling API for efficiently sampling functions from (approximate) GP priors and posteriors (#1463); see the sketch below.
- Add `OneHotToNumeric` input transform (#1517).
- Add `get_rounding_input_transform` utility for constructing rounding input transforms (#1531).
- Introduce `EnsemblePosterior` (#1636).
- Inducing Point Allocators for Sparse GPs (#1652).
- Pass `gen_candidates` callable in `optimize_acqf` (#1655).
- Add `logmeanexp` and `logdiffexp` numerical utilities (#1657).
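A minimal sketch of the pathwise API, assuming `draw_matheron_paths` as the entry point introduced by #1463 (model, data, and shapes are illustrative):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.sampling.pathwise import draw_matheron_paths

train_X = torch.rand(20, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_X.sum(dim=-1, keepdim=True))

# Draw 16 approximate posterior sample paths. Each path is a deterministic
# function that can be evaluated repeatedly at arbitrary inputs, without
# re-conditioning the GP at every query.
paths = draw_matheron_paths(model, sample_shape=torch.Size([16]))
X_test = torch.rand(100, 2, dtype=torch.double)
samples = paths(X_test)  # one value per path and test point: 16 x 100
```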
Other changes
- Warn if inoperable keyword arguments are passed to optimizers (#1421).
- Add `BotorchTestCase.assertAllClose` (#1618).
- Add `sample_shape` property to `ListSampler` (#1624).
- Do not filter out `BoTorchWarning`s by default (#1630).
- Introduce a `DeterministicSampler` (#1641).
- Warn when optimizer kwargs are being ignored in BoTorch optim utils `_filter_kwargs` (#1645).
- Don't use `functools.lru_cache` on methods (#1650).
- More informative error when someone adds a module without updating the corresponding rst file (#1653).
- Make indices a buffer in `AffineInputTransform` (#1656).
- Clean up `optimize_acqf` and `_make_linear_constraints` (#1660, #1676).
- Support NaN `max_reference_point` in `infer_reference_point` (#1671).
- Use `_fast_solves` in `HOGP.posterior` (#1682).
- Approximate qPI using `MVNXPB` (#1684).
- Improve filtering for `cache_root` in `CachedCholeskyMCAcquisitionFunction` (#1688).
- Add option to disable retrying on optimization warning (#1696).
Bug fixes
- Fix normalization in Chebyshev scalarization (#1616).
- Fix normalization in Chebyshev scalarization (#1616).
- Fix `TransformedPosterior` missing batch shape error in `_update_base_samples` (#1625).
- Detach `coefficient` and `offset` in `AffineTransform` in eval mode (#1642).
- Fix pickle error in `TorchPosterior` (#1644).
- Fix shape error in `optimize_acqf_cyclic` (#1648).
- Fix a bug where `optimize_acqf` didn't work with different batch sizes (#1668).
- Fix EUBO optimization error when two Xs are identical (#1670).
- Fix `_filter_kwargs` erroring when provided a function without a `__name__` attribute (#1678).
Compatibility release
[0.8.1] - Jan 5, 2023
Highlights
- This release includes changes for compatibility with the newest versions of linear_operator and gpytorch.
- Several acquisition functions now have "Log" counterparts, which provide better numerical behavior for improvement-based acquisition functions in areas where the probability of improvement is low. For example, `LogExpectedImprovement` (#1565) should behave better than `ExpectedImprovement` (see the sketch below).
- Bug fix: Stop `ModelListGP.posterior` from quietly ignoring `Log`, `Power`, and `Bilog` outcome transforms (#1563).
- Turn off `fast_computations` setting in linear_operator by default (#1547).
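For illustration, a minimal comparison of the analytic pair (the model setup is illustrative; fitting is omitted for brevity):

```python
import torch
from botorch.acquisition.analytic import (
    ExpectedImprovement,
    LogExpectedImprovement,
)
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

ei = ExpectedImprovement(model=model, best_f=train_Y.max())
log_ei = LogExpectedImprovement(model=model, best_f=train_Y.max())

X = torch.rand(4, 1, 2, dtype=torch.double)  # 4 points, q=1, d=2
# Far from the incumbent, ei(X) can underflow to exactly 0, killing the
# gradient; log_ei(X) remains finite and differentiable there.
print(ei(X), log_ei(X).exp())  # equal wherever EI does not underflow
```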
Compatibility
- Require linear_operator == 0.3.0 (#1538).
- Require pyro-ppl >= 1.8.4 (#1606).
- Require gpytorch == 1.9.1 (#1612).
New Features
- Add `eta` to `get_acquisition_function` (#1541).
- Support 0d-features in `FixedFeatureAcquisitionFunction` (#1546).
- Add timeout ability to optimization functions (#1562, #1598).
- Add `MultiModelAcquisitionFunction`, an abstract base class for acquisition functions that require multiple types of models (#1584).
- Add `cache_root` option for qNEI in `get_acquisition_function` (#1608).
Other changes
- Docstring corrections (#1551, #1557, #1573).
- Remove `_fit_multioutput_independent` and `allclose_mll` (#1570).
- Better numerical behavior for fully Bayesian models (#1576).
- More verbose SciPy `minimize` failure messages (#1579).
- Lower-bound noise in `SaasPyroModel` to avoid Cholesky errors (#1586).
Bug fixes
Posterior, MCSampler & Closure Refactors, Entropy Search Acquisition Functions
Highlights
This release includes some backwards incompatible changes.
- Refactor `Posterior` and `MCSampler` modules to better support non-Gaussian distributions in BoTorch (#1486).
  - Introduced a `TorchPosterior` object that wraps a PyTorch `Distribution` object and makes it compatible with the rest of the `Posterior` API.
  - `PosteriorList` no longer accepts Gaussian base samples. It should be used with a `ListSampler` that includes the appropriate sampler for each posterior.
  - The MC acquisition functions no longer construct a Sobol sampler by default. Instead, they rely on a `get_sampler` helper, which dispatches an appropriate sampler based on the posterior provided (see the first sketch below).
  - The `resample` and `collapse_batch_dims` arguments to `MCSampler`s have been removed. The `ForkedRNGSampler` and `StochasticSampler` can be used to get the same functionality.
  - Refer to the PR for additional changes. We will update the website documentation to reflect these changes in a future release.
- #1191 refactors much of `botorch.optim` to operate based on closures that abstract away how losses (and gradients) are computed. By default, these closures are created using multiply-dispatched factory functions (such as `get_loss_closure`), which may be customized by registering methods with an associated dispatcher (e.g. `GetLossClosure`). Future releases will contain tutorials that explore these features in greater detail (see the second sketch below).
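Two brief sketches of the new interfaces. First, the sampler API (model and sample counts are illustrative):

```python
import torch
from botorch.acquisition import qExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.sampling import SobolQMCNormalSampler
from botorch.sampling.get_sampler import get_sampler

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# Samplers are now parameterized by sample_shape rather than num_samples.
sampler = SobolQMCNormalSampler(sample_shape=torch.Size([128]))
acqf = qExpectedImprovement(model=model, best_f=train_Y.max(), sampler=sampler)

# Alternatively, let the dispatcher pick a sampler matching the posterior.
posterior = model.posterior(train_X)
auto_sampler = get_sampler(posterior, sample_shape=torch.Size([128]))
```

Second, a conceptual loss closure of the kind the new `botorch.optim` factories produce, reusing `model` from the sketch above. This hand-written version only illustrates the pattern and is not the library's exact factory output:

```python
from gpytorch.mlls import ExactMarginalLogLikelihood

mll = ExactMarginalLogLikelihood(model.likelihood, model)
mll.train()

# A closure takes no arguments and recomputes the loss for the current
# parameter values; the with-grads variants also return the gradients.
def loss_closure():
    output = model(*model.train_inputs)
    return -mll(output, model.train_targets)
```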
New Features
- Add mixed optimization for list optimization (#1342).
- Add entropy search acquisition functions (#1458).
- Add utilities for straight-through gradient estimators for discretization functions (#1515).
- Add support for categoricals in Round input transform and use STEs (#1516).
- Add closure-based optimizers (#1191).
Other Changes
- Do not count hitting maxiter as optimization failure & update default maxiter (#1478).
- `BoxDecomposition` cleanup (#1490).
- Deprecate `torch.triangular_solve` in favor of `torch.linalg.solve_triangular` (#1494); see the sketch below.
- Various docstring improvements (#1496, #1499, #1504).
- Remove `__getitem__` method from `LinearTruncatedFidelityKernel` (#1501).
- Handle Cholesky errors when fitting a fully Bayesian model (#1507).
- Make eta configurable in `apply_constraints` (#1526).
- Support SAAS ensemble models in RFFs (#1530).
- Deprecate `botorch.optim.numpy_converter` (#1191).
- Deprecate `fit_gpytorch_scipy` and `fit_gpytorch_torch` (#1191).
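The `torch.triangular_solve` deprecation is plain PyTorch; note that the replacement also swaps the argument order:

```python
import torch

A = torch.tril(torch.rand(3, 3)) + torch.eye(3)  # lower triangular, nonsingular
B = torch.rand(3, 2)

# Old (deprecated): torch.triangular_solve(B, A, upper=False).solution
# New: the triangular matrix comes first, and `upper` is keyword-only.
X = torch.linalg.solve_triangular(A, B, upper=False)
assert torch.allclose(A @ X, B)
```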
Bug Fixes
Bug fix release
Highlights
- #1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release.
- #1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected.
Compatibility
- Require linear_operator == 0.2.0 (#1491).
New Features
- Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (#1394, #1408).
- Introduce `AffineInputTransform` (#1461).
- Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (#1468).
Other Changes
- Add a warning when using float dtype (#1193).
- Let Pyre know that `AcquisitionFunction.model` is a `Model` (#1216).
- Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (#1414).
- Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (#1448).
- Adjust `PairwiseGP` `ScaleKernel` prior (#1460).
- Pull out `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (#1462, #1479).
- Don't use Pyro JIT by default, since it was causing a memory leak (#1474).
- Use `get_default_partitioning_alpha` for NEHVI input constructor (#1481).
Bug Fixes
- Fix `batch_shape` property of `ModelListGPyTorchModel` (#1441).
- Tutorial fixes (#1446, #1475).
- Fix the Proximal acquisition function wrapper for negative base acquisition functions (#1447).
- Handle `RuntimeError` due to constraint violation while sampling from priors (#1451).
- Fix bug in model list with output indices (#1453).
- Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (#1454).
- Fix a bug in `_fit_multioutput_independent` that failed mll comparison (#1455).
- Fix box decomposition behavior with empty or None `Y` (#1489).
Improve model fitting functionality
New Features
- A full refactor of model fitting methods (#1134); see the sketch after this list.
  - This introduces a new `fit_gpytorch_mll` method that multiple-dispatches on the model type. Users may register custom fitting routines for different combinations of MLLs, Likelihoods, and Models.
  - Unlike previous fitting helpers, `fit_gpytorch_mll` does not pass `kwargs` to `optimizer` and instead introduces an optional `optimizer_kwargs` argument.
  - When a model fitting attempt fails, `botorch.fit` methods restore modules to their original states.
  - `fit_gpytorch_mll` throws a `ModelFittingError` when all model fitting attempts fail.
  - Upon returning from `fit_gpytorch_mll`, `mll.training` will be `True` if fitting failed and `False` otherwise.
- Allow custom bounds to be passed in to `SyntheticTestFunction` (#1415).
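For illustration, a minimal `fit_gpytorch_mll` call (the data and the optimizer options shown are our own assumptions):

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_X.sum(dim=-1, keepdim=True))
mll = ExactMarginalLogLikelihood(model.likelihood, model)

# Optimizer settings go through optimizer_kwargs, not bare kwargs.
fit_gpytorch_mll(mll, optimizer_kwargs={"options": {"maxiter": 200}})
assert not mll.training  # mll.training is False iff fitting succeeded
```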
Deprecations
- Deprecate weights argument of risk measures in favor of a `preprocessing_function` (#1400).
- Deprecate `fit_gpytorch_model`; to be superseded by `fit_gpytorch_mll`.
Other Changes
- Support risk measures in MOO input constructors (#1401).