Releases · pytorch/botorch
Compatibility Release
Compatibility
- Require Python >= 3.8 (via #1347).
- Support for Python 3.10 (via #1379).
- Require PyTorch >= 1.11 (via #1363).
- Require GPyTorch >= 1.9.0 (#1347).
- Require Pyro >= 1.8.2 (#1379).
New Features
- Add ability to generate the features appended in the `AppendFeatures` input transform via a generic callable (#1354); a usage sketch follows this list.
- Add new synthetic test functions for sensitivity analysis (#1355, #1361).
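A minimal sketch of the callable variant, assuming the feature-generating callable is passed via an `f` keyword (an assumption based on the release note, not a verified signature):

```python
import torch
from botorch.models.transforms.input import AppendFeatures

# Append a feature computed from the inputs themselves; the `f` keyword
# is assumed from #1354, so check the API reference before relying on it.
append = AppendFeatures(f=lambda X: X.sum(dim=-1, keepdim=True))
X = torch.rand(4, 2)
X_appended = append.transform(X)  # expected: original 2 columns plus 1 computed
```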
Other Changes
- Use `time.monotonic()` instead of `time.time()` to measure duration (#1353).
- Allow passing `Y_samples` directly in `MARS.set_baseline_Y` (#1364).
Maintenance release
[0.6.6] - Aug 12, 2022
Compatibility
- Require GPyTorch >= 1.8.1 (#1347).
New Features
- Support batched models in `RandomFourierFeatures` (#1336).
- Add a `skip_expand` option to `AppendFeatures` (#1344).
Other Changes
- Allow `qProbabilityOfImprovement` to use batch-shaped `best_f` (#1324).
- Make `optimize_acqf` re-attempt failed optimization runs and handle optimization errors in `optimize_acqf` and `gen_candidates_scipy` better (#1325).
- Reduce memory overhead in `MARS.set_baseline_Y` (#1346).
Bug Fixes
- Fix bug where `outcome_transform` was ignored for `ModelListGP.fantasize` (#1338).
- Fix bug causing `get_polytope_samples` to sample incorrectly when variables live in multiple dimensions (#1341).
Robust Multi-Objective BO, Multi-Objective Multi-Fidelity BO, Scalable Constrained BO, Improvements to Ax Integration
New Features
- Add MOMF (Multi-Objective Multi-Fidelity) acquisition function (#1153).
- Support `PairwiseLogitLikelihood` and modularize `PairwiseGP` (#1193).
- Add in transformed weighting flag to Proximal Acquisition function (#1194).
- Add `FeasibilityWeightedMCMultiOutputObjective` (#1202).
- Add `outcome_transform` to `FixedNoiseMultiTaskGP` (#1255).
- Support Scalable Constrained Bayesian Optimization (#1257).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points` (#1260).
- Implement MARS as a risk measure (#1303).
- Add MARS tutorial (#1305).
Other Changes
- Add `Bilog` outcome transform (#1189).
- Make `get_infeasible_cost` return a cost value for each outcome (#1191).
- Modify risk measures to accept `List[float]` for weights (#1197).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points_multi_objective` (#1204).
- BotorchContainers and BotorchDatasets: Large refactor of the original `TrainingData` API to allow for more diverse types of datasets (#1205, #1221).
- Proximal biasing support for multi-output `SingleTaskGP` models (#1212).
- Improve error handling in `optimize_acqf_discrete` with a check that `choices` is non-empty (#1228).
- Handle `X_pending` properly in `FixedFeatureAcquisition` (#1233, #1234).
- PE and PLBO support in Ax (#1240, #1241).
- Remove `model.train` call from `get_X_baseline` for better caching (#1289).
- Support `inf` values in `bounds` argument of `optimize_acqf` (#1302).
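To illustrate the `inf` bounds support, a hedged sketch with a toy model (the explicit `batch_initial_conditions` is an assumption, since random restart candidates cannot be drawn from an unbounded box):

```python
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
acqf = UpperConfidenceBound(SingleTaskGP(train_X, train_Y), beta=0.2)

# The second input dimension is unbounded above; values are illustrative.
bounds = torch.tensor([[0.0, 0.0], [1.0, float("inf")]], dtype=torch.double)
candidate, value = optimize_acqf(
    acqf,
    bounds=bounds,
    q=1,
    num_restarts=2,
    # Supplying starting points avoids sampling raw candidates from an
    # infinite box (assumed to be the caller's responsibility here).
    batch_initial_conditions=torch.rand(2, 1, 2, dtype=torch.double),
)
```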
Bug Fixes
- Update `get_gp_samples` to support input / outcome transforms (#1201).
- Fix cached Cholesky sampling in `qNEHVI` when using `Standardize` outcome transform (#1215).
- Make `task_feature` a required input in `MultiTaskGP.construct_inputs` (#1246).
- Fix CUDA tests (#1253).
- Fix `FixedSingleSampleModel` dtype/device conversion (#1254).
- Prevent inappropriate transforms by putting input transforms into train mode before converting models (#1283).
- Fix `sample_points_around_best` when using 20-dimensional inputs or `prob_perturb` (#1290).
- Skip bound validation in `optimize_acqf` if inequality constraints are specified (#1297).
- Properly handle RFFs when used with a `ModelList` with individual transforms (#1299).
- Update `PosteriorList` to support deterministic-only models and fix `event_shape` (#1300).
Documentation
- Add a note about observation noise in the posterior in `fit_model_with_torch_optimizer` notebook (#1196).
- Fix custom botorch model in Ax tutorial to support new interface (#1213).
- Update MOO docs (#1242).
- Add SMOKE_TEST option to MOMF tutorial (#1243).
- Fix `ModelListGP.condition_on_observations`/`fantasize` bug (#1250).
- Replace space with underscore for proper doc generation (#1256).
- Update PBO tutorial to use EUBO (#1262).
Maintenance Release
New Features
- Implement `ExpectationPosteriorTransform` (#903).
- Add `PairwiseMCPosteriorVariance`, a cheap active learning acquisition function (#1125).
- Support computing quantiles in the fully Bayesian posterior, add `FullyBayesianPosteriorList` (#1161).
- Add expectation risk measures (#1173).
- Implement Multi-Fidelity GIBBON (Lower Bound MES) acquisition function (#1185).
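For orientation, the single-fidelity GIBBON variant added in an earlier release (`qLowerBoundMaxValueEntropy`, see further down in these notes) has the simpler interface; a minimal sketch with made-up data:

```python
import torch
from botorch.acquisition.max_value_entropy_search import qLowerBoundMaxValueEntropy
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# GIBBON estimates the max-value distribution over a discrete candidate set;
# a random set is used here purely for illustration.
candidate_set = torch.rand(100, 2, dtype=torch.double)
acqf = qLowerBoundMaxValueEntropy(model, candidate_set=candidate_set)
value = acqf(torch.rand(1, 1, 2, dtype=torch.double))  # score one candidate
```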
Other Changes
- Add an error message for one-shot acquisition functions in `optimize_acqf_discrete` (#939).
- Validate the shape of the `bounds` argument in `optimize_acqf` (#1142).
- Minor tweaks to `SAASBO` (#1143, #1183).
- Minor updates to tutorials (24f7fda, #1144, #1148, #1159, #1172, #1180).
- Make it easier to specify a custom `PyroModel` (#1149).
- Allow passing in a `mean_module` to `SingleTaskGP`/`FixedNoiseGP` (#1160); see the sketch after this list.
- Add a note about acquisitions using gradients to base class (#1168).
- Remove deprecated `box_decomposition` module (#1175).
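A minimal sketch of the `mean_module` pass-through (the linear mean is an arbitrary choice for illustration):

```python
import torch
from botorch.models import SingleTaskGP
from gpytorch.means import LinearMean

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)

# Since #1160 a custom mean can be supplied at construction time instead
# of being patched onto the model afterwards.
model = SingleTaskGP(train_X, train_Y, mean_module=LinearMean(input_size=3))
```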
Bug Fixes
- Bug-fixes for `ProximalAcquisitionFunction` (#1122).
- Fix missing warnings on failed optimization in `fit_gpytorch_scipy` (#1170).
- Ignore data related buffers in `PairwiseGP.load_state_dict` (#1171).
- Make `fit_gpytorch_model` properly honor the `debug` flag (#1178).
- Fix missing `posterior_transform` in `gen_one_shot_kg_initial_conditions` (#1187).
Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization
New Features
- Implement SAASBO - `SaasFullyBayesianSingleTaskGP` model for sample-efficient high-dimensional Bayesian optimization (#1123); a fitting sketch follows this list.
- Add SAASBO tutorial (#1127).
- Add `LearnedObjective` (#1131), `AnalyticExpectedUtilityOfBestOption` acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
- Add BOPE tutorial (#1138).
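A rough fitting sketch for the SAAS model, assuming the NUTS-based fitting routine that ships alongside it (`fit_fully_bayesian_model_nuts`); data and MCMC settings are made up and far too small for real use:

```python
import torch
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP

# Toy high-dimensional problem; only the first two inputs matter.
train_X = torch.rand(20, 30, dtype=torch.double)
train_Y = train_X[:, :2].sum(dim=-1, keepdim=True)

gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
# Tiny NUTS budget so the sketch runs quickly.
fit_fully_bayesian_model_nuts(
    gp, warmup_steps=32, num_samples=16, thinning=8, disable_progbar=True
)
posterior = gp.posterior(torch.rand(5, 30, dtype=torch.double))
```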
Bug fix release
Non-linear input constraints, new MOO problems, bug fixes, and performance improvements.
New Features
- Add `Standardize` input transform (#1053).
- Low-rank Cholesky updates for NEI (#1056).
- Add support for non-linear input constraints (#1067); a sketch follows this list.
- New MOO problems: MW7 (#1077), disc brake (#1078), penicillin (#1079), RobustToy (#1082), GMM (#1083).
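A hedged sketch of the non-linear constraint support: constraints are callables that are non-negative on feasible points, and feasible starting points are supplied explicitly. The exact argument format has changed across versions, so treat this as an assumption based on #1067 rather than a verified recipe:

```python
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
acqf = UpperConfidenceBound(SingleTaskGP(train_X, train_Y), beta=0.2)

# Feasible region: points inside the unit disk, expressed as c(x) >= 0.
unit_disk = lambda x: 1.0 - (x ** 2).sum(dim=-1)

candidate, value = optimize_acqf(
    acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=4,
    nonlinear_inequality_constraints=[unit_disk],
    # Feasible starting points must come from the caller.
    batch_initial_conditions=torch.rand(4, 1, 2, dtype=torch.double) * 0.5,
)
```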
Other Changes
- Add `Dispatcher` (#1009).
- Modify qNEHVI to support deterministic models (#1026).
- Store tensor attributes of input transforms as buffers (#1035).
- Modify NEHVI to support MTGPs (#1037).
- Make `Normalize` input transform input column-specific (#1047).
- Improve `find_interior_point` (#1049).
- Remove deprecated `botorch.distributions` module (#1061).
- Avoid costly application of posterior transform in Kronecker & HOGP models (#1076).
- Support heteroscedastic perturbations in `InputPerturbations` (#1088).
Performance Improvements
- Make risk measures more memory efficient (#1034).
Bug Fixes
- Properly handle empty `fixed_features` in optimization (#1029).
- Fix missing weights in `VaR` risk measure (#1038).
- Fix `find_interior_point` for negative variables & allow unbounded problems (#1045).
- Filter out indefinite bounds in constraint utilities (#1048).
- Make non-interleaved base samples use intuitive shape (#1057).
- Pad small diagonalization with zeros for `KroneckerMultitaskGP` (#1071).
- Disable learning of bounds in `preprocess_transform` (#1089).
- Catch runtime errors with ill-conditioned covar (#1095).
- Fix `compare_mc_analytic_acquisition` tutorial (#1099).
Approximate GP model, Multi-Output Risk Measures, Bug Fixes and Performance Improvements
New Features
- New `ApproximateGPyTorchModel` wrapper for various (variational) approximate GP models (#1012).
- New `SingleTaskVariationalGP` stochastic variational Gaussian Process model (#1012); a training sketch follows this list.
- Support for Multi-Output Risk Measures (#906, #965).
- Introduce `ModelList` and `PosteriorList` (#829).
- New Constraint Active Search tutorial (#1010).
- Add additional multi-objective optimization test problems (#958).
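A minimal training sketch for `SingleTaskVariationalGP`, using a plain Adam loop with GPyTorch's `VariationalELBO` (toy data, mini-batching omitted; passing an integer for `inducing_points` is an assumption about the constructor):

```python
import torch
from botorch.models.approximate_gp import SingleTaskVariationalGP
from gpytorch.mlls import VariationalELBO

train_X = torch.rand(50, 2)
train_Y = torch.sin(train_X.sum(dim=-1, keepdim=True))

model = SingleTaskVariationalGP(train_X, train_Y, inducing_points=16)
mll = VariationalELBO(model.likelihood, model.model, num_data=train_X.shape[0])

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    output = model.model(train_X)  # the wrapped ApproximateGP
    loss = -mll(output, train_Y.squeeze(-1))
    loss.backward()
    optimizer.step()

posterior = model.posterior(torch.rand(4, 2))
```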
Other Changes
- Add `covar_module` as an optional input of `MultiTaskGP` models (#941).
- Add `min_range` argument to `Normalize` transform to prevent division by zero (#931).
- Add initialization heuristic for acquisition function optimization that samples around best points (#987).
- Update initialization heuristic to perturb a subset of the dimensions of the best points if the dimension is > 20 (#988).
- Modify `apply_constraints` utility to work with multi-output objectives (#994).
- Short-cut `t_batch_mode_transform` decorator on non-tensor inputs (#991).
Performance Improvements
- Use lazy covariance matrix in `BatchedMultiOutputGPyTorchModel.posterior` (#976).
- Fast low-rank Cholesky updates for `qNoisyExpectedHypervolumeImprovement` (#747, #995, #996).
Bug Fixes
- Update error handling to new PyTorch linear algebra messages (#940).
- Avoid test failures on Ampere devices (#944).
- Fixes to the `Griewank` test function (#972).
- Handle empty `base_sample_shape` in `Posterior.rsample` (#986).
- Handle `NotPSDError` and hitting `maxiter` in `fit_gpytorch_model` (#1007).
- Use `TransformedPosterior` for subclasses of `GPyTorchPosterior` (#983).
- Propagate `best_f` argument to `qProbabilityOfImprovement` in input constructors (f5a5f8b).
Maintenance Release + New Tutorials
Compatibility
- Require GPyTorch >=1.5.1 (#928).
New Features
- Add `HigherOrderGP` composite Bayesian Optimization tutorial notebook (#864).
- Add Multi-Task Bayesian Optimization tutorial (#867).
- New multi-objective test problems (#876).
- Add `PenalizedMCObjective` and `L1PenaltyObjective` (#913).
- Add a `ProximalAcquisitionFunction` for regularizing new candidates towards previously generated ones (#919, #924); a usage sketch follows this list.
- Add a `Power` outcome transform (#925).
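A usage sketch for `ProximalAcquisitionFunction`, which wraps a base acquisition function and biases candidates toward the most recently evaluated training point (toy model; the weights are arbitrary):

```python
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.acquisition.proximal import ProximalAcquisitionFunction
from botorch.models import SingleTaskGP

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

base_acqf = UpperConfidenceBound(model, beta=0.2)
# Smaller weights mean a tighter proximal penalty in that dimension.
proximal_acqf = ProximalAcquisitionFunction(
    base_acqf, proximal_weights=torch.tensor([0.5, 0.5], dtype=torch.double)
)
value = proximal_acqf(torch.rand(1, 1, 2, dtype=torch.double))
```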
Bug Fixes
- Batch mode fix for `HigherOrderGP` initialization (#856).
- Improve `CategoricalKernel` precision (#857).
- Fix an issue with `qMultiFidelityKnowledgeGradient.evaluate` (#858).
- Fix an issue with transforms with `HigherOrderGP` (#889).
- Fix initial candidate generation when parameter constraints are on a different device (#897).
- Fix bad in-place op in `_generate_unfixed_lin_constraints` (#901).
- Fix an input transform bug in `fantasize` call (#902).
- Fix outcome transform bug in `batched_to_model_list` (#917).
Other Changes
- Make variance optional for `TransformedPosterior.mean` (#855).
- Support transforms in `DeterministicModel` (#869).
- Support `batch_shape` in `RandomFourierFeatures` (#877).
- Add a `maximize` flag to `PosteriorMean` (#881).
- Ignore categorical dimensions when validating training inputs in `MixedSingleTaskGP` (#882).
- Refactor `HigherOrderGPPosterior` for memory efficiency (#883).
- Support negative weights for minimization objectives in `get_chebyshev_scalarization` (#884); a sketch follows this list.
- Move `train_inputs` transforms to `model.train`/`model.eval` calls (#894).
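A short sketch of the negative-weight convention in `get_chebyshev_scalarization` (made-up outcomes; the second objective is treated as minimized):

```python
import torch
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization

Y = torch.rand(10, 2, dtype=torch.double)  # observed outcomes, two objectives
# Since #884, a negative weight flags an objective for minimization.
weights = torch.tensor([0.7, -0.3], dtype=torch.double)
scalarize = get_chebyshev_scalarization(weights=weights, Y=Y)
scalarized = scalarize(Y)  # one scalarized value per row of Y
```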
Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling
Compatibility
- Require PyTorch >=1.8.1 (#832).
- Require GPyTorch >=1.5 (#848).
- Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (#819, #835).
New Features
- Improved multi-objective optimization capabilities:
  - `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (#797, #822); see the sketch after this list.
  - `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e, #760).
  - Heuristic for reference point selection (#830).
  - `FastNondominatedPartitioning` for hypervolume computations (#699).
  - `DominatedPartitioning` for partitioning the dominated space (#726).
  - `BoxDecompositionList` for handling box decompositions of varying sizes (#712).
  - Direct, batched dominated partitioning for the two-outcome case (#739).
  - `get_default_partitioning_alpha` utility providing a heuristic for selecting the approximation level for partitioning algorithms (#793).
  - New method for computing Pareto frontiers with less memory overhead (#842, #846).
- New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
- Support for discrete and mixed input domains:
  - `CategoricalKernel` for categorical inputs (#771).
  - `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
  - `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (#777).
  - Extend `optimize_acqf_mixed` to allow batch optimization (#804).
- Support for robust / risk-aware optimization:
  - Risk measures for robust / risk-averse optimization (#821).
  - `AppendFeatures` transform (#820).
  - `InputPerturbation` input transform for risk-averse BO with implementation errors (#827).
  - Tutorial notebook for Bayesian Optimization of risk measures (#823).
  - Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
- More scalable multi-task modeling and sampling.
- Various changes to simplify and streamline integration with Ax.
- Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
- `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (#741).
- Add `evaluate` method to `ScalarizedObjective` (#795).
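A construction sketch for `qNoisyExpectedHypervolumeImprovement` on a two-objective toy problem (the reference point is made up; in practice it should be dominated by the Pareto front):

```python
import torch
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import ModelListGP, SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
Y1 = train_X.sum(dim=-1, keepdim=True)
Y2 = -train_X.prod(dim=-1, keepdim=True)
model = ModelListGP(SingleTaskGP(train_X, Y1), SingleTaskGP(train_X, Y2))

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, -1.0],  # illustrative values only
    X_baseline=train_X,
)
value = acqf(torch.rand(1, 3, 2, dtype=torch.double))  # score a q=3 batch
```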
Bug Fixes
- Handle the case when all features are fixed in `optimize_acqf` (#770).
- Pass `fixed_features` to initial candidate generation functions (#806).
- Handle batch empty Pareto frontier in `FastPartitioning` (#740).
- Handle empty Pareto set in `is_non_dominated` (#743).
- Handle edge case of no or a single observation in `get_chebyshev_scalarization` (#762).
- Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (#766).
- Fix `HigherOrderGP` `dtype` bug (#728).
- Normalize before clamping in `Warp` input warping transform (#722).
- Fix bug in GP sampling (#764).
Other Changes
- Modify input transforms to support one-to-many transforms (#819, #835).
- Make initial conditions for acquisition function optimization honor parameter constraints (#752).
- Perform optimization only over unfixed features if `fixed_features` is passed (#839).
- Refactor Max Value Entropy Search methods (#734).
- Use linear algebra functions from the `torch.linalg` module (#735).
- Use PyTorch's `Kumaraswamy` distribution (#746).
- Improved capabilities and some bugfixes for batched models (#723, #767).
- Pass `callback` argument to `scipy.optimize.minimize` in `gen_candidates_scipy` (#744).
- Modify behavior of `X_pending` in multi-objective acquisition functions (#747).
- Allow multi-dimensional batch shapes in test functions (#757).
- Utility for converting batched multi-output models into batched single-output models (#759).
- Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (#787).
- Make `raw_samples` optional if `batch_initial_conditions` is passed (#801).
- Use powers of 2 in qMC docstrings & examples (#812).