
Releases: pytorch/botorch

Compatibility Release

07 Sep 05:04

Compatibility

  • Require Python >= 3.8 (via #1347).
  • Support for Python 3.10 (via #1379).
  • Require PyTorch >= 1.11 (via #1363).
  • Require GPyTorch >= 1.9.0 (#1347).
    • GPyTorch 1.9.0 is a major refactor that factors out the lazy tensor functionality into the new LinearOperator library, which required a number of adjustments to BoTorch (#1363, #1377).
  • Require Pyro >= 1.8.2 (#1379).

New Features

  • Add the ability to generate the features appended by the AppendFeatures input transform via a generic callable (#1354; see the sketch after this list).
  • Add new synthetic test functions for sensitivity analysis (#1355, #1361).
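
A minimal sketch of the two AppendFeatures interfaces; the callable argument name (f) is assumed from #1354 and may differ in detail:

```python
import torch
from botorch.models.transforms.input import AppendFeatures

# Classic usage: append a fixed feature set to every input point.
tf_fixed = AppendFeatures(feature_set=torch.rand(2, 1))

# New usage (argument name assumed): derive the appended features from X.
tf_callable = AppendFeatures(f=lambda X: X.sum(dim=-1, keepdim=True))

X = torch.rand(5, 3)
X_appended = tf_callable.transform(X)  # features appended along the last dim
```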

Other Changes

  • Use time.monotonic() instead of time.time() to measure duration (#1353).
  • Allow passing Y_samples directly in MARS.set_baseline_Y (#1364).

Bug Fixes

  • Patch state_dict loading for PairwiseGP (#1359).
  • Fix batch_shape handling in Normalize and InputStandardize transforms (#1360).

Maintenance Release

12 Aug 21:19

[0.6.6] - Aug 12, 2022

Compatibility

  • Require GPyTorch >= 1.8.1 (#1347).

New Features

  • Support batched models in RandomFourierFeatures (#1336; see the sketch after this list).
  • Add a skip_expand option to AppendFeatures (#1344).
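
A short sketch of drawing approximate GP function samples from a batched model via the RFF utilities (batched support per #1336; parameter values are illustrative):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.utils.gp_sampling import get_gp_samples

# A batched model: batch shape (4,), 10 training points, 2 input dims.
train_X = torch.rand(4, 10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# Draw 8 approximate function samples using random Fourier features.
gp_samples = get_gp_samples(
    model=model, num_outputs=1, n_samples=8, num_rff_features=256
)
```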

Other Changes

  • Allow qProbabilityOfImprovement to use batch-shaped best_f (#1324; see the sketch below).
  • Make optimize_acqf re-attempt failed optimization runs, and improve error handling in optimize_acqf and gen_candidates_scipy (#1325).
  • Reduce memory overhead in MARS.set_baseline_Y (#1346).
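
A minimal sketch of the batch-shaped best_f, assuming one incumbent per model in the batch:

```python
import torch
from botorch.acquisition.monte_carlo import qProbabilityOfImprovement
from botorch.models import SingleTaskGP

# Two independent GPs fit jointly: batch shape (2,).
train_X = torch.rand(2, 10, 3)
train_Y = torch.rand(2, 10, 1)
model = SingleTaskGP(train_X, train_Y)

# Per #1324, best_f may be batch-shaped: one incumbent value per batch.
best_f = train_Y.amax(dim=-2).squeeze(-1)  # shape (2,)
acqf = qProbabilityOfImprovement(model, best_f=best_f)
```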

Bug Fixes

  • Fix bug where outcome_transform was ignored for ModelListGP.fantasize (#1338).
  • Fix bug causing get_polytope_samples to sample incorrectly when variables
    live in multiple dimensions (#1341).

Robust Multi-Objective BO, Multi-Objective Multi-Fidelity BO, Scalable Constrained BO, Improvements to Ax Integration

15 Jul 17:08

Compatibility

  • Require PyTorch >=1.10 (#1293).
  • Require GPyTorch >=1.7 (#1293).

New Features

  • Add MOMF (Multi-Objective Multi-Fidelity) acquisition function (#1153).
  • Support PairwiseLogitLikelihood and modularize PairwiseGP (#1193).
  • Add a transformed_weighting flag to ProximalAcquisitionFunction (#1194).
  • Add FeasibilityWeightedMCMultiOutputObjective (#1202).
  • Add outcome_transform to FixedNoiseMultiTaskGP (#1255).
  • Support Scalable Constrained Bayesian Optimization (#1257).
  • Support SaasFullyBayesianSingleTaskGP in prune_inferior_points (#1260).
  • Implement MARS as a risk measure (#1303; see the sketch after this list).
  • Add MARS tutorial (#1305).
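
A rough sketch of constructing MARS; the constructor arguments shown follow the conventions of BoTorch's other risk measures and may differ in detail from #1303:

```python
import torch
from botorch.acquisition.multi_objective.multi_output_risk_measures import MARS

# MARS (MVaR Approximation via Random Scalarizations) scalarizes a
# multi-output risk measure via Chebyshev weights. alpha is the VaR level
# and n_w the number of perturbed copies per candidate (values illustrative).
mars = MARS(
    alpha=0.8,
    n_w=8,
    chebyshev_weights=torch.tensor([0.5, 0.5]),
    baseline_Y=torch.rand(20, 2),  # hypothetical two-outcome baseline data
)
```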

Other Changes

  • Add Bilog outcome transform (#1189).
  • Make get_infeasible_cost return a cost value for each outcome (#1191).
  • Modify risk measures to accept List[float] for weights (#1197).
  • Support SaasFullyBayesianSingleTaskGP in prune_inferior_points_multi_objective (#1204).
  • BotorchContainers and BotorchDatasets: Large refactor of the original TrainingData API to allow for more diverse types of datasets (#1205, #1221).
  • Proximal biasing support for multi-output SingleTaskGP models (#1212).
  • Improve error handling in optimize_acqf_discrete with a check that choices is non-empty (#1228).
  • Handle X_pending properly in FixedFeatureAcquisition (#1233, #1234; see the sketch after this list).
  • PE and PLBO support in Ax (#1240, #1241).
  • Remove model.train call from get_X_baseline for better caching (#1289).
  • Support inf values in bounds argument of optimize_acqf (#1302).
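
For the FixedFeatureAcquisition change, a minimal sketch of wrapping an acquisition function with one input dimension pinned (values illustrative):

```python
import torch
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 3)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
qei = qExpectedImprovement(model, best_f=train_Y.max())

# Pin the last input dimension to 0.5; the wrapper (and, per #1233/#1234,
# any X_pending set on it) then operates on the remaining two dimensions.
ff_qei = FixedFeatureAcquisitionFunction(qei, d=3, columns=[2], values=[0.5])
value = ff_qei(torch.rand(1, 1, 2))  # q=1 candidate over the free dimensions
```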

Bug Fixes

  • Update get_gp_samples to support input / outcome transforms (#1201).
  • Fix cached Cholesky sampling in qNEHVI when using Standardize outcome transform (#1215).
  • Make task_feature a required input in MultiTaskGP.construct_inputs (#1246).
  • Fix CUDA tests (#1253).
  • Fix FixedSingleSampleModel dtype/device conversion (#1254).
  • Prevent inappropriate transforms by putting input transforms into train mode before converting models (#1283).
  • Fix sample_points_around_best when using 20-dimensional inputs or prob_perturb (#1290).
  • Skip bound validation in optimize_acqf if inequality constraints are specified (#1297).
  • Properly handle RFFs when used with a ModelList with individual transforms (#1299).
  • Update PosteriorList to support deterministic-only models and fix event_shape (#1300).

Documentation

  • Add a note about observation noise in the posterior in fit_model_with_torch_optimizer notebook (#1196).
  • Fix custom botorch model in Ax tutorial to support new interface (#1213).
  • Update MOO docs (#1242).
  • Add SMOKE_TEST option to MOMF tutorial (#1243).
  • Fix ModelListGP.condition_on_observations/fantasize bug (#1250).
  • Replace space with underscore for proper doc generation (#1256).
  • Update PBO tutorial to use EUBO (#1262).

Maintenance Release

21 Apr 23:47

New Features

  • Implement ExpectationPosteriorTransform (#903).
  • Add PairwiseMCPosteriorVariance, a cheap active learning acquisition function (#1125).
  • Support computing quantiles in the fully Bayesian posterior, add FullyBayesianPosteriorList (#1161).
  • Add expectation risk measures (#1173; see the sketch after this list).
  • Implement Multi-Fidelity GIBBON (Lower Bound MES) acquisition function (#1185).
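
A small sketch of the expectation risk measure acting as an MC objective over n_w perturbed copies of each candidate (shapes illustrative, interface per BoTorch's risk-measure conventions):

```python
import torch
from botorch.acquisition.risk_measures import Expectation

# Risk measures reshape the q-dimension into (q, n_w) blocks and reduce
# over the n_w perturbed copies of each candidate.
risk = Expectation(n_w=8)
samples = torch.rand(32, 2 * 8, 1)  # (mc_samples, q * n_w, m=1)
values = risk(samples)  # expected shape: (32, 2)
```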

Other Changes

  • Add an error message for one-shot acquisition functions in optimize_acqf_discrete (#939).
  • Validate the shape of the bounds argument in optimize_acqf (#1142).
  • Minor tweaks to SAASBO (#1143, #1183).
  • Minor updates to tutorials (24f7fda, #1144, #1148, #1159, #1172, #1180).
  • Make it easier to specify a custom PyroModel (#1149).
  • Allow passing in a mean_module to SingleTaskGP/FixedNoiseGP (#1160; see the sketch after this list).
  • Add a note about acquisitions using gradients to base class (#1168).
  • Remove deprecated box_decomposition module (#1175).
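
A minimal sketch of the new mean_module argument, here swapping the default ConstantMean for a linear mean:

```python
import torch
from botorch.models import SingleTaskGP
from gpytorch.means import LinearMean

train_X = torch.rand(20, 3)
train_Y = train_X.sum(dim=-1, keepdim=True)

# Per #1160, a custom GPyTorch mean can replace the default ConstantMean.
model = SingleTaskGP(train_X, train_Y, mean_module=LinearMean(input_size=3))
```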

Bug Fixes

  • Bug-fixes for ProximalAcquisitionFunction (#1122).
  • Fix missing warnings on failed optimization in fit_gpytorch_scipy (#1170).
  • Ignore data related buffers in PairwiseGP.load_state_dict (#1171).
  • Make fit_gpytorch_model properly honor the debug flag (#1178).
  • Fix missing posterior_transform in gen_one_shot_kg_initial_conditions (#1187).

Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization

28 Mar 00:30

New Features

  • Implement SAASBO: the SaasFullyBayesianSingleTaskGP model for sample-efficient high-dimensional Bayesian optimization (#1123; see the sketch after this list).
  • Add SAASBO tutorial (#1127).
  • Add LearnedObjective (#1131), AnalyticExpectedUtilityOfBestOption acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
  • Add BOPE tutorial (#1138).
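
A minimal SAASBO sketch on toy data; the NUTS settings are deliberately small to keep it cheap and are not recommended defaults:

```python
import torch
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP

# High-dimensional toy data: only the first two of 30 inputs matter.
train_X = torch.rand(20, 30)
train_Y = train_X[:, :2].sum(dim=-1, keepdim=True)

model = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
fit_fully_bayesian_model_nuts(model, warmup_steps=64, num_samples=64, thinning=8)
posterior = model.posterior(torch.rand(5, 30))  # mixture over MCMC samples
```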

Other Changes

  • Use qKG.evaluate in optimize_acqf_mixed (#1133).
  • Add construct_inputs to SAASBO (#1136).

Bug Fixes

  • Fix "Constraint Active Search" tutorial (#1124).
  • Update "Discrete Multi-Fidelity BO" tutorial (#1134).

Bug fix release

09 Mar 22:48

New Features

  • Use BOTORCH_MODULAR in tutorials with Ax (#1105).
  • Add optimize_acqf_discrete_local_search for discrete search spaces (#1111).
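
A sketch of optimize_acqf_discrete_local_search, assuming one tensor of allowed values per input dimension (the exact argument layout may differ):

```python
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim.optimize import optimize_acqf_discrete_local_search

train_X = torch.rand(10, 3)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = ExpectedImprovement(model, best_f=train_Y.max())

# One tensor of feasible values per input dimension.
discrete_choices = [torch.linspace(0, 1, 11) for _ in range(3)]
candidates, value = optimize_acqf_discrete_local_search(
    acq_function=acqf, discrete_choices=discrete_choices, q=1
)
```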

Bug Fixes

  • Fix missing posterior_transform in qNEI and get_acquisition_function (#1113).

Non-linear input constraints, new MOO problems, bug fixes, and performance improvements

28 Feb 22:41

New Features

  • Add Standardize input transform (#1053).
  • Low-rank Cholesky updates for NEI (#1056).
  • Add support for non-linear input constraints (#1067; see the sketch after this list).
  • New MOO problems: MW7 (#1077), disc brake (#1078), penicillin (#1079), RobustToy (#1082), GMM (#1083).
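
A sketch of the non-linear constraint interface, assuming feasibility is expressed as callable(x) >= 0 and that explicit starting points must be supplied:

```python
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = ExpectedImprovement(model, best_f=train_Y.max())

# Feasible iff the callable is non-negative; here: stay inside the unit disk.
def unit_disk(x):
    return 1.0 - (x ** 2).sum(dim=-1)

candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]]),
    q=1,
    num_restarts=4,
    nonlinear_inequality_constraints=[unit_disk],
    # Non-linear constraints need feasible starting points from the user.
    batch_initial_conditions=torch.rand(4, 1, 2) * 0.5,
)
```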

Other Changes

  • Add Dispatcher (#1009).
  • Modify qNEHVI to support deterministic models (#1026).
  • Store tensor attributes of input transforms as buffers (#1035).
  • Modify NEHVI to support MTGPs (#1037).
  • Make the Normalize input transform column-specific (#1047).
  • Improve find_interior_point (#1049).
  • Remove deprecated botorch.distributions module (#1061).
  • Avoid costly application of posterior transform in Kronecker & HOGP models (#1076).
  • Support heteroscedastic perturbations in InputPerturbation (#1088).

Performance Improvements

  • Make risk measures more memory efficient (#1034).

Bug Fixes

  • Properly handle empty fixed_features in optimization (#1029).
  • Fix missing weights in VaR risk measure (#1038).
  • Fix find_interior_point for negative variables & allow unbounded problems (#1045).
  • Filter out indefinite bounds in constraint utilities (#1048).
  • Make non-interleaved base samples use intuitive shape (#1057).
  • Pad small diagonalization with zeros for KroneckerMultitaskGP (#1071).
  • Disable learning of bounds in preprocess_transform (#1089).
  • Catch runtime errors with ill-conditioned covar (#1095).
  • Fix compare_mc_analytic_acquisition tutorial (#1099).

Approximate GP model, Multi-Output Risk Measures, Bug Fixes and Performance Improvements

09 Dec 00:16

Compatibility

  • Require PyTorch >=1.9 (#1011).
  • Require GPyTorch >=1.6 (#1011).

New Features

  • New ApproximateGPyTorchModel wrapper for various (variational) approximate GP models (#1012).
  • New SingleTaskVariationalGP stochastic variational Gaussian process model (#1012; see the sketch after this list).
  • Support for Multi-Output Risk Measures (#906, #965).
  • Introduce ModelList and PosteriorList (#829).
  • New Constraint Active Search tutorial (#1010).
  • Add additional multi-objective optimization test problems (#958).
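
A minimal sketch of fitting the new variational model with GPyTorch's VariationalELBO; the model.model attribute layout is assumed from the ApproximateGPyTorchModel wrapper:

```python
import torch
from botorch.models.approximate_gp import SingleTaskVariationalGP
from gpytorch.mlls import VariationalELBO

train_X = torch.rand(256, 2)
train_Y = torch.sin(6 * train_X).sum(dim=-1, keepdim=True)

model = SingleTaskVariationalGP(train_X, train_Y, inducing_points=32)
mll = VariationalELBO(model.likelihood, model.model, num_data=train_X.shape[0])

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(50):
    optimizer.zero_grad()
    output = model.model(train_X)  # inner variational GP
    loss = -mll(output, train_Y.squeeze(-1))
    loss.backward()
    optimizer.step()
```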

Other Changes

  • Add covar_module as an optional input of MultiTaskGP models (#941).
  • Add min_range argument to Normalize transform to prevent division by zero (#931).
  • Add initialization heuristic for acquisition function optimization that samples around best points (#987).
  • Update initialization heuristic to perturb a subset of the dimensions of the best points if the dimension is > 20 (#988).
  • Modify apply_constraints utility to work with multi-output objectives (#994).
  • Short-cut t_batch_mode_transform decorator on non-tensor inputs (#991).

Performance Improvements

  • Use lazy covariance matrix in BatchedMultiOutputGPyTorchModel.posterior (#976).
  • Fast low-rank Cholesky updates for qNoisyExpectedHypervolumeImprovement (#747, #995, #996).

Bug Fixes

  • Update error handling to new PyTorch linear algebra messages (#940).
  • Avoid test failures on Ampere devices (#944).
  • Fixes to the Griewank test function (#972).
  • Handle empty base_sample_shape in Posterior.rsample (#986).
  • Handle NotPSDError and hitting maxiter in fit_gpytorch_model (#1007).
  • Use TransformedPosterior for subclasses of GPyTorchPosterior (#983).
  • Propagate best_f argument to qProbabilityOfImprovement in input constructors (f5a5f8b).

Maintenance Release + New Tutorials

02 Sep 20:44

Compatibility

  • Require GPyTorch >=1.5.1 (#928).

New Features

  • Add HigherOrderGP composite Bayesian Optimization tutorial notebook (#864).
  • Add Multi-Task Bayesian Optimization tutorial (#867).
  • New multi-objective test problems (#876).
  • Add PenalizedMCObjective and L1PenaltyObjective (#913).
  • Add a ProximalAcquisitionFunction for regularizing new candidates towards previously generated ones (#919, #924; see the sketch after this list).
  • Add a Power outcome transform (#925).
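
A minimal sketch of the new proximal wrapper, which biases candidate generation towards the most recent training point (weights illustrative):

```python
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.acquisition.proximal import ProximalAcquisitionFunction
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
ei = ExpectedImprovement(model, best_f=train_Y.max())

# Smaller weights mean a narrower proximal region in that dimension.
prox_ei = ProximalAcquisitionFunction(ei, proximal_weights=torch.tensor([0.5, 0.5]))
```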

Bug Fixes

  • Batch mode fix for HigherOrderGP initialization (#856).
  • Improve CategoricalKernel precision (#857).
  • Fix an issue with qMultiFidelityKnowledgeGradient.evaluate (#858).
  • Fix an issue with transforms in HigherOrderGP (#889).
  • Fix initial candidate generation when parameter constraints are on different device (#897).
  • Fix bad in-place op in _generate_unfixed_lin_constraints (#901).
  • Fix an input transform bug in fantasize call (#902).
  • Fix outcome transform bug in batched_to_model_list (#917).

Other Changes

  • Make variance optional for TransformedPosterior.mean (#855).
  • Support transforms in DeterministicModel (#869).
  • Support batch_shape in RandomFourierFeatures (#877).
  • Add a maximize flag to PosteriorMean (#881).
  • Ignore categorical dimensions when validating training inputs in MixedSingleTaskGP (#882).
  • Refactor HigherOrderGPPosterior for memory efficiency (#883).
  • Support negative weights for minimization objectives in get_chebyshev_scalarization (#884; see the sketch after this list).
  • Move train_inputs transforms to model.train/eval calls (#894).
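
A short sketch of the negative-weight convention in get_chebyshev_scalarization (here maximizing the first objective while minimizing the second):

```python
import torch
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization

Y = torch.rand(20, 2)  # observed outcomes, used for normalization
# Per #884, a negative weight marks an objective to be minimized.
weights = torch.tensor([1.0, -1.0])
objective = get_chebyshev_scalarization(weights=weights, Y=Y)
scalarized = objective(Y)  # one scalar value per observation
```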

Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling

29 Jun 19:31

Compatibility

  • Require PyTorch >=1.8.1 (#832).
  • Require GPyTorch >=1.5 (#848).
  • Changes to how input transforms are applied: transform_inputs is applied in model.forward if the model is in train mode, otherwise it is applied in the posterior call (#819, #835).

New Features

  • Improved multi-objective optimization capabilities:
    • qNoisyExpectedHypervolumeImprovement acquisition function that improves on qExpectedHypervolumeImprovement in terms of tolerating observation noise and speeding up computation for large q-batches (#797, #822; see the sketch after this list).
    • qMultiObjectiveMaxValueEntropy acquisition function (913aa0e, #760).
    • Heuristic for reference point selection (#830).
    • FastNondominatedPartitioning for Hypervolume computations (#699).
    • DominatedPartitioning for partitioning the dominated space (#726).
    • BoxDecompositionList for handling box decompositions of varying sizes (#712).
    • Direct, batched dominated partitioning for the two-outcome case (#739).
    • get_default_partitioning_alpha utility providing heuristic for selecting approximation level for partitioning algorithms (#793).
    • New method for computing Pareto Frontiers with less memory overhead (#842, #846).
  • New qLowerBoundMaxValueEntropy acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
  • Support for discrete and mixed input domains:
    • CategoricalKernel for categorical inputs (#771).
    • MixedSingleTaskGP for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
    • optimize_acqf_discrete for optimizing acquisition functions over fully discrete domains (#777).
    • Extend optimize_acqf_mixed to allow batch optimization (#804).
  • Support for robust / risk-aware optimization:
    • Risk measures for robust / risk-averse optimization (#821).
    • AppendFeatures transform (#820).
    • InputPerturbation input transform for risk-averse BO with implementation errors (#827).
    • Tutorial notebook for Bayesian Optimization of risk measures (#823).
    • Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
  • More scalable multi-task modeling and sampling:
    • KroneckerMultiTaskGP model for efficient multi-task modeling for block-design settings (all tasks observed at all inputs) (#637).
    • Support for transforms in Multi-Task GP models (#681).
    • Posterior sampling based on Matheron's rule for Multi-Task GP models (#841).
  • Various changes to simplify and streamline integration with Ax:
    • Handle non-block designs in TrainingData (#794).
    • Acquisition function input constructor registry (#788, #802, #845).
  • Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
  • DelaunayPolytopeSampler for fast uniform sampling from (simple) polytopes (#741).
  • Add evaluate method to ScalarizedObjective (#795).
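
A minimal qNoisyExpectedHypervolumeImprovement sketch on a two-objective toy problem; the reference point values are illustrative and should be dominated by the Pareto front in practice:

```python
import torch
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import ModelListGP, SingleTaskGP

train_X = torch.rand(20, 3)
Y1 = train_X.sum(dim=-1, keepdim=True)
Y2 = -train_X.prod(dim=-1, keepdim=True)
model = ModelListGP(SingleTaskGP(train_X, Y1), SingleTaskGP(train_X, Y2))

# ref_point has one entry per objective; X_baseline are previously
# evaluated points used to integrate out observation noise.
acqf = qNoisyExpectedHypervolumeImprovement(
    model=model, ref_point=[0.0, -1.0], X_baseline=train_X
)
value = acqf(torch.rand(1, 1, 3))  # evaluate a q=1 candidate batch
```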

Bug Fixes

  • Handle the case when all features are fixed in optimize_acqf (#770).
  • Pass fixed_features to initial candidate generation functions (#806).
  • Handle batch empty pareto frontier in FastPartitioning (#740).
  • Handle empty pareto set in is_non_dominated (#743).
  • Handle edge case of no or a single observation in get_chebyshev_scalarization (#762).
  • Fix an issue in gen_candidates_torch that caused problems with acquisition functions using fantasy models (#766).
  • Fix HigherOrderGP dtype bug (#728).
  • Normalize before clamping in Warp input warping transform (#722).
  • Fix bug in GP sampling (#764).

Other Changes

  • Modify input transforms to support one-to-many transforms (#819, #835).
  • Make initial conditions for acquisition function optimization honor parameter constraints (#752).
  • Perform optimization only over unfixed features if fixed_features is passed (#839).
  • Refactor Max Value Entropy Search Methods (#734).
  • Use Linear Algebra functions from the torch.linalg module (#735).
  • Use PyTorch's Kumaraswamy distribution (#746).
  • Improved capabilities and some bugfixes for batched models (#723, #767).
  • Pass callback argument to scipy.optimize.minimize in gen_candidates_scipy (#744).
  • Modify behavior of X_pending in multi-objective acquisition functions (#747).
  • Allow multi-dimensional batch shapes in test functions (#757).
  • Utility for converting batched multi-output models into batched single-output models (#759).
  • Explicitly raise NotPSDError in _scipy_objective_and_grad (#787).
  • Make raw_samples optional if batch_initial_conditions is passed (#801).
  • Use powers of 2 in qMC docstrings & examples (#812).