
Commit

Update latest version of site
docusaurus-bot committed Aug 30, 2024
1 parent 4e6ad3c commit d25c733
Showing 18 changed files with 1,075 additions and 4,030 deletions.
4 changes: 2 additions & 2 deletions v/latest/en/index.html
Expand Up @@ -48,13 +48,13 @@
Construct an acquisition function:

from botorch.acquisition import LogExpectedImprovement

logNEI = LogExpectedImprovement(model=gp, best_f=Y.max())
logEI = LogExpectedImprovement(model=gp, best_f=Y.max())

Optimize the acquisition function:

from botorch.optim import optimize_acqf

bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
candidate, acq_value = optimize_acqf(
    logNEI, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
    logEI, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
)
candidate  # tensor([[0.2981, 0.2401]], dtype=torch.float64)
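These snippets assume a model gp already fit to training targets Y in the collapsed step above. A minimal sketch of that setup, assuming a standard SingleTaskGP on a toy 2D problem (the actual fitting code is folded out of this diff):

import torch
from botorch import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

# toy training data on the unit square; the real data-generation step is collapsed above
train_X = torch.rand(10, 2, dtype=torch.double)
Y = 1 - torch.linalg.norm(train_X - 0.5, dim=-1, keepdim=True)

# fit a single-output GP surrogate by maximizing the exact marginal log likelihood
gp = SingleTaskGP(train_X, Y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)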
1,159 changes: 56 additions & 1,103 deletions v/latest/files/closed_loop_botorch_only.ipynb

Large diffs are not rendered by default.

77 changes: 40 additions & 37 deletions v/latest/files/closed_loop_botorch_only.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python3
# coding: utf-8

# ## Closed-loop batch, constrained BO in BoTorch with qEI and qNEI
# ## Closed-loop batch, constrained BO in BoTorch with qLogEI and qLogNEI
#
# In this tutorial, we illustrate how to implement a simple Bayesian Optimization (BO) closed loop in BoTorch.
#
Expand All @@ -10,7 +10,7 @@
# However, you may want to do things that are not easily supported in Ax at this time (like running high-dimensional BO using a VAE+GP model that you jointly train on high-dimensional input data). If you find yourself in such a situation, you will need to write your own optimization loop, as we do in this tutorial.
#
#
# We use the batch Expected Improvement (qEI) and batch Noisy Expected Improvement (qNEI) acquisition functions to optimize a constrained version of the synthetic Hartmann6 test function. The standard problem is
# We use the batch Log Expected Improvement (`qLogEI`) and batch Log Noisy Expected Improvement (`qLogNEI`) acquisition functions to optimize a constrained version of the synthetic Hartmann6 test function. The standard problem is
#
# $$f(x) = -\sum_{i=1}^4 \alpha_i \exp \left( -\sum_{j=1}^6 A_{ij} (x_j - P_{ij})^2 \right)$$
#
Expand All @@ -20,7 +20,7 @@
#
# Since BoTorch assumes a maximization problem, we will attempt to maximize $-f(x)$ to achieve $\max_{x} -f(x) = 3.32237$.
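# As a quick sanity check (a sketch added here, not part of the original notebook), the negated
# Hartmann6 function can be instantiated from botorch.test_functions and evaluated at its known
# optimizer to recover the 3.32237 value quoted above:

import torch
from botorch.test_functions import Hartmann

# negate=True turns the usual minimization problem into the maximization problem used below
neg_hartmann6 = Hartmann(negate=True)

# known optimizer of the standard 6-dimensional Hartmann problem
x_star = torch.tensor([[0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573]])
print(neg_hartmann6(x_star))  # approximately 3.32237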

# In[1]:
# In[14]:


import os
Expand All @@ -37,7 +37,7 @@
#
# First, we define the constraint used in the example in `outcome_constraint`. The second function `weighted_obj` is a "feasibility-weighted objective," which returns zero when not feasible.
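# The bodies of these two helpers are folded out of the hunk below; a plausible sketch, assuming
# the constraint takes the sum form ||x||_1 - 3 <= 0 (an assumption here, since the definition is
# collapsed in this diff):

import torch
from botorch.test_functions import Hartmann

neg_hartmann6 = Hartmann(negate=True)


def outcome_constraint(X):
    # feasible when the constraint value is <= 0 (sum constraint assumed)
    return X.sum(dim=-1) - 3


def weighted_obj(X):
    # feasibility-weighted objective: zero wherever the constraint is violated
    return neg_hartmann6(X) * (outcome_constraint(X) <= 0).type_as(X)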

# In[2]:
# In[15]:


from botorch.test_functions import Hartmann
Expand All @@ -62,13 +62,14 @@ def weighted_obj(X):
#
# Each component is a `FixedNoiseGP`. The models are initialized with 10 points drawn randomly from $[0,1]^6$.
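# The body of generate_initial_data is collapsed in the hunk below; a plausible sketch of how the
# 10 random points and their noisy observations might be generated, assuming the neg_hartmann6,
# outcome_constraint, weighted_obj, NOISE_SE, device, and dtype names defined elsewhere in the
# notebook:

def generate_initial_data(n=10):
    # draw random training inputs from the unit hypercube [0, 1]^6
    train_x = torch.rand(n, 6, device=device, dtype=dtype)
    exact_obj = neg_hartmann6(train_x).unsqueeze(-1)  # add an output dimension
    exact_con = outcome_constraint(train_x).unsqueeze(-1)
    # corrupt the exact values with observation noise of scale NOISE_SE
    train_obj = exact_obj + NOISE_SE * torch.randn_like(exact_obj)
    train_con = exact_con + NOISE_SE * torch.randn_like(exact_con)
    # track the best feasibility-weighted value in the initial design
    best_observed_value = weighted_obj(train_x).max().item()
    return train_x, train_obj, train_con, best_observed_value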

# In[3]:
# In[16]:


from botorch.models.transforms.input import Normalize
from botorch.models import FixedNoiseGP, ModelListGP
from gpytorch.mlls.sum_marginal_log_likelihood import SumMarginalLogLikelihood

NOISE_SE = 0.5
NOISE_SE = 0.25
train_yvar = torch.tensor(NOISE_SE**2, device=device, dtype=dtype)


Expand All @@ -85,12 +86,18 @@ def generate_initial_data(n=10):

def initialize_model(train_x, train_obj, train_con, state_dict=None):
    # define models for objective and constraint
    model_obj = FixedNoiseGP(train_x, train_obj, train_yvar.expand_as(train_obj)).to(
        train_x
    )
    model_con = FixedNoiseGP(train_x, train_con, train_yvar.expand_as(train_con)).to(
        train_x
    )
    model_obj = FixedNoiseGP(
        train_x,
        train_obj,
        train_yvar.expand_as(train_obj),
        input_transform=Normalize(d=train_x.shape[-1]),
    ).to(train_x)
    model_con = FixedNoiseGP(
        train_x,
        train_con,
        train_yvar.expand_as(train_con),
        input_transform=Normalize(d=train_x.shape[-1]),
    ).to(train_x)
    # combine into a multi-output GP model
    model = ModelListGP(model_obj, model_con)
    mll = SumMarginalLogLikelihood(model.likelihood, model)
Expand All @@ -103,11 +110,10 @@ def initialize_model(train_x, train_obj, train_con, state_dict=None):
# #### Define a construct to extract the objective and constraint from the GP
# The methods below take the outputs of the GP and return the objective and the constraint. In general, these can be any `Callable`, but here we simply need to index the correct output.

# In[4]:

# In[17]:

from botorch.acquisition.objective import ConstrainedMCObjective

from botorch.acquisition.objective import GenericMCObjective

def obj_callable(Z: torch.Tensor, X: Optional[torch.Tensor] = None):
    return Z[..., 0]
Expand All @@ -117,17 +123,13 @@ def constraint_callable(Z):
    return Z[..., 1]


# define a feasibility-weighted objective for optimization
constrained_obj = ConstrainedMCObjective(
objective=obj_callable,
constraints=[constraint_callable],
)
objective = GenericMCObjective(objective=obj_callable)


# #### Define a helper function that performs the essential BO step
# The helper function below takes an acquisition function as an argument, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=3$. The function `optimize_acqf` optimizes the $q$ points jointly. A simple initialization heuristic is used to select the 10 restart initial locations from a set of 50 random points.
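# The helper itself is folded out of the hunk below; a plausible sketch, assuming a bounds tensor
# over [0, 1]^6 and constants BATCH_SIZE=3, NUM_RESTARTS=10, RAW_SAMPLES=50 defined in the
# collapsed cells (the names are assumptions):

def optimize_acqf_and_get_observation(acq_func):
    # optimize the q candidate points jointly over the unit hypercube
    candidates, _ = optimize_acqf(
        acq_function=acq_func,
        bounds=bounds,
        q=BATCH_SIZE,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,  # used by the restart-initialization heuristic
    )
    # observe new noisy objective and constraint values at the candidates
    new_x = candidates.detach()
    exact_obj = neg_hartmann6(new_x).unsqueeze(-1)
    exact_con = outcome_constraint(new_x).unsqueeze(-1)
    new_obj = exact_obj + NOISE_SE * torch.randn_like(exact_obj)
    new_con = exact_con + NOISE_SE * torch.randn_like(exact_con)
    return new_x, new_obj, new_con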

# In[5]:
# In[18]:


from botorch.optim import optimize_acqf
Expand Down Expand Up @@ -170,7 +172,7 @@ def update_random_observations(best_random):
return best_random
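# Only the closing return of update_random_observations survives the fold above; a plausible
# sketch of the full helper, assuming it implements the random-search baseline using the
# weighted_obj and BATCH_SIZE names from earlier in the notebook:

def update_random_observations(best_random):
    # simulate a random policy: draw a new random batch and keep the best feasible value so far
    rand_x = torch.rand(BATCH_SIZE, 6)
    next_random_best = weighted_obj(rand_x).max().item()
    best_random.append(max(best_random[-1], next_random_best))
    return best_random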


# ### Perform Bayesian Optimization loop with qNEI
# ### Perform Bayesian Optimization loop with qLogNEI
# The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps:
# 1. given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots x_q\}$
# 2. observe $f(x)$ for each $x$ in the batch
Expand All @@ -181,16 +183,16 @@ def update_random_observations(best_random):
#
# *Note*: Running this may take a little while.

# In[6]:
# In[19]:


import time
import warnings

from botorch import fit_gpytorch_mll
from botorch.acquisition.monte_carlo import (
    qExpectedImprovement,
    qNoisyExpectedImprovement,
from botorch.acquisition import (
    qLogExpectedImprovement,
    qLogNoisyExpectedImprovement,
)
from botorch.exceptions import BadInitialCandidatesWarning
from botorch.sampling.normal import SobolQMCNormalSampler
Expand All @@ -208,7 +210,6 @@ def update_random_observations(best_random):

best_observed_all_ei, best_observed_all_nei, best_random_all = [], [], []


# average over multiple trials
for trial in range(1, N_TRIALS + 1):

Expand Down Expand Up @@ -245,23 +246,25 @@ def update_random_observations(best_random):
qmc_sampler = SobolQMCNormalSampler(sample_shape=torch.Size([MC_SAMPLES]))

# for best_f, we use the best observed noisy values as an approximation
qEI = qExpectedImprovement(
qLogEI = qLogExpectedImprovement(
model=model_ei,
best_f=(train_obj_ei * (train_con_ei <= 0).to(train_obj_ei)).max(),
sampler=qmc_sampler,
objective=constrained_obj,
objective=objective,
constraints=[constraint_callable],
)

qNEI = qNoisyExpectedImprovement(
qLogNEI = qLogNoisyExpectedImprovement(
model=model_nei,
X_baseline=train_x_nei,
sampler=qmc_sampler,
objective=constrained_obj,
objective=objective,
constraints=[constraint_callable],
)

# optimize and get new observation
new_x_ei, new_obj_ei, new_con_ei = optimize_acqf_and_get_observation(qEI)
new_x_nei, new_obj_nei, new_con_nei = optimize_acqf_and_get_observation(qNEI)
new_x_ei, new_obj_ei, new_con_ei = optimize_acqf_and_get_observation(qLogEI)
new_x_nei, new_obj_nei, new_con_nei = optimize_acqf_and_get_observation(qLogNEI)

# update training points
train_x_ei = torch.cat([train_x_ei, new_x_ei])
Expand Down Expand Up @@ -314,7 +317,7 @@ def update_random_observations(best_random):
# #### Plot the results
# The plot below shows the best objective value observed at each step of the optimization for each of the algorithms. The confidence intervals represent the variance at that step in the optimization across the trial runs. The variance across optimization runs is quite high, so in order to get a better estimate of the average performance one would have to run a much larger number of trials `N_TRIALS` (we avoid this here to limit the runtime of this tutorial).
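# The body of the ci helper used for the error bars is folded into the hunk below; a plausible
# sketch, assuming a normal-approximation 95% confidence half-width over the N_TRIALS runs:

import numpy as np

def ci(y):
    # y has one row per trial; return the 95% CI half-width of the mean across trials
    return 1.96 * y.std(axis=0) / np.sqrt(N_TRIALS)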

# In[7]:
# In[20]:


import numpy as np
Expand All @@ -337,13 +340,13 @@ def ci(y):

fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.errorbar(iters, y_rnd.mean(axis=0), yerr=ci(y_rnd), label="random", linewidth=1.5)
ax.errorbar(iters, y_ei.mean(axis=0), yerr=ci(y_ei), label="qEI", linewidth=1.5)
ax.errorbar(iters, y_nei.mean(axis=0), yerr=ci(y_nei), label="qNEI", linewidth=1.5)
ax.errorbar(iters, y_ei.mean(axis=0), yerr=ci(y_ei), label="qLogEI", linewidth=1.5)
ax.errorbar(iters, y_nei.mean(axis=0), yerr=ci(y_nei), label="qLogNEI", linewidth=1.5)
plt.plot(
    [0, N_BATCH * BATCH_SIZE],
    [GLOBAL_MAXIMUM] * 2,
    "k",
    label="true best objective",
    label="true best feasible objective",
    linewidth=2,
)
ax.set_ylim(bottom=0.5)
