
Commit d25c733

Update latest version of site
1 parent 4e6ad3c commit d25c733

18 files changed: +1075 -4030 lines

v/latest/en/index.html

Lines changed: 2 additions & 2 deletions
@@ -48,13 +48,13 @@
 </code></pre>
 </span></div></li><li><h4>Construct an acquisition function:</h4><div><span><pre><code class="hljs css language-python"><span class="hljs-keyword">from</span> botorch.acquisition <span class="hljs-keyword">import</span> LogExpectedImprovement
 
-logNEI = LogExpectedImprovement(model=gp, best_f=Y.max())
+logEI = LogExpectedImprovement(model=gp, best_f=Y.max())
 </code></pre>
 </span></div></li><li><h4>Optimize the acquisition function:</h4><div><span><pre><code class="hljs css language-python"><span class="hljs-keyword">from</span> botorch.optim <span class="hljs-keyword">import</span> optimize_acqf
 
 bounds = torch.stack([torch.zeros(<span class="hljs-number">2</span>), torch.ones(<span class="hljs-number">2</span>)]).to(torch.double)
 candidate, acq_value = optimize_acqf(
-    logNEI, bounds=bounds, q=<span class="hljs-number">1</span>, num_restarts=<span class="hljs-number">5</span>, raw_samples=<span class="hljs-number">20</span>,
+    logEI, bounds=bounds, q=<span class="hljs-number">1</span>, num_restarts=<span class="hljs-number">5</span>, raw_samples=<span class="hljs-number">20</span>,
 )
 candidate <span class="hljs-comment"># tensor([[0.2981, 0.2401]], dtype=torch.float64)</span>
 </code></pre>
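For readability, the updated snippet with the hljs markup stripped is reproduced below. The fitted model `gp` and its observations `Y` come from an earlier step on the page that this hunk does not show, so they are assumed here.

import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.optim import optimize_acqf

# `gp` and `Y` are assumed to exist from the page's earlier model-fitting step
logEI = LogExpectedImprovement(model=gp, best_f=Y.max())

bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
candidate, acq_value = optimize_acqf(
    logEI, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
)
candidate  # tensor([[0.2981, 0.2401]], dtype=torch.float64)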

v/latest/files/closed_loop_botorch_only.ipynb

Lines changed: 56 additions & 1103 deletions
Large diffs are not rendered by default.

v/latest/files/closed_loop_botorch_only.py

Lines changed: 40 additions & 37 deletions
@@ -1,7 +1,7 @@
 #!/usr/bin/env python3
 # coding: utf-8
 
-# ## Closed-loop batch, constrained BO in BoTorch with qEI and qNEI
+# ## Closed-loop batch, constrained BO in BoTorch with qLogEI and qLogNEI
 #
 # In this tutorial, we illustrate how to implement a simple Bayesian Optimization (BO) closed loop in BoTorch.
 #
@@ -10,7 +10,7 @@
 # However, you may want to do things that are not easily supported in Ax at this time (like running high-dimensional BO using a VAE+GP model that you jointly train on high-dimensional input data). If you find yourself in such a situation, you will need to write your own optimization loop, as we do in this tutorial.
 #
 #
-# We use the batch Expected Improvement (qEI) and batch Noisy Expected Improvement (qNEI) acquisition functions to optimize a constrained version of the synthetic Hartmann6 test function. The standard problem is
+# We use the batch Log Expected Improvement (`qLogEI`) and batch Log Noisy Expected Improvement (`qLogNEI`) acquisition functions to optimize a constrained version of the synthetic Hartmann6 test function. The standard problem is
 #
 # $$f(x) = -\sum_{i=1}^4 \alpha_i \exp \left( -\sum_{j=1}^6 A_{ij} (x_j - P_{ij})^2 \right)$$
 #
@@ -20,7 +20,7 @@
 #
 # Since botorch assumes a maximization problem, we will attempt to maximize $-f(x)$ to achieve $\max_{x} -f(x) = 3.32237$.
 
-# In[1]:
+# In[14]:
 
 
 import os
@@ -37,7 +37,7 @@
 #
 # First, we define the constraint used in the example in `outcome_constraint`. The second function `weighted_obj` is a "feasibility-weighted objective," which returns zero when not feasible.
 
-# In[2]:
+# In[15]:
 
 
 from botorch.test_functions import Hartmann
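This hunk only renumbers the notebook cell, but its context references `outcome_constraint` and `weighted_obj`, whose bodies the diff never shows. For orientation, a plausible sketch consistent with the surrounding prose (feasible means a constraint value <= 0; the specific sum-minus-3 constraint is an assumption, not taken from this diff):

import torch
from botorch.test_functions import Hartmann

neg_hartmann6 = Hartmann(dim=6, negate=True)  # negated, since botorch maximizes

def outcome_constraint(X: torch.Tensor) -> torch.Tensor:
    # hypothetical constraint; feasible iff the value is <= 0
    return X.sum(dim=-1) - 3

def weighted_obj(X: torch.Tensor) -> torch.Tensor:
    # feasibility-weighted objective: zero whenever infeasible
    return neg_hartmann6(X) * (outcome_constraint(X) <= 0).type_as(X)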
@@ -62,13 +62,14 @@ def weighted_obj(X):
 #
 # Each component is a `FixedNoiseGP`. The models are initialized with 10 points drawn randomly from $[0,1]^6$.
 
-# In[3]:
+# In[16]:
 
 
+from botorch.models.transforms.input import Normalize
 from botorch.models import FixedNoiseGP, ModelListGP
 from gpytorch.mlls.sum_marginal_log_likelihood import SumMarginalLogLikelihood
 
-NOISE_SE = 0.5
+NOISE_SE = 0.25
 train_yvar = torch.tensor(NOISE_SE**2, device=device, dtype=dtype)
 
 
@@ -85,12 +86,18 @@ def generate_initial_data(n=10):
 
 def initialize_model(train_x, train_obj, train_con, state_dict=None):
     # define models for objective and constraint
-    model_obj = FixedNoiseGP(train_x, train_obj, train_yvar.expand_as(train_obj)).to(
-        train_x
-    )
-    model_con = FixedNoiseGP(train_x, train_con, train_yvar.expand_as(train_con)).to(
-        train_x
-    )
+    model_obj = FixedNoiseGP(
+        train_x,
+        train_obj,
+        train_yvar.expand_as(train_obj),
+        input_transform=Normalize(d=train_x.shape[-1]),
+    ).to(train_x)
+    model_con = FixedNoiseGP(
+        train_x,
+        train_con,
+        train_yvar.expand_as(train_con),
+        input_transform=Normalize(d=train_x.shape[-1]),
+    ).to(train_x)
     # combine into a multi-output GP model
     model = ModelListGP(model_obj, model_con)
     mll = SumMarginalLogLikelihood(model.likelihood, model)
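The substantive change in this hunk is the new `input_transform=Normalize(...)` argument, which rescales each input dimension to the unit interval (using bounds inferred from the training data) before it reaches the kernel. A minimal self-contained sketch of the pattern, with toy data that is not the tutorial's:

import torch
from botorch.models import FixedNoiseGP
from botorch.models.transforms.input import Normalize

# toy data on an arbitrary scale, for illustration only
train_x = 5.0 * torch.rand(10, 6, dtype=torch.double)
train_y = train_x.sin().sum(dim=-1, keepdim=True)
train_yvar = torch.full_like(train_y, 0.25**2)  # fixed observation noise

# Normalize maps inputs to [0, 1]^d, so the kernel sees standardized inputs
model = FixedNoiseGP(
    train_x,
    train_y,
    train_yvar,
    input_transform=Normalize(d=train_x.shape[-1]),
).to(train_x)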
@@ -103,11 +110,10 @@ def initialize_model(train_x, train_obj, train_con, state_dict=None):
 # #### Define a construct to extract the objective and constraint from the GP
 # The methods below take the outputs of the GP and return the objective and the constraint. In general, these can be any `Callable`, but here we simply need to index the correct output.
 
-# In[4]:
-
+# In[17]:
 
-from botorch.acquisition.objective import ConstrainedMCObjective
 
+from botorch.acquisition.objective import GenericMCObjective
 
 def obj_callable(Z: torch.Tensor, X: Optional[torch.Tensor] = None):
     return Z[..., 0]
@@ -117,17 +123,13 @@ def constraint_callable(Z):
     return Z[..., 1]
 
 
-# define a feasibility-weighted objective for optimization
-constrained_obj = ConstrainedMCObjective(
-    objective=obj_callable,
-    constraints=[constraint_callable],
-)
+objective = GenericMCObjective(objective=obj_callable)
 
 
 # #### Define a helper function that performs the essential BO step
 # The helper function below takes an acquisition function as an argument, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=3$. The function `optimize_acqf` optimizes the $q$ points jointly. A simple initialization heuristic is used to select the 10 restart initial locations from a set of 50 random points.
 
-# In[5]:
+# In[18]:
 
 
 from botorch.optim import optimize_acqf
@@ -170,7 +172,7 @@ def update_random_observations(best_random):
     return best_random
 
 
-# ### Perform Bayesian Optimization loop with qNEI
+# ### Perform Bayesian Optimization loop with qLogNEI
 # The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps:
 # 1. given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots x_q\}$
 # 2. observe $f(x)$ for each $x$ in the batch
@@ -181,16 +183,16 @@ def update_random_observations(best_random):
 #
 # *Note*: Running this may take a little while.
 
-# In[6]:
+# In[19]:
 
 
 import time
 import warnings
 
 from botorch import fit_gpytorch_mll
-from botorch.acquisition.monte_carlo import (
-    qExpectedImprovement,
-    qNoisyExpectedImprovement,
+from botorch.acquisition import (
+    qLogExpectedImprovement,
+    qLogNoisyExpectedImprovement,
 )
 from botorch.exceptions import BadInitialCandidatesWarning
 from botorch.sampling.normal import SobolQMCNormalSampler
@@ -208,7 +210,6 @@ def update_random_observations(best_random):
 
 best_observed_all_ei, best_observed_all_nei, best_random_all = [], [], []
 
-
 # average over multiple trials
 for trial in range(1, N_TRIALS + 1):
 
@@ -245,23 +246,25 @@ def update_random_observations(best_random):
     qmc_sampler = SobolQMCNormalSampler(sample_shape=torch.Size([MC_SAMPLES]))
 
     # for best_f, we use the best observed noisy values as an approximation
-    qEI = qExpectedImprovement(
+    qLogEI = qLogExpectedImprovement(
         model=model_ei,
         best_f=(train_obj_ei * (train_con_ei <= 0).to(train_obj_ei)).max(),
         sampler=qmc_sampler,
-        objective=constrained_obj,
+        objective=objective,
+        constraints=[constraint_callable],
     )
 
-    qNEI = qNoisyExpectedImprovement(
+    qLogNEI = qLogNoisyExpectedImprovement(
         model=model_nei,
         X_baseline=train_x_nei,
         sampler=qmc_sampler,
-        objective=constrained_obj,
+        objective=objective,
+        constraints=[constraint_callable],
     )
 
     # optimize and get new observation
-    new_x_ei, new_obj_ei, new_con_ei = optimize_acqf_and_get_observation(qEI)
-    new_x_nei, new_obj_nei, new_con_nei = optimize_acqf_and_get_observation(qNEI)
+    new_x_ei, new_obj_ei, new_con_ei = optimize_acqf_and_get_observation(qLogEI)
+    new_x_nei, new_obj_nei, new_con_nei = optimize_acqf_and_get_observation(qLogNEI)
 
     # update training points
     train_x_ei = torch.cat([train_x_ei, new_x_ei])
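The two changes in this hunk go together: the `qLog*` acquisition functions work with a smoothed logarithm of EI, which is better behaved numerically than plain (q)EI, and the feasibility constraint is now passed directly to the acquisition function via `constraints` rather than being folded into a `ConstrainedMCObjective`. A minimal self-contained sketch of the new construction, with toy models in place of the tutorial's:

import torch
from botorch.acquisition import qLogExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective
from botorch.models import FixedNoiseGP, ModelListGP
from botorch.sampling.normal import SobolQMCNormalSampler

# toy two-output setup: output 0 is the objective, output 1 the
# constraint (feasible iff <= 0), mirroring the tutorial's ModelListGP
train_x = torch.rand(10, 6, dtype=torch.double)
train_obj = train_x.sin().sum(dim=-1, keepdim=True)
train_con = train_x.mean(dim=-1, keepdim=True) - 0.5
yvar = torch.full_like(train_obj, 0.25**2)
model = ModelListGP(
    FixedNoiseGP(train_x, train_obj, yvar),
    FixedNoiseGP(train_x, train_con, yvar),
)

objective = GenericMCObjective(objective=lambda Z, X=None: Z[..., 0])

acqf = qLogExpectedImprovement(
    model=model,
    # feasibility-weighted best observed value, as in the tutorial
    best_f=(train_obj * (train_con <= 0).to(train_obj)).max(),
    sampler=SobolQMCNormalSampler(sample_shape=torch.Size([128])),
    objective=objective,
    constraints=[lambda Z: Z[..., 1]],  # <= 0 means feasible
)
acqf(train_x[:3].unsqueeze(0))  # evaluate on one candidate batch of q=3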
@@ -314,7 +317,7 @@ def update_random_observations(best_random):
 # #### Plot the results
 # The plot below shows the best objective value observed at each step of the optimization for each of the algorithms. The confidence intervals represent the variance at that step in the optimization across the trial runs. The variance across optimization runs is quite high, so in order to get a better estimate of the average performance one would have to run a much larger number of trials `N_TRIALS` (we avoid this here to limit the runtime of this tutorial).
 
-# In[7]:
+# In[20]:
 
 
 import numpy as np
@@ -337,13 +340,13 @@ def ci(y):
 
 fig, ax = plt.subplots(1, 1, figsize=(8, 6))
 ax.errorbar(iters, y_rnd.mean(axis=0), yerr=ci(y_rnd), label="random", linewidth=1.5)
-ax.errorbar(iters, y_ei.mean(axis=0), yerr=ci(y_ei), label="qEI", linewidth=1.5)
-ax.errorbar(iters, y_nei.mean(axis=0), yerr=ci(y_nei), label="qNEI", linewidth=1.5)
+ax.errorbar(iters, y_ei.mean(axis=0), yerr=ci(y_ei), label="qLogEI", linewidth=1.5)
+ax.errorbar(iters, y_nei.mean(axis=0), yerr=ci(y_nei), label="qLogNEI", linewidth=1.5)
 plt.plot(
     [0, N_BATCH * BATCH_SIZE],
     [GLOBAL_MAXIMUM] * 2,
     "k",
-    label="true best objective",
+    label="true best feasible objective",
     linewidth=2,
 )
 ax.set_ylim(bottom=0.5)
