# Detach fval in torch_minimize to avoid memory leak (#2529)
Summary:
## Motivation

This fixes #2526: `torch_minimize` kept the closure's return value `fval` attached to the autograd graph, and holding a graph-attached tensor in a local variable keeps alive the entire graph that produced it, so memory grew over the course of the optimization loop. Detaching `fval` as soon as the closure returns allows each step's graph to be freed.
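
As a minimal illustration (not BoTorch's actual code), the sketch below shows the pattern: `closure` is a hypothetical stand-in for the forward/backward closure that `torch_minimize` receives, and detaching its returned loss immediately ensures that only the scalar value, not the autograd graph, outlives each step.

```python
import torch

x = torch.randn(10_000, requires_grad=True)

def closure():
    # Hypothetical stand-in for the closure passed to torch_minimize:
    # returns the loss tensor (still attached to the autograd graph)
    # along with the gradients.
    x.grad = None
    loss = (x**2).sum()
    loss.backward()
    return loss, (x.grad,)

for step in range(1, 4):
    # Detach immediately: keep the value, let the step's graph be freed.
    # Holding the attached tensor instead (fval, _ = closure()) would keep
    # each step's graph alive until fval is reassigned.
    fval = closure()[0].detach()
    print(step, fval.cpu().item())
```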

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes.

Pull Request resolved: #2529

Test Plan:
The tests are passing.

## Related PRs
N/A

Reviewed By: esantorella

Differential Revision: D62529029

Pulled By: Balandat

fbshipit-source-id: 50da389d8a01659e5b510ac27c25363fc25af207
mikkelbue authored and facebook-github-bot committed Sep 12, 2024
1 parent 47a10f9 commit db96db3
Showing 1 changed file with 2 additions and 2 deletions.
botorch/optim/core.py:

```diff
@@ -192,11 +192,11 @@ def torch_minimize(
         else {name: limits for name, limits in bounds.items() if name in parameters}
     )
     for step in range(1, step_limit + 1):
-        fval, _ = closure()
+        fval = closure()[0].detach()
         runtime = monotonic() - start_time
         result = OptimizationResult(
             step=step,
-            fval=fval.detach().cpu().item(),
+            fval=fval.cpu().item(),
             status=OptimizationStatus.RUNNING,
             runtime=runtime,
         )
```
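
The reported value is unchanged: calling `.cpu().item()` on the already-detached tensor yields the same float that `fval.detach().cpu().item()` did. The point of the change is to drop the reference to the graph-attached tensor as early as possible, so that each step's autograd graph can be freed as soon as the closure returns.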
