
Adding optimization using Gaussian Process #2

Open · wants to merge 8 commits into base: main
Conversation

weslleyspereira
Collaborator

New features:

  • GP model using the scikit-learn `GaussianProcessRegressor` class.
  • Sampler from Mitchell (1991) that aims to improve coverage of samples.
  • Acquisition through maximization of expected improvement (EI).
  • Acquisition through maximization of EI that avoids clusters in the batch of samples. Adapted from Che, Müller, and Cheng (2024).
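The EI acquisition over a fitted GP can be sketched as follows. This is an illustrative example using scikit-learn and SciPy directly, not the PR's actual `GaussianProcess` or `MaximizeEI` classes; the toy objective and candidate grid are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X, y_best):
    """Expected improvement (for minimization) at candidate points X."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive std
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy 1-D objective: f(x) = x^2 with small observation noise.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(8, 1))
y_train = X_train[:, 0] ** 2 + 0.01 * rng.standard_normal(8)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Maximize EI over a dense candidate grid to pick the next sample.
X_cand = np.linspace(-2, 2, 401).reshape(-1, 1)
ei = expected_improvement(gp, X_cand, y_train.min())
x_next = X_cand[np.argmax(ei)]
```

In practice the grid search would be replaced by a continuous optimizer, and the batch variant additionally penalizes candidates that cluster near already-selected points.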

More:

  • Tests for the new features.

References:

  • Mitchell, D. P. (1991). Spectrally optimal sampling for distribution ray tracing. Computer Graphics, 25, 157–164.
  • Che, Y., Müller, J., & Cheng, C. (2024). Dispersion-enhanced sequential batch sampling for adaptive contour estimation. Quality and Reliability Engineering International, 40, 131–144. https://doi.org/10.1002/qre.3245
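The Mitchell (1991) sampler referenced above is a best-candidate scheme: each new point is the random candidate farthest from everything placed so far. A minimal sketch, assuming box bounds and Euclidean distance; function and parameter names are illustrative, not the PR's API.

```python
import numpy as np

def best_candidate_sample(existing, n_new, bounds, k=10, rng=None):
    """Mitchell (1991) best-candidate sampling.

    For each new point, draw random candidates inside the box `bounds`
    and keep the one maximizing the distance to all points chosen so far.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    points = list(existing)
    for _ in range(n_new):
        # More points already placed -> draw more candidates per step.
        n_cand = k * max(1, len(points))
        cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
        if points:
            # Distance from each candidate to its nearest existing point.
            d = np.linalg.norm(
                cand[:, None, :] - np.asarray(points)[None, :, :], axis=-1
            ).min(axis=1)
            best = cand[np.argmax(d)]
        else:
            best = cand[0]
        points.append(best)
    return np.asarray(points[len(existing):])

# Usage: add 5 well-spread points to a design that already contains the origin.
new_pts = best_candidate_sample(np.zeros((1, 2)), 5, ([0, 0], [1, 1]), rng=0)
```

The candidate count grows with the number of placed points so that coverage quality does not degrade as the design fills in.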

Weslley da Silva Pereira added 7 commits October 4, 2024 13:37
Also adds more functionality to the GaussianProcess class.
- Fix inconsistencies in the MaximizeEI acquisition class
- Replace kernel() by get_kernel() in the GP class
- Use better default configuration for the GP in the bayesian optimization
- Add more tests for the bayesian optimization
The former is preferable for more modularity in the code.
It makes more sense now that we have multiple surrogate models.
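Supporting multiple surrogate models typically means the acquisition code targets a common interface rather than one GP class. A hypothetical sketch of such an interface; all names here are illustrative and are not the PR's actual classes.

```python
from abc import ABC, abstractmethod
import numpy as np

class Surrogate(ABC):
    """Hypothetical minimal surrogate interface (illustrative only)."""

    @abstractmethod
    def fit(self, X, y):
        """Train the surrogate on observed inputs X and outputs y."""

    @abstractmethod
    def predict(self, X):
        """Return predicted outputs at query points X."""

class NearestNeighborSurrogate(Surrogate):
    """Trivial implementation: predict the value of the nearest training point."""

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, X):
        d = np.linalg.norm(np.asarray(X, float)[:, None] - self.X[None, :], axis=-1)
        return self.y[np.argmin(d, axis=1)]

# Usage: any Surrogate can be swapped into acquisition code unchanged.
model = NearestNeighborSurrogate().fit([[0.0], [1.0]], [0.0, 1.0])
pred = model.predict([[0.1], [0.9]])
```

With this shape, a GP-backed surrogate and simpler models are interchangeable from the optimizer's point of view.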