Initial point samplers #808
Conversation
This can work as a short-term solution, but we would want to make this more flexible. We could pass generators of initial samples to the optimizer, and provide a nice function/class for random sample generators. This would then allow us to pass other types of generators when needed, e.g. points generated by some other optimization process, which would provide better starting points.
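[Editor's sketch of the generator idea above, assuming a trieste-style SearchSpace.sample; the function names are illustrative, not the PR's actual API.]

    from typing import Iterator

    import tensorflow as tf

    def random_batches(space, batch_size: int = 1000) -> Iterator[tf.Tensor]:
        """Yield batches of uniform random samples from the search space indefinitely."""
        while True:
            yield space.sample(batch_size)

    def warm_started(points: tf.Tensor, space, batch_size: int = 1000) -> Iterator[tf.Tensor]:
        """Yield known good points first (e.g. from another optimization run), then random batches."""
        yield points
        yield from random_batches(space, batch_size)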
Are generators definitely the way to go here? Space.sample doesn't currently support this, and I worry that a generator giving 10,000 samples one at a time would be much less efficient than a single call to sample(10_000). Another approach would be to provide an additional optional argument.
It doesn't have to be generators; that's just what seemed like it could allow us to pass a variety of initial samples. @vpicheny, any thoughts?
I've added something that I think is flexible enough but still easy to use. Opinions?
Left a few comments for improvement, but overall this looks much better. I think this would fit the bill: with a sampler I can create a custom one where I mix random samples with points obtained in other ways to provide a better start to the optimiser.
trieste/acquisition/optimizer.py
Outdated
    if len(sequence) < offset + num_samples:
        raise ValueError(
            f"Insufficient samples ({offset + num_samples} required, {len(sequence)} available)"
        )
should we also do a basic check that dimensionality is correct?
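[For illustration, a basic check along those lines might look like the following sketch, assuming the search space exposes a dimension property as trieste's SearchSpace does.]

    if sequence.shape[-1] != space.dimension:
        raise ValueError(
            f"Points have dimension {sequence.shape[-1]}, "
            f"but the search space has dimension {space.dimension}"
        )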
trieste/acquisition/optimizer.py
Outdated
    def generate_continuous_optimizer(
        num_initial_samples: int = NUM_SAMPLES_MIN,
        num_optimization_runs: int = 10,
        num_recovery_runs: int = 10,
        optimizer_args: Optional[dict[str, Any]] = None,
        split_initial_samples: Optional[int] = 100_000,
I'm thinking that with initial samplers we can simplify the arguments here, i.e. we could subsume split_initial_samples and num_initial_samples within the samplers; then each call of the sampler gives a split sample, and we call them until they are exhausted. What do you think?
It would be a breaking change, but we are releasing 3.0 anyway :)
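[A sketch of that design; the helper name sample_from_space here is a guess at a convenient spelling, not necessarily the final API.]

    from typing import Iterator

    def sample_from_space(num_samples: int, batch_size: int = 100_000):
        """Return a sampler that yields num_samples points in batches of at most batch_size."""
        def sampler(space) -> Iterator:
            remaining = num_samples
            while remaining > 0:
                batch = min(remaining, batch_size)
                yield space.sample(batch)
                remaining -= batch
        return sampler

The optimizer would then just iterate the sampler until exhaustion, with no separate num_initial_samples or split_initial_samples arguments.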
That is a much better idea! I've pushed a first attempt at this but will see what breaks and write some tests before asking for a review.
As an aside, it's actually possible to do this without breaking the interface by still letting people specify an int for num_initial_samples if they want to. There's an argument that we should rename the argument from num_initial_samples to initial_samples, but I'm worried that this would break a lot of code (and not just ours). Another option is to add a separate initial_samples argument and make people specify just one of this and num_initial_samples.
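[A sketch of the backwards-compatible expansion discussed above, reusing the hypothetical sample_from_space helper from the earlier sketch; names are illustrative.]

    from typing import Callable, Iterable, Union

    Sampler = Callable[..., Iterable]

    def _as_initial_sampler(num_initial_samples: Union[int, Sampler]) -> Sampler:
        """Accept the legacy int (old behaviour) or a sampler (new behaviour)."""
        if isinstance(num_initial_samples, int):
            return sample_from_space(num_initial_samples)
        return num_initial_samples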
Great work! Left a few minor comments, but I think this is it :)
It would still be good to pass it by @khurram-ghani as well before merging.
P.S. This will make it way easier to test alternative generate_initial_points, e.g. with clustering to achieve better diversity of starting points between the runs.
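[A rough sketch of that clustering idea, not part of this PR, using scikit-learn's KMeans to pick diverse starting points from a pool of candidates.]

    import numpy as np
    from sklearn.cluster import KMeans

    def clustered_initial_points(candidates: np.ndarray, num_points: int) -> np.ndarray:
        """Cluster candidate points and return one representative per cluster."""
        kmeans = KMeans(n_clusters=num_points, n_init=10).fit(candidates)
        # Take the candidate closest to each cluster centre.
        nearest = [
            candidates[np.argmin(np.linalg.norm(candidates - centre, axis=1))]
            for centre in kmeans.cluster_centers_
        ]
        return np.stack(nearest)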
    @pytest.mark.parametrize("num_initial_points", [0, 1, 2, 3, 4])
    def test_generate_initial_points(num_initial_points: int) -> None:
        def sampler(space: SearchSpace) -> Iterable[TensorType]:
            assert space == Box([-1], [2])
You are using this space in multiple tests; perhaps create a constant at the top and reuse it? Or a fixture?
One important comment I think (on -inf), and some minor ones.
It is a really good and flexible implementation!
Related issue(s)/PRs:
Summary
Add support for custom samplers for generating the optimization initial point candidates. This solves two problems: it lets users supply better-than-random starting points (e.g. from another optimization process), and it subsumes the sample-splitting logic previously handled by split_initial_samples.
Fully backwards compatible: yes
Given how often generate_continuous_optimizer is used (and often with keyword arguments), I think we should make an extra effort to maintain backwards compatibility here. The current approach is simply to expand num_initial_samples so it can take either an int or a sampler. An alternate approach would be to add an additional optional initial_point_sampler parameter, but the downside of that is that it wouldn't be possible to catch cases where people accidentally pass in both a num_initial_samples and an initial_sampler.
PR checklist