Fix option to continue model-based inference (#460)
* Move init round

Move the init-round call from `update` to `prepare_new_batch` so that the `fit` method in BOLFIRE can be called to continue optimisation.

* Update changelog

* Move init round

Move the init-round call from `prepare_new_batch` to `iterate`.

* Move init round

Move the init-round call back to `update`, but additionally check in `infer` whether inference is being continued and a new round needs to be initialised.
uremes authored Jan 25, 2023
1 parent c8083e3 commit 66283bf
Showing 2 changed files with 16 additions and 0 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.rst
@@ -1,6 +1,7 @@
Changelog
=========

- Fix the option to continue inference in model-based inference
- Move classifiers
- Fix readthedocs configuration
- Update penalty to shrinkage parameter conversion in synthetic likelihood calculation
15 changes: 15 additions & 0 deletions elfi/methods/inference/parameter_inference.py
@@ -515,6 +515,21 @@ def current_params(self):
"""
raise NotImplementedError

def infer(self, *args, **kwargs):
"""Set the objective and start the iterate loop until the inference is finished.
Initialise a new data collection round if needed.
Returns
-------
result : Sample
"""
if self.state['round'] > 0:
self._init_round()

return super().infer(*args, **kwargs)

def _merge_batch(self, batch):
simulated = batch_to_arr2d(batch, self.feature_names)
n_sim = self.state['n_sim_round']
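The fix above hinges on a simple pattern: a second call to `infer` on the same object finds `state['round'] > 0` left over from the previous run and initialises a fresh data collection round before iterating resumes. The following is a minimal, hypothetical sketch of that pattern; the class names, the `n_rounds` parameter, and the `rounds_initialised` bookkeeping are illustrative stand-ins, not the actual ELFI implementation.

```python
# Hypothetical sketch of the continue-inference pattern from this commit:
# detect a continued run via the round counter and start a new round
# before the iterate loop runs again.

class BaseInference:
    def __init__(self):
        self.state = {'round': 0, 'n_sim_round': 0}
        self.rounds_initialised = []

    def _init_round(self):
        # Begin a fresh data collection round and reset the per-round counter.
        self.state['round'] += 1
        self.state['n_sim_round'] = 0
        self.rounds_initialised.append(self.state['round'])

    def infer(self, n_rounds=1):
        # Stand-in for the iterate loop: run `n_rounds` update steps.
        for _ in range(n_rounds):
            self.update()
        return self.state

    def update(self):
        # As in the commit, the init-round call lives in `update`, which
        # rolls over to the next round at the end of each iteration.
        self._init_round()


class ModelBasedInference(BaseInference):
    def infer(self, n_rounds=1):
        # On a continued run the previous call left round > 0, so a new
        # round must be initialised before iterating resumes.
        if self.state['round'] > 0:
            self._init_round()
        return super().infer(n_rounds)
```

Calling `infer` twice on one `ModelBasedInference` instance shows the effect: the first call initialises rounds only through `update`, while the second call triggers the extra `_init_round` for the continued run.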
