Reimplement objective scaling #9985
Conversation
Force-pushed from a6a29ce to 00285ba
CodSpeed Performance Report: merging #9985 will not alter performance.
Force-pushed from 5a23e90 to a3d335e
Force-pushed from 4a2abee to 871533a
Review thread on ...ots/test_ropt_initialization/test_everest2ropt_snapshot/config_multiobj.yml/ropt_config.json (resolved)
Force-pushed from 871533a to 0aad2c2
) -> EvaluatorResult:
    realizations = [
        self._everest_config.model.realizations[evaluator_context.realizations[idx]]
        for idx in range(control_values.shape[0])
Could we pull out range(control_values.shape[0]) into a separate variable simulation_ids here, for readability? Really it is a list of all ERT realizations / sim_ids; I think pulling it out would clarify that too. Similarly (even though there is just one instance) within _run_forward_model. A sketch of the suggested change follows.
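For concreteness, a sketch of the suggested refactor, mirroring the snippet above; simulation_ids is the name proposed in this comment, and the surrounding attributes come from the diff, so this is an illustration rather than the exact change in the PR:

    # Name the index range so its meaning (one entry per ERT sim_id) is explicit.
    simulation_ids = range(control_values.shape[0])
    realizations = [
        self._everest_config.model.realizations[evaluator_context.realizations[idx]]
        for idx in simulation_ids
    ]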
done.
# implicitly given by their position in the evaluated_control_indices
# list. We store for each control vector that id, or -1 if it was not
# evaluated:
sim_ids = np.fromiter(
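The snippet is cut off above; a minimal, self-contained sketch of how such a mapping could be built with np.fromiter, assuming the -1 convention and the evaluated_control_indices name from the comment (the exact call in the PR may differ):

    import numpy as np

    num_controls = 4
    evaluated_control_indices = [0, 2, 3]  # controls that were actually evaluated

    # A control's simulation id is its position in evaluated_control_indices;
    # controls that were not evaluated get -1.
    sim_id_by_control = {
        control_idx: sim_id
        for sim_id, control_idx in enumerate(evaluated_control_indices)
    }
    sim_ids = np.fromiter(
        (sim_id_by_control.get(idx, -1) for idx in range(num_controls)),
        dtype=np.intc,
    )
    print(sim_ids)  # [ 0 -1  1  2]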
Hmm, it is more understandable now, I think, and could be revisited a few more times. In the long run we could maybe get rid of the -1 as evaluation ID and just use the ERT realization instead, since all evaluations are ultimately stored in ERT storage as a realization. This is future work either way.
We are going to re-evaluate these IDs anyway. These IDs, and the batch ID, are passed so that you can map a result produced by ropt to a runpath on disk: if you have the batch and the model realizations, you look for the folder with the right simulation_# name, where # is the simulation ID. That is a bit primitive and cumbersome; we should think about improving it, and indeed ultimately there is some mapping to a location in ERT storage.
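A sketch of the lookup described here; the simulation_# folder name comes from this comment, but the batch_<id> level and overall directory layout are assumptions for illustration, not the exact Everest runpath format:

    from pathlib import Path

    def runpath_for(output_dir: Path, batch_id: int, sim_id: int) -> Path:
        # Assumed layout: <output>/batch_<batch_id>/simulation_<sim_id>;
        # the real layout may differ.
        return output_dir / f"batch_{batch_id}" / f"simulation_{sim_id}"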
Force-pushed from 0aad2c2 to 9229e37
Force-pushed from 9229e37 to 7f42b79
LGTM
Issue
This PR reimplements objective scaling using the new approach of ropt to transform variables and objectives between the model and optimization domains.
Approach
There are three commits:
- Use ropt to simplify the storage code.
- Refactor the everest run model to introduce a clearer boundary between ropt and everest code.
- Reimplement objective scaling using the transforms.
A sketch of the transform idea follows below.
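To illustrate the idea of transforming objectives between the two domains, here is a minimal, hypothetical scaling transform; the class and method names are invented for this sketch and are not the actual ropt API:

    import numpy as np

    class ObjectiveScaler:
        """Hypothetical transform between model and optimization domains."""

        def __init__(self, scales: np.ndarray) -> None:
            self._scales = scales  # one scale factor per objective

        def to_optimizer(self, objectives: np.ndarray) -> np.ndarray:
            # Scale model-domain objectives before handing them to the optimizer.
            return objectives / self._scales

        def from_optimizer(self, objectives: np.ndarray) -> np.ndarray:
            # Map optimizer-domain values back to the model domain.
            return objectives * self._scales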