Separate log statistics #64

Merged: 8 commits into master from the separate-log-statistics branch on Dec 5, 2024
Conversation

fabioseel (Contributor)

Adds a class for logging statistics instead of using a Loss for everything, to make the interface a bit more expressive:

  • Loss is an extension of LogStatistics
  • added the keyword 'all' for when a loss should target the whole model
    • in that case, the number of weights either has to match the number of circuits or be one (and is then repeated to match the number of circuits)
  • added a test that runs the forward and backward pass once for the classification case and ensures that weights are being computed (a proper test that the correct weights are computed would be nicer, maybe in the future)

Please double-check the changes in the objective's _weighted_params function, as it's not sufficiently covered by the test 🙃 A rough sketch of the intended interface follows below.
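A minimal sketch of how such an interface could look, based on the description above; the class and method names here are illustrative assumptions, not necessarily the actual classes in retinal_rl:

```python
from typing import Dict, List, Union

import torch


class LogStatistics:
    """Base class: computes a scalar from model outputs, purely for logging."""

    def compute(self, outputs: Dict[str, torch.Tensor]) -> torch.Tensor:
        raise NotImplementedError


class Loss(LogStatistics):
    """A logging statistic that also contributes to the training objective.

    `target_circuits` is either a list of circuit names or the keyword
    'all', in which case the loss targets the whole model.
    """

    def __init__(self, target_circuits: Union[str, List[str]], weights: List[float]):
        self.target_circuits = target_circuits
        self.weights = weights

    def resolve_weights(self, all_circuits: List[str]) -> Dict[str, float]:
        """Map each targeted circuit to its weight."""
        circuits = all_circuits if self.target_circuits == "all" else self.target_circuits
        if len(self.weights) == 1:
            # a single weight is repeated to match the number of circuits
            weights = self.weights * len(circuits)
        elif len(self.weights) == len(circuits):
            weights = list(self.weights)
        else:
            raise ValueError(
                "Number of weights must be 1 or match the number of targeted circuits"
            )
        return dict(zip(circuits, weights))
```

With this sketch, Loss("all", [1.0]).resolve_weights(["encoder", "decoder"]) would yield {"encoder": 1.0, "decoder": 1.0}, i.e. the single weight is broadcast across all circuits.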

@fabioseel fabioseel marked this pull request as ready for review November 27, 2024 15:03
@fabioseel fabioseel requested a review from alex404 November 27, 2024 15:03
Resolved review threads:

  • retinal_rl/classification/loss.py
  • retinal_rl/models/loss.py
  • retinal_rl/models/objective.py
  • tests/modules/test_objective.py
alex404 (Contributor) commented Nov 28, 2024

I don't understand actions well enough to know if this is feasible, but it might even be a good idea to have a system that prompts the user to run a full simulation before merging, and provides the template command to run, i.e.

main.py +experiment=classification logging.use_wandb=True logging.wandb_project=default-master-simulations run_name={commit_hash} command=train

with the commit_hash filled out.

fabioseel (Contributor, Author)

> I don't understand actions well enough to know if this is feasible, but it might even be a good idea to have a system that prompts the user to run a full simulation before merging, and provides the template command to run, i.e.
>
> main.py +experiment=classification logging.use_wandb=True logging.wandb_project=default-master-simulations run_name={commit_hash} command=train
>
> with the commit_hash filled out.

Yeah, the 'actions way' of doing this would be to run it automatically... but for that we'd need a super mini example, since we don't have access to a GPU, at least not for free or trivially 😆
Also, should this cover only the classification setting?

alex404 (Contributor) commented Dec 4, 2024

Maybe we can just have a script, outside of the actions setup, that fills out the command with e.g. the commit hash. Regardless, this is a low-priority item, so I'd say don't worry about it for now.
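A minimal sketch of such a helper, purely illustrative (no such script exists in the repository), using the template command from the discussion above:

```python
#!/usr/bin/env python
"""Sketch: fill the current commit hash into the full-simulation command and print it."""
import subprocess

# resolve the current commit hash with git
commit_hash = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    capture_output=True,
    text=True,
    check=True,
).stdout.strip()

# template command from the review discussion, with run_name filled in
command = (
    "main.py +experiment=classification "
    "logging.use_wandb=True "
    "logging.wandb_project=default-master-simulations "
    f"run_name={commit_hash} "
    "command=train"
)

print(command)
```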

@fabioseel fabioseel (Contributor, Author) left a comment

Adjusted everything and added the requested defaults etc.; should be mergeable.

@alex404 alex404 merged commit a197daa into master Dec 5, 2024
3 checks passed
@alex404 alex404 deleted the separate-log-statistics branch December 5, 2024 07:31