
Frames averager can output additional uncertainty estimates #121

Open
toqduj opened this issue Oct 12, 2022 · 3 comments


toqduj commented Oct 12, 2022

I'm assuming Frames averager propagates uncertainties...

However, in addition to the mean, it could also output the standard deviation, or (🍻) the standard error of the mean. These measured uncertainties can later be used to replace, or be compared with, the theoretical uncertainties.

toqduj changed the title from "Frames averager can output its own uncertainties" to "Frames averager can output additional uncertainty estimates" on Oct 12, 2022
garryod self-assigned this on Oct 12, 2022
garryod added the enhancement (New feature or request) label on Oct 12, 2022

garryod commented Oct 12, 2022

You're correct: if average_all_frames is passed an array of numcertain.uncertain values, then the resultant frame will carry the correctly propagated uncertainties.

Am I correct in understanding that, in addition to outputting the averaged values, you would like to output the standard deviation / variance of the input frames? This would be a trivial addition which I am happy to make, but I am unsure how you intend to use these downstream. Could you elaborate on this a bit further?
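
If it helps to pin down what such an addition might return, here is a minimal numpy sketch (assumptions: frames are stacked along the first axis; the function name and return shape are illustrative, not the current average_all_frames API):

```python
import numpy as np


def average_with_spread(frames: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Average a stack of nominally identical frames (shape: n_frames, ny, nx)
    and also return the per-pixel sample standard deviation and the standard
    error of the mean (SEM). Illustrative sketch only."""
    n_frames = frames.shape[0]
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)  # sample standard deviation across frames
    sem = std / np.sqrt(n_frames)     # spread of the mean itself
    return mean, std, sem
```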


toqduj commented Oct 13, 2022

Your understanding is correct. We initially start with only (Poisson) counting-statistics-based uncertainty estimates and propagate these. Poisson statistics give the smallest possible uncertainty estimate; however, there can be other sources of uncertainty that bring the actual uncertainty up.
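
(For concreteness, the Poisson lower bound per pixel is just the square root of the counted photons; a short numpy illustration with made-up counts:)

```python
import numpy as np

counts = np.array([[100.0, 400.0], [900.0, 1600.0]])  # photons counted per pixel (made up)
poisson_sigma = np.sqrt(counts)  # smallest possible uncertainty estimate
```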

These actual uncertainties can occasionally be determined, for example by comparing multiple ostensibly identical frames. We then get additional estimates for the uncertainties, for example by looking at the standard error of the mean (SEM) or the standard deviation (STD).

So we end up with multiple uncertainty arrays. These can be used later on, either for:

  1. estimating a new "user-facing" uncertainty array which is closer to the actual practical uncertainties, by combining the multiple uncertainty arrays in a clever way (as sketched at the end of this comment). I usually estimate this new array as the maximum of the available uncertainties, and require it to be no less than 1% of the intensity (as this is realistically the smallest effect magnitude of our extensive corrections). Though this 1% is rather arbitrary, it has held up well in subsequent data analysis further down the line.
  2. comparing uncertainties. If we have segments of the detector where the practical uncertainty is consistently larger than the Poisson uncertainty, we have an unknown additional source of noise in there, and some instrumental troubleshooting to do.

So, as indicated in #123, we would need to be able to add a list of uncertainties to each dataset, rather than trying to encapsulate all uncertainties in a single array.
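
To make point 1 (and the comparison in point 2) concrete, here is a rough numpy sketch of what I do; the names are placeholders, and "estimates" would hold the propagated Poisson array plus any SEM/STD arrays obtained from repeated frames:

```python
import numpy as np


def combine_uncertainties(
    intensity: np.ndarray,
    estimates: list[np.ndarray],
    relative_floor: float = 0.01,  # the (admittedly arbitrary) 1% floor
) -> np.ndarray:
    """Point 1: user-facing uncertainty as the elementwise maximum of all
    available estimates, and no less than 1% of the intensity."""
    combined = np.maximum.reduce(estimates)
    return np.maximum(combined, relative_floor * np.abs(intensity))


def excess_noise_mask(poisson_unc: np.ndarray, empirical_unc: np.ndarray) -> np.ndarray:
    """Point 2: flag pixels where the empirical estimate (e.g. SEM over repeated
    frames) exceeds the Poisson estimate, indicating an additional noise source."""
    return empirical_unc > poisson_unc
```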


toqduj commented Oct 25, 2022

Blocked by DiamondLightSource/numcertain#77
