Use getAllUnreadResults #63
Comments
I can get this done quickly this weekend @aidnem
@aidnem @jkleiber do you guys want me to adjust some other stuff to match the AdvantageKit stuff? i.e., PhotonVision just loops through results and only takes into account the latest result, whereas AdvantageKit takes into account all of the unread results. I think this is primarily done to save all of the estimates to the inputs class as a way to view them later. Do you view this as worthwhile? I assume it will take more processing power but might be useful to look over when things go wrong.
Also, I think a lot of their structure is more condensed. This would take more time to fix, but I could do it to make our code much simpler to read. What do you guys think?
I think as a first step, probably handle all of the results (maybe pass them all into pose estimator or something) and then update the IO with the latest one. |
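The pattern suggested here (feed every unread result through, but only keep the newest one for the IO layer) could look roughly like the sketch below. `PipelineResult`, `processAll`, and the string "pose estimate" are hypothetical stand-ins, not our actual classes; the real loop would consume `PhotonCamera.getAllUnreadResults()`.

```java
import java.util.ArrayList;
import java.util.List;

public class UnreadResultsSketch {
    // Hypothetical stand-in for a PhotonPipelineResult: a capture timestamp
    // plus whatever pose estimate we derived from it.
    record PipelineResult(double timestampSeconds, String poseEstimate) {}

    /**
     * Feed every unread result into the sink (pose estimator / logger),
     * and return the newest one for the IO layer to report.
     */
    static PipelineResult processAll(List<PipelineResult> unread, List<String> sink) {
        PipelineResult latest = null;
        for (PipelineResult result : unread) {
            sink.add(result.poseEstimate());
            // Keep only the most recent result for the "latest" slot.
            if (latest == null || result.timestampSeconds() > latest.timestampSeconds()) {
                latest = result;
            }
        }
        return latest;
    }

    public static void main(String[] args) {
        // Pretend these came back since the last loop iteration.
        List<PipelineResult> unread = List.of(
                new PipelineResult(1.00, "poseA"),
                new PipelineResult(1.02, "poseB"),
                new PipelineResult(1.04, "poseC"));
        List<String> sink = new ArrayList<>();
        PipelineResult latest = processAll(unread, sink);
        System.out.println(sink.size() + " " + latest.poseEstimate()); // 3 poseC
    }
}
```

All three results reach the estimator/logger, while the IO still reports a single latest value, matching the "handle all, update IO with the latest" suggestion.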
Yes, the pose estimator only takes the latest one; the big difference is that all are still logged. Also, how do you feel about moving our structure to closely match the template? There would be a lot fewer files to look through, but I'm not sure it results in much improvement otherwise.

On Fri, Dec 6, 2024 at 12:34 PM aidnem wrote:
> I think as a first step, probably handle all of the results (maybe pass them all into pose estimator or something) and then update the IO with the latest one.
> Ideally I'd like to see all of the results being logged. @jkleiber do you think moving to log every result is worth our time?
Let's log every result, just because it's better to have more information. If it's too much or causes problems, we can always turn off that specific logging or only log certain data.
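Logging every result in the AdvantageKit style roughly means widening the inputs from a single latest value to parallel arrays, one entry per unread result from the cycle. A minimal sketch; `VisionIOInputs`, `PipelineResult`, and the field names are hypothetical stand-ins, not our actual inputs class:

```java
import java.util.List;

public class VisionInputsSketch {
    // Hypothetical inputs class: arrays instead of one latest value, so every
    // unread result from this cycle gets captured by the logger.
    static class VisionIOInputs {
        double[] timestamps = new double[0];
        String[] poseEstimates = new String[0];
    }

    // Hypothetical stand-in for a PhotonPipelineResult.
    record PipelineResult(double timestampSeconds, String poseEstimate) {}

    /** Rebuild the input arrays from this cycle's unread results. */
    static void updateInputs(VisionIOInputs inputs, List<PipelineResult> unread) {
        inputs.timestamps = new double[unread.size()];
        inputs.poseEstimates = new String[unread.size()];
        for (int i = 0; i < unread.size(); i++) {
            inputs.timestamps[i] = unread.get(i).timestampSeconds();
            inputs.poseEstimates[i] = unread.get(i).poseEstimate();
        }
    }

    public static void main(String[] args) {
        VisionIOInputs inputs = new VisionIOInputs();
        updateInputs(inputs, List.of(
                new PipelineResult(1.00, "poseA"),
                new PipelineResult(1.02, "poseB")));
        System.out.println(inputs.poseEstimates.length); // 2
    }
}
```

Since the arrays are resized every cycle, quiet cycles log empty arrays and busy cycles log everything, which is what makes the replay/debugging use case work.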
Does it make sense to pull some of the rejection logic from their template, e.g., if our pose is floating in the air or outside the field bounds? And some of their standard deviation calculations?
I was definitely looking at stealing some of the rejections for out-of-field estimates. I like our standard deviations a bit more unless we can prove the others work better, just because the number we divide by for distance from tags is a configurable constant factor rather than being based only on the number of cameras. Do you think the number of cameras would lead to a better deviation, though?

On Dec 6, 2024, at 4:02 PM, Preston wrote:
> Does it make sense to pull some of the rejection logic from their template, e.g., if our pose is floating in the air or outside the field bounds? And some of their standard deviation calculations?
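The rejection checks being discussed (pose floating in the air, pose off the field) can be sketched as below. `Pose3d` here is a hypothetical stand-in for the WPILib type, and the field dimensions and height tolerance are illustrative constants, not values from our codebase:

```java
public class PoseRejectionSketch {
    // Hypothetical stand-in for a Pose3d: field-relative x/y and height z, meters.
    record Pose3d(double x, double y, double z) {}

    // Assumed constants for illustration; the real values would live in our
    // field/vision constants.
    static final double FIELD_LENGTH_M = 16.54;
    static final double FIELD_WIDTH_M = 8.21;
    static final double MAX_Z_ERROR_M = 0.75;

    /** Reject estimates that float off the floor or land outside the field. */
    static boolean shouldReject(Pose3d pose) {
        return Math.abs(pose.z()) > MAX_Z_ERROR_M
                || pose.x() < 0.0 || pose.x() > FIELD_LENGTH_M
                || pose.y() < 0.0 || pose.y() > FIELD_WIDTH_M;
    }

    public static void main(String[] args) {
        System.out.println(shouldReject(new Pose3d(8.0, 4.0, 0.02))); // false: plausible
        System.out.println(shouldReject(new Pose3d(8.0, 4.0, 1.50))); // true: floating
        System.out.println(shouldReject(new Pose3d(-1.0, 4.0, 0.00))); // true: off-field
    }
}
```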
Wouldn't using twice the number of cameras make a pose have half as much standard deviation? Perhaps it's not as intuitive as it feels, but it seems like they would probably sort of average out each other's error, right? We could also probably test this using AdvantageScope's statistics tab by logging raw pose estimates and the number of tags, and then keeping track of standard deviation vs. distance.
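One sanity check on this: if the camera estimates are independent with equal variance, averaging n of them shrinks the standard deviation by a factor of sqrt(n), not n, so twice the cameras gives about 0.71x the deviation, not half. A quick self-contained simulation (unit-normal noise as a simplifying assumption) illustrates the sqrt scaling:

```java
import java.util.Random;

public class StdDevScalingSketch {
    /**
     * Sample standard deviation of `trials` averages, where each average
     * combines `n` independent unit-normal measurements (n "cameras").
     */
    static double stdOfAverages(int n, int trials, long seed) {
        Random rng = new Random(seed);
        double sum = 0, sumSq = 0;
        for (int t = 0; t < trials; t++) {
            double avg = 0;
            for (int i = 0; i < n; i++) avg += rng.nextGaussian();
            avg /= n;
            sum += avg;
            sumSq += avg * avg;
        }
        double mean = sum / trials;
        return Math.sqrt(sumSq / trials - mean * mean);
    }

    public static void main(String[] args) {
        double one = stdOfAverages(1, 200_000, 42);
        double four = stdOfAverages(4, 200_000, 43);
        // Averaging 4 independent estimates shrinks the std dev by roughly
        // sqrt(4) = 2: error scales as 1/sqrt(n), not 1/n.
        System.out.printf("1 camera: %.3f, 4 cameras: %.3f, ratio: %.2f%n",
                one, four, one / four);
    }
}
```

In practice the cameras' errors are also correlated (same tags, same lighting), so the real improvement would likely be smaller than even sqrt(n); the proposed AdvantageScope logging test is the right way to measure it.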
Currently, VisionIOPhoton uses getLatestResult, which is deprecated and planned for removal. We should replace it with getAllUnreadResults.
A reference for 2025 vision logging with AdvantageKit can be found in the 2025 AdvantageKit vision template.