Set up Vision according to advantage kit #69
Conversation
This is failing the build because AdvantageKit isn't sure how to autolog the records it creates (tagObservation and poseObservation). I assume that once the non-beta 2025 version (or a future beta) is released, this issue will be fixed. The only workaround I could find is to break the observation apart into individual arrays of doubles, poses, etc., and keep track of the index for each when looking at the inputs.
Overall, though, this should lead to a small improvement in actual vision. If there is any, it would come from consuming all of the poses that might have backed up since we last checked, which I could only see happening if our coprocessor was running without any lag. More importantly, the logging capabilities of this vision system are much better. Once AdvantageKit fixes the autolog, we will be able to go back and view each pose observed, which should let us better tune standard deviations.
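A minimal sketch of that array-based workaround (the class, field, and key names here are illustrative, not the exact code in this PR): the custom record is flattened into parallel arrays inside a `LoggableInputs` implementation, so index `i` across all arrays describes the i-th observation.

```java
import edu.wpi.first.math.geometry.Pose3d;
import org.littletonrobotics.junction.LogTable;
import org.littletonrobotics.junction.inputs.LoggableInputs;

// Hypothetical inputs class: instead of autologging a record, each observation
// is split across parallel arrays that AdvantageKit already knows how to log.
public class VisionIOInputs implements LoggableInputs {
  public double[] timestamps = new double[0];
  public Pose3d[] poses = new Pose3d[0];
  public double[] averageTagDistances = new double[0];

  @Override
  public void toLog(LogTable table) {
    table.put("Timestamps", timestamps);
    table.put("Poses", poses);
    table.put("AverageTagDistances", averageTagDistances);
  }

  @Override
  public void fromLog(LogTable table) {
    timestamps = table.get("Timestamps", timestamps);
    poses = table.get("Poses", poses);
    averageTagDistances = table.get("AverageTagDistances", averageTagDistances);
  }
}
```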
@linglejack06 does their template do the same thing? And if so, would you mind testing if that builds?
@linglejack06 does this match the problem you're having?
@PChild no, my issue says that it can't find a match to create a NetworkTables instance. It searches through all types (string, int, array, list, Pose2d, etc.) but none match the custom record defined in the template.
@aidnem I can test that, I haven't yet.
@jkleiber the build.gradle now follows the AdvantageKit template exactly (other than our Spotless config and Maven publishing), and it still throws the same error related to JSON auto-detect visibility. Do you think it could be related to not having the latest WPILib packages installed on my computer, i.e. my WPILib is still 2024? I don't think this is it, because I figure it would show an error more closely related to that, like a bad JDK, but maybe this is the issue?
If that is the issue, then it's worth trying it on the older build.gradle where we were getting the JSON auto-detect visibility issue without using GradleRIO and everything.
@linglejack06 AdvantageKit 2025 depends on WPILib 2025, so we probably need WPILib 2025 to be installed locally for the GradleRIO strategy to work. If installing 2025 doesn't work, then it might be an AdvantageKit problem.
Ok, that's what I was thinking. I will try that out once I'm home this afternoon (WPILib will take forever on school WiFi). If that works, I'm also going to try reverting to the commit with the build without GradleRIO to see if that can work.
@jkleiber the only thing I am concerned about with using GradleRIO is its compatibility with another GradleRIO project when imported into robot code, because a different main class is set up under coppercore. Do you envision this being an issue?
It could be an issue for sure, but I just want to rule it out as the problem first.
@jkleiber I just tested with WPILib 2025 fully installed. Now the compiler sees errors in the wpilib_interface MonitoredSubsystem (the logger is used, so they're the same errors we were seeing with auto-detect visibility). I have made the wpilib_interface build.gradle exactly match the AdvantageKit template as well, which has not solved the issue. I am unsure where to go from here, any ideas? Should I try going back to the old build.gradle but with WPILib 2025 fully installed?
@linglejack06 for now let's go back to the original build.gradle technique. Then let's just not use any AdvantageKit logging features for now to get the code building. We could end up with a core vision system that doesn't depend on AdvantageKit at all (which would honestly be a good thing imo). If we are getting errors in MonitoredSubsystem due to AKit logging failures, let's comment the logging out for now and look at using WPILib alerts instead for the time being.
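A rough sketch of what swapping the per-monitor logging for WPILib 2025 alerts could look like (the class, method, and alert names below are hypothetical, not the actual MonitoredSubsystem code):

```java
import edu.wpi.first.wpilibj.Alert;
import edu.wpi.first.wpilibj.Alert.AlertType;

// Hypothetical helper mirroring the two booleans previously logged per monitor,
// surfaced as dashboard alerts instead of AdvantageKit log entries.
public class MonitorAlerts {
  private final Alert faultAlert =
      new Alert("Monitor detected a fault", AlertType.kError);
  private final Alert triggeredAlert =
      new Alert("Monitor is currently triggered", AlertType.kWarning);

  /** Call each loop with the monitor's current state. */
  public void update(boolean isFaulted, boolean isTriggered) {
    faultAlert.set(isFaulted);
    triggeredAlert.set(isTriggered);
  }
}
```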
@linglejack06 It's bizarre to me that MonitoredSubsystem is causing issues with the logger. All it's doing is logging two booleans per monitor. Also, since there weren't any errors when building the main branch, can we reasonably conclude that the issue either lies somewhere within this branch's changes or on your machine? That could narrow down what to look for. Would you mind sending the error message so I can try to figure out whether I messed up MonitoredSubsystem?
@jkleiber @aidnem after further pondering, and killing the Gradle daemons a few times, there are no more AdvantageKit logging errors. Now the issue just revolves around importing the monitors into wpilib_interface, which I think is just a Maven issue that Justin can easily explain how to solve. Here is the error:
To clarify, vision fully compiles now with the logging features; wpilib_interface only fails compilation due to an issue with the monitors jar. Therefore, the issues revolving around auto-detect visibility, logging, and autolog all appear to have been solved by using GradleRIO and the annotation processor.
Here is how I added the monitors project under the dependencies of wpilib_interface:
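Roughly, that dependency declaration takes a shape like the following (the project path and Maven coordinates here are placeholders, not necessarily the exact ones used in this branch):

```groovy
// wpilib_interface/build.gradle (sketch)
dependencies {
    // Option 1: depend on the monitors subproject directly
    implementation project(':monitors')

    // Option 2: consume a published/local Maven artifact instead
    // (group/name/version are placeholders)
    // implementation 'com.team401.coppercore:monitors:0.0.1'
}
```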
@linglejack06 Considering that this error doesn't exist on main, it's worth figuring out why this is happening. It's probably related to some change on this branch. It would be good to rebase and resolve merge conflicts to see if this is a problem that is on your computer or if it reproduces in CI.
@jkleiber after updating high key to 2025 and moving the ground truth pose calculation to Phoenix drive, vision estimates are great near the amp and speaker. However, when going to the corners or the middle of the field, both the ground truth and vision begin to randomly set a rotation. I assume something is wrong with my ground truth pose, since vision has no influence on ground truth and ground truth is still setting a random rotation. Any clue as to why this is happening? Would you like me to create a repo for high key 2025 in team 401?
@jkleiber I just ran simulation with AdvantageKit's vision template exactly (using their diff drive too) and it works perfectly, so I'm going to try using their diff drive example in high key 2025 to see if I can isolate the issue to vision or Phoenix drive. Then I'll go from there depending on what the issue is.
Rather than creating a new repo, I would just make a PR against the current repo that upgrades the project to 2025 + demonstrates vision.
Ok, I can do that.
@jkleiber here is a demo with the demo drive (diff drive) template used. Oddly, when using my vision, I could only set the max voltage outputted to 1 rather than 12, otherwise the robot moved incredibly fast. This was not an issue when using the AdvantageKit demo drive with their demo vision; could this be caused by vision? I also noticed that when moving through the center of the field the robot seemed to speed up. I feel like these things are related to my vision estimates, but I'm not sure how. Other than that, the new logging features are great, as we can see when poses are rejected / accepted (shown on the left of the screen by the length of the lists), and the average distance also updates much faster. Here's the video: Screen.Recording.2024-12-12.at.11.57.22.AM.mov
@aidenm also, do you have any idea why vision might be moving the robot very fast?
Looking at the logs, it looks like robot periodic is consistently overshooting its loop time by about 50 milliseconds. Could it be getting backed up and then updating the poses all at once?
Maybe when we're in the middle of the field we get more measurements and vision becomes non-performant? To see if that's the case, you could drive to the middle and sit still to see if the loop times and number of tags are considerably higher. Other than the performance issues, this looks good. Are you planning on trying this again with swerve?
Yes for swerve; I was just ruling it out as the issue, which I'm not sure is ruled out yet due to the center-of-field issues. As for swerve, do you want me to attempt to integrate vision with the Phoenix drivetrain, or just copy over the AdvantageKit swerve project (since that's what we're using in 2025)? As for the vision center-of-field issues, that could be it, but I'm not sure why it would behave differently than AdvantageKit's example, as they are the same other than the extraction of methods. @jkleiber
I'll try sitting in the middle for a while and recording that though to officially rule it out.
If copying over the AdvantageKit 2025 swerve project is more likely to work, let's do that. Theoretically @sleepyghost-zzz will get the drivetrain working for 2025-Robot-Code soon and we'll be able to hard cut over to using that repo on highkey before we get the comp bot.
I think it will be easier to avoid swerve ground truth issues. I'll do that, as it'll also ensure the connection is fully working by the time we integrate vision into 2025-Robot-Code. I'll get that done soon.
@linglejack06 sorry I didn't see this sooner. Could it be the same issue as before, where bad poses from the middle aren't getting rejected? If the pose estimator is somehow consistent but still accepting bad poses, could it conceivably be receiving a bunch of poses outside of the edge of the field and then blending those into your pose estimate so that it looks like the robot is zooming toward the edge of the field?
@aidenm using the AdvantageKit example diff drive there is no ground truth. However, the issue would not be estimates outside of the field, as those are filtered out already. It also only accepts middle-of-field poses when the tag distance is low enough. It is likely neither of these issues, because when I run the full AdvantageKit template from their GitHub, none of these issues occur. My vision follows their template and my demo diff drive is the exact code from the AdvantageKit GitHub, which is what is so confusing. I also completely copy-pasted their template RobotContainer, so I'm unsure where the problem lies. The vision is the only slightly different thing, but all pose rejection and standard deviations are done the same way, just extracted into methods.
I'm going to try simulating with their TalonFX swerve template soon and see if I run into the same issues.
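A condensed sketch of the kind of acceptance checks being described (the constants, field dimensions, and names below are illustrative, not the exact values used on this branch):

```java
import edu.wpi.first.math.geometry.Pose3d;

// Hypothetical filter: accept a vision pose only if it is inside the field,
// near the floor, and (for single-tag results) close enough to the tag.
public final class PoseFilter {
  private static final double FIELD_LENGTH_METERS = 17.55; // assumed 2025 field size
  private static final double FIELD_WIDTH_METERS = 8.05;
  private static final double MAX_Z_ERROR_METERS = 0.75;
  private static final double MAX_SINGLE_TAG_DISTANCE_METERS = 4.0;

  private PoseFilter() {}

  public static boolean shouldAccept(Pose3d pose, int tagCount, double averageTagDistance) {
    boolean insideField =
        pose.getX() >= 0.0
            && pose.getX() <= FIELD_LENGTH_METERS
            && pose.getY() >= 0.0
            && pose.getY() <= FIELD_WIDTH_METERS;
    boolean nearGround = Math.abs(pose.getZ()) <= MAX_Z_ERROR_METERS;
    boolean closeEnough =
        tagCount > 1 || averageTagDistance <= MAX_SINGLE_TAG_DISTANCE_METERS;
    return insideField && nearGround && closeEnough;
  }
}
```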
@jkleiber @aidenm vision is integrated into high-key with the 2025 TalonFX swerve template. Everything works properly: poses are rejected outside of the field, etc. Attached is a video where you can see the list of accepted poses shrinking when leaving the field and increasing when entering: Screen.Recording.2024-12-28.at.6.53.37.PM.mov
Oddly, the robot is still moving very fast; it appears very sensitive. I'm not sure if this is a vision issue or a swerve sim issue. Vision seems unlikely, as I feel it would be jerkier if that were the case. Any ideas?
Also, the issue where the drive seemingly speeds up through the middle of the field doesn't appear to be happening, but it's hard to tell with how fast the robot is moving all of the time.
You could log the actual speed of the robot (not x or y, but use the distance formula to calculate total speed) and then graph it to see if it ever goes above what it should, if you want to definitively rule it out @linglejack06. That sensitivity does look pretty insane; did this also happen when you just added DriveWithJoysticks from coppercore, before adding vision?
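A small sketch of how that total-speed logging could look (assuming the drive subsystem already exposes its measured ChassisSpeeds; the class name and log key are placeholders):

```java
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import org.littletonrobotics.junction.Logger;

public final class SpeedLogging {
  private SpeedLogging() {}

  /** Call from the drive subsystem's periodic() with the measured chassis speeds. */
  public static void logTotalSpeed(ChassisSpeeds speeds) {
    // Distance formula on the x/y velocity components gives total ground speed.
    double totalSpeed = Math.hypot(speeds.vxMetersPerSecond, speeds.vyMetersPerSecond);
    Logger.recordOutput("Drive/TotalSpeedMetersPerSec", totalSpeed);
  }
}
```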
Resolves #63
Resolves #70
Resolves #29
Resolves #42