
Set up Vision according to advantage kit #69

Open · wants to merge 24 commits into base: main

Conversation

@linglejack06 (Contributor) commented Dec 8, 2024

  • adds all poses to logging
  • adds all tags to logging
  • simplifies file structure
  • adds pose rejection
  • updates a few naming conventions (should better match the TalonFX swerve, though, if that's the 2025 plan)

Resolves #63
Resolves #70
Resolves #29
Resolves #42

@linglejack06 linglejack06 self-assigned this Dec 8, 2024
@linglejack06 (Contributor Author)

This is failing the build because AdvantageKit doesn't know how to autolog the records it creates (tagObservation and poseObservation). I assume this issue will be fixed when the non-beta 2025 version, or a future beta version, is released. The only workaround I could find is to break each observation apart into individual arrays of doubles, poses, etc. and keep track of the index for each when looking at the inputs.
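The workaround described above — flattening each record into parallel primitive arrays that the logger can serialize — might look roughly like the sketch below. All names here (PoseObservation, VisionInputs, flatten) are illustrative, not the PR's actual code:

```java
import java.util.List;

public class FlattenObservations {
    // Stand-in for the custom record that AutoLog cannot serialize directly.
    record PoseObservation(double timestamp, double x, double y, double rotation) {}

    // Inputs holder containing only primitive arrays, which loggers can handle.
    static class VisionInputs {
        double[] timestamps = new double[0];
        double[] xs = new double[0];
        double[] ys = new double[0];
        double[] rotations = new double[0];
    }

    // Flatten a list of observations into parallel arrays; index i in each
    // array corresponds to the same observation.
    static VisionInputs flatten(List<PoseObservation> observations) {
        VisionInputs inputs = new VisionInputs();
        int n = observations.size();
        inputs.timestamps = new double[n];
        inputs.xs = new double[n];
        inputs.ys = new double[n];
        inputs.rotations = new double[n];
        for (int i = 0; i < n; i++) {
            PoseObservation obs = observations.get(i);
            inputs.timestamps[i] = obs.timestamp();
            inputs.xs[i] = obs.x();
            inputs.ys[i] = obs.y();
            inputs.rotations[i] = obs.rotation();
        }
        return inputs;
    }
}
```

Reading the inputs back then means walking the arrays by shared index, which is the bookkeeping cost mentioned above.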

@linglejack06 (Contributor Author)

Overall, though, this should lead to a small improvement in actual vision. If there is any gain, it would come from consuming all of the poses that might have backed up since we last checked, which I could only see happening if our coprocessor was running without any lag. More importantly, the logging capabilities of this vision system are much better. Once AdvantageKit fixes the autolog, we will be able to go back and view each pose observed, which should let us better tune standard deviations.
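As a sketch of the standard-deviation tuning mentioned above: a common pattern (the one the AdvantageKit vision template uses) is to scale trust in a vision measurement with average tag distance and tag count. The baseline constant and method names here are assumptions for illustration:

```java
public class VisionStdDevs {
    // Assumed baseline: std dev in meters for a single tag at 1 meter.
    static final double BASELINE_XY_STD_DEV = 0.02;

    // Farther tags and fewer tags -> less trust (larger standard deviation).
    // Distance enters quadratically; more tags divide the uncertainty down.
    static double xyStdDev(double avgTagDistanceMeters, int tagCount) {
        return BASELINE_XY_STD_DEV
                * (avgTagDistanceMeters * avgTagDistanceMeters)
                / tagCount;
    }
}
```

Logged per-observation poses and distances would let these constants be fit against real data rather than guessed.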

@aidnem (Contributor) commented Dec 8, 2024

@linglejack06 does their template do the same thing? And if so, would you mind testing if that builds?

@PChild (Member) commented Dec 8, 2024

@linglejack06 does this match the problem you're having?

@linglejack06 (Contributor Author)

> @linglejack06 does this match the problem you're having?

@PChild no, my issue says that it can't find a match to create a NetworkTables instance. It searches through all types (string, int, array, list, Pose2d, etc.) but none match the custom record defined in the template.

@linglejack06 (Contributor Author)

> @linglejack06 does their template do the same thing? And if so, would you mind testing if that builds?

@aidnem I can test that; I haven't yet.

@linglejack06 (Contributor Author)

@jkleiber the build.gradle now follows the AdvantageKit template exactly (other than our spotless config and Maven publishing); it still throws the same error related to JSON auto-detect visibility.

Do you think it could be related to not having the latest WPILib packages installed on my computer, i.e. my WPILib is still 2024? I don't think this is it, because I'd expect an error more closely related to that, like a bad JDK, but maybe this is the issue?

@linglejack06 (Contributor Author)

If that is the issue, then it's worth trying it on the older build.gradle, where we were getting the JSON auto-detect visibility issue without using GradleRio and everything.

@jkleiber (Member)

@linglejack06 AdvantageKit 2025 depends on WPILib 2025, so we probably need WPILib 2025 installed locally for the GradleRio strategy to work.

If installing 2025 doesn't work, then it might be an AdvantageKit problem.

@linglejack06 (Contributor Author) commented Dec 10, 2024 via email

@linglejack06 (Contributor Author)

@jkleiber the only thing I am concerned about with using GradleRio is its compatibility with another GradleRio project when imported into robot code, because a different main class is set up under coppercore. Do you envision this being an issue?

@jkleiber (Member)

It could be an issue for sure, but I just want to rule it out as the problem first

@linglejack06 (Contributor Author)

@jkleiber I just tested with WPILib 2025 fully installed. Now the compiler sees errors in wpilib_interface's MonitoredSubsystem (the logger is used there, so these are the same auto-detect visibility errors we were seeing). I have made the wpilib_interface build.gradle exactly match the AdvantageKit template as well, which has not solved the issue. I am unsure where to go from here; any ideas? Should I try going back to the old build.gradle, but with WPILib 2025 fully installed?

@jkleiber (Member)

@linglejack06 for now, let's go back to the original build.gradle technique. Then let's just not use any AdvantageKit logging features for now, to get the code building.

We could end up with a core vision system that doesn't depend on AdvantageKit at all (which would honestly be a good thing, imo).

If we are getting errors in MonitoredSubsystem due to akit logging failures, let's comment logging out for now and look at using WPILib Alerts instead for the time being.

@aidnem (Contributor) commented Dec 11, 2024

@linglejack06 It's bizarre to me that MonitoredSubsystem is causing issues with the logger. All it's doing is logging two booleans per monitor.

Also, since there weren't any errors when building the main branch, can we reasonably conclude that the issue either lies somewhere within this branch's changes or on your machine? That could narrow down what to look for.

Would you mind sending the error message so I could try and figure out if I messed up MonitoredSubsystem?

@aidnem (Contributor) commented Dec 11, 2024

@jkleiber I added a ticket for using alerts with monitors as well just to track that idea: #71

@linglejack06 (Contributor Author)

@jkleiber @aidnem after further pondering, and killing the Gradle daemons a few times, there are no more AdvantageKit logging errors. Now the issue just revolves around importing monitors into wpilib_interface, which I think is just a Maven issue that can easily be solved with Justin explaining. Here is the error:
Screenshot 2024-12-11 at 9 30 13 AM

@linglejack06 (Contributor Author)

To clarify, vision fully compiles now with the logging features; wpilib_interface only fails compilation due to an issue with the monitors jar. Therefore, the issues revolving around auto-detect visibility, logging, and autolog all appear to have been solved by using GradleRio and the annotation processor.

@linglejack06 (Contributor Author)

```groovy
implementation project(":monitors")
implementation project(":math")
```

Here is how I added the monitors project under the dependencies of wpilib_interface.

@jkleiber (Member)

@linglejack06 Considering that this error doesn't exist on main, it's worth figuring out why this is happening. It's probably related to some change on this branch

It would be good to rebase and resolve the merge conflicts to see whether this is a problem on your computer or whether it reproduces in CI.

@linglejack06 (Contributor Author)

@jkleiber after updating high key to 2025 and moving the ground truth pose calculation to PhoenixDrive, vision estimates are great near the amp and speaker. However, when going to the corners or the middle of the field, both the ground truth and vision begin to randomly set a rotation. I assume something is wrong with my ground truth pose, as vision has no influence on ground truth and ground truth is still setting a random rotation. Any clue as to why this is happening? Would you like me to create a repo for high key 2025 in team 401?

@linglejack06 (Contributor Author)

@jkleiber I just ran simulation with AdvantageKit's vision template exactly (using their diff drive too) and it works perfectly, so I'm going to try using their diff drive example in high key 2025 to see if I can isolate the issue to vision or PhoenixDrive. Then I'll go from there depending on what the issue is.

@jkleiber (Member)

Rather than creating a new repo, I would just make a PR against the current repo that upgrades the project to 2025 + demonstrates vision

@linglejack06 (Contributor Author) commented Dec 12, 2024 via email

@linglejack06 (Contributor Author)

@jkleiber here is a demo with the demo drive (diff drive) template used. Oddly, when using my vision, I could only set the max output voltage to 1 rather than 12; otherwise the robot moved incredibly fast. This was not an issue when using the AdvantageKit demo drive with their demo vision. Could this be caused by vision?

I also noticed that the robot seemed to speed up when moving through the center of the field. I feel like these things are related to my vision estimates, but I'm not sure how.

Other than that, the new logging features are great, as we can see when poses are rejected/accepted (shown to the left of the screen by the length of the lists), and the average distance updates much faster.

Here's the video:

Screen.Recording.2024-12-12.at.11.57.22.AM.mov

@aidnem also, do you have any idea why vision might be moving the robot very fast?

@linglejack06 (Contributor Author) commented Dec 12, 2024

Looking at the logs, it looks like robot periodic is consistently overshooting the loop time by about 50 milliseconds. Could it be getting backed up and then updating the poses all at once?
EDIT: this is not the case; the exact AdvantageKit template also has this 50 ms loop overrun and has no speed issues or changes in speed when fully pressing the keyboard down and with 12 volts.

@jkleiber (Member)

Maybe when we're in the middle of the field we get more measurements and vision becomes non-performant? To see if that's the case you could drive to the middle and sit still to see if loop times and number of tags is considerably higher

Other than the performance issues this looks good. Are you planning on trying this again with swerve?

@linglejack06 (Contributor Author)

Yes for swerve. I was just ruling it out as the issue, which I'm not sure is ruled out yet due to the center-of-field issues. As for swerve, do you want me to attempt to integrate vision with the Phoenix drivetrain, or just copy over the AdvantageKit swerve project (since that's what we're using in 2025)?

As for the vision center-of-field issues, that could be it, but I'm not sure why it would behave any differently from AdvantageKit's example, as they are the same other than the extraction of methods. @jkleiber

@linglejack06
Copy link
Contributor Author

I'll try sitting in the middle for a while and recording that, though, to officially rule it out.

@jkleiber (Member)

If copying over the advantagekit 2025 swerve project is more likely to work, let's do that.

Theoretically @sleepyghost-zzz will get the drivetrain working for 2025-Robot-Code soon and we'll be able to hard cut over to using that repo on highkey before we get comp bot

@linglejack06 (Contributor Author)

I think it will be easier to avoid swerve ground truth issues. I'll do that as it'll also ensure that connection is fully working by the time we integrate vision into 2025-robot-code. I'll get that done soon.

@aidnem (Contributor) commented Dec 15, 2024

@linglejack06 sorry I didn't see this sooner. Could it be the same issue as before, where bad poses from the middle aren't getting rejected? If the pose estimator is somehow consistent but still accepting bad poses, could it conceivably be receiving a bunch of poses outside of the edge of the field and then blending those into your pose estimate so that it looks like the robot is zooming toward the edge of the field?
Do you know if ground truth also moves super fast or just the pose estimate? If you recorded it with ground truth logged and then estimated pose as a ghost you could watch the difference to see if ground truth is also speeding up near the middle of the field.

@linglejack06 (Contributor Author)

@aidnem using the AdvantageKit example diff drive, there is no ground truth. However, the issue would not be estimates outside of the field, as those are already filtered out. It also only accepts middle-of-field poses when the tag distance is low enough. It is likely neither of these issues, as none of them occur when I run the full AdvantageKit template from their GitHub. My vision follows their template and my demo diff drive is the exact code from the AdvantageKit GitHub; that's what is so confusing. I also completely copy-pasted their template RobotContainer, so I'm unsure where the problem lies.

The vision is the only slightly different thing, but all pose rejection and standard deviations are done the same way, just extracted into methods.
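For illustration, the kind of pose rejection discussed in this thread (filtering estimates outside the field boundary and by tag distance) might be sketched as below. The field dimensions and distance threshold are assumed values, not the PR's actual constants:

```java
public class PoseRejection {
    // Assumed field dimensions (approximately the 2024 field) and threshold.
    static final double FIELD_LENGTH_METERS = 16.54;
    static final double FIELD_WIDTH_METERS = 8.21;
    static final double MAX_AVG_TAG_DISTANCE_METERS = 4.0;

    // Accept a vision pose only if it lies inside the field boundary and
    // the average distance to the observed tags is small enough to trust.
    static boolean shouldAccept(double x, double y, double avgTagDistanceMeters) {
        boolean insideField = x >= 0.0 && x <= FIELD_LENGTH_METERS
                && y >= 0.0 && y <= FIELD_WIDTH_METERS;
        boolean closeEnough = avgTagDistanceMeters <= MAX_AVG_TAG_DISTANCE_METERS;
        return insideField && closeEnough;
    }
}
```

Logging both the accepted and rejected lists, as the PR does, makes it easy to see which of these two conditions is doing the filtering in any given replay.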

@linglejack06 (Contributor Author)

I'm going to try simulating with their TalonFX swerve template soon and see if I run into the same issues.

@linglejack06 (Contributor Author)

@jkleiber @aidnem vision is integrated into high-key with the 2025 TalonFX swerve template. Everything works properly: poses are rejected outside of the field, etc. Attached is a video where you can see the list of accepted poses shrinking when leaving the field and growing when entering it.

Screen.Recording.2024-12-28.at.6.53.37.PM.mov

Oddly, the robot is still moving very fast; it appears very sensitive. I'm not sure if this is a vision issue or a swerve sim issue. Vision seems unlikely, as I feel it would be jerkier if that were the case. Any ideas?

@linglejack06 (Contributor Author)

Also, the issue where the drive seemingly speeds up through the middle of the field doesn't appear to be happening, but it's hard to tell with how fast the robot is moving all of the time.

@aidnem (Contributor) commented Dec 29, 2024

You could log the actual speed of the robot (not x or y, but use the distance formula to calculate total speed), graph it, and see if it goes above what it should at any point, if you want to definitively rule it out @linglejack06. That sensitivity does look pretty insane; did this also happen when you just added DriveWithJoysticks from coppercore, before adding vision?
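A minimal sketch of the speed-logging suggestion above, deriving total speed from consecutive pose samples with the distance formula (the class and method names are illustrative):

```java
public class SpeedLogger {
    // Total planar speed between two pose samples taken dtSeconds apart:
    // distance formula over the (x, y) delta, divided by the elapsed time.
    static double totalSpeed(double prevX, double prevY,
                             double x, double y, double dtSeconds) {
        double distanceMeters = Math.hypot(x - prevX, y - prevY);
        return distanceMeters / dtSeconds;
    }
}
```

Logging this value each loop and graphing it would show immediately whether the robot ever exceeds its configured max speed.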
