
Add TrackOverlapMetrics to benchmarking #117

Draft · wants to merge 6 commits into main

Conversation

msschwartz21
Collaborator

No description provided.

@msschwartz21 msschwartz21 added the tests Test case related issues label Nov 13, 2023
tests/bench.py — review thread (outdated, resolved)
@cmalinmayor
Collaborator

Did the benchmarking fail just because it had an extra thing to run that wasn't in the baseline? How does the action deal with adding new benchmark functions?

@msschwartz21
Collaborator Author

> Did the benchmarking fail just because it had an extra thing to run that wasn't in the baseline? How does the action deal with adding new benchmark functions?

Honestly I have no idea what's happening. The updated benchmarking runs fine locally for me. I don't think the addition of a new function should cause a problem. We manually make a comparison when we build a pandas dataframe in the "generate report" section. In that case, I think we should just end up with a missing value for the new function on the base commit.
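The missing-value behavior described above can be illustrated with a small pandas sketch (the benchmark names, column names, and timings here are made up for illustration, not the actual output of the "generate report" step):

```python
import pandas as pd

# Hypothetical benchmark timings: the base commit has no entry for the
# newly added TrackOverlapMetrics benchmark.
base = pd.DataFrame(
    {"name": ["bench_ctc", "bench_divisions"], "base_time": [1.2, 0.8]}
)
head = pd.DataFrame(
    {
        "name": ["bench_ctc", "bench_divisions", "bench_track_overlap"],
        "head_time": [1.1, 0.9, 2.5],
    }
)

# An outer merge keeps the new row; its base_time is simply NaN rather
# than causing an error.
report = base.merge(head, on="name", how="outer")
print(report)
```

So in principle the comparison itself should tolerate a benchmark that exists only on the head commit.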

@tlambert03 Any guidance on troubleshooting action failures like this? https://github.com/Janelia-Trackathon-2023/traccuracy/actions/runs/6855566985/job/18642828648?pr=117

@tlambert03
Contributor

huh, strange. hard to say what's canceling it huh? i would just do some brute force stuff. maybe add -v to the pytest call to get a bit more information on whether it's hanging on exactly the test you've added. exclude/comment that test again just to ensure that it is indeed that test. then dig into the test itself and perhaps put debugging statements to see where in the metric it's canceling?
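The brute-force steps suggested above might look something like this (the test name `test_track_overlap` is an assumption for illustration, not necessarily the actual name in this PR):

```shell
# 1. Re-run the benchmarks verbosely to see exactly which test the job hangs on.
pytest tests/bench.py -v

# 2. Deselect the new benchmark to confirm it is the one stalling.
#    (--deselect takes a node id of the form file::function)
pytest tests/bench.py -v --deselect "tests/bench.py::test_track_overlap"

# 3. Run only the suspect test with output capture disabled (-s) so any
#    debugging print statements inside the metric show up immediately.
pytest "tests/bench.py::test_track_overlap" -v -s
```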

@codecov-commenter

codecov-commenter commented Nov 13, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (e5df26a) 83.64% vs. head (a2662f6) 83.66%.


Additional details and impacted files
@@            Coverage Diff             @@
##             main     #117      +/-   ##
==========================================
+ Coverage   83.64%   83.66%   +0.01%     
==========================================
  Files          19       19              
  Lines         899      900       +1     
==========================================
+ Hits          752      753       +1     
  Misses        147      147              


@msschwartz21 msschwartz21 marked this pull request as draft November 14, 2023 19:35
4 participants