Improve developer experience with finding and comparing mock requests #2348

Draft: volodymyrss wants to merge 3 commits into main
Conversation

volodymyrss (Contributor)

Sometimes it is difficult for a developer to understand why a request changed and no longer matches the stored mock requests. This PR adds functionality for finding and comparing a request against the available mock requests.

@@ -171,7 +171,7 @@ def query_region_async(self, position: Union[coordinates.SkyCoord, str],
         # Generate the request
         request_payload = self._args_to_payload(
             mission=mission,
-            entry="{},{}".format(c.ra.degree, c.dec.degree),
+            entry="{:.10f},{:.10f}".format(c.ra.degree, c.dec.degree),
Contributor:

just an aside, this is in PR #2347 - so that should be merged first, then this will disappear from the diff

volodymyrss (Contributor, author):

That's right. I meant this one to be merged after #2347.

Comment on lines +85 to +88
available_requests.append([
    SequenceMatcher(a=a, b=ua).ratio(),
    Differ().compare(a=a, b=ua),
])
Contributor:

could you add a little commentary about what this does? I'm not familiar with the difflib library. This seems like something that could be more generally useful if I understood it better.

volodymyrss (Contributor, author):

It produces a diff between the request we are trying to substitute with a mock response and the available mock requests, sorted by similarity.
The diff looks like the one pytest uses to detail exceptions in asserts.
I will add an example later, when I have time to work on this (not in the next couple of days).
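For reference, a minimal sketch of this kind of ranking, assuming the incoming request and the stored mock requests are compared as plain URL strings; this is an illustration of the idea, not the exact PR code:

from difflib import SequenceMatcher, unified_diff

def rank_mock_requests(request_url, stored_urls):
    """Return (similarity, stored_url, diff_lines) tuples, best match first."""
    ranked = []
    for stored in stored_urls:
        ratio = SequenceMatcher(a=request_url, b=stored).ratio()
        # unified_diff gives a pytest-assert-style, line-by-line comparison
        diff = list(unified_diff([request_url], [stored], lineterm=""))
        ranked.append((ratio, stored, diff))
    return sorted(ranked, key=lambda item: item[0], reverse=True)

best_ratio, best_url, best_diff = rank_mock_requests(
    "https://example.org/api?ra=10.0000000000&dec=20.0000000000",
    ["https://example.org/api?ra=10.0&dec=20.0",
     "https://example.org/other"])[0]
print(best_ratio)           # similarity of the closest stored request
print("\n".join(best_diff)) # diff between the two request strings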

@bsipocz (Member) commented Mar 29, 2022

Is the deletion of the files intended? If yes, then I think we should go ahead with this one first rather than modifying a bunch of them in the other PR and then getting them deleted again. Basically, my only concern is that we don't really want to ship mock data that is due to change regularly.

@volodymyrss (Contributor, author)

Is the deletion of the files intended? If yes, then I think we should go ahead with this one first rather than modifying a bunch of them in the other PR and then getting them deleted again. Basically, my only concern is that we don't really want to ship mock data that is due to change regularly.

The data files are not so much deleted as renamed, from ".dat" to ".json", since I now store JSON in them, including the request parameters and the test name, for easier tracking of what belongs where.

But besides that, since the filename derives from the request, new files are really needed when the request changes. I suppose every time the request has to change (like in #2347) it's reasonable to change the files? We could keep the files even then, with this approach:

patch_get.assume_fileid_for_request(
    lambda url, params: f"last-month-{params['tablehead'].split()[-1]}")

But regenerating the file for a new request actually helps to find module problems like the one in #2347; I will explain there.
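As an illustration of the point that filenames derive from the request, here is a hedged sketch; the helper names below are hypothetical, only assume_fileid_for_request appears in the PR:

import hashlib
import json

def default_fileid(url, params):
    # Stable id derived from the full request, so a changed request
    # automatically maps to a new mock-data file.
    payload = json.dumps({"url": url, "params": params}, sort_keys=True)
    return hashlib.md5(payload.encode()).hexdigest()[:16]

def fileid_for_request(url, params, override=None):
    # `override` plays the role of the lambda passed to
    # assume_fileid_for_request: it lets a test keep a fixed file name
    # even when the request itself changes.
    if override is not None:
        return override(url, params)
    return default_fileid(url, params)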

@bsipocz (Member) commented Mar 29, 2022

I suppose every time the request has to change (like in #2347) it's reasonable to change the files?

No. The main purpose of the mock tests is to ensure the parsing, etc. are correct, not to ensure we get the right answer from the archive. So it doesn't really matter if the data file has different coordinates or a different number of entries based on the actual query.

I very much love the idea that the mock tests are in fact rerun with real data access when remote-data is enabled, but it shouldn't come at the price of constantly changing bundled test files, as that will be an even bigger headache to maintain than our current situation (we kind of have a similar approach with the mpl tests in astropy core, and it's everything but ideal when they need to be regenerated).

@volodymyrss (Contributor, author)

I suppose every time the request has to change (like in #2347) it's reasonable to change the files?

No. The main purpose of the mock tests is to ensure the parsing, etc. are correct, not to ensure we get the right answer from the archive. So it doesn't really matter if the data file has different coordinates or a different number of entries based on the actual query.

Most queries are made to return identical results: no change in the number of entries.
Some are handled specially, e.g. "last month" results, which change (different time range) but keep the same file name and the same output.

A lot of the time, when the request changes the result might change in a way which affects parsing, e.g. it will be empty.
The case of #2347 is peculiar: the request is different, but I can be more or less sure the response should be compatible.
OK, I can freeze the filename for this one. But it comes at the cost of an extra custom filename, not a big deal.

I very much love the idea that the mock tests are in fact rerun with real data access when remote-data is enabled, but it shouldn't come at the price of constantly changing bundled test files, as that will be an even bigger headache to maintain than our current situation (we kind of have a similar approach with the mpl tests in astropy core, and it's everything but ideal when they need to be regenerated).

I see, I quite understand. Nothing in this fixture approach really forces the developer to regenerate mock data when the service changes its responses.
There is only one reason why it is more convenient right now: generation of mock data is triggered when there is no mock data at all. It's kind of a workaround; it would be nicer to have a pytest option, something like --generate-remote-data. I did not want to do something like that in a module-specific PR. If this would be OK, it would make partial updates of mock data easier.
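A minimal sketch of what such an option could look like in a conftest.py; the flag name is only the suggestion above and does not exist in pytest or pytest-remote-data:

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--generate-remote-data", action="store_true", default=False,
        help="re-run mock tests against the live service and refresh the mock files")

@pytest.fixture
def generate_remote_data(request):
    # Fixtures can consult this flag to decide whether to regenerate mock data,
    # instead of only generating it when no mock file exists at all.
    return request.config.getoption("--generate-remote-data")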

@bsipocz (Member) commented Mar 29, 2022

OK, I can freeze the filename for this one. But it comes at the cost of an extra custom filename, not a big deal.

Custom file names are fine; maybe you could use the name of the test function they belong to? (Though if a file is used in multiple tests, that may not be the most logical idea.)

I see, I quite understand. Nothing in this fixture approach really forces the developer to regenerate mock data when the service changes its responses.
There is only one reason why it is more convenient right now: generation of mock data is triggered when there is no mock data at all. It's kind of a workaround; it would be nicer to have a pytest option, something like --generate-remote-data. I did not want to do something like that in a module-specific PR. If this would be OK, it would make partial updates of mock data easier.

And I very much appreciate it, and am in fact looking forward to using your approach much more widely than this module. It's just the details that need to be figured out.

@volodymyrss (Contributor, author)

OK, I can freeze the filename for this one. But it comes at the cost of an extra custom filename, not a big deal.

Custom file names are fine; maybe you could use the name of the test function they belong to? (Though if a file is used in multiple tests, that may not be the most logical idea.)

That's right, I thought about this and dismissed it for this very reason. It could still be useful to have it, at the cost of duplicating some files. With this PR, the test name is stored inside the data files.
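A hedged sketch of both ideas, deriving the file name from the requesting test via pytest's request.node.name and recording that test name inside the JSON mock file; the fixture name and file layout here are illustrative only:

import json
import pytest

@pytest.fixture
def save_mock_response(request, tmp_path):
    def _save(params, body):
        data = {
            "test_name": request.node.name,  # which test generated this file
            "params": params,                # request parameters, for tracking
            "body": body,                    # the mocked response payload
        }
        path = tmp_path / f"{request.node.name}.json"
        path.write_text(json.dumps(data, indent=2))
        return path
    return _save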

There is only one reason why it is more convenient right now: generation of mock data is triggered when there is no mock data at all. It's kind of a workaround; it would be nicer to have a pytest option, something like --generate-remote-data. I did not want to do something like that in a module-specific PR. If this would be OK, it would make partial updates of mock data easier.

And I very much appreciate it, and am in fact looking forward to using your approach much more widely than this module.
It's just the details that need to be figured out.

Thank you, and I absolutely agree that in this kind of case it is essential to have a design that makes it easy for the developer to take the right approach and quickly find issues. So it takes some thinking and discussion.

So, what do you think about something like a --generate-remote-data option, can it be added? Here? Or rather in #2255? It could be used while selecting the tests to be updated.

Note that while this is actually quite interesting to do, I only have so much time. I will commit to fixing at least #2347 and likely #2348 ASAP, but the rest might have to wait a week or so. I'd really like to finish it; it's fun.

@bsipocz (Member) commented Mar 29, 2022

So, what do you think about something like a --generate-remote-data option, can it be added? Here? Or rather in #2255? It could be used while selecting the tests to be updated.

Yes, having that option would be great. I'm not sure whether it makes sense to add it to pytest-remote-data or just to astroquery; either of those could be an option.
Adding it in #2255 or in a separate new PR both sound like good options.

Note that while this is actually quite interesting to do, I only have so much time. I will commit to fixing at least #2347 and likely #2348 ASAP, but the rest might have to wait a week or so. I'd really like to finish it; it's fun.

No rush for any of this. We can temporarily skip the failing tests so other PRs are not bogged down with it (they are not really, I merged a few as those failures are clearly unrelated), or fix #2347; either is fine.

@bsipocz (Member) commented Mar 29, 2022

Oh, and I should have started with this: thanks for working on this; improving the infrastructure is not something that is picked up frequently, though its impact is big!

volodymyrss marked this pull request as draft on March 30, 2022, 20:08