
Improve Examples Annex (Draft v0.0.7 some feedback) #186

Open
jzvolensky opened this issue Aug 22, 2024 · 5 comments

Comments

@jzvolensky

Hello, I had a chance to look through the document and wanted to leave some feedback/thoughts here.

Overall, I think the draft is looking good so far. Perhaps as someone not so familiar with standardization documents, I feel more examples would be useful. We have also discussed including example responses, as has been done in other OGC standards. This would certainly make the document easier to follow, especially when the "same" operation can be done in multiple ways, e.g. using subset or bbox/time. The figures illustrating the point/area overlaps are nice and helpful.

7.2.4 Coverage data retrieval requirement J:
If we are not supporting scaling, why do we need to accept the parameter with a default value and error out otherwise? Wouldn't we naturally provide the original data at full scale, since we do not support scaling? /per/core/limits below describes the limits, so that point is also a bit confusing. I don't quite understand the purpose of this.

Thanks!

@jerstlouis
Member

Thanks a lot for the feedback and for joining the meeting @jzvolensky .

more examples would be useful

Fully agreed. That will take some effort, but I think it is well worth it, and we need to improve on this before the OAB review.

If we are not supporting scaling, why do we need to accept the parameter with a default value and error out otherwise?

I tried to explain this during the meeting, but it is indeed a bit confusing.

Because of the resampling permission, a server supporting scaling can return a downsampled version of the coverage when asking for a large area of a high resolution coverage.

So to allow a client that absolutely wants only the full-resolution data, without having to care whether the server supports down-sampling, that client needs to be able to always pass scale-factor=1. Without the requirement for a non-scaling server to accept and ignore scale-factor=1, the client would need to check the conformance declaration and include scale-factor=1 only for servers that do support scaling, since a non-scaling server would return an error. That would make it harder to guarantee getting the full-resolution data than simply telling clients: if you want the full resolution, always include scale-factor=1.
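A minimal sketch of what that client behavior could look like (a hypothetical helper, not part of the spec; the URL layout follows the usual OGC API `/collections/{id}/coverage` pattern):

```python
from urllib.parse import urlencode

def coverage_url(base_url, collection, bbox=None, full_resolution=True):
    """Build a coverage request URL. Always including scale-factor=1
    guarantees full-resolution data: non-scaling servers are required
    to accept and ignore it, so the client never needs to inspect the
    server's conformance declaration first."""
    params = {}
    if bbox:
        params["bbox"] = ",".join(str(v) for v in bbox)
    if full_resolution:
        params["scale-factor"] = "1"
    query = urlencode(params)
    path = f"{base_url}/collections/{collection}/coverage"
    return f"{path}?{query}" if query else path
```

The point of the design is exactly that this helper works unchanged against both scaling and non-scaling servers.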

I hope these explanations make this clearer. If you still feel this approach is too convoluted and confusing for its benefits, we could discuss further.

/per/core/limits below describes the limits so that point is also a bit confusing. I don't quite understand the purpose of this.

The server can have limits on the maximum size in a particular dimension, or on the overall number of cells being returned, and these limits can be advertised in the service metadata.
They are an indication to the client not to request data beyond that size, and they also allow the server to return a 4xx error when asked for too much data at once.
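A client could pre-check its request against those advertised limits along these lines (a sketch; the field names `maxCells`, `maxWidth`, and `maxHeight` are illustrative, not taken from the spec):

```python
def fits_limits(width, height, limits):
    """Check a planned 2D request against server-advertised limits.
    Requests exceeding the limits can be answered with a 4xx error,
    so a well-behaved client avoids sending them in the first place."""
    inf = float("inf")
    if width * height > limits.get("maxCells", inf):
        return False
    if width > limits.get("maxWidth", inf):
        return False
    if height > limits.get("maxHeight", inf):
        return False
    return True
```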

@jerstlouis jerstlouis changed the title Draft v0.0.7 some feedback Improve Examples Annex (Draft v0.0.7 some feedback) Aug 28, 2024
@jerstlouis
Member

I filed this related issue for scale-factor=1: #190

@jzvolensky
Author

Hi Jerome,

I read your response earlier but forgot to reply haha!

Regarding the scale-factor: I see what you mean now, looking at the permissions again. It is still a bit confusing; I get that you are trying to avoid 4xx errors, but I think this is unnecessary complexity.

To me, doing it the other way around sounds more reasonable, and is probably more common. As a user I want to get a coverage. It is too large, and the server does not want to give it to me; instead it returns an error telling me to set the scale-factor to 0.5, or to choose a smaller bbox, fewer timestamps, etc. to fit within the limits.

In this draft spec, if I am a user and I want coverage data for my research project, I request the data and, because it is huge, an automatic scale factor of 0.2 is applied, for example, right? However, I am not aware of this because I did not specify scale-factor myself, so the data may no longer be usable for my use case, and I have to go back and update my request to fit the limits and ensure I get full-scale data back. This creates a redundant request to the server.

I think it is better to throw an error and be explicit instead of doing behind the scenes magic and auto-scaling just so the users do not get an error. Of course, all of this depends on the real-world implementations.
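The auto-scaling behavior being debated here could be implemented server-side roughly as follows (a sketch under assumptions: a 2D coverage, a single `max_cells` limit, and a square-root heuristic to spread the reduction evenly over both axes; none of this wording is from the spec):

```python
import math

def auto_scale_factor(width, height, max_cells):
    """Pick the smallest scale-factor >= 1 so that the downsampled
    output, (width / s) * (height / s) cells, fits within max_cells.
    Follows the WCS convention where scale-factor 2.0 means half the
    resolution in each direction. Returns 1.0 when no downsampling
    is needed, i.e. the original resolution is served."""
    cells = width * height
    if cells <= max_cells:
        return 1.0
    return math.sqrt(cells / max_cells)
```

A client that passed scale-factor=1 explicitly would bypass this logic and get a 4xx error instead, which is the trade-off discussed above.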

@jerstlouis
Member

Thanks for the reply @jzvolensky .

There was a previous long discussion and resolution on this topic e.g., see #54 (comment) .

I personally strongly feel that a client asking for the whole coverage cares more about getting something back for the full extent at the best resolution possible. If no subsetting was used, it's probably because the area of interest is the whole area.
It's a bit tricky for the client to figure out the scale at which the server will agree to return the whole area -- this requirement makes that a lot easier.

If the client wants to ensure the original resolution, it just needs to include scale-factor=1 from the start, so there's no need for a redundant request. An OGC API - Coverages client would be aware of that and could present that in an intuitive user interface that makes this obvious. This is why we have the requirement for servers not implementing the Scaling requirement class to still support (i.e., not throw an error but simply ignore) scale-factor=1.

Regarding the scale-factor, it is a bit counter-intuitive: a scale factor of 2.0 actually means half the resolution in each direction. This follows the same convention as WCS.
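As a quick illustration of that convention (a hypothetical helper, not part of the spec):

```python
def output_size(source_size, scale_factor):
    """Grid size along one axis after applying a WCS-style scale
    factor: 2.0 halves the resolution, 0.5 would double it, and
    1.0 preserves the original grid."""
    return round(source_size / scale_factor)
```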

@jzvolensky
Author

Okay, I think I get it now. Thanks for the thorough explanations! I also read the comment you linked, and I can see what you mean now with the 4xx errors. I suppose time and future real-world implementations will show how this is perceived among general users (hopefully well). Thanks again!
