Other process encodings? #325
@m-mohr no. As you point out, the list of processes at `/processes` is bound to the processSummary.yaml schema. You can, however, respond with an openEO process description at `/processes/{processID}`. Not sure if the specification mentions this explicitly, but in the links section of the process summary you can include links to the process description in any number of process description languages or vocabularies, so you could include a link to an openEO description from there.
@pvretano Regarding what is returned in the process list at `/processes`: clients can already negotiate an HTML representation of it. The process list uses the "self" relation type inside the summary to link to the full object for each individual process, so it would make sense that the representation of the summary and that of the full process description are consistent based on a particular negotiated representation.
Thanks. I guess the content negotiation doesn't help if the media type for both is `application/json`.
@m-mohr In Part 3, there is currently a suggestion in 14-Media Types to define specific media types. This is an issue that pops up everywhere. Possibly we should always define new specific media types for new JSON schemas.
The issue in /processes is also that it has a JSON "wrapper" (links, processes -- which is actually the same in OAP and openEO) and only the individual processes included in the processes property differ. I assume ...
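For illustration, a minimal sketch of that shared wrapper (property names as in both specifications; the objects inside `processes` are where the two encodings diverge, and the values below are made up):

```json
{
  "processes": [
    { "id": "RFClassify", "version": "1.0.0" }
  ],
  "links": [
    { "href": "https://example.com/processes", "rel": "self", "type": "application/json" }
  ]
}
```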
@m-mohr If the component processes of openEO could be defined as regular OGC Process Descriptions, that would really be ideal. The openEO process graph is the equivalent of the process execution request, which is extended in Part 3 to be able to nest processes.
Correct, sorry for the confusion -- I edited my message for clarity. It's for making a request to execute the process, not describe it. Not a response, but the payload from the client to execute the process. In a sense it's a distinction between a workflow description (process execution request or OpenEO process graph) vs. a process description (single black box process). It is possible that a single process also happens to be defined by a workflow (process execution request or OpenEO process graph), in which case that could be made available as a visible "execution unit" (the definition of that process, not its description). That is related to Part 2: Deploy, Replace, Update and the Deployed Workflows requirements class of Part 3. The description is only the inputs and outputs of the process; the definition's execution unit is what the process actually does expressed in some way (Docker container, execution request, OpenEO process graph, CWL, Python script...).
I agree about the default. Also, I don't understand why it is not mandatory, to ensure interoperability. As mentioned, the definition is what changes a lot between implementations and extensions, but the core description should be somewhat consistent regardless. Note that even if ...
21-AUG-2023: There is a difference between what you get at `/processes` and what you get at `/processes/{processID}`. The schema for the "list of processes" is fixed by the specification and is defined in processSummary.yaml. All implementations of OAPIP, regardless of how they describe their processes, must use the same summary schema for the list of processes. You can negotiate a different output format (e.g. XML) at `/processes`. The story is different at `/processes/{processID}`. Assigning to @pvretano to create a PR to add a media type for an OGC processes description ...
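For reference, a minimal process summary as constrained by processSummary.yaml might look like the following (only `id` and `version` are required; the title and link values here are illustrative):

```json
{
  "id": "RFClassify",
  "version": "1.0.0",
  "title": "Random forest classifier",
  "jobControlOptions": ["sync-execute", "async-execute"],
  "links": [
    {
      "href": "https://example.com/processes/RFClassify",
      "rel": "self",
      "type": "application/json",
      "title": "process description"
    }
  ]
}
```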
Thanks. Would it be an option to allow mixing different process types in /processes?
Even if we do have separate media types for openEO vs. OGC process descriptions, with the openEO description friendly to existing openEO clients, ideally I think it should also be possible for openEO backends to offer an OGC process description for those openEO processes, for clients implementing strictly OGC API - Processes. Still hoping we can validate the feasibility of this in T19-GDC (even if we don't have time to implement it). @pvretano I think it would also make sense to explore having an OpenAPI process description, which we could consider including in 2.0 as a separate requirements class, perhaps in the sprint next week if we have time. That is, ...
IMO, it is not "should", but "must". Otherwise, they are not really an interoperable OGC API - Processes implementation... Interoperability is already barely accomplished with current implementations that should be using the same process description format. Adding alternatives will make it even more complicated than it already is. Nothing wrong in allowing additional media-types/encodings though, as long as the common one is fulfilled.
I'm curious what you have in mind? Isn't this already offered for inputs/outputs using this portion: ...
Technically the OGC process description is not a mandatory requirement class, but I agree that this is a very strong should, and I hope it can be achieved. There is ongoing discussion about whether an implementation of a GeoDataCube API supporting processing with an openEO backend should be fully aligned with OGC API - Processes, and I believe it should (including support for OGC Process Description), so that a generic OGC API - Processes / GeoDataCube API processing client can execute it, but this differs from the current published openEO specification. I will present on this topic and there will be a follow-on discussion on Monday the 25th at the Singapore meeting in the GeoDataCube SWG.
This is a JSON Schema within an OGC process description, not an OpenAPI definition. Essentially this would allow generic OpenAPI clients and developers to execute OGC API - Processes without knowing anything about OGC Process Descriptions or OGC API - Processes in general.
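As a rough, non-normative illustration of that idea, an OpenAPI-encoded description of a single process could expose its execution endpoint with the input schema inlined; the process name, path, and schema below are hypothetical:

```json
{
  "openapi": "3.0.3",
  "info": { "title": "RFClassify", "version": "1.0.0" },
  "paths": {
    "/processes/RFClassify/execution": {
      "post": {
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "inputs": {
                    "type": "object",
                    "properties": {
                      "data": { "type": "string", "format": "uri" }
                    }
                  }
                }
              }
            }
          }
        },
        "responses": {
          "200": { "description": "Classification result" }
        }
      }
    }
  }
}
```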
@fmigneault @jerstlouis when we first wrote WFS, for the sake of interoperability, we mandated GML. That turned out to be both a good thing (for interoperability) and a bad thing (because GML was a beast to implement). So, when the time came to design OGC API Features, we decided instead not to mandate any particular format and to let the client and the server negotiate a mutually agreeable format. To HELP interoperability we added conformance classes to OGC API Features for GeoJSON and GML.
@pvretano agreed, but where I hope we require it is as a dependency of the GeoDataCube API "Processing" conformance class (I added "OGC Process Description" to the processes-1 row in https://gitlab.ogc.org/ogc/T19-GDC/-/issues/25), which is sort of "profiling" OGC API standards to maximize interoperability (i.e., the chance of a successful client/server negotiation).
@jerstlouis agreed! It is perfectly legal for a referencing specification like GDC to say that an optional conformance class (OGC Process Description in this case) has to be mandatory in the GDC profile of processes. |
Before we can require the OGC Process Description we should make sure it's good enough to cater for most needs. I'm not sure whether we could encode openEO processes in OGC Process Descriptions, for example. The issue with content negotiation is that you may have two JSON-based descriptions that don't have specific media types for them. And then you must also be able to convert from one encoding to the other, which may not be possible, or only with some losses (see above).
That is very reasonable.
Can we perform the experiment and validate that? Would you have an example openEO process description that exercises most of the capabilities, and we can try to do the mapping? If something is missing, there would be no better time than right now to try to address this with the Processes Code Sprint early next week validating Processes 1.1 or 2.0. I really believe it is critical for interoperability to have this OGC Process Description support all use cases, including the openEO process descriptions.
I'd love to, but I'm on vacation until October so I can't do it before the sprint. |
@m-mohr Enjoy your vacation :) But if you have time to just point us to a good sample openEO process description between a Mai Tai and a Piña Colada I could give it a try next week :)
I can only point you to the official docs right now, so the process description schema at https://api.openeo.org/#tag/Process-Discovery/operation/list-processes and the processes at https://processes.openeo.org |
@jerstlouis if I understand what you are saying, I am not sure I agree. Interoperability is not a function of the "OGC Process Description". The "OGC Process Description" is one way to describe a process. OpenEO is another, as is CWL. For that matter, so is OpenAPI (I have been experimenting with posting OpenAPI descriptions of a process to my server). What is required is that the API can accommodate each of these process description languages, which it can via different conformance classes. The "OGC Process Description" conformance class already exists, which means that a client can request the description of a process (i.e. GET /processes/{processId}) in that encoding.

The same line of reasoning would apply to Part 2 where a process is described for the purpose of deployment. A server that claims to support multiple process description languages could then deploy a process described using "OGC Process Description" or OpenEO or CWL or ...

So I guess what I am saying in response to your comment and @m-mohr's original comment is that it should not be a matter of mandating support for one process description language/format and then making sure that format can accommodate other process description languages/formats. Each process description language/format should be supported in its own right (via separate conformance classes) and the server should be responsible (internally) for crosswalking one to the other as per a client's request.

Now that I am writing this it occurs to me that it may conflict with my previous agreement vis-a-vis GDC. Sorry about that, but I have been thinking more about the situation and this comment reflects my current thinking. I could be completely wrong, but I welcome response comments because this is an important interoperability point.
@pvretano Are we perhaps mixing up Process Description and Process Definition here? By Process Description I am referring strictly to the response of GET /processes/{processId}. Although there is the notion that a Part 2 deployment can include a process description if the server can't figure out how to make up an OGC Process Description by itself, we are not talking about deployment at all here. By interoperability, I mean any client would be able to only implement Part 1 with OGC Process Description, and would be able to execute any process, regardless of how it was defined (whether with CWL, openEO, Part 3, or anything else).
From a quick look at the first process from the EURAC GDC API end-point ( https://dev.openeo.eurac.edu/processes ), it seems to me that the basics of the openEO -> OGC process description mapping are quite straightforward. The openEO id and description carry over directly, and the openEO summary maps to the OGC title. The openEO parameters map to the OGC process description inputs, and the openEO returns maps to the outputs.
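To make that mapping concrete, a hand-written sketch (not taken from the EURAC endpoint; the parameter name and values are invented). An openEO parameter such as:

```json
{
  "name": "threshold",
  "description": "Classification threshold.",
  "optional": true,
  "default": 0.5,
  "schema": { "type": "number" }
}
```

could become the following OGC process description input, keyed by the parameter name, with `optional`/`default` expressed through `minOccurs` and the JSON Schema `default` keyword:

```json
{
  "threshold": {
    "title": "threshold",
    "description": "Classification threshold.",
    "minOccurs": 0,
    "maxOccurs": 1,
    "schema": { "type": "number", "default": 0.5 }
  }
}
```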
I don't think it's that easy. For example, the returns and outputs are slightly different to me, and in my conversion I had to turn the OGC outputs into openEO parameters (as there's a choice to be made). A couple of things can't be translated at all, I think, for example the process_graph and examples properties (but they are strictly descriptive and not required). Anyway, it looks like it's a lossy migration process.
@jerstlouis @m-mohr The core requirement of an OGC Process Description would be to port what is compatible, namely the critical parameters/inputs and returns/outputs definitions. For the openEO-specific properties such as process_graph and examples, those could be exposed through links or additional properties without breaking the OGC schema.
@m-mohr Right, as @fmigneault points out, the process_graph of a particular process could be made available (if the service does wish to expose the inner workings of the process) as a link to a definition of the process (the executionUnit of an application package). We do support that already in our implementation for ad-hoc execution, e.g., https://maps.gnosis.earth/ogcapi/collections/temp-exec-48D2606E/workflow , but could also have that for deployed processes (e.g., ...). The examples property is something that could already be added to the process description without breaking anything, and it could be something that gets specified in a later version to standardize the approach.
Could you please provide more details about this? To support the Part 3 approach of collection input / output, where a "collection / GeoDataCube" is a first class object, results should really always and only be outputs with no "storage" specified for them... things just flow out of the process and are requested / processing triggered using OGC API data access calls. |
While we might be able to translate it, why should we do it? We lose all the openEO clients and get OGC API - Processes clients, of which, honestly, I haven't seen a good example. Why not just allow different process flavours in /processes via conformance classes, as we pretty much do in GDC right now?
For the output you need to specify what format you want. This needs to be a parameter in openEO, as the return value just describes what you get; there's no choice. Everything the user needs to choose must be a parameter. As such, I also think your return values are a bit flawed, as the output format effectively is an input that the user may need to provide.
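For context, in an OGC API - Processes execution request the format choice sits in the optional `outputs` section rather than among the process inputs; a minimal sketch (the input and output names here are illustrative):

```json
{
  "inputs": {
    "data": { "href": "https://example.com/input.tif" }
  },
  "outputs": {
    "result": {
      "format": { "mediaType": "image/tiff; application=geotiff" }
    }
  }
}
```

In openEO the same choice is typically expressed as the `format` argument of `save_result`, which is why it surfaces as a process parameter there.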
There might be a different set of supported input / output formats for the deployed process (that would be reflected in the deployed process description). Executing the process might also result in slight differences in the outputs as a result of being converted by different tools. However, that same execution unit could still be deployed to those different implementations and work as intended, so I would not call that not portable.
I am of the opinion that this should be up to the implementation / profiles / deployment to decide.
Those are the cases I was mentioning for which specifying an output format in the execution request beforehand makes sense, but they could be presented as different outputs altogether, or even as different processes.
If all the conversion logic of I/Os is embedded into the execution unit of the process (or into separate processes in a workflow chain), there is essentially no reason for any corresponding process description to be different from one server to another. The execution unit would basically dictate what it is able to accept and produce. The process description only normalizes and abstracts away how those I/Os are represented and mapped to the execution unit, such that whether the execution unit is openEO, CWL, WPS, or a plain Docker container is irrelevant. Since conversion would be accomplished by exactly the same tools as pre/post-processing steps in a workflow chain, there should not be any difference in produced results. If there are variations (e.g., rounding errors due to floats), I would argue that is a misbehavior of the server due to poorly described I/Os in the process description.
While there might be some cases where such optimization would be beneficial, the logic required in those cases is so specific to each individual process, their I/O combinations, and their execution units that it again makes it impossible to automate. If the workflow becomes "locked" by this very specific conversion chain, because the specific I/O combinations and the server running it must be respected exactly to take advantage of such optimization, I believe this simply becomes a "very large process" where the logic is spread out across different sources instead of being properly isolated. Data lineage and reproducibility of the processes is greatly reduced, if not impossible. I would argue that if a process requires this kind of optimization, it would be easier and more portable for the execution unit to simply implement a "step1->step2" script directly (and all necessary conversions between those steps) to avoid the intermediate encode/save/load/decode. So again, from the point of view of the process description and API, there would not be any additional conversion logic, and that execution unit combining the steps could be executed on any server.
I am of the opinion that nothing in Part 2 should restrict the possibility to automatically support additional input/output formats, and thus automatically enhance the process description with additional format support compared to the execution unit's. Part 3 collection input / output in particular requires such outside machinery: collection input implies an OGC API client that needs to negotiate whatever formats and APIs are supported by the remote collection, which may not match the execution unit's, and collection output similarly needs to support clients that use different APIs and will negotiate formats matching the OGC API server implementation's support. Particular use cases or profiles may have a preference for the approach you mention, where there is a very thin layer between the Processes server and the executionUnit, but this approach should not be made mandatory by Part 2 (or that makes Part 2 incompatible with a lot of Part 3, such as collection input / output). Part 1: Core says nothing about this of course, because it is completely agnostic of the concept of execution units.
I'm not sure I understand what you are saying in that paragraph. In Part 3 workflows, the idea is definitely not to lock in any combination, and it does aim to facilitate preserving data lineage and reproducibility. However, it allows automatic negotiation of how things will play out (not involving the end-user client) at every hop between two nodes in the workflow (whether happening internally on the same server, or spread across two servers where one server acts as a client to the other).
I was also considering cases where the processing is spread across datasets and processes spread across different deployments (potentially of the same software having an affinity for a particular format). While it is possible to create a single process that implements the full workflow (whether the components are all on the same server, or involve external Processes implementations), this single process can be implemented as a chain of processes, and this workflow chain of processes can also be exposed as the source workflow.
I don't see what Part 2 or Part 3 have to do with how conversion logic should be encapsulated in the respective processes.
You seem to be describing exactly what I mentioned using small building blocks. My recommendation is that the shared machinery would simply be a process.
Again, that machinery could be a process. The "execution unit" of the workflow that needs to chain two processes with a collection could simply call an intermediate process that handles the collection access and conversion.
Even if "execution unit" is not explicitly in Part 1, the implementation will at some point run some code to do the processing. Call that however you want, but that code could be done in either of those methods:
My recommendation is again to go for the 3rd approach, because it can be ported to basically any other server (especially if the process was obtained via Part 2 deployment), without side effects or hidden logic coming from the API as in approach 1.
The idea was that if you are using, for example, a GPU to do some processing, and you want to leave the data in memory to allow it to be converted to something else, the conversion to be called would need very specific code to handle the GPU logic and the specific conversion strategy for the input/output data format. If another process used CPUs instead, the same code would probably not work directly. The same applies for other data formats that need adapted logic. In other words, you would need a very specific implementation for every possible use case. Therefore, my point was that if you do have a use case that benefits from this specific implementation, you might as well package it as a dedicated process. For all other cases, where the benefit of preserving the data in memory this way would be negligible, having dedicated processes that handle the conversion from one type to another, even if there are redundant save/load encode/decode steps between processes, would be much more scalable and portable across servers.
Specifically the Section 8: Collection Input and Section 11: Collection Output requirements classes. See also Section 6.2.5: Considerations for collection input / output. What you're describing is similar to the openEO approach that requires an explicit process to "load" something from a collection and "publish" a collection. With Collection Input and Output we can write a workflow like:

```json
{
"process" : "https://maps.gnosis.earth/ogcapi/processes/RFClassify",
"inputs" : {
"data" : { "collection" : "https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a" }
}
}
```

(from https://maps.gnosis.earth/ogcapi/processes/RFClassify/execution?response=collection) and access results, triggering processing, like:

https://maps.gnosis.earth/ogcapi/collections/temp-exec-2744D845/coverage/tiles/GNOSISGlobalGrid/12/1989/2656.tif (coverage tile)

https://maps.gnosis.earth/ogcapi/collections/temp-exec-2744D845/coverage/tiles/GNOSISGlobalGrid (coverage tileset)
The intent with Part 3 Collection Input / Output is specifically not to require that. Collection Output allows presenting OGC API Maps / Tiles / Coverages / EDR / Features / DGGS... as the front end, supporting content format/AoI/ToI/RoI/CRS/API negotiation on the output collection completely separately from the workflow definition. If you extend this Collection Input / Output mechanism to how servers talk to each other, the communication can also be done entirely in an OGC API data access way. The servers do not need to act as Processes clients for accessing results; they can instead use OGC API - Coverages requests to trigger processing on demand. They only need to POST the subset of the execution request intended for the remote server to that server's execution endpoint (e.g. requesting a collection response) when the workflow is first set up.
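As a sketch of that delegation, assuming a hypothetical outer process (`SomeAnalysis` on `example.org`) consuming the RFClassify result: the outer server would POST only the nested object under `data` to the gnosis.earth execution endpoint, then pull the parts it needs through regular OGC API data requests:

```json
{
  "process": "https://example.org/ogcapi/processes/SomeAnalysis",
  "inputs": {
    "data": {
      "process": "https://maps.gnosis.earth/ogcapi/processes/RFClassify",
      "inputs": {
        "data": { "collection": "https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a" }
      }
    }
  }
}
```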
I am probably missing something...
Exactly, but I would do it using CWL and Docker apps in my case, since this is what my server supports. There is however no need to "load" anything. The map/coverage tiles URL would simply be retrieved and passed down to the following process. To implement collection I/O on my server, I would simply create a collection-parser process that retrieves the relevant data from the collection. The distinction I am highlighting is that, if I wanted to understand how the collection gets resolved, I could look at that process definition rather than at hidden API machinery. To convert the output into a collection format, that would also be some kind of collection-maker process.
Yes :) With collection input / output in Part 3 workflows, the collection-parsing and collection-making is a pre-registration step that is done only once, when first registering the workflow. That only happens when you click the "Setup collection output" button. This validates that the entire workflow is valid and sets up negotiation of compatible APIs and formats between the different hop nodes (otherwise the client will get a 400 "Failure to validate workflow"). It makes all components aware that they will be working together in that pipeline and are ready to roll. All future requests for a specific AoI/ToI/RoI (or Tile or DGGS Zone Data) use that already registered workflow (which can span collections and processes across multiple servers), and only trigger the processing workflow chain (which does not involve any "parse collection" or "make collection" step) for the specific region/time/resolution being requested. It will not be creating any new resources (no POST methods; all resources already exist virtually and their content gets actualized/cached the first time it is requested with a GET, or beforehand if some server is preempting further requests).
This feels like a shortcut naming convention (which is OK), but it could still be defined explicitly with a workflow like:

```json
{
"process": "https://{some-server}/processes/collection-maker",
"inputs": {
"process" : "https://maps.gnosis.earth/ogcapi/processes/RFClassify",
"inputs" : {
"data" : {
"process" : "https://{some-server}/processes/collection-parser",
"inputs": { "source-collection": { "href": "https://maps.gnosis.earth/ogcapi/collections/sentinel2-l2a" } }
"outputs": {"image-found": {"format": {"mediaType": "image/tiff; application=geotiff" } } }
}
}
}
}
```

The obtained workflow would be validated in the same manner, and could be executed by a CWL or openEO approach.
Yes, you could do something like this, and that is using the Part 3 "Nested Processes" requirements class. But whether you execute it or deploy it, when that workflow is executed it will process the entire input collection, unless you add a parameter to restrict the execution request to an AoI/ToI/RoI. The whole point of Collection Input / Output is to have collections as first-class objects where you do not need to specify API/AoI/ToI/RoI/format, and it allows you to express the workflow in a manner agnostic of all this. The output collection exists for the whole spatiotemporal extent and resolution and all possible formats, and is accessible by regular OGC API clients like GDAL without having to actually process the whole thing, triggering processing on demand. What you're saying essentially is that you can do things without Collection Input / Output. And you can integrate implementations that support it with some that don't, either by using something like collection-maker / collection-parser, or by using an "href" pointing directly to the coverage output, for example.
Yes. Of course the example I provided is not complete. You would need additional parameters in collection-parser to restrict the AoI/ToI/RoI. What is important in that case is that I can easily map that nested OAP workflow structure to a CWL workflow representation. The same would be possible with an openEO processing graph after converting the nested structure accordingly. Depending on servers to somehow automatically negotiate/convert the types between steps greatly increases the chances of the workflow suddenly failing.
The idea of collection input / output is that you can represent the whole unfiltered input / output collections, preserving the ability to request small parts of them using OGC APIs. It's a late-binding mechanism for these configuration options. Because CWL (and openEO) do not have a notion of an OGC API collection as a first-class object, I don't think it would be possible to directly map a workflow making use of them to either. However, a server-side implementation of Collection Input / Output could decide to map the workflow to an internal openEO or CWL workflow, taking additional parameters for AoI/ToI/RoI/format/API (or, in the case of API or format, possibly selecting the appropriate helper processes for the task), which the Processes - Part 3 Collection Input implementation could map to. For responding to client requests for the not-fully-realized / on-demand Part 3 output collection, the Part 3 implementation would trigger that CWL or openEO workflow (which contains those extra processes, e.g., to load a particular subset and convert to a particular format that the client expects), filling in the AoI/ToI/RoI/format/API parameters of that workflow to respond. This would be some of the "extra machinery" of the API, but internally it could still use CWL directly or a pure Processes - Part 1 approach without the Collection Output first-class object.
The workflow validation step, which happens during registration, would already perform the negotiation, and the idea is to report the failure before any actual processing is done. The negotiation happening at every hop has the client (or the server acting as a client) looking at the server's conformance classes / capabilities and ensuring the server supports an API / format that works for the client side of the hop. It could also do further validation to make sure things work as expected. So if the workflow fails with another server, it will fail at the time of registering (POSTing the workflow to the execution endpoint, e.g. with response=collection).
My view is exactly the opposite. Not requiring a particular format at a specific step of a workflow greatly improves the chances that the client and server side of the hop can find common ground between their respective capabilities. E.g., if I enforce GeoTIFF and EDR at a particular hop, and either the client or the server does not support GeoTIFF, the workflow validation will fail. But if I leave that open, maybe they will find out that they both support JPEG-XL and OGC API - Tiles and can interoperate that way. Then I can take the same workflow and change one side of that hop, and this new hop is now able to operate with GeoTIFF and Coverages. Only the collection or process URL had to be changed in the workflow execution request; everything else stays exactly the same. As an end-user client, I don't have to bother figuring out which format / API each hop supports. I just discover compatible OGC API Collections and Processes and can easily assemble a workflow this way. If a hop is not interoperable (no common ground on API / format), this feedback is received as a workflow validation failure in the registration step, before trying to do any processing, and the workflow can be fixed.
I don't see why that would not be possible. Using an OGC API client that knows how to interact with the concept of an OGC API collection is no different from any other script. Whatever code runs to resolve the OGC workflow, even if doing late binding, could be converted to a CWL/openEO workflow dynamically, filling in any necessary conversion processes between steps, and then running it. I usually prefer to have static workflows with well-established steps and connections, so the user knows exactly what they are running, but dynamic resolution could still be supported.
Why not set them to use JPEG-XL directly? If they both support it, they should both advertise it, and it would be possible to align them this way right from the start. Maybe for very specific formats, allowing some flexible format matching could make sense, but I can see a lot of cases where that could have the opposite effect. For example, if the servers figure out that they both support ...
openEO has a load_collection process and recently also got an export_collection process. This is "first class" in openEO :-) But I feel like we are departing from the original issue. Was there a final conclusion regarding the other process encodings? Even if I'd try to convert openEO processes into a process summary encoding, the required version number would be missing. I'm also not quite sure yet what the metadata property is used for.
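For comparison, a minimal, hand-written openEO process graph using those first-class processes (the collection id and output format name are back-end specific and only illustrative; `save_result` is used here as the long-standing counterpart of `export_collection`):

```json
{
  "process_graph": {
    "load": {
      "process_id": "load_collection",
      "arguments": {
        "id": "SENTINEL2_L2A",
        "spatial_extent": null,
        "temporal_extent": null
      }
    },
    "save": {
      "process_id": "save_result",
      "arguments": {
        "data": { "from_node": "load" },
        "format": "GTiff"
      },
      "result": true
    }
  }
}
```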
We definitely want to allow for alternative encodings of processes; an OpenAPI encoding is one that I would be curious to experiment with. If it would be useful to have the same resource also available as a different encoding specific to openEO, that would require a different media type to negotiate it. Even if the openEO community standard ends up using ...
You mean the version required by processSummary? If that information is not available, that could just default to 1.0?
Probably departing indeed from the original issue, and we should move this to a new Collection Input / Output discussion issue, but: what I mean by "first class" is that there is a "collection" construct directly in the execution request and the resulting API, rather than an explicit load/export process. Though they are designed to effortlessly chain with each other, and could internally be implemented as processes, Collection Input and Collection Output are quite different beasts in terms of how they are defined: ...
@m-mohr I am curious to what extent your load_collection is equivalent to WHU's loadCube process as discussed in https://gitlab.ogc.org/ogc/T19-GDC/-/issues/57 ? (and @fmigneault similar question for your similar process) In their case, supporting Collection Input (which is about local collections -- Remote Collections is actually the equivalent for OGC API collections from external APIs, which requires the server acting as a client) would be as simple as internally converting:

```json
"data": {
"collection": "http://oge.whu.edu.cn/geocube/gdc_api_t19/collections/SENTINEL-2%20Level-2A%20MSI"
} to: "data": {
"process": "http://oge.whu.edu.cn/geocube/gdc_api_t19/processes/loadCube",
"inputs": { "cubeName": "SENTINEL-2 Level-2A MSI" }
}
```

The important aspect is that loadCube in this context does not mean requesting the entire collection -- only the spatiotemporal subset / resolution / fields relevant to satisfy the current requests (which may be coming in to the server as Collection Output client requests) need to be retrieved. But the initial handshake with the remote server can all be established at the time the execution request is initially submitted (which for Collection Output is only when the client first does a POST of the execution request, not for every coverage tile or subset requested later on). The Collection Input req. class accomplishes a few things: ...
SWG Meeting: 13-MAY-2024: There was some discussion in the SWG today about using OpenAPI as the process description language. Basically, you do a ...
Just FYI: that only makes sense if you expose processes as HTTP endpoints, which is not the case for openEO. And that was the initial question: can we have a conformance class that allows us to send openEO process descriptions via the GET /processes endpoint?
@m-mohr A bit confused by your last comment... Aren't there process description HTTP endpoints in openEO? I understand that there are no individual process execution endpoints in openEO.
Both of these are requirements: GET /processes and GET /processes/{processID}.
Allowing alternate negotiation formats for the process description/execution makes sense, but the APIs should at least provide the minimal endpoint requirements to allow these negotiations to take place.
We only have a single GET /processes endpoint, which describes all processes according to the process definition language; there's no GET /processes/:id yet.
SWG meeting from 2024-05-27: Add information about how we intend alternative process encodings to interact with /processes and /processes/{id}. This will be based on content negotiation. Wherever we describe a process description with regard to OGC API - Processes, include the media type. Expand the content of section 7.10 to include an example of an OGC process description and an example of an OpenAPI description.
Please keep in mind that for example both OGC API Processes and openEO use application/json as media type, so content negotiation might be difficult. |
@m-mohr the discussion was that we would define media types that were not the generic ones. So to get an OGC process list from /processes you would use something like ...
@pvretano Using something like a profile parameter (e.g. application/json;profile=...) could also work for this, without having to register a new media type for every flavour of JSON.
@fmigneault I said "something like". I am not proposing that exact media type. One question: is "profile" a valid parameter for the application/json media type?
Of course :) Just proposing ahead to consider this use case while I had it in mind. For what it's worth, I have seen browsers respect ... and I've also seen ...
@pvretano The media type ( https://www.iana.org/assignments/media-types/application/json ) does not define any parameters. I do believe we should stick to ... But I think what @fmigneault is pointing out is that the way negotiation by profile works is that you can always add that profile information on top of the media type negotiation.
In Part 1 I found the following sentence:
This means I'd expect that I could for example use the openEO process encoding in OAP.
Requirement 11 in "Core" says:
The processList.yaml, though, refers to processSummary.yaml, which has a very specific encoding in mind:
http://schemas.opengis.net/ogcapi/processes/part1/1.0/openapi/schemas/processSummary.yaml
Thus, I think there's a conflict in the specification that should be resolved. Are we allowed to return e.g. openEO processes in /processes?