TLDR
We need a bunch of examples from real-world OpenAPI specs to test the following things:
Discover which pagination style is used and which fields to look at when paginating
Discover the correct primary key of an object
Discover the JSON path under which to find the "real" result objects of an endpoint, for both list and single-entity endpoints
Discover the parent/child relationships between endpoints and the mapping of URL parameters to a field on a parent object
Discover the correct auth setup (I think this is very well defined in the OpenAPI spec, so we do not need many examples, but you could still verify that it is detected correctly)
Still produce useful output from the generator if the spec is incomplete or broken
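All of these detections end up as fields in the dict-based rest_api configuration the generator emits. A rough sketch of what that dict could look like, assuming key names along the lines of dlt's rest_api config (the exact keys and the example API are illustrative, not definitive):

```python
# Hypothetical output of the generator for a fictional API.
# Each comment marks which detection from the list above fills in that part.
config = {
    "client": {
        "base_url": "https://api.example.com/v1/",
        # auth setup, derived from the spec's securitySchemes
        "auth": {"type": "bearer", "token": "..."},
        # pagination style plus the fields to look at when paginating
        "paginator": {
            "type": "offset",
            "limit": 100,
            "offset_param": "offset",
            "limit_param": "limit",
        },
    },
    "resources": [
        {
            "name": "customers",
            # detected primary key of the object
            "primary_key": "id",
            "endpoint": {
                "path": "customers",
                # JSON path under which the "real" result objects live
                "data_selector": "data.items",
            },
        },
        {
            "name": "orders",
            "endpoint": {
                # parent/child relationship: the URL parameter is mapped
                # to a field on the parent resource's objects
                "path": "customers/{customer_id}/orders",
                "params": {
                    "customer_id": {
                        "type": "resolve",
                        "resource": "customers",
                        "field": "id",
                    },
                },
            },
        },
    ],
}
```

Every wrongly detected value in a dict like this is a candidate for one minimal test case.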
Preparation
Basically, this generator project converts an OpenAPI spec into a dlt source that uses our new rest_api source, which is configured with a dict. To prepare, you need to:
Understand how our rest_api source works. Since you did the 2nd hackathon you should already know it, I think, but it's good to recapitulate anyhow.
Try out this tool on one or two specs. Check the makefile to see how to generate a pipeline from a URL (the endpoint selector currently does not work; I'll fix that in a bit). You can also use the --path option to generate from a local file.
Task
I would start by looking at the specs of the biggest APIs, converting them with this tool, and seeing what the outcome is. Then:
If something is not detected correctly (which at this point is very likely), extract a minimal example with which we can test exactly this one thing, with nothing else in it that is not needed.
So for example, if the JSON path for the results is not detected correctly, we need a minimal OpenAPI definition with one endpoint and this incorrectly detected object structure as a result, nothing more.
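Such a minimal case could look like the sketch below: a hypothetical spec with a single endpoint whose result objects are nested under data.items. It is written here as a Python dict for readability; the actual test case would be a YAML or JSON file in the repo.

```python
# Hypothetical minimal OpenAPI definition for a results-jsonpath test case.
# The only thing it exercises is the nested location of the result objects.
minimal_spec = {
    "openapi": "3.0.0",
    "info": {"title": "jsonpath-case", "version": "1.0.0"},
    "paths": {
        "/items": {
            "get": {
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {
                                        # the "real" result objects are buried
                                        # one level down, under data.items
                                        "data": {
                                            "type": "object",
                                            "properties": {
                                                "items": {
                                                    "type": "array",
                                                    "items": {
                                                        "type": "object",
                                                        "properties": {
                                                            "id": {"type": "integer"}
                                                        },
                                                    },
                                                }
                                            },
                                        }
                                    },
                                }
                            }
                        },
                    }
                }
            }
        }
    },
}
```

The expected detection here would be a data selector of data.items; anything else means the generator got it wrong for this shape.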
Add a new folder in tests/cases for incoming examples, and think of a subfolder structure that makes sense to you for organizing them. You can also add a comment in the OpenAPI file about what does not work, but I will probably be able to figure it out.
Put it all on a branch with a PR so that I can have a look at how it's going. If you have any questions, ask :)
Also keep the full OpenAPI specs that you collected in a separate folder.