Replies: 2 comments
-
Hello @gjb1002, below are our responses.
Thanks for reporting this. We were able to reproduce the issue based on your input and have filed a separate issue (#860) to track it.
As you may have already guessed, the accuracy of the proxy depends on the volume and variety of traffic that passes through it. With a single request we have the options below.
Also, in our experience, treating all fields as required to begin with has been preferred by the teams we have worked with, because it gives clearer feedback (it fails noisily). Please let us know if we are missing something.
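For illustration, from a single observed response the generated schema would list every field it saw as required, roughly like this (a hand-written sketch with made-up field names, not actual proxy output):

```yaml
# Sketch of a response schema inferred from one observed message.
# Every field that appeared is treated as required, so a provider
# that later drops one of them fails the contract noisily.
components:
  schemas:
    Product:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        sku:
          type: string
      required:
        - id
        - name
        - sku
```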
Thanks again. Issue #859 has been filed to track this.
Yes, with API-first design the OpenAPI specification indeed becomes a mechanism for stakeholders to collaborate on and capture the API design, and there are several tools that aid in authoring OpenAPI specifications.
We did not understand this point. Could you please elaborate on what you mean by "written your own HTTP client from scratch"?
In our experience this has not been the case. We have consistently identified holes in specifications that were generated from code. The quality (correctness and completeness) of generated specifications varies from tool to tool and is also subject to human error. The only way we have been able to conclusively avoid mismatch between specification and implementation is to run specifications as tests. Furthermore, specification generation requires the provider to be built first; the OpenAPI spec is then generated and shared with consumers for adoption. This is a sequential style of development, whereas in our experience teams strongly prefer parallel development. There are several other such practical issues with specification generation, which I have captured in this InfoQ article where I talk about a real-world adoption journey.
-
Regarding my observations on the auto-generated contract: both the ubiquitous required fields and the mix of application and HTTP protocol information made me wonder what the aim was here. If you produce a contract purely by observation, at what point can you actually use it as a contract? I guess I just haven't really understood what the point of this artefact is or what it will be used for - for me contracts are something you design, not something you discover by observation. If something fails, is that because the contract should have banned it or because the provider code has a bug in it, and how can you tell this automatically?

My comment on writing your own HTTP client was merely in response to this "contract" that had specified my application's behaviour and most of the details of the HTTP protocol at the same time. This would be useful if I had written my own HTTP client and thus really wanted to test both of these things together, but I couldn't see why I would. I guess you've seen this as you created an issue for it, so that's all good.

Regarding generating contracts from code - thanks for the links, it was good to have this clarification. As I understand it you are actually recommending a break with established practice here ("Contract-Driven Development"), not just a new tool to better implement existing practice. An interesting point then would be to separate out to what extent CDD can be done without Specmatic, to what extent Specmatic is useful without switching to CDD, or whether they are irretrievably linked to one another.

You raise two issues - that contracts can contain "holes" and that tooling isn't perfect. On holes: of course you can leave out important aspects of a contract by failing to specify parts of it (ignoring required markers etc.), but surely that is true however you specify it. And indeed all contracts ever contain holes of some kind, because it's usually not practical to specify, for example, that if field "vehicletype" contains "bicycle" then field "wheelcount" can only contain numbers up to 3, while if it is "car" it can only be 4 (or perhaps 3...) - see the sketch at the end of this comment. There is always some combination of input that is allowed by the contract but that the provider does not handle. Autogeneration at least does not allow a field to be an integer in one place and a string in another, even if it cannot tell you what you missed.

And yes, tooling isn't perfect, but my testing practices have always assumed that I am trying in the first place to test my code, not all of its dependencies through third-party libraries and down to the compiler. It becomes a much harder problem if you assume that none of your world can be relied upon. I feel it is important to focus on where the problems are most likely to occur. It is far more likely that I just screwed up than that I found a bug in my compiler :)
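To make that bicycle/car rule concrete, here is roughly what it would have to look like if you did try to capture it in an OpenAPI schema (my own sketch, just to illustrate why nobody usually bothers to write this kind of thing):

```yaml
# Sketch of a cross-field rule: the allowed wheelcount depends on vehicletype.
# Spelling out every such rule quickly becomes impractical, so real contracts
# always leave holes like this.
VehicleReport:
  oneOf:
    - type: object
      properties:
        vehicletype:
          type: string
          enum: [bicycle]
        wheelcount:
          type: integer
          maximum: 3
    - type: object
      properties:
        vehicletype:
          type: string
          enum: [car]
        wheelcount:
          type: integer
          minimum: 3
          maximum: 4
```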
-
I am trying to use "smart mocking" on a dependency in a toy example.
So I ran a "specmatic proxy" instance as a man in the middle, and it recorded a stub file and also a contract for me.
I then tried to use a "specmatic stub" instance on this recorded data and run the same test without my dependency, but it failed with errors like:
...stub0.json didn't match ...proxy_generated.yaml
Error from contract ...proxy_generated.yaml
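For reference, the recorded stub is (as far as I can tell) a JSON file that simply pairs the request the proxy saw with the response it got back, something like this (values simplified from my toy example, so treat the exact keys and fields as illustrative):

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products",
    "query": { "type": "gadget" }
  },
  "http-response": {
    "status": 200,
    "headers": { "Content-Type": "text/plain" },
    "body": "ok"
  }
}
```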
The generated contract is also a mix: some of it describes the message it has seen, but quite a lot just documents standard HTTP headers that are presumably the same every time in all systems. Why would it care about auto-generated headers like Server, Date etc. that no user code has gone anywhere near? A single HTTP GET request with one parameter and a string response produced a 68-line "contract" file, which makes me wonder what happens with real APIs.
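For example, the response section ends up documenting things like this (my own rough sketch of the shape, not a copy of the generated file):

```yaml
# Sketch of the protocol-level detail the generated contract captures,
# alongside the one application value I actually care about.
responses:
  "200":
    description: Response recorded by the proxy
    headers:
      Server:
        schema:
          type: string
      Date:
        schema:
          type: string
    content:
      text/plain:
        schema:
          type: string
```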
I have experience of Swashbuckle (C#) and APIFlask (Python), and in both cases the approach is to write and annotate the code in such a way that the OpenAPI document is auto-generated from it. It is therefore not possible for the code and the spec to be out of sync with each other - there is only one source of truth.
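For example, with APIFlask the route handlers and schemas are the only things you write, and the OpenAPI document is generated from them (a minimal sketch from memory, so the details may be slightly off):

```python
from apiflask import APIFlask, Schema
from apiflask.fields import Integer, String

app = APIFlask(__name__)

class ProductOut(Schema):
    id = Integer()
    name = String()

@app.get('/products/<int:product_id>')
@app.output(ProductOut)
def get_product(product_id):
    # The schema declared above is what ends up in the generated OpenAPI document.
    return {'id': product_id, 'name': 'widget'}

# The framework itself serves the spec (at /openapi.json by default),
# so it cannot drift from the code.
```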
Maybe this was an issue in the past, but it seems impossible with modern tools such as the above. So it seems a bit strange when the very first documentation encourages us to investigate this: "Now lets leverage Specmatic to run the above specification as a test against the Provider / API to see if it is adhering the OpenAPI Specification." and then encourages us to hack it by hand to deliberately break it and see the effects. Doing that might be useful if you're testing Swashbuckle or APIFlask, but not if you want to test your own application code, which presumably most people do.