Create a test suite #3
You mentioned you started with defining DID doc deltas. Do you have any test vectors for DIDs other than level 1 (I'm mainly interested in the level needed for DID Exchange)? There are some tricky parts in the spec that I think will be interpreted differently by different implementers. Method 0 is easy to test with did:key test vectors, but transforming method 2 DID documents to DIDs and vice versa, or generating the DID for a method 1 DID document, is trickier. If helpful, I can provide some documents based on our implementation once it's ready, but I'm not sure whether those are correct...
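For context on why method 0 is the easy case: a numalgo-0 peer DID reuses the did:key encoding, i.e. `did:peer:0z` followed by the base58btc encoding of the multicodec-prefixed public key. A minimal sketch (the `peer_did_numalgo0` function name and the use of a hand-rolled base58 encoder are illustrative assumptions, not part of any implementation discussed here):

```python
# Bitcoin-style base58 alphabet used by the "z" (base58btc) multibase prefix.
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Encode bytes as base58btc, preserving leading zero bytes as '1'."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    pad = len(data) - len(data.lstrip(b"\0"))
    return "1" * pad + out

def peer_did_numalgo0(ed25519_pubkey: bytes) -> str:
    """Build a numalgo-0 peer DID from a raw 32-byte Ed25519 public key.

    Hypothetical helper: prefixes the key with the ed25519-pub multicodec
    varint (0xed 0x01), then multibase-encodes with base58btc ('z').
    """
    prefixed = b"\xed\x01" + ed25519_pubkey
    return "did:peer:0z" + b58encode(prefixed)
```

Checking an implementation against did:key test vectors then amounts to swapping the `did:key:` prefix for `did:peer:0` and comparing strings.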
I believe that DSR's peer DID implementation in Python includes some test vectors. Pinging @DenisRybas, who I believe knows about that.
Yes, we have some test vectors for methods 0 and 2 in our implementation. Here is the link: https://github.com/sicpa-dlab/peer-did-python/blob/main/tests/test_vectors.py
This is really helpful, thanks! Spotted a few mistakes thanks to this :) |
@dhh1128 and @DenisRybas do you know if there are any test vectors available for method 1? I'm having a hard time determining whether my implementation is correct without being able to verify it against something. |
1. Define a strategy. (We currently have some sample data checked in: well-formed and invalid peer DIDs. I started to create well-formed and invalid DID doc deltas as well, but I can't remember how far I got. The intent is that a test suite would consume all the well-formed peer DIDs and prove that an impl considers them well-formed, consume all the invalid DIDs and confirm that an impl complains, and consume all the deltas and produce the defined results when doing a resolution. Thus the data is the main driver; all we should have to do is drop new data files into the appropriate folders, and whatever test suite we write will get more robust. Perhaps this strategy needs improving; I haven't thought it through deeply. But I like the fact that it's not unduly tied to a particular test technology.)
2. Implement the suite.
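The data-driven strategy above could be sketched roughly as follows. This is a minimal illustration, not the repo's actual suite: the folder layout (one DID per file, separate directories for well-formed and invalid vectors) and the `is_well_formed_peer_did` validator are assumptions; a real implementation would apply the full peer DID ABNF from the spec.

```python
import os

def is_well_formed_peer_did(did: str) -> bool:
    """Placeholder validator (assumption): a real impl would check the
    peer DID ABNF, not just the prefix."""
    return did.startswith("did:peer:")

def load_vectors(folder: str):
    """Yield (filename, contents) for every vector file in a folder."""
    for name in sorted(os.listdir(folder)):
        with open(os.path.join(folder, name)) as f:
            yield name, f.read().strip()

def run_suite(valid_dir: str, invalid_dir: str) -> list:
    """Consume all vectors; return a list of (filename, reason) failures.

    Adding a new data file to either folder automatically extends the
    suite, which is the point of the data-driven design.
    """
    failures = []
    for name, did in load_vectors(valid_dir):
        if not is_well_formed_peer_did(did):
            failures.append((name, "expected well-formed, impl rejected it"))
    for name, did in load_vectors(invalid_dir):
        if is_well_formed_peer_did(did):
            failures.append((name, "expected invalid, impl accepted it"))
    return failures
```

The same pattern would extend to the delta vectors: pair each delta file with an expected resolution result and assert the resolver produces it.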