# Enable reporting solution #186
Hi @PatStLouis,

Yes, we agree. The intent was to collect all of the reports from the CLI tools within this repository and produce a final report (similar to what is produced by the w3c-vcdm-v2-test-suite), which would suffice. Our thinking has changed since then. The early feedback we received was that the CLI tools were difficult to use and not accessible to non-developers. Based on that feedback, we have moved to a website (the UNTP Playground) where implementors can upload their credentials, which I believe you're familiar with. Uploading a credential to the UNTP Playground triggers a collection of test cases (not run through a testing framework), displays the results, and presents helpful hints if something is wrong.

Walking this path, we knew we would still require some type of report summarising the results, the credentials tested and associated metadata. I'm currently working through this so that we have something 80-90% defined and can begin implementation over the next couple of weeks, with a completion date before the end of Feb 2025.

I've looked at the reporting framework you have proposed, and it looks neat, especially the ability to see the history of previous runs and track conformance over time. I agree that this or a similar tool would be an excellent candidate for our CLI test suites. However, I'm unsure how it could work within the UNTP Playground, as we aren't using a test framework for test case execution. Being new to Allure, I might be mistaken, so if you feel it can be done, please let us know. I feel that something like Allure incorporated into the CLI test suites (test harness) is perfect for implementors during the development phase, while the UNTP Playground is the accessible interface that would produce the final report of conformance presented to the UNTP community for formal endorsement.
Below, I've documented my understanding of the reporting components and the short-term objectives I believe we would like to achieve. Note that there is a bias towards issuing the report as a VC, or at least moving in that direction (dependent on time constraints).

## Proposal: Conformance Reporting in the UNTP Playground

### Objective

We need to introduce a mechanism for generating UNTP conformance reports within the deployed instance of the UNTP Playground. The ultimate goal is to provide a tangible record of which test cases were run against a given credential (or set of credentials), the results of those tests, the credential(s) tested, and the implementor's overall compliance with the UNTP specification.

### Why We Need Reporting
### What the Report Should Include

1. Credential(s) Tested
2. Test Cases & Outcomes
3. Implementer/Submitter Metadata
### Potential Approaches for Generating the Report

1. Template-Based Rendering (Handlebars or Similar)
2. JSON Export
3. On-The-Fly Construction
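To make option 2 concrete, here is a minimal sketch of what a JSON export could look like. It is written in Python purely for illustration (the Playground itself is TypeScript), and all field names (`reportType`, `credentialsTested`, and so on) are assumptions, not a settled UNTP schema.

```python
import json
from datetime import datetime, timezone

def build_report(credentials, results, implementer):
    """Assemble a conformance report dict. Field names are illustrative only."""
    return {
        "reportType": "UNTPConformanceReport",   # hypothetical type name
        "generatedAt": datetime.now(timezone.utc).isoformat(),
        "implementer": implementer,              # submitter metadata
        "credentialsTested": credentials,        # the credential(s) tested
        "testCases": results,                    # per-test outcomes
        "summary": {
            "total": len(results),
            "passed": sum(1 for r in results if r["status"] == "passed"),
        },
    }

report = build_report(
    credentials=[{"id": "urn:example:credential:1", "type": "DigitalProductPassport"}],
    results=[{"name": "schema-validation", "status": "passed"}],
    implementer={"name": "Example Corp"},
)
print(json.dumps(report, indent=2))
```

The same structure could be fed directly into a Handlebars-style template for option 1, so the two approaches are complementary rather than exclusive.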
### Potential Approaches for Storing/Sharing the Report

1. Downloadable File
2. Issue on the UNTP Specification Repository
3. Verifiable Credential
### Key Consideration: Integrity of the Report

If the report is intended to be an official conformance artefact, we need to consider preventing tampering.
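One lightweight way to address tampering, short of a full Verifiable Credential proof, is to canonicalise the report and attach a keyed digest that the Playground can later verify. The sketch below uses an HMAC as a stand-in for a real Data Integrity proof (e.g. Ed25519), and sorted-key JSON as a stand-in for proper canonicalisation such as JCS (RFC 8785); both choices are assumptions made for illustration only.

```python
import hashlib
import hmac
import json

def canonical_bytes(report: dict) -> bytes:
    # Stand-in canonicalisation: sorted-key compact JSON. A real implementation
    # would use JCS (RFC 8785) or VC Data Integrity canonicalisation.
    return json.dumps(report, sort_keys=True, separators=(",", ":")).encode()

def attach_proof(report: dict, secret: bytes) -> dict:
    digest = hmac.new(secret, canonical_bytes(report), hashlib.sha256).hexdigest()
    return {**report, "proof": {"type": "HmacSha256StandIn", "digest": digest}}

def verify_proof(signed: dict, secret: bytes) -> bool:
    body = {k: v for k, v in signed.items() if k != "proof"}
    expected = hmac.new(secret, canonical_bytes(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["proof"]["digest"])

secret = b"demo-secret"  # in practice, a key held by the Playground deployment
signed = attach_proof({"reportType": "UNTPConformanceReport", "passed": 12}, secret)
assert verify_proof(signed, secret)

# Any change to the report body invalidates the digest.
tampered = {**signed, "passed": 13}
assert not verify_proof(tampered, secret)
```

Moving from this to option 3 (a Verifiable Credential) would mean replacing the HMAC with an issuer-held signing key and wrapping the report in a VC envelope, so the verification step works for anyone, not just the holder of a shared secret.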
@ashleythedeveloper I like the approaches suggested; option 1 seems the most convenient as a first step. For the integrity issue, Open Badges are great and are based on Verifiable Credentials. What comes to mind is to put a static DID document in the tests-untp repo and keep a seed in the GitHub secrets; a GitHub Action could then sign the credentials with that issuer value. Another thing that might be interesting (which is where Allure comes into play) is public reporting, where implementers can deposit their implementations and public reports are run weekly. This could be used as a discovery interface / interop starting point. I think we can start by having a JSON report that can be fed to a template, as you suggested.
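A rough sketch of the signing step such a GitHub Action could run. The `ISSUER_SEED` variable name, the `did:web` value, and the HMAC signature are all placeholders; a real implementation would derive an Ed25519 key from the seed injected via `${{ secrets.* }}` and produce a proof that resolves against the static DID document in the repo.

```python
import hashlib
import hmac
import json
import os

# Hypothetical variable name; a GitHub Action would inject it from repo
# secrets, e.g.  env: ISSUER_SEED: ${{ secrets.ISSUER_SEED }}
seed = os.environ.get("ISSUER_SEED", "local-dev-seed").encode()

def sign_report(report: dict, issuer_did: str) -> dict:
    payload = json.dumps(report, sort_keys=True).encode()
    signature = hmac.new(seed, payload, hashlib.sha256).hexdigest()
    return {
        "issuer": issuer_did,    # would resolve to the static DID document
        "report": report,
        "signature": signature,  # HMAC stand-in for a real Ed25519 proof
    }

# "did:web:example.org" is a placeholder, not a real issuer identifier.
signed = sign_report({"passed": 10, "failed": 0}, "did:web:example.org")
print(json.dumps(signed, indent=2))
```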
It will become a necessity for implementers to have a report available that they can refer to about their samples/implementations.
I would recommend starting to look at integrating an Allure reporter into the test runs. This will ensure that the results can be exported and stored on an Allure server. Having experimented with many testing frameworks, I've found Allure to be by far the most interoperable, and it has a supported reporter for most major testing languages.
A quick look at their supported frameworks confirms this.