
Enable reporting solution #186

Open
PatStLouis opened this issue Dec 13, 2024 · 3 comments

@PatStLouis

It will become a necessity for implementers to have a report they can refer to for their samples/implementations.

I would recommend starting to look at integrating an Allure reporter into the test runs. This will ensure that the results can be exported and stored in an Allure server. Having experimented with many testing frameworks, I find Allure by far the most interoperable, and it has a supported reporter for most major testing languages.

A quick look at their supported frameworks will validate this.

@ashleythedeveloper
Collaborator

ashleythedeveloper commented Jan 21, 2025

Hi @PatStLouis,

Yes, we agree. The original intent was to collect all of the reports from the CLI tools within this repository and produce a final report (similar to what the w3c-vcdm-v2-test-suite produces), which would have sufficed. Our thinking has changed since then: the early feedback we received was that the CLI tools were difficult to use and not accessible to non-developers.

Based on that feedback, we have since moved to a website (UNTP Playground) where implementers can upload their credentials, which I believe you’re familiar with. Uploading credentials to the UNTP Playground triggers a collection of test cases (not using a testing framework), displays the results, and presents helpful hints if something is wrong.

Walking this path, we knew we would still require some type of report summarising the results, the credentials tested, and related metadata. I’m currently working through this so that we have something 80-90% defined and can begin work over the next couple of weeks, with a completion date before the end of Feb 2025.

I’ve looked at the reporting framework you have proposed, and it looks neat—especially the ability to see the history of your previous runs and track conformance over time.

I agree that this or a similar tool would be an excellent candidate for our CLI test suites. However, I’m unsure how this could work within the UNTP Playground, as we aren’t using a test framework for test case execution. Being new to Allure, I might be mistaken, so if you feel it can be done, please let us know.

I feel that something like Allure incorporated into the CLI test suites (test harness) is perfect for implementers during the development phase, while the UNTP Playground is the accessible interface that would produce the final report of conformance to be presented to the UNTP community for a formal endorsement.

@ashleythedeveloper
Collaborator

Below, I’ve documented my understanding of the reporting components and the short-term objectives I believe we would like to achieve. Note that I have a bias towards issuing the report as a VC, or at least moving in that direction (dependent on time constraints).

Proposal: Conformance Reporting in the UNTP Playground

Objective

We need to introduce a mechanism for generating UNTP conformance reports within the deployed instance of the UNTP Playground. The ultimate goal is to provide a tangible record of which test cases were run against a given credential (or set of credentials), the results of those tests, the credential(s) tested, and the implementer’s overall compliance with the UNTP specification.

Why We Need Reporting

  1. Proof of Conformance: Implementers of the UNTP specification want to show that the credentials they produce conform to a specific version of the UNTP spec.
  2. Version-Specific Validation: Because multiple credential versions exist, the report should clearly indicate which credentials and corresponding version(s) were tested.
  3. Traceability: A persistent record is useful both for internal QA and for providing evidence to the UNTP Working Group if the implementer seeks formal endorsement.

What the Report Should Include

1. Credential(s) Tested

  • Type of credential(s) (e.g., DigitalProductPassport)
  • Version (e.g., v0.5.0)
  • Security mechanism used (e.g., enveloping proof)
  • The credential tested
  • Any additional relevant metadata

2. Test Cases & Outcomes

  • List of Test Cases: The specific checks, validations, or conformance criteria
  • Results: For each test case, an outcome (pass/fail)
  • Additional Data: Potentially helpful logs, error messages, or debugging info

3. Implementer/Submitter Metadata

Potential Approaches for Generating the Report

1. Template-Based Rendering (Handlebars or Similar)

  • As the user runs through test cases, we construct an object containing the test results
  • Then we populate a Handlebars (or similar) template with this object to produce a human-readable document (see the sketch below)
  • Allows for easy styling and can be transformed into a downloadable PDF/HTML
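
A minimal sketch of this approach, assuming a plain result object (the result shape, field names, and template here are illustrative, not the Playground’s actual types):

```ts
import Handlebars from "handlebars";

// Illustrative result shape; the Playground's real types may differ.
interface TestResult {
  name: string;
  passed: boolean;
}

// A deliberately tiny template; real markup/styling would be richer.
const source = `
<h1>UNTP Conformance Report</h1>
<p>{{credentialType}} ({{version}})</p>
<ul>
  {{#each results}}
  <li>{{name}}: {{#if passed}}PASS{{else}}FAIL{{/if}}</li>
  {{/each}}
</ul>`;

const template = Handlebars.compile(source);

const results: TestResult[] = [
  { name: "Schema validation", passed: true },
  { name: "Proof verification", passed: false },
];

const html = template({
  credentialType: "DigitalProductPassport",
  version: "v0.5.0",
  results,
});
// `html` can be shown in the UI or fed into an HTML-to-PDF step.
```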

2. JSON Export

  • Simply export the object containing all test results in JSON form (a possible shape is sketched below)
  • Straightforward but less user-friendly
  • Still useful as an internal artifact or for further automation
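
For illustration, one possible shape for the exported object (all field names hypothetical), plus the standard browser mechanics for offering it as a download:

```ts
// Hypothetical report shape; field names are illustrative only.
const report = {
  specVersion: "UNTP v0.5.0",
  generatedAt: new Date().toISOString(),
  credentials: [
    {
      type: "DigitalProductPassport",
      version: "v0.5.0",
      securityMechanism: "enveloping proof",
      results: [{ testCase: "schema-validation", outcome: "pass" }],
    },
  ],
};

// Standard browser APIs to offer the JSON as a downloadable file.
const blob = new Blob([JSON.stringify(report, null, 2)], {
  type: "application/json",
});
const url = URL.createObjectURL(blob);
```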

3. On-The-Fly Construction

  • Build a report object or markup as each test case is executed (e.g. similar to what the w3c-vcdm-v2-test-suite is doing); see the accumulator sketch below
  • Potentially feed this into a UI or downloadable file once testing completes
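
A rough sketch of that accumulation (illustrative only, not the Playground’s actual implementation):

```ts
// Illustrative accumulator that each test case reports into as it runs.
type Outcome = "pass" | "fail";

interface ReportEntry {
  testCase: string;
  outcome: Outcome;
  detail?: string;
}

class ReportBuilder {
  private entries: ReportEntry[] = [];

  record(testCase: string, outcome: Outcome, detail?: string): void {
    this.entries.push({ testCase, outcome, detail });
  }

  build() {
    return { finishedAt: new Date().toISOString(), entries: this.entries };
  }
}

const builder = new ReportBuilder();
builder.record("context-validation", "pass");
builder.record("schema-validation", "fail", "missing required property");
const report = builder.build(); // feed into the UI or a download
```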

Potential Approaches for Storing/Sharing the Report

1. Downloadable File

  • User simply downloads the report (in HTML, PDF, or JSON)
  • Pros: Straightforward
  • Cons: Integrity concerns (the user can modify it after download, and its origin is not verifiable)

2. Issue on the UNTP Specification Repository

  • Once the user finishes testing, the Playground automatically creates a GitHub Issue on the UNTP spec's repo, embedding the report data in the issue (see the sketch below)
  • Pros: Transparent and easy to track; facilitates collaboration and official endorsement
  • Cons: Requires some automation, will require a captcha to prevent abuse, and raises potential issues with multiple submissions
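
If this route were taken, the automation could look roughly like the sketch below, using the GitHub REST API via Octokit (the owner/repo values are placeholders, and the token would have to live server-side):

```ts
import { Octokit } from "@octokit/rest";

// Hypothetical report produced by the Playground's test run.
const report = { credential: "DigitalProductPassport v0.5.0", outcome: "pass" };

// The token must come from a server-side secret, never the browser.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

await octokit.rest.issues.create({
  owner: "uncefact", // placeholder owner
  repo: "spec-untp", // placeholder repository
  title: "Conformance report: DigitalProductPassport v0.5.0",
  body: JSON.stringify(report, null, 2),
});
```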

3. Verifiable Credential

  • Issue the report itself as a verifiable credential to ensure integrity and tamper-evident properties (a hypothetical shape is sketched below)
  • Pros: High integrity, cryptographically verifiable
  • Cons: Requires additional implementation work (e.g., deploying VCKit or another VC solution)
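
To make that concrete, a hypothetical unsigned conformance-report credential might look like the following (the `UNTPConformanceReport` type and the credentialSubject fields are invented for illustration):

```ts
// Hypothetical unsigned credential; type and subject fields are illustrative.
const reportCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "UNTPConformanceReport"],
  issuer: "did:web:uncefact.github.io:tests-untp",
  validFrom: new Date().toISOString(),
  credentialSubject: {
    credentialTested: { type: "DigitalProductPassport", version: "v0.5.0" },
    results: [{ testCase: "schema-validation", outcome: "pass" }],
  },
};
// An issuance service (e.g. a VCKit instance) would sign this into a VC.
```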

Key Consideration: Integrity of the Report

If the report is intended to be an official conformance artefact, we need to consider preventing tampering.
Approaches:

  • Verifiable Credential: Requires setting up issuance logic (e.g., a VCKit instance)
  • Publishing Directly to GitHub: The repository's commit history protects authenticity (harder to tamper with)
  • Local Download: Easiest, but has no baked-in integrity; the user can modify the file unless it carries cryptographic integrity protection

@ashleythedeveloper ashleythedeveloper self-assigned this Jan 21, 2025
@PatStLouis
Author

@ashleythedeveloper I like the approaches suggested; option 1 seems the most convenient as a first step.

For the integrity issue, OpenBadges are great and based on Verifiable Credentials.

What comes to mind is to put a static DID document in the tests-untp repo and keep a seed in the GitHub secrets. A GH Action could then sign the credentials with the issuer value did:web:uncefact.github.io:tests-untp (rough sketch below).
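
For reference, per the did:web method spec that issuer DID resolves to a static file the repo’s GitHub Pages site can serve, which is what makes the static-DID-document approach workable:

```ts
// did:web resolution: strip the prefix, turn ':' into '/', percent-decode,
// and append /did.json (since this DID carries a path segment).
const did = "did:web:uncefact.github.io:tests-untp";
const didDocumentUrl =
  "https://" +
  did.slice("did:web:".length).split(":").map(decodeURIComponent).join("/") +
  "/did.json";
// -> "https://uncefact.github.io/tests-untp/did.json"
// A GH Action holding the seed in secrets could sign reports with keys
// published in that DID document.
```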

Another thing that might be interesting (where Allure comes into play) is to have public reporting, where implementers can deposit their implementations and public reports are run weekly. This could be used as a discovery interface / interop starting point.

I think we can start by having a JSON report that can be fed to a template like you suggested.
