🧪 Testing of third-party device implementations #112

Open
burgholzer opened this issue Dec 3, 2024 · 8 comments
Labels
documentation (Improvements or additions to documentation) · usability (Increasing usability of the library)

Comments

@burgholzer
Contributor

Anyone implementing the QDMI device specification will want to know whether they have implemented it correctly, so that the distributed shared library can be expected to work.
It would be good to set up infrastructure (tests + documentation) that shows how best to do this and provides some support along the way.
In the best-case scenario, we arrive at a setup where passing a set of tests implies (with high probability) that the device implementation is compliant with the interface definition.

In principle, I see two (non-exclusive) options here:

  • Testing of (open-source, git-hosted) downstream projects as part of this repository. We could check out the respective repository, build the device, and hook it into the example QDMI driver to run the tests. This naturally only works if the repository for the device implementation is open source.
  • Providing more elaborate testing as part of the QDMI device template, potentially including a GitHub Workflow to test the device in CI. This even allows private device implementations to properly test their code.

For both options, the biggest question is how much we can reliably test without knowing any details about the device. We could definitely perform a lot of sanity checks, e.g., correct error behaviour as specified in the interface.
I believe it should also be possible to test the query interface without knowing too much about the device, e.g., that certain properties must be provided.
The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.
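To make this a bit more concrete, here is a minimal sketch of what such a device-agnostic sanity test could look like, assuming GoogleTest. The function name, property identifier, and status codes are placeholders standing in for a device's query interface (with a small stub in place of the real device library so the sketch is self-contained); they are not the actual QDMI API.

```cpp
// Hypothetical device-agnostic sanity tests. All identifiers below are
// placeholders, not the actual QDMI API; a real test suite would link
// against the device library instead of the stub.
#include <gtest/gtest.h>

#include <cstddef>
#include <string>

constexpr int kPropertyName = 0;     // placeholder property identifier
constexpr int kSuccess = 0;          // placeholder status codes
constexpr int kInvalidArgument = -1;

// Stub standing in for the device under test.
int Device_query_property(int property, std::size_t size, void* value,
                          std::size_t* size_ret) {
  static const std::string kName = "example device";
  if (property != kPropertyName || (value == nullptr && size_ret == nullptr)) {
    return kInvalidArgument;
  }
  if (value != nullptr) {
    if (size < kName.size() + 1) {
      return kInvalidArgument;
    }
    kName.copy(static_cast<char*>(value), kName.size());
    static_cast<char*>(value)[kName.size()] = '\0';
  }
  if (size_ret != nullptr) {
    *size_ret = kName.size() + 1;
  }
  return kSuccess;
}

// A required property must be retrievable via the usual two-step pattern:
// query the required buffer size first, then the actual value.
TEST(QueryInterface, RequiredPropertyIsProvided) {
  std::size_t size = 0;
  ASSERT_EQ(Device_query_property(kPropertyName, 0, nullptr, &size), kSuccess);
  ASSERT_GT(size, 0U);
  std::string buffer(size, '\0');
  ASSERT_EQ(Device_query_property(kPropertyName, size, buffer.data(), &size),
            kSuccess);
}

// Invalid arguments must yield the error codes mandated by the spec,
// independent of any device details.
TEST(QueryInterface, InvalidArgumentsAreRejected) {
  EXPECT_EQ(Device_query_property(kPropertyName, 1, nullptr, nullptr),
            kInvalidArgument);
}
```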

For the second option, it would be good to unify the tests being added to the template and the existing tests for the example devices to avoid too much code duplication.

@burgholzer added the documentation and usability labels on Dec 3, 2024
@echavarria-lrz
Contributor

Regarding this:

The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.

I completely agree. We should avoid testing actual circuit execution on quantum devices as part of the CI pipeline. For instance, considering the LRZ's devices: if a provider wanted to test a specific device hosted here, such tests likely wouldn't be feasible due to restrictions on granting remote access. Additionally, many providers would likely be uncomfortable allowing access to their devices directly from a public platform such as GitHub.

This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.

@freetonik @kukushechkin: I believe this is in line with the concerns you raised during our last meeting. Could you kindly share any feedback you may have?

@burgholzer
Contributor Author

Regarding this:

The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.

I completely agree. We should avoid testing actual circuit execution on quantum devices as part of the CI pipeline. For instance, considering the LRZ's devices: if a provider wanted to test a specific device hosted here, such tests likely wouldn't be feasible due to restrictions on granting remote access. Additionally, many providers would likely be uncomfortable allowing access to their devices directly from a public platform such as GitHub.

Fully agree 👍🏼

This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.

Short answer: yes. And definitely not a silly question.
@ystade and I had a pretty good discussion today about general testing of device implementations and how we could provide that as part of the QDMI repository and/or the QDMI device implementation template. @ystade can elaborate a little more on the things we discussed.

While working on this, we should keep in mind that there are tests a device implementer may want to run in CI (general spec compliance, query interface tests, etc.), while others should only be run on demand and locally (e.g., functional control interface tests).

Note that the current functional tests that are part of the repository already "fully" test the control interface of the example devices. This is only possible because the example backends just return random values for their job results. However, this already provides a good blueprint for the kinds of tests that could also be part of device implementations.
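For illustration, here is a minimal sketch of such a structural test, assuming GoogleTest; the `run_job` helper is a placeholder that fabricates random counts in the same spirit as the example backends, not an actual QDMI call.

```cpp
// Sketch of a functional control-interface test. Because results are
// random, only structural properties can be asserted. `run_job` is a
// placeholder, not the actual QDMI control API.
#include <gtest/gtest.h>

#include <cstdlib>
#include <map>
#include <string>

constexpr int kShots = 1024;

// Placeholder standing in for "submit a job and collect the histogram";
// it fabricates random single-qubit counts just like the example backends.
std::map<std::string, int> run_job(int shots) {
  std::map<std::string, int> counts;
  for (int i = 0; i < shots; ++i) {
    ++counts[(std::rand() % 2) != 0 ? "1" : "0"];
  }
  return counts;
}

// Structural check: every outcome is a non-empty bitstring with a positive
// count, and all counts add up to the requested number of shots.
TEST(ControlInterface, HistogramIsConsistent) {
  const auto counts = run_job(kShots);
  int total = 0;
  for (const auto& [bitstring, count] : counts) {
    EXPECT_FALSE(bitstring.empty());
    EXPECT_GT(count, 0);
    total += count;
  }
  EXPECT_EQ(total, kShots);
}
```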

@kukushechkin

On the IQM side, we are thinking about not doing open-source development of the QDMI device implementation, as that would make it harder to track compatibility between the shipped control software and that library. So let's assume there isn't necessarily a repo on GitHub.

I think there are several valuable areas for testing:

  1. The control interface.

This requires program submission, but not actual program execution, so the other side can be a mock. It feels more like integration testing against a higher-level component that uses the QDMI device library (something called a Submitter that @echavarria-lrz mentioned when we talked?). Having such a tool with a test suite representing "the latest release version of MQSS" would allow us to test our dev changes to QDMI on our side.

The other way around, dev changes to MQSS can be verified against the latest shipped QDMI device implementation, for example the one received with the IQM QC Control Software package.

  2. Program spec support.

A set of reference QIR programs for the supported device capabilities would be ideal, so that any QDMI device provider can keep verifying not just the QDMI layer, but also the underlying software/hardware. While QIR itself is standardized, the way QDMI device capabilities map to QIR is what actually needs to be tested.
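To sketch what that could look like in practice (everything here is an assumption, not an existing tool): a small harness could iterate over the reference programs and merely check that the device accepts each submission, leaving result verification to a separate step.

```cpp
// Hypothetical harness that feeds every reference QIR program in a
// directory to a device's submission entry point. The directory layout
// and `submit_qir` are assumptions, not part of QDMI.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Placeholder standing in for the device's job-submission call; a real
// harness would call into the device library instead.
bool submit_qir(const std::string& program) { return !program.empty(); }

int main() {
  const fs::path dir{"reference_programs"}; // assumed location of the files
  if (!fs::exists(dir)) {
    std::cerr << "no reference programs found\n";
    return 1;
  }
  int failures = 0;
  for (const auto& entry : fs::directory_iterator(dir)) {
    if (entry.path().extension() != ".ll") { // QIR as textual LLVM IR
      continue;
    }
    std::ifstream in(entry.path());
    std::stringstream buffer;
    buffer << in.rdbuf();
    if (!submit_qir(buffer.str())) {
      std::cerr << "rejected: " << entry.path() << '\n';
      ++failures;
    }
  }
  return failures == 0 ? 0 : 1;
}
```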

@echavarria-lrz
Contributor

@kukushechkin I really appreciate your feedback.

  1. I'll leave the technical details about the Submitter to @mnfarooqi, but to provide more context for this discussion, a Submitter is effectively a QDMI client that offloads circuits from the compiler to a target device.
  2. I believe this bullet was already addressed by @burgholzer in this issue: 🧪 Testing circuit support in device implementations #113 (comment)

@freetonik

In addition to @kukushechkin's points:

This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.

Device maintainers like ourselves would need versioned, reproducible packages of such tests, so that we can a) easily and reliably run tests while developing, and b) set up pipelines, including in our private internal repos, for regression testing. Ideally, this should just be part of the MQSS SDK, so that we can validate "MQSS version X is compatible with IQM Software version Y" and keep records of these compatibility mappings.

@ystade
Collaborator

ystade commented Dec 5, 2024

Short answer: yes. And definitely not a silly question.
@ystade and I had a pretty good discussion today about general testing of device implementations and how we could provide that as part of the QDMI repository and/or the QDMI device implementation template. @ystade can elaborate a little more on the things we discussed.

The following is more on the technical side: as discussed with @burgholzer, the example devices provided in the QDMI repository are currently tested end-to-end, meaning that they are not tested independently and individually but rather from a client through a driver. Hence, all tests contained in QDMI right now rely on an implementation of the driver.

However, device maintainers may want to test their device independently, without starting a driver. In particular, such individual tests should be provided together with the template that can be exported from QDMI with a specified prefix. At the same time, those tests can also be used to test the included example devices.

Still, we do not want to duplicate too much code, even though the device tests have to deal with the custom prefixes that device implementations use. To this end, we want to implement the individual device tests in the top-level test directory; while building the project, they are instantiated with the respective prefix to be compatible with the corresponding device implementation (a sketch of one possible mechanism follows below).

To summarise, the tests that allow testing a device independently of any driver can also be used to implement a validation check of whether the device complies with the specification.
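For the record, here is a rough sketch of one way such a prefix instantiation could work, via token pasting at compile time; the actual template may use CMake substitution instead, and the `<PREFIX>_QDMI_device_...` naming and the `EX` prefix below are purely illustrative.

```cpp
// Sketch of prefix instantiation via token pasting. The generic test is
// written once against DEVICE_FUNC(...) and compiled once per device,
// e.g. with -DDEVICE_PREFIX=EX. All names here are illustrative.
#include <cassert>

// Stand-in for a device implementation built with prefix "EX".
extern "C" int EX_QDMI_device_initialize(void) { return 0; }

#ifndef DEVICE_PREFIX
#define DEVICE_PREFIX EX
#endif

// Two-level concatenation so that DEVICE_PREFIX gets expanded first.
#define QDMI_CONCAT_(a, b) a##b
#define QDMI_CONCAT(a, b) QDMI_CONCAT_(a, b)
#define DEVICE_FUNC(name) \
  QDMI_CONCAT(DEVICE_PREFIX, QDMI_CONCAT(_QDMI_device_, name))

int main() {
  // Expands to EX_QDMI_device_initialize() for this particular build.
  assert(DEVICE_FUNC(initialize)() == 0);
  return 0;
}
```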

@mnfarooqi
Contributor

  1. I'll leave the technical details about the Submitter to @mnfarooqi, but to provide more context for this discussion, a Submitter is effectively a QDMI client that offloads circuits from the compiler to a target device.

QDMI is the interface through which MQSS connects to devices. MQSS itself is a collection of different components that can be used in various combinations, depending on what the hosting site needs. To test a device implementation against an MQSS component, e.g., the Submitter, tests can be provided in the component's repo.

Device maintainers like ourselves would need versioned, reproducible packages of such tests, so that we can a) easily and reliably run tests while developing, and b) set up pipelines, including in our private internal repos, for regression testing. Ideally, this should just be part of the MQSS SDK, so that we can validate "MQSS version X is compatible with IQM Software version Y" and keep records of these compatibility mappings.

Regarding MQSS compatibility, my understanding is that you can claim "MQSS version X is compatible with IQM Software version Y" if both (MQSS and the IQM software) are compatible with some QDMI version Z.

@burgholzer @ystade
Has QDMI v1.0 been released? I don't see a release or version tag in the repo.

@burgholzer
Contributor Author

@burgholzer @ystade Has QDMI v1.0 been released? I don't see a release or version tag in the repo.

Just briefly commenting on this. I will come back to the other comments in this thread at a later point in time.

v1 has not been officially released. Given some open issues that require breaking changes to the interface (#117, #118) and some issues that add features that seem all but necessary (#108, #109, #115), I would argue that we should get these resolved before officially marking this as v1.
Since we are currently not expecting any further substantial (breaking) changes, we feel confident that we can realise this by the end of the year.
