🧪 Testing of third-party device implementations #112
Regarding this:
I completely agree. We should avoid testing actual circuit execution on quantum devices as part of the CI pipeline. For instance, considering the LRZ's devices, if a provider wanted to test a specific device hosted here, such tests would likely not be feasible due to restrictions on granting remote access. Additionally, many providers would likely be uncomfortable allowing access to their devices directly from a public platform such as GitHub. This raises an interesting (perhaps silly) question: could we design a test framework that is included within the repository, so that it can be cloned and run locally? Such a solution could validate whether a QDMI device works correctly in the local environment, with access to the actual hardware. @freetonik @kukushechkin: I believe this is in line with the concerns you raised during our last meeting. Could you kindly share any feedback you may have?
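To make this concrete, here is a rough sketch of what such a locally runnable check could look like: a small binary that ships with the cloned repository, loads the locally built device library, and verifies that it exposes the expected entry points. The symbol name `MYDEVICE_initialize` and its signature are made-up placeholders, not part of the actual QDMI specification:

```cpp
// local_device_check.cpp -- sketch of a compliance check that runs entirely
// on-site, so no remote access to the device is ever required.
// Build: c++ -std=c++17 local_device_check.cpp -ldl -o local_device_check
#include <dlfcn.h>

#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
  if (argc != 2) {
    std::fprintf(stderr, "usage: %s <path-to-device-library.so>\n", argv[0]);
    return EXIT_FAILURE;
  }
  // Load the device implementation that was built locally, next to the
  // actual hardware.
  void* handle = dlopen(argv[1], RTLD_NOW | RTLD_LOCAL);
  if (handle == nullptr) {
    std::fprintf(stderr, "cannot load device library: %s\n", dlerror());
    return EXIT_FAILURE;
  }
  // Hypothetical prefixed entry point; a real check would iterate over all
  // symbols mandated by the QDMI device specification.
  using InitFn = int (*)();
  auto init = reinterpret_cast<InitFn>(dlsym(handle, "MYDEVICE_initialize"));
  if (init == nullptr) {
    std::fprintf(stderr, "required symbol missing: %s\n", dlerror());
    dlclose(handle);
    return EXIT_FAILURE;
  }
  const int status = init();
  std::printf("MYDEVICE_initialize returned %d\n", status);
  dlclose(handle);
  return status == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```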
Fully agree 👍🏼
Short answer: yes. And definitely not a silly question. While working on this, we should keep in mind that there are tests a device implementer might want to run in CI (general spec compliance, query interface tests, etc.), while others should only be run on-demand and locally (e.g., functional control interface tests). Note that the current functional tests that are part of the repository already "fully" test the control interface of the example devices. This is only possible because the example backends just return random values for their job results. However, this already provides a good blueprint for the kinds of tests that could also be part of device implementations.
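To illustrate that split, a single suite could gate the hardware-touching tests behind an explicit opt-in while the spec-compliance tests always run. A minimal GoogleTest sketch, where the environment variable name is an assumption made up for illustration:

```cpp
// Sketch: spec-compliance tests always run (CI-safe), while
// control-interface tests are skipped unless the environment opts in.
#include <gtest/gtest.h>

#include <cstdlib>

// CI-safe: exercises only the query interface, no hardware involved.
TEST(SpecCompliance, QueryInterfaceIsWellBehaved) {
  // ... device-agnostic checks, runnable in any CI pipeline ...
}

// On-demand: only runs locally, with access to the actual hardware.
TEST(ControlInterface, SubmitAndAwaitResults) {
  if (std::getenv("QDMI_HARDWARE_TESTS") == nullptr) {
    GTEST_SKIP() << "set QDMI_HARDWARE_TESTS=1 to run against real hardware";
  }
  // ... submit a job through the control interface and await results ...
}
```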
On the IQM side, we are considering not doing open-source development of the QDMI device implementation, as that would make it harder to track compatibility between the shipped control software and that library. So let's assume there isn't necessarily a repo on GitHub. I think there are several valuable areas for testing:
- Integration testing against a higher-level component using the QDMI device library (the Submitter that @echavarria-lrz mentioned when we talked?). This requires program submission, but not real program execution, so the execution side can be a mock (see the first sketch below). Having such a tool with a test suite representing "the latest release version of MQSS" would allow us to test our dev changes to QDMI on our side. The other way around, dev changes to the MQSS can be verified against the latest shipped QDMI device implementation, for example as received with the IQM QC Control Software package.
- A set of reference QIR programs for the supported device capabilities would be ideal, so any QDMI device provider can keep verifying not just QDMI itself, but the underlying software/hardware. While QIR is a standard, how the QDMI device capabilities map to QIR is what should be tested (see the second sketch below).
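Regarding the first area, the mock on the execution side could look roughly like the sketch below; all names are illustrative and do not reflect the Submitter's actual API:

```cpp
// Sketch: a mock execution backend that accepts program submission but
// never executes anything, so a higher-level component (e.g., the
// Submitter) can be integration-tested without real hardware.
#include <map>
#include <string>
#include <utility>

class MockExecutionBackend {
 public:
  // Accepts a program exactly like a real backend would...
  void submit(std::string program) { last_program_ = std::move(program); }

  // ...but returns canned measurement counts instead of running it.
  std::map<std::string, int> results() const {
    return {{"00", 512}, {"11", 512}};
  }

  // Captured submission, available for assertions in the test suite.
  const std::string& last_program() const { return last_program_; }

 private:
  std::string last_program_;
};
```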
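Regarding the second area, such a suite could pair each advertised capability with a reference program. The QIR below is only an illustrative base-profile Bell-pair kernel, and the capability name is a made-up placeholder:

```cpp
// Sketch: a table pairing device capabilities with reference QIR programs.
#include <string_view>

struct ReferenceProgram {
  std::string_view capability;  // capability as advertised via QDMI
  std::string_view qir;         // QIR source exercising that capability
};

// Illustrative base-profile Bell-pair kernel.
inline constexpr std::string_view kBellPairQIR = R"qir(
%Qubit = type opaque
%Result = type opaque

define void @main() #0 {
entry:
  call void @__quantum__qis__h__body(%Qubit* null)
  call void @__quantum__qis__cnot__body(%Qubit* null, %Qubit* inttoptr (i64 1 to %Qubit*))
  call void @__quantum__qis__mz__body(%Qubit* inttoptr (i64 1 to %Qubit*), %Result* null)
  ret void
}

declare void @__quantum__qis__h__body(%Qubit*)
declare void @__quantum__qis__cnot__body(%Qubit*, %Qubit*)
declare void @__quantum__qis__mz__body(%Qubit*, %Result*)

attributes #0 = { "entry_point" "required_num_qubits"="2" "required_num_results"="1" }
)qir";

inline constexpr ReferenceProgram kReferencePrograms[] = {
    {"two_qubit_gates", kBellPairQIR},  // placeholder capability name
};
```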
@kukushechkin I really appreciate your feedback.
In addition to @kukushechkin's points:
Device maintainers like ourselves would need versioned, reproducible packages of such tests, so that we can a) easily and reliably run the tests while developing, and b) set up pipelines, including in our private internal repos, for regression testing. Ideally, this would just be part of the MQSS SDK, so that we can validate that "MQSS version X is compatible with IQM Software version Y" and keep records of these compatibility mappings.
The following is more on the technical side: as discussed with @burgholzer, the example devices provided in the QDMI repository are tested end-to-end, meaning that they are not tested independently and individually, but rather from a client through a driver. Hence, all tests contained in QDMI right now rely on an implementation of the driver. However, device maintainers may want to test their device independently, without starting a driver. In particular, these individual tests should be provided together with the template that can be exported from QDMI with a specified prefix. At the same time, these tests can also be used to test the included example devices. Still, we do not want to duplicate too much code, even though the device tests have to deal with the custom prefixes that device implementations use. To this end, we want to implement the individual device tests in the top-level test directory; while building the project, they are instantiated with the respective prefix to be compatible with the corresponding device implementation (see the sketch below). To summarise, the tests that allow testing a device independently of any driver can also serve as a validation check of whether a device complies with the specification.
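A rough sketch of how this prefix instantiation could work, using the preprocessor to paste the device prefix onto generic test code at build time; the prefixed function names are illustrative placeholders, not the actual QDMI template:

```cpp
// device_test_template.cpp -- sketch of a prefix-instantiated device test.
// The build compiles this file once per device, defining DEVICE_PREFIX
// accordingly (e.g., -DDEVICE_PREFIX=MYDEVICE). All prefixed function
// names are illustrative placeholders.
#include <gtest/gtest.h>

#ifndef DEVICE_PREFIX
#error "DEVICE_PREFIX must be defined by the build system"
#endif

// Token-pasting helpers that turn DEVICE_PREFIX plus a generic suffix into
// the concrete symbol exported by the device implementation.
#define CONCAT_IMPL(prefix, suffix) prefix##_##suffix
#define CONCAT(prefix, suffix) CONCAT_IMPL(prefix, suffix)
#define DEVICE_FN(suffix) CONCAT(DEVICE_PREFIX, suffix)

// Hypothetical prefixed entry points the device is assumed to export.
extern "C" int DEVICE_FN(initialize)();
extern "C" int DEVICE_FN(finalize)();

// The same test body is reused for every device; only the resolved symbols
// differ, so no test code is duplicated per device implementation.
TEST(DeviceLifecycle, InitializeAndFinalizeSucceed) {
  EXPECT_EQ(DEVICE_FN(initialize)(), 0);
  EXPECT_EQ(DEVICE_FN(finalize)(), 0);
}
```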
QDMI is the interface for MQSS to connect to devices. MQSS itself is a collection of different components, which can be used in different combinations depending on what the hosting site needs. To test a device implementation against an MQSS component, e.g., the Submitter, tests can be provided in the component's repo.
Regarding MQSS compatibility, my understanding is that you can claim that "MQSS version X is compatible with IQM Software version Y" if both (MQSS and IQM software) are compatible with a QDMI version Z. @burgholzer @ystade |
Just briefly commenting on this. I will come back to the other comments in this thread at a later point in time.
Anyone implementing the QDMI device specification will be interested in whether they have implemented it correctly, so that, when the shared library is distributed, one can expect it to work.
It would be good to provide infrastructure (tests + documentation) on how to best set this up and to offer some support in doing so.
In the best-case scenario, we get a setup where passing a set of tests implies (with high probability) that the device implementation is compliant with the interface definition.
In principle, I see two (non-exclusive) options here:
For both options, the biggest question is how much we can reliably test without knowing any details about the device. We could definitely perform a lot of sanity checks, e.g., correct error behaviour as specified in the interface.
I believe it should also be possible to test the query interface without knowing too much about the device, e.g., that certain properties must be provided.
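For instance, such device-agnostic checks could look like the sketch below; `device_query_name` and the error codes are hypothetical stand-ins for whatever the interface actually specifies:

```cpp
// Sketch: sanity checks that need no knowledge of the underlying hardware.
#include <gtest/gtest.h>

#include <cstddef>
#include <string>

// Hypothetical stand-ins for the query interface and its error codes.
extern "C" int device_query_name(char* buffer, std::size_t buffer_size);
constexpr int kSuccess = 0;
constexpr int kErrorInvalidArgument = 1;

// A property the specification marks as mandatory must be retrievable
// and non-empty.
TEST(QuerySanity, DeviceNameIsProvided) {
  char buffer[256] = {};
  ASSERT_EQ(device_query_name(buffer, sizeof(buffer)), kSuccess);
  EXPECT_FALSE(std::string(buffer).empty());
}

// Error behaviour mandated by the interface is checkable without any
// device-specific knowledge.
TEST(QuerySanity, NullBufferIsRejected) {
  EXPECT_EQ(device_query_name(nullptr, 0), kErrorInvalidArgument);
}
```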
The control interface, i.e., circuit execution, is probably much tougher to test reliably and regularly, as it requires submitting actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.
For the second option, it would be good to unify the tests being added to the template and the existing tests for the example devices to avoid too much code duplication.