NumpyBackend as a case study
During some discussion, I ended up saying that the NumPy backend would have been part of qibo-core.
Well, I'm not sure whether this idea will survive the move to Rust...
For now, I'm trying to make qibo-core as dependency-free as possible. And the Rust core will also be NumPy-free (of course).
However, at some point I will have to return some results to Python, wherever they come from. These results will of course contain arrays, and the only sensible choice is to return them as NumPy arrays (well, I could write my own Python class in Rust, holding a buffer that follows the buffer protocol, or even NumPy's array protocol directly, but NumPy is such a light and ubiquitous dependency for Python, and PyO3 has an excellent crate for the purpose, that I really believe it would not be worth it...).
However, even though the dependency is not a real argument, I still tend to think that the NumpyBackend should belong to the Qibo package itself (unless we decide to move all backends somewhere else). But I mostly have in mind the bare execution (the execute_circuit* methods), and I believe keeping this in qibo is hardly controversial.
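For context, this is roughly the bare execution I have in mind, written as a sketch with the current-style Qibo API (details may differ slightly from the released package):

```python
# Sketch of the "bare execution" that would stay in qibo (current-style API,
# details may differ slightly from the released package).
from qibo import Circuit, gates, set_backend

set_backend("numpy")

c = Circuit(2)
c.add(gates.H(0))
c.add(gates.CNOT(0, 1))

result = c()            # backend.execute_circuit(c) under the hood
state = result.state()  # returned to Python as a NumPy array
```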
The rest of the backend is the complex matter...
Backend rich API
I would consider stripping out part of the backend functions and making them a result-manipulation library.
However, this will require careful scrutiny: some functions are essentially never overridden, so they are perfect candidates for librarization. But there are also functions that might require, or benefit from, being overridden, e.g. to run on a discrete hardware accelerator or any other kind of separate device.
These latter functions are what the backend mechanism was designed for, but they do not survive serialization, so, e.g., they cannot be used with a cloud backend.
So, the current Backend is doing many things:
executing the circuit, obtaining some kind of result (shots, state, TN)
efficiently manipulating the result
At this point, we should make a decision: functions like calculate_norm() should not be required in order to execute a circuit, whatever the result type. But they would benefit from being executed in the same place as the circuit (e.g. a GPU, or a cloud node), since that could avoid fetching large amounts of data.
One option is to just drop them from the backend and force the user to fetch the result. From there on, there is little need for these operations to be part of the backend (or, at least, part of the same backend structure that executes the circuit). They could easily be provided as a separate library (or multiple ones, if you want to execute on a GPU or anywhere else), as in the sketch below.
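Just to illustrate this option (hypothetical names, not an existing Qibo module): the manipulation helpers would become plain functions acting on already-fetched arrays, with no backend involved at all.

```python
# Hypothetical result-manipulation library: plain functions on fetched arrays,
# decoupled from whichever backend executed the circuit.
import numpy as np

def calculate_norm(state: np.ndarray) -> float:
    """Norm of a fetched state vector."""
    return float(np.linalg.norm(state))

def probabilities(state: np.ndarray) -> np.ndarray:
    """Measurement probabilities from a fetched state vector."""
    return np.abs(state) ** 2

# usage, after fetching the result:
#   norm = calculate_norm(result.state())
```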
The other is to allow a rather wide set of operations to be executed together with the circuit.
To make this possible, we will need a language to encode them as well, and one or more runtimes matching the various backends (directly using the same memory buffers on the computing device, or being auto-diff compatible).
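Purely as a hypothetical sketch of what such an encoding could look like (none of these names exist today): the operations would be described as data, shipped together with the circuit, and evaluated by whatever runtime executed it, so only the reduced results travel back.

```python
# Hypothetical encoding of result operations as data, so they can travel with
# the circuit and be evaluated remotely (on a GPU, a cloud node, ...).
post_processing = [
    {"op": "norm"},                                # only a scalar travels back
    {"op": "expectation", "observable": "Z0 Z1"},  # evaluated on-device
]

# job = client.submit(circuit, post=post_processing)  # hypothetical cloud API
# norm, expval = job.results()
```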
Currently, a lot of the magic happens thanks to the self.np assignment (assuming it is actually used consistently, which I believe is not always the case), but this only works for backends implemented on top of NumPy-compatible libraries.
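Schematically, that pattern looks like this (a simplified sketch, not the actual Qibo code):

```python
# Simplified sketch of the self.np pattern: the backend stores an array module,
# so the same method body runs on any NumPy-compatible library.
import numpy as np

class NumpyBackend:
    def __init__(self):
        self.np = np  # a GPU backend would set e.g. `self.np = cupy` instead

    def calculate_norm(self, state):
        return self.np.sqrt(self.np.sum(self.np.abs(state) ** 2))
```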
However, with multi-language support this cannot work.
I'm trying to think of several possible solutions, but so far nothing has truly clicked as the optimal one.