The circuit that we pass to the EstimatorQNN contains two parts: the encoding part (non-trainable) and the trainable part. If the encoding circuit changes for every data point, we have to re-instantiate the model every time. It would be better if we could change the encoding part of the model each time we call it. I know we can create a parameterized circuit and pass in the parameters every time, but sometimes the circuit structure itself changes depending on the data point, and it is difficult to use TorchConnector for that.
Thanks for pointing this out @Pingal-Pratyush-Nath, this is an interesting and tricky problem! Currently, EstimatorQNN only supports dynamic parameters, not a feature map with a fully dynamic circuit structure. In my view, this could be achieved in the following way.
Change the EstimatorQNN arguments to take a (constant) `ansatz: QuantumCircuit` and a variable `feature_map: QuantumCircuit | Callable[[Any], QuantumCircuit]`, similar to QNNCircuit.
Note that changing the structure of the circuit for every data point (or GraphState) means that you have to transpile the composed circuit every time if you want to run on real or emulated hardware, for two reasons: (i) to optimise gates and pulses, and (ii) to convert the circuit to the backend's Instruction Set Architecture (ISA). Potentially, this is an expensive operation to repeat every time, so you may want to keep an eye on this step if you're after performance/speed.
Currently, we do not have a plan for including dynamic feature maps in EstimatorQNN; however, I warmly encourage you to share your ideas and code snippets in this thread so that we and the rest of the community can make suggestions.
@Pingal-Pratyush-Nath I am skeptical about this becoming a feature, but another idea could be to reframe the QNN to reduce computational overhead. What if, instead, the QNN were instantiated with an ISA-compliant ansatz and then fed the data points in the form of transpiled ISA circuits?
So you would take your initial data, turn each data point into a virtual circuit (QuantumCircuit), and then run the pass manager on all of these to make them ISA; this forms your dataset. Then you feed the QNN the ISA data circuits, and internally it stitches each of them together with the ISA ansatz and submits the results as jobs.
This would mean that you transpile the ansatz only once, reducing computational overhead. This kind of model could be worth investigating internally as a way to run "quantum data" through our NeuralNets. Honestly, I think I have convinced myself this is a good idea whilst writing this; maybe this should be a feature for 1.1 - @edoaltamura @OkuyanBoga.
What should we add?
I have added a sample code.