QCNN with Amplitude Encoding #880

Open
@DaoyiC

Description

What should we add?

I am following the Qiskit tutorial on the QCNN (https://qiskit-community.github.io/qiskit-machine-learning/tutorials/11_quantum_convolutional_neural_networks.html) and I wanted to try the circuit on the MNIST dataset. Because of the number of input features, I thought it would be more appropriate to use amplitude encoding, which I believe can be done with RawFeatureVector. I have tested the circuit with the COBYLA optimiser, but I would also like to try a gradient-based optimiser.

The documentation states this is not possible, and I believe it has been mentioned in #669. However, I think it should be possible to compute the gradients for the weights only. For example, that issue suggests that binding the parameters may help, but then the parameters would have to be re-bound for each training image, which doesn't seem compatible with the training methodology used in the tutorial. Is it possible to use a gradient-based optimiser, and if so, how would this be done? A sketch of the setup I have in mind is below.
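For reference, here is a minimal sketch of what I mean. It assumes the current qiskit-machine-learning API (RawFeatureVector, EstimatorQNN, NeuralNetworkClassifier), and RealAmplitudes and ADAM are only placeholders for the QCNN ansatz and a gradient-based optimiser; whether the weight gradients can actually be computed through RawFeatureVector is exactly what I am unsure about:

```python
# Hypothetical sketch: amplitude-encoded QNN with gradients restricted to the ansatz weights.
# RealAmplitudes and ADAM stand in for the QCNN ansatz and a gradient-based optimiser.
from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes
from qiskit_algorithms.optimizers import ADAM  # or qiskit.algorithms.optimizers on older versions
from qiskit_machine_learning.circuit.library import RawFeatureVector
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier

num_qubits = 8                                   # 2**8 = 256 amplitudes, e.g. MNIST downsampled to 16x16
feature_map = RawFeatureVector(2 ** num_qubits)  # amplitude encoding of the input vector
ansatz = RealAmplitudes(num_qubits, reps=2)      # placeholder for the QCNN conv/pool layers

circuit = QuantumCircuit(num_qubits)
circuit.compose(feature_map, inplace=True)
circuit.compose(ansatz, inplace=True)

qnn = EstimatorQNN(
    circuit=circuit,
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
    input_gradients=False,  # only differentiate with respect to the trainable weights
)

classifier = NeuralNetworkClassifier(qnn, optimizer=ADAM(maxiter=100))
# classifier.fit(train_images, train_labels)  # does this gradient path work with RawFeatureVector?
```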
