
NeuralNetworkClassifier Accuracy Updates #813

Open
kezmcd1903 opened this issue Jul 4, 2024 · 4 comments
Labels: type: feature request 💡 New feature or request

@kezmcd1903

What should we add?

The ability to save the training and test accuracy progress, per epoch/iteration, while training a QNN (with NeuralNetworkClassifier). This gives far more information than the loss alone and is important to display in any QML paper.

I thought a nice way to do this would be to follow https://qiskit-community.github.io/qiskit-machine-learning/tutorials/09_saving_and_loading_models.html and break the training up into epochs, then test and save the model at regular intervals.

I think there's quite a big issue here that the tutorial fails to mention: each time you save and load your model, the optimizer's 'memory' resets (I'm using COBYLA). That means that if your objective-function landscape is difficult to navigate, you get repeated behaviour after every reload. A rough sketch of the loop I mean is below.
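Concretely, the per-epoch save/evaluate/reload loop I'm describing looks roughly like this (X_train, X_test, y_train, y_test and n_epochs are placeholder names; each fit() call starts a fresh COBYLA run, and warm_start only carries the weights over as the next initial point):

from qiskit_machine_learning.algorithms import NeuralNetworkClassifier

# classifier: a NeuralNetworkClassifier built as in the tutorial
for epoch in range(n_epochs):
    classifier.fit(X_train, y_train)  # runs up to maxiter COBYLA iterations
    print(epoch, classifier.score(X_train, y_train), classifier.score(X_test, y_test))
    classifier.save(f"qnn_epoch_{epoch}.model")
    classifier = NeuralNetworkClassifier.load(f"qnn_epoch_{epoch}.model")
    classifier.warm_start = True  # reuses the last weights, but COBYLA's internal state is gone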

See my slack post for more details https://qiskit.slack.com/archives/C7SJ0PJ5A/p1720017452449239.

@edoaltamura
Collaborator

edoaltamura commented Jul 5, 2024

Hi Kieran, thank you for posting on qiskit-machine-learning! To summarise the Slack thread, could you please describe again what makes the callback function challenging to use in your case?

A similar callback feature was suggested for PegasosQSVC in #599.

@edoaltamura edoaltamura added the type: feature request 💡 New feature or request label Jul 5, 2024
@kezmcd1903
Author

Hi, it would be great to see training and test accuracies in the callback.

The callback feature suggested for PegasosQSVC in #599 was for viewing the objective function value.

@oscar-wallis
Collaborator

oscar-wallis commented Aug 5, 2024

Hi @kezmcd1903, I think this is all already possible using the callback feature; I'll dive into it here with a little high-level pseudocode.

The Goal

I am essentially going to give you a code implementation for "the ability to save the training and test accuracy progress while training a QNN". I saw you were struggling to implement parameter saves and loads because the optimiser re-initialises; I don't have a good solution for that, but I am confident we can sort this with the callback alone.

The Callback function

You're right to use the NeuralNetworkClassifier (NNC) for a custom QNN, and we will be using its callback argument. On initialisation of the NNC, this function is passed to TrainableModel, the umbrella training class in qiskit-machine-learning. The callback is invoked as self._callback(objective_weights, objective_value) whenever the objective function is evaluated during TrainableModel's minimize procedure. There are only two constraints on the callback function:

  1. It must be a function that takes exactly these two parameters - however, there is a cheeky way around this using functools.partial (see the sketch after this list).
  2. It can't, or at least shouldn't, return anything, since self._callback is called but its return value is never used.

Other than that, it can do whatever the user wants and is limited only by the user's creativity. This includes storing the objective weights and values, performing tests, printing or storing test results, creating graphs, or anything else.
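For point 1, here is a minimal sketch of the functools.partial trick (logging_callback and history are just illustrative names):

from functools import partial

def logging_callback(history, weights, obj_func_eval):
    # `history` is bound in advance by partial, so the model still calls this
    # with exactly (weights, obj_func_eval), satisfying the two-argument rule
    history.append((weights, obj_func_eval))

history = []
callback = partial(logging_callback, history)
# callback(weights, obj_func_eval) now matches the required signature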

Example - Running test suites at the end of each epoch.

Here's a little pseudocode for what I would write, though I can think of many other ways to do it, so if you want something else, let me know and we can workshop something more specific. The simplest method is to store the weights across all of training and then run your testing retrospectively. One thing to note before we get into this: if we want to avoid saving the model and then warm starting, you might need to do some funky stuff to form epochs. I would suggest padding your dataset so that one huge array contains multiple copies of your data, essentially forming your epochs (a sketch of this follows the second example below). You would, however, need to specify the epoch length (len_epoch) as the length of the original data.

test_data = ...      # array of test data
test_labels = ...    # array of test labels
qnn = ...            # neural network object
weights_holder = []  # one weight vector per objective evaluation

def test_suite(weights):
    output = qnn.forward(input_data=test_data, weights=weights)
    """ Do some testing with the output here, test against test_labels or whatever else """

def callback_0(weights, obj_func_eval):
    weights_holder.append(weights)

model = NeuralNetworkClassifier(qnn, loss=loss, optimizer=optimizer, callback=callback_0)

model.fit(train_data, train_labels)

# retrospectively run the test suite on every stored weight vector
test_results = [test_suite(weights) for weights in weights_holder]
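As one way of filling in the test_suite body, assuming your QNN outputs one probability/score per class for each sample (adjust the decoding if your output shape differs):

import numpy as np

def test_suite(weights):
    # one row of class scores per test sample under the assumption above
    output = qnn.forward(input_data=test_data, weights=weights)
    predictions = np.argmax(np.asarray(output), axis=1)
    return float(np.mean(predictions == test_labels))  # test accuracy in [0, 1]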

Or, better yet, save those results to a file so as not to pollute the global namespace. If you want the test suite to run dynamically during training, you just need a way of checking whether you've finished an epoch.

test_data = ...    # array of test data
test_labels = ...  # array of test labels
qnn = ...          # neural network object
len_epoch = ...    # number of objective evaluations treated as one epoch

def test_suite(weights):
    output = qnn.forward(input_data=test_data, weights=weights)
    """ Do some testing with the output here, test against test_labels or whatever else """

i = 0
def callback_1(weights, obj_func_eval):
    global i  # the counter lives at module scope, so declare it global before mutating it
    i += 1
    if i % len_epoch == 0:
        test_suite(weights)  # run the tests at the end of each epoch

model = NeuralNetworkClassifier(qnn, loss=loss, optimizer=optimizer, callback=callback_1)

model.fit(train_data, train_labels)
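And the padding I mentioned could look like this minimal sketch (original_data, original_labels and n_epochs are placeholder names):

import numpy as np

len_epoch = len(original_data)                      # one epoch = one pass over the original data
train_data = np.tile(original_data, (n_epochs, 1))  # shape: (n_epochs * N, n_features)
train_labels = np.tile(original_labels, n_epochs)   # shape: (n_epochs * N,)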

Please let me know if you have any questions. I appreciate that this feels a little like a botch job, but I wanted to show that the functionality is already there. That said, saving the optimizer's 'memory' could be worth looking into, to avoid the pitfalls you described when warm starting from an initial point.

P.S. I saw you were struggling to use RawFeatureVector in the Slack channel; I am too, and it is very frustrating. Keep in mind, though, that RawFeatureVector doesn't work with gradient-based methods.

@edoaltamura
Collaborator

@kezmcd1903 did the callback method work?
