Bad performance when running mnist_spiking_cnn.py #55

Open
ghost opened this issue Oct 22, 2017 · 2 comments

ghost commented Oct 22, 2017

I ran examples/keras/mnist_spiking_cnn.py and got a bad result. This is the final output:

Using Theano backend.
Train on 50000 samples, validate on 10000 samples
Epoch 1/6
50000/50000 [==============================] - 154s - loss: 2.5341 - acc: 0.1081 - val_loss: 2.3020 - val_acc: 0.1064
Epoch 2/6
50000/50000 [==============================] - 148s - loss: 2.3018 - acc: 0.1134 - val_loss: 2.3018 - val_acc: 0.1064
Epoch 3/6
50000/50000 [==============================] - 148s - loss: 2.3017 - acc: 0.1136 - val_loss: 2.3018 - val_acc: 0.1064
Epoch 4/6
50000/50000 [==============================] - 151s - loss: 2.3012 - acc: 0.1135 - val_loss: 2.3019 - val_acc: 0.1064
Epoch 5/6
50000/50000 [==============================] - 160s - loss: 2.3011 - acc: 0.1136 - val_loss: 2.3019 - val_acc: 0.1064
Epoch 6/6
50000/50000 [==============================] - 158s - loss: 2.3014 - acc: 0.1135 - val_loss: 2.3019 - val_acc: 0.1064
Test score: 2.30193813934
Test accuracy: 0.1064
Building finished in 0:00:01.
Simulating finished in 0:05:36.
Spiking accuracy (100 examples): 0.110

I altered some code to fit my computer; these are the changes.
When the code loads the training and test data with (X_train, y_train), (X_test, y_test) = mnist.load_data(), an error occurs:

Using Theano backend.
Traceback (most recent call last):
  File "mnist_spiking_cnn.py", line 34, in <module>
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
ValueError: too many values to unpack

So I scanned the Keras source and found that the load_data function returns a tuple, so I can only get the data this way:

data = mnist.load_data()
X_train = data[0][0]  # shape is (50000, 768)
y_train = data[0][1]  # shape is (50000,)
X_test = data[1][0]   # shape is (10000, 768)
y_test = data[1][1]   # shape is (10000,)

Besides, I use the CPU instead of the GPU by setting os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'.
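
For reference, a minimal sketch of that change, assuming the flag is set before the first Keras/Theano import (if Theano has already been imported, the flag has no effect):

import os

# Force Theano onto the CPU; this must run before Theano (and hence Keras) is imported.
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'

import keras  # imported only after the flag is set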

My Keras version is 1.2.0. I think the example code uses a different version of Keras, but I don't think the way I load the data makes the code perform badly.

So, can you give me some advice on how to tune the code?

hunse (Collaborator) commented Jan 31, 2018

Yeah, there's definitely some problem in the optimization there.

Keras is changing all the time, so it's hard to know what the problem is with any particular version. I'd go look at Keras forums and try to figure out how to train a good MNIST net just using Keras. Then you can worry about making it spiking using the code here.
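
As a rough illustration of that first step, here is a small plain (non-spiking) Keras MNIST CNN. It is only a sketch: it is written against the Keras 2 API (Conv2D, epochs, keras.utils.to_categorical), whereas Keras 1.x uses Convolution2D, nb_epoch, and keras.utils.np_utils.to_categorical, and the layer sizes and hyperparameters are placeholders rather than the values from mnist_spiking_cnn.py:

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

# Load and normalize MNIST; channels-last image format.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A small rate-based CNN; if this stays near the ~11% chance-level accuracy
# seen in the log above, the problem is in the Keras training itself,
# not in the spiking conversion.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=6, validation_split=0.1)
print(model.evaluate(X_test, y_test))  # [test loss, test accuracy]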

drasmuss (Member) commented

There's also an example at https://www.nengo.ai/nengo_dl/examples/spiking_mnist.html that uses the same ideas to optimize a spiking MNIST network, if that is any help.
