
Add QIF neuron #92

Open · tcstewar opened this issue Nov 12, 2015 · 4 comments

@tcstewar (Contributor)
This has come up a couple times, so I thought I'd post it here. I'm not sure if QIF should just be added to Nengo itself, or if it should be used as an example of how to define your own neuron model.

@tcstewar (Contributor, Author)

A student just asked me about QIF neurons, so here's a quick implementation. We might want to include this somewhere as an example.

import numpy as np
import nengo

class QIF(nengo.neurons.NeuronType):
    def __init__(self, threshold=1, reset=0):
        super(QIF, self).__init__()
        self.threshold = threshold
        self.reset = reset

    def step_math(self, dt, J, spiked, voltage):
        # the actual neuron model
        voltage += voltage**2 + np.maximum(J,0) * dt
        spikes = voltage > self.threshold
        spiked[:] = spikes
        voltage[spikes] = self.reset
        
    def rates(self, x, gain, bias):
        # run the neurons for a bit to estimate firing rates
        J = self.current(x, gain, bias)
        voltage = np.zeros_like(J)
        return nengo.neurons.settled_firingrate(self.step_math, 
                                  J, [voltage],
                                  settle_time=0.001, sim_time=1.0)        
        
# connect the neuron model to the nengo builder system
@nengo.builder.Builder.register(QIF)
def build_qif(model, qif, neurons):
    model.sig[neurons]['voltage'] = nengo.builder.Signal(
        np.zeros(neurons.size_in), name="%s.voltage" % neurons)
    model.add_op(nengo.builder.neurons.SimNeurons(
        neurons=qif,
        J=model.sig[neurons]['in'],
        output=model.sig[neurons]['out'],
        states=[model.sig[neurons]['voltage']]))

Here's an example of using it:

model = nengo.Network()
with model:
    a = nengo.Ensemble(n_neurons=50, dimensions=1,
                       gain=nengo.dists.Uniform(1,1),
                       bias=nengo.dists.Uniform(-0.5, 1.5),
                       neuron_type=QIF(threshold=1, reset=0))
    stim = nengo.Node(lambda t: np.sin(2*np.pi*t))
    nengo.Connection(stim, a)
    
    p = nengo.Probe(a, synapse=0.03)
    p_spikes = nengo.Probe(a.neurons)
sim = nengo.Simulator(model)
sim.run(2)

and the resulting decodes and spikes:

[Figure: decoded output of the ensemble]

[Figure: spike raster of the 50 neurons]
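
Something like the following should reproduce those two plots from the probes above (a minimal sketch, using matplotlib and nengo.utils.matplotlib.rasterplot):

import matplotlib.pyplot as plt
from nengo.utils.matplotlib import rasterplot

# decoded output of the ensemble (probe p, filtered with the 0.03 s synapse)
plt.figure()
plt.plot(sim.trange(), sim.data[p])
plt.xlabel("Time (s)")
plt.ylabel("Decoded value")

# spike raster of the 50 QIF neurons (probe p_spikes)
plt.figure()
rasterplot(sim.trange(), sim.data[p_spikes])
plt.xlabel("Time (s)")
plt.ylabel("Neuron")
plt.show()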

@tbekolay (Member)

Seems like a good candidate for nengo_extras, unless we see this as important enough for Nengo core.

@celiasmith

It is a pretty common neuron, so I could see an argument for putting it in core.

@arvoelke (Contributor) commented Apr 11, 2018

I have a few things to note, with reference to the paper Neural dynamics, bifurcations and firing rates in a quadratic integrate-and-fire model with a recovery variable. I: deterministic behavior (Shlizerman and Holmes, 2011):

  • A typical reset value will be negative. I chose -0.1 to follow Figure 1.
  • The current should not be rectified at 0. It can be negative, which causes a bifurcation to two fixed points (v = +/- sqrt(-J)). This means the voltage will either stabilize at -sqrt(-J), or it may spike once more due to the other fixed point being unstable, before finally stabilizing at -sqrt(-J). I also include voltage[voltage < self.reset] = self.reset. See Figure 2 for a nice summary.
  • The v**2 term is a part of the derivative. That is, the delta should be (voltage**2 + J) * dt instead of voltage**2 + (J * dt).
  • There is a formula for the rates (equation 7) assuming J > 0 (which is fine under a constant-input assumption, due to the above bifurcation analysis); a quick numerical check is sketched after this list.
  • The builder should initialize the voltage vector to start at the reset value (currently it's always set to 0).
  • The gain/bias calculation that is done automatically by the base neuron type is actually pretty good here (after all of the above changes), and so it is not necessary to manually set them in the model.
  • I've found that with the default max firing rates, dt should be dropped by at least a factor of 10; otherwise spikes are under-counted. My example below reduces the firing rates in order to use the same dt.
  • The voltage can be made probeable.
  • Each spike should output 1/dt to be consistent with all other neuron models (this keeps the area of the output pulse constant under varying dt).
  • An amplitude parameter should also be included if this is to be added to Nengo.
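
As a quick sanity check of that closed-form rate, and of the dt point above, something like the following (plain NumPy; the values of J, threshold, and reset are just illustrative) compares Euler integration of the model against equation 7:

import numpy as np

def qif_rate_closed_form(J, threshold=1.0, reset=-0.1):
    # equation 7: rate = sqrt(J) / (arctan(threshold/sqrt(J)) - arctan(reset/sqrt(J))), for J > 0
    sqrtJ = np.sqrt(J)
    return sqrtJ / (np.arctan(threshold / sqrtJ) - np.arctan(reset / sqrtJ))

def qif_rate_euler(J, dt, threshold=1.0, reset=-0.1, sim_time=2.0):
    # count spikes from a simple Euler integration of dv/dt = v**2 + J
    v, n_spikes = reset, 0
    for _ in range(int(sim_time / dt)):
        v += (v**2 + J) * dt
        if v > threshold:
            v = reset
            n_spikes += 1
    return n_spikes / sim_time

J = 300.0  # a current that gives a few hundred Hz
print(qif_rate_closed_form(J))     # analytic rate
print(qif_rate_euler(J, dt=1e-3))  # noticeably lower: spikes are under-counted at this dt
print(qif_rate_euler(J, dt=1e-4))  # much closer to the analytic value

With that in mind, here is the revised implementation: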
import numpy as np
import matplotlib.pyplot as plt
import nengo

class QIF(nengo.neurons.NeuronType):

    probeable = ('spikes', 'voltage')
    
    def __init__(self, threshold=1, reset=-0.1):
        super(QIF, self).__init__()
        self.threshold = threshold
        self.reset = reset

    def step_math(self, dt, J, spiked, voltage):
        # Euler integration of dv/dt = v**2 + J (the current is not rectified)
        voltage += (voltage**2 + J) * dt
        # spikes output 1/dt so the area of each output pulse is independent of dt
        spikes = voltage > self.threshold
        spiked[:] = spikes / dt
        voltage[spikes] = self.reset
        # clip the voltage from below at the reset value
        voltage[voltage < self.reset] = self.reset
   
    def rates(self, x, gain, bias):
        # closed-form steady-state rate (equation 7), valid for J > 0
        J = self.current(x, gain, bias)
        r = np.zeros_like(J)
        Jmask = J > 0
        sqrtJ = np.sqrt(J[Jmask])
        r[Jmask] = sqrtJ / (np.arctan(self.threshold / sqrtJ) -
                            np.arctan(self.reset / sqrtJ))
        return r
        
# connect the neuron model to the nengo builder system
@nengo.builder.Builder.register(QIF)
def build_qif(model, qif, neurons):
    # initialize the voltage vector to the reset value
    model.sig[neurons]['voltage'] = nengo.builder.Signal(
        qif.reset * np.ones(neurons.size_in), name="%s.voltage" % neurons)
    model.add_op(nengo.builder.neurons.SimNeurons(
        neurons=qif,
        J=model.sig[neurons]['in'],
        output=model.sig[neurons]['out'],
        states=[model.sig[neurons]['voltage']]))
tau_probe = 0.03

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2*np.pi*t))
    a = nengo.Ensemble(n_neurons=50, dimensions=1, neuron_type=QIF(),
                       max_rates=nengo.dists.Uniform(50, 100))
    nengo.Connection(stim, a, synapse=None)

    p = nengo.Probe(a, synapse=tau_probe)
    p_ideal = nengo.Probe(stim, synapse=tau_probe)
    p_spikes = nengo.Probe(a.neurons, 'spikes')
    p_voltage = nengo.Probe(a.neurons, 'voltage')

with nengo.Simulator(model, dt=1e-3) as sim:
    sim.run(2)

plt.figure()
plt.title("QIF() Communication Channel")
plt.plot(sim.trange(), sim.data[p], label="Actual")
plt.plot(sim.trange(), sim.data[p_ideal], linestyle='--', label="Ideal")
plt.xlabel("Time (s)")
plt.ylabel("Decoded")
plt.show()

[Figure: "QIF() Communication Channel" — decoded output vs. ideal input]
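
Since the example also probes the spikes and the voltage, a short additional sketch (again using nengo.utils.matplotlib.rasterplot) can show the reset behaviour directly:

from nengo.utils.matplotlib import rasterplot

plt.figure()
plt.subplot(2, 1, 1)
rasterplot(sim.trange(), sim.data[p_spikes])
plt.ylabel("Neuron")
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), sim.data[p_voltage][:, :3])  # voltage traces of the first three neurons
plt.xlabel("Time (s)")
plt.ylabel("Voltage")
plt.show()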

However, in the end, without modelling any refractory period or recovery dynamics, we basically end up with linear tuning curves (a spiking ReLU model). So either refractory/recovery dynamics should be included, or the default threshold/reset values need to be modified to obtain less linear curves; one possible refractory-period sketch follows the tuning-curve figure below.

u = np.linspace(-1, 1)
plt.figure()
plt.title("QIF() Tuning Curves")
plt.plot(u, nengo.builder.ensemble.get_activities(sim.data[a], a, u[:, None]))
plt.xlabel("x")
plt.ylabel("Firing Rate (Hz)")
plt.show()

[Figure: "QIF() Tuning Curves" — firing rate (Hz) vs. x]
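
One way to get less linear curves would be an absolute refractory period. A rough sketch (the tau_ref parameter and the RefractoryQIF subclass are mine, for illustration only) folds a fixed dead time into the closed-form rate:

class RefractoryQIF(QIF):
    def __init__(self, threshold=1, reset=-0.1, tau_ref=0.002):
        super(RefractoryQIF, self).__init__(threshold=threshold, reset=reset)
        self.tau_ref = tau_ref

    def rates(self, x, gain, bias):
        J = self.current(x, gain, bias)
        r = np.zeros_like(J)
        Jmask = J > 0
        sqrtJ = np.sqrt(J[Jmask])
        # inter-spike interval = refractory time + time to integrate from reset to threshold
        isi = self.tau_ref + (np.arctan(self.threshold / sqrtJ) -
                              np.arctan(self.reset / sqrtJ)) / sqrtJ
        r[Jmask] = 1.0 / isi
        return r

The rates then saturate at 1/tau_ref instead of growing roughly linearly with J. Note that step_math would also need a matching refractory mechanism (e.g. an extra refractory_time state, as in Nengo's LIF) for the spiking simulation to agree with these tuning curves; the build_qif registration above should still apply to the subclass, since the builder falls back along the class MRO.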

@drasmuss transferred this issue from nengo/nengo on Nov 24, 2020