Given multiple input streams (or just one), the correlation game is formulated as a min-max objective over feedforward weights `W` and lateral weights `M`, where `Phi(W, X)` and `Psi(M, X)` are strictly convex functions of the synaptic weights. Strict convexity makes the gradients of these functions non-decreasing, so the optimal dual variables are well defined.
We can use projected gradient descent / ascent to solve a dual of the problem above in an online fashion, which leads to a biologically plausible neural network algorithm.
In the offline setting, the steady-state activities of the output neurons are found by projected gradient ascent, and the synaptic weights are then updated with local learning rules derived from the same objective.
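As a hedged illustration, here is a minimal sketch of one offline update, assuming the standard similarity matching form of the min-max objective; `offline_step`, its signature, and the iteration count are hypothetical, not the library's API:

```python
import torch

def offline_step(X, Y, W, M, dPhi, dPsi, proj, eta_Y, eta_W, eta_M, n_iters=100):
    """Hypothetical sketch: solve for Y by projected gradient ascent,
    then take gradient steps on the synaptic weights W and M."""
    T = X.size(1)
    # Projected gradient ascent on the output activities Y.
    for _ in range(n_iters):
        Y = proj(Y + eta_Y * (W.mm(X) - M.mm(Y)))
    # Hebbian update: move W toward the input-output correlation Y X^T / T.
    W = W + eta_W * (Y.mm(X.t()) / T - dPhi(W, X))
    # Anti-Hebbian update: move M toward the output correlation Y Y^T / T.
    M = M + eta_M * (Y.mm(Y.t()) / T - dPsi(M, X))
    return Y, W, M
```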
We can implement different algorithms from the similarity matching family by specifying different `Phi`, `Psi`, and constraints. When `Phi(W, X) = ||W||_F^2 / 2` and `Psi(M, X) = ||M||_F^2 / 2` with identity constraints on `Y`, `W`, and `M`, we recover plain similarity matching:
```python
sim_match = CorrGame(n=n, k=k,
                     Phi=lambda W, X: (W * W).sum() / 2,
                     Psi=lambda M, X: (M * M).sum() / 2,
                     dPhi=lambda W, X: W,
                     dPsi=lambda M, X: M,
                     constraints={'Y': lambda x: x,
                                  'W': lambda x: x,
                                  'M': lambda x: x},
                     eta={'Y': eta_Y, 'W': eta_W, 'M': eta_M},
                     device=device)
```
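Here `n`, `k`, the step sizes, and `device` are user-supplied. As a hedged example with placeholder values (the commented trainer call is hypothetical; see the notebooks for the actual entry point):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
n, k = 10, 3                            # input / output dimensions (placeholders)
eta_Y, eta_W, eta_M = 0.1, 1e-3, 1e-3   # step sizes (placeholders)

X = torch.randn(n, 500, device=device)  # n features x 500 samples
# Y = sim_match.train_offline(X)        # hypothetical trainer call
```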
where the derivatives `dPhi` and `dPsi` are optional. The algorithm is more efficient when analytical derivatives are provided; otherwise they are approximated by automatic differentiation. We can also pass closed-form solutions to the online or offline trainer to speed up convergence.
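As a sketch of what the autograd fallback can look like (this helper is an illustration, not the library's internal code):

```python
import torch

def dPhi_autograd(Phi, W, X):
    """Compute the gradient of Phi with respect to W by autograd."""
    W = W.detach().requires_grad_(True)
    return torch.autograd.grad(Phi(W, X), W)[0]

# For Phi(W, X) = (W * W).sum() / 2 this returns W itself,
# matching the analytical dPhi above.
```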
When the inputs are two streams, `Phi(W, X) = Tr(W C_xx W^T) / 2` with the input covariance `C_xx = X X^T / T`, and the outputs are constrained to be nonnegative, we obtain nonnegative canonical correlation analysis (NCCA):
```python
import torch.nn.functional as F

ncca = CorrGame(n=[n1, n2], k=top_k,
                Phi=lambda W, X: (W.mm(X.mm(X.t()) / X.size(1)) * W).sum() / 2,
                Psi=lambda M, X: (M * M).sum() / 2,
                dPhi=lambda W, X: W.mm(X.mm(X.t()) / X.size(1)),
                dPsi=lambda M, X: M,
                constraints={'Y': F.relu,
                             'W': lambda x: x,
                             'M': lambda x: x},
                eta={'Y': eta_Y, 'W': eta_W, 'M': eta_M},
                device=device)
```
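For the two-stream case, one natural arrangement is to stack the streams along the feature dimension so their sizes match `n=[n1, n2]`; this convention and the trainer call below are assumptions, shown only for illustration:

```python
n1, n2, T = 20, 30, 1000                 # stream sizes and sample count (placeholders)
X1 = torch.randn(n1, T, device=device)   # first input stream
X2 = torch.randn(n2, T, device=device)   # second input stream
X = torch.cat([X1, X2], dim=0)           # stacked (n1 + n2) x T input
# Y = ncca.train_offline(X)              # hypothetical trainer call
```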
Please see `general-correlation-game.ipynb` and `multi-source-correlation-game.ipynb` for more examples.
...