
Feature - parallel training #38

Open
Ondrysak opened this issue Apr 10, 2021 · 3 comments

Comments

@Ondrysak

Are there any plans to implement training in a parallel manner, as shown in

https://arxiv.org/pdf/2102.11417.pdf

@arvoelke
Contributor

This has been implemented as keras_lmu.LMUFFT, which will be automatically selected if you use keras_lmu.LMU and satisfy these conditions:

if (
    not self.hidden_to_memory
    and not self.memory_to_memory
    and self.memory_d == 1
    and input_shapes[1] is not None
):

There is still a bit of support that can be added for the RNN flags in #35 but let us know if this works for your use case.
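The quoted condition can be mirrored as a standalone check, which may help when deciding whether a given configuration will take the parallel (FFT) path. This is only a sketch: the helper name, argument order, and the `timesteps` parameter (standing in for `input_shapes[1]`, the known sequence length) are hypothetical and not part of the keras_lmu API.

```python
def uses_fft_path(hidden_to_memory, memory_to_memory, memory_d, timesteps):
    """Mirror of the LMUFFT auto-selection condition quoted above.

    The parallel (FFT) implementation applies only when there are no
    recurrent connections into the memory, the memory is one-dimensional,
    and the sequence length is known at build time.
    """
    return (
        not hidden_to_memory
        and not memory_to_memory
        and memory_d == 1
        and timesteps is not None
    )


# A feed-forward configuration with a fixed sequence length qualifies;
# adding recurrence, widening the memory, or using an unknown sequence
# length falls back to the standard RNN implementation.
print(uses_fft_path(False, False, 1, 100))   # True
print(uses_fft_path(True, False, 1, 100))    # False
print(uses_fft_path(False, False, 1, None))  # False
```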

@NarsimhaChilkuri

For now, you might want to look at the implementation here. This is essentially the same as keras_lmu.LMUFFT, with two exceptions: 1) it supports multi-dimensional input; and 2) when return_sequences=False, it implements equation (25) from the paper, which is more efficient.

@drasmuss
Member

Just a note that multi-dimensional input for LMUFFT is now supported in master.
