Compatible with TensorFlow 2.6 - 2.16
Added
- Added an ``input_d`` parameter to ``LMUCell``. This only needs to be specified
  when ``hidden_cell=None`` and ``input_to_hidden=True``; in that scenario it is
  required in order to accurately set ``LMUCell.output_size``. (#56)
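A minimal sketch of how the new parameter would be used (the dimensions and
surrounding code are illustrative assumptions, not part of this changelog entry):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    # Memory-only LMU cell with a skip connection from the input to the output;
    # input_d is needed so the cell can compute its output size up front.
    cell = keras_lmu.LMUCell(
        memory_d=1,
        order=16,
        theta=100,
        hidden_cell=None,
        input_to_hidden=True,
        input_d=32,  # dimensionality of the inputs fed to the cell
    )
    layer = tf.keras.layers.RNN(cell)
    outputs = layer(tf.ones((4, 50, 32)))  # (batch, timesteps, input_d)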
Compatible with TensorFlow 2.4 - 2.13
Changed
- Minimum supported Python version is now 3.8 (3.7 reached end of life in June 2023). (#54)
Compatible with TensorFlow 2.4 - 2.11
Changed
- ``LMUFeedforward`` can now be used with unknown sequence lengths, and ``LMU``
  will use ``LMUFeedforward`` for unknown sequence lengths (as long as the other
  conditions are met, as before); see the sketch after this list. (#52)
- Allow ``input_to_hidden=True`` with ``hidden_cell=None``. This will act as a
  skip connection. (#52)
- Changed the order of LMU states so that the LMU memory state always comes
  first, and any states from the hidden cell come afterwards. (#52)
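A rough sketch combining the first two changes above (layer sizes and the model
structure are illustrative assumptions):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    # Sequence length is unknown (None) at graph-construction time; the
    # feedforward implementation can now be used in this case.
    inputs = tf.keras.Input((None, 10))
    lmu = keras_lmu.LMU(
        memory_d=1,
        order=8,
        theta=12,
        hidden_cell=None,
        input_to_hidden=True,  # acts as a skip connection when hidden_cell=None
    )
    outputs = lmu(inputs)
    model = tf.keras.Model(inputs, outputs)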
Fixed
- Fixed errors when setting non-default dtype on LMU layers. (#52)
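For example, constructing and calling an LMU layer with a non-default dtype (the
sizes below are illustrative assumptions) now works as expected:

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    lmu = keras_lmu.LMU(
        memory_d=1,
        order=8,
        theta=10,
        hidden_cell=tf.keras.layers.SimpleRNNCell(16, dtype="float64"),
        dtype="float64",
    )
    lmu(tf.ones((2, 20, 4), dtype="float64"))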
Compatible with TensorFlow 2.4 - 2.11
Added
- Layers are registered with the Keras serialization system (no longer need to
  be passed as ``custom_objects``). (#49)
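A sketch of what this enables (the model architecture, file path, and save
format are illustrative assumptions):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    model = tf.keras.Sequential(
        [
            tf.keras.Input((None, 4)),
            keras_lmu.LMU(
                memory_d=1,
                order=8,
                theta=10,
                hidden_cell=tf.keras.layers.SimpleRNNCell(16),
            ),
            tf.keras.layers.Dense(1),
        ]
    )
    model.save("lmu_model")

    # Previously the LMU classes had to be supplied via custom_objects; with the
    # layers registered for Keras serialization, loading works directly.
    restored = tf.keras.models.load_model("lmu_model")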
Compatible with TensorFlow 2.1 - 2.9
Added
- Added support for TensorFlow 2.9. (#48)
Compatible with TensorFlow 2.1 - 2.8
Added
- Added support for TensorFlow 2.8. (#46)
- Allow for optional bias on the memory component with the ``use_bias`` flag
  (see the sketch after this list). (#44)
- Added regularizer support for the kernel, recurrent kernel, and bias. (#44)
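A sketch of these options together (the regularizer argument names follow
standard Keras conventions and are assumptions here, as are the layer sizes):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    lmu = keras_lmu.LMU(
        memory_d=2,
        order=16,
        theta=50,
        hidden_cell=tf.keras.layers.SimpleRNNCell(32),
        use_bias=True,  # optional bias on the memory component
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),
        recurrent_regularizer=tf.keras.regularizers.l2(1e-4),
        bias_regularizer=tf.keras.regularizers.l1(1e-5),
    )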
Compatible with TensorFlow 2.1 - 2.7
Added
- Setting ``kernel_initializer=None`` now removes the dense input kernel. (#40)
- The ``keras_lmu.LMUFFT`` layer now supports ``memory_d > 1``. ``keras_lmu.LMU``
  now uses this implementation for all values of ``memory_d`` when the
  feedforward conditions are satisfied (no hidden-to-memory or memory-to-memory
  connections, and the sequence length is not ``None``). (#40)
- Added a ``trainable_theta`` option, which allows the ``theta`` parameter to be
  learned during training (see the sketch after this list). (#41)
- Added a ``discretizer`` option, which controls the method used to solve for the
  ``A`` and ``B`` LMU matrices. This is mainly useful in combination with
  ``trainable_theta=True``, where setting ``discretizer="euler"`` may improve
  training speed (possibly at the cost of some accuracy). (#41)
- The ``keras_lmu.LMUFFT`` layer can now use raw convolution internally (as
  opposed to FFT-based convolution), exposed through the new ``conv_mode``
  option. The new ``truncate_ir`` option allows truncating the impulse response
  when running with a raw convolution mode, for efficiency. Whether FFT-based or
  raw convolution is faster depends on the specific model, hardware, and amount
  of truncation. (#42)
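A sketch of the ``trainable_theta``/``discretizer`` and ``kernel_initializer=None``
options (layer sizes are illustrative assumptions):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    # Learn theta during training, using the faster (less exact) Euler solve
    # for the A and B matrices.
    lmu = keras_lmu.LMU(
        memory_d=4,
        order=32,
        theta=128,
        hidden_cell=tf.keras.layers.SimpleRNNCell(64),
        trainable_theta=True,
        discretizer="euler",
    )

    # Dropping the dense input kernel entirely.
    lmu_no_kernel = keras_lmu.LMU(
        memory_d=4,
        order=32,
        theta=128,
        hidden_cell=tf.keras.layers.SimpleRNNCell(64),
        kernel_initializer=None,
    )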
Changed
- The ``A`` and ``B`` matrices are now stored as constants instead of
  non-trainable variables. This can improve the training/inference speed, but it
  means that saved weights from previous versions will be incompatible. (#41)
- Renamed ``keras_lmu.LMUFFT`` to ``keras_lmu.LMUFeedforward``. (#42)
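A sketch of the renamed feedforward layer with the ``conv_mode``/``truncate_ir``
options added above (the sizes and truncation threshold are illustrative, and
``"raw"`` is assumed to name the non-FFT mode):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    lmu_ff = keras_lmu.LMUFeedforward(
        memory_d=4,
        order=32,
        theta=128,
        hidden_cell=tf.keras.layers.SimpleRNNCell(64),
        conv_mode="raw",   # raw convolution instead of FFT-based convolution
        truncate_ir=1e-4,  # truncate the impulse response for efficiency
    )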
Fixed
- Fixed dropout support in TensorFlow 2.6. (#42)
Changed
- Raise a validation error if ``hidden_to_memory`` or ``input_to_hidden`` are
  True when ``hidden_cell=None``. (#26)
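For example, something like the following would now be rejected (the exact
exception type and the point at which the check runs are not specified here, so
the snippet just catches a generic exception):

.. code-block:: python

    import keras_lmu

    try:
        keras_lmu.LMUCell(
            memory_d=1,
            order=4,
            theta=10,
            hidden_cell=None,
            hidden_to_memory=True,  # nothing to connect to without a hidden cell
        )
    except Exception as err:
        print(type(err).__name__, err)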
Fixed
- Fixed a bug with the autoswapping in ``keras_lmu.LMU`` during training. (#28)
- Fixed a bug where the dropout mask was not being reset properly in the hidden
  cell. (#29)
Changed
- Renamed the module from ``lmu`` to ``keras_lmu`` (so it will now be imported
  via ``import keras_lmu``), renamed the package from ``lmu`` to ``keras-lmu``
  (so it will now be installed via ``pip install keras-lmu``), and changed any
  references to "NengoLMU" to "KerasLMU" (since this implementation is based on
  the Keras framework rather than Nengo). In the future the ``lmu`` namespace
  will be used as a meta-package to encapsulate LMU implementations in different
  frameworks. (#24)
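In short, existing installations and imports need to be updated:

.. code-block:: python

    # Previously: pip install lmu / import lmu
    # Now:        pip install keras-lmu
    import keras_lmu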
Added
- Added documentation for package description, installation, usage, API, examples, and project information. (#20)
- Added LMU FFT cell variant and auto-switching LMU class. (#21)
- LMUs can now be used with any Keras RNN cell (e.g. LSTMs or GRUs) through the
  ``hidden_cell`` parameter. This can take an RNN cell (like
  ``tf.keras.layers.SimpleRNNCell`` or ``tf.keras.layers.LSTMCell``), a
  feedforward layer (like ``tf.keras.layers.Dense``), or ``None`` (to create a
  memory-only LMU). The output of the LMU memory component will be fed to the
  ``hidden_cell``; a combined sketch follows this list. (#22)
- Added ``hidden_to_memory``, ``memory_to_memory``, and ``input_to_hidden``
  parameters to ``LMUCell``, which can be used to enable/disable connections
  between components of the LMU. They default to disabled. (#22)
- LMUs can now be used with multi-dimensional memory components. This is
  controlled through a new ``memory_d`` parameter of ``LMUCell``. (#22)
- Added a ``dropout`` parameter to ``LMUCell`` (which applies dropout to the
  input) and ``recurrent_dropout`` (which applies dropout to the
  ``memory_to_memory`` connection, if it is enabled). Note that dropout can be
  added in the hidden component through the ``hidden_cell`` object. (#22)
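A combined sketch of these parameters (shown with the current ``keras_lmu``
module name, even though the module was still called ``lmu`` at the time of this
release; sizes and dropout rates are illustrative assumptions):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    cell = keras_lmu.LMUCell(
        memory_d=2,             # multi-dimensional memory component
        order=16,
        theta=100,
        hidden_cell=tf.keras.layers.LSTMCell(64),  # any RNN cell, Dense, or None
        memory_to_memory=True,  # enable the memory-to-memory connection
        dropout=0.2,            # dropout on the input
        recurrent_dropout=0.1,  # dropout on the memory-to-memory connection
    )
    layer = tf.keras.layers.RNN(cell, return_sequences=True)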
Changed
- Renamed ``lmu.lmu`` module to ``lmu.layers``. (#22)
- Combined the ``*_encoders_initializer`` parameters of ``LMUCell`` into a single
  ``kernel_initializer`` parameter. (#22)
- Combined the ``*_kernel_initializer`` parameters of ``LMUCell`` into a single
  ``recurrent_kernel_initializer`` parameter. (#22)
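For instance (initializer values and sizes are illustrative, and the package is
shown under its current ``keras_lmu`` name):

.. code-block:: python

    import tensorflow as tf
    import keras_lmu

    cell = keras_lmu.LMUCell(
        memory_d=1,
        order=8,
        theta=20,
        hidden_cell=tf.keras.layers.SimpleRNNCell(32),
        kernel_initializer="glorot_uniform",        # single input-kernel initializer
        recurrent_kernel_initializer="orthogonal",  # single recurrent-kernel initializer
        memory_to_memory=True,
    )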
Removed
- Removed the ``Legendre``, ``InputScaled``, ``LMUCellODE``, and
  ``LMUCellGating`` classes. (#22)
- Removed the ``method``, ``realizer``, and ``factory`` arguments from
  ``LMUCell`` (they will take on the same default values as before, they just
  cannot be changed). (#22)
- Removed the ``trainable_*`` arguments from ``LMUCell``. This functionality is
  largely redundant with the new functionality added for enabling/disabling
  internal LMU connections. These were primarily used previously for e.g. setting
  a connection to zero and then disabling learning, which can now be done more
  efficiently by disabling the connection entirely. (#22)
- Removed the ``units`` and ``hidden_activation`` parameters of ``LMUCell``
  (these are now specified directly in the ``hidden_cell``). (#22)
- Removed the dependency on ``nengolib``. (#22)
- Dropped support for Python 3.5, which reached its end of life in September
  2020. (#22)
Initial release of KerasLMU 0.1.0! Supports Python 3.5+.
The API is considered unstable; parts are likely to change in the future.
Thanks to all of the contributors for making this possible!