[ENH] xLSTMTime implementation
#2034
Annotations
10 errors and 1 warning
Run pytest:
tests/test_models/test_xlstmtime.py#L156
TestXLSTMTime.test_initialization[mlstm]
assert False
+ where False = isinstance(None, <class 'torch.nn.modules.linear.Linear'>)
+ where None = xLSTMTime(\n "dataset_parameters": None\n "decomposition_kernel": 25\n "device": cpu\n "dropout": 0.1\n "hidden_size": 64\n "input_projection_size": None\n "input_size": 10\n "learning_rate": 0.001\n "log_gradient_flow": False\n "log_interval": -1\n "log_val_interval": -1\n "logging_metrics": ModuleList()\n "monotone_constaints": {}\n "num_layers": 1\n "optimizer": adam\n "optimizer_params": None\n "output_size": 5\n "output_transformer": None\n "reduce_on_plateau_min_lr": 1e-05\n "reduce_on_plateau_patience": 1000\n "reduce_on_plateau_reduction": 2.0\n "weight_decay": 0.0\n "xlstm_type": mlstm\n (loss): SMAPE()\n (logging_metrics): ModuleList()\n (decomposition): SeriesDecomposition(\n (avg_pool): AvgPool1d(kernel_size=(25,), stride=(1,), padding=(12,))\n )\n (batch_norm): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (lstm): mLSTMNetwork(\n (mlstm_layer): mLSTMLayer(\n (dropout): Dropout(p=0.1, inplace=False)\n (cells): ModuleList(\n (0): mLSTMCell(\n (Wq): Linear(in_features=64, out_features=64, bias=True)\n (Wk): Linear(in_features=64, out_features=64, bias=True)\n (Wv): Linear(in_features=64, out_features=64, bias=True)\n (Wi): Linear(in_features=64, out_features=64, bias=True)\n (Wf): Linear(in_features=64, out_features=64, bias=True)\n (Wo): Linear(in_features=64, out_features=64, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n (ln_q): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_k): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_v): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_i): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_f): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_o): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (sigmoid): Sigmoid()\n (tanh): Tanh()\n )\n )\n )\n (fc): Linear(in_features=64, out_features=64, bias=True)\n )\n (output_linear): Linear(in_features=64, out_features=5, bias=True)\n (instance_norm): InstanceNorm1d(5, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n).input_linear
+ and <class 'torch.nn.modules.linear.Linear'> = nn.Linear
Run pytest:
tests/test_models/test_xlstmtime.py#L156
TestXLSTMTime.test_initialization[slstm]
assert False
+ where False = isinstance(None, <class 'torch.nn.modules.linear.Linear'>)
+ where None = xLSTMTime(\n "dataset_parameters": None\n "decomposition_kernel": 25\n "device": cpu\n "dropout": 0.1\n "hidden_size": 64\n "input_projection_size": None\n "input_size": 10\n "learning_rate": 0.001\n "log_gradient_flow": False\n "log_interval": -1\n "log_val_interval": -1\n "logging_metrics": ModuleList()\n "monotone_constaints": {}\n "num_layers": 1\n "optimizer": adam\n "optimizer_params": None\n "output_size": 5\n "output_transformer": None\n "reduce_on_plateau_min_lr": 1e-05\n "reduce_on_plateau_patience": 1000\n "reduce_on_plateau_reduction": 2.0\n "weight_decay": 0.0\n "xlstm_type": slstm\n (loss): SMAPE()\n (logging_metrics): ModuleList()\n (decomposition): SeriesDecomposition(\n (avg_pool): AvgPool1d(kernel_size=(25,), stride=(1,), padding=(12,))\n )\n (batch_norm): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (lstm): sLSTMNetwork(\n (slstm_layer): sLSTMLayer(\n (cells): ModuleList(\n (0): sLSTMCell(\n (input_weights): Linear(in_features=64, out_features=256, bias=True)\n (hidden_weights): Linear(in_features=64, out_features=256, bias=True)\n (ln_cell): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_hidden): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n (ln_input): LayerNorm((256,), eps=1e-05, elementwise_affine=True)\n (ln_hidden_update): LayerNorm((256,), eps=1e-05, elementwise_affine=True)\n (dropout_layer): Dropout(p=0.1, inplace=False)\n (tanh): Tanh()\n (sigmoid): Sigmoid()\n )\n )\n (layer_norm_layers): ModuleList(\n (0): LayerNorm((64,), eps=1e-05, elementwise_affine=True)\n )\n )\n (fc): Linear(in_features=64, out_features=64, bias=True)\n )\n (output_linear): Linear(in_features=64, out_features=5, bias=True)\n (instance_norm): InstanceNorm1d(5, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n).input_linear
+ and <class 'torch.nn.modules.linear.Linear'> = nn.Linear
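Both initialization failures share one root cause: the constructor leaves `input_linear` as `None` when `input_projection_size` is unset (it is `None` in both reprs above), so `isinstance(model.input_linear, nn.Linear)` fails. A minimal sketch of a likely fix, assuming the attribute names shown in the repr; the helper name and signature are hypothetical, not the PR's actual code:

```python
from typing import Optional

import torch.nn as nn


def build_input_projection(
    input_size: int,
    hidden_size: int,
    input_projection_size: Optional[int] = None,
) -> nn.Linear:
    # Default to hidden_size instead of leaving the layer as None,
    # so self.input_linear is always an nn.Linear as the test asserts.
    # hidden_size=64 matches the 64-wide cells shown in the repr.
    projection_size = input_projection_size or hidden_size
    return nn.Linear(input_size, projection_size)
```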
Run pytest:
tests/test_models/test_xlstmtime.py#L168
TestXLSTMTime.test_forward[mlstm]
IndexError: too many indices for tensor of dimension 3
Run pytest:
tests/test_models/test_xlstmtime.py#L168
TestXLSTMTime.test_forward[slstm]
IndexError: too many indices for tensor of dimension 3
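Both forward failures raise the same `IndexError: too many indices for tensor of dimension 3`, which PyTorch emits when a `(batch, seq_len, features)` tensor is indexed with four indices. A standalone reproduction; the batch and sequence sizes are assumed, only `input_size=10` comes from the repr above:

```python
import torch

# (batch, seq_len, features) as the tests presumably supply
x = torch.randn(4, 32, 10)

try:
    _ = x[:, :, :, 0]  # four indices on a 3-D tensor
except IndexError as err:
    print(err)  # "too many indices for tensor of dimension 3"
```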
Run pytest:
tests/test_models/test_xlstmtime.py#L193
TestXLSTMTime.test_predict[mlstm]
IndexError: too many indices for tensor of dimension 3
Run pytest:
tests/test_models/test_xlstmtime.py#L193
TestXLSTMTime.test_predict[slstm]
IndexError: too many indices for tensor of dimension 3
Run pytest:
tests/test_models/test_xlstmtime.py#L219
TestEdgeCases.test_single_sequence_length[mlstm]
IndexError: too many indices for tensor of dimension 3
Run pytest:
tests/test_models/test_xlstmtime.py#L219
TestEdgeCases.test_single_sequence_length[slstm]
IndexError: too many indices for tensor of dimension 3
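test_predict and the single-sequence edge cases fail before reaching their own assertions, so they are most likely hitting the same indexing bug inside forward; fixing that path should clear all six IndexError annotations. Until then, a shape guard like the hypothetical helper below would at least surface a readable error instead of a bare IndexError:

```python
import torch


def check_input_shape(x: torch.Tensor) -> None:
    # Hypothetical validation helper, not part of the PR: the model
    # consumes (batch, seq_len, features), so anything else should
    # fail loudly with a descriptive message.
    if x.dim() != 3:
        raise ValueError(
            f"expected a 3-D (batch, seq_len, features) tensor, "
            f"got shape {tuple(x.shape)}"
        )
```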
Run pytest:
tests/test_models/test_xlstmtime.py#L253
TestEdgeCases.test_input_nan_handling[mlstm]
AssertionError: Expected RuntimeError or ValueError with NaN input
assert False
+ where False = isinstance(IndexError('too many indices for tensor of dimension 3'), (<class 'RuntimeError'>, <class 'ValueError'>))
Run pytest:
tests/test_models/test_xlstmtime.py#L253
TestEdgeCases.test_input_nan_handling[slstm]
AssertionError: Expected RuntimeError or ValueError with NaN input
assert False
+ where False = isinstance(IndexError('too many indices for tensor of dimension 3'), (<class 'RuntimeError'>, <class 'ValueError'>))
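The NaN tests accept only RuntimeError or ValueError, but the same IndexError fires first. Once forward is fixed, an explicit check along these lines would produce the expected failure mode (a sketch; the helper name and its call site are assumed):

```python
import torch


def reject_nan(x: torch.Tensor) -> None:
    # Hypothetical guard, not part of the PR: raises the ValueError
    # that test_input_nan_handling accepts.
    if torch.isnan(x).any():
        raise ValueError("input contains NaN values")
```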
Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636