diff --git a/docs/docs/callbacks/train_logger.mdx b/docs/docs/callbacks/train_logger.mdx index d0985d4..022ce71 100644 --- a/docs/docs/callbacks/train_logger.mdx +++ b/docs/docs/callbacks/train_logger.mdx @@ -24,7 +24,7 @@ TrainLogger is a NeuralPy Callback class that generates training logs. It genera ## Supported Arguments - - `path`: (String) Directory where the logs will be stored +- `path`: (String) Directory where the logs will be stored ## Example Code diff --git a/docs/docs/layers/activation_functions/gelu.mdx b/docs/docs/layers/activation_functions/gelu.mdx index adfa1cc..709a520 100644 --- a/docs/docs/layers/activation_functions/gelu.mdx +++ b/docs/docs/layers/activation_functions/gelu.mdx @@ -26,7 +26,7 @@ To learn more about GELU, please check PyTorch [documentation](https://pytorch.o ## Supported Arguments -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. ## Example Code diff --git a/docs/docs/layers/activation_functions/leaky_relu.mdx b/docs/docs/layers/activation_functions/leaky_relu.mdx index 047858a..6a32845 100644 --- a/docs/docs/layers/activation_functions/leaky_relu.mdx +++ b/docs/docs/layers/activation_functions/leaky_relu.mdx @@ -27,7 +27,7 @@ To learn more about LeakyReLU, please check PyTorch [documentation](https://pyto ## Supported Arguments - `negative_slope`: (Float) A negative slope for the LeakyReLU, default value is 0.01 -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. 
## Example Code diff --git a/docs/docs/layers/activation_functions/relu.mdx b/docs/docs/layers/activation_functions/relu.mdx index 80ec180..be3c272 100644 --- a/docs/docs/layers/activation_functions/relu.mdx +++ b/docs/docs/layers/activation_functions/relu.mdx @@ -26,7 +26,7 @@ To learn more about ReLU, please check PyTorch [documentation](https://pytorch.o ## Supported Arguments -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. ## Example Code diff --git a/docs/docs/layers/activation_functions/selu.mdx b/docs/docs/layers/activation_functions/selu.mdx index 6d42aa0..09390cb 100644 --- a/docs/docs/layers/activation_functions/selu.mdx +++ b/docs/docs/layers/activation_functions/selu.mdx @@ -26,7 +26,7 @@ To learn more about SELU, please check PyTorch [documentation](https://pytorch.o ## Supported Arguments -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. ## Example Code diff --git a/docs/docs/layers/activation_functions/sigmoid.mdx b/docs/docs/layers/activation_functions/sigmoid.mdx index 68b85c5..1f511b2 100644 --- a/docs/docs/layers/activation_functions/sigmoid.mdx +++ b/docs/docs/layers/activation_functions/sigmoid.mdx @@ -26,7 +26,7 @@ To learn more about Sigmoid, please check PyTorch [documentation](https://pytorc ## Supported Arguments -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. 
## Example Code diff --git a/docs/docs/layers/activation_functions/softmax.mdx b/docs/docs/layers/activation_functions/softmax.mdx index eafc491..bbb9016 100644 --- a/docs/docs/layers/activation_functions/softmax.mdx +++ b/docs/docs/layers/activation_functions/softmax.mdx @@ -27,7 +27,7 @@ To learn more about Softmax, please check PyTorch [documentation](https://pytorc ## Supported Arguments - `dim`: (Integer) A dimension along which Softmax will be computed (so every slice along dim will sum to 1). -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. ## Example Code diff --git a/docs/docs/layers/activation_functions/tanh.mdx b/docs/docs/layers/activation_functions/tanh.mdx index f456185..02ee04d 100644 --- a/docs/docs/layers/activation_functions/tanh.mdx +++ b/docs/docs/layers/activation_functions/tanh.mdx @@ -26,7 +26,7 @@ To learn more about Tanh, please check PyTorch [documentation](https://pytorch.o ## Supported Arguments -- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer. +- `name=None`: (String) Name of the activation function layer; if not provided, a unique name is generated automatically for the layer. ## Example Code diff --git a/docs/docs/layers/linear/bilinear.mdx b/docs/docs/layers/linear/bilinear.mdx index 7c3e9be..1d5a5c9 100644 --- a/docs/docs/layers/linear/bilinear.mdx +++ b/docs/docs/layers/linear/bilinear.mdx @@ -14,12 +14,13 @@ hide_title: true neuralpy.layers.linear.Bilinear(n_nodes, n1_features=None, n2_features=None, bias=True, name=None) ``` -:::info -Bilinear Layer is mostly stable and can be used for any project. In the future, any chance of breaking changes is very low. 
+Bilinear Layer is unstable and buggy; it is not yet ready for any real use ::: + Bilinear layer performs a bilinear transformation of the input. To learn more about Bilinear layers, please check [pytorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=bilinear) for it. diff --git a/docs/docs/layers/pooling/avgpool3d.mdx b/docs/docs/layers/pooling/avgpool3d.mdx index dd67890..64c086f 100644 --- a/docs/docs/layers/pooling/avgpool3d.mdx +++ b/docs/docs/layers/pooling/avgpool3d.mdx @@ -1,7 +1,7 @@ --- id: avgpool3d title: AvgPool3D -sidebar_label: AvgPool1D +sidebar_label: AvgPool3D slug: /layers/pooling-layers/avgpool3d description: Applies Batch Normalization over a 2D or 3D input image: https://user-images.githubusercontent.com/34741145/81591141-99752900-93d9-11ea-9ef6-cc2c68daaa19.png diff --git a/docs/docs/layers/sparse/embedding.mdx b/docs/docs/layers/sparse/embedding.mdx index 81cf390..7af3833 100644 --- a/docs/docs/layers/sparse/embedding.mdx +++ b/docs/docs/layers/sparse/embedding.mdx @@ -14,12 +14,13 @@ hide_title: true neuralpy.layers.sparse.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, name=None) ``` -:::info -Embedding is mostly stable and can be used for any project. In the future, any chance of breaking changes is very low. +Embedding Layer is unstable and buggy; it is not yet ready for any real use ::: + A simple lookup table that stores embeddings of a fixed dictionary and size. For more information, check [this](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html?highlight=embedding#torch.nn.Embedding) page diff --git a/docs/support.mdx b/docs/support.mdx index 0086a06..30c0907 100644 --- a/docs/support.mdx +++ b/docs/support.mdx @@ -14,4 +14,5 @@ If you need help or you need some information regarding NeuralPy, then are the f 1. Raise an issue on Github 2. Join our discord server (https://discord.gg/6aTTwbW) -3. 
Contact with Abhishek Chatterjee(abhishek.chatterjee97@protonmail.com) +3. Start a discussion on GitHub Discussions (https://github.com/imdeepmind/NeuralPy/discussions/new) +4. Contact Abhishek Chatterjee (abhishek.chatterjee97@protonmail.com)
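The auto-naming convention that the `name=None` entries above document can be illustrated with a small sketch. This is a hypothetical helper, not NeuralPy's actual implementation: it assumes the unique name is derived from the layer type plus a running counter, which is one common way such defaults are implemented.

```python
import itertools

# Global counter shared by all layers; gives each auto-named layer a unique suffix
_layer_counter = itertools.count(1)

def resolve_layer_name(layer_type, name=None):
    """Return the user-supplied name, or generate a unique one from the layer type."""
    if name is not None:
        return name
    # e.g. "relu_1", "tanh_2", ... — hypothetical scheme for illustration only
    return f"{layer_type.lower()}_{next(_layer_counter)}"
```

Under this scheme, `ReLU()` and `Tanh()` layers created without an explicit `name` would each receive a distinct identifier, while passing `name="my_relu"` uses that string unchanged.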