fix documentation typos #1319

Open · wants to merge 1 commit into master
2 changes: 1 addition & 1 deletion doc/containers.md
@@ -152,7 +152,7 @@ pred = mlp:forward(torch.randn(10,3)) -- 2D Tensor of size 10x3 goes through the
-- Each Linear+Reshape module receives a slice of dimension 1
-- which corresponds to a 1D Tensor of size 3.
-- Eventually all the Linear+Reshape modules' outputs of size 2x1
- -- are concatenated alond the 2nd dimension (column space)
+ -- are concatenated along the 2nd dimension (column space)
-- to form pred, a 2D Tensor of size 2x10.

> pred
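For context, a minimal sketch of the `nn.Parallel` construction this hunk documents, assuming the setup shown earlier in the same section of `doc/containers.md` (the layer sizes are taken from the comments in the hunk above):

```lua
require 'nn'

-- Parallel(1,2): split the input along dimension 1 (rows) and
-- concatenate the per-module outputs along dimension 2 (columns)
mlp = nn.Parallel(1, 2)
for i = 1, 10 do
   -- each branch maps a 1D slice of size 3 to a 2x1 output
   mlp:add(nn.Sequential():add(nn.Linear(3, 2)):add(nn.Reshape(2, 1)))
end

pred = mlp:forward(torch.randn(10, 3))  -- ten 2x1 outputs -> 2x10 Tensor
print(pred:size())                      -- 2 x 10
```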
2 changes: 1 addition & 1 deletion doc/criterion.md
@@ -657,7 +657,7 @@ prl:add(p1_mlp)
prl:add(p2_mlp)

-- now we define our top level network that takes this parallel table
- -- and computes the pairwise distance betweem the pair of outputs
+ -- and computes the pairwise distance between the pair of outputs
mlp = nn.Sequential()
mlp:add(prl)
mlp:add(nn.PairwiseDistance(1))
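For context, a sketch of the siamese setup around this hunk, assuming `p1_mlp` and `p2_mlp` are the weight-sharing `Linear` branches defined just above this point in `doc/criterion.md`:

```lua
require 'nn'

-- two branches sharing the same parameters (siamese network)
p1_mlp = nn.Sequential():add(nn.Linear(5, 2))
p2_mlp = p1_mlp:clone('weight', 'bias', 'gradWeight', 'gradBias')

prl = nn.ParallelTable()
prl:add(p1_mlp)
prl:add(p2_mlp)

-- top-level network: apply both branches to a pair of inputs,
-- then compute the pairwise distance between the pair of outputs
mlp = nn.Sequential()
mlp:add(prl)
mlp:add(nn.PairwiseDistance(1))

d = mlp:forward({torch.randn(5), torch.randn(5)})  -- 1D Tensor of size 1
```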
2 changes: 1 addition & 1 deletion doc/simple.md
@@ -170,7 +170,7 @@ Applies the following transformation to the incoming (optionally) normalized sparse input data:
- `b_i` is a per-feature bias,
- `x_i_max` is the maximum absolute value seen so far during training for feature `i`.

- The normalization of input features is very useful to avoid explosions during training if sparse input values are really high. It also helps ditinguish between the presence and the absence of a given feature.
+ The normalization of input features is very useful to avoid explosions during training if sparse input values are really high. It also helps distinguish between the presence and the absence of a given feature.

#### Parameters ####
- `inputSize` is the maximum number of features.
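For context, a minimal sketch of the running-max normalization the corrected paragraph describes. This only illustrates the formula, not the module's actual implementation; the helper name and the small epsilon guarding division by zero are assumptions:

```lua
-- running per-feature maxima, updated as training examples arrive
local xMax = {}

-- hypothetical helper: indices is a LongTensor of feature ids,
-- values a Tensor of the corresponding sparse feature values
local function normalizeSparse(indices, values)
   local out = values:clone()
   for k = 1, indices:size(1) do
      local i = indices[k]
      -- x_i_max: largest absolute value seen so far for feature i
      xMax[i] = math.max(xMax[i] or 0, math.abs(values[k]))
      -- divide by x_i_max; the epsilon (assumed) avoids 0/0
      out[k] = values[k] / (xMax[i] + 1e-12)
   end
   return out
end
```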