docs_input/notebooks/03_transforms.ipynb (+3 -3)
@@ -200,13 +200,13 @@
 "### Reductions\n",
 "A reduction operation takes multiple values and aggregates them into a smaller number of values. Most reductions take a large number of values and reduce them to a single value. Reductions are one of the most common operations performed on the GPU, which means they've been heavily researched and optimized for highly-parallel processors. Modern NVIDIA GPUs have special instructions for performing reductions to give even larger speedups over naive implementations. All of these details are hidden from the user and MatX automatically chooses the optimized path based on the hardware capabilities.\n",
 "\n",
-"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`,` mean`, `any`, and `all`. Note that the max and min reductions use the name `rmin` and `rmax` to avoid name collision with the element-wise `min` and `max` operators.\n",
+"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`.\n",
 "\n",
 "#### Full Reduction\n",
 "In this example we reduce an entire tensor to a single value by applying the reduction across all dimensions of the tensor. We apply the same random initialization from previous examples on a 2D tensor `A`. Note that the output tensor must be zeroed for a `sum` reduction since that value is continually added to during the reduction. Not initializing the output tensor will give undefined results since the variables are used as accumulators throughout the reduction. With the tensor initialized, we perform both a `max` and `sum` reduction across all dimensions of the tensor:\n",
 "\n",
 "```c++\n",
-"rmax(MD0, A);\n",
+"max(MD0, A);\n",
 "sum(AD0, A);\n",
 "```\n",
 "\n",
@@ -248,7 +248,7 @@
 "Reductions can also be performed across certain dimensions instead of the whole tensor. Dimensional reductions are useful in situations where each row contains data for a different user, for example, and we wish to sum up each user's data. By setting the output tensor view to a 1D tensor, independent reductions can be performed across the input tensor where each output element corresponds to a single row reduction from the input. Using the same tensor `A` from the previous example, we only change the output tensor type to be a 1D tensor instead of a scalar:\n",