Commit 0f26036

Renamed rmin/rmax functions to min/max, and the element-wise versions are now minimum/maximum, to match Python (#589)

* Renamed rmin/rmax reduction functions to min/max; the element-wise versions are now minimum/maximum to match Python
* Deprecated rmin/rmax with a deprecation warning
1 parent 9518fbf commit 0f26036
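
For callers, the change is a pure rename. A minimal before/after sketch, assuming the output/input/stream reduction form used in the exercise solution touched by this commit; the helper function and tensor ranks here are illustrative, not part of the commit:

```c++
#include "matx.h"

using namespace matx;

// Hypothetical helper only to show the rename; redv and dv mirror the
// exercise solution changed by this commit.
void migrate(tensor_t<float, 0> &redv, tensor_t<float, 1> &dv) {
  // Before this commit (still compiles, but now emits a deprecation warning):
  // rmax(redv, dv, 0);

  // After this commit: same output, input, and stream parameters, new name.
  max(redv, dv, 0);
}
```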

17 files changed (+159, -160 lines)
docs_input/api/math/extrema/max.rst (+16, -2)
@@ -3,16 +3,30 @@
 max
 ===
 
-Element-wise maximum
+Reduces the input by the maximum values across the specified axes or performs
+an element-wise maximum on each element in the input operators.
 
+.. doxygenfunction:: max(const InType &in, const int (&dims)[D])
+.. doxygenfunction:: max(const InType &in)
 .. doxygenfunction:: max(Op t, Op t2)
 
 Examples
 ~~~~~~~~
 
-.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
    :language: cpp
    :start-after: example-begin max-test-1
    :end-before: example-end max-test-1
    :dedent:
 
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
+   :language: cpp
+   :start-after: example-begin max-test-2
+   :end-before: example-end max-test-2
+   :dedent:
+
+.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+   :language: cpp
+   :start-after: example-begin max-el-test-1
+   :end-before: example-end max-el-test-1
+   :dedent:

docs_input/api/math/extrema/min.rst (+16, -2)
@@ -3,16 +3,30 @@
 min
 ===
 
-Element-wise minimum
+Reduces the input by the minimum values across the specified axes or performs
+an element-wise minimum on each element in the input operators.
 
+.. doxygenfunction:: min(const InType &in, const int (&dims)[D])
+.. doxygenfunction:: min(const InType &in)
 .. doxygenfunction:: min(Op t, Op t2)
 
 Examples
 ~~~~~~~~
 
-.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
    :language: cpp
    :start-after: example-begin min-test-1
    :end-before: example-end min-test-1
    :dedent:
 
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
+   :language: cpp
+   :start-after: example-begin min-test-2
+   :end-before: example-end min-test-2
+   :dedent:
+
+.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+   :language: cpp
+   :start-after: example-begin min-el-test-1
+   :end-before: example-end min-el-test-1
+   :dedent:

docs_input/api/math/extrema/rmax.rst (-25)

This file was deleted.

docs_input/api/math/extrema/rmin.rst (-25)

This file was deleted.

docs_input/notebooks/03_transforms.ipynb (+3, -3)
@@ -200,13 +200,13 @@
 "### Reductions\n",
 "A reduction operation takes multiple values and aggregates those into a smaller number of values. Most reductions take a large number of values and reduce them to a single value. Reductions are one of the most common operations performed on the GPU, which means they've been heavily researched and optimized for highly-parallel processors. Modern NVIDIA GPUs have special instructions for performing reductions to give even larger speedups over naive implementations. All of these details are hidden from the user and MatX automatically chooses the optimized path based on the hardware capabilities. \n",
 "\n",
-"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`. Note that the max and min reductions use the names `rmin` and `rmax` to avoid a name collision with the element-wise `min` and `max` operators.\n",
+"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`.\n",
 "\n",
 "#### Full Reduction\n",
 "In this example we reduce an entire tensor to a single value by applying the reduction across all dimensions of the tensor. We apply the same random initialization from previous examples on a 2D tensor `A`. Note that the output tensor must be zeroed for a `sum` reduction since that value is continually added to during the reduction. Not initializing the output tensor will give undefined results since the variables are used as accumulators throughout the reduction. With the tensor initialized, we perform both a `max` and `sum` reduction across all dimensions of the tensor:\n",
 "\n",
 "```c++\n",
-"rmax(MD0, A);\n",
+"max(MD0, A);\n",
 "sum(AD0, A);\n",
 "```\n",
 "\n",
@@ -248,7 +248,7 @@
 "Reductions can also be performed across certain dimensions instead of the whole tensor. Dimensional reductions are useful in situations where each row contains data for a different user, for example, and we wish to sum up each user's data. By setting the output tensor view to a 1D tensor, independent reductions can be performed across the input tensor where each output element corresponds to a single row reduction from the input. Using the same tensor `A` from the previous example, we only change the output tensor type to be a 1D tensor instead of a scalar:\n",
 "\n",
 "```c++\n",
-" rmax(MD1, A);\n",
+" max(MD1, A);\n",
 " sum(AD1, A); \n",
 "```\n",
 "\n",

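Putting the notebook's two snippets together after the rename, here is a minimal sketch. It borrows the `tensor_t`, `random`, and `run()` style used elsewhere in this commit; the 4x8 shape and the `matx.h` include path are illustrative assumptions rather than values from the notebook:

```c++
#include "matx.h"

using namespace matx;

int main() {
  tensor_t<float, 2> A({4, 8});            // 2D input tensor
  tensor_t<float, 0> MD0, AD0;             // scalar outputs for full reductions
  tensor_t<float, 1> MD1({4}), AD1({4});   // one output element per row

  (A = random<float>(A.Shape(), NORMAL)).run();
  (AD0 = 0).run();                         // sum accumulators must start at zero
  (AD1 = 0).run();

  max(MD0, A);                             // full max reduction (formerly rmax)
  sum(AD0, A);                             // full sum reduction
  max(MD1, A);                             // per-row max reduction (formerly rmax)
  sum(AD1, A);                             // per-row sum reduction

  cudaStreamSynchronize(0);
  return 0;
}
```
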
docs_input/notebooks/exercises/solutions/example3_assignment1.cu (+3, -3)
@@ -88,7 +88,7 @@ int main() {
  * value. Scale the original tensor by this max value and do another max
  * reduction. The final reduction should be 1.0.
  *
- * Hint: the reduction function is named rmax and takes the output, input, and
+ * Hint: the reduction function is named max and takes the output, input, and
  * stream as parameters
  * https://devtech-compute.gitlab-master-pages.nvidia.com/matx/api/reduce.html
  ****************************************************************************************************/
@@ -98,9 +98,9 @@ int main() {
   (dv = random<float>(dv.Shape(), NORMAL)).run();
 
   tensor_t<float, 0> redv;
-  rmax(redv, dv, 0);
+  max(redv, dv, 0);
   (dv = dv / redv).run();
-  rmax(redv, dv, 0);
+  max(redv, dv, 0);
   /*** End editing ***/
 
   cudaStreamSynchronize(0);

examples/cgsolve.cu (+1, -1)
@@ -82,7 +82,7 @@ int main([[maybe_unused]] int argc, [[maybe_unused]] char **argv)
 
   (Bout = matvec(A, X)).run();
   (norm = sum((Bout-B)*(Bout-B))).run();
-  (maxn = matx::rmax(sqrt(norm))).run();
+  (maxn = matx::max(sqrt(norm))).run();
 
   cudaDeviceSynchronize();
   printf ("max l2 norm: %f\n", (float)sqrt(maxn()));

include/matx/operators/binary_operators.h (+4, -4)
@@ -248,7 +248,7 @@ namespace matx
   Op pow(Op t, Op t2) {}
 
   /**
-   * Compute max(t, t2) of two operators or tensors
+   * Compute element-wise max(t, t2) of two operators or tensors
   * @param t
   *   LHS tensor or operator input
   * @param t2
@@ -257,7 +257,7 @@ namespace matx
   Op max(Op t, Op t2) {}
 
   /**
-   * Compute min(t, t2) of two operators or tensors
+   * Compute element-wise min(t, t2) of two operators or tensors
   * @param t
   *   LHS tensor or operator input
   * @param t2
@@ -384,9 +384,9 @@ namespace matx
   DEFINE_BINARY_OP(operator&, detail::AndOp);
   DEFINE_BINARY_OP(operator^, detail::XorOp);
   DEFINE_BINARY_OP(pow, detail::PowOp);
-  DEFINE_BINARY_OP(max, detail::MaxOp);
+  DEFINE_BINARY_OP(max, detail::MaximumOp);
   DEFINE_BINARY_OP(atan2, detail::Atan2Op);
-  DEFINE_BINARY_OP(min, detail::MinOp);
+  DEFINE_BINARY_OP(min, detail::MinimumOp);
   DEFINE_BINARY_OP(operator<, detail::LTOp);
   DEFINE_BINARY_OP(operator>, detail::GTOp);
   DEFINE_BINARY_OP(operator<=, detail::LTEOp);
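
The element-wise forms keep the names `min`/`max` but now lower to `detail::MinimumOp`/`detail::MaximumOp`. A minimal sketch of how the binary operators compose inside an expression; the shapes and the `matx.h` include path are illustrative assumptions:

```c++
#include "matx.h"

using namespace matx;

int main() {
  tensor_t<float, 1> a({8}), b({8}), hi({8}), lo({8});

  (a = random<float>(a.Shape(), NORMAL)).run();
  (b = random<float>(b.Shape(), NORMAL)).run();

  (hi = max(a, b)).run();   // element-wise maximum (detail::MaximumOp)
  (lo = min(a, b)).run();   // element-wise minimum (detail::MinimumOp)

  cudaStreamSynchronize(0);
  return 0;
}
```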

include/matx/operators/rmax.h → include/matx/operators/max.h (renamed, +34, -22)
@@ -1,27 +1,27 @@
 ////////////////////////////////////////////////////////////////////////////////
 // BSD 3-Clause License
 //
-// COpBright (c) 2021, NVIDIA Corporation
-// rmax rights reserved.
+// Copyright (c) 2021, NVIDIA Corporation
+// All rights reserved.
 //
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions are met:
 //
-// 1. Redistributions of source code must retain the above cOpBright notice, this
+// 1. Redistributions of source code must retain the above copyright notice, this
 //    list of conditions and the following disclaimer.
 //
-// 2. Redistributions in binary form must reproduce the above cOpBright notice,
+// 2. Redistributions in binary form must reproduce the above copyright notice,
 //    this list of conditions and the following disclaimer in the documentation
 //    and/or other materials provided with the distribution.
 //
-// 3. Neither the name of the cOpBright holder nor the names of its
+// 3. Neither the name of the copyright holder nor the names of its
 //    contributors may be used to endorse or promote products derived from
 //    this software without specific prior written permission.
 //
-// THIS SOFTWARE IS PROVIDED BY THE COpBRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 // AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 // IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHrmax THE COpBRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 // FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 // DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -44,7 +44,7 @@ namespace matx {
 
 namespace detail {
   template<typename OpA, int ORank>
-  class RMaxOp : public BaseOp<RMaxOp<OpA, ORank>>
+  class MaxOp : public BaseOp<MaxOp<OpA, ORank>>
   {
     private:
       OpA a_;
@@ -55,10 +55,10 @@
       using matxop = bool;
       using scalar_type = typename remove_cvref_t<OpA>::scalar_type;
       using matx_transform_op = bool;
-      using rmax_xform_op = bool;
+      using max_xform_op = bool;
 
-      __MATX_INLINE__ std::string str() const { return "rmax(" + get_type_str(a_) + ")"; }
-      __MATX_INLINE__ RMaxOp(OpA a) : a_(a) {
+      __MATX_INLINE__ std::string str() const { return "max(" + get_type_str(a_) + ")"; }
+      __MATX_INLINE__ MaxOp(OpA a) : a_(a) {
        for (int r = 0; r < ORank; r++) {
          out_dims_[r] = a_.Size(r);
        }
@@ -71,7 +71,7 @@
 
      template <typename Out, typename Executor>
      void Exec(Out &&out, Executor &&ex) const {
-       rmax_impl(std::get<0>(out), a_, ex);
+       max_impl(std::get<0>(out), a_, ex);
      }
 
      static __MATX_INLINE__ constexpr __MATX_HOST__ __MATX_DEVICE__ int32_t Rank()
@@ -117,9 +117,6 @@
 *
 * Returns an operator representing the max of all numbers in the reduction
 *
-* @note This function uses the name rmax instead of max to not collide with the
-* element-wise operator max.
-*
 * @tparam InType
 *   Input data type
 * @tparam D
@@ -129,37 +126,52 @@
 *   Input data to reduce
 * @param dims
 *   Array containing dimensions to reduce over
-* @returns Operator with reduced values of rmax-reduce computed
+* @returns Operator with reduced values of max-reduce computed
 */
 template <typename InType, int D>
+__MATX_INLINE__ auto max(const InType &in, const int (&dims)[D])
+{
+  static_assert(D < InType::Rank(), "reduction dimensions must be <= Rank of input");
+  auto perm = detail::getPermuteDims<InType::Rank()>(dims);
+  auto permop = permute(in, perm);
+
+  return detail::MaxOp<decltype(permop), InType::Rank() - D>(permop);
+}
+
+template <typename InType, int D>
+[[deprecated("Use max() instead of rmax() for reductions")]]
 __MATX_INLINE__ auto rmax(const InType &in, const int (&dims)[D])
 {
   static_assert(D < InType::Rank(), "reduction dimensions must be <= Rank of input");
   auto perm = detail::getPermuteDims<InType::Rank()>(dims);
   auto permop = permute(in, perm);
 
-  return detail::RMaxOp<decltype(permop), InType::Rank() - D>(permop);
+  return detail::MaxOp<decltype(permop), InType::Rank() - D>(permop);
 }
 
 /**
 * Compute max reduction of an operator
 *
 * Returns an operator representing the max of all numbers in the reduction
 *
-* @note This function uses the name rmax instead of max to not collide with the
-* element-wise operator max.
-*
 * @tparam InType
 *   Input data type
 *
 * @param in
 *   Input data to reduce
-* @returns Operator with reduced values of rmax-reduce computed
+* @returns Operator with reduced values of max-reduce computed
 */
 template <typename InType>
+__MATX_INLINE__ auto max(const InType &in)
+{
+  return detail::MaxOp<decltype(in), 0>(in);
+}
+
+template <typename InType>
+[[deprecated("Use max() instead of rmax() for reductions")]]
 __MATX_INLINE__ auto rmax(const InType &in)
 {
-  return detail::RMaxOp<decltype(in), 0>(in);
+  return detail::MaxOp<decltype(in), 0>(in);
 }
 
 }
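
With the rename, the operator-style reductions read like the rest of the expression API; rmax()/rmin() still compile but trigger the [[deprecated]] warning above. A minimal sketch of the full and per-dimension forms; the shapes and the `matx.h` include path are illustrative assumptions:

```c++
#include "matx.h"

using namespace matx;

int main() {
  tensor_t<float, 2> A({4, 8});
  tensor_t<float, 0> full;        // scalar result of the full reduction
  tensor_t<float, 1> rows({4});   // one result per row

  (A = random<float>(A.Shape(), NORMAL)).run();

  (full = max(A)).run();          // detail::MaxOp over all dimensions
  (rows = max(A, {1})).run();     // reduce only over dimension 1

  // (rows = rmax(A, {1})).run(); // deprecated: "Use max() instead of rmax() for reductions"

  cudaDeviceSynchronize();
  return 0;
}
```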
