Commit 1f47854

Renamed the rmin/rmax functions to min/max; the element-wise functions are now minimum/maximum, to match Python.

1 parent 9518fbf

19 files changed (+153, -170 lines)

19 files changed

+153
-170
lines changed
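
At a glance, the renamed API changes call sites as follows. This is a minimal sketch assuming MatX is installed; the tensor shapes and variable names are illustrative, not taken from the diff:

    #include <cstdio>
    #include <matx.h>
    using namespace matx;

    int main() {
      auto A = make_tensor<float>({4, 8});
      auto B = make_tensor<float>({4, 8});
      auto C = make_tensor<float>({4, 8});
      tensor_t<float, 0> m0;  // 0-D output for a full reduction

      (A = random<float>(A.Shape(), NORMAL)).run();
      (B = random<float>(B.Shape(), NORMAL)).run();

      (m0 = max(A)).run();        // reduction: formerly rmax
      (C = maximum(A, B)).run();  // element-wise: formerly max
      cudaStreamSynchronize(0);
      printf("max(A) = %f\n", (float)m0());
      return 0;
    }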

docs_input/api/math/extrema/max.rst (+9 -3)

@@ -3,16 +3,22 @@
 max
 ===
 
-Element-wise maximum
+Reduces the input by the maximum values across the specified axes.
 
-.. doxygenfunction:: max(Op t, Op t2)
+.. doxygenfunction:: max(const InType &in, const int (&dims)[D])
+.. doxygenfunction:: max(const InType &in)
 
 Examples
 ~~~~~~~~
 
-.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
    :language: cpp
    :start-after: example-begin max-test-1
    :end-before: example-end max-test-1
    :dedent:
 
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
+   :language: cpp
+   :start-after: example-begin max-test-2
+   :end-before: example-end max-test-2
+   :dedent:
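
For reference, a brief usage sketch of the two overloads documented above (hypothetical shapes and names, assuming MatX is installed):

    #include <cstdio>
    #include <matx.h>
    using namespace matx;

    int main() {
      auto in = make_tensor<float>({3, 5});
      tensor_t<float, 0> full;              // reduce over all axes
      auto rows = make_tensor<float>({3});  // reduce over axis 1 only

      (in = random<float>(in.Shape(), NORMAL)).run();
      (full = max(in)).run();       // max(const InType &in)
      (rows = max(in, {1})).run();  // max(const InType &in, const int (&dims)[D])
      cudaStreamSynchronize(0);
      printf("global max %f, max of row 0 %f\n", (float)full(), (float)rows(0));
      return 0;
    }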
docs_input/api/math/extrema/maximum.rst (new file, +18)

@@ -0,0 +1,18 @@
+.. _maximum_func:
+
+maximum
+=======
+
+Element-wise maximum
+
+.. doxygenfunction:: maximum(Op t, Op t2)
+
+Examples
+~~~~~~~~
+
+.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+   :language: cpp
+   :start-after: example-begin maximum-test-1
+   :end-before: example-end maximum-test-1
+   :dedent:
+
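
A short sketch of the element-wise form documented in this new file (hypothetical values; the new minimum operator mirrors it):

    #include <matx.h>
    using namespace matx;

    int main() {
      auto a = make_tensor<float>({4});
      auto b = make_tensor<float>({4});
      auto c = make_tensor<float>({4});

      a.SetVals({1, 5, 2, 8});
      b.SetVals({3, 4, 6, 7});
      (c = maximum(a, b)).run();  // per-element max: {3, 5, 6, 8}
      cudaStreamSynchronize(0);
      return 0;
    }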

docs_input/api/math/extrema/min.rst (+9 -3)

@@ -3,16 +3,22 @@
 min
 ===
 
-Element-wise minimum
+Reduces the input by the minimum values across the specified axes.
 
-.. doxygenfunction:: min(Op t, Op t2)
+.. doxygenfunction:: min(const InType &in, const int (&dims)[D])
+.. doxygenfunction:: min(const InType &in)
 
 Examples
 ~~~~~~~~
 
-.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
    :language: cpp
    :start-after: example-begin min-test-1
    :end-before: example-end min-test-1
    :dedent:
 
+.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
+   :language: cpp
+   :start-after: example-begin min-test-2
+   :end-before: example-end min-test-2
+   :dedent:
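
To complement the single-axis example under max above, a sketch of reducing over several axes at once (hypothetical shapes, assuming MatX is installed):

    #include <matx.h>
    using namespace matx;

    int main() {
      auto in   = make_tensor<float>({2, 3, 4});
      auto mins = make_tensor<float>({2});  // keep axis 0; reduce axes 1 and 2

      (in = random<float>(in.Shape(), NORMAL)).run();
      (mins = min(in, {1, 2})).run();  // min(const InType &in, const int (&dims)[D])
      cudaStreamSynchronize(0);
      return 0;
    }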
docs_input/api/math/extrema/minimum.rst (new file, +18)

@@ -0,0 +1,18 @@
+.. _minimum_func:
+
+minimum
+=======
+
+Element-wise minimum
+
+.. doxygenfunction:: minimum(Op t, Op t2)
+
+Examples
+~~~~~~~~
+
+.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
+   :language: cpp
+   :start-after: example-begin minimum-test-1
+   :end-before: example-end minimum-test-1
+   :dedent:
+

docs_input/api/math/extrema/rmax.rst (-25)
This file was deleted.

docs_input/api/math/extrema/rmin.rst (-25)
This file was deleted.

docs_input/notebooks/03_transforms.ipynb (+3 -3)

@@ -200,13 +200,13 @@
     "### Reductions\n",
     "A reduction operation takes multiple values and aggregates them into a smaller number of values. Most reductions take a large number of values and reduce them to a single value. Reductions are one of the most common operations performed on the GPU, which means they have been heavily researched and optimized for highly parallel processors. Modern NVIDIA GPUs have special instructions for performing reductions that give even larger speedups over naive implementations. All of these details are hidden from the user, and MatX automatically chooses the optimized path based on the hardware capabilities.\n",
     "\n",
-    "MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`. Note that the max and min reductions use the names `rmin` and `rmax` to avoid a name collision with the element-wise `min` and `max` operators.\n",
+    "MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`.\n",
     "\n",
     "#### Full Reduction\n",
     "In this example we reduce an entire tensor to a single value by applying the reduction across all dimensions of the tensor. We apply the same random initialization from previous examples on a 2D tensor `A`. Note that the output tensor must be zeroed for a `sum` reduction since that value is continually added to during the reduction. Not initializing the output tensor will give undefined results since the variables are used as accumulators throughout the reduction. With the tensor initialized, we perform both a `max` and a `sum` reduction across all dimensions of the tensor:\n",
     "\n",
     "```c++\n",
-    "rmax(MD0, A);\n",
+    "max(MD0, A);\n",
     "sum(AD0, A);\n",
     "```\n",
     "\n",

@@ -248,7 +248,7 @@
     "Reductions can also be performed across certain dimensions instead of the whole tensor. Dimensional reductions are useful in situations where each row contains data for a different user, for example, and we wish to sum up each user's data. By setting the output tensor view to a 1D tensor, independent reductions can be performed across the input tensor, where each output element corresponds to a single row reduction from the input. Using the same tensor `A` from the previous example, we only change the output tensor type to be a 1D tensor instead of a scalar:\n",
     "\n",
     "```c++\n",
-    " rmax(MD1, A);\n",
+    " max(MD1, A);\n",
     " sum(AD1, A); \n",
     "```\n",
     "\n",

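Taken together, the notebook's two snippets read as follows under the renamed API; a runnable sketch with assumed shapes (the tensor names follow the notebook):

    #include <matx.h>
    using namespace matx;

    int main() {
      auto A = make_tensor<float>({4, 16});
      tensor_t<float, 0> MD0;              // full max reduction -> scalar
      tensor_t<float, 0> AD0;              // full sum reduction -> scalar
      auto MD1 = make_tensor<float>({4});  // per-row max -> 1-D
      auto AD1 = make_tensor<float>({4});  // per-row sum -> 1-D

      (A = random<float>(A.Shape(), NORMAL)).run();
      (AD0 = 0.0f).run();  // sum outputs are accumulators and must start zeroed
      (AD1 = 0.0f).run();

      max(MD0, A);  // full reduction
      sum(AD0, A);
      max(MD1, A);  // independent row-wise reductions
      sum(AD1, A);
      cudaStreamSynchronize(0);
      return 0;
    }
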
docs_input/notebooks/exercises/solutions/example3_assignment1.cu (+3 -3)

@@ -88,7 +88,7 @@ int main() {
  * value. Scale the original tensor by this max value and do another max
  * reduction. The final reduction should be 1.0.
  *
- * Hint: the reduction function is named rmax and takes the output, input, and
+ * Hint: the reduction function is named max and takes the output, input, and
  * stream as parameters
  * https://devtech-compute.gitlab-master-pages.nvidia.com/matx/api/reduce.html
  ****************************************************************************************************/
@@ -98,9 +98,9 @@ int main() {
   (dv = random<float>(dv.Shape(), NORMAL)).run();
 
   tensor_t<float, 0> redv;
-  rmax(redv, dv, 0);
+  max(redv, dv, 0);
   (dv = dv / redv).run();
-  rmax(redv, dv, 0);
+  max(redv, dv, 0);
   /*** End editing ***/
 
   cudaStreamSynchronize(0);

examples/cgsolve.cu (+1 -1)

@@ -82,7 +82,7 @@ int main([[maybe_unused]] int argc, [[maybe_unused]] char **argv)
 
   (Bout = matvec(A, X)).run();
   (norm = sum((Bout-B)*(Bout-B))).run();
-  (maxn = matx::rmax(sqrt(norm))).run();
+  (maxn = matx::max(sqrt(norm))).run();
 
   cudaDeviceSynchronize();
   printf ("max l2 norm: %f\n", (float)sqrt(maxn()));

include/matx/operators/binary_operators.h (+6 -6)

@@ -248,22 +248,22 @@ namespace matx
   Op pow(Op t, Op t2) {}
 
   /**
-   * Compute max(t, t2) of two operators or tensors
+   * Compute element-wise maximum(t, t2) of two operators or tensors
    * @param t
    *   LHS tensor or operator input
    * @param t2
    *   RHS tensor or operator input
    */
-  Op max(Op t, Op t2) {}
+  Op maximum(Op t, Op t2) {}
 
   /**
-   * Compute min(t, t2) of two operators or tensors
+   * Compute element-wise minimum(t, t2) of two operators or tensors
    * @param t
    *   LHS tensor or operator input
    * @param t2
    *   RHS tensor or operator input
    */
-  Op min(Op t, Op t2) {}
+  Op minimum(Op t, Op t2) {}
 
   /**
    * Compute t < t2 of two operators or tensors

@@ -384,9 +384,9 @@ namespace matx
   DEFINE_BINARY_OP(operator&, detail::AndOp);
   DEFINE_BINARY_OP(operator^, detail::XorOp);
   DEFINE_BINARY_OP(pow, detail::PowOp);
-  DEFINE_BINARY_OP(max, detail::MaxOp);
+  DEFINE_BINARY_OP(maximum, detail::MaximumOp);
   DEFINE_BINARY_OP(atan2, detail::Atan2Op);
-  DEFINE_BINARY_OP(min, detail::MinOp);
+  DEFINE_BINARY_OP(minimum, detail::MinimumOp);
   DEFINE_BINARY_OP(operator<, detail::LTOp);
   DEFINE_BINARY_OP(operator>, detail::GTOp);
   DEFINE_BINARY_OP(operator<=, detail::LTEOp);
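
The DEFINE_BINARY_OP macro itself is outside this diff. As a rough, hypothetical sketch of the pattern such a macro expands to (the actual MatX definition may differ), it stamps out a free function that wraps both operands in a lazily evaluated binary-expression node:

    // HYPOTHETICAL sketch only; not MatX's actual macro body.
    #define DEFINE_BINARY_OP(FUNC, TENSOR_OP)                       \
      template <typename I1, typename I2>                           \
      __MATX_INLINE__ auto FUNC(const I1 &i1, const I2 &i2)         \
      {                                                             \
        using S1 = typename I1::scalar_type;                        \
        using S2 = typename I2::scalar_type;                        \
        return detail::matxBinaryOp(i1, i2, TENSOR_OP<S1, S2>());   \
      }

Under this reading, the two changed lines generate maximum() and minimum() functions that apply detail::MaximumOp and detail::MinimumOp per element.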

include/matx/operators/rmax.h → include/matx/operators/max.h (+18 -24)

@@ -1,27 +1,27 @@
 ////////////////////////////////////////////////////////////////////////////////
 // BSD 3-Clause License
 //
-// COpBright (c) 2021, NVIDIA Corporation
-// rmax rights reserved.
+// Copyright (c) 2021, NVIDIA Corporation
+// All rights reserved.
 //
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions are met:
 //
-// 1. Redistributions of source code must retain the above cOpBright notice, this
+// 1. Redistributions of source code must retain the above copyright notice, this
 //    list of conditions and the following disclaimer.
 //
-// 2. Redistributions in binary form must reproduce the above cOpBright notice,
+// 2. Redistributions in binary form must reproduce the above copyright notice,
 //    this list of conditions and the following disclaimer in the documentation
 //    and/or other materials provided with the distribution.
 //
-// 3. Neither the name of the cOpBright holder nor the names of its
+// 3. Neither the name of the copyright holder nor the names of its
 //    contributors may be used to endorse or promote products derived from
 //    this software without specific prior written permission.
 //
-// THIS SOFTWARE IS PROVIDED BY THE COpBRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 // AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 // IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHrmax THE COpBRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
 // FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 // DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER

@@ -44,7 +44,7 @@ namespace matx {
 
 namespace detail {
   template<typename OpA, int ORank>
-  class RMaxOp : public BaseOp<RMaxOp<OpA, ORank>>
+  class MaxOp : public BaseOp<MaxOp<OpA, ORank>>
   {
     private:
       OpA a_;

@@ -55,10 +55,10 @@ namespace detail {
       using matxop = bool;
       using scalar_type = typename remove_cvref_t<OpA>::scalar_type;
       using matx_transform_op = bool;
-      using rmax_xform_op = bool;
+      using max_xform_op = bool;
 
-      __MATX_INLINE__ std::string str() const { return "rmax(" + get_type_str(a_) + ")"; }
-      __MATX_INLINE__ RMaxOp(OpA a) : a_(a) {
+      __MATX_INLINE__ std::string str() const { return "max(" + get_type_str(a_) + ")"; }
+      __MATX_INLINE__ MaxOp(OpA a) : a_(a) {
        for (int r = 0; r < ORank; r++) {
          out_dims_[r] = a_.Size(r);
        }

@@ -71,7 +71,7 @@ namespace detail {
 
      template <typename Out, typename Executor>
      void Exec(Out &&out, Executor &&ex) const {
-       rmax_impl(std::get<0>(out), a_, ex);
+       max_impl(std::get<0>(out), a_, ex);
      }
 
      static __MATX_INLINE__ constexpr __MATX_HOST__ __MATX_DEVICE__ int32_t Rank()

@@ -117,9 +117,6 @@ namespace detail {
  *
  * Returns an operator representing the max of all numbers in the reduction
  *
- * @note This function uses the name rmax instead of max to not collide with the
- * element-wise operator max.
- *
  * @tparam InType
  *   Input data type
  * @tparam D

@@ -129,37 +126,34 @@ namespace detail {
  *   Input data to reduce
  * @param dims
  *   Array containing dimensions to reduce over
- * @returns Operator with reduced values of rmax-reduce computed
+ * @returns Operator with reduced values of max-reduce computed
  */
 template <typename InType, int D>
-__MATX_INLINE__ auto rmax(const InType &in, const int (&dims)[D])
+__MATX_INLINE__ auto max(const InType &in, const int (&dims)[D])
 {
   static_assert(D < InType::Rank(), "reduction dimensions must be <= Rank of input");
   auto perm = detail::getPermuteDims<InType::Rank()>(dims);
   auto permop = permute(in, perm);
 
-  return detail::RMaxOp<decltype(permop), InType::Rank() - D>(permop);
+  return detail::MaxOp<decltype(permop), InType::Rank() - D>(permop);
 }
 
 /**
  * Compute max reduction of an operator
  *
  * Returns an operator representing the max of all numbers in the reduction
  *
- * @note This function uses the name rmax instead of max to not collide with the
- * element-wise operator max.
- *
  * @tparam InType
  *   Input data type
  *
  * @param in
  *   Input data to reduce
- * @returns Operator with reduced values of rmax-reduce computed
+ * @returns Operator with reduced values of max-reduce computed
  */
 template <typename InType>
-__MATX_INLINE__ auto rmax(const InType &in)
+__MATX_INLINE__ auto max(const InType &in)
 {
-  return detail::RMaxOp<decltype(in), 0>(in);
+  return detail::MaxOp<decltype(in), 0>(in);
 }
 
 }
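
Because the renamed max() returns a detail::MaxOp operator rather than executing immediately, it composes with other expressions before .run(), which is exactly how examples/cgsolve.cu above uses matx::max(sqrt(norm)). A small hypothetical sketch:

    #include <cstdio>
    #include <matx.h>
    using namespace matx;

    int main() {
      auto x = make_tensor<float>({8});
      tensor_t<float, 0> peak;

      (x = random<float>(x.Shape(), NORMAL)).run();
      // abs(x) feeds the reduction lazily; the whole expression executes on
      // assignment to the 0-D output tensor.
      (peak = max(abs(x))).run();
      cudaStreamSynchronize(0);
      printf("peak |x| = %f\n", (float)peak());
      return 0;
    }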
