Renamed rmin/rmax functions to min/max; element-wise versions are now minimum/maximum to match Python #589

Merged 2 commits on Mar 14, 2024
18 changes: 16 additions & 2 deletions docs_input/api/math/extrema/max.rst
@@ -3,16 +3,30 @@
max
===

Element-wise maximum
Reduces the input by the maximum values across the specified axes or performs
an element-wise maximum on each element in the input operators.

.. doxygenfunction:: max(const InType &in, const int (&dims)[D])
.. doxygenfunction:: max(const InType &in)
.. doxygenfunction:: max(Op t, Op t2)

Examples
~~~~~~~~

.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
:language: cpp
:start-after: example-begin max-test-1
:end-before: example-end max-test-1
:dedent:

.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
:language: cpp
:start-after: example-begin max-test-2
:end-before: example-end max-test-2
:dedent:

.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
:language: cpp
:start-after: example-begin max-el-test-1
:end-before: example-end max-el-test-1
:dedent:
18 changes: 16 additions & 2 deletions docs_input/api/math/extrema/min.rst
@@ -3,16 +3,30 @@
min
===

Element-wise minimum
Reduces the input by the minimum values across the specified axes or performs
an element-wise minimum on each element in the input operators.

.. doxygenfunction:: min(const InType &in, const int (&dims)[D])
.. doxygenfunction:: min(const InType &in)
.. doxygenfunction:: min(Op t, Op t2)

Examples
~~~~~~~~

.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
:language: cpp
:start-after: example-begin min-test-1
:end-before: example-end min-test-1
:dedent:

.. literalinclude:: ../../../../test/00_operators/ReductionTests.cu
:language: cpp
:start-after: example-begin min-test-2
:end-before: example-end min-test-2
:dedent:

.. literalinclude:: ../../../../test/00_operators/OperatorTests.cu
:language: cpp
:start-after: example-begin min-el-test-1
:end-before: example-end min-el-test-1
:dedent:
25 changes: 0 additions & 25 deletions docs_input/api/math/extrema/rmax.rst

This file was deleted.

25 changes: 0 additions & 25 deletions docs_input/api/math/extrema/rmin.rst

This file was deleted.

6 changes: 3 additions & 3 deletions docs_input/notebooks/03_transforms.ipynb
@@ -200,13 +200,13 @@
"### Reductions\n",
"A reduction operation takes multiple values and aggregates them into a smaller number of values. Most reductions take a large number of values and reduce them to a single value. Reductions are one of the most common operations performed on the GPU, which means they've been heavily researched and optimized for highly-parallel processors. Modern NVIDIA GPUs have special instructions for performing reductions to give even larger speedups over naive implementations. All of these details are hidden from the user and MatX automatically chooses the optimized path based on the hardware capabilities. \n",
"\n",
"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`,` mean`, `any`, and `all`. Note that the max and min reductions use the name `rmin` and `rmax` to avoid name collision with the element-wise `min` and `max` operators.\n",
"MatX provides a set of optimized primitives to perform reductions on tensors for many common types. Reductions are supported across individual dimensions or on entire tensors, depending on the size of the output tensor. Currently supported reduction functions are `sum`, `min`, `max`, `mean`, `any`, and `all`.\n",
"\n",
"#### Full Reduction\n",
"In this example we reduce an entire tensor to a single value by applying the reduction across all dimensions of the tensor. We apply the same random initialization from previous examples on a 2D tensor `A`. Note that the output tensor must be zeroed for a `sum` reduction since that value is continually added to during the reduction. Not initializing the output tensor will give undefined results since the variables are used as accumulators throughout the reduction. With the tensor initialized, we perform both a `max` and `sum` reduction across all dimensions of the tensor:\n",
"\n",
"```c++\n",
"rmax(MD0, A);\n",
"max(MD0, A);\n",
"sum(AD0, A);\n",
"```\n",
"\n",
@@ -248,7 +248,7 @@
"Reductions can also be performed across certain dimensions instead of the whole tensor. Dimensional reductions are useful in situations where each row contains data for a different user, for example, and we wish to sum up each user's data. By setting the output tensor view to a 1D tensor, independent reductions can be performed across the input tensor where each output element corresponds to a single row reduction from the input. Using the same tensor `A` from the previous example, we only change the output tensor type to be a 1D tensor instead of a scalar:\n",
"\n",
"```c++\n",
" rmax(MD1, A);\n",
" max(MD1, A);\n",
" sum(AD1, A); \n",
"```\n",
"\n",
@@ -88,7 +88,7 @@ int main() {
* value. Scale the original tensor by this max value and do another max
* reduction. The final reduction should be 1.0.
*
* Hint: the reduction function is named rmax and takes the output, input, and
* Hint: the reduction function is named max and takes the output, input, and
* stream as parameters
* https://devtech-compute.gitlab-master-pages.nvidia.com/matx/api/reduce.html
****************************************************************************************************/
@@ -98,9 +98,9 @@ int main() {
(dv = random<float>(dv.Shape(), NORMAL)).run();

tensor_t<float, 0> redv;
rmax(redv, dv, 0);
max(redv, dv, 0);
(dv = dv / redv).run();
rmax(redv, dv, 0);
max(redv, dv, 0);
/*** End editing ***/

cudaStreamSynchronize(0);
2 changes: 1 addition & 1 deletion examples/cgsolve.cu
@@ -82,7 +82,7 @@ int main([[maybe_unused]] int argc, [[maybe_unused]] char **argv)

(Bout = matvec(A, X)).run();
(norm = sum((Bout-B)*(Bout-B))).run();
(maxn = matx::rmax(sqrt(norm))).run();
(maxn = matx::max(sqrt(norm))).run();

cudaDeviceSynchronize();
printf ("max l2 norm: %f\n", (float)sqrt(maxn()));
8 changes: 4 additions & 4 deletions include/matx/operators/binary_operators.h
@@ -248,7 +248,7 @@ namespace matx
Op pow(Op t, Op t2) {}

/**
* Compute max(t, t2) of two operators or tensors
* Compute element-wise max(t, t2) of two operators or tensors
* @param t
* LHS tensor or operator input
* @param t2
@@ -257,7 +257,7 @@
Op max(Op t, Op t2) {}

/**
* Compute min(t, t2) of two operators or tensors
* Compute element-wise min(t, t2) of two operators or tensors
* @param t
* LHS tensor or operator input
* @param t2
@@ -384,9 +384,9 @@ namespace matx
DEFINE_BINARY_OP(operator&, detail::AndOp);
DEFINE_BINARY_OP(operator^, detail::XorOp);
DEFINE_BINARY_OP(pow, detail::PowOp);
DEFINE_BINARY_OP(max, detail::MaxOp);
DEFINE_BINARY_OP(max, detail::MaximumOp);
DEFINE_BINARY_OP(atan2, detail::Atan2Op);
DEFINE_BINARY_OP(min, detail::MinOp);
DEFINE_BINARY_OP(min, detail::MinimumOp);
DEFINE_BINARY_OP(operator<, detail::LTOp);
DEFINE_BINARY_OP(operator>, detail::GTOp);
DEFINE_BINARY_OP(operator<=, detail::LTEOp);
56 changes: 34 additions & 22 deletions include/matx/operators/rmax.h → include/matx/operators/max.h
@@ -1,27 +1,27 @@
////////////////////////////////////////////////////////////////////////////////
// BSD 3-Clause License
//
// COpBright (c) 2021, NVIDIA Corporation
// rmax rights reserved.
// Copyright (c) 2021, NVIDIA Corporation
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above cOpBright notice, this
// 1. Redistributions of source code must retain the above copyright notice, this
// list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above cOpBright notice,
// 2. Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// 3. Neither the name of the cOpBright holder nor the names of its
// 3. Neither the name of the copyright holder nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COpBRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHrmax THE COpBRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -44,7 +44,7 @@ namespace matx {

namespace detail {
template<typename OpA, int ORank>
class RMaxOp : public BaseOp<RMaxOp<OpA, ORank>>
class MaxOp : public BaseOp<MaxOp<OpA, ORank>>
{
private:
OpA a_;
@@ -55,10 +55,10 @@
using matxop = bool;
using scalar_type = typename remove_cvref_t<OpA>::scalar_type;
using matx_transform_op = bool;
using rmax_xform_op = bool;
using max_xform_op = bool;

__MATX_INLINE__ std::string str() const { return "rmax(" + get_type_str(a_) + ")"; }
__MATX_INLINE__ RMaxOp(OpA a) : a_(a) {
__MATX_INLINE__ std::string str() const { return "max(" + get_type_str(a_) + ")"; }
__MATX_INLINE__ MaxOp(OpA a) : a_(a) {
for (int r = 0; r < ORank; r++) {
out_dims_[r] = a_.Size(r);
}
@@ -71,7 +71,7 @@

template <typename Out, typename Executor>
void Exec(Out &&out, Executor &&ex) const {
rmax_impl(std::get<0>(out), a_, ex);
max_impl(std::get<0>(out), a_, ex);
}

static __MATX_INLINE__ constexpr __MATX_HOST__ __MATX_DEVICE__ int32_t Rank()
@@ -117,9 +117,9 @@ namespace detail {
*
* Returns an operator representing the max of all numbers in the reduction
*
* @note This function uses the name rmax instead of max to not collide with the
* element-wise operator max.
*
* @tparam InType
* Input data type
* @tparam D
@@ -129,37 +129,52 @@
* Input data to reduce
* @param dims
* Array containing dimensions to reduce over
* @returns Operator with reduced values of rmax-reduce computed
* @returns Operator with reduced values of max-reduce computed
*/
template <typename InType, int D>
__MATX_INLINE__ auto max(const InType &in, const int (&dims)[D])
{
static_assert(D < InType::Rank(), "reduction dimensions must be <= Rank of input");
auto perm = detail::getPermuteDims<InType::Rank()>(dims);
auto permop = permute(in, perm);

return detail::MaxOp<decltype(permop), InType::Rank() - D>(permop);
}

template <typename InType, int D>
[[deprecated("Use max() instead of rmax() for reductions")]]
__MATX_INLINE__ auto rmax(const InType &in, const int (&dims)[D])
{
static_assert(D < InType::Rank(), "reduction dimensions must be <= Rank of input");
auto perm = detail::getPermuteDims<InType::Rank()>(dims);
auto permop = permute(in, perm);

return detail::RMaxOp<decltype(permop), InType::Rank() - D>(permop);
return detail::MaxOp<decltype(permop), InType::Rank() - D>(permop);
}

/**
* Compute max reduction of an operator
*
* Returns an operator representing the max of all numbers in the reduction
*
* @note This function uses the name rmax instead of max to not collide with the
* element-wise operator max.
*
* @tparam InType
* Input data type
*
* @param in
* Input data to reduce
* @returns Operator with reduced values of rmax-reduce computed
* @returns Operator with reduced values of max-reduce computed
*/
template <typename InType>
__MATX_INLINE__ auto max(const InType &in)
{
return detail::MaxOp<decltype(in), 0>(in);
}

template <typename InType>
[[deprecated("Use max() instead of rmax() for reductions")]]
__MATX_INLINE__ auto rmax(const InType &in)
{
return detail::RMaxOp<decltype(in), 0>(in);
return detail::MaxOp<decltype(in), 0>(in);
}

}