+.. _sparse_tensor_api:
+
 Sparse Tensor Type
 ##################
 
@@ -89,8 +91,8 @@ operations for sparse-to-dense, dense-to-sparse, matmul, and solve::
 
     (A = sparse2dense(Acoo)).run(exec);
     (Acoo = dense2sparse(D)).run(exec);
-    (C = matmul(Acoo, B)).run(exec);
-    (X = solve(Acsr, Y)).run(exec); // CSR only
+    (C = matmul(Acoo, B)).run(exec); // only Sparse-Matrix x Matrix (SpMM)
+    (X = solve(Acsr, Y)).run(exec);  // only on CSR format
 
 We expect the assortment of supported sparse operations and storage
 formats to grow if the experimental implementation is well-received.
@@ -150,6 +152,32 @@ to where the tensor is being used, including device memory, managed memory,
 and host memory. MatX sparse tensors are very similar to e.g. SciPy's or
 cuPy sparse arrays.
 
+The implementation of the UST follows the MatX design philosophy of using
+a header-only, ``constexpr``-heavy, templated approach, which lets
+applications compile only what they use, and nothing more.
+The ``sparse_tensor_t`` type is essentially the following class,
+where the tensor format ``TF`` is part of the template::
+
+  template <typename VAL, typename CRD, typename POS, typename TF, ...>
+  class sparse_tensor_t : public detail::tensor_impl_t<...> {
+
+    static constexpr int DIM = TF::DIM;
+    static constexpr int LVL = TF::LVL;
+
+   private:
+    // Primary storage of the sparse tensor (explicitly stored element values).
+    StorageV values_;
+
+    // Secondary storage of the sparse tensor (coordinates and positions).
+    StorageC coordinates_[LVL];
+    StorageP positions_[LVL];
+  };
+
+With this design, many tests (e.g., is this tensor in COO format?)
+evaluate as ``constexpr`` at compile time, keeping the binary size
+restricted to what is actually used in a MatX computation.
+
+
 Matx Implementation of the Tensor Format DSL
 --------------------------------------------
 
@@ -237,14 +265,15 @@ Historical Background of the UST Type
 -------------------------------------
 
 The concept of the UST type has its roots in sparse compilers, first pioneered
-for sparse linear algebra in [`B&W95`_, `Bik96`_, `Bik98`_] and formalized to
-sparse tensor algebra in [`Kjolstad20`_, `Chou22`_, `Yadav22`_]. The tensor
-format DSL for the UST type, including the generalization to higher-dimensional
-levels, was introduced in [`MLIR22`_, `MLIR`_]. Please refer to this literature
-for a more extensive presentation of all topics only briefly discussed in this
-online documentation.
-
-.. _B&W95: https://dl.acm.org/doi/10.1145/169627.169765
+for sparse linear algebra in [`B&W95`_, `B&W96`_, `Bik96`_, `Bik98`_] and
+formalized to sparse tensor algebra in [`Kjolstad20`_, `Chou22`_, `Yadav22`_].
+The tensor format DSL for the UST type, including the generalization to
+higher-dimensional levels, was introduced in [`MLIR22`_, `MLIR`_]. Please
+refer to this literature for a more extensive presentation of all topics only
+briefly discussed in this online documentation.
+
+.. _B&W95: https://dl.acm.org/doi/10.1006/jpdc.1995.1141
+.. _B&W96: https://ieeexplore.ieee.org/document/485501
 .. _Bik96: https://theses.liacs.nl/1315
 .. _Bik98: https://dl.acm.org/doi/10.1145/290200.287636
 .. _Chou22: http://tensor-compiler.org/files/chou-phd-thesis-taco-formats.pdf