Sparse tensors #1998

Draft
Wants to merge 40 commits into base: main.
Changes from all commits (40 commits):
60ee8e6
sparse
McArthur-Alford Jul 11, 2024
24daafd
Fixed errors from moving sparse
McArthur-Alford Jul 11, 2024
790ed5f
some better imports and fixes
McArthur-Alford Jul 11, 2024
6371a0a
Full sparse backend trait, lots unfinished
McArthur-Alford Jul 11, 2024
6f54864
sparse_reshape op
McArthur-Alford Jul 11, 2024
741d6cc
permute and transpose working
McArthur-Alford Jul 14, 2024
08143fb
swap dims
McArthur-Alford Jul 14, 2024
19de621
sparse flip
McArthur-Alford Jul 14, 2024
c56c25d
any, all, any_dim, all_dim
McArthur-Alford Jul 14, 2024
e75181d
repeat
McArthur-Alford Jul 14, 2024
cfe706b
coalesce, somewhat broken
McArthur-Alford Jul 16, 2024
57f6dbb
fixed coalesce
McArthur-Alford Jul 17, 2024
9a80208
fixed slice
McArthur-Alford Jul 17, 2024
2e0abd2
sparse density
McArthur-Alford Jul 17, 2024
60d8f67
numeric for sparse tensors, and add
McArthur-Alford Jul 17, 2024
1d9f856
add, sub, mul, div and some refactors
McArthur-Alford Jul 20, 2024
4782a3b
slice_assign
McArthur-Alford Jul 20, 2024
f898d79
sddmm + more numerics (sign, abs, etc)
McArthur-Alford Jul 21, 2024
80ab5d8
made unimplemented functions panic
McArthur-Alford Jul 28, 2024
f6c0ff8
style fixes
McArthur-Alford Aug 11, 2024
a67cb0a
Merge branch 'main' of github.com:tracel-ai/burn into sparse-tensor
McArthur-Alford Aug 11, 2024
b63ea7a
Refactor of tensor API in progress
McArthur-Alford Aug 11, 2024
1d0d366
New sparse tensor API, seems really good
McArthur-Alford Aug 14, 2024
b98ecfc
Changing up primitives for blanket impl
McArthur-Alford Aug 14, 2024
6ef4b4d
Seemingly everything but tensorchecks working
McArthur-Alford Aug 14, 2024
5d53f7a
Reintroduced the COO decorator to burn-sparse
McArthur-Alford Aug 19, 2024
6ebe15b
transferred accross most basic ops for float tensor
McArthur-Alford Aug 19, 2024
937480e
most functions transferred
McArthur-Alford Aug 19, 2024
bac85ac
Added use to mod
McArthur-Alford Aug 20, 2024
1dac1e8
Some more functions, a little broken
McArthur-Alford Aug 21, 2024
6150e8a
A huge overhaul, much nicer types and much less confusing, achieves t…
McArthur-Alford Aug 22, 2024
40d2afd
Cleanup of types
McArthur-Alford Aug 22, 2024
1c06aab
BasicSparseOps & into/from sparse
McArthur-Alford Aug 24, 2024
4535e37
Big cleanup of burn-sparse
McArthur-Alford Aug 25, 2024
ae8ab68
Removed old
McArthur-Alford Aug 25, 2024
7b90252
Removed unsupported sparse ops
McArthur-Alford Aug 25, 2024
e00de1e
Coordinates OP, plus basicsparse for float/int
McArthur-Alford Aug 30, 2024
d8603f3
Removed unsupported ops
McArthur-Alford Aug 30, 2024
ed46fd2
Merge branch 'sparse-tensor' of github.com:McArthur-Alford/burn into …
McArthur-Alford Aug 30, 2024
cedd197
values
McArthur-Alford Oct 2, 2024
19 changes: 19 additions & 0 deletions Cargo.lock

(Generated file; diff not rendered.)

2 changes: 2 additions & 0 deletions crates/burn-core/Cargo.toml
@@ -71,6 +71,7 @@ vision = ["burn-dataset?/vision", "burn-common/network"]
# Backend
autodiff = ["burn-autodiff"]
fusion = ["burn-wgpu?/fusion"]
sparse = ["burn-sparse"]

## Backend features
metal = ["burn-candle?/metal"]
@@ -116,6 +117,7 @@ burn-cuda = { path = "../burn-cuda", version = "0.14.0", optional = true, defaul
burn-autodiff = { path = "../burn-autodiff", version = "0.14.0", optional = true }
burn-tch = { path = "../burn-tch", version = "0.14.0", optional = true }
burn-candle = { path = "../burn-candle", version = "0.14.0", optional = true }
burn-sparse = { path = "../burn-sparse", version = "0.14.0", optional = true }

derive-new = { workspace = true }
log = { workspace = true, optional = true }
3 changes: 3 additions & 0 deletions crates/burn-core/src/backend.rs
@@ -33,3 +33,6 @@ pub use burn_tch as libtorch;

#[cfg(feature = "tch")]
pub use burn_tch::LibTorch;

#[cfg(feature = "sparse")]
pub use burn_sparse as sparse;
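
As context for reviewers, a minimal downstream sketch of what this re-export enables. The Cargo.toml line and the assumption that the `burn` facade forwards the `sparse` feature down to burn-core are illustrative only and are not part of this diff.

// Hypothetical usage sketch, not part of this PR.
// Cargo.toml (assumption): burn = { version = "0.14.0", features = ["sparse"] }
use burn::backend::sparse; // alias introduced above by `pub use burn_sparse as sparse;`

fn main() {
    // Nothing is exercised yet; this only checks that the feature-gated
    // module path resolves (the import will warn as unused).
}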
43 changes: 43 additions & 0 deletions crates/burn-sparse/Cargo.toml
@@ -0,0 +1,43 @@
[package]
authors = []
categories = ["science", "no-std", "embedded", "wasm"]
description = "Sparse tensor crate that offers a default sparse backend wrapper around burn backends."
edition.workspace = true
keywords = ["deep-learning", "machine-learning", "tensor", "sparse"]
license.workspace = true
name = "burn-sparse"
readme.workspace = true
repository = "https://github.com/tracel-ai/burn/tree/main/burn-sparse"
version.workspace = true

[features]
default = ["std"]
doc = ["default"]
experimental-named-tensor = []
std = ["rand/std", "half/std", "num-traits/std"]
wasm-sync = []

[dependencies]
burn-common = { path = "../burn-common", version = "0.14.0", default-features = false }
burn-tensor = { path = "../burn-tensor", version = "0.14.0" }

proc-macro2 = { workspace = true }
quote = { workspace = true }
syn = { workspace = true }
derive-new = { workspace = true }
half = { workspace = true }
num-traits = { workspace = true }
rand = { workspace = true }
rand_distr = { workspace = true } # use instead of statrs because it supports no_std

# The same implementation of HashMap in std but with no_std support (only needs alloc crate)
hashbrown = { workspace = true } # no_std compatible

# Serialization
serde = { workspace = true }

[dev-dependencies]
rand = { workspace = true, features = ["std", "std_rng"] } # Default enables std

[package.metadata.docs.rs]
features = ["doc"]
76 changes: 76 additions & 0 deletions crates/burn-sparse/src/coo.rs
@@ -0,0 +1,76 @@
use burn_tensor::backend::Backend;
use burn_tensor::ops::SparseBoolOps;
use burn_tensor::ops::SparseTensorOps;
use burn_tensor::Dense;
use burn_tensor::Device;
use burn_tensor::Float;
use burn_tensor::Int;
use burn_tensor::Shape;
use burn_tensor::Sparse;
use burn_tensor::SparseStorage;
use burn_tensor::Tensor;
use burn_tensor::TensorData;
use burn_tensor::TensorKind;

#[derive(Clone, Debug)]
pub struct COO;

#[derive(Clone, Debug)]
pub struct SparseCOOTensor<B: Backend, K: TensorKind<B>, const D: usize> {
pub coordinates: Option<Tensor<B, 2, Int>>,
pub values: Option<Tensor<B, 1, K>>,
pub shape: Shape<D>,
pub device: Device<B>,
}

impl<B: Backend> SparseStorage<B> for COO {
type SparsePrimitive<K: burn_tensor::TensorKind<B>, const D: usize> = SparseCOOTensor<B, K, D>;

fn name() -> &'static str {
"SparseCOO"
}
}

impl<B: Backend> SparseTensorOps<COO, B> for COO {}

pub(crate) fn flatten_coordinates<B: Backend, const D: usize, const S: usize>(
coordinates: Tensor<B, 2, Int>,
shape: Shape<D>,
device: &Device<B>,
) -> Tensor<B, 2, Int> {
let mut strides_data = [[1]; D];
for i in (0..D).rev() {
if D - 1 - i == S {
strides_data[i] = [1];
} else if D - 1 - i < S {
strides_data[i] = [0];
} else {
strides_data[i] = [strides_data[i + 1][0] * shape.dims[i + 1] as i64];
}
}
let strides_data: TensorData = TensorData::from(strides_data);
let strides: Tensor<B, 2, Int> = Tensor::from_data(strides_data, device);
let flat_coordinates: Tensor<B, 1, Int> = strides.mul(coordinates).sum_dim(0).flatten(0, 1);

flat_coordinates.unsqueeze_dim(0)
}

pub(crate) fn unflatten_coordinates<B: Backend, const D: usize>(
flat_coordinates: Tensor<B, 2, Int>,
new_shape: Shape<D>,
) -> Tensor<B, 2, Int> {
let flat_coordinates = flat_coordinates.squeeze::<1>(0);
let mut remaining_flat_coordinates = flat_coordinates.clone();
let mut new_coordinates = Vec::with_capacity(D);

for &dim_size in new_shape.dims.iter().rev() {
let size = dim_size as i64;
let new_coord = remaining_flat_coordinates.clone().remainder_scalar(size);
new_coordinates.push(new_coord.clone());
remaining_flat_coordinates = remaining_flat_coordinates.div_scalar(size);
}

new_coordinates.reverse();

Tensor::stack(new_coordinates, 0)
}
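
To make the stride arithmetic above easier to check, here is a standalone illustration in plain Rust (no burn types) of the row-major flattening that flatten_coordinates computes when S = 0, and the remainder/division round trip that unflatten_coordinates mirrors. The concrete shape and coordinate are made up for the example.

fn main() {
    let shape = [2usize, 3, 4];
    let coord = [1usize, 2, 3];

    // Row-major strides: stride[i] = product of shape[i + 1..]; the last stride is 1.
    let mut strides = [1usize; 3];
    for i in (0..2).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    assert_eq!(strides, [12, 4, 1]);

    // Flatten: dot product of coordinate and strides, i.e. what
    // strides.mul(coordinates).sum_dim(0) computes per column.
    let flat: usize = coord.iter().zip(strides.iter()).map(|(c, s)| c * s).sum();
    assert_eq!(flat, 1 * 12 + 2 * 4 + 3); // 23

    // Unflatten: peel remainders off the last dimension first, then divide,
    // as unflatten_coordinates does with remainder_scalar / div_scalar.
    let mut rest = flat;
    let mut recovered = [0usize; 3];
    for i in (0..3).rev() {
        recovered[i] = rest % shape[i];
        rest /= shape[i];
    }
    assert_eq!(recovered, coord);
}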
163 changes: 163 additions & 0 deletions crates/burn-sparse/src/coo_bool.rs
@@ -0,0 +1,163 @@
use super::coo::COO;
use crate::SparseCOOTensor;
use crate::{flatten_coordinates, unflatten_coordinates};
use burn_tensor::Int;
use burn_tensor::ReprPrimitive;
use burn_tensor::Shape;
use burn_tensor::Tensor;
use burn_tensor::{
backend::Backend,
ops::{SparseBoolOps, SparseTensorOps},
SparseStorage,
};
use burn_tensor::{Bool, Dense};

impl<B: Backend> SparseBoolOps<COO, B> for COO {
fn bool_to_sparse<const D: usize>(
dense: <B as Backend>::BoolTensorPrimitive<D>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_empty<const D: usize>(
shape: burn_tensor::Shape<D>,
device: &burn_tensor::Device<B>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_shape<const D: usize>(
tensor: &<COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> burn_tensor::Shape<D> {
todo!()
}

fn bool_reshape<const D1: usize, const D2: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1>,
shape: burn_tensor::Shape<D2>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D2> {
todo!()
}

fn bool_transpose<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_swap_dims<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
dim1: usize,
dim2: usize,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_permute<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
axes: &[usize],
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_flip<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
axes: &[usize],
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_slice<const D1: usize, const D2: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1>,
indices: [std::ops::Range<usize>; D2],
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1> {
todo!()
}

fn bool_slice_assign<const D1: usize, const D2: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1>,
ranges: [std::ops::Range<usize>; D2],
value: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1> {
todo!()
}

fn bool_device<const D: usize>(
tensor: &<COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> burn_tensor::Device<B> {
todo!()
}

fn bool_to_device<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
device: &burn_tensor::Device<B>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_repeat_dim<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
dim: usize,
times: usize,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_cat<const D: usize>(
tensors: Vec<<COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>>,
dim: usize,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_any<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, 1> {
todo!()
}

fn bool_any_dim<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
dim: usize,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_all<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, 1> {
todo!()
}

fn bool_all_dim<const D: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
dim: usize,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D> {
todo!()
}

fn bool_expand<const D1: usize, const D2: usize>(
tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D1>,
shape: burn_tensor::Shape<D2>,
) -> <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D2> {
todo!()
}

fn bool_coordinates<const D: usize>(
mut tensor: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> Option<ReprPrimitive<B, Int, Dense, 2>> {
tensor.coordinates.map(|c| c.into_primitive())
}

fn bool_to_dense<const D: usize>(
sparse: <COO as SparseStorage<B>>::SparsePrimitive<burn_tensor::Bool, D>,
) -> B::BoolTensorPrimitive<D> {
todo!()
}

fn bool_values<const D: usize>(
tensor: ReprPrimitive<B, Bool, burn_tensor::Sparse<B, COO>, D>,
) -> Option<ReprPrimitive<B, Bool, Dense, 1>> {
tensor.values.map(|v| v.into_primitive())
}
}
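
Most of these methods are still todo!() stubs in this draft. As a hedged sketch (not the PR's implementation), one of the simpler ones, bool_to_device, could plausibly be written by relocating the component dense tensors, along these lines:

// Sketch only: assumes the same imports as coo_bool.rs above; Device is referenced
// through burn_tensor. SparseCOOTensor holds optional coordinate/value tensors,
// a shape, and a device.
fn bool_to_device_sketch<B: Backend, const D: usize>(
    tensor: SparseCOOTensor<B, Bool, D>,
    device: &burn_tensor::Device<B>,
) -> SparseCOOTensor<B, Bool, D> {
    SparseCOOTensor {
        // Coordinates and values are ordinary dense tensors, so Tensor::to_device applies.
        coordinates: tensor.coordinates.map(|c| c.to_device(device)),
        values: tensor.values.map(|v| v.to_device(device)),
        shape: tensor.shape,
        device: device.clone(),
    }
}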