Sparsity support #29
Comments
@mlubin pointed out that @mauro3 has https://github.com/mauro3/MatrixColorings.jl, which already implements these kinds of algorithms. What would be a good way to revive that package? I am wondering if you'd be willing to transfer it to JuliaDiffEq, then place Ragged.jl inside of this package (maybe to replace later with RaggedArrays.jl, but for now just to get this done), and then I could try to get it updated to v0.6 and registered. I think it should be kept as a separate package since it would be a good source for seeding AD a la JuliaDiff/ForwardDiff.jl#43. I wonder, since it's basically sparse seeding, whether it should live in JuliaDiff? @jrevels. Anyways, I would at least want to get it revived and registered with CI to stop the bitrot and start putting it to use!
Yes, reviving this would be good, although I don't have the time and, as I'm not currently solving PDEs, not much urge either. Yes, transferring it to JuliaDiffEq would be good. Also, maybe https://github.com/mbauman/RaggedArrays.jl would be a better dependency, but it is also stale.
Support for BandedMatrix would be very valuable to me (JuliaLinearAlgebra/BandedMatrices.jl#86).
Indeed, that would be a nice one. In fact, that's probably the easiest coloring algorithm, right? With a band size of n, the (n+1)-th value has no influence on the 1st, so the seeding is just (1, 0 (n times), 1, 0 (n times), ..., 0), then the shifted (0, 1, 0 (n times), 1, 0 (n times), ..., 0), and so on. We plan on doing this for a GSoC, and this would be a good first step to get the interface together and set it up on the easy case of BandedMatrices.
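A minimal sketch of that seeding, assuming "band size n" means columns more than n apart never share a row: the colors simply repeat with period n + 1, and each seed perturbs every (n+1)-th column at once. The helper names here are made up for illustration, not existing code.

```julia
# Hypothetical helpers: colors repeat with period n + 1 for a band size of n.
banded_colors(ncols, n) = [mod1(j, n + 1) for j in 1:ncols]
banded_seeds(ncols, n)  = [float.(banded_colors(ncols, n) .== c) for c in 1:n+1]

# For 8 columns and band size 2:
# banded_colors(8, 2) == [1, 2, 3, 1, 2, 3, 1, 2]
# banded_seeds(8, 2)[1] == [1, 0, 0, 1, 0, 0, 1, 0]
```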
Jacobians now allow for a color vector. |
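For reference, here is roughly what a color vector buys on a tridiagonal problem: all columns of one color are perturbed in a single evaluation, and the result is decompressed through the known sparsity pattern. This is only an illustration under those assumptions; none of the names below are the actual package API.

```julia
using SparseArrays, LinearAlgebra

# Illustrative tridiagonal test function (not the package API).
function f!(du, u)
    n = length(u)
    for i in 1:n
        du[i] = -2u[i] + (i > 1 ? u[i-1] : 0.0) + (i < n ? u[i+1] : 0.0)
    end
    return du
end

n  = 10
x  = rand(n)
colors   = [mod1(j, 3) for j in 1:n]                           # 3 colors suffice for tridiagonal
sparsity = sparse(Tridiagonal(ones(n-1), ones(n), ones(n-1)))  # known Jacobian structure

ϵ  = sqrt(eps())
fx = similar(x); f!(fx, x)
J  = copy(sparsity)
rows, vals = rowvals(J), nonzeros(J)

for c in 1:maximum(colors)
    d  = float.(colors .== c)          # seed: perturb all columns of this color at once
    fp = similar(x); f!(fp, x .+ ϵ .* d)
    Jd = (fp .- fx) ./ ϵ               # ≈ J * d, one evaluation per color
    for j in findall(colors .== c), k in nzrange(J, j)
        vals[k] = Jd[rows[k]]          # decompress into the stored entries of column j
    end
end
# J now approximates the Jacobian with 1 + 3 evaluations of f! instead of 1 + n.
```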
We can support sparsity in the in-place format (out of place doesn't really make sense anyways) by dispatching on the return type.
Currently there's no generic way to do this
(JuliaLang/julia#19372 (comment)).
That shows how sparse vectors can easily be handled.
But we can support sparse matrices as shown there, by simply looping over the non-zero elements (see the sketch at the end of this comment). Our current kernels can be refactored to make use of approaches like:
https://pdfs.semanticscholar.org/ee06/57173bb0f0ed9571e205e970819346fb59de.pdf
for choosing the differencing and the ordering. It might be nice to just handle the basic cases that are easy to decouple, like tridiagonal and banded matrices, with a generic fallback that keeps the output sparsity intact without the computational advantage.
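To make the dispatch idea concrete, something along these lines could work: a method on SparseMatrixCSC only touches stored entries, while an AbstractMatrix fallback fills dense columns. The name assemble_jacobian! and the diff_columns argument are placeholders, not existing code; diff_columns[j] stands for the j-th finite-difference column (f(x + ϵ*e_j) - f(x)) / ϵ, however it was computed.

```julia
using SparseArrays

# Sparse method: write only into the structural non-zeros of J.
function assemble_jacobian!(J::SparseMatrixCSC, diff_columns)
    rows, vals = rowvals(J), nonzeros(J)
    for j in 1:size(J, 2), k in nzrange(J, j)
        vals[k] = diff_columns[j][rows[k]]
    end
    return J
end

# Generic dense fallback: same result, no sparsity advantage.
function assemble_jacobian!(J::AbstractMatrix, diff_columns)
    for j in 1:size(J, 2)
        J[:, j] .= diff_columns[j]
    end
    return J
end
```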