
Make sparse linear algebra operations work on all special matrices #110

Open
7 tasks
ViralBShah opened this issue Jan 24, 2022 · 4 comments

ViralBShah commented Jan 24, 2022

Just like LU in #131, we also need these:

  • cholesky(a')
  • ldlt(a')
  • det(a')
  • inv(a')
  • qr(a') works, but we should verify it is not taking a slow path
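A minimal sketch of the fallback pattern the list above asks for, assuming we simply materialize the adjoint and dispatch to the existing `SparseMatrixCSC` methods (the function names come from `LinearAlgebra`/`SparseArrays`; the specific method definitions here are illustrative, not the final implementation):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical fallbacks: materialize the adjoint, then call the usual
# sparse method. `copy(a')` produces a plain SparseMatrixCSC, so the
# existing sparse code paths apply. Restricted to real element types,
# where adjoint and transpose coincide.
LinearAlgebra.cholesky(a::Adjoint{<:Real,<:SparseMatrixCSC}) = cholesky(copy(a))
LinearAlgebra.ldlt(a::Adjoint{<:Real,<:SparseMatrixCSC})     = ldlt(copy(a))
LinearAlgebra.det(a::Adjoint{<:Real,<:SparseMatrixCSC})      = det(copy(a))
```

An optimized method for any one of these can later replace the copy-based fallback without changing the user-facing API.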

In addition, we should at the very least put promotion rules in place.

We also need to make all of these work for Symmetric, Hermitian, and the other special matrix types (#13, #125; see https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#Special-matrices).

Much of this essentially needs to have promotion rules in place to convert the matrix to a regular sparse matrix and then call the usual method. Over time, we can optimize certain paths.
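The promotion described above can be as simple as converting the wrapped matrix to a plain sparse matrix and reusing the existing factorization. A sketch, assuming the plain-`SparseMatrixCSC` methods already exist (`lu_fallback` is a hypothetical name, not an actual method in the package):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical promotion fallback: `sparse(A)` materializes the wrapper
# (e.g. both triangles of a Symmetric) into a SparseMatrixCSC, after
# which the existing sparse `lu` applies.
lu_fallback(A::Union{Symmetric,Hermitian}) = lu(sparse(A))
```

Specialized paths (e.g. dispatching a `Symmetric` positive definite matrix to `cholesky` instead) can be layered on top of this fallback over time.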

Similar issue in the dense case that has a complete list of factorizations (not all of which may have sparse counterparts): JuliaLang/LinearAlgebra.jl#776

@ViralBShah ViralBShah changed the title More missing adjoint methods Make sparse linear algebra operations work on all special matrices Jan 24, 2022
@rayegun rayegun self-assigned this Jan 24, 2022

rayegun commented Jan 24, 2022

We don't necessarily have to be clever for a lot of these, right? If there's no simple, existing way to specialize, just perform a copy, and then write faster versions where requested?

@fredrikekre

Wouldn't errors be better? Then you could construct the correct (i.e. the transposed) matrix directly instead of thinking there is an efficient transpose solver.

@ViralBShah

My thought was: the transpose is a small fraction of the time compared to the factorization, which usually also involves fill-in and memory growth. So if we don't promote, user code will just have copy statements scattered all over. In some of these special cases the matrices are also symmetric, so little needs to be done. It is true that some users may prefer to avoid these copies; perhaps we can add notes about that to the documentation?

I believe in many of these cases, the underlying solvers do support transpose solves, and we can optimize them over time.

Right now, all this is a bit all over the place: some methods work, some have fallbacks, and the rest error, and it is inconsistent with the dense versions.


rayegun commented Jan 24, 2022

This is also what LinearAlgebra does for dense anyway right? I see copy_oftype being done on lu(A) for Matrix (although in that case it's a byproduct of in-place lu!). I think matching dense here is better. I will make note of where copies occur in the docs.

On the other hand, at least for UMFPACK, Dr. Davis recommended that we not optimize by doing lu(A::Transpose{T, SparseMatrixCSC{T}}) = lu(parent(A))': while the solve results will be the same, the quality of the factorization itself can differ. So in some cases it may be better to keep the lu(A::Transpose) = lu(copy(A)) code path intact. In other words, support transpose solves by doing fact = lu(A); fact' \ b. This should be documented, of course, with the recommendation that users test both and determine which works better for them.
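The transpose-solve idiom described above can be sketched as follows (the matrix and right-hand side are illustrative; note the backslash, which solves the adjoint system while reusing the one factorization):

```julia
using LinearAlgebra, SparseArrays

A = sparse([2.0 1.0 0.0; 0.0 2.0 1.0; 1.0 0.0 2.0])
b = [1.0, 2.0, 3.0]

fact = lu(A)        # one UMFPACK factorization of A
x  = fact \ b       # solves A  * x  == b
xt = fact' \ b      # solves A' * xt == b, reusing the same factorization
```

No second factorization of A' is performed; the adjoint wrapper around `fact` simply routes to UMFPACK's transpose-solve mode.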

@rayegun rayegun transferred this issue from JuliaSparse/SuiteSparse.jl Jun 7, 2022