Backports to v1.11 #571

Merged: 9 commits, Nov 19, 2024

26 changes: 9 additions & 17 deletions .github/workflows/ci.yml
@@ -23,33 +23,26 @@ jobs:
fail-fast: false
matrix:
version:
- 'nightly'
- '1.11'
os:
- ubuntu-latest
- macOS-latest
- windows-latest
arch:
- x64
- x86
exclude:
include:
- os: macOS-latest
arch: aarch64
version: '1.11'
- os: ubuntu-latest
arch: x86
version: '1.11'
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v1
- uses: julia-actions/setup-julia@latest
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- uses: actions/cache@v4
env:
cache-name: cache-artifacts
with:
path: ~/.julia/artifacts
key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
restore-keys: |
${{ runner.os }}-test-${{ env.cache-name }}-
${{ runner.os }}-test-${{ matrix.os }}
${{ runner.os }}-
- uses: julia-actions/cache@v2
- run: julia --color=yes .ci/test_and_change_uuid.jl
- uses: julia-actions/julia-buildpkg@v1
- uses: julia-actions/julia-runtest@v1
@@ -67,8 +60,7 @@ jobs:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@latest
with:
# version: '1.6'
version: 'nightly'
version: '1.11'
- name: Generate docs
run: |
julia --color=yes -e 'write("Project.toml", replace(read("Project.toml", String), r"uuid = .*?\n" =>"uuid = \"3f01184e-e22b-5df5-ae63-d93ebab69eaf\"\n"));'
1 change: 0 additions & 1 deletion docs/make.jl
@@ -8,7 +8,6 @@ makedocs(
sitename = "SparseArrays",
pages = Any[
"SparseArrays" => "index.md",
"Sparse Linear Algebra" => "solvers.md",
];
warnonly = [:missing_docs, :cross_references],
)
53 changes: 53 additions & 0 deletions docs/src/index.md
@@ -206,6 +206,27 @@ section of the standard library reference.
| [`sprandn(m,n,d)`](@ref) | [`randn(m,n)`](@ref) | Creates an *m*-by-*n* random matrix (of density *d*) with iid non-zero elements distributed according to the standard normal (Gaussian) distribution. |
| [`sprandn(rng,m,n,d)`](@ref) | [`randn(rng,m,n)`](@ref) | Creates an *m*-by-*n* random matrix (of density *d*) with iid non-zero elements generated with the `rng` random number generator. |

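For illustration, a minimal sketch of the `sprandn` constructors listed above (not part of the diff; output values differ from run to run):

```julia
using SparseArrays, Random

S1 = sprandn(100, 100, 0.05)                       # 100×100, ~5% stored entries, standard-normal values
S2 = sprandn(MersenneTwister(42), 100, 100, 0.05)  # reproducible: values drawn from the given RNG
nnz(S1)                                            # number of stored (structurally nonzero) entries
```
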
## [Sparse Linear Algebra](@id stdlib-sparse-linalg)

Sparse matrix solvers call functions from [SuiteSparse](http://suitesparse.com). The following factorizations are available:

| Type | Description |
|:----------------------|:--------------------------------------------- |
| `CHOLMOD.Factor` | Cholesky and LDLt factorizations |
| `UMFPACK.UmfpackLU` | LU factorization |
| `SPQR.QRSparse` | QR factorization |

These factorizations are described in more detail in the [Sparse Linear Algebra API section](@ref stdlib-sparse-linalg-api); a brief usage sketch follows the list below:

1. [`cholesky`](@ref SparseArrays.CHOLMOD.cholesky)
2. [`ldlt`](@ref SparseArrays.CHOLMOD.ldlt)
3. [`lu`](@ref SparseArrays.UMFPACK.lu)
4. [`qr`](@ref SparseArrays.SPQR.qr)
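
As referenced above, a rough usage sketch of these factorizations (illustrative only, not part of the rendered manual; `A` is assumed symmetric positive definite and `B` a general square sparse matrix):

```julia
using SparseArrays, LinearAlgebra

# Symmetric positive definite example: 1-D discrete Laplacian
A = spdiagm(-1 => fill(-1.0, 9), 0 => fill(2.0, 10), 1 => fill(-1.0, 9))
b = ones(10)

F = cholesky(A)                    # CHOLMOD.Factor with a fill-reducing ordering
x = F \ b                          # solve A*x = b using the factorization
L = ldlt(A)                        # LDLt factorization, also via CHOLMOD

B = sprandn(10, 10, 0.3) + 10.0I   # general (diagonally dominant) square sparse matrix
y = lu(B) \ b                      # UMFPACK LU

Q = qr(sprandn(20, 10, 0.3))       # SPQR QR; also usable for sparse least-squares problems
```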

```@meta
DocTestSetup = nothing
```

# [SparseArrays API](@id stdlib-sparse-arrays)

```@docs
@@ -245,6 +266,26 @@ SparseArrays.ftranspose!
```@meta
DocTestSetup = nothing
```

# [Sparse Linear Algebra API](@id stdlib-sparse-linalg-api)

```@docs
SparseArrays.CHOLMOD.cholesky
SparseArrays.CHOLMOD.cholesky!
SparseArrays.CHOLMOD.lowrankupdate
SparseArrays.CHOLMOD.lowrankupdate!
SparseArrays.CHOLMOD.lowrankdowndate
SparseArrays.CHOLMOD.lowrankdowndate!
SparseArrays.CHOLMOD.lowrankupdowndate!
SparseArrays.CHOLMOD.ldlt
SparseArrays.UMFPACK.lu
SparseArrays.SPQR.qr
```

```@meta
DocTestSetup = nothing
```

# Noteworthy External Sparse Packages

Several other Julia packages provide sparse matrix implementations that should be mentioned:
@@ -264,3 +305,15 @@ Several other Julia packages provide sparse matrix implementations that should b
7. [ExtendableSparse.jl](https://github.com/j-fu/ExtendableSparse.jl) enables fast insertion into sparse matrices using a lazy approach to new stored indices.

8. [Finch.jl](https://github.com/willow-ahrens/Finch.jl) supports extensive multidimensional sparse array formats and operations through a mini tensor language and compiler, all in native Julia. Support for COO, CSF, CSR, CSC and more, as well as operations like broadcast, reduce, etc. and custom operations.

External packages providing sparse direct solvers:
1. [KLU.jl](https://github.com/JuliaSparse/KLU.jl)
2. [Pardiso.jl](https://github.com/JuliaSparse/Pardiso.jl/)

External packages providing solvers for iterative solution of eigensystems and singular value decompositions:
1. [ArnoldiMethod.jl](https://github.com/JuliaLinearAlgebra/ArnoldiMethod.jl)
2. [KrylovKit.jl](https://github.com/Jutho/KrylovKit.jl)
3. [Arpack.jl](https://github.com/JuliaLinearAlgebra/Arpack.jl)

External packages for working with graphs:
1. [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl)
44 changes: 0 additions & 44 deletions docs/src/solvers.md

This file was deleted.

6 changes: 3 additions & 3 deletions src/linalg.jl
@@ -47,11 +47,11 @@
end
end

generic_matmatmul!(C::StridedMatrix, tA, tB, A::SparseMatrixCSCUnion2, B::DenseMatrixUnion, _add::MulAddMul) =
@inline generic_matmatmul!(C::StridedMatrix, tA, tB, A::SparseMatrixCSCUnion2, B::DenseMatrixUnion, _add::MulAddMul) =
spdensemul!(C, tA, tB, A, B, _add)
generic_matmatmul!(C::StridedMatrix, tA, tB, A::SparseMatrixCSCUnion2, B::AbstractTriangular, _add::MulAddMul) =
@inline generic_matmatmul!(C::StridedMatrix, tA, tB, A::SparseMatrixCSCUnion2, B::AbstractTriangular, _add::MulAddMul) =
spdensemul!(C, tA, tB, A, B, _add)
[Codecov annotation: added line src/linalg.jl#L52 was not covered by tests]
generic_matvecmul!(C::StridedVecOrMat, tA, A::SparseMatrixCSCUnion2, B::DenseInputVector, _add::MulAddMul) =
@inline generic_matvecmul!(C::StridedVecOrMat, tA, A::SparseMatrixCSCUnion2, B::DenseInputVector, _add::MulAddMul) =
spdensemul!(C, tA, 'N', A, B, _add)

Base.@constprop :aggressive function spdensemul!(C, tA, tB, A, B, _add)
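
These are the methods that `mul!` ultimately dispatches to for sparse-times-dense products. A minimal sketch of code exercising these paths (illustrative; the internal method names above are taken from the diff):

```julia
using SparseArrays, LinearAlgebra

A = sprandn(50, 50, 0.1)
B = randn(50, 20)
C = zeros(50, 20)
mul!(C, A, B)             # sparse × dense matrix product (generic_matmatmul! path above)

v = randn(50)
w = zeros(50)
mul!(w, A, v)             # sparse × dense vector product (generic_matvecmul! path above)

mul!(C, A, B, 2.0, 1.0)   # five-argument form: C = 2*A*B + 1*C, carried through MulAddMul
```
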
60 changes: 34 additions & 26 deletions src/solvers/cholmod.jl
@@ -795,7 +795,7 @@
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return ssmult(A, B, stype, values, sorted)
end
function horzcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
function horzcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
{Tv1<:VRealTypes, Tv2<:VRealTypes, Ti1, Ti2}
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return horzcat(A, B, values)
@@ -809,7 +809,7 @@
A, X = convert(Sparse{Tv3, Ti}, A), convert(Dense{Tv3}, X)
return sdmult!(A, transpose, α, β, X, Y)
end
function vertcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
function vertcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
{Tv1<:VRealTypes, Ti1, Tv2<:VRealTypes, Ti2}
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return vertcat(A, B, values)
@@ -895,7 +895,7 @@
return Dense{T}(A)
end
# Don't always promote to Float64 now that we have Float32 support.
Dense(A::StridedVecOrMatInclAdjAndTrans{T}) where
Dense(A::StridedVecOrMatInclAdjAndTrans{T}) where
{T<:Union{Float16, ComplexF16, Float32, ComplexF32}} = Dense{promote_type(T, Float32)}(A)


@@ -1055,8 +1055,8 @@
Sparse{promote_type(Tv, Float64), Ti <: ITypes ? Ti : promote_type(Ti, Int)}(
A.data, A.uplo == 'L' ? -1 : 1
)
Sparse(A::Hermitian{Tv, SparseMatrixCSC{Tv,Ti}}) where
{Tv<:Union{Float16, Float32, ComplexF32, ComplexF16}, Ti} =
Sparse(A::Hermitian{Tv, SparseMatrixCSC{Tv,Ti}}) where
{Tv<:Union{Float16, Float32, ComplexF32, ComplexF16}, Ti} =
Sparse{promote_type(Float32, Tv), Ti <: ITypes ? Ti : promote_type(Ti, Int)}(
A.data, A.uplo == 'L' ? -1 : 1
)
@@ -1076,7 +1076,7 @@
a = unsafe_load(typedpointer(A))
S = allocate_sparse(a.nrow, a.ncol, a.nzmax, Bool(a.sorted), Bool(a.packed), a.stype, Tnew, Inew)
s = unsafe_load(typedpointer(S))

ap = unsafe_wrap(Array, a.p, (a.ncol + 1,), own = false)
sp = unsafe_wrap(Array, s.p, (s.ncol + 1,), own = false)
copyto!(sp, ap)
@@ -1376,7 +1376,7 @@

## Multiplication
(*)(A::Sparse, B::Sparse) = ssmult(A, B, 0, true, true)
(*)(A::Sparse, B::Dense) = sdmult!(A, false, 1., 0., B,
(*)(A::Sparse, B::Dense) = sdmult!(A, false, 1., 0., B,
zeros(size(A, 1), size(B, 2), promote_type(eltype(A), eltype(B)))
)
(*)(A::Sparse, B::VecOrMat) = (*)(A, Dense(B))
@@ -1413,7 +1413,7 @@
end

*(adjA::Adjoint{<:Any,<:Sparse}, B::Dense) = (
A = parent(adjA); sdmult!(A, true, 1., 0., B,
A = parent(adjA); sdmult!(A, true, 1., 0., B,
zeros(size(A, 2), size(B, 2), promote_type(eltype(A), eltype(B))))
)
*(adjA::Adjoint{<:Any,<:Sparse}, B::VecOrMat) = adjA * Dense(B)
@@ -1423,25 +1423,33 @@

## Compute that symbolic factorization only
function symbolic(A::Sparse{<:VTypes, Ti};
perm::Union{Nothing,AbstractVector{<:Integer}}=nothing,
postorder::Bool=isnothing(perm)||isempty(perm), userperm_only::Bool=true) where Ti
perm::Union{Nothing,AbstractVector{<:Integer}}=nothing,
postorder::Bool=isnothing(perm)||isempty(perm),
userperm_only::Bool=true,
nested_dissection::Bool=false) where Ti

sA = unsafe_load(pointer(A))
sA.stype == 0 && throw(ArgumentError("sparse matrix is not symmetric/Hermitian"))

@cholmod_param postorder = postorder begin
if perm === nothing || isempty(perm) # TODO: deprecate empty perm
return analyze(A)
else # user permutation provided
if userperm_only # use perm even if it is worse than AMD
@cholmod_param nmethods = 1 begin
# The default is to just use AMD. Use nested dissection only if explicitly asked for.
# https://github.com/JuliaSparse/SparseArrays.jl/issues/548
# https://github.com/DrTimothyAldenDavis/SuiteSparse/blob/26ababc7f3af725c5fb9168a1b94850eab74b666/CHOLMOD/Include/cholmod.h#L555-L574
@cholmod_param nmethods = (nested_dissection ? 0 : 2) begin
@cholmod_param postorder = postorder begin

[Codecov annotation: added lines src/solvers/cholmod.jl#L1437-L1438 were not covered by tests]
if perm === nothing || isempty(perm) # TODO: deprecate empty perm
return analyze(A)
else # user permutation provided
if userperm_only # use perm even if it is worse than AMD
@cholmod_param nmethods = 1 begin

[Codecov annotation: added line src/solvers/cholmod.jl#L1443 was not covered by tests]
return analyze_p(A, Ti[p-1 for p in perm])
end
else
return analyze_p(A, Ti[p-1 for p in perm])
end
else
return analyze_p(A, Ti[p-1 for p in perm])
end
end
end

end

function cholesky!(F::Factor{Tv}, A::Sparse{Tv};
@@ -1467,7 +1475,7 @@

!!! note
This method uses the CHOLMOD library from SuiteSparse, which only supports
real or complex types in single or double precision.
real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.
"""
@@ -1587,8 +1595,8 @@

!!! note
This method uses the CHOLMOD[^ACM887][^DavisHager2009] library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse).
CHOLMOD only supports real or complex types in single or double precision.
Input matrices not of those element types will be
CHOLMOD only supports real or complex types in single or double precision.
Input matrices not of those element types will be
converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the
@@ -1633,8 +1641,8 @@
See also [`ldlt`](@ref).

!!! note
This method uses the CHOLMOD library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse),
which only supports real or complex types in single or double precision.
This method uses the CHOLMOD library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse),
which only supports real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.
"""
@@ -1695,7 +1703,7 @@

!!! note
This method uses the CHOLMOD[^ACM887][^DavisHager2009] library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse).
CHOLMOD only supports real or complex types in single or double precision.
CHOLMOD only supports real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.

@@ -1767,7 +1775,7 @@
"""
lowrankupdate(F::Factor{Tv}, V::AbstractArray{Tv2}) where {Tv, Tv2} =
lowrankupdate!(
change_xdtype(F, promote_type(Tv, Tv2)),
change_xdtype(F, promote_type(Tv, Tv2)),
convert(AbstractArray{promote_type(Tv, Tv2)}, V)
)

@@ -1782,7 +1790,7 @@
"""
lowrankdowndate(F::Factor{Tv}, V::AbstractArray{Tv2}) where {Tv, Tv2} =
lowrankdowndate!(
change_xdtype(F, promote_type(Tv, Tv2)),
change_xdtype(F, promote_type(Tv, Tv2)),
convert(AbstractArray{promote_type(Tv, Tv2)}, V)
)

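
Relevant to the `symbolic` change above: the symbolic analysis now defaults to AMD ordering and only tries nested dissection when explicitly requested. A hedged sketch (the `nested_dissection` keyword is the one introduced in this diff; `CHOLMOD.symbolic` is internal, the public entry point is `cholesky`):

```julia
using SparseArrays, LinearAlgebra
using SparseArrays: CHOLMOD

A = spdiagm(-1 => fill(-1.0, 9), 0 => fill(2.0, 10), 1 => fill(-1.0, 9))  # SPD test matrix

F  = cholesky(A)               # ordering chosen by CHOLMOD (AMD by default)
Fp = cholesky(A; perm = 1:10)  # user-supplied permutation, used as-is (userperm_only branch)

# Internal API touched by this diff: ask for nested dissection explicitly.
S = CHOLMOD.Sparse(Symmetric(A))
CHOLMOD.symbolic(S; nested_dissection = true)
```
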
2 changes: 1 addition & 1 deletion src/solvers/spqr.jl
@@ -146,7 +146,7 @@ Matrix{T}(Q::QRSparseQ) where {T} = lmul!(Q, Matrix{T}(I, size(Q, 1), min(size(Q

# From SPQR manual p. 6
_default_tol(A::AbstractSparseMatrixCSC) =
20*sum(size(A))*eps(real(eltype(A)))*maximum(norm(view(A, :, i)) for i in 1:size(A, 2))
20*sum(size(A))*eps()*maximum(norm(view(A, :, i)) for i in 1:size(A, 2))

"""
qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse
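
For context on the tolerance above, a brief hedged sketch of how `tol` enters the sparse `qr` call (`_default_tol` itself is internal):

```julia
using SparseArrays, LinearAlgebra

A = sprandn(100, 40, 0.05)
F  = qr(A)                # default tolerance, computed as in _default_tol above
F2 = qr(A; tol = 1e-10)   # explicit tolerance used for detecting rank deficiency
b  = randn(100)
x  = F \ b                # sparse least-squares solve via the QR factorization
```
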
2 changes: 2 additions & 0 deletions src/sparsematrix.jl
@@ -49,10 +49,12 @@ SparseMatrixCSC(m, n, colptr::ReadOnly, rowval::ReadOnly, nzval::Vector) =

"""
SparseMatrixCSC{Tv,Ti}(::UndefInitializer, m::Integer, n::Integer)
SparseMatrixCSC{Tv,Ti}(::UndefInitializer, (m,n)::NTuple{2,Integer})

Creates an empty sparse matrix with element type `Tv` and integer type `Ti` of size `m × n`.
"""
SparseMatrixCSC{Tv,Ti}(::UndefInitializer, m::Integer, n::Integer) where {Tv, Ti} = spzeros(Tv, Ti, m, n)
SparseMatrixCSC{Tv,Ti}(::UndefInitializer, mn::NTuple{2,Integer}) where {Tv, Ti} = spzeros(Tv, Ti, mn...)
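
A minimal sketch of the two constructor forms documented here (both yield an all-zero sparse matrix, equivalent to `spzeros`):

```julia
using SparseArrays

S1 = SparseMatrixCSC{Float64,Int}(undef, 3, 4)    # 3×4, eltype Float64, index type Int
S2 = SparseMatrixCSC{Float64,Int}(undef, (3, 4))  # same, with the dimensions given as a tuple
size(S2), nnz(S2)                                 # ((3, 4), 0): no stored entries
```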

"""
FixedSparseCSC{Tv,Ti<:Integer} <: AbstractSparseMatrixCSC{Tv,Ti}