diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index b65795c..39d6a26 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-20T08:40:37","documenter_version":"1.8.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-26T12:11:18","documenter_version":"1.8.0"}}
\ No newline at end of file
diff --git a/dev/changes/index.html b/dev/changes/index.html
index 1da66dd..746b3d2 100644
--- a/dev/changes/index.html
+++ b/dev/changes/index.html
@@ -1,2 +1,2 @@
AbstractFactorization and its subtypes no longer carry element and index type information; they wrap more concretely typed data. This allows a preconditioner to be constructed without knowing the matrix and element types.
Don't create a new entry if the value to be assigned is zero, making things consistent with SparseMatrixCSC and ForwardDiff, as suggested by @MaximilianJHuber (a short illustration follows after these notes).
Tried to track down the source from which I learned the linked-list-based struct in order to document it. Ended up with SPARSKIT by Y. Saad; however, I believe this was already present in SPARSPAK by Chu, George, and Liu.
Internal rename of SparseMatrixExtension to SparseMatrixLNK.
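A short illustration of the zero-assignment behavior described in the note above. This is only a sketch, assuming that nnz is available for ExtendableSparseMatrix in the same way as for SparseMatrixCSC:
using ExtendableSparse, SparseArrays
A = ExtendableSparseMatrix(3, 3)
A[1, 2] = 0.0   # assigning an explicit zero does not create a stored entry
flush!(A)
nnz(A)          # 0
A[1, 2] = 1.0   # a nonzero assignment creates the entry as usual
flush!(A)
nnz(A)          # 1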
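The timings below call an assemble(A) helper that is defined earlier on the documentation page and is not included in this excerpt. As a hypothetical stand-in (the keyword entries_per_row and the random fill pattern are assumptions, not the page's actual code), it could look like this:
using SparseArrays, ExtendableSparse, BenchmarkTools

# Hypothetical stand-in for the assemble helper used in the timings below:
# insert a handful of random couplings per row via indexed assignment.
function assemble(A; entries_per_row = 5)
    n = size(A, 1)
    for i in 1:n, _ in 1:entries_per_row
        A[i, rand(1:n)] += 1.0
    end
    A
end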
Measure the time (in seconds) for assembling a SparseMatrixCSC:
t_csc = @belapsed begin
A = spzeros(10_000, 10_000)
assemble(A)
end
0.018289501
An ExtendableSparseMatrix can be used as a drop-in replacement. However, before any other use, this needs an internal structure rebuild which is invoked by the flush! method.
t_ext = @belapsed begin
A = ExtendableSparseMatrix(10_000, 10_000)
assemble(A)
flush!(A)
end
0.00131108
All specialized methods of linear algebra functions (e.g. \) for ExtendableSparseMatrix call flush! before proceeding.
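For example, a linear system can be solved directly with \ without an explicit flush! call. This is a sketch; the problem size and right-hand side are chosen here only for illustration:
using ExtendableSparse, SparseArrays
A = fdrand(10, 10, 10; matrixtype = ExtendableSparseMatrix)
b = ones(size(A, 1))
x = A \ b   # flush! is called internally before the solve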
The overall time gain from using ExtendableSparse is:
t_ext / t_csc
0.07168484257717037
The reason for this situation is that the SparseMatrixCSC struct just contains the data for storing the matrix in the compressed column format. Inserting a new entry into this storage scheme requires serious bookkeeping and shifts of large portions of the array content.
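To make the bookkeeping concrete, here is a small illustration (not taken from the page itself) of how inserting a single entry shifts the internal arrays of a SparseMatrixCSC:
using SparseArrays
A = sparse([1, 2, 3], [1, 2, 3], [1.0, 2.0, 3.0])  # 3x3 diagonal matrix
A.colptr, A.rowval, A.nzval  # ([1, 2, 3, 4], [1, 2, 3], [1.0, 2.0, 3.0])
A[3, 1] = 4.0                # insert into column 1
A.colptr, A.rowval, A.nzval  # ([1, 3, 4, 5], [1, 3, 2, 3], [1.0, 4.0, 2.0, 3.0])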
Julia provides the sparse method, which uses an intermediate storage of the data in two index arrays and a value array, the so-called coordinate (or COO) format:
function assemble_coo(n)
I = zeros(Int64, 0)
J = zeros(Int64, 0)
V = zeros(0)
@@ -32,9 +32,9 @@
sparse(I, J, V)
end;
t_coo = @belapsed assemble_coo(10_000)
0.000685107
While more convenient to use, the assembly based on ExtendableSparseMatrix is only moderately slower (here, roughly a factor of two):
t_ext / t_coo
1.906586854316187
Below one finds a more elaborate discussion for a quasi-3D problem.
The method fdrand creates a matrix similar to the discretization matrix of a Poisson equation on a d-dimensional cube. The approach is similar to that of a typical finite element code: calculate a local stiffness matrix and assemble it into the global one.
The code uses the index access API for the creation of the matrix, inserting elements via A[i,j]+=v, using an intermediate linked list structure which upon return is flushed into a SparseMatrixCSC structure.
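As a sketch of the assembly pattern just described (this is not the actual fdrand implementation, only an assumed illustration), a 1D Laplacian could be assembled cell by cell via A[i,j] += v:
using ExtendableSparse

# Add a local 2x2 "stiffness" contribution for each pair of neighboring unknowns.
function assemble_1d_laplace(n)
    A = ExtendableSparseMatrix(n, n)
    for k in 1:(n - 1)            # loop over cells (k, k+1)
        i, j = k, k + 1
        A[i, i] += 1.0; A[i, j] -= 1.0
        A[j, i] -= 1.0; A[j, j] += 1.0
    end
    flush!(A)                     # move pending linked-list entries into the CSC part
    A
end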
For repeated calculations on the same sparsity structure (e.g. for time dependent problems or Newton iterations) it is convenient to skip all but the first creation step and just replace the values in the matrix after setting the elements of the nzval vector to zero. Typically, in finite element and finite volume methods this step updates matrix entries (most of them several times) by adding values. In this case, the current indexing interface of Julia requires accessing the matrix twice:
A = spzeros(3, 3)
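The demonstration that follows this assignment on the original page is not included in this excerpt. Purely as an illustration, the lowered form of the update makes the two accesses visible:
# A[1, 2] += 3 is lowered to a getindex followed by a setindex!,
# i.e. two separate searches of the sparse structure.
Meta.@lower A[1, 2] += 3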
For sparse matrices this means performing the index search in the structure twice. The package provides the method updateindex! for both SparseMatrixCSC and ExtendableSparse, which allows updating a matrix element with just one index search.
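A minimal usage sketch of updateindex! (matrix size chosen only for illustration):
using ExtendableSparse
A = ExtendableSparseMatrix(3, 3)
updateindex!(A, +, 3.0, 1, 2)   # same effect as A[1, 2] += 3.0, with a single index search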
A = fdrand(30, 30, 30; matrixtype = ExtendableSparseMatrix)
@belapsed fdrand!(A, 30, 30, 30,
                  update = (A, v, i, j) -> A[i, j] += v)
0.004670713
A = fdrand(30, 30, 30; matrixtype = ExtendableSparseMatrix)
@belapsed fdrand!(A, 30, 30, 30,
                  update = (A, v, i, j) -> updateindex!(A, +, v, i, j))
0.002521364
Note that the update process for ExtendableSparse may be slightly slower than for SparseMatrixCSC due to the overhead which comes from checking the presence of new entries.