
Add SpecialFunctions extension #82

Merged — adrhill merged 18 commits into main from gd/adnlp on May 22, 2024

Conversation

@gdalle (Collaborator) commented May 21, 2024

At the moment, the tests fail because I try to "eval into a closed module during precompilation".

For anyone who wants to take a look:

  • the new function overload_all calls multiple functions that define specific overloads with @eval
  • the typical logic for one of these functions can be seen here:
    function overload_connectivity_1_to_1(m::Module, fn::Function)
        ms, fns = nameof(m), nameof(fn)
        @eval function $ms.$fns(t::T) where {T<:ConnectivityTracer}
            return connectivity_tracer_1_to_1(t, is_influence_zero_global($ms.$fns))
        end
        @eval function $ms.$fns(d::D) where {P,T<:ConnectivityTracer,D<:Dual{P,T}}
            x = primal(d)
            p_out = $ms.$fns(x)
            t_out = connectivity_tracer_1_to_1(tracer(d), is_influence_zero_local($ms.$fns, x))
            return Dual(p_out, t_out)
        end
    end

I would love some guidance on:

  • whether this approach even makes sense
  • when to run the evals (during init or during precompilation)
  • whether I should use the macro @eval or the function eval
  • in which module I should eval

@mkitti commented May 22, 2024

The problematic @eval is here:

https://github.com/adrhill/SparseConnectivityTracer.jl/blob/main/src%2Foverload_connectivity.jl#L15

The problem is that you are invoking overload_connectivity_1_to_1(m, op) in __init__, which evals code into the module after it has already been precompiled.

The design of SCT is problematic. Instead of defining new methods in the calling module, it defines new methods in SCT. The better way to do this would be to define a macro in SCT to extend these methods, but have the method definitions land in the calling module.
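
A rough sketch of what that suggestion could look like (hypothetical macro, not actual SCT code; it reuses the helper names shown in the PR description): because macro expansion happens in the module that invokes the macro, the generated method is defined and precompiled in that calling module instead of being eval'ed into SCT at runtime.

    # Hypothetical sketch: a macro defined inside SCT
    macro overload_connectivity_1_to_1(fn)
        fns = esc(fn)  # resolve the function name in the calling module
        return quote
            function $fns(t::T) where {T<:ConnectivityTracer}
                return connectivity_tracer_1_to_1(t, is_influence_zero_global($fns))
            end
        end
    end

    # A downstream package or extension would then write, for example:
    # SCT.@overload_connectivity_1_to_1 SpecialFunctions.erf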

@gdalle (Collaborator, Author) commented May 22, 2024

I put this eval step in the initialization precisely because the error was warning me not to eval stuff during precompilation 🤔

What do you mean by "define new methods in the calling module"? Should I use the eval of Base/SpecialFunctions? Or is the calling module yet another one?

@codecov-commenter commented May 22, 2024

Codecov Report

Attention: Patch coverage is 85.71429%, with 18 lines in your changes missing coverage. Please review.

Project coverage is 77.49%. Comparing base (0278700) to head (d523e07).

Files Patch % Lines
src/overload_connectivity.jl 74.07% 7 Missing ⚠️
src/overload_gradient.jl 88.09% 5 Missing ⚠️
src/overload_hessian.jl 84.37% 5 Missing ⚠️
src/SparseConnectivityTracer.jl 66.66% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #82      +/-   ##
==========================================
+ Coverage   76.19%   77.49%   +1.29%     
==========================================
  Files          14       16       +2     
  Lines         689      662      -27     
==========================================
- Hits          525      513      -12     
+ Misses        164      149      -15     


@gdalle gdalle requested a review from adrhill May 22, 2024 06:32
@gdalle (Collaborator, Author) commented May 22, 2024

I finally managed to get it working. Here's a summary of the changes:

Source

Extensions

  • Add SpecialFunctions extension with Requires for Julia < 1.9
  • Classify the unary and binary operators there
  • Add the one-liner eval(SCT.overload_all(:SpecialFunctions)) (sketched after this list)

Tests

  • Test classification for the operators from SpecialFunctions
  • Add connectivity, jacobian and hessian tests to check that they can digest tracer numbers
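
For reference, a minimal sketch of the extension entry point described above (module name and imports are assumptions; only the eval one-liner is taken from this summary):

    module SparseConnectivityTracerSpecialFunctionsExt

    using SparseConnectivityTracer  # tracer types and overload helpers
    import SparseConnectivityTracer as SCT
    using SpecialFunctions

    # Generate the overload expressions for the classified SpecialFunctions
    # operators and evaluate them here, in the extension module:
    eval(SCT.overload_all(:SpecialFunctions))

    end # module

On Julia < 1.9, the same code would be loaded through Requires from SCT's __init__ rather than as a package extension.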

@gdalle gdalle mentioned this pull request May 22, 2024
Comment on lines +1 to +30
function overload_all(M)
    exprs_1_to_1 = [
        quote
            $(overload_connectivity_1_to_1(M, op))
            $(overload_gradient_1_to_1(M, op))
            $(overload_hessian_1_to_1(M, op))
        end for op in nameof.(list_operators_1_to_1(Val(M)))
    ]
    exprs_2_to_1 = [
        quote
            $(overload_connectivity_2_to_1(M, op))
            $(overload_gradient_2_to_1(M, op))
            $(overload_hessian_2_to_1(M, op))
        end for op in nameof.(list_operators_2_to_1(Val(M)))
    ]
    exprs_1_to_2 = [
        quote
            $(overload_connectivity_1_to_2(M, op))
            $(overload_gradient_1_to_2(M, op))
            $(overload_hessian_1_to_2(M, op))
        end for op in nameof.(list_operators_1_to_2(Val(M)))
    ]
    return quote
        $(exprs_1_to_1...)
        $(exprs_2_to_1...)
        $(exprs_1_to_2...)
    end
end

eval(overload_all(:Base))
@adrhill (Owner):

Yeah, this combined with list_operators_* and the big quotes in overload_*.jl feels very hacky. 🫤
There are too many layers of metaprogramming with a lot of implicit dependencies in their design that interact across several files.

@gdalle (Collaborator, Author):

I spent quite a lot of time trying out various approaches that would require minimal changes, and this one seems to work.

@gdalle (Collaborator, Author):

What's in the quotes is exactly the same as what was directly eval-ed earlier (if you ignore the additional module prefixes, which I got from ForwardDiff).
The difference is that we split code generation (creating the expression) from code evaluation (running it through eval).
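
A toy illustration of that split (made-up names, not SCT code):

    make_square_def(fname) = :($fname(x) = x * x)  # code generation: build an expression
    ex = make_square_def(:mysquare)
    eval(ex)                                       # code evaluation: define mysquare in this module
    mysquare(3)                                    # returns 9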

@gdalle (Collaborator, Author), May 22, 2024:

The contents of overload_all are only designed so that it can be used as a one-liner in all the package extensions we will no doubt need to add. It's nothing but a big concatenation of all the expressions.

@adrhill adrhill added the "run benchmark" label (Run benchmarks in CI) May 22, 2024

Benchmark result

Judge result

Benchmark Report for /home/runner/work/SparseConnectivityTracer.jl/SparseConnectivityTracer.jl

Job Properties

  • Time of benchmarks:
    • Target: 22 May 2024 - 11:44
    • Baseline: 22 May 2024 - 11:48
  • Package commits:
    • Target: afd1da
    • Baseline: 2ea9b0
  • Julia commits:
    • Target: 0b4590
    • Baseline: 0b4590
  • Julia command flags:
    • Target: None
    • Baseline: None
  • Environment variables:
    • Target: None
    • Baseline: None

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less
than 1.0 denotes a possible improvement (marked with ✅). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).

ID time ratio memory ratio
["Jacobian", "Global", "conv", "size=128x128x3", "Set{UInt64}"] 1.07 (5%) ❌ 1.00 (1%)
["Jacobian", "Global", "conv", "size=128x128x3", "SortedVector{UInt64}"] 1.07 (5%) ❌ 1.00 (1%)
["Jacobian", "Global", "conv", "size=28x28x3", "DuplicateVector{UInt64}"] 0.95 (5%) ✅ 1.00 (1%)
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "BitSet"] 1.08 (5%) ❌ 1.04 (1%) ❌
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "Set{UInt64}"] 1.07 (5%) ❌ 1.03 (1%) ❌
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "BitSet"] 1.02 (5%) 0.98 (1%) ✅
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "DuplicateVector{UInt64}"] 1.09 (5%) ❌ 1.03 (1%) ❌
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "Set{UInt64}"] 0.92 (5%) ✅ 0.93 (1%) ✅
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "SortedVector{UInt64}"] 0.99 (5%) 0.98 (1%) ✅
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "DuplicateVector{UInt64}"] 1.06 (5%) ❌ 1.03 (1%) ❌
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "DuplicateVector{UInt64}"] 0.98 (5%) 0.95 (1%) ✅
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "Set{UInt64}"] 1.00 (5%) 1.01 (1%) ❌

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["Jacobian", "Global", "brusselator", "N=100"]
  • ["Jacobian", "Global", "brusselator", "N=24"]
  • ["Jacobian", "Global", "brusselator", "N=6"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=100"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=24"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=6"]
  • ["Jacobian", "Global", "conv", "size=128x128x3"]
  • ["Jacobian", "Global", "conv", "size=28x28x3"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.01"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.05"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.1"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.25"]

Julia versioninfo

Target

Julia Version 1.10.3
Commit 0b4590a5507 (2024-04-30 10:59 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1021-azure #22~22.04.1-Ubuntu SMP Tue Apr 30 16:08:18 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  2445 MHz       2255 s          0 s        176 s       5911 s          0 s
       #2  3242 MHz       1942 s          0 s        180 s       6204 s          0 s
       #3  3174 MHz       2335 s          0 s        195 s       5810 s          0 s
       #4  2597 MHz       2437 s          0 s        176 s       5728 s          0 s
  Memory: 15.606502532958984 GB (13310.46875 MB free)
  Uptime: 837.52 sec
  Load Avg:  1.01  1.07  0.8
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Baseline

Julia Version 1.10.3
Commit 0b4590a5507 (2024-04-30 10:59 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1021-azure #22~22.04.1-Ubuntu SMP Tue Apr 30 16:08:18 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  2603 MHz       3129 s          0 s        196 s       7431 s          0 s
       #2  3171 MHz       2984 s          0 s        218 s       7536 s          0 s
       #3  2445 MHz       2524 s          0 s        218 s       8011 s          0 s
       #4  3241 MHz       2706 s          0 s        193 s       7855 s          0 s
  Memory: 15.606502532958984 GB (13758.2734375 MB free)
  Uptime: 1079.28 sec
  Load Avg:  1.02  1.05  0.86
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Target result

Benchmark Report for /home/runner/work/SparseConnectivityTracer.jl/SparseConnectivityTracer.jl

Job Properties

  • Time of benchmark: 22 May 2024 - 11:44
  • Package commit: afd1da
  • Julia commit: 0b4590
  • Julia command flags: None
  • Environment variables: None

Results

Below is a table of this job's results, obtained by running the benchmarks.
The values listed in the ID column have the structure [parent_group, child_group, ..., key], and can be used to
index into the BaseBenchmarks suite to retrieve the corresponding benchmarks.
The percentages accompanying time and memory values in the below table are noise tolerances. The "true"
time/memory value for a given benchmark is expected to fall within this percentage of the reported value.
An empty cell means that the value was zero.

ID time GC time memory allocations
["Jacobian", "Global", "brusselator", "N=100", "BitSet"] 46.575 ms (5%) 6.729 ms 145.98 MiB (1%) 426018
["Jacobian", "Global", "brusselator", "N=100", "DuplicateVector{UInt64}"] 65.804 ms (5%) 4.342 ms 75.53 MiB (1%) 740057
["Jacobian", "Global", "brusselator", "N=100", "Set{UInt64}"] 44.330 ms (5%) 69.12 MiB (1%) 660060
["Jacobian", "Global", "brusselator", "N=100", "SortedVector{UInt64}"] 10.600 ms (5%) 23.19 MiB (1%) 200057
["Jacobian", "Global", "brusselator", "N=24", "BitSet"] 920.547 μs (5%) 2.10 MiB (1%) 23418
["Jacobian", "Global", "brusselator", "N=24", "DuplicateVector{UInt64}"] 3.482 ms (5%) 4.38 MiB (1%) 42665
["Jacobian", "Global", "brusselator", "N=24", "Set{UInt64}"] 2.225 ms (5%) 4.01 MiB (1%) 38060
["Jacobian", "Global", "brusselator", "N=24", "SortedVector{UInt64}"] 594.973 μs (5%) 1.37 MiB (1%) 11561
["Jacobian", "Global", "brusselator", "N=6", "BitSet"] 50.856 μs (5%) 104.75 KiB (1%) 1254
["Jacobian", "Global", "brusselator", "N=6", "DuplicateVector{UInt64}"] 214.200 μs (5%) 281.31 KiB (1%) 2693
["Jacobian", "Global", "brusselator", "N=6", "Set{UInt64}"] 116.318 μs (5%) 258.02 KiB (1%) 2408
["Jacobian", "Global", "brusselator", "N=6", "SortedVector{UInt64}"] 38.122 μs (5%) 88.38 KiB (1%) 749
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "BitSet"] 208.726 ms (5%) 52.236 ms 800.89 MiB (1%) 1225764
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "DuplicateVector{UInt64}"] 388.351 ms (5%) 37.816 ms 531.22 MiB (1%) 3100171
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "Set{UInt64}"] 312.960 ms (5%) 47.085 ms 360.10 MiB (1%) 2480179
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "SortedVector{UInt64}"] 36.941 ms (5%) 3.400 ms 80.25 MiB (1%) 460171
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "BitSet"] 2.590 ms (5%) 6.32 MiB (1%) 67431
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "DuplicateVector{UInt64}"] 17.739 ms (5%) 31.01 MiB (1%) 178708
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "Set{UInt64}"] 12.806 ms (5%) 20.76 MiB (1%) 143004
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "SortedVector{UInt64}"] 1.817 ms (5%) 4.64 MiB (1%) 26644
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "BitSet"] 144.531 μs (5%) 275.11 KiB (1%) 3233
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "DuplicateVector{UInt64}"] 1.098 ms (5%) 1.95 MiB (1%) 11295
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "Set{UInt64}"] 726.149 μs (5%) 1.31 MiB (1%) 9071
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "SortedVector{UInt64}"] 130.575 μs (5%) 305.86 KiB (1%) 1791
["Jacobian", "Global", "conv", "size=128x128x3", "BitSet"] 1.794 s (5%) 345.212 ms 19.15 GiB (1%) 16384623
["Jacobian", "Global", "conv", "size=128x128x3", "DuplicateVector{UInt64}"] 1.144 s (5%) 73.211 ms 1.20 GiB (1%) 7109869
["Jacobian", "Global", "conv", "size=128x128x3", "Set{UInt64}"] 3.471 s (5%) 620.706 ms 4.65 GiB (1%) 34480063
["Jacobian", "Global", "conv", "size=128x128x3", "SortedVector{UInt64}"] 676.643 ms (5%) 83.583 ms 1.20 GiB (1%) 7159021
["Jacobian", "Global", "conv", "size=28x28x3", "BitSet"] 24.720 ms (5%) 1.855 ms 74.83 MiB (1%) 598714
["Jacobian", "Global", "conv", "size=28x28x3", "DuplicateVector{UInt64}"] 36.712 ms (5%) 48.51 MiB (1%) 267454
["Jacobian", "Global", "conv", "size=28x28x3", "Set{UInt64}"] 111.561 ms (5%) 8.378 ms 181.66 MiB (1%) 1294798
["Jacobian", "Global", "conv", "size=28x28x3", "SortedVector{UInt64}"] 20.254 ms (5%) 48.65 MiB (1%) 269806
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "BitSet"] 16.061 μs (5%) 31.59 KiB (1%) 377
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "DuplicateVector{UInt64}"] 13.074 μs (5%) 22.28 KiB (1%) 227
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "Set{UInt64}"] 26.400 μs (5%) 72.20 KiB (1%) 687
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "SortedVector{UInt64}"] 14.357 μs (5%) 25.56 KiB (1%) 280
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "BitSet"] 83.236 μs (5%) 159.50 KiB (1%) 1285
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "DuplicateVector{UInt64}"] 942.997 μs (5%) 1.29 MiB (1%) 7665
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "Set{UInt64}"] 270.907 μs (5%) 463.61 KiB (1%) 3109
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "SortedVector{UInt64}"] 80.099 μs (5%) 166.58 KiB (1%) 751
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "BitSet"] 164.929 μs (5%) 341.62 KiB (1%) 2489
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "DuplicateVector{UInt64}"] 5.470 ms (5%) 12.33 MiB (1%) 25452
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "Set{UInt64}"] 1.308 ms (5%) 2.20 MiB (1%) 8649
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "SortedVector{UInt64}"] 223.769 μs (5%) 648.92 KiB (1%) 1352
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "BitSet"] 351.728 μs (5%) 592.44 KiB (1%) 6015
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "DuplicateVector{UInt64}"] 302.815 ms (5%) 161.951 ms 739.84 MiB (1%) 29856
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "Set{UInt64}"] 4.257 ms (5%) 8.32 MiB (1%) 25317
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "SortedVector{UInt64}"] 589.213 μs (5%) 1.97 MiB (1%) 3148

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["Jacobian", "Global", "brusselator", "N=100"]
  • ["Jacobian", "Global", "brusselator", "N=24"]
  • ["Jacobian", "Global", "brusselator", "N=6"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=100"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=24"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=6"]
  • ["Jacobian", "Global", "conv", "size=128x128x3"]
  • ["Jacobian", "Global", "conv", "size=28x28x3"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.01"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.05"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.1"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.25"]

Julia versioninfo

Julia Version 1.10.3
Commit 0b4590a5507 (2024-04-30 10:59 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1021-azure #22~22.04.1-Ubuntu SMP Tue Apr 30 16:08:18 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  2445 MHz       2255 s          0 s        176 s       5911 s          0 s
       #2  3242 MHz       1942 s          0 s        180 s       6204 s          0 s
       #3  3174 MHz       2335 s          0 s        195 s       5810 s          0 s
       #4  2597 MHz       2437 s          0 s        176 s       5728 s          0 s
  Memory: 15.606502532958984 GB (13310.46875 MB free)
  Uptime: 837.52 sec
  Load Avg:  1.01  1.07  0.8
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Baseline result

Benchmark Report for /home/runner/work/SparseConnectivityTracer.jl/SparseConnectivityTracer.jl

Job Properties

  • Time of benchmark: 22 May 2024 - 11:48
  • Package commit: 2ea9b0
  • Julia commit: 0b4590
  • Julia command flags: None
  • Environment variables: None

Results

Below is a table of this job's results, obtained by running the benchmarks.
The values listed in the ID column have the structure [parent_group, child_group, ..., key], and can be used to
index into the BaseBenchmarks suite to retrieve the corresponding benchmarks.
The percentages accompanying time and memory values in the below table are noise tolerances. The "true"
time/memory value for a given benchmark is expected to fall within this percentage of the reported value.
An empty cell means that the value was zero.

ID time GC time memory allocations
["Jacobian", "Global", "brusselator", "N=100", "BitSet"] 48.294 ms (5%) 10.357 ms 145.98 MiB (1%) 426018
["Jacobian", "Global", "brusselator", "N=100", "DuplicateVector{UInt64}"] 66.490 ms (5%) 4.610 ms 75.53 MiB (1%) 740057
["Jacobian", "Global", "brusselator", "N=100", "Set{UInt64}"] 44.111 ms (5%) 69.12 MiB (1%) 660060
["Jacobian", "Global", "brusselator", "N=100", "SortedVector{UInt64}"] 10.497 ms (5%) 23.19 MiB (1%) 200057
["Jacobian", "Global", "brusselator", "N=24", "BitSet"] 914.901 μs (5%) 2.10 MiB (1%) 23418
["Jacobian", "Global", "brusselator", "N=24", "DuplicateVector{UInt64}"] 3.460 ms (5%) 4.38 MiB (1%) 42665
["Jacobian", "Global", "brusselator", "N=24", "Set{UInt64}"] 2.197 ms (5%) 4.01 MiB (1%) 38060
["Jacobian", "Global", "brusselator", "N=24", "SortedVector{UInt64}"] 585.735 μs (5%) 1.37 MiB (1%) 11561
["Jacobian", "Global", "brusselator", "N=6", "BitSet"] 48.541 μs (5%) 104.75 KiB (1%) 1254
["Jacobian", "Global", "brusselator", "N=6", "DuplicateVector{UInt64}"] 212.056 μs (5%) 281.31 KiB (1%) 2693
["Jacobian", "Global", "brusselator", "N=6", "Set{UInt64}"] 111.598 μs (5%) 258.02 KiB (1%) 2408
["Jacobian", "Global", "brusselator", "N=6", "SortedVector{UInt64}"] 37.520 μs (5%) 88.38 KiB (1%) 749
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "BitSet"] 209.893 ms (5%) 52.455 ms 800.89 MiB (1%) 1225764
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "DuplicateVector{UInt64}"] 372.130 ms (5%) 31.706 ms 531.22 MiB (1%) 3100171
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "Set{UInt64}"] 302.449 ms (5%) 51.029 ms 360.10 MiB (1%) 2480179
["Jacobian", "Global", "brusselator_ode_solve", "N=100", "SortedVector{UInt64}"] 35.864 ms (5%) 3.047 ms 80.25 MiB (1%) 460171
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "BitSet"] 2.589 ms (5%) 6.32 MiB (1%) 67431
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "DuplicateVector{UInt64}"] 17.319 ms (5%) 31.01 MiB (1%) 178708
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "Set{UInt64}"] 12.739 ms (5%) 20.76 MiB (1%) 143004
["Jacobian", "Global", "brusselator_ode_solve", "N=24", "SortedVector{UInt64}"] 1.795 ms (5%) 4.64 MiB (1%) 26644
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "BitSet"] 138.689 μs (5%) 275.11 KiB (1%) 3233
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "DuplicateVector{UInt64}"] 1.058 ms (5%) 1.95 MiB (1%) 11295
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "Set{UInt64}"] 708.716 μs (5%) 1.31 MiB (1%) 9071
["Jacobian", "Global", "brusselator_ode_solve", "N=6", "SortedVector{UInt64}"] 128.972 μs (5%) 305.86 KiB (1%) 1791
["Jacobian", "Global", "conv", "size=128x128x3", "BitSet"] 1.867 s (5%) 319.100 ms 19.15 GiB (1%) 16384623
["Jacobian", "Global", "conv", "size=128x128x3", "DuplicateVector{UInt64}"] 1.135 s (5%) 57.482 ms 1.20 GiB (1%) 7109869
["Jacobian", "Global", "conv", "size=128x128x3", "Set{UInt64}"] 3.229 s (5%) 525.792 ms 4.65 GiB (1%) 34480063
["Jacobian", "Global", "conv", "size=128x128x3", "SortedVector{UInt64}"] 634.094 ms (5%) 60.211 ms 1.20 GiB (1%) 7159021
["Jacobian", "Global", "conv", "size=28x28x3", "BitSet"] 25.224 ms (5%) 2.393 ms 74.83 MiB (1%) 598714
["Jacobian", "Global", "conv", "size=28x28x3", "DuplicateVector{UInt64}"] 38.782 ms (5%) 48.51 MiB (1%) 267454
["Jacobian", "Global", "conv", "size=28x28x3", "Set{UInt64}"] 106.993 ms (5%) 6.192 ms 181.66 MiB (1%) 1294798
["Jacobian", "Global", "conv", "size=28x28x3", "SortedVector{UInt64}"] 19.941 ms (5%) 48.65 MiB (1%) 269806
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "BitSet"] 14.918 μs (5%) 30.50 KiB (1%) 361
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "DuplicateVector{UInt64}"] 12.673 μs (5%) 22.23 KiB (1%) 226
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "Set{UInt64}"] 24.727 μs (5%) 70.23 KiB (1%) 667
["Jacobian", "Global", "sparse_matmul", "sparsity=0.01", "SortedVector{UInt64}"] 14.236 μs (5%) 25.70 KiB (1%) 281
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "BitSet"] 81.663 μs (5%) 162.27 KiB (1%) 1309
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "DuplicateVector{UInt64}"] 865.199 μs (5%) 1.26 MiB (1%) 7445
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "Set{UInt64}"] 293.008 μs (5%) 499.62 KiB (1%) 3256
["Jacobian", "Global", "sparse_matmul", "sparsity=0.05", "SortedVector{UInt64}"] 81.303 μs (5%) 169.45 KiB (1%) 749
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "BitSet"] 159.488 μs (5%) 341.14 KiB (1%) 2487
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "DuplicateVector{UInt64}"] 5.142 ms (5%) 11.94 MiB (1%) 25027
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "Set{UInt64}"] 1.274 ms (5%) 2.19 MiB (1%) 8699
["Jacobian", "Global", "sparse_matmul", "sparsity=0.1", "SortedVector{UInt64}"] 216.676 μs (5%) 654.80 KiB (1%) 1356
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "BitSet"] 336.140 μs (5%) 597.81 KiB (1%) 6097
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "DuplicateVector{UInt64}"] 308.515 ms (5%) 158.686 ms 781.67 MiB (1%) 29937
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "Set{UInt64}"] 4.247 ms (5%) 8.22 MiB (1%) 25057
["Jacobian", "Global", "sparse_matmul", "sparsity=0.25", "SortedVector{UInt64}"] 580.956 μs (5%) 1.98 MiB (1%) 3176

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["Jacobian", "Global", "brusselator", "N=100"]
  • ["Jacobian", "Global", "brusselator", "N=24"]
  • ["Jacobian", "Global", "brusselator", "N=6"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=100"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=24"]
  • ["Jacobian", "Global", "brusselator_ode_solve", "N=6"]
  • ["Jacobian", "Global", "conv", "size=128x128x3"]
  • ["Jacobian", "Global", "conv", "size=28x28x3"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.01"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.05"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.1"]
  • ["Jacobian", "Global", "sparse_matmul", "sparsity=0.25"]

Julia versioninfo

Julia Version 1.10.3
Commit 0b4590a5507 (2024-04-30 10:59 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1021-azure #22~22.04.1-Ubuntu SMP Tue Apr 30 16:08:18 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  2603 MHz       3129 s          0 s        196 s       7431 s          0 s
       #2  3171 MHz       2984 s          0 s        218 s       7536 s          0 s
       #3  2445 MHz       2524 s          0 s        218 s       8011 s          0 s
       #4  3241 MHz       2706 s          0 s        193 s       7855 s          0 s
  Memory: 15.606502532958984 GB (13758.2734375 MB free)
  Uptime: 1079.28 sec
  Load Avg:  1.02  1.05  0.86
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Runtime information

Runtime Info
BLAS #threads 2
BLAS.vendor() lbt
Sys.CPU_THREADS 4

lscpu output:

Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             4
On-line CPU(s) list:                0-3
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 7763 64-Core Processor
CPU family:                         25
Model:                              1
Thread(s) per core:                 2
Core(s) per socket:                 2
Socket(s):                          1
Stepping:                           1
BogoMIPS:                           4890.85
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Virtualization:                     AMD-V
Hypervisor vendor:                  Microsoft
Virtualization type:                full
L1d cache:                          64 KiB (2 instances)
L1i cache:                          64 KiB (2 instances)
L2 cache:                           1 MiB (2 instances)
L3 cache:                           32 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:    Vulnerable
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Cpu Property Value
Brand AMD EPYC 7763 64-Core Processor
Vendor :AMD
Architecture :Unknown
Model Family: 0xaf, Model: 0x01, Stepping: 0x01, Type: 0x00
Cores 16 physical cores, 16 logical cores (on executing CPU)
No Hyperthreading hardware capability detected
Clock Frequencies Not supported by CPU
Data Cache Level 1:3 : (32, 512, 32768) kbytes
64 byte cache line size
Address Size 48 bits virtual, 48 bits physical
SIMD 256 bit = 32 byte max. SIMD vector size
Time Stamp Counter TSC is accessible via rdtsc
TSC runs at constant rate (invariant from clock frequency)
Perf. Monitoring Performance Monitoring Counters (PMC) are not supported
Hypervisor Yes, Microsoft

@adrhill adrhill merged commit 57def1e into main May 22, 2024
5 checks passed
@adrhill adrhill deleted the gd/adnlp branch May 22, 2024 12:12
@mkitti commented May 22, 2024

  • Add the one-liner eval(SCT.overload_all(:SpecialFunctions))

Would it be easier to implement this as a macro? A macro is essentially a function that returns an expression, which is then evaluated in the calling module.

@gdalle (Collaborator, Author) commented May 22, 2024

Would it be easier to implement this as a macro? A macro is essentially a function that returns an expression, which is then evaluated in the calling module.

That's probably a good idea, will try it in a later PR!
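
For illustration, a hedged sketch of what such a macro could look like, building on the overload_all generator shown above (hypothetical; escaping and hygiene of the generated names would need care in a real implementation):

    # Wrap the expression generator in a macro so the generated definitions are
    # spliced into the calling (extension) module at expansion time, with no
    # explicit eval needed there.
    macro overload_all(mod)
        return esc(overload_all(mod))
    end

    # The extension one-liner eval(SCT.overload_all(:SpecialFunctions)) would then become:
    # SCT.@overload_all SpecialFunctions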

Labels: run benchmark (Run benchmarks in CI)
4 participants