From 74f4f20319d3f6a1429416e9da44adbe0ad6dcb5 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Thu, 17 Oct 2024 11:20:33 +0000
Subject: [PATCH] build based on 75f6abe

 .../dev/.documenter-siteinfo.json           |  2 +-
 .../dev/api/index.html                      | 10 +++++-----
 DifferentiationInterfaceTest/dev/index.html |  2 +-
 .../dev/tutorial/index.html                 | 18 +++++++++---------
 4 files changed, 16 insertions(+), 16 deletions(-)

DifferentiationInterfaceTest/dev/.documenter-siteinfo.json: generation_timestamp updated from 2024-10-16T18:14:46 to 2024-10-17T11:20:26 (julia_version 1.11.1, documenter_version 1.7.0).

DifferentiationInterfaceTest/dev/api/index.html

API reference · DifferentiationInterfaceTest.jl

API reference

Entry points

DifferentiationInterfaceTest.Scenario (Type)
Scenario{op,pl_op,pl_fun}

Store a testing scenario composed of a function and its input + output for a given operator.

This generic type should never be used directly: use the specific constructor corresponding to the operator you want to test, or a predefined list of scenarios.

Type parameters

  • op: one of :pushforward, :pullback, :derivative, :gradient, :jacobian, :second_derivative, :hvp, :hessian
  • pl_op: either :in (for op!(f, result, backend, x)) or :out (for result = op(f, backend, x))
  • pl_fun: either :in (for f!(y, x)) or :out (for y = f(x))

Constructors

Scenario{op,pl_op}(f, x; tang, contexts, res1, res2)
Scenario{op,pl_op}(f!, y, x; tang, contexts, res1, res2)

Fields

  • f::Any: function f (if args==1) or f! (if args==2) to apply

  • x::Any: primal input

  • y::Any: primal output

  • tang::Union{Nothing, NTuple{N, T} where {N, T}}: tangents for pushforward, pullback or HVP

  • contexts::Tuple: contexts (if applicable)

  • res1::Any: first-order result of the operator (if applicable)

  • res2::Any: second-order result of the operator (if applicable)

source
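For concreteness, here is a minimal sketch of constructing a gradient scenario with the constructor above; the function, input, and reference result are illustrative choices, not part of the API:

```julia
using DifferentiationInterfaceTest

# Out-of-place gradient scenario: f maps a vector to a scalar,
# and res1 stores the analytical gradient used as ground truth.
f(x) = sum(abs2, x)          # f(x) = sum of squares
x = [1.0, 2.0, 3.0]
scen = Scenario{:gradient,:out}(f, x; res1=2 .* x)  # gradient of f is 2x
```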
DifferentiationInterfaceTest.test_differentiation (Function)
test_differentiation(
     backends::Vector{<:ADTypes.AbstractADType};
     ...
 ) -> Union{Nothing, DataFrames.DataFrame}
     count_calls,
     benchmark_test
 ) -> Union{Nothing, DataFrames.DataFrame}

Apply a list of backends on a list of scenarios, running a variety of different tests and/or benchmarks.

Return

This function always creates and runs a @testset, though its contents may vary.

  • if benchmark == :none, it returns nothing.
  • if benchmark != :none, it returns a DataFrame of benchmark results, whose columns correspond to the fields of DifferentiationBenchmarkDataRow.

Positional arguments

  • backends::Vector{<:AbstractADType}: the backends to test
  • scenarios::Vector{<:Scenario}: the scenarios on which to test them (defaults to the output of default_scenarios())

Keyword arguments

Test categories:

  • correctness=true: whether to compare the differentiation results with the theoretical values specified in each scenario
  • type_stability=:none: whether (and how) to check type stability of operators with JET.jl.
  • allocations=:none: whether (and how) to check allocations inside operators with AllocCheck.jl
  • benchmark=:none: whether (and how) to benchmark operators with Chairmarks.jl

For type_stability, allocations and benchmark, the possible values are :none, :prepared or :full. Each setting tests/benchmarks a different subset of calls:

kwarg     | prepared operator | unprepared operator | preparation
:none     | no                | no                  | no
:prepared | yes               | no                  | no
:full     | yes               | yes                 | yes

Misc options:

  • excluded::Vector{Symbol}: list of operators to exclude, such as FIRST_ORDER or SECOND_ORDER
  • detailed=false: whether to create a detailed or condensed testset
  • logging=false: whether to log progress

Correctness options:

  • isapprox=isapprox: function used to compare objects approximately, with the standard signature isapprox(x, y; atol, rtol)
  • atol=0: absolute precision for correctness testing (when comparing to the reference outputs)
  • rtol=1e-3: relative precision for correctness testing (when comparing to the reference outputs)
  • scenario_intact=true: whether to check that the scenario remains unchanged after the operators are applied
  • sparsity=false: whether to check sparsity patterns for Jacobians / Hessians

Type stability options:

  • ignored_modules=nothing: list of modules that JET.jl should ignore
  • function_filter: filter for functions that JET.jl should ignore (with a reasonable default)

Benchmark options:

  • count_calls=true: whether to also count function calls during benchmarking
  • benchmark_test=true: whether to include tests which succeed iff benchmark doesn't error
source
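As a sketch of typical usage (the backend choice and keyword values here are illustrative, with defaults as listed above):

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff
import ForwardDiff

# Correctness tests on the default scenarios, plus benchmarks of the
# prepared operator calls; returns a DataFrame because benchmark != :none.
df = test_differentiation(
    [AutoForwardDiff()];
    correctness=true,
    benchmark=:prepared,
    detailed=true,
    logging=false,
)
```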
test_differentiation(
     backend::ADTypes.AbstractADType,
     args...;
     kwargs...
 ) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for a single backend.

source
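A usage sketch of this single-backend shortcut (keywords are the same as for the vector version; SECOND_ORDER is the exclusion constant mentioned in the keyword list above):

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff
import ForwardDiff

# Test one backend on the default scenarios,
# skipping all second-order operators.
test_differentiation(AutoForwardDiff(); excluded=SECOND_ORDER)
```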
DifferentiationInterfaceTest.benchmark_differentiation (Function)
benchmark_differentiation(
     backends,
     scenarios::Vector{<:Scenario};
     benchmark,
     count_calls,
     benchmark_test
 ) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for test_differentiation with only benchmarks and no correctness or type stability checks.

Specifying the set of scenarios is mandatory for this function.

source
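Since the scenario list is mandatory here, a call might look as follows (a sketch; default_scenarios() is used purely for illustration):

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff
import ForwardDiff

scenarios = default_scenarios()  # scenarios must be passed explicitly
df = benchmark_differentiation([AutoForwardDiff()], scenarios)
```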

Utilities

DifferentiationInterfaceTest.DifferentiationBenchmarkDataRow (Type)
DifferentiationBenchmarkDataRow

Ad-hoc storage type for differentiation benchmarking results.

Fields

  • backend::ADTypes.AbstractADType: backend used for benchmarking

  • scenario::Scenario: scenario used for benchmarking

  • operator::Symbol: differentiation operator used for benchmarking, e.g. :gradient or :hessian

  • prepared::Union{Nothing, Bool}: whether the operator had been prepared

  • calls::Int64: number of calls to the differentiated function for one call to the operator

  • samples::Int64: number of benchmarking samples taken

  • evals::Int64: number of evaluations used for averaging in each sample

  • time::Float64: minimum runtime over all samples, in seconds

  • allocs::Float64: minimum number of allocations over all samples

  • bytes::Float64: minimum memory allocated over all samples, in bytes

  • gc_fraction::Float64: minimum fraction of time spent in garbage collection over all samples, between 0.0 and 1.0

  • compile_fraction::Float64: minimum fraction of time spent compiling over all samples, between 0.0 and 1.0

See the documentation of Chairmarks.jl for more details on the measurement fields.

source

Pre-made scenario lists

The precise contents of the scenario lists are not part of the API, only their existence.

Internals

This is not part of the public API.

Base.zero (Method)
zero(scen::Scenario)

Return a new Scenario identical to scen except for the first- and second-order results which are set to zero.

source
DifferentiationInterfaceTest.batchify (Method)
batchify(scen::Scenario)

Return a new Scenario identical to scen except for the tangents tang and associated results res1 / res2, which are duplicated (batch mode).

Only works if scen is a pushforward, pullback or hvp scenario.

source
DifferentiationInterfaceTest.cachify (Method)
cachify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional cache argument a to store the result before it is returned.

source
DifferentiationInterfaceTest.constantify (Method)
constantify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional constant argument a by which the output is multiplied. The output and result fields are updated accordingly.

source
DifferentiationInterfaceTest.flux_scenarios (Function)
flux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Flux.jl.

Warning

This function requires FiniteDifferences.jl and Flux.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API. Their ground truth values are computed with finite differences, and thus subject to imprecision.

source
DifferentiationInterfaceTest.lux_scenarios (Function)
lux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Lux.jl.

Warning

This function requires ComponentArrays.jl, ForwardDiff.jl, Lux.jl and LuxTestUtils.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API.

source
DifferentiationInterfaceTest/dev/index.html

Pkg.add(
    url="https://github.com/gdalle/DifferentiationInterface.jl",
    subdir="DifferentiationInterfaceTest"
)

DifferentiationInterfaceTest/dev/tutorial/index.html

    type_stability=:none, # checks type stability with JET.jl
    detailed=true, # prints a detailed test set
)

Test Summary:                                                 | Pass  Total  Time
Testing correctness                                           |   88     88  8.9s
  AutoForwardDiff()                                           |   44     44  5.6s
    gradient                                                  |   44     44  5.5s
      Scenario{:gradient,:out} f : Vector{Float32} -> Float32 |   22     22  3.3s
      Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 |   22     22  2.1s
  AutoZygote()                                                |   44     44  3.3s
    gradient                                                  |   44     44  3.2s
      Scenario{:gradient,:out} f : Vector{Float32} -> Float32 |   22     22  2.6s
      Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 |   22     22  0.6s

If you are too lazy to manually specify the reference, you can also provide an AD backend as the ref_backend keyword argument, which will serve as the ground truth for comparison.
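For instance, a sketch with AutoForwardDiff standing in as the trusted reference, assuming (per the sentence above) that test_differentiation accepts ref_backend; my_scenarios is a hypothetical Vector{<:Scenario}:

```julia
# Let ForwardDiff provide the ground truth for Zygote's results,
# instead of hand-written reference values in each scenario.
test_differentiation(
    AutoZygote(), my_scenarios;
    correctness=true,
    ref_backend=AutoForwardDiff(),
)
```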

Benchmarking

Once you are confident that your backends give the correct answers, you probably want to compare their performance. This is made easy by the benchmark_differentiation function, whose syntax should feel familiar:

df = benchmark_differentiation(backends, scenarios);
8×12 DataFrame
 Row | backend           | scenario                                                | operator           | prepared | calls | samples | evals | time       | allocs | bytes | gc_fraction | compile_fraction
     | Abstract…         | Scenario…                                               | Symbol             | Bool     | Int64 | Int64   | Int64 | Float64    | Float64 | Float64 | Float64   | Float64
   1 | AutoForwardDiff() | Scenario{:gradient,:out} f : Vector{Float32} -> Float32 | value_and_gradient | true     |     1 |    2286 |   492 | 5.6935e-8  |    3.0 |   112.0 |       0.0 | 0.0
   2 | AutoForwardDiff() | Scenario{:gradient,:out} f : Vector{Float32} -> Float32 | gradient           | true     |     1 |    2173 |   593 | 4.75582e-8 |    2.0 |    80.0 |       0.0 | 0.0
   3 | AutoForwardDiff() | Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 | value_and_gradient | true     |     1 |    2963 |   214 | 1.31883e-7 |    4.0 |   192.0 |       0.0 | 0.0
   4 | AutoForwardDiff() | Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 | gradient           | true     |     1 |    2576 |   178 | 1.13584e-7 |    3.0 |   160.0 |       0.0 | 0.0
   5 | AutoZygote()      | Scenario{:gradient,:out} f : Vector{Float32} -> Float32 | value_and_gradient | true     |     1 |    3032 |    34 | 8.51588e-7 |   24.0 |   672.0 |       0.0 | 0.0
   6 | AutoZygote()      | Scenario{:gradient,:out} f : Vector{Float32} -> Float32 | gradient           | true     |     1 |    3067 |    45 | 6.414e-7   |   22.0 |   608.0 |       0.0 | 0.0
   7 | AutoZygote()      | Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 | value_and_gradient | true     |     1 |    2881 |    80 | 3.18975e-7 |   10.0 |   464.0 |       0.0 | 0.0
   8 | AutoZygote()      | Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 | gradient           | true     |     1 |    2762 |    87 | 3.24851e-7 |   10.0 |   464.0 |       0.0 | 0.0

The resulting object is a DataFrame from DataFrames.jl, whose columns correspond to the fields of DifferentiationBenchmarkDataRow.
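Because it is an ordinary DataFrame, the result can be sliced and sorted with the usual DataFrames.jl tools. A small self-contained sketch, using made-up numbers rather than the results above:

```julia
using DataFrames

# Toy stand-in for benchmark results, with a subset of the columns.
df = DataFrame(
    backend=["AutoForwardDiff()", "AutoZygote()"],
    operator=[:gradient, :gradient],
    time=[4.8e-8, 6.4e-7],
    allocs=[2.0, 22.0],
)

fastest = sort(df, :time)             # fastest rows first
best = fastest.backend[1]             # backend of the quickest row
```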
