add heavy compilation benchmarks #313

Open · wants to merge 6 commits into base: master
Changes from 2 commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
*.jl.cov
*.jl.*.cov
*.jl.mem
!LocalPreferences.toml
!Manifest.toml
!src/inference/Manifest.toml
4 changes: 4 additions & 0 deletions LocalPreferences.toml
@@ -0,0 +1,4 @@
# NOTE These packages are only used for the inference benchmark,
# thus they really don't need to be precompiled.
[SnoopPrecompile]
skip_precompile = ["CSV", "DataFrames", "OrdinaryDiffEq", "Plots"]
127 changes: 117 additions & 10 deletions Manifest.toml


1 change: 1 addition & 0 deletions Project.toml
@@ -7,6 +7,7 @@
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
REPL = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
80 changes: 56 additions & 24 deletions src/inference/InferenceBenchmarks.jl
@@ -54,8 +54,7 @@ struct InferenceBenchmarker <: AbstractInterpreter
compress::Bool = true,
discard_trees::Bool = true,
inf_cache::Vector{InferenceResult} = InferenceResult[],
code_cache::InferenceBenchmarkerCache = InferenceBenchmarkerCache(IdDict{MethodInstance,CodeInstance}()),
)
code_cache::InferenceBenchmarkerCache = InferenceBenchmarkerCache(IdDict{MethodInstance,CodeInstance}()))
return new(
world,
inf_params,
@@ -182,27 +181,9 @@ function opt_call(@nospecialize(f), @nospecialize(types = Base.default_tt(f));
end
end

function tune_benchmarks!(
g::BenchmarkGroup;
seconds=30,
gcsample=true,
)
for v in values(g)
v.params.seconds = seconds
v.params.gcsample = gcsample
v.params.evals = 1 # `setup` must be functional
end
end

# "inference" benchmark targets
# =============================

# TODO add TTFP?
# XXX some targets below really depend on the compiler implementation itself
# (e.g. `abstract_call_gf_by_type`) and are thus a bit more unreliable -- ideally
# we want to replace them with other functions that have similar characteristics
# but whose call graphs are orthogonal to Julia's compiler implementation

using REPL
broadcasting(xs, x) = findall(>(x), abs.(xs))
let # check the compilation behavior for a function with lots of local variables
@@ -294,11 +275,50 @@ let # check performance of opaque closure handling
end
end

using Pkg
let old = Pkg.project().path
infbenchmarkenv = @__DIR__
try
Pkg.activate(infbenchmarkenv)
Pkg.instantiate()
Pkg.precompile()

using DataFrames, CSV, Plots, OrdinaryDiffEq
finally
Pkg.activate(old)
end
end
Member Author:
@KristofferC Do you know how we can achieve something like this? Currently we get:

  Activating project at `/private/var/folders/xh/6zzly9vx71v05_y67nm_s9_c0000gn/T/jl_pat9yZ`
ERROR: LoadError: ArgumentError: Package BaseBenchmarks does not have DataFrames in its dependencies:
- You may have a partially installed environment. Try `Pkg.instantiate()`
  to ensure all packages in the environment are installed.
- Or, if you have BaseBenchmarks checked out for development and have
  added DataFrames as a dependency but haven't updated your primary
  environment's manifest file, try `Pkg.resolve()`.
- Otherwise you may need to report an issue with BaseBenchmarks
Stacktrace:
  [1] macro expansion
    @ ./loading.jl:1594 [inlined]
  [2] macro expansion
    @ ./lock.jl:267 [inlined]
  [3] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:1571
  [4] top-level scope
    @ ~/julia/packages/BaseBenchmarks/src/inference/InferenceBenchmarks.jl:286
  [5] include(mod::Module, _path::String)
    @ Base ./Base.jl:457
  [6] include(x::String)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:1
  [7] top-level scope
    @ none:1
  [8] eval
    @ ./boot.jl:370 [inlined]
  [9] load!(group::BenchmarkGroup, id::String; tune::Bool)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:42
 [10] load!
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:39 [inlined]
 [11] macro expansion
    @ ./timing.jl:393 [inlined]
 [12] loadall!(group::BenchmarkGroup; verbose::Bool, tune::Bool)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:58
 [13] loadall!
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:54 [inlined]
 [14] #loadall!#3
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:52 [inlined]
 [15] loadall!()
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:52
 [16] top-level scope
    @ ~/julia/packages/BaseBenchmarks/test/runtests.jl:8
 [17] include(fname::String)
    @ Base.MainInclude ./client.jl:478
 [18] top-level scope
    @ none:6
in expression starting at /Users/aviatesk/julia/packages/BaseBenchmarks/src/inference/InferenceBenchmarks.jl:1
in expression starting at /Users/aviatesk/julia/packages/BaseBenchmarks/test/runtests.jl:8
ERROR: Package BaseBenchmarks errored during testing

when running Pkg.test().

Contributor:
Can you try push this directory to LOAD_PATH instead of using activate?

Member Author (@aviatesk, Apr 10, 2023):
Hmm, this does not work either:

using Pkg
let
    push!(LOAD_PATH, @__DIR__)
    try
        Pkg.instantiate()
        using DataFrames, CSV, Plots, OrdinaryDiffEq
    finally
        pop!(LOAD_PATH)
    end
end

(The error is the same, and it persists even if we do not call Pkg.instantiate().)


function lorenz(du, u, p, t)
du[1] = 10.0(u[2] - u[1])
du[2] = u[1] * (28.0 - u[3]) - u[2]
du[3] = u[1] * u[2] - (8 / 3) * u[3]
end
let p = ODEProblem(lorenz, [1.0; 0.0; 0.0], (0.0, 1.0))
global prob::typeof(p) = p
end
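The `prob` constructed above is the input reused by the `OrdinaryDiffEq.solve(prob, QNDF())` benchmark targets below. As a rough standalone sketch of what those targets exercise (assuming the packages from this benchmark environment are loaded), solving it directly would look like:

```julia
using OrdinaryDiffEq  # provides ODEProblem, QNDF, solve

# Lorenz system, as defined above
function lorenz(du, u, p, t)
    du[1] = 10.0(u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8 / 3) * u[3]
end

prob = ODEProblem(lorenz, [1.0; 0.0; 0.0], (0.0, 1.0))
sol = solve(prob, QNDF())  # QNDF is the stiff solver used by the benchmark targets
```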

function tune_benchmarks!(
g::BenchmarkGroup;
seconds=30,
gcsample=true)
default = BenchmarkTools.DEFAULT_PARAMETERS
for v in values(g)
if v.params.seconds == default.seconds
v.params.seconds = seconds
end
if v.params.gcsample == default.gcsample
v.params.gcsample = gcsample
end
v.params.evals = 1 # `setup` must be functional
end
end
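Note that this revised `tune_benchmarks!` only overrides parameters still at their `BenchmarkTools.DEFAULT_PARAMETERS` values, so per-benchmark annotations such as the `seconds=70` above survive tuning. A hypothetical usage sketch (names `"default"`/`"custom"` are illustrative only):

```julia
using BenchmarkTools

g = BenchmarkGroup()
g["default"] = @benchmarkable sum(rand(100))
g["custom"]  = @benchmarkable sum(rand(100)) seconds=70  # explicitly set, left untouched

tune_benchmarks!(g)  # as defined above
# g["default"].params.seconds is now 30 (overridden from the default)
# g["custom"].params.seconds stays 70 (kept, since it differs from the default)
```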

const SUITE = BenchmarkGroup()

let g = addgroup!(SUITE, "abstract interpretation")
g["sin(42)"] = @benchmarkable (@abs_call sin(42))
g["rand(Float64)"] = @benchmarkable (@abs_call rand(Float64))
g["sin(42)"] = @benchmarkable @abs_call sin(42)
g["rand(Float64)"] = @benchmarkable @abs_call rand(Float64)
g["println(::QuoteNode)"] = @benchmarkable (abs_call(println, (QuoteNode,)))
g["broadcasting"] = @benchmarkable abs_call(broadcasting, (Vector{Float64},Float64))
g["REPL.REPLCompletions.completions"] = @benchmarkable abs_call(
@@ -310,6 +330,10 @@ let g = addgroup!(SUITE, "abstract interpretation")
g["many_global_refs"] = @benchmarkable abs_call(many_global_refs, (Int,))
g["many_invoke_calls"] = @benchmarkable abs_call(many_invoke_calls, (Vector{Float64},))
g["many_opaque_closures"] = @benchmarkable abs_call(many_opaque_closures, (Vector{Float64},))
g["DataFrames.DataFrame(::Dict{Symbol,Any})"] = @benchmarkable abs_call(DataFrame, (Dict{Symbol,Any},))
g["CSV.read(::String, DataFrame)"] = @benchmarkable (@abs_call CSV.read("some.csv", DataFrame)) seconds=70
g["Plots.plot(::Matrix{Float64})"] = @benchmarkable (@abs_call plot(rand(10,3))) seconds=100
g["OrdinaryDiffEq.solve(prob::ODEProblem, QNDF())"] = @benchmarkable @abs_call solve(prob, QNDF())
tune_benchmarks!(g)
end

@@ -327,12 +351,16 @@ let g = addgroup!(SUITE, "optimization")
g["many_global_refs"] = @benchmarkable f() (setup = (f = opt_call(many_global_refs, (Int,))))
g["many_invoke_calls"] = @benchmarkable f() (setup = (f = opt_call(many_invoke_calls, (Vector{Float64},))))
g["many_opaque_closures"] = @benchmarkable f() (setup = (f = opt_call(many_opaque_closures, (Vector{Float64},))))
g["DataFrames.DataFrame(::Dict{Symbol,Any})"] = @benchmarkable f() (setup = (f = opt_call(DataFrame, (Dict{Symbol,Any},))))
g["CSV.read(::String, DataFrame)"] = @benchmarkable f() (setup = (f = @opt_call CSV.read("some.csv", DataFrame)))
g["Plots.plot(::Matrix{Float64})"] = @benchmarkable f() (setup = (f = @opt_call plot(rand(10,3))))
g["OrdinaryDiffEq.solve(prob::ODEProblem, QNDF())"] = @benchmarkable f() (setup = (f = @opt_call solve(prob, QNDF())))
tune_benchmarks!(g)
end

let g = addgroup!(SUITE, "allinference")
g["sin(42)"] = @benchmarkable (@inf_call sin(42))
g["rand(Float64)"] = @benchmarkable (@inf_call rand(Float64))
g["sin(42)"] = @benchmarkable @inf_call sin(42)
g["rand(Float64)"] = @benchmarkable @inf_call rand(Float64)
g["println(::QuoteNode)"] = @benchmarkable (inf_call(println, (QuoteNode,)))
g["broadcasting"] = @benchmarkable inf_call(broadcasting, (Vector{Float64},Float64))
g["REPL.REPLCompletions.completions"] = @benchmarkable inf_call(
@@ -344,6 +372,10 @@ let g = addgroup!(SUITE, "allinference")
g["many_global_refs"] = @benchmarkable inf_call(many_global_refs, (Int,))
g["many_invoke_calls"] = @benchmarkable inf_call(many_invoke_calls, (Vector{Float64},))
g["many_opaque_closures"] = @benchmarkable inf_call(many_opaque_closures, (Vector{Float64},))
g["DataFrames.DataFrame(::Dict{Symbol,Any})"] = @benchmarkable inf_call(DataFrame, (Dict{Symbol,Any},))
g["CSV.read(::String, DataFrame)"] = @benchmarkable (@inf_call CSV.read("some.csv", DataFrame))
g["Plots.plot(::Matrix{Float64})"] = @benchmarkable (@inf_call plot(rand(10,3))) seconds=60
g["OrdinaryDiffEq.solve(prob::ODEProblem, QNDF())"] = @benchmarkable (@inf_call solve(prob, QNDF())) seconds=40
tune_benchmarks!(g)
end
