Commit f2f2491

committed
Don't cache inference results if not optimizing.
The logic had been switched since #18591.
1 parent 7a49be2 commit f2f2491

File tree

1 file changed: +1 −1 lines changed


base/reflection.jl

Lines changed: 1 addition & 1 deletion
@@ -583,7 +583,7 @@ function code_typed(f::ANY, types::ANY=Tuple; optimize=true)
     asts = []
     for x in _methods(f, types, -1)
         meth = func_for_method_checked(x[3], types)
-        (code, ty) = Core.Inference.typeinf_code(meth, x[1], x[2], optimize, !optimize)
+        (code, ty) = Core.Inference.typeinf_code(meth, x[1], x[2], optimize, optimize)
        code === nothing && error("inference not successful") # Inference disabled?
        push!(asts, uncompressed_ast(meth, code) => ty)
    end
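
The user-visible entry point affected is `code_typed`. A usage sketch for context (the caching behavior described in the comments is inferred from the commit message, not verified against a build; these `Core.Inference` internals have since been renamed in later Julia versions):

```julia
# Inspect the type-inferred IR for a call signature.
# With optimize=false, post-inference optimization passes are skipped.
# Before this commit, the final argument to typeinf_code was `!optimize`,
# so results were cached exactly when the caller asked for *unoptimized*
# code; after the fix, unoptimized queries no longer populate the cache.
asts = code_typed(+, (Int, Int); optimize=false)
```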

0 commit comments
