
PINNErrorVsTime Benchmark Updates #1159


Merged: 23 commits into SciML:master on May 6, 2025

Conversation

ParamThakkar123
Contributor

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break the public API
  • All documentation related to code changes was updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API


@ParamThakkar123
Contributor Author

@ChrisRackauckas I got an error when running the iterations saying that maxiters is less than 1000, so I set all maxiters to 1100. The decision was a bit arbitrary, but is that a good number?

@ChrisRackauckas
Member

https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/

It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Jan 19, 2025

> https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/
>
> It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

Yes. Actually, I set it to that number just to get rid of that error.

@ChrisRackauckas
Member

Wait, what's the error?

@ParamThakkar123
Contributor Author

> Wait, what's the error?

The error went away when I ran it again.

@ChrisRackauckas
Member

what error?

@ParamThakkar123
Contributor Author

maxiters should be a number greater than 1000

@ChrisRackauckas
Member

can you please just show the error...

@ParamThakkar123
Contributor Author

AssertionError: maxiters for CubaCuhre(0, 0, 0) should be larger than 1000

Stacktrace: [1] __solvebp_call(prob::IntegralProblem{false, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, NeuralPDE.var"#integrand#109"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p),
NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}}, Vector{Float64}, @Kwargs{}}, alg::CubaCuhre,
sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, lb::Vector{Float64}, ub::Vector{Float64}, p::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}; reltol::Float64, abstol::Float64, maxiters::Int64) @ IntegralsCuba C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:139 [2] __solvebp_call @ C:\Users\Hp.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:134 [inlined] [3] #__solvebp_call#4 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:95 [inlined] [4] __solvebp_call @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:94 [inlined] [5] #rrule#5 @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:17 [inlined] [6] rrule @ C:\Users\Hp.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:14 [inlined] [7] rrule @ C:\Users\Hp.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144 [inlined] [8] chain_rrule_kw @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined] [9] macro expansion @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined] [10] _pullback @ C:\Users\Hp.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91 [inlined] [11] solve! @ C:\Users\Hp.julia\packages\Integrals\d3rQd\src\common.jl:84 [inlined] ... @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:162 [55] solve(::OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x0ca831bf, 0x51abc8eb, 0xda1f388f, 0xf472bcea, 0x7492cfcb), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, Vector{NeuralPDE.var"#106#110"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, QuadratureTraining{CubaCuhre, Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), 
typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Vector{Float64}, Vector{Float64}, NeuralPDE.var"#105#108"{QuadratureTraining{CubaCuhre, Float64}}, Float64}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}, ::Optimisers.Adam; kwargs::@Kwargs{callback::var"#11#18"{var"#loss_function#17"{OptimizationProblem{true, OptimizationFunction{true, AutoZygote, NeuralPDE.var"#full_loss_function#318"{NeuralPDE.var"#null_nonadaptive_loss#118", Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x700712a1, 0x6c1c91c2, 0xa5bfc01b, 0x66a91103, 0x0d12fcff), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, Vector{NeuralPDE.var"#74#75"{NeuralPDE.var"#219#220"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#228"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd9696f1d, 0xe356e73c, 0x32906e9c, 0x54a064bc, 0x0cbbe458), Expr}, NeuralPDE.var"#12#13", NeuralPDE.var"#279#286"{NeuralPDE.var"#279#280#287"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), 
typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing}, Matrix{Real}}}, NeuralPDE.PINNRepresentation, Bool, Vector{Int64}, Int64, NeuralPDE.Phi{Chain{@NamedTuple{layer_1::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(tanh_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, Nothing, Bool, Nothing}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, ShapedAxis((10, 1))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, ShapedAxis((10, 1))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, ShapedAxis((1, 1))))))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @Kwargs{}}}, Vector{Any}, Vector{Any}, Vector{Any}}, maxiters::Int64}) @ SciMLBase C:\Users\Hp.julia\packages\SciMLBase\szsYq\src\solve.jl:83 [56] allen_cahn(strategy::QuadratureTraining{CubaCuhre, Float64}, minimizer::Optimisers.Adam, maxIters::Int64) @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:105

@ChrisRackauckas
Member

I see, that's for the sampling algorithm. You should only need that on Cuhre?

@ParamThakkar123
Contributor Author

> I see, that's for the sampling algorithm. You should only need that on Cuhre?

Yes. But since Cuhre was the first one in line, I figured setting it to 1100 just for Cuhre would not solve the problem, so I set it to 1100 for all of them.
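For reference, a minimal sketch of constructing a strategy with the raised cap (tolerance values here are illustrative, not taken from this PR, and the imports follow the post-extension layout discussed later in this thread):

using NeuralPDE, Integrals, Cuba

# maxiters must be larger than 1000 for CubaCuhre, per the assertion above;
# 1100 is the (admittedly arbitrary) bump discussed in this thread
strategy = QuadratureTraining(; quadrature_alg = CubaCuhre(),
                              reltol = 1e-4, abstol = 1e-5,
                              maxiters = 1100)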

@ParamThakkar123
Contributor Author

The CI has passed here, and all the code seems to run perfectly. Can you please review?

@ChrisRackauckas
Member

@ArnoStrouwen SciML/Integrals.jl#124 can you remind me what the purpose behind this was?

@ArnoStrouwen
Member

I don't remember myself, but that PR links to:
SciML/Integrals.jl#47

@ChrisRackauckas
Member

Uninitialized memory in the original C: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community; that's your classic method that everyone points to when they say "all of the old stuff is robust" 😅

@ChrisRackauckas
Member

Can you force latest majors and make sure the manifest resolves?

@ParamThakkar123
Contributor Author

I force-bumped to the latest versions and resolved the manifest, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature for a while to resolve them. The manifest resolved, but adding both of them back introduces more version conflicts.

@ChrisRackauckas
Member

Can you share the resolution errors?

@ParamThakkar123
Contributor Author

[screenshots: package version-conflict resolution errors]

@ChrisRackauckas These are the resolution errors that occur.

@ChrisRackauckas
Member

Oh, those were turned into extensions. Change using IntegralsCuba, IntegralsCubature into using Integrals, Cuba, Cubature, and change the dependencies to depend directly on Cuba and Cubature.
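In code, the switch is just a rename; the Project.toml likewise drops IntegralsCuba and IntegralsCubature and adds Cuba and Cubature directly:

# before: the pre-extension companion packages
# using IntegralsCuba, IntegralsCubature

# after: Cuba/Cubature support is provided by package extensions of Integrals,
# activated by loading the backend packages alongside it
using Integrals, Cuba, Cubature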

@ParamThakkar123
Contributor Author

Sure !! 🫡

@ParamThakkar123
Contributor Author

This comes from the Phi layer in NeuralPDE.

@sathvikbhagavan
Member

What changes did you make? What line is erroring? Please be specific and verbose when you report an issue.

@ParamThakkar123
Contributor Author

> What changes did you make? What line is erroring? Please be specific and verbose when you report an issue.

@sathvikbhagavan

I'm sorry I wasn't clear and verbose at this point. The issue is the following:
I made some changes to the code which I haven't pushed just yet. This is the code:

using NeuralPDE
using Integrals, Cubature, Cuba
using ModelingToolkit, Optimization, OptimizationOptimJL
using Lux, Plots
using DelimitedFiles
using QuasiMonteCarlo
import ModelingToolkit: Interval, infimum, supremum

function allen_cahn(strategy, minimizer, maxIters)

    ##  DECLARATIONS
    @parameters t x1 x2 x3 x4
    @variables u(..)

    Dt = Differential(t)
    Dxx1 = Differential(x1)^2
    Dxx2 = Differential(x2)^2
    Dxx3 = Differential(x3)^2
    Dxx4 = Differential(x4)^2


    # Discretization
    tmax = 1.0
    x1width = 1.0
    x2width = 1.0
    x3width = 1.0
    x4width = 1.0

    tMeshNum = 10
    x1MeshNum = 10
    x2MeshNum = 10
    x3MeshNum = 10
    x4MeshNum = 10

    dt = tmax / tMeshNum
    dx1 = x1width / x1MeshNum
    dx2 = x2width / x2MeshNum
    dx3 = x3width / x3MeshNum
    dx4 = x4width / x4MeshNum

    domains = [t ∈ Interval(0.0, tmax),
        x1 ∈ Interval(0.0, x1width),
        x2 ∈ Interval(0.0, x2width),
        x3 ∈ Interval(0.0, x3width),
        x4 ∈ Interval(0.0, x4width)]

    ts = 0.0:dt:tmax
    x1s = 0.0:dx1:x1width
    x2s = 0.0:dx2:x2width
    x3s = 0.0:dx3:x3width
    x4s = 0.0:dx4:x4width

    # Operators
    Δu = Dxx1(u(t, x1, x2, x3, x4)) + Dxx2(u(t, x1, x2, x3, x4)) + Dxx3(u(t, x1, x2, x3, x4)) + Dxx4(u(t, x1, x2, x3, x4)) # Laplacian


    # Equation
    eq = Dt(u(t, x1, x2, x3, x4)) - Δu - u(t, x1, x2, x3, x4) + u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) * u(t, x1, x2, x3, x4) ~ 0  #ALLEN CAHN EQUATION

    initialCondition = 1 / (2 + 0.4 * (x1 * x1 + x2 * x2 + x3 * x3 + x4 * x4)) # see PNAS paper

    bcs = [u(0, x1, x2, x3, x4) ~ initialCondition]  #from literature

    ## NEURAL NETWORK
    n = 10   #neuron number
    chain = Lux.Chain(Lux.Dense(5, n, Lux.σ), Lux.Dense(n, n, Lux.σ), Lux.Dense(n, 1))   # neural network built with Lux

    indvars = [t, x1, x2, x3, x4]   # physically independent variables
    depvars = [u(t, x1, x2, x3, x4)]       # dependent (target) variable

    dim = length(domains)

    losses = []
    error = []
    times = []

    dx_err = 0.2

    error_strategy = GridTraining(dx_err)

    discretization_ = PhysicsInformedNN(chain, error_strategy)
    @named pde_system_ = PDESystem(eq, bcs, domains, indvars, depvars)
    prob_ = discretize(pde_system_, discretization_)

    function loss_function_(θ, p)
        return prob_.f.f(θ, nothing)
    end  

    cb_ = function (p, l)
        deltaT_s = time_ns() # start a clock when the callback begins; this will also measure the computation of the uniform errors

        ctime = time_ns() - startTime - timeCounter #This variable is the time to use for the time benchmark plot
        append!(times, ctime / 10^9) #Conversion nanosec to seconds
        append!(losses, l)
        loss_ = loss_function_(p, nothing)
        append!(error, loss_)
        timeCounter = timeCounter + time_ns() - deltaT_s #timeCounter sums all delays due to the callback functions of the previous iterations

        #if (ctime/10^9 > time) #if I exceed the limit time I stop the training
        #    return true #Stop the minimizer and continue from line 142
        #end

        return false
    end

    @named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)

    discretization = NeuralPDE.PhysicsInformedNN(chain, strategy)
    prob = NeuralPDE.discretize(pde_system, discretization)

    timeCounter = 0.0
    startTime = time_ns() #Fix initial time (t=0) before starting the training
    res = Optimization.solve(prob, minimizer, callback=cb_, maxiters=maxIters)

    phi = discretization.phi

    params = res.minimizer

    # Model prediction
    domain = [ts, x1s, x2s, x3s, x4s]

    u_predict = [reshape([first(phi([t, x1, x2, x3, x4], res.minimizer; device=cdev)) for x1 in x1s for x2 in x2s for x3 in x3s for x4 in x4s], (length(x1s), length(x2s), length(x3s), length(x4s))) for t in ts]  #matrix of model's prediction

    return [error, params, domain, times, losses]
end

Initially the plot looked really wonky, and that was because the loss function wasn't being calculated correctly. So I picked up the loss function from the initial implementation already present in the SciMLBenchmarks.jl documentation. The error occurred when I added the following lines and made changes accordingly:

function loss_function_(θ, p)
    return prob_.f.f(θ, nothing)
end

The error is this:

MethodError: no method matching (::MLDataDevices.UnknownDevice)(::Matrix{Float64})

Stacktrace:
  [1] (::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})(x::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:42
  [2] (::NeuralPDE.var"#7#8")(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:354
  [3] numeric_derivative(phi::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, u::NeuralPDE.var"#7#8", x::Matrix{Float64}, εs::Vector{Vector{Float64}}, order::Int64, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\pinn_types.jl:382
  [4] macro expansion
    @ C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:130 [inlined]
  [5] macro expansion
    @ C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:163 [inlined]
  [6] macro expansion
    @ .\none:0 [inlined]
  [7] generated_callfunc(::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, ::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::typeof(NeuralPDE.numeric_derivative), ::NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, ::NeuralPDE.var"#7#8", ::Nothing)
    @ NeuralPDE .\none:0
  [8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr})(::Matrix{Float64}, ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}, ::NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, ::Function, ::Function, ::Function, ::Nothing)
    @ RuntimeGeneratedFunctions C:\Users\Hp\.julia\packages\RuntimeGeneratedFunctions\M9ZX8\src\RuntimeGeneratedFunctions.jl:150
  [9] (::NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing})(cord::Matrix{Float64}, θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\discretize.jl:150
 [10] (::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})(θ::Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}})
    @ NeuralPDE C:\Users\Hp\.julia\packages\NeuralPDE\nYBAW\src\training_strategies.jl:70
 [11] (::NeuralPDE.var"#263#284"{Optimization.OptimizationState{ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Float64, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{ComponentArrays.Axis{(layer_1 = ViewAxis(1:60, Axis(weight = ViewAxis(1:50, ShapedAxis((10, 5))), bias = ViewAxis(51:60, Shaped1DAxis((10,))))), layer_2 = ViewAxis(61:170, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = ViewAxis(101:110, Shaped1DAxis((10,))))), layer_3 = ViewAxis(171:181, Axis(weight = ViewAxis(1:10, ShapedAxis((1, 10))), bias = ViewAxis(11:11, Shaped1DAxis((1,))))))}}}, Nothing, Optimisers.Leaf{Optimisers.Adam, Tuple{Vector{Float64}, Vector{Float64}, Tuple{Float64, Float64}}}}})(pde_loss_function::NeuralPDE.var"#78#79"{NeuralPDE.var"#197#198"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#226"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0x875c3d6c, 0x2ff644f8, 0x57c4854c, 0xf710f944, 0x14a70865), Expr}, NeuralPDE.var"#7#8", NeuralPDE.var"#239#246"{NeuralPDE.var"#239#240#247"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.Phi{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(σ), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, Nothing}, Matrix{Float64}})
    @ NeuralPDE .\none:0
...
    @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\sYmAV\src\solve.jl:95
 [22] allen_cahn(strategy::QuadratureTraining{Float64, CubaCuhre}, minimizer::Optimisers.Adam, maxIters::Int64)
    @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W0sZmlsZQ==.jl:116
 [23] top-level scope
    @ e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W2sZmlsZQ==.jl:4

This indicates a method error on the matrix input. The stacktrace suggests a problem with the Phi layer in NeuralPDE.jl; specifically, it seems to come from pinn_types.jl line 42. But I'm not sure what changes are needed to fix this.

@sathvikbhagavan
Member

Try calling the loss function like loss_ = loss_function_(p.u, nothing) and not loss_ = loss_function_(p, nothing)
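A minimal sketch of the corrected callback (the timing bookkeeping from the full version is omitted), assuming the newer Optimization.jl callback signature where the first argument is an OptimizationState whose u field holds the flat parameter vector:

cb_ = function (p, l)
    append!(losses, l)
    loss_ = loss_function_(p.u, nothing)  # p.u, not p: p is an OptimizationState
    append!(error, loss_)
    return false
end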

@ParamThakkar123
Contributor Author

> Try calling the loss function like loss_ = loss_function_(p.u, nothing) and not loss_ = loss_function_(p, nothing)

Yeah, this worked. Thank you so much. Also, out of curiosity, is there any reference where I can learn about this? It was not easy for me to spot 😅

@ParamThakkar123
Contributor Author

@ChrisRackauckas

[screenshot: updated error-vs-time plot]

This is the graph generated this time.

@ParamThakkar123
Contributor Author

I removed the HCubatureJL algorithm due to its incompatibility with NeuralPDE.jl.

@ParamThakkar123
Contributor Author

@ChrisRackauckas all the CI checks here have passed. Can you please review this?

@sathvikbhagavan
Member

Can you post what the plot looks like for allen_cahn now?

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Mar 13, 2025

[screenshot: allen_cahn error-vs-time plot]

@sathvikbhagavan

@ParamThakkar123
Contributor Author

@ChrisRackauckas @sathvikbhagavan Does this graph still need some changes?

@sathvikbhagavan
Member

Shouldn't all strategies start at t=0? Also, plot time on a log scale.

@ChrisRackauckas
Member

Just about, yes. Move the legend to the outside.

@ParamThakkar123
Contributor Author

[screenshot: updated plot with log-scale time axis and legend outside]

@ChrisRackauckas @sathvikbhagavan I incorporated the suggested changes. I haven't pushed the code yet, but this is how the plot looks when I run the code locally.

@ParamThakkar123
Contributor Author

@ChrisRackauckas @sathvikbhagavan Does this graph look satisfactory, or are any changes needed?

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Mar 15, 2025

I haven't pushed the code yet because once I do, it will take 3 days for the CI to complete and for us to get the results. It takes only 7 minutes on my local system, so if any changes are needed I can make them right away and we can see the results immediately.

@ChrisRackauckas @sathvikbhagavan

@sathvikbhagavan
Member

A couple of things can be done to improve the plot (a rough sketch follows below):

  1. Compute the loss at t=0 for all strategies so that they all have the same starting point
  2. If possible, along with the line plot, mark which points were actually recorded with a scatter plot
  3. Maybe try a different color palette which is more readable
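For concreteness, a hedged Plots.jl sketch of those tweaks; strategies_results is a hypothetical container mapping each strategy name to its recorded (times, errors) series:

using Plots

plt = plot(xaxis = :log10, xlabel = "time (s)", ylabel = "error",
           legend = :outertopright, palette = :tab10)
for (name, (ts, errs)) in strategies_results
    tplot = max.(ts, 1e-3)  # a literal t = 0 cannot sit on a log axis, so clamp it
    # line plot plus markers so the actually-recorded points are visible
    plot!(plt, tplot, errs, label = name, marker = :circle, markersize = 3)
end
display(plt)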

@ChrisRackauckas
Member

The weird results seem to be a consequence of limitations of the quadrature libraries. The cubature libraries have minimum numbers of steps they allow, and then grow by odd factors. In previous plots they had failed; now they work because they clamp the iters to an allowed quantity, such as N^3 for 3D. This might mean there's only one value in the allowed range, and then the next one takes way too long to compute.

So, this is okay, for odd reasons.
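A toy illustration (not library code) of that clamping idea: assuming an algorithm that only accepts evaluation counts of the form N^3, a requested maxiters gets rounded up to the next allowed value.

# round a requested iteration count up to the nearest perfect cube
allowed_iters(requested) = ceil(Int, cbrt(requested))^3
allowed_iters(1100)  # 11^3 = 1331, the first allowed count at or above 1100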

@ChrisRackauckas ChrisRackauckas merged commit 47c293c into SciML:master May 6, 2025
2 checks passed
@ChrisRackauckas
Member

@ParamThakkar123 thanks! Go forth and claim your reward. You did well here. It took a while for me to understand why the results were like this, and I couldn't find anything you did wrong; it's just interesting behavior of the integration libraries.

ChrisRackauckas added a commit to SciML/sciml.ai that referenced this pull request May 6, 2025
This PR certifies the completion of the two benchmark sets by @ParamThakkar123 with the following PRs:

* SciML/SciMLBenchmarks.jl#1148
* SciML/SciMLBenchmarks.jl#1160
* SciML/SciMLBenchmarks.jl#1159

It also adds two new benchmark sets that are in dire need of updates.

As a monetary decision, requires 3 approvals. I approve.
@ParamThakkar123
Contributor Author

Hello @ChrisRackauckas, really glad that things worked out and the PR got merged. Looking at the results, I was quite puzzled too and kept thinking something was wrong; I spent a lot of time getting them right. Glad we reached a conclusion. It's really weird, and interesting too, that we got to see these results.

@ParamThakkar123
Contributor Author

Thanks for merging my PR. I will start a payout request on Open Collective to claim the reward.

@ParamThakkar123
Contributor Author

Always happy to contribute! Please do let me know if you have any more projects where I can contribute.

@ParamThakkar123
Contributor Author

@ChrisRackauckas What amount should I set for the reward in the Open Collective invoice?
