Speed up ForwardDiffExt with batched solve #84

Closed
gdalle opened this issue Aug 4, 2023 · 7 comments
Labels
performance (Improve performance or stability)

Comments

@gdalle (Member) commented Aug 4, 2023

See #71 (comment) by @thorek1

gdalle added this to the v0.5.0 milestone Aug 4, 2023
gdalle added the feature (New feature or request) label Aug 4, 2023
@thorek1 (Contributor) commented Aug 4, 2023

Here is how I solve it in my tailored ForwardDiff-compatible calls:

# imports assumed by this snippet (ℱ is taken to be an alias for ForwardDiff)
import ForwardDiff as ℱ
import LinearOperators
import Krylov

# collect the partials of the dual inputs BCX into a matrix with one column per partial
partials = mapreduce(ℱ.partials, hcat, BCX)'

# Jacobians of the conditions with respect to each of their two arguments
b = ℱ.jacobian(x -> solve_sylvester_equation_condition(x, val), bcx)
a = ℱ.jacobian(x -> solve_sylvester_equation_condition(bcx, x), val)

# lazy operator: reshape the flat vector into a matrix, apply a to every column, flatten again
reshape_matmul = LinearOperators.LinearOperator(Float64, size(b, 1) * size(partials, 2), size(b, 1) * size(partials, 2), false, false,
    (sol, 𝐱) -> begin
        𝐗 = reshape(𝐱, (size(b, 1), size(partials, 2)))
        sol .= vec(a * 𝐗)
        return sol
    end)

# one GMRES call solves for all partials at once
X, info = Krylov.gmres(reshape_matmul, -vec(b * partials))  #, atol = tol

jvp = reshape(X, (size(b, 1), size(partials, 2)))
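
The trick here is that the matrix of right-hand sides b * partials is flattened with vec, and the linear operator applies a to every column of the reshaped unknown, so a single GMRES call recovers the directional derivatives for all partials simultaneously.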

@thorek1 (Contributor) commented Aug 4, 2023

here you can see nicely as well why its worth having different conditions_backends. if you go from m input elements to n output elements and m >>n you want to use reverse mode overall but the conditions go both ways in terms of dimensionality. once you go from m to n and once from n to m (solution as input). in that sense, the case with the potentially largest efficiency gain is the one where you diff the condition with respect to the solution (b). and if m >> n or vice versa it can make sense to use autodiff the other way compared to what you use on the implicit function. I hope this was clear :)
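
A minimal sketch of that dimensionality argument, assuming conditions that take m inputs plus an n-dimensional solution and return n outputs. The conditions function, the sizes, and the choice of Zygote vs ForwardDiff below are purely illustrative, not the package's actual conditions_backends mechanism:

import ForwardDiff
import Zygote

# toy conditions c(x, y) with m = length(x) inputs and n = length(y) outputs
conditions(x, y) = sum(abs2, x) .* y .- 1.0

x = rand(1000)   # m = 1000 inputs
y = rand(5)      # n = 5 outputs (same size as the solution)

# m >> n: reverse mode pays off for the Jacobian with respect to x
dc_dx = Zygote.jacobian(x_ -> conditions(x_, y), x)[1]     # n × m

# n inputs to n outputs: forward mode is cheap for the Jacobian with respect to y
dc_dy = ForwardDiff.jacobian(y_ -> conditions(x, y_), y)   # n × n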

@gdalle (Member, Author) commented Aug 5, 2023

I'm struggling to make this work because gmres(A, B) doesn't perform a parallel solve when B is a matrix. On the other hand, A \ B works fine.
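
As a rough illustration of the difference (the matrices below are made up, and the column loop is just one possible workaround, not necessarily what the package ends up doing):

import Krylov
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]   # illustrative system matrix
B = [1.0 2.0; 0.0 1.0]   # one right-hand side per column

# dense batched solve: handles all columns at once
X_dense = A \ B

# Krylov.gmres expects a single vector, so iterate over the columns
X_krylov = similar(B)
for j in axes(B, 2)
    x, stats = Krylov.gmres(A, B[:, j])
    X_krylov[:, j] = x
end

@assert X_dense ≈ X_krylov   # both approaches give the same solution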

@thorek1 (Contributor) commented Aug 5, 2023

See the vec and reshape operations in my example.

I also gave it a try but was struggling with the operators. Working with realised matrices is somewhat easier for me...

@thorek1 (Contributor) commented Aug 5, 2023

I tried a bit myself and managed it for the direct solver case, but the linear operator dimensions are a bit tricky to figure out.

@gdalle (Member, Author) commented Aug 5, 2023

This is an internal performance issue that can always be improved in a non-breaking way, so I'm putting it on hold to focus on the design for v0.5.

gdalle removed this from the v0.5.0 milestone Aug 7, 2023
gdalle added the performance (Improve performance or stability) label and removed the feature (New feature or request) label Aug 8, 2023
@gdalle (Member, Author) commented Apr 10, 2024

Fixed by #135 for the dense Jacobian case; see #129 for the lazy Jacobian case.

gdalle closed this as completed Apr 10, 2024