Seeking advice for setting up solver for finite element #303
Comments
@kiendangtor I apologize for the slow response. This matrix is quite small to exploit the GPU's full power. Would you consider a direct solve approach here? Other than that, the configurations from the provided examples are typical for this type of mesh (depending, of course, on what you are trying to solve). I had in mind something like a "config scanner" to find the best solver options for a given matrix, but there is no ETA on that.
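A direct solve can be requested through the same JSON interface. A minimal sketch, assuming the DENSE_LU_SOLVER backend that the shipped configs use as a coarse solver is also accepted as the top-level solver (untested here, so treat it as a starting point rather than a verified configuration):

{
    "config_version": 2,
    "solver": {
        "solver": "DENSE_LU_SOLVER",
        "monitor_residual": 1,
        "print_solve_stats": 1,
        "obtain_timings": 1,
        "scope": "main"
    }
}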
Thank you for your reply. A "config scanner" would be awesome. I had a bigger mesh with 20 million nonzeros, but to make the diagnosis easier I attached a very coarse mesh; the behavior is the same. I was able to run the solver successfully with many configs, but every one that uses "MULTICOLOR_DILU" failed, whether as a smoother or as a preconditioner. Do we need to provide the graph separately? I looked at the code and it doesn't seem to need that.
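Note: MULTICOLOR_DILU does not take a separate graph; AMGX computes the coloring itself from the matrix sparsity pattern, driven by the coloring keys in the config. A hedged fragment showing where those knobs sit, using the nested-smoother form seen in other shipped configs (the PARALLEL_GREEDY value and coloring_level setting are assumptions for illustration, not a verified fix):

"smoother": {
    "solver": "MULTICOLOR_DILU",
    "matrix_coloring_scheme": "PARALLEL_GREEDY",
    "coloring_level": 1,
    "max_uncolored_percentage": 0.0,
    "scope": "smoother"
}

If the MIN_MAX coloring leaves too many uncolored rows, tightening max_uncolored_percentage or switching coloring schemes is the usual first experiment.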
Understood, thanks. Right, starting with a smaller matrix often makes it easier to rule out what's clearly not working.
Thanks,
Thank you for your fast response. I have uploaded the larger system here: https://we.tl/t-ymNm1kBuan Here is the output: with GMRES_AMG_D2.json it took 16s, and PCG_AGGREGATION_JACOBI.json is ~16s as well. By adding strength_threshold = 0.6 I managed to get GMRES_AMG_D2 down to 7s, but that is still longer than CG without a preconditioner.
When I tried to use PCG_ILU.json, it returned NaN. AMGX version: 2.5.0. Basically I am still struggling to find the right configuration for this matrix; the best result I have is CG without any preconditioner. I hope this clarifies my issue.
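For reference, strength_threshold is a classical-AMG setup parameter and lives in the AMG scope of the config. A minimal fragment showing where it goes (structure assumed from the shipped classical configs rather than copied from GMRES_AMG_D2.json):

"preconditioner": {
    "solver": "AMG",
    "algorithm": "CLASSICAL",
    "strength_threshold": 0.6,
    "scope": "amg"
}

Larger values make coarsening more aggressive, which is consistent with the setup getting cheaper (16s down to 7s) at the possible cost of convergence rate.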
Environment information:
OS: Windows 11
CUDA runtime: CUDA 12.4
AMGX version or commit hash: 2.5.0
NVIDIA driver: 551.78
NVIDIA GPU: NVIDIA RTX 3060 Ti
AMGX solver configuration: this is one of the AMG configurations that I tried but that failed to converge.
{
    "config_version": 2,
    "solver": {
        "matrix_coloring_scheme": "MIN_MAX",
        "max_uncolored_percentage": 0.15,
        "algorithm": "AGGREGATION",
        "obtain_timings": 1,
        "solver": "AMG",
        "smoother": "MULTICOLOR_DILU",
        "print_solve_stats": 1,
        "presweeps": 1,
        "selector": "SIZE_2",
        "coarsest_sweeps": 2,
        "max_iters": 1000,
        "monitor_residual": 1,
        "scope": "main",
        "max_levels": 1000,
        "postsweeps": 1,
        "tolerance": 0.1,
        "print_grid_stats": 1,
        "norm": "L1",
        "cycle": "V"
    }
}
Matrix is attached
Reproduction steps
I ran this with the command line given in the quick start document: "amgx_capi -m identity_system.mtx -c AGGREGATION_DILU.json"
It diverged. Same thing with AMG_CLASSICAL_CG.json. Using AMG_AGGREGATION_CG.json it managed to converge, but very slowly.
If I use PCG_AGGREGATION_JACOBI.json I get the best convergence speed.
Can you please tell me which option I should choose for the type of matrix I attached? This is basically a 10-node tetrahedral mesh generated from a 3D solid dish.
Thanks,
Kien
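For reference, the configuration the reporter found fastest, PCG_AGGREGATION_JACOBI, pairs a PCG outer solver with one V-cycle of Jacobi-smoothed aggregation AMG as the preconditioner. A hedged sketch along those lines (key names follow the documented AMGX parameters; the exact values in the shipped PCG_AGGREGATION_JACOBI.json may differ):

{
    "config_version": 2,
    "solver": {
        "solver": "PCG",
        "preconditioner": {
            "solver": "AMG",
            "algorithm": "AGGREGATION",
            "selector": "SIZE_2",
            "smoother": "BLOCK_JACOBI",
            "presweeps": 0,
            "postsweeps": 3,
            "relaxation_factor": 0.75,
            "cycle": "V",
            "max_iters": 1,
            "max_levels": 100,
            "scope": "amg"
        },
        "max_iters": 1000,
        "tolerance": 1e-06,
        "monitor_residual": 1,
        "print_solve_stats": 1,
        "obtain_timings": 1,
        "norm": "L2",
        "scope": "main"
    }
}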
coarse.zip