I am currently using a custom L2 loss function and would like to reproduce the behavior of the built-in L2 loss in LGBM. However, in my testing, when I set the parameter 'feature_fraction', the results reported in eval_result deviate by about 1e-4. Here is my code:
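The original script was not included, so the snippet below is only a minimal sketch of the kind of comparison described above, not the poster's actual code. It assumes LightGBM 4.x (where a custom objective is passed as a callable through `params["objective"]` with the signature `(preds, train_data)`), uses a synthetic scikit-learn dataset in place of the real one, and picks arbitrary values for `feature_fraction`, `num_leaves`, and the other parameters. `boost_from_average` is disabled here so both runs start from the same raw score of 0, since a custom objective cannot use the average-based init score.

```python
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_regression

# Synthetic data as a stand-in for the dataset that was not shared.
X, y = make_regression(n_samples=1000, n_features=20, noise=1.0, random_state=42)

def custom_l2(preds, train_data):
    """Gradient and hessian of 0.5 * (preds - label)^2, the same quantities the built-in L2 loss uses."""
    label = train_data.get_label()
    grad = preds - label
    hess = np.ones_like(preds)
    return grad, hess

base_params = {
    "learning_rate": 0.1,
    "num_leaves": 31,
    "feature_fraction": 0.8,      # the parameter the reported ~1e-4 deviation depends on
    "boost_from_average": False,  # keep the built-in run on the same raw init score (0) as the custom run
    "seed": 42,
    "verbosity": -1,
    "metric": "l2",
}

eval_histories = {}
for name, objective in [("builtin", "regression"), ("custom", custom_l2)]:
    train_set = lgb.Dataset(X, label=y)
    history = {}
    lgb.train(
        {**base_params, "objective": objective},
        train_set,
        num_boost_round=50,
        valid_sets=[train_set],
        valid_names=["train"],
        callbacks=[lgb.record_evaluation(history)],
    )
    eval_histories[name] = np.asarray(history["train"]["l2"])

# Largest per-iteration difference in the recorded l2 metric between the two runs.
print(np.abs(eval_histories["builtin"] - eval_histories["custom"]).max())
```

The gradient/hessian pair returned here (`preds - label`, all ones) is exactly what the built-in L2 objective optimizes, so the sketch compares like for like up to how the two code paths interact with the rest of training.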
jameslamb changed the title from "Custom objective Unable to reproduce L2Loss for regression" to "[python-package] How do I reproduce LightGBM's L2 loss with a custom objective?" on Aug 14, 2023
I see that you double-posted this here and on Stack Overflow (link). Please do not do that.
Maintainers here also monitor the [lightgbm] tag on Stack Overflow. I could have been spending time preparing an answer here while another maintainer was spending time answering your Stack Overflow post, which would have been a waste of maintainers' limited attention that could otherwise have been spent improving this project. Double-posting also makes it less likely that others with a similar question will find the relevant discussion and answer.
But when I comment out the 'feature_fraction' parameter, the deviation drops to the order of 1e-18. What is the reason for this?