[ENH] make internal values available #378
Most of the algorithms don't construct a Hessian approximation… and most of the ones that do are low-storage variants that construct the Hessian (or, more commonly, its inverse) only implicitly as a low-rank sum.
Any value would still be great to have exposed. Or is there a reason why this is difficult to expose?
It's tricky to design a good API to expose information that is algorithm-dependent, since the type of information (e.g. how the Hessian is represented) will be completely different for different algorithms. What kind of API do you have in mind?
While I agree that there is basically no way of unifying all the algorithms, the easiest option seems to me an array of optional fields: technically, a dict-like structure. If fields are shared, e.g. if "grad" is known, then there is a field "grad"; if an algorithm does not have a gradient, there is no "grad" field. As a policy: return whatever you have, and where algorithms share the same quantity, use the same field name. At the same time, it would be good if minimizers could also receive this information as initial values, basically to continue from a certain point. If repeated, similar minimizations have to be executed, this can speed things up considerably. The ideal solution would be if this information could also be passed to a callback/stopping criterion, in order to decide when to stop the execution.
As I said, it's even worse than that. Many of the algorithms that compute a Hessian do so in a rather complicated data structure that represents the Hessian implicitly as a low-rank decomposition. It seems like the sort of thing that few people would be able to use effectively, and a lot of work to export and document. C isn't dynamically typed, so a dict-like structure with arbitrary values is somewhat painful to use; probably you'd have to use
Yes; that would actually be useful, since it gives an estimate of the uncertainty of a parameter. But I agree that if it should be usable across languages, it's a little trickier. So, to reduce scope: if an uncertainty-like quantity can be returned, that would already be nice. Feasible?
It is certainly easier to have a uniform API to fetch an array of uncertainties for each parameter. Implementing this would only be possible for a subset of the algorithms, of course.
First of all, thanks a lot for this great library and collection of algorithms!
It would be great if one could retrieve values used by the minimizer after the minimization, such as the Hessian approximation. This would allow, for example, evaluating a custom metric after termination.
Is that possible?