Will it work for sparse one-hot data (only 0s and 1s in the data)? #1
Hi, and thanks! This code should work in general for any loss function that can be written as a sum of squares, so it should be fine with one-hot data: your loss function is then of the form f(x) = sum_i r_i(x)^2. If your problem is not large scale (e.g. <= 100 unknowns you want to optimize), then I would recommend DFO-LS. Unfortunately there are not a lot of accessible resources on the topic, but depending on your background I would recommend the papers mentioned in the readme and the online documentation.
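To make the sum-of-squares structure concrete, here is a minimal sketch in plain Python (the model, data, and the `residuals`/`loss` helpers are illustrative assumptions, not part of DFO-LS itself) of a loss f(x) = sum_i r_i(x)^2 over one-hot inputs:

```python
# Minimal sketch: sum-of-squares loss over one-hot data.
# Each row of X is a one-hot vector (only 0s and 1s); the model is a
# simple linear predictor with weights x, and r_i(x) = prediction - target.

X = [  # one-hot encoded inputs (3 categories)
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 1, 0],
]
y = [2.0, 5.0, 3.0, 5.0]  # targets

def residuals(x):
    """Residual vector r(x); least-squares solvers like DFO-LS expect a
    function returning this vector rather than the scalar loss."""
    return [sum(w * v for w, v in zip(x, row)) - t for row, t in zip(X, y)]

def loss(x):
    """Scalar sum-of-squares loss f(x) = sum_i r_i(x)^2."""
    return sum(r * r for r in residuals(x))

# With weights equal to the per-category targets, every residual is zero:
print(loss([2.0, 5.0, 3.0]))  # 0.0
```

The point is just that one-hot inputs change nothing structurally: the residual vector is still well defined, so the sum-of-squares machinery applies as usual.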
Unfortunately I don't have a print-friendly version of the presentation you mention. That talk covered more the DFO-LS software, so you could look at the papers mentioned in the readme (and the online documentation) for more details. These would be print-friendly.
Great, thanks for the quick answer. You said: "If your problem is not large scale (e.g. <= 100 unknowns you want to optimize), then I would recommend DFO-LS." Would your code work in such a case?
No, I don't think DFO-LS would be the right choice for problems that large (it isn't able to make use of sparsity). However, you should be able to use this code (DFBGN) ok; it would just be a matter of picking the block size parameter. Note that there is usually a tradeoff: larger block sizes give better steps per iteration, but cost more objective evaluations per iteration.
Hello Dr. Roberts
Great code and talk:
https://www.youtube.com/watch?v=RvEZURqfaC4
Thank you very much.
But will it work for big, very sparse one-hot data (only 0s and 1s in the data)?
https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/
https://en.wikipedia.org/wiki/One-hot
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
By the way, do you have a print-friendly version of your presentation
Derivative-free optimisation for least-squares problems
https://lindonroberts.github.io/talk/unsw_202004/roberts_unsw.pdf
for example, in Word format?
Or simpler slides covering just the main idea,
or another introductory video...
Thanks in advance ...