As of this writing, JDLL supports running models, but not training, retraining, or fine-tuning. There is an open question about whether it should do so in an engine-agnostic way, and if so, to what extent, and which training patterns and configurations to accommodate.
@axtimwalde suggests that there are some very common patterns that could be supported pretty easily.
My perspective is that we should start by having Java-based plugins that need to do training/tuning use Appose directly to invoke the deep learning framework training API of their choice. Once we have several such plugins doing this, we can scrutinize them for commonalities and consider what sorts of API, if any, might be worth generalizing into JDLL. My intuition is that their needs will be rather diverse, and most or all plugins will not need such an engine-agnostic API from Java itself; nonetheless, JDLL could provide at least a subset of training functionality in an engine-agnostic way, if there is value in doing so.
Finally, there is also the bioimage-io engine, which runs models server-side as services; we could simply say that if you want to do training, you should rely on that mechanism rather than running it locally on your own hardware. (The bioimage-io engine can also run locally, although it is heavier weight than an in-process or interprocess/Appose-based approach, due to the Hypha server backend.)