Does the Java API have the same capabilities as the Python API? #19233
-
Sorry if this is too naive a question; I just started looking at this project and searched the discussions but could not find a good answer. The project's examples directory contains a lot of use cases (I am interested in RL), but the scala-package examples directory has only a few (I assume just to illustrate how to use the Java API?). Q: Can all of the examples implemented in Python be done (easily) via the Java API? Are there any good examples using the Java API for RL? A bit of background: before running into this project, I was looking at the TensorFlow Java API. Their Java API is not stable and is mainly for serving right now, so it might not be worth investing much time in training a complicated model through it. I need to be able to train from Java (it is too expensive to evaluate a trading strategy in Python). I am also looking at Deeplearning4j; any comments/advice would be appreciated!
-
Thanks for raising this question. At the moment, the Java API in MXNet provides inference only. Also cc @lanking520 to comment.
-
@wangtieqiao Currently the MXNet 1.x training API uses deprecated features, and we are proposing a path to a Java API for 2.0, but it will mostly be low level. From my personal point of view, the TensorFlow Java (2.0) API still has a long way to go to support all of the operators. Again, I would like to raise my +1 for DJL (djl.ai). It supports MXNet training and leverages the MXNet ParameterStore for multi-GPU training. There is even a whole study series (https://github.com/aws-samples/d2l-java) that teaches how to use DJL for training with the MXNet engine. It uses the same API that MXNet Gluon uses, such as autograd, CachedOp, and much more. As for DL4J, it does support training, but only in the symbolic (old-fashioned) style. Another core issue is that its behavior is quite different from TensorFlow if you want to train with it.
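To make the DJL suggestion above concrete, here is a minimal sketch of DJL's Gluon-style autograd API in Java. This is an illustrative example, not code from the thread; it assumes the `ai.djl:api` artifact plus a native engine (e.g. `ai.djl.mxnet:mxnet-engine`) are on the classpath, and the class name `AutogradSketch` is my own invention.

```java
import ai.djl.engine.Engine;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.training.GradientCollector;

public class AutogradSketch {
    public static void main(String[] args) {
        // NDManager owns native memory, like an MXNet context.
        try (NDManager manager = NDManager.newBaseManager()) {
            // Create an input and mark it for gradient tracking.
            NDArray x = manager.create(new float[] {1f, 2f, 3f});
            x.setRequiresGradient(true);

            // Record the forward pass and backpropagate, mirroring
            // MXNet Gluon's autograd.record() / backward().
            try (GradientCollector gc = Engine.getInstance().newGradientCollector()) {
                NDArray y = x.mul(x).sum(); // y = sum(x^2)
                gc.backward(y);
            }

            // dy/dx = 2x, so the gradient should be [2, 4, 6].
            System.out.println(x.getGradient());
        }
    }
}
```

The same `GradientCollector` pattern is what DJL's `Trainer` uses internally for full model training, so the d2l-java book linked above builds on exactly this idiom.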