ARCore + TFLite inference on GPU stalling both #1663
Comments
Can you share CPU profiling data for running both simultaneously?
Hello @15kingben, you can find the profiling data here.
Is the performance of the inference model alone significantly higher? ARCore uses TFLite for certain features; the Semantics API, for example, runs on a downscaled 192x256 image to save performance. Even if inference is run on the GPU, the CPU may still be used for certain operations.
Dear @15kingben, thank you for investigating. What's more troubling, and ultimately the reason I created this issue, is that ARCore performance also drops significantly, even stalling as the title says. At times the pose output of ARCore drops to 2-3 Hz, and I have had occurrences where it stopped for several seconds, likely leading to an internal reset of the filter. There also doesn't seem to be an option for deliberately splitting resources between the two (ARCore and TFLite). So the gist of this issue is: can anything be done at the API level to mitigate this drop in ARCore's performance? At every invocation I am essentially risking the stability of the motion tracking process/output. Any help in that regard would be greatly appreciated. Many thanks again for your support.
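[Editor's note: one common way to reduce this kind of contention, sketched here as an untested assumption rather than anything proposed in the thread, is to decouple inference from the ARCore/render loop: run the interpreter on a dedicated background thread and drop frames while a previous inference is still in flight, so ARCore's update loop is never blocked. The `runInference` callback is a hypothetical placeholder for the actual TFLite call.]

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.atomic.AtomicBoolean

// Runs TFLite inference off the ARCore/render thread. Frames arriving
// while an inference is still in flight are skipped, so the GL thread
// is never blocked waiting on the interpreter.
class ThrottledInference(private val runInference: () -> Unit) {
    private val executor = Executors.newSingleThreadExecutor()
    private val busy = AtomicBoolean(false)

    // Call once per ARCore frame; returns true if inference was scheduled.
    fun maybeRun(): Boolean {
        if (!busy.compareAndSet(false, true)) return false // previous run still active: skip
        executor.execute {
            try {
                runInference() // e.g. interpreter.run(input, output)
            } finally {
                busy.set(false)
            }
        }
        return true
    }

    fun shutdown() = executor.shutdown()
}
```

Skipping frames this way caps the inference rate at whatever the GPU can sustain without ever queueing work behind the render loop; it does not remove GPU contention itself, but it prevents the CPU-side stall from compounding it.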
@15kingben, has any progress been made, or are you aware of a workaround I can use to overcome this issue?
Hi Roman, I'm sorry I have not had time to investigate this issue recently. I was not able to recreate this issue by running our Semantics model simultaneously with ARCore on the GPU delegate in the hello_ar_kotlin example, although I was simply feeding dummy data into the model. Can you share the full code of your reproducible example, specifically how the images are sourced for the TFLite model's input?
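[Editor's note: the dummy-data setup described above could look roughly like the following sketch. `modelBuffer` is a hypothetical `MappedByteBuffer` holding the `.tflite` model, and the output shape is model-dependent and assumed here for illustration.]

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Create an interpreter backed by the GPU delegate.
val delegate = GpuDelegate()
val options = Interpreter.Options().addDelegate(delegate)
val interpreter = Interpreter(modelBuffer, options) // modelBuffer: MappedByteBuffer of the model

// Dummy 192x256 RGB float input, matching the downscaled resolution
// mentioned for the Semantics API.
val input = ByteBuffer
    .allocateDirect(1 * 192 * 256 * 3 * 4) // NHWC floats
    .order(ByteOrder.nativeOrder())
val output = Array(1) { Array(192) { Array(256) { FloatArray(1) } } } // shape is model-dependent

interpreter.run(input, output)

// Release GPU resources when done.
interpreter.close()
delegate.close()
```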
Hello, I will probably provide a sample app in the coming month. Thank you for your support and patience.
SPECIFIC ISSUE ENCOUNTERED
I am running ARCore for motion tracking using the Android SDK, basically at the state of the hello_ar_kotlin sample. When running inference on (separate!) image data, while TFLite can access the GPU, both systems break down when run together, with ARCore dropping to 2-3 FPS and TFLite to 1 FPS.
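[Editor's note: the issue does not show how the separate image data is obtained. One ARCore-side pattern worth checking in any repro, sketched here as an assumption, is how camera images are acquired: holding acquired `Image`s starves ARCore's camera buffer pool and can by itself stall tracking, so the CPU image should be copied out and closed immediately.]

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException

// Copy the Y plane out of the CPU camera image and close it right away,
// so ARCore's internal image buffers are returned promptly.
fun grabCameraLuma(frame: Frame): ByteArray? {
    return try {
        frame.acquireCameraImage().use { image ->
            val yPlane = image.planes[0].buffer // Y plane of the YUV_420_888 image
            ByteArray(yPlane.remaining()).also { yPlane.get(it) }
        }
    } catch (e: NotYetAvailableException) {
        null // camera image not ready this frame
    }
}
```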
VERSIONS USED
com.google.ar:core:1.42.0
adb shell getprop ro.build.fingerprint
STEPS TO REPRODUCE THE ISSUE
Dependencies as in the hello_ar_kotlin sample.
WORKAROUNDS (IF ANY)
None found yet.
ADDITIONAL COMMENTS
I cannot use MLKit with ARCore as
GPU Memory usage of TFLite alone seems to be 771 MB on average. GPU Memory usage with ARCore together seems to be about 810 MB on average, so not that much more.
Resource profiling data obtained with Android GPU Inspector.