Why is the change necessary?
The current implementation relies on complicated firmware: code pre-programmed to perform complex tasks on the IoT devices themselves. This is primarily because a CPU-only IoT hub cannot be expected to control all the IoT nodes with acceptable performance.
The proposed improvement is to use the CUDA library and the CUDA cores in GPU-enabled IoT edges (hubs) such as the Nvidia Jetson. These many low-performance cores enable an architecture in which the IoT nodes expose a very simple API while the hub computes complex behaviors for multiple devices and continuously sends the results over the network.
This performs better on a GPU because, even though individual GPU cores are weak, there are hundreds of them, and each device's behavior can be computed independently. It also lets us reuse the IoT nodes for different complex behaviors without reprogramming the firmware.
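To illustrate the data-parallel shape of this work, here is a minimal sketch (an illustration, not the project's actual code): each node only knows how to apply one value, and the hub computes one value per device per tick. The "breathing light" behavior and all names here are hypothetical; on a GPU-enabled hub, each per-device computation would map naturally onto one CUDA thread (e.g. via CuPy or Numba), but plain Python shows the structure.

```python
import math

# Hypothetical per-device behavior: the node's firmware only knows how to
# set a brightness; the hub computes a "breathing light" waveform for it.
# Each call is independent of the others, so on a GPU hub each device's
# computation could run on its own CUDA core.
def breathe(tick: int, phase: float) -> float:
    """Brightness in [0, 1] for one device at one tick."""
    return 0.5 * (1.0 + math.sin(0.1 * tick + phase))

def compute_tick(tick: int, phases: list[float]) -> list[float]:
    """One hub tick: independent per-device results, trivially parallel."""
    return [breathe(tick, p) for p in phases]

# Hundreds of nodes, each with its own phase offset.
phases = [i * 0.01 for i in range(300)]
brightness = compute_tick(0, phases)
```

Swapping `breathe` for a different function changes the behavior of all 300 nodes without touching any firmware, which is the point of the proposal.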
In contrast, in the existing architecture each IoT device exposes a set of APIs, each tied to a complex task that cannot be changed once the firmware is flashed onto the device.
The new plan exposes very simple APIs that the server hits via broadcast ticks carrying the data needed to perform the more complicated tasks. This lets us program complicated behaviors on the hub on the fly, making the IoT devices theoretically reprogrammable.
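A broadcast tick could look like the following sketch. The wire format is an assumption for illustration (the issue does not specify one): a tick counter and device count, followed by one float per device addressed by position, so each node's "simple API" reduces to reading its slot and applying the value.

```python
import struct

# Hypothetical wire format for one broadcast tick (an assumption, not the
# project's actual protocol): tick number and device count, then one
# 32-bit float per device, addressed by position.
TICK_HEADER = struct.Struct("!I H")  # network byte order: u32 tick, u16 count

def pack_tick(tick: int, values: list[float]) -> bytes:
    """Hub side: serialize one tick's per-device values into a packet."""
    payload = struct.pack(f"!{len(values)}f", *values)
    return TICK_HEADER.pack(tick, len(values)) + payload

def unpack_tick(data: bytes) -> tuple[int, list[float]]:
    """Node side: recover the tick number and all per-device values."""
    tick, count = TICK_HEADER.unpack_from(data)
    values = struct.unpack_from(f"!{count}f", data, TICK_HEADER.size)
    return tick, list(values)

packet = pack_tick(42, [0.25, 0.5, 0.75])
```

The hub would send such a packet over a UDP broadcast socket once per tick; the device firmware stays a dumb "read my slot, apply the value" loop that never needs reflashing.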
Any expected performance improvement?
Yes. Offloading the per-device computation to the GPU's many parallel cores is expected to outperform a CPU-only hub trying to control all the nodes itself.
Actions required
Add the CUDA library to the Docker image.
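A possible shape for this change, assuming the hub container is built on NVIDIA's L4T base image for Jetson (the base image tag and package name below are assumptions and must match the device's JetPack/L4T release):

```dockerfile
# Assumed Jetson base image; the L4T base image exposes the board's CUDA
# driver libraries to the container via the NVIDIA container runtime.
FROM nvcr.io/nvidia/l4t-base:r32.7.1

# Install the CUDA toolkit for the hub process (assumption: the package
# version must match the L4T release on the device).
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-toolkit-10-2 \
    && rm -rf /var/lib/apt/lists/*
```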