Hello, our team is planning to implement a remote-rendering-based social VR application, where the rendering workload is offloaded to the server. We are wondering whether it is possible to instrument the code of Ubiq to implement it.
Specifically, we need to design a rendering pipeline on the server side and transmit the rendered content to the client. We want to implement our scheme on top of Ubiq. Do you have any suggestions about our idea?
Sure, here are some thoughts on your application...
Ubiq doesn't have inherent support for remote render streaming. You would need to implement both input-device streaming and render capture/streaming on top of it. The most challenging step would be encoding and streaming the video.
Ubiq's main messaging system is designed for discrete messages intended to be delivered to multiple clients (e.g. avatar transforms). It can be used for unicast as well, with quality of service good enough for many cases. However, the expectation is that when you have a high-bandwidth, latency-sensitive, unicast stream (such as video), Ubiq is used to create a distinct, dedicated channel between two Components for that stream.
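For example, per-client input such as head pose is a good fit for the ordinary messaging layer. A minimal sketch, assuming the registration pattern used in current Ubiq samples (NetworkScene.Register / ProcessMessage, which may differ between versions); the component name and message fields are made up for illustration:

```csharp
using UnityEngine;
using Ubiq.Messaging;

// Hypothetical component: streams the local head pose over Ubiq's ordinary
// message fanout. The registration API shown follows current Ubiq samples
// but may differ between versions.
public class HeadPoseSender : MonoBehaviour
{
    private NetworkContext context;

    private struct PoseMessage
    {
        public Vector3 position;
        public Quaternion rotation;
    }

    private void Start()
    {
        // Register with the local NetworkScene to obtain a messaging context.
        context = NetworkScene.Register(this);
    }

    private void Update()
    {
        // Small, frequent messages like this are what the messaging layer
        // already carries for avatar transforms.
        context.SendJson(new PoseMessage
        {
            position = transform.position,
            rotation = transform.rotation
        });
    }

    // On the render server, the counterpart component with the same NetworkId
    // receives the pose and drives the camera rendering this client's view.
    public void ProcessMessage(ReferenceCountedSceneGraphMessage message)
    {
        var pose = message.FromJson<PoseMessage>();
        transform.SetPositionAndRotation(pose.position, pose.rotation);
    }
}
```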
Dedicated per-stream channels are how voice chat works in Ubiq currently. Ubiq uses WebRTC for some of its real-time communication. The Voip Components on each Peer use Ubiq's messaging system to exchange WebRTC signalling data. Inside the Voip Components is a complete WebRTC stack, and instances of these stacks make direct connections between each pair of Peers for the audio.
WebRTC supports video. Ubiq uses the SipSorcery implementation of WebRTC for voice, but we haven't experimented with streaming video inside Ubiq.
To use Ubiq to facilitate render streaming, one approach would be to use Unity's Render Streaming package, with Ubiq exchanging the signalling data necessary to establish a session.
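The signalling exchange itself is just SDP offers/answers and ICE candidates carried as ordinary Ubiq messages and handed to whatever signalling hooks the package exposes. A rough sketch of that relay, with the component name and message layout invented for illustration (the handoff into the package is not shown, as it depends on the package version):

```csharp
using System;
using UnityEngine;
using Ubiq.Messaging;

// Illustrative relay: carries WebRTC signalling payloads (SDP offers/answers,
// ICE candidates) between peers as Ubiq messages. The received payloads would
// then be fed to the render-streaming stack's signalling hooks (not shown).
public class SignallingRelay : MonoBehaviour
{
    private NetworkContext context;

    [Serializable]
    private struct SignallingMessage
    {
        public string type;      // "offer", "answer" or "candidate"
        public string sdp;       // session description, when type is offer/answer
        public string candidate; // ICE candidate string, when type is candidate
    }

    // Raised when the remote peer sends signalling data: (type, sdp, candidate).
    public event Action<string, string, string> OnSignalling;

    private void Start()
    {
        context = NetworkScene.Register(this);
    }

    public void Send(string type, string sdp, string candidate)
    {
        context.SendJson(new SignallingMessage { type = type, sdp = sdp, candidate = candidate });
    }

    public void ProcessMessage(ReferenceCountedSceneGraphMessage message)
    {
        var msg = message.FromJson<SignallingMessage>();
        OnSignalling?.Invoke(msg.type, msg.sdp, msg.candidate);
    }
}
```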
Alternatively, you could capture the render stream yourself (i.e. reimplement the Unity package functionality) and use Ubiq's SipSorcery implementation to establish a p2p video stream (using Ubiq to exchange signalling data necessary to bootstrap this).
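For the capture side, one common Unity approach is to render the server-side camera into a RenderTexture and read the frames back asynchronously before handing them to an encoder. A sketch of the readback half, assuming you bring your own encoder (the encoder handoff is left as a stub):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using Unity.Collections;

// Sketch: renders a camera into a RenderTexture and reads frames back to the
// CPU asynchronously. The pixel data would then be passed to a video encoder
// (e.g. via SipSorcery's video source abstractions, or your own encoder).
[RequireComponent(typeof(Camera))]
public class FrameCapture : MonoBehaviour
{
    public int width = 1280;
    public int height = 720;

    private Camera captureCamera;
    private RenderTexture target;

    private void Start()
    {
        captureCamera = GetComponent<Camera>();
        target = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32);
        captureCamera.targetTexture = target;
    }

    private void LateUpdate()
    {
        // Asynchronous readback avoids stalling the render thread on a GPU->CPU copy.
        AsyncGPUReadback.Request(target, 0, TextureFormat.RGBA32, OnFrameReady);
    }

    private void OnFrameReady(AsyncGPUReadbackRequest request)
    {
        if (request.hasError) return;
        NativeArray<byte> pixels = request.GetData<byte>();
        // Hand 'pixels' to the encoder here; the layout is width * height * 4 bytes of RGBA.
        EncodeAndSend(pixels);
    }

    private void EncodeAndSend(NativeArray<byte> pixels)
    {
        // Placeholder: encoding and transport depend on the chosen stack.
    }
}
```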
If you didn't want to use SipSorcery, you could encode the video yourself and have Ubiq Components negotiate a UDP channel to send the data.
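If you go that way, the channel itself can be plain UDP sockets, with Ubiq messages used beforehand to exchange addresses and ports; you then own packetisation, loss handling and reassembly yourself. A minimal sender sketch in plain .NET, with a made-up 4-byte header for reassembly:

```csharp
using System;
using System.Net.Sockets;

// Minimal sketch of a UDP sender for self-encoded video frames. Each encoded
// frame is split into MTU-sized packets tagged with a frame id and packet
// index so the receiver can reassemble frames and drop late ones.
public class VideoUdpSender : IDisposable
{
    private const int MaxPayload = 1200; // keep below typical MTU
    private readonly UdpClient client;
    private ushort frameId;

    public VideoUdpSender(string host, int port)
    {
        client = new UdpClient();
        client.Connect(host, port);
    }

    public void SendFrame(byte[] encodedFrame)
    {
        frameId++;
        ushort packetIndex = 0;
        for (int offset = 0; offset < encodedFrame.Length; offset += MaxPayload)
        {
            int length = Math.Min(MaxPayload, encodedFrame.Length - offset);
            var packet = new byte[length + 4];
            // 4-byte header: frame id and packet index for reassembly on the receiver.
            BitConverter.GetBytes(frameId).CopyTo(packet, 0);
            BitConverter.GetBytes(packetIndex++).CopyTo(packet, 2);
            Buffer.BlockCopy(encodedFrame, offset, packet, 4, length);
            client.Send(packet, packet.Length);
        }
    }

    public void Dispose() => client.Dispose();
}
```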
Feel free to post back if you want to discuss further!