
WebXR Depth Camera #5848

Merged
merged 30 commits into playcanvas:main from webxr-depth-camera
Jan 19, 2024
Conversation

@Maksims Maksims (Collaborator) commented Nov 25, 2023

This PR is dependent on #5786 (Camera Access).

With the release of the Quest 3, the Depth Sensing API has been added to its browser, which also affected the specs. Previously, only one type of device was capable of providing depth information: Android phones, which assume a single monoscopic view. So XrDepthSensing was designed around the assumption of a single view. The Quest 3 now exposes depth data per view, i.e. multiple depth textures. This requires a redesign of how depth data is accessed, so as part of this API we are deprecating XrDepthSensing in favor of accessing depth information through XrView.

This API is pretty low-level. It also enables real-time, depth-based hit testing via the getDepth method, which is more precise than XrHitTest and reacts to moving objects in the real world.

In a future PR I will add a material flag for "Depth Occlusion" that automatically occludes virtual geometry with real-world geometry. The underlying implementation will use Depth Sensing, so for users it will be just a matter of ticking a checkbox.

New APIs:

```javascript
// pc.XrViews
views.supportedDepth     // true if depth information is supported
views.availableDepth     // true if depth information is available.
                         // This is only known after the XR session has started.
                         // Depth can be supported but unavailable due to a missing
                         // session feature request, user permissions or underlying
                         // system capabilities.
views.depthPixelFormat   // PIXELFORMAT_LA8 or PIXELFORMAT_F32, based on the underlying AR capabilities

// pc.XrView
view.textureDepth        // texture with depth information, or null if depth is not available
view.depthUvMatrix       // Mat4 to transform the depth texture for a correct projection
view.depthValueToMeters  // coefficient to multiply a raw depth value by to convert it to meters
view.getDepth(u, v)      // depth in meters, or null if depth information is not available in the
                         // current frame. u and v are floats in 0..1, where (0, 0) is the top-left corner.
```
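A minimal usage sketch of the API listed above (the `metersFromRaw` and `setupDepthQuery` helpers are hypothetical, and `views.list` is assumed to expose the per-view XrView objects; this is not confirmed engine code):

```javascript
// Sketch, not engine code: query depth at the center of the first XR view
// each frame. Assumes a pc.Application `app` with an active AR session whose
// `app.xr.views` exposes the properties listed above.

// Raw depth values are multiplied by `depthValueToMeters` to obtain meters.
function metersFromRaw(raw, depthValueToMeters) {
    return raw * depthValueToMeters;
}

function setupDepthQuery(app) {
    app.on('update', () => {
        const views = app.xr.views;
        if (!views.availableDepth) return; // not supported/granted in this session

        const view = views.list[0];
        if (!view) return;

        // u, v in 0..1 with (0, 0) at the top-left corner;
        // returns meters, or null if depth is unavailable this frame
        const depth = view.getDepth(0.5, 0.5);
        if (depth !== null) {
            console.log(`depth at view center: ${depth.toFixed(2)} m`);
        }
    });
}
```

Note that `getDepth` already applies the meters conversion; `metersFromRaw` only illustrates what `depthValueToMeters` is for when sampling the raw texture yourself.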

Examples:

  1. Camera Depth - renders a plane in front of the camera with a custom shader that supports both CPU and GPU paths for rendering the depth texture.
  2. Depth Sensing Placer - places an object based on depth sensing information.

TODO:

  • GPU path (waiting for Quest 3 to either ship OS v60, which makes Depth Sensing public, or to get access to the OS beta for development).
  • Example with depth map
  • Example with a depth hit test (object placer)

I confirm I have read the contributing guidelines and signed the Contributor License Agreement.

@mvaligursky mvaligursky added the feature and area: xr labels Dec 4, 2023
@Maksims Maksims (Collaborator, Author) commented Jan 18, 2024

This PR is ready for a review.

Added an example for object placement using depth information. Bear in mind that it is only available when depth sensing uses the CPU path. The GPU path does not provide this functionality, but it could potentially be emulated by reading a pixel back from the depth texture in a separate PR.

This is a useful technique for more precise object placement when hit-testing does not provide reliable information. I've also used both techniques together, picking the closest point, which gave better results for reliable object placement.
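The combined strategy can be sketched as follows (the helper names `pickClosestDistance` and `placeAlongView` are hypothetical; the entity and Vec3 calls are standard PlayCanvas API):

```javascript
// Sketch of combining an XrHitTest result with view.getDepth: take whichever
// source reports the closer surface. Either distance may be null on a given
// frame when that source has no data.

function pickClosestDistance(hitTestDistance, depthDistance) {
    if (hitTestDistance === null) return depthDistance;
    if (depthDistance === null) return hitTestDistance;
    return Math.min(hitTestDistance, depthDistance);
}

// Place `entity` along the camera's forward vector at the chosen distance.
// `view` is a pc.XrView (or null); `hitTestDistance` is meters or null.
function placeAlongView(entity, cameraEntity, view, hitTestDistance) {
    // Depth at the center of the view (u = 0.5, v = 0.5), in meters, or null.
    const depthDistance = view ? view.getDepth(0.5, 0.5) : null;
    const distance = pickClosestDistance(hitTestDistance, depthDistance);
    if (distance === null) return false; // nothing to place against this frame

    const pos = cameraEntity.getPosition().clone()
        .add(cameraEntity.forward.clone().mulScalar(distance));
    entity.setPosition(pos);
    return true;
}
```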

@Maksims Maksims marked this pull request as ready for review January 18, 2024 12:00
mipmaps: false,
addressU: ADDRESS_CLAMP_TO_EDGE,
addressV: ADDRESS_CLAMP_TO_EDGE,
minFilter: FILTER_LINEAR,
Contributor

does it make sense to use linear interpolation on the depth?

Collaborator Author

The depth texture always comes at a lower resolution than the framebuffer resolution of each eye, so interpolation helps smooth out the edges. But now that you mention it, if we don't upload an actual texture, then we are not setting its texParameteri, right? So these flags pretty much do nothing?

Contributor

I think you might be right here.

if (this._textureColor) {
    this._textureColor.destroy();
    this._textureColor = null;
}

if (this._textureDepth) {
    this._textureDepth.destroy();
Contributor

So we override this._textureDepth.impl._glTexture and then destroy it.

  1. Was _glTexture already allocated before? I suspect not, so we're not leaking it.
  2. Can we destroy the _glTexture provided by XR?

Collaborator Author

We indeed don't allocate it, but for safety, when the view is destroyed (at the end of the session), I destroy that texture as well. Subsequent sessions work well, so it seems to be fine.

Contributor

@mvaligursky mvaligursky left a comment


Very nice PR @Maksims. I added a few minor comments / questions, but this is pretty much ready for merge. Thanks!

@Maksims
Collaborator Author

Maksims commented Jan 19, 2024

> Very nice PR @Maksims. I added a few minor comments / questions, but this is pretty much ready for merge. Thanks!

Thank you!
Excited to implement the next PR based on this for automatic occlusion of virtual objects by real-world geometry!

@mvaligursky mvaligursky merged commit 5ffdb56 into playcanvas:main Jan 19, 2024
7 checks passed
@Maksims Maksims deleted the webxr-depth-camera branch January 22, 2024 11:26