The Depth API helps a device’s camera understand the size and shape of the real objects in a scene. It uses the camera to create depth images, or depth maps, thereby adding a layer of AR realism to your apps. You can use the information provided by a depth image to make virtual objects accurately appear in front of or behind real-world objects, enabling immersive and realistic user experiences.
Depth information is calculated from motion and may be combined with information from a hardware depth sensor, such as a time-of-flight (ToF) sensor, if available. A device does not need a ToF sensor to support the Depth API.
Prerequisites
Make sure that you understand fundamental AR concepts and how to configure an ARCore session before proceeding.
Configure your app to be Depth Required or Depth Optional (Android only)
If your app requires Depth API support, either because a core part of the AR experience relies on depth or because there's no graceful fallback for the parts of the app that use depth, you may choose to restrict distribution of your app in the Google Play Store to devices that support the Depth API.
Make your app Depth Required
- Navigate to Edit > Project Settings > XR Plug-in Management > ARCore.
- Depth is set to Required by default.
Make your app Depth Optional
- Navigate to Edit > Project Settings > XR Plug-in Management > ARCore.
- From the Depth drop-down menu, select Optional to set an app to Depth optional.
Enable Depth
To save resources, ARCore does not enable the Depth API by default. To take advantage of depth on supported devices, you must manually add the AROcclusionManager component to the AR Camera game object, which also has the Camera and ARCameraBackground components. See Automatic occlusion in the Unity documentation for more information.
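If you prefer to set this up in code rather than in the Editor, the sketch below shows one way to add the component at runtime and request environment depth. It is a minimal illustration, not part of the original guide: EnableDepth is a hypothetical script name, and arCamera is an assumed reference to the AR Camera game object that already has the Camera and ARCameraBackground components.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical setup script: adds AROcclusionManager at runtime and requests
// environment depth. Adding the component in the Editor achieves the same result.
public class EnableDepth : MonoBehaviour
{
    // Assumed reference to the AR Camera game object (with Camera and
    // ARCameraBackground components), assigned in the Inspector.
    [SerializeField] GameObject arCamera;

    void Start()
    {
        var occlusionManager = arCamera.AddComponent<AROcclusionManager>();
        // Request environment depth; the platform falls back to the closest supported mode.
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best;
    }
}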
In a new ARCore session, check whether a user's device supports depth and the Depth API, as follows:
// Reference to AROcclusionManager that should be added to the AR Camera
// game object that contains the Camera and ARCameraBackground components.
var occlusionManager = …

// Check whether the user's device supports the Depth API.
// (descriptor may be null, so compare the nullable result with true.)
if (occlusionManager.descriptor?.supportsEnvironmentDepthImage == true)
{
    // If depth mode is available on the user's device, perform
    // the steps you want here.
}
Acquire depth images
Get the latest environment depth image from the AROcclusionManager.
// Reference to AROcclusionManager that should be added to the AR Camera
// game object that contains the Camera and ARCameraBackground components.
var occlusionManager = …

if (occlusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage image))
{
    using (image)
    {
        // Use the texture.
    }
}
You can convert the raw CPU image into a RawImage for greater flexibility. An example of how to do this can be found in Unity's ARFoundation samples.
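As a rough illustration of that conversion, here is a minimal sketch of a helper in the spirit of the samples' UpdateRawImage method (the same shape as the helper referenced in the extraction example further below). It copies the depth image into a Texture2D that can then be assigned to a RawImage. The exact sample code differs, and this version assumes the project allows unsafe code.

// Requires: using System; using Unity.Collections.LowLevel.Unsafe;
// using UnityEngine; using UnityEngine.XR.ARSubsystems;
// Illustrative helper, not the exact ARFoundation samples code.
static unsafe void UpdateRawImage(ref Texture2D texture, XRCpuImage cpuImage, TextureFormat format)
{
    // (Re)create the destination texture if the size or format has changed.
    if (texture == null || texture.width != cpuImage.width ||
        texture.height != cpuImage.height || texture.format != format)
    {
        texture = new Texture2D(cpuImage.width, cpuImage.height, format, false);
    }

    // Convert the CPU image directly into the texture's raw buffer, then upload it to the GPU.
    var conversionParams = new XRCpuImage.ConversionParams(cpuImage, format);
    var rawTextureData = texture.GetRawTextureData<byte>();
    cpuImage.Convert(conversionParams, new IntPtr(rawTextureData.GetUnsafePtr()), rawTextureData.Length);
    texture.Apply();
}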
Understand depth values
Given the camera origin C, a point A on the observed real-world geometry, and a 2D point a representing the same point in the depth image, the value given by the Depth API at a is equal to the length of CA projected onto the principal axis. This can also be referred to as the z-coordinate of A relative to the camera origin C. When working with the Depth API, it is important to understand that the depth values are not the length of the ray CA itself, but the projection of it.
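To make the distinction concrete, the hypothetical sketch below back-projects a depth pixel into camera space, assuming simple pinhole intrinsics (fx, fy, cx, cy) for the depth image; none of these names come from the ARCore API.

// Requires: using UnityEngine;
// Hypothetical illustration: the depth value is the z-coordinate of A,
// while the distance |CA| is the magnitude of the back-projected point.
// fx, fy, cx, cy are assumed pinhole intrinsics of the depth image.
static Vector3 DepthPixelToCameraSpace(int x, int y, float depthInMeters,
                                       float fx, float fy, float cx, float cy)
{
    // X and Y scale with the depth value; Z is the depth value itself.
    float px = (x - cx) * depthInMeters / fx;
    float py = (y - cy) * depthInMeters / fy;
    return new Vector3(px, py, depthInMeters);
}

// The ray length |CA| is then DepthPixelToCameraSpace(...).magnitude,
// which is always greater than or equal to the depth value.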
Occlude virtual objects and visualize depth data
Check out Unity's blog post for a high-level overview of depth data and how it can be used to occlude virtual images. Additionally, Unity's ARFoundation samples demonstrate occluding virtual images and visualizing depth data.
You can render occlusion using two-pass rendering or per-object, forward-pass rendering. The efficiency of each approach depends on the complexity of the scene and other app-specific considerations.
Per-object, forward-pass rendering
Per-object, forward-pass rendering determines the occlusion of each pixel of the object in its material shader. If the pixels are not visible, they are clipped, typically via alpha blending, thus simulating occlusion on the user’s device.
Two-pass rendering
With two-pass rendering, the first pass renders all of the virtual content into an intermediary buffer. The second pass blends the virtual scene onto the background based on the difference between the real-world depth and the virtual scene depth. This approach requires no additional object-specific shader work and generally produces more uniform-looking results than the forward-pass method.
Extract distance from a depth image
To use the Depth API for purposes other than occluding virtual objects or visualizing depth data, extract information from the depth image.
Texture2D _depthTexture;
short[] _depthArray;

void UpdateEnvironmentDepthImage()
{
    if (_occlusionManager &&
        _occlusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage image))
    {
        using (image)
        {
            UpdateRawImage(ref _depthTexture, image, TextureFormat.R16);
            _depthWidth = image.width;
            _depthHeight = image.height;
        }
    }

    var byteBuffer = _depthTexture.GetRawTextureData();
    Buffer.BlockCopy(byteBuffer, 0, _depthArray, 0, byteBuffer.Length);
}

// Obtain the depth value in meters at a normalized screen point.
public static float GetDepthFromUV(Vector2 uv, short[] depthArray)
{
    int depthX = (int)(uv.x * (DepthWidth - 1));
    int depthY = (int)(uv.y * (DepthHeight - 1));

    return GetDepthFromXY(depthX, depthY, depthArray);
}

// Obtain the depth value in meters at the specified x, y location.
public static float GetDepthFromXY(int x, int y, short[] depthArray)
{
    if (!Initialized)
    {
        return InvalidDepthValue;
    }

    if (x >= DepthWidth || x < 0 || y >= DepthHeight || y < 0)
    {
        return InvalidDepthValue;
    }

    var depthIndex = (y * DepthWidth) + x;
    var depthInShort = depthArray[depthIndex];
    var depthInMeters = depthInShort * MillimeterToMeter;
    return depthInMeters;
}
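For instance, a hypothetical call site that samples the depth under a screen touch might look like the sketch below; touch is an assumed UnityEngine.Touch, and because the depth image usually has a different resolution than the screen (and may be rotated relative to it), this direct normalization is only an approximation.

// Hypothetical usage: look up the distance at a touch position.
Vector2 uv = new Vector2(touch.position.x / Screen.width,
                         touch.position.y / Screen.height);
float depthInMeters = GetDepthFromUV(uv, _depthArray);
if (depthInMeters != InvalidDepthValue)
{
    // Use the distance, for example to place content at that depth.
}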
What’s next
- Enable more accurate sensing with the Raw Depth API.
- Check out the ARCore Depth Lab, which demonstrates different ways to access depth data.